Dataset schema:

| Column | Type | Values |
| --- | --- | --- |
| code | string | lengths 2.5k–6.36M |
| kind | string | 2 classes |
| parsed_code | string | lengths 0–404k |
| quality_prob | float64 | 0–0.98 |
| learning_prob | float64 | 0.03–1 |
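The schema above implies that each row pairs a notebook's raw content (`code`) with its extracted code cells (`parsed_code`), a `kind` label, and two heuristic scores. Below is a minimal sketch of filtering such rows by score, assuming the rows have been exported to a JSON Lines file; the file name, thresholds, and helper function are illustrative, not part of the dataset.
```
import json

# Hypothetical export of the rows described by the schema above.
ROWS_PATH = "notebooks.jsonl"

def high_quality_rows(path, min_quality=0.9, min_learning=0.5):
    """Yield rows whose heuristic scores clear the given thresholds."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            if row["quality_prob"] >= min_quality and row["learning_prob"] >= min_learning:
                yield row

# Example: count the retained notebooks.
print(sum(1 for _ in high_quality_rows(ROWS_PATH)))
```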
<a href="https://colab.research.google.com/github/Adibuoy23/Adibuoy23.github.io/blob/master/Apple/Arrays_%26_Strings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

### 1 Two Sum

Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input has exactly one solution, and you may not use the same element twice. You can return the answer in any order.

Example 1
```
Input: nums = [2,7,11,15], target = 9
Output: [0,1]
Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].
```
Example 2
```
Input: nums = [3,2,4], target = 6
Output: [1,2]
```
Example 3
```
Input: nums = [3,3], target = 6
Output: [0,1]
```
Constraints:
* 2 <= nums.length <= 10^3
* -10^9 <= nums[i] <= 10^9
* -10^9 <= target <= 10^9
* Only one valid answer exists.
```
class Solution(object):
    def twoSum(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: List[int]
        """
        assert len(nums) > 0  # guard against empty input
        seen = {}  # value -> index where it was seen
        for i, num in enumerate(nums):
            diff = target - num
            if diff not in seen:
                seen[num] = i
            else:
                return [seen[diff], i]

sol = Solution()
sol.twoSum([3,2,4], 6)
```

### 2 Substring search

Given a string s, find the length of the longest substring without repeating characters.

Example 1
```
Input: s = "abcabcbb"
Output: 3
Explanation: The answer is "abc", with the length of 3.
```
Example 2
```
Input: s = "bbbbb"
Output: 1
Explanation: The answer is "b", with the length of 1.
```
Example 3
```
Input: s = "pwwkew"
Output: 3
Explanation: The answer is "wke", with the length of 3. Notice that the answer must be a substring; "pwke" is a subsequence, not a substring.
```
Example 4
```
Input: s = ""
Output: 0
```
Constraints:
* 0 <= s.length <= 5 * 10^4
* s consists of English letters, digits, symbols and spaces.
```
class Solution(object):
    def lengthOfLongestSubstring(self, s):
        """
        :type s: str
        :rtype: int
        """
        out = ''      # current window with no repeated characters
        length = 0
        for i in s:
            if i not in out:
                out += i
            else:
                # drop everything up to and including the previous occurrence of i
                out = out.split(i)[1] + i
            length = max(length, len(out))
        return length

sol = Solution()
sol.lengthOfLongestSubstring('abcabbb')
```

### 3 Atoi()

Implement the myAtoi(string s) function, which converts a string to a 32-bit signed integer (similar to C/C++'s atoi function). The algorithm for myAtoi(string s) is as follows:

1. Read in and ignore any leading whitespace.
2. Check if the next character (if not already at the end of the string) is '-' or '+'. Read this character in if it is either. This determines whether the final result is negative or positive. Assume the result is positive if neither is present.
3. Read in the next characters until the next non-digit character or the end of the input is reached. The rest of the string is ignored.
4. Convert these digits into an integer (i.e. "123" -> 123, "0032" -> 32). If no digits were read, then the integer is 0. Change the sign as necessary (from step 2).
5. If the integer is out of the 32-bit signed integer range [-2^31, 2^31 - 1], then clamp the integer so that it remains in the range. Specifically, integers less than -2^31 should be clamped to -2^31, and integers greater than 2^31 - 1 should be clamped to 2^31 - 1.

Return the integer as the final result.

Note:
* Only the space character ' ' is considered a whitespace character.
* Do not ignore any characters other than the leading whitespace or the rest of the string after the digits.
Example 1 ``` Input: str = "42" Output: 42 Explanation: The underlined characters are what is read in, the caret is the current reader position. Step 1: "42" (no characters read because there is no leading whitespace) ^ Step 2: "42" (no characters read because there is neither a '-' nor '+') ^ Step 3: "42" ("42" is read in) ^ The parsed integer is 42. Since 42 is in the range [-231, 231 - 1], the final result is 42. ``` Example 2 ``` Input: str = " -42" Output: -42 Explanation: Step 1: " -42" (leading whitespace is read and ignored) ^ Step 2: " -42" ('-' is read, so the result should be negative) ^ Step 3: " -42" ("42" is read in) ^ The parsed integer is -42. Since -42 is in the range [-231, 231 - 1], the final result is -42. ``` Example 3 ``` Input: str = "4193 with words" Output: 4193 Explanation: Step 1: "4193 with words" (no characters read because there is no leading whitespace) ^ Step 2: "4193 with words" (no characters read because there is neither a '-' nor '+') ^ Step 3: "4193 with words" ("4193" is read in; reading stops because the next character is a non-digit) ^ The parsed integer is 4193. Since 4193 is in the range [-231, 231 - 1], the final result is 4193. ``` Example 4 ``` Input: str = "words and 987" Output: 0 Explanation: Step 1: "words and 987" (no characters read because there is no leading whitespace) ^ Step 2: "words and 987" (no characters read because there is neither a '-' nor '+') ^ Step 3: "words and 987" (reading stops immediately because there is a non-digit 'w') ^ The parsed integer is 0 because no digits were read. Since 0 is in the range [-231, 231 - 1], the final result is 4193. ``` Example 5 ``` Input: str = "-91283472332" Output: -2147483648 Explanation: Step 1: "-91283472332" (no characters read because there is no leading whitespace) ^ Step 2: "-91283472332" ('-' is read, so the result should be negative) ^ Step 3: "-91283472332" ("91283472332" is read in) ^ The parsed integer is -91283472332. Since -91283472332 is less than the lower bound of the range [-2^31, 2^31 - 1], the final result is clamped to -2^31 = -2147483648. ``` Constraints: * 0 <= s.length <= 200 * s consists of English letters (lower-case and upper-case), digits (0-9), ' ', '+', '-', and '.'. ``` ```
kind: github_jupyter
quality_prob: 0.696165, learning_prob: 0.979609
``` import importlib import xarray as xr import numpy as np import pandas as pd import matplotlib.pyplot as plt import sys from CASutils import plotposition_utils as plotpos importlib.reload(plotpos) plotpath="/project/cas/islas/python_plots/snowpaper/FIGURES/" #data_cam = xr.open_dataset("/project/cas/islas/python_savs/snowpaper/DATA_SORT/t850_laggedregs/laggedreg_cam.nc") data_scam = xr.open_dataset("/project/cas/islas/python_savs/snowpaper/DATA_SORT/t850_laggedregs/laggedreg_scam.nc") def plotlaggedreg(data,titlestr,ylabelstr,x1,x2,y1,y2,color='darkred',yticks=None,yticknames=None,yrange=None, xlabel=False): ax = fig.add_axes(np.array([x1,y1,(x2-x1),(y2-y1)])) ax.plot([-10,10],[0,0],color='black') ax.plot(np.arange(-10,11,1), data, color=color, linewidth=2) ax.set_xticks([-10,-8,-6,-4,-2,0,2,4,6,8,10]) ax.set_xticklabels(['-10','-8','-6','-4','-2','0','2','4','6','8','10'], fontsize=12) ax.set_ylabel(ylabelstr, fontsize=14) ax.set_title(titlestr, fontsize=16) ax.set_xlim(-10,10) if (yticks): ax.set_yticks(yticks) ax.set_yticklabels(yticknames, fontsize=12) if (yrange): ax.set_ylim(yrange) if (xlabel): ax.set_xlabel('Lag (days)', fontsize=14) return ax def oplotlaggedreg(ax, data, color='darkred'): ax.plot(np.arange(-10,11,1), data, color=color, linewidth=2) return ax x1, x2, y1, y2 = plotpos.get3by6coords() netclm5 = -1.*data_scam.fsnsregclm5 + data_scam.flnsregclm5 + data_scam.shflxregclm5 + data_scam.lhflxregclm5 netsnowd = -1.*data_scam.fsnsregsnowd + data_scam.flnsregsnowd + data_scam.shflxregsnowd + data_scam.lhflxregsnowd fig = plt.figure(figsize=(16,16)) cityplot=0 ax = plotlaggedreg(-1.*(data_scam.t850regclm5.isel(city=cityplot)),'(a) T850, Saskatoon','Temperature (K)',x1[0],x2[0],y1[0],y2[0], color='darkblue', yrange=(-1,0.1),yticks=[-1,-0.8,-0.6,-0.4,-0.2,0],yticknames=['-1.0','-0.8','-0.6','-0.4','-0.2','0']) ax = oplotlaggedreg(ax,-1.*(data_scam.t850regsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.6,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-0.8,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot)),'(d) T2m, Saskatoon','Temperature (K)',x1[3],x2[3],y1[3],y2[3], color='darkblue', yrange=(-0.8,0.1),yticks=[-0.8,-0.6,-0.4,-0.2,0],yticknames=['-0.8','-0.6','-0.4','-0.2','0']) ax = oplotlaggedreg(ax,-1.*(data_scam.trefhtregsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.5,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-0.65,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(netclm5.isel(city=cityplot) - netsnowd.isel(city=cityplot)),'(g) Net flux, CLM5$-$SNOWD, Saskatoon','Flux (Wm$^{-2}$)', x1[6],x2[6],y1[6],y2[6],color='firebrick', yrange=(-0.05,0.8),yticks=[0,0.2,0.4,0.6,0.8],yticknames=['0','0.2','0.4','0.6','0.8']) ax = oplotlaggedreg(ax, -1.*(data_scam.bulksnowregclm5.isel(city=cityplot)-data_scam.bulksnowregsnowd.isel(city=cityplot)),color='forestgreen') ax.text(-9,0.55,'$F\\uparrow$',color='firebrick', fontsize=14) ax.text(-9,0.4,'$F_{sno}\\uparrow$',color='forestgreen', fontsize=14) ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(j) Fluxes, CLM5$-$SNOWD, Saskatoon','Flux (Wm$^{-2}$)',x1[9],x2[9],y1[9],y2[9],color='darkorange', yrange=(-0.05,0.5),yticks=[0,0.1,0.2,0.3,0.4,0.5],yticknames=['0','0.1','0.2','0.3','0.4','0.5']) ax = oplotlaggedreg(ax, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') ax = oplotlaggedreg(ax, 
(data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') ax = oplotlaggedreg(ax, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax.text(-9,0.41,'$SH\\uparrow$',color='darkorange',fontsize=14) ax.text(-9,0.31,'$LW\\uparrow$',color='red',fontsize=14) ax.text(-9,0.21,'$SW\\uparrow$',color='royalblue',fontsize=14) ax.text(-9,0.11,'$LH\\uparrow$',color='blueviolet',fontsize=14) #ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot) - data_scam.trefhtregsnowd.isel(city=cityplot)),' ',' ',x1[9],x2[9],y1[9],y2[9], color='black') #ax2 = ax.twinx() #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), color='darkorange') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') #ax2 = oplotlaggedreg(ax2, (data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(m) SH, CLM5$-$SNOWD, Saskatoon','Flux (Wm$^{-2}$)', x1[12],x2[12],y1[12],y2[12],color='darkorange', xlabel=True, yrange=(-0.05,0.4),yticks=[0,0.1,0.2,0.3,0.4],yticknames=['0','0.1','0.2','0.3','0.4']) ax = oplotlaggedreg(ax, -1.*(data_scam.shflxconstructregclm5.isel(city=cityplot) - data_scam.shflxconstructregsnowd.isel(city=cityplot)),color='cadetblue') ax.text(-9,0.3,'$SH\\uparrow$', color='darkorange',fontsize=14) ax.text(-9,0.2,'$SH^{*}\\uparrow$',color='cadetblue',fontsize=14) cityplot=1 ax = plotlaggedreg(-1.*(data_scam.t850regclm5.isel(city=cityplot)),'(b) T850, Toronto',' ',x1[1],x2[1],y1[0],y2[0], color='darkblue', yrange=(-1,0.1),yticks=[-1,-0.8,-0.6,-0.4,-0.2,0],yticknames=['-1.0','-0.8','-0.6','-0.4','-0.2','0'] ) ax = oplotlaggedreg(ax,-1.*(data_scam.t850regsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.6,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-0.8,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot)),'(e) T2m, Toronto',' ',x1[4],x2[4],y1[3],y2[3], color='darkblue', yrange=(-0.6,0.1), yticks=[-0.6,-0.4,-0.2,0],yticknames=['-0.6','-0.4','-0.2','0']) ax = oplotlaggedreg(ax,-1.*(data_scam.trefhtregsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.38,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-0.49,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(netclm5.isel(city=cityplot) - netsnowd.isel(city=cityplot)),'(h) Net Flux, CLM5$-$SNOWD, Toronto',' ',x1[7],x2[7],y1[6],y2[6],color='firebrick', yrange=(-0.05,0.8),yticks=[0,0.2,0.4,0.6,0.8],yticknames=['0','0.2','0.4','0.6','0.8']) ax = oplotlaggedreg(ax, -1.*(data_scam.bulksnowregclm5.isel(city=cityplot)-data_scam.bulksnowregsnowd.isel(city=cityplot)),color='forestgreen') ax.text(-9,0.55,'$F\\uparrow$',color='firebrick', fontsize=14) ax.text(-9,0.4,'$F_{sno}\\uparrow$',color='forestgreen', fontsize=14) ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(k) Fluxes, CLM5$-$SNOWD, Toronto',' ',x1[10],x2[10],y1[10],y2[10],color='darkorange', yrange=(-0.05,0.5),yticks=[0,0.1,0.2,0.3,0.4,0.5],yticknames=['0','0.1','0.2','0.3','0.4','0.5']) 
ax = oplotlaggedreg(ax, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') ax = oplotlaggedreg(ax, (data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') ax = oplotlaggedreg(ax, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax.text(-9,0.41,'$SH\\uparrow$',color='darkorange',fontsize=14) ax.text(-9,0.31,'$LW\\uparrow$',color='red',fontsize=14) ax.text(-9,0.21,'$SW\\uparrow$',color='royalblue',fontsize=14) ax.text(-9,0.11,'$LH\\uparrow$',color='blueviolet',fontsize=14) #ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot) - data_scam.trefhtregsnowd.isel(city=cityplot)),' ',' ',x1[10],x2[10],y1[9],y2[9], color='black') #ax2 = ax.twinx() #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), color='darkorange') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') #ax2 = oplotlaggedreg(ax2, (data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(n) SH, CLM5$-$SNOWD, Toronto',' ', x1[13],x2[13],y1[12],y2[12],color='darkorange', xlabel=True, yrange=(-0.05,0.4),yticks=[0,0.1,0.2,0.3,0.4],yticknames=['0','0.1','0.2','0.3','0.4']) ax = oplotlaggedreg(ax, -1.*(data_scam.shflxconstructregclm5.isel(city=cityplot) - data_scam.shflxconstructregsnowd.isel(city=cityplot)),color='cadetblue') ax.text(-9,0.3,'$SH\\uparrow$', color='darkorange',fontsize=14) ax.text(-9,0.2,'$SH^{*}\\uparrow$',color='cadetblue',fontsize=14) cityplot=2 ax = plotlaggedreg(-1.*(data_scam.t850regclm5.isel(city=cityplot)),'(c) T850, Siderovsk',' ',x1[2],x2[2],y1[0],y2[0], color='darkblue', yrange=(-1,0.1), yticks=[-1,-0.8,-0.6,-0.4,-0.2,0],yticknames=['-1.0','-0.8','-0.6','-0.4','-0.2','0']) ax = oplotlaggedreg(ax,-1.*(data_scam.t850regsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.6,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-0.8,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot)),'(f) T2m, Siderovsk',' ',x1[5],x2[5],y1[3],y2[3], color='darkblue', yrange=(-1.2,0.1), yticks=[-1.2,-1,-0.8,-.6,-0.4,-0.2,0],yticknames=['-1.2','-1.0','-0.8','-0.6','-0.4','-0.2','0']) ax = oplotlaggedreg(ax,-1.*(data_scam.trefhtregsnowd.isel(city=cityplot)), color='forestgreen') ax.text(-9,-0.8,'CLM5',color='darkblue',fontsize=12) ax.text(-9,-1,'SNOWD',color='forestgreen',fontsize=12) ax = plotlaggedreg(-1.*(netclm5.isel(city=cityplot) - netsnowd.isel(city=cityplot)),'(i) Net Flux, CLM5$-$SNOWD, Siderovsk',' ',x1[8],x2[8],y1[6],y2[6],color='firebrick', yrange=(-0.05,0.8),yticks=[0,0.2,0.4,0.6,0.8],yticknames=['0','0.2','0.4','0.6','0.8']) ax = oplotlaggedreg(ax, -1.*(data_scam.bulksnowregclm5.isel(city=cityplot)-data_scam.bulksnowregsnowd.isel(city=cityplot)),color='forestgreen') ax.text(-9,0.55,'$F\\uparrow$',color='firebrick', fontsize=14) ax.text(-9,0.4,'$F_{sno}\\uparrow$',color='forestgreen', fontsize=14) ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(l) 
Fluxes, CLM5$-$SNOWD, Siderovsk',' ',x1[11],x2[11],y1[11],y2[11],color='darkorange', yrange=(-0.05,0.5),yticks=[0,0.1,0.2,0.3,0.4,0.5],yticknames=['0','0.1','0.2','0.3','0.4','0.5']) ax = oplotlaggedreg(ax, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') ax = oplotlaggedreg(ax, (data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') ax = oplotlaggedreg(ax, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax.text(-9,0.41,'$SH\\uparrow$',color='darkorange',fontsize=14) ax.text(-9,0.31,'$LW\\uparrow$',color='red',fontsize=14) ax.text(-9,0.21,'$SW\\uparrow$',color='royalblue',fontsize=14) ax.text(-9,0.11,'$LH\\uparrow$',color='blueviolet',fontsize=14) #ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=cityplot) - data_scam.trefhtregsnowd.isel(city=cityplot)),' ',' ',x1[11],x2[11],y1[9],y2[9], color='black') #ax2 = ax.twinx() #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), color='darkorange') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.flnsregclm5.isel(city=cityplot) - data_scam.flnsregsnowd.isel(city=cityplot)), color='red') #ax2 = oplotlaggedreg(ax2, (data_scam.fsnsregclm5.isel(city=cityplot) - data_scam.fsnsregsnowd.isel(city=cityplot)), color='royalblue') #ax2 = oplotlaggedreg(ax2, -1.*(data_scam.lhflxregclm5.isel(city=cityplot) - data_scam.lhflxregsnowd.isel(city=cityplot)), color='blueviolet') ax = plotlaggedreg(-1.*(data_scam.shflxregclm5.isel(city=cityplot) - data_scam.shflxregsnowd.isel(city=cityplot)), '(o) SH, CLM5$-$SNOWD, Siderovsk',' ', x1[14],x2[14],y1[12],y2[12],color='darkorange', xlabel=True, yrange=(-0.05,0.4),yticks=[0,0.1,0.2,0.3,0.4],yticknames=['0','0.1','0.2','0.3','0.4']) ax = oplotlaggedreg(ax, -1.*(data_scam.shflxconstructregclm5.isel(city=cityplot) - data_scam.shflxconstructregsnowd.isel(city=cityplot)),color='cadetblue') ax.text(-9,0.3,'$SH\\uparrow$', color='darkorange',fontsize=14) ax.text(-9,0.2,'$SH^{*}\\uparrow$',color='cadetblue',fontsize=14) fig.savefig(plotpath+"fig10.pdf", facecolor='white', bbox_inches='tight') fig = plt.figure(figsize=(16,16)) cityplot=0 #ax = plotlaggedreg(-1.*(data_cam.trefhtregclm5.isel(city=0)),'T2m','Temperature (K)',x1[0],x2[0],y1[0],y2[0], color='black') #ax = oplotlaggedreg(ax,-1.*(data_cam.trefhtregsnowd.isel(city=0)), color='red') ax = plotlaggedreg(-1.*(data_cam.trefhtregclm5.isel(city=cityplot) - data_cam.trefhtregsnowd.isel(city=cityplot)),'TREFHT','Temperature (K)',x1[0],x2[0],y1[0],y2[0], color='black') ax = plotlaggedreg(-1.*(data_cam.flnsregclm5.isel(city=cityplot) - data_cam.flnsregsnowd.isel(city=cityplot)),'FLNS','Temperature (K)',x1[1],x2[1],y1[1],y2[1], color='black') ax = plotlaggedreg(-1.*(data_cam.shflxregclm5.isel(city=cityplot) - data_cam.shflxregsnowd.isel(city=cityplot)),'SHFLX','Temperature (K)',x1[2],x2[2],y1[2],y2[2], color='black') tstaketbot_clm5 = data_cam.tsregclm5.isel(city=cityplot) - data_cam.tbotregclm5.isel(city=cityplot) tstaketbot_snowd = data_cam.tsregsnowd.isel(city=cityplot) - data_cam.tbotregsnowd.isel(city=cityplot) ax = plotlaggedreg(-1.*(tstaketbot_clm5 - tstaketbot_snowd),'TS-TBOT','Temperature (K)',x1[3],x2[3],y1[3],y2[3], color='black') ax = plotlaggedreg(-1.*(data_cam.tsregclm5.isel(city=cityplot) - data_cam.tsregsnowd.isel(city=cityplot)),'TS','Temperature (K)',x1[4],x2[4],y1[4],y2[4], color='black') ax = 
plotlaggedreg(-1.*(data_cam.tbotregclm5.isel(city=cityplot) - data_cam.tbotregsnowd.isel(city=cityplot)),'TBOT','Temperature (K)',x1[5],x2[5],y1[5],y2[5], color='black') #ax = plotlaggedreg(-1.*(data_scam.trefhtregclm5.isel(city=0)-data_scam.trefhtregsnowd.isel(city=0)),'T2m','Temperature (K)',x1[0],x2[0],y1[0],y2[0], color='black') #ax = plotlaggedreg(-1.*(data_cam.trefhtregclm5.isel(city=0)-data_cam.trefhtregsnowd.isel(city=0)),'T2m','Temperature (K)',x1[1],x2[1],y1[1],y2[1], color='black') #ax = plotlaggedreg(-1.*data_scam.trefhtregclm5.isel(city=0), 'T2m', 'Temperature (K)',x1[0],x2[0],y1[0],y2[0]) #ax = oplotlaggedreg(ax, -1.*data_scam.trefhtregsnowd.isel(city=0), color='forestgreen') print(data_cam) ```
kind: github_jupyter
quality_prob: 0.340595, learning_prob: 0.427815
# AutoEncoder with tf.keras

The following is a trial run of tensorflow keras.
```
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import tensorflow as tf
# tfe = tf.contrib.eager
# tf.enable_eager_execution()
print(f"tensorflow version: {tf.__version__}")
print(f"tf.keras version: {tf.keras.__version__}")
```
## Load MNIST dataset first
```
(train_images, train_label), (test_images, test_label) = tf.keras.datasets.mnist.load_data()

# normalize data
train_images = train_images/255.
train_images = np.reshape(train_images, (len(train_images), 28, 28, 1))
train_images_flatten = train_images.reshape(train_images.shape[0], -1)

test_images = test_images/255.
test_images = np.reshape(test_images, (len(test_images), 28, 28, 1))
test_images_flatten = test_images.reshape(test_images.shape[0], -1)

# block this out for now, not sure what it's for
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.
```
## Vanilla AutoEncoder

We build the decoder & encoder separately, then connect them together.
```
## input_image
input_img = tf.keras.layers.Input(shape=(784,))
encoder = tf.keras.layers.Dense(128, activation='relu')(input_img)
encoder = tf.keras.layers.Dense(32, activation='relu')(encoder)
encoder = tf.keras.layers.Dense(2, activation='relu', name="latent_space")(encoder)

decoder = tf.keras.layers.Dense(32, activation='relu')(encoder)
decoder = tf.keras.layers.Dense(128, activation='relu')(decoder)
decoder = tf.keras.layers.Dense(784, activation='sigmoid')(decoder)

autoencoder = tf.keras.models.Model(input_img, decoder)
autoencoder.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001), loss='binary_crossentropy')
autoencoder.summary()

autoencoder.fit(train_images_flatten, train_images_flatten,
                epochs=100,
                batch_size=256,
                shuffle=True,
                validation_data=(test_images_flatten, test_images_flatten),
                verbose=2)

decoded_imgs = autoencoder.predict(test_images_flatten)

n = 10  # how many images we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(test_images_flatten[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
### visualize latent_space
```
## try tweaking things and have a look at the latent space
latent_space = tf.keras.Model(inputs=autoencoder.input, outputs=autoencoder.get_layer("latent_space").output)
latent_z = latent_space.predict(test_images_flatten)

encodings = np.asarray(latent_z)
encodings = encodings.reshape(test_images_flatten.shape[0], 2)

plt.figure(figsize=(7, 7))
plt.scatter(encodings[:, 0], encodings[:, 1], c=test_label, cmap=plt.cm.jet)
plt.show()
```
## conv_AE
```
input_img = tf.keras.layers.Input(shape=(28, 28, 1))
encoder = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
encoder = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(encoder)
encoder = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoder)
encoder = tf.keras.layers.MaxPooling2D((2, 2), padding='same', name="latent_space")(encoder)
decoder = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoder)
decoder = tf.keras.layers.UpSampling2D((2, 2))(decoder)
decoder = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(decoder)
decoder = tf.keras.layers.UpSampling2D((2, 2))(decoder)
decoder = tf.keras.layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(decoder)

conv_autoencoder = tf.keras.models.Model(input_img, decoder)
conv_autoencoder.compile(optimizer='Adam', loss='binary_crossentropy')
conv_autoencoder.summary()

conv_autoencoder.fit(train_images, train_images,
                     epochs=100,
                     batch_size=128,
                     shuffle=True,
                     validation_data=(test_images, test_images),
                     verbose=2)

decoded_imgs = conv_autoencoder.predict(test_images)

n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(test_images[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

latent_space = tf.keras.Model(inputs=conv_autoencoder.input, outputs=conv_autoencoder.get_layer("latent_space").output)
latent_z = latent_space.predict(test_images)

encodings = np.asarray(latent_z)
encodings = encodings.reshape(encodings.shape[0], -1)

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X = pca.fit_transform(encodings)
Y = test_label

plt.figure(figsize=(7, 7))
plt.scatter(X[:, 0], X[:, 1], c=test_label, cmap=plt.cm.jet)
plt.show()

# !pip install plotly
import plotly
# import plotly.plotly as py  # unused; not needed for offline plotting
import plotly.graph_objs as go
import numpy as np
# x, y, z = np.random.multivariate_normal(np.array([0,0,0]), np.eye(10), 400).transpose()

# The 3-D scatter below needs three components, so fit a separate 3-component PCA
pca3 = PCA(n_components=3)
X3 = pca3.fit_transform(encodings)

trace1 = go.Scatter3d(
    x=X3[:, 0][:1000],
    y=X3[:, 1][:1000],
    z=X3[:, 2][:1000],
    mode='markers',
    marker=dict(
        size=12,
        color=test_label[:1000],  # set color to an array/list of desired values (match the 1000-point slice)
        colorscale='Rainbow',     # choose a colorscale
        opacity=0.8
    )
)
data = [trace1]
layout = go.Layout(
    margin=dict(l=0, r=0, b=0, t=0)
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(fig)
```
## image denoising AE
```
(train_images, train_label), (test_images, test_label) = tf.keras.datasets.fashion_mnist.load_data()

# normalize data
train_images = train_images/255.
train_images = np.reshape(train_images, (len(train_images), 28, 28, 1))
train_images_flatten = train_images.reshape(train_images.shape[0], -1)

test_images = test_images/255.
test_images = np.reshape(test_images, (len(test_images), 28, 28, 1))
test_images_flatten = test_images.reshape(test_images.shape[0], -1)

# block this out for now, not sure what it's for
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.

noise = 0.2
train_images_noise = train_images + noise + np.random.normal(loc=0.0, scale=0.2, size=train_images.shape)
test_images_noise = test_images + noise + np.random.normal(loc=0.0, scale=0.2, size=test_images.shape)

n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
    plt.imshow(test_images_noise[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

train_images_noise = np.clip(train_images_noise, 0., 1.)
test_images_noise = np.clip(test_images_noise, 0., 1.)

n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
    plt.imshow(test_images_noise[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

input_img = tf.keras.layers.Input(shape=(28, 28, 1))
encoder = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
encoder = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(encoder)
encoder = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same')(encoder)
encoder = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(encoder)
decoder = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same')(encoder)
decoder = tf.keras.layers.UpSampling2D((2, 2))(decoder)
decoder = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same')(decoder)
decoder = tf.keras.layers.UpSampling2D((2, 2))(decoder)
decoder = tf.keras.layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(decoder)

denoise_autoencoder = tf.keras.models.Model(input_img, decoder)
denoise_autoencoder.compile(optimizer='Adam', loss='binary_crossentropy')
denoise_autoencoder.summary()

denoise_autoencoder.fit(train_images_noise, train_images,
                        epochs=100,
                        batch_size=128,
                        shuffle=True,
                        validation_data=(test_images_noise, test_images),
                        verbose=2)

decoded_imgs = denoise_autoencoder.predict(test_images_noise)

n = 10
plt.figure(figsize=(20, 10))
for i in range(n):
    # display original
    ax = plt.subplot(3, n, i + 1)
    plt.imshow(test_images[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display noisy input
    ax = plt.subplot(3, n, i + 1 + n)
    plt.imshow(test_images_noise[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(3, n, i + 1 + n + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
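The notebook judges the denoiser only by eye. As a small addition (not in the original), the per-image mean squared error against the clean test images gives a quick quantitative check; this assumes `test_images`, `test_images_noise`, and `decoded_imgs` from the cells above are still in memory.
```
# Lower MSE against the clean images means better reconstructions.
mse_noisy = np.mean((test_images_noise - test_images) ** 2, axis=(1, 2, 3))
mse_denoised = np.mean((decoded_imgs - test_images) ** 2, axis=(1, 2, 3))

print(f"mean MSE, noisy vs clean:    {mse_noisy.mean():.4f}")
print(f"mean MSE, denoised vs clean: {mse_denoised.mean():.4f}")
```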
kind: github_jupyter
quality_prob: 0.550124, learning_prob: 0.909747
This notebook is to decide whether $S_A = S_{ref}$ is a good approximation along our open boundary.
```
import numpy as np
import netCDF4 as nc
import os
import subprocess as sp
import sys
sys.path.append('/data/nsoontie/MEOPAR/tools/I_ForcingFiles/OBC/')
import gsw_calls
```
I want to check if, along our open boundary, $\delta S \approx 0$ by the gsw standards. Right now, we are using reference salinity.

Recall, $S_A = S_{ref} + \delta S$.

The TEOS-10 primer says that in coastal areas where $\delta S$ is unknown, it is appropriate to use $\delta S=0$. That was also suggested to me in an email from Rich.

**Note**: Matlab wrappers are linked in this directory. They are under version control in tools/I_ForcingFiles/OBC
```
f = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/open_boundaries/west/SalishSea2_Masson_corrected.nc')
sal_pract = f.variables['vosaline'][:]
temp_pot = f.variables['votemper'][:]
dep = np.expand_dims(np.expand_dims(np.expand_dims(f.variables['deptht'][:], axis=0), axis=2), axis=3) \
    + np.zeros(sal_pract.shape)
long = f.variables['nav_lon'][:] + np.zeros(sal_pract.shape)
lat = f.variables['nav_lat'][:] + np.zeros(sal_pract.shape)

f = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/open_boundaries/west/SalishSea_west_TEOS10.nc')
sal_ref = f.variables['vosaline'][:]

p = gsw_calls.call_p_from_z(-dep, lat)
sal_abs = gsw_calls.call_SA_from_SP(sal_pract, p, long, lat)
```
# Absolute Salinity vs Practical Salinity
```
import matplotlib.pyplot as plt
%matplotlib inline

dS = sal_abs - sal_ref

plt.hist(dS.flatten())
plt.xlabel('$\delta S$')
plt.ylabel('number of occurrences')

plt.boxplot(dS.flatten())
plt.ylabel('$\delta S$')
```
This is probably not very significant. We've decided to use $\delta S = 0$.

# Conservative Temperature vs Potential Temperature
```
CT = gsw_calls.call_CT_from_PT(sal_abs, temp_pot)
diff = CT - temp_pot
plt.hist(diff.flatten())
plt.ylabel('Conservative - Potential, deg C')
```
Looks like the differences aren't super big for the boundary.

# Matlab Reference Salinity vs ours

Note: This comparison was performed before the boundary files were overwritten.
```
def call_SR_from_SP(SP):
    fname = "'SRout'"
    SPfile = "'SPfile'"
    for f, var in zip([SPfile,], [SP,]):
        np.savetxt(f[1:-1], var.flatten(), delimiter=',')
    shape = SP.shape
    functioncall = 'mw_gsw_SR_from_SP({},{});exit'.format(fname, SPfile)
    cmd = ["matlab", "-nodesktop", "-nodisplay", "-r", functioncall]
    sp.run(cmd)
    SR = np.loadtxt(fname[1:-1], delimiter=',')
    for f in [fname, SPfile]:
        os.remove(f[1:-1])
    return SR.reshape(shape)

sal_ref_matlab = call_SR_from_SP(sal_pract)

diff = sal_ref_matlab - sal_ref
plt.hist(diff.flatten())
plt.ylabel('Matlab ref Salinity - ours, g/kg')
```
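The wrapper above shells out to Matlab just to evaluate `gsw_SR_from_SP`. As a side note (not part of the original workflow), the same conversion could likely be done in-process with the GSW-Python package; the sketch below assumes that package is installed and exposes `SR_from_SP` with the same semantics as the Matlab toolbox function.
```
import gsw  # assumption: GSW-Python is installed and mirrors the Matlab toolbox

sal_ref_py = gsw.SR_from_SP(sal_pract)  # Reference Salinity from Practical Salinity

diff_py = sal_ref_py - sal_ref
plt.hist(diff_py.flatten())
plt.ylabel('gsw-python ref Salinity - ours, g/kg')
```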
# Transfer Learning In this notebook, you'll learn how to use pre-trained networks to solved challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU). Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy. With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models ``` Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`. ``` data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) ``` We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on. ``` model = models.densenet121(pretrained=True) model ``` This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers. 
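Note that the transforms above stop at `ToTensor()` and do not yet apply the ImageNet normalization mentioned earlier; here is a minimal sketch of what they could look like with `transforms.Normalize` appended, using the means and standard deviations quoted above:

```
# Hypothetical transforms that add the ImageNet normalization statistics;
# Normalize must come after ToTensor because it operates on tensors.
normalize = transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       normalize])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      normalize])
```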
``` # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(1024, 500)), ('relu', nn.ReLU()), ('fc2', nn.Linear(500, 2)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier ``` With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time. PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU. ``` import time for device in ['cpu', 'cuda']: criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) start = time.time() outputs = model.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() if ii==3: break print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds") ``` You can write device agnostic code which will automatically use CUDA if it's enabled like so: ```python # at beginning of the script device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ... # then whenever you get a new Tensor or Module # this won't copy if they are already on the desired device input = data.to(device) model = MyModule(...).to(device) ``` From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily. >**Exercise:** Train a pretrained models to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen. 
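Before starting the exercise, a quick sketch (using the `model` built above) of how to confirm that only the new classifier parameters will receive gradients:

```
# Count parameters that will be updated during training; with the feature
# layers frozen, only the classifier's weights and biases should be trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
print(f"Frozen parameters:    {frozen:,}")
```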
``` ## TODO: Use a pretrained model to classify the cat and dog images device = torch.device("cuda" if torch.cuda.is_available() else "cpu") criterion = nn.NLLLoss() optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model = models.densenet121(pretrained=True) for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier1 = nn.Sequential(OrderedDict([ ("fc1", nn.Linear(1024, 500)), ("relu", nn.ReLU()), ("fc2", nn.Linear(500, 2)), ("output", nn.LogSoftmax(dim=1)) ])) class Classifier2(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1024, 500) self.fc2 = nn.Linear(500, 2) def forward(self, x): x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.log_softmax(self.fc2(x), dim=1) return x classifier2 = Classifier2() model.classifier = classifier1 model epochs = 3 steps = 0 model.to(device) #added print_every = 5 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: steps += 1 images, labels = images.to(device), labels.to(device) optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: model.eval() test_loss = 0 accuracy = 0 for images, labels in testloader: images, labels = images.to(device), labels.to(device) log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) model.train() # print(f'Accuracy: {accuracy.item()*100}%') print("Epock: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[e]), "Test Loss: {:.3f}.. ".format(test_losses[e]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ```
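One thing to watch in the cell above: the optimizer is created before `model` is re-assigned to a fresh `densenet121`, so it still references the parameters of the earlier model rather than `classifier1`. A minimal sketch of a setup order that avoids this, reusing the names defined above:

```
# Build the model and attach the new classifier first, then create the
# optimizer so it tracks the parameters that will actually be trained.
model = models.densenet121(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.classifier = classifier1

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
```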
# Topic Modeling on DBLP ``` from rdfframes.client.http_client import HttpClientDataFormat, HttpClient from rdfframes.knowledge_graph import KnowledgeGraph ``` ## Choose the graph and define the SPARQL endpoint URI ``` graph = KnowledgeGraph( graph_uri='http://dblp.l3s.de', prefixes={ "xsd": "http://www.w3.org/2001/XMLSchema#", "swrc": "http://swrc.ontoware.org/ontology#", "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#", "dc": "http://purl.org/dc/elements/1.1/", "dcterm": "http://purl.org/dc/terms/", "dblprc": "http://dblp.l3s.de/d2r/resource/conferences/" }) endpoint = 'http://10.161.202.101:8890/sparql/' port = 8890 output_format = HttpClientDataFormat.PANDAS_DF client = HttpClient(endpoint_url=endpoint, port=port, return_format=output_format) ``` ## Build a dataframe of papers titles from the graph ``` dataset = graph.entities('swrc:InProceedings', entities_col_name='paper')\ .expand(src_col_name='paper', predicate_list=[ ('dc:creator', 'author'), ('dcterm:issued', 'date'), ('swrc:series', 'conference'), ('dc:title', 'title')]) dataset = dataset.cache() authors = dataset.filter({'date':['>= 2000'], 'conference': ['IN (dblprc:vldb, dblprc:sigmod)']})\ .group_by(['author']).count('paper', 'papers_count')\ .filter({'papers_count':['>= 20']}) titles = dataset.join(authors, 'author').filter({'date': ['>= 2010']}).select_cols(['title']) ``` ## Execute RDFframes code to get the result in a dataframe ``` df = titles.execute(client, return_format=output_format) print(df.head(10)) ``` ## Clean the data ``` # removing everything except alphabets` df['clean_title'] = df['title'].str.replace("[^a-zA-Z#]", " ") # removing short words df['clean_title'] = df['clean_title'].apply(lambda x: ' '.join([w for w in str(x).split() if len(w)>3])) # make all text lowercase df['clean_title'] = df['clean_title'].apply(lambda x: x.lower()) print(df.head()) import nltk nltk.download('stopwords') # Using the stopwords. from nltk.corpus import stopwords # Initialize the stopwords stop_words = stopwords.words('english') stop_words = [x.strip() for x in stop_words] + ['based'] # tokenization tokenized_doc = df['clean_title'].apply(lambda x: x.split()) # remove stop-words tokenized_doc = tokenized_doc.apply(lambda x: [item for item in x if item not in stop_words]) # de-tokenization detokenized_doc = [] for i in range(len(df)): t = ' '.join(tokenized_doc[i]) detokenized_doc.append(t) df['clean_title'] = detokenized_doc print(df.head()) from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import TruncatedSVD vectorizer = TfidfVectorizer(stop_words='english', max_features= 1000, # keep top 1000 terms max_df = 0.5, smooth_idf=True) X = vectorizer.fit_transform(df['clean_title']) # document-term matrix # SVD represent documents and terms in vectors svd_model = TruncatedSVD(n_components=20, algorithm='randomized', n_iter=100, random_state=122) svd_model.fit(X) terms = vectorizer.get_feature_names() for i, comp in enumerate(svd_model.components_): terms_comp = zip(terms, comp) sorted_terms = sorted(terms_comp, key= lambda x:x[1], reverse=True)[:7] string = "Topic "+str(i)+": " for t in sorted_terms: string += t[0] + " " print(string) ```
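Beyond listing the top terms per topic, `TruncatedSVD` can also project each title into the topic space; a short sketch (reusing `svd_model`, `X` and `df` from above) of how one might inspect the dominant topic per document:

```
import numpy as np

# Project the document-term matrix into the 20-dimensional topic space.
doc_topic = svd_model.transform(X)   # shape: (n_documents, 20)

# Dominant topic per title, taken as the largest absolute loading.
dominant = np.abs(doc_topic).argmax(axis=1)
for idx in range(5):
    print("Title:", df['clean_title'].iloc[idx])
    print("  dominant topic:", dominant[idx])
```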
# The Dynamic Factor Model ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import metran metran.show_versions() ``` <div class="alert alert-block alert-info"> <b>Tip:</b> To run this notebook and the related metran model, it is strongly recommended to install Numba (http://numba.pydata.org). This Just-In-Time (JIT) compiler compiles the computationally intensive part of metran model. </div> When modeling multiple groundwater time series within the same hydrological system, it often appears that these components show distinct correlations between locations. Usually large part of the correlation is caused by common input stresses like precipitation and evapotranspiration, which shows up within the deterministic components of the models. The residual components of the univariate TFN models are often correlated as well. This means that there is spatial correlation which has not been captured by the deterministic component, e.g. because of errors in common input data or due to simplification of the hydrological model leading to misspecification of the deterministic component. We can exploit these correlations by modeling the series simultaneously with a dynamic factor model. Dynamic factor modeling (DFM) is a multivariate timeseries analysis technique used to describe the variation among many variables in terms of a few underlying but unobserved variables called factors. This notebook explains the Dynamic Factor Model (DFM) as presented in [Berendrecht and Van Geer, 2016](#References). It describes the model, model parameters and how the results may be interpreted. ## 1. Basic multivariate AR(1) model A general univariate AR(1) model can be written as: $$ \begin{align} {x}_t&=\phi x_{t-1}+\eta_t\\ {n}_t&={x}_t+\varepsilon_t \end{align} $$ with $\phi$ the AR(1) parameter, $\eta_t$ a zero mean white noise process, and $\varepsilon_t$ the measurement noise. In the current version of `metran` the measurement noise is assumed to be zero, so that $n_t=x_t$. The multivariate extension of this model is: $$ \left[\begin{array}{c}x_{1}\\x_{2}\end{array}\right]_t = \left[\begin{array}{cc}\phi_{1} & 0\\0 & \phi_{2}\end{array}\right] \left[\begin{array}{c}x_{1}\\x_{2}\end{array}\right]_{t-1} + \left[\begin{array}{c}\eta_{1}\\\eta_{2}\end{array}\right]_t $$ Or: $$ \mathbf{x}_t=\mathbf{\Phi} \mathbf{x}_{t-1}+\mathbf{\eta}_t $$ ## 2. Generate synthetic correlated time series Let us generate time series based on the 2-dimensional model given above. We use the AR(1) model to generate three time series with the AR(1) parameter $\phi$: two series as the specific dynamic factor and one series as the common dynamic factor. Combining the specific and common dynamic factors results in two time series which are mutually correlated. 
``` # seed numpy.random np.random.seed(20210505) # define mean and scale (standard deviation for noise series) mean = np.zeros(3) scale = [1, 0.6, 2] # generate noise series that are mutually uncorrelated noise = np.random.multivariate_normal(mean, np.diag(np.square(scale)), 2001) # generate AR(1) processes phi = np.array([0.80, 0.95, 0.90]) a = np.zeros_like(noise) for i in range(1, noise.shape[0]): a[i] = noise[i] + np.multiply(a[i - 1], phi) # add AR(1) processes to construct two correlated series s1 = np.add(a[1:, 0], a[1:, 2]) s2 = np.add(a[1:, 1], a[1:, 2]) s = pd.DataFrame(data=np.array([s1, s2]).T, index=pd.date_range(start='1-1-2000', periods=2000), columns=['series 1', 'series 2']) s.plot(figsize=(10, 2), xlabel='Date'); ``` We can calculated the mean and standard deviation of the generated series and test the correlation between these series. The correlation must be close to the desired correlation defined above. ``` print('Mean:') print(s.mean()) print('\nStandard deviation:') print(s.std()) print('\nCorrelation:') print(s.corr()) ``` ## 3. The Dynamic Factor Model<a id="dfm"></a> With the Dynamic Factor Model (DFM) we try to decompose series into latent (unobserved) factors describing common and specific dynamics. For the example above, the common dynamic factor describe the all variation that is found in both series. The remaining part of each series is described by the specific dynamic factor. Mathematically, this can be written as: $$ \left[\begin{array}{c}n_{1,t}\\n_{2,t}\end{array}\right] = \left[\begin{array}{c}x_{s,1}\\x_{s,2}\end{array}\right]_t + \left[\begin{array}{c}\gamma_{1}\\ \gamma_{2}\end{array}\right] x_{c,t} $$ where $\gamma_1$ and $\gamma_2$ are the factor loadings for series 1 resp. series 2. These factor loadings describe how the series $n_1$ and $n_2$ are related to the common dynamic factor. The specific dynamic factors $x_s$ and common dynamic factor $x_c$ can be described by an AR(1) model as: $$ \begin{align} \mathbf{x}_{s,t}&=\left[\begin{array}{cc}\phi_{s,1} & 0\\0 & \phi_{s,2}\end{array}\right]\mathbf{x}_{s,t-1}+\left[\begin{array}{c}\eta_{s,1}\\\eta_{s,2}\end{array}\right]_t\\ x_{c,t}&=\phi_c x_{c,t-1}+\eta_{c,t} \end{align} $$ The model can also be written in a single matrix notation as: $$ \begin{align} \mathbf{x}_{t}&=\Phi \mathbf{x}_{t-1}+\mathbf{\eta}_{t}\\ \mathbf{n}_{t}&=\mathbf{Z} \mathbf{x}_{t} \end{align} $$ with the state vector $ \mathbf{x}=\left[\begin{array}{c}x_{s,1}\\x_{s,2}\\x_{c,1}\end{array}\right]$, the transition matrix $\mathbf{\Phi}=\left[\begin{array}{ccc}\phi_{s,1} & 0 & 0\\0 & \phi_{s,2} & 0\\0 & 0 & \phi_{c}\end{array}\right]$, the transition noise vector $\mathbf{\eta}=\left[\begin{array}{c}\eta_{s,1}\\ \eta_{s,2}\\ \eta_{c}\end{array}\right]$, and the observation matrix $\mathbf{Z}=\left[\begin{array}{ccc}1&0&\gamma_1\\0&1&\gamma_2\end{array}\right]$. When analyzing more than two series, multiple common dynamic factors may be used. In that case, the equation for the common dynamic factor also becomes a vector equation. ## 4. Standardization<a id="standardization"></a> With the DFM we want to describe the common and specifc dynamics based on the correlation rather than the covariance structure. Therefore, all series are standardized as: $$\tilde{n}_{i,t} = \frac{n_{i,t}-\mu_{n_i}}{\sigma_{n_i}}$$ This standardization is done internally in `metran`, so there is no need to perform any standardization beforehand. However, as an illustration, the code below shows the standardized series. 
``` mt = metran.Metran(s) series_std = mt.standardize(s) series_std.plot(figsize=(10, 2), xlabel='Date').set_ylim(-4, 4); ``` ## 5. Running the model Let us now run the model for the generate time series. In this example, we solve the model with `report=False`. This means that no report is shown. Instead, we analyze the results step by step. ``` mt = metran.Metran(s) mt.solve(report=False) ``` ### 5.1 Factors, communality and specificity Metran first determines the optimal number of common dynamic factors based on the correlation structure of the time series. For this, the Minimum Average Partial (MAP) test is used ([Velicer, 1976](#References); [Velicer et al., 2000](#References)). If this test results in 0 factors, then a second test is done based on the Kaiser criterion ([Kaiser, 1960](#References)). In this case, as we can see above, 1 factor has been selected to describe the common dynamics. Besides, Metran estimates the factor loadings $\gamma_1$ and $\gamma_2$ using the minimum residual (minres) algorithm ([Harman and Jones, 1966](#References)). ``` print('Factors:\n', mt.factors) ``` As described in [section 3](#dfm), the factor loadings show the degree to which a factor elaborates a variable (observed series). The sum of squared factor loadings for all common factors for a given series is referred to as the communality. The communality measures the fraction of variance in a given variable explained by all common factors jointly, or in our case, one common factor. ``` print('Communality:', mt.get_communality()) ``` The fraction that is unique/specific for each series is referred to as the specificity and is calculated as (1 - communality). ``` print('Specificity:', mt.get_specificity()) ``` ### 5.2 Estimating AR(1) parameters After the number of factors and associated factor loadings have been estimated, Metran uses an optimization algorithm to estimate the AR(1) model parameters $\phi_{s,1}$, $\phi_{s,2}$, and $\phi_{c}$. Similar to the AR parameter is `pastas`, $\phi$ is written as: $$ \phi_k=e^{โˆ’\Delta t_i/\alpha_k} $$ and $\alpha_k$ is being estimated. As all series have been standardized, the variance of each series is equal to 1. In addtion, we know the communality (and specificity) for each series, which means that we know the variance of the specific and common dynamic factors. As a result, the noise variance parameters of the AR(1) model do not need to be estimated. Instead, Metran calculates them as: $$ \begin{align} q_{s,1} &= \left(1-\phi_{s,1}^2\right) \cdot s_1 \\ q_{s,2} &= \left(1-\phi_{s,2}^2\right) \cdot s_2 \\ q_{c} &= \left(1-\phi_{c}^2\right) \end{align} $$ with $s_1$ and $s_2$ the specificity of series 1 resp. series 2. The results of the parameter estimation process can be shown using `mt.fit_report()`. ``` print(mt.fit_report()) ``` ### 5.3 Metran report Further output of the Metran model parameters and statistics is given by `mt.metran_report()`. The following results are shown: - nfct: number of factors - fep: percentage of total variance explained by these factors - communality for each series: percentage of variance that a series has in common with other series. 
- state parameters: - AR(1) parameter $\phi$, calculated from the optimized parameter $\alpha$ - variance $q$ of white noise process $\eta$ - observation parameters: - factor loadings $\gamma$ for each factor and series - scale: standard deviation $\sigma_n$ of each series (used for standardization, see [section 4](#standardization)) - mean: mean $\mu_n$ of each series (used for standardization, see [section 4](#standardization)) - state correlations: correlation between specific and/or common dynamic factors ``` print(mt.metran_report()) ``` The statistic `fep` is based on the eigenvalues of the correlation matrix. The eigenvalues can be retrieved from the `metran` class. ``` mt.eigval ``` The sum of the eigenvalues always equals the dimension of the correlation matrix, in this case 2. ``` round(mt.eigval.sum()) ``` As we have used 1 eigenvalue (`nfct` = 1), the statistic `fep` is calculated as: ``` round(100 * mt.eigval[0] / mt.eigval.sum(), 2) ``` ## 6. Checking the estimated AR(1) parameters We can compare the estimate AR(1) parameters $\phi$ with the AR(1) parameters used to generate the time series. ``` print(np.round(np.diagonal(mt.get_transition_matrix()), 2), 'vs', phi) ``` The estimated parameters are close to those being used to generate the synthetic series, which means that the model has estimated the autoregression of the latent components well. ## 7. Decomposition of series The specific dynamic components (sdf's) $x_{s,1}$ and $x_{s,2}$ can be retrieved from the state vector $\mathbf{x}$. ``` mt.get_state_means().plot(figsize=(10, 2), xlabel='Date', title='Specific and common dynamic factors' ).set_ylim(-4, 4); ``` Note that the common factor need to be multiplied by the factor loadings, to get the common factor for each series. Furthermore, these results are for the standardized series and need to be rescaled to obtain the unstandardized dynamic factors. Metran has a specific method to obtain the specific and common dynamic factors for each series. ``` mt.decompose_simulation(name='series 1').plot( figsize=(10, 2), xlabel='Date', title='Specific and common dynamic factor for series 1'); ``` We can compare the calculated specificity with the variance of the specific dynamic component divided by the series variance (which is the sum of the specific and common dynamic factor). ``` sim1 = mt.decompose_simulation(name='series 1') sdf1_variance = sim1['sdf'].var() / sim1.sum(axis=1).var() print('Variance sdf series 1:', "{:.2f}%".format(100 * sdf1_variance)) print('Specificity series 1 :', "{:.2f}%".format(100 * mt.get_specificity()[0])) ``` Theoretically, these values must be equal. In practice, they may slightly differ, e.g. due to some correlation between the specific and common dynamic factor. We can test this by calculating the correlation. ``` sim1.corr() ``` Similar to series 1, we can decompose series 2 and compare the associated specificity and communality. ``` mt.decompose_simulation(name='series 2').plot( figsize=(10, 2), xlabel='Date', title='Specific and common dynamic factor for series 2'); sim2 = mt.decompose_simulation(name='series 2') sdf2_variance = sim2['sdf'].var() / sim2.sum(axis=1).var() print('Variance sdf series 2:', "{:.2f}%".format(100 * sdf2_variance)) print('Specificity series 2 :', "{:.2f}%".format(100 * mt.get_specificity()[1])) sim2.corr() ``` ## References - Berendrecht, W.L., F.C. van Geer, 2016. A dynamic factor modeling framework for analyzing multiple groundwater head series simultaneously, Journal of Hydrology, 536, pp. 
50-60, [DOI](http://dx.doi.org/10.1016/j.jhydrol.2016.02.028). - Harman, H., Jones, W., 1966. Factor analysis by minimizing residuals (minres). Psychometrika 31, 351โ€“368. - Kaiser, H.F., 1960. The application of electronic computers to factor analysis. Educ. Psychol. Meas. 20, 141โ€“151. - Velicer, W.F., 1976. Determining the number of components from the matrix of partial correlations. Psychometrika 41, 321โ€“327. - Velicer, W.F., Eaton, C.A., Fava, J.L., 2000. Construct explication through factor or component analysis: a review and evaluation of alternative procedures for determining the number of factors or components. In: Goffin, R., Helmes, E. (Eds.), Problems and Solutions in Human Assessment. Springer, US, pp. 41โ€“71.
# Introduction to Docker **Learning Objectives** * Build and run Docker containers * Pull Docker images from Docker Hub and Google Container Registry * Push Docker images to Google Container Registry ## Overview Docker is an open platform for developing, shipping, and running applications. With Docker, you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code. Docker does this by combining kernel containerization features with workflows and tooling that helps you manage and deploy your applications. Docker containers can be directly used in Kubernetes, which allows them to be run in the Kubernetes Engine with ease. After learning the essentials of Docker, you will have the skillset to start developing Kubernetes and containerized applications. ## Basic Docker commands See what docker images you have. ``` !docker images ``` If this is the first time working with docker you won't have any repositories listed. **Note**. If you are running this in an AI Notebook, then you should see a single image `gcr.io/inverting-proxy/agent`. This is the container that is currently running the AI Notebook. Let's use `docker run` to pull a docker image called `hello-world` from the public registry. The docker daemon will search for the `hello-world` image, if it doesn't find the image locally, it pulls the image from a public registry called Docker Hub, creates a container from that image, and runs the container for you. ``` !docker run hello-world ``` Now when we look at our docker images we should see `hello-world` there as well. ``` !docker images ``` This is the image pulled from the Docker Hub public registry. The Image ID is in `SHA256` hash formatโ€”this field specifies the Docker image that's been provisioned. When the docker daemon can't find an image locally, it will by default search the public registry for the image. Let's run the container again: Now, if we want to run `docker run hello-world` again, it won't have to download from the container registry. To see all docker containers running, use `docker ps`. ``` !docker ps ``` There are no running containers. **Note. If you are running this in at AI Notebook, you'll see one container running.** The `hello-world` containers you ran previously already exited. In order to see all containers, including ones that have finished executing, run docker `ps -a`: ``` !docker ps -a ``` This shows you the Container ID, a UUID generated by Docker to identify the container, and more metadata about the run. The container Names are also randomly generated but can be specified with docker run --name [container-name] hello-world. ## Build a Docker container Let's build a Docker image that's based on a simple node application. Open a new text file and write the following. Save the file in a folder called `dockerfiles` and name the file `intro.docker` ```bash # Use an official Node runtime as the parent image FROM node:6 # Set the working directory in the container to /app WORKDIR /app # Copy the current directory contents into the container at /app ADD . /app # Make the container's port 80 available to the outside world EXPOSE 80 # Run app.js using node when the container launches CMD ["node", "./src/app.js"] ``` This file instructs the Docker daemon on how to build your image. 
The initial line specifies the base parent image, which in this case is the official Docker image for node version 6. In the second, we set the working (current) directory of the container. In the third, we add the current directory's contents (indicated by the "." ) into the container. Then we expose the container's port so it can accept connections on that port and finally run the node command to start the application. Check out the other [Docker command references](https://docs.docker.com/engine/reference/builder/#known-issues-run) to understand what each line does. We're going to use this Docker container to run a simple node.js app. Have a look at `app.js`. This is a simple HTTP server that listens on port 80 and returns "Hello World." Now let's build the image. Note again the "`.`", which means current directory so you need to run this command from within the directory that has the Dockerfile. The `-t` is to name and tag an image with the `name:tag` syntax. The name of the image is `node-app` and the tag is `0.1`. The tag is highly recommended when building Docker images. If you don't specify a tag, the tag will default to latest and it becomes more difficult to distinguish newer images from older ones. Also notice how each line in the Dockerfile above results in intermediate container layers as the image is built. ``` !docker build -t node-app:0.1 -f dockerfiles/intro.docker . ``` Let's check that the image has been created correctly. ``` !docker images ``` You should see a `node-app` repository that was created only seconds ago. Notice `node` is the base image and `node-app` is the image you built. You can't remove `node` without removing `node-app` first. The size of the image is relatively small compared to VMs. Other versions of the node image such as `node:slim` and `node:alpine` can give you even smaller images for easier portability. The topic of slimming down container sizes is further explored in Advanced Topics. You can view all versions in the official repository here. Note, you can remove an image from your docker images using `docker rmi [repository]:[tag]`. ## Run a Docker container Now we'll run the container based on the image you built above using the `docker run` command. The `--name` flag allows you to name the container if you like. And `-p` instructs Docker to map the host's port 4000 to the container's port 80. This allows you to reach the server at http://localhost:4000. Without port mapping, you would not be able to reach the container at localhost. ``` !docker ps -a !docker run -p 4000:80 --name my-app node-app:0.1 ``` To test out the server, open a terminal window and type the following command: ```bash curl http://localhost:4000 ``` You should see the server respond with `Hello World` The container will run as long as the initial terminal is running. If you want to stop the container, run the following command in the terminal to stop and remove the container: ```bash docker stop my-app && docker rm my-app ``` After a few moments the container will stop. You should notice the cell above will complete execution. #### Running the container in the background If you want to the container to run in the background (not tied to the terminal's session), you need to specify the `-d` flag. Now run the following command to start the container in the background ``` !docker run -p 4000:80 --name my-app -d node-app:0.1 ``` Your container is now running in the background. 
You can check the status of your running container using `docker ps` ``` !docker ps ``` Notice the container is running in the output of docker ps. You can look at the logs by executing `docker logs [container_id]`. ``` # Note, your container id will be different !docker logs b9d5fd6b8e33 ``` You should see ```bash Server running at http://0.0.0.0:80/ ``` If you want to follow the log's output as the container is running, use the `-f` option. ## Modify & Publish Let's modify the application and push it to your Google Cloud Repository (gcr). After that you'll remove all local containers and images to simulate a fresh environment, and then pull and run your containers from gcr. This will demonstrate the portability of Docker containers. ### Edit `app.js` Open the file `./src/app.js` with the text editor and replace "Hello World" with another string. Then build this new image. ``` !docker build -t node-app:0.2 -f dockerfiles/intro.docker . ``` Notice in `Step 2` of the output we are using an existing cache layer. From `Step 3` and on, the layers are modified because we made a change in `app.js`. Run another container with the new image version. Notice how we map the host's port 8000 instead of 80. We can't use host port 4000 because it's already in use. ``` !docker run -p 8000:80 --name my-app-2 -d node-app:0.2 ``` You can check that both container are running using `docker ps`. ``` !docker ps ``` And let's test boht containers using `curl` as before: ``` !curl http://localhost:8000 !curl http://localhost:4000 ``` Recall, to stop a container running, you can execute the following command either in a terminal or (because they are running in the background) in a cell in this notebook. ### Publish to gcr Now you're going to push your image to the Google Container Registry (gcr). To push images to your private registry hosted by gcr, you need to tag the images with a registry name. The format is `[hostname]/[project-id]/[image]:[tag]`. For gcr: * `[hostname]`= gcr.io * `[project-id]`= your project's ID * `[image]`= your image name * `[tag]`= any string tag of your choice. If unspecified, it defaults to "latest". ``` import os PROJECT_ID = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME os.environ["PROJECT_ID"] = PROJECT_ID ``` Let's tag `node-app:0.2`. ``` !docker images %%bash docker tag node-app:0.2 gcr.io/${PROJECT_ID}/node-app:0.2 ``` Now when we list our docker images we should see this newly tagged repository. ``` !docker images ``` Next, let's push this image to gcr. ``` %%bash docker push gcr.io/${PROJECT_ID}/node-app:0.2 ``` Check that the image exists in `gcr` by visiting the image registry Cloud Console. You can navigate via the console to `Navigation menu > Container Registry` or visit the url from the cell below: ``` %%bash echo "http://gcr.io/${PROJECT_ID}/node-app" ``` ### Test the published gcr image Let's test this image. You could start a new VM, ssh into that VM, and install gcloud. For simplicity, we'll just remove all containers and images to simulate a fresh environment. First, stop and remove all containers using `docker stop` and `docker rm`. **Be careful not to stop the container running this AI Notebook!**. ``` !docker stop my-app && docker rm my-app !docker stop my-app-2 && docker rm my-app-2 ``` Now remove the docker images you've created above using `docker rmi`. 
``` !docker images %%bash docker rmi node-app:0.2 docker rmi gcr.io/${PROJECT_ID}/node-app:0.2 docker rmi node-app:0.1 docker rmi node:6 docker rmi -f hello-world:latest ``` Confirm all images are removed with `docker images`. ``` !docker images ``` At this point you should have a pseudo-fresh environment. Now, pull the image and run it. ``` %%bash docker pull gcr.io/${PROJECT_ID}/node-app:0.2 docker run -p 4000:80 -d gcr.io/${PROJECT_ID}/node-app:0.2 ``` You can check that it's running as expected using before: ``` !curl http://localhost:4000 ``` Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
<center> <img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" /> </center> <h1 align=center><font size = 5>Assignment: SQL Notebook for Peer Assignment</font></h1> Estimated time needed: **60** minutes. ## Introduction Using this Python notebook you will: 1. Understand the Spacex DataSet 2. Load the dataset into the corresponding table in a Db2 database 3. Execute SQL queries to answer assignment questions ## Overview of the DataSet SpaceX has gained worldwide attention for a series of historic milestones. It is the only private company ever to return a spacecraft from low-earth orbit, which it first accomplished in December 2010. SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars wheras other providers cost upward of 165 million dollars each, much of the savings is because Space X can reuse the first stage. Therefore if we can determine if the first stage will land, we can determine the cost of a launch. This information can be used if an alternate company wants to bid against SpaceX for a rocket launch. This dataset includes a record for each payload carried during a SpaceX mission into outer space. ### Download the datasets This assignment requires you to load the spacex dataset. In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the link below to download and save the dataset (.CSV file): <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/data/Spacex.csv?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01" target="_blank">Spacex DataSet</a> ### Store the dataset in database table **it is highly recommended to manually load the table using the database console LOAD tool in DB2**. <img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload.png"> Now open the Db2 console, open the LOAD tool, Select / Drag the .CSV file for the dataset, Next create a New Table, and then follow the steps on-screen instructions to load the data. Name the new table as follows: **SPACEXDATASET** **Follow these steps while using old DB2 UI which is having Open Console Screen** **Note:While loading Spacex dataset, ensure that detect datatypes is disabled. Later click on the pencil icon(edit option).** 1. Change the Date Format by manually typing DD-MM-YYYY and timestamp format as DD-MM-YYYY HH\:MM:SS. Here you should place the cursor at Date field and manually type as DD-MM-YYYY. 2. Change the PAYLOAD_MASS\_\_KG\_ datatype to INTEGER. 
<img src = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload2.png"> **Changes to be considered when having DB2 instance with the new UI having Go to UI screen** * Refer to this insruction in this <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Sign%20up%20for%20IBM%20Cloud%20-%20Create%20Db2%20service%20instance%20-%20Get%20started%20with%20the%20Db2%20console/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">link</a> for viewing the new Go to UI screen. * Later click on **Data link(below SQL)** in the Go to UI screen and click on **Load Data** tab. * Later browse for the downloaded spacex file. <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/browsefile.png" width="800"/> * Once done select the schema andload the file. <img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/spacexload3.png" width="800"/> ``` !pip install sqlalchemy==1.3.9 !pip install ibm_db_sa !pip install ipython-sql ``` ### Connect to the database Let us first load the SQL extension and establish a connection with the database ``` %load_ext sql ``` **DB2 magic in case of old UI service credentials.** In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance before. From the **uri** field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa:// <img src ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/FinalModule_edX/images/URI.jpg"> in the following format **%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name** **DB2 magic in case of new UI service credentials.** <img src ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module_2/images/servicecredentials.png" width=600> * Use the following format. * Add security=SSL at the end **%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name?security=SSL** ``` %sql ibm_db_sa://pdn17746:Knq2c233n9LAAj1M@ba99a9e6-d59e-4883-8fc0-d6a8c9f7a08f.c1ogj3sd0tgtu0lqde00.databases.appdomain.cloud:31321/bludb?security=SSL %sql SELECT * FROM SPACEXDATASET ``` ## Tasks Now write and execute SQL queries to solve the assignment tasks. ### Task 1 ##### Display the names of the unique launch sites in the space mission ``` %sql SELECT DISTINCT(launch_site) FROM SPACEXDATASET ``` ### Task 2 ##### Display 5 records where launch sites begin with the string 'CCA' ``` %sql SELECT * FROM SPACEXDATASET WHERE launch_site LIKE 'CCA%' LIMIT 5 ``` ### Task 3 ##### Display the total payload mass carried by boosters launched by NASA (CRS) ``` %sql SELECT SUM(payload_mass__kg_) FROM SPACEXDATASET WHERE customer = 'NASA (CRS)' ``` ### Task 4 ##### Display average payload mass carried by booster version F9 v1.1 ``` %sql SELECT AVG(payload_mass__kg_) FROM SPACEXDATASET WHERE booster_version = 'F9 v1.1' ``` ### Task 5 ##### List the date when the first successful landing outcome in ground pad was acheived. 
*Hint:Use min function* ``` %sql SELECT MIN(DATE) FROM SPACEXDATASET WHERE landing__outcome LIKE 'Success (ground pad)' LIMIT 1 ``` ### Task 6 ##### List the names of the boosters which have success in drone ship and have payload mass greater than 4000 but less than 6000 ``` %sql SELECT booster_version FROM SPACEXDATASET WHERE landing__outcome LIKE 'Success (drone ship)' AND payload_mass__kg_ BETWEEN 4000 AND 6000; ``` ### Task 7 ##### List the total number of successful and failure mission outcomes ``` %sql SELECT COUNT(mission_outcome), mission_outcome FROM SPACEXDATASET GROUP BY mission_outcome; ``` ### Task 8 ##### List the names of the booster_versions which have carried the maximum payload mass. Use a subquery ``` %sql SELECT booster_version FROM SPACEXDATASET WHERE payload_mass__kg_ = (SELECT MAX(payload_mass__kg_) FROM SPACEXDATASET) ``` ### Task 9 ##### List the failed landing_outcomes in drone ship, their booster versions, and launch site names for in year 2015 ``` %sql SELECT landing__outcome, booster_version, launch_site FROM SPACEXDATASET WHERE YEAR(DATE) = 2015 AND landing__outcome = 'Failure (drone ship)' ``` ### Task 10 ##### Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the date 2010-06-04 and 2017-03-20, in descending order ``` %sql SELECT COUNT(landing__outcome), landing__outcome FROM SPACEXDATASET WHERE DATE BETWEEN '2010-06-04' AND '2017-03-20' GROUP BY landing__outcome ORDER BY COUNT(landing__outcome) DESC ; ``` ### Reference Links * <a href ="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20String%20Patterns%20-%20Sorting%20-%20Grouping/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab : String Patterns, Sorting and Grouping</a> * <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Built-in%20functions%20/Hands-on_Lab__Built-in_Functions.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab: Built-in functions</a> * <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Labs_Coursera_V5/labs/Lab%20-%20Sub-queries%20and%20Nested%20SELECTs%20/instructional-labs.md.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01&origin=www.coursera.org">Hands-on Lab : Sub-queries and Nested SELECT Statements</a> * <a href="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Module%205/DB0201EN-Week3-1-3-SQLmagic.ipynb?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Hands-on Tutorial: Accessing Databases with SQL magic</a> * <a href= 
"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DB0201EN-SkillsNetwork/labs/Module%205/DB0201EN-Week3-1-4-Analyzing.ipynb?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Hands-on Lab: Analyzing a real World Data Set</a> ## Author(s) <h4> Lakshmi Holla </h4> ## Other Contributors <h4> Rav Ahuja </h4> ## Change log | Date | Version | Changed by | Change Description | | ---------- | ------- | ------------- | ------------------------- | | 2021-10-12 | 0.4 | Lakshmi Holla | Changed markdown | | 2021-08-24 | 0.3 | Lakshmi Holla | Added library update | | 2021-07-09 | 0.2 | Lakshmi Holla | Changes made in magic sql | | 2021-05-20 | 0.1 | Lakshmi Holla | Created Initial Version | ## <h3 align="center"> ยฉ IBM Corporation 2021. All rights reserved. <h3/>
# Inference and Validation Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch. As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here: ```python testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) ``` The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training. ``` import torch from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ``` Here I'll create a model like normal, using the same one from my solution for part 4. ``` from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x ``` The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set. ``` model = Classifier() images, labels = next(iter(testloader)) # Get the class probabilities ps = torch.exp(model(images)) # Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples print(ps.shape) ``` With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index. 
``` top_p, top_class = ps.topk(1, dim=1) # Look at the most likely classes for the first 10 examples print(top_class[:10,:]) ``` Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape. If we do ```python equals = top_class == labels ``` `equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row. ``` equals = top_class == labels.view(*top_class.shape) ``` Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error ``` RuntimeError: mean is not implemented for type torch.ByteTensor ``` This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implement for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`. ``` accuracy = torch.mean(equals.type(torch.FloatTensor)) print(f'Accuracy: {accuracy.item()*100}%') ``` The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up the by turning off gradients using `torch.no_grad()`: ```python # turn off gradients with torch.no_grad(): # validation pass here for images, labels in testloader: ... ``` >**Exercise:** Implement the validation loop below. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. ``` model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. 
".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ``` ## Overfitting If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting. <img src='assets/overfitting.png' width=450px> The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss. The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing it's ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module. ```python class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x ``` During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode. ```python # turn off gradients with torch.no_grad(): # set model to evaluation mode model.eval() # validation pass here for images, labels in testloader: ... # set model back to train mode model.train() ``` > **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss. 
``` class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ``` ## Inference Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context. ``` # Import helper module (should be in the repo) import helper # Test out your network! model.eval() dataiter = iter(testloader) images, labels = dataiter.next() img = images[0] # Convert 2D image to 1D vector img = img.view(1, 784) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model.forward(img) ps = torch.exp(output) # Plot the image and probabilities helper.view_classify(img.view(1, 28, 28), ps, version='Fashion') ``` ## Next Up! In the next part, I'll show you how to save your trained models. In general, you won't want to train a model everytime you need it. Instead, you'll train once, save it, then load the model when you want to train more or use if for inference.
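The overfitting section pointed at early stopping: keep the version of the model with the lowest validation loss. Saving models is covered properly in the next part, but here is a minimal sketch of how the validation loop above could checkpoint the best model as it goes. It assumes the `model`, `criterion`, `epochs`, `trainloader`, and `testloader` already defined in this notebook; the checkpoint path `'best_model.pth'` is just an example.

```python
best_val_loss = float('inf')

for e in range(epochs):
    # ... training pass over trainloader, exactly as in the loop above ...

    test_loss = 0
    with torch.no_grad():
        model.eval()
        for images, labels in testloader:
            test_loss += criterion(model(images), labels).item()
        model.train()
    test_loss /= len(testloader)

    # checkpoint whenever the validation loss improves (early stopping by selection)
    if test_loss < best_val_loss:
        best_val_loss = test_loss
        torch.save(model.state_dict(), 'best_model.pth')

# afterwards, restore the weights from the epoch with the lowest validation loss
model.load_state_dict(torch.load('best_model.pth'))
```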
``` %load_ext autoreload %autoreload 2 from math import ceil import torch from torch.utils.data import DataLoader from torch.autograd import Variable import torch.optim as optim import matplotlib.pyplot as plt %matplotlib inline import sys sys.path.append('..') from utils.input_pipeline import get_image_folders from utils.training import train from utils.quantization import optimization_step, quantize, initial_scales torch.cuda.is_available() torch.backends.cudnn.benchmark = True LEARNING_RATE = 1e-4 # learning rate for all possible weights HYPERPARAMETER_T = 0.15 # hyperparameter for quantization ``` # Create data iterators ``` batch_size = 64 train_folder, val_folder = get_image_folders() train_iterator = DataLoader( train_folder, batch_size=batch_size, num_workers=4, shuffle=True, pin_memory=True ) val_iterator = DataLoader( val_folder, batch_size=256, num_workers=4, shuffle=False, pin_memory=True ) # number of training samples train_size = len(train_folder.imgs) train_size ``` # Model ``` from get_densenet import get_model model, loss, optimizer = get_model(learning_rate=LEARNING_RATE) # load pretrained model, accuracy ~73% model.load_state_dict(torch.load('../vanilla_densenet_big/model_step5.pytorch_state')) ``` #### keep copy of full precision kernels ``` # copy almost all full precision kernels of the model all_fp_kernels = [ Variable(kernel.data.clone(), requires_grad=True) for kernel in optimizer.param_groups[1]['params'] ] # all_fp_kernels - kernel tensors of all convolutional layers # (with the exception of the first conv layer) ``` #### initial quantization ``` # scaling factors for each quantized layer initial_scaling_factors = [] # these kernels will be quantized all_kernels = [kernel for kernel in optimizer.param_groups[1]['params']] for k, k_fp in zip(all_kernels, all_fp_kernels): # choose initial scaling factors w_p_initial, w_n_initial = initial_scales(k_fp.data) initial_scaling_factors += [(w_p_initial, w_n_initial)] # do quantization k.data = quantize(k_fp.data, w_p_initial, w_n_initial, t=HYPERPARAMETER_T) ``` #### parameter updaters ``` # optimizer for updating only all_fp_kernels optimizer_fp = optim.Adam(all_fp_kernels, lr=LEARNING_RATE) # optimizer for updating only scaling factors optimizer_sf = optim.Adam([ Variable(torch.FloatTensor([w_p, w_n]).cuda(), requires_grad=True) for w_p, w_n in initial_scaling_factors ], lr=LEARNING_RATE) ``` # Train ``` from torch.optim.lr_scheduler import ReduceLROnPlateau class lr_scheduler_list: """ReduceLROnPlateau for a list of optimizers.""" def __init__(self, optimizer_list): self.lr_scheduler_list = [ ReduceLROnPlateau( optimizer, mode='max', factor=0.1, patience=3, verbose=True, threshold=0.01, threshold_mode='abs' ) for optimizer in optimizer_list ] def step(self, test_accuracy): for scheduler in self.lr_scheduler_list: scheduler.step(test_accuracy) n_epochs = 15 n_batches = ceil(train_size/batch_size) # total number of batches in the train set n_batches %%time optimizer_list = [optimizer, optimizer_fp, optimizer_sf] def optimization_step_fn(model, loss, x_batch, y_batch): return optimization_step( model, loss, x_batch, y_batch, optimizer_list=optimizer_list, t=HYPERPARAMETER_T ) all_losses = train( model, loss, optimization_step_fn, train_iterator, val_iterator, n_epochs, lr_scheduler=lr_scheduler_list(optimizer_list) ) # epoch logloss accuracy top5_accuracy time (first value: train, second value: val) # backup model.cpu(); torch.save(model.state_dict(), 'model_ternary_quantization.pytorch_state') ``` # Continue training 
``` # reduce learning rate for optimizer in optimizer_list: for group in optimizer.param_groups: group['lr'] = 1e-5 n_epochs = 5 model.cuda(); %%time def optimization_step_fn(model, loss, x_batch, y_batch): return optimization_step( model, loss, x_batch, y_batch, optimizer_list=optimizer_list, t=HYPERPARAMETER_T ) all_losses = train( model, loss, optimization_step_fn, train_iterator, val_iterator, n_epochs ) # epoch logloss accuracy top5_accuracy time (first value: train, second value: val) ``` # Final save ``` model.cpu(); torch.save(model.state_dict(), 'model_ternary_quantization.pytorch_state') ```
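For reference, the quantization helpers imported from `utils.quantization` above follow a trained-ternary-quantization style recipe: zero out small weights and map the remaining ones to two scaling factors, `w_p` and `w_n`. The snippet below is a hypothetical re-implementation for illustration only — the names `initial_scales_sketch` and `quantize_sketch` are made up, and the project's actual functions may differ in details.

```python
import torch

def initial_scales_sketch(kernel):
    # one possible heuristic: start both scaling factors at the mean absolute weight
    m = kernel.abs().mean()
    return m, m

def quantize_sketch(kernel, w_p, w_n, t=0.15):
    # the threshold is a fraction t of the largest absolute weight in the kernel
    delta = t * kernel.abs().max()
    # weights above +delta become +w_p, below -delta become -w_n, the rest become 0
    ternary = torch.zeros_like(kernel)
    ternary[kernel > delta] = w_p
    ternary[kernel < -delta] = -w_n
    return ternary

# usage on a random kernel tensor
k = torch.randn(64, 32, 3, 3)
w_p0, w_n0 = initial_scales_sketch(k)
print(quantize_sketch(k, w_p0, w_n0).unique())  # three distinct values: -w_n0, 0 and +w_p0
```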
*This tutorial is part of the [Learn Machine Learning](https://www.kaggle.com/learn/learn-machine-learning/) series. In this step, you will learn how to use cross-validation for better measures of model performance.* # What is Cross Validation Machine learning is an iterative process. You will face choices about predictive variables to use, what types of models to use,what arguments to supply those models, etc. We make these choices in a data-driven way by measuring model quality of various alternatives. You've already learned to use `train_test_split` to split the data, so you can measure model quality on the test data. Cross-validation extends this approach to model scoring (or "model validation.") Compared to `train_test_split`, cross-validation gives you a more reliable measure of your model's quality, though it takes longer to run. ## The Shortcoming of Train-Test Split Imagine you have a dataset with 5000 rows. The `train_test_split` function has an argument for `test_size` that you can use to decide how many rows go to the training set and how many go to the test set. The larger the test set, the more reliable your measures of model quality will be. At an extreme, you could imagine having only 1 row of data in the test set. If you compare alternative models, which one makes the best predictions on a single data point will be mostly a matter of luck. You will typically keep about 20% as a test dataset. But even with 1000 rows in the test set, there's some random chance in determining model scores. A model might do well on one set of 1000 rows, even if it would be inaccurate on a different 1000 rows. The larger the test set, the less randomness (aka "noise") there is in our measure of model quality. But we can only get a large test set by removing data from our training data, and smaller training datasets mean worse models. In fact, the ideal modeling decisions on a small dataset typically aren't the best modeling decisions on large datasets. --- ## The Cross-Validation Procedure In cross-validation, we run our modeling process on different subsets of the data to get multiple measures of model quality. For example, we could have 5 **folds** or experiments. We divide the data into 5 pieces, each being 20% of the full dataset. ![cross-validation-graphic](https://i.stack.imgur.com/1fXzJ.png) We run an experiment called experiment 1 which uses the first fold as a holdout set, and everything else as training data. This gives us a measure of model quality based on a 20% holdout set, much as we got from using the simple train-test split. We then run a second experiment, where we hold out data from the second fold (using everything except the 2nd fold for training the model.) This gives us a second estimate of model quality. We repeat this process, using every fold once as the holdout. Putting this together, 100% of the data is used as a holdout at some point. Returning to our example above from train-test split, if we have 5000 rows of data, we end up with a measure of model quality based on 5000 rows of holdout (even if we don't use all 5000 rows simultaneously. ## Trade-offs Between Cross-Validation and Train-Test Split Cross-validation gives a more accurate measure of model quality, which is especially important if you are making a lot of modeling decisions. However, it can take more time to run, because it estimates models once for each fold. So it is doing more total work. Given these tradeoffs, when should you use each approach? 
On small datasets, the extra computational burden of running cross-validation isn't a big deal. These are also the problems where model quality scores would be least reliable with train-test split. So, if your dataset is smaller, you should run cross-validation. For the same reasons, a simple train-test split is sufficient for larger datasets. It will run faster, and you may have enough data that there's little need to re-use some of it for holdout. There's no simple threshold for what constitutes a large vs. small dataset. If your model takes a couple of minutes or less to run, it's probably worth switching to cross-validation. If your model takes much longer to run, cross-validation may slow down your workflow more than it's worth. Alternatively, you can run cross-validation and see if the scores for each experiment seem close. If each experiment gives the same results, train-test split is probably sufficient. # Example First we read the data ``` import pandas as pd data = pd.read_csv('./data/melbourne-housing-snapshot/melb_data.csv') cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt'] X = data[cols_to_use] y = data.Price ``` Then specify a pipeline of our modeling steps (it can be very difficult to do cross-validation properly if you aren't using [pipelines](https://www.kaggle.com/dansbecker/pipelines)) ``` from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import make_pipeline from sklearn.preprocessing import Imputer my_pipeline = make_pipeline(Imputer(), RandomForestRegressor()) ``` Finally get the cross-validation scores: ``` from sklearn.model_selection import cross_val_score scores = cross_val_score(my_pipeline, X, y, scoring='neg_mean_absolute_error') print(scores) ``` You may notice that we specified an argument for *scoring*. This specifies what measure of model quality to report. The docs for scikit-learn show a [list of options](http://scikit-learn.org/stable/modules/model_evaluation.html). It is a little surprising that we specify *negative* mean absolute error in this case. Scikit-learn has a convention where all metrics are defined so a high number is better. Using negatives here allows them to be consistent with that convention, though negative MAE is almost unheard of elsewhere. You typically want a single measure of model quality to compare between models. So we take the average across experiments. ``` print('Mean Absolute Error %.2f' % (-1 * scores.mean())) ``` # Conclusion Using cross-validation gave us much better measures of model quality, with the added benefit of cleaning up our code (no longer needing to keep track of separate train and test sets). So, it's a good win. # Your Turn 1. Convert the code for your ongoing project over from train-test split to cross-validation. Make sure to remove all code that divides your dataset into training and testing datasets. Leaving code you don't need any more would be sloppy. 2. Add or remove a predictor from your models. See the cross-validation score using both sets of predictors, and see how you can compare the scores (a sketch of this comparison follows the solution cell below). ``` data = pd.read_csv('./data/house-prices-advanced-regression-techniques/train.csv') data = data.dropna(subset=['SalePrice'], axis=0) y = data['SalePrice'] X = data.drop(['SalePrice'], axis=1).select_dtypes(exclude=['object']) pipeline = make_pipeline(Imputer(strategy="median"), RandomForestRegressor(random_state=42)) scores = cross_val_score(pipeline, X, y, scoring='neg_mean_absolute_error') scores print('Mean Absolute Error %.2f' % (-1 * scores.mean())) ```
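For the second exercise, here is one possible sketch of comparing two predictor sets with cross-validation. It reuses `X` and `y` from the solution cell above; the dropped column `'GarageArea'` is only an illustrative choice.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Imputer

def cv_mae(predictors, target):
    # same pipeline as above, scored with the default number of cross-validation folds
    pipe = make_pipeline(Imputer(strategy="median"), RandomForestRegressor(random_state=42))
    return -1 * cross_val_score(pipe, predictors, target, scoring='neg_mean_absolute_error').mean()

# score with all numeric predictors, then with one predictor removed
print('MAE, all predictors: %.0f' % cv_mae(X, y))
print('MAE, without GarageArea: %.0f' % cv_mae(X.drop(['GarageArea'], axis=1), y))
```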
``` !pip install -r https://raw.githubusercontent.com/datamllab/automl-in-action-notebooks/master/requirements.txt ``` ## 8.1.1 Loading image classification dataset ``` !!wget https://github.com/datamllab/automl-in-action-notebooks/raw/master/data/mnist.tar.gz !!tar xzf mnist.tar.gz ``` ``` train/ 0/ 1.png 21.png ... 1/ 2/ 3/ ... test/ 0/ 1/ ... ``` ``` import os import autokeras as ak batch_size = 32 img_height = 28 img_width = 28 parent_dir = "data" test_data = ak.image_dataset_from_directory( os.path.join(parent_dir, "test"), seed=123, color_mode="grayscale", image_size=(img_height, img_width), batch_size=batch_size, ) for images, labels in test_data.take(1): print(images.shape, images.dtype) print(labels.shape, labels.dtype) ``` ## 8.1.2 Splitting the loaded dataset ``` all_train_data = ak.image_dataset_from_directory( os.path.join(parent_dir, "train"), seed=123, color_mode="grayscale", image_size=(img_height, img_width), batch_size=batch_size, ) train_data = all_train_data.take(int(60000 / batch_size * 0.8)) validation_data = all_train_data.skip(int(60000 / batch_size * 0.8)) train_data = ak.image_dataset_from_directory( os.path.join(parent_dir, "train"), validation_split=0.2, subset="training", seed=123, color_mode="grayscale", image_size=(img_height, img_width), batch_size=batch_size, ) validation_data = ak.image_dataset_from_directory( os.path.join(parent_dir, "train"), validation_split=0.2, subset="validation", seed=123, color_mode="grayscale", image_size=(img_height, img_width), batch_size=batch_size, ) import tensorflow as tf train_data = train_data.prefetch(5) validation_data = validation_data.prefetch(5) test_data = test_data.prefetch(tf.data.AUTOTUNE) ``` Then we just do one quick demo of AutoKeras to make sure the dataset works. ``` clf = ak.ImageClassifier(overwrite=True, max_trials=1) clf.fit(train_data, epochs=1, validation_data=validation_data) print(clf.evaluate(test_data)) ``` ## 8.1.3 Loading text classification dataset You can also load text datasets in the same way. ``` !!wget https://github.com/datamllab/automl-in-action-notebooks/raw/master/data/imdb.tar.gz !!tar xzf imdb.tar.gz ``` For this dataset, the data is already split into train and test. We just load them separately. 
``` import os import autokeras as ak import tensorflow as tf train_data = ak.text_dataset_from_directory( "imdb/train", validation_split=0.2, subset="training", seed=123, max_length=1000, batch_size=32, ).prefetch(1000) validation_data = ak.text_dataset_from_directory( "imdb/train", validation_split=0.2, subset="validation", seed=123, max_length=1000, batch_size=32, ).prefetch(1000) test_data = ak.text_dataset_from_directory( "imdb/test", max_length=1000, ).prefetch(1000) clf = ak.TextClassifier(overwrite=True, max_trials=1) clf.fit(train_data, epochs=2, validation_data=validation_data) print(clf.evaluate(test_data)) ``` ## 8.1.4 Handling large dataset in general format ``` data = [5, 8, 9, 3, 6] def generator(): for i in data: yield i for x in generator(): print(x) dataset = tf.data.Dataset.from_generator(generator, output_types=tf.int32) for x in dataset: print(x.numpy()) import numpy as np parent_dir = "imdb" def load_data(path): data = [] for class_label in ["pos", "neg"]: for file_name in os.listdir(os.path.join(path, class_label)): data.append((os.path.join(path, class_label, file_name), class_label)) data = np.array(data) np.random.shuffle(data) return data def get_generator(data): def data_generator(): for file_path, class_label in data: text_file = open(file_path, "r") text = text_file.read() text_file.close() yield text, class_label return data_generator all_train_np = load_data(os.path.join(parent_dir, "train")) def np_to_dataset(data_np): return ( tf.data.Dataset.from_generator( get_generator(data_np), output_types=tf.string, output_shapes=tf.TensorShape([2]), ) .map(lambda x: (x[0], x[1])) .batch(32) .prefetch(5) ) train_data = np_to_dataset(all_train_np[:20000]) validation_data = np_to_dataset(all_train_np[20000:]) test_np = load_data(os.path.join(parent_dir, "test")) test_data = np_to_dataset(test_np) for texts, labels in train_data.take(1): print(texts.shape) print(labels.shape) clf = ak.TextClassifier(overwrite=True, max_trials=1) clf.fit(train_data, epochs=2, validation_data=validation_data) print(clf.evaluate(test_data)) ```
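After any of the searches above finishes, you will usually want to keep the best model AutoKeras found. The sketch below assumes the fitted `clf` from the previous cell and that your AutoKeras version exposes `export_model()` and `CUSTOM_OBJECTS`; the path `"model_autokeras"` is just an example.

```python
from tensorflow.keras.models import load_model

# export the best pipeline found during the search as a plain Keras model
best_model = clf.export_model()
best_model.summary()

# save it to disk and reload it later for inference
best_model.save("model_autokeras", save_format="tf")
loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)
print(loaded_model.predict(test_data))
```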
# COVID19 - District Region ``` from pexecute.process import ProcessLoom loom = ProcessLoom(max_runner_cap=10) import urllib.request import pandas as pd import numpy as np # Download data import get_data LoadData=True if LoadData: get_data.get_data() dfSP = pd.read_csv("data/dados_municipios_SP.csv") dfSP # Model # lista DRSs DRS = list(dfSP["DRS"].unique()) DRS.remove("Indefinido") DRS ``` # SEAIR-D Model Equations $$\begin{array}{l}\frac{d s}{d t}=-[\beta i(t) + \beta_2 a(t)-\mu] \cdot s(t)\\ \frac{d e}{d t}=[\beta i(t) + \beta_2 a(t)] \cdot s(t) -(\sigma+\mu) \cdot e(t)\\ \frac{d a}{d t}=\sigma e(t) \cdot (1-p)-(\gamma+\mu) \cdot a(t) \\ \frac{d i}{d t}=\sigma e(t) \cdot p - (\gamma + \sigma_2 + \sigma_3 + \mu) \cdot i(t)\\ \frac{d r}{d t}=(b + \sigma_2) \cdot i(t) + \gamma \cdot a(t) - \mu \cdot r(t)\\ \frac{d k}{d t}=(a + \sigma_3 - \mu) \cdot d(t) \end{array}$$ The last equation does not need to be solve because: $$\frac{d k}{d t}=-(\frac{d e}{d t}+\frac{d a}{d t}+\frac{d i}{d t}+\frac{d r}{d t})$$ The sum of all rates are equal to zero! The importance of this equation is that it conservates the rates. ## Parameters $\beta$: Effective contact rate [1/min] $\gamma$: Recovery(+Mortality) rate $\gamma=(a+b)$ [1/min] $a$: mortality of healed [1/min] $b$: recovery rate [1/min] $\sigma$: is the rate at which individuals move from the exposed to the infectious classes. Its reciprocal ($1/\sigma$) is the average latent (exposed) period. $\sigma_2$: is the rate at which individuals move from the infectious to the healed classes. Its reciprocal ($1/\sigma_2$) is the average latent (exposed) period $\sigma_3$: is the rate at which individuals move from the infectious to the dead classes. Its reciprocal ($1/\sigma_3$) is the average latent (exposed) period $p$: is the fraction of the exposed which become symptomatic infectious sub-population. $(1-p)$: is the fraction of the exposed which becomes asymptomatic infectious sub-population. ``` #objective function Odeint solver from scipy.integrate import odeint #objective function Odeint solver def lossOdeint(point, data, death, s_0, e_0, a_0, i_0, r_0, d_0, startNCases, ratioRecovered, weigthCases, weigthRecov): size = len(data) beta, beta2, sigma, sigma2, sigma3, gamma, b, mu = point def SEAIRD(y,t): S = y[0] E = y[1] A = y[2] I = y[3] R = y[4] D = y[5] p=0.2 # beta2=beta y0=-(beta2*A+beta*I)*S+mu*S #S y1=(beta2*A+beta*I)*S-sigma*E-mu*E #E y2=sigma*E*(1-p)-gamma*A-mu*A #A y3=sigma*E*p-gamma*I-sigma2*I-sigma3*I-mu*I#I y4=b*I+gamma*A+sigma2*I-mu*R #R y5=(-(y0+y1+y2+y3+y4)) #D return [y0,y1,y2,y3,y4,y5] y0=[s_0,e_0,a_0,i_0,r_0,d_0] tspan=np.arange(0, size, 1) res=odeint(SEAIRD,y0,tspan,hmax=0.01) l1=0 l2=0 l3=0 tot=0 for i in range(0,len(data.values)): if data.values[i]>startNCases: l1 = l1+(res[i,3] - data.values[i])**2 l2 = l2+(res[i,5] - death.values[i])**2 newRecovered=min(1e6,data.values[i]*ratioRecovered) l3 = l3+(res[i,4] - newRecovered)**2 tot+=1 l1=np.sqrt(l1/max(1,tot)) l2=np.sqrt(l2/max(1,tot)) l3=np.sqrt(l3/max(1,tot)) #weight for cases u = weigthCases #Brazil US 0.1 w = weigthRecov #weight for deaths v = max(0,1. 
- u - w) return u*l1 + v*l2 + w*l3 # Initial parameters dfparam = pd.read_csv("data/param.csv") dfparam # Initial parameter optimization # Load solver GlobalOptimization=True if GlobalOptimization: import LearnerGlobalOpt as Learner # basinhopping global optimization (several times minimize) else: import Learner #minimize allDistricts=True districtRegion="DRS 01 - Grande São Paulo" if allDistricts: for districtRegion in DRS: query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index() parameters = np.array(query.iloc[:, 2:])[0] learner = Learner.Learner(districtRegion, lossOdeint, *parameters) loom.add_function(learner.train()) else: query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index() parameters = np.array(query.iloc[:, 2:])[0] learner = Learner.Learner(districtRegion, lossOdeint, *parameters) loom.add_function(learner.train()) loom.execute() ``` # Plots ``` import matplotlib.pyplot as plt import covid_plots def loadDataFrame(filename): df= pd.read_pickle(filename) df.columns = [c.lower().replace(' ', '_') for c in df.columns] df.columns = [c.lower().replace('(', '') for c in df.columns] df.columns = [c.lower().replace(')', '') for c in df.columns] return df #DRS 01 - Grande São Paulo #DRS 02 - Araçatuba #DRS 03 - Araraquara #DRS 04 - Baixada Santista #DRS 05 - Barretos #DRS 06 - Bauru #DRS 07 - Campinas #DRS 08 - Franca #DRS 09 - Marília #DRS 10 - Piracicaba #DRS 11 - Presidente Prudente #DRS 12 - Registro #DRS 13 - Ribeirão Preto #DRS 14 - São João da Boa Vista #DRS 15 - São José do Rio Preto #DRS 16 - Sorocaba #DRS 17 - Taubaté #select districts for plotting districts4Plot=['DRS 01 - Grande São Paulo', 'DRS 04 - Baixada Santista', 'DRS 07 - Campinas', 'DRS 05 - Barretos', 'DRS 15 - São José do Rio Preto'] #main district region for analysis districtRegion = "DRS 01 - Grande São Paulo" #Choose your options here #opt=0 all plots #opt=1 corona log plot #opt=2 logistic model prediction #opt=3 bar plot with growth rate #opt=4 log plot + bar plot #opt=5 SEAIR-D Model opt = 0 #version to identify the png file result version = "1" #parameters for plotting query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index() startdate = query['start-date'][0] predict_range = query['prediction-range'][0] %%javascript IPython.OutputArea.prototype._should_scroll = function(lines){ return false; } startCase=1 covid_plots.covid_plots(districtRegion, districts4Plot, startdate, predict_range, startCase, opt, version, show=True) ```
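To see the SEAIR-D system on its own, separate from the parameter fitting above, here is a minimal sketch that integrates the same equations with `odeint` for illustrative, made-up parameter values (they are not fitted to any DRS):

```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def SEAIRD_demo(y, t, beta, beta2, sigma, sigma2, sigma3, gamma, b, mu, p=0.2):
    S, E, A, I, R, D = y
    dS = -(beta2 * A + beta * I) * S + mu * S
    dE = (beta2 * A + beta * I) * S - sigma * E - mu * E
    dA = sigma * E * (1 - p) - gamma * A - mu * A
    dI = sigma * E * p - gamma * I - sigma2 * I - sigma3 * I - mu * I
    dR = b * I + gamma * A + sigma2 * I - mu * R
    dD = -(dS + dE + dA + dI + dR)  # deaths close the balance, as noted above
    return [dS, dE, dA, dI, dR, dD]

# illustrative parameter values only: beta, beta2, sigma, sigma2, sigma3, gamma, b, mu
params = (0.5, 0.3, 1/5, 1/14, 1/30, 1/10, 1/12, 0.0)
y0 = [0.999, 0.001, 0.0, 0.0, 0.0, 0.0]  # normalized initial conditions
t = np.arange(0, 200, 1)
sol = odeint(SEAIRD_demo, y0, t, args=params)

for idx, name in enumerate(["s", "e", "a", "i", "r", "k"]):
    plt.plot(t, sol[:, idx], label=name)
plt.legend(); plt.xlabel("days"); plt.show()
```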
# Part 2 - Advanced text classifiers As seen in the past, we can create models that take advantage of counts of words and tf-idf scores and that yield some pretty accurate predictions. But it is possible to make use of several additional features to improve our classifier. In this learning unit we are going to check how we could use other data extracted from our text data to determine if an e-mail is 'spam' or 'not spam' (also known as ham). We are going to use a very well known Kaggle dataset for spam detection - [Kaggle Spam Collection](https://www.kaggle.com/uciml/sms-spam-collection-dataset). ![ham_or_spam](./media/ham_spam.jpg) This part will also introduce you to feature unions, a very useful way of combining different feature sets into your models. This scikit-learn class comes hand-in-hand with pipelines. Both allow you to delegate the work of combining and piping your transformer's outputs - your features - allowing you to create workflows in a very simple way. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import warnings from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import StandardScaler import nltk import spacy %matplotlib inline warnings.simplefilter("ignore") ``` ## 1 - Spam and Ham As we mentioned before, we are going to try and come up with ways of detecting spam in the Kaggle Spam dataset. Let's load it and look into the data. ``` df = pd.read_csv('./datasets/spam.csv', encoding='latin1') df.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1,inplace=True) df.rename(columns={"v1":"label", "v2":"message"},inplace=True) df.head() ``` You could think it should be quite easy to detect the spam text, since it is clearer to the human eye. I don't know about you, but I'm always suspicious of free stuff. There ain't no such thing as a free lunch. But by now you should also know that what seems obvious in text to us is sometimes not as easy to detect by a model. So, what kind of features could you use for this? The most obvious one is the words themselves, which you already know how to - using CountVectorizer or TfIdfVectorizer. ## 1.1 - Baseline To start with, let's look at the target class distribution, ``` df.label.value_counts(normalize=True) ``` So, if we were to create a dumb classifier which always predicts "ham", we would get an accuracy of 86.6% for this dataset. Let's get our baseline with the Bag-of-words approach. Here we are going to use a RandomForestClassifier, a powerful machine learning classifier that fits very well in this problem. You may remember this estimator from SLU13. ``` # Split in train and validation train_data, test_data = train_test_split(df, test_size=0.2, random_state=41) # Build the pipeline text_clf = Pipeline([('tfidf', TfidfVectorizer()), ('classifier', RandomForestClassifier(random_state = 41))]) # Train the classifier text_clf.fit(map(str, train_data['message'].values), train_data['label'].values) predicted = text_clf.predict(map(str, test_data['message'].values)) np.mean(predicted == test_data['label']) ``` Powerful words, no? Our next step is to include other features. ## 1.2 - Adding extra features But, beside this vectorization as a bag-of-words, let's understand if our classifier can be fed other signals we can retrieve from the text. 
Let's check, for example, the *length of the message*. We'll first compute it and add it as a feature in our dataframe. ``` df['length'] = df['message'].map(len) df.head() ``` **Is this feature useful?** Since this is only one numerical feature, we can simply plot its distribution in our data. Let's evaluate the length distribution for "Spam" and "Ham". ``` ax_list = df.hist(column='length', by='label', bins=50,figsize=(12,4)) ax_list[0].set_xlim((0,300)) ax_list[1].set_xlim((0,300)) ``` Seems quite different, right? So you would guess this feature should be helpful in your classifier. But let's actually check this feature through the use of a text classifier. Now for the tricky part. ### Preprocessing If BLU07 is still fresh in your mind, you'll remember that when using pipelines we just fed them the text column. In fact, we could feed them more than one column, but a standard pipeline applies the same preprocessing to the whole dataset. For our heterogeneous data, this doesn't quite work. So what can we do if we want a pipeline that uses several different features from several different columns? We can't apply the same methods to everything, right? The first thing we can do is create a selector transformer that simply returns the right column in the dataset by the key value(s) you pass. You can find below two such transformers: `TextSelector` for text columns and `NumberSelector` for number columns. Note that the only difference between them is the return type. ``` class Selector(BaseEstimator, TransformerMixin): """ Transformer to select a column from the dataframe to perform additional transformations on """ def __init__(self, key): self.key = key def fit(self, X, y=None): return self class TextSelector(Selector): """ Transformer to select a single column from the data frame to perform additional transformations on Use on text columns in the data """ def transform(self, X): return X[self.key] class NumberSelector(Selector): """ Transformer to select a single column from the data frame to perform additional transformations on Use on numeric columns in the data """ def transform(self, X): return X[[self.key]] ``` And then we define pipelines tailored for each of our cases. ``` text = Pipeline([ ('selector', TextSelector("message")), ('tfidf', TfidfVectorizer()) ]) length = Pipeline([ ('selector', NumberSelector("length")), ('standard', StandardScaler()) ]) ``` Notice that we used the `StandardScaler`. We use this scaler (which scales the feature to zero mean and unit variance) because we don't want different feature scales in our classifier. Most classification algorithms expect the features to be on the same scale! You might be wondering now: > *How does this solve my problem... now I have two pipelines and although I can feed my whole dataset they are separate pipelines... does this help at all?* In fact, if you were to run them separately this would not be that helpful, since you would have to add the classifier at the end of each. It seems like we are missing only one piece: a way to combine steps in parallel and not in sequence. This is where feature unions come in! ## 1.3 - Feature Unions While pipelines define a cascaded workflow, feature unions allow you to parallelize your workflows and have several transformations applied in parallel to your pipeline.
The image below presents a simple pipeline, in sequence: <img src="./media/pipeline.png" width="40%"> While the following one presents what is called a feature union: <img src="./media/unions.png" width="70%"> The latter is quite simple to define in scikit-learn, as follows: ``` # Feature unions allow us to use multiple distinct features in our classifier feats = FeatureUnion([('text', text), ('length', length)]) ``` Now you can use this combination of pipelines and feature unions inside a new pipeline! <img src="./media/pipelines_dawg.png" width="45%"> We then get our final flow, from which we can extract the classification score. ``` # Split in train and validation train_data, test_data = train_test_split(df, test_size=0.2, random_state=41) pipeline = Pipeline([ ('features',feats), ('classifier', RandomForestClassifier(random_state = 41)), ]) pipeline.fit(train_data, train_data.label) preds = pipeline.predict(test_data) np.mean(preds == test_data.label) ``` Our new feature does help! We got a slight improvement over a baseline that was already quite high. Nicely done. Let's now play with other, more complex text features and see if we can maximize our classification score even more. ## 1.4 - Advanced features What kind of features can you think of? You could start by just having the number of words, in the same way that we had the character length of the sentence: ``` df['words'] = df['message'].str.split().map(len) ``` Remember BLU07? Remember stopwords? <img src="./media/stopwords.png" width="40%"> Let's count only words that are not stopwords, since these are normally less relevant. If you haven't downloaded stopwords yet, or in case the cell below returns an error, make sure to run this command: `nltk.download('stopwords')` ``` stop_words = nltk.corpus.stopwords.words('english') df['words_not_stopword'] = df['message'].apply(lambda x: len([t for t in x.split() if t not in stop_words])) ``` In the same way, we can apply counts conditioned on other characteristics, like counting the number of commas in the sentence or the number of words that are uppercased or capitalized: ``` df['commas'] = df['message'].str.count(',') df['upper'] = df['message'].map(lambda x: sum(map(str.isupper, x.split()))) df['capitalized'] = df['message'].map(lambda x: sum(map(str.istitle, x.split()))) ``` We can also model the type of words by their length, for example: ``` #get the average word length df['avg_word_length'] = df['message'].apply(lambda x: np.mean([len(t) for t in x.split() if t not in stop_words]) if len([len(t) for t in x.split(' ') if t not in stop_words]) > 0 else 0) ``` Let's take a look then at our output data frame, and all the features we added: ``` df.head() ``` And now we can use the feature unions that we learned about to merge all these together. We'll split the data, create pipelines for all our new features and get their union. Easy, right?
``` words = Pipeline([ ('selector', NumberSelector(key='words')), ('standard', StandardScaler()) ]) words_not_stopword = Pipeline([ ('selector', NumberSelector(key='words_not_stopword')), ('standard', StandardScaler()) ]) avg_word_length = Pipeline([ ('selector', NumberSelector(key='avg_word_length')), ('standard', StandardScaler()) ]) commas = Pipeline([ ('selector', NumberSelector(key='commas')), ('standard', StandardScaler()), ]) upper = Pipeline([ ('selector', NumberSelector(key='upper')), ('standard', StandardScaler()), ]) capitalized = Pipeline([ ('selector', NumberSelector(key='capitalized')), ('standard', StandardScaler()), ]) feats = FeatureUnion([('text', text), ('length', length), ('words', words), ('words_not_stopword', words_not_stopword), ('avg_word_length', avg_word_length), ('commas', commas), ('upper', upper), ('capitalized', capitalized)]) feature_processing = Pipeline([('feats', feats)]) ``` We end with our classifier, so let's run it and get our classification score. *Drumroll, please.* ``` # Split in train and validation train_data, test_data = train_test_split(df, test_size=0.2, random_state=41) pipeline = Pipeline([ ('features',feats), ('classifier', RandomForestClassifier(random_state = 41)), ]) pipeline.fit(train_data, train_data.label) preds = pipeline.predict(test_data) np.mean(preds == test_data.label) ``` <img src="./media/sad.png" width="40%"> Well, that was a bit underwhelming... Although we are still above the baseline, we didn't surpass the score of using just the text and its length by much. But don't despair: with all the tools from BLU07, BLU08 and the first part of this BLU you are already perfectly equipped to find new features and to analyze whether they are good or not, and even to integrate your pipelines with dimensionality reduction techniques that might find the meaningful features among all of these. Or maybe we are using the wrong score metric for this problem? 🧐 (hint hint for another future BLU 😜) ## 2 - Other classifiers New approaches in text processing have arisen with machine learning methods known as deep learning. The usage of deep learning methods is out of scope for this BLU, but it is important that the reader is aware of the potential of such methods to improve over traditional machine learning algorithms. In particular, we suggest getting to know two different classifiers besides sklearn: * [StarSpace](https://github.com/facebookresearch/StarSpace) * [Vowpal Wabbit classifier](https://github.com/JohnLangford/vowpal_wabbit/wiki) ### Additional Pointers * https://www.kaggle.com/baghern/a-deep-dive-into-sklearn-pipelines * http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html * http://michelleful.github.io/code-blog/2015/06/20/pipelines/ * https://scikit-learn.org/0.18/auto_examples/hetero_feature_union.html ## 3 - Final remarks And we are at the end of our NLP specialization. It saddens me, but it is time to say goodbye. Throughout these BLUs you learned: * How to process text * Typical text features used in classification tasks * State of the art techniques to encode text * Methods to analyze feature importance * Methods to perform feature reduction * How to design pipelines and combine different features inside them You are now armed with several tools to perform text classification and much more in NLP. Don't forget to review all of this for the NLP hackathon, and to do your best in the Exercises. <img src="./media/so_long.jpg" width="40%">
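One more thought on the score-metric hint above: accuracy can look flattering on an imbalanced spam/ham split, so it helps to inspect per-class precision and recall as well. A minimal sketch, assuming the `test_data` and `preds` objects defined earlier in this notebook:

```
from sklearn.metrics import classification_report, f1_score

# Per-class precision/recall/F1 is more informative than accuracy here,
# since roughly 87% of the messages are ham
print(classification_report(test_data.label, preds))

# A single imbalance-aware score, treating 'spam' as the positive class
print(f1_score(test_data.label, preds, pos_label='spam'))
```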
``` from __future__ import division from scipy.spatial.distance import euclidean from mpl_toolkits.mplot3d import Axes3D import numpy as np import pandas as pd import matplotlib.pyplot as plt def matrix_group(group, pcoa): ''' Collects the coordinates of the group into a matrix. ''' arr = np.empty((0,3), int) # If the sample belongs to the group, append its coordinates to the array: for row in pcoa.index: if any(True for val in group['sample_name'] if val == pcoa['id'][row]): axis1 = pcoa['axis1'][row] axis2 = pcoa['axis2'][row] axis3 = pcoa['axis3'][row] arr = np.append(arr, np.array([[axis1,axis2,axis3]]), axis=0) return arr # Compute the coordinates of the healthy plane # Adopted from the study def compute_coefficients(xyz): """Fit a plane to the first three dimensions of a matrix Parameters ---------- xyz : array-like The matrix of data to fit the plane to. Returns ------- np.array 1-dimensional array with four values, the coefficients `a`, `b`, `c` and `d` in the equation: .. math:: a\ x + b\ y - c\ z + d = 0. """ x = xyz[:, 0] y = xyz[:, 1] z = xyz[:, 2] A = np.column_stack([x, y, np.ones_like(x)]) abd, residuals, rank, s = np.linalg.lstsq(A, z) # add the coefficient of Z to return np.insert(abd, 2, -1) if __name__ == "__main__": # Read in the PCoA results pcoa = pd.read_csv('coordinates.txt', sep='\t') # Read in the metadata df = pd.read_csv("NIHMS841832-supplement-1.csv", sep=',') # Healthy control HC = df[df.ibd_subtype.eq("HC")] HC_matrix = matrix_group(HC,pcoa) # CCD CCD = df[df.ibd_subtype.eq("CCD")] CCD_matrix = matrix_group(CCD, pcoa) # ICD-r ICD_r = df[df.ibd_subtype.eq("ICD_r")] ICD_r_matrix = matrix_group(ICD_r, pcoa) # ICD-nr ICD_nr = df[df.ibd_subtype.eq("ICD_nr")] ICD_nr_matrix = matrix_group(ICD_nr, pcoa) # UC UC = df[df.ibd_subtype.eq("UC")] UC_matrix = matrix_group(UC, pcoa) coef = compute_coefficients(HC_matrix) a = coef[0] b = coef[1] c = coef[2] d = coef[3] # Plot the plane from skspatial.objects import Points, Plane from skspatial.plotting import plot_3d pointsHC = Points(HC_matrix) pointsICD_r = Points(ICD_r_matrix) pointsICD_nr = Points(ICD_nr_matrix) pointsCCD = Points(CCD_matrix) pointsUC = Points(UC_matrix) plane = Plane.best_fit(pointsHC) fig, ax = plot_3d( pointsHC.plotter(c='g', s=70, depthshade=False), pointsICD_r.plotter(c='y', s=70, depthshade=False), pointsICD_nr.plotter(c='r', s=70, depthshade=False), pointsCCD.plotter(c='purple', s=70, depthshade=False), pointsUC.plotter(c='b', s=70, depthshade=False), plane.plotter(alpha=0.2, lims_x=(-0.2,0.8), lims_y=(-0.2,0.2)), ) fig.set_size_inches(40, 40) plt.savefig('Plane.png') plt.show() # Coefficients of the plane print(coef) # Build the data for the random forest model # Create a new DataFrame (the 'Gesund' column means 'healthy') dataframe = pd.DataFrame(columns = ['sample_name' , 'bmi', 'calprotectin', 'sex', 'distance_Hp', 'Gesund']) for row in pcoa.index: axis1 = pcoa['axis1'][row] axis2 = pcoa['axis2'][row] axis3 = pcoa['axis3'][row] sample_id = pcoa['id'][row] sample = df[df.sample_name.eq(sample_id)] bmi = sample['bmi'].values[0] if bmi == 'missing: not provided' or bmi == 'not collected': bmi = np.nan calprotectin = sample['calprotectin'].values[0] if calprotectin == 'not applicable' or calprotectin == 'not collected': calprotectin = np.nan if sample['sex'].values[0] == 'male': sex = 1 else: sex = 0 distance = plane.distance_point([axis1, axis2, axis3]) if any(True for val in HC['sample_name'] if val == pcoa['id'][row]): dataframe = dataframe.append({'sample_name' : sample_id , 'bmi' : bmi, 'calprotectin' : calprotectin, 'sex' : sex, 'distance_Hp' : distance, 'Gesund' : 1} , ignore_index=True) else: dataframe = dataframe.append({'sample_name' : sample_id , 'bmi' : bmi, 'calprotectin' : calprotectin, 'sex' : sex, 'distance_Hp' : distance, 'Gesund' : 0} , ignore_index=True) dataframe.to_csv("data_for_random_forest.csv", index = False) dataframe # Whisker plots / boxplots import seaborn as sn def distance_arr(group, plane): ''' Creates a list of distances to the healthy plane (Hp) for each group ''' group_liste = [] for i in range(0, len(group)): axis1 = group[i][0] axis2 = group[i][1] axis3 = group[i][2] dist = plane.distance_point([axis1, axis2, axis3]) group_liste.append(dist) return np.array(group_liste) HC_arr = distance_arr(HC_matrix, plane) ICD_r_arr = distance_arr(ICD_r_matrix, plane) ICD_nr_arr = distance_arr(ICD_nr_matrix, plane) CCD_arr = distance_arr(CCD_matrix, plane) UC_arr = distance_arr(UC_matrix, plane) all_arr = [HC_arr, ICD_r_arr, ICD_nr_arr, CCD_arr, UC_arr] ax = sn.boxplot(data=all_arr, palette=["g","y","r","purple","b"]) ax.set(xticklabels=["HC","ICD_r", "ICD_nr", "CCD", "UC"]) ```
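For reference, the distance that `plane.distance_point` returns can also be written directly from plane coefficients `[a, b, c, d]` of `a*x + b*y + c*z + d = 0`. A minimal sketch with made-up numbers (note that `Plane.best_fit` uses an orthogonal SVD fit, so its plane differs slightly from the least-squares fit produced by `compute_coefficients`):

```
import numpy as np

def distance_to_plane(point, coef):
    # coef = [a, b, c, d] for the plane a*x + b*y + c*z + d = 0
    a, b, c, d = coef
    x0, y0, z0 = point
    return abs(a * x0 + b * y0 + c * z0 + d) / np.sqrt(a**2 + b**2 + c**2)

# Illustrative coefficients and PCoA point, just to show the formula in action
print(distance_to_plane([0.1, -0.05, 0.02], [0.3, -0.2, -1.0, 0.01]))
```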
# Support Vector Machine (SVM) ## Importing the libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd ``` ## Importing the dataset ``` dataset = pd.read_csv('Social_Network_Ads.csv') X = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values ``` ## Splitting the dataset into the Training set and Test set ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) print(X_train) print(y_train) print(X_test) print(y_test) ``` ## Feature Scaling ``` from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) print(X_train) print(X_test) ``` ## Training the SVM model on the Training set ``` from sklearn.svm import SVC classifier = SVC(kernel = 'linear', random_state = 0) # Only 2 closest points matters(support vectors) classifier.fit(X_train, y_train) ``` ## Predicting a new result ``` print(classifier.predict(sc.transform([[30,87000]]))) ``` ## Predicting the Test set results ``` y_pred = classifier.predict(X_test) print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1)) ``` ## Making the Confusion Matrix ``` from sklearn.metrics import confusion_matrix, accuracy_score cm = confusion_matrix(y_test, y_pred) print(cm) accuracy_score(y_test, y_pred) ``` ## Visualising the Training set results ``` from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_train), y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('SVM (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ``` ## Visualising the Test set results ``` from matplotlib.colors import ListedColormap X_set, y_set = sc.inverse_transform(X_test), y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('SVM (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ```
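As a quick follow-up, the fitted `SVC` exposes which training points ended up as support vectors (usually more than two). A minimal sketch, assuming the `classifier` and `sc` objects trained above:

```
# Inspect the support vectors of the linear SVC trained above
print(classifier.n_support_)              # number of support vectors per class
print(classifier.support_vectors_.shape)  # (n_support_total, n_features)

# Map a few of them back to the original Age / Estimated Salary units
print(sc.inverse_transform(classifier.support_vectors_[:5]))
```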
<h1>2b. Machine Learning using tf.estimator </h1> In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is. ``` !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.1 import tensorflow as tf import pandas as pd import numpy as np import shutil print(tf.__version__) ``` Read data created in the previous chapter. ``` # In CSV, label is the first column, after the features, followed by the key CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key'] FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1] LABEL = CSV_COLUMNS[0] df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS) df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS) df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS) ``` <h2> Train and eval input functions to read from Pandas Dataframe </h2> ``` # TODO: Create an appropriate input_fn to read the training data def make_train_input_fn(df, num_epochs): return tf.compat.v1.estimator.inputs.pandas_input_fn( #ADD CODE HERE ) # TODO: Create an appropriate input_fn to read the validation data def make_eval_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( #ADD CODE HERE ) ``` Our input function for predictions is the same except we don't provide a label ``` # TODO: Create an appropriate prediction_input_fn def make_prediction_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( #ADD CODE HERE ) ``` ### Create feature columns for estimator ``` # TODO: Create feature columns ``` <h3> Linear Regression with tf.Estimator framework </h3> ``` tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) OUTDIR = 'taxi_trained' shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time # TODO: Train a linear regression model model = #ADD CODE HERE model.train(#ADD CODE HERE ) ``` Evaluate on the validation data (we should defer using the test data to after we have selected a final model). ``` def print_rmse(model, df): metrics = model.evaluate(input_fn = make_eval_input_fn(df)) print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss']))) print_rmse(model, df_valid) ``` This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction. ``` # TODO: Predict from the estimator model we trained using test dataset ``` This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. <h3> Deep Neural Network regression </h3> ``` # TODO: Copy your LinearRegressor estimator and replace with DNNRegressor. Remember to add a list of hidden units i.e. [32, 8, 2] ``` We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about! But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. 
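For reference, a minimal sketch of one way the TODO cells above could be completed, assuming the `tf.compat.v1.estimator.inputs.pandas_input_fn` helper already named in the stubs and the `FEATURES`, `LABEL`, `OUTDIR` and `df_train` objects defined earlier (one possible solution, not the official one):

```
def make_train_input_fn(df, num_epochs):
    return tf.compat.v1.estimator.inputs.pandas_input_fn(
        x = df, y = df[LABEL], batch_size = 128,
        num_epochs = num_epochs, shuffle = True, queue_capacity = 1000)

def make_eval_input_fn(df):
    return tf.compat.v1.estimator.inputs.pandas_input_fn(
        x = df, y = df[LABEL], batch_size = 128, shuffle = False, queue_capacity = 1000)

def make_prediction_input_fn(df):
    return tf.compat.v1.estimator.inputs.pandas_input_fn(
        x = df, y = None, batch_size = 128, shuffle = False, queue_capacity = 1000)

# One numeric feature column per raw input feature
featcols = [tf.feature_column.numeric_column(k) for k in FEATURES]

# Linear baseline; a DNNRegressor(hidden_units = [32, 8, 2], ...) is built analogously
model = tf.estimator.LinearRegressor(feature_columns = featcols, model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
```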
<h2> Benchmark dataset </h2> Let's do this on the benchmark dataset. ``` from google.cloud import bigquery import numpy as np import pandas as pd def create_query(phase, EVERY_N): """ phase: 1 = train 2 = valid """ base_query = """ SELECT (tolls_amount + fare_amount) AS fare_amount, EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek, EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers, CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key FROM `nyc-tlc.yellow.trips` WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ if EVERY_N == None: if phase < 2: # Training query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query) else: # Validation query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase) else: query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase) return query query = create_query(2, 100000) df = bigquery.Client().query(query).to_dataframe() print_rmse(model, df) ``` RMSE on benchmark dataset is <b>9.61</b> (your results will vary because of random seeds). This is not only way more than our original benchmark of 6.00, but it doesn't even beat our distance-based rule's RMSE of 8.02. Fear not -- you have learned how to write a TensorFlow model, but not to do all the things that you will have to do to your ML model performant. We will do this in the next chapters. In this chapter though, we will get our TensorFlow model ready for these improvements. In a software sense, the rest of the labs in this chapter will be about refactoring the code so that we can improve it. ## Challenge Exercise Create a neural network that is capable of finding the volume of a cylinder given the radius of its base (r) and its height (h). Assume that the radius and height of the cylinder are both in the range 0.5 to 2.0. Simulate the necessary training dataset. <p> Hint (highlight to see): <p style='color:white'> The input features will be r and h and the label will be $\pi r^2 h$ Create random values for r and h and compute V. Your dataset will consist of r, h and V. Then, use a DNN regressor. Make sure to generate enough data. </p> Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
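For the challenge exercise above, a minimal sketch of how the training data could be simulated (variable and column names are illustrative; the label is the cylinder volume pi * r^2 * h):

```
import numpy as np
import pandas as pd

# Simulate radius and height uniformly in [0.5, 2.0] and compute the label
n = 100000
rng = np.random.RandomState(42)
r = rng.uniform(0.5, 2.0, n)
h = rng.uniform(0.5, 2.0, n)
cylinders = pd.DataFrame({'r': r, 'h': h, 'volume': np.pi * r**2 * h})
print(cylinders.head())
```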
``` import time import numpy as np import pandas as pd ``` # Jupyter Notebooks for Research ## 2. Multi Notebook Projects I find that some of my notebooks get out of hand. Weeks of work lead to 100+ cells. The notebook feels encumbered (browser, kernel or server load?). A kernel restart means 10+ mins to get back to where I was. It's unpleasant. There is probably sub-optimal code in there, but the real issue is that I've abused the single-notebook model and it's time to do better. ### Splitting your notebook into multiple notebooks Maybe you could think of this as the chapters of the analysis. Technically, I suppose "books" might seem more natural, but with a substantial dataset and/or compute-intensive analyses I reckon you'll be best splitting on what you'd consider chapters. Here's an example project layout: 1. Data preparation 2. Exploratory analysis 1. Facet 1 2. Facet 2 3. Model fitting The main technical aspect you need to consider is **exchanging data between the notebooks**. In this example you might have something like the following dependency tree: ``` 01_01_data_prep.ipynb │ └─── │ │ 02_01_facet_1.ipynb │ │ 02_02_facet_2.ipynb │ └─── └───────│ 03_01_model.ipynb ``` There are a variety of ways you can achieve this, and it's going to be pretty straightforward. I would advise some form of checksumming though. ### Generate data First we generate our raw dataset. ``` def long_running_data_generation(n=5): time.sleep(n) np.random.seed(seed=444) # vcs would be a pain without this return pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) df = long_running_data_generation() df.head() ``` ### Processing We have a few processing steps to do. ``` df = df.assign(A2=2*df.A+df.B) df.head() ``` ### Storage Let's store that for future use. ``` # cell imports?! maybe I won't use these anywhere else... import hashlib import datetime import gzip # safely hash a dataframe # TODO: Include reference to where I saw this row_hashes = pd.util.hash_pandas_object(df, index=True) df_hash = hashlib.sha256(row_hashes.values).hexdigest() print(df_hash) # write the file, don't clobber it if it's already there, this could be slow filename = f'data_prep_df_{df_hash[:7]}.csv.gz' now = datetime.datetime.now() try: with gzip.open(filename, 'x') as scores_file: scores_file.write('# Creation time: {}\n'.format(str(now)).encode()) scores_file.write('# Table hash: {}\n'.format(df_hash).encode()) scores_file.write(df.to_csv().encode()) print('Saved {}'.format(filename)) except FileExistsError: print('{} already exists.'.format(filename)) ``` Seems to be a problem verifying the CSV on the other side, maybe a pickle will work. ``` df.to_pickle(f'data_prep_df_{df_hash[:7]}.p.gz') ```
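A minimal sketch of the matching load-and-verify step in a downstream notebook (the file-name prefix here is illustrative, following the naming pattern used above):

```
import hashlib
import pandas as pd

# Load the pickle written by the data-prep notebook
df = pd.read_pickle('data_prep_df_abc1234.p.gz')

# Recompute the content hash and compare it to the prefix embedded in the file name
row_hashes = pd.util.hash_pandas_object(df, index=True)
df_hash = hashlib.sha256(row_hashes.values).hexdigest()
assert df_hash.startswith('abc1234'), 'upstream data_prep output has changed!'
```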
# Function Practice Exercises Problems are arranged in increasing difficulty: * Warmup - these can be solved using basic comparisons and methods * Level 1 - these may involve if/then conditional statements and simple methods * Level 2 - these may require iterating over sequences, usually with some kind of loop * Challenging - these will take some creativity to solve ## WARMUP SECTION: #### LESSER OF TWO EVENS: Write a function that returns the lesser of two given numbers *if* both numbers are even, but returns the greater if one or both numbers are odd lesser_of_two_evens(2,4) --> 2 lesser_of_two_evens(2,5) --> 5 ``` def lesser_of_two_evens(a,b): pass if a % 2 == 0 and b % 2 == 0 : return min(a,b) elif a % 2 != 0 or b % 2 != 0 : return max(a,b) # Check lesser_of_two_evens(2,4) # Check lesser_of_two_evens(2,5) ``` #### ANIMAL CRACKERS: Write a function takes a two-word string and returns True if both words begin with same letter animal_crackers('Levelheaded Llama') --> True animal_crackers('Crazy Kangaroo') --> False ``` def animal_crackers(text): pass list1 = text.split() if list1[0][0] == list1[1][0] : return True else : return False # Check animal_crackers('Levelheaded Llama') # Check animal_crackers('Crazy Kangaroo') ``` #### MAKES TWENTY: Given two integers, return True if the sum of the integers is 20 *or* if one of the integers is 20. If not, return False makes_twenty(20,10) --> True makes_twenty(12,8) --> True makes_twenty(2,3) --> False ``` def makes_twenty(n1,n2): pass if ((n1 + n2 == 20) or ((n1 == 20) or (n2 == 20))) : return True else : return False # Check makes_twenty(20,10) # Check makes_twenty(2,3) ``` # LEVEL 1 PROBLEMS #### OLD MACDONALD: Write a function that capitalizes the first and fourth letters of a name old_macdonald('macdonald') --> MacDonald Note: `'macdonald'.capitalize()` returns `'Macdonald'` ``` def old_macdonald(name): pass return name[0 : 3].capitalize() + name[3 : ].capitalize() # Check old_macdonald('macdonald') ``` #### MASTER YODA: Given a sentence, return a sentence with the words reversed master_yoda('I am home') --> 'home am I' master_yoda('We are ready') --> 'ready are We' Note: The .join() method may be useful here. The .join() method allows you to join together strings in a list with some connector string. For example, some uses of the .join() method: >>> "--".join(['a','b','c']) >>> 'a--b--c' This means if you had a list of words you wanted to turn back into a sentence, you could just join them with a single space string: >>> " ".join(['Hello','world']) >>> "Hello world" ``` def master_yoda(text): pass return ' '.join(text.split()[: : -1]) # Check master_yoda('I am home') # Check master_yoda('We are ready') ``` #### ALMOST THERE: Given an integer n, return True if n is within 10 of either 100 or 200 almost_there(90) --> True almost_there(104) --> True almost_there(150) --> False almost_there(209) --> True NOTE: `abs(num)` returns the absolute value of a number ``` def almost_there(n): pass return ((abs(100 - n) <= 10) or (abs(200 - n) <= 10)) # Check almost_there(104) # Check almost_there(150) # Check almost_there(209) ``` # LEVEL 2 PROBLEMS #### FIND 33: Given a list of ints, return True if the array contains a 3 next to a 3 somewhere. 
has_33([1, 3, 3]) --> True has_33([1, 3, 1, 3]) --> False has_33([3, 1, 3]) --> False ``` def has_33(nums): pass length = len(nums) for i in range(length - 1) : if nums[i] == 3 and nums[i + 1] == 3: return True return False # Check has_33([1, 3, 3]) # Check has_33([1, 3, 1, 3]) # Check has_33([3, 1, 3]) ``` #### PAPER DOLL: Given a string, return a string where for every character in the original there are three characters paper_doll('Hello') --> 'HHHeeellllllooo' paper_doll('Mississippi') --> 'MMMiiissssssiiippppppiii' ``` def paper_doll(text): pass newstring = '' for i in text : newstring = newstring + i + i + i return newstring # Check paper_doll('Hello') # Check paper_doll('Mississippi') ``` #### BLACKJACK: Given three integers between 1 and 11, if their sum is less than or equal to 21, return their sum. If their sum exceeds 21 *and* there's an eleven, reduce the total sum by 10. Finally, if the sum (even after adjustment) exceeds 21, return 'BUST' blackjack(5,6,7) --> 18 blackjack(9,9,9) --> 'BUST' blackjack(9,9,11) --> 19 ``` def blackjack(a,b,c): pass if sum((a,b,c)) <= 21: return sum((a,b,c)) elif sum((a,b,c)) <=31 and 11 in (a,b,c): return sum((a,b,c)) - 10 else: return 'BUST' # Check blackjack(5,6,7) # Check blackjack(9,9,9) # Check blackjack(9,9,11) ``` #### SUMMER OF '69: Return the sum of the numbers in the array, except ignore sections of numbers starting with a 6 and extending to the next 9 (every 6 will be followed by at least one 9). Return 0 for no numbers. summer_69([1, 3, 5]) --> 9 summer_69([4, 5, 6, 7, 8, 9]) --> 9 summer_69([2, 1, 6, 9, 11]) --> 14 ``` def summer_69(arr): pass total = 0 add = True for num in arr : if add : if num != 6 : total = total + num else : add = False elif num == 9 : add = True return total # Check summer_69([1, 3, 5]) # Check summer_69([4, 5, 6, 7, 8, 9]) # Check summer_69([2, 1, 6, 9, 11]) ``` # CHALLENGING PROBLEMS #### SPY GAME: Write a function that takes in a list of integers and returns True if it contains 007 in order spy_game([1,2,4,0,0,7,5]) --> True spy_game([1,0,2,4,0,5,7]) --> True spy_game([1,7,2,0,4,5,0]) --> False ``` def spy_game(nums): pass code = [0,0,7,'b'] for num in nums: if num == code[0]: code.pop(0) return len(code) == 1 # Check spy_game([1,2,4,0,0,7,5]) # Check spy_game([1,0,2,4,0,5,7]) # Check spy_game([1,7,2,0,4,5,0]) ``` #### COUNT PRIMES: Write a function that returns the *number* of prime numbers that exist up to and including a given number count_primes(100) --> 25 By convention, 0 and 1 are not prime. ``` def count_primes(num): pass s = 0 total = 0 for i in range(1,num + 1) : s = 0 for j in range(1,i + 1): if i % j == 0 : s = s + 1 if s == 2 : total = total + 1 return total # Check count_primes(100) ``` ### Just for fun: #### PRINT BIG: Write a function that takes in a single letter, and returns a 5x5 representation of that letter print_big('a') out: * * * ***** * * * * HINT: Consider making a dictionary of possible patterns, and mapping the alphabet to specific 5-line combinations of patterns. <br>For purposes of this exercise, it's ok if your dictionary stops at "E". ``` def print_big(letter): pass patterns = {1:' * ',2:' * * ',3:'* *',4:'*****',5:'**** ',6:' * ',7:' * ',8:'* * ',9:'* '} alphabet = {'A':[1,2,4,3,3],'B':[5,3,5,3,5],'C':[4,9,9,9,4],'D':[5,3,3,3,5],'E':[4,9,4,9,4]} for pattern in alphabet[letter.upper()]: print(patterns[pattern]) print_big('a') ``` ## Great Job!
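An optional aside on COUNT PRIMES: the nested trial-division loop above is roughly O(n²); a sieve-based sketch gives the same count much faster:

```
def count_primes_sieve(num):
    # Sieve of Eratosthenes: mark every multiple of each prime as composite
    if num < 2:
        return 0
    is_prime = [True] * (num + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(num ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, num + 1, i):
                is_prime[j] = False
    return sum(is_prime)

# Check
count_primes_sieve(100)
```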
## You can use this notebook template to check out any stock price from Yahoo Finance ``` import numpy as np import pandas as pd import pandas_datareader.data as pdr import matplotlib.pyplot as plt import seaborn as sns import datetime %matplotlib inline plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 plt.rcParams["figure.figsize"] = (20,4) from IPython.display import Image Image("https://www.apple.com/ac/structured-data/images/open_graph_logo.png", width=1000, height=1000) ``` ## What is stock? When you learn fractions, you probably learn the analogy of slicing a pizza. Imagine we slice up a company pie into slices: each slice is a "share" of the company's stock, and has a price that changes according to its value. Its value is determined mostly by supply and demand: how many people want its products and how it can supply the demand. In practice, we cannot slice up a company, but the basic meaning is similar. If the company does badly, all the slices do badly. If the company gains value and becomes bigger, the slices become bigger (as long as no additional shares are issued) and the price goes up. ``` Image("./data/pizza.png", width=300, height=300) ``` ## Import Apple trading data from Yahoo (Apple trading time series) Let's pull the data ``` apple = pdr.get_data_yahoo('AAPL', start=datetime.datetime(1990, 7, 16), end=datetime.date.today()) #plt.style.use('fivethirtyeight') plt.style.use('seaborn-darkgrid') apple.Volume.plot(figsize=(20,4), secondary_y=True, grid=True) ax1 = apple.Close.plot(color='blue', grid=True, label='Price') ax2 = apple.Volume.plot(color='green', grid=True, secondary_y=True, label='Trading volume') h1, l1 = ax1.get_legend_handles_labels() h2, l2 = ax2.get_legend_handles_labels() plt.title("Apple stock daily closing price and trading volume") plt.legend(h1+h2, l1+l2, loc=2) ``` ###### Note: '1e9' on the vertical axis means 1000000000, i.e. 10^9. This is scientific notation. ``` #beginning 5 days of the data apple.head() apple.shape #Change the format to make numbers display more nicely pd.options.display.float_format = '{:20,.1f}'.format ``` ## Most recent prices You should definitely check this out. ``` apple.tail(1) apple.to_pickle('apple_03152019.pkl') ``` ## Trading history #### Trading volume = how much interest people have in the stock, either to buy or to sell ``` fig = plt.figure(figsize = (20,5)) apple.Volume.plot() plt.title("Trading volume", fontdict={'fontsize': 20, 'fontweight': 'bold'}) ``` #### Trading price = how much did/do people believe Apple is worth ``` apple.plot(secondary_y=['Volume'], mark_right=False, figsize = (20,5)) ``` ## Irrational exuberance It is interesting to see when the maximum of trading price and volume took place.
``` apple.idxmax(axis=0, skipna=True) price_max = pd.to_datetime("2018-10-03") volume_max = pd.to_datetime("2000-09-29") ax1 = apple.loc['2000-06-30':].Close.plot(color='blue', grid=True, label='Price') ax2 = apple.loc['2000-06-30':].Volume.plot(color='green', grid=True, secondary_y=True, label='Trading volume') h1, l1 = ax1.get_legend_handles_labels() h2, l2 = ax2.get_legend_handles_labels() plt.title("largest trading volume date and highest price date", fontdict={'fontsize': 20, 'fontweight': 'bold'}) plt.legend(h1+h2, l1+l2, loc=2) ax1.axvline(price_max, color ='red', alpha=0.5, dashes=(5, 2, 1, 2), linewidth=3.0) ax2.axvline(volume_max, color ='red', alpha=0.7, dashes=(1,1), linewidth=4.0) ``` ### another way to plot the same thing ``` ax=apple[['High',"Volume"]].plot(secondary_y=['Volume'], mark_right=False, figsize=(20, 5)) ax.axvline(price_max, color ='grey', alpha=0.5, dashes=(5, 2, 1, 2), linewidth=4.0) ax.axvline(volume_max, color ='grey', alpha=0.5, dashes=(1,1), linewidth=4.0) ``` ## Apple's price peaked in October 2018, but the trading volume by far maxed out on September 29, 2000. Zooming in from June 30, 2000 to December 30, 2000. ### *Price (blue line) dropped in free fall while trading volume skyrocketed* ### What happened on September 29, 2000? From [CNN "Apple bruises tech sector" September 29, 2000: 4:33 p.m. ET](https://money.cnn.com/2000/09/29/markets/techwrap/) >Computer maker's warning weighs on hardware, chip stocks; Nasdaq tumbles >NEW YORK (CNNfn) - Computer hardware makers bore the brunt of a sell-off in the technology sector Friday on the heels of an earnings warning from Apple Computer. >Apple's (AAPL: Research, Estimates) market value was sliced in half Friday, its shares falling \$27.75 to end the session 51.9 percent lower at \$25.75. They were the most actively-traded on Nasdaq and were among the biggest percentage decliners as well. >The Nasdaq composite index, which is weighed heavily with technology names, ended the session 105.92 lower at 3,672.40, a 2.8 percent decline on the day. >After Thursday's closing bell, Apple warned that its fourth-quarter profit would fall well short of Wall Street forecasts. The company blamed lower-than-expected sales in September, with particular weakness in the education market. >It was the latest in a raft of warnings from high-tech companies in recent weeks, including semiconductor giant Intel, which last week warned that its revenue growth in the third quarter would amount to as little as half what some on the Street had expected. ``` ax1 = apple.loc['2000-06-30':'2000-12-30'].Close.plot(color='blue', grid=True, label='Price', figsize=(20,8), linewidth=3.0) ax2 = apple.loc['2000-06-30':'2000-12-30'].Volume.plot(color='green', grid=True, secondary_y=True, label='Trading volume',figsize=(20,8),linewidth=2.0) h1, l1 = ax1.get_legend_handles_labels() h2, l2 = ax2.get_legend_handles_labels() plt.title("Apple stock daily closing price and trading volume: 2000-06-30 to 2000-12-30", fontdict={'fontsize': 20, 'fontweight': 'bold'}) plt.legend(h1+h2, l1+l2, loc=2) ``` ### Stock split A stock is split into more shares. ##### Wait a minute: CNN back in 2000 quoted Apple's price as sliced more than half to \$25.75. So how come it looks like it was less than \$5 back then? From [Apple](https://investor.apple.com/investor-relations/faq/) > How many times has Apple's stock split? > Apple's stock has split four times since the company went public.
> The stock split on a 7-for-1 basis on June 9, 2014 and split on a 2-for-1 basis on February 28, 2005, June 21, 2000, and June 16, 1987.

## Average monthly trading data

```
apple.resample('M').mean().plot(secondary_y="Volume")
plt.title("average monthly trading data", fontdict={'fontsize': 20, 'fontweight': 'bold'})

price = apple.Close
price.resample('M').mean().plot()
plt.title("average monthly closing prices", fontdict={'fontsize': 20, 'fontweight': 'bold'})
```

## Moving average

A moving average takes the average over a longer window of time at each time point. The result is a much smoother line: the longer the window, the smoother the line, because more values are averaged at each point.

```
apple.drop('Adj Close', axis=1, inplace=True)
moving = apple.rolling(30, center=True).mean()
moving.columns = ['High_ma','Low_ma','Open_ma','Close_ma','Volume_ma']
moving.shape

ma = pd.concat((moving, apple), axis=1)
ma.loc['2010':,['Close_ma','Close']].plot(figsize=(20,8), linewidth=5, alpha=0.5)
plt.title("30-day centered moving average closing price and original closing price", fontdict={'fontsize': 20, 'fontweight': 'bold'})
```
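To make the centered rolling mean concrete, here is a tiny stand-alone sketch with made-up numbers (not part of the Apple data): with `rolling(3, center=True)`, the value at each position is the average of the point itself and its immediate neighbours, so a single spike gets smoothed out.

```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 10.0, 3.0, 2.0, 1.0])
print(s.rolling(3, center=True).mean())
# The spike at index 3 is smoothed: (3 + 10 + 3) / 3 = 5.33...
# The first and last positions are NaN because the centered window is incomplete there.
```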
# User Guide

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/imartinezl/cpab/HEAD)

## Introduction

The CPAB library allows you to create transformations $\phi(x,t)$ based on the integration of a continuous piecewise affine velocity field $v(x)$. Let us bring some clarity to this sentence by including some definitions:

- The transformation $\phi(x,t)$ is created by the integration of a velocity field. For that, we need to solve a differential equation of the form: $$\frac{\partial\phi(x,t)}{\partial t} = v(\phi(x))$$ The transformation $\phi(x,t)$ depends on two variables: $x$ (spatial dimension) and $t$ (integration time).
- The velocity field $v(x)$ can be a function of any form and shape, but in this library we focus on a specific type of function: continuous piecewise affine functions.
    - Continuous function: there are no discontinuities in the function domain
    - Piecewise function: a function that is defined by parts
    - Affine: a geometric transformation that consists of a linear transformation plus a translation

Thus, a continuous piecewise affine function is just a set of line segments joined together. In summary, this library integrates such functions (efficiently) to create diffeomorphic transformations $\phi(x,t)$ that are very useful for many tasks in machine learning.

## Loading libraries

First, we need to import the necessary Python libraries: the ``cpab`` library to compute the transformations, ``matplotlib`` for data visualization, ``numpy`` for array manipulation, and ``pytorch`` for automatic differentiation and gradient-descent optimization.

```
import numpy as np
import torch
import matplotlib.pyplot as plt
import cpab

plt.rcParams["figure.figsize"] = (10, 7)
```

## Transformation parameters

In order to create a transformation $\phi(x,t)$, several options need to be specified. CPAB transformations are built by integrating a continuous piecewise affine velocity field $v(x)$. Such a velocity field is defined on a regular grid, or tessellation. In this example, we will set the number of intervals to 5 (``tess_size=5``).

The ``backend`` option lets us choose between the ``numpy`` backend and the ``pytorch`` backend, the latter being the preferred option for optimization tasks. These computations can also be executed on a CPU or GPU ``device`` (for the ``pytorch`` backend).

Setting the ``zero_boundary`` condition to ``True`` constrains the velocity $v(x)$ at the tessellation boundary to 0, so $v(0)=0$ and $v(1)=0$.

The ``basis`` option lets us choose among {``svd``, ``sparse``, ``rref``, ``qr``}, and it selects the method used to obtain the null-space representation for continuous piecewise affine functions with ``tess_size`` intervals. In this case, we have used the QR decomposition to build the basis.

```
tess_size = 5
backend = "numpy" # ["pytorch", "numpy"]
device = "cpu" # ["cpu", "gpu"]
zero_boundary = True # [True, False]
basis = "qr" # ["svd", "sparse", "rref", "qr"]

T = cpab.Cpab(tess_size, backend, device, zero_boundary, basis)
```

## Transformation example

Then, we need to create the one-dimensional grid that is going to be transformed. For that, we use the ``uniform_meshgrid`` method, and we set the number of equally spaced points in the grid to 100. The velocity field $v(x)$ in CPAB transformations is parameterized by a vector $\theta$. In this example, taking into account the zero-velocity constraints at the boundary, only 4 dimensions or degrees of freedom are left to play with, and that is indeed the dimensionality of $\theta$, a vector of 4 values.
Finally, we can pass the ``grid`` and the ``theta`` parameters to the ``transform_grid`` method and compute the transformed grid ``grid_t`` $\phi(x)$.

```
outsize = 100
grid = T.uniform_meshgrid(outsize)

batch_size = 1
theta = T.identity(batch_size, epsilon=2)

grid_t = T.transform_grid(grid, theta)
```

We can use the methods ``visualize_velocity`` and ``visualize_deformgrid`` to plot the velocity field $v(x)$ and the transformed grid $\phi(x,t)$ respectively.

```
T.visualize_velocity(theta);
T.visualize_deformgrid(theta);
```

The dotted black line represents the identity transformation $\phi(x,t) = x$.

## Integration details

By default, the velocity field is integrated up to $t=1$. The following figure shows how the transformed grid changes along the integration time $t$.

```
grid = T.uniform_meshgrid(outsize)
theta = T.identity(batch_size, epsilon=2)

fig, ax = plt.subplots()
ax_zoom = fig.add_axes([0.2,0.58,0.2,0.25])
ax.axline((0,0),(1,1), color="blue", ls="dashed")
ax_zoom.axline((0,0),(1,1), color="blue", ls="dashed")

N = 11
for i in range(N):
    time = i / (N-1)
    grid_t = T.transform_grid(grid, theta, time=time)
    ax.plot(grid, grid_t.T, label=round(time, 2), color="black", alpha=time)
    ax_zoom.plot(grid, grid_t.T, label=round(time, 2), color="black", alpha=time)

ax.grid()
ax.set_xlabel("Original Time")
ax.set_ylabel("Transformed Time")

sm = plt.cm.ScalarMappable(cmap="gray_r")
cbar = plt.colorbar(sm, ax=ax)
cbar.ax.get_yaxis().labelpad = 15
cbar.ax.set_ylabel('Integration time', rotation=270)

ax_zoom.grid()
ax_zoom.set_xlim(.25, .35)
ax_zoom.set_ylim(.25, .35)
ax_zoom.set_xticklabels([])
ax_zoom.set_yticklabels([])
ax_zoom.xaxis.set_ticks_position('none')
ax_zoom.yaxis.set_ticks_position('none')

from matplotlib.patches import Rectangle
import matplotlib.lines as lines
r = Rectangle((.25,.25), 0.1, 0.1, edgecolor="red", facecolor="none", lw=1)
ax.add_patch(r)
line = lines.Line2D([0.085,0.25], [0.62, 0.35], color="red", lw=1)
ax.add_line(line)
line = lines.Line2D([0.435,0.35], [0.62, 0.35], color="red", lw=1)
ax.add_line(line);
```

## Scaling and squaring

The CPAB library allows you to use the scaling-and-squaring method to approximate the velocity field integration. This method uses the following property of diffeomorphic transformations to accelerate the computation of the integral: $$\phi(x,t+s) = \phi(x,t) \circ \phi(x,s)$$ Thus, computing the transformation $\phi$ at time $t+s$ is equivalent to composing the transformations at time $t$ and $s$.
In the scaling and squaring method, we impose $t=s$, so that we need to compute only one transformation and self-compose it: $$\phi(x,2t) = \phi(x,t) \circ \phi(x,t)$$ Repeating this squaring procedure $N$ times, we can efficiently approximate the integration: $$\phi(x,2^N t) = \phi(x,t) \; \underbrace{\circ \; \cdots \; \circ}_{2^N} \; \phi(x,t)$$

```
grid = T.uniform_meshgrid(outsize)
theta = T.identity(batch_size, epsilon=2)

fig, ax = plt.subplots()
ax_zoom = fig.add_axes([0.2,0.58,0.2,0.25])
ax.axline((0,0),(1,1), color="blue", ls="dashed")
ax_zoom.axline((0,0),(1,1), color="blue", ls="dashed")

N = 11
for i in range(N):
    alpha = i / (N-1)
    grid_t = T.transform_grid_ss(grid, theta / 2**N, N=i+1)
    # label each curve by its number of scaling-and-squaring iterations
    ax.plot(grid, grid_t.T, label=i+1, color="black", alpha=alpha)
    ax_zoom.plot(grid, grid_t.T, label=i+1, color="black", alpha=alpha)

ax.grid()
ax.set_xlabel("Original Time")
ax.set_ylabel("Transformed Time")

sm = plt.cm.ScalarMappable(cmap="gray_r")
cbar = plt.colorbar(sm, ax=ax)
cbar.ax.get_yaxis().labelpad = 15
cbar.ax.set_ylabel('Scaling-Squaring iteration', rotation=270)

ax_zoom.grid()
ax_zoom.set_xlim(.25, .35)
ax_zoom.set_ylim(.25, .35)
ax_zoom.set_xticklabels([])
ax_zoom.set_yticklabels([])
ax_zoom.xaxis.set_ticks_position('none')
ax_zoom.yaxis.set_ticks_position('none')

from matplotlib.patches import Rectangle
import matplotlib.lines as lines
r = Rectangle((.25,.25), 0.1, 0.1, edgecolor="red", facecolor="none", lw=1)
ax.add_patch(r)
line = lines.Line2D([0.085,0.25], [0.62, 0.35], color="red", lw=1)
ax.add_line(line)
line = lines.Line2D([0.435,0.35], [0.62, 0.35], color="red", lw=1)
ax.add_line(line);
```

## Data transformation

The time series data must have a shape (batch, length, channels). In this example, we have created a sinusoidal dataset of one batch, 50 points in length, and 2 channels.

Then, to transform time series data, we can use the ``transform_data`` method and pass as arguments:

- data: n-dimensional array of shape (batch, length, channels)
- theta: transformation parameters
- outsize: length of the transformed data, with final shape (batch, outsize, channels)

```
batch_size = 1
length = 50
channels = 2
outsize = 100

# Generation
m = np.ones((batch_size, channels))
x = np.linspace(m*0, m*2*np.pi, length, axis=1)
data = np.sin(x)

theta = T.identity(batch_size, epsilon=1)
data_t = T.transform_data(data, theta, outsize)
```

And we can visualize this data transformation with the ``visualize_deformdata`` method. The <span style="color:red">red</span> curves represent the original data and the <span style="color:blue">blue</span> ones are the transformed data after applying the transformation.

```
T.visualize_deformdata(data, theta);
```
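To build extra intuition for the composition property used in the scaling-and-squaring section above, here is a small self-contained numpy sketch that is independent of the ``cpab`` API. It assumes a hypothetical linear velocity field $v(x)=ax$, whose exact flow is $\phi(x,t)=x\,e^{at}$, and checks numerically that self-composing the time-$t$ map reproduces integration over a longer time.

```
import numpy as np

a = 0.7                       # slope of the assumed linear velocity field v(x) = a*x
t = 0.25                      # integration time of the "small" map
x = np.linspace(0.1, 1.0, 5)  # sample points

phi = lambda x, t: x * np.exp(a * t)   # exact flow of v(x) = a*x

# Self-composition: applying phi(., t) twice equals phi(., 2t)
composed = phi(phi(x, t), t)
direct = phi(x, 2 * t)
print(np.allclose(composed, direct))   # True

# Squaring N times: 2**N copies of the small map give the time 2**N * t map
N = 3
y = x.copy()
for _ in range(2 ** N):
    y = phi(y, t)
print(np.allclose(y, phi(x, (2 ** N) * t)))  # True
```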
# PyTorch Feature Extractor

- use <https://github.com/christiansafka/img2vec.git> to extract features with CUDA

```
%reload_ext autoreload
%autoreload 2
import os, sys
from os.path import join
import time
%matplotlib inline
import matplotlib.pyplot as plt
from pathlib import Path
from glob import glob

import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import numpy as np
import imutils
from PIL import Image
import cv2 as cv
if not cv.__version__ == '3.4.2':
    print('pip install opencv-python==3.4.2 or greater')

# append notebook imports folder
sys.path.append(str(Path(os.getcwd()).parent))
from utils import imx

# https://github.com/christiansafka/img2vec.git

# append notebook imports folder
sys.path.append(str(Path(os.getcwd()).parent.parent/'vframe/'))
from vframe.settings import vframe_cfg as cfg
from vframe.utils import im_utils, logger_utils
from vframe.settings import types

# get a test image
im_test_list = glob(join(cfg.DIR_TEST_IMAGES, 'classify', '*'))
fp_im_test = np.random.choice(im_test_list)

opt_cuda = True
device = torch.device("cuda" if opt_cuda else "cpu")

scaler = transforms.Resize((224, 224))
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
to_tensor = transforms.ToTensor()

#models.vgg19
opt_model = 'vgg19'
layer = 'default'

if opt_model == 'alexnet':
    model = models.alexnet(pretrained=True)
    if layer == 'default':
        layer = model.classifier[-2]
        layer_output_size = 4096
    else:
        layer = model.classifier[-layer]
elif opt_model == 'resnet18':
    model = models.resnet18(pretrained=True)
    # layer = model._modules.get('avgpool')
    layer = model.avgpool
    layer_output_size = 512
elif opt_model == 'vgg16':
    model = models.vgg16(pretrained=True)
    layer = model.classifier[3]
    layer_output_size = 4096  # classifier[3] of VGG-16 also outputs 4096 features
elif opt_model == 'vgg19':
    model = models.vgg19(pretrained=True)
    layer = model.classifier[3]
    layer_output_size = 4096

model = model.to(device)
model.parameters
model.classifier[3]

im_pil = Image.open(fp_im_test)
im_pt = normalize(to_tensor(scaler(im_pil))).unsqueeze(0).to(device)

#my_embedding = torch.zeros(1, layer_output_size, 1, 1)
embedding = torch.zeros(1, layer_output_size)

def copy_data(m, i, o):
    # copy the hooked layer's output into the preallocated embedding tensor
    embedding.copy_(o.data)

h = layer.register_forward_hook(copy_data)
h_x = model(im_pt)
h.remove()

vec = embedding.numpy()[0, :]
vec = vec.tolist()
print(len(vec))
# print(tensor)
print(vec[:10])
vec_norm = vec/np.linalg.norm(vec)
print(vec_norm[:10])
```
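As a small follow-up sketch (not part of the original notebook), embeddings extracted this way are typically compared with cosine similarity, e.g. for image retrieval. The helper below assumes two 4096-dimensional vectors produced the same way as ``vec`` above; the random stand-ins are only for illustration.

```
import numpy as np

def cosine_similarity(v1, v2):
    """Cosine similarity between two 1-D feature vectors."""
    v1 = np.asarray(v1, dtype=np.float32)
    v2 = np.asarray(v2, dtype=np.float32)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

# Random stand-ins for two extracted embeddings
vec_a = np.random.rand(4096)
vec_b = np.random.rand(4096)
print(cosine_similarity(vec_a, vec_a))  # 1.0 for identical vectors
print(cosine_similarity(vec_a, vec_b))  # somewhere below 1.0
```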
# Random Signals and LTI-Systems *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* ## Linear Mean In the following we aim at finding a relation between the linear mean $\mu_x[k]$ of the input signal $x[k]$ and the linear mean $\mu_y[k]$ of the output signal $y[k] = \mathcal{H} \{ x[k] \}$ of a linear time-invariant (LTI) system. ### Non-Stationary Input Signal Let's first impose no restrictions in terms of stationarity to the input signal. The [linear mean](../random_signals/ensemble_averages.ipynb#Linear-mean) of the output signal is then given as \begin{equation} \mu_y[k] = E\{ y[k] \} = E\{ x[k] * h[k] \} \end{equation} where $h[k]$ denotes the impulse response of the system. Since the convolution and the ensemble average are linear operations, and $h[k]$ is a deterministic signal, this can be rewritten as \begin{equation} \mu_y[k] = \mu_x[k] * h[k] \end{equation} The linear mean of the output signal $\mu_y[k]$ is given as the convolution of the linear mean of the input signal $\mu_x[k]$ with the impulse response $h[k]$ of the system. #### Example The linear mean $\mu_y[k]$ of the output of an LTI system with given impulse response $h[k]$ and non-stationary random input signal $x[k]$ is computed. The estimated linear means $\hat{\mu}_x[k]$ and $\hat{\mu}_y[k]$ of the input and output signals are plotted. ``` import numpy as np import matplotlib.pyplot as plt L = 32 # number of random samples N = 10000 # number of sample functions # generate input signal (white Gaussian noise) np.random.seed(2) x = np.random.normal(size=(N, L)) x[:, L//2] += 1 # generate output signal h = 2*np.fft.irfft([1, 1, 1, 0, 0, 0]) y = np.asarray([np.convolve(x[n, :], h, mode='full') for n in range(N)]) def estimate_plot_linear_mean(x): '''Estimate and plot linear mean.''' # estimate linear mean by ensemble average mu = 1/N * np.sum(x, 0) # plot linear mean plt.stem(mu, use_line_collection=True) plt.xlabel(r'$k$') plt.ylabel(r'$\hat{\mu}[k]$') plt.axis([0, x.shape[1], -1.2, 1.2]) plt.figure(figsize=(10, 3)) plt.title(r'Estimated linear mean $\hat{\mu}_x[k]$ of input signal') estimate_plot_linear_mean(x) plt.figure(figsize=(10, 3)) plt.title(r'Estimated linear mean $\hat{\mu}_y[k]$ of output signal') estimate_plot_linear_mean(y) ``` **Exercise** * Can you estimate the impulse response $h[k]$ of the system from above plots of $\hat{\mu}_x[k]$ and $\hat{\mu}_y[k]$? * You can check your results by plotting the impulse response $h[k]$, for instance with the command `plt.stem(h)`. Solution: Inspecting above plot, the linear mean of the input signal can be approximated as $\mu_x[k] = \delta[k]$. The linear mean of the output is then given as $\mu_y[k] = \delta[k] * h[k] = h[k]$. It follows that the impulse response of the LTI system can be estimated from the linear mean $\mu_y[k]$. ### Stationary Input Signal For a (wide-sense) stationary process, the linear mean of the input signal $\mu_x[k] = \mu_x$ does not depend on the time index $k$. For a (wide-sense) stationary input signal, also the output signal of the system is (wide-sense) stationary. 
Using the result for the non-stationary case above yields \begin{equation} \begin{split} \mu_y &= \mu_x * h[k] \\ &= \sum_{\kappa = -\infty}^{\infty}\mu_x[k-\kappa]h[\kappa] \\ &= \mu_x \cdot \sum_{\kappa = -\infty}^{\infty}h[\kappa] \\ &= \mu_x \cdot \sum_{\kappa = -\infty}^{\infty}h[\kappa]\cdot\mathrm{e}^{-\mathrm{j}\Omega\kappa} \hspace{5mm} \text{for}\,\,\Omega=0 \\ &= \mu_x \cdot H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \big\vert_{\Omega = 0} \end{split} \end{equation} where $H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ h[k] \}$ denotes the discrete time Fourier transformation (DTFT) of the impulse response. The linear mean of a (wide-sense) stationary input signal is weighted by the transmission characteristics for the constant (i.e. DC, $\Omega = 0$) component of the LTI system. This implies that for a system which just attenuates the input signal $y[k] = A \cdot x[k]$, e.g. an ideal amplifier, the linear mean at the output is given as $\mu_y = A \cdot \mu_x$. Furthermore, if the input signal is zero-mean $\mu_x = 0$, the output signal is also zero-mean $\mu_y = 0$. **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples.
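As a quick numerical sanity check of the stationary-case result $\mu_y = \mu_x \cdot H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \big\vert_{\Omega = 0}$, here is a small sketch that recomputes the impulse response used in the example above. The DC gain is simply $\sum_k h[k]$, and the sample mean of the filtered signal should be close to the input mean scaled by that gain.

```
import numpy as np

np.random.seed(1)
h = 2*np.fft.irfft([1, 1, 1, 0, 0, 0])   # same impulse response as in the example above
mu_x = 0.5                                # linear mean of the stationary input

# long realization of a wide-sense stationary input signal with mean mu_x
x = np.random.normal(loc=mu_x, scale=1.0, size=100000)
y = np.convolve(x, h, mode='valid')

dc_gain = np.sum(h)                       # H(e^{j 0}) = sum_k h[k]
print(np.mean(y), mu_x * dc_gain)         # both values should be close
```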
# Build a Pipeline > A tutorial on building pipelines to orchestrate your ML workflow A Kubeflow pipeline is a portable and scalable definition of a machine learning (ML) workflow. Each step in your ML workflow, such as preparing data or training a model, is an instance of a pipeline component. This document provides an overview of pipeline concepts and best practices, and instructions describing how to build an ML pipeline. ## Before you begin 1. Run the following command to install the Kubeflow Pipelines SDK. If you run this command in a Jupyter notebook, restart the kernel after installing the SDK. ``` !pip install kfp --upgrade ``` 2. Import the `kfp` and `kfp.components` packages. ``` import kfp import kfp.components as comp ``` 3. Create an instance of the [`kfp.Client` class][kfp-client]. To find your Kubeflow Pipelines cluster's hostname and URL scheme, open the Kubeflow Pipelines user interface in your browser. The URL of the Kubeflow Pipelines user interface is something like `https://my-cluster.my-organization.com/pipelines`. In this case, the host name and URL scheme are `https://my-cluster.my-organization.com`. [kfp-client]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client ``` # If you run this command on a Jupyter notebook running on Kubeflow, you can # exclude the host parameter. # client = kfp.Client() client = kfp.Client(host='<your-kubeflow-pipelines-host-name>') ``` ## Understanding pipelines A Kubeflow pipeline is a portable and scalable definition of an ML workflow, based on containers. A pipeline is composed of a set of input parameters and a list of the steps in this workflow. Each step in a pipeline is an instance of a component, which is represented as an instance of [`ContainerOp`][container-op]. You can use pipelines to: * Orchestrate repeatable ML workflows. * Accelerate experimentation by running a workflow with different sets of hyperparameters. ### Understanding pipeline components A pipeline component is a containerized application that performs one step in a pipeline's workflow. Pipeline components are defined in [component specifications][component-spec], which define the following: * The component's interface, its inputs and outputs. * The component's implementation, the container image and the command to execute. * The component's metadata, such as the name and description of the component. You can build components by [defining a component specification for a containerized application][component-dev], or you can [use the Kubeflow Pipelines SDK to generate a component specification for a Python function][python-function-component]. You can also [reuse prebuilt components in your pipeline][prebuilt-components]. ### Understanding the pipeline graph Each step in your pipeline's workflow is an instance of a component. When you define your pipeline, you specify the source of each step's inputs. Step inputs can be set from the pipeline's input arguments, constants, or step inputs can depend on the outputs of other steps in this pipeline. Kubeflow Pipelines uses these dependencies to define your pipeline's workflow as a graph. For example, consider a pipeline with the following steps: ingest data, generate statistics, preprocess data, and train a model. The following describes the data dependencies between each step. * **Ingest data**: This step loads data from an external source which is specified using a pipeline argument, and it outputs a dataset. 
Since this step does not depend on the output of any other steps, this step can run first. * **Generate statistics**: This step uses the ingested dataset to generate and output a set of statistics. Since this step depends on the dataset produced by the ingest data step, it must run after the ingest data step. * **Preprocess data**: This step preprocesses the ingested dataset and transforms the data into a preprocessed dataset. Since this step depends on the dataset produced by the ingest data step, it must run after the ingest data step. * **Train a model**: This step trains a model using the preprocessed dataset, the generated statistics, and pipeline parameters, such as the learning rate. Since this step depends on the preprocessed data and the generated statistics, it must run after both the preprocess data and generate statistics steps are complete. Since the generate statistics and preprocess data steps both depend on the ingested data, the generate statistics and preprocess data steps can run in parallel. All other steps are executed once their data dependencies are available. ## Designing your pipeline When designing your pipeline, think about how to split your ML workflow into pipeline components. The process of splitting an ML workflow into pipeline components is similar to the process of splitting a monolithic script into testable functions. The following rules can help you define the components that you need to build your pipeline. * Components should have a single responsibility. Having a single responsibility makes it easier to test and reuse a component. For example, if you have a component that loads data you can reuse that for similar tasks that load data. If you have a component that loads and transforms a dataset, the component can be less useful since you can use it only when you need to load and transform that dataset. * Reuse components when possible. Kubeflow Pipelines provides [components for common pipeline tasks and for access to cloud services][prebuilt-components]. * Consider what you need to know to debug your pipeline and research the lineage of the models that your pipeline produces. Kubeflow Pipelines stores the inputs and outputs of each pipeline step. By interrogating the artifacts produced by a pipeline run, you can better understand the variations in model quality between runs or track down bugs in your workflow. In general, you should design your components with composability in mind. Pipelines are composed of component instances, also called steps. Steps can define their inputs as depending on the output of another step. The dependencies between steps define the pipeline workflow graph. ### Building pipeline components Kubeflow pipeline components are containerized applications that perform a step in your ML workflow. Here are the ways that you can define pipeline components: * If you have a containerized application that you want to use as a pipeline component, create a component specification to define this container image as a pipeline component. This option provides the flexibility to include code written in any language in your pipeline, so long as you can package the application as a container image. Learn more about [building pipeline components][component-dev]. * If your component code can be expressed as a Python function, [evaluate if your component can be built as a Python function-based component][python-function-component]. 
The Kubeflow Pipelines SDK makes it easier to build lightweight Python function-based components by saving you the effort of creating a component specification. Whenever possible, [reuse prebuilt components][prebuilt-components] to save yourself the effort of building custom components. The example in this guide demonstrates how to build a pipeline that uses a Python function-based component and reuses a prebuilt component. ### Understanding how data is passed between components When Kubeflow Pipelines runs a component, a container image is started in a Kubernetes Pod and your componentโ€™s inputs are passed in as command-line arguments. When your component has finished, the component's outputs are returned as files. In your component's specification, you define the components inputs and outputs and how the inputs and output paths are passed to your program as command-line arguments. You can pass small inputs, such as short strings or numbers, to your component by value. Large inputs, such as datasets, must be passed to your component as file paths. Outputs are written to the paths that Kubeflow Pipelines provides. Python function-based components make it easier to build pipeline components by building the component specification for you. Python function-based components also handle the complexity of passing inputs into your component and passing your functionโ€™s outputs back to your pipeline. Learn more about how [Python function-based components handle inputs and outputs][python-function-component-data-passing]. ## Getting started building a pipeline The following sections demonstrate how to get started building a Kubeflow pipeline by walking through the process of converting a Python script into a pipeline. ### Design your pipeline The following steps walk through some of the design decisions you may face when designing a pipeline. 1. Evaluate the process. In the following example, a Python function downloads a zipped tar file (`.tar.gz`) that contains several CSV files, from a public website. The function extracts the CSV files and then merges them into a single file. [container-op]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.ContainerOp [component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/ [python-function-component]: https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/ [component-dev]: https://www.kubeflow.org/docs/components/pipelines/sdk/component-development/ [python-function-component-data-passing]: https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/#understanding-how-data-is-passed-between-components [prebuilt-components]: https://www.kubeflow.org/docs/examples/shared-resources/ ``` import glob import pandas as pd import tarfile import urllib.request def download_and_merge_csv(url: str, output_csv: str): with urllib.request.urlopen(url) as res: tarfile.open(fileobj=res, mode="r|gz").extractall('data') df = pd.concat( [pd.read_csv(csv_file, header=None) for csv_file in glob.glob('data/*.csv')]) df.to_csv(output_csv, index=False, header=False) ``` 2. Run the following Python command to test the function. ``` download_and_merge_csv( url='https://storage.googleapis.com/ml-pipeline-playground/iris-csv-files.tar.gz', output_csv='merged_data.csv') ``` 3. Run the following to print the first few rows of the merged CSV file. ``` !head merged_data.csv ``` 4. Design your pipeline. For example, consider the following pipeline designs. 
* Implement the pipeline using a single step. In this case, the pipeline contains one component that works similarly to the example function. This is a straightforward function, and implementing a single-step pipeline is a reasonable approach in this case. The down side of this approach is that the zipped tar file would not be an artifact of your pipeline runs. Not having this artifact available could make it harder to debug this component in production. * Implement this as a two-step pipeline. The first step downloads a file from a website. The second step extracts the CSV files from a zipped tar file and merges them into a single file. This approach has a few benefits: * You can reuse the [Web Download component][web-download-component] to implement the first step. * Each step has a single responsibility, which makes the components easier to reuse. * The zipped tar file is an artifact of the first pipeline step. This means that you can examine this artifact when debugging pipelines that use this component. This example implements a two-step pipeline. ### Build your pipeline components 1. Build your pipeline components. This example modifies the initial script to extract the contents of a zipped tar file, merge the CSV files that were contained in the zipped tar file, and return the merged CSV file. This example builds a Python function-based component. You can also package your component's code as a Docker container image and define the component using a ComponentSpec. In this case, the following modifications were required to the original function. * The file download logic was removed. The path to the zipped tar file is passed as an argument to this function. * The import statements were moved inside of the function. Python function-based components require standalone Python functions. This means that any required import statements must be defined within the function, and any helper functions must be defined within the function. Learn more about [building Python function-based components][python-function-components]. * The function's arguments are decorated with the [`kfp.components.InputPath`][input-path] and the [`kfp.components.OutputPath`][output-path] annotations. These annotations let Kubeflow Pipelines know to provide the path to the zipped tar file and to create a path where your function stores the merged CSV file. The following example shows the updated `merge_csv` function. [web-download-component]: https://github.com/kubeflow/pipelines/blob/master/components/web/Download/component.yaml [python-function-components]: https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/ [input-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html?highlight=inputpath#kfp.components.InputPath [output-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html?highlight=outputpath#kfp.components.OutputPath ``` def merge_csv(file_path: comp.InputPath('Tarball'), output_csv: comp.OutputPath('CSV')): import glob import pandas as pd import tarfile tarfile.open(name=file_path, mode="r|gz").extractall('data') df = pd.concat( [pd.read_csv(csv_file, header=None) for csv_file in glob.glob('data/*.csv')]) df.to_csv(output_csv, index=False, header=False) ``` 2. Use [`kfp.components.create_component_from_func`][create_component_from_func] to return a factory function that you can use to create pipeline steps. 
This example also specifies the base container image to run this function in, the path to save the component specification to, and a list of PyPI packages that need to be installed in the container at runtime. [create_component_from_func]: (https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.create_component_from_func [container-op]: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.ContainerOp ``` create_step_merge_csv = kfp.components.create_component_from_func( func=merge_csv, output_component_file='component.yaml', # This is optional. It saves the component spec for future use. base_image='python:3.7', packages_to_install=['pandas==1.1.4']) ``` ### Build your pipeline 1. Use [`kfp.components.load_component_from_url`][load_component_from_url] to load the component specification YAML for any components that you are reusing in this pipeline. [load_component_from_url]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html?highlight=load_component_from_url#kfp.components.load_component_from_url ``` web_downloader_op = kfp.components.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/web/Download/component.yaml') ``` 2. Define your pipeline as a Python function. Your pipeline function's arguments define your pipeline's parameters. Use pipeline parameters to experiment with different hyperparameters, such as the learning rate used to train a model, or pass run-level inputs, such as the path to an input file, into a pipeline run. Use the factory functions created by `kfp.components.create_component_from_func` and `kfp.components.load_component_from_url` to create your pipeline's tasks. The inputs to the component factory functions can be pipeline parameters, the outputs of other tasks, or a constant value. In this case, the `web_downloader_task` task uses the `url` pipeline parameter, and the `merge_csv_task` uses the `data` output of the `web_downloader_task`. ``` # Define a pipeline and create a task from a component: def my_pipeline(url): web_downloader_task = web_downloader_op(url=url) merge_csv_task = create_step_merge_csv(file=web_downloader_task.outputs['data']) # The outputs of the merge_csv_task can be referenced using the # merge_csv_task.outputs dictionary: merge_csv_task.outputs['output_csv'] ``` ### Compile and run your pipeline After defining the pipeline in Python as described in the preceding section, use the following instructions to compile the pipeline and submit it to the Kubeflow Pipelines service. 1. Run the following to compile your pipeline and save it as `pipeline.yaml`. ``` kfp.compiler.Compiler().compile( pipeline_func=my_pipeline, package_path='pipeline.yaml') ``` 2. Run the following to submit the compiled workflow specification (`pipeline.yaml`) using the Kubeflow Pipelines SDK. You can also use the Kubeflow Pipelines user interface to upload and run your `pipeline.yaml`. See the guide to [getting started with the UI][quickstart]. [quickstart]: https://www.kubeflow.org/docs/components/pipelines/pipelines-quickstart ``` client.create_run_from_pipeline_package( pipeline_file='pipeline.yaml', arguments={ 'url': 'https://storage.googleapis.com/ml-pipeline-playground/iris-csv-files.tar.gz' }) ``` ## Next steps * Learn about advanced pipeline features, such as [authoring recursive components][recursion] and [using conditional execution in a pipeline][conditional]. 
* Learn how to [manipulate Kubernetes resources in a pipeline][k8s-resources] (Experimental). [conditional]: https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/DSL%20-%20Control%20structures/DSL%20-%20Control%20structures.py [recursion]: https://www.kubeflow.org/docs/components/pipelines/sdk/dsl-recursion/ [k8s-resources]: https://www.kubeflow.org/docs/components/pipelines/sdk/manipulate-resources/
# PyTorch: Transfer Learning tutorial

http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

```
import time
import os
import copy

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
from tqdm import tqdm

plt.ion()
```

The `transforms.Compose` class "composes several transforms together". So we're creating randomly resized crops, horizontally flipping and normalizing the images, using what I assume are the ImageNet channel means and standard deviations.

```
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

data_path = os.path.join('data', 'hymenoptera_data')
```

The `ImageFolder` object represents a generic image folder, where images are arranged like this:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
```

```
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

```
train_dataset = datasets.ImageFolder(os.path.join(data_path, 'train'), transform=train_transform)
val_dataset = datasets.ImageFolder(os.path.join(data_path, 'val'), transform=val_transform)
```

The `DataLoader` class combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.

```
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=4)
val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=4, shuffle=True, num_workers=4)

train_size = len(train_dataset)
val_size = len(val_dataset)
class_names = train_dataset.classes

use_gpu = torch.cuda.is_available()
```

Visualising a few images

```
def im_show(input_img, title=None):
    """Img show for Tensor."""
    # Put the channels last. Presumably DataLoader is channels first.
    input_img = input_img.numpy().transpose((1, 2, 0))
    # Multiply by the std and add the mean (revert preprocessing)
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    input_img = std * input_img + mean
    # Ensure all values are between 0 and 1.
    input_img = np.clip(input_img, 0, 1)
    plt.imshow(input_img)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)

inputs, classes = next(iter(train_dataloader))
out = torchvision.utils.make_grid(inputs)
im_show(out, title=[class_names[i] for i in classes])

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    tic = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print(f'Epoch {epoch} / {num_epochs - 1}')
        print('-' * 10)

        # Training phase
        scheduler.step()
        model.train(True)

        running_loss = 0.0
        running_corrects = 0

        for (inputs, labels) in tqdm(train_dataloader):
            inputs, labels = Variable(inputs), Variable(labels)
            optimizer.zero_grad()

            # Run forward prop
            outputs = model(inputs)
            _, preds = torch.max(outputs.data, 1)
            loss = criterion(outputs, labels)

            loss.backward()
            optimizer.step()

            running_loss += loss.data[0] * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

        epoch_loss = running_loss / train_size
        epoch_acc = running_corrects / train_size

        print(f'Training loss: {epoch_loss} Accuracy: {epoch_acc}')

        # Validation phase
        model.train(False)

        running_loss = 0.0
        running_corrects = 0

        for inputs, labels in val_dataloader:
            inputs, labels = Variable(inputs), Variable(labels)
            optimizer.zero_grad()

            # Run forward prop
            outputs = model(inputs)
            _, preds = torch.max(outputs.data, 1)
            loss = criterion(outputs, labels)

            running_loss += loss.data[0] * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

        epoch_loss = running_loss / val_size
        epoch_acc = running_corrects / val_size

        print(f'Validation loss: {epoch_loss} Accuracy: {epoch_acc}')

        if epoch_acc > best_acc:
            best_acc = epoch_acc
            best_model_wts = copy.deepcopy(model.state_dict())

        print()

    toc = time.time() - tic
    print('Training complete in {:.0f}m {:.0f}s'.format(toc // 60, toc % 60))
    print('Best validation accuracy: {:4f}'.format(best_acc))

    model.load_state_dict(best_model_wts)
    return model
```

Load a pretrained model and reset the final fully-connected layer.

```
model_conv = torchvision.models.resnet18(pretrained=True)

# Freeze all layers
for param in model_conv.parameters():
    param.requires_grad = False

# Replace Dense/FC prediction layer
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(in_features=num_ftrs, out_features=2)

criterion = nn.CrossEntropyLoss()
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.0001, momentum=0.9)

# Schedule learning rate to decay by a factor of 0.1 every 7 epochs.
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

model_ft = train_model(
    model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25)
```
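As a follow-up sketch (not part of the original tutorial), we can reuse `im_show` to eyeball a few validation predictions from the fine-tuned model. This assumes the training loop above has completed, and it sticks to the same old-style `Variable` API used in the rest of the notebook.

```
def visualize_predictions(model, num_images=4):
    """Show one validation batch with the model's predicted class names as the title."""
    model.train(False)
    inputs, labels = next(iter(val_dataloader))
    outputs = model(Variable(inputs))
    _, preds = torch.max(outputs.data, 1)
    out = torchvision.utils.make_grid(inputs[:num_images])
    im_show(out, title=[class_names[int(p)] for p in preds[:num_images]])

visualize_predictions(model_ft)
```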
github_jupyter
import time import os import copy import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler from torch.autograd import Variable import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt from tqdm import tqdm plt.ion() train_transform = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) val_transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) data_path = os.path.join('data', 'hymenoptera_data') root/dog/xxx.png root/dog/xxy.png root/dog/xxz.png root/cat/123.png root/cat/nsdf3.png root/cat/asd932_.png train_dataset = datasets.ImageFolder(os.path.join(data_path, 'train'), transform=train_transform) val_dataset = datasets.ImageFolder(os.path.join(data_path, 'val'), transform=val_transform) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=4) val_dataloader = torch.utils.data.DataLoader(val_dataset, batch_size=4, shuffle=True, num_workers=4) train_size = len(train_dataset) val_size = len(val_dataset) class_names = train_dataset.classes use_gpu = torch.cuda.is_available() def im_show(input_img, title=None): """Img show for Tensor.""" # Put the channels last. Presumably DataLoader is channels first. input_img = input_img.numpy().transpose((1, 2, 0)) # Multiple by the std and add the mean (revert preprocessing) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) input_img = std * input_img + mean # Ensure all values are between 0 and 1. 
input_img = np.clip(input_img, 0, 1) plt.imshow(input_img) if title is not None: plt.title(title) plt.pause(0.001) inputs, classes = next(iter(train_dataloader)) out = torchvision.utils.make_grid(inputs) im_show(out, title=[class_names[i] for i in classes]) def train_model(model, criterion, optimizer, scheduler, num_epochs=25): tic = time.time() best_model_weights = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print(f'Epoch {epoch} / {num_epochs - 1}') print('-' * 10) # Training phase scheduler.step() model.train(True) running_loss = 0.0 running_corrects = 0 for (inputs, labels) in tqdm(train_dataloader): inputs, labels = Variable(inputs), Variable(labels) optimizer.zero_grad() # Run forward prop outputs = model(inputs) _, preds = torch.max(outputs.data, 1) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.data[0] * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / train_size epoch_acc = running_corrects / train_size print(f'Training loss: {epoch_loss} Accuracy: {epoch_acc}') # Validation phase model.train(False) running_loss = 0.0 running_corrects = 0 for inputs, labels in val_dataloader: inputs, labels = Variable(inputs), Variable(labels) optimizer.zero_grad() # Run forward prop outputs = model(inputs) _, preds = torch.max(outputs.data, 1) loss = criterion(outputs, labels) running_loss += loss.data[0] * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / val_size epoch_acc = running_corrects / val_size print(f'Validation loss: {epoch_loss} Accuracy: {epoch_acc}') if epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() toc = time.time() - tic print('Training complete in {:.0f}m {:.0f}s'.format(toc // 60, toc % 60)) print('Best validation accuracy: {:4f}'.format(best_acc)) model.load_state_dict(best_model_wts) return model model_conv = torchvision.models.resnet18(pretrained=True) # Freeze all layers for param in model_conv.parameters(): param.requires_grad = False # Replace Dense/FC prediction layer num_ftrs = model_conv.fc.in_features model_conv.fc = nn.Linear(in_features=num_ftrs, out_features=2) criterion = nn.CrossEntropyLoss() optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.0001, momentum=0.9) # Schedule learning rate to decay by a factor of 0.1 every 7 epochs. exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1) model_ft = train_model( model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25)
# Recognizing hand-written digits using a neural network

**Outline**
* [Intro](#intro)
* [Basic Concepts to understand deep learning](#concepts)
* [1 layer neural network using Keras](#1nn_keras)
    * [Model Building](#keras_model)
    * [Make Prediction](#keras_predict)
* [1 layer neural network using Sklearn](#1nn_sklearn)
    * [Parameter Tuning](#tune)
    * [Make Prediction](#sklearn_predict)
* [Side Note for activation function](#activation)
* [Reference](#refer)

---

```
%matplotlib inline
# basic setup
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
np.random.seed(0)

# model related
import keras.utils.np_utils as np_utils
import keras.models as models, keras.layers.core as layers, keras.optimizers
```

## <a id='intro'>Intro</a>

For almost every beginner of deep learning, the first "Hello world" example is recognizing hand-written digits using the MNIST database. It includes handwritten digits from 0 to 9 and has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. In this notebook, we will use this data set to get a sense of building a deep learning model using Keras.

## <a id='concepts'>Basic Concepts to understand deep learning</a>

### **Important Concepts**

Good weights are the key to good predictions. There are three concepts to understand how a model gets good weights:

* Loss Function
* Gradient Descent
* Backpropagation

> **Loss Function** The loss function of a logistic regression is based on the log likelihood: the larger the likelihood, the better the model. Scikit-learn has a convention in its metrics that lower scores are better, so what it reports as log loss is -1 * log likelihood / num_observations. The division by the number of observations is used so the range of values doesn't systematically vary with the dataset size. The function's inputs are the actual and predicted values. If the predicted values are close to the actual values, the loss function outputs a small value; if the predicted values are far off, it returns a high value.

> **Gradient Descent** The inputs of the loss function are the actual and predicted values, and the predicted value is the output of the model under some weights. Therefore, the value of the loss function changes as we change the weights, i.e., the parameters, of the model. The goal is to find the weights that give the best, i.e., the lowest, value of the loss function. Gradient descent is a method that helps us find those weights. For a more detailed illustration of gradient descent as well as stochastic gradient descent, please see this [post](https://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/OptimizationMethod/gradientDescent.ipynb). Generally, we don't use all of our data to calculate each step in gradient descent, since doing so would require a lot of computation and would be slow. Instead, we use a batch of data points or just a single data point; that is basically what mini-batch gradient descent and stochastic gradient descent are. One pass through the data is called an epoch. We incrementally improve the weights over multiple epochs, so each image is used to improve the weights more than once. The number of epochs and the batch size are set as parameters of the fit command, as done later in this notebook.
The size of the weight changes is determined by something called the learning rate. A low learning rate means that our model may take a long time to train before it gets accurate. A high learning rate means that our model may take huge steps around the loss surface and may jump over the best weights.

> **Backward Propagation** How do we find which way to change the weights to improve the loss function? Basically, how do we see which way goes downhill? That is what backward propagation does. More specifically, suppose we have only one parameter with a simple bowl-shaped loss curve. It is pretty straightforward to know which way to go downhill: we simply take the partial derivative with respect to that parameter. However, a neural network is a composition of many activation functions (for example, the logistic function or ReLU), so we can think of it as a function of many other functions. If we want to know whether the partial derivative at a particular data point is positive or negative, we need the partial derivative of the outermost function; then, by the chain rule, we need the partial derivative of the next inner function, and so on. That is the idea of backward propagation. In short, backward propagation is the process we use to find out which way to change the weights at each step of gradient descent.

<img src="pic/loss.png" style="width: 600px;height: 450px;"/>

### Number of parameters for a single layer neural network

Suppose,
* There are k features in the model
* The number of nodes in the hidden layer is M
* The number of outputs is 1 (a regression neural network)

For each hidden node we need $k+1$ parameters, and there are M nodes, so the input-to-hidden layer needs $M \times (k+1)$ parameters. The hidden-to-output layer needs $M+1$ parameters. In total, we need $M \times (k+1) + (M + 1)$ parameters to fit this neural network model.

<img src="pic/1nn.png" style="width: 600px;height: 450px;"/>

### Steps to fit a neural network model

1. Standardize the predictors. Due to the high flexibility of the model, it is very easy to overfit the data. To counter this, we add a regularization term to the cost function. We want to standardize our features before fitting the model so that all predictors receive the same shrinkage; if we don't scale them, predictors with larger ranges will be shrunk more, i.e., the amount of shrinkage will differ across predictors.
2. Standardize the response. For a prediction problem the model is learning an approximation of the function between the input and output data, commonly through gradient descent, which relies on calculating the error between predictions and true values for each instance. Gradient descent won't work if the model output is constrained to a range of values by the activation function (such as a sigmoid with range [0, 1]) which the real output values fall outside of.
3. Choose
    * hidden layers
    * nodes in each hidden layer
    * output activation function (usually linear or logistic)
    * other options and tuning parameters (e.g. $\lambda$)
4. Software estimates parameters to minimize (nonlinear LS with shrinkage): $$\sum_{i=1}^{n}\Big[y_i - g(x_i, \theta) \Big]^2 + \lambda \Big(\sum_{m=1}^{M}\sum_{j=0}^{k}\alpha_{m,j}^2 +\sum_{m=0}^{M}\beta_m^2 \Big)$$
5. For making predictions: when you reach the last layer (the output neuron(s)), what you get is again an activation between -1 and 1.
You have to convert this back into a value on the original scale of the response, whether that value will be used as a prediction on a test set or to calculate error during training. However you do this, you just have to be consistent and use the same de-normalization procedure in training and testing.

![](pic/1nn.jpg)

## <a id='1nn_keras'>1 layer neural network using Keras</a>

### **Read Data**

```
# load dataset from keras datasets
(X_train, Y_train), (X_test, Y_test) = keras.datasets.mnist.load_data()

# reshape each 28x28 image into a 784-dim vector
# -1 means that it will figure out the length of the other dimension automatically
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)

# constrain the values in the array within [0,1]
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# one-hot encoding of the response variable
Y_train = np_utils.to_categorical(Y_train, 10)
Y_test = np_utils.to_categorical(Y_test, 10)
```

### <a id='keras_model'>Build 1 layer neural network model</a>

```
# create a nn model
model = models.Sequential()
# first layer, input is 784 dim with 128 nodes
model.add(layers.Dense(units=128, input_dim=784))
# the activation function is sigmoid
model.add(layers.Activation('sigmoid'))
# the output layer has 10 categories
model.add(layers.Dense(units=10))
# output prob calculation function
model.add(layers.Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='sgd') #keras.optimizers.RMSprop()

model.fit(X_train, Y_train, batch_size=128, epochs=20, verbose=2)
```

### <a id='keras_predict'>Make Prediction</a>

```
predictions = model.predict(X_test)
```

### **Evaluate using overall test accuracy**

```
def test_accuracy_1nn(predictions, X_test, Y_test):
    ncorrect = 0
    i=0
    for (ex, cls) in zip(X_test, Y_test):
        if np.argmax(cls) == np.argmax(predictions[i]):
            ncorrect += 1
        i = i+1
    print("Test accuracy: %d/%d = %0.2f %%" % (ncorrect, len(Y_test), 100.0*ncorrect/len(Y_test)))

test_accuracy_1nn(predictions, X_test, Y_test)
```

As shown above, the overall accuracy using a 1 layer neural network is 97.72%, which is quite high.
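A small sanity check that is not in the original notebook: the parameter-count formula from the concepts section, generalized to 10 outputs, can be verified against the model defined above (assuming the `model` object from the Keras cells).

```
# Added sketch: expected parameters = hidden layer + output layer
#             = 128 * (784 + 1) + 10 * (128 + 1)
#             = 100480 + 1290 = 101770
expected_params = 128 * (784 + 1) + 10 * (128 + 1)
print(expected_params)        # 101770
print(model.count_params())   # should match the value above
```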
### **Let's also plot out the image and the result to get a better sense of it** ``` for i in range(10): plt.imshow(np.reshape(X_test[i], (28, 28)), cmap='gray') print("Prediction:", np.argmax(predictions[i])) print("Probability:", predictions[i][np.argmax(predictions[i])]) plt.show() print("--------------------------------------") ``` --- ## <a id="1nn_sklearn">1 layer neural network using Sklearn</a> ``` %matplotlib inline import os import pandas as pd import numpy as np import matplotlib.pyplot as plt import math from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV from sklearn import model_selection from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import scale from sklearn.metrics import classification_report SEED = 12345 def fit_nn(x, y, param_setting={}, fold=5, seed=SEED): """Neural Network for Classification, get the CV AUC""" # set seed and default parameter params_default = {'random_state':seed, 'activation': 'logistic', } # update the input parameters params = dict(params_default) params.update(param_setting) seed= SEED kfold = model_selection.KFold(n_splits=fold, random_state=seed) model = MLPClassifier(**params) results = model_selection.cross_val_score(model, x, y, cv=kfold, scoring='roc_auc') print(results.mean()) model.fit(x, y) return model param_setting={'max_iter':2000, 'early_stopping':True, 'learning_rate_init':0.01} nn_base = fit_nn(X_train, Y_train, fold=5, param_setting=param_setting, seed=SEED) nn_base ``` ### <a id="tune">Parameter Tuning </a> **Steps** 1. Use logistics activation function for 1 hidden layer neural network 2. Tune the model using combination of node in the hidden layer and alpha and pick the best parameters. **Key parameters** * **hidden_layer_sizes**: The ith element represents the number of neurons in the ith hidden layer. length = n_layers - 2. Default is (100,), which means that there is only 1 hidden layer with 100 nodes, since len((100,))=1=3-2. 3 means that there are 3 layers including input and output layer. For architecture 56:25:11:7:5:3:1 with input 56 and 1 output hidden layers will be (25:11:7:5:3). So tuple hidden_layer_sizes = (25,11,7,5,3,) * **alpha**: L2 penalty (regularization term) parameter. ``` def parameter_tuning(model, X_train, y_train, param_grid, fold=5): """ Tune a tree based model using GridSearch, and return a model object with an updated parameters Parameters ---------- model: sklearn's ensemble tree model the model we want to do the hyperparameter tuning. X_train: pandas DataFrame Preprocessed training data. Note that all the columns should be in numeric format. y_train: pandas Series param_grid: dict contains all the parameters that we want to tune for the responding model. Note ---------- * we use kfold in GridSearchCV in order to make sure the CV Score is consistent with the score that we get from all the other function, including fit_bagging, fit_randomforest and fit_gbm. * We use model_selection.KFold with fixed seed in order to make sure GridSearchCV uses the same seed as model_selection.cross_val_score. 
""" seed=SEED # if 'n_estimators' in param_grid: # model.set_params(warm_start=True) kfold = model_selection.KFold(n_splits=fold, random_state=seed) gs_model = GridSearchCV(model, param_grid, cv=kfold, scoring='roc_auc') gs_model.fit(X_train, y_train) # best hyperparameter setting print('best parameters:{}'.format(gs_model.best_params_)) print('best score:{}'.format(gs_model.best_score_)) # refit model on best parameters model.set_params(**gs_model.best_params_) model.fit(X_train, y_train) return(model) ``` > **Set Default Parameter** ``` params = { 'max_iter':2000, 'early_stopping':True, 'learning_rate_init':0.01, 'random_state':SEED } nn = MLPClassifier(**params) ``` > **Set Tuning Parameter** ``` param_grid_nn_1 = { 'hidden_layer_sizes': [(100,), (150,)], 'alpha': [0.001, 0.01], } nn_2 = parameter_tuning(nn, X_train, Y_train, param_grid_nn_1) nn_2 ``` > **Get Cross Validation Accuracy** ``` def get_cv_score(model, x, y, fold=5, scoring='accuracy', seed=SEED): """Get the cv score for a fitted model""" kfold = model_selection.KFold(n_splits=fold, random_state=seed) results = model_selection.cross_val_score(model, x, y, cv=kfold, scoring=scoring) print(results.mean()) return results # get the mean accuracy. get_cv_score(nn_2, X_train, Y_train, fold=5, scoring='accuracy', seed=SEED) ``` ### <a id='sklearn_predict'>Make Prediction</a> plot out the image and the result to get a better sense of it ``` predictions_sklearn = nn_2.predict_proba(X_test) for i in range(10): plt.imshow(np.reshape(X_test[i], (28, 28)), cmap='gray') print("Prediction:", np.argmax(predictions_sklearn[i])) print("Probability:", predictions_sklearn[i][np.argmax(predictions_sklearn[i])]) plt.show() print("--------------------------------------") ``` ### <a id='activation'>Side note for Activation Function</a> > **What is activation function?** The function help to decide whether each neuron should be activated or not and adding non-linearity to the networks. See more on [Understanding Activation Functions in Neural Networks](https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0) > **Why do we use ReLU replacing Sigmoid?** The gradient of the sigmoid is close to 0 when the x is large positive or very negative. It means that when we try to update the weight, the update is very small. It takes a long time to run. Even when the value of the data point is big, when it times the gradient, the result will be still small. That's why we need ReLU. The gradient of ReLU will not be close to 0 when x is large positive or very negative. When we use ReLU as activation function, the larger derivative (or gradient) or it can help us update the weight more efficiently than using the sigmoid function. The pros of using ReLu is that we can train the model more efficiently. If we have enough time, the result of using sigmoid and ReLU will be the same. When we do the back propagation, what we really calculate is the derivative of the activation function. The parameters of the activation is the things that we have control of. That's where the difference of using ReLU and Sigmoid function from. [Why do we use ReLU in neural networks and how do we use it?](https://stats.stackexchange.com/questions/226923/why-do-we-use-relu-in-neural-networks-and-how-do-we-use-it) Gradients of logistic and hyperbolic tangent networks are smaller than the positive portion of the ReLU. This means that the positive portion is updated more rapidly as training progresses. However, this comes at a cost. 
The 0 gradient on the left-hand side has its own problem, called "dead neurons": a gradient update can set the incoming values to a ReLU such that its output is always zero. Modified ReLU units such as ELU (or Leaky ReLU, etc.) can minimize this.

## <a id="refer">Reference</a>
* [sklearn MLPClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier)
* [sklearn scale](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html)
* [Stackoverflow: hidden_layer_sizes](https://stackoverflow.com/questions/35363530/python-scikit-learn-mlpclassifier-hidden-layer-sizes)
* [Adam Optimization](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)
* [sklearn: scoring parameter](http://scikit-learn.org/0.15/modules/model_evaluation.html)
<a href="https://colab.research.google.com/github/michaelengh/github-slideshow/blob/main/08___Project1Part4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/drive') import pandas as pd import numpy as np salesfile = '/content/drive/MyDrive/**Coding Dojo**/02 Week 2: Pandas/Files for Lessons/sales_predictions.csv' df = pd.read_csv(salesfile) df.head() ``` item 1 ``` df.shape ``` item 2 ``` df.info() ``` item 3 ``` df.duplicated().sum() ``` item 4 ``` df.isna().sum() ``` item 5 ``` null_vals = df.isna().sum() nullx = null_vals[null_vals>0].index nullx df = df.fillna(df.mean()) # replaced the null values in item weight with the mean value. will now skew low/high values and for overall mean value wont change drastically df.isna().sum() df.fillna('Missing', inplace=True) # only one column left with missing data that are characters so i filled with Missing ``` item 6 ``` df.isna().sum() data_types = df.dtypes data_types ``` item 7 ``` df['Item_Fat_Content'].value_counts() ``` item 7 ``` df['Item_Fat_Content'] = df['Item_Fat_Content'].replace('LF','Low Fat') df['Item_Fat_Content'] = df['Item_Fat_Content'].replace('low fat','Low Fat') df['Item_Fat_Content'] = df['Item_Fat_Content'].replace('reg','Regular') df['Item_Fat_Content'].value_counts() ``` item 8 ``` print('Min Item Weight:',df['Item_Weight'].min()) print('Min Item Visibility:',df['Item_Visibility'].min()) print('Min Item MRP:',df['Item_MRP'].min()) print('Min Est Year:',df['Outlet_Establishment_Year'].min()) print('Min Item Outlet Sales:',df['Item_Outlet_Sales'].min()) print('Avg Item Weight:',df['Item_Weight'].mean()) print('Avg Item Visibility:',df['Item_Visibility'].mean()) print('Avg Item MRP:',df['Item_MRP'].mean()) print('Avg Item Est Year:',df['Outlet_Establishment_Year'].mean()) print('Avg Item Outlet Sales:',df['Item_Outlet_Sales'].mean()) print('Max Item Weight:',df['Item_Weight'].max()) print('Max Item Visibility:',df['Item_Visibility'].max()) print('Max Item MRP:',df['Item_MRP'].max()) print('Max Item Est Year:',df['Outlet_Establishment_Year'].max()) print('Max Item Outlet Sales:',df['Item_Outlet_Sales'].max()) ``` Project 1 - Part 3 (Core) step1 ``` import matplotlib.pyplot as plt import seaborn as sns df['Item_Weight'].hist(bins = 'auto',edgecolor='black') plt.title('Item Weight') plt.ylabel('Count') plt.xlabel('Weight'); ``` with this histogram I can see that most items weigh in the 12-13 range ``` df['Item_Fat_Content'].hist(edgecolor='black') plt.title('Item Fat Content') plt.ylabel('Count') plt.xlabel('Fat Category'); ``` From this i can see that majority of the items are Low Fat ``` #Item_Outlet_Sales df.boxplot(column = 'Item_Outlet_Sales',); plt.title('Item Outlet Sales'); plt.ylabel('Predicted Sales'); ``` With a boxplot on the Item sales column, I am able to see that there are a few outliers that range between 6500 and 12500 that may scew any averages that i may want to retrieve ``` df['Item_Outlet_Sales'].hist(bins='auto') corr = df.corr() sns.heatmap(corr, cmap = 'Blues', annot=True); data_types = df.dtypes obj_cols = data_types[data_types=="object"] obj_cols = obj_cols.index for col in obj_cols: print(f"{col}:") print(df[col].value_counts(dropna=False)) print("\n\n") ``` the below hist i can see that majority of inventory is from outlet type "Supermarket Type 1" ``` df['Outlet_Type'].hist(edgecolor='black') plt.title('Outlet Type') plt.ylabel('Count') plt.xlabel('Type of Outlet'); ``` 
Below we can compare and see that Supermarket Type 1 (which has the most sales) and the other outlet types have roughly the same ratio of items sold between the Low Fat and Regular categories.

```
sm_typeone = df['Outlet_Type'] == 'Supermarket Type1'
sm_one_df = df.loc[sm_typeone, :]

sm_one_df['Item_Fat_Content'].hist()
plt.title('Fat Content Items Sold for Supermarket Type 1')
plt.ylabel('Count')
plt.xlabel('Fat Content of Item');

other_df = df.loc[~sm_typeone, :]

other_df['Item_Fat_Content'].hist()
plt.title('Fat Content Items Sold for Other Outlet Types')
plt.ylabel('Count')
plt.xlabel('Fat Content of Item');

corr_two = sm_one_df.corr()
sns.heatmap(corr_two, cmap = 'Greens', annot=True);
```

The boxplot of item weights below for this store type centers around 12.5, and all items are within normal ranges with no outliers reported.

```
sm_one_df.boxplot(column='Item_Weight');
plt.title('Item Weight for Supermarket Type One');
plt.ylabel('Weight');
```

********* Project 1 - Part 4 (Core) *********

We will continue to work on your sales prediction project. The goal of this is to help the retailer understand the properties of products and outlets that play crucial roles in increasing sales.

For Part 4, your task is to build several data visualizations to help your stakeholders better understand trends in the data. Feel free to get creative with this week - this is your chance to set your project apart from others with exceptional visualizations and analyses.

Build on your previous cleaning, exploration, and analysis. Create a minimum of two data visualizations that help others understand trends in the data (explanatory data analysis). Since these graphs are for reporting purposes, make sure they look nice by including titles, legends, etc.

```
sns.boxplot(data=df, x='Outlet_Type', y='Item_Outlet_Sales')
plt.xlabel('Outlet Type', size=16)
plt.ylabel('Item Sales', size=16)
plt.title('Avg Number Items Sold Per Outlet', size=20)
plt.xticks(rotation=20);
```

Above I can see that Supermarket Type 3 has the highest mean of item sales compared to the other location types.

```
type_sale = df.groupby('Item_Type')['Item_Outlet_Sales'].mean()
type_sale

plt.barh(type_sale.index, type_sale.values)
plt.xlabel("Item $ Sale", fontsize = 16)
plt.ylabel('Item Type', fontsize = 16)
plt.title('Mean of Sales by Item Type', fontsize = 16)
```

From the chart above I can see that the highest mean sales come from the Starch Food category, followed by Seafood.

With this scatterplot I can see that as item visibility decreases, item sales also drop, as seen by the dropping trend line. This suggests we would have a chance at profiting from moving items around to increase the visibility of some items.

```
ax = sns.regplot(data=df, x='Item_Outlet_Sales', y='Item_Visibility',
                 scatter_kws={'s':1},
                 line_kws = dict(color='black', ls=':'))
ax.set(title='Sales Dependency on Item Visibility', xlabel='Projected Sales $', ylabel='Item Visibility');
```
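A small addition that is not in the original notebook: putting a single number on the visibility/sales relationship shown in the scatterplot above, using the `df` already loaded (the exact value depends on the data file).

```
# Added sketch: quantify the visibility/sales relationship behind the regplot.
corr_vis_sales = df['Item_Visibility'].corr(df['Item_Outlet_Sales'])
print(f"Pearson correlation between visibility and sales: {corr_vis_sales:.3f}")
```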
### **PINN eikonal solver to demonstrate transfer learning for a smooth v(x,z) model** ``` from google.colab import drive drive.mount('/content/gdrive') cd "/content/gdrive/My Drive/Colab Notebooks/Codes/PINN_isotropic_eikonal_R1" !pip install sciann==0.5.4.0 !pip install tensorflow==2.2.0 #!pip install keras==2.3.1 import numpy as np import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import tensorflow as tf from sciann import Functional, Variable, SciModel, PDE from sciann.utils import * import scipy.io import time import random tf.config.threading.set_intra_op_parallelism_threads(1) tf.config.threading.set_inter_op_parallelism_threads(1) np.random.seed(123) tf.random.set_seed(123) #Model specifications v0 = 2.; # Velocity at the origin of the model vergrad = 0.4; # Vertical gradient horgrad = 0.1; # Horizontal gradient zmin = 0.; zmax = 6.; deltaz = 0.02; xmin = 0.; xmax = 6.; deltax = 0.02; # Point-source location sz = 1.0; sx = 4.0; # Number of training points num_tr_pts = 2500 # Creating grid, calculating refrence traveltimes, and prepare list of grid points for training (X_star) z = np.arange(zmin,zmax+deltaz,deltaz) nz = z.size x = np.arange(xmin,xmax+deltax,deltax) nx = x.size Z,X = np.meshgrid(z,x,indexing='ij') # Preparing velocity model vs = v0 + vergrad*sz + horgrad*sx # Velocity at the source location velmodel = vs + vergrad*(Z-sz) + horgrad*(X-sx); # Traveltime solution if vergrad==0 and horgrad==0: # For homogeneous velocity model T_data = np.sqrt((Z-sz)**2 + (X-sx)**2)/v0; else: # For velocity gradient model T_data = np.arccosh(1.0+0.5*(1.0/velmodel)*(1/vs)*(vergrad**2 + horgrad**2)*((X-sx)**2 + (Z-sz)**2))/np.sqrt(vergrad**2 + horgrad**2) X_star = [Z.reshape(-1,1), X.reshape(-1,1)] # Grid points for prediction selected_pts = np.random.choice(np.arange(Z.size),num_tr_pts,replace=False) Zf = Z.reshape(-1,1)[selected_pts] Zf = np.append(Zf,sz) Xf = X.reshape(-1,1)[selected_pts] Xf = np.append(Xf,sx) X_starf = [Zf.reshape(-1,1), Xf.reshape(-1,1)] # Grid points for training # Plot the velocity model with the source location plt.style.use('default') plt.figure(figsize=(4,4)) ax = plt.gca() im = ax.imshow(velmodel, extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet") ax.plot(sx,sz,'k*',markersize=8) plt.xlabel('Offset (km)', fontsize=14) plt.xticks(fontsize=10) plt.ylabel('Depth (km)', fontsize=14) plt.yticks(fontsize=10) ax.xaxis.set_major_locator(plt.MultipleLocator(2)) ax.yaxis.set_major_locator(plt.MultipleLocator(2)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="6%", pad=0.15) cbar = plt.colorbar(im, cax=cax) cbar.set_label('km/s',size=10) cbar.ax.tick_params(labelsize=10) plt.savefig("./figs/vofxz/velmodel.pdf", format='pdf', bbox_inches="tight") # Analytical solution for the known traveltime part vel = velmodel[int(round(sz/deltaz)),int(round(sx/deltax))] # Velocity at the source location T0 = np.sqrt((Z-sz)**2 + (X-sx)**2)/vel; px0 = np.divide(X-sx, T0*vel**2, out=np.zeros_like(T0), where=T0!=0) pz0 = np.divide(Z-sz, T0*vel**2, out=np.zeros_like(T0), where=T0!=0) # Find source location id in X_star TOLX = 1e-6 TOLZ = 1e-6 sids,_ = np.where(np.logical_and(np.abs(X_starf[0]-sz)<TOLZ , np.abs(X_starf[1]-sx)<TOLX)) print(sids) print(sids.shape) print(X_starf[0][sids,0]) print(X_starf[1][sids,0]) # Preparing the Sciann model object K.clear_session() layers = [20]*10 # Appending source values velmodelf = velmodel.reshape(-1,1)[selected_pts]; velmodelf = np.append(velmodelf,vs) px0f = 
px0.reshape(-1,1)[selected_pts]; px0f = np.append(px0f,0.) pz0f = pz0.reshape(-1,1)[selected_pts]; pz0f = np.append(pz0f,0.) T0f = T0.reshape(-1,1)[selected_pts]; T0f = np.append(T0f,0.) xt = Variable("xt",dtype='float64') zt = Variable("zt",dtype='float64') vt = Variable("vt",dtype='float64') px0t = Variable("px0t",dtype='float64') pz0t = Variable("pz0t",dtype='float64') T0t = Variable("T0t",dtype='float64') tau = Functional("tau", [zt, xt], layers, 'l-atan') # Loss function based on the factored isotropic eikonal equation L = (T0t*diff(tau, xt) + tau*px0t)**2 + (T0t*diff(tau, zt) + tau*pz0t)**2 - 1.0/vt**2 targets = [tau, PDE(10*L), (1-sign(tau*T0t))*abs(tau*T0t)] target_vals = [(sids, np.ones(sids.shape).reshape(-1,1)), 'zeros', 'zeros'] model1 = SciModel( [zt, xt, vt, pz0t, px0t, T0t], targets, optimizer='scipy-l-BFGS-B' ) #Model training start_time = time.time() hist1 = model1.train( X_starf + [velmodelf,pz0f,px0f,T0f], target_vals, batch_size = X_starf[0].size, epochs = 1000, learning_rate = 0.00005, verbose=0 ) elapsed = time.time() - start_time print('Training time: %.2f seconds' %(elapsed)) # Transfer Learning model2 = SciModel( [zt, xt, vt, pz0t, px0t, T0t], targets, load_weights_from='models/vofz_model-end.hdf5', optimizer='scipy-l-BFGS-B' ) #Model training start_time = time.time() hist2 = model2.train( X_starf + [velmodelf,pz0f,px0f,T0f], target_vals, batch_size = X_starf[0].size, epochs = 200, learning_rate = 0.00005, verbose=0 ) elapsed = time.time() - start_time print('Training time: %.2f seconds' %(elapsed)) # Convergence history plot for verification fig = plt.figure(figsize=(5,3)) ax = plt.axes() ax.semilogy(hist1.history['loss'],LineWidth=2,label='Random initial model') ax.semilogy(hist2.history['loss'],LineWidth=2,label='Pre-trained initial model') ax.set_xlabel('Epochs',fontsize=14) plt.xticks(fontsize=10) #ax.xaxis.set_major_locator(plt.MultipleLocator(5000)) ax.set_ylabel('Loss',fontsize=14) plt.yticks(fontsize=10); plt.grid() plt.legend() plt.savefig("./figs/vofxz/loss.pdf", format='pdf', bbox_inches="tight") # Predicting traveltime solution from the trained model L_pred = L.eval(model2, X_star + [velmodel,pz0,px0,T0]) tau_pred = tau.eval(model2, X_star + [velmodel,pz0,px0,T0]) tau_pred = tau_pred.reshape(Z.shape) T_pred = tau_pred*T0 print('Time at source: %.4f'%(tau_pred[int(round(sz/deltaz)),int(round(sx/deltax))])) # Plot the PINN solution error plt.style.use('default') plt.figure(figsize=(4,4)) ax = plt.gca() im = ax.imshow(np.abs(T_pred-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet") plt.xlabel('Offset (km)', fontsize=14) plt.xticks(fontsize=10) plt.ylabel('Depth (km)', fontsize=14) plt.yticks(fontsize=10) ax.xaxis.set_major_locator(plt.MultipleLocator(2)) ax.yaxis.set_major_locator(plt.MultipleLocator(2)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="6%", pad=0.15) cbar = plt.colorbar(im, cax=cax) cbar.set_label('seconds',size=10) cbar.ax.tick_params(labelsize=10) plt.savefig("./figs/vofxz/pinnerror.pdf", format='pdf', bbox_inches="tight") # Load fast sweeping traveltims for comparison T_fsm = np.load('./inputs/vofxz/traveltimes/Tcomp.npy') # Plot the first order FSM solution error plt.style.use('default') plt.figure(figsize=(4,4)) ax = plt.gca() im = ax.imshow(np.abs(T_fsm-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet") plt.xlabel('Offset (km)', fontsize=14) plt.xticks(fontsize=10) plt.ylabel('Depth (km)', fontsize=14) plt.yticks(fontsize=10) ax.xaxis.set_major_locator(plt.MultipleLocator(2)) 
ax.yaxis.set_major_locator(plt.MultipleLocator(2)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="6%", pad=0.15) cbar = plt.colorbar(im, cax=cax) cbar.set_label('seconds',size=10) cbar.ax.tick_params(labelsize=10) plt.savefig("./figs/vofxz/fsmerror.pdf", format='pdf', bbox_inches="tight") # Traveltime contour plots plt.figure(figsize=(5,5)) ax = plt.gca() im1 = ax.contour(T_data, 6, extent=[xmin,xmax,zmin,zmax], colors='r') im2 = ax.contour(T_pred, 6, extent=[xmin,xmax,zmin,zmax], colors='k',linestyles = 'dashed') im3 = ax.contour(T_fsm, 6, extent=[xmin,xmax,zmin,zmax], colors='b',linestyles = 'dotted') ax.plot(sx,sz,'k*',markersize=8) plt.xlabel('Offset (km)', fontsize=14) plt.ylabel('Depth (km)', fontsize=14) ax.tick_params(axis='both', which='major', labelsize=8) plt.gca().invert_yaxis() h1,_ = im1.legend_elements() h2,_ = im2.legend_elements() h3,_ = im3.legend_elements() ax.legend([h1[0], h2[0], h3[0]], ['Analytical', 'PINN', 'Fast sweeping'],fontsize=12) ax.xaxis.set_major_locator(plt.MultipleLocator(2)) ax.yaxis.set_major_locator(plt.MultipleLocator(2)) plt.xticks(fontsize=10) plt.yticks(fontsize=10) plt.savefig("./figs/vofxz/contours.pdf", format='pdf', bbox_inches="tight") print(np.linalg.norm(T_pred-T_data)/np.linalg.norm(T_data)) print(np.linalg.norm(T_pred-T_data)) !nvidia-smi -L ```
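One small addition that is not in the original notebook: the same error metrics computed for the fast-sweeping solution loaded above, so the PINN and FSM accuracies can be compared on equal terms (reuses `T_fsm` and `T_data` defined earlier).

```
# Added sketch: relative and absolute L2 errors for the fast sweeping solution,
# mirroring the metrics printed above for the PINN traveltimes.
print(np.linalg.norm(T_fsm - T_data) / np.linalg.norm(T_data))
print(np.linalg.norm(T_fsm - T_data))
```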
## Introduction to Data Science ### Data Science Tasks: Search and Ranking ``` import os import re import nltk import matplotlib.pyplot as plt import pandas as pd import pandas.io.sql as psql from IPython.display import Image, HTML, IFrame, FileLink, FileLinks #needed to render in notebook from IPython.core.display import display from bs4 import BeautifulSoup import urllib from urllib.parse import urljoin from sqlite3 import dbapi2 as sqlite %matplotlib inline # Set default figure size for this notebook plt.rcParams['figure.figsize'] = (16.0, 12.8) ``` Specifying the path to the files ``` outputs = "../outputs/" dbfile = "searchindex.sqlite" db = os.path.join(outputs,dbfile) stoplist_en = nltk.corpus.stopwords.words('english') stoplist_pt = nltk.corpus.stopwords.words('portuguese') ignorewords = stoplist_en + stoplist_pt ``` #### First block of classes and functions: crawling ``` Image(filename='../datasets/Figs/db_schema.png') class crawler: def __init__(self,dbname): self.con=sqlite.connect(dbname) def __del__(self): self.con.close() def dbcommit(self): self.con.commit() def createindextables(self): self.con.execute('create table if not exists urllist(url)') self.con.execute('create table if not exists wordlist(word)') self.con.execute('create table if not exists wordlocation(urlid,wordid,location)') self.con.execute('create table if not exists link(fromid integer,toid integer)') self.con.execute('create table if not exists linkwords(wordid,linkid)') self.con.execute('create index if not exists wordidx on wordlist(word)') self.con.execute('create index if not exists urlidx on urllist(url)') self.con.execute('create index if not exists wordurlidx on wordlocation(wordid)') self.con.execute('create index if not exists urltoidx on link(toid)') self.con.execute('create index if not exists urlfromidx on link(fromid)') self.dbcommit() def isindexed(self,url): '''Verify whether url is already indexed''' q = "select rowid from urllist where url='{}'" u = self.con.execute(q.format(url)).fetchone() if u != None: q = "select * from wordlocation where urlid={}" v = self.con.execute(q.format(u[0])).fetchone() if v != None: return True return False def gettextonly(self,soup): '''Extract raw text from html page''' v = soup.string if v == None: c = soup.contents resulttext = '' for t in c: subtext = self.gettextonly(t) resulttext += subtext+'\n' return resulttext else: return v.strip() def separatewords(self,text): '''splits the sentences by the non alpha characters and converts all words to lowercase''' splitter = re.compile('\\W*', flags=re.U) return [s.lower() for s in splitter.split(text) if s != ''] def getentryid(self, table, field, value, createnew=True): '''Add page id to the database, if not present''' q = "select rowid from {} where {}='{}'" cursor = self.con.execute(q.format(table,field,value)) result = cursor.fetchone() if result == None: q = "insert into {} ({}) values ('{}')" cursor = self.con.execute(q.format(table,field,value)) return cursor.lastrowid else: return result[0] def addtoindex(self,url,sopa): '''Add url to the index if not there''' if self.isindexed(url): print('Page {} already indexed...'.format(url)) return print('Indexing: {}'.format(url)) text = self.gettextonly(sopa) words = self.separatewords(text) urlid = self.getentryid('urllist', 'url', url) for i in range(len(words)): word = words[i] if word in ignorewords: continue wordid = self.getentryid('wordlist','word',word) q = "insert into wordlocation (urlid,wordid,location) values ({},{},{})" 
self.con.execute(q.format(urlid,wordid,i)) def addlinkref(self,urlFrom,urlTo,linkText): '''Add a link between two pages''' words = self.separatewords(linkText) fromid = self.getentryid('urllist','url', urlFrom) toid = self.getentryid('urllist','url', urlTo) if fromid == toid: return q = "insert into link(fromid,toid) values ({},{})" cursor = self.con.execute(q.format(fromid,toid)) linkid = cursor.lastrowid for word in words: if word in ignorewords: continue wordid = self.getentryid('wordlist','word', word) q = "insert into linkwords(linkid,wordid) values ({},{})" self.con.execute(q.format(linkid,wordid)) def crawl(self,pages,depth=1): '''Starts indexing seed page(s), goes indexing all pages following breadth first, until the desired depth''' print('Seed URL(s)') for p in pages: print(p) print('\nIndexing from seed with depth of {}\n'.format(depth)) for i in range(depth+1): newpages = {} for page in pages: try: c = urllib.request.urlopen(page) except: print(u'Could not access page: {}'.format(page)) continue try: p = c.read() soup = BeautifulSoup(p, "lxml") self.addtoindex(page,soup) links = soup('a') for link in links: if ('href' in dict(link.attrs)): url = urljoin(page,link['href']) if url.find("'") != -1: continue url = url.split('#')[0] #Keeps base url if url[0:4] == 'http' and not self.isindexed(url): newpages[url] = 1 linkText = self.gettextonly(link) self.addlinkref(page,url,linkText) self.dbcommit() except: print("Could not parse page {}".format(page)) raise self.dbcommit() pages = newpages def calculatepagerank(self,iterations=20): '''Initialize each url with pagerank = 1, and iterates until it reaches the limit. Calculates pagerank with damping factor''' self.con.execute('drop table if exists pagerank') self.dbcommit() self.con.execute('create table pagerank(urlid primary key,score)') for (urlid,) in self.con.execute('select rowid from urllist'): q = 'insert into pagerank(urlid,score) values ({},1.0)' self.con.execute(q.format(urlid)) self.dbcommit() for i in range(iterations): print("Iteration {}".format(i)) for (urlid,) in self.con.execute('select rowid from urllist'): pr = 0.15 q1 = 'select distinct fromid from link where toid = {}' for (linker,) in self.con.execute(q1.format(urlid)): q2 = 'select score from pagerank where urlid = {}' linkingpr = self.con.execute(q2.format(linker)).fetchone()[0] q3 = 'select count(*) from link where fromid = {}' linkingcount = self.con.execute(q3.format(linker)).fetchone()[0] pr += 0.85 * (linkingpr/linkingcount) q4 = 'update pagerank set score = {} where urlid = {}' self.con.execute(q4.format(pr,urlid)) self.dbcommit() ``` #### Second block of classes and functions: searching ``` class searcher: def __init__(self,dbname): self.con=sqlite.connect(dbname) def __del__(self): self.con.close() def getmatchrows(self,q): fieldlist='w0.urlid' tablelist='' clauselist='' wordids=[] words=q.split(' ') tablenumber=0 for word in words: q = "select rowid from wordlist where word='{}'" wordrow = self.con.execute(q.format(word)).fetchone() if wordrow != None: wordid = wordrow[0] wordids.append(wordid) if tablenumber > 0: tablelist += ',' clauselist += ' and ' clauselist += 'w{}.urlid=w{}.urlid and '.format(tablenumber-1,tablenumber) fieldlist += ',w{}.location'.format(tablenumber) tablelist += 'wordlocation w{}'.format(tablenumber) clauselist += 'w{}.wordid={}'.format(tablenumber,wordid) tablenumber += 1 fullquery = 'select {} from {} where {}'.format(fieldlist,tablelist,clauselist) cursor = self.con.execute(fullquery) rows = [row for row in cursor] return 
rows, wordids def getscoredlist(self,rows,wordids): totalscores = dict([(row[0],0) for row in rows]) weights=[(1.0,self.frequencyscore(rows)), (1.0,self.locationscore(rows)), (1.0,self.distancescore(rows)), (1.0,self.inboundlinkscore(rows)), (1.0,self.linktextscore(rows,wordids)), (1.0,self.pagerankscore(rows))] for (weight,scores) in weights: for url in totalscores: totalscores[url] += weight*scores[url] return totalscores def geturlname(self,id): q = "select url from urllist where rowid = {}" return self.con.execute(q.format(id)).fetchone()[0] def query(self,q): try: rows,wordids = self.getmatchrows(q) except: print('No results in the database...') return scores = self.getscoredlist(rows,wordids) rankedscores = [(score,url) for (url,score) in scores.items()] rankedscores.sort() rankedscores.reverse() print('\nResultados para busca por {}:'.format(q)) for (score,urlid) in rankedscores[0:10]: print('{}\t{}'.format(score, self.geturlname(urlid))) return wordids,[r[1] for r in rankedscores[0:10]] def normalizescores(self,scores,smallIsBetter=0): vsmall=0.00001 # Avoiding division by zero if smallIsBetter: minscore = min(scores.values()) return dict([(u,float(minscore)/max(vsmall,l)) for (u,l) in scores.items()]) else: maxscore = max(scores.values()) if maxscore == 0: maxscore = vsmall return dict([(u,float(c)/maxscore) for (u,c) in scores.items()]) def frequencyscore(self,rows): counts = dict([(row[0],0) for row in rows]) for row in rows: counts[row[0]] += 1 return self.normalizescores(counts) def locationscore(self,rows): locations = dict([(row[0],1000000) for row in rows]) for row in rows: loc = sum(row[1:]) if loc < locations[row[0]]: locations[row[0]] = loc return self.normalizescores(locations,smallIsBetter=1) def distancescore(self,rows): if len(rows[0]) <= 2: return dict([(row[0],1.0) for row in rows]) mindistance = dict([(row[0],1000000) for row in rows]) for row in rows: dist = sum([abs(row[i]-row[i-1]) for i in range(1,len(row))]) if dist < mindistance[row[0]]: mindistance[row[0]]=dist return self.normalizescores(mindistance,smallIsBetter=1) def inboundlinkscore(self,rows): uniqueurls = dict([(row[0],1) for row in rows]) q = 'select count(*) from link where toid = {}' inboundcount = dict([(u,self.con.execute(q.format(u)).fetchone()[0]) for u in uniqueurls]) return self.normalizescores(inboundcount) def linktextscore(self,rows,wordids): linkscores = dict([(row[0],0) for row in rows]) for wordid in wordids: q = 'select link.fromid,link.toid from linkwords,' q += 'link where wordid={} and linkwords.linkid=link.rowid' cursor = self.con.execute(q.format(wordid)) for (fromid,toid) in cursor: if toid in linkscores: q = 'select score from pagerank where urlid={}' pr=self.con.execute(q.format(fromid)).fetchone()[0] linkscores[toid] += pr maxscore = max(linkscores.values()) normalizedscores = dict([(u,float(l)/maxscore) for (u,l) in linkscores.items()]) return normalizedscores def pagerankscore(self,rows): q = 'select score from pagerank where urlid={}' pageranks=dict([(row[0],self.con.execute(q.format(row[0])).fetchone()[0]) for row in rows]) maxrank = max(pageranks.values()) normalizedscores = dict([(u,float(l)/maxrank) for (u,l) in pageranks.items()]) return normalizedscores ``` Defining seed pages ``` seed1 = ['http://www.oglobo.com/'] seed2 = ['http://emap.fgv.br/'] ``` Instantiating the crawler ``` crawl = crawler(db) ``` Creating tables - needed only in the first time ``` crawl.createindextables() ``` Crawling to a specific depth level ``` crawl.crawl(seed2,1) ``` Calculates pagerank 
with n iterations

```
crawl.calculatepagerank(25)
```

Instantiating the searcher

```
search = searcher(db)
```

Querying the index

```
query = 'matemática'
print(search.query(query))

conn = sqlite.connect(db)
df_mysql1 = psql.read_sql('select * from urllist;', con=conn)
df_mysql1.head()

df_mysql1.loc[0]

df_mysql2 = psql.read_sql('select * from link;', con=conn)
df_mysql2.head()

for i in range(10):
    print('Link entre as páginas {} --> {}'.format(df_mysql1['url'].loc[df_mysql2['fromid'].loc[i]],
                                                   df_mysql1['url'].loc[df_mysql2['toid'].loc[i]]))

conn.close()
```
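The `calculatepagerank` call above runs the damping-factor update directly against the SQLite `link` and `pagerank` tables. As a minimal sketch of the same iteration, here it is run on a hypothetical in-memory toy graph (the node names and link structure are made up; only the 0.15/0.85 damping scheme mirrors the crawler code), with each pass computed into a fresh dict for clarity:

```
# Toy link graph: fromid -> list of toids (made-up example data)
links = {
    'A': ['B', 'C'],
    'B': ['C'],
    'C': ['A'],
    'D': ['C'],
}

# Every page starts with a rank of 1.0, as in crawler.calculatepagerank
pagerank = {page: 1.0 for page in links}

for iteration in range(20):
    new_rank = {}
    for page in links:
        pr = 0.15
        # each page linking to `page` contributes a share of its own rank,
        # divided by its number of outbound links
        for linker, outlinks in links.items():
            if page in outlinks:
                pr += 0.85 * pagerank[linker] / len(outlinks)
        new_rank[page] = pr
    pagerank = new_rank

print(sorted(pagerank.items(), key=lambda kv: -kv[1]))
```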
# Classifier Evaluation ## Classifier 1: Decision Tree (unpruned, \_4444 attributes) We're evaluating the natural, \_4444 format of the classifier, meaning without considering Scoring Margin and using 4 bins for all the attributes and for the classifier. ``` # some useful mysklearn package import statements and reloads import importlib import mysklearn.myutils importlib.reload(mysklearn.myutils) import mysklearn.myutils as myutils import mysklearn.myevaluation importlib.reload(mysklearn.myevaluation) import mysklearn.myevaluation as myevaluation import mysklearn.myclassifiers importlib.reload(mysklearn.myclassifiers) from mysklearn.myclassifiers import MyDecisionTreeClassifier, MyKNeighborsClassifier import mysklearn.mypytable importlib.reload(mysklearn.mypytable) from mysklearn.mypytable import MyPyTable import copy import random from tabulate import tabulate header, data = myutils.load_from_file("input_data/NCAA_Statistics_24444.csv") # Now, we can move to create some decision trees. Let's first create trees over the whole dataset, then # test upon our stratisfied k-fold splitting method. random.seed(13) class_col = myutils.get_column(data, header, "Win Percentage") data = myutils.drop_column(data, header, "Win Percentage") data = myutils.drop_column(data, header, "Scoring Margin") atts = header[1:-1] # Let's stratisfy X_indices = range(len(class_col)) X_train_folds, X_test_folds = myevaluation.stratified_kfold_cross_validation(X_indices, class_col, n_splits=10) y_preds = [] y_reals = [] correct = 0 total = 0 for fold_index in range(len(X_train_folds)): X_train = [] X_test = [] y_train = [] y_test = [] for train_index in X_train_folds[fold_index]: X_train.append(copy.deepcopy(data[train_index])) y_train.append(copy.deepcopy(class_col[train_index])) for test_index in X_test_folds[fold_index]: X_test.append(copy.deepcopy(data[test_index])) y_test.append(copy.deepcopy(class_col[test_index])) # Get a classifier in here... my_dt = MyDecisionTreeClassifier() # Fitting... my_dt.fit(X_train, y_train) # ... and predicting! y_pred = my_dt.predict(X_test) # Counting and recording... for i in range(len(y_pred)): total += 1 if y_pred[i] == y_test[i]: correct += 1 y_preds.append(copy.deepcopy(y_pred[i])) y_reals.append(copy.deepcopy(y_test[i])) print("Predictive Accuracy:", str(round(correct / total, 3))) print("Error Rate:", str(round(1 - correct / total, 3))) print() print("Confusion Matrix:") print() labels = ["1", "2", "3", "4"] conf_matrix = myevaluation.confusion_matrix(y_reals, y_preds, labels) for index in range(len(conf_matrix)): conf_matrix[index].append(sum(conf_matrix[index])) if conf_matrix[index][-1] == 0: conf_matrix[index].append(0) else: conf_matrix[index].append(round(100 * conf_matrix[index][index] / conf_matrix[index][-1], 2)) conf_matrix[index] = [index+1] + conf_matrix[index] header = ["Win% Tier"] for index in labels: header.append(index) header.append("Total") header.append("Recognition (%%)") print(tabulate(conf_matrix, headers=header, tablefmt="rst", numalign="right")) ``` ## Classifier 2: K Nearest Neighbors We're evaluating the natural, \_4444 format of the classifier, meaning without considering Scoring Margin and using 4 bins for all the attributes and for the classifier. 
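Before the full stratified-cross-validation cell below, here is a minimal sketch of the prediction step that a k-nearest-neighbours classifier such as `MyKNeighborsClassifier` is assumed to perform internally: Euclidean distance to every training point, then a majority vote over the k closest labels. The sample points are made up and only illustrate the idea; they are not drawn from the NCAA dataset.

```
# Minimal k-NN prediction sketch (illustrative data only)
from collections import Counter
import math

def knn_predict(X_train, y_train, x_query, k=3):
    # distance from the query point to every training point
    distances = [(math.dist(x_query, x), label)
                 for x, label in zip(X_train, y_train)]
    distances.sort(key=lambda pair: pair[0])
    # majority vote over the k nearest labels
    top_k_labels = [label for _, label in distances[:k]]
    return Counter(top_k_labels).most_common(1)[0][0]

X_demo = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]
y_demo = ["low", "low", "high", "high"]
print(knn_predict(X_demo, y_demo, [1.2, 1.9], k=3))  # -> "low"
```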
``` importlib.reload(myutils) import os ncaa_path = os.path.join("input_data","NCAA_Statistics_24444.csv") ncaa_data = MyPyTable().load_from_file(ncaa_path) win_percentage = ncaa_data.get_column("Win Percentage") scoring_margin = ncaa_data.get_column("Scoring Margin") efg = ncaa_data.get_column("eFG%") spg_bpg = ncaa_data.get_column("SPG+BPG") Rebound_margin = ncaa_data.get_column("Rebound Margin") random.seed(13) X_indices = range(len(win_percentage)) X_train_folds, X_test_folds = myevaluation.stratified_kfold_cross_validation(X_indices,win_percentage,n_splits=10) knn = MyKNeighborsClassifier(n_neighbors=10) knn_predictions = [] knn_actual = [] for i in range(len(X_train_folds)): y_train_1 = [] X_train_1 = [] y_test_1 = [] X_test_1 = [] for index in X_train_folds[i]: X_train_1.append([scoring_margin[index],efg[index],spg_bpg[index],Rebound_margin[index]]) y_train_1.append(win_percentage[index]) for index in X_test_folds[i]: X_test_1.append([scoring_margin[index],efg[index],spg_bpg[index],Rebound_margin[index]]) y_test_1.append(win_percentage[index]) knn.fit(X_train_1,y_train_1) knn_predictions.append(knn.predict(X_test_1)) knn_actual.append(y_test_1) knn_predictions_1d_1 = [] for i in knn_predictions: for k in i: knn_predictions_1d_1.append(k) knn_actual_1d_1 = [] for i in knn_actual: for j in i: knn_actual_1d_1.append(j) knn_total_correct = 0 knn_total_predictions = len(knn_actual_1d_1) for i in range(len(knn_predictions_1d_1)): if knn_predictions_1d_1[i] == knn_actual_1d_1[i]: knn_total_correct += 1 knn_accuracy = knn_total_correct /knn_total_predictions knn_error_rate = 1 - knn_accuracy print() print("KNN: " +"accuracy = "+str(knn_accuracy)+ ", error rate = " + str(knn_error_rate)) column_names = [1,2,3,4] knn_matrix = myevaluation.confusion_matrix(knn_actual_1d_1,knn_predictions_1d_1,column_names) sum_matrix_1 = [] for i in knn_matrix: sum_matrix_1.append(sum(i)) recognition_1 = [] for i in range(len(knn_matrix)): if sum_matrix_1[i] > 0: recognition_1.append((knn_matrix[i][i] / sum_matrix_1[i]) * 100) else: recognition_1.append(0) for i in range(len(knn_matrix)): knn_matrix[i].append(sum_matrix_1[i]) for i in range(len(knn_matrix)): knn_matrix[i].insert(0,column_names[i]) for i in range(len(knn_matrix)): knn_matrix[i].append(recognition_1[i]) column_names_2= ["Win% Tier",1,2,3,4,"total","Recognition (%)"] print() print(tabulate(knn_matrix,column_names_2)) print() ``` ## Classifier 3: Random Forests (unweighted, c.o.e) We'll use our above Decision Tree classification and gather ensemble predictions therefrom. ``` # some useful mysklearn package import statements and reloads import importlib import mysklearn.myutils importlib.reload(mysklearn.myutils) import mysklearn.myutils as myutils import mysklearn.myevaluation importlib.reload(mysklearn.myevaluation) import mysklearn.myevaluation as myevaluation import mysklearn.myclassifiers importlib.reload(mysklearn.myclassifiers) from mysklearn.myclassifiers import MyRandomForestClassifier import mysklearn.mypytable importlib.reload(mysklearn.mypytable) from mysklearn.mypytable import MyPyTable import copy import random from tabulate import tabulate header, data = myutils.load_from_file("input_data/NCAA_Statistics_24444.csv") random.seed(15) # Now, we can move to create some decision trees. Let's first create trees over the whole dataset, then # test upon our stratisfied k-fold splitting method. 
class_col = myutils.get_column(data, header, "Win Percentage") data = myutils.drop_column(data, header, "Win Percentage") data = myutils.drop_column(data, header, "Scoring Margin") atts = header[1:-1] X_indices = range(len(class_col)) X_train_folds, X_test_folds = myevaluation.stratified_kfold_cross_validation(X_indices, class_col, n_splits=10) y_preds = [] y_reals = [] correct = 0 total = 0 for fold_index in range(len(X_train_folds)): X_train = [] X_test = [] y_train = [] y_test = [] for train_index in X_train_folds[fold_index]: X_train.append(copy.deepcopy(data[train_index])) y_train.append(copy.deepcopy(class_col[train_index])) for test_index in X_test_folds[fold_index]: X_test.append(copy.deepcopy(data[test_index])) y_test.append(copy.deepcopy(class_col[test_index])) # Get a classifier in here... my_rf = MyRandomForestClassifier() # Fitting... my_rf.fit(X_train, y_train, n_trees=50, m_trees=10, min_atts=2) # ... and predicting! y_pred = my_rf.predict(X_test) # Counting and recording... for i in range(len(y_pred)): total += 1 if y_pred[i] == y_test[i]: correct += 1 y_preds.append(copy.deepcopy(y_pred[i])) y_reals.append(copy.deepcopy(y_test[i])) print("Predictive Accuracy:", str(round(correct / total, 3))) print("Error Rate:", str(round(1 - correct / total, 3))) print() print("Confusion Matrix:") print() labels = ["1", "2", "3", "4"] conf_matrix = myevaluation.confusion_matrix(y_reals, y_preds, labels) for index in range(len(conf_matrix)): conf_matrix[index].append(sum(conf_matrix[index])) if conf_matrix[index][-1] == 0: conf_matrix[index].append(0) else: conf_matrix[index].append(round(100 * conf_matrix[index][index] / conf_matrix[index][-1], 2)) conf_matrix[index] = [index+1] + conf_matrix[index] header = ["Win% Tier"] for index in labels: header.append(index) header.append("Total") header.append("Recognition (%%)") print(tabulate(conf_matrix, headers=header, tablefmt="rst", numalign="right")) ```
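If scikit-learn happens to be available in the environment (it is not part of the `mysklearn` package this project is built on), the hand-rolled accuracy and recognition numbers above can be cross-checked with its metrics. This is only a sanity-check sketch; `y_reals` and `y_preds` are the flattened per-fold labels collected in the random forest cell above.

```
# Optional cross-check of the hand-rolled metrics using scikit-learn
from sklearn.metrics import accuracy_score, confusion_matrix

print("accuracy:", round(accuracy_score(y_reals, y_preds), 3))

cm = confusion_matrix(y_reals, y_preds, labels=["1", "2", "3", "4"])
for tier, row in zip(["1", "2", "3", "4"], cm):
    total = row.sum()
    recognition = 100.0 * row[int(tier) - 1] / total if total else 0.0
    print("tier {}: total={}, recognition={:.2f}%".format(tier, total, recognition))
```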
Machine learning "in the database" (including systems such as Spark) is an increasingly popular topic. And where there is machine learning, there is a need for data preparation. Many machine learning algorithms expect all data to be numeric and free of missing values. [vtreat]() is a package (available for [Python](https://github.com/WinVector/pyvtreat) or for [R](https://github.com/WinVector/vtreat)) that reliably converts fairly wild data into such a format. To support machine learning in the database we are adding the ability to export vtreat data preparations both as data (so they can later be used by stored procedures) and as [data algebra](https://github.com/WinVector/data_algebra) pipelines (so they can be immediately translated to executable SQL).

This note is a demonstration of converting a [Python vtreat](https://github.com/WinVector/pyvtreat) data preparation into a [data algebra](https://github.com/WinVector/data_algebra) pipeline, which can then in turn be converted to SQL queries. [R vtreat](https://winvector.github.io/vtreat/) already has similar functionality with [as_rquery_plan()](https://winvector.github.io/vtreat/reference/as_rquery_plan.html).

Let's work a simple problem. First we import our modules.

```
import pandas as pd
from data_algebra.data_ops import *
import data_algebra.SQLite
import data_algebra.test_util
import vtreat
from vtreat.vtreat_db_adapter import as_data_algebra_pipeline
```

Now let's bring in and arrange our data.

```
# Data from:
# https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008
data_all = pd.read_csv("diabetes_head.csv")
n = data_all.shape[0]
data_all['orig_index'] = range(n)
d_train = data_all.loc[range(n-5), :].reset_index(inplace=False, drop=True)
d_app = data_all.loc[range(n-5, n)].reset_index(inplace=False, drop=True)
```

We define our problem by declaring which column is the dependent variable, which columns are potential explanatory variables, and any other columns we wish to carry around.

```
outcome_name = "readmitted"
cols_to_copy = ["orig_index", "encounter_id", "patient_nbr"] + [outcome_name]
vars = ['time_in_hospital', 'weight']
columns = vars + cols_to_copy

d_train.loc[:, columns]
```

Now we specify our vtreat data preparation scheme. Documentation and tutorials on these concepts can be found [here](https://github.com/WinVector/pyvtreat).

```
treatment = vtreat.BinomialOutcomeTreatment(
    cols_to_copy=cols_to_copy,
    outcome_name=outcome_name,
    outcome_target=True,
    params=vtreat.vtreat_parameters(
        {"sparse_indicators": False, "filter_to_recommended": False,}
    ),
)

d_train_treated = treatment.fit_transform(d_train.loc[:, columns])
```

We can apply this data treatment to new data.

```
d_app_treated = treatment.transform(d_app.loc[:, columns])

d_app_treated
```

Now for the feature that is new in vtreat version 1.0.1 (not yet released to PyPI): we can export the entire fitted data preparation plan as a single table.

```
transform_as_data = treatment.description_matrix()

transform_as_data
```

It is a simple matter to write a procedure (or, in the case of databases, a stored procedure) that reproduces the vtreat data preparation from this table. For example, vtreat itself now (in version 1.0.1) supplies a function that translates the table into a [data algebra](https://github.com/WinVector/data_algebra) pipeline. This means we can run the data preparation in any database that we have a data algebra SQL adapter for! Let's see this translation in action.
``` ops = as_data_algebra_pipeline( source=descr(d_app=d_app.loc[:, columns]), vtreat_descr=transform_as_data, treatment_table_name='transform_as_data', ) # print(ops) # could print this, but it tends to be large! transformed = ops.eval({ 'd_app': d_app.loc[:, columns], 'transform_as_data': transform_as_data}) transformed assert data_algebra.test_util.equivalent_frames(transformed, d_app_treated) ``` We can then run the same operations in an SQL database we have an adapter for. Currently, we have good adapters for Google Big Query, Spark, PostgreSQL, MySQL, and SQLite. The data algebra has extension classes designed to make producing new database adapters easy. Let's simply use SQLite as a convenient example. ``` db_handle = data_algebra.SQLite.example_handle() sql = db_handle.to_sql(ops) # print(sql) # could print this, but it tends to be large! db_handle.insert_table(d_app.loc[:, columns], table_name='d_app') db_handle.insert_table(transform_as_data, table_name='transform_as_data') db_handle.execute('CREATE TABLE res AS ' + sql) res_db = db_handle.read_query('SELECT * FROM res ORDER BY orig_index LIMIT 10') res_db assert data_algebra.test_util.equivalent_frames(res_db, d_app_treated) db_handle.close() ``` And that is it: advanced data preparation directly in the database. We train the vtreat data preparation in-memory, but it now can be exported and used many more places at much greater scale. Note: for larger examples we suggest composing operations by sequential updates instead of nesting. For how to do this please see [here](https://github.com/WinVector/pyvtreat/blob/main/Examples/Database/update_joins.ipynb). Or if there are not too many values in the re-mapping tables one can use the upcoming case statement based export. ``` ops_case = as_data_algebra_pipeline( source=descr(d_app=d_app.loc[:, columns]), vtreat_descr=transform_as_data, treatment_table_name='transform_as_data', use_case_merges=True, ) ops_case print(db_handle.to_sql(ops_case)) ``` The CASE WHEN based mapping is supplied by [data algebra mapv()](https://github.com/WinVector/data_algebra/blob/main/Examples/Map/mapv.ipynb).
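As a toy illustration of the CASE WHEN idea (independent of vtreat's actual export format, and with made-up table and column names), here is a small categorical re-mapping expressed once as a join against a code table and once as an inlined CASE expression, in plain SQLite:

```
import sqlite3

conn_demo = sqlite3.connect(":memory:")
conn_demo.executescript("""
    CREATE TABLE obs(id INTEGER, weight_group TEXT);
    INSERT INTO obs VALUES (1, '[75-100)'), (2, '?'), (3, '[100-125)');

    CREATE TABLE weight_map(level TEXT, impact REAL);
    INSERT INTO weight_map VALUES ('[75-100)', 0.12), ('[100-125)', -0.05);
""")

# Join-based mapping: unmatched levels fall back to 0.0 via COALESCE
join_sql = """
    SELECT obs.id, COALESCE(weight_map.impact, 0.0) AS weight_impact
    FROM obs LEFT JOIN weight_map ON obs.weight_group = weight_map.level
    ORDER BY obs.id
"""

# Equivalent CASE WHEN mapping with the levels inlined into the query
case_sql = """
    SELECT id,
           CASE weight_group
                WHEN '[75-100)'  THEN 0.12
                WHEN '[100-125)' THEN -0.05
                ELSE 0.0
           END AS weight_impact
    FROM obs ORDER BY id
"""

print(conn_demo.execute(join_sql).fetchall())  # both queries give the same rows
print(conn_demo.execute(case_sql).fetchall())
conn_demo.close()
```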
420-A52-SF - Supervised Learning Algorithms - Winter 2020 - Technical Specialization in Artificial Intelligence - Mikaël Swawola, M.Sc.
<br/>
![Travaux Pratiques - Régression linéaire multiple](static/03-tp-banner.png)
<br/>
**Objective:** this lab session consists of implementing, as vectorized code, the **gradient descent algorithm for multiple linear regression**. The dataset used will be the full version of the *Advertising* dataset and will need to be **scaled**.

```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```

### 0 - Loading the libraries

```
# Data manipulation
import numpy as np
import pandas as pd
from collections import defaultdict

# Data visualization
import matplotlib.pyplot as plt
import seaborn as sns

# Miscellaneous tools
from tqdm.notebook import tqdm_notebook
from tqdm import tqdm

# Visualization configuration
sns.set(style="darkgrid", rc={'figure.figsize':(11.7,8.27)})
```

### 1 - Reading the advertising dataset

**Exercise 1-1**: using the *pandas* library, read the file `advertising-multivariate.csv`

```
# Complete the code below ~ 1 line
df = None
```

**Exercise 1-2**: using the `head()` function, display the first rows of the data frame. What will be the size of the parameter vector $\theta$?

```
# Complete the code below ~ 1 line
None
```

### 2 - Scaling the data

**Exercise 2**: standardize the data.<br/>
Note: it is not necessary to standardize the output variable, but you may do so for simplicity.

```
# Complete the code below ~ 1 line
df_norm = None
```

### 3 - Preparing the data structure

**Exercise 3**: build the predictor matrix X, not forgetting to add a column representing $x_0$

```
# Complete the code below ~ 5 lines
x0 = None
x1 = None
x2 = None
x3 = None
X = None

y = df['sales'].values # We keep the non-standardized values here
```

<strong style='color: green'>TEST - The code below lets you check the shape of `X`. The `assert` must not raise an exception</strong>

```
assert X.shape == (4,200)
```

### 4 - Defining the model

**Exercise 4**: complete the function below representing the multiple linear regression model (hypothesis)

As a reminder, the multiple regression model is $h_{\theta}(x)=\theta_{0}x_0 + \theta_{1}x_1 + \cdots + \theta_{n}x_n = \theta^TX$

```
def hypothesis(x, theta):
    assert x.shape[0] == theta.shape[0]
    # Complete the code ~ 1 line
    h = None
    return h
```

<strong style='color: green'>TEST - The code below lets you test your `hypothesis` function. The `assert` must not raise an exception</strong>

```
x_test = np.array([[1,1],[3,4],[2,2],[1,-1]])
theta_test = np.array([1,2,2,4]).reshape(-1,1)
hypothesis(x_test, theta_test)

assert np.array_equal(hypothesis(x_test,theta_test), np.array([[15,9]]))
```

### 5 - Cost function

**Exercise 5**: complete the function below computing the cost (cost function)

As a reminder, the cost function for multiple linear regression is

$J(\theta)= \frac{1}{2m}\sum\limits_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)})^{2}=\frac{1}{2m}(y-X^t\theta)^T\times(y-X^t\theta)$

Remark: as the equation above shows, there are two methods for computing the cost function. Choose whichever suits you.<br/><em>Optional: implement the other method as well</em>

```
def cost_function(x,y, theta):
    # Complete the code ~ 1-4 lines
    cost = None
    return cost
```

<strong style='color: green'>TEST - The code below tests the `cost_function` function. It must return a `numpy.float64`, that is, a number and not an array. The `assert` must not raise an exception and the expected result is ~ 94.92</strong>

```
theta_test = np.array([1,2,2,4]).reshape(-1,1)
cost = cost_function(X,y,theta_test)
assert type(cost) == np.float64
cost
```

### 6 - Gradient descent algorithm

**Exercise 6**: complete the gradient descent algorithm below. Choose the initial vector $\theta$, the value of the **step size** ($\alpha$) and the **number of iterations**. A convergence test will not be used here.

$
\text{Repeat for n_iterations} \{\\
\theta_{j}:= \theta_{j} - \alpha\frac{1}{m}\sum\limits_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)})\times x_{j}^{(i)}\quad\forall j \\
\}
$

Or in vectorized form:

$
\text{Repeat for n_iterations} \{\\
\theta:= \theta - \alpha\frac{1}{m}(\theta^TX-y)\times X \\
\}
$

<strong>You are strongly encouraged to use the vectorized form. It is in any case simpler to code than the non-vectorized version!</strong>

```
theta = None
alpha = None
n_iterations = None

m = y.shape[0]

history = defaultdict(list)

for i in tqdm(range(0, n_iterations)):

    # Complete the code ~ 2 lines
    None

    # Save intermediate values of theta and the cost
    if i%50 == 0:
        cost = cost_function(X, y, theta)
        history['theta_0'].append(theta[0])
        history['theta_1'].append(theta[1])
        history['theta_2'].append(theta[2])
        history['theta_3'].append(theta[3])
        history['cost'].append(cost)

print(f'Theta = {theta}')
```

The parameter values $\theta_j$ should approach
```
[[14.0225    ]
 [ 3.92908869]
 [ 2.79906919]
 [-0.02259517]]
```

### 7 - Interpreting the parameters

**Exercise 7**: interpret the parameters obtained

### End of the lab
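As an appendix to this lab, for readers who want to sanity-check their own implementation, here is a self-contained sketch of the vectorized update described above, run on synthetic data rather than the Advertising dataset (so it is not a drop-in solution to the exercises). The shapes follow the notebook's convention: X is (n+1, m) with the first row equal to 1, theta is (n+1, 1), and y has length m.

```
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 3
X = np.vstack([np.ones(m), rng.normal(size=(n, m))])   # (n+1, m), row 0 = x0
true_theta = np.array([[4.0], [2.0], [-1.0], [0.5]])
y = (true_theta.T @ X).ravel() + rng.normal(scale=0.1, size=m)

def hypothesis(X, theta):
    return theta.T @ X                                  # shape (1, m)

def cost_function(X, y, theta):
    residual = hypothesis(X, theta) - y                 # broadcasts to (1, m)
    return (residual @ residual.T)[0, 0] / (2 * m)

theta = np.zeros((n + 1, 1))
alpha = 0.1
n_iterations = 2000
for _ in range(n_iterations):
    gradient = (X @ (hypothesis(X, theta) - y).T) / m   # shape (n+1, 1)
    theta = theta - alpha * gradient

print(np.round(theta, 3))                 # should be close to true_theta
print(round(cost_function(X, y, theta), 4))
```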
# Autoencoders In this notebook we will explore autoencoder models. These are models in which the inputs are *encoded* to some intermediate representation before this representation is then *decoded* to try to reconstruct the inputs. They are example of a model which uses an unsupervised training method and are both interesting as a model in their own right and as a method for pre-training useful representations to use in supervised tasks such as classification. Autoencoders were covered as a pre-training method in the [sixth lecture slides](http://www.inf.ed.ac.uk/teaching/courses/mlp/2016/mlp06-enc.pdf). __*Correction: The original version of this notebook used the term 'contractive autoencoder' to refer to an autoencoder where the encoder 'contracts' the input to a smaller dimension hidden representation. This is non-standard usage - 'Contractive Autoencoder' is used more commonly for an autoencoder variant with a specific form of regularisation described in the paper [Contractive Autoencoders: Explicit Feature Invariance During Feature Extraction](http://www.icml-2011.org/papers/455_icmlpaper.pdf). Apologies for any confusion and thanks to Iain Murray for pointing out this error.*__ ## Exercise 1: Linear <s>contractive</s> autoencoders For the first exercise we will consider training a simple autoencoder where the hidden representation is smaller in dimension than the input and the objective is to minimise the mean squared error between the original inputs and reconstructed inputs. To begin with we will consider models in which the encoder and decoder are both simple affine transformations. When training an autoencoder the target outputs for the model are the original inputs. A simple way to integrate this in to our `mlp` framework is to define a new data provider inheriting from a base data provider (e.g. `MNISTDataProvider`) which overrides the `next` method to return the inputs batch as both inputs and targets to the model. A data provider of this form has been provided for you in `mlp.data_providers` as `MNISTAutoencoderDataProvider`. Use this data provider to train an autoencoder model with a 50 dimensional hidden representation and both encoder and decoder defined by affine transformations. You should use a sum of squared differences error and a basic gradient descent learning rule with learning rate 0.01. Initialise the biases to zero and use a uniform Glorot initialisation for both layers weights. Train the model for 25 epochs with a batch size of 50. ``` import numpy as np import logging import mlp.layers as layers import mlp.models as models import mlp.optimisers as optimisers import mlp.errors as errors import mlp.learning_rules as learning_rules import mlp.data_providers as data_providers import mlp.initialisers as initialisers import matplotlib.pyplot as plt %matplotlib inline # Seed a random number generator seed = 10102016 rng = np.random.RandomState(seed) # Set up a logger object to print info about the training run to stdout logger = logging.getLogger() logger.setLevel(logging.INFO) logger.handlers = [logging.StreamHandler()] # Create data provider objects for the MNIST data set train_data = data_providers.MNISTAutoencoderDataProvider('train', batch_size=50, rng=rng) valid_data = data_providers.MNISTAutoencoderDataProvider('valid', batch_size=50, rng=rng) input_dim, output_dim, hidden_dim = 784, 784, 50 weights_init = initialisers.GlorotUniformInit(rng=rng) biases_init = initialisers.ConstantInit(0.) 
model = models.MultipleLayerModel([ layers.AffineLayer(input_dim, hidden_dim, weights_init, biases_init), layers.AffineLayer(hidden_dim, output_dim, weights_init, biases_init), ]) error = errors.SumOfSquaredDiffsError() learning_rule = learning_rules.GradientDescentLearningRule(0.01) num_epochs = 25 stats_interval = 1 optimiser = optimisers.Optimiser( model, error, learning_rule, train_data, valid_data) stats, keys, run_time = optimiser.train(num_epochs=num_epochs, stats_interval=stats_interval) # Plot the change in the validation and training set error over training. fig_1 = plt.figure(figsize=(8, 4)) ax_1 = fig_1.add_subplot(111) for k in ['error(train)', 'error(valid)']: ax_1.plot(np.arange(1, stats.shape[0]) * stats_interval, stats[1:, keys[k]], label=k) ax_1.legend(loc=0) ax_1.set_xlabel('Epoch number') ``` Using the function defined in the cell below (from the first lab notebook), plot a batch of the original images and the autoencoder reconstructions. ``` def show_batch_of_images(img_batch, fig_size=(3, 3), num_rows=None): fig = plt.figure(figsize=fig_size) batch_size, im_height, im_width = img_batch.shape if num_rows is None: # calculate grid dimensions to give square(ish) grid num_rows = int(batch_size**0.5) num_cols = int(batch_size * 1. / num_rows) if num_rows * num_cols < batch_size: num_cols += 1 # intialise empty array to tile image grid into tiled = np.zeros((im_height * num_rows, im_width * num_cols)) # iterate over images in batch + indexes within batch for i, img in enumerate(img_batch): # calculate grid row and column indices r, c = i % num_rows, i // num_rows tiled[r * im_height:(r + 1) * im_height, c * im_height:(c + 1) * im_height] = img ax = fig.add_subplot(111) ax.imshow(tiled, cmap='Greys', vmin=0., vmax=1.) ax.axis('off') fig.tight_layout() plt.show() return fig, ax inputs, targets = valid_data.next() recons = model.fprop(inputs)[-1] _ = show_batch_of_images(inputs.reshape((-1, 28, 28)), (4, 2), 5) _ = show_batch_of_images(recons.reshape((-1, 28, 28)), (4, 2), 5) ``` ### Optional extension: principle components analysis *This section is provided for the interest of those also sitting MLPR or otherwise already familiar with eigendecompositions and PCA. Feel free to skip over if this doesn't apply to you (or even if it does).* For a linear (affine) autoencoder model trained with a sum of squared differences error function there is an analytic solution for the optimal model parameters corresponding to [principle components analysis](https://en.wikipedia.org/wiki/Principal_component_analysis). If we have a training dataset of $N$ $D$-dimensional vectors $\left\lbrace \boldsymbol{x}^{(n)} \right\rbrace_{n=1}^N$, then we can calculate the empiricial mean and covariance of the training data using \begin{equation} \boldsymbol{\mu} = \frac{1}{N} \sum_{n=1}^N \left[ \boldsymbol{x}^{(n)} \right] \qquad \text{and} \qquad \mathbf{\Sigma} = \frac{1}{N} \sum_{n=1}^N \left[ \left(\boldsymbol{x}^{(n)} - \boldsymbol{\mu} \right) \left(\boldsymbol{x}^{(n)} - \boldsymbol{\mu} \right)^{\rm T} \right]. 
\end{equation} We can then calculate an [eigendecomposition](https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix) of the covariance matrix \begin{equation} \mathbf{\Sigma} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{\rm T} \qquad \mathbf{Q} = \left[ \begin{array}{cccc} \uparrow & \uparrow & \cdots & \uparrow \\ \boldsymbol{q}_1 & \boldsymbol{q}_2 & \cdots & \boldsymbol{q}_D \\ \downarrow & \downarrow & \cdots & \downarrow \\ \end{array} \right] \qquad \mathbf{\Lambda} = \left[ \begin{array}{cccc} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \cdots & \lambda_D \\ \end{array} \right] \end{equation} with $\mathbf{Q}$ an orthogonal matrix, $\mathbf{Q}\mathbf{Q}^{\rm T} = \mathbf{I}$, with columns $\left\lbrace \boldsymbol{q}_d \right\rbrace_{d=1}^D$ corresponding to the eigenvectors of $\mathbf{\Sigma}$ and $\mathbf{\Lambda}$ a diagonal matrix with diagonal elements $\left\lbrace \lambda_d \right\rbrace_{d=1}^D$ the corresponding eigenvalues of $\mathbf{\Sigma}$. Assuming the eigenvalues are ordered such that $\lambda_1 < \lambda_2 < \dots < \lambda_D$ then the top $K$ principle components of the inputs (eigenvectors with largest eigenvalues) correspond to $\left\lbrace \boldsymbol{q}_d \right\rbrace_{d=D + 1 - K}^D$. If we define a $D \times K$ matrix $\mathbf{V} = \left[ \boldsymbol{q}_{D + 1 - K} ~ \boldsymbol{q}_{D + 2 - K} ~\cdots~ \boldsymbol{q}_D \right]$ then we can find the projections of a (mean normalised) input vector on to the selected $K$ principle components as $\boldsymbol{h} = \mathbf{V}^{\rm T}\left( \boldsymbol{x} - \boldsymbol{\mu}\right)$. We can then use these principle component projections to form a reconstruction of the original input just in terms of the $K$ top principle components using $\boldsymbol{r} = \mathbf{V} \boldsymbol{h} + \boldsymbol{\mu}$. We can see that this is just a sequence of two affine transformations and so is directly analagous to a model with two affine layers and with $K$ dimensional outputs of the first layer / inputs to second. The function defined in the cell below will calculate the PCA solution for a set of input vectors and a defined number of components $K$. Use it to calculate the top 50 principle components of the MNIST training data. Use the returned matrix and mean vector to calculate the PCA based reconstructions of a batch of 50 MNIST images and use the `show_batch_of_images` function to plot both the original and reconstructed inputs alongside each other. Also calculate the sum of squared differences error for the PCA solution on the MNIST training set and compare to the figure you got by gradient descent based training above. Will the gradient based training produce the same hidden representations as the PCA solution if it is trained to convergence? 
``` def get_pca_parameters(inputs, num_components=50): mean = inputs.mean(0) inputs_zm = inputs - mean[None, :] covar = np.einsum('ij,ik', inputs_zm, inputs_zm) eigvals, eigvecs = np.linalg.eigh(covar) return eigvecs[:, -num_components:], mean V, mu = get_pca_parameters(train_data.inputs) hiddens = (inputs - mu[None, :]).dot(V) recons = hiddens.dot(V.T) + mu[None, :] _ = show_batch_of_images(inputs.reshape((-1, 28, 28)), (4, 2), 5) _ = show_batch_of_images(recons.reshape((-1, 28, 28)), (4, 2), 5) hiddens = (train_data.inputs - mu[None, :]).dot(V) recons = hiddens.dot(V.T) + mu[None, :] print(error(train_data.inputs, recons)) ``` ## Exercise 2: Non-linear <s>contractive</s> autoencoders Those who did the extension in the previous exercise will have just seen that for an autoencoder with both linear / affine encoder and decoders, there is an analytic solution for the parameters which minimise a sum of squared differences error. In general the advantage of using gradient-based training methods is that it allows us to use non-linear models for which there is no analytic solution for the optimal parameters. The hope is the use of non-linear transformations between the affine transformation layers will increase the representational power of the model (a sequence of affine transformations applied without any interleaving non-linear operations can always be represented by a single affine transformation). Train a autoencoder with an initial affine layer (output dimension again 50) followed by a rectified linear layer, then an affine transformation projecting to outputs of same dimension as the original inputs, and finally a logistic sigmoid layer at the output. As the only layers with parameters are the two affine layers which have the same dimensions as in the fully affine model above, the overall model here has the same number of parameters as previously. Again train for 25 epochs with 50 training examples per batch and use a uniform Glorot initialisation for the weights, and zero biases initialisation. Use our implementation of the 'Adam' adaptive moments learning rule (available in `mlp.learning_rules` as `AdamLearningRule`) rather than basic gradient descent here (the adaptivity helps deal with the varying appropriate scale of updates induced by the non-linear transformations in this model). ``` input_dim, output_dim, hidden_dim = 784, 784, 50 weights_init = initialisers.GlorotUniformInit(rng=rng) biases_init = initialisers.ConstantInit(0.) model = models.MultipleLayerModel([ layers.AffineLayer(input_dim, hidden_dim, weights_init, biases_init), layers.ReluLayer(), layers.AffineLayer(hidden_dim, output_dim, weights_init, biases_init), layers.SigmoidLayer() ]) error = errors.SumOfSquaredDiffsError() learning_rule = learning_rules.AdamLearningRule() num_epochs = 25 stats_interval = 1 optimiser = optimisers.Optimiser( model, error, learning_rule, train_data, valid_data) stats, keys, run_time = optimiser.train(num_epochs=num_epochs, stats_interval=stats_interval) # Plot the change in the validation and training set error over training. fig_1 = plt.figure(figsize=(8, 4)) ax_1 = fig_1.add_subplot(111) for k in ['error(train)', 'error(valid)']: ax_1.plot(np.arange(1, stats.shape[0]) * stats_interval, stats[1:, keys[k]], label=k) ax_1.legend(loc=0) ax_1.set_xlabel('Epoch number') ``` Plot batches of the inputs and reconstructed inputs for this non-linear autoencoder model and compare to the corresponding plots for the linear models above. 
``` inputs, targets = valid_data.next() recons = model.fprop(inputs)[-1] _ = show_batch_of_images(inputs.reshape((-1, 28, 28)), (4, 2), 5) _ = show_batch_of_images(recons.reshape((-1, 28, 28)), (4, 2), 5) ``` ## Exercise 3: Denoising autoencoders So far we have just considered autoencoders that try to reconstruct the input vector fed into them via some intermediate lower-dimensional 'contracted' representation. The contraction is important as if we were to mantain the input dimensionality in all layers of the model a trivial optima for the model to learn would be to apply an identity transformation at each layer. It can be desirable for the intermediate hidden representation to be robust to noise in the input. The intuition is that this will force the model to learn to maintain the 'important structure' in the input in the hidden representation (that needed to reconstruct the input). This also removes the requirement to have a contracted hidden representation (as the model can no longer simply learn to apply an identity transformation) though in practice we will still often use a lower-dimensional hidden representation as we believe there is a certain level of redundancy in the input data and so the important structure can be represented with a lower dimensional representation. Create a new data provider object which adds to noise to the inputs to an autoencoder in each batch it returns. There are various different ways you could introduce noise. The three suggested in the lecture slides are * *Gaussian*: add independent, zero-mean Gaussian noise of a fixed standard-deviation to each dimension of the input vectors. * *Masking*: generate a random binary mask and perform an elementwise multiplication with each input (forcing some subset of the values to zero). * *Salt and pepper*: select a random subset of values in each input and randomly assign either zero or one to them. You should choose one of these noising schemes to implement. It may help to know that the base `DataProvider` object already has access to a random number generator object as its `self.rng` attribute. ``` class MNISTDenoisingAutoencoderDataProvider(data_providers.MNISTDataProvider): """Simple wrapper data provider for training a denoising autoencoder on MNIST.""" def next(self): """Returns next data batch or raises `StopIteration` if at end.""" inputs, targets = super( MNISTDenoisingAutoencoderDataProvider, self).next() noised_inputs = (self.rng.uniform(size=inputs.shape) < 0.75) * inputs return noised_inputs, inputs ``` Once you have implemented your chosen scheme, use the new data provider object to train a denoising autoencoder with the same model architecture as in exercise 2. ``` # Create data provider objects for the MNIST data set train_data = MNISTDenoisingAutoencoderDataProvider('train', batch_size=50, rng=rng) valid_data = MNISTDenoisingAutoencoderDataProvider('valid', batch_size=50, rng=rng) input_dim, output_dim, hidden_dim = 784, 784, 50 weights_init = initialisers.GlorotUniformInit(rng=rng) biases_init = initialisers.ConstantInit(0.) 
model = models.MultipleLayerModel([
    layers.AffineLayer(input_dim, hidden_dim, weights_init, biases_init),
    layers.ReluLayer(),
    layers.AffineLayer(hidden_dim, output_dim, weights_init, biases_init),
    layers.SigmoidLayer()
])

error = errors.SumOfSquaredDiffsError()

learning_rule = learning_rules.AdamLearningRule()

num_epochs = 25
stats_interval = 1

optimiser = optimisers.Optimiser(
    model, error, learning_rule, train_data, valid_data)

stats, keys, run_time = optimiser.train(num_epochs=num_epochs, stats_interval=stats_interval)

# Plot the change in the validation and training set error over training.
fig_1 = plt.figure(figsize=(8, 4))
ax_1 = fig_1.add_subplot(111)
for k in ['error(train)', 'error(valid)']:
    ax_1.plot(np.arange(1, stats.shape[0]) * stats_interval,
              stats[1:, keys[k]], label=k)
ax_1.legend(loc=0)
ax_1.set_xlabel('Epoch number')
```

Use the `show_batch_of_images` function from above to visualise a batch of noisy inputs from your data provider implementation and the denoised reconstructions from your trained denoising autoencoder.

```
inputs, targets = valid_data.next()
recons = model.fprop(inputs)[-1]
_ = show_batch_of_images(inputs.reshape((-1, 28, 28)), (4, 2), 5)
_ = show_batch_of_images(recons.reshape((-1, 28, 28)), (4, 2), 5)
```

## Exercise 4: Using an autoencoder as an initialisation for supervised training

As a final exercise we will use the first layer of an autoencoder for MNIST digit images as a layer within a multiple layer model trained to do digit classification. The intuition behind pretraining methods like this is that the hidden representations learnt by an autoencoder should be a more useful representation for training a classifier than the raw pixel values themselves. We could fix the parameters in the layers taken from the autoencoder, but generally we can get better performance by letting the whole model be trained end-to-end on the supervised training task, with the learnt autoencoder parameters in this case acting as a potentially more intelligent initialisation than randomly sampling the parameters, which can help ease some of the optimisation issues encountered due to poor initialisation of a model.

You can either use one of the autoencoder models you trained in the previous exercises, or train a new autoencoder model specifically for this exercise. Create a new model object (an instance of `mlp.models.MultipleLayerModel`) in which the first layer(s) of the list of layers passed to the model constructor are the trained first layer(s) from your autoencoder model (these can be accessed via the `layers` attribute, which is a list of all the layers in a model). Add any additional layers you wish to the pretrained layers - at the very least you will need to add an output layer with output dimension 10 to allow the model to be used to predict class labels. Train this new model on the original MNIST image, digit label pairs with a cross entropy error.

```
ae_model = model

train_data = data_providers.MNISTDataProvider('train', batch_size=50, rng=rng)
valid_data = data_providers.MNISTDataProvider('valid', batch_size=50, rng=rng)

input_dim, output_dim, hidden_dim = 784, 10, 50

weights_init = initialisers.GlorotUniformInit(rng=rng)
biases_init = initialisers.ConstantInit(0.)
model = models.MultipleLayerModel([ ae_model.layers[0], layers.ReluLayer(), layers.AffineLayer(hidden_dim, output_dim, weights_init, biases_init) ]) error = errors.CrossEntropySoftmaxError() learning_rule = learning_rules.AdamLearningRule() num_epochs = 25 stats_interval = 1 data_monitors={'acc': lambda y, t: (y.argmax(-1) == t.argmax(-1)).mean()} optimiser = optimisers.Optimiser( model, error, learning_rule, train_data, valid_data, data_monitors) stats, keys, run_time = optimiser.train(num_epochs=num_epochs, stats_interval=stats_interval) # Plot the change in the validation and training set error over training. fig_1 = plt.figure(figsize=(8, 4)) ax_1 = fig_1.add_subplot(111) for k in ['error(train)', 'error(valid)']: ax_1.plot(np.arange(1, stats.shape[0]) * stats_interval, stats[1:, keys[k]], label=k) ax_1.legend(loc=0) ax_1.set_xlabel('Epoch number') ```
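To get a rough idea of how much the autoencoder initialisation helps, one could train the same classifier architecture from a purely random initialisation and compare the final validation accuracies. The sketch below is a hypothetical addition, not part of the original exercise; it reuses the objects defined above and assumes the accuracy monitor is exposed under the key `acc(valid)`, mirroring the naming of the error keys:

```
# Hypothetical baseline: identical architecture, but with a randomly
# initialised first affine layer instead of the pretrained autoencoder layer.
baseline_model = models.MultipleLayerModel([
    layers.AffineLayer(input_dim, hidden_dim, weights_init, biases_init),
    layers.ReluLayer(),
    layers.AffineLayer(hidden_dim, output_dim, weights_init, biases_init)
])

baseline_optimiser = optimisers.Optimiser(
    baseline_model, error, learning_rules.AdamLearningRule(),
    train_data, valid_data, data_monitors)
baseline_stats, baseline_keys, _ = baseline_optimiser.train(
    num_epochs=num_epochs, stats_interval=stats_interval)

# Compare final validation accuracies (key name assumed, see above).
print('pretrained  final acc(valid): {0:.3f}'.format(stats[-1, keys['acc(valid)']]))
print('random init final acc(valid): {0:.3f}'.format(baseline_stats[-1, baseline_keys['acc(valid)']]))
```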
# mapping-challenge-mask_rcnn-training

![CrowdAI-Logo](https://github.com/crowdAI/crowdai/raw/master/app/assets/images/misc/crowdai-logo-smile.svg?sanitize=true)

This notebook contains the baseline code for training a vanilla [Mask RCNN](https://arxiv.org/abs/1703.06870) model for the [crowdAI Mapping Challenge](https://www.crowdai.org/challenges/mapping-challenge). This code is adapted from the Mask RCNN TensorFlow implementation available here: [https://github.com/matterport/Mask_RCNN](https://github.com/matterport/Mask_RCNN).

First we begin by importing all the necessary dependencies:

```
import os
import sys
import time
import numpy as np

# Download and install the Python COCO tools from https://github.com/waleedka/coco
# That's a fork from the original https://github.com/pdollar/coco with a bug
# fix for Python 3.
# I submitted a pull request https://github.com/cocodataset/cocoapi/pull/50
# If the PR is merged then use the original repo.
# Note: Edit PythonAPI/Makefile and replace "python" with "python3".
#
# A quick one liner to install the library
# !pip install git+https://github.com/waleedka/coco.git#subdirectory=PythonAPI

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from pycocotools import mask as maskUtils

from evaluate import build_coco_results, evaluate_coco
from dataset import SpaceNetChallengeDataset

import zipfile
import urllib.request
import shutil
```

## Dataset location

Now we have to download all the files in the datasets section and untar them to have the following structure:

```
├── data
|   ├── pretrained_weights.h5 (already included in this repository)
│   ├── test
│   │   └── images/
│   │   └── annotation.json
│   ├── train
│   │   └── images/
│   │   └── annotation.json
│   └── val
│       └── images/
│       └── annotation.json
```

Note that `pretrained_weights.h5` (available at [https://www.crowdai.org/challenges/mapping-challenge/dataset_files](https://www.crowdai.org/challenges/mapping-challenge/dataset_files)) contains the weights used for the baseline submission, obtained by running the learning schedule mentioned later in the experiment. In the said experiment, the initial weights used can be found [here](https://github.com/matterport/Mask_RCNN/releases/download/v2.1/mask_rcnn_balloon.h5).

```
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn.config import Config
from mrcnn import model as modellib, utils

PRETRAINED_MODEL_PATH = os.path.join(ROOT_DIR, "data/" "pretrained_weights.h5")
LOGS_DIRECTORY = os.path.join(ROOT_DIR, "logs")
```

## Experiment Configuration

```
from dataset import SpaceNetChallengeConfig
config = SpaceNetChallengeConfig()
config.display()
```

## Instantiate Model

```
model = modellib.MaskRCNN(mode="training", config=config, model_dir=LOGS_DIRECTORY)

# Load pretrained weights
model_path = PRETRAINED_MODEL_PATH
model.load_weights(model_path, by_name=True)
```

## Load Training and Validation Dataset

```
# Load training dataset
dataset_train = SpaceNetChallengeDataset()
dataset_train.load_dataset(dataset_dir="../../data", subset="train")
dataset_train.prepare()

# Load validation dataset
dataset_val = SpaceNetChallengeDataset()
val_coco = dataset_val.load_dataset(dataset_dir="../../data", subset="val")
dataset_val.prepare()
```

## Train

```
# *** This training schedule is an example.
Update to your needs *** from imgaug import augmenters as iaa from imgaug import parameters as iap # Inspired by SIMDRWN/YOLT: https://github.com/CosmiQ/simrdwn/blob/master/core/yolt_data_prep_funcs.py#L1003-L1182 augmentation = iaa.Sequential([ iaa.WithColorspace(to_colorspace="HSV", from_colorspace="RGB", children=[ iaa.WithChannels([0,1], iaa.Multiply((0.5, 1.5))), iaa.WithChannels(2, iaa.Multiply((0.7, 1.3))) ]), iaa.OneOf([ iaa.Flipud(1), iaa.Fliplr(1), iaa.Affine(rotate=iap.Uniform(0, 90)), iaa.Affine(rotate=90), iaa.Affine(rotate=iap.Uniform(90, 180)), iaa.Affine(rotate=180), iaa.Affine(rotate=iap.Uniform(180, 270)), iaa.Affine(rotate=270), iaa.Affine(rotate=iap.Uniform(270, 360)), iaa.Affine(rotate=360), ]) ]) # Training - Stage 1 print("Training network heads") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=40, layers='heads', augmentation=augmentation) # Training - Stage 2 # Finetune layers from ResNet stage 4 and up print("Fine tune Resnet stage 4 and up") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=120, layers='4+', augmentation=augmentation) # Training - Stage 3 # Fine tune all layers print("Fine tune all layers") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE / 10, epochs=160, layers='all', augmentation=augmentation) ``` Now you can monitor the training by running : ``` tensorboard --logdir=logs/[path-to-your-experiment-logdir] ``` and if everything works great, you should see something like : ![loss-plot](../../images/loss-plot.png) # Author Sharada Mohanty [sharada.mohanty@epfl.ch](sharada.mohanty@epfl.ch)
``` # link colab to google drive directory where this project data is placed from google.colab import drive drive.mount('/content/gdrive', force_remount=True) import numpy as np import tensorflow as tf print(tf.__version__) # set project path projectpath = "/content/gdrive/My Drive/GraphAttnProject/ErdosRanyiSubmission/" print(projectpath) #print(datareadpath) %cd /content/gdrive/My Drive/GraphAttnProject/ErdosRanyiSubmission/ !pip install dgl import os os.chdir(projectpath) os.getcwd() from CodeZip_ER import * ``` # Caveman Graph ``` name = 'CircularLadder' num_train = 768 num_val = 256 num_test = 256 if name == 'Caveman': p_base_er = 0.027 p_combine = 0.004 m = 0 elif name == 'Cycle': p_base_er = 0.018 p_combine = 0.004 m = 1 elif name == 'Grid': p_base_er = 0.026666 p_combine = 0.004 m = 3 elif name == 'Ladder': p_base_er = 0.026 p_combine = 0.004 m = 4 elif name == 'CircularLadder': p_base_er = 0.03 p_combine = 0.004 m = 5 info = generate_graphs_labels_ER(m=m, n_min = 100, n_max =101, num_train = num_train, num_val = num_val, num_test = num_test, p_base_er = p_base_er, p_combine = p_combine, all_connected1 = False, all_connected2 = True) #GWK_masking = generate_masking_GWK(all_train_graphs_shuffled, all_val_graphs_shuffled, all_test_graphs_shuffled, path_length, num_random_walk, stopping_prob, p, q) GAT_masking = generate_masking_GAT(info[0], info[2], info[4]) train_graphs, train_labels, val_graphs, val_labels, test_graphs, test_labels = info # save random walk lists as pickle file a_file = open(f"graph_data/{name}/train_graphs.pkl", "wb") pickle.dump(train_graphs, a_file) a_file.close() a_file = open(f"graph_data/{name}/val_graphs.pkl", "wb") pickle.dump(val_graphs, a_file) a_file.close() with open(f"graph_data/{name}/train_labels.npy", 'wb') as f: np.save(f, train_labels) with open(f"graph_data/{name}/val_labels.npy", 'wb') as f: np.save(f, val_labels) ``` ### generate random walks ``` # generate random walks for GKAT from deepwalk import OnlyWalk path_length = 5 num_random_walk= 50 def generate_walks_GKAT(graphs, num_random_walk, path_length, stopping_prob = 0.0, p=1, q=1, ignore_start = False): walks = [] print('Start generating GWK masking') print("walk length = ", path_length) print("number of random walks = ", num_random_walk) for i in tqdm(range(len(graphs))): graph = (graphs[i]) n2v = OnlyWalk.Node2vec_onlywalk(graph = graph, path_length=path_length, num_paths=num_random_walk, p=p, q=q, stop_prob = stopping_prob, with_freq_mat = True) walks.append(n2v.walker.walks_dict) return walks # start random walks GKAT_walks_train = generate_walks_GKAT(train_graphs, path_length = path_length, num_random_walk = num_random_walk, stopping_prob = 0, p = 1, q= 1, ignore_start = False) GKAT_walks_val = generate_walks_GKAT(val_graphs, path_length = path_length, num_random_walk = num_random_walk, stopping_prob = 0, p = 1, q= 1, ignore_start = False) # save random walk lists as pickle file a_file = open(f"graph_data/{name}/GKAT_walks_dict_train.pkl", "wb") pickle.dump(GKAT_walks_train, a_file) a_file.close() a_file = open(f"graph_data/{name}/GKAT_walks_dict_val.pkl", "wb") pickle.dump(GKAT_walks_val, a_file) a_file.close() GKAT_walks_train = pickle.load(open(f"graph_data/{name}/GKAT_walks_dict_train.pkl", 'rb')) GKAT_walks_val = pickle.load(open(f"graph_data/{name}/GKAT_walks_dict_val.pkl", 'rb')) ``` ### generate random walk frequency matrix and GKAT dot product kernel ``` def generate_trunc_walks_dict(walks, trunc_len): print("generating walks with length = ", trunc_len) num_random_walk = 
len(walks[0][0]) trunc_walks = [] num_graphs = len(walks) for i in range(num_graphs): g_dict = {} num_nodes = len(walks[i]) for j in range(num_nodes): walklist = [] for k in range(num_random_walk): walklist.append(walks[i][j][k][:trunc_len]) g_dict[j] = walklist trunc_walks.append(g_dict) return trunc_walks def generate_frequency_matrix_and_masking_GKAT(walks_dict): num_graphs = len(walks_dict) num_random_walk = len(walks_dict[0][0]) walk_length = len(walks_dict[0][0][0]) freq_mat_list = [] dot_kernel_list = [] for graph in tqdm(walks_dict): num_nodes = len(graph) freq_mat = np.zeros([num_nodes, num_nodes]) for key in graph: for i in range(num_random_walk): for j in range(walk_length): freq_mat[int(key),int(graph[key][i][j])] +=1 freq_mat /= num_random_walk dot_prod = np.matmul(freq_mat, np.transpose(freq_mat)) # divide the dot_prod kernel by the norm of the kernel deno = np.matmul(np.diagonal(dot_prod)[:, None], np.transpose(np.diagonal(dot_prod)[:, None])) dot_kernel = dot_prod / np.sqrt(deno) #np.diagonal(dot_prod)[:, None] freq_mat_list.append(freq_mat* num_random_walk) dot_kernel_list.append(dot_kernel) return freq_mat_list, dot_kernel_list # generate random walk frequency matrix and GKAT dot product kernel with different random walk lengths # the generated data are saved also as pickle files for trunc_len in range(4,5): trunc_walks_train = generate_trunc_walks_dict(GKAT_walks_train, trunc_len) trunc_walks_val = generate_trunc_walks_dict(GKAT_walks_val, trunc_len) freq_mat_list, dot_kernel_list = generate_frequency_matrix_and_masking_GKAT(trunc_walks_train) a_file = open(f"graph_data/{name}/GKAT_freq_mats_train_len={trunc_len}.pkl", "wb") pickle.dump(freq_mat_list, a_file) a_file.close() a_file = open(f"graph_data/{name}/GKAT_dot_kernels_train_len={trunc_len}.pkl", "wb") pickle.dump(dot_kernel_list, a_file) a_file.close() freq_mat_list, dot_kernel_list = generate_frequency_matrix_and_masking_GKAT(trunc_walks_val) a_file = open(f"graph_data/{name}/GKAT_freq_mats_val_len={trunc_len}.pkl", "wb") pickle.dump(freq_mat_list, a_file) a_file.close() a_file = open(f"graph_data/{name}/GKAT_dot_kernels_val_len={trunc_len}.pkl", "wb") pickle.dump(dot_kernel_list, a_file) a_file.close() GAT_masking = generate_masking_GAT(info[0], info[2], info[4]) a_file = open(f"graph_data/{name}/GAT_masking_train.pkl", "wb") pickle.dump(GAT_masking[0], a_file) a_file.close() a_file = open(f"graph_data/{name}/GAT_masking_val.pkl", "wb") pickle.dump(GAT_masking[1], a_file) a_file.close() ```
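As a quick sanity check on `generate_frequency_matrix_and_masking_GKAT`, a toy walks dictionary (hypothetical: one graph, two nodes, two random walks of length three per node) can be pushed through it; by construction the normalised dot-product kernel should then have ones on its diagonal:

```
# Toy example, illustration only (not part of the experiment pipeline above).
toy_walks = [{0: [[0, 1, 1], [0, 0, 1]],
              1: [[1, 0, 0], [1, 1, 0]]}]
toy_freq_mats, toy_kernels = generate_frequency_matrix_and_masking_GKAT(toy_walks)
print(toy_freq_mats[0])             # raw visit counts per (start node, visited node)
print(np.diagonal(toy_kernels[0]))  # should be all ones after normalisation
```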
# Notebook for testing the TensorFlow 2.0 setup

This notebook is for testing the [TensorFlow](https://www.tensorflow.org/) setup using the [Keras API](https://keras.io/).

Below is a set of required imports. Run the cell, and no error messages should appear. In particular, **TensorFlow 2 is required**. Some warnings may appear; this should be fine.

```
%matplotlib inline

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras.datasets import mnist, fashion_mnist, imdb

import os
if not os.path.isfile('pml_utils.py'):
    !wget https://raw.githubusercontent.com/csc-training/intro-to-dl/master/day1/pml_utils.py
from pml_utils import show_failures

from sklearn.model_selection import train_test_split
from distutils.version import LooseVersion as LV

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

print('Using Tensorflow version: {}, and Keras version: {}.'.format(tf.__version__, tf.keras.__version__))
assert(LV(tf.__version__) >= LV("2.0.0"))
```

Let's check if we have a GPU available.

```
gpus = tf.config.list_physical_devices('GPU')
if len(gpus) > 0:
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    from tensorflow.python.client import device_lib
    for d in device_lib.list_local_devices():
        if d.device_type == 'GPU':
            print('GPU', d.physical_device_desc)
else:
    print('No GPU, using CPU instead.')
```

## Getting started: 30 seconds to Keras

(This section is adapted from https://keras.io/)

The core data structure of Keras is a *Model*, a way to organize layers. While there are several ways to create Models in Keras, we will be using the [*functional* API](https://keras.io/guides/functional_api/).

We start by creating an input layer:

```
inputs = keras.Input(shape=(100,))
```

We create further layers by calling a specific layer on its input object:

```
x = layers.Dense(units=64, activation="relu")(inputs)
outputs = layers.Dense(units=10, activation="softmax")(x)
```

Then we can create a Model by specifying its inputs and outputs:

```
model = keras.Model(inputs=inputs, outputs=outputs, name="test_model")
```

A summary of the model:

```
print(model.summary())
```

Let's draw a fancier graph of our model:

```
plot_model(model, show_shapes=True)
```

Once your model looks good, configure its learning process with `.compile()`:

```
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
```

You can now begin training your model with `.fit()`. Let's generate some random data and use it to train the model:

```
X_train = np.random.rand(128, 100)
Y_train = to_categorical(np.random.randint(10, size=128))

model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2);
```

Evaluate your performance on test data with `.evaluate()`:

```
X_test = np.random.rand(64, 100)
Y_test = to_categorical(np.random.randint(10, size=64))

loss, acc = model.evaluate(X_test, Y_test, batch_size=32)
print()
print('loss:', loss, 'acc:', acc)
```

---

*Run this notebook in Google Colaboratory using [this link](https://colab.research.google.com/github/csc-training/intro-to-dl/blob/master/day1/01-tf2-test-setup.ipynb).*
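As one last optional check (an illustration, not part of the original test notebook), the trained toy model can also be used for prediction with the standard Keras `predict` method:

```
# Class probabilities for the first five random test vectors.
predictions = model.predict(X_test[:5])
print(predictions.shape)            # expected shape: (5, 10)
print(predictions.argmax(axis=1))   # predicted class indices
```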
## Background

In this article, we will use a [softmax](https://en.wikipedia.org/wiki/Softmax_function) classifier to build a simple image classification neural network that reaches an accuracy of about 32%. A softmax classifier generalises binary logistic regression to multiple classes, and outputs a probability for each category. We will first define the softmax classifier, then train the neural network on the training set of [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html), and finally use the test set to verify the accuracy of the neural network. Let's get started.

## Import dependencies

Like the previous course [GettingStarted](https://thoughtworksinc.github.io/DeepLearning.scala/demo/GettingStarted.html), we need to import each class of DeepLearning.scala.

```
import $plugin.$ivy.`com.thoughtworks.implicit-dependent-type::implicit-dependent-type:2.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiableany:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiablenothing:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiableseq:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiabledouble:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiablefloat:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiablehlist:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiablecoproduct:1.0.0`
import $ivy.`com.thoughtworks.deeplearning::differentiableindarray:1.0.0`
import $ivy.`org.nd4j:nd4j-native-platform:0.7.2`
import $ivy.`org.rauschig:jarchivelib:0.5.0`
import $ivy.`org.plotly-scala::plotly-jupyter-scala:0.3.0`

import java.io.{FileInputStream, InputStream}

import com.thoughtworks.deeplearning
import org.nd4j.linalg.api.ndarray.INDArray
import com.thoughtworks.deeplearning.DifferentiableHList._
import com.thoughtworks.deeplearning.DifferentiableDouble._
import com.thoughtworks.deeplearning.DifferentiableINDArray._
import com.thoughtworks.deeplearning.DifferentiableAny._
import com.thoughtworks.deeplearning.DifferentiableINDArray.Optimizers._
import com.thoughtworks.deeplearning.{
  DifferentiableHList,
  DifferentiableINDArray,
  Layer,
  Symbolic
}
import com.thoughtworks.deeplearning.Layer.Tape
import com.thoughtworks.deeplearning.Symbolic.Layers.Identity
import com.thoughtworks.deeplearning.Symbolic._
import com.thoughtworks.deeplearning.Poly.MathFunctions._
import com.thoughtworks.deeplearning.Poly.MathMethods./
import com.thoughtworks.deeplearning.Poly.MathOps
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4j.linalg.factory.Nd4j
import org.nd4j.linalg.indexing.{INDArrayIndex, NDArrayIndex}
import org.nd4j.linalg.ops.transforms.Transforms
import org.nd4s.Implicits._
import shapeless._
import plotly._
import plotly.element._
import plotly.layout._
import plotly.JupyterScala._

import scala.collection.immutable.IndexedSeq
```

To reduce the number of lines outputted by `jupyter-scala` and to make sure that the page output will not be too long, we need to set `pprintConfig`.

```
pprintConfig() = pprintConfig().copy(height = 2)
```

## Build the neural network
### Write softmax

To use a softmax classifier (a softmax classifier is a neural network combining `softmax` with a fully connected layer), we first need to write the softmax function. Formula:

$$f_j(z)=\frac{e^{z_j}}{\sum_k e^{z_k}}$$

```
def softmax(implicit scores: INDArray @Symbolic): INDArray @Symbolic = {
  val expScores = exp(scores)
  expScores / expScores.sum(1)
}
```

### Set learning rate

A learning rate needs to be set for the fully connected layer. The learning rate describes how quickly the `weight` changes. A learning rate that is too low results in a slow decrease of the `loss`, which requires a longer training time; a learning rate that is too high results in a rapid decrease of the `loss` at first, but fluctuation around the lowest point afterwards.

```
implicit def optimizer: Optimizer = new LearningRate {
  def currentLearningRate() = 0.00001
}
```

### Combine neural network

Define a fully connected layer and [initialize Weight](https://github.com/ThoughtWorksInc/DeepLearning.scala/wiki/Getting-Started#231--weight-intialization). `Weight` shall be a two-dimensional `INDArray` of shape `NumberOfPixels × NumberOfClasses`. `scores` holds the score of each image for each category, representing the probability of each category for each image.

```
//10 label of CIFAR10 images(airplane,automobile,bird,cat,deer,dog,frog,horse,ship,truck)
val NumberOfClasses: Int = 10
val NumberOfPixels: Int = 3072

def createMyNeuralNetwork(implicit input: INDArray @Symbolic): INDArray @Symbolic = {
  val initialValueOfWeight = Nd4j.randn(NumberOfPixels, NumberOfClasses) * 0.001
  val weight: INDArray @Symbolic = initialValueOfWeight.toWeight
  val scores: INDArray @Symbolic = input dot weight
  softmax.compose(scores)
}

val myNeuralNetwork = createMyNeuralNetwork
```

### Combine LossFunction

To learn about the prediction result of the neural network, we need to write the loss function `lossFunction`. We use the [cross-entropy loss](https://en.wikipedia.org/wiki/Cross_entropy) to compare the prediction with the actual result and return a score. Formula:

$$H(p,q)=-\sum_x p(x)\log q(x)$$

```
def lossFunction(implicit pair: (INDArray :: INDArray :: HNil) @Symbolic): Double @Symbolic = {
  val input = pair.head
  val expectedOutput = pair.tail.head
  val probabilities = myNeuralNetwork.compose(input)
  -(expectedOutput * log(probabilities)).mean
}
```

## Prepare data

### Read data

To read the images and corresponding labels from the CIFAR10 dataset and process them, we need [`import $file.ReadCIFAR10ToNDArray`](https://github.com/ThoughtWorksInc/DeepLearning.scala-website/blob/master/ipynbs/ReadCIFAR10ToNDArray.sc). This is a script file that reads and processes the CIFAR10 data, provided in this course.

```
import $file.ReadCIFAR10ToNDArray

val trainNDArray = ReadCIFAR10ToNDArray.readFromResource("/cifar-10-batches-bin/data_batch_1.bin", 1000)
val testNDArray = ReadCIFAR10ToNDArray.readFromResource("/cifar-10-batches-bin/test_batch.bin", 100)
```

### Process data

Before passing data to the softmax classifier, we first process the label data with [one hot encoding](https://en.wikipedia.org/wiki/One-hot): transform the label `INDArray` of shape `NumberOfSamples × 1` into an `INDArray` of shape `NumberOfSamples × NumberOfClasses`. In each row, the column corresponding to the correct class is 1, and the values of the other columns are 0.
The reason for separating the training set and the test set is to make it clear whether the network is overtrained, which leads to [overfitting](https://en.wikipedia.org/wiki/Overfitting). While processing the label data, we used [Utils](https://github.com/ThoughtWorksInc/DeepLearning.scala-website/blob/master/ipynbs/Utils.sc), which is also provided in this course.

```
val trainData = trainNDArray.head
val testData = testNDArray.head

val trainExpectResult = trainNDArray.tail.head
val testExpectResult = testNDArray.tail.head

import $file.Utils

val vectorizedTrainExpectResult = Utils.makeVectorized(trainExpectResult, NumberOfClasses)
val vectorizedTestExpectResult = Utils.makeVectorized(testExpectResult, NumberOfClasses)
```

## Train the neural network

To observe the training process of the neural network, we output the `loss`; while training the neural network, the `loss` should be decreasing.

```
val lossSeq = for (iteration <- 0 until 2000) yield {
  val loss = lossFunction.train(trainData :: vectorizedTrainExpectResult :: HNil)
  if (iteration % 100 == 0) {
    println(s"at iteration $iteration loss is $loss")
  }
  loss
}

plotly.JupyterScala.init()

val plot = Seq(
  Scatter(lossSeq.indices, lossSeq)
)

plot.plot(
  title = "loss by time"
)
```

## Verify the neural network and predict the accuracy

We use the processed test data to verify the prediction result of the neural network and compute the accuracy. The accuracy should be about 32%.

```
val right = Utils.getAccuracy(myNeuralNetwork.predict(testData), testExpectResult)
println(s"the result is $right %")
```

## Summary

We have learned the following in this article:

* How to prepare and process the CIFAR10 data
* How to write a softmax classifier
* How to use the neural network built with the softmax classifier to predict the probability of each category for an image

[Complete code](https://github.com/izhangzhihao/deeplearning-tutorial/blob/master/src/main/scala/com/thoughtworks/deeplearning/tutorial/SoftmaxLinearClassifier.scala)
``` import pandas as pd import matplotlib.pyplot as plt import plotly.express as px from sklearn import preprocessing import seaborn as sns import textwrap df = pd.read_csv('../../../data/topic-matrices/pre-covid-pharma-companies.csv') df.set_index('org', inplace=True) # df['sum'] = df.sum(axis=1) # df['rowWiseMax'] = df[['COVID-19','Community Healthcare','Vaccination','Mental Health','Nutrition and Well-being','Health Research','Chronic Diseases','Medical Trials']].max(axis=1) # df['rowWiseMax'] = df[['Community Healthcare','Health Research','Chronic Diseases','Medical Trials','Customer Experience']].max(axis=1) # max = df.to_numpy().max() # new_df = df.copy() # new_df = new_df[['Health Research','Community Healthcare','Chronic Diseases','Medical Trials','Customer Experience']].div(new_df['sum'], axis=0) # new_df # minMaxScaler = preprocessing.MinMaxScaler() # df['Community Healthcare'] = minMaxScaler.fit_transform(df['Community Healthcare'].values.reshape(-1,1)) # df['Health Research'] = minMaxScaler.fit_transform(df['Health Research'].values.reshape(-1,1)) # df['Chronic Diseases'] = minMaxScaler.fit_transform(df['Chronic Diseases'].values.reshape(-1,1)) # df['Medical Trials'] = minMaxScaler.fit_transform(df['Medical Trials'].values.reshape(-1,1)) # df['Customer Experience'] = minMaxScaler.fit_transform(df['Customer Experience'].values.reshape(-1,1)) # df['COVID-19'] = minMaxScaler.fit_transform(df['COVID-19'].values.reshape(-1,1)) # df['Community Healthcare'] = minMaxScaler.fit_transform(df['Community Healthcare'].values.reshape(-1,1)) # df['Vaccination'] = minMaxScaler.fit_transform(df['Vaccination'].values.reshape(-1,1)) # df['Mental Health'] = minMaxScaler.fit_transform(df['Mental Health'].values.reshape(-1,1)) # df['Nutrition and Well-being'] = minMaxScaler.fit_transform(df['Nutrition and Well-being'].values.reshape(-1,1)) # df['Health Research'] = minMaxScaler.fit_transform(df['Health Research'].values.reshape(-1,1)) # df['Chronic Diseases'] = minMaxScaler.fit_transform(df['Chronic Diseases'].values.reshape(-1,1)) # df['Medical Trials'] = minMaxScaler.fit_transform(df['Medical Trials'].values.reshape(-1,1)) # new_df = df.copy() # new_df = new_df[['COVID-19','Community Healthcare','Vaccination','Mental Health','Nutrition and Well-being','Health Research','Chronic Diseases','Medical Trials']].div(max, axis=0) # new_df = new_df[['Community Healthcare','Health Research','Chronic Diseases','Medical Trials','Customer Experience']].div(max, axis=0) # new_df = new_df[['Community Healthcare','Health Research','Chronic Diseases','Medical Trials','Customer Experience']].div(df['rowWiseMax'], axis=0) # new_df plt.figure(figsize = (14,4)) plt.rcParams['axes.titlepad'] = 25 sns.set(font_scale=1.4) # ax = sns.heatmap(df, annot=True, cmap='Blues',fmt='g') ax = sns.heatmap(df, annot=True, cmap='Blues', fmt='g') plt.title('Topic distribution for Pharmaceutical Companies - Before COVID 19', fontdict={'fontsize':20}) plt.xlabel('List of topics') plt.ylabel('Organization') ax.xaxis.label.set_size(14) ax.set_xticklabels(textwrap.fill(x.get_text(), 11) for x in ax.get_xticklabels()) plt.xticks(rotation=5) plt.savefig('pharmaceutical-companies.pdf', transparent=True, bbox_inches='tight') plt.savefig('pharmaceutical-companies.png', transparent=True, bbox_inches='tight') plt.show() ```
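The commented-out experiments above explore different normalisations of the topic matrix; a compact alternative (an illustration, not part of the original figure) is to convert each organisation's row to percentages of its total before plotting, so that rows become directly comparable:

```
# Illustration only: plot row-wise percentages rather than raw values.
df_pct = df.div(df.sum(axis=1), axis=0) * 100
plt.figure(figsize=(14, 4))
ax = sns.heatmap(df_pct, annot=True, cmap='Blues', fmt='.1f')
plt.title('Topic distribution (row-normalised, % per organisation)')
plt.show()
```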
```
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import provid
import datetime
from provid.source import download, update

srcs = download(national_patient=False)
for name, src in srcs.items():
    src.parse()
```

# National aggregate data

```
srcs["county_case"].df
```

# National patient data

```
srcs["national_patient"].df
```

## Cases by age

```
srcs["national_patient"].df.age_group.value_counts(normalize=True) * 100
srcs["national_patient"].df.age_group.value_counts(normalize=True).plot.pie()
srcs["national_patient"].df.age_group.value_counts(normalize=True) * 100
```

## Deaths by age

```
srcs["national_patient"].df[srcs["national_patient"].df.death_yn == "Yes"].age_group.value_counts(normalize=True) * 100
srcs["national_patient"].df[srcs["national_patient"].df.death_yn == "Yes"].age_group.value_counts(normalize=True).plot.pie()
```

# County data

```
# Mercer, NJ
code = "021"
srcs["county_case"].df
cases = srcs["county_case"].df[srcs["county_case"].df.countyFIPS == 34021]
deaths = srcs["county_death"].df[srcs["county_death"].df.countyFIPS == 34021]
cases = cases.drop(labels=["countyFIPS", "County Name", "State", "stateFIPS"], axis=1)
deaths = deaths.drop(labels=["countyFIPS", "County Name", "State", "stateFIPS"], axis=1)
cases.T.plot()
deaths.T.plot()
```

# Princeton data

```
srcs["princeton"].df[["total_positive", "total_deaths"]].iloc[::-1].plot()

# Build a complete daily date index from 2020-01-01 through today.
start_date = datetime.date(2020, 1, 1)
end_date = datetime.date.today()
dates = pd.DataFrame(pd.to_datetime([start_date + datetime.timedelta(days=delta) for delta in range((end_date - start_date).days + 1)]))
dates.index = dates.iloc[:, 0]

local = {
    "total_deaths": srcs["princeton"].df.total_deaths.iloc[::-1].rename("total_deaths"),
    "total_cases": srcs["princeton"].df.total_positive.iloc[::-1].rename("total_cases"),
    "total_active": srcs["princeton"].df.active_positive.iloc[::-1].rename("total_active"),
    "total_tests": (srcs["princeton"].df.total_positive.iloc[::-1] + srcs["princeton"].df.total_negative.iloc[::-1]).rename("total_tests")
}

local_table = dates
for df in local:
    local_table = pd.merge(pd.DataFrame(local_table), local[df], how="outer", left_index=True, right_index=True)
for df in local:
    local_table[df].iloc[:np.argmin(local_table[df])] = 0
local_table.interpolate(method="time", limit_direction="both").total_cases.plot()

local_table = local_table.fillna(method="ffill")
del local_table[0]
local_diffs = local_table.diff()
local_diffs.columns = [col.replace("total", "new") for col in local_table.columns]
local_table = pd.concat([local_table, local_diffs], axis=1)
local_table.round().new_cases.plot()

# Rebuild the local table, this time interpolating instead of forward-filling.
local_table = dates
for df in local:
    local_table = pd.merge(pd.DataFrame(local_table), local[df], how="outer", left_index=True, right_index=True)
for df in local:
    local_table[df].iloc[:np.argmin(local_table[df])] = 0
local_table = local_table.interpolate(method="time", limit_direction="both")
del local_table[0]
local_diffs = local_table.diff()
local_diffs.columns = [col.replace("total", "new") for col in local_table.columns]
local_table = pd.concat([local_table, local_diffs], axis=1)
local_table

county = {
    "total_deaths": srcs["county_death"].df[srcs["county_death"].df.countyFIPS == 34021].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).T.iloc[:, 0].rename("total_deaths"),
    "total_cases": srcs["county_case"].df[srcs["county_case"].df.countyFIPS == 34021].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).T.iloc[:, 0].rename("total_cases"),
    # "total_active": srcs["princeton"].df.active_positive.iloc[::-1].rename("total_active"),
    # "total_tests": (srcs["princeton"].df.total_positive.iloc[::-1] + srcs["princeton"].df.total_negative.iloc[::-1]).rename("total_tests")
}

county_table = dates
for df in county:
    county_table = pd.merge(pd.DataFrame(county_table), county[df], how="outer", left_index=True, right_index=True)
for df in county:
    county_table[df].iloc[:np.argmin(county_table[df])] = 0
county_table = county_table.interpolate(method="time", limit_direction="both")
del county_table[0]
county_diffs = county_table.diff()
county_diffs.columns = [col.replace("total", "new") for col in county_table.columns]
county_table = pd.concat([county_table, county_diffs], axis=1)
county_table

deaths = {
    "local": srcs["princeton"].df.total_deaths.iloc[::-1].rename("local"),
    "county": srcs["county_death"].df[srcs["county_death"].df.countyFIPS == 34021].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).T.iloc[:, 0].rename("county"),
    "state": srcs["county_death"].df[srcs["county_death"].df.State == "NJ"].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).sum(axis=0).rename("state"),
    "national": srcs["county_death"].df.drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).sum(axis=0).rename("national")
}

deaths_table = None
for geo in ["local", "county", "state", "national"]:
    deaths[geo].index = pd.to_datetime(deaths[geo].index)
    if geo == "local":
        deaths_table = pd.DataFrame(deaths[geo])
    else:
        deaths_table = pd.merge(deaths_table, deaths[geo], how="outer", left_index=True, right_index=True)
for geo in ["local", "county", "state", "national"]:
    deaths_table[geo].iloc[:np.argmin(deaths_table[geo])] = 0
deaths_table = deaths_table.interpolate(method="time", limit_direction="both")
deaths_table.plot()

cases = {
    "local": srcs["princeton"].df.total_positive.iloc[::-1].rename("local"),
    "county": srcs["county_case"].df[srcs["county_case"].df.countyFIPS == 34021].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).T.iloc[:, 0].rename("county"),
    "state": srcs["county_case"].df[srcs["county_case"].df.State == "NJ"].drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).sum(axis=0).rename("state"),
    "national": srcs["county_case"].df.drop(columns=["countyFIPS", "County Name", "State", "stateFIPS"]).sum(axis=0).rename("national")
}

cases_table = None
for geo in ["local", "county", "state", "national"]:
    cases[geo].index = pd.to_datetime(cases[geo].index)
    if geo == "local":
        cases_table = cases[geo]
    else:
        cases_table = pd.merge(cases_table, cases[geo], how="outer", left_index=True, right_index=True)
for geo in ["local", "county", "state", "national"]:
    cases_table[geo].iloc[:np.argmin(cases_table[geo])] = 0
cases_table = cases_table.interpolate(method="time", limit_direction="both")
cases_table.local.plot()

national = srcs["national_data"].df
national.date = pd.to_datetime(national.date, format="%Y%m%d")
national

state = srcs["state_data"].df[srcs["state_data"].df.state == "NJ"]
state.index = pd.to_datetime(state.date, format="%Y%m%d")
state_table = pd.merge(pd.DataFrame(dates), state, how="outer", left_index=True, right_index=True)
state_table = state_table._get_numeric_data()
for column in state_table.columns:
    state_table[column].iloc[:np.argmin(state_table[column])] = 0
state_table = state_table.fillna(method="ffill")

# 7-day rolling positive test rate for NJ.
state_table["total_tests"] = state_table.positive + state_table.negative
state_table["rolling_total_tests"] = state_table.total_tests.rolling(7).mean()
state_table["rolling_positive_tests"] = state_table.positive.rolling(7).mean()
state_table["positive_test_rate"] = state_table.rolling_positive_tests / state_table.rolling_total_tests * 100
state_table.positive_test_rate.plot()
state_table.deathIncrease.plot()
state
```
```
import psycopg2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
import pprint

%matplotlib inline

pp = pprint.PrettyPrinter(indent=4).pprint

#https://xkcd.com/1098/
url = 'https://imgs.xkcd.com/comics/star_ratings.png'
from IPython.display import Image
Image(url, width=300, height=300)
```

### Get ratings data for all restaurant reviews

Get two sets of data: all users' reviews and elite user reviews. The supposition here is that elite users could be better than normal users in two ways:

1. They may have a more normal rating distribution.
2. Their reviews may be (overall) better quality.

```
conn = psycopg2.connect('dbname=yelp user=tlappas host=/var/run/postgresql')
cur = conn.cursor()

cur.execute("""
    select business.name, review.stars, review.review_text, user_info.user_id, user_info.elite
    from review, business, user_info
    where review.user_id = user_info.user_id
        and business.business_id = review.business_id
        and business.categories LIKE '%Restaurants%'
""")
all_reviews = pd.DataFrame(cur.fetchall(), columns=['name', 'stars', 'text', 'user_id', 'elite'])

cur.execute("""
    select business.name, review.stars, review.review_text, user_info.user_id, user_info.elite
    from review, business, user_info
    where review.user_id = user_info.user_id
        and business.business_id = review.business_id
        and business.categories LIKE '%Restaurants%'
        and length(user_info.elite) != 0
""")
elite_reviews = pd.DataFrame(cur.fetchall(), columns=['name', 'stars', 'text', 'user_id', 'elite'])
```

### Compare Total Reviews: All Users vs Elite Users

```
hist_data = [len(all_reviews), len(elite_reviews)]

fig, ax = plt.subplots(1, 1, figsize=(5,5))
ax.set(xlabel='Reviews', ylabel='Frequency')
ax.set(title='Review Counts')
ax.set(xticks=[1,2], xticklabels=['All Users','Elite Users'])
ax.set(xlim=[0,len(hist_data)+1])
ax.bar(range(1,len(hist_data)+1), hist_data, width=0.8, align='center')
plt.show()
```

### Plot Review Histograms

Imbalanced categories are an issue in the review dataset. Here we explore combinations of the star ratings for all users and elite users. It seems reasonable to combine star ratings into groups for classification.

```
#https://xkcd.com/1098/
url = 'https://imgs.xkcd.com/comics/star_ratings.png'
from IPython.display import Image
Image(url, width=300, height=300)

# Starting at 1 to cut off the dataframe row index
# Indexing from 1 to end to remove the 0 bin
all_counts = np.bincount(all_reviews.loc[1:,'stars'])[1:]
elite_counts = np.bincount(elite_reviews.loc[1:,'stars'])[1:]

# bincount returns a list of length max value + 1. If there's no 5-star rating the length will be 4. Etc.
all_counts = np.append(all_counts, ([0] * (5 - len(all_counts))))
elite_counts = np.append(elite_counts, ([0] * (5 - len(elite_counts))))

fig_titles = ['Star Ratings For All Users','Star Ratings For Elite Users']
plot_titles = ['1-5 Stars', '[1,2] vs [4,5]', '[1,3] vs [4,5]', '[1,4] vs [5]', '[1,2] vs [3,4] vs [5]']
tick_labels = [
    ['1', '2', '3', '4', '5'],
    ['1+2','4+5'],
    ['1+3','4+5'],
    ['1+4','5'],
    ['1+2','3+4','5']
]
datasets = [all_counts, elite_counts]

for i, data in enumerate(datasets):
    fig, ax = plt.subplots(1, len(data), sharey='row', figsize=(3*len(data),5))
    fig.suptitle(fig_titles[i])
    rating_combos = [
        data,
        [sum(data[:2]), sum(data[3:])],
        [sum(data[:3]), sum(data[3:])],
        [sum(data[:4]), data[4]],
        [sum(data[:2]), sum(data[2:4]), data[4]]
    ]
    for j, hist_data in enumerate(rating_combos):
        # Set universal attributes
        for k in range(ax.shape[0]):
            ax[k].set(xlabel='Stars')
        # Special behavior for first graph
        if j == 0:
            ax[j].set(ylabel='Frequency')
        # Plot
        ax[j].set(title=plot_titles[j])
        ax[j].set(xlim=[0,len(hist_data)+1])
        ax[j].set(xticks=range(1,len(hist_data)+1), xticklabels=tick_labels[j])
        ax[j].bar(range(1,len(hist_data)+1), hist_data, width=0.8, align='center')
```

### Review Length

Plot the review length (in characters) for all users and elite users. Review lengths are all between 1 and 5000 characters. Bin width is 50.

```
all_reviews_length = [len(review) for review in all_reviews.loc[1:, 'text']]
elite_reviews_length = [len(review) for review in elite_reviews.loc[1:, 'text']]

all_hist = np.bincount(np.append(all_reviews_length, [5000]))
elite_hist = np.bincount(np.append(elite_reviews_length, [5000]))
all_hist[-1] -= 1
elite_hist[-1] -= 1

fig, ax = plt.subplots(2, 1, figsize=(15,10))
length_data = [all_hist[1:], elite_hist[1:]]
plot_titles = ['All User Reviews\' Lengths', 'Elite Users\' Review Lengths']

for i, hist_data in enumerate(length_data):
    # Each array has 5000 bins, which is way more than necessary
    # Downsample to 100
    hist_data = hist_data.reshape(-1, 50).mean(axis=1)
    # Set universal attributes
    for k in range(ax.shape[0]):
        ax[k].set(xlabel='Review Length (chars)', ylabel='Frequency')
    # Plot
    ax[i].set(title=plot_titles[i])
    ax[i].set(xticks=[10, 30, 50, 70, 90], xticklabels=['500','1500','2500','3500','4500'])
    ax[i].set(xlim=[0,len(hist_data)+1])
    ax[i].bar(range(1,len(hist_data)+1), hist_data, width=0.8, align='center')

plt.show()
```
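The grouped-rating histograms above suggest collapsing the five star levels into coarser classes before training a classifier on the imbalanced data. A minimal sketch of one such mapping (the [1,2] vs [3,4] vs [5] grouping; the `stars` column name matches the frames built above, while the class labels are just illustrative):

```
import pandas as pd

# Toy frame with the same 'stars' column as all_reviews / elite_reviews.
reviews = pd.DataFrame({'stars': [1, 2, 3, 4, 5, 5, 4, 2]})

# Collapse 1-5 stars into three classes: 1+2, 3+4, and 5.
# pd.cut with right-closed bins: (0,2] -> low, (2,4] -> mid, (4,5] -> high.
reviews['rating_class'] = pd.cut(
    reviews['stars'],
    bins=[0, 2, 4, 5],
    labels=['low', 'mid', 'high'],
)
print(reviews['rating_class'].value_counts())
```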
<a href="https://colab.research.google.com/github/ViFLara/Statistics-and-Machine-Learning/blob/master/random_forest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import pandas as pd data = pd.read_csv("drive/MyDrive/creditcard.csv") data.head(10) # amount -> transaction value print(data.isna().sum()) n_transactions = data['Class'].count() n_frauds = data['Class'].sum() # sum of all rows n_no_frauds = n_transactions - n_frauds frauds_percentage = n_frauds / n_transactions no_frauds_percentage = n_no_frauds / n_transactions print("Number of transactions: ", n_transactions) print("Number of frauds: ", n_frauds, "%.2f" %(frauds_percentage * 100)) print("Number of transactions without frouds: ", n_no_frauds, "%.2f" %(no_frauds_percentage * 100)) from sklearn.model_selection import StratifiedShuffleSplit def run_validator(x, y): validator = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=0) for train_id, test_id in validator.split(x, y) : x_train, x_test = x[train_id], x[test_id] y_train, y_test = y[train_id], y[test_id] return x_train, x_test, y_train, y_test %%time from sklearn import tree def run_classifier(classifier, x_train, x_test, y_train): tree = classifier.fit(x_train, y_train) y_pred = tree.predict(x_test) # predict whether transactions are fraud or not return y_pred import matplotlib.pyplot as plt def save_tree(classifier, name): plt.figure(figsize=(200,100)) tree.plot_tree(classifier, filled=True, fontsize=14) plt.savefig(name) plt.close() from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import precision_score from sklearn.metrics import recall_score def validate_tree(y_test, y_pred): print(accuracy_score(y_test, y_pred)) print(confusion_matrix(y_test, y_pred)) print(precision_score(y_test, y_pred)) # fraud transactions print(recall_score(y_test, y_pred)) # number of fraud hits / number of transactions classified as frauds # validator execution x = data.drop('Class', axis=1).values y = data['Class'].values x_train, x_test, y_train, y_test = run_validator(x, y) # classifier execution of decision_tree_classifier decision_tree_classifier = tree.DecisionTreeClassifier() y_predict_decision_tree = run_classifier(decision_tree_classifier, x_train, x_test, y_train) # creation of the decision tree figure save_tree(decision_tree_classifier, 'tree_decision1.png') # decision tree validation validate_tree(y_test, y_predict_decision_tree) print(decision_tree_classifier) print(decision_tree_classifier.get_depth()) # classifier execution of decision_tree_classifier decision_tree_classifier = tree.DecisionTreeClassifier(max_depth=10, random_state=0) y_predict_decision_tree = run_classifier(decision_tree_classifier, x_train, x_test, y_train) # decision tree validation validate_tree(y_test, y_predict_decision_tree) # classifier execution of decision_tree_classifier decision_tree_classifier = tree.DecisionTreeClassifier(max_depth=10, random_state=0, min_samples_leaf=10) y_predict_decision_tree = run_classifier(decision_tree_classifier, x_train, x_test, y_train) # decision tree validation validate_tree(y_test, y_predict_decision_tree) %%time # classifier execution of decision_tree_classifier decision_tree_classifier = tree.DecisionTreeClassifier(max_depth=5, random_state=0) y_predict_decision_tree = run_classifier(decision_tree_classifier, x_train, x_test, y_train) # decision tree validation validate_tree(y_test, y_predict_decision_tree) %%time from 
sklearn.ensemble import RandomForestClassifier random_forest_classifier = RandomForestClassifier(n_estimators=50, random_state=0, max_depth=10) # n_estimators -> number of trees y_predict_random_forest = run_classifier(random_forest_classifier, x_train, x_test, y_train) save_tree(random_forest_classifier.estimators_[0], "random_forest1") save_tree(random_forest_classifier.estimators_[1], "random_forest2") validate_tree(y_test, y_predict_random_forest) %%time from sklearn.ensemble import AdaBoostClassifier adaboost_classifier = AdaBoostClassifier(random_state=0) y_predict_adaboost = run_classifier(adaboost_classifier, x_train, x_test, y_train) save_tree(adaboost_classifier.estimators_[0], "adaboost1") save_tree(adaboost_classifier.estimators_[1], "adaboost2") validate_tree(y_test, y_predict_adaboost) %%time from sklearn.ensemble import AdaBoostClassifier adaboost_classifier = AdaBoostClassifier(random_state=0, n_estimators=100) y_predict_adaboost = run_classifier(adaboost_classifier, x_train, x_test, y_train) validate_tree(y_test, y_predict_adaboost) %%time from sklearn.ensemble import AdaBoostClassifier adaboost_classifier = AdaBoostClassifier(random_state=0, n_estimators=200) y_predict_adaboost = run_classifier(adaboost_classifier, x_train, x_test, y_train) validate_tree(y_test, y_predict_adaboost) ```
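The cells above repeat the same fit/predict/validate sequence by hand for each classifier and hyperparameter setting. A compact alternative — a sketch that simply reuses the `run_classifier`, `validate_tree`, and train/test split defined earlier in this notebook — is to loop over a dictionary of configured models:

```
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

# A hypothetical shortlist of the configurations explored above, keyed by label.
models = {
    'decision_tree_depth10': tree.DecisionTreeClassifier(max_depth=10, random_state=0),
    'random_forest_50': RandomForestClassifier(n_estimators=50, max_depth=10, random_state=0),
    'adaboost_100': AdaBoostClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    print(f'--- {name} ---')
    y_pred = run_classifier(model, x_train, x_test, y_train)
    validate_tree(y_test, y_pred)
```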
# Lesson 5: Tidy Data

*Learn to prepare data for visualization and analytics.*

## Instructions

This tutorial provides step-by-step training divided into numbered sections. The sections often contain embedded executable code for demonstration. This tutorial is accompanied by a practice notebook: [L05-Tidy_Data-Practice.ipynb](./L05-Tidy_Data-Practice.ipynb).

Throughout this tutorial sections labeled as "Tasks" are interspersed and indicated with the icon: ![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/16/Apps-gnome-info-icon.png).

You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook.

## Introduction

The purpose of this assignment is to learn and practice preparing tidy datasets. Often the data we are asked to analyze is provided in formats that are not easy to visualize or analyze. Many visualization tools such as Seaborn, and analytical tools such as supervised machine learning libraries, expect data to be tidied. It is important to know what "tidy" data is, how to reformat data into a tidy format, and how to organize our own scientific data to help ourselves and others analyze it.

**What are "tidy" datasets?**

> Tidy datasets are easy to manipulate, model and visualize, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table.

\- Wickham, Hadley. [Tidy Data](https://www.jstatsoft.org/article/view/v059i10). *Journal of Statistical Software*, 59.10 (2014): 1 - 23.

Before proceeding, fully read the [Tidy Data paper](https://www.jstatsoft.org/article/view/v059i10) (quoted above) by Hadley Wickham. Once finished, return here to reinforce the techniques introduced by that paper.

---
## 1. Getting Started

As before, we import any needed packages at the top of our notebook. Let's import Numpy and Pandas:

```
import numpy as np
import pandas as pd
```

#### Task 1a: Setup <span style="float:right; margin-left:10px; clear:both;">![Task](./media/task-icon.png)</span>

Import the following packages:
+ `pandas` as `pd`
+ `numpy` as `np`

## 2. Tidy Rules

### 2.1 Recognizing data components

To understand the rules for tidy data, we should define a few terms: 'variable', 'observation' and 'observational unit'.

+ **variable**:
> A variable is a characteristic of a unit being observed... to which a numerical measure or a category... can be assigned (e.g. income, age, weight, etc., and "occupation", "industry", "disease", etc.).

  \- [OECD Glossary of Statistical terms -- Variable](https://stats.oecd.org/glossary/detail.asp?ID=2857)

+ **observation**:
> An observation is the value, at a particular period, of a particular variable.

  \- [OECD Glossary of Statistical terms -- Observation](https://stats.oecd.org/glossary/detail.asp?ID=6132)

+ **observational unit**:
> Observation units are those entities on which information is received and statistics are compiled.

  \- [OECD Glossary of Statistical terms -- Observation Unit](https://stats.oecd.org/glossary/detail.asp?ID=1873)

With those definitions for reference, remember from the text that in order for a dataset to be considered "tidy" it must be organized into a table (i.e. a Pandas DataFrame) and follow these rules:

+ Each variable forms a unique column in the data frame.
+ Each observation forms a row in the data frame.
+ Each **type** of observational unit needs its own table.

To demonstrate the meaning of these rules, let's first examine a dataset described in the Tidy Data paper. Execute the following lines of code that manually create a Pandas data frame containing the example table:

```
# Create the data rows and columns.
data = [['John Smith', None, 2],
        ['Jane Doe', 16, 11],
        ['Mary Johnson', 3, 1]]

# Create the list of labels for the data frame.
headers = ['', 'Treatment_A', 'Treatment_B']

# Create the data frame.
pd.DataFrame(data, columns=headers)
```

This data is not in tidy format. Can you see why?

#### Task 2a: Understand the data <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Using the table above, answer the following:
- What are the variables?
- What are the observations?
- What is the observable unit?
- Are the variables columns?
- Are the observations rows?

### 2.2 Spotting messy data

The author provides a few useful indicators that help us spot untidied data:

1. Column headers are values, not variable names.
2. Multiple variables are stored in one column.
3. Variables are stored in both rows and columns.
4. Multiple types of observational units are stored in the same table.
5. A single observational unit is stored in multiple tables.

As an example, let's look at a data set that the author borrowed from the Pew Research Center that provides religious affiliation and yearly income ranges for individuals surveyed. Execute the following code which manually puts that data into a Pandas data frame:

```
data = [['Agnostic',27,34,60,81,76,137],
        ['Atheist',12,27,37,52,35,70],
        ['Buddhist',27,21,30,34,33,58],
        ['Catholic',418,617,732,670,638,1116],
        ['Don\'t know/refused',15,14,15,11,10,35],
        ['Evangelical Prot',575,869,1064,982,881,1486],
        ['Hindu',1,9,7,9,11,34],
        ['Historically Black Prot',228,244,236,238,197,223],
        ['Jehovah\'s Witness',20,27,24,24,21,30],
        ['Jewish',19,19,25,25,30,95]]
headers = ['religion','<$10k','$10-20k','$20-30k','$30-40k','$40-50k','$50-75k']
religion = pd.DataFrame(data, columns=headers)
religion
```

#### Task 2b: Explain causes of untidiness <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Using the data set above:
- Explain why the data above is untidy.
- What are the variables?
- What are the observations?

As another example, consider the data frame also provided by the author. For this data, the demographic groups are broken down by sex (m, f) and age (0–14, 15–25, 25–34, 35–44, 45–54, 55–64, 65+, or unknown). Execute the following:

```
data = [['AD', 2000, 0, 0, 1, 0, 0, 0, 0, None, None],
        ['AE', 2000, 2, 4, 4, 6, 5, 12, 10, None, 3],
        ['AF', 2000, 52, 228, 183, 149, 129, 94, 80, None, 93],
        ['AG', 2000, 0, 0, 0, 0, 0, 0, 1, None, 1],
        ['AL', 2000, 2, 19, 21, 14, 24, 19, 16, None, 3],
        ['AM', 2000, 2, 152, 130, 131, 63, 26, 21, None, 1],
        ['AN', 2000, 0, 0, 1, 2, 0, 0, 0, None, 0],
        ['AO', 2000, 186, 999, 1003, 912, 482, 312, 194, None, 247],
        ['AR', 2000, 97, 278, 594, 402, 419, 368, 330, None, 121],
        ['AS', 2000, None, None, None, None, 1, 1, None, None, None]]
headers = ['country', 'year', 'm014', 'm1524', 'm2534', 'm3544', 'm4554', 'm5564', 'm65', 'mu', 'f014']
demographics = pd.DataFrame(data, columns=headers)
demographics
```

#### Task 2c: Explain causes of untidiness <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Using the data set above:
- Explain why the data above is untidy.
- What are the variables?
- What are the observations?

---
## 3. Melting Data

In the Tidy paper, the author indicated that many times a data set can be corrected, or tidied, by first "melting" the data. Fortunately, Pandas provides the `pd.melt` function! See the [online documentation for pd.melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html) for full usage instructions.

The author provides five different use cases where melting (and other transformations) can be performed:

1. Column headers are values, not variable names.
2. Multiple variables are stored in one column.
3. Variables are stored in both rows and columns.
4. Multiple types of observational units are stored in the same table.
5. A single observational unit is stored in multiple tables.

We will explore only a few of these use cases. However, the techniques provided by these examples will help with melting for all of them.

### 3.1 Use Case #1: column headers are values

To demonstrate melting, let's create a sample dataframe that provides the progress level of different groups of individuals in a process that has two stages:

```
df = pd.DataFrame({'Group': {0: 'A', 1: 'B', 2: 'C'},
                   'Stage1': {0: 1, 1: 3, 2: 5},
                   'Stage2': {0: 2, 1: 4, 2: 6}})
df
```

It's clear that this dataset does not follow tidy rules. This is because information about the stage is housed in the header (i.e. two different stages: stage1 and stage2). To tidy this up, we should have a separate column that indicates the stage and a corresponding column that indicates the observation for each stage. The first step to correct this is to melt the data.

To melt a dataset using Pandas, you must indicate which columns in the current data frame should be kept as columns and which columns should be melted (also called **unpivoted**) to rows. This is indicated using two arguments provided to `pd.melt`:

- `id_vars`: indicates the columns to use as identifier variables. These columns remain as columns in the dataframe after melting.
- `value_vars`: indicates the columns to melt (unpivot). If not specified, then all columns that are not set as `id_vars` are used.
   - The column header becomes a value in a new column.
   - The value within the original column is matched with the header value in an adjacent column.

As an example, let's melt the example dataframe:

```
df2 = pd.melt(df, id_vars=['Group'], value_vars=['Stage1', 'Stage2'])
df2
```

Observe that the new column labels named 'variable' and 'value' do not indicate what data the column contains. We can either set these manually using:

```python
df2.columns = ['Group', 'Stage', 'Level']
```

Or, we can provide the new labels when we melt the data using the `var_name` and `value_name` arguments:

```
df2 = pd.melt(df, id_vars=['Group'], value_vars=['Stage1', 'Stage2'], var_name='Stage', value_name='Level')
df2
```

#### Task 3a: Melt data, use case #1 <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Using the `pd.melt` function, melt the demographics data introduced in section 2. Be sure to:
- Set the column headers correctly.
- Order by country.
- Print the first 10 lines of the resulting melted dataset.

### 3.2 Use Case #2: multiple variables stored in one column

Sometimes, melting the data is not enough. Consider the demographics example where the sex and the age range are combined into a single column label. In Task 3a we melted that dataset:

<table>
  <tr><th>country</th><th>year</th><th>age</th><th>freq</th></tr>
  <tr><td>AD</td><td>2000</td><td>m014</td><td>0</td></tr>
  <tr><td>AD</td><td>2000</td><td>m5564</td><td>0</td></tr>
  <tr><td>AD</td><td>2000</td><td>m3544</td><td>0</td></tr>
  <tr><td>AD</td><td>2000</td><td>m65</td><td>0</td></tr>
  <tr><td>AD</td><td>2000</td><td>m2534</td><td>1</td></tr>
  <tr><td>AD</td><td>2000</td><td>mu</td><td>None</td></tr>
  <tr><td>AD</td><td>2000</td><td>m1524</td><td>0</td></tr>
  <tr><td>AD</td><td>2000</td><td>f014</td><td>NaN</td></tr>
  <tr><td>AD</td><td>2000</td><td>m4554</td><td>0</td></tr>
  <tr><td>AE</td><td>2000</td><td>m5564</td><td>12</td></tr>
</table>

We need to split that `age` column into three different columns corresponding to the sex, minimum age and maximum age. To do this, we can use the following line of code:

```python
temp_df = melted_df["age"].str.extract("(\D)(\d+)(\d{2})")
```

Remember that Pandas provides a [pandas.Series.str.extract](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html) function for manipulating the string values of a Series, and each column in a Pandas dataframe is a series. We can use this function to break apart the value into three separate columns.

Observe the argument provided to the `.str.extract` function: `(\D)(\d+)(\d{2})`. This type of string is called a regular expression (RE). We will not cover regular expressions in detail, but they are a powerful method for parsing strings to either match elements of the string or to split them. An [introduction to REs](https://docs.python.org/3.4/howto/regex.html#regex-howto) for Python and [a full syntax description](https://docs.python.org/3.4/library/re.html#regular-expression-syntax) are available online. But here is a short explanation for the elements of the RE above:

+ `(\D)`: Matches any single character which is not a digit. This corresponds to the sex: 'f' or 'm'.
+ `(\d+)`: Matches one or more digits. This corresponds to the minimum age, which may be one or more digits.
+ `(\d{2})`: Matches exactly two digits. This requires that the last two digits are the max age.

Let's try it and see how it works:

```
# Melt the demographics dataset and sort by country:
melted_df = pd.melt(demographics, id_vars=["country", "year"], var_name="age", value_name="freq")
melted_df = melted_df.sort_values(by=["country"])

# Split 'age' column into a new dataframe containing the three components: sex,
# minimum age and maximum age.
temp_df = melted_df["age"].str.extract("(\D)(\d+)(\d{2})")
temp_df.columns = ['sex', 'min_age', 'max_age']
temp_df.head(10)
```

### 3.3 Use Case #3: variables are in both rows and columns

Consider the following dataset which contains the daily weather records for five months in 2010 for the MX17004 weather station in Mexico. Each day of the month has its own column (e.g. d1, d2, d3, etc.). The example data only provides the first 8 days:

```
data = [['MX17004',2010,1,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,1,'tmin',None,None,None,None,None,None,None,None],
        ['MX17004',2010,2,'tmax',None,27.3,24.1,None,None,None,None,None],
        ['MX17004',2010,2,'tmin',None,14.4,14.4,None,None,None,None,None],
        ['MX17004',2010,3,'tmax',None,None,None,None,32.1,None,None,None],
        ['MX17004',2010,3,'tmin',None,None,None,None,14.2,None,None,None],
        ['MX17004',2010,4,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,4,'tmin',None,None,None,None,None,None,None,None],
        ['MX17004',2010,5,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,5,'tmin',None,None,None,None,None,None,None,None]]
headers = ['id','year','month','element','d1','d2','d3','d4','d5','d6','d7','d8']
weather = pd.DataFrame(data, columns=headers)
weather
```

In this dataset there are two problems. First, we have a violation of use case #1 where observations are stored in the column labels for the days (e.g. d1, d2, d3, etc.). Second, we have a violation of use case #3. Observe that the 'element' column contains values that should be variables. We want the min and max temperatures for each day as columns.

First, let's deal with the first problem by including `id`, `year`, `month` and `element` as `id_vars`. Observe that we will currently not try to tidy the `element` column. We want to remove the 'd' from the day so let's name the column `temp_day`:

```
melted_weather = pd.melt(weather, id_vars=['id', 'year', 'month', 'element'], var_name='temp_day', value_name='temperature')
melted_weather.head(10)
```

Now, let's create an actual date for the measurement rather than storing year, month and day separately. Let's add a new column to the dataframe named 'day' that uses a regular expression to remove the letter 'd' from the beginning of the day.

```
melted_weather["day"] = melted_weather["temp_day"].str.extract("d(\d+)", expand=False)
melted_weather.head(10)
```

We can now combine the year, month and day to form a proper date using the Pandas `apply` function. Execute the code below and observe the in-line comments for the meaning of each line of code:

```
# Import the datetime library.
import datetime

# Our year, month, and day columns must be numeric. Currently they are
# strings. We can use the Pandas "apply" function to convert these columns.
melted_weather[["year", "month", "day"]] = melted_weather[["year", "month", "day"]].apply(pd.to_numeric)

# Convert temperature to numeric as well
melted_weather[["temperature"]] = melted_weather[["temperature"]].apply(pd.to_numeric)

# We want to use the Python datetime function to combine the year, month and day
# into a proper date. In Python this is a datetime object, not a string. So, we
# need to use the apply function, just like above, to convert the dates. We'll
# create a simple little function that we'll use to apply the datetime change.
def create_date(row):
    return datetime.datetime(year=row["year"], month=int(row["month"]), day=row["day"])

# Apply the create_date function to each row of our data frame for the "date" column.
melted_weather["date"] = melted_weather.apply(lambda row: create_date(row), axis=1)

# Now take a look!
melted_weather.head(10)
```

Now that we have our date corrected, and properly melted, we can address the second problem: the `element` column containing variable names. To fix this we need to do the opposite of melting: we need to **pivot**. To do this we can use the [pd.pivot](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html) function. This function takes the following arguments:

- `index`: indicates the columns to use to make the new frame's index. If None, uses the existing index.
- `columns`: indicates the column to use whose values will become the new frame's columns.
- `values`: indicates the columns to use for populating the new frame's values.

Let's use the `pivot_table` function, which is a generalization of the `pivot` function that handles duplicate values for one index/column pair. This will move the `element` column values to be new columns in our data frame. But first, we will also want to drop unwanted columns:

```
# Remove unwanted columns
weather_min = melted_weather.drop(['year', 'month', 'day', 'temp_day'], axis=1)
weather_min.head(10)

# Unpivot and reset indexes. The pivot_table function automatically removes rows with null values.
weather_tidy = weather_min.pivot_table(index=["id","date"], columns="element", values="temperature")
weather_tidy.reset_index(drop=False, inplace=True)
weather_tidy
```

The weather data is now tidy (although rather small). Observe that in the code above, we called the function `reset_index` on the tidied weather data. If we do not do this, then the row indexes are not incremental within the data frame.

#### Task 3b: Practice with a new dataset <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Download the [PI_DataSet.txt](https://hivdb.stanford.edu/download/GenoPhenoDatasets/PI_DataSet.txt) file from the [HIV Drug Resistance Database](https://hivdb.stanford.edu/pages/genopheno.dataset.html). Store the file in the same directory as the practice notebook for this assignment (a minimal loading sketch is given after Task 3c below).

Here is the meaning of the data columns:
- SeqID: a numeric identifier for a unique HIV isolate protease sequence. Note: disruption of the protease inhibits HIV's ability to reproduce.
- The next 8 columns are identifiers for unique protease inhibitor class drugs.
  - The values in these columns are the fold resistance over wild type (the HIV strain susceptible to all drugs).
  - Fold change is the ratio of the drug concentration needed to inhibit the isolate.
- The latter columns, with P as a prefix, are the positions of the amino acids in the protease.
  - '-' indicates consensus.
  - '.' indicates no sequence.
  - '#' indicates an insertion.
  - '~' indicates a deletion.
  - '*' indicates a stop codon.
  - A letter indicates a one-letter amino acid substitution.
  - Two or more amino acid codes indicate a mixture.

Import this dataset into your notebook, view the top few rows of the data and respond to these questions:
- What are the variables?
- What are the observations?
- What are the values?

#### Task 3c: Practice with a new dataset Part 2 <span style="float:right; margin-left:10px; clear:both;">![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/96/Apps-gnome-info-icon.png) </span>

Using the data retrieved in Task 3b, generate a data frame containing a tidied set of values for drug concentration fold change. Be sure to:
- Set the column names as 'SeqID', 'Drug' and 'Fold_change'.
- Order the data frame first by sequence ID and then by drug name.
- Reset the row indexes.
- Display the first 10 elements.
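For both tasks above, the first mechanical step is simply reading the downloaded file into a data frame; the melting and tidying are left as the exercise. A minimal loading sketch, assuming the file was saved next to the notebook and is tab-delimited:

```
import pandas as pd

# Hypothetical loading step only; column handling and melting are left to the tasks.
pi_df = pd.read_csv('PI_DataSet.txt', sep='\t')
pi_df.head()
```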
github_jupyter
```
import numpy as np
import pandas as pd

# Create the data rows and columns.
data = [['John Smith', None, 2],
        ['Jane Doe', 16, 11],
        ['Mary Johnson', 3, 1]]

# Create the list of labels for the data frame.
headers = ['', 'Treatment_A', 'Treatment_B']

# Create the data frame.
pd.DataFrame(data, columns=headers)

data = [['Agnostic',27,34,60,81,76,137],
        ['Atheist',12,27,37,52,35,70],
        ['Buddhist',27,21,30,34,33,58],
        ['Catholic',418,617,732,670,638,1116],
        ['Don\'t know/refused',15,14,15,11,10,35],
        ['Evangelical Prot',575,869,1064,982,881,1486],
        ['Hindu',1,9,7,9,11,34],
        ['Historically Black Prot',228,244,236,238,197,223],
        ['Jehovah\'s Witness',20,27,24,24,21,30],
        ['Jewish',19,19,25,25,30,95]]
headers = ['religion','<$10k','$10-20k','$20-30k','$30-40k','$40-50k','$50-75k']
religion = pd.DataFrame(data, columns=headers)
religion

data = [['AD', 2000, 0, 0, 1, 0, 0, 0, 0, None, None],
        ['AE', 2000, 2, 4, 4, 6, 5, 12, 10, None, 3],
        ['AF', 2000, 52, 228, 183, 149, 129, 94, 80, None, 93],
        ['AG', 2000, 0, 0, 0, 0, 0, 0, 1, None, 1],
        ['AL', 2000, 2, 19, 21, 14, 24, 19, 16, None, 3],
        ['AM', 2000, 2, 152, 130, 131, 63, 26, 21, None, 1],
        ['AN', 2000, 0, 0, 1, 2, 0, 0, 0, None, 0],
        ['AO', 2000, 186, 999, 1003, 912, 482, 312, 194, None, 247],
        ['AR', 2000, 97, 278, 594, 402, 419, 368, 330, None, 121],
        ['AS', 2000, None, None, None, None, 1, 1, None, None, None]]
headers = ['country', 'year', 'm014', 'm1524', 'm2534', 'm3544', 'm4554', 'm5564', 'm65', 'mu', 'f014']
demographics = pd.DataFrame(data, columns=headers)
demographics

df = pd.DataFrame({'Group': {0: 'A', 1: 'B', 2: 'C'},
                   'Stage1': {0: 1, 1: 3, 2: 5},
                   'Stage2': {0: 2, 1: 4, 2: 6}})
df

df2 = pd.melt(df, id_vars=['Group'], value_vars=['Stage1', 'Stage2'])
df2

df2.columns = ['Group', 'Stage', 'Level']
df2 = pd.melt(df, id_vars=['Group'], value_vars=['Stage1', 'Stage2'], var_name='Stage', value_name='Level')
df2

# Melt the demographics dataset and sort by country:
melted_df = pd.melt(demographics, id_vars=["country", "year"], var_name="age", value_name="freq")
melted_df = melted_df.sort_values(by=["country"])

# Split 'age' column into a new dataframe containing the three components: sex,
# minimum age and maximum age.
temp_df = melted_df["age"].str.extract("(\D)(\d+)(\d{2})")
temp_df.columns = ['sex', 'min_age', 'max_age']
temp_df.head(10)

data = [['MX17004',2010,1,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,1,'tmin',None,None,None,None,None,None,None,None],
        ['MX17004',2010,2,'tmax',None,27.3,24.1,None,None,None,None,None],
        ['MX17004',2010,2,'tmin',None,14.4,14.4,None,None,None,None,None],
        ['MX17004',2010,3,'tmax',None,None,None,None,32.1,None,None,None],
        ['MX17004',2010,3,'tmin',None,None,None,None,14.2,None,None,None],
        ['MX17004',2010,4,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,4,'tmin',None,None,None,None,None,None,None,None],
        ['MX17004',2010,5,'tmax',None,None,None,None,None,None,None,None],
        ['MX17004',2010,5,'tmin',None,None,None,None,None,None,None,None]]
headers = ['id','year','month','element','d1','d2','d3','d4','d5','d6','d7','d8']
weather = pd.DataFrame(data, columns=headers)
weather

melted_weather = pd.melt(weather, id_vars=['id', 'year', 'month', 'element'], var_name='temp_day', value_name='temperature')
melted_weather.head(10)

melted_weather["day"] = melted_weather["temp_day"].str.extract("d(\d+)", expand=False)
melted_weather.head(10)

# Import the datetime library.
import datetime

# Our year, month, and day columns must be numeric. Currently they are
# strings. We can use the Pandas "apply" function to convert these columns.
melted_weather[["year", "month", "day"]] = melted_weather[["year", "month", "day"]].apply(pd.to_numeric)

# Convert temperature to numeric as well
melted_weather[["temperature"]] = melted_weather[["temperature"]].apply(pd.to_numeric)

# We want to use the Python datetime function to combine the year, month and day
# into a proper date. In Python this is a datetime object, not a string. So, we
# need to use the apply function, just like above, to convert the dates. We'll
# create a simple little function that we'll use to apply the datetime change.
def create_date(row):
    return datetime.datetime(year=row["year"], month=int(row["month"]), day=row["day"])

# Apply the create_date function to each row of our data frame for the "date" column.
melted_weather["date"] = melted_weather.apply(lambda row: create_date(row), axis=1)

# Now take a look!
melted_weather.head(10)

# Remove unwanted columns
weather_min = melted_weather.drop(['year', 'month', 'day', 'temp_day'], axis=1)
weather_min.head(10)

# Pivot and reset indexes. The pivot_table function automatically removes rows with null values.
weather_tidy = weather_min.pivot_table(index=["id","date"], columns="element", values="temperature")
weather_tidy.reset_index(drop=False, inplace=True)
weather_tidy
```
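As a side note on the `create_date` step above: `pd.to_datetime` can assemble dates directly from a data frame whose columns are named `year`, `month`, and `day`, which avoids the row-wise `apply`. A minimal sketch of that alternative (the toy frame below only mirrors the layout of `melted_weather` and is not part of the original notebook):

```
import pandas as pd

# Toy frame with the same year/month/day layout as the melted weather data above.
parts = pd.DataFrame({"year": [2010, 2010], "month": [2, 3], "day": [2, 5]})

# to_datetime accepts a frame whose column names are date components,
# so the whole date column is built without calling a Python function per row.
parts["date"] = pd.to_datetime(parts[["year", "month", "day"]])
print(parts)
```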
# Building the Consolidated Inventory

We begin the analysis by consolidating the information into a single dataset.

```
import pandas as pd
import numpy as np
import os
import warnings
from tqdm import tqdm
from unicodedata import normalize

from pjud.data import transformdata
from pjud.data import cleandata

warnings.filterwarnings(action="ignore")
pd.set_option('display.max_columns', 100)
tqdm.pandas()

path_processed = "../data/processed/pjud"

df_inventario = pd.read_feather(f"{path_processed}/processes_Inventario.feather")
print(f"Existen : {len(df_inventario)} causas en inventario")
df_inventario.columns
```

Combine RIT and Tribunal into a single column to avoid misinterpreting cases.

```
df_inventario['TRIBUNAL-RIT'] = df_inventario['COD. TRIBUNAL'].map(str) + "-" + df_inventario['RIT'].map(str)
df_inventario.sample(8)
```

Impute some fields that can be derived from data already in the dataset.

```
df_inventario['AÑO INGRESO'] = df_inventario['RIT'].progress_apply(cleandata.obtiene_año)
df_inventario.columns

columnas_duplicadas = ['index']
df_inventario.drop(columnas_duplicadas, axis='columns', inplace=True)
```

## Load the crime typology data and join it with the inventory

```
path = "../data/processed/delitos"
df_tipologia = pd.read_feather(f"{path}/clean_Delitos.feather")
print(f"{len(df_tipologia)} delitos utilizados en PJUD")
df_tipologia.columns

df_inventario_tipologia = pd.merge(df_inventario,df_tipologia, how='left', on=['COD. MATERIA'])
df_inventario_tipologia.sample(10)

columnas_duplicadas = ['index', 'MATERIA_x']
df_inventario_tipologia.drop(columnas_duplicadas, axis='columns', inplace=True)
df_inventario_tipologia.rename(columns = {'MATERIA_y':'MATERIA'}, inplace=True)
df_inventario_tipologia.columns
```

## Load the population data

```
df_poblacion = pd.read_feather(f"{path_processed}/processes_DataConsolidada_Poblacion_Jurisdiccion.feather")
print(f"{len(df_poblacion)} registros")
df_poblacion

df_fulldatainventario = pd.merge(df_inventario_tipologia, df_poblacion, how='left', on=['CORTE','TRIBUNAL'])
len(df_fulldatainventario)
df_fulldatainventario
df_fulldatainventario.columns
```

CHECK WHICH ROWS NEED TO BE DROPPED ... THEY CHANGED ...

```
columnas_duplicadas = ['index']
df_fulldatainventario.drop(columnas_duplicadas, axis='columns', inplace=True)
#df_consolidado.drop(['MATERIA_x','FECHA TERMINO_x','AÑO TERMINO_x','TIPO CAUSA_y'], axis='columns', inplace=True)
df_fulldatainventario.sample(5)
```

Renaming and reordering the columns ...

```
df_fulldatainventario.rename(columns = {'COD. CORTE':'cod_corte',
                                        'COD. TRIBUNAL':'cod_tribunal',
                                        'RIT':'rit',
                                        'CORTE':'corte',
                                        'TRIBUNAL':'tribunal',
                                        'COMPETENCIA':'competencia',
                                        'TIPO CAUSA':'tipo_causa',
                                        'COD. MATERIA':'cod_materia',
                                        'TIPO ULT. DILIGENCIA':'tipo_ultima_diligencia',
                                        'FECHA ULT. DILIGENCIA':'fecha_ultima_diligencia',
                                        'FECHA INGRESO':'fecha_ingreso',
                                        'AÑO INGRESO':'año_ingreso',
                                        'TOTAL INVENTARIO':'total_inventario',
                                        'TRIBUNAL-RIT':'tribunal_rit',
                                        'MATERIA':'materia',
                                        'TIPOLOGIA MATERIA':'tipologia_materia',
                                        'VIGENCIA MATERIA':'vigencia_materia',
                                        'REGION':'region',
                                        'POBLACION':'poblacion',
                                        'HOMBRES':'hombres',
                                        'MUJERES':'mujeres',
                                        'URBANO':'urbano',
                                        'RURAL':'rural',
                                        'COMUNAS':'comunas',
                                        'JUECES':'dotacion_jueces',
                                        'ASIENTO':'asiento',
                                        'TIPO JUZGADO':'tipo_juzgado'
                                        },inplace = True)
df_fulldatainventario.columns

df_fulldatainventario = df_fulldatainventario[['region','cod_corte','corte','tribunal_rit','cod_tribunal','rit','tribunal','competencia','tipo_juzgado','dotacion_jueces','tipo_causa',
                                               'año_ingreso','fecha_ingreso','cod_materia','materia','tipologia_materia','vigencia_materia','tipo_ultima_diligencia','fecha_ultima_diligencia',
                                               'total_inventario','asiento','comunas','poblacion','hombres','mujeres','urbano','rural']]
df_fulldatainventario.sample(5)

# Directory where the feather files will be saved
df_fulldatainventario.reset_index(inplace = True)
os.makedirs(path_processed, exist_ok = True)

# Save the dataset as a feather file
df_fulldatainventario.to_feather(f'{path_processed}/consolidated_FULLDATA_INVENTARIOS.feather')
```
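As an optional sanity check after the export above, one might read the feather file back and confirm it round-trips with the expected shape and columns. A small sketch, assuming the same relative path used in the notebook:

```
import pandas as pd

path_processed = "../data/processed/pjud"

# Read the consolidated dataset back and inspect its shape and column names.
df_check = pd.read_feather(f"{path_processed}/consolidated_FULLDATA_INVENTARIOS.feather")
print(df_check.shape)
print(list(df_check.columns))
assert 'tribunal_rit' in df_check.columns
```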
``` !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \ -O /tmp/horse-or-human.zip !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \ -O /tmp/validation-horse-or-human.zip ``` The following python code will use the OS library to use Operating System libraries, giving you access to the file system, and the zipfile library allowing you to unzip the data. ``` import os import zipfile local_zip = '/tmp/horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/horse-or-human') local_zip = '/tmp/validation-horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/validation-horse-or-human') zip_ref.close() ``` The contents of the .zip are extracted to the base directory `/tmp/horse-or-human`, which in turn each contain `horses` and `humans` subdirectories. In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc. One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc. Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step. Let's define each of these directories: ``` # Directory with our training horse pictures train_horse_dir = os.path.join('/tmp/horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('/tmp/horse-or-human/humans') # Directory with our training horse pictures validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses') # Directory with our training human pictures validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans') ``` Now, let's see what the filenames look like in the `horses` and `humans` training directories: ``` train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) validation_horse_hames = os.listdir(validation_horse_dir) print(validation_horse_hames[:10]) validation_human_names = os.listdir(validation_human_dir) print(validation_human_names[:10]) ``` Let's find out the total number of horse and human images in the directories: ``` print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) print('total validation horse images:', len(os.listdir(validation_horse_dir))) print('total validation human images:', len(os.listdir(validation_human_dir))) ``` Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplot parameters: ``` %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ``` Now, display a batch of 8 horse and 8 human pictures. 
You can rerun the cell to see a fresh batch each time: ``` # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ``` ## Building a Small Model from Scratch But before we continue, let's start defining the model: Step 1 will be to import tensorflow. ``` import tensorflow as tf ``` We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers. Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). ``` model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ``` The model.summary() method call prints a summary of the NN ``` model.summary() ``` The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions. Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy. **NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. 
(Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ``` from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['acc']) ``` ### Data Preprocessing Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary). As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range). In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit_generator`, `evaluate_generator`, and `predict_generator`. ``` from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) validation_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( '/tmp/horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 150x150 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') # Flow training images in batches of 128 using train_datagen generator validation_generator = validation_datagen.flow_from_directory( '/tmp/validation-horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 150x150 batch_size=32, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ``` ### Training Let's train for 15 epochs -- this may take a few minutes to run. Do note the values per epoch. The Loss and Accuracy are a great indication of progress of training. It's making a guess as to the classification of the training data, and then measuring it against the known label, calculating the result. Accuracy is the portion of correct guesses. ``` history = model.fit_generator( train_generator, steps_per_epoch=8, epochs=15, verbose=1, validation_data = validation_generator, validation_steps=8) ``` ###Running the Model Let's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, it will then upload them, and run them through the model, giving an indication of whether the object is a horse or a human. 
``` import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ``` ### Visualizing Intermediate Representations To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet. Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ``` import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Let's define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] #visualization_model = Model(img_input, successive_outputs) visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Let's prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (150, 150, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3) # Rescale by 1/255 x /= 255 # Let's run our image through our network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so can have them as part of our plot layer_names = [layer.name for layer in model.layers] # Now let's display our representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # We will tile our images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): # Postprocess the feature to make it visually palatable x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # We'll tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ``` As you can see we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." 
Representation sparsity is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline. ## Clean Up Before running the next exercise, run the following cell to terminate the kernel and free memory resources: ``` import os, signal os.kill(os.getpid(), signal.SIGKILL) ```
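One portability note on the compile and training cells above: in recent TensorFlow 2.x releases, `fit_generator` is deprecated in favour of `model.fit` (which accepts the same generators), and the RMSprop argument is `learning_rate` rather than `lr`. A minimal sketch of the equivalent calls under that assumption, reusing the `model`, `train_generator`, and `validation_generator` defined above:

```
from tensorflow.keras.optimizers import RMSprop

# Same setup as above, written against the TF 2.x API:
# 'learning_rate' replaces 'lr', and model.fit consumes the generators directly.
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['acc'])

history = model.fit(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    verbose=1,
    validation_data=validation_generator,
    validation_steps=8)
```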
*best viewed in [nbviewer](https://nbviewer.jupyter.org/github/CambridgeSemiticsLab/BH_time_collocations/blob/master/data/annotations/annotating_semantics.ipynb)* # Annotating Time Adverbials with Semantic Classes ## The Semantics of Time and Events ### Cody Kingham <a href="../../docs/sponsors.md"><img height=200px width=200px align="left" src="../../docs/images/CambridgeU_BW.png"></a> ``` ! echo "last updated:"; date ``` ## Time adverbials locate events on a timeline An event, in line with Croft 2012, refers to an entire aspectual-temporal expression, with all of the various entities that make up the expression, including the verb lexeme, morphemes, verb arguments, adverbials, and more. Note that the term 'event' is used generically rather than to refer to a specific kind of aspectual category, following Croft 2012. **A time adverbial serves to situate the whole or part of an event along a metaphorical one-dimensional timeline** (Haspelmath 1997: 23-42). This timeline is a metaphorical extension of the spatial dimension (idem). ``` "The baby was born before her great grandfather died." (Haspelmath 1997: 28) RefT: her great-grandfather died | โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€> | LSit: the baby was born ``` Here the located situation `LSit` (located situation) refers to the event, while `RefT` (reference time) refers to the information anchored to the time adverbial. The positioning between the event and the reference time is supplied by the preposition "before". In the above example, the durational quality of the situation "the baby was born" is bounded (i.e. not durative). This is due to the interaction of the various constructions in the sentence, including the verb tense and the semantic structure associated with the verb lexeme. Specifically, there are two dimensions expressed by an event such as the one in the example. There is both a qualitative (phasal) dimension, and a temporal dimension. Following Croft 2012, Croft 2012 (*Verbs: Aspect and Causal Structure*), these two dimensions can be captured as spatial metaphors by using a graph: ``` "The door is open." โ”‚ โ”‚ โ”‚ ______ โ”‚ . q โ”‚ . โ”‚..... โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ t ``` The x-axis, or *time dimension* (t), is the one-dimensional timeline referred to by Haspelmath as a metaphorical extension of space. Its domain (input values) are continuous, thus the time dimension can be segmented into arbitrary segments or spans of various size (see time units like "day", "year", "moment"). The y-axis, or *quality dimension* (q), models the phases unique to an event with points along the axis. Unlike the time dimension, the input values of the quality dimension must be a whole number (i.e. not fractional) which indicates how many phases an event consists of. Thus, y=1 is the first phase of an event, y=2 is the second, and so on, with most event types being summarized with y=1 to 3. Where the horizontal dotted line represents the situation *before* the door was open, and the vertical line represents the immediate change in state. On the q-dimension, there are therefore 2 coordinates (i.e. 'open' y=1, versus 'not open' y=2). The solid horizontal line represents the state of the door being open. One could also add an additional point on the time dimension to represent the position of the speaker, which would align with the open state. 
### On the "observable" versus "unobservable" The dotted line on the plot above, the phase before the door was opened, models information not explicitly found in the sentence. In other cases, as will be seen, a given constructional network implies a phase that follows the event and extends onwards, likewise without any explicit corresponding element in the sentence. These are phases that we, as humans, know intuitively from world knowledge about how these kinds of events unfold and result. But without any means of validating our intuition, we are left with a methodological problem. From the perspective of linguistics as a science, we stand on one side of a great dividing wall between the unseen and the seen, where the unseen is the concepts, beliefs, practices, and customs that lie hidden somewhere in the brain. <img src="../../docs/figures/schemas/empirical_linguistics.svg" height="600px" width="600px"> But we are not left helpless. As seen in the schema above, cognitive links in the brain give rise to statistical links amongst constructional patterns. The tricky part is that patterns and concepts are, of course, not the same thing. And often intricate combinations of patterns are woven for the purpose of pairing an idea. For instance, in English the formation of the "perfective" meaning with present tense "have" + past participle shows how several kinds of patterns (orthographic, lexical, verbal, and syntactic) are together and simultaneously linked to one idea. They are, in other words, non-compositionalโ€”an emergent phenomenon. *to be continued...* ### Merging Croft and Haspelmath's models **We can combine Croft's spatial models of event structure with Haspelmath's model of time adverbials quite easily by adding the time adverbial information below the time dimension.** ``` "The door was open for an hour." q โ”‚ โ”‚ โ”‚ ______ โ”‚ . โ”‚ . โ”‚..... โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ t | |โ€”โ€”โ€”โ€”โ€”| | hours | RefT: "for an hour" ``` This is an example of Haspelmath's "atelic extent" (1997: 120f). The reference time of the time adverbial is now added below the time axis, and can be seen to highlight the span of time during which the door was open. (What about the final state of the door? This information most likely needs to be supplied from the context. The shift to past tense here seems to imply that the door is now closed. But that does not seem required by the semantic context. It is possible, however, the past tense and the ending of such a state are statistically associatedโ€”in which case we could say that the construction has a default interpretation in which the state ends.) ## Aspect is constructional not verbal The aspect of an event derives from a whole constructional network, rather than a single given construction such as the verb. ## Building Annotations In this notebook, we aim to develop a method of annotating time adverbials and their respective events in the Hebrew Bible. These annotations will form the basis for a multivariate statistical analysis, which will seek to uncover the strongest predictors of time adverbial usage in the text. The time adverbial dataset of this project consists of >5000 individual instances of adverbial phrases throughout the Hebrew Bible. It would be time prohibitive to manually tag every single one of them. Thus, it is also a goal of this notebook In Haspelmath's survey of time adverbials throughout world languages, he identifies several common semantic categories. Modern Hebrew is amongst the languages surveyed. 
Here are the semantic classes for Modern Hebrew with their most common leading prepositions: anterior - ืœืคื ื™ posterior - ืื—ืจื™ simultaneous location - ื‘ anterior durative - ืขื“ posterior durative - ืžืŸ atelic extent - รธ + quantified NP telic extent - ื‘ + quantified NP distance future - ืขื•ื“ distance past - ืœืคื ื™ in sense of "ago" distance posterior - ื–ื” + quantified NP <hr> # Python Now we import the modules and data needed for the analysis. ``` # standard & data science packages import collections import re import pandas as pd pd.set_option('max_rows', 100) pd.set_option('max_colwidth',100) import numpy as np import matplotlib.pyplot as plt from matplotlib import rcParams rcParams['font.serif'] = ['SBL Biblit'] import seaborn as sns from bidi.algorithm import get_display # bi-directional text support for plotting from paths import main_table, figs from IPython.display import HTML, display # custom packages (see /tools) from cx_analysis.load import cxs from tf_tools.load import load_tf from stats.significance import contingency_table, apply_fishers # launch Text-Fabric with custom data TF, API, A = load_tf(silent='deep') A.displaySetup(condenseType='phrase') F, E, T, L = A.api.F, A.api.E, A.api.T, A.api.L # corpus analysis methods # load and set up project dataset times_full = pd.read_csv(main_table, sep='\t') times_full.set_index(['node'], inplace=True) times = times_full[~times_full.classi.str.contains('component')] # select singles times.head() ``` # Generic Overview First, let's get re-acquainted with the general makeup of the dataset. ``` time_surfaces = pd.DataFrame(times['token'].value_counts()) time_surfaces.head(50) ``` ## Generating Automatic Annotations for Biblical Hebrew We generate automatic annotations to lessen the workload of annotating and to solve repetitive tasks at once. These annotations are all tentative, and subject to human correction and adjustment. In order to formulate a standard, I want to practice with a few key cases that we've already seen in the dataset above. Here's a diverse group of common adverbials selected from the above counts. ื‘.ื”.ื™ื•ื.ื”.ื”ื•ื ื”.ื™ื•ื.ื”.ื–ื” ืขื“.ื”.ื™ื•ื.ื”.ื–ื” ืฉืื‘ืข.ื™ื•ื The semantic labels are taken from Croft 2012. The two-dimensional plots from Croft's method are merged with the one dimensional timeline of Haspelmath. Where relevant, the contribution from each construction is indicated by writing it near the graphed lines. **ื‘.ื”.ื™ื•ื.ื”.ื”ื•ื** ``` in_that_day = times[times.token == 'ื‘.ื”.ื™ื•ื.ื”.ื”ื•ื'] in_that_day.head() in_that_day.iloc[0:1][['ref', 'text', 'clause']] ``` ``` ื‘ึทึผื™ึนึผึฃื•ื ื”ึทื”ึ—ื•ึผื ื›ึธึผืจึทึงืช ื™ึฐื”ื•ึธึ›ื” ืึถืชึพืึทื‘ึฐืจึธึ–ื ื‘ึฐึผืจึดึฃื™ืช Achievement, irreversible directed q | | | | ------> | | | | ื›ืจืช ื‘ืจื™ืช |......... |_________________t /โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/ days | ื‘ื™ื•ื ื”ื”ื•ื ``` This diagram contains an analysis for the situation expressed in Gen 15:18. The main constructional network modified by the adverbial is the construction ื›ืจืช "he cut" + direct object **ื”.ื™ื•ื.ื”.ื–ื”** ``` this_day = times[times.token == 'ื”.ื™ื•ื.ื”.ื–ื”'] this_day.head() this_day.iloc[0:1][['ref', 'text', 'clause']] ``` ื”ื™ื•ื is an interesting case because it is zero-marked. Mose zero-marked time adverbials seem to be durative. Is ื”ื™ื•ื durative or is it punctual? 
``` A.plain(L.u(1447763,'verse')[0], condenseType='verse') ``` ``` ื”ึทื™ึนึผึฃื•ื ื”ึทื–ึถึผึ—ื” ืึธื—ึตืœึ™ ืชึตึผึคืช ืคึทึผื—ึฐื“ึฐึผืšึธึ™ ื•ึฐื™ึดืจึฐืึธึฃืชึฐืšึธึ” ืขึทืœึพืคึฐึผื ึตื™ึ™ ื”ึธึฝืขึทืžึดึผึ”ื™ื Accomplishment, non-incremental q | ... | . | ืชืŸ . | /\/\/\/ | | | | ืื—ืœ |..... |_________________t /โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/ days ...... | ื”ื™ื•ื ื”ื–ื” ``` ``` this_day.iloc[1:2][['ref', 'text', 'clause']] A.plain(L.u(1447792,'verse')[0], condenseType='verse') ``` ``` ื”ึทื™ึนึผึคื•ื ื”ึทื–ึถึผื”ึ™ ืจึธืึดึ”ื™ื ื•ึผ ื›ึดึผึฝื™ึพื™ึฐื“ึทื‘ึตึผึงืจ ืึฑืœึนื”ึดึ›ื™ื ืึถืชึพื”ึธึฝืึธื“ึธึ–ื ื•ึธื—ึธึฝื™ Activity, undirected q | | | ืจืื™ื ื• | /\/\/\/ | . | . |..... |_________________t /โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/โ€“โ€“โ€“โ€“โ€“/ days ...... | ื”ื™ื•ื ื”ื–ื” ``` ## Pilot Study: Genesis I'm going to run a pilot study on time adverbials in the book of Genesis. This will allow me to practice the annotations, as well as to gather useful data on patterns. I hope to come out of the study with further ideas on how to optimize a set of procedures that can accurately predict the semantic labels of a given construction. The procedures could also include statistical association data. That would allow me to automatically tag all of the single-phrasal adverbials in the dataset, and then manually correct bad cases. For the whole dataset, I could then measure the success of the automatically tagged data, and thus get a sense for how effective the outlined procedures are. ``` genesis_times = times.loc[times.book == 'Genesis', :] print(genesis_times.shape[0], 'times in Genesis selected') # look at what times are contained pd.DataFrame(genesis_times.token.value_counts()).head(50) ``` Prepare data for export. ``` genesis_data = genesis_times.loc[:, ['ref', 'time', 'token', 'clause']] # add columns for manual annotations genesis_data['TA_type'] = '' genesis_data['Aspect_main'] = '' genesis_data['Aspect_second'] = '' genesis_data # gather additional contexts to add to dataset contexts = {} for tp in genesis_data.index: verse = L.u(tp, 'verse')[0] sentence = L.u(tp, 'sentence')[0] html_link = A.webLink(tp, _asString=True) href = re.search('href="([^"]*)"', html_link).group(1) contexts[tp] = { 'sentence': T.text(sentence), 'verse': T.text(verse), 'link': href, } new_contexts = pd.DataFrame.from_dict(contexts, orient='index') new_contexts.index.name = 'node' new_contexts.head() ``` **Merge context data and export .csv** ``` genesis_pilot_data = pd.concat([genesis_data, new_contexts], axis=1) genesis_pilot_data.head(2) times_full[times_full.ref == 'Gen 6:4'] len(times) len(times_full) L.u(1446839, 'verse')[0] for phrase in L.d(1414495, 'phrase'): print(T.text(phrase), F.function.v(phrase), phrase) L.u(653115, 'timephrase') E.mother.t(653115) T.text(428071) T.text(1446838) problem = cxs['phrase2cxs'][1446838][0] problem.cases genesis_pilot_data.to_csv('Genesis_pilot/genesis_times.csv', encoding='UTF-16') ``` Data imported into Google Drive for maual annotation at the following link: https://docs.google.com/spreadsheets/d/12K623fm6iAWoTcqSwrK0SlahdOLDEC3XdBPMyO5X98M/edit?usp=sharing **Notes During Pilot Study to Improve Spreadsheet** * include broader scope for clause inclusion * ืขืชื” + clause * ื•ื™ื”ื™ + clause, look closely at these cases because I'm not sure how they work! 
* add clause types * think about distinction between locative and extensional uses of ืœ; same ambiguity in spatial realm * need to include clauses after ื™ื•ื as part of the TA **Should I simplify the dataset?** * 1446838 is marked as non-single because it has an attributive clause; the parsed cx objects are very top heavy. Probably should switch to manually annotated phrases :/
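Following up on the automatic-annotation idea above, here is a small, hypothetical sketch of a first-pass tagger that guesses a Haspelmath-style semantic class from the leading element of a dotted time-adverbial token (as in the `token` column used throughout). The preposition-to-class mapping paraphrases the Modern Hebrew list given earlier and is only a default guess; zero-marked and quantified cases, and Biblical Hebrew usage generally, would need the fuller procedures discussed above:

```
# Hypothetical first-pass classifier keyed on the leading preposition of a token.
PREP_TO_CLASS = {
    "לפני": "anterior",
    "אחרי": "posterior",
    "ב": "simultaneous location",
    "עד": "anterior durative",
    "מן": "posterior durative",
}

def guess_class(token):
    """Return a tentative semantic class for a dotted token such as 'ב.ה.יום.ה.הוא'."""
    head = token.split(".")[0]
    return PREP_TO_CLASS.get(head, "unclassified")

# Hypothetical usage on the Genesis pilot frame:
# genesis_times["TA_type_guess"] = genesis_times["token"].map(guess_class)
print(guess_class("ב.ה.יום.ה.הוא"))   # -> 'simultaneous location'
```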
Download the conditional WikiArt model from here: https://archive.org/download/wikiart-stylegan2-conditional-model/WikiArt5.pkl ( There's also an unconditional model here: https://archive.org/download/wikiart-stylegan2-conditional-model/WikiArt_Uncond2.pkl ) ``` import ipywidgets as widgets import pickle import math import random import PIL.Image import numpy as np import pickle import dnnlib import dnnlib.tflib as tflib network_pkl = 'WikiArt5.pkl' #network_pkl = 'WikiArt_Uncond2.pkl' dnnlib.tflib.init_tf() with dnnlib.util.open_url(network_pkl) as f: _G, _D, Gs = pickle.load(f) Gs_syn_kwargs = dnnlib.EasyDict() batch_size = 8 Gs_syn_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) Gs_syn_kwargs.randomize_noise = True Gs_syn_kwargs.minibatch_size = batch_size artist = widgets.Dropdown( options=[('Unknown Artist', 0), ('Boris Kustodiev', 1), ('Camille Pissarro', 2), ('Childe Hassam', 3), ('Claude Monet', 4), ('Edgar Degas', 5), ('Eugene Boudin', 6), ('Gustave Dore', 7), ('Ilya Repin', 8), ('Ivan Aivazovsky', 9), ('Ivan Shishkin', 10), ('John Singer Sargent', 11), ('Marc Chagall', 12), ('Martiros Saryan', 13), ('Nicholas Roerich', 14), ('Pablo Picasso', 15), ('Paul Cezanne', 16), ('Pierre Auguste Renoir', 17), ('Pyotr Konchalovsky', 18), ('Raphael Kirchner', 19), ('Rembrandt', 20), ('Salvador Dali', 21), ('Vincent Van Gogh', 22), ('Hieronymus Bosch', 23), ('Leonardo Da Vinci', 24), ('Albrecht Durer', 25), ('Edouard Cortes', 26), ('Sam Francis', 27), ('Juan Gris', 28), ('Lucas Cranach The Elder', 29), ('Paul Gauguin', 30), ('Konstantin Makovsky', 31), ('Egon Schiele', 32), ('Thomas Eakins', 33), ('Gustave Moreau', 34), ('Francisco Goya', 35), ('Edvard Munch', 36), ('Henri Matisse', 37), ('Fra Angelico', 38), ('Maxime Maufra', 39), ('Jan Matejko', 40), ('Mstislav Dobuzhinsky', 41), ('Alfred Sisley', 42), ('Mary Cassatt', 43), ('Gustave Loiseau', 44), ('Fernando Botero', 45), ('Zinaida Serebriakova', 46), ('Georges Seurat', 47), ('Isaac Levitan', 48), ('Joaquรฃยญn Sorolla', 49), ('Jacek Malczewski', 50), ('Berthe Morisot', 51), ('Andy Warhol', 52), ('Arkhip Kuindzhi', 53), ('Niko Pirosmani', 54), ('James Tissot', 55), ('Vasily Polenov', 56), ('Valentin Serov', 57), ('Pietro Perugino', 58), ('Pierre Bonnard', 59), ('Ferdinand Hodler', 60), ('Bartolome Esteban Murillo', 61), ('Giovanni Boldini', 62), ('Henri Martin', 63), ('Gustav Klimt', 64), ('Vasily Perov', 65), ('Odilon Redon', 66), ('Tintoretto', 67), ('Gene Davis', 68), ('Raphael', 69), ('John Henry Twachtman', 70), ('Henri De Toulouse Lautrec', 71), ('Antoine Blanchard', 72), ('David Burliuk', 73), ('Camille Corot', 74), ('Konstantin Korovin', 75), ('Ivan Bilibin', 76), ('Titian', 77), ('Maurice Prendergast', 78), ('Edouard Manet', 79), ('Peter Paul Rubens', 80), ('Aubrey Beardsley', 81), ('Paolo Veronese', 82), ('Joshua Reynolds', 83), ('Kuzma Petrov Vodkin', 84), ('Gustave Caillebotte', 85), ('Lucian Freud', 86), ('Michelangelo', 87), ('Dante Gabriel Rossetti', 88), ('Felix Vallotton', 89), ('Nikolay Bogdanov Belsky', 90), ('Georges Braque', 91), ('Vasily Surikov', 92), ('Fernand Leger', 93), ('Konstantin Somov', 94), ('Katsushika Hokusai', 95), ('Sir Lawrence Alma Tadema', 96), ('Vasily Vereshchagin', 97), ('Ernst Ludwig Kirchner', 98), ('Mikhail Vrubel', 99), ('Orest Kiprensky', 100), ('William Merritt Chase', 101), ('Aleksey Savrasov', 102), ('Hans Memling', 103), ('Amedeo Modigliani', 104), ('Ivan Kramskoy', 105), ('Utagawa Kuniyoshi', 106), ('Gustave Courbet', 107), ('William Turner', 108), 
('Theo Van Rysselberghe', 109), ('Joseph Wright', 110), ('Edward Burne Jones', 111), ('Koloman Moser', 112), ('Viktor Vasnetsov', 113), ('Anthony Van Dyck', 114), ('Raoul Dufy', 115), ('Frans Hals', 116), ('Hans Holbein The Younger', 117), ('Ilya Mashkov', 118), ('Henri Fantin Latour', 119), ('M.C. Escher', 120), ('El Greco', 121), ('Mikalojus Ciurlionis', 122), ('James Mcneill Whistler', 123), ('Karl Bryullov', 124), ('Jacob Jordaens', 125), ('Thomas Gainsborough', 126), ('Eugene Delacroix', 127), ('Canaletto', 128)], value=22, description='Artist: ' ) genre = widgets.Dropdown( options=[('Abstract Painting', 129), ('Cityscape', 130), ('Genre Painting', 131), ('Illustration', 132), ('Landscape', 133), ('Nude Painting', 134), ('Portrait', 135), ('Religious Painting', 136), ('Sketch And Study', 137), ('Still Life', 138), ('Unknown Genre', 139)], value=129, description='Genre: ' ) style = widgets.Dropdown( options=[('Abstract Expressionism', 140), ('Action Painting', 141), ('Analytical Cubism', 142), ('Art Nouveau', 143), ('Baroque', 144), ('Color Field Painting', 145), ('Contemporary Realism', 146), ('Cubism', 147), ('Early Renaissance', 148), ('Expressionism', 149), ('Fauvism', 150), ('High Renaissance', 151), ('Impressionism', 152), ('Mannerism Late Renaissance', 153), ('Minimalism', 154), ('Naive Art Primitivism', 155), ('New Realism', 156), ('Northern Renaissance', 157), ('Pointillism', 158), ('Pop Art', 159), ('Post Impressionism', 160), ('Realism', 161), ('Rococo', 162), ('Romanticism', 163), ('Symbolism', 164), ('Synthetic Cubism', 165), ('Ukiyo-e', 166)], value=160, description='Style: ' ) seed = widgets.IntSlider(min=0, max=100000, step=1, value=9, description='Seed: ') scale = widgets.FloatSlider(min=0, max=25, step=0.1, value=2, description='Global Scale: ') truncation = widgets.FloatSlider(min=-5, max=10, step=0.1, value=1, description='Truncation: ') variance = widgets.FloatSlider(min=0, max=10, step=0.1, value=0.4, description='Variance: ') iterations = widgets.IntSlider(min=0, max=1000, step=1, value=64, description='Iterations: ') top_box = widgets.HBox([artist, genre, style]) mid_box = widgets.HBox([variance, iterations]) bot_box = widgets.HBox([seed, scale, truncation]) ui = widgets.VBox([top_box, mid_box, bot_box]) def display_sample(artist, genre, style, variance, iterations, seed, scale, truncation): batch_size = 1 l1 = np.zeros((1,167)) l1[0][artist] = 1.0 l1[0][genre] = 1.0 l1[0][style] = 1.0 l1 = scale * l1 all_seeds = [seed] * batch_size all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) # [minibatch, component] all_w = Gs.components.mapping.run(all_z, l1) # [minibatch, layer, component] total = 0.0 acc_w = np.zeros((batch_size,18,512)) for i in range(400): # calculate approximate center acc_w += Gs.components.mapping.run(0*all_z+np.random.RandomState(i).randn(512), np.tile(l1, (batch_size, 1))) # [minibatch, layer, component] total+=1.0 acc_w /= total w_avg = acc_w if variance == 0 or iterations < 1: if truncation != 1: all_w = w_avg + (all_w - w_avg) * truncation # [minibatch, layer, component] all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) else: acc_w = np.zeros((batch_size,18,512)) total = 0.0 for i in range(iterations): all_w = Gs.components.mapping.run(all_z + variance*np.random.RandomState(i).randn(512), np.tile(l1 + 0.1*variance*np.random.RandomState(i).randn(167), (batch_size, 1))) # [minibatch, layer, component] if truncation != 1: all_w = w_avg + (all_w - w_avg) * truncation # [minibatch, 
layer, component] acc_w += all_w total+=1.0 acc_w /= total all_images = Gs.components.synthesis.run(acc_w, **Gs_syn_kwargs) display(PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8))) out = widgets.interactive_output(display_sample, {'artist': artist, 'genre': genre, 'style': style, 'seed': seed, 'variance': variance, 'iterations': iterations, 'scale': scale, 'truncation': truncation}) display(ui, out) # Have fun playing with the sliders! seed1 = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Content Seed: ') seed2 = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Content Label: ') seed1b = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Style Seed: ') seed2b = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Style Label: ') scale = widgets.FloatSlider(min=-5, max=5, step=0.05, value=0, description='Scale: ') truncation = widgets.FloatSlider(min=-2, max=2, step=0.1, value=1, description='Truncation: ') top_box = widgets.HBox([seed1, seed2]) mid_box = widgets.HBox([seed1b, seed2b]) bot_box = widgets.HBox([scale, truncation]) ui = widgets.VBox([top_box, mid_box, bot_box]) def display_sample(seed1, seed2, seed1b, seed2b, scale, truncation): batch_size = 1 all_seeds = [seed1] * batch_size all_seedsb = [seed1b] * batch_size all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) # [minibatch, component] all_zb = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seedsb]) # [minibatch, component] all_l = scale * np.random.RandomState(seed2).randn(167) all_lb = scale * np.random.RandomState(seed2b).randn(167) all_w = Gs.components.mapping.run(all_z, np.tile(all_l, (batch_size, 1))) # [minibatch, layer, component] all_wb = Gs.components.mapping.run(all_zb, np.tile(all_lb, (batch_size, 1))) # [minibatch, layer, component] if truncation != 1: w_avg = Gs.get_var('dlatent_avg') all_w = w_avg + (all_w - w_avg) * truncation # [minibatch, layer, component] all_wb = w_avg + (all_wb - w_avg) * truncation # [minibatch, layer, component] all_w = np.concatenate((all_w[:,0:9,:], all_wb[:,9:18,:]), axis=1) all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) display(PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8))) out = widgets.interactive_output(display_sample, {'seed1': seed1, 'seed2': seed2, 'seed1b': seed1b, 'seed2b': seed2b, 'scale': scale, 'truncation': truncation}) display(ui, out) ```
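The second widget above mixes two latents by taking the first nine layers of the mapped latent from the "content" settings and the remaining layers from the "style" settings. A small, hypothetical helper that makes the crossover layer a parameter, assuming mapped latents of shape `(batch, 18, 512)` as produced by `Gs.components.mapping.run` above:

```
import numpy as np

def mix_styles(w_content, w_style, crossover=9):
    """Take layers [0, crossover) from w_content and the rest from w_style.

    Both inputs are assumed to have shape (batch, 18, 512), matching the
    all_w / all_wb arrays in the widget code above.
    """
    assert w_content.shape == w_style.shape
    return np.concatenate((w_content[:, :crossover, :], w_style[:, crossover:, :]), axis=1)

# Example with dummy latents; in the notebook the real inputs would be all_w and all_wb.
w_a = np.random.randn(1, 18, 512)
w_b = np.random.randn(1, 18, 512)
print(mix_styles(w_a, w_b, crossover=6).shape)   # (1, 18, 512)
```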
``` import pandas as pd import numpy as np import cPickle from nltk.corpus import stopwords from gensim.models import word2vec import nltk.data import re import logging from nltk.stem.snowball import * import itertools from skill_transform import * # Python 2.x: import HTMLParser html_parser = HTMLParser.HTMLParser() import multiprocessing logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) from nltk.stem import WordNetLemmatizer, PorterStemmer wordnet_lemmatizer = WordNetLemmatizer() porter = PorterStemmer() TEST_CASE = {} skill_transform("html5_and_css3_or_html6") def test_case(model, model_name): cases = [ "machine learning", "js", "javascript", "python", "html", "html5", "css", "angular", "nodejs" ] TEST_CASE[model_name] = [] for case in cases: try: TEST_CASE[model_name].append(model.similar_by_word(skill_transform(case))) except: pass import cPickle as pickle musthave_dice_naruki_data = pickle.load( open( "musthave_dice_naruki_data.p", "rb" ) ) import glob allFiles = glob.glob("dice-full*.json") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_json(file_) list_.append(df) skill_dice_jd = pd.concat(list_) len(skill_dice_jd) [s[0] for s in skill_dice_jd.loc[0, 'job_description'][0]] skill_dice_jd.iloc[0] dice_JD = pd.read_json("dice-full-2.json") with open("prep_data_tokens_underscore_1", "rb") as g: data_dice = cPickle.load(g) dice_data = [ [ skill_transform(d.replace("_", " ")) for d in dice if d is not None ] for dice in data_dice if dice is not None ] musthave_dice_data = dice_data + musthave_data musthave_dice_data = [ list(set([ d for d in data if d is not None and len(d) > 0 ])) for data in musthave_dice_data ] print len(musthave_dice_data) musthave_dice_data print ("Training model...") musthave_dice_model = word2vec.Word2Vec(musthave_dice_data, workers=multiprocessing.cpu_count(), # Number of threads to run in parallel size=300, min_count=1, window=10, sample = 1e-3, # Downsample setting for frequent words iter=4, sg =1 ) musthave_dice_model.init_sims(replace=True) test_case(musthave_dice_model, "musthave_dice_model") # model.similar_by_word(skill_transform('machine learning')) # model.similar_by_word('python') musthave_dice_naruki_data = musthave_dice_data data_naruki = pd.read_csv('naukri_skill_full', header = 0, encoding='ISO-8859-1') data_naruki.drop_duplicates(subset=['id', 'skill'], keep='last') data_naruki['skill'] = data_naruki['skill'].apply(skill_transform) data_naruki_final = data_naruki.groupby('id')['skill'].apply(list) print len(data_naruki_final) for skills in data_naruki_final: if len(skills) > 2 and skills not in musthave_dice_naruki_data: musthave_dice_naruki_data.append(skills) len(musthave_dice_naruki_data) import cPickle as pickle pickle.dump( musthave_dice_naruki_data, open( "musthave_dice_naruki_data.p", "wb" ) ) print ("Training model...") musthave_dice_naruki_model = word2vec.Word2Vec(musthave_dice_naruki_data, workers=multiprocessing.cpu_count(), # Number of threads to run in parallel size=300, min_count=3, window=10, sample = 1e-3, # Downsample setting for frequent words iter=4, sg =1 ) musthave_dice_naruki_model.init_sims(replace=True) test_case(musthave_dice_naruki_model, "musthave_dice_naruki_model") TEST_CASE TEST_CASE_TABLE = [ pd.DataFrame(TEST_CASE[t]) for t in TEST_CASE ] [ t for t in TEST_CASE ] TEST_CASE_TABLE[0] TEST_CASE_TABLE[1] TEST_CASE_TABLE[2] musthave_dice_naruki_model.wv.save_word2vec_format("musthave_dice_naruki.model") !ls -l musthave_dice_naruki_model_100d = 
word2vec.Word2Vec(musthave_dice_naruki_data, workers=multiprocessing.cpu_count(), # Number of threads to run in parallel size=100, min_count=5, window=10, sample = 1e-3, # Downsample setting for frequent words iter=4, sg =1 ) musthave_dice_naruki_model_200d = word2vec.Word2Vec(musthave_dice_naruki_data, workers=multiprocessing.cpu_count(), # Number of threads to run in parallel size=200, min_count=5, window=10, sample = 1e-3, # Downsample setting for frequent words iter=4, sg =1 ) musthave_dice_naruki_model_100d.wv.save_word2vec_format("musthave_dice_naruki_100d.model") musthave_dice_naruki_model_200d.wv.save_word2vec_format("musthave_dice_naruki_200d.model") !ls -l !free -m musthave_dice_naruki_model.wv.similar_by_word(skill_transform('javascript'),topn=10) musthave_dice_naruki_model_200d.wv.similar_by_word(skill_transform('html5')) # [(u'css3', 0.9307963252067566), # (u'bootstrap', 0.9234713912010193), # (u'ui_developer', 0.9192823171615601), # (u'front_end_engineer', 0.9077576398849487), # (u'front__end', 0.9058327674865723), # (u'css_3', 0.9039251208305359), # (u'ui_engineer', 0.9014566540718079), # (u'adobe_experience_management_suite', 0.9006956815719604), # (u'html4', 0.9003456234931946), # (u'user_interface_developer', 0.8981920480728149)] musthave_dice_naruki_model.similar_by_word(skill_transform('javascript')) musthave_dice_naruki_model.similar_by_word(skill_transform('node_js'), 300) musthave_dice_naruki_model_100d.similar_by_word(skill_transform('node_js'), 300) re.sub(r'^angular.*$', 'angular', "angular.js") ```
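The notebook above targets Python 2 and an older gensim API (`cPickle`, `print` statements, `size`, `iter`, `init_sims`). As a hedged sketch only, the equivalent training call under Python 3 and gensim 4.x would look roughly like this (assuming `musthave_dice_naruki_data` is the same list of skill-token lists built above and `skill_transform` is importable as before):

```python
import multiprocessing
from gensim.models import Word2Vec

# gensim 4.x renamed several constructor arguments: size -> vector_size, iter -> epochs.
model = Word2Vec(
    sentences=musthave_dice_naruki_data,   # list of token lists, as built above
    vector_size=300,
    min_count=3,
    window=10,
    sample=1e-3,
    epochs=4,
    sg=1,                                  # skip-gram, as in the original call
    workers=multiprocessing.cpu_count(),
)
model.wv.save_word2vec_format("musthave_dice_naruki.model")
print(model.wv.similar_by_word(skill_transform("javascript"), topn=10))
```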
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits</h1> Prerequisite - [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html) - [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html) Other relevant materials - [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems) - [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration) - [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html) - [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq) - [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html) ``` from qiskit import * from qiskit.visualization import plot_histogram import numpy as np ``` <h2 style="font-size:24px;">Part 1: Classical logic gates with quantum circuits</h2> <br> <div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; "><b>Goal</b></p> <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .</p> </div> An implementation of the `NOT` gate is provided as an example. ``` def NOT(inp): """An NOT gate. Parameters: inp (str): Input, encoded in qubit 0. Returns: QuantumCircuit: Output NOT circuit. str: Output value measured from qubit 0. """ qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit qc.reset(0) # We encode '0' as the qubit state |0โŸฉ, and '1' as |1โŸฉ # Since the qubit is initially |0โŸฉ, we don't need to do anything for an input of '0' # For an input of '1', we do an x to rotate the |0โŸฉ to |1โŸฉ if inp=='1': qc.x(0) # barrier between input state and gate operation qc.barrier() # Now we've encoded the input, we can do a NOT on it using x qc.x(0) #barrier between gate operation and measurement qc.barrier() # Finally, we extract the |0โŸฉ/|1โŸฉ output of the qubit and encode it in the bit c[0] qc.measure(0,0) qc.draw('mpl') # We'll run the program on a simulator backend = Aer.get_backend('qasm_simulator') # Since the output will be deterministic, we can use just a single shot to get it job = backend.run(qc, shots=1, memory=True) output = job.result().get_memory()[0] return qc, output ## Test the function for inp in ['0', '1']: qc, out = NOT(inp) print('NOT with input',inp,'gives output',out) display(qc.draw()) print('\n') ``` <h3 style="font-size: 20px">&#128211; XOR gate</h3> Takes two binary strings as input and gives one as output. The output is '0' when the inputs are equal and '1' otherwise. ``` def XOR(inp1,inp2): """An XOR gate. Parameters: inpt1 (str): Input 1, encoded in qubit 0. inpt2 (str): Input 2, encoded in qubit 1. Returns: QuantumCircuit: Output XOR circuit. str: Output value measured from qubit 1. 
""" qc = QuantumCircuit(2, 1) qc.reset(range(2)) if inp1=='1': qc.x(0) if inp2=='1': qc.x(1) # barrier between input state and gate operation qc.barrier() # this is where your program for quantum XOR gate goes # barrier between input state and gate operation qc.barrier() qc.measure(1,0) # output from qubit 1 is measured #We'll run the program on a simulator backend = Aer.get_backend('qasm_simulator') #Since the output will be deterministic, we can use just a single shot to get it job = backend.run(qc, shots=1, memory=True) output = job.result().get_memory()[0] return qc, output ## Test the function for inp1 in ['0', '1']: for inp2 in ['0', '1']: qc, output = XOR(inp1, inp2) print('XOR with inputs',inp1,inp2,'gives output',output) display(qc.draw()) print('\n') ``` <h3 style="font-size: 20px">&#128211; AND gate</h3> Takes two binary strings as input and gives one as output. The output is `'1'` only when both the inputs are `'1'`. ``` def AND(inp1,inp2): """An AND gate. Parameters: inpt1 (str): Input 1, encoded in qubit 0. inpt2 (str): Input 2, encoded in qubit 1. Returns: QuantumCircuit: Output XOR circuit. str: Output value measured from qubit 2. """ qc = QuantumCircuit(3, 1) qc.reset(range(2)) if inp1=='1': qc.x(0) if inp2=='1': qc.x(1) qc.barrier() # this is where your program for quantum AND gate goes qc.barrier() qc.measure(2, 0) # output from qubit 2 is measured # We'll run the program on a simulator backend = Aer.get_backend('qasm_simulator') # Since the output will be deterministic, we can use just a single shot to get it job = backend.run(qc, shots=1, memory=True) output = job.result().get_memory()[0] return qc, output ## Test the function for inp1 in ['0', '1']: for inp2 in ['0', '1']: qc, output = AND(inp1, inp2) print('AND with inputs',inp1,inp2,'gives output',output) display(qc.draw()) print('\n') ``` <h3 style="font-size: 20px">&#128211; NAND gate</h3> Takes two binary strings as input and gives one as output. The output is `'0'` only when both the inputs are `'1'`. ``` def NAND(inp1,inp2): """An NAND gate. Parameters: inpt1 (str): Input 1, encoded in qubit 0. inpt2 (str): Input 2, encoded in qubit 1. Returns: QuantumCircuit: Output NAND circuit. str: Output value measured from qubit 2. """ qc = QuantumCircuit(3, 1) qc.reset(range(3)) if inp1=='1': qc.x(0) if inp2=='1': qc.x(1) qc.barrier() # this is where your program for quantum NAND gate goes qc.barrier() qc.measure(2, 0) # output from qubit 2 is measured # We'll run the program on a simulator backend = Aer.get_backend('qasm_simulator') # Since the output will be deterministic, we can use just a single shot to get it job = backend.run(qc,shots=1,memory=True) output = job.result().get_memory()[0] return qc, output ## Test the function for inp1 in ['0', '1']: for inp2 in ['0', '1']: qc, output = NAND(inp1, inp2) print('NAND with inputs',inp1,inp2,'gives output',output) display(qc.draw()) print('\n') ``` <h3 style="font-size: 20px">&#128211; OR gate</h3> Takes two binary strings as input and gives one as output. The output is '1' if either input is '1'. ``` def OR(inp1,inp2): """An OR gate. Parameters: inpt1 (str): Input 1, encoded in qubit 0. inpt2 (str): Input 2, encoded in qubit 1. Returns: QuantumCircuit: Output XOR circuit. str: Output value measured from qubit 2. 
""" qc = QuantumCircuit(3, 1) qc.reset(range(3)) if inp1=='1': qc.x(0) if inp2=='1': qc.x(1) qc.barrier() # this is where your program for quantum OR gate goes qc.barrier() qc.measure(2, 0) # output from qubit 2 is measured # We'll run the program on a simulator backend = Aer.get_backend('qasm_simulator') # Since the output will be deterministic, we can use just a single shot to get it job = backend.run(qc,shots=1,memory=True) output = job.result().get_memory()[0] return qc, output ## Test the function for inp1 in ['0', '1']: for inp2 in ['0', '1']: qc, output = OR(inp1, inp2) print('OR with inputs',inp1,inp2,'gives output',output) display(qc.draw()) print('\n') ``` <h2 style="font-size:24px;">Part 2: AND gate on Quantum Computer</h2> <br> <div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; "><b>Goal</b></p> <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.</p> </div> In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can do so [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. <h3 style="font-size: 20px">Step 1. Choosing a device</h3> First load the account from the credentials saved on disk by running the following cell: ``` IBMQ.load_account() ``` After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`: ``` IBMQ.providers() ``` Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider. ``` provider = IBMQ.get_provider('ibm-q') provider.backends() ``` Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` all are real quantum computers that you can use. The differences among these systems resides in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under `configuration` tab, where as the `Error Map` tab will reveal the latest noise information for the system. 
``` import qiskit.tools.jupyter backend_ex = provider.get_backend('ibmq_16_melbourne') backend_ex ``` For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter by number of qubits and remove simulators: ``` backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator and x.status().operational==True) backends ``` One convenient way to choose a system is to use the `least_busy` function to get the backend with the lowest number of jobs in the queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest-error-rate systems are the most popular. ``` from qiskit.providers.ibmq import least_busy backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and not x.configuration().simulator and x.status().operational==True)) backend ``` Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates. ``` # run this cell backend1 = provider.get_backend('ibmqx2') backend2 = provider.get_backend('ibmq_athens') ``` <h3 style="font-size: 20px">Step 2. Define AND function for a real device</h3> We now define the AND function. We choose 8192 as the number of shots, the maximum for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). <h4 style="font-size: 16px">Qiskit Transpiler</h4> It is important to know that, when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangular topology), no other gates are required. ``` qc_and = QuantumCircuit(3) qc_and.ccx(0,1,2) print('AND gate') display(qc_and.draw()) print('\n\nTranspiled AND gate with all the required connectivity') qc_and.decompose().draw() ``` In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length, of the input circuits. Note that adding swaps to match the device topology and optimizing to reduce the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that selects among internal defaults for circuit swap mapping and optimization methods. 
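As a quick illustration (a sketch, not part of the lab's required cells), one could transpile the bare Toffoli circuit at different optimization levels and compare the resulting depth and nonlocal gate count; the exact numbers will depend on the chosen backend and its current calibration:

```python
from qiskit import QuantumCircuit, transpile

qc_demo = QuantumCircuit(3)
qc_demo.ccx(0, 1, 2)

# Compare transpiler settings on one of the backends selected above.
for level in range(4):
    tqc = transpile(qc_demo, backend2, initial_layout=[0, 1, 2], optimization_level=level)
    print('optimization_level', level,
          '| depth:', tqc.depth(),
          '| nonlocal gates:', tqc.num_nonlocal_gates())
```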
You can learn more about the `transpile` function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included. ``` from qiskit.tools.monitor import job_monitor # run the cell to define the AND gate for a real quantum system def AND(inp1, inp2, backend, layout): qc = QuantumCircuit(3, 1) qc.reset(range(3)) if inp1=='1': qc.x(0) if inp2=='1': qc.x(1) qc.barrier() qc.ccx(0, 1, 2) qc.barrier() qc.measure(2, 0) qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3) job = backend.run(qc_trans, shots=8192) print(job.job_id()) job_monitor(job) output = job.result().get_counts() return qc_trans, output ``` When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. <h4 style="font-size: 16px">Case A) Three qubits on <code>ibmqx2</code> with triangular connectivity</h4> First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with a triangular connection and determine your initial layout. ``` # run this cell for the widget backend1 ``` <p>&#128211; Assign your choice of layout to the list variable <code>layout1</code> in the cell below.</p> ``` # Assign your choice of the initial_layout to the variable layout1 as a list # ex) layout1 = [0,2,4] layout1 = ``` <p>&#128211; Describe the reason for your choice of initial layout.</p> Execute the `AND` gate on `ibmqx2` by running the cell below. ``` output1_all = [] qc_trans1_all = [] prob1_all = [] worst = 1 best = 0 for input1 in ['0','1']: for input2 in ['0','1']: qc_trans1, output1 = AND(input1, input2, backend1, layout1) output1_all.append(output1) qc_trans1_all.append(qc_trans1) prob = output1[str(int( input1=='1' and input2=='1' ))]/8192 prob1_all.append(prob) print('\nProbability of correct answer for inputs',input1,input2) print( '{:.2f}'.format(prob) ) print('---------------------------------') worst = min(worst,prob) best = max(best, prob) print('') print('\nThe highest of these probabilities was {:.2f}'.format(best)) print('The lowest of these probabilities was {:.2f}'.format(worst)) ``` Once your job has finished running, you can easily access the results via: ```python results = backend.retrieve_job('JOB_ID').result() ``` Your job IDs are printed by the `AND` function defined above. You can also find them in the results section of your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). <h4 style="font-size: 16px">Case B) Three qubits on <code>ibmq_athens</code> with linear nearest-neighbor connectivity</h4> Examine `ibmq_athens` through the widget by running the cell below. ``` backend2 ``` <p>&#128211; Find three qubits with linear nearest-neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.</p> ``` layout2 = [] ``` <p>&#128211; Describe the reason for your choice of initial layout.</p> Execute the `AND` gate on `ibmq_athens` by running the cell below. 
``` output2_all = [] qc_trans2_all = [] prob2_all = [] worst = 1 best = 0 for input1 in ['0','1']: for input2 in ['0','1']: qc_trans2, output2 = AND(input1, input2, backend2, layout2) output2_all.append(output2) qc_trans2_all.append(qc_trans2) prob = output2[str(int( input1=='1' and input2=='1' ))]/8192 prob2_all.append(prob) print('\nProbability of correct answer for inputs',input1,input2) print('{:.2f}'.format(prob) ) print('---------------------------------') worst = min(worst,prob) best = max(best, prob) print('') print('\nThe highest of these probabilities was {:.2f}'.format(best)) print('The lowest of these probabilities was {:.2f}'.format(worst)) ``` <h3 style="font-size: 20px">Step 3. Interpret the result</h3> There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html#supplementary-information) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. <h4 style="font-size: 16px">A) Circuit depth and result accuracy</h4> Running the cells below will display the four transpiled AND gate circuit diagrams, the corresponding inputs that were executed on `ibmq_athens`, their circuit depths, and the success probability of producing the correct answer. ``` print('Transpiled AND gate circuit for ibmq_athens with input 0 0') print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth())) print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) ) qc_trans2_all[0].draw() print('Transpiled AND gate circuit for ibmq_athens with input 0 1') print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth())) print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) ) qc_trans2_all[1].draw() print('Transpiled AND gate circuit for ibmq_athens with input 1 0') print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth())) print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) ) qc_trans2_all[2].draw() print('Transpiled AND gate circuit for ibmq_athens with input 1 1') print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth())) print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) ) qc_trans2_all[3].draw() ``` <p>&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes.</p> <h4 style="font-size: 16px">B) Qubit connectivity and circuit depth</h4> Investigate the transpiled circuits for `ibmqx2` by running the cells below. 
``` print('Transpiled AND gate circuit for ibmqx2 with input 0 0') print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth())) print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) ) qc_trans1_all[0].draw() print('Transpiled AND gate circuit for ibmqx2 with input 0 1') print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth())) print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) ) qc_trans1_all[1].draw() print('Transpiled AND gate circuit for ibmqx2 with input 1 0') print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth())) print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) ) qc_trans1_all[2].draw() print('Transpiled AND gate circuit for ibmqx2 with input 1 1') print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth())) print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates())) print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) ) qc_trans1_all[3].draw() ``` <p>&#128211; Explain the reason for the similarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes.</p> <h4 style="font-size: 16px">C) Error rates and result accuracy</h4> <p>&#128211; Until now we have been using circuit depth and nonlocal gate count as good indicators of circuit performance on real devices. However, we see something interesting in the results above. The AND gate on <code>ibmq_athens</code> has ~8-15 <code>cx</code> gates per circuit, but has a success rate that is comparable to, or even higher than, that of <code>ibmqx2</code>, which executes only 6 <code>cx</code> gates. Why is this?</p>
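One way to start answering this question (a hedged sketch; the property accessors follow the `BackendProperties` API of the IBM Quantum provider used in this lab) is to pull the latest calibration data for the two systems and compare their average CNOT and readout errors:

```python
def error_summary(backend):
    # Fetch the latest calibration snapshot for this backend.
    props = backend.properties()
    cx_errors = [props.gate_error(g.gate, g.qubits) for g in props.gates if g.gate == 'cx']
    readout_errors = [props.readout_error(q) for q in range(backend.configuration().n_qubits)]
    print(backend.name(),
          '| mean cx error: {:.4f}'.format(np.mean(cx_errors)),
          '| mean readout error: {:.4f}'.format(np.mean(readout_errors)))

for b in [backend1, backend2]:
    error_summary(b)
```

Comparing these numbers alongside the circuit depths above should help explain the observation.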
``` from pathlib import Path import pandas as pd import numpy as np from tensorflow.keras.layers import Dense, Input, Lambda, Conv2D, MaxPooling2D, Flatten, Dropout, LSTM from tensorflow.keras.models import Model, Sequential from tensorflow.keras.optimizers import Adam, Adadelta from tensorflow.keras import backend as K from tensorflow.keras.regularizers import l2 from tensorflow.keras.utils import to_categorical from sklearn.metrics import classification_report from sklearn.model_selection import train_test_split from sklearn.utils import shuffle from sklearn.preprocessing import LabelBinarizer from imblearn.combine import SMOTETomek import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (20,10) DATA_DIR = Path("../../data/public/football-data/data") df = pd.DataFrame() subjects = ["Subject-000", "Subject-001", "Subject-002", "Subject-003", "Subject-004", "Subject-005", "Subject-006", "Subject-007", "Subject-008"] for s in subjects: for d in DATA_DIR.joinpath(s).iterdir(): current_df = pd.read_csv(str(d)) y = d.stem.split("-")[0] current_df["y"] = y if y == "Pass" and np.random.random() > .7: df = df.append(current_df[::5]) elif y == "Dribbling" and np.random.random() > .3: df = df.append(current_df[::5]) elif y == "Walking" or y == "Running": current_df["y"] = "Around" df = df.append(current_df[::5]) elif y != "Pass" and y != "Dribbling": df = df.append(current_df[::5]) from collections import Counter letter_counts = Counter(df.y.values) dft = pd.DataFrame.from_dict(letter_counts, orient="index") dft.plot(kind="bar") X = df.drop("y", axis=1).values y = df.y.values # sm = SMOTETomek(random_state=42) # X, y = sm.fit_resample(X, y) X.shape, y.shape ``` ## Siamese NN ![imagen.png](attachment:imagen.png) ``` def df_to_X_y_seq(df, seq_len): X = [] y = [] for start, end in zip(np.arange(0, df.shape[0] - seq_len, seq_len), np.arange(seq_len, df.shape[0], seq_len)): X.append(df.iloc[start:end,:-1].values) y.append(df.iloc[start:end,-1].mode()[0]) X = np.array(X) y = np.array(y) return X, y X, y = df_to_X_y_seq(df, 800) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1) input_shape = (800, 12, 1) left_input = Input(input_shape) right_input = Input(input_shape) convnet = Sequential() convnet.add(Conv2D(64,(100, 3), strides=(1,1), activation='relu', input_shape=input_shape, kernel_regularizer=l2(2e-4))) convnet.add(MaxPooling2D()) convnet.add(Conv2D(128,(50, 2),activation='relu', kernel_regularizer=l2(2e-4))) convnet.add(MaxPooling2D()) convnet.add(Conv2D(128,(20,1),activation='relu', kernel_regularizer=l2(2e-4))) convnet.add(MaxPooling2D()) convnet.add(Conv2D(64,(4,1),activation='relu', kernel_regularizer=l2(2e-4))) convnet.add(Flatten()) convnet.add(Dense(4096, activation="sigmoid",kernel_regularizer=l2(1e-3))) convnet.summary() encoded_l = convnet(left_input) encoded_r = convnet(right_input) L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1])) L1_distance = L1_layer([encoded_l, encoded_r]) prediction = Dense(1, activation='sigmoid')(L1_distance) siamese_net = Model(inputs=[left_input, right_input], outputs=prediction) siamese_net.compile(loss="binary_crossentropy", optimizer=Adam(0.00006)) siamese_net.summary() class SiameseLoader: """For loading batches and testing tasks to a siamese net""" def __init__(self, data, categories): self.data = data self.categories = categories self.info = {} def 
get_batch(self, batch_size, s="train"): """ Create batch of n pairs, half same class, half different class Parameters ---------- batch_size: int Number of pairs to create s: str Set name Returns ------- tuple tuple[0] -> array like of shape (batch_size, 2, window_size, n_features, 1) containing the pairs of the batch tuple[1] -> array like of shape (batch_size) containing the targets. """ X = self.data[s] y = self.categories[s] n_seq, seq_length, n_features = X.shape # Initialize 2 empty arrays for the input image batch pairs = [np.zeros((batch_size, seq_length, n_features, 1)) for i in range(2)] # Initialize vector for the targets, and make one half of it '1's, so 2nd half of batch has same class targets = np.zeros((batch_size,)) targets[batch_size//2:] = 1 labels = [] for i in range(batch_size): idx_1 = np.random.randint(0, n_seq) pairs[0][i,:,:,:] = X[idx_1].reshape(seq_length, n_features, 1) pair_0_label = y[idx_1] # Pick images of same class for 1st half, different for 2nd if i >= batch_size // 2: idx_2 = np.random.choice([l[0] for l in enumerate(y) if l[1] == pair_0_label]) else: idx_2 = np.random.choice([l[0] for l in enumerate(y) if l[1] != pair_0_label]) labels.append((pair_0_label, y[idx_2])) pairs[1][i,:,:,:] = X[idx_2].reshape(seq_length, n_features, 1) return pairs, targets, labels def generate(self, batch_size, s="train"): """A generator for batches, so model.fit_generator can be used. """ while True: pairs, targets = self.get_batch(batch_size, s) yield (pairs, targets) def make_oneshot_task(self, s="val", N=5): """ Create pairs of test image, support set for testing N way one-shot learning. Parametes --------- s: str, optional Name of the used set N: int, optional Support set size Returns ------- tuple tuple[0] -> array like of shape (batch_size, 2, window_size, n_features, 1) containing the pairs of the batch Paris where first element is the test image and the second one is an instance of the support set tuple[1] -> array like of shape (batch_size) with a single 1, which is the target of support set """ X = self.data[s] y = self.categories[s] enum_labels = list(enumerate(y)) n_seq, seq_length, n_features = X.shape # Pick the true label true_label = np.random.choice(y) true_instances = np.array([l[0] for l in enum_labels if l[1] == true_label]).astype(np.int) false_instances = np.array([l[0] for l in enum_labels if l[1] != true_label]).astype(np.int) # Build the support set support_set_idx = np.random.choice(false_instances, size=(N)) support_set = X[support_set_idx].reshape(N, seq_length, n_features, 1) if len(true_instances) == 1: test_img_idx, support_true_idx = true_instances[0], true_instances[0] else: test_img_idx, support_true_idx = np.random.choice(true_instances, size=(2,), replace=False) support_set[0,:,:,:] = X[support_true_idx].reshape(seq_length, n_features, 1) # Pick the same test image N times test_img = [X[test_img_idx].reshape(seq_length, n_features, 1)]*N # Set the first target to 1 because the first element of support set is the desired output targets = np.zeros((N,)) targets[0] = 1 targets, test_img, support_set = shuffle(targets, test_img, support_set) pairs = [test_img, support_set] return pairs, targets def test_oneshot(self, model, k, s="val", verbose=0): """ Test average N way oneshot learning accuracy of a siamese neural net over k one-shot tasks Parameters ---------- model: kearas.model k: int Number of one shot tasks s: str, optional Name of the set verbose: boolean, optional If True -> print the accuracy Returns ------- float Accuaracy on the k 
one shot tasks """ n_correct = 0 for i in range(k): inputs, targets = self.make_oneshot_task(s) probs = model.predict(inputs) if np.argmax(probs) == np.argmax(targets): n_correct += 1 percent_correct = (100.0 * n_correct / k) if verbose: print("Got an average of {}% learning accuracy".format(percent_correct)) return percent_correct loader = SiameseLoader(data={"train": X, "val": X_val}, categories={"train": y, "val": y_val}) n_iter = 10000 loss_every = 10 weights_path = "siamese_net.h5" batch_size = 30 best_loss = 99999 for i in range(1, n_iter): (inputs, targets, l) = loader.get_batch(batch_size) loss = siamese_net.train_on_batch(inputs, targets) if i % loss_every == 0: print("Iteration {}, Loss: {}".format(i, loss)) val_acc = loader.test_oneshot(siamese_net, 20, verbose=True) # If loss improve store the weights if best_loss > loss: best_loss = loss siamese_net.save(weights_path) ``` ## Making predictions ``` all_X = np.concatenate((X, X_val)) examples, seq_len, features = all_X.shape all_X = all_X[:].reshape(examples, seq_len, features, 1) all_y = np.concatenate((y, y_val)) np.savez("prediction_set.npz", y=all_y, X=all_X) def predict(model, serie, prediction_set): """ Parameters ---------- model: keras.model serie: array of shape (200, 7) Returns ------- tuple tuple[0] -> label of the `serie` tuple[1] -> max probability of each class """ # Pair the serie with each instance in the prdiction set pairs = [[serie.reshape(seq_len, features, 1)]*len(prediction_set["X"]), prediction_set["X"]] # Calculate the probabilities probs = model.predict(pairs) probs_label = list(zip(probs.flatten(), prediction_set["y"])) for l in np.unique(prediction_set["y"]): current_probs = [p[0] for p in probs_label if p[1] == l] # Get the max probabily and get the corresponding label return prediction_set["y"][np.argmax(probs.flatten())] y_true = y_test.tolist()[0] pred_set = np.load("prediction_set.npz") y_pred = [] for idx, i in enumerate(X_test[:20]): print("Predicting...") pred = predict(siamese_net, i, pred_set) y_pred.append(pred) print("Pred:", pred, ", True:", y_true[idx]) print("Predict done...") target_names = np.unique(y_true).tolist() print(classification_report(y_true, y_pred, target_names=target_names)) ``` ## Making predictions loading the model from weights ``` weights_path = "siamese_net.h5" siamese_net.load_weights(weights_path) y_true = y_test.tolist() y_pred = [ predict(siamese_net, i, np.load("prediction_set.npz"))[0] for i in X_test ] target_names = np.unique(y_true).tolist() print(classification_report(y_true, y_pred, target_names=target_names)) list(zip(y_true, y_pred)) ``` ## CNN ``` samples, seq, features = X_train.shape samples_val, _, _ = X_val.shape samples_test, _, _ = X_test.shape import tensorflow le = LabelBinarizer().fit(y) model = Sequential() model.add(Conv2D(32, kernel_size=(20, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(32, (50, 1), activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(64, (20, 1), activation='relu')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dense(5, activation='softmax')) model.compile(loss=tensorflow.keras.losses.binary_crossentropy, optimizer=tensorflow.keras.optimizers.Adadelta(), metrics=['accuracy']) model.summary() model.fit(X_train.reshape(samples, seq, features, 1), le.transform(y_train), batch_size=30, epochs=10, verbose=1, validation_data=(X_val.reshape(samples_val, seq, features, 1), le.transform(y_val))) score = model.evaluate(X_test.reshape(samples_test, seq, 
features, 1), le.transform(y_test), verbose=0) model.save("cnn.h5") print('Test loss:', score[0]) print('Test accuracy:', score[1]) print(classification_report(y_test, le.inverse_transform(model.predict(X_test.reshape(samples_test, seq, features, 1))))) ``` ## Siamese LSTM Net ``` def exponent_neg_manhattan_distance(left, right): return K.exp(-K.sum(K.abs(left-right), axis=1, keepdims=False)) left_input = Input(shape=(500, 12)) right_input = Input(shape=(500, 12)) shared_lstm = Sequential([ LSTM(300, return_sequences=False), Dense(512) ]) encoded_l = shared_lstm(left_input) encoded_r = shared_lstm(right_input) L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1])) L1_distance = L1_layer([encoded_l, encoded_r]) prediction = Dense(1, activation='sigmoid')(L1_distance) malstm = Model(inputs=[left_input, right_input], outputs=prediction) malstm.compile(loss="binary_crossentropy", optimizer=Adam(0.00006)) class SiameseLoader: """For loading batches and testing tasks to a siamese net""" def __init__(self, data, categories): self.data = data self.categories = categories self.info = {} def get_batch(self, batch_size, s="train"): """ Create batch of n pairs, half same class, half different class Parameters ---------- batch_size: int Number of pairs to create s: str Set name Returns ------- tuple tuple[0] -> array like of shape (batch_size, 2, window_size, n_features, 1) containing the pairs of the batch tuple[1] -> array like of shape (batch_size) containing the targets. """ X = self.data[s] y = self.categories[s] n_seq, seq_length, n_features = X.shape # Initialize 2 empty arrays for the input image batch pairs = [np.zeros((batch_size, seq_length, n_features)) for i in range(2)] # Initialize vector for the targets, and make one half of it '1's, so 2nd half of batch has same class targets = np.zeros((batch_size,)) targets[batch_size//2:] = 1 labels = [] for i in range(batch_size): idx_1 = np.random.randint(0, n_seq) pairs[0][i,:,:] = X[idx_1] pair_0_label = y[idx_1] # Pick images of same class for 1st half, different for 2nd if i >= batch_size // 2: idx_2 = np.random.choice([l[0] for l in enumerate(y) if l[1] == pair_0_label]) else: idx_2 = np.random.choice([l[0] for l in enumerate(y) if l[1] != pair_0_label]) labels.append((pair_0_label, y[idx_2])) pairs[1][i,:,:] = X[idx_2] return pairs, targets, labels def generate(self, batch_size, s="train"): """A generator for batches, so model.fit_generator can be used. """ while True: pairs, targets = self.get_batch(batch_size, s) yield (pairs, targets) def make_oneshot_task(self, s="val", N=5): """ Create pairs of test image, support set for testing N way one-shot learning. 
Parametes --------- s: str, optional Name of the used set N: int, optional Support set size Returns ------- tuple tuple[0] -> array like of shape (batch_size, 2, window_size, n_features, 1) containing the pairs of the batch Paris where first element is the test image and the second one is an instance of the support set tuple[1] -> array like of shape (batch_size) with a single 1, which is the target of support set """ X = self.data[s] y = self.categories[s] enum_labels = list(enumerate(y)) n_seq, seq_length, n_features = X.shape # Pick the true label true_label = np.random.choice(y) true_instances = np.array([l[0] for l in enum_labels if l[1] == true_label]).astype(np.int) false_instances = np.array([l[0] for l in enum_labels if l[1] != true_label]).astype(np.int) # Build the support set support_set_idx = np.random.choice(false_instances, size=(N)) support_set = X[support_set_idx] if len(true_instances) == 1: test_img_idx, support_true_idx = true_instances[0], true_instances[0] else: test_img_idx, support_true_idx = np.random.choice(true_instances, size=(2,), replace=False) support_set[0,:,:] = X[support_true_idx] # Pick the same test image N times test_img = [X[test_img_idx]]*N # Set the first target to 1 because the first element of support set is the desired output targets = np.zeros((N,)) targets[0] = 1 targets, test_img, support_set = shuffle(targets, test_img, support_set) pairs = [test_img, support_set] return pairs, targets def test_oneshot(self, model, k, s="val", verbose=0): """ Test average N way oneshot learning accuracy of a siamese neural net over k one-shot tasks Parameters ---------- model: kearas.model k: int Number of one shot tasks s: str, optional Name of the set verbose: boolean, optional If True -> print the accuracy Returns ------- float Accuaracy on the k one shot tasks """ n_correct = 0 for i in range(k): inputs, targets = self.make_oneshot_task(s) probs = model.predict(inputs) if np.argmax(probs) == np.argmax(targets): n_correct += 1 percent_correct = (100.0 * n_correct / k) if verbose: print("Got an average of {}% learning accuracy".format(percent_correct)) return percent_correct loader = SiameseLoader(data={"train": X, "val": X_val}, categories={"train": y, "val": y_val}) n_iter = 10000 loss_every = 10 weights_path = "siamese_net_lstm.h5" batch_size = 30 best_loss = 99999 for i in range(1, n_iter): (inputs, targets, l) = loader.get_batch(batch_size) loss = malstm.train_on_batch(inputs, targets) if i % loss_every == 0: print("Iteration {}, Loss: {}".format(i, loss)) val_acc = loader.test_oneshot(malstm, 20, verbose=True) # If loss improve store the weights if best_loss > loss: best_loss = loss malstm.save(weights_path) ```
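One remark on the loaders above: in both `SiameseLoader` classes the `generate` method unpacks two values from `get_batch`, but `get_batch` returns three (`pairs, targets, labels`), so `generate` (unused in this notebook) would raise a `ValueError` if it were ever passed to `fit`. A minimal corrected sketch is shown below; the subclass name and the commented usage are hypothetical. Note also that newer pandas and NumPy releases have removed `DataFrame.append` and `np.int`, so the data-loading and one-shot cells would need `pd.concat` and plain `int` if rerun there.

```
class FixedSiameseLoader(SiameseLoader):  # SiameseLoader as defined above
    def generate(self, batch_size, s="train"):
        """A generator of (pairs, targets) batches that model.fit can consume."""
        while True:
            # get_batch returns (pairs, targets, labels); drop the labels here
            pairs, targets, _ = self.get_batch(batch_size, s)
            yield pairs, targets

# Hypothetical usage with the objects defined earlier in the notebook:
# loader = FixedSiameseLoader(data={"train": X, "val": X_val},
#                             categories={"train": y, "val": y_val})
# siamese_net.fit(loader.generate(batch_size=30), steps_per_epoch=100, epochs=5)
```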
### **1. ๆœบๅ™จๅญฆไน ไธญ็š„็›‘็ฃๅญฆไน ใ€้ž็›‘็ฃๅญฆไน ใ€ๅผบๅŒ–ๅญฆไน ๆœ‰ไฝ•ๅŒบๅˆซ**๏ผŸ ไธ‰็งๅญฆไน ๆ–นๅผ็š„ไธป่ฆๅŒบๅˆซๅœจไบŽๆ˜ฏๅฆๆœ‰LabelไปฅๅŠLabel็š„ๆž„ๆˆๆ–นๅผ็š„ไธๅŒ๏ผš 1. ็›‘็ฃๅญฆไน ๏ผŒๆ˜ฏๆœ‰Label็š„ๅญฆไน ๏ผ›ไธ”Labelๆ˜ฏ็”ฑไบบๅทฅไบง็”Ÿ็š„ใ€‚ๅœจๅญฆไน ็š„่ฟ‡็จ‹ไธญ๏ผŒ้€š่ฟ‡ไผ˜ๅŒ–ๆ–นๆณ•ไผ˜ๅŒ–ๆจกๅž‹ๅ‚ๆ•ฐๆฅ็ผฉๅฐ้ข„ๆต‹็ป“ๆžœไธŽไบบๅทฅLabel็š„ๅทฎ่ทๆฅๅฎž็Žฐๅญฆไน ใ€‚ๅฆ‚Figure1ไธญๆ‰€็คบ๏ผŒๅญฆไน ็š„ไปปๅŠกๆ˜ฏๅฏนๅ›พ็‰‡่ฟ›่กŒๅˆ†็ฑป๏ผŒๅˆคๆ–ญๆ˜ฏๅฆๆ˜ฏ้ธญๅญใ€‚ๅœจๅทฆๅ›พๆ•ฐๆฎไธญๆ ‡ๆณจๅ‡บๆฅ็š„Duckๅ’ŒNot Duckๅฐฑๆ˜ฏไบบๅทฅ็ป™็š„Labelใ€‚ 2. ๆ— ็›‘็ฃๅญฆไน , ๆ˜ฏๆ— Label็š„ๅญฆไน ๏ผ›ๅœจๅญฆไน ็š„่ฟ‡็จ‹ไธญ๏ผŒ้€š่ฟ‡็ฎ—ๆณ•่‡ชๅŠจๅ‘็Žฐๆ•ฐๆฎไธญๅ†…ๅœจ็š„่ง„ๅพ‹ๆฅๅฎž็Žฐๅญฆไน ใ€‚ๅฆ‚Figure1ไธญๆ‰€็คบ๏ผŒๅณๅ›พไธญๆฒกๆœ‰็ป™ๅ›พ็‰‡ไปปไฝ•Labelไฟกๆฏ๏ผŒๆจกๅž‹่‡ชๅŠจๅฐ†ๅ›พ็‰‡่ฟ›่กŒๅˆ†็ฑปใ€‚ 3. ๅผบๅŒ–ๅญฆไน ๏ผŒๆ˜ฏไธ€็ง็‰นๆฎŠ็š„ๆœ‰Label็š„ๅญฆไน ๆ–นๅผ๏ผ›Labelไธๆ˜ฏ็”ฑไบบๅทฅไบง็”Ÿ็š„๏ผŒ่€Œๆ˜ฏ็”ฑAgentไธŽ็Žฏๅขƒ็š„ไบคไบ’ๆฅไบง็”Ÿ็š„๏ผŒๅˆๅฏไปฅๆˆไธบRewardใ€‚ๅผบๅŒ–ๅญฆไน ็š„ๅญฆไน ๆ–นๅผๆ˜ฏ้€š่ฟ‡ไผ˜ๅŒ–Action็š„็ญ–็•ฅไปฅ่Žทๅพ—ๆœ€ๅคงRewardๆฅ่ฟ›่กŒๅญฆไน ็š„ใ€‚ๅฆ‚Figure2ไธญๆ‰€็คบ๏ผŒ่€้ผ (Agent)้€š่ฟ‡ๅฏน่ฟทๅฎซ๏ผˆ็Žฏๅขƒ๏ผ‰็š„่ง‚ๅฏŸ่Žทๅ–ๅฝ“ๅ‰็š„็Šถๆ€๏ผŒ็„ถๅŽๆ นๆฎๅฝ“ๅ‰็š„็Šถๆ€ๅˆคๆ–ญๅšๅ‡บActionใ€‚ๅฝ“่‡ชๅทฑ็š„ActionไปŽ็Žฏๅขƒไธญ่Žทๅพ—ไบ†Reward๏ผˆๅฅถ้…ช๏ผ‰ๅ้ฆˆไน‹ๅŽ๏ผŒๅฐฑไผšๆ นๆฎๅ้ฆˆๅŽปไผ˜ๅŒ–่‡ชๅทฑ็š„่กŒไธบ็ญ–็•ฅ๏ผŒไฝฟๅพ—ๅœจ่ฟ™ไธช็ญ–็•ฅ็š„ๆŒ‡ๅฏผไธ‹๏ผŒๅœจๅฝ“ๅ‰็š„็Žฏๅขƒ๏ผˆๆธธๆˆ่ง„ๅˆ™๏ผ‰ไธญ๏ผŒ่Žทๅพ—ๆœ€ๅคง็š„Rewardใ€‚ ![Supervised](./images/S_Un_Supervised.png) ***Figure 1. Supervised and UnSupervised Learning*** ![RL](./images/RL.png) ***Figure 2. Reinforcement Learning *** ### **2. ไป€ไนˆๆ˜ฏ็ญ–็•ฅ็ฝ‘็ปœ๏ผŒไปทๅ€ผ็ฝ‘็ปœ๏ผŒๆœ‰ไฝ•ๅŒบๅˆซ**? - ็ญ–็•ฅ็ฝ‘็ปœ๏ผšๅฐฑๆ˜ฏๆ นๆฎ็ป™ๅฎš็š„่พ“ๅ…ฅ็Šถๆ€้€š่ฟ‡่ฎก็ฎ—็ป™ๅ‡บไธ€ไธช็กฎๅฎš่พ“ๅ‡บ็š„็ฝ‘็ปœใ€‚ๆฏ”ๅฆ‚๏ผˆๅŠจไฝœ1๏ผŒ็Šถๆ€2๏ผ‰๏ผŒ๏ผˆๅŠจไฝœ2๏ผŒ็Šถๆ€4๏ผ‰ใ€‚ - ไปทๅ€ผ็ฝ‘็ปœ๏ผšๅฐฑๆ˜ฏๆ นๆฎ็ป™ๅฎš็š„่พ“ๅ…ฅ็Šถๆ€้€š่ฟ‡่ฎก็ฎ—่ฏ„ไผฐๅฝ“ๅ‰็Šถๆ€็š„ไปทๅ€ผใ€‚ไปทๅ€ผๅคงๅฐๅฏ้€š่ฟ‡ๆœ‰ๅคšๅคงๆฆ‚็އ่Žทๅพ—ๅคšๅฐ‘ๅฅ–ๅŠฑๅ้ฆˆๆฅ่ฏ„ไผฐใ€‚ ### **3.่ฏท็ฎ€่ฟฐMCTS๏ผˆ่’™็‰นๅกๆด›ๆ ‘ๆœ็ดข๏ผ‰็š„ๅŽŸ็†๏ผŒ4ไธชๆญฅ้ชคSelect, Expansion๏ผŒSimluation๏ผŒBackpropagationๆ˜ฏๅฆ‚ไฝ•ๆ“ไฝœ็š„?** MCTS๏ผˆ่’™็‰นๅกๆด›ๆœ็ดข๏ผ‰็š„ๅŸบๆœฌๆ€ๆƒณๆ˜ฏ้€š่ฟ‡ๅคšๆฌกๆจกๆ‹Ÿๅšๅผˆ่ฟ‡็จ‹๏ผŒๆ นๆฎๆฏๆฌกๆจกๆ‹Ÿ็š„ๆœ€็ปˆ่พ“่ตข็ป“ๆžœ๏ผŒๆฅๆœ็ดขๅ‡บไธ€ไธชๆœ€ไผ˜็š„็ญ–็•ฅๆ ‘ใ€‚ๆฏไธช่Š‚็‚น่กจ็คบไบ†ไธ€ไธชๅฑ€้ข๏ผŒๅฎƒไฟๅญ˜ไบ†ๅฝ“ๅ‰็š„็Šถๆ€ไฟกๆฏ๏ผŒโ€œไปทๅ€ผโ€ไฟกๆฏ็”จA/B่กจ็คบ๏ผŒไปฃ่กจไบ†่ขซ่ฎฟ้—ฎBๆฌก๏ผŒ่Žท่ƒœไบ†Aๆฌก็š„ๆฆ‚็އใ€‚ - Selection: ไปŽๆ น่Š‚็‚นๅพ€ไธ‹่ตฐ๏ผŒๆฏๆฌก้ƒฝ้€‰ๆ‹ฉไธ€ไธชโ€œๆœ€ๆœ‰ไปทๅ€ผ็š„ๅญ่Š‚็‚นโ€๏ผŒ็›ดๅˆฐๆ‰พๅˆฐโ€œๅญ˜ๅœจๆœชๆ‰ฉๅฑ•็š„ๅญ่Š‚็‚นโ€๏ผŒๅณ่ฟ™ไธชๅฑ€้ขๅญ˜ๅœจๆœช่ตฐ่ฟ‡็š„ๅŽ็ปญ่ตฐๆณ•็š„่Š‚็‚น๏ผŒๆฏ”ๅฆ‚Figure3ไธญ็š„3/3่Š‚็‚นใ€‚ๅ…ถไธญโ€œ่Š‚็‚น็š„ไปทๅ€ผโ€้€š่ฟ‡UCB๏ผˆUpper Confidence Bound๏ผ‰็ฎ—ๆณ•ๆฅ่ฏ„ไผฐ๏ผŒUCB็ฎ—ๆณ•็š„ไปทๅ€ผ่ฏ„ไผฐๅ‡ฝๆ•ฐๅนณ่กกไบ†ๆœ็ดข-ๅˆฉ็”จ้—ฎ้ข˜ใ€‚ - Expansion: ็ป™้€‰ๅฎš็š„่Š‚็‚น๏ผˆ3/3๏ผ‰ๅŠ ไธŠไธ€ไธช0/0ๅญ่Š‚็‚น๏ผŒๅณๆ˜ฏๅฏนๅฝ“ๅ‰็š„โ€œๆœชๆ‰ฉๅฑ•็š„ๅญ่Š‚็‚นโ€่ฟ›่กŒๆ‰ฉๅฑ•ใ€‚ - Simulation๏ผšไฝฟ็”จๅฟซ้€Ÿ่ตฐๅญ็ญ–็•ฅ๏ผˆRollout Policy๏ผ‰่ตฐๅˆฐๅบ•๏ผŒๅพ—ๅˆฐไธ€ไธช่ƒœ่ดŸ็ป“ๆžœใ€‚ - Backpropagation: ๆŠŠๆจกๆ‹Ÿ็š„็ป“ๆžœ0/1 ๆˆ–่€… 1/1 ๏ผˆๅœจFigure3ไธญ็š„ไพ‹ๅญๆ˜ฏ0/1๏ผ‰ๅŠ ๅˆฐๅฎƒๆ‰€ๆœ‰็š„็ˆถ่Š‚็‚นไธŠใ€‚ ![MCTS](./images/MCTS.png) ***Figure3.Monte Carlo Tree Search*** ### **4. 
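For reference, the "value of a node" used in the Selection step is commonly the UCB1 score. Writing $w_i$ for the number of wins recorded at child $i$, $n_i$ for its visit count, $N$ for the visit count of its parent, and $c$ for an exploration constant (often $\sqrt{2}$), the child maximizing

$$
\mathrm{UCB}_i = \frac{w_i}{n_i} + c\,\sqrt{\frac{\ln N}{n_i}}
$$

is selected: the first term favors exploitation of moves that have won often, the second favors exploration of moves that have rarely been visited.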
ๅ‡่ฎพไฝ ๆ˜ฏๆŠ–้Ÿณ็š„ๆŠ€ๆœฏ่ดŸ่ดฃไบบ๏ผŒๅผบๅŒ–ๅญฆไน ๅœจไฟกๆฏๆตๆŽจ่ไธญไผšๆœ‰ๆ€Žๆ ท็š„ไฝœ็”จ๏ผŒๅฆ‚ๆžœ่ฆ่ฟ›่กŒไฝฟ็”จๅผบๅŒ–ๅญฆไน ๏ผŒ้ƒฝๆœ‰ๅ“ชไบ›่ฆ็ด ้œ€่ฆ่€ƒ่™‘?** ๅœจๅฏน้—ฎ้ข˜่ฟ›่กŒๅผบๅŒ–ๅญฆไน ๅปบๆจก็š„ๆ—ถๅ€™๏ผŒ้ฆ–ๅ…ˆ่ฆ่€ƒ่™‘็š„ๆ˜ฏๅผบๅŒ–ๅญฆไน ็š„ๅŸบๆœฌ่ฆ็ด ๅณ State, Action ๅ’Œ Rewardใ€‚ๅœจๆŠ–้ŸณไฟกๆฏๆตๆŽจ่ไธญ๏ผŒ - State๏ผšๅฏไปฅๅฐ†ๅฝ“ๅ‰็”จๆˆทไธŽ่ง†้ข‘ไบคไบ’็š„ๆ—ถ้—ดๅบๅˆ—่กŒไธบ๏ผŒ็”จๆˆท็š„ไบบๅฃๅญฆไฟกๆฏ๏ผŒ่ฎพๅค‡ไฟกๆฏไธŠไธ‹ๆ–‡ไฟกๆฏ็ญ‰๏ผŒไฝœไธบ็”จๆˆทๅฝ“ๅ‰็š„ไธ€ไธช็Šถๆ€ใ€‚ - Action: ๅฏไปฅๅฐ†็ณป็ปŸๅฐ†ๆŸ่ง†้ข‘ๆŽจ่็ป™ๅฝ“ๅ‰็”จๆˆท็š„่กŒไธบไฝœไธบไธ€ไธชactionใ€‚ - Reward: ็”จๆˆทๅฏน่ขซๆŽจ่็š„่ง†้ข‘็š„ๆ“ไฝœๅ้ฆˆ๏ผŒๆฏ”ๅฆ‚็‚นๅ‡ป๏ผŒ่ง‚็œ‹๏ผŒๆ”ถ่—๏ผŒ็‚น่ตž๏ผŒ่ฏ„่ฎบ็ญ‰ไฝœไธบRewardๅ้ฆˆ็ป™็ณป็ปŸใ€‚ ๅœจๆญคๅŸบ็ก€ไธŠ, ่€ƒ่™‘ๅผบๅŒ–ๅญฆไน ็š„ๆ€ปไฝ“ๆžถๆž„๏ผ›ๆฏ”ๅฆ‚ๆ˜ฏ้‡‡็”จModel-based ่ฟ˜ๆ˜ฏ model-free ๆžถๆž„(ๅ‚็…งFigure4)๏ผ›ไปฅๅŠๆ˜ฏๅฆ่ฆๅผ•ๅ…ฅๆจกๆ‹Ÿ่ฎญ็ปƒๆœบๅˆถ๏ผŒไปฅ่พ…ๅŠฉๅ’ŒๅŠ ๅฟซRL็ฎ—ๆณ•็š„่ฎญ็ปƒ๏ผŒๆฏ”ๅฆ‚AlphaGo Zeroไธญๅผ•ๅ…ฅ็š„MCTS๏ผˆๅ‚็…งFigure5๏ผ‰ใ€‚ ![model](./images/model.png) ***Figure4.Model-based and model-free*** ![alphago](./images/alphago.png) ***Figure5.AlphaGo Zero Architecture*** ๆŽฅไธ‹ๆฅ๏ผŒ่ฟ˜่ฆ่€ƒ่™‘ๆ•ฐๆฎ็ป“ๆž„ๅ’Œๆ•ฐๆฎ้›†็š„็ป„็ป‡๏ผŒไปฅๅŠ่ฎพ่ฎกๆทฑๅบฆ็ฝ‘็ปœ็ป“ๆž„็”จๆฅไปŽๆ•ฐๆฎไธญๅญฆไน ๅพ—ๅˆฐValue๏ผˆๅฏนๅฝ“ๅ‰ๆƒ…ๅ†ต็š„่ฏ„ไผฐ๏ผ‰ๅ’Œ Policy(ๆŽฅไธ‹ๆฅAction็š„็ญ–็•ฅ)ใ€‚ ๆญคๅค–๏ผŒ้€š่ฟ‡่ฟ˜้œ€่ฆ่€ƒ่™‘ๆŽข็ดขๅ’Œๅˆฉ็”จ็š„ๅนณ่กก้—ฎ้ข˜๏ผŒๆฏ”ๅฆ‚ๆ˜ฏ่ฆโ€œๅฎ‰ๅ…จ็š„โ€ๅๅคๆŽจ่็ป™็”จๆˆทๆ„Ÿๅ…ด่ถฃ็š„ๆŸไธ€็ฑป่ง†้ข‘่ฟ˜ๆ˜ฏ่ฆๅš้€‚ๅฝ“็š„ๆŽข็ดข๏ผŒๅŽปไบ†่งฃ็”จๆˆทๅนฟๆณ›็š„ๅ…ด่ถฃ็‚น๏ผŒๆฅไฝฟๅพ—็ณป็ปŸๅ…ทๆœ‰ๆ›ดๅผบๅคง็š„ๆปก่ถณ็”จๆˆทๅ…ด่ถฃ้œ€ๆฑ‚็š„ๆŽจ่่ƒฝๅŠ›ใ€‚ ๆœ€ๅŽ็”ฑไบŽๆ•ฐๆฎ็จ€็–๏ผŒๅœจ็บฟไธŠๅœบๆ™ฏไธ‹็Žฏๅขƒ๏ผˆๅœจๆญคๅบ”็”จไธญๅณ็”จๆˆท๏ผ‰ไธŽๆŽจ่็ณป็ปŸไบคไบ’็š„้ข‘็އ็›ธๅฏน่พƒไฝŽ๏ผŒๆ‰€ไปฅ็บฟไธŠ่ฎญ็ปƒ่พƒไธบๅ›ฐ้šพใ€‚ๅ› ๆญคๅฏไปฅ้€š่ฟ‡ๅœจ็บฟไธ‹ไปฟ็œŸ็Žฏๅขƒไธ‹่ฎญ็ปƒ็š„ๆ–นๅผๆฅๅฎŒๆˆๅˆๅง‹่ฎญ็ปƒ๏ผŒ็„ถๅŽๅ†ไธŠ็บฟ้€ๆญฅ่ฐƒไผ˜ใ€‚ไพ‹ๅฆ‚Figure6,ๆ‰€็คบ็š„้‡‡็”จๆธธๆˆไปฟ็œŸ็Žฏๅขƒๆฅ่ฎญ็ปƒ่‡ชๅŠจ้ฉพ้ฉถๅขžๅผบๅญฆไน ๆจกๅž‹ใ€‚ ![Auto-Drive-Pure](./images/autodrive_p.png) ***Figure6.Automatic Drive*** ๅŒๆ ท๏ผŒๅœจๆŠ–้ŸณไฟกๆฏๆตๆŽจ่็š„ๅœบๆ™ฏไธญ๏ผŒๆˆ‘ไปฌไนŸๅฏไปฅๅœจ็บฟไธ‹็Žฏๅขƒไธญไฝฟ็”จๅ…ถไป–่ฎญ็ปƒๅฅฝ็š„ไฟกๆฏๆตๆŽจ่ๆจกๅž‹๏ผŒๆฏ”ๅฆ‚DSINๆจกๅž‹ๆฅๆจกๆ‹Ÿ็”จๆˆท๏ผŒ็ป™RL็ณป็ปŸๆŽจ่็š„ๅ†…ๅฎนๅ้ฆˆๆฅๅฎŒๆˆๅฏนRLๆจกๅž‹็š„็บฟไธ‹ๅˆๆญฅ่ฎญ็ปƒใ€‚ ### ***5.ๅœจ่‡ชๅŠจ้ฉพ้ฉถไธญ๏ผŒๅฆ‚ไฝ•ไฝฟ็”จๅผบๅŒ–ๅญฆไน ่ฟ›่กŒ่ฎญ็ปƒ๏ผŒ่ฏท่ฏดๆ˜Ž็ฎ€่ฆ็š„ๆ€่ทฏ?*** ๅœจๅผบๅŒ–ๅญฆไน ไธญ๏ผŒAgent้œ€่ฆ่Žทๅพ—ไธŽ็Žฏๅขƒ็š„ๅ้ฆˆๆฅไธๆ–ญไผ˜ๅŒ–่‡ช่บซๅฏนๅฝ“ๅ‰็Šถๆ€ไปทๅ€ผไผฐ่ฎกๅ’Œๅˆถๅฎšๆœ€ไผ˜่กŒๅŠจ็ญ–็•ฅใ€‚ๅœจ่‡ชๅŠจ้ฉพ้ฉถ็š„ๅบ”็”จไธญ๏ผŒๅฆ‚ๆžœ่ฎฉๆ™บ่ƒฝไฝ“ๆŽงๅˆถๅฎž็‰ฉๆฑฝ่ฝฆ็›ดๆŽฅไธŽ็œŸๅฎž็Žฏๅขƒ่ฟ›่กŒไบคไบ’ๆฅ่ฎญ็ปƒ๏ผŒๆ˜พ็„ถๆ˜ฏๆˆๆœฌ้ซ˜๏ผŒๅฑ้™ฉๅคง๏ผŒไธ”ไธๅคชๅฎž้™…็š„ไธ€็งๅšๆณ•ใ€‚ๆ‰€ไปฅ้€šๅธธๆฅ่ฏด๏ผŒๅผบๅŒ–ๅญฆไน Agentไผšๅœจไธ€ไธชๅฏน็Žฐๅฎžไธ–็•Œ้ซ˜ไปฟ็œŸ๏ผŒไผ ๆ„Ÿๅ™จๆ‰€่Žทๆ•ฐๆฎ้ซ˜ไปฟ็œŸ็š„ๆจกๆ‹Ÿ็Žฏๅขƒไธ‹่ฟ›่กŒๅผ€ๅ‘๏ผŒ่ฎญ็ปƒๅ’Œ้ชŒ่ฏ็š„ใ€‚ไพ‹ๅฆ‚,ๅœจIntel ๅ’Œ Toyota ่”ๅˆๅผ€ๅ‘็š„่‡ชๅŠจ้ฉพ้ฉถๆจกๆ‹Ÿ็Žฏๅขƒ CARLA(Car Learning to Act)ไธญ๏ผŒ ๏ผˆๅ‚่€ƒFigure7๏ผ‰ใ€‚ ๅผบๅŒ–ๅญฆไน ็ณป็ปŸๅฏไปฅไปŽ่ฝฏ็กฌไปถไผ ๆ„Ÿๅ™จ๏ผˆ่ฝฏไปถไธป่ฆๆŒ‡้€š่ฟ‡่ง†่ง‰็ฎ—ๆณ•ๅขžๅผบ่ฟ‡็š„ไฟกๅทๆ”ถ้›†็ณป็ปŸ๏ผ‰่Žทๅพ—ๅ‘จๅ›ด็Žฏๅขƒ็š„ๆทฑๅบฆๅ›พๅƒไฟกๆฏ๏ผŒๅฝฉ่‰ฒๅ›พๅƒไฟกๆฏ๏ผŒๅœบๆ™ฏ่ฏญไน‰ๅˆ†ๅ‰ฒไฟกๆฏๅ’Œ้›ท่พพๅž‹ๅทใ€‚ ![CARLA](./images/autodrive.png) ***Figure7. 
On this basis, the basic RL elements can be modeled as follows (a minimal interface sketch is given at the end of this section):

- state: the sensor information about the car's surroundings at the current moment (or over a recent time window), plus the car's own driving state.
- action: the control operations the agent can apply to the car, e.g. move forward, reverse, stop, turn left, turn right, accelerate and decelerate.
- environment: the simulated 3D scene and its sensors.
- reward: a positive reward for a period of normal driving, and negative rewards for the different kinds of accidents.

As for the algorithmic framework, from an application standpoint an autonomous-driving RL agent is better suited to model-based RL. Deep learning can be applied to the current inputs to obtain a value network and a policy network, which evaluate the current state and decide the next action.

On the deep-learning side:

1. An end-to-end framework can be used: organize the sensor inputs into the state, pass it directly to the deep network, and learn from the rewards obtained.
2. Alternatively, split the process into steps: 1) first "mapify" the acquired information, converting the graphical inputs into a 2D semantic map; 2) mark in that map the position of the ego car and the positions of neighboring vehicles, obstacles and pedestrians; 3) then train a sub-network that controls the car from this abstract information; 4) finally merge this network with one that also takes the raw image information into account, for value estimation and action planning.
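To make the state/action/reward modeling above concrete, here is a minimal, purely illustrative environment skeleton in Python. It follows the common reset/step convention; all class and field names, observation shapes and reward values are assumptions for illustration and do not correspond to the real CARLA API.

```
import numpy as np

# Hypothetical discrete action set matching the list above
ACTIONS = ["forward", "reverse", "stop", "left", "right", "accelerate", "decelerate"]


class DrivingEnvSketch:
    """Illustrative skeleton of a simulated driving environment (not a real simulator API)."""

    def reset(self):
        # State: a snapshot of the surroundings from the sensors plus the ego vehicle's own state
        return {
            "depth": np.zeros((84, 84)),     # depth image placeholder
            "rgb": np.zeros((84, 84, 3)),    # color image placeholder
            "semantic": np.zeros((84, 84)),  # semantic-segmentation placeholder
            "speed": 0.0,                    # ego vehicle driving state
        }

    def step(self, action):
        assert action in ACTIONS
        next_state = self.reset()            # placeholder transition
        collided = False                     # a real simulator would report this
        # Reward shaping: small positive reward per step of normal driving,
        # large negative reward when an accident occurs (values are illustrative)
        reward = -100.0 if collided else 0.1
        done = collided
        return next_state, reward, done, {}


env = DrivingEnvSketch()
state = env.reset()
state, reward, done, info = env.step("accelerate")
```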
# Validation of the function "comp_tonality" based on ISO1996-2 annex K ISO 1996-2 does not specify any method to validate the correct operation of the prominent tone detection process using the inspection method. Therefore, this validation has been created, which consists of processing different signals using the function to be validated ("comp_tonality") and superimposing, graphically, the result obtained by the function on the graph of the third octave band spectrum of these signals, thus allowing to visually check if the results are correct. The signals used for the verification were obtained from the website https://freesound.org/. Signals corresponding to pure tones of the different frequency bands representative of the audible frequency range have been chosen, as well as multi-tone signals (presenting several tones in different audible frequency ranges in the same signal) and atonal signals (presenting no tone at all). ``` # Standard library imports import math import matplotlib.pyplot as plt import numpy as ny # Local imports from mosqito.sound_level_meter.noct_spectrum.noct_spectrum import noct_spectrum from mosqito.utils.load import load from mosqito.sq_metrics.tonality.tonality_iso1996K.comp_tonality import comp_tonality ``` # Pure Tone Detection ## Low-frequency range of the audible spectrum ### Pure Tone at 31'5 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE31'5HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 31.5 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 31,5 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 31.5 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Pure Tone at 50 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE50HZ.wav" # load and obtain the signal data and its sampling frequency. 
sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 50 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 50 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 50 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Pure Tone at 100 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE100HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 100 Hz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 100 Hz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 100 Hz with a level of 85 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Pure Tone at 125 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE125HZ.wav" # load and obtain the signal data and its sampling frequency. 
sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 125 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 125 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 125 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. ## Mid-frequency range of the audible spectrum ### Pure Tone at 160 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE160HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 160 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 160 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 160 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. 
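The spectrum-plotting cell is repeated verbatim for every test signal; a small helper along the following lines could replace it. This is a refactoring sketch only: the function name is hypothetical, and it assumes `noct_spectrum` and `comp_tonality` behave exactly as used in the cells above (imports taken from the top of this notebook).

```
import math
import matplotlib.pyplot as plt


def plot_third_octave_with_tones(sig, fs, tones, title, fmin=25, fmax=20000):
    """Plot the third-octave spectrum of `sig` and mark the tones found by comp_tonality."""
    # noct_spectrum is assumed to be imported as at the top of this notebook
    third_spec = list(noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax))
    fc = third_spec[1].tolist()
    Lp_Pa = third_spec[0].tolist()
    # Convert band levels from Pa to dB re 20 uPa
    p_ref = 20e-06
    Lp = [20 * math.log10(p[0] / p_ref) for p in Lp_Pa]
    plt.plot(fc, Lp)
    plt.semilogx()
    plt.ylabel('Averaged Sound Pressure Level - Leq [dB]')
    plt.xlabel('Center Frequency [Hz]')
    plt.title(title)
    # `tones` is the dict returned by comp_tonality: {center frequency: level in dB}
    for f, level in tones.items():
        plt.plot(f, level, marker='o', color='red')
    plt.show()


# Example: plot_third_octave_with_tones(sig, fs, tones, 'Third-octave band spectrum, 160 Hz @ 87 dB')
```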
### Pure Tone at 200 Hz Loading of the Signal to be validated ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE200HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 200 Hz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 200 Hz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 200 Hz with a level of 85 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Pure Tone at 250 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE250HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 250 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 250 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 250 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. 
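As a complementary check that does not depend on downloaded recordings, a pure tone can also be synthesized directly with NumPy and passed through the same function. This is only a sketch: the nominal level follows from the chosen amplitude, and whether the band is actually flagged as prominent still depends on the ISO 1996-2 annex K criterion implemented by `comp_tonality` (assumed to be imported as at the top of this notebook).

```
import numpy as np

fs = 48000                      # sampling frequency in Hz
duration = 5.0                  # seconds
f0 = 250.0                      # tone frequency in Hz
t = np.arange(0, duration, 1 / fs)

# Sine tone with an RMS pressure of 1 Pa, i.e. roughly 94 dB re 20 uPa
sig_synth = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)

tones_synth = comp_tonality(sig_synth, fs)
print(tones_synth)              # expected (if the criterion is met): a tone in the 250 Hz band
```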
### Pure Tone at 400 Hz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE400HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 400 Hz with a level of 87 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 400 Hz @ 87 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 400 Hz with a level of 87 dB. The result of the function and the graphical result match. The calculation of the function is correct. ## High-frequency range of the audible spectrum ### Pure Tone at 1 KHz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE1000HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 1 kHz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 1 KHz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 1 KHz with a level of 85 dB. The result of the function and the graphical result match. 
The calculation of the function is correct. ### Pure Tone at 2 KHz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE2000HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 2 kHz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 2 KHz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 2 KHz with a level of 85 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Pure Tone at 4 KHz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE4000HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 4 kHz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 4 KHz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 4 KHz with a level of 85 dB. The result of the function and the graphical result match. 
The calculation of the function is correct. ### Pure Tone at 5 KHz Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\TONE5000HZ.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 5 kHz with a level of 85 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, 5 KHz @ 85 dB') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 5 kHz with a level of 85 dB. The result of the function and the graphical result match. The calculation of the function is correct. # Tone Detection in Multi-Tone Signals ### Clock Alarm Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\MULTITONE_ALARM.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 800 Hz with a level of 54 dB, in the center frequency band of 2 kHz with a level of 56 dB, in the center frequency band of 3.15 kHz with a level of 64 dB and in the center frequency band of 5 kHz with a level of 75 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB.
Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, Clock Alarm') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 800 Hz with a level of 54 dB, in the center frequency band of 2 kHz with a level of 56 dB, in the center frequency band of 3.15 kHz with a level of 64 dB and in the center frequency band of 5 kHz with a level of 75 dB. The result of the function and the graphical result match. The calculation of the function is correct. ### Siren Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\MULTITONE_SIREN.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: there is a prominent tone in the center frequency band of 1.6 kHz with a level of 62 dB, in the center frequency band of 2.5 kHz with a level of 57 dB and in the center frequency band of 8 kHz with a level of 39 dB. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, Siren') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: there is a prominent tone in the center frequency band of 1.6 kHz with a level of 62 dB, in the center frequency band of 2.5 kHz with a level of 57 dB and in the center frequency band of 8 kHz with a level of 39 dB. The result of the function and the graphical result match. The calculation of the function is correct. # Tone Detection in Signals without Prominent Tones ## Radio Static Noise Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\WHITE_NOISE.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: No prominent tones detected.
Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() plt.ylim(0,100) ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, Radio Static Noise') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: No pronounced peaks are seen in the spectrum. No prominent tones are detected. The result of the function and the graphical result match. The calculation of the function is correct. ## Electric Guitar Riff Loading of the Signal to be validated: ``` # Define path to the .wav file # To be replaced by your own path path = "input\ATONAL_ELECTRIC_GUITAR_RIFF.wav" # load and obtain the signal data and its sampling frequency. sig, fs = load(path) ``` Using the "comp_tonality" function: ``` tones = comp_tonality(sig, fs) print("----RESULT-----") print(tones) print("---------------") ``` Result: No prominent tones detected. Graphically: ``` #-- we obtain the data of the Lp in thirds of octave of the signal of which #-- we want to know the prominent tones fmin = 25 fmax = 20000 third_spec = noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax) third_spec = list(third_spec) # -- Obtain the lists of the central frequencies and the average Lp fc = third_spec[1].tolist() Lp_Pa = third_spec[0].tolist() #-- Create a list with the Lp conversion in dB. Lp = [] P_ref = 20e-06 for i in range(0, len(Lp_Pa)): P = Lp_Pa[i][0] level = 20*math.log10(P/P_ref) Lp.append(level) # Create the graph plt.plot(fc, Lp) plt.semilogx() plt.ylim(0,100) ## y-axis legend plt.ylabel('Averaged Sound Pressure Level - Leq [dB]') ## x-axis legend plt.xlabel('Center Frequency [Hz]') ## Graphic Title plt.title('Third-octave band spectrum, Electric Guitar Riff') ## Graphical representation of the result obtained by the function ""comp_tonality"" items = tones.items() items = list(items) for i in range(0, len(items)): for j in range(0,1): x = items[i][j] y = items[i][j+1] plt.plot(x, y, marker='o', color='red') ## Show Graph plt.show() ``` Graphic result: No pronounced peaks are seen in the spectrum. No prominent tones are detected. The result of the function and the graphical result match. The calculation of the function is correct.
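Since every validation above repeats the same loading, third-octave analysis and plotting steps, the code could be gathered into one helper and called once per signal. A sketch under the same imports used in this notebook (`load`, `noct_spectrum`, `comp_tonality`, `matplotlib`); the function name `plot_tonality_check` is hypothetical and not part of MOSQITO:

```
import math
import matplotlib.pyplot as plt

from mosqito.utils.load import load
from mosqito.sound_level_meter.noct_spectrum.noct_spectrum import noct_spectrum
from mosqito.sq_metrics.tonality.tonality_iso1996K.comp_tonality import comp_tonality


def plot_tonality_check(path, title, fmin=25, fmax=20000, p_ref=20e-06):
    """Run comp_tonality on a .wav file and overlay the detected tones
    on its third-octave band spectrum (same steps as the cells above)."""
    sig, fs = load(path)
    tones = comp_tonality(sig, fs)
    print("----RESULT-----")
    print(tones)
    print("---------------")

    third_spec = list(noct_spectrum(sig=sig, fs=fs, fmin=fmin, fmax=fmax))
    fc = third_spec[1].tolist()
    lp = [20 * math.log10(p[0] / p_ref) for p in third_spec[0].tolist()]

    plt.plot(fc, lp)
    plt.semilogx()
    plt.ylabel('Averaged Sound Pressure Level - Leq [dB]')
    plt.xlabel('Center Frequency [Hz]')
    plt.title(title)
    # tones maps a band center frequency to the level of the prominent tone
    for band, level in tones.items():
        plt.plot(band, level, marker='o', color='red')
    plt.show()
    return tones


# Example: reproduce the 5 kHz validation in a single call
# plot_tonality_check("input/TONE5000HZ.wav", "Third-octave band spectrum, 5 kHz @ 85 dB")
```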
Option chains ======= ``` from ib_insync import * util.startLoop() ib = IB() ib.connect('127.0.0.1', 7497, clientId=29) amc = Stock('AMC', 'SMART', 'USD') ib.qualifyContracts(amc) ``` Suppose we want to find the options on AMC, with the following conditions: * Use the next three monthly expiries; * Use strike prices within +- 20 dollar of the current AMC price; * Use strike prices that are a multiple of 5 dollar. The contract for the underlyer (the AMC stock) was already created and qualified above. To avoid issues with market data permissions, delayed market data can be requested (left commented out here): ``` # ib.reqMarketDataType(4) ``` Then get the ticker and take its current market value; requesting a ticker can take up to 11 seconds. Since the market value is only needed for the strike filter, that step is included in the contract-building cell further below. The following request fetches a list of option chains: ``` chains = ib.reqSecDefOptParams(amc.symbol, '', amc.secType, amc.conId) util.df(chains) ``` The returned option chains differ in ``exchange`` and ``tradingClass``. In this case we're only interested in the chain trading on SMART with tradingClass 'AMC': ``` chain = next(c for c in chains if c.tradingClass == 'AMC' and c.exchange == 'SMART') chain # What we have here is the full matrix of expirations x strikes. From this we can build all the option contracts that meet our conditions: # The strike filter below needs the current market value of the underlyer. # Note: amcValue is used but not defined in the original cell; these two lines are one way to obtain it. [ticker] = ib.reqTickers(amc) amcValue = ticker.marketPrice() strikes = [strike for strike in chain.strikes if strike % 5 == 0 and amcValue - 20 < strike < amcValue + 20] expirations = sorted(exp for exp in chain.expirations)[:3] rights = ['P', 'C'] contracts = [Option('AMC', expiration, strike, right, 'SMART', tradingClass='AMC') for right in rights for expiration in expirations for strike in strikes] contracts = ib.qualifyContracts(*contracts) len(contracts) type(expirations[0]) contracts[0] ``` Now to get the market data for all options in one go: ``` tickers = ib.reqTickers(*contracts) tickers[0] a = ib.positions() option_positions = [x for x in a if x.contract.secType == 'OPT'] option_positions_dict = {} for option_position in option_positions: option_positions_dict[option_position.contract] = option_position.position option_positions[0].contract for (contract, position) in option_positions_dict.items(): contract_new = Option(contract.symbol, contract.lastTradeDateOrContractMonth, contract.strike, contract.right, exchange='SMART', tradingClass=contract.tradingClass) ib.qualifyContracts(contract_new) ticker = ib.reqTickers(contract_new) print(ticker) tickers = ib.reqTickers(*contracts) tickers[0] a = contracts[0] a ``` The option greeks are available from the ``modelGreeks`` attribute, and if there is a bid, ask or last price available, also from ``bidGreeks``, ``askGreeks`` and ``lastGreeks``. For streaming ticks the greek values are kept up to date with the current market situation. ``` ib.disconnect() ```
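As a small follow-up on the greeks mentioned above, the ``modelGreeks`` of each ticker can be collected into a table. A sketch, assuming it is run while still connected and after `tickers = ib.reqTickers(*contracts)` has been populated (i.e. before `ib.disconnect()`); `modelGreeks` may still be `None` for contracts whose model data has not arrived yet:

```
import pandas as pd

rows = []
for t in tickers:
    g = t.modelGreeks  # OptionComputation; may be None until model data has arrived
    if g is None:
        continue
    rows.append({
        'symbol': t.contract.symbol,
        'expiry': t.contract.lastTradeDateOrContractMonth,
        'strike': t.contract.strike,
        'right': t.contract.right,
        'iv': g.impliedVol,
        'delta': g.delta,
        'gamma': g.gamma,
        'vega': g.vega,
        'theta': g.theta,
    })

greeks_df = pd.DataFrame(rows)
print(greeks_df.head())
```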
## Merge csv files into one file with new columns ``` import glob import csv import os import re import datetime import calendar import configparser import time import pymongo from pymongo import MongoClient import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) # set folder name in config.ini config = configparser.ConfigParser() config.read('config.ini') folder_name = config['DEFAULT']['Folder-Name'] ip = config['DEFAULT']['IP'] port = config['DEFAULT']['MongoDB-Port'] # create folder if not exist folder = "output/merged_csv/" if not os.path.exists(folder): os.makedirs(folder) # get folder for file_reader and file_writer data_path = "output/" + folder_name outfile_path = folder + folder_name + "_merged_data.csv" # newline='' prevents blank rows in the output csv on Windows file_writer = csv.writer(open(outfile_path, 'w', newline='')) file_counter = 0 # read all csv files from the directory for input_file in glob.glob(os.path.join(data_path,'*.csv')): # get specific substring as the name of new column collection_name = re.search('{(.+?)}', input_file).group(1) year = collection_name[0:4] week = re.search('_(.+?)_', collection_name).group(1)[1:] country = collection_name.split('_')[-1] # get month by week and year d = str(year) + "-W" + str(week) r = datetime.datetime.strptime(d + '-1', "%Y-W%W-%w") m = re.search('-(.+?)-', str(r)).group(1) month = calendar.month_abbr[int(m)] year_week = year + "-week" + week week_order = year + week # read rows from the input csv files and write into the output csv file with open(input_file,'r') as csv_file: file_reader = csv.reader(csv_file,delimiter=',') if file_counter < 1: for i, row in enumerate(file_reader): if i==0: row.append('year') row.append('month') row.append('week') row.append('year-week') row.append('week_order') row.append("collection_name") else: row.append(year) row.append(month) row.append(week) row.append(year_week) row.append(week_order) row.append("Twitter "+ country) file_writer.writerow(row) else: header = next(file_reader,None) for row in file_reader: row.append(year) row.append(month) row.append(week) row.append(year_week) row.append(week_order) row.append("Twitter "+ country) file_writer.writerow(row) file_counter += 1 print("File " + folder_name + "_merged_data.csv is ready") ```
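The month lookup above formats the parsed date back into a string and regexes the month number out of it; the same value is available directly from the parsed `datetime` object. A standalone sketch of that step (standard library only, the year/week values are made up for illustration):

```
import calendar
import datetime

year, week = "2020", "15"  # normally extracted from the collection name

# Monday of the given week, using the same "%Y-W%W-%w" convention as above
monday = datetime.datetime.strptime(year + "-W" + week + "-1", "%Y-W%W-%w")

month = calendar.month_abbr[monday.month]  # no regex round-trip needed
year_week = year + "-week" + week
week_order = year + week

print(monday.date(), month, year_week, week_order)
# 2020-04-13 Apr 2020-week15 202015
```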
``` # Import list import numpy as np import pandas as pd import matplotlib.pyplot as plt import os import time import joblib as jb os.chdir('..') import AstroPack as AP os.chdir('./final_models') from matplotlib import rc rc('text', usetex=True) from sklearn.model_selection import train_test_split from sklearn.feature_selection import RFE from sklearn.metrics import (mean_absolute_error, median_absolute_error, r2_score, max_error, mean_squared_error,explained_variance_score) from sklearn.preprocessing import StandardScaler from sklearn.ensemble import RandomForestRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.pipeline import Pipeline %matplotlib inline ``` # Getting the data ## Hyperparameter Tuning ``` # Get the hyperparameter tuning results os.chdir('../hyperparameter_tuning/teff') teff_models = pd.read_csv('rf_teff_tuning.csv') os.chdir('../logg') logg_models = pd.read_csv('rf_logg_tuning.csv') os.chdir('../feh') feh_models = pd.read_csv('rf_FeH_tuning.csv') os.chdir('../../final_models') ``` ## Stars data (SPLUS + WISE + GAIA + LAMOST) ``` # Create a list with all the columns that will be used column_list = ['ID', 'teff', 'teff_err', 'logg', 'logg_err', 'feh', 'feh_err'] + AP.Filters['JPLUS'] + AP.Filters['WISE'] + AP.Filters['GAIA'] # Import the full dataframe with stars that have both SPLUS, WISE, GAIA and LAMOST data os.chdir('../data') stars_raw = pd.read_csv('STEPPs Input Data (SPLUS) - Corrected.csv', usecols=column_list) os.chdir('../final_models') # The DataFrame assembler from AstroPack takes a 'TILE_ID' and a 'NUMBER' for each star (inherited from the J-PLUS tables), # and since the S-PLUS stars don't have those, we create dummy ones stars_raw['TILE_ID'] = np.arange(len(stars_raw)) stars_raw['NUMBER'] = 1 # Drop any row with missing values from the dataframe stars_raw = stars_raw.dropna() # Filter the stars according to their parameter errors stars_raw = stars_raw[stars_raw['teff_err'] <= 300] stars_raw = stars_raw[stars_raw['logg_err'] <= 0.4] stars_raw = stars_raw[stars_raw['feh_err'] <= 0.4] # Convert it into a dataframe with magnitudes and colors, indexed by the TILE ID and NUMBER of the star stars_raw, stars = AP.AssembleWorkingDF(stars_raw, addWISE=True, addGALEX=False, addGAIA=True, Colors=True, Combinations=False) stellar_parameters = stars_raw[['teff', 'logg', 'feh']] ``` # Teff predictor ### Model Ranking We first check the results from the hyperpameter optimization ``` # Print the final ranking of models teff_models[['n_features', 'max_features', 'n_trees', 'min_samples_leaf', 'R2', 'StdR2']].sort_values(by = 'R2', ascending = False).head(5) ``` ### Model Training We then choose the best hyperparameter combination (n_features = 60, max_features = 0.25, n_trees = 100 and msl = 1) and train a model using that ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Get the effective temperatures y_train_teff = y_train['teff'] y_test_teff = y_test['teff'] # Initialize the optimized feature selector feature_selector = RFE(estimator=DecisionTreeRegressor(), n_features_to_select = 60, verbose = 0, step = 200) # Initialize the optimized random forest rf = RandomForestRegressor(n_estimators=100, max_features=0.25, min_samples_leaf = 1) # Create a pipeline with the feature selector and the random forest rf_teff_pipeline = Pipeline(steps = [('Feature Selector', feature_selector),('Model', rf)]) # Fit the pipeline to the training data 
rf_teff_pipeline = rf_teff_pipeline.fit(x_train, y_train_teff.values.reshape(len(y_train_teff))) # Save the pipeline to a file jb.dump(rf_teff_pipeline, open('rf_teff_estimator/pipeline.sav', 'wb'), compress = 9) ``` ### Model Testing Having trained the model, the next step is to test it ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Load the pipeline from its file rf_teff_pipeline = jb.load(open('rf_teff_estimator/pipeline.sav', 'rb')) # Predict the temperatures for the test sample teff_predictions_rf = rf_teff_pipeline.predict(x_test) # Calculate the error metrics and print them to the screen MAE = mean_absolute_error(y_test_teff, teff_predictions_rf) RMSE = np.sqrt(mean_squared_error(y_test_teff, teff_predictions_rf)) MaxE = max_error(y_test_teff, teff_predictions_rf) R2 = r2_score(y_test_teff, teff_predictions_rf) print('Mean Absolute Error: {:.3f}'.format(MAE)) print('Root Mean Squared Error: {:.3f}'.format(RMSE)) print('Max Error: {:.3f}'.format(MaxE)) print('R2 Score: {:.3f}'.format(R2)) # Plot the prediction and error graphs and save them teff_test_results = AP.plot_test_graphs(y_test_teff, teff_predictions_rf, r'$\mathbf{T_{eff}}$ (K)', parameter_range = [3500, 9000], error_range = [-750, 750], color = 'red') teff_test_results.savefig('rf_teff_estimator/test_results.jpg', dpi = 250) ``` # logg predictor ### Model Ranking We first check the results from the hyperparameter optimization ``` # Print the final ranking of models logg_models[['n_features', 'max_features', 'n_trees', 'min_samples_leaf', 'R2', 'StdR2']].sort_values(by = 'R2', ascending = False).head(5) ``` ### Model Training Here, we choose the best hyperparameter combination (n_features = 45, max_features = 0.5, n_trees = 100 and msl = 1) and train a model using that ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Get the surface gravities y_train_logg = y_train['logg'] y_test_logg = y_test['logg'] # Initialize the optimized feature selector feature_selector = RFE(estimator=DecisionTreeRegressor(), n_features_to_select = 45, verbose = 0, step = 200) # Initialize the optimized random forest rf = RandomForestRegressor(n_estimators=100, max_features=0.5, min_samples_leaf = 1) # Create a pipeline with the feature selector and the random forest rf_logg_pipeline = Pipeline(steps = [('Feature Selector', feature_selector),('Model', rf)]) # Fit the pipeline to the training data rf_logg_pipeline = rf_logg_pipeline.fit(x_train, y_train_logg.values.reshape(len(y_train_logg))) # Save the pipeline to a file jb.dump(rf_logg_pipeline, open('rf_logg_estimator/pipeline.sav', 'wb'), compress = 9) ``` ### Model Testing Having trained the model, the next step is to test it ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Load the pipeline from its file rf_logg_pipeline = jb.load(open('rf_logg_estimator/pipeline.sav', 'rb')) # Predict the gravities for the test sample logg_predictions = rf_logg_pipeline.predict(x_test) # Calculate the error metrics and print them to the screen MAE = mean_absolute_error(y_test_logg, logg_predictions) RMSE = np.sqrt(mean_squared_error(y_test_logg, logg_predictions)) MaxE = max_error(y_test_logg, logg_predictions) R2 = r2_score(y_test_logg, 
logg_predictions) print('Mean Absolute Error: {:.3f}'.format(MAE)) print('Root Mean Squared Error: {:.3f}'.format(RMSE)) print('Max Error: {:.3f}'.format(MaxE)) print('R2 Score: {:.3f}'.format(R2)) # Plot the prediction and error graphs and save them logg_test_results = AP.plot_test_graphs(y_test_logg, logg_predictions, r'$\mathbf{logg}$', parameter_range = [0.25, 5.0], error_range = [-1.5, 1.5], color = 'blue') logg_test_results.savefig('rf_logg_estimator/test_results.jpg', dpi = 250) ``` # FeH predictor ### Model Ranking We first check the results from the hyperpameter optimization ``` feh_models[['n_features', 'max_features', 'n_trees', 'min_samples_leaf', 'R2', 'StdR2']].sort_values(by = 'R2', ascending = False).head(5) ``` ### Model Training We then choose the best hyperparameter combination (n_features = 60, max_features = 0.25 and n_trees = 100) and train a model using that ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Get the metalicities y_train_feh = y_train['feh'] y_test_feh = y_test['feh'] # Initialize the optimized feature selector feature_selector = RFE(estimator=DecisionTreeRegressor(), n_features_to_select = 45, verbose = 0, step = 200) # Initialize the optimized random forest rf = RandomForestRegressor(n_estimators=100, max_features=0.25) # Create a pipeline with the feature selector and the random forest rf_feh_pipeline = Pipeline(steps = [('Feature Selector', feature_selector),('Model', rf)]) # Fit the pipeline to the training data rf_feh_pipeline = rf_feh_pipeline.fit(x_train, y_train_feh.values.reshape(len(y_train_feh))) # Save the pipeline to a file jb.dump(rf_feh_pipeline, open('rf_feh_estimator/pipeline.sav', 'wb'), compress = 9) ``` ### Model Testing Having trained the model, the next step is to test it ``` # Split the full sample into training and test samples x_train, x_test, y_train, y_test = train_test_split(stars, stellar_parameters, test_size=0.25, random_state=42) # Load the pipeline from its file rf_feh_pipeline = jb.load(open('rf_feh_estimator/pipeline.sav', 'rb')) # Predict the metalicities for the test sample feh_predictions_rf = rf_feh_pipeline.predict(x_test) # Calculate the error metrics and print them to the screen MAE = mean_absolute_error(y_test_feh, feh_predictions_rf) RMSE = np.sqrt(mean_squared_error(y_test_feh, feh_predictions_rf)) MaxE = max_error(y_test_feh, feh_predictions_rf) R2 = r2_score(y_test_feh, feh_predictions_rf) print('Mean Absolute Error: {:.3f}'.format(MAE)) print('Root Mean Squared Error: {:.3f}'.format(RMSE)) print('Max Error: {:.3f}'.format(MaxE)) print('R2 Score: {:.3f}'.format(R2)) # Plot the prediction and error graphs and save them feh_test_results = AP.plot_test_graphs(y_test_feh, feh_predictions_rf, r'$\mathbf{[Fe/H]}$', parameter_range = [-2.5, 0.75], error_range = [-1.0, 1.0], color = 'green') feh_test_results.savefig('rf_feh_estimator/test_results.jpg', dpi = 250) ```
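Once the three pipelines are saved, they can be reloaded together to estimate all three parameters for new photometry. A minimal sketch, assuming `new_stars` is a dataframe with the same color/magnitude columns produced by `AP.AssembleWorkingDF` for the training sample; the helper name `estimate_parameters` is introduced here and the file paths are the ones used above:

```
import joblib as jb
import pandas as pd

# Reload the three pipelines saved earlier in this notebook
rf_teff = jb.load(open('rf_teff_estimator/pipeline.sav', 'rb'))
rf_logg = jb.load(open('rf_logg_estimator/pipeline.sav', 'rb'))
rf_feh = jb.load(open('rf_feh_estimator/pipeline.sav', 'rb'))

def estimate_parameters(new_stars):
    """Predict Teff, logg and [Fe/H] for a dataframe of colors/magnitudes
    built the same way as the training sample."""
    return pd.DataFrame({
        'teff_pred': rf_teff.predict(new_stars),
        'logg_pred': rf_logg.predict(new_stars),
        'feh_pred': rf_feh.predict(new_stars),
    }, index=new_stars.index)

# Example: re-run the predictions on the held-out test sample from above
# print(estimate_parameters(x_test).head())
```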
# Lesson 1

## 00:00:00 - Intro

## 00:01:41 - Setting up development environment

* Crestle gives you a Jupyter notebook for 3 cents an hour.
* Paperspace is another option.
* All course data is in the Fast.ai repo under `fastai` > `courses` > `ml1`.

## 00:05:14 - Recommendations for watching video

* Watch, then follow along with the video later (probably more useful to in-person students).

## 00:06:15 - Course approach

* Top-down approach: lots of practical work upfront, then theory later.
* Course is a summary of 25 years of Jeremy's research - not a summary of other people's research.
* Chance to practise technical writing by authoring blog posts on stuff you learn.

## 00:08:08 - Importing libraries in Jupyter notebook

* Autoreload commands let you edit source code and have it immediately available in Jupyter.

```
%load_ext autoreload
%autoreload 2
%matplotlib inline

from math import sqrt
from pathlib import Path

from fastai.imports import *
from fastai.structured import *

import pandas as pd
from pandas.api.types import is_string_dtype
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display

from sklearn import metrics
```

## 00:08:42 - Why not follow Python code standards?

* Doesn't follow PEP8.
* Basic idea: data science is not software engineering, even if data science projects eventually turn into software engineering.
* Prototyping models requires thinking about some new paradigms.
* Can figure out where a function is from by putting its name into Jupyter:

```
display
```

* One question mark shows the docs: `?display`; two show the source: `??display`.

## 00:12:08 - Kaggle competition: Blue Book for Bulldozers

* Kaggle competitions allow you to download a real-world dataset.
* Can submit to the leaderboard of old competitions.
* No other way to know if you're competent at solving that type of problem.
* Machine learning can help us understand a dataset: not just make predictions from it.
* Downloading data:

```
PATH = Path('./data/bluebook')
PATH.mkdir(parents=True, exist_ok=True)

!kaggle competitions download -c bluebook-for-bulldozers --path={PATH}
```

* Can also download using the [copy as Curl](https://daniel.haxx.se/blog/2015/11/23/copy-as-curl/) command in Firefox.
* Ensure you use the `-o` flag in Curl to specify the output location.

```
!ls {PATH}
```

### 00:24:32 - Audience questions

* Q1: What are the curly brackets?
* A1: Expand Python variables before passing to the shell.

## 00:25:14 - Exploring the dataset

* It's in CSV format. Can use head to look at the first few lines:

```
!unzip {PATH}/Train.zip -d {PATH}
!head {PATH}/Train.csv
```

* Jeremy considers this data structured data (vs unstructured: images, audio).
* NLP people refer to structured data as something else.
* Pandas is the most commonly used tool for dealing with structured data.
* Everyone uses the same abbreviation: `pd`.
* Can read a csv file using the `read_csv` command.
* Args:
    * `parse_dates` picks which columns are dates.
    * `low_memory` - read more of the file to decide what the types are.

```
df_raw = pd.read_csv(f'{PATH}/Train.csv', low_memory=False, parse_dates=['saledate'])
df_raw.head()
```

* Can see the last rows using the `tail()` method.
* If you have a lot of rows, it can be worth transposing them with `transpose()`:

```
df_raw.tail().transpose()
```

* The value you want to predict (`SalePrice` in this example) is called the "dependent variable".

### 00:33:08 - Audience questions

* Q1: Aren't you at risk of overfitting if you spend too much time looking at the data?
* A1: Prefer "machine learning driven" exploratory data analysis.

## 00:34:06 - Evaluation metrics

* For Kaggle projects, there's an evaluation section that describes how the project is evaluated.
* Bluebook example: root mean squared log error.
* Can replace the column with the log of its value, as follows:

```
sale_price = np.log(df_raw.SalePrice)
```

## 00:36:31 - Intro to Random Forests

* Brief: universal machine learning technique.
    * Can be used for categorical or continuous variables.
    * Can be used with columns of any kind.
    * Doesn't overfit in general: very easy to stop it if it does.
    * Don't need a separate validation set in general.
    * Few statistical assumptions.
    * Great place to start.

#### 00:38:13 - Curse of dimensionality (audience question)

* Q1: What about the curse of dimensionality?
* A1: Idea: the more columns you have, the more empty space you have. Higher dimensions tend to have lots of points on the edges.
    * Doesn't tend to be a problem in practice.
    * Even k-nearest neighbours works well in high dimensions.
    * "Theory took over machine learning in the 90s" - today's ML is more empirical.
* Related: no free lunch theorem.
    * Claim: no type of model works well for every kind of dataset.
    * It's true in the sense that a dataset could be random, so obviously there won't be a good model for it.
    * In the real world: we aren't using random datasets, so there are techniques that work for almost all kinds of real datasets.
    * Ensembles of decision trees are one example (which a Random Forest is).

#### 00:42:53 - Sklearn, Regressors vs Classifiers

* Sklearn: by far the most important package for ML in Python.
    * Does almost everything, though not the best at everything.
* Two types of random forests in sklearn: regressor and classifier:

```
print(RandomForestRegressor)
print(RandomForestClassifier)
```

* A lot of people think "regressor" means linear regression, which is not accurate.
* Regressor = something which predicts a continuous output.
* Putting the cursor over a method/function and pressing Shift-Tab in Jupyter will return the docs.
* First attempt at running the regressor:

```
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw, sale_price)
```

* The error tells you that you need to convert strings into numbers: that's what an ML model expects.
* First issue: `saledate` is a date. Need to convert it to ints.
    * Can be converted with Fast.ai's `add_datepart` (a simplified sketch of the idea appears at the end of these notes).
    * Adds columns like: day of month, day of year, is it a public holiday and so on.
    * Any important stuff you can tell the model about the date? Special events etc.

```
df_raw.head()
add_datepart(df_raw, 'saledate')
df_raw.head()
```

* Can access datetime-related methods on datetime columns using `df.dt.<some_method>`.
* No harm in adding more columns; might as well use all datetime attributes.
* Also need to convert strings:
    * Can use Pandas `category` type to convert strings to categorical.

```
df_raw.UsageBand.head()

for col_name, col in df_raw.items():
    if is_string_dtype(col):
        df_raw[col_name] = col.astype('category').cat.as_ordered()

df_raw.UsageBand.cat.categories
```

* May want to order certain categories, where it makes sense (like above).
* Can use `set_categories` to do that:

```
df_raw.UsageBand.cat.set_categories(['High', 'Medium', 'Low'], ordered=True, inplace=True)
df_raw.UsageBand.cat.codes.head()
```

#### 01:00:56 - Audience question

* Q1: Can you explain the column ordering?
* A1: (re-explains the ordering)

#### 01:04:17 - Find missing values

* Can use `isnull` with `sum` to find all columns with missing values:

```
df_raw.isnull().sum().sort_index() / len(df_raw)
```

### 01:05:23 - Saving state of dataframe to "feather"

* Can use `to_feather` to save data to disk in the same way as it's stored in RAM.
* By far the fastest way to read a DataFrame.
* Becoming the standard even in Spark and Java.

```
(PATH / 'tmp').mkdir(exist_ok=True)
df_raw.to_feather(PATH / 'tmp' / 'raw')
```

* Can be read with `pd.read_feather`.

```
df_raw = pd.read_feather(PATH / 'tmp' / 'raw')
```

### 01:07:37 - Final preprocessing

* Want to replace strings with numeric codes, handle missing continuous values and split the dependent variable out.
* Can do it all with Fast.ai's `proc_df` method.

```
proc_df
df, y, nas = proc_df(df_raw, 'SalePrice')
```

### 01:08:01 - `proc_df` internals

* What `proc_df` does:
    1. Takes a DataFrame and the output field name as input.
    2. Makes a copy of the DataFrame.
    3. Extracts the dependent variable.
    4. Prepares continuous columns by fixing missing values: sets them to the column median and adds a column that defines whether the value was null or not.
    5. Prepares categorical columns by replacing the values with their numeric codes (+1 to convert -1 into 0 -- not sure why).

```
df_copy = df_raw.copy()
sale_price = np.log(df_copy.pop('SalePrice'))

for col_name, col in df_copy.items():
    if is_numeric_dtype(col):
        # Add a column that defines whether the value is NA or not.
        if pd.isna(col).sum():
            df_copy[f'{col_name}_is_na'] = pd.isna(col)
        # Set the value to the median of the dataset.
        df_copy[col_name] = col.fillna(col.median())
        continue

    # Assume categorical
    # Add 1 to move -1 to 0.
    df_copy[col_name] = df_copy[col_name].cat.codes + 1

df_copy.head()
sale_price.head()
df_copy.columns
```

* Notice that we leave in `ModelID` and `MachineID`, which shouldn't make much sense for a model to learn, but that doesn't tend to cause problems with random forests.
* Random forests are "trivially parallelisable".
    * Pass `n_jobs=-1` to create a separate job for each CPU.

```
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_copy, sale_price)
m.score(df_copy, sale_price)
```

* The score is measured using $r^2$, where 1 is best and lower is worse.

## 01:13:24 - Measuring overfitting

* Want to separate the data into training and validation sets to measure how well your training is actually doing.

```
def split_vals(a, n):
    return a[:n].copy(), a[n:].copy()

num_valid = 12000
num_train = len(df_copy) - num_valid

X_train, X_valid = split_vals(df_copy, num_train)
y_train, y_valid = split_vals(sale_price, num_train)

X_train.shape, y_train.shape, X_valid.shape, y_valid.shape

m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)

predictions = m.predict(X_valid)
errors_squared = (predictions - y_valid) ** 2
mean_error = errors_squared.mean()
print('Root mean squared error (validation):', sqrt(mean_error))
```

* Would get us to about 28th on the private leaderboard and 136th on the public.

## 01:16:09 - Assignment

* Try these steps on as many Kaggle competitions as you can.
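The notes above only describe `add_datepart` at a high level. As a rough, hedged sketch of the idea (not Fast.ai's actual implementation; the helper name, prefix logic, and attribute list here are simplified assumptions), a pandas-only version could look like this:

```
import pandas as pd

def add_datepart_sketch(df: pd.DataFrame, field_name: str) -> None:
    """Expand a datetime column into several numeric feature columns (simplified sketch)."""
    field = pd.to_datetime(df[field_name])
    prefix = field_name.replace('date', '')
    # A small subset of the attributes the real helper derives.
    for attr in ['year', 'month', 'day', 'dayofweek', 'dayofyear']:
        df[prefix + attr.capitalize()] = getattr(field.dt, attr)
    # Store the raw timestamp as an integer and drop the original column.
    df[prefix + 'Elapsed'] = field.astype('int64') // 10**9
    df.drop(columns=[field_name], inplace=True)
```

Calling `add_datepart_sketch(df_raw, 'saledate')` would add `saleYear`, `saleMonth`, and so on, mirroring the behaviour described in the notes.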
## Bayesian updates using the table method

This notebook demonstrates a way of doing simple Bayesian updates using the table method, with a Pandas DataFrame as the table.

Copyright 2018 Allen Downey

MIT License: https://opensource.org/licenses/MIT

```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

import numpy as np
import pandas as pd
```

As an example, I'll use the "cookie problem", which is a version of a classic probability "urn problem".

Suppose there are two bowls of cookies.

* Bowl #1 has 10 chocolate and 30 vanilla.
* Bowl #2 has 20 of each.

You choose a bowl at random, and then pick a cookie at random. The cookie turns out to be vanilla. What is the probability that the bowl you picked from is Bowl #1?

### The BayesTable class

Here's the class that represents a Bayesian table.

```
class BayesTable(pd.DataFrame):
    def __init__(self, hypo, prior=1):
        columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
        super().__init__(columns=columns)
        self.hypo = hypo
        self.prior = prior

    def mult(self):
        self.unnorm = self.prior * self.likelihood

    def norm(self):
        nc = np.sum(self.unnorm)
        self.posterior = self.unnorm / nc
        return nc

    def update(self):
        self.mult()
        return self.norm()

    def reset(self):
        return BayesTable(self.hypo, self.posterior)
```

Here's an instance that represents the two hypotheses: you either chose from Bowl 1 or Bowl 2:

```
table = BayesTable(['Bowl 1', 'Bowl 2'])
```

Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses. Now we can specify the likelihoods:

* The likelihood of getting a vanilla cookie from Bowl 1 is 3/4.
* The likelihood of getting a vanilla cookie from Bowl 2 is 1/2.

Here's how we plug the likelihoods in:

```
table.likelihood = [3/4, 1/2]
table
```

The next step is to multiply the priors by the likelihoods, which yields the unnormalized posteriors.

```
table.mult()
table
```

Now we can compute the normalized posteriors; `norm` returns the normalization constant.

```
table.norm()
table
```

We can read the posterior probabilities from the last column: the probability that you chose from Bowl 1 is 60%.

### Resetting

Suppose you put the first cookie back, stir the bowl, and select another cookie from the same bowl. If this second cookie is chocolate, what is the probability, now, that you are drawing from Bowl 1?

To solve this problem, we want a new table where the priors in the new table are the posteriors from the old table. That's what the `reset` method computes:

```
table2 = table.reset()
```

Here are the likelihoods for the second update.

```
table2.likelihood = [1/4, 1/2]
```

We could run `mult` and `norm` again, or run `update`, which does both steps.

```
table2.update()
```

Here are the results.

```
table2
```

Either way, the result is the same as running `mult` and `norm` separately.
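As a sanity check (this arithmetic is an illustration added here, not part of the original notebook), the two table updates above can be reproduced with plain Python:

```
# Start with equal priors for the two bowls.
priors = [0.5, 0.5]

# Update 1: vanilla cookie. Likelihoods are 30/40 and 20/40.
unnorm = [p * lk for p, lk in zip(priors, [3/4, 1/2])]
posteriors = [u / sum(unnorm) for u in unnorm]
print(posteriors)  # approximately [0.6, 0.4], matching the first table

# Update 2: chocolate cookie from the same bowl. Likelihoods are 10/40 and 20/40.
unnorm = [p * lk for p, lk in zip(posteriors, [1/4, 1/2])]
posteriors = [u / sum(unnorm) for u in unnorm]
print(posteriors)  # approximately [0.43, 0.57], so Bowl 1 is now less likely
```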
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

# Automated Machine Learning
_**Exploring Previous Runs**_

## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Explore](#Explore)
1. [Download](#Download)
1. [Register](#Register)

## Introduction

In this example we present some examples of navigating previously executed runs. We also show how you can download a fitted model for any previous run.

Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.

In this notebook you will learn how to:
1. List all experiments in a workspace.
2. List all AutoML runs in an experiment.
3. Get details for an AutoML run, including settings, run widget, and all metrics.
4. Download a fitted pipeline for any iteration.

## Setup

```
import pandas as pd
import json

from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl.run import AutoMLRun

ws = Workspace.from_config()
```

## Explore

### List Experiments

```
experiment_list = Experiment.list(workspace=ws)

summary_df = pd.DataFrame(index = ['No of Runs'])
for experiment in experiment_list:
    automl_runs = list(experiment.get_runs(type='automl'))
    summary_df[experiment.name] = [len(automl_runs)]

pd.set_option('display.max_colwidth', -1)
summary_df.T
```

### List runs for an experiment

Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs.

```
experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.

proj = ws.experiments[experiment_name]
summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])
automl_runs = list(proj.get_runs(type='automl'))
automl_runs_project = []
for run in automl_runs:
    properties = run.get_properties()
    tags = run.get_tags()
    amlsettings = json.loads(properties['AMLSettingsJsonString'])
    if 'iterations' in tags:
        iterations = tags['iterations']
    else:
        iterations = properties['num_iterations']
    summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]
    if run.get_details()['status'] == 'Completed':
        automl_runs_project.append(run.id)

from IPython.display import HTML
projname_html = HTML("<h3>{}</h3>".format(proj.name))

from IPython.display import display
display(projname_html)
display(summary_df.T)
```

### Get details for a run

Copy the project name and run id from the previous cell output to find more details on a particular run.

```
run_id = automl_runs_project[0] # Replace with your own run_id from above run ids

assert (run_id in summary_df.keys()), "Run id not found! Please set run id to a value from above run ids"

from azureml.widgets import RunDetails

experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = run_id)

summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name', 'Start Time', 'End Time'])
properties = ml_run.get_properties()
tags = ml_run.get_tags()
status = ml_run.get_details()
amlsettings = json.loads(properties['AMLSettingsJsonString'])
if 'iterations' in tags:
    iterations = tags['iterations']
else:
    iterations = properties['num_iterations']
start_time = None
if 'startTimeUtc' in status:
    start_time = status['startTimeUtc']
end_time = None
if 'endTimeUtc' in status:
    end_time = status['endTimeUtc']
summary_df[ml_run.id] = [amlsettings['task_type'], status['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name'], start_time, end_time]
display(HTML('<h3>Runtime Details</h3>'))
display(summary_df)

#settings_df = pd.DataFrame(data = amlsettings, index = [''])
display(HTML('<h3>AutoML Settings</h3>'))
display(amlsettings)

display(HTML('<h3>Iterations</h3>'))
RunDetails(ml_run).show()

children = list(ml_run.get_children())
metricslist = {}
for run in children:
    properties = run.get_properties()
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics

rundata = pd.DataFrame(metricslist).sort_index(1)
display(HTML('<h3>Metrics</h3>'))
display(rundata)
```

## Download

### Download the Best Model for Any Given Metric

```
metric = 'AUC_weighted' # Replace with a metric name.
best_run, fitted_model = ml_run.get_output(metric = metric)
fitted_model
```

### Download the Model for Any Given Iteration

```
iteration = 1 # Replace with an iteration number.
best_run, fitted_model = ml_run.get_output(iteration = iteration)
fitted_model
```

## Register

### Register fitted model for deployment

If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered.

```
description = 'AutoML Model'
tags = None
ml_run.register_model(description = description, tags = tags)
print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure.
```

### Register the Best Model for Any Given Metric

```
metric = 'AUC_weighted' # Replace with a metric name.
description = 'AutoML Model'
tags = None
ml_run.register_model(description = description, tags = tags, metric = metric)
print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure.
```

### Register the Model for Any Given Iteration

```
iteration = 1 # Replace with an iteration number.
description = 'AutoML Model'
tags = None
ml_run.register_model(description = description, tags = tags, iteration = iteration)
print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure.
```
# Python functions practice quiz

This quiz tests your basic knowledge of Python functions.

(The solutions do not appear if you view this file from github, so download the notebook to your laptop and view it in Jupyter. Clicking on the ▶`Solution` button opens up the solution for each question.)

### Question

Write a function called `dub(x)` that returns twice the numeric argument.

<details>
<summary>Solution</summary>
<pre>
def dub(x):
    return 2*x
</pre>
</details>

### Question

Write a function called `max(x,y)` that returns the larger of `x` and `y`. If the two values are equal, return either value.

<details>
<summary>Solution</summary>
<pre>
def max(x,y):
    if x >= y:
        return x
    return y  # or "else: return y"
</pre>
</details>

### Question

What does this function return?

```python
def f():
    print(3.14)
```

<details>
<summary>Solution</summary>
The return value is None. Though the function does print 3.14, that has nothing to do with the return value. In the absence of a return statement, None is returned.
</details>

### Question

What does this function print?

```python
def f():
    return 3.14
```

<details>
<summary>Solution</summary>
This function prints nothing. It does, however, return a value of 3.14, but that has nothing to do with printing. In Jupyter Lab, though, you would see the output 3.14 if you called `f()` on a line by itself because it is operating in interactive mode. Still, the function does not print anything itself; it would be Jupyter doing the printing.
</details>

### Question

Using the `max()` function you just wrote, find the maximum of the following three numbers: 5, -3, and 10.

<details>
<summary>Solution</summary>
<pre>
max(5, max(-3, 10))
</pre>
</details>

### Question

Does this function return -1, `[3, 5, 7]`, or just 3?

```python
def f():
    values = [1, 3, 5, 7]
    for v in values:
        if v > 2:
            return v
    return -1
```

<details>
<summary>Solution</summary>
The return value is 3. Remember that the return statement acts like a jump out of the function. A return statement in a loop does not collect a bunch of values and then return those. Because we jump out of the function from within the loop, we never get to the return -1 statement, so -1 can't be the answer.
</details>

### Question

Write a Python function called `mult(data)` that returns the product of all of the elements in the `data` list argument.

<details>
<summary>Solution</summary>
<pre>
def mult(data):
    v = 1
    for x in data:
        v *= x
    return v
</pre>
</details>

### Question

Write a function called `decimate(data)` that alters the list of numbers coming in by dividing each by 10. There is no return value; it modifies the incoming list.

```python
values = [1, 3, 5, 7]
decimate(values)
print(values)
```

<details>
<summary>Solution</summary>
Because we need to alter the elements of the incoming list, we can't use a for-each loop; we have to use an indexed loop. (An alternative using `enumerate` is sketched at the end of this quiz.)
<pre>
def decimate(data):
    for i in range(len(data)):
        data[i] /= 10
</pre>
</details>

### Question

What is the output of the following program?

```python
x = 10
def f():
    print(x)
```

<details>
<summary>Solution</summary>
This is a trick question. We didn't call the function, so there is no output from this program.
</details>

### Question

What is the output of the following program?

```python
x = [5,6]
def f():
    print(sum(x))

x = [1,2,3]
f()
```

<details>
<summary>Solution</summary>
The output is 6 because the function executes after we have reassigned a new list to x.
</details>

### Question

What is the output of the following program?

```python
x = 10
def f():
    x = 5

f()
print(x)
```

<details>
<summary>Solution</summary>
The output is 10. The assignment to x within the function definition does not affect the global variable. Assignments within functions generally create local variables instead of altering any global variables.
</details>

### Question

What is the output of the following program?

```python
def z():
    q()
    print('z')

def q():
    print('q')

def m():
    print('m')
    z()

m()
```

<details>
<summary>Solution</summary>
The output is:
<pre>
m
q
z
</pre>
because the main program calls m(), which prints "m" and calls z(), which calls q() THEN prints "z". Function q() just prints "q" and returns.
</details>
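As a brief aside (not part of the original quiz), the indexed-loop requirement in the `decimate` solution can also be satisfied with `enumerate`, which yields the index and the value at the same time; a small sketch:

```python
def decimate(data):
    # enumerate yields (index, value) pairs, so we can still write back into the list.
    for i, value in enumerate(data):
        data[i] = value / 10

values = [1, 3, 5, 7]
decimate(values)
print(values)  # [0.1, 0.3, 0.5, 0.7]
```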
```
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

# Assigning the dataframe url to a variable.
url_df = "https://github.com/Jefexon/Alura-Data-Immersion-3/blob/main/Data/data_experiments.zip?raw=true"
url_df_results = "https://github.com/Jefexon/Alura-Data-Immersion-3/blob/main/Data/data_results.csv?raw=true"

# Assigning the uncompressed csv dataframe to a variable.
df_results = pd.read_csv(url_df_results)
df = pd.read_csv(url_df, compression = 'zip')

df.head()

df_results.head()
```

---

#### Checking if there are only zeros

```
df_results['acat_inhibitor'].unique()
```

**1** is found when a mechanism of action (the names of the columns to the right of **id**, e.g. *11-beta-hsd1-inhibitor*) is activated; otherwise the value is **0**.

----------------------

### Finding the most used mechanism of action.

#### Selecting desired columns using *.select_dtypes('int64')*.

```
counting_moa = df_results.select_dtypes('int64').sum().sort_values(ascending=False)
counting_moa
```

#### Selecting desired columns using *.drop('id', axis=1)*.

The first parameter in *.drop()* in this case is the name of the column we want to exclude, and the second parameter (*axis=1*) specifies that it is a column (use *axis=0* for a row).

```
counting_moa = df_results.drop('id', axis=1).sum().sort_values(ascending=False)
counting_moa
```

---

### Finding how many times each *id* was 'activated'.

```
df_results.drop('id', axis=1).sum(axis=1)
```

#### Adding column *n_moa*, which is the sum of the mechanisms of action activated, calculated previously

```
df_results['n_moa'] = df_results.drop('id', axis=1).sum(axis=1)
df_results.head()
```

#### Adding column *activated_moa*, which tells me if there has been at least 1 activation.

```
df_results['activated_moa'] = df_results['n_moa'] != 0
df_results.head()
```

----------------------

### Merging *df* with *df_results*

```
df_combined = pd.merge(df, df_results[['id', 'n_moa', 'activated_moa']], on='id')
df_combined.head()

df_combined.query('treatment == "with_control"')['activated_moa'].value_counts()
```

----------------------

```
main_composite = df_combined['composite'].value_counts().index[:6]

plt.figure(figsize=(12, 8))
sns.boxplot(data = df_combined.query('composite in @main_composite'), y= 'g0', x='composite', hue='activated_moa')
```

### Challenge 1: Find the top 10 actions (e.g. inhibitor, antagonist...).

### Challenge 2: Create column *is_control* for when treatment == with_control (a sketch follows after the challenge list).

### Challenge 3: Create 3 columns to indicate if duration is 24, 48, 72. One column for each duration; use 0s and 1s or True and False.

### Challenge 4: Study merging dataframes: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html

### Challenge 5: Analyse the boxplot in more depth, considering duration and dose (choose one drug and compare it with the control).

### Challenge 6: Discover if we have any composite that, depending on the configuration of the experiment, activates or does not activate any MOA.

### Challenge 7: Discover if we have any composite that, depending on the configuration of the experiment, activates different MOAs.

### Challenge 8: Summary
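As a hedged starting point for Challenges 2 and 3 (this is not part of the original notebook, and the duration column name `duration` is an assumption; adjust it to whatever the dataset actually uses):

```
# Challenge 2: boolean flag for control rows (the 'treatment' column is used above in a query).
df_combined['is_control'] = df_combined['treatment'] == 'with_control'

# Challenge 3: one indicator column per duration value (column name 'duration' is assumed).
for hours in (24, 48, 72):
    df_combined[f'duration_{hours}'] = (df_combined['duration'] == hours).astype(int)

df_combined[['is_control', 'duration_24', 'duration_48', 'duration_72']].head()
```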
# Week 0: Introduction to Deep Learning Frameworks

## Notebook 1: MNIST Classification with a Dense Neural Network on TensorFlow 2

Welcome to the Applied AI study group! As a starter pack to the study group, you have three notebooks to get acquainted with the commonly used deep learning frameworks. We will use Python together with Jupyter to go through all our notebooks. You can use the Python 3 installed and available on your system, or you can go to the [python website](https://www.python.org/downloads/) to install Python 3 on your system. Another alternative is to use [miniconda](https://docs.conda.io/en/latest/miniconda.html) to install Python from scratch along with some useful packages.

## 0. Problem Definition

In all of the notebooks of this preparatory week, the problem we are trying to solve is **classification** using machine learning. More specifically, we have images and different categories. We are going to build models that will predict the category of a given image.

The dataset we are using in this notebook is [MNIST](http://yann.lecun.com/exdb/mnist/). This is a widely used classification dataset in the computer vision and machine learning fields, consisting of handwritten digits from zero to nine. We will try to train a model that predicts the digit given an image.

## 1. Installation

To install Jupyter notebook on your system, you can run the following command to install with pip:

    pip install notebook

Or if you are using conda, you can run:

    conda install -c conda-forge notebook

Go to the directory where these notebooks are contained and run:

    jupyter notebook

to open up your notebooks and begin your adventure!

## 2. Imports and Checks

In this notebook, we are starting our journey of deep learning frameworks with [TensorFlow 2](https://www.tensorflow.org). We first install TensorFlow 2 using the official guidelines found [here](https://www.tensorflow.org/install). The whole process usually boils down to running:

```
!pip install tensorflow
```

After the installation is done, let's import TensorFlow first:

```
import tensorflow as tf
```

Let's print the version and make sure that we are using the right version:

```
print(tf.__version__)
```

If you see any version >= 2.0.0, then you are good to go.

Below we install and import a high-level wrapper around `tf.data` named `tensorflow_datasets` to directly load datasets that are ready to be trained on! We will only use this package to show the list of datasets available within `tf.data`:

```
!pip install tensorflow_datasets

import tensorflow_datasets as tfds
tfds.list_builders()
```

Below we import the necessary libraries for data exploration and some further data operations. If any of these packages are not installed on your system, please install them via `pip` or `conda`:

```
import numpy as np
import matplotlib.pyplot as plt
import math
```

## 3. Data Preparation

We will use `tensorflow_datasets` to load the MNIST dataset. MNIST may be the most commonly used dataset in computer vision because of its simplicity. We split the data into *train* and *test* sets, and we are not using batching yet, so the `batch_size` parameter is -1.

```
mnist_training, mnist_test = tfds.load('mnist', split=['train', 'test'], batch_size=-1, as_supervised=True)
```

Below we see a summary of the pixel values of the MNIST data:

```
print(mnist_training)
print(mnist_test)
```

We get the images and labels separately to prepare for training:

```
mnist_training_images, mnist_training_labels = mnist_training[0], mnist_training[1]
mnist_test_images, mnist_test_labels = mnist_test[0], mnist_test[1]
```

As the next step, we print the shapes. MNIST contains $28 \times 28$ grayscale images. In addition, we have 60,000 training images and 10,000 test images.

```
print(mnist_training_images.shape)
print(mnist_training_labels.shape)
print(mnist_test_images.shape)
print(mnist_test_labels.shape)
```

Let's visualize the first training image using `matplotlib`:

```
plt.imshow(mnist_training_images[0][:, :, 0] ,cmap = 'gray')
print(mnist_training_labels[0])
```

Let's also visualize the first test image:

```
plt.imshow(mnist_test_images[0][:, :, 0] ,cmap = 'gray')
print(mnist_test_labels[0])
```

Next, we begin data preprocessing. We will use `tf.reshape` to change the shapes of the images into trainable vectors of size 784 (28 x 28). First we get the shapes separately:

```
num_training_images = mnist_training_images.shape[0]
num_test_images = mnist_test_images.shape[0]

img_width, img_height = mnist_training_images.shape[1], mnist_training_images.shape[2]
```

Since we are using a dense network, we *flatten* the images into vectors of $784 \times 1$:

```
mnist_training_images = tf.reshape(mnist_training_images, shape=(num_training_images, img_width * img_height))
mnist_test_images = tf.reshape(mnist_test_images, shape=(num_test_images, img_width * img_height))
```

Let's see if we actually changed the shape of the data:

```
print(mnist_training_images.shape)
print(mnist_test_images.shape)
```

Another preprocessing step is to normalize the data. As you already know from studying deep learning, normalization is a key step in preparing the data. MNIST pixel values are normally between 0 and 255. We normalize the images by dividing each pixel by 255 to map the pixel values between 0 and 1.

Let's first look at the minimum and maximum values of the pixels and the labels. Please note that we do not have to normalize the labels. However, we need to create one-hot vectors from the label values. More on that in a short while:

```
print(np.amax(mnist_training_images[0]),np.amin(mnist_training_images[0]))
print(np.amax(mnist_test_images[0]),np.amin(mnist_test_images[0]))
print(np.amax(mnist_training_labels),np.amin(mnist_training_labels))
print(np.amax(mnist_test_labels),np.amin(mnist_test_labels))
```

We divide all the pixel values by 255.0 and cast them to type `tf.float32`. We also cast the label values into `tf.int64`.

```
def preprocess(x, y):
    x = tf.cast(x, tf.float32) / 255.0
    y = tf.cast(y, tf.int64)
    return x, y
```

We have to create one-hot vectors from the labels for the neural network to calculate the error. Below, we are creating the one-hot vectors and actually creating the dataset with batch size 128:

```
def create_dataset(xs, ys, n_classes=10):
    ys = tf.one_hot(ys, depth=n_classes)
    return tf.data.Dataset.from_tensor_slices((xs, ys)) \
        .map(preprocess) \
        .shuffle(len(ys)) \
        .batch(128)

train_dataset = create_dataset(mnist_training_images, mnist_training_labels)
test_dataset = create_dataset(mnist_test_images, mnist_test_labels)

print(train_dataset)

train_dataset.element_spec
```

Yay! We have our dataset now. Let's check the dataloader:

```
batch_images, batch_labels = next(iter(train_dataset))

print(batch_images.shape)
print(batch_labels.shape)

print(np.amax(batch_images[0]),np.amin(batch_images[0]))
```

Our data loader works like a charm. We have 128 vectors that are 784-dimensional as images, and 128 vectors that are 10-dimensional as labels. Our maximum pixel value is 1 and the minimum is 0. Data is ready!

Let's visualize the first image in our batch:

```
plt.imshow(tf.reshape(batch_images[0], shape=(img_width, img_height, 1))[:, :, 0] ,cmap = 'gray')
```

## 4. Model Creation

Let's define the hyperparameters of the model that we are going to use. We will create a three-layer neural network consisting of dense layers. The `layer_neurons` variable below defines the sizes of the network.

```
input_shape = 784
label_shape = 10

lr = 0.003

layer_neurons = [
    [input_shape, 200],
    [200, 80],
    [80, label_shape],
]

bias_shapes = [200, 80, label_shape]
initializer = tf.initializers.glorot_uniform()
```

Below we define a function that creates a dense layer in TF2. It simply multiplies the inputs by the weights, adds the biases, and passes the result through a sigmoid.

```
def dense_layer(inputs, weights, bias):
    return tf.nn.sigmoid(tf.matmul(inputs, weights) + bias)
```

Below we write functions to initialize the weights and biases:

```
def get_weight(shape, name):
    return tf.Variable(initializer(shape), name=name, trainable=True, dtype=tf.float32)

def get_bias(shape, name):
    return tf.Variable(initializer([shape]), name=name, trainable=True, dtype=tf.float32)
```

We define the weights and biases to be used in our model:

```
weights = []
bias = []

i = 0
for layer in layer_neurons:
    weights.append(get_weight(layer, 'weight{}'.format(i)))
    i+=1

i = 0
for layer in bias_shapes:
    bias.append(get_bias(layer, 'bias{}'.format(i)))
    i+=1
```

As an important step, we define the function that creates our whole neural network. As mentioned earlier, we have a three-layer neural network:

```
def model(input):
    l1 = dense_layer(input, weights[0], bias[0])
    l2 = dense_layer(l1, weights[1], bias[1])
    l3 = dense_layer(l2, weights[2], bias[2])
    return l3
```

Below we define the optimizer and the loss function. One thing to note here is that since we are using `softmax_cross_entropy_with_logits` as the loss function, we don't have to include a `softmax` layer in our model. The reason for this is that the `softmax_cross_entropy_with_logits` function already applies a softmax to the given inputs.

```
optimizer = tf.optimizers.Adam(lr)

def loss(pred, target):
    return tf.nn.softmax_cross_entropy_with_logits(target, pred)
```

We define one training step below. Note that we are using `tf.GradientTape` here for automatic differentiation. Therefore, we don't have to define the backward pass operations while creating the model.

```
def train_step(model, inputs, outputs, epoch):
    epoch_loss_avg = None
    with tf.GradientTape() as tape:
        current_loss = loss(model(inputs), outputs)
    grads = tape.gradient(current_loss, weights)
    optimizer.apply_gradients(zip(grads, weights))
    epoch_loss_avg = tf.reduce_mean(current_loss)
    return epoch_loss_avg
```

## 5. Training

Below we train our model for 10 epochs. In each epoch we traverse the whole training dataset. The total loss is divided by the number of iterations to get the average loss per batch:

```
num_epochs = 10

for epoch in range(num_epochs):
    epoch_loss = 0
    i = 0
    for train_data in train_dataset:
        batch_images, batch_labels = train_data
        iter_loss = train_step(model, batch_images, batch_labels, epoch)
        epoch_loss += iter_loss
        i+=1
    print("--- On epoch {} ---".format(epoch))
    tf.print("| Loss: ", epoch_loss/i)
```

## 6. Evaluation

We use the trained model on the test dataset and normalize by the number of test samples to obtain the final accuracy:

```
acc = 0
for test_data in test_dataset:
    batch_images, batch_labels = test_data
    predictions = model(batch_images)
    predictions = tf.nn.softmax(predictions)
    equality = tf.math.equal(np.argmax(predictions, axis=1), np.argmax(batch_labels, axis=1))
    acc += np.sum(equality)

acc /= 10000
print(acc)
```

Congratulations on finishing this notebook! You can move on to the next one, in which we are going to use PyTorch to classify MNIST examples.

**Bonus - Try to** (a sketch of one possible approach follows below):

- Get a test image
- Plot the image
- Make a model prediction on the image
- Print the predicted label and the actual label!
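Below is a minimal sketch of one way to do the bonus exercise, reusing the `test_dataset`, `model`, `img_width`, and `img_height` objects defined above (this is just one possible solution added here, not part of the original notebook):

```
# Grab one batch from the test loader and take its first image.
batch_images, batch_labels = next(iter(test_dataset))
test_image = batch_images[0]

# Plot the image (reshape the flat 784-vector back to 28 x 28).
plt.imshow(tf.reshape(test_image, shape=(img_width, img_height, 1))[:, :, 0], cmap='gray')

# Make a prediction: the model expects a batch, so add a leading dimension.
logits = model(tf.expand_dims(test_image, axis=0))
predicted_label = np.argmax(tf.nn.softmax(logits), axis=1)[0]
actual_label = np.argmax(batch_labels[0])

print("Predicted label:", predicted_label)
print("Actual label:", actual_label)
```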
# 1. Load the houseprices data from Thinkful's database. ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression from sklearn import linear_model from sklearn.model_selection import train_test_split from sklearn.metrics import mean_absolute_error import statsmodels.api as sm from statsmodels.tools.eval_measures import mse, rmse from sklearn.linear_model import LassoCV from sklearn.linear_model import RidgeCV from sklearn.linear_model import ElasticNetCV from sqlalchemy import create_engine import seaborn as sns # Display preferences. %matplotlib inline pd.options.display.float_format = '{:.3f}'.format import warnings warnings.filterwarnings(action="ignore") postgres_user = 'dsbc_student' postgres_pw = '7*.8G9QH21' postgres_host = '142.93.121.174' postgres_port = '5432' postgres_db = 'houseprices' engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format( postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db)) housing_df = pd.read_sql_query('select * from houseprices',con=engine) # no need for an open connection, # as we're only doing a single query engine.dispose() ``` # 2. Do data cleaning, exploratory data analysis, and feature engineering. You can use your previous work in this module. But make sure that your work is satisfactory. ``` housing_df.info() total_missing = housing_df.isnull().sum().sort_values(ascending=False) percent_missing = (housing_df.isnull().sum()/housing_df.isnull().count()).sort_values(ascending=False) missing_data = pd.concat([total_missing, percent_missing], axis=1, keys=['Total', 'Percent']) missing_data.head(20) ``` Checking the sales price distribution to make sure there is normality. ``` plt.hist(housing_df.saleprice) plt.title("The distribution of sale prices") plt.xlabel("sale prices") plt.ylabel("number of occurrence") plt.show() ``` looks like there could be some potential outliers, so I will run a boxplot to double check what I see in the histogram ``` sns.boxplot(housing_df['saleprice'], whis = 5) plt.tight_layout() ``` Checking for housing values that are outliers above 600,000 ``` housing_df[(housing_df['saleprice'] > 600000)].head(50).sort_values(by = 'saleprice', ascending = False) ``` I think dropping these four outliers will make the model run more accurately, but will revisit this idea later ## Checking on the nulls looks like a lot of the nulls are categorical, diving deeper some of them may be easy to replace while others may not be useful in the analysis of the data ``` total_missing = housing_df.isnull().sum().sort_values(ascending=False) percent_missing = (housing_df.isnull().sum()/housing_df.isnull().count()).sort_values(ascending=False) missing_data = pd.concat([total_missing, percent_missing], axis=1, keys=['Total', 'Percent']) missing_data.head(20) ``` ### PoolQC ``` housing_df['poolqc'].unique() ``` Going to check poolqc to see if it has any relationship with the pool size ``` pool_check = housing_df[housing_df['poolarea'] > 0] pool_check.info() ``` This validates that the nulls in pool qc are due to a lack of pool in a lot of these properties. 
I'll fill the nulls for poolqc with a value that states "no pool".

```
housing_df['poolqc'].fillna("no pool", inplace = True)
```

### Miscfeature

```
housing_df['miscfeature'].unique()
```

It looks like there is too much random variety here to impute anything, so I'll be dropping this column.

```
housing_df = housing_df.drop(columns = 'miscfeature')
```

### Fence

```
housing_df['fence'].unique()
```

I think it's fair to assume that nulls in fence are instances where there isn't a fence.

```
housing_df['fence'].fillna("no fence", inplace = True)
```

### Fireplacequ

I have a feeling this will be a lot like poolqc, where the value is NaN whenever there isn't a fireplace.

```
fireplace_check = housing_df[housing_df['fireplaces'] > 0]
fireplace_check.info()
```

Again, when there is a fireplace, the fireplace quality variable has a value. So in this case, I can again indicate "no fireplace" in place of nulls.

```
housing_df['fireplacequ'].fillna("no fireplace", inplace = True)
```

### Alley

Since alley likely doesn't have a tremendous impact on house prices and I won't be using it in my model, I plan on dropping this column altogether.

```
housing_df = housing_df.drop(columns = 'alley')
```

### Remaining null values

Since the remaining null values represent such a small percentage of the overall data set, I feel comfortable just dropping them.

```
housedf = housing_df.dropna()
housedf.info()
```

## Correlations and Feature engineering

```
non_numeric_columns = housedf.select_dtypes(['object']).columns
print(non_numeric_columns)
print("The number of non-numerical columns is {}".format(len(non_numeric_columns)))

numeric_columns = housedf.select_dtypes(['int64', 'float64']).columns
print(numeric_columns)
print("The number of numerical columns is {}".format(len(numeric_columns)))

np.abs(housedf[numeric_columns].iloc[:,1:].corr().loc[:,"saleprice"]).sort_values(ascending=False)

plt.figure(figsize=(30,50))
for index, column in enumerate(non_numeric_columns):
    plt.subplot(11,4,index+1)
    sns.barplot(housedf.groupby(column)["saleprice"].mean().index, housedf.groupby(column)["saleprice"].mean())
    plt.title("Average saleprice wrt. {}".format(column))
    plt.ylabel("Average sale price")
    plt.xlabel(column)
    plt.xticks(rotation='vertical')
plt.tight_layout()
plt.show()

print(numeric_columns)
```

Looking at the correlations, I've made my feature selection based on what I figure to be the most impactful features on sale price. Numeric features were chosen if their correlation with the target variable was over 0.5. I also added yrsold and mosold in order to bring in additional economic data for question 6.

```
chosen_features = ['mszoning', 'neighborhood', 'condition1', 'condition2', 'bldgtype', 'housestyle', 'roofstyle',
                   'roofmatl', 'exterior1st', 'exterior2nd', 'masvnrtype', 'exterqual', 'extercond', 'foundation',
                   'bsmtqual', 'bsmtcond', 'bsmtexposure', 'bsmtfintype1', 'bsmtfintype2', 'heating', 'heatingqc',
                   'centralair', 'electrical', 'kitchenqual', 'functional', 'garagetype', 'garagefinish', 'garagequal',
                   'garagecond', 'paveddrive', 'poolqc', 'fence', 'saletype', 'salecondition', 'overallqual',
                   'grlivarea', 'garagecars', 'garagearea', 'totalbsmtsf', 'firstflrsf', 'fullbath', 'totrmsabvgrd',
                   'yearbuilt', 'yearremodadd', 'garageyrblt', 'saleprice', 'yrsold', 'mosold']

house_test = housedf[chosen_features]
house = pd.get_dummies(house_test, drop_first = True)
```

# 3. Now, split your data into train and test sets where 20% of the data resides in the test set.
``` Y = house['saleprice'] X = house.drop(columns = ['saleprice']) lrm = linear_model.LinearRegression() lrm.fit(X,Y) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) print("The number of observations in training set is {}".format(X_train.shape[0])) print("The number of observations in test set is {}".format(X_test.shape[0])) X_train = sm.add_constant(X_train) # We fit an OLS model using statsmodels results = sm.OLS(y_train, X_train).fit() # We print the summary results print(results.summary()) # We add constant to the model as it's a best practice # to do so every time! X_test = sm.add_constant(X_test) # We are making predictions here y_preds = results.predict(X_test) plt.scatter(y_test, y_preds) plt.plot(y_test, y_test, color="red") plt.xlabel("true values") plt.ylabel("predicted values") plt.title("Charges: true and predicted values") plt.show() Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) print("The number of observations in training set is {}".format(X_train.shape[0])) print("The number of observations in test set is {}".format(X_test.shape[0])) # We fit an OLS model using sklearn lrm = LinearRegression() lrm.fit(X_train, y_train) # We are making predictions here y_preds_train = lrm.predict(X_train) y_preds_test = lrm.predict(X_test) print("R-squared of the model in the training set is: {}".format(lrm.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model in the test set is: {}".format(lrm.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) ``` # 4. Build several linear regression models including Lasso, Ridge, or ElasticNet and train them in the training set. Use k-fold cross-validation to select the best hyperparameters if your models include one! 
``` from sklearn.linear_model import Lasso Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) lassoregr = Lasso(alpha=100) lassoregr.fit(X_train, y_train) # We are making predictions here y_preds_train = lassoregr.predict(X_train) y_preds_test = lassoregr.predict(X_test) train_score= lassoregr.score(X_train, y_train) test_score= lassoregr.score(X_test, y_test) coeff_used = np.sum(lassoregr.coef_!=0) print ("training score: {}".format(train_score)) print ("test score: {}".format(test_score)) print ("number of features used: {} ".format(coeff_used)) print('-------------------------------------------------') print("R-squared of the model on the training set is: {}".format(lassoregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(lassoregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) from sklearn.linear_model import Ridge Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) # Fitting a ridge regression model. Alpha is the regularization # parameter (usually called lambda). As alpha gets larger, parameter # shrinkage grows more pronounced. ridgeregr = Ridge(alpha=10) ridgeregr.fit(X_train, y_train) # We are making predictions here y_preds_train = ridgeregr.predict(X_train) y_preds_test = ridgeregr.predict(X_test) print("R-squared of the model on the training set is: {}".format(ridgeregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(ridgeregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) from sklearn.linear_model import ElasticNet Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) elasticregr = ElasticNet(alpha=10, l1_ratio=0.5) elasticregr.fit(X_train, y_train) # We are making predictions here y_preds_train = elasticregr.predict(X_train) y_preds_test = elasticregr.predict(X_test) print("R-squared of the model on the training set is: {}".format(elasticregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(elasticregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: 
{}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) ``` ## K fold cross validation ``` from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV from sklearn.linear_model import LassoCV Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) lassoregr = LassoCV() lassoregr.fit(X_train, y_train) # We are making predictions here y_preds_train = lassoregr.predict(X_train) y_preds_test = lassoregr.predict(X_test) train_score= lassoregr.score(X_train, y_train) test_score= lassoregr.score(X_test, y_test) coeff_used = np.sum(lassoregr.coef_!=0) print ("training score: {}".format(train_score)) print ("test score: {}".format(test_score)) print ("number of features used: {} ".format(coeff_used)) print('-------------------------------------------------') print("R-squared of the model on the training set is: {}".format(lassoregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(lassoregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) from sklearn.linear_model import RidgeCV Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) # Fitting a ridge regression model. Alpha is the regularization # parameter (usually called lambda). As alpha gets larger, parameter # shrinkage grows more pronounced. 
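# NOTE: when called with no arguments, RidgeCV searches only its default alpha grid
# (0.1, 1.0, 10.0) and, because cv is None, uses an efficient leave-one-out scheme
# rather than k-fold. Passing an explicit alphas grid and cv=5 (arbitrary example
# values) would turn this into a true k-fold hyperparameter search.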
ridgeregr = RidgeCV() ridgeregr.fit(X_train, y_train) # We are making predictions here y_preds_train = ridgeregr.predict(X_train) y_preds_test = ridgeregr.predict(X_test) print("R-squared of the model on the training set is: {}".format(ridgeregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(ridgeregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) from sklearn.linear_model import ElasticNetCV Y = house['saleprice'] X = house.drop(columns = ['saleprice']) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465) elasticregr = ElasticNetCV() elasticregr.fit(X_train, y_train) # We are making predictions here y_preds_train = elasticregr.predict(X_train) y_preds_test = elasticregr.predict(X_test) print("R-squared of the model on the training set is: {}".format(elasticregr.score(X_train, y_train))) print("-----Test set statistics-----") print("R-squared of the model on the test set is: {}".format(elasticregr.score(X_test, y_test))) print("Mean absolute error of the prediction is: {}".format(mean_absolute_error(y_test, y_preds_test))) print("Mean squared error of the prediction is: {}".format(mse(y_test, y_preds_test))) print("Root mean squared error of the prediction is: {}".format(rmse(y_test, y_preds_test))) print("Mean absolute percentage error of the prediction is: {}".format(np.mean(np.abs((y_test - y_preds_test) / y_test)) * 100)) ``` Ridge seems to perform the best
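To make that claim a bit more rigorous, here is a small follow-up sketch (the alpha grid and the choice of 5 folds are arbitrary, and it reuses the `house` dataframe built above): it runs `RidgeCV` as an explicit 5-fold search over a wider grid and also reports a cross-validated R-squared on the training data, which is a fairer basis for comparison than a single train/test split.

```
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score, train_test_split

Y = house['saleprice']
X = house.drop(columns = ['saleprice'])
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 465)

# Explicit 5-fold search over a wider, log-spaced alpha grid.
alphas = np.logspace(-2, 4, 25)
ridge_kfold = RidgeCV(alphas=alphas, cv=5)
ridge_kfold.fit(X_train, y_train)

print("Best alpha found by 5-fold CV: {}".format(ridge_kfold.alpha_))
print("R-squared on the training set: {}".format(ridge_kfold.score(X_train, y_train)))
print("R-squared on the test set: {}".format(ridge_kfold.score(X_test, y_test)))

# Cross-validated R-squared on the training data only, as a fairer comparison
# than a single train/test split.
cv_scores = cross_val_score(RidgeCV(alphas=alphas, cv=5), X_train, y_train, cv=5, scoring='r2')
print("5-fold CV R-squared: {:.3f} +/- {:.3f}".format(cv_scores.mean(), cv_scores.std()))
```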
``` !pip install tensorflow==2.5.0 !pip install -q -U kaggle !pip install --upgrade --force-reinstall --no-deps kaggle !mkdir ~/.kaggle !cp /content/drive/MyDrive/kaggle.json ~/.kaggle/ !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d mrkmakr/criteo-dataset !unzip criteo-dataset.zip import os import itertools import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder, KBinsDiscretizer from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow.keras import Model from tensorflow.keras.layers import Layer, Input, ReLU from tensorflow.keras.layers import Dense, Embedding, Dropout from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.regularizers import l2 from tensorflow.keras.losses import binary_crossentropy from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import AUC os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' os.environ['CUDA_VISIBLE_DEVICES'] = '0' file = 'dac/train.txt' read_part = True sample_num = 100000 test_size = 0.2 embed_dim = 8 att_vector = 8 mode = 'att' # 'max', 'avg' dropout = 0.5 activation = 'relu' embed_reg = 1e-5 learning_rate = 0.001 batch_size = 4096 epochs = 10 def sparseFeature(feat, feat_num, embed_dim=4): """ create dictionary for sparse feature :param feat: feature name :param feat_num: the total number of sparse features that do not repeat :param embed_dim: embedding dimension :return: """ return {'feat_name': feat, 'feat_num': feat_num, 'embed_dim': embed_dim} def denseFeature(feat): """ create dictionary for dense feature :param feat: dense feature name :return: """ return {'feat_name': feat} def create_criteo_dataset(file, embed_dim=8, read_part=True, sample_num=100000, test_size=0.2): """ a example about creating criteo dataset :param file: dataset's path :param embed_dim: the embedding dimension of sparse features :param read_part: whether to read part of it :param sample_num: the number of instances if read_part is True :param test_size: ratio of test dataset :return: feature columns, train, test """ names = ['label', 'I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11', 'I12', 'I13', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11', 'C12', 'C13', 'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21', 'C22', 'C23', 'C24', 'C25', 'C26'] if read_part: data_df = pd.read_csv(file, sep='\t', iterator=True, header=None, names=names) data_df = data_df.get_chunk(sample_num) else: data_df = pd.read_csv(file, sep='\t', header=None, names=names) sparse_features = ['C' + str(i) for i in range(1, 27)] dense_features = ['I' + str(i) for i in range(1, 14)] features = sparse_features + dense_features data_df[sparse_features] = data_df[sparse_features].fillna('-1') data_df[dense_features] = data_df[dense_features].fillna(0) # Bin continuous data into intervals. 
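# KBinsDiscretizer with strategy='uniform' splits each dense feature into 100
# equal-width bins, and encode='ordinal' replaces the raw value with its bin index,
# so the continuous columns can go through the same embedding lookup as the sparse ones.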
est = KBinsDiscretizer(n_bins=100, encode='ordinal', strategy='uniform') data_df[dense_features] = est.fit_transform(data_df[dense_features]) for feat in sparse_features: le = LabelEncoder() data_df[feat] = le.fit_transform(data_df[feat]) # ==============Feature Engineering=================== # ==================================================== feature_columns = [sparseFeature(feat, int(data_df[feat].max()) + 1, embed_dim=embed_dim) for feat in features] train, test = train_test_split(data_df, test_size=test_size) train_X = train[features].values.astype('int32') train_y = train['label'].values.astype('int32') test_X = test[features].values.astype('int32') test_y = test['label'].values.astype('int32') return feature_columns, (train_X, train_y), (test_X, test_y) class AFM(Model): def __init__(self, feature_columns, mode, att_vector=8, activation='relu', dropout=0.5, embed_reg=1e-6): """ AFM :param feature_columns: A list. sparse column feature information. :param mode: A string. 'max'(MAX Pooling) or 'avg'(Average Pooling) or 'att'(Attention) :param att_vector: A scalar. attention vector. :param activation: A string. Activation function of attention. :param dropout: A scalar. Dropout. :param embed_reg: A scalar. the regularizer of embedding """ super(AFM, self).__init__() self.sparse_feature_columns = feature_columns self.mode = mode self.embed_layers = { 'embed_' + str(i): Embedding(input_dim=feat['feat_num'], input_length=1, output_dim=feat['embed_dim'], embeddings_initializer='random_uniform', embeddings_regularizer=l2(embed_reg)) for i, feat in enumerate(self.sparse_feature_columns) } if self.mode == 'att': self.attention_W = Dense(units=att_vector, activation=activation, use_bias=True) self.attention_dense = Dense(units=1, activation=None) self.dropout = Dropout(dropout) self.dense = Dense(units=1, activation=None) def call(self, inputs): # Input Layer sparse_inputs = inputs # Embedding Layer embed = [self.embed_layers['embed_{}'.format(i)](sparse_inputs[:, i]) for i in range(sparse_inputs.shape[1])] embed = tf.transpose(tf.convert_to_tensor(embed), perm=[1, 0, 2]) # (None, len(sparse_inputs), embed_dim) # Pair-wise Interaction Layer row = [] col = [] for r, c in itertools.combinations(range(len(self.sparse_feature_columns)), 2): row.append(r) col.append(c) p = tf.gather(embed, row, axis=1) # (None, (len(sparse) * len(sparse) - 1) / 2, k) q = tf.gather(embed, col, axis=1) # (None, (len(sparse) * len(sparse) - 1) / 2, k) bi_interaction = p * q # (None, (len(sparse) * len(sparse) - 1) / 2, k) # mode if self.mode == 'max': # MaxPooling Layer x = tf.reduce_sum(bi_interaction, axis=1) # (None, k) elif self.mode == 'avg': # AvgPooling Layer x = tf.reduce_mean(bi_interaction, axis=1) # (None, k) else: # Attention Layer x = self.attention(bi_interaction) # (None, k) # Output Layer outputs = tf.nn.sigmoid(self.dense(x)) return outputs def summary(self): sparse_inputs = Input(shape=(len(self.sparse_feature_columns),), dtype=tf.int32) Model(inputs=sparse_inputs, outputs=self.call(sparse_inputs)).summary() def attention(self, bi_interaction): a = self.attention_W(bi_interaction) # (None, (len(sparse) * len(sparse) - 1) / 2, t) a = self.attention_dense(a) # (None, (len(sparse) * len(sparse) - 1) / 2, 1) a_score = tf.nn.softmax(a, axis=1) # (None, (len(sparse) * len(sparse) - 1) / 2, 1) outputs = tf.reduce_sum(bi_interaction * a_score, axis=1) # (None, embed_dim) return outputs # ========================== Create dataset ======================= feature_columns, train, test = 
create_criteo_dataset(file=file, embed_dim=embed_dim, read_part=read_part, sample_num=sample_num, test_size=test_size) train_X, train_y = train test_X, test_y = test # ============================Build Model========================== mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = AFM(feature_columns, mode, att_vector, activation, dropout, embed_reg) model.summary() # =========================Compile============================ model.compile(loss=binary_crossentropy, optimizer=Adam(learning_rate=learning_rate), metrics=[AUC()]) # ============================model checkpoint====================== # check_path = 'save/afm_weights.epoch_{epoch:04d}.val_loss_{val_loss:.4f}.ckpt' # checkpoint = tf.keras.callbacks.ModelCheckpoint(check_path, save_weights_only=True, # verbose=1, period=5) # ===========================Fit============================== model.fit( train_X, train_y, epochs=epochs, callbacks=[EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)], # checkpoint, batch_size=batch_size, validation_split=0.1 ) # ===========================Test============================== print('test AUC: %f' % model.evaluate(test_X, test_y, batch_size=batch_size)[1]) ```
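To see what the attention pooling inside `AFM` does in isolation, here is a minimal sketch that runs only the pair-wise interaction and attention steps on random tensors; the batch size, number of fields, embedding size, and attention size below are arbitrary values picked for the example, not values taken from the model above.

```
import itertools
import tensorflow as tf

batch, n_fields, k, att_units = 4, 5, 8, 8

# Stand-in for the per-field embeddings the Embedding layers would produce.
embed = tf.random.normal((batch, n_fields, k))

# Pair-wise Interaction Layer: element-wise product of every pair of field embeddings.
pairs = list(itertools.combinations(range(n_fields), 2))
row = [r for r, _ in pairs]
col = [c for _, c in pairs]
p = tf.gather(embed, row, axis=1)              # (batch, num_pairs, k)
q = tf.gather(embed, col, axis=1)              # (batch, num_pairs, k)
bi_interaction = p * q                         # (batch, num_pairs, k)

# Attention network: score each interaction, softmax over the pair axis,
# then collapse to a single k-dimensional vector with a weighted sum.
attention_W = tf.keras.layers.Dense(att_units, activation='relu', use_bias=True)
attention_dense = tf.keras.layers.Dense(1, activation=None)
a_score = tf.nn.softmax(attention_dense(attention_W(bi_interaction)), axis=1)
pooled = tf.reduce_sum(bi_interaction * a_score, axis=1)

print(pairs[:3])        # first few (row, col) index pairs
print(pooled.shape)     # (4, 8): one pooled vector per example
```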
# Training Neural Networks The network we built in the previous part isn't so smart, it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function given enough data and compute time. <img src="assets/function_approx.png" width=500px> At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function. To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems $$ \large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2} $$ where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels. By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base. <img src='assets/gradient_descent.png' width=350px> ## Backpropagation For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks. Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation. <img src='assets/backprop_diagram.png' width=550px> In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss. To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule. 
$$ \large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2} $$ **Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on. We update our weights using this gradient with some learning rate $\alpha$. $$ \large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1} $$ The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum. ## Losses in PyTorch Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels. Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), > This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class. > > The input is expected to contain scores for each class. This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities. ``` import torch from torch import nn import torch.nn.functional as F from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ]) # Download and load the training data trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) ``` ### Note If you haven't seen `nn.Sequential` yet, please finish the end of the Part 2 notebook. ``` # Build a feed-forward network model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)) # Define the loss criterion = nn.CrossEntropyLoss() # Get our data images, labels = next(iter(trainloader)) # Flatten images images = images.view(images.shape[0], -1) # Forward pass, get our logits logits = model(images) # Calculate the loss with the logits and the labels loss = criterion(logits, labels) print(loss) ``` In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)). 
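As a quick numerical check of that point (a small sketch with made-up logits, not code from the lesson), `nn.CrossEntropyLoss` applied to raw scores gives the same value as `nn.NLLLoss` applied to the `F.log_softmax` of those scores:

```
torch.manual_seed(0)
dummy_logits = torch.randn(3, 10)        # raw scores for 3 examples, 10 classes
dummy_labels = torch.tensor([1, 4, 9])

ce = nn.CrossEntropyLoss()(dummy_logits, dummy_labels)
nll = nn.NLLLoss()(F.log_softmax(dummy_logits, dim=1), dummy_labels)

print(ce, nll)                           # identical values
print(torch.isclose(ce, nll))            # tensor(True)
```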
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately. ``` # TODO: Build a feed-forward network model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10), nn.LogSoftmax(dim=1) ) # TODO: Define the loss criterion = nn.NLLLoss() ### Run this to check your work # Get our data images, labels = next(iter(trainloader)) # Flatten images images = images.view(images.shape[0], -1) # Forward pass, get our logits logits = model(images) # Calculate the loss with the logits and the labels loss = criterion(logits, labels) print(loss) ``` ## Autograd Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`. You can turn off gradients for a block of code with the `torch.no_grad()` content: ```python x = torch.zeros(1, requires_grad=True) >>> with torch.no_grad(): ... y = x * 2 >>> y.requires_grad False ``` Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`. The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`. ``` x = torch.randn(2,2, requires_grad=True) print(x) y = x**2 print(y) ``` Below we can see the operation that created `y`, a power operation `PowBackward0`. ``` ## grad_fn shows the function that generated this variable print(y.grad_fn) ``` The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean. ``` z = y.mean() print(z) ``` You can check the gradients for `x` and `y` but they are empty currently. ``` print(x.grad) ``` To calculate the gradients, you need to run the `.backward` method on a Variable, `z` for example. This will calculate the gradient for `z` with respect to `x` $$ \frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2} $$ ``` z.backward() print(x.grad) print(x/2) ``` These gradients calculations are particularly useful for neural networks. For training we need the gradients of the cost with respect to the weights. With PyTorch, we run data forward through the network to calculate the loss, then, go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step. 
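Before folding this into the network below, here is a tiny standalone example of that last step (the data and learning rate are made up): compute a loss on a single parameter, call `.backward()`, and apply the gradient descent update inside `torch.no_grad()` so the update itself isn't tracked.

```
# Fit y = 3*x with one parameter w and a squared-error loss.
w = torch.tensor(1.0, requires_grad=True)
x = torch.tensor([1.0, 2.0, 3.0])
y = 3 * x

lr = 0.05
for step in range(3):
    loss = ((w * x - y) ** 2).mean()
    loss.backward()               # fills in w.grad
    with torch.no_grad():
        w -= lr * w.grad          # one gradient descent step
    w.grad.zero_()                # clear the gradient before the next pass
    print(step, loss.item(), w.item())
```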
## Loss and Autograd together

When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.

```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

logits = model(images)
loss = criterion(logits, labels)

print('Before backward pass: \n', model[0].weight.grad)

loss.backward()

print('After backward pass: \n', model[0].weight.grad)
```

## Training the network!

There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.

```
from torch import optim

# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```

Now we know how to use all the individual parts, so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:

* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights

Below I'll go through one training step and print out the weights and gradients so you can see how they change. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.

```
print('Initial weights - ', model[0].weight)

images, labels = next(iter(trainloader))
images.resize_(64, 784)

# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()

# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)

# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```

### Training for real

Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.

>**Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
## Your solution here

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784 long vector
        images = images.view(images.shape[0], -1)

        # Training pass
        # Zeroing out gradients from the previous batch
        optimizer.zero_grad()

        # Getting predictions
        output = model(images)
        # Calculating loss
        loss = criterion(output, labels)
        # Propagating backwards
        loss.backward()
        running_loss += loss.item()
        # Updating the weights
        optimizer.step()
    else:
        print(f"Training loss: {running_loss/len(trainloader)}")
```

With the network trained, we can check out its predictions.

```
%matplotlib inline
import helper

images, labels = next(iter(trainloader))

img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logps = model(img)

# The output of the network is log-probabilities, so take the exponential to get probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```

Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
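As one extra sanity check (not part of the original notebook), you could estimate accuracy on a single batch by comparing the model's most likely class to the labels; this assumes the trained `model` and `trainloader` from above are still in scope.

```
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

with torch.no_grad():
    logps = model(images)

ps = torch.exp(logps)                     # log-probabilities -> probabilities
top_p, top_class = ps.topk(1, dim=1)      # most likely class for each image
equals = top_class == labels.view(*top_class.shape)
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f"Accuracy on one training batch: {accuracy.item():.3f}")
```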
## Initialization

```
import numpy as np

# random number from 0 to MAX_NUMBER for guessing
MAX_NUMBER = 100
# how many times to test algorithm
ITERATION_COUNT = 10000

def score_game(game_core):
    '''Run the game ITERATION_COUNT times to find out how quickly the algorithm guesses the number.'''
    count_ls = []
    np.random.seed(1)  # fix the RANDOM SEED so that your experiment is reproducible!
    random_array = np.random.randint(1, MAX_NUMBER+1, size=(ITERATION_COUNT))

    for number in random_array:
        count_ls.append(game_core(number))

    score = int(np.mean(count_ls))
    print(f"Your algorithm guesses the number in an average of {score} attempts")
    return(score)
```

## User algorithm v1

Start from the middle of the guessing range and move towards the target number, halving the step (initially a quarter of the range) with every attempt.

```
def game_core_v3(number):
    """Custom algorithm for guessing random number.

    Args:
        number (int): Number to guess

    Returns:
        int: Number of attempts
    """
    count = 1
    predict = MAX_NUMBER // 2
    step = predict // 2 + int(predict % 2 > 0)

    while number != predict:
        count += 1
        if number > predict:
            predict += step
        elif number < predict:
            predict -= step
        step = step // 2 + int(step % 2 > 0)

    return(count)

score_game(game_core_v3)
```

## User algorithm v2

Search by shifting the boundaries (binary search).

```
def game_core_v4(number):
    """Guessing based on Binary Search.

    Args:
        number (int): Number to guess

    Returns:
        int: Number of attempts
    """
    left = 0
    right = MAX_NUMBER+1
    count = 1
    predict = MAX_NUMBER // 2

    while number != predict:
        count += 1
        if predict < number:
            left = predict
        else:
            right = predict
        predict = (left + right) // 2

    return(count)

score_game(game_core_v4)
```

# Algorithm from the assignment v1

Try to guess the number at random.

```
def game_core_v1(number):
    '''Simply guess at random, never using the information about whether the guess is
    higher or lower. The function takes the hidden number and returns the number of attempts.'''
    count = 0
    while True:
        count += 1
        predict = np.random.randint(1, 101)  # candidate number
        if number == predict:
            return count  # exit the loop once the number is guessed

score_game(game_core_v1)
```

# Algorithm from the assignment v2

Pick a random number, then move from it towards the hidden number in steps of 1.

```
def game_core_v2(number):
    '''First set any random number, then decrease or increase it depending on whether it is
    higher or lower than the target. The function takes the hidden number and returns the number of attempts.'''
    count = 1
    predict = np.random.randint(1, 101)
    while number != predict:
        count += 1
        if number > predict:
            predict += 1
        elif number < predict:
            predict -= 1
    return(count)  # exit the loop once the number is guessed

score_game(game_core_v2)
```
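As a quick sanity check on the binary-search variant (`game_core_v4`): every guess halves the remaining interval, so the number of attempts is bounded by $\lceil \log_2 \text{MAX\_NUMBER} \rceil$, i.e. it never needs more than about 7 attempts for `MAX_NUMBER = 100`. A minimal check:

```
import math

# Worst case for binary search on the range 1..MAX_NUMBER:
# each guess halves the remaining interval.
MAX_NUMBER = 100  # same constant as above
print(math.ceil(math.log2(MAX_NUMBER)))  # -> 7
```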
# Example: CanvasXpress facet Chart No. 7 This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/facet-7.html This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block. ``` from canvasxpress.canvas import CanvasXpress from canvasxpress.js.collection import CXEvents from canvasxpress.render.jupyter import CXNoteBook cx = CanvasXpress( render_to="facet7", data={ "y": { "vars": [ "s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10", "s11", "s12", "s13", "s14", "s15", "s16", "s17", "s18", "s19", "s20", "s21" ], "smps": [ "U-Trial 1", "U-Trial 2", "U-Trial 3", "S-Trial 1", "S-Trial 2", "S-Trial 3" ], "data": [ [ 38.4, 27.7, 25.7, 53.1, 30.6, 30.2 ], [ 46.2, 57.2, 41.9, 54.7, 43.3, 56.7 ], [ 72.5, 57.9, 51.9, 74.2, 53.4, 42.4 ], [ 38, 38, 32.2, 49.6, 37.4, 34.4 ], [ 82.8, 57.9, 64.7, 53.6, 48.6, 44.8 ], [ 33.9, 32, 31.4, 51.3, 35.5, 42.9 ], [ 50.4, 40.6, 40.1, 44.1, 46.9, 42.7 ], [ 35, 33.1, 43.2, 34, 26.4, 24.8 ], [ 32.8, 26.8, 33.9, 34.5, 25.1, 25.1 ], [ 60.1, 53.2, 40.4, 59.1, 87.1, 59.2 ], [ 75.1, 63.1, 58, 67.3, 43.8, 42.2 ], [ 57.6, 57.7, 61.5, 75.5, 126.6, 48.4 ], [ 55.5, 63.3, 44.6, 41.1, 41.8, 32 ], [ 49.5, 45.8, 35.3, 52.2, 53.8, 48.1 ], [ 40.9, 35.7, 37.2, 28.3, 26, 33.7 ], [ 44.3, 46.8, 39.4, 74.9, 45.3, 42.6 ], [ 93.8, 91.9, 77.4, 77.5, 55.8, 54.9 ], [ 47.9, 59.9, 52.8, 50.9, 58.6, 64.5 ], [ 75.2, 54.1, 63.6, 70.1, 44, 43.1 ], [ 46.2, 39.3, 56.6, 60.3, 47.8, 52.8 ], [ 56.3, 45.8, 58.9, 59.9, 36.8, 44.3 ] ] }, "m": { "Name": "Scents", "Description": "Data on the time subjects required to complete a pencil and paper maze when they were smelling a floral scent and when they were not.", "Reference": "Hirsch, A. R., and Johnston, L. H. Odors and Learning, Smell & Taste Treatment and Research Foundation, Chicago." }, "z": { "Sex": [ "M", "F", "M", "M", "M", "F", "F", "F", "M", "F", "F", "F", "F", "M", "M", "M", "M", "M", "F", "F", "M" ], "Smoker": [ "N", "Y", "N", "N", "N", "Y", "N", "N", "N", "N", "Y", "Y", "Y", "Y", "N", "N", "Y", "N", "Y", "N", "N" ], "Opinion": [ "pos", "neg", "pos", "neg", "neg", "pos", "pos", "pos", "pos", "indiff", "pos", "indiff", "pos", "indiff", "indiff", "pos", "neg", "neg", "pos", "neg", "neg" ], "Age": [ 23, 43, 43, 32, 15, 37, 26, 35, 26, 31, 35, 55, 25, 39, 25, 26, 33, 62, 54, 38, 65 ], "Order": [ 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1 ] } }, config={ "graphType": "Scatter2D", "layoutCollapse": False, "layoutType": "wrap", "legendBox": True, "shapeBy": "Age", "showTransition": False, "theme": "CanvasXpress", "title": "Floral scent data set", "xAxis": [ "U-Trial 1", "U-Trial 2", "U-Trial 3" ], "yAxis": [ "S-Trial 1", "S-Trial 2", "S-Trial 3" ] }, width=613, height=613, events=CXEvents(), after_render=[ [ "segregate", [ "Variables", [ "Opinion", "Sex" ], None, None ] ] ], other_init_params={ "version": 35, "events": False, "info": False, "afterRenderInit": False, "noValidate": True } ) display = CXNoteBook(cx) display.render(output_file="facet_7.html") ```
# Prediction Use the {ref}`openpifpaf.predict <cli-help-predict>` tool on the command line to run multi-person pose estimation on images. To create predictions from other Python modules, please refer to {doc}`predict_api`. First we present the command line tool for predictions on images, {ref}`openpifpaf.predict <cli-help-predict>`. Then follows a short introduction to OpenPifPaf predictions on videos with {ref}`openpifpaf.video <cli-help-video>`. ## Images Run {ref}`openpifpaf.predict <cli-help-predict>` on an image: ``` %%bash python -m openpifpaf.predict coco/000000081988.jpg --image-output --json-output --image-min-dpi=200 --show-file-extension=jpeg ``` This command produced two outputs: an image and a json file. You can provide file or folder arguments to the `--image-output` and `--json-output` flags. Here, we used the default which created these two files: ```sh coco/000000081988.jpg.predictions.jpeg coco/000000081988.jpg.predictions.json ``` Here is the image: ``` import IPython IPython.display.Image('coco/000000081988.jpg.predictions.jpeg') ``` Image credit: "[Learning to surf](https://www.flickr.com/photos/fotologic/6038911779/in/photostream/)" by fotologic which is licensed under [CC-BY-2.0]. [CC-BY-2.0]: https://creativecommons.org/licenses/by/2.0/ And below is the json output. The json data is a list where each entry in the list corresponds to one pose annotation. In this case, there are five entries corresponding to the five people in the image. Each annotation contains information on `"keypoints"`, `"bbox"`, `"score"` and `"category_id"`. All coordinates are in pixel coordinates. The `"keypoints"` entry is in COCO format with triples of `(x, y, c)` (`c` for confidence) for every joint as listed under {ref}`coco-person-keypoints`. The pixel coordinates have sub-pixel accuracy, i.e. 10.5 means the joint is between pixel 10 and 11. In rare cases, joints can be localized outside the field of view and then the pixel coordinates can be negative. When `c` is zero, the joint was not detected. The `"bbox"` (bounding box) format is `(x, y, w, h)`: the $(x, y)$ coordinate of the top-left corner followed by width and height. The `"score"` is a number between zero and one. ``` %%bash python -m json.tool coco/000000081988.jpg.predictions.json ``` Optional Arguments: * `--show`: show interactive matplotlib output * `--debug-indices`: enable debug messages and debug plots (see {ref}`Examples <example-debug>`) Full list of arguments is available with `--help`: {ref}`CLI help for predict <cli-help-predict>`. ## Videos ```sh python3 -m openpifpaf.video --source myvideotoprocess.mp4 --video-output --json-output ``` Requires OpenCV. The `--video-output` option also requires matplotlib. Replace `myvideotoprocess.mp4` with `0` for webcam0 or other OpenCV compatible sources. The full list of arguments is available with `--help`: {ref}`CLI help for video <cli-help-video>`. In v0.12.6, we introduced the ability to pipe the output to a virtual camera. This virtual camera can then be used as the source camera in Zoom and other conferencing softwares. You need a virtual camera on your system, e.g. from [OBS Studio](https://obsproject.com) (Mac and Windows) or [v4l2loopback](https://github.com/umlaeute/v4l2loopback#distributions) (Linux) and need to install `pip3 install pyvirtualcam`. Then you can use the `--video-output=virtualcam` argument. ## Debug Obtain extra information by adding `--debug` to the command line. It will show the structure of the neural network and timing information in the decoder. 
``` %%bash python -m openpifpaf.predict coco/000000081988.jpg --image-output --json-output --debug --image-min-dpi=200 --show-file-extension=jpeg ``` You can enable debug plots with `--debug-indices`. Please refer to {ref}`the debug outputs in the Examples <example-debug>` and some further {ref}`debug outputs in the prediction API <predict-fields>`.
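As a small illustration of the JSON format described above (a sketch, not part of the official CLI documentation), the predictions file can be loaded with the standard library and each annotation's score, bounding box, and keypoint triples inspected directly:

```
import json

# Load the predictions written by openpifpaf.predict above
with open('coco/000000081988.jpg.predictions.json') as f:
    annotations = json.load(f)

for i, ann in enumerate(annotations):
    # "keypoints" is a flat list of (x, y, c) triples in COCO order
    keypoints = [ann['keypoints'][j:j+3] for j in range(0, len(ann['keypoints']), 3)]
    detected = sum(1 for x, y, c in keypoints if c > 0)
    print(f"pose {i}: score={ann['score']:.2f}, bbox={ann['bbox']}, "
          f"{detected} of {len(keypoints)} joints detected")
```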
##**INSTALL REQUIRED PACKAGES AND IMPORT OTHERS** ``` !pip install --upgrade geopandas shapely hyperopt scikit-learn !pip install delayed import geopandas as gpd import pandas as pd import numpy as np from sklearn.model_selection import train_test_split,StratifiedKFold from sklearn.ensemble import RandomForestClassifier from sklearn import metrics from sklearn.feature_selection import mutual_info_classif from hyperopt import fmin, tpe, hp, STATUS_OK, Trials from sklearn.preprocessing import scale,normalize from sklearn.model_selection import cross_val_score from sklearn.experimental import enable_halving_search_cv from sklearn.model_selection import HalvingGridSearchCV import seaborn as sns import matplotlib.pyplot as plt from sklearn.metrics import f1_score,accuracy_score,classification_report import pickle %matplotlib inline ``` ## **COMBINE ALL TRAINING DATA** ``` from google.colab import drive drive.mount('/content/drive') list_data=[....]## the drive link list_kab=[....]## id of municipality all_train=pd.DataFrame() for i in range(0,len(list_data)): temp=pd.read_csv(list_data[i],sep=';').dropna() temp['idkab']=list_kab[i] all_train=temp[['Class_ID', 'xcoord', 'ycoord', 'L8_B2_min', 'L8_B3_min', 'L8_B4_min', 'L8_B5_min', 'L8_B6_min', 'L8_B2_med', 'L8_B3_med', 'L8_B4_med', 'L8_B5_med', 'L8_B6_med', 'L8_B2_mean', 'L8_B3_mean', 'L8_B4_mean', 'L8_B5_mean', 'L8_B6_mean', 'L8_B2_stdev', 'L8_B3_stdev', 'L8_B4_stdev', 'L8_B5_stdev', 'L8_B6_stdev', 'L8_bright_mean', 'L8_green_mean', 'L8_wet_mean', 'L8_bright_med', 'L8_green_med', 'L8_wet_med', 'S2_B2_min', 'S2_B3_min', 'S2_B4_min', 'S2_B8_min', 'S2_B11_min', 'S2_B2_med', 'S2_B3_med', 'S2_B4_med', 'S2_B8_med', 'S2_B11_med', 'S2_B2_mean', 'S2_B3_mean', 'S2_B4_mean', 'S2_B8_mean', 'S2_B11_mean', 'S2_B2_stdev', 'S2_B3_stdev', 'S2_B4_stdev', 'S2_B8_stdev', 'S2_B11_stdev', 'S2_bright_mean', 'S2_green_mean', 'S2_wet_mean', 'S2_bright_med', 'S2_green_med', 'S2_wet_med', 'S1_VV_min', 'S1_VH_min', 'S1_VV_med', 'S1_VH_med', 'S1_VV_mean', 'S1_VH_mean', 'S1_ration_med', 'S1_ratio_mean', 'S1_ratio_min', 'Alos_dsm', 'Alos_slope', 'Alos_landform','idkab']].append(all_train) ``` DESKRIPSI CLASS ID ``` all_train.pivot_table(values='xcoord',index='idkab',columns='Class_ID',aggfunc='count').fillna(0) ``` ## **DATA TRANSFORMATION** - Transform the ALOS landform into PODES landform categoric ``` dict_data_landform={0:'others',11:'others',12:'others',14:'others',15:'others',21: 'u-slope',22:'u-slope', 31:'l-slope',32:'l-slope',41:'valley',42:'valley',24:'flat',34:'flat'} all_train['PODES_landform']=all_train['Alos_landform'].apply(lambda y: dict_data_landform[y]) cat_data=pd.get_dummies(all_train[['PODES_landform']], columns=['PODES_landform'], prefix=["Type_is"]) all_train=all_train.join(cat_data) ``` ## **TUNING AND FEATURE SELECTION** ### Mutual Information ``` mi_=mutual_info_classif(all_train[[ 'L8_B2_min', 'L8_B3_min', 'L8_B4_min', 'L8_B5_min', 'L8_B6_min', 'L8_B2_med', 'L8_B3_med', 'L8_B4_med', 'L8_B5_med', 'L8_B6_med', 'L8_B2_mean', 'L8_B3_mean', 'L8_B4_mean', 'L8_B5_mean', 'L8_B6_mean', 'L8_B2_stdev', 'L8_B3_stdev', 'L8_B4_stdev', 'L8_B5_stdev', 'L8_B6_stdev', 'L8_bright_mean', 'L8_green_mean', 'L8_wet_mean', 'L8_bright_med', 'L8_green_med', 'L8_wet_med', 'S2_B2_min', 'S2_B3_min', 'S2_B4_min', 'S2_B8_min', 'S2_B11_min', 'S2_B2_med', 'S2_B3_med', 'S2_B4_med', 'S2_B8_med', 'S2_B11_med', 'S2_B2_mean', 'S2_B3_mean', 'S2_B4_mean', 'S2_B8_mean', 'S2_B11_mean', 'S2_B2_stdev', 'S2_B3_stdev', 'S2_B4_stdev', 'S2_B8_stdev', 'S2_B11_stdev', 'S2_bright_mean', 
'S2_green_mean', 'S2_wet_mean', 'S2_bright_med', 'S2_green_med', 'S2_wet_med', 'S1_VV_min', 'S1_VH_min', 'S1_VV_med', 'S1_VH_med', 'S1_VV_mean', 'S1_VH_mean', 'S1_ration_med', 'S1_ratio_mean', 'S1_ratio_min', 'Alos_dsm', 'Alos_slope', 'Type_is_flat', 'Type_is_l-slope', 'Type_is_others', 'Type_is_u-slope', 'Type_is_valley']],all_train['Class_ID']) mi_ ``` From the result, we must set the border value of accepted features. Eg: 0.6 as borderline (Because the problem is multiclass, it's difficult to get mutual information over 0.8) ``` data_col=[ 'L8_B2_min', 'L8_B3_min', 'L8_B4_min', 'L8_B5_min', 'L8_B6_min', 'L8_B2_med', 'L8_B3_med', 'L8_B4_med', 'L8_B5_med', 'L8_B6_med', 'L8_B2_mean', 'L8_B3_mean', 'L8_B4_mean', 'L8_B5_mean', 'L8_B6_mean', 'L8_B2_stdev', 'L8_B3_stdev', 'L8_B4_stdev', 'L8_B5_stdev', 'L8_B6_stdev', 'L8_bright_mean', 'L8_green_mean', 'L8_wet_mean', 'L8_bright_med', 'L8_green_med', 'L8_wet_med', 'S2_B2_min', 'S2_B3_min', 'S2_B4_min', 'S2_B8_min', 'S2_B11_min', 'S2_B2_med', 'S2_B3_med', 'S2_B4_med', 'S2_B8_med', 'S2_B11_med', 'S2_B2_mean', 'S2_B3_mean', 'S2_B4_mean', 'S2_B8_mean', 'S2_B11_mean', 'S2_B2_stdev', 'S2_B3_stdev', 'S2_B4_stdev', 'S2_B8_stdev', 'S2_B11_stdev', 'S2_bright_mean', 'S2_green_mean', 'S2_wet_mean', 'S2_bright_med', 'S2_green_med', 'S2_wet_med', 'S1_VV_min', 'S1_VH_min', 'S1_VV_med', 'S1_VH_med', 'S1_VV_mean', 'S1_VH_mean', 'S1_ration_med', 'S1_ratio_mean', 'S1_ratio_min', 'Alos_dsm', 'Alos_slope', 'Type_is_flat', 'Type_is_l-slope', 'Type_is_others', 'Type_is_u-slope', 'Type_is_valley'] col_mi=[data_col[i] for i in list(np.where(mi_>.6)[0])] print(len(col_mi)) print(col_mi) ``` ### Tuning standard parameter for Random Forest ``` from hyperopt import fmin, tpe, hp, STATUS_OK, Trials from sklearn.model_selection import cross_val_score space4rf_1 = { 'max_features': hp.uniform('max_features', 0.05,1), 'n_estimators': hp.choice('n_estimators', [20,50,100]), 'criterion': hp.choice('criterion', ["gini", "entropy"]), 'max_depth': hp.choice('max_depth', [5,10,15,20,None]), 'class_weight':hp.choice('class_weight',['balanced','balanced_subsample']), 'min_samples_split':hp.uniform('min_samples_split',0.00002,0.0005), 'min_samples_leaf':hp.uniform('min_samples_leaf',0.00002,0.0005), 'min_impurity_decrease':hp.uniform('min_impurity_decrease',0.0001,0.03)} columns=col_mi X=all_train[col_mi] y=all_train['Class_ID'] def hyperopt_train_test(params): X_ = X[:] st_kfold=StratifiedKFold(n_splits=10) clf = RandomForestClassifier(random_state=1234,n_jobs=-1,oob_score = True,bootstrap=True, **params) return cross_val_score(clf, X_, y,scoring='f1_weighted',cv=st_kfold).mean() best= 0 def f(params): global best acc = hyperopt_train_test(params) if acc > best: best = acc print('new best:', best, params) return {'loss': -acc, 'status': STATUS_OK} trials = Trials() best = fmin(f, space4rf_1, algo=tpe.suggest, max_evals=10, trials=trials) print('best:',best) best ``` Result of tuning parameter - class weight : balanced_subsample - criterion: entropy- - max_depth: 20 - max_features: 0.14983904451743044 - min_impurity_decrease: 0.002368351145317672, - min_samples_leaf: 0.0004554203759603965, - min_samples_split: 0.0003765323732039089, - n_estimators: 20 - oobs: True - n_jobs: -1 - bootstrap: True - random_state=1234 ## **EVALUASI** ``` bst_classifier=RandomForestClassifier(class_weight= 'balanced_subsample',criterion='entropy',max_depth=20, max_features= 0.14983904451743044,min_impurity_decrease=0.002368351145317672, 
min_samples_leaf=0.0004554203759603965,min_samples_split=0.0003765323732039089, n_estimators= 20,oob_score=True,n_jobs=-1,bootstrap=True,random_state=1234) all_train=all_train.reset_index() X_train, X_test, y_train, y_test = train_test_split(all_train[col_mi], all_train[['Class_ID']], stratify=all_train[['Class_ID','idkab']], test_size=0.2,random_state=1234,) bst_classifier.fit(X_train,y_train) bst_classifier.fit(X_train,y_train) prediction=bst_classifier.predict(X_test) y_test['Class_pred']=prediction confusion_matrix=y_test.reset_index().pivot_table(columns='Class_ID',index='Class_pred',values='index',aggfunc='count').fillna(0) confusion_matrix reported_1=y_test.merge(all_train[['idkab']],left_index=True,right_index=True) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 150) report_file='...'##report file for i in reported_1.idkab.unique(): temp_=reported_1.query('idkab==@i') print('-------------------------') print('CONFUSION MATRIX FOR:', i) print(temp_.reset_index().pivot_table(columns='Class_ID',index='Class_pred',values='index',aggfunc='count').fillna(0)) print('-------------------------', file=open(report_file, "a")) print('CONFUSION MATRIX FOR:', i,file=open(report_file, "a")) print(temp_.reset_index().pivot_table(columns='Class_ID',index='Class_pred',values='index',aggfunc='count').fillna(0),file=open(report_file, "a")) print('F1-SCORE FOR TEST [MACRO]: ',f1_score(y_test.Class_ID,prediction,average='macro')) print('F1-SCORE FOR TEST [MICRO]: ',f1_score(y_test.Class_ID,prediction,average='micro')) print('F1-SCORE FOR TEST [WEIGHTED]: ',f1_score(y_test.Class_ID,prediction,average='weighted')) print('ACCURACY SCORE FOR TEST: ',accuracy_score(y_test.Class_ID,prediction)) print(classification_report(y_test.Class_ID,prediction)) for i in reported_1.idkab.unique(): temp_=reported_1.query('idkab==@i') print('-------------------------') print('CLASSIFICATION REPORT FOR:', i) print(classification_report(temp_.Class_ID,temp_.Class_pred)) print('-------------------------', file=open(report_file, "a")) print('CLASSIFICATION REPORT FOR:', i,file=open(report_file, "a")) print(classification_report(temp_.Class_ID,temp_.Class_pred),file=open(report_file, "a")) importance = bst_classifier.feature_importances_ for i,v in enumerate(importance): print('Feature: %0d [%s], Score: %.5f' % (i,col_mi[i],v)) plt.bar([x for x in range(len(importance))], importance) plt.show() with open('...', 'wb') as f: ##pickle file pickle.dump(bst_classifier, f) ```
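For later reuse of the saved model, a minimal sketch is shown below. The file names are hypothetical placeholders (the actual pickle path is elided as `'...'` above), and the new data must contain the same `col_mi` feature columns selected during training.

```
import pickle
import pandas as pd

# Hypothetical file names; replace with the actual pickle path and input data
with open('rf_landcover_model.pkl', 'rb') as f:
    clf = pickle.load(f)

new_data = pd.read_csv('new_observations.csv', sep=';').dropna()
predictions = clf.predict(new_data[col_mi])  # col_mi: the selected feature columns from above
print(predictions[:10])
```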
# "The Crank-Nicolson method implemented from scratch in Python" > "In this article we implement the well-known finite difference method Crank-Nicolson in Python." - toc: true - branch: master - badges: true - comments: true - categories: [python, numpy, numerical analysis, partial differential equations] # The Crank-Nicolson Method The [Crank-Nicolson method](http://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson_method) is a well-known finite difference method for the numerical integration of the heat equation and closely related partial differential equations. We often resort to a Crank-Nicolson (CN) scheme when we integrate numerically reaction-diffusion systems in one space dimension $$\frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2} + f(u),$$ $$\frac{\partial u}{\partial x}\Bigg|_{x = 0, L} = 0,$$ where $u$ is our concentration variable, $x$ is the space variable, $D$ is the diffusion coefficient of $u$, $f$ is the reaction term, and $L$ is the length of our one-dimensional space domain. Note that we use [Neumann boundary conditions](http://en.wikipedia.org/wiki/Neumann_boundary_condition) and specify that the solution $u$ has zero space slope at the boundaries, effectively prohibiting entrance or exit of material at the boundaries (no-flux boundary conditions). ## Finite Difference Methods Many fantastic textbooks and tutorials have been written about finite difference methods, for instance a free textbook by [Lloyd Trefethen](http://people.maths.ox.ac.uk/trefethen/pdetext.html). Here we describe a few basic aspects of finite difference methods. The above reaction-diffusion equation describes the time evolution of variable $u(x,t)$ in one space dimension ($u$ is a line concentration). If we knew an analytic expression for $u(x,t)$ then we could plot $u$ in a two-dimensional coordinate system with axes $t$ and $x$. To approximate $u(x,t)$ numerically we discretize this two-dimensional coordinate system resulting, in the simplest case, in a two-dimensional [regular grid](http://en.wikipedia.org/wiki/Regular_grid). This picture is employed commonly when constructing finite differences methods, see for instance [Figure 3.2.1 of Trefethen](http://people.maths.ox.ac.uk/trefethen/3all.pdf). Let us discretize both time and space as follows: $$t_n = n \Delta t,~ n = 0, \ldots, N-1,$$ $$x_j = j \Delta x,~ j = 0, \ldots, J-1,$$ where $N$ and $J$ are the number of discrete time and space points in our grid respectively. $\Delta t$ and $\Delta x$ are the time step and space step respectively and defined as follows: $$\Delta t = T / N,$$ $$\Delta x = L / J,$$ where $T$ is the point in time up to which we will integrate $u$ numerically. Our ultimate goal is to construct a numerical method that allows us to approximate the unknonwn analytic solution $u(x,t)$ reasonably well in these discrete grid points. That is we want construct a method that computes values $U(j \Delta x, n \Delta t)$ (note: capital $U$) so that $$U(j \Delta x, n \Delta t) \approx u(j \Delta x, n \Delta t)$$ As a shorthand we will write $U_j^n = U(j \Delta x, n \Delta t)$ and $(j,n)$ to refer to grid point $(j \Delta x, n \Delta t)$. ## The Crank-Nicolson Stencil Based on the two-dimensional grid we construct we then approximate the operators of our reaction-diffusion system. 
For instance, to approximate the time derivative on the left-hand side in grid point $(j,n)$ we use the values of $U$ in two specific grid points: $$\frac{\partial u}{\partial t}\Bigg|_{x = j \Delta x, t = n \Delta t} \approx \frac{U_j^{n+1} - U_j^n}{\Delta t}.$$ We can think of this scheme as a stencil that we superimpose on our $(x,t)$-grid and this particular stencil is commonly referred to as [forward difference](http://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences). The spatial part of the [Crank-Nicolson stencil](http://journals.cambridge.org/abstract_S0305004100023197) (or see [Table 3.2.2 of Trefethen](http://people.maths.ox.ac.uk/trefethen/3all.pdf)) for the heat equation ($u_t = u_{xx}$) approximates the [Laplace operator](http://en.wikipedia.org/wiki/Laplace_operator) of our equation and takes the following form $$\frac{\partial^2 u}{\partial x^2}\Bigg|_{x = j \Delta x, t = n \Delta t} \approx \frac{1}{2 \Delta x^2} \left( U_{j+1}^n - 2 U_j^n + U_{j-1}^n + U_{j+1}^{n+1} - 2 U_j^{n+1} + U_{j-1}^{n+1}\right).$$ To approximate $f(u(j \Delta x, n \Delta t))$ we write simply $f(U_j^n)$. These approximations define the stencil for our numerical method as pictured on [Wikipedia](http://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson_method). ![SVG](https://dl.dropboxusercontent.com/u/129945779/georgio/CN-stencil.svg) Applying this stencil to grid point $(j,n)$ gives us the following approximation of our reaction-diffusion equation: $$\frac{U_j^{n+1} - U_j^n}{\Delta t} = \frac{D}{2 \Delta x^2} \left( U_{j+1}^n - 2 U_j^n + U_{j-1}^n + U_{j+1}^{n+1} - 2 U_j^{n+1} + U_{j-1}^{n+1}\right) + f(U_j^n).$$ ## Reordering Stencil into Linear System Let us define $\sigma = \frac{D \Delta t}{2 \Delta x^2}$ and reorder the above approximation of our reaction-diffusion equation: $$-\sigma U_{j-1}^{n+1} + (1+2\sigma) U_j^{n+1} -\sigma U_{j+1}^{n+1} = \sigma U_{j-1}^n + (1-2\sigma) U_j^n + \sigma U_{j+1}^n + \Delta t f(U_j^n).$$ This equation makes sense for space indices $j = 1,\ldots,J-2$ but it does not make sense for indices $j=0$ and $j=J-1$ (on the boundaries): $$j=0:~-\sigma U_{-1}^{n+1} + (1+2\sigma) U_0^{n+1} -\sigma U_{1}^{n+1} = \sigma U_{-1}^n + (1-2\sigma) U_0^n + \sigma U_{1}^n + \Delta t f(U_0^n),$$ $$j=J-1:~-\sigma U_{J-2}^{n+1} + (1+2\sigma) U_{J-1}^{n+1} -\sigma U_{J}^{n+1} = \sigma U_{J-2}^n + (1-2\sigma) U_{J-1}^n + \sigma U_{J}^n + \Delta t f(U_{J-1}^n).$$ The problem here is that the values $U_{-1}^n$ and $U_J^n$ lie outside our grid. However, we can work out what these values should equal by considering our Neumann boundary condition. Let us discretize our boundary condition at $j=0$ with the [backward difference](http://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences) and at $j=J-1$ with the [forward difference](http://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences): $$\frac{U_1^n - U_0^n}{\Delta x} = 0,$$ $$\frac{U_J^n - U_{J-1}^n}{\Delta x} = 0.$$ These two equations make it clear that we need to amend our above numerical approximation for $j=0$ with the identities $U_0^n = U_1^n$ and $U_0^{n+1} = U_1^{n+1}$, and for $j=J-1$ with the identities $U_{J-1}^n = U_J^n$ and $U_{J-1}^{n+1} = U_J^{n+1}$. 
Let us reinterpret our numerical approximation of the line concentration of $u$ in a fixed point in time as a vector $\mathbf{U}^n$: $$\mathbf{U}^n = \begin{bmatrix} U_0^n \\ \vdots \\ U_{J-1}^n \end{bmatrix}.$$ Using this notation we can now write our above approximation for a fixed point in time, $t = n \Delta t$, compactly as a linear system: $$ \begin{bmatrix} 1+\sigma & -\sigma & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0\\ -\sigma & 1+2\sigma & -\sigma & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & -\sigma & 1+2\sigma & -\sigma & \cdots & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \ddots & \ddots & \ddots & \ddots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\sigma & 1+2\sigma & -\sigma \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\sigma & 1+\sigma \end{bmatrix} \begin{bmatrix} U_0^{n+1} \\ U_1^{n+1} \\ U_2^{n+1} \\ \vdots \\ U_{J-2}^{n+1} \\ U_{J-1}^{n+1} \end{bmatrix} = \begin{bmatrix} 1-\sigma & \sigma & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0\\ \sigma & 1-2\sigma & \sigma & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & \sigma & 1-2\sigma & \sigma & \cdots & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \ddots & \ddots & \ddots & \ddots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sigma & 1-2\sigma & \sigma \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sigma & 1-\sigma \end{bmatrix} \begin{bmatrix} U_0^{n} \\ U_1^{n} \\ U_2^{n} \\ \vdots \\ U_{J-2}^{n} \\ U_{J-1}^{n} \end{bmatrix} + \begin{bmatrix} \Delta t f(U_0^n) \\ \Delta t f(U_1^n) \\ \Delta t f(U_2^n) \\ \vdots \\ \Delta t f(U_{J-2}^n) \\ \Delta t f(U_{J-1}^n) \end{bmatrix}. $$ Note that since our numerical integration starts with a well-defined initial condition at $n=0$, $\mathbf{U}^0$, the vector $\mathbf{U}^{n+1}$ on the left-hand side is the only unknown in this system of linear equations. Thus, to integrate numerically our reaction-diffusion system from time point $n$ to $n+1$ we need to solve numerically for vector $\mathbf{U}^{n+1}$. Let us call the matrix on the left-hand side $A$, the one on the right-hand side $B$, and the vector on the right-hand side $\mathbf{f}^n$. Using this notation we can write the above system as $$A \mathbf{U}^{n+1} = B \mathbf{U}^n + f^n.$$ In this linear equation, matrices $A$ and $B$ are defined by our problem: we need to specify these matrices once for our problem and incorporate our boundary conditions in them. Vector $\mathbf{f}^n$ is a function of $\mathbf{U}^n$ and so needs to be reevaluated in every time point $n$. We also need to carry out one matrix-vector multiplication every time point, $B \mathbf{U}^n$, and one vector-vector addition, $B \mathbf{U}^n + f^n$. The most expensive numerical operation is inversion of matrix $A$ to solve for $\mathbf{U}^{n+1}$, however we may get away with doing this only once and store the inverse of $A$ as $A^{-1}$: $$\mathbf{U}^{n+1} = A^{-1} \left( B \mathbf{U}^n + f^n \right).$$ ## A Crank-Nicolson Example in Python Let us apply the CN method to a two-variable reaction-diffusion system that was introduced by [Mori *et al.*](http://www.sciencedirect.com/science/article/pii/S0006349508704442): $$\frac{\partial u}{\partial t} = D_u \frac{\partial^2 u}{\partial x^2} + f(u,v),$$ $$\frac{\partial v}{\partial t} = D_v \frac{\partial^2 v}{\partial x^2} - f(u,v),$$ with Neumann boundary conditions $$\frac{\partial u}{\partial x}\Bigg|_{x=0,L} = 0,$$ $$\frac{\partial v}{\partial x}\Bigg|_{x=0,L} = 0.$$ The variables of this system, $u$ and $v$, represent the concetrations of the active form and its inactive form respectively. 
The reaction term $f(u,v)$ describes the interchange (activation and inactivation) between these two states of the protein. A particular property of this system is that the inactive form has much greater diffusivity than the active form, $D_v \gg D_u$.

Using the CN method to integrate this system numerically, we need to set up two separate approximations

$$A_u \mathbf{U}^{n+1} = B_u \mathbf{U}^n + \mathbf{f}^n,$$

$$A_v \mathbf{V}^{n+1} = B_v \mathbf{V}^n - \mathbf{f}^n,$$

with two different $\sigma$ terms, $\sigma_u = \frac{D_u \Delta t}{2 \Delta x^2}$ and $\sigma_v = \frac{D_v \Delta t}{2 \Delta x^2}$.

### Import Packages

For the matrix-vector multiplication, vector-vector addition, and matrix inversion that we will need to carry out we will use the Python library [NumPy](http://www.numpy.org/). To visualize our numerical solutions, we will use [pyplot](http://matplotlib.org/api/pyplot_api.html).

```
import numpy
from matplotlib import pyplot
```

Numpy allows us to truncate the numerical values of matrices and vectors to improve their display with [`set_printoptions`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html).

```
numpy.set_printoptions(precision=3)
```

### Specify Grid

Our one-dimensional domain has unit length and we define `J = 100` equally spaced grid points in this domain. This divides our domain into `J-1` subintervals, each of length `dx`.

```
L = 1.
J = 100
dx = float(L)/float(J-1)
x_grid = numpy.array([j*dx for j in range(J)])
```

Equally, we define `N = 1000` equally spaced grid points on our time domain of length `T = 200` thus dividing our time domain into `N-1` intervals of length `dt`.

```
T = 200
N = 1000
dt = float(T)/float(N-1)
t_grid = numpy.array([n*dt for n in range(N)])
```

### Specify System Parameters and the Reaction Term

We choose our parameter values based on the work by [Mori *et al.*](http://www.sciencedirect.com/science/article/pii/S0006349508704442).

```
D_v = float(10.)/float(100.)
D_u = 0.01 * D_v

k0 = 0.067
f = lambda u, v: dt*(v*(k0 + float(u*u)/float(1. + u*u)) - u)
g = lambda u, v: -f(u,v)

sigma_u = float(D_u*dt)/float((2.*dx*dx))
sigma_v = float(D_v*dt)/float((2.*dx*dx))

total_protein = 2.26
```

### Specify the Initial Condition

As discussed by [Mori *et al.*](http://www.sciencedirect.com/science/article/pii/S0006349508704442), we can expect to observe interesting behaviour in the steady state of this system if we choose a heterogeneous initial condition for $u$.

Here, we initialize $u$ with a step-like heterogeneity:

```
no_high = 10
U = numpy.array([0.1 for i in range(no_high,J)] + [2. for i in range(0,no_high)])
# Distribute the remaining protein evenly across V so that the total equals total_protein
V = numpy.array([float(total_protein-dx*sum(U))/float(J*dx) for i in range(0,J)])
```

Note that we make certain that total protein amounts equal a certain value, `total_protein`. The importance of this was discussed by [Walther *et al.*](http://link.springer.com/article/10.1007%2Fs11538-012-9766-5).

Let us plot our initial condition for confirmation:

```
pyplot.ylim((0., 2.1))
pyplot.xlabel('x'); pyplot.ylabel('concentration')
pyplot.plot(x_grid, U)
pyplot.plot(x_grid, V)
pyplot.show()
```

The blue curve is the initial condition for $U$, stored in Python variable `U`, and the green curve is the initial condition for $V$ stored in `V`.

### Create Matrices

The matrices that we need to construct are all tridiagonal so they are easy to construct with [`numpy.diagflat`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.diagflat.html).
```
A_u = numpy.diagflat([-sigma_u for i in range(J-1)], -1) +\
      numpy.diagflat([1.+sigma_u]+[1.+2.*sigma_u for i in range(J-2)]+[1.+sigma_u]) +\
      numpy.diagflat([-sigma_u for i in range(J-1)], 1)

B_u = numpy.diagflat([sigma_u for i in range(J-1)], -1) +\
      numpy.diagflat([1.-sigma_u]+[1.-2.*sigma_u for i in range(J-2)]+[1.-sigma_u]) +\
      numpy.diagflat([sigma_u for i in range(J-1)], 1)

A_v = numpy.diagflat([-sigma_v for i in range(J-1)], -1) +\
      numpy.diagflat([1.+sigma_v]+[1.+2.*sigma_v for i in range(J-2)]+[1.+sigma_v]) +\
      numpy.diagflat([-sigma_v for i in range(J-1)], 1)

B_v = numpy.diagflat([sigma_v for i in range(J-1)], -1) +\
      numpy.diagflat([1.-sigma_v]+[1.-2.*sigma_v for i in range(J-2)]+[1.-sigma_v]) +\
      numpy.diagflat([sigma_v for i in range(J-1)], 1)
```

To confirm, this is what `A_u` looks like:

```
print(A_u)
```

### Solve the System Iteratively

To advance our system by one time step, we need to do one matrix-vector multiplication followed by one vector-vector addition on the right hand side.

To facilitate this, we rewrite our reaction term so that it accepts concentration vectors $\mathbf{U}^n$ and $\mathbf{V}^n$ as arguments and returns vector $\mathbf{f}^n$. As a reminder, this is our non-vectorial definition of $f$

    f = lambda u, v: dt*(v*(k0 + float(u*u)/float(1. + u*u)) - u)

```
f_vec = lambda U, V: numpy.multiply(dt, numpy.subtract(numpy.multiply(V,
                     numpy.add(k0, numpy.divide(numpy.multiply(U,U), numpy.add(1., numpy.multiply(U,U))))), U))
```

Let us make certain that this produces the same values as our non-vectorial `f`:

```
print(f(U[0], V[0]))
print(f(U[-1], V[-1]))

print(f_vec(U, V))
```

Accounting for rounding of the displayed values due to the `set_printoptions` we set above, we can see that `f` and `f_vec` generate the same values for our initial condition at both ends of our domain.

We will use [`numpy.linalg.solve`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html) to solve our linear system each time step.

While we integrate our system over time we will record both `U` and `V` at each time step in `U_record` and `V_record` respectively so that we can plot our numerical solutions over time.

```
U_record = []
V_record = []

U_record.append(U)
V_record.append(V)

for ti in range(1,N):
    U_new = numpy.linalg.solve(A_u, B_u.dot(U) + f_vec(U,V))
    V_new = numpy.linalg.solve(A_v, B_v.dot(V) - f_vec(U,V))

    U = U_new
    V = V_new

    U_record.append(U)
    V_record.append(V)
```

### Plot the Numerical Solution

Let us take a look at the numerical solution we attain after `N` time steps.

```
pyplot.ylim((0., 2.1))
pyplot.xlabel('x'); pyplot.ylabel('concentration')
pyplot.plot(x_grid, U)
pyplot.plot(x_grid, V)
pyplot.show()
```

And here is a [kymograph](http://en.wikipedia.org/wiki/Kymograph) of the values of `U`. This plot shows concisely the behaviour of `U` over time and we can clearly observe the wave-pinning behaviour described by [Mori *et al.*](http://www.sciencedirect.com/science/article/pii/S0006349508704442). Furthermore, we observe that this wave pattern is stable for about 50 units of time and we therefore conclude that this wave pattern is a stable steady state of our system.

```
U_record = numpy.array(U_record)
V_record = numpy.array(V_record)

fig, ax = pyplot.subplots()
pyplot.xlabel('x'); pyplot.ylabel('t')
heatmap = ax.pcolor(x_grid, t_grid, U_record, vmin=0., vmax=1.2)
```
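One additional diagnostic worth running (a sketch, not from the original post): because $f$ enters the two equations with opposite signs and the boundaries are no-flux, the total amount of protein $\int (u + v)\,dx$ should stay close to `total_protein` throughout the integration.

```
# Approximate conservation check: dx * sum(U + V) should remain near total_protein
total_over_time = dx * (U_record + V_record).sum(axis=1)
print(total_over_time[0], total_over_time[-1])
print(numpy.abs(total_over_time - total_protein).max())
```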
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# NumPy

NumPy (or Numpy) is a Linear Algebra Library for Python. The reason it is so important for Data Science with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks.

Numpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use arrays instead of lists, check out this great [StackOverflow post](http://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists).

We will only learn the basics of NumPy; to get started we need to install it!

## Installation Instructions

**It is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:**

    conda install numpy

**If you do not have Anaconda and cannot install it, please refer to [Numpy's official documentation on various installation instructions.](http://docs.scipy.org/doc/numpy-1.10.1/user/install.html)**

## Using NumPy

Once you've installed NumPy you can import it as a library:

```
import numpy as np
```

Numpy has many built-in functions and capabilities. We won't cover them all; instead we will focus on some of the most important aspects of Numpy: vectors, arrays, matrices, and number generation. Let's start by discussing arrays.

# Numpy Arrays

NumPy arrays are the main way we will use Numpy throughout the course. Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).

Let's begin our introduction by exploring how to create NumPy arrays.

## Creating NumPy Arrays

### From a Python List

We can create an array by directly converting a list or list of lists:

```
my_list = [1,2,3]
my_list

np.array(my_list)

my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix

np.array(my_matrix)
```

## Built-in Methods

There are lots of built-in ways to generate arrays.

### arange

Return evenly spaced values within a given interval.

```
np.arange(0,10)

np.arange(0,11,2)
```

### zeros and ones

Generate arrays of zeros or ones.

```
np.zeros(3)

np.zeros((5,5))

np.ones(3)

np.ones((3,3))
```

### linspace

Return evenly spaced numbers over a specified interval.

```
np.linspace(0,10,3)

np.linspace(0,10,50)
```

### eye

Creates an identity matrix.

```
np.eye(4)
```

## Random

Numpy also has lots of ways to create random number arrays:

### rand

Create an array of the given shape and populate it with random samples from a uniform distribution over ``[0, 1)``.

```
np.random.rand(2)

np.random.rand(5,5)
```

### randn

Return a sample (or samples) from the "standard normal" distribution, unlike rand which is uniform:

```
np.random.randn(2)

np.random.randn(5,5)
```

### randint

Return random integers from `low` (inclusive) to `high` (exclusive).

```
np.random.randint(1,100)

np.random.randint(1,100,10)
```

## Array Attributes and Methods

Let's discuss some useful attributes and methods of an array:

```
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)

arr

ranarr
```

## Reshape

Returns an array containing the same data with a new shape.

```
arr.reshape(5,5)
```

### max, min, argmax, argmin

These are useful methods for finding max or min values.
Or to find their index locations using argmin or argmax ``` ranarr ranarr.max() ranarr.argmax() ranarr.min() ranarr.argmin() ``` ## Shape Shape is an attribute that arrays have (not a method): ``` # Vector arr.shape # Notice the two sets of brackets arr.reshape(1,25) arr.reshape(1,25).shape arr.reshape(25,1) arr.reshape(25,1).shape ``` ### dtype You can also grab the data type of the object in the array: ``` arr.dtype ``` # Great Job!
``` %matplotlib inline ``` Sparse Regression ================= We demonstrate how to do (fully Bayesian) sparse linear regression using the approach described in [1]. This approach is particularly suitable for situations with many feature dimensions (large P) but not too many datapoints (small N). In particular we consider a quadratic regressor of the form: \begin{align}f(X) = \text{constant} + \sum_i \theta_i X_i + \sum_{i<j} \theta_{ij} X_i X_j + \text{observation noise}\end{align} **References:** 1. Raj Agrawal, Jonathan H. Huggins, Brian Trippe, Tamara Broderick (2019), "The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions", (https://arxiv.org/abs/1905.06501) ``` import argparse import itertools import os import time import numpy as np import jax from jax import vmap import jax.numpy as jnp import jax.random as random from jax.scipy.linalg import cho_factor, cho_solve, solve_triangular import numpyro import numpyro.distributions as dist from numpyro.infer import MCMC, NUTS def dot(X, Z): return jnp.dot(X, Z[..., None])[..., 0] # The kernel that corresponds to our quadratic regressor. def kernel(X, Z, eta1, eta2, c, jitter=1.0e-4): eta1sq = jnp.square(eta1) eta2sq = jnp.square(eta2) k1 = 0.5 * eta2sq * jnp.square(1.0 + dot(X, Z)) k2 = -0.5 * eta2sq * dot(jnp.square(X), jnp.square(Z)) k3 = (eta1sq - eta2sq) * dot(X, Z) k4 = jnp.square(c) - 0.5 * eta2sq if X.shape == Z.shape: k4 += jitter * jnp.eye(X.shape[0]) return k1 + k2 + k3 + k4 # Most of the model code is concerned with constructing the sparsity inducing prior. def model(X, Y, hypers): S, P, N = hypers['expected_sparsity'], X.shape[1], X.shape[0] sigma = numpyro.sample("sigma", dist.HalfNormal(hypers['alpha3'])) phi = sigma * (S / jnp.sqrt(N)) / (P - S) eta1 = numpyro.sample("eta1", dist.HalfCauchy(phi)) msq = numpyro.sample("msq", dist.InverseGamma(hypers['alpha1'], hypers['beta1'])) xisq = numpyro.sample("xisq", dist.InverseGamma(hypers['alpha2'], hypers['beta2'])) eta2 = jnp.square(eta1) * jnp.sqrt(xisq) / msq lam = numpyro.sample("lambda", dist.HalfCauchy(jnp.ones(P))) kappa = jnp.sqrt(msq) * lam / jnp.sqrt(msq + jnp.square(eta1 * lam)) # compute kernel kX = kappa * X k = kernel(kX, kX, eta1, eta2, hypers['c']) + sigma ** 2 * jnp.eye(N) assert k.shape == (N, N) # sample Y according to the standard gaussian process formula numpyro.sample("Y", dist.MultivariateNormal(loc=jnp.zeros(X.shape[0]), covariance_matrix=k), obs=Y) # Compute the mean and variance of coefficient theta_i (where i = dimension) for a # MCMC sample of the kernel hyperparameters (eta1, xisq, ...). # Compare to theorem 5.1 in reference [1]. 
def compute_singleton_mean_variance(X, Y, dimension, msq, lam, eta1, xisq, c, sigma): P, N = X.shape[1], X.shape[0] probe = jnp.zeros((2, P)) probe = jax.ops.index_update(probe, jax.ops.index[:, dimension], jnp.array([1.0, -1.0])) eta2 = jnp.square(eta1) * jnp.sqrt(xisq) / msq kappa = jnp.sqrt(msq) * lam / jnp.sqrt(msq + jnp.square(eta1 * lam)) kX = kappa * X kprobe = kappa * probe k_xx = kernel(kX, kX, eta1, eta2, c) + sigma ** 2 * jnp.eye(N) k_xx_inv = jnp.linalg.inv(k_xx) k_probeX = kernel(kprobe, kX, eta1, eta2, c) k_prbprb = kernel(kprobe, kprobe, eta1, eta2, c) vec = jnp.array([0.50, -0.50]) mu = jnp.matmul(k_probeX, jnp.matmul(k_xx_inv, Y)) mu = jnp.dot(mu, vec) var = k_prbprb - jnp.matmul(k_probeX, jnp.matmul(k_xx_inv, jnp.transpose(k_probeX))) var = jnp.matmul(var, vec) var = jnp.dot(var, vec) return mu, var # Compute the mean and variance of coefficient theta_ij for a MCMC sample of the # kernel hyperparameters (eta1, xisq, ...). Compare to theorem 5.1 in reference [1]. def compute_pairwise_mean_variance(X, Y, dim1, dim2, msq, lam, eta1, xisq, c, sigma): P, N = X.shape[1], X.shape[0] probe = jnp.zeros((4, P)) probe = jax.ops.index_update(probe, jax.ops.index[:, dim1], jnp.array([1.0, 1.0, -1.0, -1.0])) probe = jax.ops.index_update(probe, jax.ops.index[:, dim2], jnp.array([1.0, -1.0, 1.0, -1.0])) eta2 = jnp.square(eta1) * jnp.sqrt(xisq) / msq kappa = jnp.sqrt(msq) * lam / jnp.sqrt(msq + jnp.square(eta1 * lam)) kX = kappa * X kprobe = kappa * probe k_xx = kernel(kX, kX, eta1, eta2, c) + sigma ** 2 * jnp.eye(N) k_xx_inv = jnp.linalg.inv(k_xx) k_probeX = kernel(kprobe, kX, eta1, eta2, c) k_prbprb = kernel(kprobe, kprobe, eta1, eta2, c) vec = jnp.array([0.25, -0.25, -0.25, 0.25]) mu = jnp.matmul(k_probeX, jnp.matmul(k_xx_inv, Y)) mu = jnp.dot(mu, vec) var = k_prbprb - jnp.matmul(k_probeX, jnp.matmul(k_xx_inv, jnp.transpose(k_probeX))) var = jnp.matmul(var, vec) var = jnp.dot(var, vec) return mu, var # Sample coefficients theta from the posterior for a given MCMC sample. # The first P returned values are {theta_1, theta_2, ...., theta_P}, while # the remaining values are {theta_ij} for i,j in the list `active_dims`, # sorted so that i < j. 
def sample_theta_space(X, Y, active_dims, msq, lam, eta1, xisq, c, sigma): P, N, M = X.shape[1], X.shape[0], len(active_dims) # the total number of coefficients we return num_coefficients = P + M * (M - 1) // 2 probe = jnp.zeros((2 * P + 2 * M * (M - 1), P)) vec = jnp.zeros((num_coefficients, 2 * P + 2 * M * (M - 1))) start1 = 0 start2 = 0 for dim in range(P): probe = jax.ops.index_update(probe, jax.ops.index[start1:start1 + 2, dim], jnp.array([1.0, -1.0])) vec = jax.ops.index_update(vec, jax.ops.index[start2, start1:start1 + 2], jnp.array([0.5, -0.5])) start1 += 2 start2 += 1 for dim1 in active_dims: for dim2 in active_dims: if dim1 >= dim2: continue probe = jax.ops.index_update(probe, jax.ops.index[start1:start1 + 4, dim1], jnp.array([1.0, 1.0, -1.0, -1.0])) probe = jax.ops.index_update(probe, jax.ops.index[start1:start1 + 4, dim2], jnp.array([1.0, -1.0, 1.0, -1.0])) vec = jax.ops.index_update(vec, jax.ops.index[start2, start1:start1 + 4], jnp.array([0.25, -0.25, -0.25, 0.25])) start1 += 4 start2 += 1 eta2 = jnp.square(eta1) * jnp.sqrt(xisq) / msq kappa = jnp.sqrt(msq) * lam / jnp.sqrt(msq + jnp.square(eta1 * lam)) kX = kappa * X kprobe = kappa * probe k_xx = kernel(kX, kX, eta1, eta2, c) + sigma ** 2 * jnp.eye(N) L = cho_factor(k_xx, lower=True)[0] k_probeX = kernel(kprobe, kX, eta1, eta2, c) k_prbprb = kernel(kprobe, kprobe, eta1, eta2, c) mu = jnp.matmul(k_probeX, cho_solve((L, True), Y)) mu = jnp.sum(mu * vec, axis=-1) Linv_k_probeX = solve_triangular(L, jnp.transpose(k_probeX), lower=True) covar = k_prbprb - jnp.matmul(jnp.transpose(Linv_k_probeX), Linv_k_probeX) covar = jnp.matmul(vec, jnp.matmul(covar, jnp.transpose(vec))) # sample from N(mu, covar) L = jnp.linalg.cholesky(covar) sample = mu + jnp.matmul(L, np.random.randn(num_coefficients)) return sample # Helper function for doing HMC inference def run_inference(model, args, rng_key, X, Y, hypers): start = time.time() kernel = NUTS(model) mcmc = MCMC(kernel, args.num_warmup, args.num_samples, num_chains=args.num_chains, progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True) mcmc.run(rng_key, X, Y, hypers) mcmc.print_summary() print('\nMCMC elapsed time:', time.time() - start) return mcmc.get_samples() # Get the mean and variance of a gaussian mixture def gaussian_mixture_stats(mus, variances): mean_mu = jnp.mean(mus) mean_var = jnp.mean(variances) + jnp.mean(jnp.square(mus)) - jnp.square(mean_mu) return mean_mu, mean_var # Create artificial regression dataset where only S out of P feature # dimensions contain signal and where there is a single pairwise interaction # between the first and second dimensions. 
def get_data(N=20, S=2, P=10, sigma_obs=0.05): assert S < P and P > 1 and S > 0 np.random.seed(0) X = np.random.randn(N, P) # generate S coefficients with non-negligible magnitude W = 0.5 + 2.5 * np.random.rand(S) # generate data using the S coefficients and a single pairwise interaction Y = np.sum(X[:, 0:S] * W, axis=-1) + X[:, 0] * X[:, 1] + sigma_obs * np.random.randn(N) Y -= jnp.mean(Y) Y_std = jnp.std(Y) assert X.shape == (N, P) assert Y.shape == (N,) return X, Y / Y_std, W / Y_std, 1.0 / Y_std # Helper function for analyzing the posterior statistics for coefficient theta_i def analyze_dimension(samples, X, Y, dimension, hypers): vmap_args = (samples['msq'], samples['lambda'], samples['eta1'], samples['xisq'], samples['sigma']) mus, variances = vmap(lambda msq, lam, eta1, xisq, sigma: compute_singleton_mean_variance(X, Y, dimension, msq, lam, eta1, xisq, hypers['c'], sigma))(*vmap_args) mean, variance = gaussian_mixture_stats(mus, variances) std = jnp.sqrt(variance) return mean, std # Helper function for analyzing the posterior statistics for coefficient theta_ij def analyze_pair_of_dimensions(samples, X, Y, dim1, dim2, hypers): vmap_args = (samples['msq'], samples['lambda'], samples['eta1'], samples['xisq'], samples['sigma']) mus, variances = vmap(lambda msq, lam, eta1, xisq, sigma: compute_pairwise_mean_variance(X, Y, dim1, dim2, msq, lam, eta1, xisq, hypers['c'], sigma))(*vmap_args) mean, variance = gaussian_mixture_stats(mus, variances) std = jnp.sqrt(variance) return mean, std def main(args): X, Y, expected_thetas, expected_pairwise = get_data(N=args.num_data, P=args.num_dimensions, S=args.active_dimensions) # setup hyperparameters hypers = {'expected_sparsity': max(1.0, args.num_dimensions / 10), 'alpha1': 3.0, 'beta1': 1.0, 'alpha2': 3.0, 'beta2': 1.0, 'alpha3': 1.0, 'c': 1.0} # do inference rng_key = random.PRNGKey(0) samples = run_inference(model, args, rng_key, X, Y, hypers) # compute the mean and square root variance of each coefficient theta_i means, stds = vmap(lambda dim: analyze_dimension(samples, X, Y, dim, hypers))(jnp.arange(args.num_dimensions)) print("Coefficients theta_1 to theta_%d used to generate the data:" % args.active_dimensions, expected_thetas) print("The single quadratic coefficient theta_{1,2} used to generate the data:", expected_pairwise) active_dimensions = [] for dim, (mean, std) in enumerate(zip(means, stds)): # we mark the dimension as inactive if the interval [mean - 3 * std, mean + 3 * std] contains zero lower, upper = mean - 3.0 * std, mean + 3.0 * std inactive = "inactive" if lower < 0.0 and upper > 0.0 else "active" if inactive == "active": active_dimensions.append(dim) print("[dimension %02d/%02d] %s:\t%.2e +- %.2e" % (dim + 1, args.num_dimensions, inactive, mean, std)) print("Identified a total of %d active dimensions; expected %d." % (len(active_dimensions), args.active_dimensions)) # Compute the mean and square root variance of coefficients theta_ij for i,j active dimensions. # Note that the resulting numbers are only meaningful for i != j. 
if len(active_dimensions) > 0: dim_pairs = jnp.array(list(itertools.product(active_dimensions, active_dimensions))) means, stds = vmap(lambda dim_pair: analyze_pair_of_dimensions(samples, X, Y, dim_pair[0], dim_pair[1], hypers))(dim_pairs) for dim_pair, mean, std in zip(dim_pairs, means, stds): dim1, dim2 = dim_pair if dim1 >= dim2: continue lower, upper = mean - 3.0 * std, mean + 3.0 * std if not (lower < 0.0 and upper > 0.0): format_str = "Identified pairwise interaction between dimensions %d and %d: %.2e +- %.2e" print(format_str % (dim1 + 1, dim2 + 1, mean, std)) # Draw a single sample of coefficients theta from the posterior, where we return all singleton # coefficients theta_i and pairwise coefficients theta_ij for i, j active dimensions. We use the # final MCMC sample obtained from the HMC sampler. thetas = sample_theta_space(X, Y, active_dimensions, samples['msq'][-1], samples['lambda'][-1], samples['eta1'][-1], samples['xisq'][-1], hypers['c'], samples['sigma'][-1]) print("Single posterior sample theta:\n", thetas) if __name__ == "__main__": assert numpyro.__version__.startswith('0.4.0') parser = argparse.ArgumentParser(description="Gaussian Process example") parser.add_argument("-n", "--num-samples", nargs="?", default=1000, type=int) parser.add_argument("--num-warmup", nargs='?', default=500, type=int) parser.add_argument("--num-chains", nargs='?', default=1, type=int) parser.add_argument("--num-data", nargs='?', default=100, type=int) parser.add_argument("--num-dimensions", nargs='?', default=20, type=int) parser.add_argument("--active-dimensions", nargs='?', default=3, type=int) parser.add_argument("--device", default='cpu', type=str, help='use "cpu" or "gpu".') args = parser.parse_args() numpyro.set_platform(args.device) numpyro.set_host_device_count(args.num_chains) main(args) ```
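Note that the cell above is written as a command-line script: it parses `sys.argv` with argparse and asserts a specific NumPyro version, which will not work cleanly inside a notebook kernel. One option for running it interactively instead, sketched here and not part of the original example, is to build the `args` namespace by hand and call `main` directly, reusing the functions defined in the cell above:

```
import argparse

import numpyro

# Build the arguments by hand instead of parsing sys.argv.
# The values mirror the defaults of the argparse flags defined above.
args = argparse.Namespace(
    num_samples=1000,
    num_warmup=500,
    num_chains=1,
    num_data=100,
    num_dimensions=20,
    active_dimensions=3,
    device="cpu",
)

numpyro.set_platform(args.device)
numpyro.set_host_device_count(args.num_chains)

# Uses model(), run_inference(), get_data(), etc. from the cell above.
main(args)
```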
# Changes In The Daily Growth Rate > Changes in the daily growth rate for select countries. - comments: true - author: Thomas Wiecki - categories: [growth] - image: images/covid-growth.png - permalink: /growth-analysis/ ``` #hide from pathlib import Path loadpy = Path('load_covid_data.py') if not loadpy.exists(): ! wget https://raw.githubusercontent.com/github/covid19-dashboard/master/_notebooks/load_covid_data.py #hide %matplotlib inline import numpy as np import matplotlib.pyplot as plt import matplotlib import pandas as pd import seaborn as sns import load_covid_data sns.set_context('talk') plt.style.use('seaborn-whitegrid') #hide df = load_covid_data.load_data(drop_states=True) annotate_kwargs = dict( s='Based on COVID Data Repository by Johns Hopkins CSSE ({})\nBy Thomas Wiecki'.format(df.index.max().strftime('%B %d, %Y')), xy=(0.05, 0.01), xycoords='figure fraction', fontsize=10) #hide # Country names seem to change quite a bit df.country.unique() #hide european_countries = ['Italy', 'Germany', 'France (total)', 'Spain', 'United Kingdom (total)', 'Iran'] large_engl_countries = ['US', 'Canada (total)', 'Australia (total)'] asian_countries = ['Singapore', 'Japan', 'Korea, South', 'Hong Kong'] south_american_countries = ['Argentina', 'Brazil', 'Colombia', 'Chile'] country_groups = [european_countries, large_engl_countries, asian_countries, south_american_countries] line_styles = ['-', ':', '--', '-.'] #hide def plot_countries(df, countries, min_confirmed=100, ls='-', col='confirmed'): for country in countries: df_country = df.loc[(df.country == country) & (df.confirmed >= min_confirmed)] if len(df_country) == 0: continue df_country.reset_index()[col].plot(label=country, ls=ls) sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max fig, ax = plt.subplots(figsize=(12, 8)) for countries, ls in zip(country_groups, line_styles): plot_countries(df, countries, ls=ls) x = np.linspace(0, plt.xlim()[1] - 1) ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth') ax.set(yscale='log', title='Exponential growth of COVID-19 across countries', xlabel='Days from first 100 confirmed cases', ylabel='Confirmed cases (log scale)') ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) ax.legend(bbox_to_anchor=(1.0, 1.0)) ax.annotate(**annotate_kwargs) sns.despine(); #hide fig, ax = plt.subplots(figsize=(12, 8)) for countries, ls in zip(country_groups, line_styles): plot_countries(df, countries, ls=ls) x = np.linspace(0, plt.xlim()[1] - 1) ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth') ax.set(title='Exponential growth of COVID-19 across countries', xlabel='Days from first 100 confirmed cases', ylabel='Confirmed cases', ylim=(0, 30000)) ax.legend(bbox_to_anchor=(1.0, 1.0)) ax.annotate(**annotate_kwargs) sns.despine(); #hide_input plt.rcParams['axes.titlesize'] = 24 smooth_days = 4 fig, ax = plt.subplots(figsize=(14, 8)) df['pct_change'] = (df .groupby('country') .confirmed .pct_change() .rolling(smooth_days) .mean() ) for countries, ls in zip(country_groups, line_styles): (df.set_index('country') .loc[countries] .loc[lambda x: x.confirmed > 100] .reset_index() .set_index('days_since_100') .groupby('country', sort=False)['pct_change'] .plot(ls=ls) ) ax.set(ylim=(0, 1), xlim=(0, 20), title='Are we seeing changes in daily growth rate?', xlabel='Days from first 100 confirmed cases', ylabel='Daily percent change (smoothed over {} days)'.format(smooth_days), ) ax.axhline(.33, ls='--', color='k') 
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) ax.legend(bbox_to_anchor=(1.0, .155)) sns.despine() ax.annotate(**annotate_kwargs); # This creates a preview image for the blog post and home page fig.savefig('../images/covid-growth.png') ``` ## Appendix: German ICU Capacity ``` #hide_input sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max fig, ax = plt.subplots(figsize=(12, 8)) p_crit = .05 # 28000 ICU beds total, 80% occupied icu_germany = 28000 icu_germany_free = .2 df_tmp = df.loc[lambda x: (x.country == 'Germany') & (x.confirmed > 100)].critical_estimate df_tmp.plot(ax=ax) x = np.linspace(0, 30, 30) pd.Series(index=pd.date_range(df_tmp.index[0], periods=30), data=100*p_crit * (1.33) ** x).plot(ax=ax,ls='--', color='k', label='33% daily growth') ax.axhline(icu_germany, color='.3', ls='-.', label='Total ICU beds') ax.axhline(icu_germany * icu_germany_free, color='.5', ls=':', label='Free ICU beds') ax.set(yscale='log', title='When will Germany run out of ICU beds?', ylabel='Expected critical cases (assuming {:.0f}% critical)'.format(100 * p_crit), ) ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) ax.legend(bbox_to_anchor=(1.0, 1.0)) sns.despine() ax.annotate(**annotate_kwargs); ``` Updated daily by [GitHub Actions](https://github.com/features/actions). This visualization was made by [Thomas Wiecki](https://twitter.com/twiecki)[^1]. [^1]: Data sourced from ["2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE"](https://systems.jhu.edu/research/public-health/ncov/) [GitHub repository](https://github.com/CSSEGISandData/COVID-19) and recreates the (pay-walled) plot in the [Financial Times]( https://www.ft.com/content/a26fbf7e-48f8-11ea-aeb3-955839e06441). This code is provided under the [BSD-3 License](https://github.com/twiecki/covid19/blob/master/LICENSE). Link to [original notebook](https://github.com/twiecki/covid19/blob/master/covid19_growth.ipynb).
``` import pandas as pd import numpy as np import csv import matplotlib.pyplot as plt from matplotlib import style import seaborn as sns import statsmodels.api as sm import datetime from sklearn import preprocessing, svm from sklearn.linear_model import LinearRegression import warnings import itertools plt.style.use('fivethirtyeight') style.use('ggplot') data = pd.read_csv('EIA Data Sets/Data/net_generation.csv') data.head() data['Date'] = pd.to_datetime(data['Date']) df = data.set_index('Date') df price = pd.read_csv('EIA Data Sets/Data/prices.csv') price['Date'] = pd.to_datetime(price['Date']) price_df = price.set_index('Date') price_df price_gen = pd.merge(price_df,df, on='Date') price_gen = price_gen.loc['2010-01-01':'2019-10-31'] price_gen.tail() list(price_gen) price = price_gen['Retail Price of Electricity for All Sectors, U.S. Total, Monthly (cents per kilowatt hour)'] bio = price_gen['Electric power sector net generation from biomass, United States, Monthly (billion kilowatthours)'] geo = price_gen['Electric power sector net generation from geothermal, United States, Monthly (billion kilowatthours)'] hydro = price_gen['Electric power sector net generation from pumped storage hydropower, United States, Monthly (billion kilowatthours)'] conv_hydro = price_gen['Electric power sector net generation from conventional hydropower, United States, Monthly (billion kilowatthours)'] nat_gas = price_gen['Electric power sector net generation from natural gas, United States, Monthly (billion kilowatthours)'] nuc = price_gen['Electric power sector net generation from nuclear, United States, Monthly (billion kilowatthours)'] nonrenew = price_gen['Electric power sector net generation from other nonrenewable fuels, United States, Monthly (billion kilowatthours)'] coal = price_gen['Electric power sector net generation from coal, United States, Monthly (billion kilowatthours)'] petro = price_gen['Electric power sector net generation from petroleum, United States, Monthly (billion kilowatthours)'] renew_sum = price_gen['Electric power sector net generation from renewable energy (all types), United States, Monthly (billion kilowatthours)'] total = price_gen['Total electric power sector net generation by all energy sources, United States, Monthly (billion kilowatthours)'] solar = price_gen['Electric power sector net generation from utility-scale solar, United States, Monthly (billion kilowatthours)'] wind = price_gen['Electric power sector net generation from wind, United States, Monthly (billion kilowatthours)'] # All elements as perecent of total bio_pct = bio/total geo_pct = geo/total hydro_pct = hydro/total conv_hydro_pct = conv_hydro/total nat_gas_pct = nat_gas/total nuc_pct = nuc/total nonrenew_pct = nonrenew/total coal_pct = coal/total petro_pct = petro/total solar_pct = solar/total wind_pct = wind/total renew_pct = renew_sum/total # All elements as percent of renewable total bio_pct_green = bio/renew_sum geo_pct_green = geo/renew_sum hydro_pct_green = hydro/renew_sum conv_hydro_pct_green = conv_hydro/renew_sum nat_gas_pct_green = nat_gas/renew_sum nuc_pct_green = nuc/renew_sum nonrenew_pct_green = nonrenew/renew_sum coal_pct_green = coal/renew_sum petro_pct_green = petro/renew_sum solar_pct_green = solar/renew_sum wind_pct_green = wind/renew_sum price_gen['Coal as Percent of Total'] = coal_pct price_gen['Natural Gas as Percent of Total'] = nat_gas_pct price_gen['Wind as Percent of Total'] = wind_pct price_gen['Renewables as Percent of Total'] = renew_pct price_gen.tail() list(price_gen) 
wind.plot(figsize=(15, 6))
total.plot(figsize=(15, 6))
plt.legend()
plt.show()

# Define the p, d and q parameters to take the values 0 or 1
p = d = q = range(0, 2)

# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))

# Generate all different combinations of seasonal p, d and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]

print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))

warnings.filterwarnings("ignore") # specify to ignore warning messages

for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            # fit each candidate model to the wind series
            mod = sm.tsa.statespace.SARIMAX(wind,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)

            results = mod.fit()

            print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
        except:
            continue

mod = sm.tsa.statespace.SARIMAX(wind,
                                order=(1, 1, 1),
                                seasonal_order=(1, 1, 1, 12),
                                enforce_stationarity=False,
                                enforce_invertibility=False)

results = mod.fit()

print(results.summary().tables[1])

results.plot_diagnostics(figsize=(15, 12))
plt.show()

pred = results.get_prediction(start=pd.to_datetime('2018-01-01'), dynamic=False)
pred_ci = pred.conf_int()

ax = wind.plot(label='observed')
pred.predicted_mean.plot(ax=ax, label='One-step ahead Forecast', alpha=.7)

ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='k', alpha=.2)

ax.set_xlabel('Date')
ax.set_ylabel('Wind Energy')
plt.legend()

plt.show()

y_forecasted = pred.predicted_mean
# align the truth with the start of the one-step-ahead forecast
y_truth = wind['2018-01-01':]

# Compute the mean squared error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))

pred_dynamic = results.get_prediction(start=pd.to_datetime('2010-01-01'), dynamic=True, full_results=True)
pred_dynamic_ci = pred_dynamic.conf_int()

ax = wind['2017':].plot(label='observed', figsize=(20, 15))
pred_dynamic.predicted_mean.plot(label='Dynamic Forecast', ax=ax)

ax.fill_between(pred_dynamic_ci.index,
                pred_dynamic_ci.iloc[:, 0],
                pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)

ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('2010-01-01'), wind.index[-1],
                 alpha=.1, zorder=-1)

ax.set_xlabel('Date')
ax.set_ylabel('Wind Energy')

plt.legend()
plt.show()

# Extract the predicted and true values of our time series
y_forecasted = pred_dynamic.predicted_mean
y_truth = wind['2010-01-01':]

# Compute the mean squared error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))

# Get forecast 500 steps ahead in future
pred_uc = results.get_forecast(steps=500)

# Get confidence intervals of forecasts
pred_ci = pred_uc.conf_int()

ax = wind.plot(label='observed', figsize=(20, 15))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Wind Energy')
# total.plot()
plt.legend()
plt.show()
```

## OLS

```
#split dependent and independent variables
X = total
y = renew_sum

X1 = sm.add_constant(X)

#make regression model
model = sm.OLS(y,X1)

# fit model and print results
results = model.fit()
print(results.summary())
```
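One practical gap in the SARIMA grid search earlier in this notebook is that it only prints the AIC of each candidate, leaving you to pick the winner by eye. A small extension (a sketch that reuses the `wind`, `pdq`, and `seasonal_pdq` objects defined above) is to keep track of the best-scoring combination directly:

```
# Re-run the grid search, but remember the (order, seasonal_order) pair with the lowest AIC.
best_aic = float("inf")
best_params = None

for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(wind,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)
            results = mod.fit(disp=False)
            if results.aic < best_aic:
                best_aic = results.aic
                best_params = (param, param_seasonal)
        except Exception:
            continue

if best_params is not None:
    print('Best SARIMA{}x{} - AIC: {:.2f}'.format(best_params[0], best_params[1], best_aic))
```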
We will develop a regression model using an Artificial Neural Network to predict the total dollar amount that customers are willing to pay to purchase a car, given the following attributes:

* Customer Name
* Customer e-mail
* Country
* Gender
* Age
* Annual Salary
* Credit Card Debt
* Net Worth

**The model should predict: Car Purchase Amount**

```
# This Python 3 environment comes with many helpful analytics libraries installed
# For example, here's several helpful packages to load

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
```

**Importing the data**

```
dataset = pd.read_csv('Car_Purchasing_Data.csv', encoding='ISO-8859-1')
dataset.head()

import seaborn as sns
sns.pairplot(dataset)
```

**Dropping unnecessary columns**

```
X = dataset.drop(["Customer Name", "Customer e-mail", "Country", "Car Purchase Amount"], axis=1)
print(X)

y = dataset["Car Purchase Amount"]
print(y)

y.shape

y = y.values.reshape(-1,1)
y.shape
```

**Normalizing the values to improve the accuracy**

```
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
# Note: the same scaler object is re-fitted on y here, so it ends up holding the
# scaling parameters of y; this is what allows the inverse_transform calls at the
# end of the notebook to map predictions back to dollar amounts.
y_scaled = scaler.fit_transform(y)
print(X_scaled)
```

> **Training the Model**

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y_scaled, test_size=0.3)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# initializing the model
model = Sequential()

# adding layers
model.add(Dense(25, input_dim=5, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1, activation='linear'))

# printing the summary
model.summary()

model.compile(optimizer='adam', loss='mean_squared_error')

epochs_hist = model.fit(X_train, y_train, batch_size=25, epochs=50, validation_split=.2, verbose=1)
```

**Validating the model**

```
print(epochs_hist.history.keys())

import matplotlib.pyplot as plt

plt.plot(epochs_hist.history['loss'])
plt.plot(epochs_hist.history['val_loss'])

plt.title('Model Loss Progression During Training/Validation')
plt.ylabel('Training and Validation Losses')
plt.xlabel('Epoch Number')
plt.legend(['Training Loss', 'Validation Loss'])

y_pred = model.predict(X_test)

np.set_printoptions(precision=2)

# map the scaled targets and predictions back to dollar amounts
y_test = scaler.inverse_transform(y_test)
y_pred = scaler.inverse_transform(y_pred)

df = np.concatenate((y_pred.reshape(-1,1), y_test.reshape(-1,1)), 1)
final = pd.DataFrame(data=df, columns=["Predicted", "Actual"])
print(final)
```
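Beyond eyeballing the predicted-versus-actual table, it helps to quantify the test-set error with a couple of summary metrics. A short sketch (assuming `y_test` and `y_pred` are the inverse-transformed arrays from the cell above, and that scikit-learn is available):

```
from sklearn.metrics import mean_squared_error, r2_score

# Both arrays are already in original dollar units because they were inverse-transformed above.
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)

print("Test RMSE: ${:,.2f}".format(rmse))
print("Test R^2 : {:.3f}".format(r2))
```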
# Lesson 01 - Exercise Solutions

## New questions from the CEO for you

1. How many houses are available for purchase?
2. How many attributes do the houses have?
3. What are the attributes of the houses?
4. Which is the most expensive house (the house with the highest sale price)?
5. Which house has the largest number of bedrooms?
6. What is the total number of bedrooms in the dataset?
7. How many houses have 2 bathrooms?
8. What is the average price of all the houses in the dataset?
9. What is the average price of the houses with 2 bathrooms?
10. What is the minimum price among the houses with 3 bedrooms?
11. How many houses have more than 300 square meters of living room?
12. How many houses have more than 2 floors?
13. How many houses have a waterfront view?
14. Of the houses with a waterfront view, how many have 3 bedrooms?
15. Of the houses with more than 300 square meters of living room, how many have more than 2 bathrooms?

# Solution

## Import Libraries

```
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
import plotly.express as px

# Suppress scientific notation
np.set_printoptions(suppress=True)
pd.set_option("display.float_format", '{:.2f}'.format)
```

## Loading Data

```
# loading data into memory
data = pd.read_csv( 'datasets/kc_house_data.csv' )
```

## 1. How many houses are available for purchase?

```
# Attention! There are duplicated houses, so to count the houses we need to look at unique ids
# Checking whether there are duplicates
data['id'].is_unique

# We must count only the unique ids
num_houses_unique = data['id'].nunique()

# Result
print( 'There are {} properties available for purchase'.format( num_houses_unique ) )
```

## 2. How many attributes do the houses have?

```
# The number of columns represents the attributes of the property
num_attributes = len( data.columns )

# Result
print( "The properties have {} attributes".format( num_attributes ) )
```

## 3. What are the attributes of the houses?

```
print( "These are the attributes of the houses: {}".format( data.columns.tolist() ))
```

## 4. Which is the most expensive house (the house with the highest sale price)?

```
# Strategy: select the 'id' and 'price' columns, sort the houses by 'price' in descending order and pick the id of the first property.
# DataFrame.sort_values
# (by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)

# loc to get the value in the first row of the 'id' column
most_expensive_house = data[[ 'id','price' ]].sort_values( 'price', ascending=False ).reset_index().loc[0,'id']

# The answer should be id 6762700020, but with loc[0,'id'] alone the result was 7129300520.
# ATTENTION - Solution: reset the index, because item 0 was being picked up incorrectly!
print( "The most expensive house is {}.".format( most_expensive_house ) )
```

## 5. Which house has the largest number of bedrooms?

```
# Strategy:
# 1. Select the 'id' and 'bedrooms' columns
# 2. Sort the properties by number of bedrooms in descending order
# 3. Take the first value of the 'id' column
greater_num_bedrooms = data[[ 'id','bedrooms' ]].sort_values( 'bedrooms', ascending=False ).reset_index().loc[0,'id']

print( 'The house with the largest number of bedrooms has id: {}.'.format( greater_num_bedrooms ) )
```

## 6. What is the total number of bedrooms in the dataset?

```
# Strategy:
# 1. Sum the values of the 'bedrooms' column
total_num_bedrooms = data[ 'bedrooms' ].sum()

print( 'The total number of bedrooms in the dataset is: {}.'.format( total_num_bedrooms ) )
```

## 7. How many houses have 2 bathrooms?

```
# Strategy:
# 1. Filter the rows (properties) that have 2 'bathrooms'
# 2. Count the number of rows in the dataset
# Tip: note that booleans (resulting from a condition) can be used to select rows and columns!
# loc = locate by column names
# iloc = locate by row and column indices
num_houses = len( data.loc[data[ 'bathrooms' ] == 2, 'bathrooms'] )

print( 'The total number of houses with 2 bathrooms is: {}.'.format( num_houses ) )
```

## 8. What is the average price of all the houses in the dataset?

```
# Strategy:
# 1. Compute the mean of the 'price' column
avg_price = np.round( data[ 'price' ].mean(), 2 )

print( 'The average price of all the houses in the dataset is: R${}.'.format( avg_price ))
```

## 9. What is the average price of the houses with 2 bathrooms?

```
# Strategy:
# 1. Select the properties with 2 'bathrooms'
# 2. Compute the mean of the 'price' column for this new subset
avg_price = np.round( data.loc[data[ 'bathrooms' ] == 2, 'price'].mean() , 2 )

print( 'The average price of the houses with 2 bathrooms is: R${}.'.format( avg_price ) )
```

## 10. What is the minimum price among the houses with 3 bedrooms?

```
# Strategy:
# 1. Select the properties with 3 'bedrooms'
# 2. Compute the minimum of the 'price' column for this new subset
min_price = data.loc[ data[ 'bedrooms' ] == 3, 'price'].min()

print( 'The minimum price among the houses with 3 bedrooms is: R${}.'.format( min_price ) )
```

## 11. How many houses have more than 300 square meters of living room?

```
# Strategy:
# 1. Convert 'sqft_living' to square meters and filter the properties with more than 300 m2
# 2. Count the number of properties in this new subset
data['m2'] = data['sqft_living'] * 0.093

houses = data.loc[data[ 'm2' ] > 300, 'id'].shape[0]

print( 'A total of {} houses have more than 300 square meters of living room.'.format( houses ))
```

## 12. How many houses have more than 2 floors?

```
# Strategy:
# 1. Select the properties whose 'floors' column is greater than 2.
# 2. Count the number of properties in this new subset.
houses = data.loc[ data[ 'floors' ] > 2, 'id' ].shape[0]

print( 'A total of {} houses have more than 2 floors.'.format( houses ) )
```

## 13. How many houses have a waterfront view?

```
# Strategy:
# 1. Select the properties whose 'waterfront' column is equal to 1.
# 2. Count the number of properties in this new subset.
houses = data.loc[ data[ 'waterfront' ] == 1, 'id' ].shape[0]

print( 'A total of {} houses have a waterfront view.'.format( houses ) )
```

## 14. Of the houses with a waterfront view, how many have 3 bedrooms?

```
# Strategy:
# 1. Select the properties whose 'waterfront' column is equal to 1 and whose 'bedrooms' column is equal to 3.
# 2. Count the number of properties in this new subset.
houses = data.loc[ (data[ 'waterfront' ] == 1) & ( data[ 'bedrooms' ] == 3 ), 'id' ].shape[0]

print( 'A total of {} houses with a waterfront view have 3 bedrooms.'.format( houses ) )
```

## 15. Of the houses with more than 300 square meters of living room, how many have more than 2 bathrooms?

```
# Strategy:
# 1. Select the properties whose 'm2' column is greater than 300 and whose 'bathrooms' column is greater than 2.
# 2. Count the number of properties in this new subset.
houses = data.loc[ ( data[ 'm2' ] > 300 ) & ( data[ 'bathrooms' ] > 2 ), 'id' ].shape[0]

print( 'A total of {} houses with more than 300 square meters of living room have more than 2 bathrooms.'.format( houses ) )
```
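Most of the count-style questions above follow the same filter-then-count pattern, so they can be answered with one small helper. A minimal sketch, assuming `data` is the dataframe loaded above; the `count_houses` name is illustrative:

```
def count_houses(df, mask, description):
    """Count the properties matching a boolean mask and print a short report."""
    n = df.loc[mask, 'id'].shape[0]
    print('A total of {} houses {}.'.format(n, description))
    return n

count_houses(data, data['bathrooms'] == 2, 'have 2 bathrooms')
count_houses(data, data['floors'] > 2, 'have more than 2 floors')
count_houses(data, (data['waterfront'] == 1) & (data['bedrooms'] == 3),
             'have a waterfront view and 3 bedrooms')
```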
``` import numpy as np import dicom from tkinter import * from tkinter import filedialog from tkinter import Tk root = Tk() root.withdraw() file = filedialog.askopenfilename() plan = dicom.read_file(file) from xml.etree.ElementTree import ElementTree from xml.etree.ElementTree import Element import xml.etree.ElementTree as etree from xml.etree.ElementTree import Comment from xml.dom import minidom import numpy as np def GetGantry_and_MLC(plan): ##I'm going to start over down here, Mahmoud, and let's summarize what you've been doing so far #1. This code here is going to find the cardinal angles that need to be measured. #Cardinal angles # Gantry angle of start and end cps Start = plan.BeamSequence[0].ControlPointSequence[0].GantryAngle End = plan.BeamSequence[0].ControlPointSequence[-1].GantryAngle #convert to Varian Scale if(Start <=180): Start= 180 - Start else: Start = 540-Start if(End <=180): End = 180 - End else: End = 540-End #print(Start,End) possible_cardinals = np.array([Start,0,90,180,270,End]) #cards= [x in np.arange(Start,End) for x in possible_cardinals] #this is only for CC, With CW the line will have reversed comparison operators. cards = possible_cardinals[np.where(np.logical_and(possible_cardinals>=Start, possible_cardinals<=End))] print(cards) #2. Then find a list of control points that correspond to the surrounding control points at these angles. #this time I"ll do the icp calculation inside this loop. mlc_new_positions = {} #this is how to create a dictionary icp = [] #looping over the control points to find the cardinal Angles "Varian Coordinates". for i in np.arange(0,len(plan.BeamSequence[0].ControlPointSequence)-1): current_gantry = plan.BeamSequence[0].ControlPointSequence[i].GantryAngle next_gantry = plan.BeamSequence[0].ControlPointSequence[i+1].GantryAngle #conversion to Varian Scale. current_gantry = 180-current_gantry if current_gantry <= 180 else 540-current_gantry next_gantry = 180 -next_gantry if next_gantry <=180 else 540-next_gantry for j in cards: #print(current_gantry, next_gantry, j) #print(np.linspace(current_gantry,next_gantry,np.absolute(current_gantry-next_gantry)+1)) if j in np.arange(current_gantry,next_gantry+1): print ("The cardinal Beam angle in between: ",current_gantry, " and ", next_gantry, " degree 'Varian coordinate' between control points: ", plan.BeamSequence[0].ControlPointSequence[i].ControlPointIndex, " and ", plan.BeamSequence[0].ControlPointSequence[i+1].ControlPointIndex) print("Corresponds to ", str(j)) #determine MLC positions of current and next control points. prev_mlc_pos= plan.BeamSequence[0].ControlPointSequence[i].BeamLimitingDevicePositionSequence[-1].LeafJawPositions next_mlc_pos = plan.BeamSequence[0].ControlPointSequence[i+1].BeamLimitingDevicePositionSequence[-1].LeafJawPositions #this array is just to calculate the interpolation. temp_mlc = [] #print(plan.BeamSequence[0].ControlPointSequence[i].BeamLimitingDevicePositionSequence) #interpolate to find the exact control point at the cardinal angle if j == current_gantry: icp.append(float(plan.BeamSequence[0].ControlPointSequence[i].ControlPointIndex)) #print('Current Gantry!') temp_mlc = [float(x) for x in plan.BeamSequence[0].ControlPointSequence[i].BeamLimitingDevicePositionSequence[-1].LeafJawPositions] print('Writing temp MLC for Current Gantry.') #write to the dictionary the temp_mlc with the gantry angle in Varian Standard as the key. 
mlc_new_positions[j]=temp_mlc #print (plan.BeamSequence[0].ControlPointSequence[i].ControlPointIndex) elif j == next_gantry:#ahhh you got me on this one was supposed to be an else if icp.append(float(plan.BeamSequence[0].ControlPointSequence[i+1].ControlPointIndex)) #print (plan.BeamSequence[0].ControlPointSequence[i+1].ControlPointIndex) temp_mlc = [float(x) for x in plan.BeamSequence[0].ControlPointSequence[i+1].BeamLimitingDevicePositionSequence[-1].LeafJawPositions] print('Next Gantry!') mlc_new_positions[j] = temp_mlc else: print('Interp gantry') icp_temp = plan.BeamSequence[0].ControlPointSequence[i].ControlPointIndex*(1-((j-current_gantry)/(next_gantry-current_gantry)))+plan.BeamSequence[0].ControlPointSequence[i+1].ControlPointIndex*(1-((next_gantry-j)/(next_gantry-current_gantry))) cp_index = float(plan.BeamSequence[0].ControlPointSequence[i].ControlPointIndex) cp_index_next = float(plan.BeamSequence[0].ControlPointSequence[i+1].ControlPointIndex) icp.append(icp_temp) #loop through all MLC positions print('checking mlc positions') print('prev, next, interp')#uncomment the printed line below to actually get these values. for x in np.arange(0,len(prev_mlc_pos)): #y = y_0 + (x-x_0)*((y_1-y_0)/(x_1-x_0)) temp_interp = float(prev_mlc_pos[x])+(icp_temp-cp_index)*((float(next_mlc_pos[x])-float(prev_mlc_pos[x]))/(cp_index_next-cp_index)) temp_mlc.append(temp_interp) #print(float(prev_mlc_pos[x]),next_mlc_pos[x], temp_interp) mlc_new_positions[j] = temp_mlc print (mlc_new_positions[j]) print('--------------------------') print('cardinal angles are: ', cards) print('control points are: ',icp) #print(mlc_new_positions) print ('Done!') return mlc_new_positions MLC_Pos = GetGantry_and_MLC(plan) print(type(MLC_Pos)) for gantry,mlc_pos in MLC_Pos.items(): #this has your gantry angles and your control points. print(gantry,mlc_pos) print(type(mlc_pos)) list(MLC_Pos)[0] len(MLC_Pos[list(MLC_Pos)[0]][60:]) list(MLC_Pos)[1:] len(MLC_Pos) #build the XML file #first the header. 
''' <VarianResearchBeam SchemaVersion="1.0"> <!--Generated from python--> <SetBeam> <Id>1234</Id> <MLCModel>NDS120HD</MLCModel> <Accs> <Acc2>3317</Acc2> </Accs> <ControlPoints> <Cp> <SubBeam> <Seq>0</Seq> <Name>MV Outside</Name> </SubBeam> <Energy>6x</Energy> <Mu>0</Mu> <DRate>400</DRate> <GantryRtn>180.0</GantryRtn> <CollRtn>180</CollRtn> <Y1>2.5</Y1> <Y2>2.5</Y2> <X1>2.5</X1> <X2>2.5</X2> </Cp> <!--Now loop through the rest of the control points--> </SetBeam> </VarianResearchBeam> ''' js = ' ' root = Element ("VarianResearchBeam") tree= ElementTree(root) root.set('SchemaVersion','1.0') comment = Comment('Build from python') root.append(comment) setBeam = Element('SetBeam') root.append(setBeam) #inner tags id_tag = Element('Id') id_tag.text='1234' setBeam.append(id_tag) mlcModel = Element('MLCModel') mlcModel.text = 'NDS120HD' setBeam.append(mlcModel) Accs = Element('Accs') Acc2 = Element('Acc2') Acc2.text = '3317' Accs.append(Acc2) setBeam.append(Accs) ControlPoints = Element('ControlPoints') cp = Element('Cp') SubBeam = Element('SubBeam') Sequence = Element('Sequence') Sequence.text = '0' Name = Element('Name') Name.text = 'MV Outside' SubBeam.append(Sequence) SubBeam.append(Name) cp.append(SubBeam) Energy = Element('Energy') Energy.text = '6x' cp.append(Energy) MU= Element('MU') MU.text = '0' cp.append(MU) drate = Element('DRate') drate.text = '400' cp.append(drate) Gantryrtn = Element('GantryRtn') # gantry Angle Gantryrtn.text = str(list(MLC_Pos)[0]) cp.append(Gantryrtn) collrtn = Element('CollRtn') collrtn.text = '180' cp.append(collrtn) #jaw size y1 = Element('Y1') y2 = Element('Y2') x1 = Element('X1') x2 = Element('X2') y1.text = y2.text = x1.text = x2.text = '2.5' cp.append(y1) cp.append(y2) cp.append(x1) cp.append(x2) mlc = Element('Mlc') mlc_id = Element('ID') mlc_id.text = '1' mlc.append(mlc_id) mlc_b = Element('B') j = ',' # MLC B mlc_b.text = js.join([str(x) for x in MLC_Pos[list(MLC_Pos)[0]][60:]]) mlc.append(mlc_b) mlc_a = Element('A') # MLC A mlc_a.text = js.join([str(x) for x in MLC_Pos[list(MLC_Pos)[0]][:60]]) mlc.append(mlc_a) cp.append(mlc) ControlPoints.append(cp) #now loop through the other control poitns. for gantry in list(MLC_Pos)[1:]: #create blank control poitn with no change in parameters. cp_nc = Element('Cp') GANTRY = Element ('GantryRtn') GANTRY.text= str(gantry) cp_nc.append(GANTRY) mlc = Element('Mlc') mlc_id = Element('ID') mlc_id.text = '1' mlc.append(mlc_id) mlc_b = Element('B') j = ',' # MLC B mlc_b.text = js.join([str(x) for x in MLC_Pos[gantry][60:]]) mlc.append(mlc_b) mlc_a = Element('A') # MLC A mlc_a.text = js.join([str(x) for x in MLC_Pos[gantry][:60]]) mlc.append(mlc_a) cp.append(mlc) dummy_cp = Element ('Cp') ControlPoints.append(dummy_cp) ControlPoints.append(cp_nc) #________________________________________________ # I M A G I N G S E C T I O N #________________________________________________ #create imaging points with odd control point number. 
Imaging_Parameters = Element ('ImagingParameters') Outside_Treatment = Element ('OutsideTreatment') Max_MU = Element ('MaxMu') Max_MU.text = '100' Outside_Treatment.append(Max_MU) Imaging_points = Element ('ImagingPoints') for i in np.arange(0,len(MLC_Pos)): Imaging_Point = Element ('imagingPoint') Img_cp = Element('Cp') Img_cp.text = 2*i+1 Imaging_Point.append(Img_cp) Acquisition = Element ('Acquisition') Acquisition_Id = Element ('AcquisitionId') Acquisition_Id.text = '0' Acquisition_Specs = Element ('AcquisitionSpecs') Acquisition_Parameters = Element ('AcquisitionParameters') Image_Mode = Element ('ImgMode') Image_Mode.set('id','Highres') Mv = Element ('MV') Calibration_Set = Element ('CalibrationSet') Calibration_Set.text = 'DefaultCalibrationSetId' Acquisition_Parameters.append(Image_Mode) Acquisition_Parameters.append(Calibration_Set) Acquisition_Parameters.append(Mv) Acquisition.append(Acquisition_Id) Acquisition_Specs.append(Acquisition_Specs) Acquisition.append(Acquisition_Parameters) Mv_d = Element ('Mvd') positions = Element ('Positions') Lateral = Element ('Lat') Lateral.text = '0' Longitudinal = Element ('Lng') Longitudinal.text='0' Vertical = Element ('vrt') Vertical.text = '-180' pitch = Element ('Pitch') pitch.text='0' positions.append(Lateral) positions.append(Longitudinal) positions.append(Vertical) positions.append(pitch) Mv_d.append(positions) Imaging_Point.append(Acquisition) Imaging_Point.append(Mv_d) setBeam.append(ControlPoints) Imaging_Parameters.append(Imaging_points) setBeam.append(Imaging_Parameters) root.append(setBeam) with open('testXML','wb') as xmlfile: rough_string = etree.tostring(root) parsed = minidom.parseString(rough_string) pretty_tree = parsed.toprettyxml(indent=' ') #pretty_tree.write(xmlfile) print(pretty_tree) ```
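The per-leaf interpolation inside `GetGantry_and_MLC` follows the usual two-point formula y = y0 + (x - x0) * (y1 - y0) / (x1 - x0), applied leaf by leaf between the surrounding control points. A minimal, vectorised sketch of the same idea, assuming NumPy is available; the `interpolate_mlc` name is mine:

```
import numpy as np

def interpolate_mlc(prev_mlc, next_mlc, cp_prev, cp_next, cp_interp):
    """Linearly interpolate every leaf position between two control points."""
    prev_mlc = np.asarray(prev_mlc, dtype=float)
    next_mlc = np.asarray(next_mlc, dtype=float)
    frac = (cp_interp - cp_prev) / (cp_next - cp_prev)
    return (prev_mlc + frac * (next_mlc - prev_mlc)).tolist()

# For example, halfway between control points 4 and 5:
# interpolate_mlc(prev_mlc_pos, next_mlc_pos, 4.0, 5.0, 4.5)
```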
# Enabling Data Collection for Models in Production With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity: * Monitor data drifts as production data enters your model * Make better decisions on when to retrain or optimize your model * Retrain your model with the data collected ## What data is collected? * Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS) * Model predictions using production input data. **Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product. ## What is different compared to standard production deployment process? 1. Update scoring file. 2. Update yml file with new dependency. 3. Update aks configuration. 4. Build new image and deploy it. ## 1. Import your dependencies ``` from azureml.core import Workspace from azureml.core.compute import AksCompute, ComputeTarget from azureml.core.webservice import Webservice, AksWebservice import azureml.core print(azureml.core.VERSION) ``` ## 2. Set up your configuration and create a workspace ``` ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n') ``` ## 3. Register Model Register an existing trained model, add descirption and tags. ``` #Register the model from azureml.core.model import Model model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as tags = {'area': "diabetes", 'type': "regression"}, description = "Ridge regression model to predict diabetes", workspace = ws) print(model.name, model.description, model.version) ``` ## 4. *Update your scoring file with Data Collection* The file below, compared to the file used in notebook 11, has the following changes: ### a. Import the module ```python from azureml.monitoring import ModelDataCollector``` ### b. In your init function add: ```python global inputs_dc, prediction_d inputs_dc = ModelDataCollector("best_model", identifier="inputs", feature_names=["feat1", "feat2", "feat3", "feat4", "feat5", "Feat6"]) prediction_dc = ModelDataCollector("best_model", identifier="predictions", feature_names=["prediction1", "prediction2"])``` * Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide "raw" data versus "processed". * CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.) * Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created. ### c. In your run function add: ```python inputs_dc.collect(data) prediction_dc.collect(result)``` ``` %%writefile score.py import pickle import json import numpy from sklearn.externals import joblib from sklearn.linear_model import Ridge from azureml.core.model import Model from azureml.monitoring import ModelDataCollector import time def init(): global model print ("model initialized" + time.strftime("%H:%M:%S")) # note here "sklearn_regression_model.pkl" is the name of the model registered under the workspace # this call should return the path to the model.pkl file on the local disk. 
model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl') # deserialize the model file back into a sklearn model model = joblib.load(model_path) global inputs_dc, prediction_dc # this setup will help us save our inputs under the "inputs" path in our Azure Blob inputs_dc = ModelDataCollector(model_name="sklearn_regression_model", identifier="inputs", feature_names=["feat1", "feat2"]) # this setup will help us save our ipredictions under the "predictions" path in our Azure Blob prediction_dc = ModelDataCollector("sklearn_regression_model", identifier="predictions", feature_names=["prediction1", "prediction2"]) # note you can pass in multiple rows for scoring def run(raw_data): global inputs_dc, prediction_dc try: data = json.loads(raw_data)['data'] data = numpy.array(data) result = model.predict(data) print ("saving input data" + time.strftime("%H:%M:%S")) inputs_dc.collect(data) #this call is saving our input data into our blob prediction_dc.collect(result)#this call is saving our prediction data into our blob print ("saving prediction data" + time.strftime("%H:%M:%S")) # you can return any data type as long as it is JSON-serializable return result.tolist() except Exception as e: error = str(e) print (error + time.strftime("%H:%M:%S")) return error ``` ## 5. *Update your myenv.yml file with the required module* ``` from azureml.core.conda_dependencies import CondaDependencies myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn']) myenv.add_pip_package("azureml-monitoring") with open("myenv.yml","w") as f: f.write(myenv.serialize_to_string()) ``` ## 6. Create your new Image ``` from azureml.core.image import ContainerImage image_config = ContainerImage.image_configuration(execution_script = "score.py", runtime = "python", conda_file = "myenv.yml", description = "Image with ridge regression model", tags = {'area': "diabetes", 'type': "regression"} ) image = ContainerImage.create(name = "myimage1", # this is the model object models = [model], image_config = image_config, workspace = ws) image.wait_for_creation(show_output = True) print(model.name, model.description, model.version) ``` ## 7. Deploy to AKS service ### Create AKS compute if you haven't done so. ``` # Use the default configuration (can also provide parameters to customize) prov_config = AksCompute.provisioning_configuration() aks_name = 'my-aks-test1' # Create the cluster aks_target = ComputeTarget.create(workspace = ws, name = aks_name, provisioning_configuration = prov_config) %%time aks_target.wait_for_completion(show_output = True) print(aks_target.provisioning_state) print(aks_target.provisioning_errors) ``` If you already have a cluster you can attach the service to it: ```python %%time resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>' create_name= 'myaks4' attach_config = AksCompute.attach_configuration(resource_id=resource_id) aks_target = ComputeTarget.attach(workspace = ws, name = create_name, attach_configuration=attach_config) ## Wait for the operation to complete aks_target.wait_for_provisioning(True)``` ### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration* In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file: ``` #Set the web service configuration aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True) ``` ### b. 
Deploy your service ``` if aks_target.provisioning_state== "Succeeded": aks_service_name ='aks-w-dc0' aks_service = Webservice.deploy_from_image(workspace = ws, name = aks_service_name, image = image, deployment_config = aks_config, deployment_target = aks_target ) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) else: raise ValueError("aks provisioning failed, can't deploy service") ``` ## 8. Test your service and send some data **Note**: It will take around 15 mins for your data to appear in your blob. The data will appear in your Azure Blob following this format: /modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv ``` %%time import json test_sample = json.dumps({'data': [ [1,2,3,4,54,6,7,8,88,10], [10,9,8,37,36,45,4,33,2,1] ]}) test_sample = bytes(test_sample,encoding = 'utf8') if aks_service.state == "Healthy": prediction = aks_service.run(input_data=test_sample) print(prediction) else: raise ValueError("Service deployment isn't healthy, can't call the service") ``` ## 9. Validate you data and analyze it You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear): /modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv For doing further analysis you have multiple options: ### a. Create DataBricks cluter and connect it to your blob https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template "Azure Blob Storage Import Example Notebook". Here is an example for setting up the file location to extract the relevant data: <code> file_location = "wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv" file_type = "csv"</code> ### b. Connect Blob to Power Bi (Small Data only) 1. Download and Open PowerBi Desktop 2. Select โ€œGet Dataโ€ and click on โ€œAzure Blob Storageโ€ >> Connect 3. Add your storage account and enter your storage key. 4. Select the container where your Data Collection is stored and click on Edit. 5. In the query editor, click under โ€œNameโ€ column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3 6. Click on the double arrow aside the โ€œContentโ€ column to combine the files. 7. Click OK and the data will preload. 8. You can now click Close and Apply and start building your custom reports on your Model Input data. # Disable Data Collection ``` aks_service.update(collect_model_data=False) ``` ## Clean up ``` %%time aks_service.delete() image.delete() model.delete() ```
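Once files start landing under the /modeldata/... path, a first look at them only needs pandas. A minimal sketch, assuming one collected file has been downloaded locally as `inputs_data.csv` and that its columns match the `feature_names` passed to `ModelDataCollector`; the `training_means` values are illustrative placeholders:

```
import pandas as pd

# One collected file downloaded from the blob path described above
inputs = pd.read_csv('inputs_data.csv')

# Illustrative training-time feature means to compare the live inputs against
training_means = pd.Series({'feat1': 0.05, 'feat2': 3.2})

print(inputs.describe())                                      # live input distribution
print(inputs[training_means.index].mean() - training_means)   # crude drift signal
```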
```
from random import random
from random import randint
from numpy import array
from numpy import zeros
from sklearn.preprocessing import MinMaxScaler
from keras.utils.vis_utils import plot_model
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import TimeDistributed
from keras.layers import Bidirectional
from keras.layers import Dropout
from keras import regularizers

def NormSignal(S, I):
    # normalize features (skip the id, label and length columns)
    S = S.reshape(-1, 1)
    if I not in [0, 1, 13, 14]:
        scaler = MinMaxScaler(feature_range=(0, 1))
        scaled = scaler.fit_transform(S)
    else:
        scaled = S
    return scaled.reshape(-1).tolist()

from pandas import read_csv
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(3)

dataset = read_csv('EEGdata.csv', engine = 'python', skipfooter=2)
dataset = dataset.values.astype('float32')

NormDataG = [NormSignal(dataset[:,i], i) for i in range(0,15)]
NormDataG = np.array(NormDataG)
NormDataG = NormDataG.T

VideoID = list(set(NormDataG[:,1]))
SubjectID = list(set(NormDataG[:,0]))

A = 0  # length of the longest signal
for i in range(len(SubjectID)):
    for j in range(len(VideoID)):
        Xtemp = NormDataG[(NormDataG[:,0]==SubjectID[i]) & (NormDataG[:,1]==VideoID[j])]
        A = max(len(Xtemp[:,14]), A)

# define the model
model = Sequential()
model.add(TimeDistributed(Conv2D(20, (5,5), activation='relu'), input_shape=(None,A,11,1)))
model.add(Dropout(0.3))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(10, return_sequences=True))
model.add(Dropout(0.3))
model.add(Bidirectional(LSTM(20, return_sequences=True)))
model.add(LSTM(10))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
print(model.summary())

plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
from IPython.display import Image
Image(filename='model_plot.png')

def get_model():
    model = Sequential()
    model.add(TimeDistributed(Conv2D(20, (5,5), activation='relu'), input_shape=(None,A,11,1)))
    model.add(Dropout(0.5))
    model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(10, return_sequences=True))
    model.add(Dropout(0.5))
    model.add(Bidirectional(LSTM(20, return_sequences=True)))
    model.add(LSTM(10))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
    return model

def CVModel(Leave_One_Out, A):
    model = get_model()
    Train = NormDataG[(NormDataG[:,0]!=Leave_One_Out)]
    Test = NormDataG[(NormDataG[:,0]==Leave_One_Out)]

    # Manual padding to a fixed size:
    k = 0
    VideoID = list(set(Train[:,1]))
    SubjectID = list(set(Train[:,0]))
    for i in range(len(SubjectID)):
        for j in range(len(VideoID)):
            Xtemp = Train[(Train[:,0]==SubjectID[i]) & (Train[:,1]==VideoID[j])]
            z = np.zeros((A-Xtemp.shape[0], 11), dtype=Train.dtype)
            Xt = np.concatenate((Xtemp[:,2:13], z), axis=0)
            Xt = Xt.reshape(1, A, -1)
            yt = Xtemp[:,13].mean().reshape(1, -1)
            if k != 0:
                Xtrain = np.vstack((Xtrain, Xt))
                ytrain = np.vstack((ytrain, yt))
            else:
                Xtrain = Xt
                ytrain = yt
                k = 1

    k = 0
    VideoID = list(set(Test[:,1]))
    for i in range(len(VideoID)):
        Xtemp = Test[Test[:,1]==VideoID[i]]
        z = np.zeros((A-Xtemp.shape[0], 11), dtype=Train.dtype)
        Xt = np.concatenate((Xtemp[:,2:13], z), axis=0)
        Xt = Xt.reshape(1, A, -1)
        yt = Xtemp[:,13].mean().reshape(1, -1)
        if k != 0:
            Xtest = np.vstack((Xtest, Xt))
            ytest = np.vstack((ytest, yt))
        else:
            Xtest = Xt
            ytest = yt
            k = 1

    # reshape using A so the time dimension matches the padded signals
    Xtrain = Xtrain.reshape(-1, 1, A, 11, 1)
    Xtest = Xtest.reshape(-1, 1, A, 11, 1)

    history = model.fit(Xtrain, ytrain, epochs=40, batch_size=30, validation_data=(Xtest, ytest), verbose=0, shuffle=True)

    correct = 0
    for i in range(len(Xtest)):
        X = np.array(Xtest[i]).reshape(-1, 1, A, 11, 1)
        y = np.array(ytest[i]).reshape(1,1)
        loss, acc = model.evaluate(X, y, verbose=0)
        #print('Probability: %f, acc: %f' % (model.predict(X), acc*100))
        yhat = 1 if model.predict(X) >= 0.5 else 0
        if yhat == y:
            correct += 1
    #print('Accuracy: %f %%' % ((correct/len(Xtest))*100.0))
    return (correct/len(Xtest))*100.0, history

Final = []
for i in SubjectID:
    Leave_One_Out = int(i)
    F, history = CVModel(Leave_One_Out, A)
    Final.append(F)
    #plt.plot(history.history['loss'], label='train')
    #plt.plot(history.history['val_loss'], label='test')
    #plt.legend()
    #plt.show()

print(np.array(Final).mean())
print(Final)

print('Average Accuracy %2.1f' % (np.array(Final).mean()))

plt.bar(range(len(Final)), Final, align='center')
#ax.axvline(data1.mean(), color='blue', linewidth=2)
names = ['Leave-Student 1-Out','Leave-Student 2-Out','Leave-Student 3-Out','Leave-Student 4-Out', 'Leave-Student 5-Out',
         'Leave-Student 6-Out','Leave-Student 7-Out','Leave-Student 8-Out','Leave-Student 9-Out', 'Leave-Student 10-Out']
x = range(len(Final))
plt.xticks(x, names, rotation=90)
plt.xlabel('Experiment', fontsize=18)
plt.ylabel('Accuracy [in %]', fontsize=16)
plt.plot([-1, 10], [78, 78], 'k-', lw=3)
axes = plt.gca()
axes.set_ylim([0,100])
plt.annotate('Average Accuracy 78%', xy=(6, 80), xytext=(3, 93),
             arrowprops=dict(facecolor='black', shrink=0.03),
             )
plt.savefig('EndResults.pdf')
plt.show()
```
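The manual zero-padding inside `CVModel` can be factored into a small helper so the target length stays tied to `A` (the longest recording) rather than a literal value. A minimal sketch, assuming the same (time, features) layout as above; the `pad_to_length` name is mine:

```
def pad_to_length(x, target_len):
    """Zero-pad a (time, features) array along the time axis to target_len."""
    z = np.zeros((target_len - x.shape[0], x.shape[1]), dtype=x.dtype)
    return np.concatenate((x, z), axis=0)

# e.g. inside the loops above:
# Xt = pad_to_length(Xtemp[:, 2:13], A).reshape(1, A, -1)
```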
github_jupyter
from random import random from random import randint from numpy import array from numpy import zeros from sklearn.preprocessing import MinMaxScaler from keras.utils.vis_utils import plot_model from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.models import Sequential from keras.layers import LSTM from keras.layers import Dense from keras.layers import Flatten from keras.layers import TimeDistributed from keras.layers import Bidirectional from keras.layers import Dropout from keras import regularizers def NormSignal(S, I): # normalize features S=S.reshape(-1, 1) if I not in [0, 1, 13, 14]: scaler = MinMaxScaler(feature_range=(0, 1)) scaled = scaler.fit_transform(S) scaled = scaled else: scaled = S return scaled.reshape(-1).tolist() from pandas import read_csv import matplotlib.pyplot as plt from matplotlib import pyplot as plt import numpy as np np.random.seed = 3 dataset = read_csv('EEGdata.csv', engine = 'python', skipfooter=2) dataset = dataset.values.astype('float32') NormDataG = [NormSignal(dataset[:,i], i) for i in range(0,15)] NormDataG = np.array(NormDataG) NormDataG = NormDataG.T VideoID = list(set(NormDataG[:,1])) SubjectID = list(set(NormDataG[:,0])) import numpy as np A=0 # length of signal for i in range(len(SubjectID)): for j in range(len(VideoID)): Xtemp=NormDataG[(NormDataG[:,0]==SubjectID[i]) & (NormDataG[:,1]==VideoID[j])] A = max(len(Xtemp[:,14]),A) # define the model model = Sequential() model.add(TimeDistributed(Conv2D(20, (5,5), activation='relu'), input_shape=(None,A,11,1))) model.add(Dropout(0.3)) model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2)))) model.add(TimeDistributed(Flatten())) model.add(LSTM(10, return_sequences=True)) model.add(Dropout(0.3)) model.add(Bidirectional(LSTM(20, return_sequences=True))) model.add(LSTM(10)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) print(model.summary()) plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True) from IPython.display import Image Image(filename='model_plot.png') def get_model(): model = Sequential() model.add(TimeDistributed(Conv2D(20, (5,5), activation='relu'), input_shape=(None,A,11,1))) model.add(Dropout(0.5)) model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2)))) model.add(TimeDistributed(Flatten())) model.add(LSTM(10, return_sequences=True)) model.add(Dropout(0.5)) model.add(Bidirectional(LSTM(20, return_sequences=True))) model.add(LSTM(10)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) return model def CVModel(Leave_One_Out, A): model = get_model() Train = NormDataG [(NormDataG[:,0]!=Leave_One_Out)] Test = NormDataG [(NormDataG[:,0]==Leave_One_Out)] # Manual Padding to fixed size: k=0 VideoID = list(set(Train[:,1])) SubjectID = list(set(Train[:,0])) for i in range(len(SubjectID)): for j in range(len(VideoID)): Xtemp=Train[(Train[:,0]==SubjectID[i]) & (Train[:,1]==VideoID[j])] z = np.zeros((A-Xtemp.shape[0], 11), dtype=Train.dtype) Xt=np.concatenate((Xtemp[:,2:13], z), axis=0) Xt = Xt.reshape(1, A, -1) yt= Xtemp[:,13].mean().reshape(1, -1) if k!=0: Xtrain = np.vstack((Xtrain,Xt)) ytrain = np.vstack((ytrain,yt)) else: Xtrain=Xt ytrain=yt k=1 k=0 VideoID = list(set(Test[:,1])) for i in range(len(VideoID)): Xtemp=Test[Test[:,1]==VideoID[i]] z = np.zeros((A-Xtemp.shape[0], 11), dtype=Train.dtype) Xt=np.concatenate((Xtemp[:,2:13], z), axis=0) Xt = Xt.reshape(1, A, -1) yt= 
Xtemp[:,13].mean().reshape(1, -1) if k!=0: Xtest = np.vstack((Xtest,Xt)) ytest = np.vstack((ytest,yt)) else: Xtest=Xt ytest=yt k=1 Xtrain = Xtrain.reshape(-1, 1, 144, 11, 1) Xtest = Xtest.reshape(-1, 1, 144, 11, 1) history = model.fit(Xtrain, ytrain, epochs=40, batch_size=30, validation_data=(Xtest, ytest), verbose=0, shuffle=True) correct = 0 for i in range(len(Xtest)): X = np.array(Xtest[i]).reshape(-1, 1, A, 11, 1) y = np.array(ytest[i]).reshape(1,1) loss, acc = model.evaluate(X, y, verbose=0) #print('Probability: %f, acc: %f' % (model.predict(X), acc*100)) yhat = 1 if model.predict(X)>=0.5 else 0 if yhat == y: correct += 1 #print('Accuracy: %f %%' % ((correct/len(Xtest))*100.0)) return (correct/len(Xtest))*100.0, history Final = [] for i in SubjectID: Leave_One_Out = np.int(i) F, history = CVModel(Leave_One_Out, A) Final.append(F) #plt.plot(history.history['loss'], label='train') #plt.plot(history.history['val_loss'], label='test') #plt.legend() #plt.show() print(np.array(Final).mean()) print(Final) print('Average Accuracy %2.1f' %(np.array(Final).mean())) plt.bar(range(len(Final)), Final, align='center') #ax.axvline(data1.mean(), color='blue', linewidth=2) names = ['Leave-Student 1-Out','Leave-Student 2-Out','Leave-Student 3-Out','Leave-Student 4-Out', 'Leave-Student 5-Out', 'Leave-Student 6-Out','Leave-Student 7-Out','Leave-Student 8-Out','Leave-Student 9-Out', 'Leave-Student 10-Out'] x = range(len(Final)) plt.xticks(x, names, rotation=90) plt.xlabel('Experiment', fontsize=18) plt.ylabel('Accuracy [in %]', fontsize=16) plt.plot([-1, 10], [78, 78], 'k-', lw=3) axes = plt.gca() axes.set_ylim([0,100]) plt.annotate('Average Accuracy 78%', xy=(6, 80), xytext=(3, 93), arrowprops=dict(facecolor='black', shrink=0.03), ) plt.savefig('EndResulst.pdf') plt.show()
### Module 01 - Assignment

***

#### Environment

`conda activate sklearn-env`

***

#### Goals

- [Load the data sets from the links page](#Dataset-load-from-CSV-located-on-OpenML-website)
- [Print statistics about the data](#Print-statistics-about-the-data)
- [Plot correlation and heat maps](#Plot-correlation-and-heat-maps)
- [Optional](#Optional)
  - [Plot linear regression](#Plot-linear-regression)
  - [Predict MEDV from CRIM, RM, INDUS, NOX](#Train-model-to-predict-MEDV-from-CRIM,-RM,-INDUS,-NOX)

#### Basic Python imports for the pandas (dataframe) and seaborn (visualization) packages

```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
from sklearn.datasets import fetch_openml

# Load data from https://www.openml.org/d/40945
data = fetch_openml("boston", version=1, as_frame=True)
dataset = data.frame.copy()
dataset['CHAS'] = pd.to_numeric(dataset['CHAS'])
dataset['RAD'] = pd.to_numeric(dataset['RAD'])
dataset.head()
```

### Print statistics about the data

#### Data description

```
print(data.DESCR)
```

#### Dataset meta information

```
dataset.info()
```

#### Display total count of missing values

```
dataset.isna().sum()
```

#### Basic statistical properties

```
dataset.describe().transpose()[['mean', 'std', 'count', 'min', 'max']]
```

### Plot correlation and heat maps

#### Correlation matrix

```
corr = dataset.corr()
corr
```

#### Visualize the correlation matrix using a seaborn heatmap plot

https://seaborn.pydata.org/examples/many_pairwise_correlations.html

```
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))

plt.figure(figsize=(12, 8))
sns.heatmap(corr, mask=mask, annot=True, fmt='.2f',
            xticklabels=corr.columns.values, yticklabels=corr.columns.values, cmap="Greens")
```

## Optional

### Plot linear regression

https://seaborn.pydata.org/tutorial/regression.html
https://seaborn.pydata.org/generated/seaborn.pairplot.html

```
sns.pairplot(dataset, x_vars=['CRIM', 'RM', 'INDUS', 'NOX'], y_vars='MEDV', height=5, aspect=.8, kind="reg")

fig, axes = plt.subplots(1, 3, figsize=(16, 6))
sns.regplot(ax=axes[0], x='RM', y='MEDV', data=dataset, order=1, ci=None, line_kws={'color': 'red'});
sns.regplot(ax=axes[1], x='RM', y='MEDV', data=dataset, order=2, ci=None, line_kws={'color': 'red'});
sns.regplot(ax=axes[2], x='RM', y='MEDV', data=dataset, order=10, ci=None, line_kws={'color': 'red'});
```

#### Gradient descent and cost function

```
def costFunction(X, y, theta):
    m = len(y)
    sqHipe = np.matmul(X, theta) - y
    cost = (1 / (2 * m)) * np.sum(sqHipe * sqHipe)
    return cost

def gradientDescent(X, y, theta, alpha, num_iter):
    m = len(y)
    jurnal = np.zeros(num_iter)
    theta_jurnal = np.zeros((num_iter, len(theta)))
    for iter in range(num_iter):
        theta = theta - alpha * (1/m) * np.sum(((np.matmul(X, theta) - y).transpose() * X.transpose()).transpose(), axis=0)
        jurnal[iter] = costFunction(X, y, theta)
        theta_jurnal[iter] = theta
    return theta, jurnal, theta_jurnal
```

### Train model to predict MEDV from CRIM, RM, INDUS, NOX

```
train_dataset = dataset[['MEDV', 'CRIM', 'RM', 'INDUS', 'NOX']]
# train_dataset = ... <select from dataset 'MEDV', 'CRIM', 'RM', 'INDUS', 'NOX' features>

train_features = train_dataset.copy()
train_labels = train_features.pop('MEDV')

stats = train_features.describe().transpose()[['mean', 'std', 'count', 'min', 'max']]
stats

normalized_train_features = (train_features - stats['mean'].transpose()) / stats['std'].transpose()
normalized_train_features.tail()

normalized_ones_features = normalized_train_features.copy()
normalized_ones_features.insert(0, 'Ones', 1.0)
normalized_ones_features.head()

theta = np.zeros(len(normalized_ones_features.columns))
alpha = 0.01
num_iters = 400

theta, jurnal, theta_jurnal = gradientDescent(normalized_ones_features.to_numpy(), train_labels.to_numpy(), theta, alpha, num_iters)

print(f"Hypothesis: h(X)= {theta[0]:.3f} {theta[1]:+.3f}*CRIM {theta[2]:+.3f}*RM {theta[3]:+.3f}*INDUS {theta[4]:+.3f}*NOX")
```

#### Predict MEDV from CRIM, RM, INDUS, NOX

```
score_elem = np.array([0.03237, 6.998, 2.18, 0.458])
expected_prediction = 30.319424810512324

score_input = (score_elem - stats['mean'].transpose()) / stats['std'].transpose()
score_elem = np.insert(score_input.to_numpy(), 0, 1, axis=0)

test_medv = np.matmul(score_elem, theta)
print("Predicted MEDV:", test_medv, " expected value ", expected_prediction)
```
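As a quick sanity check on the hand-rolled gradient descent, the learned parameters can be compared against an ordinary least-squares fit. The sketch below assumes the `normalized_train_features`, `train_labels`, and `theta` objects defined above; it is an optional cross-check, not part of the assignment.

```
from sklearn.linear_model import LinearRegression

# Fit ordinary least squares on the same normalized features used for gradient descent
ols = LinearRegression()
ols.fit(normalized_train_features.to_numpy(), train_labels.to_numpy())

# theta[0] is the bias learned through the column of ones; theta[1:] are the feature weights
print("Gradient descent:", theta[0], theta[1:])
print("OLS             :", ols.intercept_, ols.coef_)

# With enough iterations the two solutions should agree to a few decimal places
print("Max coefficient gap:", abs(theta[1:] - ols.coef_).max())
```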
# Using the CIViC API ### Import Modules ``` import pandas as pd import api_tools as apit ``` ### Load Genes ``` civic_gene = apit.Endpoint(url="https://civicdb.org/api/genes?count=238") df_civic_gene = civic_gene.data_as_pandas("records") df_civic_gene ``` ### Load Variants ``` civic_variants = apit.Endpoint(url="https://civicdb.org/api/variants?count=3056") df_civic_variants = civic_variants.data_as_pandas("records") df_civic_variants ``` ### Load Evidence Items ##### Load all records ``` civic_evidence = apit.Endpoint(url="https://civicdb.org/api/evidence_items?count=8579") df_civic_evidence = civic_evidence.data_as_pandas("records") df_civic_evidence ``` ##### Filter evidence items for records with "Predictive" evidence type ``` df_civic_evidence_filtered = df_civic_evidence[df_civic_evidence["evidence_type"] == "Predictive"] df_civic_evidence_filtered = df_civic_evidence_filtered[df_civic_evidence_filtered["evidence_direction"] == "Supports"] df_civic_evidence_filtered = df_civic_evidence_filtered[df_civic_evidence_filtered["clinical_significance"] != "N/A"] df_civic_evidence_filtered.head() ``` ``` df_variant_evidence = df_civic_evidence_filtered.merge(df_civic_variants, left_on="variant_id", right_on="id", how="left") df_variant_evidence.head() ``` ##### Isolate the columns of interest and obtain the drug names associated with each variant and evidence item ``` columns_of_interest = ["name_y", "drugs", "evidence_level", "clinical_significance", "disease", "entrez_id"] df_variant_evidence_filtered = df_variant_evidence.loc[:, columns_of_interest] df_variant_evidence_filtered.head() for i in range(df_variant_evidence_filtered.shape[0]): drug_list = [drug["name"] for drug in df_variant_evidence_filtered.loc[i, "drugs"]] therapy_regimen = "+".join(drug_list) df_variant_evidence_filtered.loc[i, "therapy_regimen"] = therapy_regimen df_variant_evidence_filtered.loc[i, "disease"] = df_variant_evidence_filtered.loc[i, "disease"]["name"] diseases = df_variant_evidence_filtered[["disease"]].drop_duplicates() diseases.iloc[:100] from thefuzz import process oncotree_mapping = [ ] for disease in df_variant_evidence_filtered["disease"].drop_duplicates(): oncotree_name, score, score2 = process.extractOne(disease, oncotree.name, scorer=fuzz.token_sort_ratio) oncotree_code = oncotree.set_index('name').loc[oncotree_name,'oncotree'] oncotree_mapping.append([disease,oncotree_name,score,score2,oncotree_code]) full_mapping = pd.DataFrame(oncotree_mapping, columns = ['disease','oncotree_name','score','score2','oncotree']) df_variant_evidence_filtered['oncotree'] = df_variant_evidence_filtered.disease.map(full_mapping.set_index('disease')['oncotree']) full_mapping.to_csv("doid_to_oncotree.csv") df_variant_evidence_filtered process.extractOne(disease, oncotree.name, scorer=fuzz.partial_ratio) oncotree.name.loc[oncotree.name.str.contains('leuk',case=False)] full_mapping.sort_values('score').sample(50) pd.DataFrame(oncotree_mapping, columns = ['disease','oncotree_name','score','score2','oncotree']) disease, score, score2 = process.extractOne(disease.disease, oncotree.name, scorer=fuzz.token_sort_ratio) oncotree_code = oncotree.set_index('name').loc[disease,'oncotree'] diseases.sample(50) disease ``` ##### Obtain gene names from `entrez_id` ``` gene_list = pd.read_csv("CancerGeneList.tsv", sep="\t", header=0, usecols=[0, 1]) df_variant_evidence_filtered = df_variant_evidence_filtered.merge(gene_list, left_on="entrez_id", right_on="Entrez_Id", how="left") df_variant_evidence_filtered = 
df_variant_evidence_filtered.drop(columns=["Entrez_Id", "drugs"]) df_variant_evidence_filtered.head() column_mappings = { "name_y": "variant", "therapy_regimen": "TherapyRegimen", "evidence_level": "EvidenceLevel", "clinical_significance": "ClinicalSignificance", "disease": "Disease", "Gene_Symbol": "Gene" } df_variant_evidence_filtered.rename(columns=column_mappings, inplace=True) df_variant_evidence_filtered.head(20) import requests def oncotree(): HEADER = { 'accept': 'application/json' } response = requests.get('http://oncotree.mskcc.org/api/tumorTypes/tree',headers = HEADER) return (response.json()) def generate_oncotree_mapping(): tree = oncotree() node = tree['TISSUE'] stack = [node] mapping = [] while len(stack) > 0: node = stack.pop() for key, child in node['children'].items(): stack.append(child) parent = node['parent'] # print(node) if parent == "TISSUE": parent = "Disease" if node['code'] != "TISSUE": if "NCI" in node['externalReferences'].keys(): for umls in node['externalReferences']['NCI']: mapping.append([umls, node['code'], node['name']]) return pd.DataFrame(mapping, columns = ['NCI','oncotree','name']) def generate_disease_ontology_mapping(): with open("/home/ec2-user/bmi-210-final-project/source_data/DO_cancer_slim.json") as f: do_cancer = json.load(f) mapping = [] for node in do_cancer['graphs'][0]['nodes']: doid = node['id'].split("/")[-1].split("_")[-1] if 'xrefs' in node['meta'].keys(): for i in node['meta']['xrefs']: if "NCI:" in i['val']: umls = i['val'].split(":")[-1] mapping.append([umls, doid, node['lbl']]) umls_to_do_cancer = pd.DataFrame(mapping, columns = ['NCI','doid','name']) return umls_to_do_cancer oncotree = generate_oncotree_mapping() do = generate_disease_ontology_mapping() # mapping = oncotree.merge(do, on = "NCI", how = "inner") do oncotree from thefuzz import fuzz oncotree['closest_doid'] = None closest = 0 for index, row in oncotree.iterrows(): ratio = fuzz.ratio(row['name'], i) if ratio > closest: oncotree.loc[index,'closest_doid'] = i closest = ratio break do diseases.isin(do.name).value_counts() mapping.loc[mapping.doid =='3'] df_variant_evidence_filtered.loc[df_variant_evidence_filtered.disease.apply(lambda x: str(x['id'])).isin(mapping.doid).value_counts()] import json with open("/home/ec2-user/bmi-210-final-project/source_data/DO_cancer_slim.json") as f: do_cancer = json.load(f) umls_to_do_cancer = pd.DataFrame(mapping, columns = ['UMLS', 'doid']) do_cancer_to_oncotree = umls_to_do_cancer.merge(umls_to_oncotree, on = "UMLS", how = "inner") do_cancer_to_oncotree ``` ##### Safe Table to CSV ``` df_variant_evidence_filtered.to_csv("CIViC_variant_evidence.csv") df_variant_evidence_filtered ```
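Fuzzy string matching will occasionally pick a poor OncoTree term, so it can be useful to flag low-scoring matches for manual curation before trusting the mapping. A minimal sketch, assuming the `full_mapping` dataframe built above; the cutoff of 80 is a hypothetical value to tune against a manually checked sample.

```
REVIEW_CUTOFF = 80  # hypothetical threshold - tune it against a manually checked sample

# Mappings at or above the cutoff are accepted as-is
accepted = full_mapping[full_mapping["score"] >= REVIEW_CUTOFF]

# Everything below the cutoff is written out for manual review
needs_review = full_mapping[full_mapping["score"] < REVIEW_CUTOFF]
needs_review.sort_values("score").to_csv("oncotree_mapping_needs_review.csv", index=False)

print(f"Accepted automatically: {len(accepted)}")
print(f"Flagged for manual review: {len(needs_review)}")
```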
```
import numpy as np
from tqdm import tqdm_notebook

from sdcdup.utils import overlap_tag_pairs
from sdcdup.utils import overlap_tag_maps
from sdcdup.utils import get_overlap_matches
from sdcdup.utils import load_duplicate_truth
from sdcdup.features import SDCImageContainer

%reload_ext autoreload
%autoreload 2

matches_files = [
    'matches_bmh32_0.9_offset.csv',
    'matches_bmh96_0.9_offset.csv',
    'matches_bmh32_0.8.csv',
    'matches_bmh96_0.8.csv',
]

sdcic = SDCImageContainer()
sdcic.matches = get_overlap_matches(matches_files)
overlap_image_maps = sdcic.load_image_overlap_properties(matches_files, score_types=['shp'])
print(len(overlap_image_maps))

dup_truth = load_duplicate_truth()
print(len(dup_truth))
```

## Find overlaps with ships

```
# NOTE: This step is here to make the next two cells run 2 orders of magnitude faster.
overlap_image_maps2 = {}
for (img1_id, img2_id, img1_overlap_tag), scores in tqdm_notebook(overlap_image_maps.items()):
    if (img1_id, img2_id) not in overlap_image_maps2:
        overlap_image_maps2[(img1_id, img2_id)] = {}
    overlap_image_maps2[(img1_id, img2_id)][img1_overlap_tag] = scores

untested_image_pairs_with_ship_masks = []
for (img1_id, img2_id), overlap_maps in tqdm_notebook(overlap_image_maps2.items()):
    mask1 = sdcic.img_metrics['shp'][img1_id]
    mask2 = sdcic.img_metrics['shp'][img2_id]
    has_mask1 = np.sum(mask1) > 0
    has_mask2 = np.sum(mask2) > 0
    if not (has_mask1 and has_mask2):
        continue
    for img1_overlap_tag in overlap_maps:
        if (img1_id, img2_id, img1_overlap_tag) in dup_truth:
            continue
        untested_image_pairs_with_ship_masks.append((img1_id, img2_id))
        break

len(untested_image_pairs_with_ship_masks)

untested_overlaps_with_ship_masks = []
for (img1_id, img2_id), overlap_maps in tqdm_notebook(overlap_image_maps2.items()):
    mask1 = sdcic.img_metrics['shp'][img1_id]
    mask2 = sdcic.img_metrics['shp'][img2_id]
    has_mask1 = np.sum(mask1) > 0
    has_mask2 = np.sum(mask2) > 0
    if not (has_mask1 and has_mask2):
        continue
    for img1_overlap_tag in overlap_maps:
        if (img1_id, img2_id, img1_overlap_tag) in dup_truth:
            continue
        mask1_slice_total = np.sum(mask1[overlap_tag_maps[img1_overlap_tag]])
        mask2_slice_total = np.sum(mask2[overlap_tag_maps[overlap_tag_pairs[img1_overlap_tag]]])
        if mask1_slice_total + mask2_slice_total < 1:
            continue
        untested_overlaps_with_ship_masks.append((img1_id, img2_id, img1_overlap_tag))

len(untested_overlaps_with_ship_masks)
```
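Since the candidate overlaps are typically reviewed and labeled later, it can help to persist them once collected. A minimal sketch, assuming the `untested_overlaps_with_ship_masks` list built above; the output filename is arbitrary.

```
import pandas as pd

# Each entry is an (img1_id, img2_id, img1_overlap_tag) triple
df_untested = pd.DataFrame(untested_overlaps_with_ship_masks,
                           columns=["img1_id", "img2_id", "img1_overlap_tag"])

# Persist the candidates so they can be labeled and folded into the duplicate truth later
df_untested.to_csv("untested_overlaps_with_ship_masks.csv", index=False)
print(f"Wrote {len(df_untested)} candidate overlaps")
```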
## Create an End-to-End Pipeline using Azure Machine Learning ### Connect to an Azure Machine Learning Workspace ``` import azureml.core from azureml.core import Workspace # Load the workspace from the saved config file ws = Workspace.from_config() print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name)) ``` ### Prepare the Training Data Local data files can be used to train a model, but when running training workloads on cloud-based compute, it makes more sense to store the data centrally in the cloud and then ingest it wherever the training script happens to be running. Here the training data is uploaded to a *datastore* and then a *dataset* is defined. For simplicity, the data is uploaded to the *default* datastore for your Azure Machine Learning workspace. In production, a datastore that references an existing cloud data storage location would be registered (e.g., a Data Lake). A *tabular* dataset is then created using the existing CSV files. ``` from azureml.core import Dataset default_ds = ws.get_default_datastore() if 'diabetes dataset' not in ws.datasets: default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], target_path='diabetes-data/', overwrite=True, show_progress=True) # Create a tabular dataset from the path on the datastore tab_ds = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv')) # Register the tabular dataset try: tab_ds = tab_ds.register(workspace=ws, name='diabetes dataset', description='diabetes data', tags = {'format':'CSV'}, create_new_version=True) print('Dataset registered.') except Exception as ex: print(ex) else: print('Dataset already registered.') ``` ### Create Scripts for Pipeline Steps - Create a folder dedicated to holding the scripts for each pipeline step - For the first pipeline step, generate a script that trains the machine learning model - For the second pipeline step, generate a script that registers the machine learning model ``` # Create a folder for the pipeline step files import os experiment_folder = 'diabetes_pipeline' os.makedirs(experiment_folder, exist_ok=True) print(experiment_folder) %%writefile $experiment_folder/train_diabetes.py # Import libraries from azureml.core import Run import argparse import pandas as pd import numpy as np import joblib from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import roc_auc_score # Get parameters parser = argparse.ArgumentParser() parser.add_argument('--output_folder', type=str, dest='output_folder', default="diabetes_model", help='output folder') args = parser.parse_args() output_folder = args.output_folder # Get the experiment run context run = Run.get_context() # load the diabetes data (passed as an input dataset) print("Loading Data...") diabetes = run.input_datasets['diabetes_train'].to_pandas_dataframe() # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Train adecision tree model print('Training a decision tree model') model = DecisionTreeClassifier().fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = 
model.predict_proba(X_test) auc = roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) # Save the trained model os.makedirs(output_folder, exist_ok=True) output_path = output_folder + "/model.pkl" joblib.dump(value=model, filename=output_path) run.complete() %%writefile $experiment_folder/register_diabetes.py # Import libraries import argparse import joblib from azureml.core import Workspace, Model, Run # Get parameters parser = argparse.ArgumentParser() parser.add_argument('--model_folder', type=str, dest='model_folder', default="diabetes_model", help='model location') args = parser.parse_args() model_folder = args.model_folder # Get the experiment run context run = Run.get_context() # load the model print("Loading model from " + model_folder) model_file = model_folder + "/model.pkl" model = joblib.load(model_file) Model.register(workspace=run.experiment.workspace, model_path = model_file, model_name = 'diabetes_model', tags={'Training context':'Pipeline'}) run.complete() ``` ### Prepare a Compute Environment ``` from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException cluster_name = "train-cluster" try: # Check for existing compute target pipeline_cluster = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: # If it doesn't already exist, create it try: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2' , max_nodes=2) pipeline_cluster = ComputeTarget.create(ws, cluster_name, compute_config) pipeline_cluster.wait_for_completion(show_output=True) except Exception as ex: print(ex) ``` ### Define a Run Configuration The compute requires a Python environment with the necessary package dependencies installed ``` from azureml.core import Environment from azureml.core.conda_dependencies import CondaDependencies from azureml.core.runconfig import RunConfiguration # Create a Python environment for the experiment diabetes_env = Environment("diabetes-pipeline-env") diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies diabetes_env.docker.enabled = True # Use a docker container # Create a set of package dependencies diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','pandas'], pip_packages=['azureml-defaults','azureml-dataprep[pandas]']) # Add the dependencies to the environment diabetes_env.python.conda_dependencies = diabetes_packages # Register the environment (just in case you want to use it again) diabetes_env.register(workspace=ws) registered_env = Environment.get(ws, 'diabetes-pipeline-env') # Create a new runconfig object for the pipeline pipeline_run_config = RunConfiguration() # Use the compute you created above. pipeline_run_config.target = pipeline_cluster # Assign the environment to the run configuration pipeline_run_config.environment = registered_env print ("Run configuration created.") ``` ## Create and Run a Pipeline First you need to define the steps for the pipeline, and any data references that need to passed between them. In this case, the first step must write the model to a folder that can be read from by the second step. Since the steps will be run on remote compute (and in fact, could each be run on different compute), the folder path must be passed as a data reference to a location in a datastore within the workspace. 
The **PipelineData** object is a special kind of data reference that is used to pass data from the output of one pipeline step to the input of another, creating a dependency between them. You'll create one and use it as the output for the first step and the input for the second step. Note that you also need to pass it as a script argument so your code can access the datastore location referenced by the data reference. ``` from azureml.pipeline.core import PipelineData from azureml.pipeline.steps import PythonScriptStep, EstimatorStep from azureml.train.estimator import Estimator # Get the training dataset diabetes_ds = ws.datasets.get("diabetes dataset") # Create a PipelineData (temporary Data Reference) for the model folder model_folder = PipelineData("model_folder", datastore=ws.get_default_datastore()) estimator = Estimator(source_directory=experiment_folder, compute_target = pipeline_cluster, environment_definition=pipeline_run_config.environment, entry_script='train_diabetes.py') # Step 1, run the estimator to train the model train_step = EstimatorStep(name = "Train Model", estimator=estimator, estimator_entry_script_arguments=['--output_folder', model_folder], inputs=[diabetes_ds.as_named_input('diabetes_train')], outputs=[model_folder], compute_target = pipeline_cluster, allow_reuse = True) # Step 2, run the model registration script register_step = PythonScriptStep(name = "Register Model", source_directory = experiment_folder, script_name = "register_diabetes.py", arguments = ['--model_folder', model_folder], inputs=[model_folder], compute_target = pipeline_cluster, runconfig = pipeline_run_config, allow_reuse = True) print("Pipeline steps defined") ``` ### Build the Pipeline from the Steps, and then Run it as an AML Experiment > **Note**: This will take a while. The training cluster must be started and configured with the Python environment before the scripts can be run. This is a good time for a coffee break! ``` from azureml.core import Experiment from azureml.pipeline.core import Pipeline # Construct the pipeline pipeline_steps = [train_step, register_step] pipeline = Pipeline(workspace = ws, steps=pipeline_steps) print("Pipeline is built.") # Create an experiment and run the pipeline experiment = Experiment(workspace = ws, name = 'diabetes-training-pipeline') pipeline_run = experiment.submit(pipeline, regenerate_outputs=True) print("Pipeline submitted for execution.") pipeline_run.wait_for_completion(show_output=True) ``` ### Verify that the Newly Trained Model Exists A new model should be registered with a Training context tag indicating it was trained in a pipeline. ``` from azureml.core import Model for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n') ``` ### Publish the Pipeline as a REST Service ``` published_pipeline = pipeline.publish(name="Diabetes_Training_Pipeline", description="Trains diabetes model", version="1.0") rest_endpoint = published_pipeline.endpoint print(rest_endpoint) ``` ### Test the New REST Service Endpoint - Use the authorization header from the current Azure workspace connection to authenticate the call to the REST Service. - Since the pipeline runs asynchronously, an identifier is returned that can be used to track the experiment at runtime. 
``` from azureml.core.authentication import InteractiveLoginAuthentication interactive_auth = InteractiveLoginAuthentication() auth_header = interactive_auth.get_authentication_header() import requests from azureml.pipeline.core.run import PipelineRun experiment_name = 'Run-diabetes-pipeline' response = requests.post(rest_endpoint, headers=auth_header, json={"ExperimentName": experiment_name}) run_id = response.json()["Id"] print("Tracking Run: ", run_id) published_pipeline_run = PipelineRun(ws.experiments[experiment_name], run_id) published_pipeline_run.wait_for_completion(show_output=True) ```
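Interactive login works well in a notebook, but a published endpoint that is called from automation (a scheduler, a CI job, and so on) usually needs non-interactive credentials. A possible variation using a service principal is sketched below; the tenant ID, application ID, and secret are placeholders you would replace with values from your own Azure AD app registration.

```
import requests
from azureml.core.authentication import ServicePrincipalAuthentication

# Placeholder credentials - replace with values from your own service principal
sp_auth = ServicePrincipalAuthentication(tenant_id="<your-tenant-id>",
                                         service_principal_id="<your-app-id>",
                                         service_principal_password="<your-app-secret>")

# The header has the same shape as the one returned by InteractiveLoginAuthentication
auth_header = sp_auth.get_authentication_header()

response = requests.post(rest_endpoint,
                         headers=auth_header,
                         json={"ExperimentName": "Run-diabetes-pipeline"})
print("Tracking Run: ", response.json()["Id"])
```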
``` """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # If you're using Google Colab and not running locally, run this cell. ## Install dependencies !pip install wget !apt-get install sox libsndfile1 ffmpeg !pip install unidecode # ## Install NeMo BRANCH = 'v1.0.0' !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr] ## Install TorchAudio !pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html ## Grab the config we'll use in this example !mkdir configs ``` # Introduction This Speech Command recognition tutorial is based on the MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). MatchboxNet is a modified form of the QuartzNet architecture from the paper "[QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions](https://arxiv.org/pdf/1910.10261.pdf)" with a modified decoder head to suit classification tasks. The notebook will follow the steps below: - Dataset preparation: Preparing Google Speech Commands dataset - Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC) - Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples. - Develop a small Neural classification model that can be trained efficiently. - Model training on the Google Speech Commands dataset in NeMo. - Evaluation of error cases of the model by audibly hearing the samples ``` # Some utility imports import os from omegaconf import OmegaConf # This is where the Google Speech Commands directory will be placed. # Change this if you don't want the data to be extracted in the current directory. # Select the version of the dataset required as well (can be 1 or 2) DATASET_VER = 1 data_dir = './google_dataset_v{0}/'.format(DATASET_VER) if DATASET_VER == 1: MODEL_CONFIG = "matchboxnet_3x1x64_v1.yaml" else: MODEL_CONFIG = "matchboxnet_3x1x64_v2.yaml" if not os.path.exists(f"configs/{MODEL_CONFIG}"): !wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/{MODEL_CONFIG}" ``` # Data Preparation We will be using the open-source Google Speech Commands Dataset (we will use V1 of the dataset for the tutorial but require minor changes to support the V2 dataset). These scripts below will download the dataset and convert it to a format suitable for use with NeMo. ## Download the dataset The dataset must be prepared using the scripts provided under the `{NeMo root directory}/scripts` sub-directory. Run the following command below to download the data preparation script and execute it. **NOTE**: You should have at least 4GB of disk space available if youโ€™ve used --data_version=1; and at least 6GB if you used --data_version=2. Also, it will take some time to download and process, so go grab a coffee. 
**NOTE**: You may additionally pass a `--rebalance` flag at the end of the `process_speech_commands_data.py` script to rebalance the class samples in the manifest. ``` if not os.path.exists("process_speech_commands_data.py"): !wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_speech_commands_data.py ``` ### Preparing the manifest file The manifest file is a simple file that has the full path to the audio file, the duration of the audio file, and the label that is assigned to that audio file. This notebook is only a demonstration, and therefore we will use the `--skip_duration` flag to speed up construction of the manifest file. **NOTE: When replicating the results of the paper, do not use this flag and prepare the manifest file with correct durations.** ``` !mkdir {data_dir} !python process_speech_commands_data.py --data_root={data_dir} --data_version={DATASET_VER} --skip_duration --log print("Dataset ready !") ``` ## Prepare the path to manifest files ``` dataset_path = 'google_speech_recognition_v{0}'.format(DATASET_VER) dataset_basedir = os.path.join(data_dir, dataset_path) train_dataset = os.path.join(dataset_basedir, 'train_manifest.json') val_dataset = os.path.join(dataset_basedir, 'validation_manifest.json') test_dataset = os.path.join(dataset_basedir, 'validation_manifest.json') ``` ## Read a few rows of the manifest file Manifest files are the data structure used by NeMo to declare a few important details about the data : 1) `audio_filepath`: Refers to the path to the raw audio file <br> 2) `command`: The class label (or speech command) of this sample <br> 3) `duration`: The length of the audio file, in seconds. ``` !head -n 5 {train_dataset} ``` # Training - Preparation We will be training a MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). The benefit of MatchboxNet over JASPER models is that they use 1D Time-Channel Separable Convolutions, which greatly reduce the number of parameters required to obtain good model accuracy. MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRXC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout. An image of QuartzNet, the base configuration of MatchboxNet models, is provided below. <p align="center"> <img src="https://developer.nvidia.com/blog/wp-content/uploads/2020/05/quartznet-model-architecture-1-625x742.png"> </p> ``` # NeMo's "core" package import nemo # NeMo's ASR collection - this collections contains complete ASR models and # building blocks (modules) for ASR import nemo.collections.asr as nemo_asr ``` ## Model Configuration The MatchboxNet Model is defined in a config file which declares multiple important sections. 
They are: 1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information 2) `trainer`: Any argument to be passed to PyTorch Lightning ``` # This line will print the entire config of the MatchboxNet model config_path = f"configs/{MODEL_CONFIG}" config = OmegaConf.load(config_path) config = OmegaConf.to_container(config, resolve=True) config = OmegaConf.create(config) print(OmegaConf.to_yaml(config)) # Preserve some useful parameters labels = config.model.labels sample_rate = config.sample_rate ``` ### Setting up the datasets within the config If you'll notice, there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are configurations used to setup the Dataset and DataLoaders of the corresponding config. ``` print(OmegaConf.to_yaml(config.model.train_ds)) ``` ### `???` inside configs You will often notice that some configs have `???` in place of paths. This is used as a placeholder so that the user can change the value at a later time. Let's add the paths to the manifests to the config above. ``` config.model.train_ds.manifest_filepath = train_dataset config.model.validation_ds.manifest_filepath = val_dataset config.model.test_ds.manifest_filepath = test_dataset ``` ## Building the PyTorch Lightning Trainer NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem! Lets first instantiate a Trainer object! ``` import torch import pytorch_lightning as pl print("Trainer config - \n") print(OmegaConf.to_yaml(config.trainer)) # Lets modify some trainer configs for this demo # Checks if we have GPU available and uses it cuda = 1 if torch.cuda.is_available() else 0 config.trainer.gpus = cuda # Reduces maximum number of epochs to 5 for quick demonstration config.trainer.max_epochs = 5 # Remove distributed training flags config.trainer.accelerator = None trainer = pl.Trainer(**config.trainer) ``` ## Setting up a NeMo Experiment NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it ! ``` from nemo.utils.exp_manager import exp_manager exp_dir = exp_manager(trainer, config.get("exp_manager", None)) # The exp_dir provides a path to the current experiment for easy access exp_dir = str(exp_dir) exp_dir ``` ## Building the MatchboxNet Model MatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows. ``` asr_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer) ``` # Training a MatchboxNet Model As MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` ! 
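Before committing to the full run, it can be worth a quick smoke test that pushes a single batch through training and validation to catch wiring problems early. A minimal sketch using standard PyTorch Lightning flags; the throwaway trainer below is illustrative and separate from the `trainer` configured above.

```
import pytorch_lightning as pl  # already imported above

# Throwaway trainer: fast_dev_run pushes one batch through train/val to catch wiring errors
smoke_trainer = pl.Trainer(gpus=cuda, fast_dev_run=True)
smoke_trainer.fit(asr_model)
```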
### Monitoring training progress Before we begin training, let's first create a Tensorboard visualization to monitor progress ``` try: from google import colab COLAB_ENV = True except (ImportError, ModuleNotFoundError): COLAB_ENV = False # Load the TensorBoard notebook extension if COLAB_ENV: %load_ext tensorboard else: print("To use tensorboard, please use this notebook in a Google Colab environment.") if COLAB_ENV: %tensorboard --logdir {exp_dir} else: print("To use tensorboard, please use this notebook in a Google Colab environment.") ``` ### Training for 5 epochs We see below that the model begins to get modest scores on the validation set after just 5 epochs of training ``` trainer.fit(asr_model) ``` ### Evaluation on the Test set Lets compute the final score on the test set via `trainer.test(model)` ``` trainer.test(asr_model, ckpt_path=None) ``` # Fast Training We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision. For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html) For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html) ```python # Mixed precision: trainer = Trainer(amp_level='O1', precision=16) # Trainer with a distributed backend: trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp') # Of course, you can combine these flags as well. ``` # Evaluation of incorrectly predicted samples Given that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions. For this, we need the support of the librosa library. **NOTE**: The following code depends on librosa. To install it, run the following code block first. ``` !pip install librosa ``` ## Extract the predictions from the model We want to possess the actual logits of the model instead of just the final evaluation score, so we can define a function to perform the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided. ## Accessing the data loaders We can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze. For convenience, we can access these instantiated data loaders using the following accessors - `asr_model._train_dl`, `asr_model._validation_dl` and `asr_model._test_dl`. ``` asr_model.setup_test_data(config.model.test_ds) test_dl = asr_model._test_dl ``` ## Partial Test Step Below we define a utility function to perform most of the test step. 
For reference, the test step is defined as follows: ```python def test_step(self, batch, batch_idx, dataloader_idx=0): audio_signal, audio_signal_len, labels, labels_len = batch logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len) loss_value = self.loss(logits=logits, labels=labels) correct_counts, total_counts = self._accuracy(logits=logits, labels=labels) return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts} ``` ``` @torch.no_grad() def extract_logits(model, dataloader): logits_buffer = [] label_buffer = [] # Follow the above definition of the test_step for batch in dataloader: audio_signal, audio_signal_len, labels, labels_len = batch logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len) logits_buffer.append(logits) label_buffer.append(labels) print(".", end='') print() print("Finished extracting logits !") logits = torch.cat(logits_buffer, 0) labels = torch.cat(label_buffer, 0) return logits, labels cpu_model = asr_model.cpu() cpu_model.eval() logits, labels = extract_logits(cpu_model, test_dl) print("Logits:", logits.shape, "Labels :", labels.shape) # Compute accuracy - `_accuracy` is a PyTorch Lightning Metric ! acc = cpu_model._accuracy(logits=logits, labels=labels) print("Accuracy : ", float(acc[0]*100)) ``` ## Filtering out incorrect samples Let us now filter out the incorrectly labeled samples from the total set of samples in the test set ``` import librosa import json import IPython.display as ipd # First let's create a utility class to remap the integer class labels to actual string label class ReverseMapLabel: def __init__(self, data_loader): self.label2id = dict(data_loader.dataset.label2id) self.id2label = dict(data_loader.dataset.id2label) def __call__(self, pred_idx, label_idx): return self.id2label[pred_idx], self.id2label[label_idx] # Next, let's get the indices of all the incorrectly labeled samples sample_idx = 0 incorrect_preds = [] rev_map = ReverseMapLabel(test_dl) # Remember, evaluated_tensor = (loss, logits, labels) probs = torch.softmax(logits, dim=-1) probas, preds = torch.max(probs, dim=-1) total_count = cpu_model._accuracy.total_counts_k[0] incorrect_ids = (preds != labels).nonzero() for idx in incorrect_ids: proba = float(probas[idx][0]) pred = int(preds[idx][0]) label = int(labels[idx][0]) idx = int(idx[0]) + sample_idx incorrect_preds.append((idx, *rev_map(pred, label), proba)) print(f"Num test samples : {total_count.item()}") print(f"Num errors : {len(incorrect_preds)}") # First lets sort by confidence of prediction incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False) ``` ## Examine a subset of incorrect samples Let's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples ``` for incorrect_sample in incorrect_preds[:20]: print(str(incorrect_sample)) ``` ## Define a threshold below which we designate a model's prediction as "low confidence" ``` # Filter out how many such samples exist low_confidence_threshold = 0.25 count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds))) print(f"Number of low confidence predictions : {count_low_confidence}") ``` ## Let's hear the samples which the model has least confidence in ! 
```
# First let's create a helper function to parse the manifest files
def parse_manifest(manifest):
    data = []
    for line in manifest:
        line = json.loads(line)
        data.append(line)
    return data

# Next, let's create a helper function to actually listen to certain samples
def listen_to_file(sample_id, pred=None, label=None, proba=None):
    # Load the audio waveform using librosa
    filepath = test_samples[sample_id]['audio_filepath']
    audio, sample_rate = librosa.load(filepath)

    if pred is not None and label is not None and proba is not None:
        print(f"Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}")
    else:
        print(f"Sample : {sample_id}")

    return ipd.Audio(audio, rate=sample_rate)

# Now let's load the test manifest into memory
test_samples = []
with open(test_dataset, 'r') as test_f:
    test_samples = test_f.readlines()

test_samples = parse_manifest(test_samples)

# Finally, let's listen to all the audio samples where the model made a mistake
# Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds`
count = min(count_low_confidence, 20)  # replace this line with just `count_low_confidence` to listen to all samples with low confidence

for sample_id, pred, label, proba in incorrect_preds[:count]:
    ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba))
```

# Fine-tuning on a new dataset

So far, we have trained the model on all 30/35 classes of the Google Speech Commands dataset (v1/v2). We will now show an example of fine-tuning this trained model on a subset of the classes.

## Preparing the data subsets

Let's select 2 of the classes, `yes` and `no`, and prepare our manifests with just these classes.

```
import json

def extract_subset_from_manifest(name: str, manifest_path: str, labels: list):
    manifest_dir = os.path.split(manifest_path)[0]
    labels = set(labels)
    manifest_values = []

    print(f"Parsing manifest: {manifest_path}")

    with open(manifest_path, 'r') as f:
        for line in f:
            val = json.loads(line)
            if val['command'] in labels:
                manifest_values.append(val)

    print(f"Number of files extracted from dataset: {len(manifest_values)}")

    outpath = os.path.join(manifest_dir, name)
    with open(outpath, 'w') as f:
        for val in manifest_values:
            json.dump(val, f)
            f.write("\n")
            f.flush()

    print("Manifest subset written to path :", outpath)
    print()

    return outpath

labels = ["yes", "no"]

train_subdataset = extract_subset_from_manifest("train_subset.json", train_dataset, labels)
val_subdataset = extract_subset_from_manifest("val_subset.json", val_dataset, labels)
test_subdataset = extract_subset_from_manifest("test_subset.json", test_dataset, labels)
```

## Saving/Restoring a checkpoint

There are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning Modules, we can use the standard way that PyTorch Lightning saves and restores models.

NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use.

In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method.

### Saving and Restoring via PyTorch Lightning Checkpoints

When using NeMo for training, it is advisable to utilize the `exp_manager` framework. It is tasked with handling checkpointing and logging (Tensorboard as well as WandB, optionally!), as well as dealing with multi-node and multi-GPU logging.
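Since NeMo models are regular Lightning modules (as noted above), the plain PyTorch Lightning checkpoint API can also be used directly. Below is a minimal, hedged sketch of that path; the file name is hypothetical, and the rest of this notebook instead relies on the checkpoints written by `exp_manager`.

```python
# Minimal sketch (hypothetical file name) of the plain PyTorch Lightning route.
# Save the trained model's weights and hyperparameters manually:
trainer.save_checkpoint("asr_model_manual.ckpt")

# Restore it later through the LightningModule class method:
manually_restored = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint("asr_model_manual.ckpt")
```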
Since we utilized the `exp_manager` framework above, we have access to the directory where the checkpoints exist. `exp_manager` with the default settings will save multiple checkpoints for us:

1) A few checkpoints from certain steps of training. They will have `--val_loss=` tags.

2) A checkpoint at the last epoch of training, denoted by `-last`.

3) If the model finishes training, it will also have a `--end` checkpoint.

```
import glob

print(exp_dir)

# Let's list all the checkpoints we have
checkpoint_dir = os.path.join(exp_dir, 'checkpoints')
checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, "*.ckpt")))
checkpoint_paths

# We want the checkpoint saved after the final step of training
final_checkpoint = list(filter(lambda x: "-last.ckpt" in x, checkpoint_paths))[0]
print(final_checkpoint)
```

### Restoring from a PyTorch Lightning checkpoint

To restore a model, use the `LightningModule.load_from_checkpoint()` class method.

```
restored_model = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint(final_checkpoint)
```

## Prepare the model for fine-tuning

Remember, the original model was trained for a 30/35-way classification task. Now we require only a subset of these classes, so we need to modify the decoder head to support fewer classes.

We can do this easily with the convenient function `EncDecClassificationModel.change_labels(new_label_list)`.

By performing this step, we discard the old decoder head, but still preserve the encoder!

```
restored_model.change_labels(labels)
```

### Prepare the data loaders

Upon restoration, the model will not attempt to set up any data loaders. This is so that we can manually set up any datasets we want - train and val to finetune the model, test in order to just evaluate, or all three to do both!

The entire config that we used before can still be accessed via `ModelPT.cfg`, so we will use it in order to set up our data loaders. This also gives us the opportunity to set any additional parameters we wish to set up!

```
import copy

train_subdataset_cfg = copy.deepcopy(restored_model.cfg.train_ds)
val_subdataset_cfg = copy.deepcopy(restored_model.cfg.validation_ds)
test_subdataset_cfg = copy.deepcopy(restored_model.cfg.test_ds)

# Set the paths to the subset of the dataset
train_subdataset_cfg.manifest_filepath = train_subdataset
val_subdataset_cfg.manifest_filepath = val_subdataset
test_subdataset_cfg.manifest_filepath = test_subdataset

# Setup the data loader for the restored model
restored_model.setup_training_data(train_subdataset_cfg)
restored_model.setup_multiple_validation_data(val_subdataset_cfg)
restored_model.setup_multiple_test_data(test_subdataset_cfg)

# Check data loaders are correct
print("Train dataset labels :", restored_model._train_dl.dataset.labels)
print("Val dataset labels :", restored_model._validation_dl.dataset.labels)
print("Test dataset labels :", restored_model._test_dl.dataset.labels)
```

## Setting up a new Trainer and Experiment Manager

A restored model has a utility method to attach the Trainer object to it, which is necessary in order to correctly set up the optimizer and scheduler!

**Note**: The restored model does not contain the trainer config with it. It is necessary to create a new Trainer object suitable for the environment where the model is being trained. The template can be replicated from any of the training scripts.

Here, since we already had the previous config object that prepared the trainer, we could have used it, but for demonstration, we will set up the trainer config manually.
```
# Setup the new trainer object
# Let's modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0

trainer_config = OmegaConf.create(dict(
    gpus=cuda,
    max_epochs=5,
    max_steps=None,  # computed at runtime if not set
    num_nodes=1,
    accumulate_grad_batches=1,
    checkpoint_callback=False,  # Provided by exp_manager
    logger=False,  # Provided by exp_manager
    log_every_n_steps=1,  # Interval of logging.
    val_check_interval=1.0,  # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
))
print(trainer_config.pretty())

trainer_finetune = pl.Trainer(**trainer_config)
```

### Setting the trainer to the restored model

All NeMo models provide a convenience method `set_trainer()` to set up the trainer after restoration.

```
restored_model.set_trainer(trainer_finetune)

exp_dir_finetune = exp_manager(trainer_finetune, config.get("exp_manager", None))
exp_dir_finetune = str(exp_dir_finetune)
exp_dir_finetune
```

## Setup optimizer + scheduler

For a fine-tuning experiment, let's set up the optimizer and scheduler! We will use a much lower learning rate than before, and also swap out the scheduler from PolyHoldDecay to CosineDecay.

```
optim_sched_cfg = copy.deepcopy(restored_model.cfg.optim)

# Struct mode prevents us from popping off elements from the config, so let's disable it
OmegaConf.set_struct(optim_sched_cfg, False)

# Let's change the maximum learning rate to the previous minimum learning rate
optim_sched_cfg.lr = 0.001

# Let's change the scheduler
optim_sched_cfg.sched.name = "CosineAnnealing"

# "power" isn't applicable to CosineAnnealing, so let's remove it
optim_sched_cfg.sched.pop('power')

# "hold_ratio" isn't applicable to CosineAnnealing, so let's remove it
optim_sched_cfg.sched.pop('hold_ratio')

# Set "min_lr" to a lower value
optim_sched_cfg.sched.min_lr = 1e-4

print(optim_sched_cfg.pretty())

# Now let's update the optimizer settings
restored_model.setup_optimization(optim_sched_cfg)

# We can also just directly replace the config inplace if we choose to
restored_model.cfg.optim = optim_sched_cfg
```

## Fine-tune training step

We fine-tune on the subset classification problem. Note that the model was originally trained on these classes (the subset defined here has already been seen during training above), so when fine-tuning on a truly new dataset we will not see such a dramatic improvement in performance. However, it should still converge a little faster than if it were trained from scratch.

### Monitor training progress via Tensorboard

```
if COLAB_ENV:
    %tensorboard --logdir {exp_dir_finetune}
else:
    print("To use tensorboard, please use this notebook in a Google Colab environment.")
```

### Fine-tuning for 5 epochs

```
trainer_finetune.fit(restored_model)
```

### Evaluation on the Test set

Let's compute the final score on the test set via `trainer.test(model)`.

```
trainer_finetune.test(restored_model, ckpt_path=None)
```

## Advanced Usage: Exporting a model in its entirety

While most models can be easily serialized via the Experiment Manager as a PyTorch Lightning checkpoint, there are certain models where this is insufficient. Consider the case where a model contains artifacts such as tokenizers or other intermediate file objects that cannot be so easily serialized into a checkpoint.

For such cases, NeMo offers two utility functions that enable serialization of a model plus its artifacts - `save_to` and `restore_from`. Further documentation regarding these methods can be found in the NeMo documentation pages.
```
import tarfile

# Save a model as a tarfile
restored_model.save_to(os.path.join(exp_dir_finetune, "model.nemo"))

# The above object is just a tarfile which can store additional artifacts.
with tarfile.open(os.path.join(exp_dir_finetune, 'model.nemo')) as blob:
    for item in blob:
        print(item)

# Restore a model from a tarfile
restored_model_2 = nemo_asr.models.EncDecClassificationModel.restore_from(os.path.join(exp_dir_finetune, "model.nemo"))
```

## Conclusion

Once the model has been restored, either via a PyTorch Lightning checkpoint or via the `restore_from` method, one can fine-tune it by following the general steps above.
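To make those "general steps" concrete, here is a compact recap of the fine-tuning recipe demonstrated above. It only reuses calls and variables that already appear in this notebook (`final_checkpoint`, the subset configs, `trainer_finetune`, `optim_sched_cfg`); it is a summary sketch, not a new API.

```python
# Recap sketch of the fine-tuning recipe shown above (no new APIs).
model = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint(final_checkpoint)
model.change_labels(["yes", "no"])                  # swap the decoder head for the new classes

model.setup_training_data(train_subdataset_cfg)     # point the data loaders at the subset manifests
model.setup_multiple_validation_data(val_subdataset_cfg)
model.setup_multiple_test_data(test_subdataset_cfg)

model.set_trainer(trainer_finetune)                 # attach a fresh Trainer
model.setup_optimization(optim_sched_cfg)           # lower LR + CosineAnnealing scheduler

trainer_finetune.fit(model)                         # fine-tune
trainer_finetune.test(model, ckpt_path=None)        # evaluate on the subset test set
```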
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd %matplotlib inline data = pd.read_csv("../data/processed/train.csv", header=None, na_filter=False) ``` ## Positive vs Negative Reviews ``` pos = data[data[0] == 1] neg = data[data[0] == 0] pos_percent = 100 * pos.shape[0] / data.shape[0] neg_percent = 100 * neg.shape[0] / data.shape[0] plt.figure(figsize=(15,10)) plt.bar([0, 1, 2], [len(data), len(pos), len(neg)], color=['g', 'b', 'r']) plt.xticks([0, 1, 2], ['Total', 'Positive ({:.2f}%)'.format(pos_percent), 'Negative ({:.2f}%)'.format(neg_percent)]) plt.show() %%capture from keras.preprocessing.text import Tokenizer pos_reviews = (pos[1] + " " + pos[2]).values pos_tokenizer = Tokenizer(num_words=100) pos_tokenizer.fit_on_texts(pos_reviews) from keras.preprocessing.text import Tokenizer neg_reviews = (neg[1] + " " + neg[2]).values neg_tokenizer = Tokenizer(num_words=100) neg_tokenizer.fit_on_texts(neg_reviews) from nltk.corpus import stopwords stopset = set(stopwords.words('english')) def sorted_words(tokenizer, opposite_tokenizer=None): if opposite_tokenizer: tokens = [(word, frequency) for word, frequency in tokenizer.word_counts.items() if word not in stopset and word not in opposite_tokenizer.word_counts] else: tokens = [(word, frequency) for word, frequency in tokenizer.word_counts.items() if word not in stopset] sorted_words = sorted(tokens, key=lambda item: item[1], reverse=True) return list(sorted_words) ``` ## Top 10 positive words ``` from wordcloud import WordCloud pos_dict = dict((x, y) for x, y in sorted_words(pos_tokenizer)) pos_word_cloud = WordCloud(background_color="white", max_words=10) pos_word_cloud.fit_words(pos_dict) plt.figure(figsize=(15, 10)) plt.imshow(pos_word_cloud) plt.axis("off") plt.show() ``` ## Top 10 negative words ``` from wordcloud import WordCloud neg_dict = dict((x, y) for x, y in sorted_words(neg_tokenizer)) neg_word_cloud = WordCloud(background_color="white", max_words=10) neg_word_cloud.fit_words(neg_dict) plt.figure(figsize=(15, 10)) plt.imshow(neg_word_cloud) plt.axis("off") plt.show() ``` ## Top 10 positive words which are not in negative class ``` from wordcloud import WordCloud pos_only_dict = dict((x, y) for x, y in sorted_words(pos_tokenizer, neg_tokenizer)) pos_only_word_cloud = WordCloud(background_color="white", max_words=10) pos_only_word_cloud.fit_words(pos_only_dict) plt.figure(figsize=(15, 10)) plt.imshow(pos_only_word_cloud) plt.axis("off") plt.show() ``` ## Top 10 negative words which are not in positive class ``` from wordcloud import WordCloud neg_only_dict = dict((x, y) for x, y in sorted_words(neg_tokenizer, pos_tokenizer)) neg_only_word_cloud = WordCloud(background_color="white", max_words=10) neg_only_word_cloud.fit_words(neg_only_dict) plt.figure(figsize=(15, 10)) plt.imshow(neg_only_word_cloud) plt.axis("off") plt.show() ``` ## Number of words per review ``` %%capture from keras.preprocessing.text import text_to_word_sequence number_of_words = [len(text_to_word_sequence(row[1] + " " + row[2])) for row in data.itertuples(index=False, name=None)] plt.boxplot(number_of_words) ndf = pd.DataFrame({'words': number_of_words}) ndf.describe() ``` ## Histogram for number of words per review ``` plt.figure(figsize=(15, 10)) plt.hist(number_of_words, orientation='horizontal', rwidth=0.95) ``` ## BoN + Logistic Regression with different preprocessing options ``` bow_data = pd.read_csv("../reports/f1_score_bow_diff_options.csv") X_bow = bow_data["train_size"].values y_bow = bow_data.drop(['train_size'], 
axis=1).values[0]

colors = ['r', 'b', 'g', 'm', 'c', '#feff60', '#007485', '#7663b0', '#f47be9']
labels = ["Emails + Urls", "Stopwords", "Emoticons", "Lemmatizer", "Punctuation",
          "Repeating vowels", "Stemmer", "Spelling", "Negative constructs"]

plt.figure(figsize=(15, 10))
plt.bar(range(len(y_bow)), y_bow, color=colors)
plt.xticks(range(len(y_bow)), labels)
plt.ylim(min(y_bow) - 0.001, max(y_bow) + 0.0005)
plt.ylabel("F1 score")  # the plotted values come from f1_score_bow_diff_options.csv
plt.show()
```
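The cell above only plots scores that were computed offline and stored in `f1_score_bow_diff_options.csv`. For readers who want to reproduce a single bar, here is a minimal, hypothetical sketch of the kind of bag-of-words + logistic regression pipeline such scores typically come from; it is not the project's actual training code, and the train/test split settings are assumptions.

```python
# Hypothetical sketch (not the project's actual pipeline): score one
# preprocessing variant with a bag-of-words + logistic regression model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

texts = (data[1] + " " + data[2]).values   # title + body, as in the cells above
targets = data[0].values                   # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(texts, targets, test_size=0.2, random_state=42)

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```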
```
%matplotlib inline
```

Language Translation with TorchText
===================================

This tutorial shows how to use ``torchtext`` to preprocess a well-known dataset containing English and German sentences, and to use it to train a sequence-to-sequence (seq2seq) model that translates German sentences into English.

It is based on a `tutorial <https://github.com/bentrevett/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointly%20Learning%20to%20Align%20and%20Translate.ipynb>`__ written by PyTorch community member `Ben Trevett <https://github.com/bentrevett>`__ and was created with Ben's permission. The tutorial has been updated and some legacy code removed.

By the end of this tutorial, you will be able to preprocess sentences into tensors for NLP modeling and use `torch.utils.data.DataLoader <https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader>`__ to train and validate a model.

Data Processing
--------------------------------

``torchtext`` has a variety of utilities for creating datasets that can easily be used to build a language translation model. In this example, we show how to tokenize raw text sentences, build a vocabulary, and numericalize tokens into tensors.

Note: the tokenization in this tutorial requires `Spacy <https://spacy.io>`__. We use Spacy because it provides strong tokenization support for languages other than English. ``torchtext`` provides a ``basic_english`` tokenizer and also supports other tokenizers for English (e.g. `Moses <https://bitbucket.org/luismsgomes/mosestokenizer/src/default/>`__), but for language translation, where multiple languages need to be handled, Spacy is the best fit.

To run this tutorial, first install ``spacy`` using ``pip`` or ``conda``. Then download the data for the English and German Spacy tokenizers:
::

   python -m spacy download en
   python -m spacy download de

```
import torchtext
import torch
from torchtext.data.utils import get_tokenizer
from collections import Counter
from torchtext.vocab import Vocab
from torchtext.utils import download_from_url, extract_archive
import io

url_base = 'https://raw.githubusercontent.com/multi30k/dataset/master/data/task1/raw/'
train_urls = ('train.de.gz', 'train.en.gz')
val_urls = ('val.de.gz', 'val.en.gz')
test_urls = ('test_2016_flickr.de.gz', 'test_2016_flickr.en.gz')

train_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in train_urls]
val_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in val_urls]
test_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in test_urls]

de_tokenizer = get_tokenizer('spacy', language='de')
en_tokenizer = get_tokenizer('spacy', language='en')

def build_vocab(filepath, tokenizer):
    counter = Counter()
    with io.open(filepath, encoding="utf8") as f:
        for string_ in f:
            counter.update(tokenizer(string_))
    return Vocab(counter, specials=['<unk>', '<pad>', '<bos>', '<eos>'])

de_vocab = build_vocab(train_filepaths[0], de_tokenizer)
en_vocab = build_vocab(train_filepaths[1], en_tokenizer)

def data_process(filepaths):
    raw_de_iter = iter(io.open(filepaths[0], encoding="utf8"))
    raw_en_iter = iter(io.open(filepaths[1], encoding="utf8"))
    data = []
    for (raw_de, raw_en) in zip(raw_de_iter, raw_en_iter):
        de_tensor_ = torch.tensor([de_vocab[token] for token in de_tokenizer(raw_de)], dtype=torch.long)
        en_tensor_ = torch.tensor([en_vocab[token] for token in en_tokenizer(raw_en)], dtype=torch.long)
        data.append((de_tensor_, en_tensor_))
    return data

train_data = data_process(train_filepaths)
val_data = data_process(val_filepaths)
test_data = data_process(test_filepaths)
```

``DataLoader``
--------------------

The last ``torch``-specific feature we will use is the ``DataLoader``, which is easy to use since it takes the data as its first argument. As the documentation says, a ``DataLoader`` combines a dataset and a sampler and provides an iterable over the given dataset. It supports both map-style and iterable-style datasets, with single- or multi-process loading, customizable loading order, optional automatic batching, and memory pinning.

Also pay attention to the (optional) ``collate_fn``, which merges a list of samples to form a mini-batch of Tensors. It is used when batch-loading from a map-style dataset.
```
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

BATCH_SIZE = 128
PAD_IDX = de_vocab['<pad>']
BOS_IDX = de_vocab['<bos>']
EOS_IDX = de_vocab['<eos>']

from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def generate_batch(data_batch):
    de_batch, en_batch = [], []
    for (de_item, en_item) in data_batch:
        de_batch.append(torch.cat([torch.tensor([BOS_IDX]), de_item, torch.tensor([EOS_IDX])], dim=0))
        en_batch.append(torch.cat([torch.tensor([BOS_IDX]), en_item, torch.tensor([EOS_IDX])], dim=0))
    de_batch = pad_sequence(de_batch, padding_value=PAD_IDX)
    en_batch = pad_sequence(en_batch, padding_value=PAD_IDX)
    return de_batch, en_batch

train_iter = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch)
valid_iter = DataLoader(val_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch)
test_iter = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch)
```

Defining our ``nn.Module`` and ``Optimizer``
------------------------------------------

From a ``torchtext`` point of view, that is mostly it: with the dataset built and the iterators defined, all that remains in this tutorial is to define our model as an ``nn.Module``, along with an ``Optimizer``, and then train it.

The model used in this tutorial follows the architecture described `here <https://arxiv.org/abs/1409.0473>`__; for a more detailed, heavily commented version, see `here <https://github.com/SethHWeidman/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointly%20Learning%20to%20Align%20and%20Translate.ipynb>`__.

Note: the model used in this tutorial is just an example model for language translation. We use it because it is a standard model for the task, not because it is the recommended model for translation. As you are probably aware if you follow recent developments, the best-performing translation models today are Transformers. You can see PyTorch's implementation of Transformer layers `here <https://pytorch.org/docs/stable/nn.html#transformer-layers>`__; also note that the "attention" used in the model below is different from the multi-headed self-attention proposed in the Transformer model.
``` import random from typing import Tuple import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch import Tensor class Encoder(nn.Module): def __init__(self, input_dim: int, emb_dim: int, enc_hid_dim: int, dec_hid_dim: int, dropout: float): super().__init__() self.input_dim = input_dim self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.dropout = dropout self.embedding = nn.Embedding(input_dim, emb_dim) self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True) self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim) self.dropout = nn.Dropout(dropout) def forward(self, src: Tensor) -> Tuple[Tensor]: embedded = self.dropout(self.embedding(src)) outputs, hidden = self.rnn(embedded) hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))) return outputs, hidden class Attention(nn.Module): def __init__(self, enc_hid_dim: int, dec_hid_dim: int, attn_dim: int): super().__init__() self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.attn_in = (enc_hid_dim * 2) + dec_hid_dim self.attn = nn.Linear(self.attn_in, attn_dim) def forward(self, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tensor: src_len = encoder_outputs.shape[0] repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1) encoder_outputs = encoder_outputs.permute(1, 0, 2) energy = torch.tanh(self.attn(torch.cat(( repeated_decoder_hidden, encoder_outputs), dim = 2))) attention = torch.sum(energy, dim=2) return F.softmax(attention, dim=1) class Decoder(nn.Module): def __init__(self, output_dim: int, emb_dim: int, enc_hid_dim: int, dec_hid_dim: int, dropout: int, attention: nn.Module): super().__init__() self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.output_dim = output_dim self.dropout = dropout self.attention = attention self.embedding = nn.Embedding(output_dim, emb_dim) self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim) self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim) self.dropout = nn.Dropout(dropout) def _weighted_encoder_rep(self, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tensor: a = self.attention(decoder_hidden, encoder_outputs) a = a.unsqueeze(1) encoder_outputs = encoder_outputs.permute(1, 0, 2) weighted_encoder_rep = torch.bmm(a, encoder_outputs) weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2) return weighted_encoder_rep def forward(self, input: Tensor, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tuple[Tensor]: input = input.unsqueeze(0) embedded = self.dropout(self.embedding(input)) weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden, encoder_outputs) rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2) output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0)) embedded = embedded.squeeze(0) output = output.squeeze(0) weighted_encoder_rep = weighted_encoder_rep.squeeze(0) output = self.out(torch.cat((output, weighted_encoder_rep, embedded), dim = 1)) return output, decoder_hidden.squeeze(0) class Seq2Seq(nn.Module): def __init__(self, encoder: nn.Module, decoder: nn.Module, device: torch.device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device def forward(self, src: Tensor, trg: Tensor, teacher_forcing_ratio: float = 0.5) -> Tensor: batch_size = src.shape[1] max_len = trg.shape[0] trg_vocab_size = self.decoder.output_dim outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device) encoder_outputs, hidden = 
self.encoder(src)

        # The first input to the decoder is the <sos> token.
        output = trg[0,:]

        for t in range(1, max_len):
            output, hidden = self.decoder(output, hidden, encoder_outputs)
            outputs[t] = output
            teacher_force = random.random() < teacher_forcing_ratio
            top1 = output.max(1)[1]
            output = (trg[t] if teacher_force else top1)

        return outputs


INPUT_DIM = len(de_vocab)
OUTPUT_DIM = len(en_vocab)
# ENC_EMB_DIM = 256
# DEC_EMB_DIM = 256
# ENC_HID_DIM = 512
# DEC_HID_DIM = 512
# ATTN_DIM = 64
# ENC_DROPOUT = 0.5
# DEC_DROPOUT = 0.5

ENC_EMB_DIM = 32
DEC_EMB_DIM = 32
ENC_HID_DIM = 64
DEC_HID_DIM = 64
ATTN_DIM = 8
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5

enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)
attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)

model = Seq2Seq(enc, dec, device).to(device)


def init_weights(m: nn.Module):
    for name, param in m.named_parameters():
        if 'weight' in name:
            nn.init.normal_(param.data, mean=0, std=0.01)
        else:
            nn.init.constant_(param.data, 0)


model.apply(init_weights)

optimizer = optim.Adam(model.parameters())


def count_parameters(model: nn.Module):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print(f'The model has {count_parameters(model):,} trainable parameters')
```

Note: to score the performance of a language translation model properly, we have to tell the ``nn.CrossEntropyLoss`` function which indices correspond to simple padding so that it can ignore them.

```
PAD_IDX = en_vocab.stoi['<pad>']

criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
```

Finally, we can train and evaluate this model:

```
import math
import time


def train(model: nn.Module,
          iterator: torch.utils.data.DataLoader,
          optimizer: optim.Optimizer,
          criterion: nn.Module,
          clip: float):

    model.train()

    epoch_loss = 0

    for _, (src, trg) in enumerate(iterator):
        src, trg = src.to(device), trg.to(device)

        optimizer.zero_grad()

        output = model(src, trg)

        output = output[1:].view(-1, output.shape[-1])
        trg = trg[1:].view(-1)

        loss = criterion(output, trg)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

        optimizer.step()

        epoch_loss += loss.item()

    return epoch_loss / len(iterator)


def evaluate(model: nn.Module,
             iterator: torch.utils.data.DataLoader,
             criterion: nn.Module):

    model.eval()

    epoch_loss = 0

    with torch.no_grad():
        for _, (src, trg) in enumerate(iterator):
            src, trg = src.to(device), trg.to(device)

            output = model(src, trg, 0)  # turn off teacher forcing

            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)

            loss = criterion(output, trg)

            epoch_loss += loss.item()

    return epoch_loss / len(iterator)


def epoch_time(start_time: int, end_time: int):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()

    train_loss = train(model, train_iter, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iter, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
test_loss = evaluate(model, test_iter, criterion)

print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```

Next steps
--------------

- Check out the rest of Ben Trevett's tutorials using ``torchtext`` `here <https://github.com/bentrevett/>`__.
- Take a look at the tutorial on language modeling via next-word prediction, which uses ``nn.Transformer`` together with other ``torchtext`` features.
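The tutorial above trains and evaluates the model but never shows an actual translation. Below is a minimal, hedged sketch of greedy decoding with the ``Seq2Seq`` model defined earlier; the helper name and the example sentence are made up for illustration, and it only reuses objects defined above (``model``, ``de_tokenizer``, ``de_vocab``, ``en_vocab``, ``device``).

```python
def greedy_translate(sentence: str, max_len: int = 50) -> str:
    # Hedged sketch: greedy decoding with the encoder/decoder defined above.
    model.eval()
    tokens = [de_vocab['<bos>']] + [de_vocab[tok] for tok in de_tokenizer(sentence)] + [de_vocab['<eos>']]
    src = torch.tensor(tokens, dtype=torch.long).unsqueeze(1).to(device)  # shape: (src_len, 1)

    with torch.no_grad():
        encoder_outputs, hidden = model.encoder(src)

    trg_tokens = []
    trg_input = torch.tensor([en_vocab['<bos>']], dtype=torch.long).to(device)
    for _ in range(max_len):
        with torch.no_grad():
            output, hidden = model.decoder(trg_input, hidden, encoder_outputs)
        pred = output.argmax(1)                    # most likely next token (greedy)
        if pred.item() == en_vocab['<eos>']:
            break
        trg_tokens.append(en_vocab.itos[pred.item()])
        trg_input = pred
    return ' '.join(trg_tokens)

# Example with a hypothetical German sentence:
# print(greedy_translate('Ein Mann geht die Strasse entlang.'))
```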
github_jupyter
%matplotlib inline import torchtext import torch from torchtext.data.utils import get_tokenizer from collections import Counter from torchtext.vocab import Vocab from torchtext.utils import download_from_url, extract_archive import io url_base = 'https://raw.githubusercontent.com/multi30k/dataset/master/data/task1/raw/' train_urls = ('train.de.gz', 'train.en.gz') val_urls = ('val.de.gz', 'val.en.gz') test_urls = ('test_2016_flickr.de.gz', 'test_2016_flickr.en.gz') train_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in train_urls] val_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in val_urls] test_filepaths = [extract_archive(download_from_url(url_base + url))[0] for url in test_urls] de_tokenizer = get_tokenizer('spacy', language='de') en_tokenizer = get_tokenizer('spacy', language='en') def build_vocab(filepath, tokenizer): counter = Counter() with io.open(filepath, encoding="utf8") as f: for string_ in f: counter.update(tokenizer(string_)) return Vocab(counter, specials=['<unk>', '<pad>', '<bos>', '<eos>']) de_vocab = build_vocab(train_filepaths[0], de_tokenizer) en_vocab = build_vocab(train_filepaths[1], en_tokenizer) def data_process(filepaths): raw_de_iter = iter(io.open(filepaths[0], encoding="utf8")) raw_en_iter = iter(io.open(filepaths[1], encoding="utf8")) data = [] for (raw_de, raw_en) in zip(raw_de_iter, raw_en_iter): de_tensor_ = torch.tensor([de_vocab[token] for token in de_tokenizer(raw_de)], dtype=torch.long) en_tensor_ = torch.tensor([en_vocab[token] for token in en_tokenizer(raw_en)], dtype=torch.long) data.append((de_tensor_, en_tensor_)) return data train_data = data_process(train_filepaths) val_data = data_process(val_filepaths) test_data = data_process(test_filepaths) import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') BATCH_SIZE = 128 PAD_IDX = de_vocab['<pad>'] BOS_IDX = de_vocab['<bos>'] EOS_IDX = de_vocab['<eos>'] from torch.nn.utils.rnn import pad_sequence from torch.utils.data import DataLoader def generate_batch(data_batch): de_batch, en_batch = [], [] for (de_item, en_item) in data_batch: de_batch.append(torch.cat([torch.tensor([BOS_IDX]), de_item, torch.tensor([EOS_IDX])], dim=0)) en_batch.append(torch.cat([torch.tensor([BOS_IDX]), en_item, torch.tensor([EOS_IDX])], dim=0)) de_batch = pad_sequence(de_batch, padding_value=PAD_IDX) en_batch = pad_sequence(en_batch, padding_value=PAD_IDX) return de_batch, en_batch train_iter = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch) valid_iter = DataLoader(val_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch) test_iter = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, collate_fn=generate_batch) import random from typing import Tuple import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch import Tensor class Encoder(nn.Module): def __init__(self, input_dim: int, emb_dim: int, enc_hid_dim: int, dec_hid_dim: int, dropout: float): super().__init__() self.input_dim = input_dim self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.dropout = dropout self.embedding = nn.Embedding(input_dim, emb_dim) self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True) self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim) self.dropout = nn.Dropout(dropout) def forward(self, src: Tensor) -> Tuple[Tensor]: embedded = self.dropout(self.embedding(src)) outputs, hidden = self.rnn(embedded) hidden = 
torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))) return outputs, hidden class Attention(nn.Module): def __init__(self, enc_hid_dim: int, dec_hid_dim: int, attn_dim: int): super().__init__() self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.attn_in = (enc_hid_dim * 2) + dec_hid_dim self.attn = nn.Linear(self.attn_in, attn_dim) def forward(self, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tensor: src_len = encoder_outputs.shape[0] repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1) encoder_outputs = encoder_outputs.permute(1, 0, 2) energy = torch.tanh(self.attn(torch.cat(( repeated_decoder_hidden, encoder_outputs), dim = 2))) attention = torch.sum(energy, dim=2) return F.softmax(attention, dim=1) class Decoder(nn.Module): def __init__(self, output_dim: int, emb_dim: int, enc_hid_dim: int, dec_hid_dim: int, dropout: int, attention: nn.Module): super().__init__() self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.output_dim = output_dim self.dropout = dropout self.attention = attention self.embedding = nn.Embedding(output_dim, emb_dim) self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim) self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim) self.dropout = nn.Dropout(dropout) def _weighted_encoder_rep(self, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tensor: a = self.attention(decoder_hidden, encoder_outputs) a = a.unsqueeze(1) encoder_outputs = encoder_outputs.permute(1, 0, 2) weighted_encoder_rep = torch.bmm(a, encoder_outputs) weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2) return weighted_encoder_rep def forward(self, input: Tensor, decoder_hidden: Tensor, encoder_outputs: Tensor) -> Tuple[Tensor]: input = input.unsqueeze(0) embedded = self.dropout(self.embedding(input)) weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden, encoder_outputs) rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2) output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0)) embedded = embedded.squeeze(0) output = output.squeeze(0) weighted_encoder_rep = weighted_encoder_rep.squeeze(0) output = self.out(torch.cat((output, weighted_encoder_rep, embedded), dim = 1)) return output, decoder_hidden.squeeze(0) class Seq2Seq(nn.Module): def __init__(self, encoder: nn.Module, decoder: nn.Module, device: torch.device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device def forward(self, src: Tensor, trg: Tensor, teacher_forcing_ratio: float = 0.5) -> Tensor: batch_size = src.shape[1] max_len = trg.shape[0] trg_vocab_size = self.decoder.output_dim outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device) encoder_outputs, hidden = self.encoder(src) # ๋””์ฝ”๋”๋กœ์˜ ์ฒซ ๋ฒˆ์งธ ์ž…๋ ฅ์€ <sos> ํ† ํฐ์ž…๋‹ˆ๋‹ค. 
output = trg[0,:] for t in range(1, max_len): output, hidden = self.decoder(output, hidden, encoder_outputs) outputs[t] = output teacher_force = random.random() < teacher_forcing_ratio top1 = output.max(1)[1] output = (trg[t] if teacher_force else top1) return outputs INPUT_DIM = len(de_vocab) OUTPUT_DIM = len(en_vocab) # ENC_EMB_DIM = 256 # DEC_EMB_DIM = 256 # ENC_HID_DIM = 512 # DEC_HID_DIM = 512 # ATTN_DIM = 64 # ENC_DROPOUT = 0.5 # DEC_DROPOUT = 0.5 ENC_EMB_DIM = 32 DEC_EMB_DIM = 32 ENC_HID_DIM = 64 DEC_HID_DIM = 64 ATTN_DIM = 8 ENC_DROPOUT = 0.5 DEC_DROPOUT = 0.5 enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT) attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM) dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn) model = Seq2Seq(enc, dec, device).to(device) def init_weights(m: nn.Module): for name, param in m.named_parameters(): if 'weight' in name: nn.init.normal_(param.data, mean=0, std=0.01) else: nn.init.constant_(param.data, 0) model.apply(init_weights) optimizer = optim.Adam(model.parameters()) def count_parameters(model: nn.Module): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') PAD_IDX = en_vocab.stoi['<pad>'] criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX) import math import time def train(model: nn.Module, iterator: torch.utils.data.DataLoader, optimizer: optim.Optimizer, criterion: nn.Module, clip: float): model.train() epoch_loss = 0 for _, (src, trg) in enumerate(iterator): src, trg = src.to(device), trg.to(device) optimizer.zero_grad() output = model(src, trg) output = output[1:].view(-1, output.shape[-1]) trg = trg[1:].view(-1) loss = criterion(output, trg) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) def evaluate(model: nn.Module, iterator: torch.utils.data.DataLoader, criterion: nn.Module): model.eval() epoch_loss = 0 with torch.no_grad(): for _, (src, trg) in enumerate(iterator): src, trg = src.to(device), trg.to(device) output = model(src, trg, 0) #turn off teacher forcing output = output[1:].view(-1, output.shape[-1]) trg = trg[1:].view(-1) loss = criterion(output, trg) epoch_loss += loss.item() return epoch_loss / len(iterator) def epoch_time(start_time: int, end_time: int): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs N_EPOCHS = 10 CLIP = 1 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss = train(model, train_iter, optimizer, criterion, CLIP) valid_loss = evaluate(model, valid_iter, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}') print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}') test_loss = evaluate(model, test_iter, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
# 1. Importing libraries and loading Data ### 1.1 Installing necessary libraries ``` """ %pip install emoji %pip install tensorflow %pip install transformers %pip install pandas %pip install sklearn %pip install matplotlib %pip install seaborn """ import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np import emoji import re import string from transformers import TFBertModel, BertTokenizerFast, BertConfig import tensorflow as tf from keras.layers import Input, Dropout, Dense, BatchNormalization from keras.models import Model from sklearn.metrics import f1_score, recall_score, precision_score from tensorflow.keras.utils import plot_model from keras.initializers import TruncatedNormal import keras.backend as K ``` ### 1.2 Plot Defaults ``` colors = sns.color_palette('rocket_r') sns.set_palette('rocket_r') ``` ### 1.3 Helper Functions ``` def idx2class(idx_list): """ This function converts a list of class indices to a list of class labels. Parameters ---------- idx_list : list List of class indices. Returns ------- class_list : list List of class labels. """ arr = [] for i in idx_list: arr.append(labels[int(i)]) return arr def EmotionMapping(list_of_emotions): list = [] for i in list_of_emotions: if i in ekman_map['anger']: list.append('anger') if i in ekman_map['disgust']: list.append('disgust') if i in ekman_map['fear']: list.append('fear') if i in ekman_map['joy']: list.append('joy') if i in ekman_map['sadness']: list.append('sadness') if i in ekman_map['surprise']: list.append('surprise') if i == 'neutral': list.append('neutral') return list def SentimentMapping(list_of_emotions): list = [] for i in list_of_emotions: if i in sentiment_map['positive']: list.append('positive') if i in sentiment_map['negative']: list.append('negative') if i in sentiment_map['ambiguous']: list.append('ambiguous') return list ``` ### 1.4 Loading data ``` train_url = 'https://github.com/google-research/google-research/raw/master/goemotions/data/train.tsv' valid_url = 'https://github.com/google-research/google-research/raw/master/goemotions/data/dev.tsv' test_url = 'https://github.com/google-research/google-research/raw/master/goemotions/data/test.tsv' train_df = pd.read_csv(train_url, sep='\t', encoding='utf-8', names=['text', 'emotion', 'annotator'], header=None) valid_df = pd.read_csv(valid_url, sep='\t', encoding='utf-8', names=['text', 'emotion', 'annotator'], header=None) test_df = pd.read_csv(test_url, sep='\t', encoding='utf-8', names=['text', 'emotion', 'annotator'], header=None) train_df.head(2) train_df.info() ``` ### 1.5 Preprocessing Column 2 "annotator" is unnecessary, so we can drop it. ``` train_df.drop('annotator', axis=1, inplace=True) valid_df.drop('annotator', axis=1, inplace=True) test_df.drop('annotator', axis=1, inplace=True) ``` Dictionaries for mapping emotions to indices and vice versa. The variable `ekman_map` is used to map 27 emotions to 7 emotions. This is done to reduce the number of classes. The 27 emotions can also be mapped to the 3 emotions using the `sentiment_map` dictionary for sentiment analysis tasks. 
``` labels = { 0: 'admiration', 1: 'amusement', 2: 'anger', 3: 'annoyance', 4: 'approval', 5: 'caring', 6: 'confusion', 7: 'curiosity', 8: 'desire', 9: 'disappointment', 10: 'disapproval', 11: 'disgust', 12: 'embarrassment', 13: 'excitement', 14: 'fear', 15: 'gratitude', 16: 'grief', 17: 'joy', 18: 'love', 19: 'nervousness', 20: 'optimism', 21: 'pride', 22: 'realization', 23: 'relief', 24: 'remorse', 25: 'sadness', 26: 'surprise', 27: 'neutral' } ekman_map = { 'anger': ['anger', 'annoyance', 'disapproval'], 'disgust': ['disgust'], 'fear': ['fear', 'nervousness'], 'joy': ['joy', 'amusement', 'approval', 'excitement', 'gratitude', 'love', 'optimism', 'relief', 'pride', 'admiration', 'desire', 'caring'], 'sadness': ['sadness', 'disappointment', 'embarrassment', 'grief', 'remorse'], 'surprise': ['surprise', 'realization', 'confusion', 'curiosity'], 'neutral': ['neutral'] } sentiment_map = { "positive": ["amusement", "excitement", "joy", "love", "desire", "optimism", "caring", "pride", "admiration", "gratitude", "relief", "approval"], "negative": ["fear", "nervousness", "remorse", "embarrassment", "disappointment", "sadness", "grief", "disgust", "anger", "annoyance", "disapproval"], "ambiguous": ["realization", "surprise", "curiosity", "confusion", "neutral"] } ``` First, let's extract all emotions from the each example and store them in a list. ``` train_df['list of emotions'] = train_df['emotion'].apply(lambda x: x.split(',')) test_df['list of emotions'] = test_df['emotion'].apply(lambda x: x.split(',')) valid_df['list of emotions'] = valid_df['emotion'].apply(lambda x: x.split(',')) ``` We can then apply index to class mapping to get the class labels for each row ``` train_df['emotion'] = train_df['list of emotions'].apply(lambda x: idx2class(x)) test_df['emotion'] = test_df['list of emotions'].apply(lambda x: idx2class(x)) valid_df['emotion'] = valid_df['list of emotions'].apply(lambda x: idx2class(x)) ``` Finally, we can reduce the number of classes to 7 by using the EmotionMapping function. ``` train_df['ekman_emotion'] = train_df['emotion'].apply(lambda x: EmotionMapping(x)) test_df['ekman_emotion'] = test_df['emotion'].apply(lambda x: EmotionMapping(x)) valid_df['ekman_emotion'] = valid_df['emotion'].apply(lambda x: EmotionMapping(x)) train_df.head(10) def clean_text(text): """ This function cleans the text in the dataframe and returns a list of cleaned text. 
    text: a string
    return: modified initial string
    """
    # Convert emojis to their text descriptions
    text = emoji.demojize(text)
    text = str(text).lower() # text to lower case
    text = re.sub(r'[%s]' % re.escape(string.punctuation), ' ', text) # remove punctuation
    return text
```
One-hot encoding of emotions
```
for i in ekman_map:
    train_df[i] = train_df['ekman_emotion'].apply(lambda x: 1 if i in x else 0)
    test_df[i] = test_df['ekman_emotion'].apply(lambda x: 1 if i in x else 0)
    valid_df[i] = valid_df['ekman_emotion'].apply(lambda x: 1 if i in x else 0)

train_df.head(10)
```
### 1.6 Visualization
Bar plot of the distribution of emotions
```
labels_summary = train_df.iloc[:, 4:].sum()
labels_summary.sort_values(ascending=False, inplace=True)

fig = plt.figure(figsize=(12, 8))
sns.barplot(x=labels_summary.index, y=labels_summary.values, palette='rocket_r')
plt.xticks(rotation=45)
plt.ylabel('Frequency')
plt.show()
```
Number of emotions in each sample
```
train_df['n_emotions'] = train_df.iloc[:, 4:].apply(lambda x: x.sum(), axis=1)

fig = plt.figure(figsize=(12, 8))
sns.countplot(x='n_emotions', data=train_df, palette='rocket_r')
plt.xticks(rotation=45)
plt.title('Number of emotions per sample')
plt.ylabel('Frequency')
plt.xlabel('Number of emotions')
plt.show()
```
Distribution of text length in the train set
```
full_text = pd.concat([train_df['text'], valid_df['text'], test_df['text']])
lengths = full_text.apply(lambda x: len(x.split()))

fig = plt.figure(figsize=(12, 10))
sns.displot(lengths, kde=True, rug=False, color=colors[5])
plt.title('Distribution of Text Lengths')
plt.xlabel('Text Length')
plt.ylabel('Frequency')
plt.xlim(0, 40)
plt.show()
```
# 2. Model
### 2.1 Base model config
#### Computing max length of samples
The `max_length` variable limits the length of the input text fed to the model. A sequence shorter than `max_length` is padded with the `<PAD>` token, and a sequence longer than `max_length` is truncated, so every input the model sees has the same fixed length.
```
full_text = pd.concat([train_df['text'], valid_df['text'], test_df['text']])
max_length = full_text.apply(lambda x: len(x.split())).max()
max_length
```
I am going to use Google's BERT base model, which contains about 110M parameters.
```
model_name = 'bert-base-uncased'
config = BertConfig.from_pretrained(model_name, output_hidden_states=False)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_model_name_or_path = model_name, config = config)
transformer_model = TFBertModel.from_pretrained(model_name, config = config)
```
### 2.2 Model architecture
The model takes three inputs that result from tokenization:
- `input_ids`: indices of the input sequence tokens in the vocabulary
- `token_type_ids`: segment token indices to indicate the first and second portions of the inputs; 0 for sentence A and 1 for sentence B
- `attention_mask`: mask to avoid performing attention on padding token indices; 0 for masked and 1 for not masked

The output layer uses a sigmoid activation rather than a softmax because this is a multi-label problem: we want an independent probability for each label, not a single predicted label.
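To make the sigmoid-versus-softmax distinction concrete before building the model, here is a tiny illustration (added for this write-up, not part of the original pipeline): softmax forces the scores for one comment to compete and sum to 1, while independent sigmoids let several emotions be active at once.
```
import numpy as np

# Hypothetical logits for one comment over three emotion labels.
logits = np.array([2.0, 1.5, -1.0])

softmax = np.exp(logits) / np.exp(logits).sum()  # sums to 1 -> single-label view
sigmoid = 1 / (1 + np.exp(-logits))              # independent per label -> multi-label view

print(softmax)  # approx. [0.60 0.37 0.03] -- the labels compete
print(sigmoid)  # approx. [0.88 0.82 0.27] -- two labels can both be "on"
```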
``` def my_model(n_labels): # Load the MainLayer bert = transformer_model.layers[0] ## INPUTS input_ids = Input(shape=(max_length,), name='input_ids', dtype='int32') attention_mask = Input(shape=(max_length,), name='attention_mask', dtype='int32') token_type_ids = Input(shape=(max_length,), name='token_type_ids', dtype='int32') inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids} ## LAYERS bert_model = bert(inputs)[1] dropout = Dropout(config.hidden_dropout_prob, name='pooled_output') pooled_output = dropout(bert_model, training=False) ## OUTPUT emotion = Dense(units=n_labels, activation='sigmoid', kernel_initializer=TruncatedNormal(stddev=config.initializer_range), name='emotion')(pooled_output) outputs = emotion model = Model(inputs=inputs, outputs=outputs, name='BERT_Emotion_Classifier') return model model = my_model(len(ekman_map)) model.summary() plot_model(model, show_shapes=True, dpi=300) ``` ### 2.3 Data tokenization ``` ## Train x_train = train_df['text'] y_train = train_df.loc[:, ekman_map.keys()].values train_tokenized = tokenizer( text = list(x_train), add_special_tokens = True, max_length = max_length, padding = 'max_length', truncation = True, return_tensors = 'tf', return_attention_mask = True, return_token_type_ids = True ) ## Test x_test = test_df['text'] y_test = test_df.loc[:, ekman_map.keys()].values test_tokenized = tokenizer( text = list(x_test), add_special_tokens = True, max_length = max_length, padding = 'max_length', truncation = True, return_tensors = 'tf', return_attention_mask = True, return_token_type_ids = True ) ## Validation x_valid = valid_df['text'] y_valid = valid_df.loc[:, ekman_map.keys()].values valid_tokenized = tokenizer( text = list(x_valid), add_special_tokens = True, max_length = max_length, padding = 'max_length', truncation = True, return_tensors = 'tf', return_attention_mask = True, return_token_type_ids = True ) ``` ### 2.4 Creating BERT compatible inputs ``` tf_train = {'input_ids': train_tokenized['input_ids'], 'attention_mask': train_tokenized['attention_mask'], 'token_type_ids': train_tokenized['token_type_ids']} tf_test = {'input_ids': test_tokenized['input_ids'], 'attention_mask': test_tokenized['attention_mask'], 'token_type_ids': test_tokenized['token_type_ids']} tf_valid = {'input_ids': valid_tokenized['input_ids'], 'attention_mask': valid_tokenized['attention_mask'], 'token_type_ids': valid_tokenized['token_type_ids']} train = tf.data.Dataset.from_tensor_slices((tf_train, y_train)).batch(80) valid = tf.data.Dataset.from_tensor_slices((tf_valid, y_valid)).batch(80) test = tf.data.Dataset.from_tensor_slices((tf_test, y_test)).batch(80) learning_rate = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=5e-5, decay_rate=0.7, decay_steps=340, staircase=True) optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) loss = tf.keras.losses.BinaryCrossentropy(from_logits=False) K.clear_session() ``` Prior experiments with BERT showed that the model starts to overfit after ~2 epochs and Tanh performed significantly worse than sigmoid. ``` model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy']) history = model.fit(train, epochs=2, validation_data=valid) model.save_weights('./models/sigmoid_bert.h5') ``` # 3. Evaluation When dealing with unbalanced data, it is essential to mini-batch train the model instead of training it on all the data. This helps to prevent the model from overfitting the minority class. 
It is also essential to be thoughtful about what metric is being used for model evaluation. When dealing with unbalanced data, accuracy is not a good metric, as the model can predict the majority class every time and still have high accuracy. Instead, it is crucial to use the precision/recall or the F1 score, as these metrics consider false positives and false negatives. ``` model = my_model(len(ekman_map)) model.load_weights('./models/sigmoid_bert.h5') THRESHOLD = 0.83 y_pred = model.predict(test) probabilities = y_pred probabilities = pd.DataFrame(probabilities, columns=ekman_map.keys()) probabilities.index = x_test probabilities.reset_index(inplace=True) probabilities.head(10) y_pred = np.where(y_pred > THRESHOLD, 1, 0) recall = [] f1 = [] precision = [] emotions = ekman_map.keys() for i in range(len(emotions)): f1.append(f1_score(y_test[:, i], y_pred[:, i], average='macro')) precision.append(precision_score(y_test[:, i], y_pred[:, i], average='macro')) results = pd.DataFrame({'precision': precision, 'f1': f1}) results.index = emotions means = {'precision': np.mean(precision), 'f1': np.mean(f1)} means = pd.DataFrame(means, index=['mean']) pd.concat([results, means], axis=0) ``` ### 3.1 Optimization Finding the best value of Threshold. I chose f1-score as the main metric because it is more robust than precision and recall alone. ``` best_threshold = 0 best_f1 = 0 pred = model.predict(test) for threshold in np.arange(0.30, 0.99, 0.01): preds = np.where(pred > threshold, 1, 0) f1 = f1_score(y_test, preds, average='macro', zero_division=0) if f1 > best_f1: best_threshold = threshold best_f1 = f1 else: continue print(f'Best threshold: {best_threshold}\nBest f1: {best_f1}') THRESHOLD = 0.39 ``` ## 4. Make Predictions ``` def pred(text, model, THRESHOLD): text = [clean_text(text) for text in text] tokenized = tokenizer( text = text, add_special_tokens = True, max_length = max_length, padding = 'max_length', truncation = True, return_tensors = 'tf', return_attention_mask = True, return_token_type_ids = True ) tf_test = {'input_ids': tokenized['input_ids'], 'attention_mask': tokenized['attention_mask'], 'token_type_ids': tokenized['token_type_ids']} pred = model.predict(tf_test) probabilities = pred probabilities = pd.DataFrame(probabilities, columns=ekman_map.keys()) probabilities.index = text probabilities.reset_index(inplace=True) pred = np.where(pred > THRESHOLD, 1, 0) pred = pd.DataFrame(pred, columns=ekman_map.keys()) pred['emotion'] = pred.iloc[:, 1:].idxmax(axis=1) pred.drop(columns=emotions, inplace=True) pred.index = text pred.reset_index(inplace=True) return pred, probabilities result, probabilities = pred(['A Ukrainian woman who escaped Russias assault on Mariupol says troops were targeting apartment buildings as if they were playing a computer game', 'I often go to parks to walk and destress and enjoy nature', 'How can this be', 'This is the worst muffin ive ever had'], model, THRESHOLD) result probabilities ```
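A single global threshold is a fairly blunt cut-off when the label frequencies differ as much as they do here. A natural extension of the search above is to tune one threshold per label; the following is a sketch along those lines (not part of the original notebook), reusing `model`, `test`, `y_test`, `f1_score` and `ekman_map` from the cells above.
```
# Tune one decision threshold per Ekman label instead of a single global one.
scores = model.predict(test)

best_thresholds = {}
for i, emotion in enumerate(ekman_map.keys()):
    best_t, best_score = 0.5, 0.0
    for t in np.arange(0.05, 0.96, 0.01):
        score = f1_score(y_test[:, i], (scores[:, i] > t).astype(int), zero_division=0)
        if score > best_score:
            best_t, best_score = t, score
    best_thresholds[emotion] = round(float(best_t), 2)

best_thresholds
```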
# Kaggle Titanic Competition ``` import pandas as pd import numpy as np import re import matplotlib.pyplot as plt import seaborn as sns # Load training data titanic_train = pd.read_csv("train.csv") #Display training data titanic_train.head(10) # Load test data titanic_test = pd.read_csv("test.csv") # Display test dataframe titanic_test.head(10) # Explore data: the portion of people who survived by Pclass, Sex, and Embarked f,ax=plt.subplots(1,3,figsize=(20,10)) sns.countplot('Pclass',hue='Survived',data=titanic_train,ax=ax[0]) ax[0].set_title('Pclass:Survived vs Dead') sns.countplot('Sex',hue='Survived',data=titanic_train,ax=ax[1]) ax[1].set_title('Sex:Survived vs Dead') sns.countplot('Embarked',hue='Survived',data=titanic_train,ax=ax[2]) ax[2].set_title('Embarked:Survived vs Dead') plt.show() # Explore the relationship between survival rate and age # Replace NA data with the most frequently appeared data titanic_train['Age']=titanic_train['Age'].fillna(titanic_train['Age'].mean()).astype('int') titanic_test['Age']=titanic_test['Age'].fillna(titanic_test['Age'].mean()).astype('int') # Explore the relationship between survival rate and age plt.hist(x = [titanic_train[titanic_train['Survived']==1]['Age'],titanic_train[titanic_train['Survived']==0]['Age']], stacked=True, label = ['Survived','Dead']) plt.title('Age Histogram by Survival') plt.xlabel('Age (Years)') plt.ylabel('# of Passengers') plt.legend() # Explore the relationship between survival rate and fare plt.hist(x = [titanic_train[titanic_train['Survived']==1]['Fare'],titanic_train[titanic_train['Survived']==0]['Fare']], stacked=True, label = ['Survived','Dead']) plt.title('Fare Histogram by Survival') plt.xlabel('Fare ($)') plt.ylabel('# of Passengers') plt.legend() ``` # Data Washing ``` # Sex titanic_train['Sex'] = titanic_train['Sex'].map({'female': 0, 'male': 1}).astype(int) titanic_test['Sex'] = titanic_test['Sex'].map({'female': 0, 'male': 1}).astype(int) # Categorize age into six age_avg = titanic_train['Age'].mean() age_std = titanic_train['Age'].std() titanic_train['CategoricalAge'] = pd.cut(titanic_train['Age'], 6) # Map Age for training data titanic_train.loc[titanic_train['Age'] <= age_avg-2*age_std, 'Age'] = 1 titanic_train.loc[(titanic_train['Age'] > age_avg-2*age_std) & (titanic_train['Age'] <= age_avg-1*age_std), 'Age'] = 2 titanic_train.loc[(titanic_train['Age'] > age_avg-1*age_std) & (titanic_train['Age'] <= age_avg), 'Age'] = 3 titanic_train.loc[(titanic_train['Age'] > age_avg) & (titanic_train['Age'] <= age_avg+1*age_std), 'Age'] = 4 titanic_train.loc[(titanic_train['Age'] > age_avg+1*age_std) & (titanic_train['Age'] <= age_avg+2*age_std), 'Age'] = 5 titanic_train.loc[titanic_train['Age'] > age_avg+2*age_std, 'Age'] = 6 # Map Age for testing data titanic_test.loc[titanic_test['Age'] <= age_avg-2*age_std, 'Age'] = 1 titanic_test.loc[(titanic_test['Age'] > age_avg-2*age_std) & (titanic_test['Age'] <= age_avg-1*age_std), 'Age'] = 2 titanic_test.loc[(titanic_test['Age'] > age_avg-1*age_std) & (titanic_test['Age'] <= age_avg), 'Age'] = 3 titanic_test.loc[(titanic_test['Age'] > age_avg) & (titanic_test['Age'] <= age_avg+1*age_std), 'Age'] = 4 titanic_test.loc[(titanic_test['Age'] > age_avg+1*age_std) & (titanic_test['Age'] <= age_avg+2*age_std), 'Age'] = 5 titanic_test.loc[titanic_test['Age'] > age_avg+2*age_std, 'Age'] = 6 # Family size titanic_train['FamilySize'] = titanic_train['SibSp'] + titanic_train['Parch'] + 1 titanic_test['FamilySize'] = titanic_test['SibSp'] + titanic_test['Parch'] + 1 # Categorize Fare 
fare_avg = titanic_train['Fare'].mean()
fare_std = titanic_train['Fare'].std()
titanic_train['Fare'] = titanic_train['Fare'].fillna(titanic_train['Fare'].mean())
titanic_test['Fare'] = titanic_test['Fare'].fillna(titanic_test['Fare'].mean())
titanic_train['CategoricalFare'] = pd.cut(titanic_train['Fare'], 6)

# Map Fare for training data (binned by the fare mean and standard deviation)
titanic_train.loc[titanic_train['Fare'] <= fare_avg-2*fare_std, 'Fare'] = 1
titanic_train.loc[(titanic_train['Fare'] > fare_avg-2*fare_std) & (titanic_train['Fare'] <= fare_avg-1*fare_std), 'Fare'] = 2
titanic_train.loc[(titanic_train['Fare'] > fare_avg-1*fare_std) & (titanic_train['Fare'] <= fare_avg), 'Fare'] = 3
titanic_train.loc[(titanic_train['Fare'] > fare_avg) & (titanic_train['Fare'] <= fare_avg+1*fare_std), 'Fare'] = 4
titanic_train.loc[(titanic_train['Fare'] > fare_avg+1*fare_std) & (titanic_train['Fare'] <= fare_avg+2*fare_std), 'Fare'] = 5
titanic_train.loc[titanic_train['Fare'] > fare_avg+2*fare_std, 'Fare'] = 6

# Map Fare for testing data
titanic_test.loc[titanic_test['Fare'] <= fare_avg-2*fare_std, 'Fare'] = 1
titanic_test.loc[(titanic_test['Fare'] > fare_avg-2*fare_std) & (titanic_test['Fare'] <= fare_avg-1*fare_std), 'Fare'] = 2
titanic_test.loc[(titanic_test['Fare'] > fare_avg-1*fare_std) & (titanic_test['Fare'] <= fare_avg), 'Fare'] = 3
titanic_test.loc[(titanic_test['Fare'] > fare_avg) & (titanic_test['Fare'] <= fare_avg+1*fare_std), 'Fare'] = 4
titanic_test.loc[(titanic_test['Fare'] > fare_avg+1*fare_std) & (titanic_test['Fare'] <= fare_avg+2*fare_std), 'Fare'] = 5
titanic_test.loc[titanic_test['Fare'] > fare_avg+2*fare_std, 'Fare'] = 6

# Port of Embarkation
titanic_train['Embarked'].value_counts()
titanic_train['Embarked'] = titanic_train['Embarked'].fillna("S").map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
titanic_test['Embarked'] = titanic_test['Embarked'].fillna("S").map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)

# Cabin
titanic_train['Has_Cabin'] = titanic_train["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
titanic_test['Has_Cabin'] = titanic_test["Cabin"].apply(lambda x: 0 if type(x) == float else 1)

# Drop redundant columns
titanic_train = titanic_train.drop(['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp','CategoricalAge', 'CategoricalFare'], axis = 1)
titanic_test = titanic_test.drop(['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp'], axis = 1)

titanic_train.head(10)
titanic_test.head(10)

# Pair plots
pp = sns.pairplot(titanic_train[['Survived', 'Pclass', 'Sex', 'Age', 'Parch', 'Fare', 'Embarked', 'FamilySize']], hue='Survived', palette = 'seismic',size=1.2,diag_kind = 'kde',diag_kws=dict(shade=True),plot_kws=dict(s=10) )
pp.set(xticklabels=[])
```
# Data Modeling
```
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

# Compare different ML models' accuracies
classifiers = [
    AdaBoostClassifier(),
    DecisionTreeClassifier(),
    GradientBoostingClassifier(),
    GaussianNB(),
    KNeighborsClassifier(3),
    LinearDiscriminantAnalysis(),
    LogisticRegression(),
    QuadraticDiscriminantAnalysis(),
    RandomForestClassifier(),
    SVC(probability=True)]

log_cols =
["Classifier", "Accuracy"] log = pd.DataFrame(columns=log_cols) Features= ['Pclass', 'Sex', 'Age', 'Parch', 'Fare', 'Embarked', 'FamilySize', 'Has_Cabin'] x = titanic_train[Features] y = titanic_train.Survived train_x, val_x, train_y, val_y = train_test_split(x, y, random_state = 0) acc_dict = {} for clf in classifiers: name = clf.__class__.__name__ clf.fit(train_x, train_y) train_predictions = clf.predict(val_x) acc = accuracy_score(val_y, train_predictions) if name in acc_dict: acc_dict[name] += acc else: acc_dict[name] = acc for clf in acc_dict: acc_dict[clf] = acc_dict[clf] log_entry = pd.DataFrame([[clf, acc_dict[clf]]], columns=log_cols) log = log.append(log_entry) plt.xlabel('Accuracy') plt.title('Classifier Accuracy') sns.set_color_codes("muted") sns.barplot(x='Accuracy', y='Classifier', data=log, color="b") # Print Accuracy log # Predict the survival rate candidate_classifier = GradientBoostingClassifier() candidate_classifier.fit(x, y) result = candidate_classifier.predict(titanic_test.values) # Cross validation and confusion matrix from sklearn.model_selection import cross_val_predict from sklearn.metrics import confusion_matrix predictions = cross_val_predict(candidate_classifier, train_x, train_y, cv=3) confusion_matrix(train_y, predictions) # Calculate precission and recall from sklearn.metrics import precision_score, recall_score print("Precision:", precision_score(train_y, predictions)) print("Recall:",recall_score(train_y, predictions)) # Save predictions to csv submission = pd.DataFrame() submission["PassengerId"] = pd.read_csv("test.csv")["PassengerId"] submission["Survived"] = result submission.to_csv("submission.csv", index=False) ```
``` !pip install tensorflow-gpu import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import pandas as pd import numpy as np import seaborn as sns from pylab import rcParams import matplotlib.pyplot as plt from matplotlib import rc %matplotlib inline %config InlineBackend.figure_format='retina' sns.set(style='whitegrid', palette='muted', font_scale=1.5) rcParams['figure.figsize'] = 14, 8 RANDOM_SEED = 42 np.random.seed(RANDOM_SEED) tf.random.set_seed(RANDOM_SEED) tf.__version__ ``` # Tensors ``` x = tf.constant(1) print(x) x.numpy() x.shape tf.rank(x).numpy() m = tf.constant([[1, 2, 1], [3, 4, 2]]) print(m) st = tf.constant(["Hello", "World"]) print(st) tf.rank(st).numpy() ``` ## Helpers ``` ones = tf.ones([3, 3]) print(ones) zeros = tf.zeros([2, 3]) print(zeros) print(tf.reshape(zeros, [3, 2])) tf.transpose(zeros) ``` # Tensor Math ``` a = tf.constant(1) b = tf.constant(1) tf.add(a, b).numpy() (a + b).numpy() c = a + b tf.square(c) c * c d1 = tf.constant([[1, 2], [1, 2]]); d2 = tf.constant([[3, 4], [3, 4]]); tf.tensordot(d1, d2, axes=1).numpy() ``` # Sampling ``` norm = tf.random.normal(shape=(1000, 1), mean=0., stddev=1.) sns.distplot(norm); unif = tf.random.uniform(shape=(1000, 1), minval=0, maxval=100) sns.distplot(unif); pois = tf.random.poisson(shape=(1000, 1), lam=0.8) sns.distplot(pois); gam = tf.random.gamma(shape=(1000, 1), alpha=0.8) sns.distplot(gam); ``` # Linear Regression https://vincentarelbundock.github.io/Rdatasets/datasets.html ``` data = tf.constant([ [4,2], [4,10], [7,4], [7,22], [8,16], [9,10], [10,18], [10,26], [10,34], [11,17], [11,28], [12,14], [12,20], [12,24], [12,28], [13,26], [13,34], [13,34], [13,46], [14,26], [14,36], [14,60], [14,80], [15,20], [15,26], [15,54], [16,32], [16,40], [17,32], [17,40], [17,50], [18,42], [18,56], [18,76], [18,84], [19,36], [19,46], [19,68], [20,32], [20,48], [20,52], [20,56], [20,64], [22,66], [23,54], [24,70], [24,92], [24,93], [24,120], [25,85] ]) speed = data[:, 0] stopping_distance = data[:, 1] sns.scatterplot(speed, stopping_distance); plt.xlabel("speed") plt.ylabel("stopping distance"); lin_reg = keras.Sequential([ layers.Dense(1, activation='linear', input_shape=[1]), ]) optimizer = tf.keras.optimizers.RMSprop(0.001) lin_reg.compile(loss='mse', optimizer=optimizer, metrics=['mse']) history = lin_reg.fit( x=speed, y=stopping_distance, shuffle=True, epochs=1000, validation_split=0.2, verbose=0 ) def plot_error(history): hist = pd.DataFrame(history.history) hist['epoch'] = history.epoch plt.figure() plt.xlabel('Epoch') plt.ylabel('Mean Square Error') plt.plot(hist['epoch'], hist['mse'], label='Train Error') plt.plot(hist['epoch'], hist['val_mse'], label = 'Val Error') plt.legend() plt.show() plot_error(history) lin_reg.summary() weights = lin_reg.get_layer("dense").get_weights() intercept = weights[0][0][0] slope = weights[1][0] slope ``` # Simple Neural Network ``` def build_neural_net(): net = keras.Sequential([ layers.Dense(32, activation='relu', input_shape=[1]), layers.Dense(16, activation='relu'), layers.Dense(1), ]) optimizer = tf.keras.optimizers.RMSprop(0.001) net.compile(loss='mse', optimizer=optimizer, metrics=['mse', 'accuracy']) return net net = build_neural_net() history = net.fit( x=speed, y=stopping_distance, shuffle=True, epochs=1000, validation_split=0.2, verbose=0 ) plot_error(history) ``` ## Stop training early ``` early_stop = keras.callbacks.EarlyStopping( monitor='val_loss', patience=10 ) net = build_neural_net() history = net.fit( x=speed, y=stopping_distance, 
shuffle=True, epochs=1000, validation_split=0.2, verbose=0, callbacks=[early_stop] ) plot_error(history) ``` # Save/Restore Model ``` net.save('simple_net.h5') simple_net = keras.models.load_model('simple_net.h5') simple_net.summary() ```
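A quick way to confirm that the save/load round trip worked is to compare predictions from the in-memory network and the restored copy; a small sketch, reusing the `speed` tensor from above:
```
# The two sets of predictions should be identical.
sample = tf.reshape(tf.cast(speed[:5], tf.float32), (-1, 1))
print(net.predict(sample))
print(simple_net.predict(sample))
```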
# Functions
- Functions are used to define reusable code and to organize and simplify it
- In real development, a function usually corresponds to one small piece of functionality
- A class corresponds to one large piece of functionality
- Likewise, a function should not be longer than one screen
```
def JJ():
    print('hello')
    return 100

JJ()
a = JJ()
print(a)  # every function in Python actually has a return value ("return None")
          # even if you do not set one yourself
```
## Defining a function
def function_name(list of parameters):
    do something
![](../Photo/69.png)
- The `random`, `range`, `print`, ... that we used before are in fact all functions or classes
```
def max_(numb1, numb2, numb3):
    if numb1 < numb2 < numb3:
        return numb3
    elif numb1 < numb3 < numb2:
        return numb2
    elif numb3 < numb2 < numb1:
        return numb1

max_(1, 3, 4)

def Good(nume2, name='wjj'):
    print('%s is silly' % name)
```
## Calling a function
- functionName()
- the "()" is what makes the call
![](../Photo/70.png)
## Functions with and without a return value
- return hands back the returned content
- return can return multiple values
- Usually, when several functions cooperate to implement one feature, they will have return values
![](../Photo/71.png)
- Of course you can also explicitly return None
## EP:
![](../Photo/72.png)
```
def fun1():
    print('hahaha')

def fun2():
    print(fun1())

fun2()

def shu_(numb):
    if numb % 2 == 0:
        return 1
    else:
        return 2

shu_(2)

import random
num1 = random.randint(0, 10)

def max_(num1, num2):
    if num1 > num2:
        return num1
    if num1 < num2:
        return num2

max_(2, 3)
```
# KNN algorithm
# Read in an image for comparison and analysis
import matplotlib.pyplot as plt
res = plt.imread('D:/rabbit.jpg')
print(res)
## Parameter types and keyword parameters
- ordinary parameters
- multiple parameters
- default-value parameters
- variable-length parameters
## Ordinary parameters
## Multiple parameters
## Default-value parameters
- default parameters can only be placed at the end; parameters after a `*` must be passed by name
# Forced keyword naming
- parameters after a `*` must be passed in by keyword, i.e. variables after the `*` must be given with their parameter names
## Variable-length parameters
- \*args, variable length: any number of arguments may be passed in, including none
> - several arguments can be passed in
- the result is collected into a tuple, which can be iterated over no matter how many arguments were passed
- the name args can be changed; it is only a convention
- \**kwargs, the second kind of variable-length parameter
> - several arguments can be passed in, but each must be given with a parameter name
- the result is collected into a dictionary
- the inputs must be key=value expressions
- in large projects, parameters are usually passed in all at once through a config file
```
def TT(*args, **kwargs):
    print(args)
    print(kwargs)

TT(1, 2, 3, 4, a=100, b=1000)
```
### \**kwargs, the second kind of variable-length parameter
- the second way to pass a variable number of arguments is as dictionary data
- when writing the signature, \*args must come first and \**kwargs second; this order is fixed
```
def U(str_):
    xiaoxie = 0  # count of lowercase letters
    for i in str_:
        ASCII = ord(i)
        if 97 <= ASCII <= 122:
            xiaoxie += 1
    return xiaoxie
```
## Variable scope
- local variables: local
- global variables: global
- the globals() function returns a dictionary of the global variables, including all imported ones
- the locals() function returns all the local variables at the current position as a dictionary
```
a = 1000
def Y():
    global a  # the declaration is needed when assigning to the global variable
    a += 100
    print(a)

Y()
```
## Note:
- global: the declaration is needed when you perform an assignment
- Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
- ![](../Photo/73.png)
## EP:
- Define an email encryption function that ASCII-encrypts an email address after it is entered
- Define a function that decides whether a year is a leap year
- Nested functions: define two functions A and B; B takes the value from A and decides whether it is odd or even
# Homework
- 1
![](../Photo/74.png)
```
def joker6():
    n = 0
    for n in range(1, 100):
        shu = n * (3 * n - 1) / 2
        print(shu, end=' ')
        n += 1
        if n % 10 == 0:
            print()

joker6()
```
- 2
![](../Photo/75.png)
```
def joker8(n):
    num1 = n % 10
    num2 = n // 10
    num4 = num2 % 10
    num5 = num2 // 10
    i = num1 + num4 + num5
    return i

joker8(789)
```
- 3
![](../Photo/76.png)
```
def joker7(num1, num2, num3):
    nums = [num1, num2, num3]
    nums.sort()  # sort the numbers
    print(nums)

joker7(3, 1, 2)
```
- 4
![](../Photo/77.png)
- 5
![](../Photo/78.png)
- 6
![](../Photo/79.png)
```
def number(year):
    year = 0
    for year in range(2010, 2021):
        if year % 4 == 0 and year % 100 != 0 or year % 400 == 0:
            print(year, 'has 366 days')
        else:
            print(year, 'has 365 days')
        year += 1
        print()

number(2009)
```
- 7
![](../Photo/80.png)
- 8
![](../Photo/81.png)
- 9
![](../Photo/82.png)
![](../Photo/83.png)
- 10
![](../Photo/84.png)
- 11
### Search online for how to send email with Python code (see the sketch after this list)
- time.sleep(3): send one email every 3 seconds
- once sending fails, switch to a different mailbox
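For homework 11, a minimal sketch using only the standard library is shown below. The SMTP host, account and password are placeholders, not real values; fill in your own provider's settings before running.
```
import smtplib
import time
from email.mime.text import MIMEText

def send_mail(host, user, password, to_addr, subject, body):
    # Build a plain-text message and send it over an SSL connection.
    msg = MIMEText(body, 'plain', 'utf-8')
    msg['From'] = user
    msg['To'] = to_addr
    msg['Subject'] = subject
    with smtplib.SMTP_SSL(host, 465) as server:
        server.login(user, password)
        server.sendmail(user, [to_addr], msg.as_string())

# Send one message every 3 seconds, as the exercise asks (kept commented out
# so the notebook does not try to send real mail):
# for i in range(3):
#     send_mail('smtp.example.com', 'me@example.com', 'app-password',
#               'you@example.com', 'test %d' % i, 'hello')
#     time.sleep(3)
```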
``` %matplotlib notebook import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation # All calculations and variables # Variables needed for calculation # Length of Lower bar R = 120 # Height of gate hinge l = 150 # Length of Upper bar r = 50 # Height of gate G = l + r - 5 # Width and Height of the block weight suspended; plus extra calculations w = 16 w_l = 20 angae = 30 * np.pi / 180 w_h = w / (2 * np.sin(angae)) # Weight bearing bar length wbbl = R - 40 # Calculation to find out total degree of rotation of lower bar divby = np.sqrt(l**2 + (l - R + r)**2) rotang = np.pi/2 + np.arccos((((l**2 + r*l + R**2) / R)-(l + r)) / divby) + np.arccos(l / divby) # Point calculation function, given theta of lower bar def render_points(theta): # Calculating position of rotating end point of lower bar R_x = R * np.sin(theta) R_y = -R * np.cos(theta) wb_x = w_h * np.sin(theta - rotang - angae) wb_y = -w_h * np.cos(theta - rotang - angae) R_pts = [[wb_x, 0, R_x], [wb_y, 0, R_y]] # Calculating end points of weight bearing bar wbb_pts = [[wb_x, wb_x], [wb_y, wb_y - wbbl]] # Calculating bottom-left pont of block wt_pts = (wb_x - w/2, wb_y - wbbl - w_l) # Calculating the other point of gate k = (l - R + r - R_y) / R_x c = ((l + r) * (l - R)) / R_x g_y = (k*c + l - R + r) - np.sqrt((k*c + l - R + r)**2 + (k**2 + 1) * (r**2 - (l - R + r)**2 - c**2)) g_y /= (k**2 + 1) g_x = k * g_y - c m = (g_y - R_y) / (g_x - R_x) e_x = R_x - G * (1 + m**2)**(-0.5) e_y = R_y - G * m * (1 + m**2)**(-0.5) g_pts = [[R_x, g_x, e_x], [R_y, g_y, e_y]] # The remaining points of upper bar r_pts = [[g_x, 0], [g_y, (l - R + r)]] return R_pts, wbb_pts, wt_pts, g_pts, r_pts fig = plt.figure(figsize=(6, 6)) ax = fig.add_subplot(111, aspect='equal', autoscale_on=False, xlim=(-R - 10, R + 10), ylim=(-R, R)) plt.title("Flip Up Garage Door") ax.axes.xaxis.set_visible(False) ax.axes.yaxis.set_visible(False) # The wall ax.axvline(x=0, color='k') ax.axvline(x=2, color='k') ax.axvline(x=10, color='k') ax.hlines(y=[l - R + r - 5, l - R + r + 5], xmin=0, xmax=10, colors='k') # All initializations # Programmer only: Variables for visualization # Number of Frames frames = 2000 # Frames for gate to stay open stay = 25 # Angle by which the lower bar rises every frame dtheta = rotang / (frames // 2) # Interval between each frames in milliseconds interval = 10 # Angle theta theta = 0.0 # The initializations of bars/components R_bar, = ax.plot([], [], c='springgreen', lw=6, marker='o', alpha=0.5, mfc='k') wb_bar, = ax.plot([], [], c='dimgray', lw=4, marker='o', mfc='k') blkwt = plt.Rectangle((- w_h * np.sin(rotang + angae) - w/2, - w_h * np.cos(rotang + angae) - w_l), w, w_l, fc='gray', ec='k') gate, = ax.plot([], [], c='red', lw=6, marker='o', alpha=0.5, mfc='k') r_bar, = ax.plot([], [], c='blue', lw=6, marker='o', alpha=0.5, mfc='k') gep, = ax.plot([], [], c='k', lw=2) gep_l, = ax.plot([], [], c='k', lw=2) # Pre-calculate each point for easy retrieval rendered_pts = [] gep_x = [] gep_y = [] gepl_x = [] gepl_y = [] for i in range(frames // 2): global theta theta += dtheta rendered_pts.append(render_points(theta)) def gate_end_show(retrieve, g_pts): global gep_x, gep_y, gepl_x, gepl_y if retrieve != -1 and retrieve != 0: gep_x.append(g_pts[0][2]) gep_y.append(g_pts[1][2]) gepl_x.append(g_pts[0][0]) gepl_y.append(g_pts[1][0]) else: gep_x = [] gep_y = [] gepl_x = [] gepl_y = [] return [gep_x, gep_y], [gepl_x, gepl_y] # Init function of Animator def init(): R_bar.set_data([], []) wb_bar.set_data([], []) ax.add_patch(blkwt) 
gate.set_data([], []) r_bar.set_data([], []) gep.set_data([], []) gep_l.set_data([], []) return R_bar, wb_bar, blkwt, gate, r_bar, gep, gep_l # Animate function of Animator def animate(frame): if frame < (frames // 2): retrieve = frame elif frame >= (frames // 2 + stay): retrieve = frames // 2 + stay - frame else: retrieve = -1 R_pts, wbb_pts, wt_pts, g_pts, r_pts = rendered_pts[retrieve] R_bar.set_data(R_pts[0], R_pts[1]) wb_bar.set_data(wbb_pts[0], wbb_pts[1]) blkwt.set_xy(wt_pts) gate.set_data(g_pts[0], g_pts[1]) r_bar.set_data(r_pts[0], r_pts[1]) g1, g2 = gate_end_show(retrieve, g_pts) gep.set_data(*g1) gep_l.set_data(*g2) return R_bar, wb_bar, blkwt, gate, r_bar, gep, gep_l # Animation Module ani = animation.FuncAnimation(fig, animate, frames=(frames + stay), interval=interval, repeat=True, init_func=init, blit=True) # Show plot plt.show() # Save animation to mp4 # writermp4 = animation.FFMpegWriter(fps=50) # ani.save("animation.mp4", writer=writermp4) ```
# Stable Neural ODEs (*Stable Neural Flows*)
First introduced in [Massaroli, Poli et al, 2020](https://arxiv.org/abs/2003.08063), *Stable Neural Flows* represent a stable variant of Neural ODEs. Their simplest realization has the general neural ODE form

$$
\begin{aligned}
&\bf{\dot z} = -\nabla_z\varepsilon(t, x, z, \theta)\\
&{\bf z}(0) = h_x(\bf x)
\end{aligned}
$$

where $\varepsilon(t, x, z, \theta)$ is a neural network. They can be used either as general-purpose modules (e.g. for classification or continuous normalizing flows) or, thanks to their unique structure, to learn dynamical systems in a similar fashion to Lagrangian/Hamiltonian-inspired models.
```
from torchdyn.core import NeuralODE, ODEProblem
from torchdyn.nn import DataControl, DepthCat, Augmenter
from torchdyn.datasets import *
from torchdyn.utils import *

import torch
import torch.nn as nn

# quick run for automated notebook validation
dry_run = False

# Vanilla version of stable neural flows
class Stable(nn.Module):
    """Stable Neural Flow"""
    def __init__(self, net, depthvar=False, controlled=False):
        super().__init__()
        self.net, self.depthvar, self.controlled = net, depthvar, controlled

    def forward(self, x):
        with torch.set_grad_enabled(True):
            bs, n = x.shape[0], x.shape[1] // 2
            x = x.requires_grad_(True)
            eps = self.net(x).sum()
            out = -torch.autograd.grad(eps, x, allow_unused=False, create_graph=True)[0]
        out = out[:,:-1] if self.depthvar else out
        out = out[:,:-2] if self.controlled else out
        return out
```
## Learning Dynamical Systems
Stable neural flow variants in an (autonomous) [port--Hamiltonian](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.3380&rep=rep1&type=pdf) form

$$
\bf{\dot z} = F({\bf z})\nabla_z\varepsilon(z)
$$

generalize the Hamiltonian paradigm to modeling multi-physics systems. They obey the *power balance equation*

$$
\frac{d \varepsilon}{d t} = (\nabla\varepsilon)^\top {\bf F(z)}\nabla\varepsilon
$$

Therefore, to learn e.g. some conservative process (of any nature), it is sufficient to introduce the inductive bias that $\bf F$ be a skew-symmetric matrix, which forces $\dot \varepsilon = 0$. Here, we showcase the capabilities of stable neural flows (in port-Hamiltonian form) on such tasks.
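To spell out why the skew-symmetry constraint gives conservation (a short chain-rule derivation added here for clarity):

$$
\dot\varepsilon = (\nabla_z\varepsilon)^\top \dot{\bf z} = (\nabla_z\varepsilon)^\top {\bf F}({\bf z})\,\nabla_z\varepsilon = 0
\qquad \text{whenever } {\bf F} = -{\bf F}^\top,
$$

since $v^\top {\bf F}\, v = \tfrac{1}{2}\, v^\top ({\bf F} + {\bf F}^\top)\, v = 0$ for every vector $v$ when ${\bf F}$ is skew-symmetric. The `ConservativeStable` module defined in the next cell enforces exactly this bias by parametrizing ${\bf F}$ as $\tfrac{1}{2}({\bf M} - {\bf M}^\top)$ with a learnable ${\bf M}$.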
``` # Conservative variant of stable neural flow class ConservativeStable(nn.Module): """Conservative Stable Neural Flow""" def __init__(self, net, depthvar=False, controlled=False): super().__init__() self.net, self.depthvar, self.controlled = net, depthvar, controlled self.M = torch.nn.Parameter(torch.randn(2,2)).to(device) # impose the system matrix to be skew symmetric def Skew(self): return .5*(self.M - self.M.T) def forward(self, x): with torch.set_grad_enabled(True): bs, n = x.shape[0], x.shape[1] // 2 x = x.requires_grad_(True) eps = self.net(x).sum() out = -torch.autograd.grad(eps, x, allow_unused=False, create_graph=True)[0] #self.out = out out = out[:,:-1] if self.depthvar else out out = out[:,:-2] if self.controlled else out return out @ self.Skew() ``` We aim at using a stable neural ODE learning the following conservative nonlinear dynamical system $$ \begin{bmatrix} \dot x\\ \dot v \end{bmatrix} = \begin{bmatrix} v(t)\\ \pi\left[\cos\left(\pi x(t) - \frac{\pi}{2}\right) - x(t)\right] \end{bmatrix} $$ ``` # We use this class to simulate through torchdyn the above nonlinear system class odefunc(nn.Module): def __init__(self, sys): super().__init__() self.sys = sys def forward(self, x): return self.sys(x) ## nonlinear conservative vector field def sys(x): dxdt = x[:,1] dydt = 1*np.pi*torch.cos(np.pi*x[:,0]-np.pi/2) - np.pi*x[:,0]# - .5*np.pi*x[:,1] return torch.cat([dxdt[:,None], dydt[:,None]], 1) # define the system model just like a neural ODE system = ODEProblem(odefunc(sys), solver='dopri5') x0, t_span = torch.randn(1000,2), torch.linspace(0, 2, 100) # simulate the system _, traj = system(x0, t_span) # plot the trajectories for i in range(len(x0)): plt.plot(traj[:,i,0], traj[:,i,1], color='blue', alpha=.1) ``` Train the conservative stable neural flow ``` import torch.utils.data as data device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Data vf = odefunc(sys) X = 4*torch.rand(2048,2).to(device) y = vf(X) train = data.TensorDataset(X, y) trainloader = data.DataLoader(train, batch_size=len(X), shuffle=False) import pytorch_lightning as pl import copy class Learner(pl.LightningModule): def __init__(self, model:nn.Module): super().__init__() self.model = model def forward(self, x): return self.model.defunc(0,x) def loss(self, y, y_hat): return ((y-y_hat)**2).sum(1).mean() def training_step(self, batch, batch_idx): x = torch.randn(2048,2).to(device) y = vf(x) y_hat = self.model.vf(0,x) loss = self.loss(y_hat, y) logs = {'train_loss': loss} return {'loss': loss, 'log': logs} def configure_optimizers(self): return torch.optim.Adam(self.model.parameters(), lr=0.001) def train_dataloader(self): return trainloader # vector field parametrized by a NN h_dim = 128 f = ConservativeStable(nn.Sequential( nn.Linear(2,h_dim), nn.Tanh(), nn.Linear(h_dim,h_dim), nn.Tanh(), nn.Linear(h_dim,h_dim), nn.Tanh(), nn.Linear(h_dim, 1))) # neural ODE model = NeuralODE(f, order=1, solver='dopri5', sensitivity='adjoint').to(device) seq = nn.Sequential(model).to(device) learn = Learner(model) if dry_run: trainer = pl.Trainer(max_epochs=1, gpus=1) else: trainer = pl.Trainer(max_epochs=1000, gpus=1) trainer.fit(learn) system = system.to(device) model = model.to(device) # Sample random initial conditions X_t = torch.randn(1000, 2).to(device) # Evaluate the model's trajectories t_span = torch.linspace(0, 5, 100) _, sys_traj = system(X_t, t_span) sys_traj = sys_traj.detach().cpu() traj = model.trajectory(X_t, t_span).detach().cpu() # Plot the trajectories with random ICs fig = 
plt.figure(figsize=(10,3)) ax = fig.add_subplot(121) ax2 = fig.add_subplot(122) for i in range(len(X_t)): ax.plot(traj[:,i,0], traj[:,i,1], color='blue', alpha=0.1); ax.set_xlim([-3,3]) ax.set_ylim([-3,3]) ax.set_xlabel(r"$q$") ax.set_ylabel(r"$p$") ax.set_title("Reconstructed") for i in range(len(X_t)): ax2.plot(sys_traj[:,i,0], sys_traj[:,i,1], color='blue', alpha=0.1); ax2.set_xlim([-3,3]) ax2.set_ylim([-3,3]) ax2.set_xlabel(r"$q$") ax2.set_ylabel(r"$p$") ax2.set_title("Nominal") # Compare the learned vector field to the nominal one import time fig = plt.figure(figsize=(10,3)) ax0 = fig.add_subplot(121) ax1 = fig.add_subplot(122) n_grid = 25 q = torch.linspace(-3,3,n_grid) Q, P = torch.meshgrid(q,q) H, U, V = torch.zeros(Q.shape), torch.zeros(Q.shape), torch.zeros(Q.shape) Ur, Vr = torch.zeros(Q.shape), torch.zeros(Q.shape) for i in range(n_grid): for j in range(n_grid): x = torch.cat([Q[i,j].reshape(1,1),P[i,j].reshape(1,1)],1).to(device) H[i,j] = f.net(x).detach().cpu() O = model.vf(0,x).detach().cpu() U[i,j], V[i,j] = O[0,0], O[0,1] Ur[i,j], Vr[i,j] = vf(x)[0,0].detach().cpu(), vf(x)[0,1].detach().cpu() ax0.contourf(Q,P,H,100,cmap='inferno') ax0.streamplot(Q.T.numpy(),P.T.numpy(),U.T.numpy(),V.T.numpy(), color='white') ax1.streamplot(Q.T.numpy(),P.T.numpy(),Ur.T.numpy(),Vr.T.numpy(), color='black') ax0.set_xlim([Q.min(),Q.max()]) ; ax1.set_xlim([Q.min(),Q.max()]) ax0.set_ylim([P.min(),P.max()]) ; ax1.set_ylim([P.min(),P.max()]) ax0.set_xticks([]) ; ax1.set_xticks([]) ax0.set_yticks([]) ; ax1.set_yticks([]) ax0.set_title(f"Learned Energy & Vector Field") ; ax1.set_title("Nominal Vector Field") ```
# RadarCOVID-Report ## Data Extraction ``` import datetime import json import logging import os import shutil import tempfile import textwrap import uuid import matplotlib.pyplot as plt import matplotlib.ticker import numpy as np import pandas as pd import pycountry import retry import seaborn as sns %matplotlib inline current_working_directory = os.environ.get("PWD") if current_working_directory: os.chdir(current_working_directory) sns.set() matplotlib.rcParams["figure.figsize"] = (15, 6) extraction_datetime = datetime.datetime.utcnow() extraction_date = extraction_datetime.strftime("%Y-%m-%d") extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1) extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d") extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H") current_hour = datetime.datetime.utcnow().hour are_today_results_partial = current_hour != 23 ``` ### Constants ``` from Modules.ExposureNotification import exposure_notification_io spain_region_country_code = "ES" germany_region_country_code = "DE" default_backend_identifier = spain_region_country_code backend_generation_days = 7 * 2 daily_summary_days = 7 * 4 * 3 daily_plot_days = 7 * 4 tek_dumps_load_limit = daily_summary_days + 1 ``` ### Parameters ``` environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER") if environment_backend_identifier: report_backend_identifier = environment_backend_identifier else: report_backend_identifier = default_backend_identifier report_backend_identifier environment_enable_multi_backend_download = \ os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD") if environment_enable_multi_backend_download: report_backend_identifiers = None else: report_backend_identifiers = [report_backend_identifier] report_backend_identifiers environment_invalid_shared_diagnoses_dates = \ os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES") if environment_invalid_shared_diagnoses_dates: invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",") else: invalid_shared_diagnoses_dates = [] invalid_shared_diagnoses_dates ``` ### COVID-19 Cases ``` report_backend_client = \ exposure_notification_io.get_backend_client_with_identifier( backend_identifier=report_backend_identifier) @retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10)) def download_cases_dataframe(): return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv") confirmed_df_ = download_cases_dataframe() confirmed_df_.iloc[0] confirmed_df = confirmed_df_.copy() confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]] confirmed_df.rename( columns={ "date": "sample_date", "iso_code": "country_code", }, inplace=True) def convert_iso_alpha_3_to_alpha_2(x): try: return pycountry.countries.get(alpha_3=x).alpha_2 except Exception as e: logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}") return None confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2) confirmed_df.dropna(inplace=True) confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True) confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_df.sort_values("sample_date", inplace=True) confirmed_df.tail() confirmed_days = pd.date_range( start=confirmed_df.iloc[0].sample_date, end=extraction_datetime) confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"]) 
confirmed_days_df["sample_date_string"] = \ confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_days_df.tail() def sort_source_regions_for_display(source_regions: list) -> list: if report_backend_identifier in source_regions: source_regions = [report_backend_identifier] + \ list(sorted(set(source_regions).difference([report_backend_identifier]))) else: source_regions = list(sorted(source_regions)) return source_regions report_source_regions = report_backend_client.source_regions_for_date( date=extraction_datetime.date()) report_source_regions = sort_source_regions_for_display( source_regions=report_source_regions) report_source_regions def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None): source_regions_at_date_df = confirmed_days_df.copy() source_regions_at_date_df["source_regions_at_date"] = \ source_regions_at_date_df.sample_date.apply( lambda x: source_regions_for_date_function(date=x)) source_regions_at_date_df.sort_values("sample_date", inplace=True) source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \ source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x))) source_regions_at_date_df.tail() #%% source_regions_for_summary_df_ = \ source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy() source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True) source_regions_for_summary_df_.tail() #%% confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"] confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns) for source_regions_group, source_regions_group_series in \ source_regions_at_date_df.groupby("_source_regions_group"): source_regions_set = set(source_regions_group.split(",")) confirmed_source_regions_set_df = \ confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy() confirmed_source_regions_group_df = \ confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \ .reset_index().sort_values("sample_date") confirmed_source_regions_group_df = \ confirmed_source_regions_group_df.merge( confirmed_days_df[["sample_date_string"]].rename( columns={"sample_date_string": "sample_date"}), how="right") confirmed_source_regions_group_df["new_cases"] = \ confirmed_source_regions_group_df["new_cases"].clip(lower=0) confirmed_source_regions_group_df["covid_cases"] = \ confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round() confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[confirmed_output_columns] confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan) confirmed_source_regions_group_df.fillna(method="ffill", inplace=True) confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[ confirmed_source_regions_group_df.sample_date.isin( source_regions_group_series.sample_date_string)] confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df) result_df = confirmed_output_df.copy() result_df.tail() #%% result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True) result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left") result_df.sort_values("sample_date_string", inplace=True) result_df.fillna(method="ffill", inplace=True) result_df.tail() #%% result_df[["new_cases", "covid_cases"]].plot() if columns_suffix: result_df.rename( columns={ "new_cases": "new_cases_" + columns_suffix, "covid_cases": "covid_cases_" + columns_suffix}, inplace=True) 
return result_df, source_regions_for_summary_df_ confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe( report_backend_client.source_regions_for_date) confirmed_es_df, _ = get_cases_dataframe( lambda date: [spain_region_country_code], columns_suffix=spain_region_country_code.lower()) ``` ### Extract API TEKs ``` raw_zip_path_prefix = "Data/TEKs/Raw/" base_backend_identifiers = [report_backend_identifier] multi_backend_exposure_keys_df = \ exposure_notification_io.download_exposure_keys_from_backends( backend_identifiers=report_backend_identifiers, generation_days=backend_generation_days, fail_on_error_backend_identifiers=base_backend_identifiers, save_raw_zip_path_prefix=raw_zip_path_prefix) multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"] multi_backend_exposure_keys_df.rename( columns={ "generation_datetime": "sample_datetime", "generation_date_string": "sample_date_string", }, inplace=True) multi_backend_exposure_keys_df.head() early_teks_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.rolling_period < 144].copy() early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6 early_teks_df[early_teks_df.sample_date_string != extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) early_teks_df[early_teks_df.sample_date_string == extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[ "sample_date_string", "region", "key_data"]] multi_backend_exposure_keys_df.head() active_regions = \ multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() active_regions multi_backend_summary_df = multi_backend_exposure_keys_df.groupby( ["sample_date_string", "region"]).key_data.nunique().reset_index() \ .pivot(index="sample_date_string", columns="region") \ .sort_index(ascending=False) multi_backend_summary_df.rename( columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) multi_backend_summary_df.rename_axis("sample_date", inplace=True) multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int) multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days) multi_backend_summary_df.head() def compute_keys_cross_sharing(x): teks_x = x.key_data_x.item() common_teks = set(teks_x).intersection(x.key_data_y.item()) common_teks_fraction = len(common_teks) / len(teks_x) return pd.Series(dict( common_teks=common_teks, common_teks_fraction=common_teks_fraction, )) multi_backend_exposure_keys_by_region_df = \ multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index() multi_backend_exposure_keys_by_region_df["_merge"] = True multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_df.merge( multi_backend_exposure_keys_by_region_df, on="_merge") multi_backend_exposure_keys_by_region_combination_df.drop( columns=["_merge"], inplace=True) if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1: multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_combination_df[ multi_backend_exposure_keys_by_region_combination_df.region_x != multi_backend_exposure_keys_by_region_combination_df.region_y] multi_backend_exposure_keys_cross_sharing_df = \ multi_backend_exposure_keys_by_region_combination_df \ .groupby(["region_x", "region_y"]) \ .apply(compute_keys_cross_sharing) \ .reset_index() 
multi_backend_cross_sharing_summary_df = \ multi_backend_exposure_keys_cross_sharing_df.pivot_table( values=["common_teks_fraction"], columns="region_x", index="region_y", aggfunc=lambda x: x.item()) multi_backend_cross_sharing_summary_df multi_backend_without_active_region_exposure_keys_df = \ multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier] multi_backend_without_active_region = \ multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() multi_backend_without_active_region exposure_keys_summary_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.region == report_backend_identifier] exposure_keys_summary_df.drop(columns=["region"], inplace=True) exposure_keys_summary_df = \ exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df = \ exposure_keys_summary_df.reset_index().set_index("sample_date_string") exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) exposure_keys_summary_df.head() ``` ### Dump API TEKs ``` tek_list_df = multi_backend_exposure_keys_df[ ["sample_date_string", "region", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( ["sample_date", "region"]).tek_list.unique().reset_index() tek_list_df["extraction_date"] = extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_path_prefix = "Data/TEKs/" tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json" tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json" tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json" for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]: os.makedirs(os.path.dirname(path), exist_ok=True) tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier] tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( tek_list_current_path, lines=True, orient="records") tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json( tek_list_daily_path, lines=True, orient="records") tek_list_base_df.to_json( tek_list_hourly_path, lines=True, orient="records") tek_list_base_df.head() ``` ### Load TEK Dumps ``` import glob def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame(columns=["region"]) file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json")))) if limit: file_paths = file_paths[:limit] for file_path in file_paths: logging.info(f"Loading TEKs from '{file_path}'...") iteration_extracted_teks_df = pd.read_json(file_path, lines=True) extracted_teks_df = extracted_teks_df.append( iteration_extracted_teks_df, sort=False) extracted_teks_df["region"] = \ extracted_teks_df.region.fillna(spain_region_country_code).copy() if region: extracted_teks_df = \ extracted_teks_df[extracted_teks_df.region == region] return extracted_teks_df daily_extracted_teks_df = load_extracted_teks( mode="Daily", region=report_backend_identifier, limit=tek_dumps_load_limit) daily_extracted_teks_df.head() exposure_keys_summary_df_ = 
daily_extracted_teks_df \ .sort_values("extraction_date", ascending=False) \ .groupby("sample_date").tek_list.first() \ .to_frame() exposure_keys_summary_df_.index.name = "sample_date_string" exposure_keys_summary_df_["tek_list"] = \ exposure_keys_summary_df_.tek_list.apply(len) exposure_keys_summary_df_ = exposure_keys_summary_df_ \ .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \ .sort_index(ascending=False) exposure_keys_summary_df = exposure_keys_summary_df_ exposure_keys_summary_df.head() ``` ### Daily New TEKs ``` tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() def compute_teks_by_generation_and_upload_date(date): day_new_teks_set_df = tek_list_df.copy().diff() try: day_new_teks_set = day_new_teks_set_df[ day_new_teks_set_df.index == date].tek_list.item() except ValueError: day_new_teks_set = None if pd.isna(day_new_teks_set): day_new_teks_set = set() day_new_teks_df = daily_extracted_teks_df[ daily_extracted_teks_df.extraction_date == date].copy() day_new_teks_df["shared_teks"] = \ day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set)) day_new_teks_df["shared_teks"] = \ day_new_teks_df.shared_teks.apply(len) day_new_teks_df["upload_date"] = date day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True) day_new_teks_df = day_new_teks_df[ ["upload_date", "generation_date", "shared_teks"]] day_new_teks_df["generation_to_upload_days"] = \ (pd.to_datetime(day_new_teks_df.upload_date) - pd.to_datetime(day_new_teks_df.generation_date)).dt.days day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0] return day_new_teks_df shared_teks_generation_to_upload_df = pd.DataFrame() for upload_date in daily_extracted_teks_df.extraction_date.unique(): shared_teks_generation_to_upload_df = \ shared_teks_generation_to_upload_df.append( compute_teks_by_generation_and_upload_date(date=upload_date)) shared_teks_generation_to_upload_df \ .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True) shared_teks_generation_to_upload_df.tail() today_new_teks_df = \ shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.upload_date == extraction_date].copy() today_new_teks_df.tail() if not today_new_teks_df.empty: today_new_teks_df.set_index("generation_to_upload_days") \ .sort_index().shared_teks.plot.bar() generation_to_upload_period_pivot_df = \ shared_teks_generation_to_upload_df[ ["upload_date", "generation_to_upload_days", "shared_teks"]] \ .pivot(index="upload_date", columns="generation_to_upload_days") \ .sort_index(ascending=False).fillna(0).astype(int) \ .droplevel(level=0, axis=1) generation_to_upload_period_pivot_df.head() new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "shared_teks_by_upload_date", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.tail() shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \ [["upload_date", "shared_teks"]].rename( columns={ "upload_date": "sample_date_string", "shared_teks": "shared_teks_uploaded_on_generation_date", }) shared_teks_uploaded_on_generation_date_df.head() estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \ 
.groupby(["upload_date"]).shared_teks.max().reset_index() \ .sort_values(["upload_date"], ascending=False) \ .rename(columns={ "upload_date": "sample_date_string", "shared_teks": "shared_diagnoses", }) invalid_shared_diagnoses_dates_mask = \ estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates) estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0 estimated_shared_diagnoses_df.head() ``` ### Hourly New TEKs ``` hourly_extracted_teks_df = load_extracted_teks( mode="Hourly", region=report_backend_identifier, limit=25) hourly_extracted_teks_df.head() hourly_new_tek_count_df = hourly_extracted_teks_df \ .groupby("extraction_date_with_hour").tek_list. \ apply(lambda x: set(sum(x, []))).reset_index().copy() hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \ .sort_index(ascending=True) hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff() hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply( lambda x: len(x) if not pd.isna(x) else 0) hourly_new_tek_count_df.rename(columns={ "new_tek_count": "shared_teks_by_upload_date"}, inplace=True) hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[ "extraction_date_with_hour", "shared_teks_by_upload_date"]] hourly_new_tek_count_df.head() hourly_summary_df = hourly_new_tek_count_df.copy() hourly_summary_df.set_index("extraction_date_with_hour", inplace=True) hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index() hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df = hourly_summary_df.tail(-1) hourly_summary_df.head() ``` ### Official Statistics ``` import requests import pandas.io.json official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics") official_stats_response.raise_for_status() official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json()) official_stats_df = official_stats_df_.copy() official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True) official_stats_df.head() official_stats_column_map = { "date": "sample_date", "applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated", "communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated", } accumulated_suffix = "_accumulated" accumulated_values_columns = \ list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values())) interpolated_values_columns = \ list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns)) official_stats_df = \ official_stats_df[official_stats_column_map.keys()] \ .rename(columns=official_stats_column_map) official_stats_df["extraction_date"] = extraction_date official_stats_df.head() official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json" previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True) previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True) official_stats_df = official_stats_df.append(previous_official_stats_df) official_stats_df.head() official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)] official_stats_df.sort_values("extraction_date", ascending=False, inplace=True) official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", 
inplace=True) official_stats_df.head() official_stats_stored_df = official_stats_df.copy() official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d") official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True) official_stats_df.drop(columns=["extraction_date"], inplace=True) official_stats_df = confirmed_days_df.merge(official_stats_df, how="left") official_stats_df.sort_values("sample_date", ascending=False, inplace=True) official_stats_df.head() official_stats_df[accumulated_values_columns] = \ official_stats_df[accumulated_values_columns] \ .astype(float).interpolate(limit_area="inside") official_stats_df[interpolated_values_columns] = \ official_stats_df[accumulated_values_columns].diff(periods=-1) official_stats_df.drop(columns="sample_date", inplace=True) official_stats_df.head() ``` ### Data Merge ``` result_summary_df = exposure_keys_summary_df.merge( new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( official_stats_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df = confirmed_es_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left") result_summary_df.set_index(["sample_date", "source_regions"], inplace=True) result_summary_df.drop(columns=["sample_date_string"], inplace=True) result_summary_df.sort_index(ascending=False, inplace=True) result_summary_df.head() with pd.option_context("mode.use_inf_as_na", True): result_summary_df = result_summary_df.fillna(0).astype(int) result_summary_df["teks_per_shared_diagnosis"] = \ (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0) result_summary_df["shared_diagnoses_per_covid_case"] = \ (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0) result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0) result_summary_df.head(daily_plot_days) def compute_aggregated_results_summary(days) -> pd.DataFrame: aggregated_result_summary_df = result_summary_df.copy() aggregated_result_summary_df["covid_cases_for_ratio"] = \ aggregated_result_summary_df.covid_cases.mask( aggregated_result_summary_df.shared_diagnoses == 0, 0) aggregated_result_summary_df["covid_cases_for_ratio_es"] = \ aggregated_result_summary_df.covid_cases_es.mask( aggregated_result_summary_df.shared_diagnoses_es == 0, 0) aggregated_result_summary_df = aggregated_result_summary_df \ .sort_index(ascending=True).fillna(0).rolling(days).agg({ "covid_cases": "sum", "covid_cases_es": "sum", "covid_cases_for_ratio": "sum", "covid_cases_for_ratio_es": "sum", "shared_teks_by_generation_date": "sum", "shared_teks_by_upload_date": "sum", "shared_diagnoses": "sum", "shared_diagnoses_es": "sum", 
}).sort_index(ascending=False) with pd.option_context("mode.use_inf_as_na", True): aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int) aggregated_result_summary_df["teks_per_shared_diagnosis"] = \ (aggregated_result_summary_df.shared_teks_by_upload_date / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \ (aggregated_result_summary_df.shared_diagnoses / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (aggregated_result_summary_df.shared_diagnoses_es / aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0) return aggregated_result_summary_df aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7) aggregated_result_with_7_days_window_summary_df.head() last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1] last_7_days_summary aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13) last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1] last_14_days_summary ``` ## Report Results ``` display_column_name_mapping = { "sample_date": "Sample\u00A0Date\u00A0(UTC)", "source_regions": "Source Countries", "datetime_utc": "Timestamp (UTC)", "upload_date": "Upload Date (UTC)", "generation_to_upload_days": "Generation to Upload Period in Days", "region": "Backend", "region_x": "Backend\u00A0(A)", "region_y": "Backend\u00A0(B)", "common_teks": "Common TEKs Shared Between Backends", "common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)", "covid_cases": "COVID-19 Cases (Source Countries)", "shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)", "shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)", "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)", "shared_diagnoses": "Shared Diagnoses (Source Countries โ€“ Estimation)", "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)", "shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)", "covid_cases_es": "COVID-19 Cases (Spain)", "app_downloads_es": "App Downloads (Spain โ€“ Official)", "shared_diagnoses_es": "Shared Diagnoses (Spain โ€“ Official)", "shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)", } summary_columns = [ "covid_cases", "shared_teks_by_generation_date", "shared_teks_by_upload_date", "shared_teks_uploaded_on_generation_date", "shared_diagnoses", "teks_per_shared_diagnosis", "shared_diagnoses_per_covid_case", "covid_cases_es", "app_downloads_es", "shared_diagnoses_es", "shared_diagnoses_per_covid_case_es", ] summary_percentage_columns= [ "shared_diagnoses_per_covid_case_es", "shared_diagnoses_per_covid_case", ] ``` ### Daily Summary Table ``` result_summary_df_ = result_summary_df.copy() result_summary_df = result_summary_df[summary_columns] result_summary_with_display_names_df = result_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) result_summary_with_display_names_df ``` ### Daily Summary Plots ``` result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \ .droplevel(level=["source_regions"]) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) summary_ax_list = 
result_plot_summary_df.sort_index(ascending=True).plot.bar( title=f"Daily Summary", rot=45, subplots=True, figsize=(15, 30), legend=False) ax_ = summary_ax_list[0] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.95) _ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist())) for percentage_column in summary_percentage_columns: percentage_column_index = summary_columns.index(percentage_column) summary_ax_list[percentage_column_index].yaxis \ .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0)) ``` ### Daily Generation to Upload Period Table ``` display_generation_to_upload_period_pivot_df = \ generation_to_upload_period_pivot_df \ .head(backend_generation_days) display_generation_to_upload_period_pivot_df \ .head(backend_generation_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) fig, generation_to_upload_period_pivot_table_ax = plt.subplots( figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df))) generation_to_upload_period_pivot_table_ax.set_title( "Shared TEKs Generation to Upload Period Table") sns.heatmap( data=display_generation_to_upload_period_pivot_df .rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout() ``` ### Hourly Summary Plots ``` hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist())) ``` ### Publish Results ``` github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "", } general_columns = \ list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values())) general_formatter = lambda x: f"{x}" if x != 0 else "" display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns))) daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) def format_multi_backend_cross_sharing_fraction(x): if pd.isna(x): return "-" elif round(x * 100, 1) == 0: return "" else: return f"{x:.1%}" multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ 
.rename_axis(index=display_column_name_mapping) \ .to_html( classes="table-center", formatters=display_formatters, float_format=format_multi_backend_cross_sharing_fraction) multi_backend_cross_sharing_summary_table_html = \ multi_backend_cross_sharing_summary_table_html \ .replace("<tr>","<tr style=\"text-align: center;\">") extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.item() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.item() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.item() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.item() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.item() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) display_source_regions = ", ".join(report_source_regions) if len(report_source_regions) == 1: display_brief_source_regions = report_source_regions[0] else: display_brief_source_regions = f"{len(report_source_regions)} ๐Ÿ‡ช๐Ÿ‡บ" def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi df = df.copy() df_styler = df.style.format(display_formatters) media_path = get_temporary_image_path() dfi.export(df_styler, media_path) return media_path summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) multi_backend_summary_table_image_path = save_temporary_dataframe_image( df=multi_backend_summary_df) generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax) ``` ### Save Results ``` report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") multi_backend_cross_sharing_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( 
multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png") ``` ### Publish Results as JSON ``` def generate_summary_api_results(df: pd.DataFrame) -> list: api_df = df.reset_index().copy() api_df["sample_date_string"] = \ api_df["sample_date"].dt.strftime("%Y-%m-%d") api_df["source_regions"] = \ api_df["source_regions"].apply(lambda x: x.split(",")) return api_df.to_dict(orient="records") summary_api_results = \ generate_summary_api_results(df=result_summary_df) today_summary_api_results = \ generate_summary_api_results(df=extraction_date_result_summary_df)[0] summary_results = dict( backend_identifier=report_backend_identifier, source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=0, ), today=today_summary_api_results, last_7_days=last_7_days_summary, last_14_days=last_14_days_summary, daily_results=summary_api_results) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4) ``` ### Publish on README ``` with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents) ``` ### Publish on Twitter ``` enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule" and \ (shared_teks_by_upload_date_last_hour or not are_today_results_partial): import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] if are_today_results_partial: today_addendum = " (Partial)" else: today_addendum = "" def format_shared_diagnoses_per_covid_case(value) -> str: if value == 0: return "โ€“" return f"โ‰ค{value:.2%}" display_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case) display_last_14_days_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"]) 
display_last_14_days_shared_diagnoses_per_covid_case_es = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"]) status = textwrap.dedent(f""" #RadarCOVID – {extraction_date_with_hour} Today{today_addendum}: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} - Usage Ratio: {display_shared_diagnoses_per_covid_case} Last 14 Days: - Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case} - Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es} Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids) ```
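The publishing cell above posts the status unconditionally once the environment checks pass. The sketch below shows one way the final call could be guarded for local runs; it is not part of the original pipeline, it assumes the `status` text (before the UTF-8 encoding step), `media_ids` and the authenticated `api` object built above, and the `RADARCOVID_REPORT__DRY_RUN` environment variable is a hypothetical flag introduced only for illustration. The 280-character limit is likewise an approximation, since Twitter applies weighted character counting to URLs and emoji.

```
# Hypothetical pre-flight check before posting (sketch, not part of the original notebook).
# Assumes `status` (str), `media_ids` and the authenticated `api` from the cells above.
import logging
import os

TWEET_CHARACTER_LIMIT = 280  # approximate: the API applies weighted character counting

if len(status) > TWEET_CHARACTER_LIMIT:
    logging.warning(f"Status is {len(status)} characters long; the API may reject it.")

if os.environ.get("RADARCOVID_REPORT__DRY_RUN"):  # hypothetical environment flag
    print(status)  # inspect the tweet locally instead of posting it
else:
    api.update_status(status=status, media_ids=media_ids)
```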
``` import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import pickle import pandas as pd ``` ## History Files ``` with open('../history/binary_trainHistory', 'rb') as history_file: binary_history = pickle.load(history_file) with open('../history/CWE119_trainHistory', 'rb') as history_file: CWE119_history = pickle.load(history_file) with open('../history/CWE120_trainHistory', 'rb') as history_file: CWE120_history = pickle.load(history_file) with open('../history/CWE469_trainHistory', 'rb') as history_file: CWE469_history = pickle.load(history_file) with open('../history/CWE476_trainHistory', 'rb') as history_file: CWE470_history = pickle.load(history_file) with open('../history/CWE-others_trainHistory', 'rb') as history_file: others_history = pickle.load(history_file) binary_history.keys() ``` ## Training Loss ``` plt.title('Training Loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.plot(binary_history['loss'], label='Binary') plt.plot(CWE119_history['loss'], label='CWE119') plt.plot(CWE120_history['loss'], label='CWE120') plt.plot(CWE469_history['loss'], label='CWE469') plt.plot(CWE470_history['loss'], label='CWE470') plt.plot(others_history['loss'], label='Others') plt.legend() plt.savefig('train_loss.png') test_data = pd.read_pickle("../dataset/test.pickle") with open('../tokenizer/tokenizer.pickle', 'rb') as handle: tokenizer = pickle.load(handle) test_tokenized = tokenizer.texts_to_sequences(test_data[0]) x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post") binary_model = tf.keras.models.load_model('../trained_model/Simple_CNN_binary') y_test_binary = (test_data[test_data.columns[1:]]).any(axis=1, bool_only=bool).astype(int) CWE119_model = tf.keras.models.load_model('../trained_model/Simple_CNN_CWE119') y_test_CWE119 = test_data[test_data.columns[1]].astype(int) CWE120_model = tf.keras.models.load_model('../trained_model/Simple_CNN_CWE120') y_test_CWE120 = test_data[test_data.columns[2]].astype(int) CWE469_model = tf.keras.models.load_model('../trained_model/Simple_CNN_CWE469') y_test_CWE469 = test_data[test_data.columns[3]].astype(int) CWE476_model = tf.keras.models.load_model('../trained_model/Simple_CNN_CWE476') y_test_CWE476 = test_data[test_data.columns[4]].astype(int) others_model = tf.keras.models.load_model('../trained_model/Simple_CNN_CWE-others') y_test_others = test_data[test_data.columns[5]].astype(int) binary_test = binary_model.evaluate(x_test, y_test_binary, batch_size=128) CWE119_test = CWE119_model.evaluate(x_test, y_test_CWE119, batch_size=128) CWE120_test = CWE120_model.evaluate(x_test, y_test_CWE120, batch_size=128) CWE469_test = CWE469_model.evaluate(x_test, y_test_CWE469, batch_size=128) CWE476_test = CWE476_model.evaluate(x_test, y_test_CWE476, batch_size=128) others_test = others_model.evaluate(x_test, y_test_others, batch_size=128) ``` ## Testing Results ``` accuracy = [binary_test[5], CWE119_test[5], CWE120_test[5], CWE469_test[5], CWE476_test[5], others_test[5]] AUC = [binary_test[8], CWE119_test[8], CWE120_test[8], CWE469_test[8], CWE476_test[8], others_test[8]] recall = [binary_test[7], CWE119_test[7], CWE120_test[7], CWE469_test[7], CWE476_test[7], others_test[7]] tick_label = ['Binary', 'CWE119', 'CWE120', 'CWE469', 'CWE476', 'Others'] x=np.arange(6) width=0.25 plt.bar(x - width, accuracy, width=width, label='Accuracy') plt.bar(x, AUC, width=width, label='AUC') plt.bar(x + width, recall, width=width, label='Recall') plt.title('Test Results') plt.ylabel('Accuracy') plt.xlabel('Models') 
plt.legend() plt.xticks(x,tick_label) plt.ylim(ymin = 0.6) plt.savefig('test_results.png') ```
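The test-results chart above pulls accuracy, AUC and recall out of the `evaluate` return values by hard-coded positions ([5], [8], [7]), which silently breaks if the metric list changes at compile time. A less brittle lookup is sketched below for one of the models; the metric keys ("accuracy", "auc", "recall") are assumptions that depend on how the models were compiled. Recent TensorFlow versions can also return the same mapping directly via `evaluate(..., return_dict=True)`.

```
# Sketch: resolve evaluation metrics by name instead of list position.
# Assumes `binary_model`, `x_test` and `y_test_binary` from the cells above.
results = binary_model.evaluate(x_test, y_test_binary, batch_size=128, verbose=0)
named_results = dict(zip(binary_model.metrics_names, results))
print(named_results)

# The keys depend on how the model was compiled, e.g. "accuracy", "recall", "auc".
accuracy = named_results.get("accuracy")
recall = named_results.get("recall")
auc = named_results.get("auc")
```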
``` import warnings warnings.filterwarnings('ignore') import os import pandas as pd import requests from bs4 import BeautifulSoup from typing import List, Tuple, Union, Callable, Dict, Iterator from collections import defaultdict from difflib import SequenceMatcher import spacy from spacy.matcher import Matcher, PhraseMatcher from spacy.tokens.doc import Doc from spacy.tokens.span import Span from spacy.tokens.token import Token from geotext import GeoText ## new library: https://pypi.org/project/geotext/ nlp = spacy.load("en_core_web_sm") filenames = os.listdir(r'C:\Users\12482\Desktop\lambda-school\labs\project\textfolder\text_cases') # Wherever files are located def similar(a: str, return_b: str, min_score: float) -> Union[str, None]: """ โ€ข Returns 2nd string if similarity score is above supplied minimum score. Else, returns None. """ if SequenceMatcher(None, a, return_b).ratio() >= min_score: return return_b def similar_in_list(lst: Union[List[str], Iterator[str]]) -> Callable: """ โ€ข Uses a closure on supplied list to return a function that iterates over the list in order to search for the first similar term. It's used widely in the scraper. """ def impl(item: str, min_score: float) -> Union[str, None]: for s in lst: s = similar(item, s, min_score) if s: return s return impl members = [ "Adkins-Blanch, Charles K.", "Michael P. Baird", "Cassidy, William A.", "Cole, Patricia A.", "Couch, V. Stuart", "Creppy, Michael J.", "Crossett, John P.", "Donovan, Teresa L.", "Foote, Megan E.", "Geller, Joan B.", "Gemoets, Marcos", "Gonzalez, Gabriel", "Goodwin, Deborah K.", "Gorman, Stephanie E.", "Grant, Edward R.", "Greer, Anne J.", "Guendelsberger, John", "Hunsucker, Keith E.", "Kelly, Edward F.", "Kendall Clark, Molly", "Liebmann, Beth S.", "Liebowitz, Ellen C.", "Mahtabfar, Sunita B.", "Malphrus, Garry D.", "Mann, Ana", "Miller, Neil P.", "Monsky, Megan Foote", "Montante Jr., Phillip J.", "Morris, Daniel", "Mullane, Hugh G.", "Neal, David L.", "Noferi, Mark", "O'Connor, Blair", "O'Herron, Margaret M.", "O'Leary, Brian M.", "Owen, Sirce E.", "Pauley, Roger", "Petty, Aaron R.", "Pepper, S. Kathleen", "RILEY, KEVIN W.", "Rosen, Scott", "Snow, Thomas G.", "Swanwick, Daniel L.", "Wendtland, Linda S.", "Wetmore, David H.", "Wilson, Earle B." ] judges_url = 'https://en.wikipedia.org/wiki/Board_of_Immigration_Appeals' html = requests.get(judges_url).text soup = BeautifulSoup(html, 'html.parser') table = soup.find("table", class_="wikitable") web_judges = [itm.get_text().strip() for itm in table.select("td")[1::4]] web_judges combined_members = members + web_judges class GetJudge: """ Returns the judge's name if a match is found. """ accuracy = 0.7 def __init__(self): # Currently grabs potential judges names from a URL here. # For testing we'll instead alternate `names` # names = web_judges # names = members names = combined_members self.is_judge: Callable = similar_in_list(names) def __call__(self, name): result = self.is_judge(name, self.accuracy) if not result: flip_name = ' '.join(reversed(name.split(', '))) result = self.is_judge(flip_name, self.accuracy) return result class BIACase: def __init__(self, text: str): """ โ€ข Input will be text from a BIA case pdf file, after the pdf has been converted from PDF to text. โ€ข Scraping works utilizing spaCy, tokenizing the text, and iterating token by token searching for matching keywords. 
""" self.doc: Doc = nlp(text) self.ents: Tuple[Span] = self.doc.ents self.if_judge = GetJudge() def get_ents(self, labels: List[str]) -> Iterator[Span]: """ โ€ข Retrieves entitiess of a specified label(s) in the document, if no label is specified, returns all entities """ return (ent for ent in self.ents if ent.label_ in labels) def get_panel(self) -> str: """ โ€ข Returns the panel members of case in document. """ panel_members: List[str] panel_members = [] possible_members: Iterator[Span] possible_members = map( lambda ent: ent.text, self.get_ents(['PERSON']) ) for member in possible_members: judge: Union[str, None] judge = self.if_judge(member) if judge: panel_members.append(judge) return '; '.join(set(panel_members)) # ** Change `names` to `web_judges` list in GetJudge() ** web_dict = {} for file in filenames: f = open(f"C:\\Users\\12482\\Desktop\\lambda-school\\labs\\project\\textfolder\\text_cases\\{file}", "r",encoding='utf-8') case = BIACase(f.read()) web_dict[file] = case.get_panel() f.close() ``` ### NEW WORK BELOW ``` ## idea: create a dataframe with a multitude of features ## feature 1 is num of panel members -- > do first ## feature 2 is most significant place --> do next ## ....... ## create df l = list(web_dict.keys()) ## easy access to dict len(web_dict[l[0]].split(',')) ## example --> value is number of panel members def func(dict_): panel = [] l = list(dict_.keys()) for x in l: panel.append(len(dict_[x].split(','))) return panel panel_len = func(web_dict) cache = {} for i in range(len(filenames)): cache[i] = open(f"C:\\Users\\12482\\Desktop\\lambda-school\\labs\\project\\textfolder\\text_cases\\{filenames[i]}", "r",encoding='utf-8') cache[i] = cache[i].read() from collections import Counter # --> returns dict (k:v, for k=word,v=count in file) def func(data): file_count = [] for i in range(len(data)): file = data[i] cities_in_file = GeoText(file).cities c = Counter(cities_in_file) file_count.append(c.most_common(1)) return file_count city_count = func(data=cache) def func(data): l = [] for i in range(len(data)): l.append(data[i][0][0]) return l cities = func(city_count) df = pd.DataFrame() df['city'] = cities df['panel_count'] = panel_len df.head() import category_encoders as ce enc = ce.OrdinalEncoder() enc.fit(df['city']) new = enc.transform(df['city']) df['city'] = new df.head() import seaborn as sns sns.heatmap(df.corr()) ``` ### NEED TO ENGINEER MORE FEATURES !
```
import os
import sys
sys.path.append('../examples')
sys.path.append('../jobs')
sys.path.append('../training_data')

from tqdm import trange
import torch
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from generate_with_calibration import get_lookahead_entropies
from generate_with_entropy import sample_sequence, sample_sequence_batch

import logging
logging.getLogger('transformers.tokenization_utils').setLevel(logging.ERROR)

# setup cell
def set_seed(seed=42, n_gpu=0):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if n_gpu > 0:
        torch.cuda.manual_seed_all(seed)  # was args.seed, but no `args` exists here

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpus = torch.cuda.device_count()

set_seed()

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.to(device)
model.eval()

vocab_size = tokenizer.vocab_size

def calibrate(model, tokenizer, path, save_path, vocab_size, batch_size=512, alpha=0.0, top_k=0, iters=10, threshold=1e-5, device='cpu'):
    alpha = torch.tensor([alpha], requires_grad=True)

    total_loss = CEL(model, tokenizer, path, alpha, vocab_size, batch_size, top_k, device)
    print(f'Total loss: {total_loss.item()}. Alpha: {alpha.item()}')

    last_alpha = alpha.item()

    for _ in range(iters):
        grad_a = torch.autograd.grad(total_loss, alpha, create_graph=True)
        grad2_a = torch.autograd.grad(grad_a, alpha)

        alpha.data -= (grad_a[0] / grad2_a[0]).data
        np.savez(save_path, alpha=alpha.item())

        total_loss = CEL(model, tokenizer, path, alpha, vocab_size, batch_size, top_k, device)
        print(f'Total loss: {total_loss.item()}. Alpha: {alpha.item()}')

        if abs(alpha.data - last_alpha) < threshold:
            break
        last_alpha = alpha.item()

    return alpha

def CEL(model, tokenizer, path, alpha, vocab_size, batch_size=512, top_k=0, device='cpu'):
    # calculates the total CEL over every context (line) in the file at `path`;
    # CELHelper handles a single context.

    def CELHelper(context):
        N = len(context)
        context_CEL = torch.tensor([0.0])

        for i in range(1, N):
            with torch.no_grad():
                context_i = torch.tensor(context[:i], dtype=torch.long, device=device).unsqueeze(0)
                inputs = {'input_ids': context_i}

                next_logits = model(**inputs)[0][:, -1, :].detach().cpu()

                if top_k == 0:
                    candidates = None
                else:
                    candidates = torch.argsort(next_logits[0], descending=True,)[:top_k]

                lookahead_ents = get_lookahead_entropies(
                    model = model,
                    context = context_i[0],
                    batch_size = batch_size,
                    vocab_size = vocab_size,
                    candidates = candidates,
                    device = device
                ).cpu()

                next_probs = F.softmax(next_logits, dim=-1)[0]

                if top_k != 0:
                    # replace uncomputed entropies with average (for centered adjustment)
                    next_probs = next_probs[lookahead_ents != -1]
                    top_average_ent = (lookahead_ents[lookahead_ents != -1] * next_probs / next_probs.sum()).sum()
                    lookahead_ents[lookahead_ents != -1] = top_average_ent
                    print(top_average_ent)

                # context[i] is the next word
                context_CEL -= torch.log(
                    F.softmax(next_logits - alpha * lookahead_ents, dim=-1)[0][context[i]]
                )

        return context_CEL

    total_CEL = torch.tensor([0.0])

    with open(path) as fp:
        for line in fp:
            context = tokenizer.encode(line)
            total_CEL += CELHelper(context)

    return total_CEL

calibrate(model = model,
          tokenizer = tokenizer,
          path = '../training_data/gbw/training/news1-head100',
          save_path = 'yeet.npz',
          vocab_size = vocab_size,
          batch_size=64, alpha=0.0, top_k=64, iters=10, threshold=1e-5, device=device)

def getTemp(model, tokenizer, path, vocab_size, batch_size=512, alpha=-0.0298, device='cpu'):

    def tempHelper(context):
        N = len(context)
        ret = []

        for i in range(1, N):
            context_i = torch.tensor(context[:i], dtype=torch.long, device=device).unsqueeze(0)
            inputs = {'input_ids': context_i}

            logits = model(**inputs)[0][:, -1, :].detach().cpu()

            lookahead_ents = get_lookahead_entropies(
                model = model,
                context = context_i[0],
                batch_size = batch_size,
                vocab_size = vocab_size,
                candidates = None,
                device = device
            ).cpu()

            temps = logits / (logits - alpha * lookahead_ents)
            next_probs = F.softmax(logits, dim=-1)

            tmp = np.average(temps, weights=next_probs)
            ret.append(tmp)

        print(f'TEMPS ON SUBCONTEXTS: {ret}')
        return np.mean(ret)

    temp = []
    with open(path) as fp:
        for line in fp:
            context = tokenizer.encode(line)
            temp.append(tempHelper(context))

    print(f'TEMPS: {temp}')
    np.savez('temps_cache', temp=temp)

    return np.mean(temp)

avg_temp = getTemp(model, tokenizer,
                   path='../training_data/gbw/test/100_lines.txt',
                   vocab_size=vocab_size, alpha=0.0339, batch_size=128, device=device)
print(avg_temp)

temps = np.load('temps_cache.npz')['temp']
np.average(temps)
```
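The `calibrate` loop above is a Newton–Raphson update on the single scalar alpha (alpha ← alpha − L′(alpha)/L″(alpha)), with the running value checkpointed to `save_path` on every iteration. A minimal usage sketch for the saved alpha follows — it is not part of the original notebook, and the prompt string is made up; it just mirrors the calls already used above to apply the same entropy-adjusted softmax to one next-token distribution.

```
# sketch only: reuse the alpha written by calibrate() above ('yeet.npz') to adjust
# a single next-token distribution; the prompt text here is purely illustrative
alpha = float(np.load('yeet.npz')['alpha'])

context = torch.tensor(tokenizer.encode('The weather today is'),
                       dtype=torch.long, device=device).unsqueeze(0)
with torch.no_grad():
    logits = model(input_ids=context)[0][:, -1, :].detach().cpu()

ents = get_lookahead_entropies(model=model, context=context[0], batch_size=64,
                               vocab_size=vocab_size, candidates=None, device=device).cpu()

adjusted_probs = F.softmax(logits - alpha * ents, dim=-1)
```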
# Random Forests for Exploration of DEAP Dataset

Fingerprinting with DMD modes has worked really well. What about in tabular format?

```
%load_ext autoreload
%autoreload 2
%matplotlib inline

%%javascript
utils.load_extension('collapsible_headings/main')
utils.load_extension('hide_input/main')
utils.load_extension('execute_time/ExecuteTime')
utils.load_extension('code_prettify/code_prettify')
utils.load_extension('scroll_down/main')
utils.load_extension('jupyter-js-widgets/extension')

from fastai.tabular import *

PATH = "/media/tris/tris_files/EEG_datasets/DMD/tabular"

col_names=['subject','trial','mode_no','real1','real2','real3','real4','real5','real6','real7','real8','real9','real10','real11','real12','real13','real14','real15','real16','real17','real18','real19','real20','real21','real22','real23','real24','real25','real26','real27','real28','real29','real30','real31','real32','imag1','imag2','imag3','imag4','imag5','imag6','imag7','imag8','imag9','imag10','imag11','imag12','imag13','imag14','imag15','imag16','imag17','imag18','imag19','imag20','imag21','imag22','imag23','imag24','imag25','imag26','imag27','imag28','imag29','imag30','imag31','imag32','fn','zeta'];
len(col_names)

df_raw = pd.read_csv('/media/tris/tris_files/EEG_datasets/DMD/tabular/dmd_deap_100modes_vecs.csv', header=None, names=col_names)
df_raw
df_raw.iloc[[40],:]

os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/eeg-raw')

import pandas as pd
df_raw = pd.read_feather('tmp/eeg-raw')  # lol raw sashimis and sushis
df_raw.head()
df_raw.iloc[[500],:]

fig, axs = plt.subplots(1, 5, figsize=(15, 5))
axs[0].hist(df_raw.real1)
axs[0].set_title('Real Chan. 1')
axs[1].hist(df_raw.fn)
axs[1].set_title('frequency')
axs[2].hist(df_raw.imag27)
axs[2].set_title('Imag Chan. 27')
axs[3].hist(df_raw.zeta)
axs[3].set_title('Damping')
axs[4].hist(df_raw.real5)
axs[4].set_title('Real Chan. 5')

valid_idx = np.random.randint(low=0, high=len(df_raw), size=12800)
dep_var = 'subject'
path = "/media/tris/tris_files/EEG_datasets/DMD/tabular"

data = TabularDataBunch.from_df(path, df_raw, dep_var, valid_idx=valid_idx)
data.show_batch()

max_log_y = np.log(np.max(df_raw['subject'])*1.2)
y_range = torch.tensor([0, max_log_y])

learn = tabular_learner(data, layers=[1000,500], ps=[0.001,0.01], emb_drop=0.04,
                        metrics=accuracy, emb_szs={'subject': 32})
learn.model

learn.lr_find()
learn.recorder.plot()

learn.fit_one_cycle(5, 1e-2, wd=0.2)
learn.recorder.plot_losses()
learn.show_results()
```

Ok. Working, but I forgot to include trial and mode no. Also need to plot feature importance.
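On feature importance: fastai v1's tabular learner doesn't ship an obvious feature-importance method, so one low-effort option — sketched below, not part of the original notebook — is to fit a plain scikit-learn random forest on the same table (which also matches the notebook's title) and read off `feature_importances_`.

```
# sketch: quick feature-importance readout with scikit-learn,
# assuming df_raw from above is still in memory (may take a while on the full table)
from sklearn.ensemble import RandomForestClassifier

feats = [c for c in df_raw.columns if c != 'subject']   # keeps trial and mode_no this time
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
rf.fit(df_raw[feats], df_raw['subject'])

fi = pd.Series(rf.feature_importances_, index=feats).sort_values(ascending=False)
fi.head(20).plot.barh(figsize=(8, 6))
```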
<img align="right" src="https://ds-cs-images.s3.ap-northeast-2.amazonaws.com/Codestates_Fulllogo_Color.png" width=100> ## *AIB / SECTION 2 / SPRINT 2 / NOTE 2* # ๐Ÿ“ Assignment --- # ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ(Random Forests) ### 1) ์บ๊ธ€ ๋Œ€ํšŒ๋ฅผ ์ด์–ด์„œ ์ง„ํ–‰ํ•ฉ๋‹ˆ๋‹ค. EDA, ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ถ€๋ถ„์„ ์—…๋ฐ์ดํŠธ ํ•˜์„ธ์š”. - EDA๋Š” ํ•ญ์ƒ ์™„๋ฒฝํ•  ์ˆ˜ ์—†์ง€์š” ํ•œ ์ฃผ๊ฐ„ ๊ณ„์† ๊ฐ™์€ ๋ฐ์ดํ„ฐ๋กœ ๊ณผ์ œ๋ฅผ ์ง„ํ–‰ํ•˜๋ฏ€๋กœ ๋ถ€์กฑํ•œ ๋ถ€๋ถ„์„ ์ถ”๊ฐ€ํ•˜๊ฑฐ๋‚˜ ๋…ผํ•˜์„ธ์š”. - (์ง€๊ธˆ์€ feature engineering์— ๋„ˆ๋ฌด ์‹œ๊ฐ„์„ ๋“ค์ด์ง€ ๋งˆ์„ธ์š”!) - Ordinal Encoding์„ ์ ์šฉํ•ด ๋ณด์„ธ์š”. - **(Urclass Quiz) ๋‹ค์Œ ํŠน์„ฑ๋“ค ์ค‘์— ์ˆœ์„œ๋ฅผ ๊ณ ๋ คํ•œ Ordinal Encoding์ด ํ•„์š”ํ•ด ๋ณด์ด๋Š” ํŠน์„ฑ์„ ๊ณ ๋ฅด์„ธ์š”.** 1. opinion_h1n1_vacc_effective 2. state 3. marital 4. employment_occupation ``` !pip install pandas_profiling ! pip install kaggle ! mkdir ~/.kaggle ! cp kaggle.json ~/.kaggle/ ! chmod 600 ~/.kaggle/kaggle.json !kaggle competitions download -c prediction-of-h1n1-vaccination ! unzip prediction-of-h1n1-vaccination import pandas as pd import numpy as np train = pd.read_csv('train.csv') train.info() target = 'vacc_h1n1_f' train = pd.merge(pd.read_csv('train.csv'), pd.read_csv('train_labels.csv')[target], left_index=True, right_index=True) test = pd.read_csv('test.csv') sample_submission = pd.read_csv('submission.csv') from sklearn.model_selection import train_test_split train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train[target], random_state=2) def engineer(df): """ํŠน์„ฑ์„ ์—”์ง€๋‹ˆ์–ด๋ง ํ•˜๋Š” ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค.""" # ๋†’์€ ์นด๋””๋„๋ฆฌํ‹ฐ๋ฅผ ๊ฐ€์ง€๋Š” ํŠน์„ฑ์„ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. # selected_cols = df.select_dtypes(include=['number', 'object']) # colnames = selected_cols.columns.tolist() # labels = selected_cols.nunique() # selected_features = labels[labels <= 30].index.tolist() # df = df[selected_features] # ์ƒˆ๋กœ์šด ํŠน์„ฑ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. behaviorals = [col for col in df.columns if 'behavioral' in col] df['behaviorals'] = df[behaviorals].sum(axis=1) dels = [col for col in df.columns if ('employment' in col or 'seas' in col)] df.drop(columns=dels, inplace=True) return df train = engineer(train) val = engineer(val) test = engineer(test) features = train.drop(columns=[target]).columns X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] !pip install category_encoders from category_encoders import OrdinalEncoder from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline # ordinal encoding enc = OrdinalEncoder(handle_missing="value") enc.fit(X_train, y_train) enc.category_mapping ``` ### 2) ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ ๋ชจ๋ธ์„ ์ ์šฉํ•œ ํ›„์˜ ๊ฒฐ๊ณผ๋ฅผ ์บ๊ธ€์— ์ œ์ถœํ•˜์„ธ์š”. - ๋žœ๋คํฌ๋ ˆ์ŠคํŠธ๋ฅผ ์ ์šฉํ•˜๊ณ  ์„ฑ๋Šฅ์ด ์˜คํžˆ๋ ค ๋–จ์–ด์กŒ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ๋งŒ์•ฝ ๊ทธ๋ ‡๋‹ค๋ฉด ์ด์œ ๋ฅผ ๋ณธ์ธ ๋…ผ๋ฆฌ๋กœ ๋ถ„์„ํ•ด ๋ณด์„ธ์š”. 
- **(Urclass Quiz) ์บ๊ธ€ Leaderboard์˜ ๋ณธ์ธ Score๋ฅผ ์ œ์ถœํ•˜์„ธ์š”.** ``` from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline pipe = make_pipeline( OrdinalEncoder(), SimpleImputer(), RandomForestClassifier(n_jobs=-1, random_state=10, oob_score=True) ) pipe.fit(X_train, y_train) print('๊ฒ€์ฆ ์ •ํ™•๋„: ', pipe.score(X_val, y_val)) len() pipe.predict(test) submission = pd.DataFrame({'id':test.index, 'vacc_h1n1_f':a}) submission.to_csv("my_submission.csv", index=False, header=True) !kaggle competitions submit prediction-of-h1n1-vaccination -f my_submission.csv -m "Yeah! I submit my file through the Google Colab!" ``` ## ๐Ÿ”ฅ ๋„์ „๊ณผ์ œ(Github - Discussion) ### 3) ์ˆ˜์—…์— ์‚ฌ์šฉํ•˜์ง€ ์•Š์€ ๋‹ค๋ฅธ ์ข…๋ฅ˜์˜ [category_encoders](http://contrib.scikit-learn.org/category_encoders/)์„ 2๊ฐœ ์ด์ƒ ์‚ฌ์šฉํ•ด ๊ฒฐ๊ณผ๋ฅผ ๊ณต์œ ํ•ด ๋ณด์‹œ๊ณ , ๋‹ค์Œ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์„œ๋กœ ๋…ผ์˜ํ•ด ๋ณด์„ธ์š”. - ์‚ฌ์šฉํ•˜์‹  encoder๋Š” ๊ฐ๊ฐ ์–ด๋–ค ์žฅ๋‹จ์ ์„ ๊ฐ–๊ณ  ์žˆ์œผ๋ฉฐ, ์–ด๋–ค ์ƒํ™ฉ์—์„œ ์‚ฌ์šฉํ•˜๋ฉด ์ข‹์„๊นŒ์š”? ### 4) ์™œ ํŠธ๋ฆฌ๋ชจ๋ธ์—์„œ๋Š” ordinal encoding์„ ์ฃผ๋กœ ์‚ฌ์šฉํ•˜๋ฉฐ (one-hot encoding๋Œ€์‹ ), ๋ฒ”์ฃผํ˜• ์ž๋ฃŒ๋ฅผ ordinal encoding์œผ๋กœ ์‚ฌ์šฉํ•ด๋„ ๋˜๋Š” ์ด์œ ๋Š” ๋ฌด์—‡์ด๋ผ๊ณ  ์ƒ๊ฐํ•˜์‹œ๋Š”์ง€ ๋…ผ์˜ํ•ด ๋ณด์„ธ์š” ## ์ฐธ๊ณ ์ž๋ฃŒ - [Random Forests for Complete Beginners](https://victorzhou.com/blog/intro-to-random-forests/)
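Following up on quiz 1): of the four listed features, `opinion_h1n1_vacc_effective` is the one with a natural order, so it is the candidate for an order-aware encoding. Below is a sketch of how an explicit, order-preserving mapping can be passed to `category_encoders`' `OrdinalEncoder` — the level labels are assumptions for illustration only, so check them against the actual column values first.

```
# sketch only: explicit order-preserving mapping for one ordinal feature;
# the level names below are assumed, not taken from the data
ordinal_mapping = [{
    'col': 'opinion_h1n1_vacc_effective',
    'mapping': {'Not at all effective': 1,
                'Not very effective': 2,
                "Don't know": 3,
                'Somewhat effective': 4,
                'Very effective': 5}
}]

enc_ordered = OrdinalEncoder(mapping=ordinal_mapping, handle_missing='value')
X_train_ordered = enc_ordered.fit_transform(X_train, y_train)
```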
<a href="https://colab.research.google.com/github/agiagoulas/page-stream-segmentation/blob/master/model_training/CSV_Generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Connect to Google Drive when working in Google Colab ``` from google.colab import drive drive.mount('/content/gdrive') ``` Set working_dir ``` working_dir = "/Tobacco800/" # TODO: Set correct working directory ``` Imports ``` !sudo apt install tesseract-ocr !pip install pytesseract import csv from os import listdir from os.path import isfile, join from PIL import Image import cv2 import pytesseract import numpy as np ``` # CSV File Generation OCR Extraction with Tessaract OCR ``` def parse_image_to_str(image_file): # open image img = cv2.imread(image_file) # parse to grayscale gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) gray, img_bin = cv2.threshold(gray,128,255,cv2.THRESH_BINARY | cv2.THRESH_OTSU) gray = cv2.bitwise_not(img_bin) # ocr extraction kernel = np.ones((2, 1), np.uint8) img = cv2.erode(gray, kernel, iterations=1) img = cv2.dilate(img, kernel, iterations=1) out_below = pytesseract.image_to_string(img, lang="eng") return out_below ``` CSV File Generation ``` def create_csv_file(target_file, source_dir): file = open(target_file, 'a') source_files = sorted([f for f in listdir(source_dir) if isfile(join(source_dir, f))]) with file: writer = csv.writer(file, csv.QUOTE_NONNUMERIC, delimiter=';') writer.writerow(["counter", "documentText", "label", "documentName"]) past_file_title = "" for counter, file_name in enumerate(source_files): file_name_split = file_name.split('_') current_file_title = file_name_split[0] print(counter+1, "of", len(source_files)) file_content = parse_image_to_str(source_dir + file_name) if past_file_title == current_file_title: writer.writerow([counter, file_content, "NextPage", file_name]) else: current_file_title = current_file_title.split('.')[0] writer.writerow([counter, file_content, "FirstPage", file_name]) past_file_title = current_file_title source_train_files = working_dir + "Tobacco800_Train/" source_test_files = working_dir + "Tobacco800_Test/" target_train_csv_file = working_dir + "tobacco800.train" target_test_csv_file = working_dir + "tobacco800.test" create_csv_file(target_train_csv_file, source_train_files) create_csv_file(target_test_csv_file, source_test_files) ```
# Extracting features from Voices

```
import pandas as pd
import numpy as np
import librosa
from datetime import datetime
import os
from pathlib import Path

# Setting working directory
os.chdir(Path('/home/adriel_martins/Documents/voice_recognition'))
```

## Preparing the data

Initial file dataframe is from the csv that we made with the 'LibriSpeech_Files_Pre_Processing' notebook.

```
df = pd.read_csv(Path('Data/id_and_soundfiles_LibriSpeech.csv'))
df.head(10)
```

## Feature Extraction

```
# Main source for the choosing of the features is Jurgen Arias (2020).
def extract_features(row):

    # Sets the name to be the path to where the file is in my computer
    path = Path('LibriSpeech/train-clean-100')
    folder_paths_to_add = row.soundfile.split('-')
    for index, dir in enumerate(folder_paths_to_add):
        if index == 2:
            break
        path = path.joinpath(dir)
    path = path / row.soundfile

    # Loads the audio file as a floating point time series and assigns the default sample rate
    # Sample rate is set to 22050 by default
    X, sample_rate = librosa.load(path, res_type='kaiser_fast')

    # Generate Mel-frequency cepstral coefficients (MFCCs) from a time series
    mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)

    # Generates a Short-time Fourier transform (STFT) to use in the chroma_stft
    stft = np.abs(librosa.stft(X))

    # Computes a chromagram from a waveform or power spectrogram.
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)

    # Computes a mel-scaled spectrogram.
    mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T, axis=0)

    # Computes spectral contrast
    contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T, axis=0)

    # Computes the tonal centroid features (tonnetz)
    tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X), sr=sample_rate).T, axis=0)

    # We add also the speaker_id of each file as a label at the end
    label = row.id

    return mfccs, chroma, mel, contrast, tonnetz, label

# Code to start the timer to see how long it takes to extract the features
startTime = datetime.now()

# Applying the function to the train data by accessing each row of the dataframe
features_label = df.apply(extract_features, axis=1)

# Code to see how long it took
print(datetime.now() - startTime)

features_label

np.save(Path('Data/features_label'), features_label)
```
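To use the saved array later, each row has to be unpacked back into a feature matrix and a label vector. The cell below is an illustrative addition (not part of the original notebook) and assumes `Data/features_label.npy` was written by the cell above.

```
# sketch: reload the saved features and stack them into X / y for model training
features_label = np.load(Path('Data/features_label.npy'), allow_pickle=True)

X = np.array([np.concatenate((row[0], row[1], row[2], row[3], row[4])) for row in features_label])
y = np.array([row[5] for row in features_label])

print(X.shape, y.shape)
```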
# Advanced SQL I: Special Functions

_**Author**: Boom Devahastin Na Ayudhya_
***

Throughout this entire session, we'll be running the queries in PostgreSQL. This Jupyter Notebook will just be a written record of what we've learned so that you'll have all of these functions in one location. Note that **THIS IS BY NO MEANS AN EXHAUSTIVE LIST** -- I have cherry-picked the ones that are commonly asked in interviews and/or useful on the job from my experience.

### Preparation
You should have already downloaded [PostgreSQL](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads). Make sure you have **pgAdmin 4** set up and that you've loaded the `GoT Schemas`.

## Contents
**I. String Manipulation**
- [`UPPER()`](#UPPER())
- [`LOWER()`](#LOWER())
- [`INITCAP()`](#INITCAP())
- [`LENGTH()`](#LENGTH())
- [`TRIM()`](#TRIM())
- [`SUBSTRING()`](#SUBSTRING())
- [Concatenation Methods](#Concatenation)
- [`REPLACE()`](#REPLACE())
- [`COALESCE()`](#COALESCE())

**II. Conditionals**
- [Boolean Statements](#Boolean-Statements)
- [`CASE WHEN`](#CASE-WHEN)

**III. Date-Time Manipulation**
- [Type Conversion](#Type-Conversion)
- [`EXTRACT()`](#EXTRACT())

## I. String Manipulation

### `LOWER()`
This is the same as the `.lower()` method for strings in Python, used to convert every letter in a string to lower case.

_Example_: Convert all letters of the string `HeLlO, wOrLd!` to lower case
```MySQL
SELECT LOWER('HeLlO, wOrLd!')
```

**DISCUSS:** Why do you think this can be useful? Does case matter in SQL?

**THINK:** Consider the following queries. Which of these will run? <br>
(A) `SELECT first_name FROM people WHERE first_name = 'eddard'` <br>
(B) `select first_name from people where first_name = 'eddard'` <br>
(C) `SELECT first_name FROM people WHERE first_name = 'Eddard'` <br>
(D) `select first_name from people where first_name = 'Eddard'`

**EXERCISE 1:** Write a query that returns the first name of all living members of the ruling family of winterfell, but make sure the letters are all in lower case.

_Answer:_
```MySQL
SELECT LOWER(p.first_name)
FROM people AS p
INNER JOIN houses AS h ON p.house = h.name
WHERE h.domain = 'winterfell'
  AND p.alive = 1
```

### `UPPER()`
For completeness, this is the same as the `.upper()` method for strings in Python, used to capitalize every letter in a string.

_Example_: Capitalize all letters of the string `Hello, world!`
```MySQL
SELECT UPPER('Hello, world!')
```

**EXERCISE 2:** Write a query that capitalizes every letter of every unique noble house's domain from the `houses` table.

_Answer:_
```MySQL
SELECT DISTINCT UPPER(h.domain)
FROM houses AS h
```

### `INITCAP()`
This is similar to the `.title()` method for strings in Python: it converts the first letter of each word to upper case and the rest to lower case.

**EXERCISE 3:** Write a SQL query that returns the first name and house of all characters whose first name begins with the prefix "ae-" or "Ae-", but make sure that only the first letter is capitalized in both of those columns.

```MySQL
SELECT INITCAP(c.first_name), INITCAP(c.house)
FROM people AS c
WHERE c.first_name ILIKE 'ae%'
```

### `LENGTH()`
This is the same as the `len()` function in Python. However, since we don't have lists or tuples in SQL, this is only applicable to objects with characters.

**EXERCISE 4:** Write a query that displays the first name and house of characters that are alive, but only if their house is at least 6 characters long.

_Answer:_
```MySQL
SELECT p.first_name, p.house
FROM people AS p
WHERE p.alive = 1
  AND LENGTH(p.house) >= 6
```

### `TRIM()`
This is the same as the `.strip()` method for strings in Python that eliminates leading and trailing white spaces.

_Example:_ Write a query that strips out the white space from the string `' Hello, world! '`
```MySQL
SELECT TRIM(' Hello, world! ')
```

### `SUBSTRING()`
Python doesn't have a function that extracts a substring since we can just do it by directly indexing through the string. If you're familiar with R though, then you'll recognize this is similar to the `substr()` function.

Syntax for this function:
```MySQL
SELECT SUBSTRING(string_column FROM <start_position> FOR <num_characters_ahead>)
```
OR
```MySQL
SELECT SUBSTRING(string_column, <start_position>, <num_characters_ahead>)
```

**Example #1:**
```MySQL
SELECT SUBSTRING('Hello there, friend! Hehe.' FROM 1 FOR 5)
```
OR
```MySQL
SELECT SUBSTRING('Hello there, friend! Hehe.', 1, 5)
```
will return `'Hello'`

**Example #2:**
```MySQL
SELECT SUBSTRING('Hello there, friend! Hehe.' FROM 14)
```
OR
```MySQL
SELECT SUBSTRING('Hello there, friend! Hehe.', 14)
```
will return `'friend! Hehe.'`

### Concatenation
This is the equivalent of string concatenation in Python using `+`. The `+` in Python is replaced by `||` in PostgreSQL. Alternatively, you can use the `CONCAT()` function.

_Example:_ Write a query that prints every character's full name (i.e. first name then house)
```MySQL
SELECT INITCAP(p.first_name) || ' ' || INITCAP(p.house)
FROM people p
```

**EXERCISE 5:** Write a query that automatically generates the sentence `<bannermen>'s army has <size> soldiers.`

_Answer:_
```MySQL
SELECT INITCAP(b.name) || '''s army has ' || size || ' soldiers.'
FROM bannermen b
```

### `REPLACE()`
This is the equivalent of the `.replace()` method for strings in Python and the `gsub()` function in R.

_Example:_
```MySQL
SELECT house,
       REPLACE(house, 'lannister', 'Evil Ducks') AS new_house -- replace all 'lannister' with 'Evil Ducks' in house col
FROM people
```

Does the function work when replacing `NULL` values though? Try this and let me know what you see:
```MySQL
SELECT first_name,
       REPLACE(nickname, NULL, 'missing') AS new_nickname
FROM people
```

## `COALESCE()`
This is an extremely powerful function that lets us handle missing values on a column-by-column basis. The syntax is pretty straightforward for this one:
```MySQL
COALESCE(<column_name>, <fill_value>)
```

Alright, your turn!

**EXERCISE 6**: Write a query that prints every character's full name in one column and their nickname in another, but make sure to replace all `NULL` nicknames with `¯\_(ツ)_/¯`.

_Answer:_
```MySQL
SELECT first_name || ' ' || house AS full_name,
       COALESCE(nickname, '¯\_(ツ)_/¯') AS cleaned_nickname
FROM people
```

_____

## II. Conditionals

### Boolean Statements
**Review Discussion:** What is a Boolean statement? Can you think of an example where you've used this before?

We can also include Booleans to create dummy variables in SQL on the fly.

_Example:_
```MySQL
SELECT b.name,
       b.size,
       b.size >= 30 AS "IsLarge"
FROM bannermen AS b
```

## `CASE WHEN`
This is the equivalent of if-elif-else statements, except embedded into a query. It takes Boolean statements to the next level by allowing you to customize what happens on a case-by-case basis.

_Example_: Write a query that groups bannermen army sizes into 'yuge' (35+), 'medium' (25-34), 'smol' (< 25)
```MySQL
SELECT b.name,
       b.size,
       CASE WHEN b.size >= 35 THEN 'yuge'              -- if
            WHEN b.size BETWEEN 25 AND 34 THEN 'medium' -- elif
            ELSE 'smol'                                 -- else
       END AS "size_group"                              -- end it! (and rename if you want)
FROM bannermen AS b
```

## III. Date-Time Manipulation

### Type Conversion
_(Complete documentation here: https://www.postgresql.org/docs/8.1/functions-formatting.html)_

#### `to_timestamp()`
If you have a string that contains both a date and a time and you want to convert it to a datetime (timestamp) object:
```MySQL
SELECT to_timestamp('2019 May 13 15:00:05', 'YYYY-MON-DD HH24:MI:SS')
```

#### `to_date()`
If you have a string that you want to convert to a date, without any timestamp:
```MySQL
SELECT to_date('2019 May 13 14:00:58', 'YYYY-MON-DD')
```

#### `current_date`
You can use this to pull the current date from your computer's clock and manipulate it as desired.
```MySQL
SELECT current_date
```

**EXERCISE 7:** Write a query that returns what the date was 21 days ago

_Answer:_
```MySQL
SELECT current_date - 21
```

### `EXTRACT()`
_(More datetime manipulation functions: https://www.postgresql.org/docs/9.1/functions-datetime.html)_

If you want to extract certain parts of a datetime object, this function is MAGICAL!

```MySQL
SELECT current_timestamp AS today,
       EXTRACT(day from current_date) AS "Day",
       EXTRACT(month from current_date) AS "Month",
       EXTRACT(year from current_timestamp) AS "Year",
       EXTRACT(hour from current_timestamp) AS "Hour",
       EXTRACT(minute from current_timestamp) AS "Minute"
```

### Challenge: Interview Questions

Lyft recently acquired the rights to add CitiBike to its app as part of its Bikes & Scooters business. You are a Data Scientist studying a `rides` table containing data on completed trips taken by riders, and a `deployed_bikes` table which contains information on the locations where each unique bike is deployed (i.e. where it is stationed).

**`rides`** schema:
- `ride_id`: int **[PRIMARY KEY]**
- `bike_id`: int
- `ride_datetime`: string
- `duration`: int

**`deployed_bikes`** schema:
- `bike_id`: int **[PRIMARY KEY]**
- `deploy_location`: string

**EXERCISE 8: For the last week, find the number of rides that occurred on each date, ordered from most recent to least recent**

_Answer:_
```MySQL
SELECT ride_datetime, COUNT(ride_id)
FROM rides
WHERE to_date(ride_datetime, 'YYYY-MON-DD') BETWEEN (current_date - 7) AND (current_date - 1)
GROUP BY ride_datetime
ORDER BY ride_datetime DESC
```

**EXERCISE 9: Which deployment location did the best over the past week?**

_Answer:_
```MySQL
SELECT d.deploy_location, COUNT(r.ride_id)
FROM rides AS r
INNER JOIN deployed_bikes AS d ON r.bike_id = d.bike_id
WHERE to_date(r.ride_datetime, 'YYYY-MON-DD') BETWEEN (current_date - 7) AND (current_date - 1)
GROUP BY d.deploy_location
ORDER BY COUNT(ride_id) DESC
LIMIT 1
```

Note this is actually _not the best_ solution since it only returns 1 row and doesn't account for the case where we have more than 1 deployment location with tied highest ride counts. The best solution would require a subquery, which I won't be covering until Advanced SQL II (Subqueries), so you can try revisiting this question and coming up with the best solution after we go through that!
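For reference, one shape that subquery-based solution could take is sketched below — treat it as a preview of Advanced SQL II rather than the official answer, since subqueries haven't been covered yet:

```MySQL
-- Sketch: keep every location tied for the highest ride count over the past week
SELECT deploy_location, ride_count
FROM (
    SELECT d.deploy_location, COUNT(r.ride_id) AS ride_count
    FROM rides AS r
    INNER JOIN deployed_bikes AS d ON r.bike_id = d.bike_id
    WHERE to_date(r.ride_datetime, 'YYYY-MON-DD') BETWEEN (current_date - 7) AND (current_date - 1)
    GROUP BY d.deploy_location
) AS weekly_counts
WHERE ride_count = (
    SELECT MAX(ride_count)
    FROM (
        SELECT COUNT(r.ride_id) AS ride_count
        FROM rides AS r
        INNER JOIN deployed_bikes AS d ON r.bike_id = d.bike_id
        WHERE to_date(r.ride_datetime, 'YYYY-MON-DD') BETWEEN (current_date - 7) AND (current_date - 1)
        GROUP BY d.deploy_location
    ) AS x
)
```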