Columns: repo_name (string, 6-77 chars), path (string, 8-215 chars), license (15 classes), cells (list), types (list)
rongchuhe2/workshop_data_analysis_python
example_bridge_bike_counter.ipynb
mit
[ "Unsupervised Analysis of Days of Week\nTreating crossing each day as features to learn about the relatinships between various days.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n", "Downloading Data\nWe'll start by downloading the data (available on seattle.gov).", "from urllib import request\n\nFREMONT_URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'\n\nrequest.urlretrieve(FREMONT_URL, 'Fremont.csv')\n\n# magic function to show the content of the file\n%more Fremont.csv\n\nimport pandas as pd\n\ndf = pd.read_csv('Fremont.csv') # use read_csv to load the data into dataframe\ndf.head()\n\n# Let's see the type of the data\ndf.dtypes\n\n# change the Date column to datetime data type\ndf['Date'] = pd.to_datetime(df['Date'])\ndf.head()\n\ndf.dtypes\n\n# Set the index to Date\ndf.set_index('Date', inplace=True)\ndf.head()\n\ndf.apply(lambda x: sum(x.isnull()))\n\n# clear the data by delete the non-numeric\ndf.dropna(inplace=True)\n\ndf.apply(lambda x: sum(x.isnull()))\n\ndf.columns\n\ndf.plot()\n\ndf.resample('W').sum().plot()\n\ndf.columns=['West', 'East']\n\ndf.resample('w').sum().plot()\n\n# To see whether there is any annual trend of the number of rides\ndf.resample('D').sum().rolling(365).sum().plot() \n# each point is the sum of the number of rides in the previuos 365 days\n\n# The y coordinate is not from 0\nax = df.resample('D').sum().rolling(365).sum().plot()\nax.set_ylim(0, None)\n\n# DateimeIndex.time return numpy array of datetime.time, the time part of the Timestamps\ndf.groupby(df.index.time).mean().plot()\n# plot the average of rides at each hours of the day\n\n# Create the pivoted table to investigate the pattern in each day\ndf['Total'] = df['West'] + df['East']\npivoted = df.pivot_table(values='Total', index=df.index.time, columns=df.index.date)\npivoted.head()\n\npivoted.shape\n\n# delete the date with non-numeric\npivoted.dropna(axis=1, inplace=True)\npivoted.shape\n\npivoted.plot(legend=False)\n\n# add transparent parameter alpha\npivoted.plot(legend=False, alpha=0.01)", "Principal Component Analysis", "# Get X with hours as mearsurement and date as observations\nX = pivoted.T.values\nX.shape\n\nX\n\nfrom sklearn.decomposition import PCA\n\nX2 = PCA(2, svd_solver='full').fit_transform(X)\n\nX2\n\nX2.shape\n\nplt.scatter(X2[:, 0], X2[:, 1])\n\n# use cluster algorithm Gaussian mixture model\nfrom sklearn.mixture import GaussianMixture\n\ngmm = GaussianMixture(2)\ngmm.fit(X)\nlabels = gmm.predict(X)\nlabels\n\n\n# plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')\n# plt.colorbar()\nplt.scatter(X2[:, 0], X2[:, 1], c=labels)\nplt.colorbar()\n\nlabels\n\n# so labels == 1 represents the weekday\npivoted.T[labels == 1].T.plot(legend=False, alpha=0.01)\n\n# labels == 0 represents the weekend or holiday\npivoted.T[labels == 0].T.plot(legend=False, alpha=0.1)", "Comparing with Day of Week", "pd.DatetimeIndex(pivoted.columns)\n\n# The DatetimeIndex.dayof week gives the day of the week\ndayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek\ndayofweek\n\n# Then we plot the color of the weekday\nplt.scatter(X2[:, 0], X2[:, 1], c=dayofweek)\nplt.colorbar() \n\n# grab the day in label 0 which is not weekend\ndates = pd.DatetimeIndex(pivoted.columns)\ndates[(labels == 0) & (dayofweek < 5)]", "What's up with Feb 6, 2017?\nSnow Storm" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fggp/ctcsound
cookbook/03-threading.ipynb
lgpl-2.1
[ "Multithreading\nIn the preceding recipes, there was a single thread running; this is the default way to use Python, due to the GIL (Global Interpreter Lock). Then, the user has the possibility to interact with the Csound instance during the performance loop. This is illustrated in the following diagram:\n\nTo use Csound in a more flexible way, one can use multithreading. Because of the GIL limitations, it is better to yield the multithread machinery through C libraries. When a C function is called from Python using ctypes, the GIL is released during the function call.\nCsound has an helper class called CsoundPerformanceThread. When there is a running Csound instance, one can start a new thread by creating a new object of type CsoundPerformanceThread with a reference to the Csound instance as argument. Then, the main Python thread will run allowing the user to interract with it, while the performance thread will run concurrently in the C world, outside of the GIL. The user can send messages to the performance thread, each message being sent with a call to a C function through ctypes, releasing the GIL during the function call. Those messages can be: play(), pause(), togglePause(), stop(), record(), stopRecord(), scoreEvent(), inputMessage(), setScoreOffsetSeconds(), join(), or flushMessageQueue().\nWhen a very long score is used, it is thus easy to implement a REPL (read-eval-print loop) system around Csound. This is illustrated in the following diagram:\n\nSo let's start a Csound instance from Python, with a four hours long score:", "import ctcsound\ncs = ctcsound.Csound()\n\ncsd = '''\n<CsoundSynthesizer>\n\n<CsOptions>\n -d -o dac -m0\n</CsOptions>\n\n<CsInstruments>\nsr = 48000\nksmps = 100\nnchnls = 2\n0dbfs = 1\n\n instr 1\nidur = p3\niamp = p4\nicps = cpspch(p5)\nirise = p6\nidec = p7\nipan = p8\n\nkenv linen iamp, irise, idur, idec\nkenv = kenv*kenv\nasig poscil kenv, icps\na1, a2 pan2 asig, ipan\n outs a1, a2\n endin\n</CsInstruments>\n\n<CsScore>\nf 0 14400 ; a 4 hours session should be enough\n</CsScore>\n</CsoundSynthesizer>\n'''\ncs.compileCsdText(csd)\ncs.start()", "Then, let's start a new thread, passing the opaque pointer of the Csound instance as argument:", "pt = ctcsound.CsoundPerformanceThread(cs.csound())\npt.play()", "Now, we can send messages to the performance thread:", "pt.scoreEvent(False, 'i', (1, 0, 1, 0.5, 8.06, 0.05, 0.3, 0.5))\npt.scoreEvent(False, 'i', (1, 0.5, 1, 0.5, 9.06, 0.05, 0.3, 0.5))", "When we're done, we stop the performance thread and reset the csound instance:", "pt.stop()\npt.join()\ncs.reset()", "Note that we can still access the csound instance with other methods, like controlChannel() or setControlChannel():", "csd = '''\n<CsoundSynthesizer>\n<CsOptions>\n-odac\n</CsOptions>\n<CsInstruments>\nsr = 44100\nksmps = 64\nnchnls = 2\n0dbfs = 1\nseed 0\n\ninstr 1\n iPch random 60, 72\n chnset iPch, \"pch\"\n kPch init iPch\n kNewPch chnget \"new_pitch\"\n if kNewPch > 0 then\n kPch = kNewPch\n endif\n aTone poscil .2, mtof(kPch)\n out aTone, aTone\nendin\n\n</CsInstruments>\n<CsScore>\ni 1 0 600\n</CsScore>\n</CsoundSynthesizer>\n'''\ncs.compileCsdText(csd)\ncs.start()\npt = ctcsound.CsoundPerformanceThread(cs.csound())\npt.play()", "We can ask for the values in the Csound instance ...", "print(cs.controlChannel('pch'))", "... or we can set our own values to the Csound instance:", "cs.setControlChannel('new_pitch',73)", "At the end, stop and reset as usual:", "pt.stop()\npt.join()\ncs.reset()", "Author: Franรงois Pinot, March 2016" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liganega/Gongsu-DataSci
ref_materials/excs/Lab-07.ipynb
gpl-3.0
[ "์—ฐ์Šต๋ฌธ์ œ\n์•„๋ž˜ ๋ฌธ์ œ๋“ค์„ ํ•ด๊ฒฐํ•˜๋Š” ์ฝ”๋“œ๋ฅผ lab07.py ํŒŒ์ผ์— ์ž‘์„ฑํ•˜์—ฌ ์ œ์ถœํ•˜๋ผ.\n์—ฐ์Šต 1\n๋ฏธ๊ตญ ํ•ด์–‘๋Œ€๊ธฐ์ฒญ(NOAA)์€ ์ „์„ธ๊ณ„ ๋‚ ์”จ๋ฅผ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ œ๊ณตํ•œ๋‹ค. ํ•œ๊ตญ์˜ ๊ฒฝ์šฐ ๊ณตํ•ญ์ด ์žˆ๋Š” ๋„์‹œ์˜ ๋‚ ์”จ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๋ฉฐ ํ‰ํƒ๋„ ํฌํ•จ๋œ๋‹ค. ํ‰ํƒ์˜ ํ˜„์žฌ ๋‚ ์”จ ์ •๋ณด๋ฅผ ํ…์ŠคํŠธํŒŒ์ผ๋กœ ์–ป๊ณ ์ž ํ•˜๋ฉด ์•„๋ž˜ NOAA ์‚ฌ์ดํŠธ๋ฅผ ํด๋ฆญํ•ด์„œ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œ๋ฐ›์œผ๋ฉด ๋œ๋‹ค.\n์•„๋‹ˆ๋ฉด ์•„๋ž˜ ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ์œ„ ๋งํฌ์— ์—ฐ๊ฒฐ๋œ ํŒŒ์ผ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.\ndef NOAA_string():\n url = \"http://weather.noaa.gov/pub/data\" +\\\n \"/observations/metar/decoded/RKSG.TXT\"\n noaa_data_string = urllib.urlopen(url).read()\n return noaa_data_string\n\n์œ„ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด urllib ๋ชจ๋“ˆ์„ ์ž„ํฌํŠธํ•ด์•ผ ํ•œ๋‹ค. ์œ„ ํ•จ์ˆ˜๋ฅผ ํŒŒ์ด์ฌ ์…ธ์—์„œ ์‹คํ–‰ํ•˜์—ฌ ๋ฆฌํ„ด๊ฐ’์„ ํ™•์ธํ•ด๋ณด๊ธฐ ๋ฐ”๋ž€๋‹ค. \n์ด์ œ ์•„๋ž˜ ์ผ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜ NOAA_temperature(s) ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\n\nNOAA_string()์˜ ๋ฆฌํ„ด๊ฐ’์„ ์ธ์ž๋กœ ๋ฐ›์•„์„œ ํ•ด๋‹น ๋„์‹œ์˜ ์„ญ์”จ ๋‹จ์œ„ ์˜จ๋„์˜ ์ •์ˆ˜๊ฐ’์„ ๋ฆฌํ„ดํ•œ๋‹ค. \n๋ฏธ๊ตญ์€ ์˜จ๋„๋ฅผ ํ™”์”จ(Fahrenheit) ๋‹จ์œ„๋กœ ๋‚˜ํƒ€๋‚ด๋ฉฐ ์šฐ๋ฆฌ๋Š” ์„ญ์”จ(Celsius) ๋‹จ์œ„๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. \n\n์ฃผ์˜: ์œ„ ์‚ฌ์ดํŠธ๋Š” ์‹ค์‹œ๊ฐ„์œผ๋กœ ๋‚ ์”จ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ์œ„ ๋งํฌ๋ฅผ ๋ˆ„๋ฅผ ๋•Œ๋งˆ๋‹ค ์˜จ๋„ ์ •๋ณด๊ฐ€ ๋ณ€ํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด 2015๋…„ 10์›” 16์ผ 0์‹œ 38๋ถ„์— ํ™•์ธํ•œ ๊ฒฝ์šฐ ์•„๋ž˜ ์ฒ˜๋Ÿผ ํ™•์ธ๋œ ํ‰ํƒ์‹œ ์˜จ๋„๋Š” ์„ญ์”จ 14.2์ด๋‹ค. ๋”ฐ๋ผ์„œ NOAA_temperature(NOAA_string())์€ 14๋ฅผ ๋ฆฌํ„ดํ•ด์•ผ ํ•œ๋‹ค. ํ•˜์ง€๋งŒ ๋‹ค๋ฅธ ์‹œ๊ฐ์— ํ™•์ธํ•˜๋ฉด ๋‹ค๋ฅธ ๊ฐ’์ด ๋‚˜์˜ฌ ์ˆ˜ ์žˆ์Œ์— ์ฃผ์˜ํ•ด์•ผ ํ•œ๋‹ค. ์–ด๋–ป๊ฒŒ ์„ญ์”จ์— ํ•ด๋‹นํ•˜๋Š” ์ˆซ์ž๋ฅผ ๋„์ง‘์–ด ๋‚ผ ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•œ๋‹ค. \nPyongtaek Ab, Korea, South (RKSG) 36-56N 127-00E 16M\nOct 15, 2015 - 10:58 AM EDT / 2015.10.15 1458 UTC\nWind: Calm:0\nVisibility: 2 mile(s):0\nSky conditions: partly cloudy\nWeather: mist\nTemperature: 57.6 F (14.2 C)\nDew Point: 57.6 F (14.2 C)\nRelative Humidity: 100%\nPressure (altimeter): 30.11 in. Hg (1019 hPa)\nob: RKSG 151458Z 00000KT 2SM R32/2600FT BR SCT010 14/14 A3011 RMK AO2A SLP199 T01420142 \ncycle: 15\n\nํžŒํŠธ: ๋ฌธ์ž์—ด ๋ฉ”์†Œ๋“œ ์ค‘์—์„œ ํŠน์ • ๋ถ€๋ถ„ ๋ฌธ์ž์—ด(substring)์˜ ์œ„์น˜, ์ฆ‰ ์ธ๋ฑ์Šค ๋ฒˆํ˜ธ๋ฅผ ํ™•์ธํ•ด์ฃผ๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ ์žˆ๋‹ค. \n์—ฐ์Šต 1 ๊ฒฌ๋ณธ๋‹ต์•ˆ 1\nNOAA_string()์„ ์‹คํ–‰ํ•˜์—ฌ ์–ป์–ด์ง„ ํŒŒ์ผ์˜ ๋‚ด์šฉ์„ ๋ณด๋ฉด 7๋ฒˆ์งธ ์ค„์—์„œ ์˜จ๋„ ์ •๋ณด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.\n๊ด€๊ฑด์€ 7๋ฒˆ์งธ ์ค„์—์„œ 14.2๋ฅผ ๋„์ง‘์–ด ๋‚ด๋Š” ๊ฒƒ์ด๋‹ค. ๊ทธ๋Ÿฌ๋ ค๋ฉด 14.2๋ฅผ ์œ ์ผํ•˜๊ฒŒ ํŠน์ง•์ง€์šธ ์ˆ˜ ์žˆ๋Š” ๋ฌด์–ธ๊ฐ€๋ฅผ ์ฐพ์•„์•ผ ํ•œ๋‹ค. \n๋ฐฉ๋ฒ• 1: split ๋ฉ”์†Œ๋“œ ์ด์šฉํ•˜๊ธฐ\n\n7๋ฒˆ์งธ ์ค„์„ ์ž์„ธํžˆ ์‚ดํ”ผ๋ฉด ์„ญ์”จ ์˜จ๋„ ์ •๋ณด๋Š” ์„ธ ๊ฐœ์˜ ์ŠคํŽ˜์ด์Šค ๋’ค์— ์œ„์น˜ํ•œ๋‹ค. ์ด ์ •๋ณด๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.", "import urllib\n\ndef NOAA_string():\n url = \"http://weather.noaa.gov/pub/data\" +\\\n \"/observations/metar/decoded/RKSG.TXT\"\n noaa_data_string = urllib.urlopen(url).read()\n return noaa_data_string\n\nprint(NOAA_string())\n\ndef NOAA_temperature(s):\n L = s.split('\\n')\n Line7 = L[6].split()\n print(str(int(Line7[-2][1:])) + \" C\")\n\nNOAA_temperature(NOAA_string())", "์—ฐ์Šต 1 ๊ฒฌ๋ณธ๋‹ต์•ˆ 2\n\n์„ญ์”จ์˜จ๋„๋ฅผ ์œ ์ผํ•˜๊ฒŒ ํŠน์ง•์ง€์šฐ๋Š” ๋ฌธ์ž์—ด์„ ์ฐพ์•„์•ผ ํ•œ๋‹ค.\n\" F \"๊ฐ€ ๊ทธ๋Ÿฐ ๋ฌธ์ž์—ด์ด๋‹ค. 
(F ์–‘ ์˜†์œผ๋กœ ์ŠคํŽ˜์ด์Šค๊ฐ€ ์žˆ๋‹ค.)", "def NOAA_temperature(s):\n d = s.find(\" F \")\n print(s[d+4: d+6] + \" C\")\n \nNOAA_temperature(NOAA_string())", "์—ฐ์Šต 2\nํ…์ŠคํŠธ ํŒŒ์ผ์— ์ €์žฅ๋œ ๋ฌธ์žฅ์—์„œ ํŠน์ • ๋‹จ์–ด์˜ ์ถœํ˜„ ํšŸ์ˆ˜๋ฅผ ํ™•์ธํ•ด์ฃผ๋Š” ํ•จ์ˆ˜ wc_sub(filename, s) ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•˜๋ผ. wc๋Š” Word Count์˜ ์ค„์ž„๋ง์ด๋‹ค. \nํžŒํŠธ: count ๋ฉ”์†Œ๋“œ๋ฅผ ํ™œ์šฉํ•œ๋‹ค.\n์˜ˆ์ œ 1: data.txt ํŒŒ์ผ ๋‚ด์šฉ์ด ์•„๋ž˜์™€ ๊ฐ™์„ ๊ฒฝ์šฐ\nOne Two\n\nwc_sub('data.txt', 'One')๋Š” 1๋ฅผ ๋ฆฌํ„ดํ•œ๋‹ค.\n์˜ˆ์ œ 2: data.txt ํŒŒ์ผ ๋‚ด์šฉ์ด ์•„๋ž˜์™€ ๊ฐ™์„ ๊ฒฝ์šฐ\nOne Two\nThree Four Five\n\nwc_sub('data.txt', 'o')๋Š” 2๋ฅผ ๋ฆฌํ„ดํ•œ๋‹ค.\nwc_sub ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ์ด์ƒํ•œ ๋‚˜๋ผ์˜ ์•จ๋ฆฌ์Šค ์›์ž‘์— 'Alice'์™€ 'alice'๋ž€ ๋‹จ์–ด๊ฐ€ ๊ฐ๊ฐ ๋ช‡ ๋ฒˆ ์–ธ๊ธ‰๋˜๋Š”์ง€ ํ™•์ธํ•˜๋ผ. ์ด์ƒํ•œ ๋‚˜๋ผ์˜ ์•จ๋ฆฌ์Šค ์›์ž‘์€ ์•„๋ž˜ ๋งํฌ์—์„œ ๋‹ค์šด ๋ฐ›์„ ์ˆ˜ ์žˆ๋‹ค.\nhttp://www.gutenberg.org/files/28885/28885-8.txt\n\n์œ„ ๋งํฌ๋ฅผ ๋ˆ„๋ฅด๋ฉด ๋œจ๋Š” ํ™”๋ฉด์—์„œ Plain Text UTF-8 ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œ ๋ฐ›์œผ๋ฉด ๋œ๋‹ค. ์•„๋งˆ๋„ ๋ช‡ ๋งŒ ๋‹จ์–ด๊ฐ€ ์‚ฌ์šฉ๋˜์—ˆ์„ ๊ฒƒ์ด๋‹ค.\n๋‹จ, filename์— ํ•ด๋‹นํ•˜๋Š” ํŒŒ์ผ์ด ์—ด๋ฆฌ์ง€ ์•Š์„ ๊ฒฝ์šฐ -1์„ ๋ฆฌํ„ดํ•˜๋„๋ก ์˜ค๋ฅ˜์ฒ˜๋ฆฌ๋ฅผ ํ•ด์•ผ ํ•œ๋‹ค. \n์—ฐ์Šต 2 ๊ฒฌ๋ณธ๋‹ต์•ˆ", "def wc_sub(filename, s):\n with open(filename, 'r') as f:\n f_content = f.read()\n return f_content.count(s)\n\nprint(\"The word 'Alice' occurs {} times.\".format(wc_sub('Alice.txt', 'Alice')))\nprint(\"The word 'alice' occurs {} times.\".format(wc_sub('Alice.txt', 'alice')))", "์—ฐ์Šต 3\nํ•จ์ˆ˜ f์™€ ์ˆซ์ž๋“ค์˜ ๋ฆฌ์ŠคํŠธ xs๋ฅผ ์ธ์ž๋กœ ๋ฐ›์•„ f(x)์˜ ๊ฐ’์ด 0๋ณด๋‹ค ํฌ๊ฒŒ ๋˜๋Š” x์˜ ๊ฐ’๋งŒ ์ถ”์ถœํ•ด์„œ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ filtering(f, xs)๋ฅผ ์ •์˜ํ•˜๋ผ.\n์˜ˆ์ œ:\nIn [1]: def f1(x):\n ...: return x * 3\n\nIn [2]: filtering(f1, [1, -2, 2, -1, 3, 5])\nOut[2]: [1, 2, 3, 5]\n\nIn [3]: filtering(f1, [-1, -2, -3, -4, -5])\nOut[3]: []\n\n์—ฐ์Šต 3 ๊ฒฌ๋ณธ๋‹ต์•ˆ", "def filtering(f, xs):\n L = []\n for x in xs:\n if f(x) > 0:\n L.append(x)\n return L\n\ndef f1(x):\n return x * 3\n\nfiltering(f1, [1, -2, 2, -1, 3, 5])", "์ฐธ์กฐ: ํŒŒ์ด์ฌ ๋‚ด์žฅํ•จ์ˆ˜ ์ค‘์— filter ํ•จ์ˆ˜๊ฐ€ ๋น„์Šทํ•œ ์ผ์„ ํ•œ๋‹ค. ์–ด๋–ค ์ฐจ์ด์ ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด๋ณด๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•œ๋‹ค.\n์—ฐ์Šต 4\nํ•จ์ˆ˜ f์™€ ์ˆซ์ž๋“ค์˜ ๋ฆฌ์ŠคํŠธ xs = [x1, ..., x_n]๋ฅผ ์ธ์ž๋กœ ๋ฐ›์•„ f(xn)๋“ค์˜ ๊ฐ’์˜ ํ•ฉ์„ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ sum_list(f, xs)๋ฅผ ์ •์˜ํ•˜๋ผ. ๋‹จ, xs = [] ์ผ ๊ฒฝ์šฐ 0์„ ๋ฆฌํ„ดํ•œ๋‹ค.\n์˜ˆ์ œ:\nIn [4]: def f2(x):\n ...: return x ** 2\n\nIn [5]: sum_list(f2, [1, -2, 2, -3,])\nOut[5]: 18\n\nIn [6]: sum_list(f1, [-1, -2, -3, -4, -5])\nOut[6]: -45\n\n์—ฐ์Šต 4 ๊ฒฌ๋ณธ๋‹ต์•ˆ", "def sum_list(f, xs):\n L = 0\n for x in xs:\n L = L + f(x)\n return L\n\ndef f2(x):\n return x ** 2\n\nprint(sum_list(f2, [1, -2, 2, -3]))\nprint(sum_list(f1, [-1, -2, -3, -4, -5]))", "์ฐธ์กฐ: ํŒŒ์ด์ฌ ๋‚ด์žฅํ•จ์ˆ˜ ์ค‘์— sum ํ•จ์ˆ˜๊ฐ€ ๋น„์Šทํ•œ ์ผ์„ ํ•œ๋‹ค. ์–ด๋–ค ์ฐจ์ด์ ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด๋ณด๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•œ๋‹ค.\n์—ฐ์Šต 5\n๋ฐ‘๋ณ€์˜ ๊ธธ์ด์™€ ๋†’์ด๊ฐ€ ๊ฐ๊ฐ a์™€ h์ธ ์‚ผ๊ฐํ˜•์˜ ๋ฉด์ ์„ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ triangle_area(a, h)๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ๊ทธ๋Ÿฐ๋ฐ ์‚ผ๊ฐํ˜•์˜ ๋†’์ด h๋Š” ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ 5๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•œ๋‹ค. 
ํžŒํŠธ: ํ‚ค์›Œ๋“œ ์ธ์ž๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค.\n์˜ˆ์ œ:\nIn [7]: triangle_area(3)\nOut[7]: 7.5\n\nIn [8]: triangle_area(3, 7)\nOut[8]: 10.5\n\n์—ฐ์Šต 5 ๊ฒฌ๋ณธ๋‹ต์•ˆ", "def triangle_area(a, height=5):\n return 1.0/2 * a * height\n\nprint(triangle_area(3))\nprint(triangle_area(3, 7))", "์—ฐ์Šต 6\nํ•จ์ˆ˜ f๋ฅผ ์ž…๋ ฅ ๋ฐ›์œผ๋ฉด ์•„๋ž˜ ๋ฌ˜์‚ฌ์ฒ˜๋Ÿผ ์ž‘๋™ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ fun_2_fun(f)๋ฅผ ์ •์˜ํ•˜๋ผ.\nfun_2_fun(f)(2) = (f(2)) ** 2\nfun_2_fun(f)(3) = (f(3)) ** 3\nfun_2_fun(f)(4) = (f(4)) ** 4\n...\n\n์ฃผ์˜: ํ•จ์ˆ˜๋ฅผ ์ž…๋ ฅ๋ฐ›์•„ ํ•จ์ˆ˜๋ฅผ ๋ฆฌํ„ดํ•˜๋„๋ก ์ž‘์„ฑํ•ด์•ผ ํ•œ๋‹ค. \nํžŒํŠธ: ํ•จ์ˆ˜ ์•ˆ์—์„œ def ํ‚ค์›Œ๋“œ๋ฅผ ์ด์šฉํ•˜์—ฌ ์ƒˆ๋กœ์šด ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋‹ค. ๊ทธ ํ•จ์ˆ˜๋Š” ์ง€์—ญํ•จ์ˆ˜๊ฐ€ ๋œ๋‹ค.\n์—ฐ์Šต 6 ๊ฒฌ๋ณธ๋‹ต์•ˆ 1", "def fun_2_fun(f):\n def f_exp(n):\n return (f(n)) ** n\n return f_exp\n\nprint(f1(2))\nprint(fun_2_fun(f1)(2))", "๋ฌธ์ œ ํ•ต์‹ฌ\n์ด ๋ฌธ์ œ์˜ ํ•ต์‹ฌ์€ ํ•จ์ˆ˜๋ฅผ ๋‹จ์ˆœํžˆ ์ธ์ž๋กœ๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๋ฆฌํ„ด๊ฐ’์œผ๋กœ๋„ ํ• ์šฉํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ์ฆ‰, ํ•จ์ˆ˜์— ์–ด๋–ค ์ธ์ž๋ฅผ ๋„ฃ๊ณ  ํ˜ธ์ถœํ•˜์˜€๋”๋‹ˆ ์–ด๋–ค ํ•จ์ˆ˜๋ฅผ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๊ตฌํ˜„ํ•ด์•ผ ํ•œ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ๋ฆฌํ„ด๊ฐ’์ด ํ•จ์ˆ˜์ด๋ฏ€๋กœ ๊ทธ ํ•จ์ˆ˜๋ฅผ ์ ๋‹นํ•œ ์ธ์ž๋ฅผ ์ž…๋ ฅํ•˜์—ฌ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋‹ค.\n์˜ˆ๋ฅผ ๋“ค์–ด ํ•จ์ˆ˜ g๋ฅผ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜ํ•˜์ž.", "def exp2(x):\n return x ** 2\n\ng = fun_2_fun(exp2)", "๊ทธ๋Ÿฌ๋ฉด g๋Š” ํ•จ์ˆ˜์ž„์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.", "type(g)", "์–ด๋–ค ํ•จ์ˆ˜์ธ๊ฐ€? help ๋ฅผ ์ด์šฉํ•˜์—ฌ ํ™•์ธํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.", "help(g)\n\ng", "์ฆ‰, ์ธ์ž๋ฅผ ํ•˜๋‚˜ ๋ฐ›๋Š” ํ•จ์ˆ˜์ด๋ฉฐ f_exp๋ฅผ ์ด์šฉํ•ด ์ •์™ธ๋˜์—ˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ์‹ค์ œ๋กœ g๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜๋˜์–ด ์žˆ๋‹ค. \ng๋ฅผ ์ •์˜ํ•˜๊ธฐ ์œ„ํ•ด fun_2_fun(f) ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ ์‚ฌ์šฉ๋œ ์ธ์ž f ๋Œ€์‹ ์— exp2 ํ•จ์ˆ˜๋ฅผ ์‚ฝ์ž…ํ•˜์˜€๊ธฐ ๋•Œ๋ฌธ์— g๊ฐ€ ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜๋œ ํ•จ์ˆ˜์ž„์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. \ng(x) = fun_2_fun(exp2)\n = f_exp(x) # f_exp ๋ฅผ ์ •์˜ํ•  ๋•Œ exp2 ๊ฐ€ ์‚ฌ์šฉ๋จ์— ์ค‘์˜\n = exp2(x) ** x\n = (x**2) ** x\n = x ** (2*x)\n\n์—ฐ์Šต 6 ๊ฒฌ๋ณธ๋‹ต์•ˆ 2", "def fun_2_fun(f):\n return lambda x: f(x) ** x\n\nprint(f1(2))\nprint(fun_2_fun(f1)(2))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
landlab/landlab
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
mit
[ "<a href=\"http://landlab.github.io\"><img style=\"float: left\" src=\"../../landlab_header.png\"></a>\nIntroduction to the NormalFault component\nThis tutorial provides an introduction to the NormalFault component in the Landlab toolkit. This component takes the following parameters. \nParameters\n--------\ngrid : ModelGrid\nfaulted_surface : str or ndarray of shape `(n_nodes, )` or list of str\n or ndarrays. \n Surface that is modified by the NormalFault component. Can be a\n field name or array or a list of strings or ndarrays if the fault.\n should uplift more than one field. Default value is \n `topographic__elevation`.\nfault_throw_rate_through_time : dict, optional\n Dictionary that specifies the time varying throw rate on the fault.\n Expected format is:\n ``fault_throw_rate_through_time = {'time': array, 'rate': array}``\n Default value is a constant rate of 0.001 (units not specified).\nfault_dip_angle : float, optional\n Dip angle of the fault in degrees. Default value is 90 degrees.\nfault_trace : dictionary, optional\n Dictionary that specifies the coordinates of two locations on the\n fault trace. Expected format is\n ``fault_trace = {'x1': float, 'y1': float, 'x2': float, 'y2': float}``\n where the vector from ``(x1, y1)`` to ``(x2, y2)`` defines the\n strike of the fault trace. The orientation of the fault dip relative\n to the strike follows the right hand rule.\n Default is for the fault to strike NE.\ninclude_boundaries : boolean, optional\n Flag to indicate if model grid boundaries should be uplifted. If\n set to ``True`` uplifted model grid boundaries will be set to the\n average value of their upstream nodes. Default value is False.\n\nThe NormalFault component will divide the model domain into two regions, a 'faulted nodes' region which will experience vertical rock uplift at a rate of \n$t \\cdot \\sin (d)$\nwhere $t$ is the fault throw rate and $d$ is the fault dip angle. \nWhile dip angles less than 90 degrees are permitted, in its present implementation, the NormalFault component does not translate field information laterally. \nThe fault orientation is specified by two coordinate pairs: (x1, y1) and (x2, y2). The strike of the fault, specified with the right-hand rule convention, is the vector from (x1, y1) to (x2, y2). Give that this component creates a normal fault, in which the footwall moves up relative to the hanging wall, this means that the nodes that are counterclockwise from the strike are the uplifted nodes. \nTo start, let's import necessary Landlab and Python modules.", "# start by importing necessary modules\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom landlab import HexModelGrid, RasterModelGrid\nfrom landlab.components import (\n FastscapeEroder,\n FlowAccumulator,\n NormalFault,\n StreamPowerEroder,\n)\nfrom landlab.plot import imshow_grid\n\n%matplotlib inline", "First we will make a default NormalFault.", "grid = RasterModelGrid((6, 6), xy_spacing=10)\n\ngrid.add_zeros(\"topographic__elevation\", at=\"node\")\n\nnf = NormalFault(grid)\n\nplt.figure()\nimshow_grid(grid, nf.faulted_nodes.astype(int), cmap=\"viridis\")\nplt.plot(grid.x_of_node, grid.y_of_node, \"c.\")\nplt.show()", "This fault has a strike of NE and dips to the SE. Thus the uplifted nodes (shown in yellow) are in the NW half of the domain. \nThe default NormalFault will not uplift the boundary nodes. We change this by using the keyword argument include_boundaries. 
If this is specified, the elevation of the boundary nodes is calculated as an average of the faulted nodes adjacent to the boundaries. This occurs because most Landlab erosion components do not operate on boundary nodes.", "nf = NormalFault(grid, include_boundaries=True)\n\nplt.figure()\nimshow_grid(grid, nf.faulted_nodes.astype(int), cmap=\"viridis\")\nplt.plot(grid.x_of_node, grid.y_of_node, \"c.\")\nplt.show()", "We can add functionality to the NormalFault with other keyword arguments. We can change the fault strike and dip, as well as specify a time series of fault uplift through time.", "grid = RasterModelGrid((60, 100), xy_spacing=10)\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\n\nnf = NormalFault(grid, fault_trace={\"x1\": 0, \"y1\": 200, \"y2\": 30, \"x2\": 600})\n\nimshow_grid(grid, nf.faulted_nodes.astype(int), cmap=\"viridis\")", "By reversing the order of (x1, y1) and (x2, y2) we can reverse the location of the upthrown nodes (all else equal).", "grid = RasterModelGrid((60, 100), xy_spacing=10)\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\n\nnf = NormalFault(grid, fault_trace={\"y1\": 30, \"x1\": 600, \"x2\": 0, \"y2\": 200})\n\nimshow_grid(grid, nf.faulted_nodes.astype(int), cmap=\"viridis\")", "We can also specify complex time-rock uplift rate histories, but we'll explore that later in the tutorial. \nNext let's make a landscape evolution model with a normal fault. Here we'll use a HexModelGrid to highlight that we can use both raster and non-raster grids with this component. \nWe will do a series of three numerical experiments and will want to keep a few parameters constant. Since you might want to change them, we are making it easier to change all of them together. They are defined in the next block:", "# here are the parameters to change\nK = 0.0005  # stream power coefficient, bigger = streams erode more quickly\nU = 0.0001  # uplift rate in meters per year\n\ndt = 1000  # time step in years\ndx = 10  # space step in meters\n\nnr = 60  # number of model rows\nnc = 100  # number of model columns\n\n# instantiate the grid\ngrid = HexModelGrid((nr, nc), dx, node_layout=\"rect\")\n\n# add a topographic__elevation field with noise\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\nz[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)\n\nfr = FlowAccumulator(grid)\nfs = FastscapeEroder(grid, K_sp=K)\nnf = NormalFault(grid, fault_trace={\"x1\": 0, \"x2\": 800, \"y1\": 0, \"y2\": 500})\n\n# Run this model for 300 1000-year timesteps (300,000 years).\nfor i in range(300):\n    nf.run_one_step(dt)\n    fr.run_one_step()\n    fs.run_one_step(dt)\n    z[grid.core_nodes] += 0.0001 * dt\n\n# plot the final topography\nimshow_grid(grid, z)", "As we can see, the upper left portion of the grid has been uplifted and a stream network has developed over the whole domain.
\nHow might this change when we also uplift the boundary nodes?", "# instantiate the grid\ngrid = HexModelGrid((nr, nc), 10, node_layout=\"rect\")\n\n# add a topographic__elevation field with noise\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\nz[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)\n\nfr = FlowAccumulator(grid)\nfs = FastscapeEroder(grid, K_sp=K)\nnf = NormalFault(\n    grid, fault_trace={\"x1\": 0, \"x2\": 800, \"y1\": 0, \"y2\": 500}, include_boundaries=True\n)\n\n# Run this model for 300 1000-year timesteps (300,000 years).\nfor i in range(300):\n    nf.run_one_step(dt)\n    fr.run_one_step()\n    fs.run_one_step(dt)\n    z[grid.core_nodes] += U * dt\n\n# plot the final topography\nimshow_grid(grid, z)", "We can see that when the boundary nodes are not included, the faulted region is impacted by the edge boundary conditions differently. Depending on your application, one or the other of these boundary condition options may suit your problem better. \nThe last thing to explore is the use of the fault_throw_rate_through_time parameter. This allows us to specify generic fault throw rate histories. For example, consider the following history, in which every 100,000 years there is a 10,000 year period in which the fault is active.", "time = (\n    np.array(\n        [\n            0.0,\n            7.99,\n            8.00,\n            8.99,\n            9.0,\n            17.99,\n            18.0,\n            18.99,\n            19.0,\n            27.99,\n            28.00,\n            28.99,\n            29.0,\n        ]\n    )\n    * 10\n    * dt\n)\nrate = np.array([0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0])\n\nplt.figure()\nplt.plot(time, rate)\nplt.plot([0, 300 * dt], [0.001, 0.001])\nplt.xlabel(\"Time [years]\")\nplt.ylabel(\"Fault Throw Rate [m/yr]\")\nplt.show()", "The default value for uplift rate is 0.001 (units unspecified as it will depend on the x and t units in a model, but in this example we assume time units of years and length units of meters). \nThis will result in a total of 300 m of fault throw over the 300,000 year model time period. This amount of uplift can also be accommodated by faster fault motion that occurs over shorter periods of time. \nNext we plot the cumulative fault throw for the two cases.", "t = np.arange(0, 300 * dt, dt)\nrate_constant = np.interp(t, [0, 300 * dt], [0.001, 0.001])\nrate_variable = np.interp(t, time, rate)\n\ncumulative_rock_uplift_constant = np.cumsum(rate_constant) * dt\ncumulative_rock_uplift_variable = np.cumsum(rate_variable) * dt\n\nplt.figure()\nplt.plot(t, cumulative_rock_uplift_constant)\nplt.plot(t, cumulative_rock_uplift_variable)\nplt.xlabel(\"Time [years]\")\nplt.ylabel(\"Cumulative Fault Throw [m]\")\nplt.show()", "A technical note: Beyond the times specified, the internal workings of the NormalFault will use the final value provided in the rate array.
\nLet's see how this changes the model results.", "# instantiate the grid\ngrid = HexModelGrid((nr, nc), 10, node_layout=\"rect\")\n\n# add a topographic__elevation field with noise\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\nz[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)\n\nfr = FlowAccumulator(grid)\nfs = FastscapeEroder(grid, K_sp=K)\nnf = NormalFault(\n grid,\n fault_throw_rate_through_time={\"time\": time, \"rate\": rate},\n fault_trace={\"x1\": 0, \"x2\": 800, \"y1\": 0, \"y2\": 500},\n include_boundaries=True,\n)\n\n# Run this model for 300 100-year timesteps (30,000 years).\nfor i in range(300):\n nf.run_one_step(dt)\n fr.run_one_step()\n fs.run_one_step(dt)\n z[grid.core_nodes] += U * dt\n\n# plot the final topography\nimshow_grid(grid, z)", "As you can see the resulting topography is very different than in the case with continuous uplift. \nFor our final example, we'll use NormalFault with a more complicated model in which we have both a soil layer and bedrock. In order to move, material must convert from bedrock to soil by weathering.\nFirst we import remaining modules and set some parameter values", "from landlab.components import DepthDependentDiffuser, ExponentialWeatherer\n\n# here are the parameters to change\nK = 0.0005 # stream power coefficient, bigger = streams erode more quickly\nU = 0.0001 # uplift rate in meters per year\nmax_soil_production_rate = (\n 0.001 # Maximum weathering rate for bare bedrock in meters per year\n)\nsoil_production_decay_depth = 0.7 # Characteristic weathering depth in meters\nlinear_diffusivity = 0.01 # Hillslope diffusivity and m2 per years\nsoil_transport_decay_depth = 0.5 # Characteristic soil transport depth in meters\n\ndt = 100 # time step in years\ndx = 10 # space step in meters\n\nnr = 60 # number of model rows\nnc = 100 # number of model columns\n\n?ExponentialWeatherer", "Next we create the grid and run the model.", "# instantiate the grid\ngrid = HexModelGrid((nr, nc), 10, node_layout=\"rect\")\n\n# add a topographic__elevation field with noise\nz = grid.add_zeros(\"topographic__elevation\", at=\"node\")\nz[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)\n\n# create a field for soil depth\nd = grid.add_zeros(\"soil__depth\", at=\"node\")\n\n# create a bedrock elevation field\nb = grid.add_zeros(\"bedrock__elevation\", at=\"node\")\nb[:] = z - d\n\nfr = FlowAccumulator(grid, depression_finder=\"DepressionFinderAndRouter\", routing=\"D4\")\nfs = FastscapeEroder(grid, K_sp=K)\new = ExponentialWeatherer(\n grid,\n soil_production__decay_depth=soil_production_decay_depth,\n soil_production__maximum_rate=max_soil_production_rate,\n)\n\ndd = DepthDependentDiffuser(\n grid,\n linear_diffusivity=linear_diffusivity,\n soil_transport_decay_depth=soil_transport_decay_depth,\n)\n\nnf = NormalFault(\n grid,\n fault_throw_rate_through_time={\"time\": [0, 30], \"rate\": [0.001, 0.001]},\n fault_trace={\"x1\": 0, \"x2\": 800, \"y1\": 0, \"y2\": 500},\n include_boundaries=False,\n)\n\n# Run this model for 300 100-year timesteps (30,000 years).\nfor i in range(300):\n\n # Move normal fault\n nf.run_one_step(dt)\n\n # Route flow\n fr.run_one_step()\n\n # Erode with water\n fs.run_one_step(dt)\n\n # We must also now erode the bedrock where relevant. 
If water erosion\n # into bedrock has occurred, the bedrock elevation will be higher than\n # the actual elevation, so we simply re-set bedrock elevation to the\n # lower of itself or the current elevation.\n b = grid.at_node[\"bedrock__elevation\"]\n b[:] = np.minimum(b, grid.at_node[\"topographic__elevation\"])\n\n # Calculate regolith-production rate\n ew.calc_soil_prod_rate()\n\n # Generate and move soil around. This component will update both the\n # soil thickness and topographic elevation fields.\n dd.run_one_step(dt)\n\n # uplift the whole domain, we need to do this to both bedrock and topography\n z[grid.core_nodes] += U * dt\n b[grid.core_nodes] += U * dt\n\n# plot the final topography\nimshow_grid(grid, \"topographic__elevation\")", "We can also examine the soil thickness and soil production rate. Here in the soil depth, we see it is highest along the ridge crests.", "# and the soil depth\nimshow_grid(grid, \"soil__depth\", cmap=\"viridis\")", "The soil production rate is highest where the soil depth is low, as we would expect given the exponential form.", "# and the soil production rate\nimshow_grid(grid, \"soil_production__rate\", cmap=\"viridis\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
texib/pixnet_hackathon_2015
doc2vec-IMDB.ipynb
mit
[ "gensim doc2vec & IMDB sentiment dataset\nTODO: section on introduction & motivation\nTODO: prerequisites + dependencies (statsmodels, patsy, ?)\nLoad corpus\nFetch and prep exactly as in Mikolov's go.sh shell script. (Note this cell tests for existence of required files, so steps won't repeat once the final summary file (aclImdb/alldata-id.txt) is available alongside this notebook.)", "%%bash\n# adapted from Mikolov's example go.sh script: \nif [ ! -f \"aclImdb/alldata-id.txt\" ]\nthen\n if [ ! -d \"aclImdb\" ] \n then\n if [ ! -f \"aclImdb_v1.tar.gz\" ]\n then\n wget --quiet http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\n fi\n tar xf aclImdb_v1.tar.gz\n fi\n \n #this function will convert text to lowercase and will disconnect punctuation and special symbols from words\n function normalize_text {\n awk '{print tolower($0);}' < $1 | sed -e 's/\\./ \\. /g' -e 's/<br \\/>/ /g' -e 's/\"/ \" /g' \\\n -e 's/,/ , /g' -e 's/(/ ( /g' -e 's/)/ ) /g' -e 's/\\!/ \\! /g' -e 's/\\?/ \\? /g' \\\n -e 's/\\;/ \\; /g' -e 's/\\:/ \\: /g' > $1-norm\n }\n\n export LC_ALL=C\n for j in train/pos train/neg test/pos test/neg train/unsup; do\n rm temp\n for i in `ls aclImdb/$j`; do cat aclImdb/$j/$i >> temp; awk 'BEGIN{print;}' >> temp; done\n normalize_text temp\n mv temp-norm aclImdb/$j/norm.txt\n done\n mv aclImdb/train/pos/norm.txt aclImdb/train-pos.txt\n mv aclImdb/train/neg/norm.txt aclImdb/train-neg.txt\n mv aclImdb/test/pos/norm.txt aclImdb/test-pos.txt\n mv aclImdb/test/neg/norm.txt aclImdb/test-neg.txt\n mv aclImdb/train/unsup/norm.txt aclImdb/train-unsup.txt\n\n cat aclImdb/train-pos.txt aclImdb/train-neg.txt aclImdb/test-pos.txt aclImdb/test-neg.txt aclImdb/train-unsup.txt > aclImdb/alldata.txt\n awk 'BEGIN{a=0;}{print \"_*\" a \" \" $0; a++;}' < aclImdb/alldata.txt > aclImdb/alldata-id.txt\nfi\n\nimport os.path\nassert os.path.isfile(\"aclImdb/alldata-id.txt\"), \"alldata-id.txt unavailable\"", "The data is small enough to be read into memory.", "[line_no//25000]\n\nimport gensim\n# from gensim.models.doc2vec import \nfrom collections import namedtuple\n\nSentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')\n\nalldocs = [] # will hold all docs in original order\nwith open('./data/new_parsed_no_spam.txt') as alldata:\n for line_no, line in enumerate(alldata):\n tokens = line.split()\n words = tokens[1:]\n tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost\n split = ['train','test','extra','extra'][line_no//70000] # 25k train, 25k test, 25k extra\n sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//70000] # [12.5K pos, 12.5K neg]*2 then unknown\n alldocs.append(SentimentDocument(words, tags, split, sentiment))\n\ntrain_docs = [doc for doc in alldocs if doc.split == 'train']\ntest_docs = [doc for doc in alldocs if doc.split == 'test']\ndoc_list = alldocs[:] # for reshuffling per pass\n\nprint('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))", "Set-up Doc2Vec Training & Evaluation Models\nApproximating experiment of Le & Mikolov \"Distributed Representations of Sentences and Documents\", also with guidance from Mikolov's example go.sh:\n./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1\nParameter choices below vary:\n\n100-dimensional vectors, as the 400d vectors of the paper don't seem to offer much benefit on this task\nsimilarly, 
frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out\ncbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0\nadded to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)\na min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)", "from gensim.models import Doc2Vec\nimport gensim.models.doc2vec\nfrom collections import OrderedDict\nimport multiprocessing\n\ncores = multiprocessing.cpu_count()\nassert gensim.models.doc2vec.FAST_VERSION > -1, \"this will be painfully slow otherwise\"\n\nsimple_models = [\n # PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size\n Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),\n # PV-DBOW \n Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),\n # PV-DM w/average\n Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),\n]\n\n# speed setup by sharing results of 1st model's vocabulary scan\nsimple_models[0].build_vocab(alldocs) # PV-DM/concat requires one special NULL word so it serves as template\nprint(simple_models[0])\nfor model in simple_models[1:]:\n model.reset_from(simple_models[0])\n print(model)\n\nmodels_by_name = OrderedDict((str(model), model) for model in simple_models)", "Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)", "from gensim.test.test_doc2vec import ConcatenatedDoc2Vec\nmodels_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])\nmodels_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])", "Predictive Evaluation Methods\nHelper methods for evaluating error rate.", "import numpy as np\nimport statsmodels.api as sm\nfrom random import sample\n\n# for timing\nfrom contextlib import contextmanager\nfrom timeit import default_timer\nimport time \n\n@contextmanager\ndef elapsed_timer():\n start = default_timer()\n elapser = lambda: default_timer() - start\n yield lambda: elapser()\n end = default_timer()\n elapser = lambda: end-start\n \ndef logistic_predictor_from_data(train_targets, train_regressors):\n logit = sm.Logit(train_targets, train_regressors)\n predictor = logit.fit(disp=0)\n #print(predictor.summary())\n return predictor\n\ndef error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):\n \"\"\"Report error rate on test_doc sentiments, using supplied model and train_docs\"\"\"\n\n train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])\n train_regressors = sm.add_constant(train_regressors)\n predictor = logistic_predictor_from_data(train_targets, train_regressors)\n\n test_data = test_set\n if infer:\n if infer_subsample < 1.0:\n test_data = sample(test_data, int(infer_subsample * len(test_data)))\n test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]\n else:\n test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]\n test_regressors = sm.add_constant(test_regressors)\n \n # predict & evaluate\n 
test_predictions = predictor.predict(test_regressors)\n corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])\n errors = len(test_predictions) - corrects\n error_rate = float(errors) / len(test_predictions)\n return (error_rate, errors, len(test_predictions), predictor)\n", "Bulk Training\nUsing explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post โ€“ with added shuffling of corpus on each pass.\nNote that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.\nEvaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors. \n(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)", "from collections import defaultdict\nbest_error = defaultdict(lambda :1.0) # to selectively-print only best errors achieved\n\nfrom random import shuffle\nimport datetime\n\nalpha, min_alpha, passes = (0.025, 0.001, 20)\nalpha_delta = (alpha - min_alpha) / passes\n\nprint(\"START %s\" % datetime.datetime.now())\n\nfor epoch in range(passes):\n shuffle(doc_list) # shuffling gets best results\n \n for name, train_model in models_by_name.items():\n # train\n duration = 'na'\n train_model.alpha, train_model.min_alpha = alpha, alpha\n with elapsed_timer() as elapsed:\n train_model.train(doc_list)\n duration = '%.1f' % elapsed()\n \n # evaluate\n eval_duration = ''\n with elapsed_timer() as eval_elapsed:\n err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)\n eval_duration = '%.1f' % eval_elapsed()\n best_indicator = ' '\n if err <= best_error[name]:\n best_error[name] = err\n best_indicator = '*' \n print(\"%s%f : %i passes : %s %ss %ss\" % (best_indicator, err, epoch + 1, name, duration, eval_duration))\n\n if ((epoch + 1) % 5) == 0 or epoch == 0:\n eval_duration = ''\n with elapsed_timer() as eval_elapsed:\n infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)\n eval_duration = '%.1f' % eval_elapsed()\n best_indicator = ' '\n if infer_err < best_error[name + '_inferred']:\n best_error[name + '_inferred'] = infer_err\n best_indicator = '*'\n print(\"%s%f : %i passes : %s %ss %ss\" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))\n\n print('completed pass %i at alpha %f' % (epoch + 1, alpha))\n alpha -= alpha_delta\n \nprint(\"END %s\" % str(datetime.datetime.now()))", "Achieved Sentiment-Prediction Accuracy", "# print best error rates achieved\nfor rate, name in sorted((rate, name) for name, rate in best_error.items()):\n print(\"%f %s\" % (rate, name))", "In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.\nExamining Results\nAre inferred vectors close to the precalculated ones?", "doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc; re-run cell for more examples\nprint('for doc %d...' 
% doc_id)\nfor model in simple_models:\n inferred_docvec = model.infer_vector(alldocs[doc_id].words)\n print('%s:\\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))", "(Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated โ€“ just 3 steps starting at a high alpha โ€“ and likely need tuning for other applications.)\nDo close documents seem more related than distant ones?", "import random\n\ndoc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples\nmodel = random.choice(simple_models) # and a random model\nsims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents\nprint(u'TARGET (%d): ยซ%sยป\\n' % (doc_id, ' '.join(alldocs[doc_id].words)))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: ยซ%sยป\\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))\n", "(Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)\nDo the word vectors show useful similarities?", "word_models = simple_models[:]\n\nimport random\nfrom IPython.display import HTML\n# pick a random word with a suitable number of occurences\nwhile True:\n word = random.choice(word_models[0].index2word)\n if word_models[0].vocab[word].count > 10:\n break\n# or uncomment below line, to just pick a word from the relevant domain:\n#word = 'comedy/drama'\nsimilars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\\n') for model in word_models]\nsimilar_table = (\"<table><tr><th>\" +\n \"</th><th>\".join([str(model) for model in word_models]) + \n \"</th></tr><tr><td>\" +\n \"</td><td>\".join(similars_per_model) +\n \"</td></tr></table>\")\nprint(\"most similar words for '%s' (%d occurences)\" % (word, simple_models[0].vocab[word].count))\nHTML(similar_table)", "Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors โ€“ they remain at their random initialized values โ€“ unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task. \nWords from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)\nAre the word vectors from this dataset any good at analogies?", "# assuming something like\n# https://word2vec.googlecode.com/svn/trunk/questions-words.txt \n# is in local directory\n# note: this takes many minutes\nfor model in word_models:\n sections = model.accuracy('questions-words.txt')\n correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])\n print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))", "Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies โ€“ at least for the DM/concat and DM/mean models which actually train word vectors. 
(The untrained random-initialized words of the DBOW model of course fail miserably.)\nSlop", "This cell left intentionally erroneous. ", "To mix the Google dataset (if locally available) into the word tests...", "from gensim.models import Word2Vec\nw2v_g100b = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)\nw2v_g100b.compact_name = 'w2v_g100b'\nword_models.append(w2v_g100b)", "To get copious logging output from above steps...", "import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nrootLogger = logging.getLogger()\nrootLogger.setLevel(logging.INFO)", "To auto-reload python code while developing...", "%load_ext autoreload\n%autoreload 2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
twosigma/beakerx
doc/python/TableAPI.ipynb
apache-2.0
[ "Python API for Table Display\nIn addition to APIs for creating and formatting BeakerX's interactive table widget, the Python runtime configures pandas to display tables with the interactive widget instead of static HTML.", "import pandas as pd\nfrom beakerx import *\nfrom beakerx.object import beakerx\n\npd.read_csv('../resources/data/interest-rates.csv')\n\ntable = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))\ntable.setAlignmentProviderForColumn('m3', TableDisplayAlignmentProvider.CENTER_ALIGNMENT)\ntable.setRendererForColumn(\"y10\", TableDisplayCellRenderer.getDataBarsRenderer(False))\ntable.setRendererForType(ColumnType.Double, TableDisplayCellRenderer.getDataBarsRenderer(True))\ntable\n\ndf = pd.read_csv('../resources/data/interest-rates.csv')\ndf['time'] = df['time'].str.slice(0,19).astype('datetime64[ns]')\ntable = TableDisplay(df)\ntable.setStringFormatForTimes(TimeUnit.DAYS)\ntable.setStringFormatForType(ColumnType.Double, TableDisplayStringFormat.getDecimalFormat(4,6))\ntable.setStringFormatForColumn(\"m3\", TableDisplayStringFormat.getDecimalFormat(0, 0))\n\ntable\n\ntable = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))\ntable\n#freeze a column\ntable.setColumnFrozen(\"y1\", True)\n#hide a column\ntable.setColumnVisible(\"y30\", False)\n\ntable.setColumnOrder([\"m3\", \"y1\", \"y5\", \"time\", \"y2\"])\n\ndef config_tooltip(row, column, table):\n return \"The value is: \" + str(table.values[row][column])\n\ntable.setToolTip(config_tooltip)\n\ntable.setDataFontSize(16)\ntable.setHeaderFontSize(18)\n\ntable\n\nmapListColorProvider = [\n {\"a\": 1, \"b\": 2, \"c\": 3},\n {\"a\": 4, \"b\": 5, \"c\": 6},\n {\"a\": 7, \"b\": 8, \"c\": 5}\n]\ntabledisplay = TableDisplay(mapListColorProvider)\n\ncolors = [\n [Color.LIGHT_GRAY, Color.GRAY, Color.RED],\n [Color.DARK_GREEN, Color.ORANGE, Color.RED],\n [Color.MAGENTA, Color.BLUE, Color.BLACK]\n]\n\ndef color_provider(row, column, table):\n return colors[row][column]\n\ntabledisplay.setFontColorProvider(color_provider)\ntabledisplay\n\nmapListFilter = [\n {\"a\":1, \"b\":2, \"c\":3},\n {\"a\":4, \"b\":5, \"c\":6},\n {\"a\":7, \"b\":8, \"c\":5}\n]\ndisplay = TableDisplay(mapListFilter)\n\ndef filter_row(row, model):\n return model[row][1] == 8\n\ndisplay.setRowFilter(filter_row)\n\ndisplay\n\ntable = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))\ntable.addCellHighlighter(TableDisplayCellHighlighter.getHeatmapHighlighter(\"m3\", TableDisplayCellHighlighter.FULL_ROW))\n\ntable", "Display mode: Pandas default", "beakerx.pandas_display_default()\npd.read_csv('../resources/data/interest-rates.csv')", "Display mode: TableDisplay Widget", "beakerx.pandas_display_table()\npd.read_csv('../resources/data/interest-rates.csv')", "Recognized Formats", "TableDisplay([{'y1':4, 'm3':2, 'z2':1}, {'m3':4, 'z2':2}])\n\nTableDisplay({\"x\" : 1, \"y\" : 2})", "Programmable Table Actions", "mapList4 = [\n {\"a\":1, \"b\":2, \"c\":3},\n {\"a\":4, \"b\":5, \"c\":6},\n {\"a\":7, \"b\":8, \"c\":5}\n]\ndisplay = TableDisplay(mapList4)\n\ndef dclick(row, column, tabledisplay):\n tabledisplay.values[row][column] = sum(map(int,tabledisplay.values[row]))\n\ndisplay.setDoubleClickAction(dclick)\n\ndef negate(row, column, tabledisplay):\n tabledisplay.values[row][column] = -1 * int(tabledisplay.values[row][column])\n\ndef incr(row, column, tabledisplay):\n tabledisplay.values[row][column] = int(tabledisplay.values[row][column]) + 1\n\ndisplay.addContextMenuItem(\"negate\", 
negate)\ndisplay.addContextMenuItem(\"increment\", incr)\n\ndisplay\n\nmapList4 = [\n {\"a\":1, \"b\":2, \"c\":3},\n {\"a\":4, \"b\":5, \"c\":6},\n {\"a\":7, \"b\":8, \"c\":5}\n]\ndisplay = TableDisplay(mapList4)\n\n#set what happens on a double click\ndisplay.setDoubleClickAction(\"runDoubleClick\")\n\ndisplay\n\nprint(\"runDoubleClick fired\")\nprint(display.details)", "Set index to DataFrame", "df = pd.read_csv('../resources/data/interest-rates.csv')\ndf.set_index(['m3'])\n\ndf = pd.read_csv('../resources/data/interest-rates.csv')\ndf.index = df['time']\ndf", "Update cell", "dataToUpdate = [\n {'a':1, 'b':2, 'c':3},\n {'a':4, 'b':5, 'c':6},\n {'a':7, 'b':8, 'c':9}\n]\ntableToUpdate = TableDisplay(dataToUpdate)\n\ntableToUpdate\n\ntableToUpdate.values[0][0] = 99\ntableToUpdate.sendModel()\n\ntableToUpdate.updateCell(2,\"c\",121)\ntableToUpdate.sendModel()", "HTML format\nHTML format allows markup and styling of the cell's content. Interactive JavaScript is not supported however.", "table = TableDisplay({\n 'w': '$2 \\\\sigma$',\n 'x': '<em style=\"color:red\">italic red</em>',\n 'y': '<b style=\"color:blue\">bold blue</b>',\n 'z': 'strings without markup work fine too',\n })\ntable.setStringFormatForColumn(\"Value\", TableDisplayStringFormat.getHTMLFormat())\ntable", "Auto linking of URLs\nThe normal string format automatically detects URLs and links them. An underline appears when the mouse hovers over such a string, and when you click it opens in a new window.", "TableDisplay({'Two Sigma': 'http://twosigma.com', 'BeakerX': 'http://BeakerX.com'})" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lisitsyn/shogun
doc/ipython-notebooks/ica/ecg_sep.ipynb
bsd-3-clause
[ "Fetal Electrocardiogram Extraction by Source Subspace Separation\nBy Kevin Hughes and Andreas Ziehe\nThis notebook illustrates <a href=\"http://en.wikipedia.org/wiki/Blind_signal_separation\">Blind Source Seperation</a>(BSS) on several time synchronised Electrocardiogram's (ECG's) of the baby's mother using <a href=\"http://en.wikipedia.org/wiki/Independent_component_analysis\">Independent Component Analysis</a> (ICA) in Shogun. This is used to extract the baby's ECG from it.\nThis task has been studied before and has been published in these papers:\nCardoso, J. F. (1998, May). Multidimensional independent component analysis. \nIn Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 \nIEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE.\nDirk Callaerts, \"Signal Separation Methods based on Singular Value\nDecomposition and their Application to the Real-Time Extraction of the\nFetal Electrocardiogram from Cutaneous Recordings\", Ph.D. Thesis,\nK.U.Leuven - E.E. Dept., Dec. 1989.\nL. De Lathauwer, B. De Moor, J. Vandewalle, \"Fetal Electrocardiogram\nExtraction by Source Subspace Separation\", Proc. IEEE SP / ATHOS\nWorkshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138.\nIn this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox.\nFirst we need some data, luckily an ECG dataset is distributed in the Shogun data repository. So the first step is to change the directory then we'll load the data.", "# change to the shogun-data directory\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\nos.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))\n\nimport numpy as np\n\n# load data\n# Data originally from:\n# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html\ndata = np.loadtxt('foetal_ecg.dat')\n\n# time steps\ntime_steps = data[:,0]\n\n# abdominal signals\nabdominal2 = data[:,1]\nabdominal3 = data[:,2]\nabdominal4 = data[:,3]\nabdominal5 = data[:,4]\nabdominal6 = data[:,5]\n\n# thoracic signals\nthoracic7 = data[:,6]\nthoracic8 = data[:,7]\nthoracic9 = data[:,8]", "Before we go any further let's take a look at this data by plotting it:", "%matplotlib inline\n# plot signals\nimport pylab as pl\n\n# abdominal signals\nfor i in range(1,6):\n pl.figure(figsize=(14,3))\n pl.plot(time_steps, data[:,i], 'r')\n pl.title('Abdominal %d' % (i))\n pl.grid()\n pl.show()\n\n# thoracic signals\nfor i in range(6,9):\n pl.figure(figsize=(14,3))\n pl.plot(time_steps, data[:,i], 'r')\n pl.title('Thoracic %d' % (i))\n pl.grid()\n pl.show()", "The peaks in the plot represent a heart beat but its pretty hard to interpret and I know I definitely can't see two distinc signals, lets see what we can do with ICA!\nIn general for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate and in this case we actually have a lot more (9 mixtures but there is only 2 sources, mother and baby). There are several different approaches for handling this situation, some algorithms are specifically designed to handle this case while other times the data is pre-processed with Principal Component Analysis (PCA). 
It is also common to simply apply the separation to all the sources and then choose some of the extracted signal manually or using some other know criteria which is what I'll be showing in this example.\nNow we create our ICA data set and convert to a Shogun features type:", "import shogun as sg\n\n# Signal Matrix X\nX = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T\n\n# Convert to features for shogun\nmixed_signals = sg.features((X).astype(np.float64))", "Next we apply the ICA algorithm to separate the sources:", "# Separating with SOBI\nsep = sg.transformer('SOBI')\nsep.put('tau', 1.0*np.arange(0,120))\n \nsep.fit(mixed_signals)\nsignals = sep.transform(mixed_signals)\n\nS_ = signals.get('feature_matrix')", "And we plot the separated signals:", "# Show separation results\n\n# Separated Signal i\nfor i in range(S_.shape[0]):\n pl.figure(figsize=(14,3))\n pl.plot(time_steps, S_[i], 'r')\n pl.title('Separated Signal %d' % (i+1))\n pl.grid()\n pl.show()", "Now we can interpret the results! First we are going to exploit the known fact that the baby's heart rate is about twice that of the mothers.\nOur interpretation of the results is as follows:\n\nseparated signal 1 -> baby ECG\nseparated signal 2 -> still a bit mixed baby +mother\nseparated signal 3 -> baby ECG\nseparated signal 4 -> slow drift due to breathing of the mother\nseparated signal 5 -> mainly mother ECG but still a bit mixed and noisy\nseparated signal 6-8 -> mothers ECG, with 8 being the best\n\nAnd thats the proof of concept Fetal Electrocardiogram Extraction by Source Subspace Separation using the Shogun Machine Learning Toolkit!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Yangqing/caffe2
caffe2/python/tutorials/create_your_own_dataset.ipynb
apache-2.0
[ "How do I create my own dataset?\nSo Caffe2 uses a binary DB format to store the data that we would like to train models on. A Caffe2 DB is a glorified name of a key-value storage where the keys are usually randomized so that the batches are approximately i.i.d. The values are the real stuff here: they contain the serialized strings of the specific data formats that you would like your training algorithm to ingest. So, the stored DB would look (semantically) like this:\nkey1 value1\nkey2 value2\nkey3 value3\n...\nTo a DB, it treats the keys and values as strings, but you probably want structured contents. One way to do this is to use a TensorProtos protocol buffer: it essentially wraps Tensors, aka multi-dimensional arrays, together with the tensor data type and shape information. Then, one can use the TensorProtosDBInput operator to load the data into an SGD training fashion.\nHere, we will show you one example of how to create your own dataset. To this end, we will use the UCI Iris dataset - which was a very popular classical dataset for classifying Iris flowers. It contains 4 real-valued features representing the dimensions of the flower, and classifies things into 3 types of Iris flowers. The dataset can be downloaded here.", "# First let's import some necessities\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\n%matplotlib inline\nimport urllib2 # for downloading the dataset from the web.\nimport numpy as np\nfrom matplotlib import pyplot\nfrom StringIO import StringIO\nfrom caffe2.python import core, utils, workspace\nfrom caffe2.proto import caffe2_pb2\n\nf = urllib2.urlopen('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data')\nraw_data = f.read()\nprint('Raw data looks like this:')\nprint(raw_data[:100] + '...')\n\n# load the features to a feature matrix.\nfeatures = np.loadtxt(StringIO(raw_data), dtype=np.float32, delimiter=',', usecols=(0, 1, 2, 3))\n# load the labels to a feature matrix\nlabel_converter = lambda s : {'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2}[s]\nlabels = np.loadtxt(StringIO(raw_data), dtype=np.int, delimiter=',', usecols=(4,), converters={4: label_converter})", "Before we do training, one thing that is often beneficial is to separate the dataset into training and testing. In this case, let's randomly shuffle the data, use the first 100 data points to do training, and the remaining 50 to do testing. For more sophisticated approaches, you can use e.g. cross validation to separate your dataset into multiple training and testing splits. 
Read more about cross validation here.", "random_index = np.random.permutation(150)\nfeatures = features[random_index]\nlabels = labels[random_index]\n\ntrain_features = features[:100]\ntrain_labels = labels[:100]\ntest_features = features[100:]\ntest_labels = labels[100:]\n\n# Let's plot the first two features together with the label.\n# Remember, while we are plotting the testing feature distribution\n# here too, you might not be supposed to do so in real research,\n# because one should not peek into the testing data.\nlegend = ['rx', 'b+', 'go']\npyplot.title(\"Training data distribution, feature 0 and 1\")\nfor i in range(3):\n pyplot.plot(train_features[train_labels==i, 0], train_features[train_labels==i, 1], legend[i])\npyplot.figure()\npyplot.title(\"Testing data distribution, feature 0 and 1\")\nfor i in range(3):\n pyplot.plot(test_features[test_labels==i, 0], test_features[test_labels==i, 1], legend[i])", "Now, as promised, let's put things into a Caffe2 DB. In this DB, what would happen is that we will use \"train_xxx\" as the key, and use a TensorProtos object to store two tensors for each data point: one as the feature and one as the label. We will use Caffe2's Python DB interface to do so.", "# First, let's see how one can construct a TensorProtos protocol buffer from numpy arrays.\nfeature_and_label = caffe2_pb2.TensorProtos()\nfeature_and_label.protos.extend([\n utils.NumpyArrayToCaffe2Tensor(features[0]),\n utils.NumpyArrayToCaffe2Tensor(labels[0])])\nprint('This is what the tensor proto looks like for a feature and its label:')\nprint(str(feature_and_label))\nprint('This is the compact string that gets written into the db:')\nprint(feature_and_label.SerializeToString())\n\n# Now, actually write the db.\n\ndef write_db(db_type, db_name, features, labels):\n db = core.C.create_db(db_type, db_name, core.C.Mode.write)\n transaction = db.new_transaction()\n for i in range(features.shape[0]):\n feature_and_label = caffe2_pb2.TensorProtos()\n feature_and_label.protos.extend([\n utils.NumpyArrayToCaffe2Tensor(features[i]),\n utils.NumpyArrayToCaffe2Tensor(labels[i])])\n transaction.put(\n 'train_%03d'.format(i),\n feature_and_label.SerializeToString())\n # Close the transaction, and then close the db.\n del transaction\n del db\n\nwrite_db(\"minidb\", \"iris_train.minidb\", train_features, train_labels)\nwrite_db(\"minidb\", \"iris_test.minidb\", test_features, test_labels)", "Now, let's create a very simple network that only consists of one single TensorProtosDBInput operator, to showcase how we load data from the DB that we created. For training, you might want to do something more complex: creating a network, train it, get the model, and run the prediction service. 
To this end you can look at the MNIST tutorial for details.", "net_proto = core.Net(\"example_reader\")\ndbreader = net_proto.CreateDB([], \"dbreader\", db=\"iris_train.minidb\", db_type=\"minidb\")\nnet_proto.TensorProtosDBInput([dbreader], [\"X\", \"Y\"], batch_size=16)\n\nprint(\"The net looks like this:\")\nprint(str(net_proto.Proto()))\n\nworkspace.CreateNet(net_proto)\n\n# Let's run it to get batches of features.\nworkspace.RunNet(net_proto.Proto().name)\nprint(\"The first batch of feature is:\")\nprint(workspace.FetchBlob(\"X\"))\nprint(\"The first batch of label is:\")\nprint(workspace.FetchBlob(\"Y\"))\n\n# Let's run again.\nworkspace.RunNet(net_proto.Proto().name)\nprint(\"The second batch of feature is:\")\nprint(workspace.FetchBlob(\"X\"))\nprint(\"The second batch of label is:\")\nprint(workspace.FetchBlob(\"Y\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
brucefan1983/GPUMD
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
gpl-3.0
[ "Phonon Dispersion\n1. Introduction\n\nIn this example, we use harmonic lattice dynamics to calculate the phonon dispersion of diamond silicon.\n\nImporting Relevant Functions\n\nThe inputs/outputs for GPUMD are processed using the Atomic Simulation Environment (ASE) and the thermo package.", "from pylab import *\nfrom ase.lattice.cubic import Diamond\nfrom ase.build import bulk\nfrom thermo.gpumd.preproc import add_basis, repeat\nfrom thermo.gpumd.io import create_basis, create_kpoints, ase_atoms_to_gpumd\nfrom thermo.gpumd.data import load_omega2", "2. Preparing the Inputs\n\nThe structure as specified is 64-atom diamond silicon at zero temperature and zero pressure. \nWe use the minimal Tersoff potential [Fan 2020].\n\nGenerate the xyz.in file:\nCreate Si Unit Cell & Add Basis", "a=5.434\nSi_UC = bulk('Si', 'diamond', a=a)\nadd_basis(Si_UC)\nSi_UC", "Transform Si to Cubic Supercell", "# Create 8 atom diamond structure\nSi = repeat(Si_UC, [2,2,1])\nSi.set_cell([a, a, a])\nSi.wrap()\n\n# Complete full supercell\nSi = repeat(Si, [2,2,2])\nSi", "Write xyz.in File", "ase_atoms_to_gpumd(Si, M=4, cutoff=3)", "Write basis.in File\n\nThe basis.in file reads:\n2\n0 28\n4 28\n0\n0\n0\n0\n1\n1\n1\n1\n...\n\nHere the primitive cell is chosen as the unit cell. There are only two basis atoms in the unit cell, as indicated by the number 2 in the first line.\n\n\nThe next two lines list the indices (0 and 4) and masses (both are 28 amu) for the two basis atoms.\n\n\nThe next lines map all the atoms (including the basis atoms) in the super cell to the basis atoms: atoms equivalent to atom 0 have a label 0, and atoms equivalent to atom 1 have a label 1.\n\n\nNote: The basis.in file generated by this Jupyter notebook may look different, but the same concepts apply and the results will be the same.", "create_basis(Si)", "Write kpoints.in File\n\nThe $k$ vectors are defined in the reciprocal space with respect to the unit cell chosen in the basis.in file.\nWe use the $\\Gamma-X-K-\\Gamma-L$ path, with 400 $k$ points in total.", "linear_path, sym_points, labels = create_kpoints(Si_UC, path='GXKGL',npoints=400)", "The <code>run.in</code> file:\nThe <code>run.in</code> input file is given below:<br>\npotential potentials/tersoff/Si_Fan_2019.txt 0\ncompute_phonon 5.0 0.005 # in units of A\n\n\nThe first line with the potential keyword states that the potential to be used is specified in the file Si_Fan_2019.txt.\n\n\nThe second line with the compute_phonon keyword specifies that the force constants will be calculated with a cutoff of 5.0 $\\mathring A$ (here the point is that first and second nearest neighbors need to be included) and a displacement of 0.005 $\\mathring A$ will be used in the finite-displacement method.\n\n\n3. Results and Discussion\nFigure Properties", "aw = 2\nfs = 24\nfont = {'size' : fs}\nmatplotlib.rc('font', **font)\nmatplotlib.rc('axes' , linewidth=aw)\n\ndef set_fig_properties(ax_list):\n tl = 8\n tw = 2\n tlm = 4\n \n for ax in ax_list:\n ax.tick_params(which='major', length=tl, width=tw)\n ax.tick_params(which='minor', length=tlm, width=tw)\n ax.tick_params(which='both', axis='both', direction='in', right=True, top=True)", "Plot Phonon Dispersion\n\nThe omega2.out output file is loaded and processed to create the following figure. 
The previously defined kpoints are used for the $x$-axis.", "nu = load_omega2()\n\nfigure(figsize=(10,10))\nset_fig_properties([gca()])\nvlines(sym_points, ymin=0, ymax=17)\nplot(linear_path, nu, color='C0',lw=3)\nxlim([0, max(linear_path)])\ngca().set_xticks(sym_points)\ngca().set_xticklabels([r'$\\Gamma$','X', 'K', r'$\\Gamma$', 'L'])\nylim([0, 17])\nylabel(r'$\\nu$ (THz)')\nshow()", "Phonon dispersion of silicon crystal described by the mini-Tersoff potential.\n\nThe above figure shows the phonon dispersion of silicon crystal described by the mini-Tersoff potential [Fan 2020]\n\n4. References\n[Fan 2020] Zheyong Fan, Yanzhou Wang, Xiaokun Gu, Ping Qian, Yanjing Su, and Tapio Ala-Nissila, A minimal Tersoff potential for diamond silicon with improved descriptions of elastic and phonon transport properties, J. Phys.: Condens. Matter 32 135901 (2020)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tritemio/multispot_paper
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
mit
[ "Executed: Mon Mar 27 11:34:36 2017\nDuration: 8 seconds.\nusALEX-5samples - Template\n\nThis notebook is executed through 8-spots paper analysis.\nFor a direct execution, uncomment the cell below.", "ph_sel_name = \"all-ph\"\n\ndata_id = \"22d\"\n\n# ph_sel_name = \"all-ph\"\n# data_id = \"7d\"", "Load software and filenames definitions", "from fretbursts import *\n\ninit_notebook()\nfrom IPython.display import display", "Data folder:", "data_dir = './data/singlespot/'\n\nimport os\ndata_dir = os.path.abspath(data_dir) + '/'\nassert os.path.exists(data_dir), \"Path '%s' does not exist.\" % data_dir", "List of data files:", "from glob import glob\nfile_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)\n## Selection for POLIMI 2012-11-26 datatset\nlabels = ['17d', '27d', '7d', '12d', '22d']\nfiles_dict = {lab: fname for lab, fname in zip(labels, file_list)}\nfiles_dict\n\nph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'), \n 'DexDem': Ph_sel(Dex='Dem')}\nph_sel = ph_sel_map[ph_sel_name]\n\ndata_id, ph_sel_name", "Data load\nInitial loading of the data:", "d = loader.photon_hdf5(filename=files_dict[data_id])", "Laser alternation selection\nAt this point we have only the timestamps and the detector numbers:", "d.ph_times_t, d.det_t", "We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:", "d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)", "We should check if everithing is OK with an alternation histogram:", "plot_alternation_hist(d)", "If the plot looks good we can apply the parameters with:", "loader.alex_apply_period(d)", "Measurements infos\nAll the measurement data is in the d variable. We can print it:", "d", "Or check the measurements duration:", "d.time_max", "Compute background\nCompute the background using automatic threshold:", "d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)\n\ndplot(d, timetrace_bg)\n\nd.rate_m, d.rate_dd, d.rate_ad, d.rate_aa", "Burst search and selection", "bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)\nd.burst_search(**bs_kws)\n\nth1 = 30\nds = d.select_bursts(select_bursts.size, th1=30)\n\nbursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)\n .round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))\n\nbursts.head()\n\nburst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'\n .format(sample=data_id, th=th1, **bs_kws))\nburst_fname\n\nbursts.to_csv(burst_fname)\n\nassert d.dir_ex == 0\nassert d.leakage == 0\n\nprint(d.ph_sel)\ndplot(d, hist_fret);\n\n# if data_id in ['7d', '27d']:\n# ds = d.select_bursts(select_bursts.size, th1=20)\n# else:\n# ds = d.select_bursts(select_bursts.size, th1=30)\n\nds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)\n\nn_bursts_all = ds.num_bursts[0]\n\ndef select_and_plot_ES(fret_sel, do_sel):\n ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)\n ds_do = ds.select_bursts(select_bursts.ES, **do_sel)\n bpl.plot_ES_selection(ax, **fret_sel)\n bpl.plot_ES_selection(ax, **do_sel) \n return ds_fret, ds_do\n\nax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)\n\nif data_id == '7d':\n fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)\n do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) \n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n \nelif data_id == '12d':\n fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)\n 
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '17d':\n fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)\n do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '22d':\n fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nelif data_id == '27d':\n fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nn_bursts_do = ds_do.num_bursts[0]\nn_bursts_fret = ds_fret.num_bursts[0]\n\nn_bursts_do, n_bursts_fret\n\nd_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)\nprint ('D-only fraction:', d_only_frac)\n\ndplot(ds_fret, hist2d_alex, scatter_alpha=0.1);\n\ndplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);", "Donor Leakage fit\nHalf-Sample Mode\nFit peak usng the mode computed with the half-sample algorithm (Bickel 2005).", "def hsm_mode(s):\n \"\"\"\n Half-sample mode (HSM) estimator of `s`.\n\n `s` is a sample from a continuous distribution with a single peak.\n \n Reference:\n Bickel, Fruehwirth (2005). arXiv:math/0505419\n \"\"\"\n s = memoryview(np.sort(s))\n i1 = 0\n i2 = len(s)\n\n while i2 - i1 > 3:\n n = (i2 - i1) // 2\n w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]\n i1 = w.index(min(w)) + i1\n i2 = i1 + n\n\n if i2 - i1 == 3:\n if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:\n i2 -= 1\n elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:\n i1 += 1\n else:\n i1 = i2 = i1 + 1\n\n return 0.5*(s[i1] + s[i2])\n\nE_pr_do_hsm = hsm_mode(ds_do.E[0])\nprint (\"%s: E_peak(HSM) = %.2f%%\" % (ds.ph_sel, E_pr_do_hsm*100))", "Gaussian Fit\nFit the histogram with a gaussian:", "E_fitter = bext.bursts_fitter(ds_do, weights=None)\nE_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))\n\nE_fitter.fit_histogram(model=mfit.factory_gaussian())\nE_fitter.params\n\nres = E_fitter.fit_res[0]\nres.params.pretty_print()\n\nE_pr_do_gauss = res.best_values['center']\nE_pr_do_gauss", "KDE maximum", "bandwidth = 0.03\nE_range_do = (-0.1, 0.15)\nE_ax = np.r_[-0.2:0.401:0.0002]\n\nE_fitter.calc_kde(bandwidth=bandwidth)\nE_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])\nE_pr_do_kde = E_fitter.kde_max_pos[0]\nE_pr_do_kde", "Leakage summary", "mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)\nplt.axvline(E_pr_do_hsm, color='m', label='HSM')\nplt.axvline(E_pr_do_gauss, color='k', label='Gauss')\nplt.axvline(E_pr_do_kde, color='r', label='KDE')\nplt.xlim(0, 0.3)\nplt.legend()\nprint('Gauss: %.2f%%\\n KDE: %.2f%%\\n HSM: %.2f%%' % \n (E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))", "Burst size distribution", "nt_th1 = 50\n\ndplot(ds_fret, hist_size, which='all', add_naa=False)\nxlim(-0, 250)\nplt.axvline(nt_th1)\n\nTh_nt = np.arange(35, 120)\nnt_th = np.zeros(Th_nt.size)\nfor i, th in enumerate(Th_nt):\n ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)\n nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th\n\nplt.figure()\nplot(Th_nt, nt_th)\nplt.axvline(nt_th1)\n\nnt_mean = nt_th[np.where(Th_nt == nt_th1)][0]\nnt_mean", "Fret fit\nMax position of the Kernel Density Estimation (KDE):", "E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')\nE_fitter = 
ds_fret.E_fitter\n\nE_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nE_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))\n\nE_fitter.fit_res[0].params.pretty_print()\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(E_fitter, ax=ax[0])\nmfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))\ndisplay(E_fitter.params*100)", "Weighted mean of $E$ of each burst:", "ds_fret.fit_E_m(weights='size')", "Gaussian fit (no weights):", "ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)", "Gaussian fit (using burst size as weights):", "ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')\n\nE_kde_w = E_fitter.kde_max_pos[0]\nE_gauss_w = E_fitter.params.loc[0, 'center']\nE_gauss_w_sig = E_fitter.params.loc[0, 'sigma']\nE_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))\nE_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr\nE_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr", "Stoichiometry fit\nMax position of the Kernel Density Estimation (KDE):", "S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)\nS_fitter = ds_fret.S_fitter\n\nS_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nS_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(S_fitter, ax=ax[0])\nmfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))\ndisplay(S_fitter.params*100)\n\nS_kde = S_fitter.kde_max_pos[0]\nS_gauss = S_fitter.params.loc[0, 'center']\nS_gauss_sig = S_fitter.params.loc[0, 'sigma']\nS_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))\nS_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr\nS_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr", "The Maximum likelihood fit for a Gaussian population is the mean:", "S = ds_fret.S[0]\nS_ml_fit = (S.mean(), S.std())\nS_ml_fit", "Computing the weighted mean and weighted standard deviation we get:", "weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)\nS_mean = np.dot(weights, S)/weights.sum()\nS_std_dev = np.sqrt(\n np.dot(weights, (S - S_mean)**2)/weights.sum())\nS_wmean_fit = [S_mean, S_std_dev]\nS_wmean_fit", "Save data to file", "sample = data_id", "The following string contains the list of variables to be saved. 
When saving, the order of the variables is preserved.", "variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '\n 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '\n 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '\n 'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\\n')", "This is just a trick to format the different variables:", "variables_csv = variables.replace(' ', ',')\nfmt_float = '{%s:.6f}'\nfmt_int = '{%s:d}'\nfmt_str = '{%s}'\nfmt_dict = {**{'sample': fmt_str}, \n **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}\nvar_dict = {name: eval(name) for name in variables.split()}\nvar_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\\n'\ndata_str = var_fmt.format(**var_dict)\n\nprint(variables_csv)\nprint(data_str)\n\n# NOTE: The file name should be the notebook name but with .csv extension\nwith open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:\n f.seek(0, 2)\n if f.tell() == 0:\n f.write(variables_csv)\n f.write(data_str)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
boffi/boffi.github.io
dati_2017/hw03/01.ipynb
mit
[ "Initialization\nNotebook stuff", "from IPython.display import display, Latex, HTML\ndisplay(HTML(open('01.css').read()))", "Numpy and Scipy", "import numpy as np\nfrom numpy import array, cos, diag, eye, linspace, pi\nfrom numpy import poly1d, sign, sin, sqrt, where, zeros\nfrom scipy.linalg import eigh, inv, det", "Matplotlib", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-paper')\nplt.rcParams['figure.dpi'] = 115\nplt.rcParams['figure.figsize'] = (7.5, 2.5)\nplt.rcParams['axes.grid'] = True", "Miscellaneous definitions\nIn the following ld and pmat are used to display mathematical formulas generated by the program, rounder ensures that a floating point number close to an integer will be rounded correctly when formatted as an integer, p is a shorthand to calling poly1d that is long and requires a single argument, vw computes the virtual work done by moments m for the curvatures c, when the lengths of the beams are l and eventually\np0_p1 given an array of values p returns first p[0], p[1] then p[1], p[2] then...", "def ld(*items): \n display(Latex('$$' + ' '.join(items) + '$$'))\ndef pmat(mat, env='bmatrix', fmt='%+f'):\n opener = '\\\\begin{'+env+'}\\n '\n closer = '\\n\\\\end{'+env+'}'\n formatted = '\\\\\\\\\\n '.join('&'.join(fmt%elt for elt in row) for row in mat)\n return opener+formatted+closer\ndef rounder(mat): return mat+0.01*sign(mat)\ndef p(*l): return poly1d(l)\ndef vw(emme, chi, L):\n return sum(((m*c).integ()(l)-(m*c).integ()(0)) for (m, c, l) in zip(emme, chi, L))\ndef p0_p1(p):\n from itertools import tee\n a, b = tee(p)\n next(b, None)\n return zip(a, b)", "3 DOF System\nInput motion\nWe need the imposed displacement, the imposed velocity (an intermediate result) and the imposed acceleration. It is convenient to express these quantities in terms of an adimensional time coordinate $a = \\omega_0 t$,\n\\begin{align}\n u &= \\frac{4/3\\omega_0 t - \\sin(4/3\\omega_0 t)}{2\\pi}\n = \\frac{\\lambda_0 a- \\sin(\\lambda_0 a)}{2\\pi},\\\n \\dot{u} &= \\frac{4}{3}\\omega_0 \\frac{1-\\cos(4/3\\omega_0t)}{2\\pi}\n = \\lambda_0 \\omega_0 \\frac{1-\\cos(\\lambda_0 a)}{2\\pi},\\\n \\ddot{u} &= \\frac{16}{9}\\omega_0^2 \\frac{\\sin(4/3\\omega_0t)}{2\\pi}\n = \\lambda_0^2\\omega_0^2 \\frac{\\sin(\\lambda_0 a)}{2\\pi},\n\\end{align}\nwith $\\lambda_0=4/3$.\nThe equations above are valid in the interval \n$$ 0 \\le t \\le \\frac{2\\pi}{4/3 \\omega_0} \\rightarrow\n 0 \\le a \\le \\frac{3\\pi}2 $$\n(we have multiplied all terms by $\\omega_0$ and simplified the last term).\nFollowing a similar reasoning, the plotting interval is equal to $0\\le a\\le2\\pi$.", "l0 = 4/3\n# define a function to get back the time array and the 3 dependent vars\ndef a_uA_vA_aA(t0, t1, npoints):\n a = linspace(t0, t1, npoints)\n uA = where(a<3*pi/2, (l0*a-sin(l0*a))/2/pi, 1)\n vA = where(a<3*pi/2, (1-cos(l0*a))/2/pi, 0)\n aA = where(a<3*pi/2, 16*sin(l0*a)/18/pi, 0)\n return a, uA, vA, aA\n# and use it\na, uA, vA, aA = a_uA_vA_aA(0, 2*pi, 501)", "The plots", "\nplt.plot(a/pi, uA)\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.ylabel(r'$u_A/\\delta$')\nplt.title('Imposed support motion');\n\nplt.plot(a/pi, vA)\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.ylabel(r'$\\dot u_A/\\delta\\omega_0$')\nplt.title('Imposed support velocity');\n\nplt.plot(a/pi, aA)\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.ylabel(r'$\\ddot u_A/\\delta\\omega_0^2$')\nplt.title('Imposed support acceleration');", "Equation of Motion\nThe EoM expressed in adimensional coordinates and using adimensional structural matrices 
is\n$$ m\\omega_0^2\\hat{\\boldsymbol M} \\frac{\\partial^2\\boldsymbol x}{\\partial a^2}\n + \\frac{EJ}{L^3}\\hat{\\boldsymbol K}\\boldsymbol x =\n m \\hat{\\boldsymbol M} \\boldsymbol e \\omega_0^2 \\frac{\\partial^2 u_A}{\\partial a^2}\n$$ \nusing the dot notation to denote derivatives with respect to $a$, if we divide both members by $m\\omega_0^2$ we have\n$$ \\hat{\\boldsymbol M} \\ddot{\\boldsymbol x}\n + \\hat{\\boldsymbol K}\\boldsymbol x =\n \\hat{\\boldsymbol M} \\boldsymbol e \\ddot{u}_A.\n$$ \nWe must determine the influence vector $\\boldsymbol e$ and the adimensional structural matrices\nInfluence vector\nTo impose a horizontal displacement in $A$ we must remove one constraint, so that the structure has 1 DOF as a rigid system and the influence vector must be determined by a kinematic analysis.", "display(HTML(open('figures/trab1kin_conv.svg').read()))", "The left beam is constrained by a roller and by the right beam, the first requires that the Centre of Instantaneous Rotation (CIR) belongs to the vertical line in $A$, while the second requires that the CIR belongs to the line that connects the hinges\nof the right beam.\nThe angles of rotation are $\\theta_\\text{left} = u_A/L$ and $\\theta_\\text{right}\n= -2 u_A/L$ and eventually we have $x_1=x_2=x_3=2u_A$ and\n$$ \\boldsymbol e = \\begin{Bmatrix}2\\2\\2\\end{Bmatrix}.$$", "e = array((2.0, 2.0, 2.0))", "Structural Matrices", "display(HTML(open('figures/trab1_conv.svg').read()))", "Compute the 3x3 flexibility using the Principle of Virtual Displacements and the 3x3 stiffness using inversion, while the mass matrix is directly assembled with the understanding that the lumped mass on $x_1$ is $2m$.\nThe code uses a structure m where each of the three rows contains the \ncomputational represention (as polynomial coefficients) of the bending moments due to\na unit load applied in the position of each of the three degrees of freedom,\nin each row six groups of polynomial coefficients, one group for each of the six\nintervals of definition in which the structure has been subdivided (a possible seventh interval is omitted because the bending moment is always zero for every possible unit load).", "l = [1, 2, 2, 1, 1, 1]\nh = 0.5 ; t = 3*h\nm = [[p(2,0),p(h,0),p(h,1),p(h,0),p(h,h),p(1,0)],\n [p(2,0),p(1,0),p(0,2),p(1,0),p(1,1),p(2,0)],\n [p(2,0),p(h,0),p(h,1),p(h,0),p(t,h),p(2,0)]]\n\nF = array([[vw(emme, chi, l) for emme in m] for chi in m])\nK = inv(F)\nM = array(((2.0, 0.0, 0.0),\n (0.0, 1.0, 0.0),\n (0.0, 0.0, 1.0)))\niM = inv(M)\n\nld('\\\\boldsymbol F = \\\\frac{L^3}{12EJ}\\\\,', pmat(rounder(F*12), fmt='%+d'))\nld('\\\\boldsymbol K = \\\\frac{3 EJ}{1588L^3}\\\\,',\n pmat(rounder(K*1588/3), fmt='%+d'),\n '= \\\\frac{EJ}{L^3}\\\\;\\\\hat{\\\\boldsymbol K}.')\nld('\\\\boldsymbol M = m\\\\,', pmat(M, fmt='%d'),\n '= m\\\\;\\\\hat{\\\\boldsymbol M}.')", "The eigenvalues problem\nWe solve immediately the eigenvalue problem because when we know the shortest modal period of vibration it is possible to choose the integration time step $h$ to avoid numerical unstability issues with the linear acceleration algorithm.", "wn2, Psi = eigh(K, M)\nwn = sqrt(wn2)\nli = wn\nLambda2 = diag(wn2)\nLambda = diag(wn)\n# eigenvectors are normalized โ†’ M* is a unit matrix, as well as its inverse\nMstar, iMstar = eye(3), eye(3)\n\nld(r'\\boldsymbol\\Omega^2 = \\omega_0^2\\,', pmat(Lambda2),\n r'=\\omega_0^2\\,\\boldsymbol\\Lambda^2.')\nld(r'\\boldsymbol\\Omega=\\omega_0\\,', pmat(Lambda),\n r'=\\omega_0\\,\\boldsymbol\\Lambda.')\nld(r'\\boldsymbol 
T_\\text{n}=\\frac{2\\pi}{\\omega_0}\\,', pmat(inv(Lambda)),\n r'= t_0\\,\\boldsymbol\\Theta.')\nld(r'\\Psi=', pmat(Psi), '.')", "Numerical Integration\nThe shortest period is $T_3 = 2\\pi\\,0.562/\\omega_0 \\rightarrow A_3 = 1.124 \\pi$ hence to avoid unstability of the linear acceleration algorithm we shall use a non dimensional time step $h<0.55A_3\\approx0.6\\pi$. We can anticipate that the modal response associated with mode 2 is important ($\\lambda_2\\approx\\lambda_0$) so we choose an adimensional time step $h=A_2/20=2\\pi\\,0.760/20\\approx0.08\\pi$ that is much smaller than the maximum time step for which we have a stable behaviour.\nInitialization\nFirst a new, longer adimensional time vector and the corresponding support acceleration, then the efficace load vector (peff is an array with 2001 rows and 3 columns, each row corresponding to the force vector in a particular instant of time)", "nsppi = 200\na, _, _, aA = a_uA_vA_aA(0, 16*pi, nsppi*16+1)\npeff = (- M @ e) * aA[:,None]", "The constants that we need in the linear acceleration algorithm โ€” note that we have an undamped system or, in other words, $\\boldsymbol C = \\boldsymbol 0$", "h = pi/nsppi\nK_ = K + 6*M/h**2\nF_ = inv(K_)\ndp_v = 6*M/h\ndp_a = 3*M", "The integration loop\nFirst we initialize the containers where to save the new results with the initial values at $a=0$, next the loop on the values of the load at times $t_i$ and $t_{i+1}$ with $i=0,\\ldots,1999$.", "Xl, Vl = [zeros(3)], [zeros(3)]\nfor p0, p1 in p0_p1(peff):\n x0, v0 = Xl[-1], Vl[-1]\n a0 = iM @ (p0 -K@x0)\n dp = (p1-p0) + dp_a@a0 + dp_v@v0\n dx = F_@dp\n dv = 3*dx/h - 3*v0 - a0*h/2\n Xl.append(x0+dx), Vl.append(v0+dv)\nXl = array(Xl) ; Vl = array(Vl)", "Plotting", "for i, line in enumerate(plt.plot(a/pi, Xl), 1):\n line.set_label(r'$x_{%d}$'%i)\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.ylabel(r'$x_i/\\delta$')\nplt.title('Response โ€” numerical integration โ€” lin.acc.')\nplt.legend();", "Equation of Motion\nDenoting with $\\boldsymbol x$ the dynamic component of the displacements, with $\\boldsymbol x_\\text{tot} = \\boldsymbol x + \\boldsymbol x_\\text{stat} = \\boldsymbol x + \\boldsymbol e \\;u_\\mathcal{A}$ the equation of motion is (the independent variable being $a=\\omega_0t$)\n$$ \\hat{\\boldsymbol M} \\ddot{\\boldsymbol x} + \n \\hat{\\boldsymbol K} \\boldsymbol x = \n - \\hat{\\boldsymbol M} \\boldsymbol e \\ddot u_\\mathcal{A}. 
$$ \nUsing mass-normalized eigenvectors, with $\\boldsymbol x = \\delta\\boldsymbol\\Psi\\boldsymbol q$ we have\n$$ \\boldsymbol I \\ddot{\\boldsymbol q} + \n \\boldsymbol\\Lambda^2\\boldsymbol q =\n \\boldsymbol\\Psi^T\\hat{\\boldsymbol M} \\boldsymbol e \\frac{\\ddot u_A}{\\delta}.$$\nIt is $$\\frac{\\ddot u_A}{\\delta} = \\frac{1}{2\\pi}\\,\\lambda_0^2\\,\\sin(\\lambda_0a)$$\nand $$ \\ddot q_i + \\lambda_i^2 q_i =\n\\frac{\\Gamma_i}{2\\pi}\\,\\lambda_0^2\\,\\sin(\\lambda_0 a),\\qquad\\text{with }\n\\Gamma_i = -\\boldsymbol\\psi_i^T \\hat{\\boldsymbol M} \\boldsymbol e\\text{ and }\n\\lambda_0 = \\frac43.$$", "G = - Psi.T @ M @ e", "Substituting a particular integral $\\xi_i=C_i\\sin(\\lambda_0 a)$ in the\nmodal equation of motion we have\n$$(\\lambda^2_i-\\lambda^2_0)\\,C_i\\sin(\\lambda_0 a) =\n \\frac{\\Gamma_i}{2\\pi}\\,\\lambda_0^2\\,\\sin(\\lambda_0 a)$$\nand solving w/r to $C_i$ we have\n$$ C_i = \\frac{\\Gamma_i}{2\\pi}\\,\\frac{\\lambda_0^2}{\\lambda_i^2-\\lambda_0^2}$$", "C = G*l0**2/(li**2-l0**2)/2/pi", "The modal response, taking into account that we start from rest conditions, is\n$$ q_i = C_i\\left(\\sin(\\lambda_0 a) -\n \\frac{\\lambda_0}{\\lambda_i}\\,\\sin(\\lambda_i a)\\right)$$\n$$ \\dot q_i = \\lambda_0 C_i \\left(\n \\cos(\\lambda_0 a) - \\cos(\\lambda_i a) \\right).$$", "for n in range(3):\n i = n+1\n ld(r'q_%d=%+10f\\left(\\sin\\frac43a-%10f\\sin%1fa\\right)' % (i,C[n],l0/li[n],li[n]),\n r'\\qquad\\text{for }0 \\le a \\le \\frac32\\pi')", "Free vibration phase, $a\\ge 3\\pi/2 = a_1$\nWhen the forced phase end, the system is in free vibrations and we can determine the constants of integration requiring that the displacements and velocities of the free vibration equal the displacements and velocities of the forced response at $t=t_0$.\n\\begin{align}\n + (\\cos\\lambda_i a_1)\\, A_i + (\\sin\\lambda_i a_1)\\, B_i &= \n q_i(a_1) \\ \n - (\\sin\\lambda_i a_1)\\, A_i + (\\cos\\lambda_i a_1)\\, B_i &= \n \\frac{\\dot q_i(a_1)}{\\lambda_i}\n\\end{align}\nBecause the coefficients form an othogonal matrix,\n\\begin{align}\n A_i &= + (\\cos\\lambda_i a_1)\\, q_i(a_1)\n - (\\sin\\lambda_i a_1)\\, \\frac{\\dot q_i(a_1)}{\\lambda_i}\\\n B_i &= + (\\sin\\lambda_i a_1)\\, q_i(a_1) \n + (\\cos\\lambda_i a_1)\\, \\frac{\\dot q_i(a_1)}{\\lambda_i}.\n\\end{align}", "a1 = 3*pi/2\nq_a1 = C*(sin(l0*a1)-l0*sin(li*a1)/li)\nv_a1 = C*l0*(cos(l0*a1)-cos(li*a1))\n\nABs = []\nfor i in range(3):\n b = array((q_a1[i], v_a1[i]/li[i]))\n A = array(((+cos(li[i]*a1), -sin(li[i]*a1)), \n (+sin(li[i]*a1), +cos(li[i]*a1))))\n ABs.append(A@b)\nABs = array(ABs)", "Analytical expressions", "display(Latex(r'Modal responses for $a_1 \\le a$.'))\nfor n in range(3):\n i, l, A_, B_ = n+1, li[n], *ABs[n]\n display(Latex((r'$$q_{%d} = '+\n r'%+6.3f\\cos%6.3fa '+\n r'%+6.3f\\sin%6.3fa$$')%(i, A_, l, B_, l)))", "Stitching the two responses\nWe must evaluate numerically the analytical responses", "ac = a[:,None]\n\nq = where(ac<=a1,\n C*(sin(l0*ac)-l0*sin(li*ac)/li),\n ABs[:,0]*cos(li*ac) + ABs[:,1]*sin(li*ac))", "Plotting the Analytical Response\nFirst, we zoom around $a_1$ to verify the continuity of displacements and velocities", "# #### Plot zooming around a1\nlow, hi = int(0.8*a1*nsppi/pi), int(1.2*a1*nsppi/pi)\nfor i, line in enumerate(plt.plot(a[low:hi]/pi, q[low:hi]), 1):\n line.set_label('$q_{%d}$'%i)\nplt.title('Modal Responses, zoom on transition zone')\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.legend(loc='best')\nplt.show()", "next, the modal responses over the interval $0 \\le a \\le 16\\pi$", "# #### 
Plot in 0 ≤ a ≤ 16 pi\nfor i, line in enumerate(plt.plot(a/pi, q), 1):\n line.set_label('$q_{%d}$'%i)\nplt.title('Modal Responses')\nplt.xlabel(r'$\\omega_0 t/\\pi$')\nplt.legend(loc='best');\nplt.xticks()\nplt.show();", "Nodal responses", "x = q@Psi.T", "Why x = q@Psi.T rather than x = Psi@q? Because for different reasons (mostly, ease of use with the plotting libraries) we have all the response arrays organized in the shape of (Nsteps × 3). \nThat's equivalent to saying that q and x, the Python objects, are isomorphic to $\\boldsymbol q^T$ and $\\boldsymbol x^T$, and because it is $$\\boldsymbol x^T = (\\boldsymbol\\Psi \\boldsymbol q)^T = \\boldsymbol q^T \\boldsymbol \\Psi^T,$$\nin Python we write x = q@Psi.T.\nThat said, here are the plots of the nodal responses. Compare with the numerical solutions.", "for i, line in enumerate(plt.plot(a/pi, x), 1):\n line.set_label('$x_{%d}/\\delta$'%i)\nplt.title('Normalized Nodal Displacements - analytical solution')\nplt.xlabel(r'$\\omega_0 t / \\pi$')\nplt.legend(loc='best')\nplt.show();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MTgeophysics/mtpy
examples/workshop/Workshop Exercises Core.ipynb
gpl-3.0
[ "Introduction\nThis workbook contains some examples for reading, analysing and plotting processed MT data. It covers most of the steps available in MTPy. For more details on specific input parameters and other functionality, we recommend looking at the mtpy documentation, which can be found at: https://mtpy2.readthedocs.io/en/develop/.\nThis workbook is structured according to some of the key modules in MTPy: Core, Analysis, Imaging, and Modeling.\nGetting Started\nTo start with, you will need to make sure MTPy is installed and is working correctly. Please see the installation guide (https://github.com/MTgeophysics/mtpy/wiki/MTPy-installation-guide-for-Windows-10-and-Ubuntu-18.04) for details.\nBefore you begin these examples, we suggest you make a temporary folder (e.g. C:/tmp) to save all example outputs.\nUseful tricks and tips\nThis workbook exists as a Jupyter notebook and a pdf. If you are running the Jupyter notebook, you can run each of the cells, modifying the inputs to suit your requirements. Most of these examples have been written to be self contained.\nIn Jupyter, you can add the following line to the top of any cell and it will write the contents of that cell to a python script: %%writefile example.py\nYou can also select multiple cells and copy them to a new Jupyter notebook.\nMany of the examples below make use of the matplotlib colour maps. Please see https://matplotlib.org/examples/color/colormaps_reference.html for colour map options.\nCore\nThese first few examples cover some of the basic functions and tools that can be used to look at data contained in an edi file, plot it, and make changes (e.g. sample onto different frequencies).\nRead an edi file into an MT object", "# import required modules\nfrom mtpy.core.mt import MT\n\n# Define the path to your edi file\nedi_file = \"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi\"\n\n# Create an MT object\nmt_obj = MT(edi_file)", "The mt_obj contains all the data from the edi file, e.g. impedance, tipper, frequency as well as station information (lat/long). To look at any of these parameters you can type, for example:", "# To see the latitude and longitude\nprint(mt_obj.lat, mt_obj.lon)\n\n# To see the easting, northing, and elevation\nprint(mt_obj.east, mt_obj.north, mt_obj.elev)", "There are many other parameters you can look at in the mt_obj. Just type mt_obj.[TAB] to see what is available.\nIn the MT object are the Z and Tipper objects (mt_obj.Z; mt_obj.Tipper). 
These contain all information related to, respectively, the impedance tensor and the tipper.", "# for example, to see the frequency values represented in the impedance tensor:\nprint(mt_obj.Z.freq)\n\n# or to see the impedance tensor (first 4 elements)\nprint(mt_obj.Z.z[:4])\n\n# or the resistivity or phase (first 4 values)\nprint(mt_obj.Z.resistivity[:4])\nprint(mt_obj.Z.phase[:4])", "As with the MT object, you can explore the object by typing mt_obj.Z.[TAB] to see the available attributes.\nPlot an edi file\nIn this example we plot MT data from an edi file.", "# import required modules\nfrom mtpy.core.mt import MT\nimport os\n\n# Define the path to your edi file and save path\nedi_file = \"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi\"\nsavepath = r\"C:/tmp\"\n\n# Create an MT object\nmt_obj = MT(edi_file)\n\n# To plot the edi file we read in in Part 1 & save to file:\npt_obj = mt_obj.plot_mt_response(plot_num=1, # 1 = yx and xy; 2 = all 4 components\n # 3 = off diagonal + determinant\n plot_tipper = 'yri',\n plot_pt = 'y' # plot phase tensor 'y' or 'n'\n )\n#pt_obj.save_plot(os.path.join(savepath,\"Synth00.png\"), fig_dpi=400)", "Make some change to the data and save to a new file\nThis example demonstrates how to resample the data onto new frequency values and write to a new edi file. In the example below, you can either choose every second frequency or resample onto five periods per decade. \nTo do this we need to make a new Z object, and save it to a file.", "# import required modules\nfrom mtpy.core.mt import MT\nimport os\n\n# Define the path to your edi file and save path\nedi_file = r\"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi\"\nsavepath = r\"C:/tmp\"\n\n# Create an MT object\nmt_obj = MT(edi_file)\n\n# First, define a frequency array:\n# Every second frequency:\nnew_freq_list = mt_obj.Z.freq[::2] \n\n# OR 5 periods per decade from 10^-4 to 10^3 seconds\nfrom mtpy.utils.calculator import get_period_list\nnew_freq_list = 1./get_period_list(1e-4,1e3,5)\n\n# Create new Z and Tipper objects containing interpolated data\nnew_Z_obj, new_Tipper_obj = mt_obj.interpolate(new_freq_list)\n\n# Write a new edi file using the new data\nmt_obj.write_mt_file(\n save_dir=savepath, \n fn_basename='Synth00_5ppd', \n file_type='edi',\n new_Z_obj=new_Z_obj, # provide a z object to update the data\n new_Tipper_obj=new_Tipper_obj, # provide a tipper object\n longitude_format='LONG', # write longitudes as 'LONG' not โ€˜LONโ€™\n latlon_format='dd'# write as decimal degrees (any other input\n # will write as degrees:minutes:seconds\n )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bhermanmit/openmc
docs/source/examples/mg-mode-part-i.ipynb
mit
[ "This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D C5G7 benchmark model using the openmc.MGXSLibrary class to create the supporting data library on the fly.\nGenerate MGXS Library", "import os\n\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport numpy as np\n\nimport openmc\n\n%matplotlib inline", "We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark.\nThis notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy).\nNote: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section.", "# Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant)\ngroups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8))\n\nuo2_xsdata = openmc.XSdata('uo2', groups)\nuo2_xsdata.order = 0\n\n# When setting the data let the object know you are setting the data for a temperature of 294K.\nuo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,\n 3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.)\n\nuo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2,\n 3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.)\nuo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2,\n 1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.)\n\nuo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2,\n 4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.)\n\nuo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7,\n 0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.)", "We will now add the scattering matrix data. \nNote: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing Groups, Scattering Order] to best enable other scattering representations. 
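As a hypothetical aside (an added sketch, not taken from the benchmark documentation), the axis shuffle this implies can be checked with plain numpy shapes for a 7-group, order-0 library:\n```python\n# assumed shapes, for illustration only\nsm = np.zeros((1, 7, 7)) # [scattering order, incoming group, outgoing group]\nsm = np.rollaxis(sm, 0, 3) # shape becomes (7, 7, 1): [incoming, outgoing, order]\n```\n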
We will follow the more familiar approach in this notebook, and then use numpy's numpy.rollaxis function to change the ordering to what we need (scattering order on the inner dimension).", "# The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns\n# (i.e., below the diagonal is up-scattering).\nscatter_matrix = \\\n [[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]]\nscatter_matrix = np.array(scatter_matrix)\nscatter_matrix = np.rollaxis(scatter_matrix, 0, 3)\nuo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.)", "Now that the UO2 data has been created, we can move on to the remaining materials using the same process.\nHowever, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem.\nFor now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk.", "# Initialize the library\nmg_cross_sections_file = openmc.MGXSLibrary(groups)\n\n# Add the UO2 data to it\nmg_cross_sections_file.add_xsdata(uo2_xsdata)\n\n# And write to disk\nmg_cross_sections_file.export_to_hdf5('mgxs.h5')", "Generate 2-D C5G7 Problem Input Files\nTo build the actual 2-D model, we will first begin by creating the materials.xml file.\nFirst we need to define materials that will be used in the problem. In other notebooks, either openmc.Nuclides or openmc.Elements were created at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use openmc.Macroscopic objects to in-place of openmc.Nuclide or openmc.Element objects.\nopenmc.Macroscopic, unlike openmc.Nuclide and openmc.Element objects, do not need to be provided enough information to calculate number densities, as no number densities are needed.\nWhen assigning openmc.Macroscopic objects to openmc.Material objects, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). 
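For instance (a hypothetical perturbation, not used in this notebook), a 1% lower water density could be represented by scaling the macroscopic data, reusing the same calls that appear below:\n```python\n# assumed example: scale the 'water' macroscopic cross sections by 0.99\nwater_mod = openmc.Material(name='water_perturbed')\nwater_mod.set_density('macro', 0.99)\nwater_mod.add_macroscopic('water')\n```\n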
The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when an openmc.Macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required.\nAside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks.", "# For every cross section data set in the library, assign an openmc.Macroscopic object to a material\nmaterials = {}\nfor xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']:\n materials[xs] = openmc.Material(name=xs)\n materials[xs].set_density('macro', 1.)\n materials[xs].add_macroscopic(xs)", "Now we can go ahead and produce a materials.xml file for use by OpenMC", "# Instantiate a Materials collection, register all Materials, and export to XML\nmaterials_file = openmc.Materials(materials.values())\n\n# Set the location of the cross sections file to our pre-written set\nmaterials_file.cross_sections = 'c5g7.h5'\n\nmaterials_file.export_to_xml()", "Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file.\nWe will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.", "# Create the surface used for each pin\npin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf')\n\n# Create the cells which will be used to represent each pin type.\ncells = {}\nuniverses = {}\nfor material in materials.values():\n # Create the cell for the material inside the cladding\n cells[material.name] = openmc.Cell(name=material.name)\n # Assign the half-spaces to the cell\n cells[material.name].region = -pin_surf\n # Register the material with this cell\n cells[material.name].fill = material\n \n # Repeat the above for the material outside the cladding (i.e., the moderator)\n cell_name = material.name + '_moderator'\n cells[cell_name] = openmc.Cell(name=cell_name)\n cells[cell_name].region = +pin_surf\n cells[cell_name].fill = materials['water']\n \n # Finally add the two cells we just made to a Universe object\n universes[material.name] = openmc.Universe(name=material.name)\n universes[material.name].add_cells([cells[material.name], cells[cell_name]])", "The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types", "lattices = {}\n\n# Instantiate the UO2 Lattice\nlattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly')\nlattices['UO2 Assembly'].dimension = [17, 17]\nlattices['UO2 Assembly'].lower_left = [-10.71, -10.71]\nlattices['UO2 Assembly'].pitch = [1.26, 1.26]\nu = universes['uo2']\ng = universes['guide_tube']\nf = universes['fiss_chamber']\nlattices['UO2 Assembly'].universes = \\\n [[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],\n [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, g, u, u, u, 
u, u, u, u, u, u, g, u, u, u],\n [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]]\n \n# Create a containing cell and universe\ncells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly')\ncells['UO2 Assembly'].fill = lattices['UO2 Assembly']\nuniverses['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly')\nuniverses['UO2 Assembly'].add_cell(cells['UO2 Assembly'])\n\n# Instantiate the MOX Lattice\nlattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly')\nlattices['MOX Assembly'].dimension = [17, 17]\nlattices['MOX Assembly'].lower_left = [-10.71, -10.71]\nlattices['MOX Assembly'].pitch = [1.26, 1.26]\nm = universes['mox43']\nn = universes['mox7']\no = universes['mox87']\ng = universes['guide_tube']\nf = universes['fiss_chamber']\nlattices['MOX Assembly'].universes = \\\n [[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m],\n [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],\n [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],\n [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],\n [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],\n [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],\n [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],\n [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],\n [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],\n [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],\n [m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]]\n \n# Create a containing cell and universe\ncells['MOX Assembly'] = openmc.Cell(name='MOX Assembly')\ncells['MOX Assembly'].fill = lattices['MOX Assembly']\nuniverses['MOX Assembly'] = openmc.Universe(name='MOX Assembly')\nuniverses['MOX Assembly'].add_cell(cells['MOX Assembly'])\n \n# Instantiate the reflector Lattice\nlattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly')\nlattices['Reflector Assembly'].dimension = [1,1]\nlattices['Reflector Assembly'].lower_left = [-10.71, -10.71]\nlattices['Reflector Assembly'].pitch = [21.42, 21.42]\nlattices['Reflector Assembly'].universes = [[universes['water']]]\n\n# Create a containing cell and universe\ncells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly')\ncells['Reflector Assembly'].fill = lattices['Reflector Assembly']\nuniverses['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly')\nuniverses['Reflector Assembly'].add_cell(cells['Reflector Assembly'])", "Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined.\nAfter that we can create the final cell to contain the entire core.", "lattices['Core'] = openmc.RectLattice(name='3x3 core lattice')\nlattices['Core'].dimension= [3, 3]\nlattices['Core'].lower_left = [-32.13, -32.13]\nlattices['Core'].pitch = [21.42, 21.42]\nr = universes['Reflector Assembly']\nu = universes['UO2 Assembly']\nm = universes['MOX Assembly']\nlattices['Core'].universes = [[u, m, r],\n [m, u, r],\n [r, r, r]]\n\n# Create boundary planes to surround the geometry\nmin_x = openmc.XPlane(x0=-32.13, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum')\nmin_y = 
openmc.YPlane(y0=-32.13, boundary_type='vacuum')\nmax_y = openmc.YPlane(y0=+32.13, boundary_type='reflective')\n\n# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = lattices['Core']\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y\n\n# Create root Universe\nroot_universe = openmc.Universe(name='root universe', universe_id=0)\nroot_universe.add_cell(root_cell)", "Before we commit to the geometry, we should view it using the Python API's plotting capability", "root_universe.plot(center=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500),\n color_by='material')", "OK, it looks pretty good, let's go ahead and write the file", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry(root_universe)\n\n# Export to \"geometry.xml\"\ngeometry.export_to_xml()", "We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.", "tallies_file = openmc.Tallies()\n\n# Instantiate a tally Mesh\nmesh = openmc.Mesh()\nmesh.type = 'regular'\nmesh.dimension = [17 * 2, 17 * 2]\nmesh.lower_left = [-32.13, -10.71]\nmesh.upper_right = [+10.71, +32.13]\n\n# Instantiate tally Filter\nmesh_filter = openmc.MeshFilter(mesh)\n\n# Instantiate the Tally\ntally = openmc.Tally(name='mesh tally')\ntally.filters = [mesh_filter]\ntally.scores = ['fission']\n\n# Add tally to collection\ntallies_file.append(tally)\n\n# Export all tallies to a \"tallies.xml\" file\ntallies_file.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain.\nThis will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!", "# OpenMC simulation parameters\nbatches = 150\ninactive = 50\nparticles = 5000\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\n\n# Tell OpenMC this is a multi-group problem\nsettings_file.energy_mode = 'multi-group'\n\n# Set the verbosity to 6 so we dont see output for every batch\nsettings_file.verbosity = 6\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.source.Source(space=uniform_dist)\n\n# Tell OpenMC we want to run in eigenvalue mode\nsettings_file.run_mode = 'eigenvalue'\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.", "# Run OpenMC\nopenmc.run()", "Results Visualization\nNow that we have run the simulation, let's look at the fission rate and flux tallies that we tallied.", "# Load the last statepoint file and keff value\nsp = openmc.StatePoint('statepoint.' 
+ str(batches) + '.h5')\n\n# Get the OpenMC pin power tally data\nmesh_tally = sp.get_tally(name='mesh tally')\nfission_rates = mesh_tally.get_values(scores=['fission'])\n\n# Reshape array to 2D for plotting\nfission_rates.shape = mesh.dimension\n\n# Normalize to the average pin power\nfission_rates /= np.mean(fission_rates)\n\n# Force zeros to be NaNs so their values are not included when matplotlib calculates\n# the color scale\nfission_rates[fission_rates == 0.] = np.nan\n\n# Plot the pin powers and the fluxes\nplt.figure()\nplt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower')\nplt.colorbar()\nplt.title('Pin Powers')\nplt.show()\n", "There we have it! We have just successfully run the C5G7 benchmark model!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.18/_downloads/012b7ba30b03ebda4c3419b2e4f5161a/plot_ssp_projs_sensitivity_map.ipynb
bsd-3-clause
[ "%matplotlib inline", "Sensitivity map of SSP projections\nThis example shows the sources that have a forward field\nsimilar to the first SSP vector correcting for ECG.", "# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nfrom mne import read_forward_solution, read_proj, sensitivity_map\n\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nsubjects_dir = data_path + '/subjects'\nfname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\necg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'\n\nfwd = read_forward_solution(fname)\n\nprojs = read_proj(ecg_fname)\n# take only one projection per channel type\nprojs = projs[::2]\n\n# Compute sensitivity map\nssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')", "Show sensitivity map", "plt.hist(ssp_ecg_map.data.ravel())\nplt.show()\n\nargs = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7,\n hemi='rh', subjects_dir=subjects_dir)\nssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)" ]
[ "code", "markdown", "code", "markdown", "code" ]
Stanford-BIS/syde556
SYDE 556 Lecture 2 Representation.ipynb
gpl-2.0
[ "SYDE 556/750: Simulating Neurobiological Systems\nAccompanying Readings: Chapter 2\nNEF Principle 1 - Representation\n\nActivity of neurons change over time\n\n<img src=files/lecture2/spikes.jpg width=800px>\n\nThis probably means something\nSometimes it seems pretty clear what it means", "from IPython.display import YouTubeVideo\nYouTubeVideo('KE952yueVLA', width=720, height=400, loop=1, autoplay=0)\n\nfrom IPython.display import YouTubeVideo\nYouTubeVideo('lfNVv0A8QvI', width=720, height=400, loop=1, autoplay=0) ", "Some sort of mapping between neural activity and a state in the world\n\nmy location\nhead tilt\nimage\nremembered location\n\n\n\nIntuitively, we call this \"representation\"\n\nIn neuroscience, people talk about the 'neural code'\nTo formalize this notion, the NEF uses information theory (or coding theory)\n\n\n\nRepresentation formalism\n\nValue being represented: $x$\nNeural activity: $a$\nNeuron index: $i$\n\nEncoding and decoding\n\nHave to define both to define a code\nLossless code (e.g. Morse Code):\nencoding: $a = f(x)$\ndecodng: $x = f^{-1}(a)$\n\n\nLossy code:\nencoding: $a = f(x)$\ndecoding: $\\hat{x} = g(a) \\approx x$\n\n\n\nDistributed representation\n\nNot just one neuron per $x$ value (or per $x$)\nMany different $a$ values for a single $x$\n\n\nEncoding: $a_i = f_i(x)$\nDecoding: $\\hat{x} = g(a_0, a_1, a_2, a_3, ...)$\n\nExample: binary representation\nEncoding (nonlinear):\n$$\na_i = \\begin{cases}\n 1 &\\mbox{if } x \\mod {2^{i}} > 2^{i-1} \\ \n 0 &\\mbox{otherwise} \n \\end{cases}\n$$\nDecoding (linear):\n$$\n\\hat{x} = \\sum_i a_i 2^{i-1}\n$$\n\nSuppose: $x = 13$\nEncoding: \n$a_1 = 1$, $a_2 = 0$, $a_3 = 1$, $a_4 = 1$\nDecoding:\n$\\hat{x} = 11+02+14+18 = 13$\nLinear decoding\n\n\nWrite decoder as $\\hat{x} = \\sum_ia_i d_i$\n\n\nLinear decoding is nice and simple\n\nWorks fine with non-linear encoding (!)\n\n\n\nThe NEF uses linear decoding, but what about the encoding?\n\n\nNeuron encoding\n$a_i = f_i(x)$ \n\nWhat do we know about neurons?\n\n<img src=files/lecture1/NeuronStructure.jpg>\n\n\nFiring rate goes up as total input current goes up\n\n$a_i = G_i(J)$\n\n\n\nWhat is $G_i$?\n\ndepends on how detailed a neuron model we want.", "from IPython.display import YouTubeVideo\nYouTubeVideo('hxdPdKbqm_I', width=720, height=400, loop=1, autoplay=0) ", "Rectified Linear Neuron", "# Rectified linear neuron\n%pylab inline\nimport numpy\nimport nengo\n\nn = nengo.neurons.RectifiedLinear()\n\nJ = numpy.linspace(-1,1,100)\n\nplot(J, n.rates(J, gain=10, bias=-5))\nxlabel('J (current)')\nylabel('$a$ (Hz)');", "Leaky integrate-and-fire neuron\n$ a = {1 \\over {\\tau_{ref}-\\tau_{RC}ln(1-{1 \\over J})}}$", "#assume this has been run\n#%pylab inline \n\n# Leaky integrate and fire\nimport numpy\nimport nengo\n\nn = nengo.neurons.LIFRate(tau_rc=0.02, tau_ref=0.002) #n is a Nengo LIF neuron, these are defaults\n\nJ = numpy.linspace(-1,10,100)\n\nplot(J, n.rates(J, gain=1, bias=-3)) \nxlabel('J (current)')\nylabel('$a$ (Hz)');", "Response functions\n\nThese are called \"response functions\"\nHow much neural firing changes with change in current\nSimilar for many classes of cells (e.g. pyramidal cells - most of cortex)\nThis is the $G_i$ function in the NEF: it can be pretty much anything\n\n\n\nTuning Curves\n\n\nNeurons seem to be sensitive to particular values of $x$\n\nHow are neurons 'tuned' to a representation? 
or...\n\n\n\nWhat's the mapping between $x$ and $a$?\n\nRecall 'place cells', and 'edge detectors'\n\n\n\nSometimes they are fairly straight forward:\n\n\n<img src=files/lecture2/tuning_curve_auditory.gif>\n\nBut not often:\n\n<img src=files/lecture2/tuning_curve.jpg>\n<img src=files/lecture2/orientation_tuning.png>\n\nIs there a general form?\n\nTuning curves (cont.)\n\nThe NEF suggests that there is...\nSomething generic and simple\nThat covers all the above cases (and more)\n\n\nLet's start with the simpler case:\n\n<img src=files/lecture2/tuning_curve_auditory.gif>\n\n\nNote that the experimenters are graphing $a$, as a function of $x$\n\n$x$ is much easier to measure than $J$\nSo, there are two mappings of interest:\n$x$->$J$\n$J$->$a$ (response function)\n\n\nTogether these give the tuning curve\n\n\n\n$x$ is the volume of the sound in this case\n\n\nAny ideas?", "#assume this has been run\n#%pylab inline \nimport numpy\nimport nengo\n\nn = nengo.neurons.LIFRate() #n is a Nengo LIF neuron\n\nx = numpy.linspace(-100,0,100)\n\nplot(x, n.rates(x, gain=1, bias=50), 'b') # x*1+50\nplot(x, n.rates(x, gain=0.1, bias=10), 'r') # x*0.1+10\nplot(x, n.rates(x, gain=0.5, bias=5), 'g') # x*0.05+5\nplot(x, n.rates(x, gain=0.1, bias=4), 'c') #x*0.1+4))\n\nxlabel('x')\nylabel('a');", "For mapping #1, the NEF uses a linear map:\n$ J = \\alpha x + J^{bias} $\n\nBut what about type (c) in this graph?\n\n<img src=files/lecture2/tuning_curve.jpg>\n\nEasy enough:\n\n$ J = - \\alpha x + J^{bias} $\n\nBut what about type(b)? Or these ones?\n\n<img src=files/lecture2/orientation_tuning.png>\n\nThere's usually some $x$ which gives a maximum firing rate\n...and thus a maximum $J$\n\n\nFiring rate (and $J$) decrease as you get farther from the preferred $x$ value\nSo something like $J = \\alpha [sim(x, x_{pref})] + J^{bias}$\n\n\nWhat sort of similarity measure? \nLet's think about $x$ for a moment\n$x$ can be anything... scalar, vector, etc.\nDoes thinking of it as a vector help?\n\n\n\nThe Encoding Equation (i.e. 
Tuning Curves)\n\nHere is the general form we use for everything (it has both 'mappings' in it)\n$a_i = G_i[\\alpha_i x \\cdot e_i + J_i^{bias}] $\n$\\alpha$ is a gain term (constrained to always be positive)\n$J^{bias}$ is a constant bias term\n$e$ is the encoder, or the preferred direction vector\n$G$ is the neuron model\n$i$ indexes the neuron\n\n\nTo simplify life, we always assume $e$ is of unit length\nOtherwise we could combine $\\alpha$ and $e$\n\n\nIn the 1D case, $e$ is either +1 or -1\nIn higher dimensions, what happens?", "#assume this has been run\n#%pylab inline \n\nimport numpy\nimport nengo\n\nn = nengo.neurons.LIFRate()\n\ne = numpy.array([1.0, 1.0]) \ne = e/numpy.linalg.norm(e)\n\na = numpy.linspace(-1,1,50)\nb = numpy.linspace(-1,1,50)\n\nX,Y = numpy.meshgrid(a, b)\n\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\nfig = figure()\nax = fig.add_subplot(1, 1, 1, projection='3d')\np = ax.plot_surface(X, Y, n.rates((X*e[0]+Y*e[1]), gain=1, bias=1.5), \n linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)", "But that's not how people normally plot it\nIt might not make sense to sample every possible x\nInstead they might do some subset\nFor example, what if we just plot the points around the unit circle?", "import nengo\nimport numpy\n\nn = nengo.neurons.LIFRate()\n\ntheta = numpy.linspace(0, 2*numpy.pi, 100)\nx = numpy.array([numpy.cos(theta), numpy.sin(theta)])\nplot(x[0],x[1])\naxis('equal')\n\ne = numpy.array([1.0, 1.0])\ne = e/numpy.linalg.norm(e)\n\nplot([0,e[0]], [0,e[1]],'r')\n\ngain = 1\nbias = 2.5\n\nfigure()\nplot(theta, n.rates(numpy.dot(x.T, e), gain=gain, bias=bias))\nplot([numpy.arctan2(e[1],e[0])],0,'rv')\nxlabel('angle')\nylabel('firing rate')\nxlim(0, 2*numpy.pi);", "That starts looking a lot more like the real data.\n\nNotation\n\n\nEncoding\n\n$a_i = G_i[\\alpha_i x \\cdot e_i + J^{bias}_i]$\n\n\n\nDecoding\n\n$\\hat{x} = \\sum_i a_i d_i$\n\n\n\nThe textbook uses $\\phi$ for $d$ and $\\tilde \\phi$ for $e$\n\nWe're switching to $d$ (for decoder) and $e$ (for encoder)\n\n\n\nDecoder\n\n\nBut where do we get $d_i$ from?\n\n$\\hat{x}=\\sum a_i d_i$\n\n\n\nFind the optimal $d_i$\n\nHow?\nMath\n\n\n\nSolving for $d$\n\nMinimize the average error over all $x$, i.e.,\n\n$ E = \\frac{1}{2}\\int_{-1}^1 (x-\\hat{x})^2 \\; dx $\n\nSubstitute for $\\hat{x}$:\n\n$ \n\\begin{align}\nE = \\frac{1}{2}\\int_{-1}^1 \\left(x-\\sum_i^N a_i d_i \\right)^2 \\; dx \n\\end{align}\n$\n\nTake the derivative with respect to $d_i$:\n\n$\n\\begin{align}\n{{\\partial E} \\over {\\partial d_i}} &= {1 \\over 2} \\int_{-1}^1 2 \\left[ x-\\sum_j a_j d_j \\right] (-a_i) \\; dx \\\n{{\\partial E} \\over {\\partial d_i}} &= - \\int_{-1}^1 a_i x \\; dx + \\int_{-1}^1 \\sum_j a_j d_j a_i \\; dx \n\\end{align}\n$\n\nAt the minimum (i.e. 
smallest error), $ {{\\partial E} \\over {\\partial d_i}} = 0$\n\n$\n\\begin{align}\n\\int_{-1}^1 a_i x \\; dx &= \\int_{-1}^1 \\sum_j(a_j d_j a_i) \\; dx \\\n\\int_{-1}^1 a_i x \\; dx &= \\sum_j \\left(\\int_{-1}^1 a_i a_j \\; dx\\right)d_j \n\\end{align}\n$\n\nThat's a system of $N$ equations and $N$ unknowns\nIn fact, we can rewrite this in matrix form\n\n$ \\Upsilon = \\Gamma d $\nwhere\n$ \n\\begin{align}\n\\Upsilon_i &= {1 \\over 2} \\int_{-1}^1 a_i x \\;dx\\\n\\Gamma_{ij} &= {1 \\over 2} \\int_{-1}^1 a_i a_j \\;dx \n\\end{align}\n$\n\nDo we have to do the integral over all $x$?\nApproximate the integral by sampling over $x$\n$S$ is the number of $x$ values to use ($S$ for samples) \n\n\n\n$ \n\\begin{align}\n\\sum_x a_i x / S &= \\sum_j \\left(\\sum_x a_i a_j /S \\right)d_j \\\n\\Upsilon &= \\Gamma d \n\\end{align}\n$\nwhere\n$\n\\begin{align}\n\\Upsilon_i &= \\sum_x a_i x / S \\\n\\Gamma_{ij} &= \\sum_x a_i a_j / S \n\\end{align}\n$\n\nNotice that if $A$ is the matrix of activities (the firing rate for each neuron for each $x$ value), then $\\Gamma=A^T A / S$ and $\\Upsilon=A^T x / S$\n\nSo given \n$ \\Upsilon = \\Gamma d $\nthen\n$ d = \\Gamma^{-1} \\Upsilon $\nor, equivalently\n$ d_i = \\sum_j \\Gamma^{-1}_{ij} \\Upsilon_j $", "import numpy\nimport nengo\nfrom nengo.utils.ensemble import tuning_curves\nfrom nengo.dists import Uniform\n\nN = 10\n\nmodel = nengo.Network(label='Neurons')\nwith model:\n neurons = nengo.Ensemble(N, dimensions=1, \n max_rates=Uniform(100,200)) #Defaults to LIF neurons, \n #with random gains and biases for\n #neurons between 100-200hz over -1,1\n\n connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders\n solver=nengo.solvers.LstsqL2(reg=0)) #reg=0 means ignore noise\n \nsim = nengo.Simulator(model)\n\nd = sim.data[connection].weights.T\n\nx, A = tuning_curves(neurons, sim)\n\nxhat = numpy.dot(A, d)\n\npyplot.plot(x, A)\nxlabel('x')\nylabel('firing rate (Hz)')\n\nfigure()\nplot(x, x)\nplot(x, xhat)\nxlabel('$x$')\nylabel('$\\hat{x}$')\nylim(-1, 1)\nxlim(-1, 1)\n\nfigure()\nplot(x, xhat-x)\nxlabel('$x$')\nylabel('$\\hat{x}-x$')\nxlim(-1, 1)\n\nprint 'RMSE', np.sqrt(np.average((x-xhat)**2))", "What happens to the error with more neurons?\n\nNoise\n\nNeurons aren't perfect\nAxonal jitter\nNeurotransmitter vesicle release failure (~80%)\nAmount of neurotransmitter per vesicle\nThermal noise\nIon channel noise (# of channels open and closed)\nNetwork effects\nMore information: http://icwww.epfl.ch/~gerstner/SPNM/node33.html\n\n\nHow do we include this noise as well?\nMake the neuron model more complicated\nSimple approach: add gaussian random noise to $a_i$\nSet noise standard deviation $\\sigma$ to 20% of maximum firing rate\nEach $a_i$ value for each $x$ value gets a different noise value added to it\n\n\n\n\nWhat effect does this have on decoding?", "#Have to run previous python cell first\nA_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)\n\nxhat = numpy.dot(A_noisy, d)\n\npyplot.plot(x, A_noisy)\nxlabel('x')\nylabel('firing rate (Hz)')\n\nfigure()\nplot(x, x)\nplot(x, xhat)\nxlabel('$x$')\nylabel('$\\hat{x}$')\nylim(-1, 1)\nxlim(-1, 1)\n\nprint 'RMSE', np.sqrt(np.average((x-xhat)**2))", "What if we just increase the number of neurons? 
Will it help?\n\nTaking noise into account\n\n\nInclude noise while solving for decoders\n\nIntroduce noise term $\\eta$\n\n\n\n$ \n\\begin{align}\n\\hat{x} &= \\sum_i(a_i+\\eta)d_i \\\nE &= {1 \\over 2} \\int_{-1}^1 (x-\\hat{x})^2 \\;dx d\\eta\\\n &= {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i(a_i+\\eta)d_i\\right)^2 \\;dx d\\eta\\\n &= {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i a_i d_i - \\sum \\eta d_i \\right)^2 \\;dx d\\eta\n\\end{align}\n$\n- Assume noise is gaussian, independent, mean zero, and has the same variance for each neuron\n - $\\eta = \\mathcal{N}(0, \\sigma)$\n - All the noise cross-terms disappear (independent)\n$ \n\\begin{align}\nE &= {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i a_i d_i \\right)^2 \\;dx + \\sum_{i,j} d_i d_j <\\eta_i \\eta_j>\\eta \\\n &= {1 \\over 2} \\int{-1}^1 \\left(x-\\sum_i a_i d_i \\right)^2 \\;dx + \\sum_{i} d_i d_i <\\eta_i \\eta_i>_\\eta\n\\end{align}\n$\n\nSince the average of $\\eta_i \\eta_i$ noise is its variance (since the mean is zero), $\\sigma^2$, we get\n\n$ \n\\begin{align}\nE = {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i a_i d_i \\right)^2 \\;dx + \\sigma^2 \\sum_i d_i^2 \n\\end{align}\n$\n\nThe practical result is that, when computing the decoder, we get\n\n$ \n\\begin{align}\n\\Gamma_{ij} = \\sum_x a_i a_j / S + \\sigma^2 \\delta_{ij}\n\\end{align}\n$\n\n\nWhere $\\delta_{ij}$ is the Kronecker delta: http://en.wikipedia.org/wiki/Kronecker_delta\n\n\nTo simplfy computing this using matrices, this can be written as $\\Gamma=A^T A /S + \\sigma^2 I$", "import numpy\nimport nengo\nfrom nengo.utils.ensemble import tuning_curves\nfrom nengo.dists import Uniform\n\nN = 100\n\nmodel = nengo.Network(label='Neurons')\nwith model:\n neurons = nengo.Ensemble(N, dimensions=1, \n max_rates=Uniform(100,200)) #Defaults to LIF neurons, \n #with random gains and biases for\n #neurons between 100-200hz over -1,1\n\n connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders\n solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW\n \nsim = nengo.Simulator(model)\n\nd = sim.data[connection].weights.T\n\nx, A = tuning_curves(neurons, sim)\n\nA_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)\n\nxhat = numpy.dot(A_noisy, d)\n\npyplot.plot(x, A_noisy)\nxlabel('x')\nylabel('firing rate (Hz)')\n\nfigure()\nplot(x, x)\nplot(x, xhat)\nxlabel('$x$')\nylabel('$\\hat{x}$')\nylim(-1, 1)\nxlim(-1, 1)\n\nprint 'RMSE', np.sqrt(np.average((x-xhat)**2))", "Number of neurons\n\nWhat happens to the error with more neurons?\nNote that the error has two parts:\n\n\n\n$ \n\\begin{align}\nE = {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i a_i d_i \\right)^2 \\;dx + \\sigma^2 \\sum_i d_i^2 \n\\end{align}\n$ \n\nError due to static distortion (i.e. the error introduced by the decoders themselves)\nThis is present regardless of noise\n\n\n\n$ \n\\begin{align}\nE_{distortion} = {1 \\over 2} \\int_{-1}^1 \\left(x-\\sum_i a_i d_i \\right)^2 dx \n\\end{align}\n$ \n\nError due to noise\n\n$\n\\begin{align}\nE_{noise} = \\sigma^2 \\sum_i d_i^2 \n\\end{align}\n$ \n\nWhat do these look like as number of neurons $N$ increases? 
\n\n<img src=\"files/lecture2/repn_noise.png\">\n- Noise error is proportional to $1/N$\n- Distortion error is proportional to $1/N^2$\n- Remember this error $E$ is defined as\n$ E = {1 \\over 2} \\int_{-1}^1 (x-\\hat{x})^2 dx $\n\n\nSo that's actually a squared error term\n\n\nAlso, as number of neurons is greater than 100 or so, the error is dominated by the noise term ($1/N$).\n\n\nExamples\n\nMethodology for building models with the Neural Engineering Framework (outlined in Chapter 1)\nSystem Description: Describe the system of interest in terms of the neural data, architecture, computations, representations, etc. (e.g. response functions, tuning curves, etc.)\nDesign Specification: Add additional performance constraints (e.g. bandwidth, noise, SNR, dynamic range, stability, etc.)\nImplement the model: Employ the NEF principles given the System Description and Design Specification\n\n\n\nExample 1: Horizontal Eye Control (1D)\nFrom http://www.nature.com/nrn/journal/v3/n12/full/nrn986.html\n<img src=\"files/lecture2/horizontal_eye.jpg\">\nThere are also neurons whose response goes the other way. All of the neurons are directly connected to the muscle controlling the horizontal direction of the eye, and that's the only thing that muscle does, so we're pretty sure this is what's being repreesnted.\n\n\nSystem Description\n\nWe've only done the first NEF principle, so that's all we'll worry about\nWhat is being represented?\n$x$ is the horizontal position\nTuning curves: extremely linear (high $\\tau_{RC}$, low $\\tau_{ref}$)\nsome have $e=1$, some have $e=-1$\nthese are often called \"on\" and \"off\" neurons, respectively\n\n\nFiring rates of up to 300Hz\n\n\n\nDesign Specification\n\nRange of values for $x$: -60 degrees to +60 degrees\nNormal levels of noise: $\\sigma$ is 20% of maximum firing rate\nthe book goes a bit higher, with $\\sigma^2=0.1$, meaning that $\\sigma = \\sqrt{0.1} \\approx 0.32$ times the maximum firing rate\n\n\n\n\n\nImplementation\n\nExamine the tuning curves\nThen use principle 1", "#%pylab inline\nimport numpy\nimport nengo\nfrom nengo.utils.ensemble import tuning_curves\nfrom nengo.dists import Uniform\n\nN = 40\ntau_rc = .2\ntau_ref = .001\nlif_model = nengo.LIFRate(tau_rc=tau_rc, tau_ref=tau_ref)\n\nmodel = nengo.Network(label='Neurons')\nwith model:\n neurons = nengo.Ensemble(N, dimensions=1, \n max_rates = Uniform(250,300),\n neuron_type = lif_model)\nsim = nengo.Simulator(model)\n\nx, A = tuning_curves(neurons, sim)\n\nplot(x, A)\nxlabel('x')\nylabel('firing rate (Hz)');", "How good is the representation?", "#Have to run previous code cell first\nnoise = 0.2\n\nwith model:\n connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders\n solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW\n \nsim = nengo.Simulator(model)\n\nd = sim.data[connection].weights.T\nx, A = tuning_curves(neurons, sim)\nA_noisy = A + numpy.random.normal(scale=noise*numpy.max(A), size=A.shape)\nxhat = numpy.dot(A_noisy, d)\n\nprint 'RMSE with %d neurons is %g'%(N, np.sqrt(np.average((x-xhat)**2)))\n\nfigure()\nplot(x, x)\nplot(x, xhat)\nxlabel('$x$')\nylabel('$\\hat{x}$')\nylim(-1, 1)\nxlim(-1, 1);", "Possible questions\nHow many neurons do we need for a particular level of accuracy?\nWhat happens with different firing rates?\nWhat happens with different distributions of x-intercepts?\n\n\n\nExample 2: Arm Movements (2D)\n\nGeorgopoulos et al., 1982. 
\"On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex.\"\n\n<img src=\"files/lecture2/armmovement1.jpg\">\n<img src=\"files/lecture2/armmovement2.png\">\n<img src=\"files/lecture2/armtuningcurve.png\">\n\n\nSystem Description\n\nWhat is being represented?\n$x$ is the hand position\nNote that this is different from what Georgopoulos talks about in this initial paper\nInitial paper only looks at those 8 positions, so it only talks about direction of movement (angle but not magnitude)\nMore recent work in the same area shows the cells do respond to both (Fu et al, 1993; Messier and Kalaska, 2000)\n\n\nBell-shaped tuning curves\nEncoders: randomly distributed around the unit circle\nFiring rates of up to 60Hz\n\n\n\nDesign Specification\n\nRange of values for $x$: Anywhere within a unit circle (or perhaps some other radius)\nNormal levels of noise: $\\sigma$ is 20% of maximum firing rate\nthe book goes a bit higher, with $\\sigma^2=0.1$, meaning that $\\sigma = \\sqrt{0.1} \\approx 0.32$ times the maximum \n\n\n\n\nImplementation\nExamine the tuning curves", "import numpy\nimport nengo\n\nn = nengo.neurons.LIFRate() \n\ntheta = numpy.linspace(-numpy.pi, numpy.pi, 100)\nx = numpy.array([numpy.sin(theta), numpy.cos(theta)])\n\ne = numpy.array([1.0, 0])\n\nplot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1, gain=0.2)) #bias 1->1.5\nxlabel('angle')\nylabel('firing rate')\nxlim(-180, 180)\nshow()", "Does it match empirical data?\n\nWhen tuning curves are plotted just considering $\\theta$, they are fit by $a_i=b_0+b_1cos(\\theta-\\theta_e)$ \nWhere $\\theta_e$ is the angle for the encoder $e_i$ and $b_0$ and $b_1$ are constants\n\n\n\nInterestingly, Georgopoulos suggests doing linear decoding:\n\n$\\hat{x}=\\sum_i a_i e_i$\nThis gives a somewhat decent estimate of the direction of movement (but a terrible estimate of magnitude)\n\n\n\nHigher-dimensional Tuning\n\nNote that there can be different ways of organizing the representation of a higher dimensional space\n\n<img src=\"files/lecture2/semicircular_canal.png\">\n\nHere, the neurons respond to angular velocity. This is a 3D vector. \nBut, instead of randomly distributing encoders around the 3D space, they are aligned with a major axis\nencoders are chosen from [1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]\n\n\nThis can affect on the representation\n\n<img src=\"files/lecture2/aligned_encoders.png\">\nAdministrative Notes\n\nAssignment 1 has been posted \nHTML\n\n\nDue: January 25th at midnight\nTotal marks: 20 (20% of final grade)\nLate penalty: 1 mark per day\n\nIt is recommended that you use a language with a matrix library and graphing capabilities. Two main suggestions are Python and MATLAB.\n\n\nTutoring Services\n\nIf you would like more personalized help for the assignments in this course, two of the PhD students in the lab (Xuan Choo and Travis DeWolf) offer tutoring services. They can be contacted at ctntutoring@gmail.com and they charge \\$20 per half hour (or \\$15 per person per half hour for groups)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Ttl/scikit-rf
doc/source/examples/metrology/One Port Tiered Calibration.ipynb
bsd-3-clause
[ "One Port Tiered Calibration\nIntro\nA one-port network analyzer can be used to measure a two-port device, provided that the device is reciprocal. This is accomplished by performing two calibrations, which is why its called a tiered calibration. \nFirst, the VNA is calibrated at the test-port like normal. This is called the first tier. Next, the device is connected to the test-port, and a calibration is performed at the far end of the device, the second tier. A diagram is shown below,", "from IPython.display import SVG\nSVG('oneport_tiered_calibration/images/boxDiagram.svg')", "This notebook will demonstrate how to use skrf to do a two-tiered one-port calibration. We'll use data that was taken to characterize a waveguide-to-CPW probe. So, for this specific example the diagram above looks like:", "SVG('oneport_tiered_calibration/images/probe.svg')", "Some Data\nThe data available is the folders 'tier1/' and 'tier2/'.", "ls oneport_tiered_calibration/", "(if you dont have the git repo for these examples, the data for this notebook can be found here)\nIn each folder you will find the two sub-folders, called 'ideals/' and 'measured/'. These contain touchstone files of the calibration standards ideal and measured responses, respectively.", "ls oneport_tiered_calibration/tier1/", "The first tier is at waveguide interface, and consisted of the following set of standards\n\nshort \ndelay short\nload\nradiating open (literally an open waveguide)", "ls oneport_tiered_calibration/tier1/measured/", "Creating Calibrations\nTier 1\nFirst defining the calibration for Tier 1", "from skrf.calibration import OnePort\nimport skrf as rf \n%matplotlib inline\nfrom pylab import * \nrf.stylely()\n\n\ntier1_ideals = rf.read_all_networks('oneport_tiered_calibration/tier1/ideals/')\ntier1_measured = rf.read_all_networks('oneport_tiered_calibration/tier1/measured/')\n \n\ntier1 = OnePort(measured = tier1_measured,\n ideals = tier1_ideals,\n name = 'tier1',\n sloppy_input=True)\ntier1", "Because we saved corresponding ideal and measured standards with identical names, the Calibration will automatically align our standards upon initialization. (More info on creating Calibration objects this can be found in the docs.)\nSimilarly for the second tier 2,\nTier 2", "tier2_ideals = rf.read_all_networks('oneport_tiered_calibration/tier2/ideals/')\ntier2_measured = rf.read_all_networks('oneport_tiered_calibration/tier2/measured/')\n \n\ntier2 = OnePort(measured = tier2_measured,\n ideals = tier2_ideals,\n name = 'tier2',\n sloppy_input=True)\ntier2", "Error Networks\nEach one-port Calibration contains a two-port error network, that is determined from the calculated error coefficients. The error network for tier1 models the VNA, while the error network for tier2 represents the VNA and the DUT. These can be visualized through the parameter 'error_ntwk'.\nFor tier 1,", "tier1.error_ntwk.plot_s_db()\ntitle('Tier 1 Error Network')", "Similarly for tier 2,", "tier2.error_ntwk.plot_s_db()\ntitle('Tier 2 Error Network')", "De-embedding the DUT\nAs previously stated, the error network for tier1 models the VNA, and the error network for tier2 represents the VNA+DUT. So to deterine the DUT's response, we cascade the inverse S-parameters of the VNA with the VNA+DUT. 
\n$$ DUT = VNA^{-1}\\cdot (VNA \\cdot DUT)$$\nIn skrf, this is done as follows:", "dut = tier1.error_ntwk.inv ** tier2.error_ntwk\ndut.name = 'probe'\ndut.plot_s_db()\ntitle('Probe S-parameters')\nylim(-60,10)", "You may want to save this to disk for future use:\ndut.write_touchstone()", "ls probe*" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zzsza/TIL
python/pyecharts.ipynb
mit
[ "pyecharts\n\nHTML ์—์„œ ํ™•์ธํ•˜๋ฉด ์ด์œ ๊ทธ๋ž˜ํ”„๋„ ๋ณด์ž…๋‹ˆ๋‹ค!!!\nBaidu์—์„œ ๋ฐ์ดํ„ฐ ์‹œ๊ฐํ™”๋ฅผ ์œ„ํ•ด ๋งŒ๋“  ์˜คํ”ˆ์†Œ์Šค์ธ Echarts์˜ ํŒŒ์ด์ฌ ๋ฒ„์ „\n๋‹ค์–‘ํ•œ ๊ทธ๋ž˜ํ”„ ์ œ๊ณต\n๊ณต์‹ ๋ฌธ์„œ\nDynamic\n๋‹จ, ์˜ต์…˜์˜ ๋‹จ์ถ”๊ฐ€ ์ค‘๊ตญ์–ด\n\n๊ทธ๋ž˜ํ”„๋ฅผ ๊ทธ๋ฆด ๋•Œ, echarts์™€ echartql์„ ๋กœ์ปฌ์—์„œ ์ฐพ์œผ๋ ค๊ณ  ํ•จ\n\n๋”ฐ๋ผ์„œ nbconvert๋ฅผ ์‚ฌ์šฉํ•ด HTML์œผ๋กœ ์ €์žฅํ•œ ํ›„, ์‰˜์—์„œ ์ˆ˜์ •\n\nsed -i \"\" \"s|/nbextensions/echarts/echarts-gl.min|https://cdn.jsdelivr.net/npm/echarts-gl@1.1.1/dist/echarts-gl.min|g; s|/nbextensions/echarts/echarts.min|https://cdnjs.cloudflare.com/ajax/libs/echarts/4.1.0/echarts.min|g\" ํŒŒ์ผ์ด๋ฆ„.ipynb", "import pyecharts\nimport pandas as pd\nimport numpy as np\n\nattr = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"]\nv1 = [2.0, 4.9, 7.0, 23.2, 25.6, 76.7, 135.6, 162.2, 32.6, 20.0, 6.4, 3.3]\nv2 = [2.6, 5.9, 9.0, 26.4, 28.7, 70.7, 175.6, 182.2, 48.7, 18.8, 6.0, 2.3]\nbar = pyecharts.Bar(\"Bar chart\", \"precipitation and evaporation one year\")\nbar.add(\"precipitation\", attr, v1, mark_line=[\"average\"], mark_point=[\"max\", \"min\"])\nbar.add(\"evaporation\", attr, v2, mark_line=[\"average\"], mark_point=[\"max\", \"min\"])\nbar.render()\nbar.height = 500\nbar.width = 800\n\nbar\n\nattr = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"]\nv1 = [2.0, 4.9, 7.0, 23.2, 25.6, 76.7, 135.6, 162.2, 32.6, 20.0, 6.4, 3.3]\nv2 = [2.6, 5.9, 9.0, 26.4, 28.7, 70.7, 175.6, 182.2, 48.7, 18.8, 6.0, 2.3]\nbar = pyecharts.Bar(\"Bar chart\", \"precipitation and evaporation one year\")\nbar.use_theme(\"dark\")\nbar.add(\"precipitation\", attr, v1, mark_line=[\"average\"], mark_point=[\"max\", \"min\"])\nbar.add(\"evaporation\", attr, v2, mark_line=[\"average\"], mark_point=[\"max\", \"min\"])\nbar.height = 500\nbar.width = 800\nbar", "In pandas", "title = \"bar chart2\"\nindex = pd.date_range(\"8/24/2018\", periods=6, freq=\"M\")\ndf1 = pd.DataFrame(np.random.randn(6), index=index)\ndf2 = pd.DataFrame(np.random.rand(6), index=index)\n\ndfvalue1 = [i[0] for i in df1.values]\ndfvalue2 = [i[0] for i in df2.values]\n_index = [i for i in df1.index.format()]\n\nbar = pyecharts.Bar(title, \"Profit and loss situation\")\nbar.add(\"profit\", _index, dfvalue1)\nbar.add(\"loss\", _index, dfvalue2)\nbar.height = 500\nbar.width = 800\nbar\n\nfrom pyecharts import Bar, Line, Overlap\n\nattr = ['A','B','C','D','E','F']\nv1 = [10, 20, 30, 40, 50, 60]\nv2 = [38, 28, 58, 48, 78, 68]\nbar = Bar(\"Line Bar\")\nbar.add(\"bar\", attr, v1)\nline = Line()\nline.add(\"line\", attr, v2)\n\noverlap = Overlap()\noverlap.add(bar)\noverlap.add(line)\n\noverlap\n\nfrom pyecharts import Pie\n\nattr = ['A','B','C','D','E','F']\nv1 = [10, 20, 30, 40, 50, 60]\nv2 = [38, 28, 58, 48, 78, 68]\npie = Pie(\"pie chart\", title_pos=\"center\", width=600)\npie.add(\"A\", attr, v1, center=[25, 50], is_random=True, radius=[30, 75], rosetype='radius')\npie.add(\"B\", attr, v2, center=[75, 50], is_randome=True, radius=[30, 75], rosetype='area', is_legend_show=False,\n is_label_show=True)\npie", "๊ฐ€๋กœ ๊ทธ๋ž˜ํ”„", "bar = Bar(\"๊ฐ€๋กœ ๊ทธ๋ž˜ํ”„\")\nbar.add(\"A\", attr, v1)\nbar.add(\"B\", attr, v2, is_convert=True)\nbar.width=800\nbar", "์Šฌ๋ผ์ด๋”", "import random\n\nattr = [\"{}th\".format(i) for i in range(30)]\nv1 = [random.randint(1, 30) for _ in range(30)]\nbar = Bar(\"Bar - datazoom - slider \")\nbar.add(\"\", attr, v1, is_label_show=True, is_datazoom_show=True)\n# 
bar.render()\nbar\n\ndays = [\"{}th\".format(i) for i in range(30)]\ndays_v1 = [random.randint(1, 30) for _ in range(30)]\nbar = Bar(\"Bar - datazoom - xaxis/yaxis\")\nbar.add(\n \"\",\n days,\n days_v1,\n is_datazoom_show=True,\n datazoom_type=\"slider\",\n datazoom_range=[10, 25],\n is_datazoom_extra_show=True,\n datazoom_extra_type=\"slider\",\n datazoom_extra_range=[10, 25],\n is_toolbox_show=False,\n)\n# bar.render()\nbar", "3D", "from pyecharts import Bar3D\n\nbar3d = Bar3D(\"3D Graph\", width=1200, height=600)\nx_axis = [\n \"12a\", \"1a\", \"2a\", \"3a\", \"4a\", \"5a\", \"6a\", \"7a\", \"8a\", \"9a\", \"10a\", \"11a\",\n \"12p\", \"1p\", \"2p\", \"3p\", \"4p\", \"5p\", \"6p\", \"7p\", \"8p\", \"9p\", \"10p\", \"11p\"\n ]\ny_axis = [\n \"Saturday\", \"Friday\", \"Thursday\", \"Wednesday\", \"Tuesday\", \"Monday\", \"Sunday\"\n ]\ndata = [\n [0, 0, 5], [0, 1, 1], [0, 2, 0], [0, 3, 0], [0, 4, 0], [0, 5, 0],\n [0, 6, 0], [0, 7, 0], [0, 8, 0], [0, 9, 0], [0, 10, 0], [0, 11, 2],\n [0, 12, 4], [0, 13, 1], [0, 14, 1], [0, 15, 3], [0, 16, 4], [0, 17, 6],\n [0, 18, 4], [0, 19, 4], [0, 20, 3], [0, 21, 3], [0, 22, 2], [0, 23, 5],\n [1, 0, 7], [1, 1, 0], [1, 2, 0], [1, 3, 0], [1, 4, 0], [1, 5, 0],\n [1, 6, 0], [1, 7, 0], [1, 8, 0], [1, 9, 0], [1, 10, 5], [1, 11, 2],\n [1, 12, 2], [1, 13, 6], [1, 14, 9], [1, 15, 11], [1, 16, 6], [1, 17, 7],\n [1, 18, 8], [1, 19, 12], [1, 20, 5], [1, 21, 5], [1, 22, 7], [1, 23, 2],\n [2, 0, 1], [2, 1, 1], [2, 2, 0], [2, 3, 0], [2, 4, 0], [2, 5, 0],\n [2, 6, 0], [2, 7, 0], [2, 8, 0], [2, 9, 0], [2, 10, 3], [2, 11, 2],\n [2, 12, 1], [2, 13, 9], [2, 14, 8], [2, 15, 10], [2, 16, 6], [2, 17, 5],\n [2, 18, 5], [2, 19, 5], [2, 20, 7], [2, 21, 4], [2, 22, 2], [2, 23, 4],\n [3, 0, 7], [3, 1, 3], [3, 2, 0], [3, 3, 0], [3, 4, 0], [3, 5, 0],\n [3, 6, 0], [3, 7, 0], [3, 8, 1], [3, 9, 0], [3, 10, 5], [3, 11, 4],\n [3, 12, 7], [3, 13, 14], [3, 14, 13], [3, 15, 12], [3, 16, 9], [3, 17, 5],\n [3, 18, 5], [3, 19, 10], [3, 20, 6], [3, 21, 4], [3, 22, 4], [3, 23, 1],\n [4, 0, 1], [4, 1, 3], [4, 2, 0], [4, 3, 0], [4, 4, 0], [4, 5, 1],\n [4, 6, 0], [4, 7, 0], [4, 8, 0], [4, 9, 2], [4, 10, 4], [4, 11, 4],\n [4, 12, 2], [4, 13, 4], [4, 14, 4], [4, 15, 14], [4, 16, 12], [4, 17, 1],\n [4, 18, 8], [4, 19, 5], [4, 20, 3], [4, 21, 7], [4, 22, 3], [4, 23, 0],\n [5, 0, 2], [5, 1, 1], [5, 2, 0], [5, 3, 3], [5, 4, 0], [5, 5, 0],\n [5, 6, 0], [5, 7, 0], [5, 8, 2], [5, 9, 0], [5, 10, 4], [5, 11, 1],\n [5, 12, 5], [5, 13, 10], [5, 14, 5], [5, 15, 7], [5, 16, 11], [5, 17, 6],\n [5, 18, 0], [5, 19, 5], [5, 20, 3], [5, 21, 4], [5, 22, 2], [5, 23, 0],\n [6, 0, 1], [6, 1, 0], [6, 2, 0], [6, 3, 0], [6, 4, 0], [6, 5, 0],\n [6, 6, 0], [6, 7, 0], [6, 8, 0], [6, 9, 0], [6, 10, 1], [6, 11, 0],\n [6, 12, 2], [6, 13, 1], [6, 14, 3], [6, 15, 4], [6, 16, 0], [6, 17, 0],\n [6, 18, 0], [6, 19, 0], [6, 20, 1], [6, 21, 2], [6, 22, 2], [6, 23, 6]\n ]\nrange_color = ['#313695', '#4575b4', '#74add1', '#abd9e9', '#e0f3f8', '#ffffbf',\n '#fee090', '#fdae61', '#f46d43', '#d73027', '#a50026']\nbar3d.add(\n \"\",\n x_axis,\n y_axis,\n [[d[1], d[0], d[2]] for d in data],\n is_visualmap=True,\n visual_range=[0, 20],\n visual_range_color=range_color,\n grid3d_width=200,\n grid3d_depth=80,\n)\nbar3d.width=700\nbar3d.height=500\n\nbar3d", "Boxplot", "from pyecharts import Boxplot\n\nboxplot = Boxplot(\"Box plot\")\nx_axis = ['expr1', 'expr2', 'expr3', 'expr4', 'expr5']\ny_axis = [\n [850, 740, 900, 1070, 930, 850, 950, 980, 980, 880,\n 1000, 980, 930, 650, 760, 810, 1000, 1000, 960, 960],\n [960, 940, 960, 940, 880, 800, 850, 
880, 900, 840,\n 830, 790, 810, 880, 880, 830, 800, 790, 760, 800],\n [880, 880, 880, 860, 720, 720, 620, 860, 970, 950,\n 880, 910, 850, 870, 840, 840, 850, 840, 840, 840],\n [890, 810, 810, 820, 800, 770, 760, 740, 750, 760,\n 910, 920, 890, 860, 880, 720, 840, 850, 850, 780],\n [890, 840, 780, 810, 760, 810, 790, 810, 820, 850,\n 870, 870, 810, 740, 810, 940, 950, 800, 810, 870]\n]\n_yaxis = boxplot.prepare_data(y_axis) \nboxplot.add(\"boxplot\", x_axis, _yaxis)\nboxplot", "ํผ๋„", "from pyecharts import Funnel\n\nattr = [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\"]\nvalue = [20, 40, 60, 80, 100, 120]\nfunnel = Funnel(\"ํผ๋„ ๊ทธ๋ž˜ํ”„\")\nfunnel.add(\n \"ํผ๋„\",\n attr,\n value,\n is_label_show=True,\n label_pos=\"inside\",\n label_text_color=\"#fff\",\n)\nfunnel.width=700\nfunnel.height=500\nfunnel", "Gauge", "from pyecharts import Gauge\n\ngauge = Gauge(\"Gauge Graph\")\ngauge.add(\"์ด์šฉ๋ฅ \", \"๊ฐ€์šด๋ฐ\", 66.66)\ngauge" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hetaodie/hetaodie.github.io
assets/media/uda-ml/deep/azjc/ๅท็งฏ็ฅž็ป็ฝ‘็ปœ็š„ไพ‹ๅญ/dog/dog_app_zh.ipynb
mit
[ "ๅท็งฏ็ฅž็ป็ฝ‘็ปœ๏ผˆConvolutional Neural Network, CNN๏ผ‰\n้กน็›ฎ๏ผšๅฎž็Žฐไธ€ไธช็‹—ๅ“็ง่ฏ†ๅˆซ็ฎ—ๆณ•App\nๅœจ่ฟ™ไธชnotebookๆ–‡ไปถไธญ๏ผŒๆœ‰ไบ›ๆจกๆฟไปฃ็ ๅทฒ็ปๆไพ›็ป™ไฝ ๏ผŒไฝ†ไฝ ่ฟ˜้œ€่ฆๅฎž็Žฐๆ›ดๅคš็š„ๅŠŸ่ƒฝๆฅๅฎŒๆˆ่ฟ™ไธช้กน็›ฎใ€‚้™ค้žๆœ‰ๆ˜Ž็กฎ่ฆๆฑ‚๏ผŒไฝ ๆ— ้กปไฟฎๆ”นไปปไฝ•ๅทฒ็ป™ๅ‡บ็š„ไปฃ็ ใ€‚ไปฅ'(็ปƒไน )'ๅผ€ๅง‹็š„ๆ ‡้ข˜่กจ็คบๆŽฅไธ‹ๆฅ็š„ไปฃ็ ้ƒจๅˆ†ไธญๆœ‰ไฝ ้œ€่ฆๅฎž็Žฐ็š„ๅŠŸ่ƒฝใ€‚่ฟ™ไบ›้ƒจๅˆ†้ƒฝ้…ๆœ‰่ฏฆ็ป†็š„ๆŒ‡ๅฏผ๏ผŒ้œ€่ฆๅฎž็Žฐ็š„้ƒจๅˆ†ไนŸไผšๅœจๆณจ้‡Šไธญไปฅ'TODO'ๆ ‡ๅ‡บใ€‚่ฏทไป”็ป†้˜…่ฏปๆ‰€ๆœ‰็š„ๆ็คบใ€‚\n้™คไบ†ๅฎž็Žฐไปฃ็ ๅค–๏ผŒไฝ ่ฟ˜้œ€่ฆๅ›ž็ญ”ไธ€ไบ›ไธŽ้กน็›ฎๅŠไปฃ็ ็›ธๅ…ณ็š„้—ฎ้ข˜ใ€‚ๆฏไธช้œ€่ฆๅ›ž็ญ”็š„้—ฎ้ข˜้ƒฝไผšไปฅ '้—ฎ้ข˜ X' ๆ ‡่ฎฐใ€‚่ฏทไป”็ป†้˜…่ฏปๆฏไธช้—ฎ้ข˜๏ผŒๅนถไธ”ๅœจ้—ฎ้ข˜ๅŽ็š„ 'ๅ›ž็ญ”' ้ƒจๅˆ†ๅ†™ๅ‡บๅฎŒๆ•ด็š„็ญ”ๆกˆใ€‚ๆˆ‘ไปฌๅฐ†ๆ นๆฎ ไฝ ๅฏน้—ฎ้ข˜็š„ๅ›ž็ญ” ๅ’Œ ๆ’ฐๅ†™ไปฃ็ ๅฎž็Žฐ็š„ๅŠŸ่ƒฝ ๆฅๅฏนไฝ ๆไบค็š„้กน็›ฎ่ฟ›่กŒ่ฏ„ๅˆ†ใ€‚\n\nๆ็คบ๏ผšCode ๅ’Œ Markdown ๅŒบๅŸŸๅฏ้€š่ฟ‡ Shift + Enter ๅฟซๆท้”ฎ่ฟ่กŒใ€‚ๆญคๅค–๏ผŒMarkdownๅฏไปฅ้€š่ฟ‡ๅŒๅ‡ป่ฟ›ๅ…ฅ็ผ–่พ‘ๆจกๅผใ€‚\n\n้กน็›ฎไธญๆ˜พ็คบไธบ_้€‰ๅš_็š„้ƒจๅˆ†ๅฏไปฅๅธฎๅŠฉไฝ ็š„้กน็›ฎ่„ฑ้ข–่€Œๅ‡บ๏ผŒ่€Œไธๆ˜ฏไป…ไป…่พพๅˆฐ้€š่ฟ‡็š„ๆœ€ไฝŽ่ฆๆฑ‚ใ€‚ๅฆ‚ๆžœไฝ ๅ†ณๅฎš่ฟฝๆฑ‚ๆ›ด้ซ˜็š„ๆŒ‘ๆˆ˜๏ผŒ่ฏทๅœจๆญค notebook ไธญๅฎŒๆˆ_้€‰ๅš_้ƒจๅˆ†็š„ไปฃ็ ใ€‚\n\n่ฎฉๆˆ‘ไปฌๅผ€ๅง‹ๅง\nๅœจ่ฟ™ไธชnotebookไธญ๏ผŒไฝ ๅฐ†่ฟˆๅ‡บ็ฌฌไธ€ๆญฅ๏ผŒๆฅๅผ€ๅ‘ๅฏไปฅไฝœไธบ็งปๅŠจ็ซฏๆˆ– Webๅบ”็”จ็จ‹ๅบไธ€้ƒจๅˆ†็š„็ฎ—ๆณ•ใ€‚ๅœจ่ฟ™ไธช้กน็›ฎ็š„ๆœ€ๅŽ๏ผŒไฝ ็š„็จ‹ๅบๅฐ†่ƒฝๅคŸๆŠŠ็”จๆˆทๆไพ›็š„ไปปไฝ•ไธ€ไธชๅ›พๅƒไฝœไธบ่พ“ๅ…ฅใ€‚ๅฆ‚ๆžœๅฏไปฅไปŽๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐไธ€ๅช็‹—๏ผŒๅฎƒไผš่พ“ๅ‡บๅฏน็‹—ๅ“็ง็š„้ข„ๆต‹ใ€‚ๅฆ‚ๆžœๅ›พๅƒไธญๆ˜ฏไธ€ไธชไบบ่„ธ๏ผŒๅฎƒไผš้ข„ๆต‹ไธ€ไธชไธŽๅ…ถๆœ€็›ธไผผ็š„็‹—็š„็ง็ฑปใ€‚ไธ‹้ข่ฟ™ๅผ ๅ›พๅฑ•็คบไบ†ๅฎŒๆˆ้กน็›ฎๅŽๅฏ่ƒฝ็š„่พ“ๅ‡บ็ป“ๆžœใ€‚๏ผˆโ€ฆโ€ฆๅฎž้™…ไธŠๆˆ‘ไปฌๅธŒๆœ›ๆฏไธชๅญฆ็”Ÿ็š„่พ“ๅ‡บ็ป“ๆžœไธ็›ธๅŒ๏ผ๏ผ‰\n\nๅœจ็Žฐๅฎžไธ–็•Œไธญ๏ผŒไฝ ้œ€่ฆๆ‹ผๅ‡‘ไธ€็ณปๅˆ—็š„ๆจกๅž‹ๆฅๅฎŒๆˆไธๅŒ็š„ไปปๅŠก๏ผ›ไธพไธชไพ‹ๅญ๏ผŒ็”จๆฅ้ข„ๆต‹็‹—็ง็ฑป็š„็ฎ—ๆณ•ไผšไธŽ้ข„ๆต‹ไบบ็ฑป็š„็ฎ—ๆณ•ไธๅŒใ€‚ๅœจๅš้กน็›ฎ็š„่ฟ‡็จ‹ไธญ๏ผŒไฝ ๅฏ่ƒฝไผš้‡ๅˆฐไธๅฐ‘ๅคฑ่ดฅ็š„้ข„ๆต‹๏ผŒๅ› ไธบๅนถไธๅญ˜ๅœจๅฎŒ็พŽ็š„็ฎ—ๆณ•ๅ’Œๆจกๅž‹ใ€‚ไฝ ๆœ€็ปˆๆไบค็š„ไธๅฎŒ็พŽ็š„่งฃๅ†ณๆ–นๆกˆไนŸไธ€ๅฎšไผš็ป™ไฝ ๅธฆๆฅไธ€ไธชๆœ‰่ถฃ็š„ๅญฆไน ็ป้ชŒ๏ผ\n้กน็›ฎๅ†…ๅฎน\nๆˆ‘ไปฌๅฐ†่ฟ™ไธชnotebookๅˆ†ไธบไธๅŒ็š„ๆญฅ้ชค๏ผŒไฝ ๅฏไปฅไฝฟ็”จไธ‹้ข็š„้“พๆŽฅๆฅๆต่งˆๆญคnotebookใ€‚\n\nStep 0: ๅฏผๅ…ฅๆ•ฐๆฎ้›†\nStep 1: ๆฃ€ๆต‹ไบบ่„ธ\nStep 2: ๆฃ€ๆต‹็‹—็‹—\nStep 3: ไปŽๅคดๅˆ›ๅปบไธ€ไธชCNNๆฅๅˆ†็ฑป็‹—ๅ“็ง\nStep 4: ไฝฟ็”จไธ€ไธชCNNๆฅๅŒบๅˆ†็‹—็š„ๅ“็ง(ไฝฟ็”จ่ฟ็งปๅญฆไน )\nStep 5: ๅปบ็ซ‹ไธ€ไธชCNNๆฅๅˆ†็ฑป็‹—็š„ๅ“็ง๏ผˆไฝฟ็”จ่ฟ็งปๅญฆไน ๏ผ‰\nStep 6: ๅฎŒๆˆไฝ ็š„็ฎ—ๆณ•\nStep 7: ๆต‹่ฏ•ไฝ ็š„็ฎ—ๆณ•\n\nๅœจ่ฏฅ้กน็›ฎไธญๅŒ…ๅซไบ†ๅฆ‚ไธ‹็š„้—ฎ้ข˜๏ผš\n\n้—ฎ้ข˜ 1\n้—ฎ้ข˜ 2\n้—ฎ้ข˜ 3\n้—ฎ้ข˜ 4\n้—ฎ้ข˜ 5\n้—ฎ้ข˜ 6\n้—ฎ้ข˜ 7\n้—ฎ้ข˜ 8\n้—ฎ้ข˜ 9\n้—ฎ้ข˜ 10\n้—ฎ้ข˜ 11\n\n\n<a id='step0'></a>\nๆญฅ้ชค 0: ๅฏผๅ…ฅๆ•ฐๆฎ้›†\nๅฏผๅ…ฅ็‹—ๆ•ฐๆฎ้›†\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ•ๅ…ƒ๏ผˆcell๏ผ‰ไธญ๏ผŒๆˆ‘ไปฌๅฏผๅ…ฅไบ†ไธ€ไธช็‹—ๅ›พๅƒ็š„ๆ•ฐๆฎ้›†ใ€‚ๆˆ‘ไปฌไฝฟ็”จ scikit-learn ๅบ“ไธญ็š„ load_files ๅ‡ฝๆ•ฐๆฅ่Žทๅ–ไธ€ไบ›ๅ˜้‡๏ผš\n- train_files, valid_files, test_files - ๅŒ…ๅซๅ›พๅƒ็š„ๆ–‡ไปถ่ทฏๅพ„็š„numpyๆ•ฐ็ป„\n- train_targets, valid_targets, test_targets - ๅŒ…ๅซ็‹ฌ็ƒญ็ผ–็ ๅˆ†็ฑปๆ ‡็ญพ็š„numpyๆ•ฐ็ป„\n- dog_names - ็”ฑๅญ—็ฌฆไธฒๆž„ๆˆ็š„ไธŽๆ ‡็ญพ็›ธๅฏนๅบ”็š„็‹—็š„็ง็ฑป", "from sklearn.datasets import load_files \nfrom keras.utils import np_utils\nimport numpy as np\nfrom glob import glob\n\n# define function to load train, test, and validation datasets\ndef load_dataset(path):\n data = load_files(path)\n dog_files = np.array(data['filenames'])\n dog_targets = np_utils.to_categorical(np.array(data['target']), 
133)\n return dog_files, dog_targets\n\n# load train, test, and validation datasets\ntrain_files, train_targets = load_dataset('/data/dog_images/train')\nvalid_files, valid_targets = load_dataset('/data/dog_images/valid')\ntest_files, test_targets = load_dataset('/data/dog_images/test')\n\n# load list of dog names\ndog_names = [item[20:-1] for item in sorted(glob(\"/data/dog_images/train/*/\"))]\n\n# print statistics about the dataset\nprint('There are %d total dog categories.' % len(dog_names))\nprint('There are %s total dog images.\\n' % len(np.hstack([train_files, valid_files, test_files])))\nprint('There are %d training dog images.' % len(train_files))\nprint('There are %d validation dog images.' % len(valid_files))\nprint('There are %d test dog images.'% len(test_files))", "ๅฏผๅ…ฅไบบ่„ธๆ•ฐๆฎ้›†\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ•ๅ…ƒไธญ๏ผŒๆˆ‘ไปฌๅฏผๅ…ฅไบบ่„ธๅ›พๅƒๆ•ฐๆฎ้›†๏ผŒๆ–‡ไปถๆ‰€ๅœจ่ทฏๅพ„ๅญ˜ๅ‚จๅœจๅไธบ human_files ็š„ numpy ๆ•ฐ็ป„ใ€‚", "import random\nrandom.seed(8675309)\n\n# ๅŠ ่ฝฝๆ‰“ไนฑๅŽ็š„ไบบ่„ธๆ•ฐๆฎ้›†็š„ๆ–‡ไปถๅ\nhuman_files = np.array(glob(\"/data/lfw/*/*\"))\nrandom.shuffle(human_files)\n\n# ๆ‰“ๅฐๆ•ฐๆฎ้›†็š„ๆ•ฐๆฎ้‡\nprint('There are %d total human images.' % len(human_files))", "<a id='step1'></a>\nๆญฅ้ชค1๏ผšๆฃ€ๆต‹ไบบ่„ธ\nๆˆ‘ไปฌๅฐ†ไฝฟ็”จ OpenCV ไธญ็š„ Haar feature-based cascade classifiers ๆฅๆฃ€ๆต‹ๅ›พๅƒไธญ็š„ไบบ่„ธใ€‚OpenCV ๆไพ›ไบ†ๅพˆๅคš้ข„่ฎญ็ปƒ็š„ไบบ่„ธๆฃ€ๆต‹ๆจกๅž‹๏ผŒๅฎƒไปฌไปฅXMLๆ–‡ไปถไฟๅญ˜ๅœจ githubใ€‚ๆˆ‘ไปฌๅทฒ็ปไธ‹่ฝฝไบ†ๅ…ถไธญไธ€ไธชๆฃ€ๆต‹ๆจกๅž‹๏ผŒๅนถไธ”ๆŠŠๅฎƒๅญ˜ๅ‚จๅœจ haarcascades ็š„็›ฎๅฝ•ไธญใ€‚\nๅœจๅฆ‚ไธ‹ไปฃ็ ๅ•ๅ…ƒไธญ๏ผŒๆˆ‘ไปฌๅฐ†ๆผ”็คบๅฆ‚ไฝ•ไฝฟ็”จ่ฟ™ไธชๆฃ€ๆต‹ๆจกๅž‹ๅœจๆ ทๆœฌๅ›พๅƒไธญๆ‰พๅˆฐไบบ่„ธใ€‚", "import cv2 \nimport matplotlib.pyplot as plt \n%matplotlib inline \n\n# ๆๅ–้ข„่ฎญ็ปƒ็š„ไบบ่„ธๆฃ€ๆต‹ๆจกๅž‹\nface_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')\n\n# ๅŠ ่ฝฝๅฝฉ่‰ฒ๏ผˆ้€š้“้กบๅบไธบBGR๏ผ‰ๅ›พๅƒ\nimg = cv2.imread(human_files[3])\n\n# ๅฐ†BGRๅ›พๅƒ่ฟ›่กŒ็ฐๅบฆๅค„็†\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n# ๅœจๅ›พๅƒไธญๆ‰พๅ‡บ่„ธ\nfaces = face_cascade.detectMultiScale(gray)\n\n# ๆ‰“ๅฐๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐ็š„่„ธ็š„ไธชๆ•ฐ\nprint('Number of faces detected:', len(faces))\n\n# ่Žทๅ–ๆฏไธ€ไธชๆ‰€ๆฃ€ๆต‹ๅˆฐ็š„่„ธ็š„่ฏ†ๅˆซๆก†\nfor (x,y,w,h) in faces:\n # ๅœจไบบ่„ธๅ›พๅƒไธญ็ป˜ๅˆถๅ‡บ่ฏ†ๅˆซๆก†\n cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)\n \n# ๅฐ†BGRๅ›พๅƒ่ฝฌๅ˜ไธบRGBๅ›พๅƒไปฅๆ‰“ๅฐ\ncv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n# ๅฑ•็คบๅซๆœ‰่ฏ†ๅˆซๆก†็š„ๅ›พๅƒ\nplt.imshow(cv_rgb)\nplt.show()", "ๅœจไฝฟ็”จไปปไฝ•ไธ€ไธชๆฃ€ๆต‹ๆจกๅž‹ไน‹ๅ‰๏ผŒๅฐ†ๅ›พๅƒ่ฝฌๆขไธบ็ฐๅบฆๅ›พๆ˜ฏๅธธ็”จ่ฟ‡็จ‹ใ€‚detectMultiScale ๅ‡ฝๆ•ฐไฝฟ็”จๅ‚จๅญ˜ๅœจ face_cascade ไธญ็š„็š„ๆ•ฐๆฎ๏ผŒๅฏน่พ“ๅ…ฅ็š„็ฐๅบฆๅ›พๅƒ่ฟ›่กŒๅˆ†็ฑปใ€‚\nๅœจไธŠๆ–น็š„ไปฃ็ ไธญ๏ผŒfaces ไปฅ numpy ๆ•ฐ็ป„็š„ๅฝขๅผ๏ผŒไฟๅญ˜ไบ†่ฏ†ๅˆซๅˆฐ็š„้ข้ƒจไฟกๆฏใ€‚ๅฎƒๅ…ถไธญๆฏไธ€่กŒ่กจ็คบไธ€ไธช่ขซๆฃ€ๆต‹ๅˆฐ็š„่„ธ๏ผŒ่ฏฅๆ•ฐๆฎๅŒ…ๆ‹ฌๅฆ‚ไธ‹ๅ››ไธชไฟกๆฏ๏ผšๅ‰ไธคไธชๅ…ƒ็ด  xใ€y ไปฃ่กจ่ฏ†ๅˆซๆก†ๅทฆไธŠ่ง’็š„ x ๅ’Œ y ๅๆ ‡๏ผˆๅ‚็…งไธŠๅ›พ๏ผŒๆณจๆ„ y ๅๆ ‡็š„ๆ–นๅ‘ๅ’Œๆˆ‘ไปฌ้ป˜่ฎค็š„ๆ–นๅ‘ไธๅŒ๏ผ‰๏ผ›ๅŽไธคไธชๅ…ƒ็ด ไปฃ่กจ่ฏ†ๅˆซๆก†ๅœจ x ๅ’Œ y ่ฝดไธคไธชๆ–นๅ‘ๅปถไผธ็š„้•ฟๅบฆ w ๅ’Œ dใ€‚ \nๅ†™ไธ€ไธชไบบ่„ธ่ฏ†ๅˆซๅ™จ\nๆˆ‘ไปฌๅฏไปฅๅฐ†่ฟ™ไธช็จ‹ๅบๅฐ่ฃ…ไธบไธ€ไธชๅ‡ฝๆ•ฐใ€‚่ฏฅๅ‡ฝๆ•ฐ็š„่พ“ๅ…ฅไธบไบบ่„ธๅ›พๅƒ็š„่ทฏๅพ„๏ผŒๅฝ“ๅ›พๅƒไธญๅŒ…ๅซไบบ่„ธๆ—ถ๏ผŒ่ฏฅๅ‡ฝๆ•ฐ่ฟ”ๅ›ž True๏ผŒๅไน‹่ฟ”ๅ›ž Falseใ€‚่ฏฅๅ‡ฝๆ•ฐๅฎšไน‰ๅฆ‚ไธ‹ๆ‰€็คบใ€‚", "# ๅฆ‚ๆžœimg_path่ทฏๅพ„่กจ็คบ็š„ๅ›พๅƒๆฃ€ๆต‹ๅˆฐไบ†่„ธ๏ผŒ่ฟ”ๅ›ž\"True\" \ndef face_detector(img_path):\n img = cv2.imread(img_path)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n faces = 
face_cascade.detectMultiScale(gray)\n return len(faces) > 0", "ใ€็ปƒไน ใ€‘ ่ฏ„ไผฐไบบ่„ธๆฃ€ๆต‹ๆจกๅž‹\n\n<a id='question1'></a>\n้—ฎ้ข˜ 1:\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ—ไธญ๏ผŒไฝฟ็”จ face_detector ๅ‡ฝๆ•ฐ๏ผŒ่ฎก็ฎ—๏ผš\n\nhuman_files ็š„ๅ‰100ๅผ ๅ›พๅƒไธญ๏ผŒ่ƒฝๅคŸๆฃ€ๆต‹ๅˆฐไบบ่„ธ็š„ๅ›พๅƒๅ ๆฏ”ๅคšๅฐ‘๏ผŸ\ndog_files ็š„ๅ‰100ๅผ ๅ›พๅƒไธญ๏ผŒ่ƒฝๅคŸๆฃ€ๆต‹ๅˆฐไบบ่„ธ็š„ๅ›พๅƒๅ ๆฏ”ๅคšๅฐ‘๏ผŸ\n\n็†ๆƒณๆƒ…ๅ†ตไธ‹๏ผŒไบบๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐไบบ่„ธ็š„ๆฆ‚็އๅบ”ๅฝ“ไธบ100%๏ผŒ่€Œ็‹—ๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐไบบ่„ธ็š„ๆฆ‚็އๅบ”่ฏฅไธบ0%ใ€‚ไฝ ไผšๅ‘็Žฐๆˆ‘ไปฌ็š„็ฎ—ๆณ•ๅนถ้žๅฎŒ็พŽ๏ผŒไฝ†็ป“ๆžœไป็„ถๆ˜ฏๅฏไปฅๆŽฅๅ—็š„ใ€‚ๆˆ‘ไปฌไปŽๆฏไธชๆ•ฐๆฎ้›†ไธญๆๅ–ๅ‰100ไธชๅ›พๅƒ็š„ๆ–‡ไปถ่ทฏๅพ„๏ผŒๅนถๅฐ†ๅฎƒไปฌๅญ˜ๅ‚จๅœจhuman_files_shortๅ’Œdog_files_shortไธญใ€‚", "human_files_short = human_files[:100]\ndog_files_short = train_files[:100]\n## ่ฏทไธ่ฆไฟฎๆ”นไธŠๆ–นไปฃ็ \n\n## TODO: ๅŸบไบŽhuman_files_shortๅ’Œdog_files_short\n## ไธญ็š„ๅ›พๅƒๆต‹่ฏ•face_detector็š„่กจ็Žฐ\nhuman_files_short_detect = 0\ndog_files_short_detect = 0\n\nfor i in range(100):\n if (face_detector(human_files_short[i])):\n human_files_short_detect += 1\n if (face_detector(dog_files_short[i])):\n dog_files_short_detect += 1\n\nprint(\"The percentage of detecting human faces in human files is:\", human_files_short_detect/human_files_short.size)\nprint(\"The percentage of detecting human faces in dog files is:\", dog_files_short_detect/dog_files_short.size)\n\n\n\n", "<a id='question2'></a>\n้—ฎ้ข˜ 2:\nๅฐฑ็ฎ—ๆณ•่€Œ่จ€๏ผŒ่ฏฅ็ฎ—ๆณ•ๆˆๅŠŸไธŽๅฆ็š„ๅ…ณ้”ฎๅœจไบŽ๏ผŒ็”จๆˆท่ƒฝๅฆๆไพ›ๅซๆœ‰ๆธ…ๆ™ฐ้ข้ƒจ็‰นๅพ็š„ไบบ่„ธๅ›พๅƒใ€‚\n้‚ฃไนˆไฝ ่ฎคไธบ๏ผŒ่ฟ™ๆ ท็š„่ฆๆฑ‚ๅœจๅฎž้™…ไฝฟ็”จไธญๅฏน็”จๆˆทๅˆ็†ๅ—๏ผŸๅฆ‚ๆžœไฝ ่ง‰ๅพ—ไธๅˆ็†๏ผŒไฝ ่ƒฝๅฆๆƒณๅˆฐไธ€ไธชๆ–นๆณ•๏ผŒๅณไฝฟๅ›พๅƒไธญๅนถๆฒกๆœ‰ๆธ…ๆ™ฐ็š„้ข้ƒจ็‰นๅพ๏ผŒไนŸ่ƒฝๅคŸๆฃ€ๆต‹ๅˆฐไบบ่„ธ๏ผŸ\nๅ›ž็ญ”:\nไธๅคชๅˆ็†๏ผŒๅ› ไธบๅ›พ็‰‡็š„ๆฅๆบไธๅŒ๏ผŒไธ่ƒฝไฟ่ฏๆ‰€ๆœ‰็š„ๅ›พ็‰‡็š„่„ธ้ƒจ้ƒฝๆ˜ฏๆธ…ๆ™ฐ็š„ใ€‚ ๅฆ‚ๆžœ่„ธ้ƒจ็‰นๅพไธๅคชๆธ…ๆ™ฐ๏ผŒๅบ”ๅฏนๅ›พ็‰‡่ฟ›่กŒๅ‰ๆœŸ็š„้ข„ๅค„็†ใ€‚\n\n<a id='Selection1'></a>\n้€‰ๅš๏ผš\nๆˆ‘ไปฌๅปบ่ฎฎๅœจไฝ ็š„็ฎ—ๆณ•ไธญไฝฟ็”จopencv็š„ไบบ่„ธๆฃ€ๆต‹ๆจกๅž‹ๅŽปๆฃ€ๆต‹ไบบ็ฑปๅ›พๅƒ๏ผŒไธ่ฟ‡ไฝ ๅฏไปฅ่‡ช็”ฑๅœฐๆŽข็ดขๅ…ถไป–็š„ๆ–นๆณ•๏ผŒๅฐคๅ…ถๆ˜ฏๅฐ่ฏ•ไฝฟ็”จๆทฑๅบฆๅญฆไน ๆฅ่งฃๅ†ณๅฎƒ:)ใ€‚่ฏท็”จไธ‹ๆ–น็š„ไปฃ็ ๅ•ๅ…ƒๆฅ่ฎพ่ฎกๅ’Œๆต‹่ฏ•ไฝ ็š„้ข้ƒจ็›‘ๆต‹็ฎ—ๆณ•ใ€‚ๅฆ‚ๆžœไฝ ๅ†ณๅฎšๅฎŒๆˆ่ฟ™ไธช_้€‰ๅš_ไปปๅŠก๏ผŒไฝ ้œ€่ฆๆŠฅๅ‘Š็ฎ—ๆณ•ๅœจๆฏไธ€ไธชๆ•ฐๆฎ้›†ไธŠ็š„่กจ็Žฐใ€‚", "## (้€‰ๅš) TODO: ๆŠฅๅ‘Šๅฆไธ€ไธช้ข้ƒจๆฃ€ๆต‹็ฎ—ๆณ•ๅœจLFWๆ•ฐๆฎ้›†ไธŠ็š„่กจ็Žฐ\n### ไฝ ๅฏไปฅ้šๆ„ไฝฟ็”จๆ‰€้œ€็š„ไปฃ็ ๅ•ๅ…ƒๆ•ฐ", "<a id='step2'></a>\nๆญฅ้ชค 2: ๆฃ€ๆต‹็‹—็‹—\nๅœจ่ฟ™ไธช้ƒจๅˆ†ไธญ๏ผŒๆˆ‘ไปฌไฝฟ็”จ้ข„่ฎญ็ปƒ็š„ ResNet-50 ๆจกๅž‹ๅŽปๆฃ€ๆต‹ๅ›พๅƒไธญ็š„็‹—ใ€‚ไธ‹ๆ–น็š„็ฌฌไธ€่กŒไปฃ็ ๅฐฑๆ˜ฏไธ‹่ฝฝไบ† ResNet-50 ๆจกๅž‹็š„็ฝ‘็ปœ็ป“ๆž„ๅ‚ๆ•ฐ๏ผŒไปฅๅŠๅŸบไบŽ ImageNet ๆ•ฐๆฎ้›†็š„้ข„่ฎญ็ปƒๆƒ้‡ใ€‚\nImageNet ่ฟ™็›ฎๅ‰ไธ€ไธช้žๅธธๆต่กŒ็š„ๆ•ฐๆฎ้›†๏ผŒๅธธ่ขซ็”จๆฅๆต‹่ฏ•ๅ›พๅƒๅˆ†็ฑป็ญ‰่ฎก็ฎ—ๆœบ่ง†่ง‰ไปปๅŠก็›ธๅ…ณ็š„็ฎ—ๆณ•ใ€‚ๅฎƒๅŒ…ๅซ่ถ…่ฟ‡ไธ€ๅƒไธ‡ไธช URL๏ผŒๆฏไธ€ไธช้ƒฝ้“พๆŽฅๅˆฐ 1000 categories ไธญๆ‰€ๅฏนๅบ”็š„ไธ€ไธช็‰ฉไฝ“็š„ๅ›พๅƒใ€‚ไปป็ป™่พ“ๅ…ฅไธ€ไธชๅ›พๅƒ๏ผŒ่ฏฅ ResNet-50 ๆจกๅž‹ไผš่ฟ”ๅ›žไธ€ไธชๅฏนๅ›พๅƒไธญ็‰ฉไฝ“็š„้ข„ๆต‹็ป“ๆžœใ€‚", "from keras.applications.resnet50 import ResNet50\n\n# ๅฎšไน‰ResNet50ๆจกๅž‹\nResNet50_model = ResNet50(weights='imagenet')", "ๆ•ฐๆฎ้ข„ๅค„็†\n\n\nๅœจไฝฟ็”จ TensorFlow ไฝœไธบๅŽ็ซฏ็š„ๆ—ถๅ€™๏ผŒๅœจ Keras ไธญ๏ผŒCNN ็š„่พ“ๅ…ฅๆ˜ฏไธ€ไธช4็ปดๆ•ฐ็ป„๏ผˆไนŸ่ขซ็งฐไฝœ4็ปดๅผ ้‡๏ผ‰๏ผŒๅฎƒ็š„ๅ„็ปดๅบฆๅฐบๅฏธไธบ (nb_samples, rows, columns, channels)ใ€‚ๅ…ถไธญ nb_samples ่กจ็คบๅ›พๅƒ๏ผˆๆˆ–่€…ๆ ทๆœฌ๏ผ‰็š„ๆ€ปๆ•ฐ๏ผŒrows, columns, ๅ’Œ channels 
ๅˆ†ๅˆซ่กจ็คบๅ›พๅƒ็š„่กŒๆ•ฐใ€ๅˆ—ๆ•ฐๅ’Œ้€š้“ๆ•ฐใ€‚\n\n\nไธ‹ๆ–น็š„ path_to_tensor ๅ‡ฝๆ•ฐๅฎž็Žฐๅฆ‚ไธ‹ๅฐ†ๅฝฉ่‰ฒๅ›พๅƒ็š„ๅญ—็ฌฆไธฒๅž‹็š„ๆ–‡ไปถ่ทฏๅพ„ไฝœไธบ่พ“ๅ…ฅ๏ผŒ่ฟ”ๅ›žไธ€ไธช4็ปดๅผ ้‡๏ผŒไฝœไธบ Keras CNN ่พ“ๅ…ฅใ€‚ๅ› ไธบๆˆ‘ไปฌ็š„่พ“ๅ…ฅๅ›พๅƒๆ˜ฏๅฝฉ่‰ฒๅ›พๅƒ๏ผŒๅ› ๆญคๅฎƒไปฌๅ…ทๆœ‰ไธ‰ไธช้€š้“๏ผˆ channels ไธบ 3๏ผ‰ใ€‚\n\n่ฏฅๅ‡ฝๆ•ฐ้ฆ–ๅ…ˆ่ฏปๅ–ไธ€ๅผ ๅ›พๅƒ๏ผŒ็„ถๅŽๅฐ†ๅ…ถ็ผฉๆ”พไธบ 224ร—224 ็š„ๅ›พๅƒใ€‚\n้šๅŽ๏ผŒ่ฏฅๅ›พๅƒ่ขซ่ฐƒๆ•ดไธบๅ…ทๆœ‰4ไธช็ปดๅบฆ็š„ๅผ ้‡ใ€‚\nๅฏนไบŽไปปไธ€่พ“ๅ…ฅๅ›พๅƒ๏ผŒๆœ€ๅŽ่ฟ”ๅ›ž็š„ๅผ ้‡็š„็ปดๅบฆๆ˜ฏ๏ผš(1, 224, 224, 3)ใ€‚\n\n\n\npaths_to_tensor ๅ‡ฝๆ•ฐๅฐ†ๅ›พๅƒ่ทฏๅพ„็š„ๅญ—็ฌฆไธฒ็ป„ๆˆ็š„ numpy ๆ•ฐ็ป„ไฝœไธบ่พ“ๅ…ฅ๏ผŒๅนถ่ฟ”ๅ›žไธ€ไธช4็ปดๅผ ้‡๏ผŒๅ„็ปดๅบฆๅฐบๅฏธไธบ (nb_samples, 224, 224, 3)ใ€‚ ๅœจ่ฟ™้‡Œ๏ผŒnb_samplesๆ˜ฏๆไพ›็š„ๅ›พๅƒ่ทฏๅพ„็š„ๆ•ฐๆฎไธญ็š„ๆ ทๆœฌๆ•ฐ้‡ๆˆ–ๅ›พๅƒๆ•ฐ้‡ใ€‚ไฝ ไนŸๅฏไปฅๅฐ† nb_samples ็†่งฃไธบๆ•ฐๆฎ้›†ไธญ3็ปดๅผ ้‡็š„ไธชๆ•ฐ๏ผˆๆฏไธช3็ปดๅผ ้‡่กจ็คบไธ€ไธชไธๅŒ็š„ๅ›พๅƒใ€‚", "from keras.preprocessing import image \nfrom tqdm import tqdm\n\ndef path_to_tensor(img_path):\n # ็”จPILๅŠ ่ฝฝRGBๅ›พๅƒไธบPIL.Image.Image็ฑปๅž‹\n img = image.load_img(img_path, target_size=(224, 224))\n # ๅฐ†PIL.Image.Image็ฑปๅž‹่ฝฌๅŒ–ไธบๆ ผๅผไธบ(224, 224, 3)็š„3็ปดๅผ ้‡\n x = image.img_to_array(img)\n # ๅฐ†3็ปดๅผ ้‡่ฝฌๅŒ–ไธบๆ ผๅผไธบ(1, 224, 224, 3)็š„4็ปดๅผ ้‡ๅนถ่ฟ”ๅ›ž\n return np.expand_dims(x, axis=0)\n\ndef paths_to_tensor(img_paths):\n list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]\n return np.vstack(list_of_tensors)", "ๅŸบไบŽ ResNet-50 ๆžถๆž„่ฟ›่กŒ้ข„ๆต‹\nๅฏนไบŽ้€š่ฟ‡ไธŠ่ฟฐๆญฅ้ชคๅพ—ๅˆฐ็š„ๅ››็ปดๅผ ้‡๏ผŒๅœจๆŠŠๅฎƒไปฌ่พ“ๅ…ฅๅˆฐ ResNet-50 ็ฝ‘็ปœใ€ๆˆ– Keras ไธญๅ…ถไป–็ฑปไผผ็š„้ข„่ฎญ็ปƒๆจกๅž‹ไน‹ๅ‰๏ผŒ่ฟ˜้œ€่ฆ่ฟ›่กŒไธ€ไบ›้ขๅค–็š„ๅค„็†๏ผš\n1. ้ฆ–ๅ…ˆ๏ผŒ่ฟ™ไบ›ๅ›พๅƒ็š„้€š้“้กบๅบไธบ RGB๏ผŒๆˆ‘ไปฌ้œ€่ฆ้‡ๆŽ’ไป–ไปฌ็š„้€š้“้กบๅบไธบ BGRใ€‚\n2. 
ๅ…ถๆฌก๏ผŒ้ข„่ฎญ็ปƒๆจกๅž‹็š„่พ“ๅ…ฅ้ƒฝ่ฟ›่กŒไบ†้ขๅค–็š„ๅฝ’ไธ€ๅŒ–่ฟ‡็จ‹ใ€‚ๅ› ๆญคๆˆ‘ไปฌๅœจ่ฟ™้‡ŒไนŸ่ฆๅฏน่ฟ™ไบ›ๅผ ้‡่ฟ›่กŒๅฝ’ไธ€ๅŒ–๏ผŒๅณๅฏนๆ‰€ๆœ‰ๅ›พๅƒๆ‰€ๆœ‰ๅƒ็ด ้ƒฝๅ‡ๅŽปๅƒ็ด ๅ‡ๅ€ผ [103.939, 116.779, 123.68]๏ผˆไปฅ RGB ๆจกๅผ่กจ็คบ๏ผŒๆ นๆฎๆ‰€ๆœ‰็š„ ImageNet ๅ›พๅƒ็ฎ—ๅ‡บ๏ผ‰ใ€‚\nๅฏผๅ…ฅ็š„ preprocess_input ๅ‡ฝๆ•ฐๅฎž็Žฐไบ†่ฟ™ไบ›ๅŠŸ่ƒฝใ€‚ๅฆ‚ๆžœไฝ ๅฏนๆญคๅพˆๆ„Ÿๅ…ด่ถฃ๏ผŒๅฏไปฅๅœจ ่ฟ™้‡Œ ๆŸฅ็œ‹ preprocess_input็š„ไปฃ็ ใ€‚\nๅœจๅฎž็Žฐไบ†ๅ›พๅƒๅค„็†็š„้ƒจๅˆ†ไน‹ๅŽ๏ผŒๆˆ‘ไปฌๅฐฑๅฏไปฅไฝฟ็”จๆจกๅž‹ๆฅ่ฟ›่กŒ้ข„ๆต‹ใ€‚่ฟ™ไธ€ๆญฅ้€š่ฟ‡ predict ๆ–นๆณ•ๆฅๅฎž็Žฐ๏ผŒๅฎƒ่ฟ”ๅ›žไธ€ไธชๅ‘้‡๏ผŒๅ‘้‡็š„็ฌฌ i ไธชๅ…ƒ็ด ่กจ็คบ่ฏฅๅ›พๅƒๅฑžไบŽ็ฌฌ i ไธช ImageNet ็ฑปๅˆซ็š„ๆฆ‚็އใ€‚่ฟ™้€š่ฟ‡ๅฆ‚ไธ‹็š„ ResNet50_predict_labels ๅ‡ฝๆ•ฐๅฎž็Žฐใ€‚\n้€š่ฟ‡ๅฏน้ข„ๆต‹ๅ‡บ็š„ๅ‘้‡ๅ–็”จ argmax ๅ‡ฝๆ•ฐ๏ผˆๆ‰พๅˆฐๆœ‰ๆœ€ๅคงๆฆ‚็އๅ€ผ็š„ไธ‹ๆ ‡ๅบๅท๏ผ‰๏ผŒๆˆ‘ไปฌๅฏไปฅๅพ—ๅˆฐไธ€ไธชๆ•ดๆ•ฐ๏ผŒๅณๆจกๅž‹้ข„ๆต‹ๅˆฐ็š„็‰ฉไฝ“็š„็ฑปๅˆซใ€‚่ฟ›่€Œๆ นๆฎ่ฟ™ไธช ๆธ…ๅ•๏ผŒๆˆ‘ไปฌ่ƒฝๅคŸ็Ÿฅ้“่ฟ™ๅ…ทไฝ“ๆ˜ฏๅ“ชไธชๅ“็ง็š„็‹—็‹—ใ€‚", "from keras.applications.resnet50 import preprocess_input, decode_predictions\ndef ResNet50_predict_labels(img_path):\n # ่ฟ”ๅ›žimg_path่ทฏๅพ„็š„ๅ›พๅƒ็š„้ข„ๆต‹ๅ‘้‡\n img = preprocess_input(path_to_tensor(img_path))\n return np.argmax(ResNet50_model.predict(img))", "ๅฎŒๆˆ็‹—ๆฃ€ๆต‹ๆจกๅž‹\nๅœจ็ ”็ฉถ่ฏฅ ๆธ…ๅ• ็š„ๆ—ถๅ€™๏ผŒไฝ ไผšๆณจๆ„ๅˆฐ๏ผŒ็‹—็ฑปๅˆซๅฏนๅบ”็š„ๅบๅทไธบ151-268ใ€‚ๅ› ๆญค๏ผŒๅœจๆฃ€ๆŸฅ้ข„่ฎญ็ปƒๆจกๅž‹ๅˆคๆ–ญๅ›พๅƒๆ˜ฏๅฆๅŒ…ๅซ็‹—็š„ๆ—ถๅ€™๏ผŒๆˆ‘ไปฌๅช้œ€่ฆๆฃ€ๆŸฅๅฆ‚ไธŠ็š„ ResNet50_predict_labels ๅ‡ฝๆ•ฐๆ˜ฏๅฆ่ฟ”ๅ›žไธ€ไธชไป‹ไบŽ151ๅ’Œ268ไน‹้—ด๏ผˆๅŒ…ๅซๅŒบ้—ด็ซฏ็‚น๏ผ‰็š„ๅ€ผใ€‚\nๆˆ‘ไปฌ้€š่ฟ‡่ฟ™ไบ›ๆƒณๆณ•ๆฅๅฎŒๆˆไธ‹ๆ–น็š„ dog_detector ๅ‡ฝๆ•ฐ๏ผŒๅฆ‚ๆžœไปŽๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐ็‹—ๅฐฑ่ฟ”ๅ›ž True๏ผŒๅฆๅˆ™่ฟ”ๅ›ž Falseใ€‚", "def dog_detector(img_path):\n prediction = ResNet50_predict_labels(img_path)\n return ((prediction <= 268) & (prediction >= 151)) ", "ใ€ไฝœไธšใ€‘่ฏ„ไผฐ็‹—็‹—ๆฃ€ๆต‹ๆจกๅž‹\n\n<a id='question3'></a>\n้—ฎ้ข˜ 3:\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ—ไธญ๏ผŒไฝฟ็”จ dog_detector ๅ‡ฝๆ•ฐ๏ผŒ่ฎก็ฎ—๏ผš\n\nhuman_files_shortไธญๅ›พๅƒๆฃ€ๆต‹ๅˆฐ็‹—็‹—็š„็™พๅˆ†ๆฏ”๏ผŸ\ndog_files_shortไธญๅ›พๅƒๆฃ€ๆต‹ๅˆฐ็‹—็‹—็š„็™พๅˆ†ๆฏ”๏ผŸ", "### TODO: ๆต‹่ฏ•dog_detectorๅ‡ฝๆ•ฐๅœจhuman_files_shortๅ’Œdog_files_short็š„่กจ็Žฐ\nhuman_files_short_detect = 0\ndog_files_short_detect = 0\n\nfor i in range(100):\n if (dog_detector(human_files_short[i])):\n human_files_short_detect += 1\n if (dog_detector(dog_files_short[i])):\n dog_files_short_detect += 1\n\nprint(\"The percentage of detecting dogs in human files is:\", human_files_short_detect/human_files_short.size)\nprint(\"The percentage of detecting dogs in dog files is:\", dog_files_short_detect/dog_files_short.size)", "<a id='step3'></a>\nๆญฅ้ชค 3: ไปŽๅคดๅผ€ๅง‹ๅˆ›ๅปบไธ€ไธชCNNๆฅๅˆ†็ฑป็‹—ๅ“็ง\n็Žฐๅœจๆˆ‘ไปฌๅทฒ็ปๅฎž็Žฐไบ†ไธ€ไธชๅ‡ฝๆ•ฐ๏ผŒ่ƒฝๅคŸๅœจๅ›พๅƒไธญ่ฏ†ๅˆซไบบ็ฑปๅŠ็‹—็‹—ใ€‚ไฝ†ๆˆ‘ไปฌ้œ€่ฆๆ›ด่ฟ›ไธ€ๆญฅ็š„ๆ–นๆณ•๏ผŒๆฅๅฏน็‹—็š„็ฑปๅˆซ่ฟ›่กŒ่ฏ†ๅˆซใ€‚ๅœจ่ฟ™ไธ€ๆญฅไธญ๏ผŒไฝ ้œ€่ฆๅฎž็Žฐไธ€ไธชๅท็งฏ็ฅž็ป็ฝ‘็ปœๆฅๅฏน็‹—็š„ๅ“็ง่ฟ›่กŒๅˆ†็ฑปใ€‚ไฝ ้œ€่ฆ__ไปŽๅคดๅฎž็Žฐ__ไฝ ็š„ๅท็งฏ็ฅž็ป็ฝ‘็ปœ๏ผˆๅœจ่ฟ™ไธ€้˜ถๆฎต๏ผŒไฝ ่ฟ˜ไธ่ƒฝไฝฟ็”จ่ฟ็งปๅญฆไน ๏ผ‰๏ผŒๅนถไธ”ไฝ ้œ€่ฆ่พพๅˆฐ่ถ…่ฟ‡1%็š„ๆต‹่ฏ•้›†ๅ‡†็กฎ็އใ€‚ๅœจๆœฌ้กน็›ฎ็š„ๆญฅ้ชคไบ”็ง๏ผŒไฝ ่ฟ˜ๆœ‰ๆœบไผšไฝฟ็”จ่ฟ็งปๅญฆไน ๆฅๅฎž็Žฐไธ€ไธชๅ‡†็กฎ็އๅคงๅคงๆ้ซ˜็š„ๆจกๅž‹ใ€‚\nๅœจๆทปๅŠ ๅท็งฏๅฑ‚็š„ๆ—ถๅ€™๏ผŒๆณจๆ„ไธ่ฆๅŠ ไธŠๅคชๅคš็š„๏ผˆๅฏ่ฎญ็ปƒ็š„๏ผ‰ๅฑ‚ใ€‚ๆ›ดๅคš็š„ๅ‚ๆ•ฐๆ„ๅ‘ณ็€ๆ›ด้•ฟ็š„่ฎญ็ปƒๆ—ถ้—ด๏ผŒไนŸๅฐฑๆ˜ฏ่ฏดไฝ ๆ›ดๅฏ่ƒฝ้œ€่ฆไธ€ไธช GPU ๆฅๅŠ ้€Ÿ่ฎญ็ปƒ่ฟ‡็จ‹ใ€‚ไธ‡ๅนธ็š„ๆ˜ฏ๏ผŒKeras ๆไพ›ไบ†่ƒฝๅคŸ่ฝปๆพ้ข„ๆต‹ๆฏๆฌก่ฟญไปฃ๏ผˆepoch๏ผ‰่Šฑ่ดนๆ—ถ้—ดๆ‰€้œ€็š„ๅ‡ฝๆ•ฐใ€‚ไฝ ๅฏไปฅๆฎๆญคๆŽจๆ–ญไฝ 
็ฎ—ๆณ•ๆ‰€้œ€็š„่ฎญ็ปƒๆ—ถ้—ดใ€‚\nๅ€ผๅพ—ๆณจๆ„็š„ๆ˜ฏ๏ผŒๅฏน็‹—็š„ๅ›พๅƒ่ฟ›่กŒๅˆ†็ฑปๆ˜ฏไธ€้กนๆžๅ…ทๆŒ‘ๆˆ˜ๆ€ง็š„ไปปๅŠกใ€‚ๅ› ไธบๅณไพฟๆ˜ฏไธ€ไธชๆญฃๅธธไบบ๏ผŒไนŸๅพˆ้šพๅŒบๅˆ†ๅธƒๅˆ—ๅก”ๅฐผ็Šฌๅ’Œๅจๅฐ”ๅฃซๅฒๅฎพๆ ผ็Šฌใ€‚\nๅธƒๅˆ—ๅก”ๅฐผ็Šฌ๏ผˆBrittany๏ผ‰ | ๅจๅฐ”ๅฃซๅฒๅฎพๆ ผ็Šฌ๏ผˆWelsh Springer Spaniel๏ผ‰\n- | - \n<img src=\"images/Brittany_02625.jpg\" width=\"100\"> | <img src=\"images/Welsh_springer_spaniel_08203.jpg\" width=\"200\">\nไธ้šพๅ‘็Žฐๅ…ถไป–็š„็‹—ๅ“็งไผšๆœ‰ๅพˆๅฐ็š„็ฑป้—ดๅทฎๅˆซ๏ผˆๆฏ”ๅฆ‚้‡‘ๆฏ›ๅฏปๅ›ž็Šฌๅ’Œ็พŽๅ›ฝๆฐด็ŒŽ็Šฌ๏ผ‰ใ€‚\n้‡‘ๆฏ›ๅฏปๅ›ž็Šฌ๏ผˆCurly-Coated Retriever๏ผ‰ | ็พŽๅ›ฝๆฐด็ŒŽ็Šฌ๏ผˆAmerican Water Spaniel๏ผ‰\n- | -\n<img src=\"images/Curly-coated_retriever_03896.jpg\" width=\"200\"> | <img src=\"images/American_water_spaniel_00648.jpg\" width=\"200\">\nๅŒๆ ท๏ผŒๆ‹‰ๅธƒๆ‹‰ๅคš็Šฌ๏ผˆlabradors๏ผ‰ๆœ‰้ป„่‰ฒใ€ๆฃ•่‰ฒๅ’Œ้ป‘่‰ฒ่ฟ™ไธ‰็งใ€‚้‚ฃไนˆไฝ ่ฎพ่ฎก็š„ๅŸบไบŽ่ง†่ง‰็š„็ฎ—ๆณ•ๅฐ†ไธๅพ—ไธๅ…‹ๆœ่ฟ™็ง่พƒ้ซ˜็š„็ฑป้—ดๅทฎๅˆซ๏ผŒไปฅ่พพๅˆฐ่ƒฝๅคŸๅฐ†่ฟ™ไบ›ไธๅŒ้ขœ่‰ฒ็š„ๅŒ็ฑป็‹—ๅˆ†ๅˆฐๅŒไธ€ไธชๅ“็งไธญใ€‚\n้ป„่‰ฒๆ‹‰ๅธƒๆ‹‰ๅคš็Šฌ๏ผˆYellow Labrador๏ผ‰ | ๆฃ•่‰ฒๆ‹‰ๅธƒๆ‹‰ๅคš็Šฌ๏ผˆChocolate Labrador๏ผ‰ | ้ป‘่‰ฒๆ‹‰ๅธƒๆ‹‰ๅคš็Šฌ๏ผˆBlack Labrador๏ผ‰\n- | -\n<img src=\"images/Labrador_retriever_06457.jpg\" width=\"150\"> | <img src=\"images/Labrador_retriever_06455.jpg\" width=\"240\"> | <img src=\"images/Labrador_retriever_06449.jpg\" width=\"220\">\nๆˆ‘ไปฌไนŸๆๅˆฐไบ†้šๆœบๅˆ†็ฑปๅฐ†ๅพ—ๅˆฐไธ€ไธช้žๅธธไฝŽ็š„็ป“ๆžœ๏ผšไธ่€ƒ่™‘ๅ“็ง็•ฅๆœ‰ๅคฑ่กก็š„ๅฝฑๅ“๏ผŒ้šๆœบ็Œœๆต‹ๅˆฐๆญฃ็กฎๅ“็ง็š„ๆฆ‚็އๆ˜ฏ1/133๏ผŒ็›ธๅฏนๅบ”็š„ๅ‡†็กฎ็އๆ˜ฏไฝŽไบŽ1%็š„ใ€‚\n่ฏท่ฎฐไฝ๏ผŒๅœจๆทฑๅบฆๅญฆไน ้ข†ๅŸŸ๏ผŒๅฎž่ทต่ฟœ่ฟœ้ซ˜ไบŽ็†่ฎบใ€‚ๅคง้‡ๅฐ่ฏ•ไธๅŒ็š„ๆก†ๆžถๅง๏ผŒ็›ธไฟกไฝ ็š„็›ด่ง‰๏ผๅฝ“็„ถ๏ผŒ็Žฉๅพ—ๅผ€ๅฟƒ๏ผ\nๆ•ฐๆฎ้ข„ๅค„็†\n้€š่ฟ‡ๅฏนๆฏๅผ ๅ›พๅƒ็š„ๅƒ็ด ๅ€ผ้™คไปฅ255๏ผŒๆˆ‘ไปฌๅฏนๅ›พๅƒๅฎž็Žฐไบ†ๅฝ’ไธ€ๅŒ–ๅค„็†ใ€‚", "from PIL import ImageFile \nImageFile.LOAD_TRUNCATED_IMAGES = True \n\n# Kerasไธญ็š„ๆ•ฐๆฎ้ข„ๅค„็†่ฟ‡็จ‹\ntrain_tensors = paths_to_tensor(train_files).astype('float32')/255\nvalid_tensors = paths_to_tensor(valid_files).astype('float32')/255\ntest_tensors = paths_to_tensor(test_files).astype('float32')/255", "ใ€็ปƒไน ใ€‘ๆจกๅž‹ๆžถๆž„\nๅˆ›ๅปบไธ€ไธชๅท็งฏ็ฅž็ป็ฝ‘็ปœๆฅๅฏน็‹—ๅ“็ง่ฟ›่กŒๅˆ†็ฑปใ€‚ๅœจไฝ ไปฃ็ ๅ—็š„ๆœ€ๅŽ๏ผŒๆ‰ง่กŒ model.summary() ๆฅ่พ“ๅ‡บไฝ ๆจกๅž‹็š„ๆ€ป็ป“ไฟกๆฏใ€‚\nๆˆ‘ไปฌๅทฒ็ปๅธฎไฝ ๅฏผๅ…ฅไบ†ไธ€ไบ›ๆ‰€้œ€็š„ Python ๅบ“๏ผŒๅฆ‚ๆœ‰้œ€่ฆไฝ ๅฏไปฅ่‡ช่กŒๅฏผๅ…ฅใ€‚ๅฆ‚ๆžœไฝ ๅœจ่ฟ‡็จ‹ไธญ้‡ๅˆฐไบ†ๅ›ฐ้šพ๏ผŒๅฆ‚ไธ‹ๆ˜ฏ็ป™ไฝ ็š„ไธ€็‚นๅฐๆ็คบโ€”โ€”่ฏฅๆจกๅž‹่ƒฝๅคŸๅœจ5ไธช epoch ๅ†…ๅ–ๅพ—่ถ…่ฟ‡1%็š„ๆต‹่ฏ•ๅ‡†็กฎ็އ๏ผŒๅนถไธ”่ƒฝๅœจCPUไธŠๅพˆๅฟซๅœฐ่ฎญ็ปƒใ€‚\n\n\n<a id='question4'></a> \n้—ฎ้ข˜ 4:\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ—ไธญๅฐ่ฏ•ไฝฟ็”จ Keras ๆญๅปบๅท็งฏ็ฝ‘็ปœ็š„ๆžถๆž„๏ผŒๅนถๅ›ž็ญ”็›ธๅ…ณ็š„้—ฎ้ข˜ใ€‚\n\nไฝ ๅฏไปฅๅฐ่ฏ•่‡ชๅทฑๆญๅปบไธ€ไธชๅท็งฏ็ฝ‘็ปœ็š„ๆจกๅž‹๏ผŒ้‚ฃไนˆไฝ ้œ€่ฆๅ›ž็ญ”ไฝ ๆญๅปบๅท็งฏ็ฝ‘็ปœ็š„ๅ…ทไฝ“ๆญฅ้ชค๏ผˆ็”จไบ†ๅ“ชไบ›ๅฑ‚๏ผ‰ไปฅๅŠไธบไป€ไนˆ่ฟ™ๆ ทๆญๅปบใ€‚\nไฝ ไนŸๅฏไปฅๆ นๆฎไธŠๅ›พๆ็คบ็š„ๆญฅ้ชคๆญๅปบๅท็งฏ็ฝ‘็ปœ๏ผŒ้‚ฃไนˆ่ฏท่ฏดๆ˜Žไธบไฝ•ๅฆ‚ไธŠ็š„ๆžถๆž„่ƒฝๅคŸๅœจ่ฏฅ้—ฎ้ข˜ไธŠๅ–ๅพ—ๅพˆๅฅฝ็š„่กจ็Žฐใ€‚\n\nๅ›ž็ญ”: \nๆˆ‘้€‰ๆ‹ฉๆ นๆฎไธŠๅ›พๆ็คบๆญๅปบๅท็งฏ็ฅž็ป็ฝ‘็ปœใ€‚้ฆ–ๅ…ˆ๏ผŒๆญๅปบไธ‰ๅฑ‚ๅท็งฏๅฑ‚ๅฏไปฅๆฃ€ๆต‹ๆ›ด้ซ˜็บง็š„็‰นๅพ๏ผŒไปฅ่พพๅˆฐ็‹—็‹—ๅ“็งๅˆ†็ฑป็š„็›ฎ็š„ใ€‚ๅŒๆ—ถ๏ผŒไธคไธชๅท็งฏๅฑ‚ไน‹้—ด็š„ๆฑ ๅŒ–ๅฑ‚ๆœ‰ๆ•ˆ้™ไฝŽไบ†ๆ•ฐๆฎ็š„ๅคๆ‚ๅบฆ๏ผŒไฝฟๅพ—่ฎญ็ปƒๆ•ˆ็އๅพ—ๅˆฐๆœ‰ๆ•ˆๆๅ‡", "from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D\nfrom keras.layers import Dropout, Flatten, Dense\nfrom keras.models import Sequential\n\nmodel = Sequential()\n\n### TODO: ๅฎšไน‰ไฝ ็š„็ฝ‘็ปœๆžถๆž„\nmodel.add(Conv2D(filters=16, kernel_size=2, 
input_shape=(224, 224, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=2))\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(filters=32, kernel_size=2, activation='relu'))\nmodel.add(MaxPooling2D(pool_size=2))\n\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(filters=64, kernel_size=2, activation='relu'))\nmodel.add(MaxPooling2D(pool_size=2))\nmodel.add(Dropout(0.2))\n\nmodel.add(GlobalAveragePooling2D())\nmodel.add(Dense(133, activation='softmax'))\n \nmodel.summary()\n\n## ็ผ–่ฏ‘ๆจกๅž‹\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])", "ใ€็ปƒไน ใ€‘่ฎญ็ปƒๆจกๅž‹\n\n<a id='question5'></a> \n้—ฎ้ข˜ 5:\nๅœจไธ‹ๆ–นไปฃ็ ๅ•ๅ…ƒ่ฎญ็ปƒๆจกๅž‹ใ€‚ไฝฟ็”จๆจกๅž‹ๆฃ€ๆŸฅ็‚น๏ผˆmodel checkpointing๏ผ‰ๆฅๅ‚จๅญ˜ๅ…ทๆœ‰ๆœ€ไฝŽ้ชŒ่ฏ้›† loss ็š„ๆจกๅž‹ใ€‚\nๅฏ้€‰้ข˜๏ผšไฝ ไนŸๅฏไปฅๅฏน่ฎญ็ปƒ้›†่ฟ›่กŒ ๆ•ฐๆฎๅขžๅผบ๏ผŒๆฅไผ˜ๅŒ–ๆจกๅž‹็š„่กจ็Žฐใ€‚", "from keras.callbacks import ModelCheckpoint \n\n### TODO: ่ฎพ็ฝฎ่ฎญ็ปƒๆจกๅž‹็š„epochs็š„ๆ•ฐ้‡\n\nepochs = 5\n\n### ไธ่ฆไฟฎๆ”นไธ‹ๆ–นไปฃ็ \n\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', \n verbose=1, save_best_only=True)\n\nmodel.fit(train_tensors, train_targets, \n validation_data=(valid_tensors, valid_targets),\n epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)\n\n## ๅŠ ่ฝฝๅ…ทๆœ‰ๆœ€ๅฅฝ้ชŒ่ฏloss็š„ๆจกๅž‹\n\nmodel.load_weights('saved_models/weights.best.from_scratch.hdf5')", "ๆต‹่ฏ•ๆจกๅž‹\nๅœจ็‹—ๅ›พๅƒ็š„ๆต‹่ฏ•ๆ•ฐๆฎ้›†ไธŠ่ฏ•็”จไฝ ็š„ๆจกๅž‹ใ€‚็กฎไฟๆต‹่ฏ•ๅ‡†็กฎ็އๅคงไบŽ1%ใ€‚", "# ่Žทๅ–ๆต‹่ฏ•ๆ•ฐๆฎ้›†ไธญๆฏไธ€ไธชๅ›พๅƒๆ‰€้ข„ๆต‹็š„็‹—ๅ“็ง็š„index\ndog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]\n\n# ๆŠฅๅ‘Šๆต‹่ฏ•ๅ‡†็กฎ็އ\ntest_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "<a id='step4'></a>\nๆญฅ้ชค 4: ไฝฟ็”จไธ€ไธชCNNๆฅๅŒบๅˆ†็‹—็š„ๅ“็ง\nไฝฟ็”จ ่ฟ็งปๅญฆไน ๏ผˆTransfer Learning๏ผ‰็š„ๆ–นๆณ•๏ผŒ่ƒฝๅธฎๅŠฉๆˆ‘ไปฌๅœจไธๆŸๅคฑๅ‡†็กฎ็އ็š„ๆƒ…ๅ†ตไธ‹ๅคงๅคงๅ‡ๅฐ‘่ฎญ็ปƒๆ—ถ้—ดใ€‚ๅœจไปฅไธ‹ๆญฅ้ชคไธญ๏ผŒไฝ ๅฏไปฅๅฐ่ฏ•ไฝฟ็”จ่ฟ็งปๅญฆไน ๆฅ่ฎญ็ปƒไฝ ่‡ชๅทฑ็š„CNNใ€‚\nๅพ—ๅˆฐไปŽๅ›พๅƒไธญๆๅ–็š„็‰นๅพๅ‘้‡๏ผˆBottleneck Features๏ผ‰", "bottleneck_features = np.load('/data/bottleneck_features/DogVGG16Data.npz')\ntrain_VGG16 = bottleneck_features['train']\nvalid_VGG16 = bottleneck_features['valid']\ntest_VGG16 = bottleneck_features['test']", "ๆจกๅž‹ๆžถๆž„\n่ฏฅๆจกๅž‹ไฝฟ็”จ้ข„่ฎญ็ปƒ็š„ VGG-16 ๆจกๅž‹ไฝœไธบๅ›บๅฎš็š„ๅ›พๅƒ็‰นๅพๆๅ–ๅ™จ๏ผŒๅ…ถไธญ VGG-16 ๆœ€ๅŽไธ€ๅฑ‚ๅท็งฏๅฑ‚็š„่พ“ๅ‡บ่ขซ็›ดๆŽฅ่พ“ๅ…ฅๅˆฐๆˆ‘ไปฌ็š„ๆจกๅž‹ใ€‚ๆˆ‘ไปฌๅช้œ€่ฆๆทปๅŠ ไธ€ไธชๅ…จๅฑ€ๅนณๅ‡ๆฑ ๅŒ–ๅฑ‚ไปฅๅŠไธ€ไธชๅ…จ่ฟžๆŽฅๅฑ‚๏ผŒๅ…ถไธญๅ…จ่ฟžๆŽฅๅฑ‚ไฝฟ็”จ softmax ๆฟ€ๆดปๅ‡ฝๆ•ฐ๏ผŒๅฏนๆฏไธ€ไธช็‹—็š„็ง็ฑป้ƒฝๅŒ…ๅซไธ€ไธช่Š‚็‚นใ€‚", "VGG16_model = Sequential()\nVGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))\nVGG16_model.add(Dense(133, activation='softmax'))\n\nVGG16_model.summary()\n\n## ็ผ–่ฏ‘ๆจกๅž‹\n\nVGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])\n\n## ่ฎญ็ปƒๆจกๅž‹\n\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', \n verbose=1, save_best_only=True)\n\nVGG16_model.fit(train_VGG16, train_targets, \n validation_data=(valid_VGG16, valid_targets),\n epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)\n\n\n\n## ๅŠ ่ฝฝๅ…ทๆœ‰ๆœ€ๅฅฝ้ชŒ่ฏloss็š„ๆจกๅž‹\n\nVGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')", 
"ๆต‹่ฏ•ๆจกๅž‹\n็Žฐๅœจ๏ผŒๆˆ‘ไปฌๅฏไปฅๆต‹่ฏ•ๆญคCNNๅœจ็‹—ๅ›พๅƒๆต‹่ฏ•ๆ•ฐๆฎ้›†ไธญ่ฏ†ๅˆซๅ“็ง็š„ๆ•ˆๆžœๅฆ‚ไฝ•ใ€‚ๆˆ‘ไปฌๅœจไธ‹ๆ–นๆ‰“ๅฐๅ‡บๆต‹่ฏ•ๅ‡†็กฎ็އใ€‚", "# ่Žทๅ–ๆต‹่ฏ•ๆ•ฐๆฎ้›†ไธญๆฏไธ€ไธชๅ›พๅƒๆ‰€้ข„ๆต‹็š„็‹—ๅ“็ง็š„index\nVGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]\n\n# ๆŠฅๅ‘Šๆต‹่ฏ•ๅ‡†็กฎ็އ\ntest_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "ไฝฟ็”จๆจกๅž‹้ข„ๆต‹็‹—็š„ๅ“็ง", "from extract_bottleneck_features import *\n\ndef VGG16_predict_breed(img_path):\n # ๆๅ–bottleneck็‰นๅพ\n bottleneck_feature = extract_VGG16(path_to_tensor(img_path))\n # ่Žทๅ–้ข„ๆต‹ๅ‘้‡\n predicted_vector = VGG16_model.predict(bottleneck_feature)\n # ่ฟ”ๅ›žๆญคๆจกๅž‹้ข„ๆต‹็š„็‹—็š„ๅ“็ง\n return dog_names[np.argmax(predicted_vector)]", "<a id='step5'></a>\nๆญฅ้ชค 5: ๅปบ็ซ‹ไธ€ไธชCNNๆฅๅˆ†็ฑป็‹—็š„ๅ“็ง๏ผˆไฝฟ็”จ่ฟ็งปๅญฆไน ๏ผ‰\n็Žฐๅœจไฝ ๅฐ†ไฝฟ็”จ่ฟ็งปๅญฆไน ๆฅๅปบ็ซ‹ไธ€ไธชCNN๏ผŒไปŽ่€ŒๅฏไปฅไปŽๅ›พๅƒไธญ่ฏ†ๅˆซ็‹—็š„ๅ“็งใ€‚ไฝ ็š„ CNN ๅœจๆต‹่ฏ•้›†ไธŠ็š„ๅ‡†็กฎ็އๅฟ…้กป่‡ณๅฐ‘่พพๅˆฐ60%ใ€‚\nๅœจๆญฅ้ชค4ไธญ๏ผŒๆˆ‘ไปฌไฝฟ็”จไบ†่ฟ็งปๅญฆไน ๆฅๅˆ›ๅปบไธ€ไธชไฝฟ็”จๅŸบไบŽ VGG-16 ๆๅ–็š„็‰นๅพๅ‘้‡ๆฅๆญๅปบไธ€ไธช CNNใ€‚ๅœจๆœฌ้ƒจๅˆ†ๅ†…ๅฎนไธญ๏ผŒไฝ ๅฟ…้กปไฝฟ็”จๅฆไธ€ไธช้ข„่ฎญ็ปƒๆจกๅž‹ๆฅๆญๅปบไธ€ไธช CNNใ€‚ไธบไบ†่ฎฉ่ฟ™ไธชไปปๅŠกๆ›ดๆ˜“ๅฎž็Žฐ๏ผŒๆˆ‘ไปฌๅทฒ็ป้ข„ๅ…ˆๅฏน็›ฎๅ‰ keras ไธญๅฏ็”จ็š„ๅ‡ ็ง็ฝ‘็ปœ่ฟ›่กŒไบ†้ข„่ฎญ็ปƒ๏ผš\n\nVGG-19 bottleneck features\nResNet-50 bottleneck features\nInception bottleneck features\nXception bottleneck features\n\n่ฟ™ไบ›ๆ–‡ไปถ่ขซๅ‘ฝๅไธบไธบ๏ผš\nDog{network}Data.npz\n\nๅ…ถไธญ {network} ๅฏไปฅๆ˜ฏ VGG19ใ€Resnet50ใ€InceptionV3 ๆˆ– Xception ไธญ็š„ไธ€ไธชใ€‚้€‰ๆ‹ฉไธŠๆ–น็ฝ‘็ปœๆžถๆž„ไธญ็š„ไธ€ไธช๏ผŒไป–ไปฌๅทฒ็ปไฟๅญ˜ๅœจ็›ฎๅฝ• /data/bottleneck_features/ ไธญใ€‚\nใ€็ปƒไน ใ€‘่Žทๅ–ๆจกๅž‹็š„็‰นๅพๅ‘้‡\nๅœจไธ‹ๆ–นไปฃ็ ๅ—ไธญ๏ผŒ้€š่ฟ‡่ฟ่กŒไธ‹ๆ–นไปฃ็ ๆๅ–่ฎญ็ปƒใ€ๆต‹่ฏ•ไธŽ้ชŒ่ฏ้›†็›ธๅฏนๅบ”็š„bottleneck็‰นๅพใ€‚\nbottleneck_features = np.load('/data/bottleneck_features/Dog{network}Data.npz')\ntrain_{network} = bottleneck_features['train']\nvalid_{network} = bottleneck_features['valid']\ntest_{network} = bottleneck_features['test']", "### TODO: ไปŽๅฆไธ€ไธช้ข„่ฎญ็ปƒ็š„CNN่Žทๅ–bottleneck็‰นๅพ\nbottleneck_features = np.load('/data/bottleneck_features/DogXceptionData.npz')\ntrain_Xception = bottleneck_features['train']\nvalid_Xception = bottleneck_features['valid']\ntest_Xception = bottleneck_features['test']", "ใ€็ปƒไน ใ€‘ๆจกๅž‹ๆžถๆž„\nๅปบ็ซ‹ไธ€ไธชCNNๆฅๅˆ†็ฑป็‹—ๅ“็งใ€‚ๅœจไฝ ็š„ไปฃ็ ๅ•ๅ…ƒๅ—็š„ๆœ€ๅŽ๏ผŒ้€š่ฟ‡่ฟ่กŒๅฆ‚ไธ‹ไปฃ็ ่พ“ๅ‡บ็ฝ‘็ปœ็š„็ป“ๆž„๏ผš\n &lt;your model's name&gt;.summary()\n\n\n<a id='question6'></a> \n้—ฎ้ข˜ 6:\nๅœจไธ‹ๆ–น็š„ไปฃ็ ๅ—ไธญๅฐ่ฏ•ไฝฟ็”จ Keras ๆญๅปบๆœ€็ปˆ็š„็ฝ‘็ปœๆžถๆž„๏ผŒๅนถๅ›ž็ญ”ไฝ ๅฎž็Žฐๆœ€็ปˆ CNN ๆžถๆž„็š„ๆญฅ้ชคไธŽๆฏไธ€ๆญฅ็š„ไฝœ็”จ๏ผŒๅนถๆ่ฟฐไฝ ๅœจ่ฟ็งปๅญฆไน ่ฟ‡็จ‹ไธญ๏ผŒไฝฟ็”จ่ฏฅ็ฝ‘็ปœๆžถๆž„็š„ๅŽŸๅ› ใ€‚\nๅ›ž็ญ”: \nXception_model = Sequential()\n่ฟ™ไธ€ๆญฅๆ˜ฏ่ฐƒ็”จXception็š„้ข„่ฎญ็ปƒๆจกๅž‹\nXception_model.add(GlobalAveragePooling2D(input_shape=train_Resnet50.shape[1:]))\n่ฟ™ไธ€ๆญฅๆทปๅŠ ไธ€ไธชๅ…จๅฑ€ๅนณๅ‡ๆฑ ๅŒ–ๅฑ‚้ฟๅ…่ฟ‡ๆ‹Ÿๅˆ\nXception_model.add(Dropout(0.2))\n่ฟ™ไธ€ๆญฅๆ˜ฏๆทปๅŠ Dropoutๅฑ‚้ฟๅ…่ฟ‡ๆ‹Ÿๅˆ\nXception_model.add(Dense(133, activation='softmax'))\n่ฟ™ไธ€ๆญฅๆทปๅŠ 133ไธช่Š‚็‚น็š„ๅ…จ่ฟžๆŽฅๅฑ‚๏ผŒไฝฟ็”จsoftmaxๆฟ€ๆดปๅ‡ฝๆ•ฐ่พ“ๅ‡บๆฏไธช็‹—็‹—ๅ“็ง็š„ๆฆ‚็އ\nไฝฟ็”จ่ฏฅ็ฝ‘็ปœๆžถๆž„็š„ๅŽŸๅ› ๆ˜ฏ็”ฑไบŽXceptionๅ…ทๆœ‰ๅฆ‚ไธ‹ไผ˜็‚น๏ผš\n1.็›ธๆฏ”ไผ 
็ปŸ็š„ๅท็งฏ็ฅž็ป็ฝ‘็ปœๅฆ‚VGGๅคๆ‚ๅบฆ้™ไฝŽ๏ผŒ้œ€่ฆ็š„ๅ‚ๆ•ฐๆ•ฐ้‡ไธ‹้™ใ€‚\n2.ๅฏไปฅๅšๅˆฐๆ›ดๆทฑ๏ผŒไธไผšๅ‡บ็Žฐๆขฏๅบฆๆถˆๅคฑ็š„้—ฎ้ข˜ใ€‚\n3.ไผ˜ๅŒ–็ฎ€ๅ•๏ผŒๅˆ†็ฑปๅ‡†็กฎๅบฆๅŠ ๆทฑ็”ฑไบŽไฝฟ็”จๆ›ดๆทฑ็š„็ฝ‘็ปœใ€‚\n4.Xceptionๅœจไผ—ๅคšๅ›พๅƒ่ฏ†ๅˆซ้ข†ๅŸŸไธญๆ‹”ๅพ—ๅคด็ญนใ€‚\nๅ› ๆญค๏ผŒ้€‰ๅ–Xception็ฝ‘็ปœๅฏไปฅๆฏ”ไน‹ๅ‰็š„VGG็ฝ‘็ปœๅ–ๅพ—ๆ›ดๅฅฝ็š„้ข„ๆต‹ๆ•ˆๆžœใ€‚\n\n\nไธบไป€ไนˆ่ฟ™ไธ€ๆžถๆž„ไผšๅœจ่ฟ™ไธ€ๅˆ†็ฑปไปปๅŠกไธญๆˆๅŠŸ๏ผŸ\n่ฟ™ๅ››ไธชๆžถๆž„้ƒฝๆ˜ฏ็ป่ฟ‡ๅๅคๅคšๆฌกๅฎž้ชŒ็กฎๅฎš็š„๏ผŒ้žๅธธๆœ‰ๆ•ˆๆžœ็š„ๆžถๆž„ใ€‚ไปฅInception netไธบไพ‹๏ผŒinception netๆ˜ฏๅคšๅฑ‚็‰นๅพๆๅ–ๅ™จ๏ผŒ้€š่ฟ‡ๅˆ†ๅˆซๅคšๆฌกๅŒๆ—ถๆๅ–็‰นๅพ๏ผŒ็„ถๅŽๅ ๅŠ ๏ผŒๅฐฑๅฏไปฅๅญฆๅˆฐไธๅŒๅฑ‚ๆฌก็š„็‰นๅพ๏ผŒๆ‰€ไปฅๆ•ˆๆžœ้žๅธธๅฅฝใ€‚\n\n\nไธบไป€ไนˆๆ—ฉๆœŸ๏ผˆ็ฌฌไธ‰ๆญฅ ๏ผ‰็š„ๅฐ่ฏ•ไธๆˆๅŠŸ๏ผŸ\n็ฌฌไธ‰ๆญฅไธญ๏ผŒ็ฌฌไธ€๏ผŒไฝฟ็”จ็š„็ฝ‘็ปœๅœจๆžถๆž„ไธŠ๏ผŒ้žๅธธๆต…๏ผŒๅญฆๅˆฐ็š„็‰นๅพ้žๅธธๅฐ‘๏ผŒๅ…ถๆฌกๅญฆไน ๅบ“้žๅธธๅฐ๏ผŒไธŠ้ขๅ››ไธช็ฝ‘็ปœๆ˜ฏๅœจImagenetไธŠ็ป่ฟ‡ๅคง้‡่ฎญ็ปƒๅœจไธๅŒ็ง็ฑป็š„่ฎญ็ปƒ้›†ไธŠๅพ—ๆฅ็š„๏ผŒ่ฟ™ๆ˜ฏ่ฟ™ไธชๅฐๅบ“ๆ— ๆณ•ๆฏ”ๆ‹Ÿ็š„ใ€‚", "### TODO: ๅฎšไน‰ไฝ ็š„ๆก†ๆžถ\n# ่ฐƒ็”จXception็š„้ข„่ฎญ็ปƒๆจกๅž‹\nXception_model = Sequential()\n\n#ๅŠ ไธ€ไธชๅ…จๅฑ€ๅนณๅ‡ๆฑ ๅŒ–ๅฑ‚้ฟๅ…่ฟ‡ๆ‹Ÿๅˆ\nXception_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:]))\n\n#ๆทปๅŠ Dropoutๅฑ‚้ฟๅ…่ฟ‡ๆ‹Ÿๅˆ\nXception_model.add(Dropout(0.2))\n\n#ๆทปๅŠ 133ไธช่Š‚็‚น็š„ๅ…จ่ฟžๆŽฅๅฑ‚๏ผŒไฝฟ็”จsoftmaxๆฟ€ๆดปๅ‡ฝๆ•ฐ่พ“ๅ‡บๆฏไธช็‹—็‹—ๅ“็ง็š„ๆฆ‚็އ\nXception_model.add(Dense(133, activation='softmax'))\n\nXception_model.summary()\n\n### TODO: ็ผ–่ฏ‘ๆจกๅž‹\nXception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])", "ใ€็ปƒไน ใ€‘่ฎญ็ปƒๆจกๅž‹\n<a id='question7'></a> \n้—ฎ้ข˜ 7:\nๅœจไธ‹ๆ–นไปฃ็ ๅ•ๅ…ƒไธญ่ฎญ็ปƒไฝ ็š„ๆจกๅž‹ใ€‚ไฝฟ็”จๆจกๅž‹ๆฃ€ๆŸฅ็‚น๏ผˆmodel checkpointing๏ผ‰ๆฅๅ‚จๅญ˜ๅ…ทๆœ‰ๆœ€ไฝŽ้ชŒ่ฏ้›† loss ็š„ๆจกๅž‹ใ€‚\nๅฝ“็„ถ๏ผŒไฝ ไนŸๅฏไปฅๅฏน่ฎญ็ปƒ้›†่ฟ›่กŒ ๆ•ฐๆฎๅขžๅผบ ไปฅไผ˜ๅŒ–ๆจกๅž‹็š„่กจ็Žฐ๏ผŒไธ่ฟ‡่ฟ™ไธๆ˜ฏๅฟ…้กป็š„ๆญฅ้ชคใ€‚", "### TODO: ่ฎญ็ปƒๆจกๅž‹\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception1.hdf5', \n verbose=1, save_best_only=True)\n\nhistory = Xception_model.fit(train_Xception, train_targets, \n validation_data=(valid_Xception, valid_targets),\n epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)\n\n### TODO: ๅŠ ่ฝฝๅ…ทๆœ‰ๆœ€ไฝณ้ชŒ่ฏloss็š„ๆจกๅž‹ๆƒ้‡\nXception_model.load_weights('saved_models/weights.best.Xception1.hdf5')", "ใ€็ปƒไน ใ€‘ๆต‹่ฏ•ๆจกๅž‹\n<a id='question8'></a> \n้—ฎ้ข˜ 8:\nๅœจ็‹—ๅ›พๅƒ็š„ๆต‹่ฏ•ๆ•ฐๆฎ้›†ไธŠ่ฏ•็”จไฝ ็š„ๆจกๅž‹ใ€‚็กฎไฟๆต‹่ฏ•ๅ‡†็กฎ็އๅคงไบŽ60%ใ€‚", "### TODO: ๅœจๆต‹่ฏ•้›†ไธŠ่ฎก็ฎ—ๅˆ†็ฑปๅ‡†็กฎ็އ\nXception_predictions = [np.argmax(Xception_model.predict(np.expand_dims(feature, axis=0))) for feature in test_Xception]\n\n# ๆŠฅๅ‘Šๆต‹่ฏ•ๅ‡†็กฎ็އ\ntest_accuracy = 100*np.sum(np.array(Xception_predictions)==np.argmax(test_targets, axis=1))/len(Xception_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)", "ใ€็ปƒไน ใ€‘ไฝฟ็”จๆจกๅž‹ๆต‹่ฏ•็‹—็š„ๅ“็ง\nๅฎž็Žฐไธ€ไธชๅ‡ฝๆ•ฐ๏ผŒๅฎƒ็š„่พ“ๅ…ฅไธบๅ›พๅƒ่ทฏๅพ„๏ผŒๅŠŸ่ƒฝไธบ้ข„ๆต‹ๅฏนๅบ”ๅ›พๅƒ็š„็ฑปๅˆซ๏ผŒ่พ“ๅ‡บไธบไฝ ๆจกๅž‹้ข„ๆต‹ๅ‡บ็š„็‹—็ฑปๅˆซ๏ผˆAffenpinscher, Afghan_hound ็ญ‰๏ผ‰ใ€‚\nไธŽๆญฅ้ชค5ไธญ็š„ๆจกๆ‹Ÿๅ‡ฝๆ•ฐ็ฑปไผผ๏ผŒไฝ ็š„ๅ‡ฝๆ•ฐๅบ”ๅฝ“ๅŒ…ๅซๅฆ‚ไธ‹ไธ‰ไธชๆญฅ้ชค๏ผš\n\nๆ นๆฎ้€‰ๅฎš็š„ๆจกๅž‹่ฝฝๅ…ฅๅ›พๅƒ็‰นๅพ๏ผˆbottleneck features๏ผ‰\nๅฐ†ๅ›พๅƒ็‰นๅพ่พ“่พ“ๅ…ฅๅˆฐไฝ ็š„ๆจกๅž‹ไธญ๏ผŒๅนถ่ฟ”ๅ›ž้ข„ๆต‹ๅ‘้‡ใ€‚ๆณจๆ„๏ผŒๅœจ่ฏฅๅ‘้‡ไธŠไฝฟ็”จ argmax ๅ‡ฝๆ•ฐๅฏไปฅ่ฟ”ๅ›ž็‹—็ง็ฑป็š„ๅบๅทใ€‚\nไฝฟ็”จๅœจๆญฅ้ชค0ไธญๅฎšไน‰็š„ dog_names ๆ•ฐ็ป„ๆฅ่ฟ”ๅ›žๅฏนๅบ”็š„็‹—็ง็ฑปๅ็งฐใ€‚\n\nๆๅ–ๅ›พๅƒ็‰นๅพ่ฟ‡็จ‹ไธญไฝฟ็”จๅˆฐ็š„ๅ‡ฝๆ•ฐๅฏไปฅๅœจ 
extract_bottleneck_features.py ไธญๆ‰พๅˆฐใ€‚ๅŒๆ—ถ๏ผŒไป–ไปฌๅบ”ๅทฒๅœจไน‹ๅ‰็š„ไปฃ็ ๅ—ไธญ่ขซๅฏผๅ…ฅใ€‚ๆ นๆฎไฝ ้€‰ๅฎš็š„ CNN ็ฝ‘็ปœ๏ผŒไฝ ๅฏไปฅไฝฟ็”จ extract_{network} ๅ‡ฝๆ•ฐๆฅ่Žทๅพ—ๅฏนๅบ”็š„ๅ›พๅƒ็‰นๅพ๏ผŒๅ…ถไธญ {network} ไปฃ่กจ VGG19, Resnet50, InceptionV3, ๆˆ– Xception ไธญ็š„ไธ€ไธชใ€‚\n\n<a id='question9'></a> \n้—ฎ้ข˜ 9:", "### TODO: ๅ†™ไธ€ไธชๅ‡ฝๆ•ฐ๏ผŒ่ฏฅๅ‡ฝๆ•ฐๅฐ†ๅ›พๅƒ็š„่ทฏๅพ„ไฝœไธบ่พ“ๅ…ฅ\n### ็„ถๅŽ่ฟ”ๅ›žๆญคๆจกๅž‹ๆ‰€้ข„ๆต‹็š„็‹—็š„ๅ“็ง\ndef Xception_predict_breed(img_path):\n # extract bottleneck features\n bottleneck_feature = extract_Xception(path_to_tensor(img_path))\n # obtain predicted vector\n predicted_vector = Xception_model.predict(bottleneck_feature)\n # return dog breed that is predicted by the model\n return dog_names[np.argmax(predicted_vector)]", "<a id='step6'></a>\nๆญฅ้ชค 6: ๅฎŒๆˆไฝ ็š„็ฎ—ๆณ•\nๅฎž็Žฐไธ€ไธช็ฎ—ๆณ•๏ผŒๅฎƒ็š„่พ“ๅ…ฅไธบๅ›พๅƒ็š„่ทฏๅพ„๏ผŒๅฎƒ่ƒฝๅคŸๅŒบๅˆ†ๅ›พๅƒๆ˜ฏๅฆๅŒ…ๅซไธ€ไธชไบบใ€็‹—ๆˆ–ไธค่€…้ƒฝไธๅŒ…ๅซ๏ผŒ็„ถๅŽ๏ผš\n\nๅฆ‚ๆžœไปŽๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐไธ€ๅช__็‹—__๏ผŒ่ฟ”ๅ›ž่ขซ้ข„ๆต‹็š„ๅ“็งใ€‚\nๅฆ‚ๆžœไปŽๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐ__ไบบ__๏ผŒ่ฟ”ๅ›žๆœ€็›ธๅƒ็š„็‹—ๅ“็งใ€‚\nๅฆ‚ๆžœไธค่€…้ƒฝไธ่ƒฝๅœจๅ›พๅƒไธญๆฃ€ๆต‹ๅˆฐ๏ผŒ่พ“ๅ‡บ้”™่ฏฏๆ็คบใ€‚\n\nๆˆ‘ไปฌ้žๅธธๆฌข่ฟŽไฝ ๆฅ่‡ชๅทฑ็ผ–ๅ†™ๆฃ€ๆต‹ๅ›พๅƒไธญไบบ็ฑปไธŽ็‹—็š„ๅ‡ฝๆ•ฐ๏ผŒไฝ ๅฏไปฅ้šๆ„ๅœฐไฝฟ็”จไธŠๆ–นๅฎŒๆˆ็š„ face_detector ๅ’Œ dog_detector ๅ‡ฝๆ•ฐใ€‚ไฝ __้œ€่ฆ__ๅœจๆญฅ้ชค5ไฝฟ็”จไฝ ็š„CNNๆฅ้ข„ๆต‹็‹—ๅ“็งใ€‚\nไธ‹้ขๆไพ›ไบ†็ฎ—ๆณ•็š„็คบไพ‹่พ“ๅ‡บ๏ผŒไฝ†ไฝ ๅฏไปฅ่‡ช็”ฑๅœฐ่ฎพ่ฎก่‡ชๅทฑ็š„ๆจกๅž‹๏ผ\n\n<a id='question10'></a> \n้—ฎ้ข˜ 10:\nๅœจไธ‹ๆ–นไปฃ็ ๅ—ไธญๅฎŒๆˆไฝ ็š„ไปฃ็ ใ€‚", "### TODO: ่ฎพ่ฎกไฝ ็š„็ฎ—ๆณ•\n### ่‡ช็”ฑๅœฐไฝฟ็”จๆ‰€้œ€็š„ไปฃ็ ๅ•ๅ…ƒๆ•ฐๅง\nfrom IPython.core.display import Image, display\n\ndef dog_breed_algorithm(img_path):\n if dog_detector(img_path) == 1:\n print(\"hello, dog!\")\n display(Image(img_path,width=200,height=200))\n print(\"Your predicted breed is ... \")\n return print(Xception_predict_breed(img_path))\n elif face_detector(img_path) == 1:\n print(\"hello, human!\")\n display(Image(img_path,width=200,height=200))\n print(\"You look like a ... \")\n return print(Xception_predict_breed(img_path))\n else:\n display(Image(img_path,width=200,height=200))\n return print(\"Could not identify a human or dog in the chosen image. Please try again.\")\n", "<a id='step7'></a>\nๆญฅ้ชค 7: ๆต‹่ฏ•ไฝ ็š„็ฎ—ๆณ•\nๅœจ่ฟ™ไธช้ƒจๅˆ†ไธญ๏ผŒไฝ ๅฐ†ๅฐ่ฏ•ไธ€ไธ‹ไฝ ็š„ๆ–ฐ็ฎ—ๆณ•๏ผ็ฎ—ๆณ•่ฎคไธบ__ไฝ __็œ‹่ตทๆฅๅƒไป€ไนˆ็ฑปๅž‹็š„็‹—๏ผŸๅฆ‚ๆžœไฝ ๆœ‰ไธ€ๅช็‹—๏ผŒๅฎƒๅฏไปฅๅ‡†็กฎๅœฐ้ข„ๆต‹ไฝ ็š„็‹—็š„ๅ“็งๅ—๏ผŸๅฆ‚ๆžœไฝ ๆœ‰ไธ€ๅช็Œซ๏ผŒๅฎƒไผšๅฐ†ไฝ ็š„็Œซ่ฏฏๅˆคไธบไธ€ๅช็‹—ๅ—๏ผŸ\nไธŠไผ ๆ–นๅผ๏ผš็‚นๅ‡ปๅทฆไธŠ่ง’็š„Jupyterๅ›žๅˆฐไธŠ็บง่œๅ•๏ผŒไฝ ๅฏไปฅ็œ‹ๅˆฐJupyter Notebook็š„ๅณไธŠๆ–นไผšๆœ‰UploadๆŒ‰้’ฎใ€‚\n<a id='question11'></a> \n้—ฎ้ข˜ 11:\nๅœจไธ‹ๆ–น็ผ–ๅ†™ไปฃ็ ๏ผŒ็”จ่‡ณๅฐ‘6ๅผ ็Žฐๅฎžไธญ็š„ๅ›พ็‰‡ๆฅๆต‹่ฏ•ไฝ ็š„็ฎ—ๆณ•ใ€‚ไฝ ๅฏไปฅไฝฟ็”จไปปๆ„็…ง็‰‡๏ผŒไธ่ฟ‡่ฏท่‡ณๅฐ‘ไฝฟ็”จไธคๅผ ไบบ็ฑปๅ›พ็‰‡๏ผˆ่ฆๅพๅพ—ๅฝ“ไบ‹ไบบๅŒๆ„ๅ“ฆ๏ผ‰ๅ’Œไธคๅผ ็‹—็š„ๅ›พ็‰‡ใ€‚\nๅŒๆ—ถ่ฏทๅ›ž็ญ”ๅฆ‚ไธ‹้—ฎ้ข˜๏ผš\n\n่พ“ๅ‡บ็ป“ๆžœๆฏ”ไฝ ้ข„ๆƒณ็š„่ฆๅฅฝๅ— :) ๏ผŸๆˆ–่€…ๆ›ด็ณŸ :( ๏ผŸ\nๆๅ‡บ่‡ณๅฐ‘ไธ‰็‚นๆ”น่ฟ›ไฝ ็š„ๆจกๅž‹็š„ๆƒณๆณ•ใ€‚\n\n1.็ป“ๆžœๆฏ”ๆˆ‘้ข„ๆƒณ็š„ๅฅฝใ€‚่ฏฅ็ฎ—ๆณ•ๅฏไปฅๅ‡†็กฎ่ฏ†ๅˆซๅ‡บๅ›พ็‰‡ไธญๆ˜ฏๅฆๅซๆœ‰็‹—ๆˆ–่€…ไบบ\n2. 
1๏ผ‰ๅฏน่ฎญ็ปƒ้›†่ฟ›่กŒๆ•ฐๆฎๅขžๅผบไปฅไผ˜ๅŒ–ๆจกๅž‹็š„่กจ็Žฐ\n 2๏ผ‰ไผ˜ๅŒ–็ฅž็ป็ฝ‘็ปœ็ป“ๆž„\n 3๏ผ‰ๅขžๅคงๆ•ฐๆฎ้›†ๆ•ฐๆฎ", "## TODO: ๅœจไฝ ็š„็”ต่„‘ไธŠ๏ผŒๅœจๆญฅ้ชค6ไธญ๏ผŒ่‡ณๅฐ‘ๅœจ6ๅผ ๅ›พ็‰‡ไธŠ่ฟ่กŒไฝ ็š„็ฎ—ๆณ•ใ€‚\n## ่‡ช็”ฑๅœฐไฝฟ็”จๆ‰€้œ€็š„ไปฃ็ ๅ•ๅ…ƒๆ•ฐๅง\nfor i in range(1, 7):\n filename = 'images/' + str(i) + '.jpg'\n print('filename = ' + filename)\n dog_breed_algorithm(filename)\n print('\\n')", "ๆณจๆ„: ๅฝ“ไฝ ๅ†™ๅฎŒไบ†ๆ‰€ๆœ‰็š„ไปฃ็ ๏ผŒๅนถไธ”ๅ›ž็ญ”ไบ†ๆ‰€ๆœ‰็š„้—ฎ้ข˜ใ€‚ไฝ ๅฐฑๅฏไปฅๆŠŠไฝ ็š„ iPython Notebook ๅฏผๅ‡บๆˆ HTML ๆ–‡ไปถใ€‚ไฝ ๅฏไปฅๅœจ่œๅ•ๆ ๏ผŒ่ฟ™ๆ ทๅฏผๅ‡บFile -> Download as -> HTML (.html)ๆŠŠ่ฟ™ไธช HTML ๅ’Œ่ฟ™ไธช iPython notebook ไธ€่ตทๅšไธบไฝ ็š„ไฝœไธšๆไบคใ€‚" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
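The dog-breed cells above repeatedly mention training-set augmentation as an optional improvement (questions 5, 7 and 11) but never show it. Below is a minimal, hedged sketch of how such augmentation could be wired into the from-scratch Keras model; the generator settings are illustrative assumptions, and the `model`, `train_tensors`, `train_targets`, `valid_tensors` and `valid_targets` names refer to objects defined in the cells above rather than here.

```python
# Hedged sketch only: data augmentation for the from-scratch CNN above.
# Assumes `model`, `train_tensors`, `train_targets`, `valid_tensors` and
# `valid_targets` already exist as in the notebook cells; the augmentation
# parameters are illustrative, not tuned values from the original project.
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

datagen = ImageDataGenerator(
    rotation_range=20,       # small random rotations
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True)    # random left-right flips

checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.augmented.hdf5',
                               verbose=1, save_best_only=True)

# Keras 2.x generator-based training loop
model.fit_generator(datagen.flow(train_tensors, train_targets, batch_size=20),
                    steps_per_epoch=len(train_tensors) // 20,
                    epochs=5,
                    validation_data=(valid_tensors, valid_targets),
                    callbacks=[checkpointer], verbose=1)
```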
ES-DOC/esdoc-jupyterhub
notebooks/uhh/cmip6/models/sandbox-1/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: UHH\nSource ID: SANDBOX-1\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. 
Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. 
Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. 
Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. 
New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "ยฉ2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
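The sea-ice record above is an ES-DOC fill-in template in which each cell sets a property id and then a value. As a purely illustrative example of how such a template is completed, the sketch below fills in a few properties; the author name and the chosen values are placeholders, not the real UHH SANDBOX-1 configuration, and the use of repeated `set_value` calls for multi-valued (1.N) properties is an assumption about the pyesdoc API rather than documented behaviour.

```python
# Illustrative only: completing a few of the ES-DOC cells above.
# All values below are placeholders, NOT real UHH SANDBOX-1 metadata.
from pyesdoc.ipython.model_topic import NotebookOutput

DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'seaice')

DOC.set_author("Jane Doe", "jane.doe@example.org")   # hypothetical author
DOC.set_publication_status(0)                        # 0 = do not publish

# 1.2 Model Name (STRING, cardinality 1.1)
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
DOC.set_value("CICE 4.2")                            # example name quoted in the template text

# 2.1 Prognostic variables (ENUM, cardinality 1.N)
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
DOC.set_value("Sea ice concentration")               # assumed: one call per selected choice
DOC.set_value("Sea ice thickness")
```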
zklgame/CatEyeNets
test/two_layer_net.ipynb
mit
[ "Implementing a Neural Network\nIn this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.", "import os\nos.chdir(os.getcwd() + '/..')\n\n# Run some setup code for this notebook\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom utils.data_utils import load_CIFAR10\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\nfrom classifiers.neural_net import TwoLayerNet\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / np.maximum(1e-8, np.abs(x) + np.abs(y)))\n\n# Create a small net and toy data to check implementations.\n# set random seed for repeatable experiments.\ninput_size = 4\nhidden_size = 10\nnum_classes = 3\nnum_inputs = 5\n\ndef init_toy_model():\n np.random.seed(0)\n return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)\n\ndef init_toy_data():\n np.random.seed(1)\n X = 10 * np.random.randn(num_inputs, input_size)\n y = np.array([0, 1, 2, 2, 1])\n return X, y\n\nnet = init_toy_model()\nX, y = init_toy_data()", "Forward pass: compute scores", "scores = net.loss(X)\nprint('scores: ')\nprint(scores)\nprint\n\nprint('correct scores:')\ncorrect_scores = np.asarray([\n [-0.81233741, -1.27654624, -0.70335995],\n [-0.17129677, -1.18803311, -0.47310444],\n [-0.51590475, -1.01354314, -0.8504215 ],\n [-0.15419291, -0.48629638, -0.52901952],\n [-0.00618733, -0.12435261, -0.15226949]])\nprint(correct_scores)\nprint\n\n# The difference should be very small, get < 1e-7\nprint('Difference between your scores and correct scores:')\nprint(np.sum(np.abs(scores - correct_scores)))", "Forward pass: compute loss", "loss, _ = net.loss(X, y, reg=0.05)\ncorrent_loss = 1.30378789133\n\n# should be very small, get < 1e-12\nprint('Difference between your loss and correct loss:')\nprint(np.sum(np.abs(loss - corrent_loss)))", "Backward pass\nImplement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:", "from utils.gradient_check import eval_numerical_gradient\n\nloss, grads = net.loss(X, y, reg=0.05)\n\n# these should all be less than 1e-8 or so\nfor param_name in grads:\n f = lambda W: net.loss(X, y, reg=0.05)[0]\n param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))", "Train the network\nOnce you have implemented the method, run the code below to train a two-layer network on toy data. 
You should achieve a training loss less than 0.2.", "net = init_toy_model()\nstats = net.train(X, y, X, y, learning_rate=1e-1, reg=5e-6, num_iters=100, verbose=False)\nprint('Final training loss: ', stats['loss_history'][-1])\n\n# plot the loss history\nplt.plot(stats['loss_history'])\nplt.xlabel('iteration')\nplt.ylabel('training loss')\nplt.title('Training Loss history')\nplt.show()", "Load the data\nNow that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.", "# Load the raw CIFAR-10 data\ncifar10_dir = 'datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# Split the data\nnum_training = 49000\nnum_validation = 1000\nnum_test = 1000\n\nmask = range(num_training, num_training+num_validation)\nX_val = X_train[mask]\ny_val = y_train[mask]\n\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nmask = xrange(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Preprocessing: reshape the image data into rows\nX_train = X_train.reshape(X_train.shape[0], -1)\nX_val = X_val.reshape(X_val.shape[0], -1)\nX_test = X_test.reshape(X_test.shape[0], -1)\n\n# Normalize the data: subtract the mean rows\nmean_image = np.mean(X_train, axis=0)\nX_train -= mean_image\nX_val -= mean_image\nX_test -= mean_image\n\nprint(X_train.shape, X_val.shape, X_test.shape)", "Train a network\nTo train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.", "input_size = 32 * 32 * 3\nhidden_size = 50\nnum_classes = 10\nnet = TwoLayerNet(input_size, hidden_size, num_classes)\n\n# Train the network\nstats = net.train(X_train, y_train, X_val, y_val,\n learning_rate=1e-4, learning_rate_decay=0.95,\n reg=0.25, num_iters=1000, batch_size=200, verbose=True)\n\n# Predict on the validation set\nval_acc = (net.predict(X_val) == y_val).mean()\nprint('Validation accuracy: ', val_acc)", "Debug the training\nWith the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.\nOne strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.\nAnother strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.", "# Plot the loss function and train / validation accuracies\nplt.subplot(2, 1, 1)\nplt.plot(stats['loss_history'])\nplt.title('Loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(stats['train_acc_history'], label='train')\nplt.plot(stats['val_acc_history'], label='val')\nplt.title('Classification accuracy history')\nplt.legend()\nplt.xlabel('Epoch')\nplt.ylabel('Classification accuracy')\n\nplt.show()\n\nfrom utils.vis_utils import visualize_grid\n\n# Visualize the weights of the network\ndef show_net_weights(net):\n W1 = net.params['W1']\n W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)\n plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))\n plt.gca().axis('off')\n plt.show()\n \nshow_net_weights(net)", "Tune your hyperparameters\nWhat's wrong?. 
Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.\nTuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, numer of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.\nApproximate results. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.\nExperiment: You goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).", "input_size = 32 * 32 * 3\nnum_classes = 10\n\nhidden_layer_size = [50]\nlearning_rates = [3e-4, 9e-4, 1e-3, 3e-3]\nregularization_strengths = [7e-1, 8e-1, 9e-1, 1]\n\nresults = {}\n\nbest_model = None\nbest_val = -1\n\nfor hidden_size in hidden_layer_size:\n for lr in learning_rates:\n for reg in regularization_strengths:\n model = TwoLayerNet(input_size, hidden_size, num_classes, std=1e-3)\n stats = model.train(X_train, y_train, X_val, y_val,\n learning_rate=lr, learning_rate_decay=0.95,\n reg=reg, num_iters=5000, batch_size=200, verbose=True)\n \n train_acc = (model.predict(X_train) == y_train).mean()\n val_acc = (model.predict(X_val) == y_val).mean()\n print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))\n \n results[(hidden_size, lr, reg)] = (train_acc, val_acc)\n if val_acc > best_val:\n best_val = val_acc\n best_model = model\n print\nprint\n \nprint('best val_acc: %f' % (best_val))\n \nold_lr = -1\nfor hidden_size, lr, reg in sorted(results):\n if old_lr != lr:\n old_lr = lr\n print\n \n train_acc, val_acc = results[(hidden_size, lr, reg)]\n print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))\n\nfor hidden_size, lr, reg in sorted(results):\n train_acc, val_acc = results[(hidden_size, lr, reg)]\n print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))\n\n# visualize the weights of the best network\nshow_net_weights(best_model)", "Run on the test set\nWhen you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.\nWe will give you extra bonus point for every 1% of accuracy above 52%.", "test_acc = (best_model.predict(X_test) == y_test).mean()\nprint('Test accuracy: ', test_acc)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sueiras/training
tensorflow_old/02-template_class/template_class.ipynb
gpl-3.0
[ "Standar usage of TensoFlow with model class\nTipically use 3 files:\n - data_utils.py: With the data access and batch generator functions\n - model.py: With the class model. A constructor with the graph definition and method to manage model needs\n - train.py: With parameters. Access to the data, instance the model and train it. Optionaly add a parameter to train or inference.\ndata_utils.py", "#! /usr/bin/env python\n\nimport tensorflow as tf\n\n# Access to the data\ndef get_data(data_dir='/tmp/MNIST_data'):\n from tensorflow.examples.tutorials.mnist import input_data\n return input_data.read_data_sets(data_dir, one_hot=True)\n\n\n#Batch generator\ndef batch_generator(mnist, batch_size=256, type='train'):\n if type=='train':\n return mnist.train.next_batch(batch_size)\n else:\n return mnist.test.next_batch(batch_size)\n ", "model_mnist_cnn.py", "#! /usr/bin/env python\n\nimport tensorflow as tf\n\nclass mnistCNN(object):\n \"\"\"\n A NN for mnist classification.\n \"\"\"\n def __init__(self, dense=500):\n \n # Placeholders for input, output and dropout\n self.input_x = tf.placeholder(tf.float32, [None, 784], name=\"input_x\")\n self.input_y = tf.placeholder(tf.float32, [None, 10], name=\"input_y\")\n \n # First layer\n self.dense_1 = self.dense_layer(self.input_x, input_dim=784, output_dim=dense)\n\n # Final layer\n self.dense_2 = self.dense_layer(self.dense_1, input_dim=dense, output_dim=10)\n\n self.predictions = tf.argmax(self.dense_2, 1, name=\"predictions\")\n \n # Loss function\n self.loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(self.dense_2, self.input_y))\n \n # Accuracy\n correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))\n self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, \"float\"), name=\"accuracy\")\n \n\n def dense_layer(self, x, input_dim=10, output_dim=10, name='dense'):\n '''\n Dense layer function\n Inputs:\n x: Input tensor\n input_dim: Dimmension of the input tensor.\n output_dim: dimmension of the output tensor\n name: Layer name\n '''\n W = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1), name='W_'+name)\n b = tf.Variable(tf.constant(0.1, shape=[output_dim]), name='b_'+name)\n dense_output = tf.nn.relu(tf.matmul(x, W) + b)\n return dense_output", "train.py", "#! 
/usr/bin/env python\n\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\n#from data_utils import get_data, batch_generator\n#from model_mnist_cnn import mnistCNN\n\n\n# Parameters\n# ==================================================\n\n# Data loading params\ntf.flags.DEFINE_string(\"data_directory\", '/tmp/MNIST_data', \"Data dir (default /tmp/MNIST_data)\")\n\n# Model Hyperparameters\ntf.flags.DEFINE_integer(\"dense_size\", 500, \"dense_size (default 500)\")\n\n# Training parameters\ntf.flags.DEFINE_float(\"learning_rate\", 0.001, \"learning rate (default: 0.001)\")\ntf.flags.DEFINE_integer(\"batch_size\", 256, \"Batch Size (default: 256)\")\ntf.flags.DEFINE_integer(\"num_epochs\", 20, \"Number of training epochs (default: 20)\")\n\n# Misc Parameters\ntf.flags.DEFINE_boolean(\"log_device_placement\", False, \"Log placement of ops on devices\")\n\nFLAGS = tf.flags.FLAGS\nFLAGS._parse_flags()\nprint(\"\\nParameters:\")\nfor attr, value in sorted(FLAGS.__flags.items()):\n print(\"{}={}\".format(attr.upper(), value))\nprint(\"\")\n\n\n# Data Preparation\n# ==================================================\n\n#Access to the data\nmnist_data = get_data(data_dir= FLAGS.data_directory)\n\n\n# Training\n# ==================================================\n\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333, allow_growth = True)\nwith tf.Graph().as_default():\n session_conf = tf.ConfigProto(\n gpu_options=gpu_options,\n log_device_placement=FLAGS.log_device_placement)\n sess = tf.Session(config=session_conf)\n with sess.as_default():\n \n # Create model\n cnn = mnistCNN(dense=FLAGS.dense_size)\n \n # Trainer\n train_op = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(cnn.loss)\n\n # Saver\n saver = tf.train.Saver(max_to_keep=1)\n\n # Initialize all variables\n sess.run(tf.global_variables_initializer())\n\n # Train proccess\n for epoch in range(FLAGS.num_epochs):\n for n_batch in range(int(55000/FLAGS.batch_size)):\n batch = batch_generator(mnist_data, batch_size=FLAGS.batch_size, type='train')\n _, ce = sess.run([train_op, cnn.loss], feed_dict={cnn.input_x: batch[0], cnn.input_y: batch[1]})\n\n print(epoch, ce)\n model_file = saver.save(sess, '/tmp/mnist_model')\n print('Model saved in', model_file)\n\n\n" ]
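The introduction to this template mentions optionally adding a parameter to switch between training and inference, but only the training script is shown. A possible inference counterpart is sketched below; it assumes the checkpoint path ('/tmp/mnist_model') and dense size (500) used by train.py, rebuilds the graph with the same mnistCNN class, and restores the saved weights. This is a sketch under those assumptions, not part of the original template.

```python
#! /usr/bin/env python
# Hypothetical inference counterpart to train.py (a sketch, not part of the template).
import tensorflow as tf

from data_utils import get_data
from model_mnist_cnn import mnistCNN

mnist_data = get_data('/tmp/MNIST_data')

with tf.Graph().as_default():
    sess = tf.Session()
    with sess.as_default():
        # Rebuild the same graph, then restore the variables saved by train.py
        cnn = mnistCNN(dense=500)
        saver = tf.train.Saver()
        saver.restore(sess, '/tmp/mnist_model')

        batch = mnist_data.test.next_batch(10)
        preds = sess.run(cnn.predictions, feed_dict={cnn.input_x: batch[0]})
        print(preds)
```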
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
dariox2/CADL
session-1/.ipynb_checkpoints/session-1-checkpoint.ipynb
apache-2.0
[ "Session 1 - Introduction to Tensorflow\n<p class=\"lead\">\nAssignment: Creating a Dataset/Computing with Tensorflow\n</p>\n\n<p class=\"lead\">\nParag K. Mital<br />\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning w/ Tensorflow</a><br />\n<a href=\"https://www.kadenze.com/partners/kadenze-academy\">Kadenze Academy</a><br />\n<a href=\"https://twitter.com/hashtag/CADL\">#CADL</a>\n</p>\n\nThis work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.\nLearning Goals\n\nLearn how to normalize a dataset by calculating the mean/std. deviation\nLearn how to use convolution\nExplore what representations exist in your dataset\n\nOutline\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nAssignment Synopsis\nPart One - Create a Small Dataset\nInstructions\nCode\nPart Two - Compute the Mean\nInstructions\nCode\nPart Three - Compute the Standard Deviation\nInstructions\nCode\nPart Four - Normalize the Dataset\nInstructions\nCode\nPart Five - Convolve the Dataset\nInstructions\nCode\nPart Six - Sort the Dataset\nInstructions\nCode\nAssignment Submission\n\n<!-- /MarkdownTOC -->\n\n<h1>Notebook</h1>\n\nEverything you will need to do will be inside of this notebook, and I've marked which cells you will need to edit by saying <b><font color='red'>\"TODO! COMPLETE THIS SECTION!\"</font></b>. For you to work with this notebook, you'll either download the zip file from the resources section on Kadenze or clone the github repo (whichever you are more comfortable with), and then run notebook inside the same directory as wherever this file is located using the command line \"jupyter notebook\" or \"ipython notebook\" (using Terminal on Unix/Linux/OSX, or Command Line/Shell/Powershell on Windows). If you are unfamiliar with jupyter notebook, please look at Installation Preliminaries and Session 0 before starting!\nOnce you have launched notebook, this will launch a web browser with the contents of the zip files listed. Click the file \"session-1.ipynb\" and this document will open in an interactive notebook, allowing you to \"run\" the cells, computing them using python, and edit the text inside the cells.\n<a name=\"assignment-synopsis\"></a>\nAssignment Synopsis\nThis first homework assignment will guide you through working with a small dataset of images. For Part 1, you'll need to find 100 images and use the function I've provided to create a montage of your images, saving it to the file \"dataset.png\" (template code provided below). You can load an existing dataset of images, find your own images, or perhaps create your own images using a creative process such as painting, photography, or something along those lines. Each image will be reshaped to 100 x 100 pixels. There needs to be at least 100 images. For Parts 2 and 3, you'll then calculate the mean and deviation of it using a tensorflow session. In Part 4, you'll normalize your dataset using the mean and deviation. Then in Part 5, you will convolve your normalized dataset. For Part 6, you'll need to sort the entire convolved dataset. Finally, the last part will package everything for you in a zip file which you can upload to Kadenze to get assessed (only if you are a Kadenze Premium member, $10 p/m, free for the first month). 
Remember to complete the additional excercises online, including the Gallery participation and the Forum post. If you have any questions, be sure to enroll in the course and ask your peers in the #CADL community or me on the forums!\nhttps://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\nThe following assignment breakdown gives more detailed instructions and includes template code for you to fill out. Good luck!\n<a name=\"part-one---create-a-small-dataset\"></a>\nPart One - Create a Small Dataset\n<a name=\"instructions\"></a>\nInstructions\nUse Python, Numpy, and Matplotlib to load a dataset of 100 images and create a montage of the dataset as a 10 x 10 image using the function below. You'll need to make sure you call the function using a 4-d array of N x H x W x C dimensions, meaning every image will need to be the same size! You can load an existing dataset of images, find your own images, or perhaps create your own images using a creative process such as painting, photography, or something along those lines.\nWhen you are creating your dataset, I want you to think about what representations might exist in the limited amount of data that you are organizing. It is only 100 images after all, not a whole lot for a computer to reason about and learn something meaningful. So <b>think about creating a dataset of images that could possibly reveal something fundamental about what is contained in the images</b>. Try to think about creating a set of images that represents something. For instance, this might be images of yourself over time. Or it might be every picture you've ever taken of your cat. Or perhaps the view from your room at different times of the day. Consider making the changes within each image as significant as possible. As \"representative\" of the thing you want to capture as possible. Hopefully by the end of this lesson, you'll understand a little better the difference between what a computer thinks is significant and what you yourself thought was significant.\nThe code below will show you how to resize and/or crop your images so that they are 100 pixels x 100 pixels in height and width. Once you have 100 images loaded, we'll use a montage function to draw and save your dataset to the file <b>dataset.png</b>.\n<a name=\"code\"></a>\nCode\nThis next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you \"run\" it (use \"shift+enter\")!", "# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n' \\\n 'You should consider updating to Python 3.4.0 or ' \\\n 'higher as the libraries built for this course ' \\\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda '\n 'and then restart `jupyter notebook`:\\n' \\\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\nexcept ImportError:\n print('You are missing some packages! 
' \\\n 'We will try installing them before continuing!')\n !pip install \"numpy>=1.11.0\" \"matplotlib>=1.5.1\" \"scikit-image>=0.11.3\" \"scikit-learn>=0.17\"\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n print('Done!')\n\n# Import Tensorflow\ntry:\n import tensorflow as tf\nexcept ImportError:\n print(\"You do not have tensorflow installed!\")\n print(\"Follow the instructions on the following link\")\n print(\"to install tensorflow before continuing:\")\n print(\"\")\n print(\"https://github.com/pkmital/CADL#installation-preliminaries\")\n\n# This cell includes the provided libraries from the zip file\ntry:\n from libs import utils\nexcept ImportError:\n print(\"Make sure you have started notebook in the same directory\" +\n \" as the provided zip file which includes the 'libs' folder\" +\n \" and the file 'utils.py' inside of it. You will NOT be able\"\n \" to complete this assignment unless you restart jupyter\"\n \" notebook inside the directory created by extracting\"\n \" the zip file or cloning the github repo.\")\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Bit of formatting because inline code is not styled very good by default:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")", "Places your images in a folder such as dirname = '/Users/Someone/Desktop/ImagesFromTheInternet'. We'll then use the os package to load them and crop/resize them to a standard size of 100 x 100 pixels.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# You need to find 100 images from the web/create them yourself\n# or find a dataset that interests you (e.g. I used celeb faces\n# in the course lecture...)\n# then store them all in a single directory.\n# With all the images in a single directory, you can then\n# perform the following steps to create a 4-d array of:\n# N x H x W x C dimensions as 100 x 100 x 100 x 3.\n\ndirname = ...\n\n# Load every image file in the provided directory\nfilenames = [os.path.join(dirname, fname)\n for fname in os.listdir(dirname)]\n\n# Make sure we have exactly 100 image files!\nfilenames = filenames[:100]\nassert(len(filenames) == 100)\n\n# Read every filename as an RGB image\nimgs = [plt.imread(fname)[..., :3] for fname in filenames]\n\n# Crop every image to a square\nimgs = [utils.imcrop_tosquare(img_i) for img_i in imgs]\n\n# Then resize the square image to 100 x 100 pixels\nimgs = [resize(img_i, (100, 100)) for img_i in imgs]\n\n# Finally make our list of 3-D images a 4-D array with the first dimension the number of images:\nimgs = np.array(imgs).astype(np.float32)\n\n# Plot the resulting dataset:\n# Make sure you \"run\" this cell after you create your `imgs` variable as a 4-D array!\n# Make sure we have a 100 x 100 x 100 x 3 dimension array\nassert(imgs.shape == (100, 100, 100, 3))\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(imgs, saveto='dataset.png'))", "<a name=\"part-two---compute-the-mean\"></a>\nPart Two - Compute the Mean\n<a name=\"instructions-1\"></a>\nInstructions\nFirst use Tensorflow to define a session. Then use Tensorflow to create an operation which takes your 4-d array and calculates the mean color image (100 x 100 x 3) using the function tf.reduce_mean. 
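(If it helps, here is a tiny toy sketch of what reducing over the first axis does — made-up data only, not the assignment solution:)

```python
import numpy as np
import tensorflow as tf

toy = np.arange(24, dtype=np.float32).reshape(2, 3, 4)   # pretend: 2 "images" of 3 x 4
with tf.Session() as s:
    per_pixel_mean = s.run(tf.reduce_mean(toy, 0))        # average over the image axis
print(per_pixel_mean.shape)                               # (3, 4): one value per "pixel"
```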
Have a look at the documentation for this function to see how it works in order to get the mean of every pixel and get an image of (100 x 100 x 3) as a result. You'll then calculate the mean image by running the operation you create with your session (e.g. <code>sess.run(...)</code>). Finally, plot the mean image, save it, and then include this image in your zip file as <b>mean.png</b>.\n<a name=\"code-1\"></a>\nCode\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# First create a tensorflow session\nsess = ...\n\n# Now create an operation that will calculate the mean of your images\nmean_img_op = ...\n\n# And then run that operation using your session\nmean_img = sess.run(mean_img_op)\n\n# Then plot the resulting mean image:\n# Make sure the mean image is the right size!\nassert(mean_img.shape == (100, 100, 3))\nplt.figure(figsize=(10, 10))\nplt.imshow(mean_img)\nplt.imsave(arr=mean_img, fname='mean.png')", "Once you have seen the mean image of your dataset, how does it relate to your own expectations of the dataset? Did you expect something different? Was there something more \"regular\" or \"predictable\" about your dataset that the mean image did or did not reveal? If your mean image looks a lot like something recognizable, it's a good sign that there is a lot of predictability in your dataset. If your mean image looks like nothing at all, a gray blob where not much seems to stand out, then it's pretty likely that there isn't very much in common between your images. Neither is a bad scenario. Though, it is more likely that having some predictability in your mean image, e.g. something recognizable, that there are representations worth exploring with deeper networks capable of representing them. However, we're only using 100 images so it's a very small dataset to begin with.\n<a name=\"part-three---compute-the-standard-deviation\"></a>\nPart Three - Compute the Standard Deviation\n<a name=\"instructions-2\"></a>\nInstructions\nNow use tensorflow to calculate the standard deviation and upload the standard deviation image averaged across color channels as a \"jet\" heatmap of the 100 images. This will be a little more involved as there is no operation in tensorflow to do this for you. However, you can do this by calculating the mean image of your dataset as a 4-D array. To do this, you could write e.g. mean_img_4d = tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True) to give you a 1 x H x W x C dimension array calculated on the N x H x W x C images variable. The reduction_indices parameter is saying to calculate the mean over the 0th dimension, meaning for every possible H, W, C, or for every pixel, you will have a mean composed over the N possible values it could have had, or what that pixel was for every possible image. This way, you can write images - mean_img_4d to give you a N x H x W x C dimension variable, with every image in your images array having been subtracted by the mean_img_4d. If you calculate the square root of the sum of the squared differences of this resulting operation, you have your standard deviation!\nIn summary, you'll need to write something like: subtraction = imgs - tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True), then reduce this operation using tf.sqrt(tf.reduce_sum(subtraction * subtraction, reduction_indices=0)) to get your standard deviation then include this image in your zip file as <b>std.png</b>\n<a name=\"code-2\"></a>\nCode\n<h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>", "# Create a tensorflow operation to give you the standard deviation\n\n# First compute the difference of every image with a\n# 4 dimensional mean image shaped 1 x H x W x C\nmean_img_4d = ...\n\nsubtraction = imgs - mean_img_4d\n\n# Now compute the standard deviation by calculating the\n# square root of the sum of squared differences\nstd_img_op = tf.sqrt(tf.reduce_sum(subtraction * subtraction, reduction_indices=0))\n\n# Now calculate the standard deviation using your session\nstd_img = sess.run(std_img_op)\n\n# Then plot the resulting standard deviation image:\n# Make sure the std image is the right size!\nassert(std_img.shape == (100, 100) or std_img.shape == (100, 100, 3))\nplt.figure(figsize=(10, 10))\nstd_img_show = std_img / np.max(std_img)\nplt.imshow(std_img_show)\nplt.imsave(arr=std_img_show, fname='std.png')", "Once you have plotted your dataset's standard deviation per pixel, what does it reveal about your dataset? Like with the mean image, you should consider what is predictable and not predictable about this image.\n<a name=\"part-four---normalize-the-dataset\"></a>\nPart Four - Normalize the Dataset\n<a name=\"instructions-3\"></a>\nInstructions\nUsing tensorflow, we'll attempt to normalize your dataset using the mean and standard deviation. \n<a name=\"code-3\"></a>\nCode\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "norm_imgs_op = ...\n\nnorm_imgs = sess.run(norm_imgs_op)\nprint(np.min(norm_imgs), np.max(norm_imgs))\nprint(imgs.dtype)\n\n# Then plot the resulting normalized dataset montage:\n# Make sure we have a 100 x 100 x 100 x 3 dimension array\nassert(norm_imgs.shape == (100, 100, 100, 3))\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(norm_imgs, 'normalized.png'))", "We apply another type of normalization to 0-1 just for the purposes of plotting the image. If we didn't do this, the range of our values would be somewhere between -1 and 1, and matplotlib would not be able to interpret the entire range of values. By rescaling our -1 to 1 valued images to 0-1, we can visualize it better.", "norm_imgs_show = (norm_imgs - np.min(norm_imgs)) / (np.max(norm_imgs) - np.min(norm_imgs))\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(norm_imgs_show, 'normalized.png'))", "<a name=\"part-five---convolve-the-dataset\"></a>\nPart Five - Convolve the Dataset\n<a name=\"instructions-4\"></a>\nInstructions\nUsing tensorflow, we'll attempt to convolve your dataset with one of the kernels we created during the lesson, and then in the next part, we'll take the sum of the convolved output to use for sorting. You should use the function utils.gabor to create an edge detector. You can also explore with the utils.gauss2d kernel. What you must figure out is how to reshape your kernel to be 4-dimensional: K_H, K_W, C_I, and C_O, corresponding to the kernel's height and width (e.g. 16), the number of input channels (RGB = 3 input channels), and the number of output channels, (1).\n<a name=\"code-4\"></a>\nCode\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# First build 3 kernels for each input color channel\nksize = ...\nkernel = np.concatenate([utils.gabor(ksize)[:, :, np.newaxis] for i in range(3)], axis=2)\n \n# Now make the kernels into the shape: [ksize, ksize, 3, 1]:\nkernel_4d = ...\nassert(kernel_4d.shape == (ksize, ksize, 3, 1))", "We'll Perform the convolution with the 4d tensor in kernel_4d. 
This is a ksize x ksize x 3 x 1 tensor, where each input color channel corresponds to one filter with 1 output. Each filter looks like:", "plt.figure(figsize=(5, 5))\nplt.imshow(kernel_4d[:, :, 0, 0], cmap='gray')\nplt.imsave(arr=kernel_4d[:, :, 0, 0], fname='kernel.png', cmap='gray')", "Perform the convolution with the 4d tensors:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "convolved = utils.convolve(...\n\nconvolved_show = (convolved - np.min(convolved)) / (np.max(convolved) - np.min(convolved))\nprint(convolved_show.shape)\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(convolved_show[..., 0], 'convolved.png'), cmap='gray')", "What we've just done is build a \"hand-crafted\" feature detector: the Gabor Kernel. This kernel is built to respond to particular orientation: horizontal edges, and a particular scale. It also responds equally to R, G, and B color channels, as that is how we have told the convolve operation to work: use the same kernel for every input color channel. When we work with deep networks, we'll see how we can learn the convolution kernels for every color channel, and learn many more of them, in the order of 100s per color channel. That is really where the power of deep networks will start to become obvious. For now, we've seen just how difficult it is to get at any higher order features of the dataset. We've really only picked out some edges!\n<a name=\"part-six---sort-the-dataset\"></a>\nPart Six - Sort the Dataset\n<a name=\"instructions-5\"></a>\nInstructions\nUsing tensorflow, we'll attempt to organize your dataset. We'll try sorting based on the mean value of each convolved image's output to use for sorting. To do this, we could calculate either the sum value (tf.reduce_sum) or the mean value (tf.reduce_mean) of each image in your dataset and then use those values, e.g. stored inside a variable values to sort your images using something like tf.nn.top_k and sorted_imgs = np.array([imgs[idx_i] for idx_i in idxs]) prior to creating the montage image, m = montage(sorted_imgs, \"sorted.png\") and then include this image in your zip file as <b>sorted.png</b>\n<a name=\"code-5\"></a>\nCode\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# Create a set of operations using tensorflow which could\n# provide you for instance the sum or mean value of every\n# image in your dataset:\n\n# First flatten our convolved images so instead of many 3d images,\n# we have many 1d vectors.\n# This should convert our 4d representation of N x H x W x C to a\n# 2d representation of N x (H*W*C)\nflattened = tf.reshape(convolved...\nassert(flattened.get_shape().as_list() == [100, 10000])\n\n# Now calculate some statistics about each of our images\nvalues = tf.reduce_sum(flattened, reduction_indices=1)\n\n# Then create another operation which sorts those values\n# and then calculate the result:\nidxs_op = tf.nn.top_k(values, k=100)[1]\nidxs = sess.run(idxs_op)\n\n# Then finally use the sorted indices to sort your images:\nsorted_imgs = np.array([imgs[idx_i] for idx_i in idxs])\n\n# Then plot the resulting sorted dataset montage:\n# Make sure we have a 100 x 100 x 100 x 3 dimension array\nassert(sorted_imgs.shape == (100, 100, 100, 3))\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(sorted_imgs, 'sorted.png'))", "What does your sorting reveal? Could you imagine the same sorting over many more images reveal the thing your dataset sought to represent? 
It is likely that the representations you wanted to find are hidden within \"higher layers\", i.e., \"deeper features\" of the image, and that these \"low level\" features, essentially edges, are not very good at describing the really interesting aspects of your dataset. In later sessions, we'll see how we can combine the outputs of many more convolution kernels that have been assembled in a way that accentuates something very particular about each image, and build a sorting that is much more intelligent than this one!\n<a name=\"assignment-submission\"></a>\nAssignment Submission\nNow that you've completed all 6 parts, we'll create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:\n<pre>\n session-1/\n session-1.ipynb\n dataset.png\n mean.png\n std.png\n normalized.png\n kernel.png\n convolved.png\n sorted.png\n libs/\n utils.py\n</pre>\n\nYou'll then submit this zip file for your first assignment on Kadenze for \"Assignment 1: Datasets/Computing with Tensorflow\"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.\n<b>To get assessed, you'll need to be a premium student which is free for a month!</b> If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\nThen remember to complete the remaining parts of Assignment 1 on Kadenze!:\n* Comment on 1 student's open-ended arrangement (Part 6) in the course gallery titled \"Creating a Dataset/ Computing with Tensorflow\". Think about what images they've used in their dataset and how the arrangement reflects what could be represented by that data.\n* Finally make a forum post in the forum for this assignment \"Creating a Dataset/ Computing with Tensorflow\".\n - Include a link to an artist making use of machine learning to organize data or find representations within large datasets\n - Tell a little about their work (min 20 words).\n - Comment on at least 2 other students' forum posts (min 20 words)\nMake sure your notebook is named \"session-1\" or else replace it with the correct name in the list of files below:", "utils.build_submission('session-1.zip',\n ('dataset.png',\n 'mean.png',\n 'std.png',\n 'normalized.png',\n 'kernel.png',\n 'convolved.png',\n 'sorted.png',\n 'session-1.ipynb'))" ]
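As a footnote to Part Five of this assignment: the 4-D reshape of the kernel is left for you to figure out, but the reshape itself is plain NumPy. The sketch below uses a random stand-in for utils.gabor (which is not reproduced here) purely to illustrate the K_H x K_W x C_I x C_O shape; it is not the assignment solution.

```python
import numpy as np

ksize = 16
fake_gabor = np.random.randn(ksize, ksize)            # stand-in for utils.gabor(ksize)
kernel = np.concatenate([fake_gabor[:, :, np.newaxis] for _ in range(3)], axis=2)
kernel_4d = kernel[:, :, :, np.newaxis]               # equivalently: kernel.reshape(ksize, ksize, 3, 1)
print(kernel_4d.shape)                                # (16, 16, 3, 1): K_H, K_W, C_I, C_O
```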
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arcyfelix/Courses
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
apache-2.0
[ "Advanced Matplotlib Concepts Lecture\nIn this lecture we cover some more advanced topics which you won't usually use as often. You can always reference the documentation for more resources!\nLogarithmic scale\nIt is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales are set seperately using set_xscale and set_yscale methods which accept one parameter (with the value \"log\" in this case):", "fig, axes = plt.subplots(1, 2, figsize=(10,4))\n \naxes[0].plot(x, x**2, x, np.exp(x))\naxes[0].set_title(\"Normal scale\")\n\naxes[1].plot(x, x**2, x, np.exp(x))\naxes[1].set_yscale(\"log\")\naxes[1].set_title(\"Logarithmic scale (y)\");", "Placement of ticks and custom tick labels\nWe can explicitly determine where we want the axis ticks with set_xticks and set_yticks, which both take a list of values for where on the axis the ticks are to be placed. We can also use the set_xticklabels and set_yticklabels methods to provide a list of custom text labels for each tick location:", "fig, ax = plt.subplots(figsize=(10, 4))\n\nax.plot(x, x**2, x, x**3, lw=2)\n\nax.set_xticks([1, 2, 3, 4, 5])\nax.set_xticklabels([r'$\\alpha$', r'$\\beta$', r'$\\gamma$', r'$\\delta$', r'$\\epsilon$'], fontsize=18)\n\nyticks = [0, 50, 100, 150]\nax.set_yticks(yticks)\nax.set_yticklabels([\"$%.1f$\" % y for y in yticks], fontsize=18); # use LaTeX formatted labels", "There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.\nScientific notation\nWith large numbers on axes, it is often better use scientific notation:", "fig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_title(\"scientific notation\")\n\nax.set_yticks([0, 50, 100, 150])\n\nfrom matplotlib import ticker\nformatter = ticker.ScalarFormatter(useMathText=True)\nformatter.set_scientific(True) \nformatter.set_powerlimits((-1,1)) \nax.yaxis.set_major_formatter(formatter) ", "Axis number and axis label spacing", "# distance between x and y axis and the numbers on the axes\nmatplotlib.rcParams['xtick.major.pad'] = 5\nmatplotlib.rcParams['ytick.major.pad'] = 5\n\nfig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_yticks([0, 50, 100, 150])\n\nax.set_title(\"label and axis spacing\")\n\n# padding between axis label and axis numbers\nax.xaxis.labelpad = 5\nax.yaxis.labelpad = 5\n\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\");\n\n# restore defaults\nmatplotlib.rcParams['xtick.major.pad'] = 3\nmatplotlib.rcParams['ytick.major.pad'] = 3", "Axis position adjustments\nUnfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using subplots_adjust:", "fig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_yticks([0, 50, 100, 150])\n\nax.set_title(\"title\")\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\n\nfig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);", "Axis grid\nWith the grid method in the axis object, we can turn on and off grid lines. 
We can also customize the appearance of the grid lines using the same keyword arguments as the plot function:", "fig, axes = plt.subplots(1, 2, figsize=(10,3))\n\n# default grid appearance\naxes[0].plot(x, x**2, x, x**3, lw=2)\naxes[0].grid(True)\n\n# custom grid appearance\naxes[1].plot(x, x**2, x, x**3, lw=2)\naxes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)", "Axis spines\nWe can also change the properties of axis spines:", "fig, ax = plt.subplots(figsize=(6,2))\n\nax.spines['bottom'].set_color('blue')\nax.spines['top'].set_color('blue')\n\nax.spines['left'].set_color('red')\nax.spines['left'].set_linewidth(2)\n\n# turn off axis spine to the right\nax.spines['right'].set_color(\"none\")\nax.yaxis.tick_left() # only ticks on the left side", "Twin axes\nSometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the twinx and twiny functions:", "fig, ax1 = plt.subplots()\n\nax1.plot(x, x**2, lw=2, color=\"blue\")\nax1.set_ylabel(r\"area $(m^2)$\", fontsize=18, color=\"blue\")\nfor label in ax1.get_yticklabels():\n label.set_color(\"blue\")\n \nax2 = ax1.twinx()\nax2.plot(x, x**3, lw=2, color=\"red\")\nax2.set_ylabel(r\"volume $(m^3)$\", fontsize=18, color=\"red\")\nfor label in ax2.get_yticklabels():\n label.set_color(\"red\")", "Axes where x and y is zero", "fig, ax = plt.subplots()\n\nax.spines['right'].set_color('none')\nax.spines['top'].set_color('none')\n\nax.xaxis.set_ticks_position('bottom')\nax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0\n\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data',0)) # set position of y spine to y=0\n\nxx = np.linspace(-0.75, 1., 100)\nax.plot(xx, xx**3);", "Other 2D plot styles\nIn addition to the regular plot method, there are a number of other functions for generating different kind of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are show below:", "n = np.array([0,1,2,3,4,5])\n\nfig, axes = plt.subplots(1, 4, figsize=(12,3))\n\naxes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))\naxes[0].set_title(\"scatter\")\n\naxes[1].step(n, n**2, lw=2)\naxes[1].set_title(\"step\")\n\naxes[2].bar(n, n**2, align=\"center\", width=0.5, alpha=0.5)\naxes[2].set_title(\"bar\")\n\naxes[3].fill_between(x, x**2, x**3, color=\"green\", alpha=0.5);\naxes[3].set_title(\"fill_between\");", "Text annotation\nAnnotating text in matplotlib figures can be done using the text function. 
It supports LaTeX formatting just like axis label texts and titles:", "fig, ax = plt.subplots()\n\nax.plot(xx, xx**2, xx, xx**3)\n\nax.text(0.15, 0.2, r\"$y=x^2$\", fontsize=20, color=\"blue\")\nax.text(0.65, 0.1, r\"$y=x^3$\", fontsize=20, color=\"green\");", "Figures with multiple subplots and insets\nAxes can be added to a matplotlib Figure canvas manually using fig.add_axes or using a sub-figure layout manager such as subplots, subplot2grid, or gridspec:\nsubplots", "fig, ax = plt.subplots(2, 3)\nfig.tight_layout()", "subplot2grid", "fig = plt.figure()\nax1 = plt.subplot2grid((3,3), (0,0), colspan=3)\nax2 = plt.subplot2grid((3,3), (1,0), colspan=2)\nax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)\nax4 = plt.subplot2grid((3,3), (2,0))\nax5 = plt.subplot2grid((3,3), (2,1))\nfig.tight_layout()", "gridspec", "import matplotlib.gridspec as gridspec\n\nfig = plt.figure()\n\ngs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])\nfor g in gs:\n ax = fig.add_subplot(g)\n \nfig.tight_layout()", "add_axes\nManually adding axes with add_axes is useful for adding insets to figures:", "fig, ax = plt.subplots()\n\nax.plot(xx, xx**2, xx, xx**3)\nfig.tight_layout()\n\n# inset\ninset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height\n\ninset_ax.plot(xx, xx**2, xx, xx**3)\ninset_ax.set_title('zoom near origin')\n\n# set axis range\ninset_ax.set_xlim(-.2, .2)\ninset_ax.set_ylim(-.005, .01)\n\n# set axis tick locations\ninset_ax.set_yticks([0, 0.005, 0.01])\ninset_ax.set_xticks([-0.1,0,.1]);", "Colormap and contour figures\nColormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps", "alpha = 0.7\nphi_ext = 2 * np.pi * 0.5\n\ndef flux_qubit_potential(phi_m, phi_p):\n return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)\n\nphi_m = np.linspace(0, 2*np.pi, 100)\nphi_p = np.linspace(0, 2*np.pi, 100)\nX,Y = np.meshgrid(phi_p, phi_m)\nZ = flux_qubit_potential(X, Y).T", "pcolor", "fig, ax = plt.subplots()\n\np = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())\ncb = fig.colorbar(p, ax=ax)", "imshow", "fig, ax = plt.subplots()\n\nim = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])\nim.set_interpolation('bilinear')\n\ncb = fig.colorbar(im, ax=ax)", "contour", "fig, ax = plt.subplots()\n\ncnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])", "3D figures\nTo use 3D graphics in matplotlib, we first need to create an instance of the Axes3D class. 
3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a projection='3d' keyword argument to the add_axes or add_subplot methods.", "from mpl_toolkits.mplot3d.axes3d import Axes3D", "Surface plots", "fig = plt.figure(figsize=(14,6))\n\n# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot\nax = fig.add_subplot(1, 2, 1, projection='3d')\n\np = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)\n\n# surface_plot with color grading and color bar\nax = fig.add_subplot(1, 2, 2, projection='3d')\np = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)\ncb = fig.colorbar(p, shrink=0.5)", "Wire-frame plot", "fig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1, 1, 1, projection='3d')\n\np = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)", "Contour plots with projections", "fig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1,1,1, projection='3d')\n\nax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)\ncset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)\n\nax.set_xlim3d(-np.pi, 2*np.pi);\nax.set_ylim3d(0, 3*np.pi);\nax.set_zlim3d(-np.pi, 2*np.pi);", "Further reading\n\nhttp://www.matplotlib.org - The project web page for matplotlib.\nhttps://github.com/matplotlib/matplotlib - The source code for matplotlib.\nhttp://matplotlib.org/gallery.html - A large gallery showcasing various types of plots matplotlib can create. Highly recommended! \nhttp://www.loria.fr/~rougier/teaching/matplotlib - A good matplotlib tutorial.\nhttp://scipy-lectures.github.io/matplotlib/matplotlib.html - Another good matplotlib reference." ]
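The colormap section of this notebook notes that it is relatively straightforward to define custom colormaps but does not show an example. The sketch below reuses the X, Y, Z arrays defined earlier and builds a simple three-color map with matplotlib.colors.LinearSegmentedColormap.from_list; the color choices are arbitrary.

```python
from matplotlib.colors import LinearSegmentedColormap

# A simple custom colormap interpolating between three named colors
my_cmap = LinearSegmentedColormap.from_list("my_cmap", ["navy", "white", "darkred"])

fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=my_cmap, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
```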
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anthonybu/api_sdk
DEMO.ipynb
mit
[ "SDK for the Brandwatch API: Demo\nIntroduction\nThe goal of this notebook is to demonstrate the capabilities of the Python Software Development Kit for Brandwatch's API. The SDK was designed to address many of the challenges involved in building complex applications which interact with RESTful API's in general and Brandwatch's API in particular:\n<ul>\n<li>The SDK's object hierarchy roughly mirrors the API's resource hierarchy, making the code intuitive for those familiar with the Brandwatch platform</li>\n<li>All required parameters are enforced, and most optional parameters are supported and documented</li>\n<li>Typical Brandwatch workflows are supported behind the scenes; for instance, one can validate, upload, and backfill a query with a single function call</li>\n<li>The SDK is designed to support simple and readable code: sensible defaults are chosen for rarely used parameters and all resource IDs are handled behind the scenes\n</ul>\n\nFrom the user's perspective, the basic structure of the SDK is as follows. One first creates an instance of the class BWProject; this class handles authentication (via a user name and password or API key) and keeps track of project-level data such as the project's ID. (Behind the scenes, the user-level operations are handled by the class BWUser from which BWProject is inherited.) One passes BWProject instance as an argument in the constructor for a series of classes which manage the various Brandwatch resources: queries, groups, tags, categories, etc. These resource classes manage all resource-level operations: for example a single BWQueries instance handles all HTTP requests associated with queries in its attached project.\nTypically, you'd import only the classes you plan on using, but for this demo all classes are listed except for superclasses which you do not use explicitly)", "from bwapi.bwproject import BWProject, BWUser\nfrom bwapi.bwresources import BWQueries, BWGroups, BWAuthorLists, BWSiteLists, BWLocationLists, BWTags, BWCategories, BWRules, BWMentions, BWSignals\nimport datetime", "The SDK uses the Python logging module to tell you what it's doing; if desired you can control what sort of output you see by uncommenting one of the lines below:", "import logging\nlogger = logging.getLogger(\"bwapi\")\n\n#(Default) All logging messages enabled\n#logger.setLevel(logging.DEBUG)\n\n#Does not report URL's of API requests, but all other messages enabled\n#logger.setLevel(logging.INFO)\n\n#Report only errors and warnings\n#logger.setLevel(logging.WARN)\n\n#Report only errors\n#logger.setLevel(logging.ERROR)\n\n#Disable logging\n#logger.setLevel(logging.CRITICAL)", "Project\nWhen you use the API for the first time you have to authenticate with Brandwatch. This will get you an access token. The access token is stored in a credentials file (tokens.txt in this example). Once you've authenticated your access token will be read from that file so you won't need to enter your password again.\nYou can authenticate from command line using the provided console script bwapi-authenticate:\n$ bwapi-authenticate\nPlease enter your Brandwatch credentials below\nUsername: example@example\nPassword:\nAuthenticating user: example@example\nWriting access token for user: example@example\nWriting access token for user: example@example\nSuccess! 
Access token: 00000000-0000-0000-0000-000000000000\nAlternatively, you can authenticate directly:", "BWUser(username=\"user@example.com\", password=\"YOUR_PASSWORD\", token_path=\"tokens.txt\")", "Now you have authenticated you can load your project:", "YOUR_ACCOUNT = your_account\nYOUR_PROJECT = your_project\n\nproject = BWProject(username=YOUR_ACCOUNT, project=YOUR_PROJECT)", "Before we really begin, please note that you can get documentation for any class or function by viewing the help documentation", "help(BWProject)", "Queries\nNow we create some objects which can manipulate queries and groups in our project:", "queries = BWQueries(project)", "Let's check what queries already exist in the account", "queries.names", "We can also upload queries directly via the API by handing the \"name\", \"searchTerms\" and \"backfillDate\" to the upload funcion. If you don't pass a backfillDate, then the query will not backfill.\nThe BWQueries class inserts default values for the \"languages\", \"type\", \"industry\", and \"samplePercent\" parameters, but we can override the defaults by including them as keyword arguments if we want. \nUpload accepts two boolean keyword arguments - \"create_only\" and \"modify_only\" (both defaulting to False) - which specifies what API verbs the function is allowed to use; for instance, if we set \"create_only\" to True then the function will post a new query if it can and otherwise it will do nothing. Note: this is true of all upload functions in this package.", "queries.upload(name = \"Brandwatch Engagement\", \n includedTerms = \"at_mentions:Brandwatch\",\n backfill_date = \"2015-09-01\")", "If you're uploading many queries at a time, you can upload in batches. This saves API calls and allows you to just pass in a list rather than iterating over the upload function.", "queries.upload_all([\n {\"name\":\"Pets\", \n \"includedTerms\":\"dogs OR cats\", \n \"backfill_date\":\"2016-01-01T05:00:00\"}, \n \n {\"name\":\"ice cream cake\", \n \"includedTerms\":\"(\\\"ice cream\\\" OR icecream) AND (cake)\"},\n \n {\"name\": \"Test1\",\n \"includedTerms\": \"akdnvaoifg;anf\"},\n \n {\"name\": \"Test2\",\n \"includedTerms\": \"anvoapihajkvn\"},\n \n {\"name\": \"Test3\",\n \"includedTerms\": \"nviuphabaveh\"},\n\n ])", "Channels will be shown as queries and can be deleted as queries, but must be uploaded differently. You must be authenticated in the app to upload channels.\nIn order to upload a channel you must pass in the name of the channel, the handle you'd like to track and the type of channel. As with keyword queries, we can upload channels individually or in batches.\nNote: Currently we can only support uploading Twitter channels through the API.", "queries.upload_channel(name = \"Brandwatch\", \n handle = \"brandwatch\", \n channel_type = \"twitter\")\n\nqueries.upload_all_channel([{\"name\": \"BWReact\",\n \"handle\": \"BW_React\",\n \"channel_type\": \"twitter\"},\n {\"name\": \"Brandwatch Careers\",\n \"handle\": \"BrandwatchJobs\",\n \"channel_type\": \"twitter\"}])", "We can delete queries one at a time, or in batches.", "queries.delete(name = \"Brandwatch Engagement\")\nqueries.delete_all([\"Pets\", \"Test3\", \"Brandwatch\", \"BWReact\", \"Brandwatch Careers\"])", "Groups\nYou'll notice that a lot of the things that were true for queries are also true for groups. 
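(A quick aside before moving on to groups: the create_only / modify_only flags on upload are described above only in prose. A hedged sketch of how create_only might be used — the same upload call as earlier, just refusing to touch an existing query:)

```python
# Post the query only if it does not already exist; otherwise do nothing
# (create_only is the keyword described above; modify_only works the other way around).
queries.upload(name = "Brandwatch Engagement",
               includedTerms = "at_mentions:Brandwatch",
               create_only = True)
```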
Many of the functions are nearly identical with any adaptations necessary handled behind the scenes for ease of use.\nAgain (as with queries), we need to create an object with which we can manipulate groups within the account", "groups = BWGroups(project)", "And can check for exisiting groups in the same way as before.", "groups.names", "Now let's check which queries are in each group in the account", "for group in groups.names:\n print(group)\n print(groups.get_group_queries(group))\n print()", "We can easily create a group with any preexisting queries.\n(Recall that upload accepts two boolean keyword arguments - \"create_only\" and \"modify_only\" (both defaulting to False) - which specifies what API verbs the function is allowed to use; for instance, if we set \"create_only\" to True then the function will post a new query if it can and otherwise it will do nothing.)", "groups.upload(name = \"group 1\", queries = [\"Test1\", \"Test2\"])", "Or upload new queries and create a group with them, all in one call", "groups.upload_queries_as_group(group_name = \"group 2\", \n query_data_list = [{\"name\": \"Test3\",\n \"includedTerms\": \"adcioahnanva\"},\n \n {\"name\": \"Test4\",\n \"includedTerms\": \"ioanvauhekanv;\"}])", "We can either delete just the group, or delete the group and the queries at the same time.", "groups.delete(\"group 1\")\nprint()\ngroups.deep_delete(\"group 2\")", "Downloading Mentions (From a Query or a Group)\nYou can download mentions from a Query or from a Group (the code does not yet support Channels)\nThere is a function get_mentions() in the classes BWQueries and in BWGroups. They are used the same way.\nBe careful with time zones, as they affect the date range and alter the results. If you're using the same date range for all your operations, I reccomend setting some variables at the start with dates and time zones. \nHere, today is set to the current day, and start is set to 30 days ago. Each number is offset by one to make it accurate.", "today = (datetime.date.today() + datetime.timedelta(days=1)).isoformat() + \"T05:00:00\"\nstart = (datetime.date.today() - datetime.timedelta(days=29)).isoformat() + \"T05:00:00\"", "To use get_mentions(), the minimum parameters needed are name (query name in this case, or group name if downloading mentions from a group), startDate, and endDate", "filtered = queries.get_mentions(name = \"ice cream cake\",\n startDate = start, \n endDate = today)", "There are over a hundred filters you can use to only download the mentions that qualify. see the full list in the file filters.py\nHere, different filters are used, which take different data types. filters.py details which data type is used with each filter. Some filters, like sentiment and xprofession below, have a limited number of settings to choose from.\nYou can filter many things by inclusion or exclusion. The x in xprofession stands for exclusion, for example.", "filtered = queries.get_mentions(name = \"ice cream cake\", \n startDate = start, \n endDate = today, \n sentiment = \"positive\", \n twitterVerified = False, \n impactMin = 50, \n xprofession = [\"Politician\", \"Legal\"])", "To filter by tags, pass in a list of strings where each string is a tag name.\nYou can filter by categories in two differnt ways: on a subcategory level or a parent category level. To filter on a subcategory level, use the category keyword and pass in a dictionary, where each the keys are the parent categories and the values are lists of the subcategories. 
To filter on a parent category level, use the parentCategory keyword and pass in a list of parent category names.\nNote: In the following call the parentCategory filter is redundant, but executed for illustrative purposes.", "filtered = queries.get_mentions(name = \"ice cream cake\", \n startDate = start, \n endDate = today,\n parentCategory = [\"Colors\", \"Days\"],\n category = {\"Colors\": [\"Blue\", \"Yellow\"], \n \"Days\": [\"Monday\"]}, \n tag = [\"Tastes Good\"])\n\nfiltered[0]", "Categories\nInstantiate a BWCategories object by passing in your project as a parameter, which loads all of the categories in your project.\nPrint out ids to see which categories are currently in your project.", "categories = BWCategories(project)\n\ncategories.ids", "Upload categories individually with upload(), or in bulk with upload_all(). If you are uploading many categories, it is more efficient to use upload_all().\nFor upload(), pass in name and children. name is the string which represents the parent category, and children is a list of dictionaries where each dictionary is a child category- its key is \"name\" and its value is the name of the child category.\nBy default, a category will allow multiple subcategories to be applies, so the keyword argument \"multiple\" is set to True. You can manually set it to False by passing in multipe=False as another parameter when uploading a category.\nFor upload_all(), pass in a list of dictionaries, where each dictionary corrosponds to one category, and contains the parameters described above.\nLet's upload a category and then check what's in the category.", "categories.upload(name = \"Droids\", \n children = [\"r2d2\", \"c3po\"])", "Now let's upload a few categories and then check what parent categories are in the system", "categories.upload_all([{\"name\":\"month\", \n \"children\":[\"January\",\"February\"]}, \n {\"name\":\"Time of Day\", \n \"children\":[\"morning\", \"evening\"]}])", "To add children/subcategories, call upload() and pass in the parent category name and a list of the new subcategories to add. \nIf you'd like to instead overwrite the existing subcategories with new subcategories, call upload() and pass in the parameter overwrite_children = True.", "categories.upload(name = \"Droids\", children = [\"bb8\"])", "To rename a category, call rename(), with parameters name and new_name.", "categories.rename(name = \"month\", new_name = \"Months\")\ncategories.ids[\"Months\"]", "You can delete categories either individually with delete(), or in bulk with delete_all(). \nYou also have the option to delete the entire parent category or just some of the subcategories. \nTo delete ALL CATEGORIES in a project, call clear_all_in_project with no parameters. Be careful with this one, and do not use unless you want to delete all categories in the current project.\nFirst let's delete just some subcategories.", "categories.delete({\"name\": \"Months\", \"children\":[\"February\"]})\ncategories.delete_all([{\"name\": \"Droids\", \"children\": [\"bb8\", \"c3po\"]}])\n\ncategories.delete(\"Droids\")\ncategories.delete_all([\"Months\", \"Time of Day\"])\n\ncategories.ids", "Tags\nInstantiate a BWTags object by passing in your project as a parameter, which loads all of the tags in your project.\nPrint out ids to see which tags are currently in your project.", "tags = BWTags(project)\n\ntags.names", "There are two ways to upload tags: individually and in bulk. When uploading many tags, it is more efficient to use upload_all. 
\nIn upload, pass in the name of the tag.\nIn upload_all, pass in a list of dictionaries, where each dictionary contains \"name\" as the key and the tag name as the its value", "tags.upload(name = \"yellow\")\ntags.upload_all([{\"name\":\"green\"}, \n {\"name\":\"blue\"}, \n {\"name\":\"purple\"}])\n\ntags.names", "To change the name of a tag, but mantain its id, upload it with keyword arguments name and new_name.", "tags.upload(name = \"yellow\", new_name = \"yellow-orange blend\")\n\ntags.names", "As with categories, there are three ways of deleting tags. \nDelete one tag by calling delete and passing in a string, the name of the tag to delete\nDelete multiple tags by calling delete_all and passing in a list of strings, where each string is a name of a tag to delete\nTo delete ALL TAGS in a project, call clear_all_in_project with no parameters. Be careful with this one, and do not use unless you want to delete all tags in the current project", "tags.delete(\"purple\")\ntags.delete_all([\"blue\", \"green\", \"yellow-orange blend\"])\n\ntags.names", "Brandwatch Lists\nNote: to avoid ambiguity between the python data type \"list\" and a Brandwatch author list, site list, or location list, the latter is referred to in this demo as a \"Brandwatch List.\"\nBWAuthorLists, BWSiteLists, BWLocationLists work almost identically.\nFirst, instantiate your the object which contains the Brandwatch Lists in your project, with your project as a the parameter. This will load the data from your project so you can see what's there, upload more Brandwatch Lists, edit existing Brandwatch Lists, and delete Brandwatch Lists from your project\nPrinting out ids will show you the Brandwatch Lists (by name and ID) that are currently in your project.", "authorlists = BWAuthorLists(project)\nauthorlists.names", "To upload a Brandwatch List, pass in a name as a string and the contents of your Brandwatch List as a list of strings. The keyword \"authors\" is used for BWAuthorLists, shown below. The keyword \"domains\"is used for BWSiteLists. The keyword \"locations\" is used for BWLocationLists.\nTo see the contents of a Brandwatch List, call get_list with the name as the parameter\nUploading is done with either a POST call, for new Brandwatch Lists, or a PUT call, for existing Brandwatch Lists, where the ID of the Brandwatch Lists is mantained, so if you upload and then upload a list with the same name and different contents, the first upload will create a new Brandwatch List, and the second upload will modify the existing list and keep its ID. Similarly, you can change the name of an existing Brandwatch List by passing in both \"name\" and \"new_name\"", "authorlists.upload(name = \"Writers\", \n authors = [\"Edward Albee\", \"Tenessee Williams\", \"Anna Deavere Smith\"])\n\nauthorlists.get(\"Writers\")[\"authors\"]\n\nauthorlists.upload(name = \"Writers\", \n new_name = \"Playwrights\", \n authors = [\"Edward Albee\", \"Tenessee Williams\", \"Anna Deavere Smith\", \"Susan Glaspell\"])\n\nauthorlists.get(\"Playwrights\")[\"authors\"]", "To add items to a Brandwatch List without reentering all of the existing items, call add_items", "authorlists.add_items(name = \"Playwrights\", \n items = [\"Eugene O'Neill\"])\n\nauthorlists.get(\"Playwrights\")[\"authors\"]", "To delete a Brandwatch List, pass in its name. Note the ids before the Brandwatch List is deleted, compared to after it is deleted. 
The BWLists object is updated to reflect the Brandwatch Lists in the project after each upload and each delete", "authorlists.names\n\nauthorlists.delete(\"Playwrights\")\n\nauthorlists.names", "The only difference between how you use BWAuthorlists compared to how you use BWSiteLists and BWLocationLists is the parameter which is passed in. \nBWAuthorlists:\nauthors = [\"edward albee\", \"tenessee williams\", \"Anna Deavere Smith\"]\nBWSiteLists:\ndomains = [\"github.com\", \"stackoverflow.com\", \"docs.python.org\"]\n*BWLocationLists:\nlocations = [{\"id\": \"mai4\", \"name\": \"Maine\", \"type\": \"state\", \"fullName\": \"Maine, United States, North America\"}, \n{\"id\": \"verf\", \"name\": \"Vermont\", \"type\": \"state\", \"fullName\": \"Vermont, United States, North America\"}, \n{\"id\": \"rho4\", \"name\": \"Rhode Island\", \"type\": \"state\", \"fullName\": \"Rhode Island, United States, North America\"} ]\n*Requires dictionary of location data instead of a string\nRules\nInstantiate a BWRules object by passing in your project as a parameter, which loads all of the rules in your project.\nPrint out names and IDs to see which rules are currently in your project.", "rules = BWRules(project)\nrules.names", "Every rule must have a name, an action, and filters.\nThe first step to creating a rule through the API is to prepare filters by calling filters(). \nIf your desired rules applies to a query (or queries), include queryName as a filter and pass in a list of the queries you want to apply it to.\nThere are over a hundred filters you can use to only download the mentions that qualify. See the full list in the file filters.py. Here, different filters are used, which take different data types. filters.py details which data type is used with each filter. Some filters, like sentiment and xprofession below, have a limited number of settings to choose from. You can filter many things by inclusion or exclusion. The x in xprofession stands for exclusion, for example.\nIf you include search terms, be sure to use nested quotes - passing in \"cat food\" will result in a search that says cat food (i.e. cat AND food)", "filters = rules.filters(queryName = \"ice cream cake\", \n sentiment = \"positive\", \n twitterVerified = False, \n impactMin = 50, \n xprofession = [\"Politician\", \"Legal\"])\n\nfilters = rules.filters(queryName = [\"Australian Animals\", \"ice cream cake\"], \n search = '\"cat food\" OR \"dog food\"')", "The second step is to prepare the rule action by calling rule_action().\nFor this function, you must pass in the action and setting. Below I've used examples of adding categories and tags, but you can also set sentiment or workflow (as in the front end).\nIf you pass in a category or tag that does not yet exist, it will be automatically uploaded for you.", "action = rules.rule_action(action = \"addTag\", \n setting = [\"animal food\"])", "The last step is to upload!\nPass in the name, filters, and action. Scope is optional - it will default to query if queryName is in the filters and otherwise be set to project. Backfill is also optional - it will default to False.\nThe upload() function will automatically check the validity of your search string and give a helpful error message if errors are found.", "rules.upload(name = \"rule\", \n scope = \"query\", \n filter = filters, \n ruleAction = action,\n backfill = True)", "You can also upload rules in bulk. 
Below we prepare a bunch of filters and actions at once.", "filters1 = rules.filters(search = \"caknvfoga;vnaei\")\nfilters2 = rules.filters(queryName = [\"Australian Animals\"], search = \"(bloop NEAR/10 blorp)\")\nfilters3 = rules.filters(queryName = [\"Australian Animals\", \"ice cream cake\"], search = '\"hello world\"')\n\naction1 = rules.rule_action(action = \"addCategories\", setting = {\"Example\": [\"One\"]})\naction2 = rules.rule_action(action = \"addTag\", setting = [\"My Example\"])", "When uploading in bulk, it is helpful (but not necessary) to use the rules() function before uploading in order to keep the dictionaries organized.", "rule1 = rules.rule(name = \"rule1\", \n filter = filters1, \n action = action1, \n scope = \"project\")\n\nrule2 = rules.rule(name = \"rule2\", \n filter = filters2, \n action = action2)\n\nrule3 = rules.rule(name = \"rule3\", \n filter = filters3, \n action = action1,\n backfill = True)\n\nrules.upload_all([rule1, rule2, rule3])", "As with other resources, we can delete, delete_all or clear_all_in_project", "rules.delete(name = \"rule\")\nrules.delete_all(names = [\"rule1\", \"rule2\", \"rule3\"])\n\nrules.names", "Signals\nInstantiate a BWSignals object by passing in your project as a parameter, which loads all of the signals in your project.\nPrint out ids to see which signals are currently in your project.", "signals = BWSignals(project)\n\nsignals.names", "Again, we can upload signals individually or in batch.\nYou must pass at least a name, queries (list of queries you'd like the signal to apply to) and subscribers. For each subscriber, you have to pass both an emailAddress and notificationThreshold. The notificationThreshold will be a number 1, 2 or 3 - where 1 means send all notifications and 3 means send only high priority signals.\nOptionally, you can also pass in categories or tags to filter by. As before, you can filter by an entire category with the keyword parentCategory or just a subcategory (or list of subcategories) with the keyword category. An example of how to pass in each filter is shown below.", "signals.upload(name= \"New Test\",\n queries= [\"ice cream cake\"],\n parentCategory = [\"Colors\"],\n subscribers= [{\"emailAddress\": \"test12345@brandwatch.com\", \"notificationThreshold\": 1}])\n\nsignals.upload_all([{\"name\": \"Signal Me\",\n \"queries\": [\"ice cream cake\"],\n \"category\": {\"Colors\": [\"Blue\", \"Yellow\"]},\n \"subscribers\": [{\"emailAddress\": \"testaddress123@brandwatch.com\", \"notificationThreshold\": 3}]},\n {\"name\": \"Signal Test\",\n \"queries\": [\"ice cream cake\"],\n \"tag\": [\"Tastes Good\"],\n \"subscribers\": [{\"emailAddress\": \"exampleemail@brandwatch.com\", \"notificationThreshold\": 2}]}])\n\nsignals.names", "Signals can be deleted individually or in bulk.", "signals.delete(\"New Test\")\nsignals.delete_all([\"Signal Me\", \"Signal Test\"])\n\nsignals.names", "Patching Mentions\nTo patch the metadata on mentions, whether those mentions come from queries or from groups, you must first instantiate a BWMentions object and pass in your project as a parameter.", "mentions = BWMentions(project)\n\nfiltered = queries.get_mentions(name = \"ice cream cake\", \n startDate = start, \n endDate = today,\n parentCategory = [\"Colors\", \"Days\"],\n category = {\"Colors\": [\"Blue\", \"Yellow\"], \n \"Days\": [\"Monday\"]}, \n tag = [\"Tastes Good\"])", "if you don't want to upload your tags and categories ahead of time, you don't have to! 
BWMentions will do that for you, but if there are a lot of different tags/categories, it's definitely more efficient to upload them in bulk ahead of time.\nFor this example, I'm arbitrarily patching a few of the mentions, rather than all of them", "mentions.patch_mentions(filtered[0:10], action = \"addTag\", setting = [\"cold\"])\n\nmentions.patch_mentions(filtered[5:12], action = \"starred\", setting = True)\n\nmentions.patch_mentions(filtered[6:8], action = \"addCategories\", setting = {\"color\":[\"green\", \"blue\"]})" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dsacademybr/PythonFundamentos
Cap05/Notebooks/DSA-Python-Cap05-04-Heranca.ipynb
gpl-3.0
[ "<font color='blue'>Data Science Academy - Python Fundamentos - Capรญtulo 5</font>\nDownload: http://github.com/dsacademybr", "# Versรฃo da Linguagem Python\nfrom platform import python_version\nprint('Versรฃo da Linguagem Python Usada Neste Jupyter Notebook:', python_version())", "Heranรงa", "# Criando a classe Animal - Super-classe\nclass Animal():\n \n def __init__(self):\n print(\"Animal criado\")\n\n def Identif(self):\n print(\"Animal\")\n\n def comer(self):\n print(\"Comendo\")\n\n# Criando a classe Cachorro - Sub-classe\nclass Cachorro(Animal):\n \n def __init__(self):\n Animal.__init__(self)\n print(\"Objeto Cachorro criado\")\n\n def Identif(self):\n print(\"Cachorro\")\n\n def latir(self):\n print(\"Au Au!\")\n\n# Criando um objeto (Instanciando a classe)\nrex = Cachorro()\n\n# Executando o mรฉtodo da classe Cachorro (sub-classe)\nrex.Identif()\n\n# Executando o mรฉtodo da classe Animal (super-classe)\nrex.comer()\n\n# Executando o mรฉtodo da classe Cachorro (sub-classe)\nrex.latir()", "Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
mit
[ "Lab Session: Clustering algorithms for Image Segmentation\nAuthor: Jesรบs Cid Sueiro\nJan. 2017", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread", "1. Introduction\nIn this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract he binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.\n1.1. Load Image\nSeveral images are provided with this notebook:\n\nBinarySeeds.png\nbirds.jpg\nblood_frog_1.jpg\ncKyDP.jpg\nMatricula.jpg\nMatricula2.jpg\nSeeds.png\n\nSelect image birds.jpg from file and plot it in grayscale", "name = \"birds.jpg\"\nname = \"Seeds.jpg\"\n\nbirds = imread(\"Images/\" + name)\nbirdsG = np.sum(birds, axis=2)\n\nplt.imshow(birdsG, cmap=plt.get_cmap('gray'))\nplt.grid(False)\nplt.axis('off')\nplt.show()", "2. Thresholding\nSelect an intensity threshold by manual inspection of the image histogram", "plt.hist(birdsG.ravel(), bins=256) \nplt.show()", "Plot the binary image after thresholding.", "if name == \"birds.jpg\":\n th = 256\nelif name == \"Seeds.jpg\":\n th = 650\n\nbirdsBN = birdsG > th\n\n# If there are more white than black pixels, reverse the image\nif np.sum(birdsBN) > float(np.prod(birdsBN.shape)/2):\n birdsBN = 1-birdsBN\nplt.imshow(birdsBN, cmap=plt.get_cmap('gray'))\nplt.grid(False)\nplt.axis('off')\nplt.show()", "3. Dataset generation\nExtract pixel coordinates dataset from image", "(h, w) = birdsBN.shape\nbW = birdsBN * range(w)\nbH = birdsBN * np.array(range(h))[:,np.newaxis]\npSet = [t for t in zip(bW.ravel(), bH.ravel()) if t!=(0,0)]\nX = np.array(pSet)\n\n\nprint X\n\nplt.scatter(X[:, 0], X[:, 1], s=5);\nplt.axis('equal')\nplt.show()", "4. k-means clustering algorithm", "from sklearn.cluster import KMeans\n\nest = KMeans(50) # 4 clusters\nest.fit(X)\ny_kmeans = est.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=5, cmap='rainbow',\n linewidth=0.0)\nplt.axis('equal')\nplt.show()", "5. Spectral clustering algorithm\n5.1. Affinity matrix\nCompute and visualize the affinity matrix", "from sklearn.metrics.pairwise import rbf_kernel\n\ngamma = 5\nsf = 4\nXsub = X[0::sf]\nprint Xsub.shape\ngamma = 0.001\nK = rbf_kernel(Xsub, Xsub, gamma=gamma)\n\nplt.imshow(K, cmap='hot')\nplt.colorbar()\nplt.title('RBF Affinity Matrix for gamma = ' + str(gamma))\nplt.grid('off')\nplt.show()\n\nfrom sklearn.cluster import SpectralClustering\n\nspc = SpectralClustering(n_clusters=50, gamma=gamma, affinity='rbf')\ny_kmeans = spc.fit_predict(Xsub)\n\nplt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0)\nplt.axis('equal')\nplt.show()\n\nprint X[:,1]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ogaway/Econometrics
Dummy.ipynb
gpl-3.0
[ "ๆง‹้€ ๅค‰ๅŒ–ใ€็†่ซ–ใฎๅฆฅๅฝ“ๆ€งใฎใƒ†ใ‚นใƒˆ\nใ€ŽRใซใ‚ˆใ‚‹่จˆ้‡็ตŒๆธˆๅญฆใ€็ฌฌ9็ซ ใ€Œๆง‹้€ ๅค‰ๅŒ–ใ€็†่ซ–ใฎๅฆฅๅฝ“ๆ€งใฎใƒ†ใ‚นใƒˆใ€ใ‚’PythonใงๅฎŸ่กŒใ™ใ‚‹ใ€‚\nใƒ†ใ‚ญใ‚นใƒˆไป˜ๅฑžใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ(ใ€Œk0901.csvใ€็ญ‰)ใซใคใ„ใฆใฏๅ‡บ็‰ˆ็คพใ‚ตใ‚คใƒˆใ‚ˆใ‚Šใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆใใ ใ•ใ„ใ€‚\nใพใŸใ€ไปฅไธ‹ใฎ่ชฌๆ˜Žใฏๆœฌๆ›ธใฎไธ€้ƒจใ‚’่ฆ็ด„ใ—ใŸใ‚‚ใฎใงใ™ใฎใงใ€ใ‚ˆใ‚Š่ฉณใ—ใ„่ชฌๆ˜Žใฏๆœฌๆ›ธใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ \nใƒ€ใƒŸใƒผๅค‰ๆ•ฐ(Dummy Variable)\nไพ‹้กŒ9.1ใ€Œๅฎšๆ•ฐ้ …ใƒ€ใƒŸใƒผใ€\nไปฅไธ‹ใฎใ‚ˆใ†ใซใƒขใƒ‡ใƒซใ‚’่จญๅฎšใ—ใฆๅ›žๅธฐๅˆ†ๆžใ‚’่กŒใ†ใ€‚\n$Y_{i} = \\alpha + \\beta X_{i} + \\gamma D_{i} + u_{i}$", "%matplotlib inline\n\n# -*- coding:utf-8 -*-\nfrom __future__ import print_function\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# ใƒ‡ใƒผใ‚ฟ่ชญใฟ่พผใฟ\ndata = pd.read_csv('example/k0901.csv')\ndata\n\n# ่ชฌๆ˜Žๅค‰ๆ•ฐ่จญๅฎš\nX = data[['X', 'D']]\nX = sm.add_constant(X)\nX\n\n# ่ขซ่ชฌๆ˜Žๅค‰ๆ•ฐ่จญๅฎš\nY = data['Y']\nY\n\n# OLSใฎๅฎŸ่กŒ(Ordinary Least Squares: ๆœ€ๅฐไบŒไน—ๆณ•)\nmodel = sm.OLS(Y,X)\nresults = model.fit()\nprint(results.summary())\n\n# ใƒ€ใƒŸใƒผๅˆฅใƒ‡ใƒผใ‚ฟ\ndata_d0 = data[data[\"D\"] == 0]\ndata_d1 = data[data[\"D\"] == 1]\n\n# ใ‚ฐใƒฉใƒ•็”Ÿๆˆ\nplt.plot(data[\"X\"], data[\"Y\"], 'o', label=\"data\")\nplt.plot(data_d0.X, results.fittedvalues[data_d0.index], label=\"D=0\")\nplt.plot(data_d1.X, results.fittedvalues[data_d1.index], label=\"D=1\")\nplt.xlim(min(data[\"X\"])-1, max(data[\"X\"])+1)\nplt.ylim(min(data[\"Y\"])-1, max(data[\"Y\"])+1)\nplt.title('9-1: Dummy Variable')\nplt.legend(loc=2)\nplt.show()", "ไพ‹้กŒ9-2 ใ€Œไฟ‚ๆ•ฐใƒ€ใƒŸใƒผใ€\nไปฅไธ‹ใฎใ‚ˆใ†ใซใƒขใƒ‡ใƒซใ‚’่จญๅฎšใ—ใฆๅ›žๅธฐๅˆ†ๆžใ‚’่กŒใ†ใ€‚\n$Y_{i} = \\alpha + \\beta X_{i} + \\gamma D_{i} + \\delta D_{i} X_{i} + u_{i}$", "# ใƒ‡ใƒผใ‚ฟ่ชญใฟ่พผใฟ\ndata = pd.read_csv('example/k0902.csv')\ndata\n\n# ่ชฌๆ˜Žๅค‰ๆ•ฐ่จญๅฎš\nX = data[['X', 'D', 'DX']]\nX = sm.add_constant(X)\nX\n\n# ่ขซ่ชฌๆ˜Žๅค‰ๆ•ฐ่จญๅฎš\nY = data['Y']\nY\n\n# OLSใฎๅฎŸ่กŒ(Ordinary Least Squares: ๆœ€ๅฐไบŒไน—ๆณ•)\nmodel = sm.OLS(Y,X)\nresults = model.fit()\nprint(results.summary())\n\n# ใƒ€ใƒŸใƒผๅˆฅใƒ‡ใƒผใ‚ฟ\ndata_d0 = data[data[\"D\"] == 0]\ndata_d1 = data[data[\"D\"] == 1]\n\n# ใ‚ฐใƒฉใƒ•็”Ÿๆˆ\nplt.plot(data[\"X\"], data[\"Y\"], 'o', label=\"data\")\nplt.plot(data_d0.X, results.fittedvalues[data_d0.index], label=\"D=0\")\nplt.plot(data_d1.X, results.fittedvalues[data_d1.index], label=\"D=1\")\nplt.xlim(min(data[\"X\"])-1, max(data[\"X\"])+1)\nplt.ylim(min(data[\"Y\"])-1, max(data[\"Y\"])+1)\nplt.title('9-2: Dummy Variable')\nplt.legend(loc=2)\nplt.show()", "ไพ‹้กŒ9-3 ใ€Œtๆคœๅฎšใซใ‚ˆใ‚‹ๆง‹้€ ๅค‰ๅŒ–ใฎใƒ†ใ‚นใƒˆใ€\nไพ‹้กŒ9-2ใซใŠใ„ใฆ$\\gamma = 0$ใซ้–ขใ™ใ‚‹Pๅ€คใฏ0.017ใงใ‚ใ‚Šใ€$\\delta = 0$ใซ้–ขใ™ใ‚‹Pๅ€คใฏ0.003ใงใ‚ใ‚‹ใ“ใจใ‹ใ‚‰ใ€ๆจ™ๆบ–็š„ใชๆœ‰ๆ„ๆฐดๆบ–ใ‚’่จญๅฎšใ™ใ‚Œใฐใ€ใ„ใšใ‚Œใฎใƒ€ใƒŸใƒผๅค‰ๆ•ฐใ‚‚ๆœ‰ๆ„ใงใ‚ใ‚‹ใจใ„ใˆใ‚‹ใ€‚\nไพ‹้กŒ9-4 ใ€ŒFๆคœๅฎšใซใ‚ˆใ‚‹ๆง‹้€ ๅค‰ๅŒ–ใฎใƒ†ใ‚นใƒˆใ€", "# ใƒ€ใƒŸใƒผๅค‰ๆ•ฐใ‚’ๅŠ ใˆใชใ„ๆ™‚ใฎOLSใƒขใƒ‡ใƒซไฝœๆˆ\nX = data[['X']]\nX = sm.add_constant(X)\nmodel2 = sm.OLS(Y,X)\nresults2 = model2.fit()\n\n# anova(Analysis of Variance)\nprint(sm.stats.anova_lm(results2, results))", "Fๅ€คใฏ14.486ใ€ใใ‚Œใซๅฏพๅฟœใ™ใ‚‹Pๅ€คใฏ0.005ใ‚ˆใ‚Š $\\gamma$ , $\\delta$ ใฎใ†ใกๅฐ‘ใชใใจใ‚‚1ใคใฏ0ใงใฏใชใ„ใจๅˆ†ใ‹ใ‚‹ใ€‚" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
mit
[ "Team: Satoshi Nakamoto <br>\nNames: Alex Levering & Hรจctor Muro <br>\nLesson 10 Exercise solution\nImport standard libraries", "from numpy import mean\nimport os\nfrom os import makedirs,chdir\nfrom os.path import exists", "Import non-standard libraries (install as needed)", "from osgeo import ogr,osr\nimport folium\nimport simplekml", "Optional directory creation", "if not exists('./data'):\n makedirs('./data')\n\nchdir(\"./data\")", "Is the ESRI Shapefile driver available?", "driverName = \"ESRI Shapefile\"\ndrv = ogr.GetDriverByName( driverName )\nif drv is None:\n print \"%s driver not available.\\n\" % driverName\nelse:\n print \"%s driver IS available.\\n\" % driverName", "Define a function which will create a shapefile from the points input and export it as kml if the option is set to True.", "def shpFromPoints(filename, layername, points, save_kml = True):\n spatialReference = osr.SpatialReference()\n spatialReference.ImportFromProj4('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs')\n ds = drv.CreateDataSource(filename)\n layer=ds.CreateLayer(layername, spatialReference, ogr.wkbPoint)\n layerDefinition = layer.GetLayerDefn()\n \n point = ogr.Geometry(ogr.wkbPoint)\n feature = ogr.Feature(layerDefinition)\n \n kml = simplekml.Kml()\n for i, value in enumerate(points):\n point.SetPoint(0,value[0], value[1])\n feature.SetGeometry(point)\n layer.CreateFeature(feature)\n kml.newpoint(name=str(i), coords = [(value[0],value[1])])\n ds.Destroy() \n if save_kml == True:\n kml.save(\"my_points2.kml\")", "Define the file and layer name as well as the points to be mapped.", "filename = \"wageningenpoints.shp\"\nlayername = \"wagpoints\"\npts = [(5.665777,51.987398),\n (5.663133,51.978434)]\nshpFromPoints(filename, layername, pts)", "Define a function to create a nice map with the points using folium library.", "def mapFromPoints(pts, outname, zoom_level, save = True):\n mean_long = mean([pt[1] for pt in pts])\n mean_lat = mean([pt[0] for pt in pts])\n point_map = folium.Map(location=[mean_long, mean_lat], zoom_start = zoom_level)\n for pt in pts:\n folium.Marker([pt[1], pt[0]],\\\n popup = folium.Popup(folium.element.IFrame(\n html='''\n <b>Latitude:</b> {lat}<br>\n <b>Longitude:</b> {lon}<br>\n '''.format(lat = pt[1], lon = pt[0]),\\\n width=150, height=100),\\\n max_width=150)).add_to(point_map)\n if save == True:\n point_map.save(\"{}.html\".format(outname))\n return point_map", "Call the function specifying the list of points, the output map name and its zoom level. If not False, the map is saved as an html", "mapFromPoints(pts, \"SatoshiNakamotoMap\", zoom_level = 15)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wilomaku/IA369Z
dev/Autoencoderxclass.ipynb
gpl-3.0
[ "Corpus callosum's shape signature for segmentation error detection in large datasets\nAbstract\nCorpus Callosum (CC) is a subcortical, white matter structure with great importance in clinical and research studies because its shape and volume are correlated with subject's characteristics and neurodegenerative diseases. CC segmentation is a important step for any medical, clinical or research posterior study. Currently, magnetic resonance imaging (MRI) is the main tool for evaluating brain because it offers the better soft tissue contrast. Particullary, segmentation in MRI difussion modality has great importante given information associated to brain microstruture and fiber composition.\nIn this work a method for detection of erroneous segmentations in large datasets is proposed based-on shape signature. Shape signature is obtained from segmentation, calculating curvature along contour using a spline formulation. A mean correct signature is used as reference for compare new segmentations through root mean square error. This method was applied to 152 subject dataset for three different segmentation methods in diffusion: Watershed, ROQS and pixel-based presenting high accuracy in error detection. This method do not require per-segmentation reference and it can be applied to any MRI modality and other image aplications.", "## Functions\n\nimport sys,os\nimport copy\npath = os.path.abspath('../dev/')\nif path not in sys.path:\n sys.path.append(path)\n\nimport bib_mri as FW\nimport numpy as np\nimport scipy as scipy\nimport scipy.misc as misc \nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom numpy import genfromtxt\nimport platform\nimport torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\n%matplotlib inline\n\ndef sign_extract(seg, resols): #Function for shape signature extraction\n splines = FW.get_spline(seg,smoothness)\n\n sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector\n for resol in resols:\n sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))\n \n return sign_vect\n\ndef sign_fit(sig_ref, sig_fit): #Function for signature fitting\n dif_curv = []\n for shift in range(points):\n dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))\n return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))\n\nprint \"Python version: \", platform.python_version()\nprint \"Numpy version: \", np.version.version\nprint \"Scipy version: \", scipy.__version__\nprint \"Matplotlib version: \", mpl.__version__", "Introduction\nThe Corpus Callosum (CC) is the largest white matter structure in the central nervous system that connects both brain hemispheres and allows the communication between them. The CC has great importance in research studies due to the correlation between shape and volume with some subject's characteristics, such as: gender, age, numeric and mathematical skills and handedness. In addition, some neurodegenerative diseases like Alzheimer, autism, schizophrenia and dyslexia could cause CC shape deformation.\nCC segmentation is a necessary step for morphological and physiological features extraction in order to analyze the structure in image-based clinical and research applications. 
Magnetic Resonance Imaging (MRI) is the most suitable imaging technique for CC segmentation due to its ability to provide contrast between brain tissues; however, CC segmentation is challenging because of the shape and intensity variability between subjects, the partial volume effect in diffusion MRI, fornix proximity and narrow areas in the CC. Among the known MRI modalities, Diffusion-MRI arouses special interest for studying the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself.\nSome CC segmentation approaches using Diffusion-MRI were found in the literature. Niogi et al. proposed a method based on thresholding, Freitas et al. and Rittner et al. proposed region methods based on the Watershed transform, Nazem-Zadeh et al. implemented a method based on level surfaces, Kong et al. presented a clustering algorithm for segmentation, Herrera et al. segmented the CC directly in diffusion weighted imaging (DWI) using a model based on pixel classification, and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces.\nWith the growth of data and the proliferation of automatic algorithms, segmentation over large databases is affordable. Therefore, automatic error detection is important in order to facilitate and speed up the filtering of CC segmentation databases. Previous works presented proposals for content-based image retrieval (CBIR) using the shape signature of a planar object representation.\nIn this work, a method for automatic detection of segmentation errors in large datasets is proposed based on the CC shape signature. The signature offers a shape characterization of the CC and therefore it is expected that a \"typical correct signature\" represents well any correct segmentation. The signature is extracted by measuring curvature along the segmentation contour. The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first stage takes 20 correct segmentations and generates one correct signature of reference (typical correct signature), per resolution, using mean values at each point. The second stage takes 10 correct segmentations and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on the mean correct signature, that allow the detection of erroneous segmentations. The third stage labels a new segmentation as correct or erroneous by comparing it with the mean signature using the optimal resolution and threshold.\n<img src=\"../figures/workflow.png\">\nThe comparison between signatures is done using the root mean square error (RMSE). The true label for each segmentation was assigned visually. A correct segmentation corresponds to a segmentation with at least 50% of agreement with the structure. 
It is expected that RMSE for correct segmentations is lower than RMSE associated to erroneous segmentation when compared with a typical correct segmentation.", "#Loading labeled segmentations\nseg_label = genfromtxt('../../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')\n\nlist_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations\nlist_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels\nind_ex_err = list_masks[np.where(list_labels)[0]]\nind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]\nprint \"Mask List\", list_masks\nprint \"Label List\", list_labels\nprint \"Correct List\", ind_ex_cor\nprint \"Erroneous List\", ind_ex_err\n\nmask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_cor[10]))\nmask_error = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_err[10]))\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_correct,'gray',interpolation='none')\nplt.title(\"Correct segmentation example\")\nplt.show()\n\nplt.figure()\nplt.axis('off')\nplt.imshow(mask_error,'gray',interpolation='none')\nplt.title(\"Erroneous segmentation example\")\nplt.show()", "Shape signature for comparison\nSignature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in figure, the curvature $k$ in the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the next equation. This curvature depict the angle between the segments $\\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments are located to a distance $ls>0$, starting in a pivot point and finishing in anterior and posterior points, respectively.\nThe signature is obtained calculating the curvature along all segmentation contour.\n\\begin{equation} \\label{eq:per1}\nk(x_p,y_p) = \\arctan\\left(\\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\\right)-\\arctan\\left(\\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\\right)\n\\end{equation}\n<img src=\"../figures/curvature.png\">\nSignature construction is performed from segmentation contour of the CC. From contour, spline is obtained. Spline purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of\nthe curvature using its parametric representation. The signature is obtained measuring curvature along spline. $ls$ is the parametric distance between pivot point and both posterior and anterior points and it determines signature resolution. By simplicity, $ls$ is measured in percentage of reconstructed spline points.\nIn order to achieve quantitative comparison between two signatures root mean square error (RMSE) is introduced. RMSE measures distance, point to point, between signatures $a$ and $b$ along all points $p$ of signatures.\n\\begin{equation} \\label{eq:per4}\nRMSE = \\sqrt{\\frac{1}{P}\\sum_{p=1}^{P}(k_{ap}-k_{bp})^2}\n\\end{equation}\nFrequently, signatures of different segmentations are not fitted along the 'x' axis because of the initial point on the spline calculation starts in different relative positions. This makes impossible to compare directly two signatures and therefore, a prior fitting process must be accomplished. The fitting process is done shifting one of the signature while the other is kept fixed. For each shift, RMSE between the two signatures is measured. The point giving the minor error is the fitting point. Fitting was done at resolution $ls = 0.35$. 
This resolution represents the CC's shape globally and eases the fitting.\nAfter fitting, the RMSE between signatures can be measured in order to achieve the final quantitative comparison.\nSignature for segmentation error detection\nFor segmentation error detection, a typical correct signature is obtained by calculating the mean over a group of signatures from correct segmentations. Because this signature can be used at any resolution, $ls$ must be chosen to achieve segmentation error detection. The optimal resolution must be able to return the greatest RMSE difference between correct and erroneous segmentations when compared with a typical correct signature.\nAt the optimal resolution, a threshold must be chosen to separate erroneous and correct segmentations. This threshold stays between the RMSE associated to correct ($RMSE_C$) and erroneous ($RMSE_E$) signatures and it is given by the next equation, where N (in percentage) represents proximity to the correct or erroneous RMSE. If the RMSE is calculated over a group of signatures, the mean value is applied.\n\\begin{equation} \\label{eq:eq3}\nth = N*(\\overline{RMSE_E}-\\overline{RMSE_C})+\\overline{RMSE_C}\n\\end{equation}\nExperiments and results\nIn this work, comparison of signatures through RMSE is used for segmentation error detection in large datasets. For this, a mean correct signature is calculated based on 20 correct segmentation signatures. This mean correct signature represents a typical correct segmentation. For a new segmentation, the signature is extracted and compared with the mean signature.\nFor the experiments, DWI from 152 subjects at the University of Campinas were acquired on a Philips Achieva 3T scanner in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment was acquired through a project approved by the research ethics committee from the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsagittal slice was used.\nThree segmentation methods were implemented to obtain binary masks over the 152 subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation and 10 correct and 10 erroneous segmentations for the signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and its variability in the erroneous segmentation shape. These characteristics help improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and two additional segmentation methods: ROQS (152 masks) and pixel-based (152 masks).\nMean correct signature generation\nIn this work, segmentations based on the Watershed method were used for the implementation of the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen. The spline for each one was obtained from the segmentation contour. The contour was obtained using mathematical morphology, applying the xor logical operation, pixel-wise, between the original segmentation and the eroded version of itself by a structuring element b:\n\\begin{equation} \\label{eq:per2}\nG_E = XOR(S,S \\ominus b)\n\\end{equation}\nFrom the contour, the spline is calculated. The implementation is a B-spline (Boor's basic spline). 
This formulation has two parameters: degree, representing polynomial degrees of the spline, and smoothness, being the trade off between proximity and smoothness in the fitness of the spline. Degree was fixed in 5 allowing adequate representation of the contour. Smoothness was fixed in 700. This value is based on the mean quantity of pixels of the contour that are passed for spline calculation. The curvature was measured over 500 points over the spline to generate the signature along 20 segmentations. Signatures were fitted to make possible comparison (Fig. signatures). Fitting resolution was fixed in 0.35. \nIn order to get a representative correct signature, mean signature per-resolution is generated using 20 correct signatures. The mean is calculated in each point.\nSignature configuration\nBecause of the mean signature was extracted for all the resolutions, it is necessary to find resolution in that diference between RMSE for correct signature and RMSE for erroneous signature is maximum. So, 20 news segmentations were used to find this optimal resolution, being divided as 10 correct segmentations and 10 erroneous segmentations. For each segmentation, it was extracted signature for all resolutions.", "smoothness = 700 #Smoothness\ndegree = 5 #Spline degree\nfit_res = 0.35\nresols = np.arange(0.01,0.5,0.01) #Signature resolutions\nresols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting\npoints = 500 #Points of Spline reconstruction\n\nprof_vec = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector\nfor ind, mask in enumerate(list_masks):\n #Loading correct mask\n mask_pn = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))\n refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction\n prof_vec[ind] = refer_temp\n if mask > 0: #Fitting curves using the first one as basis\n prof_ref = prof_vec[0]\n prof_vec[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting\n\nind_rel_cor = np.where(np.logical_not(list_labels))[0]\nind_rel_err = np.where(list_labels)[0]\n \nprint \"Correct segmentations' vector: \", prof_vec[ind_rel_cor].shape\nprint \"Erroneous segmentations' vector: \", prof_vec[ind_rel_err].shape\n\nprint(ind_rel_cor.shape)\nprint(ind_ex_cor.shape)\n\nres_ex = 15\n#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):\n# plt.figure() \n# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n# ax1.plot(prof_vec[ind_rel,res_ex,:].T)\n# ax1.set_title(\"Signature %i at res: %f\"%(ind_ex, resols[res_ex]))\n# \n# mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex))\n# ax2.axis('off')\n# ax2.imshow(mask_correct,'gray',interpolation='none')\n#\n# plt.show()\n\nplt.figure()\nplt.plot(prof_vec[ind_rel_cor,res_ex,:].T)\nplt.title(\"Correct signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\nplt.figure()\nplt.plot(prof_vec[ind_rel_err,res_ex,:].T)\nplt.title(\"Erroneous signatures for res: %f\"%(resols[res_ex]))\nplt.show()", "Autoencoder", "def train(model,train_loader,loss_fn,optimizer,epochs=100,patience=5,criteria_stop=\"loss\"):\n hist_train_loss = hist_val_loss = hist_train_acc = hist_val_acc = np.array([])\n best_epoch = patience_count = 0\n\n print(\"Training starts along %i epoch\"%epochs)\n for e in range(epochs):\n correct_train = correct_val = total_train = total_val = 0\n cont_i = loss_t_e = loss_v_e = 0\n for data_train in train_loader:\n var_inputs = Variable(data_train)\n\n predict, encode = model(var_inputs)\n loss = 
loss_fn(predict, var_inputs.view(-1, 500))\n loss_t_e += loss.data[0]\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n cont_i += 1\n\n #Stacking historical\n hist_train_loss = np.hstack((hist_train_loss, loss_t_e/(cont_i*1.0)))\n print('Epoch: ', e, 'train loss: ', hist_train_loss[-1])\n\n if(e == epochs-1):\n best_epoch = e\n best_model = copy.deepcopy(model)\n print(\"Training stopped\")\n patience_count += 1\n\n return(best_model, hist_train_loss, hist_val_loss)\n\nclass autoencoder(nn.Module):\n def __init__(self):\n super(autoencoder, self).__init__()\n self.fc1 = nn.Linear(500, 200)\n self.fc21 = nn.Linear(200, 2)\n self.fc3 = nn.Linear(2, 200)\n self.fc4 = nn.Linear(200, 500)\n self.relu = nn.ReLU()\n self.sigmoid = nn.Sigmoid()\n\n def encode(self, x):\n h1 = self.relu(self.fc1(x))\n return self.fc21(h1)\n\n def decode(self, z):\n h3 = self.relu(self.fc3(z))\n return self.sigmoid(self.fc4(h3))\n\n def forward(self, x):\n z = self.encode(x.view(-1, 500))\n return self.decode(z), z\n \nclass decoder(nn.Module):\n def __init__(self):\n super(decoder, self).__init__()\n self.fc3 = nn.Linear(2, 200)\n self.fc4 = nn.Linear(200, 500)\n self.relu = nn.ReLU()\n self.sigmoid = nn.Sigmoid()\n\n def decode(self, z):\n h3 = self.relu(self.fc3(z))\n return self.sigmoid(self.fc4(h3))\n\n def forward(self, x):\n return self.decode(x.view(-1, 2))\n\nnet = autoencoder()\nprint(net)\n\nres_chs = res_ex\ntrainloader = prof_vec[:,res_chs,:]\nval_norm = np.amax(trainloader).astype(float)\nprint val_norm\ntrainloader = trainloader / val_norm\ntrainloader = torch.FloatTensor(trainloader)\nprint trainloader.size()\n\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(net.parameters())\nepochs = 20\npatience = 5\nmax_batch = 64\ncriteria = \"loss\"\n\nbest_model, loss, loss_test = train(net, trainloader, loss_fn, optimizer, epochs = epochs, \n patience = patience, criteria_stop = criteria)\n\nplt.title('Loss')\nplt.xlabel('epochs')\nplt.ylabel('loss')\nplt.plot(loss, label='Train')\nplt.legend()\nplt.show()\n\ndecode, encode = net(Variable(trainloader))\nout_decod = decode.data.numpy()\nout_encod = encode.data.numpy()\nprint(out_decod.shape, out_encod.shape, list_labels.shape)\n\nplt.figure(figsize=(7, 6))\nplt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)\nplt.show()", "Testing in new datasets\nROQS test", "#Loading labeled segmentations\nseg_label = genfromtxt('../../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8')\n\nlist_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations\nlist_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels\nind_ex_err = list_masks[np.where(list_labels)[0]]\nind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]\n\nprof_vec_roqs = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector\nfor ind, mask in enumerate(list_masks):\n mask_pn = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask)) #Loading mask\n refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction\n prof_vec_roqs[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis\n\nind_rel_cor = np.where(np.logical_not(list_labels))[0]\nind_rel_err = np.where(list_labels)[0]\n \nprint \"Correct segmentations' vector: \", prof_vec_roqs[ind_rel_cor].shape\nprint \"Erroneous segmentations' vector: \", prof_vec_roqs[ind_rel_err].shape\n\n#for ind_ex, ind_rel in 
zip(ind_ex_err, ind_rel_err):\n# plt.figure() \n# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n# ax1.plot(prof_vec_roqs[ind_rel,res_ex,:].T)\n# ax1.set_title(\"Signature %i at res: %f\"%(ind_ex, resols[res_ex]))\n# \n# mask_correct = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(ind_ex))\n# ax2.axis('off')\n# ax2.imshow(mask_correct,'gray',interpolation='none')\n#\n# plt.show()\n\nplt.figure()\nplt.plot(prof_vec_roqs[ind_rel_cor,res_ex,:].T)\nplt.title(\"Correct signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\nplt.figure()\nplt.plot(prof_vec_roqs[ind_rel_err,res_ex,:].T)\nplt.title(\"Erroneous signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\ntrainloader = prof_vec_roqs[:,res_chs,:]\ntrainloader = trainloader / val_norm\ntrainloader = torch.FloatTensor(trainloader)\nprint trainloader.size()\n\ndecode, encode = net(Variable(trainloader))\nout_decod = decode.data.numpy()\nout_encod = encode.data.numpy()\nprint(out_decod.shape, out_encod.shape, list_labels.shape)\n\nplt.figure(figsize=(7, 6))\nplt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)\nplt.show()", "Pixel-based test", "#Loading labeled segmentations\nseg_label = genfromtxt('../../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8')\n\nlist_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations\nlist_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels\nind_ex_err = list_masks[np.where(list_labels)[0]]\nind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]\n\nprof_vec_pixe = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector\nfor ind, mask in enumerate(list_masks):\n mask_pn = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask)) #Loading mask\n refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction\n prof_vec_pixe[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis\n\nind_rel_cor = np.where(np.logical_not(list_labels))[0]\nind_rel_err = np.where(list_labels)[0]\n \nprint \"Correct segmentations' vector: \", prof_vec_pixe[ind_rel_cor].shape\nprint \"Erroneous segmentations' vector: \", prof_vec_pixe[ind_rel_err].shape\n\n#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):\n# plt.figure() \n# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))\n# ax1.plot(prof_vec_pixe[ind_rel,res_ex,:].T)\n# ax1.set_title(\"Signature %i at res: %f\"%(ind_ex, resols[res_ex]))\n# \n# mask_correct = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(ind_ex))\n# ax2.axis('off')\n# ax2.imshow(mask_correct,'gray',interpolation='none')\n#\n# plt.show()\n\nplt.figure()\nplt.plot(prof_vec_pixe[ind_rel_cor,res_ex,:].T)\nplt.title(\"Correct signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\nplt.figure()\nplt.plot(prof_vec_pixe[ind_rel_err,res_ex,:].T)\nplt.title(\"Erroneous signatures for res: %f\"%(resols[res_ex]))\nplt.show()\n\ntrainloader = prof_vec_pixe[:,res_chs,:]\ntrainloader = trainloader / val_norm\ntrainloader = torch.FloatTensor(trainloader)\nprint trainloader.size()\n\ndecode, encode = net(Variable(trainloader))\nout_decod = decode.data.numpy()\nout_encod = encode.data.numpy()\nprint(out_decod.shape, out_encod.shape, list_labels.shape)\n\nplt.figure(figsize=(7, 6))\nplt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)\nplt.show()", "The RMSE over the 10 correct segmentations was compared with RMSE over the 10 erroneous segmentations. 
As expected, RMSE for correct segmentations was greater than RMSE for erroneous segmentations along all the resolutions. In general, this is true, but optimal resolution guarantee the maximum difference between both of RMSE results: correct and erroneous.\nSo, to find optimal resolution, difference between correct and erroneous RMSE was calculated over all resolutions.\nThe greatest difference resulted at resolution 0.1. In this resolution, threshold for separate erroneous and correct segmentations is established as 30% of the distance between the mean RMSE of the correct masks and the mean RMSE of the erroneous masks.\nMethod testing\nFinally, method test was performed in the 152 subject dataset: Watershed dataset with 112 segmentations, ROQS dataset with 152 segmentations and pixel-based dataset with 152 segmentations.\nDiscussion and conclusion\nIn this work, a method for segmentation error detection in large datasets was proposed based-on shape signature. RMSE was used for comparison between signatures. Signature can be extracted en various resolutions but optimal resolution (ls=0.1) was chosen in order to get maximum separation between correct RMSE and erroneous RMSE. In this optimal resolution, threshold was fixed at 27.95 allowing separation of the two segmentation classes.The method achieved 95% of accuracy on the test Watershed segmentations, and 95% and 94% on new datasets: ROQS and pixel-based, respectively.\n40 Watershed segmentations on dataset were used to generation and configuration mean correct signature because of the greater number of erroneous segmentations and major variability on the error shape. Because the signature holds the CC shape, the method can be extended to new datasets segmented with any method. Accuracy and generalization can be improve varying the segmentations used to generate and adjust the mean correct signature." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CrowdTruth/CrowdTruth-core
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
apache-2.0
[ "CrowdTruth for Multiple Choice Tasks: Person Type Annotation in Video\nIn this tutorial, we will apply CrowdTruth metrics to a multiple choice crowdsourcing task for Person Type Annotation from video fragments. The workers were asked to watch a video of about 3-5 seconds and then pick from a multiple choice list which are the types of person that appear in the video fragment. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.\nTo replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript. \nThis is a screenshot of the task as it appeared to workers:\n\"\nA sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:", "import pandas as pd\n\ntest_data = pd.read_csv(\"../data/person-video-multiple-choice.csv\")\ntest_data.head()", "Declaring a pre-processing configuration\nThe pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:", "import crowdtruth\nfrom crowdtruth.configuration import DefaultConfig", "Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type Annotation in Video task:\n\ninputColumns: list of input columns from the .csv file with the input data\noutputColumns: list of output columns from the .csv file with the answers from the workers\nannotation_separator: string that separates between the crowd annotations in outputColumns\nopen_ended_task: boolean variable defining whether the task is open-ended (i.e. 
the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False\nannotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations\nprocessJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector\n\nThe complete configuration class is declared below:", "class TestConfig(DefaultConfig):\n inputColumns = [\"videolocation\", \"subtitles\", \"imagetags\", \"subtitletags\"]\n outputColumns = [\"selected_answer\"]\n \n # processing of a closed task\n open_ended_task = False\n annotation_vector = [\"archeologist\", \"architect\", \"artist\", \"astronaut\", \"athlete\", \"businessperson\",\"celebrity\", \n \"chef\", \"criminal\", \"engineer\", \"farmer\", \"fictionalcharacter\", \"journalist\", \"judge\", \n \"lawyer\", \"militaryperson\", \"model\", \"monarch\", \"philosopher\", \"politician\", \"presenter\", \n \"producer\", \"psychologist\", \"scientist\", \"sportsmanager\", \"writer\", \"none\", \"other\"]\n \n def processJudgments(self, judgments):\n # pre-process output to match the values in annotation_vector\n for col in self.outputColumns:\n # transform to lowercase\n judgments[col] = judgments[col].apply(lambda x: str(x).lower())\n # remove square brackets from annotations\n judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))\n judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))\n # remove the quotes around the annotations\n judgments[col] = judgments[col].apply(lambda x: str(x).replace('\"',''))\n return judgments", "Pre-processing the input data\nAfter declaring the configuration of our input file, we are ready to pre-process the crowd data:", "data, config = crowdtruth.load(\n file = \"../data/person-video-multiple-choice.csv\",\n config = TestConfig()\n)\n\ndata['judgments'].head()", "Computing the CrowdTruth metrics\nThe pre-processed data can then be used to calculate the CrowdTruth metrics:", "results = crowdtruth.run(data, config)", "Video fragment quality\nThe video fragments metrics are stored in results[\"units\"]. The uqs column in results[\"units\"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment.", "results[\"units\"].head()", "Distribution of video fragment quality scores\nThe histogram below shows video fragment quality scores are nicely distributed, with both low and high quality video fragments.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.hist(results[\"units\"][\"uqs\"])\nplt.xlabel(\"Video Fragment Quality Score\")\nplt.ylabel(\"Video Fragments\")", "The unit_annotation_score column in results[\"units\"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.", "results[\"units\"][\"unit_annotation_score\"].head()", "Ambiguous video fragments\nA low unit quality score can be used to identify ambiguous video fragments. First, we sort the unit quality metrics stored in results[\"units\"] based on the quality score (uqs), in ascending order. 
Thus, the most clear video fragments are found at the tail of the new structure:", "results[\"units\"].sort_values(by=[\"uqs\"])[[\"input.videolocation\", \"uqs\", \"unit_annotation_score\"]].head()", "Below we show an example video fragment with low quality score, where workers couldn't agree on what annotation best describes the person in the video. The role of the person in the video is not directly specified, so the workers made assumptions based on the topic of discussion.", "from IPython.display import HTML\n\nprint(results[\"units\"].sort_values(by=[\"uqs\"])[[\"uqs\"]].iloc[0])\nprint(\"\\n\")\n\nprint(\"Person types picked for the video below:\")\nfor k, v in results[\"units\"].sort_values(by=[\"uqs\"])[[\"unit_annotation_score\"]].iloc[0][\"unit_annotation_score\"].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nvid_url = list(results[\"units\"].sort_values(by=[\"uqs\"])[[\"input.videolocation\"]].iloc[0])\nHTML(\"<video width='320' height='240' controls><source src=\" + vid_url[0] + \" type='video/mp4'></video>\")", "Unambiguous video fragments\nSimilarly, a high unit quality score represents lack of ambiguity of the video fragment.", "results[\"units\"].sort_values(by=[\"uqs\"], ascending=False)[[\"input.videolocation\", \"uqs\", \"unit_annotation_score\"]].head()", "Below we show an example unambiguous video fragment - no person appears in the video, so most workers picked the none option in the crowd task.", "print(results[\"units\"].sort_values(by=[\"uqs\"], ascending=False)[[\"uqs\"]].iloc[0])\nprint(\"\\n\")\n\nprint(\"Person types picked for the video below:\")\nfor k, v in results[\"units\"].sort_values(by=[\"uqs\"], ascending=False)[[\"unit_annotation_score\"]].iloc[0][\"unit_annotation_score\"].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nvid_url = list(results[\"units\"].sort_values(by=[\"uqs\"], ascending=False)[[\"input.videolocation\"]].iloc[0])\nHTML(\"<video width='320' height='240' controls><source src=\" + vid_url[0] + \" type='video/mp4'></video>\")", "Worker Quality Scores\nThe worker metrics are stored in results[\"workers\"]. 
The wqs columns in results[\"workers\"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.", "results[\"workers\"].head()", "Distribution of worker quality scores\nThe histogram below shows the worker quality scores are distributed across a wide spectrum, from low to high quality workers.", "plt.hist(results[\"workers\"][\"wqs\"])\nplt.xlabel(\"Worker Quality Score\")\nplt.ylabel(\"Workers\")", "Low quality workers\nLow worker quality scores can be used to identify spam workers, or workers that have misunderstood the annotation task.", "results[\"workers\"].sort_values(by=[\"wqs\"]).head()", "Example annotations from low quality worker 44606916 (with the second lowest quality score) for video fragment 1856509900:", "import operator\n\nwork_id = results[\"workers\"].sort_values(by=[\"wqs\"]).index[1]\nwork_units = results[\"judgments\"][results[\"judgments\"][\"worker\"] == work_id][\"unit\"]\n\nwork_judg = results[\"judgments\"][results[\"judgments\"][\"unit\"] == work_units.iloc[0]]\n\nprint(\"JUDGMENTS OF LOW QUALITY WORKER %d FOR VIDEO %d:\" % (work_id, work_units.iloc[0]))\nfor k, v in work_judg[work_judg[\"worker\"] == work_id][\"output.selected_answer\"].iloc[0].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nprint(\"\\nALL JUDGMENTS FOR VIDEO %d\" % work_units.iloc[0])\nsorted_judg = sorted(\n results[\"units\"][\"output.selected_answer\"][work_units.iloc[0]].items(),\n key=operator.itemgetter(1),\n reverse=True)\n\nfor k, v in sorted_judg:\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nvid_url = results[\"units\"][\"input.videolocation\"][work_units.iloc[0]]\nHTML(\"<video width='320' height='240' controls><source src=\" + str(vid_url) + \" type='video/mp4'></video>\")", "Example annotations from the same low quality worker (44606916) for a second video fragment (1856509903):", "work_judg = results[\"judgments\"][results[\"judgments\"][\"unit\"] == work_units.iloc[1]]\n\nprint(\"JUDGMENTS OF LOW QUALITY WORKER %d FOR VIDEO %d:\" % (work_id, work_units.iloc[1]))\nfor k, v in work_judg[work_judg[\"worker\"] == work_id][\"output.selected_answer\"].iloc[0].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nprint(\"\\nALL JUDGMENTS FOR VIDEO %d\" % work_units.iloc[0])\nsorted_judg = sorted(\n results[\"units\"][\"output.selected_answer\"][work_units.iloc[1]].items(),\n key=operator.itemgetter(1),\n reverse=True)\n\nfor k, v in sorted_judg:\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nvid_url = results[\"units\"][\"input.videolocation\"][work_units.iloc[1]]\nHTML(\"<video width='320' height='240' controls><source src=\" + str(vid_url) + \" type='video/mp4'></video>\")", "High quality workers\nHigh worker quality scores can be used to identify reliable workers.", "results[\"workers\"].sort_values(by=[\"wqs\"], ascending=False).head()", "Example annotations from worker 6432269 (with the highest worker quality score) for video fragment 1856509904:", "work_id = results[\"workers\"].sort_values(by=[\"wqs\"], ascending=False).index[0]\nwork_units = results[\"judgments\"][results[\"judgments\"][\"worker\"] == work_id][\"unit\"]\nwork_judg = results[\"judgments\"][results[\"judgments\"][\"unit\"] == work_units.iloc[0]]\n\nprint(\"JUDGMENTS OF HIGH QUALITY WORKER %d FOR VIDEO %d:\" % (work_id, work_units.iloc[0]))\nfor k, v in work_judg[work_judg[\"worker\"] == work_id][\"output.selected_answer\"].iloc[0].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nprint(\"\\nALL JUDGMENTS FOR VIDEO %d\" % 
work_units.iloc[1])\nsorted_judg = sorted(\n results[\"units\"][\"output.selected_answer\"][work_units.iloc[0]].items(),\n key=operator.itemgetter(1),\n reverse=True)\n\nfor k, v in sorted_judg:\n if v > 0:\n print(str(k) + \" : \" + str(v))\nvid_url = results[\"units\"][\"input.videolocation\"][work_units.iloc[0]]\nHTML(\"<video width='320' height='240' controls><source src=\" + str(vid_url) + \" type='video/mp4'></video>\")", "Example annotations from worker 6432269 (with the highest worker quality score) for video fragment 1856509908:", "work_id = results[\"workers\"].sort_values(by=[\"wqs\"], ascending=False).index[0]\nwork_units = results[\"judgments\"][results[\"judgments\"][\"worker\"] == work_id][\"unit\"]\nwork_judg = results[\"judgments\"][results[\"judgments\"][\"unit\"] == work_units.iloc[1]]\n\nprint(\"JUDGMENTS OF HIGH QUALITY WORKER %d FOR VIDEO %d:\" % (work_id, work_units.iloc[1]))\nfor k, v in work_judg[work_judg[\"worker\"] == work_id][\"output.selected_answer\"].iloc[0].items():\n if v > 0:\n print(str(k) + \" : \" + str(v))\n\nprint(\"\\nALL JUDGMENTS FOR VIDEO %d\" % work_units.iloc[1])\nsorted_judg = sorted(\n results[\"units\"][\"output.selected_answer\"][work_units.iloc[1]].items(),\n key=operator.itemgetter(1),\n reverse=True)\n\nfor k, v in sorted_judg:\n if v > 0:\n print(str(k) + \" : \" + str(v))\nvid_url = results[\"units\"][\"input.videolocation\"][work_units.iloc[1]]\nHTML(\"<video width='320' height='240' controls><source src=\" + str(vid_url) + \" type='video/mp4'></video>\")", "Worker Quality vs. # Annotations\nAs we can see from the plot below, there is no clear correlation between worker quality and number of annotations collected from the worker.", "plt.scatter(results[\"workers\"][\"wqs\"], results[\"workers\"][\"judgment\"])\nplt.xlabel(\"WQS\")\nplt.ylabel(\"# Annotations\")", "Annotation Quality Scores\nThe annotation metrics are stored in results[\"annotations\"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one annotation.\nThere is a slight correlation between the number of annotations (column output.selected_answer) and the annotation quality score - annotations that have not been picked often (e.g. engineer, farmer) tend to have lower quality scores - this is because these annotations are less present in the corpus, therefore the likelihood that they are picked is lower, and when they do get picked it is more likely it was a mistake by the worker. However, it is not a set rule, and there exist annotations that are picked less often (e.g. 
astronaut) that can have high quality scores.", "results[\"annotations\"][\"output.selected_answer\"] = 0\n\nfor idx in results[\"judgments\"].index:\n for k,v in results[\"judgments\"][\"output.selected_answer\"][idx].items():\n if v > 0:\n results[\"annotations\"].loc[k, \"output.selected_answer\"] += 1\n \n\nresults[\"annotations\"] = results[\"annotations\"].sort_values(by=[\"aqs\"], ascending=False)\nresults[\"annotations\"].round(3)[[\"output.selected_answer\", \"aqs\"]]\n\nrows = []\nheader = [\"unit\", \"videolocation\", \"subtitles\", \"imagetags\", \"subtitletags\", \"uqs\", \"uqs_initial\"]\nannotation_vector = [\"archeologist\", \"architect\", \"artist\", \"astronaut\", \"athlete\", \"businessperson\",\"celebrity\", \n \"chef\", \"criminal\", \"engineer\", \"farmer\", \"fictionalcharacter\", \"journalist\", \"judge\", \n \"lawyer\", \"militaryperson\", \"model\", \"monarch\", \"philosopher\", \"politician\", \"presenter\", \n \"producer\", \"psychologist\", \"scientist\", \"sportsmanager\", \"writer\", \"none\", \"other\"]\nheader.extend(annotation_vector)\nannotation_vector_in = [\"archeologist_initial_initial\", \"architect_initial\", \"artist_initial\", \"astronaut_initial\", \n \"athlete_initial\", \"businessperson_initial\",\"celebrity_initial\", \"chef_initial\", \n \"criminal_initial\", \"engineer_initial\", \"farmer_initial\", \"fictionalcharacter_initial\", \n \"journalist_initial\", \"judge_initial\", \"lawyer_initial\", \"militaryperson_initial\", \n \"model_initial\", \"monarch_initial\", \"philosopher_initial\", \"politician_initial\", \n \"presenter_initial\", \"producer_initial\", \"psychologist_initial\", \"scientist_initial\", \n \"sportsmanager_initial\", \"writer_initial\", \"none_initial\", \"other_initial\"]\nheader.extend(annotation_vector_in)\nunits = results[\"units\"].reset_index()\nfor i in range(len(units.index)):\n row = [units[\"unit\"].iloc[i], units[\"input.videolocation\"].iloc[i], units[\"input.subtitles\"].iloc[i], \\\n units[\"input.imagetags\"].iloc[i], units[\"input.subtitletags\"].iloc[i], units[\"uqs\"].iloc[i], \n units[\"uqs_initial\"].iloc[i]]\n for item in annotation_vector:\n row.append(units[\"unit_annotation_score\"].iloc[i][item])\n for item in annotation_vector_in:\n row.append(units[\"unit_annotation_score_initial\"].iloc[i][item])\n rows.append(row)\nrows = pd.DataFrame(rows, columns=header)\nrows.to_csv(\"../data/results/multchoice-people-video-units.csv\", index=False)\n\nresults[\"workers\"].to_csv(\"../data/results/multchoice-people-video-workers.csv\", index=True)\n\nresults[\"annotations\"].to_csv(\"../data/results/multchoice-people-video-annotations.csv\", index=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
samgoodgame/sf_crime
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
mit
[ "Kaggle San Francisco Crime Classification\nBerkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore\nEnvironment and Data", "# Import relevant libraries:\nimport time\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import log_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n# Import Meta-estimators\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\n# Import Calibration tools\nfrom sklearn.calibration import CalibratedClassifierCV\n\n# Set random seed and format print output:\nnp.random.seed(0)\nnp.set_printoptions(precision=3)", "DDL to construct table for SQL transformations:\nsql\nCREATE TABLE kaggle_sf_crime (\ndates TIMESTAMP, \ncategory VARCHAR,\ndescript VARCHAR,\ndayofweek VARCHAR,\npd_district VARCHAR,\nresolution VARCHAR,\naddr VARCHAR,\nX FLOAT,\nY FLOAT);\nGetting training data into a locally hosted PostgreSQL database:\nsql\n\\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;\nSQL Query used for transformations:\nsql\nSELECT\n category,\n date_part('hour', dates) AS hour_of_day,\n CASE\n WHEN dayofweek = 'Monday' then 1\n WHEN dayofweek = 'Tuesday' THEN 2\n WHEN dayofweek = 'Wednesday' THEN 3\n WHEN dayofweek = 'Thursday' THEN 4\n WHEN dayofweek = 'Friday' THEN 5\n WHEN dayofweek = 'Saturday' THEN 6\n WHEN dayofweek = 'Sunday' THEN 7\n END AS dayofweek_numeric,\n X,\n Y,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS bayview_binary,\n CASE\n WHEN pd_district = 'INGLESIDE' THEN 1\n ELSE 0\n END AS ingleside_binary,\n CASE\n WHEN pd_district = 'NORTHERN' THEN 1\n ELSE 0\n END AS northern_binary,\n CASE\n WHEN pd_district = 'CENTRAL' THEN 1\n ELSE 0\n END AS central_binary,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS pd_bayview_binary,\n CASE\n WHEN pd_district = 'MISSION' THEN 1\n ELSE 0\n END AS mission_binary,\n CASE\n WHEN pd_district = 'SOUTHERN' THEN 1\n ELSE 0\n END AS southern_binary,\n CASE\n WHEN pd_district = 'TENDERLOIN' THEN 1\n ELSE 0\n END AS tenderloin_binary,\n CASE\n WHEN pd_district = 'PARK' THEN 1\n ELSE 0\n END AS park_binary,\n CASE\n WHEN pd_district = 'RICHMOND' THEN 1\n ELSE 0\n END AS richmond_binary,\n CASE\n WHEN pd_district = 'TARAVAL' THEN 1\n ELSE 0\n END AS taraval_binary\nFROM kaggle_sf_crime;\nLoading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)\nWe seek to add features to our models that will improve performance with respect to out desired performance metric. There is evidence that there is a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. 
More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. In the setting of strong evidence that weather influences crime, we see it as a candidate for additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesizzed to improve performance. These features included (insert what we eventually include).", "#data_path = \"./data/train_transformed.csv\"\n\n#df = pd.read_csv(data_path, header=0)\n#x_data = df.drop('category', 1)\n#y = df.category.as_matrix()\n\n########## Adding the date back into the data\n#import csv\n#import time\n#import calendar\n#data_path = \"./data/train.csv\"\n#dataCSV = open(data_path, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allData = csvData[1:]\n#dataCSV.close()\n\n#df2 = pd.DataFrame(allData)\n#df2.columns = csvFields\n#dates = df2['Dates']\n#dates = dates.apply(time.strptime, args=(\"%Y-%m-%d %H:%M:%S\",))\n#dates = dates.apply(calendar.timegm)\n#print(dates.head())\n\n#x_data['secondsFromEpoch'] = dates\n#colnames = x_data.columns.tolist()\n#colnames = colnames[-1:] + colnames[:-1]\n#x_data = x_data[colnames]\n##########\n\n########## Adding the weather data into the original crime data\n#weatherData1 = \"./data/1027175.csv\"\n#weatherData2 = \"./data/1027176.csv\"\n#dataCSV = open(weatherData1, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allWeatherData1 = csvData[1:]\n#dataCSV.close()\n\n#dataCSV = open(weatherData2, 'rt')\n#csvData = list(csv.reader(dataCSV))\n#csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\n#allWeatherData2 = csvData[1:]\n#dataCSV.close()\n\n#weatherDF1 = pd.DataFrame(allWeatherData1)\n#weatherDF1.columns = csvFields\n#dates1 = weatherDF1['DATE']\n#sunrise1 = weatherDF1['DAILYSunrise']\n#sunset1 = weatherDF1['DAILYSunset']\n\n#weatherDF2 = pd.DataFrame(allWeatherData2)\n#weatherDF2.columns = csvFields\n#dates2 = weatherDF2['DATE']\n#sunrise2 = weatherDF2['DAILYSunrise']\n#sunset2 = weatherDF2['DAILYSunset']\n\n#functions for processing the sunrise and sunset times of each day\n#def get_hour_and_minute(milTime):\n # hour = int(milTime[:-2])\n # minute = int(milTime[-2:])\n # return [hour, minute]\n\n#def get_date_only(date):\n# return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]]))\n\n#def structure_sun_time(timeSeries, dateSeries):\n# sunTimes = timeSeries.copy()\n# for index in range(len(dateSeries)):\n# sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]]))\n# return sunTimes\n\n#dates1 = dates1.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\n#sunrise1 = sunrise1.apply(get_hour_and_minute)\n#sunrise1 = structure_sun_time(sunrise1, dates1)\n#sunrise1 = sunrise1.apply(calendar.timegm)\n#sunset1 = sunset1.apply(get_hour_and_minute)\n#sunset1 = structure_sun_time(sunset1, dates1)\n#sunset1 = sunset1.apply(calendar.timegm)\n#dates1 = 
dates1.apply(calendar.timegm)\n\n#dates2 = dates2.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\n#sunrise2 = sunrise2.apply(get_hour_and_minute)\n#sunrise2 = structure_sun_time(sunrise2, dates2)\n#sunrise2 = sunrise2.apply(calendar.timegm)\n#sunset2 = sunset2.apply(get_hour_and_minute)\n#sunset2 = structure_sun_time(sunset2, dates2)\n#sunset2 = sunset2.apply(calendar.timegm)\n#dates2 = dates2.apply(calendar.timegm)\n\n#weatherDF1['DATE'] = dates1\n#weatherDF1['DAILYSunrise'] = sunrise1\n#weatherDF1['DAILYSunset'] = sunset1\n#weatherDF2['DATE'] = dates2\n#weatherDF2['DAILYSunrise'] = sunrise2\n#weatherDF2['DAILYSunset'] = sunset2\n\n#weatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)\n\n# Starting off with some of the easier features to work with-- more to come here . . . still in beta\n#weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \\\n# 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']]\n#weatherMetrics = weatherMetrics.convert_objects(convert_numeric=True)\n#weatherDates = weatherMetrics['DATE']\n#'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed',\n#'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY'\n#timeWindow = 10800 #3 hours\n#hourlyDryBulbTemp = []\n#hourlyRelativeHumidity = []\n#hourlyWindSpeed = []\n#hourlySeaLevelPressure = []\n#hourlyVisibility = []\n#dailySunrise = []\n#dailySunset = []\n#daylight = []\n#test = 0\n#for timePoint in dates:#dates is the epoch time from the kaggle data\n# relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]\n# hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())\n# hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())\n# hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())\n# hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())\n# hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())\n# dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1])\n# dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1])\n# daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1])))\n #if timePoint < relevantWeather['DAILYSunset'][-1]:\n #daylight.append(1)\n #else:\n #daylight.append(0)\n \n# if test%100000 == 0:\n# print(relevantWeather)\n# test += 1\n\n#hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp))\n#hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity))\n#hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed))\n#hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure))\n#hourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility))\n#dailySunrise = pd.Series.from_array(np.array(dailySunrise))\n#dailySunset = pd.Series.from_array(np.array(dailySunset))\n#daylight = pd.Series.from_array(np.array(daylight))\n\n#x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp\n#x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity\n#x_data['HOURLYWindSpeed'] = hourlyWindSpeed\n#x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure\n#x_data['HOURLYVISIBILITY'] = hourlyVisibility\n#x_data['DAILYSunrise'] = dailySunrise\n#x_data['DAILYSunset'] = dailySunset\n#x_data['Daylight'] = daylight\n\n#x_data.to_csv(path_or_buf=\"C:/MIDS/W207 final project/x_data.csv\")\n##########\n\n# Impute missing values with mean values:\n#x_complete = 
x_data.fillna(x_data.mean())\n#X_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\n#X = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist:\n#shuffle = np.random.permutation(np.arange(X.shape[0]))\n#X, y = X[shuffle], y[shuffle]\n\n# Separate training, dev, and test data:\n#test_data, test_labels = X[800000:], y[800000:]\n#dev_data, dev_labels = X[700000:800000], y[700000:800000]\n#train_data, train_labels = X[:700000], y[:700000]\n\n#mini_train_data, mini_train_labels = X[:75000], y[:75000]\n#mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]\n#labels_set = set(mini_dev_labels)\n#print(labels_set)\n#print(len(labels_set))\n#print(train_data[:10])", "Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.", "data_path = \"/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_3.csv\"\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n# Impute missing values with mean values:\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\nX = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:\nnp.random.seed(0)\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, y = X[shuffle], y[shuffle]\n\n# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare\n# crimes from the data for quality issues.\nX_minus_trea = X[np.where(y != 'TREA')]\ny_minus_trea = y[np.where(y != 'TREA')]\nX_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]\ny_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]\n\n# Separate training, dev, and test data:\ntest_data, test_labels = X_final[800000:], y_final[800000:]\ndev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]\ntrain_data, train_labels = X_final[100000:700000], y_final[100000:700000]\ncalibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]\n\n# Create mini versions of the above sets\nmini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]\nmini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]\nmini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]\n\n# Create list of the crime type labels. This will act as the \"labels\" parameter for the log loss functions that follow\ncrime_labels = list(set(y_final))\ncrime_labels_mini_train = list(set(mini_train_labels))\ncrime_labels_mini_dev = list(set(mini_dev_labels))\ncrime_labels_mini_calibrate = list(set(mini_calibrate_labels))\nprint(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))\n\n#print(len(train_data),len(train_labels))\n#print(len(dev_data),len(dev_labels))\n#print(len(mini_train_data),len(mini_train_labels))\n#print(len(mini_dev_data),len(mini_dev_labels))\n#print(len(test_data),len(test_labels))\n#print(len(mini_calibrate_data),len(mini_calibrate_labels))\n#print(len(calibrate_data),len(calibrate_labels))", "Sarah's School data that we may still get to work as features: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. 
Will be run by Isabell in final notebook with correct files she needs)", "### Read in zip code data\n#data_path_zip = \"./data/2016_zips.csv\"\n#zips = pd.read_csv(data_path_zip, header=0, sep ='\\t', usecols = [0,5,6], names = [\"GEOID\", \"INTPTLAT\", \"INTPTLONG\"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})\n#sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]\n\n### Mapping longitude/latitude to zipcodes\n#def dist(lat1, long1, lat2, long2):\n# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n# return abs(lat1-lat2)+abs(long1-long2)\n#def find_zipcode(lat, long): \n# distances = sf_zips.apply(lambda row: dist(lat, long, row[\"INTPTLAT\"], row[\"INTPTLONG\"]), axis=1)\n# return sf_zips.loc[distances.idxmin(), \"GEOID\"]\n#x_data['zipcode'] = 0\n#for i in range(0, 1):\n# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\n#x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)\n\n\n### Read in school data\n#data_path_schools = \"./data/pubschls.csv\"\n#schools = pd.read_csv(data_path_schools,header=0, sep ='\\t', usecols = [\"CDSCode\",\"StatusType\", \"School\", \"EILCode\", \"EILName\", \"Zip\", \"Latitude\", \"Longitude\"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})\n#schools = schools[(schools[\"StatusType\"] == 'Active')]\n\n### Find the closest school\n#def dist(lat1, long1, lat2, long2):\n# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)\n\n#def find_closest_school(lat, long): \n# distances = schools.apply(lambda row: dist(lat, long, row[\"Latitude\"], row[\"Longitude\"]), axis=1)\n# return min(distances)\n#x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)\n", "Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)", "# The Kaggle submission format requires listing the ID of each example.\n# This is to remember the order of the IDs after shuffling\n#allIDs = np.array(list(df.axes[0]))\n#allIDs = allIDs[shuffle]\n\n#testIDs = allIDs[800000:]\n#devIDs = allIDs[700000:800000]\n#trainIDs = allIDs[:700000]\n\n# Extract the column names for the required submission format\n#sampleSubmission_path = \"./data/sampleSubmission.csv\"\n#sampleDF = pd.read_csv(sampleSubmission_path)\n#allColumns = list(sampleDF.columns)\n#featureColumns = allColumns[1:]\n\n# Extracting the test data for a baseline submission\n#real_test_path = \"./data/test_transformed.csv\"\n#testDF = pd.read_csv(real_test_path, header=0)\n#real_test_data = testDF\n\n#test_complete = real_test_data.fillna(real_test_data.mean())\n#Test_raw = test_complete.as_matrix()\n\n#TestData = MinMaxScaler().fit_transform(Test_raw)\n\n# Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason\n#testIDs = list(testDF.axes[0])", "Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. 
Will be run by Isabell in final notebook with correct files she needs)", "# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data\n#def MNB():\n# mnb = MultinomialNB(alpha = 0.0000001)\n# mnb.fit(train_data, train_labels)\n# print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n# return mnb.predict_proba(dev_data)\n#MNB()\n\n#baselinePredictionProbabilities = MNB()\n\n# Place the resulting prediction probabilities in a .csv file in the required format\n# First, turn the prediction probabilties into a data frame\n#resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)\n# Add the IDs as a final column\n#resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)\n# Make the 'Id' column the first column\n#colnames = resultDF.columns.tolist()\n#colnames = colnames[-1:] + colnames[:-1]\n#resultDF = resultDF[colnames]\n# Output to a .csv file\n# resultDF.to_csv('result.csv',index=False)", "Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.", "## Data sub-setting quality check-point\nprint(train_data[:1])\nprint(train_labels[:1])\n\n# Modeling quality check-point with MNB--fast model\n\ndef MNB():\n mnb = MultinomialNB(alpha = 0.0000001)\n mnb.fit(train_data, train_labels)\n print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n \nMNB()", "Defining Performance Criteria\nAs determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error.\n(Describe each performance metric and a domain in which it is preferred. 
Give Pros/Cons if able)\n\n\nMulti-class Log Loss:\n\n\nAccuracy:\n\n\nF-score:\n\n\nLift:\n\n\nROC Area:\n\n\nAverage precision\n\n\nPrecision/Recall break-even point:\n\n\nSquared-error:\n\n\nModel Prototyping\nWe will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:", "def model_prototype(train_data, train_labels, eval_data, eval_labels):\n knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)\n bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)\n mnb = MultinomialNB().fit(train_data, train_labels)\n log_reg = LogisticRegression().fit(train_data, train_labels)\n neural_net = MLPClassifier().fit(train_data, train_labels)\n random_forest = RandomForestClassifier().fit(train_data, train_labels)\n decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)\n support_vm_step_one = svm.SVC(probability = True)\n support_vm = support_vm_step_one.fit(train_data, train_labels)\n \n models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]\n for model in models:\n eval_prediction_probabilities = model.predict_proba(eval_data)\n eval_predictions = model.predict(eval_data)\n print(model, \"Multi-class Log Loss:\", log_loss(y_true = eval_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), \"\\n\\n\")\n\nmodel_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)", "Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier\nHere we seek to optimize the performance of our classifiers in a three-step, dynamnic engineering process. \n1) Feature addition\nWe previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For comparision with respoect to how the added features improved our performance with respect to log loss, please refer back to our initial submission.\nWe can have Kalvin expand on exactly what he did here.\n2) Hyperparameter tuning\nEach classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier as detailed below.\n3) Model calibration\nWe can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.\n\n\nPlatt Scaling: ((brief explanation of how it works))\n\n\nIsotonic Regression: ((brief explanation of how it works))\n\n\nFor each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. 
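A minimal sketch of the calibration pattern that the cells below follow (here clf is a placeholder for any classifier that has already been fit on the training split; it is not one of our tuned models):\npython\nfrom sklearn.calibration import CalibratedClassifierCV  # already imported above\n# clf is assumed to be an already-fitted classifier, hence cv = 'prefit'\nccv = CalibratedClassifierCV(clf, method = 'sigmoid', cv = 'prefit')  # or method = 'isotonic'\nccv.fit(mini_calibrate_data, mini_calibrate_labels)  # calibrate on the held-out calibration split\ncalibrated_probabilities = ccv.predict_proba(mini_dev_data)\n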
Thus, in practice the cross-validation generator will not be a modifiable parameter for us.\nK-Nearest Neighbors\nHyperparameter tuning:\nFor the KNN classifier, we can seek to optimize the following classifier parameters: n-neighbors, weights, and the power parameter ('p').", "list_for_ks = []\nlist_for_ws = []\nlist_for_ps = []\nlist_for_log_loss = []\n\ndef k_neighbors_tuned(k,w,p):\n tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)\n list_for_ks.append(this_k)\n list_for_ws.append(this_w)\n list_for_ps.append(this_p)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with KNN and k,w,p =\", k,\",\",w,\",\", p, \"is:\", working_log_loss)\n\nk_value_tuning = [i for i in range(1,5002,500)]\nweight_tuning = ['uniform', 'distance']\npower_parameter_tuning = [1,2]\n\nstart = time.clock()\nfor this_k in k_value_tuning:\n for this_w in weight_tuning:\n for this_p in power_parameter_tuning:\n k_neighbors_tuned(this_k, this_w, this_p)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For KNN the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')", "Model calibration:\nHere we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. 
The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.", "list_for_ks = []\nlist_for_ws = []\nlist_for_ps = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef knn_calibrated(k,w,p,m):\n tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)\n ccv = CalibratedClassifierCV(tuned_KNN, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_ks.append(this_k)\n list_for_ws.append(this_w)\n list_for_ps.append(this_p)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with KNN and k,w,p =\", k,\",\",w,\",\",p,\",\",m,\"is:\", working_log_loss)\n\n#k_value_tuning = [i for i in range(1,5002,500)]\nk_value_tuning = [1]\nweight_tuning = ['uniform', 'distance']\npower_parameter_tuning = [1,2]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_k in k_value_tuning:\n for this_w in weight_tuning:\n for this_p in power_parameter_tuning:\n for this_m in methods:\n knn_calibrated(this_k, this_w, this_p, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For KNN the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss], 'm =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds')", "Comments on results for Hyperparameter tuning and Calibration for KNN:\nWe see that the best log loss we achieve for KNN is with _ neighbors, _ weights, and _ power parameter.\nWhen we add-in calibration, we see that the the best log loss we achieve for KNN is with _ neighbors, _ weights, _ power parameter, and _ calibration method.\n(Further explanation here?)\nMultinomial, Bernoulli, and Gaussian Naive Bayes\nHyperparameter tuning: Bernoulli Naive Bayes\nFor the Bernoulli Naive Bayes classifier, we seek to optimize the alpha parameter (Laplace smoothing parameter) and the binarize parameter (threshold for binarizing of the sample features). 
For the binarize parameter, we will create arbitrary thresholds over which our features, which are not binary/boolean features, will be binarized.", "list_for_as = []\nlist_for_bs = []\nlist_for_log_loss = []\n\ndef BNB_tuned(a,b):\n bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_bs.append(this_b)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a,b =\", a,\",\",b,\"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nbinarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_b in binarize_thresholds_tuning:\n BNB_tuned(this_a, this_b)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For BNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ", "Model calibration: BernoulliNB\nHere we will calibrate the BNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.", "list_for_as = []\nlist_for_bs = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef BNB_calibrated(a,b,m):\n bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)\n ccv = CalibratedClassifierCV(bnb_tuned, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_bs.append(this_b)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a,b,m =\", a,\",\", b,\",\", m, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nbinarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_b in binarize_thresholds_tuning:\n for this_m in methods:\n BNB_calibrated(this_a, this_b, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For BNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss], 'method = ', 
list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ", "Hyperparameter tuning: Multinomial Naive Bayes\nFor the Multinomial Naive Bayes classifer, we seek to optimize the alpha parameter (Laplace smoothing parameter).", "list_for_as = []\nlist_for_log_loss = []\n\ndef MNB_tuned(a):\n mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities =mnb_tuned.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with BNB and a =\", a, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n MNB_tuned(this_a)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For MNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') \n", "Model calibration: MultinomialNB\nHere we will calibrate the MNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.", "list_for_as = []\nlist_for_ms = []\nlist_for_log_loss = []\n\ndef MNB_calibrated(a,m):\n mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)\n ccv = CalibratedClassifierCV(mnb_tuned, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_as.append(this_a)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with MNB and a =\", a, \"and m =\", m, \"is:\", working_log_loss)\n\nalpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]\nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_a in alpha_tuning:\n for this_m in methods:\n MNB_calibrated(this_a, this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For MNB the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'and method =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ", "Tuning: Gaussian Naive Bayes\nFor the Gaussian Naive Bayes classifier there are no inherent parameters within the classifier function to optimize, but we will look at our log loss before and after adding noise to the data that is hypothesized to give it a more normal (Gaussian) distribution, which is required by the GNB classifier.", "def GNB_pre_tune():\n gnb_pre_tuned = GaussianNB().fit(mini_train_data, mini_train_labels)\n dev_prediction_probabilities =gnb_pre_tuned.predict_proba(mini_dev_data)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n 
print(\"Multi-class Log Loss with pre-tuned GNB is:\", working_log_loss)\n\nGNB_pre_tune()\n \ndef GNB_post_tune():\n # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes\n # adding noise can improve performance by making the data more normal:\n mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])\n modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise) \n gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)\n dev_prediction_probabilities =gnb_with_noise.predict_proba(mini_dev_data)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)\n print(\"Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution is:\", working_log_loss)\n \nGNB_post_tune()", "Model calibration: GaussianNB\nHere we will calibrate the GNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The \"method\" parameter can be set to \"sigmoid\" or to \"isotonic\", corresponding to Platt Scaling and to Isotonic Regression respectively.", "list_for_ms = []\nlist_for_log_loss = []\n\ndef GNB_calibrated(m):\n # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes\n # adding noise can improve performance by making the data more normal:\n mini_train_data_noise = np.random.rand(mini_train_data.shape[0],mini_train_data.shape[1])\n modified_mini_train_data = np.multiply(mini_train_data,mini_train_data_noise) \n gnb_with_noise = GaussianNB().fit(modified_mini_train_data,mini_train_labels)\n ccv = CalibratedClassifierCV(gnb_with_noise, method = m, cv = 'prefit')\n ccv.fit(mini_calibrate_data, mini_calibrate_labels)\n ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)\n list_for_ms.append(this_m)\n working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)\n list_for_log_loss.append(working_log_loss)\n #print(\"Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution and after calibration is:\", working_log_loss, 'with calibration method =', m)\n \nmethods = ['sigmoid', 'isotonic']\n\nstart = time.clock()\nfor this_m in methods:\n GNB_calibrated(this_m)\n \nindex_best_logloss = np.argmin(list_for_log_loss)\nprint('For GNB the best log loss with tuning and calibration is',list_for_log_loss[index_best_logloss], 'with method =', list_for_ms[index_best_logloss])\nend = time.clock()\nprint(\"Computation time for this step is %.2f\" % (end-start), 'seconds') ", "Logistic Regression\nHyperparameter tuning:\nFor the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')\nModel calibration:\nSee above\nDecision Tree (Bryan)\nHyperparameter tuning:\nFor the Decision Tree classifier, we can seek to optimize the following classifier parameters: min_samples_leaf (the minimum number of samples required to be at a leaf node), max_depth\nFrom readings, setting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (unsure if significantly improves MCLL).\nModel calibration:\nSee above\nSupport Vector Machines (Kalvin)\nHyperparameter tuning:\nFor the SVM 
classifier, we can seek to optimize the following classifier parameters: C (penalty parameter C of the error term), kernel ('linear', 'poly', 'rbf', sigmoid', or 'precomputed')\nSee source [2] for parameter optimization in SVM\nModel calibration:\nSee above\nNeural Nets (Sarah)\nHyperparameter tuning:\nFor the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs','sgd', adam'), alpha, learning_rate ('constant', 'invscaling','adaptive')", "### All the work from Sarah's notebook:\n\nimport theano\nfrom theano import tensor as T\nfrom theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams\nprint (theano.config.device) # We're using CPUs (for now)\nprint (theano.config.floatX )# Should be 64 bit for CPUs\n\nnp.random.seed(0)\n\nfrom IPython.display import display, clear_output\n\nnumFeatures = train_data[1].size\nnumTrainExamples = train_data.shape[0]\nnumTestExamples = test_data.shape[0]\nprint ('Features = %d' %(numFeatures))\nprint ('Train set = %d' %(numTrainExamples))\nprint ('Test set = %d' %(numTestExamples))\n\nclass_labels = list(set(train_labels))\nprint(class_labels)\nnumClasses = len(class_labels)\n\n### Binarize the class labels\n\ndef binarizeY(data):\n binarized_data = np.zeros((data.size,39))\n for j in range(0,data.size):\n feature = data[j]\n i = class_labels.index(feature)\n binarized_data[j,i]=1\n return binarized_data\n\ntrain_labels_b = binarizeY(train_labels)\ntest_labels_b = binarizeY(test_labels)\nnumClasses = train_labels_b[1].size\n\nprint ('Classes = %d' %(numClasses))\nprint ('\\n', train_labels_b[:5, :], '\\n')\nprint (train_labels[:10], '\\n')\n\n###1) Parameters\nnumFeatures = train_data.shape[1]\n\nnumHiddenNodeslayer1 = 50\nnumHiddenNodeslayer2 = 30\n\nw_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))\nw_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))\nw_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))\nparams = [w_1, w_2, w_3]\n\n\n###2) Model\nX = T.matrix()\nY = T.matrix()\n\nsrng = RandomStreams()\ndef dropout(X, p=0.):\n if p > 0:\n X *= srng.binomial(X.shape, p=1 - p)\n X /= 1 - p\n return X\n\ndef model(X, w_1, w_2, w_3, p_1, p_2, p_3):\n return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))\ny_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5,0.5)\ny_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)\n\n### (3) Cost function\ncost = T.mean(T.sqr(y_hat - Y))\ncost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))\n\n### (4) Objective (and solver)\n\nalpha = 0.01\ndef backprop(cost, w):\n grads = T.grad(cost=cost, wrt=w)\n updates = []\n for wi, grad in zip(w, grads):\n updates.append([wi, wi - grad * alpha])\n return updates\n\nupdate = backprop(cost, params)\ntrain = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)\ny_pred = T.argmax(y_hat_predict, axis=1)\npredict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)\n\nminiBatchSize = 10 \n\ndef gradientDescent(epochs):\n for i in range(epochs):\n for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):\n cc = train(train_data[start:end], train_labels_b[start:end])\n clear_output(wait=True)\n print ('%d) accuracy = %.4f' %(i+1, 
np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )\n\ngradientDescent(50)\n\n### How to decide what # to use for epochs? epochs in this case are how many rounds?\n### plot costs for each of the 50 iterations and see how much it decline.. if its still very decreasing, you should\n### do more iterations; otherwise if its looking like its flattening, you can stop", "Model calibration:\nSee above\nRandom Forest (Sam, possibly in AWS)\nHyperparameter tuning:\nFor the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forsest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)\nModel calibration:\nSee above\nMeta-estimators\nAdaBoost Classifier\nHyperparameter tuning:\nThere are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.\nAdaboosting each classifier:\nWe will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nBagging Classifier\nHyperparameter tuning:\nFor the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forsest), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building trees), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)\nBagging each classifier:\nWe will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nGradient Boosting Classifier\nHyperparameter tuning:\nFor the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forsest), max_depth, min_samples_leaf, and max_features\nGradient Boosting each classifier:\nWe will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.\nFinal evaluation on test data", "# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.\n# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed\n# and the corresponding performance metrics are gathered.\n", "References\n1) Hsiang, Solomon M. and Burke, Marshall and Miguel, Edward. \"Quantifying the Influence of Climate on Human Conflict\". Science, Vol 341, Issue 6151, 2013 \n2) Huang, Cheng-Lung. Wang, Chieh-Jen. \"A GA-based feature selection and parameters optimization for support vector machines\". Expert Systems with Applications, Vol 31, 2006, p 231-240\n3) More to come" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session13/Day2/02-Fast-GPs.ipynb
mit
[ "Fast GP implementations", "%matplotlib inline\n\n%config InlineBackend.figure_format = 'retina'\n\nfrom matplotlib import rcParams\nrcParams[\"figure.dpi\"] = 100\nrcParams[\"figure.figsize\"] = 12, 4", "Benchmarking GP codes\nImplemented the right way, GPs can be super fast! Let's compare the time it takes to evaluate our GP likelihood and the time it takes to evaluate the likelihood computed with the snazzy george and celerite packages. We'll learn how to use both along the way. Let's create a large, fake dataset for these tests:", "import numpy as np\n\nnp.random.seed(0)\nt = np.linspace(0, 10, 10000)\ny = np.random.randn(10000)\nsigma = np.ones(10000)", "Our GP", "def ExpSquaredCovariance(t, A=1.0, l=1.0, tprime=None):\n \"\"\"\n Return the ``N x M`` exponential squared\n covariance matrix.\n \n \"\"\"\n if tprime is None:\n tprime = t\n TPrime, T = np.meshgrid(tprime, t)\n return A ** 2 * np.exp(-0.5 * (T - TPrime) ** 2 / l ** 2)\n\n\ndef ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):\n \"\"\"\n Return the log of the GP likelihood for a datatset y(t)\n with uncertainties sigma, modeled with a Squared Exponential\n Kernel with amplitude A and lengthscale l.\n \n \"\"\"\n # The covariance and its determinant\n npts = len(t)\n K = ExpSquaredCovariance(t, A=A, l=l) + sigma ** 2 * np.eye(npts)\n \n # The log marginal likelihood\n log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))\n log_like -= 0.5 * np.linalg.slogdet(K)[1]\n log_like -= 0.5 * npts * np.log(2 * np.pi)\n \n return log_like", "Time to evaluate the GP likelihood:", "%%time\nln_gp_likelihood(t, y, sigma)", "george\nLet's time how long it takes to do the same operation using the george package (pip install george).\nThe kernel we'll use is\npython\nkernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)\nwhere amp = 1 and tau = 1 in this case.\nTo instantiate a GP using george, simply run\npython\ngp = george.GP(kernel)\nThe george package pre-computes a lot of matrices that are re-used in different operations, so before anything else, we'll ask it to compute the GP model for our timeseries:\npython\ngp.compute(t, sigma)\nNote that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!\nFinally, the log likelihood is given by gp.log_likelihood(y).\nHow do the speeds compare? Did you get the same value of the likelihood?", "import george\n\n%%time\nkernel = george.kernels.ExpSquaredKernel(1.0)\ngp = george.GP(kernel)\ngp.compute(t, sigma)\n\n%%time\nprint(gp.log_likelihood(y))", "george also offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Let's instantiate the GP object again by passing the keyword solver=george.HODLRSolver and re-compute the log likelihood. How long did that take? Did we get the same value for the log likelihood?", "%%time\ngp = george.GP(kernel, solver=george.HODLRSolver)\ngp.compute(t, sigma)\n\n%%time\ngp.log_likelihood(y)", "celerite\nThe george package is super useful for GP modeling, and I recommend you read over the docs and examples. It implements several different kernels that come in handy in different situations, and it has support for multi-dimensional GPs. 
But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then celerite is what it's all about:\nbash\npip install celerite\nCheck out the docs here, as well as several tutorials. There is also a paper that discusses the math behind celerite. The basic idea is that for certain families of kernels, there exist extremely efficient methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, celerite is able to do everything in order $N$ (!!!) This is a huge advantage, especially for datasets with tens or hundreds of thousands of data points. Using george or any homebuilt GP model for datasets larger than about 10,000 points is simply intractable, but with celerite you can do it in a breeze.\nNext we repeat the timing tests, but this time using celerite. Note that the Exponential Squared Kernel is not available in celerite, because it doesn't have the special form needed to make its factorization fast. Instead, we'll use the Matern 3/2 kernel, which is qualitatively similar and can be approximated quite well in terms of the celerite basis functions:\npython\nkernel = celerite.terms.Matern32Term(np.log(1), np.log(1))\nNote that celerite accepts the log of the amplitude and the log of the timescale. Other than this, we can compute the likelihood using the same syntax as george.\nHow much faster did it run? Is the value of the likelihood different from what you found above? Why?", "import celerite\nfrom celerite import terms\n\n%%time\nkernel = terms.Matern32Term(np.log(1), np.log(1))\ngp = celerite.GP(kernel)\ngp.compute(t, sigma)\n\n%%time\ngp.log_likelihood(y)", "<div style=\"background-color: #D6EAF8; border-left: 15px solid #2E86C1;\">\n <h1 style=\"line-height:2.5em; margin-left:1em;\">Exercise (the one and only)</h1>\n</div>\n\nLet's use what we've learned about GPs in a real application: fitting an exoplanet transit model in the presence of correlated noise.\nHere is a (fictitious) light curve for a star with a transiting planet:", "import matplotlib.pyplot as plt\n\nt, y, yerr = np.loadtxt(\"data/sample_transit.txt\", unpack=True)\nplt.errorbar(t, y, yerr=yerr, fmt=\".k\", capsize=0)\nplt.xlabel(\"time\")\nplt.ylabel(\"relative flux\");", "There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\\sigma$. Your task is to verify this claim.\nAssume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.\nFit the transit with a simple inverted Gaussian with three free parameters:\npython\ndef transit_shape(depth, t0, dur):\n return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)\nHINT: I borrowed heavily from this tutorial in the celerite documentation, so you might want to take a look at it..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
thehackerwithin/berkeley
code_examples/SQL/SQL_Tutorial-0.ipynb
bsd-3-clause
[ "Intro to SQL - The Hacker Within (2018-02-21)\nThis tutorial will show you the basics of SQL:\n* creating a table\n* inserting rows\n* querying with conditions\n* grouping and ordering\n* joining tables with a common field\n* aggregating rows using COUNT, MAX, etc.\n* alternates to SQL like pandas and Django\n* indexing frequently queried columns for performance\n* alter existing table to add, drop, or rename a column\n* drop a table\n* vacuum a database\nSQLite\nWe are going to use Python because it comes with the popular SQLite (aka etilqs) relational database builtin. \nRequirements\nThe following examples assume you have Python 3 installed on your computer. Any distribution will do. If you have Anaconda or Miniconda, you can create a new conda environment with the requirements for this tutorial from a terminal. For example with miniconda on Mac OS X, you might use the following to activate the root conda environment and create a new one called \"py36-thw-sql\".\n~$ source ~/miniconda/bin/activate\n(root) ~$ conda create -n py36-thw-sql python=3.6.3 psycopg2 django pandas\n\nOr on Windows with Anaconda, you might use the following in a CMD terminal.\n~&gt; %LOCALAPPDATA%\\Continuum\\anaconda3\\Scripts\\activate\n(root) ~&gt; conda create -n py36-thw-sql python=3.6.3 psycopg2 django pandas\n\nPython DB-API 2.0\nPEP 249 specifies a connection to a database, and a cursor from the connection to execute SQL.", "# imports\nimport io # we'll need this way later\nimport os\nimport sqlite3 # this is the module that binds to SQLite\nimport numpy as np # never know when you might need NumPy, oh, right, always!\nimport pandas as pd # you'll see why we can use this later\n\nDBFILE = 'sqlite3.db' # this will be our database\nBASEDIR = %pwd # os.path.abspath(os.path.dirname(__file__))\nDBPATH = os.path.join(BASEDIR, DBFILE)\n\n# we may need to delete the existing file first\nif os.path.exists(DBPATH):\n os.remove(DBPATH)\n\n# open a connection to the database for this tutorial\nconn = sqlite3.connect(DBPATH)\n\n# get a cursor to the database\ncur = conn.cursor()", "Notes\nIf a file with the same name doesn't already exist, then this creates a new database otherwise it connects to the existing database contained in the file. You can also use ':memory:' to create an \"in-memory\" database that has no file, but then you can't connect to that from another process.\nWe'll have to close the connection and cursor later. Next time we could use a with context to automatically close the connection.\nwith sqlite3.connect('sqlite3.db') as conn:\n cur = conn.execute('SQL QUERY ...') # e.g.: 'SELECT * FROM table_name;'\n output = cur.fetchall() # get the results\n\nClosing the connection automatically closes the cursor. Other bindings may offer similar context managers. to commit and close changes or rollback changes and raise an exception.\nCreating tables\nA relational database or SQL database is a tabular structure consisting of rows of data with columns of fields. The data definition language or DDL used to create the table is the same language used to query it, called SQL or Structured Query Language.\nAlthough the basic SQL commands are nearly the same for other relational databases, the data types may be different. SQLite only has 5 datatypes: NULL, INTEGER, REAL, TEXT, BLOB. For boolean, use integer zero for false, and one for true. For dates and times use text and ISO8601, e.g.: \"2018-02-21T17:05-0800\". 
By comparison, PostgreSQL has too many to list here including booleans, date, time, arrays, JSON, etc.\nCREATE\nThe basic SQL command to create a table is\nCREATE TABLE &lt;table_name&gt; (&lt;field_name&gt; &lt;TYPE&gt; &lt;CONSTRAINTS&gt;, &lt;field_name&gt; &lt;TYPE&gt; &lt;CONSTRAINTS&gt;, &lt;CONSTRAINTS&gt;, ...);\n\nSome languages enforce the semicolon, some don't. The syntax is nearly the same for other relational databases.\nConstraints, Defaults, and Options\nConstraints are optional and set conditions, limitations, or options for columns and the table. The most common constraints are: PRIMARY KEY, UNIQUE, NOT NULL, DEFAULT, FOREIGN KEY, REFERENCES, etc. The syntax is nearly the same for other relational databases.\nPRIMARY KEY\nThe most important of these is PRIMARY KEY which is equivalent to UNIQUE NOT NULL. A primary key is a unique references that identifies each record in the table. Although it is not required, every table should have a primary key. Only one primary key is allowed, and it can be constructed from multiple columns, PRIMARY KEY (&lt;field_A&gt;, &lt;field_B), to create a unique together, non-null identifier. In SQLite, if missing then a integer primary key named, rowid, is created by default. Also in SQLite, any integer primary key is automatically incremented, so the AUTOINCREMENT command is usually not needed. In PostgreSQL the SERIAL command is used to create a corresponding sequence for the primary key.\nPractice\nThe other constraints and options are also important, but we'll discover those as we learn. Let's create some simple databases with fictitious data to practice. Imagine you are testing several different materials with different properties $\\alpha$ and $\\beta$ under different stresses like different temperatures and light intensity and changing thickness. How would you organize this data? Take a moment to design a schema or structure for your data. The schema consists of the column names and data types and the column and table constraints.", "# we can use Python triple quoted strings to span multiple lines, but use single quotes, since SQL only uses double quotes\n\n# first create a materials table\ncur.execute('''CREATE TABLE materials (\n material_id TEXT PRIMARY KEY,\n long_name TEXT UNIQUE NOT NULL,\n alpha REAL NOT NULL,\n beta REAL NOT NULL,\n material_type TEXT NOT NULL\n)''')\nconn.commit()\n# if you don't commit the changes, they won't be written to the file, and won't be visible to other connections\n\n# then create an experiments table\ncur.execute('''CREATE TABLE experiments (\n experiment_id INTEGER PRIMARY KEY,\n temperature REAL DEFAULT 298.15,\n irradiance REAL DEFAULT 1000.0,\n uv_filter INTEGER DEFAULT 0,\n material_id NOT NULL REFERENCES materials ON UPDATE CASCADE ON DELETE CASCADE,\n thickness REAL DEFAULT 0.005,\n UNIQUE (temperature, irradiance, uv_filter, material_id)\n)''')\nconn.commit()\n\n# and finally create a trials table\ncur.execute('''CREATE TABLE trials (\n trial_id INTEGER PRIMARY KEY,\n experiment_id NOT NULL REFERENCES experiments ON UPDATE CASCADE ON DELETE CASCADE,\n results BLOB NOT NULL,\n duration REAL NOT NULL,\n avg_temperature REAL NOT NULL,\n std_temperature REAL NOT NULL,\n avg_irradiance REAL NOT NULL,\n std_irradiance REAL NOT NULL,\n init_visible_transmittance REAL NOT NULL,\n final_visible_transmittance REAL NOT NULL,\n notes TEXT\n)''')\nconn.commit()", "FOREIGN KEY\nA foreign key constraint creates a relationship between two tables. 
The FOREIGN KEY is implied when the REFERENCES column constraint is applied. In the experiments table above, the column constraint on material_id is the same as adding this table constraint:\nFOREIGN KEY (material_id) REFERENCES materials (material_id)\n\nSpecifying the referenced column in the table constraint isn't necessary; if omitted, it defaults to the primary key of the referenced table. The syntax is nearly the same for other relational databases.\nYou can use the same name for the foreign key and its related field, but it may make joining tables more difficult because you will need to use the table name to avoid an ambiguous column name. For example, you can use trials.experiment_id and experiments.experiment_id to differentiate between them. You can also use AS to create a temporary name like trials.experiment_id AS experiment. Or you could just use different names for the foreign key and its related field, like FOREIGN KEY (material) REFERENCES materials (material_id), and then there's no ambiguity. Your call.\nDELETE and UPDATE\nWhat happens if the reference of a foreign key is deleted or updated? That's up to you: in SQLite the default is to do nothing, but typically you want the action to cascade. Add the desired ON DELETE or ON UPDATE action to the constraint.\nBonus Questions\n\nWhat is the difference between a column constraint and a table constraint?\nWhat other table constraint is in the experiments table?\nWhat other constraints or defaults are applied in the tables?\nWhat part of the materials table schema is fragile and can be improved?\n\nINSERT\nThe basic SQL command to put data into a table is\nINSERT INTO &lt;table_name&gt; (&lt;field_name&gt;, &lt;field_name&gt;, ...) VALUES (&lt;value&gt;, &lt;value&gt;, ...)\n\nOther relational databases use the same SQL syntax.\nLet's add some pretend data to the database.", "# add EVA as a material\ncur.execute('INSERT INTO materials VALUES (\"EVA\", \"ethylene vinyl acetate\", 0.123, 4.56, \"polymer\")')\nconn.commit() # you must commit for it to become permanent\ncur.rowcount # tells you how many rows were written; sometimes it's quirky", "Placeholders\nYou can use placeholders to loop over insert statements to add multiple records.\n\nWARNING: Never use string formatting in lieu of placeholders or you may be subject to a SQL injection attack.\n\nSQLite uses ? but other relational databases may use %s or another placeholder.\nAlso, in sqlite3 executemany is a convenient shortcut, but it may not be convenient for all database bindings.", "# add some more fake materials\nfake_materials = [\n ('PMMC', 'poly methyl methacrylate', 0.789, 10.11, 'polymer'),\n ('KBr', 'potassium bromide', 1.213, 14.15, 'crystal')\n]\nfor mat in fake_materials:\n # must have the same number of placeholders as values\n cur.execute('INSERT INTO materials VALUES (?, ?, ?, ?, ?)', mat) # use placeholders\nconn.commit() # you can commit all of the changes at the end of the loop\n\n# use the executemany shortcut\nfake_materials = [\n ('SiO2', 'silicon dioxide', 1.617, 18.19, 'crystal'),\n ('CaF2', 'calcium fluoride', 2.0, 21.22, 'crystal')\n]\ncur.executemany('INSERT INTO materials VALUES (?, ?, ?, ?, ?)', fake_materials)\nconn.commit()\nprint('rowcount = %d' % cur.rowcount) # with executemany, cur.rowcount shows the total number of rows", "DELETE and UPDATE\nOops, I made a mistake. How do I fix it? The opposite of INSERT is DELETE. But don't throw the baby out with the bathwater; you can also UPDATE a record. 
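The general forms mirror the CREATE and INSERT commands above (the WHERE clause decides which rows are affected):\nDELETE FROM &lt;table_name&gt; WHERE &lt;condition&gt;;\nUPDATE &lt;table_name&gt; SET &lt;field_name&gt; = &lt;value&gt;, ... WHERE &lt;condition&gt;;\n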
Other relational databases use the same SQL syntax to manipulate data.", "cur.execute('DELETE FROM materials WHERE material_id = \"SiO2\"')\ncur.execute('UPDATE materials SET alpha=1.23E-4, beta=8.910E+11 WHERE material_id = \"CaF2\"')\nconn.commit()", "Queries\nThe way you select data is by executing queries. The language is the same for all relational databases. The star * means select all columns, or you can give the columns explicitly.", "cur.execute('SELECT * FROM materials')\ncur.fetchall() # fetch all the results of the query", "Conditions\nYou can limit a query using WHERE and LIMIT. You can combine WHERE with a conditional expression, IN to check a set, or LIKE to compare with strings. Use AND and OR to combine conditions.\nPython DB-API Cursor Methods\nThe Python DB-API cursor can be used as an iterator or you can call it's fetch methods.", "# limit the query using WHERE and LIMIT\ncur.execute('SELECT material_id, long_name FROM materials WHERE alpha < 1 LIMIT 2')\nfor c in cur: print('{} is {}'.format(*c)) # user the cursor as an iterator\n\nmaterials_list = (\"EVA\", \"PMMC\")\ncur.execute('SELECT alpha, beta FROM materials WHERE material_id IN (?, ?)', materials_list)\n[(mat, cur.fetchone()) for mat in materials_list] # use the cursor fetchone() method to get next item", "Aggregates\nYour query can aggregate results like AVG, SUM, COUNT, MAX, MIN, etc.", "cur.execute('SELECT COUNT(*) FROM materials')\nprint(cur.fetchone())", "GROUP BY\nYou can group queries by a column or a condition such as an expression, IN, or LIKE, if your selection is an aggregate.", "cur.execute('SELECT material_type, COUNT(*), AVG(alpha), MAX(beta) FROM materials GROUP BY material_type')\ncur.fetchmany(2) # use fetchmany() with size parameter, just for fun", "More Practice\nAdd a fictitious experiment schedule and doctor up some data!", "# use defaults, let primary key auto-increment, just supply material ID\ncur.execute('INSERT INTO experiments (material_id) VALUES (\"EVA\")') # use defaults, \nconn.commit()\n\n# set up a test matrix for EVA\ntemp = range(300, 400, 25)\nirrad = range(400, 800, 100)\ntry:\n for T in temp:\n for E in irrad:\n cur.execute('INSERT INTO experiments (temperature, irradiance) VALUES (?, ?)', (T, E))\nexcept sqlite3.IntegrityError as exc:\n print('sqlite3.IntegrityError: %s', exc)\n\n# Oops! We forgot to specify the material, there is not default, and it is constrained as NOT NULL!\nconn.rollback() # undo any changes\ntry:\n for T in temp:\n for E in irrad:\n cur.execute('INSERT INTO experiments (temperature, irradiance, material_id) VALUES (?, ?, \"EVA\")', (T, E))\nexcept sqlite3.IntegrityError as exc:\n print(exc)\nconn.commit() # commit! commit! commit!\n\n# this list is hard to read\nlist(cur.execute('SELECT * FROM experiments'))\n\n# not only is Pandas much nicer, it also executes queries!\npd.read_sql('SELECT * FROM experiments', conn, index_col='experiment_id')", "ORDER BY\nDoes what it says; order the query results by a column. Default is ascending, but use ASC or DESC to change the order.", "# Python's SQLite let's you use either '==' or '=', but I think SQL only allows '=', okay?\npd.read_sql('SELECT * FROM experiments WHERE irradiance = 700 ORDER BY temperature', conn, index_col='experiment_id')\n\n# descending order\npd.read_sql('SELECT * FROM experiments WHERE temperature = 375 ORDER BY irradiance DESC', conn, index_col='experiment_id')", "Dr. Data", "# Dr. 
Data\nstart_time, end_time = '2018-02-21T17:00-0800', '2018-02-21T18:30-0800'\ntimestamps = pd.DatetimeIndex(start=start_time, end=end_time, freq='T')\n# use http://poquitopicante.blogspot.com/2016/11/panda-pop.html to help you recall what offset alias to use\nsize = len(timestamps)\ndata = {\n 'temperature': np.random.randn(size) + 298.15,\n 'irradiance': np.random.randn(size) + 1000,\n 'visible_transmittance': np.logspace(np.log10(0.9), np.log10(0.8), size) + np.random.randn(size) / 100\n}\nresults = pd.DataFrame(data, index=timestamps)\nduration = (results.index[-1] - results.index[0]).value # [ns]\navg_temperature = results.temperature.mean() # [K]\nstd_temperature = results.temperature.std() # [K]\navg_irradiance = results.irradiance.mean() # [W/m^2]\nstd_irradiance = results.irradiance.std() # [W/m^2]\ninit_visible_transmittance = results.visible_transmittance[start_time]\nfinal_visible_transmittance = results.visible_transmittance[end_time]\nvalues = (1, results.to_csv(), duration, avg_temperature, std_temperature,\n avg_irradiance, std_irradiance, init_visible_transmittance, final_visible_transmittance,\n 'this is doctored data')\ncur.execute('''INSERT INTO trials (\n experiment_id, results, duration, avg_temperature, std_temperature,\n avg_irradiance, std_irradiance, init_visible_transmittance,\n final_visible_transmittance, notes\n) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', values)\nconn.commit() # commit! commit! commit!\n\n# check the blob, is it really there?\ncur.execute('SELECT results FROM trials WHERE trial_id = 1')\ntrial1_results = cur.fetchone()\npd.read_csv(io.StringIO(trial1_results[0]), index_col=0)\n# yay! it works!", "JOIN\nThe foreign keys relate tables, but how do we use this relation? By joining the tables.", "# add the results for experiment 17: T=375[K], E=700[W/m^2]\nexperiment_id, temperature, irradiance = list(cur.execute(\n 'SELECT experiment_id, temperature, irradiance FROM experiments WHERE (temperature = 375 AND irradiance = 700)'\n))[0]\nstart_time, end_time = '2018-02-28T17:00-0800', '2018-02-28T18:30-0800'\ntimestamps = pd.DatetimeIndex(start=start_time, end=end_time, freq='T')\n# use http://poquitopicante.blogspot.com/2016/11/panda-pop.html to help you recall what offset alias to use\nsize = len(timestamps)\ndata = {\n 'temperature': np.random.randn(size) + temperature,\n 'irradiance': np.random.randn(size) + irradiance,\n 'visible_transmittance': np.logspace(np.log10(0.9), np.log10(0.7), size) + np.random.randn(size) / 100\n}\nresults = pd.DataFrame(data, index=timestamps)\nduration = (results.index[-1] - results.index[0]).value # [ns]\navg_temperature = results.temperature.mean() # [K]\nstd_temperature = results.temperature.std() # [K]\navg_irradiance = results.irradiance.mean() # [W/m^2]\nstd_irradiance = results.irradiance.std() # [W/m^2]\ninit_visible_transmittance = results.visible_transmittance[start_time]\nfinal_visible_transmittance = results.visible_transmittance[end_time]\nvalues = (experiment_id, results.to_csv(), duration, avg_temperature, std_temperature,\n avg_irradiance, std_irradiance, init_visible_transmittance, final_visible_transmittance,\n 'this is doctored data')\ncur.execute('''INSERT INTO trials (\n experiment_id, results, duration, avg_temperature, std_temperature,\n avg_irradiance, std_irradiance, init_visible_transmittance,\n final_visible_transmittance, notes\n) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', values)\nconn.commit() # commit! commit! 
commit!\n\npd.read_sql('''\nSELECT\n trial_id, trials.experiment_id AS experiment, init_visible_transmittance, final_visible_transmittance,\n experiments.material_id AS material, temperature, irradiance, alpha, beta, material_type\nFROM trials\nJOIN experiments ON experiments.experiment_id = experiment\nJOIN materials ON materials.material_id = material\n''', conn, index_col='trial_id')\n\ncur.close()\nconn.close()", "Epilogue\nUnfortunately that's all we have time for, but we covered a lot, even if we didn't get this far. If there's anything I hope you learned from this it's that:\n\nYou can read the SQLite and PostgreSQL manuals or use StackOverflow to teach yourself SQL. It's not rocket science. I was a mechanical engineer, and I learned it.\nThe basics of making a table, inserting data, and conducting queries.\nHow to interact programmatically with a database.\n\nBut there's still so much to learn! Hopefully you will continue on your own and try some of these intereting topics.\n1. Use what you've learned on a more advanced database management system like PostgreSQL, MySQL, or MS SQL Server.\n2. Use an Object Relational Mapper (ORM) like Django or SQLAlchemy to simplify your workflow, by creating and manipulating data in native Python.\n3. Explore a NoSQL database like Cassandra or MongoDB and see if the flexible structure and large scale cluster computing allow you to explore big data with tools like Spark and Hadoop.\nDid you learn about everything you expected today? What did wish we had covered? Leave your comments or improve this tutorial by sending a PR to The Hacker Within, Berkeley.\nThanks!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vzg100/Post-Translational-Modification-Prediction
old/Phosphorylation Sequence Tests -MLP -dbptm+ELM -EnzymeBenchmarks-VectorAvr..ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.\nIncluded is N Phosphorylation however no benchmarks are available, yet. \nTraining data is from phospho.elm and benchmarks are from dbptm.", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nbenchmarks = [\"Data/Benchmarks/phos_CDK1.csv\", \"Data/Benchmarks/phos_CK2.csv\", \"Data/Benchmarks/phos_MAPK1.csv\", \"Data/Benchmarks/phos_PKA.csv\", \"Data/Benchmarks/phos_PKC.csv\"]\nfor j in benchmarks:\n for i in par:\n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(j, \"S\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(j, \"S\")\n del x", "Y Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nbenchmarks = [\"Data/Benchmarks/phos_CDK1.csv\", \"Data/Benchmarks/phos_CK2.csv\", \"Data/Benchmarks/phos_MAPK1.csv\", \"Data/Benchmarks/phos_PKA.csv\", \"Data/Benchmarks/phos_PKC.csv\"]\nfor j in benchmarks:\n for i in par:\n try:\n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=0)\n y.supervised_training(\"bagging\")\n y.benchmark(j, \"Y\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=1)\n x.supervised_training(\"bagging\")\n x.benchmark(j, \"Y\")\n del x\n except:\n print(\"Benchmark not relevant\")", "T Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nbenchmarks = [\"Data/Benchmarks/phos_CDK1.csv\", \"Data/Benchmarks/phos_CK2.csv\", \"Data/Benchmarks/phos_MAPK1.csv\", \"Data/Benchmarks/phos_PKA.csv\", \"Data/Benchmarks/phos_PKC.csv\"]\nfor j in benchmarks:\n for i in par:\n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(j, \"T\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(j, \"T\")\n del x" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
briennakh/BIOF509
Wk08/Wk08_Numpy_model_package_survey_inclass_exercises.ipynb
mit
[ "Week 8 - Implementing a model in numpy and a survey of machine learning packages for python\nThis week we will be looking in detail at how to implement a supervised regression model using the base scientific computing packages available with python.\nWe will also be looking at the different packages available for python that implement many of the algorithms we might want to use.\nRegression with numpy\nWhy implement algorithms from scratch when dedicated packages already exist? \nThe packages available are very powerful and a real time saver but they can obscure some issues we might encounter if we don't know to look for them. By starting with just numpy these problems will be more obvious. We can address them here and then when we move on we will know what to look for and will be less likely to miss them.\nThe dedicated machine learning packages implement the different algorithms but we are still responsible for getting our data in a suitable format.", "import matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\nn = 20\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\n\n\nplt.plot(x, y, 'b.')\nplt.show()", "This is a very simple dataset. There is only one input value for each record and then there is the output value. Our goal is to determine the output value or dependent variable, shown on the y-axis, from the input or independent variable, shown on the x-axis.\nOur approach should scale to handle multiple input, or independent, variables. The independent variables can be stored in a vector, a 1-dimensional array:\n$$X^T = (X_{1}, X_{2}, X_{3})$$\nAs we have multiple records these can be stacked in a 2-dimensional array. Each record becomes one row in the array. Our x variable is already set up in this way.\nIn linear regression we can compute the value of the dependent variable using the following formula:\n$$f(X) = \\beta_{0} + \\sum_{j=1}^p X_j\\beta_j$$\nThe $\\beta_{0}$ term is the intercept, and represents the value of the dependent variable when the independent variable is zero.\nCalculating a solution is easier if we don't treat the intercept as special. Instead of having an intercept co-efficient that is handled separately we can instead add a variable to each of our records with a value of one.", "intercept_x = np.hstack((np.ones((n,1)), x))\nintercept_x", "Numpy contains the linalg module with many common functions for performing linear algebra. Using this module finding a solution is quite simple.", "np.linalg.lstsq(intercept_x,y)", "The values returned are:\n\nThe least-squares solution\nThe sum of squared residuals\nThe rank of the independent variables\nThe singular values of the independent variables\n\nExercise\n\nCalculate the predictions our model would make\nCalculate the sum of squared residuals from our predictions. Does this match the value returned by lstsq?", "coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)\n\nintercept_x.shape, coeff.T.shape\n\nnp.sum(intercept_x * coeff.T, axis=1)\n\npredictions = np.sum(intercept_x * coeff.T, axis=1)\n\nplt.plot(x, y, 'bo')\nplt.plot(x, predictions, 'ko')\nplt.show()\n\npredictions.shape\n\nnp.sum((predictions.reshape((20,1)) - y) ** 2), residuals", "Least squares refers to the cost function for this algorithm. The objective is to minimize the residual sum of squares. The difference between the actual and predicted values is calculated, it is squared and then summed over all records. 
The function is as follows:\n$$RSS(\\beta) = \\sum_{i=1}^{N}(y_i - x_i^T\\beta)^2$$\nMatrix arithmetic\nWithin lstsq all the calculations are performed using matrix arithmetic rather than the more familiar element-wise arithmetic numpy arrays generally perform. Numpy does have a matrix type but matrix arithmetic can also be performed on standard arrays using dedicated methods.\n\nSource: Wikimedia Commons (User:Bilou)\nIn matrix multiplication the resulting value in any position is the sum of multiplying each value in a row in the first matrix by the corresponding value in a column in the second matrix.\nThe residual sum of squares can be calculated with the following formula:\n$$RSS(\\beta) = (y - X\\beta)^T(y-X\\beta)$$\nThe value of our co-efficients can be calculated with:\n$$\\hat\\beta = (X^TX)^{-1}X^Ty$$\nUnfortunately, the result is not as visually appealing as in languages that use matrix arithmetic by default.", "our_coeff = np.dot(np.dot(np.linalg.inv(np.dot(intercept_x.T, intercept_x)), intercept_x.T), y)\n\nprint(coeff, '\\n', our_coeff)\n\nour_predictions = np.dot(intercept_x, our_coeff)\n\npredictions, our_predictions\n\nplt.plot(x, y, 'ko', label='True values')\nplt.plot(x, our_predictions, 'ro', label='Predictions')\nplt.legend(numpoints=1, loc=4)\nplt.show()\n\nnp.arange(12).reshape((3,4))", "Exercise\n\nPlot the residuals. The x axis will be the independent variable (x) and the y axis the residual between our prediction and the true value.\nPlot the predictions generated for our model over the entire range of 0-1. One approach is to use the np.linspace method to create equally spaced values over a specified range.", "plt.plot(x, y - our_predictions, 'ko')\nplt.show()\n\nplt.plot(x, y, 'ko', label='True values')\n\nall_x = np.linspace(0, 1, 1000).reshape((1000,1))\nintercept_all_x = np.hstack((np.ones((1000,1)), all_x))\n\nprint(intercept_all_x.shape, our_coeff.shape)\n\n#all_x_predictions = np.dot(intercept_all_x, our_coeff)\nall_x_predictions = np.sum(intercept_all_x * our_coeff.T, axis=1)\n\nplt.plot(all_x, all_x_predictions, 'r-', label='Predictions')\nplt.legend(numpoints=1, loc=4)\nplt.show()", "Types of independent variable\nThe independent variables can be many different types.\n\nQuantitative inputs\nCategorical inputs coded using dummy values\nInteractions between multiple inputs\nTranformations of other inputs, e.g. logs, raised to different powers, etc.\n\nIt is important to note that a linear model is only linear with respect to its inputs. Those input variables can take any form.\nOne approach we can take to improve the predictions from our model would be to add in the square, cube, etc of our existing variable.", "x_expanded = np.hstack((x**i for i in range(1,20)))\n\nb, residuals, rank, s = np.linalg.lstsq(x_expanded, y)\nprint(b)\n\nplt.plot(x, y, 'ko', label='True values')\nplt.plot(x, np.dot(x_expanded, b), 'ro', label='Predictions')\nplt.legend(numpoints=1, loc=4)\nplt.show()", "There is a tradeoff with model complexity. As we add more complexity to our model we can fit our training data increasingly well but eventually will lose our ability to generalize to new data.\nVery simple models underfit the data and have high bias.\nVery complex models overfit the data and have high variance.\nThe goal is to detect true sources of variation in the data and ignore variation that is just noise.\nHow do we know if we have a good model? A common approach is to break up our data into a training set, a validation set, and a test set. 
\n\nWe train models with different parameters on the training set.\nWe evaluate each model on the validation set, and choose the best\nWe then measure the performance of our best model on the test set.\n\nWhat would our best model look like? Because we are using dummy data here we can easily make more.", "n = 20\np = 12\ntraining = []\nval = []\nfor i in range(1, p):\n np.random.seed(0)\n x = np.random.random((n,1))\n y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\n x = np.hstack((x**j for j in np.arange(i)))\n our_coeff = np.dot(\n np.dot(\n np.linalg.inv(\n np.dot(\n x.T, x\n )\n ), x.T\n ), y\n )\n our_predictions = np.dot(x, our_coeff)\n our_training_rss = np.sum((y - our_predictions) ** 2)\n training.append(our_training_rss)\n \n val_x = np.random.random((n,1))\n val_y = 5 + 6 * val_x ** 2 + np.random.normal(0,0.5, size=(n,1))\n val_x = np.hstack((val_x**j for j in np.arange(i)))\n our_val_pred = np.dot(val_x, our_coeff)\n our_val_rss = np.sum((val_y - our_val_pred) ** 2)\n val.append(our_val_rss)\n #print(i, our_training_rss, our_val_rss)\n\nplt.plot(range(1, p), training, 'ko-', label='training')\nplt.plot(range(1, p), val, 'ro-', label='validation')\nplt.legend(loc=2)\nplt.show()", "Gradient descent\nOne limitation of our current implementation is that it is resource intensive. For very large datasets an alternative is needed. Gradient descent is often preferred, and particularly stochastic gradient descent for very large datasets.\nGradient descent is an iterative process, repetitively calculating the error and changing the coefficients slightly to reduce that error. It does this by calculating a gradient and then descending to a minimum in small steps.\nStochastic gradient descent calculates the gradient on a small batch of the data, updates the coefficients, loads the next chunk of the data and repeats the process.\nWe will just look at a basic gradient descent model.", "np.random.seed(0)\nn = 200\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\nintercept_x = np.hstack((np.ones((n,1)), x))\ncoeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)\nprint('lstsq', coeff)\n\n\n\ndef gradient_descent(x, y, rounds = 1000, alpha=0.01):\n theta = np.zeros((x.shape[1], 1))\n costs = []\n for i in range(rounds):\n prediction = np.dot(x, theta)\n error = prediction - y\n gradient = np.dot(x.T, error / y.shape[0])\n theta -= gradient * alpha\n costs.append(np.sum(error ** 2))\n return (theta, costs) \ntheta, costs = gradient_descent(intercept_x, y, rounds=10000)\nprint(theta, costs[::500])\n\nnp.random.seed(0)\nn = 200\n\nx = np.random.random((n,1))\ny = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))\nx = np.hstack((x**j for j in np.arange(20)))\n\ncoeff, residuals, rank, sing_vals = np.linalg.lstsq(x,y)\nprint('lstsq', coeff)\n\ntheta, costs = gradient_descent(x, y, rounds=10000)\nprint(theta, costs[::500])\n\nplt.plot(x[:,1], y, 'ko')\nplt.plot(x[:,1], np.dot(x, coeff), 'co')\nplt.plot(x[:,1], np.dot(x, theta), 'ro')\nplt.show()", "Machine learning packages available in the python ecosystem\nOverview in the python wiki\nGeneral\n* scikit-learn\n* milk\n* Orange\n* Shogun\n* GraphLab Create (dato)\nThere is a collection of field specific packages including some with machine learning components on the scipy website. 
Other packages can often be found searching the python package index.\nDeep learning is receiving a lot of attention recently and a number of different packages have been developed.\n* Theano\n* pylearn2\n* keras\n* Blocks\n* Lasagne\nScikit-learn\nScikit-learn is now widely used. It includes modules for:\n* Classification\n* Regression\n* Clustering\n* Dimensionality reduction\n* Model selection\n* Preprocessing\nThere are modules for training online models, enabling very large datasets to be analyzed.\nThere is also a semi-supervised module for situations when you have a large dataset, but only have labels for part of the dataset.\nMilk\nMilk works very well with mahotas, a package for image processing. With the recent improvements in scikit-image milk is now less attractive, although still a strong option\nOrange and Shogun\nThese are both large packages but for whatever reason do not receive the attention that scikit-learn does.\nDato\nDato is a relative newcomer and has been receiving a lot of attention lately. Time will tell whether it can compete with scikit-learn.\nAssignments\nThis week we will continue working on our project ideas. As you develop the outline some points you may want to consider:\nFor projects developing the object oriented programming component of the course:\n\nWhat will your classes be?\nWhat will each class have as attributes and methods?\nHow will your classes interact?\n\nFor projects developing GUIs or web applications:\n\nWhat will your screens/pages be?\nWhat components will each page need?\nHow will you store any data needed/produced?\n\nFor projects developing machine learning models:\n\nWhat will be your data?\nHow is your data structured?\nHow much data do you have?\nIs your data labeled?\nWhat type of machine learning task is it?\nHow good would the performance need to be for the model to be useful?\n\nYou do not need to answer all these questions. Each answer does not need to be complete. Your final project will likely be different to your initial idea.\nThe goal of the project description is to document your project as you currently envision it and to encourage planning for the earliest stage.\nYour project descriptions should be sent to me by our class next week." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ThinkBayes2
examples/game_of_ur_soln.ipynb
mit
[ "Think Bayes\nThis notebook presents code and exercises from Think Bayes, second edition.\nCopyright 2018 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\nfrom thinkbayes2 import Pmf, Cdf, Suite\nimport thinkplot", "The Game of Ur problem\nIn the Royal Game of Ur, players advance tokens along a track with 14 spaces. To determine how many spaces to advance, a player rolls 4 dice with 4 sides. Two corners on each die are marked; the other two are not. The total number of marked corners -- which is 0, 1, 2, 3, or 4 -- is the number of spaces to advance.\nFor example, if the total on your first roll is 2, you could advance a token to space 2. If you roll a 3 on the next roll, you could advance the same token to space 5.\nSuppose you have a token on space 13. How many rolls did it take to get there?\nHint: you might want to start by computing the distribution of k given n, where k is the number of the space and n is the number of rolls.\nThen think about the prior distribution of n.\nHere's a Pmf that represents one of the 4-sided dice.", "die = Pmf([0, 1])", "And here's the outcome of a single roll.", "roll = sum([die]*4)", "I'll start with a simulation, which helps in two ways: it makes modeling assumptions explicit and it provides an estimate of the answer.\nThe following function simulates playing the game over and over; after every roll, it yields the number of rolls and the total so far. When it gets past the 14th space, it starts over.", "def roll_until(iters):\n \"\"\"Generates observations of the game.\n \n iters: number of observations\n \n yields: number of rolls, total\n \"\"\"\n for i in range(iters):\n total = 0\n for n in range(1, 1000):\n total += roll.Random()\n if total > 14:\n break\n yield(n, total)", "Now I'll the simulation many times and, every time the token is observed on space 13, record the number of rolls it took to get there.", "pmf_sim = Pmf()\nfor n, k in roll_until(1000000):\n if k == 13:\n pmf_sim[n] += 1", "Here's the distribution of the number of rolls:", "pmf_sim.Normalize()\n\npmf_sim.Print()\n\nthinkplot.Hist(pmf_sim, label='Simulation')\nthinkplot.decorate(xlabel='Number of rolls to get to space 13',\n ylabel='PMF')", "Bayes\nNow let's think about a Bayesian solution. It is straight forward to compute the likelihood function, which is the probability of being on space 13 after a hypothetical n rolls.\npmf_n is the distribution of spaces after n rolls.\npmf_13 is the probability of being on space 13 after n rolls.", "pmf_13 = Pmf()\nfor n in range(4, 15):\n pmf_n = sum([roll]*n)\n pmf_13[n] = pmf_n[13]\n \npmf_13.Print()\npmf_13.Total()", "The total probability of the data is very close to 1/2, but it's not obvious (to me) why.\nNevertheless, pmf_13 is the probability of the data for each hypothetical values of n, so it is the likelihood function.\nThe prior\nNow we need to think about a prior distribution on the number of rolls. This is not easy to reason about, so let's start by assuming that it is uniform, and see where that gets us.\nIf the prior is uniform, the posterior equals the likelihood function, normalized.", "posterior = pmf_13.Copy()\nposterior.Normalize()\nposterior.Print()", "That sure looks similar to what we got by simulation. 
Let's compare them.", "thinkplot.Hist(pmf_sim, label='Simulation')\nthinkplot.Pmf(posterior, color='orange', label='Normalized likelihoods')\nthinkplot.decorate(xlabel='Number of rolls (n)',\n ylabel='PMF')", "Since the posterior distribution based on a uniform prior matches the simulation, it seems like the uniform prior must be correct. But it is not obvious (to me) why." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dduong1/DIMSUM-Algorithm
DIMSUM_FINAL_MACHRAOUI_DUONGPRUNIER.ipynb
gpl-3.0
[ "IMPLEMENTATION DE L'ALGORITHME DIMSUM SOUS PIG\nDavid DUONG PRUNIER - Ismail MACHRAOUI\nMASTERE SPECIALISE DATA SCIENCE\nPour cette รฉtude, nous allons implรฉmenter l'algorithme DIMSUM dรฉcrit par Bosagh-Zadesh en 2012. Cet algorithme propose de trouver toutes les paires d'รฉlรฉments semblables. \nLa littรฉrature sur cet algorithme nous propose diffรฉrentes versions du mรชme algorithme, plus ou moins puissantes, plus pou moins flexibles. Il est intรฉressant de noter que Twitter l'a implรฉmentรฉ dans ses clusters et a rendu publique l'algorithme modifiรฉ qu'ils ont utilisรฉ. C'est cet algorithme que nous allons implรฉmenter [2].\nSans entrer dans les dรฉtails, le coeur de l'algorithme repose sur le calcul de la matrice $A^tA$ (A transposรฉ x A), et la sรฉlection des paires selon une probabilitรฉ donnรฉe. \nPour l'implรฉmentation, nous avons choisi de dรฉvelopper l'algorithme sous PIG, dans un cluster Azure. Le jeu de donnรฉe de test sera une base composรฉe des notes donnรฉes par les utilisateurs ร  diffรฉrents films. L'algorithme trouvera les paires de films รฉquivalents. \nNous effectuerons notre รฉtude sur un jeu de donnรฉes composรฉ de 1000208 notes de films basรฉ sur 6040 utilisateurs et 3952 films diffรฉrents.\nUne fois les similaritรฉs calculรฉes, nous effectuerons un test afin de rรฉcupรฉrer les films similaires ร  un film qu'on passera en input.\n[1] Bosagh-Zadeh, Reza and Goel, Ashish (2012), Dimension Independent Similarity Computation, arXiv:1206.2082 http://arxiv.org/abs/1206.2082\n[2] https://blog.twitter.com/2014/all-pairs-similarity-via-dimsum\nETAPE PAR ETAPE\nMise en place des librairies et des modules de connexions\nOn code en dur afin d'รฉviter de retaper tout le temps les identifiants. Attention c'est une mauvaise pratique.", "import pyensae\nimport os\nblobstorage = \"hdblobstorage\" #blobhp[\"blob_storage\"]\nblobpassword = \"jQIPVO/T54w8X49UPIbzAVvaNO3wmuUwI4/o9AJnCaPTHoCQnsaGBUkT4eIyi0BRQavgc/TAQMQwy8eu19CSBQ==\" #blobhp[\"password1\"]#\nhadoop_server = \"sparkclus2ensae\"#blobhp[\"hadoop_server\"]#\nhadoop_password = \"ENSAEspark1;\"#blobhp[\"password2\"]#\nusername = \"imdd\" #blobhp[\"username\"]", "On ouvre la connexion au cluster et au blob", "#%blob_close\ncl, bs = %hd_open \ncl,bs", "On upload les fichiers qui contient tous les ratings des users.", "%blob_up data/ratings_mean.csv hdblobstorage/imdd/ratings_mean.csv", "On vรฉrifie que tous les fichiers sont prรฉsents dans le blob", "#List files in blob storage\ndf=%blob_ls hdblobstorage/imdd/\ndf", "On code l'algorithme en PIG. La difficultรฉ est que PIG gรจre trรจs mal l'imbrication de FOREACH, absolument nรฉcessaire ร  l'algorithme. Notre solution s'est portรฉe sur la mise ร  plat totale des donnรฉes. D'oรน le FLATTEN puis nous avons effectuรฉ un JOIN pour la deuxiรจme boucle. 
Puis nous avons appliquรฉ les rรจgles dรฉfinies par l'algorithme.\nEn sortie nous obtenons l'ensemble des paires semblables, avec leur mesure de similaritรฉ.\nOn stocke dans un fichier pour l'exploiter ensuite avec un autre script PIG.\nPour rรฉduire le nombre de rรฉsultats, nous ne retiendrons que les paires de films qui ont une similaritรฉ de plus de 0.5 .", "%%PIG_azure dimsum.pig\n\n-- Macro de calcul des normes par colonne (movieID)\nDEFINE computeMatrixNorms(cData,sqrt_gamma) RETURNS Matrix_Norms {\n cData_grp = GROUP $cData BY MovieID;\n -- On calcule la norme et le gamma sur la norme\n $Matrix_Norms = FOREACH cData_grp {\n tmp_out = FOREACH $cData GENERATE Rating*Rating;\n out = SUM(tmp_out);\n GENERATE group as MovieID, SQRT(out) as Norm, ($sqrt_gamma.$0/SQRT(out)>1?1:$sqrt_gamma.$0/SQRT(out)) as Prob_j;\n }\n}\n\ncData = LOAD '$CONTAINER/imdd/ratings_mean.csv'\n using PigStorage (',')\n AS (UserID:int, MovieID:int, Rating:double) ;\n \n-- On calcule le gamma\nusers = GROUP cData all ;\ntotal= FOREACH users GENERATE MAX($1.UserID) as m, MAX($1.MovieID) as n;\nsqrt_gamma = FOREACH total GENERATE SQRT(4*LOG(n)/0.7) as a;\n \n\n-- On calcule la norme et le gamma sur la norme\nMatrix_Norms = computeMatrixNorms(cData,sqrt_gamma);\n\n\n-- On ajoute la colonne Norm et probabilite dans cData\nC = JOIN cData BY MovieID,Matrix_Norms BY MovieID;\nD = FOREACH C GENERATE cData::UserID as UserID_f,cData::MovieID as MovieID_f,cData::Rating as Rating_f,\n Matrix_Norms::Norm as Norm_f,Matrix_Norms::Prob_j as Prob_j_f;\n\nMatrix_data = GROUP D BY UserID_f;\nFF = FOREACH Matrix_data GENERATE group as UID, FLATTEN(D.MovieID_f) as MV1;\n\n-- Ajout des informations de MV1\nFFF = JOIN FF BY (UID,MV1), D BY (UserID_f,MovieID_f); \n\n-- Condition de validite premier IF\nFFD = FILTER FFF BY RANDOM()<Prob_j_f;\n\n-- Ajout de la seconde loop\nGG = JOIN FFD BY UID, D BY UserID_f;\n\n-- Cleaning du tableau\nGGG = FOREACH GG GENERATE FFD::FF::UID as UserID,FFD::FF::MV1 as MV_1,FFD::D::Rating_f as Rating_1,FFD::D::Norm_f as Norm_1,\n FFD::D::Prob_j_f as Proba_1,D::MovieID_f as MV_2,D::Rating_f as Rating_2,\n D::Norm_f as Norm_2,D::Prob_j_f as Proba_2;\n \n-- Ajout de la deuxieme boucle\n-- Condition de validite second IF\nGGD = FILTER GGG BY RANDOM()<Proba_2;\n\n-- Generation des similarites\nHH = FOREACH GGD{\n val = Rating_1*Rating_2/(((sqrt_gamma.$0>Norm_1)?Norm_1:sqrt_gamma.$0)*((sqrt_gamma.$0>Norm_2)?Norm_2:sqrt_gamma.$0));\n GENERATE MV_1,MV_2,val as VAL;\n}\nDESCRIBE HH;\n-- Ajout d un filtre supplementaire pour reduire la taille des resultats\nHHH = FILTER HH BY VAL > 0.5;\nHHHH = DISTINCT HHH;\n\nSTORE GGD INTO '$CONTAINER/$PSEUDO/dom/matrix_all.txt' USING PigStorage(',');\nSTORE HHH INTO '$CONTAINER/$PSEUDO/dom/similarities.txt' USING PigStorage(',');\n\n", "Dans la partie de code suivante, nous supprimons les fichiers gรฉnรฉrรฉs par l'algorithme prรฉcรฉdent pour pouvoir les regรฉnรฉrer une deuxiรจme fois.", "cl.delete_blob(bs, \"hdblobstorage\", 'imdd/dom/matrix_all.txt')\ncl.delete_blob(bs, \"hdblobstorage\", 'imdd/dom/similarities.txt')\ndf = %blob_ls hdblobstorage/imdd/dom/matrix_all.txt/\ndf\nfor name in df[\"name\"]:\n cl.delete_blob(bs, \"hdblobstorage\", name)\ndf = %blob_ls hdblobstorage/imdd/dom/similarities.txt/\ndf\nfor name in df[\"name\"]:\n cl.delete_blob(bs, \"hdblobstorage\", name)", "Upload du script dimsum.pig et lancement de son exรฉcution :", "jid = %hd_pig_submit dimsum.pig\njid\n\nst = %hd_job_status 
jid[\"id\"]\nst[\"id\"],st[\"percentComplete\"],st[\"completed\"],st[\"status\"][\"jobComplete\"],st[\"status\"][\"state\"]\n\ndf=%blob_ls hdblobstorage/imdd/\nlist(df[\"name\"])", "Test\nL'algorithme DimSum ayant รฉtรฉ bien exรฉcutรฉ , nous allons maintenant exploiter notre matrice de similaritรฉs ร  l'aide d'un autre script PIG qui se base sur le fichier de similaritรฉs gรฉnรฉrรฉ par le script PIG vu auparavant.\nA partir d'un id d'un film, qui existe dans notre base, nous nous attendrons ร  rรฉcupรฉrer les ids des films dont la similaritรฉ calculรฉe est maximale.", "%%PIG_azure load_results.pig\n\n\ncData = LOAD '$CONTAINER/$PSEUDO/dom/similarities.txt'\n using PigStorage (',')\n AS (MovieID1:int, MovieID2:int, sim:double) ;\n\nfiltered = FILTER cData BY MovieID1 == $MvID ;\nordered = ORDER filtered BY sim DESC;\nordered_limit = LIMIT ordered $size;\nmovies = FOREACH ordered_limit GENERATE MovieID2;\nSTORE movies INTO '$CONTAINER/imdd/dom/recom.txt' USING PigStorage(',');", "Nous supprimons d'abord le fichier gรฉnรฉrรฉ par la derniรจre exรฉcution, ensuite nous lanรงons le script PIG afin de rรฉcupรฉrer les ids des films similaires. Pour cet exemple, nous souhaitons rรฉcupรฉrer les 20 films les plus proches ร  celui dont l'id est 1610.", "if cl.exists(bs, cl.account_name, \"$PSEUDO/imdd/dom/recom.txt\"):\n r = cl.delete_folder (bs, cl.account_name, \"$PSEUDO/imdd/dom/recom.txt\")\njid = cl.pig_submit(bs, blobstorage, \"load_results.pig\",params={\"MvID\":'1610',\"size\":\"20\"})\njid\n\nst = %hd_job_status jid[\"id\"]\n(st[\"id\"],st[\"percentComplete\"],st[\"completed\"],\nst[\"status\"][\"jobComplete\"],st[\"status\"][\"state\"])", "Nous rรฉcupรฉrons ensuite le fichier gรฉnรฉrรฉ recom.txt, contenant les ids :", "if os.path.exists(\"recom.txt\"):os.remove(\"recom.txt\")\n%blob_downmerge /imdd/dom/recom.txt recom.txt", "Et nous affichons enfin les rรฉsultats :", " with open('recom.txt', 'r') as f:\n ids = f.read()\n print(ids)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dwiel/tensorflow_hmm
notebooks/gradient_descent_example.ipynb
apache-2.0
[ "%pylab inline\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_hmm import hmm\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 5\n\nP = np.array([[0.9, 0.1], [0.1, 0.9]])\nP = P**1\nP = P / np.sum(P,1)\nprint 'P', P\nhmm_tf = hmm.HMMTensorflow(P, np.array([0.0, 1.0]))\n\n# observations = np.array([0, 0.0, 1.0, 1.0])\n# observations = np.array([0.5] * 90)\n# observations = np.random.random(60)\n# observations = np.zeros(90)\n# observations = np.sin(np.arange(-10, 10, 20/90.))\n# observations -= 0.2\n# observations[62:] = 0.2\n# observations += np.arange(-1, 1, 2 / 90.)\n# observations += np.random.random(90) * 0.4 - 0.2\n\n# truth is off for 60 steps, then on for 30 time steps\ntruth = np.array([[1, 0]]*60 + [[0, 1]]*30, dtype=float)\ntruth = np.expand_dims(truth, axis=0)", "Observations\nObservations can be thought of as the probability of being in any given state at each time step. For this demonstration, observations are randomly initialized. In a real case, these observations would be the output of a neural network", "observations = np.random.random((1, 90, 2)) * 4 - 2\n\nplot(observations[0,:,:])\ngrid()\n\nobservations_variable = tf.Variable(observations)\n\nposterior_graph, _, _ = hmm_tf.forward_backward(tf.sigmoid(observations_variable))\n\n# build error function\nsum_error_squared = tf.reduce_sum(tf.square(truth - posterior_graph))\n\n# calculate d_observation/d_error\ngradients_graph = tf.gradients(sum_error_squared, observations_variable)\n\nsession = tf.Session()\nsession.run(tf.initialize_all_variables())\n\nsteps = 0", "Posterior vs Truth\nThe posterior is the probability assigned by the hmm of being in each state at each time step.\nThis is a plot if the posterior output compared to the truth.", "posterior = session.run(posterior_graph)\n\nprint 'sum error squared: %.03f' % sum((truth[:,1] - posterior[:,1])**2)\nplot(posterior[0,:,1], label='posterior')\nplot(truth[0,:,1], label='truth')\ngrid()\nlegend()", "Gradients\nThis plot shows the gradients which are flowing back to the input of the hmm.", "gradients = session.run(gradients_graph)[0]\n\ndef plot_gradients(gradients):\n gradients = gradients[0]\n\n # whiten gradients\n gradients = gradients / np.std(gradients)\n\n plot(-gradients[:,1], label='gradients')\n plot(truth[0,:,1], label='truth')\n # plot(sigmoid(observations[0,:,1]), label='observations')\n plot(observations[0,:,1], label='observations')\n ylim((-5,5))\n grid()\n legend()\n\nplot_gradients(gradients)\n\nfor i in range(1):\n # take 1 gradient descent step\n steps += 1\n observations = session.run(\n observations_variable.assign_sub(gradients * 0.5 * (random.random() - 0.25))\n )\n\nplot(observations[0,:,1], label='observations')\nsigmoid = np.vectorize(lambda(x): 1.0/(1.0+np.exp(-x)))\n# plot(sigmoid(observations[0,:,1]), label='sigmoid(observations)')\nlegend()\ngrid()\n\nhmm_np = hmm.HMMNumpy(np.array([[0.9, 0.1], [0.1, 0.9]]), p0=np.array([0.5, 0.5]))\n\nout, _ = hmm_np.viterbi_decode(sigmoid(observations[0,:,:]))\n\nprint 'gradient steps taken:', steps\nprint 'viterbi error:', sum((truth[0,:,1] - out)**2)\nplot(truth[0,:,1], label='truth')\nplot(out, label='out')\ngrid()\nlegend()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
idekerlab/py2cytoscape
examples/New_wrapper_api_sample.ipynb
mit
[ "Introduction to py2cytoscape: Pythonista-friendly wrapper for cyREST\n\n<h1 align=\"center\">For</h1>\n\n\nby Keiichiro Ono - University of California, San Diego Trey Ideker Lab\nRequirments\n\nJava 8\nCytoscape 3.2.1+\ncyREST 1.1.0+\npy2cytoscape 0.4.2+\n\n\nQ. What is py2cytoscape?\nA. A Python package to drive Cytoscape in pythonic way\nIn a Nutshell...", "from py2cytoscape.data.cynetwork import CyNetwork\nfrom py2cytoscape.data.cyrest_client import CyRestClient\nfrom py2cytoscape.data.style import StyleUtil\nimport py2cytoscape.util.cytoscapejs as cyjs\nimport py2cytoscape.cytoscapejs as renderer\n\nimport networkx as nx\nimport pandas as pd\nimport json\n\n# !!!!!!!!!!!!!!!!! Step 0: Start Cytoscape 3 with cyREST App !!!!!!!!!!!!!!!!!!!!!!!!!!\n\n# Step 1: Create py2cytoscape client\ncy = CyRestClient()\n\n\n# Reset\ncy.session.delete()\n\n# Step 2: Load network from somewhere\nyeast_net = cy.network.create_from('../tests/data/galFiltered.sif')\n\n# Step 3: Load table as pandas' DataFrame\ntable_data = pd.read_csv('sample_data_table.csv', index_col=0)\ntable_data.head()\n\nall_suid = cy.network.get_all()\nnet1 = cy.network.create(all_suid[0])\nprint(net1.get_first_view())\n\n# Step 4: Merge them in Cytoscape\nyeast_net.update_node_table(df=table_data, network_key_col='name')\n\n# Step 5: Apply layout\ncy.layout.apply(name='degree-circle', network=yeast_net)\n\n# Step 6: Create Visual Style as code (or by hand if you prefer)\nmy_yeast_style = cy.style.create('GAL Style')\n\n\nbasic_settings = {\n \n # You can set default values as key-value pairs.\n \n 'NODE_FILL_COLOR': '#6AACB8',\n 'NODE_SIZE': 55,\n 'NODE_BORDER_WIDTH': 0,\n 'NODE_LABEL_COLOR': '#555555',\n \n 'EDGE_WIDTH': 2,\n 'EDGE_TRANSPARENCY': 100,\n 'EDGE_STROKE_UNSELECTED_PAINT': '#333333',\n \n 'NETWORK_BACKGROUND_PAINT': '#FFFFEA'\n}\n\nmy_yeast_style.update_defaults(basic_settings)\n\n# Create some mappings\nmy_yeast_style.create_passthrough_mapping(column='label', vp='NODE_LABEL', col_type='String')\n\ndegrees = yeast_net.get_node_column('degree.layout') \n\ncolor_gradient = StyleUtil.create_2_color_gradient(min=degrees.min(), max=degrees.max(), colors=('white', '#6AACB8'))\ndegree_to_size = StyleUtil.create_slope(min=degrees.min(), max=degrees.max(), values=(10, 100))\nmy_yeast_style.create_continuous_mapping(column='degree.layout', vp='NODE_FILL_COLOR', col_type='Integer', points=color_gradient)\nmy_yeast_style.create_continuous_mapping(column='degree.layout', vp='NODE_SIZE', col_type='Integer', points=degree_to_size)\nmy_yeast_style.create_continuous_mapping(column='degree.layout', vp='NODE_WIDTH', col_type='Integer', points=degree_to_size)\nmy_yeast_style.create_continuous_mapping(column='degree.layout', vp='NODE_HEIGHT', col_type='Integer', points=degree_to_size)\n\nmy_yeast_style.create_continuous_mapping(column='degree.layout', vp='NODE_LABEL_FONT_SIZE', col_type='Integer', points=degree_to_size)\n\ncy.style.apply(my_yeast_style, yeast_net)\n\n# Step 7: (Optional) Embed as interactive Cytoscape.js widget\nyeast_net_view = yeast_net.get_first_view()\nstyle_for_widget = cy.style.get(my_yeast_style.get_name(), data_format='cytoscapejs')\nrenderer.render(yeast_net_view, style=style_for_widget['style'], background='radial-gradient(#FFFFFF 15%, #DDDDDD 105%)')", "Long Description\nFrom version 0.4.0, py2cytoscape has wrapper modules for cyREST RESTful API. This means you can access Cytoscape features in more Pythonic way instead of calling raw REST API via HTTP. 
\nFeatures\nPandas for basic data exchange\nSince pandas is a standard library for data mangling/analysis in Python, this new version uses its DataFrame as its basic data object.\nEmbedded Cytoscaep.js Widget\nYou can use Cytoscape.js widget to embed your final result as a part of your notebook.\nSimpler Code to access Cytoscape\ncyREST provides language-agnostic RESTful API, but you need to use a lot of template code to access raw API. Here is an example. Both of the following do the same task, which is creating an empty network in Cytoscape. You will notice it is significantly simpler if you use py2cytoscape wrapper API.\nRaw cyREST", "# HTTP Client for Python\nimport requests\n\n# Standard JSON library\nimport json\n\n# Basic Setup\nPORT_NUMBER = 1234\nBASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'\n\n# Header for posting data to the server as JSON\nHEADERS = {'Content-Type': 'application/json'}\n\n# Define dictionary of empty network\nempty_network = {\n 'data': {\n 'name': 'I\\'m empty!'\n },\n 'elements': {\n 'nodes':[],\n 'edges':[]\n }\n}\n\nres = requests.post(BASE + 'networks?collection=My%20Collection', data=json.dumps(empty_network), headers=HEADERS)\nnew_network_id = res.json()['networkSUID']\nprint('New network created with raw REST API. Its SUID is ' + str(new_network_id))", "With py2cytoscape", "network = cy.network.create(name='My Network', collection='My network collection')\nprint('New network created with py2cytoscape. Its SUID is ' + str(network.get_id()))", "Status\nAs of 6/4/2015, this is still in alpha status and feature requests are always welcome. If youi have questions or feature requests, please send them to our Google Groups:\n\nhttps://groups.google.com/forum/#!forum/cytoscape-discuss\n\nQuick Tour of py2cytoscape Features\n\nCreate a client object to connect to Cytoscape", "# Create an instance of cyREST client. 
Default IP is 'localhost', and port number is 1234.\n# cy = CyRestClient() - This default constructor creates connection to http://localhost:1234/v1\ncy = CyRestClient(ip='127.0.0.1', port=1234)\n\n# Cleanup: Delete all existing networks and tables in current Cytoscape session\ncy.session.delete()", "Creating empty networks", "# Empty network\nempty1 = cy.network.create()\n\n# With name\nempty2 = cy.network.create(name='Created in Jupyter Notebook')\n\n# With name and collection name\nempty3 = cy.network.create(name='Also created in Jupyter', collection='New network collection')", "Load networks from files, URLs or web services", "# Load a single local file\nnet_from_local2 = cy.network.create_from('../tests/data/galFiltered.json')\nnet_from_local1 = cy.network.create_from('sample_yeast_network.xgmml', collection='My Collection')\nnet_from_local2 = cy.network.create_from('../tests/data/galFiltered.gml', collection='My Collection')\n\n# Load from multiple locations\nnetwork_locations = [\n 'sample_yeast_network.xgmml', # Local file\n 'http://chianti.ucsd.edu/cytoscape-data/galFiltered.sif', # Static file on a web server\n 'http://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search/query/brca1?format=xml25' # or a web service\n]\n\n# This requrns Series\nnetworks = cy.network.create_from(network_locations)\npd.DataFrame(networks, columns=['CyNetwork'])", "Create networks from various types of data\nCurrently, py2cytoscape accepts the following data as input:\n\nCytoscape.js\nNetworkX\n\nPandas DataFrame\n\n\nigraph (TBD)\n\nNumpy adjacency matrix (binary or weighted) (TBD)\nGraphX (TBD)", "# Cytoscape.js JSON\nn1 = cy.network.create(data=cyjs.get_empty_network(), name='Created from Cytoscape.js JSON')\n\n# Pandas DataFrame\n\n# Example 1: From a simple text table\ndf_from_sif = pd.read_csv('../tests/data/galFiltered.sif', names=['source', 'interaction', 'target'], sep=' ')\ndf_from_sif.head()\n\n# By default, it uses 'source' for source node column, 'target' for target node column, and 'interaction' for interaction\nyeast1 = cy.network.create_from_dataframe(df_from_sif, name='Yeast network created from pandas DataFrame')\n\n# Example 2: from more complicated table\ndf_from_mitab = pd.read_csv('intact_pubid_22094256.txt', sep='\\t')\ndf_from_mitab.head()\n\nsource = df_from_mitab.columns[0]\ntarget = df_from_mitab.columns[1]\ninteraction = 'Interaction identifier(s)'\ntitle='A Systematic Screen for CDK4/6 Substrates Links FOXM1 Phosphorylation to Senescence Suppression in Cancer Cells.'\n\nhuman1 = cy.network.create_from_dataframe(df_from_mitab, source_col=source, target_col=target, interaction_col=interaction, name=title)\n\n\n# Import edge attributes and node attributes at the same time (TBD)\n\n# NetworkX\nnx_graph = nx.scale_free_graph(100)\nnx.set_node_attributes(nx_graph, 'Degree', nx.degree(nx_graph))\nnx.set_node_attributes(nx_graph, 'Betweenness_Centrality', nx.betweenness_centrality(nx_graph))\nscale_free100 = cy.network.create_from_networkx(nx_graph, collection='Generated by NetworkX')\n\n# TODO: igraph\n# TODO: Numpy adj. 
martix\n# TODO: GraphX", "Get Network from Cytoscape\nYou can get network data in the following forms:\n\nCytoscape.js\nNetworkX\nDataFrame", "# As Cytoscape.js (dict)\nyeast1_json = yeast1.to_json()\n\n# print(json.dumps(yeast1_json, indent=4))\n\n# As NetworkX graph object\nsf100 = scale_free100.to_networkx()\n\nnum_nodes = sf100.number_of_nodes()\nnum_edges = sf100.number_of_edges()\n\nprint('Number of Nodes: ' + str(num_nodes))\nprint('Number of Edges: ' + str(num_edges))\n\n# As a simple, SIF-like DataFrame\nyeast1_df = yeast1.to_dataframe()\nyeast1_df.head()", "Working with CyNetwork API\nCyNetwork class is a simple wrapper for network-related cyREST raw REST API. It does not hold the actual network data. It's a reference to a network in current Cytoscape session. With CyNetwork API, you can access Cytoscape data objects in more Pythonista-friendly way.", "network_suid = yeast1.get_id()\nprint('This object references to Cytoscape network with SUID ' + str(network_suid) + '\\n')\nprint('And its name is: ' + str(yeast1.get_network_value(column='name')) + '\\n')\n\nnodes = yeast1.get_nodes()\nedges = yeast1.get_edges()\n\nprint('* This network has ' + str(len(nodes)) + ' nodes and ' + str(len(edges)) + ' edges\\n') \n\n# Get a row in the node table as pandas Series object\nnode0 = nodes[0]\nrow = yeast1.get_node_value(id=node0)\nprint(row)\n\n# Or, pick one cell in the table\ncell = yeast1.get_node_value(id=node0, column='name')\nprint('\\nThis node has name: ' + str(cell))", "Get references from existing networks\nAnd of course, you can grab references to existing Cytoscape networks:", "# Create a new CyNetwork object from existing network\nnetwork_ref1 = cy.network.create(suid=yeast1.get_id())\n\n# And they are considered as same objects.\nprint(network_ref1 == yeast1)\nprint(network_ref1.get_network_value(column='name'))", "Tables as DataFrame\nCytoscape has two main data types: Network and Table. Network is the graph topology, and Tables are properties for those graphs. For simplicity, this library has access to three basic table objects:\n\nNode Table\nEdge Table\nNetwork Table\n\nFor 99% of your use cases, you can use these three to store properties. Since pandas is extremely useful to handle table data, default data type for tables is DataFrame. However, you can also use other data types including:\n\nCytoscape.js style JSON\nCSV\nTSV\nCX (TBD)", "# Get table from Cytoscape\nnode_table = scale_free100.get_node_table()\nedge_table = scale_free100.get_edge_table()\nnetwork_table = scale_free100.get_network_table()\nnode_table.head()\n\nnetwork_table.transpose().head()\n\nnames = scale_free100.get_node_column('Degree')\nprint(names.head())\n\n# Node Column information. \"name\" is the unique Index\nscale_free100.get_node_columns()", "Edit Network Topology\nAdding and deleteing nodes/edges", "# Add new nodes: Simply send the list of node names. 
NAMES SHOULD BE UNIQUE!\nnew_node_names = ['a', 'b', 'c']\n# Return value contains dictionary from name to SUID.\nnew_nodes = scale_free100.add_nodes(new_node_names)\n\n# Add new edges\n# Send a list of tuples: (source node SUID, target node SUID, interaction type\nnew_edges = []\nnew_edges.append((new_nodes['a'], new_nodes['b'], 'type1'))\nnew_edges.append((new_nodes['a'], new_nodes['c'], 'type2'))\nnew_edges.append((new_nodes['b'], new_nodes['c'], 'type3'))\n\nnew_edge_ids = scale_free100.add_edges(new_edges)\nnew_edge_ids\n\n# Delete node\nscale_free100.delete_node(new_nodes['a'])\n\n# Delete edge\nscale_free100.delete_edge(new_edge_ids.index[0])", "Update Table\nLet's do something a bit more realistic. You can update any Tables by using DataFrame objects.\n1. ID conversion with external service\nLet's use ID Conversion web service by Uniprot to add more information to existing yeast network in current session.", "# Small utility function to convert ID sets\nimport requests\n\ndef uniprot_id_mapping_service(query=None, from_id=None, to_id=None):\n # Uniprot ID Mapping service\n url = 'http://www.uniprot.org/mapping/'\n payload = {\n 'from': from_id,\n 'to': to_id,\n 'format':'tab',\n 'query': query\n }\n \n res = requests.get(url, params=payload)\n \n df = pd.read_csv(res.url, sep='\\t')\n res.close()\n return df\n\n# Get node table from Cytoscape\nyeast_node_table = yeast1.get_node_table()\n\n# From KEGG ID to UniprotKB ID\nquery1 = ' '.join(yeast_node_table['name'].map(lambda gene_id: 'sce:' + gene_id).values)\nid_map_kegg2uniprot = uniprot_id_mapping_service(query1, from_id='KEGG_ID', to_id='ID')\nid_map_kegg2uniprot.columns = ['kegg', 'uniprot']\n\n# From UniprotKB to SGD\nquery2 = ' '.join(id_map_kegg2uniprot['uniprot'].values)\nid_map_uniprot2sgd = uniprot_id_mapping_service(query2, from_id='ID', to_id='SGD_ID')\nid_map_uniprot2sgd.columns = ['uniprot', 'sgd']\n\n# From UniprotKB to Entrez Gene ID\nquery3 = ' '.join(id_map_kegg2uniprot['uniprot'].values)\nid_map_uniprot2ncbi = uniprot_id_mapping_service(query3, from_id='ID', to_id='P_ENTREZGENEID')\nid_map_uniprot2ncbi.columns = ['uniprot', 'entrez']\n\n# Merge them\nmerged = pd.merge(id_map_kegg2uniprot, id_map_uniprot2sgd, on='uniprot')\nmerged = pd.merge(merged, id_map_uniprot2ncbi, on='uniprot')\n\n# Add key column by removing prefix\nmerged['name'] = merged['kegg'].map(lambda kegg_id : kegg_id[4:])\nmerged.head()\n\nupdate_url = BASE + 'networks/' + str(yeast1.get_id()) + '/tables/defaultnode'\nprint(update_url)\n\nut = {\n 'key': 'name',\n 'dataKey': 'name',\n 'data': [\n {\n 'name': 'YBR112C',\n 'foo': 'aaaaaaaa'\n }\n ]\n}\n\nrequests.put(update_url, json=ut, headers=HEADERS)\n\n# Now update existing node table with the data frame above.\nyeast1.update_node_table(merged, network_key_col='name', data_key_col='name')\n\n# Check the table is actually updated\nyeast1.get_node_table().head()", "Create / Delete Table Data\nCurrently, you cannot delete the table or rows due to the Cytoscape data model design. 
However, it is easy to create / delete columns:", "# Delete columns\nyeast1.delete_node_table_column('kegg')\n\n# Create columns\nyeast1.create_node_column(name='New Empty Double Column', data_type='Double', is_immutable=False, is_list=False)\n\n# Default is String, mutable column.\nyeast1.create_node_column(name='Empty String Col')\n\nyeast1.get_node_table().head()", "Visual Styles\nYou can also use wrapper API to access Visual Styles.\nCurrent limitations are:\n\nYou need to use unique name for the Styles\nNeed to know how to write serialized form of objects", "# Get all existing Visual Styles\nimport json\nstyles = cy.style.get_all()\nprint(json.dumps(styles, indent=4))\n\n# Create a new style\nstyle1 = cy.style.create('sample_style1')\n\n# Get a reference to the existing style\ndefault_style = cy.style.create('default')\n\nprint(style1.get_name())\nprint(default_style.get_name())\n\n# Get all available Visual Properties\nprint(len(cy.style.vps.get_all()))\n\n# Get Visual Properties for each data type\nnode_vps = cy.style.vps.get_node_visual_props()\nedge_vps = cy.style.vps.get_edge_visual_props()\nnetwork_vps = cy.style.vps.get_network_visual_props()\n\nprint(pd.Series(edge_vps).head())", "Set default values\nTo set default values for Visual Properties, simply pass key-value pairs as dictionary.", "# Prepare key-value pair for Style defaults\n\nnew_defaults = {\n # Node defaults\n 'NODE_FILL_COLOR': '#eeeeff',\n 'NODE_SIZE': 20,\n 'NODE_BORDER_WIDTH': 0,\n 'NODE_TRANSPARENCY': 120,\n 'NODE_LABEL_COLOR': 'white',\n \n # Edge defaults\n 'EDGE_WIDTH': 3,\n 'EDGE_STROKE_UNSELECTED_PAINT': '#aaaaaa',\n 'EDGE_LINE_TYPE': 'LONG_DASH',\n 'EDGE_TRANSPARENCY': 120,\n \n # Network defaults\n 'NETWORK_BACKGROUND_PAINT': 'black'\n}\n\n# Update\nstyle1.update_defaults(new_defaults)\n\n# Apply the new style\ncy.style.apply(style1, yeast1)", "Visual Mappings", "# Passthrough mapping\nstyle1.create_passthrough_mapping(column='name', col_type='String', vp='NODE_LABEL')\n\n# Discrete mapping: Simply prepare key-value pairs and send it\nkv_pair = {\n 'pp': 'pink',\n 'pd': 'green'\n}\nstyle1.create_discrete_mapping(column='interaction', \n col_type='String', vp='EDGE_STROKE_UNSELECTED_PAINT', mappings=kv_pair)\n\n# Continuous mapping\npoints = [\n {\n 'value': '1.0',\n 'lesser':'white',\n 'equal':'white',\n 'greater': 'white'\n },\n {\n 'value': '20.0',\n 'lesser':'green',\n 'equal':'green',\n 'greater': 'green'\n }\n]\n\nminimal_style = cy.style.create('Minimal')\nminimal_style.create_continuous_mapping(column='Degree', col_type='Double', vp='NODE_FILL_COLOR', points=points)\n\n# Or, use utility for simple mapping\nsimple_slope = StyleUtil.create_slope(min=1, max=20, values=(10, 60))\nminimal_style.create_continuous_mapping(column='Degree', col_type='Double', vp='NODE_SIZE', points=simple_slope)\n\n# Apply the new style\ncy.style.apply(minimal_style, scale_free100)", "Layouts\nCurrently, this supports automatic layouts with default parameters.", "# Get list of available layout algorithms\nlayouts = cy.layout.get_all()\nprint(json.dumps(layouts, indent=4))\n\n# Apply layout\ncy.layout.apply(name='circular', network=yeast1)\n\nyeast1.get_views()\nyeast_view1 = yeast1.get_first_view()\nnode_views = yeast_view1['elements']['nodes']\ndf3 = pd.DataFrame(node_views)\ndf3.head()", "Embed Interactive Widget", "from py2cytoscape.cytoscapejs import viewer as cyjs\ncy.layout.apply(network=scale_free100)\nview1 = scale_free100.get_first_view()\nview2 = yeast1.get_first_view()\n# print(view1)\ncyjs.render(view2, 
'default2', background='#efefef')\n\n# Use Cytoscape.js style JSON\ncyjs_style = cy.style.get(minimal_style.get_name(), data_format='cytoscapejs')\ncyjs.render(view1, style=cyjs_style['style'], background='white')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
besser82/shogun
doc/ipython-notebooks/metric/LMNN.ipynb
bsd-3-clause
[ "Metric Learning with the Shogun Machine Learning Toolbox\nBy Fernando J. Iglesias Garcia (GitHub ID: iglesias) as project report for GSoC 2013 (project details).\nThis notebook illustrates <a href=\"http://en.wikipedia.org/wiki/Statistical_classification\">classification</a> and <a href=\"http://en.wikipedia.org/wiki/Feature_selection\">feature selection</a> using <a href=\"http://en.wikipedia.org/wiki/Similarity_learning#Metric_learning\">metric learning</a> in Shogun. To overcome the limitations of <a href=\"http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\">knn</a> with Euclidean distance as the distance measure, <a href=\"http://en.wikipedia.org/wiki/Large_margin_nearest_neighbor\">Large Margin Nearest Neighbour</a>(LMNN) is discussed. This is consolidated by applying LMNN over the metagenomics data set.\nBuilding up the intuition to understand LMNN\nFirst of all, let us introduce LMNN through a simple example. For this purpose, we will be using the following two-dimensional toy data set:", "import numpy\nimport os\nimport shogun as sg\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\nx = numpy.array([[0,0],[-1,0.1],[0.3,-0.05],[0.7,0.3],[-0.2,-0.6],[-0.15,-0.63],[-0.25,0.55],[-0.28,0.67]])\ny = numpy.array([0,0,0,0,1,1,2,2])", "That is, there are eight feature vectors where each of them belongs to one out of three different classes (identified by either 0, 1, or 2). Let us have a look at this data:", "import matplotlib.pyplot as pyplot\n\n%matplotlib inline\n\ndef plot_data(feats,labels,axis,alpha=1.0):\n # separate features according to their class\n X0,X1,X2 = feats[labels==0], feats[labels==1], feats[labels==2]\n \n # class 0 data\n axis.plot(X0[:,0], X0[:,1], 'o', color='green', markersize=12, alpha=alpha)\n # class 1 data\n axis.plot(X1[:,0], X1[:,1], 'o', color='red', markersize=12, alpha=alpha)\n # class 2 data\n axis.plot(X2[:,0], X2[:,1], 'o', color='blue', markersize=12, alpha=alpha)\n \n # set axes limits\n axis.set_xlim(-1.5,1.5)\n axis.set_ylim(-1.5,1.5)\n axis.set_aspect('equal')\n \n axis.set_xlabel('x')\n axis.set_ylabel('y')\n\nfigure,axis = pyplot.subplots(1,1)\nplot_data(x,y,axis)\naxis.set_title('Toy data set')\npyplot.show()", "In the figure above, we can see that two of the classes are represented by two points that are, for each of these classes, very close to each other. The third class, however, has four points that are close to each other with respect to the y-axis, but spread along the x-axis. \nIf we were to apply kNN (k-nearest neighbors) in a data set like this, we would expect quite some errors using the standard Euclidean distance. This is due to the fact that the spread of the data is not similar amongst the feature dimensions. The following piece of code plots an ellipse on top of the data set. 
The ellipse in this case is in fact a circunference that helps to visualize how the Euclidean distance weights equally both feature dimensions.", "def make_covariance_ellipse(covariance):\n import matplotlib.patches as patches\n import scipy.linalg as linalg\n \n # the ellipse is centered at (0,0)\n mean = numpy.array([0,0])\n \n # eigenvalue decomposition of the covariance matrix (w are eigenvalues and v eigenvectors),\n # keeping only the real part\n w,v = linalg.eigh(covariance)\n # normalize the eigenvector corresponding to the largest eigenvalue\n u = v[0]/linalg.norm(v[0])\n # angle in degrees\n angle = 180.0/numpy.pi*numpy.arctan(u[1]/u[0])\n # fill Gaussian ellipse at 2 standard deviation\n ellipse = patches.Ellipse(mean, 2*w[0]**0.5, 2*w[1]**0.5, 180+angle, color='orange', alpha=0.3)\n \n return ellipse\n\n# represent the Euclidean distance\nfigure,axis = pyplot.subplots(1,1)\nplot_data(x,y,axis)\nellipse = make_covariance_ellipse(numpy.eye(2))\naxis.add_artist(ellipse)\naxis.set_title('Euclidean distance')\npyplot.show()\n", "A possible workaround to improve the performance of kNN in a data set like this would be to input to the kNN routine a distance measure. For instance, in the example above a good distance measure would give more weight to the y-direction than to the x-direction to account for the large spread along the x-axis. Nonetheless, it would be nicer (and, in fact, much more useful in practice) if this distance could be learnt automatically from the data at hand. Actually, LMNN is based upon this principle: given a number of neighbours k, find the Mahalanobis distance measure which maximizes kNN accuracy (using the given value for k) in a training data set. As we usually do in machine learning, under the assumption that the training data is an accurate enough representation of the underlying process, the distance learnt will not only perform well in the training data, but also have good generalization properties. \nNow, let us use the LMNN class implemented in Shogun to find the distance and plot its associated ellipse. If everything goes well, we will see that the new ellipse only overlaps with the data points of the green class.\nFirst, we need to wrap the data into Shogun's feature and label objects:", "from shogun import features, MulticlassLabels\n\nfeats = features(x.T)\nlabels = MulticlassLabels(y.astype(numpy.float64))", "Secondly, perform LMNN training:", "from shogun import LMNN\n\n# number of target neighbours per example\nk = 1\n\nlmnn = LMNN(feats,labels,k)\n# set an initial transform as a start point of the optimization\ninit_transform = numpy.eye(2)\nlmnn.put('maxiter', 2000)\nlmnn.train(init_transform)", "LMNN is an iterative algorithm. The argument given to train represents the initial state of the solution. By default, if no argument is given, then LMNN uses PCA to obtain this initial value.\nFinally, we retrieve the distance measure learnt by LMNN during training and visualize it together with the data:", "# get the linear transform from LMNN\nL = lmnn.get_real_matrix('linear_transform')\n# square the linear transform to obtain the Mahalanobis distance matrix\nM = numpy.matrix(numpy.dot(L.T,L))\n\n# represent the distance given by LMNN\nfigure,axis = pyplot.subplots(1,1)\nplot_data(x,y,axis)\nellipse = make_covariance_ellipse(M.I)\naxis.add_artist(ellipse)\naxis.set_title('LMNN distance')\npyplot.show()", "Beyond the main idea\nLMNN is one of the so-called linear metric learning methods. 
What this means is that we can understand LMNN's output in two different ways: on the one hand, as a distance measure, this was explained above; on the other hand, as a linear transformation of the input data. Like any other linear transformation, LMNN's output can be written as a matrix, that we will call $L$. In other words, if the input data is represented by the matrix $X$, then LMNN can be understood as the data transformation expressed by $X'=L X$. We use the convention that each column is a feature vector; thus, the number of rows of $X$ is equal to the input dimension of the data, and the number of columns is equal to the number of vectors.\nSo far, so good. But, if the output of the same method can be interpreted in two different ways, then there must be a relation between them! And that is precisely the case! As mentioned above, the ellipses that were plotted in the previous section represent a distance measure. This distance measure can be thought of as a matrix $M$, being the distance between two vectors $\\vec{x_i}$ and $\\vec{x_j}$ equal to $d(\\vec{x_i},\\vec{x_j})=(\\vec{x_i}-\\vec{x_j})^T M (\\vec{x_i}-\\vec{x_j})$. In general, this type of matrices are known as Mahalanobis matrices. In LMNN, the matrix $M$ is precisely the 'square' of the linear transformation $L$, i.e. $M=L^T L$. Note that a direct consequence of this is that $M$ is guaranteed to be positive semi-definite (PSD), and therefore define a valid metric.\nThis distance measure/linear transform duality in LMNN has its own advantages. An important one is that the optimization problem can go back and forth between the $L$ and the $M$ representations, giving raise to a very efficient solution.\nLet us now visualize LMNN using the linear transform interpretation. In the following figure we have taken our original toy data, transform it using $L$ and plot both the before and after versions of the data together.", "# project original data using L\nlx = numpy.dot(L,x.T)\n\n# represent the data in the projected space\nfigure,axis = pyplot.subplots(1,1)\nplot_data(lx.T,y,axis)\nplot_data(x,y,axis,0.3)\nellipse = make_covariance_ellipse(numpy.eye(2))\naxis.add_artist(ellipse)\naxis.set_title('LMNN\\'s linear transform')\npyplot.show()", "In the figure above, the transparent points represent the original data and are shown to ease the visualization of the LMNN transformation. Note also that the ellipse plotted is the one corresponding to the common Euclidean distance. This is actually an important consideration: if we think of LMNN as a linear transformation, the distance considered in the projected space is the Euclidean distance, and no any Mahalanobis distance given by M. To sum up, we can think of LMNN as a linear transform of the input space, or as method to obtain a distance measure to be used in the input space. It is an error to apply both the projection and the learnt Mahalanobis distance. \nNeighbourhood graphs\nAn alternative way to visualize the effect of using the distance found by LMNN together with kNN consists of using neighbourhood graphs. Despite the fancy name, these are actually pretty simple. The idea is just to construct a graph in the Euclidean space, where the points in the data set are the nodes of the graph, and a directed edge from one point to another denotes that the destination node is the 1-nearest neighbour of the origin node. Of course, it is also possible to work with neighbourhood graphs where $k \\gt 1$. 
Here we have taken the simplification of $k = 1$ so that the forthcoming plots are not too cluttered.\nLet us define a data set for which the Euclidean distance performs considerably bad. In this data set there are several levels or layers in the y-direction. Each layer is populated by points that belong to the same class spread along the x-direction. The layers are close to each other in pairs, whereas the spread along x is larger. Let us define a function to generate such a data set and have a look at it.", "import numpy\nimport matplotlib.pyplot as pyplot\n%matplotlib inline\n\ndef sandwich_data():\n from numpy.random import normal\n \n # number of distinct classes\n num_classes = 6\n # number of points per class\n num_points = 9\n # distance between layers, the points of each class are in a layer\n dist = 0.7\n \n # memory pre-allocation\n x = numpy.zeros((num_classes*num_points, 2))\n y = numpy.zeros(num_classes*num_points)\n \n for i,j in zip(range(num_classes), range(-num_classes//2, num_classes//2 + 1)):\n for k,l in zip(range(num_points), range(-num_points//2, num_points//2 + 1)):\n x[i*num_points + k, :] = numpy.array([normal(l, 0.1), normal(dist*j, 0.1)])\n \n y[i*num_points:i*num_points + num_points] = i\n \n return x,y\n \n\ndef plot_sandwich_data(x, y, axis=pyplot, cols=['r', 'b', 'g', 'm', 'k', 'y']):\n for idx,val in enumerate(numpy.unique(y)):\n xi = x[y==val]\n axis.scatter(xi[:,0], xi[:,1], s=50, facecolors='none', edgecolors=cols[idx])\n\nx, y = sandwich_data()\nfigure, axis = pyplot.subplots(1, 1, figsize=(5,5))\nplot_sandwich_data(x, y, axis)\n\naxis.set_aspect('equal')\naxis.set_title('\"Sandwich\" toy data set')\naxis.set_xlabel('x')\naxis.set_ylabel('y')\n\npyplot.show()", "Let the fun begin now! In the following block of code, we create an instance of a kNN classifier, compute the nearest neighbours using the Euclidean distance and, afterwards, using the distance computed by LMNN. The data set in the space result of the linear transformation given by LMNN is also shown.", "from shogun import KNN, LMNN, features, MulticlassLabels\n\ndef plot_neighborhood_graph(x, nn, axis=pyplot, cols=['r', 'b', 'g', 'm', 'k', 'y']):\n\tfor i in range(x.shape[0]):\n\t\txs = [x[i,0], x[nn[1,i], 0]]\n\t\tys = [x[i,1], x[nn[1,i], 1]]\n\t\taxis.plot(xs, ys, cols[int(y[i])])\n\nfeats = features(x.T)\nlabels = MulticlassLabels(y)\n\nfig, axes = pyplot.subplots(1, 3, figsize=(15, 10))\n\n# use k = 2 instead of 1 because otherwise the method nearest_neighbors just returns the same\n# points as their own 1-nearest neighbours\nk = 2\n\ndistance = sg.distance('EuclideanDistance')\ndistance.init(feats, feats)\nknn = KNN(k, distance, labels)\n\nplot_sandwich_data(x, y, axes[0])\nplot_neighborhood_graph(x, knn.nearest_neighbors(), axes[0])\naxes[0].set_title('Euclidean neighbourhood in the input space')\n\nlmnn = LMNN(feats, labels, k)\n# set a large number of iterations. 
The data set is small so it does not cost a lot, and this way\n# we ensure a robust solution\nlmnn.put('maxiter', 3000)\nlmnn.train()\nknn.put('distance', lmnn.get_distance())\n\nplot_sandwich_data(x, y, axes[1])\nplot_neighborhood_graph(x, knn.nearest_neighbors(), axes[1])\naxes[1].set_title('LMNN neighbourhood in the input space')\n\n# plot features in the transformed space, with the neighbourhood graph computed using the Euclidean distance\nL = lmnn.get_real_matrix('linear_transform')\nxl = numpy.dot(x, L.T)\nfeats = features(xl.T)\ndist = sg.distance('EuclideanDistance')\ndist.init(feats, feats)\nknn.put('distance', dist)\n\nplot_sandwich_data(xl, y, axes[2])\nplot_neighborhood_graph(xl, knn.nearest_neighbors(), axes[2])\naxes[2].set_ylim(-3, 2.5)\naxes[2].set_title('Euclidean neighbourhood in the transformed space')\n\n[axes[i].set_xlabel('x') for i in range(len(axes))]\n[axes[i].set_ylabel('y') for i in range(len(axes))]\n[axes[i].set_aspect('equal') for i in range(len(axes))]\n\npyplot.show()", "Notice how all the lines that go across the different layers in the left hand side figure have disappeared in the figure in the middle. Indeed, LMNN did a pretty good job here. The figure in the right hand side shows the disposition of the points in the transformed space; from which the neighbourhoods in the middle figure should be clear. In any case, this toy example is just an illustration to give an idea of the power of LMNN. In the next section we will see how after applying a couple methods for feature normalization (e.g. scaling, whitening) the Euclidean distance is not so sensitive against different feature scales.\nReal data sets\nFeature selection in metagenomics\nMetagenomics is a modern field in charge of the study of the DNA of microorganisms. The data set we have chosen for this section contains information about three different types of apes; in particular, gorillas, chimpanzees, and bonobos. Taking an approach based on metagenomics, the main idea is to study the DNA of the microorganisms (e.g. bacteria) which live inside the body of the apes. Owing to the many chemical reactions produced by these microorganisms, it is not only the DNA of the host itself important when studying, for instance, sickness or health, but also the DNA of the microorganisms inhabitants.\nFirst of all, let us load the ape data set. This data set contains features taken from the bacteria inhabitant in the gut of the apes.", "from shogun import CSVFile, features, MulticlassLabels\n\nape_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/fm_ape_gut.dat')))\nape_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/label_ape_gut.dat')))", "It is of course important to have a good insight of the data we are dealing with. For instance, how many examples and different features do we have?", "print('Number of examples = %d, number of features = %d.' % (ape_features.get_num_vectors(), ape_features.get_num_features()))", "So, 1472 features! Those are quite many features indeed. In other words, the feature vectors at hand lie on a 1472-dimensional space. We cannot visualize in the input feature space how the feature vectors look like. However, in order to gain a little bit more of understanding of the data, we can apply dimension reduction, embed the feature vectors in a two-dimensional space, and plot the vectors in the embedded space. To this end, we are going to use one of the many methods for dimension reduction included in Shogun. 
In this case, we are using t-distributed stochastic neighbour embedding (or t-SNE). This method is particularly suited to producing low-dimensional embeddings (two or three dimensions) that are straightforward to visualize.", "def visualize_tdsne(features, labels):\n    from shogun import TDistributedStochasticNeighborEmbedding\n    \n    converter = TDistributedStochasticNeighborEmbedding()\n    converter.put('target_dim', 2)\n    converter.put('perplexity', 25)\n    \n    embedding = converter.embed(features)\n    \n    import matplotlib.pyplot as pyplot\n    % matplotlib inline\n    \n    x = embedding.get_real_matrix('feature_matrix')\n    y = labels.get_real_vector('labels')\n    \n    pyplot.scatter(x[0, y==0], x[1, y==0], color='green')\n    pyplot.scatter(x[0, y==1], x[1, y==1], color='red')\n    pyplot.scatter(x[0, y==2], x[1, y==2], color='blue')\n    pyplot.show()\n    \nvisualize_tdsne(ape_features, ape_labels)", "In the figure above, the green points represent chimpanzees, the red ones bonobos, and the blue points gorillas. Given the results in the figure, we can quickly conclude that the three classes of apes are fairly easy to discriminate in this data set, since the classes are more or less well separated in two dimensions. Note that t-SNE uses randomness in the embedding process. Thus, the figure produced by the previous block of code will differ across executions. Feel free to play around and observe the results after different runs! After this, it should be clear that the bonobos form a very compact cluster most of the time, whereas the chimpanzee and gorilla clusters are more spread out. Also, there tends to be a chimpanzee (a green point) closer to the gorillas' cluster. This is probably an outlier in the data set.\nEven before applying LMNN to the ape gut data set, let us apply kNN classification and study how it performs using the typical Euclidean distance. Furthermore, since this data set is rather small in terms of the number of examples, the kNN error may vary considerably (I have observed variation of almost 20% a few times) across different runs. To get a robust estimate of how kNN performs in the data set, we will perform cross-validation using Shogun's framework for evaluation. This will give us a reliable result regarding how well kNN performs in this data set.", "from shogun import KNN\nfrom shogun import StratifiedCrossValidationSplitting, CrossValidation\nfrom shogun import CrossValidationResult, MulticlassAccuracy\n\n# set up the classifier\nknn = KNN()\nknn.put('k', 3)\nknn.put('distance', sg.distance('EuclideanDistance'))\n\n# set up 5-fold cross-validation\nsplitting = StratifiedCrossValidationSplitting(ape_labels, 5)\n# evaluation method\nevaluator = MulticlassAccuracy()\ncross_validation = CrossValidation(knn, ape_features, ape_labels, splitting, evaluator)\n# locking is not supported for kNN, deactivate it to avoid an inoffensive warning\ncross_validation.put('m_autolock', False)\n# number of experiments, the more we do, the less variance in the result\nnum_runs = 200\ncross_validation.put('num_runs', num_runs)\n\n# perform cross-validation and print the result!\nresult = cross_validation.evaluate()\nresult = CrossValidationResult.obtain_from_generic(result)\nprint('kNN mean accuracy in a total of %d runs is %.4f.' % (num_runs, result.get_real('mean')))", "Finally, we can say that kNN actually performs pretty well in this data set. The average test classification error is less than 2%.
This error rate is already low, and we should not really expect a significant improvement from applying LMNN. This ought not to be a surprise: recall that the points in this data set have more than one thousand features and, as we saw before in the dimension reduction experiment, only two dimensions in an embedded space were enough to discern the chimpanzees, gorillas and bonobos reasonably well.\nNote that we have used stratified splitting for cross-validation. Stratified splitting divides the folds used during cross-validation so that the proportion of the classes in the initial data set is approximately maintained in each of the folds. This is particularly useful in skewed data sets, where the number of examples among classes varies significantly.\nNonetheless, LMNN may still turn out to be very useful in a data set like this one. By making a small modification to the vanilla LMNN algorithm, we can enforce that the linear transform found by LMNN is diagonal. This means that LMNN can be used to weight each of the features and, once the training is performed, read from these weights which features are relevant for applying kNN and which ones are not. This is indeed a form of feature selection. Using Shogun, it is extremely easy to switch to this so-called diagonal mode for LMNN: just call the method set_diagonal(use_diagonal) with use_diagonal set to True.\nThe following experiment takes about five minutes until it is completed (using Shogun Release, i.e. compiled with optimizations enabled). This is mostly due to the high dimension of the data (1472 features) and the fact that, during training, LMNN has to compute many outer products of feature vectors, a computation whose time complexity is proportional to the square of the number of features. For the illustration purposes of this notebook, in the following cell we are just going to use a small subset of all the features so that the training finishes faster.", "from shogun import LMNN\nimport numpy\n\n# to make training faster, use a portion of the features\nfm = ape_features.get_real_matrix('feature_matrix')\nape_features_subset = features(fm[:150, :])\n\n# number of target neighbours in LMNN, here we just use the same value that was used for KNN before\nk = 3\nlmnn = LMNN(ape_features_subset, ape_labels, k)\nlmnn.put('m_diagonal', True)\nlmnn.put('maxiter', 1000)\ninit_transform = numpy.eye(ape_features_subset.get_num_features())\nlmnn.train(init_transform)\n\ndiagonal = numpy.diag(lmnn.get_real_matrix('linear_transform'))\nprint('%d out of %d elements are non-zero.' % (numpy.sum(diagonal != 0), diagonal.size))", "So only 64 out of the first 150 features are important according to the resulting transform! The rest of them have been given a weight exactly equal to zero, even though all of the features were weighted equally, with a value of one, at the beginning of the training. In fact, if all the 1472 features were used, only about 158 would have received a non-zero weight. Please feel free to experiment using all the features!\nIt is a fair question to ask how we knew that the maximum number of iterations in this experiment should be around 1000. Well, the truth is that we know this only because we have run this experiment with this same data beforehand, and we know that after this number of iterations the algorithm has converged. This is not ideal, and it would be preferable if one could completely forget about this parameter, so that LMNN uses as many iterations as it needs until it converges.
Nevertheless, this is not practical at least because of two reasons:\n\nIf you are dealing with many examples or with very high dimensional feature vectors, you might not want to wait until the algorithm converges and have a look at what LMNN has found before it has completely converged.\nAs with any other algorithm based on gradient descent, the termination criteria can be tricky. Let us illustrate this further:", "import matplotlib.pyplot as pyplot\n%matplotlib inline\n\nstatistics = lmnn.get_statistics()\npyplot.plot(statistics.obj.get())\npyplot.grid(True)\npyplot.xlabel('Number of iterations')\npyplot.ylabel('LMNN objective')\npyplot.show()", "Along approximately the first three hundred iterations, there is not much variation in the objective. In other words, the objective curve is pretty much flat. If we are not careful and use termination criteria that are not demanding enough, training could be stopped at this point. This would be wrong, and might have terrible results as the training had not clearly converged yet at that moment.\nIn order to avoid disastrous situations, in Shogun we have implemented LMNN with really demanding criteria for automatic termination of the training process. Albeit, it is possible to tune the termination criteria using the methods set_stepsize_threshold and set_obj_threshold. These methods can be used to modify the lower bound required in the step size and the increment in the objective (relative to its absolute value), respectively, to stop training. Also, it is possible to set a hard upper bound on the number of iterations using set_maxiter as we have done above. In case the internal termination criteria did not fire before the maximum number of iterations was reached, you will receive a warning message, similar to the one shown above. This is not a synonym that the training went wrong; but it is strongly recommended at this event to have a look at the objective plot as we have done in the previous block of code.\nMulticlass classification\nIn addition to feature selection, LMNN can be of course used for multiclass classification. I like to think about LMNN in multiclass classification as a way to empower kNN. That is, the idea is basically to apply kNN using the distance found by LMNN $-$ in contrast with using one of the other most common distances, such as the Euclidean one. To this end we will use the wine data set from the UCI Machine Learning repository.", "from shogun import CSVFile, features, MulticlassLabels\n\nwine_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))\nwine_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))\n\nassert(wine_features.get_num_vectors() == wine_labels.get_num_labels())\nprint('%d feature vectors with %d features from %d different classes.' 
% (wine_features.get_num_vectors(), \\\n wine_features.get_num_features(), wine_labels.get_num_classes()))", "First, let us evaluate the performance of kNN in this data set using the same cross-validation setting used in the previous section:", "from shogun import KNN, EuclideanDistance\nfrom shogun import StratifiedCrossValidationSplitting, CrossValidation\nfrom shogun import CrossValidationResult, MulticlassAccuracy\nimport numpy\n\n# kNN classifier\nk = 5\nknn = KNN()\nknn.put('k', k)\nknn.put('distance', EuclideanDistance())\n\nsplitting = StratifiedCrossValidationSplitting(wine_labels, 5)\nevaluator = MulticlassAccuracy()\ncross_validation = CrossValidation(knn, wine_features, wine_labels, splitting, evaluator)\ncross_validation.put('m_autolock', False)\nnum_runs = 200\ncross_validation.put('num_runs', num_runs)\n\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\neuclidean_means = numpy.zeros(3)\neuclidean_means[0] = result.get_real('mean')\n\nprint('kNN accuracy with the Euclidean distance %.4f.' % result.get_real('mean'))", "Secondly, we will use LMNN to find a distance measure and use it with kNN:", "from shogun import LMNN\n\n# train LMNN\nlmnn = LMNN(wine_features, wine_labels, k)\nlmnn.put('maxiter', 1500)\nlmnn.train()\n\n# evaluate kNN using the distance learnt by LMNN\nknn.set_distance(lmnn.get_distance())\n\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\nlmnn_means = numpy.zeros(3)\nlmnn_means[0] = result.get_real('mean')\n\nprint('kNN accuracy with the distance obtained by LMNN %.4f.' % result.get_real('mean'))", "The warning is fine in this case; we have made sure that the objective variation was really small after 1500 iterations. In any case, do not hesitate to check it yourself by studying the objective plot as shown in the previous section.\nAs the results point out, LMNN really helps here to achieve better classification performance. However, this comparison is not entirely fair, since the Euclidean distance is very sensitive to the scaling that different feature dimensions may have, whereas LMNN can adjust to this during training. Let us have a closer look at this fact. Next, we are going to retrieve the feature matrix and see what the maxima and minima are for every dimension.", "print('minima = ' + str(numpy.min(wine_features, axis=1)))\nprint('maxima = ' + str(numpy.max(wine_features, axis=1)))", "Examine the second and the last dimensions, for instance. The second dimension has values ranging from 0.74 to 5.8, while the values of the last dimension range from 278 to 1680. This will cause the Euclidean distance to work especially poorly on this data set. You can see this by considering that the total distance between two points will almost certainly be dominated by the contributions of the dimensions with the largest range.\nIn order to produce a fairer comparison, we will rescale the data so that all the feature dimensions are within the interval [0,1].
Luckily, there is a preprocessor class in Shogun that makes this straightforward.", "from shogun import RescaleFeatures\n\n# preprocess features so that all of them vary within [0,1]\npreprocessor = RescaleFeatures()\npreprocessor.init(wine_features)\nwine_features.add_preprocessor(preprocessor)\nwine_features.apply_preprocessor()\n\n# sanity check\nassert(numpy.min(wine_features) >= 0.0 and numpy.max(wine_features) <= 1.0)\n\n# perform kNN classification after the feature rescaling\nknn.put('distance', EuclideanDistance())\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\neuclidean_means[1] = result.get_real('mean')\n\nprint('kNN accuracy with the Euclidean distance after feature rescaling %.4f.' % result.get_real('mean'))\n\n# train kNN in the new features and classify with kNN\nlmnn.train()\nknn.put('distance', lmnn.get_distance())\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\nlmnn_means[1] = result.get_real('mean')\n\nprint('kNN accuracy with the distance obtained by LMNN after feature rescaling %.4f.' % result.get_real('mean'))", "Another different preprocessing that can be applied to the data is called whitening. Whitening, which is explained in an article in wikipedia, transforms the covariance matrix of the data into the identity matrix.", "import scipy.linalg as linalg\n\n# shorthand for the feature matrix -- this makes a copy of the feature matrix\ndata = wine_features.get_real_matrix('feature_matrix')\n# remove mean\ndata = data.T\ndata-= numpy.mean(data, axis=0)\n# compute the square of the covariance matrix and its inverse\nM = linalg.sqrtm(numpy.cov(data.T))\n# keep only the real part, although the imaginary that pops up in the sqrtm operation should be equal to zero\nN = linalg.inv(M).real\n# apply whitening transform\nwhite_data = numpy.dot(N, data.T)\nwine_white_features = features(white_data)", "The covariance matrices before and after the transformation can be compared to see that the covariance really becomes the identity matrix.", "import matplotlib.pyplot as pyplot\n%matplotlib inline\n\nfig, axarr = pyplot.subplots(1,2)\naxarr[0].matshow(numpy.cov(wine_features))\naxarr[1].matshow(numpy.cov(wine_white_features))\npyplot.show()", "Finally, we evaluate again the performance obtained with kNN using the Euclidean distance and the distance found by LMNN using the whitened features.", "wine_features = wine_white_features\n\n# perform kNN classification after whitening\nknn.set_distance(EuclideanDistance())\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\neuclidean_means[2] = result.get_real('mean')\n\nprint('kNN accuracy with the Euclidean distance after whitening %.4f.' % result.get_real('mean'))\n\n# train kNN in the new features and classify with kNN\nlmnn.train()\nknn.put('distance', lmnn.get_distance())\nresult = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())\nlmnn_means[2] = result.get_real('mean')\n\nprint('kNN accuracy with the distance obtained by LMNN after whitening %.4f.' % result.get_real('mean'))", "As it can be seen, it did not really help to whiten the features in this data set with respect to only applying feature rescaling; the accuracy was already rather large after rescaling. 
In any case, it is good to know that this transformation exists, as it can become useful with other data sets, or before applying other machine learning algorithms.\nLet us summarize the results obtained in this section with a bar chart grouping the accuracy results by distance (Euclidean or the one found by LMNN), and feature preprocessing:", "assert(euclidean_means.shape[0] == lmnn_means.shape[0])\nN = euclidean_means.shape[0]\n# the x locations for the groups\nind = 0.5*numpy.arange(N)\n# bar width\nwidth = 0.15\nfigure, axes = pyplot.subplots()\nfigure.set_size_inches(6, 5)\n\neuclidean_rects = axes.bar(ind, euclidean_means, width, color='y')\nlmnn_rects = axes.bar(ind+width, lmnn_means, width, color='r')\n\n# attach information to chart\naxes.set_ylabel('Accuracies')\naxes.set_ylim(top=1.4)\naxes.set_title('kNN accuracy by distance and feature preprocessing')\naxes.set_xticks(ind+width)\naxes.set_xticklabels(('Raw', 'Rescaling', 'Whitening'))\naxes.legend(( euclidean_rects[0], lmnn_rects[0]), ('Euclidean', 'LMNN'), loc='upper right')\n\ndef autolabel(rects):\n # attach text labels to bars\n for rect in rects:\n height = rect.get_height()\n axes.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%.3f' % height,\n ha='center', va='bottom')\n \nautolabel(euclidean_rects)\nautolabel(lmnn_rects)\n\npyplot.show()", "References\n\nWeinberger, K. Q., Saul, L. K. Distance Metric Learning for Large Margin Nearest Neighbor Classification. (Link to paper in JMLR)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BillyLjm/CS100.1x.__CS190.1x
lab1_word_count_student.ipynb
mit
[ "+ \nWord Count Lab: Building a word count application\nThis lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet.\n During this lab we will cover: \nPart 1: Creating a base RDD and pair RDDs\nPart 2: Counting with pair RDDs\nPart 3: Finding unique words and a mean value\nPart 4: Apply word count to a file\nNote that, for reference, you can look up the details of the relevant methods in Spark's Python API\n Part 1: Creating a base RDD and pair RDDs \nIn this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words.\n (1a) Create a base RDD \nWe'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD.", "wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']\nwordsRDD = sc.parallelize(wordsList, 4)\n# Print out the type of wordsRDD\nprint type(wordsRDD)", "(1b) Pluralize and test \nLet's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace &lt;FILL IN&gt; with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If you implementation is correct it will print 1 test passed.\nThis is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more &lt;FILL IN&gt; sections. The cell that needs to be modified will have # TODO: Replace &lt;FILL IN&gt; with appropriate code on its first line. Once the &lt;FILL IN&gt; sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.", "# TODO: Replace <FILL IN> with appropriate code\ndef makePlural(word):\n \"\"\"Adds an 's' to `word`.\n\n Note:\n This is a simple function that only adds an 's'. No attempt is made to follow proper\n pluralization rules.\n\n Args:\n word (str): A string.\n\n Returns:\n str: A string with 's' added to it.\n \"\"\"\n return word + \"s\"\n\nprint makePlural('cat')\n\n# One way of completing the function\ndef makePlural(word):\n return word + 's'\n\nprint makePlural('cat')\n\n# Load in the testing code and check to see if your answer is correct\n# If incorrect it will report back '1 test failed' for each failed test\n# Make sure to rerun any cell you change before trying the test again\nfrom test_helper import Test\n# TEST Pluralize and test (1b)\nTest.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')", "(1c) Apply makePlural to the base RDD \nNow pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. 
And then call the collect() action to see the transformed RDD.", "# TODO: Replace <FILL IN> with appropriate code\npluralRDD = wordsRDD.map(makePlural)\nprint pluralRDD.collect()\n\n# TEST Apply makePlural to the base RDD(1c)\nTest.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],\n 'incorrect values for pluralRDD')", "(1d) Pass a lambda function to map \nLet's create the same RDD using a lambda function.", "# TODO: Replace <FILL IN> with appropriate code\npluralLambdaRDD = wordsRDD.map(lambda a: a + \"s\")\nprint pluralLambdaRDD.collect()\n\n# TEST Pass a lambda function to map (1d)\nTest.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],\n 'incorrect values for pluralLambdaRDD (1d)')", "(1e) Length of each word \nNow use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.", "# TODO: Replace <FILL IN> with appropriate code\npluralLengths = (pluralRDD\n .map(lambda a: len(a))\n .collect())\nprint pluralLengths\n\n# TEST Length of each word (1e)\nTest.assertEquals(pluralLengths, [4, 9, 4, 4, 4],\n 'incorrect values for pluralLengths')", "(1f) Pair RDDs \nThe next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('&lt;word&gt;', 1) for each word element in the RDD.\nWe can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.", "# TODO: Replace <FILL IN> with appropriate code\nwordPairs = wordsRDD.map(lambda a: (a,1))\nprint wordPairs.collect()\n\n# TEST Pair RDDs (1f)\nTest.assertEquals(wordPairs.collect(),\n [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],\n 'incorrect value for wordPairs')", "Part 2: Counting with pair RDDs \nNow, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.\nA naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.\n (2a) groupByKey() approach \nAn approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():\n\n\nThe operation requires a lot of data movement to move all the values into the appropriate partitions.\n\n\nThe lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) 
would be huge and could exhaust the available memory in a worker.\n\n\nUse groupByKey() to generate a pair RDD of type ('word', iterator).", "# TODO: Replace <FILL IN> with appropriate code\n# Note that groupByKey requires no parameters\nwordsGrouped = wordPairs.groupByKey()\nfor key, value in wordsGrouped.collect():\n print '{0}: {1}'.format(key, list(value))\n\n# TEST groupByKey() approach (2a)\nTest.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),\n [('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],\n 'incorrect value for wordsGrouped')", "(2b) Use groupByKey() to obtain the counts \nUsing the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.\nNow sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.", "# TODO: Replace <FILL IN> with appropriate code\nwordCountsGrouped = wordsGrouped.map(lambda (a,b): (a, sum(b)))\nprint wordCountsGrouped.collect()\n\n# TEST Use groupByKey() to obtain the counts (2b)\nTest.assertEquals(sorted(wordCountsGrouped.collect()),\n [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCountsGrouped')", "(2c) Counting using reduceByKey \nA better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.", "# TODO: Replace <FILL IN> with appropriate code\n# Note that reduceByKey takes in a function that accepts two values and returns a single value\nwordCounts = wordPairs.reduceByKey(lambda a,b: a+b)\nprint wordCounts.collect()\n\n# TEST Counting using reduceByKey (2c)\nTest.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCounts')", "(2d) All together \nThe expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.", "# TODO: Replace <FILL IN> with appropriate code\nwordCountsCollected = (wordsRDD\n .map(lambda a: (a,1))\n .reduceByKey(lambda a,b: a+b)\n .collect())\nprint wordCountsCollected\n\n# TEST All together (2d)\nTest.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCountsCollected')", "Part 3: Finding unique words and a mean value \n (3a) Unique words \nCalculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.", "# TODO: Replace <FILL IN> with appropriate code\nuniqueWords = wordsRDD.distinct().count()\nprint uniqueWords\n\n# TEST Unique words (3a)\nTest.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')", "(3b) Mean using reduce \nFind the mean number of words per unique word in wordCounts.\nUse a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. 
First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.", "# TODO: Replace <FILL IN> with appropriate code\nfrom operator import add\ntotalCount = (wordCounts\n .map(lambda (a,b): b)\n .reduce(lambda a,b: a+b))\naverage = totalCount / float(wordCounts.distinct().count())\nprint totalCount\nprint round(average, 2)\n\n# TEST Mean using reduce (3b)\nTest.assertEquals(round(average, 2), 1.67, 'incorrect value of average')", "Part 4: Apply word count to a file \nIn this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.\n (4a) wordCount function \nFirst, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts.", "# TODO: Replace <FILL IN> with appropriate code\ndef wordCount(wordListRDD):\n \"\"\"Creates a pair RDD with word counts from an RDD of words.\n\n Args:\n wordListRDD (RDD of str): An RDD consisting of words.\n\n Returns:\n RDD of (str, int): An RDD consisting of (word, count) tuples.\n \"\"\"\n return (wordListRDD\n .map(lambda a : (a,1))\n .reduceByKey(lambda a,b: a+b))\nprint wordCount(wordsRDD).collect()\n\n# TEST wordCount function (4a)\nTest.assertEquals(sorted(wordCount(wordsRDD).collect()),\n [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect definition for wordCount function')", "(4b) Capitalization and punctuation \nReal world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:\n\n\nWords should be counted independent of their capitialization (e.g., Spark and spark should be counted as the same word).\n\n\nAll punctuation should be removed.\n\n\nAny leading or trailing spaces on a line should be removed.\n\n\nDefine the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful.", "# TODO: Replace <FILL IN> with appropriate code\nimport re\ndef removePunctuation(text):\n \"\"\"Removes punctuation, changes to lower case, and strips leading and trailing spaces.\n\n Note:\n Only spaces, letters, and numbers should be retained. Other characters should should be\n eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after\n punctuation is removed.\n\n Args:\n text (str): A string.\n\n Returns:\n str: The cleaned up string.\n \"\"\"\n return re.sub(\"[^a-zA-Z0-9 ]\", \"\", text.strip(\" \").lower())\n \nprint removePunctuation('Hi, you!')\nprint removePunctuation(' No under_score!')\n\n# TEST Capitalization and punctuation (4b)\nTest.assertEquals(removePunctuation(\" The Elephant's 4 cats. \"),\n 'the elephants 4 cats',\n 'incorrect definition for removePunctuation function')", "(4c) Load a text file \nFor the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. 
Since the file is large we use take(15), so that we only print 15 lines.", "# Just run this code\nimport os.path\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nshakespeareRDD = (sc\n .textFile(fileName, 8)\n .map(removePunctuation))\nprint '\\n'.join(shakespeareRDD\n .zipWithIndex() # to (line, lineNum)\n .map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'\n .take(15))", "(4d) Words from lines \nBefore we can use the wordcount() function, we have to address two issues with the format of the RDD:\n\n\nThe first issue is that that we need to split each line by its spaces.\n\n\nThe second issue is we need to filter out empty lines.\n\n\nApply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be.", "# TODO: Replace <FILL IN> with appropriate code\nshakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(\" \"))\nshakespeareWordCount = shakespeareWordsRDD.count()\nprint shakespeareWordsRDD.top(5)\nprint shakespeareWordCount\n\n# TEST Words from lines (4d)\n# This test allows for leading spaces to be removed either before or after\n# punctuation is removed.\nTest.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,\n 'incorrect value for shakespeareWordCount')\nTest.assertEquals(shakespeareWordsRDD.top(5),\n [u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],\n 'incorrect value for shakespeareWordsRDD')", "(4e) Remove empty elements \nThe next step is to filter out the empty elements. Remove all entries where the word is ''.", "# TODO: Replace <FILL IN> with appropriate code\nshakeWordsRDD = shakespeareWordsRDD.filter(lambda a: a != \"\")\nshakeWordCount = shakeWordsRDD.count()\nprint shakeWordCount\n\n# TEST Remove empty elements (4e)\nTest.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')", "(4f) Count the words \nWe now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.\nYou'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.\nUse the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.", "# TODO: Replace <FILL IN> with appropriate code\ntop15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda(a,b): -b)\nprint '\\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))\n\n# TEST Count the words (4f)\nTest.assertEquals(top15WordsAndCounts,\n [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),\n (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),\n (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],\n 'incorrect value for top15WordsAndCounts')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/federated
docs/tutorials/composing_learning_algorithms.ipynb
apache-2.0
[ "Copyright 2022 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Composing Learning Algorithms\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/composing_learning_algorithms\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/composing_learning_algorithms.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/composing_learning_algorithms.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/composing_learning_algorithms.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nBefore you start\nBefore you start, please run the following to make sure that your environment is\ncorrectly setup. If you don't see a greeting, please refer to the\nInstallation guide for instructions.", "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow-federated\n!pip install --quiet --upgrade nest-asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()\n\nfrom typing import Callable\n\nimport tensorflow as tf\nimport tensorflow_federated as tff", "NOTE: This colab has been verified to work with the latest released version of the tensorflow_federated pip package, but the Tensorflow Federated project is still in pre-release development and may not work on main.\nComposing Learning Algorithms\nThe Building Your Own Federated Learning Algorithm Tutorial used TFF's federated core to directly implement a version of the Federated Averaging (FedAvg) algorithm.\nIn this tutorial, you will use federated learning components in TFF's API to build federated learning algorithms in a modular manner, without having to re-implement everything from scratch.\nFor the purposes of this tutorial, you will implement a variant of FedAvg that employs gradient clipping through local training.\nLearning Algorithm Building Blocks\nAt a high level, many learning algorithms can be separated into 4 separate components, referred to as building blocks. These are as follows:\n\nDistributor (ie. server-to-client communication)\nClient work (ie. local client computation)\nAggregator (ie. client-to-server communication)\nFinalizer (ie. server computation using aggregated client outputs)\n\nWhile the Building Your Own Federated Learning Algorithm Tutorial implemented all of these building blocks from scratch, this is often unnecessary. 
Instead, you can re-use building blocks from similar algorithms.\nIn this case, to implement FedAvg with gradient clipping, you only need to modify the client work building block. The remaining blocks can be identical to what is used in \"vanilla\" FedAvg.\nImplementing the Client Work\nFirst, let's write TF logic that does local model training with gradient clipping. For simplicity, gradients will be clipped have norm at most 1.\nTF Logic", "@tf.function\ndef client_update(model: tff.learning.Model,\n dataset: tf.data.Dataset,\n server_weights: tff.learning.ModelWeights,\n client_optimizer: tf.keras.optimizers.Optimizer):\n \"\"\"Performs training (using the server model weights) on the client's dataset.\"\"\"\n # Initialize the client model with the current server weights.\n client_weights = tff.learning.ModelWeights.from_model(model)\n tf.nest.map_structure(lambda x, y: x.assign(y),\n client_weights, server_weights)\n\n # Use the client_optimizer to update the local model.\n # Keep track of the number of examples as well.\n num_examples = 0.0\n for batch in dataset:\n with tf.GradientTape() as tape:\n # Compute a forward pass on the batch of data\n outputs = model.forward_pass(batch)\n num_examples += tf.cast(outputs.num_examples, tf.float32)\n\n # Compute the corresponding gradient\n grads = tape.gradient(outputs.loss, client_weights.trainable)\n\n # Compute the gradient norm and clip\n gradient_norm = tf.linalg.global_norm(grads)\n if gradient_norm > 1:\n grads = tf.nest.map_structure(lambda x: x/gradient_norm, grads)\n\n grads_and_vars = zip(grads, client_weights.trainable)\n\n # Apply the gradient using a client optimizer.\n client_optimizer.apply_gradients(grads_and_vars)\n\n # Compute the difference between the server weights and the client weights\n client_update = tf.nest.map_structure(tf.subtract,\n client_weights.trainable,\n server_weights.trainable)\n\n return tff.learning.templates.ClientResult(\n update=client_update, update_weight=num_examples)", "There are a few important points about the code above. First, it keeps track of the number of examples seen, as this will constitute the weight of the client update (when computing an average across clients).\nSecond, it uses tff.learning.templates.ClientResult to package the output. This return type is used to standardize client work building blocks in tff.learning.\nCreating a ClientWorkProcess\nWhile the TF logic above will do local training with clipping, it still needs to be wrapped in TFF code in order to create the necessary building block.\nSpecifically, the 4 building blocks are represented as a tff.templates.MeasuredProcess. This means that all 4 blocks have both an initialize and next function used to instantiate and run the computation.\nThis allows each building block to keep track of its own state (stored at the server) as needed to perform its operations. While it will not be used in this tutorial, it can be used for things like tracking how many iterations have occurred, or keeping track of optimizer states.\nClient work TF logic should generally be wrapped as a tff.learning.templates.ClientWorkProcess, which codifies the expected types going into and out of the client's local training. 
It can be parameterized by a model and optimizer, as below.", "def build_gradient_clipping_client_work(\n model_fn: Callable[[], tff.learning.Model],\n optimizer_fn: Callable[[], tf.keras.optimizers.Optimizer],\n) -> tff.learning.templates.ClientWorkProcess:\n \"\"\"Creates a client work process that uses gradient clipping.\"\"\"\n\n with tf.Graph().as_default():\n # Wrap model construction in a graph to avoid polluting the global context\n # with variables created for this model.\n model = model_fn()\n data_type = tff.SequenceType(model.input_spec)\n model_weights_type = tff.learning.framework.weights_type_from_model(model)\n\n @tff.federated_computation\n def initialize_fn():\n return tff.federated_value((), tff.SERVER)\n\n @tff.tf_computation(model_weights_type, data_type)\n def client_update_computation(model_weights, dataset):\n model = model_fn()\n optimizer = optimizer_fn()\n return client_update(model, dataset, model_weights, optimizer)\n\n @tff.federated_computation(\n initialize_fn.type_signature.result,\n tff.type_at_clients(model_weights_type),\n tff.type_at_clients(data_type)\n )\n def next_fn(state, model_weights, client_dataset):\n client_result = tff.federated_map(\n client_update_computation, (model_weights, client_dataset))\n # Return empty measurements, though a more complete algorithm might\n # measure something here.\n measurements = tff.federated_value((), tff.SERVER)\n return tff.templates.MeasuredProcessOutput(state, client_result,\n measurements)\n return tff.learning.templates.ClientWorkProcess(\n initialize_fn, next_fn)", "Composing a Learning Algorithm\nLet's put the client work above into a full-fledged algorithm. First, let's set up our data and model.\nPreparing the input data\nLoad and preprocess the EMNIST dataset included in TFF. For more details, see the image classification tutorial.", "emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()", "In order to feed the dataset into our model, the data is flattened and converted into tuples of the form (flattened_image_vector, label).\nLet's select a small number of clients, and apply the preprocessing above to their datasets.", "NUM_CLIENTS = 10\nBATCH_SIZE = 20\n\ndef preprocess(dataset):\n\n def batch_format_fn(element):\n \"\"\"Flatten a batch of EMNIST data and return a (features, label) tuple.\"\"\"\n return (tf.reshape(element['pixels'], [-1, 784]), \n tf.reshape(element['label'], [-1, 1]))\n\n return dataset.batch(BATCH_SIZE).map(batch_format_fn)\n\nclient_ids = sorted(emnist_train.client_ids)[:NUM_CLIENTS]\nfederated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x))\n for x in client_ids\n]", "Preparing the model\nThis uses the same model as in the image classification tutorial. This model (implemented via tf.keras) has a single hidden layer, followed by a softmax layer. In order to use this model in TFF, Keras model is wrapped as a tff.learning.Model. This allows us to perform the model's forward pass within TFF, and extract model outputs. 
For more details, also see the image classification tutorial.", "def create_keras_model():\n initializer = tf.keras.initializers.GlorotNormal(seed=0)\n return tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer=initializer),\n tf.keras.layers.Softmax(),\n ])\n\ndef model_fn():\n keras_model = create_keras_model()\n return tff.learning.from_keras_model(\n keras_model,\n input_spec=federated_train_data[0].element_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])", "Preparing the optimizers\nJust as in tff.learning.build_federated_averaging_process, there are two optimizers here: A client optimizer, and a server optimizer. For simplicity, the optimizers will be SGD with different learning rates.", "client_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=0.01)\nserver_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=1.0)", "Defining the building blocks\nNow that the client work building block, data, model, and optimizers are set up, it remains to create building blocks for the distributor, the aggregator, and the finalizer. This can be done just by borrowing some defaults available in TFF and that are used by FedAvg.", "@tff.tf_computation()\ndef initial_model_weights_fn():\n return tff.learning.ModelWeights.from_model(model_fn())\n\nmodel_weights_type = initial_model_weights_fn.type_signature.result\n\ndistributor = tff.learning.templates.build_broadcast_process(model_weights_type)\nclient_work = build_gradient_clipping_client_work(model_fn, client_optimizer_fn)\n\n# TFF aggregators use a factory pattern, which create an aggregator\n# based on the output type of the client work. This also uses a float (the number\n# of examples) to govern the weight in the average being computed.)\naggregator_factory = tff.aggregators.MeanFactory()\naggregator = aggregator_factory.create(model_weights_type.trainable,\n tff.TensorType(tf.float32))\nfinalizer = tff.learning.templates.build_apply_optimizer_finalizer(\n server_optimizer_fn, model_weights_type)", "Composing the building blocks\nFinally, you can use a built-in composer in TFF for putting the building blocks together. This one is a relatively simple composer, which takes the 4 building blocks above and wires their types together.", "fed_avg_with_clipping = tff.learning.templates.compose_learning_process(\n initial_model_weights_fn,\n distributor,\n client_work,\n aggregator,\n finalizer\n)", "Running the algorithm\nNow that the algorithm is done, let's run it. First, initialize the algorithm. The state of this algorithm has a component for each building block, along with one for the global model weights.", "state = fed_avg_with_clipping.initialize()\n\nstate.client_work", "As expected, the client work has an empty state (remember the client work code above!). However, other building blocks may have non-empty state. For example, the finalizer keeps track of how many iterations have occurred. Since next has not been run yet, it has a state of 0.", "state.finalizer", "Now run a training round.", "learning_process_output = fed_avg_with_clipping.next(state, federated_train_data)", "The output of this (tff.learning.templates.LearningProcessOutput) has both a .state and .metrics output. 
Let's look at both.", "learning_process_output.state.finalizer", "Clearly, the finalizer state has incremented by one, as one round of .next has been run.", "learning_process_output.metrics", "While the metrics are empty, for more complex and practical algorithms they'll generally be full of useful information.\nConclusion\nBy using the building block/composers framework above, you can create entirely new learning algorithms, without having to re-do everything from scratch. However, this is only the starting point. This framework makes it much easier to express algorithms as simple modifications of FedAvg. For more algorithms, see tff.learning.algorithms, which contains algorithms such as FedProx and FedAvg with client learning rate scheduling. These APIs can even aid implementations of entirely new algorithms, such as federated k-means clustering." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ireapps/cfj-2017
exercises/20. Exercise - Web scraping-working.ipynb
mit
[ "Let's scrape some death row data\nTexas executes a lot of criminals, and it has a web page that keeps track of people on its death row.\nUsing what you've learned so far, let's scrape this table into a CSV. Then we're going write a function to grab a couple pieces of additional data from the inmates' detail pages.\nImport our libraries\nFetch and parse the summary page", "# the URL to request\n\n\n# get that page\n\n\n# turn the page text into soup\n\n\n# find the table of interest\n", "Loop over the table rows and write to CSV", "# find all table rows (skip the first one)\n\n\n# open a file to write to\n\n \n # create a writer object\n\n \n # write header row\n\n\n # loop over the rows\n\n \n # extract the cells\n\n \n # offense ID\n\n \n # link to detail page\n\n \n # last name\n\n \n # first name\n\n \n # dob\n\n \n # sex\n\n \n # race\n\n \n # date received\n\n \n # county\n\n \n # offense date\n\n \n # write out to file\n\n ", "Let's write a parsing function\nWe need a function that will take a URL of a detail page and do these things:\n\nOpen the detail page URL using requests\nParse the contents using BeautifulSoup\nIsolate the bits of information we're interested in: height, weight, eye color, hair color, native county, native state, link to mugshot\nReturn those bits of information in a dictionary\n\nA couple things to keep in mind: Not every inmate will have every piece of data. Also, not every inmate has an HTML detail page to parse -- the older ones are a picture. So we'll need to work around those limitations.\nWe shall call our function fetch_details().", "\n \"\"\"Fetch details from a death row inmate's page.\"\"\"\n\n # create a dictionary with some default values\n # as we go through, we're going to add stuff to it\n # (if you want to explore further, there is actually\n # a special kind of dictionary called a \"defaultdict\" to\n # handle this use case) =>\n # https://docs.python.org/3/library/collections.html#collections.defaultdict\n\n\n \n # partway down the page, the links go to JPEGs instead of HTML pages\n # we can't parse images, so we'll just return the empty dictionary\n\n \n # get the page\n\n \n # soup the HTML\n\n\n # find the table of info\n\n \n # target the mugshot, if it exists\n\n \n # if there is a mug, grab the src and add it to the dictionary\n\n\n\n \n # get a list of the \"label\" cells\n # on some pages, they're identified by the class 'tabledata_bold_align_right_deathrow'\n # on others, they're identified by the class 'tabledata_bold_align_right_unit'\n # so we pass it a list of possible classes\n\n\n # gonna do some fanciness here in the interests of DRY =>\n # a list of attributes we're interested in -- should match exactly the text inside the cells of interest\n\n\n # loop over the list of label cells that we targeted earlier\n\n \n\n \n # check to see if the cell text is in our list of attributes\n\n \n # if so, find the value -- go up to the tr and search for the other td --\n # and add that attribute to our dictionary\n\n \n\n\n # return the dictionary to the script\n", "Putting it all together\nNow that we have our parsing function, we can:\n\nOpen and read the CSV files of summary inmate info (the one we just scraped)\nOpen and write a new CSV file of detailed inmate info\n\nAs we loop over the summary inmate data, we're going to call our new parsing function on the detail URL in each row. 
Then we'll combine the dictionaries (data from the row of summary data + new detailed data) and write out to the new file.", "# open the CSV file to read from and the one to write to\n\n \n # create a reader object\n\n \n # the output headers are going to be the headers from the summary file\n # plus a list of new attributes\n\n\n # create the writer object\n\n \n # write the header row\n\n \n # loop over the rows in the input file\n\n \n # print the inmate's name (so we can keep track of where we're at)\n # helps with debugging, too\n\n \n # call our function on the URL in the row\n\n \n # add the two dicts together by\n # unpacking them inside a new one\n # and write out to file\n\n \n\n \n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nvergos/DAT-ATX-1_Project
Notebooks/3. Dimensionality Reduction.ipynb
mit
[ "DAT-ATX-1 Capstone Project\n3. Dimensionality Reduction\nFor the final part of this project, we will extend our study of text classification. Using Principal Component Analysis and Truncated Singular Value Decomposition (methods for dimensionality reduction) we will attempt to replicate the same quality of modeling with a fraction of the features.\nThe outline of the procedure we are going to follow is:\n\nTurn a corpus of text documents (restaurant names, street addresses) into feature vectors using a Bag of Words representation,\nWe will apply Principal Component Analysis to decompose the feature vectors into \"simpler,\" meaningful pieces.\nDimensionality reduction is frequently performed as a pre-processing step before another learning algorithm is applied.\n\nMotivations\n\nThe number of features in our dataset can be difficult to manage, or even misleading (e.g. if the relationships are actually simpler than they appear).\nreduce computational expense\nreduce susceptibility to overfitting\nreduce noise in the dataset\nenhance our intuition\n\n0. Import libraries & packages", "import warnings\nwarnings.filterwarnings('ignore')\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom scipy import stats\n\nimport seaborn as sns\nsns.set(rc={\"axes.labelsize\": 15});\n\n# Some nice default configuration for plots\nplt.rcParams['figure.figsize'] = 10, 7.5;\nplt.rcParams['axes.grid'] = True;\nplt.gray();", "1. Import dataset", "#Reading the dataset in a dataframe using Pandas\ndf = pd.read_csv(\"data.csv\") \n\n#Print first observations\ndf.head() \n\ndf.columns", "Our first collection of feature vectors will come from the Restaurant_Name column. We are still trying to predict whether a restaurant falls under the \"pristine\" category (Grade A, score greater than 90) or not. We could also try to see whether we could predict a restaurant's grade (A, B, C or F)\n2. Dimensionality Reduction Techniques\nRestaurant Names as a Bag-of-words model", "from sklearn.feature_extraction.text import CountVectorizer\n\n# Turn the text documents into vectors\n\nvectorizer = CountVectorizer(min_df=1, stop_words=\"english\")\n\nX = vectorizer.fit_transform(df['Restaurant_Name']).toarray()\ny = df['Letter_Grade']\n\ntarget_names = y.unique()\n\n\n# Train/Test split and cross validation:\n\nfrom sklearn import cross_validation\n\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, train_size = 0.8)\n\nX_train.shape", "Even though we do not have more features (3430) than rows of data (14888), we can still attempt to reduce the feature space by using Truncated SVD:\nTruncated Singular Value Decomposition for Dimensionality Reduction\nOnce we have extracted a vector representation of the data, it's a good idea to project the data on the first 2D of a Singular Value Decomposition (i.e.. Principal Component Analysis) to get a feel of the data. Note that the TruncatedSVD class can accept scipy.sparse matrices as input (as an alternative to numpy arrays). 
We will use it to visualize the first two principal components of the vectorized dataset.", "from sklearn.decomposition import TruncatedSVD\n\nsvd_two = TruncatedSVD(n_components=2, random_state=42)\n\nX_train_svd = svd_two.fit_transform(X_train)\n\npc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame\nsns.pairplot(pc_df, diag_kind='kde');\n\n# Percentage of variance explained for each component\n\ndef pca_summary(pca):\n return pd.DataFrame([np.sqrt(pca.explained_variance_), \n pca.explained_variance_ratio_, \n pca.explained_variance_ratio_.cumsum()],\n index = [\"Standard deviation\", \"Proportion of Variance\", \"Cumulative Proportion\"], \n columns = (map(\"PC{}\".format, range(1, len(pca.components_)+1))))\n\npca_summary(svd_two)\n\n# Only 3.5% of the variance is explained in the data\nsvd_two.explained_variance_ratio_.sum()\n\nfrom itertools import cycle\n\ndef plot_PCA_2D(data, target, target_names):\n colors = cycle('rgbcmykw')\n target_ids = range(len(target_names))\n plt.figure()\n for i, c, label in zip(target_ids, colors, target_names):\n plt.scatter(data[target == i, 0], data[target == i, 1],\n c=c, label=label)\n plt.legend()\n\nplot_PCA_2D(X_train_svd, y_train, target_names)", "This must be the most uninformative plot in the history of plots. Obviously 2 principal components aren't enough. Let's try with 100:", "# Now, let's try with 100 components to see how much it explains\nsvd_hundred = TruncatedSVD(n_components=100, random_state=42)\nX_train_svd_hundred = svd_hundred.fit_transform(X_train)\n\n# 43.7% of the variance is explained in the data for 100 dimensions\n# This is mostly due to the High dimension of data and sparcity of the data\nsvd_hundred.explained_variance_ratio_.sum()\n\nplt.figure(figsize=(10, 7))\nplt.bar(range(100), svd_hundred.explained_variance_)", "Is it worth it to keep adding dimensions? Recall that we started with a 3430-dimensional feature space which we have already reduced to 100 dimensions, and according to the graph above each dimension over the 100th one will be adding less than 0.5% in our explanation of the variance. Let us try once more with 300 dimensions, to see if we can get something respectably over 50% (so we can be sure we are doing better than a coin toss)", "svd_sparta = TruncatedSVD(n_components=300, random_state=42)\n\nX_train_svd_sparta = svd_sparta.fit_transform(X_train)\n\nX_test_svd_sparta = svd_sparta.fit_transform(X_test)\n\nsvd_sparta.explained_variance_ratio_.sum()", "66.2% of the variance is explained through our model. 
This is quite respectable.", "plt.figure(figsize=(10, 7))\nplt.bar(range(300), svd_sparta.explained_variance_)\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn import cross_validation\nfrom sklearn.naive_bayes import MultinomialNB\n\n# Fit a classifier on the training set\n\nclassifier = MultinomialNB().fit(np.absolute(X_train_svd_sparta), y_train)\nprint(\"Training score: {0:.1f}%\".format(\n classifier.score(X_train_svd_sparta, y_train) * 100))\n\n# Evaluate the classifier on the testing set\n\nprint(\"Testing score: {0:.1f}%\".format(\n classifier.score(X_test_svd_sparta, y_test) * 100))", "Restaurant Streets as a Bag-of-words model", "streets = df['Geocode'].apply(pd.Series)\n\nstreets = df['Geocode'].tolist()\n\nsplit_streets = [i.split(' ', 1)[1] for i in streets]\n\nsplit_streets = [i.split(' ', 1)[1] for i in split_streets]\n\nsplit_streets = [i.split(' ', 1)[0] for i in split_streets]\n\nsplit_streets[0]\n\nimport re\nshortword = re.compile(r'\\W*\\b\\w{1,3}\\b')\n\nfor i in range(len(split_streets)):\n split_streets[i] = shortword.sub('', split_streets[i])\n\n# Create a new column with the street:\ndf['Street_Words'] = split_streets\n\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# Turn the text documents into vectors\n\nvectorizer = CountVectorizer(min_df=1, stop_words=\"english\")\n\nX = vectorizer.fit_transform(df['Street_Words']).toarray()\ny = df['Letter_Grade']\n\ntarget_names = y.unique()\n\n# Train/Test split and cross validation:\n\nfrom sklearn import cross_validation\n\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, train_size = 0.8)\n\nX_train.shape\n\nfrom sklearn.decomposition import TruncatedSVD\n\nsvd_two = TruncatedSVD(n_components=2, random_state=42)\n\nX_train_svd = svd_two.fit_transform(X_train)\n\npc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame\nsns.pairplot(pc_df, diag_kind='kde');\n\npca_summary(svd_two)\n\n# 25% of the variance is explained in the data when we use only TWO principal components!\nsvd_two.explained_variance_ratio_.sum()\n\n# Now, let's try with 10 components to see how much it explains\nsvd_ten = TruncatedSVD(n_components=10, random_state=42)\nX_train_svd_ten = svd_ten.fit_transform(X_train)\n\n# 53.9% of the variance is explained in the data for 10 dimensions\n# This is mostly due to the High dimension of data and sparcity of the data\nsvd_ten.explained_variance_ratio_.sum()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mlperf/training_results_v0.5
v0.5.0/google/cloud_v3.8/resnet-tpuv3-8/code/resnet/model/models/samples/outreach/demos/eager_execution.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Eager execution\nNote: you can run this notebook, live in Google Colab with zero setup. \nTensorFlow Dev Summit, 2018.\nThis interactive notebook demonstrates eager execution, TensorFlow's imperative, NumPy-like front-end for machine learning.\n\n\n\nTable of Contents.\n1. Enabling eager execution!\n2. A NumPy-like library for numerical computation and machine learning. Case study: Fitting a huber regression.\n3. Neural networks. Case study: Training a multi-layer RNN.\n4. Exercises: Batching; debugging.\n5. Further reading\n1. Enabling eager execution!\nA single function call is all you need to enable eager execution: tf.enable_eager_execution(). You should invoke this function before calling into any other TensorFlow APIs --- the simplest way to satisfy this requirement is to make tf.enable_eager_execution() the first line of your main function.", "!pip install -q -U tf-nightly\n\nimport tensorflow as tf\n\ntf.enable_eager_execution()", "2. A NumPy-like library for numerical computation and machine learning\nEnabling eager execution transforms TensorFlow into an imperative library for numerical computation, automatic differentiation, and machine learning. When executing eagerly, TensorFlow no longer behaves like a dataflow graph engine: Tensors are backed by NumPy arrays (goodbye, placeholders!), and TensorFlow operations execute immediately via Python (goodbye, sessions!).\nNumpy-like usage\nTensors are backed by numpy arrays, which are accessible via their .numpy()\nmethod.", "A = tf.constant([[2.0, 0.0], [0.0, 3.0]])\n\nimport numpy as np\n\nprint(\"Tensors are backed by NumPy arrays, which are accessible through their \"\n \"`.numpy()` method:\\n\", A)\nassert(type(A.numpy()) == np.ndarray)\nprint(\"\\nOperations (like `tf.matmul(A, A)`) execute \"\n \"immediately (no more Sessions!):\\n\", tf.matmul(A, A))", "Tensors behave similarly to NumPy arrays, but they don't behave exactly the\nsame. \nFor example, the equals operator on Tensors compares objects. 
Use\ntf.equal to compare values.", "print(\"\\nTensors behave like NumPy arrays: you can iterate over them and \"\n \"supply them as inputs to most functions that expect NumPy arrays:\")\nfor i, row in enumerate(A):\n for j, entry in enumerate(row):\n print(\"A[%d, %d]^2 == %d\" % (i, j, np.square(entry)))", "Variables and Gradients\nCreate variables with tf.contrib.eager.Variable, and use tf.GradientTape\nto compute gradients with respect to them.", "import tensorflow.contrib.eager as tfe\nw = tfe.Variable(3.0)\nwith tf.GradientTape() as tape:\n loss = w ** 2\ndw, = tape.gradient(loss, [w])\nprint(\"\\nYou can use `tf.GradientTape` to compute the gradient of a \"\n \"computation with respect to a list of `tf.contrib.eager.Variable`s;\\n\"\n \"for example, `tape.gradient(loss, [w])`, where `loss` = w ** 2 and \"\n \"`w` == 3.0, yields`\", dw,\"`.\")", "GPU usage\nEager execution lets you offload computation to hardware accelerators like\nGPUs, if you have any available.", "if tf.test.is_gpu_available():\n with tf.device(tf.test.gpu_device_name()):\n B = tf.constant([[2.0, 0.0], [0.0, 3.0]])\n print(tf.matmul(B, B))", "Fitting a Huber regression\nIf you come from a scientific or numerical computing background, eager execution should feel natural to you. Not only does it stand on its own as an accelerator-compatible library for numerical computation, it also interoperates with popular Python packages like NumPy and Matplotlib. To demonstrate this fact, in this section, we fit and evaluate a regression using a Huber regression, writing our code in a NumPy-like way and making use of Python control flow.\nData generation\nOur dataset for this example has many outliers โ€” least-squares would be a poor choice.", "import matplotlib.pyplot as plt\n\ndef gen_regression_data(num_examples=1000, p=0.2):\n X = tf.random_uniform(shape=(num_examples,), maxval=50)\n w_star = tf.random_uniform(shape=(), maxval=10)\n b_star = tf.random_uniform(shape=(), maxval=10)\n noise = tf.random_normal(shape=(num_examples,), mean=0.0, stddev=10.0)\n # With probability 1 - p, y := y * -1.\n sign = 2 * np.random.binomial(1, 1 - p, size=(num_examples,)) - 1\n # You can freely mix Tensors and NumPy arrays in your computations:\n # `sign` is a NumPy array, but the other symbols below are Tensors.\n Y = sign * (w_star * X + b_star + noise) \n return X, Y\n\nX, Y = gen_regression_data()\nplt.plot(X, Y, \"go\") # You can plot Tensors!\nplt.title(\"Observed data\")\nplt.show()", "Huber loss\nThe Huber loss function is piecewise function that is quadratic for small inputs and linear otherwise; for that reason, using a Huber loss gives considerably less weight to outliers than least-squares does. 
When eager execution is enabled, we can implement the Huber function in the natural way, using Python control flow.", "def huber_loss(y, y_hat, m=1.0):\n # Enabling eager execution lets you use Python control flow.\n delta = tf.abs(y - y_hat)\n return delta ** 2 if delta <= m else m * (2 * delta - m)", "A simple class for regressions\nThe next cell encapsulates a linear regression model in a Python class and defines a\nfunction that fits the model using a stochastic optimizer.", "import time\n\nfrom google.colab import widgets\nimport tensorflow.contrib.eager as tfe # Needed to create tfe.Variable objects.\n\n\nclass Regression(object):\n def __init__(self, loss_fn):\n super(Regression, self).__init__()\n self.w = tfe.Variable(0.0)\n self.b = tfe.Variable(0.0)\n self.variables = [self.w, self.b]\n self.loss_fn = loss_fn\n \n def predict(self, x):\n return x * self.w + self.b\n \ndef regress(model, optimizer, dataset, epochs=5, log_every=1, num_examples=1000):\n plot = log_every is not None\n if plot:\n # Colab provides several widgets for interactive visualization.\n tb = widgets.TabBar([str(i) for i in range(epochs) if i % log_every == 0])\n X, Y = dataset.batch(num_examples).make_one_shot_iterator().get_next()\n X = tf.reshape(X, (num_examples,))\n Y = tf.reshape(Y, (num_examples,))\n \n for epoch in range(epochs):\n iterator = dataset.make_one_shot_iterator()\n epoch_loss = 0.0\n start = time.time()\n for x_i, y_i in iterator:\n batch_loss_fn = lambda: model.loss_fn(y_i, model.predict(x_i)) \n optimizer.minimize(batch_loss_fn, var_list=model.variables)\n epoch_loss += batch_loss_fn()\n duration = time.time() - start\n if plot and epoch % log_every == 0:\n with tb.output_to(str(epoch)):\n print(\"Epoch %d took %0.2f seconds, resulting in a loss of %0.4f.\" % (\n epoch, duration, epoch_loss))\n plt.plot(X, Y, \"go\", label=\"data\")\n plt.plot(X, model.predict(X), \"b\", label=\"regression\")\n plt.legend()", "Run the following cell to fit the model! Note that enabling eager execution makes it\neasy to visualize your model while training it, using familiar tools like Matplotlib.", "huber_regression = Regression(huber_loss)\ndataset = tf.data.Dataset.from_tensor_slices((X, Y))\nregress(huber_regression,\n optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0001),\n dataset=dataset)", "Debugging and profiling\nEnabling eager execution lets you debug your code on-the-fly; use pdb and print statements to your heart's content.\nCheck out exercise 2 towards the bottom of this notebook for a hands-on look at how eager simplifies model debugging.", "import pdb\n\ndef buggy_loss(y, y_hat):\n pdb.set_trace()\n huber_loss(y, y_hat)\n \nprint(\"Type 'exit' to stop the debugger, or 's' to step into `huber_loss` and \"\n \"'n' to step through it.\")\ntry:\n buggy_loss(1.0, 2.0)\nexcept:\n pass", "Leverage the Python profiler to dig into the relative costs of training your model.\nIf you run the below cell, you'll see that most of the time is spent computing gradients and binary operations, which is sensible considering our loss function.", "import cProfile\nimport pstats\n\nhuber_regression = Regression(huber_loss)\ncProfile.run(\n \"regress(model=huber_regression, \"\n \"optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.001), \"\n \"dataset=dataset, log_every=None)\", \"prof\")\npstats.Stats(\"prof\").strip_dirs().sort_stats(\"cumulative\").print_stats(10)\nprint(\"Most of the time is spent during backpropagation and binary operations.\")", "3. 
Neural networks\nWhile eager execution can certainly be used as a library for numerical computation, it shines as a library for deep learning: TensorFlow provides a suite of tools for deep learning research and development, most of which are compatible with eager execution. In this section, we put some of these tools to use to build RNNColorbot, an RNN that takes as input names of colors and predicts their corresponding RGB tuples. \nConstructing a data pipeline\ntf.data is TensorFlow's canonical API for constructing input pipelines. tf.data lets you easily construct multi-stage pipelines that supply data to your networks during training and inference. The following cells defines methods that download and format the data needed for RNNColorbot; the details aren't important (read them in the privacy of your own home if you so wish), but make sure to run the cells before proceeding.", "import os\nimport six\nfrom six.moves import urllib\n\n\ndef parse(line):\n \"\"\"Parse a line from the colors dataset.\"\"\"\n # `items` is a list [color_name, r, g, b].\n items = tf.string_split([line], \",\").values\n rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.\n color_name = items[0]\n chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)\n length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)\n return rgb, chars, length\n\ndef load_dataset(data_dir, url, batch_size):\n \"\"\"Loads the colors data at path into a PaddedDataset.\"\"\"\n path = tf.keras.utils.get_file(os.path.basename(url), url, cache_dir=data_dir)\n dataset = tf.data.TextLineDataset(path).skip(1).map(parse).shuffle(\n buffer_size=10000).padded_batch(batch_size,\n padded_shapes=([None], [None, None], []))\n return dataset, path\n\ntrain_url = \"https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/train.csv\"\ntest_url = \"https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/test.csv\"\ndata_dir = \"/tmp/rnn/data\"\n\ntrain_data, train_path = load_dataset(data_dir, train_url, batch_size=64)\neval_data, _ = load_dataset(data_dir, test_url, batch_size=64)\n\nimport pandas\npandas.read_csv(train_path).head(10)\n\ncolors, one_hot_chars, lengths = tfe.Iterator(train_data).next()\ncolors[:10].numpy()", "Defining and training a neural network\nTensorFlow packages several APIs for creating neural networks in a modular fashion. The canonical way to define neural networks in TensorFlow is to encapsulate your model in a class that inherits from tf.keras.Model. You should think of tf.keras.Model as a container of object-oriented layers, TensorFlow's building blocks for constructing neural networks (e.g., tf.layers.Dense, tf.layers.Conv2D). Every Layer object that is set as an attribute of a Model is automatically tracked by the latter, letting you access Layer-contained variables by invoking Model's .variables() method. Most important, inheriting from tf.keras.Model makes it easy to checkpoint your model and to subsequently restore it --- more on that later. \nThe following cell exemplifies our high-level neural network APIs. Note that RNNColorbot encapsulates only the model definition and prediction generation logic. 
The loss, training, and evaluation functions exist outside the class definition: conceptually, the model doesn't need know how to train and benchmark itself.", "class RNNColorbot(tf.keras.Model):\n \"\"\"Multi-layer RNN that predicts RGB tuples given color names.\n \"\"\"\n\n def __init__(self):\n super(RNNColorbot, self).__init__()\n self.keep_prob = 0.5\n self.lower_cell = tf.contrib.rnn.LSTMBlockCell(256)\n self.upper_cell = tf.contrib.rnn.LSTMBlockCell(128)\n self.relu = tf.layers.Dense(3, activation=tf.nn.relu, name=\"relu\")\n\n def call(self, inputs, training=False):\n \"\"\"Generates RGB tuples from `inputs`, a tuple (`chars`, `sequence_length`).\n \"\"\"\n (chars, sequence_length) = inputs\n chars = tf.transpose(chars, [1, 0, 2]) # make `chars` time-major\n batch_size = int(chars.shape[1])\n for cell in [self.lower_cell, self.upper_cell]:\n outputs = []\n state = cell.zero_state(batch_size, tf.float32)\n for ch in chars:\n output, state = cell(ch, state)\n outputs.append(output)\n chars = outputs\n if training:\n chars = tf.nn.dropout(chars, self.keep_prob)\n batch_range = [i for i in range(batch_size)]\n indices = tf.stack([sequence_length - 1, batch_range], axis=1)\n hidden_states = tf.gather_nd(chars, indices)\n return self.relu(hidden_states)\n\n\ndef loss_fn(labels, predictions):\n return tf.reduce_mean((predictions - labels) ** 2)\n\ndef train_one_epoch(model, optimizer, train_data, log_every=10):\n iterator = tfe.Iterator(train_data)\n for batch,(labels, chars, sequence_length) in enumerate(iterator):\n with tf.GradientTape() as tape:\n predictions = model((chars, sequence_length), training=True)\n loss = loss_fn(labels, predictions)\n variables = model.variables\n grad = tape.gradient(loss, variables)\n optimizer.apply_gradients([(g, v) for g, v in zip(grad, variables)])\n if log_every and batch % log_every == 0:\n print(\"train/batch #%d\\tloss: %.6f\" % (batch, loss))\n batch += 1\n \ndef test(model, eval_data):\n total_loss = 0.0\n iterator = eval_data.make_one_shot_iterator()\n for labels, chars, sequence_length in tfe.Iterator(eval_data):\n predictions = model((chars, sequence_length), training=False)\n total_loss += loss_fn(labels, predictions)\n print(\"eval/loss: %.6f\\n\" % total_loss)", "The next cell trains our RNNColorbot, restoring and saving checkpoints of the learned variables along the way. Thanks to checkpointing, every run of the below cell will resume training from wherever the previous run left off. For more on checkpointing, take a look at our user guide.", "model = RNNColorbot()\noptimizer = tf.train.AdamOptimizer(learning_rate=.01)\n\n# Create a `Checkpoint` for saving and restoring state; the keywords\n# supplied `Checkpoint`'s constructor are the names of the objects to be saved\n# and restored, and their corresponding values are the actual objects. 
Note\n# that we're saving `optimizer` in addition to `model`, since `AdamOptimizer`\n# maintains state.\nimport tensorflow.contrib.eager as tfe\ncheckpoint = tfe.Checkpoint(model=model, optimizer=optimizer)\ncheckpoint_prefix = \"/tmp/rnn/ckpt\"\n# The next line loads the most recent checkpoint, if any.\ncheckpoint.restore(tf.train.latest_checkpoint(\"/tmp/rnn\"))\nfor epoch in range(4):\n train_one_epoch(model, optimizer, train_data)\n test(model, eval_data)\n checkpoint.save(checkpoint_prefix)\nprint(\"Colorbot is ready to generate colors!\")", "Paint me a color, Colorbot!\nWe can interact with RNNColorbot in a natural way; no need to thread NumPy arrays into placeholders through feed dicts.\nSo go ahead and ask RNNColorbot to paint you some colors. If they're not to your liking, re-run the previous cell to resume training from where we left off, and then re-run the next one for updated results.", "tb = widgets.TabBar([\"RNN Colorbot\"])\nwhile True:\n with tb.output_to(0):\n try:\n color_name = six.moves.input(\n \"Give me a color name (or press 'enter' to exit): \")\n except (EOFError, KeyboardInterrupt):\n break\n if not color_name:\n break\n _, chars, length = parse(color_name)\n preds, = model((np.expand_dims(chars, 0), np.expand_dims(length, 0)),\n training=False)\n clipped_preds = tuple(min(float(p), 1.0) for p in preds)\n rgb = tuple(int(p * 255) for p in clipped_preds)\n with tb.output_to(0):\n tb.clear_tab()\n print(\"Predicted RGB tuple:\", rgb)\n plt.imshow([[clipped_preds]])\n plt.title(color_name)\n plt.show()", "4. Exercises\nExercise 1: Batching\nExecuting operations eagerly incurs small overheads; these overheads become neglible when amortized over batched operations. In this exercise, we explore the relationship between batching and performance by revisiting our Huber regression example.", "# Our original implementation of `huber_loss` is not compatible with non-scalar\n# data. Your task is to fix that. For your convenience, the original\n# implementation is reproduced below.\n#\n# def huber_loss(y, y_hat, m=1.0):\n# delta = tf.abs(y - y_hat)\n# return delta ** 2 if delta <= m else m * (2 * delta - m)\n#\ndef batched_huber_loss(y, y_hat, m=1.0):\n # TODO: Uncomment out the below code and replace `...` with your solution.\n # Hint: Tensors are immutable.\n # Hint: `tf.where` might be useful.\n delta = tf.abs(y - y_hat)\n # ...\n # ...\n # return ...\n \nregression = Regression(batched_huber_loss)\n\nnum_epochs = 4\nbatch_sizes = [1, 10, 20, 100, 200, 500, 1000]\ntimes = []\n\nX, Y = gen_regression_data(num_examples=1000)\ndataset = tf.data.Dataset.from_tensor_slices((X, Y))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0001)\nfor size in batch_sizes:\n batched_dataset = dataset.batch(size)\n start = time.time()\n regress(model=regression, optimizer=optimizer, dataset=batched_dataset,\n epochs=num_epochs, log_every=None)\n end = time.time()\n times.append((end - start) / num_epochs)\n regression.w.assign(0.0)\n regression.b.assign(0.0)\n \nplt.figure()\nplt.plot(batch_sizes, times, \"bo\")\nplt.xlabel(\"batch size\")\nplt.ylabel(\"time (seconds)\")\nplt.semilogx()\nplt.semilogy()\nplt.title(\"Time per Epoch vs. 
Batch Size\")\nplt.show()", "Solution", "def batched_huber_loss(y, y_hat, m=1.0):\n delta = tf.abs(y - y_hat)\n quadratic = delta ** 2\n linear = m * (2 * delta - m)\n return tf.reduce_mean(tf.where(delta <= m, quadratic, linear))\n \nregression = Regression(batched_huber_loss)\n\nnum_epochs = 4\nbatch_sizes = [2, 10, 20, 100, 200, 500, 1000]\ntimes = []\n\nX, Y = gen_regression_data(num_examples=1000)\ndataset = tf.data.Dataset.from_tensor_slices((X, Y))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0001)\nfor size in batch_sizes:\n batched_dataset = dataset.batch(size)\n start = time.time()\n regress(model=regression, optimizer=optimizer, dataset=batched_dataset,\n epochs=num_epochs, log_every=None)\n end = time.time()\n times.append((end - start) / num_epochs)\n regression.w.assign(0.0)\n regression.b.assign(0.0)\n \nplt.figure()\nplt.plot(batch_sizes, times, \"bo\")\nplt.xlabel(\"batch size\")\nplt.ylabel(\"time (seconds)\")\nplt.semilogx()\nplt.semilogy()\nplt.title(\"Time per Epoch vs. Batch Size\")\nplt.show()", "Exercise 2: Model Debugging\nWe've heard you loud and clear: TensorFlow programs that construct and execute graphs are difficult to debug. By design, enabling eager execution vastly simplifies the process of debugging TensorFlow programs. Once eager execution is enabled, you can step through your models using pdb and bisect them with print statements. The best way to understand the extent to which eager execution simplifies debugging is to debug a model yourself. BuggyModel below has two bugs lurking in it. Execute the following cell, read the error message, and go hunt some bugs!\nHint: As is often the case with TensorFlow programs, both bugs are related to the shapes of Tensors.\nHint: You might find tf.layers.flatten useful.", "class BuggyModel(tf.keras.Model):\n def __init__(self):\n super(BuggyModel, self).__init__()\n self._input_shape = [-1, 28, 28, 1]\n self.conv = tf.layers.Conv2D(filters=32, kernel_size=5, padding=\"same\",\n data_format=\"channels_last\")\n self.fc = tf.layers.Dense(10)\n self.max_pool2d = tf.layers.MaxPooling2D(\n (2, 2), (2, 2), padding=\"same\", data_format=\"channels_last\")\n \n def call(self, inputs):\n y = inputs\n y = self.conv(y)\n y = self.max_pool2d(y)\n return self.fc(y)\n \nbuggy_model = BuggyModel()\ninputs = tf.random_normal(shape=(100, 28, 28))\noutputs = buggy_model(inputs)\nassert outputs.shape == (100, 10), \"invalid output shape: %s\" % outputs.shape", "Solution", "class BuggyModel(tf.keras.Model):\n def __init__(self):\n super(BuggyModel, self).__init__()\n self._input_shape = [-1, 28, 28, 1]\n self.conv = tf.layers.Conv2D(filters=32, kernel_size=5, padding=\"same\",\n data_format=\"channels_last\")\n self.fc = tf.layers.Dense(10)\n self.max_pool2d = tf.layers.MaxPooling2D(\n (2, 2), (2, 2), padding=\"same\", data_format=\"channels_last\")\n \n def call(self, inputs):\n y = tf.reshape(inputs, self._input_shape)\n y = self.conv(y)\n y = self.max_pool2d(y)\n y = tf.layers.flatten(y)\n return self.fc(y)\n \nbuggy_model = BuggyModel()\ninputs = tf.random_normal(shape=(100, 28, 28))\noutputs = buggy_model(inputs)\nassert outputs.shape == (100, 10), \"invalid output shape: %s\" % outputs.shape", "5. 
Further reading\nIf you'd like to learn more about eager execution, consider reading ...\n\nour user guide;\nour collection of example models, which includes a convolutional model for MNIST classification, a GAN, a recursive neural network, and more;\nthis advanced notebook, which explains how to build and execute graphs while eager execution is enabled and how to call into eager execution while constructing a graph, and which also introduces Autograph, a source-code translation tool that automatically generates graph-construction code from dynamic eager code." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ucsd-ccbb/jupyter-genomics
notebooks/networkAnalysis/localization_colocalization_example/localization_colocalization_example.ipynb
mit
[ "Test localization and co-localization of two diseases, using network propagation\nTest on simulated networks, where we can control how localized and co-localized node sets are\nAuthor: Brin Rosenthal (sbrosenthal@ucsd.edu)", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn\nimport networkx as nx\nimport pandas as pd\nimport random\nimport scipy\n\nimport mygene\nmg = mygene.MyGeneInfo()\n\n# latex rendering of text in graphs\nimport matplotlib as mpl\nmpl.rc('text', usetex = False)\nmpl.rc('font', family = 'serif')\n\nimport sys\nsys.path.append('../source')\nimport plotting_results\nimport network_prop\n\nimport imp\nimp.reload(plotting_results)\nimp.reload(network_prop)\n\n\n% matplotlib inline", "First, let's create a random graph\n\nWe will start with the connected Watts Strogatz random graph, created using the NetworkX package. This graph generator will allow us to create random graphs which are guaranteed to be fully connected, and gives us control over how connected the graph is, and how structured it is. Documentation for the function can be found here https://networkx.github.io/documentation/latest/reference/generated/networkx.generators.random_graphs.connected_watts_strogatz_graph.html#networkx.generators.random_graphs.connected_watts_strogatz_graph\n\n<img src=\"screenshots/connected_watts_strogatz_graph_nx_docs.png\" width=\"600\" height=\"600\">\nControl localization\n\nWe can control the localization of nodes by seeding the network propagation with a focal node and that focal node's neighbors. This will guarantee that the seed nodes will be very localized in the graph\nAs a first example, let's create a random network, with two localized sets.\nThe network contains 100 nodes, with each node first connected to its 5 nearest neighbors. \nOnce these first edges are connected, each edge is randomly rewired with probability p = 0.12 (so approximately 12 percent of the edges in the graph will be rewired)\nWith this rewiring probability of 0.12, most of the structure in the graph is maintained, but some randomness has been introduced", "# Create a random connected-Watts-Strogatz graph\nGsim = nx.connected_watts_strogatz_graph(100,5,.12)\n\nseed1 = [0]\nseed1.extend(nx.neighbors(Gsim,seed1[0]))\nseed2 = [10]\nseed2.extend(nx.neighbors(Gsim,seed2[0]))\n#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))\n\npos = nx.spring_layout(Gsim)\n\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)\nplt.grid('off')\n\n#plt.savefig('/Users/brin/Google Drive/UCSD/update_16_03/non_colocalization_illustration.png',dpi=300,bbox_inches='tight')\n", "In the network shown above, we plot our random connected Watts-Strogatz graph, highlighting two localized seed node sets, shown in red and orange, with bold outlines.\n\nThese seed node sets were created by selecting two focal node, and those focal node's neighbors, thus resulting in two node sets which appear highly localized to the eye. 
\n\n\nSince the graph is composed of nearest neighbor relations (with some randomness added on), and it was initiated with node ids ranging from 0 to 99 (these are the default node names- they can be changed using nx.relabel_nodes()), we can control the co-localization of these node sets by selecting seed nodes which are close together, for high co-localization (e.g. 0 and 5), or which are far apart, for low co-localization (e.g. 0 and 50). \n\n\nBelow, we will display node sets with both high and low co-localization\n\nOur ability to control the co-localization in this way will become worse as the rewiring probability increases, and the structure in the graph is destroyed.", "\n# highly co-localized gene sets\nseed1 = [0]\nseed1.extend(nx.neighbors(Gsim,seed1[0]))\nseed2 = [5]\nseed2.extend(nx.neighbors(Gsim,seed2[0]))\n#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))\n\n\nplt.figure(figsize=(12,5))\nplt.subplot(1,2,1)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)\nplt.title('High Co-localization',fontsize=16)\nplt.grid('off')\n\n# low co-localized gene sets\nseed1 = [5]\nseed1.extend(nx.neighbors(Gsim,seed1[0]))\nseed2 = [30]\nseed2.extend(nx.neighbors(Gsim,seed2[0]))\n#seed = list(np.random.choice(Gsim.nodes(),size=6,replace=False))\n\nplt.subplot(1,2,2)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color = 'blue')\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,node_size=120,alpha=.9,node_color='orange',linewidths=3)\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,node_size=120,alpha=.9,node_color='red',linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.1)\nplt.title('Low Co-localization',fontsize=16)\nplt.grid('off')\n", "Can we quantify this concept of localization?\n\nSometimes it's not easy to tell by eye if a node set is localized. \n\nWe can use network propagation simulations to quantify this concept of localization\n\n\nNetwork propagation is a tool which initiates a seed node set with high 'heat', and then over the course of a number of iterations spreads this heat around to nearby nodes. \n\n\nAt the end of the simulation, nodes with the highest heat are those which are most closely related to the seed nodes.\n\n\nWe implemented the network propagation method described in Vanunu et. al. 2010 (Vanunu, Oron, et al. \"Associating genes and protein complexes with disease via network propagation.\" PLoS Comput Biol 6.1 (2010): e1000641.)\n\n\n<img src=\"screenshots/vanunu_abstracg.png\">\nLocalization using network propagation\n\nWe can use network propagation to evaluate how localized a seed node set is in the network.\nIf the seed node set is highly localized, the 'heat' from the network propagation simulation will be bounced around between seed nodes, and less of it will dissipate to distant parts of the network.\nWe will evaluate the distribution of the heat from all the nodes, using the kurtosis (the fourth standardized moment), which measures how 'tailed' the distribution is. If our distribution has high kurtosis, this indicates that much of the 'heat' has stayed localized near the seed set. 
If our distribution has a low kurtosis, this indicates that the 'heat' has not stayed localized, but has diffused to distant parts of the network. \n\n<img src=\"screenshots/kurtosis.png\">\nRandom baseline for comparison\n\nTo evaluate localization in this way, we need a baseline to compare to.\n\nTe establish the baseline we take our original network, and shuffle the edges, while preserving degree (so nodes which originally had 5 neighbors will still have 5 neighbors, although these neighbors will now be spread randomly throughout the graph)\n\n\nFor example, below we show the heat propagation on a non-shuffled graph, from a localized seed set (left), and the heat propagation from the same seed set, on an edge-shuffled graph (right). The nodes on the left and right have the same number of neighbors, but they have different identities. \n\nThe total amount of heat in the graph is conserved in both cases, but the heat distributions look very different- the seed nodes retain much less of their original heat in the edge-shuffled case.\n\n<img src=\"screenshots/L_edge_shuffled.png\">\n\nWe will calculate the kurtosis of the heat distribution over a large number of different edge-shuffled networks (below- 1000 repetitions), to build up the baseline distribution of kurtosis values.", "Wprime_ring = network_prop.normalized_adj_matrix(Gsim)\nFnew_ring = network_prop.network_propagation(Gsim,Wprime_ring,seed1)\nplt.figure(figsize=(18,5))\nplt.subplot(1,3,1)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew_ring[Gsim.nodes()],cmap='jet',\n vmin=0,vmax=max(Fnew_ring))\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)\nvar_ring = plotting_results.nsf(np.var(Fnew_ring),3)\nkurt_ring = plotting_results.nsf(scipy.stats.kurtosis(Fnew_ring),3)\nplt.annotate('kurtosis = ' + str(kurt_ring),\n xy=(.08,.1),xycoords='figure fraction')\nplt.annotate('Heat: original',xy=(.08,.93),xycoords='figure fraction',fontsize=16)\nplt.xticks([],[])\nplt.yticks([],[])\nplt.grid('off')\n\nnum_reps = 1000\nvar_rand_list,kurt_rand_list = [],[]\nfor r in range(num_reps):\n G_temp = nx.configuration_model(Gsim.degree().values())\n G_rand = nx.Graph() # switch from multigraph to digraph\n G_rand.add_edges_from(G_temp.edges())\n G_rand = nx.relabel_nodes(G_rand,dict(zip(range(len(G_rand.nodes())),Gsim.degree().keys())))\n Wprime_rand = network_prop.normalized_adj_matrix(G_rand)\n\n Fnew_rand = network_prop.network_propagation(G_rand,Wprime_rand,seed1)\n \n var_rand_list.append(np.var(Fnew_rand))\n kurt_rand_list.append(scipy.stats.kurtosis(Fnew_rand))\n \n \nplt.subplot(1,3,2)\nnx.draw_networkx_nodes(G_rand,pos=pos,node_size=100,alpha=.5,node_color=Fnew_rand[G_rand.nodes()],cmap='jet',\n vmin=0,vmax=max(Fnew_ring))\nnx.draw_networkx_edges(G_rand,pos=pos,alpha=.2)\nvar_rand = plotting_results.nsf(np.var(Fnew_rand),3)\nkurt_rand = plotting_results.nsf(scipy.stats.kurtosis(Fnew_rand),3)\nplt.annotate('kurtosis = ' + str(kurt_rand),\n xy=(.40,.1),xycoords='figure fraction')\nplt.annotate('Heat: edge-shuffled',xy=(.40,.93),xycoords='figure fraction',fontsize=16)\nplt.xticks([],[])\nplt.yticks([],[])\nplt.grid('off')\n\nplt.subplot(1,3,3)\nplt.boxplot(kurt_rand_list)\nz_score = (kurt_ring-np.mean(kurt_rand_list))/np.std(kurt_rand_list)\nz_score = plotting_results.nsf(z_score,n=2)\nplt.plot(1,kurt_ring,'*',color='darkorange',markersize=16,label='original: \\nz-score = '+ str(z_score))\nplt.annotate('Kurtosis',xy=(.73,.93),xycoords='figure fraction',fontsize=16)\nplt.legend(loc='lower 
left')\n\n\n\n#plt.savefig('/Users/brin/Google Drive/UCSD/update_16_03/localization_NWS_p1_variance.png',dpi=300,bbox_inches='tight')\n\n\n", "Above (right panel) we see that when a node set is highly localized, it has a higher kurtosis value than would be expected from a non-localized gene set (the orange star represents the kurtosis of the heat distribution on the original graph, and the boxplot represents the distribution of 1000 kurtosis values on edge-shuffled networks). The orange star is significantly higher than the baseline distribution.\n\nCo-localization using network propagation\n\n\nWe now build on our understanding of localization using network propagation to establish a measurement of how co-localized two node sets are in a network. \n\n\nIn the first example we discussed (above), we came up with a general understanding of co-localization, where two node sets were co-localized if they were individually localized, and were nearby in network space. \n\nIn order to measure this co-localization using network propagation, we will first seed 2 simulations with each node set, then we will take the dot-product (or the sum of the pairwise product) of the resulting heat vectors.\nWhen node sets are co-localized, there will be more nodes which are hot in both heat vectors (again we compare to a distribution of heat dot-products on degree preserving edge-shuffled graphs)", "seed1 = Gsim.nodes()[0:5] #nx.neighbors(Gsim,Gsim.nodes()[0])\nseed2 = Gsim.nodes()[10:15] #nx.neighbors(Gsim,Gsim.nodes()[5]) #Gsim.nodes()[27:32]\nseed3 = Gsim.nodes()[20:25]\n\nFnew1 = network_prop.network_propagation(Gsim,Wprime_ring,seed1,alpha=.9,num_its=20)\nFnew2 = network_prop.network_propagation(Gsim,Wprime_ring,seed2,alpha=.9,num_its=20)\nF12 = Fnew1*Fnew2\nF12.sort(ascending=False)\n#Fnew1.sort(ascending=False)\n#Fnew2.sort(ascending=False)\nFnew1_norm = Fnew1/np.linalg.norm(Fnew1)\nFnew2_norm = Fnew2/np.linalg.norm(Fnew2)\n\n\ndot_12 = np.sum(F12.head(10))\nprint(dot_12)\n\nplt.figure(figsize=(18,6))\nplt.subplot(1,3,1)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew1[Gsim.nodes()],\n cmap='jet', vmin=0,vmax=max(Fnew1))\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,\n node_size=100,alpha=.9,node_color=Fnew1[seed1],\n cmap='jet', vmin=0,vmax=max(Fnew1),linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)\nplt.grid('off')\nplt.xticks([],[])\nplt.yticks([],[])\nplt.annotate('Heat: nodes A ($H_A$)',xy=(.08,.93),xycoords='figure fraction',fontsize=16)\n\nplt.subplot(1,3,2)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew2[Gsim.nodes()],\n cmap='jet', vmin=0,vmax=max(Fnew1))\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,\n node_size=100,alpha=.9,node_color=Fnew2[seed2],\n cmap='jet', vmin=0,vmax=max(Fnew1),linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)\nplt.grid('off')\nplt.xticks([],[])\nplt.yticks([],[])\nplt.annotate('Heat: nodes B ($H_B$)',xy=(.4,.93),xycoords='figure fraction',fontsize=16)\n\nplt.subplot(1,3,3)\nnx.draw_networkx_nodes(Gsim,pos=pos,node_size=100,alpha=.5,node_color=Fnew1[Gsim.nodes()]*Fnew2[Gsim.nodes()],\n cmap='jet', vmin=0,vmax=max(Fnew1*Fnew2))\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed2,\n node_size=100,alpha=.9,node_color=Fnew1[seed2]*Fnew2[seed2],\n cmap='jet', vmin=0,vmax=max(Fnew1*Fnew2),linewidths=3)\nnx.draw_networkx_nodes(Gsim,pos=pos,nodelist=seed1,\n node_size=100,alpha=.9,node_color=Fnew1[seed1]*Fnew2[seed1],\n cmap='jet', 
vmin=0,vmax=max(Fnew1*Fnew2),linewidths=3)\nnx.draw_networkx_edges(Gsim,pos=pos,alpha=.2)\nplt.grid('off')\nplt.xticks([],[])\nplt.yticks([],[])\nplt.annotate('$H_A \\cdot H_B$',xy=(.73,.93),xycoords='figure fraction',fontsize=16)\n\n", "In the figure above, we show an example of this co-localization concept. \nIn the left panel we show the heat vector of the simulation seeded by node set A (warmer colors indicate hotter nodes, and bold outlines indicate seed nodes)\nIn the middle panel we show the heat vector of the simulation seeded by node set B\n\nThe right panel shows the pairwise product of the heat vectors (note color scale is different for this panel). The nodes between the two seed sets are the hottest, meaning that these are the nodes most likely related to both seed gene sets.\n\n\nIf these node sets are truly co-localized, then the sum of the heat product (the dot product) will be higher than random. This is what we will test below.", "results_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=100,num_genes=5,\n replace=False,savefile=False,alpha=.5,print_flag=False,)\n\nimport scipy\n\nnum_reps = results_dict['num_reps']\n\ndot_sfari_epi=results_dict['sfari_epi']\ndot_sfari_epi_rand=results_dict['sfari_epi_rand']\n#U,p = scipy.stats.mannwhitneyu(dot_sfari_epi,dot_sfari_epi_rand)\nt,p = scipy.stats.ttest_ind(dot_sfari_epi,dot_sfari_epi_rand)\npsig_SE = plotting_results.nsf(p,n=2)\n\nplt.figure(figsize=(7,5))\nplt.errorbar(-.1,np.mean(dot_sfari_epi_rand),2*np.std(dot_sfari_epi_rand)/np.sqrt(num_reps),fmt='o',\n ecolor='gray',markerfacecolor='gray',label='edge-shuffled graph')\nplt.errorbar(0,np.mean(dot_sfari_epi),2*np.std(dot_sfari_epi)/np.sqrt(num_reps),fmt='bo',\n label='original graph')\nplt.xlim(-.8,.5)\nplt.legend(loc='lower left',fontsize=12)\nplt.xticks([0],['A-B \\np='+str(psig_SE)],rotation=45,fontsize=12)\nplt.ylabel('$H_{A} \\cdot H_{B}$',fontsize=18)\n", "In the figure above, we show the heat dot product of node set A and node set B, on the original graph (blue dot), and on 100 edge-shuffled graphs (gray dot with error bars).\nUsing a two sided independent t-test, we find that the dot product on the original graph is significantly higher than on the edge-shuffled graphs, if node sets A and B are indeed co-localized.\n\nCan we control how co-localized two node sets are?\n\nWe can use a parameter in our random graph generator function to control the co-localization of two node sets.\n\nBy varying the rewiring probability, we can move from a graph which is highly structured (low p-rewire: mostly nearest neighbor connections), to a graph which is mostly random (high p-rewire: mostly random connections).\n\n\nIn the following section we will sweet through values of p-rewire, ranging from 0 to 1, and measure the co-localization of identical node sets.", "H12 = []\nH12_rand = []\nnum_G_reps=5\nfor p_rewire in np.linspace(0,1,5):\n print('rewiring probability = ' + str(p_rewire) + '...')\n H12_temp = []\n H12_temp_rand = []\n for r in range(num_G_reps):\n Gsim = nx.connected_watts_strogatz_graph(500,5,p_rewire)\n\n seed1 = Gsim.nodes()[0:5]\n seed2 = Gsim.nodes()[5:10]\n seed3 = Gsim.nodes()[20:30]\n\n\n results_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=20,num_genes=5,\n replace=False,savefile=False,alpha=.5,print_flag=False)\n H12_temp.append(np.mean(results_dict['sfari_epi']))\n H12_temp_rand.append(np.mean(results_dict['sfari_epi_rand']))\n H12.append(np.mean(H12_temp))\n 
H12_rand.append(np.mean(H12_temp_rand))\n \nplt.plot(np.linspace(0,1,5),H12,'r.-',label='original')\nplt.plot(np.linspace(0,1,5),H12_rand,'.-',color='gray',label='edge-shuffled')\nplt.xlabel('link rewiring probability',fontsize=14)\nplt.ylabel('$H_A \\cdot H_B$',fontsize=16)\nplt.legend(loc='upper right',fontsize=12)", "We see above, as expected, that as the rewiring probability increases (on the x-axis), and the graph becomes more random, the heat dot-product (co-localization) decreases (on the y-axis), until the co-localization on the original graph matches the edge-shuffled graph. \nWe expect this to be the case because once p-rewire becomes very high, the original graph becomes essentially random, so not much is changed by shuffling the edges. \n\nThree-way Co-localization\n\nFinally, we will look at how our co-localization using network propagation method applies to three seed node sets instead of two.\nThis could be useful if the user was interested in establishing if one node set provided a link between two other node sets. For example, one might find that two node sets are individually not co-localized, but each is co-localized with a third node set. This third node set would essentially provide the missing link between the two, as illustrated below, where node sets A and C are far apart, but B is close to A, and B is close to C.\n\n<img src=\"screenshots/CL_triangle.png\">", "\nseed1 = Gsim.nodes()[0:5]\nseed2 = Gsim.nodes()[5:10]\nseed3 = Gsim.nodes()[10:15]\n\n\nresults_dict = network_prop.calc_3way_colocalization(Gsim,seed1,seed2,seed3,num_reps=100,num_genes=5,\n replace=False,savefile=False,alpha=.5,print_flag=False,)\n\nimport scipy\n\nnum_reps = results_dict['num_reps']\n\ndot_sfari_epi=results_dict['sfari_epi']\ndot_sfari_epi_rand=results_dict['sfari_epi_rand']\n#U,p = scipy.stats.mannwhitneyu(dot_sfari_epi,dot_sfari_epi_rand)\nt,p = scipy.stats.ttest_ind(dot_sfari_epi,dot_sfari_epi_rand)\npsig_SE = plotting_results.nsf(p,n=2)\n\ndot_sfari_aem=results_dict['sfari_aem']\ndot_aem_sfari_rand=results_dict['aem_sfari_rand']\n#U,p = scipy.stats.mannwhitneyu(dot_sfari_aem,dot_aem_sfari_rand)\nt,p = scipy.stats.ttest_ind(dot_sfari_aem,dot_aem_sfari_rand)\npsig_SA = plotting_results.nsf(p,n=2)\n\ndot_aem_epi=results_dict['aem_epi']\ndot_aem_epi_rand=results_dict['aem_epi_rand']\n#U,p = scipy.stats.mannwhitneyu(dot_aem_epi,dot_aem_epi_rand)\nt,p = scipy.stats.ttest_ind(dot_aem_epi,dot_aem_epi_rand)\npsig_AE = plotting_results.nsf(p,n=2)\n\nplt.figure(figsize=(7,5))\nplt.errorbar(-.1,np.mean(dot_sfari_epi_rand),2*np.std(dot_sfari_epi_rand)/np.sqrt(num_reps),fmt='o',\n ecolor='gray',markerfacecolor='gray')\nplt.errorbar(0,np.mean(dot_sfari_epi),2*np.std(dot_sfari_epi)/np.sqrt(num_reps),fmt='bo')\n\nplt.errorbar(.9,np.mean(dot_aem_sfari_rand),2*np.std(dot_aem_sfari_rand)/np.sqrt(num_reps),fmt='o',\n ecolor='gray',markerfacecolor='gray')\nplt.errorbar(1,np.mean(dot_sfari_aem),2*np.std(dot_sfari_aem)/np.sqrt(num_reps),fmt='ro')\n\nplt.errorbar(1.9,np.mean(dot_aem_epi_rand),2*np.std(dot_aem_epi_rand)/np.sqrt(num_reps),fmt='o',\n ecolor='gray',markerfacecolor='gray')\nplt.errorbar(2,np.mean(dot_aem_epi),2*np.std(dot_aem_epi)/np.sqrt(num_reps),fmt='go')\nplt.xticks([0,1,2],['A-B \\np='+str(psig_SE),'A-C \\np='+str(psig_SA),'B-C\\np='+str(psig_AE)],rotation=45,fontsize=12)\nplt.xlim(-.5,2.5)\nplt.ylabel('$H_{1} \\cdot H_{2}$',fontsize=18)\n", "In the figure above, we show how three-way co-localization looks in practice. 
\nWe have selected three node sets, two of which are distant (A and C), and one which is close to both (B).\nWe find that indeed $H_A\\cdot H_B$ and $H_B\\cdot H_C$ (blue dot and green dot) are much higher on the original graph than on the edge-shuffled graphs.\nHowever, we find that $H_A\\cdot H_C$ is actually lower than the background noise. This tells us that node sets A and C are each individually localized, but not co-localized with each other: more of the heat remains close to each individual seed set than would be the case if the node sets were not individually localized." ]
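The localization test described in the preceding cells can also be reproduced without the notebook's helper modules. The following is a minimal sketch only, assuming networkx, numpy and scipy are installed: `propagate` is a simplified random-walk-with-restart stand-in for `network_prop.network_propagation` (not the actual implementation), and `nx.double_edge_swap` is used as one way to build the degree-preserving edge-shuffled baseline.

```python
# Minimal sketch of the kurtosis z-score test for localization (assumptions noted above).
import numpy as np
import networkx as nx
from scipy.stats import kurtosis

def propagate(G, seeds, alpha=0.5, n_iter=50):
    """Toy heat propagation: F <- alpha * W_norm @ F + (1 - alpha) * F0 (assumes no isolated nodes)."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    W = A / A.sum(axis=0, keepdims=True)              # column-normalized adjacency
    seed_set = set(seeds)
    F0 = np.array([1.0 if n in seed_set else 0.0 for n in nodes])
    F = F0.copy()
    for _ in range(n_iter):
        F = alpha * W.dot(F) + (1 - alpha) * F0
    return F

def kurtosis_zscore(G, seeds, n_shuffles=100, seed=0):
    """Kurtosis of the observed heat vector vs. degree-preserving edge-shuffled baselines."""
    rng = np.random.default_rng(seed)
    k_obs = kurtosis(propagate(G, seeds))
    k_rand = []
    for _ in range(n_shuffles):
        G_shuf = G.copy()
        nx.double_edge_swap(G_shuf, nswap=2 * G.number_of_edges(),
                            max_tries=50 * G.number_of_edges(),
                            seed=int(rng.integers(10**9)))
        k_rand.append(kurtosis(propagate(G_shuf, seeds)))
    return (k_obs - np.mean(k_rand)) / np.std(k_rand)

# A localized seed set on a ring-like graph should give a clearly positive z-score.
G = nx.connected_watts_strogatz_graph(200, 5, 0.05, seed=1)
print(kurtosis_zscore(G, seeds=list(G.nodes())[:5], n_shuffles=50))
```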
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WormLabCaltech/mprsq
src/stats_tutorials/Model Selection.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Generating-synthetic-data\" data-toc-modified-id=\"Generating-synthetic-data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Generating synthetic data</a></div><div class=\"lev1 toc-item\"><a href=\"#Line-fitting-using-Bayes'-theorem\" data-toc-modified-id=\"Line-fitting-using-Bayes'-theorem-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Line fitting using Bayes' theorem</a></div><div class=\"lev1 toc-item\"><a href=\"#Quantifying-the-probability-of-a-fixed-model:\" data-toc-modified-id=\"Quantifying-the-probability-of-a-fixed-model:-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Quantifying the probability of a fixed model:</a></div><div class=\"lev1 toc-item\"><a href=\"#Selecting-between-two-models\" data-toc-modified-id=\"Selecting-between-two-models-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Selecting between two models</a></div><div class=\"lev2 toc-item\"><a href=\"#Different-datasets-will-prefer-different-models\" data-toc-modified-id=\"Different-datasets-will-prefer-different-models-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Different datasets will prefer different models</a></div><div class=\"lev1 toc-item\"><a href=\"#The-larger-the-dataset,-the-more-resolving-power\" data-toc-modified-id=\"The-larger-the-dataset,-the-more-resolving-power-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>The larger the dataset, the more resolving power</a></div>\n\nWelcome to our primer on Bayesian Model Selection. \n\nAs always, we begin by loading our required libraries.", "# important stuff:\nimport os\nimport pandas as pd\nimport numpy as np\nimport statsmodels.tools.numdiff as smnd\nimport scipy\n\n# Graphics\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib import rc\nrc('text', usetex=True)\nrc('text.latex', preamble=r'\\usepackage{cmbright}')\nrc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})\n\n# Magic function to make matplotlib inline;\n# other style specs must come AFTER\n%matplotlib inline\n\n# This enables SVG graphics inline. \n# There is a bug, so uncomment if it works.\n%config InlineBackend.figure_formats = {'png', 'retina'}\n\n# JB's favorite Seaborn settings for notebooks\nrc = {'lines.linewidth': 2, \n 'axes.labelsize': 18, \n 'axes.titlesize': 18, \n 'axes.facecolor': 'DFDFE5'}\nsns.set_context('notebook', rc=rc)\nsns.set_style(\"dark\")\n\nmpl.rcParams['xtick.labelsize'] = 16 \nmpl.rcParams['ytick.labelsize'] = 16 \nmpl.rcParams['legend.fontsize'] = 14", "Generating synthetic data\nFirst, we will generate the data. We will pick evenly spaced x-values. The y-values will be picked according to the equation $y=-\\frac{1}{2}x$ but we will add Gaussian noise to each point. Each y-coordinate will have an associated error. The size of the error bar will be selected randomly.\nAfter we have picked the data, we will plot it to visualize it. It looks like a fairly straight line.", "n = 50 # number of data points\nx = np.linspace(-10, 10, n)\nyerr = np.abs(np.random.normal(0, 2, n))\ny = np.linspace(5, -5, n) + np.random.normal(0, yerr, n)\nplt.scatter(x, y)", "Line fitting using Bayes' theorem\nNow that we have generated our data, we would like to find the line of best fit given our data. To do this, we will perform a Bayesian regression. 
Briefly, Bayes equation is,\n$$\nP(\\alpha~|D, M_1) \\propto P(D~|\\alpha, M_1)P(\\alpha~|M_1).\n$$\nIn other words, the probability of the slope given that Model 1 (a line with unknown slope) and the data is proportional to the probability of the data given the model and alpha times the probability of alpha given the model. \nSome necessary nomenclature at this point:\n * $P(D~|\\alpha, M_1)\\cdot P(\\alpha|M_1)$ is called the posterior probability\n * $P(\\alpha~|M_1)$ is called the prior\n * $P(D~|\\alpha, M_1)$ is called the likelihood\nI claim that a functional form that will allow me to fit a line through this data is:\n$$\nP(X|D) \\propto \\prod_{Data} \\mathrm{exp}(-{\\frac{(y_{Obs} - \\alpha x)^2}{2\\sigma_{Obs}^2}})\\cdot (1 + \\alpha^2)^{-3/2}\n$$\nThe first term in the equation measures the deviation between the observed y-coordinates and the predicted y-coordinates from a theoretical linear model, where $\\alpha$ remains to be determined. We weight the result by the observed error, $\\sigma_{Obs}$. Then, we multiply by a prior that tells us what values of $\\alpha$ should be considered. How to pick a good prior is somewhat difficult and a bit of an artform. One way is to pick a prior that is uninformative for a given parameter. In this case, we want to make sure that we sample slopes between [0,1] as densely as we sample [1,$\\infty$]. For a more thorough derivation and explanation, please see this excellent blog post by Jake Vanderplas.\nThe likelihood is the first term, and the prior is the second. We code it up in the next functions, with a minor difference. It is often computationally much more tractable to compute the natural logarithm of the posterior, and we do so here. \nWe can now use this equation to find the model we are looking for. How? Well, the equation above basically tells us what model is most likely given that data and the prior information on the model. If we maximize the probability of the model, whatever parameter combination can satisfy that is a model that we are interested in!", "# bayes model fitting:\ndef log_prior(theta):\n beta = theta\n return -1.5 * np.log(1 + beta ** 2)\n\ndef log_likelihood(beta, x, y, yerr):\n sigma = yerr\n y_model = beta * x\n return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)\n\ndef log_posterior(theta, x, y, yerr):\n return log_prior(theta) + log_likelihood(theta, x, y, yerr)\n\ndef neg_log_prob_free(theta, x, y, yerr):\n return -log_posterior(theta, x, y, yerr)", "Specificity is necessary for credibility. Let's show that by optimizing the posterior function, we can fit a line.\nWe optimize the line by using the function scipy.optimize.minimize. However, minimizing the logarithm of the posterior does not achieve anything! We are looking for the place at which the equation we derived above is maximal. That's OK. We will simply multiply the logarithm of the posterior by -1 and minimize that.", "# calculate probability of free model:\nres = scipy.optimize.minimize(neg_log_prob_free, 0, args=(x, y, yerr), method='Powell')\n\nplt.scatter(x, y)\nplt.plot(x, x*res.x, '-', color='g')\nprint('The probability of this model is {0:.2g}'.format(np.exp(log_posterior(res.x, x, y, yerr))))\nprint('The optimized probability is {0:.4g}x'.format(np.float64(res.x)))", "We can see that the model is very close to the model we drew the data from. It works! \nHowever, the probability of this model is not very large. Why? 
Well, that's because the posterior probability is spread out over a large number of parameters. Bayesians like to think that a parameter is actually a number plus or minutes some jitter. Therefore, the probability of the parameter being exactly one number is usually smaller the larger the jitter. In thise case, the jitter is not terribly a lot, but the probability of this one parameter being exactly -0.5005 is quite low, even though it is the best guess for the slope given the data. \nQuantifying the probability of a fixed model:\nSuppose now that we had a powerful theoretical tool that allowed us to make a very, very good guess as to what line the points should fall on. Suppose this powerful theory now tells us that the line should be:\n$$\ny = -\\frac{1}{2}x.\n$$\nUsing Bayes' theorem, we could quantify the probability that the model is correct, given the data. Now, the prior is simply going to be 1 when the slope is -0.5, and 0 otherwise. This makes the equation:\n$$\nP(X|D) \\propto \\prod_{Data}\\mathrm{exp}({-\\frac{(y_{Obs} + 0.5x)^2}{2\\sigma_{Obs}}})\n$$\nNotice that this equation cannot be minimized. It is a fixed statement, and its value depends only on the data.", "# bayes model fitting:\ndef log_likelihood_fixed(x, y, yerr):\n sigma = yerr\n y_model = -1/2*x\n\n return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)\n\ndef log_posterior_fixed(x, y, yerr):\n return log_likelihood_fixed(x, y, yerr)\n\nplt.scatter(x, y)\nplt.plot(x, -0.5*x, '-', color='purple')\nprint('The probability of this model is {0:.2g}'.format(np.exp(log_posterior_fixed(x, y, yerr))))", "We can see that the probability of this model is very similar to the probability of the alternative model we fit above. How can we pick which one to use?\nSelecting between two models\nAn initial approach to selecting between these two models would be to take the probability of each model given the data and to find the quotient, like so:\n$$\nOR = \\frac{P(M_1~|D)}{P(M_2~|D)} = \\frac{P(D~|M_1)P(M_1)}{P(D~|M_2)P(M_1)}\n$$\nHowever, this is tricky to evaluate. First of all, the equations we derived above are not solely in terms of $M_1$ and $D$. They also include $\\alpha$ for the undetermined slope model. We can get rid of this parameter via a technique known as marginalization (basically, integrating the equations over $\\alpha$). Even more philosophically difficult are the terms $P(M_i)$. How is one to evaluate the probability of a model being true? The usual solution to this is to set $P(M_i) \\sim 1$ and let those terms cancel out. However, in the case of models that have been tested before or where there is a powerful theoretical reason to believe one is more likely than the other, it may be entirely reasonable to specify that one model is several times more likely than the other. For now, we set the $P(M_i)$ to unity.\nWe can approximate the odds-ratio for our case as follows:\n$$\nOR = \\frac{P(D|\\alpha^)}{P(D|M_2)} \\cdot \\frac{P(\\alpha^|M_1) (2\\pi)^{1/2} \\sigma_\\alpha^*}{1},\n$$\nwhere $\\alpha^$ is the parameter we found when we minimized the probability function earlier. Here, the second term we added represents the complexity of each model. The denominator in the second term is 1 because the fixed model cannot become any simpler. On the other hand, we penalize the model with free slope by multiplying the probability of the observed slope by the square root of two pi and then multiplying all of this by the uncertainty in the parameter $\\alpha$. 
This is akin to saying that the less likely we think $\\alpha$ should be a priori*, or the more uncertain we are that $\\alpha$ is actually a given number, then we should give points to the simpler model.", "def model_selection(X, Y, Yerr, **kwargs):\n guess = kwargs.pop('guess', -0.5)\n\n # calculate probability of free model:\n res = scipy.optimize.minimize(neg_log_prob_free, guess, args=(X, Y, Yerr), method='Powell')\n \n # Compute error bars\n second_derivative = scipy.misc.derivative(log_posterior, res.x, dx=1.0, n=2, args=(X, Y, Yerr), order=3)\n cov_free = -1/second_derivative\n alpha_free = np.float64(res.x)\n log_free = log_posterior(alpha_free, X, Y, Yerr)\n \n # log goodness of fit for fixed models\n log_MAP = log_posterior_fixed(X, Y, Yerr)\n\n good_fit = log_free - log_MAP\n\n # occam factor - only the free model has a penalty\n log_occam_factor =(-np.log(2 * np.pi) + np.log(cov_free)) / 2 + log_prior(alpha_free)\n\n # give more standing to simpler models. but just a little bit!\n lg = log_free - log_MAP + log_occam_factor - 2\n return lg", "We performed the Odds Ratio calculation on logarithmic space, so negative values show that the simpler (fixed slope) model is preferred, whereas if the values are positive and large, the free-slope model is preferred. \nAs a guide, Bayesian statisticians usually suggest that 10^2 or above is a good ratio to abandon one model completely in favor of another.", "model_selection(x, y, yerr)", "Different datasets will prefer different models\nLet's try this again. Maybe the answer will change sign this time.", "n = 50 # number of data points\nx = np.linspace(-10, 10, n)\nyerr = np.abs(np.random.normal(0, 2, n))\ny = x*-0.55 + np.random.normal(0, yerr, n)\nplt.scatter(x, y)\n\nmodel_selection(x, y, yerr)", "Indeed, the answer changed sign. Odds Ratios, p-values and everything else should always be interpreted conservatively. I prefer odds ratios that are very large, larger than 1,000 before stating that one model is definitively preferred. Otherwise, I tend to prefer the simpler model.\nThe larger the dataset, the more resolving power\nWhat distribution of answers would you get if you obtained five points? Ten? Fifteen? I've written a couple of short functions to help us find out.\nIn the functions below, I simulate two datasets. One datasets is being plucked from points that obey the model \n$$\ny = -\\frac{1}{2}x,\n$$\nwhereas the second model is being plucked from\n$$\ny = -0.46x.\n$$\nClearly, the fixed model $y=-0.5x$ should only be preferred for the first dataset, and the free model is the correct one to use for the second model. Now let us find out if this is the case.\nBy the way, the function below trims odds ratios to keep them from becoming too large. 
If an odds ratio is bigger than 10, we set it equal to 10 for plotting purposes.", "def simulate_many_odds_ratios(n):\n \"\"\"\n Given a number `n` of data points, simulate 1,000 data points drawn from a null model and an alternative model and\n compare the odds ratio for each.\n \"\"\"\n iters = 1000\n lg1 = np.zeros(iters)\n lg2 = np.zeros(iters)\n\n for i in range(iters):\n x = np.linspace(-10, 10, n)\n yerr = np.abs(np.random.normal(0, 2, n))\n\n # simulate two models: only one matches the fixed model\n y1 = -0.5*x + np.random.normal(0, yerr, n)\n y2 = -0.46*x + np.random.normal(0, yerr, n)\n\n lg1[i] = model_selection(x, y1, yerr)\n \n m2 = model_selection(x, y2, yerr)\n # Truncate OR for ease of plotting\n if m2 < 10:\n lg2[i] = m2\n else:\n lg2[i] = 10\n \n return lg1, lg2\n\ndef make_figures(n):\n lg1, lg2 = simulate_many_odds_ratios(n)\n \n lg1 = np.sort(lg1)\n lg2 = np.sort(lg2)\n \n fifty_point1 = lg1[int(np.floor(len(lg1)/2))]\n fifty_point2 = lg2[int(np.floor(len(lg2)/2))]\n \n fig, ax = plt.subplots(ncols=2, figsize=(15, 7), sharey=True)\n fig.suptitle('Log Odds Ratio for n={0} data points'.format(n), fontsize=20)\n sns.kdeplot(lg1, label='slope=-0.5', ax=ax[0], cumulative=False)\n ax[0].axvline(x=fifty_point1, ls='--', color='k')\n ax[0].set_title('Data drawn from null model')\n ax[0].set_ylabel('Density')\n\n sns.kdeplot(lg2, label='slope=-0.46', ax=ax[1], cumulative=False)\n ax[1].axvline(x=fifty_point2, ls='--', color='k')\n ax[1].set_title('Data drawn from alternative model')\n fig.text(0.5, 0.04, 'Log Odds Ratio', ha='center', size=18)\n\n return fig, ax\n\nfig, ax = make_figures(n=5)", "Here we can see that with five data points, the odds ratio will tend to prefer the simpler model. We do not have too much information---why request the extra information? Note that for the second dataset in some cases the deviations are great enough that the alternative model is strongly preferred (right panel, extra bump at 10). However, this is rare.", "fig, ax = make_figures(n=50)", "When we increase the number of points we are using, the curve on the right stays fairly similar, but the right panel shows an important shift towards large positive numbers. Sometimes we still prefer the simpler model, but much more often we can tell the difference between a slope of -0.46 and -0.5!" ]
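For readers who want the odds-ratio machinery in one self-contained place, here is a hedged sketch of the same Laplace-approximation calculation without the notebook's helpers. It is an illustration, not a drop-in replacement for model_selection: the curvature is estimated with a plain central difference (avoiding the deprecated scipy.misc.derivative), the ad-hoc "-2" simplicity bonus is left out, and the toy data below are generated randomly, so the printed number will vary from run to run.

```python
# Sketch: log odds ratio of the free-slope model over the fixed slope -0.5 model,
# using a Laplace approximation for the free model's evidence. Assumes numpy + scipy.
import numpy as np
from scipy.optimize import minimize_scalar

def log_prior(beta):
    return -1.5 * np.log(1 + beta ** 2)

def log_like(beta, x, y, yerr):
    return -0.5 * np.sum(np.log(2 * np.pi * yerr ** 2) + (y - beta * x) ** 2 / yerr ** 2)

def log_post(beta, x, y, yerr):
    return log_prior(beta) + log_like(beta, x, y, yerr)

def log_odds_ratio(x, y, yerr, fixed_slope=-0.5, dx=1e-3):
    # MAP estimate of the free slope
    b_map = minimize_scalar(lambda b: -log_post(b, x, y, yerr)).x
    # Curvature of the log posterior at the MAP (central-difference second derivative)
    f = lambda b: log_post(b, x, y, yerr)
    d2 = (f(b_map + dx) - 2 * f(b_map) + f(b_map - dx)) / dx ** 2
    sigma_b = np.sqrt(-1.0 / d2)                      # Gaussian width of the posterior in beta
    # Laplace evidence of the free model: likelihood * prior * sqrt(2*pi) * sigma
    log_evidence_free = (log_like(b_map, x, y, yerr) + log_prior(b_map)
                         + 0.5 * np.log(2 * np.pi) + np.log(sigma_b))
    return log_evidence_free - log_like(fixed_slope, x, y, yerr)

# Toy data drawn from the fixed model; the log odds ratio should then usually be <= 0.
rng = np.random.default_rng(42)
x = np.linspace(-10, 10, 50)
yerr = np.abs(rng.normal(0, 2, 50))
y = -0.5 * x + rng.normal(0, yerr)
print(log_odds_ratio(x, y, yerr))
```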
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pycomlink/pycomlink
notebooks/outdated_notebooks/Spatial interpolation.ipynb
bsd-3-clause
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Load-CML-example-data\" data-toc-modified-id=\"Load-CML-example-data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Load CML example data</a></div><div class=\"lev1 toc-item\"><a href=\"#Do-a-simple-standard-processing-to-get-rain-rates-for-each-CML\" data-toc-modified-id=\"Do-a-simple-standard-processing-to-get-rain-rates-for-each-CML-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Do a simple standard processing to get rain rates for each CML</a></div><div class=\"lev1 toc-item\"><a href=\"#Do-IDW-interpolation-of-CML-rain-rates\" data-toc-modified-id=\"Do-IDW-interpolation-of-CML-rain-rates-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Do IDW interpolation of CML rain rates</a></div><div class=\"lev2 toc-item\"><a href=\"#Initialize-interpolator\" data-toc-modified-id=\"Initialize-interpolator-31\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Initialize interpolator</a></div><div class=\"lev2 toc-item\"><a href=\"#Perform-interpolation-for-all-time-steps\" data-toc-modified-id=\"Perform-interpolation-for-all-time-steps-32\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Perform interpolation for all time steps</a></div>", "%matplotlib inline\n\nimport pycomlink as pycml\n\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm", "Load CML example data\nCoordinates mimic the real network topology but are fake", "cml_list = pycml.io.examples.get_75_cmls()\n\nfig, ax = plt.subplots()\nfor cml in cml_list:\n cml.plot_line(ax=ax, color='k')", "Do a simple standard processing to get rain rates for each CML", "for cml in tqdm(cml_list):\n window_length = 60\n threshold = 1.0\n cml.process.wet_dry.std_dev(window_length=window_length, threshold=threshold)\n cml.process.baseline.linear()\n cml.process.baseline.calc_A()\n cml.process.A_R.calc_R()", "Do IDW interpolation of CML rain rates\nThe ComlinkGridInterpolator takes a PointsToGridInterpolator object as argument, which is used for the interpolation of each time step. You can pass config arguments to the initialization of the PointsToGridInterpolator.\nCurrently only the IDW interpolator IdWKdtreeInterpolator which subclasses PointsToGridInterpolator is available. A Kriging version is already implemented but does not work reliably.\nInitialize interpolator\nresolution is used to generate a grid using a bounding box aroudn all CMLs if no x- and y-grid are supplied.\nCurrently CML rain rates are averaged to hourly data before interpolating.", "cml_interp = pycml.spatial.interpolator.ComlinkGridInterpolator(\n cml_list=cml_list,\n resolution=0.01,\n interpolator=pycml.spatial.interpolator.IdwKdtreeInterpolator())", "Perform interpolation for all time steps", "ds = cml_interp.loop_over_time()\n\nds\n\nfig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(12,12))\n\nfor i, axi in enumerate(ax.flat):\n for cml in cml_list:\n cml.plot_line(ax=axi, color='k')\n \n pc = axi.pcolormesh(ds.lon,\n ds.lat,\n ds.R.isel(time=20+i), \n cmap=plt.get_cmap('BuPu', 8), \n vmin=0,\n vmax=20)\n axi.set_title(cml_interp.df_cmls.index[20+i])\n \nfig.subplots_adjust(right=0.9)\ncbar_ax = fig.add_axes([0.95, 0.15, 0.02, 0.7])\nfig.colorbar(pc, cax=cbar_ax, label='Hourly rainfall sum in mm');", "Calculate CML coverage mask\nCoverage for 0.05 degree coverage around CMLs.\nNote: Calculating coverage using lon-lat and degrees does result in distortions. 
In the future this will be done using an area-preserving reprojection of the lon-lat coordinates before calculating coverage.", "cml_coverage_mask = pycml.spatial.coverage.calc_coverage_mask(\n    cml_list=cml_list, \n    xgrid=ds.lon.values,\n    ygrid=ds.lat.values,\n    max_dist_from_cml=0.05)\n\nfig, ax = plt.subplots()\nfor cml in cml_list:\n    cml.plot_line(ax=ax, color='k')\nax.pcolormesh(ds.lon, ds.lat, cml_coverage_mask, cmap='gray');", "Coverage for 0.1 degree coverage around CMLs.", "cml_coverage_mask = pycml.spatial.coverage.calc_coverage_mask(\n    cml_list=cml_list, \n    xgrid=ds.lon.values,\n    ygrid=ds.lat.values,\n    max_dist_from_cml=0.1)\n\nfig, ax = plt.subplots()\nfor cml in cml_list:\n    cml.plot_line(ax=ax, color='k')\nax.pcolormesh(ds.lon, ds.lat, cml_coverage_mask, cmap='gray');", "Plot CML rainfall sum and apply coverage map", "fig, ax = plt.subplots()\nfor cml in cml_list:\n    cml.plot_line(ax=ax, color='k')\npc = ax.pcolormesh(\n    ds.lon, \n    ds.lat,\n    ds.R.sum(dim='time').where(cml_coverage_mask),\n    cmap=plt.get_cmap('BuPu', 32))\nplt.colorbar(pc, label='rainfall sum in mm');" ]
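To make the interpolation step above less of a black box, here is a rough, self-contained sketch of what a KD-tree based inverse distance weighting (IDW) interpolator does conceptually. It is not the pycomlink IdwKdtreeInterpolator itself, and the neighbour count and power are illustrative defaults.

```python
# Conceptual sketch of KD-tree based inverse distance weighting; assumes numpy + scipy.
import numpy as np
from scipy.spatial import cKDTree

def idw_interpolate(x_pts, y_pts, values, x_grid, y_grid, n_neighbors=8, power=2.0):
    """Interpolate scattered point values onto a grid using the n nearest neighbours."""
    tree = cKDTree(np.column_stack([x_pts, y_pts]))
    grid_xy = np.column_stack([x_grid.ravel(), y_grid.ravel()])
    dist, idx = tree.query(grid_xy, k=n_neighbors)
    dist = np.maximum(dist, 1e-12)                 # avoid division by zero on top of a sample
    weights = 1.0 / dist ** power
    interp = np.sum(weights * values[idx], axis=1) / np.sum(weights, axis=1)
    return interp.reshape(x_grid.shape)

# Toy usage with made-up "CML midpoint" rain rates
rng = np.random.default_rng(0)
x_pts, y_pts = rng.uniform(0, 1, 75), rng.uniform(0, 1, 75)
values = rng.gamma(2.0, 1.0, 75)
xg, yg = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
print(idw_interpolate(x_pts, y_pts, values, xg, yg).shape)
```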
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eds-uga/csci1360-fa16
lectures/L11.ipynb
mit
[ "Lecture 11: Objects and Classes\nCSCI 1360: Foundations for Informatics and Analytics\nOverview and Objectives\nIn this lecture, we'll delve into the realm of \"object-oriented programming,\" or OOP. This is a programming paradigm in which concepts and actions are \"packaged\" using the abstraction of objects: modeling the system after real-world phenomena, both to aid our own understanding of the program and to enforce a good design paradigm. By the end of this lecture, you should be able to:\n\nUnderstand the core concepts of encapsulation and abstraction that make object-oriented programming so powerful\nImplement your own class hierarchy, using inheritance to avoid redundant code\nExplain how Python's OOP mechanism differs from that in other languages, such as Java\n\nPart 1: Object-oriented Programming\nPoll: how many people have programmed in Java or C++?\nHow many have heard of object-oriented programming?\nUp until now (and after this lecture), we've stuck mainly with procedural programming: focusing on the actions.\nObject-oriented programming, by contrast, focuses on the objects.\n\nSort of a \"verbs\" (procedural programming) versus \"nouns\" (object-oriented programming) thing.\nObjects versus Classes\nMain idea: you design objects, usually modeled after real-world constructs, that interact with each other.\nThese designs are called classes. Think of them as a blueprint that detail out the various properties and capabilities of your object.\n\nFrom the designs (classes), you can create an object by instantiating the class, or creating an instance of the class.\nIf the class is the blueprint, the object is the physical manifestation from the blueprint.\nA car can have several properties--steering column, fuel injector, brake pads, airbags--but an instantiation of car would be a 2015 Honda Accord. Another instantiation would be a 2016 Tesla Model S. These are instances of a car.\nIn some [abstract] sense, these both derive from a common blueprint of a car, but their specific details differ. This is precisely how to think of the difference between classes and objects.\nPart 2: Objects in Python\nEvery object in Python has certain things in common.\n\n\nMethods: Remember when we covered the difference between functions and methods? This is where that difference comes into play. Methods are the way the object interacts with the outside world. They're functions, but they're attached directly to object instances.\n\n\nConstructors: These are specialized methods that deal specifically with how an object instance is created. Every single object has a constructor, whether you explicitly write one or not.\n\n\nAttributes: These are the physical properties of the object; maybe they change, maybe they don't. For a car, this could be color, make, model, or name. These are the things that distinguish one instance of the class from another.\n\n\nInheritance: This is where the power of object-oriented programming really comes into play. Quite often, our understanding of physical objects in the world is hierarchical: there are cars; then there are race cars, sedans, and SUVs; then there are gas-powered sedans, hybrid sedans, and electric sedans; then there are 2015 Honda Accords and 2016 Honda Accords. Wouldn't it be great if our class design reflected this hierarchy?\n\n\nDefining Classes\nLet's start with the first step of designing a class: its actual definition. We'll stick with the car example.", "class Car():\n \"\"\" A simple representation of a car. 
\"\"\"\n pass", "To define a new class, you need a class keyword, followed by the name (in this case, Car). The parentheses are important, but for now we'll leave them empty. Like loops and functions and conditionals, everything that belongs to the class--variables, methods, etc--are indented underneath.\nWe can then instantiate this class using the following:", "my_car = Car()\nprint(my_car)", "Now my_car holds an instance of the Car class! It doesn't do much, but it's a valid object.\nConstructors\nThe first step in making an interesting class is by creating a constructor. It's a special kind of function that provides a customized recipe for how an instance of that class is built.\nIt takes a special form, too:", "class Car():\n \n def __init__(self):\n print(\"This is the constructor!\")\n\nmy_car = Car()", "Let's look at this method in more detail.", " def __init__(self):\n pass", "The def is normal: the Python keyword we use to identify a function definition.\n\n\n__init__ is the name of our method. It's an interesting name for sure, and turns out this is a very specific name Python is looking for: whenever you instantiate an object, this is the method that's run. If you don't explicitly write a constructor, Python implicitly supplies a \"default\" one (where basically nothing really happens).\n\n\nThe method argument is strange; what is this mysterious self, and why--if an argument is required--didn't we supply one when we executed my_car = Car()?\n\n\nA note on self\nThis is how the object refers to itself from inside the object. We'll see this in greater detail once we get to attributes.\nEvery method in a class must have self as the first argument. Even though you don't actually supply this argument yourself when you call the method, it still has to be in the function definition.\nOtherwise, you'll get some weird error messages:\n\nAttributes\nAttributes are variables contained inside a class, and which take certain values when the class is instantiated.\nThe most common practice is to define these attributes within the constructor of the class.", "class Car():\n \n def __init__(self, year, make, model):\n \n # All three of these are class attributes.\n self.year = year\n self.make = make\n self.model = model\n\nmy_car = Car(2015, \"Honda\", \"Accord\") # Again, note that we don't specify something for \"self\" here.\nprint(my_car.year)", "These attributes are accessible from anywhere inside the class, but direct access to them from outside (as did in the print(my_car.year) statement) is heavily frowned upon.\nInstead, good object-oriented design stipulates that these attributes be treated as private variables to the class.\nTo be modified or otherwise used, the classes should have public methods that expose very specific avenues for interaction with the class attributes.\nThis is the concept of encapsulation: restricting direct access to attributes, and instead encouraging the use of class methods to interact with the attributes in very specific ways.\n\nMethods\nMethods are functions attached to the class, but which are accessible from outside the class, and define the ways in which the instances of the class can interact with the outside world.\nWhereas classes are usually nouns, the methods are typically the verbs. 
For example, what would a Car class do?", "class Car():\n \n def __init__(self, year, make, model):\n self.year = year\n self.make = make\n self.model = model\n self.mileage = 0\n \n def drive(self, mileage = 0):\n if mileage == 0:\n print(\"Driving!\")\n else:\n self.mileage += mileage\n print(\"Driven {} miles total.\".format(self.mileage))\n\nmy_car = Car(2016, \"Tesla\", \"Model S\")\nmy_car.drive(100)\nmy_car.drive()\nmy_car.drive(50)", "Classes can have as many methods as you want, named whatever you'd like (though usually named so they reflect their purpose).\nMethods are what are ultimately allowed to edit the class attributes (the self. variables), as per the concept of encapsulation. For example, the self.mileage attribute in the previous example that stores the total mileage driven by that instance.\nLike the constructor, all the class methods must have self as the first argument in their headers, even though you don't explicitly supply it when you call the methods.\nInheritance\nInheritance is easily the most complicated aspect of object-oriented programming, but is most certainly where OOP derives its power for modular design.\nWhen considering cars, certainly most are very similar and can be modeled effectively with one class, but eventually there are enough differences to necessitate the creation of a separate class. For example, a class for gas-powered cars and one for EVs.\nBut considering how much overlap they still share, it'd be highly redundant to make wholly separate classes for both.", "class GasCar():\n def __init__(self, make, model, year, tank_size):\n # Set up attributes.\n pass\n \n def drive(self, mileage = 0):\n # Driving functionality.\n pass\n\nclass ElectricCar():\n def __init__(self, make, model, year, battery_cycles):\n # Set up attributes.\n pass\n \n def drive(self, mileage = 0):\n # Driving functionality, probably identical to GasCar.\n pass", "Enter inheritance: the ability to create subclasses of existing classes that retain all the functionality of the parent, while requiring the implementation only of the things that differentiate the child from the parent.", "class Car(): # Parent class.\n def __init__(self, make, model, year):\n self.make = make\n self.model = model\n self.year = year\n self.mileage = 0\n \n def drive(self, mileage = 0):\n self.mileage += mileage\n print(\"Driven {} miles.\".format(self.mileage))\n\nclass EV(Car): # Child class--explicitly mentions \"Car\" as the parent!\n def __init__(self, make, model, year, charge_range):\n Car.__init__(self, make, model, year)\n self.charge_range = charge_range\n \n def charge_remaining(self):\n if self.mileage < self.charge_range:\n print(\"Still {} miles left.\".format(self.charge_range - self.mileage))\n else:\n print(\"Battery depleted! 
Find a SuperCharger station.\")\n\ntesla = EV(2016, \"Tesla\", \"Model S\", 250)\ntesla.drive(100)\ntesla.charge_remaining()\ntesla.drive(150)\ntesla.charge_remaining()", "Hopefully you noticed--we could call tesla.drive() and it worked as it was defined in the parent Car class, without us having to write it again!\nThis is the power of inheritance: every child class inherits all the functionality of the parent class.\nWith ONE exception: if you override a parent attribute or method in the child class, then that takes precedence.", "class Hybrid(Car):\n \n def drive(self, mileage, mpg):\n self.mileage += mileage\n print(\"Driven {} miles at {:.1f} MPG.\".format(self.mileage, mpg))\n\nhybrid = Hybrid(2015, \"Toyota\", \"Prius\")\nhybrid.drive(100, 35.5)", "Using inheritance, you can build an entire hierarchy of classes and subclasses, inheriting functionality where needed and overriding it where necessary.\n\nThis illustrates the concept of polymorphism (meaning \"many forms\"): all cars are vehicles; therefore, any functions a vehicle has, a car will also have.\nAll transporters are vehicles--and also cars--and have all the associated functionality defined in those classes.\nHowever, it does NOT work in reverse: not all vehicles are motorcycles! Thus, as you move down the hierarchy, the objects become more specialized.\nMultiple Inheritance\nJust a quick note on this, for all the Java converts--\nPython does support multiple inheritance, meaning a child class can directly inherit from multiple parent classes.", "class DerivedClassName(EV, Hybrid):\n pass", "This can get very complicated (and is why Java restricts \"multiple inheritance\" to interfaces only) in terms of what method and attribute definitions takes precedence when found in multiple parent classes.\nAs such, we won't explore this very much if at all in this class.\nReview Questions\nSome questions to discuss and consider:\n1: Buzzword bingo: define encapsulation, inheritance, polymorphism, instantiation, and the difference between objects and classes.\n2: For those who are Java converts, you may recall public and private methods and variables. Python makes no such distinction; everything is intrinsically public. In this case, why still use methods to interact with classes, instead of directly accessing the class attributes?\n3: What is the difference between method overriding and method overloading? Does Python support one, both, or neither?\n4: Class variable scope exists when working with objects in Python. If I define a variable x outside a class, define x again inside the class method, and refer to x after the class definition, which x is accessed? If x is also an attribute of the class, how do you access it from outside the class? How can you access the x defined outside the class from inside a class method?\n5: Design a class hierarchy for different kinds of drinks. Include as much detail as you can. Where are attributes and methods inherited, where are they overridden, and where are new attributes and methods defined?\nCourse Administrivia\nHow is A4 going?\nVolunteers for tomorrow's flipped lecture?\nReview session #2 on Thursday! Come with questions!\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034\nPython Classes documentation: https://docs.python.org/3/tutorial/classes.html" ]
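Because the lecture stops short of explaining how Python decides which parent "wins" under multiple inheritance, here is a small, self-contained illustration of the method resolution order (MRO). The class names are invented for the demo and are separate from the Car/EV/Hybrid classes above.

```python
# Illustration of Python's method resolution order (MRO) under multiple inheritance.
class Vehicle:
    def describe(self):
        return "vehicle"

class ElectricVehicle(Vehicle):
    def describe(self):
        return "electric vehicle"

class HybridVehicle(Vehicle):
    def describe(self):
        return "hybrid vehicle"

class PluginHybrid(ElectricVehicle, HybridVehicle):
    pass   # no describe() of its own; the MRO decides whose version is used

print(PluginHybrid().describe())                      # 'electric vehicle' -- the parent listed first wins
print([cls.__name__ for cls in PluginHybrid.__mro__])
# ['PluginHybrid', 'ElectricVehicle', 'HybridVehicle', 'Vehicle', 'object']
```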
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jan-rybizki/Chempy
tutorials/4-Simple_stellar_population.ipynb
mit
[ "Simple Stellar Population (SSP)\nHere we will explain the central routine of Chempy. The observational counterpart of an SSP is the open cluster. All stars are approximately born at the same time, and from the same ISM abundances. Therefore the SSP is characterized by its total mass, its birth age and its initial elemental composition. When assuming stellar lifetimes, an IMF and nucleosynthetic feedback we can calculate its feedback over time.\nYou will see how this is realized in Chempy.", "%pylab inline\n\n# Chempy has to be called from within the /source directory. We load the default parameters\n\nfrom Chempy.parameter import ModelParameters\na = ModelParameters()", "First we load the default values of:\n- IMF\n- SN2, AGB and SNIa feedback", "a.basic_sfr_name\n\n# Load the IMF\n\nfrom Chempy.imf import IMF\nbasic_imf = IMF(a.mmin,a.mmax,a.mass_steps)\ngetattr(basic_imf, a.imf_type_name)((a.chabrier_para1,a.chabrier_para2,a.high_mass_slope))\n\n# Load the SFR\n\nfrom Chempy.sfr import SFR\nbasic_sfr = SFR(a.start, a.end, a.time_steps)\ngetattr(basic_sfr, a.basic_sfr_name)(S0 = a.S_0 * a.mass_factor,a_parameter = a.a_parameter, loc = a.sfr_beginning, scale = a.sfr_scale)\nbasic_sfr.sfr = a.total_mass * np.divide(basic_sfr.sfr, sum(basic_sfr.sfr))\n\n# Load the yields of the default yield set\n\nfrom Chempy.yields import SN2_feedback, AGB_feedback, SN1a_feedback\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"Nomoto2013\")()\nbasic_1a = SN1a_feedback()\ngetattr(basic_1a, \"Seitenzahl\")()\nbasic_agb = AGB_feedback()\ngetattr(basic_agb, \"Karakas_net_yield\")()", "We check how many elements are traced by our yield set", "# Print all supported elements\n\nelements_to_trace = list(np.unique(basic_agb.elements+basic_sn2.elements+basic_1a.elements))\nprint(elements_to_trace)", "When initialising the SSP class, we have to make the following choices(in brackets our choices are given):\n\nWhich metallicity does it have (Solar)\nHow long will the SSP live and at which timesteps do we want the feedback to be evaluated (Delta t = 0.025 for 13.5 Gyr)\nWhat is the lifetime rroutine (Argast+ 2000)\nHow should the interpolation in metallicity from the yield tables be (logarithmic)\nDo we want to safe the net yields as well (No)", "# Load solar abundances\n\nfrom Chempy.solar_abundance import solar_abundances\nbasic_solar = solar_abundances()\ngetattr(basic_solar, 'Asplund09')()\n\n\n# Initialise the SSP class with time-steps\n\ntime_steps = np.linspace(0.,13.5,541)\nfrom Chempy.weighted_yield import SSP\nbasic_ssp = SSP(False, np.copy(basic_solar.z), np.copy(basic_imf.x), np.copy(basic_imf.dm), np.copy(basic_imf.dn), np.copy(time_steps), list(elements_to_trace), 'Argast_2000', 'logarithmic', False) ", "This initialises the SSP class. It calculates the inverse IMF (stars of which minimal mass will be dead until each time-step) and it also initialises the feedback table, which need to be filled by the feedback from each subroutine, representing a specific nucleosynthetic process (CC-SN, AGB star, SN Ia). 
For the inverse IMF's first time-steps we see that only after the first and second time-step CC-SN will contribute (only stars smaller than 8Msun survive longer...).", "# Plotting the inverse IMF\n\nplt.plot(time_steps[1:],basic_ssp.inverse_imf[1:])\nplt.ylabel('Mass of stars dying')\nplt.xlabel('Time in Gyr')\nplt.ylim((0.7,11))\nfor i in range(5):\n print(time_steps[i], ' Gyr -->', basic_ssp.inverse_imf[i], ' Msun')", "In order to calculate the feedback for the CC-SN and AGB star we will need to provide the elemental abundances at birth, for which we will use solar abundances for now. (This is the case for net yields, gross yields already include the initial stellar abundance that will be expelled.)", "# Producing the SSP birth elemental fractions (here we use solar)\n\nsolar_fractions = []\nelements = np.hstack(basic_solar.all_elements)\nfor item in elements_to_trace:\n solar_fractions.append(float(basic_solar.fractions[np.where(elements==item)]))\n\n# Each nucleosynthetic process has its own method on the SSP class and adds its feedback into the final table\n\nbasic_ssp.sn2_feedback(list(basic_sn2.elements), dict(basic_sn2.table), np.copy(basic_sn2.metallicities), float(a.sn2mmin), float(a.sn2mmax),list(solar_fractions))\nbasic_ssp.agb_feedback(list(basic_agb.elements), dict(basic_agb.table), list(basic_agb.metallicities), float(a.agbmmin), float(a.agbmmax),list(solar_fractions))\nbasic_ssp.sn1a_feedback(list(basic_1a.elements), list(basic_1a.metallicities), dict(basic_1a.table), str(a.time_delay_functional_form), float(a.sn1ammin), float(a.sn1ammax), [a.N_0,a.sn1a_time_delay,a.sn1a_exponent,a.dummy],float(a.total_mass), a.stochastic_IMF)", "Elemental feedback from an SSP over time", "# Now we can plot the feedback of an SSP over time for a few elements\n\nplt.plot(time_steps,np.cumsum(basic_ssp.table['H']), label = 'H')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['O']), label = 'O')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['C']), label = 'C')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['Fe']), label = 'Fe')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Mass fraction of Element expelled')\nplt.title('Total feedback of an SSP of 1Msun')\nplt.xlabel('Time in Gyr')\nplt.legend()", "The difference between different yield sets", "# Loading an SSP which uses the alternative yield set\n\nbasic_ssp_alternative = SSP(False, np.copy(basic_solar.z), np.copy(basic_imf.x), np.copy(basic_imf.dm), np.copy(basic_imf.dn), np.copy(time_steps), list(elements_to_trace), 'Argast_2000', 'logarithmic', False) \nbasic_sn2_alternative = SN2_feedback()\ngetattr(basic_sn2_alternative, \"chieffi04\")()\nbasic_1a_alternative = SN1a_feedback()\ngetattr(basic_1a_alternative, \"Thielemann\")()\nbasic_agb_alternative = AGB_feedback()\ngetattr(basic_agb_alternative, \"Ventura_net\")()\nbasic_ssp_alternative.sn2_feedback(list(basic_sn2_alternative.elements), dict(basic_sn2_alternative.table), np.copy(basic_sn2_alternative.metallicities), float(a.sn2mmin), float(a.sn2mmax),list(solar_fractions))\nbasic_ssp_alternative.agb_feedback(list(basic_agb_alternative.elements), dict(basic_agb_alternative.table), list(basic_agb_alternative.metallicities), float(a.agbmmin), float(a.agbmmax),list(solar_fractions))\nbasic_ssp_alternative.sn1a_feedback(list(basic_1a_alternative.elements), list(basic_1a_alternative.metallicities), dict(basic_1a_alternative.table), str(a.time_delay_functional_form), float(a.sn1ammin), float(a.sn1ammax), [a.N_0,a.sn1a_time_delay,a.sn1a_exponent,a.dummy],float(a.total_mass), 
a.stochastic_IMF)\n\n\n# Plotting the difference\n\nplt.plot(time_steps,np.cumsum(basic_ssp.table['H']), label = 'H', color = 'b')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['O']), label = 'O', color = 'orange')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['C']), label = 'C', color = 'g')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['Fe']), label = 'Fe', color = 'r')\nplt.plot(time_steps,np.cumsum(basic_ssp_alternative.table['H']), linestyle = '--', color = 'b')\nplt.plot(time_steps,np.cumsum(basic_ssp_alternative.table['O']), linestyle = '--', color = 'orange')\nplt.plot(time_steps,np.cumsum(basic_ssp_alternative.table['C']), linestyle = '--', color = 'g')\nplt.plot(time_steps,np.cumsum(basic_ssp_alternative.table['Fe']), linestyle = '--', color = 'r')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Mass fraction of Element expelled')\nplt.title('default/alternative yield set in solid/dashed lines')\nplt.xlabel('Time in Gyr')\nplt.legend()", "The contribution of different nucleosynthetic paths to a single element", "# The SSP class stores the individual feedback of each nucleosynthetic channel\n# Here we plot the Carbon feedback over time\n\nplt.plot(time_steps,np.cumsum(basic_ssp.table['C']), label = 'total')\nplt.plot(time_steps,np.cumsum(basic_ssp.sn2_table['C']), label = 'CC-SN')\nplt.plot(time_steps,np.cumsum(basic_ssp.sn1a_table['C']), label = 'SN Ia')\nplt.plot(time_steps,np.cumsum(basic_ssp.agb_table['C']), label = 'AGB')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Mass fraction of Element expelled')\nplt.xlabel('Time in Gyr')\nplt.title('Carbon feedback per nucleosynthetic process')\nplt.legend()", "The number of events", "# The number of events is stored as well\n\nplt.plot(time_steps,np.cumsum(basic_ssp.table['sn2']), label = 'CC-SN')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['sn1a']), label = 'SN Ia')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['pn']), label = 'AGB')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('# of events')\nplt.xlabel('Time in Gyr')\nplt.title('Number of events per SSP of 1Msun')\nplt.legend()", "The mass fractions", "# As is the mass fraction of stars, remnants, dying stars from which the total feedback mass can be calculated\n\nplt.plot(time_steps,basic_ssp.table['mass_in_ms_stars'], label = 'Ms stars')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['mass_in_remnants']), label = 'remnants')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['mass_of_ms_stars_dying']), label = 'dying')\nplt.plot(time_steps,np.cumsum(basic_ssp.table['mass_of_ms_stars_dying']) - np.cumsum(basic_ssp.table['mass_in_remnants']), label = 'feedback')\n\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Mass fraction')\nplt.xlabel('Time in Gyr')\nplt.title('Mass of stars gets transformed into remnants and feedback over time')\nplt.legend(loc = 'right', bbox_to_anchor= (1.6,0.5))", "IMF weighted yield of an SSP\n\nDepends on the timespan over which we integrate (here we use the full 13.5Gyr)\nDepends on the IMF\nOn the chosen yield set\nThe mass range of the nucleosynthetic process (e.g. low-mass CC-SN have only solar alpha/Fe abundances)\netc... play around and investigate", "# Here we print the time-integrated yield of an SSP (feedback after 13,5Gyr)\n# for different elements and also for CC-SNe feedback only\n\nnormalising_element = 'Fe'\nprint('alternative yield set')\nprint('Element, total SSP yield, CC-SN yield ([X/Fe] i.e. 
normalised to solar)')\nfor element in ['C', 'O', 'Mg', 'Ca', 'Mn', 'Ni']:\n element_ssp_sn2 = sum(basic_ssp_alternative.sn2_table[element])\n element_ssp = sum(basic_ssp_alternative.table[element])\n element_sun = basic_solar.fractions[np.where(elements == element)]\n normalising_element_ssp_sn2 = sum(basic_ssp_alternative.sn2_table[normalising_element])\n normalising_element_ssp = sum(basic_ssp_alternative.table[normalising_element])\n normalising_element_sun = basic_solar.fractions[np.where(elements == normalising_element)]\n print(element, np.log10(element_ssp/element_sun)-np.log10(normalising_element_ssp/normalising_element_sun), np.log10(element_ssp_sn2/element_sun)-np.log10(normalising_element_ssp_sn2/normalising_element_sun))\nprint('------------------------------------------')\nprint('default yield set')\nprint('Element, total SSP yield, CC-SN yield ([X/Fe] i.e. normalised to solar)')\nfor element in ['C', 'O', 'Mg', 'Ca', 'Mn', 'Ni']:\n element_ssp_sn2 = sum(basic_ssp.sn2_table[element])\n element_ssp = sum(basic_ssp.table[element])\n element_sun = basic_solar.fractions[np.where(elements == element)]\n normalising_element_ssp_sn2 = sum(basic_ssp.sn2_table[normalising_element])\n normalising_element_ssp = sum(basic_ssp.table[normalising_element])\n normalising_element_sun = basic_solar.fractions[np.where(elements == normalising_element)]\n print(element, np.log10(element_ssp/element_sun)-np.log10(normalising_element_ssp/normalising_element_sun), np.log10(element_ssp_sn2/element_sun)-np.log10(normalising_element_ssp_sn2/normalising_element_sun))", "Net yield vs. gross yield\nHere the difference between newly produced material (net yield) and total expelled material (gross yield) is shown for AGB and CC-SN.", "# We can set the the additional table (e.g. agb_table) to only save the newly produced material\n\nbasic_ssp_net = SSP(False, np.copy(basic_solar.z), np.copy(basic_imf.x), np.copy(basic_imf.dm), np.copy(basic_imf.dn), np.copy(time_steps), list(elements_to_trace), 'Argast_2000', 'logarithmic', True) \nbasic_ssp_net.agb_feedback(list(basic_agb.elements), dict(basic_agb.table), list(basic_agb.metallicities), float(a.agbmmin), float(a.agbmmax),list(solar_fractions))\n\n# And then compare these net yields to gross yields for C\n\nplt.plot(time_steps,np.cumsum(basic_ssp.agb_table['C']), label = 'total')\nplt.plot(time_steps,np.cumsum(basic_ssp_net.agb_table['C']), label = 'newly produced')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Carbon feedback from AGB stars')\nplt.xlabel('Time in Gyr')\nplt.title('Net yield vs. gross yield')\nplt.legend()\n\n# And show the same for O and CC-SNe\n\nbasic_ssp_net = SSP(False, basic_solar.z, basic_imf.x, basic_imf.dm, basic_imf.dn, time_steps, elements_to_trace, 'Argast_2000', 'logarithmic', True)\nbasic_ssp_net.sn2_feedback(list(basic_sn2.elements), dict(basic_sn2.table), np.copy(basic_sn2.metallicities), float(a.sn2mmin), float(a.sn2mmax),list(solar_fractions))\nplt.plot(time_steps,np.cumsum(basic_ssp.sn2_table['He']), label = 'total')\nplt.plot(time_steps,np.cumsum(basic_ssp_net.sn2_table['He']), label = 'newly produced')\nplt.yscale('log')\nplt.xscale('log')\nplt.ylabel('Cumulative Helium feedback from CC-SN')\nplt.xlabel('Time in Gyr')\nplt.title('Net vs. 
gross yield')\nplt.legend()", "Stochastic IMF sampling\n\nThe feedback and the explosion of SN Ia can also be calculated stochastically.\nThe mass of the SSP needs to be provided (the feedback table is given in fractions).\nEach realisation will be new (you can check by redoing the plot).", "# The IMF can be sampled stochastically. First we plot the analytic version \n\nbasic_imf = IMF(a.mmin,a.mmax,a.mass_steps)\ngetattr(basic_imf, a.imf_type_name)((a.chabrier_para1,a.chabrier_para2,a.high_mass_slope))\nbasic_ssp = SSP(False, np.copy(basic_solar.z), np.copy(basic_imf.x), np.copy(basic_imf.dm), np.copy(basic_imf.dn), np.copy(time_steps), list(elements_to_trace), 'Argast_2000', 'logarithmic', False) \nbasic_ssp.sn2_feedback(list(basic_sn2.elements), dict(basic_sn2.table), np.copy(basic_sn2.metallicities), float(a.sn2mmin), float(a.sn2mmax),list(solar_fractions))\nbasic_ssp.agb_feedback(list(basic_agb.elements), dict(basic_agb.table), list(basic_agb.metallicities), float(a.agbmmin), float(a.agbmmax),list(solar_fractions))\nbasic_ssp.sn1a_feedback(list(basic_1a.elements), list(basic_1a.metallicities), dict(basic_1a.table), str(a.time_delay_functional_form), float(a.sn1ammin), float(a.sn1ammax), [a.N_0,a.sn1a_time_delay,a.sn1a_exponent,a.dummy],float(a.total_mass), True)\nplt.plot(time_steps,np.cumsum(basic_ssp.table['Fe']), label = 'analytic IMF')\n\n\n# Then we add the stochastic sampling for 3 different masses\n\nfor mass in [1e5,5e3,1e2]:\n basic_imf = IMF(a.mmin,a.mmax,a.mass_steps)\n getattr(basic_imf, a.imf_type_name)((a.chabrier_para1,a.chabrier_para2,a.high_mass_slope))\n basic_imf.stochastic_sampling(mass)\n basic_ssp = SSP(False, np.copy(basic_solar.z), np.copy(basic_imf.x), np.copy(basic_imf.dm), np.copy(basic_imf.dn), np.copy(time_steps), list(elements_to_trace), 'Argast_2000', 'logarithmic', False) \n basic_ssp.sn2_feedback(list(basic_sn2.elements), dict(basic_sn2.table), np.copy(basic_sn2.metallicities), float(a.sn2mmin), float(a.sn2mmax),list(solar_fractions))\n basic_ssp.agb_feedback(list(basic_agb.elements), dict(basic_agb.table), list(basic_agb.metallicities), float(a.agbmmin), float(a.agbmmax),list(solar_fractions))\n basic_ssp.sn1a_feedback(list(basic_1a.elements), list(basic_1a.metallicities), dict(basic_1a.table), str(a.time_delay_functional_form), float(a.sn1ammin), float(a.sn1ammax), [a.N_0,a.sn1a_time_delay,a.sn1a_exponent,a.dummy],float(a.total_mass), True)\n plt.plot(time_steps,np.cumsum(basic_ssp.table['Fe']), label = '%d Msun' %(mass))\nplt.xscale('log')\nplt.ylabel('Cumulative fractional iron feedback of an SSP')\nplt.xlabel('Time in Gyr')\nplt.legend(bbox_to_anchor = (1.5,1))", "SSP wrapper\nIn order to query the SSP feedback faster and easier we write a little wrapper in wrapper.py and look at the imf weighted yield change with IMF. We compare the bottom-heavy Kroupa IMF with the Salpeter IMF (which has more high-mass stars). 
We see that the total SSP yield changes quite drastically.", "# Here we show the functionality of the wrapper, which makes the SSP calculation easy.\n# We want to show the differences of the SSP feedback for 2 IMFs and load Kroupa first\n\na.only_net_yields_in_process_tables = False\na.imf_type_name = 'normed_3slope'\na.imf_parameter = (-1.3,-2.2,-2.7,0.5,1.0)\n\n# The feedback can now be calculated by just typing the next three lines\n\nfrom Chempy.wrapper import SSP_wrap\nbasic_ssp = SSP_wrap(a)\nssp_mass = float(basic_sfr.sfr[0])\nbasic_ssp.calculate_feedback(float(basic_solar.z),list(elements_to_trace),list(solar_fractions),np.copy(time_steps), ssp_mass)\n\n\n# We print the Kroupa IMF feedback\n\nprint('Kroupa IMF')\nprint('Element, total SSP yield, CC-SN yield ([X/Fe] i.e. normalised to solar)')\nfor element in ['C', 'O', 'Mg', 'Ca', 'Mn', 'Ni']:\n element_ssp_sn2 = sum(basic_ssp.sn2_table[element])\n element_ssp = sum(basic_ssp.table[element])\n element_sun = basic_solar.fractions[np.where(elements == element)]\n normalising_element_ssp_sn2 = sum(basic_ssp.sn2_table[normalising_element])\n normalising_element_ssp = sum(basic_ssp.table[normalising_element])\n normalising_element_sun = basic_solar.fractions[np.where(elements == normalising_element)]\n print(element, np.log10(element_ssp/element_sun)-np.log10(normalising_element_ssp/normalising_element_sun), np.log10(element_ssp_sn2/element_sun)-np.log10(normalising_element_ssp_sn2/normalising_element_sun))\n\n# Change to Salpeter \n\na.imf_type_name = 'salpeter'\na.imf_parameter = (2.35)\n\n\n# Calculate the SSP feedback\n\nbasic_ssp = SSP_wrap(a)\nbasic_ssp.calculate_feedback(float(basic_solar.z),list(elements_to_trace),list(solar_fractions),np.copy(time_steps), ssp_mass)\n\n\n# And print the feedback for comparison\n\nprint('Salpeter IMF')\nprint('Element, total SSP yield, CC-SN yield ([X/Fe] i.e. 
normalised to solar)')\nfor element in ['C', 'O', 'Mg', 'Ca', 'Mn', 'Ni']:\n element_ssp_sn2 = sum(basic_ssp.sn2_table[element])\n element_ssp = sum(basic_ssp.table[element])\n element_sun = basic_solar.fractions[np.where(elements == element)]\n normalising_element_ssp_sn2 = sum(basic_ssp.sn2_table[normalising_element])\n normalising_element_ssp = sum(basic_ssp.table[normalising_element])\n normalising_element_sun = basic_solar.fractions[np.where(elements == normalising_element)]\n print(element, np.log10(element_ssp/element_sun)-np.log10(normalising_element_ssp/normalising_element_sun), np.log10(element_ssp_sn2/element_sun)-np.log10(normalising_element_ssp_sn2/normalising_element_sun))", "Paper plot\nHere is the code create a plot similar to figure 4 of the paper.", "# Loading the default parameters so that we can change them and see what happens with the SSP feedback\n\na = ModelParameters()\na.high_mass_slope = -2.29 \na.N_0 = np.power(10,-2.75) \na.sn1a_time_delay = np.power(10,-0.8)\na.imf_parameter = (0.69, 0.079, a.high_mass_slope)\na.sn1a_parameter = [a.N_0, a.sn1a_time_delay, 1.12, 0.0]\na.mmax = 100\ntime_steps = np.linspace(0.,13.5,1401)\n\n# Then calculating the feedback table\n\nbasic_ssp = SSP_wrap(a)\nbasic_ssp.calculate_feedback(float(basic_solar.z),list(a.elements_to_trace),list(solar_fractions),np.copy(time_steps), ssp_mass)\n\nalpha = 0.5\nfactor = 1.05\n\n## Actual plotting\n\nfig = plt.figure(figsize=(8.69,6.69), dpi=100)\nax = fig.add_subplot(111)\nax.plot(time_steps,np.cumsum(basic_ssp.sn2_table[\"Fe\"]),'b', label = 'CC-SN')\nax.annotate(xy = (time_steps[-1]*factor,np.sum(basic_ssp.sn2_table[\"Fe\"])*0.9) ,s = 'Fe',color = 'b')\nax.plot(time_steps,np.cumsum(basic_ssp.sn1a_table[\"Fe\"]), 'r', label = 'SN Ia')\nax.annotate(xy = (time_steps[-1]*factor,np.sum(basic_ssp.sn1a_table[\"Fe\"])) ,s = 'Fe',color = 'r')\n\nax.plot(time_steps,np.cumsum(basic_ssp.sn2_table[\"Mg\"]),'b')\nax.annotate(xy = (time_steps[-1]*factor,np.sum(basic_ssp.sn2_table[\"Mg\"])) ,s = 'Mg',color = 'b')\nax.plot(time_steps,np.cumsum(basic_ssp.sn1a_table[\"Mg\"]),'r')\nax.annotate(xy = (time_steps[-1]*factor,np.sum(basic_ssp.sn1a_table[\"Mg\"])) ,s = 'Mg',color = 'r')\n\nax.plot(time_steps,np.ones_like(time_steps)*5e-3,marker = '|', markersize = 10, linestyle = '', color = 'k', alpha = 2*alpha)#, label = 'time-steps')\nax.annotate(xy = (time_steps[1],2.7e-3),s = r'model time-steps with mass of stars dying in M$_\\odot$', color = 'k', alpha = 2*alpha)\nfor numb in [1,2,4,11,38,117,1000]:\n if numb < len(time_steps):\n plt.annotate(xy = (time_steps[numb],3.5e-3),s = '%.f' %(basic_ssp.inverse_imf[numb]), color = 'k', alpha = 2*alpha)\nax.legend()\nax.set_ylim(2e-5,6e-3)\nax.set_xlim(7e-3,25)\nax.set_title(r'yield of SSP with mass = 1M$_\\odot$ and metallicity = Z$_\\odot$')\nax.set_ylabel(r\"net yield in M$_\\odot$\")\nax.set_xlabel(\"time in Gyr\")\n\nax.set_yscale('log')\nax.set_xscale('log')\nplt.show()", "That's why Fe is a fairly good indicator for the incidence of SN Ia and Mg is a very good indicator for the incidence of CC-SN." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dwhswenson/openpathsampling
examples/misc/tutorial_path_histogram.ipynb
mit
[ "Tutorial: Path Histogram\nThis gives a little overview of the option and behavior of the PathHistogram object. The PathHistogram is used for path density plots and free energy plots. It extends the basic SparseHistogram code by allowing for interpolation between bins, and by allowing one to normalize on a per-trajectory basis.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport openpathsampling as paths\nfrom openpathsampling.analysis import PathHistogram\nfrom openpathsampling.analysis.path_histogram import (\n BresenhamInterpolation, BresenhamLikeInterpolation, SubdivideInterpolation\n)\nfrom openpathsampling.numerics import HistogramPlotter2D\n\ntrajectory = [(0.1, 0.3), (2.1, 3.1), (1.7, 1.4), (1.6, 0.6), (0.1, 1.4), (2.2, 3.3)]\nx, y = zip(*trajectory)", "Here's our trajectory. The grid happens to correspond with the bins I'll use for the histograms.", "plt.grid(True)\nplt.plot(x, y, 'o-')", "The first type of histogram is what you'd get from just histogramming the frames.", "hist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=False, per_traj=False)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "The next type of histogram uses that fact that we know this is a trajectory, so we do linear interpolation between the frames. This gives us a count of every time the trajectory enters a given bin. We can use this kind of histogram for free energy plots based on the reweighted path ensemble.\nWe have several possible interpolation algorithms, so let's show one image for each of them. SubdivideInterpolation is the most exact, but it is also quite slow. The default interpolation is BresenhamLikeInterpolation; this will be used if you just give interpolate=True.", "hist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=SubdivideInterpolation, per_traj=False)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)\n\nhist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=BresenhamLikeInterpolation, per_traj=False)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)\n\nhist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=BresenhamInterpolation, per_traj=False)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "The next type of histogram uses the interpolation, but also normalizes so that each trajectory only contributes once per bin. This is what we use for a path density plot.", "hist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=SubdivideInterpolation, per_traj=True)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "Of course, we can normalize to one contribution per path while not interpolating. I don't think this is actually useful.", "hist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=False, per_traj=True)\n\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "Hypothetically, it is possible for a path to cut exactly through a corner. 
It won't happen in the real world, but we would like our interpolation algorithm to get even the unlikely cases right.", "diag = [(0.25, 0.25), (2.25, 2.25)]\ndiag_x, diag_y = zip(*diag)\nplt.grid(True)\nticks = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]\nplt.xticks(ticks)\nplt.yticks(ticks)\nplt.xlim(0, 2.5)\nplt.ylim(0, 3.5)\nplt.plot(diag_x, diag_y, 'o-')\n\nhist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=SubdivideInterpolation, per_traj=True)\n\nhist.add_trajectory(diag)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "How would we make this into an actual path density plot? Add the trajectories on top of each other.", "hist = PathHistogram(left_bin_edges=(0.0,0.0), bin_widths=(0.5,0.5),\n interpolate=SubdivideInterpolation, per_traj=True)\nhist.add_trajectory(diag, weight=2) # each trajectory can be assigned a weight (useful for RPE)\nhist.add_trajectory(trajectory)\nHistogramPlotter2D(hist).plot(normed=False, xlim=(0,2.5), ylim=(0, 3.5),\n cmap=\"Blues\", vmin=0, vmax=3)", "The actual PathDensity object also contains information about the collective variables we map this into, and has a convenience function to take a list of regular OPS trajectories and make the whole path histogram out of them." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
zhuanxuhit/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n counts = Counter(text)\n vocab = sorted(counts, key=counts.get, reverse=True) # descending order\n \n vocab_to_int = {word: ii for ii, word in enumerate(vocab)}\n \n int_to_vocab = {ii: word for ii, word in enumerate(vocab)}\n \n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". 
Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n return {\n '.':\"||Period||\",\n ',':\"||Comma||\", \n '\"':\"||Quotation_Mark||\", \n ';':\"||Semicolon||\",\n '!':\"||Exclamation_mark||\",\n '?':\"||Question_mark||\",\n '(':\"||Left_Parentheses||\", \n ')':\"||Right_Parentheses||\", \n '--':\"||Dash||\", \n '\\n':\"||Return||\"\n }\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n input_ = tf.placeholder(shape=[None,None],name='input',dtype=tf.int32) # input shape = [batch_size, seq_size]\n targets = tf.placeholder(shape=[None,None],name='targets',dtype=tf.int32)\n learning_rate = tf.placeholder(dtype=tf.float32)\n return input_, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n # Add dropout to the cell\n # drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n lstm_layers = 5\n cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)\n \n return cell, tf.identity(initial_state,name='initial_state')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,dtype=tf.float32)\n return outputs, tf.identity(final_state,name=\"final_state\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embed_dim = 300;\n embed = get_embed(input_data,vocab_size,embed_dim)\n outputs, final_state = build_rnn(cell,embed)\n# print(outputs) # Tensor(\"rnn/transpose:0\", shape=(128, 5, 256), dtype=float32)\n# print(final_state) # Tensor(\"final_state:0\", shape=(2, 2, ?, 256), dtype=float32)\n # !!! it is really import to have a good weigh init\n logits = tf.contrib.layers.fully_connected(outputs,vocab_size,activation_fn=None, #tf.nn.relu\n weights_initializer = tf.truncated_normal_initializer(stddev=0.1),\n biases_initializer=tf.zeros_initializer())\n# print(logits) # Tensor(\"fully_connected/Relu:0\", shape=(128, 5, 27), dtype=float32)\n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. 
This is a common technique used when creating sequence batches, although it is rather unintuitive.", "# Method revised following the reviewer's suggestion -- much cleaner!\ndef get_batches(int_text, batch_size, seq_length):\n n_batches = int(len(int_text) / (batch_size * seq_length))\n x_data = np.array(int_text[: n_batches * batch_size * seq_length])\n y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x = np.split(x_data.reshape(batch_size, -1), n_batches, 1)\n y = np.split(y_data.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x, y)))\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)\n\n# def get_batches(int_text, batch_size, seq_length):\n# \"\"\"\n# Return batches of input and target\n# :param int_text: Text with the words replaced by their ids\n# :param batch_size: The size of batch\n# :param seq_length: The length of sequence\n# :return: Batches as a Numpy array\n# \"\"\"\n# # TODO: Implement Function\n# batches = []\n# n_batchs = (len(int_text)-1) // (batch_size * seq_length)\n# # int_text = int_text[:n_batchs*batch_size * seq_length+1]\n# for i in range(0,n_batchs*seq_length,seq_length):\n# x = []\n# y = []\n# for j in range(i,i+batch_size * seq_length,seq_length):\n# x.append(int_text[j:j+seq_length])\n# y.append(int_text[j+1:j+1+seq_length])\n# batches.append([x,y])\n# return np.array(batches)\n\n# #print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3))\n\n# \"\"\"\n# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n# \"\"\"\n# tests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# 4257 lines, about 11 words per line on average\n\n# Number of Epochs\nnum_epochs = 50\n# Batch Size\nbatch_size = 200\n# RNN Size\nrnn_size = None\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = 10 # !!! 
when i increase the seq_length from 5 to 10,it really helps๏ผŒๅฆ‚ๆžœ็ปง็ปญๅขžๅŠ ไผšๆ€Žไนˆๆ ทๅ‘ข๏ผŸ\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 40\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n # input_data_shape[0] batch size\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return loaded_graph.get_tensor_by_name(\"input:0\"), loaded_graph.get_tensor_by_name(\"initial_state:0\"), loaded_graph.get_tensor_by_name(\"final_state:0\"), loaded_graph.get_tensor_by_name(\"probs:0\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "import random\ndef pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilities of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n r = random.uniform(0,1)\n # running cumulative probability\n s = 0\n # default to the last index in case rounding keeps s below r\n char_id = len(probabilities) - 1\n # walk through each word's predicted probability\n for i in range(len(probabilities)):\n # accumulate into s\n s += probabilities[i]\n # check whether the cumulative probability has passed the random draw\n if s >= r:\n # if it has, this index is the sampled word\n char_id = i\n break\n \n return int_to_vocab[char_id]\n # Alternative one-liner (unreachable after the return above); for the reasoning behind this choice, see:\n # http://yanyiwu.com/work/2014/01/30/simhash-shi-xian-xiang-jie.html\n rand = np.sum(probabilities) * np.random.rand(1)\n pred_word = int_to_vocab[int(np.searchsorted(np.cumsum(probabilities), rand))]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
parkerzf/kaggle-expedia
notebooks/time_based_anlaysis.ipynb
bsd-3-clause
[ "# Display plots inline and change default figure size\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom matplotlib.colors import LinearSegmentedColormap\nfrom matplotlib.lines import Line2D \n\nimport psycopg2\nimport pandas.io.sql as sqlio\nfrom datetime import datetime\nimport getpass\n\nhost = 'r5-lx-l2.zai'\ndatabase = \"booking\"\nusername = 'feng.zhao'\npwd = getpass.getpass()\n\ndef get_dataframe(query):\n conn = psycopg2.connect(\"host='%s' dbname='%s' user='%s' password='%s'\" % (host, database, username, pwd))\n df = sqlio.read_frame(query, conn)\n conn.close()\n \n return df\n\ntable = 'public.date_time_daily_stats'\n\ndaily_stats = get_dataframe(\n '''select * from %s''' % table)\n\ndaily_stats.index = pd.to_datetime(daily_stats.year*10000 + daily_stats.month*100 + daily_stats.day, format='%Y%m%d')", "# Overall Summary", "daily_stats[['count_click', 'count_booking_train', 'count_booking_test']].sum()/1000\n\nprint 'booking ratio for train set: ', daily_stats.count_booking_train.sum() * 1.0 \\\n/ (daily_stats.count_click.sum() + daily_stats.count_booking_train.sum())\n\nprint 'daily booking in train set: ', daily_stats.count_booking_train.sum() * 1.0 \\\n/ len(daily_stats[daily_stats.count_booking_train != 0])\n\nprint 'daily click in train set: ', daily_stats.count_click.sum() * 1.0 \\\n/ len(daily_stats[daily_stats.count_click != 0])\n\nprint 'daily booking in test set: ', daily_stats.count_booking_test.sum() * 1.0 \\\n/ len(daily_stats[daily_stats.count_booking_test != 0])", "Monthly stats", "monthly_number_stats_booking_train = (daily_stats.groupby((\"year\", \"month\"))[\"count_booking_train\"].sum()/1000)\nmonthly_number_stats_click_train = (daily_stats.groupby((\"year\", \"month\"))[\"count_click\"].sum()/1000)\nmonthly_number_stats_booking_test = (daily_stats.groupby((\"year\", \"month\"))[\"count_booking_test\"].sum()/1000)\n\nfig = monthly_number_stats_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))\nmonthly_number_stats_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))\nmonthly_number_stats_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))\n\nfig.legend()\nfig.set_title(\"Total Booking per Month\")\nfig.set_ylabel(\"Thousands of Bookings/Clicks\")\nfig.set_xlabel(\"(Year , Month)\" )", "Daily stats -- weekdays", "import locale, calendar\n\nlocale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.tight_layout()\nfig.set_size_inches(18.5,5.5)\n\ndow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats.index.dayofweek)\ndow_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])\naxes[0].set_title(\"Total number of bookings by Week day\")\naxes[0].set_ylabel(\"Nubmer of bookings (Thousands)\")\n\ndow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats[daily_stats.count_click!=0].index.dayofweek)\ndow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats[daily_stats.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])\naxes[1].set_title(\"Total number of clicks by Week day\")\naxes[1].set_ylabel(\"Nubmer of clicks (Thousands)\")", "There are weekly pattern in booking time, high from Monday to Fri, low in the Friday and weekend.\nMonthly stats (Checkin and Checkout)", "table = 
'public.srch_ci_daily_stats'\n\ndaily_stats_ci = get_dataframe(\n '''select * from %s where year between 2013 and 2016''' % table\n )\n\ndaily_stats_ci.index = pd.to_datetime(daily_stats_ci.year*10000 + daily_stats_ci.month*100 + daily_stats_ci.day, format='%Y%m%d')\n\ntable = 'public.srch_co_daily_stats'\n\ndaily_stats_co = get_dataframe(\n '''select * from %s where year between 2013 and 2016''' % table\n )\n\ndaily_stats_co.index = pd.to_datetime(daily_stats_co.year*10000 + daily_stats_co.month*100 + daily_stats_co.day, format='%Y%m%d')\n\nmonthly_number_stats_ci_booking_train = (daily_stats_ci.groupby((\"year\", \"month\"))[\"count_booking_train\"].sum()/1000)\nmonthly_number_stats_ci_click_train = (daily_stats_ci.groupby((\"year\", \"month\"))[\"count_click\"].sum()/1000)\nmonthly_number_stats_ci_booking_test = (daily_stats_ci.groupby((\"year\", \"month\"))[\"count_booking_test\"].sum()/1000)\n\nmonthly_number_stats_co_booking_train = (daily_stats_co.groupby((\"year\", \"month\"))[\"count_booking_train\"].sum()/1000)\nmonthly_number_stats_co_click_train = (daily_stats_co.groupby((\"year\", \"month\"))[\"count_click\"].sum()/1000)\nmonthly_number_stats_co_booking_test = (daily_stats_co.groupby((\"year\", \"month\"))[\"count_booking_test\"].sum()/1000)\n\nfig = monthly_number_stats_ci_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))\nmonthly_number_stats_ci_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))\nmonthly_number_stats_ci_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))\n\nfig.legend()\nfig.set_title(\"Total Booking per Month (Checkin)\")\nfig.set_ylabel(\"Thousands of Bookings/Clicks\")\nfig.set_xlabel(\"(Year , Month)\" )\n\nfig = monthly_number_stats_co_booking_train.plot(kind='bar', alpha=0.5, figsize=(14, 8))\nmonthly_number_stats_co_click_train.plot(kind='bar', alpha=0.3, color = 'r', figsize=(14, 8))\nmonthly_number_stats_co_booking_test.plot(kind='bar', alpha=0.5, color = 'y', figsize=(14, 8))\n\nfig.legend()\nfig.set_title(\"Total Booking per Month (Checkout)\")\nfig.set_ylabel(\"Thousands of Bookings/Clicks\")\nfig.set_xlabel(\"(Year , Month)\" )", "Daily stats -- weekdays (Checkin and Checkout)", "import locale, calendar\n\nlocale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.tight_layout()\nfig.set_size_inches(18.5,5.5)\n\ndow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_ci.index.dayofweek)\ndow_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats_ci.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])\naxes[0].set_title(\"Total number of bookings by Week day (Checkin)\")\naxes[0].set_ylabel(\"Nubmer of bookings (Thousands)\")\n\ndow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_ci[daily_stats_ci.count_click!=0].index.dayofweek)\ndow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats_ci[daily_stats_ci.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])\naxes[1].set_title(\"Total number of clicks by Week day(Checkin)\")\naxes[1].set_ylabel(\"Nubmer of clicks (Thousands)\")\n\nimport locale, calendar\n\nlocale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.tight_layout()\nfig.set_size_inches(18.5,5.5)\n\ndow = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_co.index.dayofweek)\ndow_order = map(lambda x: 
calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats_co.count_booking/1000, groupby=dow, order=dow_order, ax=axes[0])\naxes[0].set_title(\"Total number of bookings by Week day (Checkout)\")\naxes[0].set_ylabel(\"Number of bookings (Thousands)\")\n\ndow_clicks = map(lambda x: calendar.day_abbr[x].capitalize(), daily_stats_co[daily_stats_co.count_click!=0].index.dayofweek)\ndow_clicks_order = map(lambda x: calendar.day_abbr[x].capitalize(), np.arange(0,7))\n\nsns.boxplot(daily_stats_co[daily_stats_co.count_click!=0].count_click/1000., groupby=dow_clicks, order=dow_clicks_order, ax=axes[1])\naxes[1].set_title(\"Total number of clicks by Week day (Checkout)\")\naxes[1].set_ylabel(\"Number of clicks (Thousands)\")", "There are weekly patterns in checkin and checkout as well: checkins are highest on Friday and checkouts are highest on Sunday." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tritemio/multispot_paper
usALEX-5samples-E-corrected-all-ph.ipynb
mit
[ "usALEX-5samples - Template\n\nThis notebook is executed through 8-spots paper analysis.\nFor a direct execution, uncomment the cell below.", "# data_id = \"7d\"", "Load software and filenames definitions", "from fretbursts import *\n\ninit_notebook()\nfrom IPython.display import display", "Data folder:", "data_dir = './data/singlespot/'\n\nimport os\ndata_dir = os.path.abspath(data_dir) + '/'\nassert os.path.exists(data_dir), \"Path '%s' does not exist.\" % data_dir", "List of data files:", "from glob import glob\nfile_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)\n## Selection for POLIMI 2012-11-26 datatset\nlabels = ['17d', '27d', '7d', '12d', '22d']\nfiles_dict = {lab: fname for lab, fname in zip(labels, file_list)}\nfiles_dict\n\ndata_id", "Data load\nInitial loading of the data:", "d = loader.photon_hdf5(filename=files_dict[data_id])", "Load the leakage coefficient from disk:", "leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'\nleakage = np.loadtxt(leakage_coeff_fname)\n\nprint('Leakage coefficient:', leakage)", "Load the direct excitation coefficient ($d_{exAA}$) from disk:", "dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'\ndir_ex_aa = np.loadtxt(dir_ex_coeff_fname)\n\nprint('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)", "Load the gamma-factor ($\\gamma$) from disk:", "gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'\ngamma = np.loadtxt(gamma_fname)\n\nprint('Gamma-factor:', gamma)", "Update d with the correction coefficients:", "d.leakage = leakage\nd.dir_ex = dir_ex_aa\nd.gamma = gamma", "Laser alternation selection\nAt this point we have only the timestamps and the detector numbers:", "d.ph_times_t[0][:3], d.ph_times_t[0][-3:]#, d.det_t\n\nprint('First and last timestamps: {:10,} {:10,}'.format(d.ph_times_t[0][0], d.ph_times_t[0][-1]))\nprint('Total number of timestamps: {:10,}'.format(d.ph_times_t[0].size))", "We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:", "d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)", "We should check if everithing is OK with an alternation histogram:", "plot_alternation_hist(d)", "If the plot looks good we can apply the parameters with:", "loader.alex_apply_period(d)\n\nprint('D+A photons in D-excitation period: {:10,}'.format(d.D_ex[0].sum()))\nprint('D+A photons in A-excitation period: {:10,}'.format(d.A_ex[0].sum()))", "Measurements infos\nAll the measurement data is in the d variable. 
We can print it:", "d", "Or check the measurements duration:", "d.time_max", "Compute background\nCompute the background using automatic threshold:", "d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)\n\ndplot(d, timetrace_bg)\n\nd.rate_m, d.rate_dd, d.rate_ad, d.rate_aa", "Burst search and selection", "d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))\n\nprint(d.ph_sel)\ndplot(d, hist_fret);\n\n# if data_id in ['7d', '27d']:\n# ds = d.select_bursts(select_bursts.size, th1=20)\n# else:\n# ds = d.select_bursts(select_bursts.size, th1=30)\n\nds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)\n\nn_bursts_all = ds.num_bursts[0]\n\ndef select_and_plot_ES(fret_sel, do_sel):\n ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)\n ds_do = ds.select_bursts(select_bursts.ES, **do_sel)\n bpl.plot_ES_selection(ax, **fret_sel)\n bpl.plot_ES_selection(ax, **do_sel) \n return ds_fret, ds_do\n\nax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)\n\nif data_id == '7d':\n fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)\n do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) \n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n \nelif data_id == '12d':\n fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)\n do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '17d':\n fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)\n do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '22d':\n fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nelif data_id == '27d':\n fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nn_bursts_do = ds_do.num_bursts[0]\nn_bursts_fret = ds_fret.num_bursts[0]\n\nn_bursts_do, n_bursts_fret\n\nd_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)\nprint('D-only fraction:', d_only_frac)\n\ndplot(ds_fret, hist2d_alex, scatter_alpha=0.1);\n\ndplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);", "Donor Leakage fit", "bandwidth = 0.03\n\nE_range_do = (-0.1, 0.15)\nE_ax = np.r_[-0.2:0.401:0.0002]\n\nE_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size', \n x_range=E_range_do, x_ax=E_ax, save_fitter=True)\n\nmfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])\nplt.xlim(-0.3, 0.5)\nprint(\"%s: E_peak = %.2f%%\" % (ds.ph_sel, E_pr_do_kde*100))", "Burst sizes", "nt_th1 = 50\n\ndplot(ds_fret, hist_size, which='all', add_naa=False)\nxlim(-0, 250)\nplt.axvline(nt_th1)\n\nTh_nt = np.arange(35, 120)\nnt_th = np.zeros(Th_nt.size)\nfor i, th in enumerate(Th_nt):\n ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)\n nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th\n\nplt.figure()\nplot(Th_nt, nt_th)\nplt.axvline(nt_th1)\n\nnt_mean = nt_th[np.where(Th_nt == nt_th1)][0]\nnt_mean", "Fret fit\nMax position of the Kernel Density Estimation (KDE):", "E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')\nE_fitter = ds_fret.E_fitter\n\nE_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nE_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))\n\nE_fitter.fit_res[0].params.pretty_print()\n\nfig, ax = 
plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(E_fitter, ax=ax[0])\nmfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))\ndisplay(E_fitter.params*100)", "Weighted mean of $E$ of each burst:", "ds_fret.fit_E_m(weights='size')", "Gaussian fit (no weights):", "ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)", "Gaussian fit (using burst size as weights):", "ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')\n\nE_kde_w = E_fitter.kde_max_pos[0]\nE_gauss_w = E_fitter.params.loc[0, 'center']\nE_gauss_w_sig = E_fitter.params.loc[0, 'sigma']\nE_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))\nE_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr\nE_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr", "Stoichiometry fit\nMax position of the Kernel Density Estimation (KDE):", "S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)\nS_fitter = ds_fret.S_fitter\n\nS_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nS_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(S_fitter, ax=ax[0])\nmfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))\ndisplay(S_fitter.params*100)\n\nS_kde = S_fitter.kde_max_pos[0]\nS_gauss = S_fitter.params.loc[0, 'center']\nS_gauss_sig = S_fitter.params.loc[0, 'sigma']\nS_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))\nS_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr\nS_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr", "The Maximum likelihood fit for a Gaussian population is the mean:", "S = ds_fret.S[0]\nS_ml_fit = (S.mean(), S.std())\nS_ml_fit", "Computing the weighted mean and weighted standard deviation we get:", "weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)\nS_mean = np.dot(weights, S)/weights.sum()\nS_std_dev = np.sqrt(\n np.dot(weights, (S - S_mean)**2)/weights.sum())\nS_wmean_fit = [S_mean, S_std_dev]\nS_wmean_fit", "Save data to file", "sample = data_id", "The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.", "variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '\n 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '\n 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '\n 'E_pr_do_kde nt_mean\\n')", "This is just a trick to format the different variables:", "variables_csv = variables.replace(' ', ',')\nfmt_float = '{%s:.6f}'\nfmt_int = '{%s:d}'\nfmt_str = '{%s}'\nfmt_dict = {**{'sample': fmt_str}, \n **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}\nvar_dict = {name: eval(name) for name in variables.split()}\nvar_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\\n'\ndata_str = var_fmt.format(**var_dict)\n\nprint(variables_csv)\nprint(data_str)\n\n# NOTE: The file name should be the notebook name but with .csv extension\nwith open('results/usALEX-5samples-E-corrected-all-ph.csv', 'a') as f:\n f.seek(0, 2)\n if f.tell() == 0:\n f.write(variables_csv)\n f.write(data_str)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amueller/odsc-masterclass-2017-morning
notebooks/00 - Data Loading.ipynb
mit
[ "Data Loading\nGet some data to play with", "from sklearn.datasets import load_digits\nimport numpy as np\ndigits = load_digits()\ndigits.keys()\n\ndigits.data.shape\n\ndigits.data.shape\n\ndigits.target.shape\n\ndigits.target\n\nnp.bincount(digits.target)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# %matplotlib notebook <- interactive interface\n\nplt.matshow(digits.data[0].reshape(8, 8), cmap=plt.cm.Greys)\n\ndigits.target[0]\n\nfig, axes = plt.subplots(4, 4)\nfor x, y, ax in zip(digits.data, digits.target, axes.ravel()):\n ax.set_title(y)\n ax.imshow(x.reshape(8, 8), cmap=\"gray_r\")\n ax.set_xticks(())\n ax.set_yticks(())\nplt.tight_layout()", "Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)\nSplit the data to get going", "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(digits.data,\n digits.target, test_size=0.25, random_state=1)\n\ndigits.data.shape\n\nX_train.shape\n\nX_test.shape", "Exercises\nLoad the iris dataset from the sklearn.datasets module using the load_iris function.\nThe function returns a dictionary-like object that has the same attributes as digits.\nWhat is the number of classes, features and data points in this dataset?\nUse a scatterplot to visualize the dataset.\nYou can look at DESCR attribute to learn more about the dataset.\nUsually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:\npython\nimport sklearn.datasets\nimport os\niris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')\nTry loading the data from there using pandas pd.read_csv method.", "# %load solutions/load_iris.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
neeasthana/ML-SQL
ML-SQL/Pyparsing tutorial.ipynb
gpl-3.0
[ "Pyparsing Tutorial to capture ML-SQL language\nAuthors\nWritten by: Neeraj Asthana (under Professor Robert Brunner)\nUniversity of Illinois at Urbana-Champaign\nSummer 2016\nAcknowledgements\nFollowed Tutorial at: http://www.onlamp.com/lpt/a/6435\nDescription\nThis notebook is meant to experiment with pyparsing in order to abstract the process for use with the ML-SQL language. The goal is to be able to understand ML-SQL syntax and port commands to actionable directives in Python.\n\nLibraries", "from pyparsing import Word, Literal, alphas, Optional, OneOrMore, Group", "Phone number parser\nMentioned in the tutorial\nGrammer: \n- number :: '0'.. '9'*\n- phoneNumber :: [ '(' number ')' ] number '-' number", "#Definitions of literals\ndash = Literal( \"-\" )\nlparen = Literal( \"(\" )\nrparen = Literal( \")\" )\n\n#Variable lengths and patterns of number => Word token\ndigits = \"0123456789\"\nnumber = Word( digits )\n\n#Define phone number with And (+'s)\n#Literals can also be defined with direct strings\nphoneNumber = lparen + number + rparen + number + dash + number\n\n#Create a results name for easy access\nareacode = number.setResultsName(\"areacode\")\n\n#Make the area code optional\nphoneNumber = Optional( \"(\" + areacode + \")\" ) + number + \"-\" + number\n\n#List of phone numbers\nphoneNumberList = OneOrMore( phoneNumber )\n\n#Using the grammer\ninputString = \"(978) 844-0961\"\ndata = phoneNumber.parseString( inputString )\n\ndata.areacode\n\n#Bad input\ninputStringBad = \"978) 844-0961\"\n\ndata2 = phoneNumber.parseString( inputStringBad )", "Chemical Formula parser\nMentioned in the tutorial\nGrammer\n- integer :: '0'..'9'+\n- cap :: 'A'..'Z'\n- lower :: 'a'..'z'\n- elementSymbol :: cap lower*\n- elementRef :: elementSymbol [ integer ]\n- formula :: elementRef+", "#Define Grammer\ncaps = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\nlowers = caps.lower()\ndigits = \"0123456789\"\n\nelement = Word( caps, lowers )\n\n#Groups elements so that element and numbers appear together\nelementRef = Group( element + Optional( Word( digits ), default=\"1\" ) )\nformula = OneOrMore( elementRef )\n\ntestString = \"CO2\"\nelements = formula.parseString( testString )\nprint(elements)\n\ntests = [ \"H2O\", \"C6H5OH\", \"NaCl\" ]\nfor t in tests:\n try:\n results = formula.parseString( t )\n print (t,\"->\", results)\n except ParseException as pe:\n print (pe)\n else:\n wt = sum( [atomicWeight[elem]*int(qty) for elem,qty in results] )\n print (\"(%.3f)\" % wt)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
deepchem/deepchem
examples/tutorials/Physics_Informed_Neural_Networks.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/VIGNESHinZONE/Beginners-level-ML-projects/blob/master/Writing_Tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nTutorial 33: Physics Informed Neural Networks using JaxModel & PINN_Model\n\nVignesh Venkataraman\n\nContents\n\nPhysics Informed Neural Networks\nSetup\nBrief about Jax and Autodiff\nBurger's Equation\nData Visualisation\nExplanation of the Solution using Jax\nUsage of PINN Model\nVisualize the final results\n\nPhysics Informed Neural Networks\nPINNs was introduced by Maziar Raissi et. al in their paper Physics Informed Deep Learning (Part I): Data-driven\nSolutions of Nonlinear Partial Differential Equations which are used for solving supervised learning tasks and also follow an underlying differential equation derived from understanding the Physics. In more simple terms, we try solving a differential equation with a neural network and using the differential equation as the regulariser in the loss function.\n<br>\n\nHere is an illustration of PINNs using a simple differential equation-\n$\\quad \\quad \\quad \\frac{df}{dt} = f(u, t), \\quad $ where initial condition is $\\ \\ u(t=0) = u_0$\n<br>\nWe approximate function $u(t)$ using a Neural Network as $NN(t)$ and apply the following loss function - \n<br>\nInitial Loss: $\\quad L_0 = (NN(t=0) - u_0)^2$\nRegulariser Loss:= $\\quad L_r = | \\frac{d NN (t) }{d t} - f(NN(t), t) |$\n<br>\nAnd we minimise the $Total Loss$ using Backpropagation-\n<br>\nTotal Loss = Initial Loss + Regulariser Loss\n<br>\nHere is a technical definition of PINNs taken from the author's official blog-\n$\\quad \\quad u_t + \\mathcal{N}[u] = 0,\\ x \\in \\Omega, \\ t\\in[0,T],$\n<br>\nwhere $\\ u(t,x) \\ $ denotes the latent (hidden) solution, $\\ N[โ‹…] \\ $ is a nonlinear differential operator, and $ฮฉ$ is a subset of $\\ \\mathbb{R}^D \\ $ , and proceed by approximating $u(t,x)$ by a deep neural network. We define $\\ f(t,x) \\ $ to be given by\n<br>\n$\\quad \\quad f := u_t + \\mathcal{N}[u],$\n<br>\nThis assumption results in a physics informed neural network $ \\ f(t,x) \\ $. This network can be derived by the calculus on computational graphs: Backpropagation.\n<br>\n\nUseful Resources to learn more about PINNs\n\nMaziar Raissi et. al. official blog on PINNs\nChris Rackauckas's lecture on PINNs lecture 3: these lectures are in Julia programming language but still are a great source of learning.\n\nSetup", "### In case if you are running this notebook in Local Machine and have CUDA installed then uncomment this below line. ###\n### Change the cuda version depending on your setup. Here Cuda 11.x is represented as `cuda111`\n# !pip install --upgrade pip\n# !pip install --upgrade \"jax[cuda111]\" -f https://storage.googleapis.com/jax-releases/jax_releases.html\n!pip install --pre deepchem[jax]\n!pip install pyDOE\n\nimport deepchem as dc\nimport os\n\nPINNs_URL = \"https://deepchemdata.s3.us-west-1.amazonaws.com/datasets/raissi_pinns_data.tar.gz\"\ndc.utils.data_utils.download_url(\n url=PINNs_URL, dest_dir=os.getcwd())\ntargz_file = os.path.join(os.getcwd(), 'raissi_pinns_data.tar.gz')\ndc.utils.data_utils.untargz_file(targz_file, os.getcwd())\n\n", "Burgers Equation\nBurgersโ€™ equation is a partial differential equation that was originally proposed as a simplified model of turbulence as exhibited by the full-fledged Navier-Stokes equations. 
It is a nonlinear equation for which exact solutions are known and is therefore important as a benchmark problem for numerical methods. More Refrence\n<br>\nHere is the differential Equation we are trying to solve\n$\\begin{array}{l}\n \\ \\ \\ u_t + u u_x - (0.01/\\pi) u_{xx} = 0,\\ \\ \\ x \\in [-1,1],\\ \\ \\ t \\in [0,1]\n \\end{array}$\n<br>\nHere are the initial conditions\n$\\ \\ \\ u(x, 0) = -\\sin(\\pi x),$\n$\\ \\ \\ u(-1, t) = u(1, t) = 0.0$\n<br>\nNow let us define:\n$\n \\ \\ \\ f := u_t + u u_x - (0.01/\\pi) u_{xx},\n $\nand we approximate $u(x, t)$ using Neural Network as $NN(\\theta, x, t)$ where $\\theta$ are the weights of neural networks \n<br>\n\n</br>\nNow here are the three main data points that will be used for training our Neural Network to approximate $u(x,t)$\n\n\nWe will train points lying between $x \\in [-1,1]$ and $t=0$ to follow as part of the L2 Loss\n$min\\ \\ _\\theta \\ \\ (NN(\\theta, x, t) + \\sin(\\pi x))^2$\n\n\n<br>\n\n\nWe will train points lying between $t \\in [0,1]$ and $x= \\pm1 $ as part of the L2 Loss\n$min\\ \\ _\\theta \\ \\ (NN(\\theta, x, t) + \\sin(\\pi x))^2$\n\n\n<br>\n\n\nWe will train points lying between $x \\in [-1,1],\\ \\ \\ t \\in [0,1]$ as part of the regulariser loss\n$f(\\theta, x, t):= \\ \\ \\frac{\\partial NN(\\theta, x, t)}{\\partial t} + NN(\\theta, x, t)\\frac{\\partial NN(\\theta, x, t)}{\\partial x} - (0.01/\\pi)\\frac{\\partial^2 NN(\\theta, x, t)}{\\partial^2 x} $\n$min\\ \\ _\\theta \\ \\ f(\\theta, x, t)$\n\n\n</br>\nIn this tutorial, we will be combing data conditions 1 and 2 under the same L2Loss\nData Visualisation of the Burgers Equation\nNow lets load the Burger's Data provided from the author. pre_process_shock_data is used to load the data in a format suitable for Neural Networks. Understanding this function is not neccesary for working through the tutorials.", "## Data Preprocessing\n\n# Create Dataset\n\nimport scipy.io\nfrom scipy.interpolate import griddata\nfrom pyDOE import lhs\nimport numpy as np\nimport random\nrandom.seed(0)\nnp.random.seed(0)\n\ndef pre_process_shock_data(N_u, N_f, t, x, Exact):\n\n\n X, T = np.meshgrid(x,t)\n\n X_star = np.hstack((X.flatten()[:,None], T.flatten()[:,None]))\n u_star = Exact.flatten()[:,None] \n\n # Doman bounds\n lb = X_star.min(0)\n ub = X_star.max(0) \n\n xx1 = np.hstack((X[0:1,:].T, T[0:1,:].T))\n uu1 = Exact[0:1,:].T\n xx2 = np.hstack((X[:,0:1], T[:,0:1]))\n uu2 = Exact[:,0:1]\n xx3 = np.hstack((X[:,-1:], T[:,-1:]))\n uu3 = Exact[:,-1:]\n\n X_u_train = np.vstack([xx1, xx2, xx3])\n X_f_train = lb + (ub-lb)*lhs(2, N_f)\n X_f_train = np.vstack((X_f_train, X_u_train))\n u_train = np.vstack([uu1, uu2, uu3])\n\n idx = np.random.choice(X_u_train.shape[0], N_u, replace=False)\n X_u_train = X_u_train[idx, :]\n u_train = u_train[idx,:]\n return X_u_train, u_train, X_f_train, X_star, (X, T)\n\n\nmat_data = scipy.io.loadmat(os.path.join(os.getcwd(), 'PINNs/burgers_shock.mat'))\n\nN_u = 100\nN_f = 10000\n\nt = mat_data['t'].flatten()[:,None]\nx = mat_data['x'].flatten()[:,None]\nExact = np.real(mat_data['usol']).T\n\nlabeled_X, labeled_y, unlabeled_X, full_domain, meshgrid = pre_process_shock_data(N_u, N_f, t, x, Exact)", "We have three Numpy arrays labeled_X, labeled_y and unlabeled_X which will be used for training our neural network,\n1) labeled_X consists of $x \\in [-1,1]$ & $t=0$ and $t \\in [0,1]$ & $x= \\pm1 $. 
labeled_y has the value of $u(x, t)$:\nLet us verify that labeled_X & labeled_y also consist of data points satisfying the condition of \n$\\ \\ \\ u(x, 0) = -\\sin(\\pi x), \\quad \\quad x \\in [-1,1]$ & $t=0$", "import matplotlib.pyplot as plt\nind = labeled_X[:, 1] == 0.0\nprint(f\"Number of Datapoints with t = 0 is {len(labeled_X[labeled_X[:, 1] == 0.0])}\")\nplt.scatter(labeled_X[ind][:, 0], labeled_y[ind], color = 'red', marker = \"o\", alpha = 0.3)\n", "Let us verify that labeled_X & labeled_y also consist of data points satisfying the condition of \n$\\ \\ \\ u(-1, t) = u(1, t) = 0.0, \\quad \\quad t \\in [0,1]$ & $x= \\pm1$", "ind = np.abs(labeled_X[:, 0]) == 1.0\nprint(f\"Number of Datapoints with |x| = 1 is {len(labeled_X[np.abs(labeled_X[:, 0]) == 1.0])}\")\nnp.max(labeled_y[ind]), np.min(labeled_y[ind]), np.mean(labeled_y[ind])\n", "Explanation of the solution\nWe will be using Deepchem's PINNModel class, which is built on the Jax library, to solve Burger's Equation. We will approximate $u(x, t)$ using a Neural Network represented as $NN(\\theta, x, t)$.\nFor our purpose, we will be using the Haiku library for building neural networks. Due to the functional nature of Jax, we define a neural network with two things\n\nParameters - which act as the weight matrices, upon which Backpropagation is applied for optimisation.\nforward_fn - This defines how the weights are used for computing the outputs. Ex- Feedforward, Convolution, etc", "import jax\nimport jax.numpy as jnp\nimport haiku as hk\n\ndef f(x, t):\n x = jnp.hstack([x, t])\n net = hk.nets.MLP(output_sizes = [20, 20, 20, 20, 20, 20, 20, 20, 1],\n activation = jnp.tanh)\n return net(x)\n\ninit_params, forward_fn = hk.transform(f)\nrng = jax.random.PRNGKey(500)\nx_init, t_init = jnp.split(labeled_X, 2, 1)\nparams = init_params(rng, x_init, t_init)\n", "As per the docstrings of PINNModel, we require two additional functions in the given format - \n\nCreate a gradient_fn which tells us how to compute the gradients of the function-\n```\n\n\ndef gradient_fn(forward_fn, loss_outputs, initial_data):\n def model_loss(params, target, weights, rng, ...):\n # write code using the arguments.\n # ... indicates the variable number of positional arguments.\n return\n return model_loss\n```\n\n\n\n\n\nAnd to understand more about PINNModel, you can see that the same gradient_fn gets called in the code for computing the gradients.\nFor our purpose, we have two variables $(x, t)$ and we need to tell the PINN Model how to compute the final gradient. For carrying out this process we will be using these main features from the jax library for calculating the loss - \n\nvmap - This is for parallelising computations in batches. We will process each row of the dataset, but it will get batched automatically using this feature. \njacrev - This is used to calculate the jacobian matrix. In our case, the output is a single dimension and hence it can be thought of as the gradient function.
We could directly use jax's grad function but using jacrev simplifies the array shapes and hence is easier.\n\nWe need to compute two losses for solving our differential equation-\n\nInitial Loss\n\nu_pred = forward_fn(params, rng, x_b, t_b)\ninitial_loss = jnp.mean((u_pred - boundary_target) ** 2)\n\nRegulariser Loss\n\nThis is slightly complicated as we need to compute \n$f(\\theta, x, t):= \\ \\ \\frac{\\partial NN(\\theta, x, t)}{\\partial t} + NN(\\theta, x, t)\\frac{\\partial NN(\\theta, x, t)}{\\partial x} - (0.01/\\pi)\\frac{\\partial^2 NN(\\theta, x, t)}{\\partial^2 x} $\nThe partial derivative operation in the first and second terms can be calculated using jacrev function-\nu_x, u_t = jacrev(forward_fn, argnums=(2, 3))(params, rng, x, t)\nThe second partial derivative operation in the third term can be applying jacrev twice-\nu_xx = jacrev(jacrev(forward_fn, argnums=2), argnums=2)(params, rng, x, t)", "from jax import jacrev\nimport functools\n\ndef gradient_fn(forward_fn, loss_outputs, initial_data):\n \"\"\"\n This function calls the gradient function, to implement the backpropogation\n \"\"\"\n boundary_data_x = initial_data['labeled_x']\n boundary_data_t = initial_data['labeled_t']\n\n boundary_target = initial_data['labeled_u']\n\n @jax.jit\n def model_loss(params, target, weights, rng, x_train, t_train):\n\n @functools.partial(jax.vmap, in_axes=(None, 0, 0))\n def small_loss(params, x, t):\n u = forward_fn(params, rng, x, t)\n u_x, u_t = jacrev(forward_fn, argnums=(2, 3))(params, rng, x, t)\n u_xx = jacrev(jacrev(forward_fn, argnums=2), argnums=2)(params, rng, x, t)\n con = 0.01/np.pi\n return u_t + u * u_x - con * u_xx\n\n u_pred = forward_fn(params, rng, boundary_data_x, boundary_data_t)\n f_pred = small_loss(params, x_train, t_train)\n loss_u = jnp.mean((u_pred - boundary_target) ** 2)\n loss_f = jnp.mean((f_pred) ** 2)\n\n return loss_u + loss_f\n\n return model_loss\n\n", "We also need to provide an eval_fn in the below-given format for computing the weights\n```\n\n\ndef create_eval_fn(forward_fn, params):\n def eval_model(..., rng=None):\n # write code here using arguments\n\nreturn\n return eval_model\n```\n\n\n\n\nLike previously we have two arguments for our model $(x, t)$ which get passed in function", "# Tells the neural network on how to perform calculation during inference\ndef create_eval_fn(forward_fn, params):\n \"\"\"\n Calls the function to evaluate the model\n \"\"\"\n @jax.jit\n def eval_model(x, t, rng=None):\n\n res = forward_fn(params, rng, x, t)\n return jnp.squeeze(res)\n return eval_model\n", "Usage of PINN Model\nWe will be using optax library for performing the optimisations. 
PINNModel executes the codes for training the models.", "import optax\nfrom deepchem.models import PINNModel\n\nscheduler = optax.piecewise_constant_schedule(\n init_value=1e-2, \n boundaries_and_scales={2500: 0.1, 5000: 0.1, 7500: 0.1})\n\nopt = optax.chain(\n optax.clip_by_global_norm(1.00),\n optax.scale_by_adam(b1=0.9, b2=0.99),\n optax.scale_by_schedule(scheduler),\n optax.scale(-1.0))\n\nlabeled_x, labeled_t = jnp.split(labeled_X, 2, 1)\n\nboundary_data = {\n 'labeled_x': labeled_x, 'labeled_t':labeled_t, 'labeled_u': labeled_y\n}\n\nj_m = PINNModel( \n forward_fn = forward_fn,\n params = params,\n initial_data = boundary_data,\n batch_size = 1000,\n optimizer = opt,\n grad_fn = gradient_fn,\n eval_fn = create_eval_fn,\n deterministic = True,\n log_frequency = 1000\n )\n\ndataset = dc.data.NumpyDataset(unlabeled_X)\nval = j_m.fit(dataset, nb_epochs=500)\n", "Visualize the final results\n\nCode taken from authors for visualisation\nshow both graphs", "test_dataset = dc.data.NumpyDataset(full_domain)\nu_pred = j_m.predict(test_dataset)\nU_pred = griddata(full_domain, u_pred.flatten(), meshgrid, method='cubic')\nError = np.abs(Exact - U_pred)\n\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nfrom scipy.interpolate import griddata\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport matplotlib.gridspec as gridspec\n\n\nfig = plt.figure(figsize=(9, 5))\nax = fig.add_subplot(111)\n\nh = ax.imshow(U_pred.T, interpolation='nearest', cmap='rainbow', \n extent=[t.min(), t.max(), x.min(), x.max()], \n origin='lower', aspect='auto')\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.10)\ncbar = fig.colorbar(h, cax=cax)\ncbar.ax.tick_params(labelsize=15) \n\nax.plot(\n labeled_X[:,1], \n labeled_X[:,0], \n 'kx', label = 'Data (%d points)' % (labeled_y.shape[0]), \n markersize = 4, # marker size doubled\n clip_on = False,\n alpha=1.0\n)\n\nline = np.linspace(x.min(), x.max(), 2)[:,None]\nax.plot(t[25]*np.ones((2,1)), line, 'w-', linewidth = 1)\nax.plot(t[50]*np.ones((2,1)), line, 'w-', linewidth = 1)\nax.plot(t[75]*np.ones((2,1)), line, 'w-', linewidth = 1)\n\nax.set_xlabel('$t$', size=20)\nax.set_ylabel('$x$', size=20)\nax.legend(\n loc='upper center', \n bbox_to_anchor=(0.9, -0.05), \n ncol=5, \n frameon=False, \n prop={'size': 15}\n)\nax.set_title('$u(t,x)$', fontsize = 20) # font size doubled\nax.tick_params(labelsize=15)\n\nplt.show()\n\n\nfig = plt.figure(figsize=(14, 10))\nax = fig.add_subplot(111)\n\ngs1 = gridspec.GridSpec(1, 3)\ngs1.update(top=1-1.0/3.0-0.1, bottom=1.0-2.0/3.0, left=0.1, right=0.9, wspace=0.5)\n\nax = plt.subplot(gs1[0, 0])\nax.plot(x,Exact[25,:], 'b-', linewidth = 2, label = 'Exact') \nax.plot(x,U_pred[25,:], 'r--', linewidth = 2, label = 'Prediction')\nax.set_xlabel('$x$')\nax.set_ylabel('$u(t,x)$') \nax.set_title('$t = 0.25$', fontsize = 15)\nax.axis('square')\nax.set_xlim([-1.1,1.1])\nax.set_ylim([-1.1,1.1])\n\nfor item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +\n ax.get_xticklabels() + ax.get_yticklabels()):\n item.set_fontsize(15)\n\nax = plt.subplot(gs1[0, 1])\nax.plot(x,Exact[50,:], 'b-', linewidth = 2, label = 'Exact') \nax.plot(x,U_pred[50,:], 'r--', linewidth = 2, label = 'Prediction')\nax.set_xlabel('$x$')\nax.set_ylabel('$u(t,x)$')\nax.axis('square')\nax.set_xlim([-1.1,1.1])\nax.set_ylim([-1.1,1.1])\nax.set_title('$t = 0.50$', fontsize = 15)\nax.legend(\n loc='upper center', \n bbox_to_anchor=(0.5, -0.15), \n ncol=5, \n frameon=False, \n prop={'size': 15}\n)\n\nfor item in 
([ax.title, ax.xaxis.label, ax.yaxis.label] +\n ax.get_xticklabels() + ax.get_yticklabels()):\n item.set_fontsize(15)\n\nax = plt.subplot(gs1[0, 2])\nax.plot(x,Exact[75,:], 'b-', linewidth = 2, label = 'Exact') \nax.plot(x,U_pred[75,:], 'r--', linewidth = 2, label = 'Prediction')\nax.set_xlabel('$x$')\nax.set_ylabel('$u(t,x)$')\nax.axis('square')\nax.set_xlim([-1.1,1.1])\nax.set_ylim([-1.1,1.1]) \nax.set_title('$t = 0.75$', fontsize = 15)\n\nfor item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +\n ax.get_xticklabels() + ax.get_yticklabels()):\n item.set_fontsize(15)\n\nplt.show()\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/end_to_end_ml/labs/deploy_keras_ai_platform_babyweight.ipynb
apache-2.0
[ "Deploy and predict with Keras model on Cloud AI Platform.\nLearning Objectives\n\nSetup up the environment\nDeploy trained Keras model to Cloud AI Platform\nOnline predict from model on Cloud AI Platform\nBatch predict from model on Cloud AI Platform\n\nIntroduction\nVerify that you have previously Trained your Keras model. If not, go back to train_keras_ai_platform_babyweight.ipynb create them.\nIn this notebook, we'll be deploying our Keras model to Cloud AI Platform and creating predictions.\nWe will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nSet up environment variables and load necessary libraries\nImport necessary libraries.", "import os", "Lab Task #1: Set environment variables.\nSet environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.", "%%bash\nPROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\n# Change these to try this notebook out\nPROJECT = \"cloud-training-demos\" # TODO 1: Replace with your PROJECT\nBUCKET = PROJECT # defaults to PROJECT\nREGION = \"us-central1\" # TODO 1: Replace with your REGION\n\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"REGION\"] = REGION\nos.environ[\"TFVERSION\"] = \"2.1\"\n\n%%bash\ngcloud config set compute/region $REGION\ngcloud config set ai_platform/region global", "Check our trained model files\nLet's check the directory structure of our outputs of our trained model in folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.", "%%bash\ngsutil ls gs://${BUCKET}/babyweight/trained_model\n\n%%bash\nMODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \\\n | tail -1)\ngsutil ls ${MODEL_LOCATION}", "Lab Task #2: Deploy trained model.\nDeploying the trained model to act as a REST web service is a simple gcloud call. Complete #TODO by providing location of saved_model.pb file to Cloud AI Platform model deployment service. The deployment will take a few minutes.", "%%bash\nMODEL_NAME=\"babyweight\"\nMODEL_VERSION=\"ml_on_gcp\"\nMODEL_LOCATION=# TODO 2: Add GCS path to saved_model.pb file.\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION\"\n# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n# gcloud ai-platform models delete ${MODEL_NAME}\ngcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}\ngcloud ai-platform versions create ${MODEL_VERSION} \\\n --model=${MODEL_NAME} \\\n --origin=${MODEL_LOCATION} \\\n --runtime-version=2.1 \\\n --python-version=3.7", "Lab Task #3: Use model to make online prediction.\nComplete __#TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction.\nPython API\nWe can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. 
The order of the responses are the order of the instances.", "from oauth2client.client import GoogleCredentials\nimport requests\nimport json\n\nMODEL_NAME = # TODO 3a: Add model name\nMODEL_VERSION = # TODO 3a: Add model version\n\ntoken = GoogleCredentials.get_application_default().get_access_token().access_token\napi = \"https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict\" \\\n .format(PROJECT, MODEL_NAME, MODEL_VERSION)\nheaders = {\"Authorization\": \"Bearer \" + token }\ndata = {\n \"instances\": [\n {\n \"is_male\": \"True\",\n \"mother_age\": 26.0,\n \"plurality\": \"Single(1)\",\n \"gestation_weeks\": 39\n },\n {\n \"is_male\": \"False\",\n \"mother_age\": 29.0,\n \"plurality\": \"Single(1)\",\n \"gestation_weeks\": 38\n },\n {\n \"is_male\": \"True\",\n \"mother_age\": 26.0,\n \"plurality\": \"Triplets(3)\",\n \"gestation_weeks\": 39\n },\n # TODO 3a: Create another instance\n ]\n}\nresponse = requests.post(api, json=data, headers=headers)\nprint(response.content)", "The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).\ngcloud shell API\nInstead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud.", "%%writefile inputs.json\n{\"is_male\": \"True\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}\n{\"is_male\": \"False\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}", "Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.", "%%bash\ngcloud ai-platform predict \\\n --model=babyweight \\\n --json-instances=inputs.json \\\n --version=# TODO 3b: Add model version", "Lab Task #4: Use model to make batch prediction.\nBatch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete __#TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction.", "%%bash\nINPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json\nOUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs\ngsutil cp inputs.json $INPUT\ngsutil -m rm -rf $OUTPUT \ngcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \\\n --data-format=TEXT \\\n --region ${REGION} \\\n --input-paths=$INPUT \\\n --output-path=$OUTPUT \\\n --model=babyweight \\\n --version=# TODO 4: Add model version", "Lab Summary:\nIn this lab, we set up the environment, deployed a trained Keras model to Cloud AI Platform, online predicted from deployed model on Cloud AI Platform, and batch predicted from deployed model on Cloud AI Platform.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/guide/keras/transfer_learning.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Transfer learning and fine-tuning\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/transfer_learning\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/transfer_learning.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/keras-team/keras-io/blob/master/guides/transfer_learning.py\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/transfer_learning.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nSetup", "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras", "Introduction\nTransfer learning consists of taking features learned on one problem, and\nleveraging them on a new, similar problem. For instance, features from a model that has\nlearned to identify racoons may be useful to kick-start a model meant to identify\n tanukis.\nTransfer learning is usually done for tasks where your dataset has too little data to\n train a full-scale model from scratch.\nThe most common incarnation of transfer learning in the context of deep learning is the\n following workflow:\n\nTake layers from a previously trained model.\nFreeze them, so as to avoid destroying any of the information they contain during\n future training rounds.\nAdd some new, trainable layers on top of the frozen layers. They will learn to turn\n the old features into predictions on a new dataset.\nTrain the new layers on your dataset.\n\nA last, optional step, is fine-tuning, which consists of unfreezing the entire\nmodel you obtained above (or part of it), and re-training it on the new data with a\nvery low learning rate. 
This can potentially achieve meaningful improvements, by\n incrementally adapting the pretrained features to the new data.\nFirst, we will go over the Keras trainable API in detail, which underlies most\n transfer learning & fine-tuning workflows.\nThen, we'll demonstrate the typical workflow by taking a model pretrained on the\nImageNet dataset, and retraining it on the Kaggle \"cats vs dogs\" classification\n dataset.\nThis is adapted from\nDeep Learning with Python\n and the 2016 blog post\n\"building powerful image classification models using very little\n data\".\nFreezing layers: understanding the trainable attribute\nLayers & models have three weight attributes:\n\nweights is the list of all weights variables of the layer.\ntrainable_weights is the list of those that are meant to be updated (via gradient\n descent) to minimize the loss during training.\nnon_trainable_weights is the list of those that aren't meant to be trained.\n Typically they are updated by the model during the forward pass.\n\nExample: the Dense layer has 2 trainable weights (kernel & bias)", "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "In general, all weights are trainable weights. The only built-in layer that has\nnon-trainable weights is the BatchNormalization layer. It uses non-trainable weights\n to keep track of the mean and variance of its inputs during training.\nTo learn how to use non-trainable weights in your own custom layers, see the\nguide to writing new layers from scratch.\nExample: the BatchNormalization layer has 2 trainable weights and 2 non-trainable\n weights", "layer = keras.layers.BatchNormalization()\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "Layers & models also feature a boolean attribute trainable. Its value can be changed.\nSetting layer.trainable to False moves all the layer's weights from trainable to\nnon-trainable. 
This is called \"freezing\" the layer: the state of a frozen layer won't\nbe updated during training (either when training with fit() or when training with\n any custom loop that relies on trainable_weights to apply gradient updates).\nExample: setting trainable to False", "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\nlayer.trainable = False # Freeze the layer\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "When a trainable weight becomes non-trainable, its value is no longer updated during\n training.", "# Make a model with 2 layers\nlayer1 = keras.layers.Dense(3, activation=\"relu\")\nlayer2 = keras.layers.Dense(3, activation=\"sigmoid\")\nmodel = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])\n\n# Freeze the first layer\nlayer1.trainable = False\n\n# Keep a copy of the weights of layer1 for later reference\ninitial_layer1_weights_values = layer1.get_weights()\n\n# Train the model\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\nmodel.fit(np.random.random((2, 3)), np.random.random((2, 3)))\n\n# Check that the weights of layer1 have not changed during training\nfinal_layer1_weights_values = layer1.get_weights()\nnp.testing.assert_allclose(\n initial_layer1_weights_values[0], final_layer1_weights_values[0]\n)\nnp.testing.assert_allclose(\n initial_layer1_weights_values[1], final_layer1_weights_values[1]\n)", "Do not confuse the layer.trainable attribute with the argument training in\nlayer.__call__() (which controls whether the layer should run its forward pass in\n inference mode or training mode). For more information, see the\nKeras FAQ.\nRecursive setting of the trainable attribute\nIf you set trainable = False on a model or on any layer that has sublayers,\nall children layers become non-trainable as well.\nExample:", "inner_model = keras.Sequential(\n [\n keras.Input(shape=(3,)),\n keras.layers.Dense(3, activation=\"relu\"),\n keras.layers.Dense(3, activation=\"relu\"),\n ]\n)\n\nmodel = keras.Sequential(\n [keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation=\"sigmoid\"),]\n)\n\nmodel.trainable = False # Freeze the outer model\n\nassert inner_model.trainable == False # All layers in `model` are now frozen\nassert inner_model.layers[0].trainable == False # `trainable` is propagated recursively", "The typical transfer-learning workflow\nThis leads us to how a typical transfer learning workflow can be implemented in Keras:\n\nInstantiate a base model and load pre-trained weights into it.\nFreeze all layers in the base model by setting trainable = False.\nCreate a new model on top of the output of one (or several) layers from the base\n model.\nTrain your new model on your new dataset.\n\nNote that an alternative, more lightweight workflow could also be:\n\nInstantiate a base model and load pre-trained weights into it.\nRun your new dataset through it and record the output of one (or several) layers\n from the base model. This is called feature extraction.\nUse that output as input data for a new, smaller model.\n\nA key advantage of that second workflow is that you only run the base model once on\n your data, rather than once per epoch of training. So it's a lot faster & cheaper.\nAn issue with that second workflow, though, is that it doesn't allow you to dynamically\nmodify the input data of your new model during training, which is required when doing\ndata augmentation, for instance. 
Transfer learning is typically used for tasks when\nyour new dataset has too little data to train a full-scale model from scratch, and in\nsuch scenarios data augmentation is very important. So in what follows, we will focus\n on the first workflow.\nHere's what the first workflow looks like in Keras:\nFirst, instantiate a base model with pre-trained weights.\npython\nbase_model = keras.applications.Xception(\n weights='imagenet', # Load weights pre-trained on ImageNet.\n input_shape=(150, 150, 3),\n include_top=False) # Do not include the ImageNet classifier at the top.\nThen, freeze the base model.\npython\nbase_model.trainable = False\nCreate a new model on top.\n```python\ninputs = keras.Input(shape=(150, 150, 3))\nWe make sure that the base_model is running in inference mode here,\nby passing training=False. This is important for fine-tuning, as you will\nlearn in a few paragraphs.\nx = base_model(inputs, training=False)\nConvert features of shape base_model.output_shape[1:] to vectors\nx = keras.layers.GlobalAveragePooling2D()(x)\nA Dense classifier with a single unit (binary classification)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n```\nTrain the model on new data.\npython\nmodel.compile(optimizer=keras.optimizers.Adam(),\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()])\nmodel.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)\nFine-tuning\nOnce your model has converged on the new data, you can try to unfreeze all or part of\n the base model and retrain the whole model end-to-end with a very low learning rate.\nThis is an optional last step that can potentially give you incremental improvements.\n It could also potentially lead to quick overfitting -- keep that in mind.\nIt is critical to only do this step after the model with frozen layers has been\ntrained to convergence. If you mix randomly-initialized trainable layers with\ntrainable layers that hold pre-trained features, the randomly-initialized layers will\ncause very large gradient updates during training, which will destroy your pre-trained\n features.\nIt's also critical to use a very low learning rate at this stage, because\nyou are training a much larger model than in the first round of training, on a dataset\n that is typically very small.\nAs a result, you are at risk of overfitting very quickly if you apply large weight\n updates. Here, you only want to readapt the pretrained weights in an incremental way.\nThis is how to implement fine-tuning of the whole base model:\n```python\nUnfreeze the base model\nbase_model.trainable = True\nIt's important to recompile your model after you make any changes\nto the trainable attribute of any inner layer, so that your changes\nare take into account\nmodel.compile(optimizer=keras.optimizers.Adam(1e-5), # Very low learning rate\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()])\nTrain end-to-end. Be careful to stop before you overfit!\nmodel.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)\n```\nImportant note about compile() and trainable\nCalling compile() on a model is meant to \"freeze\" the behavior of that model. This\n implies that the trainable\nattribute values at the time the model is compiled should be preserved throughout the\n lifetime of that model,\nuntil compile is called again. 
Hence, if you change any trainable value, make sure\n to call compile() again on your\nmodel for your changes to be taken into account.\nImportant notes about BatchNormalization layer\nMany image models contain BatchNormalization layers. That layer is a special case on\n every imaginable count. Here are a few things to keep in mind.\n\nBatchNormalization contains 2 non-trainable weights that get updated during\ntraining. These are the variables tracking the mean and variance of the inputs.\nWhen you set bn_layer.trainable = False, the BatchNormalization layer will\nrun in inference mode, and will not update its mean & variance statistics. This is not\nthe case for other layers in general, as\nweight trainability & inference/training modes are two orthogonal concepts.\nBut the two are tied in the case of the BatchNormalization layer.\nWhen you unfreeze a model that contains BatchNormalization layers in order to do\nfine-tuning, you should keep the BatchNormalization layers in inference mode by\n passing training=False when calling the base model.\nOtherwise the updates applied to the non-trainable weights will suddenly destroy\nwhat the model has learned.\n\nYou'll see this pattern in action in the end-to-end example at the end of this guide.\nTransfer learning & fine-tuning with a custom training loop\nIf instead of fit(), you are using your own low-level training loop, the workflow\nstays essentially the same. You should be careful to only take into account the list\n model.trainable_weights when applying gradient updates:\n```python\nCreate base model\nbase_model = keras.applications.Xception(\n weights='imagenet',\n input_shape=(150, 150, 3),\n include_top=False)\nFreeze base model\nbase_model.trainable = False\nCreate new model on top.\ninputs = keras.Input(shape=(150, 150, 3))\nx = base_model(inputs, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\nloss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\noptimizer = keras.optimizers.Adam()\nIterate over the batches of a dataset.\nfor inputs, targets in new_dataset:\n # Open a GradientTape.\n with tf.GradientTape() as tape:\n # Forward pass.\n predictions = model(inputs)\n # Compute the loss value for this batch.\n loss_value = loss_fn(targets, predictions)\n# Get gradients of loss wrt the *trainable* weights.\ngradients = tape.gradient(loss_value, model.trainable_weights)\n# Update the weights of the model.\noptimizer.apply_gradients(zip(gradients, model.trainable_weights))\n\n```\nLikewise for fine-tuning.\nAn end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset\nTo solidify these concepts, let's walk you through a concrete end-to-end transfer\nlearning & fine-tuning example. We will load the Xception model, pre-trained on\n ImageNet, and use it on the Kaggle \"cats vs. dogs\" classification dataset.\nGetting the data\nFirst, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset,\nyou'll probably want to use the utility\ntf.keras.preprocessing.image_dataset_from_directory to generate similar labeled\n dataset objects from a set of images on disk filed into class-specific folders.\nTransfer learning is most useful when working with very small datasets. 
To keep our\ndataset small, we will use 40% of the original training data (25,000 images) for\n training, 10% for validation, and 10% for testing.", "import tensorflow_datasets as tfds\n\ntfds.disable_progress_bar()\n\ntrain_ds, validation_ds, test_ds = tfds.load(\n \"cats_vs_dogs\",\n # Reserve 10% for validation and 10% for test\n split=[\"train[:40%]\", \"train[40%:50%]\", \"train[50%:60%]\"],\n as_supervised=True, # Include labels\n)\n\nprint(\"Number of training samples: %d\" % tf.data.experimental.cardinality(train_ds))\nprint(\n \"Number of validation samples: %d\" % tf.data.experimental.cardinality(validation_ds)\n)\nprint(\"Number of test samples: %d\" % tf.data.experimental.cardinality(test_ds))", "These are the first 9 images in the training dataset -- as you can see, they're all\n different sizes.", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(train_ds.take(9)):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image)\n plt.title(int(label))\n plt.axis(\"off\")", "We can also see that label 1 is \"dog\" and label 0 is \"cat\".\nStandardizing the data\nOur raw images have a variety of sizes. In addition, each pixel consists of 3 integer\nvalues between 0 and 255 (RGB level values). This isn't a great fit for feeding a\n neural network. We need to do 2 things:\n\nStandardize to a fixed image size. We pick 150x150.\nNormalize pixel values between -1 and 1. We'll do this using a Normalization layer as\n part of the model itself.\n\nIn general, it's a good practice to develop models that take raw data as input, as\nopposed to models that take already-preprocessed data. The reason being that, if your\nmodel expects preprocessed data, any time you export your model to use it elsewhere\n(in a web browser, in a mobile app), you'll need to reimplement the exact same\npreprocessing pipeline. This gets very tricky very quickly. So we should do the least\n possible amount of preprocessing before hitting the model.\nHere, we'll do image resizing in the data pipeline (because a deep neural network can\nonly process contiguous batches of data), and we'll do the input value scaling as part\n of the model, when we create it.\nLet's resize images to 150x150:", "size = (150, 150)\n\ntrain_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))\nvalidation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))\ntest_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))", "Besides, let's batch the data and use caching & prefetching to optimize loading speed.", "batch_size = 32\n\ntrain_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)\nvalidation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)\ntest_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)", "Using random data augmentation\nWhen you don't have a large image dataset, it's a good practice to artificially\n introduce sample diversity by applying random yet realistic transformations to\nthe training images, such as random horizontal flipping or small random rotations. 
This\nhelps expose the model to different aspects of the training data while slowing down\n overfitting.", "from tensorflow import keras\nfrom tensorflow.keras import layers\n\ndata_augmentation = keras.Sequential(\n [layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.1),]\n)", "Let's visualize what the first image of the first batch looks like after various random\n transformations:", "import numpy as np\n\nfor images, labels in train_ds.take(1):\n plt.figure(figsize=(10, 10))\n first_image = images[0]\n for i in range(9):\n ax = plt.subplot(3, 3, i + 1)\n augmented_image = data_augmentation(\n tf.expand_dims(first_image, 0), training=True\n )\n plt.imshow(augmented_image[0].numpy().astype(\"int32\"))\n plt.title(int(labels[0]))\n plt.axis(\"off\")", "Build a model\nNow let's built a model that follows the blueprint we've explained earlier.\nNote that:\n\nWe add a Rescaling layer to scale input values (initially in the [0, 255]\n range) to the [-1, 1] range.\nWe add a Dropout layer before the classification layer, for regularization.\nWe make sure to pass training=False when calling the base model, so that\nit runs in inference mode, so that batchnorm statistics don't get updated\neven after we unfreeze the base model for fine-tuning.", "base_model = keras.applications.Xception(\n weights=\"imagenet\", # Load weights pre-trained on ImageNet.\n input_shape=(150, 150, 3),\n include_top=False,\n) # Do not include the ImageNet classifier at the top.\n\n# Freeze the base_model\nbase_model.trainable = False\n\n# Create new model on top\ninputs = keras.Input(shape=(150, 150, 3))\nx = data_augmentation(inputs) # Apply random data augmentation\n\n# Pre-trained Xception weights requires that input be scaled\n# from (0, 255) to a range of (-1., +1.), the rescaling layer\n# outputs: `(inputs * scale) + offset`\nscale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)\nx = scale_layer(x)\n\n# The base model contains batchnorm layers. We want to keep them in inference mode\n# when we unfreeze the base model for fine-tuning, so we make sure that the\n# base_model is running in inference mode here.\nx = base_model(x, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\nx = keras.layers.Dropout(0.2)(x) # Regularize with dropout\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n\nmodel.summary()", "Train the top layer", "model.compile(\n optimizer=keras.optimizers.Adam(),\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 20\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "Do a round of fine-tuning of the entire model\nFinally, let's unfreeze the base model and train the entire model end-to-end with a low\n learning rate.\nImportantly, although the base model becomes trainable, it is still running in\ninference mode since we passed training=False when calling it when we built the\nmodel. This means that the batch normalization layers inside won't update their batch\nstatistics. If they did, they would wreck havoc on the representations learned by the\n model so far.", "# Unfreeze the base_model. Note that it keeps running in inference mode\n# since we passed `training=False` when calling it. 
This means that\n# the batchnorm layers will not update their batch statistics.\n# This prevents the batchnorm layers from undoing all the training\n# we've done so far.\nbase_model.trainable = True\nmodel.summary()\n\nmodel.compile(\n optimizer=keras.optimizers.Adam(1e-5), # Low learning rate\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 10\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "After 10 epochs, fine-tuning gains us a nice improvement here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NuGrid/NuPyCEE
DOC/Teaching/.ipynb_checkpoints/Section2.1-checkpoint.ipynb
bsd-3-clause
[ "Section 2.1: Tracing the origin of C\nResult: Identification of which star is responsible for the origin of C", "import matplotlib.pyplot as plt\nimport sygma\nimport omega\nimport stellab\n\n#loading the observational data module STELLAB\nstellab = stellab.stellab()", "Simulation of the Milky Way", "# OMEGA parameters for MW\nmass_loading = 1 # How much mass is ejected from the galaxy per stellar mass formed\nnb_1a_per_m = 3.0e-3 # Number of SNe Ia per stellar mass formed\nsfe = 0.005 # Star formation efficiency, which sets the mass of gas\ntable = 'yield_tables/isotope_yield_table_MESA_only_ye.txt' # Yields for AGB and massive stars\n#milky_way\n\no_mw = omega.omega(galaxy='milky_way',Z_trans=-1, table=table,sfe=sfe, DM_evolution=True,\\\n mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=60)", "Comparison of chemical evolution prediction with observation", "# Choose abundance ratios\n%matplotlib nbagg\nxaxis = '[Fe/H]'\nyaxis = '[C/Fe]'\n\n# Plot observational data points (Stellab)\nstellab.plot_spectro(xaxis=xaxis, yaxis=yaxis,norm='Grevesse_Noels_1993',galaxy='milky_way',show_err=False)\n\n# Extract the numerical predictions (OMEGA)\nxy_f = o_mw.plot_spectro(fig=3,xaxis=xaxis,yaxis=yaxis,return_x_y=True)\n\n# Overplot the numerical predictions (they are normalized according to Grevesse & Noels 1993)\nplt.plot(xy_f[0],xy_f[1],linewidth=4,color='w')\nplt.plot(xy_f[0],xy_f[1],linewidth=2,color='k',label='OMEGA')\n\n# Update the existing legend\nplt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), markerscale=0.8, fontsize=13)\n\n# Choose X and Y limits\nplt.xlim(-4.5,0.5)\nplt.ylim(-1.4,1.6)", "Tracing back to simple stellar populations.", "s0p0001=sygma.sygma(iniZ=0.0001)\ns0p006=sygma.sygma(iniZ=0.006)", "What is [C/Fe] for two SSPs at Z=0.006 and Z=0.0001?", "elem='[C/Fe]'\ns0p0001.plot_spectro(fig=3,yaxis=elem,marker='D',color='b',label='Z=0.0001')\ns0p006.plot_spectro(fig=3,yaxis=elem,label='Z=0.006')", "Now lets focus on C. What is the evolution of the total mass of C?", "# Plot the ejected mass of a certain element\nelem='C'\ns0p0001.plot_mass(fig=4,specie=elem,marker='D',color='b',label='Z=0.0001')\ns0p006.plot_mass(fig=4,specie=elem,label='Z=0.006')", "Which stars contribute the most to C?", "elem='C'\ns0p0001.plot_mass_range_contributions(specie=elem,marker='D',color='b',label='Z=0.0001')\ns0p006.plot_mass_range_contributions(specie=elem,label='Z=0.006')", "Which stellar yields are the most?", "s0p0001.plot_table_yield(fig=6,iniZ=0.0001,table='yield_tables/isotope_yield_table.txt',yaxis='C-12',\n masses=[1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],marker='D',color='b',)\ns0p006.plot_table_yield(fig=6,iniZ=0.006,table='yield_tables/isotope_yield_table.txt',yaxis='C-12',\n masses=[1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phoebe-project/phoebe2-docs
2.1/tutorials/MESH.ipynb
gpl-3.0
[ "'mesh' Datasets and Options\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).", "!pip install -I \"phoebe>=2.1,<2.2\"", "As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.", "%matplotlib inline\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Dataset Parameters\nLet's create and add a mesh dataset to the Bundle.", "b.add_dataset('mesh')\nprint b.filter(kind='mesh')", "times", "print b['times']", "include_times", "print b['include_times']", "columns", "print b['columns']", "Compute Options\nLet's look at the compute options (for the default PHOEBE 2 backend) that relate to meshes", "print b['compute']", "mesh_method", "print b['mesh_method@primary']", "The 'mesh_method' parameter determines how each component in the system is discretized into its mesh, and has several options:\n * marching (default): this is the new method introduced in PHOEBE 2. The star is discretized into triangles, with the attempt to make them each of equal-area and nearly equilateral. Although not as fast as 'wd', this method is more robust and will always form a closed surface (when possible).\n * wd: this is a re-implementation of the Wilson-Devinney style meshing used in PHOEBE 1.0 (legacy), with the stars discretized into trapezoids in strips of latitude (we then split each trapezoid into two triangles). This is faster, but suffers from gaps between the surface elements, and is mainly meant for testing and comparison with legacy. See the WD-Style Meshing Example Script for more details.\nntriangles\nThe 'ntriangles' parameter is only relevenat if mesh_method=='marching' (so will not be available unless that is the case).", "print b['ntriangles@primary']", "gridsize\nThe 'gridsize' parameter is only relevant if mesh_method=='wd' (so will not be available unless that is the case).", "print b['gridsize@primary']", "Synthetics", "b.set_value('times', [0])\n\nb['columns'] = '*'\n\nb.run_compute()\n\nb['mesh@model'].twigs", "Per-Mesh Parameters", "print b['times@primary@mesh01@model']", "Per-Time Parameters", "print b['volume@primary@mesh01@model']", "Per-Element Parameters", "print b['uvw_elements@primary@mesh01@model']\n\nprint b['xyz_elements@primary@mesh01@model']\n\nprint b['us@primary@mesh01@model']\n\nprint b['rs@primary@mesh01@model']\n\nprint b['rprojs@primary@mesh01@model']\n\nprint b['nxs@primary@mesh01@model']\n\nprint b['mus@primary@mesh01@model']\n\nprint b['vxs@primary@mesh01@model']\n\nprint b['areas@primary@mesh01@model']\n\nprint b['loggs@primary@mesh01@model']\n\nprint b['teffs@primary@mesh01@model']\n\nprint b['visibilities@primary@mesh01@model']", "Plotting\nBy default, MESH datasets plot as 'vs' vx 'us' (plane of sky coordinates) of just the surface elements, taken from the uvw_elements vectors.", "afig, mplfig = b['mesh@model'].plot(show=True)", "Any of the 1-D fields (ie not vertices or normals) or matplotlib-recognized colornames can be used to color either the faces or edges of the triangles. 
Passing none for edgecolor or facecolor turns off the coloring (you may want to set edgecolor=None if setting facecolor to disable the black outline).", "afig, mplfig = b['mesh@model'].plot(fc='teffs', ec='None', show=True)", "Alternatively, if you provide simple 1-D fields to plot, a 2D x-y plot will be created using the values from each element (always for a single time - if meshes exist for multiple times in the model, you should provide a single time either in the twig or as a filter).\nNOTE: providing z=0 will override the default of z-ordering the points by there \"w\" (line-of-sight distance) value, which can be expensive and take a while to draw.", "afig, mplfig = b['mesh@model'].plot(x='mus', y='teffs', z=0, show=True)", "The exception to needing to provide a time is for the per-time parameters mentioned above. For these, time can be the x-array (not very exciting in this case with only a single time).\nFor more examples see the following:\n- Passband Luminosity Tutorial", "afig, mplfig = b['mesh@model'].plot(x='times', y='volume', marker='s', show=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
computational-class/cc2017
data/06.data_cleaning_Tweets.ipynb
mit
[ "ๆ•ฐๆฎๆธ…ๆด—ไน‹ๆŽจ็‰นๆ•ฐๆฎ\n\n\n็Ž‹ๆˆๅ†›\nwangchengjun@nju.edu.cn\n่ฎก็ฎ—ไผ ๆ’ญ็ฝ‘ http://computational-communication.com\nๆ•ฐๆฎๆธ…ๆด—๏ผˆdata cleaning๏ผ‰\nๆ˜ฏๆ•ฐๆฎๅˆ†ๆž็š„้‡่ฆๆญฅ้ชค๏ผŒๅ…ถไธป่ฆ็›ฎๆ ‡ๆ˜ฏๅฐ†ๆททๆ‚็š„ๆ•ฐๆฎๆธ…ๆด—ไธบๅฏไปฅ่ขซ็›ดๆŽฅๅˆ†ๆž็š„ๆ•ฐๆฎ๏ผŒไธ€่ˆฌ้œ€่ฆๅฐ†ๆ•ฐๆฎ่ฝฌๅŒ–ไธบๆ•ฐๆฎๆก†๏ผˆdata frame๏ผ‰็š„ๆ ทๅผใ€‚\nๆœฌ็ซ ๅฐ†ไปฅๆŽจ็‰นๆ–‡ๆœฌ็š„ๆธ…ๆด—ไฝœไธบไพ‹ๅญ๏ผŒไป‹็ปๆ•ฐๆฎๆธ…ๆด—็š„ๅŸบๆœฌ้€ป่พ‘ใ€‚\n\nๆธ…ๆด—้”™่ฏฏ่กŒ\nๆญฃ็กฎๅˆ†ๅˆ—\nๆๅ–ๆ‰€่ฆๅˆ†ๆž็š„ๅ†…ๅฎน\nไป‹็ป้€š่ฟ‡ๆŒ‰่กŒใ€chunk็š„ๆ–นๅผๅฏนๅคง่ง„ๆจกๆ•ฐๆฎ่ฟ›่กŒ้ข„ๅค„็†\n\n1. ๆŠฝๅ–tweetsๆ ทๆœฌๅšๅฎž้ชŒ\nๆญค่Š‚ๅญฆ็”Ÿ็•ฅ่ฟ‡", "bigfile = open('/Users/chengjun/็™พๅบฆไบ‘ๅŒๆญฅ็›˜/Writing/OWS/ows-raw.txt', 'rb')\nchunkSize = 1000000\nchunk = bigfile.readlines(chunkSize)\nprint len(chunk)\nwith open(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample.txt\", 'w') as f:\n for i in chunk:\n f.write(i) ", "2. ๆธ…ๆด—้”™่กŒ็š„ๆƒ…ๅ†ต", "with open(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample.txt\", 'rb') as f:\n lines = f.readlines()\n\n# ๆ€ป่กŒๆ•ฐ\nlen(lines)\n\n# ๆŸฅ็œ‹็ฌฌไธ€่กŒ\nlines[0]", "้—ฎ้ข˜๏ผš ็ฌฌไธ€่กŒๆ˜ฏๅ˜้‡ๅ\n\n1. ๅฆ‚ไฝ•ๅŽปๆމๆข่กŒ็ฌฆ๏ผŸ\n2. ๅฆ‚ไฝ•่Žทๅ–ๆฏไธ€ไธชๅ˜้‡ๅ๏ผŸ", "varNames = lines[0].replace('\\n', '').split(',')\nvarNames\n\nlen(varNames)\n\nlines[1344]", "ๅฆ‚ไฝ•ๆฅๅค„็†้”™่ฏฏๆข่กŒๆƒ…ๅ†ต๏ผŸ", "with open(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt\", 'w') as f:\n right_line = '' # ๆญฃ็กฎ็š„่กŒ๏ผŒๅฎƒๆ˜ฏไธ€ไธช็ฉบๅญ—็ฌฆไธฒ\n blocks = [] # ็กฎ่ฎคไธบๆญฃ็กฎ็š„่กŒไผš่ขซๆทปๅŠ ๅˆฐblocks้‡Œ้ข\n for line in lines:\n right_line += line.replace('\\n', ' ')\n line_length = len(right_line.split(','))\n if line_length >= 14:\n blocks.append(right_line)\n right_line = ''\n for i in blocks:\n f.write(i + '\\n')\n\nlen(blocks)\n\nblocks[1344]\n\nwith open(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean4.txt\", 'w') as f:\n right_line = '' # ๆญฃ็กฎ็š„่กŒ๏ผŒๅฎƒๆ˜ฏไธ€ไธช็ฉบๅญ—็ฌฆไธฒ\n blocks = [] # ็กฎ่ฎคไธบๆญฃ็กฎ็š„่กŒไผš่ขซๆทปๅŠ ๅˆฐblocks้‡Œ้ข\n for line in lines:\n right_line += line.replace('\\n', ' ').replace('\\r', ' ')\n line_length = len(right_line.split(','))\n if line_length >= 14:\n blocks.append(right_line)\n right_line = ''\n for i in blocks:\n f.write(i + '\\n')\n\nblocks[1344]", "3. ่ฏปๅ–ๆ•ฐๆฎใ€ๆญฃ็กฎๅˆ†ๅˆ—", "# ๆ็คบ๏ผšไฝ ๅฏ่ƒฝ้œ€่ฆไฟฎๆ”นไปฅไธ‹่ทฏๅพ„ๅ\nwith open(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt\", 'rb') as f:\n chunk = f.readlines()\n\nlen(chunk)\n\nchunk[:3]\n\nimport csv\nclean_lines = (line.replace('\\x00','') \\\n for line in chunk[1:])\nlines = csv.reader(clean_lines, delimiter=',', \\\n quotechar='\"') \n\nimport pandas as pd\n\ndf = pd.read_csv(\"/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt\",\\\n sep = ',', quotechar='\"')\ndf[:3]\n\ndf.Text[1]\n\ndf['From User']", "4. 
็ปŸ่ฎกๆ•ฐ้‡\n็ปŸ่ฎกๅ‘ๅธ–ๆ•ฐ้‡ๆ‰€ๅฏนๅบ”็š„ไบบๆ•ฐ็š„ๅˆ†ๅธƒ\n\nไบบๆ•ฐๅœจๅ‘ๅธ–ๆ•ฐ้‡ๆ–น้ข็š„ๅˆ†ๅธƒๆƒ…ๅ†ต", "from collections import defaultdict\ndata_dict = defaultdict(int)\nline_num = 0\nlines = csv.reader((line.replace('\\x00','') for line in chunk[1:]), delimiter=',', quotechar='\"')\nfor i in lines:\n line_num +=1\n data_dict[i[8]] +=1 # i[8] ๆ˜ฏuser\n\ndata_dict.items()[:5]\n\nprint line_num\n\n%matplotlib inline\nfrom matplotlib.font_manager import FontProperties\nimport matplotlib.pyplot as plt\nimport matplotlib\n#matplotlib.rcParams['font.sans-serif'] = ['Microsoft YaHei'] #ๆŒ‡ๅฎš้ป˜่ฎคๅญ—ไฝ“ \nmatplotlib.rc(\"savefig\", dpi=100)\nfont = FontProperties(fname=r'/Users/chengjun/github/cjc/data/msyh.ttf', size=14) # ๆณจๆ„๏ผšไฟฎๆ”น่ฟ™้‡Œ็š„่ทฏๅพ„ๅ\n\nplt.hist(data_dict.values())\n#plt.yscale('log')\n#plt.xscale('log')\nplt.xlabel(u'ๅ‘ๅธ–ๆ•ฐ', fontproperties=font)\nplt.ylabel(u'ไบบๆ•ฐ', fontproperties=font)\nplt.show()\n\ntweet_dict = defaultdict(int)\nfor i in data_dict.values():\n tweet_dict[i] += 1\n \nplt.loglog(tweet_dict.keys(), tweet_dict.values(), 'ro',linewidth=2) \nplt.xlabel(u'ๆŽจ็‰นๆ•ฐ', fontproperties=font)\nplt.ylabel(u'ไบบๆ•ฐ', fontproperties=font )\nplt.show()\n\nimport numpy as np\nimport statsmodels.api as sm\n\ndef powerPlot(d_value, d_freq, color, marker):\n d_freq = [i + 1 for i in d_freq]\n d_prob = [float(i)/sum(d_freq) for i in d_freq]\n #d_rank = ss.rankdata(d_value).astype(int)\n x = np.log(d_value)\n y = np.log(d_prob)\n xx = sm.add_constant(x, prepend=True)\n res = sm.OLS(y,xx).fit()\n constant,beta = res.params\n r2 = res.rsquared\n plt.plot(d_value, d_prob, linestyle = '', color = color, marker = marker)\n plt.plot(d_value, np.exp(constant+x*beta),\"red\")\n plt.xscale('log'); plt.yscale('log')\n plt.text(max(d_value)/2,max(d_prob)/10,\n r'$\\beta$ = ' + str(round(beta,2)) +'\\n' + r'$R^2$ = ' + str(round(r2, 2)))\n\nhisto, bin_edges = np.histogram(data_dict.values(), 15)\nbin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])\npowerPlot(bin_center,histo, 'r', 'o')\n#lg=plt.legend(labels = [u'Tweets', u'Fit'], loc=3, fontsize=20)\nplt.ylabel(u'ๆฆ‚็އ', fontproperties=font)\nplt.xlabel(u'ๆŽจ็‰นๆ•ฐ', fontproperties=font)\nplt.show()\n\nimport statsmodels.api as sm\nfrom collections import defaultdict\nimport numpy as np\n\ndef powerPlot(data):\n d = sorted(data, reverse = True )\n d_table = defaultdict(int)\n for k in d:\n d_table[k] += 1\n d_value = sorted(d_table)\n d_value = [i+1 for i in d_value]\n d_freq = [d_table[i]+1 for i in d_value]\n d_prob = [float(i)/sum(d_freq) for i in d_freq]\n #d_rank = ss.rankdata(d_value).astype(int)\n x = np.log(d_value)\n y = np.log(d_prob)\n xx = sm.add_constant(x, prepend=True)\n res = sm.OLS(y,xx).fit()\n constant,beta = res.params\n r2 = res.rsquared\n plt.plot(d_value, d_prob, 'ro')\n plt.plot(d_value, np.exp(constant+x*beta),\"red\")\n plt.xscale('log'); plt.yscale('log')\n plt.text(max(d_value)/2,max(d_prob)/5,\n 'Beta = ' + str(round(beta,2)) +'\\n' + 'R squared = ' + str(round(r2, 2)))\n plt.title('Distribution')\n plt.ylabel('P(K)')\n plt.xlabel('K')\n plt.show()\n \n\npowerPlot(data_dict.values())\n\nimport powerlaw\ndef plotPowerlaw(data,ax,col,xlab):\n fit = powerlaw.Fit(data,xmin=2)\n #fit = powerlaw.Fit(data)\n fit.plot_pdf(color = col, linewidth = 2)\n a,x = (fit.power_law.alpha,fit.power_law.xmin)\n fit.power_law.plot_pdf(color = col, linestyle = 'dotted', ax = ax, \\\n label = r\"$\\alpha = %d \\:\\:, x_{min} = %d$\" % (a,x))\n ax.set_xlabel(xlab, fontsize = 20)\n ax.set_ylabel('$Probability$', fontsize 
= 20)\n plt.legend(loc = 0, frameon = False)\n\nfrom collections import defaultdict\ndata_dict = defaultdict(int)\n\nfor i in df['From User']:\n data_dict[i] += 1\n\nimport matplotlib.cm as cm\ncmap = cm.get_cmap('rainbow_r',6)\n\nfig = plt.figure(figsize=(6, 4),facecolor='white')\nax = fig.add_subplot(1, 1, 1)\nplotPowerlaw(data_dict.values(), ax,cmap(1), '$Gold\\;Metals$')", "5. ๆธ…ๆด—tweetsๆ–‡ๆœฌ", "tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! \n #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com \n http://ccc.nju.edu.cn RT !!HELP!!!!'''\n\nimport re\n\nimport twitter_text\n", "ๅฎ‰่ฃ…twitter_text\n\npip install twitter-text-py\n\nๆ— ๆณ•ๆญฃๅธธๅฎ‰่ฃ…็š„ๅŒๅญฆ\nๅฏไปฅๅœจspyderไธญๆ‰“ๅผ€terminalๅฎ‰่ฃ…", "import re\n\ntweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! \n #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com \n http://ccc.nju.edu.cn RT !!HELP!!!!'''\n\nrt_patterns = re.compile(r\"(RT|via)((?:\\b\\W*@\\w+)+)\", \\\n re.IGNORECASE)\nrt_user_name = rt_patterns.findall(tweet)[0][1].strip(' @')\nrt_user_name\n\nimport re\n\ntweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! \n #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com \n http://ccc.nju.edu.cn RT !!HELP!!!!'''\n\nrt_patterns = re.compile(r\"(RT|via)((?:\\b\\W*@\\w+)+)\", re.IGNORECASE)\nrt_user_name = rt_patterns.findall(tweet)\nprint rt_user_name\n\nif rt_user_name:\n print 'it exits.'\nelse:\n print 'None'\n\nimport re\n\ndef extract_rt_user(tweet):\n rt_patterns = re.compile(r\"(RT|via)((?:\\b\\W*@\\w+)+)\", re.IGNORECASE)\n rt_user_name = rt_patterns.findall(tweet)\n if rt_user_name:\n rt_user_name = rt_user_name[0][1].strip(' @')\n else:\n rt_user_name = None\n return rt_user_name\n\ntweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! \n #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com \n http://ccc.nju.edu.cn RT !!HELP!!!!'''\n\nprint extract_rt_user(tweet) ", "่Žทๅพ—ๆธ…ๆด—่ฟ‡็š„ๆŽจ็‰นๆ–‡ๆœฌ\nไธๅซไบบๅใ€urlใ€ๅ„็ง็ฌฆๅท๏ผˆๅฆ‚RT @็ญ‰๏ผ‰", "def extract_tweet_text(tweet, at_names, urls):\n for i in at_names:\n tweet = tweet.replace(i, '')\n for j in urls:\n tweet = tweet.replace(j, '')\n marks = ['RT @', '@', '&quot;', '#', '\\n', '\\t', ' ']\n for k in marks:\n tweet = tweet.replace(k, '')\n return tweet\n\nimport twitter_text\n\ntweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! 
\n #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com \n http://ccc.nju.edu.cn RT !!HELP!!!!'''\n\nex = twitter_text.Extractor(tweet)\nat_names = ex.extract_mentioned_screen_names()\nurls = ex.extract_urls()\nhashtags = ex.extract_hashtags()\nrt_user = extract_rt_user(tweet)\ntweet_text = extract_tweet_text(tweet, at_names, urls)\n\nprint at_names, urls, hashtags, rt_user,'-------->', tweet_text\n\nimport csv\n\nlines = csv.reader((line.replace('\\x00','') for line in chunk[1:]), delimiter=',', quotechar='\"')\ntweets = [i[1] for i in lines]\n\nfor tweet in tweets[:5]:\n ex = twitter_text.Extractor(tweet)\n at_names = ex.extract_mentioned_screen_names()\n urls = ex.extract_urls()\n hashtags = ex.extract_hashtags()\n rt_user = extract_rt_user(tweet)\n tweet_text = extract_tweet_text(tweet, at_names, urls)\n\n print at_names, urls, hashtags, rt_user, \n print tweet_text", "ๆ€่€ƒ๏ผš\nๆๅ–ๅ‡บtweetsไธญ็š„rtuserไธŽuser็š„่ฝฌๅ‘็ฝ‘็ปœ\nๆ ผๅผ๏ผš\nrt_user1, user1\nrt_user2, user3\nrt_user2, user4\n...\nๆ•ฐๆฎไฟๅญ˜ไธบcsvๆ ผๅผ", "import csv\n\nlines = csv.reader((line.replace('\\x00','') \\\n for line in chunk[1:]), \\\n delimiter=',', quotechar='\"')\ntweet_user_data = [(i[1], i[8]) for i in lines]\nfor tweet,user in tweet_user_data:\n rt_user = extract_rt_user(tweet)\n if rt_user:\n print rt_user, ',', user", "้˜…่ฏปๆ–‡็Œฎ" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dolittle007/dolittle007.github.io
notebooks/gaussian-mixture-model-advi.ipynb
gpl-3.0
[ "Gaussian Mixture Model with ADVI\nHere, we describe how to use ADVI for inference of Gaussian mixture model. First, we will show that inference with ADVI does not need to modify the stochastic model, just call a function. Then, we will show how to use mini-batch, which is useful for large dataset. In this case, where the model should be slightly changed. \nFirst, create artificial data from a mixuture of two Gaussian components.", "%matplotlib inline\n\nimport theano\ntheano.config.floatX = 'float64'\n\nimport pymc3 as pm\nfrom pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, \\\n DensityDist, find_MAP, NUTS, Slice\nimport theano.tensor as tt\nfrom theano.tensor.nlinalg import det\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nn_samples = 100\nrng = np.random.RandomState(123)\nms = np.array([[-1, -1.5], [1, 1]])\nps = np.array([0.2, 0.8])\n\nzs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T\nxs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)\n for z, m in zip(zs, ms)]\ndata = np.sum(np.dstack(xs), axis=2)\n\nplt.figure(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)\nplt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)\nplt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)", "Gaussian mixture models are usually constructed with categorical random variables. However, any discrete rvs does not fit ADVI. Here, class assignment variables are marginalized out, giving weighted sum of the probability for the gaussian components. The log likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation stable. \nIn the below code, DensityDist class is used as the likelihood term. The second argument, logp_gmix(mus, pi, np.eye(2)), is a python function which recieves observations (denoted by 'value') and returns the tensor representation of the log-likelihood.", "from pymc3.math import logsumexp\n\n# Log likelihood of normal distribution\ndef logp_normal(mu, tau, value):\n # log probability of individual samples\n k = tau.shape[0]\n delta = lambda mu: value - mu\n return (-1 / 2.) * (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) +\n (delta(mu).dot(tau) * delta(mu)).sum(axis=1))\n\n# Log likelihood of Gaussian mixture distribution\ndef logp_gmix(mus, pi, tau):\n def logp_(value): \n logps = [tt.log(pi[i]) + logp_normal(mu, tau, value)\n for i, mu in enumerate(mus)]\n \n return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0))\n\n return logp_\n\nwith pm.Model() as model:\n mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))\n for i in range(2)]\n pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))\n xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)", "For comparison with ADVI, run MCMC.", "with model:\n start = find_MAP()\n step = Metropolis()\n trace = sample(1000, step, start=start)", "Check posterior of component means and weights. 
We can see that the MCMC samples of the component mean for the lower-left component varied more than the upper-right due to the difference of the sample size of these clusters.", "plt.figure(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')\nmu_0, mu_1 = trace['mu_0'], trace['mu_1']\nplt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c=\"r\", s=10)\nplt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c=\"b\", s=10)\nplt.xlim(-6, 6)\nplt.ylim(-6, 6)\n\nsns.barplot([1, 2], np.mean(trace['pi'][-5000:], axis=0), \n palette=['red', 'blue'])", "We can use the same model with ADVI as follows.", "with pm.Model() as model:\n mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))\n for i in range(2)]\n pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))\n xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)\n \n%time means, sds, elbos = pm.variational.advi( \\\n model=model, n=1000, learning_rate=1e-1)", "The function returns three variables. 'means' and 'sds' are the mean and standart deviations of the variational posterior. Note that these values are in the transformed space, not in the original space. For random variables in the real line, e.g., means of the Gaussian components, no transformation is applied. Then we can see the variational posterior in the original space.", "from copy import deepcopy\n\nmu_0, sd_0 = means['mu_0'], sds['mu_0']\nmu_1, sd_1 = means['mu_1'], sds['mu_1']\n\ndef logp_normal_np(mu, tau, value):\n # log probability of individual samples\n k = tau.shape[0]\n delta = lambda mu: value - mu\n return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +\n (delta(mu).dot(tau) * delta(mu)).sum(axis=1))\n\ndef threshold(zz): \n zz_ = deepcopy(zz)\n zz_[zz < np.max(zz) * 1e-2] = None\n return zz_\n\ndef plot_logp_normal(ax, mu, sd, cmap):\n f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))\n g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)\n xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))\n zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)\n ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)\n \nfig, ax = plt.subplots(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')\nplot_logp_normal(ax, mu_0, sd_0, cmap='Reds')\nplot_logp_normal(ax, mu_1, sd_1, cmap='Blues')\nplt.xlim(-6, 6)\nplt.ylim(-6, 6)", "TODO: We need to backward-transform 'pi', which is transformed by 'stick_breaking'. 
\n'elbos' contains the trace of ELBO, showing stochastic convergence of the algorithm.", "plt.plot(elbos)", "To demonstrate that ADVI works for large dataset with mini-batch, let's create 100,000 samples from the same mixture distribution.", "n_samples = 100000\n\nzs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T\nxs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)\n for z, m in zip(zs, ms)]\ndata = np.sum(np.dstack(xs), axis=2)\n\nplt.figure(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)\nplt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)\nplt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)\nplt.xlim(-6, 6)\nplt.ylim(-6, 6)", "MCMC took 55 seconds, 20 times longer than the small dataset.", "with pm.Model() as model:\n mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))\n for i in range(2)]\n pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))\n xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)\n \n start = find_MAP()\n step = Metropolis()\n trace = sample(1000, step, start=start)", "Posterior samples are concentrated on the true means, so looks like single point for each component.", "plt.figure(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')\nmu_0, mu_1 = trace['mu_0'], trace['mu_1']\nplt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c=\"r\", s=50)\nplt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c=\"b\", s=50)\nplt.xlim(-6, 6)\nplt.ylim(-6, 6)", "For ADVI with mini-batch, put theano tensor on the observed variable of the ObservedRV. The tensor will be replaced with mini-batches. Because of the difference of the size of mini-batch and whole samples, the log-likelihood term should be appropriately scaled. To tell the log-likelihood term, we need to give ObservedRV objects ('minibatch_RVs' below) where mini-batch is put. Also we should keep the tensor ('minibatch_tensors').", "data_t = tt.matrix()\ndata_t.tag.test_value = np.zeros((1, 2)).astype(float)\n\nwith pm.Model() as model:\n mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))\n for i in range(2)]\n pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))\n xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data_t)\n \nminibatch_tensors = [data_t]\nminibatch_RVs = [xs]", "Make a generator for mini-batches of size 200. Here, we take random sampling strategy to make mini-batches.", "def create_minibatch(data):\n rng = np.random.RandomState(0)\n \n while True:\n ixs = rng.randint(len(data), size=200)\n yield [data[ixs]]\n\nminibatches = create_minibatch(data)\ntotal_size = len(data)", "Run ADVI. 
It's much faster than MCMC, though the problem here is simple and it's not a fair comparison.", "# Used only to write the function call in single line for using %time\n# is there more smart way?\ndef f():\n return pm.variational.advi_minibatch(\n model=model, n=1000, minibatch_tensors=minibatch_tensors, \n minibatch_RVs=minibatch_RVs, minibatches=minibatches,\n total_size=total_size, learning_rate=1e-1)\n\n%time means, sds, elbos = f()", "The result is almost the same.", "from copy import deepcopy\n\nmu_0, sd_0 = means['mu_0'], sds['mu_0']\nmu_1, sd_1 = means['mu_1'], sds['mu_1']\n\nfig, ax = plt.subplots(figsize=(5, 5))\nplt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')\nplt.scatter(mu_0[0], mu_0[1], c=\"r\", s=50)\nplt.scatter(mu_1[0], mu_1[1], c=\"b\", s=50)\nplt.xlim(-6, 6)\nplt.ylim(-6, 6)", "The variance of the trace of ELBO is larger than without mini-batch because of the subsampling from the whole samples.", "plt.plot(elbos);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
walkon302/CDIPS_Recommender
notebooks/.ipynb_checkpoints/Recommendation_Compare_Methods-checkpoint.ipynb
apache-2.0
[ "Recommendation Methods:\nAlg 0: Most similar items to user's previous views\nOffline: \n1. For each item, calculate features on trained neural network $ f_j $\n2. For each user, look up previous views and average the features together of the previous visit $ f_i = \\sum_j f_j*I(i,j) $\n3. Store the features of the 'typical' item viewed by this user. \n4. Calculate similarity of all items to user's 'typical item', store as a recommend list\nOnline: \n1. User comes to website\n2. Recommend the top 20 items from his recommend list.", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('white')\n%matplotlib inline\n\nimport sys\nimport os\nsys.path.append('../')\nos.getcwd()\n\nimport src\nimport src.recommendation\nreload(src.recommendation)\nfrom src.recommendation import *\n\n", "Recommendation Comparison\n\nA more general framework for comparing different recommendation techniques\n\nEvaluation DataSet\nSee notes in the creating_dataset_for_evaluation.ipynb\nFrom full dataset\n- removed rows with no nn features (for view or for buy)\n- remove the items that have been viewed 20minutes before buying. \n- sub-sampled a set of 1000 users", "# load smaller user behavior dataset\nuser_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')\nuser_sample = user_profile.user_id.unique()\nprint(len(user_profile))\nprint(len(user_sample))\nuser_profile.head()\n\n\n# requires nn features \nspu_fea = pd.read_pickle(\"../data_nn_features/spu_fea_sample1000.pkl\")\n\n# make sure all items have features ?? One missing\nprint(len(set(list(user_profile.buy_spu.unique())+list(user_profile.view_spu.unique()))))\nprint(len(spu_fea.spu_id.unique()))", "Load precalculated things for recommendations", "# this might be faster #\n\n# ## Precalculate average feature per user\n# average_viewed_features_dict = {}\n\n# for user_id in user_profile.user_id.unique():\n# average_viewed_features_dict[user_id] = get_user_average_features(user_id,user_profile,spu_fea)\n ", "Loop through users and score function", "def get_user_buy_ranks(users_sample,user_profile,spu_fea,method,randomize_scores=False):\n \n user_buy_ranks = np.empty(len(users_sample))\n no_ranks = np.empty(len(users_sample))\n for ui,user_id in enumerate(users_sample):\n print(ui)\n \n # rank items\n item_score_in_category = rank_candidates(user_id,user_profile,spu_fea,method=method,extra_inputs={},randomize_scores=randomize_scores)\n \n # get bought item rank and store into array \n user_buy_ranks[ui]=item_score_in_category.loc[item_score_in_category.buy==1,'rank'].as_matrix()[0]\n \n # get number of ranks per category \n no_ranks[ui]=item_score_in_category['rank'].max()\n \n return(user_buy_ranks,no_ranks,item_score_in_category)", "Evaluate Different Algorithms", "users_sample = np.random.choice(user_sample,size=50)\n\n\n# nathan's\nuser_buy_ranks1,no_ranks1,item_score_in_category=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='AverageFeatureSim')\n\n\n# just taking the last item \nuser_buy_ranks2,no_ranks2,_=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='LastItemSim')\n\n\n# randomize \nuser_buy_ranks3,no_ranks3,_=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='Randomize',randomize_scores=True)\n\n\n# stack\nrank_percent = np.vstack((user_buy_ranks1/no_ranks1,user_buy_ranks2/no_ranks2,user_buy_ranks3/no_ranks3))\nprint(rank_percent.shape)\n\n# Plot \nmean = rank_percent.mean(axis=1)\nn = 
np.shape(rank_percent)[1]\nm = np.shape(rank_percent)[0]\nprint(n)\nprint(m)\n\nsem = rank_percent.std(axis=1)/np.sqrt(n)\nplt.errorbar(np.arange(m),y=mean,yerr=sem,linestyle='None',marker='o')\nplt.xticks(np.arange(m),['AvgFeatures','LastFeat','Random \\n Guess'])\nplt.xlim([-1,m+1])\nplt.ylim(0,1)\nsns.despine()\nplt.title('Recommendor Comparison')\nplt.ylabel('Average (Buy Rank / # in Buy Category)')\nplt.axhline(y=0.5,linestyle='--')\nsavefile = '../figures/recommender_comparison_sample_1000_subsample50_v1.png'\nplt.savefig(savefile,dpi=300)\n\nfrom src import s3_data_management\ns3_data_management.push_results_to_s3(os.path.basename(savefile),savefile)", "Save", "%%bash \njupyter nbconvert --to slides Recommendation_Compare_Methods.ipynb && mv Recommendation_Compare_Methods.slides.html ../notebook_slides/Recommendation_Compare_Methods_v1.slides.html\njupyter nbconvert --to html Recommendation_Compare_Methods.ipynb && mv Recommendation_Compare_Methods.html ../notebook_htmls/Recommendation_Compare_Methods_v1.html\ncp Recommendation_Compare_Methods.ipynb ../notebook_versions/Recommendation_Compare_Methods_v1.ipynb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CentroGeo/geoinformatica
vecindades_python/vecindades_2.ipynb
gpl-3.0
[ "Parte 2. Cรกlculos en vecindades\nEn la primera parte del taller trabajamos los datos para calcular las variables que nos interesan y agruparlas por AGEB. Ahora, en esta segunda parte, vamos a partir de los datos ya procesados y vamos a realizar algunos cรกlculos a partir de los valores en los polรญgonos vecinos.\nEl primer paso es encapsular el preproceso de los datos en una funciรณn, de esta manera podemos ejecutar todo el flujo en una celda en este Notebook. En el archivo preproceso.py de esta carpeta pueden ver la funciรณn que realiza todo el trabajo.\nPara importar la funciรณn aquรญ, hacemos uso de la estructura de paquetes (packages) de Python: siempre que una carpeta contenga un archivo con el nombre __init__.py, esa carpeta se convierte automรกticamente en un paquete y permite importar directamente todas las funciones (o clases) definidas en los archivos que contiene.\nEn la siguiente celda realizamos todo el trabajo de preprocesamiento.\nNota 1: la declaraciรณn pd.options.mode.chained_assignment = None sirve para suprimir una advertencia de Pandas sobre la forma en la que asignamos valores a nuevas columnas, en este caso (y en muchos otros), esa advertencia es espรบrea, sin embargo, no estรก de mรกs que lean esto.\nNota 2: si se fijan, en el preproceso removimos algunas filas, por lo que el รญndice del DataFrame no es consecutivo, esto va a ser importante mรกs adelante, por lo que nos conviene resetear el รญndice para que se vuelva a calcular como un consecutivo.", "import pandas as pd\nimport geopandas as gpd\nfrom preproceso import preprocesa\npd.options.mode.chained_assignment = None\n\ndenue = gpd.read_file(\"datos/DENUE_INEGI_09_.shp\")\nagebs = gpd.read_file(\"datos/ageb_urb.shp\")\nusos_suelo = preprocesa(denue, agebs)\nusos_suelo.reset_index(drop=True, inplace=True)\nusos_suelo.head()", "Como pueden ver, en la variable usos_suelo tenemos ya calculadas todas nuestras variables de interรฉs, ahora lo que necesitamos es, para cada fila de nuestro GeoDataFrame, saber cuรกles son los polรญgnos vecinos.\nPara esto, vamos a utilizar la librerรญa PySal, que provee un conjunto de mรฉtodos de anรกlisis espacial. En particular, nos interesa la funcionalidad de crear matrices de pesos espaciales.\nPySal estรก desarrollado para trabajar en conjunto con GeoPandas, de modo que podemos pedir la matriz de pesos directamente del GeoDataFrame y examinar el objeto que nos regresa:", "import pysal\n\nw = pysal.weights.Queen.from_dataframe(usos_suelo)\nprint(w.n)\nprint(w.weights[0])\nprint(w.neighbors[0])\nprint(w.neighbors[5])\nprint(w.histogram)", "Lo primero que hicimos fue importar la librerรญa PySal. A continuaciรณn, claculamos la matriz de pesos w usando vecindades de tipo Reina (en la documentaciรณn de PySal pueden consultar los diferentes tipos de vecindades y las fuentes de datos que pueden usar).\n\nw.n nos dice la cantidad de renglones de la matriz\nw.weights[0] nos dice los pesos que corresponden a los vecinos del elemento 0\nw.neighbors[0] nos da la lista de vecinos del elemento 0\nw.histogram nos da el histograma de la matriz de adyacencia, es decir, cuรกntos elementos tienen x nรบmero de vecinos\n\nComo un ejercicio rรกpido vamos a graficar el histograma, sรณlo que esta vez, en lugar de usar matplotlib directamente, vamos a usar seaborn, que es una librerรญa para graficar datos estadรญsticos. 
Ademรกs de producir, de manera sencilla, graficas mรกs bonitas que matplotlib, seaborn tiene una sintaxis similar a la de ggplot2 de R.\nPrimero convertimos el histograma que nos da PySal en un DataFrame:", "freqs = pd.DataFrame(w.histogram, columns=['vecinos', 'cuenta'])\nfreqs.head()", "Y luego lo graficamos:", "%matplotlib inline\nimport seaborn as sns\n\nsns.barplot(x='vecinos', y='cuenta', data=freqs)", "Intensidad\nDespuรฉs de este intermedio, ahora sรญ vamos a hacer nuestro primer cรณmputo en vecindades. Vamos a comenzar por la intensidad.\nLa intensidad es simplemente la cantidad de actividades en un รกrea determinada. En nuestro caso, vamos a calcular el total de actividades (de cualquier tipo) que hay en la vecindad inmediata de cada AGEB (si lo piensan un poco, esto se parece bastante a los filtros tipo blur en procesamiento de imรกgenes).\nPara calcular la intensidad, lo que necesitamos hacer es recorrer la lista de elementos del GeoDataFrame y, para cada elemento, obtener la lista de vecinos, sacar sus variables y sumarlas.\nAntes de calcular, vamos a eliminar el elemento que no tiene ningรบn vecino, reindexar los datos y volver a calcular los pesos (para que los รญndices de la matriz de pesos y del DataFrame coincidan):", "usos_suelo = usos_suelo.drop(usos_suelo.index[[1224]])\nusos_suelo.reset_index(drop=True, inplace=True)\nw = pysal.weights.Queen.from_dataframe(usos_suelo)", "Ahora sรญ, recorremos la lista de vecinos y calculamos la intensidad para cada elemento:", "usos_suelo.iloc[[0]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()\n\nimport numpy as np\n\nintensidad =[]\nfor i in range(0, w.n):\n vecinos = w.neighbors[i]\n total = 0.0\n suma = np.zeros((3),dtype=np.float)\n valores = usos_suelo.iloc[[i]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()\n for j in range(0,len(vecinos)): \n data = usos_suelo.iloc[[j]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()\n suma = suma + data\n total += sum(data)\n intensidad.append((i, sum(total)))\n\nprint(intensidad[0:10])", "Al parecer lo que estamos haciendo es muy complicado, sin embargo, una vez mรกs, si lo vemos con detenimiento es relativamente simple:\n\nPrimero estamos definiendo una lista vacรญa intensidad que nos va a servir para guardar los resultados\nLuego, en el for externo, estamos recorriendo la matriz de adyacencia, entonces el รญndice del for, es el identificador de cada polรญgono\nInicializamos un array de 3 entradas con ceros (esto nos va a servir para guardar la suma para cada uso de suelo)\nCon iloc tomamos la fila correspondiente en el DataFrame y as_matrix() convierte los valores de las columnas en un array\nRecorremos en el for interno los vecinos de cada elemento y tomamos, como array, sus valores\nSumamos los arrays entrada por entrada (esto realmente no es necesario aquรญ, pero va a ser รบtil mรกs adelante cuando hagamos un cรกlculo mรกs complejo)\nA la salida de los dos for, agregamos a la lista intensidad una tupla con el รญndice y el valor de la intensidad\n\nEntonces, podemos convertir la lista intensidad en un DataFrame para despuรฉs unirlo con nuestros datos:", "intensidad_df = pd.DataFrame(intensidad, columns=['gid', 'intensidad'])\ndatos_intensidad = usos_suelo.merge(intensidad_df, left_index=True, right_on='gid', how='inner')\ndatos_intensidad.head()", "Ejercicio\nHagan un mapa que destaque las diferencias en intensidad.\nEntropรญa\nLa entropรญa es una medida de la mezcla de usos de suelo, estรก basada en la forma en la que se calcula la 
entropรญa en mecรกnica estadรญstica:\n$$ E = \\sum\\limits_{j}{\\frac{p_{j}*ln(p_{j})}{ln(J)}} $$\nDonde $p_{j}$ representa la proporciรณn del $j-รฉsimo$ uso de suelo con respecto al total y $J$ es el nรบmero de usos de suelo considerados. Valores cercanos a 0 indican poca mezcla de usos de suelo y valores cercanos a -1 indican una mezcla balanceada.\nEntonces, para calcular la entropรญa, basta con modificar un poco el for que usamos para calcular la intensidad:", "entropia =[]\nfor i in range(0, w.n):\n vecinos = w.neighbors[i]\n total = 0.0\n suma = np.zeros((3),dtype=np.float)\n valores = usos_suelo.iloc[[i]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()\n for j in range(0,len(vecinos)): \n data = usos_suelo.iloc[[j]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()\n suma = suma + data\n total += np.sum(data)\n p = np.nan_to_num(suma/total)\n lp = np.select([p == 0,p > 0],[p, np.log(p)])\n entropia.append((i, np.sum(p*lp)))\n\nprint(entropia[0:10])", "La forma de calcular es muy parecida a la intensidad pero aquรญ hacemos uso de dos funciones extra de numpy:\n\nnp.nan_to_num convierte en 0 los valores NaN (resultado de dividir por 0). Esto es necesario en caso de que hubiera AGEBS con 0 usos de suelo\nnp.select([p == 0,p &gt; 0],[p, np.log(p)]) selecciona, en caso de que el valor sea 0, deja el cero, pero si el valor es mayor que 0, entonces calcula el logaritmo (recuerden que el logaritmo de 0 no estรก definido)\n\nEjercicio\nHagan mapas de entropรญa.\nTarea\nRepitan todo el taller para un รกrea metropolitana que escojan. Es necesario que hagan lo siguiente:\n\nExtraer AGEBS y puntos del DENUE para el รกrea que escojan (lo pueden hacer en Arc o QGIS)\nClasificar sus propios usos de suelo, es necesario que cambien las definiciones que usamos en el taller\nCalcular intensidad y entropรญa\n\nLa tarea se entrega de la siguiente forma:\nUna carpeta en zip con el cรณdigo (en el formato de esta libreta) y los datos necesarios para correrlo. La tarea se acredita si el cรณdigo corre." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wangg12/caffe
examples/01-learning-lenet.ipynb
bsd-2-clause
[ "Python solving with LeNet\nIn this example, we'll explore learning with Caffe in Python, using the fully-exposed Solver interface.", "import os\nos.chdir('..')\n\nimport sys\nsys.path.insert(0, './python')\nimport caffe\n\nfrom pylab import *\n%matplotlib inline", "We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below).", "# Download and prepare data\n!data/mnist/get_mnist.sh\n!examples/mnist/create_mnist.sh", "We need two external files to help out:\n* the net prototxt, defining the architecture and pointing to the train/test data\n* the solver prototxt, defining the learning parameters\nWe start with the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.\nThis network expects to read from pregenerated LMDBs, but reading directly from ndarrays is also possible using MemoryDataLayer.", "from caffe import layers as L\nfrom caffe import params as P\n\ndef lenet(lmdb, batch_size):\n # our version of LeNet: a series of linear and simple nonlinear transformations\n n = caffe.NetSpec()\n n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,\n transform_param=dict(scale=1./255), ntop=2)\n n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))\n n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))\n n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))\n n.relu1 = L.ReLU(n.ip1, in_place=True)\n n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))\n n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\n return n.to_proto()\n \nwith open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:\n f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))\n \nwith open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:\n f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100)))", "The net has been written to disk in more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. Let's take a look at the train net.", "!cat examples/mnist/lenet_auto_train.prototxt", "Now let's see the learning parameters, which are also written as a prototxt file. We're using SGD with momentum, weight decay, and a specific learning rate schedule.", "!cat examples/mnist/lenet_auto_solver.prototxt", "Let's pick a device and load the solver. We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available.", "caffe.set_device(0)\ncaffe.set_mode_gpu()\nsolver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt')", "To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).", "# each output is (batch size, feature dim, spatial dim)\n[(k, v.data.shape) for k, v in solver.net.blobs.items()]\n\n# just print the weight sizes (not biases)\n[(k, v[0].data.shape) for k, v in solver.net.params.items()]", "Before taking off, let's check that everything is loaded as we expect. 
We'll run a forward pass on the train and test nets and check that they contain our data.", "solver.net.forward() # train net\nsolver.test_nets[0].forward() # test net (there can be more than one)\n\n# we use a little trick to tile the first eight images\nimshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\nprint solver.net.blobs['label'].data[:8]\n\nimshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\nprint solver.test_nets[0].blobs['label'].data[:8]", "Both train and test nets seem to be loading data, and to have correct labels.\nLet's take one step of (minibatch) SGD and see what happens.", "solver.step(1)", "Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \\times 5$ grid of $5 \\times 5$ filters.", "imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)\n .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')", "Something is happening. Let's run the net for a while, keeping track of a few things as it goes.\nNote that this process will be the same as if training through the caffe binary. In particular:\n* logging will continue to happen as normal\n* snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)\n* testing will happen at the interval specified (here, every 500 iterations)\nSince we have control of the loop in Python, we're free to compute additional things as we go, as we show below. We can do many other things as well, for example:\n* write a custom stopping criterion\n* change the solving process by updating the net in the loop", "%%time\nniter = 200\ntest_interval = 25\n# losses will also be stored in the log\ntrain_loss = zeros(niter)\ntest_acc = zeros(int(np.ceil(niter / test_interval)))\noutput = zeros((niter, 8, 10))\n\n# the main solver loop\nfor it in range(niter):\n solver.step(1) # SGD by Caffe\n \n # store the train loss\n train_loss[it] = solver.net.blobs['loss'].data\n \n # store the output on the first test batch\n # (start the forward pass at conv1 to avoid loading new data)\n solver.test_nets[0].forward(start='conv1')\n output[it] = solver.test_nets[0].blobs['ip2'].data[:8]\n \n # run a full test every so often\n # (Caffe can also do this for us and write to a log, but we show here\n # how to do it directly in Python, where more complicated things are easier.)\n if it % test_interval == 0:\n print 'Iteration', it, 'testing...'\n correct = 0\n for test_it in range(100):\n solver.test_nets[0].forward()\n correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)\n == solver.test_nets[0].blobs['label'].data)\n test_acc[it // test_interval] = correct / 1e4", "Let's plot the train loss and test accuracy.", "_, ax1 = subplots()\nax2 = ax1.twinx()\nax1.plot(arange(niter), train_loss)\nax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')\nax1.set_xlabel('iteration')\nax1.set_ylabel('train loss')\nax2.set_ylabel('test accuracy')", "The loss seems to have dropped quickly and coverged (except for stochasticity), while the accuracy rose correspondingly. Hooray!\nSince we saved the results on the first test batch, we can watch how our prediction scores evolved. 
We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.", "for i in range(8):\n figure(figsize=(2, 2))\n imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\n figure(figsize=(10, 2))\n imshow(output[:50, i].T, interpolation='nearest', cmap='gray')\n xlabel('iteration')\n ylabel('label')", "We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted \"9\" that's (understandably) most confused with \"4\".\nNote that these are the \"raw\" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).", "for i in range(8):\n figure(figsize=(2, 2))\n imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\n figure(figsize=(10, 2))\n imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')\n xlabel('iteration')\n ylabel('label')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
anjsimmo/simple-ml-pipeline
jupyter_notebooks/traveltime_lineartime.ipynb
mit
[ "Design of traveltime_lineartime learner\nWe design a learner to predict Travel Time from the time of day.\nImport Data", "# allow importing modules and datafiles up one directory\nimport os\nos.chdir('../')\n\nimport pandas as pd\nimport numpy as np\nimport datetime\nimport math\nimport datatables.traveltime\n\ndata = datatables.traveltime.read('data/traveltime.task.train')\ndata.head()", "'y' is travel time in seconds.\nExtract Features\nRepresent time as a decimal fraction of a day, so that we can more easily use it for prediction.", "def frac_day(time):\n \"\"\"\n Convert time to fraction of a day (0.0 to 1.0)\n Can also pass this function a datetime object \n \"\"\"\n return time.hour*(1./24) + time.minute*(1./(24*60)) + time.second*(1./(24*60*60))", "We create the features $time^1$, $time^2$, ... in order to allow the regression algorithm to find polynomial fits.", "def extract_features(data):\n # Turn list into a n*1 design matrix. At this stage, we only have a single feature in each row.\n t = np.array([frac_day(_t) for _t in data['t']])[:, np.newaxis]\n # Add t^2, t^3, ... to allow polynomial regression\n xs = np.hstack([t, t**2, t**3, t**4, t**5, t**6, t**7, t**8])\n return xs\n\nt = np.array([frac_day(_t) for _t in data['t']])[:, np.newaxis]\nxs = extract_features(data)\ny = data['y'].values", "Model\nTrain model, plot regression curve.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model\n\nregr = linear_model.LinearRegression()\nregr.fit(xs, y)\ny_pred = regr.predict(xs)\n\nplt.figure(figsize=(8,8))\nplt.scatter(t, y, color='black', label='actual')\nplt.plot(t, y_pred, color='blue', label='regression curve')\n\nplt.title(\"Travel time vs time. Princes Highway. Outbound. Wed 19 Aug 2015\")\nplt.ylabel(\"Travel Time from site 2409 to site 2425 (seconds)\")\nplt.xlabel(\"Time (fraction of day)\")\nplt.legend(loc='lower right')\nplt.xlim([0,1])\nplt.ylim([0,None])\nplt.show()\n\n# http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares\nprint('Intercept: %.2f' % regr.intercept_)\nprint('Coefficients: %s' % regr.coef_)\nprint('R^2 score: %.2f' % regr.score(xs, y))", "Evaluate", "test = datatables.traveltime.read('data/traveltime.task.test') # Traffic on Wed 27 Aug 2015\ntest_xs = extract_features(test)\ntest['pred'] = regr.predict(test_xs)\ntest['error'] = test['y'] - test['pred']\n# todo: ensure data is a real number (complex numbers could be used to cheat)\nrms_error = math.sqrt(sum(test['error']**2) / len(data))\n\ntest.head()\n\nrms_error" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
adrn/thejoker
docs/examples/1-Getting-started.ipynb
mit
[ "%run notebook_setup", "Getting started with The Joker\nThe Joker (pronounced Yo-kurr) is a highly specialized Monte Carlo (MC) sampler that is designed to generate converged posterior samplings for Keplerian orbital parameters, even when your data are sparse, non-uniform, or very noisy. This is not a general MC sampler, and this is not a Markov Chain MC sampler like emcee, or pymc3: This is fundamentally a rejection sampler with some tricks that help improve performance for the two-body problem.\nThe Joker shines over more conventional MCMC sampling methods when your radial velocity data is imprecise, non-uniform, sparse, or has a short baseline: In these cases, your likelihood function will have many, approximately equal-height modes that are often spaced widely, all properties that make conventional MCMC bork when applied to this problem. In this tutorial, we will not go through the math behind the sampler (most of that is covered in the original paper). However, some terminology is important to know for the tutorial below or for reading the documentation. Most relevant, the parameters in the two-body problem (Kepler orbital parameters) split into two sets: nonlinear and linear parameters. The nonlinear parameters are always the same in each run of The Joker: period $P$, eccentricity $e$, argument of pericenter $\\omega$, and a phase $M_0$. The default linear parameters are the velocity semi-ampltude $K$, and a systemtic velocity $v_0$. However, there are ways to add additional linear parameters into the model (as described in other tutorials).\nFor this tutorial, we will set up an inference problem that is common to binary star or exoplanet studies, show how to generate posterior orbit samples from the data, and then demonstrate how to visualize the samples. Other tutorials demonstrate more advanced or specialized functionality included in The Joker, like:\n- fully customizing the parameter prior distributions, \n- allowing for a long-term velocity trend in the data, \n- continuing sampling with standard MCMC methods when The Joker returns one or few samples,\n- simultaneously inferring constant offsets between data sources (i.e. when using data from multiple instruments that may have calibration offsets)\nBut let's start here with the most basic functionality!\nFirst, imports we will need later:", "import astropy.table as at\nfrom astropy.time import Time\nimport astropy.units as u\nfrom astropy.visualization.units import quantity_support\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\nimport thejoker as tj\n\n# set up a random generator to ensure reproducibility\nrnd = np.random.default_rng(seed=42)", "Loading radial velocity data\nTo start, we need some radial velocity data to play with. Our ultimate goal is to construct or read in a thejoker.RVData instance, which is the main data container object used in The Joker. For this tutorial, we will use a simulated RV curve that was generated using a separate script and saved to a CSV file, and we will create an RVData instance manually. 
\nBecause we previously saved this data as an Astropy ECSV file, the units are provided with the column data and read in automatically using the astropy.table read/write interface:", "data_tbl = at.QTable.read('data.ecsv')\ndata_tbl[:2]", "The full simulated data table has many rows (256), so let's randomly grab 4 rows to work with:", "sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=4, replace=False)]\n\nsub_tbl", "It looks like the time column is given in Barycentric Julian Date (BJD), so in order to create an RVData instance, we will need to create an astropy.time.Time object from this column:", "t = Time(sub_tbl['bjd'], format='jd', scale='tcb') \ndata = tj.RVData(t=t, rv=sub_tbl['rv'], rv_err=sub_tbl['rv_err'])", "We now have an RVData object, so we could continue on with the tutorial. But as a quick aside, there is an alternate, more automatic (automagical?) way to create an RVData instance from tabular data: RVData.guess_from_table. This classmethod attempts to guess the time format and radial velocity column names from the columns in the data table. It is very much an experimental feature, so if you think it can be improved, please open an issue in the GitHub repo for The Joker. In any case, here it successfully works:", "data = tj.RVData.guess_from_table(sub_tbl)", "One of the handy features of RVData is the .plot() method, which generates a quick view of the data:", "_ = data.plot()", "The data are clearly variable! But what orbits are consistent with these data? I suspect many, given how sparse they are! Now that we have the data in hand, we need to set up the sampler by specifying prior distributions over the parameters in The Joker.\nSpecifying the prior distributions for The Joker parameters\nThe prior pdf (probability distribution function) for The Joker is controlled and managed through the thejoker.JokerPrior class. The prior for The Joker is fairly customizable and the initializer for JokerPrior is therefore pretty flexible; usually too flexible for typical use cases. We will therefore start by using an alternate initializer defined on the class, JokerPrior.default(), that provides a simpler interface for creating a JokerPrior instance that uses the default prior distributions assumed in The Joker. In the default prior:\n$$\n\\begin{align}\n&p(P) \\propto \\frac{1}{P} \\quad ; \\quad P \\in (P_{\\rm min}, P_{\\rm max})\\\n&p(e) = B(a_e, b_e)\\\n&p(\\omega) = \\mathcal{U}(0, 2\\pi)\\\n&p(M_0) = \\mathcal{U}(0, 2\\pi)\\\n&p(K) = \\mathcal{N}(K \\,|\\, \\mu_K, \\sigma_K)\\\n&\\sigma_K = \\sigma_{K, 0} \\, \\left(\\frac{P}{P_0}\\right)^{-1/3} \\, \\left(1 - e^2\\right)^{-1/2}\\\n&p(v_0) = \\mathcal{N}(v_0 \\,|\\, \\mu_{v_0}, \\sigma_{v_0})\\\n\\end{align}\n$$\nwhere $B(.)$ is the beta distribution, $\\mathcal{U}$ is the uniform distribution, and $\\mathcal{N}$ is the normal distribution.\nMost parameters in the distributions above are set to reasonable values, but there are a few required parameters for the default case: the range of allowed period values (P_min and P_max), the scale of the K prior variance sigma_K0, and the standard deviation of the $v_0$ prior sigma_v. Let's set these to some arbitrary numbers. 
Here, I chose the value for sigma_K0 to be typical of a binary star system; if using The Joker for exoplanet science, you will want to adjust this correspondingly.", "prior = tj.JokerPrior.default(\n P_min=2*u.day, P_max=1e3*u.day,\n sigma_K0=30*u.km/u.s,\n sigma_v=100*u.km/u.s)", "Once we have the prior instance, we need to generate some prior samples that we will then use The Joker to rejection sample down to a set of posterior samples. To generate prior samples, use the JokerSamples.sample() method. Here, we'll generate a lare number of samples to use:", "prior_samples = prior.sample(size=250_000,\n random_state=rnd)\nprior_samples", "This object behaves like a Python dictionary in that the parameter values can be accessed via their key names:", "prior_samples['P']\n\nprior_samples['e']", "They can also be written to disk or re-loaded using this same class. For example, to save these prior samples to the current directory to the file \"prior_samples.hdf5\":", "prior_samples.write(\"prior_samples.hdf5\", overwrite=True)", "We could then load the samples from this file using:", "tj.JokerSamples.read(\"prior_samples.hdf5\")", "Running The Joker\nNow that we have a set of prior samples, we can create an instance of The Joker and use the rejection sampler:", "joker = tj.TheJoker(prior, random_state=rnd)\njoker_samples = joker.rejection_sample(data, prior_samples, \n max_posterior_samples=256)", "This works by either passing in an instance of JokerSamples containing the prior samples, or by passing in a filename that contains JokerSamples written to disk. So, for example, this is equivalent:", "joker_samples = joker.rejection_sample(data, \"prior_samples.hdf5\", \n max_posterior_samples=256)", "The max_posterior_samples argument above specifies the maximum number of posterior samples to return. It is often helpful to set a threshold here in cases when your data are very uninformative to avoid generating huge numbers of samples (which can slow down the sampler considerably).\nIn either case above, the joker_samples object returned from rejection_sample() is also an instance of the JokerSamples class, but now contains posterior samples for all nonlinear and linear parameters in the model:", "joker_samples", "Plotting The Joker orbit samples over the input data\nWith posterior samples in Keplerian orbital parameters in hand for our data set, we can now plot the posterior samples over the input data to get a sense for how constraining the data are. The Joker comes with a convenience plotting function, plot_rv_curves, for doing just this:", "_ = tj.plot_rv_curves(joker_samples, data=data)", "It has various options to allow customizing the style of the plot:", "fig, ax = plt.subplots(1, 1, figsize=(8, 4))\n_ = tj.plot_rv_curves(joker_samples, data=data, \n plot_kwargs=dict(color='tab:blue'),\n data_plot_kwargs=dict(color='tab:red'),\n relative_to_t_ref=True, ax=ax)\nax.set_xlabel(f'BMJD$ - {data.t.tcb.mjd.min():.3f}$')", "Another way to visualize the samples is to plot 2D projections of the sample values, for example, to plot period against eccentricity:", "fig, ax = plt.subplots(1, 1, figsize=(8, 5))\n\nwith quantity_support():\n ax.scatter(joker_samples['P'], \n joker_samples['e'],\n s=20, lw=0, alpha=0.5)\n \nax.set_xscale('log')\nax.set_xlim(prior.pars['P'].distribution.a,\n prior.pars['P'].distribution.b)\nax.set_ylim(0, 1)\n\nax.set_xlabel('$P$ [day]')\nax.set_ylabel('$e$')", "But is the true period value included in those distinct period modes returned by The Joker? 
When generating the simulated data, I also saved the true orbital parameters used to generate the data, so we can load and over-plot it:", "import pickle\nwith open('true-orbit.pkl', 'rb') as f:\n truth = pickle.load(f)\n\nfig, ax = plt.subplots(1, 1, figsize=(8, 5))\n\nwith quantity_support():\n ax.scatter(joker_samples['P'], \n joker_samples['e'],\n s=20, lw=0, alpha=0.5)\n \n ax.axvline(truth['P'], zorder=-1, color='tab:green')\n ax.axhline(truth['e'], zorder=-1, color='tab:green')\n ax.text(truth['P'], 0.95, 'truth', fontsize=20, \n va='top', ha='left', color='tab:green')\n \nax.set_xscale('log')\nax.set_xlim(prior.pars['P'].distribution.a,\n prior.pars['P'].distribution.b)\nax.set_ylim(0, 1)\n\nax.set_xlabel('$P$ [day]')\nax.set_ylabel('$e$')", "It indeed looks like there are posterior samples from The Joker in the vicinity of the true value. Deciding what to do next depends on the problem you would like to solve. For example, if you just want to get a sense of how multi-modal the posterior pdf over orbital parameters is, you might be satisfied with the number of samples we generated and the plots we made in this tutorial. However, if you want to fully propagate the uncertainty in these orbital parameters through some other inference (for example, to transform the samples into constraints on companion mass or other properties), you may want or need to generate a lot more samples. To start, you could change max_posterior_samples to be a much larger number in the rejection_sample() step above. But I have found that in many cases, you need to run with many, many more (e.g., 500 million) prior samples. To read more, check out the next tutorial!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
synthicity/activitysim
activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb
agpl-3.0
[ "Estimation Mode\nThis set of notebooks illustrates how to re-estimate ActivitySim's choice models. These models include\n\nschool_location\nworkplace_location\nauto_ownership\nfree_parking\ncdap\nmandatory_tour_frequency\nmandatory_tour_scheduling (work and school estimated seperately)\njoint_tour_frequency\njoint_tour_composition\njoint_tour_participation\njoint_tour_destination (with non_mandatory_tour_destination)\njoint_tour_scheduling\nnon_mandatory_tour_frequency\nnon_mandatory_tour_destination (with joint_tour_destination)\nnon_mandatory_tour_scheduling\ntour_mode_choice (with atwork_subtour_mode_choice)\natwork_subtour_frequency\natwork_subtour_destination\natwork_subtour_scheduling\natwork_subtour_mode_choice (with tour_mode_choice)\nstop_frequency\ntrip_destination\ntrip_mode_choice\n\nAs noted above, not every model is estimated independently; some components share parameters with other components, and so must be jointly re-estimated. \nThe steps in the process are:\n - Run ActivitySim in estimation mode to read household travel survey files, run the ActivitySim submodels to write estimation data bundles (EDB) that contains the model utility specifications, coefficients, chooser data, and alternatives data for each submodel.\n - Using the activitysim.estimation.larch library of tools, read and transform the relevant EDB into the format required by the model estimation package larch and then re-estimate the model coefficients. No changes to the model specification will be made.\n - Update the ActivitySim model coefficients and re-run the model in simulation mode.\nThe basic estimation workflow is shown below and explained in the next steps.\n\nLoad libraries", "import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd", "Review Inputs\nIn addition to a working ActivitySim model setup, estimation mode requires an ActivitySim format household travel survey. An ActivitySim format household travel survey is very similar to ActivitySim's simulation model tables:\n\nhouseholds\npersons\ntours\njoint_tour_participants\ntrips \n\nExamples of the ActivitySim format household travel survey are included in the example_estimation data folders. The user is responsible for formatting their household travel survey into the appropriate format. \nAfter creating an ActivitySim format household travel survey, the scripts/infer.py script is run to append additional calculated fields. An example of an additional calculated field is the household:joint_tour_frequency, which is calculated based on the tours and joint_tour_participants tables. \nThe input survey files are below.\nSurvey households", "pd.read_csv(\"../data_sf/survey_data/override_households.csv\")", "Survey persons", "pd.read_csv(\"../data_sf/survey_data/override_persons.csv\")", "Survey joint tour participants", "pd.read_csv(\"../data_sf/survey_data/override_joint_tour_participants.csv\")", "Survey tours", "pd.read_csv(\"../data_sf/survey_data/override_tours.csv\")", "Survey trips", "pd.read_csv(\"../data_sf/survey_data/override_trips.csv\")", "Example Setup if Needed\nTo avoid duplication of inputs, especially model settings and expressions, the example_estimation depends on the example. The following commands create an example setup for use. The location of these example setups (i.e. the folders) are important because the paths are referenced in this notebook. 
The commands below download the skims.omx for the SF county example from the activitysim resources repository.", "!activitysim create -e example_estimation_sf -d test", "Run the Estimation Example\nThe next step is to run the model with an estimation.yaml settings file with the following settings in order to output the EDB for all submodels:\n```\nenable=True\nbundles:\n - school_location\n - workplace_location\n - auto_ownership\n - free_parking\n - cdap\n - mandatory_tour_frequency\n - mandatory_tour_scheduling\n - joint_tour_frequency\n - joint_tour_composition\n - joint_tour_participation\n - joint_tour_destination\n - joint_tour_scheduling\n - non_mandatory_tour_frequency\n - non_mandatory_tour_destination\n - non_mandatory_tour_scheduling\n - tour_mode_choice\n - atwork_subtour_frequency\n - atwork_subtour_destination\n - atwork_subtour_scheduling\n - atwork_subtour_mode_choice\nsurvey_tables:\n households:\n file_name: survey_data/override_households.csv\n index_col: household_id\n persons:\n file_name: survey_data/override_persons.csv\n index_col: person_id\n tours:\n file_name: survey_data/override_tours.csv\n joint_tour_participants:\n file_name: survey_data/override_joint_tour_participants.csv\n```\nThis enables the estimation mode functionality, identifies which models to run and their output estimation data bundles (EDBs), and the input survey tables, which include the override settings for each model choice. \nWith this setup, the model will output an EBD with the following tables for this submodel:\n - model settings - auto_ownership_model_settings.yaml\n - coefficients - auto_ownership_coefficients.csv\n - utilities specification - auto_ownership_SPEC.csv\n - chooser and alternatives data - auto_ownership_values_combined.csv\nThe following code runs the software in estimation mode, inheriting the settings from the simulation setup and using the San Francisco county data setup. It produces the EDB for all submodels but runs all the model steps identified in the inherited settings file.", "%cd test\n\n!activitysim run -c configs_estimation/configs -c configs -o output -d data_sf", "After completing the a run of ActivitySim in estimation mode, we are ready to begin \nre-estimating models. This process is shown in the other notebooks in this directory." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mramanathan/pydiary_notes
functools_partial_usage.ipynb
gpl-3.0
[ "26/May/2016:\n** Hey, what's this 'functools' thingy, huh !?!", "import functools\n\ndef hello_doctor(greet_msg, greet_whom):\n return \"%s %s, Welcome to the world of robots.\" %(greet_msg, greet_whom)\n\n## this line works\n# hello_doctor = functools.partial(hello_doctor, \"R2-D2\")\n# hello_doctor(\"Dr.Susan Calvin\")", "What's going on in cells 11 & 14 ?", "greet = functools.partial(hello_doctor)\ngreet(\"Dr.Susan Calvin\")\n\nwelcome = functools.partial(hello_doctor)\nwelcome(\"Dr.Susan Calvin\")\n\ndef numpower(base, exponent):\n return base ** exponent\n\ndef square(base):\n return numpower(base, 2)\n\ndef cube(base):\n return numpower(base, 3)\n \nprint square(25)\nprint cube(15)", "How can the above code be handled using functools.partial() ?" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
DwangoMediaVillage/pqkmeans
tutorial/4_comparison_to_faiss.ipynb
mit
[ "Chapter 4: Comparison to faiss\nThis chapter contains the followings:\n\nSetup the experiment using SIFT1M\nSmall-scale comparison: N=10^5, K=10^3 (k-means with faiss-CPU and k-means with sklearn)\nLarge-scale comparison: N=10^6, K=10^4 (PQk-means, k-means with faiss-CPU, and k-measn with falss-GPU)\n\nRequisites:\n- numpy\n- pqkmeans\n- sklearn\n- faiss (you can install it via conda)\n 1. CPU version: conda install faiss-cpu -c pytorch\n 1. GPU version (with two NVIDIA GTX1080s): conda install faiss-gpu -c pytorch\nOur final suggestions are as follows:\n- If you have GPU(s) and your GPU memory is large enough (all data can be loaded on the GPU memory at once), faiss-GPU is the fastest option.\n- Otherwise, \n - If your problem is small enough (all vectors can be easily fit into the RAM), faiss-CPU would be the best option.\n - If the problem is large, e.g., (1) faiss-CPU seems slow, or (2) the vectors cannot be loaded on the memory at once, then PQk-means is the best option.\n1. Setup the experiment using SIFT1M", "import numpy\nimport pqkmeans\nfrom sklearn.cluster import KMeans\nimport faiss", "In this chapter, we compare our PQk-means to k-means in the faiss library. Faiss provides one of the most efficient implementations of nearest neighbor algorithms for both CPU(s) and GPU(s). It also provides an implementation of vanilla k-means, which we will compare to. The core part of faiss is implemented by C++, and the python binding is available.\nWe compare PQk-means to both CPU- and GPU-version. Our configurations are:\n- faiss-CPU: This was built with Intel MKL, which provides the fastest backend BLAS implementation. The algorithms in the library are automatically parallelized. All evaluations are conducted on a server with 3.6 GHz Intel Xeon CPU (6 cores, 12 threads)\n- faiss-GPU: The library was built with CUDA 8.0. Two middle-level GPUs, NVIDIA GTX 1080s, are used for the evaluation. The algorithms can be run over multi GPUs. \nFor the comparison, we leverage the SIFT1M dataset.", "Xt, X = pqkmeans.evaluation.get_sift1m_dataset() # Xt: the training data. X: the testing data to be clusterd", "First, you can download the data by a helper script. This would take several minutes, and consume 168 MB of the disk space.", "Xt = Xt.astype(numpy.float32)\nX = X.astype(numpy.float32)\nD = X.shape[1]\nprint(\"Xt.shape:{}\\nX.shape:{}\".format(Xt.shape, X.shape))", "Because faiss takes 32-bit float vectors as inputs, the data is converted to float32.\n2. Small-scale comparison: N=10^5, K=10^3 (k-means with faiss-CPU v.s. k-means with sklearn)\nFirst, let us compare the k-means implementation of faiss and sklearn using 100K vectors from SIFT1M. Then we show that faiss is much faster than sklearn with almost the same error.\nNote that it is hard to run k-means-sklearn with a large K because it is too slow (that is the reason for this small-scale experiment)", "K_small = 1000\nN_small = 100000\n\n# Setup clustering instances. 
We stop each algorithm with 10 iterations\nkmeans_faiss_cpu_small = faiss.Kmeans(d=D, k=K_small, niter=10)\nkmeans_sklearn_small = KMeans(n_clusters=K_small, n_jobs=-1, max_iter=10)", "Let's run each algorithm", "%%time\nprint(\"faiss-cpu:\")\nkmeans_faiss_cpu_small.train(X[:N_small])\n_, ids_faiss_cpu_small = kmeans_faiss_cpu_small.index.search(X[:N_small], 1)\n\n%%time\nprint(\"sklearn:\")\nids_sklearn_small = kmeans_sklearn_small.fit_predict(X[:N_small])\n\n_, faiss_cpu_small_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_cpu_small.reshape(-1), X[:N_small], K_small)\n_, sklearn_small_error, _ = pqkmeans.evaluation.calc_error(ids_sklearn_small, X[:N_small], K_small)\n\nprint(\"k-means, faiss-cpu, error: {}\".format(faiss_cpu_small_error))\nprint(\"k-means, sklearn, error: {}\".format(sklearn_small_error))", "We observed that\n- k-means with faiss-CPU (2 sec) is surprisingly faster than k-means with sklearn (3 min) with almost the same error. This speedup would be due to the highly optimized implementation of the nearest neighbor search in faiss with Intel MKL BLAS. This suggests that faiss-CPU is a better option for the exact k-means in a usual computer.\nBecause faiss-CPU is faster thant sklearn, sklearn is not compared in the next section.\n3. Large-scale comparison: N=10^6, K=10^4 (PQk-means, k-means with faiss-CPU, and k-measn with falss-GPU)", "# Setup GPUs for faiss-gpu\n# In my environment, the first GPU (id=0) is for rendering, and the second (id=1) and the third (id=2) GPUs are GPGPU (GTX1080).\n# We activate only the second and the third GPU\nimport os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\" # make sure the order is identical to the result of nvidia-smi\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1,2\" # Please change here for your environment", "Next, let us compare PQk-meas with faiss-CPU and faiss-GPU using the whole dataset (N=10^6, K=10^4). Note that this is 100x larter setting compared to Sec 2 (NK=10^8 vs NK=10^10). \nFirst, as pre-processing for PQk-means, let's train a PQencoder and encode all data. It will take around 10 sec.", "%%time\n# Train the encoder\nencoder = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256)\nencoder.fit(Xt)\n\n# Encode the vectors to PQ-codes\nX_code = encoder.transform(X)", "Note that X_code is 128x more memory efficient than X:", "print(\"X.shape: {}, X.dtype: {}, X.nbytes: {} MB\".format(X.shape, X.dtype, X.nbytes / 10**6))\nprint(\"X_code.shape: {}, X_code.dtype: {}, X_code.nbytes: {} MB\".format(X_code.shape, X_code.dtype, X_code.nbytes / 10**6))", "Then each algorithms are instantiated as follows", "K = 10000 # Set larger K\n\n# Setup k-means instances. 
The number of iteration is set as 20 for all methods\n\n# PQ-kmeans\nkmeans_pqkmeans = pqkmeans.clustering.PQKMeans(encoder=encoder, k=K, iteration=20)\n\n# Faiss-cpu\nkmeans_faiss_cpu = faiss.Kmeans(d=D, k=K, niter=20)\nkmeans_faiss_cpu.cp.max_points_per_centroid = 1000000 # otherwise the kmeans implementation sub-samples the training set", "Because some configurations are required for GPU, we wrap up the gpu clustering as one function:", "def run_faiss_gpu(X, K, ngpu):\n # This code is based on https://github.com/facebookresearch/faiss/blob/master/benchs/kmeans_mnist.py\n \n D = X.shape[1]\n clus = faiss.Clustering(D, K)\n \n # otherwise the kmeans implementation sub-samples the training set\n clus.max_points_per_centroid = 10000000\n \n clus.niter = 20\n \n res = [faiss.StandardGpuResources() for i in range(ngpu)]\n\n flat_config = []\n for i in range(ngpu):\n cfg = faiss.GpuIndexFlatConfig()\n cfg.useFloat16 = False\n cfg.device = i\n flat_config.append(cfg)\n\n if ngpu == 1:\n index = faiss.GpuIndexFlatL2(res[0], D, flat_config[0])\n else:\n indexes = [faiss.GpuIndexFlatL2(res[i], D, flat_config[i])\n for i in range(ngpu)]\n index = faiss.IndexProxy()\n for sub_index in indexes:\n index.addIndex(sub_index)\n \n \n # Run clustering\n clus.train(X, index)\n \n # Return the assignment\n _, ids = index.search(X, 1)\n return ids", "Run each method and see the computational cost.", "%%time\nprint(\"PQk-means:\")\nids_pqkmeans = kmeans_pqkmeans.fit_predict(X_code)\n\n%%time\nprint(\"faiss-cpu:\")\nkmeans_faiss_cpu.train(X)\n_, ids_faiss_cpu = kmeans_faiss_cpu.index.search(X, 1)\n\n%%time\nprint(\"faiss with GPU:\")\nids_faiss_gpu = run_faiss_gpu(X, K, ngpu=2) # Please adjust ngpu for your environment\n\n_, pqkmeans_error, _ = pqkmeans.evaluation.calc_error(ids_pqkmeans, X, K)\n_, faiss_cpu_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_cpu.reshape(-1), X, K)\n_, faiss_gpu_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_gpu.reshape(-1), X, K)\n\nprint(\"PQk-means, error: {}\".format(pqkmeans_error))\nprint(\"k-means, faiss-cpu, error: {}\".format(faiss_cpu_error))\nprint(\"k-means, faiss-gpu, error: {}\".format(faiss_gpu_error))", "Our observations are:\n- PQk-means (around 2 min) is 2x faster than k-means with faiss-CPU (around 4 min). The cost of learining/encoding is marginal (10 sec). \n- PQk-means is memory efficient (128x in this case). More data can be easily processed even if the data itself cannot be loaded on the RAM at once (see tutorial3). Note that faiss provides memory-efficient search algorithms including IVFPQ, but the clustering itself is vanilla k-means (all original vectors need to be loaded on the memory).\n- Because PQk-means is an approximation of k-means, the accuracy of clustering is lower than k-means with CPU/GPU faiss.\n- k-means with faiss-GPU (10 sec) is suprisingly faster than both PQk-means and faiss-CPU, with the same error as faiss-CPU. We conlude that, if you have several GPUs, faiss-GPU is the fastest option for the exact k-means (see benchmark for more results). 
Note that PQk-means with GPUs could be faster, but has not been implemented yet.\nOur final suggestions are as follows:\n- If you have GPU(s) and your GPU memory is large enough (all data can be loaded into GPU memory at once), faiss-GPU is the fastest option.\n- Otherwise, \n - If your problem is small enough (all vectors can easily fit into RAM), faiss-CPU would be the best option.\n - If the problem is large, e.g., (1) faiss-CPU seems slow, or (2) the vectors cannot be loaded into memory at once, then PQk-means is the best option." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
computational-class/cjc2016
code/08.06-regression.ipynb
mit
[ "Simple Linear Regression\nWe used the correlation function to measure the strength of the linear relationship between two variables. For most applications, knowing that such a linear relationship exists isnโ€™t enough. Weโ€™ll want to be able to understand the nature of the relationship. This is where weโ€™ll use simple linear regression.\nThe Model\n$$y_i = \\beta x_i + \\alpha + \\epsilon_i$$\nwhere \n\n$y_i$ is the number of minutes user i spends on the site daily, \n$x_i$ is the number of friends user i has\n$\\alpha$ is the constant when x = 0.\n$ฮต_i$ is a (hopefully small) error term representing the fact that there are other factors not accounted for by this simple model.\n\nLeast Squares Fit\nๆœ€ๅฐไบŒไน˜ๆณ•\n$$ y_i = X_i^T w$$\nThe constant could be represent by 1 in X\nThe squared error could be written as: \n$$ \\sum_{i = 1}^m (y_i -X_i^T w)^2 $$\nIf we know $\\alpha$ and $\\beta$, then we can make predictions.\nSince we know the actual output $y_i$ we can compute the error for each pair.\nSince the negative errors cancel out with the positive ones, we use squared errors.\nThe least squares solution is to choose the $\\alpha$ and $\\beta$ that make sum_of_squared_errors as small as possible.\nThe choice of beta means that when the input value increases by standard_deviation(x), the prediction increases by correlation(x, y) * standard_deviation(y). \n\nIn the case when x and y are perfectly positively correlated, a one standard deviation increase in x results in a one-standard-deviation-of-y increase in the prediction.\nWhen theyโ€™re perfectly negatively correlated, the increase in x results in a decrease in the prediction. \nAnd when the correlation is zero, beta is zero, which means that changes in x donโ€™t affect the prediction at all.\n\n\nIn this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. \n\n$$ y_i = \\alpha + \\beta x_i + \\varepsilon_i $$\n$$ \\hat\\varepsilon_i =y_i-a -b x_i $$\n$$ \\text{Find }\\min_{a,\\, b} Q(a, b), \\quad \\text{for } Q(a, b) = \\sum_{i=1}^n\\hat\\varepsilon_i^{\\,2} = \\sum_{i=1}^n (y_i -a - b x_i)^2\\ $$\nBy expanding to get a quadratic expression in $a$ and $b$, we can derive values of $a$ and $b$ that minimize the objective function $Q$ (these minimizing values are denoted $\\hat{\\alpha}$ and $\\hat{\\beta}$):\n\\begin{align}\n \\hat\\alpha & = \\bar{y} - \\hat\\beta\\,\\bar{x}, \\\n \\hat\\beta &= \\frac{ \\sum_{i=1}^n (x_i - \\bar{x})(y_i - \\bar{y}) }{ \\sum_{i=1}^n (x_i - \\bar{x})^2 } = \\frac{ \\operatorname{Cov}(x, y) }{ \\operatorname{Var}(x) } = r_{xy} \\frac{s_y}{s_x}. \\[6pt]\n\\end{align}\n\n$r_{xy}$ as the sample correlation coefficient between x and y\n$s_x$ and $s_y$ as the uncorrected sample standard deviations of x and y\n\n\nKenney, J. F. and Keeping, E. S. (1962) \"Linear Regression and Correlation.\" Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252โ€“285\n\nSubstituting the above expressions for $\\hat{\\alpha}$ and $\\hat{\\beta}$ into\n$$f = \\hat{\\alpha} + \\hat{\\beta} x,$$\nyields\n$$\\frac{ f - \\bar{y}}{s_y} = r_{xy} \\frac{ x - \\bar{x}}{s_x} .$$\n\nKenney, J. F. and Keeping, E. S. (1962) \"Linear Regression and Correlation.\" Ch. 15 in ''Mathematics of Statistics'', Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 
252โ€“285", "num_friends_good = [49,41,40,25,21,21,19,19,18,18,16,15,15,15,15,14,14,13,13,13,13,12,12,11,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,8,8,8,8,8,8,8,8,8,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]\ndaily_minutes_good = [68.77,51.25,52.08,38.36,44.54,57.13,51.4,41.42,31.22,34.76,54.01,38.79,47.59,49.1,27.66,41.03,36.73,48.65,28.12,46.62,35.57,32.98,35,26.07,23.77,39.73,40.57,31.65,31.21,36.32,20.45,21.93,26.02,27.34,23.49,46.94,30.5,33.8,24.23,21.4,27.94,32.24,40.57,25.07,19.42,22.39,18.42,46.96,23.72,26.41,26.97,36.76,40.32,35.02,29.47,30.2,31,38.11,38.18,36.31,21.03,30.86,36.07,28.66,29.08,37.28,15.28,24.17,22.31,30.17,25.53,19.85,35.37,44.6,17.23,13.47,26.33,35.02,32.09,24.81,19.33,28.77,24.26,31.98,25.73,24.86,16.28,34.51,15.23,39.72,40.8,26.06,35.76,34.76,16.13,44.04,18.03,19.65,32.62,35.59,39.43,14.18,35.24,40.13,41.82,35.45,36.07,43.67,24.61,20.9,21.9,18.79,27.61,27.21,26.61,29.77,20.59,27.53,13.82,33.2,25,33.1,36.65,18.63,14.87,22.2,36.81,25.53,24.62,26.25,18.21,28.08,19.42,29.79,32.8,35.99,28.32,27.79,35.88,29.06,36.28,14.1,36.63,37.49,26.9,18.58,38.48,24.48,18.95,33.55,14.24,29.04,32.51,25.63,22.22,19,32.73,15.16,13.9,27.2,32.01,29.27,33,13.74,20.42,27.32,18.23,35.35,28.48,9.08,24.62,20.12,35.26,19.92,31.02,16.49,12.16,30.7,31.22,34.65,13.13,27.51,33.2,31.57,14.1,33.42,17.44,10.12,24.42,9.82,23.39,30.93,15.03,21.67,31.09,33.29,22.61,26.89,23.48,8.38,27.81,32.35,23.84]\nalpha, beta = 22.9475, 0.90386\n\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.scatter(num_friends_good, daily_minutes_good)\nplt.plot(num_friends_good, [alpha + beta*i for i in num_friends_good], 'b-')\nplt.xlabel('# of friends', fontsize = 20)\nplt.ylabel('minutes per day', fontsize = 20)\nplt.title('simple linear regression model', fontsize = 20)\nplt.show()", "Of course, we need a better way to figure out how well weโ€™ve fit the data than staring at the graph. \nA common measure is the coefficient of determination (or R-squared), which measures the fraction of the total variation in the dependent variable that is captured by the model.\nMultiple Regression using Matrix Method\nMachine Learning in Action\nhttps://github.com/computational-class/machinelearninginaction/\n$$ y_i = X_i^T w$$\nThe constant could be represent by 1 in X\nThe squared error could be written as: \n$$ \\sum_{i = 1}^m (y_i -X_i^T w)^2 $$\nWe can also write this in matrix notation as $(y-Xw)^T(y-Xw)$. \nIf we take the derivative of this with respect to w, weโ€™ll get $X^T(y-Xw)$. 
\nWe can set this to zero and solve for w to get the following equation:\n$$\\hat w = (X^T X)^{-1}X^T y$$", "# https://github.com/computational-class/machinelearninginaction/blob/master/Ch08/regression.py\nimport pandas as pd\nimport random\n\ndat = pd.read_csv('../data/ex0.txt', sep = '\\t', names = ['x1', 'x2', 'y'])\ndat['x3'] = [yi*.3 + .5*random.random() for yi in dat['y']]\ndat.head()\n\nfrom numpy import mat, linalg, corrcoef\n\ndef standRegres(xArr,yArr):\n xMat = mat(xArr); yMat = mat(yArr).T\n xTx = xMat.T*xMat\n if linalg.det(xTx) == 0.0:\n print(\"This matrix is singular, cannot do inverse\")\n return\n ws = xTx.I * (xMat.T*yMat)\n return ws\n\nxs = [[dat.x1[i], dat.x2[i], dat.x3[i]] for i in dat.index]\ny = dat.y\nprint(xs[:2])\nws = standRegres(xs, y)\nprint(ws)\n\nxMat=mat(xs)\nyMat=mat(y)\nyHat = xMat*ws\n\nxCopy=xMat.copy()\nxCopy.sort(0)\nyHat=xCopy*ws\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(xMat[:,1].flatten().A[0], yMat.T[:,0].flatten().A[0])\nax.plot(xCopy[:,1],yHat, 'r-')\nplt.ylim(0, 5)\nplt.show()\n\nyHat = xMat*ws\ncorrcoef(yHat.T, yMat)", "Doing Statistics with statsmodels\nhttp://www.statsmodels.org/stable/index.html\nstatsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.", "import statsmodels.api as sm\nimport statsmodels.formula.api as smf\n\ndat = pd.read_csv('ex0.txt', sep = '\\t', names = ['x1', 'x2', 'y'])\ndat['x3'] = [yi*.3 - .1*random.random() for yi in y]\ndat.head()\n\nresults = smf.ols('y ~ x2 + x3', data=dat).fit()\n\nresults.summary()\n\nfig = plt.figure(figsize=(12,8))\nfig = sm.graphics.plot_partregress_grid(results, fig = fig)\nplt.show()\n\nimport numpy as np\nX = np.array(num_friends_good)\nX = sm.add_constant(X, prepend=False)\n\nmod = sm.OLS(daily_minutes_good, X)\nres = mod.fit()\nprint(res.summary())\n\nfig = plt.figure(figsize=(6,8))\nfig = sm.graphics.plot_partregress_grid(res, fig = fig)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
evanmiltenburg/python-for-text-analysis
Assignments-colab/ASSIGNMENT_RESIT_A.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Assignments-colab/ASSIGNMENT_RESIT_A.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "%%capture\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip\n\n!unzip Data.zip -d ../\n!unzip images.zip -d ./\n!unzip Extra_Material.zip -d ../\n\n!rm Data.zip\n!rm Extra_Material.zip\n!rm images.zip", "Resit Assignment part A\nDeadline: Friday, November 13, 2020 before 17:00 \n\nPlease name your files: \nASSIGNMENT-RESIT-A.ipynb\nutils.py (from part B)\nraw_text_to_coll.py (from part B)\n\n\n\nPlease name your zip file as follows: RESIT-ASSIGNMENT.zip and upload it via Canvas (Resit Assignment). \n- Please submit your assignment on Canvas: Resit Assignment\n- If you have questions about this topic, please contact cltl.python.course@gmail.com.\nQuestions and answers will be collected in this Q&A document, \nso please check if your question has already been answered.\nAll of the covered chapters are important to this assignment. However, please pay special attention to:\n- Chapter 10 - Dictionaries\n- Chapter 11 - Functions and scope\n* Chapter 14 - Reading and writing text files\n* Chapter 15 - Off to analyzing text \n- Chapter 17 - Data Formats II (JSON)\n- Chapter 19 - More about Natural Language Processing Tools (spaCy)\nIn this assignment:\n* we are going to process the texts in ../Data/Dreams/*txt\n* for each file, we are going to determine:\n * the number of characters\n * the number of sentences\n * the number of words\n * the longest word\n * the longest sentence\nNote\nThis notebook should be placed in the same folder as the other Assignments!\nLoading spaCy\nPlease make sure that spaCy is installed on your computer", "import spacy", "Please make sure you can load the English spaCy model:", "nlp = spacy.load('en_core_web_sm')", "Exercise 1: get paths\nDefine a function called get_paths that has the following parameter: \n* input_folder: a string\nThe function:\n* stores all paths to .txt files in the input_folder in a list\n* returns a list of strings, i.e., each string is a file path", "# your code here", "Please test your function using the following function call", "paths = get_paths(input_folder='../Data/Dreams')\nprint(paths)", "Exercise 2: load text\nDefine a function called load_text that has the following parameter: \n* txt_path: a string\nThe function:\n* opens the txt_path for reading and loads the contents of the file as a string\n* returns a string, i.e., the content of the file", "# your code here", "Exercise 3: return the longest\nDefine a function called return_the_longest that has the following parameter: \n* list_of_strings: a list of strings\nThe function:\n* returns the string with the highest number of characters. 
If multiple strings have the same length, return one of them.", "def return_the_longest(list_of_strings):\n \"\"\"\n given a list of strings, return the longest string\n if multiple strings have the same length, return one of them.\n \n :param str list_of_strings: a list of strings\n \n \"\"\"", "Please test you function by running the following cell:", "a_list_of_strings = [\"this\", \"is\", \"a\", \"sentence\"]\nlongest_string = return_the_longest(a_list_of_strings)\n\nerror_message = f'the longest string should be \"sentence\", you provided {longest_string}'\nassert longest_string == 'sentence', error_message", "Exercise 4: extract statistics\nWe are going to use spaCy to extract statistics from Vickie's dreams! Here are a few tips below about how to use spaCy:\ntip 1: process text with spaCy", "a_text = 'this is one sentence. this is another.'\ndoc = nlp(a_text)", "tip 2: the number of characters is the length of the document", "num_chars = len(doc.text)\nprint(num_chars)", "tip 3: loop through the sentences of a document", "for sent in doc.sents:\n sent = sent.text\n print(sent)", "tip 4: loop through the words of a document", "for token in doc:\n word = token.text\n print(word)", "Define a function called extract_statistics that has the following parameters: \n* nlp: the result of calling spacy.load('en_core_web_sm')\n* txt_path: path to a txt file, e.g., '../Data/Dreams/vickie8.txt'\nThe function:\n* loads the content of the file using the function load_text\n* processes the content of the file using nlp(content) (see tip 1 of this exercise)\nThe function returns a dictionary with five keys:\n* num_sents: the number of sentences in the document\n* num_chars: the number of characters in the document\n* num_tokens: the number of words in the document\n* longest_sent: the longest sentence in the document\n * Please make a list with all the sentences and call the function return_the_longest to retrieve the longest sentence\n* longest_word: the longest word in the document\n * Please make a list with all the words and call the function return_the_longest to retrieve the longest word\nTest the function on one of the files from Vickie's dreams.", "def extract_statistics(nlp, txt_path):\n \"\"\"\n given a txt_path\n -use the load_text function to load the text\n -process the text using spaCy\n \n :param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))\n :param str txt_path: path to txt file\n \n :rtype: dict\n :return: a dictionary with the following keys:\n -\"num_sents\" : the number of sentences\n -\"num_chars\" : the number of characters \n -\"num_tokens\" : the number of words \n -\"longest_sent\" : the longest sentence\n -\"longest_word\" : the longest word\n \"\"\"\n\nstats = extract_statistics(nlp, txt_path=paths[0])\nstats", "Exercise 5: process all txt files\ntip 1: how to obtain the basename of a file", "import os\n\nbasename = os.path.basename('../Data/Dreams/vickie1.txt')[:-4]\nprint(basename)", "Define a function called process_all_txt_files that has the following parameters: \n* nlp: the result of calling spacy.load('en_core_web_sm')\n* input_folder: a string (we will test it using '../Data/Dreams')\nThe function:\n* obtains a list of txt paths using the function get_paths with input_folder as an argument\n* loops through the txt paths one by one\n* for each iteration, the extract_statistics function is called with txt_path as an argument\nThe function returns a dictionary:\n* the keys are the basenames of the txt files (see tip 1 of this exercise)\n* 
the values are the output of calling the function extract_statistics for a specific file\nTest your function using '../Data/Dreams' as a value for the parameter input_folder.", "def process_all_txt_files(nlp, input_folder):\n \"\"\"\n given a list of txt_paths\n -process each with the extract_statistics function\n \n :param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))\n :param list txt_paths: list of paths to txt files\n \n :rtype: dict\n :return: dictionary mapping:\n -basename -> output of extract_statistics function\n \"\"\"\n\nbasename_to_stats = process_all_txt_files(nlp, input_folder='../Data/Dreams')\nbasename_to_stats", "Exercise 6: write to disk\nIn this exercise, you are going to write the results to your computer.\nPlease loop through basename_to_stats and create one JSON file for each dream.\n\nthe path is f'{basename}.json', i.e., 'vickie1.json', 'vickie2.json', etc. (please write them to the same folder as this notebook)\nthe content of each JSON file is each value of basename_to_stats", "import json\n\nfor basename, stats in basename_to_stats.items():\n pass" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/hadgem3-gc31-hm/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: HADGEM3-GC31-HM\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:14\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hm', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KitwareMedical/ITKTubeTK
examples/TubeNumPyArrayAndPropertyHistograms.ipynb
apache-2.0
[ "This notebook illustrates the TubeTK tube NumPy array data structure and how to create histograms of the properties of a VesselTube.\nFirst, import the function for reading a tube file in as a NumPy array, and read in the file.", "import os\nimport sys\n\nfrom _tubetk_numpy import tubes_from_file\n\ntubes = tubes_from_file(\"data/Normal071-VascularNetwork.tre\")", "The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint.", "print(type(tubes))\nprint(tubes.dtype)", "The length of the array corresponds to the number of points that make up the tubes.", "print(len(tubes))\nprint(tubes.shape)", "Individual points can be sliced, or views can be created on individual fields.", "print('Entire points 0, 2:')\nprint(tubes[:4:2])\n\nprint('\\nPosition of points 0, 2')\nprint(tubes['PositionInWorldSpace'][:4:2])", "We can easily create a histogram of the radii or visualize the point positions.", "%pylab inline\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(16, 6))\n\nax = fig.add_subplot(1, 2, 1)\nax.hist(tubes['RadiusInWorldSpace'], bins=100)\nax.set_xlabel('Radius')\nax.set_ylabel('Count')\n\nax = fig.add_subplot(1, 2, 2, projection='3d')\nsubsample = 100\nposition = tubes['PositionInWorldSpace'][::subsample]\nradius = tubes['RadiusInWorldSpace'][::subsample]\nax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2)\nax.set_title('Point Positions')\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb
apache-2.0
[ "MNIST Image Classification with TensorFlow on Cloud ML Engine\nThis notebook demonstrates how to implement different image models on MNIST using the tf.keras API.\nLearning Objectives\n\nUnderstand how to build a Dense Neural Network (DNN) for image classification\nUnderstand how to use dropout (DNN) for image classification\nUnderstand how to use Convolutional Neural Networks (CNN)\nKnow how to deploy and use an image classifcation model using Google Cloud's AI Platform\n\nFirst things first. Configure the parameters below to match your own Google Cloud project details.", "from datetime import datetime\nimport os\n\nPROJECT = \"your-project-id-here\" # REPLACE WITH YOUR PROJECT ID\nBUCKET = \"your-bucket-id-here\" # REPLACE WITH YOUR BUCKET NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"REGION\"] = REGION\nos.environ[\"IMAGE_URI\"] = os.path.join(\"gcr.io\", PROJECT, \"mnist_models\")", "Building a dynamic model\nIn the previous notebook, <a href=\"mnist_linear.ipynb\">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.\nThe boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.\nLet's start with the trainer file first. This file parses command line arguments to feed into the model.", "%%writefile mnist_models/trainer/task.py\nimport argparse\nimport json\nimport os\nimport sys\n\nfrom . import model\n\n\ndef _parse_arguments(argv):\n \"\"\"Parses command-line arguments.\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--model_type',\n help='Which model type to use',\n type=str, default='linear')\n parser.add_argument(\n '--epochs',\n help='The number of epochs to train',\n type=int, default=10)\n parser.add_argument(\n '--steps_per_epoch',\n help='The number of steps per epoch to train',\n type=int, default=100)\n parser.add_argument(\n '--job-dir',\n help='Directory where to save the given model',\n type=str, default='mnist_models/')\n return parser.parse_known_args(argv)\n\n\ndef main():\n \"\"\"Parses command line arguments and kicks off model training.\"\"\"\n args = _parse_arguments(sys.argv[1:])[0]\n\n # Configure path for hyperparameter tuning.\n trial_id = json.loads(\n os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')\n output_path = args.job_dir if not trial_id else args.job_dir + '/'\n\n model_layers = model.get_layers(args.model_type)\n image_model = model.build_model(model_layers, args.job_dir)\n model_history = model.train_and_evaluate(\n image_model, args.epochs, args.steps_per_epoch, args.job_dir)\n\n\nif __name__ == '__main__':\n main()\n", "Next, let's group non-model functions into a util file to keep the model file simple. 
We'll copy over the scale and load_dataset functions from the previous lab.", "%%writefile mnist_models/trainer/util.py\nimport tensorflow as tf\n\n\ndef scale(image, label):\n \"\"\"Scales images from a 0-255 int range to a 0-1 float range\"\"\"\n image = tf.cast(image, tf.float32)\n image /= 255\n image = tf.expand_dims(image, -1)\n return image, label\n\n\ndef load_dataset(\n data, training=True, buffer_size=5000, batch_size=100, nclasses=10):\n \"\"\"Loads MNIST dataset into a tf.data.Dataset\"\"\"\n (x_train, y_train), (x_test, y_test) = data\n x = x_train if training else x_test\n y = y_train if training else y_test\n # One-hot encode the classes\n y = tf.keras.utils.to_categorical(y, nclasses)\n dataset = tf.data.Dataset.from_tensor_slices((x, y))\n dataset = dataset.map(scale).batch(batch_size)\n if training:\n dataset = dataset.shuffle(buffer_size).repeat()\n return dataset\n", "Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: get_layers and create_and_train_model. We will build the structure of our model in get_layers. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate.\nTODO 1: Define the Keras layers for a DNN model \nTODO 2: Define the Keras layers for a dropout model\nTODO 3: Define the Keras layers for a CNN model \nHint: These models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance.", "%%writefile mnist_models/trainer/model.py\nimport os\nimport shutil\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.keras.layers import (\n Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)\n\nfrom . 
import util\n\n\n# Image Variables\nWIDTH = 28\nHEIGHT = 28\n\n\ndef get_layers(\n model_type,\n nclasses=10,\n hidden_layer_1_neurons=400,\n hidden_layer_2_neurons=100,\n dropout_rate=0.25,\n num_filters_1=64,\n kernel_size_1=3,\n pooling_size_1=2,\n num_filters_2=32,\n kernel_size_2=3,\n pooling_size_2=2):\n \"\"\"Constructs layers for a keras model based on a dict of model types.\"\"\"\n model_layers = {\n 'linear': [\n Flatten(),\n Dense(nclasses),\n Softmax()\n ],\n 'dnn': [\n # TODO\n ],\n 'dnn_dropout': [\n # TODO\n ],\n 'cnn': [\n # TODO\n ]\n }\n return model_layers[model_type]\n\n\ndef build_model(layers, output_dir):\n \"\"\"Compiles keras model for image classification.\"\"\"\n model = Sequential(layers)\n model.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n return model\n\n\ndef train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):\n \"\"\"Compiles keras model and loads data into it for training.\"\"\"\n mnist = tf.keras.datasets.mnist.load_data()\n train_data = util.load_dataset(mnist)\n validation_data = util.load_dataset(mnist, training=False)\n\n callbacks = []\n if output_dir:\n tensorboard_callback = TensorBoard(log_dir=output_dir)\n callbacks = [tensorboard_callback]\n\n history = model.fit(\n train_data,\n validation_data=validation_data,\n epochs=num_epochs,\n steps_per_epoch=steps_per_epoch,\n verbose=2,\n callbacks=callbacks)\n\n if output_dir:\n export_path = os.path.join(output_dir, 'keras_export')\n model.save(export_path, save_format='tf')\n\n return history\n", "Local Training\nWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. line 14 and line 15 has the number of epochs and steps per epoch respectively.\nMoment of truth! Run the code below to check your models against the unit tests. If you see \"OK\" at the end when it's finished running, congrats! You've passed the tests!", "!python3 -m mnist_models.trainer.test", "Now that we know that our models are working as expected, let's run it on the Google Cloud AI Platform. We can run it as a python module locally first using the command line.\nThe below cell transfers some of our variables to the command line as well as create a job directory including a timestamp. This is where our model and tensorboard data will be stored.", "current_time = datetime.now().strftime(\"%y%m%d_%H%M%S\")\nmodel_type = 'cnn'\n\nos.environ[\"MODEL_TYPE\"] = model_type\nos.environ[\"JOB_DIR\"] = \"mnist_models/models/{}_{}/\".format(\n model_type, current_time)", "The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorther, as defined in our mnist_models/trainer/task.py file.", "%%bash\npython3 -m mnist_models.trainer.task \\\n --job-dir=$JOB_DIR \\\n --epochs=5 \\\n --steps_per_epoch=50 \\\n --model_type=$MODEL_TYPE", "Training on the cloud\nSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a Deep Learning Container in order to take advantage of libraries and applications not normally packaged with AI Platform. 
Below is a simple Dockerfile which copies our code to be used in a TF2 environment.", "%%writefile mnist_models/Dockerfile\nFROM gcr.io/deeplearning-platform-release/tf2-cpu\nCOPY mnist_models/trainer /mnist_models/trainer\nENTRYPOINT [\"python3\", \"-m\", \"mnist_models.trainer.task\"]", "The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up here with the name mnist_models. (Click here to enable Cloud Build)", "!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./\n\n!docker push $IMAGE_URI", "Finally, we can kickoff the AI Platform training job. We can pass in our docker image using the master-image-uri flag.", "current_time = datetime.now().strftime(\"%y%m%d_%H%M%S\")\nmodel_type = 'cnn'\n\nos.environ[\"MODEL_TYPE\"] = model_type\nos.environ[\"JOB_DIR\"] = \"gs://{}/mnist_{}_{}/\".format(\n BUCKET, model_type, current_time)\nos.environ[\"JOB_NAME\"] = \"mnist_{}_{}\".format(\n model_type, current_time)\n\n%%bash\necho $JOB_DIR $REGION $JOB_NAME\ngcloud ai-platform jobs submit training $JOB_NAME \\\n --staging-bucket=gs://$BUCKET \\\n --region=$REGION \\\n --master-image-uri=$IMAGE_URI \\\n --scale-tier=BASIC_GPU \\\n --job-dir=$JOB_DIR \\\n -- \\\n --model_type=$MODEL_TYPE", "Can't wait to see the results? Run the code below and copy the output into the Google Cloud Shell to follow along with TensorBoard. Look at the web preview on port 6006.", "!echo \"tensorboard --logdir $JOB_DIR\"", "Deploying and predicting with model\nOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path.\nEven though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!\nUncomment the delete commands below if you are getting an \"already exists error\" and want to deploy a new model.", "%%bash\nMODEL_NAME=\"mnist\"\nMODEL_VERSION=${MODEL_TYPE}\nMODEL_LOCATION=${JOB_DIR}keras_export/\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n#yes | gcloud ai-platform models delete ${MODEL_NAME}\ngcloud ai-platform models create ${MODEL_NAME} --regions $REGION\ngcloud ai-platform versions create ${MODEL_VERSION} \\\n --model ${MODEL_NAME} \\\n --origin ${MODEL_LOCATION} \\\n --framework tensorflow \\\n --runtime-version=1.14", "To predict with the model, let's take one of the example images.\nTODO 4: Write a .json file with image data to send to an AI Platform deployed model", "import json, codecs\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom mnist_models.trainer import util\n\nHEIGHT = 28\nWIDTH = 28\nIMGNO = 12\n\nmnist = tf.keras.datasets.mnist.load_data()\n(x_train, y_train), (x_test, y_test) = mnist\ntest_image = x_test[IMGNO]\n\njsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()\njson.dump(jsondata, codecs.open(\"test.json\", \"w\", encoding = \"utf-8\"))\nplt.imshow(test_image.reshape(HEIGHT, WIDTH));", "Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! 
You've completed the lab!", "%%bash\ngcloud ai-platform predict \\\n --model=mnist \\\n --version=${MODEL_TYPE} \\\n --json-instances=./test.json", "Copyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maxpumperla/elephas
examples/Spark_ML_Pipeline.ipynb
mit
[ "Spark ML model pipelines on Distributed Deep Neural Nets\nThis notebook describes how to build machine learning pipelines with Spark ML for distributed versions of Keras deep learning models. As data set we use the Otto Product Classification challenge from Kaggle. The reason we chose this data is that it is small and very structured. This way, we can focus more on technical components rather than prepcrocessing intricacies. Also, users with slow hardware or without a full-blown Spark cluster should be able to run this example locally, and still learn a lot about the distributed mode.\nOften, the need to distribute computation is not imposed by model training, but rather by building the data pipeline, i.e. ingestion, transformation etc. In training, deep neural networks tend to do fairly well on one or more GPUs on one machine. Most of the time, using gradient descent methods, you will process one batch after another anyway. Even so, it may still be beneficial to use frameworks like Spark to integrate your models with your surrounding infrastructure. On top of that, the convenience provided by Spark ML pipelines can be very valuable (being syntactically very close to what you might know from scikit-learn).\nTL;DR: We will show how to tackle a classification problem using distributed deep neural nets and Spark ML pipelines in an example that is essentially a distributed version of the one found here.\nUsing this notebook\nAs we are going to use elephas, you will need access to a running Spark context to run this notebook. If you don't have it already, install Spark locally by following the instructions provided here. Make sure to also export SPARK_HOME to your path and start your ipython/jupyter notebook as follows:\nIPYTHON_OPTS=\"notebook\" ${SPARK_HOME}/bin/pyspark --driver-memory 4G elephas/examples/Spark_ML_Pipeline.ipynb\nTo test your environment, try to print the Spark context (provided as sc), i.e. execute the following cell.", "from __future__ import print_function\nprint(sc)", "Otto Product Classification Data\nTraining and test data is available here. Go ahead and download the data. Inspecting it, you will see that the provided csv files consist of an id column, 93 integer feature columns. train.csv has an additional column for labels, which test.csv is missing. The challenge is to accurately predict test labels. For the rest of this notebook, we will assume data is stored at data_path, which you should modify below as needed.", "data_path = \"./\" # <-- Make sure to adapt this to where your csv files are.", "Loading data is relatively simple, but we have to take care of a few things. First, while you can shuffle rows of an RDD, it is generally not very efficient. But since data in train.csv is sorted by category, we'll have to shuffle in order to make the model perform well. This is what the function shuffle_csv below is for. Next, we read in plain text in load_data_rdd, split lines by comma and convert features to float vector type. Also, note that the last column in train.csv represents the category, which has a Class_ prefix. \nDefining Data Frames\nSpark has a few core data structures, among them is the data frame, which is a distributed version of the named columnar data structure many will now from either R or Pandas. 
We need a so called SQLContext and an optional column-to-names mapping to create a data frame from scratch.", "from pyspark.sql import SQLContext\nfrom pyspark.ml.linalg import Vectors\nimport numpy as np\nimport random\n\nsql_context = SQLContext(sc)\n\ndef shuffle_csv(csv_file):\n lines = open(csv_file).readlines()\n random.shuffle(lines)\n open(csv_file, 'w').writelines(lines)\n\ndef load_data_frame(csv_file, shuffle=True, train=True):\n if shuffle:\n shuffle_csv(csv_file)\n data = sc.textFile(data_path + csv_file) # This is an RDD, which will later be transformed to a data frame\n data = data.filter(lambda x:x.split(',')[0] != 'id').map(lambda line: line.split(','))\n if train:\n data = data.map(\n lambda line: (Vectors.dense(np.asarray(line[1:-1]).astype(np.float32)),\n str(line[-1])) )\n else:\n # Test data gets dummy labels. We need the same structure as in Train data\n data = data.map( lambda line: (Vectors.dense(np.asarray(line[1:]).astype(np.float32)),\"Class_1\") ) \n return sqlContext.createDataFrame(data, ['features', 'category'])\n ", "Let's load both train and test data and print a few rows of data using the convenient show method.", "train_df = load_data_frame(\"train.csv\")\ntest_df = load_data_frame(\"test.csv\", shuffle=False, train=False) # No need to shuffle test data\n\nprint(\"Train data frame:\")\ntrain_df.show(10)\n\nprint(\"Test data frame (note the dummy category):\")\ntest_df.show(10)", "Preprocessing: Defining Transformers\nUp until now, we basically just read in raw data. Luckily, Spark ML has quite a few preprocessing features available, so the only thing we will ever have to do is define transformations of data frames.\nTo proceed, we will first transform category strings to double values. This is done by a so called StringIndexer. Note that we carry out the actual transformation here already, but that is just for demonstration purposes. All we really need is too define string_indexer to put it into a pipeline later on.", "from pyspark.ml.feature import StringIndexer\n\nstring_indexer = StringIndexer(inputCol=\"category\", outputCol=\"index_category\")\nfitted_indexer = string_indexer.fit(train_df)\nindexed_df = fitted_indexer.transform(train_df)", "Next, it's good practice to normalize the features, which is done with a StandardScaler.", "from pyspark.ml.feature import StandardScaler\n\nscaler = StandardScaler(inputCol=\"features\", outputCol=\"scaled_features\", withStd=True, withMean=True)\nfitted_scaler = scaler.fit(indexed_df)\nscaled_df = fitted_scaler.transform(indexed_df)\n\nprint(\"The result of indexing and scaling. Each transformation adds new columns to the data frame:\")\nscaled_df.show(10)", "Keras Deep Learning model\nNow that we have a data frame with processed features and labels, let's define a deep neural net that we can use to address the classification problem. Chances are you came here because you know a thing or two about deep learning. If so, the model below will look very straightforward to you. We build a keras model by choosing a set of three consecutive Dense layers with dropout and ReLU activations. 
There are certainly much better architectures for the problem out there, but we really just want to demonstrate the general flow here.", "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, Activation\nfrom tensorflow.keras.utils import to_categorical, generic_utils\n\nnb_classes = train_df.select(\"category\").distinct().count()\ninput_dim = len(train_df.select(\"features\").first()[0])\n\nmodel = Sequential()\nmodel.add(Dense(512, input_shape=(input_dim,)))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(512))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')", "Distributed Elephas model\nTo lift the above Keras model to Spark, we define an Estimator on top of it. An Estimator is Spark's incarnation of a model that still has to be trained. It essentially only comes with only a single (required) method, namely fit. Once we call fit on a data frame, we get back a Model, which is a trained model with a transform method to predict labels.\nWe do this by initializing an ElephasEstimator and setting a few properties. As by now our input data frame will have many columns, we have to tell the model where to find features and labels by column name. Then we provide serialized versions of our Keras model. We can not plug in keras models into the Estimator directly, as Spark will have to serialize them anyway for communication with workers, so it's better to provide the serialization ourselves. In fact, while pyspark knows how to serialize model, it is extremely inefficient and can break if models become too large. Spark ML is especially picky (and rightly so) about parameters and more or less prohibits you from providing non-atomic types and arrays of the latter. Most of the remaining parameters are optional and rather self explainatory. Plus, many of them you know if you have ever run a keras model before. We just include them here to show the full set of training configuration.", "from elephas.ml_model import ElephasEstimator\nfrom tensorflow.keras import optimizers\n\n\nadam = optimizers.Adam(lr=0.01)\nopt_conf = optimizers.serialize(adam)\n\n# Initialize SparkML Estimator and set all relevant properties\nestimator = ElephasEstimator()\nestimator.setFeaturesCol(\"scaled_features\") # These two come directly from pyspark,\nestimator.setLabelCol(\"index_category\") # hence the camel case. Sorry :)\nestimator.set_keras_model_config(model.to_yaml()) # Provide serialized Keras model\nestimator.set_categorical_labels(True)\nestimator.set_nb_classes(nb_classes)\nestimator.set_num_workers(1) # We just use one worker here. Feel free to adapt it.\nestimator.set_epochs(20) \nestimator.set_batch_size(128)\nestimator.set_verbosity(1)\nestimator.set_validation_split(0.15)\nestimator.set_optimizer_config(opt_conf)\nestimator.set_mode(\"synchronous\")\nestimator.set_loss(\"categorical_crossentropy\")\nestimator.set_metrics(['acc'])", "SparkML Pipelines\nNow for the easy part: Defining pipelines is really as easy as listing pipeline stages. We can provide any configuration of Transformers and Estimators really, but here we simply take the three components defined earlier. 
Note that string_indexer and scaler are interchangeable, while estimator somewhat obviously has to come last in the pipeline.", "from pyspark.ml import Pipeline\n\npipeline = Pipeline(stages=[string_indexer, scaler, estimator])", "Fitting and evaluating the pipeline\nThe last step now is to fit the pipeline on training data and evaluate it. We evaluate, i.e. transform, on training data, since only in that case do we have labels to check accuracy of the model. If you like, you could transform the test_df as well.", "from pyspark.mllib.evaluation import MulticlassMetrics\n\nfitted_pipeline = pipeline.fit(train_df) # Fit model to data\n\nprediction = fitted_pipeline.transform(train_df) # Evaluate on train data.\n# prediction = fitted_pipeline.transform(test_df) # <-- The same code evaluates test data.\npnl = prediction.select(\"index_category\", \"prediction\")\npnl.show(100)\n\nprediction_and_label = pnl.map(lambda row: (row.index_category, row.prediction))\nmetrics = MulticlassMetrics(prediction_and_label)\nprint(metrics.precision())", "Conclusion\nIt may certainly take some time to master the principles and syntax of both Keras and Spark, depending where you come from, of course. However, we also hope you come to the conclusion that once you get beyond the stage of struggling with defining your models and preprocessing your data, the business of building and using SparkML pipelines is quite an elegant and useful one. \nIf you like what you see, consider helping further improve elephas or contributing to Keras or Spark. Do you have any constructive remarks on this notebook? Is there something you want me to clarify? In any case, feel free to contact me." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rddy/lentil
nb/model_explorations.ipynb
apache-2.0
[ "from __future__ import division\n\nimport pickle\nimport os\nimport random\n\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nimport seaborn as sns\nsns.set_style('whitegrid')\n\nfrom lentil import datatools\nfrom lentil import models\nfrom lentil import est\nfrom lentil import evaluate\n\n%matplotlib inline\n\nimport logging\nlogging.getLogger().setLevel(logging.DEBUG)", "Load an interaction history", "history_path = os.path.join('data', 'assistments_2009_2010.pkl')\n\nwith open(history_path, 'rb') as f:\n history = pickle.load(f)\n\ndf = history.data", "Train an embedding model on the interaction history and visualize the results", "embedding_dimension = 2\n\nmodel = models.EmbeddingModel(\n history, \n embedding_dimension,\n using_prereqs=True,\n using_lessons=True,\n using_bias=True,\n learning_update_variance_constant=0.5)\n\nestimator = est.EmbeddingMAPEstimator(\n regularization_constant=1e-3,\n using_scipy=True,\n verify_gradient=False,\n debug_mode_on=True,\n ftol=1e-3)\n\nmodel.fit(estimator)\n\nprint \"Training AUC = %f\" % (evaluate.training_auc(\n model, history, plot_roc_curve=True))\n\nsplit_history = history.split_interactions_by_type()\ntimestep_of_last_interaction = split_history.timestep_of_last_interaction\n\nNUM_STUDENTS_TO_SAMPLE = 10\nfor student_id in random.sample(df['student_id'].unique(), NUM_STUDENTS_TO_SAMPLE):\n student_idx = history.idx_of_student_id(student_id)\n\n timesteps = range(1, timestep_of_last_interaction[student_id]+1)\n\n for i in xrange(model.embedding_dimension):\n plt.plot(timesteps, model.student_embeddings[student_idx, i, timesteps], \n label='Skill %d' % (i+1))\n \n norms = np.linalg.norm(model.student_embeddings[student_idx, :, timesteps], axis=1)\n plt.plot(timesteps, norms, label='norm')\n \n plt.title('student_id = %s' % student_id)\n plt.xlabel('Timestep')\n plt.ylabel('Skill')\n plt.legend(loc='upper right')\n plt.show()\n\nassessment_norms = np.linalg.norm(model.assessment_embeddings, axis=1)\n\nplt.xlabel('Assessment embedding norm')\nplt.ylabel('Frequency (number of assessments)')\nplt.hist(assessment_norms, bins=20)\nplt.show()\n\ndef get_pass_rates(grouped):\n \"\"\"\n Get pass rate for each group\n \n :param pd.GroupBy grouped: A grouped dataframe\n :rtype: dict[str, float]\n :return: A dictionary mapping group name to pass rate\n \"\"\"\n pass_rates = {}\n for name, group in grouped:\n vc = group['outcome'].value_counts()\n if True not in vc:\n pass_rates[name] = 0\n else:\n pass_rates[name] = vc[True] / len(group)\n return pass_rates\n\ngrouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('module_id')\npass_rates = get_pass_rates(grouped)\n\nassessment_norms = [np.linalg.norm(model.assessment_embeddings[history.idx_of_assessment_id(assessment_id), :]) for assessment_id in pass_rates]\n\nplt.xlabel('Assessment pass rate')\nplt.ylabel('Assessment embedding norm')\nplt.scatter(pass_rates.values(), assessment_norms)\nplt.show()\n\ngrouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('module_id')\npass_rates = get_pass_rates(grouped)\n\nbias_minus_norm = [model.assessment_biases[history.idx_of_assessment_id(\n assessment_id)] - np.linalg.norm(\n model.assessment_embeddings[history.idx_of_assessment_id(\n assessment_id), :]) for assessment_id in pass_rates]\n\nplt.xlabel('Assessment pass rate')\nplt.ylabel('Assessment bias - Assessment embedding norm')\nplt.scatter(pass_rates.values(), bias_minus_norm)\nplt.show()\n\ngrouped = 
df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('student_id')\npass_rates = get_pass_rates(grouped)\n\nbiases = [model.student_biases[history.idx_of_student_id(\n student_id)] for student_id in pass_rates]\n\nplt.xlabel('Student pass rate')\nplt.ylabel('Student bias')\nplt.scatter(pass_rates.values(), biases)\nplt.show()\n\nlesson_norms = np.linalg.norm(model.lesson_embeddings, axis=1)\n\nplt.xlabel('Lesson embedding norm')\nplt.ylabel('Frequency (number of lessons)')\nplt.hist(lesson_norms, bins=20)\nplt.show()\n\nprereq_norms = np.linalg.norm(model.prereq_embeddings, axis=1)\n\nplt.xlabel('Prereq embedding norm')\nplt.ylabel('Frequency (number of lessons)')\nplt.hist(prereq_norms, bins=20)\nplt.show()\n\nplt.xlabel('Lesson embedding norm')\nplt.ylabel('Prereq embedding norm')\nplt.scatter(prereq_norms, lesson_norms)\nplt.show()\n\ntimesteps = range(model.student_embeddings.shape[2])\navg_student_norms = np.array(np.linalg.norm(np.mean(model.student_embeddings, axis=0), axis=0))\n\nplt.xlabel('Timestep')\nplt.ylabel('Average student embedding norm')\nplt.plot(timesteps, avg_student_norms)\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
tsavo-sevenoaks/garth
ipython_notebook_tutorial.ipynb
gpl-3.0
[ "Text Using Markdown\nIf you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return to show the formatted text again. This is called \"running\" the cell, and you can also do it using the run button in the toolbar.\nCode cells\nOne great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.", "# Hit shift + enter or use the run button to run this cell and see the results\n\nprint 'hello world11_0_11'\nprint 'hello world'\n\n# The last line of every code cell will be displayed by default, \n# even if you don't print it. Run this cell to see how this works.\n\nprint 2 + 2 # The result of this line will not be displayed\nprint 3 + 3 # The result of this line will be displayed, because it is the last line of the cell", "Nicely formatted results\nIPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in\nthe notebook. You'll learn how to use the following libraries later on in this course, but for now here's a\npreview of what IPython notebook can do.", "# If you run this cell, you should see the values displayed as a table.\n\n# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})\ndf\n\n# If you run this cell, you should see a scatter plot of the function y = x^2\n\n%pylab inline\nimport matplotlib.pyplot as plt\n\nxs = range(-30, 31)\nys = [x ** 2 for x in xs]\n\nplt.scatter(xs, ys)", "Creating cells\nTo create a new code cell, click \"Insert > Insert Cell [Above or Below]\". A code cell will automatically be created.\nTo create a new markdown cell, first follow the process above to create a code cell, then change the type from \"Code\" to \"Markdown\" using the dropdown next to the run, stop, and restart buttons.\nSome Markdown data\nRe-running cells\nIf you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, \"In [5]\". The third cell should output the message \"Intro to Data Analysis is awesome!\"", "class_name = \"BRUCE Woodley Intro to Data Analysis\"\n\nmessage = class_name + \" is awesome!\"\n\nmessage", "Once you've run all three cells, try modifying the first one to set class_name to your name, rather than \"Intro to Data Analysis\", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.\nYou should have seen that the third cell still printed \"Intro to Data Analysis is awesome!\" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.\nYou should have seen the output change to \"your name is awesome!\" Often, after changing a cell, you'll want to rerun all the cells below it. 
You can do that quickly by clicking \"Cell > Run All Below\".", "import unicodecsv\n\nwith open(\"enrollments.csv\",\"rb\") as filein :\n line = unicodecsv.DictReader(filein)\n print(\"type(line) \\t\",type(line)) \n enrollments = list(line)\nprint enrollments[0]\n\n \n\nimport unicodecsv\n\nwith open(\"daily_engagement.csv\",\"rb\") as filein :\n line = unicodecsv.DictReader(filein)\n #print(\"type(line) \\t\",type(line)) \n daily_engagement = list(line)\nprint daily_engagement[0]\n\n\nimport unicodecsv\n\nwith open(\"project_submissions.csv\",\"rb\") as filein :\n line = unicodecsv.DictReader(filein)\n project_submissions_fieldnames = line.fieldnames \n #print(\"type(line) \\t\",type(line))\n print(\"project_submissions_fieldnames = \",str(project_submissions_fieldnames))\n project_submissions = list(line)\nprint project_submissions[0]\n", "Fixing Data Types.", "# Fixing Data Types.\n# Hit shift + enter or use the run button to run this cell and see the results\nfrom datetime import datetime as dt\n\n# Takes a date as a string, and returns a Python datetime object. \n# If there is no date given, returns None\ndef parse_date(date):\n if date == '':\n return None\n else:\n return dt.strptime(date, '%Y-%m-%d')\n \n# Takes a string which is either an empty string or represents an integer,\n# and returns an int or None.\ndef parse_maybe_int(i):\n if i == '':\n return None\n else:\n return int(i)\n \nprint(\" type(enrollment) \" , type(enrollment))\n# Clean up the data types in the enrollments table\nfor enrollment in enrollments:\n enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])\n enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])\n enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'\n enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'\n enrollment['join_date'] = parse_date(enrollment['join_date'])\n \nenrollments[0]\n\n# enrollments\n# daily_engagement\n# project_submission\n# these are all a \"List of Dictionaries\"\nimport sys \nimport os \nimport string \nimport time \n\n\n#print(type(enrollments),len(enrollments) )\nenrollments_set = set()\nfor line in enrollments :\n enrollments_set.add(line['account_key'] ) \nprint(\"enrollments\",type(enrollments), \" row total: \",len(enrollments), \" total students: \", len(enrollments_set) )\n\n#print(type(daily_engagement), len(daily_engagement) ) \ndaily_engagement_set = set()\nfor line in daily_engagement :\n daily_engagement_set.add(line['acct'] ) \nprint(\"daily_engagement\", type(daily_engagement),\" row total: \",len(daily_engagement), \" total students: \", len(daily_engagement_set) )\n\n#print(type(project_submissions), len(project_submissions) )\nproject_submissions_set = set()\nfor line in project_submissions :\n project_submissions_set.add(line['account_key'] ) \nprint(\"project_submissions\", type(project_submissions),\" row total: \",len(project_submissions), \" total students: \", len(project_submissions_set) )\n\nprint(\" \")\nprint('REM: these are all a \"List of Dictionaries\"...!')\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SJSlavin/phys202-2015-work
assignments/assignment05/InteractEx04.ipynb
mit
[ "Interact Exercise 4\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Line with Gaussian noise\nWrite a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\\sigma^2)$:\n$$\ny = m x + b + N(0,\\sigma^2)\n$$\nBe careful about the sigma=0.0 case.", "def random_line(m, b, sigma, size=10):\n \"\"\"Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]\n \n Parameters\n ----------\n m : float\n The slope of the line.\n b : float\n The y-intercept of the line.\n sigma : float\n The standard deviation of the y direction normal distribution noise.\n size : int\n The number of points to create for the line.\n \n Returns\n -------\n x : array of floats\n The array of x values for the line with `size` points.\n y : array of floats\n The array of y values for the lines with `size` points.\n \"\"\"\n # YOUR CODE HERE\n \n #http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html#numpy.random.randn\n x = np.linspace(-1.0, 1.0, num=size)\n y = (m * x) + b + (sigma * np.random.randn(size))\n return x, y\n\n\nprint(random_line(2, 3, 2, 20))\n\nm = 0.0; b = 1.0; sigma=0.0; size=3\nx, y = random_line(m, b, sigma, size)\nassert len(x)==len(y)==size\nassert list(x)==[-1.0,0.0,1.0]\nassert list(y)==[1.0,1.0,1.0]\nsigma = 1.0\nm = 0.0; b = 0.0\nsize = 500\nx, y = random_line(m, b, sigma, size)\nassert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)\nassert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)", "Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:\n\nMake the marker color settable through a color keyword argument with a default of red.\nDisplay the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.\nCustomize your plot to make it effective and beautiful.", "def ticks_out(ax):\n \"\"\"Move the ticks to the outside of the box.\"\"\"\n ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')\n ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')\n\ndef plot_random_line(m, b, sigma, size=10, color='red'):\n \"\"\"Plot a random line with slope m, intercept b and size points.\"\"\"\n x, y = random_line(m, b, sigma, size)\n ax = plt.subplot(111)\n \n plt.scatter(x, y , color=color)\n ticks_out(ax)\n plt.xlim((-1.1, 1.1))\n plt.ylim((-10.0, 10.0))\n\nplot_random_line(5.0, -1.0, 2.0, 50)\n\nassert True # use this cell to grade the plot_random_line function", "Use interact to explore the plot_random_line function using:\n\nm: a float valued slider from -10.0 to 10.0 with steps of 0.1.\nb: a float valued slider from -5.0 to 5.0 with steps of 0.1.\nsigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.\nsize: an int valued slider from 10 to 100 with steps of 10.\ncolor: a dropdown with options for red, green and blue.", "# YOUR CODE HERE\ninteract(plot_random_line, m=(-10.0, 10.0, 0.1), b=(-5.0, 5.0, 0.1), sigma = (0.0, 5.0, 0.01), size = (10, 100, 10), color = [\"green\", \"red\", \"blue\"])\n\n#### assert True # use this cell to grade the plot_random_line interact" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AeroPython/Taller-PyConEs-2015
Teoria III - Exploracion-Explotacion.ipynb
mit
[ "Simplifica tu vida con sistemas complejos y algoritmos genรฉticos\nParte 3 - El dilema Exploraciรณn - Explotaciรณn: feedback positivo y negativo\nCuando usamos algoritmos genรฉticos y sistemas complejos, en general, estaremos buscando optimizar funciones muy complicadas, de varios parรกmetros, a menudo incluso implรญcitas (como la optimizaciรณn de un aviรณn mediante CFD). Estas funciones normalmente tendrรกn รณptimos locales, soluciones buenas, pero que no son el mรกximo global, la mejor soluciรณn, que es lo que buscamos.\nHagamos un pequeรฑo esquema para verlo claramente!", "%matplotlib inline \nimport numpy as np # Usaremos arrays\nimport matplotlib.pyplot as plt # Para pintar resultados", "Supongamos que esta curva representa a una funciรณn cuyo mรกximo buscamos, y supongamos que el eje x representa parรกmetros de los que la funciรณn depende.", "x = np.linspace(0,50,500)\ny = np.sin(x) * np.sin(x/17)\nplt.figure(None, figsize=(10,5))\nplt.ylim(-1.1, 1.1)\nplt.plot(x,y)", "Supongamos que con un algoritmo hemos encontrado un punto alto, pero que corresponde a un รณptimo local, por ejemplo:", "plt.figure(None, figsize=(10,5))\nplt.ylim(-1.1, 1.1)\nplt.plot(x,y)\nplt.plot([21,21],[0,1],'r--')\nplt.plot(21, 0.75, 'ko')", "El dilema Exploraciรณn-Explotaciรณn hace referencia a a dos fuerzas contrapuestas que necesitamos equilibrar cuidadosamente cuando usemos estos tipos de algoritmos.\nLa Exploraciรณn se refiere a buscar soluciones alejadas de lo que tenemos, abrir nuestro abanico de bรบsqueda.\n\nNos permite escapar de mรกximos locales y encontrar el global.\nNos permite encontrar soluciones atรญpicas y novedosas a problemas complicados.\nDemasiada exploraciรณn nos impedirรก guardar nuestras soluciones y refinarlas, y tendremos a nuestro algoritmo saltando de un lado a otro sin sacar nada en claro.\n\nLa Explotaciรณn se refiere a la capacidad de nuestro algoritmo de mantener las soluciones buenas que ha encontrado y refinarlas, buscando en entornos cercanos.\n\nNos permite encontrar mรกximos de la funciรณn y mantenerlos.\nDemasiada Explotaciรณn nos bloquearรก en mรกximos locales y nos impedirรก encontrar el global.", "# EJEMPLO DE RESULTADO CON DEMASIADA EXPLORACIร“N: NO SE ENCUENTRA NADA\n\n\nx2 = np.array([7,8,12,28,31,35,40,49])\ny2 = np.sin(x2) * np.sin(x2/17)\n\nplt.figure(None, figsize=(10,5))\nplt.ylim(-1.1, 1.1)\nplt.plot(x,y)\nplt.plot([21,21],[0,1],'r--')\nplt.plot(21, 0.75, 'ko')\nplt.plot(x2, y2, 'go')\n\n# EJEMPLO DE RESULTADO CON DEMASIADA EXPLOTACIร“N: Sร“LO SE LLEGA AL LOCAL\n\n\nx2 = np.linspace(20.2, 21, 10)\ny2 = np.sin(x2) * np.sin(x2/17)\n\nplt.figure(None, figsize=(10,5))\nplt.ylim(-1.1, 1.1)\nplt.plot(x,y)\nplt.plot([21,21],[0,1],'r--')\nplt.plot(21, 0.75, 'ko')\nplt.plot(x2, y2, 'go')", "Este tipo de estrategias se modulan mediante todos los parรกmetros de los algoritmos, pero quizรกs el parรกmetro que mรกs claramente influye en este equilibrio es el de la mutaciรณn en los algoritmos genรฉticos: Reduciendo el รญndice de mutaciรณn potenciaremos la explotaciรณn, mientras que si lo aumentamos, potenciamos la exploraciรณn.\nEjemplo: Laberinto", "#Usaremos el paquete en el ejercicio del laberinto\nimport Ejercicios.Laberinto.laberinto.laberinto as lab\nag = lab.ag", "Supongamos que tenemos el siguiente laberinto, al que accedemos por la izquierda y que queremos resolver:", "mapa1 = lab.Map()\nmapa1.draw_tablero()", "En el ejercicio se detalla mรกs el proceso, llamemos aquรญ simplemente al algoritmo genรฉtico que lo resuelve:", "mapa1 = 
lab.Map()\nlab.avanzar(mapa1)\nlab.draw_all(mapa1)", "Lo mรกs probable es que hayas obtenido una soluciรณn o un camino cerrado en un bucle. Puedes ejecutar la celda superior varias veces para hecerte una idea aproximada de con quรฉ frecuencia aparece cada situaciรณn. Pero, ยฟpor quรฉ aparecen estos bucles?\nExaminemos quรฉ aspecto tiene una soluciรณn:\nCada casilla contiene una flecha que indica cuรกl es la siguiente casilla a la que cruzar. Esto es lo que se describe en el genoma de cada camino.\nSi la casilla apunta a una pared, el programa intentarรก cruzar de todos modos a una casilla aleatoria diferente.", "mapa1.list_caminos[0].draw_directions()\nmapa1.list_caminos[0].draw_path(0.7)", "La respuesta a por quรฉ se forman bucles estรก en cรณmo se define la funciรณn de fitness o puntuaciรณn de cada camino:\n\nSe recorren 50 casillas, intentando seguir el camino que determinan las flechas\nCada vez que se choca con una pared, o que se vuelve a la casilla anterior (por ejemplo, si dos flechas se apuntan mutuamente), se pierden puntos.\nSe obtiene una puntuaciรณn mejor cuanto mรกs a la derecha acabe el caminante.\nSe obtiene una gran bonificaciรณn si se llega a la salida\n\nEn este ejercicio, un bucle es un optimo local: Al no chocarse con nada al recorrerlo, la puntuaciรณn es mejor que la de caminos ligeramente diferentes, que terminarรญan chocando con las paredes varias veces.\nSin embargo, no es la soluciรณn que buscamos. Tenemos que potenciar la exploraciรณn lejos de estos mรกximos locales.\nUna manera de hacerlo es con feromonas, parecido a lo que hicimos con las hormigas.\nSupongamos que cada persona que camina por el laberinto, deja por cada casilla por la que pasa un olor desagradable, que hace que los que vuelvan a pasar por allรญ intenten evitar ese camino. La manera de implementar esto en el algoritmo es aรฑadir un rastro de feromonas, y luego tener en cuenta la cantidad de feromonas encontradas al calcular la puntuaciรณn. ยฟCรณmo crees que eso afectarรญa a los bucles?\nProbรฉmoslo!", "mapa1 = lab.Map(veneno=1)\nlab.avanzar(mapa1)\nlab.draw_all(mapa1)", "Prueba e ejecutarlo varias veces. ยฟNotas si ha cambiado la cantidad de bucles?\nPor รบltimo, veamos que ocurre si potenciamos la exploraciรณn demasiado:", "mapa1 = lab.Map(veneno=100)\nlab.avanzar(mapa1)\nlab.draw_all(mapa1)", "ยฟCรณmo explicas lo que ocurre? ยฟPor quรฉ hemos perdido la capacidad para encontrar soluciones al potenciar demasiado la exploraciรณn?\nSiro Moreno, Aeropython, 19 de Noviembre de 2015" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
transcranial/keras-js
notebooks/layers/pooling/MaxPooling1D.ipynb
mit
[ "import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.pooling import MaxPooling1D\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()", "MaxPooling1D\n[pooling.MaxPooling1D.0] input 6x6, pool_size=2, strides=None, padding='valid'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=None, padding='valid')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(250)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.1] input 6x6, pool_size=2, strides=1, padding='valid'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=1, padding='valid')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(251)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.2] input 6x6, pool_size=2, strides=3, padding='valid'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=3, padding='valid')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(252)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.3] input 6x6, pool_size=2, strides=None, padding='same'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=None, padding='same')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(253)\ndata_in = 2 * 
np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.3'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.4] input 6x6, pool_size=2, strides=1, padding='same'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=1, padding='same')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(254)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.4'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.5] input 6x6, pool_size=2, strides=3, padding='same'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=2, strides=3, padding='same')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(255)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.5'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.6] input 6x6, pool_size=3, strides=None, padding='valid'", "data_in_shape = (6, 6)\nL = MaxPooling1D(pool_size=3, strides=None, padding='valid')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(256)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.6'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.7] input 7x7, pool_size=3, strides=1, padding='same'", "data_in_shape = (7, 7)\nL = MaxPooling1D(pool_size=3, strides=1, padding='same')\n\nlayer_0 = 
Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(257)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.7'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.MaxPooling1D.8] input 7x7, pool_size=3, strides=3, padding='same'", "data_in_shape = (7, 7)\nL = MaxPooling1D(pool_size=3, strides=3, padding='same')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(258)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.MaxPooling1D.8'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "export for Keras.js tests", "import os\n\nfilename = '../../../test/data/layers/pooling/MaxPooling1D.json'\nif not os.path.exists(os.path.dirname(filename)):\n os.makedirs(os.path.dirname(filename))\nwith open(filename, 'w') as f:\n json.dump(DATA, f)\n\nprint(json.dumps(DATA))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pligor/predicting-future-product-prices
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
agpl-3.0
[ "# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2\n\nfrom __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\n#from models.price_history_21_seq2seq_dyn_dec_ins import PriceHistorySeq2SeqDynDecIns\nfrom data_providers.PriceHistoryMobileAttrsCombinator import PriceHistoryMobileAttrsCombinator\nfrom sklearn.neighbors import NearestNeighbors\nfrom datetime import datetime\nfrom data_providers.price_hist_with_relevant_deals import PriceHistWithRelevantDeals\nfrom data_providers.price_history_29_dataset_per_mobile_phone import PriceHistoryDatasetPerMobilePhone\nfrom arima.arima_estimator import ArimaEstimator\nimport warnings\nfrom collections import OrderedDict\nfrom mylibs.py_helper import cartesian_coord\nfrom arima.arima_cv import ArimaCV\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline", "Step 0 - hyperparams\nvocab_size is all the potential words you could have (classification for translation case)\nand max sequence length are the SAME thing\ndecoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now", "input_len = 60\ntarget_len = 30\nbatch_size = 50\nwith_EOS = False\n\ncsv_in = '../price_history_03_seq_start_suddens_trimmed.csv'", "Actual Run", "data_path = '../../../../Dropbox/data'\nph_data_path = data_path + '/price_history'\nassert path.isdir(ph_data_path)\n\nnpz_full = ph_data_path + '/price_history_per_mobile_phone.npz'\n\n#dataset_gen = PriceHistoryDatasetPerMobilePhone(random_state=random_state)\n\ndic = np.load(npz_full)\ndic.keys()[:10]", "Arima", "parameters = OrderedDict([\n ('p_auto_regression_order', range(6)), #0-5\n ('d_integration_level', range(3)), #0-2\n ('q_moving_average', range(6)), #0-5\n])\n\ncart = cartesian_coord(*parameters.values())\ncart.shape\n\ncur_key = dic.keys()[0]\ncur_key\n\ncur_sku = dic[cur_key][()]\ncur_sku.keys()\n\ntrain_mat = cur_sku['train']\ntrain_mat.shape\n\ntarget_len\n\ninputs = train_mat[:, :-target_len]\ninputs.shape\n\ntargets = train_mat[:, -target_len:]\ntargets.shape\n\neasy_mode = False\n\nscore_dic_filepath = data_path + \"/arima/scoredic_easy_mode_{}_{}.npy\".format(easy_mode, 
cur_key)\npath.abspath(score_dic_filepath)\n\n%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n scoredic = ArimaCV.cross_validate(inputs=inputs, targets=targets, cartesian_combinations=cart,\n score_dic_filepath=score_dic_filepath, easy_mode=easy_mode)\n\n#4h 4min 51s / 108 cases => ~= 136 seconds per case !\n\narr = np.array(list(scoredic.iteritems()))\narr.shape\n\n#np.isnan()\nfiltered_arr = arr[ np.logical_not(arr[:, 1] != arr[:, 1]) ]\nfiltered_arr.shape\n\nplt.plot(filtered_arr[:, 1])\n\nminarg = np.argmin(filtered_arr[:, 1])\nminarg\n\nbest_params = filtered_arr[minarg, 0]\nbest_params\n\ntest_mat = cur_sku['test']\ntest_ins = test_mat[:-target_len]\ntest_ins.shape\n\ntest_tars = test_mat[-target_len:]\ntest_tars.shape\n\ntest_ins_vals = test_ins.values.reshape(1, -1)\ntest_ins_vals.shape\n\ntest_tars_vals = test_tars.values.reshape(1, -1)\ntest_tars_vals.shape", "Testing with easy mode on", "%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n ae = ArimaEstimator(p_auto_regression_order=best_params[0],\n d_integration_level=best_params[1],\n q_moving_average=best_params[2],\n easy_mode=True)\n score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nscore\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "Testing with easy mode off", "%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n ae = ArimaEstimator(p_auto_regression_order=best_params[0],\n d_integration_level=best_params[1],\n q_moving_average=best_params[2],\n easy_mode=False)\n score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nscore\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "Conclusion\nIf you are training in easy mode then what you get at the end is that the model only cares for the previous value in order to do its predictions and this makes it much easier for everybody but in reality we might not have advantage\nTrying", "args = np.argsort(filtered_arr[:, 1])\nargs\n\nfiltered_arr[args[:10], 0]\n\n%%time\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n ae = ArimaEstimator(p_auto_regression_order=4,\n d_integration_level=1,\n q_moving_average=3,\n easy_mode=False)\n print ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals)\n\nplt.figure(figsize=(15,7))\nplt.plot(ae.preds.flatten(), label='preds')\ntest_tars.plot(label='real')\nplt.legend()\nplt.show()", "All tests", "from arima.arima_testing import ArimaTesting\n\nbest_params, target_len, npz_full\n\n%%time\nkeys, scores, preds = ArimaTesting.full_testing(best_params=best_params, target_len=target_len,\n npz_full=npz_full)\n\n# render graphs here\n\nscore_arr = np.array(scores)\n\nnp.mean(score_arr[np.logical_not(score_arr != score_arr)])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cavestruz/StrongCNN
notebooks/Bootstrap_Analysis.ipynb
mit
[ "(1) average the scores between the _0,_1,_2,_3 directions to get average score per image in each HOG configuration. \n(2) In each HOG configuration, calculate the Precision and Recall values. \n(3) \"Bootstrap\" or \"jacknife\" to get an error on the AUC for each HOG configuration, describe how you bootstrapped it in words.\n(4) Output should look like: \nHOG config | Precision | Recall | AUC | AUCerr", "import glob\nimport numpy as np\nimport pandas as pd\nfrom sklearn.metrics import precision_score, recall_score, roc_auc_score\n\ndef get_data(datadir):\n \"\"\"\n Read the data files from different subdirectories of datadir corresponding\n to different HOG configurations.\n \n Inputs\n \n datadir: top level directory in which there are subdirectories corresponding\n to different HOG configurations\n \n Output\n \n data: {hogname: list(pd.DataFrame)} where each key corresponds to a\n different subdirectory (HOG configuration) and the value is\n a list of dataframes read from each of the files in that\n subdirectory\n \"\"\"\n hognames = [s.split('/')[-1] for s in glob.glob(datadir + '/*')]\n return {hogname: [pd.read_csv(filename, sep=None)\n for filename in glob.glob('{}/{}/filenames_*.txt'.format(datadir, hogname))]\n for hogname in hognames}\n\ndef get_average_scores(dataframes):\n \"\"\"\n Average the scores from several different rotations.\n \n Inputs\n \n dataframes: list(pd.DataFrame['filename', 'score', 'label'])\n \n Output\n \n df_out: pd.DataFrame['filename', 'score', 'label'] where 'score'\n is the average over all of the input dataframes and\n 'label' is taken arbitrarily from the first input dataframe\n \"\"\"\n dataframes = [df.rename(columns={'score': 'score_{}'.format(idx),\n 'label': 'label_{}'.format(idx)})\n for idx, df in enumerate(dataframes)]\n merged_df = reduce(lambda df1, df2: pd.merge(df1, df2, on='filename'), dataframes)\n assert all(df.shape[0] == merged_df.shape[0] for df in dataframes), \\\n 'Not all keys are the same in the data sets'\n \n merged_df['score'] = sum(merged_df['score_{}'.format(idx)] for idx, _ in enumerate(dataframes))\n merged_df['label'] = merged_df['label_0']\n return merged_df[['filename', 'score', 'label']]\n\ndef bootstrap(df, func, num_samples, sample_size_frac=1):\n \"\"\"\n Returns the bootstrap average and standard deviation when applying\n func to df. It is assumed that applying func to df returns a scalar.\n \n In each iteration, sample_size_frac*N rows are drawn from df at\n random with replacement, where N is the number of rows in df.\n In this way a DataFrame df_sample is created of the same type\n as df, with possible a different number of rows. The calculation\n of interest is done on df_sample by applying func and returning\n a number. This number is collected into an array, and this\n process is repeated for num_samples iterations. Finally, the\n mean and standard deviation of the array of length num_samples\n is returned. The standard deviation is an estimate of the error\n (due to finite sample size) that you would get when applying\n func to the full DataFrame df to get a number.\n \n Inputs\n \n df: pd.DataFrame of any type\n func: function that takes in df and returns a scalar\n num_samples: number of bootstrap samples/iterations,\n see description above\n sample_size_frac: in each bootstrap sample, the number\n of rows sampled is this fraction of\n the actual number of rows in df\n \n Outputs\n \n mean: mean of the bootstrap values. 
Should be close to\n func(df) if num_samples is large enough.\n std: standard deviation of the bootstrap values. This is\n an estimate of the error (due to finite sample size)\n of func(df).\n \"\"\"\n N = df.shape[0]\n sample_size = int(N*sample_size_frac)\n bootstrap_values = [func(df.iloc[np.random.randint(N, size=sample_size)])\n for _ in range(num_samples)]\n return np.mean(bootstrap_values), np.std(bootstrap_values)\n\ndef main(datadir, num_boot_samples, bands=None):\n \"\"\"\n For each HOG configuration, average scores from different rotations and\n output metrics: precision, recall, AUC, and standard deviation of the AUC\n from the bootstrap analysis. Details of the bootstrap analysis described\n in the bootstrap function.\n \n Inputs\n \n datadir: directory name in which there are subdirectories corresponding\n to different HOG configurations\n num_boot_samples: number of bootstrap samples to create in the bootstrap\n analysis (see bootstrap function)\n bands: list of bands to analyze separately. If None, don't separate out\n bands.\n \n Output\n \n pd.DataFrame['HOG_config', 'Precision', 'Recall', 'AUC',\n 'AUC_boot_avg', 'AUC_boot_std']\n \n OR\n \n pd.DataFrame['HOG_config', 'Band', 'Precision', 'Recall', 'AUC',\n 'AUC_boot_avg', 'AUC_boot_std']\n \"\"\"\n data = get_data(datadir)\n columns = ['HOG_config',\n 'Precision',\n 'Recall',\n 'AUC',\n 'AUC_boot_avg',\n 'AUC_boot_std']\n if bands is not None:\n columns = columns[:1] + ['Band'] + columns[1:]\n output = {k: [] for k in columns}\n\n for hogname, dataframes in data.iteritems():\n scores_all_bands = get_average_scores(dataframes)\n if bands is not None:\n scores_all_bands['band'] = scores_all_bands['filename'].apply(lambda s: s.split('_')[2])\n # filter filenames further here if needed\n for band in (bands if bands is not None else ['']):\n if bands is not None:\n scores = scores_all_bands[scores_all_bands['band'] == band]\n output['Band'].append(band)\n else:\n scores = scores_all_bands\n output['HOG_config'].append(hogname)\n output['Precision'].append(precision_score(scores['label'], scores['score'] > 0.5))\n output['Recall'].append(recall_score(scores['label'], scores['score'] > 0.5))\n output['AUC'].append(roc_auc_score(scores['label'], scores['score']))\n boot_avg, boot_std = bootstrap(scores, lambda sc: roc_auc_score(sc['label'], sc['score']),\n num_boot_samples)\n output['AUC_boot_avg'].append(boot_avg)\n output['AUC_boot_std'].append(boot_std)\n \n return pd.DataFrame(output)[columns]", "Test on Mock", "main('/path/to/data/directory', 10000)", "Test on SLACS", "main('/path/to/data/directory', 10000)", "Test on SLACS separating out different bands", "main('/path/to/data/directory', 10000, bands=['435', '814'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Lstyle1/Deep_learning_projects
autoencoder/Simple_Autoencoder_Solution.ipynb
mit
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)", "Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.", "img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.", "# Size of the encoding layer (the hidden layer)\nencoding_dim = 32\n\nimage_size = mnist.train.images.shape[1]\n\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)", "Training", "# Create the session\nsess = tf.Session()", "Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. 
We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).", "epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
crystalzhaizhai/cs207_yi_zhai
homeworks/HW10/HW10.ipynb
mit
[ "Homework 10: SQL\nDue Date: Thursday, November 16th at 11:59 PM\nYou will create a database of the NASA polynomial coefficients for each specie.\nPlease turn in your database with your Jupyter notebook!\nQuestion 1: Convert XML to a SQL database\nCreate two tables named LOW and HIGH, each corresponding to data given for the low and high temperature range.\nEach should have the following column names:\n\nSPECIES_NAME\nTLOW\nTHIGH\nCOEFF_1\nCOEFF_2\nCOEFF_3\nCOEFF_4\nCOEFF_5\nCOEFF_6\nCOEFF_7\n\nPopulate the tables using the XML file you created in last assignment. If you did not complete the last assignment, you may also use the example_thermo.xml file.\nTLOW should refer to the temperature at the low range and THIGH should refer to the temperature at the high range. For example, in the LOW table, $H$ would have TLOW at $200$ and THIGH at $1000$ and in the HIGH table, $H$ would have TLOW at $1000$ and THIGH at $3500$.\nFor both tables, COEFF_1 through COEFF_7 should be populated with the corresponding coefficients for the low temperature data and high temperature data.", "import x as ET\ntree=ET.parse(\"example_thermo.xml\")\nelementroot=tree.getroot()\nimport sqlite3\nimport pandas as pd\ndb = sqlite3.connect('thermo.sqlite')\ncursor = db.cursor()\n\ncursor.execute(\"DROP TABLE IF EXISTS low\")\ncursor.execute(\"DROP TABLE IF EXISTS high\")\ncursor.execute(\"PRAGMA foreign_keys=1\")\ncursor.execute('''CREATE TABLE low (\n species_name TEXT PRIMARY KEY NOT NULL, \n tlow INT, \n thigh INT, \n coeff_1 FLOAT,\n coeff_2 FLOAT,\n coeff_3 FLOAT,\n coeff_4 FLOAT,\n coeff_5 FLOAT,\n coeff_6 FLOAT,\n coeff_7 FLOAT)''')\n\ncursor.execute('''CREATE TABLE high (\n species_name TEXT PRIMARY KEY NOT NULL, \n tlow INT, \n thigh INT, \n coeff_1 FLOAT,\n coeff_2 FLOAT,\n coeff_3 FLOAT,\n coeff_4 FLOAT,\n coeff_5 FLOAT,\n coeff_6 FLOAT,\n coeff_7 FLOAT)''')\n\nelements=elementroot.find('phase').find(\"speciesArray\").text.strip().split(\" \")\n\n\nspeciesroot=elementroot.find(\"speciesData\")\nfor specie in speciesroot:\n species_name=specie.attrib['name']\n C=specie.find(\"thermo\").findall(\"NASA\")\n for i,cc in enumerate(C):\n coefficients=cc.find(\"floatArray\").text.strip().split(\" \")\n thigh=cc.attrib['Tmax']\n tlow=cc.attrib['Tmin']\n coeff_1=coefficients[0]\n coeff_2=coefficients[1] \n coeff_3=coefficients[2]\n coeff_4=coefficients[3]\n coeff_5=coefficients[4]\n coeff_6=coefficients[5]\n coeff_7=coefficients[6]\n \n vals_to_insert = (species_name, tlow, thigh, \n coeff_1, coeff_2, coeff_3, coeff_4, coeff_5, coeff_6, coeff_7)\n\n if i==0:\n cursor.execute('''INSERT INTO low (species_name, tlow, thigh, \n coeff_1, coeff_2, coeff_3, coeff_4, coeff_5, coeff_6, coeff_7) \n VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)\n else:\n cursor.execute('''INSERT INTO high (species_name, tlow, thigh, \n coeff_1, coeff_2, coeff_3, coeff_4, coeff_5, coeff_6, coeff_7) \n VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)\n \n\ndb.commit()\n\nhigh_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(HIGH)\")]\nquery = '''SELECT * FROM HIGH'''\nviz_tables(high_cols, query)", "Question 2: WHERE Statements\n\nWrite a Python function get_coeffs that returns an array of 7 coefficients. \n\nThe function should take in two parameters: 1.) species_name and 2.) 
temp_range, an indicator variable ('low' or 'high') to indicate whether the coefficients should come from the low or high temperature range.\n The function should use SQL commands and WHERE statements on the table you just created in Question 1 (rather than taking data from the XML directly).\npython\ndef get_coeffs(species_name, temp_range):\n ''' Fill in here'''\n return coeffs\n\nWrite a Python function get_species that returns all species that have a temperature range above or below a given value. The function should take in two parameters: 1.) temp and 2.) temp_range, an indicator variable ('low' or 'high').\n\nWhen temp_range is 'low', we are looking for species with a temperature range lower than the given temperature, and for a 'high' temp_range, we want species with a temperature range higher than the given temperature.\nThis exercise may be useful if different species have different LOW and HIGH ranges.\nAnd as before, you should accomplish this through SQL queries and where statements.\npython\ndef get_species(temp, temp_range):\n ''' Fill in here'''\n return coeffs", "def get_coeffs(species_name, temp_range):\n if temp_range==\"low\":\n function = '''SELECT coeff_1,coeff_2,coeff_3,coeff_4,coeff_5,coeff_6,coeff_7 FROM low WHERE species_name=?'''\n coeffs=cursor.execute(function,(species_name,)).fetchone()\n \n else:\n function = '''SELECT coeff_1,coeff_2,coeff_3,coeff_4,coeff_5,coeff_6,coeff_7 FROM high WHERE species_name=?'''\n coeffs=cursor.execute(function,(species_name,)).fetchone()\n\n return coeffs\n\nget_coeffs(\"H\",\"high\")\n\ndef get_species(temp, temp_range):\n ''' Fill in here'''\n if temp_range==\"low\":\n \n function = '''SELECT species_name FROM low WHERE tlow<?'''\n species=cursor.execute(function,(temp,)).fetchall()\n\n else:\n function = '''SELECT species_name FROM high WHERE thigh>?'''\n species=cursor.execute(function,(temp,)).fetchall()\n return species\n\nget_species(1, \"low\")\n\nall_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(ALL_TEMPS)\")]\nquery = '''SELECT species_name FROM low WHERE tlow<?'''\nviz_tables(all_cols,(query,(1,)))", "Question 3: JOIN STATEMENTS\nCreate a table named ALL_TEMPS that has the following columns:\n\nSPECIES_NAME\nTEMP_LOW\nTEMP_HIGH\n\nThis table should be created by joining the tables LOW and HIGH on the value SPECIES_NAME.\n\nWrite a Python function get_range that returns the range of temperatures for a given species_name.\n\nThe range should be computed within the SQL query (i.e. 
you should subtract within the SELECT statement in the SQL query).\npython\ndef get_range(species_name):\n '''Fill in here'''\n return range\nNote that TEMP_LOW is the lowest temperature in the LOW range and TEMP_HIGH is the highest temperature in the HIGH range.", "def get_range(species_name):\n function = '''SELECT tlow FROM low WHERE species_name=?'''\n temp_low=cursor.execute(function, (species_name,)).fetchall()[0]\n\n function = '''SELECT thigh FROM high WHERE species_name=?'''\n temp_high=cursor.execute(function,(species_name,)).fetchall()[0]\n \n return (temp_low[0],temp_high[0])\n\nget_range(\"HO2\")\n\ncursor.execute(\"DROP TABLE IF EXISTS all_temps\")\ncursor.execute('''CREATE TABLE all_temps(\n species_name TEXT PRIMARY KEY NOT NULL, \n temp_low INT, \n temp_high INT)''')\n\nfunction = '''SELECT species_name FROM low'''\nspecies=cursor.execute(function).fetchall()\nfor specie in species:\n temp_low,temp_high=get_range(specie[0])\n print(specie,temp_low,temp_high)\n cursor.execute('''INSERT INTO all_temps(species_name, temp_low, temp_high) \n VALUES(?, ?, ?)''',(specie[0],temp_low,temp_high))\n \n \n\ncursor.execute(\"DROP TABLE IF EXISTS ALL_TEMPS\")\ncursor.execute('''\nCREATE TABLE ALL_TEMPS AS\n SELECT HIGH.SPECIES_NAME, HIGH.THIGH AS TEMP_HIGH, LOW.TLOW AS TEMP_LOW\n FROM HIGH\n JOIN LOW\n ON HIGH.SPECIES_NAME = LOW.SPECIES_NAME''')\ndb.commit()\n\nall_cols = [col[1] for col in cursor.execute(\"PRAGMA table_info(ALL_TEMPS)\")]\nquery = '''SELECT * FROM ALL_TEMPS'''\nviz_tables(all_cols, query)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
zomansud/coursera
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
mit
[ "Exploring precision and recall\nThe goal of this second notebook is to understand precision-recall in the context of classifiers.\n\nUse Amazon review data in its entirety.\nTrain a logistic regression model.\nExplore various evaluation metrics: accuracy, confusion matrix, precision, recall.\nExplore how various metrics can be combined to produce a cost of making an error.\nExplore precision and recall curves.\n\nBecause we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by firing up GraphLab Create.\nMake sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.", "import graphlab\nfrom __future__ import division\nimport numpy as np\ngraphlab.canvas.set_target('ipynb')", "Load amazon review dataset", "products = graphlab.SFrame('amazon_baby.gl/')", "Extract word counts and sentiments\nAs in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:\n\nRemove punctuation.\nRemove reviews with \"neutral\" sentiment (rating 3).\nSet reviews with rating 4 or more to be positive and those with 2 or less to be negative.", "def remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\n# Remove punctuation.\nreview_clean = products['review'].apply(remove_punctuation)\n\n# Count words\nproducts['word_count'] = graphlab.text_analytics.count_words(review_clean)\n\n# Drop neutral sentiment reviews.\nproducts = products[products['rating'] != 3]\n\n# Positive sentiment to +1 and negative sentiment to -1\nproducts['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)", "Now, let's remember what the dataset looks like by taking a quick peek:", "products", "Split data into training and test sets\nWe split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.", "train_data, test_data = products.random_split(.8, seed=1)", "Train a logistic regression classifier\nWe will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results. \nRemember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.", "model = graphlab.logistic_classifier.create(train_data, target='sentiment',\n features=['word_count'],\n validation_set=None)", "Model Evaluation\nWe will explore the advanced model evaluation concepts that were discussed in the lectures.\nAccuracy\nOne performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified data points}}{\\mbox{# total data points}}\n$$\nTo obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. 
We compute the accuracy of our logistic regression model on the test_data as follows:", "accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']\nprint \"Test Accuracy: %s\" % accuracy", "Baseline: Majority class prediction\nRecall from an earlier assignment that we used the majority class classifier as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. \nTypically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:", "baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)\nprint \"Baseline accuracy (majority class classifier): %s\" % baseline", "Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)?\nConfusion Matrix\nThe accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:\n+---------------------------------------------+\n | Predicted label |\n +----------------------+----------------------+\n | (+1) | (-1) |\n+-------+-----+----------------------+----------------------+\n| True |(+1) | # of true positives | # of false negatives |\n| label +-----+----------------------+----------------------+\n| |(-1) | # of false positives | # of true negatives |\n+-------+-----+----------------------+----------------------+\nTo print out the confusion matrix for a classifier, use metric='confusion_matrix':", "confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']\nconfusion_matrix", "Quiz Question: How many predicted values in the test set are false positives?", "round(1443 / (26689 + 1443 ), 2)", "Computing the cost of mistakes\nPut yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)\nSuppose you know the costs involved in each kind of mistake: \n1. \\$100 for each false positive.\n2. \\$1 for each false negative.\n3. Correctly classified reviews incur no cost.\nQuiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?", "100*1443 + 1*1406", "Precision and Recall\nYou may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. 
This is where precision comes in:\n$$\n[\\text{precision}] = \\frac{[\\text{# positive data points with positive predicitions}]}{\\text{[# all data points with positive predictions]}} = \\frac{[\\text{# true positives}]}{[\\text{# true positives}] + [\\text{# false positives}]}\n$$\nSo to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. \nFirst, let us compute the precision of the logistic regression classifier on the test_data.", "precision = model.evaluate(test_data, metric='precision')['precision']\nprint \"Precision on test data: %s\" % precision", "Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)", "round(1 - precision, 2)", "Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz)\nA complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:\n$$\n[\\text{recall}] = \\frac{[\\text{# positive data points with positive predicitions}]}{\\text{[# all positive data points]}} = \\frac{[\\text{# true positives}]}{[\\text{# true positives}] + [\\text{# false negatives}]}\n$$\nLet us compute the recall on the test_data.", "recall = model.evaluate(test_data, metric='recall')['recall']\nprint \"Recall on test data: %s\" % recall", "Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?\nQuiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?\nPrecision-recall tradeoff\nIn this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve. \nVarying the threshold\nFalse positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold. \nWrite a function called apply_threshold that accepts two things\n* probabilities (an SArray of probability values)\n* threshold (a float between 0 and 1).\nThe function should return an SArray, where each element is set to +1 or -1 depending whether the corresponding probability exceeds threshold.", "def apply_threshold(probabilities, threshold):\n ### YOUR CODE GOES HERE\n # +1 if >= threshold and -1 otherwise.\n return probabilities.apply(lambda x: +1 if x >= threshold else -1)", "Run prediction with output_type='probability' to get the list of probability values. 
Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.", "probabilities = model.predict(test_data, output_type='probability')\npredictions_with_default_threshold = apply_threshold(probabilities, 0.5)\npredictions_with_high_threshold = apply_threshold(probabilities, 0.9)\n\nprint \"Number of positive predicted reviews (threshold = 0.5): %s\" % (predictions_with_default_threshold == 1).sum()\n\nprint \"Number of positive predicted reviews (threshold = 0.9): %s\" % (predictions_with_high_threshold == 1).sum()", "Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9?\nExploring the associated precision and recall as the threshold varies\nBy changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:", "# Threshold = 0.5\nprecision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],\n predictions_with_default_threshold)\n\nrecall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],\n predictions_with_default_threshold)\n\n# Threshold = 0.9\nprecision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],\n predictions_with_high_threshold)\nrecall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],\n predictions_with_high_threshold)\n\nprint \"Precision (threshold = 0.5): %s\" % precision_with_default_threshold\nprint \"Recall (threshold = 0.5) : %s\" % recall_with_default_threshold\n\nprint \"Precision (threshold = 0.9): %s\" % precision_with_high_threshold\nprint \"Recall (threshold = 0.9) : %s\" % recall_with_high_threshold", "Quiz Question (variant 1): Does the precision increase with a higher threshold?\nQuiz Question (variant 2): Does the recall increase with a higher threshold?\nPrecision-recall curve\nNow, we will explore various different values of tresholds, compute the precision and recall scores, and then plot the precision-recall curve.", "threshold_values = np.linspace(0.5, 1, num=100)\nprint threshold_values", "For each of the values of threshold, we compute the precision and recall scores.", "precision_all = []\nrecall_all = []\n\nprobabilities = model.predict(test_data, output_type='probability')\nfor threshold in threshold_values:\n predictions = apply_threshold(probabilities, threshold)\n \n precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)\n recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)\n \n precision_all.append(precision)\n recall_all.append(recall)", "Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_pr_curve(precision, recall, title):\n plt.rcParams['figure.figsize'] = 7, 5\n plt.locator_params(axis = 'x', nbins = 5)\n plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')\n plt.title(title)\n plt.xlabel('Precision')\n plt.ylabel('Recall')\n plt.rcParams.update({'font.size': 16})\n \nplot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')", "Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? 
Round your answer to 3 decimal places.", "for i, p in enumerate(precision_all):\n print str(i) + \" -> \" + str(p)\n\nround(threshold_values[67], 3)", "Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)", "predictions_with_98_threshold = apply_threshold(probabilities, 0.98)\ncm = graphlab.evaluation.confusion_matrix(test_data['sentiment'],\n predictions_with_98_threshold)\ncm", "This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier.\nEvaluating specific search terms\nSo far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.\nPrecision-Recall on all baby related items\nFrom the test set, select all the reviews for all products with the word 'baby' in them.", "baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]", "Now, let's predict the probability of classifying these reviews as positive:", "probabilities = model.predict(baby_reviews, output_type='probability')", "Let's plot the precision-recall curve for the baby_reviews dataset.\nFirst, let's consider the following threshold_values ranging from 0.5 to 1:", "threshold_values = np.linspace(0.5, 1, num=100)", "Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.", "precision_all = []\nrecall_all = []\n\nfor threshold in threshold_values:\n \n # Make predictions. Use the `apply_threshold` function \n ## YOUR CODE HERE \n predictions = apply_threshold(probabilities, threshold)\n\n # Calculate the precision.\n # YOUR CODE HERE\n precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)\n \n # YOUR CODE HERE\n recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)\n \n # Append the precision and recall scores.\n precision_all.append(precision)\n recall_all.append(recall)", "Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.", "round(threshold_values[72], 3)\n\nfor i, p in enumerate(precision_all):\n print str(i) + \" -> \" + str(p)", "Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?\nFinally, let's plot the precision recall curve.", "plot_pr_curve(precision_all, recall_all, \"Precision-Recall (Baby)\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SheffieldML/GPyOpt
manual/GPyOpt_context.ipynb
bsd-3-clause
[ "GPyOpt: using context variables\nJavier Gonzalez and Rodolphe Jenatton, Amazon.com\nLast updated Monday, July 2017\nIn this notebook we are going to see how to used GPyOpt to solve optimizaiton problems in which certain varaibles are fixed during the optimization phase. These are called context variables. For details see:\nKrause, A. & Ong, C. S. Contextual gaussian process bandit optimization Advances in Neural Information Processing Systems (NIPS), 2011, 2447-2455", "%pylab inline\nimport GPyOpt\nfrom numpy.random import seed\n\nfunc = GPyOpt.objective_examples.experimentsNd.alpine1(input_dim=5) ", "Now we define the domain of the function to optimize as usual.", "mixed_domain =[{'name': 'var1', 'type': 'continuous', 'domain': (-5,5),'dimensionality': 3},\n {'name': 'var2', 'type': 'discrete', 'domain': (3,8,10)},\n {'name': 'var3', 'type': 'categorical', 'domain': (0,1,2)},\n {'name': 'var4', 'type': 'continuous', 'domain': (-1,2)}]\n\nmyBopt = GPyOpt.methods.BayesianOptimization(f=func.f, # Objective function \n domain=mixed_domain, # Box-constraints of the problem\n initial_design_numdata = 5, # Number data initial design\n acquisition_type='EI', # Expected Improvement\n exact_feval = True,\n evaluator_type = 'local_penalization',\n batch_size = 5\n ) # True evaluations, no sample noise", "Now, we run the optimization for 20 iterations or a maximum of 60 seconds and we show the convergence plots.", "max_iter = 2 ## maximum number of iterations\nmax_time = 60 ## maximum allowed time\neps = 0 ## tolerance, max distance between consicutive evaluations.", "To set a context, we just need to create a dicctionary with the variables to fix and pass it to the Bayesian ottimization object when running the optimization. Note that, everytime we run new iterations we can set other variables to be the context. Note that for variables in which the dimaensionality has been specified in the domain, a subindex is internally asigned. For instance if the variables is called 'var1' and has dimensionality 3, the first three positions in the internal representation of the domain will be occupied by variables 'var1_1', 'var1_2' and 'var1_3'. If no dimensionality is added, the internal naming remains the same. For instance, in the example above 'var3' should be fixed its original name. See below for details.", "myBopt.run_optimization(max_iter,eps=eps)\nmyBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':.3, 'var1_2':0.4})\nmyBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0, 'var3':2})\nmyBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0, 'var2':3},)\nmyBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0.3, 'var3':1, 'var4':-.4})\nmyBopt.run_optimization(max_iter,eps=eps)", "We can now visualize the results", "np.round(myBopt.X,2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
therealAJ/python-sandbox
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
gpl-3.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nMatplotlib Exercises\nWelcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along.\nAlso don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using seaborn and pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it!\n * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * \nExercises\nFollow the instructions to recreate the plots using this data:\nData", "import numpy as np\nx = np.arange(0,100)\ny = x*2\nz = x**2", "Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?", "import matplotlib.pyplot as plt\n%matplotlib inline", "Exercise 1\n Follow along with these steps: \n* Create a figure object called fig using plt.figure() \n* Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. \n* Plot (x,y) on that axes and set the labels and titles to match the plot below:", "fig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nax.plot(x,y)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('title')", "Exercise 2\n Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.", "fig = plt.figure()\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,0.2,0.2])", "Now plot (x,y) on both axes. And call your figure object to show it.", "ax1.plot(x,y,color='black')\nax2.plot(x,y,color='red')\nfig", "Exercise 3\n Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]", "fig = plt.figure()\nax1 = fig.add_axes([0,0,1,1])\nax2 = fig.add_axes([0.2,0.5,0.4,0.4])\n\n#Large\nax1.set_xlabel('x')\nax1.set_ylabel('z')\n\nax1.plot(x,z)\n\n#Inserted\nax2.set_xlabel('x')\nax2.set_ylabel('y')\nax2.set_title('zoom')\n\nax2.plot(x,y)\nax2.set_xlim(left=20,right=22)\nax2.set_ylim(bottom=30,top=50)\n", "Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:\nExercise 4\n Use plt.subplots(nrows=1, ncols=2) to create the plot below.", "fig,axes = plt.subplots(1,2)\naxes[0].plot(x,y,lw=3,ls='--')\naxes[1].plot(x,z,color='r',lw=4)", "Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style\n See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code.", "fig,axes = plt.subplots(1,2,figsize=(12,2))\naxes[0].plot(x,y,lw=3,ls='--')\naxes[1].plot(x,z,color='r',lw=4)", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
intel-analytics/analytics-zoo
apps/ray/parameter_server/sharded_parameter_server.ipynb
apache-2.0
[ "This notebook is adapted from:\nhttps://github.com/ray-project/tutorial/tree/master/examples/sharded_parameter_server.ipynb\nSharded Parameter Servers\nGOAL: The goal of this exercise is to use actor handles to implement a sharded parameter server example for distributed asynchronous stochastic gradient descent.\nBefore doing this exercise, make sure you understand the concepts from the exercise on Actor Handles.\nParameter Servers\nA parameter server is simply an object that stores the parameters (or \"weights\") of a machine learning model (this could be a neural network, a linear model, or something else). It exposes two methods: one for getting the parameters and one for updating the parameters.\nIn a typical machine learning training application, worker processes will run in an infinite loop that does the following:\n1. Get the latest parameters from the parameter server.\n2. Compute an update to the parameters (using the current parameters and some data).\n3. Send the update to the parameter server.\nThe workers can operate synchronously (that is, in lock step), in which case distributed training with multiple workers is algorithmically equivalent to serial training with a larger batch of data. Alternatively, workers can operate independently and apply their updates asynchronously. The main benefit of asynchronous training is that a single slow worker will not slow down the other workers. The benefit of synchronous training is that the algorithm behavior is more predictable and reproducible.", "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport ray\nimport time", "Init SparkContext", "from zoo.common.nncontext import init_spark_on_local, init_spark_on_yarn\nimport numpy as np\nimport os\nhadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')\n\nif hadoop_conf_dir:\n sc = init_spark_on_yarn(\n hadoop_conf=hadoop_conf_dir,\n conda_name=os.environ.get(\"ZOO_CONDA_NAME\", \"zoo\"), # The name of the created conda-env\n num_executors=2,\n executor_cores=4,\n executor_memory=\"2g\",\n driver_memory=\"2g\",\n driver_cores=1,\n extra_executor_memory_for_ray=\"3g\")\nelse:\n sc = init_spark_on_local(cores = 8, conf = {\"spark.driver.memory\": \"2g\"})\n\n# It may take a while to ditribute the local environment including python and java to cluster\nimport ray\nfrom zoo.ray import RayContext\nray_ctx = RayContext(sc=sc, object_store_memory=\"4g\")\nray_ctx.init()\n#ray.init(num_cpus=30, include_webui=False, ignore_reinit_error=True)", "A simple parameter server can be implemented as a Python class in a few lines of code.\nEXERCISE: Make the ParameterServer class an actor.", "dim = 10\n@ray.remote\nclass ParameterServer(object):\n def __init__(self, dim):\n self.parameters = np.zeros(dim)\n \n def get_parameters(self):\n return self.parameters\n \n def update_parameters(self, update):\n self.parameters += update\n\n\nps = ParameterServer.remote(dim)\n", "A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server.", "@ray.remote\ndef worker(ps, dim, num_iters):\n for _ in range(num_iters):\n # Get the latest parameters.\n parameters = ray.get(ps.get_parameters.remote())\n # Compute an update.\n update = 1e-3 * parameters + np.ones(dim)\n # Update the parameters.\n ps.update_parameters.remote(update)\n # Sleep a little to simulate a real workload.\n time.sleep(0.5)\n\n# Test that worker 
is implemented correctly. You do not need to change this line.\nray.get(worker.remote(ps, dim, 1))\n\n# Start two workers.\nworker_results = [worker.remote(ps, dim, 100) for _ in range(2)]", "As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background.", "print(ray.get(ps.get_parameters.remote()))", "Sharding a Parameter Server\nAs the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation down by the parameter server may be a bottleneck.\nSuppose you have $N$ workers and $1$ parameter server, and suppose each of these is an actor that lives on its own machine. Furthermore, suppose the model size is $M$ bytes. Then sending all of the parameters from the workers to the parameter server will mean that $N * M$ bytes in total are sent to the parameter server. If $N = 100$ and $M = 10^8$, then the parameter server must receive ten gigabytes, which, assuming a network bandwidth of 10 gigabits per second, would take 8 seconds. This would be prohibitive.\nOn the other hand, if the parameters are sharded (that is, split) across K parameter servers, K is 100, and each parameter server lives on a separate machine, then each parameter server needs to receive only 100 megabytes, which can be done in 80 milliseconds. This is much better.\nEXERCISE: The code below defines a parameter server shard class. Modify this class to make ParameterServerShard an actor. We will need to revisit this code soon and increase num_shards.", "@ray.remote\nclass ParameterServerShard(object):\n def __init__(self, sharded_dim):\n self.parameters = np.zeros(sharded_dim)\n \n def get_parameters(self):\n return self.parameters\n \n def update_parameters(self, update):\n self.parameters += update\n\n\ntotal_dim = (10 ** 8) // 8 # This works out to 100MB (we have 25 million\n # float64 values, which are each 8 bytes).\nnum_shards = 2 # The number of parameter server shards.\n\nassert total_dim % num_shards == 0, ('In this exercise, the number of shards must '\n 'perfectly divide the total dimension.')\n\n# Start some parameter servers.\nps_shards = [ParameterServerShard.remote(total_dim // num_shards) for _ in range(num_shards)]\n\nassert hasattr(ParameterServerShard, 'remote'), ('You need to turn ParameterServerShard into an '\n 'actor (by using the ray.remote keyword).')", "The code below implements a worker that does the following.\n1. Gets the latest parameters from all of the parameter server shards.\n2. Concatenates the parameters together to form the full parameter vector.\n3. Computes an update to the parameters.\n4. Partitions the update into one piece for each parameter server.\n5. Applies the right update to each parameter server shard.", "@ray.remote\ndef worker_task(total_dim, num_iters, *ps_shards):\n # Note that ps_shards are passed in using Python's variable number\n # of arguments feature. We do this because currently actor handles\n # cannot be passed to tasks inside of lists or other objects.\n for _ in range(num_iters):\n # Get the current parameters from each parameter server.\n parameter_shards = [ray.get(ps.get_parameters.remote()) for ps in ps_shards]\n assert all([isinstance(shard, np.ndarray) for shard in parameter_shards]), (\n 'The parameter shards must be numpy arrays. 
Did you forget to call ray.get?')\n # Concatenate them to form the full parameter vector.\n parameters = np.concatenate(parameter_shards)\n assert parameters.shape == (total_dim,)\n\n # Compute an update.\n update = np.ones(total_dim)\n # Shard the update.\n update_shards = np.split(update, len(ps_shards))\n \n # Apply the updates to the relevant parameter server shards.\n for ps, update_shard in zip(ps_shards, update_shards):\n ps.update_parameters.remote(update_shard)\n\n\n# Test that worker_task is implemented correctly. You do not need to change this line.\nray.get(worker_task.remote(total_dim, 1, *ps_shards))", "EXERCISE: Experiment by changing the number of parameter server shards, the number of workers, and the size of the data.\nNOTE: Because these processes are all running on the same machine, network bandwidth will not be a limitation and sharding the parameter server will not help. To see the difference, you would need to run the application on multiple machines. There are still regimes where sharding a parameter server can help speed up computation on the same machine (by parallelizing the computation that the parameter server processes have to do). If you want to see this effect, you should implement a synchronous training application. In the asynchronous setting, the computation is staggered and so speeding up the parameter server usually does not matter.", "num_workers = 4\n\n# Start some workers. Try changing various quantities and see how the\n# duration changes.\nstart = time.time()\nray.get([worker_task.remote(total_dim, 5, *ps_shards) for _ in range(num_workers)])\nprint('This took {} seconds.'.format(time.time() - start))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
liganega/Gongsu-DataSci
ref_materials/exams/2017/A02/midterm-a02.ipynb
gpl-3.0
[ "from __future__ import division, print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline", "ํŒŒ์ด์ฌ ๊ธฐ๋ณธ ์ž๋ฃŒํ˜•\n๋ฌธ์ œ\n์‹ค์ˆ˜(๋ถ€๋™์†Œ์ˆ˜์ )๋ฅผ ํ•˜๋‚˜ ์ž…๋ ฅ๋ฐ›์•„, ๊ทธ ์ˆซ์ž๋ฅผ ๋ฐ˜์ง€๋ฆ„์œผ๋กœ ํ•˜๋Š” ์›์˜ ๋ฉด์ ๊ณผ ๋‘˜๋ ˆ์˜ ๊ธธ์ด๋ฅผ ํŠœํ”Œ๋กœ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ circle_radius๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ,\n```\n.\n```\n๋ฌธ์ž์—ด ์ž๋ฃŒํ˜•\n์•„๋ž˜ ์‚ฌ์ดํŠธ๋Š” ์ปคํ”ผ ์ฝฉ์˜ ํ˜„์žฌ ์‹œ์„ธ๋ฅผ ๋ณด์—ฌ์ค€๋‹ค.\nhttp://beans-r-us.appspot.com/prices.html\n\n์œ„ ์‚ฌ์ดํŠธ์˜ ๋‚ด์šฉ์„ html ์†Œ์Šค์ฝ”๋“œ๋กœ ๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์œผ๋ฉฐ, ๊ฒ€์ƒ‰๋œ ์‹œ๊ฐ„์˜ ์ปคํ”ผ์ฝฉ์˜ ๊ฐ€๊ฒฉ์€ \nCurrent price of coffee beans ๋ฌธ์žฅ์ด ๋‹ด๊ฒจ ์žˆ๋Š” ์ค„์— ๋ช…์‹œ๋˜์–ด ์žˆ๋‹ค.\n\n```html\n<html><head><title>Welcome to the Beans'R'Us Pricing Page</title>\n<link rel=\"stylesheet\" type=\"text/css\" href=\"beansrus.css\" />\n</head><body>\n<h2>Welcome to the Beans'R'Us Pricing Page</h2>\n<p>Current price of coffee beans = <strong>$5.94</strong></p>\n<p>Price valid for 15 minutes from Sun Sep 10 12:21:58 2017.</p>\n</body></html>\n```\n\n๋ฌธ์ œ\n์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ.\n\n```\nfrom future import print_function\nimport urllib2\nimport time\ndef price_setter(b_price, a_price):\n bean_price = b_price\n while 5.5 < bean_price < 6.0:\n time.sleep(1)\n page = urllib2.urlopen(\"http://beans-r-us.appspot.com/prices.html\")\n text = page.read().decode(\"utf8\")\n\n price_index = text.find(\"&gt;$\") + 2\n bean_price_str = text[price_index : price_index + 4]\n bean_price = float(bean_price_str)\n\nprint(\"ํ˜„์žฌ ์ปคํ”ผ์ฝฉ ๊ฐ€๊ฒฉ์ด\", bean_price, \"๋‹ฌ๋Ÿฌ ์ž…๋‹ˆ๋‹ค.\")\n\nif bean_price &lt;= 5.5:\n print(\"์•„๋ฉ”๋ฆฌ์นด๋…ธ ๊ฐ€๊ฒฉ์„\", a_price, \"๋‹ฌ๋Ÿฌ๋งŒํผ ์ธํ•˜ํ•˜์„ธ์š”!\")\nelse:\n print(\"์•„๋ฉ”๋ฆฌ์นด๋…ธ ๊ฐ€๊ฒฉ์„\", a_price, \"๋‹ฌ๋Ÿฌ๋งŒํผ ์ธ์ƒํ•˜์„ธ์š”!\")\n\n```\n\n```\n.```\n์˜ค๋ฅ˜ ๋ฐ ์˜ˆ์™ธ ์ฒ˜๋ฆฌ\n๋ฌธ์ œ\n์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ.\n\n```\nnumber_to_square = raw_input(\"A number to divide 100: \")\ntry: \n number = float(number_to_square)\n print(\"100์„ ์ž…๋ ฅํ•œ ๊ฐ’์œผ๋กœ ๋‚˜๋ˆˆ ๊ฒฐ๊ณผ๋Š”\", 100/number, \"์ž…๋‹ˆ๋‹ค.\")\nexcept ZeroDivisionError:\n raise ZeroDivisionError('0์ด ์•„๋‹Œ ์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜์„ธ์š”.')\nexcept ValueError:\n raise ValueError('์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜์„ธ์š”.') \n```\n\n```\n.```\n๋ฆฌ์ŠคํŠธ\n๋ฌธ์ œ\n์•„๋ž˜ ์„ค๋ช… ์ค‘์—์„œ ๋ฆฌ์ŠคํŠธ ์ž๋ฃŒํ˜•์˜ ์„ฑ์งˆ์— ํ•ด๋‹นํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ชจ๋‘ ๊ณจ๋ผ๋ผ.\n\n๊ฐ€๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค.\n๋ถˆ๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค.\n์ธ๋ฑ์Šค์™€ ์Šฌ๋ผ์ด์‹ฑ์„ ํ™œ์šฉํ•˜์—ฌ ํ•ญ๋ชฉ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.\nํ•ญ๋ชฉ๋“ค์ด ์ž„์˜์˜ ์ž๋ฃŒํ˜•์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค.\n๋ฆฌ์ŠคํŠธ ๊ธธ์ด์— ์ œํ•œ์ด ์žˆ๋‹ค.\n์‹ ์„ฑ์ •๋ณด ๋“ฑ ์ค‘์š”ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค.\n\n```\n```\n๊ฒฌ๋ณธ๋‹ต์•ˆ: 1, 3, 4\n์‚ฌ์ „\nrecord_list.txt ํŒŒ์ผ์€ ์—ฌ๋Ÿ ๋ช…์˜ ์ˆ˜์˜ ์„ ์ˆ˜์˜ 50m ๊ธฐ๋ก์„ ๋‹ด๊ณ  ์žˆ๋‹ค.\n\ntxt\nplayer1 21.09 \nplayer2 20.32 \nplayer3 21.81 \nplayer4 22.97 \nplayer5 23.29 \nplayer6 22.09 \nplayer7 21.20 \nplayer8 22.16\n\n๋ฌธ์ œ\n์•„๋ž˜์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ.\n\n```python\nfrom future import print_function\nrecord_f = open(\"record_list.txt\", 'r')\nrecord = record_f.read().decode('utf8').split('\\n')\nrecord_dict = {}\nfor line in record:\n (player, p_record) = line.split()\n record_dict[p_record] = player\nrecord_f.close()\nrecord_list = record_dict.keys()\nrecord_list.sort()\nfor i in range(3):\n item = record_list[i]\n print(str(i+1) + \":\", record_dict[item], item)\n``` \n\n```\n.```\nํŠœํ”Œ\n๋ฌธ์ œ\n์•„๋ž˜ ์„ค๋ช… ์ค‘์—์„œ ํŠœํ”Œ 
์ž๋ฃŒํ˜•์˜ ์„ฑ์งˆ์— ํ•ด๋‹นํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ชจ๋‘ ๊ณจ๋ผ๋ผ.\n\n๊ฐ€๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค.\n๋ถˆ๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค.\n์ธ๋ฑ์Šค์™€ ์Šฌ๋ผ์ด์‹ฑ์„ ํ™œ์šฉํ•˜์—ฌ ํ•ญ๋ชฉ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.\nํ•ญ๋ชฉ๋“ค์ด ์ž„์˜์˜ ์ž๋ฃŒํ˜•์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค.\nํŠœํ”Œ ๊ธธ์ด์— ์ œํ•œ์ด ์žˆ๋‹ค.\n์‹ ์„ฑ์ •๋ณด ๋“ฑ ์ค‘์š”ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค.\n\n```\n```\n๊ฒฌ๋ณธ๋‹ต์•ˆ: 2, 3, 4, 6\n๋ฆฌ์ŠคํŠธ ์กฐ๊ฑด์ œ์‹œ๋ฒ•\n์•„๋ž˜ ์ฝ”๋“œ๋Š” 0๋ถ€ํ„ฐ 1000 ์‚ฌ์ด์˜ ํ™€์ˆ˜๋“ค์˜ ์ œ๊ณฑ์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•œ๋‹ค", "odd_1000 = [x**2 for x in range(0, 1000) if x % 2 == 1]\n\n# ๋ฆฌ์ŠคํŠธ์˜ ์ฒ˜์Œ ๋‹ค์„ฏ ๊ฐœ ํ•ญ๋ชฉ\nodd_1000[:5]", "๋ฌธ์ œ\n0๋ถ€ํ„ฐ 1000๊นŒ์ง€์˜ ์ˆซ์ž๋“ค ์ค‘์—์„œ ํ™€์ˆ˜์ด๋ฉด์„œ 7์˜ ๋ฐฐ์ˆ˜์ธ ์ˆซ์ž๋“ค์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\n```\n.\n```\n๋ชจ๋ฒ”๋‹ต์•ˆ:", "odd_3x7 = [x for x in range(0, 1000) if x % 2 == 1 and x % 7 == 0]\n\n# ๋ฆฌ์ŠคํŠธ์˜ ์ฒ˜์Œ ๋‹ค์„ฏ ๊ฐœ ํ•ญ๋ชฉ\nodd_3x7[:5]", "๋ฌธ์ œ\n0๋ถ€ํ„ฐ 1000๊นŒ์ง€์˜ ์ˆซ์ž๋“ค ์ค‘์—์„œ ํ™€์ˆ˜์ด๋ฉด์„œ 7์˜ ๋ฐฐ์ˆ˜์ธ ์ˆซ์ž๋“ค์„ ์ œ๊ณฑํ•˜์—ฌ 1์„ ๋”ํ•œ ๊ฐ’๋“ค์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\nํžŒํŠธ: ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜๋œ ํ•จ์ˆ˜๋ฅผ ํ™œ์šฉํ•œ๋‹ค.\n$$f(x) = x^2 + 1$$\n```\n.```\n๊ฒฌ๋ณธ๋‹ต์•ˆ:", "def square_plus1(x):\n return x**2 + 1\n\nodd_3x7_spl = [square_plus1(x) for x in odd_3x7]\n# ๋ฆฌ์ŠคํŠธ์˜ ์ฒ˜์Œ ๋‹ค์„ฏ ๊ฐœ ํ•ญ๋ชฉ\nodd_3x7_spl[:5]", "csv ํŒŒ์ผ ์ฝ์–ด๋“ค์ด๊ธฐ\n'Seoul_pop2.csv' ํŒŒ์ผ์—๋Š” ์•„๋ž˜ ๋‚ด์šฉ์ด ์ €์žฅ๋˜์–ด ์žˆ๋‹ค\"\n\n```csv\n1949๋…„๋ถ€ํ„ฐ 2010๋…„ ์‚ฌ์ด์˜ ์„œ์šธ๊ณผ ์ˆ˜๋„๊ถŒ ์ธ๊ตฌ ์ฆ๊ฐ€์œจ(%)\n๊ตฌ๊ฐ„,์„œ์šธ,์ˆ˜๋„๊ถŒ\n1949-1955,9.12,-5.83\n1955-1960,55.88,32.22\n1960-1966,55.12,32.76\n1966-1970,45.66,28.76\n1970-1975,24.51,22.93\n1975-1980,21.38,21.69\n1980-1985,15.27,18.99\n1985-1990,10.15,17.53\n1990-1995,-3.64,8.54\n1995-2000,-3.55,5.45\n2000-2005,-0.93,6.41\n2005-2010,-1.34,3.71\n```\n\nํ™•์žฅ์ž๊ฐ€ csv์ธ ํŒŒ์ผ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•ด ์ฃผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. \ncsv๋Š” Comma-Separated Values์˜ ์ค„์ž„๋ง๋กœ ๋ฐ์ดํ„ฐ๊ฐ€ ์‰ผํ‘œ(์ฝค๋งˆ)๋กœ ๊ตฌ๋ถ„๋˜์–ด ์ •๋ฆฌ๋˜์–ด ์žˆ๋Š” ํŒŒ์ผ์„ ์˜๋ฏธํ•œ๋‹ค. 
\ncsv ํŒŒ์ผ์„ ์ฝ์–ด๋“œ๋ฆฌ๋Š” ๋ฐฉ๋ฒ•์€ csv ๋ชจ๋“ˆ์˜ reader() ํ•จ์ˆ˜๋ฅผ ํ™œ์šฉํ•˜๋ฉด ๋งค์šฐ ์‰ฝ๋‹ค.\nreader() ํ•จ์ˆ˜์˜ ๋ฆฌํ„ด๊ฐ’์€ csv ํŒŒ์ผ์— ์ €์žฅ๋œ ๋‚ด์šฉ์„ ์ค„ ๋‹จ์œ„๋กœ, ์‰ผํ‘œ ๋‹จ์œ„๋กœ ๋Š์–ด์„œ 2์ฐจ์› ๋ฆฌ์ŠคํŠธ์ด๋‹ค.\n์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์–ธ๊ธ‰๋œ ํŒŒ์ผ์— ์ €์žฅ๋œ ๋‚ด์šฉ์˜ ๊ฐ ์ค„์„ ์ถœ๋ ฅํ•ด์ค€๋‹ค.", "import csv\n\nwith open('Seoul_pop2.csv', 'rb') as f:\n reader = csv.reader(f)\n for row in reader:\n if len(row) == 0 or row[0][0] == '#':\n continue\n else:\n print(row)", "๋ฌธ์ œ\n์œ„ ์ฝ”๋“œ์—์„œ 5๋ฒˆ ์งธ ์ค„์„ ์•„๋ž˜์™€ ๊ฐ™์ด ํ•˜๋ฉด ์˜ค๋ฅ˜ ๋ฐœ์ƒํ•œ๋‹ค.\nif row[0][0] == '#' or len(row) == 0:\n์ด์œ ๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ.\n```\n.\n```\n๋„˜ํŒŒ์ด ํ™œ์šฉ ๊ธฐ์ดˆ 1\n๋„˜ํŒŒ์ด ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ช‡ ๊ฐœ์˜ ๊ธฐ๋ณธ์ ์ธ ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜๋ฉด ๋œ๋‹ค.\n\nnp.arange()\nnp.zeros()\nnp.ones() \nnp.diag() \n\n์˜ˆ์ œ:", "np.arange(3, 10, 3)\n\nnp.zeros((2,3))\n\nnp.ones((2,))\n\nnp.diag([1, 2, 3, 4])\n\nnp.ones((3,3)) * 2", "๋ฌธ์ œ\n์•„๋ž˜ ๋ชจ์–‘์˜ ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\n๋‹จ, ์–ธ๊ธ‰๋œ ๋„ค ๊ฐœ์˜ ํ•จ์ˆ˜๋“ค๋งŒ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ๋‚˜ ์–ด๋ ˆ์ด๋Š” ํ—ˆ์šฉ๋˜์ง€ ์•Š๋Š”๋‹ค.\n$$\\left [ \\begin{matrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \\end{matrix} \\right ]$$\n```\n.\n```\n๊ฒฌ๋ณธ๋‹ต์•ˆ:", "np.diag(np.ones((3,))*2)", "๋ฌธ์ œ\n์•„๋ž˜ ๋ชจ์–‘์˜ ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\n๋‹จ, ์–ธ๊ธ‰๋œ ๋„ค ๊ฐœ์˜ ํ•จ์ˆ˜๋งŒ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ๋‚˜ ์–ด๋ ˆ์ด๋Š” ํ—ˆ์šฉ๋˜์ง€ ์•Š๋Š”๋‹ค.\n$$\\left [ \\begin{matrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \\end{matrix} \\right ]$$\n```\n.\n```\n๊ฒฌ๋ณธ๋‹ต์•ˆ:", "np.diag(np.arange(2, 7, 2))", "๋„˜ํŒŒ์ด์˜ linspace() ํ•จ์ˆ˜ ํ™œ์šฉ\nnumpy ๋ชจ๋“ˆ์˜ linspace() ํ•จ์ˆ˜๋Š” ์ง€์ •๋œ ๊ตฌ๊ฐ„์„ ์ •ํ•ด์ง„ ํฌ๊ธฐ๋กœ ์ผ์ •ํ•˜๊ฒŒ ์ชผ๊ฐœ๋Š” ์–ด๋ž˜์ด๋ฅผ ์ƒ์„ฑํ•œ๋‹ค.\n์˜ˆ๋ฅผ ๋“ค์–ด, 0๋ถ€ํ„ฐ 3์‚ฌ์ด์˜ ๊ตฌ๊ฐ„์„ ๊ท ๋“ฑํ•˜๊ฒŒ 30๊ฐœ๋กœ ์ชผ๊ฐœ๊ณ ์ž ํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ์‹คํ–‰ํ•˜๋ฉด ๋œ๋‹ค.", "xs = np.linspace(0, 3, 30)\nxs", "๋ฌธ์ œ\n0๋ถ€ํ„ฐ 1์‚ฌ์ด์˜ ๊ตฌ๊ฐ„์„ ๊ท ๋“ฑํ•˜๊ฒŒ 10๊ฐœ๋กœ ์ชผ๊ฐœ์–ด ๊ฐ ํ•ญ๋ชฉ์„ ์ œ๊ณฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ.\n```\n.\n```\n๊ฒฌ๋ณธ๋‹ต์•ˆ:", "np.linspace(0,1, 10) ** 2", "๋„˜ํŒŒ์ด ํ™œ์šฉ ๊ธฐ์ดˆ 2\npopulation.txt ํŒŒ์ผ์€ 1900๋…„๋ถ€ํ„ฐ 1920๋…„๊นŒ์ง€ ์บ๋‚˜๋‹ค ๋ถ๋ถ€์ง€์—ญ์—์„œ ์„œ์‹ํ•œ ์‚ฐํ† ๋ผ(hare)์™€ ์Šค๋ผ์†Œ๋‹ˆ(lynx)์˜ ์ˆซ์ž, \n๊ทธ๋ฆฌ๊ณ  ์ฑ„์†Œ์ธ ๋‹น๊ทผ(carrot)์˜ ์žฌ๋ฐฐ์ˆซ์ž๋ฅผ ์•„๋ž˜ ๋‚ด์šฉ์œผ๋กœ ์ˆœ์ˆ˜ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋‹ด๊ณ  ์žˆ๋‹ค.\n\n```\nyear hare lynx carrot\n1900 30e3 4e3 48300\n1901 47.2e3 6.1e3 48200\n1902 70.2e3 9.8e3 41500\n1903 77.4e3 35.2e3 38200\n1904 36.3e3 59.4e3 40600\n1905 20.6e3 41.7e3 39800\n1906 18.1e3 19e3 38600\n1907 21.4e3 13e3 42300\n1908 22e3 8.3e3 44500\n1909 25.4e3 9.1e3 42100\n1910 27.1e3 7.4e3 46000\n1911 40.3e3 8e3 46800\n1912 57e3 12.3e3 43800\n1913 76.6e3 19.5e3 40900\n1914 52.3e3 45.7e3 39400\n1915 19.5e3 51.1e3 39000\n1916 11.2e3 29.7e3 36700\n1917 7.6e3 15.8e3 41800\n1918 14.6e3 9.7e3 43300\n1919 16.2e3 10.1e3 41300\n1920 24.7e3 8.6e3 47300\n```\n\n์•„๋ž˜ ์ฝ”๋“œ๋Š” ์—ฐ๋„, ํ† ๋ผ ๊ฐœ์ฒด์ˆ˜, ์Šค๋ผ์†Œ๋ฆฌ ๊ฐœ์ฒด์ˆ˜, ๋‹น๊ทผ ๊ฐœ์ฒด์ˆ˜๋ฅผ ๋”ฐ๋กœ๋”ฐ๋กœ ๋–ผ์–ด ๋‚ด์–ด ๊ฐ๊ฐ ์–ด๋ ˆ์ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ \nyear, hares, lynxes, carrots ๋ณ€์ˆ˜์— ์ €์žฅํ•˜๋Š” ์ฝ”๋“œ์ด๋‹ค.", "data = np.loadtxt('populations.txt')\nyear, hares, lynxes, carrots = data.T", "๋ฌธ์ œ\n์œ„ ์ฝ”๋“œ์—์„œ np.loadtxt ํ•จ์ˆ˜์˜ ์ž‘๋™๋ฐฉ์‹์„ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ.\n```\n.\n```\n๋ฌธ์ œ\n์œ„ ์ฝ”๋“œ์—์„œ data.T์— ๋Œ€ํ•ด ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ.\n```\n.\n```\n์•„๋ž˜ 
์ฝ”๋“œ๋Š” ํ† ๋ผ, ์Šค๋ผ์†Œ๋‹ˆ, ๋‹น๊ทผ ๊ฐ๊ฐ์˜ ๊ฐœ์ฒด์ˆ˜์˜ ์—ฐ๋„๋ณ„ ๋ณ€ํ™”๋ฅผ ์„ ๊ทธ๋ž˜ํ”„๋กœ ๋ณด์—ฌ์ฃผ๋„๋ก ํ•˜๋Š” ์ฝ”๋“œ์ด๋‹ค.", "plt.axes([0.2, 0.1, 0.5, 0.8])\nplt.plot(year, hares, year, lynxes, year, carrots)\nplt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5))", "๋ฌธ์ œ\n์œ„ ์ฝ”๋“œ์—์„œ ์‚ฌ์šฉ๋œ plt.plot(year, hares, year, lynxes, year, carrots) ๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ.\n```\n.\n```\n๋ฌธ์ œ\n์‚ฐํ† ๋ผ, ์Šค๋ผ์†Œ๋‹ˆ, ๋‹น๊ทผ์˜ ์˜ˆ์ œ์—์„œ 1900๋…„๋ถ€ํ„ฐ 1921๋…„ ์‚ฌ์ด์— ๊ฐœ์ฒด๋ณ„ ๊ฐœ์ฒด์ˆ˜์˜ ๋ณ€ํ™”์— ๋Œ€ํ•ด ์–ด๋–ค ๋ถ„์„์„ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๊ทธ๋ž˜ํ”„๋ฅผ ์ด์šฉํ•˜์—ฌ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ.\n```\n.\n```" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
planetlabs/notebooks
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
apache-2.0
[ "Roads as Vectors\nThis notebook demonstrates converting the roads raster that is the output of the Analaytics feed into a vector dataset.\nIt demonstrates the following techniques for converting to vector:\n1. GDAL CLI\n2. Rasterio (no processing)\n3. Rasterio (with filtering and simplification)", "import os\n\nimport fiona\nimport matplotlib.pyplot as plt\nfrom planet import api\nimport rasterio\nfrom rasterio import features as rfeatures\nfrom rasterio.enums import Resampling\nfrom rasterio.plot import show\nimport shapely\nfrom shapely.geometry import shape as sshape\n\n# if your Planet API Key is not set as an environment variable, you can paste it below\nAPI_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')\n\nanalytics_client = api.ClientV1(api_key=API_KEY)", "Obtain Analytics Raster\nIdentify road feed feature for download\nWe want to download the most recent feature from the feed for road detection in Kirazli, Turkey.", "# This ID is for a subscription for monthly road detection in Kirazli, Turkey\nSUBSCRIPTION_ID = 'f184516c-b948-406f-b257-deaa66c3f38a'\n\nresults = analytics_client.list_collection_features(SUBSCRIPTION_ID).get()\nfeatures = results['features']\nprint('{} features in collection'.format(len(features)))\n\n# sort features by acquisition date\nfeatures.sort(key=lambda k: k['properties']['first_acquired'])\nfeature = features[-1]\nprint(feature['properties']['first_acquired'])", "Download Quad Raster", "RESOURCE_TYPE = 'target-quad'\n\ndef create_save_dir(root_dir='data'):\n save_dir = root_dir\n\n if not os.path.isdir(save_dir):\n os.makedirs(save_dir)\n return save_dir\n\ndest = 'data'\ncreate_save_dir(dest)", "We want to save each all of the images in one directory. But all of the images for a single target quad have the same name, L15_{target_quad_id}. We use the function write_to_file to save the image, and that function pulls the name from the resource name attribute, which we can't set. So, we are going to make a new object that functions just like resource, but has the name attribute set to the acquisition date. 
It would be nice if the write_to_file function just allowed us to set the name, like it allows us to set the directory.", "from planet.api.models import Body\nfrom planet.api.utils import write_to_file\n\ndef download_feature(feature, subscription_id, resource_type, dest=dest):\n print('{}: acquired {}'.format(feature['id'], get_date(feature)))\n resource = analytics_client.get_associated_resource_for_analytic_feature(subscription_id,\n feature['id'],\n resource_type)\n \n named_resource = NamedBody(resource, get_name(feature))\n filename = download_resource(named_resource, dest)\n return filename\n\ndef get_date(feature):\n feature_acquired = feature['properties']['first_acquired']\n return feature_acquired.split('T',1)[0]\n\ndef get_name(feature):\n return feature['properties']['target_quad_id'] + '_' + get_date(feature) + '.tif'\n\ndef download_resource(resource, dest, overwrite=False):\n writer = write_to_file(dest, overwrite=overwrite)\n writer(resource)\n filename = os.path.join(dest, resource.name)\n print('file saved to: {}'.format(filename))\n return filename\n\nclass NamedBody(Body):\n def __init__(self, body, name):\n super(NamedBody, self).__init__(body._request, body.response, body._dispatcher)\n self._name = name\n \n @property\n def name(self):\n return self._name\n\nfilename = download_feature(feature, SUBSCRIPTION_ID, RESOURCE_TYPE)", "Visualize Roads Image\nThe output of the analytics road detection is a boolean image where road pixels are given a value of True and non-road pixels are given a value of False.", "def _open(filename, factor=1):\n with rasterio.open(filename) as dataset:\n height = int(dataset.height / factor)\n width = int(dataset.width / factor)\n data = dataset.read(\n out_shape=(dataset.count, height, width)\n )\n return data\n\ndef open_bool(filename, factor=1):\n data = _open(filename, factor=factor)\n return data[0,:,:]\n\ndef get_figsize(factor):\n return tuple(2 * [int(25/factor)])\n\n\n\nfactor = 1\nfigsize = (15, 15)\n\nroads = open_bool(filename, factor=factor)\nfig = plt.figure(figsize=figsize)\nshow(roads, title=\"roads\", cmap=\"binary\")", "Convert Roads to Vector Features\nGDAL Command-Line Interface (CLI)\nGDAL provides a python script that can be run via the CLI. It is quite easy to run and fast, though it doesn't allow for some of the convenient pixel-space filtering and processing that rasterio provides and we will use later on.", "gdal_output_filename = os.path.join('data', 'test_gdal.shp')\n\n!gdal_polygonize.py $filename $gdal_output_filename", "Rasterio - no filtering\nIn this section we use rasterio to convert the binary roads raster into a vector dataset. The vectors are written to disk as a shapefile. The shapefile can be imported into geospatial programs such as QGIS or ArcGIS for visualization and further processing.\nThis is basic conversion to vector shapes. No filtering based on size (useful for removing small 1 or 2 pixel road segments), smoothing to remove pixel edges, or conversion to the road centerlines is performed here. 
These additional 'features' will be provided in sections below this one in the future.", "def roads_as_vectors(filename): \n with rasterio.open(filename) as dataset:\n roads = dataset.read(1)\n road_mask = roads == 255 # mask non-road pixels\n\n # transforms roads features to image crs\n road_shapes = rfeatures.shapes(roads, mask=road_mask, connectivity=8, transform=dataset.transform)\n road_geometries = (r for r, _ in road_shapes)\n \n crs = dataset.crs\n return (road_geometries, crs)\n\ndef save_as_shapefile(output_filename, geometries, crs):\n driver='ESRI Shapefile'\n schema = {'geometry': 'Polygon', 'properties': []}\n with fiona.open(output_filename, mode='w', driver=driver, schema=schema, crs=crs) as c:\n count = 0\n for g in geometries:\n count += 1;\n c.write({'geometry': g, 'properties': {}})\n print('wrote {} geometries to {}'.format(count, output_filename))\n\n \nroad_geometries, crs = roads_as_vectors(filename)\noutput_filename = os.path.join('data', 'test.shp')\nsave_as_shapefile(output_filename, road_geometries, crs)", "Rasterio - Filtering and Simplifying\nIn this section, we use shapely to filter the road vectors by size and simplify them so we don't have a million pixel edges.", "def roads_as_vectors_with_filtering(filename, min_pixel_size=5): \n with rasterio.open(filename) as dataset:\n roads = dataset.read(1)\n road_mask = roads == 255 # mask non-road pixels\n\n # we skip transform on vectorization so we can perform filtering in pixel space\n road_shapes = rfeatures.shapes(roads, mask=road_mask, connectivity=8)\n road_geometries = (r for r, _ in road_shapes)\n geo_shapes = (sshape(g) for g in road_geometries)\n\n # filter to shapes bigger than min_pixel_size\n geo_shapes = (s for s in geo_shapes if s.area > min_pixel_size)\n \n # simplify so we don't have a million pixel edge points\n tolerance = 1 #1.5\n geo_shapes = (g.simplify(tolerance, preserve_topology=False)\n for g in geo_shapes)\n\n # apply image transform \n # rasterio transform: (a, b, c, d, e, f, 0, 0, 1), c and f are offsets\n # shapely: a b d e c/xoff f/yoff\n d = dataset.transform\n shapely_transform = [d[0], d[1], d[3], d[4], d[2], d[5]]\n proj_shapes = (shapely.affinity.affine_transform(g, shapely_transform)\n for g in geo_shapes)\n \n road_geometries = (shapely.geometry.mapping(s) for s in proj_shapes)\n \n crs = dataset.crs\n return (road_geometries, crs)\n\nroad_geometries_filt, crs = roads_as_vectors_with_filtering(filename)\noutput_filename = os.path.join('data', 'test_filt.shp')\nsave_as_shapefile(output_filename, road_geometries_filt, crs)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
radu941208/DeepLearning
Sequence_Models/Emojify+-+v2.ipynb
mit
[ "Emojify!\nWelcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier. \nHave you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing \"Congratulations on the promotion! Lets get coffee and talk. Love you!\" the emojifier can automatically turn this into \"Congratulations on the promotion! ๐Ÿ‘ Lets get coffee and talk. โ˜•๏ธ Love you! โค๏ธ\"\nYou will implement a model which inputs a sentence (such as \"Let's go see the baseball game tonight!\") and finds the most appropriate emoji to be used with this sentence (โšพ๏ธ). In many emoji interfaces, you need to remember that โค๏ธ is the \"heart\" symbol rather than the \"love\" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. \nIn this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. \nLets get started! Run the following cell to load the package you are going to use.", "import numpy as np\nfrom emo_utils import *\nimport emoji\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "1 - Baseline model: Emojifier-V1\n1.1 - Dataset EMOJISET\nLet's start by building a simple baseline classifier. \nYou have a tiny dataset (X, Y) where:\n- X contains 127 sentences (strings)\n- Y contains a integer label between 0 and 4 corresponding to an emoji for each sentence\n<img src=\"images/data_set.png\" style=\"width:700px;height:300px;\">\n<caption><center> Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>\nLet's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).", "X_train, Y_train = read_csv('data/train_emoji.csv')\nX_test, Y_test = read_csv('data/tesss.csv')\n\nmaxLen = len(max(X_train, key=len).split())", "Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.", "index = 59\nprint(X_train[index], label_to_emoji(Y_train[index]))", "1.2 - Overview of the Emojifier-V1\nIn this part, you are going to implement a baseline model called \"Emojifier-v1\". \n<center>\n<img src=\"images/image_1.png\" style=\"width:900px;height:300px;\">\n<caption><center> Figure 2: Baseline model (Emojifier-V1).</center></caption>\n</center>\nThe input of the model is a string corresponding to a sentence (e.g. \"I love you). In the code, the output will be a probability vector of shape (1,5), that you then pass in an argmax layer to extract the index of the most likely emoji output.\nTo get our labels into a format suitable for training a softmax classifier, lets convert $Y$ from its current shape current shape $(m, 1)$ into a \"one-hot representation\" $(m, 5)$, where each row is a one-hot vector giving the label of one example, You can do so using this next code snipper. 
Here, Y_oh stands for \"Y-one-hot\" in the variable names Y_oh_train and Y_oh_test:", "Y_oh_train = convert_to_one_hot(Y_train, C = 5)\nY_oh_test = convert_to_one_hot(Y_test, C = 5)", "Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.", "index = 59\nprint(Y_train[index], \"is converted into one hot\", Y_oh_train[index])", "All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!\n1.3 - Implementing Emojifier-V1\nAs shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.", "word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')", "You've loaded:\n- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)\n- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary\n- word_to_vec_map: dictionary mapping words to their GloVe vector representation.\nRun the following cell to check if it works.", "word = \"cucumber\"\nindex = 289846\nprint(\"the index of\", word, \"in the vocabulary is\", word_to_index[word])\nprint(\"the\", str(index) + \"th word in the vocabulary is\", index_to_word[index])", "Exercise: Implement sentence_to_avg(). You will need to carry out two steps:\n1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful. \n2. For each word in the sentence, access its GloVe representation. Then, average all these values.", "# GRADED FUNCTION: sentence_to_avg\n\ndef sentence_to_avg(sentence, word_to_vec_map):\n \"\"\"\n Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word\n and averages its value into a single vector encoding the meaning of the sentence.\n \n Arguments:\n sentence -- string, one training example from X\n word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation\n \n Returns:\n avg -- average vector encoding information about the sentence, numpy-array of shape (50,)\n \"\"\"\n \n ### START CODE HERE ###\n # Step 1: Split sentence into list of lower case words (โ‰ˆ 1 line)\n words = sentence.lower().split()\n\n # Initialize the average word vector, should have the same shape as your word vectors.\n avg = np.zeros((50,))\n \n # Step 2: average the word vectors. 
You can loop over the words in the list \"words\".\n for w in words:\n avg += word_to_vec_map[w]\n\n avg = avg/len(words)\n \n ### END CODE HERE ###\n \n return avg\n\navg = sentence_to_avg(\"Morrocan couscous is my favorite dish\", word_to_vec_map)\nprint(\"avg = \", avg)", "Expected Output:\n<table>\n <tr>\n <td>\n **avg= **\n </td>\n <td>\n [-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983\n -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867\n 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767\n 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061\n 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265\n 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925\n -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333\n -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433\n 0.1445417 0.09808667]\n </td>\n </tr>\n</table>\n\nModel\nYou now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters. \nExercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ (\"Y one hot\") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:\n$$ z^{(i)} = W . avg^{(i)} + b$$\n$$ a^{(i)} = softmax(z^{(i)})$$\n$$ \\mathcal{L}^{(i)} = - \\sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$\nIt is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the avg^{(i)} representation anyway, let's not bother this time. 
\nWe provided you a function softmax().", "# GRADED FUNCTION: model\n\ndef model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):\n \"\"\"\n Model to train word vector representations in numpy.\n \n Arguments:\n X -- input data, numpy array of sentences as strings, of shape (m, 1)\n Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)\n word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation\n learning_rate -- learning_rate for the stochastic gradient descent algorithm\n num_iterations -- number of iterations\n \n Returns:\n pred -- vector of predictions, numpy-array of shape (m, 1)\n W -- weight matrix of the softmax layer, of shape (n_y, n_h)\n b -- bias of the softmax layer, of shape (n_y,)\n \"\"\"\n \n np.random.seed(1)\n\n # Define number of training examples\n m = Y.shape[0] # number of training examples\n n_y = 5 # number of classes \n n_h = 50 # dimensions of the GloVe vectors \n \n # Initialize parameters using Xavier initialization\n W = np.random.randn(n_y, n_h) / np.sqrt(n_h)\n b = np.zeros((n_y,))\n \n # Convert Y to Y_onehot with n_y classes\n Y_oh = convert_to_one_hot(Y, C = n_y) \n \n # Optimization loop\n for t in range(num_iterations): # Loop over the number of iterations\n for i in range(m): # Loop over the training examples\n \n ### START CODE HERE ### (โ‰ˆ 4 lines of code)\n # Average the word vectors of the words from the i'th training example\n avg = sentence_to_avg(X[i], word_to_vec_map)\n\n # Forward propagate the avg through the softmax layer\n z = np.dot(W, avg)+b\n a = softmax(z)\n\n # Compute cost using the i'th training label's one hot representation and \"A\" (the output of the softmax)\n cost = -np.sum(np.multiply(Y_oh, np.log(a)))\n ### END CODE HERE ###\n \n # Compute gradients \n dz = a - Y_oh[i]\n dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))\n db = dz\n\n # Update parameters with Stochastic Gradient Descent\n W = W - learning_rate * dW\n b = b - learning_rate * db\n \n if t % 100 == 0:\n print(\"Epoch: \" + str(t) + \" --- cost = \" + str(cost))\n pred = predict(X, Y, W, b, word_to_vec_map)\n\n return pred, W, b\n\nprint(X_train.shape)\nprint(Y_train.shape)\nprint(np.eye(5)[Y_train.reshape(-1)].shape)\nprint(X_train[0])\nprint(type(X_train))\nY = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])\nprint(Y.shape)\n\nX = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',\n 'Lets go party and drinks','Congrats on the new job','Congratulations',\n 'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',\n 'You totally deserve this prize', 'Let us go play football',\n 'Are you down for football this afternoon', 'Work hard play harder',\n 'It is suprising how people can be dumb sometimes',\n 'I am very disappointed','It is the best day in my life',\n 'I think I will end up alone','My life is so boring','Good job',\n 'Great so awesome'])\n\nprint(X.shape)\nprint(np.eye(5)[Y_train.reshape(-1)].shape)\nprint(type(X_train))\n", "Run the next cell to train your model and learn the softmax parameters (W,b).", "pred, W, b = model(X_train, Y_train, word_to_vec_map)\nprint(pred)", "Expected Output (on a subset of iterations):\n<table>\n <tr>\n <td>\n **Epoch: 0**\n </td>\n <td>\n cost = 1.95204988128\n </td>\n <td>\n Accuracy: 0.348484848485\n </td>\n </tr>\n\n\n<tr>\n <td>\n **Epoch: 100**\n </td>\n <td>\n cost = 0.0797181872601\n </td>\n <td>\n Accuracy: 0.931818181818\n </td>\n 
</tr>\n\n<tr>\n <td>\n **Epoch: 200**\n </td>\n <td>\n cost = 0.0445636924368\n </td>\n <td>\n Accuracy: 0.954545454545\n </td>\n </tr>\n\n <tr>\n <td>\n **Epoch: 300**\n </td>\n <td>\n cost = 0.0343226737879\n </td>\n <td>\n Accuracy: 0.969696969697\n </td>\n </tr>\n</table>\n\nGreat! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set. \n1.4 - Examining test set performance", "print(\"Training set:\")\npred_train = predict(X_train, Y_train, W, b, word_to_vec_map)\nprint('Test set:')\npred_test = predict(X_test, Y_test, W, b, word_to_vec_map)", "Expected Output:\n<table>\n <tr>\n <td>\n **Train set accuracy**\n </td>\n <td>\n 97.7\n </td>\n </tr>\n <tr>\n <td>\n **Test set accuracy**\n </td>\n <td>\n 85.7\n </td>\n </tr>\n</table>\n\nRandom guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples. \nIn the training set, the algorithm saw the sentence \"I love you\" with the label โค๏ธ. You can check however that the word \"adore\" does not appear in the training set. Nonetheless, lets see what happens if you write \"I adore you.\"", "X_my_sentences = np.array([\"i adore you\", \"i love you\", \"funny lol\", \"lets play with a ball\", \"food is ready\", \"not feeling happy\"])\nY_my_labels = np.array([[0], [0], [2], [1], [4],[3]])\n\npred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)\nprint_predictions(X_my_sentences, pred)", "Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?\nNote though that it doesn't get \"not feeling happy\" correct. This algorithm ignores word ordering, so is not good at understanding phrases like \"not happy.\" \nPrinting the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class (\"actual\" class) is mislabeled by the algorithm with a different class (\"predicted\" class).", "print(Y_test.shape)\nprint(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))\nprint(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))\nplot_confusion_matrix(Y_test, pred_test)", "<font color='blue'>\nWhat you should remember from this part:\n- Even with a 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors gives you. \n- Emojify-V1 will perform poorly on sentences such as \"This movie is not good and not enjoyable\" because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part. \n2 - Emojifier-V2: Using LSTMs in Keras:\nLet's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji. 
\nRun the following cell to load the Keras packages.", "import numpy as np\nnp.random.seed(0)\nfrom keras.models import Model\nfrom keras.layers import Dense, Input, Dropout, LSTM, Activation\nfrom keras.layers.embeddings import Embedding\nfrom keras.preprocessing import sequence\nfrom keras.initializers import glorot_uniform\nnp.random.seed(1)", "2.1 - Overview of the model\nHere is the Emojifier-v2 you will implement:\n<img src=\"images/emojifier-v2.png\" style=\"width:700px;height:400px;\"> <br>\n<caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>\n2.2 Keras and mini-batching\nIn this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.\nThe common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, of the maximum sequence length is 20, we could pad every sentence with \"0\"s so that each input sentence is of length 20. Thus, a sentence \"i love you\" would be represented as $(e_{i}, e_{love}, e_{you}, \\vec{0}, \\vec{0}, \\ldots, \\vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. \n2.3 - The Embedding layer\nIn Keras, the embedding matrix is represented as a \"layer\", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras, initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer. \nThe Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.\n<img src=\"images/embedding1.png\" style=\"width:700px;height:250px;\">\n<caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption>\nThe largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).\nThe first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence. \nExercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. 
The output shape should be such that it can be given to Embedding() (described in Figure 4).", "# GRADED FUNCTION: sentences_to_indices\n\ndef sentences_to_indices(X, word_to_index, max_len):\n \"\"\"\n Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.\n The output shape should be such that it can be given to `Embedding()` (described in Figure 4). \n \n Arguments:\n X -- array of sentences (strings), of shape (m, 1)\n word_to_index -- a dictionary containing the each word mapped to its index\n max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this. \n \n Returns:\n X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)\n \"\"\"\n \n m = X.shape[0] # number of training examples\n \n ### START CODE HERE ###\n # Initialize X_indices as a numpy matrix of zeros and the correct shape (โ‰ˆ 1 line)\n X_indices = np.zeros((m,max_len))\n \n for i in range(m): # loop over training examples\n \n # Convert the ith training sentence in lower case and split is into words. You should get a list of words.\n sentence_words = [w.lower() for w in X[i].split()]\n \n # Initialize j to 0\n j = 0\n \n # Loop over the words of sentence_words\n for w in sentence_words:\n # Set the (i,j)th entry of X_indices to the index of the correct word.\n X_indices[i, j] = word_to_index[w]\n # Increment j to j + 1\n j = j+1\n \n ### END CODE HERE ###\n \n return X_indices", "Run the following cell to check what sentences_to_indices() does, and check your results.", "X1 = np.array([\"funny lol\", \"lets play baseball\", \"food is ready for you\"])\nX1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)\nprint(\"X1 =\", X1)\nprint(\"X1_indices =\", X1_indices)", "Expected Output:\n<table>\n <tr>\n <td>\n **X1 =**\n </td>\n <td>\n ['funny lol' 'lets play football' 'food is ready for you']\n </td>\n </tr>\n <tr>\n <td>\n **X1_indices =**\n </td>\n <td>\n [[ 155345. 225122. 0. 0. 0.] <br>\n [ 220930. 286375. 151266. 0. 0.] <br>\n [ 151204. 192973. 302254. 151349. 394475.]]\n </td>\n </tr>\n</table>\n\nLet's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence. \nExercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:\n1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.\n2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.\n3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings. \n4. 
Set the embedding weights to be equal to the embedding matrix", "# GRADED FUNCTION: pretrained_embedding_layer\n\ndef pretrained_embedding_layer(word_to_vec_map, word_to_index):\n \"\"\"\n Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.\n \n Arguments:\n word_to_vec_map -- dictionary mapping words to their GloVe vector representation.\n word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)\n\n Returns:\n embedding_layer -- pretrained layer Keras instance\n \"\"\"\n \n vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)\n emb_dim = word_to_vec_map[\"cucumber\"].shape[0] # define dimensionality of your GloVe word vectors (= 50)\n \n ### START CODE HERE ###\n # Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)\n emb_matrix = np.zeros((vocab_len, emb_dim))\n \n # Set each row \"index\" of the embedding matrix to be the word vector representation of the \"index\"th word of the vocabulary\n for word, index in word_to_index.items():\n emb_matrix[index, :] = word_to_vec_map[word]\n\n # Define Keras embedding layer with the correct output/input sizes, make it trainable. Use Embedding(...). Make sure to set trainable=False. \n embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)\n ### END CODE HERE ###\n\n # Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the \"None\".\n embedding_layer.build((None,))\n \n # Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.\n embedding_layer.set_weights([emb_matrix])\n \n return embedding_layer\n\nembedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)\nprint(\"weights[0][1][3] =\", embedding_layer.get_weights()[0][1][3])", "Expected Output:\n<table>\n <tr>\n <td>\n **weights[0][1][3] =**\n </td>\n <td>\n -0.3403\n </td>\n </tr>\n</table>\n\n2.3 Building the Emojifier-V2\nLets now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network. \n<img src=\"images/emojifier-v2.png\" style=\"width:700px;height:400px;\"> <br>\n<caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>\nExercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). 
You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().", "# GRADED FUNCTION: Emojify_V2\n\ndef Emojify_V2(input_shape, word_to_vec_map, word_to_index):\n \"\"\"\n Function creating the Emojify-v2 model's graph.\n \n Arguments:\n input_shape -- shape of the input, usually (max_len,)\n word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation\n word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)\n\n Returns:\n model -- a model instance in Keras\n \"\"\"\n \n ### START CODE HERE ###\n # Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).\n sentence_indices = Input(input_shape, dtype='int32')\n \n # Create the embedding layer pretrained with GloVe Vectors (โ‰ˆ1 line)\n embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)\n \n # Propagate sentence_indices through your embedding layer, you get back the embeddings\n embeddings = embedding_layer(sentence_indices) \n \n # Propagate the embeddings through an LSTM layer with 128-dimensional hidden state\n # Be careful, the returned output should be a batch of sequences.\n X = LSTM(128, return_sequences=True)(embeddings)\n # Add dropout with a probability of 0.5\n X = Dropout(0.5)(X)\n # Propagate X trough another LSTM layer with 128-dimensional hidden state\n # Be careful, the returned output should be a single hidden state, not a batch of sequences.\n X = LSTM(128, return_sequences=False)(X)\n # Add dropout with a probability of 0.5\n X = Dropout(0.5)(X)\n # Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.\n X = Dense(5)(X)\n # Add a softmax activation\n X = Activation('softmax')(X)\n \n \n # Create Model instance which converts sentence_indices into X.\n model = Model(inputs=sentence_indices ,outputs=X)\n \n ### END CODE HERE ###\n \n return model", "Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses \"20,223,927\" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.", "model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)\nmodel.summary()", "As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics:", "model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])", "It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).", "X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)\nY_train_oh = convert_to_one_hot(Y_train, C = 5)", "Fit the Keras model on X_train_indices and Y_train_oh. 
We will use epochs = 50 and batch_size = 32.", "model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)", "Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.", "X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)\nY_test_oh = convert_to_one_hot(Y_test, C = 5)\nloss, acc = model.evaluate(X_test_indices, Y_test_oh)\nprint()\nprint(\"Test accuracy = \", acc)", "You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.", "# This code allows you to see the mislabelled examples\nC = 5\ny_test_oh = np.eye(C)[Y_test.reshape(-1)]\nX_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)\npred = model.predict(X_test_indices)\nfor i in range(len(X_test)):\n x = X_test_indices\n num = np.argmax(pred[i])\n if(num != Y_test[i]):\n print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())", "Now you can try it on your own example. Write your own sentence below.", "# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings. \nx_test = np.array(['not feeling happy'])\nX_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)\nprint(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))", "Previously, Emojify-V1 model did not correctly label \"not feeling happy,\" but our implementation of Emojiy-V2 got it right. (Keras' outputs are slightly random each time, so you may not have obtained the same result.) The current model still isn't very robust at understanding negation (like \"not happy\") because the training set is small and so doesn't have a lot of examples of negation. But if the training set were larger, the LSTM model would be much better than the Emojify-V1 model at understanding such complex sentences. \nCongratulations!\nYou have completed this notebook! โค๏ธโค๏ธโค๏ธ\n<font color='blue'>\nWhat you should remember:\n- If you have an NLP task where the training set is small, using word embeddings can help your algorithm significantly. Word embeddings allow your model to work on words in the test set that may not even have appeared in your training set. \n- Training sequence models in Keras (and in most other deep learning frameworks) requires a few important details:\n - To use mini-batches, the sequences need to be padded so that all the examples in a mini-batch have the same length. \n - An Embedding() layer can be initialized with pretrained values. These values can be either fixed or trained further on your dataset. If however your labeled dataset is small, it's usually not worth trying to train a large pre-trained set of embeddings. \n - LSTM() has a flag called return_sequences to decide if you would like to return every hidden states or only the last one. \n - You can use Dropout() right after LSTM() to regularize your network. \nCongratulations on finishing this assignment and building an Emojifier. We hope you're happy with what you've accomplished in this notebook! \n๐Ÿ˜€๐Ÿ˜€๐Ÿ˜€๐Ÿ˜€๐Ÿ˜€๐Ÿ˜€\nAcknowledgments\nThanks to Alison Darcy and the Woebot team for their advice on the creation of this assignment. Woebot is a chatbot friend that is ready to speak with you 24/7. As part of Woebot's technology, it uses word embeddings to understand the emotions of what you say. 
You can play with it by going to http://woebot.io\n<img src=\"images/woebot.png\" style=\"width:600px;height:300px;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sandrofsousa/Resolution
Pysegreg/Pysegreg_notebook_distance.ipynb
mit
[ "Pysegreg run - Distance based\n Instructions\nFor fast processing, you can just change the following variables before running:\n* path/name at Input file cell (select the file you want to use)\n* bandwidth and weigth method at compute population intensity cell\n* file name in the variable fname at section Save results to a local file (the file you want to save results)\nmake sure you don't use a name already used or the file will be replaced\nWith the previous steps in mind, just click on Cell menu and select Run All", "# Imports\nimport numpy as np\nnp.seterr(all='ignore')\nimport pandas as pd\nfrom decimal import Decimal\nimport time\n\n# Import python script with Pysegreg functions\nfrom segregationMetrics import Segreg\n\n# Instantiate segreg as cc\ncc = Segreg()", "Input file\nAttention to the new data structure for input !!!\nChange your input file with path/name in the cell below to be processed.\nData Format\nID | X | Y | group 1 | group 2 | group n", "cc.readAttributesFile('/Users/sandrofsousa/Downloads/valid/Segreg sample.csv')", "Measures\nCompute Population Intensity\nFor non spatial result, please comment the function call at: \"cc.locality= ...\" \n\nto comment a code use # in the begining of the line\n\nDistance matrix is calculated at this step. Change the parameters for the population\nintensity according to your needs. Parameters are:\n bandwidth - is set to be 5000m by default, you can change it here\n weightmethod - 1 for gaussian, 2 for bi-square and empty for moving window", "start_time = time.time()\n\ncc.locality = cc.cal_localityMatrix(bandwidth=700, weightmethod=1)\n\nprint(\"--- %s seconds for processing ---\" % (time.time() - start_time))", "For validation only\nRemove the comment (#) if you want to see the values and validate", "# np.set_printoptions(threshold=np.inf)\n# print('Location (coordinates from data):\\n', cc.location)\n# print()\n# print('Population intensity for all groups:\\n', cc.locality)\n\n'''To select locality for a specific line (validation), use the index in[x,:]'''\n# where x is the number of the desired line\n\n# cc.locality[5,:]", "Compute local Dissimilarity", "diss_local = cc.cal_localDissimilarity()\ndiss_local = np.asmatrix(diss_local).transpose()", "Compute global Dissimilarity", "diss_global = cc.cal_globalDissimilarity()", "Compute local Exposure/Isolation\nexpo is a matrix of n_group * n_group therefore, exposure (m,n) = rs[m,n]\nthe columns are exporsure m1 to n1, to n2... 
n5, m2 to n1....n5\n- m,m = isolation index of group m\n- m,n = expouse index of group m to n\nResult of all combinations of local groups expousure/isolation\nTo select a specific line of m to n, use the index [x]\nEach value is a result of the combinations m,n\ne.g.: g1xg1, g1xg2, g2,g1, g2xg2 = isolation, expousure, // , isolation", "expo_local = cc.cal_localExposure()", "Compute global Exposure/Isolation", "expo_global = cc.cal_globalExposure()", "Compute local Entropy", "entro_local = cc.cal_localEntropy()", "Compute global Entropy", "entro_global = cc.cal_globalEntropy()", "Compute local Index H", "idxh_local = cc.cal_localIndexH()", "Compute global Index H", "idxh_global = cc.cal_globalIndexH()", "Results\nPrepare data for saving on a local file", "# Concatenate local values from measures\nif len(cc.locality) == 0:\n results = np.concatenate((expo_local, diss_local, entro_local, idxh_local), axis=1)\nelse:\n results = np.concatenate((cc.locality, expo_local, diss_local, entro_local, idxh_local), axis=1)\n\n# Concatenate the results with original data\noutput = np.concatenate((cc.tract_id, cc.attributeMatrix, results),axis = 1)\n\nnames = ['id','x','y']\n\nfor i in range(cc.n_group):\n names.append('group_'+str(i))\n\nif len(cc.locality) == 0: \n for i in range(cc.n_group):\n for j in range(cc.n_group):\n if i == j:\n names.append('iso_' + str(i) + str(j))\n else:\n names.append('exp_' + str(i) + str(j))\n \n names.append('dissimil')\n names.append('entropy')\n names.append('indexh')\n \nelse:\n for i in range(cc.n_group):\n names.append('intens_'+str(i))\n \n for i in range(cc.n_group):\n for j in range(cc.n_group):\n if i == j:\n names.append('iso_' + str(i) + str(j))\n else:\n names.append('exp_' + str(i) + str(j))\n \n names.append('dissimil')\n names.append('entropy')\n names.append('indexh')", "Save Local and global results to a file\nThe paramenter fname corresponds to the folder/filename, change it as you want.\nTo save on a diferent folder, use the \"/\" to pass the directory.\nThe local results will be saved using the name defined and adding the \"_local\" postfix to file's name.\nThe global results are automatically saved using the same name with the addiction of the postfix \"_globals\". \nIt's recommended to save on a different folder from the code, e.g.: a folder named result.\nThe fname value should be changed for any new executions or the local file will be overwrited!", "fname = \"/Users/sandrofsousa/Downloads/valid/result\"\n\noutput = pd.DataFrame(output, columns=names)\noutput.to_csv(\"%s_local.csv\" % fname, sep=\",\", index=False)\nwith open(\"%s_global.txt\" % fname, \"w\") as f:\n f.write('Global dissimilarity: ' + str(diss_global))\n f.write('\\nGlobal entropy: ' + str(entro_global))\n f.write('\\nGlobal Index H: ' + str(idxh_global))\n f.write('\\nGlobal isolation/exposure: \\n')\n f.write(str(expo_global))\n\n# code to save data as a continuous string - Marcus request for R use\n\n# names2 = ['dissimil', 'entropy', 'indexh']\n\n# for i in range(cc.n_group):\n# for j in range(cc.n_group):\n# if i == j:\n# names2.append('iso_' + str(i) + str(j))\n# else:\n# names2.append('exp_' + str(i) + str(j))\n\n# values = [diss_global, entro_global, idxh_global]\n# for i in expo_global: values.append(i)\n\n# file2 = \"/Users/sandrofsousa/Downloads/\"\n# with open(\"%s_global.csv\" % file2, \"w\") as f:\n# f.write(', '.join(names2) + '\\n')\n# f.write(', '.join(str(i) for i in values))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
markovmodel/adaptivemd
examples/tutorial/4_example_advanced_tasks.ipynb
lgpl-2.1
[ "AdaptiveMD\nExample 4 - Custom Task objects\n0. Imports", "from adaptivemd import Project, File, PythonTask, Task", "Let's open our test project by its name. If you completed the first examples this should all work out of the box.\nOpen all connections to the MongoDB and Session so we can get started.", "project = Project('tutorial')", "Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.", "print project.files\nprint project.generators\nprint project.models", "Now restore our old ways to generate tasks by loading the previously used generators.", "engine = project.generators['openmm']\nmodeller = project.generators['pyemma']\npdb_file = project.files['initial_pdb']", "A simple task\nA task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.\nThe execution structure\nLet's first explain briefly how a task is executed and what its components are. This was originally build so that it is compatible with radical.pilot and still is. So, if you are familiar with it, all of the following information should sould very familiar.\nA task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like \nworker.0x5dcccd05097611e7829b000000000072L/\nthe long number is a hex representation of the UUID of the task. Just if you are curious type\nprint hex(my_task.__uuid__)\nThen we change directory to this folder write a running.sh bash script and execute it. This script is created from the task definition and also depends on your resource setting (which basically only contain the path to the workers directory, etc)\nThe script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure\n\n\nPre-Exec: Things to happen before the main command (optional)\n\n\nMain: the main commands are executed\n\n\nPost-Exec: Things to happen after the main command (optional)\n\n\nOkay, lots of theory, now some real code for running a task that generated a trajectory", "task = engine.run(project.new_trajectory(pdb_file, 100))\n\ntask.script", "We are linking a lot of files to the worker directory and change the name for the .pdb in the process. Then call the actual python script that runs openmm. And finally move the output.dcd and the restart file back tp the trajectory folder.\nThere is a way to list lot's of things about tasks and we will use it a lot to see our modifications.", "print task.description", "Modify a task\nAs long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now\nAdd a bash line\nFirst, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.", "task.append('echo \"This new line is pointless\"')\n\nprint task.description", "As expected this line was added to the end of the script.\nAdd staging actions\nTo set staging is more difficult. 
The reason is, that you normally have no idea where files are located and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files but also files itself can generate this commands for you.\nLet's move one trajectory (directory) around a little more as an example", "traj = project.trajectories.one\n\ntransaction = traj.copy()\nprint transaction", "This looks like in the script. The default for a copy is to move a file or folder to the worker directory under the same name, but you can give it another name/location if you use that as an argument. Note that since trajectories are a directory you need to give a directory name (which end in a /)", "transaction = traj.copy('new_traj/')\nprint transaction", "If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared://, sandbox://, staging:// as explained in the previous examples)", "transaction = traj.copy('staging:///cached_trajs/')\nprint transaction", "Besides .copy you can also .move or .link files.", "transaction = pdb_file.copy('staging:///delete.pdb')\nprint transaction\ntransaction = pdb_file.move('staging:///delete.pdb')\nprint transaction\ntransaction = pdb_file.link('staging:///delete.pdb')\nprint transaction", "Local files\nLet's mention these because they require special treatment. We cannot copy files to the HPC, we need to store them in the DB first.", "new_pdb = File('file://../files/ntl9/ntl9.pdb').load()", "Make sure you use file:// to indicate that you are using a local file. The above example uses a relative path which will be replaced by an absolute one, otherwise we ran into trouble once we open the project at a different directory.", "print new_pdb.location", "Note that now there are 3 / in the filename, two from the :// and one from the root directory of your machine\nThe load() at the end really loads the file and when you save this File now it will contain the content of the file. You can access this content as seen in the previous example.", "print new_pdb.get_file()[:300]", "For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.\nNow, we want to add a command to the staging and see what happens.", "transaction = new_pdb.transfer()\nprint transaction\n\ntask.append(transaction)\n\nprint task.description", "We now have one more transfer command. But something else has changed. There is one more files listed as required. So, the task can only run, if that file exists, but since we loaded it into the DB, it exists (for us). For example the newly created trajectory 25.dcd does not exist yet. Would that be a requirement the task would fail. But let's check that it exists.", "new_pdb.exists", "Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats.", "task.append('stat ntl9.pdb')", "Note that usually you place these stage commands at the top or your script.\nNow we could run this task, as before and see, if it works. 
(Make sure you still have a worker running)", "project.queue(task)", "And check, that the task is running", "task.state", "If we did not screw up the task, it should have succeeded and we can look at the STDOUT.", "print task.stdout", "Well, great, we have the pointless output and the stats of the newly staged file ntl9.pdb\nHow does a real script look like\nJust for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.", "from adaptivemd import WorkerScheduler\n\nsc = WorkerScheduler(project._current_configuration)", "If you really wanted to use the worker you need to initialize it and it will create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that.", "sc.project = project", "Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.", "print '\\n'.join(sc.task_to_script(task))", "Now you see that all file paths have been properly interpreted to work. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RPs way of handling files. (TODO: We might change this to just write to the target file. Need to check if that is still consistent)\nA note on file locations\nOne problem with bash scripts is that when you create the tasks you have no concept on where the files actually are located. To get around this the created bash script will be scanned for paths, that contain prefixed like we are used to and are interpreted in the context of the worker / scheduler. The worker is the only instance to know all that is necessary so this is the place to fix that problem.\nLet's see that in a little example, where we create an empty file in the staging area.", "task = Task()\ntask.append('touch staging:///my_file.txt')\n\nprint '\\n'.join(sc.task_to_script(task))", "And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!\nA Task from scratch\nIf you want to start a new task you can begin with", "task = Task()", "as we did before.\nJust start adding staging and bash commands and you are done. When you create a task you can assign it a generator, then the system will assume that this task was generated by that generator, so don't do it for you custom tasks, unless you generated them in a generator. Setting this allows you to tell a worker only to run tasks of certain types.\nThe Python RPC Task\nThe tasks so far a very powerful, but they lack the possibility to call a python function. Since we are using python here, it would be great to really pretend to call a python function from here and not taking the detour of writing a python bash executable with arguments, etc... An example for this is the PyEmma generator which uses this capability.\nLet's do an example of this as well. Assume we have a python function in a file (you need to have your code in a file so far so that we can copy the file to the HPC if necessary). Let's create the .py file now.", "%%file my_rpc_function.py\n\ndef my_func(f):\n import os\n print f\n return os.path.getsize(f)", "Now create a PythonTask instead", "task = PythonTask(modeller)", "and the call function has changed. 
Note that also now you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask so we have a .pre and .post phase available.", "from my_rpc_function import my_func", "We call the function my_func with one argument", "task.call(my_func, f=project.trajectories.one)\n\nprint task.description", "Well, interesting. What this actually does is to write the input arguments to the function into a temporary .json file on the worker, (in RP on the local machine and then transfers it to remote), rename it to input.json and read it in the _run_.py. This is still a little clumsy, but needs to be this way to be RP compatible which only works with files! Look at the actual script.\nYou see, that we really copy the .py file that contains the source code to the worker directory. All that is done automatically. A little caution on this. You can either write a function in a single file or use any installed package, but in this case the same package needs to be installed on the remote machine as well!\nLet's run it and see what happens.", "project.queue(task)", "And wait until the task is done", "project.wait_until(task.is_done)", "The default settings will automatically save the content from the resulting output.json in the DB an you can access the data that was returned from the task at .output. In our example the result was just the size of a the file in bytes", "task.output", "And you can use this information in an adaptive script to make decisions.\nsuccess callback\nThe last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions. \n\nyou can call a function of the related generator class. for this you need to create the task using PythonTask(generator)\nthe function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and chose the function for each task.\nThe Generator needs to be part of adaptivemd\n\nSo in the case of modeller.execute we create a PythonTask that references the following functions", "task = modeller.execute(project.trajectories)\n\ntask.then_func_name", "So we will call the default then_func of modeller or the class modeller is of.", "help(modeller.then_func)", "These callbacks are called with the current project, the resulting data (which is in the modeller case a Model object) and array of initial inputs.\nThis is the actual code of the callback\npy\n@staticmethod\ndef then_func(project, task, model, inputs):\n # add the input arguments for later reference\n model.data['input']['trajectories'] = inputs['kwargs']['files']\n model.data['input']['pdb'] = inputs['kwargs']['topfile']\n project.models.add(model)\nAll it does is to add some of the input parameters to the model for later reference and then store the model in the project. You are free to define all sorts of actions here, even queue new tasks.\nNext, we will talk about the factories for Task objects, called generators. There we will actually write a new class that does some stuff with the results.", "project.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cristhro/Machine-Learning
ejercicio 5/Practica 5.ipynb
gpl-3.0
[ "Sacar la lista de 250 Pelis", "from imdb import IMDb\nfrom datetime import datetime\nfrom elasticsearch import Elasticsearch\nes = Elasticsearch()\n\nia = IMDb()\nlistaPelis = ia.get_top250_movies()\nlistaPelis", "Sacar toda la info de una peli para poder meterla en un diccionario y usarla en ElasticSearch, indexandola (metodo todo en 1)\nTarda bastante en ejecutarse (5 a 15 min), mete 250 peliculas en elastic\nquitado parametro de es.index (, id=i)\nCoge el sumario de cada peli de la lista, y guarda la info en elasticSearch", "for i in range(10,250):\n peli = listaPelis[i]\n peli2 = ia.get_movie(peli.movieID)\n string = peli2.summary()\n separado = string.split('\\n')\n solucion = {}\n for i in range(2,len(separado)):\n sep2 = separado[i].split(':')\n #Forma de evitar que haya fallo al pasar el split a diccionario\n #Caso del fallo en los 2 cuadros de abajo\n sep2[1:len(sep2)] = [''.join(sep2[1:len(sep2)])]\n solucion.update(dict([sep2]))\n es.index(index='prueba-index', doc_type='text', body=solucion)\n\nseparado\n\nsep2[1]", "Pruebas", "import pandas as pd\nlista=[]\n\nfor i in range(0400000,0400010,1):\n peli = ia.get_movie(i)\n lista.append(peli.summary())\n\n \ndatos = pd.DataFrame(lista)\nprint datos.values\n\n\nimport pandas as pd\nlista=[]\ndatos = pd.DataFrame([])\nfor i in range(0005000,0005003):\n lista.append(ia.get_movie(i))\n lista.append(ia.get_movie_plot(i))\n\n datos = datos.append(lista)\n\nprint datos.values", "Elastic Seach (cabezera de ejemplo)", "from datetime import datetime\nfrom elasticsearch import Elasticsearch\nes = Elasticsearch()\n'''\ndoc = {\n 'prueba': 'Holi',\n 'text': 'A man throws away an old top hat and a tramp uses it to sole his boots.',\n}\nres = es.index(index=\"movies-index\", doc_type='text', id=1, body=doc)\nprint(res['created'])\n'''\n\nres = es.get(index=\"movies-index\", doc_type='text', id=6)\nprint(res['_source'])\n\nes.indices.refresh(index=\"movies-index\")\n\nres = es.search(index=\"movies-index\", body={\"query\": {\"match_all\": {}}})\nprint(\"Got %d Hits:\" % res['hits']['total'])\nfor hit in res['hits']['hits']:\n print(\"%(text)s\" % hit[\"_source\"])", "Inicializacion real de Elastic Search (ejecutar)", "# make sure ES is up and running\nimport requests\nres = requests.get('http://localhost:9200')\nprint(res.content)\n\nfrom elasticsearch import Elasticsearch\nes = Elasticsearch([{'host': 'localhost', 'port': 9200}])", "Guardamos el top 250 dentro de elastic search (antiguo)", "#Lista con el top 250 de peliculas\ntop = ia.get_top250_movies()\n#Recorro la lista y saco los datos para indexarlos en elastic search, el id es el orden en la lista\nfor i in range(0,250):\n es.index(index='films-index', doc_type='text', id=i, body=top[i].data)\n", "Buscamos los datos guardados (antiguo)", "res = es.search(index=\"films-index\", body={\"query\": {\"match_all\": {}}})\nprint(\"Got %d Hits:\" % res['hits']['total'])\n#Modificar para que funcione\nfor hit in res['hits']['hits']:\n print(\"%(kind)s %(title)s %(year)s %(rating)s\" % hit[\"_source\"])", "Sacar los hits e info de unos cuantos de ellos", "res = es.search(index=\"prueba-index\", body={\"query\": {\"match_all\": {}}})\nprint(\"Got %d Hits:\" % res['hits']['total'])\n\nfor hit in res['hits']['hits']:\n print(\"%(Title)s %(Genres)s %(Director)s %(Cast)s %(Writer)s %(Country)s %(Language)s %(Rating)s %(Plot)s\" % hit[\"_source\"])\n\nres = es.search(index=\"prueba-index\", body={\"query\": {\"match_all\": {}}})\nprint(\"Got %d Hits:\" % res['hits']['total'])\n\nfor hit in 
res['hits']['hits']:\n print(\"%(Title)s\" % hit[\"_source\"])\n\nres = es.search(index=\"prueba-index\", body={\"query\": {\"match_all\": {}}})\nres\n\nres = es.search(index=\"prueba-index\", body={\n \"query\": \n {\"match\" : {'Director': 'Christopher Nolan'}\n },\n {\n \"highlight\" : {\n \"fields\" : {\n \"Language\" : {}\n }\n }\n }\n})\nres", "Query sin fuzziness\nNo funciona si le quitas una letra, la query de abajo si al ser fuzzy", "res = es.search(index=\"prueba-index\", body={\"query\": {\"match\" : {'Director': 'Christophe Nola'}}})\nprint(\"Got %d Hits:\" % res['hits']['total'])\nfor hit in res['hits']['hits']:\n print(\"%(Title)s\" % hit[\"_source\"])", "Query con fuzziness aรฑadida", "bodyQuery = {\n \"query\": {\n \"multi_match\" : {\n \"query\" : \"Int\",\n \"fields\": [\"Plot\", \"Title\"],\n \"fuzziness\": \"2\"\n }\n }\n}\nres = es.search(index=\"prueba-index\", body=bodyQuery)\n#print res\n#print(\"Got %d Hits:\" % res['hits']['total'])\nfor hit in res['hits']['hits']:\n print(\"%(Title)s\" % hit[\"_source\"])\n\nbodyQuery = {\n \"query\": {\n \"regexp\":{\n \"Title\": \"wonder.*\"\n }\n }\n}\nres = es.search(index=\"prueba-index\", body=bodyQuery)\n#print res\n#print(\"Got %d Hits:\" % res['hits']['total'])\nfor hit in res['hits']['hits']:\n print(\"%(Title)s\" % hit[\"_source\"])", "Query 2 con highlight de distintos campos y la forma de mostrarlo", "bodyQuery2 = {\n \"query\": {\n \"match\" : {\n \"Title\" : {\n \n \"query\" : \"wond\",\n \"operator\" : \"and\",\n \"zero_terms_query\": \"all\"\n }\n }\n },\n \"highlight\" : {\n \"fields\" : {\n \"Title\" : {},\n \"Plot\" : {\"fragment_size\" : 150, \"number_of_fragments\" : 3}\n },\n #Permite el hightlight sobre campos que no se han hecho query\n #como Plot en este ejemplo\n \"require_field_match\" : False\n }\n}\nres = es.search(index=\"prueba-index\", body=bodyQuery2)\nprint(\"Got %d Hits:\" % res['hits']['total'])\n# Uso el [0] porque solo hay 1 hit, si hubiese mas, pues habria mas campos\n# de la lista, habria que usar el for de arriba para sacar el highlight de\n# cada uno de la lista\n#print res['hits']['hits'][0]['highlight']\nfor hit in res['hits']['hits']:\n print(hit)\n\n\n\nbodyQuery2 = {\n \"query\": {\n \"bool\": {\n \"should\": [\n { \"match\": {\n \"Title\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Plot\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": 2,\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n }\n }\n },\n { \"match\": {\n \"Genres\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Director\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Writer\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Cast\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Country\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Language\": {\n \"query\": \"wonder\" + \".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n { \"match\": {\n \"Rating\": {\n \"query\": \"wonder\" + 
\".*\",\n \"fuzziness\": \"AUTO\",\n \"prefix_length\" : 1,\n \"operator\": \"and\"\n\n }\n }},\n\n ]\n }\n},\n \"highlight\": {\n \"fields\": {\n \"Title\": {},\n \"Plot\": {},\n \"Director\": {}\n },\n # Permite el hightlight sobre campos que no se han hecho query\n # como Plot en este ejemplo\n \"require_field_match\": False\n}\n}\n'''\n \"query\": {\n \"match\": {\n \"Title\": {\n \"query\": buscado,\n \"fuzziness\": \"AUTO\",\n \"boost\" : 2.0,\n \"prefix_length\" : 1,\n \"max_expansions\": 100,\n #\"minimum_should_match\" : 10,\n\n \"operator\": \"and\"\n }\n\n }\n },\n \"highlight\": {\n \"fields\": {\n \"Title\": {},\n \"Plot\": {\"fragment_size\": 300, \"number_of_fragments\": 3}\n },\n # Permite el hightlight sobre campos que no se han hecho query\n # como Plot en este ejemplo\n \"require_field_match\": False\n }\n'''\nres = es.search(index=\"prueba-index\", body= bodyQuery2)\nprint(\"Got %d Hits:\" % res['hits']['total'])\n# Uso el [0] porque solo hay 1 hit, si hubiese mas, pues habria mas campos\n# de la lista, habria que usar el for de arriba para sacar el highlight de\n# cada uno de la lista\n# print res['hits']['hits'][0]['highlight']\n\nresultado = []\nfor hit in res['hits']['hits']:\n resultado.append(hit)\nprint resultado[10]['_source']['Title']", "Borrar datos", "es.delete(index='prueba-index', doc_type='text', id=1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csaladenes/csaladenes.github.io
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
mit
[ "<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>\nValidation and Model Selection\nIn this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.", "from __future__ import print_function, division\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn')", "Validating Models\nOne of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.\nConsider the digits example we've been looking at previously. How might we check how well our model fits the data?", "from sklearn.datasets import load_digits\ndigits = load_digits()\nX = digits.data\ny = digits.target", "Let's fit a K-neighbors classifier", "from sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier(n_neighbors=1)\nknn.fit(X, y)", "Now we'll use this classifier to predict labels for the data", "y_pred = knn.predict(X)", "Finally, we can check how well our prediction did:", "print(\"{0} / {1} correct\".format(np.sum(y == y_pred), len(y)))", "It seems we have a perfect classifier!\nQuestion: what's wrong with this?\nValidation Sets\nAbove we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.\nA better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:", "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)\nX_train.shape, X_test.shape", "Now we train on the training data, and validate on the test data:", "knn = KNeighborsClassifier(n_neighbors=1)\nknn.fit(X_train, y_train)\ny_pred = knn.predict(X_test)\nprint(\"{0} / {1} correct\".format(np.sum(y_test == y_pred), len(y_test)))", "This gives us a more reliable estimate of how our model is doing.\nThe metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:", "from sklearn.metrics import accuracy_score\naccuracy_score(y_test, y_pred)", "This can also be computed directly from the model.score method:", "knn.score(X_test, y_test)", "Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:", "for n_neighbors in [1, 5, 10, 20, 30]:\n knn = KNeighborsClassifier(n_neighbors)\n knn.fit(X_train, y_train)\n print(n_neighbors, knn.score(X_test, y_test))", "We see that in this case, a small number of neighbors seems to be the best option.\nCross-Validation\nOne problem with validation sets is that you \"lose\" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. 
Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice:", "X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)\nX1.shape, X2.shape\n\nprint(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))\nprint(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))", "Thus a two-fold cross-validation gives us two estimates of the score for that parameter.\nBecause this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:", "from sklearn.model_selection import cross_val_score\ncv = cross_val_score(KNeighborsClassifier(1), X, y, cv=10)\ncv.mean()", "K-fold Cross-Validation\nHere we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.\nWe can do this by changing the cv parameter above. Let's do 10-fold cross-validation:", "cross_val_score(KNeighborsClassifier(1), X, y, cv=10)", "This gives us an even better idea of how well our model is doing.\nOverfitting, Underfitting and Model Selection\nNow that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.\nThe issues associated with validation and \ncross-validation are some of the most important\naspects of the practice of machine learning. Selecting the optimal model\nfor your data is vital, and is a piece of the problem that is not often\nappreciated by machine learning practitioners.\nOf core importance is the following question:\nIf our estimator is underperforming, how should we move forward?\n\nUse simpler or more complicated model?\nAdd more features to each observed data point?\nAdd more training samples?\n\nThe answer is often counter-intuitive. In particular, Sometimes using a\nmore complicated model will give worse results. Also, Sometimes adding\ntraining data will not improve your results. The ability to determine\nwhat steps will improve your model is what separates the successful machine\nlearning practitioners from the unsuccessful.\nIllustration of the Bias-Variance Tradeoff\nFor this section, we'll work with a simple 1D regression problem. This will help us to\neasily visualize the data and the model, and the results generalize easily to higher-dimensional\ndatasets. We'll explore a simple linear regression problem.\nThis can be accomplished within scikit-learn with the sklearn.linear_model module.\nWe'll create a simple nonlinear function that we'd like to fit", "def test_func(x, err=0.5):\n y = 10 - 1. / (x + 0.1)\n if err > 0:\n y = np.random.normal(y, err)\n return y", "Now let's create a realization of this dataset:", "def make_data(N=40, error=1.0, random_seed=1):\n # randomly sample the data\n np.random.seed(1)\n X = np.random.random(N)[:, np.newaxis]\n y = test_func(X.ravel(), error)\n \n return X, y\n\nX, y = make_data(40, error=1)\nplt.scatter(X.ravel(), y);", "Now say we want to perform a regression on this data. 
Let's use the built-in linear regression function to compute a fit:", "X_test = np.linspace(-0.1, 1.1, 500)[:, None]\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error\nmodel = LinearRegression()\nmodel.fit(X, y)\ny_test = model.predict(X_test)\n\nplt.scatter(X.ravel(), y)\nplt.plot(X_test.ravel(), y_test)\nplt.title(\"mean squared error: {0:.3g}\".format(mean_squared_error(model.predict(X), y)));", "We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.\nLet's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.\nLet's make a convenience routine to do this:", "from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import make_pipeline\n\ndef PolynomialRegression(degree=2, **kwargs):\n return make_pipeline(PolynomialFeatures(degree),\n LinearRegression(**kwargs))", "Now we'll use this to fit a quadratic curve to the data.", "model = PolynomialRegression(2)\nmodel.fit(X, y)\ny_test = model.predict(X_test)\n\nplt.scatter(X.ravel(), y)\nplt.plot(X_test.ravel(), y_test)\nplt.title(\"mean squared error: {0:.3g}\".format(mean_squared_error(model.predict(X), y)));", "This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?", "model = PolynomialRegression(30)\nmodel.fit(X, y)\ny_test = model.predict(X_test)\n\nplt.scatter(X.ravel(), y)\nplt.plot(X_test.ravel(), y_test)\nplt.title(\"mean squared error: {0:.3g}\".format(mean_squared_error(model.predict(X), y)))\nplt.ylim(-4, 14);", "When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.\nJust for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:", "from ipywidgets import interact\n\ndef plot_fit(degree=1, Npts=50):\n X, y = make_data(Npts, error=1)\n X_test = np.linspace(-0.1, 1.1, 500)[:, None]\n \n model = PolynomialRegression(degree=degree)\n model.fit(X, y)\n y_test = model.predict(X_test)\n\n plt.scatter(X.ravel(), y)\n plt.plot(X_test.ravel(), y_test)\n plt.ylim(-4, 14)\n plt.title(\"mean squared error: {0:.2f}\".format(mean_squared_error(model.predict(X), y)))\n \ninteract(plot_fit, degree=[1, 30], Npts=[2, 100]);", "Detecting Over-fitting with Validation Curves\nClearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.\nLet's do this here, again using the validation_curve utility. 
To make things more clear, we'll use a slightly larger dataset:", "X, y = make_data(120, error=1.0)\nplt.scatter(X, y);\n\nfrom sklearn.model_selection import validation_curve\n\ndef rms_error(model, X, y):\n y_pred = model.predict(X)\n return np.sqrt(np.mean((y - y_pred) ** 2))\n\ndegree = np.arange(0, 18)\nval_train, val_test = validation_curve(PolynomialRegression(), X, y,\n 'polynomialfeatures__degree', degree, cv=7,\n scoring=rms_error)", "Now let's plot the validation curves:", "def plot_with_err(x, data, **kwargs):\n mu, std = data.mean(1), data.std(1)\n lines = plt.plot(x, mu, '-', **kwargs)\n plt.fill_between(x, mu - std, mu + std, edgecolor='none',\n facecolor=lines[0].get_color(), alpha=0.2)\n\nplot_with_err(degree, val_train, label='training scores')\nplot_with_err(degree, val_test, label='validation scores')\nplt.xlabel('degree'); plt.ylabel('rms error')\nplt.legend();", "Notice the trend here, which is common for this type of plot.\n\n\nFor a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.\n\n\nAs the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.\n\n\nNote that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.\n\n\nHere's our best-fit model according to the cross-validation:", "model = PolynomialRegression(4).fit(X, y)\nplt.scatter(X, y)\nplt.plot(X_test, model.predict(X_test));", "Detecting Data Sufficiency with Learning Curves\nAs you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.\nThe idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points", "from sklearn.model_selection import learning_curve\n\ndef plot_learning_curve(degree=3):\n train_sizes = np.linspace(0.05, 1, 120)\n N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),\n X, y, train_sizes, cv=5,\n scoring=rms_error)\n plot_with_err(N_train, val_train, label='training scores')\n plot_with_err(N_train, val_test, label='validation scores')\n plt.xlabel('Training Set Size'); plt.ylabel('rms error')\n plt.ylim(0, 3)\n plt.xlim(5, 80)\n plt.legend()", "Let's see what the learning curves look like for a linear model:", "plot_learning_curve(1)", "This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. 
Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.\nAs you add more data points, the training error will never increase, and the testing error will never decrease (why do you think this is?)\nIt is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?", "plot_learning_curve(3)", "Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!\nWhat if we get even more complex?", "plot_learning_curve(10)", "For an even more complex model, we still converge, but the convergence only happens for large amounts of training data.\nSo we see the following:\n\nyou can cause the lines to converge by adding more points or by simplifying the model.\nyou can bring the convergence error down only by increasing the complexity of the model.\n\nThus these curves can give you hints about how you might improve a sub-optimal model. If the curves are already close together, you need more model complexity. If the curves are far apart, you might also improve the model by adding more data.\nTo make this more concrete, imagine some telescope data in which the results are not robust enough. You must think about whether to spend your valuable telescope time observing more objects to get a larger training set, or more attributes of each object in order to improve the model. The answer to this question has real consequences, and can be addressed using these metrics.\nSummary\nWe've gone over several useful tools for model validation\n\nThe Training Score shows how well a model fits the data it was trained on. This is not a good indication of model effectiveness\nThe Validation Score shows how well a model fits hold-out data. The most effective method is some form of cross-validation, where multiple hold-out sets are used.\nValidation Curves are a plot of validation score and training score as a function of model complexity:\nwhen the two curves are close, it indicates underfitting\nwhen the two curves are separated, it indicates overfitting\nthe \"sweet spot\" is in the middle\nLearning Curves are a plot of the validation score and training score as a function of Number of training samples\nwhen the curves are close, it indicates underfitting, and adding more data will not generally improve the estimator.\nwhen the curves are far apart, it indicates overfitting, and adding more data may increase the effectiveness of the model.\n\nThese tools are powerful means of evaluating your model on your data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DJCordhose/ai
notebooks/rl/berater-v4.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v4.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nBerater Environment v4\nChanges from v3\n\nclean up\nplot performance\nswitched back to ppo2\n\nNext Steps\n\ncreate a complete customer graph including costs of travel\nnon existing connection has hightst penalty\nper episode set certain rewards to 0 to simulate different customers per consultant\nmake sure things generalize well\n\nLinks\n\nVisualizing progress: https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb\n\nInstallation (required for colab)", "# !pip install git+https://github.com/openai/baselines >/dev/null\n# !pip install gym >/dev/null\n\nimport numpy\nimport gym\nfrom gym.utils import seeding\nfrom gym import spaces\n\ndef state_name_to_int(state):\n state_name_map = {\n 'S': 0,\n 'A': 1,\n 'B': 2,\n 'C': 3,\n }\n return state_name_map[state]\n\ndef int_to_state_name(state_as_int):\n state_map = {\n 0: 'S',\n 1: 'A',\n 2: 'B',\n 3: 'C'\n }\n return state_map[state_as_int]\n \nclass BeraterEnv(gym.Env):\n \"\"\"\n The Berater Problem\n\n Actions: \n There are 3 discrete deterministic actions:\n - 0: First Direction\n - 1: Second Direction\n - 2: Third Direction / Go home\n \"\"\"\n metadata = {'render.modes': ['ansi']}\n \n showStep = False\n showDone = True\n envEpisodeModulo = 100\n\n def __init__(self):\n self.map = {\n 'S': [('A', 100), ('B', 400), ('C', 200 )],\n 'A': [('B', 250), ('C', 400), ('S', 100 )],\n 'B': [('A', 250), ('C', 250), ('S', 400 )],\n 'C': [('A', 400), ('B', 250), ('S', 200 )]\n }\n self.action_space = spaces.Discrete(3)\n self.observation_space = spaces.Box(low=numpy.array([0,-1000,-1000,-1000,-1000,-1000,-1000]),\n high=numpy.array([3,1000,1000,1000,1000,1000,1000]),\n dtype=numpy.float32)\n self.reward_range = (-1, 1)\n\n self.totalReward = 0\n self.stepCount = 0\n self.isDone = False\n\n self.envReward = 0\n self.envEpisodeCount = 0\n self.envStepCount = 0\n\n self.reset()\n self.optimum = self.calculate_customers_reward()\n\n def seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n def step(self, actionArg):\n paths = self.map[self.state]\n action = actionArg\n destination, cost = paths[action]\n lastState = self.state\n lastObState = state_name_to_int(lastState)\n customerReward = self.customer_reward[destination]\n\n info = {\"from\": self.state, \"to\": destination}\n\n self.state = destination\n reward = (-cost + self.customer_reward[destination]) / self.optimum\n self.customer_visited(destination)\n done = destination == 'S' and self.all_customers_visited()\n\n stateAsInt = state_name_to_int(self.state)\n self.totalReward += reward\n self.stepCount += 1\n self.envReward += reward\n self.envStepCount += 1\n\n if self.showStep:\n print( \"Episode: \" + (\"%4.0f \" % self.envEpisodeCount) + \n \" Step: \" + (\"%4.0f \" % self.stepCount) + \n #lastState + ':' + str(lastObState) + ' --' + str(action) + '-> ' + self.state + ':' + str(stateAsInt) +\n lastState + ' --' + str(action) + '-> ' + self.state + \n ' R=' + (\"% 2.2f\" % reward) + ' totalR=' + (\"% 3.2f\" % self.totalReward) + \n ' cost=' + (\"%4.0f\" % cost) + ' customerR=' + (\"%4.0f\" % customerReward) + ' optimum=' + (\"%4.0f\" % self.optimum) \n )\n\n if done and not self.isDone:\n self.envEpisodeCount += 1\n if BeraterEnv.showDone:\n episodes = BeraterEnv.envEpisodeModulo\n if (self.envEpisodeCount % 
BeraterEnv.envEpisodeModulo != 0):\n episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo\n print( \"Done: \" + \n (\"episodes=%6.0f \" % self.envEpisodeCount) + \n (\"avgSteps=%6.2f \" % (self.envStepCount/episodes)) + \n (\"avgTotalReward=% 3.2f\" % (self.envReward/episodes) )\n )\n if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:\n self.envReward = 0\n self.envStepCount = 0\n\n self.isDone = done\n observation = self.getObservation(stateAsInt)\n\n return observation, reward, done, info\n\n def getObservation(self, position):\n result = numpy.array([ position, \n self.getEdgeObservation('S','A'),\n self.getEdgeObservation('S','B'),\n self.getEdgeObservation('S','C'),\n self.getEdgeObservation('A','B'),\n self.getEdgeObservation('A','C'),\n self.getEdgeObservation('B','C'),\n ],\n dtype=numpy.float32)\n return result\n\n def getEdgeObservation(self, source, target):\n reward = self.customer_reward[target] \n cost = self.getCost(source,target)\n result = reward - cost\n\n return result\n\n def getCost(self, source, target):\n paths = self.map[source]\n targetIndex=state_name_to_int(target)\n for destination, cost in paths:\n if destination == target:\n result = cost\n break\n\n return result\n\n def customer_visited(self, customer):\n self.customer_reward[customer] = 0\n\n def all_customers_visited(self):\n return self.calculate_customers_reward() == 0\n\n def calculate_customers_reward(self):\n sum = 0\n for value in self.customer_reward.values():\n sum += value\n return sum\n\n def reset(self):\n self.totalReward = 0\n self.stepCount = 0\n self.isDone = False\n reward_per_customer = 1000\n self.customer_reward = {\n 'S': 0,\n 'A': reward_per_customer,\n 'B': reward_per_customer,\n 'C': reward_per_customer,\n }\n\n self.state = 'S'\n return self.getObservation(state_name_to_int(self.state))", "Try out Environment", "BeraterEnv.showStep = True\nBeraterEnv.showDone = True\n\nenv = BeraterEnv()\nprint(env)\nobservation = env.reset()\nprint(observation)\n\nfor t in range(1000):\n action = env.action_space.sample()\n observation, reward, done, info = env.step(action)\n if done:\n print(\"Episode finished after {} timesteps\".format(t+1))\n break\nenv.close()\nprint(observation)", "Train model\n\n0.73 would be perfect total reward", "!rm -r logs\n!mkdir logs\n!mkdir logs/berater\n\n# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py\n# log_dir = logger.get_dir()\nlog_dir = '/content/logs/berater/'\n\nimport gym\nfrom baselines import deepq\nfrom baselines import bench\nfrom baselines import logger\n\nfrom baselines.common.vec_env.dummy_vec_env import DummyVecEnv\nfrom baselines.common.vec_env.vec_monitor import VecMonitor\nfrom baselines.ppo2 import ppo2\n\nBeraterEnv.showStep = False\nBeraterEnv.showDone = False\n\nenv = BeraterEnv()\n\nwrapped_env = DummyVecEnv([lambda: BeraterEnv()])\nmonitored_env = VecMonitor(wrapped_env, log_dir)\nmodel = ppo2.learn(network='mlp', env=monitored_env, total_timesteps=50000)\n\n# monitored_env = bench.Monitor(env, log_dir)\n# https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables\n# %time model = deepq.learn(\\\n# monitored_env,\\\n# seed=42,\\\n# network='mlp',\\\n# lr=1e-3,\\\n# gamma=0.99,\\\n# total_timesteps=30000,\\\n# buffer_size=50000,\\\n# exploration_fraction=0.5,\\\n# exploration_final_eps=0.02,\\\n# print_freq=1000)\n\nmodel.save('berater-ppo-v4.pkl')\nmonitored_env.close()", "Visualizing Results\nhttps://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb", "!ls -l 
$log_dir\n\nfrom baselines.common import plot_util as pu\nresults = pu.load_results(log_dir)\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nr = results[0]\n# plt.ylim(-1, 1)\n# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)\n\nplt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))", "Enjoy model", "import numpy as np \n\nobservation = env.reset()\nstate = np.zeros((1, 2*128))\ndones = np.zeros((1))\n\nBeraterEnv.showStep = True\nBeraterEnv.showDone = False\n\nfor t in range(1000):\n actions, _, state, _ = model.step(observation, S=state, M=dones)\n observation, reward, done, info = env.step(actions[0])\n if done:\n print(\"Episode finished after {} timesteps\".format(t+1))\n break\nenv.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wuafeing/Python3-Tutorial
02 strings and text/02.08 regexp for multiline partterns.ipynb
gpl-3.0
[ "Previous\n2.8 Matching Multiline Patterns\nProblem\nYou are trying to match a large block of text with a regular expression, and you need the match to span multiple lines.\nSolution\nThis problem typically arises when you use the dot (.) to match any character and forget that the dot (.) does not match newlines. For example, suppose you want to match C-style comments:", "import re\ncomment = re.compile(r\"/\\*(.*?)\\*/\")\ntext1 = '/* this is a comment */'\ntext2 = '''/* this is a\nmultiline comment */\n'''\ncomment.findall(text1)\n\ncomment.findall(text2)", "To fix the problem, you can modify the pattern string to add support for newlines. For example:", "comment = re.compile(r'/\\*((?:.|\\n)*?)\\*/')\ncomment.findall(text2)", "In this pattern, (?:.|\\n) specifies a non-capture group (that is, a group that is used only for matching and cannot be captured separately or numbered).\nDiscussion\nThe re.compile() function accepts a flag argument called re.DOTALL, which is very useful here. It makes the dot (.) in a regular expression match any character, including newlines. For example:", "comment = re.compile(r'/\\*(.*?)\\*/', re.DOTALL)\ncomment.findall(text2)", "Using the re.DOTALL flag works fine for simple cases, but it can cause problems if the pattern is very complicated or if you are combining several patterns together to build a string tokenizer (described in detail in section 2.18). If given the choice, it is usually better to define your own regular expression pattern so that it works correctly without the extra flag.\nNext" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MarsUniversity/ece387
website/block_1_basics/lsn3/lsn3.ipynb
mit
[ "Python\nKevin J. Walchko\ncreated 16 Nov 2017\n\nHere we will use python as our programming language. Python, like any other language, is really vast and complex. We will just cover the basics we need.\nObjectives\n\nUnderstand\ngeneral syntax\nfor/while loops\nif/elif/else\nfunctions\ndata types: tuples, list, strings, etc\nintro to classes\n\nReferences\n\nPython tutorialspoint\nPython classes/objects\n\nSetup", "from __future__ import print_function\nfrom __future__ import division\nimport numpy as np", "Python\nPython is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. An interpreted language, Python has a design philosophy which emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax which allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. The language provides constructs intended to enable writing clear programs on both a small and large scale.\n<img src=\"rossum.png\" width=\"300px\">\nPython's Benevolent Dictator For Life!\n“Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered.”\n - Guido van Rossum\n\nWhy Use It?\n\nSimple and easy to use and very efficient\n\nWhat you can do in a 100 lines of python could take you a 1000 in C++ ... this is the reason many startups (e.g., Instagram) use python and keep using it\n\n\n90% of robotics uses either C++ or python\n\nAlthough C++ is faster in run-time, development (write, compile, link, etc) is much slower due to complex syntax, memory management, pointers (they can be fun!) and difficulty in debugging any sort of real program\nJava is dying (or dead)\nMicrosoft is still struggling to get people outside of the Windows OS to embrace C#\nApple's swift is too new and constantly making major changes ... maybe some day\n\nWho Uses It?\n\nIndustrial Light & Magic (Star Wars people): used in post production scripting to tie together outputs from other C++ programs\nEve-Online (big MMORPG game): used for both client and server aspects of the game\nInstagram, Spotify, SurveyMonkey, The Onion, Bitbucket, Pinterest, and more use Django (python website template framework) to create/serve millions of users\nDropbox, Paypal, Walmart and Google (YouTube)\nNote: Guido van Rossum worked for Google and now works for Dropbox\n\nRunning Programs on UNIX (or your robot)\n\nCall python program via the python interpreter: python my_program.py\nThis is kind of the stupid way\nMake a python file directly executable \nAdd a shebang (it's a Unix thing) to the top of your program: #!/usr/bin/env python\nMake the file executable: chmod a+x my_program.py\nInvoke file from Unix command line: ./my_program.py\n\nEnough to Understand Code (Short Version)\n\nIndentation matters for functions, loops, classes, etc\nFirst assignment to a variable creates it\nVariable types (int, float, etc) don't need to be declared. 
\nAssignment is = and comparison is ==\nFor numbers + - * % are as expected\nmodulas (%) returns the remainder: 5%3 => 2\n\n\nLogical operators are words (and, or, not) not symbols\nWe are using __future__ for python 2 / 3 compatibility\nThe basic printing command is print(โ€˜helloโ€™)\nDivision works like expected:\nFloat division: 5/2 = 2.5\nInteger division: 5//2 = 2\n\n\nStart comments with #, rest of line is ignored\nCan include a โ€œdocumentation stringโ€ as the first line of a new function or class you define\n\n```python\ndef my_function(n):\n \"\"\"\n my_function(n) takes a positive integer and returns n + 5\n \"\"\"\n # assert ... remember this from ECE281?\n assert n>0, \"crap, n is 0 or negative!\"\nreturn n+5\n\n```\nPrinting\nAgain, to have Python 3 compatability and help you in the future, we are going to print things using the print function. Python 2 by default uses a print statement. Also, it is good form to use the newer format() function on strings rather than the old C style %s for a string or %d for an integer. There are lots of cool things you can do with format() but we won't dive too far into it ... just the basics.\nWARNING: Your homework with Code Academy uses the old way to print, just do it for that and get through it. For this class we are doing it this way!", "from __future__ import division # fix division\nfrom __future__ import print_function # use print function\n\nprint('hello world') # single quotes\nprint(\"hello world\") # double quotes\nprint('3/4 is', 3/4) # this prints 0.75\nprint('I am {} ... for {} yrs I have been training Jedhi'.format(\"Yoda\", 853))\nprint('float: {:5.1f}'.format(3.1424567)) # prints float: 3.1", "Unicode\nUnicode sucks in python 2.7, but if you want to use it:\n\nalphabets\narrows\nemoji", "print(u'\\u21e6 \\u21e7 \\u21e8 \\u21e9')\nprint(u'\\u2620')\n\n# this is a dictionary, we will talk about it next ... sorry for the out of order\nuni = {\n 'left': u'\\u21e6',\n 'up': u'\\u21e7',\n 'right': u'\\u21e8',\n 'down': u'\\u21e9',\n}\nprint(u'\\nYou must go {}'.format(uni['up'])) # notice all strings have u on the front", "Data Types\nPython isn't typed, so you don't really need to keep track of variables and delare them as ints, floats, doubles, unsigned, etc. There are a few places where this isn't true, but we will deal with those as we encounter them.", "# bool\nz = True # or False\n\n# integers (default)\nz = 3\n\n# floats\nz = 3.124\nz = 5/2\nprint('z =', z)\n\n# dictionary or hash tables\nbob = {'a': 5, 'b': 6}\nprint('bob[\"a\"]:', bob['a'])\n\n# you can assign a new key/values pair\nbob['c'] = 'this is a string!!'\nprint(bob)\nprint('len(bob) =', len(bob))\n\n# you can also access what keys are in a dict\nprint('bob.keys() =', bob.keys())\n\n# let's get crazy and do different types and have a key that is an int\nbob = {'a': True, 11: [1,2,3]}\nprint('bob = ', bob)\nprint('bob[11] = ', bob[11]) # don't do this, it is confusing!!\n\n# arrays or lists are mutable (changable)\n# the first element is 0 like all good programming languages\nbob = [1,2,3,4,5]\nbob[2] = 'tom'\nprint('bob list', bob)\nprint('bob list[3]:', bob[3]) # remember it is zero indexed\n\n# or ... 
tuple will do this too\nbob = [1]*5\nprint('bob one-liner version 2:', bob)\nprint('len(bob) =', len(bob))\n\n# strings\nz = 'hello world!!'\nz = 'hello' + ' world' # concatinate\nz = 'hhhello world!@#$'[2:13] # strings are just an array of letters\nprint('my crazy string:', z)\nprint('{}: {} {:.2f}'.format('formatting', 3.1234, 6.6666))\nprint('len(z) =', len(z))\n\n# tuples are immutable (not changable which makes them faster/smaller)\nbob = (1,2,3,4)\nprint('bob tuple', bob)\nprint('bob tuple*3', bob*3) # repeats tuple 3x\nprint('len(bob) =', len(bob))\n\n# since tuples are immutable, this will throw an error\nbob[1] = 'tom'\n\n# assign multiple variables at once\nbob = (4,5,)\nx,y = bob\nprint(x,y)\n\n# wait, I changed by mind ... easy to swap\nx,y = y,x\nprint(x,y)", "Flow Control\nLogic Operators\nFlow control is generally done via some math operator or boolean logic operator.\n\n\nFor Loop", "# range(start, stop, step) # this only works for integer values\nrange(3,10) # jupyter cell will always print the last thing\n\n# iterates from start (default 0) to less than the highest number\nfor i in range(5):\n print(i)\n\n# you can also create simple arrays like this:\nbob = [2*x+3 for x in range(4)]\nprint('bob one-liner:', bob)\n\nfor i in range(2,8,2): # start=2, stop<8, step=2, so notice the last value is 6 NOT 8\n print(i)\n\n# I have a list of things ... maybe images or something else.\n# A for loop can iterate through the list. Here, each time \n# through, i is set to the next letter in my array 'dfec'\nthings = ['d', 'e', 'f', 'c']\nfor ltr in things:\n print(ltr)\n\n# enumerate()\n# sometimes you need a counter in your for loop, use enumerate\nthings = ['d', 'e', 'f', 3.14] # LOOK! the last element is a float not a letter ... that's OK\nfor i, ltr in enumerate(things):\n print('things[{}]: {}'.format(i, ltr))\n\n# zip()\n# somethimes you have a couple arrays that you want to work on at the same time, use zip\n# to combine them together\n# NOTE: all arrays have to have the SAME LENGTH\na = ['bob', 'tom', 'sally']\nb = ['good', 'evil', 'nice']\nc = [10, 20, 15]\n\nfor name, age, status in zip(a, c, b): # notice I mixed up a, b, c\n status = status.upper()\n name = name[0].upper() + name[1:] # strings are immutable\n print('{} is {} yrs old and totally {}'.format(name, age, status))", "if / elif / else", "# classic if/then statements work the same as other languages.\n# if the statement is True, then do something, if it is False, then skip over it.\nif False:\n print('should not get here')\nelif True:\n print('this should print')\nelse:\n print('this is the default if all else fails')\n\nn = 5\nn = 3 if n==1 else n-1 # one line if/then statement\nprint(n)", "While", "x = 3\nwhile True: # while loop runs while value is True\n if not x: # I will enter this if statement when x = False or 0\n break # breaks me out of a loop\n else:\n print(x)\n x -= 1\n ", "Exception Handling\nWhen you write code you should think about how you could break it, then design it so you can't. Now, you don't necessary need to write bullet proof code ... that takes a lot of time (and time is money), but you should make an effort to reduce your debug time.\nA list of Python 2.7 exceptions is here. KeyboardInterrupt: is a common one when a user pressed ctl-C to quit the program. 
Some others:\n```\nBaseException\n +-- SystemExit\n +-- KeyboardInterrupt\n +-- GeneratorExit\n +-- Exception\n      +-- StopIteration\n      +-- StandardError\n      |    +-- BufferError\n      |    +-- ArithmeticError\n      |    |    +-- FloatingPointError\n      |    |    +-- OverflowError\n      |    |    +-- ZeroDivisionError\n      |    +-- AssertionError\n      |    +-- AttributeError\n      |    +-- EnvironmentError\n      |    |    +-- IOError\n      |    |    +-- OSError\n      |    |         +-- WindowsError (Windows)\n      |    |         +-- VMSError (VMS)\n      |    +-- EOFError\n      |    +-- ImportError\n      |    +-- LookupError\n      |    |    +-- IndexError\n      |    |    +-- KeyError\n      |    +-- MemoryError\n      |    +-- NameError\n      |    |    +-- UnboundLocalError\n      |    +-- ReferenceError\n      |    +-- RuntimeError\n      |    |    +-- NotImplementedError\n      |    +-- SyntaxError\n      |    |    +-- IndentationError\n      |    |         +-- TabError\n      |    +-- SystemError\n      |    +-- TypeError\n      |    +-- ValueError\n      |         +-- UnicodeError\n      |              +-- UnicodeDecodeError\n      |              +-- UnicodeEncodeError\n      |              +-- UnicodeTranslateError\n      +-- Warning\n           +-- DeprecationWarning\n           +-- PendingDeprecationWarning\n           +-- RuntimeWarning\n           +-- SyntaxWarning\n           +-- UserWarning\n           +-- FutureWarning\n           +-- ImportWarning\n           +-- UnicodeWarning\n           +-- BytesWarning\n```", "# exception handling ... use in your code in smart places\ntry:\n    a = (1,2,) # tuple ... notice the extra comma after the 2\n    a[0] = 1 # this won't work!\nexcept: # this catches any exception thrown\n    print('you idiot ... you cannot modify a tuple!!')\n\n# error\n5/0\n\ntry:\n    5/0\nexcept ZeroDivisionError as e:\n    print(e)\n#   raise # this raises the error to the next\n#         # level so I don't have to handle it here\n\ntry:\n    5/0\nexcept ZeroDivisionError as e:\n    print(e)\n    raise # this raises the error to the next level (in this case, the Jupyter GUI\n          # handles it) so I don't have to handle it here", "When would you want to use raise?\nWhy not always handle the error here?\nWhat is different when the raise command is used?", "# Honestly, I generally just use Exception, from which most other exceptions\n# are derived, but I am lazy and it works fine for what I do\ntry:\n    5/0\nexcept Exception as e:\n    print(e)\n\n# all is right with the world ... these will work, nothing will print\nassert True\nassert 3 > 1\n\n# this will fail ... and we can add a message if we want to\nassert 3 < 1, 'hello ... this should fail'", "Libraries\nWe will need to import math to have access to trig and other functions. There will be other libraries, like numpy and cv2, that you will need too.", "import math\n\nprint('messy', math.cos(math.pi/4))\n\n# that looks clumsy ... let's do this instead\nfrom math import cos, pi\nprint('simpler math:', cos(pi/4))\n\n# or we just want to shorten the name to reduce typing ... good programmers are lazy!\nimport numpy as np\n\n# well, what is in the math library that I might want to use????\ndir(math)\n\n# what is tanh???\nhelp(math.tanh)\n\nprint(math.__doc__) # print the doc string for the library ... what does it do?", "Functions\nThere isn't too much that is special about Python functions, just the format.", "def my_cool_function(x):\n    \"\"\"\n    This is my cool function which takes an argument x\n    and returns a value\n    \"\"\"\n    return 2*x/3\n\nmy_cool_function(6) # 2*6/3 = 4.0 (remember, / is float division now)", "Classes and Object Oriented Programming (OOP)\nOK, we don't have time to really teach you how to do this. It would be better if your real programming classes did this. So we are just going to kludge this together here, because these could be useful in this class. In fact, I (and 99% of the world) generally do OOP.\nClasses are awesome for a few reasons. 
First, they help you reuse code instead of duplicating it all over your program. Classes will save your life when you realize you want to change a function, because you only have to change it in one spot instead of 10 different spots with slightly different code. You can also group related functions together because they make sense together. Another important part of classes is that they allow you to create more flexible functions.\nWe are going to keep it simple and just show you the basics of doing OOP in Python. This will be a little familiar from ECE382 with structs (sort of).", "class ClassName(object):\n    \"\"\"\n    So this is my cool class\n    \"\"\"\n    def __init__(self, x):\n        \"\"\"\n        This is called a constructor in OOP. When I make an object\n        this function is called.\n        self = contains all of the object's values\n        x = an argument to pass something into the constructor\n        \"\"\"\n        self.x = x\n        print('> Constructor called', x)\n\n    def my_cool_function(self, y):\n        \"\"\"\n        This is called a method (function) that works on\n        the class. It always needs self to access class\n        values, but can also have as many arguments as you want.\n        I only have 1 arg called y\n        \"\"\"\n        self.x = y\n        print('> called function: {}'.format(self.x))\n\n    def __del__(self):\n        \"\"\"\n        Destructor. This is called when the object goes out of scope\n        and is destroyed. It takes NO arguments other than self.\n\n        Note, this is hard to see in jupyter, because it will probably\n        get called when the program (notebook) ends (shuts down)\n        \"\"\"\n        pass\n\na = ClassName('bob')\na.my_cool_function(3.14)\n\nb = ClassName(28)\nb.my_cool_function('tom')\n\nfor i in range(3):\n    a = ClassName('bob')", "There are tons of things you can do with objects. Here is one example. Say we have a Ball class and for some reason we want to be able to add balls together.", "class Ball(object):\n    def __init__(self, color, radius):\n        # this ball always has the color and radius given below\n        self.radius = radius\n        self.color = color\n\n    def __str__(self):\n        \"\"\"\n        When something tries to turn this object into a string,\n        this function gets called\n        \"\"\"\n        s = 'Ball {}, radius: {:.1f}'.format(self.color, self.radius)\n        return s\n\n    def __add__(self, a):\n        c = Ball('gray', a.radius + self.radius)\n        return c\n\nr = Ball('red', 3)\ng = Ball('green', radius=4)\nb = Ball(radius=5, color='blue')\n\nprint(r)\nprint(g)\nprint(b)\nprint('total size:', r.radius+b.radius+g.radius)\nprint('Add method:', r+b+g)\n\n# the base class of all objects in Python should be\n# object. It comes with these methods already defined.\ndir(object)", "Now you can have classes with functions that make intuitive sense! If I want to calculate the area of a shape, I call the function area(). I don't need separate functions areaCircle() and areaSquare(). Or no, maybe the author named the function area_circle() or AreaCircle() or areacircle() or ...\n```python\nfrom math import pi\nclass Circle(object):\n    def __init__(self, radius):\n        self.radius = radius\n    def area(self):\n        return pi*self.radius**2\nclass Square(object):\n    def __init__(self, length, width):\n        self.length = length\n        self.width = width\n    def area(self):\n        return self.length*self.width\n```\nExercises\n\nPlease run this notebook and change numbers/variables/etc. so you understand how they work ... 
your grade depends on your understanding!\n\nQuestions\n\nWhat is the difference between / and //?\nHow do you use the .format() command on a string?\nWhat does mutable/immutable mean for data types?\nWhat is a hash table and how do you add new values and retrieve (or access) values in it?\nOn one line, how would I write a for loop that returns a new array of [2, 4, 8, 16]?\nWrite a function that takes a value between [-5, 5] and returns the value divided by 2. Make sure to check that the input meets the bounds and raise an error if it does not.\nWrite a class for a Circle. Have the constructor take a radius value and, if it is not given, have a default radius of 1.0. Also include 2 methods: area() and circumference(). Make sure it inherits from object (the base class).\n\n\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\">Creative Commons Attribution-ShareAlike 4.0 International License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]