WEBVTT
00:00:00.040 --> 00:00:06.600
started in a moment uh since it's now uh
00:00:03.959 --> 00:00:08.839
12:30 are there any questions before we
00:00:06.600 --> 00:00:08.839
get
00:00:11.840 --> 00:00:17.240
started okay I don't see I don't see any
00:00:14.679 --> 00:00:18.640
so I guess we can uh Jump Right In this
00:00:17.240 --> 00:00:22.080
time I'll be talking about sequence
00:00:18.640 --> 00:00:24.560
modeling in NLP first I'm going to be
00:00:22.080 --> 00:00:26.359
talking about uh why why we do sequence
00:00:24.560 --> 00:00:29.160
modeling what varieties of sequence
00:00:26.359 --> 00:00:31.199
modeling exist and then after that I'm
00:00:29.160 --> 00:00:34.120
going to talk about kind of three basic
00:00:31.199 --> 00:00:36.320
techniques for sequence modeling namely
00:00:34.120 --> 00:00:38.879
recurrent neural networks convolutional
00:00:36.320 --> 00:00:38.879
networks and
00:00:39.360 --> 00:00:44.079
attention so when we talk about sequence
00:00:41.920 --> 00:00:46.680
modeling in NLP I've kind of already
00:00:44.079 --> 00:00:50.039
made the motivation for doing this but
00:00:46.680 --> 00:00:51.920
basically NLP is full of sequential data
00:00:50.039 --> 00:00:56.120
and this can be everything from words
00:00:51.920 --> 00:00:59.399
in sentences or tokens in sentences to
00:00:56.120 --> 00:01:01.920
uh characters in words to sentences in
00:00:59.399 --> 00:01:04.640
a discourse or a paragraph or a
00:01:01.920 --> 00:01:06.640
document um it can also be multiple
00:01:04.640 --> 00:01:08.840
documents in time multiple social media
00:01:06.640 --> 00:01:12.320
posts whatever else you want there's
00:01:08.840 --> 00:01:15.159
just you know sequences all over
00:01:12.320 --> 00:01:16.640
NLP and I mentioned this uh last time
00:01:15.159 --> 00:01:19.240
also but there's also long-distance
00:01:16.640 --> 00:01:20.840
dependencies in language so uh just to
00:01:19.240 --> 00:01:23.720
give an example there's agreement in
00:01:20.840 --> 00:01:25.799
number uh gender Etc so in order to
00:01:23.720 --> 00:01:28.439
create a fluent language model you'll
00:01:25.799 --> 00:01:30.320
have to handle this agreement so if we
00:01:28.439 --> 00:01:32.920
you say he does not have very much
00:01:30.320 --> 00:01:35.280
confidence in uh it would have to be
00:01:32.920 --> 00:01:36.680
himself but if you say she does not have
00:01:35.280 --> 00:01:39.360
very much confidence in it would have to
00:01:36.680 --> 00:01:41.360
be herself and this is this gender
00:01:39.360 --> 00:01:44.159
agreement is not super frequent in
00:01:41.360 --> 00:01:47.600
English but it's very frequent in other
00:01:44.159 --> 00:01:50.119
languages like French or uh you know
00:01:47.600 --> 00:01:51.759
most languages in the world in some uh
00:01:50.119 --> 00:01:53.799
way or
00:01:51.759 --> 00:01:55.320
another then separately from that you
00:01:53.799 --> 00:01:58.520
also have things like selectional
00:01:55.320 --> 00:02:00.119
preferences um like the Reign has lasted
00:01:58.520 --> 00:02:01.799
as long as the life of the queen and the
00:02:00.119 --> 00:02:04.439
rain has lasted as long as the life of
00:02:01.799 --> 00:02:07.360
the clouds uh in American English the
00:02:04.439 --> 00:02:09.119
only way you could know uh which word
00:02:07.360 --> 00:02:13.520
came beforehand if you were doing speech
00:02:09.119 --> 00:02:17.400
recognition is if you uh like had that
00:02:13.520 --> 00:02:20.319
kind of semantic uh idea of uh that
00:02:17.400 --> 00:02:22.040
these agree with each other in some way
00:02:20.319 --> 00:02:23.920
and there's also factual knowledge
00:02:22.040 --> 00:02:27.680
there's all kinds of other things uh
00:02:23.920 --> 00:02:27.680
that you need to carry over long
00:02:28.319 --> 00:02:33.800
contexts um these can be
00:02:30.840 --> 00:02:36.360
complicated so this is a a nice example
00:02:33.800 --> 00:02:39.400
so if we try to figure out what it
00:02:36.360 --> 00:02:41.239
refers to here uh the trophy would not
00:02:39.400 --> 00:02:45.680
fit in the brown suitcase because it was
00:02:41.239 --> 00:02:45.680
too big what is it
00:02:46.680 --> 00:02:51.360
here the trophy yeah and then what about
00:02:49.879 --> 00:02:53.120
uh the trophy would not fit in the brown
00:02:51.360 --> 00:02:57.080
suitcase because it was too
00:02:53.120 --> 00:02:58.680
small suitcase right um does anyone
00:02:57.080 --> 00:03:01.760
know what the name of something like
00:02:58.680 --> 00:03:01.760
this is
00:03:03.599 --> 00:03:07.840
has anyone heard of this challenge uh
00:03:09.280 --> 00:03:14.840
before no one okay um this this is
00:03:12.239 --> 00:03:17.200
called the Winograd schema challenge or
00:03:14.840 --> 00:03:22.760
these are called Winograd schemas and
00:03:17.200 --> 00:03:26.319
basically Winograd schemas are a type
00:03:22.760 --> 00:03:29.280
of they're type of kind of linguistic
00:03:26.319 --> 00:03:30.439
challenge where you create two paired uh
00:03:29.280 --> 00:03:33.799
examples
00:03:30.439 --> 00:03:37.360
that you vary in very minimal ways where
00:03:33.799 --> 00:03:40.599
the answer differs between the two um
00:03:37.360 --> 00:03:42.000
and so uh there's lots of other examples
00:03:40.599 --> 00:03:44.080
about how you can create these things
00:03:42.000 --> 00:03:45.720
and they're good for testing uh whether
00:03:44.080 --> 00:03:48.239
language models are able to do things
00:03:45.720 --> 00:03:50.920
because they're able to uh kind of
00:03:48.239 --> 00:03:54.239
control for the fact that you know like
00:03:50.920 --> 00:04:01.079
the answer might be
00:03:54.239 --> 00:04:03.000
um the answer might be very uh like
00:04:01.079 --> 00:04:04.560
more frequent or less frequent and so
00:04:03.000 --> 00:04:07.720
the language model could just pick that
00:04:04.560 --> 00:04:11.040
so another example is we uh we came up
00:04:07.720 --> 00:04:12.239
with a benchmark of figurative language
00:04:11.040 --> 00:04:14.239
where we tried to figure out whether
00:04:12.239 --> 00:04:17.160
language models would be able
00:04:14.239 --> 00:04:19.720
to interpret figurative language
00:04:17.160 --> 00:04:22.720
and I actually have the multilingual uh
00:04:19.720 --> 00:04:24.160
version on the suggested projects uh on
00:04:22.720 --> 00:04:26.240
the Piazza oh yeah that's one
00:04:24.160 --> 00:04:28.360
announcement I posted a big list of
00:04:26.240 --> 00:04:30.080
suggested projects on Piazza I think a lot
00:04:28.360 --> 00:04:31.639
of people saw it you don't have to
00:04:30.080 --> 00:04:33.160
follow these but if you're interested in
00:04:31.639 --> 00:04:34.440
them feel free to talk to the contacts
00:04:33.160 --> 00:04:38.880
and we can give you more information
00:04:34.440 --> 00:04:41.039
about them um but anyway uh so in this
00:04:38.880 --> 00:04:43.080
data set what we did is we came up with
00:04:41.039 --> 00:04:46.039
some figurative language like this movie
00:04:43.080 --> 00:04:47.880
had the depth of a wading pool and
00:04:46.039 --> 00:04:50.919
this movie had the depth of a diving
00:04:47.880 --> 00:04:54.120
pool and so then after that you would
00:04:50.919 --> 00:04:56.199
have two choices this movie was uh this
00:04:54.120 --> 00:04:58.400
movie was very deep and interesting this
00:04:56.199 --> 00:05:01.000
movie was not very deep and interesting
00:04:58.400 --> 00:05:02.800
and so you have these like like two
00:05:01.000 --> 00:05:04.759
pairs of questions and answers and you
00:05:02.800 --> 00:05:06.240
need to decide between them and
00:05:04.759 --> 00:05:07.759
depending on what the input is the
00:05:06.240 --> 00:05:10.639
output will change and so that's a good
00:05:07.759 --> 00:05:11.919
way to control for um and test whether
00:05:10.639 --> 00:05:13.600
language models really understand
00:05:11.919 --> 00:05:15.080
something so if you're interested in
00:05:13.600 --> 00:05:17.199
benchmarking or other things like that
00:05:15.080 --> 00:05:19.160
it's a good paradigm to think about
00:05:17.199 --> 00:05:22.759
anyway that's a little bit of an aside
00:05:19.160 --> 00:05:25.960
um so now I'd like to go on to types of
00:05:22.759 --> 00:05:28.360
sequential prediction problems
00:05:25.960 --> 00:05:30.880
and types of prediction problems in
00:05:28.360 --> 00:05:32.560
general uh binary and multiclass we
00:05:30.880 --> 00:05:35.240
already talked about that's when we're
00:05:32.560 --> 00:05:37.199
doing for example uh classification
00:05:35.240 --> 00:05:38.960
between two classes or classification
00:05:37.199 --> 00:05:41.280
between multiple
00:05:38.960 --> 00:05:42.880
classes but there's also another variety
00:05:41.280 --> 00:05:45.120
of prediction called structured
00:05:42.880 --> 00:05:47.120
prediction and structured prediction is
00:05:45.120 --> 00:05:49.639
when you have a very large number of
00:05:47.120 --> 00:05:53.680
labels it's not you know a finite number
00:05:49.639 --> 00:05:56.560
of labels and uh so that would be
00:05:53.680 --> 00:05:58.160
something like uh if you take in an
00:05:56.560 --> 00:06:00.680
input and you want to predict all of the
00:05:58.160 --> 00:06:04.000
parts of speech of all the words in the
00:06:00.680 --> 00:06:06.840
input and if you had like 50 parts of
00:06:04.000 --> 00:06:09.039
speech the number of labels that you
00:06:06.840 --> 00:06:11.360
would have for each sentence
00:06:09.039 --> 00:06:15.280
is any any
00:06:11.360 --> 00:06:17.919
ideas 50 50 parts of speech and like
00:06:15.280 --> 00:06:17.919
let's say four
00:06:19.880 --> 00:06:31.400
words 60 um it's every combination
00:06:26.039 --> 00:06:31.400
of parts of speech for every word so
00:06:32.039 --> 00:06:38.440
uh close but maybe the opposite it's uh
00:06:35.520 --> 00:06:40.720
50 to the four because you have 50
00:06:38.440 --> 00:06:42.400
choices here 50 choices here so it's a
00:06:40.720 --> 00:06:45.599
cross product of all of the
00:06:42.400 --> 00:06:48.560
choices um and so that becomes very
00:06:45.599 --> 00:06:50.280
quickly untenable um let's say you're
00:06:48.560 --> 00:06:53.120
talking about translation from English
00:06:50.280 --> 00:06:54.800
to Japanese uh now you don't really even
00:06:53.120 --> 00:06:57.240
have a finite number of choices because
00:06:54.800 --> 00:06:58.440
the length could be even longer uh the
00:06:57.240 --> 00:07:01.400
length of the output could be even
00:06:58.440 --> 00:07:01.400
longer than the
00:07:04.840 --> 00:07:08.879
C
00:07:06.520 --> 00:07:11.319
rules
00:07:08.879 --> 00:07:14.879
together makes it
00:07:11.319 --> 00:07:17.400
fewer yeah so really good question um so
00:07:14.879 --> 00:07:19.319
the question or the the question or
00:07:17.400 --> 00:07:21.160
comment was if there are certain rules
00:07:19.319 --> 00:07:22.759
about one thing not ever being able to
00:07:21.160 --> 00:07:25.080
follow the other you can actually reduce
00:07:22.759 --> 00:07:28.319
the number um you could do that with a
00:07:25.080 --> 00:07:30.280
hard constraint and make things uh kind
00:07:28.319 --> 00:07:32.520
of
00:07:30.280 --> 00:07:34.240
and like actually cut off things that
00:07:32.520 --> 00:07:36.280
you know have zero probability but in
00:07:34.240 --> 00:07:38.680
reality what people do is they just trim
00:07:36.280 --> 00:07:41.319
hypotheses that have low probability and
00:07:38.680 --> 00:07:43.319
so that has kind of the same effect like
00:07:41.319 --> 00:07:47.599
you almost never see a determiner after
00:07:43.319 --> 00:07:49.720
a determiner in English um and so yeah
00:07:47.599 --> 00:07:52.400
we're going to talk about uh algorithms
00:07:49.720 --> 00:07:53.960
to do this in the Generation section so
00:07:52.400 --> 00:07:57.240
we could talk more about that
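
To make the earlier point about the size of the label space concrete, here is a minimal sketch in Python (the 50-tag, 4-word numbers are the ones from the discussion; the tiny tag set and sentence below are made-up illustrations):

    from itertools import product

    # 50 possible POS tags, a 4-word sentence: every word independently takes one
    # of 50 tags, so the label space is the cross product of the per-word choices.
    num_tags, sentence_length = 50, 4
    print(num_tags ** sentence_length)  # 50**4 = 6,250,000 possible tag sequences

    # The same idea, enumerated explicitly for a tiny made-up tag set and sentence.
    tiny_tags = ["NOUN", "VERB", "DET"]
    tiny_sentence = ["the", "dog", "barks"]
    all_labelings = list(product(tiny_tags, repeat=len(tiny_sentence)))
    print(len(all_labelings))  # 3**3 = 27

This is why structured prediction is usually factored into per-element predictions rather than scored over every whole-sequence labeling at once.
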
00:07:53.960 --> 00:08:00.080
um but anyway the basic idea behind
00:07:57.240 --> 00:08:02.400
structured prediction is that you don't
00:08:00.080 --> 00:08:04.280
like language modeling like I said last
00:08:02.400 --> 00:08:06.240
time you don't predict all of the the
00:08:04.280 --> 00:08:08.319
whole sequence at once you usually
00:08:06.240 --> 00:08:10.440
predict each element at once and then
00:08:08.319 --> 00:08:12.080
somehow calculate the conditional
00:08:10.440 --> 00:08:13.720
probability of the next element given
00:08:12.080 --> 00:08:15.879
the the current element or other things
00:08:13.720 --> 00:08:18.840
like that so that's how we solve
00:08:15.879 --> 00:08:18.840
structured prediction
00:08:18.919 --> 00:08:22.960
problems another thing is unconditioned
00:08:21.319 --> 00:08:25.120
versus conditioned predictions so
00:08:22.960 --> 00:08:28.520
unconditioned prediction we don't do this
00:08:25.120 --> 00:08:31.240
very often um but basically uh we
00:08:28.520 --> 00:08:34.039
predict the probability of a a single
00:08:31.240 --> 00:08:35.880
variable or generate a single variable
00:08:34.039 --> 00:08:37.599
and conditioned prediction is
00:08:35.880 --> 00:08:41.000
predicting the probability of an output
00:08:37.599 --> 00:08:45.120
variable given an input like
00:08:41.000 --> 00:08:48.040
this so um for unconditioned prediction
00:08:45.120 --> 00:08:50.000
um the way we can do this is left to
00:08:48.040 --> 00:08:51.399
right autoregressive models and these are
00:08:50.000 --> 00:08:52.600
the ones that I talked about last time
00:08:51.399 --> 00:08:56.360
when I was talking about how we build
00:08:52.600 --> 00:08:59.000
language models um and these could be uh
00:08:56.360 --> 00:09:01.880
specifically this kind though is a kind
00:08:59.000 --> 00:09:03.480
that doesn't have any context limit so
00:09:01.880 --> 00:09:05.680
it's looking all the way back to the
00:09:03.480 --> 00:09:07.519
beginning of the the sequence and this
00:09:05.680 --> 00:09:09.440
could be like an infinite length n-gram
00:09:07.519 --> 00:09:10.440
model but practically we can't use those
00:09:09.440 --> 00:09:12.519
because they would have too many
00:09:10.440 --> 00:09:15.360
parameters they would be too sparse for
00:09:12.519 --> 00:09:17.079
us to estimate the parameters so um what
00:09:15.360 --> 00:09:19.120
we do instead with n-gram models which I
00:09:17.079 --> 00:09:21.240
talked about last time is we limit the
00:09:19.120 --> 00:09:23.600
the context length so we have something
00:09:21.240 --> 00:09:25.760
like a trigram model where we don't
00:09:23.600 --> 00:09:28.680
actually reference all of the previous
00:09:25.760 --> 00:09:30.680
outputs uh when we make a prediction oh
00:09:28.680 --> 00:09:34.440
and sorry actually I I should explain
00:09:30.680 --> 00:09:37.640
how how do we uh how do we read this
00:09:34.440 --> 00:09:40.519
graph so this would be we're predicting
00:09:37.640 --> 00:09:42.680
number one here we're predicting word
00:09:40.519 --> 00:09:45.240
number one and we're conditioning we're
00:09:42.680 --> 00:09:47.640
not conditioning on anything after it
00:09:45.240 --> 00:09:49.040
we're predicting word number two we're
00:09:47.640 --> 00:09:50.480
conditioning on Word number one we're
00:09:49.040 --> 00:09:53.040
predicting word number three we're
00:09:50.480 --> 00:09:55.640
conditioning on Word number two so here
00:09:53.040 --> 00:09:58.320
we would be uh predicting word number
00:09:55.640 --> 00:09:59.920
four conditioning on Words number three
00:09:58.320 --> 00:10:02.200
and two but not number one so that would
00:09:59.920 --> 00:10:07.600
be like a trigram
00:10:02.200 --> 00:10:07.600
model um so
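
As a rough illustration of the trigram factorization just described, here is a minimal count-based sketch in Python (the toy corpus, the <s>/</s> markers, and the unsmoothed counts are all made-up simplifications, not the models from the lecture):

    from collections import defaultdict

    trigram_counts = defaultdict(int)
    bigram_counts = defaultdict(int)

    corpus = ["<s>", "<s>", "the", "dog", "saw", "the", "cat", "</s>"]
    for i in range(2, len(corpus)):
        trigram_counts[(corpus[i - 2], corpus[i - 1], corpus[i])] += 1
        bigram_counts[(corpus[i - 2], corpus[i - 1])] += 1

    def p_next(prev2, prev1, word):
        # Probability of the next word given only the two previous words:
        # word 4 conditions on words 3 and 2, but not on word 1.
        denom = bigram_counts[(prev2, prev1)]
        return trigram_counts[(prev2, prev1, word)] / denom if denom else 0.0

    # The probability of the whole sequence is the product of these conditionals.
    prob = 1.0
    for i in range(2, len(corpus)):
        prob *= p_next(corpus[i - 2], corpus[i - 1], corpus[i])
    print(prob)
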
00:10:08.600 --> 00:10:15.240
the what is this is there a robot
00:10:11.399 --> 00:10:17.480
walking around somewhere um a hammer drill
00:10:15.240 --> 00:10:20.440
okay okay it'd be a lot more fun if it was a
00:10:17.480 --> 00:10:22.560
robot um so
00:10:20.440 --> 00:10:25.519
uh the things we're going to talk about
00:10:22.560 --> 00:10:28.360
today are largely going to be ones that
00:10:25.519 --> 00:10:31.200
have unlimited length context um and so
00:10:28.360 --> 00:10:33.440
we can uh we'll talk about some examples
00:10:31.200 --> 00:10:35.680
here and then um there's also
00:10:33.440 --> 00:10:37.279
independent prediction so this uh would
00:10:35.680 --> 00:10:39.160
be something like a unigram model where
00:10:37.279 --> 00:10:41.560
you would just uh not condition on any
00:10:39.160 --> 00:10:41.560
previous
00:10:41.880 --> 00:10:45.959
context there's also bidirectional
00:10:44.279 --> 00:10:47.959
prediction where basically when you
00:10:45.959 --> 00:10:50.440
predict each element you predict based
00:10:47.959 --> 00:10:52.680
on all of the other elements not the
00:10:50.440 --> 00:10:55.519
element itself uh this could be
00:10:52.680 --> 00:10:59.720
something like a masked language model
00:10:55.519 --> 00:11:02.160
um but note here that I put a slash
00:10:59.720 --> 00:11:04.000
through here uh because this is not a
00:11:02.160 --> 00:11:06.800
well-formed probability because as I
00:11:04.000 --> 00:11:08.760
mentioned last time um in order to have
00:11:06.800 --> 00:11:11.000
a well-formed probability you need to
00:11:08.760 --> 00:11:12.440
predict the elements based on all of the
00:11:11.000 --> 00:11:14.120
elements that you predicted before and
00:11:12.440 --> 00:11:16.519
you can't predict based on future
00:11:14.120 --> 00:11:18.519
elements so this is not actually a
00:11:16.519 --> 00:11:20.760
probabilistic model but sometimes people
00:11:18.519 --> 00:11:22.240
use this to kind of learn
00:11:20.760 --> 00:11:24.720
representations that could be used
00:11:22.240 --> 00:11:28.680
Downstream for some
00:11:24.720 --> 00:11:30.959
reason cool is this clear any questions
00:11:28.680 --> 00:11:30.959
comments
00:11:32.680 --> 00:11:39.839
yeah so these are all um not
00:11:36.800 --> 00:11:42.000
conditioning on any prior context uh so
00:11:39.839 --> 00:11:43.959
when you predict each word it's
00:11:42.000 --> 00:11:46.880
conditioning on context that you
00:11:43.959 --> 00:11:50.160
previously generated or previously
00:11:46.880 --> 00:11:52.279
predicted yeah so and if I go to the
00:11:50.160 --> 00:11:55.399
conditioned ones these are where you
00:11:52.279 --> 00:11:56.800
have like a source x uh where you're
00:11:55.399 --> 00:11:58.480
given this and then you want to
00:11:56.800 --> 00:11:59.639
calculate the conditional probability of
00:11:58.480 --> 00:12:04.279
something else
00:11:59.639 --> 00:12:06.839
so um to give some examples of this um
00:12:04.279 --> 00:12:10.320
this is autoregressive conditioned
00:12:06.839 --> 00:12:12.920
prediction and um this could be like a
00:12:10.320 --> 00:12:14.440
a standard sequence-to-sequence model
00:12:12.920 --> 00:12:16.079
or it could be a language model where
00:12:14.440 --> 00:12:18.600
you're given a prompt and you want to
00:12:16.079 --> 00:12:20.560
predict the following output like we
00:12:18.600 --> 00:12:24.160
often do with chat GPT or something like
00:12:20.560 --> 00:12:27.880
this and so
00:12:24.160 --> 00:12:30.199
um yeah I I don't think you
00:12:27.880 --> 00:12:32.279
can
00:12:30.199 --> 00:12:34.639
yeah I don't know if any way you can do
00:12:32.279 --> 00:12:37.680
a chat GPT without any conditioning
00:12:34.639 --> 00:12:39.959
context um but there were people who
00:12:37.680 --> 00:12:41.240
were sending uh I saw this about a week
00:12:39.959 --> 00:12:44.079
or two ago there were people who were
00:12:41.240 --> 00:12:47.839
sending things to the chat um to the GPT-
00:12:44.079 --> 00:12:50.480
3.5 or GPT-4 API with no input and it
00:12:47.839 --> 00:12:52.279
would randomly output random questions
00:12:50.480 --> 00:12:54.800
or something like that so that's what's
00:12:52.279 --> 00:12:56.720
what happens when you send things to uh
00:12:54.800 --> 00:12:58.120
to chat GPT without any prior
00:12:56.720 --> 00:13:00.120
conditioning context but normally what you
00:12:58.120 --> 00:13:01.440
do is you put in you know your prompt
00:13:00.120 --> 00:13:05.320
and then it follows up with your prompt
00:13:01.440 --> 00:13:05.320
and that would be in this uh in this
00:13:06.000 --> 00:13:11.279
Paradigm there's also something called
00:13:08.240 --> 00:13:14.199
non-autoregressive conditioned prediction
00:13:11.279 --> 00:13:16.760
um and this can be used for something
00:13:14.199 --> 00:13:19.160
like sequence labeling or
00:13:16.760 --> 00:13:20.760
non-autoregressive machine translation I'll talk
00:13:19.160 --> 00:13:22.839
about the first one in this class and
00:13:20.760 --> 00:13:25.600
I'll talk about the the second one maybe
00:13:22.839 --> 00:13:27.399
later um it's kind of a minor topic now
00:13:25.600 --> 00:13:30.040
it used to be popular a few years ago so
00:13:27.399 --> 00:13:33.279
I'm not sure whether I'll cover it but
00:13:30.040 --> 00:13:33.279
um uh
00:13:33.399 --> 00:13:39.279
yeah cool so the basic modeling Paradigm
00:13:37.079 --> 00:13:41.199
that we use for things like this is
00:13:39.279 --> 00:13:42.760
extracting features and predicting so
00:13:41.199 --> 00:13:44.839
this is exactly the same as the bag of
00:13:42.760 --> 00:13:46.680
words model right in the bag of words
00:13:44.839 --> 00:13:48.680
model that I talked about the first time
00:13:46.680 --> 00:13:50.959
we extracted features uh based on those
00:13:48.680 --> 00:13:53.440
features we made predictions so it's no
00:13:50.959 --> 00:13:55.480
different when we do sequence modeling
00:13:53.440 --> 00:13:57.680
um but the methods that we use for
00:13:55.480 --> 00:14:01.120
feature extraction is different so given
00:13:57.680 --> 00:14:03.920
an input text X we extract features
00:14:01.120 --> 00:14:06.519
H and predict labels
00:14:03.920 --> 00:14:10.320
Y and for something like text
00:14:06.519 --> 00:14:12.600
classification what we do is we uh so
00:14:10.320 --> 00:14:15.440
for example we have text classification
00:14:12.600 --> 00:14:17.920
or or sequence labeling and for text
00:14:15.440 --> 00:14:19.720
classification usually what we would do
00:14:17.920 --> 00:14:21.360
is we would have a feature extractor
00:14:19.720 --> 00:14:23.120
from this feature extractor we take the
00:14:21.360 --> 00:14:25.199
sequence and we convert it into a single
00:14:23.120 --> 00:14:28.040
vector and then based on this Vector we
00:14:25.199 --> 00:14:30.160
make a prediction so that that's what we
00:14:28.040 --> 00:14:33.160
do for
00:14:30.160 --> 00:14:35.480
classification um for sequence labeling
00:14:33.160 --> 00:14:37.160
normally what we do is we extract one
00:14:35.480 --> 00:14:40.240
vector for each thing that we would like
00:14:37.160 --> 00:14:42.079
to predict about so here that might be
00:14:40.240 --> 00:14:45.639
one vector for each
00:14:42.079 --> 00:14:47.720
word um and then based on this uh we
00:14:45.639 --> 00:14:49.120
would predict something for each word so
00:14:47.720 --> 00:14:50.360
this is an example of part of speech
00:14:49.120 --> 00:14:53.079
tagging but there's a lot of other
00:14:50.360 --> 00:14:56.920
sequence labeling tasks
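
A minimal sketch of the two set-ups just described, with random numbers standing in for a real feature extractor (numpy, the dimensions, and the mean-pooling step are illustrative assumptions, not the lecture's exact recipe):

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, emb_dim, num_labels = 5, 8, 3

    H = rng.normal(size=(seq_len, emb_dim))    # one feature vector per token
    W = rng.normal(size=(emb_dim, num_labels))

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    # Text classification: collapse the sequence into a single vector, one prediction.
    sentence_vector = H.mean(axis=0)
    print(softmax(sentence_vector @ W).shape)  # (3,)  one label distribution per text

    # Sequence labeling: keep one vector per token, one prediction per word.
    print(softmax(H @ W).shape)                # (5, 3)  one distribution per word
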
00:14:53.079 --> 00:14:58.839
also and what tasks exist for something
00:14:56.920 --> 00:15:03.040
like sequence labeling so sequence
00:14:58.839 --> 00:15:06.240
labeling is uh a pretty
00:15:03.040 --> 00:15:09.000
big subset of NLP tasks you can express
00:15:06.240 --> 00:15:11.040
a lot of things as sequence labeling and
00:15:09.000 --> 00:15:13.000
basically given an input text X we
00:15:11.040 --> 00:15:16.079
predict an output label sequence y of
00:15:13.000 --> 00:15:17.560
equal length so this can be used for
00:15:16.079 --> 00:15:20.160
things like part of speech tagging to
00:15:17.560 --> 00:15:22.000
get the parts of speech of each word um
00:15:20.160 --> 00:15:24.639
it can also be used for something like
00:15:22.000 --> 00:15:26.959
lemmatization and lemmatization basically what
00:15:24.639 --> 00:15:29.880
that is is it is predicting the base
00:15:26.959 --> 00:15:31.480
form of each word uh and this can be
00:15:29.880 --> 00:15:34.560
used for normalization if you want to
00:15:31.480 --> 00:15:36.360
find like for example all instances of a
00:15:34.560 --> 00:15:38.480
a particular verb being used or all
00:15:36.360 --> 00:15:40.800
instances of a particular noun being
00:15:38.480 --> 00:15:42.720
used this is a little bit different than
00:15:40.800 --> 00:15:45.000
something like stemming so stemming
00:15:42.720 --> 00:15:48.160
normally what stemming would do is it
00:15:45.000 --> 00:15:50.560
would uh chop off the plural here it
00:15:48.160 --> 00:15:53.240
would chop off S but it wouldn't be able
00:15:50.560 --> 00:15:56.279
to do things like normalize saw into see
00:15:53.240 --> 00:15:57.759
because uh stemming uh just removes
00:15:56.279 --> 00:15:59.240
suffixes it doesn't do any sort of
00:15:57.759 --> 00:16:02.720
normalization so that's the difference
00:15:59.240 --> 00:16:05.199
between lemmatization and
00:16:02.720 --> 00:16:08.079
stemming there's also something called
00:16:05.199 --> 00:16:09.680
morphological tagging um in
00:16:08.079 --> 00:16:11.639
morphological tagging basically what
00:16:09.680 --> 00:16:14.360
this is doing is this is a
00:16:11.639 --> 00:16:17.040
more advanced version of part of speech
00:16:14.360 --> 00:16:20.360
tagging uh that predicts things like
00:16:17.040 --> 00:16:23.600
okay this is a a past tense verb uh this
00:16:20.360 --> 00:16:25.639
is a plural um this is a particular verb
00:16:23.600 --> 00:16:27.240
form and you have multiple tags here
00:16:25.639 --> 00:16:28.959
this is less interesting in English
00:16:27.240 --> 00:16:30.920
because English is kind of boring
00:16:28.959 --> 00:16:32.319
language morphologically it
00:16:30.920 --> 00:16:33.399
doesn't have a lot of conjugation and
00:16:32.319 --> 00:16:35.839
other stuff but it's a lot more
00:16:33.399 --> 00:16:38.319
interesting in more complex languages
00:16:35.839 --> 00:16:40.040
like Japanese or Hindi or other things
00:16:38.319 --> 00:16:42.480
like
00:16:40.040 --> 00:16:43.920
that Chinese is even more boring than
00:16:42.480 --> 00:16:46.120
English so if you're interested in
00:16:43.920 --> 00:16:47.000
Chinese then you don't need to worry
00:16:46.120 --> 00:16:50.680
about
00:16:47.000 --> 00:16:52.560
that cool um but actually what's maybe
00:16:50.680 --> 00:16:55.000
more widely used from the sequence
00:16:52.560 --> 00:16:57.480
labeling perspective is span labeling
00:16:55.000 --> 00:17:01.040
and here you want to predict spans and
00:16:57.480 --> 00:17:03.560
labels and this could be uh named entity
00:17:01.040 --> 00:17:05.360
recognition so if you say uh Graham Neubig
00:17:03.560 --> 00:17:07.199
is teaching at Carnegie Mellon University
00:17:05.360 --> 00:17:09.520
you would want to identify each entity
00:17:07.199 --> 00:17:11.480
as being like a person organization
00:17:09.520 --> 00:17:16.039
Place governmental entity other stuff
00:17:11.480 --> 00:17:18.760
like that um there's also
00:17:16.039 --> 00:17:20.439
uh things like syntactic chunking where
00:17:18.760 --> 00:17:23.640
you want to find all noun phrases and
00:17:20.439 --> 00:17:26.799
verb phrases um also semantic role
00:17:23.640 --> 00:17:30.360
labeling where semantic role labeling is
00:17:26.799 --> 00:17:32.480
uh demonstrating who did what to whom so
00:17:30.360 --> 00:17:34.440
it's saying uh this is the actor the
00:17:32.480 --> 00:17:36.120
person who did the thing this is the
00:17:34.440 --> 00:17:38.520
thing that is being done and this is the
00:17:36.120 --> 00:17:40.280
place where it's being done so uh this
00:17:38.520 --> 00:17:42.840
can be useful if you want to do any sort
00:17:40.280 --> 00:17:45.559
of analysis about who does what to whom
00:17:42.840 --> 00:17:48.160
uh other things like
00:17:45.559 --> 00:17:50.360
that um there's also a more complicated
00:17:48.160 --> 00:17:52.080
thing called an entity linking which
00:17:50.360 --> 00:17:54.559
isn't really a span labeling task but
00:17:52.080 --> 00:17:58.400
it's basically named entity recognition
00:17:54.559 --> 00:18:00.799
and you link it to um and you link it to
00:17:58.400 --> 00:18:04.200
to like a database like Wikidata or
00:18:00.799 --> 00:18:06.600
Wikipedia or something like this and
00:18:04.200 --> 00:18:09.520
this doesn't seem very glamorous perhaps
00:18:06.600 --> 00:18:10.799
you know a lot of people might not you
00:18:09.520 --> 00:18:13.400
might not
00:18:10.799 --> 00:18:15.000
be
00:18:13.400 --> 00:18:16.799
immediately excited by entity linking
00:18:15.000 --> 00:18:18.520
but actually it's super super important
00:18:16.799 --> 00:18:20.080
for things like news aggregation and
00:18:18.520 --> 00:18:21.640
other stuff like that find all the news
00:18:20.080 --> 00:18:23.799
articles about the celebrity or
00:18:21.640 --> 00:18:26.919
something like this uh find all of the
00:18:23.799 --> 00:18:29.720
mentions of our product um our company's
00:18:26.919 --> 00:18:33.400
product on uh social media or things so
00:18:29.720 --> 00:18:33.400
it's actually a really widely used
00:18:33.720 --> 00:18:38.000
technology and then finally span
00:18:36.039 --> 00:18:40.240
labeling can also be treated as sequence
00:18:38.000 --> 00:18:43.240
labeling um and the way we normally do
00:18:40.240 --> 00:18:45.600
this is we use something called bio tags
00:18:43.240 --> 00:18:47.760
and uh here you predict the beginning uh
00:18:45.600 --> 00:18:50.200
in and out tags for each word or spans
00:18:47.760 --> 00:18:52.400
so if we have this example of spans uh
00:18:50.200 --> 00:18:56.120
we just convert this into tags uh where
00:18:52.400 --> 00:18:57.760
you say uh begin person in person O
00:18:56.120 --> 00:18:59.640
means it's not an entity begin
00:18:57.760 --> 00:19:02.799
organization in organization and then
00:18:59.640 --> 00:19:05.520
you convert that back into um into these
00:19:02.799 --> 00:19:09.880
spans so this makes it relatively easy
00:19:05.520 --> 00:19:09.880
to uh kind of do the span
00:19:10.480 --> 00:19:15.120
prediction cool um so now you know uh
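
Here is a minimal sketch of that span-to-BIO conversion (the tokens and spans are the lecture's example; the helper functions are just illustrative, not from any particular library):

    tokens = ["Graham", "Neubig", "is", "teaching", "at",
              "Carnegie", "Mellon", "University"]
    spans = [(0, 2, "PER"), (5, 8, "ORG")]   # (start, end_exclusive, label)

    def spans_to_bio(tokens, spans):
        tags = ["O"] * len(tokens)           # O = not part of any entity
        for start, end, label in spans:
            tags[start] = "B-" + label       # begin tag on the first token
            for i in range(start + 1, end):
                tags[i] = "I-" + label       # in tags on the rest of the span
        return tags

    def bio_to_spans(tags):
        spans, start = [], None
        for i, tag in enumerate(tags + ["O"]):        # sentinel closes the last span
            if start is not None and not tag.startswith("I-"):
                spans.append((start, i, tags[start][2:]))
                start = None
            if tag.startswith("B-"):
                start = i
        return spans

    bio = spans_to_bio(tokens, spans)
    print(bio)                           # ['B-PER', 'I-PER', 'O', ..., 'I-ORG']
    print(bio_to_spans(bio) == spans)    # True, so the tagging is reversible
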
00:19:13.640 --> 00:19:16.600
now you know what to do if you want to
00:19:15.120 --> 00:19:18.280
predict entities or other things like
00:19:16.600 --> 00:19:20.240
that there's a lot of models on like
00:19:18.280 --> 00:19:22.400
hugging face for example that uh allow
00:19:20.240 --> 00:19:25.640
you to do these things are there any
00:19:22.400 --> 00:19:25.640
questions uh before I move
00:19:27.080 --> 00:19:32.440
on okay
00:19:28.799 --> 00:19:34.039
cool I'll just go forward then so um now
00:19:32.440 --> 00:19:37.000
I'm going to talk about how we actually
00:19:34.039 --> 00:19:38.559
model these in machine learning models
00:19:37.000 --> 00:19:40.919
and there's three major types of
00:19:38.559 --> 00:19:43.120
sequence models uh there are other types
00:19:40.919 --> 00:19:45.320
of sequence models but I'd say the great
00:19:43.120 --> 00:19:47.840
majority of work uses one of these three
00:19:45.320 --> 00:19:51.720
different types and the first one is
00:19:47.840 --> 00:19:54.840
recurrence um what recurrence does is
00:19:51.720 --> 00:19:56.240
it conditions representations on an
00:19:54.840 --> 00:19:58.720
encoding of the
00:19:56.240 --> 00:20:01.360
history and so the way this works
00:19:58.720 --> 00:20:04.679
is essentially you have your input
00:20:01.360 --> 00:20:06.280
vectors like this uh usually word
00:20:04.679 --> 00:20:08.600
embeddings or embeddings from the
00:20:06.280 --> 00:20:10.880
previous layer of the model and you have
00:20:08.600 --> 00:20:12.840
a recurrent neural network and the
00:20:10.880 --> 00:20:14.600
recurrent neural network um at the very
00:20:12.840 --> 00:20:17.280
beginning might only take the first
00:20:14.600 --> 00:20:19.480
Vector but every subsequent step it
00:20:17.280 --> 00:20:23.760
takes the input vector and it takes the
00:20:19.480 --> 00:20:23.760
hidden Vector from the previous uh
00:20:24.080 --> 00:20:32.280
input and the uh then you keep on going
00:20:29.039 --> 00:20:32.280
uh like this all the way through the
00:20:32.320 --> 00:20:37.600
sequence the convolution conditions
00:20:35.799 --> 00:20:40.880
representations on local
00:20:37.600 --> 00:20:44.200
context so you have the inputs like this
00:20:40.880 --> 00:20:47.200
and here you're conditioning on the word
00:20:44.200 --> 00:20:51.240
itself and the surrounding um words on
00:20:47.200 --> 00:20:52.960
the right or the left so um you would do
00:20:51.240 --> 00:20:55.240
something like this this is a typical
00:20:52.960 --> 00:20:57.480
convolution where you have this this
00:20:55.240 --> 00:20:59.039
certain one here and the left one and
00:20:57.480 --> 00:21:01.080
the right one and this would be a size
00:20:59.039 --> 00:21:03.480
three convolution you could also have a
00:21:01.080 --> 00:21:06.520
size five convolution 7 9 you know
00:21:03.480 --> 00:21:08.600
whatever else um that would take in more
00:21:06.520 --> 00:21:11.520
surrounding words like
00:21:08.600 --> 00:21:13.720
this and then finally we have attention
00:21:11.520 --> 00:21:15.640
um and attention conditions
00:21:13.720 --> 00:21:19.080
representations on a weighted average of
00:21:15.640 --> 00:21:21.000
all tokens in the sequence and so here
00:21:19.080 --> 00:21:24.600
um we're conditioning on all of the
00:21:21.000 --> 00:21:26.279
other tokens in the sequence but um the
00:21:24.600 --> 00:21:28.919
amount that we condition on each of the
00:21:26.279 --> 00:21:32.039
tokens differs between them
00:21:28.919 --> 00:21:34.919
so we might get more of this token less
00:21:32.039 --> 00:21:37.600
of this token and other things like that
00:21:34.919 --> 00:21:39.720
and I'll go into the mechanisms of each
00:21:37.600 --> 00:21:43.159
of
00:21:39.720 --> 00:21:45.720
these one important thing to think about
00:21:43.159 --> 00:21:49.279
is uh the computational complexity of
00:21:45.720 --> 00:21:51.960
each of these and um the computational
00:21:49.279 --> 00:21:56.240
complexity can be
00:21:51.960 --> 00:21:58.600
expressed as the sequence length let's
00:21:56.240 --> 00:22:00.840
call the sequence length n and
00:21:58.600 --> 00:22:02.520
convolution has a convolution window
00:22:00.840 --> 00:22:05.080
size so I'll call that
00:22:02.520 --> 00:22:08.039
W so does anyone have an idea of the
00:22:05.080 --> 00:22:10.360
computational complexity of a recurrent
00:22:08.039 --> 00:22:10.360
neural
00:22:11.480 --> 00:22:16.640
network so how um how quickly does the
00:22:15.120 --> 00:22:18.640
computation of a recurrent neural
00:22:16.640 --> 00:22:20.760
network grow and one way you can look at
00:22:18.640 --> 00:22:24.360
this is uh figure out the number of
00:22:20.760 --> 00:22:24.360
arrows uh that you see
00:22:24.480 --> 00:22:29.080
here yeah it's it's linear so it's
00:22:27.440 --> 00:22:32.520
basically
00:22:29.080 --> 00:22:35.520
n um what about
00:22:32.520 --> 00:22:36.760
convolution any other ideas any ideas
00:22:35.520 --> 00:22:42.039
about
00:22:36.760 --> 00:22:45.120
convolution n yeah n W n
00:22:42.039 --> 00:22:47.559
W and what about
00:22:45.120 --> 00:22:52.200
attention n squared
00:22:47.559 --> 00:22:53.559
yeah so what you can see is um for very
00:22:52.200 --> 00:22:58.000
long
00:22:53.559 --> 00:23:00.400
sequences um for very long sequences the
00:22:58.000 --> 00:23:04.480
asymptotic complexity of running a
00:23:00.400 --> 00:23:06.039
recurrent neural network is uh lower so
00:23:04.480 --> 00:23:08.960
you can run a recurrent neural network
00:23:06.039 --> 00:23:10.480
over a sequence of length uh you know 20
00:23:08.960 --> 00:23:12.480
million or something like that and as
00:23:10.480 --> 00:23:15.200
long as you had enough memory it would
00:23:12.480 --> 00:23:16.520
take a linear time but um if you do
00:23:15.200 --> 00:23:18.400
something like attention over a really
00:23:16.520 --> 00:23:20.240
long sequence it would be more difficult
00:23:18.400 --> 00:23:22.080
there's a lot of caveats here because
00:23:20.240 --> 00:23:23.320
attention and convolution are easily
00:23:22.080 --> 00:23:26.200
parallelized
00:23:23.320 --> 00:23:28.520
uh whereas uh recurrence is
00:23:26.200 --> 00:23:30.919
not um and I'll talk about that a second
00:23:28.520 --> 00:23:32.679
but any anyway it's a good thing to keep
00:23:30.919 --> 00:23:36.240
in
00:23:32.679 --> 00:23:37.679
mind cool um so the next the first
00:23:36.240 --> 00:23:39.799
sequence model I want to introduce is
00:23:37.679 --> 00:23:42.559
recurrent neural networks oh um sorry
00:23:39.799 --> 00:23:45.799
one other thing I want to mention is all
00:23:42.559 --> 00:23:47.600
of these are still used um it might seem
00:23:45.799 --> 00:23:49.960
that like if you're very plugged into
00:23:47.600 --> 00:23:52.640
NLP it might seem like Well everybody's
00:23:49.960 --> 00:23:55.080
using attention um so why do we need to
00:23:52.640 --> 00:23:56.880
learn about the other ones uh but
00:23:55.080 --> 00:23:59.679
actually all of these are used and
00:23:56.880 --> 00:24:02.600
usually recurrence and convolution are
00:23:59.679 --> 00:24:04.960
used in combination with attention uh in
00:24:02.600 --> 00:24:07.799
some way for particular applications
00:24:04.960 --> 00:24:09.960
where uh like uh recurrence or a
00:24:07.799 --> 00:24:12.640
convolution are useful so I'll
00:24:09.960 --> 00:24:15.279
go into details of that
00:24:12.640 --> 00:24:18.159
later so let's talk about the first sequence
00:24:15.279 --> 00:24:20.600
model uh recurrent neural networks so
00:24:18.159 --> 00:24:22.919
recurrent neural networks um they're
00:24:20.600 --> 00:24:26.399
basically tools to remember information
00:24:22.919 --> 00:24:28.520
uh they were invented in uh around
00:24:26.399 --> 00:24:30.520
1990 and
00:24:28.520 --> 00:24:34.120
the way they work is a feedforward
00:24:30.520 --> 00:24:35.600
neural network looks a bit like this we
00:24:34.120 --> 00:24:38.000
have some sort of look up over the
00:24:35.600 --> 00:24:40.120
context we calculate embeddings we do a
00:24:38.000 --> 00:24:41.000
transform we get a hidden State and we
00:24:40.120 --> 00:24:43.039
make the
00:24:41.000 --> 00:24:46.159
prediction whereas a recurrent neural
00:24:43.039 --> 00:24:49.360
network uh feeds in the previous hidden
00:24:46.159 --> 00:24:53.360
State and a very simple Elman style
00:24:49.360 --> 00:24:54.840
neural network looks um or I'll contrast
00:24:53.360 --> 00:24:56.559
the feed forward neural network that we
00:24:54.840 --> 00:24:58.279
already know with an Elman style neural
00:24:56.559 --> 00:25:00.399
network um
00:24:58.279 --> 00:25:01.880
uh recurrent neural network so basically
00:25:00.399 --> 00:25:06.120
the feed forward Network that we already
00:25:01.880 --> 00:25:07.840
know does a um linear transform over the
00:25:06.120 --> 00:25:09.279
input and then it runs it through a
00:25:07.840 --> 00:25:11.640
nonlinear function and this could be
00:25:09.279 --> 00:25:14.200
like a tanh function or a ReLU function or
00:25:11.640 --> 00:25:17.080
anything like that in a recurrent neural
00:25:14.200 --> 00:25:19.559
network we add uh multiplication by the
00:25:17.080 --> 00:25:22.080
previous hidden state so it
00:25:19.559 --> 00:25:25.120
looks like
00:25:22.080 --> 00:25:27.000
this and so if we look at what
00:25:25.120 --> 00:25:29.080
processing a sequence looks like uh
00:25:27.000 --> 00:25:31.080
basically what we do is we start out
00:25:29.080 --> 00:25:32.720
with an initial State this initial State
00:25:31.080 --> 00:25:34.320
could be like all zeros or it could be
00:25:32.720 --> 00:25:35.200
randomized or it could be learned or
00:25:34.320 --> 00:25:38.480
whatever
00:25:35.200 --> 00:25:42.080
else and then based on based on this uh
00:25:38.480 --> 00:25:44.279
we run it through an RNN function um and
00:25:42.080 --> 00:25:46.600
then you know calculate the hidden
00:25:44.279 --> 00:25:48.960
State use it to make a prediction uh we
00:25:46.600 --> 00:25:50.760
have the RNN function uh make a
00:25:48.960 --> 00:25:51.760
prediction RNN make a prediction RNN
00:25:50.760 --> 00:25:54.520
make a
00:25:51.760 --> 00:25:56.960
prediction so one important thing here
00:25:54.520 --> 00:25:58.360
is that this RNN is exactly the same
00:25:56.960 --> 00:26:01.880
function
00:25:58.360 --> 00:26:04.960
no matter which position it appears in
00:26:01.880 --> 00:26:06.640
and so because of that we just no matter
00:26:04.960 --> 00:26:08.279
how long the sequence becomes we always
00:26:06.640 --> 00:26:10.200
have the same number of parameters which
00:26:08.279 --> 00:26:12.600
is always like really important for a
00:26:10.200 --> 00:26:15.120
sequence model so uh that's what this
00:26:12.600 --> 00:26:15.120
looks like here
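As a rough sketch of the Elman-style recurrence just described, h_t = tanh(W_x x_t + W_h h_{t-1} + b), here is a small Python example; the dimensions and random weights are made-up stand-ins, not values from the lecture.

    import numpy as np

    # Sketch of an Elman-style RNN step: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
    d_in, d_hid = 4, 3
    rng = np.random.default_rng(0)
    W_x = rng.normal(size=(d_hid, d_in))
    W_h = rng.normal(size=(d_hid, d_hid))
    b = np.zeros(d_hid)

    def rnn_step(x_t, h_prev):
        return np.tanh(W_x @ x_t + W_h @ h_prev + b)

    xs = [rng.normal(size=d_in) for _ in range(5)]   # stand-ins for word embeddings
    h = np.zeros(d_hid)                              # initial state (all zeros here)
    for x_t in xs:                                   # same parameters reused at every position
        h = rnn_step(x_t, h)
    print(h)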
00:26:15.799 --> 00:26:20.480
so how do we train
00:26:18.320 --> 00:26:22.679
rnns um
00:26:20.480 --> 00:26:24.399
basically if you remember we can train
00:26:22.679 --> 00:26:27.159
neural networks as long as we have a
00:26:24.399 --> 00:26:29.240
directed acyclic graph that calculates
00:26:27.159 --> 00:26:30.919
our loss function and then for uh
00:26:29.240 --> 00:26:32.640
forward propagation and back propagation
00:26:30.919 --> 00:26:35.720
we'll do all the rest to calculate our
00:26:32.640 --> 00:26:38.760
parameters and we uh we update the
00:26:35.720 --> 00:26:40.480
parameters so the way this works is uh
00:26:38.760 --> 00:26:42.000
let's say we're doing sequence labeling
00:26:40.480 --> 00:26:45.200
and each of these predictions is a part
00:26:42.000 --> 00:26:47.559
of speech uh each of these labels is a
00:26:45.200 --> 00:26:49.000
true part of speech label or sorry each
00:26:47.559 --> 00:26:50.760
of these predictions is like a
00:26:49.000 --> 00:26:52.919
probability over the parts of
00:26:50.760 --> 00:26:55.720
speech for that sequence each of these
00:26:52.919 --> 00:26:57.640
labels is a true part of speech label so
00:26:55.720 --> 00:26:59.320
basically what we do is from this we
00:26:57.640 --> 00:27:02.200
calculate the negative log likelihood of
00:26:59.320 --> 00:27:05.559
the true part of speech we get a
00:27:02.200 --> 00:27:09.120
loss and so now we have four losses uh
00:27:05.559 --> 00:27:11.559
here this is no longer a nice directed
00:27:09.120 --> 00:27:13.000
acyclic uh graph that ends in a single
00:27:11.559 --> 00:27:15.279
loss function which is kind of what we
00:27:13.000 --> 00:27:17.559
needed for back propagation right so
00:27:15.279 --> 00:27:20.240
what do we do uh very simple we just add
00:27:17.559 --> 00:27:22.440
them together uh we take the sum and now
00:27:20.240 --> 00:27:24.120
we have a single loss function uh which
00:27:22.440 --> 00:27:26.240
is the sum of all of the loss functions
00:27:24.120 --> 00:27:28.679
for each prediction that we
00:27:26.240 --> 00:27:30.799
made and that's our total loss and now
00:27:28.679 --> 00:27:32.600
we do have a directed acyclic graph where
00:27:30.799 --> 00:27:34.320
this is the terminal node and we can do
00:27:32.600 --> 00:27:36.480
backprop like
00:27:34.320 --> 00:27:37.799
this this is true for all sequence
00:27:36.480 --> 00:27:39.320
models I'm going to talk about today I'm
00:27:37.799 --> 00:27:41.559
just illustrating it with recurrent
00:27:39.320 --> 00:27:43.279
networks um any any questions here
00:27:41.559 --> 00:27:45.240
everything
00:27:43.279 --> 00:27:47.919
good
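A minimal sketch of how the per-position losses are summed into the single terminal loss node described above; the probability vectors and gold labels here are invented for illustration.

    import numpy as np

    # Sketch: each position's prediction is a probability distribution over tags,
    # and the training loss is the sum of per-position negative log likelihoods.
    probs = [np.array([0.7, 0.2, 0.1]),   # P(tag | position 1)
             np.array([0.1, 0.8, 0.1]),
             np.array([0.3, 0.3, 0.4])]
    gold = [0, 1, 2]                       # true tag index at each position

    losses = [-np.log(p[y]) for p, y in zip(probs, gold)]
    total_loss = sum(losses)               # single scalar: the terminal node we backprop from
    print(total_loss)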
00:27:45.240 --> 00:27:50.279
okay cool um yeah so now we have the
00:27:47.919 --> 00:27:52.960
loss it's a well-formed DAG uh we can run
00:27:50.279 --> 00:27:55.320
backprop so uh basically what we do is we
00:27:52.960 --> 00:27:58.399
just run backprop and our loss goes
00:27:55.320 --> 00:28:01.120
out uh back into all of the
00:27:58.399 --> 00:28:04.200
places now parameters are tied across
00:28:01.120 --> 00:28:06.080
time so the derivatives into the
00:28:04.200 --> 00:28:07.200
parameters are aggregated over all of
00:28:06.080 --> 00:28:10.760
the time
00:28:07.200 --> 00:28:13.760
steps um and this has been called back
00:28:10.760 --> 00:28:16.320
propagation through time uh since uh
00:28:13.760 --> 00:28:18.679
these were originally invented so
00:28:16.320 --> 00:28:21.720
basically what it looks like is because
00:28:18.679 --> 00:28:25.600
the parameters for this RNN function are
00:28:21.720 --> 00:28:27.120
shared uh they'll essentially be updated
00:28:25.600 --> 00:28:29.480
they'll only be updated once but they're
00:28:27.120 --> 00:28:32.640
updated from like four different
00:28:29.480 --> 00:28:32.640
positions in this network
00:28:34.120 --> 00:28:38.440
essentially yeah and this is the same
00:28:36.120 --> 00:28:40.559
for all sequence uh sequence models that
00:28:38.440 --> 00:28:43.519
I'm going to talk about
00:28:40.559 --> 00:28:45.360
today um another variety of models that
00:28:43.519 --> 00:28:47.559
people use are bidirectional rnns and
00:28:45.360 --> 00:28:49.880
these are uh used when you want to you
00:28:47.559 --> 00:28:52.960
know do something like sequence labeling
00:28:49.880 --> 00:28:54.399
and so you just uh run two rnns you want
00:28:52.960 --> 00:28:56.279
run one from the beginning one from the
00:28:54.399 --> 00:28:59.399
end and concatenate them together like
00:28:56.279 --> 00:28:59.399
this make predictions
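A minimal sketch of the bidirectional encoding just described: one pass left-to-right, one right-to-left, concatenated per position. The simple tanh cell, dimensions, and random weights are illustrative assumptions.

    import numpy as np

    # Sketch of a bidirectional encoder: run one RNN in each direction and
    # concatenate the two hidden states at every position.
    def run_rnn(step_fn, xs, d_hid):
        h, hs = np.zeros(d_hid), []
        for x_t in xs:
            h = step_fn(x_t, h)
            hs.append(h)
        return hs

    d_in, d_hid = 4, 3
    rng = np.random.default_rng(1)
    fwd_W, bwd_W = rng.normal(size=(d_hid, d_in)), rng.normal(size=(d_hid, d_in))
    fwd_U, bwd_U = rng.normal(size=(d_hid, d_hid)), rng.normal(size=(d_hid, d_hid))
    fwd = lambda x, h: np.tanh(fwd_W @ x + fwd_U @ h)
    bwd = lambda x, h: np.tanh(bwd_W @ x + bwd_U @ h)

    xs = [rng.normal(size=d_in) for _ in range(5)]
    h_fwd = run_rnn(fwd, xs, d_hid)
    h_bwd = run_rnn(bwd, xs[::-1], d_hid)[::-1]     # reverse pass, realigned to positions
    states = [np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)]
    print(states[0].shape)                           # (6,) = 2 * d_hid per position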
00:29:01.200 --> 00:29:08.200
cool uh any questions yeah if you run
00:29:05.559 --> 00:29:09.960
the does that change your
00:29:08.200 --> 00:29:11.679
complexity does this change the
00:29:09.960 --> 00:29:13.000
complexity it doesn't change the
00:29:11.679 --> 00:29:16.519
asymptotic complexity because you're
00:29:13.000 --> 00:29:18.320
multiplying by two uh and like Big O
00:29:16.519 --> 00:29:21.559
notation doesn't care if you multiply by
00:29:18.320 --> 00:29:23.880
a constant but it does double the time
00:29:21.559 --> 00:29:23.880
that it would
00:29:24.080 --> 00:29:28.080
do cool any
00:29:26.320 --> 00:29:32.799
other questions
00:29:28.080 --> 00:29:35.720
okay let's go forward um another problem
00:29:32.799 --> 00:29:37.240
that is particularly Salient in rnns and
00:29:35.720 --> 00:29:40.440
part of the reason why attention models
00:29:37.240 --> 00:29:42.000
are so useful is Vanishing gradients but
00:29:40.440 --> 00:29:43.880
you should be aware of this regardless
00:29:42.000 --> 00:29:46.799
of whether like no matter which model
00:29:43.880 --> 00:29:48.799
you're using and um thinking about it
00:29:46.799 --> 00:29:50.720
very carefully is actually a really good
00:29:48.799 --> 00:29:52.399
way to design better architectures if
00:29:50.720 --> 00:29:54.000
you're going to be designing uh
00:29:52.399 --> 00:29:56.039
designing
00:29:54.000 --> 00:29:58.000
architectures so basically the problem
00:29:56.039 --> 00:29:59.399
with Vanishing gradients is like let's
00:29:58.000 --> 00:30:01.799
say we have a prediction task where
00:29:59.399 --> 00:30:03.960
we're calculating a regression we're
00:30:01.799 --> 00:30:05.519
inputting a whole bunch of tokens and
00:30:03.960 --> 00:30:08.080
then calculating a regression at the
00:30:05.519 --> 00:30:12.840
very end using a squared error loss
00:30:08.080 --> 00:30:16.360
function if we do something like this uh
00:30:12.840 --> 00:30:17.919
the problem is if we have a standard RNN
00:30:16.360 --> 00:30:21.279
when we do back
00:30:17.919 --> 00:30:25.480
prop we'll have a big gradient
00:30:21.279 --> 00:30:27.000
probably for the first RNN unit here but
00:30:25.480 --> 00:30:30.120
every time because we're running this
00:30:27.000 --> 00:30:33.679
through through some sort of
00:30:30.120 --> 00:30:37.080
nonlinearity if we for example if our
00:30:33.679 --> 00:30:39.240
nonlinearity is a tanh function uh the
00:30:37.080 --> 00:30:42.000
gradient of the tan H function looks a
00:30:39.240 --> 00:30:42.000
little bit like
00:30:42.120 --> 00:30:50.000
this and um here I if I am not mistaken
00:30:47.200 --> 00:30:53.480
this peaks at one and everywhere else
00:30:50.000 --> 00:30:56.919
goes toward zero and so because this peaks at
00:30:53.480 --> 00:30:58.679
one and goes toward zero everywhere else let's say um
00:30:56.919 --> 00:31:01.360
we have an input way over here like
00:30:58.679 --> 00:31:03.080
minus 3 or something like that if
00:31:01.360 --> 00:31:04.760
we have that that basically destroys our
00:31:03.080 --> 00:31:10.760
gradient our gradient disappears for
00:31:04.760 --> 00:31:13.559
that particular unit um and you know
00:31:10.760 --> 00:31:15.399
maybe one thing that you might say is oh
00:31:13.559 --> 00:31:17.039
well you know if this is getting so
00:31:15.399 --> 00:31:19.320
small because this only goes up to one
00:31:17.039 --> 00:31:22.960
let's do like 100 times tan
00:31:19.320 --> 00:31:24.880
h as our uh as our activation function
00:31:22.960 --> 00:31:26.600
we'll do 100 times tanh and so now this
00:31:24.880 --> 00:31:28.279
goes up to 100 and now our gradients are
00:31:26.600 --> 00:31:30.080
not going to disappear but then you
00:31:28.279 --> 00:31:31.720
have the the opposite problem you have
00:31:30.080 --> 00:31:34.760
exploding gradients where it goes up by
00:31:31.720 --> 00:31:36.360
100 every time uh it gets unmanageable
00:31:34.760 --> 00:31:40.000
and destroys your gradient descent
00:31:36.360 --> 00:31:41.720
itself so basically we have uh we have
00:31:40.000 --> 00:31:43.200
this problem because if you apply a
00:31:41.720 --> 00:31:45.639
function over and over again your
00:31:43.200 --> 00:31:47.240
gradient gets smaller and smaller or
00:31:45.639 --> 00:31:49.080
bigger and bigger
00:31:47.240 --> 00:31:50.480
every time you do that and uh you have
00:31:49.080 --> 00:31:51.720
the vanishing gradient or exploding
00:31:50.480 --> 00:31:54.799
gradient
00:31:51.720 --> 00:31:56.919
problem um it's not just a problem with
00:31:54.799 --> 00:31:59.039
nonlinearities so it also happens when
00:31:56.919 --> 00:32:00.480
you do your weight matrix multiplies
00:31:59.039 --> 00:32:03.840
and other stuff like that basically
00:32:00.480 --> 00:32:05.960
anytime you modify uh the the input into
00:32:03.840 --> 00:32:07.720
a different output it will have a
00:32:05.960 --> 00:32:10.240
gradient and so it will either be bigger
00:32:07.720 --> 00:32:14.000
than one or less than
00:32:10.240 --> 00:32:16.000
one um so I mentioned this is a problem
00:32:14.000 --> 00:32:18.120
for rnns it's particularly a problem for
00:32:16.000 --> 00:32:20.799
rnns over long sequences but it's also a
00:32:18.120 --> 00:32:23.039
problem for any other model you use and
00:32:20.799 --> 00:32:24.960
the reason why this is important to know
00:32:23.039 --> 00:32:26.799
is if there's important information in
00:32:24.960 --> 00:32:29.000
your model finding a way that you can
00:32:26.799 --> 00:32:30.559
get a direct path from that important
00:32:29.000 --> 00:32:32.600
information to wherever you're making a
00:32:30.559 --> 00:32:34.440
prediction often is a way to improve
00:32:32.600 --> 00:32:39.120
your model
00:32:34.440 --> 00:32:41.159
um improve your model performance and on
00:32:39.120 --> 00:32:42.919
the contrary if there's unimportant
00:32:41.159 --> 00:32:45.320
information if there's information that
00:32:42.919 --> 00:32:47.159
you think is likely to be unimportant
00:32:45.320 --> 00:32:49.159
putting it farther away or making it a
00:32:47.159 --> 00:32:51.279
more indirect path so the model has to
00:32:49.159 --> 00:32:53.200
kind of work harder to use it is a good
00:32:51.279 --> 00:32:54.840
way to prevent the model from being
00:32:53.200 --> 00:32:57.679
distracted by like tons and tons of
00:32:54.840 --> 00:33:00.200
information um uh some of it
00:32:57.679 --> 00:33:03.960
which may be irrelevant so it's a good
00:33:00.200 --> 00:33:03.960
thing to know about in general for model design
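A tiny numerical sketch of this vanishing/exploding behaviour, assuming the tanh nonlinearity from the example above; the specific pre-activation values and number of steps are made up.

    import numpy as np

    # The derivative of tanh is at most 1, so chaining it many times shrinks the
    # gradient; scaling the activation up by 100 makes it explode instead.
    def dtanh(x):
        return 1.0 - np.tanh(x) ** 2

    grad = 1.0
    for x in [-3.0] * 20:             # 20 steps with a pre-activation of -3
        grad *= dtanh(x)
    print(grad)                       # around 1e-40: the gradient has effectively vanished

    grad = 1.0
    for x in [0.0] * 20:
        grad *= 100.0 * dtanh(x)      # "100 * tanh" activation: factor of 100 per step
    print(grad)                       # 1e40: the gradient explodes instead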
00:33:05.360 --> 00:33:13.080
so um how did rnns solve this
00:33:09.559 --> 00:33:15.360
problem of uh of the vanishing gradient
00:33:13.080 --> 00:33:16.880
there is a method called long short-term
00:33:15.360 --> 00:33:20.360
memory
00:33:16.880 --> 00:33:22.840
um and the basic idea is to make
00:33:20.360 --> 00:33:24.360
additive connections between time
00:33:22.840 --> 00:33:29.919
steps
00:33:24.360 --> 00:33:32.799
and so addition is the
00:33:29.919 --> 00:33:36.399
only addition or kind of like the
00:33:32.799 --> 00:33:38.159
identity is the only thing that does not
00:33:36.399 --> 00:33:40.880
change the gradient it's guaranteed to
00:33:38.159 --> 00:33:43.279
not change the gradient because um the
00:33:40.880 --> 00:33:46.639
identity function is like f
00:33:43.279 --> 00:33:49.159
of x equals x and if you take the
00:33:46.639 --> 00:33:51.480
derivative of this it's one so you're
00:33:49.159 --> 00:33:55.440
guaranteed to always have a gradient of
00:33:51.480 --> 00:33:57.360
one according to this function so um
00:33:55.440 --> 00:33:59.559
long shortterm memory makes sure that
00:33:57.360 --> 00:34:01.840
you have this additive uh input between
00:33:59.559 --> 00:34:04.600
time steps and this is what it looks
00:34:01.840 --> 00:34:05.919
like it's not super super important to
00:34:04.600 --> 00:34:09.119
understand everything that's going on
00:34:05.919 --> 00:34:12.200
here but just to explain it very quickly
00:34:09.119 --> 00:34:15.720
this uh C here is something called the
00:34:12.200 --> 00:34:20.520
memory cell it's passed on linearly like
00:34:15.720 --> 00:34:24.679
this and then um you have some gates the
00:34:20.520 --> 00:34:27.320
update gate is determining whether uh
00:34:24.679 --> 00:34:28.919
whether you update this hidden state or
00:34:27.320 --> 00:34:31.440
how much you update given this hidden
00:34:28.919 --> 00:34:34.480
State this input gate is deciding how
00:34:31.440 --> 00:34:36.760
much of the input you take in um and
00:34:34.480 --> 00:34:39.879
then the output gate is deciding how
00:34:36.760 --> 00:34:43.280
much of uh the output from the cell you
00:34:39.879 --> 00:34:45.599
uh you basically push out after using
00:34:43.280 --> 00:34:47.079
the cells so um it has these three gates
00:34:45.599 --> 00:34:48.760
that control the information flow and
00:34:47.079 --> 00:34:51.520
the model can learn to turn them on or
00:34:48.760 --> 00:34:53.720
off uh or something like that so uh
00:34:51.520 --> 00:34:55.679
that's the basic uh basic idea of the
00:34:53.720 --> 00:34:57.240
LSTM and there's lots of other like
00:34:55.679 --> 00:34:59.359
variants of this like gated recurrent
00:34:57.240 --> 00:35:01.520
units that are a little bit simpler but
00:34:59.359 --> 00:35:03.920
the basic idea of an additive connection
00:35:01.520 --> 00:35:07.240
plus gating is uh something that appears
00:35:03.920 --> 00:35:07.240
a lot in many different types of
00:35:07.440 --> 00:35:14.240
architectures um any questions
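A simplified sketch of one LSTM step with the input, update/forget, and output gates described above; the gate naming follows the lecture loosely, and the missing bias terms and random weights are simplifications, not a faithful reproduction of any particular implementation.

    import numpy as np

    # Simplified LSTM step; the additive update to the cell c is what lets
    # gradients flow across many time steps.
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    d_in, d_hid = 4, 3
    rng = np.random.default_rng(2)
    W = {g: rng.normal(size=(d_hid, d_in + d_hid)) for g in ("i", "f", "o", "g")}

    def lstm_step(x_t, h_prev, c_prev):
        z = np.concatenate([x_t, h_prev])
        i = sigmoid(W["i"] @ z)          # input gate: how much new content to write
        f = sigmoid(W["f"] @ z)          # forget/update gate: how much old cell to keep
        o = sigmoid(W["o"] @ z)          # output gate: how much of the cell to expose
        g = np.tanh(W["g"] @ z)          # candidate content
        c = f * c_prev + i * g           # additive connection between time steps
        h = o * np.tanh(c)
        return h, c

    h = c = np.zeros(d_hid)
    for x_t in [rng.normal(size=d_in) for _ in range(5)]:
        h, c = lstm_step(x_t, h, c)
    print(h)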
00:35:12.079 --> 00:35:15.760
here another thing I should mention that
00:35:14.240 --> 00:35:19.200
I just realized I don't have on my
00:35:15.760 --> 00:35:24.480
slides but it's a good thing to know is
00:35:19.200 --> 00:35:29.040
that this is also used in uh deep
00:35:24.480 --> 00:35:32.440
networks and uh multi-layer
00:35:29.040 --> 00:35:32.440
networks and so
00:35:34.240 --> 00:35:39.520
basically lstms uh this is
00:35:39.720 --> 00:35:45.359
time lstms have this additive connection
00:35:43.359 --> 00:35:47.599
between the memory cells where you're
00:35:45.359 --> 00:35:50.079
always
00:35:47.599 --> 00:35:53.119
adding um adding this into whatever
00:35:50.079 --> 00:35:53.119
input you
00:35:54.200 --> 00:36:00.720
get and then you you get an input and
00:35:57.000 --> 00:36:00.720
you add this in you get an
00:36:00.839 --> 00:36:07.000
input and so this this makes sure you
00:36:03.440 --> 00:36:09.640
pass your gradients forward in
00:36:07.000 --> 00:36:11.720
time there's also uh something called
00:36:09.640 --> 00:36:13.000
residual connections which I think a lot
00:36:11.720 --> 00:36:14.319
of people have heard of if you've done a
00:36:13.000 --> 00:36:16.000
deep learning class or something like
00:36:14.319 --> 00:36:18.079
that but if you haven't uh they're a
00:36:16.000 --> 00:36:20.599
good thing to know residual connections
00:36:18.079 --> 00:36:22.440
are if you run your input through
00:36:20.599 --> 00:36:25.720
multiple
00:36:22.440 --> 00:36:28.720
layers like let's say you have a block
00:36:25.720 --> 00:36:28.720
here
00:36:36.480 --> 00:36:41.280
let's let's call this an RNN for now
00:36:38.560 --> 00:36:44.280
because we know um we know about RNN
00:36:41.280 --> 00:36:44.280
already so
00:36:45.119 --> 00:36:49.560
RNN so this this connection here is
00:36:48.319 --> 00:36:50.920
called the residual connection and
00:36:49.560 --> 00:36:55.240
basically it's adding an additive
00:36:50.920 --> 00:36:57.280
connection before and after layers so um
00:36:55.240 --> 00:36:58.640
this allows you to pass information from
00:36:57.280 --> 00:37:00.880
the very beginning of a network to the
00:36:58.640 --> 00:37:03.520
very end of a network um through
00:37:00.880 --> 00:37:05.480
multiple layers and it also is there to
00:37:03.520 --> 00:37:08.800
help prevent the gradient vanishing
00:37:05.480 --> 00:37:11.520
problem so like in a way you can view uh
00:37:08.800 --> 00:37:14.560
you can view lstms what lstms are doing
00:37:11.520 --> 00:37:15.800
is preventing loss of gradient in time
00:37:14.560 --> 00:37:17.280
and these are preventing loss of
00:37:15.800 --> 00:37:19.480
gradient as you go through like multiple
00:37:17.280 --> 00:37:21.119
layers of the network and this is super
00:37:19.480 --> 00:37:24.079
standard this is used in all like
00:37:21.119 --> 00:37:25.599
Transformer models and llama and GPT and
00:37:24.079 --> 00:37:31.200
whatever
00:37:25.599 --> 00:37:31.200
else cool um any other questions about that
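A minimal sketch of a residual connection around a block, assuming a toy tanh layer as the block; real Transformer blocks differ in detail, but the additive skip path is the same idea.

    import numpy as np

    # Residual connection: the block's output is added to its input, so there is
    # a direct additive path from the start of the stack to the end.
    d = 8
    rng = np.random.default_rng(3)
    layers = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]

    def block(W, x):
        return np.tanh(W @ x)

    x = rng.normal(size=d)
    h = x
    for W in layers:
        h = h + block(W, h)      # residual: output = input + f(input)
    print(h.shape)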
00:37:32.760 --> 00:37:39.079
okay cool um so next I'd like to go
00:37:36.880 --> 00:37:41.760
into convolution um one one thing I
00:37:39.079 --> 00:37:44.760
should mention is rnns or RNN style
00:37:41.760 --> 00:37:46.920
models are used extensively in very long
00:37:44.760 --> 00:37:48.160
sequence modeling and we're going to
00:37:46.920 --> 00:37:50.440
talk more about like actual
00:37:48.160 --> 00:37:52.640
architectures that people use uh to do
00:37:50.440 --> 00:37:55.119
this um usually in combination with
00:37:52.640 --> 00:37:57.720
attention based models uh but they're
00:37:55.119 --> 00:38:01.800
used in very long sequence modeling
00:37:57.720 --> 00:38:05.640
convolutions tend to be used in um a lot
00:38:01.800 --> 00:38:07.160
in speech and image processing uh and
00:38:05.640 --> 00:38:10.880
the reason why they're used a lot in
00:38:07.160 --> 00:38:13.560
speech and image processing is
00:38:10.880 --> 00:38:16.800
because when we're processing
00:38:13.560 --> 00:38:18.599
language uh we have like
00:38:16.800 --> 00:38:22.720
um
00:38:18.599 --> 00:38:22.720
this is
00:38:23.599 --> 00:38:29.400
wonderful like this is wonderful is
00:38:26.599 --> 00:38:33.319
three tokens in language but if we look
00:38:29.400 --> 00:38:36.960
at it in speech it's going to be
00:38:33.319 --> 00:38:36.960
like many many
00:38:37.560 --> 00:38:46.079
frames so kind of
00:38:41.200 --> 00:38:47.680
the semantics of language is already
00:38:46.079 --> 00:38:48.960
kind of like if you look at a single
00:38:47.680 --> 00:38:51.599
token you already get something
00:38:48.960 --> 00:38:52.839
semantically meaningful um but in
00:38:51.599 --> 00:38:54.560
contrast if you're looking at like
00:38:52.839 --> 00:38:56.000
speech or you're looking at pixels and
00:38:54.560 --> 00:38:57.400
images or something like that you're not
00:38:56.000 --> 00:39:00.359
going to get something semantically
00:38:57.400 --> 00:39:01.920
meaningful uh so uh convolution is used
00:39:00.359 --> 00:39:03.359
a lot in that case and also you could
00:39:01.920 --> 00:39:06.079
create a convolutional model over
00:39:03.359 --> 00:39:08.599
characters as well
00:39:06.079 --> 00:39:10.599
um so what is convolution in the first
00:39:08.599 --> 00:39:13.319
place um as I mentioned before basically
00:39:10.599 --> 00:39:16.359
you take the local window uh around an
00:39:13.319 --> 00:39:19.680
input and you run it through um
00:39:16.359 --> 00:39:22.079
basically a model and a a good way to
00:39:19.680 --> 00:39:24.400
think about it is it's essentially a
00:39:22.079 --> 00:39:26.440
feed forward Network where you can
00:39:24.400 --> 00:39:28.240
concatenate uh all of the surrounding
00:39:26.440 --> 00:39:30.280
vectors together and run them through a
00:39:28.240 --> 00:39:34.400
linear transform like this so you can
00:39:30.280 --> 00:39:34.400
concatenate x t minus 1, x t, and x t plus 1
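A minimal sketch of that width-3 convolution, concatenating x_{t-1}, x_t, x_{t+1} and applying one shared linear transform at every position; the zero padding, dimensions, and tanh nonlinearity are illustrative assumptions.

    import numpy as np

    # Width-3 convolution over a sequence of vectors, as a shared feed-forward
    # transform over each concatenated local window.
    d_in, d_out, width = 4, 3, 3
    rng = np.random.default_rng(4)
    W = rng.normal(size=(d_out, width * d_in))

    xs = [rng.normal(size=d_in) for _ in range(6)]
    pad = [np.zeros(d_in)]
    padded = pad + xs + pad
    outputs = [np.tanh(W @ np.concatenate(padded[t:t + width])) for t in range(len(xs))]
    print(len(outputs), outputs[0].shape)    # 6 positions, each a (3,) vector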
00:39:35.880 --> 00:39:43.040
convolution can also be used in
00:39:39.440 --> 00:39:45.400
autoregressive models and normally like
00:39:43.040 --> 00:39:48.079
we think of it like this so we think
00:39:45.400 --> 00:39:50.640
that we're taking the previous one the
00:39:48.079 --> 00:39:53.839
current one and the next one and making
00:39:50.640 --> 00:39:54.960
a prediction based on this but this
00:39:53.839 --> 00:39:56.440
would be good for something like
00:39:54.960 --> 00:39:57.720
sequence labeling but it's not good for
00:39:56.440 --> 00:39:59.040
for something like language modeling
00:39:57.720 --> 00:40:01.400
because in language modeling we can't
00:39:59.040 --> 00:40:05.200
look at the future right but there's a
00:40:01.400 --> 00:40:07.280
super simple uh solution to this which
00:40:05.200 --> 00:40:11.280
is you have a convolution that just
00:40:07.280 --> 00:40:13.720
looks at the past basically um and
00:40:11.280 --> 00:40:15.319
predicts the next word based on the the
00:40:13.720 --> 00:40:16.760
you know current word in the past so
00:40:15.319 --> 00:40:19.520
here you would be predicting the word
00:40:16.760 --> 00:40:21.040
movie um this is actually essentially
00:40:19.520 --> 00:40:23.839
equivalent to the feed forward language
00:40:21.040 --> 00:40:25.880
model that I talked about last time uh
00:40:23.839 --> 00:40:27.240
so you can also think of that as a
00:40:25.880 --> 00:40:30.599
convolution
00:40:27.240 --> 00:40:32.119
a convolutional language model um so
00:40:30.599 --> 00:40:33.359
when whenever you say feed forward or
00:40:32.119 --> 00:40:36.160
convolutional language model they're
00:40:33.359 --> 00:40:38.880
basically the same uh modulo some uh
00:40:36.160 --> 00:40:42.359
some details about striding and stuff
00:40:38.880 --> 00:40:42.359
which I'm not going to talk about in class
00:40:43.000 --> 00:40:49.359
today cool um I covered convolution very
00:40:47.400 --> 00:40:51.440
briefly because it's also the least used
00:40:49.359 --> 00:40:53.400
of the three uh sequence modeling things
00:40:51.440 --> 00:40:55.400
in NLP nowadays but um are there any
00:40:53.400 --> 00:40:58.319
questions there or can I just run into
00:40:55.400 --> 00:40:58.319
attention
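Before moving on, here is a minimal sketch of the causal (past-only) convolution just described for language modeling; the left padding and dimensions are illustrative assumptions, and the output would still need a softmax over the vocabulary to predict the next word.

    import numpy as np

    # Causal convolution: the representation at position t is built only from
    # x_{t-2}, x_{t-1}, x_t, so the model never looks at the future.
    d_in, d_out, width = 4, 3, 3
    rng = np.random.default_rng(5)
    W = rng.normal(size=(d_out, width * d_in))

    xs = [rng.normal(size=d_in) for _ in range(6)]
    padded = [np.zeros(d_in)] * (width - 1) + xs          # pad on the left only
    hidden = [np.tanh(W @ np.concatenate(padded[t:t + width])) for t in range(len(xs))]
    print(len(hidden), hidden[0].shape)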
00:40:59.119 --> 00:41:04.040
okay cool I'll go into attention next so
00:41:02.400 --> 00:41:06.400
uh the basic idea about
00:41:04.040 --> 00:41:11.119
attention um
00:41:06.400 --> 00:41:12.839
is that we encode uh each token in the
00:41:11.119 --> 00:41:14.440
sequence into a
00:41:12.839 --> 00:41:19.119
vector
00:41:14.440 --> 00:41:21.640
um or so we have an input
00:41:19.119 --> 00:41:24.240
sequence that we'd like to encode over
00:41:21.640 --> 00:41:27.800
and we perform a linear combination of
00:41:24.240 --> 00:41:30.640
the vectors weighted by attention weights
00:41:27.800 --> 00:41:33.359
and there's two varieties of attention
00:41:30.640 --> 00:41:35.160
uh that are good to know about the first
00:41:33.359 --> 00:41:37.440
one is cross
00:41:35.160 --> 00:41:40.040
attention where each element in a sequence
00:41:37.440 --> 00:41:41.960
attends to elements of another sequence
00:41:40.040 --> 00:41:44.280
and this is widely used in encoder
00:41:41.960 --> 00:41:47.359
decoder models where you have one
00:41:44.280 --> 00:41:50.319
encoder and you have a separate decoder
00:41:47.359 --> 00:41:51.880
um these models the popular models that
00:41:50.319 --> 00:41:55.119
are like this that people still use a
00:41:51.880 --> 00:41:57.480
lot are T5 uh is a example of an encoder
00:41:55.119 --> 00:42:00.760
decoder model or mBART is another
00:41:57.480 --> 00:42:03.160
example of encoder decoder model um but
00:42:00.760 --> 00:42:07.880
basically the uh The Way Cross attention
00:42:03.160 --> 00:42:10.359
works is we have for example an English
00:42:07.880 --> 00:42:14.079
uh sentence here and we want to
00:42:10.359 --> 00:42:17.560
translate it into uh into a Japanese
00:42:14.079 --> 00:42:23.040
sentence and so when we output the first
00:42:17.560 --> 00:42:25.119
word we would mostly uh upweight this or
00:42:23.040 --> 00:42:26.800
sorry we have a we have a Japanese
00:42:25.119 --> 00:42:29.119
sentence and we would like to translate it
00:42:26.800 --> 00:42:31.680
into an English sentence for example so
00:42:29.119 --> 00:42:35.160
when we generate the first word in
00:42:31.680 --> 00:42:38.400
Japanese means this so in order to
00:42:35.160 --> 00:42:40.079
Output the first word we would first uh
00:42:38.400 --> 00:42:43.559
do a weighted sum of all of the
00:42:40.079 --> 00:42:46.240
embeddings of the Japanese sentence and
00:42:43.559 --> 00:42:49.359
we would focus probably most on this
00:42:46.240 --> 00:42:51.920
word up here C because it corresponds to
00:42:49.359 --> 00:42:51.920
the word
00:42:53.160 --> 00:42:59.800
this in the next step of generating an
00:42:55.960 --> 00:43:01.319
out output uh we would uh attend to
00:42:59.800 --> 00:43:04.119
different words because different words
00:43:01.319 --> 00:43:07.680
correspond to is so you would attend to
00:43:04.119 --> 00:43:11.040
which corresponds to is um when you
00:43:07.680 --> 00:43:12.599
output 'an' actually there's no word in the
00:43:11.040 --> 00:43:16.839
Japanese sentence that corresponds to 'an'
00:43:12.599 --> 00:43:18.720
so you might get a very like blob like
00:43:16.839 --> 00:43:21.319
uh in attention weight that doesn't look
00:43:18.720 --> 00:43:23.319
very uh that looks very smooth not very
00:43:21.319 --> 00:43:25.119
peaky and then when you do example you'd
00:43:23.319 --> 00:43:27.880
have strong attention on uh on the word
00:43:25.119 --> 00:43:29.400
that corresponds to example
00:43:27.880 --> 00:43:31.599
there's also self
00:43:29.400 --> 00:43:33.480
attention and um self attention
00:43:31.599 --> 00:43:36.000
basically what it does is each element
00:43:33.480 --> 00:43:38.640
in a sequence attends to elements of the
00:43:36.000 --> 00:43:40.240
same sequence and so this is a good way
00:43:38.640 --> 00:43:43.359
of doing sequence encoding just like we
00:43:40.240 --> 00:43:46.280
used rnns bidirectional rnns uh convolutional
00:43:43.359 --> 00:43:47.559
neural networks and so um the reason why
00:43:46.280 --> 00:43:50.119
you would want to do something like this
00:43:47.559 --> 00:43:52.760
just to give an example let's say we
00:43:50.119 --> 00:43:54.280
wanted to run this we wanted to encode
00:43:52.760 --> 00:43:56.920
the English sentence before doing
00:43:54.280 --> 00:44:00.040
something like translation into Japanese
00:43:56.920 --> 00:44:01.559
and if we did that um this maybe we
00:44:00.040 --> 00:44:02.960
don't need to attend to a whole lot of
00:44:01.559 --> 00:44:06.440
other things because it's kind of clear
00:44:02.960 --> 00:44:08.920
what this means but um
00:44:06.440 --> 00:44:10.880
'is' the way you would translate it would
00:44:08.920 --> 00:44:12.280
be rather heavily dependent on what the
00:44:10.880 --> 00:44:13.640
other words in the sentence are so you might
00:44:12.280 --> 00:44:17.280
want to attend to all the other words in
00:44:13.640 --> 00:44:20.559
the sentence say oh this 'is' is co-
00:44:17.280 --> 00:44:22.839
occurring with 'this' and 'example' and so
00:44:20.559 --> 00:44:24.440
if that's the case then well we would
00:44:22.839 --> 00:44:26.920
need to translate it in this way or we'd
00:44:24.440 --> 00:44:28.960
need to handle it in this way and that's
00:44:26.920 --> 00:44:29.880
exactly the same for you know any other
00:44:28.960 --> 00:44:32.720
sort of
00:44:29.880 --> 00:44:35.880
disambiguation uh style
00:44:32.720 --> 00:44:37.720
task so uh yeah we do something similar
00:44:35.880 --> 00:44:39.040
like this so basically cross attention
00:44:37.720 --> 00:44:42.520
is attending to a different sequence
00:44:39.040 --> 00:44:42.520
self attention is attending to the same
00:44:42.680 --> 00:44:46.559
sequence so how do we do this
00:44:44.960 --> 00:44:48.200
mechanistically in the first place so
00:44:46.559 --> 00:44:51.480
like let's say We're translating from
00:44:48.200 --> 00:44:52.880
Japanese to English um we would have uh
00:44:51.480 --> 00:44:55.960
and we're doing it with an encoder
00:44:52.880 --> 00:44:57.480
decoder model where we have already
00:44:55.960 --> 00:45:00.640
encoded the
00:44:57.480 --> 00:45:02.920
input sequence and now we're generating
00:45:00.640 --> 00:45:05.240
the output sequence with a for example a
00:45:02.920 --> 00:45:09.880
recurrent neural network um and so if
00:45:05.240 --> 00:45:12.400
that's the case we have uh 'I hate' uh
00:45:09.880 --> 00:45:14.440
like this and we want to predict the
00:45:12.400 --> 00:45:17.280
next word so what we would do is we
00:45:14.440 --> 00:45:19.480
would take the current state
00:45:17.280 --> 00:45:21.480
here and uh we use something called a
00:45:19.480 --> 00:45:22.760
query vector and the query Vector is
00:45:21.480 --> 00:45:24.880
essentially the vector that we want to
00:45:22.760 --> 00:45:28.720
use to decide what to attend
00:45:24.880 --> 00:45:31.800
to we then have key vectors and the key
00:45:28.720 --> 00:45:35.319
vectors are the vectors that we would
00:45:31.800 --> 00:45:37.480
like to use to decide which ones we
00:45:35.319 --> 00:45:40.720
should be attending
00:45:37.480 --> 00:45:42.040
to and then for each query key pair we
00:45:40.720 --> 00:45:45.319
calculate a
00:45:42.040 --> 00:45:48.319
weight and we do it like this um this
00:45:45.319 --> 00:45:50.680
gear here is some function that takes in
00:45:48.319 --> 00:45:53.200
the uh query vector and the key vector
00:45:50.680 --> 00:45:55.599
and outputs a weight and notably we use
00:45:53.200 --> 00:45:57.559
the same function every single time this
00:45:55.599 --> 00:46:00.960
is really important again because like
00:45:57.559 --> 00:46:03.760
RNN that allows us to extrapolate
00:46:00.960 --> 00:46:05.960
unlimited length sequences because uh we
00:46:03.760 --> 00:46:08.280
only have one set of you know we only
00:46:05.960 --> 00:46:10.359
have one function no matter how long the
00:46:08.280 --> 00:46:13.200
sequence gets so we can just apply it
00:46:10.359 --> 00:46:15.839
over and over and over
00:46:13.200 --> 00:46:17.920
again uh once we calculate these values
00:46:15.839 --> 00:46:20.839
we normalize so that they add up to one
00:46:17.920 --> 00:46:22.559
using the softmax function and um
00:46:20.839 --> 00:46:27.800
basically in this case that would be
00:46:22.559 --> 00:46:27.800
like 0.76 uh etc etc oops
00:46:28.800 --> 00:46:33.559
so step number two is once we have this
00:46:32.280 --> 00:46:37.839
uh these
00:46:33.559 --> 00:46:40.160
attention uh values here notably these
00:46:37.839 --> 00:46:41.359
values aren't really probabilities uh
00:46:40.160 --> 00:46:42.800
despite the fact that they're between
00:46:41.359 --> 00:46:44.240
zero and one and they add up to one
00:46:42.800 --> 00:46:47.440
because all we're doing is we're using
00:46:44.240 --> 00:46:50.480
them to uh to combine together uh
00:46:47.440 --> 00:46:51.800
multiple vectors so I we don't really
00:46:50.480 --> 00:46:53.319
normally call them attention
00:46:51.800 --> 00:46:54.680
probabilities or anything like that I
00:46:53.319 --> 00:46:56.319
just call them attention values or
00:46:54.680 --> 00:46:59.680
normalized attention values
00:46:56.319 --> 00:47:03.760
um but once we have these uh
00:46:59.680 --> 00:47:05.760
attention uh attention weights we have
00:47:03.760 --> 00:47:07.200
value vectors and these value vectors
00:47:05.760 --> 00:47:10.000
are the vectors that we would actually
00:47:07.200 --> 00:47:12.319
like to combine together to get the uh
00:47:10.000 --> 00:47:14.000
encoding here and so we take these
00:47:12.319 --> 00:47:17.559
vectors we do a weighted sum of the
00:47:14.000 --> 00:47:21.200
vectors and get a final sum
00:47:17.559 --> 00:47:22.920
here and we can take this uh sum and
00:47:21.200 --> 00:47:26.920
use it in any part of the model that we
00:47:22.920 --> 00:47:29.079
would like um and so this is very broad it
00:47:26.920 --> 00:47:31.200
can be used in any way now the most
00:47:29.079 --> 00:47:33.240
common way to use it is just have lots
00:47:31.200 --> 00:47:35.000
of self attention layers like in
00:47:33.240 --> 00:47:37.440
something in a Transformer but um you
00:47:35.000 --> 00:47:40.160
can also use it in decoder or other
00:47:37.440 --> 00:47:42.920
things like that as
00:47:40.160 --> 00:47:45.480
well this is an actual graphical example
00:47:42.920 --> 00:47:47.319
from the original attention paper um I'm
00:47:45.480 --> 00:47:50.000
going to give some other examples from
00:47:47.319 --> 00:47:52.480
Transformers in the next class but
00:47:50.000 --> 00:47:55.400
basically you can see that the attention
00:47:52.480 --> 00:47:57.559
weights uh for this English to French I
00:47:55.400 --> 00:48:00.520
think it's English French translation
00:47:57.559 --> 00:48:02.920
task basically um overlap with what you
00:48:00.520 --> 00:48:04.440
would expect uh if you can read English
00:48:02.920 --> 00:48:06.599
and French it's kind of the words that
00:48:04.440 --> 00:48:09.319
are semantically similar to each other
00:48:06.599 --> 00:48:12.920
um it even learns to do this reordering
00:48:09.319 --> 00:48:14.880
uh in an appropriate way here and all of
00:48:12.920 --> 00:48:16.720
this is completely unsupervised so you
00:48:14.880 --> 00:48:18.079
never actually give the model
00:48:16.720 --> 00:48:19.440
information about what it should be
00:48:18.079 --> 00:48:21.559
attending to it's all learned through
00:48:19.440 --> 00:48:23.520
gradient descent and the model learns to
00:48:21.559 --> 00:48:27.640
do this by making the embeddings of the
00:48:23.520 --> 00:48:27.640
key and query vectors closer together
00:48:28.440 --> 00:48:33.240
cool
00:48:30.000 --> 00:48:33.240
um any questions
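A minimal sketch of the query/key/value computation described above, using a plain dot product as the score function purely for illustration (the actual score functions are discussed next); the vectors are random stand-ins for encoder and decoder states.

    import numpy as np

    # Attention as a weighted sum: score each key against the query, softmax the
    # scores into weights, then take the weighted sum of the value vectors.
    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    d = 4
    rng = np.random.default_rng(6)
    query = rng.normal(size=d)                      # e.g. the current decoder state
    keys = rng.normal(size=(5, d))                  # one key per input token
    values = rng.normal(size=(5, d))                # one value per input token

    scores = keys @ query                           # one score per query-key pair
    weights = softmax(scores)                       # normalized attention weights
    context = weights @ values                      # weighted sum of the value vectors
    print(weights, context.shape)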
00:48:33.800 --> 00:48:40.040
okay so um next I'd like to go
00:48:38.440 --> 00:48:41.680
a little bit into how we actually
00:48:40.040 --> 00:48:43.599
calculate the attention score function
00:48:41.680 --> 00:48:44.839
so that's the little gear that I had on
00:48:43.599 --> 00:48:50.280
my
00:48:44.839 --> 00:48:53.559
uh my slide before so here Q is a query
00:48:50.280 --> 00:48:56.440
and K is the key um the original
00:48:53.559 --> 00:48:58.400
attention paper used a multi-layer
00:48:56.440 --> 00:49:00.119
uh a multi-layer neural network to
00:48:58.400 --> 00:49:02.440
calculate this so basically what it did
00:49:00.119 --> 00:49:05.319
is it concatenated the query and key
00:49:02.440 --> 00:49:08.000
Vector together multiplied it by a
00:49:05.319 --> 00:49:12.240
weight Matrix calculated a tan H and
00:49:08.000 --> 00:49:15.040
then ran it through uh a weight
00:49:12.240 --> 00:49:19.799
Vector so this
00:49:15.040 --> 00:49:22.480
is essentially very expressive
00:49:19.799 --> 00:49:24.799
um uh it's flexible it's often good with
00:49:22.480 --> 00:49:27.960
large data but it adds extra parameters
00:49:24.799 --> 00:49:30.359
and uh computation time uh to your
00:49:27.960 --> 00:49:31.559
calculations here so it's not as widely
00:49:30.359 --> 00:49:34.359
used
00:49:31.559 --> 00:49:37.799
anymore the uh other thing which was
00:49:34.359 --> 00:49:41.599
proposed by Luong et al. is a bilinear
00:49:37.799 --> 00:49:43.200
function um and a bilinear function
00:49:41.599 --> 00:49:45.920
basically what it does is it has your
00:49:43.200 --> 00:49:48.319
key Vector it has your query vector and
00:49:45.920 --> 00:49:51.440
it has a matrix in between them like
00:49:48.319 --> 00:49:53.000
this and uh then you calculate uh you
00:49:51.440 --> 00:49:54.520
calculate the
00:49:53.000 --> 00:49:56.680
score
00:49:54.520 --> 00:49:59.880
so
00:49:56.680 --> 00:50:03.200
this is uh nice because it basically um
00:49:59.880 --> 00:50:05.760
Can Transform uh the key and
00:50:03.200 --> 00:50:08.760
query uh together
00:50:05.760 --> 00:50:08.760
here
00:50:09.119 --> 00:50:13.559
um people have also experimented with
00:50:11.760 --> 00:50:16.079
DOT product and the dot product is
00:50:13.559 --> 00:50:19.839
basically query times
00:50:16.079 --> 00:50:23.480
key uh query transpose times key or
00:50:19.839 --> 00:50:25.760
query dot key this is okay but the problem
00:50:23.480 --> 00:50:27.280
with this is then the query vector and
00:50:25.760 --> 00:50:30.160
the key vectors have to be in exactly
00:50:27.280 --> 00:50:31.920
the same space and that's kind of too
00:50:30.160 --> 00:50:34.799
hard of a constraint so it doesn't scale
00:50:31.920 --> 00:50:38.000
very well if you're um if you're working
00:50:34.799 --> 00:50:40.839
hard uh if you're uh like training on
00:50:38.000 --> 00:50:45.400
lots of data um then the scaled dot
00:50:40.839 --> 00:50:47.880
product um the scale dot product here uh
00:50:45.400 --> 00:50:50.079
one problem is that the scale of the dot
00:50:47.880 --> 00:50:53.680
product increases as the dimensions get
00:50:50.079 --> 00:50:55.880
larger and so there's a fix to scale by
00:50:53.680 --> 00:50:58.839
the square root of the length of one of
00:50:55.880 --> 00:51:00.680
the vectors um and so basically you're
00:50:58.839 --> 00:51:04.559
multiplying uh you're taking the dot
00:51:00.680 --> 00:51:06.559
product but you're dividing by the uh
00:51:04.559 --> 00:51:09.359
the square root of the length of one of
00:51:06.559 --> 00:51:11.839
the vectors uh does anyone have an idea
00:51:09.359 --> 00:51:13.599
why you might take the square root here
00:51:11.839 --> 00:51:16.920
if you've taken a machine
00:51:13.599 --> 00:51:20.000
learning uh or maybe statistics class
00:51:16.920 --> 00:51:20.000
you might have a an
00:51:20.599 --> 00:51:26.599
idea any any ideas yeah it's normalization
00:51:24.720 --> 00:51:29.079
to make sure
00:51:26.599 --> 00:51:32.760
because otherwise it will impact the
00:51:29.079 --> 00:51:35.640
result because we want normalize one yes
00:51:32.760 --> 00:51:37.920
so we do we do want to normalize it um
00:51:35.640 --> 00:51:40.000
and so that's the reason why we divide
00:51:37.920 --> 00:51:41.920
by the length um and that prevents it
00:51:40.000 --> 00:51:43.839
from getting too large
00:51:41.920 --> 00:51:45.920
specifically does anyone have an idea
00:51:43.839 --> 00:51:49.440
why you take the square root here as
00:51:45.920 --> 00:51:49.440
opposed to dividing just by the length
00:51:52.400 --> 00:51:59.480
overall so um this is this is pretty
00:51:55.400 --> 00:52:01.720
tough and actually uh we I didn't know
00:51:59.480 --> 00:52:04.359
one of the last times I did this class
00:52:01.720 --> 00:52:06.640
uh and had to actually go look for it
00:52:04.359 --> 00:52:09.000
but basically the reason why is because
00:52:06.640 --> 00:52:11.400
if you um if you have a whole bunch of
00:52:09.000 --> 00:52:12.720
random variables so let's say you have a
00:52:11.400 --> 00:52:14.040
whole bunch of random variables no
00:52:12.720 --> 00:52:15.240
matter what kind they are as long as
00:52:14.040 --> 00:52:19.680
they're from the same distribution
00:52:15.240 --> 00:52:19.680
they're IID and you add them all
00:52:20.160 --> 00:52:25.720
together um then the variance I believe
00:52:23.200 --> 00:52:27.760
yeah the variance or the
00:52:25.720 --> 00:52:31.119
standard deviation maybe the standard
00:52:27.760 --> 00:52:33.319
deviation of this goes uh goes up with the
00:52:31.119 --> 00:52:35.640
square root uh yeah I think the standard
00:52:33.319 --> 00:52:38.880
deviation goes
00:52:35.640 --> 00:52:41.040
up by the square root so dividing by that is
00:52:38.880 --> 00:52:44.040
dividing by the standard deviation
00:52:41.040 --> 00:52:48.240
here so it's like normalizing by
00:52:44.040 --> 00:52:51.040
that so um it's a it's that's actually I
00:52:48.240 --> 00:52:53.359
don't think explicitly explained and the
00:52:51.040 --> 00:52:54.720
uh attention is all you need paper uh
00:52:53.359 --> 00:52:57.920
the vasani paper where they introduce
00:52:54.720 --> 00:53:01.079
this but that's basic idea um in terms
00:52:57.920 --> 00:53:03.839
of what people use most widely nowadays
00:53:01.079 --> 00:53:07.680
um they
00:53:03.839 --> 00:53:07.680
are basically doing
00:53:24.160 --> 00:53:27.160
this
00:53:30.280 --> 00:53:34.880
so they're taking the hidden state
00:53:33.000 --> 00:53:36.599
from the keys and multiplying it by a
00:53:34.880 --> 00:53:39.440
matrix the hidden state for the queries
00:53:36.599 --> 00:53:41.680
and multiplying it by a matrix um this
00:53:39.440 --> 00:53:46.559
is what is done in uh in
00:53:41.680 --> 00:53:50.280
Transformers and the uh and then they're
00:53:46.559 --> 00:53:54.160
using this to um they're normalizing it
00:53:50.280 --> 00:53:57.160
by this uh square root here
00:53:54.160 --> 00:53:57.160
and
00:53:59.440 --> 00:54:05.040
so this is essentially a bilinear
00:54:02.240 --> 00:54:07.680
model um it's a bilinear model that is
00:54:05.040 --> 00:54:09.119
normalized uh they call it uh scaled dot
00:54:07.680 --> 00:54:11.119
product attention but actually because
00:54:09.119 --> 00:54:15.520
they have these weight matrices uh it's
00:54:11.119 --> 00:54:18.839
a bilinear model so um that's the the
00:54:15.520 --> 00:54:18.839
most standard thing to be used
00:54:20.200 --> 00:54:24.079
nowadays cool any any questions about
00:54:22.520 --> 00:54:27.079
this
00:54:24.079 --> 00:54:27.079
part
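A minimal sketch contrasting the score functions discussed here: the plain dot product, the scaled dot product, and the bilinear form with key and query projection matrices that Transformer-style attention effectively computes; the random matrices stand in for learned parameters.

    import numpy as np

    # Three attention score variants for a single query-key pair.
    d = 8
    rng = np.random.default_rng(7)
    q, k = rng.normal(size=d), rng.normal(size=d)
    W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))

    dot = q @ k
    scaled_dot = q @ k / np.sqrt(d)                 # divide by sqrt of the dimension
    bilinear = (W_q @ q) @ (W_k @ k) / np.sqrt(d)   # scaled dot product after projecting
                                                    # queries and keys with learned matrices
    print(dot, scaled_dot, bilinear)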
00:54:28.240 --> 00:54:36.559
okay so um finally when you actually
00:54:32.280 --> 00:54:36.559
train the model um as I mentioned
00:54:41.960 --> 00:54:45.680
before right at the very
00:54:48.040 --> 00:54:52.400
beginning
00:54:49.839 --> 00:54:55.760
we when we're training an autor
00:54:52.400 --> 00:54:57.400
regressive model we don't want to be
00:54:55.760 --> 00:54:59.799
referring to the Future to things in the
00:54:57.400 --> 00:55:01.240
future um because then you know
00:54:59.799 --> 00:55:03.079
basically we'd be cheating and we'd have
00:55:01.240 --> 00:55:04.599
a nonprobabilistic model it wouldn't be
00:55:03.079 --> 00:55:08.960
good when we actually have to generate
00:55:04.599 --> 00:55:12.119
left to right um and
00:55:08.960 --> 00:55:15.720
so we essentially want to prevent
00:55:12.119 --> 00:55:17.480
ourselves from using information from
00:55:15.720 --> 00:55:20.319
the
00:55:17.480 --> 00:55:22.839
future
00:55:20.319 --> 00:55:24.240
and in an unconditioned model we want to
00:55:22.839 --> 00:55:27.400
prevent ourselves from using any
00:55:24.240 --> 00:55:29.680
information in the future here um in a
00:55:27.400 --> 00:55:31.520
conditioned model we're okay with doing
00:55:29.680 --> 00:55:33.480
kind of bi
00:55:31.520 --> 00:55:35.880
directional conditioning here to
00:55:33.480 --> 00:55:37.359
calculate the representations but we're
00:55:35.880 --> 00:55:40.440
not okay with doing it on the target
00:55:37.359 --> 00:55:40.440
side so basically what we
00:55:44.240 --> 00:55:50.960
do basically what we do is we create a
00:55:47.920 --> 00:55:52.400
mask that prevents us from attending to
00:55:50.960 --> 00:55:54.559
any of the information in the future
00:55:52.400 --> 00:55:56.440
when we're uh predicting when we're
00:55:54.559 --> 00:56:00.799
calculating the representations of the
00:55:56.440 --> 00:56:04.880
the current thing uh word and
00:56:00.799 --> 00:56:08.280
technically how we do this is we have
00:56:04.880 --> 00:56:08.280
the attention
00:56:09.079 --> 00:56:13.799
values uh like
00:56:11.680 --> 00:56:15.480
2.1
00:56:13.799 --> 00:56:17.880
attention
00:56:15.480 --> 00:56:19.920
0.3 and
00:56:17.880 --> 00:56:22.480
attention uh
00:56:19.920 --> 00:56:24.960
0.5 or something like
00:56:22.480 --> 00:56:27.480
that these are eventually going to be
00:56:24.960 --> 00:56:29.799
fed through the softmax to calculate
00:56:27.480 --> 00:56:32.119
the attention values that we use to do
00:56:29.799 --> 00:56:33.680
the weighting so what we do is any ones we
00:56:32.119 --> 00:56:36.160
don't want to attend to we just add
00:56:33.680 --> 00:56:39.799
negative infinity or add a very large
00:56:36.160 --> 00:56:42.119
negative number so we uh cross that out
00:56:39.799 --> 00:56:44.000
and set this the negative infinity and
00:56:42.119 --> 00:56:45.440
so then when we take the softmax basically
00:56:44.000 --> 00:56:47.839
the value goes to zero and we don't
00:56:45.440 --> 00:56:49.359
attend to it so um this is called the
00:56:47.839 --> 00:56:53.240
attention mask and you'll see it when
00:56:49.359 --> 00:56:53.240
you have to implement
00:56:53.440 --> 00:56:56.880
attention cool
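A minimal sketch of the attention mask just described: a very large negative number is added to the scores of future positions so the softmax drives their weights to effectively zero; the score values and sequence length are made up.

    import numpy as np

    # Causal attention mask over a short sequence of example scores.
    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    scores = np.array([2.1, 0.3, 0.5, 1.0])   # scores against positions 1..4
    current_position = 1                       # 0-indexed: may look at positions 0 and 1 only
    mask = np.zeros(4)
    mask[current_position + 1:] = -1e9         # "negative infinity" for future positions

    print(softmax(scores + mask))              # weights for positions 3 and 4 are ~0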
00:56:57.039 --> 00:57:00.200
any any questions about
00:57:02.079 --> 00:57:08.599
this okay great um so next I'd like to
00:57:05.839 --> 00:57:11.039
go to Applications of sequence models um
00:57:08.599 --> 00:57:13.200
there's a bunch of ways that you can use
00:57:11.039 --> 00:57:16.160
sequence models of any variety I wrote
00:57:13.200 --> 00:57:18.400
RNN here arbitrarily but it could be
00:57:16.160 --> 00:57:21.720
convolution or Transformer or anything
00:57:18.400 --> 00:57:23.559
else so the first one is encoding
00:57:21.720 --> 00:57:26.839
sequences
00:57:23.559 --> 00:57:29.240
um and essentially if you do it with an
00:57:26.839 --> 00:57:31.559
RNN this is one way you can encode a
00:57:29.240 --> 00:57:35.799
sequence basically you take the
00:57:31.559 --> 00:57:36.960
last uh value here and you use it to uh
00:57:35.799 --> 00:57:40.559
encode the
00:57:36.960 --> 00:57:42.720
output this can be used for any sort of
00:57:40.559 --> 00:57:45.839
uh like binary or multiclass prediction
00:57:42.720 --> 00:57:48.280
problem it's also right now used very
00:57:45.839 --> 00:57:50.920
widely in sentence representations for
00:57:48.280 --> 00:57:54.200
retrieval uh so for example you build a
00:57:50.920 --> 00:57:55.520
big retrieval index uh with these
00:57:54.200 --> 00:57:57.920
vectors
00:57:55.520 --> 00:57:59.480
and then you also
00:57:57.920 --> 00:58:02.119
encode a query and you do a vector
00:57:59.480 --> 00:58:04.760
nearest neighbor search to look up uh
00:58:02.119 --> 00:58:06.760
the most similar sentence here so this
00:58:04.760 --> 00:58:10.160
is uh these are two applications where
00:58:06.760 --> 00:58:13.440
you use something like this right on
00:58:10.160 --> 00:58:15.520
this slide I wrote that you use the last
00:58:13.440 --> 00:58:17.359
Vector here but actually a lot of the
00:58:15.520 --> 00:58:20.039
time it's also a good idea to just take
00:58:17.359 --> 00:58:22.599
the mean of the vectors or take the max
00:58:20.039 --> 00:58:26.640
of all of the vectors
00:58:22.599 --> 00:58:29.119
uh in fact I would almost I would almost
00:58:26.640 --> 00:58:30.520
say that that's usually a better choice
00:58:29.119 --> 00:58:32.760
if you're doing any sort of thing where
00:58:30.520 --> 00:58:35.359
you need a single Vector unless your
00:58:32.760 --> 00:58:38.200
model has been specifically trained to
00:58:35.359 --> 00:58:41.480
have good like output vectors uh from
00:58:38.200 --> 00:58:44.359
the final Vector here so um you could
00:58:41.480 --> 00:58:46.880
also just take the mean of all of
00:58:44.359 --> 00:58:46.880
the purple
00:58:48.240 --> 00:58:52.960
ones um another thing you can do is
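Here is a minimal, hedged sketch (my own example) of the three pooling choices just mentioned, starting from a matrix of per-token hidden states.

```python
import torch

# h: per-token hidden states from any sequence encoder, shape (seq_len, d); placeholder values
h = torch.randn(12, 256)

last_vec = h[-1]                  # use the final position's vector
mean_vec = h.mean(dim=0)          # mean-pool over the sequence
max_vec = h.max(dim=0).values     # max-pool over the sequence

# Any of these can serve as the single sentence vector for classification
# or for building a retrieval index.
```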
Another thing you can do is encode tokens for sequence labeling. This can also be used for language modeling; what I mean by that is you can view it as first running sequence encoding over the whole input, and then after that making all of the predictions.
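For instance, here is a small sketch (my own example) of that "encode everything, then predict at every position" view, where one shared linear layer turns each token's hidden state into label or vocabulary logits.

```python
import torch
import torch.nn as nn

seq_len, d, num_labels = 12, 256, 50
h = torch.randn(seq_len, d)           # one hidden state per token, from any encoder

predict = nn.Linear(d, num_labels)    # shared prediction head
logits = predict(h)                   # (seq_len, num_labels): all predictions made at once
```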
It's also a good thing to know computationally, because you can often do sequence encoding all in parallel. Actually, I said I was going to mention this but I don't think I have a slide about it: one important thing about RNNs compared to convolution or attention is that with an RNN, in order to calculate this step you need to wait for the previous step to finish. It's sequential: you go here, then here, then here, and that's a pretty big bottleneck, because GPUs and TPUs are actually really good at doing a bunch of things at once. So attention, even though its asymptotic complexity is worse, O(n^2), can be way, way faster on a GPU just because you don't have that bottleneck of doing things sequentially; you're not wasting time waiting for the previous thing to be calculated. That's actually why Transformers, and attention models in general, are so fast.
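A hedged sketch (my own example) of why that matters in practice: the RNN needs an explicit loop over time because each hidden state depends on the previous one, while attention over all positions is just a couple of batched matrix multiplies.

```python
import torch
import torch.nn.functional as F

seq_len, d = 128, 64
x = torch.randn(seq_len, d)
W_x, W_h = torch.randn(d, d), torch.randn(d, d)

# RNN-style encoding: inherently sequential, one step at a time
h = torch.zeros(d)
states = []
for t in range(seq_len):
    h = torch.tanh(x[t] @ W_x + h @ W_h)   # h_t depends on h_{t-1}
    states.append(h)

# Self-attention-style encoding: every position computed in one shot
scores = x @ x.T / d ** 0.5
out = F.softmax(scores, dim=-1) @ x        # no step-by-step dependency
```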
So that's one of the big reasons why attention models are so popular nowadays: they're fast to calculate on modern hardware. Another reason why attention models are popular has to do with how easy they are to learn, and the reason for that involves something I introduced earlier in this lecture. I'll give a hint.

Gradients, yeah. More specifically, what's nice about attention with respect to gradients, or vanishing gradients?
Any ideas? Let's say we have a really long sentence, like x1, x2, x3, x4, ..., x200 over here, and in order to predict x200 you need to pay attention to x3. Any ideas? Another hint: how many nonlinearities do you have to pass through in order to get that information from x3 to x200, in a recurrent network or in an attention network?

Someone says 197. Yeah, in a recurrent network it's basically 197, or maybe 196, I haven't counted carefully, but every time the information passes into the hidden state it has to go through a nonlinearity, so it goes through about 197 nonlinearities. And even if you're using an LSTM, the LSTM cell is still getting information added to it and subtracted from it and other things like that, so it's still a bit tricky. What about attention?
Yeah, basically one time. With attention, in the next layer here you're passing all of the information in directly, and the only qualification is that the attention weight has to be good: it has to find a good weight so that it's actually paying attention to that information. This is actually discussed in the Vaswani et al. "Attention Is All You Need" paper that introduced Transformers. Convolutions are kind of in the middle. Say you have a convolution of length 10: information passes from the 10 previous positions, and then again from the 10 previous positions at the next layer, so you would have to go through something like 16, or almost 20, layers of convolution to pass that information along. So convolutions sit somewhere between RNNs and attention models.
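As a rough back-of-the-envelope check of those numbers (my own sketch, using the lecture's x3-to-x200 example; the receptive-field formula is the usual approximation for stacked convolutions, not something stated in the lecture):

```python
import math

distance = 200 - 3           # tokens separating the two positions

# RNN: roughly one nonlinearity per time step
rnn_steps = distance                          # ~197

# Stacked convolutions with kernel size k: each layer extends the
# receptive field by about (k - 1) positions
k = 10
conv_layers = math.ceil(distance / (k - 1))   # ~22, i.e. on the order of 20 layers

# Self-attention: any position can attend to any other in a single layer
attention_steps = 1

print(rnn_steps, conv_layers, attention_steps)
```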
Yeah, question: regarding how you have to wait for one RNN step before the next, can you do inference on one RNN step once it's done, even though the next one is computing off of it?

Yes, you can do inference that way, as long as the output doesn't affect the next input. In this case, because of language modeling or generation, the output does affect the next input, so if you're predicting the output you have to wait. But if you already know the output, you can make the prediction at the same time as calculating the next hidden state. So if you're just calculating the probability of a known sequence, you can do that, and that's actually where Transformers, or attention models, shine. Attention models actually aren't great for generation, and the reason is that when you're generating the next token you still need to wait: you can't calculate in parallel, because you need to generate the next token before you can encode it. So you can't do everything in parallel, and Transformers are actually slow for generation. There are models, and I don't know if people are using them super widely now, but there were actually machine translation models put in production that had a really big, strong Transformer encoder and then a tiny, fast RNN decoder.
If you want an actual reference, there's this "deep encoder, shallow decoder" line of work, and there's also the Marian machine translation toolkit that supports those kinds of setups. It's also the reason why, if you're using something like the GPT models through the API, decoding is more expensive than encoding. I forget exactly, but is it $0.03 per 1,000 tokens for encoding and $0.06 per 1,000 tokens for decoding in GPT-4, or something like that? The reason is precisely this: it's just so much more expensive to run the decoder.
Cool. I have a few final things, also about efficiency; these go back to the efficiency topics I talked about last time. First, handling mini-batching. What do we have to do when we're handling mini-batching? In feed-forward networks it's actually relatively easy, because all of our computations are the same shape, so we just concatenate everything together into a big tensor and run over it. We saw that mini-batching makes things much faster, but mini-batching in sequence modeling is harder than in feed-forward networks. One reason is that in RNNs each word depends on the previous word; another is that sequences are of various lengths. So what we do to handle this is padding and masking.

We can do padding like this: we just add extra tokens at the end to make all of the sequences the same length. If we're doing an encoder-decoder style model, where we have an input and we want to generate all the outputs based on the input, one easy thing is to add pads to the beginning instead; it doesn't really matter much, but it means the outputs all start at the same place, which helps especially if you're using RNN-style models. Then, when we calculate the loss over the output, we multiply the loss by a mask to remove the loss over the tokens we don't care about, and take the sum. Luckily most of this is already implemented in, for example, PyTorch or Hugging Face Transformers, so you don't need to worry about it, but it's a good idea to know what's going on under the hood if you want to implement anything unusual.
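Here is a minimal, hedged sketch of padding plus loss masking (my own example; the pad index, shapes, and vocabulary size are placeholders), showing how pad positions contribute nothing to the summed loss.

```python
import torch
import torch.nn.functional as F

PAD = 0  # placeholder pad token id

# Two sequences of different lengths, padded to the same length
targets = torch.tensor([[5, 7, 2, PAD],
                        [4, 9, 6, 3]])          # (batch, seq_len)
logits = torch.randn(2, 4, 100)                 # (batch, seq_len, vocab)

mask = (targets != PAD).float()                 # 1 for real tokens, 0 for pads

loss_per_token = F.cross_entropy(
    logits.view(-1, 100), targets.view(-1), reduction="none"
).view(2, 4)

loss = (loss_per_token * mask).sum()            # pad positions are masked out of the loss
```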
It's also good to know about for the following reason, which is bucketing and sorting. If we put sentences of vastly different lengths in the same mini-batch, it can waste a really large amount of computation. Say we're processing documents, or movie reviews, or something like that, and most movie reviews are about 10 words long, but you have one movie review in your mini-batch of a thousand words. What that means is that you're padding most of your sequences with 990 extra tokens just to process those 10-word sequences, which is a lot of waste, because you're running all of that padding through your GPU. One way to remedy this is to sort the sentences so that similarly sized sequences end up in the same batch: you first sort before building all of your batches, and then similarly sized ones land in the same batch.
This goes back to a problem I mentioned before, but only in passing. Let's say you're sizing your mini-batch by the number of sequences you're processing: you say, okay, I want 64 sequences in my mini-batch. If most of the time those 64 sequences are 10 tokens long, that's fine; but when you hit the one mini-batch where every sequence is a thousand tokens, suddenly you're going to run out of GPU memory and training is going to crash. And you really don't want that to happen when you started running your homework assignment, went to bed, and then wake up to find it crashed 15 minutes into computing or something. So this is an important thing to be aware of practically. Again, this can be handled by a lot of toolkits, I believe fairseq does it and Hugging Face does it if you set the appropriate settings, but it's something you should be aware of. Another note is that if you do this, it reduces the randomness in your data ordering, and stochastic gradient descent relies quite heavily on the fact that your ordering of data is randomized, or at least distributed appropriately. So this is a good thing to think about.
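A hedged sketch (my own example) of length-based bucketing: sort by length, then cap each batch by its padded token count rather than by the number of sequences, so the occasional very long sequence doesn't blow up memory.

```python
def make_batches(sentences, max_tokens=2000):
    """Group sentences so each batch's padded size stays under max_tokens."""
    # Sort so similarly sized sentences land in the same batch
    ordered = sorted(sentences, key=len)
    batches, batch, longest = [], [], 0
    for sent in ordered:
        longest = max(longest, len(sent))
        # Padded cost of the batch if we were to add this sentence
        if batch and longest * (len(batch) + 1) > max_tokens:
            batches.append(batch)
            batch, longest = [], len(sent)
        batch.append(sent)
    if batch:
        batches.append(batch)
    return batches  # remember to shuffle the batch order between epochs
```

The shuffle comment matters precisely because of the randomness caveat above: sorting trades some of the randomness that stochastic gradient descent relies on for efficiency, so at minimum the order of the batches themselves should be re-randomized.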
Another really useful thing to think about is strided architectures. Strided architectures appear in RNNs, in convolution, and in Transformers or attention-based models, and they're called different things in each: in RNNs they're called pyramidal RNNs, in convolution they're called strided architectures, and in attention they're usually called sparse attention. They all mean roughly the same thing. Basically, you have a multi-layer model, and in the higher layers you don't process every input from the previous layer. Here's an example: say you have a whole bunch of inputs, and each input is processed in the first layer in some way, but in the second layer you feed, for example, two inputs into each RNN state and skip ahead, so you have one state that corresponds to positions one and two, another that corresponds to positions two and three, another that corresponds to positions three and four. What that means is that you can gradually decrease the length of the sequence every time you go up a layer. This is a really useful thing to do if you're processing very long sequences, so you should be aware of it.
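A minimal, hedged sketch (my own example, in the style of a pyramidal RNN; it merges non-overlapping adjacent pairs, one common variant of the idea) of halving the sequence length between layers before the next layer processes it.

```python
import torch
import torch.nn as nn

d = 64
x = torch.randn(1, 16, d)               # (batch, seq_len, hidden)

layer1 = nn.GRU(d, d, batch_first=True)
layer2 = nn.GRU(2 * d, d, batch_first=True)

h1, _ = layer1(x)                        # (1, 16, d)

# Merge each adjacent pair of states so the next layer sees half as many steps
b, t, _ = h1.shape
merged = h1.reshape(b, t // 2, 2 * d)    # (1, 8, 2d)

h2, _ = layer2(merged)                   # (1, 8, d): sequence length halved
```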
Cool, everything okay? Okay. The final thing is truncated backpropagation through time. What this does is basically: you do backprop over shorter segments, but you initialize with the state from the previous segment. The way this works is, for example, if you're running an RNN, you run the RNN over the previous segment, maybe it's length 4, maybe it's length 400, it doesn't really matter, but it's a segment of some fixed length; then, when you do the next segment, you only pass the hidden state forward and throw away the rest of the previous computation graph, and you walk through like this. You won't actually be updating the parameters of the previous segment based on the loss from this one, but you're still passing the information along, so the model can use information from the previous state. This example is from RNNs, and it's used pretty widely with RNNs, but there are also a lot of Transformer architectures that do things like this. The original one is something called Transformer-XL, which was actually created here at CMU, but this is also used in the new Mistral models and other things like that, so it's something that's still very much alive and well nowadays.
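A minimal, hedged sketch (my own example) of truncated backpropagation through time: each segment's loss is backpropagated only within the segment, while the hidden state is detached and carried forward into the next one.

```python
import torch
import torch.nn as nn

vocab, d, seg_len = 100, 64, 32
data = torch.randint(0, vocab, (1, 10 * seg_len + 1))   # one long token stream

embed = nn.Embedding(vocab, d)
rnn = nn.GRU(d, d, batch_first=True)
head = nn.Linear(d, vocab)
opt = torch.optim.SGD(
    [*embed.parameters(), *rnn.parameters(), *head.parameters()], lr=0.1
)

h = None
for start in range(0, data.size(1) - 1, seg_len):
    x = data[:, start:start + seg_len]
    y = data[:, start + 1:start + seg_len + 1]
    out, h = rnn(embed(x), h)
    loss = nn.functional.cross_entropy(head(out).transpose(1, 2), y)
    opt.zero_grad()
    loss.backward()          # gradients stay inside this segment
    opt.step()
    h = h.detach()           # carry the state forward, but cut the graph here
```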
Cool, that's all I have for today. Are there any questions people want to ask before we wrap up?

Yeah: for conditioned prediction, what is source X and target Y? I think I maybe carried over some terminology from machine translation by accident; it should really be input X and output Y, which would be a better way to put it. It could be anything: for translation it's something in the source language and something in the target language, like English and Japanese; if it's just a regular language model, it could be something like a prompt and the output.

And for unconditioned prediction, what would be an example of that? Yeah, unconditioned prediction could just be straight-up language modeling, for example, language modeling without necessarily any prompt. Okay, thanks. Anything else?

Okay, great, thanks a lot. I'm happy to take questions.
|