|
1 |
|
00:00:00,399 --> 00:00:04,120 |
|
so this time I'm going to be talking |
|
|
|
2 |
|
00:00:02,080 --> 00:00:05,799 |
|
about language modeling uh obviously |
|
|
|
3 |
|
00:00:04,120 --> 00:00:07,240 |
|
language modeling is a big topic and I'm |
|
|
|
4 |
|
00:00:05,799 --> 00:00:09,880 |
|
not going to be able to cover it all in |
|
|
|
5 |
|
00:00:07,240 --> 00:00:11,320 |
|
one class but this is kind of the basics |
|
|
|
6 |
|
00:00:09,880 --> 00:00:13,080 |
|
of uh what does it mean to build a |
|
|
|
7 |
|
00:00:11,320 --> 00:00:15,320 |
|
language model what is a language model |
|
|
|
8 |
|
00:00:13,080 --> 00:00:18,439 |
|
how do we evaluate language models and |
|
|
|
9 |
|
00:00:15,320 --> 00:00:19,920 |
|
other stuff like that and around the end |
|
|
|
10 |
|
00:00:18,439 --> 00:00:21,320 |
|
I'm going to talk a little bit about |
|
|
|
11 |
|
00:00:19,920 --> 00:00:23,039 |
|
efficiently implementing things in |
|
|
|
12 |
|
00:00:21,320 --> 00:00:25,080 |
|
neural networks it's not directly |
|
|
|
13 |
|
00:00:23,039 --> 00:00:27,760 |
|
related to language models but it's very |
|
|
|
14 |
|
00:00:25,080 --> 00:00:31,200 |
|
important to know how to do uh to solve |
|
|
|
15 |
|
00:00:27,760 --> 00:00:34,200 |
|
your assignments so I'll cover both |
|
|
|
16 |
|
00:00:31,200 --> 00:00:34,200 |
|
is |
|
|
|
17 |
|
00:00:34,239 --> 00:00:38,480 |
|
cool okay so the first thing I'd like to |
|
|
|
18 |
|
00:00:36,760 --> 00:00:41,239 |
|
talk about is generative versus |
|
|
|
19 |
|
00:00:38,480 --> 00:00:43,000 |
|
discriminative models and the reason why |
|
|
|
20 |
|
00:00:41,239 --> 00:00:45,280 |
|
is up until now we've been talking about |
|
|
|
21 |
|
00:00:43,000 --> 00:00:47,559 |
|
discriminative models and these are |
|
|
|
22 |
|
00:00:45,280 --> 00:00:49,640 |
|
models uh that are mainly designed to |
|
|
|
23 |
|
00:00:47,559 --> 00:00:53,800 |
|
calculate the probability of a latent |
|
|
|
24 |
|
00:00:49,640 --> 00:00:56,039 |
|
trait uh given the data and so this is |
|
|
|
25 |
|
00:00:53,800 --> 00:00:58,800 |
|
uh P of Y given X where Y is the latent
|
|
|
26 |
|
00:00:56,039 --> 00:01:00,800 |
|
trait we want to calculate and X is uh |
|
|
|
27 |
|
00:00:58,800 --> 00:01:04,760 |
|
the input data that we're calculating it |
|
|
|
28 |
|
00:01:00,800 --> 00:01:07,799 |
|
over so just review from last class what |
|
|
|
29 |
|
00:01:04,760 --> 00:01:10,240 |
|
was X from last class from the example |
|
|
|
30 |
|
00:01:07,799 --> 00:01:10,240 |
|
in last
|
|
|
31 |
|
00:01:11,360 --> 00:01:15,880 |
|
class |
|
|
|
32 |
|
00:01:13,040 --> 00:01:18,280 |
|
anybody yeah some text yeah and then |
|
|
|
33 |
|
00:01:15,880 --> 00:01:18,280 |
|
what was |
|
|
|
34 |
|
00:01:20,400 --> 00:01:26,119 |
|
why it shouldn't be too |
|
|
|
35 |
|
00:01:23,799 --> 00:01:27,920 |
|
hard yeah it was a category or a |
|
|
|
36 |
|
00:01:26,119 --> 00:01:31,680 |
|
sentiment label precisely in the |
|
|
|
37 |
|
00:01:27,920 --> 00:01:33,399 |
|
sentiment analysis tasks so so um a |
|
|
|
38 |
|
00:01:31,680 --> 00:01:35,560 |
|
generative model on the other hand is a |
|
|
|
39 |
|
00:01:33,399 --> 00:01:38,840 |
|
model that calculates the probability of |
|
|
|
40 |
|
00:01:35,560 --> 00:01:40,880 |
|
data itself and is not specifically |
|
|
|
41 |
|
00:01:38,840 --> 00:01:43,439 |
|
conditional and there's a couple of |
|
|
|
42 |
|
00:01:40,880 --> 00:01:45,439 |
|
varieties um this isn't like super |
|
|
|
43 |
|
00:01:43,439 --> 00:01:48,280 |
|
standard terminology I just uh wrote it |
|
|
|
44 |
|
00:01:45,439 --> 00:01:51,520 |
|
myself but here we have a standalone |
|
|
|
45 |
|
00:01:48,280 --> 00:01:54,360 |
|
probability of P of X and we can also |
|
|
|
46 |
|
00:01:51,520 --> 00:01:58,000 |
|
calculate the joint probability P of X |
|
|
|
47 |
|
00:01:54,360 --> 00:01:58,000 |
|
and Y |
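In symbols, matching the notation above: the discriminative model estimates P(Y | X), while the generative variants estimate the standalone P(X) or the joint P(X, Y), which the product rule relates as follows.

```latex
P(X, Y) = P(X)\,P(Y \mid X) = P(Y)\,P(X \mid Y)
```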
|
|
|
48 |
|
00:01:58,159 --> 00:02:02,880 |
|
so probabilistic language models |
|
|
|
49 |
|
00:02:01,079 --> 00:02:06,640 |
|
basically what they do is they calculate |
|
|
|
50 |
|
00:02:02,880 --> 00:02:08,560 |
|
this uh probability usually uh we think |
|
|
|
51 |
|
00:02:06,640 --> 00:02:10,360 |
|
of it as a standalone probability of P |
|
|
|
52 |
|
00:02:08,560 --> 00:02:11,800 |
|
of X where X is something like a |
|
|
|
53 |
|
00:02:10,360 --> 00:02:15,160 |
|
sentence or a |
|
|
|
54 |
|
00:02:11,800 --> 00:02:16,920 |
|
document and it's a generative model |
|
|
|
55 |
|
00:02:15,160 --> 00:02:19,640 |
|
that calculates the probability of |
|
|
|
56 |
|
00:02:16,920 --> 00:02:22,360 |
|
language recently the definition of |
|
|
|
57 |
|
00:02:19,640 --> 00:02:23,959 |
|
language model has expanded a little bit |
|
|
|
58 |
|
00:02:22,360 --> 00:02:26,160 |
|
so now |
|
|
|
59 |
|
00:02:23,959 --> 00:02:28,640 |
|
um people also call things that |
|
|
|
60 |
|
00:02:26,160 --> 00:02:31,080 |
|
calculate the probability of text and |
|
|
|
61 |
|
00:02:28,640 --> 00:02:35,200 |
|
images as like multimodal language |
|
|
|
62 |
|
00:02:31,080 --> 00:02:38,160 |
|
models or uh what are some of the other |
|
|
|
63 |
|
00:02:35,200 --> 00:02:40,480 |
|
ones yeah I think that's the main the |
|
|
|
64 |
|
00:02:38,160 --> 00:02:42,840 |
|
main exception to this rule usually |
|
|
|
65 |
|
00:02:40,480 --> 00:02:45,080 |
|
usually it's calculating either over text
|
|
|
66 |
|
00:02:42,840 --> 00:02:47,680 |
|
or over text in some multimodal data but |
|
|
|
67 |
|
00:02:45,080 --> 00:02:47,680 |
|
for now we're going to |
|
|
|
68 |
|
00:02:48,800 --> 00:02:54,200 |
|
consider |
|
|
|
69 |
|
00:02:50,319 --> 00:02:56,440 |
|
um then there's kind of two fundamental |
|
|
|
70 |
|
00:02:54,200 --> 00:02:58,159 |
|
operations that we perform with LMs
|
|
|
71 |
|
00:02:56,440 --> 00:03:00,519 |
|
almost everything else we do with LMs
|
|
|
72 |
|
00:02:58,159 --> 00:03:03,640 |
|
can be considered like one of these two |
|
|
|
73 |
|
00:03:00,519 --> 00:03:05,319 |
|
types of things the first thing is
|
|
|
74 |
|
00:03:03,640 --> 00:03:06,440 |
|
scoring sentences or calculating the |
|
|
|
75 |
|
00:03:05,319 --> 00:03:09,599 |
|
probability of |
|
|
|
76 |
|
00:03:06,440 --> 00:03:12,280 |
|
sentences and this |
|
|
|
77 |
|
00:03:09,599 --> 00:03:14,720 |
|
is uh for example if we calculate the |
|
|
|
78 |
|
00:03:12,280 --> 00:03:16,400 |
|
probability of Jane went to the store uh |
|
|
|
79 |
|
00:03:14,720 --> 00:03:19,000 |
|
this would have a high probability |
|
|
|
80 |
|
00:03:16,400 --> 00:03:20,879 |
|
ideally um and if we have this kind of |
|
|
|
81 |
|
00:03:19,000 --> 00:03:23,400 |
|
word salad like this this would be given
|
|
|
82 |
|
00:03:20,879 --> 00:03:26,080 |
|
a low probability uh according to a |
|
|
|
83 |
|
00:03:23,400 --> 00:03:28,000 |
|
English language model if we had a |
|
|
|
84 |
|
00:03:26,080 --> 00:03:30,000 |
|
Chinese language model ideally it would |
|
|
|
85 |
|
00:03:28,000 --> 00:03:31,319 |
|
also probably give low probability to the first
|
|
|
86 |
|
00:03:30,000 --> 00:03:32,879 |
|
sentence too because it's a language |
|
|
|
87 |
|
00:03:31,319 --> 00:03:35,000 |
|
model of natural Chinese and not of |
|
|
|
88 |
|
00:03:32,879 --> 00:03:36,200 |
|
natural English so there's also |
|
|
|
89 |
|
00:03:35,000 --> 00:03:37,360 |
|
different types of language models |
|
|
|
90 |
|
00:03:36,200 --> 00:03:38,400 |
|
depending on the type of data you plug
|
|
|
91 |
|
00:03:37,360 --> 00:03:41,360 |
|
in |
|
|
|
92 |
|
00:03:38,400 --> 00:03:43,599 |
|
another thing I can do is generate
|
|
|
93 |
|
00:03:41,360 --> 00:03:45,239 |
|
sentences and we'll talk more about the |
|
|
|
94 |
|
00:03:43,599 --> 00:03:48,280 |
|
different methods for generating |
|
|
|
95 |
|
00:03:45,239 --> 00:03:50,319 |
|
sentences but typically they fall into |
|
|
|
96 |
|
00:03:48,280 --> 00:03:51,799 |
|
one of two categories one is sampling |
|
|
|
97 |
|
00:03:50,319 --> 00:03:53,200 |
|
like this where you try to sample a |
|
|
|
98 |
|
00:03:51,799 --> 00:03:55,480 |
|
sentence from the probability |
|
|
|
99 |
|
00:03:53,200 --> 00:03:57,280 |
|
distribution of the language model |
|
|
|
100 |
|
00:03:55,480 --> 00:03:58,360 |
|
possibly with some modifications to the |
|
|
|
101 |
|
00:03:57,280 --> 00:04:00,760 |
|
probability |
|
|
|
102 |
|
00:03:58,360 --> 00:04:03,079 |
|
distribution um the other thing which I |
|
|
|
103 |
|
00:04:00,760 --> 00:04:04,760 |
|
didn't write on the slide is uh finding |
|
|
|
104 |
|
00:04:03,079 --> 00:04:07,439 |
|
the highest scoring sentence according |
|
|
|
105 |
|
00:04:04,760 --> 00:04:09,760 |
|
to the language model um and we do both |
|
|
|
106 |
|
00:04:07,439 --> 00:04:09,760 |
|
of those |
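As a sketch of these two fundamental operations in Python, assuming the Hugging Face transformers library and the public gpt2 checkpoint (neither is named in the lecture; they are just convenient stand-ins):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def score(text: str) -> float:
    """Total log-probability the model assigns to `text` (operation 1: scoring)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits              # (1, seq_len, vocab_size)
    log_probs = logits[:, :-1].log_softmax(-1)  # position t-1 predicts token t
    token_lp = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

print(score("Jane went to the store."))  # should be higher (less negative)
print(score("store went Jane the to."))  # should be lower (word salad)

# Operation 2: generation, here by sampling from the model's distribution.
prompt = tokenizer("Jane went", return_tensors="pt").input_ids
out = model.generate(prompt, do_sample=True, max_new_tokens=10)
print(tokenizer.decode(out[0]))
```

The score() helper above is reused in the later sketches.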
|
|
|
107 |
|
00:04:10,560 --> 00:04:17,600 |
|
so more concretely how can we apply
|
|
|
108 |
|
00:04:15,199 --> 00:04:21,199 |
|
these these can be applied to answer |
|
|
|
109 |
|
00:04:17,600 --> 00:04:23,840 |
|
questions so for example um if we have a |
|
|
|
110 |
|
00:04:21,199 --> 00:04:27,240 |
|
multiple choice question we can score |
|
|
|
111 |
|
00:04:23,840 --> 00:04:30,639 |
|
possible multiple choice answers and uh |
|
|
|
112 |
|
00:04:27,240 --> 00:04:32,880 |
|
the way we do this is we calculate |
|
|
|
113 |
|
00:04:30,639 --> 00:04:35,440 |
|
we first |
|
|
|
114 |
|
00:04:32,880 --> 00:04:38,440 |
|
take uh like we have |
|
|
|
115 |
|
00:04:35,440 --> 00:04:38,440 |
|
like |
|
|
|
116 |
|
00:04:38,560 --> 00:04:43,919 |
|
um |
|
|
|
117 |
|
00:04:40,960 --> 00:04:46,919 |
|
where is |
|
|
|
118 |
|
00:04:43,919 --> 00:04:46,919 |
|
CMU |
|
|
|
119 |
|
00:04:47,560 --> 00:04:51,600 |
|
located um |
|
|
|
120 |
|
00:04:51,960 --> 00:04:59,560 |
|
that's and actually maybe prepend this
|
|
|
121 |
|
00:04:54,560 --> 00:05:01,360 |
|
all again to an A here and then we say X
|
|
|
122 |
|
00:04:59,560 --> 00:05:05,800 |
|
X1 is equal to |
|
|
|
123 |
|
00:05:01,360 --> 00:05:07,520 |
|
this and then we have X2 which is |
|
|
|
124 |
|
00:05:05,800 --> 00:05:09,720 |
|
Q |
|
|
|
125 |
|
00:05:07,520 --> 00:05:12,479 |
|
where is |
|
|
|
126 |
|
00:05:09,720 --> 00:05:14,120 |
|
CMU |
|
|
|
127 |
|
00:05:12,479 --> 00:05:18,080 |
|
located |
|
|
|
128 |
|
00:05:14,120 --> 00:05:19,720 |
|
a um what's something |
|
|
|
129 |
|
00:05:18,080 --> 00:05:21,960 |
|
plausible |
|
|
|
130 |
|
00:05:19,720 --> 00:05:24,560 |
|
uh what was |
|
|
|
131 |
|
00:05:21,960 --> 00:05:26,319 |
|
it okay now now you're going to make it |
|
|
|
132 |
|
00:05:24,560 --> 00:05:27,960 |
|
tricky and make me talk about when we |
|
|
|
133 |
|
00:05:26,319 --> 00:05:29,960 |
|
have multiple right answers and how we |
|
|
|
134 |
|
00:05:27,960 --> 00:05:31,759 |
|
evaluate and stuff let let's ignore that |
|
|
|
135 |
|
00:05:29,960 --> 00:05:35,080 |
|
for now let's say New
|
|
|
136 |
|
00:05:31,759 --> 00:05:37,199 |
|
York it's not located in New York is |
|
|
|
137 |
|
00:05:35,080 --> 00:05:40,560 |
|
it |
|
|
|
138 |
|
00:05:37,199 --> 00:05:40,560 |
|
okay let's say |
|
|
|
139 |
|
00:05:40,960 --> 00:05:45,199 |
|
Birmingham hopefully there's no CMU |
|
|
|
140 |
|
00:05:43,199 --> 00:05:47,120 |
|
affiliate in Birmingham I think we're |
|
|
|
141 |
|
00:05:45,199 --> 00:05:49,000 |
|
we're pretty sure so um and then you would
|
|
|
142 |
|
00:05:47,120 --> 00:05:53,880 |
|
just calculate the probability of X1 and |
|
|
|
143 |
|
00:05:49,000 --> 00:05:56,440 |
|
the probability of X2 X3 X4 Etc and um |
|
|
|
144 |
|
00:05:53,880 --> 00:06:01,479 |
|
then pick the highest scoring one and
|
|
|
145 |
|
00:05:56,440 --> 00:06:01,479 |
|
actually um there's a famous |
|
|
|
146 |
|
00:06:03,199 --> 00:06:07,440 |
|
there's a famous uh leaderboard for |
|
|
|
147 |
|
00:06:05,840 --> 00:06:08,759 |
|
language models that probably a lot of |
|
|
|
148 |
|
00:06:07,440 --> 00:06:09,759 |
|
people know about it's called the open |
|
|
|
149 |
|
00:06:08,759 --> 00:06:13,120 |
|
LLM
|
|
|
150 |
|
00:06:09,759 --> 00:06:15,639 |
|
leaderboard and a lot of these tasks |
|
|
|
151 |
|
00:06:13,120 --> 00:06:17,319 |
|
here basically correspond to doing |
|
|
|
152 |
|
00:06:15,639 --> 00:06:21,000 |
|
something like that like HellaSwag is
|
|
|
153 |
|
00:06:17,319 --> 00:06:22,599 |
|
kind of a multiple choice uh is a |
|
|
|
154 |
|
00:06:21,000 --> 00:06:24,160 |
|
multiple choice question answering thing |
|
|
|
155 |
|
00:06:22,599 --> 00:06:27,880 |
|
about common sense where they calculate |
|
|
|
156 |
|
00:06:24,160 --> 00:06:30,280 |
|
it by scoring uh scoring the |
|
|
|
157 |
|
00:06:27,880 --> 00:06:31,880 |
|
outputs so that's a very common way to |
|
|
|
158 |
|
00:06:30,280 --> 00:06:35,000 |
|
use language models
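For example, a minimal sketch of scoring multiple-choice answers, reusing the score() helper defined earlier (the prompt format and candidate list are illustrative assumptions):

```python
# Score the question prompt with each candidate answer appended and pick
# the candidate the language model assigns the highest probability.
question = "Q: where is CMU located? A:"
candidates = ["Pittsburgh", "New York", "Birmingham"]
best = max(candidates, key=lambda a: score(f"{question} {a}"))
print(best)  # ideally Pittsburgh
```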
|
|
|
159 |
|
00:06:31,880 --> 00:06:36,960 |
|
um another thing is generating a
|
|
|
160 |
|
00:06:35,000 --> 00:06:40,080 |
|
continuation of a question prompt so |
|
|
|
161 |
|
00:06:36,960 --> 00:06:42,639 |
|
basically this is when you uh |
|
|
|
162 |
|
00:06:40,080 --> 00:06:44,759 |
|
sample and so what you would do is you |
|
|
|
163 |
|
00:06:42,639 --> 00:06:48,440 |
|
would prompt the |
|
|
|
164 |
|
00:06:44,759 --> 00:06:50,560 |
|
model with this uh X here and then you |
|
|
|
165 |
|
00:06:48,440 --> 00:06:53,800 |
|
would ask it to generate either the most |
|
|
|
166 |
|
00:06:50,560 --> 00:06:56,400 |
|
likely uh completion or generate um |
|
|
|
167 |
|
00:06:53,800 --> 00:06:58,960 |
|
sample multiple completions to get the |
|
|
|
168 |
|
00:06:56,400 --> 00:07:00,720 |
|
answer so this is very common uh people |
|
|
|
169 |
|
00:06:58,960 --> 00:07:03,759 |
|
are very familiar with this there's lots |
|
|
|
170 |
|
00:07:00,720 --> 00:07:07,160 |
|
of other uh things you can do though so |
|
|
|
171 |
|
00:07:03,759 --> 00:07:09,400 |
|
um you can classify text and there's a |
|
|
|
172 |
|
00:07:07,160 --> 00:07:12,720 |
|
couple ways you can do this uh one way |
|
|
|
173 |
|
00:07:09,400 --> 00:07:15,960 |
|
you can do this is um like let's say we |
|
|
|
174 |
|
00:07:12,720 --> 00:07:15,960 |
|
have a sentiment sentence |
|
|
|
175 |
|
00:07:16,160 --> 00:07:21,520 |
|
here |
|
|
|
176 |
|
00:07:17,759 --> 00:07:25,440 |
|
um you can say uh |
|
|
|
177 |
|
00:07:21,520 --> 00:07:30,919 |
|
this is |
|
|
|
178 |
|
00:07:25,440 --> 00:07:33,919 |
|
great and then you can say um
|
|
|
179 |
|
00:07:30,919 --> 00:07:37,680 |
|
star |
|
|
|
180 |
|
00:07:33,919 --> 00:07:38,879 |
|
rating five or something like that and |
|
|
|
181 |
|
00:07:37,680 --> 00:07:41,400 |
|
then you could also have star rating |
|
|
|
182 |
|
00:07:38,879 --> 00:07:43,680 |
|
four star rating three star rating two |
|
|
|
183 |
|
00:07:41,400 --> 00:07:45,080 |
|
star rating one and calculate the |
|
|
|
184 |
|
00:07:43,680 --> 00:07:46,639 |
|
probability of all of these and find |
|
|
|
185 |
|
00:07:45,080 --> 00:07:50,360 |
|
which one has the highest probability so |
|
|
|
186 |
|
00:07:46,639 --> 00:07:51,800 |
|
this is a common way you can do things
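A minimal sketch of that label-scoring recipe, again reusing score() from above (the prompt wording is an assumption):

```python
# Append each candidate star rating to the review and keep the label whose
# completed string the language model scores highest.
review = "this is great . star rating :"
labels = ["five", "four", "three", "two", "one"]
scores = {label: score(f"{review} {label}") for label in labels}
print(max(scores, key=scores.get))
```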
|
|
|
187 |
|
00:07:50,360 --> 00:07:54,319 |
|
another thing you can do which is kind |
|
|
|
188 |
|
00:07:51,800 --> 00:07:55,240 |
|
of interesting and um there are papers |
|
|
|
189 |
|
00:07:54,319 --> 00:07:58,319 |
|
on this but they're kind of |
|
|
|
190 |
|
00:07:55,240 --> 00:08:00,800 |
|
underexplored is you can do like star |
|
|
|
191 |
|
00:07:58,319 --> 00:08:04,800 |
|
rating |
|
|
|
192 |
|
00:08:00,800 --> 00:08:04,800 |
|
five and then |
|
|
|
193 |
|
00:08:04,879 --> 00:08:13,280 |
|
generate the output um and so
|
|
|
194 |
|
00:08:10,319 --> 00:08:15,039 |
|
that basically says Okay I I want a |
|
|
|
195 |
|
00:08:13,280 --> 00:08:16,680 |
|
positive sentence now I'm going to score |
|
|
|
196 |
|
00:08:15,039 --> 00:08:19,120 |
|
the actual review and see whether that |
|
|
|
197 |
|
00:08:16,680 --> 00:08:22,319 |
|
matches my like conception of a positive |
|
|
|
198 |
|
00:08:19,120 --> 00:08:24,080 |
|
sentence and there's a few uh papers |
|
|
|
199 |
|
00:08:22,319 --> 00:08:25,680 |
|
that do |
|
|
|
200 |
|
00:08:24,080 --> 00:08:28,240 |
|
this |
|
|
|
201 |
|
00:08:25,680 --> 00:08:31,240 |
|
um let |
|
|
|
202 |
|
00:08:28,240 --> 00:08:31,240 |
|
me |
|
|
|
203 |
|
00:08:34,640 --> 00:08:38,760 |
|
this is a kind of older one and then |
|
|
|
204 |
|
00:08:36,240 --> 00:08:42,080 |
|
there's another more recent one by Sewon
|
|
|
205 |
|
00:08:38,760 --> 00:08:43,839 |
|
Min I believe um uh but they demonstrate |
|
|
|
206 |
|
00:08:42,080 --> 00:08:45,480 |
|
how you can do both generative and |
|
|
|
207 |
|
00:08:43,839 --> 00:08:47,600 |
|
discriminative classification in this |
|
|
|
208 |
|
00:08:45,480 --> 00:08:51,760 |
|
way so that's another thing that you can |
|
|
|
209 |
|
00:08:47,600 --> 00:08:51,760 |
|
do uh with language |
|
|
|
210 |
|
00:08:53,279 --> 00:08:56,839 |
|
models and then the other thing you can |
|
|
|
211 |
|
00:08:55,200 --> 00:08:59,000 |
|
do is you can generate the label given a |
|
|
|
212 |
|
00:08:56,839 --> 00:09:00,680 |
|
classification prompt so you say this
|
|
|
213 |
|
00:08:59,000 --> 00:09:03,079 |
|
is great star rating and then
|
|
|
214 |
|
00:09:00,680 --> 00:09:05,720 |
|
generate five |
|
|
|
215 |
|
00:09:03,079 --> 00:09:09,320 |
|
whatever finally um you can do things |
|
|
|
216 |
|
00:09:05,720 --> 00:09:10,920 |
|
like correct grammar so uh for example
|
|
|
217 |
|
00:09:09,320 --> 00:09:12,560 |
|
if you score the probability of each |
|
|
|
218 |
|
00:09:10,920 --> 00:09:14,839 |
|
word and you find words that are really |
|
|
|
219 |
|
00:09:12,560 --> 00:09:17,760 |
|
low probability then you can uh replace |
|
|
|
220 |
|
00:09:14,839 --> 00:09:20,160 |
|
them with higher probability words um or |
|
|
|
221 |
|
00:09:17,760 --> 00:09:21,720 |
|
you could ask a model please paraphrase |
|
|
|
222 |
|
00:09:20,160 --> 00:09:24,000 |
|
this output and it will paraphrase it |
|
|
|
223 |
|
00:09:21,720 --> 00:09:27,640 |
|
into something that gives you uh you |
|
|
|
224 |
|
00:09:24,000 --> 00:09:30,720 |
|
know that has better grammar so basically
|
|
|
225 |
|
00:09:27,640 --> 00:09:33,079 |
|
like as I said language models are very |
|
|
|
226 |
|
00:09:30,720 --> 00:09:34,600 |
|
diverse um and they can do a ton of |
|
|
|
227 |
|
00:09:33,079 --> 00:09:35,680 |
|
different things but most of them boil |
|
|
|
228 |
|
00:09:34,600 --> 00:09:38,440 |
|
down to doing one of these two |
|
|
|
229 |
|
00:09:35,680 --> 00:09:42,079 |
|
operations scoring or |
|
|
|
230 |
|
00:09:38,440 --> 00:09:42,079 |
|
generating any questions |
|
|
|
231 |
|
00:09:42,480 --> 00:09:47,600 |
|
s |
|
|
|
232 |
|
00:09:44,640 --> 00:09:50,000 |
|
okay so next I I want to talk about a |
|
|
|
233 |
|
00:09:47,600 --> 00:09:52,279 |
|
specific type of language models uh auto
|
|
|
234 |
|
00:09:50,000 --> 00:09:54,240 |
|
regressive language models and auto |
|
|
|
235 |
|
00:09:52,279 --> 00:09:56,720 |
|
regressive language models are language |
|
|
|
236 |
|
00:09:54,240 --> 00:10:00,240 |
|
models that specifically calculate this |
|
|
|
237 |
|
00:09:56,720 --> 00:10:02,320 |
|
probability um in a fashion where you |
|
|
|
238 |
|
00:10:00,240 --> 00:10:03,680 |
|
calculate the probability of one token |
|
|
|
239 |
|
00:10:02,320 --> 00:10:05,519 |
|
and then you calculate the probability |
|
|
|
240 |
|
00:10:03,680 --> 00:10:07,680 |
|
of the next token given the previous |
|
|
|
241 |
|
00:10:05,519 --> 00:10:10,519 |
|
token the probability of the third token |
|
|
|
242 |
|
00:10:07,680 --> 00:10:13,760 |
|
given the previous two tokens almost
|
|
|
243 |
|
00:10:10,519 --> 00:10:18,600 |
|
always this happens left to right um or |
|
|
|
244 |
|
00:10:13,760 --> 00:10:20,519 |
|
start to finish um and so this is the |
|
|
|
245 |
|
00:10:18,600 --> 00:10:25,000 |
|
next token here this is a context where |
|
|
|
246 |
|
00:10:20,519 --> 00:10:28,440 |
|
usually um the context is the previous |
|
|
|
247 |
|
00:10:25,000 --> 00:10:29,640 |
|
tokens Can anyone think of a time when |
|
|
|
248 |
|
00:10:28,440 --> 00:10:32,440 |
|
you might want to do |
|
|
|
249 |
|
00:10:29,640 --> 00:10:37,839 |
|
right to left instead of left to |
|
|
|
250 |
|
00:10:32,440 --> 00:10:40,399 |
|
right yeah language that's from right to |
|
|
|
251 |
|
00:10:37,839 --> 00:10:41,680 |
|
yeah that's actually exactly what I what |
|
|
|
252 |
|
00:10:40,399 --> 00:10:43,079 |
|
I was looking for so if you have a |
|
|
|
253 |
|
00:10:41,680 --> 00:10:46,839 |
|
language that's written from right to |
|
|
|
254 |
|
00:10:43,079 --> 00:10:49,320 |
|
left actually uh things like uh Arabic |
|
|
|
255 |
|
00:10:46,839 --> 00:10:51,360 |
|
and Hebrew are written right to left so |
|
|
|
256 |
|
00:10:49,320 --> 00:10:53,720 |
|
um both of those are |
|
|
|
257 |
|
00:10:51,360 --> 00:10:56,360 |
|
chronologically like earlier to later |
|
|
|
258 |
|
00:10:53,720 --> 00:10:59,399 |
|
because you know if if you're thinking |
|
|
|
259 |
|
00:10:56,360 --> 00:11:01,079 |
|
about how people speak um the the first |
|
|
|
260 |
|
00:10:59,399 --> 00:11:02,440 |
|
word that an English speaker speaks is |
|
|
|
261 |
|
00:11:01,079 --> 00:11:04,000 |
|
on the left just because that's the way |
|
|
|
262 |
|
00:11:02,440 --> 00:11:06,079 |
|
you write it but the first word that an |
|
|
|
263 |
|
00:11:04,000 --> 00:11:09,639 |
|
Arabic speaker speaks is on the the |
|
|
|
264 |
|
00:11:06,079 --> 00:11:12,360 |
|
right because chronologically that's uh |
|
|
|
265 |
|
00:11:09,639 --> 00:11:13,519 |
|
that's how it works um there's other |
|
|
|
266 |
|
00:11:12,360 --> 00:11:16,320 |
|
reasons why you might want to do right |
|
|
|
267 |
|
00:11:13,519 --> 00:11:17,839 |
|
to left but uh it's not really that left |
|
|
|
268 |
|
00:11:16,320 --> 00:11:21,720 |
|
to right is important it's that like |
|
|
|
269 |
|
00:11:17,839 --> 00:11:24,440 |
|
start to finish is important in spoken |
|
|
|
270 |
|
00:11:21,720 --> 00:11:27,880 |
|
language so um one thing I should |
|
|
|
271 |
|
00:11:24,440 --> 00:11:30,240 |
|
mention here is that this is just a rule |
|
|
|
272 |
|
00:11:27,880 --> 00:11:31,560 |
|
of probability that if you have multiple |
|
|
|
273 |
|
00:11:30,240 --> 00:11:33,720 |
|
variables and you're calculating the |
|
|
|
274 |
|
00:11:31,560 --> 00:11:35,760 |
|
joint probability of variables the |
|
|
|
275 |
|
00:11:33,720 --> 00:11:38,000 |
|
probability of all of the variables |
|
|
|
276 |
|
00:11:35,760 --> 00:11:40,240 |
|
together is equal to this probability |
|
|
|
277 |
|
00:11:38,000 --> 00:11:41,920 |
|
here so we're not making any |
|
|
|
278 |
|
00:11:40,240 --> 00:11:44,399 |
|
approximations we're not making any |
|
|
|
279 |
|
00:11:41,920 --> 00:11:46,959 |
|
compromises in order to do this but it |
|
|
|
280 |
|
00:11:44,399 --> 00:11:51,639 |
|
all hinges on whether we can predict |
|
|
|
281 |
|
00:11:46,959 --> 00:11:53,440 |
|
this probability um accurately uh |
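Written out, the left-to-right chain-rule decomposition being described is:

```latex
P(X) = \prod_{i=1}^{|X|} P(x_i \mid x_1, \ldots, x_{i-1})
```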
|
|
|
282 |
|
00:11:51,639 --> 00:11:56,160 |
|
actually another question does anybody |
|
|
|
283 |
|
00:11:53,440 --> 00:11:57,800 |
|
know why we do this decomposition why |
|
|
|
284 |
|
00:11:56,160 --> 00:12:00,959 |
|
don't we just try to predict the |
|
|
|
285 |
|
00:11:57,800 --> 00:12:00,959 |
|
probability of x |
|
|
|
286 |
|
00:12:02,120 --> 00:12:05,399 |
|
directly any |
|
|
|
287 |
|
00:12:07,680 --> 00:12:12,760 |
|
ideas uh of big X sorry uh why don't we |
|
|
|
288 |
|
00:12:11,079 --> 00:12:17,560 |
|
try to calculate the probability of this |
|
|
|
289 |
|
00:12:12,760 --> 00:12:21,360 |
|
is great directly without dealing with the
|
|
|
290 |
|
00:12:17,560 --> 00:12:21,360 |
|
individual tokens
|
|
|
291 |
|
00:12:25,519 --> 00:12:31,560 |
|
possibility it could be word salad if
|
|
|
292 |
|
00:12:27,760 --> 00:12:35,279 |
|
you did it in a in a particular way yes |
|
|
|
293 |
|
00:12:31,560 --> 00:12:35,279 |
|
um so that that's a good point |
|
|
|
294 |
|
00:12:39,519 --> 00:12:47,000 |
|
yeah yeah so for example we talked about |
|
|
|
295 |
|
00:12:43,760 --> 00:12:50,120 |
|
um uh we'll talk about |
|
|
|
296 |
|
00:12:47,000 --> 00:12:51,920 |
|
models um or I I mentioned this briefly |
|
|
|
297 |
|
00:12:50,120 --> 00:12:54,000 |
|
last time I can mention it in more
|
|
|
298 |
|
00:12:51,920 --> 00:12:55,639 |
|
detail this time but this is great we |
|
|
|
299 |
|
00:12:54,000 --> 00:12:59,880 |
|
probably have never seen this before |
|
|
|
300 |
|
00:12:55,639 --> 00:13:01,399 |
|
right so if we predict only things that |
|
|
|
301 |
|
00:12:59,880 --> 00:13:03,199 |
|
we've seen before if we only assign a |
|
|
|
302 |
|
00:13:01,399 --> 00:13:04,600 |
|
non-zero probability to the things we've |
|
|
|
303 |
|
00:13:03,199 --> 00:13:06,000 |
|
seen before there's going to be lots of |
|
|
|
304 |
|
00:13:04,600 --> 00:13:07,079 |
|
sentences that we've never seen before |
|
|
|
305 |
|
00:13:06,000 --> 00:13:10,000 |
|
it makes it |
|
|
|
306 |
|
00:13:07,079 --> 00:13:13,760 |
|
super sparse um that that's basically close
|
|
|
307 |
|
00:13:10,000 --> 00:13:16,399 |
|
to what I wanted to say so um the reason |
|
|
|
308 |
|
00:13:13,760 --> 00:13:18,040 |
|
why we don't typically do it with um |
|
|
|
309 |
|
00:13:16,399 --> 00:13:21,240 |
|
predicting the whole sentence directly |
|
|
|
310 |
|
00:13:18,040 --> 00:13:22,800 |
|
is because if we think about the size of |
|
|
|
311 |
|
00:13:21,240 --> 00:13:24,959 |
|
the classification problem we need to |
|
|
|
312 |
|
00:13:22,800 --> 00:13:27,880 |
|
solve in order to predict the next word |
|
|
|
313 |
|
00:13:24,959 --> 00:13:30,320 |
|
it's V uh where V is the size of the
|
|
|
314 |
|
00:13:27,880 --> 00:13:33,120 |
|
vocabulary but the size of the |
|
|
|
315 |
|
00:13:30,320 --> 00:13:35,399 |
|
classification problem that we need to |
|
|
|
316 |
|
00:13:33,120 --> 00:13:38,040 |
|
um we need to solve if we predict |
|
|
|
317 |
|
00:13:35,399 --> 00:13:40,079 |
|
everything directly is V to the N where |
|
|
|
318 |
|
00:13:38,040 --> 00:13:42,240 |
|
n is the length of the sequence and |
|
|
|
319 |
|
00:13:40,079 --> 00:13:45,240 |
|
that's just huge the vocabulary is so |
|
|
|
320 |
|
00:13:42,240 --> 00:13:48,440 |
|
big that it's hard to kind of uh know |
|
|
|
321 |
|
00:13:45,240 --> 00:13:51,000 |
|
how we handle that so basically by doing |
|
|
|
322 |
|
00:13:48,440 --> 00:13:53,160 |
|
this sort of decomposition we decompose |
|
|
|
323 |
|
00:13:51,000 --> 00:13:56,440 |
|
this into uh |
|
|
|
324 |
|
00:13:53,160 --> 00:13:58,120 |
|
n um prediction problems of size V and |
|
|
|
325 |
|
00:13:56,440 --> 00:13:59,519 |
|
that's kind of just a lot more |
|
|
|
326 |
|
00:13:58,120 --> 00:14:03,079 |
|
manageable for from the point of view of |
|
|
|
327 |
|
00:13:59,519 --> 00:14:06,000 |
|
how we train uh you know how we train models
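To make the size argument concrete (the numbers here are illustrative, not from the lecture): with a vocabulary of V = 50,000 and a sequence of length n = 10,

```latex
V^{n} = 50{,}000^{10} \approx 9.8 \times 10^{46} \text{ classes at once,}
\quad\text{versus}\quad
n = 10 \text{ separate problems over } V = 50{,}000 \text{ classes each.}
```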
|
|
|
328 |
|
00:14:03,079 --> 00:14:09,399 |
|
um that being said there are
|
|
|
329 |
|
00:14:06,000 --> 00:14:11,360 |
|
other Alternatives um something very |
|
|
|
330 |
|
00:14:09,399 --> 00:14:13,920 |
|
widely known uh very widely used is |
|
|
|
331 |
|
00:14:11,360 --> 00:14:16,440 |
|
called a masked language model um a masked
|
|
|
332 |
|
00:14:13,920 --> 00:14:19,480 |
|
language model is something like BERT or
|
|
|
333 |
|
00:14:16,440 --> 00:14:21,680 |
|
DeBERTa or RoBERTa or all of these models
|
|
|
334 |
|
00:14:19,480 --> 00:14:25,000 |
|
that you might have heard if you've been |
|
|
|
335 |
|
00:14:21,680 --> 00:14:28,279 |
|
in NLP for more than two years I guess
|
|
|
336 |
|
00:14:25,000 --> 00:14:30,680 |
|
um and basically what they do is they |
|
|
|
337 |
|
00:14:28,279 --> 00:14:30,680 |
|
predict |
|
|
|
338 |
|
00:14:32,199 --> 00:14:37,480 |
|
uh they like mask out this word and they |
|
|
|
339 |
|
00:14:34,839 --> 00:14:39,480 |
|
predict the middle word so they mask out |
|
|
|
340 |
|
00:14:37,480 --> 00:14:41,440 |
|
is and then try to predict that given |
|
|
|
341 |
|
00:14:39,480 --> 00:14:45,320 |
|
all the other words the problem with |
|
|
|
342 |
|
00:14:41,440 --> 00:14:48,959 |
|
these models is uh twofold number one |
|
|
|
343 |
|
00:14:45,320 --> 00:14:51,880 |
|
they don't actually give you a uh good |
|
|
|
344 |
|
00:14:48,959 --> 00:14:55,399 |
|
probability here uh like a a properly |
|
|
|
345 |
|
00:14:51,880 --> 00:14:57,800 |
|
formed probability here |
|
|
|
346 |
|
00:14:55,399 --> 00:14:59,160 |
|
because this is true only as long as |
|
|
|
347 |
|
00:14:57,800 --> 00:15:01,920 |
|
you're only conditioning on things that |
|
|
|
348 |
|
00:14:59,160 --> 00:15:03,480 |
|
you've previously generated so that |
|
|
|
349 |
|
00:15:01,920 --> 00:15:04,839 |
|
they're not actually true language |
|
|
|
350 |
|
00:15:03,480 --> 00:15:06,920 |
|
models from the point of view of being |
|
|
|
351 |
|
00:15:04,839 --> 00:15:10,040 |
|
able to easily predict the probability |
|
|
|
352 |
|
00:15:06,920 --> 00:15:11,399 |
|
of a sequence um and also it's hard to |
|
|
|
353 |
|
00:15:10,040 --> 00:15:13,399 |
|
generate from them because you need to |
|
|
|
354 |
|
00:15:11,399 --> 00:15:15,440 |
|
generate in some order and mass language |
|
|
|
355 |
|
00:15:13,399 --> 00:15:17,600 |
|
models don't specify a canonical order
|
|
|
356 |
|
00:15:15,440 --> 00:15:19,120 |
|
so they're good for some things like |
|
|
|
357 |
|
00:15:17,600 --> 00:15:21,720 |
|
calculating representations of the |
|
|
|
358 |
|
00:15:19,120 --> 00:15:22,920 |
|
output but they're not useful uh they're |
|
|
|
359 |
|
00:15:21,720 --> 00:15:25,240 |
|
not as useful for |
|
|
|
360 |
|
00:15:22,920 --> 00:15:26,880 |
|
Generation Um there's also energy based |
|
|
|
361 |
|
00:15:25,240 --> 00:15:28,759 |
|
language models which basically create a |
|
|
|
362 |
|
00:15:26,880 --> 00:15:30,000 |
|
scoring function that's not necessarily |
|
|
|
363 |
|
00:15:28,759 --> 00:15:31,279 |
|
left to right or right to left or |
|
|
|
364 |
|
00:15:30,000 --> 00:15:33,120 |
|
anything like that but that's very |
|
|
|
365 |
|
00:15:31,279 --> 00:15:34,639 |
|
Advanced um if you're interested in them |
|
|
|
366 |
|
00:15:33,120 --> 00:15:36,319 |
|
I can talk more about them but for now we'll
|
|
|
367 |
|
00:15:34,639 --> 00:15:38,920 |
|
skip |
|
|
|
368 |
|
00:15:36,319 --> 00:15:41,600 |
|
them and um also all of the language |
|
|
|
369 |
|
00:15:38,920 --> 00:15:45,639 |
|
models that you hear about nowadays GPT |
|
|
|
370 |
|
00:15:41,600 --> 00:15:48,800 |
|
uh LLaMA whatever else are all autoregressive
|
|
|
371 |
|
00:15:45,639 --> 00:15:52,880 |
|
models cool so I'm going to go into the |
|
|
|
372 |
|
00:15:48,800 --> 00:15:52,880 |
|
very um any questions about that |
|
|
|
373 |
|
00:15:57,600 --> 00:16:00,600 |
|
yeah |
|
|
|
374 |
|
00:16:00,680 --> 00:16:04,160 |
|
yeah so in masked language models the
|
|
|
375 |
|
00:16:02,680 --> 00:16:06,000 |
|
question was in masked language models
|
|
|
376 |
|
00:16:04,160 --> 00:16:08,360 |
|
couldn't you just mask out the last |
|
|
|
377 |
|
00:16:06,000 --> 00:16:10,759 |
|
token and predict that sure you could do |
|
|
|
378 |
|
00:16:08,360 --> 00:16:13,079 |
|
that but there it's just not trained |
|
|
|
379 |
|
00:16:10,759 --> 00:16:14,720 |
|
that way so it won't do a very good job |
|
|
|
380 |
|
00:16:13,079 --> 00:16:16,880 |
|
if you always trained it that way it's |
|
|
|
381 |
|
00:16:14,720 --> 00:16:18,160 |
|
an autoregressive language model so
|
|
|
382 |
|
00:16:16,880 --> 00:16:22,240 |
|
you're you're back to where you were in |
|
|
|
383 |
|
00:16:18,160 --> 00:16:24,800 |
|
the first place um cool so now we I'll |
|
|
|
384 |
|
00:16:22,240 --> 00:16:26,399 |
|
talk about unigram language models and |
|
|
|
385 |
|
00:16:24,800 --> 00:16:29,319 |
|
so the simplest language models are |
|
|
|
386 |
|
00:16:26,399 --> 00:16:33,560 |
|
count-based unigram language models and |
|
|
|
387 |
|
00:16:29,319 --> 00:16:35,319 |
|
the way they work is um basically we |
|
|
|
388 |
|
00:16:33,560 --> 00:16:38,519 |
|
want to calculate this probability |
|
|
|
389 |
|
00:16:35,319 --> 00:16:41,240 |
|
conditioned on all the previous ones and |
|
|
|
390 |
|
00:16:38,519 --> 00:16:42,360 |
|
the way we do this is we just say |
|
|
|
391 |
|
00:16:41,240 --> 00:16:45,680 |
|
actually we're not going to worry about |
|
|
|
392 |
|
00:16:42,360 --> 00:16:48,759 |
|
the order at all and we're just going to |
|
|
|
393 |
|
00:16:45,680 --> 00:16:52,240 |
|
uh predict the probability of the next |
|
|
|
394 |
|
00:16:48,759 --> 00:16:55,279 |
|
word uh independently of all the other |
|
|
|
395 |
|
00:16:52,240 --> 00:16:57,519 |
|
words so if you have something like this |
|
|
|
396 |
|
00:16:55,279 --> 00:16:59,720 |
|
it's actually extremely easy to predict |
|
|
|
397 |
|
00:16:57,519 --> 00:17:02,480 |
|
the probability of this word and the way |
|
|
|
398 |
|
00:16:59,720 --> 00:17:04,280 |
|
you do this is you just count up the |
|
|
|
399 |
|
00:17:02,480 --> 00:17:08,360 |
|
number of times this word appeared in |
|
|
|
400 |
|
00:17:04,280 --> 00:17:10,480 |
|
the training data set and divide by the |
|
|
|
401 |
|
00:17:08,360 --> 00:17:12,559 |
|
uh divide by the total number of words |
|
|
|
402 |
|
00:17:10,480 --> 00:17:14,240 |
|
in the training data set and now you have a
|
|
|
403 |
|
00:17:12,559 --> 00:17:15,959 |
|
language model this is like language |
|
|
|
404 |
|
00:17:14,240 --> 00:17:17,760 |
|
model 101 it's the easiest possible |
|
|
|
405 |
|
00:17:15,959 --> 00:17:19,520 |
|
language model you can write in you know |
|
|
|
406 |
|
00:17:17,760 --> 00:17:21,120 |
|
three lines of python |
|
|
|
407 |
|
00:17:19,520 --> 00:17:25,039 |
|
basically |
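For instance, a toy version with a stand-in corpus:

```python
# Count-based unigram model: count each word, divide by the total token count.
from collections import Counter

train = "the box is on the table the box is red".split()  # stand-in corpus
counts = Counter(train)
total = sum(counts.values())
unigram = {word: c / total for word, c in counts.items()}

print(unigram["the"])  # 3 / 10 = 0.3
```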
|
|
|
408 |
|
00:17:21,120 --> 00:17:28,480 |
|
um so it has a few problems uh the first |
|
|
|
409 |
|
00:17:25,039 --> 00:17:31,120 |
|
problem with this language model is um |
|
|
|
410 |
|
00:17:28,480 --> 00:17:32,960 |
|
handling unknown words so what happens |
|
|
|
411 |
|
00:17:31,120 --> 00:17:38,679 |
|
if you have a word that you've never |
|
|
|
412 |
|
00:17:32,960 --> 00:17:41,000 |
|
seen before um in this language model |
|
|
|
413 |
|
00:17:38,679 --> 00:17:42,240 |
|
here what is the probability of any |
|
|
|
414 |
|
00:17:41,000 --> 00:17:44,720 |
|
sequence that has a word that you've |
|
|
|
415 |
|
00:17:42,240 --> 00:17:47,440 |
|
never seen before yeah the probability |
|
|
|
416 |
|
00:17:44,720 --> 00:17:49,240 |
|
of the sequence gets zero so there might |
|
|
|
417 |
|
00:17:47,440 --> 00:17:51,120 |
|
not be such a big problem for generating |
|
|
|
418 |
|
00:17:49,240 --> 00:17:52,480 |
|
things from the language model because |
|
|
|
419 |
|
00:17:51,120 --> 00:17:54,520 |
|
you know maybe it's fine if you only |
|
|
|
420 |
|
00:17:52,480 --> 00:17:55,960 |
|
generate words that you've seen before |
|
|
|
421 |
|
00:17:54,520 --> 00:17:57,679 |
|
uh but it is definitely a problem of |
|
|
|
422 |
|
00:17:55,960 --> 00:17:59,720 |
|
scoring things with the language model |
|
|
|
423 |
|
00:17:57,679 --> 00:18:02,039 |
|
and it's also a problem of uh for |
|
|
|
424 |
|
00:17:59,720 --> 00:18:04,440 |
|
something like translation if you get an |
|
|
|
425 |
|
00:18:02,039 --> 00:18:05,840 |
|
unknown word uh when you're translating |
|
|
|
426 |
|
00:18:04,440 --> 00:18:07,799 |
|
something then you would like to be able |
|
|
|
427 |
|
00:18:05,840 --> 00:18:11,320 |
|
to translate it reasonably but you can't |
|
|
|
428 |
|
00:18:07,799 --> 00:18:13,799 |
|
do that so um that's an issue so how do |
|
|
|
429 |
|
00:18:11,320 --> 00:18:15,840 |
|
we how do we fix this um there's a |
|
|
|
430 |
|
00:18:13,799 --> 00:18:17,640 |
|
couple options the first option is to |
|
|
|
431 |
|
00:18:15,840 --> 00:18:19,440 |
|
segment to characters and subwords and |
|
|
|
432 |
|
00:18:17,640 --> 00:18:21,720 |
|
this is now the preferred option that |
|
|
|
433 |
|
00:18:19,440 --> 00:18:24,360 |
|
most people use nowadays uh just run |
|
|
|
434 |
|
00:18:21,720 --> 00:18:26,840 |
|
sentence piece segment your vocabulary |
|
|
|
435 |
|
00:18:24,360 --> 00:18:28,400 |
|
and you're all set you're you'll now no |
|
|
|
436 |
|
00:18:26,840 --> 00:18:29,679 |
|
longer have any unknown words because |
|
|
|
437 |
|
00:18:28,400 --> 00:18:30,840 |
|
all the unknown words get split into |
|
|
|
438 |
|
00:18:29,679 --> 00:18:33,559 |
|
shorter units
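A sketch of that preferred option, assuming the sentencepiece Python package and a plain-text corpus file (the filenames and vocabulary size are illustrative):

```python
import sentencepiece as spm

# Train a subword model on raw text, then segment new text; words never
# seen in training come out as sequences of known subword pieces.
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="m", vocab_size=8000)
sp = spm.SentencePieceProcessor(model_file="m.model")
print(sp.encode("an unseen word like zyzzyva", out_type=str))
```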
|
|
|
439 |
|
00:18:30,840 --> 00:18:36,240 |
|
there's also other options that
|
|
|
440 |
|
00:18:33,559 --> 00:18:37,919 |
|
you can use if you're uh very interested |
|
|
|
441 |
|
00:18:36,240 --> 00:18:41,280 |
|
in or serious about this and want to |
|
|
|
442 |
|
00:18:37,919 --> 00:18:43,720 |
|
handle this like uh as part of a |
|
|
|
443 |
|
00:18:41,280 --> 00:18:45,960 |
|
research project or something like this |
|
|
|
444 |
|
00:18:43,720 --> 00:18:48,520 |
|
and uh the way you can do this is you |
|
|
|
445 |
|
00:18:45,960 --> 00:18:50,120 |
|
can build an unknown word model and an |
|
|
|
446 |
|
00:18:48,520 --> 00:18:52,200 |
|
unknown word model basically what it |
|
|
|
447 |
|
00:18:50,120 --> 00:18:54,520 |
|
does is it uh predicts the probability |
|
|
|
448 |
|
00:18:52,200 --> 00:18:56,200 |
|
of unknown words using characters and |
|
|
|
449 |
|
00:18:54,520 --> 00:18:59,559 |
|
then it models the probability of words |
|
|
|
450 |
|
00:18:56,200 --> 00:19:01,159 |
|
using words and so now you can you have |
|
|
|
451 |
|
00:18:59,559 --> 00:19:02,559 |
|
kind of like a hierarchical model where |
|
|
|
452 |
|
00:19:01,159 --> 00:19:03,919 |
|
you first try to predict words and then |
|
|
|
453 |
|
00:19:02,559 --> 00:19:06,720 |
|
if you can't predict words you predict |
|
|
|
454 |
|
00:19:03,919 --> 00:19:08,960 |
|
unknown words so this isn't used as widely
|
|
|
455 |
|
00:19:06,720 --> 00:19:11,520 |
|
anymore but it's worth thinking about uh |
|
|
|
456 |
|
00:19:08,960 --> 00:19:11,520 |
|
or knowing |
|
|
|
457 |
|
00:19:11,840 --> 00:19:20,880 |
|
about okay uh so a second detail um a |
|
|
|
458 |
|
00:19:17,200 --> 00:19:22,799 |
|
parameter uh so parameterizing in log |
|
|
|
459 |
|
00:19:20,880 --> 00:19:25,880 |
|
space |
|
|
|
460 |
|
00:19:22,799 --> 00:19:28,400 |
|
so the um multiplication of |
|
|
|
461 |
|
00:19:25,880 --> 00:19:29,840 |
|
probabilities can be re-expressed as the
|
|
|
462 |
|
00:19:28,400 --> 00:19:31,840 |
|
addition of log |
|
|
|
463 |
|
00:19:29,840 --> 00:19:34,159 |
|
probabilities uh so this is really |
|
|
|
464 |
|
00:19:31,840 --> 00:19:35,720 |
|
important and this is widely used in all |
|
|
|
465 |
|
00:19:34,159 --> 00:19:37,520 |
|
language models whether they're unigram |
|
|
|
466 |
|
00:19:35,720 --> 00:19:39,640 |
|
language models or or neural language |
|
|
|
467 |
|
00:19:37,520 --> 00:19:41,799 |
|
models there's actually a very simple |
|
|
|
468 |
|
00:19:39,640 --> 00:19:45,440 |
|
reason why we why we do it this way does |
|
|
|
469 |
|
00:19:41,799 --> 00:19:45,440 |
|
anybody uh know the |
|
|
|
470 |
|
00:19:46,440 --> 00:19:52,679 |
|
answer what would happen if we |
|
|
|
471 |
|
00:19:48,280 --> 00:19:56,720 |
|
multiplied uh let's say uh 30 30 tokens |
|
|
|
472 |
|
00:19:52,679 --> 00:20:00,360 |
|
worth of probabilities together um |
|
|
|
473 |
|
00:19:56,720 --> 00:20:02,120 |
|
yeah uh yeah too too small um so |
|
|
|
474 |
|
00:20:00,360 --> 00:20:06,120 |
|
basically the problem is numerical |
|
|
|
475 |
|
00:20:02,120 --> 00:20:07,520 |
|
underflow um so modern computers if if |
|
|
|
476 |
|
00:20:06,120 --> 00:20:08,840 |
|
we weren't doing this on a computer and |
|
|
|
477 |
|
00:20:07,520 --> 00:20:11,240 |
|
we were just doing math it wouldn't |
|
|
|
478 |
|
00:20:08,840 --> 00:20:14,280 |
|
matter at all um but because we're doing |
|
|
|
479 |
|
00:20:11,240 --> 00:20:17,280 |
|
it on a computer uh we |
|
|
|
480 |
|
00:20:14,280 --> 00:20:17,280 |
|
have |
|
|
|
481 |
|
00:20:20,880 --> 00:20:26,000 |
|
we have our
|
|
|
482 |
|
00:20:23,000 --> 00:20:26,000 |
|
32-bit
|
|
|
483 |
|
00:20:27,159 --> 00:20:30,159 |
|
float |
|
|
|
484 |
|
00:20:32,320 --> 00:20:37,720 |
|
where we have uh the exponent and the
|
|
|
485 |
|
00:20:35,799 --> 00:20:40,159 |
|
fraction over here so the largest the |
|
|
|
486 |
|
00:20:37,720 --> 00:20:41,960 |
|
exponent can get is limited by the |
|
|
|
487 |
|
00:20:40,159 --> 00:20:45,880 |
|
number of exponent bits that we have in |
|
|
|
488 |
|
00:20:41,960 --> 00:20:48,039 |
|
a 32-bit float and um if that's the case |
|
|
|
489 |
|
00:20:45,880 --> 00:20:52,480 |
|
I forget exactly how large it is it's |
|
|
|
490 |
|
00:20:48,039 --> 00:20:53,440 |
|
like yeah something like 10 to the minus 38 is
|
|
|
491 |
|
00:20:52,480 --> 00:20:56,640 |
|
that |
|
|
|
492 |
|
00:20:53,440 --> 00:20:58,520 |
|
right yeah but anyway like if the number |
|
|
|
493 |
|
00:20:56,640 --> 00:21:00,640 |
|
gets too small you'll underflow it goes |
|
|
|
494 |
|
00:20:58,520 --> 00:21:02,400 |
|
to zero and you'll get a zero |
|
|
|
495 |
|
00:21:00,640 --> 00:21:05,720 |
|
probability despite the fact that it's |
|
|
|
496 |
|
00:21:02,400 --> 00:21:07,640 |
|
not actually zero so um that's usually |
|
|
|
497 |
|
00:21:05,720 --> 00:21:09,440 |
|
why we do this it's also a little bit |
|
|
|
498 |
|
00:21:07,640 --> 00:21:12,960 |
|
easier for people just to look at like |
|
|
|
499 |
|
00:21:09,440 --> 00:21:15,200 |
|
minus 30 instead of looking to something |
|
|
|
500 |
|
00:21:12,960 --> 00:21:19,960 |
|
something times 10 to the minus 30 or
|
|
|
501 |
|
00:21:15,200 --> 00:21:24,520 |
|
something so uh that is why we normally do this
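A quick illustration of the underflow problem (the numbers are made up):

```python
import math

p = 1e-4  # a plausible per-token probability
n = 300   # number of tokens in a document
print(p ** n)           # 0.0 -- the raw product underflows to zero
print(n * math.log(p))  # about -2763.1 -- the log-space sum is fine
```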
|
|
|
502 |
|
00:21:19,960 --> 00:21:27,159 |
|
um another thing that you can note is
|
|
|
503 |
|
00:21:24,520 --> 00:21:28,760 |
|
uh you can treat each of these in a |
|
|
|
504 |
|
00:21:27,159 --> 00:21:31,360 |
|
unigram model you can treat each of |
|
|
|
505 |
|
00:21:28,760 --> 00:21:37,039 |
|
these as parameters so we talked about |
|
|
|
506 |
|
00:21:31,360 --> 00:21:39,640 |
|
parameters of a model uh like a um like |
|
|
|
507 |
|
00:21:37,039 --> 00:21:41,120 |
|
a bag of words model and we can |
|
|
|
508 |
|
00:21:39,640 --> 00:21:44,080 |
|
similarly treat these unigram |
|
|
|
509 |
|
00:21:41,120 --> 00:21:47,760 |
|
probabilities as parameters so um how |
|
|
|
510 |
|
00:21:44,080 --> 00:21:47,760 |
|
many parameters does a unigram model |
|
|
|
511 |
|
00:21:48,080 --> 00:21:51,320 |
|
have any |
|
|
|
512 |
|
00:21:57,039 --> 00:22:02,400 |
|
ideas |
|
|
|
513 |
|
00:21:59,600 --> 00:22:04,440 |
|
yeah yeah exactly parameters equal to |
|
|
|
514 |
|
00:22:02,400 --> 00:22:08,120 |
|
the size of the vocabulary so this one's |
|
|
|
515 |
|
00:22:04,440 --> 00:22:10,880 |
|
easy and then we can go um we can go to |
|
|
|
516 |
|
00:22:08,120 --> 00:22:13,880 |
|
the slightly less easy ones |
|
|
|
517 |
|
00:22:10,880 --> 00:22:16,039 |
|
there so anyway this is a unigram model |
|
|
|
518 |
|
00:22:13,880 --> 00:22:17,960 |
|
uh it's it's not too hard um you |
|
|
|
519 |
|
00:22:16,039 --> 00:22:20,480 |
|
basically count up and divide and then |
|
|
|
520 |
|
00:22:17,960 --> 00:22:22,720 |
|
you add the the probabilities here you |
|
|
|
521 |
|
00:22:20,480 --> 00:22:25,440 |
|
could easily do it in a short Python |
|
|
|
522 |
|
00:22:22,720 --> 00:22:28,400 |
|
program higher order n-gram models so
|
|
|
523 |
|
00:22:25,440 --> 00:22:31,600 |
|
higher order n-gram models um what these
|
|
|
524 |
|
00:22:28,400 --> 00:22:35,520 |
|
do is they essentially limit the context |
|
|
|
525 |
|
00:22:31,600 --> 00:22:40,240 |
|
length to a length of N and then they |
|
|
|
526 |
|
00:22:35,520 --> 00:22:42,600 |
|
count and divide so the way it works |
|
|
|
527 |
|
00:22:40,240 --> 00:22:45,559 |
|
here maybe this is a little bit uh |
|
|
|
528 |
|
00:22:42,600 --> 00:22:47,320 |
|
tricky but I can show an example so what |
|
|
|
529 |
|
00:22:45,559 --> 00:22:49,840 |
|
we do is we count up the number of times |
|
|
|
530 |
|
00:22:47,320 --> 00:22:51,320 |
|
we've seen this is an example and then |
|
|
|
531 |
|
00:22:49,840 --> 00:22:53,480 |
|
we divide by the number of times we've |
|
|
|
532 |
|
00:22:51,320 --> 00:22:55,960 |
|
seen this is an and that's the
|
|
|
533 |
|
00:22:53,480 --> 00:22:56,960 |
|
probability of example given the the |
|
|
|
534 |
|
00:22:55,960 --> 00:22:58,720 |
|
previous |
|
|
|
535 |
|
00:22:56,960 --> 00:23:00,559 |
|
context
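As a toy sketch of that count-and-divide estimate, here with two words of context and a stand-in corpus:

```python
from collections import Counter

tokens = "this is an example and this is an example".split()
trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))  # count(w1 w2 w3)
bigrams = Counter(zip(tokens, tokens[1:]))               # count(w1 w2)

def p_next(w3, w1, w2):
    """P(w3 | w1 w2) = count(w1 w2 w3) / count(w1 w2)."""
    return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]

print(p_next("example", "is", "an"))  # 2 / 2 = 1.0 in this toy corpus
```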
|
|
|
536 |
|
00:22:58,720 --> 00:23:02,039 |
|
so the problem with this is anytime we |
|
|
|
537 |
|
00:23:00,559 --> 00:23:03,400 |
|
get a sequence that we've never seen |
|
|
|
538 |
|
00:23:02,039 --> 00:23:04,960 |
|
before like we would like to model |
|
|
|
539 |
|
00:23:03,400 --> 00:23:07,200 |
|
longer sequences to make this more |
|
|
|
540 |
|
00:23:04,960 --> 00:23:08,600 |
|
accurate but anytime we get a uh we
|
|
|
541 |
|
00:23:07,200 --> 00:23:10,720 |
|
get a sequence that we've never seen |
|
|
|
542 |
|
00:23:08,600 --> 00:23:12,919 |
|
before um it will get a probability of |
|
|
|
543 |
|
00:23:10,720 --> 00:23:15,919 |
|
zero similarly because this count on top |
|
|
|
544 |
|
00:23:12,919 --> 00:23:19,919 |
|
of here will be zero so the way that uh |
|
|
|
545 |
|
00:23:15,919 --> 00:23:22,640 |
|
n-gram language models work with this uh
|
|
|
546 |
|
00:23:19,919 --> 00:23:27,320 |
|
handle this is they fall back to
|
|
|
547 |
|
00:23:22,640 --> 00:23:31,840 |
|
shorter uh n-gram models so um this
|
|
|
548 |
|
00:23:27,320 --> 00:23:33,480 |
|
model sorry when I say n-gram uh n is the
|
|
|
549 |
|
00:23:31,840 --> 00:23:35,520 |
|
length of the context so this is a four |
|
|
|
550 |
|
00:23:33,480 --> 00:23:37,679 |
|
gram model here because the top context is
|
|
|
551 |
|
00:23:35,520 --> 00:23:40,520 |
|
four so the four-gram model would
|
|
|
552 |
|
00:23:37,679 --> 00:23:46,640 |
|
calculate this and then interpolate it |
|
|
|
553 |
|
00:23:40,520 --> 00:23:48,640 |
|
like this with a um with a trigram model |
|
|
|
554 |
|
00:23:46,640 --> 00:23:50,400 |
|
uh and then the trigram model itself |
|
|
|
555 |
|
00:23:48,640 --> 00:23:51,720 |
|
would interpolate with the bigram model
|
|
|
556 |
|
00:23:50,400 --> 00:23:53,440 |
|
the bigram model would interpolate with
|
|
|
557 |
|
00:23:51,720 --> 00:23:56,880 |
|
the unigram
|
|
|
558 |
|
00:23:53,440 --> 00:23:59,880 |
|
model oh this one oh |
|
|
|
559 |
|
00:23:56,880 --> 00:23:59,880 |
|
okay |
|
|
|
560 |
|
00:24:02,159 --> 00:24:05,440 |
|
um one |
|
|
|
561 |
|
00:24:07,039 --> 00:24:12,320 |
|
second could you uh help get it from the |
|
|
|
562 |
|
00:24:10,000 --> 00:24:12,320 |
|
lock |
|
|
|
563 |
|
00:24:26,799 --> 00:24:29,799 |
|
box |
|
|
|
564 |
|
00:24:43,640 --> 00:24:50,200 |
|
um okay sorry |
|
|
|
565 |
|
00:24:46,880 --> 00:24:53,640 |
|
so getting bad |
|
|
|
566 |
|
00:24:50,200 --> 00:24:56,640 |
|
here just |
|
|
|
567 |
|
00:24:53,640 --> 00:24:56,640 |
|
actually |
|
|
|
568 |
|
00:24:56,760 --> 00:25:02,559 |
|
okay uh oh wow that's a lot |
|
|
|
569 |
|
00:25:02,960 --> 00:25:12,080 |
|
better cool okay so |
|
|
|
570 |
|
00:25:08,279 --> 00:25:14,159 |
|
um so this is uh how we deal with the |
|
|
|
571 |
|
00:25:12,080 --> 00:25:18,799 |
|
fact that models can |
|
|
|
572 |
|
00:25:14,159 --> 00:25:23,919 |
|
be um models can be more precise but |
|
|
|
573 |
|
00:25:18,799 --> 00:25:26,679 |
|
more sparse and less precise but less |
|
|
|
574 |
|
00:25:23,919 --> 00:25:28,720 |
|
sparse this is also another concept that |
|
|
|
575 |
|
00:25:26,679 --> 00:25:31,039 |
|
we're going to talk about more later uh |
|
|
|
576 |
|
00:25:28,720 --> 00:25:33,240 |
|
in another class but this is a variety |
|
|
|
577 |
|
00:25:31,039 --> 00:25:33,240 |
|
of |
|
|
|
578 |
|
00:25:33,679 --> 00:25:38,440 |
|
ensembling where we have different |
|
|
|
579 |
|
00:25:35,960 --> 00:25:40,360 |
|
models that are good at different things |
|
|
|
580 |
|
00:25:38,440 --> 00:25:42,279 |
|
and we combine them together so this is |
|
|
|
581 |
|
00:25:40,360 --> 00:25:44,760 |
|
the first instance that you would see of |
|
|
|
582 |
|
00:25:42,279 --> 00:25:46,159 |
|
this there are other instances of this |
|
|
|
583 |
|
00:25:44,760 --> 00:25:50,320 |
|
but the reason why I mentioned that this |
|
|
|
584 |
|
00:25:46,159 --> 00:25:51,840 |
|
is a a variety of ensembling is actually |
|
|
|
585 |
|
00:25:50,320 --> 00:25:55,520 |
|
you're probably not going to be using |
|
|
|
586 |
|
00:25:51,840 --> 00:25:57,840 |
|
n-gram models super widely unless you
|
|
|
587 |
|
00:25:55,520 --> 00:26:00,520 |
|
really want to process huge data sets |
|
|
|
588 |
|
00:25:57,840 --> 00:26:02,399 |
|
because that is one advantage of them |
|
|
|
589 |
|
00:26:00,520 --> 00:26:03,960 |
|
but some of these smoothing methods |
|
|
|
590 |
|
00:26:02,399 --> 00:26:05,720 |
|
actually might be interesting even if |
|
|
|
591 |
|
00:26:03,960 --> 00:26:10,520 |
|
you're using other models and ensembling |
|
|
|
592 |
|
00:26:05,720 --> 00:26:10,520 |
|
them together so |
|
|
|
593 |
|
00:26:10,600 --> 00:26:15,679 |
|
in order to decide this
|
|
|
594 |
|
00:26:13,679 --> 00:26:19,559 |
|
interpolation coefficient one way we can |
|
|
|
595 |
|
00:26:15,679 --> 00:26:23,440 |
|
do it is just set a fixed um set a fixed |
|
|
|
596 |
|
00:26:19,559 --> 00:26:26,039 |
|
amount of probability that we use for |
|
|
|
597 |
|
00:26:23,440 --> 00:26:29,000 |
|
every um every time so we could say that |
|
|
|
598 |
|
00:26:26,039 --> 00:26:32,000 |
|
we always set this Lambda to 0.8 and |
|
|
|
599 |
|
00:26:29,000 --> 00:26:34,320 |
|
um always set this lambda 1 minus lambda
|
|
|
600 |
|
00:26:32,000 --> 00:26:36,559 |
|
to 0.2 and interpolate those two together
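A sketch of that fixed-weight interpolation:

```python
def interpolate(p_high, p_low, lam=0.8):
    """Mix a higher-order n-gram estimate with a lower-order fallback."""
    return lam * p_high + (1 - lam) * p_low

# An unseen higher-order n-gram (p_high = 0) still gets probability mass
# from the lower-order model instead of going to zero.
print(interpolate(0.0, 0.05))  # 0.2 * 0.05 = 0.01
```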
|
|
|
601 |
|
00:26:34,320 --> 00:26:39,120 |
|
but actually there's more
|
|
|
602 |
|
00:26:36,559 --> 00:26:42,240 |
|
sophisticated methods of doing this and |
|
|
|
603 |
|
00:26:39,120 --> 00:26:44,080 |
|
so one way of doing this is uh called |
|
|
|
604 |
|
00:26:42,240 --> 00:26:47,240 |
|
additive |
|
|
|
605 |
|
00:26:44,080 --> 00:26:50,600 |
|
smoothing excuse me and the the way that |
|
|
|
606 |
|
00:26:47,240 --> 00:26:54,039 |
|
additive smoothing works is um basically |
|
|
|
607 |
|
00:26:50,600 --> 00:26:54,919 |
|
we add Alpha to the uh to the top and |
|
|
|
608 |
|
00:26:54,039 --> 00:26:58,000 |
|
the |
|
|
|
609 |
|
00:26:54,919 --> 00:27:02,159 |
|
bottom and the reason why this is slightly
|
|
|
610 |
|
00:26:58,000 --> 00:27:06,279 |
|
different is as our counts get
|
|
|
611 |
|
00:27:02,159 --> 00:27:10,799 |
|
larger we start to approach the true |
|
|
|
612 |
|
00:27:06,279 --> 00:27:10,799 |
|
distribution so just to give an |
|
|
|
613 |
|
00:27:12,080 --> 00:27:19,480 |
|
example let's say we have uh the |
|
|
|
614 |
|
00:27:17,640 --> 00:27:21,640 |
|
box |
|
|
|
615 |
|
00:27:19,480 --> 00:27:26,279 |
|
is |
|
|
|
616 |
|
00:27:21,640 --> 00:27:26,279 |
|
um let's say initially we |
|
|
|
617 |
|
00:27:26,520 --> 00:27:29,520 |
|
have |
|
|
|
618 |
|
00:27:31,159 --> 00:27:37,600 |
|
uh let let's say our Alpha is |
|
|
|
619 |
|
00:27:33,840 --> 00:27:43,559 |
|
one so initially if we have |
|
|
|
620 |
|
00:27:37,600 --> 00:27:47,320 |
|
nothing um if we have no no evidence for |
|
|
|
621 |
|
00:27:43,559 --> 00:27:47,320 |
|
our sorry I I |
|
|
|
622 |
|
00:27:49,720 --> 00:27:54,960 |
|
realize let's say this is |
|
|
|
623 |
|
00:27:52,640 --> 00:27:56,840 |
|
our fallback |
|
|
|
624 |
|
00:27:54,960 --> 00:27:59,240 |
|
distribution um where this is a |
|
|
|
625 |
|
00:27:56,840 --> 00:28:01,880 |
|
probability of 0.5 this is a
|
|
|
626 |
|
00:27:59,240 --> 00:28:03,360 |
|
probability of 0.3 and this is a |
|
|
|
627 |
|
00:28:01,880 --> 00:28:06,559 |
|
probability of |
|
|
|
628 |
|
00:28:03,360 --> 00:28:09,919 |
|
0.2 so now let's talk about our bigram
|
|
|
629 |
|
00:28:06,559 --> 00:28:13,399 |
|
model um and our bigram
|
|
|
630 |
|
00:28:09,919 --> 00:28:18,000 |
|
model has counts which is the |
|
|
|
631 |
|
00:28:13,399 --> 00:28:18,000 |
|
the the box and the |
|
|
|
632 |
|
00:28:19,039 --> 00:28:24,480 |
|
is so if we do something like this then |
|
|
|
633 |
|
00:28:22,720 --> 00:28:26,720 |
|
um initially we have no counts like |
|
|
|
634 |
|
00:28:24,480 --> 00:28:28,159 |
|
let's say we we have no data uh about |
|
|
|
635 |
|
00:28:26,720 --> 00:28:30,760 |
|
this distribution |
|
|
|
636 |
|
00:28:28,159 --> 00:28:33,200 |
|
um our counts would be zero and our |
|
|
|
637 |
|
00:28:30,760 --> 00:28:35,919 |
|
Alpha would be |
|
|
|
638 |
|
00:28:33,200 --> 00:28:37,840 |
|
one and so we would just fall back to |
|
|
|
639 |
|
00:28:35,919 --> 00:28:40,960 |
|
this distribution we just have like one |
|
|
|
640 |
|
00:28:37,840 --> 00:28:43,320 |
|
times uh one times this distribution |
|
|
|
641 |
|
00:28:40,960 --> 00:28:45,679 |
|
let's say we then we have one piece of |
|
|
|
642 |
|
00:28:43,320 --> 00:28:48,640 |
|
evidence and once we have one piece of |
|
|
|
643 |
|
00:28:45,679 --> 00:28:52,279 |
|
evidence now this would be |
|
|
|
644 |
|
00:28:48,640 --> 00:28:53,960 |
|
0.33 um and this would uh be Alpha equal |
|
|
|
645 |
|
00:28:52,279 --> 00:28:56,399 |
|
to 1 so we'd have |
|
|
|
646 |
|
00:28:53,960 --> 00:28:58,679 |
|
0.5 * |
|
|
|
647 |
|
00:28:56,399 --> 00:29:00,399 |
|
0.33 |
|
|
|
648 |
|
00:28:58,679 --> 00:29:04,039 |
|
uh and |
|
|
|
649 |
|
00:29:00,399 --> 00:29:07,720 |
|
0.5 times
|
|
|
650 |
|
00:29:04,039 --> 00:29:10,840 |
|
0.3 uh is the probability of the Box |
|
|
|
651 |
|
00:29:07,720 --> 00:29:12,840 |
|
because um basically we we have one |
|
|
|
652 |
|
00:29:10,840 --> 00:29:14,720 |
|
piece of evidence and we are adding a |
|
|
|
653 |
|
00:29:12,840 --> 00:29:17,080 |
|
count of one to the lower order |
|
|
|
654 |
|
00:29:14,720 --> 00:29:18,320 |
|
distribution then if we increase our |
|
|
|
655 |
|
00:29:17,080 --> 00:29:24,159 |
|
count |
|
|
|
656 |
|
00:29:18,320 --> 00:29:24,159 |
|
here um now we rely more |
|
|
|
657 |
|
00:29:24,880 --> 00:29:30,960 |
|
strongly sorry that that would be wrong |
|
|
|
658 |
|
00:29:27,720 --> 00:29:32,399 |
|
so so now we rely more strongly on the |
|
|
|
659 |
|
00:29:30,960 --> 00:29:33,880 |
|
higher order distribution because we |
|
|
|
660 |
|
00:29:32,399 --> 00:29:37,039 |
|
have more evidence for the higher order |
|
|
|
661 |
|
00:29:33,880 --> 00:29:39,610 |
|
distribution so basically in this case |
|
|
|
662 |
|
00:29:37,039 --> 00:29:41,240 |
|
um the probability |
|
|
|
|
|
|
664 |
|
00:29:41,240 --> 00:29:48,200 |
|
of Lambda which I showed |
|
|
|
665 |
|
00:29:44,559 --> 00:29:52,000 |
|
before is equal to the sum of the
|
|
|
666 |
|
00:29:48,200 --> 00:29:54,200 |
|
counts um the sum of the counts
|
|
|
667 |
|
00:29:52,000 --> 00:29:56,480 |
|
over the sum of the counts plus |
|
|
|
668 |
|
00:29:54,200 --> 00:29:58,159 |
|
alpha so as the sum of the counts gets
|
|
|
669 |
|
00:29:56,480 --> 00:30:00,240 |
|
larger you rely on the higher order |
|
|
|
670 |
|
00:29:58,159 --> 00:30:01,640 |
|
distribution is the sum of the counts is |
|
|
|
671 |
|
00:30:00,240 --> 00:30:02,760 |
|
if the sum of the counts is smaller you |
|
|
|
672 |
|
00:30:01,640 --> 00:30:04,320 |
|
rely more on the lower order |
|
|
|
673 |
|
00:30:02,760 --> 00:30:06,720 |
|
distribution so the more evidence you |
|
|
|
674 |
|
00:30:04,320 --> 00:30:11,640 |
|
have the more you rely on so that's the |
|
|
|
675 |
|
00:30:06,720 --> 00:30:11,640 |
|
basic idea behind these smoothing things |
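A sketch of that evidence-dependent weighting, with alpha = 1 and the fallback probability of "box" set to 0.3 as in the worked example:

```python
def smoothed(count_w, count_ctx, p_fallback, alpha=1.0):
    """P(w | ctx) = (count(ctx, w) + alpha * P_fallback(w)) / (count(ctx) + alpha),
    so lambda = count(ctx) / (count(ctx) + alpha) weights the raw counts."""
    return (count_w + alpha * p_fallback) / (count_ctx + alpha)

p_box = 0.3                    # lower-order fallback P(box)
print(smoothed(0, 0, p_box))   # no evidence     -> 0.30, pure fallback
print(smoothed(1, 1, p_box))   # one observation -> 0.65, half and half
print(smoothed(9, 10, p_box))  # strong evidence -> ~0.85, close to 9/10
```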
|
|
|
676 |
|
00:30:11,679 --> 00:30:16,679 |
|
um there's also a number of other |
|
|
|
677 |
|
00:30:14,519 --> 00:30:18,760 |
|
varieties called uh |
|
|
|
678 |
|
00:30:16,679 --> 00:30:20,799 |
|
discounting so uh the discount |
|
|
|
679 |
|
00:30:18,760 --> 00:30:23,679 |
|
hyperparameter basically you subtract |
|
|
|
680 |
|
00:30:20,799 --> 00:30:26,080 |
|
this off um uh you subtract this from |
|
|
|
681 |
|
00:30:23,679 --> 00:30:27,840 |
|
the count so you would subtract like 0.5 |
|
|
|
682 |
|
00:30:26,080 --> 00:30:32,679 |
|
from each of the counts that you have it's
|
|
|
683 |
|
00:30:27,840 --> 00:30:36,279 |
|
just empirically this is a better match |
|
|
|
684 |
|
00:30:32,679 --> 00:30:38,600 |
|
for the fact that um natural language |
|
|
|
685 |
|
00:30:36,279 --> 00:30:40,039 |
|
has a very long-tailed distribution
|
|
|
686 |
|
00:30:38,600 --> 00:30:41,600 |
|
you can kind of do the math and show |
|
|
|
687 |
|
00:30:40,039 --> 00:30:43,720 |
|
that it works and that's actually in
|
|
|
688 |
|
00:30:41,600 --> 00:30:46,080 |
|
this paper if you're
|
|
|
689 |
|
00:30:43,720 --> 00:30:49,880 |
|
interested in looking at more details of |
|
|
|
690 |
|
00:30:46,080 --> 00:30:51,519 |
|
that and then kind of the
|
|
|
691 |
|
00:30:49,880 --> 00:30:53,440 |
|
state of the art in language modeling
|
|
|
692 |
|
00:30:51,519 --> 00:30:56,600 |
|
before neural language models came out |
|
|
|
693 |
|
00:30:53,440 --> 00:30:59,919 |
|
was this Kneser-Ney smoothing and what it
|
|
|
694 |
|
00:30:56,600 --> 00:31:02,440 |
|
does is it discounts but it also |
|
|
|
695 |
|
00:30:59,919 --> 00:31:04,480 |
|
modifies the lower order distribution so |
|
|
|
696 |
|
00:31:02,440 --> 00:31:07,200 |
|
in the lower order distribution you |
|
|
|
697 |
|
00:31:04,480 --> 00:31:09,039 |
|
basically modify the counts with
|
|
|
698 |
|
00:31:07,200 --> 00:31:11,919 |
|
respect to how many times that word has |
|
|
|
699 |
|
00:31:09,039 --> 00:31:13,519 |
|
appeared in new contexts with the
|
|
|
700 |
|
00:31:11,919 --> 00:31:16,360 |
|
idea being that you only use the lower |
|
|
|
701 |
|
00:31:13,519 --> 00:31:18,880 |
|
order distribution when you have new
|
|
|
702 |
|
00:31:16,360 --> 00:31:21,200 |
|
contexts and so you can be
|
|
|
703 |
|
00:31:18,880 --> 00:31:23,600 |
|
clever
|
|
|
704 |
|
00:31:21,200 --> 00:31:25,399 |
|
about how you
|
|
|
705 |
|
00:31:23,600 --> 00:31:27,639 |
|
build this distribution based on the |
|
|
|
706 |
|
00:31:25,399 --> 00:31:29,360 |
|
fact that you're only using it in the |
|
|
|
707 |
|
00:31:27,639 --> 00:31:31,320 |
|
case when this distribution is not very |
|
|
|
708 |
|
00:31:29,360 --> 00:31:33,960 |
|
reliable
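
As a rough Python sketch of that idea, here is interpolated Kneser-Ney for bigrams; the input format `bigram_counts` mapping (prev, word) to count and the other names are assumptions for illustration, not the lecture's code:

from collections import defaultdict

def make_kneser_ney(bigram_counts, discount=0.75):
    ctx_total = defaultdict(int)     # total count of each context
    ctx_types = defaultdict(set)     # distinct words following each context
    continuation = defaultdict(set)  # distinct contexts preceding each word
    for (prev, word), c in bigram_counts.items():
        ctx_total[prev] += c
        ctx_types[prev].add(word)
        continuation[word].add(prev)
    num_bigram_types = len(bigram_counts)

    def prob(word, prev):
        # lower-order "continuation" probability: how many distinct
        # contexts the word has appeared in, not its raw frequency
        p_cont = len(continuation[word]) / num_bigram_types
        if ctx_total[prev] == 0:
            return p_cont                      # unseen context: back off fully
        c = bigram_counts.get((prev, word), 0)
        lam = discount * len(ctx_types[prev]) / ctx_total[prev]
        return max(c - discount, 0) / ctx_total[prev] + lam * p_cont

    return prob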
|
|
|
709 |
|
00:31:31,320 --> 00:31:36,080 |
|
so I would spend a lot more time
|
|
|
710 |
|
00:31:33,960 --> 00:31:37,960 |
|
teaching this when n-gram models were
|
|
|
711 |
|
00:31:36,080 --> 00:31:39,840 |
|
kind of the thing that people were
|
|
|
712 |
|
00:31:37,960 --> 00:31:41,960 |
|
using but now I'm going to go over them |
|
|
|
713 |
|
00:31:39,840 --> 00:31:43,600 |
|
very quickly so you know don't worry if |
|
|
|
714 |
|
00:31:41,960 --> 00:31:46,559 |
|
you weren't able to follow all the |
|
|
|
715 |
|
00:31:43,600 --> 00:31:47,960 |
|
details but the basic thing to
|
|
|
716 |
|
00:31:46,559 --> 00:31:49,279 |
|
take away from this is number one these |
|
|
|
717 |
|
00:31:47,960 --> 00:31:51,639 |
|
are the methods that people use for |
|
|
|
718 |
|
00:31:49,279 --> 00:31:53,440 |
|
n-gram language models number two if
|
|
|
719 |
|
00:31:51,639 --> 00:31:55,720 |
|
you're thinking about combining language |
|
|
|
720 |
|
00:31:53,440 --> 00:31:57,519 |
|
models together in some way through you |
|
|
|
721 |
|
00:31:55,720 --> 00:31:59,279 |
|
know ensembling their probabilities or
|
|
|
722 |
|
00:31:57,519 --> 00:32:00,480 |
|
something like this this is something |
|
|
|
723 |
|
00:31:59,279 --> 00:32:02,279 |
|
that you should think about a little bit |
|
|
|
724 |
|
00:32:00,480 --> 00:32:03,679 |
|
more carefully because like some |
|
|
|
725 |
|
00:32:02,279 --> 00:32:05,240 |
|
language models might be good in some |
|
|
|
726 |
|
00:32:03,679 --> 00:32:07,440 |
|
context other language models might be |
|
|
|
727 |
|
00:32:05,240 --> 00:32:09,440 |
|
good in other contexts so you would need |
|
|
|
728 |
|
00:32:07,440 --> 00:32:11,799 |
|
to think about that when you're
|
|
|
729 |
|
00:32:09,440 --> 00:32:18,200 |
|
combining the models
|
|
|
730 |
|
00:32:11,799 --> 00:32:18,200 |
|
cool any questions about
|
|
|
731 |
|
00:32:19,080 --> 00:32:24,840 |
|
this okay
|
|
|
732 |
|
00:32:21,159 --> 00:32:27,840 |
|
cool so there's a lot of problems that |
|
|
|
733 |
|
00:32:24,840 --> 00:32:30,760 |
|
we had to deal with when we were
|
|
|
734 |
|
00:32:27,840 --> 00:32:32,600 |
|
creating n-gram models and that actually
|
|
|
735 |
|
00:32:30,760 --> 00:32:35,279 |
|
kind of motivated the reason why we |
|
|
|
736 |
|
00:32:32,600 --> 00:32:36,639 |
|
moved to neural language models the |
|
|
|
737 |
|
00:32:35,279 --> 00:32:38,720 |
|
first one is similar to what I talked |
|
|
|
738 |
|
00:32:36,639 --> 00:32:40,519 |
|
about last time with text classification |
|
|
|
739 |
|
00:32:38,720 --> 00:32:42,600 |
|
that they can't share strength among
|
|
|
740 |
|
00:32:40,519 --> 00:32:45,159 |
|
similar words like bought and |
|
|
|
741 |
|
00:32:42,600 --> 00:32:46,919 |
|
purchased another thing is that they
|
|
|
742 |
|
00:32:45,159 --> 00:32:49,440 |
|
can't easily condition on context with |
|
|
|
743 |
|
00:32:46,919 --> 00:32:51,240 |
|
intervening words so n-gram models if
|
|
|
744 |
|
00:32:49,440 --> 00:32:52,799 |
|
you have a rare word in your context |
|
|
|
745 |
|
00:32:51,240 --> 00:32:54,320 |
|
immediately start falling back to the |
|
|
|
746 |
|
00:32:52,799 --> 00:32:56,799 |
|
unigram distribution and they end up |
|
|
|
747 |
|
00:32:54,320 --> 00:32:58,720 |
|
being very bad so that was another
|
|
|
748 |
|
00:32:56,799 --> 00:33:01,000 |
|
issue |
|
|
|
749 |
|
00:32:58,720 --> 00:33:04,760 |
|
and they couldn't handle long-distance
|
|
|
750 |
|
00:33:01,000 --> 00:33:09,080 |
|
dependencies so if this was beyond
|
|
|
751 |
|
00:33:04,760 --> 00:33:10,559 |
|
the n-gram context that they would be
|
|
|
752 |
|
00:33:09,080 --> 00:33:14,320 |
|
handling then you wouldn't be able to |
|
|
|
753 |
|
00:33:10,559 --> 00:33:15,840 |
|
manage this so actually before neural |
|
|
|
754 |
|
00:33:14,320 --> 00:33:18,000 |
|
language models became a really big |
|
|
|
755 |
|
00:33:15,840 --> 00:33:19,960 |
|
thing uh people came up with a bunch of |
|
|
|
756 |
|
00:33:18,000 --> 00:33:22,760 |
|
individual solutions for this in order |
|
|
|
757 |
|
00:33:19,960 --> 00:33:24,440 |
|
to solve the problems but actually it |
|
|
|
758 |
|
00:33:22,760 --> 00:33:26,679 |
|
wasn't that these solutions didn't work
|
|
|
759 |
|
00:33:24,440 --> 00:33:29,159 |
|
at all it was just that engineering all |
|
|
|
760 |
|
00:33:26,679 --> 00:33:30,519 |
|
of them together was so hard that nobody |
|
|
|
761 |
|
00:33:29,159 --> 00:33:32,120 |
|
actually ever did that and so they |
|
|
|
762 |
|
00:33:30,519 --> 00:33:35,120 |
|
relied on just n-gram models out of the
|
|
|
763 |
|
00:33:32,120 --> 00:33:37,600 |
|
box and that wasn't scalable so it's |
|
|
|
764 |
|
00:33:35,120 --> 00:33:39,279 |
|
kind of a funny example of how like |
|
|
|
765 |
|
00:33:37,600 --> 00:33:42,000 |
|
actually neural networks despite all the |
|
|
|
766 |
|
00:33:39,279 --> 00:33:43,559 |
|
pain that they cause in some areas are a |
|
|
|
767 |
|
00:33:42,000 --> 00:33:47,120 |
|
much better engineering solution to |
|
|
|
768 |
|
00:33:43,559 --> 00:33:51,279 |
|
solve all the issues that previous |
|
|
|
769 |
|
00:33:47,120 --> 00:33:53,159 |
|
methods had cool so compared to n-gram
|
|
|
770 |
|
00:33:51,279 --> 00:33:54,799 |
|
models neural language models
|
|
|
771 |
|
00:33:53,159 --> 00:33:56,559 |
|
achieve better performance but n-gram
|
|
|
772 |
|
00:33:54,799 --> 00:33:58,440 |
|
models are very very fast to estimate |
|
|
|
773 |
|
00:33:56,559 --> 00:33:59,880 |
|
and apply you can even estimate them |
|
|
|
774 |
|
00:33:58,440 --> 00:34:04,399 |
|
completely in |
|
|
|
775 |
|
00:33:59,880 --> 00:34:07,720 |
|
parallel n-gram models also I don't
|
|
|
776 |
|
00:34:04,399 --> 00:34:10,399 |
|
know if this is necessarily |
|
|
|
777 |
|
00:34:07,720 --> 00:34:13,200 |
|
a thing that gives
|
|
|
778 |
|
00:34:10,399 --> 00:34:15,079 |
|
you a reason to use n-gram language
|
|
|
779 |
|
00:34:13,200 --> 00:34:17,720 |
|
models but it is a reason to think a |
|
|
|
780 |
|
00:34:15,079 --> 00:34:20,320 |
|
little bit critically about neural
|
|
|
781 |
|
00:34:17,720 --> 00:34:22,720 |
|
language models which is neural language |
|
|
|
782 |
|
00:34:20,320 --> 00:34:24,320 |
|
models actually can be worse than n-gram
|
|
|
783 |
|
00:34:22,720 --> 00:34:26,679 |
|
language models at modeling very low |
|
|
|
784 |
|
00:34:24,320 --> 00:34:28,480 |
|
frequency phenomena so n-gram language
|
|
|
785 |
|
00:34:26,679 --> 00:34:29,960 |
|
models can learn from a single example
|
|
|
786 |
|
00:34:28,480 --> 00:34:32,119 |
|
they only need a single example of |
|
|
|
787 |
|
00:34:29,960 --> 00:34:36,879 |
|
anything before the probability of that |
|
|
|
788 |
|
00:34:32,119 --> 00:34:38,639 |
|
continuation goes up very high and
|
|
|
789 |
|
00:34:36,879 --> 00:34:41,359 |
|
but neural language models actually can |
|
|
|
790 |
|
00:34:38,639 --> 00:34:43,599 |
|
forget or not memorize appropriately
|
|
|
791 |
|
00:34:41,359 --> 00:34:46,280 |
|
from single examples so n-gram models
|
|
|
792 |
|
00:34:43,599 --> 00:34:48,040 |
|
can be better at that there's a toolkit the
|
|
|
793 |
|
00:34:46,280 --> 00:34:49,919 |
|
standard toolkit for estimating n-gram
|
|
|
794 |
|
00:34:48,040 --> 00:34:54,359 |
|
language models is called KenLM it's kind
|
|
|
795 |
|
00:34:49,919 --> 00:34:57,599 |
|
of frighteningly fast and so people
|
|
|
796 |
|
00:34:54,359 --> 00:35:00,400 |
|
have been saying like I've seen some
|
|
|
797 |
|
00:34:57,599 --> 00:35:01,599 |
|
jokes which are like job postings that |
|
|
|
798 |
|
00:35:00,400 --> 00:35:04,040 |
|
say we want people who have 10
|
|
|
799 |
|
00:35:01,599 --> 00:35:05,880 |
|
years of experience working on
|
|
|
800 |
|
00:35:04,040 --> 00:35:07,359 |
|
large language models or something
|
|
|
801 |
|
00:35:05,880 --> 00:35:09,240 |
|
like that
|
|
|
802 |
|
00:35:07,359 --> 00:35:11,960 |
|
and a lot
|
|
|
803 |
|
00:35:09,240 --> 00:35:13,440 |
|
of people are saying wait nobody has 10 |
|
|
|
804 |
|
00:35:11,960 --> 00:35:16,400 |
|
years of experience working on large |
|
|
|
805 |
|
00:35:13,440 --> 00:35:18,160 |
|
language models well Kenneth Heafield who
|
|
|
806 |
|
00:35:16,400 --> 00:35:19,440 |
|
created KenLM does have 10 years of
|
|
|
807 |
|
00:35:18,160 --> 00:35:22,800 |
|
experience working on large language |
|
|
|
808 |
|
00:35:19,440 --> 00:35:24,599 |
|
models because he was estimating
|
|
|
809 |
|
00:35:22,800 --> 00:35:27,720 |
|
seven-gram
|
|
|
810 |
|
00:35:24,599 --> 00:35:30,320 |
|
models with a
|
|
|
811 |
|
00:35:27,720 --> 00:35:35,040 |
|
vocabulary of let's say |
|
|
|
812 |
|
00:35:30,320 --> 00:35:37,720 |
|
100,000 on you know web text so how
|
|
|
813 |
|
00:35:35,040 --> 00:35:41,119 |
|
many parameters is that that's more than
|
|
|
814 |
|
00:35:37,720 --> 00:35:44,320 |
|
any you know large neural language model |
|
|
|
815 |
|
00:35:41,119 --> 00:35:45,640 |
|
that we have nowadays so a lot of
|
|
|
816 |
|
00:35:44,320 --> 00:35:47,520 |
|
these parameters are
|
|
|
817 |
|
00:35:45,640 --> 00:35:49,400 |
|
sparse they're zero counts so obviously |
|
|
|
818 |
|
00:35:47,520 --> 00:35:52,160 |
|
you don't memorize all of
|
|
|
819 |
|
00:35:49,400 --> 00:35:55,040 |
|
them but
|
|
|
820 |
|
00:35:52,160 --> 00:35:57,800 |
|
yeah cool another thing that maybe I
|
|
|
821 |
|
00:35:55,040 --> 00:35:59,359 |
|
should mention like so this doesn't |
|
|
|
822 |
|
00:35:57,800 --> 00:36:01,960 |
|
sound completely outdated there was a |
|
|
|
823 |
|
00:35:59,359 --> 00:36:05,400 |
|
really good paper |
|
|
|
824 |
|
00:36:01,960 --> 00:36:08,400 |
|
recently that used the fact that n-gram
|
|
|
825 |
|
00:36:05,400 --> 00:36:08,400 |
|
models are so
|
|
|
826 |
|
00:36:11,079 --> 00:36:17,319 |
|
|
|
|
827 |
|
00:36:14,280 --> 00:36:18,960 |
|
scalable it's this paper it's called
|
|
|
828 |
|
00:36:17,319 --> 00:36:21,079 |
|
Data selection for language models via |
|
|
|
829 |
|
00:36:18,960 --> 00:36:22,359 |
|
importance resampling and one
|
|
|
830 |
|
00:36:21,079 --> 00:36:24,359 |
|
interesting thing that they do in this |
|
|
|
831 |
|
00:36:22,359 --> 00:36:28,920 |
|
paper is that they don't |
|
|
|
832 |
|
00:36:24,359 --> 00:36:31,560 |
|
actually
|
|
|
833 |
|
00:36:28,920 --> 00:36:32,800 |
|
use neural models in any way
|
|
|
834 |
|
00:36:31,560 --> 00:36:34,920 |
|
despite the fact that they use the |
|
|
|
835 |
|
00:36:32,800 --> 00:36:36,880 |
|
downstream data that they sample in |
|
|
|
836 |
|
00:36:34,920 --> 00:36:41,319 |
|
order to train neural models but
|
|
|
837 |
|
00:36:36,880 --> 00:36:42,880 |
|
they run n-gram models over lots
|
|
|
838 |
|
00:36:41,319 --> 00:36:47,359 |
|
and lots of data and then they fit a |
|
|
|
839 |
|
00:36:42,880 --> 00:36:50,000 |
|
Gaussian distribution to the n-gram model
|
|
|
840 |
|
00:36:47,359 --> 00:36:51,520 |
|
counts basically in order to select
|
|
|
841 |
|
00:36:50,000 --> 00:36:53,040 |
|
the data and the reason why they do this
|
|
|
842 |
|
00:36:51,520 --> 00:36:55,280 |
|
is they want to do this over the entire |
|
|
|
843 |
|
00:36:53,040 --> 00:36:56,760 |
|
web and running a neural model over the |
|
|
|
844 |
|
00:36:55,280 --> 00:36:58,920 |
|
entire web would be too expensive so |
|
|
|
845 |
|
00:36:56,760 --> 00:37:00,319 |
|
they use n-gram models instead so that's
|
|
|
846 |
|
00:36:58,920 --> 00:37:02,359 |
|
just an example of something in the |
|
|
|
847 |
|
00:37:00,319 --> 00:37:04,920 |
|
modern context where keeping this in |
|
|
|
848 |
|
00:37:02,359 --> 00:37:04,920 |
|
mind is a good |
|
|
|
849 |
|
00:37:08,200 --> 00:37:14,000 |
|
idea okay I'd like to move to the next |
|
|
|
850 |
|
00:37:10,960 --> 00:37:15,319 |
|
part so language model evaluation
|
|
|
851 |
|
00:37:14,000 --> 00:37:17,200 |
|
this is important to know I'm not going |
|
|
|
852 |
|
00:37:15,319 --> 00:37:19,079 |
|
to talk about language model evaluation |
|
|
|
853 |
|
00:37:17,200 --> 00:37:20,599 |
|
on other tasks I'm only going to talk |
|
|
|
854 |
|
00:37:19,079 --> 00:37:23,800 |
|
right now about language model |
|
|
|
855 |
|
00:37:20,599 --> 00:37:26,280 |
|
evaluation on the task of language |
|
|
|
856 |
|
00:37:23,800 --> 00:37:29,079 |
|
modeling and there's a number of metrics |
|
|
|
857 |
|
00:37:26,280 --> 00:37:30,680 |
|
that we use for
|
|
|
858 |
|
00:37:29,079 --> 00:37:32,720 |
|
evaluating language models on
|
|
|
859 |
|
00:37:30,680 --> 00:37:35,560 |
|
the task of language modeling the first |
|
|
|
860 |
|
00:37:32,720 --> 00:37:38,480 |
|
one is log likelihood and basically
|
|
|
861 |
|
00:37:35,560 --> 00:37:40,160 |
|
the way we calculate log likelihood is |
|
|
|
862 |
|
00:37:38,480 --> 00:37:41,640 |
|
sorry there's an extra parenthesis
|
|
|
863 |
|
00:37:40,160 --> 00:37:45,480 |
|
here but the way we calculate log |
|
|
|
864 |
|
00:37:41,640 --> 00:37:47,160 |
|
likelihood is we get a test set that |
|
|
|
865 |
|
00:37:45,480 --> 00:37:50,400 |
|
ideally has not been included in our |
|
|
|
866 |
|
00:37:47,160 --> 00:37:52,520 |
|
training data and we take all of the |
|
|
|
867 |
|
00:37:50,400 --> 00:37:54,200 |
|
documents or sentences in the test set |
|
|
|
868 |
|
00:37:52,520 --> 00:37:57,040 |
|
we calculate the log probability of all |
|
|
|
869 |
|
00:37:54,200 --> 00:37:59,520 |
|
of them we don't actually use this
|
|
|
870 |
|
00:37:57,040 --> 00:38:02,640 |
|
super broadly to evaluate models and the |
|
|
|
871 |
|
00:37:59,520 --> 00:38:04,200 |
|
reason why is because this number is |
|
|
|
872 |
|
00:38:02,640 --> 00:38:05,720 |
|
very dependent on the size of the data |
|
|
|
873 |
|
00:38:04,200 --> 00:38:07,119 |
|
set so if you have a larger data set |
|
|
|
874 |
|
00:38:05,720 --> 00:38:08,720 |
|
this number will be larger if you have a |
|
|
|
875 |
|
00:38:07,119 --> 00:38:10,960 |
|
smaller data set this number will be |
|
|
|
876 |
|
00:38:08,720 --> 00:38:14,040 |
|
smaller so the more common thing to do |
|
|
|
877 |
|
00:38:10,960 --> 00:38:15,839 |
|
is per-word log likelihood and per-
|
|
|
878 |
|
00:38:14,040 --> 00:38:19,800 |
|
word log likelihood is basically |
|
|
|
879 |
|
00:38:15,839 --> 00:38:22,760 |
|
dividing the log
|
|
|
880 |
|
00:38:19,800 --> 00:38:25,520 |
|
probability of the entire corpus by
|
|
|
881 |
|
00:38:22,760 --> 00:38:28,359 |
|
the number of words that you have in the |
|
|
|
882 |
|
00:38:25,520 --> 00:38:31,000 |
|
corpus |
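
As a small Python sketch of these quantities (the list-of-log-probs input format is just for illustration):

import math

log_probs = [math.log(0.2), math.log(0.5), math.log(0.1)]  # per-token log p
ll = sum(log_probs)                 # corpus log likelihood (size-dependent)
per_word_ll = ll / len(log_probs)   # per-word log likelihood
nll = -per_word_ll                  # negative log likelihood: lower is better
print(ll, per_word_ll, nll)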
|
|
|
883 |
|
00:38:28,359 --> 00:38:34,599 |
|
it's also common for papers to report
|
|
|
884 |
|
00:38:31,000 --> 00:38:36,359 |
|
negative log likelihood because
|
|
|
885 |
|
00:38:34,599 --> 00:38:37,800 |
|
that's used as a loss and there lower is |
|
|
|
886 |
|
00:38:36,359 --> 00:38:40,440 |
|
better so you just need to be careful |
|
|
|
887 |
|
00:38:37,800 --> 00:38:42,560 |
|
about which one is being |
|
|
|
888 |
|
00:38:40,440 --> 00:38:43,880 |
|
reported so this is pretty common I |
|
|
|
889 |
|
00:38:42,560 --> 00:38:45,400 |
|
think most people are somewhat
|
|
|
890 |
|
00:38:43,880 --> 00:38:49,040 |
|
familiar with |
|
|
|
891 |
|
00:38:45,400 --> 00:38:49,800 |
|
this another thing that you might see is |
|
|
|
892 |
|
00:38:49,040 --> 00:38:53,079 |
|
|
|
|
893 |
|
00:38:49,800 --> 00:38:55,000 |
|
entropy and specifically this is
|
|
|
894 |
|
00:38:53,079 --> 00:38:57,319 |
|
often called cross-entropy because
|
|
|
895 |
|
00:38:55,000 --> 00:38:59,880 |
|
you're calculating |
|
|
|
896 |
|
00:38:57,319 --> 00:39:01,599 |
|
you're estimating the model on a
|
|
|
897 |
|
00:38:59,880 --> 00:39:05,079 |
|
training data set and then evaluating it |
|
|
|
898 |
|
00:39:01,599 --> 00:39:08,400 |
|
on a separate data set so on the
|
|
|
899 |
|
00:39:05,079 --> 00:39:12,200 |
|
test data set and this is often
|
|
|
900 |
|
00:39:08,400 --> 00:39:14,640 |
|
or usually calculated as log base 2 of the
|
|
|
901 |
|
00:39:12,200 --> 00:39:17,119 |
|
probability divided by the number of |
|
|
|
902 |
|
00:39:14,640 --> 00:39:18,760 |
|
words or units in the corpus does anyone
|
|
|
903 |
|
00:39:17,119 --> 00:39:23,839 |
|
know why this is log |
|
|
|
904 |
|
00:39:18,760 --> 00:39:23,839 |
|
two as opposed to a natural
|
|
|
905 |
|
00:39:25,440 --> 00:39:31,319 |
|
log |
|
|
|
906 |
|
00:39:28,440 --> 00:39:31,319 |
|
anyone yeah |
|
|
|
907 |
|
00:39:33,119 --> 00:39:38,720 |
|
so yeah so it's calculated in bits
|
|
|
908 |
|
00:39:36,760 --> 00:39:43,160 |
|
and this is kind of |
|
|
|
909 |
|
00:39:38,720 --> 00:39:45,240 |
|
a historical thing
|
|
|
910 |
|
00:39:43,160 --> 00:39:47,119 |
|
and it's not super important for
|
|
|
911 |
|
00:39:45,240 --> 00:39:51,800 |
|
language models but it's actually pretty |
|
|
|
912 |
|
00:39:47,119 --> 00:39:54,599 |
|
interesting to think about and so
|
|
|
913 |
|
00:39:51,800 --> 00:39:57,480 |
|
actually any probability distribution
|
|
|
914 |
|
00:39:54,599 --> 00:40:00,040 |
|
can also be used for data compression |
|
|
|
915 |
|
00:39:57,480 --> 00:40:03,319 |
|
and so you know when you're running a
|
|
|
916 |
|
00:40:00,040 --> 00:40:05,000 |
|
zip file or you're running gzip or bz2 |
|
|
|
917 |
|
00:40:03,319 --> 00:40:07,359 |
|
or something like that you're
|
|
|
918 |
|
00:40:05,000 --> 00:40:09,240 |
|
compressing a file into a smaller file |
|
|
|
919 |
|
00:40:07,359 --> 00:40:12,000 |
|
and any language model can also be used |
|
|
|
920 |
|
00:40:09,240 --> 00:40:15,280 |
|
to compress a file into a smaller
|
|
|
921 |
|
00:40:12,000 --> 00:40:17,119 |
|
file and so the way it does this is
|
|
|
922 |
|
00:40:15,280 --> 00:40:19,200 |
|
if you have more likely |
|
|
|
923 |
|
00:40:17,119 --> 00:40:20,960 |
|
sequences for example more likely
|
|
|
924 |
|
00:40:19,200 --> 00:40:25,079 |
|
sentences or more likely documents you |
|
|
|
925 |
|
00:40:20,960 --> 00:40:26,920 |
|
can compress them into a shorter
|
|
|
926 |
|
00:40:25,079 --> 00:40:29,440 |
|
output and |
|
|
|
927 |
|
00:40:26,920 --> 00:40:29,440 |
|
kind of |
|
|
|
928 |
|
00:40:29,640 --> 00:40:33,800 |
|
the |
|
|
|
929 |
|
00:40:31,480 --> 00:40:35,720 |
|
ideal I think it's pretty safe to say
|
|
|
930 |
|
00:40:33,800 --> 00:40:37,920 |
|
ideal because I think you can't get a |
|
|
|
931 |
|
00:40:35,720 --> 00:40:42,920 |
|
better method for compression than this |
|
|
|
932 |
|
00:40:37,920 --> 00:40:45,000 |
|
unless I'm you know not well
|
|
|
933 |
|
00:40:42,920 --> 00:40:46,800 |
|
versed enough in information theory but
|
|
|
934 |
|
00:40:45,000 --> 00:40:49,240 |
|
I think this is basically the ideal
|
|
|
935 |
|
00:40:46,800 --> 00:40:51,960 |
|
method for data compression and the way |
|
|
|
936 |
|
00:40:49,240 --> 00:40:54,640 |
|
it works is I have a figure up here
|
|
|
937 |
|
00:40:51,960 --> 00:40:58,800 |
|
but I'd like to recreate it here which |
|
|
|
938 |
|
00:40:54,640 --> 00:41:02,640 |
|
is let's say we have a vocabulary of |
|
|
|
939 |
|
00:40:58,800 --> 00:41:07,200 |
|
a which has
|
|
|
940 |
|
00:41:02,640 --> 00:41:08,800 |
|
50% and then we have a vocabulary item b
|
|
|
941 |
|
00:41:07,200 --> 00:41:11,560 |
|
which is |
|
|
|
942 |
|
00:41:08,800 --> 00:41:14,040 |
|
33% and a vocabulary item
|
|
|
943 |
|
00:41:11,560 --> 00:41:18,520 |
|
c
|
|
|
944 |
|
00:41:14,040 --> 00:41:18,520 |
|
yeah c which is about
|
|
|
945 |
|
00:41:18,640 --> 00:41:25,640 |
|
17% and so if you have a single token |
|
|
|
946 |
|
00:41:22,960 --> 00:41:26,839 |
|
sequence
|
|
|
947 |
|
00:41:25,640 --> 00:41:30,880 |
|
|
|
|
948 |
|
00:41:26,839 --> 00:41:30,880 |
|
what you do is you can |
|
|
|
949 |
|
00:41:31,319 --> 00:41:38,800 |
|
divide this into zero and one so if
|
|
|
950 |
|
00:41:36,400 --> 00:41:40,680 |
|
your single token sequence is a you can |
|
|
|
951 |
|
00:41:38,800 --> 00:41:42,760 |
|
just put zero and you'll be done |
|
|
|
952 |
|
00:41:40,680 --> 00:41:46,800 |
|
encoding it if your single token |
|
|
|
953 |
|
00:41:42,760 --> 00:41:51,920 |
|
sequence is b
|
|
|
954 |
|
00:41:46,800 --> 00:41:56,520 |
|
then one overlaps with b and c so now
|
|
|
955 |
|
00:41:51,920 --> 00:42:00,920 |
|
you need to further split this up into |
|
|
|
956 |
|
00:41:56,520 --> 00:42:00,920 |
|
zero and one and you can see
|
|
|
957 |
|
00:42:04,880 --> 00:42:11,440 |
|
that let me make sure I did that right yeah
|
|
|
958 |
|
00:42:08,359 --> 00:42:11,440 |
|
you can see
|
|
|
959 |
|
00:42:15,599 --> 00:42:25,720 |
|
that one zero is entirely encompassed by |
|
|
|
960 |
|
00:42:19,680 --> 00:42:29,200 |
|
b so now b is one zero and c is
|
|
|
961 |
|
00:42:25,720 --> 00:42:32,359 |
|
not entirely encompassed by that so you would
|
|
|
962 |
|
00:42:29,200 --> 00:42:39,240 |
|
need to further break this up and say |
|
|
|
963 |
|
00:42:32,359 --> 00:42:41,880 |
|
it's one one zero here and now one one
|
|
|
964 |
|
00:42:39,240 --> 00:42:45,520 |
|
one is encompassed by this so you would |
|
|
|
965 |
|
00:42:41,880 --> 00:42:48,680 |
|
get c if it was 111 and
|
|
|
966 |
|
00:42:45,520 --> 00:42:51,119 |
|
so every sequence that started
|
|
|
967 |
|
00:42:48,680 --> 00:42:53,000 |
|
with zero would start out with a every |
|
|
|
968 |
|
00:42:51,119 --> 00:42:54,960 |
|
sequence that started out with one zero |
|
|
|
969 |
|
00:42:53,000 --> 00:42:57,200 |
|
would start with b and every sequence |
|
|
|
970 |
|
00:42:54,960 --> 00:43:02,079 |
|
that started with 111 would start with
|
|
|
971 |
|
00:42:57,200 --> 00:43:04,920 |
|
c and so then you can look at the
|
|
|
972 |
|
00:43:02,079 --> 00:43:06,960 |
|
next word and let's say we're using a |
|
|
|
973 |
|
00:43:04,920 --> 00:43:09,839 |
|
unigram model if we're using a unigram |
|
|
|
974 |
|
00:43:06,960 --> 00:43:12,960 |
|
model for the next token
|
|
|
975 |
|
00:43:09,839 --> 00:43:18,200 |
|
let's say the next token is c
|
|
|
976 |
|
00:43:12,960 --> 00:43:23,640 |
|
so now the next token being c we already
|
|
|
977 |
|
00:43:18,200 --> 00:43:27,920 |
|
have b and now we subdivide
|
|
|
978 |
|
00:43:23,640 --> 00:43:33,040 |
|
b into
|
|
|
979 |
|
00:43:27,920 --> 00:43:35,720 |
|
ba bb and bc and then we find the
|
|
|
980 |
|
00:43:33,040 --> 00:43:40,720 |
|
next binary sequence that is entirely |
|
|
|
981 |
|
00:43:35,720 --> 00:43:44,000 |
|
encompassed by bc by this
|
|
|
982 |
|
00:43:40,720 --> 00:43:45,359 |
|
interval and so the moment we find a
|
|
|
983 |
|
00:43:44,000 --> 00:43:48,520 |
|
binary sequence that's entirely |
|
|
|
984 |
|
00:43:45,359 --> 00:43:50,599 |
|
encompassed by the interval then that
|
|
|
985 |
|
00:43:48,520 --> 00:43:53,400 |
|
is the sequence that we can use to
|
|
|
986 |
|
00:43:50,599 --> 00:43:54,640 |
|
represent that sequence and so if you're
|
|
|
987 |
|
00:43:53,400 --> 00:43:56,520 |
|
interested in this you can look up
|
|
|
988 |
|
00:43:54,640 --> 00:44:00,400 |
|
arithmetic coding on Wikipedia it's
|
|
|
989 |
|
00:43:56,520 --> 00:44:02,079 |
|
pretty fascinating but basically here
|
|
|
990 |
|
00:44:00,400 --> 00:44:04,040 |
|
this is showing the example of the |
|
|
|
991 |
|
00:44:02,079 --> 00:44:07,160 |
|
unigram model where the probabilities |
|
|
|
992 |
|
00:44:04,040 --> 00:44:10,240 |
|
don't change based on the context but |
|
|
|
993 |
|
00:44:07,160 --> 00:44:13,000 |
|
what if we knew that |
|
|
|
994 |
|
00:44:10,240 --> 00:44:15,599 |
|
c had a really high probability of |
|
|
|
995 |
|
00:44:13,000 --> 00:44:22,160 |
|
following b so if that's the case now we
|
|
|
996 |
|
00:44:15,599 --> 00:44:24,559 |
|
have a bc here based on
|
|
|
997 |
|
00:44:22,160 --> 00:44:25,880 |
|
our bigram model or neural language
|
|
|
998 |
|
00:44:24,559 --> 00:44:29,319 |
|
model or something like that so now this |
|
|
|
999 |
|
00:44:25,880 --> 00:44:31,240 |
|
interval is much much larger so it's
|
|
|
1000 |
|
00:44:29,319 --> 00:44:35,079 |
|
much more likely to entirely encompass a
|
|
|
1001 |
|
00:44:31,240 --> 00:44:39,720 |
|
shorter string and because of that the |
|
|
|
1002 |
|
00:44:35,079 --> 00:44:42,440 |
|
the output can be much shorter and so
|
|
|
1003 |
|
00:44:39,720 --> 00:44:45,760 |
|
if you use this arithmetic coding
|
|
|
1004 |
|
00:44:42,440 --> 00:44:49,440 |
|
over a very long sequence of outputs |
|
|
|
1005 |
|
00:44:45,760 --> 00:44:52,440 |
|
the length of the sequence that is
|
|
|
1006 |
|
00:44:49,440 --> 00:44:56,000 |
|
needed to encode this particular
|
|
|
1007 |
|
00:44:52,440 --> 00:45:00,359 |
|
output is going to be essentially the
|
|
|
1008 |
|
00:44:56,000 --> 00:45:03,319 |
|
number of bits according to the model
|
|
|
1009 |
|
00:45:00,359 --> 00:45:06,480 |
|
times the length of the sequence so this is very
|
|
|
1010 |
|
00:45:03,319 --> 00:45:10,000 |
|
directly connected to like compression |
|
|
|
1011 |
|
00:45:06,480 --> 00:45:13,160 |
|
and information theory and stuff like
|
|
|
1012 |
|
00:45:10,000 --> 00:45:15,359 |
|
that so that's where entropy comes
|
|
|
1013 |
|
00:45:13,160 --> 00:45:17,680 |
|
from are there any questions
|
|
|
1014 |
|
00:45:15,359 --> 00:45:17,680 |
|
about |
|
|
|
1015 |
|
00:45:19,319 --> 00:45:22,319 |
|
this |
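
Here is a toy Python sketch of the interval-narrowing step behind arithmetic coding, using the a/b/c unigram distribution from the example; a real coder would then emit the shortest bit string whose binary interval fits entirely inside the final interval:

def narrow(tokens, probs):
    low, high = 0.0, 1.0
    for tok in tokens:
        span = high - low
        cum = 0.0
        for t, p in probs.items():
            if t == tok:
                # shrink [low, high) to this token's slice of the interval
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    return low, high

probs = {"a": 0.5, "b": 0.33, "c": 0.17}
print(narrow(["b", "c"], probs))  # the interval for the sequence "bc"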
|
|
|
1016 |
|
00:45:24,880 --> 00:45:28,119 |
|
yeah |
|
|
|
1017 |
|
00:45:26,800 --> 00:45:31,880 |
|
for
|
|
|
1018 |
|
00:45:28,119 --> 00:45:34,319 |
|
c so
|
|
|
1019 |
|
00:45:31,880 --> 00:45:36,599 |
|
111 is |
|
|
|
1020 |
|
00:45:34,319 --> 00:45:37,920 |
|
because let me see if I can do
|
|
|
1021 |
|
00:45:36,599 --> 00:45:40,559 |
|
this |
|
|
|
1022 |
|
00:45:37,920 --> 00:45:44,240 |
|
again |
|
|
|
1023 |
|
00:45:40,559 --> 00:45:44,240 |
|
so I had one |
|
|
|
1024 |
|
00:45:46,079 --> 00:45:54,520 |
|
one so here this interval is |
|
|
|
1025 |
|
00:45:50,920 --> 00:45:56,839 |
|
one this interval is one one this |
|
|
|
1026 |
|
00:45:54,520 --> 00:46:00,079 |
|
interval is 111 |
|
|
|
1027 |
|
00:45:56,839 --> 00:46:03,520 |
|
and 111 is the first interval that is |
|
|
|
1028 |
|
00:46:00,079 --> 00:46:05,520 |
|
entirely overlapping with c and
|
|
|
1029 |
|
00:46:03,520 --> 00:46:08,760 |
|
it's not one one zero because one one zero is
|
|
|
1030 |
|
00:46:05,520 --> 00:46:08,760 |
|
overlapping with b and
|
|
|
1031 |
|
00:46:09,960 --> 00:46:13,599 |
|
c so in which
|
|
|
1032 |
|
00:46:14,280 --> 00:46:21,720 |
|
case so in which case one
|
|
|
1033 |
|
00:46:20,160 --> 00:46:24,800 |
|
zero
|
|
|
1034 |
|
00:46:21,720 --> 00:46:26,319 |
|
one one one |
|
|
|
1035 |
|
00:46:24,800 --> 00:46:30,800 |
|
zero
|
|
|
1036 |
|
00:46:26,319 --> 00:46:30,800 |
|
when would you use 110 to represent |
|
|
|
1037 |
|
00:46:32,119 --> 00:46:38,839 |
|
something it's a good question I guess |
|
|
|
1038 |
|
00:46:36,119 --> 00:46:40,599 |
|
maybe you wouldn't which seems a little |
|
|
|
1039 |
|
00:46:38,839 --> 00:46:43,280 |
|
bit wasteful |
|
|
|
1040 |
|
00:46:40,599 --> 00:46:46,160 |
|
so let me think about that I
|
|
|
1041 |
|
00:46:43,280 --> 00:46:49,920 |
|
think it might be the case that you
|
|
|
1042 |
|
00:46:46,160 --> 00:46:52,319 |
|
just don't use it
|
|
|
1043 |
|
00:46:49,920 --> 00:46:53,559 |
|
but yeah I'll try to think about that a |
|
|
|
1044 |
|
00:46:52,319 --> 00:46:55,920 |
|
little bit more because it seems like |
|
|
|
1045 |
|
00:46:53,559 --> 00:46:59,200 |
|
you should use every bit string right so
|
|
|
1046 |
|
00:46:55,920 --> 00:47:01,559 |
|
yeah if anybody has the answer
|
|
|
1047 |
|
00:46:59,200 --> 00:47:05,160 |
|
I'd be happy to hear it otherwise I'll get back to
|
|
|
1048 |
|
00:47:01,559 --> 00:47:07,079 |
|
you cool so the next thing is perplexity
|
|
|
1049 |
|
00:47:05,160 --> 00:47:10,640 |
|
so this is another one that you see |
|
|
|
1050 |
|
00:47:07,079 --> 00:47:13,240 |
|
commonly and so perplexity is
|
|
|
1051 |
|
00:47:10,640 --> 00:47:16,880 |
|
basically two to the
|
|
|
1052 |
|
00:47:13,240 --> 00:47:20,760 |
|
per-word entropy or e to the negative
|
|
|
1053 |
|
00:47:16,880 --> 00:47:24,880 |
|
word-level log likelihood in natural log space
|
|
|
1054 |
|
00:47:20,760 --> 00:47:28,240 |
|
and so smaller tends to be
|
|
|
1055 |
|
00:47:24,880 --> 00:47:32,559 |
|
better I'd like to do a little exercise |
|
|
|
1056 |
|
00:47:28,240 --> 00:47:34,599 |
|
to see if this works so let's
|
|
|
1057 |
|
00:47:32,559 --> 00:47:39,079 |
|
say we have when a dog sees a squirrel it
|
|
|
1058 |
|
00:47:34,599 --> 00:47:40,960 |
|
will usually and can anyone guess the
|
|
|
1059 |
|
00:47:39,079 --> 00:47:43,480 |
|
next word just yell it |
|
|
|
1060 |
|
00:47:40,960 --> 00:47:46,400 |
|
out bark
|
|
|
1061 |
|
00:47:43,480 --> 00:47:47,400 |
|
okay what about that what about
|
|
|
1062 |
|
00:47:46,400 --> 00:47:50,400 |
|
something |
|
|
|
1063 |
|
00:47:47,400 --> 00:47:50,400 |
|
else |
|
|
|
1064 |
|
00:47:52,640 --> 00:47:57,520 |
|
chase run
|
|
|
1065 |
|
00:47:54,720 --> 00:48:00,800 |
|
run
|
|
|
1066 |
|
00:47:57,520 --> 00:48:00,800 |
|
okay John |
|
|
|
1067 |
|
00:48:01,960 --> 00:48:05,280 |
|
John anything |
|
|
|
1068 |
|
00:48:07,000 --> 00:48:10,400 |
|
else any other |
|
|
|
1069 |
|
00:48:11,280 --> 00:48:16,960 |
|
ones so basically what this shows is |
|
|
|
1070 |
|
00:48:13,640 --> 00:48:16,960 |
|
humans are really bad language |
|
|
|
1071 |
|
00:48:17,160 --> 00:48:24,079 |
|
models so interestingly every single
|
|
|
1072 |
|
00:48:21,520 --> 00:48:26,559 |
|
one of the words you predicted here is a |
|
|
|
1073 |
|
00:48:24,079 --> 00:48:32,240 |
|
a regular verb
|
|
|
1074 |
|
00:48:26,559 --> 00:48:35,200 |
|
but an actual language model GPT-2
|
|
|
1075 |
|
00:48:32,240 --> 00:48:38,079 |
|
the first thing it predicts is be
|
|
|
1076 |
|
00:48:35,200 --> 00:48:40,440 |
|
which is kind of like the copula there's
|
|
|
1077 |
|
00:48:38,079 --> 00:48:43,400 |
|
also start and that will be like start |
|
|
|
1078 |
|
00:48:40,440 --> 00:48:44,880 |
|
running start something and humans
|
|
|
1079 |
|
00:48:43,400 --> 00:48:46,400 |
|
actually are really bad at doing this |
|
|
|
1080 |
|
00:48:44,880 --> 00:48:49,079 |
|
are really bad at predicting next words |
|
|
|
1081 |
|
00:48:46,400 --> 00:48:51,760 |
|
we're not trained that way and so
|
|
|
1082 |
|
00:48:49,079 --> 00:48:54,319 |
|
we end up having these biases but anyway |
|
|
|
1083 |
|
00:48:51,760 --> 00:48:55,799 |
|
the reason why I did this quiz was
|
|
|
1084 |
|
00:48:54,319 --> 00:48:57,280 |
|
because that's essentially what |
|
|
|
1085 |
|
00:48:55,799 --> 00:49:01,160 |
|
perplexity |
|
|
|
1086 |
|
00:48:57,280 --> 00:49:02,680 |
|
means and what perplexity is is
|
|
|
1087 |
|
00:49:01,160 --> 00:49:04,559 |
|
it's the number of times you'd have to |
|
|
|
1088 |
|
00:49:02,680 --> 00:49:07,000 |
|
sample from the probability distribution |
|
|
|
1089 |
|
00:49:04,559 --> 00:49:09,200 |
|
before you get the answer right so you |
|
|
|
1090 |
|
00:49:07,000 --> 00:49:11,160 |
|
were a little bit biased here because we |
|
|
|
1091 |
|
00:49:09,200 --> 00:49:13,359 |
|
were doing sampling without replacement |
|
|
|
1092 |
|
00:49:11,160 --> 00:49:15,480 |
|
so like nobody was actually picking a |
|
|
|
1093 |
|
00:49:13,359 --> 00:49:17,000 |
|
word that had already been said but it's |
|
|
|
1094 |
|
00:49:15,480 --> 00:49:18,319 |
|
essentially like if you guessed over and |
|
|
|
1095 |
|
00:49:17,000 --> 00:49:20,839 |
|
over and over again how many times would |
|
|
|
1096 |
|
00:49:18,319 --> 00:49:22,720 |
|
you need until you get it right and so |
|
|
|
1097 |
|
00:49:20,839 --> 00:49:25,119 |
|
here like if the actual answer was start |
|
|
|
1098 |
|
00:49:22,720 --> 00:49:27,480 |
|
the perplexity would be 4.66 so we'd |
|
|
|
1099 |
|
00:49:25,119 --> 00:49:30,240 |
|
expect the language model to get it in
|
|
|
1100 |
|
00:49:27,480 --> 00:49:34,400 |
|
between four and five
|
|
|
1101 |
|
00:49:30,240 --> 00:49:38,559 |
|
guesses and you guys all did six so you |
|
|
|
1102 |
|
00:49:34,400 --> 00:49:41,599 |
|
lose
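
As a quick Python sketch of that relationship, with uniform probabilities chosen to mirror the 4.66 example:

import math

def perplexity(log2_probs):
    # 2 to the per-word entropy; equivalently e to the negative
    # per-word natural-log likelihood
    return 2 ** (-sum(log2_probs) / len(log2_probs))

p = 1 / 4.66  # model probability of each correct next word
print(perplexity([math.log2(p)] * 10))  # -> about 4.66 "guesses"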
|
|
|
1103 |
|
00:49:38,559 --> 00:49:42,799 |
|
so another important thing to mention is evaluation and vocabulary
|
|
|
1104 |
|
00:49:41,599 --> 00:49:44,880 |
|
so for fair |
|
|
|
1105 |
|
00:49:42,799 --> 00:49:47,319 |
|
comparison make sure that the
|
|
|
1106 |
|
00:49:44,880 --> 00:49:49,559 |
|
denominator is the same so if you're
|
|
|
1107 |
|
00:49:47,319 --> 00:49:51,559 |
|
calculating the perplexity make sure |
|
|
|
1108 |
|
00:49:49,559 --> 00:49:53,359 |
|
that you're dividing by the same number |
|
|
|
1109 |
|
00:49:51,559 --> 00:49:55,799 |
|
every time you're dividing by words
|
|
|
1110 |
|
00:49:53,359 --> 00:49:58,520 |
|
if the other paper or whatever
|
|
|
1111 |
|
00:49:55,799 --> 00:50:00,680 |
|
is dividing by words or like let's say |
|
|
|
1112 |
|
00:49:58,520 --> 00:50:02,160 |
|
you're comparing LLaMA to GPT-2 they have
|
|
|
1113 |
|
00:50:00,680 --> 00:50:04,880 |
|
different tokenizers so they'll have |
|
|
|
1114 |
|
00:50:02,160 --> 00:50:07,040 |
|
different numbers of tokens so comparing |
|
|
|
1115 |
|
00:50:04,880 --> 00:50:10,880 |
|
with different denominators is not
|
|
|
1116 |
|
00:50:07,040 --> 00:50:12,440 |
|
fair if you're allowing unknown
|
|
|
1117 |
|
00:50:10,880 --> 00:50:14,559 |
|
words or characters so if you allow the |
|
|
|
1118 |
|
00:50:12,440 --> 00:50:17,640 |
|
model to not predict |
|
|
|
1119 |
|
00:50:14,559 --> 00:50:19,119 |
|
any token then you need to be fair about |
|
|
|
1120 |
|
00:50:17,640 --> 00:50:22,040 |
|
that |
|
|
|
1121 |
|
00:50:19,119 --> 00:50:25,160 |
|
too so I'd like to go into a few
|
|
|
1122 |
|
00:50:22,040 --> 00:50:27,960 |
|
alternatives these are very similar to
|
|
|
1123 |
|
00:50:25,160 --> 00:50:29,400 |
|
the neural network classifiers and bag-of-words
|
|
|
1124 |
|
00:50:27,960 --> 00:50:30,680 |
|
classifiers that I talked about before |
|
|
|
1125 |
|
00:50:29,400 --> 00:50:32,480 |
|
so I'm going to go through them rather |
|
|
|
1126 |
|
00:50:30,680 --> 00:50:35,480 |
|
quickly because I think you should get |
|
|
|
1127 |
|
00:50:32,480 --> 00:50:38,119 |
|
the basic idea but basically the |
|
|
|
1128 |
|
00:50:35,480 --> 00:50:40,000 |
|
alternative to count-based models is
|
|
|
1129 |
|
00:50:38,119 --> 00:50:42,559 |
|
featurized models so we calculate
|
|
|
1130 |
|
00:50:40,000 --> 00:50:44,599 |
|
|
|
|
1131 |
|
00:50:42,559 --> 00:50:46,880 |
|
features of the context and based on the |
|
|
|
1132 |
|
00:50:44,599 --> 00:50:48,280 |
|
features calculate probabilities |
|
|
|
1133 |
|
00:50:46,880 --> 00:50:50,480 |
|
optimize the feature weights using |
|
|
|
1134 |
|
00:50:48,280 --> 00:50:53,839 |
|
gradient descent
|
|
|
1135 |
|
00:50:50,480 --> 00:50:56,119 |
|
etc. and so for example if we have
|
|
|
1136 |
|
00:50:53,839 --> 00:50:58,880 |
|
the input giving a
|
|
|
1137 |
|
00:50:56,119 --> 00:51:02,960 |
|
we calculate features so we might
|
|
|
1138 |
|
00:50:58,880 --> 00:51:05,400 |
|
look up the word identity of the two
|
|
|
1139 |
|
00:51:02,960 --> 00:51:08,240 |
|
previous words look up the word identity |
|
|
|
1140 |
|
00:51:05,400 --> 00:51:11,000 |
|
of the word directly previous add a
|
|
|
1141 |
|
00:51:08,240 --> 00:51:13,480 |
|
bias add them all together get scores |
|
|
|
1142 |
|
00:51:11,000 --> 00:51:14,960 |
|
and calculate probabilities where each |
|
|
|
1143 |
|
00:51:13,480 --> 00:51:16,920 |
|
vector is the size of the output
|
|
|
1144 |
|
00:51:14,960 --> 00:51:19,680 |
|
vocabulary and feature weights are |
|
|
|
1145 |
|
00:51:16,920 --> 00:51:21,799 |
|
optimized using SGD so this is basically |
|
|
|
1146 |
|
00:51:19,680 --> 00:51:24,240 |
|
a bag-of-words classifier but it's a
|
|
|
1147 |
|
00:51:21,799 --> 00:51:27,200 |
|
multiclass bag-of-words classifier over
|
|
|
1148 |
|
00:51:24,240 --> 00:51:28,960 |
|
the next token so it's very similar to |
|
|
|
1149 |
|
00:51:27,200 --> 00:51:30,839 |
|
our classification task before except |
|
|
|
1150 |
|
00:51:28,960 --> 00:51:33,160 |
|
now instead of having two classes we |
|
|
|
1151 |
|
00:51:30,839 --> 00:51:36,280 |
|
have you know 10,000 classes or 100,000 |
|
|
|
1152 |
|
00:51:33,160 --> 00:51:38,480 |
|
classes oh yeah sorry very quick aside |
|
|
|
1153 |
|
00:51:36,280 --> 00:51:40,280 |
|
these were actually invented by Roni
|
|
|
1154 |
|
00:51:38,480 --> 00:51:41,440 |
|
Rosenfeld who's the head of the machine |
|
|
|
1155 |
|
00:51:40,280 --> 00:51:45,119 |
|
learning department at CMU
|
|
|
1156 |
|
00:51:41,440 --> 00:51:47,799 |
|
so 27
|
|
|
1157 |
|
00:51:45,119 --> 00:51:50,760 |
|
years ago I guess so he has even more |
|
|
|
1158 |
|
00:51:47,799 --> 00:51:52,680 |
|
experience in large language modeling
|
|
|
1159 |
|
00:51:50,760 --> 00:51:55,880 |
|
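
For concreteness, here is a minimal Python sketch of such a featurized (log-linear) language model with two lookup features and a bias; the array names and the tiny vocabulary are illustrative:

import numpy as np

V = 4                                # toy vocabulary size
prev1_scores = np.zeros((V, V))      # score vector per previous word
prev2_scores = np.zeros((V, V))      # score vector per word two back
bias = np.zeros(V)

def next_word_probs(w_prev2, w_prev1):
    # add the looked-up score vectors and the bias, then softmax
    scores = prev2_scores[w_prev2] + prev1_scores[w_prev1] + bias
    e = np.exp(scores - scores.max())
    return e / e.sum()

print(next_word_probs(0, 1))  # uniform until the weights are trained by SGD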
|
|
|
1160 |
|
00:51:52,680 --> 00:51:58,599 |
|
cool so the one difference with a bag
|
|
|
1161 |
|
00:51:55,880 --> 00:52:02,119 |
|
of words classifier is |
|
|
|
1162 |
|
00:51:58,599 --> 00:52:05,480 |
|
we have
|
|
|
1163 |
|
00:52:02,119 --> 00:52:07,640 |
|
biases and we have the probability
|
|
|
1164 |
|
00:52:05,480 --> 00:52:09,400 |
|
vector given the previous word but
|
|
|
1165 |
|
00:52:07,640 --> 00:52:11,720 |
|
instead of using a bag of words this |
|
|
|
1166 |
|
00:52:09,400 --> 00:52:15,440 |
|
actually is using how likely is it
|
|
|
1167 |
|
00:52:11,720 --> 00:52:16,960 |
|
given two words previous so
|
|
|
1168 |
|
00:52:15,440 --> 00:52:18,040 |
|
the feature design would be a little bit |
|
|
|
1169 |
|
00:52:16,960 --> 00:52:19,119 |
|
different and that would give you a |
|
|
|
1170 |
|
00:52:18,040 --> 00:52:22,920 |
|
total |
|
|
|
1171 |
|
00:52:19,119 --> 00:52:24,359 |
|
score as a reminder last time we
|
|
|
1172 |
|
00:52:22,920 --> 00:52:26,440 |
|
did a training algorithm where we |
|
|
|
1173 |
|
00:52:24,359 --> 00:52:27,480 |
|
calculated gradients of the loss function with
|
|
|
1174 |
|
00:52:26,440 --> 00:52:29,960 |
|
respect to the |
|
|
|
1175 |
|
00:52:27,480 --> 00:52:32,319 |
|
parameters and we can use the chain
|
|
|
1176 |
|
00:52:29,960 --> 00:52:33,839 |
|
rule and backpropagation and updates to
|
|
|
1177 |
|
00:52:32,319 --> 00:52:36,400 |
|
move in the direction that increases |
|
|
|
1178 |
|
00:52:33,839 --> 00:52:39,040 |
|
the likelihood so nothing extremely different
|
|
|
1179 |
|
00:52:36,400 --> 00:52:42,640 |
|
from what we had for our |
|
|
|
1180 |
|
00:52:39,040 --> 00:52:44,240 |
|
bag-of-words models similarly this solves some problems
|
|
|
1181 |
|
00:52:42,640 --> 00:52:47,240 |
|
so this didn't solve the problem of |
|
|
|
1182 |
|
00:52:44,240 --> 00:52:49,119 |
|
sharing strength among similar words it |
|
|
|
1183 |
|
00:52:47,240 --> 00:52:50,839 |
|
did solve the problem of conditioning on |
|
|
|
1184 |
|
00:52:49,119 --> 00:52:52,839 |
|
context with intervening words because |
|
|
|
1185 |
|
00:52:50,839 --> 00:52:56,920 |
|
now we can condition directly on doctor
|
|
|
1186 |
|
00:52:52,839 --> 00:52:59,680 |
|
without having to combine it with the
|
|
|
1187 |
|
00:52:56,920 --> 00:53:01,200 |
|
intervening words and it doesn't necessarily
|
|
|
1188 |
|
00:52:59,680 --> 00:53:03,480 |
|
handle long-distance dependencies because
|
|
|
1189 |
|
00:53:01,200 --> 00:53:05,240 |
|
we're still limited in our context with |
|
|
|
1190 |
|
00:53:03,480 --> 00:53:09,079 |
|
the model I just |
|
|
|
1191 |
|
00:53:05,240 --> 00:53:11,920 |
|
described so sorry back to
|
|
|
1192 |
|
00:53:09,079 --> 00:53:13,480 |
|
neural networks is what I should say
|
|
|
1193 |
|
00:53:11,920 --> 00:53:15,160 |
|
so if we have a feedforward neural |
|
|
|
1194 |
|
00:53:13,480 --> 00:53:18,480 |
|
network language model the way this |
|
|
|
1195 |
|
00:53:15,160 --> 00:53:20,400 |
|
could work is instead of looking up |
|
|
|
1196 |
|
00:53:18,480 --> 00:53:23,079 |
|
discrete features like we had in a
|
|
|
1197 |
|
00:53:20,400 --> 00:53:25,960 |
|
bag-of-words model we would look up
|
|
|
1198 |
|
00:53:23,079 --> 00:53:27,400 |
|
dense embeddings and so we concatenate
|
|
|
1199 |
|
00:53:25,960 --> 00:53:29,359 |
|
together these dense |
|
|
|
1200 |
|
00:53:27,400 --> 00:53:32,319 |
|
embeddings and based on the dense |
|
|
|
1201 |
|
00:53:29,359 --> 00:53:34,599 |
|
embeddings we do some sort of
|
|
|
1202 |
|
00:53:32,319 --> 00:53:36,079 |
|
intermediate layer transforms to extract |
|
|
|
1203 |
|
00:53:34,599 --> 00:53:37,200 |
|
features like we did for our neural |
|
|
|
1204 |
|
00:53:36,079 --> 00:53:39,359 |
|
network-based
|
|
|
1205 |
|
00:53:37,200 --> 00:53:41,520 |
|
classifier we multiply this by
|
|
|
1206 |
|
00:53:39,359 --> 00:53:43,559 |
|
weights we have a bias and we
|
|
|
1207 |
|
00:53:41,520 --> 00:53:46,559 |
|
calculate |
|
|
|
1208 |
|
00:53:43,559 --> 00:53:49,200 |
|
scores and then we take a softmax to
|
|
|
1209 |
|
00:53:46,559 --> 00:53:49,200 |
|
do classification
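
A minimal Python sketch of that feedforward neural LM, with sizes and names chosen for illustration:

import numpy as np

V, D, H = 1000, 64, 128              # vocab, embedding, hidden sizes
rng = np.random.default_rng(0)
E = rng.normal(size=(V, D))          # dense word embedding lookup
W1 = rng.normal(size=(2 * D, H))     # hidden layer for two context words
b1 = np.zeros(H)
W2 = rng.normal(size=(H, V))         # output weights (softmax matrix)
b2 = np.zeros(V)

def next_word_probs(context_ids):
    x = np.concatenate([E[i] for i in context_ids])  # concat embeddings
    h = np.tanh(x @ W1 + b1)                         # extract features
    scores = h @ W2 + b2
    e = np.exp(scores - scores.max())
    return e / e.sum()

print(next_word_probs([5, 42]).shape)  # (1000,) distribution over the vocab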
|
|
|
1210 |
|
00:53:50,400 --> 00:53:55,799 |
|
so this can calculate
|
|
|
1211 |
|
00:53:53,359 --> 00:53:58,000 |
|
combination features like we also
|
|
|
1212 |
|
00:53:55,799 --> 00:54:02,280 |
|
used in our neural network-based
|
|
|
1213 |
|
00:53:58,000 --> 00:54:04,119 |
|
classifiers so this could give us
|
|
|
1214 |
|
00:54:02,280 --> 00:54:05,760 |
|
a positive number for example if the |
|
|
|
1215 |
|
00:54:04,119 --> 00:54:07,760 |
|
previous word is a determiner and the |
|
|
|
1216 |
|
00:54:05,760 --> 00:54:10,440 |
|
second previous word is a verb so that |
|
|
|
1217 |
|
00:54:07,760 --> 00:54:14,520 |
|
would be like in giving a and then that
|
|
|
1218 |
|
00:54:10,440 --> 00:54:14,520 |
|
would allow us to upweight that particular
|
|
|
1219 |
|
00:54:15,000 --> 00:54:19,559 |
|
example so this allows us to share
|
|
|
1220 |
|
00:54:17,640 --> 00:54:21,640 |
|
strength in various places in our model |
|
|
|
1221 |
|
00:54:19,559 --> 00:54:23,520 |
|
which was also you know instrumental in
|
|
|
1222 |
|
00:54:21,640 --> 00:54:25,599 |
|
making our neural network
|
|
|
1223 |
|
00:54:23,520 --> 00:54:28,000 |
|
classifiers work for similar words and
|
|
|
1224 |
|
00:54:25,599 --> 00:54:30,119 |
|
stuff and so these would be word |
|
|
|
1225 |
|
00:54:28,000 --> 00:54:32,160 |
|
embeddings so similar words get similar |
|
|
|
1226 |
|
00:54:30,119 --> 00:54:35,079 |
|
embeddings another really important |
|
|
|
1227 |
|
00:54:32,160 --> 00:54:38,480 |
|
thing is similar output words also
|
|
|
1228 |
|
00:54:35,079 --> 00:54:41,839 |
|
get similar rows in the softmax matrix
|
|
|
1229 |
|
00:54:38,480 --> 00:54:44,440 |
|
and so here if you remember
|
|
|
1230 |
|
00:54:41,839 --> 00:54:48,240 |
|
from last class this was a big matrix
|
|
|
1231 |
|
00:54:44,440 --> 00:54:50,400 |
|
where the size of the matrix was the
|
|
|
1232 |
|
00:54:48,240 --> 00:54:53,319 |
|
number of vocabulary items times the |
|
|
|
1233 |
|
00:54:50,400 --> 00:54:55,920 |
|
size of a word embedding this is also a |
|
|
|
1234 |
|
00:54:53,319 --> 00:54:58,319 |
|
matrix where this is |
|
|
|
1235 |
|
00:54:55,920 --> 00:55:02,200 |
|
the number of vocabulary items times the |
|
|
|
1236 |
|
00:54:58,319 --> 00:55:04,160 |
|
size of a context embedding and so
|
|
|
1237 |
|
00:55:02,200 --> 00:55:06,160 |
|
these will also be similar because words |
|
|
|
1238 |
|
00:55:04,160 --> 00:55:08,280 |
|
that appear in similar contexts will |
|
|
|
1239 |
|
00:55:06,160 --> 00:55:11,920 |
|
also you know want similar embeddings so |
|
|
|
1240 |
|
00:55:08,280 --> 00:55:15,119 |
|
they get updated at the same
|
|
|
1241 |
|
00:55:11,920 --> 00:55:17,119 |
|
time and similar hidden states will have
|
|
|
1242 |
|
00:55:15,119 --> 00:55:19,799 |
|
similar contexts so ideally like if you
|
|
|
1243 |
|
00:55:17,119 --> 00:55:20,920 |
|
have giving a or delivering a or |
|
|
|
1244 |
|
00:55:19,799 --> 00:55:22,680 |
|
something like that those would be |
|
|
|
1245 |
|
00:55:20,920 --> 00:55:27,000 |
|
similar contexts so they would get |
|
|
|
1246 |
|
00:55:22,680 --> 00:55:27,000 |
|
similar purple embeddings out of the model
|
|
|
1247 |
|
00:55:28,440 --> 00:55:31,599 |
|
so one trick that's widely used in |
|
|
|
1248 |
|
00:55:30,200 --> 00:55:34,960 |
|
language models that further takes
|
|
|
1249 |
|
00:55:31,599 --> 00:55:38,799 |
|
advantage of this is tying
|
|
|
1250 |
|
00:55:34,960 --> 00:55:44,160 |
|
embeddings so here what this does is |
|
|
|
1251 |
|
00:55:38,799 --> 00:55:48,280 |
|
sharing parameters between this
|
|
|
1252 |
|
00:55:44,160 --> 00:55:49,920 |
|
lookup matrix here and this matrix
|
|
|
1253 |
|
00:55:48,280 --> 00:55:51,119 |
|
over here that we use for calculating |
|
|
|
1254 |
|
00:55:49,920 --> 00:55:56,200 |
|
the |
|
|
|
1255 |
|
00:55:51,119 --> 00:55:58,839 |
|
softmax and the reason why this is
|
|
|
1256 |
|
00:55:56,200 --> 00:56:00,559 |
|
useful is twofold number one it gives |
|
|
|
1257 |
|
00:55:58,839 --> 00:56:02,079 |
|
you essentially more training data to |
|
|
|
1258 |
|
00:56:00,559 --> 00:56:04,440 |
|
learn these embeddings because instead |
|
|
|
1259 |
|
00:56:02,079 --> 00:56:05,799 |
|
of learning the embeddings whenever a |
|
|
|
1260 |
|
00:56:04,440 --> 00:56:08,520 |
|
word is in |
|
|
|
1261 |
|
00:56:05,799 --> 00:56:10,599 |
|
context separately from learning the |
|
|
|
1262 |
|
00:56:08,520 --> 00:56:13,520 |
|
embeddings whenever a word is predicted |
|
|
|
1263 |
|
00:56:10,599 --> 00:56:15,480 |
|
you learn the same embedding matrix
|
|
|
1264 |
|
00:56:13,520 --> 00:56:19,319 |
|
whenever the word is in the context or |
|
|
|
1265 |
|
00:56:15,480 --> 00:56:21,520 |
|
whenever it's predicted and so that
|
|
|
1266 |
|
00:56:19,319 --> 00:56:24,119 |
|
makes it more accurate to learn these
|
|
|
1267 |
|
00:56:21,520 --> 00:56:26,960 |
|
embeddings well another thing is the |
|
|
|
1268 |
|
00:56:24,119 --> 00:56:31,119 |
|
embedding matrix can actually be very large
|
|
|
1269 |
|
00:56:26,960 --> 00:56:34,920 |
|
so like let's say we have a vocab of
|
|
|
1270 |
|
00:56:31,119 --> 00:56:37,520 |
|
100,000 and we have a
|
|
|
1271 |
|
00:56:34,920 --> 00:56:40,799 |
|
word embedding size of like 512 or |
|
|
|
1272 |
|
00:56:37,520 --> 00:56:45,319 |
|
something like that |
|
|
|
1273 |
|
00:56:40,799 --> 00:56:45,319 |
|
that's 51 million
|
|
|
1274 |
|
00:56:46,839 --> 00:56:52,440 |
|
parameters and this doesn't sound
|
|
|
1275 |
|
00:56:49,559 --> 00:56:55,520 |
|
like a lot of parameters at first but it |
|
|
|
1276 |
|
00:56:52,440 --> 00:56:57,880 |
|
actually is a lot to learn when
|
|
|
1277 |
|
00:56:55,520 --> 00:57:01,000 |
|
these get updated relatively |
|
|
|
1278 |
|
00:56:57,880 --> 00:57:03,400 |
|
infrequently because
|
|
|
1279 |
|
00:57:01,000 --> 00:57:06,079 |
|
|
|
|
1280 |
|
00:57:03,400 --> 00:57:07,960 |
|
they are only
|
|
|
1281 |
|
00:57:06,079 --> 00:57:09,559 |
|
updated whenever that word or token |
|
|
|
1282 |
|
00:57:07,960 --> 00:57:12,319 |
|
actually appears in your training data |
|
|
|
1283 |
|
00:57:09,559 --> 00:57:14,119 |
|
so this can be a good thing for
|
|
|
1284 |
|
00:57:12,319 --> 00:57:16,319 |
|
parameter savings parameter efficiency |
|
|
|
1285 |
|
00:57:14,119 --> 00:57:16,319 |
|
as |
|
|
|
1286 |
|
00:57:16,440 --> 00:57:22,520 |
|
well
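
A short PyTorch-flavored Python sketch of weight tying; this is a toy model, not the lecture's code:

import torch.nn as nn

class TiedLM(nn.Module):
    def __init__(self, vocab_size=100_000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size, bias=False)
        self.out.weight = self.embed.weight  # tie: one V x D matrix, not two

    def forward(self, ids):
        h = self.embed(ids).mean(dim=1)  # stand-in for a real context encoder
        return self.out(h)               # scores over the vocabulary

model = TiedLM()
print(sum(p.numel() for p in model.parameters()))  # 51.2M shared, not 102.4M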
|
|
|
1287 |
|
00:57:19,599 --> 00:57:24,319 |
|
so this solves most of the problems here but it doesn't solve
|
|
|
1288 |
|
00:57:22,520 --> 00:57:26,839 |
|
the problem of long-distance dependencies
|
|
|
1289 |
|
00:57:24,319 --> 00:57:29,839 |
|
because we're still limited by the overall
|
|
|
1290 |
|
00:57:26,839 --> 00:57:31,359 |
|
length of the context that we're
|
|
|
1291 |
|
00:57:29,839 --> 00:57:32,520 |
|
concatenating together here sure we |
|
|
|
1292 |
|
00:57:31,359 --> 00:57:35,760 |
|
could make that longer but that would |
|
|
|
1293 |
|
00:57:32,520 --> 00:57:37,200 |
|
make our model larger and bring
|
|
|
1294 |
|
00:57:35,760 --> 00:57:39,720 |
|
various |
|
|
|
1295 |
|
00:57:37,200 --> 00:57:42,520 |
|
issues and so what I'm going to talk |
|
|
|
1296 |
|
00:57:39,720 --> 00:57:44,599 |
|
about on Thursday is how we solve
|
|
|
1297 |
|
00:57:42,520 --> 00:57:47,559 |
|
this problem of modeling long contexts |
|
|
|
1298 |
|
00:57:44,599 --> 00:57:49,720 |
|
so how do we build recurrent neural
|
|
|
1299 |
|
00:57:47,559 --> 00:57:52,559 |
|
networks how do we build
|
|
|
1300 |
|
00:57:49,720 --> 00:57:54,960 |
|
convolutional networks
|
|
|
1301 |
|
00:57:52,559 --> 00:57:57,520 |
|
or how do we build attention-based
|
|
|
1302 |
|
00:57:54,960 --> 00:58:00,720 |
|
Transformer models and these are all |
|
|
|
1303 |
|
00:57:57,520 --> 00:58:02,119 |
|
options that are used Transformers
|
|
|
1304 |
|
00:58:00,720 --> 00:58:04,359 |
|
are kind of |
|
|
|
1305 |
|
00:58:02,119 --> 00:58:06,039 |
|
the main thing that people use
|
|
|
1306 |
|
00:58:04,359 --> 00:58:08,400 |
|
nowadays but there's a lot of versions |
|
|
|
1307 |
|
00:58:06,039 --> 00:58:11,880 |
|
of Transformers that borrow ideas from |
|
|
|
1308 |
|
00:58:08,400 --> 00:58:14,960 |
|
recurrent and convolutional models
|
|
|
1309 |
|
00:58:11,880 --> 00:58:17,359 |
|
recently a lot of long-context models
|
|
|
1310 |
|
00:58:14,960 --> 00:58:19,440 |
|
use ideas from recurrent networks and
|
|
|
1311 |
|
00:58:17,359 --> 00:58:22,160 |
|
a lot of for example speech models or |
|
|
|
1312 |
|
00:58:19,440 --> 00:58:24,160 |
|
things like image models use ideas
|
|
|
1313 |
|
00:58:22,160 --> 00:58:25,920 |
|
from convolutional networks so I think |
|
|
|
1314 |
|
00:58:24,160 --> 00:58:28,760 |
|
learning them all at the same time is a
|
|
|
1315 |
|
00:58:25,920 --> 00:58:32,160 |
|
good idea and comparing
|
|
|
1316 |
|
00:58:28,760 --> 00:58:34,319 |
|
them cool any questions about
|
|
|
1317 |
|
00:58:32,160 --> 00:58:35,799 |
|
this part I went through this kind of |
|
|
|
1318 |
|
00:58:34,319 --> 00:58:37,319 |
|
quickly because it's pretty similar to |
|
|
|
1319 |
|
00:58:35,799 --> 00:58:40,079 |
|
the classification stuff that we
|
|
|
1320 |
|
00:58:37,319 --> 00:58:42,680 |
|
covered last time but any things
|
|
|
1321 |
|
00:58:40,079 --> 00:58:42,680 |
|
that people want to |
|
|
|
1322 |
|
00:58:43,880 --> 00:58:49,039 |
|
ask okay so next I'm going to talk about |
|
|
|
1323 |
|
00:58:46,839 --> 00:58:51,559 |
|
a few other desiderata of language |
|
|
|
1324 |
|
00:58:49,039 --> 00:58:53,039 |
|
models so the next one is really really |
|
|
|
1325 |
|
00:58:51,559 --> 00:58:55,640 |
|
important it's a concept I want |
|
|
|
1326 |
|
00:58:53,039 --> 00:58:57,640 |
|
everybody to know I actually |
|
|
|
1327 |
|
00:58:55,640 --> 00:58:59,520 |
|
taught this informally up until this |
|
|
|
1328 |
|
00:58:57,640 --> 00:59:02,039 |
|
class but now I actually made slides
|
|
|
1329 |
|
00:58:59,520 --> 00:59:05,079 |
|
for it starting this time which is |
|
|
|
1330 |
|
00:59:02,039 --> 00:59:07,240 |
|
calibration so the idea of calibration |
|
|
|
1331 |
|
00:59:05,079 --> 00:59:10,200 |
|
is that the model quote unquote knows |
|
|
|
1332 |
|
00:59:07,240 --> 00:59:14,559 |
|
when it knows or the fact that it is
|
|
|
1333 |
|
00:59:10,200 --> 00:59:17,480 |
|
able to
|
|
|
1334 |
|
00:59:14,559 --> 00:59:21,640 |
|
provide a good confidence in its answer |
|
|
|
1335 |
|
00:59:17,480 --> 00:59:23,640 |
|
and more formally this can be specified |
|
|
|
1336 |
|
00:59:21,640 --> 00:59:25,240 |
|
as |
|
|
|
1337 |
|
00:59:23,640 --> 00:59:27,799 |
|
the |
|
|
|
1338 |
|
00:59:25,240 --> 00:59:29,200 |
|
fact that the model probability of
|
|
|
1339 |
|
00:59:27,799 --> 00:59:33,119 |
|
the answer matches the actual |
|
|
|
1340 |
|
00:59:29,200 --> 00:59:37,319 |
|
probability of getting it right and
|
|
|
1341 |
|
00:59:33,119 --> 00:59:37,319 |
|
so what this means |
|
|
|
1342 |
|
00:59:41,960 --> 00:59:47,480 |
|
is the |
|
|
|
1343 |
|
00:59:44,240 --> 00:59:51,839 |
|
probability of the |
|
|
|
1344 |
|
00:59:47,480 --> 00:59:51,839 |
|
answer is
|
|
|
1345 |
|
00:59:52,720 --> 00:59:59,880 |
|
correct given the fact that |
|
|
|
1346 |
|
00:59:56,319 --> 00:59:59,880 |
|
the model |
|
|
|
1347 |
|
01:00:00,160 --> 01:00:07,440 |
|
probability is equal to |
|
|
|
1348 |
|
01:00:03,640 --> 01:00:07,440 |
|
p is equal to
|
|
|
1349 |
|
01:00:08,559 --> 01:00:12,760 |
|
p
|
|
|
1350 |
|
01:00:10,480 --> 01:00:15,319 |
|
so I know this is a little bit hard to |
|
|
|
1351 |
|
01:00:12,760 --> 01:00:18,240 |
|
parse it always took me like a few
|
|
|
1352 |
|
01:00:15,319 --> 01:00:21,720 |
|
seconds to parse when I
|
|
|
1353 |
|
01:00:18,240 --> 01:00:25,160 |
|
looked at it but basically if the model |
|
|
|
1354 |
|
01:00:21,720 --> 01:00:26,920 |
|
says the probability of it
|
|
|
1355 |
|
01:00:25,160 --> 01:00:29,440 |
|
being correct is |
|
|
|
1356 |
|
01:00:26,920 --> 01:00:33,559 |
|
0.7 then the probability that the answer |
|
|
|
1357 |
|
01:00:29,440 --> 01:00:35,960 |
|
is correct is actually 0.7 so um you |
|
|
|
1358 |
|
01:00:33,559 --> 01:00:41,520 |
|
know if it says uh the probability is |
|
|
|
1359 |
|
01:00:35,960 --> 01:00:41,520 |
|
0.7 100 times then it will be right 70 |
|
|
|
1360 |
|
01:00:43,640 --> 01:00:52,160 |
|
times and so the way we formalize this |
|
|
|
1361 |
|
01:00:48,039 --> 01:00:55,200 |
|
um is by this uh it was proposed by |
|
|
|
1362 |
|
01:00:52,160 --> 01:00:57,760 |
|
this seminal paper by Guo et al. in |
|
|
|
1363 |
|
01:00:55,200 --> 01:01:00,319 |
|
2017 |
|
|
|
1364 |
|
01:00:57,760 --> 01:01:03,319 |
|
and |
|
|
|
1365 |
|
01:01:00,319 --> 01:01:05,520 |
|
unfortunately this data itself is hard |
|
|
|
1366 |
|
01:01:03,319 --> 01:01:08,119 |
|
to collect |
|
|
|
1367 |
|
01:01:05,520 --> 01:01:11,200 |
|
because the model probability is always |
|
|
|
1368 |
|
01:01:08,119 --> 01:01:13,359 |
|
different right and so if the model |
|
|
|
1369 |
|
01:01:11,200 --> 01:01:15,359 |
|
probability is like if the model |
|
|
|
1370 |
|
01:01:13,359 --> 01:01:20,480 |
|
probability was actually 0.7 that'd be |
|
|
|
1371 |
|
01:01:15,359 --> 01:01:22,000 |
|
nice but actually it's 0.7932685 |
|
|
|
1372 |
|
01:01:20,480 --> 01:01:24,599 |
|
and you never get another example where |
|
|
|
1373 |
|
01:01:22,000 --> 01:01:26,319 |
|
the probability is exactly the same so |
|
|
|
1374 |
|
01:01:24,599 --> 01:01:28,280 |
|
what we do instead is we divide the |
|
|
|
1375 |
|
01:01:26,319 --> 01:01:30,240 |
|
model probabilities into buckets so we |
|
|
|
1376 |
|
01:01:28,280 --> 01:01:32,880 |
|
say the model probability is between 0 |
|
|
|
1377 |
|
01:01:30,240 --> 01:01:36,599 |
|
and 0.1 we say the model probability is |
|
|
|
1378 |
|
01:01:32,880 --> 01:01:40,319 |
|
between 0.1 and 0.2 0.2 and 0.3 so we |
|
|
|
1379 |
|
01:01:36,599 --> 01:01:44,599 |
|
create buckets like these and |
|
|
|
1380 |
|
01:01:40,319 --> 01:01:46,520 |
|
then we look at the model confidence |
|
|
|
1381 |
|
01:01:44,599 --> 01:01:52,839 |
|
the average model confidence within that |
|
|
|
1382 |
|
01:01:46,520 --> 01:01:55,000 |
|
bucket so maybe uh |
|
|
|
1383 |
|
01:01:52,839 --> 01:01:58,000 |
|
between 0 and 0.1 the model confidence |
|
|
|
1384 |
|
01:01:55,000 --> 01:02:00,920 |
|
on average is 0.055 or something like |
|
|
|
1385 |
|
01:01:58,000 --> 01:02:02,640 |
|
that so that would be this T here and |
|
|
|
1386 |
|
01:02:00,920 --> 01:02:05,079 |
|
then the accuracy is how often did it |
|
|
|
1387 |
|
01:02:02,640 --> 01:02:06,680 |
|
actually get it correct and this can be |
|
|
|
1388 |
|
01:02:05,079 --> 01:02:09,720 |
|
plotted in this thing called a |
|
|
|
1389 |
|
01:02:06,680 --> 01:02:15,039 |
|
reliability diagram and the reliability |
|
|
|
1390 |
|
01:02:09,720 --> 01:02:17,599 |
|
diagram basically um the the |
|
|
|
1391 |
|
01:02:15,039 --> 01:02:20,359 |
|
outputs uh |
|
|
|
1392 |
|
01:02:17,599 --> 01:02:26,359 |
|
here so this is |
|
|
|
1393 |
|
01:02:20,359 --> 01:02:26,359 |
|
um this is the model |
|
|
|
1394 |
|
01:02:27,520 --> 01:02:34,119 |
|
yeah I think the red is the model |
|
|
|
1395 |
|
01:02:30,760 --> 01:02:36,400 |
|
um expected probability and then the |
|
|
|
1396 |
|
01:02:34,119 --> 01:02:40,559 |
|
blue uh the blue is the actual |
|
|
|
1397 |
|
01:02:36,400 --> 01:02:43,240 |
|
probability and then um |
|
|
|
1398 |
|
01:02:40,559 --> 01:02:45,160 |
|
the difference between the expected and |
|
|
|
1399 |
|
01:02:43,240 --> 01:02:47,160 |
|
the actual probability is kind of like |
|
|
|
1400 |
|
01:02:45,160 --> 01:02:48,359 |
|
the penalty there is how poorly |
|
|
|
1401 |
|
01:02:47,160 --> 01:02:52,000 |
|
calibrated the model is |
|
|
|
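To make the bucketing concrete, here is a minimal sketch of expected calibration error in plain Python, assuming you already have parallel lists of model confidences and 0/1 correctness indicators (the names and the ten-bucket choice are just illustrative):

```python
def expected_calibration_error(confidences, correct, n_buckets=10):
    # Bucket predictions by confidence, then take a weighted average of
    # |average confidence - accuracy| across buckets.
    n = len(confidences)
    ece = 0.0
    for b in range(n_buckets):
        lo, hi = b / n_buckets, (b + 1) / n_buckets
        in_bucket = [i for i, c in enumerate(confidences)
                     if lo <= c < hi or (b == n_buckets - 1 and c == 1.0)]
        if not in_bucket:
            continue
        avg_conf = sum(confidences[i] for i in in_bucket) / len(in_bucket)
        acc = sum(correct[i] for i in in_bucket) / len(in_bucket)
        ece += (len(in_bucket) / n) * abs(avg_conf - acc)
    return ece
```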
1402 |
|
01:02:48,359 --> 01:02:55,880 |
|
and one really important thing to |
|
|
|
1403 |
|
01:02:52,000 --> 01:02:58,440 |
|
know is that calibration and accuracy are |
|
|
|
1404 |
|
01:02:55,880 --> 01:03:00,599 |
|
not necessarily they don't go hand in hand |
|
|
|
1405 |
|
01:02:58,440 --> 01:03:02,359 |
|
uh they do to some extent but they don't |
|
|
|
1406 |
|
01:03:00,599 --> 01:03:06,440 |
|
uh they don't necessarily go hand in |
|
|
|
1407 |
|
01:03:02,359 --> 01:03:06,440 |
|
hand and |
|
|
|
1408 |
|
01:03:07,200 --> 01:03:14,319 |
|
the example on the left is a bad model |
|
|
|
1409 |
|
01:03:11,200 --> 01:03:16,279 |
|
but a well calibrated one so its accuracy is |
|
|
|
1410 |
|
01:03:14,319 --> 01:03:18,720 |
|
uh its error is |
|
|
|
1411 |
|
01:03:16,279 --> 01:03:20,000 |
|
44.9% um but it's well calibrated as you |
|
|
|
1412 |
|
01:03:18,720 --> 01:03:21,440 |
|
can see like when it says it knows the |
|
|
|
1413 |
|
01:03:20,000 --> 01:03:23,880 |
|
answer it knows the answer when it |
|
|
|
1414 |
|
01:03:21,440 --> 01:03:27,799 |
|
doesn't know the answer it doesn't this model on the |
|
|
|
1415 |
|
01:03:23,880 --> 01:03:30,000 |
|
other hand has better error um but |
|
|
|
1416 |
|
01:03:27,799 --> 01:03:31,880 |
|
worse calibration so the reason why is |
|
|
|
1417 |
|
01:03:30,000 --> 01:03:36,680 |
|
the model is very very confident all the |
|
|
|
1418 |
|
01:03:31,880 --> 01:03:39,640 |
|
time and usually what happens is um |
|
|
|
1419 |
|
01:03:36,680 --> 01:03:41,200 |
|
models that overfit to the data |
|
|
|
1420 |
|
01:03:39,640 --> 01:03:43,359 |
|
especially when you do early stopping on |
|
|
|
1421 |
|
01:03:41,200 --> 01:03:44,760 |
|
something like accuracy uh when you stop |
|
|
|
1422 |
|
01:03:43,359 --> 01:03:47,279 |
|
the training on something like accuracy |
|
|
|
1423 |
|
01:03:44,760 --> 01:03:49,960 |
|
will become very overconfident and uh |
|
|
|
1424 |
|
01:03:47,279 --> 01:03:52,599 |
|
give confidence estimates um that are |
|
|
|
1425 |
|
01:03:49,960 --> 01:03:54,000 |
|
incorrect like this so this is important to |
|
|
|
1426 |
|
01:03:52,599 --> 01:03:56,079 |
|
know and the reason why it's important |
|
|
|
1427 |
|
01:03:54,000 --> 01:03:58,000 |
|
to know is actually because you know |
|
|
|
1428 |
|
01:03:56,079 --> 01:04:00,960 |
|
models are very good at making up things |
|
|
|
1429 |
|
01:03:58,000 --> 01:04:02,359 |
|
that aren't actually correct nowadays um |
|
|
|
1430 |
|
01:04:00,960 --> 01:04:04,920 |
|
but if you have a really well |
|
|
|
1431 |
|
01:04:02,359 --> 01:04:07,760 |
|
calibrated model you could at least say |
|
|
|
1432 |
|
01:04:04,920 --> 01:04:09,920 |
|
with what confidence you have this |
|
|
|
1433 |
|
01:04:07,760 --> 01:04:12,760 |
|
working so how do you calculate the |
|
|
|
1434 |
|
01:04:09,920 --> 01:04:14,160 |
|
probability of an answer so uh yeah sorry |
|
|
|
1435 |
|
01:04:12,760 --> 01:04:17,599 |
|
uh yes |
|
|
|
1436 |
|
01:04:14,160 --> 01:04:17,599 |
|
yes yeah please |
|
|
|
1437 |
|
01:04:17,799 --> 01:04:26,559 |
|
go the probability of percent or |
|
|
|
1438 |
|
01:04:23,200 --> 01:04:28,039 |
|
percent um usually this would be for a |
|
|
|
1439 |
|
01:04:26,559 --> 01:04:29,599 |
|
generated output because you want to |
|
|
|
1440 |
|
01:04:28,039 --> 01:04:32,559 |
|
know the probability that the |
|
|
|
1441 |
|
01:04:29,599 --> 01:04:32,559 |
|
generated output is |
|
|
|
1442 |
|
01:04:53,160 --> 01:04:56,160 |
|
correct |
|
|
|
1443 |
|
01:05:01,079 --> 01:05:06,319 |
|
great that's what I'm about to talk |
|
|
|
1444 |
|
01:05:03,000 --> 01:05:07,839 |
|
about so perfect perfect question um so |
|
|
|
1445 |
|
01:05:06,319 --> 01:05:10,160 |
|
how do we calculate the answer |
|
|
|
1446 |
|
01:05:07,839 --> 01:05:13,279 |
|
probability or um how do we calculate |
|
|
|
1447 |
|
01:05:10,160 --> 01:05:15,039 |
|
the confidence in an answer um we're |
|
|
|
1448 |
|
01:05:13,279 --> 01:05:18,319 |
|
actually going to go into more detail |
|
|
|
1449 |
|
01:05:15,039 --> 01:05:20,760 |
|
about this um in a later class but the |
|
|
|
1450 |
|
01:05:18,319 --> 01:05:23,200 |
|
first thing is probability of the answer |
|
|
|
1451 |
|
01:05:20,760 --> 01:05:25,799 |
|
and this is easy when there's a single |
|
|
|
1452 |
|
01:05:23,200 --> 01:05:29,079 |
|
answer um like if there's only one |
|
|
|
1453 |
|
01:05:25,799 --> 01:05:31,839 |
|
correct answer and you want your model |
|
|
|
1454 |
|
01:05:29,079 --> 01:05:34,160 |
|
to be solving math problems and you want |
|
|
|
1455 |
|
01:05:31,839 --> 01:05:38,319 |
|
it to return only the answer and nothing |
|
|
|
1456 |
|
01:05:34,160 --> 01:05:40,760 |
|
else if it returns anything else like it |
|
|
|
1457 |
|
01:05:38,319 --> 01:05:44,920 |
|
won't work then you can just use the probability of the answer |
|
|
|
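As a minimal sketch, for a single-string answer the model probability is just the product of the per-token probabilities, i.e. the exponential of the summed log-probabilities (the `token_logprobs` input is a stand-in for whatever your model API returns):

```python
import math

def answer_probability(token_logprobs):
    # Sequence probability = product of per-token probabilities
    # = exp of the sum of per-token log-probabilities.
    return math.exp(sum(token_logprobs))

# e.g. an answer tokenized into two tokens:
p = answer_probability([-0.2, -0.1])  # ~0.74
```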
1458 |
|
01:05:40,760 --> 01:05:47,119 |
|
but what |
|
|
|
1459 |
|
01:05:44,920 --> 01:05:49,559 |
|
if |
|
|
|
1460 |
|
01:05:47,119 --> 01:05:52,000 |
|
um what if there are multiple acceptable |
|
|
|
1461 |
|
01:05:49,559 --> 01:05:54,680 |
|
answers um and maybe a perfect example |
|
|
|
1462 |
|
01:05:52,000 --> 01:06:02,240 |
|
of that is like where is CMU located |
|
|
|
1463 |
|
01:05:54,680 --> 01:06:04,400 |
|
or um uh where are we right now um |
|
|
|
1464 |
|
01:06:02,240 --> 01:06:06,960 |
|
if the question is where are we right |
|
|
|
1465 |
|
01:06:04,400 --> 01:06:08,880 |
|
now um could be |
|
|
|
1466 |
|
01:06:06,960 --> 01:06:12,880 |
|
Pittsburgh could be |
|
|
|
1467 |
|
01:06:08,880 --> 01:06:12,880 |
|
CMU could be Carnegie |
|
|
|
1468 |
|
01:06:16,200 --> 01:06:24,440 |
|
Mellon could be other things like |
|
|
|
1469 |
|
01:06:18,760 --> 01:06:26,760 |
|
this right um and so another way that |
|
|
|
1470 |
|
01:06:24,440 --> 01:06:28,319 |
|
you can calculate the confidence is |
|
|
|
1471 |
|
01:06:26,760 --> 01:06:31,240 |
|
calculating the probability of the |
|
|
|
1472 |
|
01:06:28,319 --> 01:06:33,680 |
|
answer plus uh you know paraphrases of |
|
|
|
1473 |
|
01:06:31,240 --> 01:06:35,799 |
|
the answer or other uh other things like |
|
|
|
1474 |
|
01:06:33,680 --> 01:06:37,680 |
|
this and so then you would just sum the |
|
|
|
1475 |
|
01:06:35,799 --> 01:06:38,839 |
|
probability over all the |
|
|
|
1476 |
|
01:06:37,680 --> 01:06:41,680 |
|
acceptable |
|
|
|
1477 |
|
01:06:38,839 --> 01:06:45,359 |
|
answers |
|
|
|
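So the confidence would be a sum over the acceptable set, something like this sketch (`model.score` is a hypothetical helper, assumed to return a log-probability):

```python
import math

def answer_confidence(model, question, acceptable_answers):
    # Sum model probability over every answer we would accept,
    # e.g. ["Pittsburgh", "CMU", "Carnegie Mellon"].
    # model.score(q, a) is assumed to return log P(a | q).
    return sum(math.exp(model.score(question, a)) for a in acceptable_answers)
```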
1478 |
|
01:06:41,680 --> 01:06:47,680 |
|
um another thing that you can do is um |
|
|
|
1479 |
|
01:06:45,359 --> 01:06:49,279 |
|
sample multiple outputs and count the |
|
|
|
1480 |
|
01:06:47,680 --> 01:06:51,000 |
|
number of times you get a particular |
|
|
|
1481 |
|
01:06:49,279 --> 01:06:54,440 |
|
answer this doesn't solve the problem of |
|
|
|
1482 |
|
01:06:51,000 --> 01:06:58,119 |
|
paraphrases existing but |
|
|
|
1483 |
|
01:06:54,440 --> 01:06:59,880 |
|
it does solve the problem of uh it does |
|
|
|
1484 |
|
01:06:58,119 --> 01:07:01,480 |
|
solve two problems sometimes there are |
|
|
|
1485 |
|
01:06:59,880 --> 01:07:05,240 |
|
language models where you can't get |
|
|
|
1486 |
|
01:07:01,480 --> 01:07:06,640 |
|
probabilities out of them um this is not |
|
|
|
1487 |
|
01:07:05,240 --> 01:07:08,680 |
|
so much of a problem anymore with the |
|
|
|
1488 |
|
01:07:06,640 --> 01:07:11,240 |
|
GPT models because they're reintroducing |
|
|
|
1489 |
|
01:07:08,680 --> 01:07:12,440 |
|
the ability to get probabilities but um |
|
|
|
1490 |
|
01:07:11,240 --> 01:07:13,720 |
|
there are some models where you can just |
|
|
|
1491 |
|
01:07:12,440 --> 01:07:16,279 |
|
sample from them and you can't get |
|
|
|
1492 |
|
01:07:13,720 --> 01:07:18,680 |
|
probabilities out but also more |
|
|
|
1493 |
|
01:07:16,279 --> 01:07:21,039 |
|
importantly um sometimes when you're |
|
|
|
1494 |
|
01:07:18,680 --> 01:07:23,000 |
|
using things like uh Chain of Thought |
|
|
|
1495 |
|
01:07:21,039 --> 01:07:26,520 |
|
reasoning which I'll talk about in more |
|
|
|
1496 |
|
01:07:23,000 --> 01:07:29,839 |
|
detail but basically it's like um please |
|
|
|
1497 |
|
01:07:26,520 --> 01:07:31,480 |
|
solve this math problem and explain |
|
|
|
1498 |
|
01:07:29,839 --> 01:07:33,480 |
|
explain your solution and then if it |
|
|
|
1499 |
|
01:07:31,480 --> 01:07:35,119 |
|
will do that it will generate you know a |
|
|
|
1500 |
|
01:07:33,480 --> 01:07:36,279 |
|
really long explanation of how it got to |
|
|
|
1501 |
|
01:07:35,119 --> 01:07:40,119 |
|
the solution and then it will give you |
|
|
|
1502 |
|
01:07:36,279 --> 01:07:41,640 |
|
the answer at the very end and so then |
|
|
|
1503 |
|
01:07:40,119 --> 01:07:44,960 |
|
you can't calculate the probability of |
|
|
|
1504 |
|
01:07:41,640 --> 01:07:47,720 |
|
the actual like answer itself because |
|
|
|
1505 |
|
01:07:44,960 --> 01:07:49,359 |
|
there's this long reasoning chain in |
|
|
|
1506 |
|
01:07:47,720 --> 01:07:51,960 |
|
between and you have like all these |
|
|
|
1507 |
|
01:07:49,359 --> 01:07:53,559 |
|
other all that other text there but what |
|
|
|
1508 |
|
01:07:51,960 --> 01:07:55,480 |
|
you can do is you can sample those |
|
|
|
1509 |
|
01:07:53,559 --> 01:07:56,920 |
|
reasoning chains 100 times and then see |
|
|
|
1510 |
|
01:07:55,480 --> 01:07:59,599 |
|
how many times you got a particular |
|
|
|
1511 |
|
01:07:56,920 --> 01:08:02,960 |
|
answer and that's actually a pretty um a |
|
|
|
1512 |
|
01:07:59,599 --> 01:08:06,079 |
|
pretty reasonable way of uh |
|
|
|
1513 |
|
01:08:02,960 --> 01:08:09,000 |
|
getting a confidence estimate |
|
|
|
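A sketch of that sample-and-count estimate (often called self-consistency); `generate` and `extract_answer` are stand-ins here for sampling a chain-of-thought and pulling out the final answer:

```python
from collections import Counter

def sampled_confidence(generate, extract_answer, prompt, n=100):
    # Sample n reasoning chains, keep only the final answers, and use
    # the empirical frequency of the top answer as its confidence.
    answers = Counter(extract_answer(generate(prompt)) for _ in range(n))
    best, count = answers.most_common(1)[0]
    return best, count / n
```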
1514 |
|
01:08:06,079 --> 01:08:11,200 |
|
this is my favorite one I love how |
|
|
|
1515 |
|
01:08:09,000 --> 01:08:12,880 |
|
we can do this now it's just absolutely |
|
|
|
1516 |
|
01:08:11,200 --> 01:08:16,480 |
|
ridiculous but you could ask the model |
|
|
|
1517 |
|
01:08:12,880 --> 01:08:20,279 |
|
how confident it is and um it sometimes |
|
|
|
1518 |
|
01:08:16,480 --> 01:08:22,359 |
|
gives you a reasonable uh a reasonable answer |
|
|
|
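For instance, a prompt along these lines (just an illustrative template, not a recipe from the lecture):

```python
# A hypothetical verbalized-confidence prompt; the wording is made up
# for illustration.
prompt = (
    "Q: Where is CMU located?\n"
    "Answer the question, then on a new line give your confidence "
    "as a number between 0 and 1.\n"
)
```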
1519 |
|
01:08:20,279 --> 01:08:24,600 |
|
um there's a really nice |
|
|
|
1520 |
|
01:08:22,359 --> 01:08:26,400 |
|
comparison of different methods uh in |
|
|
|
1521 |
|
01:08:24,600 --> 01:08:29,679 |
|
this paper which is also on the |
|
|
|
1522 |
|
01:08:26,400 --> 01:08:31,960 |
|
website and basically long story short |
|
|
|
1523 |
|
01:08:29,679 --> 01:08:34,000 |
|
the conclusion from this paper is that the |
|
|
|
1524 |
|
01:08:31,960 --> 01:08:35,640 |
|
sampling multiple outputs one is the |
|
|
|
1525 |
|
01:08:34,000 --> 01:08:36,839 |
|
best way to do it if you can't directly |
|
|
|
1526 |
|
01:08:35,640 --> 01:08:39,520 |
|
calculate |
|
|
|
1527 |
|
01:08:36,839 --> 01:08:41,359 |
|
probabilities um another thing that I'd |
|
|
|
1528 |
|
01:08:39,520 --> 01:08:42,600 |
|
like people to pay very close attention |
|
|
|
1529 |
|
01:08:41,359 --> 01:08:45,040 |
|
to is in the |
|
|
|
1530 |
|
01:08:42,600 --> 01:08:46,480 |
|
um in the generation class |
|
|
|
1531 |
|
01:08:45,040 --> 01:08:49,600 |
|
we're going to be talking about minimum |
|
|
|
1532 |
|
01:08:46,480 --> 01:08:52,600 |
|
Bayes risk which is a criterion for |
|
|
|
1533 |
|
01:08:49,600 --> 01:08:54,719 |
|
deciding how risky an output is and it's |
|
|
|
1534 |
|
01:08:52,600 --> 01:08:56,199 |
|
actually a really good uh confidence |
|
|
|
1535 |
|
01:08:54,719 --> 01:08:58,000 |
|
metric as well but I'm going to leave |
|
|
|
1536 |
|
01:08:56,199 --> 01:08:59,440 |
|
that till when we discuss it in more |
|
|
|
1537 |
|
01:08:58,000 --> 01:09:02,759 |
|
detail |
|
|
|
1538 |
|
01:08:59,440 --> 01:09:05,359 |
|
um any any questions |
|
|
|
1539 |
|
01:09:02,759 --> 01:09:08,440 |
|
here okay |
|
|
|
1540 |
|
01:09:05,359 --> 01:09:10,480 |
|
cool um so the other Criterion uh this |
|
|
|
1541 |
|
01:09:08,440 --> 01:09:12,520 |
|
is just yet another Criterion that we |
|
|
|
1542 |
|
01:09:10,480 --> 01:09:15,239 |
|
would like language models to be good at |
|
|
|
1543 |
|
01:09:12,520 --> 01:09:17,600 |
|
um its efficiency and so basically the |
|
|
|
1544 |
|
01:09:15,239 --> 01:09:21,920 |
|
model is easy to run on limited Hardware |
|
|
|
1545 |
|
01:09:17,600 --> 01:09:25,400 |
|
by some you know uh metric of easy and |
|
|
|
1546 |
|
01:09:21,920 --> 01:09:29,319 |
|
some metrics that we like to talk about |
|
|
|
1547 |
|
01:09:25,400 --> 01:09:32,400 |
|
are parameter count so often you will |
|
|
|
1548 |
|
01:09:29,319 --> 01:09:34,239 |
|
see oh this is the best model under |
|
|
|
1549 |
|
01:09:32,400 --> 01:09:35,520 |
|
three billion parameters or this is the |
|
|
|
1550 |
|
01:09:34,239 --> 01:09:37,960 |
|
best model under seven billion |
|
|
|
1551 |
|
01:09:35,520 --> 01:09:39,600 |
|
parameters or um we trained a model with |
|
|
|
1552 |
|
01:09:37,960 --> 01:09:42,159 |
|
one trillion parameters or something |
|
|
|
1553 |
|
01:09:39,600 --> 01:09:44,719 |
|
like that you know |
|
|
|
1554 |
|
01:09:42,159 --> 01:09:46,839 |
|
uh the thing is parameter count doesn't |
|
|
|
1555 |
|
01:09:44,719 --> 01:09:49,640 |
|
really mean that much um from the point |
|
|
|
1556 |
|
01:09:46,839 --> 01:09:52,839 |
|
of view of like ease of using the model |
|
|
|
1557 |
|
01:09:49,640 --> 01:09:54,400 |
|
um unless you also think about other uh |
|
|
|
1558 |
|
01:09:52,839 --> 01:09:56,480 |
|
you know desiderata |
|
|
|
1559 |
|
01:09:54,400 --> 01:09:58,840 |
|
like just to give one example of this with |
|
|
|
1560 |
|
01:09:56,480 --> 01:10:00,880 |
|
parameter count um let's say you have a |
|
|
|
1561 |
|
01:09:58,840 --> 01:10:02,960 |
|
parameter count of 7 billion is that 7 |
|
|
|
1562 |
|
01:10:00,880 --> 01:10:05,719 |
|
billion parameters at 32-bit Precision |
|
|
|
1563 |
|
01:10:02,960 --> 01:10:07,800 |
|
or is that 7 billion parameters at 4-bit |
|
|
|
1564 |
|
01:10:05,719 --> 01:10:09,400 |
|
Precision um will make a huge difference |
|
|
|
1565 |
|
01:10:07,800 --> 01:10:12,960 |
|
in your memory footprint your speed |
|
|
|
1566 |
|
01:10:09,400 --> 01:10:14,920 |
|
other things like that |
|
|
|
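As a rough back-of-the-envelope, weights-only memory is parameter count times bytes per parameter, so the same model spans about an 8x range depending on precision:

```latex
7 \times 10^{9}\ \text{params} \times 4\ \text{bytes (32-bit)} \approx 28\ \text{GB}
\qquad \text{vs.} \qquad
7 \times 10^{9}\ \text{params} \times 0.5\ \text{bytes (4-bit)} \approx 3.5\ \text{GB}
```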
1567 |
|
01:10:12,960 --> 01:10:18,040 |
|
um so some of the things that are more direct with respect |
|
|
|
1568 |
|
01:10:14,920 --> 01:10:19,800 |
|
to efficiency are memory usage um and |
|
|
|
1569 |
|
01:10:18,040 --> 01:10:22,440 |
|
there's two varieties of memory usage |
|
|
|
1570 |
|
01:10:19,800 --> 01:10:24,280 |
|
one is model uh model only memory usage |
|
|
|
1571 |
|
01:10:22,440 --> 01:10:27,120 |
|
so when you've loaded the model into |
|
|
|
1572 |
|
01:10:24,280 --> 01:10:29,120 |
|
memory uh how much space does it take |
|
|
|
1573 |
|
01:10:27,120 --> 01:10:31,159 |
|
and also Peak memory consumption when |
|
|
|
1574 |
|
01:10:29,120 --> 01:10:33,159 |
|
you have run the model over a |
|
|
|
1575 |
|
01:10:31,159 --> 01:10:35,920 |
|
sequence of a certain length how much is |
|
|
|
1576 |
|
01:10:33,159 --> 01:10:40,040 |
|
it going to peak at so that's another |
|
|
|
1577 |
|
01:10:35,920 --> 01:10:43,000 |
|
thing another thing is latency um and |
|
|
|
1578 |
|
01:10:40,040 --> 01:10:46,440 |
|
with respect to latency this can be |
|
|
|
1579 |
|
01:10:43,000 --> 01:10:49,440 |
|
either how long does it take to start |
|
|
|
1580 |
|
01:10:46,440 --> 01:10:52,080 |
|
outputting the first token um and how |
|
|
|
1581 |
|
01:10:49,440 --> 01:10:54,840 |
|
long does it take to uh finish |
|
|
|
1582 |
|
01:10:52,080 --> 01:10:59,480 |
|
outputting uh a generation of a certain |
|
|
|
1583 |
|
01:10:54,840 --> 01:11:01,199 |
|
length and the first will have more to |
|
|
|
1584 |
|
01:10:59,480 --> 01:11:04,960 |
|
do with how long does it take to encode |
|
|
|
1585 |
|
01:11:01,199 --> 01:11:06,480 |
|
a sequence um which is usually faster |
|
|
|
1586 |
|
01:11:04,960 --> 01:11:09,080 |
|
than how long does it take to generate a |
|
|
|
1587 |
|
01:11:06,480 --> 01:11:11,360 |
|
sequence so this will have to do with |
|
|
|
1588 |
|
01:11:09,080 --> 01:11:13,000 |
|
like encoding time this will require |
|
|
|
1589 |
|
01:11:11,360 --> 01:11:15,880 |
|
encoding time of course but it will also |
|
|
|
1590 |
|
01:11:13,000 --> 01:11:15,880 |
|
require generation |
|
|
|
1591 |
|
01:11:16,280 --> 01:11:21,840 |
|
time also throughput so you know how |
|
|
|
1592 |
|
01:11:19,239 --> 01:11:23,679 |
|
much um how many sentences can you |
|
|
|
1593 |
|
01:11:21,840 --> 01:11:25,400 |
|
process in a certain amount of time so |
|
|
|
1594 |
|
01:11:23,679 --> 01:11:26,480 |
|
so these are kind of desiderata that you |
|
|
|
1595 |
|
01:11:25,400 --> 01:11:29,000 |
|
would |
|
|
|
1596 |
|
01:11:26,480 --> 01:11:30,280 |
|
say um we're going to be talking about |
|
|
|
1597 |
|
01:11:29,000 --> 01:11:31,920 |
|
this more in the distillation and |
|
|
|
1598 |
|
01:11:30,280 --> 01:11:33,199 |
|
compression and generation algorithms |
|
|
|
1599 |
|
01:11:31,920 --> 01:11:35,640 |
|
classes so I won't go into a whole lot |
|
|
|
1600 |
|
01:11:33,199 --> 01:11:36,840 |
|
of detail about this but um it's just |
|
|
|
1601 |
|
01:11:35,640 --> 01:11:39,960 |
|
another thing that we want to be |
|
|
|
1602 |
|
01:11:36,840 --> 01:11:43,560 |
|
thinking about in addition to |
|
|
|
1603 |
|
01:11:39,960 --> 01:11:45,360 |
|
complexity um but since I'm on the |
|
|
|
1604 |
|
01:11:43,560 --> 01:11:47,800 |
|
topic of efficiency I would like to talk |
|
|
|
1605 |
|
01:11:45,360 --> 01:11:49,480 |
|
just a little bit about it um in terms |
|
|
|
1606 |
|
01:11:47,800 --> 01:11:51,000 |
|
of especially things that will be useful |
|
|
|
1607 |
|
01:11:49,480 --> 01:11:53,600 |
|
for implementing your first |
|
|
|
1608 |
|
01:11:51,000 --> 01:11:55,840 |
|
assignment and uh one thing that every |
|
|
|
1609 |
|
01:11:53,600 --> 01:11:58,639 |
|
body should know about um if you've done |
|
|
|
1610 |
|
01:11:55,840 --> 01:11:59,920 |
|
any like deep learning with pytorch or |
|
|
|
1611 |
|
01:11:58,639 --> 01:12:02,639 |
|
something like this you already know |
|
|
|
1612 |
|
01:11:59,920 --> 01:12:05,880 |
|
about this probably but uh I think it's |
|
|
|
1613 |
|
01:12:02,639 --> 01:12:08,760 |
|
worth mentioning but basically mini |
|
|
|
1614 |
|
01:12:05,880 --> 01:12:12,120 |
|
batching or batching uh is uh very |
|
|
|
1615 |
|
01:12:08,760 --> 01:12:15,320 |
|
useful and the basic idea behind it is |
|
|
|
1616 |
|
01:12:12,120 --> 01:12:17,560 |
|
that on Modern Hardware if you do many |
|
|
|
1617 |
|
01:12:15,320 --> 01:12:20,520 |
|
of the same operations at once it's much |
|
|
|
1618 |
|
01:12:17,560 --> 01:12:24,320 |
|
faster than doing um |
|
|
|
1619 |
|
01:12:20,520 --> 01:12:25,480 |
|
like uh operations consecutively and |
|
|
|
1620 |
|
01:12:24,320 --> 01:12:27,280 |
|
that's especially the case if you're |
|
|
|
1621 |
|
01:12:25,480 --> 01:12:30,520 |
|
programming in an extremely slow |
|
|
|
1622 |
|
01:12:27,280 --> 01:12:33,239 |
|
programming language like python um I |
|
|
|
1623 |
|
01:12:30,520 --> 01:12:37,239 |
|
love python but it's slow I mean like |
|
|
|
1624 |
|
01:12:33,239 --> 01:12:38,719 |
|
there's no argument about that um and so |
|
|
|
1625 |
|
01:12:37,239 --> 01:12:40,520 |
|
what mini batching does is it combines |
|
|
|
1626 |
|
01:12:38,719 --> 01:12:43,600 |
|
together smaller operations into one big |
|
|
|
1627 |
|
01:12:40,520 --> 01:12:47,480 |
|
one and the basic idea uh for example if |
|
|
|
1628 |
|
01:12:43,600 --> 01:12:51,679 |
|
we want to calculate our um our linear |
|
|
|
1629 |
|
01:12:47,480 --> 01:12:56,560 |
|
layer with a tanh uh nonlinearity after it |
|
|
|
1630 |
|
01:12:51,679 --> 01:12:59,760 |
|
we will take several inputs X1 X2 X3 |
|
|
|
1631 |
|
01:12:56,560 --> 01:13:02,040 |
|
concatenate them together and do a |
|
|
|
1632 |
|
01:12:59,760 --> 01:13:04,600 |
|
Matrix Matrix multiply instead of doing |
|
|
|
1633 |
|
01:13:02,040 --> 01:13:07,960 |
|
three Vector Matrix |
|
|
|
1634 |
|
01:13:04,600 --> 01:13:09,239 |
|
multiplies and so what we do is we take |
|
|
|
1635 |
|
01:13:07,960 --> 01:13:11,280 |
|
a whole bunch of examples we take like |
|
|
|
1636 |
|
01:13:09,239 --> 01:13:13,840 |
|
64 examples or something like that and |
|
|
|
1637 |
|
01:13:11,280 --> 01:13:18,000 |
|
we combine them together and calculate |
|
|
|
1638 |
|
01:13:13,840 --> 01:13:21,280 |
|
our outputs with it one thing to know is that |
|
|
|
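Here is a minimal PyTorch sketch of that idea; the layer sizes are made up for illustration:

```python
import torch

W = torch.randn(256, 512)   # weight of a linear layer
b = torch.randn(256)        # bias

x1 = torch.randn(512)
x2 = torch.randn(512)
x3 = torch.randn(512)

# Unbatched: three separate matrix-vector multiplies.
ys = [torch.tanh(W @ x + b) for x in (x1, x2, x3)]

# Batched: stack the inputs into one matrix and do a single
# matrix-matrix multiply -- same results, much better hardware use.
X = torch.stack([x1, x2, x3])   # shape (3, 512)
Y = torch.tanh(X @ W.T + b)     # shape (3, 256)
```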
1639 |
|
01:13:18,000 --> 01:13:22,560 |
|
if you're working with sentences there's |
|
|
|
1640 |
|
01:13:21,280 --> 01:13:24,719 |
|
different ways you can calculate the |
|
|
|
1641 |
|
01:13:22,560 --> 01:13:27,360 |
|
size of your mini batches |
|
|
|
1642 |
|
01:13:24,719 --> 01:13:28,880 |
|
normally nowadays the thing that people |
|
|
|
1643 |
|
01:13:27,360 --> 01:13:30,400 |
|
do and the thing that I recommend is to |
|
|
|
1644 |
|
01:13:28,880 --> 01:13:31,679 |
|
calculate the size of your mini batches |
|
|
|
1645 |
|
01:13:30,400 --> 01:13:33,639 |
|
based on the number of tokens in the |
|
|
|
1646 |
|
01:13:31,679 --> 01:13:35,840 |
|
mini batch it used to be that you would |
|
|
|
1647 |
|
01:13:33,639 --> 01:13:39,719 |
|
do it based on the number of sequences |
|
|
|
1648 |
|
01:13:35,840 --> 01:13:43,800 |
|
but the problem is um like 50 |
|
|
|
1649 |
|
01:13:39,719 --> 01:13:47,120 |
|
sequences of length like 100 is much |
|
|
|
1650 |
|
01:13:43,800 --> 01:13:49,480 |
|
more memory intensive than uh 50 |
|
|
|
1651 |
|
01:13:47,120 --> 01:13:51,960 |
|
sequences of length five and so you get |
|
|
|
1652 |
|
01:13:49,480 --> 01:13:53,920 |
|
these mini batches |
|
|
|
1653 |
|
01:13:51,960 --> 01:13:57,000 |
|
of vastly varying size and that's both |
|
|
|
1654 |
|
01:13:53,920 --> 01:13:59,800 |
|
bad for you know memory overflows and |
|
|
|
1655 |
|
01:13:57,000 --> 01:14:01,639 |
|
bad for learning |
|
|
|
1656 |
|
01:13:59,800 --> 01:14:04,280 |
|
stability so I I definitely recommend |
|
|
|
1657 |
|
01:14:01,639 --> 01:14:06,880 |
|
doing it based on the number of tokens |
|
|
|
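A minimal sketch of token-based batch sizing (the 4096-token budget is arbitrary, and this counts raw tokens rather than padded size):

```python
def batch_by_tokens(sentences, max_tokens=4096):
    # Group sentences so each mini-batch stays under a token budget,
    # rather than using a fixed number of sequences per batch.
    batches, batch, batch_tokens = [], [], 0
    for sent in sentences:                  # sent: a list of tokens
        if batch and batch_tokens + len(sent) > max_tokens:
            batches.append(batch)
            batch, batch_tokens = [], 0
        batch.append(sent)
        batch_tokens += len(sent)
    if batch:
        batches.append(batch)
    return batches
```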
1658 |
|
01:14:04,280 --> 01:14:09,080 |
|
uh another thing is GPUs versus |
|
|
|
1659 |
|
01:14:06,880 --> 01:14:12,400 |
|
CPUs so |
|
|
|
1660 |
|
01:14:09,080 --> 01:14:14,600 |
|
um uh CPUs one way you can think of it |
|
|
|
1661 |
|
01:14:12,400 --> 01:14:17,320 |
|
is a CPU is kind of like a motorcycle it's |
|
|
|
1662 |
|
01:14:14,600 --> 01:14:19,600 |
|
very fast at picking up and doing a |
|
|
|
1663 |
|
01:14:17,320 --> 01:14:23,960 |
|
bunch of uh things very quickly |
|
|
|
1664 |
|
01:14:19,600 --> 01:14:26,600 |
|
accelerating uh into starting new uh new |
|
|
|
1665 |
|
01:14:23,960 --> 01:14:28,760 |
|
tasks a GPU is more like an airplane |
|
|
|
1666 |
|
01:14:26,600 --> 01:14:30,719 |
|
which uh you wait forever in line in |
|
|
|
1667 |
|
01:14:28,760 --> 01:14:33,360 |
|
security and |
|
|
|
1668 |
|
01:14:30,719 --> 01:14:34,800 |
|
then and then uh it takes a long time to |
|
|
|
1669 |
|
01:14:33,360 --> 01:14:40,400 |
|
get off the ground and start working but |
|
|
|
1670 |
|
01:14:34,800 --> 01:14:43,679 |
|
once it does it's extremely fast um and |
|
|
|
1671 |
|
01:14:40,400 --> 01:14:45,360 |
|
so if we do a simple example of how long |
|
|
|
1672 |
|
01:14:43,679 --> 01:14:47,600 |
|
does it take to do a Matrix Matrix |
|
|
|
1673 |
|
01:14:45,360 --> 01:14:49,040 |
|
multiply I calculated this a really long |
|
|
|
1674 |
|
01:14:47,600 --> 01:14:51,280 |
|
time ago it's probably horribly out of |
|
|
|
1675 |
|
01:14:49,040 --> 01:14:55,120 |
|
date now but the same general principle |
|
|
|
1676 |
|
01:14:51,280 --> 01:14:56,560 |
|
stands which is if we have um the |
|
|
|
1677 |
|
01:14:55,120 --> 01:14:58,480 |
|
number of seconds that it takes to do a |
|
|
|
1678 |
|
01:14:56,560 --> 01:15:02,080 |
|
Matrix Matrix multiply doing one of size |
|
|
|
1679 |
|
01:14:58,480 --> 01:15:03,920 |
|
16 is actually faster on CPU because uh |
|
|
|
1680 |
|
01:15:02,080 --> 01:15:07,760 |
|
the overhead it takes to get started is |
|
|
|
1681 |
|
01:15:03,920 --> 01:15:10,880 |
|
very low but if you um once you start |
|
|
|
1682 |
|
01:15:07,760 --> 01:15:13,360 |
|
getting up to size like 128 by 128 |
|
|
|
1683 |
|
01:15:10,880 --> 01:15:15,800 |
|
Matrix multiplies then doing it on GPU |
|
|
|
1684 |
|
01:15:13,360 --> 01:15:17,320 |
|
is faster and then um it's you know a |
|
|
|
1685 |
|
01:15:15,800 --> 01:15:19,679 |
|
100 times faster once you start getting |
|
|
|
1686 |
|
01:15:17,320 --> 01:15:21,600 |
|
up to very large matrices so um if |
|
|
|
1687 |
|
01:15:19,679 --> 01:15:24,000 |
|
you're dealing with very large networks |
|
|
|
1688 |
|
01:15:21,600 --> 01:15:26,800 |
|
having a GPU is good |
|
|
|
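If you want to reproduce that kind of measurement, a rough timing sketch along these lines works (absolute numbers will vary a lot with hardware and library versions):

```python
import time
import torch

def time_matmul(n, device, trials=100):
    # Average time for an n x n matrix-matrix multiply on a device.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()    # GPU kernels launch asynchronously
    start = time.time()
    for _ in range(trials):
        c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / trials

for n in (16, 128, 1024):
    cpu_t = time_matmul(n, "cpu")
    if torch.cuda.is_available():
        gpu_t = time_matmul(n, "cuda")
        print(n, cpu_t, gpu_t)
```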
1689 |
|
01:15:24,000 --> 01:15:30,159 |
|
um and this is the speed up |
|
|
|
1690 |
|
01:15:26,800 --> 01:15:31,440 |
|
percentage um one thing I should mention |
|
|
|
1691 |
|
01:15:30,159 --> 01:15:34,239 |
|
is |
|
|
|
1692 |
|
01:15:31,440 --> 01:15:36,440 |
|
um compute with respect to like doing |
|
|
|
1693 |
|
01:15:34,239 --> 01:15:39,800 |
|
the assignments for this class if you |
|
|
|
1694 |
|
01:15:36,440 --> 01:15:43,199 |
|
have a relatively recent Mac you're kind |
|
|
|
1695 |
|
01:15:39,800 --> 01:15:44,760 |
|
of in luck because actually the gpus on |
|
|
|
1696 |
|
01:15:43,199 --> 01:15:47,239 |
|
the Mac are pretty fast and they're well |
|
|
|
1697 |
|
01:15:44,760 --> 01:15:48,960 |
|
integrated with um they're well |
|
|
|
1698 |
|
01:15:47,239 --> 01:15:52,080 |
|
integrated with PyTorch and other things |
|
|
|
1699 |
|
01:15:48,960 --> 01:15:53,440 |
|
like that so decently sized models maybe |
|
|
|
1700 |
|
01:15:52,080 --> 01:15:54,840 |
|
up to the size that you would need to |
|
|
|
1701 |
|
01:15:53,440 --> 01:15:57,840 |
|
run for assignment one or even |
|
|
|
1702 |
|
01:15:54,840 --> 01:16:00,880 |
|
assignment two might uh just run on your |
|
|
|
1703 |
|
01:15:57,840 --> 01:16:03,639 |
|
uh laptop computer um if you don't have |
|
|
|
1704 |
|
01:16:00,880 --> 01:16:05,280 |
|
a GPU uh that you have immediately |
|
|
|
1705 |
|
01:16:03,639 --> 01:16:06,760 |
|
accessible to you we're going to |
|
|
|
1706 |
|
01:16:05,280 --> 01:16:08,400 |
|
recommend that you use Colab where you |
|
|
|
1707 |
|
01:16:06,760 --> 01:16:10,120 |
|
can get a GPU uh for the first |
|
|
|
1708 |
|
01:16:08,400 --> 01:16:12,440 |
|
assignments and then we'll have cloud |
|
|
|
1709 |
|
01:16:10,120 --> 01:16:15,159 |
|
credits that you can use otherwise but |
|
|
|
1710 |
|
01:16:12,440 --> 01:16:16,800 |
|
um GPU is usually like something that |
|
|
|
1711 |
|
01:16:15,159 --> 01:16:18,440 |
|
you can get on the cloud or one that you |
|
|
|
1712 |
|
01:16:16,800 --> 01:16:21,080 |
|
have on your Mac or one that you have on |
|
|
|
1713 |
|
01:16:18,440 --> 01:16:24,600 |
|
your gaming computer or something like |
|
|
|
1714 |
|
01:16:21,080 --> 01:16:26,040 |
|
that um there's a few speed tricks that |
|
|
|
1715 |
|
01:16:24,600 --> 01:16:30,000 |
|
you should know for efficient GPU |
|
|
|
1716 |
|
01:16:26,040 --> 01:16:32,480 |
|
operations so um one mistake that people |
|
|
|
1717 |
|
01:16:30,000 --> 01:16:35,880 |
|
make when creating models is they repeat |
|
|
|
1718 |
|
01:16:32,480 --> 01:16:38,080 |
|
operations over and over again and um |
|
|
|
1719 |
|
01:16:35,880 --> 01:16:40,600 |
|
you don't want to be doing this so like |
|
|
|
1720 |
|
01:16:38,080 --> 01:16:43,239 |
|
for example um this is multiplying a |
|
|
|
1721 |
|
01:16:40,600 --> 01:16:45,320 |
|
matrix by a constant multiple times and |
|
|
|
1722 |
|
01:16:43,239 --> 01:16:46,880 |
|
if you're just using out of the box |
|
|
|
1723 |
|
01:16:45,320 --> 01:16:49,280 |
|
PyTorch this would be really bad because |
|
|
|
1724 |
|
01:16:46,880 --> 01:16:50,400 |
|
you'd be repeating the operation uh when |
|
|
|
1725 |
|
01:16:49,280 --> 01:16:52,679 |
|
it's not necessary |
|
|
|
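A toy sketch of that mistake and the fix (sizes are arbitrary):

```python
import torch

W = torch.randn(512, 512)
xs = [torch.randn(512) for _ in range(1000)]

# Bad: the constant multiply is recomputed on every iteration.
ys_slow = [(0.5 * W) @ x for x in xs]

# Better: do the constant multiply once, outside the loop.
W_scaled = 0.5 * W
ys_fast = [W_scaled @ x for x in xs]
```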
1726 |
|
01:16:50,400 --> 01:16:54,480 |
|
um you can also reduce the |
|
|
|
1727 |
|
01:16:52,679 --> 01:16:57,360 |
|
number of operations that you need to |
|
|
|
1728 |
|
01:16:54,480 --> 01:17:00,320 |
|
use so uh use Matrix Matrix multiplies |
|
|
|
1729 |
|
01:16:57,360 --> 01:17:03,080 |
|
instead of Matrix Vector |
|
|
|
1730 |
|
01:17:00,320 --> 01:17:07,920 |
|
multiplies and another thing is uh |
|
|
|
1731 |
|
01:17:03,080 --> 01:17:10,719 |
|
reducing CPU GPU data movement and um so |
|
|
|
1732 |
|
01:17:07,920 --> 01:17:12,360 |
|
when you do try to move memory um |
|
|
|
1733 |
|
01:17:10,719 --> 01:17:17,080 |
|
try to do it |
|
|
|
1734 |
|
01:17:12,360 --> 01:17:20,040 |
|
as early as possible and as |
|
|
|
1735 |
|
01:17:17,080 --> 01:17:22,199 |
|
few times as possible and the reason why |
|
|
|
1736 |
|
01:17:20,040 --> 01:17:24,199 |
|
you want to move things early or start |
|
|
|
1737 |
|
01:17:22,199 --> 01:17:25,920 |
|
operations early is many GPU operations |
|
|
|
1738 |
|
01:17:24,199 --> 01:17:27,159 |
|
are asynchronous so you can start the |
|
|
|
1739 |
|
01:17:25,920 --> 01:17:28,800 |
|
operation and it will run in the |
|
|
|
1740 |
|
01:17:27,159 --> 01:17:33,120 |
|
background while other things are |
|
|
|
1741 |
|
01:17:28,800 --> 01:17:36,080 |
|
processing so um it's a good idea to try to optimize |
|
|
|
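A small PyTorch-flavored sketch of the idea, assuming a CUDA device is available:

```python
import torch

device = torch.device("cuda")
W = torch.randn(512, 512, device=device)   # keep parameters on the GPU

for _ in range(10):
    x = torch.randn(64, 512).pin_memory()  # pinned memory enables async copy
    # Asynchronous host-to-device copy: the transfer is queued and the
    # call returns, so Python can keep going in the background.
    x = x.to(device, non_blocking=True)
    y = x @ W   # also launched asynchronously; runs once inputs are ready
```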
1742 |
|
01:17:33,120 --> 01:17:39,840 |
|
and you can also use |
|
|
|
1743 |
|
01:17:36,080 --> 01:17:42,360 |
|
your Python profiler or um NVIDIA GPU |
|
|
|
1744 |
|
01:17:39,840 --> 01:17:43,679 |
|
profilers to try to optimize these |
|
|
|
1745 |
|
01:17:42,360 --> 01:17:46,520 |
|
things as well |
|
|
|
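For example, a minimal sketch with the built-in PyTorch profiler:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(512, 512)
x = torch.randn(64, 512)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    y = model(x)

# Shows where the time actually went, operation by operation.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```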
1746 |
|
01:17:43,679 --> 01:17:49,840 |
|
cool that's all I have uh we're |
|
|
|
1747 |
|
01:17:46,520 --> 01:17:49,840 |
|
right at time |