1
00:00:03,879 --> 00:00:07,480
cool um so this time I'm going to talk
2
00:00:05,480 --> 00:00:08,880
about word representation and text
3
00:00:07,480 --> 00:00:11,480
classifiers these are kind of the
4
00:00:08,880 --> 00:00:14,080
foundations that you need to know uh in
5
00:00:11,480 --> 00:00:15,640
order to move on to the more complex
6
00:00:14,080 --> 00:00:17,920
things that we'll be talking about in future
7
00:00:15,640 --> 00:00:19,640
classes uh but in
8
00:00:17,920 --> 00:00:22,760
particular the word representation part
9
00:00:19,640 --> 00:00:25,439
is pretty important it's a major uh
10
00:00:22,760 --> 00:00:31,800
thing that we need to do for all NLP
11
00:00:25,439 --> 00:00:34,239
models so uh let's go into it
12
00:00:31,800 --> 00:00:38,200
so last class I talked about the bag of
13
00:00:34,239 --> 00:00:40,239
words model um and just to review this
14
00:00:38,200 --> 00:00:43,920
was a model where basically we take each
15
00:00:40,239 --> 00:00:45,520
word we represent it as a one hot Vector
16
00:00:43,920 --> 00:00:48,760
uh like
17
00:00:45,520 --> 00:00:51,120
this and we add all of these vectors
18
00:00:48,760 --> 00:00:53,160
together we multiply the resulting
19
00:00:51,120 --> 00:00:55,160
frequency vector by some weights and we
20
00:00:53,160 --> 00:00:57,239
get a score out of this and we can use
21
00:00:55,160 --> 00:00:58,559
this score for binary classification or
22
00:00:57,239 --> 00:01:00,239
if we want to do multiclass
23
00:00:58,559 --> 00:01:02,519
classification we get you know multiple
24
00:01:00,239 --> 00:01:05,720
scores for each
25
00:01:02,519 --> 00:01:08,040
class and the features F were just based
26
00:01:05,720 --> 00:01:08,920
on our word identities and the weights
27
00:01:08,040 --> 00:01:12,159
were
28
00:01:08,920 --> 00:01:14,680
learned and um if we look at what's
29
00:01:12,159 --> 00:01:17,520
missing in bag of words
30
00:01:14,680 --> 00:01:19,600
models um we talked about handling of
31
00:01:17,520 --> 00:01:23,280
conjugated or compound
32
00:01:19,600 --> 00:01:25,439
words we talked about handling of word
33
00:01:23,280 --> 00:01:27,880
similarity and we talked about handling
34
00:01:25,439 --> 00:01:30,240
of combination features and handling of
35
00:01:27,880 --> 00:01:33,280
sentence structure and so all of these
36
00:01:30,240 --> 00:01:35,000
are are tricky problems uh we saw that
37
00:01:33,280 --> 00:01:37,000
you know creating a rule-based system to
38
00:01:35,000 --> 00:01:39,000
solve these problems is non-trivial and
39
00:01:37,000 --> 00:01:41,399
at the very least would take a lot of
40
00:01:39,000 --> 00:01:44,079
time and so now I want to talk about
41
00:01:41,399 --> 00:01:47,119
some solutions to the problems in this
42
00:01:44,079 --> 00:01:49,280
class so the first the solution to the
43
00:01:47,119 --> 00:01:52,240
first problem or a solution to the first
44
00:01:49,280 --> 00:01:54,880
problem is uh subword or character based
45
00:01:52,240 --> 00:01:57,520
models and that's what I'll talk about
46
00:01:54,880 --> 00:02:00,719
first handling of word similarity this
47
00:01:57,520 --> 00:02:02,960
can be handled uh using word embeddings
48
00:02:00,719 --> 00:02:05,079
and the word embeddings uh will be
49
00:02:02,960 --> 00:02:07,159
another thing we'll talk about this time
50
00:02:05,079 --> 00:02:08,879
handling of combination features uh we
51
00:02:07,159 --> 00:02:11,039
can handle through neural networks which
52
00:02:08,879 --> 00:02:14,040
we'll also talk about this time and then
53
00:02:11,039 --> 00:02:15,560
handling of sentence structure uh the
54
00:02:14,040 --> 00:02:17,720
kind of standard way of handling this
55
00:02:15,560 --> 00:02:20,120
now is through sequence-based models and
56
00:02:17,720 --> 00:02:24,879
that will be uh starting in a few
57
00:02:20,120 --> 00:02:28,080
classes so uh let's jump into
58
00:02:24,879 --> 00:02:30,000
it so subword models uh as I mentioned
59
00:02:28,080 --> 00:02:31,840
this is a really really important part
60
00:02:30,000 --> 00:02:33,360
all of the models that we're building
61
00:02:31,840 --> 00:02:35,480
nowadays including you know
62
00:02:33,360 --> 00:02:38,239
state-of-the-art language models and and
63
00:02:35,480 --> 00:02:42,200
things like this and the basic idea
64
00:02:38,239 --> 00:02:44,720
behind this is that we want to split uh
65
00:02:42,200 --> 00:02:48,040
in particular split less common words up
66
00:02:44,720 --> 00:02:50,200
into multiple subword tokens so to give
67
00:02:48,040 --> 00:02:52,200
an example of this uh if we have
68
00:02:50,200 --> 00:02:55,040
something like the companies are
69
00:02:52,200 --> 00:02:57,000
expanding uh it might split companies
70
00:02:55,040 --> 00:03:02,120
into compani
71
00:02:57,000 --> 00:03:05,000
es and expanding into expand ing like this and there are
72
00:03:02,120 --> 00:03:08,480
a few benefits of this uh the first
73
00:03:05,000 --> 00:03:10,760
benefit is that this allows you to
74
00:03:08,480 --> 00:03:13,360
share parameters between word varieties or
75
00:03:10,760 --> 00:03:15,200
compound words and the other one is to
76
00:03:13,360 --> 00:03:17,400
reduce parameter size and save compute
77
00:03:15,200 --> 00:03:19,720
and memory and both of these are kind of
78
00:03:17,400 --> 00:03:23,239
like equally important things that we
79
00:03:19,720 --> 00:03:25,519
need to be uh we need to be considering
80
00:03:23,239 --> 00:03:26,440
so does anyone know how many words there
81
00:03:25,519 --> 00:03:28,680
are in
82
00:03:26,440 --> 00:03:31,680
English any
83
00:03:28,680 --> 00:03:31,680
ideas
84
00:03:36,799 --> 00:03:43,400
yeah two
85
00:03:38,599 --> 00:03:45,560
million pretty good um any other
86
00:03:43,400 --> 00:03:47,159
ideas
87
00:03:45,560 --> 00:03:50,360
yeah
88
00:03:47,159 --> 00:03:53,599
60,000 some models use 60,000 I I think
89
00:03:50,360 --> 00:03:56,200
60,000 is probably for these subword models
90
00:03:53,599 --> 00:03:58,079
uh when you're talking about this so
91
00:03:56,200 --> 00:03:59,319
they can use subword models to take the 2
92
00:03:58,079 --> 00:04:03,480
million which I think is a reasonable
93
00:03:59,319 --> 00:04:07,400
guess down to 60,000 any other
94
00:04:03,480 --> 00:04:08,840
ideas 700,000 okay pretty good um so
95
00:04:07,400 --> 00:04:11,799
this was a trick question it doesn't
96
00:04:08,840 --> 00:04:14,760
really have a good answer um but 2
97
00:04:11,799 --> 00:04:17,479
million's probably pretty good six or
98
00:04:14,760 --> 00:04:19,160
700,000 is pretty good the reason why
99
00:04:17,479 --> 00:04:21,360
this is a trick question is because are
100
00:04:19,160 --> 00:04:24,440
company and companies different
101
00:04:21,360 --> 00:04:26,840
words uh maybe maybe not right because
102
00:04:24,440 --> 00:04:30,120
if we know the word company we can you
103
00:04:26,840 --> 00:04:32,520
know guess what the word companies means
104
00:04:30,120 --> 00:04:35,720
um what about automobile is that a
105
00:04:32,520 --> 00:04:37,400
different word well maybe if we know
106
00:04:35,720 --> 00:04:39,400
Auto and mobile we can kind of guess
107
00:04:37,400 --> 00:04:41,160
what automobile means but not really so
108
00:04:39,400 --> 00:04:43,479
maybe that's a different word there's
109
00:04:41,160 --> 00:04:45,960
all kinds of Shades of Gray there and
110
00:04:43,479 --> 00:04:48,120
also we have really frequent words that
111
00:04:45,960 --> 00:04:50,360
everybody can probably acknowledge our
112
00:04:48,120 --> 00:04:52,320
words like
113
00:04:50,360 --> 00:04:55,639
the and
114
00:04:52,320 --> 00:04:58,520
a and um maybe
115
00:04:55,639 --> 00:05:00,680
car and then we have words down here
116
00:04:58,520 --> 00:05:02,320
which are like misspellings or
117
00:05:00,680 --> 00:05:04,160
something like that misspellings of
118
00:05:02,320 --> 00:05:06,520
actual correct words or
119
00:05:04,160 --> 00:05:09,199
slang uh or other things like that and
120
00:05:06,520 --> 00:05:12,520
then it's questionable whether those are
121
00:05:09,199 --> 00:05:17,199
actual words or not so um there's a
122
00:05:12,520 --> 00:05:19,520
famous uh law called Zipf's
123
00:05:17,199 --> 00:05:21,280
law um which probably a lot of people
124
00:05:19,520 --> 00:05:23,360
have heard of it's also the source of
125
00:05:21,280 --> 00:05:26,919
your zip
126
00:05:23,360 --> 00:05:30,160
file um which is using Zipf's law to
127
00:05:26,919 --> 00:05:32,400
compress uh compress output by making
128
00:05:30,160 --> 00:05:34,880
the uh more frequent words have shorter
129
00:05:32,400 --> 00:05:37,520
byte strings and less frequent words
130
00:05:34,880 --> 00:05:38,800
have uh you know longer byte
131
00:05:37,520 --> 00:05:43,120
strings but basically like we're going
132
00:05:38,800 --> 00:05:45,120
to have an infinite number of words or
133
00:05:43,120 --> 00:05:46,360
at least strings that are separated by
134
00:05:45,120 --> 00:05:49,280
white space so we need to handle this
135
00:05:46,360 --> 00:05:53,199
somehow and that's what subword units
136
00:05:49,280 --> 00:05:54,560
do so um 60,000 was a good guess for the
137
00:05:53,199 --> 00:05:57,160
number of subword units you might use in
138
00:05:54,560 --> 00:06:00,759
a model and so uh by using subword units we
139
00:05:57,160 --> 00:06:04,840
can limit to about that much
140
00:06:00,759 --> 00:06:08,160
so there's a couple of common uh ways to
141
00:06:04,840 --> 00:06:10,440
create these subword units and basically
142
00:06:08,160 --> 00:06:14,560
all of them rely on the fact that you
143
00:06:10,440 --> 00:06:16,039
want more common strings to become
144
00:06:14,560 --> 00:06:19,599
subword
145
00:06:16,039 --> 00:06:22,199
units um or actually sorry I realize
146
00:06:19,599 --> 00:06:24,280
maybe before doing that I could explain
147
00:06:22,199 --> 00:06:26,360
an alternative to creating subword units
148
00:06:24,280 --> 00:06:29,639
so the alternative to creating subword
149
00:06:26,360 --> 00:06:33,560
units is to treat every character or
150
00:06:29,639 --> 00:06:36,919
maybe every bite in a string as a single
151
00:06:33,560 --> 00:06:38,560
thing that you encode so in
152
00:06:36,919 --> 00:06:42,520
other words instead of trying to model
153
00:06:38,560 --> 00:06:47,919
the companies are expanding we model t h
154
00:06:42,520 --> 00:06:50,199
e space c o m uh etc etc can anyone
155
00:06:47,919 --> 00:06:53,199
think of any downsides of
156
00:06:50,199 --> 00:06:53,199
this
157
00:06:57,039 --> 00:07:01,879
yeah yeah the set of these will be very
158
00:07:00,080 --> 00:07:05,000
will be very small but that's not
159
00:07:01,879 --> 00:07:05,000
necessarily a problem
160
00:07:08,560 --> 00:07:15,599
right yeah um and any other
161
00:07:12,599 --> 00:07:15,599
ideas
162
00:07:19,520 --> 00:07:24,360
yeah yeah the resulting sequences will
163
00:07:22,080 --> 00:07:25,520
be very long um and when you say
164
00:07:24,360 --> 00:07:27,160
difficult to use it could be difficult
165
00:07:25,520 --> 00:07:29,560
to use for a couple of reasons there's
166
00:07:27,160 --> 00:07:31,840
mainly two reasons actually any ideas
167
00:07:29,560 --> 00:07:31,840
about
168
00:07:33,479 --> 00:07:37,800
this any
169
00:07:46,280 --> 00:07:50,599
yeah yeah that's a little bit of a
170
00:07:49,000 --> 00:07:52,319
separate problem than the character
171
00:07:50,599 --> 00:07:53,919
based model so let me get back to that
172
00:07:52,319 --> 00:07:56,400
but uh let let's finish the discussion
173
00:07:53,919 --> 00:07:58,360
of the character based models so if it's
174
00:07:56,400 --> 00:08:00,120
really if it's really long maybe a
175
00:07:58,360 --> 00:08:01,879
simple thing like uh let's say you have
176
00:08:00,120 --> 00:08:06,560
a big neural network and it's processing
177
00:08:01,879 --> 00:08:06,560
a really long sequence any ideas what
178
00:08:06,919 --> 00:08:10,879
happens basically you run out of memory
179
00:08:09,280 --> 00:08:13,440
or it takes a really long time right so
180
00:08:10,879 --> 00:08:16,840
you have computational problems another
181
00:08:13,440 --> 00:08:18,479
reason why is um think of what a bag of
182
00:08:16,840 --> 00:08:21,400
words model would look like if it was a
183
00:08:18,479 --> 00:08:21,400
bag of characters
184
00:08:21,800 --> 00:08:25,919
model it wouldn't be very informative
185
00:08:24,199 --> 00:08:27,599
about whether like a sentence is
186
00:08:25,919 --> 00:08:30,919
positive sentiment or negative sentiment
187
00:08:27,599 --> 00:08:32,959
right because instead of having uh
188
00:08:30,919 --> 00:08:35,039
you would have uh instead of having good
189
00:08:32,959 --> 00:08:36,360
you would have g o o d and that doesn't
190
00:08:35,039 --> 00:08:38,560
really directly tell you whether it's
191
00:08:36,360 --> 00:08:41,719
positive sentiment or not so those are
192
00:08:38,560 --> 00:08:43,680
basically the two problems um compute
193
00:08:41,719 --> 00:08:45,320
and lack of expressiveness in the
194
00:08:43,680 --> 00:08:50,720
underlying representations so you need
195
00:08:45,320 --> 00:08:52,080
to handle both of those yes so if we uh
196
00:08:50,720 --> 00:08:54,480
move from
197
00:08:52,080 --> 00:08:56,440
characters to words we get better expressiveness and we
198
00:08:54,480 --> 00:08:58,920
assume that if we just get the bigger
199
00:08:56,440 --> 00:09:00,120
and bigger paragraphs we'll get even
200
00:08:58,920 --> 00:09:02,760
better
201
00:09:00,120 --> 00:09:05,120
yeah so a very good question I'll repeat
202
00:09:02,760 --> 00:09:06,560
it um and actually this also goes back
203
00:09:05,120 --> 00:09:08,040
to the other question you asked about
204
00:09:06,560 --> 00:09:09,519
words that look the same but are
205
00:09:08,040 --> 00:09:12,160
pronounced differently or have different
206
00:09:09,519 --> 00:09:14,360
meanings and so like let's say we just
207
00:09:12,160 --> 00:09:15,920
remembered this whole sentence right the
208
00:09:14,360 --> 00:09:18,279
companies are
209
00:09:15,920 --> 00:09:21,600
expanding um and that was like a single
210
00:09:18,279 --> 00:09:22,680
embedding and we somehow embedded it the
211
00:09:21,600 --> 00:09:25,720
problem would be we're never going to
212
00:09:22,680 --> 00:09:27,120
see that sentence again um or if we go
213
00:09:25,720 --> 00:09:29,480
to longer sentences we're never going to
214
00:09:27,120 --> 00:09:31,839
see the longer sentences again so it
215
00:09:29,480 --> 00:09:34,320
becomes too sparse so there's kind of a
216
00:09:31,839 --> 00:09:37,240
sweet spot between
217
00:09:34,320 --> 00:09:40,279
like long enough to be expressive and
218
00:09:37,240 --> 00:09:42,480
short enough to occur many times so that
219
00:09:40,279 --> 00:09:43,959
you can learn appropriately and that's
220
00:09:42,480 --> 00:09:47,120
kind of what subword models are aiming
221
00:09:43,959 --> 00:09:48,360
for and if you get longer subwords then
222
00:09:47,120 --> 00:09:50,200
you'll get things that are more
223
00:09:48,360 --> 00:09:52,959
expressive but more sparse in shorter
224
00:09:50,200 --> 00:09:55,440
subwords you'll get things that are like
225
00:09:52,959 --> 00:09:57,279
uh less expressive but less sparse so you
226
00:09:55,440 --> 00:09:59,120
need to balance between them and then
227
00:09:57,279 --> 00:10:00,600
once we get into sequence modeling they
228
00:09:59,120 --> 00:10:02,600
start being able to model like which
229
00:10:00,600 --> 00:10:04,120
words are next to each other uh which
230
00:10:02,600 --> 00:10:06,040
tokens are next to each other and stuff
231
00:10:04,120 --> 00:10:07,800
like that so even if they are less
232
00:10:06,040 --> 00:10:11,279
expressive the combination between them
233
00:10:07,800 --> 00:10:12,600
can be expressive so um yeah that's kind
234
00:10:11,279 --> 00:10:13,440
of a preview of what we're going to be
235
00:10:12,600 --> 00:10:17,320
doing
236
00:10:13,440 --> 00:10:19,279
next okay so um let's assume that we
237
00:10:17,320 --> 00:10:21,320
want to have some subwords that are
238
00:10:19,279 --> 00:10:23,000
longer than characters but shorter than
239
00:10:21,320 --> 00:10:26,240
tokens how do we make these in a
240
00:10:23,000 --> 00:10:28,680
consistent way there's two major ways of
241
00:10:26,240 --> 00:10:31,480
doing this uh the first one is byte pair
242
00:10:28,680 --> 00:10:32,839
encoding and this is uh very very simple
243
00:10:31,480 --> 00:10:35,839
in fact it's so
244
00:10:32,839 --> 00:10:35,839
simple
245
00:10:36,600 --> 00:10:40,839
that we can implement
246
00:10:41,839 --> 00:10:47,240
it in this notebook here which you can
247
00:10:44,600 --> 00:10:51,720
click through to on the
248
00:10:47,240 --> 00:10:55,440
slides and it's uh
249
00:10:51,720 --> 00:10:58,040
about 10 lines of code um and so
250
00:10:55,440 --> 00:11:01,040
basically what byte pair encoding
251
00:10:58,040 --> 00:11:01,040
does
252
00:11:04,600 --> 00:11:09,560
is that you start out with um all of the
253
00:11:07,000 --> 00:11:14,360
vocabulary that you want to process
254
00:11:09,560 --> 00:11:17,560
where each vocabulary item is split into
255
00:11:14,360 --> 00:11:21,240
uh the characters and an end of word
256
00:11:17,560 --> 00:11:23,360
symbol and you have a corresponding
257
00:11:21,240 --> 00:11:27,519
frequency of
258
00:11:23,360 --> 00:11:31,120
this you then uh get statistics about
259
00:11:27,519 --> 00:11:33,279
the most common pairs of tokens that
260
00:11:31,120 --> 00:11:34,880
occur next to each other and so here the
261
00:11:33,279 --> 00:11:38,240
most common pairs of tokens that occur
262
00:11:34,880 --> 00:11:41,920
next to each other are e s because it
263
00:11:38,240 --> 00:11:46,560
occurs nine times because it occurs in
264
00:11:41,920 --> 00:11:48,279
newest and widest also s and t
265
00:11:46,560 --> 00:11:51,440
because those occur there too and then
266
00:11:48,279 --> 00:11:53,519
you have w e and other things like that
267
00:11:51,440 --> 00:11:56,000
so out of all the most frequent ones you
268
00:11:53,519 --> 00:11:59,920
just merge them together and that gives
269
00:11:56,000 --> 00:12:02,720
you uh new es t
270
00:11:59,920 --> 00:12:05,200
and wid es t
271
00:12:02,720 --> 00:12:09,360
and then you do the same thing this
272
00:12:05,200 --> 00:12:12,519
time now you get EST so now you get this
273
00:12:09,360 --> 00:12:14,279
uh suffix EST and that looks pretty
274
00:12:12,519 --> 00:12:16,399
reasonable for English right you know
275
00:12:14,279 --> 00:12:19,040
EST is a common suffix that we use it
276
00:12:16,399 --> 00:12:22,399
seems like it should be a single token
277
00:12:19,040 --> 00:12:25,880
and um so you just do this over and over
278
00:12:22,399 --> 00:12:29,279
again if you want a vocabulary of 60,000
279
00:12:25,880 --> 00:12:31,120
for example you would do um 60,000 minus
280
00:12:29,279 --> 00:12:33,079
number of characters merge operations
281
00:12:31,120 --> 00:12:37,160
and eventually you would get a vocabulary of
282
00:12:33,079 --> 00:12:41,920
60,000 um and yeah very very simple
283
00:12:37,160 --> 00:12:41,920
method to do this um any questions about
284
00:12:43,160 --> 00:12:46,160
that
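For reference, here is a rough sketch of the byte pair encoding merge loop just described, in the spirit of the roughly-ten-line notebook implementation mentioned above; the toy vocabulary counts are illustrative (chosen so that the pair "e s" occurs nine times, as in the example), not the notebook's exact code.

```python
# rough sketch of the BPE merge loop described above (toy data, not the notebook's exact code)
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for left, right in zip(symbols, symbols[1:]):
            pairs[(left, right)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# each word is split into characters plus an end-of-word symbol </w>
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}

for _ in range(10):                   # number of merge operations (target vocab size minus characters)
    pairs = get_pair_stats(vocab)
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair, e.g. ('e', 's')
    vocab = merge_pair(best, vocab)
    print(best)
```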
285
00:12:57,839 --> 00:13:00,839
yeah
286
00:13:15,600 --> 00:13:20,959
yeah so uh just to repeat the the
287
00:13:18,040 --> 00:13:23,560
comment uh this seems like a greedy
288
00:13:20,959 --> 00:13:25,320
version of Huffman encoding which is a
289
00:13:23,560 --> 00:13:28,839
you know similar to what you're using in
290
00:13:25,320 --> 00:13:32,000
your zip file a way to shorten things by
291
00:13:28,839 --> 00:13:36,560
getting longer uh more frequent things
292
00:13:32,000 --> 00:13:39,120
being encoded as a single token um I think
293
00:13:36,560 --> 00:13:40,760
byte pair encoding did originally start
294
00:13:39,120 --> 00:13:43,720
like that that's part of the reason why
295
00:13:40,760 --> 00:13:45,760
the encoding uh thing is here I think it
296
00:13:43,720 --> 00:13:47,360
originally started there I haven't read
297
00:13:45,760 --> 00:13:49,360
really deeply into this but I can talk
298
00:13:47,360 --> 00:13:53,240
more about how the next one corresponds
299
00:13:49,360 --> 00:13:54,440
to information theory and on Tuesday I'm
300
00:13:53,240 --> 00:13:55,720
going to talk even more about how
301
00:13:54,440 --> 00:13:57,720
language models correspond to
302
00:13:55,720 --> 00:14:00,040
information theory so we can uh we can
303
00:13:57,720 --> 00:14:04,519
discuss maybe in more detail
304
00:14:00,040 --> 00:14:07,639
to um so the the alternative option is
305
00:14:04,519 --> 00:14:10,000
to use unigram models and unigram models
306
00:14:07,639 --> 00:14:12,240
are the simplest type of language model
307
00:14:10,000 --> 00:14:15,079
I'm going to talk more in detail about
308
00:14:12,240 --> 00:14:18,279
them next time but basically uh the way
309
00:14:15,079 --> 00:14:20,759
it works is you create a model that
310
00:14:18,279 --> 00:14:23,600
generates all word uh words in the
311
00:14:20,759 --> 00:14:26,199
sequence independently sorry I thought I
312
00:14:23,600 --> 00:14:26,199
had a
313
00:14:26,320 --> 00:14:31,800
um I thought I had an equation but
314
00:14:28,800 --> 00:14:31,800
basically the
315
00:14:32,240 --> 00:14:35,759
equation looks
316
00:14:38,079 --> 00:14:41,079
like
317
00:14:47,720 --> 00:14:52,120
this so you say the probability of the
318
00:14:50,360 --> 00:14:53,440
sequence is the product of the
319
00:14:52,120 --> 00:14:54,279
probabilities of each of the words in
320
00:14:53,440 --> 00:14:55,959
the
321
00:14:54,279 --> 00:15:00,079
sequence
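Written out, the independence assumption just described is, for a sequence X made of tokens x_1 through x_|X|:

```latex
P(X) = \prod_{i=1}^{|X|} P(x_i)
```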
322
00:14:55,959 --> 00:15:04,079
and uh then you try to pick a vocabulary
323
00:15:00,079 --> 00:15:06,839
that maximizes the probability of the
324
00:15:04,079 --> 00:15:09,320
Corpus given a fixed vocabulary size so
325
00:15:06,839 --> 00:15:10,320
you try to say okay you get a vocabulary
326
00:15:09,320 --> 00:15:14,440
size of
327
00:15:10,320 --> 00:15:16,920
60,000 how do you um how do you pick the
328
00:15:14,440 --> 00:15:19,680
best 60,000 vocabulary to maximize the
329
00:15:16,920 --> 00:15:22,440
probability of the the Corpus and that
330
00:15:19,680 --> 00:15:25,959
will result in something very similar uh
331
00:15:22,440 --> 00:15:27,920
it will also try to give longer uh
332
00:15:25,959 --> 00:15:29,880
vocabulary uh sorry more common
333
00:15:27,920 --> 00:15:32,240
vocabulary items to long sequences because that
334
00:15:29,880 --> 00:15:35,560
allows you to to maximize this
335
00:15:32,240 --> 00:15:36,959
objective um the optimization for this
336
00:15:35,560 --> 00:15:40,040
is performed using something called the
337
00:15:36,959 --> 00:15:44,440
EM algorithm where basically you uh
338
00:15:40,040 --> 00:15:48,560
predict the uh the probability of each
339
00:15:44,440 --> 00:15:51,600
token showing up and uh then select the
340
00:15:48,560 --> 00:15:53,279
most common tokens and then trim off the
341
00:15:51,600 --> 00:15:54,759
ones that are less common and then just
342
00:15:53,279 --> 00:15:58,120
do this over and over again until you
343
00:15:54,759 --> 00:15:59,839
drop down to the 60,000 token limit so the
344
00:15:58,120 --> 00:16:02,040
details for this are not important for
345
00:15:59,839 --> 00:16:04,160
most people in this class uh because
346
00:16:02,040 --> 00:16:07,480
you're going to just be using a toolkit
347
00:16:04,160 --> 00:16:08,880
that implements this for you um but if
348
00:16:07,480 --> 00:16:10,759
you're interested in this I'm happy to
349
00:16:08,880 --> 00:16:14,199
talk to you about it
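As a hedged sketch (the exact formulation on the slides may differ), the two steps being described are: pick the vocabulary V of a fixed size N that maximizes the unigram likelihood of the corpus C, and then, as described just below, segment each input X into the tokenization that maximizes the same unigram probability:

```latex
\hat{V} = \arg\max_{V:\,|V| = N} \; \sum_{X \in \mathcal{C}} \log P(X; V)
\qquad\qquad
\hat{x}_{1:M} = \arg\max_{x_{1:M} \,\in\, \mathrm{seg}(X)} \; \prod_{i=1}^{M} P(x_i)
```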
350
00:16:10,759 --> 00:16:14,199
yeah is there
351
00:16:14,680 --> 00:16:18,959
problem Oh in unigram models there's a
352
00:16:17,199 --> 00:16:20,959
huge problem with assuming Independence
353
00:16:18,959 --> 00:16:22,720
in language models because then you
354
00:16:20,959 --> 00:16:25,120
could rearrange the order of words in
355
00:16:22,720 --> 00:16:26,600
sentences um that that's something we're
356
00:16:25,120 --> 00:16:27,519
going to talk about in language model
357
00:16:26,600 --> 00:16:30,560
next
358
00:16:27,519 --> 00:16:32,839
time but the the good thing about this
359
00:16:30,560 --> 00:16:34,519
is the EM algorithm requires dynamic
360
00:16:32,839 --> 00:16:36,079
programming in this case and you can't
361
00:16:34,519 --> 00:16:37,800
easily do dynamic programming if you
362
00:16:36,079 --> 00:16:40,160
don't make that
363
00:16:37,800 --> 00:16:41,880
assumption um and then finally after
364
00:16:40,160 --> 00:16:43,560
you've picked your vocabulary and you've
365
00:16:41,880 --> 00:16:45,720
assigned a probability to each word in
366
00:16:43,560 --> 00:16:47,800
the vocabulary you then find a
367
00:16:45,720 --> 00:16:49,639
segmentation of the input that maximizes
368
00:16:47,800 --> 00:16:52,600
the unigram
369
00:16:49,639 --> 00:16:54,880
probabilities um so this is basically
370
00:16:52,600 --> 00:16:56,519
the idea of what's going on here um I'm
371
00:16:54,880 --> 00:16:58,120
not going to go into a lot of detail
372
00:16:56,519 --> 00:17:00,560
about this because most people are just
373
00:16:58,120 --> 00:17:02,279
going to be users of this algorithm so
374
00:17:00,560 --> 00:17:06,240
it's not super super
375
00:17:02,279 --> 00:17:09,400
important um the one important thing
376
00:17:06,240 --> 00:17:11,240
about this is that there's a library
377
00:17:09,400 --> 00:17:15,520
called sentence piece that's used very
378
00:17:11,240 --> 00:17:19,199
widely in order to build these um in
379
00:17:15,520 --> 00:17:22,000
order to build these subword units and
380
00:17:19,199 --> 00:17:23,720
uh basically what you do is you run the
381
00:17:22,000 --> 00:17:27,600
sentence piece
382
00:17:23,720 --> 00:17:30,200
train uh model or sorry uh program and
383
00:17:27,600 --> 00:17:32,640
that gives you uh you select your vocab
384
00:17:30,200 --> 00:17:34,240
size uh this also this character
385
00:17:32,640 --> 00:17:36,120
coverage is basically how well do you
386
00:17:34,240 --> 00:17:39,760
need to cover all of the characters in
387
00:17:36,120 --> 00:17:41,840
your vocabulary or in your input text um
388
00:17:39,760 --> 00:17:45,240
what model type do you use and then you
389
00:17:41,840 --> 00:17:48,640
run this uh sentence piece encode file
390
00:17:45,240 --> 00:17:51,039
uh to uh encode the output and split the
391
00:17:48,640 --> 00:17:54,799
output and there's also python bindings
392
00:17:51,039 --> 00:17:56,240
available for this
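For reference, a minimal sketch of the same train-then-encode workflow through the SentencePiece Python bindings just mentioned; the file names and the small vocab size here are placeholders for illustration, not values from the lecture.

```python
# minimal sketch of training and applying a SentencePiece model via the Python bindings
# (file names and the small vocab_size are illustrative placeholders)
import sentencepiece as spm

# roughly what the spm_train command line does: pick a vocab size,
# character coverage, and model type (unigram by default, bpe also supported)
spm.SentencePieceTrainer.train(
    input='corpus.txt', model_prefix='m',
    vocab_size=8000, character_coverage=0.9995, model_type='unigram')

# roughly what spm_encode does: load the trained model and split text into subwords
sp = spm.SentencePieceProcessor(model_file='m.model')
print(sp.encode('the companies are expanding', out_type=str))
```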
393
00:17:54,799 --> 00:17:57,919
and by the way one thing that you should know is by default it
394
00:17:56,240 --> 00:18:00,600
uses the unigram model but it also
395
00:17:57,919 --> 00:18:01,960
supports BPE in my experience it doesn't
396
00:18:00,600 --> 00:18:05,159
make a huge difference about which one
397
00:18:01,960 --> 00:18:07,640
you use the bigger thing is how um how
398
00:18:05,159 --> 00:18:10,159
big is your vocabulary size and if your
399
00:18:07,640 --> 00:18:11,880
vocabulary size is smaller then things
400
00:18:10,159 --> 00:18:13,760
will be more efficient but less
401
00:18:11,880 --> 00:18:17,480
expressive if your vocabulary size is
402
00:18:13,760 --> 00:18:21,280
bigger things will be um will
403
00:18:17,480 --> 00:18:23,240
be more expressive but less efficient
404
00:18:21,280 --> 00:18:25,360
and A good rule of thumb is like
405
00:18:23,240 --> 00:18:26,960
something like 60,000 to 80,000 is
406
00:18:25,360 --> 00:18:29,120
pretty reasonable if you're only doing
407
00:18:26,960 --> 00:18:31,320
English if you're spreading out to
408
00:18:29,120 --> 00:18:32,600
things that do other languages um which
409
00:18:31,320 --> 00:18:35,960
I'll talk about in a second then you
410
00:18:32,600 --> 00:18:38,720
need a much bigger vocabulary
411
00:18:35,960 --> 00:18:40,559
size so there's two considerations here
412
00:18:38,720 --> 00:18:42,440
two important considerations when using
413
00:18:40,559 --> 00:18:46,320
these models uh the first is
414
00:18:42,440 --> 00:18:48,760
multilinguality as I said so when you're
415
00:18:46,320 --> 00:18:50,760
using um subword
416
00:18:48,760 --> 00:18:54,710
models they're hard to use
417
00:18:50,760 --> 00:18:55,840
multilingually because as I said before
418
00:18:54,710 --> 00:18:59,799
[Music]
419
00:18:55,840 --> 00:19:03,799
they give longer strings to more
420
00:18:59,799 --> 00:19:06,520
frequent strings basically so then
421
00:19:03,799 --> 00:19:09,559
imagine what happens if 50% of your
422
00:19:06,520 --> 00:19:11,919
Corpus is English another 30% of your
423
00:19:09,559 --> 00:19:15,400
Corpus is
424
00:19:11,919 --> 00:19:17,200
other languages written in Latin script
425
00:19:15,400 --> 00:19:21,720
10% is
426
00:19:17,200 --> 00:19:25,480
Chinese uh 5% is Cyrillic script languages
427
00:19:21,720 --> 00:19:27,240
4% uh 3% is Japanese and then you
428
00:19:25,480 --> 00:19:31,080
have like
429
00:19:27,240 --> 00:19:33,320
0.01% written in like Burmese or
430
00:19:31,080 --> 00:19:35,520
something like that suddenly Burmese just
431
00:19:33,320 --> 00:19:37,400
gets chunked up really really tiny
432
00:19:35,520 --> 00:19:38,360
really long sequences and it doesn't
433
00:19:37,400 --> 00:19:45,559
work as
434
00:19:38,360 --> 00:19:45,559
well um so one way that people fix this
435
00:19:45,919 --> 00:19:50,520
um and actually there's a really nice uh
436
00:19:48,760 --> 00:19:52,600
blog post about this called exploring
437
00:19:50,520 --> 00:19:53,760
BERT's vocabulary which I referenced here
438
00:19:52,600 --> 00:19:58,039
if you're interested in learning more
439
00:19:53,760 --> 00:20:02,960
about that um but one way that people
440
00:19:58,039 --> 00:20:05,240
work around this is if your
441
00:20:02,960 --> 00:20:07,960
actual uh data
442
00:20:05,240 --> 00:20:11,559
distribution looks like this like
443
00:20:07,960 --> 00:20:11,559
English uh
444
00:20:17,039 --> 00:20:23,159
Ty we actually sorry I took out the
445
00:20:19,280 --> 00:20:23,159
Indian languages in my example
446
00:20:24,960 --> 00:20:30,159
apologies
447
00:20:27,159 --> 00:20:30,159
so
448
00:20:30,400 --> 00:20:35,919
um what you do is you essentially create
449
00:20:33,640 --> 00:20:40,000
a different distribution that like
450
00:20:35,919 --> 00:20:43,559
downweights English a little bit and up
451
00:20:40,000 --> 00:20:47,000
weights all of the other
452
00:20:43,559 --> 00:20:49,480
languages um so that you get more of
453
00:20:47,000 --> 00:20:53,159
other languages when creating the vocabulary so this is
454
00:20:49,480 --> 00:20:53,159
a common work around that you can do for
455
00:20:54,200 --> 00:20:59,960
this
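One common recipe for this kind of re-weighting (a standard trick, not necessarily the exact one used for the models discussed here) is to sample language l in proportion to its true share p_l raised to a power alpha between 0 and 1, which flattens the distribution toward the rarer languages:

```latex
q_\ell = \frac{p_\ell^{\alpha}}{\sum_{\ell'} p_{\ell'}^{\alpha}}, \qquad 0 < \alpha < 1
```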
456
00:20:56,799 --> 00:21:03,000
um the second problem with these is
457
00:20:59,960 --> 00:21:08,000
arbitrariness so as you saw in my
458
00:21:03,000 --> 00:21:11,240
example with BPE e s s t and the end of
459
00:21:08,000 --> 00:21:13,520
word symbol all have the same probability
460
00:21:11,240 --> 00:21:16,960
or have the same frequency right so if
461
00:21:13,520 --> 00:21:21,520
we get to that point do we segment es or
462
00:21:16,960 --> 00:21:25,039
do we segment uh EST or do we segment e
463
00:21:21,520 --> 00:21:26,559
s and so this is also a problem and it
464
00:21:25,039 --> 00:21:29,000
actually can affect your results
465
00:21:26,559 --> 00:21:30,480
especially if you like don't have a
466
00:21:29,000 --> 00:21:31,760
really strong vocabulary for the
467
00:21:30,480 --> 00:21:33,279
language you're working in or you're
468
00:21:31,760 --> 00:21:37,200
working in a new
469
00:21:33,279 --> 00:21:40,159
domain and so there's a few workarounds
470
00:21:37,200 --> 00:21:41,520
for this uh one workaround for this is
471
00:21:40,159 --> 00:21:44,000
uh called subword
472
00:21:41,520 --> 00:21:46,279
regularization and the way it works is
473
00:21:44,000 --> 00:21:49,400
instead
474
00:21:46,279 --> 00:21:51,640
of just having a single segmentation and
475
00:21:49,400 --> 00:21:54,679
getting the kind of
476
00:21:51,640 --> 00:21:56,200
maximally probable segmentation or the
477
00:21:54,679 --> 00:21:58,480
one the greedy one that you get out of
478
00:21:56,200 --> 00:22:01,360
BPE instead you sample different
479
00:21:58,480 --> 00:22:03,000
segmentations at training time and use
480
00:22:01,360 --> 00:22:05,720
the different segmentations and that
481
00:22:03,000 --> 00:22:09,200
makes your model more robust to this
482
00:22:05,720 --> 00:22:10,840
kind of variation and that's also
483
00:22:09,200 --> 00:22:15,679
actually the reason why sentence piece
484
00:22:10,840 --> 00:22:17,919
was released was through this um subword
485
00:22:15,679 --> 00:22:19,559
regularization paper so that's also
486
00:22:17,919 --> 00:22:22,720
implemented in sentence piece if that's
487
00:22:19,559 --> 00:22:22,720
something you're interested in
488
00:22:24,919 --> 00:22:32,520
trying
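With the SentencePiece bindings, sampling a different segmentation on each call (rather than always taking the single best one) looks roughly like this, assuming a model file trained as in the earlier sketch; the particular argument values are illustrative.

```python
# sample different unigram segmentations of the same sentence, as used for
# subword regularization (m.model is assumed from the earlier sketch;
# the argument values are illustrative)
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file='m.model')
for _ in range(3):
    # enable_sampling draws from the n-best segmentations (-1 means all of them),
    # alpha controls how peaked the sampling distribution is
    print(sp.encode('the companies are expanding', out_type=str,
                    enable_sampling=True, nbest_size=-1, alpha=0.1))
```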
489
00:22:28,480 --> 00:22:32,520
cool um are there any questions or discussions about this
490
00:22:53,279 --> 00:22:56,279
yeah
491
00:22:56,960 --> 00:22:59,960
already
492
00:23:06,799 --> 00:23:11,080
yeah so this is a good question um just
493
00:23:08,960 --> 00:23:12,760
to repeat the question it was like let's
494
00:23:11,080 --> 00:23:16,080
say we have a big
495
00:23:12,760 --> 00:23:19,640
multilingual um subword
496
00:23:16,080 --> 00:23:23,440
model and we want to add a new language
497
00:23:19,640 --> 00:23:26,240
in some way uh how can we reuse the
498
00:23:23,440 --> 00:23:28,880
existing model but add a new
499
00:23:26,240 --> 00:23:31,080
language it's a good question if you're
500
00:23:28,880 --> 00:23:33,679
only using it for subword
501
00:23:31,080 --> 00:23:36,320
segmentation um one one nice thing about
502
00:23:33,679 --> 00:23:36,320
the unigram
503
00:23:36,400 --> 00:23:41,799
model here is this is kind of a
504
00:23:38,880 --> 00:23:43,679
probabilistic model so it's very easy to
505
00:23:41,799 --> 00:23:46,360
do the kind of standard things that we
506
00:23:43,679 --> 00:23:48,240
do with probabilistic models which is
507
00:23:46,360 --> 00:23:50,559
like let's say we had an
508
00:23:48,240 --> 00:23:53,919
old uh an
509
00:23:50,559 --> 00:23:56,880
old vocabulary for
510
00:23:53,919 --> 00:23:59,880
this um we could just
511
00:23:56,880 --> 00:23:59,880
interpolate
512
00:24:07,159 --> 00:24:12,320
um we could interpolate like this and
513
00:24:09,559 --> 00:24:13,840
just you know uh combine the
514
00:24:12,320 --> 00:24:17,080
probabilities of the two and then use
515
00:24:13,840 --> 00:24:19,520
that combine probability in order to
516
00:24:17,080 --> 00:24:21,320
segment the new language um things like
517
00:24:19,520 --> 00:24:24,159
this have been uh done before but I
518
00:24:21,320 --> 00:24:26,159
don't remember the exact preferences uh
519
00:24:24,159 --> 00:24:30,440
for them but that that's what I would do
520
00:24:26,159 --> 00:24:31,960
here another interesting thing is um
521
00:24:30,440 --> 00:24:35,399
this might be getting a little ahead of
522
00:24:31,960 --> 00:24:35,399
myself but there's
523
00:24:48,559 --> 00:24:58,279
a there's a paper that talks about um
524
00:24:55,360 --> 00:25:00,159
how you can take things that are trained
525
00:24:58,279 --> 00:25:03,360
with another
526
00:25:00,159 --> 00:25:05,480
vocabulary and basically the idea is um
527
00:25:03,360 --> 00:25:09,320
you pre-train on whatever languages you
528
00:25:05,480 --> 00:25:10,679
have and then uh you learn embeddings in
529
00:25:09,320 --> 00:25:11,880
the new language you freeze the body of
530
00:25:10,679 --> 00:25:14,360
the model and learn embeddings in the
531
00:25:11,880 --> 00:25:15,880
new language so that's another uh method
532
00:25:14,360 --> 00:25:19,080
that's used it's called on the cross
533
00:25:15,880 --> 00:25:19,080
lingual transferability of
534
00:25:21,840 --> 00:25:26,159
representations and I'll probably talk
535
00:25:23,840 --> 00:25:28,480
about that in the last class of this uh
536
00:25:26,159 --> 00:25:30,720
thing so you can remember that
537
00:25:28,480 --> 00:25:33,720
then cool any other
538
00:25:30,720 --> 00:25:33,720
questions
539
00:25:38,480 --> 00:25:42,640
yeah is bag of words a first step to
540
00:25:41,039 --> 00:25:46,640
process your data if you want to do
541
00:25:42,640 --> 00:25:49,919
Generation Um do you mean like
542
00:25:46,640 --> 00:25:52,440
uh a word based model or a subword based
543
00:25:49,919 --> 00:25:52,440
model
544
00:25:56,679 --> 00:26:00,480
or like is
545
00:26:02,360 --> 00:26:08,000
this so the subword segmentation is the
546
00:26:05,919 --> 00:26:10,640
first step of creating just about any
547
00:26:08,000 --> 00:26:13,080
model nowadays like every model every
548
00:26:10,640 --> 00:26:16,600
model uses this and they usually use
549
00:26:13,080 --> 00:26:21,520
this either to segment characters or
550
00:26:16,600 --> 00:26:23,559
bytes um characters are like Unicode code
551
00:26:21,520 --> 00:26:25,799
points so they actually correspond to an
552
00:26:23,559 --> 00:26:28,279
actual visual character and then bytes
553
00:26:25,799 --> 00:26:31,120
are many unicode characters are like
554
00:26:28,279 --> 00:26:35,000
three bytes like a Chinese character is
555
00:26:31,120 --> 00:26:37,159
three bytes if I remember correctly so um
556
00:26:35,000 --> 00:26:38,640
the byte-based segmentation is nice because
557
00:26:37,159 --> 00:26:41,240
you don't even need to worry about
558
00:26:38,640 --> 00:26:43,880
Unicode you can just do the like you can
559
00:26:41,240 --> 00:26:45,640
just segment the pile like literally as
560
00:26:43,880 --> 00:26:49,440
is and so a lot of people do it that way
561
00:26:45,640 --> 00:26:53,279
too uh llama as far as I know is
562
00:26:49,440 --> 00:26:55,720
bytes I believe GPT is also bytes um but
563
00:26:53,279 --> 00:26:58,799
pre previous to like three or four years
564
00:26:55,720 --> 00:27:02,799
ago people used SCS I
565
00:26:58,799 --> 00:27:05,000
cool um okay so this is really really
566
00:27:02,799 --> 00:27:05,919
important it's not like super complex
567
00:27:05,000 --> 00:27:09,760
and
568
00:27:05,919 --> 00:27:13,039
practically uh you will just maybe maybe
569
00:27:09,760 --> 00:27:15,840
train or maybe just use a tokenizer um
570
00:27:13,039 --> 00:27:18,559
but uh that that's an important thing to
571
00:27:15,840 --> 00:27:20,760
me cool uh next I'd like to move on to
572
00:27:18,559 --> 00:27:24,399
continuous word embeddings
573
00:27:20,760 --> 00:27:26,720
so the basic idea is that previously we
574
00:27:24,399 --> 00:27:28,240
represented words with a sparse Vector
575
00:27:26,720 --> 00:27:30,120
uh with a single one
576
00:27:28,240 --> 00:27:31,960
also known as a one-hot vector so it
577
00:27:30,120 --> 00:27:35,720
looked a little bit like
578
00:27:31,960 --> 00:27:37,640
this and instead what continuous word
579
00:27:35,720 --> 00:27:39,640
embeddings do is they look up a dense
580
00:27:37,640 --> 00:27:42,320
vector and so you get a dense
581
00:27:39,640 --> 00:27:45,760
representation where the entire Vector
582
00:27:42,320 --> 00:27:45,760
has continuous values in
583
00:27:46,000 --> 00:27:51,919
it and I talked about a bag of words
584
00:27:49,200 --> 00:27:54,320
model but we could also create a
585
00:27:51,919 --> 00:27:58,360
continuous bag of words model and the
586
00:27:54,320 --> 00:28:01,159
way this works is you look up the
587
00:27:58,360 --> 00:28:03,720
values of each Vector the embeddings of
588
00:28:01,159 --> 00:28:06,320
each Vector this gives you an embedding
589
00:28:03,720 --> 00:28:08,440
Vector for the entire sequence and then
590
00:28:06,320 --> 00:28:15,120
you multiply this by a weight
591
00:28:08,440 --> 00:28:17,559
Matrix uh where the so this is column so
592
00:28:15,120 --> 00:28:19,960
the rows of the weight Matrix uh
593
00:28:17,559 --> 00:28:22,919
correspond to to the size of this
594
00:28:19,960 --> 00:28:24,760
continuous embedding and The Columns of
595
00:28:22,919 --> 00:28:28,320
the weight Matrix would correspond to
596
00:28:24,760 --> 00:28:30,919
the uh overall um
597
00:28:28,320 --> 00:28:32,559
to the overall uh number of labels that
598
00:28:30,919 --> 00:28:36,919
you would have here and then that would
599
00:28:32,559 --> 00:28:40,120
give you scores and so this uh basically
600
00:28:36,919 --> 00:28:41,679
what this is saying is each Vector now
601
00:28:40,120 --> 00:28:43,440
instead of having a single thing that
602
00:28:41,679 --> 00:28:46,799
represents which vocabulary item you're
603
00:28:43,440 --> 00:28:48,679
looking at uh you would kind of hope
604
00:28:46,799 --> 00:28:52,120
that you would get vectors where words
605
00:28:48,679 --> 00:28:54,919
that are similar uh by some notion of
606
00:28:52,120 --> 00:28:57,760
by some concept of similarity like syntactic
607
00:28:54,919 --> 00:28:59,679
uh syntax semantics whether they're in
608
00:28:57,760 --> 00:29:03,120
the same language or not are close in
609
00:28:59,679 --> 00:29:06,679
the vector space and each Vector element
610
00:29:03,120 --> 00:29:09,399
is a feature uh so for example each
611
00:29:06,679 --> 00:29:11,519
Vector element corresponds to is this an
612
00:29:09,399 --> 00:29:14,960
animate object or is this a positive
613
00:29:11,519 --> 00:29:17,399
word or other Vector other things like
614
00:29:14,960 --> 00:29:19,399
that so just to give an example here
615
00:29:17,399 --> 00:29:21,760
this is totally made up I just made it
616
00:29:19,399 --> 00:29:24,360
in keynote so it's not natural Vector
617
00:29:21,760 --> 00:29:26,279
space but to illustrate the concept
618
00:29:24,360 --> 00:29:27,960
I showed here what if we had a
619
00:29:26,279 --> 00:29:30,240
two-dimensional vector
620
00:29:27,960 --> 00:29:33,399
space where the two-dimensional Vector
621
00:29:30,240 --> 00:29:36,240
space the x-axis here is corresponding to
622
00:29:33,399 --> 00:29:38,679
whether it's animate or not and the the
623
00:29:36,240 --> 00:29:41,480
Y AIS here is corresponding to whether
624
00:29:38,679 --> 00:29:44,080
it's like positive sentiment or not and
625
00:29:41,480 --> 00:29:46,399
so this is kind of like our ideal uh
626
00:29:44,080 --> 00:29:49,799
goal
627
00:29:46,399 --> 00:29:52,279
here um so why would we want to do this
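A minimal numpy sketch of the continuous bag-of-words scorer described above; the vocabulary, embedding size, and number of labels are made up for illustration, and in practice the matrices would be learned rather than random.

```python
# minimal sketch of a continuous bag-of-words scorer (sizes and values are illustrative)
import numpy as np

vocab = {'the': 0, 'companies': 1, 'are': 2, 'expanding': 3}
emb_size, num_labels = 4, 2

rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), emb_size))   # dense word embeddings (learned in practice)
W = rng.normal(size=(emb_size, num_labels))   # rows match the embedding size, columns the labels

def score(words):
    # look up each word's dense vector, sum them into one sequence embedding,
    # then multiply by the weight matrix to get one score per label
    h = sum(E[vocab[w]] for w in words)
    return h @ W

print(score(['the', 'companies', 'are', 'expanding']))
```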
628
00:29:49,799 --> 00:29:52,279
yeah sorry
629
00:29:56,320 --> 00:30:03,399
guys what do the like in the one it's
630
00:30:00,919 --> 00:30:06,399
one
631
00:30:03,399 --> 00:30:06,399
yep
632
00:30:07,200 --> 00:30:12,519
like so what would the four entries do
633
00:30:09,880 --> 00:30:14,799
here the four entries here are learned
634
00:30:12,519 --> 00:30:17,039
so they are um they're learned just
635
00:30:14,799 --> 00:30:18,519
together with the model um and I'm going
636
00:30:17,039 --> 00:30:22,120
to talk about exactly how we learn them
637
00:30:18,519 --> 00:30:24,000
soon but the the final goal is that
638
00:30:22,120 --> 00:30:25,399
after learning has happened they look
639
00:30:24,000 --> 00:30:26,799
they have these two properties like
640
00:30:25,399 --> 00:30:28,600
similar words are close together in the
641
00:30:26,799 --> 00:30:30,080
vector space
642
00:30:28,600 --> 00:30:32,640
and
643
00:30:30,080 --> 00:30:35,679
um that's like number one that's the
644
00:30:32,640 --> 00:30:37,600
most important and then number two is
645
00:30:35,679 --> 00:30:39,279
ideally these uh features would have
646
00:30:37,600 --> 00:30:41,200
some meaning uh maybe human
647
00:30:39,279 --> 00:30:44,720
interpretable meaning maybe not human
648
00:30:41,200 --> 00:30:47,880
interpretable meaning but
649
00:30:44,720 --> 00:30:50,880
yeah so um one thing that I should
650
00:30:47,880 --> 00:30:53,159
mention is I I showed a contrast between
651
00:30:50,880 --> 00:30:55,159
the bag of words uh the one hot
652
00:30:53,159 --> 00:30:57,000
representations here and the dense
653
00:30:55,159 --> 00:31:00,880
representations here and I used this
654
00:30:57,000 --> 00:31:03,880
look look up operation for both of them
655
00:31:00,880 --> 00:31:07,399
and this this lookup
656
00:31:03,880 --> 00:31:09,559
operation actually um can be viewed as
657
00:31:07,399 --> 00:31:11,799
grabbing a single Vector from a big
658
00:31:09,559 --> 00:31:14,919
Matrix of word
659
00:31:11,799 --> 00:31:17,760
embeddings and
660
00:31:14,919 --> 00:31:19,760
so the way it can work is like we have
661
00:31:17,760 --> 00:31:22,919
this big vector and then we look up word
662
00:31:19,760 --> 00:31:25,919
number two in a zero-indexed matrix and it
663
00:31:22,919 --> 00:31:27,799
would just grab this out of that Matrix
664
00:31:25,919 --> 00:31:29,880
and that's practically what most like
665
00:31:27,799 --> 00:31:32,240
deep learning libraries or or whatever
666
00:31:29,880 --> 00:31:35,840
Library you use are going to be
667
00:31:32,240 --> 00:31:38,000
doing but another uh way you can view it
668
00:31:35,840 --> 00:31:40,880
is you can view it as multiplying by a
669
00:31:38,000 --> 00:31:43,880
one hot vector and so you have this
670
00:31:40,880 --> 00:31:48,679
Vector uh exactly the same Matrix uh but
671
00:31:43,880 --> 00:31:50,799
you just multiply by a vector uh 0 0 1 0 0
672
00:31:48,679 --> 00:31:55,720
and that gives you exactly the same
673
00:31:50,799 --> 00:31:58,200
things
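That equivalence between the two views can be checked in a few lines of numpy (sizes and values here are made up), including the probability-vector trick that comes up a little further down:

```python
# checking that an embedding lookup equals multiplying a one-hot vector by the
# embedding matrix, and that a probability vector gives a mixture of embeddings
# (sizes and values are illustrative)
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))                # 5 vocabulary items, 3-dimensional embeddings

lookup = E[2]                              # view 1: grab row 2 directly (what libraries do)

one_hot = np.array([0., 0., 1., 0., 0.])   # view 2: multiply by a one-hot vector
print(np.allclose(lookup, one_hot @ E))    # True

probs = np.array([0.5, 0.3, 0.2, 0., 0.])  # e.g. 50% cat, 30% dog, 20% bird
print(probs @ E)                           # a weighted mix of those embeddings
```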
674
00:31:55,720 --> 00:31:59,720
um so the practical implementations of this uh tend to be
675
00:31:58,200 --> 00:32:01,279
the first one because the first one's a
676
00:31:59,720 --> 00:32:04,679
lot faster to implement you don't need
677
00:32:01,279 --> 00:32:06,760
to multiply like this big thing by a
678
00:32:04,679 --> 00:32:11,000
huge Vector but there
679
00:32:06,760 --> 00:32:13,880
are advantages of knowing the second one
680
00:32:11,000 --> 00:32:15,519
uh just to give an example what if you
681
00:32:13,880 --> 00:32:19,600
for whatever reason you came up with
682
00:32:15,519 --> 00:32:21,440
like an a crazy model that predicts a
683
00:32:19,600 --> 00:32:24,120
probability distribution over words
684
00:32:21,440 --> 00:32:25,720
instead of just words maybe it's a
685
00:32:24,120 --> 00:32:27,679
language model that has an idea of what
686
00:32:25,720 --> 00:32:30,200
the next word is going to look like
687
00:32:27,679 --> 00:32:32,159
and maybe your um maybe your model
688
00:32:30,200 --> 00:32:35,279
thinks the next word has a 50%
689
00:32:32,159 --> 00:32:36,600
probability of being cat 30%
690
00:32:35,279 --> 00:32:42,279
probability of being
691
00:32:36,600 --> 00:32:44,960
dog and uh 2% probability uh sorry uh
692
00:32:42,279 --> 00:32:47,200
20% probability being
693
00:32:44,960 --> 00:32:50,000
bird you can take this vector and
694
00:32:47,200 --> 00:32:51,480
multiply it by The Matrix and get like a
695
00:32:50,000 --> 00:32:53,639
word embedding that's kind of a mix of
696
00:32:51,480 --> 00:32:55,639
all of those words which might be
697
00:32:53,639 --> 00:32:57,960
interesting and let you do creative
698
00:32:55,639 --> 00:33:02,120
things so um knowing that these two
699
00:32:57,960 --> 00:33:05,360
things are the same are the same is kind
700
00:33:02,120 --> 00:33:05,360
of useful for that kind of
701
00:33:05,919 --> 00:33:11,480
thing um any any questions about this
702
00:33:09,120 --> 00:33:13,919
I'm G to talk about how we train next so
703
00:33:11,480 --> 00:33:18,159
maybe maybe I can go into
704
00:33:13,919 --> 00:33:23,159
that okay cool so how do we get the
705
00:33:18,159 --> 00:33:25,840
vectors uh like the question uh so up
706
00:33:23,159 --> 00:33:27,519
until now we trained a bag of words
707
00:33:25,840 --> 00:33:29,080
model and the way we trained a bag of
708
00:33:27,519 --> 00:33:31,159
words model was using the structured
709
00:33:29,080 --> 00:33:35,440
perceptron algorithm where if the model
710
00:33:31,159 --> 00:33:39,639
got the answer wrong we would either
711
00:33:35,440 --> 00:33:42,799
increment or decrement the embeddings
712
00:33:39,639 --> 00:33:45,080
based on whether uh whether the label
713
00:33:42,799 --> 00:33:46,559
was positive or negative right so I
714
00:33:45,080 --> 00:33:48,919
showed an example of this very simple
715
00:33:46,559 --> 00:33:51,039
algorithm you don't even uh need to
716
00:33:48,919 --> 00:33:52,480
write any like numpy or anything like
717
00:33:51,039 --> 00:33:55,919
that to implement that
718
00:33:52,480 --> 00:33:59,559
algorithm uh so here here it is so we
719
00:33:55,919 --> 00:34:02,320
have like for x y in uh data we extract
720
00:33:59,559 --> 00:34:04,639
the features we run the classifier uh we
721
00:34:02,320 --> 00:34:07,440
have the predicted y and then we
722
00:34:04,639 --> 00:34:09,480
increment or decrement
723
00:34:07,440 --> 00:34:12,679
features
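A self-contained sketch of that perceptron update loop (the toy data and feature extractor here are placeholders, not the course's actual code):

```python
# self-contained sketch of the structured perceptron update described above
# (toy data and feature extractor are placeholders, not the course's code)
from collections import Counter, defaultdict

data = [('i love this movie', 1), ('this movie is terrible', -1)]  # (x, y) pairs, y in {-1, +1}
w = defaultdict(float)                       # feature weights, all starting at zero

def extract_features(x):
    return Counter(x.split())                # bag-of-words counts

for x, y in data:
    feats = extract_features(x)
    score = sum(w[f] * v for f, v in feats.items())
    y_pred = 1 if score > 0 else -1
    if y_pred != y:                          # wrong answer: increment or decrement
        for f, v in feats.items():
            w[f] += y * v
```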
724
00:34:09,480 --> 00:34:15,599
but how do we train more complex models so I think most people
725
00:34:12,679 --> 00:34:17,079
here have taken a uh machine learning
726
00:34:15,599 --> 00:34:19,159
class of some kind so this will be
727
00:34:17,079 --> 00:34:21,079
reviewed for a lot of people uh but
728
00:34:19,159 --> 00:34:22,280
basically we do this uh by doing
729
00:34:21,079 --> 00:34:24,839
gradient
730
00:34:22,280 --> 00:34:27,240
descent and in order to do so we write
731
00:34:24,839 --> 00:34:29,919
down a loss function calculate the
732
00:34:27,240 --> 00:34:30,919
derivatives of the loss function with
733
00:34:29,919 --> 00:34:35,079
respect to the
734
00:34:30,919 --> 00:34:37,320
parameters and move uh the parameters in
735
00:34:35,079 --> 00:34:40,839
the direction that reduces the loss
736
00:34:37,320 --> 00:34:42,720
function and so specifically for this bag
737
00:34:40,839 --> 00:34:45,560
of words or continuous bag of words
738
00:34:42,720 --> 00:34:48,240
model um we want this loss function
739
00:34:45,560 --> 00:34:50,839
to be a loss function that gets lower as
740
00:34:48,240 --> 00:34:52,240
the model gets better and I'm going to
741
00:34:50,839 --> 00:34:54,000
give two examples from binary
742
00:34:52,240 --> 00:34:57,400
classification both of these are used in
743
00:34:54,000 --> 00:34:58,839
NLP models uh reasonably frequently
744
00:34:57,400 --> 00:35:01,440
uh there's a bunch of other loss
745
00:34:58,839 --> 00:35:02,800
functions but these are kind of the two
746
00:35:01,440 --> 00:35:05,480
major
747
00:35:02,800 --> 00:35:08,160
ones so the first one um which is
748
00:35:05,480 --> 00:35:10,160
actually less frequent is the hinge loss
749
00:35:08,160 --> 00:35:13,400
and then the second one is taking a
750
00:35:10,160 --> 00:35:15,800
sigmoid and then doing negative log
751
00:35:13,400 --> 00:35:19,760
likelyhood so the hinge loss basically
752
00:35:15,800 --> 00:35:22,760
what we do is we uh take the max of the
753
00:35:19,760 --> 00:35:26,119
label times the score that is output by
754
00:35:22,760 --> 00:35:29,200
the model and zero and what this looks
755
00:35:26,119 --> 00:35:33,480
like is we have a hinge loss uh where
756
00:35:29,200 --> 00:35:36,880
if y is equal to one the loss if the score is
757
00:35:33,480 --> 00:35:39,520
greater than zero is zero so as long as
758
00:35:36,880 --> 00:35:42,680
we get basically as long as we get the
759
00:35:39,520 --> 00:35:45,079
answer right there's no loss um as the
760
00:35:42,680 --> 00:35:47,400
answer gets more wrong the loss gets
761
00:35:45,079 --> 00:35:49,880
worse like this and then similarly if
762
00:35:47,400 --> 00:35:53,160
the label is negative if we get a
763
00:35:49,880 --> 00:35:54,839
negative score uh then we get zero loss
764
00:35:53,160 --> 00:35:55,800
and the loss increases if we have a
765
00:35:54,839 --> 00:35:58,800
positive
766
00:35:55,800 --> 00:36:00,800
score so the sigmoid plus negative log
767
00:35:58,800 --> 00:36:05,440
likelihood the way this works is you
768
00:36:00,800 --> 00:36:07,400
multiply y * the score here and um then
769
00:36:05,440 --> 00:36:09,960
we have the sigmoid function which is
770
00:36:07,400 --> 00:36:14,079
just kind of a nice function that looks
771
00:36:09,960 --> 00:36:15,440
like this with zero and one centered
772
00:36:14,079 --> 00:36:19,480
around
773
00:36:15,440 --> 00:36:21,240
zero and then we take the negative log
774
00:36:19,480 --> 00:36:22,319
of this sigmoid function or the negative
775
00:36:21,240 --> 00:36:27,160
log
776
00:36:22,319 --> 00:36:28,520
likelihood and that gives us a uh loss that
777
00:36:27,160 --> 00:36:30,440
looks a little bit like this so
778
00:36:28,520 --> 00:36:32,640
basically you can see that these look
779
00:36:30,440 --> 00:36:36,040
very similar right the difference being
780
00:36:32,640 --> 00:36:37,760
that the hinge loss is uh sharp and we
781
00:36:36,040 --> 00:36:41,119
get exactly a zero loss if we get the
782
00:36:37,760 --> 00:36:44,319
answer right and the sigmoid is smooth
783
00:36:41,119 --> 00:36:48,440
uh and we never get a zero
784
00:36:44,319 --> 00:36:50,680
loss
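With y in {-1, +1} and s(x) the score output by the model, one consistent way to write the two losses just described (the exact form on the slides may differ slightly) is:

```latex
\ell_{\text{hinge}}(x, y) = \max\bigl(0,\; -y \cdot s(x)\bigr)
\qquad
\ell_{\text{NLL}}(x, y) = -\log \sigma\bigl(y \cdot s(x)\bigr),
\quad \sigma(z) = \frac{1}{1 + e^{-z}}
```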
785
00:36:48,440 --> 00:36:53,119
um so does anyone have an idea of the benefits and disadvantages of
786
00:36:50,680 --> 00:36:55,680
these I kind of flashed one on the
787
00:36:53,119 --> 00:36:57,599
screen already
788
00:36:55,680 --> 00:36:59,400
but
789
00:36:57,599 --> 00:37:01,359
so I flash that on the screen so I'll
790
00:36:59,400 --> 00:37:03,680
give this one and then I can have a quiz
791
00:37:01,359 --> 00:37:06,319
about the sigmoid but the the hinge loss
792
00:37:03,680 --> 00:37:07,720
is more closely linked to accuracy and
793
00:37:06,319 --> 00:37:10,400
the reason why it's more closely linked
794
00:37:07,720 --> 00:37:13,640
to accuracy is because basically we will
795
00:37:10,400 --> 00:37:16,079
get a zero loss if the model gets the
796
00:37:13,640 --> 00:37:18,319
answer right so when the model gets all
797
00:37:16,079 --> 00:37:20,240
of the answers right we will just stop
798
00:37:18,319 --> 00:37:22,760
updating our model whatsoever because we
799
00:37:20,240 --> 00:37:25,440
never we don't have any loss whatsoever
800
00:37:22,760 --> 00:37:27,720
and the gradient of the loss is zero um
801
00:37:25,440 --> 00:37:29,960
what about the sigmoid uh a negative log
802
00:37:27,720 --> 00:37:33,160
likelihood uh there there's kind of two
803
00:37:29,960 --> 00:37:36,160
major advantages of this anyone want to
804
00:37:33,160 --> 00:37:36,160
review their machine learning
805
00:37:38,240 --> 00:37:41,800
test sorry what was
806
00:37:43,800 --> 00:37:49,960
that for for R uh yeah maybe there's a
807
00:37:48,200 --> 00:37:51,319
more direct I think I know what you're
808
00:37:49,960 --> 00:37:54,560
saying but maybe there's a more direct
809
00:37:51,319 --> 00:37:54,560
way to say that um
810
00:37:54,839 --> 00:38:00,760
yeah yeah so the gradient is nonzero
811
00:37:57,560 --> 00:38:04,240
everywhere and uh the gradient also kind
812
00:38:00,760 --> 00:38:05,839
of increases as your score gets worse so
813
00:38:04,240 --> 00:38:08,440
those are that's one advantage it makes
814
00:38:05,839 --> 00:38:11,240
it easier to optimize models um another
815
00:38:08,440 --> 00:38:13,839
one linked to the ROC score but maybe we
816
00:38:11,240 --> 00:38:13,839
could say it more
817
00:38:16,119 --> 00:38:19,400
directly any
818
00:38:20,040 --> 00:38:26,920
ideas okay um basically the sigmoid can
819
00:38:23,240 --> 00:38:30,160
be interpreted as a probability so um if
820
00:38:26,920 --> 00:38:32,839
the the sigmoid is between zero and one
821
00:38:30,160 --> 00:38:34,640
uh and because it's between zero and one
822
00:38:32,839 --> 00:38:36,720
we can say the sigmoid is a
823
00:38:34,640 --> 00:38:38,640
probability um and that can be useful
824
00:38:36,720 --> 00:38:40,119
for various things like if we want a
825
00:38:38,640 --> 00:38:41,960
downstream model or if we want a
826
00:38:40,119 --> 00:38:45,480
confidence prediction out of the model
827
00:38:41,960 --> 00:38:48,200
so those are two uh advantages of using
828
00:38:45,480 --> 00:38:49,920
a sigmoid plus negative log likelihood there's
829
00:38:48,200 --> 00:38:53,160
no probabilistic interpretation to
830
00:38:49,920 --> 00:38:56,560
something trained with the hinge loss
831
00:38:53,160 --> 00:38:59,200
basically cool um so the next thing that
832
00:38:56,560 --> 00:39:01,240
that we do is we calculate derivatives
833
00:38:59,200 --> 00:39:04,040
and we calculate the derivative of the
834
00:39:01,240 --> 00:39:05,920
parameter given the loss function um to
835
00:39:04,040 --> 00:39:09,839
give an example of the bag of words
836
00:39:05,920 --> 00:39:13,480
model and the hinge loss um the hinge
837
00:39:09,839 --> 00:39:16,480
loss as I said is the max of zero and
838
00:39:13,480 --> 00:39:19,359
the negative score times y in the bag of words model
839
00:39:16,480 --> 00:39:22,640
the score was the frequency of that
840
00:39:19,359 --> 00:39:25,880
vocabulary item in the input multiplied
841
00:39:22,640 --> 00:39:27,680
by the weight here and so this is
842
00:39:25,880 --> 00:39:29,520
a simple enough function that I can just do
843
00:39:27,680 --> 00:39:34,440
the derivative by hand and if I do the
844
00:39:29,520 --> 00:39:36,920
derivative by hand what comes out is if y *
845
00:39:34,440 --> 00:39:39,319
this value is greater than zero so in
846
00:39:36,920 --> 00:39:44,640
other words if this Max uh picks this
847
00:39:39,319 --> 00:39:48,319
instead of this then the derivative is y
848
00:39:44,640 --> 00:39:52,359
times the frequency vector and otherwise uh it
849
00:39:48,319 --> 00:39:52,359
is in the opposite
850
00:39:55,400 --> 00:40:00,160
direction
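A rough sketch of that derivative for the bag-of-words case, written with dictionaries rather than vectors (my own variable names; the hinge loss here is max(0, -y * score)):

    def bow_hinge_grad(freq, w, y):
        # freq: {word: count} for one example, w: {word: weight}, y in {-1, +1}
        score = sum(count * w.get(word, 0.0) for word, count in freq.items())
        if -y * score > 0:  # the max picked the -y*score branch, i.e. we got it wrong
            return {word: -y * count for word, count in freq.items()}
        return {word: 0.0 for word in freq}  # correct side of the margin: zero gradient

Gradient descent then subtracts this, so a mistake moves each weight by +y times that word's frequency.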
851
00:39:56,920 --> 00:40:02,839
then uh optimizing gradients uh we do
852
00:40:00,160 --> 00:40:06,200
standard stochastic
853
00:40:02,839 --> 00:40:07,839
gradient descent uh which is the most
854
00:40:06,200 --> 00:40:10,920
standard optimization algorithm for
855
00:40:07,839 --> 00:40:14,440
these models uh we basically have a
856
00:40:10,920 --> 00:40:17,440
gradient over uh you take the gradient
857
00:40:14,440 --> 00:40:20,040
of the loss function with respect to the parameter
858
00:40:17,440 --> 00:40:22,480
and we call it g_t so here um sorry I
859
00:40:20,040 --> 00:40:25,599
switched my terminology between W and
860
00:40:22,480 --> 00:40:28,280
Theta so this could be W uh the previous
861
00:40:25,599 --> 00:40:31,000
value of w
862
00:40:28,280 --> 00:40:35,440
um and this is the gradient of the loss
863
00:40:31,000 --> 00:40:37,040
and then uh we take the previous value
864
00:40:35,440 --> 00:40:39,680
and then we subtract out the learning
865
00:40:37,040 --> 00:40:39,680
rate times the gradient.
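Written out, that update is (with θ for the parameters, η for the learning rate, and ℓ for the loss; as noted, the slide uses w and θ interchangeably):

    g_t = \nabla_{\theta} \ell(\theta_{t-1})
    \theta_t = \theta_{t-1} - \eta \, g_t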
866
00:40:40,680 --> 00:40:45,720
and uh there are many many
867
00:40:43,200 --> 00:40:47,280
other optimization options uh I'll cover
868
00:40:45,720 --> 00:40:50,960
the most frequent one called Adam at the
869
00:40:47,280 --> 00:40:54,319
end of this uh this lecture but um this
870
00:40:50,960 --> 00:40:57,160
is the basic way of optimizing the
871
00:40:54,319 --> 00:41:00,599
model so
872
00:40:57,160 --> 00:41:03,359
then my question now is what is this
873
00:41:00,599 --> 00:41:07,000
algorithm with respect
874
00:41:03,359 --> 00:41:10,119
to this is an algorithm that is
875
00:41:07,000 --> 00:41:12,280
that has a loss function it's
876
00:41:10,119 --> 00:41:14,079
calculating derivatives and it's
877
00:41:12,280 --> 00:41:17,240
optimizing gradients using stochastic
878
00:41:14,079 --> 00:41:18,839
gradient descent so does anyone have a
879
00:41:17,240 --> 00:41:20,960
guess about what the loss function is
880
00:41:18,839 --> 00:41:23,520
here and maybe what is the learning rate
881
00:41:20,960 --> 00:41:23,520
of stochastic
882
00:41:24,319 --> 00:41:29,480
gradient descent I kind of gave you a hint about
883
00:41:26,599 --> 00:41:29,480
the L one
884
00:41:31,640 --> 00:41:37,839
actually and just to recap what this is
885
00:41:34,440 --> 00:41:41,440
doing here it's um if predicted Y is
886
00:41:37,839 --> 00:41:44,560
not equal to y then it is moving the
887
00:41:41,440 --> 00:41:48,240
feature weights in the direction of y
888
00:41:44,560 --> 00:41:48,240
times the frequency
889
00:41:52,599 --> 00:41:56,960
Vector
890
00:41:55,240 --> 00:41:59,079
yeah
891
00:41:56,960 --> 00:42:01,640
yeah exactly so the loss function is
892
00:41:59,079 --> 00:42:05,800
hinge loss and the learning rate is one
893
00:42:01,640 --> 00:42:07,880
um and just to show how that you know
894
00:42:05,800 --> 00:42:12,359
corresponds we have this if statement
895
00:42:07,880 --> 00:42:12,359
here and we have the increment of the
896
00:42:12,960 --> 00:42:20,240
features and this is what the um what
897
00:42:16,920 --> 00:42:21,599
the loss sorry the derivative looked like
898
00:42:20,240 --> 00:42:24,240
so we have
899
00:42:21,599 --> 00:42:26,920
if this is moving in the right direction
900
00:42:24,240 --> 00:42:29,520
for the label uh then we increment
901
00:42:26,920 --> 00:42:31,599
otherwise we do nothing.
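Roughly, the few lines of Python in question look something like this sketch (not the exact code from last class):

    def train_perceptron_style(data, w, epochs=10):
        # data: list of ({word: count}, y) pairs with y in {-1, +1}
        # w: {word: weight}, updated in place
        for _ in range(epochs):
            for freq, y in data:
                score = sum(count * w.get(word, 0.0) for word, count in freq.items())
                if y * score <= 0:  # this example is not classified correctly
                    for word, count in freq.items():
                        # hinge-loss SGD step with learning rate 1
                        w[word] = w.get(word, 0.0) + y * count
        return w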
902
00:42:29,520 --> 00:42:33,559
so basically you can see that even this
903
00:42:31,599 --> 00:42:35,200
really simple algorithm that I you know
904
00:42:33,559 --> 00:42:37,480
implemented with a few lines of python
905
00:42:35,200 --> 00:42:38,839
is essentially equivalent to this uh
906
00:42:37,480 --> 00:42:40,760
stochastic gradient descent that we
907
00:42:38,839 --> 00:42:44,559
are doing on these
908
00:42:40,760 --> 00:42:46,359
models so the good news about this is
909
00:42:44,559 --> 00:42:48,359
you know this this is really simple but
910
00:42:46,359 --> 00:42:50,599
it only really works for like a bag of
911
00:42:48,359 --> 00:42:55,400
words model or a simple feature based
912
00:42:50,599 --> 00:42:57,200
model uh but it opens up a lot of uh new
913
00:42:55,400 --> 00:43:00,440
possibilities for how we can optimize
914
00:42:57,200 --> 00:43:01,599
models and in particular I mentioned uh
915
00:43:00,440 --> 00:43:04,839
that there was a problem with
916
00:43:01,599 --> 00:43:08,200
combination features last class like
917
00:43:04,839 --> 00:43:11,200
don't hate and don't love are not just
918
00:43:08,200 --> 00:43:12,760
you know hate plus don't and love plus
919
00:43:11,200 --> 00:43:14,119
don't it's actually the combination of
920
00:43:12,760 --> 00:43:17,680
the two is really
921
00:43:14,119 --> 00:43:20,160
important and so um yeah just to give an
922
00:43:17,680 --> 00:43:23,440
example we have don't love is maybe bad
923
00:43:20,160 --> 00:43:26,960
uh nothing I don't love is very
924
00:43:23,440 --> 00:43:30,960
good and so in order
925
00:43:26,960 --> 00:43:34,040
to solve this problem we turn to neural
926
00:43:30,960 --> 00:43:37,160
networks and the way we do this is we
927
00:43:34,040 --> 00:43:39,119
have a lookup of dense embeddings sorry
928
00:43:37,160 --> 00:43:41,839
I actually I just realized my coloring
929
00:43:39,119 --> 00:43:44,119
is off I was using red to indicate dense
930
00:43:41,839 --> 00:43:46,480
embeddings so this should be maybe red
931
00:43:44,119 --> 00:43:49,319
instead of blue but um we take these
932
00:43:46,480 --> 00:43:51,200
dense embeddings and then we create
933
00:43:49,319 --> 00:43:53,720
some complicated function to extract
934
00:43:51,200 --> 00:43:55,079
combination features um and then use
935
00:43:53,720 --> 00:43:57,359
those to calculate
936
00:43:55,079 --> 00:44:02,200
scores
937
00:43:57,359 --> 00:44:04,480
um and so we calculate these combination
938
00:44:02,200 --> 00:44:08,240
features and what we want to do is we
939
00:44:04,480 --> 00:44:12,880
want to extract vectors from the input
940
00:44:08,240 --> 00:44:12,880
where each Vector has features
941
00:44:15,839 --> 00:44:21,040
um sorry this is in the wrong order so
942
00:44:18,240 --> 00:44:22,559
I'll I'll get back to this um so this
943
00:44:21,040 --> 00:44:25,319
this was talking about the
944
00:44:22,559 --> 00:44:27,200
Continuous bag of words features so the
945
00:44:25,319 --> 00:44:30,960
problem with the continuous bag of words
946
00:44:27,200 --> 00:44:30,960
features was we were extracting
947
00:44:31,359 --> 00:44:36,359
features
948
00:44:33,079 --> 00:44:36,359
um like
949
00:44:36,839 --> 00:44:41,400
this but then we were directly using the
950
00:44:39,760 --> 00:44:43,359
the feature the dense features that we
951
00:44:41,400 --> 00:44:45,559
extracted to make predictions without
952
00:44:43,359 --> 00:44:48,839
actually allowing for any interactions
953
00:44:45,559 --> 00:44:51,839
between the features um and
954
00:44:48,839 --> 00:44:55,160
so uh neural networks the way we fix
955
00:44:51,839 --> 00:44:57,079
this is we first extract these features
956
00:44:55,160 --> 00:44:59,440
uh we take these these features of each
957
00:44:57,079 --> 00:45:04,000
word embedding and then we run them
958
00:44:59,440 --> 00:45:07,240
through uh kind of linear transforms and
959
00:45:04,000 --> 00:45:09,880
nonlinear uh like linear multiplications
960
00:45:07,240 --> 00:45:10,880
and then nonlinear transforms to extract
961
00:45:09,880 --> 00:45:13,920
additional
962
00:45:10,880 --> 00:45:15,839
features and uh finally run this through
963
00:45:13,920 --> 00:45:18,640
several layers and then use the
964
00:45:15,839 --> 00:45:21,119
resulting features to make our
965
00:45:18,640 --> 00:45:23,200
predictions and when we do this this
966
00:45:21,119 --> 00:45:25,319
allows us to do more uh interesting
967
00:45:23,200 --> 00:45:28,319
things so like for example we could
968
00:45:25,319 --> 00:45:30,000
learn feature combination a node in the
969
00:45:28,319 --> 00:45:32,599
second layer might be feature one and
970
00:45:30,000 --> 00:45:35,240
feature five are active so that could be
971
00:45:32,599 --> 00:45:38,680
like feature one corresponds to negative
972
00:45:35,240 --> 00:45:43,640
sentiment words like hate
973
00:45:38,680 --> 00:45:45,839
despise um and other things like that so
974
00:45:43,640 --> 00:45:50,079
for hate and despise feature one would
975
00:45:45,839 --> 00:45:53,119
have a high value like 8.0 and then
976
00:45:50,079 --> 00:45:55,480
7.2 and then we also have negation words
977
00:45:53,119 --> 00:45:57,040
like don't or not or something like that
978
00:45:55,480 --> 00:46:00,040
and those would
979
00:45:57,040 --> 00:46:00,040
have
980
00:46:03,720 --> 00:46:08,640
don't would have a high value for like 2
981
00:46:11,880 --> 00:46:15,839
five and so these would be the word
982
00:46:14,200 --> 00:46:18,040
embeddings where each word embedding
983
00:46:15,839 --> 00:46:20,599
corresponded to you know features of the
984
00:46:18,040 --> 00:46:23,480
words and
985
00:46:20,599 --> 00:46:25,480
then um after that we would extract
986
00:46:23,480 --> 00:46:29,319
feature combinations in this second
987
00:46:25,480 --> 00:46:32,079
layer that say oh we see at least one
988
00:46:29,319 --> 00:46:33,760
word where the first feature is active
989
00:46:32,079 --> 00:46:36,359
and we see at least one word where the
990
00:46:33,760 --> 00:46:37,920
fifth feature is active so now that
991
00:46:36,359 --> 00:46:40,640
allows us to capture the fact that we
992
00:46:37,920 --> 00:46:42,319
saw like don't hate or don't despise or
993
00:46:40,640 --> 00:46:44,559
not hate or not despise or something
994
00:46:42,319 --> 00:46:44,559
like that.
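Putting that together, the deep continuous bag-of-words computation being described is roughly the following (emb(·) is the embedding lookup, W and b are layer parameters, tanh is one choice of nonlinearity; the notation is mine, not the slide's):

    h^{(0)} = \sum_{i} \mathrm{emb}(x_i)
    h^{(l)} = \tanh\left( W^{(l)} h^{(l-1)} + b^{(l)} \right), \quad l = 1, \dots, L
    s = W^{(\mathrm{out})} h^{(L)} + b^{(\mathrm{out})}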
995
00:46:45,079 --> 00:46:51,760
so this is the way uh kind of this
996
00:46:49,680 --> 00:46:54,839
is a deep uh continuous bag of words
997
00:46:51,760 --> 00:46:56,839
model um this actually was proposed in
998
00:46:54,839 --> 00:46:58,119
2015 I don't think I have the
999
00:46:56,839 --> 00:47:02,599
reference on the slide but I think it's
1000
00:46:58,119 --> 00:47:05,040
in the notes um on the website and
1001
00:47:02,599 --> 00:47:07,200
actually at that point in time they
1002
00:47:05,040 --> 00:47:09,200
demonstrated there were several interesting
1003
00:47:07,200 --> 00:47:11,960
results that showed that even this like
1004
00:47:09,200 --> 00:47:13,960
really simple model did really well uh
1005
00:47:11,960 --> 00:47:16,319
at text classification and other simple
1006
00:47:13,960 --> 00:47:18,640
tasks like that because it was able to
1007
00:47:16,319 --> 00:47:21,720
you know share features of the words and
1008
00:47:18,640 --> 00:47:23,800
then extract combinations of the
1009
00:47:21,720 --> 00:47:28,200
features
1010
00:47:23,800 --> 00:47:29,760
so um in order to learn these we
1011
00:47:28,200 --> 00:47:30,920
need to start turning to neural networks
1012
00:47:29,760 --> 00:47:34,400
and the reason why we need to start
1013
00:47:30,920 --> 00:47:38,040
turning to neural networks is
1014
00:47:34,400 --> 00:47:41,920
because while I can calculate the loss
1015
00:47:38,040 --> 00:47:43,280
function of the while I can calculate
1016
00:47:41,920 --> 00:47:44,839
the loss function of the hinge loss for
1017
00:47:43,280 --> 00:47:47,720
a bag of words model by hand I
1018
00:47:44,839 --> 00:47:49,359
definitely don't I probably could but
1019
00:47:47,720 --> 00:47:51,240
don't want to do it for a model that
1020
00:47:49,359 --> 00:47:53,200
starts to become as complicated as this
1021
00:47:51,240 --> 00:47:57,440
with multiple Matrix multiplications
1022
00:47:53,200 --> 00:48:00,520
and tanhs and stuff like that so the way we
1023
00:47:57,440 --> 00:48:05,000
do this just a very brief uh coverage of
1024
00:48:00,520 --> 00:48:06,200
this uh for because um I think probably
1025
00:48:05,000 --> 00:48:08,400
a lot of people have dealt with neural
1026
00:48:06,200 --> 00:48:10,200
networks before um the original
1027
00:48:08,400 --> 00:48:12,880
motivation was that we had neurons in
1028
00:48:10,200 --> 00:48:16,160
the brain uh where
1029
00:48:12,880 --> 00:48:18,839
each of the neuron synapses took in
1030
00:48:16,160 --> 00:48:21,480
an electrical signal and once they got
1031
00:48:18,839 --> 00:48:24,079
enough electrical signal they would fire
1032
00:48:21,480 --> 00:48:25,960
um but now the current conception of
1033
00:48:24,079 --> 00:48:28,160
neural networks or deep learning models
1034
00:48:25,960 --> 00:48:30,440
is basically computation
1035
00:48:28,160 --> 00:48:32,400
graphs and the way a computation graph
1036
00:48:30,440 --> 00:48:34,760
Works um and I'm especially going to
1037
00:48:32,400 --> 00:48:36,240
talk about the way it works in natural
1038
00:48:34,760 --> 00:48:38,119
language processing which might be a
1039
00:48:36,240 --> 00:48:42,319
contrast to the way it works in computer
1040
00:48:38,119 --> 00:48:43,960
vision is um we have an expression uh
1041
00:48:42,319 --> 00:48:46,480
that looks like this and maybe maybe
1042
00:48:43,960 --> 00:48:47,640
it's the expression X corresponding to
1043
00:48:46,480 --> 00:48:51,880
uh a
1044
00:48:47,640 --> 00:48:53,400
scalar um and each node corresponds to
1045
00:48:51,880 --> 00:48:55,599
something like a tensor a matrix a
1046
00:48:53,400 --> 00:48:57,599
vector a scalar so a scalar is uh kind
1047
00:48:55,599 --> 00:49:00,480
of zero dimensional it's a single
1048
00:48:57,599 --> 00:49:01,720
value one dimensional two dimensional or
1049
00:49:00,480 --> 00:49:04,200
arbitrary
1050
00:49:01,720 --> 00:49:06,040
dimensional um and then we also have
1051
00:49:04,200 --> 00:49:08,000
nodes that correspond to the result of
1052
00:49:06,040 --> 00:49:11,480
function applications so if we have X be
1053
00:49:08,000 --> 00:49:14,079
a vector uh we take the vector transpose
1054
00:49:11,480 --> 00:49:18,160
and so each Edge represents a function
1055
00:49:14,079 --> 00:49:20,559
argument and also a data
1056
00:49:18,160 --> 00:49:23,960
dependency and a node with an incoming
1057
00:49:20,559 --> 00:49:27,000
Edge is a function of that Edge's tail
1058
00:49:23,960 --> 00:49:29,040
node and importantly each node knows how
1059
00:49:27,000 --> 00:49:30,640
to compute its value and the value of
1060
00:49:29,040 --> 00:49:32,640
its derivative with respect to each
1061
00:49:30,640 --> 00:49:34,440
argument times the derivative of an
1062
00:49:32,640 --> 00:49:37,920
arbitrary
1063
00:49:34,440 --> 00:49:41,000
input and functions could be basically
1064
00:49:37,920 --> 00:49:45,400
arbitrary functions they can be unary
1065
00:49:41,000 --> 00:49:49,440
binary or n-ary often unary or binary
1066
00:49:45,400 --> 00:49:52,400
and computation graphs are directed
1067
00:49:49,440 --> 00:49:57,040
acyclic and um one important thing to
1068
00:49:52,400 --> 00:50:00,640
note is that you can um have multiple
1069
00:49:57,040 --> 00:50:02,559
ways of expressing the same function so
1070
00:50:00,640 --> 00:50:04,839
this is actually really important as you
1071
00:50:02,559 --> 00:50:06,920
start implementing things and the reason
1072
00:50:04,839 --> 00:50:09,359
why is the left graph and the right
1073
00:50:06,920 --> 00:50:12,960
graph both express the same thing the
1074
00:50:09,359 --> 00:50:18,640
left graph expresses X
1075
00:50:12,960 --> 00:50:22,559
transpose times A times x whereas
1076
00:50:18,640 --> 00:50:27,160
this one has x a and then it puts it
1077
00:50:22,559 --> 00:50:28,760
into a node that is X transpose a x
1078
00:50:27,160 --> 00:50:30,319
and so these Express exactly the same
1079
00:50:28,760 --> 00:50:32,319
thing but the graph on the left is
1080
00:50:30,319 --> 00:50:33,760
larger and the reason why this is
1081
00:50:32,319 --> 00:50:38,920
important is for practical
1082
00:50:33,760 --> 00:50:40,359
implementation of neural networks um you
1083
00:50:38,920 --> 00:50:43,200
the larger graphs are going to take more
1084
00:50:40,359 --> 00:50:46,799
memory and going to be slower usually
1085
00:50:43,200 --> 00:50:48,200
and so often um in a neural network we
1086
00:50:46,799 --> 00:50:49,559
look at like PyTorch which we're going
1087
00:50:48,200 --> 00:50:52,160
to look at in a
1088
00:50:49,559 --> 00:50:55,520
second
1089
00:50:52,160 --> 00:50:57,920
um you will have something you will be
1090
00:50:55,520 --> 00:50:57,920
able to
1091
00:50:58,680 --> 00:51:01,680
do
1092
00:51:03,079 --> 00:51:07,880
this or you'll be able to do
1093
00:51:18,760 --> 00:51:22,880
like
1094
00:51:20,359 --> 00:51:24,839
this so these are two different options
1095
00:51:22,880 --> 00:51:26,920
this one is using more operations and
1096
00:51:24,839 --> 00:51:29,559
this one is using less operations
1097
00:51:26,920 --> 00:51:31,000
and this is going to be faster because
1098
00:51:29,559 --> 00:51:33,119
basically the implementation within
1099
00:51:31,000 --> 00:51:34,799
PyTorch will have been optimized for you
1100
00:51:33,119 --> 00:51:36,799
it will only require one graph node
1101
00:51:34,799 --> 00:51:37,880
instead of multiple graph nodes and
1102
00:51:36,799 --> 00:51:39,799
that's even more important when you
1103
00:51:37,880 --> 00:51:41,040
start talking about like attention or
1104
00:51:39,799 --> 00:51:43,920
something like that which we're going to
1105
00:51:41,040 --> 00:51:46,079
be covering very soon um attention is a
1106
00:51:43,920 --> 00:51:47,359
very multi-head attention or something
1107
00:51:46,079 --> 00:51:49,839
like that is a very complicated
1108
00:51:47,359 --> 00:51:52,079
operation so you want to make sure that
1109
00:51:49,839 --> 00:51:54,359
you're using the operators that are
1110
00:51:52,079 --> 00:51:57,359
available to you to make this more
1111
00:51:54,359 --> 00:51:57,359
efficient
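For instance, something along these lines (a made-up illustration, not the code shown on screen): both options compute x-transpose A x, but the second leans on a single built-in matrix-multiply chain and builds fewer graph nodes backed by optimized kernels.

    import torch

    x = torch.randn(5, requires_grad=True)
    A = torch.randn(5, 5)

    # option 1: broadcasted elementwise multiplies plus a sum -> several graph nodes
    y1 = (x.unsqueeze(1) * A * x.unsqueeze(0)).sum()

    # option 2: one chained matrix multiplication -> fewer graph nodes
    y2 = x @ A @ x

    assert torch.allclose(y1, y2)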
1112
00:51:57,440 --> 00:52:00,760
um and then finally we could like add
1113
00:51:59,280 --> 00:52:01,920
all of these together at the end we
1114
00:52:00,760 --> 00:52:04,000
could add a
1115
00:52:01,920 --> 00:52:05,880
constant um and then we get this
1116
00:52:04,000 --> 00:52:09,520
expression here which gives us kind of a
1117
00:52:05,880 --> 00:52:09,520
polynomial
1118
00:52:09,680 --> 00:52:15,760
expression um also another thing to note
1119
00:52:13,480 --> 00:52:17,599
is within a neural network computation
1120
00:52:15,760 --> 00:52:21,920
graph variable names are just labelings
1121
00:52:17,599 --> 00:52:25,359
of nodes and so if you're using a
1122
00:52:21,920 --> 00:52:27,680
computation graph like this you might
1123
00:52:25,359 --> 00:52:29,240
only be declaring one variable here but
1124
00:52:27,680 --> 00:52:30,839
actually there's a whole bunch of stuff
1125
00:52:29,240 --> 00:52:32,359
going on behind the scenes and all of
1126
00:52:30,839 --> 00:52:34,240
that will take memory and computation
1127
00:52:32,359 --> 00:52:35,440
time and stuff like that so it's
1128
00:52:34,240 --> 00:52:37,119
important to be aware of that if you
1129
00:52:35,440 --> 00:52:40,400
want to make your implementations more
1130
00:52:37,119 --> 00:52:40,400
efficient than other
1131
00:52:41,119 --> 00:52:46,680
things so we have several algorithms
1132
00:52:44,480 --> 00:52:49,079
that go into implementing neural nets um
1133
00:52:46,680 --> 00:52:50,760
the first one is graph construction uh
1134
00:52:49,079 --> 00:52:53,480
the second one is forward
1135
00:52:50,760 --> 00:52:54,839
propagation uh and graph construction is
1136
00:52:53,480 --> 00:52:56,359
basically constructing the graph
1137
00:52:54,839 --> 00:52:58,680
declaring all the variables stuff
1138
00:52:56,359 --> 00:53:01,520
like this the second one is forward
1139
00:52:58,680 --> 00:53:03,880
propagation and um the way you do this
1140
00:53:01,520 --> 00:53:06,480
is in topological order uh you compute
1141
00:53:03,880 --> 00:53:08,280
the value of a node given its inputs and
1142
00:53:06,480 --> 00:53:11,000
so basically you start out with all of
1143
00:53:08,280 --> 00:53:12,680
the nodes that you give as input and
1144
00:53:11,000 --> 00:53:16,040
then you find any node in the graph
1145
00:53:12,680 --> 00:53:17,799
where all of its uh all of its tail
1146
00:53:16,040 --> 00:53:20,280
nodes or all of its children have been
1147
00:53:17,799 --> 00:53:22,119
calculated so in this case that would be
1148
00:53:20,280 --> 00:53:24,640
these two nodes and then in arbitrary
1149
00:53:22,119 --> 00:53:27,000
order or even in parallel you calculate
1150
00:53:24,640 --> 00:53:28,280
the value of all of the satisfied nodes
1151
00:53:27,000 --> 00:53:31,799
until you get to the end.
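A toy version of that forward-propagation loop, assuming a hypothetical node interface where each node has a list of input nodes and a compute function (just for illustration):

    def forward(graph):
        # graph: nodes in any order; node.inputs is a (possibly empty) list of nodes,
        # and node.compute(*input_values) returns the node's value
        values = {}
        remaining = set(graph)
        while remaining:  # assumes the graph is a DAG, so something is always ready
            ready = [n for n in remaining if all(i in values for i in n.inputs)]
            for n in ready:  # the "satisfied" nodes; these could even run in parallel
                values[n] = n.compute(*[values[i] for i in n.inputs])
                remaining.remove(n)
        return values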
1152
00:53:28,280 --> 00:53:34,280
and then uh the remaining algorithms
1153
00:53:31,799 --> 00:53:36,200
are back propagation and parameter
1154
00:53:34,280 --> 00:53:38,240
update I already talked about parameter
1155
00:53:36,200 --> 00:53:40,799
update uh using stochastic gradient
1156
00:53:38,240 --> 00:53:42,760
descent but for back propagation we then
1157
00:53:40,799 --> 00:53:45,400
process examples in Reverse topological
1158
00:53:42,760 --> 00:53:47,640
order uh calculate derivatives of
1159
00:53:45,400 --> 00:53:50,400
the final value with respect to the
1160
00:53:47,640 --> 00:53:52,319
parameters and so we start out with the very
1161
00:53:50,400 --> 00:53:54,200
final value usually this is your loss
1162
00:53:52,319 --> 00:53:56,200
function and then you just step
1163
00:53:54,200 --> 00:54:00,440
backwards in topological order to
1164
00:53:56,200 --> 00:54:04,160
calculate the derivatives of all these
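In PyTorch, all four of these algorithms show up in an ordinary training step; a minimal sketch, where model, loss_fn, optimizer, and the batch are placeholders:

    import torch

    def train_step(model, loss_fn, optimizer, x, y):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # graph construction + forward propagation
        loss.backward()              # back propagation in reverse topological order
        optimizer.step()             # parameter update (SGD, Adam, ...)
        return loss.item()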
1165
00:54:00,440 --> 00:54:05,920
so um this is pretty simple I think a
1166
00:54:04,160 --> 00:54:08,040
lot of people may have seen this already
1167
00:54:05,920 --> 00:54:09,920
but keeping this in mind as you're
1168
00:54:08,040 --> 00:54:12,480
implementing NLP models especially
1169
00:54:09,920 --> 00:54:14,240
models that are really memory intensive
1170
00:54:12,480 --> 00:54:16,559
or things like that is pretty important
1171
00:54:14,240 --> 00:54:19,040
because if you accidentally like for
1172
00:54:16,559 --> 00:54:21,799
example calculate the same thing twice
1173
00:54:19,040 --> 00:54:23,559
or accidentally create a graph that is
1174
00:54:21,799 --> 00:54:25,720
manipulating very large tensors and
1175
00:54:23,559 --> 00:54:27,319
creating very large intermediate States
1176
00:54:25,720 --> 00:54:29,720
that can kill your memory and and cause
1177
00:54:27,319 --> 00:54:31,839
big problems so it's an important thing
1178
00:54:29,720 --> 00:54:31,839
to
1179
00:54:34,359 --> 00:54:38,880
be aware of um cool any any questions about
1180
00:54:39,040 --> 00:54:44,440
this okay if not I will go on to the
1181
00:54:41,680 --> 00:54:45,680
next one so neural network Frameworks
1182
00:54:44,440 --> 00:54:48,920
there's several neural network
1183
00:54:45,680 --> 00:54:52,880
Frameworks but in NLP nowadays I really
1184
00:54:48,920 --> 00:54:55,079
only see two and mostly only see one um
1185
00:54:52,880 --> 00:54:57,960
so that one that almost everybody
1186
00:54:55,079 --> 00:55:01,240
uses is PyTorch um and I would
1187
00:54:57,960 --> 00:55:04,559
recommend using it unless you uh you
1188
00:55:01,240 --> 00:55:07,480
know if you're a fan of like rust or you
1189
00:55:04,559 --> 00:55:09,200
know esoteric uh not esoteric but like
1190
00:55:07,480 --> 00:55:11,960
unusual programming languages and you
1191
00:55:09,200 --> 00:55:14,720
like Beauty and things like this another
1192
00:55:11,960 --> 00:55:15,799
option might be JAX uh so I'll explain
1193
00:55:14,720 --> 00:55:18,440
a little bit about the difference
1194
00:55:15,799 --> 00:55:19,960
between them uh and you can pick
1195
00:55:18,440 --> 00:55:23,559
accordingly
1196
00:55:19,960 --> 00:55:25,359
um first uh both of these Frameworks uh
1197
00:55:23,559 --> 00:55:26,839
are developed by big companies and they
1198
00:55:25,359 --> 00:55:28,520
have a lot of engineering support behind
1199
00:55:26,839 --> 00:55:29,720
them that's kind of an important thing
1200
00:55:28,520 --> 00:55:31,280
to think about when you're deciding
1201
00:55:29,720 --> 00:55:32,599
which framework to use because you know
1202
00:55:31,280 --> 00:55:36,000
it'll be well
1203
00:55:32,599 --> 00:55:38,039
supported um pytorch is definitely most
1204
00:55:36,000 --> 00:55:40,400
widely used in NLP especially NLP
1205
00:55:38,039 --> 00:55:44,240
research um and it's used in some NLP
1206
00:55:40,400 --> 00:55:47,359
projects JAX is used in some NLP
1207
00:55:44,240 --> 00:55:49,960
projects um pytorch favors Dynamic
1208
00:55:47,359 --> 00:55:53,760
execution so what dynamic execution
1209
00:55:49,960 --> 00:55:55,880
means is um you basically create a
1210
00:55:53,760 --> 00:55:59,760
computation graph and and then execute
1211
00:55:55,880 --> 00:56:02,760
it uh every time you process an input uh
1212
00:55:59,760 --> 00:56:04,680
in contrast there's also the style where you define the
1213
00:56:02,760 --> 00:56:07,200
computation graph first and then execute
1214
00:56:04,680 --> 00:56:09,280
it over and over again so in other words
1215
00:56:07,200 --> 00:56:10,680
the graph construction step only happens
1216
00:56:09,280 --> 00:56:13,119
once kind of at the beginning of
1217
00:56:10,680 --> 00:56:16,799
computation and then you compile it
1218
00:56:13,119 --> 00:56:20,039
afterwards and actually PyTorch
1219
00:56:16,799 --> 00:56:23,359
supports kind of defining and compiling
1220
00:56:20,039 --> 00:56:27,480
and JAX supports more dynamic things but
1221
00:56:23,359 --> 00:56:30,160
the way they were designed is uh is kind
1222
00:56:27,480 --> 00:56:32,960
of favoring Dynamic execution or
1223
00:56:30,160 --> 00:56:37,079
favoring definition and compilation
1224
00:56:32,960 --> 00:56:39,200
and the difference between these two is
1225
00:56:37,079 --> 00:56:41,760
this one gives you more flexibility this
1226
00:56:39,200 --> 00:56:45,440
one gives you better optimization and better
1227
00:56:41,760 --> 00:56:49,760
speed if you want to do that.
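In recent PyTorch versions you can get a taste of both styles from the same function; a rough sketch (the plain call is the dynamic path, torch.compile is the define-and-compile path):

    import torch

    def f(x, W):
        return torch.tanh(x @ W)

    x = torch.randn(4, 8)
    W = torch.randn(8, 8)

    y_eager = f(x, W)              # dynamic: the graph is built as this call runs
    f_compiled = torch.compile(f)  # compiled the first time it runs on an input,
    y_compiled = f_compiled(x, W)  # then the optimized version is reused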
1228
00:56:45,440 --> 00:56:52,400
um another thing about JAX is um
1229
00:56:49,760 --> 00:56:55,200
it's kind of very close to numpy in a
1230
00:56:52,400 --> 00:56:57,440
way like it uses something
1231
00:56:55,200 --> 00:56:59,960
that's kind of close to numpy it's very
1232
00:56:57,440 --> 00:57:02,359
heavily based on tensors and so because
1233
00:56:59,960 --> 00:57:04,640
of this you can kind of easily do some
1234
00:57:02,359 --> 00:57:06,640
interesting things like okay I want to
1235
00:57:04,640 --> 00:57:11,319
take this tensor and I want to split it
1236
00:57:06,640 --> 00:57:14,000
over two gpus um and this is good if
1237
00:57:11,319 --> 00:57:17,119
you're training like a very large model
1238
00:57:14,000 --> 00:57:20,920
and you want to put kind
1239
00:57:17,119 --> 00:57:20,920
of this part of the
1240
00:57:22,119 --> 00:57:26,520
model uh you want to put this part of
1241
00:57:24,119 --> 00:57:30,079
the model on GPU 1 this on GPU 2 this on
1242
00:57:26,520 --> 00:57:31,599
GPU 3 this on GPU it's slightly simpler
1243
00:57:30,079 --> 00:57:34,400
conceptually to do in JAX but it's
1244
00:57:31,599 --> 00:57:37,160
also possible to do in
1245
00:57:34,400 --> 00:57:39,119
PyTorch and PyTorch by far has the most
1246
00:57:37,160 --> 00:57:41,640
vibrant ecosystem so like as I said
1247
00:57:39,119 --> 00:57:44,200
pytorch is a good default choice but you
1248
00:57:41,640 --> 00:57:47,480
can consider using JAX if you uh if you
1249
00:57:44,200 --> 00:57:47,480
like new
1250
00:57:48,079 --> 00:57:55,480
things cool um yeah actually I already
1251
00:57:51,599 --> 00:57:58,079
talked about that so in the interest of
1252
00:57:55,480 --> 00:58:02,119
time I may not go into these very deeply
1253
00:57:58,079 --> 00:58:05,799
but it's important to note that we have
1254
00:58:02,119 --> 00:58:05,799
examples of all of
1255
00:58:06,920 --> 00:58:12,520
the models that I talked about in the
1256
00:58:09,359 --> 00:58:16,720
class today these are created for
1257
00:58:12,520 --> 00:58:17,520
Simplicity not for Speed or efficiency
1258
00:58:16,720 --> 00:58:20,480
of
1259
00:58:17,520 --> 00:58:24,920
implementation um so these are kind of
1260
00:58:20,480 --> 00:58:27,760
PyTorch-based uh examples uh where
1261
00:58:24,920 --> 00:58:31,599
you can create the bag of words
1262
00:58:27,760 --> 00:58:36,440
Model A continuous bag of words
1263
00:58:31,599 --> 00:58:39,640
model um and
1264
00:58:36,440 --> 00:58:41,640
a deep continuous bag of words
1265
00:58:39,640 --> 00:58:44,359
model
1266
00:58:41,640 --> 00:58:46,039
and all of these I believe are
1267
00:58:44,359 --> 00:58:48,760
implemented in
1268
00:58:46,039 --> 00:58:51,960
model.py and the most important thing is
1269
00:58:48,760 --> 00:58:54,960
where you define the forward pass and
1270
00:58:51,960 --> 00:58:57,319
maybe I can just give a simple example
1271
00:58:54,960 --> 00:58:58,200
this but here this is where you do the
1272
00:58:57,319 --> 00:59:01,839
word
1273
00:58:58,200 --> 00:59:04,400
embedding this is where you sum up all
1274
00:59:01,839 --> 00:59:08,119
of the embeddings and add a
1275
00:59:04,400 --> 00:59:10,200
bias um and then this is uh where you
1276
00:59:08,119 --> 00:59:13,960
return the the
1277
00:59:10,200 --> 00:59:13,960
score and then oh
1278
00:59:14,799 --> 00:59:19,119
sorry the continuous bag of words model
1279
00:59:17,520 --> 00:59:22,160
sums up some
1280
00:59:19,119 --> 00:59:23,640
embeddings uh or gets the embeddings
1281
00:59:22,160 --> 00:59:25,799
sums up some
1282
00:59:23,640 --> 00:59:28,079
embeddings
1283
00:59:25,799 --> 00:59:30,599
uh gets the score here and then runs it
1284
00:59:28,079 --> 00:59:33,200
through a linear or changes the view
1285
00:59:30,599 --> 00:59:35,119
runs it through a linear layer and then
1286
00:59:33,200 --> 00:59:38,319
the Deep continuous bag of words model
1287
00:59:35,119 --> 00:59:41,160
also adds a few layers of uh like linear
1288
00:59:38,319 --> 00:59:43,119
transformations and tanhs so you should be
1289
00:59:41,160 --> 00:59:44,640
able to see that these correspond pretty
1290
00:59:43,119 --> 00:59:47,440
closely to the things that I had on the
1291
00:59:44,640 --> 00:59:49,280
slides so um hopefully that's a good
1292
00:59:47,440 --> 00:59:51,839
start if you're not very familiar with
1293
00:59:49,280 --> 00:59:51,839
implementing models.
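A rough, self-contained version of what those forward passes look like (a sketch in the same spirit, not the exact model.py code):

    import torch
    import torch.nn as nn

    class DeepCBOW(nn.Module):
        def __init__(self, vocab_size, emb_size, hid_size, num_layers, num_classes):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_size)
            sizes = [emb_size] + [hid_size] * num_layers
            self.layers = nn.ModuleList(
                [nn.Linear(sizes[i], sizes[i + 1]) for i in range(num_layers)])
            self.out = nn.Linear(sizes[-1], num_classes)

        def forward(self, word_ids):
            h = self.emb(word_ids).sum(dim=0)   # look up and sum the word embeddings
            for layer in self.layers:
                h = torch.tanh(layer(h))        # linear transformation + tanh
            return self.out(h)                  # one score per class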
1294
00:59:53,119 --> 00:59:58,440
oh and yes the recitation uh will
1295
00:59:56,599 --> 00:59:59,799
be about playing around with
1296
00:59:58,440 --> 01:00:01,200
SentencePiece and playing around with these so
1297
00:59:59,799 --> 01:00:02,839
if you have a look at them and have any
1298
01:00:01,200 --> 01:00:05,000
questions you're welcome to show up
1299
01:00:02,839 --> 01:00:09,880
where I walk
1300
01:00:05,000 --> 01:00:09,880
through cool um any any questions about
1301
01:00:12,839 --> 01:00:19,720
these okay so a few more final important
1302
01:00:16,720 --> 01:00:21,720
Concepts um another concept that you
1303
01:00:19,720 --> 01:00:25,440
should definitely be aware of is the
1304
01:00:21,720 --> 01:00:27,280
Adam optimizer uh so there's lots of uh
1305
01:00:25,440 --> 01:00:30,559
optimizers that you could be using but
1306
01:00:27,280 --> 01:00:32,200
almost all research in NLP uses some uh
1307
01:00:30,559 --> 01:00:38,440
variety of the Adam
1308
01:00:32,200 --> 01:00:40,839
optimizer and uh the way this works
1309
01:00:38,440 --> 01:00:42,559
is it
1310
01:00:40,839 --> 01:00:45,640
optimizes
1311
01:00:42,559 --> 01:00:48,480
the um it optimizes the model considering
1312
01:00:45,640 --> 01:00:49,359
the rolling average of the gradient and
1313
01:00:48,480 --> 01:00:53,160
uh
1314
01:00:49,359 --> 01:00:55,920
momentum and the way it works is here we
1315
01:00:53,160 --> 01:00:58,839
have a gradient here we have
1316
01:00:55,920 --> 01:01:04,000
momentum and what you can see is
1317
01:00:58,839 --> 01:01:06,680
happening here is we add a little bit of
1318
01:01:04,000 --> 01:01:09,200
the gradient in uh how much you add in
1319
01:01:06,680 --> 01:01:12,720
is with respect to the size of this beta
1320
01:01:09,200 --> 01:01:16,000
1 parameter and you add it into uh the
1321
01:01:12,720 --> 01:01:18,640
momentum term so this momentum term like
1322
01:01:16,000 --> 01:01:20,440
gradually increases and decreases so in
1323
01:01:18,640 --> 01:01:23,440
contrast to standard gradient descent
1324
01:01:20,440 --> 01:01:25,839
which could be
1325
01:01:23,440 --> 01:01:28,440
updating
1326
01:01:25,839 --> 01:01:31,440
uh each parameter kind of like very
1327
01:01:28,440 --> 01:01:33,359
differently on each time step this will
1328
01:01:31,440 --> 01:01:35,680
make the momentum kind of transition
1329
01:01:33,359 --> 01:01:37,240
more smoothly by taking the rolling
1330
01:01:35,680 --> 01:01:39,880
average of the
1331
01:01:37,240 --> 01:01:43,400
gradient and then the the second thing
1332
01:01:39,880 --> 01:01:47,640
is um by taking the momentum this is the
1333
01:01:43,400 --> 01:01:51,000
rolling average of the I guess gradient
1334
01:01:47,640 --> 01:01:54,440
uh variance sorry I this should be
1335
01:01:51,000 --> 01:01:58,079
variance and the reason why you need
1336
01:01:54,440 --> 01:02:01,319
to keep track of the variance is
1337
01:01:58,079 --> 01:02:03,319
some uh some parameters will have very
1338
01:02:01,319 --> 01:02:06,559
large variance in their gradients and
1339
01:02:03,319 --> 01:02:11,480
might fluctuate very uh strongly and
1340
01:02:06,559 --> 01:02:13,039
others might have a smaller uh
1341
01:02:11,480 --> 01:02:15,240
variance in their gradients and not
1342
01:02:13,039 --> 01:02:18,240
fluctuate very much but we want to make
1343
01:02:15,240 --> 01:02:20,200
sure that we update the ones we still
1344
01:02:18,240 --> 01:02:22,240
update the ones that have a very small
1345
01:02:20,200 --> 01:02:25,760
uh change of their variance and the
1346
01:02:22,240 --> 01:02:27,440
reason why is kind of let's say you have
1347
01:02:25,760 --> 01:02:30,440
a
1348
01:02:27,440 --> 01:02:30,440
multi-layer
1349
01:02:32,480 --> 01:02:38,720
network
1350
01:02:34,480 --> 01:02:41,240
um or actually sorry a better
1351
01:02:38,720 --> 01:02:44,319
um a better example is like let's say we
1352
01:02:41,240 --> 01:02:47,559
have a big word embedding Matrix and
1353
01:02:44,319 --> 01:02:53,359
over here we have like really frequent
1354
01:02:47,559 --> 01:02:56,279
words and then over here we have uh
1355
01:02:53,359 --> 01:02:59,319
gradi
1356
01:02:56,279 --> 01:03:00,880
no we have like less frequent words we
1357
01:02:59,319 --> 01:03:02,799
want to make sure that all of these get
1358
01:03:00,880 --> 01:03:06,160
updated appropriately all of these get
1359
01:03:02,799 --> 01:03:08,640
like enough updates and so over here
1360
01:03:06,160 --> 01:03:10,760
this one will have lots of updates and
1361
01:03:08,640 --> 01:03:13,680
so uh kind of
1362
01:03:10,760 --> 01:03:16,599
the amount that we
1363
01:03:13,680 --> 01:03:20,039
update or the the amount that we update
1364
01:03:16,599 --> 01:03:21,799
the uh this will be relatively large
1365
01:03:20,039 --> 01:03:23,119
whereas over here this will not have
1366
01:03:21,799 --> 01:03:24,880
very many updates we'll have lots of
1367
01:03:23,119 --> 01:03:26,480
zero updates also
1368
01:03:24,880 --> 01:03:29,160
and so the amount that we update this
1369
01:03:26,480 --> 01:03:32,520
will be relatively small and so this
1370
01:03:29,160 --> 01:03:36,119
kind of squared gradient here will uh
1371
01:03:32,520 --> 01:03:38,400
be smaller for the values over here and
1372
01:03:36,119 --> 01:03:41,359
what that allows us to do is it allows
1373
01:03:38,400 --> 01:03:44,200
us to maybe I can just go to the bottom
1374
01:03:41,359 --> 01:03:46,039
we end up uh dividing by the square root
1375
01:03:44,200 --> 01:03:47,599
of this and because we divide by the
1376
01:03:46,039 --> 01:03:51,000
square root of this if this is really
1377
01:03:47,599 --> 01:03:55,680
large like 50 and 70 and then this over
1378
01:03:51,000 --> 01:03:59,480
here is like 1 or 0.5
1379
01:03:55,680 --> 01:04:01,920
uh or something we will be upweighting the
1380
01:03:59,480 --> 01:04:03,920
ones that have like less Square
1381
01:04:01,920 --> 01:04:06,880
gradients so it will it allows you to
1382
01:04:03,920 --> 01:04:08,760
upweight the less common gradients more
1383
01:04:06,880 --> 01:04:10,440
frequently and then there's also some
1384
01:04:08,760 --> 01:04:13,400
terms for correcting bias early in
1385
01:04:10,440 --> 01:04:16,440
training because these momentum and uh
1386
01:04:13,400 --> 01:04:19,559
variance or momentum and squared gradient
1387
01:04:16,440 --> 01:04:23,119
terms are not going to be like well
1388
01:04:19,559 --> 01:04:24,839
calibrated yet so it prevents them from
1389
01:04:23,119 --> 01:04:28,880
going haywire at the very beginning of training.
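For reference, the standard Adam update being described (g_t is the gradient, m_t the rolling average of the gradient, v_t the rolling average of the squared gradient, with their bias-corrected versions):

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
    \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
    \theta_t = \theta_{t-1} - \eta \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)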
1390
01:04:24,839 --> 01:04:30,839
so this is uh the details of
1391
01:04:28,880 --> 01:04:33,640
this again are not like super super
1392
01:04:30,839 --> 01:04:37,359
important um another thing that I didn't
1393
01:04:33,640 --> 01:04:40,200
write on the slides is uh now in
1394
01:04:37,359 --> 01:04:43,920
Transformers it's also super common to
1395
01:04:40,200 --> 01:04:47,400
have an overall learning rate schedule so
1396
01:04:43,920 --> 01:04:50,520
even um even Adam has this uh eta
1397
01:04:47,400 --> 01:04:53,440
learning rate parameter here and what
1398
01:04:50,520 --> 01:04:55,240
we often do is we adjust this so we
1399
01:04:53,440 --> 01:04:57,839
start at low
1400
01:04:55,240 --> 01:04:59,640
we raise it up and then we have a Decay
1401
01:04:57,839 --> 01:05:03,039
uh at the end and exactly how much you
1402
01:04:59,640 --> 01:05:04,440
do this kind of depends on um you know
1403
01:05:03,039 --> 01:05:06,160
how big your model is how much data
1404
01:05:04,440 --> 01:05:09,160
you're training on eventually and the
1405
01:05:06,160 --> 01:05:12,440
reason why we do this is transformers
1406
01:05:09,160 --> 01:05:13,839
are unfortunately super sensitive to
1407
01:05:12,440 --> 01:05:15,359
having a high learning rate right at the
1408
01:05:13,839 --> 01:05:16,559
very beginning so if you update them
1409
01:05:15,359 --> 01:05:17,920
with a high learning rate right at the
1410
01:05:16,559 --> 01:05:22,920
very beginning they go haywire and you
1411
01:05:17,920 --> 01:05:24,400
get a really weird model um but you
1412
01:05:22,920 --> 01:05:26,760
want to raise it eventually so your
1413
01:05:24,400 --> 01:05:28,920
model is learning appropriately and then
1414
01:05:26,760 --> 01:05:30,400
in all stochastic gradient descent no
1415
01:05:28,920 --> 01:05:31,680
matter whether you're using Adam or
1416
01:05:30,400 --> 01:05:33,400
anything else it's a good idea to
1417
01:05:31,680 --> 01:05:36,200
gradually decrease the learning rate at
1418
01:05:33,400 --> 01:05:38,119
the end to prevent the model from
1419
01:05:36,200 --> 01:05:40,480
continuing to fluctuate and getting it
1420
01:05:38,119 --> 01:05:42,760
to a stable point that gives you good
1421
01:05:40,480 --> 01:05:45,559
accuracy over a large part of data so
1422
01:05:42,760 --> 01:05:47,480
this is often included like if you look
1423
01:05:45,559 --> 01:05:51,000
at any standard Transformer training
1424
01:05:47,480 --> 01:05:53,079
recipe it will have this so that's
1425
01:05:51,000 --> 01:05:54,799
kind of the the go-to
1426
01:05:53,079 --> 01:05:58,960
optimizer
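One common way to get that warm-up-then-decay shape in PyTorch is a LambdaLR schedule; a sketch with made-up step counts:

    import torch

    model = torch.nn.Linear(10, 2)  # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    warmup_steps, total_steps = 1000, 100000  # made-up numbers, tune for your setup

    def lr_factor(step):
        if step < warmup_steps:  # linear warm-up from near zero
            return (step + 1) / warmup_steps
        # then decay linearly toward zero by the end of training
        return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_factor)
    # in the training loop, call scheduler.step() after each optimizer.step()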
1427
01:05:54,799 --> 01:06:01,039
um are there any questions or
1428
01:05:58,960 --> 01:06:02,599
discussion there's also tricky things
1429
01:06:01,039 --> 01:06:04,000
like cyclic learning rates where you
1430
01:06:02,599 --> 01:06:06,599
decrease the learning rate increase it
1431
01:06:04,000 --> 01:06:08,559
and stuff like that but I won't go into
1432
01:06:06,599 --> 01:06:11,000
that and don't actually use it that
1433
01:06:08,559 --> 01:06:12,760
much second thing is visualization of
1434
01:06:11,000 --> 01:06:15,400
embeddings so normally when we have word
1435
01:06:12,760 --> 01:06:19,760
embeddings usually they're kind of large
1436
01:06:15,400 --> 01:06:21,559
um and they can be like 512 or 1024
1437
01:06:19,760 --> 01:06:25,079
dimensions
1438
01:06:21,559 --> 01:06:28,720
and so one thing that we can do is we
1439
01:06:25,079 --> 01:06:31,079
can down weight them or sorry down uh
1440
01:06:28,720 --> 01:06:34,400
like reduce the dimensions or perform
1441
01:06:31,079 --> 01:06:35,880
dimensionality reduction and put them in
1442
01:06:34,400 --> 01:06:37,680
like two or three dimensions which are
1443
01:06:35,880 --> 01:06:40,200
easy for humans to
1444
01:06:37,680 --> 01:06:42,000
visualize this is an example using
1445
01:06:40,200 --> 01:06:44,839
principal component analysis which is a
1446
01:06:42,000 --> 01:06:48,279
linear Dimension reduction technique and
1447
01:06:44,839 --> 01:06:50,680
this is uh an example from 10 years ago
1448
01:06:48,279 --> 01:06:52,359
now uh one of the first major word
1449
01:06:50,680 --> 01:06:55,240
embedding papers where they demonstrated
1450
01:06:52,359 --> 01:06:57,720
that if you do this sort of linear
1451
01:06:55,240 --> 01:06:59,440
Dimension reduction uh you get actually
1452
01:06:57,720 --> 01:07:01,279
some interesting things where you can
1453
01:06:59,440 --> 01:07:03,240
draw a vector that's almost the same
1454
01:07:01,279 --> 01:07:06,400
direction between like countries and
1455
01:07:03,240 --> 01:07:09,319
their uh countries and their capitals
1456
01:07:06,400 --> 01:07:13,720
for example so this is a good thing to do.
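If you want to try this yourself, a quick sketch with scikit-learn (the embedding matrix here is a random stand-in for real word vectors):

    import numpy as np
    from sklearn.decomposition import PCA

    embeddings = np.random.randn(1000, 512)  # stand-in for real word embeddings
    coords_2d = PCA(n_components=2).fit_transform(embeddings)
    # coords_2d[i] is the 2-D point to plot for word i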
1457
01:07:09,319 --> 01:07:16,559
but actually PCA uh doesn't give
1458
01:07:13,720 --> 01:07:20,760
you in some cases PCA doesn't give you
1459
01:07:16,559 --> 01:07:22,920
super great uh visualizations sorry yeah
1460
01:07:20,760 --> 01:07:25,920
well for like if it's
1461
01:07:22,920 --> 01:07:25,920
like
1462
01:07:29,880 --> 01:07:35,039
um for things like this I think you
1463
01:07:33,119 --> 01:07:37,359
probably would still see vectors in the
1464
01:07:35,039 --> 01:07:38,760
same direction but I don't think it like
1465
01:07:37,359 --> 01:07:40,920
there's a reason why I'm introducing
1466
01:07:38,760 --> 01:07:44,279
nonlinear projections next because the
1467
01:07:40,920 --> 01:07:46,799
more standard way to do this is uh
1468
01:07:44,279 --> 01:07:50,640
nonlinear projections and in particular a
1469
01:07:46,799 --> 01:07:54,880
method called t-SNE and the way um they
1470
01:07:50,640 --> 01:07:56,880
do this is they try to group
1471
01:07:54,880 --> 01:07:59,000
things that are close together in high
1472
01:07:56,880 --> 01:08:01,240
dimensional space so that they're also
1473
01:07:59,000 --> 01:08:04,440
close together in low dimensional space
1474
01:08:01,240 --> 01:08:08,520
but they remove the Restriction that
1475
01:08:04,440 --> 01:08:10,799
this is uh that this is linear so this
1476
01:08:08,520 --> 01:08:15,480
is an example of just grouping together
1477
01:08:10,799 --> 01:08:18,040
some digits uh from the MNIST data
1478
01:08:15,480 --> 01:08:20,279
set or sorry reducing the dimension of
1479
01:08:18,040 --> 01:08:23,640
digits from the MNIST data
1480
01:08:20,279 --> 01:08:25,640
set according to PCA and you can see it
1481
01:08:23,640 --> 01:08:28,000
gives these kind of blobs that overlap
1482
01:08:25,640 --> 01:08:29,799
with each other and stuff like this but
1483
01:08:28,000 --> 01:08:31,679
if you do it with t-SNE this is
1484
01:08:29,799 --> 01:08:34,799
completely unsupervised actually it's
1485
01:08:31,679 --> 01:08:37,080
not training any model for labeling the
1486
01:08:34,799 --> 01:08:39,239
labels are just used to draw the colors
1487
01:08:37,080 --> 01:08:42,520
and you can see that it gets pretty
1488
01:08:39,239 --> 01:08:44,520
coherent um clusters that correspond to
1489
01:08:42,520 --> 01:08:48,120
like what the actual digits
1490
01:08:44,520 --> 01:08:50,120
are um however uh one problem with
1491
01:08:48,120 --> 01:08:53,159
t-SNE I still think it's better than
1492
01:08:50,120 --> 01:08:55,000
PCA for a large number of uh
1493
01:08:53,159 --> 01:08:59,199
applications
1494
01:08:55,000 --> 01:09:01,040
but settings of t-SNE matter and t-SNE has
1495
01:08:59,199 --> 01:09:02,920
a few settings kind of the most
1496
01:09:01,040 --> 01:09:04,120
important ones are the overall
1497
01:09:02,920 --> 01:09:06,560
perplexity
1498
01:09:04,120 --> 01:09:09,040
hyperparameter and uh the number of
1499
01:09:06,560 --> 01:09:12,319
steps that you perform.
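For example, with scikit-learn (again a random stand-in matrix; the perplexity and the number of optimization steps are the settings worth sweeping):

    import numpy as np
    from sklearn.manifold import TSNE

    embeddings = np.random.randn(1000, 512)  # stand-in for real word embeddings
    coords_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
    # rerun with different perplexity values and step counts and compare the plots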
1500
01:09:09,040 --> 01:09:14,920
and there's a nice example uh of a paper or kind of
1501
01:09:12,319 --> 01:09:16,359
like online post uh that demonstrates
1502
01:09:14,920 --> 01:09:18,560
how if you change these parameters you
1503
01:09:16,359 --> 01:09:22,279
can get very different things so if this
1504
01:09:18,560 --> 01:09:24,080
is the original data you run t-SNE and it
1505
01:09:22,279 --> 01:09:26,640
gives you very different things based on
1506
01:09:24,080 --> 01:09:29,279
the hyper parameters that you change um
1507
01:09:26,640 --> 01:09:32,880
and here's another example uh you have
1508
01:09:29,279 --> 01:09:36,960
two linear uh things like this and so
1509
01:09:32,880 --> 01:09:40,839
PCA no matter how you ran PCA you would
1510
01:09:36,960 --> 01:09:44,080
still get a linear output from this so
1511
01:09:40,839 --> 01:09:45,960
normally uh you know it might change the
1512
01:09:44,080 --> 01:09:49,239
order it might squash it a little bit or
1513
01:09:45,960 --> 01:09:51,239
something like this but um if you run
1514
01:09:49,239 --> 01:09:53,400
t-SNE it gives you crazy things it even
1515
01:09:51,239 --> 01:09:56,040
gives you like DNA and other stuff like
1516
01:09:53,400 --> 01:09:58,040
that so so um you do need to be a little
1517
01:09:56,040 --> 01:10:00,600
bit careful that uh this is not
1518
01:09:58,040 --> 01:10:02,320
necessarily going to tell you nice
1519
01:10:00,600 --> 01:10:04,400
linear correlations like this so like
1520
01:10:02,320 --> 01:10:06,159
let's say this correlation existed if
1521
01:10:04,400 --> 01:10:09,199
you use t-SNE it might not necessarily
1522
01:10:06,159 --> 01:10:09,199
come out in
1523
01:10:09,320 --> 01:10:14,880
t-SNE
1524
01:10:11,800 --> 01:10:16,920
cool yep uh that that's my final thing
1525
01:10:14,880 --> 01:10:18,520
actually I said sequence models are
1526
01:10:16,920 --> 01:10:19,679
in the next class but it's in the class
1527
01:10:18,520 --> 01:10:21,440
after this I'm going to be talking about
1528
01:10:19,679 --> 01:10:24,199
language
1529
01:10:21,440 --> 01:10:27,159
modeling uh cool any any questions
1530
01:10:24,199 --> 01:10:27,159
or