1
00:00:00,040 --> 00:00:06,600
started in a moment uh since it's now uh
2
00:00:03,959 --> 00:00:08,839
12:30 are there any questions before we
3
00:00:06,600 --> 00:00:08,839
get
4
00:00:11,840 --> 00:00:17,240
started okay I don't see any
5
00:00:14,679 --> 00:00:18,640
so I guess we can uh jump right in this
6
00:00:17,240 --> 00:00:22,080
time I'll be talking about sequence
7
00:00:18,640 --> 00:00:24,560
modeling in NLP first I'm going to be
8
00:00:22,080 --> 00:00:26,359
talking about uh why we do sequence
9
00:00:24,560 --> 00:00:29,160
modeling what varieties of sequence
10
00:00:26,359 --> 00:00:31,199
modeling exist and then after that I'm
11
00:00:29,160 --> 00:00:34,120
going to talk about kind of three basic
12
00:00:31,199 --> 00:00:36,320
techniques for sequence modeling namely
13
00:00:34,120 --> 00:00:38,879
recurrent neural networks convolutional
14
00:00:36,320 --> 00:00:38,879
networks and
15
00:00:39,360 --> 00:00:44,079
attention so when we talk about sequence
16
00:00:41,920 --> 00:00:46,680
modeling in NLP I've kind of already
17
00:00:44,079 --> 00:00:50,039
made the motivation for doing this but
18
00:00:46,680 --> 00:00:51,920
basically NLP is full of sequential data
19
00:00:50,039 --> 00:00:56,120
and this can be everything from words
20
00:00:51,920 --> 00:00:59,399
and sentences or tokens and sentences to
21
00:00:56,120 --> 00:01:01,920
uh characters and words to sentences in
22
00:00:59,399 --> 00:01:04,640
a discourse or a paragraph or a
23
00:01:01,920 --> 00:01:06,640
document um it can also be multiple
24
00:01:04,640 --> 00:01:08,840
documents in time multiple social media
25
00:01:06,640 --> 00:01:12,320
posts whatever else you want there's
26
00:01:08,840 --> 00:01:15,159
just you know sequences all over
27
00:01:12,320 --> 00:01:16,640
NLP and I mentioned this uh last time
28
00:01:15,159 --> 00:01:19,240
also but there's also long-distance
29
00:01:16,640 --> 00:01:20,840
dependencies in language so uh just to
30
00:01:19,240 --> 00:01:23,720
give an example there's agreement in
31
00:01:20,840 --> 00:01:25,799
number uh gender etc so in order to
32
00:01:23,720 --> 00:01:28,439
create a fluent language model you'll
33
00:01:25,799 --> 00:01:30,320
have to handle this agreement so if
34
00:01:28,439 --> 00:01:32,920
you say he does not have very much
35
00:01:30,320 --> 00:01:35,280
confidence in uh it would have to be
36
00:01:32,920 --> 00:01:36,680
himself but if you say she does not have
37
00:01:35,280 --> 00:01:39,360
very much confidence in it would have to
38
00:01:36,680 --> 00:01:41,360
be herself and this gender
39
00:01:39,360 --> 00:01:44,159
agreement is not super frequent in
40
00:01:41,360 --> 00:01:47,600
English but it's very frequent in other
41
00:01:44,159 --> 00:01:50,119
languages like French or uh you know
42
00:01:47,600 --> 00:01:51,759
most languages in the world in some uh
43
00:01:50,119 --> 00:01:53,799
way or
44
00:01:51,759 --> 00:01:55,320
another then separately from that you
45
00:01:53,799 --> 00:01:58,520
also have things like selectional
46
00:01:55,320 --> 00:02:00,119
preferences um like the reign has lasted
47
00:01:58,520 --> 00:02:01,799
as long as the life of the queen and the
48
00:02:00,119 --> 00:02:04,439
rain has lasted as long as the life of
49
00:02:01,799 --> 00:02:07,360
the clouds uh in American English the
50
00:02:04,439 --> 00:02:09,119
only way you could know uh which word
51
00:02:07,360 --> 00:02:13,520
came beforehand if you were doing speech
52
00:02:09,119 --> 00:02:17,400
recognition is if you uh like had that
53
00:02:13,520 --> 00:02:20,319
kind of semantic uh idea of uh that
54
00:02:17,400 --> 00:02:22,040
these agree with each other in some way
55
00:02:20,319 --> 00:02:23,920
and there's also factual knowledge
56
00:02:22,040 --> 00:02:27,680
there's all kinds of other things uh
57
00:02:23,920 --> 00:02:27,680
that you need to carry over long
58
00:02:28,319 --> 00:02:33,800
contexts um these can be
59
00:02:30,840 --> 00:02:36,360
complicated so this is a nice example
60
00:02:33,800 --> 00:02:39,400
so if we try to figure out what it
61
00:02:36,360 --> 00:02:41,239
refers to here uh the trophy would not
62
00:02:39,400 --> 00:02:45,680
fit in the brown suitcase because it was
63
00:02:41,239 --> 00:02:45,680
too big what is it
64
00:02:46,680 --> 00:02:51,360
here the trophy yeah and then what about
65
00:02:49,879 --> 00:02:53,120
uh the trophy would not fit in the brown
66
00:02:51,360 --> 00:02:57,080
suitcase because it was too
67
00:02:53,120 --> 00:02:58,680
small suitcase right um does anyone
68
00:02:57,080 --> 00:03:01,760
know what the name of something like
69
00:02:58,680 --> 00:03:01,760
this is
70
00:03:03,599 --> 00:03:07,840
has anyone heard of this challenge uh
71
00:03:09,280 --> 00:03:14,840
before no one okay um this is
72
00:03:12,239 --> 00:03:17,200
called the Winograd Schema Challenge or
73
00:03:14,840 --> 00:03:22,760
these are called Winograd schemas and
74
00:03:17,200 --> 00:03:26,319
basically Winograd schemas are a type
75
00:03:22,760 --> 00:03:29,280
of kind of linguistic
76
00:03:26,319 --> 00:03:30,439
challenge where you create two paired uh
77
00:03:29,280 --> 00:03:33,799
examples
78
00:03:30,439 --> 00:03:37,360
that you vary in very minimal ways where
79
00:03:33,799 --> 00:03:40,599
the answer differs between the two um
80
00:03:37,360 --> 00:03:42,000
and so uh there's lots of other examples
81
00:03:40,599 --> 00:03:44,080
about how you can create these things
82
00:03:42,000 --> 00:03:45,720
and they're good for testing uh whether
83
00:03:44,080 --> 00:03:48,239
language models are able to do things
84
00:03:45,720 --> 00:03:50,920
because they're able to uh kind of
85
00:03:48,239 --> 00:03:54,239
control for the fact that you know like
86
00:03:50,920 --> 00:04:01,079
the answer might be
87
00:03:54,239 --> 00:04:03,000
um like
88
00:04:01,079 --> 00:04:04,560
more frequent or less frequent and so
89
00:04:03,000 --> 00:04:07,720
the language model could just pick that
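The candidate-comparison idea just described can be sketched as follows; `toy_lm_score` is a hand-built stand-in for a real language model's log-probability, and all names here are illustrative, not from any actual library:

```python
# A minimal sketch of resolving a Winograd-style pronoun by comparing
# (stand-in) language-model scores of the two candidate substitutions.

def resolve_pronoun(sentence, pronoun, candidates, lm_score):
    """Substitute each candidate for the pronoun (token-wise, so the 'it'
    inside 'fit' or 'suitcase' is untouched) and keep the highest-scoring one."""
    tokens = sentence.split()
    substitute = lambda c: " ".join(c if t == pronoun else t for t in tokens)
    return max(candidates, key=lambda c: lm_score(substitute(c)))

def toy_lm_score(text):
    # Hand-assigned scores standing in for a real language model.
    plausible = {
        "the trophy would not fit in the brown suitcase because the trophy was too big",
        "the trophy would not fit in the brown suitcase because the suitcase was too small",
    }
    return 0.0 if text in plausible else -10.0

sent_big = "the trophy would not fit in the brown suitcase because it was too big"
print(resolve_pronoun(sent_big, "it", ["the trophy", "the suitcase"], toy_lm_score))
# -> the trophy
```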
90
00:04:04,560 --> 00:04:11,040
so another example is we uh we came up
91
00:04:07,720 --> 00:04:12,239
with a benchmark of figurative language
92
00:04:11,040 --> 00:04:14,239
where we tried to figure out whether
93
00:04:12,239 --> 00:04:17,160
language models would be able
94
00:04:14,239 --> 00:04:19,720
to interpret figur figurative language
95
00:04:17,160 --> 00:04:22,720
and I actually have the multilingual uh
96
00:04:19,720 --> 00:04:24,160
version on the suggested projects uh on
97
00:04:22,720 --> 00:04:26,240
the Piazza oh yeah that's one
98
00:04:24,160 --> 00:04:28,360
announcement I posted a big list of
99
00:04:26,240 --> 00:04:30,080
suggested projects on Piazza I think a lot
100
00:04:28,360 --> 00:04:31,639
of people saw it you don't have to
101
00:04:30,080 --> 00:04:33,160
follow these but if you're interested in
102
00:04:31,639 --> 00:04:34,440
them feel free to talk to the contacts
103
00:04:33,160 --> 00:04:38,880
and we can give you more information
104
00:04:34,440 --> 00:04:41,039
about them um but anyway uh so in this
105
00:04:38,880 --> 00:04:43,080
data set what we did is we came up with
106
00:04:41,039 --> 00:04:46,039
some figurative language like this movie
107
00:04:43,080 --> 00:04:47,880
had the depth of a wading pool and
108
00:04:46,039 --> 00:04:50,919
this movie had the depth of a diving
109
00:04:47,880 --> 00:04:54,120
pool and so then after that you would
110
00:04:50,919 --> 00:04:56,199
have two choices this
111
00:04:54,120 --> 00:04:58,400
movie was very deep and interesting this
112
00:04:56,199 --> 00:05:01,000
movie was not very deep and interesting
113
00:04:58,400 --> 00:05:02,800
and so you have these like two
114
00:05:01,000 --> 00:05:04,759
pairs of questions and answers and you
115
00:05:02,800 --> 00:05:06,240
need to decide between them and
116
00:05:04,759 --> 00:05:07,759
depending on what the input is the
117
00:05:06,240 --> 00:05:10,639
output will change and so that's a good
118
00:05:07,759 --> 00:05:11,919
way to control for um and test whether
119
00:05:10,639 --> 00:05:13,600
language models really understand
120
00:05:11,919 --> 00:05:15,080
something so if you're interested in
121
00:05:13,600 --> 00:05:17,199
benchmarking or other things like that
122
00:05:15,080 --> 00:05:19,160
it's a good paradigm to think about
123
00:05:17,199 --> 00:05:22,759
anyway that's a little bit of an aside
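The paired minimal-contrast setup described above can be sketched as pair-level scoring, where a model only gets credit when it answers both halves of a pair correctly; the function name and data here are illustrative assumptions, not the actual benchmark:

```python
# Sketch of pair-level scoring for minimal-contrast benchmarks (Winograd
# schemas, the figurative-language dataset mentioned above): credit only
# when BOTH halves of a pair are right, which controls for a model that
# just picks the more frequent answer.

def pair_accuracy(pairs, predict):
    """pairs: list of ((input_a, gold_a), (input_b, gold_b));
    predict: function mapping an input text to an answer."""
    correct = sum(
        1 for (xa, ya), (xb, yb) in pairs
        if predict(xa) == ya and predict(xb) == yb
    )
    return correct / len(pairs)

pairs = [
    (("this movie had the depth of a wading pool", "not deep"),
     ("this movie had the depth of a diving pool", "deep")),
]

# A frequency-biased baseline that always gives the same answer is
# exposed by the paired design: it can never get both halves right.
always_not_deep = lambda text: "not deep"
print(pair_accuracy(pairs, always_not_deep))  # -> 0.0
```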
124
00:05:19,160 --> 00:05:25,960
um so now I'd like to go on to types of
125
00:05:22,759 --> 00:05:28,360
sequential prediction problems
126
00:05:25,960 --> 00:05:30,880
and types of prediction problems in
127
00:05:28,360 --> 00:05:32,560
general uh binary and multiclass we
128
00:05:30,880 --> 00:05:35,240
already talked about that's when we're
129
00:05:32,560 --> 00:05:37,199
doing for example uh classification
130
00:05:35,240 --> 00:05:38,960
between two classes or classification
131
00:05:37,199 --> 00:05:41,280
between multiple
132
00:05:38,960 --> 00:05:42,880
classes but there's also another variety
133
00:05:41,280 --> 00:05:45,120
of prediction called structured
134
00:05:42,880 --> 00:05:47,120
prediction and structured prediction is
135
00:05:45,120 --> 00:05:49,639
when you have a very large number of
136
00:05:47,120 --> 00:05:53,680
labels it's not you know a small fixed set
137
00:05:49,639 --> 00:05:56,560
of labels and uh so that would be
138
00:05:53,680 --> 00:05:58,160
something like uh if you take in an
139
00:05:56,560 --> 00:06:00,680
input and you want to predict all of the
140
00:05:58,160 --> 00:06:04,000
parts of speech of all the words in the
141
00:06:00,680 --> 00:06:06,840
input and if you had like 50 parts of
142
00:06:04,000 --> 00:06:09,039
speech the number of labels that you
143
00:06:06,840 --> 00:06:11,360
would have for each sentence
144
00:06:09,039 --> 00:06:15,280
is any
145
00:06:11,360 --> 00:06:17,919
ideas 50 50 parts of speech and like
146
00:06:15,280 --> 00:06:17,919
let's say for
147
00:06:19,880 --> 00:06:31,400
four words 60 um it's every combination
148
00:06:26,039 --> 00:06:31,400
of parts of speech for every word so
149
00:06:32,039 --> 00:06:38,440
uh close but maybe the opposite it's uh
150
00:06:35,520 --> 00:06:40,720
50 to the fourth because you have 50
151
00:06:38,440 --> 00:06:42,400
choices here 50 choices here so it's a
152
00:06:40,720 --> 00:06:45,599
cross product of all of the
153
00:06:42,400 --> 00:06:48,560
choices um and so that becomes very
154
00:06:45,599 --> 00:06:50,280
quickly untenable um let's say you're
155
00:06:48,560 --> 00:06:53,120
talking about translation from English
156
00:06:50,280 --> 00:06:54,800
to Japanese uh now you don't really even
157
00:06:53,120 --> 00:06:57,240
have a finite number of choices because
158
00:06:54,800 --> 00:06:58,440
the length could be even longer uh the
159
00:06:57,240 --> 00:07:01,400
length of the output could be even
160
00:06:58,440 --> 00:07:01,400
longer than the
161
00:07:04,840 --> 00:07:08,879
[inaudible]
162
00:07:06,520 --> 00:07:11,319
rules
163
00:07:08,879 --> 00:07:14,879
together makes it
164
00:07:11,319 --> 00:07:17,400
fewer yeah so really good question um so
165
00:07:14,879 --> 00:07:19,319
the question or the the question or
166
00:07:17,400 --> 00:07:21,160
comment was if there are certain rules
167
00:07:19,319 --> 00:07:22,759
about one thing not ever being able to
168
00:07:21,160 --> 00:07:25,080
follow the other you can actually reduce
169
00:07:22,759 --> 00:07:28,319
the number um you could do that with a
170
00:07:25,080 --> 00:07:30,280
hard constraint and make things uh kind
171
00:07:28,319 --> 00:07:32,520
of
172
00:07:30,280 --> 00:07:34,240
and like actually cut off things that
173
00:07:32,520 --> 00:07:36,280
you know have zero probability but in
174
00:07:34,240 --> 00:07:38,680
reality what people do is they just trim
175
00:07:36,280 --> 00:07:41,319
hypotheses that have low probability and
176
00:07:38,680 --> 00:07:43,319
so that has kind of the same effect like
177
00:07:41,319 --> 00:07:47,599
you almost never see a determiner after
178
00:07:43,319 --> 00:07:49,720
a determiner in English um and so yeah
179
00:07:47,599 --> 00:07:52,400
we're going to talk about uh algorithms
180
00:07:49,720 --> 00:07:53,960
to do this in the Generation section so
181
00:07:52,400 --> 00:07:57,240
we could talk more about that
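The label-space arithmetic from the part-of-speech example above can be checked directly; this is just the toy calculation from the lecture, not real data:

```python
# With 50 possible parts of speech and a 4-word sentence, each word
# independently takes one of 50 tags, so the number of complete tag
# sequences is the cross product of the per-word choices: 50 ** 4.

num_tags = 50
sentence_length = 4
num_label_sequences = num_tags ** sentence_length
print(num_label_sequences)  # -> 6250000
```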
182
00:07:53,960 --> 00:08:00,080
um but anyway the basic idea behind
183
00:07:57,240 --> 00:08:02,400
structured prediction is that you don't
184
00:08:00,080 --> 00:08:04,280
like language modeling like I said last
185
00:08:02,400 --> 00:08:06,240
time you don't predict all of the
186
00:08:04,280 --> 00:08:08,319
whole sequence at once you usually
187
00:08:06,240 --> 00:08:10,440
predict each element at once and then
188
00:08:08,319 --> 00:08:12,080
somehow calculate the conditional
189
00:08:10,440 --> 00:08:13,720
probability of the next element given
190
00:08:12,080 --> 00:08:15,879
the current element or other things
191
00:08:13,720 --> 00:08:18,840
like that so that's how we solve
192
00:08:15,879 --> 00:08:18,840
structured prediction
193
00:08:18,919 --> 00:08:22,960
problems another thing is unconditioned
194
00:08:21,319 --> 00:08:25,120
versus conditioned predictions so
195
00:08:22,960 --> 00:08:28,520
unconditioned prediction we don't do this
196
00:08:25,120 --> 00:08:31,240
very often um but basically uh we
197
00:08:28,520 --> 00:08:34,039
predict the probability of a single
198
00:08:31,240 --> 00:08:35,880
variable or generate a single variable
199
00:08:34,039 --> 00:08:37,599
and conditioned prediction is
200
00:08:35,880 --> 00:08:41,000
predicting the probability of an output
201
00:08:37,599 --> 00:08:45,120
variable given an input like
202
00:08:41,000 --> 00:08:48,040
this so um for unconditioned prediction
203
00:08:45,120 --> 00:08:50,000
um the way we can do this is left to
204
00:08:48,040 --> 00:08:51,399
right autoregressive models and these are
205
00:08:50,000 --> 00:08:52,600
the ones that I talked about last time
206
00:08:51,399 --> 00:08:56,360
when I was talking about how we build
207
00:08:52,600 --> 00:08:59,000
language models um and these could be uh
208
00:08:56,360 --> 00:09:01,880
specifically this kind though is a kind
209
00:08:59,000 --> 00:09:03,480
that doesn't have any context limit so
210
00:09:01,880 --> 00:09:05,680
it's looking all the way back to the
211
00:09:03,480 --> 00:09:07,519
beginning of the sequence and this
212
00:09:05,680 --> 00:09:09,440
could be like an infinite length n-gram
213
00:09:07,519 --> 00:09:10,440
model but practically we can't use those
214
00:09:09,440 --> 00:09:12,519
because they would have too many
215
00:09:10,440 --> 00:09:15,360
parameters they would be too sparse for
216
00:09:12,519 --> 00:09:17,079
us to estimate the parameters so um what
217
00:09:15,360 --> 00:09:19,120
we do instead with n-gram models which I
218
00:09:17,079 --> 00:09:21,240
talked about last time is we limit the
219
00:09:19,120 --> 00:09:23,600
context length so we have something
220
00:09:21,240 --> 00:09:25,760
like a trigram model where we don't
221
00:09:23,600 --> 00:09:28,680
actually reference all of the previous
222
00:09:25,760 --> 00:09:30,680
outputs uh when we make a prediction oh
223
00:09:28,680 --> 00:09:34,440
and sorry actually I should explain
224
00:09:30,680 --> 00:09:37,640
how do we read this
225
00:09:34,440 --> 00:09:40,519
graph so this would be we're predicting
226
00:09:37,640 --> 00:09:42,680
number one here we're predicting word
227
00:09:40,519 --> 00:09:45,240
number one and we're
228
00:09:42,680 --> 00:09:47,640
not conditioning on anything after it
229
00:09:45,240 --> 00:09:49,040
we're predicting word number two we're
230
00:09:47,640 --> 00:09:50,480
conditioning on word number one we're
231
00:09:49,040 --> 00:09:53,040
predicting word number three we're
232
00:09:50,480 --> 00:09:55,640
conditioning on word number two so here
233
00:09:53,040 --> 00:09:58,320
we would be uh predicting word number
234
00:09:55,640 --> 00:09:59,920
four conditioning on words number three
235
00:09:58,320 --> 00:10:02,200
and two but not number one so that would
236
00:09:59,920 --> 00:10:07,600
be like a trigram
237
00:10:02,200 --> 00:10:07,600
model um so
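The trigram factorization just described (word 4 conditions on words 3 and 2 but not word 1) can be sketched with counts; the class name and toy training sentences are illustrative assumptions:

```python
# A minimal count-based trigram model: each word is predicted from only
# the two preceding words.

from collections import defaultdict

class TrigramModel:
    def __init__(self):
        # counts[(w_{i-2}, w_{i-1})][w_i] = number of times seen
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, tokens):
        padded = ["<s>", "<s>"] + tokens
        for i in range(2, len(padded)):
            context = (padded[i - 2], padded[i - 1])
            self.counts[context][padded[i]] += 1

    def prob(self, context, word):
        """Maximum-likelihood P(word | two-word context)."""
        total = sum(self.counts[context].values())
        return self.counts[context][word] / total if total else 0.0

model = TrigramModel()
model.train("the dog barks".split())
model.train("the dog sleeps".split())
print(model.prob(("the", "dog"), "barks"))  # -> 0.5
```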
238
00:10:08,600 --> 00:10:15,240
the what is this is there a robot
239
00:10:11,399 --> 00:10:17,480
walking around somewhere um Howard drill
240
00:10:15,240 --> 00:10:20,440
okay okay it'd be a lot more fun if it was a
241
00:10:17,480 --> 00:10:22,560
robot um so
242
00:10:20,440 --> 00:10:25,519
uh the things we're going to talk about
243
00:10:22,560 --> 00:10:28,360
today are largely going to be ones that
244
00:10:25,519 --> 00:10:31,200
have unlimited length context um and so
245
00:10:28,360 --> 00:10:33,440
uh we'll talk about some examples
246
00:10:31,200 --> 00:10:35,680
here and then um there's also
247
00:10:33,440 --> 00:10:37,279
independent prediction so this uh would
248
00:10:35,680 --> 00:10:39,160
be something like a unigram model where
249
00:10:37,279 --> 00:10:41,560
you would just uh not condition on any
250
00:10:39,160 --> 00:10:41,560
previous
251
00:10:41,880 --> 00:10:45,959
context there's also bidirectional
252
00:10:44,279 --> 00:10:47,959
prediction where basically when you
253
00:10:45,959 --> 00:10:50,440
predict each element you predict based
254
00:10:47,959 --> 00:10:52,680
on all of the other elements not the
255
00:10:50,440 --> 00:10:55,519
element itself uh this could be
256
00:10:52,680 --> 00:10:59,720
something like a masked language model
257
00:10:55,519 --> 00:11:02,160
um but note here that I put a slash
258
00:10:59,720 --> 00:11:04,000
through here uh because this is not a
259
00:11:02,160 --> 00:11:06,800
well-formed probability because as I
260
00:11:04,000 --> 00:11:08,760
mentioned last time um in order to have
261
00:11:06,800 --> 00:11:11,000
a well-formed probability you need to
262
00:11:08,760 --> 00:11:12,440
predict the elements based on all of the
263
00:11:11,000 --> 00:11:14,120
elements that you predicted before and
264
00:11:12,440 --> 00:11:16,519
you can't predict based on future
265
00:11:14,120 --> 00:11:18,519
elements so this is not actually a
266
00:11:16,519 --> 00:11:20,760
probabilistic model but sometimes people
267
00:11:18,519 --> 00:11:22,240
use this to kind of learn
268
00:11:20,760 --> 00:11:24,720
representations that could be used
269
00:11:22,240 --> 00:11:28,680
Downstream for some
270
00:11:24,720 --> 00:11:30,959
reason cool is this clear any questions
271
00:11:28,680 --> 00:11:30,959
comments
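The bidirectional ("masked") setup described above can be sketched by enumerating the masked views of a sequence; the function name is an illustrative assumption, not a real library API:

```python
# Each position is hidden in turn and predicted from ALL the other
# positions, left and right. As noted in the lecture, these per-position
# predictions do not combine into a well-formed sequence probability;
# the setup is used to learn representations, not to model P(sequence).

def masked_views(tokens, mask="[MASK]"):
    """Return (masked_sequence, position, target) for every position."""
    return [
        (tokens[:i] + [mask] + tokens[i + 1:], i, target)
        for i, target in enumerate(tokens)
    ]

for masked, i, target in masked_views(["the", "dog", "barks"]):
    print(i, target, masked)
# position 1, for example, predicts "dog" from ["the", "[MASK]", "barks"]
```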
272
00:11:32,680 --> 00:11:39,839
yeah so these are all um not
273
00:11:36,800 --> 00:11:42,000
conditioning on any prior context uh so
274
00:11:39,839 --> 00:11:43,959
when you predict each word it's
275
00:11:42,000 --> 00:11:46,880
conditioning on context that you
276
00:11:43,959 --> 00:11:50,160
previously generated or previously
277
00:11:46,880 --> 00:11:52,279
predicted yeah so and if I go to the
278
00:11:50,160 --> 00:11:55,399
conditioned ones these are where you
279
00:11:52,279 --> 00:11:56,800
have like a source x uh where you're
280
00:11:55,399 --> 00:11:58,480
given this and then you want to
281
00:11:56,800 --> 00:11:59,639
calculate the conditional probability of
282
00:11:58,480 --> 00:12:04,279
something else
283
00:11:59,639 --> 00:12:06,839
so um to give some examples of this um
284
00:12:04,279 --> 00:12:10,320
this is autoregressive conditioned
285
00:12:06,839 --> 00:12:12,920
prediction and um this could be like a
286
00:12:10,320 --> 00:12:14,440
standard sequence to sequence model
287
00:12:12,920 --> 00:12:16,079
or it could be a language model where
288
00:12:14,440 --> 00:12:18,600
you're given a prompt and you want to
289
00:12:16,079 --> 00:12:20,560
predict the following output like we
290
00:12:18,600 --> 00:12:24,160
often do with ChatGPT or something like
291
00:12:20,560 --> 00:12:27,880
this and so
292
00:12:24,160 --> 00:12:30,199
um yeah I I don't think you
293
00:12:27,880 --> 00:12:32,279
can
294
00:12:30,199 --> 00:12:34,639
yeah I don't know of any way you can use
295
00:12:32,279 --> 00:12:37,680
ChatGPT without any conditioning
296
00:12:34,639 --> 00:12:39,959
context um but there were people who
297
00:12:37,680 --> 00:12:41,240
were sending uh I saw this about a week
298
00:12:39,959 --> 00:12:44,079
or two ago there were people who were
299
00:12:41,240 --> 00:12:47,839
sending things to the chat um to the GPT
300
00:12:44,079 --> 00:12:50,480
3.5 or GPT-4 API with no input and it
301
00:12:47,839 --> 00:12:52,279
would randomly output random questions
302
00:12:50,480 --> 00:12:54,800
or something like that so that's
303
00:12:52,279 --> 00:12:56,720
what happens when you send things to uh
304
00:12:54,800 --> 00:12:58,120
to ChatGPT without any prior
305
00:12:56,720 --> 00:13:00,120
conditioning context but normally what you
306
00:12:58,120 --> 00:13:01,440
do is you put in you know your prompt
307
00:13:00,120 --> 00:13:05,320
and then it follows up with your prompt
308
00:13:01,440 --> 00:13:05,320
and that would be in this uh in this
309
00:13:06,000 --> 00:13:11,279
Paradigm there's also something called
310
00:13:08,240 --> 00:13:14,199
non-autoregressive conditioned prediction
311
00:13:11,279 --> 00:13:16,760
um and this can be used for something
312
00:13:14,199 --> 00:13:19,160
like sequence labeling or non-autoregressive
313
00:13:16,760 --> 00:13:20,760
machine translation I'll talk
314
00:13:19,160 --> 00:13:22,839
about the first one in this class and
315
00:13:20,760 --> 00:13:25,600
I'll talk about the the second one maybe
316
00:13:22,839 --> 00:13:27,399
later um it's kind of a minor topic now
317
00:13:25,600 --> 00:13:30,040
it used to be popular a few years ago so
318
00:13:27,399 --> 00:13:33,279
I'm not sure whether I'll cover it but
319
00:13:30,040 --> 00:13:33,279
um uh
320
00:13:33,399 --> 00:13:39,279
yeah cool so the basic modeling paradigm
321
00:13:37,079 --> 00:13:41,199
that we use for things like this is
322
00:13:39,279 --> 00:13:42,760
extracting features and predicting so
323
00:13:41,199 --> 00:13:44,839
this is exactly the same as the bag of
324
00:13:42,760 --> 00:13:46,680
words model right the bag of words
325
00:13:44,839 --> 00:13:48,680
model that I talked about the first time
326
00:13:46,680 --> 00:13:50,959
we extracted features uh based on those
327
00:13:48,680 --> 00:13:53,440
features we made predictions so it's no
328
00:13:50,959 --> 00:13:55,480
different when we do sequence modeling
329
00:13:53,440 --> 00:13:57,680
um but the methods that we use for
330
00:13:55,480 --> 00:14:01,120
feature extraction is different so given
331
00:13:57,680 --> 00:14:03,920
an input text X we extract features
332
00:14:01,120 --> 00:14:06,519
H and predict labels
333
00:14:03,920 --> 00:14:10,320
Y and for something like text
334
00:14:06,519 --> 00:14:12,600
classification what we do is we uh so
335
00:14:10,320 --> 00:14:15,440
for example we have text classification
336
00:14:12,600 --> 00:14:17,920
or or sequence labeling and for text
337
00:14:15,440 --> 00:14:19,720
classification usually what we would do
338
00:14:17,920 --> 00:14:21,360
is we would have a feature extractor
339
00:14:19,720 --> 00:14:23,120
from this feature extractor we take the
340
00:14:21,360 --> 00:14:25,199
sequence and we convert it into a single
341
00:14:23,120 --> 00:14:28,040
vector and then based on this vector we
342
00:14:25,199 --> 00:14:30,160
make a prediction so that's what we
343
00:14:28,040 --> 00:14:33,160
do for
344
00:14:30,160 --> 00:14:35,480
classification um for sequence labeling
345
00:14:33,160 --> 00:14:37,160
normally what we do is we extract one
346
00:14:35,480 --> 00:14:40,240
vector for each thing that we would like
347
00:14:37,160 --> 00:14:42,079
to predict about so here that might be
348
00:14:40,240 --> 00:14:45,639
one vector for each
349
00:14:42,079 --> 00:14:47,720
word um and then based on this uh we
350
00:14:45,639 --> 00:14:49,120
would predict something for each word so
351
00:14:47,720 --> 00:14:50,360
this is an example of part of speech
352
00:14:49,120 --> 00:14:53,079
tagging but there's a lot of other
353
00:14:50,360 --> 00:14:56,920
sequence labeling tasks
354
00:14:53,079 --> 00:14:58,839
also and what tasks exist for something
355
00:14:56,920 --> 00:15:03,040
like sequence labeling so sequence labeling
356
00:14:58,839 --> 00:15:06,240
is uh a pretty
357
00:15:03,040 --> 00:15:09,000
big subset of NLP tasks you can express
358
00:15:06,240 --> 00:15:11,040
a lot of things as sequence labeling and
359
00:15:09,000 --> 00:15:13,000
basically given an input text X we
360
00:15:11,040 --> 00:15:16,079
predict an output label sequence y of
361
00:15:13,000 --> 00:15:17,560
equal length so this can be used for
362
00:15:16,079 --> 00:15:20,160
things like part of speech tagging to
363
00:15:17,560 --> 00:15:22,000
get the parts of speech of each word um
364
00:15:20,160 --> 00:15:24,639
it can also be used for something like
365
00:15:22,000 --> 00:15:26,959
lemmatization basically what
366
00:15:24,639 --> 00:15:29,880
that is is predicting the base
367
00:15:26,959 --> 00:15:31,480
form of each word uh and this can be
368
00:15:29,880 --> 00:15:34,560
used for normalization if you want to
369
00:15:31,480 --> 00:15:36,360
find like for example all instances of a
370
00:15:34,560 --> 00:15:38,480
particular verb being used or all
371
00:15:36,360 --> 00:15:40,800
instances of a particular noun being
372
00:15:38,480 --> 00:15:42,720
used this is a little bit different than
373
00:15:40,800 --> 00:15:45,000
something like stemming so stemming
374
00:15:42,720 --> 00:15:48,160
normally what stemming would do is it
375
00:15:45,000 --> 00:15:50,560
would uh chop off the plural here it
376
00:15:48,160 --> 00:15:53,240
would chop off S but it wouldn't be able
377
00:15:50,560 --> 00:15:56,279
to do things like normalize saw into see
378
00:15:53,240 --> 00:15:57,759
because uh stemming uh just removes
379
00:15:56,279 --> 00:15:59,240
suffixes it doesn't do any sort of
380
00:15:57,759 --> 00:16:02,720
normalization so that's the difference
381
00:15:59,240 --> 00:16:05,199
between lemmatization and
382
00:16:02,720 --> 00:16:08,079
stemming there's also something called
383
00:16:05,199 --> 00:16:09,680
morphological tagging um in
384
00:16:08,079 --> 00:16:11,639
morphological tagging basically what
385
00:16:09,680 --> 00:16:14,360
this is doing is this is a
386
00:16:11,639 --> 00:16:17,040
more advanced version of part of speech
387
00:16:14,360 --> 00:16:20,360
tagging uh that predicts things like
388
00:16:17,040 --> 00:16:23,600
okay this is a a past tense verb uh this
389
00:16:20,360 --> 00:16:25,639
is a plural um this is a particular verb
390
00:16:23,600 --> 00:16:27,240
form and you have multiple tags here
391
00:16:25,639 --> 00:16:28,959
this is less interesting in English
392
00:16:27,240 --> 00:16:30,920
because English is kind of a boring
394
00:16:30,920 --> 00:16:33,399
language morphologically it
394
00:16:30,920 --> 00:16:33,399
doesn't have a lot of conjugation and
395
00:16:32,319 --> 00:16:35,839
other stuff but it's a lot more
396
00:16:33,399 --> 00:16:38,319
interesting in more complex languages
397
00:16:35,839 --> 00:16:40,040
like Japanese or Hindi or other things
398
00:16:38,319 --> 00:16:42,480
like
399
00:16:40,040 --> 00:16:43,920
that Chinese is even more boring than
400
00:16:42,480 --> 00:16:46,120
English so if you're interested in
401
00:16:43,920 --> 00:16:47,000
Chinese then you don't need to worry
402
00:16:46,120 --> 00:16:50,680
about
403
00:16:47,000 --> 00:16:52,560
that cool um but actually what's maybe
404
00:16:50,680 --> 00:16:55,000
more widely used from the sequence
405
00:16:52,560 --> 00:16:57,480
labeling perspective is span labeling
406
00:16:55,000 --> 00:17:01,040
and here you want to predict spans and
407
00:16:57,480 --> 00:17:03,560
labels and this could be uh named entity
408
00:17:01,040 --> 00:17:05,360
recognition so if you say uh Graham Neubig
409
00:17:03,560 --> 00:17:07,199
is teaching at Carnegie Mellon University
410
00:17:05,360 --> 00:17:09,520
you would want to identify each entity
411
00:17:07,199 --> 00:17:11,480
is being like a person organization
412
00:17:09,520 --> 00:17:16,039
Place governmental entity other stuff
413
00:17:11,480 --> 00:17:18,760
like that um there's also
414
00:17:16,039 --> 00:17:20,439
uh things like syntactic chunking where
415
00:17:18,760 --> 00:17:23,640
you want to find all noun phrases and
416
00:17:20,439 --> 00:17:26,799
verb phrases um also semantic role
417
00:17:23,640 --> 00:17:30,360
labeling where semantic role labeling is
418
00:17:26,799 --> 00:17:32,480
uh demonstrating who did what to whom so
419
00:17:30,360 --> 00:17:34,440
it's saying uh this is the actor the
420
00:17:32,480 --> 00:17:36,120
person who did the thing this is the
421
00:17:34,440 --> 00:17:38,520
thing that is being done and this is the
422
00:17:36,120 --> 00:17:40,280
place where it's being done so uh this
423
00:17:38,520 --> 00:17:42,840
can be useful if you want to do any sort
424
00:17:40,280 --> 00:17:45,559
of analysis about who does what to whom
425
00:17:42,840 --> 00:17:48,160
uh other things like
426
00:17:45,559 --> 00:17:50,360
that um there's also a more complicated
427
00:17:48,160 --> 00:17:52,080
thing called an entity linking which
428
00:17:50,360 --> 00:17:54,559
isn't really a span labeling task but
429
00:17:52,080 --> 00:17:58,400
it's basically named entity recognition
430
00:17:54,559 --> 00:18:00,799
and you link it
431
00:17:58,400 --> 00:18:04,200
to like a database like Wikidata or
432
00:18:00,799 --> 00:18:06,600
Wikipedia or something like this and
433
00:18:04,200 --> 00:18:09,520
this doesn't seem very glamorous perhaps
434
00:18:06,600 --> 00:18:10,799
you know a lot of people might not
435
00:18:09,520 --> 00:18:13,400
be
436
00:18:10,799 --> 00:18:15,000
immediately
437
00:18:13,400 --> 00:18:16,799
excited by entity linking
438
00:18:15,000 --> 00:18:18,520
but actually it's super super important
439
00:18:16,799 --> 00:18:20,080
for things like news aggregation and
440
00:18:18,520 --> 00:18:21,640
other stuff like that find all the news
441
00:18:20,080 --> 00:18:23,799
articles about the celebrity or
442
00:18:21,640 --> 00:18:26,919
something like this uh find all of the
443
00:18:23,799 --> 00:18:29,720
mentions of our product um our company's
444
00:18:26,919 --> 00:18:33,400
product on uh social media or things so
445
00:18:29,720 --> 00:18:33,400
it's actually a really widely used
446
00:18:33,720 --> 00:18:38,000
technology and then finally span
447
00:18:36,039 --> 00:18:40,240
labeling can also be treated as sequence
448
00:18:38,000 --> 00:18:43,240
labeling um and the way we normally do
449
00:18:40,240 --> 00:18:45,600
this is we use something called BIO tags
450
00:18:43,240 --> 00:18:47,760
and uh here you predict the beginning uh
451
00:18:45,600 --> 00:18:50,200
inside and outside tags for each word or span
452
00:18:47,760 --> 00:18:52,400
so if we have this example of spans uh
453
00:18:50,200 --> 00:18:56,120
we just convert this into tags uh where
454
00:18:52,400 --> 00:18:57,760
you say uh begin person in person O
455
00:18:56,120 --> 00:18:59,640
means it's not an entity begin
456
00:18:57,760 --> 00:19:02,799
organization in organization and then
457
00:18:59,640 --> 00:19:05,520
you convert that back into these
458
00:19:02,799 --> 00:19:09,880
spans so this makes it relatively easy
459
00:19:05,520 --> 00:19:09,880
to uh kind of do the span
460
00:19:10,480 --> 00:19:15,120
prediction cool um so now you know uh
461
00:19:13,640 --> 00:19:16,600
now you know what to do if you want to
462
00:19:15,120 --> 00:19:18,280
predict entities or other things like
463
00:19:16,600 --> 00:19:20,240
that there's a lot of models on like
464
00:19:18,280 --> 00:19:22,400
Hugging Face for example that allow
465
00:19:20,240 --> 00:19:25,640
you to do these things are there any
466
00:19:22,400 --> 00:19:25,640
questions uh before I move
467
00:19:27,080 --> 00:19:32,440
on okay
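Before moving on, the BIO-to-span conversion just described can be sketched in a few lines of pure Python; the tag names and the example tag sequence here are hypothetical illustrations, not taken from the lecture slides:

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (start, end, label) spans.
    `end` is exclusive; 'O' tokens are outside any entity."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        # a B- tag, an O tag, or an I- tag with a different label closes any open span
        if tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            if start is not None:
                spans.append((start, i, label))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, label = i, tag[2:]  # tolerate an I- tag with no preceding B-
    if start is not None:
        spans.append((start, len(tags), label))
    return spans

# hypothetical sentence: "Graham Neubig teaches at Carnegie Mellon University"
tags = ["B-PER", "I-PER", "O", "O", "B-ORG", "I-ORG", "I-ORG"]
print(bio_to_spans(tags))  # [(0, 2, 'PER'), (4, 7, 'ORG')]
```

The reverse direction (spans to tags) is symmetric, which is what makes span labeling reducible to sequence labeling.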
468
00:19:28,799 --> 00:19:34,039
cool I'll just go forward then so um now
469
00:19:32,440 --> 00:19:37,000
I'm going to talk about how we actually
470
00:19:34,039 --> 00:19:38,559
model these in machine learning models
471
00:19:37,000 --> 00:19:40,919
and there's three major types of
472
00:19:38,559 --> 00:19:43,120
sequence models uh there are other types
473
00:19:40,919 --> 00:19:45,320
of sequence models but I'd say the great
474
00:19:43,120 --> 00:19:47,840
majority of work uses one of these three
475
00:19:45,320 --> 00:19:51,720
different types and the first one is
476
00:19:47,840 --> 00:19:54,840
recurrence um what recurrence does is
477
00:19:51,720 --> 00:19:56,240
it conditions representations on an
478
00:19:54,840 --> 00:19:58,720
encoding of the
479
00:19:56,240 --> 00:20:01,360
history and so the way this works
480
00:19:58,720 --> 00:20:04,679
is essentially you have your input
481
00:20:01,360 --> 00:20:06,280
vectors like this uh usually word
482
00:20:04,679 --> 00:20:08,600
embeddings or embeddings from the
483
00:20:06,280 --> 00:20:10,880
previous layer of the model and you have
484
00:20:08,600 --> 00:20:12,840
a recurrent neural network and the
485
00:20:10,880 --> 00:20:14,600
recurrent neural network um at the very
486
00:20:12,840 --> 00:20:17,280
beginning might only take the first
487
00:20:14,600 --> 00:20:19,480
vector but at every subsequent step it
488
00:20:17,280 --> 00:20:23,760
takes the input vector and it takes the
489
00:20:19,480 --> 00:20:23,760
hidden vector from the previous
490
00:20:24,080 --> 00:20:32,280
input and then you keep on going
491
00:20:29,039 --> 00:20:32,280
uh like this all the way through the
492
00:20:32,320 --> 00:20:37,600
sequence next is convolution, which is
493
00:20:35,799 --> 00:20:40,880
conditioning representations on local
494
00:20:37,600 --> 00:20:44,200
context so you have the inputs like this
495
00:20:40,880 --> 00:20:47,200
and here you're conditioning on the word
496
00:20:44,200 --> 00:20:51,240
itself and the surrounding um words on
497
00:20:47,200 --> 00:20:52,960
the right or the left so um you would do
498
00:20:51,240 --> 00:20:55,240
something like this this is a typical
499
00:20:52,960 --> 00:20:57,480
convolution where you have this
500
00:20:55,240 --> 00:20:59,039
current one here and the left one and
501
00:20:57,480 --> 00:21:01,080
the right one and this would be a size
502
00:20:59,039 --> 00:21:03,480
three convolution you could also have a
503
00:21:01,080 --> 00:21:06,520
size five convolution or seven or nine you know
504
00:21:03,480 --> 00:21:08,600
whatever else um that would take in more
505
00:21:06,520 --> 00:21:11,520
surrounding words like
506
00:21:08,600 --> 00:21:13,720
this and then finally we have attention
507
00:21:11,520 --> 00:21:15,640
um and attention conditions
508
00:21:13,720 --> 00:21:19,080
representations on a weighted average of
509
00:21:15,640 --> 00:21:21,000
all tokens in the sequence and so here
510
00:21:19,080 --> 00:21:24,600
um we're conditioning on all of the
511
00:21:21,000 --> 00:21:26,279
other tokens in the sequence but um the
512
00:21:24,600 --> 00:21:28,919
amount that we condition on each of the
513
00:21:26,279 --> 00:21:32,039
tokens differs between them
514
00:21:28,919 --> 00:21:34,919
so we might get more of this token less
515
00:21:32,039 --> 00:21:37,600
of this token and other things like that
516
00:21:34,919 --> 00:21:39,720
and I'll go into the mechanisms of each
517
00:21:37,600 --> 00:21:43,159
of
518
00:21:39,720 --> 00:21:45,720
these one important thing to think about
519
00:21:43,159 --> 00:21:49,279
is uh the computational complexity of
520
00:21:45,720 --> 00:21:51,960
each of these and um the computational
521
00:21:49,279 --> 00:21:56,240
complexity can be
522
00:21:51,960 --> 00:21:58,600
expressed in terms of the sequence length let's
523
00:21:56,240 --> 00:22:00,840
call the sequence length n and
524
00:21:58,600 --> 00:22:02,520
convolution has a convolution window
525
00:22:00,840 --> 00:22:05,080
size so I'll call that
526
00:22:02,520 --> 00:22:08,039
W so does anyone have an idea of the
527
00:22:05,080 --> 00:22:10,360
computational complexity of a recurrent
528
00:22:08,039 --> 00:22:10,360
neural
529
00:22:11,480 --> 00:22:16,640
network so how um how quickly does the
530
00:22:15,120 --> 00:22:18,640
computation of a recurrent neural
531
00:22:16,640 --> 00:22:20,760
network grow and one way you can look at
532
00:22:18,640 --> 00:22:24,360
this is uh figure out the number of
533
00:22:20,760 --> 00:22:24,360
arrows uh that you see
534
00:22:24,480 --> 00:22:29,080
here yeah it's it's linear so it's
535
00:22:27,440 --> 00:22:32,520
basically
536
00:22:29,080 --> 00:22:35,520
n um what about
537
00:22:32,520 --> 00:22:36,760
convolution any other ideas any ideas
538
00:22:35,520 --> 00:22:42,039
about
539
00:22:36,760 --> 00:22:45,120
convolution n yeah n times
540
00:22:42,039 --> 00:22:47,559
w and what about
541
00:22:45,120 --> 00:22:52,200
attention n squared
542
00:22:47,559 --> 00:22:53,559
yeah so what you can see is um for very
543
00:22:52,200 --> 00:22:58,000
long
544
00:22:53,559 --> 00:23:00,400
sequences um for very long sequences the
545
00:22:58,000 --> 00:23:04,480
asymptotic complexity of running a
546
00:23:00,400 --> 00:23:06,039
recurrent neural network is uh lower so
547
00:23:04,480 --> 00:23:08,960
you can run a recurrent neural network
548
00:23:06,039 --> 00:23:10,480
over a sequence of length uh you know 20
549
00:23:08,960 --> 00:23:12,480
million or something like that and as
550
00:23:10,480 --> 00:23:15,200
long as you had enough memory it would
551
00:23:12,480 --> 00:23:16,520
take linear time but um if you do
552
00:23:15,200 --> 00:23:18,400
something like attention over a really
553
00:23:16,520 --> 00:23:20,240
long sequence it would be more difficult
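As a back-of-the-envelope check on the complexities just discussed, here is a sketch that counts the token-to-token "arrows" each model computes over a length-n input; this is a deliberate simplification that ignores constants, hidden dimensions, and parallelism, and the function name is made up for illustration:

```python
def num_edges(n, model, w=3):
    """Rough count of pairwise dependencies ("arrows") computed by each
    sequence model over n tokens (ignoring constants and parallelism)."""
    if model == "recurrent":    # each step reads one previous state: O(n)
        return n
    if model == "convolution":  # each position reads a w-token window: O(n * w)
        return n * w
    if model == "attention":    # each position attends to all n positions: O(n^2)
        return n * n
    raise ValueError(model)

# the gap widens quickly as sequences get longer
for n in (10, 100, 1000):
    print(n, num_edges(n, "recurrent"), num_edges(n, "convolution"), num_edges(n, "attention"))
```

For n = 1000 the attention count is already a thousand times the recurrent one, which is the asymptotic point being made here.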
554
00:23:18,400 --> 00:23:22,080
there's a lot of caveats here because
555
00:23:20,240 --> 00:23:23,320
attention and convolution are easily
556
00:23:22,080 --> 00:23:26,200
parallelized,
557
00:23:23,320 --> 00:23:28,520
whereas recurrence is
558
00:23:26,200 --> 00:23:30,919
not and I'll talk about that in a second
559
00:23:28,520 --> 00:23:32,679
but any anyway it's a good thing to keep
560
00:23:30,919 --> 00:23:36,240
in
561
00:23:32,679 --> 00:23:37,679
mind cool um so the next the first
562
00:23:36,240 --> 00:23:39,799
sequence model I want to introduce is
563
00:23:37,679 --> 00:23:42,559
recurrent neural networks oh um sorry
564
00:23:39,799 --> 00:23:45,799
one other thing I want to mention is all
565
00:23:42,559 --> 00:23:47,600
of these are still used um it might seem
566
00:23:45,799 --> 00:23:49,960
that like if you're very plugged into
567
00:23:47,600 --> 00:23:52,640
NLP it might seem like Well everybody's
568
00:23:49,960 --> 00:23:55,080
using attention um so why do we need to
569
00:23:52,640 --> 00:23:56,880
learn about the other ones uh but
570
00:23:55,080 --> 00:23:59,679
actually all of these are used and
571
00:23:56,880 --> 00:24:02,600
usually recurrence and convolution are
572
00:23:59,679 --> 00:24:04,960
used in combination with attention uh in
573
00:24:02,600 --> 00:24:07,799
some way for particular applications
574
00:24:04,960 --> 00:24:09,960
where uh like uh recurrence or a
575
00:24:07,799 --> 00:24:12,640
convolution are are useful so I'll I'll
576
00:24:09,960 --> 00:24:15,279
go into details of that
577
00:24:12,640 --> 00:24:18,159
later so let's talk about the first sequence
578
00:24:15,279 --> 00:24:20,600
model uh recurrent neural networks so
579
00:24:18,159 --> 00:24:22,919
recurrent neural networks um they're
580
00:24:20,600 --> 00:24:26,399
basically tools to remember information
581
00:24:22,919 --> 00:24:28,520
uh they were invented in uh around
582
00:24:26,399 --> 00:24:30,520
1990 and
583
00:24:28,520 --> 00:24:34,120
the way they work is a feedforward
584
00:24:30,520 --> 00:24:35,600
neural network looks a bit like this we
585
00:24:34,120 --> 00:24:38,000
have some sort of look up over the
586
00:24:35,600 --> 00:24:40,120
context we calculate embeddings we do a
587
00:24:38,000 --> 00:24:41,000
transform we get a hidden State and we
588
00:24:40,120 --> 00:24:43,039
make the
589
00:24:41,000 --> 00:24:46,159
prediction whereas a recurrent neural
590
00:24:43,039 --> 00:24:49,360
network uh feeds in the previous hidden
591
00:24:46,159 --> 00:24:53,360
State and I'll contrast
592
00:24:49,360 --> 00:24:54,840
the feedforward neural network that we
593
00:24:53,360 --> 00:24:56,559
already know with a very simple
594
00:24:54,840 --> 00:24:58,279
Elman-style recurrent
595
00:24:56,559 --> 00:25:00,399
neural network
596
00:24:58,279 --> 00:25:01,880
so basically
597
00:25:00,399 --> 00:25:06,120
the feed forward Network that we already
598
00:25:01,880 --> 00:25:07,840
know does a um linear transform over the
599
00:25:06,120 --> 00:25:09,279
input and then it runs it through a
600
00:25:07,840 --> 00:25:11,640
nonlinear function and this could be
601
00:25:09,279 --> 00:25:14,200
like a tanh function or a ReLU function or
602
00:25:11,640 --> 00:25:17,080
anything like that in a recurrent neural
603
00:25:14,200 --> 00:25:19,559
network we add a multiplication by the
604
00:25:17,080 --> 00:25:22,080
previous hidden state so it
605
00:25:19,559 --> 00:25:25,120
looks like
606
00:25:22,080 --> 00:25:27,000
this and so if we look at what
607
00:25:25,120 --> 00:25:29,080
processing a sequence looks like uh
608
00:25:27,000 --> 00:25:31,080
basically what we do is we start out
609
00:25:29,080 --> 00:25:32,720
with an initial State this initial State
610
00:25:31,080 --> 00:25:34,320
could be like all zeros or it could be
611
00:25:32,720 --> 00:25:35,200
randomized or it could be learned or
612
00:25:34,320 --> 00:25:38,480
whatever
613
00:25:35,200 --> 00:25:42,080
else and then based on based on this uh
614
00:25:38,480 --> 00:25:44,279
we run it through an RNN function um and
615
00:25:42,080 --> 00:25:46,600
then you know calculate the hidden
616
00:25:44,279 --> 00:25:48,960
State use it to make a prediction uh we
617
00:25:46,600 --> 00:25:50,760
have the RNN function uh make a
618
00:25:48,960 --> 00:25:51,760
prediction RNN make a prediction RNN
619
00:25:50,760 --> 00:25:54,520
make a
620
00:25:51,760 --> 00:25:56,960
prediction so one important thing here
621
00:25:54,520 --> 00:25:58,360
is that this RNN is exactly the same
622
00:25:56,960 --> 00:26:01,880
function
623
00:25:58,360 --> 00:26:04,960
no matter which position it appears in
624
00:26:01,880 --> 00:26:06,640
and so because of that we just no matter
625
00:26:04,960 --> 00:26:08,279
how long the sequence becomes we always
626
00:26:06,640 --> 00:26:10,200
have the same number of parameters which
627
00:26:08,279 --> 00:26:12,600
is always like really important for a
628
00:26:10,200 --> 00:26:15,120
sequence model so uh that's what this
629
00:26:12,600 --> 00:26:15,120
looks like
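The Elman update just contrasted with the feedforward one can be sketched like this; this is a toy pure-Python version with made-up 2-dimensional weights (real implementations use tensor libraries and learned parameters):

```python
import math

def elman_step(x, h_prev, W, V, b):
    """One Elman RNN step: h_t = tanh(W x_t + V h_{t-1} + b).
    W and V are lists of rows; x, h_prev, and b are plain lists."""
    def matvec(M, v):
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]
    z = [wx + vh + bi for wx, vh, bi in zip(matvec(W, x), matvec(V, h_prev), b)]
    return [math.tanh(zi) for zi in z]

# toy 2-dim state over a 3-token sequence; the SAME W, V, b are reused at
# every position, so the parameter count is independent of sequence length
W = [[0.5, 0.0], [0.0, 0.5]]
V = [[0.1, 0.0], [0.0, 0.1]]
b = [0.0, 0.0]
h = [0.0, 0.0]  # initial state (here: all zeros)
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    h = elman_step(x, h, W, V, b)
print(h)
```

Dropping the `V h_prev` term recovers the plain feedforward layer; the recurrent term is the only difference.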
630
00:26:15,799 --> 00:26:20,480
here so how do we train
631
00:26:18,320 --> 00:26:22,679
rnns um
632
00:26:20,480 --> 00:26:24,399
basically if you remember we can train
633
00:26:22,679 --> 00:26:27,159
neural networks as long as we have a
634
00:26:24,399 --> 00:26:29,240
directed acyclic graph that calculates
635
00:26:27,159 --> 00:26:30,919
our loss function and then for uh
636
00:26:29,240 --> 00:26:32,640
forward propagation and back propagation
637
00:26:30,919 --> 00:26:35,720
we'll do all the rest to calculate our
638
00:26:32,640 --> 00:26:38,760
gradients and we update the
639
00:26:35,720 --> 00:26:40,480
parameters so the way this works is uh
640
00:26:38,760 --> 00:26:42,000
let's say we're doing sequence labeling
641
00:26:40,480 --> 00:26:45,200
and each of these predictions is a
642
00:26:42,000 --> 00:26:47,559
probability over the parts of
643
00:26:45,200 --> 00:26:49,000
speech for that position and each
644
00:26:47,559 --> 00:26:50,760
of these labels
645
00:26:49,000 --> 00:26:52,919
is a true
646
00:26:50,760 --> 00:26:55,720
part-of-speech label
647
00:26:52,919 --> 00:26:57,640
so
648
00:26:55,720 --> 00:26:59,320
basically what we do is from this we
649
00:26:57,640 --> 00:27:02,200
calculate the negative log likelihood of
650
00:26:59,320 --> 00:27:05,559
the true part of speech we get a
651
00:27:02,200 --> 00:27:09,120
loss and so now we have four losses uh
652
00:27:05,559 --> 00:27:11,559
here this is no longer a nice directed
653
00:27:09,120 --> 00:27:13,000
acyclic uh graph that ends in a single
654
00:27:11,559 --> 00:27:15,279
loss function which is kind of what we
655
00:27:13,000 --> 00:27:17,559
needed for back propagation right so
656
00:27:15,279 --> 00:27:20,240
what do we do uh very simple we just add
657
00:27:17,559 --> 00:27:22,440
them together uh we take the sum and now
658
00:27:20,240 --> 00:27:24,120
we have a single loss function uh which
659
00:27:22,440 --> 00:27:26,240
is the sum of all of the loss functions
660
00:27:24,120 --> 00:27:28,679
for each prediction that we
661
00:27:26,240 --> 00:27:30,799
made and that's our total loss and now
662
00:27:28,679 --> 00:27:32,600
we do have a directed acyclic graph where
663
00:27:30,799 --> 00:27:34,320
this is the terminal node and we can do
664
00:27:32,600 --> 00:27:36,480
backprop like
665
00:27:34,320 --> 00:27:37,799
this this is true for all sequence
666
00:27:36,480 --> 00:27:39,320
models I'm going to talk about today I'm
667
00:27:37,799 --> 00:27:41,559
just illustrating it with recurrent
668
00:27:39,320 --> 00:27:43,279
networks um any any questions here
669
00:27:41,559 --> 00:27:45,240
everything
670
00:27:43,279 --> 00:27:47,919
good
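The "sum the per-position losses into one terminal node" step just described can be sketched as follows; the probability tables and the two-tag POS set are made up purely for illustration:

```python
import math

def total_nll(probs_per_step, gold_labels):
    """Sum per-position negative log likelihoods into one scalar loss,
    giving a single terminal node for backpropagation.
    Each element of probs_per_step is a dict: label -> probability."""
    return sum(-math.log(p[gold]) for p, gold in zip(probs_per_step, gold_labels))

# hypothetical POS predictions for a 3-token sequence
probs = [
    {"NOUN": 0.7, "VERB": 0.3},
    {"NOUN": 0.2, "VERB": 0.8},
    {"NOUN": 0.6, "VERB": 0.4},
]
gold = ["NOUN", "VERB", "NOUN"]
loss = total_nll(probs, gold)
print(round(loss, 4))  # -ln(0.7) - ln(0.8) - ln(0.6)
```

Because addition is differentiable, gradients from this single scalar flow back into every per-position prediction, which is exactly why summing makes the graph a well-formed DAG with one terminal node.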
671
00:27:45,240 --> 00:27:50,279
okay cool um yeah so now we have the
672
00:27:47,919 --> 00:27:52,960
loss it's a well-formed DAG uh we can run
673
00:27:50,279 --> 00:27:55,320
backprop so basically what we do is we
674
00:27:52,960 --> 00:27:58,399
just run backprop and our loss goes
675
00:27:55,320 --> 00:28:01,120
back out into all of the
676
00:27:58,399 --> 00:28:04,200
places now parameters are tied across
677
00:28:01,120 --> 00:28:06,080
time so the derivatives into the
678
00:28:04,200 --> 00:28:07,200
parameters are aggregated over all of
679
00:28:06,080 --> 00:28:10,760
the time
680
00:28:07,200 --> 00:28:13,760
steps um and this has been called back
681
00:28:10,760 --> 00:28:16,320
propagation through time uh since uh
682
00:28:13,760 --> 00:28:18,679
these were originally invented so
683
00:28:16,320 --> 00:28:21,720
basically what it looks like is because
684
00:28:18,679 --> 00:28:25,600
the parameters for this RNN function are
685
00:28:21,720 --> 00:28:27,120
shared so
686
00:28:25,600 --> 00:28:29,480
they'll only be updated once but they're
687
00:28:27,120 --> 00:28:32,640
updated from like four different
688
00:28:29,480 --> 00:28:32,640
positions in this network
689
00:28:34,120 --> 00:28:38,440
essentially yeah and this is the same
690
00:28:36,120 --> 00:28:40,559
for all sequence models that
691
00:28:38,440 --> 00:28:43,519
I'm going to talk about
692
00:28:40,559 --> 00:28:45,360
today um another variety of models that
693
00:28:43,519 --> 00:28:47,559
people use are bidirectional RNNs and
694
00:28:45,360 --> 00:28:49,880
these are uh used when you want to you
695
00:28:47,559 --> 00:28:52,960
know do something like sequence labeling
696
00:28:49,880 --> 00:28:54,399
and so you just run two RNNs you
697
00:28:52,960 --> 00:28:56,279
run one from the beginning one from the
698
00:28:54,399 --> 00:28:59,399
end and concatenate them together like
699
00:28:56,279 --> 00:28:59,399
this make predictions
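The bidirectional wiring just described (one RNN per direction, states concatenated per position) can be sketched like this; the one-dimensional "RNN" step is a stand-in, and in practice the two directions would use separately learned parameters:

```python
def birnn_states(xs, step, h0):
    """Bidirectional RNN sketch: run one RNN left-to-right and another
    right-to-left, then concatenate the two states at each position."""
    fwd, h = [], h0
    for x in xs:                 # forward pass over the sequence
        h = step(h, x)
        fwd.append(h)
    bwd, h = [], h0
    for x in reversed(xs):       # backward pass over the sequence
        h = step(h, x)
        bwd.append(h)
    bwd.reverse()                # align backward states with positions
    return [f + b for f, b in zip(fwd, bwd)]  # list concat = vector concat here

# toy "RNN" over 1-dim states, just to show the wiring (not a trained model)
states = birnn_states([1.0, 2.0, 3.0], step=lambda h, x: [0.5 * h[0] + x], h0=[0.0])
print(states)  # [[1.0, 2.75], [2.5, 3.5], [4.25, 3.0]]
```

Each output state now sees context from both directions, which is why this setup suits sequence labeling rather than left-to-right language modeling.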
700
00:29:01,200 --> 00:29:08,200
cool uh any questions yeah if you run
701
00:29:05,559 --> 00:29:09,960
the RNN in both directions does that change your
702
00:29:08,200 --> 00:29:11,679
complexity does this change the
703
00:29:09,960 --> 00:29:13,000
complexity it doesn't change the
704
00:29:11,679 --> 00:29:16,519
asymptotic complexity because you're
705
00:29:13,000 --> 00:29:18,320
multiplying by two uh and like Big O
706
00:29:16,519 --> 00:29:21,559
notation doesn't care if you multiply by
707
00:29:18,320 --> 00:29:23,880
a constant but it does double the time
708
00:29:21,559 --> 00:29:23,880
that it would
709
00:29:24,080 --> 00:29:28,080
take cool any
710
00:29:26,320 --> 00:29:32,799
other
711
00:29:28,080 --> 00:29:35,720
okay let's go forward um another problem
712
00:29:32,799 --> 00:29:37,240
that is particularly salient in RNNs and
713
00:29:35,720 --> 00:29:40,440
part of the reason why attention models
714
00:29:37,240 --> 00:29:42,000
are so useful is vanishing gradients but
715
00:29:40,440 --> 00:29:43,880
you should be aware of this regardless
716
00:29:42,000 --> 00:29:46,799
of which model
717
00:29:43,880 --> 00:29:48,799
you're using and um thinking about it
718
00:29:46,799 --> 00:29:50,720
very carefully is actually a really good
719
00:29:48,799 --> 00:29:52,399
way to design better architectures if
720
00:29:50,720 --> 00:29:54,000
you're going to be
721
00:29:52,399 --> 00:29:56,039
designing
722
00:29:54,000 --> 00:29:58,000
architectures so basically the problem
723
00:29:56,039 --> 00:29:59,399
with vanishing gradients is like let's
724
00:29:58,000 --> 00:30:01,799
say we have a prediction task where
725
00:29:59,399 --> 00:30:03,960
we're calculating a regression we're
726
00:30:01,799 --> 00:30:05,519
inputting a whole bunch of tokens and
727
00:30:03,960 --> 00:30:08,080
then calculating a regression at the
728
00:30:05,519 --> 00:30:12,840
very end using a squared error loss
729
00:30:08,080 --> 00:30:16,360
function if we do something like this uh
730
00:30:12,840 --> 00:30:17,919
the problem is if we have a standard RNN
731
00:30:16,360 --> 00:30:21,279
when we do
732
00:30:17,919 --> 00:30:25,480
backprop we'll have a big gradient
733
00:30:21,279 --> 00:30:27,000
probably for the first RNN unit here but
734
00:30:25,480 --> 00:30:30,120
every time because we're running this
735
00:30:27,000 --> 00:30:33,679
through some sort of
736
00:30:30,120 --> 00:30:37,080
nonlinearity if we for example if our
737
00:30:33,679 --> 00:30:39,240
nonlinearity is a tanh function uh the
738
00:30:37,080 --> 00:30:42,000
gradient of the tanh function looks a
739
00:30:39,240 --> 00:30:42,000
little bit like
740
00:30:42,120 --> 00:30:50,000
this and here if I am not mistaken
741
00:30:47,200 --> 00:30:53,480
this peaks at one and everywhere else
742
00:30:50,000 --> 00:30:56,919
approaches zero and so because this peaks at
743
00:30:53,480 --> 00:30:58,679
one and approaches zero everywhere else let's say
744
00:30:56,919 --> 00:31:01,360
we have an input way over here like
745
00:30:58,679 --> 00:31:03,080
minus 3 or something like that if
746
00:31:01,360 --> 00:31:04,760
we have that that basically destroys our
747
00:31:03,080 --> 00:31:10,760
gradient our gradient disappears for
748
00:31:04,760 --> 00:31:13,559
that particular unit um and you know
749
00:31:10,760 --> 00:31:15,399
maybe one thing that you might say is oh
750
00:31:13,559 --> 00:31:17,039
well you know if this is getting so
751
00:31:15,399 --> 00:31:19,320
small because this only goes up to one
752
00:31:17,039 --> 00:31:22,960
let's do like 100 times
753
00:31:19,320 --> 00:31:24,880
tanh as our activation function
754
00:31:22,960 --> 00:31:26,600
we'll do 100 times tanh and so now this
755
00:31:24,880 --> 00:31:28,279
goes up to 100 and now our gradients are
756
00:31:26,600 --> 00:31:30,080
not going to disappear here but then you
757
00:31:28,279 --> 00:31:31,720
have the opposite problem you have
758
00:31:30,080 --> 00:31:34,760
exploding gradients where it goes up by
759
00:31:31,720 --> 00:31:36,360
100 every time uh it gets unmanageable
760
00:31:34,760 --> 00:31:40,000
and destroys your gradient descent
761
00:31:36,360 --> 00:31:41,720
itself so basically we have uh we have
762
00:31:40,000 --> 00:31:43,200
this problem because if you apply a
763
00:31:41,720 --> 00:31:45,639
function over and over again your
764
00:31:43,200 --> 00:31:47,240
gradient gets smaller and smaller or
765
00:31:45,639 --> 00:31:49,080
bigger and bigger
766
00:31:47,240 --> 00:31:50,480
every time you do that and uh you have
767
00:31:49,080 --> 00:31:51,720
the vanishing gradient or exploding
768
00:31:50,480 --> 00:31:54,799
gradient
769
00:31:51,720 --> 00:31:56,919
problem um it's not just a problem with
770
00:31:54,799 --> 00:31:59,039
nonlinearities so it also happens when
771
00:31:56,919 --> 00:32:00,480
you do your weight matrix multiplies
772
00:31:59,039 --> 00:32:03,840
and other stuff like that basically
773
00:32:00,480 --> 00:32:05,960
anytime you modify the input into
774
00:32:03,840 --> 00:32:07,720
a different output it will have a
775
00:32:05,960 --> 00:32:10,240
gradient and so it will either be bigger
776
00:32:07,720 --> 00:32:14,000
than one or less than
777
00:32:10,240 --> 00:32:16,000
one um so I mentioned this is a problem
778
00:32:14,000 --> 00:32:18,120
for rnns it's particularly a problem for
779
00:32:16,000 --> 00:32:20,799
rnns over long sequences but it's also a
780
00:32:18,120 --> 00:32:23,039
problem for any other model you use and
781
00:32:20,799 --> 00:32:24,960
the reason why this is important to know
782
00:32:23,039 --> 00:32:26,799
is if there's important information in
783
00:32:24,960 --> 00:32:29,000
your model finding a way that you can
784
00:32:26,799 --> 00:32:30,559
get a direct path from that important
785
00:32:29,000 --> 00:32:32,600
information to wherever you're making a
786
00:32:30,559 --> 00:32:34,440
prediction often is a way to improve
787
00:32:32,600 --> 00:32:39,120
your model
788
00:32:34,440 --> 00:32:41,159
um improve your model performance and on
789
00:32:39,120 --> 00:32:42,919
the contrary if there's unimportant
790
00:32:41,159 --> 00:32:45,320
information if there's information that
791
00:32:42,919 --> 00:32:47,159
you think is likely to be unimportant
792
00:32:45,320 --> 00:32:49,159
putting it farther away or making it a
793
00:32:47,159 --> 00:32:51,279
more indirect path so the model has to
794
00:32:49,159 --> 00:32:53,200
kind of work harder to use it is a good
795
00:32:51,279 --> 00:32:54,840
way to prevent the model from being
796
00:32:53,200 --> 00:32:57,679
distracted by like tons and tons of
797
00:32:54,840 --> 00:33:00,200
information, some of
798
00:32:57,679 --> 00:33:03,960
which may be irrelevant so it's a good
799
00:33:00,200 --> 00:33:03,960
thing to know about in general for model
800
00:33:05,360 --> 00:33:13,080
design so um how did RNNs solve this
801
00:33:09,559 --> 00:33:15,360
problem of the vanishing gradient
802
00:33:13,080 --> 00:33:16,880
there is a method called long short-term
803
00:33:15,360 --> 00:33:20,360
memory
804
00:33:16,880 --> 00:33:22,840
um and the basic idea is to make
805
00:33:20,360 --> 00:33:24,360
additive connections between time
806
00:33:22,840 --> 00:33:29,919
steps
807
00:33:24,360 --> 00:33:32,799
and so addition
808
00:33:29,919 --> 00:33:36,399
or kind of like the
809
00:33:32,799 --> 00:33:38,159
identity is the only thing that does not
810
00:33:36,399 --> 00:33:40,880
change the gradient it's guaranteed to
811
00:33:38,159 --> 00:33:43,279
not change the gradient because um the
812
00:33:40,880 --> 00:33:46,639
identity function is like f
813
00:33:43,279 --> 00:33:49,159
ofx equals X and if you take the
814
00:33:46,639 --> 00:33:51,480
derivative of this it's one so you're
815
00:33:49,159 --> 00:33:55,440
guaranteed to always have a gradient of
816
00:33:51,480 --> 00:33:57,360
one according to this function so um
817
00:33:55,440 --> 00:33:59,559
long short-term memory makes sure that
818
00:33:57,360 --> 00:34:01,840
you have this additive uh input between
819
00:33:59,559 --> 00:34:04,600
time steps and this is what it looks
820
00:34:01,840 --> 00:34:05,919
like it's not super super important to
821
00:34:04,600 --> 00:34:09,119
understand everything that's going on
822
00:34:05,919 --> 00:34:12,200
here but just to explain it very quickly
823
00:34:09,119 --> 00:34:15,720
this uh C here is something called the
824
00:34:12,200 --> 00:34:20,520
memory cell it's passed on linearly like
825
00:34:15,720 --> 00:34:24,679
this and then um you have some gates the
826
00:34:20,520 --> 00:34:27,320
update gate is determining
827
00:34:24,679 --> 00:34:28,919
whether you update this hidden state or
828
00:34:27,320 --> 00:34:31,440
how much you update given this hidden
829
00:34:28,919 --> 00:34:34,480
State this input gate is deciding how
830
00:34:31,440 --> 00:34:36,760
much of the input you take in um and
831
00:34:34,480 --> 00:34:39,879
then the output gate is deciding how
832
00:34:36,760 --> 00:34:43,280
much of the output from the cell you
832
00:34:39,879 --> 00:34:45,599
basically push out after using
833
00:34:43,280 --> 00:34:47,079
the cell so it has these three gates
835
00:34:45,599 --> 00:34:48,760
that control the information flow and
836
00:34:47,079 --> 00:34:51,520
the model can learn to turn them on or
837
00:34:48,760 --> 00:34:53,720
off uh or something like that so uh
838
00:34:51,520 --> 00:34:55,679
that's the basic uh basic idea of the
839
00:34:53,720 --> 00:34:57,240
LSTM and there's lots of other like
840
00:34:55,679 --> 00:34:59,359
variants of this like gated recurrent
841
00:34:57,240 --> 00:35:01,520
units that are a little bit simpler but
842
00:34:59,359 --> 00:35:03,920
the basic idea of an additive connection
843
00:35:01,520 --> 00:35:07,240
plus gating is uh something that appears
844
00:35:03,920 --> 00:35:07,240
a lot in many different types of
845
00:35:07,440 --> 00:35:14,240
architectures um any questions
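A scalar sketch of the gating just walked through; the weight names are hypothetical and real LSTMs are vector-valued with learned matrices per gate, but the additive cell update is the key point:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One scalar LSTM step (a sketch; real cells are vector-valued).
    p is a dict of hypothetical scalar weights and biases per gate.
    The memory cell c is updated ADDITIVELY, so its gradient path
    between time steps is not squashed through a nonlinearity."""
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget/update gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate update
    c = f * c_prev + i * g           # additive connection between time steps
    h = o * math.tanh(c)             # gated output
    return h, c

p = {k: 0.1 for k in ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 2.0]:
    h, c = lstm_step(x, h, c, p)
print(h, c)
```

When the forget gate saturates near 1 and the input gate near 0, the cell carries its contents across many steps essentially unchanged, which is how the gradient survives long spans.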
846
00:35:12,079 --> 00:35:15,760
here another thing I should mention that
847
00:35:14,240 --> 00:35:19,200
I just realized I don't have on my
848
00:35:15,760 --> 00:35:24,480
slides but it's a good thing to know is
849
00:35:19,200 --> 00:35:29,040
that this is also used in uh deep
850
00:35:24,480 --> 00:35:32,440
networks and uh multi-layer
851
00:35:29,040 --> 00:35:32,440
networks and so
852
00:35:34,240 --> 00:35:39,520
basically LSTMs (this axis is
853
00:35:39,720 --> 00:35:45,359
time) have this additive connection
854
00:35:43,359 --> 00:35:47,599
between the memory cells where you're
855
00:35:45,359 --> 00:35:50,079
always
856
00:35:47,599 --> 00:35:53,119
adding this in to whatever
857
00:35:50,079 --> 00:35:53,119
input you
858
00:35:54,200 --> 00:36:00,720
get and then you get an input and
859
00:35:57,000 --> 00:36:00,720
you add this in you get an
860
00:36:00,839 --> 00:36:07,000
input and so this this makes sure you
861
00:36:03,440 --> 00:36:09,640
pass your gradients forward in
862
00:36:07,000 --> 00:36:11,720
time there's also uh something called
863
00:36:09,640 --> 00:36:13,000
residual connections which I think a lot
864
00:36:11,720 --> 00:36:14,319
of people have heard of if you've done a
865
00:36:13,000 --> 00:36:16,000
deep learning class or something like
866
00:36:14,319 --> 00:36:18,079
that but if you haven't uh they're a
867
00:36:16,000 --> 00:36:20,599
good thing to know residual connections
868
00:36:18,079 --> 00:36:22,440
are if you run your input through
869
00:36:20,599 --> 00:36:25,720
multiple
870
00:36:22,440 --> 00:36:28,720
layers like let's say you have a block
871
00:36:25,720 --> 00:36:28,720
here
872
00:36:36,480 --> 00:36:41,280
let's call this an RNN for now
873
00:36:38,560 --> 00:36:44,280
because we know about RNNs
874
00:36:41,280 --> 00:36:44,280
already so
875
00:36:45,119 --> 00:36:49,560
RNN so this connection here is
876
00:36:48,319 --> 00:36:50,920
called the residual connection and
877
00:36:49,560 --> 00:36:55,240
basically it's adding an additive
878
00:36:50,920 --> 00:36:57,280
connection before and after layers so um
879
00:36:55,240 --> 00:36:58,640
this allows you to pass information from
880
00:36:57,280 --> 00:37:00,880
the very beginning of a network to the
881
00:36:58,640 --> 00:37:03,520
very end of a network um through
882
00:37:00,880 --> 00:37:05,480
multiple layers and it also is there to
883
00:37:03,520 --> 00:37:08,800
help prevent the gradient vanishing
884
00:37:05,480 --> 00:37:11,520
problem so like in a way
885
00:37:08,800 --> 00:37:14,560
you can view what LSTMs are doing
886
00:37:11,520 --> 00:37:15,800
is preventing loss of gradient in time
887
00:37:14,560 --> 00:37:17,280
and these are preventing loss of
888
00:37:15,800 --> 00:37:19,480
gradient as you go through like multiple
889
00:37:17,280 --> 00:37:21,119
layers of the network and this is super
890
00:37:19,480 --> 00:37:24,079
standard this is used in all like
891
00:37:21,119 --> 00:37:25,599
Transformer models and LLaMA and GPT and
892
00:37:24,079 --> 00:37:31,200
whatever
893
00:37:25,599 --> 00:37:31,200
else cool um any other questions about
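A minimal sketch of the residual connection just drawn, y = x + f(x); `tiny_layer` here is a made-up stand-in for any block (an RNN, a Transformer sublayer, and so on):

```python
def residual(layer, x):
    """Residual connection sketch: add the block's input back to its
    output, y = x + f(x), so gradients have an identity path through
    the block (the layer-wise analogue of the LSTM's additive cell)."""
    fx = layer(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# stack several hypothetical layers; even if each layer's own contribution
# is tiny, the additive skip path passes information straight through
def tiny_layer(x):
    return [0.01 * xi for xi in x]

x = [1.0, 2.0]
for _ in range(3):
    x = residual(tiny_layer, x)
print(x)
```

Without the skip connection the three layers above would shrink the input by 0.01 cubed; with it, each layer only nudges the signal, and the identity path dominates.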
894
00:37:32,760 --> 00:37:39,079
that okay cool um so next I'd like to go
895
00:37:36,880 --> 00:37:41,760
into convolution um one thing I
896
00:37:39,079 --> 00:37:44,760
should mention is rnns or RNN style
897
00:37:41,760 --> 00:37:46,920
models are used extensively in very long
898
00:37:44,760 --> 00:37:48,160
sequence modeling and we're going to
899
00:37:46,920 --> 00:37:50,440
talk more about like actual
900
00:37:48,160 --> 00:37:52,640
architectures that people use uh to do
901
00:37:50,440 --> 00:37:55,119
this um usually in combination with
902
00:37:52,640 --> 00:37:57,720
attention based models uh but they're
903
00:37:55,119 --> 00:38:01,800
used in very long sequence modeling
904
00:37:57,720 --> 00:38:05,640
convolutions tend to be used a lot
905
00:38:01,800 --> 00:38:07,160
in speech and image processing uh and
906
00:38:05,640 --> 00:38:10,880
the reason why they're used a lot in
907
00:38:07,160 --> 00:38:13,560
speech and image processing is
908
00:38:10,880 --> 00:38:16,800
because when we're processing
909
00:38:13,560 --> 00:38:18,599
language uh we have like
910
00:38:16,800 --> 00:38:22,720
um
911
00:38:18,599 --> 00:38:22,720
this is
912
00:38:23,599 --> 00:38:29,400
wonderful like this is wonderful is
913
00:38:26,599 --> 00:38:33,319
three tokens in language but if we look
914
00:38:29,400 --> 00:38:36,960
at it in speech it's going to be
915
00:38:33,319 --> 00:38:36,960
like many many
916
00:38:37,560 --> 00:38:46,079
frames so kind of
917
00:38:41,200 --> 00:38:47,680
the semantics of language is already
918
00:38:46,079 --> 00:38:48,960
kind of like if you look at a single
919
00:38:47,680 --> 00:38:51,599
token you already get something
920
00:38:48,960 --> 00:38:52,839
semantically meaningful um but in
921
00:38:51,599 --> 00:38:54,560
contrast if you're looking at like
922
00:38:52,839 --> 00:38:56,000
speech or you're looking at pixels and
923
00:38:54,560 --> 00:38:57,400
images or something like that you're not
924
00:38:56,000 --> 00:39:00,359
going to get something semantically
925
00:38:57,400 --> 00:39:01,920
meaningful uh so uh convolution is used
926
00:39:00,359 --> 00:39:03,359
a lot in that case and also you could
927
00:39:01,920 --> 00:39:06,079
create a convolutional model over
928
00:39:03,359 --> 00:39:08,599
characters as well
929
00:39:06,079 --> 00:39:10,599
um so what is convolution in the first
930
00:39:08,599 --> 00:39:13,319
place um as I mentioned before basically
931
00:39:10,599 --> 00:39:16,359
you take the local window uh around an
932
00:39:13,319 --> 00:39:19,680
input and you run it through um
933
00:39:16,359 --> 00:39:22,079
basically a model and a a good way to
934
00:39:19,680 --> 00:39:24,400
think about it is it's essentially a
935
00:39:22,079 --> 00:39:26,440
feed forward Network where you can
936
00:39:24,400 --> 00:39:28,240
concatenate uh all of the surrounding
937
00:39:26,440 --> 00:39:30,280
vectors together and run them through a
938
00:39:28,240 --> 00:39:34,400
linear transform like this so you can
939
00:39:30,280 --> 00:39:34,400
concatenate x t minus one x t and x t
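As a sketch of what was just described (concatenating x t minus one, x t, and x t plus one, then running the result through one linear transform), here is a minimal NumPy version; the function name and shapes are my own, not from the lecture:

```python
import numpy as np

def window_conv(X, W, b):
    """Width-3 'convolution' viewed as a feed-forward layer: for each
    position t, concatenate [x_{t-1}, x_t, x_{t+1}] and apply one
    linear transform. X: (T, d); W: (3*d, d_out); b: (d_out,)."""
    T, d = X.shape
    padded = np.vstack([np.zeros((1, d)), X, np.zeros((1, d))])  # zero-pad both ends
    windows = np.stack([padded[t:t + 3].reshape(-1) for t in range(T)])
    return windows @ W + b  # (T, d_out)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))        # 5 tokens, 4-dim embeddings
W = rng.normal(size=(12, 6))       # 3 * 4 inputs -> 6 outputs
H = window_conv(X, W, np.zeros(6))
assert H.shape == (5, 6)
```

Each output position depends only on its three-token window, which is what makes this a convolution rather than a full feed-forward layer over the whole sequence.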
940
00:39:35,880 --> 00:39:43,040
plus one convolution can also be used in
941
00:39:39,440 --> 00:39:45,400
Auto regressive models and normally like
942
00:39:43,040 --> 00:39:48,079
we think of it like this so we think
943
00:39:45,400 --> 00:39:50,640
that we're taking the previous one the
944
00:39:48,079 --> 00:39:53,839
current one and the next one and making
945
00:39:50,640 --> 00:39:54,960
a prediction based on this but this
946
00:39:53,839 --> 00:39:56,440
would be good for something like
947
00:39:54,960 --> 00:39:57,720
sequence labeling but it's not good for
948
00:39:56,440 --> 00:39:59,040
for something like language modeling
949
00:39:57,720 --> 00:40:01,400
because in language modeling we can't
950
00:39:59,040 --> 00:40:05,200
look at the future right but there's a
951
00:40:01,400 --> 00:40:07,280
super simple uh solution to this which
952
00:40:05,200 --> 00:40:11,280
is you have a convolution that just
953
00:40:07,280 --> 00:40:13,720
looks at the past basically um and
954
00:40:11,280 --> 00:40:15,319
predicts the next word based on the the
955
00:40:13,720 --> 00:40:16,760
you know current word in the past so
956
00:40:15,319 --> 00:40:19,520
here you would be predicting the word
957
00:40:16,760 --> 00:40:21,040
movie um this is actually essentially
958
00:40:19,520 --> 00:40:23,839
equivalent to the feed forward language
959
00:40:21,040 --> 00:40:25,880
model that I talked about last time uh
960
00:40:23,839 --> 00:40:27,240
so you can also think of that as a
961
00:40:25,880 --> 00:40:30,599
convolution
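A left-padded, past-only version of the same window gives the causal convolution described above for language modeling; again a minimal sketch with made-up shapes:

```python
import numpy as np

def causal_conv(X, W, b):
    """Width-3 causal convolution: position t sees only x_{t-2}, x_{t-1}, x_t,
    so it never peeks at the future -- usable for language modeling."""
    T, d = X.shape
    padded = np.vstack([np.zeros((2, d)), X])  # pad the past only
    windows = np.stack([padded[t:t + 3].reshape(-1) for t in range(T)])
    return windows @ W + b

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
W = rng.normal(size=(12, 4))
H = causal_conv(X, W, np.zeros(4))

# changing a future token never changes earlier outputs
X2 = X.copy(); X2[5] += 1.0
assert np.allclose(H[:5], causal_conv(X2, W, np.zeros(4))[:5])
```

With the window restricted to the past, each position's representation can be used to predict the next token, which is why this is essentially the feed-forward language model mentioned above.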
962
00:40:27,240 --> 00:40:32,119
a convolutional language model um so
963
00:40:30,599 --> 00:40:33,359
when whenever you say feed forward or
964
00:40:32,119 --> 00:40:36,160
convolutional language model they're
965
00:40:33,359 --> 00:40:38,880
basically the same uh modulo some uh
966
00:40:36,160 --> 00:40:42,359
some details about striding and stuff
967
00:40:38,880 --> 00:40:42,359
which I'm going to talk about in the class
968
00:40:43,000 --> 00:40:49,359
today cool um I covered convolution very
969
00:40:47,400 --> 00:40:51,440
briefly because it's also the least used
970
00:40:49,359 --> 00:40:53,400
of the three uh sequence modeling things
971
00:40:51,440 --> 00:40:55,400
in NLP nowadays but um are there any
972
00:40:53,400 --> 00:40:58,319
questions there or can I just run into
973
00:40:55,400 --> 00:40:58,319
attention
974
00:40:59,119 --> 00:41:04,040
okay cool I'll go into attention next so
975
00:41:02,400 --> 00:41:06,400
uh the basic idea about
976
00:41:04,040 --> 00:41:11,119
attention um
977
00:41:06,400 --> 00:41:12,839
is that we encode uh each token in the
978
00:41:11,119 --> 00:41:14,440
sequence into a
979
00:41:12,839 --> 00:41:19,119
vector
980
00:41:14,440 --> 00:41:21,640
um or so we we have input an input
981
00:41:19,119 --> 00:41:24,240
sequence that we'd like to encode over
982
00:41:21,640 --> 00:41:27,800
and we perform a linear combination of
983
00:41:24,240 --> 00:41:30,640
the vectors weighted by attention weights
984
00:41:27,800 --> 00:41:33,359
and there's two varieties of attention
985
00:41:30,640 --> 00:41:35,160
uh that are good to know about the first
986
00:41:33,359 --> 00:41:37,440
one is cross
987
00:41:35,160 --> 00:41:40,040
attention where each element in a sequence
988
00:41:37,440 --> 00:41:41,960
attends to elements of another sequence
989
00:41:40,040 --> 00:41:44,280
and this is widely used in encoder
990
00:41:41,960 --> 00:41:47,359
decoder models where you have one
991
00:41:44,280 --> 00:41:50,319
encoder and you have a separate decoder
992
00:41:47,359 --> 00:41:51,880
um these models the popular models that
993
00:41:50,319 --> 00:41:55,119
are like this that people still use a
994
00:41:51,880 --> 00:41:57,480
lot are T5 uh is a example of an encoder
995
00:41:55,119 --> 00:42:00,760
decoder model or mBART is another
996
00:41:57,480 --> 00:42:03,160
example of encoder decoder model um but
997
00:42:00,760 --> 00:42:07,880
basically the uh The Way Cross attention
998
00:42:03,160 --> 00:42:10,359
works is we have for example an English
999
00:42:07,880 --> 00:42:14,079
uh sentence here and we want to
1000
00:42:10,359 --> 00:42:17,560
translate it into uh into a Japanese
1001
00:42:14,079 --> 00:42:23,040
sentence and so when we output the first
1002
00:42:17,560 --> 00:42:25,119
word we would mostly uh upweight this or
1003
00:42:23,040 --> 00:42:26,800
sorry we have a we have a Japanese
1004
00:42:25,119 --> 00:42:29,119
sentence and we would like to translated
1005
00:42:26,800 --> 00:42:31,680
into an English sentence for example so
1006
00:42:29,119 --> 00:42:35,160
when we generate the first word in
1007
00:42:31,680 --> 00:42:38,400
Japanese which means this so in order to
1008
00:42:35,160 --> 00:42:40,079
Output the first word we would first uh
1009
00:42:38,400 --> 00:42:43,559
do a weighted sum of all of the
1010
00:42:40,079 --> 00:42:46,240
embeddings of the Japanese sentence and
1011
00:42:43,559 --> 00:42:49,359
we would focus probably most on this
1012
00:42:46,240 --> 00:42:51,920
word up here C because it corresponds to
1013
00:42:49,359 --> 00:42:51,920
the word
1014
00:42:53,160 --> 00:42:59,800
this in the next step of generating an
1015
00:42:55,960 --> 00:43:01,319
out output uh we would uh attend to
1016
00:42:59,800 --> 00:43:04,119
different words because different words
1017
00:43:01,319 --> 00:43:07,680
correspond to is so you would attend to
1018
00:43:04,119 --> 00:43:11,040
which corresponds to is um when you
1019
00:43:07,680 --> 00:43:12,599
output an actually there's no word in the
1020
00:43:11,040 --> 00:43:16,839
Japanese sentence that correspon to and
1021
00:43:12,599 --> 00:43:18,720
so you might get a very like blob like
1022
00:43:16,839 --> 00:43:21,319
uh attention weight that doesn't look
1023
00:43:18,720 --> 00:43:23,319
very uh that looks very smooth not very
1024
00:43:21,319 --> 00:43:25,119
peaky and then when you output example you'd
1025
00:43:23,319 --> 00:43:27,880
have strong attention on uh on the word
1026
00:43:25,119 --> 00:43:29,400
that corresponds to example
1027
00:43:27,880 --> 00:43:31,599
there's also self
1028
00:43:29,400 --> 00:43:33,480
attention and um self attention
1029
00:43:31,599 --> 00:43:36,000
basically what it does is each element
1030
00:43:33,480 --> 00:43:38,640
in a sequence attends to elements of the
1031
00:43:36,000 --> 00:43:40,240
same sequence and so this is a good way
1032
00:43:38,640 --> 00:43:43,359
of doing sequence encoding just like we
1033
00:43:40,240 --> 00:43:46,280
used rnns or uh convolutional
1034
00:43:43,359 --> 00:43:47,559
neural networks and so um the reason why
1035
00:43:46,280 --> 00:43:50,119
you would want to do something like this
1036
00:43:47,559 --> 00:43:52,760
just to give an example let's say we
1037
00:43:50,119 --> 00:43:54,280
wanted to run this we wanted to encode
1038
00:43:52,760 --> 00:43:56,920
the English sentence before doing
1039
00:43:54,280 --> 00:44:00,040
something like translation into Japanese
1040
00:43:56,920 --> 00:44:01,559
and if we did that um this maybe we
1041
00:44:00,040 --> 00:44:02,960
don't need to attend to a whole lot of
1042
00:44:01,559 --> 00:44:06,440
other things because it's kind of clear
1043
00:44:02,960 --> 00:44:08,920
what this means but um
1044
00:44:06,440 --> 00:44:10,880
is the way you would translate it would
1045
00:44:08,920 --> 00:44:12,280
be rather heavily dependent on what the
1046
00:44:10,880 --> 00:44:13,640
other words in the sentence so you might
1047
00:44:12,280 --> 00:44:17,280
want to attend to all the other words in
1048
00:44:13,640 --> 00:44:20,559
the sentence say oh this is is
1049
00:44:17,280 --> 00:44:22,839
co-occurring with this and example and so
1050
00:44:20,559 --> 00:44:24,440
if that's the case then well we would
1051
00:44:22,839 --> 00:44:26,920
need to translate it in this way or we'd
1052
00:44:24,440 --> 00:44:28,960
need to handle it in this way and that's
1053
00:44:26,920 --> 00:44:29,880
exactly the same for you know any other
1054
00:44:28,960 --> 00:44:32,720
sort of
1055
00:44:29,880 --> 00:44:35,880
disambiguation uh style
1056
00:44:32,720 --> 00:44:37,720
task so uh yeah we do something similar
1057
00:44:35,880 --> 00:44:39,040
like this so basically cross attention
1058
00:44:37,720 --> 00:44:42,520
is attending to a different sequence
1059
00:44:39,040 --> 00:44:42,520
self attention is attending to the same
1060
00:44:42,680 --> 00:44:46,559
sequence so how do we do this
1061
00:44:44,960 --> 00:44:48,200
mechanistically in the first place so
1062
00:44:46,559 --> 00:44:51,480
like let's say We're translating from
1063
00:44:48,200 --> 00:44:52,880
Japanese to English um we would have uh
1064
00:44:51,480 --> 00:44:55,960
and we're doing it with an encoder
1065
00:44:52,880 --> 00:44:57,480
decoder model where we have already
1066
00:44:55,960 --> 00:45:00,640
encoded the
1067
00:44:57,480 --> 00:45:02,920
input sequence and now we're generating
1068
00:45:00,640 --> 00:45:05,240
the output sequence with a for example a
1069
00:45:02,920 --> 00:45:09,880
recurrent neural network um and so if
1070
00:45:05,240 --> 00:45:12,400
that's the case we have uh I I hate uh
1071
00:45:09,880 --> 00:45:14,440
like this and we want to predict the
1072
00:45:12,400 --> 00:45:17,280
next word so what we would do is we
1073
00:45:14,440 --> 00:45:19,480
would take the current state
1074
00:45:17,280 --> 00:45:21,480
here and uh we use something called a
1075
00:45:19,480 --> 00:45:22,760
query vector and the query Vector is
1076
00:45:21,480 --> 00:45:24,880
essentially the vector that we want to
1077
00:45:22,760 --> 00:45:28,720
use to decide what to attend
1078
00:45:24,880 --> 00:45:31,800
to we then have key vectors and the key
1079
00:45:28,720 --> 00:45:35,319
vectors are the vectors that we would
1080
00:45:31,800 --> 00:45:37,480
like to use to decide which ones we
1081
00:45:35,319 --> 00:45:40,720
should be attending
1082
00:45:37,480 --> 00:45:42,040
to and then for each query key pair we
1083
00:45:40,720 --> 00:45:45,319
calculate a
1084
00:45:42,040 --> 00:45:48,319
weight and we do it like this um this
1085
00:45:45,319 --> 00:45:50,680
gear here is some function that takes in
1086
00:45:48,319 --> 00:45:53,200
the uh query vector and the key vector
1087
00:45:50,680 --> 00:45:55,599
and outputs a weight and notably we use
1088
00:45:53,200 --> 00:45:57,559
the same function every single time this
1089
00:45:55,599 --> 00:46:00,960
is really important again because like
1090
00:45:57,559 --> 00:46:03,760
an RNN that allows us to extrapolate to
1091
00:46:00,960 --> 00:46:05,960
unlimited length sequences because uh we
1092
00:46:03,760 --> 00:46:08,280
only have one set of you know we only
1093
00:46:05,960 --> 00:46:10,359
have one function no matter how long the
1094
00:46:08,280 --> 00:46:13,200
sequence gets so we can just apply it
1095
00:46:10,359 --> 00:46:15,839
over and over and over
1096
00:46:13,200 --> 00:46:17,920
again uh once we calculate these values
1097
00:46:15,839 --> 00:46:20,839
we normalize so that they add up to one
1098
00:46:17,920 --> 00:46:22,559
using the softmax function and um
1099
00:46:20,839 --> 00:46:27,800
basically in this case that would be
1100
00:46:22,559 --> 00:46:27,800
like 0.76 uh etc etc oops
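Step one, scoring each key against the query and normalizing with a softmax so the weights add up to one, can be sketched as follows; the numbers are illustrative, and a plain dot product stands in for the score function shown as a gear on the slide:

```python
import numpy as np

def attention_weights(q, K):
    """Score each key vector against the query (dot product here, but
    any score function works), then normalize with a softmax."""
    scores = K @ q                 # one scalar score per key
    scores = scores - scores.max()  # stabilize the exponent
    w = np.exp(scores)
    return w / w.sum()             # weights sum to one

q = np.array([1.0, 0.0])
K = np.array([[2.0, 0.0],   # close to the query
              [0.0, 2.0],
              [-1.0, 1.0]])
w = attention_weights(q, K)
assert np.isclose(w.sum(), 1.0)
assert w.argmax() == 0  # the key most similar to the query gets the most weight
```

The same function is applied to every query-key pair, which is what lets attention handle sequences of any length.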
1101
00:46:28,800 --> 00:46:33,559
so step number two is once we have this
1102
00:46:32,280 --> 00:46:37,839
uh these
1103
00:46:33,559 --> 00:46:40,160
attention uh values here notably these
1104
00:46:37,839 --> 00:46:41,359
values aren't really probabilities uh
1105
00:46:40,160 --> 00:46:42,800
despite the fact that they're between
1106
00:46:41,359 --> 00:46:44,240
zero and one and they add up to one
1107
00:46:42,800 --> 00:46:47,440
because all we're doing is we're using
1108
00:46:44,240 --> 00:46:50,480
them to uh to combine together uh
1109
00:46:47,440 --> 00:46:51,800
multiple vectors so I we don't really
1110
00:46:50,480 --> 00:46:53,319
normally call them attention
1111
00:46:51,800 --> 00:46:54,680
probabilities or anything like that I
1112
00:46:53,319 --> 00:46:56,319
just call them attention values or
1113
00:46:54,680 --> 00:46:59,680
normalized attention values
1114
00:46:56,319 --> 00:47:03,760
um but once we have these uh
1115
00:46:59,680 --> 00:47:05,760
attention uh attention weights we have
1116
00:47:03,760 --> 00:47:07,200
value vectors and these value vectors
1117
00:47:05,760 --> 00:47:10,000
are the vectors that we would actually
1118
00:47:07,200 --> 00:47:12,319
like to combine together to get the uh
1119
00:47:10,000 --> 00:47:14,000
encoding here and so we take these
1120
00:47:12,319 --> 00:47:17,559
vectors we do a weighted sum of the
1121
00:47:14,000 --> 00:47:21,200
vectors and get a final sum
1122
00:47:17,559 --> 00:47:22,920
here and we can take this uh sum and
1123
00:47:21,200 --> 00:47:26,920
use it in any part of the model that we
1124
00:47:22,920 --> 00:47:29,079
would like um and so this is very broad it
1125
00:47:26,920 --> 00:47:31,200
can be used in any way now the most
1126
00:47:29,079 --> 00:47:33,240
common way to use it is just have lots
1127
00:47:31,200 --> 00:47:35,000
of self attention layers like in
1128
00:47:33,240 --> 00:47:37,440
something in a Transformer but um you
1129
00:47:35,000 --> 00:47:40,160
can also use it in decoder or other
1130
00:47:37,440 --> 00:47:42,920
things like that as
1131
00:47:40,160 --> 00:47:45,480
well this is an actual graphical example
1132
00:47:42,920 --> 00:47:47,319
from the original attention paper um I'm
1133
00:47:45,480 --> 00:47:50,000
going to give some other examples from
1134
00:47:47,319 --> 00:47:52,480
Transformers in the next class but
1135
00:47:50,000 --> 00:47:55,400
basically you can see that the attention
1136
00:47:52,480 --> 00:47:57,559
weights uh for this English to French I
1137
00:47:55,400 --> 00:48:00,520
think it's English French translation
1138
00:47:57,559 --> 00:48:02,920
task basically um overlap with what you
1139
00:48:00,520 --> 00:48:04,440
would expect uh if you can read English
1140
00:48:02,920 --> 00:48:06,599
and French it's kind of the words that
1141
00:48:04,440 --> 00:48:09,319
are semantically similar to each other
1142
00:48:06,599 --> 00:48:12,920
um it even learns to do this reordering
1143
00:48:09,319 --> 00:48:14,880
uh in an appropriate way here and all of
1144
00:48:12,920 --> 00:48:16,720
this is completely unsupervised so you
1145
00:48:14,880 --> 00:48:18,079
never actually give the model
1146
00:48:16,720 --> 00:48:19,440
information about what it should be
1147
00:48:18,079 --> 00:48:21,559
attending to it's all learned through
1148
00:48:19,440 --> 00:48:23,520
gradient descent and the model learns to
1149
00:48:21,559 --> 00:48:27,640
do this by making the embeddings of the
1150
00:48:23,520 --> 00:48:27,640
key and query vectors closer together
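Putting query, keys, values, and the weighted sum together, here is single-query attention in miniature; this is my own toy code, with a plain dot-product score standing in for whatever score function you choose:

```python
import numpy as np

def attend(q, K, V):
    """Single-query attention: score the keys against the query,
    softmax-normalize, then take a weighted sum of the value vectors."""
    scores = K @ q
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ V, w   # context vector and the attention weights

K = np.array([[3.0, 0.0],   # key 0 matches the query
              [0.0, 3.0]])
V = np.array([[1.0, 0.0],
              [0.0, 1.0]])
ctx, w = attend(np.array([1.0, 0.0]), K, V)
assert w[0] > w[1]          # attends mostly to the matching key
assert ctx[0] > ctx[1]      # so the context is dominated by value 0
```

The resulting context vector can then be fed into any part of the model, for instance to help predict the next output word in a decoder.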
1151
00:48:28,440 --> 00:48:33,240
cool
1152
00:48:30,000 --> 00:48:33,240
um any
1153
00:48:33,800 --> 00:48:40,040
questions okay so um next I'd like to go
1154
00:48:38,440 --> 00:48:41,680
a little bit into how we actually
1155
00:48:40,040 --> 00:48:43,599
calculate the attention score function
1156
00:48:41,680 --> 00:48:44,839
so that's the little gear that I had on
1157
00:48:43,599 --> 00:48:50,280
my
1158
00:48:44,839 --> 00:48:53,559
uh my slide before so here Q is a query
1159
00:48:50,280 --> 00:48:56,440
and K is the key um the original
1160
00:48:53,559 --> 00:48:58,400
attention paper used a multi-layer
1161
00:48:56,440 --> 00:49:00,119
uh a multi-layer neural network to
1162
00:48:58,400 --> 00:49:02,440
calculate this so basically what it did
1163
00:49:00,119 --> 00:49:05,319
is it concatenated the query and key
1164
00:49:02,440 --> 00:49:08,000
Vector together multiplied it by a
1165
00:49:05,319 --> 00:49:12,240
weight Matrix calculated a tan H and
1166
00:49:08,000 --> 00:49:15,040
then ran it through uh a weight
1167
00:49:12,240 --> 00:49:19,799
Vector so this
1168
00:49:15,040 --> 00:49:22,480
is essentially very expressive
1169
00:49:19,799 --> 00:49:24,799
um uh it's flexible it's often good with
1170
00:49:22,480 --> 00:49:27,960
large data but it adds extra parameters
1171
00:49:24,799 --> 00:49:30,359
and uh computation time uh to your
1172
00:49:27,960 --> 00:49:31,559
calculations here so it's not as widely
1173
00:49:30,359 --> 00:49:34,359
used
1174
00:49:31,559 --> 00:49:37,799
anymore the uh other thing which was
1175
00:49:34,359 --> 00:49:41,599
proposed by Luong et al is a bilinear
1176
00:49:37,799 --> 00:49:43,200
function um and a bilinear function
1177
00:49:41,599 --> 00:49:45,920
basically what it does is it has your
1178
00:49:43,200 --> 00:49:48,319
key Vector it has your query vector and
1179
00:49:45,920 --> 00:49:51,440
it has a matrix in between them like
1180
00:49:48,319 --> 00:49:53,000
this and uh then you calculate uh you
1181
00:49:51,440 --> 00:49:54,520
calculate the
1182
00:49:53,000 --> 00:49:56,680
weight
1183
00:49:54,520 --> 00:49:59,880
so
1184
00:49:56,680 --> 00:50:03,200
this is uh nice because it basically um
1185
00:49:59,880 --> 00:50:05,760
can transform uh the key and
1186
00:50:03,200 --> 00:50:08,760
query uh together
1187
00:50:05,760 --> 00:50:08,760
here
1188
00:50:09,119 --> 00:50:13,559
um people have also experimented with
1189
00:50:11,760 --> 00:50:16,079
DOT product and the dot product is
1190
00:50:13,559 --> 00:50:19,839
basically query times
1191
00:50:16,079 --> 00:50:23,480
key uh query transpose times key or
1192
00:50:19,839 --> 00:50:25,760
query dot key this is okay but the problem
1193
00:50:23,480 --> 00:50:27,280
with this is then the query vector and
1194
00:50:25,760 --> 00:50:30,160
the key vectors have to be in exactly
1195
00:50:27,280 --> 00:50:31,920
the same space and that's kind of too
1196
00:50:30,160 --> 00:50:34,799
hard of a constraint so it doesn't scale
1197
00:50:31,920 --> 00:50:38,000
very well if you're um if you're working
1198
00:50:34,799 --> 00:50:40,839
hard uh if you're uh like training on
1199
00:50:38,000 --> 00:50:45,400
lots of data um then the scaled dot
1200
00:50:40,839 --> 00:50:47,880
product um the scale dot product here uh
1201
00:50:45,400 --> 00:50:50,079
one problem is that the scale of the dot
1202
00:50:47,880 --> 00:50:53,680
product increases as the dimensions get
1203
00:50:50,079 --> 00:50:55,880
larger and so there's a fix to scale by
1204
00:50:53,680 --> 00:50:58,839
the square root of the length of one of
1205
00:50:55,880 --> 00:51:00,680
the vectors um and so basically you're
1206
00:50:58,839 --> 00:51:04,559
multiplying uh you're taking the dot
1207
00:51:00,680 --> 00:51:06,559
product but you're dividing by the uh
1208
00:51:04,559 --> 00:51:09,359
the square root of the length of one of
1209
00:51:06,559 --> 00:51:11,839
the vectors uh does anyone have an idea
1210
00:51:09,359 --> 00:51:13,599
why you might take the square root here
1211
00:51:11,839 --> 00:51:16,920
if you've taken a machine
1212
00:51:13,599 --> 00:51:20,000
learning uh or maybe statistics class
1213
00:51:16,920 --> 00:51:20,000
you might have a an
1214
00:51:20,599 --> 00:51:26,599
idea any any ideas yeah it's normalization
1215
00:51:24,720 --> 00:51:29,079
to make sure
1216
00:51:26,599 --> 00:51:32,760
because otherwise it will impact the
1217
00:51:29,079 --> 00:51:35,640
result because we want normalize one yes
1218
00:51:32,760 --> 00:51:37,920
so we do we do want to normalize it um
1219
00:51:35,640 --> 00:51:40,000
and so that's the reason why we divide
1220
00:51:37,920 --> 00:51:41,920
by the length um and that prevents it
1221
00:51:40,000 --> 00:51:43,839
from getting too large
1222
00:51:41,920 --> 00:51:45,920
specifically does anyone have an idea
1223
00:51:43,839 --> 00:51:49,440
why you take the square root here as
1224
00:51:45,920 --> 00:51:49,440
opposed to dividing just by the length
1225
00:51:52,400 --> 00:51:59,480
overall so um this is this is pretty
1226
00:51:55,400 --> 00:52:01,720
tough and actually uh we I didn't know
1227
00:51:59,480 --> 00:52:04,359
one of the last times I did this class
1228
00:52:01,720 --> 00:52:06,640
uh and had to actually go look for it
1229
00:52:04,359 --> 00:52:09,000
but basically the reason why is because
1230
00:52:06,640 --> 00:52:11,400
if you um if you have a whole bunch of
1231
00:52:09,000 --> 00:52:12,720
random variables so let's say you have a
1232
00:52:11,400 --> 00:52:14,040
whole bunch of random variables no
1233
00:52:12,720 --> 00:52:15,240
matter what kind they are as long as
1234
00:52:14,040 --> 00:52:19,680
they're from the same distribution
1235
00:52:15,240 --> 00:52:19,680
they're IID and you add them all
1236
00:52:20,160 --> 00:52:25,720
together um then the standard deviation
1237
00:52:23,200 --> 00:52:27,760
of the sum goes up with the square
1238
00:52:25,720 --> 00:52:31,119
root of the number of variables so a
1239
00:52:27,760 --> 00:52:33,319
dot product of two d-dimensional
1240
00:52:31,119 --> 00:52:35,640
vectors which is a sum of d products
1241
00:52:33,319 --> 00:52:38,880
has a standard deviation that grows
1242
00:52:35,640 --> 00:52:41,040
with the square root of d so dividing
1243
00:52:38,880 --> 00:52:44,040
by the square root of d is like
1244
00:52:41,040 --> 00:52:48,240
normalizing by
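That square-root growth is easy to check numerically; this is a quick sketch of my own, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (16, 256):
    # dot products of many pairs of random unit-variance d-dim vectors
    dots = np.einsum('ij,ij->i', rng.normal(size=(20000, d)),
                     rng.normal(size=(20000, d)))
    # the std of the dot product grows like sqrt(d),
    # so dividing scores by sqrt(d) keeps their scale roughly constant
    assert abs(dots.std() / np.sqrt(d) - 1.0) < 0.1
```

Without this scaling, larger dimensions push the softmax inputs to extreme values, making the attention distribution needlessly peaky.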
1245
00:52:44,040 --> 00:52:51,040
that so um it's a it's that's actually I
1246
00:52:48,240 --> 00:52:53,359
don't think explicitly explained in the
1247
00:52:51,040 --> 00:52:54,720
uh attention is all you need paper uh
1248
00:52:53,359 --> 00:52:57,920
the Vaswani et al paper where they introduce
1249
00:52:54,720 --> 00:53:01,079
this but that's basic idea um in terms
1250
00:52:57,920 --> 00:53:03,839
of what people use most widely nowadays
1251
00:53:01,079 --> 00:53:07,680
um they
1252
00:53:03,839 --> 00:53:07,680
are basically doing
1253
00:53:24,160 --> 00:53:27,160
this
1254
00:53:30,280 --> 00:53:34,880
so they're taking the the hidden state
1255
00:53:33,000 --> 00:53:36,599
from the keys and multiplying it by a
1256
00:53:34,880 --> 00:53:39,440
matrix the hidden state by the queries
1257
00:53:36,599 --> 00:53:41,680
and multiplying it by a matrix um this
1258
00:53:39,440 --> 00:53:46,559
is what is done in uh in
1259
00:53:41,680 --> 00:53:50,280
Transformers and the uh and then they're
1260
00:53:46,559 --> 00:53:54,160
using this to um they're normalizing it
1261
00:53:50,280 --> 00:53:57,160
by this uh square root here
1262
00:53:54,160 --> 00:53:57,160
and
1263
00:53:59,440 --> 00:54:05,040
so this is essentially a bilinear
1264
00:54:02,240 --> 00:54:07,680
model um it's a bilinear model that is
1265
00:54:05,040 --> 00:54:09,119
normalized uh they call it uh scaled dot
1266
00:54:07,680 --> 00:54:11,119
product attention but actually because
1267
00:54:09,119 --> 00:54:15,520
they have these weight matrices uh it's
1268
00:54:11,119 --> 00:54:18,839
a bilinear model so um that's the the
1269
00:54:15,520 --> 00:54:18,839
most standard thing to be used
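The score functions discussed here, the multi-layer network, the bilinear form, and the plain and scaled dot products, can be put side by side in a short sketch; the weight shapes are my own choice:

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
q, k = rng.normal(size=d), rng.normal(size=d)

# multi-layer network score: concatenate q and k, weight matrix,
# tanh, then a weight vector (as in the original attention paper)
W1, w2 = rng.normal(size=(d, 2 * d)), rng.normal(size=d)
mlp = w2 @ np.tanh(W1 @ np.concatenate([q, k]))

# bilinear score (Luong et al.): q^T W k
W = rng.normal(size=(d, d))
bilinear = q @ W @ k

# plain and scaled dot product
dot = q @ k
scaled = dot / np.sqrt(d)

assert np.ndim(mlp) == 0 and np.ndim(bilinear) == 0  # each yields one scalar score
```

In Transformers the keys and queries are first multiplied by learned matrices and then combined with a scaled dot product, which, as noted above, makes the whole thing a normalized bilinear model.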
1270
00:54:20,200 --> 00:54:24,079
nowadays cool any any questions about
1271
00:54:22,520 --> 00:54:27,079
this
1272
00:54:24,079 --> 00:54:27,079
part
1273
00:54:28,240 --> 00:54:36,559
okay so um finally when you actually
1274
00:54:32,280 --> 00:54:36,559
train the model um as I mentioned
1275
00:54:41,960 --> 00:54:45,680
before right at the very
1276
00:54:48,040 --> 00:54:52,400
beginning
1277
00:54:49,839 --> 00:54:55,760
we when we're training an
1278
00:54:52,400 --> 00:54:57,400
autoregressive model we don't want to be
1279
00:54:55,760 --> 00:54:59,799
referring to the Future to things in the
1280
00:54:57,400 --> 00:55:01,240
future um because then you know
1281
00:54:59,799 --> 00:55:03,079
basically we'd be cheating and we'd have
1282
00:55:01,240 --> 00:55:04,599
a nonprobabilistic model it wouldn't be
1283
00:55:03,079 --> 00:55:08,960
good when we actually have to generate
1284
00:55:04,599 --> 00:55:12,119
left to right um and
1285
00:55:08,960 --> 00:55:15,720
so we essentially want to prevent
1286
00:55:12,119 --> 00:55:17,480
ourselves from using information from
1287
00:55:15,720 --> 00:55:20,319
the
1288
00:55:17,480 --> 00:55:22,839
future
1289
00:55:20,319 --> 00:55:24,240
and in an unconditioned model we want to
1290
00:55:22,839 --> 00:55:27,400
prevent ourselves from using any
1291
00:55:24,240 --> 00:55:29,680
information in the future here um in a
1292
00:55:27,400 --> 00:55:31,520
conditioned model we're okay with doing
1293
00:55:29,680 --> 00:55:33,480
kind of
1294
00:55:31,520 --> 00:55:35,880
bidirectional conditioning here to
1295
00:55:33,480 --> 00:55:37,359
calculate the representations but we're
1296
00:55:35,880 --> 00:55:40,440
not okay with doing it on the target
1297
00:55:37,359 --> 00:55:40,440
side so basically what we
1298
00:55:44,240 --> 00:55:50,960
do basically what we do is we create a
1299
00:55:47,920 --> 00:55:52,400
mask that prevents us from attending to
1300
00:55:50,960 --> 00:55:54,559
any of the information in the future
1301
00:55:52,400 --> 00:55:56,440
when we're uh predicting when we're
1302
00:55:54,559 --> 00:56:00,799
calculating the representations of the
1303
00:55:56,440 --> 00:56:04,880
the current thing uh word and
1304
00:56:00,799 --> 00:56:08,280
technically how we do this is we have
1305
00:56:04,880 --> 00:56:08,280
the attention
1306
00:56:09,079 --> 00:56:13,799
values uh like
1307
00:56:11,680 --> 00:56:15,480
2.1
1308
00:56:13,799 --> 00:56:17,880
attention
1309
00:56:15,480 --> 00:56:19,920
0.3 and
1310
00:56:17,880 --> 00:56:22,480
attention uh
1311
00:56:19,920 --> 00:56:24,960
0.5 or something like
1312
00:56:22,480 --> 00:56:27,480
that these are eventually going to be
1313
00:56:24,960 --> 00:56:29,799
fed through the softmax to calculate
1314
00:56:27,480 --> 00:56:32,119
the attention values that we use to do
1315
00:56:29,799 --> 00:56:33,680
the weighting so what we do is any ones we
1316
00:56:32,119 --> 00:56:36,160
don't want to attend to we just add
1317
00:56:33,680 --> 00:56:39,799
negative infinity or add a very large
1318
00:56:36,160 --> 00:56:42,119
negative number so we uh cross that out
1319
00:56:39,799 --> 00:56:44,000
and set this the negative infinity and
1320
00:56:42,119 --> 00:56:45,440
so then when we take the softmax basically
1321
00:56:44,000 --> 00:56:47,839
the value goes to zero and we don't
1322
00:56:45,440 --> 00:56:49,359
attend to it so um this is called the
1323
00:56:47,839 --> 00:56:53,240
attention mask and you'll see it when
1324
00:56:49,359 --> 00:56:53,240
you have to implement
1325
00:56:53,440 --> 00:56:56,880
attention cool
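The masking trick just described, adding a very large negative number to future positions before the softmax so their weight goes to zero, looks like this in miniature, reusing the example scores 2.1, 0.3, 0.5:

```python
import numpy as np

def causal_masked_softmax(scores):
    """Causal attention mask: position i may only attend to positions <= i.
    Future positions get a huge negative score ('negative infinity'),
    so the softmax drives their weight to zero."""
    T = scores.shape[0]
    future = np.triu(np.ones((T, T)), k=1).astype(bool)  # strictly future entries
    masked = np.where(future, -1e9, scores)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

A = causal_masked_softmax(np.array([[2.1, 0.3, 0.5],
                                    [1.0, 0.2, 0.7],
                                    [0.1, 0.9, 0.4]]))
assert np.allclose(A.sum(axis=1), 1.0)       # each row still sums to one
assert np.allclose(A[0], [1.0, 0.0, 0.0])    # first position sees only itself
assert np.isclose(A[1, 2], 0.0)              # no attention to the future
```

This is the attention mask you will implement in the assignment: the scores for the future positions are crossed out before normalization rather than after.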
1326
00:56:57,039 --> 00:57:00,200
any any questions about
1327
00:57:02,079 --> 00:57:08,599
this okay great um so next I'd like to
1328
00:57:05,839 --> 00:57:11,039
go to Applications of sequence models um
1329
00:57:08,599 --> 00:57:13,200
there's a bunch of ways that you can use
1330
00:57:11,039 --> 00:57:16,160
sequence models of any variety I wrote
1331
00:57:13,200 --> 00:57:18,400
RNN here arbitrarily but it could be
1332
00:57:16,160 --> 00:57:21,720
convolution or Transformer or anything
1333
00:57:18,400 --> 00:57:23,559
else so the first one is encoding
1334
00:57:21,720 --> 00:57:26,839
sequences
1335
00:57:23,559 --> 00:57:29,240
um and essentially if you do it with an
1336
00:57:26,839 --> 00:57:31,559
RNN this is one way you can encode a
1337
00:57:29,240 --> 00:57:35,799
sequence basically you take the
1338
00:57:31,559 --> 00:57:36,960
last uh value here and you use it to uh
1339
00:57:35,799 --> 00:57:40,559
encode the
1340
00:57:36,960 --> 00:57:42,720
output this can be used for any sort of
1341
00:57:40,559 --> 00:57:45,839
uh like binary or multiclass prediction
1342
00:57:42,720 --> 00:57:48,280
problem it's also right now used very
1343
00:57:45,839 --> 00:57:50,920
widely in sentence representations for
1344
00:57:48,280 --> 00:57:54,200
retrieval uh so for example you build a
1345
00:57:50,920 --> 00:57:55,520
big retrieval index uh with these
1346
00:57:54,200 --> 00:57:57,920
vectors
1347
00:57:55,520 --> 00:57:59,480
and then you also
1348
00:57:57,920 --> 00:58:02,119
encode a query and you do a vector
1349
00:57:59,480 --> 00:58:04,760
nearest neighbor search to look up uh
1350
00:58:02,119 --> 00:58:06,760
the most similar sentence here so this
1351
00:58:04,760 --> 00:58:10,160
is uh these are two applications where
1352
00:58:06,760 --> 00:58:13,440
you use something like this right on
1353
00:58:10,160 --> 00:58:15,520
this slide I wrote that you use the last
1354
00:58:13,440 --> 00:58:17,359
Vector here but actually a lot of the
1355
00:58:15,520 --> 00:58:20,039
time it's also a good idea to just take
1356
00:58:17,359 --> 00:58:22,599
the mean of the vectors or take the max
1357
00:58:20,039 --> 00:58:26,640
of all of the vectors
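The pooling choices just mentioned, taking the last vector, the mean, or the max over all of the token vectors, look like this as a sketch:

```python
import numpy as np

def pool(H, mode="mean"):
    """Collapse a (T, d) sequence of token vectors into one d-dim vector."""
    if mode == "last":
        return H[-1]            # final hidden state only
    if mode == "mean":
        return H.mean(axis=0)   # average over positions
    if mode == "max":
        return H.max(axis=0)    # element-wise max over positions
    raise ValueError(mode)

H = np.array([[1.0, 4.0],
              [3.0, 2.0]])
assert np.allclose(pool(H, "last"), [3.0, 2.0])
assert np.allclose(pool(H, "mean"), [2.0, 3.0])
assert np.allclose(pool(H, "max"),  [3.0, 4.0])
```

As noted above, mean or max pooling is often the safer default unless the model was specifically trained to pack the sequence into its final vector.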
1358
00:58:22,599 --> 00:58:29,119
uh in fact I would almost I would almost
1359
00:58:26,640 --> 00:58:30,520
say that that's usually a better choice
1360
00:58:29,119 --> 00:58:32,760
if you're doing any sort of thing where
1361
00:58:30,520 --> 00:58:35,359
you need a single Vector unless your
1362
00:58:32,760 --> 00:58:38,200
model has been specifically trained to
1363
00:58:35,359 --> 00:58:41,480
have good like output vectors uh from
1364
00:58:38,200 --> 00:58:44,359
the final Vector here so um you could
1365
00:58:41,480 --> 00:58:46,880
also just take the the mean of all of
1366
00:58:44,359 --> 00:58:46,880
the purple
1367
00:58:48,240 --> 00:58:52,960
ones um another thing you can do is
1368
00:58:50,280 --> 00:58:54,359
encode tokens for sequence labeling Um
1369
00:58:52,960 --> 00:58:56,200
this can also be used for language
1370
00:58:54,359 --> 00:58:58,280
modeling and what do I mean it can be
1371
00:58:56,200 --> 00:59:00,039
used for language
1372
00:58:58,280 --> 00:59:03,319
modeling
1373
00:59:00,039 --> 00:59:06,599
basically you can view this as first
1374
00:59:03,319 --> 00:59:09,200
running the sequence encoding and then
1375
00:59:06,599 --> 00:59:12,319
after that making all of the predictions
1376
00:59:09,200 --> 00:59:15,240
um it's also a good thing to know
1377
00:59:12,319 --> 00:59:18,440
computationally because um often you can
1378
00:59:15,240 --> 00:59:20,720
do sequence encoding uh kind of all in
1379
00:59:18,440 --> 00:59:22,440
parallel and yeah actually I said I was
1380
00:59:20,720 --> 00:59:23,359
going to mention I said I was going to
1381
00:59:22,440 --> 00:59:25,079
mention that but I don't think I
1382
00:59:23,359 --> 00:59:27,319
actually have a slide about it but um
1383
00:59:25,079 --> 00:59:29,720
one important thing about rnn's compared
1384
00:59:27,319 --> 00:59:33,079
to convolution or Transformers uh sorry
1385
00:59:29,720 --> 00:59:34,839
convolution or attention is rnns in
1386
00:59:33,079 --> 00:59:37,440
order to calculate this RNN you need to
1387
00:59:34,839 --> 00:59:39,599
wait for this RNN to finish so it's
1388
00:59:37,440 --> 00:59:41,200
sequential and you need to go like here
1389
00:59:39,599 --> 00:59:43,480
and then here and then here and then
1390
00:59:41,200 --> 00:59:45,720
here and then here and that's a pretty
1391
00:59:43,480 --> 00:59:48,200
big bottleneck because uh things like
1392
00:59:45,720 --> 00:59:50,760
gpus or tpus they're actually really
1393
00:59:48,200 --> 00:59:52,839
good at doing a bunch of things at once
1394
00:59:50,760 --> 00:59:56,440
and so attention even though its
1395
00:59:52,839 --> 00:59:57,400
asymptotic complexity is worse O(n²) uh
1396
00:59:56,440 --> 00:59:59,319
just because you don't have that
1397
00:59:57,400 --> 01:00:01,680
bottleneck of doing things sequentially
1398
00:59:59,319 --> 01:00:03,640
it can be way way faster on a GPU
1399
01:00:01,680 --> 01:00:04,960
because you're not wasting your time
1400
01:00:03,640 --> 01:00:07,640
waiting for the previous thing to be
1401
01:00:04,960 --> 01:00:11,039
calculated so that's actually why uh
1402
01:00:07,640 --> 01:00:13,520
Transformers are so fast
1403
01:00:11,039 --> 01:00:14,599
um uh Transformers and attention models
1404
01:00:13,520 --> 01:00:17,160
are so
1405
01:00:14,599 --> 01:00:21,119
fast
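The sequential-versus-parallel point can be sketched in a few lines. This is a hypothetical toy illustration (not code from the lecture): the RNN hidden state at step t cannot be computed until step t-1 is done, while every attention score depends only on the inputs, so all of them can in principle be computed at once.

```python
import math

def rnn_states(xs, w_in=0.5, w_rec=0.9):
    # Sequential: h[t] depends on h[t-1], so this loop cannot be parallelized.
    h, states = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def attention_scores(xs):
    # Every (i, j) score depends only on the inputs, not on other scores,
    # so all n^2 of them could be computed at once on a GPU.
    return [[xi * xj for xj in xs] for xi in xs]

xs = [1.0, 2.0, 3.0]
print(len(rnn_states(xs)))         # one hidden state per input
print(attention_scores(xs)[0][2])  # score between positions 0 and 2
```

The attention version does more arithmetic in total, but none of it waits on earlier results, which is exactly the property GPUs and TPUs exploit.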
1406
01:00:17,160 --> 01:00:23,079
um another thing to note so that's one
1407
01:00:21,119 --> 01:00:25,039
of the big reasons why attention models
1408
01:00:23,079 --> 01:00:27,359
are so popular nowadays because they're fast to
1409
01:00:25,039 --> 01:00:30,200
calculate on modern hardware another
1410
01:00:27,359 --> 01:00:33,520
reason why attention models are popular
1411
01:00:30,200 --> 01:00:34,799
nowadays does anyone have a um does
1412
01:00:33,520 --> 01:00:37,280
anyone have an
1413
01:00:34,799 --> 01:00:38,839
idea uh about another reason it's based
1414
01:00:37,280 --> 01:00:41,200
on how easy they are to learn and
1415
01:00:38,839 --> 01:00:43,680
there's a reason why and that reason why
1416
01:00:41,200 --> 01:00:46,240
has to do with
1417
01:00:43,680 --> 01:00:48,520
um that reason why has to do with uh
1418
01:00:46,240 --> 01:00:49,400
something I introduced in this lecture
1419
01:00:48,520 --> 01:00:52,039
uh
1420
01:00:49,400 --> 01:00:54,720
earlier I'll give a
1421
01:00:52,039 --> 01:00:58,079
hint gradients yeah more
1422
01:00:54,720 --> 01:01:00,480
specifically what's nice about
1423
01:00:58,079 --> 01:01:02,920
attention with respect to gradients or
1424
01:01:00,480 --> 01:01:02,920
vanishing
1425
01:01:04,119 --> 01:01:07,319
gradients any
1426
01:01:07,680 --> 01:01:15,160
ideas let's say we have a really long
1427
01:01:10,160 --> 01:01:17,839
sentence it's like X1 X2 X3
1428
01:01:15,160 --> 01:01:21,799
X4 um
1429
01:01:17,839 --> 01:01:26,440
X200 over here and in order to predict
1430
01:01:21,799 --> 01:01:26,440
X200 you need to pay attention to X3
1431
01:01:27,359 --> 01:01:29,640
any
1432
01:01:33,079 --> 01:01:37,359
ideas another hint how many
1433
01:01:35,599 --> 01:01:38,960
nonlinearities do you have to pass
1434
01:01:37,359 --> 01:01:41,440
through in order to pass that
1435
01:01:38,960 --> 01:01:44,839
information from X3 to
1436
01:01:41,440 --> 01:01:48,839
X200 in a recurrent network um in a
1437
01:01:44,839 --> 01:01:48,839
recurrent network or
1438
01:01:51,920 --> 01:01:57,160
attention network should be
1439
01:01:54,960 --> 01:02:00,680
197 yeah in a recurrent network it's
1440
01:01:57,160 --> 01:02:03,480
basically 197 or maybe 196 I haven't
1441
01:02:00,680 --> 01:02:06,319
paid attention but every time
1442
01:02:03,480 --> 01:02:08,319
you pass it to the hidden
1443
01:02:06,319 --> 01:02:10,200
state it has to go through a
1444
01:02:08,319 --> 01:02:13,240
nonlinearity so it goes through like
1445
01:02:10,200 --> 01:02:17,119
197 nonlinearities and even if you're
1446
01:02:13,240 --> 01:02:19,680
using an LSTM um it's still the LSTM
1447
01:02:17,119 --> 01:02:21,559
hidden cell is getting information added
1448
01:02:19,680 --> 01:02:23,400
to it and subtracted to it and other
1449
01:02:21,559 --> 01:02:24,960
things like that so it's still a bit
1450
01:02:23,400 --> 01:02:27,880
tricky
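The path-length arithmetic being discussed here can be written down as a hypothetical back-of-the-envelope sketch (my code, not the lecturer's): how many nonlinearities information from position i must pass through to reach position j in each architecture.

```python
import math

def rnn_path(i, j):
    # One nonlinearity (or LSTM-style cell update) per step from i to j.
    return j - i

def conv_path(i, j, kernel=10):
    # Rough arithmetic: each convolution layer lets information travel
    # about `kernel` positions, so divide the distance by the kernel size.
    return math.ceil((j - i) / kernel)

def attn_path(i, j):
    # One attention layer can connect any two positions directly.
    return 1

print(rnn_path(3, 200))   # 197 steps, the number from the lecture example
print(conv_path(3, 200))  # about 20 layers with a length-10 convolution
print(attn_path(3, 200))  # 1
```

So convolution sits between the two extremes, which is the point made just below.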
1451
01:02:24,960 --> 01:02:27,880
um what about
1452
01:02:28,119 --> 01:02:35,160
attention yeah basically one time so
1453
01:02:31,520 --> 01:02:39,319
attention um in the next layer here
1454
01:02:35,160 --> 01:02:41,119
you're passing it all the way you're
1455
01:02:39,319 --> 01:02:45,000
passing all of the information directly
1456
01:02:41,119 --> 01:02:46,480
in and the only qualifying thing is that
1457
01:02:45,000 --> 01:02:47,760
your weight has to be good it has to
1458
01:02:46,480 --> 01:02:49,079
find a good attention weight so that
1459
01:02:47,760 --> 01:02:50,920
it's actually paying attention to that
1460
01:02:49,079 --> 01:02:53,039
information so this is actually
1461
01:02:50,920 --> 01:02:54,400
discussed in the Vaswani et al.
1462
01:02:53,039 --> 01:02:57,359
Attention Is All You Need paper that
1463
01:02:54,400 --> 01:02:59,920
introduced Transformers um convolutions
1464
01:02:57,359 --> 01:03:03,640
are kind of in the middle so like let's
1465
01:02:59,920 --> 01:03:06,400
say you have a convolution of length 10
1466
01:03:03,640 --> 01:03:09,880
um and then you have two layers of it um
1467
01:03:06,400 --> 01:03:09,880
if you have a convolution of length
1468
01:03:10,200 --> 01:03:15,880
10 or yeah let's say you have a
1469
01:03:12,559 --> 01:03:18,520
convolution of length 10 you would need
1470
01:03:15,880 --> 01:03:19,520
basically you would pass from 10
1471
01:03:18,520 --> 01:03:21,720
previous
1472
01:03:19,520 --> 01:03:23,319
ones and then you would pass again from
1473
01:03:21,720 --> 01:03:27,359
10 previous ones and then you would have
1474
01:03:23,319 --> 01:03:29,160
to go through like 16 or like I guess
1475
01:03:27,359 --> 01:03:31,279
almost 20 layers of convolution in order
1476
01:03:29,160 --> 01:03:34,720
to pass that information along so it's
1477
01:03:31,279 --> 01:03:39,200
kind of in the middle of RNNs uh in
1478
01:03:34,720 --> 01:03:43,480
LSTMs uh sorry RNNs and attention
1479
01:03:39,200 --> 01:03:47,359
models yeah question so regarding how you
1480
01:03:43,480 --> 01:03:51,319
have to wait for one RNN the next one can
1481
01:03:47,359 --> 01:03:53,000
you do inference on one RNN once it's done
1482
01:03:51,319 --> 01:03:54,839
even though the next one's computing off
1483
01:03:53,000 --> 01:03:58,400
that one
1484
01:03:54,839 --> 01:04:01,160
yes yeah you can you can do
1485
01:03:58,400 --> 01:04:03,880
inference you could uh well so as long
1486
01:04:01,160 --> 01:04:03,880
as
1487
01:04:05,599 --> 01:04:10,640
as long as the output doesn't affect
1488
01:04:08,079 --> 01:04:14,000
the next input so in this
1489
01:04:10,640 --> 01:04:17,119
case in this case because of language
1490
01:04:14,000 --> 01:04:19,400
modeling or generation is because the
1491
01:04:17,119 --> 01:04:21,000
output doesn't affect the next uh because
1492
01:04:19,400 --> 01:04:22,440
the output affects the next input if
1493
01:04:21,000 --> 01:04:26,680
you're predicting the output you have to
1494
01:04:22,440 --> 01:04:28,920
wait if you know the output already um
1495
01:04:26,680 --> 01:04:30,599
if you know the output already you could
1496
01:04:28,920 --> 01:04:33,599
make the prediction at the same time
1497
01:04:30,599 --> 01:04:34,799
as calculating this next hidden state um
1498
01:04:33,599 --> 01:04:36,200
so if you're just calculating the
1499
01:04:34,799 --> 01:04:38,559
probability you could do that and that's
1500
01:04:36,200 --> 01:04:40,880
actually where Transformers or attention
1501
01:04:38,559 --> 01:04:44,839
models shine attention models actually
1502
01:04:40,880 --> 01:04:46,000
aren't great for generation um and the
1503
01:04:44,839 --> 01:04:49,279
reason why they're not great for
1504
01:04:46,000 --> 01:04:52,279
generation is because they're
1505
01:04:49,279 --> 01:04:52,279
um
1506
01:04:52,799 --> 01:04:57,680
like when you're generating the
1507
01:04:55,039 --> 01:04:59,200
next token you still need to wait you
1508
01:04:57,680 --> 01:05:00,559
can't calculate in parallel because you
1509
01:04:59,200 --> 01:05:03,039
need to generate the next token before
1510
01:05:00,559 --> 01:05:04,839
you can encode the next uh the previous
1511
01:05:03,039 --> 01:05:07,119
sorry need to generate the next token
1512
01:05:04,839 --> 01:05:08,680
before you can encode it so you can't do
1513
01:05:07,119 --> 01:05:10,359
everything in parallel so Transformers
1514
01:05:08,680 --> 01:05:15,039
for generation are actually
1515
01:05:10,359 --> 01:05:16,559
slow and um there are models uh I don't
1516
01:05:15,039 --> 01:05:18,520
know if people are using them super
1517
01:05:16,559 --> 01:05:22,200
widely now but there were actually
1518
01:05:18,520 --> 01:05:23,640
Transformer uh language model sorry
1518
01:05:22,200 --> 01:05:26,319
machine translation models that were in
1520
01:05:23,640 --> 01:05:28,279
production they had a really big strong
1521
01:05:26,319 --> 01:05:34,359
Transformer encoder and then they had a
1522
01:05:28,279 --> 01:05:34,359
tiny fast RNN decoder um
1523
01:05:35,440 --> 01:05:40,960
and if you want an actual
1524
01:05:52,000 --> 01:05:59,440
reference there's
1525
01:05:55,079 --> 01:05:59,440
this deep encoder shallow
1526
01:05:59,559 --> 01:06:05,520
decoder um and then there's also the the
1527
01:06:03,079 --> 01:06:07,599
Marian machine translation toolkit that
1528
01:06:05,520 --> 01:06:11,119
supports uh supports those types of
1529
01:06:07,599 --> 01:06:13,839
things as well so um it's also the
1530
01:06:11,119 --> 01:06:16,200
reason why uh if you're
1531
01:06:13,839 --> 01:06:18,839
using uh like the GPT models through the
1532
01:06:16,200 --> 01:06:21,680
API that decoding is more expensive
1533
01:06:18,839 --> 01:06:21,680
right like
1534
01:06:22,119 --> 01:06:27,960
encoding I forget exactly is it 0.03
1535
01:06:26,279 --> 01:06:30,839
cents for 1,000 tokens for encoding and
1536
01:06:27,960 --> 01:06:33,039
0.06 cents for 1,000 tokens for decoding
1537
01:06:30,839 --> 01:06:34,799
in like GPT-4 or something like this the
1538
01:06:33,039 --> 01:06:36,839
reason why is precisely that just
1539
01:06:34,799 --> 01:06:37,760
because it's so much more expensive to
1540
01:06:36,839 --> 01:06:41,599
to run the
1541
01:06:37,760 --> 01:06:45,160
decoder um cool I have a few final
1542
01:06:41,599 --> 01:06:47,039
things also about efficiency so um these
1543
01:06:45,160 --> 01:06:50,720
go back to the efficiency things that I
1544
01:06:47,039 --> 01:06:52,279
talked about last time um handling mini
1545
01:06:50,720 --> 01:06:54,440
batching so what do we have to do when
1546
01:06:52,279 --> 01:06:56,359
we're handling mini batching if we were
1547
01:06:54,440 --> 01:06:59,440
handling mini batching in feed forward
1548
01:06:56,359 --> 01:07:02,880
networks it's actually relatively easy
1549
01:06:59,440 --> 01:07:04,880
um because all of our computations
1550
01:07:02,880 --> 01:07:06,400
are the same shape so we just
1551
01:07:04,880 --> 01:07:09,359
concatenate them all together into a big
1552
01:07:06,400 --> 01:07:11,000
tensor and run uh run over it uh we saw
1553
01:07:09,359 --> 01:07:12,599
mini batching makes things much faster
1554
01:07:11,000 --> 01:07:15,160
but mini batching and sequence modeling
1555
01:07:12,599 --> 01:07:17,240
is harder than in feed forward networks
1556
01:07:15,160 --> 01:07:20,240
um one reason is in RNNs each word
1557
01:07:17,240 --> 01:07:22,680
depends on the previous word um also
1558
01:07:20,240 --> 01:07:26,359
because sequences are of various
1559
01:07:22,680 --> 01:07:30,279
lengths so what we do to handle this
1560
01:07:26,359 --> 01:07:33,480
is uh we do padding and masking
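A minimal sketch of the padding-and-masking idea that follows (hypothetical code; in practice PyTorch and Hugging Face handle this with padded tensors and attention masks, as noted below):

```python
PAD = 0  # hypothetical pad token id

def pad_batch(seqs):
    # Pad every sequence to the length of the longest one, and build a
    # 0/1 mask marking which positions hold real tokens.
    max_len = max(len(s) for s in seqs)
    padded = [s + [PAD] * (max_len - len(s)) for s in seqs]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in seqs]
    return padded, mask

def masked_loss(token_losses, mask):
    # Multiply per-token losses by the mask so pad positions contribute
    # nothing, then take the sum over the batch.
    return sum(l * m for row_l, row_m in zip(token_losses, mask)
                     for l, m in zip(row_l, row_m))

batch = [[5, 6, 7], [8, 9]]
padded, mask = pad_batch(batch)
print(padded)  # [[5, 6, 7], [8, 9, 0]]
print(mask)    # [[1, 1, 1], [1, 1, 0]]
```

Whatever loss the model assigns to the pad position is zeroed out by the mask before summing.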
1561
01:07:30,279 --> 01:07:35,680
so we can do padding like this uh so we
1562
01:07:33,480 --> 01:07:37,279
just add an extra token at the end to
1563
01:07:35,680 --> 01:07:40,440
make all of the sequences the same
1564
01:07:37,279 --> 01:07:44,480
length um if we are doing an encoder
1565
01:07:40,440 --> 01:07:47,160
decoder style model uh where we have an
1566
01:07:44,480 --> 01:07:48,440
input and then we want to generate all
1567
01:07:47,160 --> 01:07:50,640
the outputs based on the input one of
1568
01:07:48,440 --> 01:07:54,920
the easy things is to add pads to the
1569
01:07:50,640 --> 01:07:56,520
beginning um and then so yeah it doesn't
1570
01:07:54,920 --> 01:07:58,000
really matter but you can add pads to
1571
01:07:56,520 --> 01:07:59,440
the beginning so they're all starting at
1572
01:07:58,000 --> 01:08:03,079
the same place especially if you're
1573
01:07:59,440 --> 01:08:05,799
using RNN style models um then we
1574
01:08:03,079 --> 01:08:08,920
calculate the loss over the output for
1575
01:08:05,799 --> 01:08:11,000
example we multiply the loss by a mask
1576
01:08:08,920 --> 01:08:13,480
to remove the loss over the tokens that
1577
01:08:11,000 --> 01:08:16,880
we don't care about and we take the sum
1578
01:08:13,480 --> 01:08:19,120
of these and so luckily most of this is
1579
01:08:16,880 --> 01:08:20,719
implemented in for example PyTorch or
1580
01:08:19,120 --> 01:08:22,279
Hugging Face Transformers already so you
1581
01:08:20,719 --> 01:08:23,560
don't need to worry about it but it is a
1582
01:08:22,279 --> 01:08:24,799
good idea to know what's going on under
1583
01:08:23,560 --> 01:08:28,560
the hood if you want to implement
1584
01:08:24,799 --> 01:08:32,440
anything unusual and also um it's good
1585
01:08:28,560 --> 01:08:35,600
to know for the following reason also
1586
01:08:32,440 --> 01:08:38,799
which is bucketing and
1587
01:08:35,600 --> 01:08:40,319
sorting so if we use sentences of vastly
1588
01:08:38,799 --> 01:08:43,359
different lengths and we put them in the
1589
01:08:40,319 --> 01:08:46,640
same mini batch this can uh waste a
1590
01:08:43,359 --> 01:08:48,000
really large amount of computation so
1591
01:08:46,640 --> 01:08:50,759
like let's say we're processing
1592
01:08:48,000 --> 01:08:52,480
documents or movie reviews or something
1593
01:08:50,759 --> 01:08:54,799
like that and most movie
1594
01:08:52,480 --> 01:08:57,719
reviews are like
1595
01:08:54,799 --> 01:09:00,080
10 words long but you have one movie
1596
01:08:57,719 --> 01:09:02,319
review in your mini batch of uh a
1597
01:09:00,080 --> 01:09:04,359
thousand words so basically what that
1598
01:09:02,319 --> 01:09:08,279
means is you're padding most of your
1599
01:09:04,359 --> 01:09:11,120
sequences 990 times to process 10
1600
01:09:08,279 --> 01:09:12,120
sequences which is like a lot of waste
1601
01:09:11,120 --> 01:09:14,000
right because you're running them all
1602
01:09:12,120 --> 01:09:16,799
through your GPU and other things like
1603
01:09:14,000 --> 01:09:19,080
that so one way to remedy this is to
1604
01:09:16,799 --> 01:09:22,719
sort sentences so similar-length
1605
01:09:19,080 --> 01:09:27,480
sentences are in the same batch so you
1606
01:09:22,719 --> 01:09:29,920
uh you first sort before building all of
1607
01:09:27,480 --> 01:09:31,640
your batches and then uh that makes it
1608
01:09:29,920 --> 01:09:32,960
so that similarly sized ones are in the
1609
01:09:31,640 --> 01:09:35,239
same
1610
01:09:32,960 --> 01:09:37,040
batch this goes into the problem that I
1611
01:09:35,239 --> 01:09:39,359
mentioned before but only in passing
1612
01:09:37,040 --> 01:09:42,440
which is uh let's say you're calculating
1613
01:09:39,359 --> 01:09:44,199
your batch based on the number of
1614
01:09:42,440 --> 01:09:47,679
sequences that you're
1615
01:09:44,199 --> 01:09:51,400
processing if you say Okay I want 64
1616
01:09:47,679 --> 01:09:53,359
sequences in my mini batch um if most of
1617
01:09:51,400 --> 01:09:55,159
the time those 64 sequences are are 10
1618
01:09:53,359 --> 01:09:57,480
tokens that's fine but then when you get
1619
01:09:55,159 --> 01:10:01,440
the one mini batch that has a thousand
1620
01:09:57,480 --> 01:10:02,760
tokens in each sentence or each sequence
1621
01:10:01,440 --> 01:10:04,920
um suddenly you're going to run out of
1622
01:10:02,760 --> 01:10:07,800
GPU memory and you're like training is
1623
01:10:04,920 --> 01:10:08,920
going to crash right and you really
1624
01:10:07,800 --> 01:10:10,440
don't want that to happen when you
1625
01:10:08,920 --> 01:10:12,440
started running your homework assignment
1626
01:10:10,440 --> 01:10:15,560
and then went to bed and then wake up
1627
01:10:12,440 --> 01:10:18,440
and it crashed you know uh 15 minutes
1628
01:10:15,560 --> 01:10:21,040
into computing or something so uh this
1629
01:10:18,440 --> 01:10:23,440
is an important thing to be aware of
1630
01:10:21,040 --> 01:10:26,760
practically uh again this can be solved
1631
01:10:23,440 --> 01:10:29,239
by a lot of toolkits like I know fairseq uh
1632
01:10:26,760 --> 01:10:30,840
does it and Hugging Face does it if you
1633
01:10:29,239 --> 01:10:33,159
set the appropriate settings but it's
1634
01:10:30,840 --> 01:10:36,239
something you should be aware of um
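The bucketing idea just described can be sketched like this (a hypothetical helper of my own, not a toolkit API): sort by length so similar-length sequences share a batch, and cap each batch by total padded token count rather than by sequence count, which is what avoids the out-of-memory surprise mentioned above.

```python
def make_batches(seqs, max_tokens=32):
    # Sort so similar-length sequences land in the same batch, then
    # close a batch whenever adding one more sequence would push the
    # padded size (longest sequence * batch size) past the token budget.
    batches, batch = [], []
    for s in sorted(seqs, key=len):
        longest = max(len(s), max(map(len, batch), default=0))
        if batch and longest * (len(batch) + 1) > max_tokens:
            batches.append(batch)
            batch = []
        batch.append(s)
    if batch:
        batches.append(batch)
    return batches

seqs = [[0] * n for n in (10, 2, 3, 9, 2, 30)]
for b in make_batches(seqs):
    print([len(s) for s in b])  # [2, 2, 3] then [9, 10] then [30]
```

Because the budget is in tokens, the one 30-token outlier gets a batch to itself instead of blowing up a batch of 64 short sequences.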
1635
01:10:33,159 --> 01:10:37,880
another note is that if you do this it's
1636
01:10:36,239 --> 01:10:41,280
reducing the randomness in your
1637
01:10:37,880 --> 01:10:42,880
distribution of data so um stochastic
1638
01:10:41,280 --> 01:10:44,520
gradient descent is really heavily
1639
01:10:42,880 --> 01:10:47,480
reliant on the fact that your ordering
1640
01:10:44,520 --> 01:10:49,440
of data is randomized or at least
1641
01:10:47,480 --> 01:10:52,159
distributed appropriately so it's
1642
01:10:49,440 --> 01:10:56,840
something to definitely be aware of um
1643
01:10:52,159 --> 01:10:59,560
so uh this is a good thing to to think
1644
01:10:56,840 --> 01:11:01,400
about another really useful thing to
1645
01:10:59,560 --> 01:11:03,800
think about is strided
1646
01:11:01,400 --> 01:11:05,440
architectures um strided architectures
1647
01:11:03,800 --> 01:11:07,520
appear in RNNs they appear in
1648
01:11:05,440 --> 01:11:10,080
convolution they appear in
1649
01:11:07,520 --> 01:11:12,320
Transformers or attention based models
1650
01:11:10,080 --> 01:11:15,199
um they're called different things in
1651
01:11:12,320 --> 01:11:18,159
each of them so in RNNs they're called
1652
01:11:15,199 --> 01:11:21,280
pyramidal RNNs in convolution they're
1653
01:11:18,159 --> 01:11:22,400
called strided architectures and in
1654
01:11:21,280 --> 01:11:25,080
attention they're called sparse
1655
01:11:22,400 --> 01:11:27,440
attention usually they all actually kind
1656
01:11:25,080 --> 01:11:30,800
of mean the same thing um and basically
1657
01:11:27,440 --> 01:11:33,440
what they mean is you have a
1658
01:11:30,800 --> 01:11:37,040
multi-layer model and when you have a
1659
01:11:33,440 --> 01:11:40,920
multi-layer model you don't process
1660
01:11:37,040 --> 01:11:43,920
every input uh from the
1661
01:11:40,920 --> 01:11:45,560
previous layer so here's an example um
1662
01:11:43,920 --> 01:11:47,840
like let's say you have a whole bunch of
1663
01:11:45,560 --> 01:11:50,199
inputs um each of the inputs is
1664
01:11:47,840 --> 01:11:53,159
processed in the first layer in some way
1665
01:11:50,199 --> 01:11:56,639
but in the second layer you actually
1666
01:11:53,159 --> 01:12:01,520
input for example uh two inputs to the
1667
01:11:56,639 --> 01:12:03,560
RNN but you skip so you have one
1668
01:12:01,520 --> 01:12:05,440
state that corresponds to state number
1669
01:12:03,560 --> 01:12:06,840
one and two another state that
1670
01:12:05,440 --> 01:12:08,440
corresponds to state number two and
1671
01:12:06,840 --> 01:12:10,920
three another state that corresponds to
1672
01:12:08,440 --> 01:12:13,280
state number three and four so what that
1673
01:12:10,920 --> 01:12:15,199
means is you can gradually decrease the
1674
01:12:13,280 --> 01:12:18,199
number like the length of the sequence
1675
01:12:15,199 --> 01:12:20,719
every time you process so uh this is a
1676
01:12:18,199 --> 01:12:22,360
really useful thing that to do if you're
1677
01:12:20,719 --> 01:12:25,480
processing very long sequences so you
1678
01:12:22,360 --> 01:12:25,480
should be aware of it
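The striding idea above can be sketched in one function. This is a hypothetical simplification: one common variant merges non-overlapping pairs of adjacent states at each layer, halving the sequence length; a real model would feed each pair into an RNN cell, a strided convolution, or sparse attention rather than just concatenating.

```python
def stride_layer(states):
    # Merge each non-overlapping pair of adjacent states into one state,
    # so each layer's output sequence is half the length of its input.
    return [states[i] + states[i + 1] for i in range(0, len(states) - 1, 2)]

layer0 = [[1], [2], [3], [4], [5], [6], [7], [8]]
layer1 = stride_layer(layer0)  # 4 states
layer2 = stride_layer(layer1)  # 2 states
print(layer1)  # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(layer2)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

After a few layers a very long input is reduced to a short sequence of higher-level states, which is why this helps with long-sequence processing.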
1679
01:12:27,440 --> 01:12:34,120
cool um everything
1680
01:12:30,639 --> 01:12:36,920
okay okay the final thing is truncated
1681
01:12:34,120 --> 01:12:39,239
back propagation through time and uh
1682
01:12:36,920 --> 01:12:41,000
truncated back propagation Through Time
1683
01:12:39,239 --> 01:12:43,560
what this is doing is basically you do
1684
01:12:41,000 --> 01:12:46,120
backprop over shorter segments but
1685
01:12:43,560 --> 01:12:47,840
you initialize with the state from the
1686
01:12:46,120 --> 01:12:51,040
previous
1687
01:12:47,840 --> 01:12:52,440
segment and the way this works is uh
1688
01:12:51,040 --> 01:12:56,080
like for example if you're running an
1689
01:12:52,440 --> 01:12:57,600
RNN uh you would run the RNN over the
1690
01:12:56,080 --> 01:12:59,400
previous segment maybe it's length four
1691
01:12:57,600 --> 01:13:02,120
maybe it's length 400 it doesn't really
1692
01:12:59,400 --> 01:13:04,520
matter but it's uh a coherent-length
1693
01:13:02,120 --> 01:13:06,360
segment and then when you do the next
1694
01:13:04,520 --> 01:13:08,840
segment what you do is you only pass the
1695
01:13:06,360 --> 01:13:12,960
hidden state but you throw away the rest
1696
01:13:08,840 --> 01:13:16,360
of the previous computation graph and
1697
01:13:12,960 --> 01:13:18,040
then walk through uh like this uh so you
1698
01:13:16,360 --> 01:13:22,159
won't actually be updating the
1699
01:13:18,040 --> 01:13:24,080
parameters of this based on the result
1700
01:13:22,159 --> 01:13:25,800
the loss from this but you're still
1701
01:13:24,080 --> 01:13:28,159
passing the information so this can use
1702
01:13:25,800 --> 01:13:30,400
the information from the previous state
1703
01:13:28,159 --> 01:13:32,239
so this is an example from RNN this is
1704
01:13:30,400 --> 01:13:35,159
used pretty widely in RNN but there's
1705
01:13:32,239 --> 01:13:38,000
also a lot of Transformer architectures
1706
01:13:35,159 --> 01:13:39,400
that do things like this um the original
1707
01:13:38,000 --> 01:13:41,000
one is something called Transformer
1708
01:13:39,400 --> 01:13:44,560
XL that was actually created here at
1709
01:13:41,000 --> 01:13:46,560
CMU but this is also um used in the new
1710
01:13:44,560 --> 01:13:48,719
Mistral models and other things like this
1711
01:13:46,560 --> 01:13:51,719
as well so um it's something that's
1712
01:13:48,719 --> 01:13:54,719
still very much alive and well nowadays
1713
01:13:51,719 --> 01:13:56,320
as well
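The segment-to-segment handoff just described can be sketched as follows. This is a hypothetical pure-Python version with no autograd; in PyTorch the "throw away the rest of the computation graph" step corresponds to calling `h = h.detach()` on the carried hidden state between segments.

```python
import math

def run_segment(xs, h0, w_in=0.5, w_rec=0.9):
    # Run a toy RNN over one segment, starting from the carried-in state.
    h = h0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def truncated_run(sequence, seg_len=4):
    h = 0.0
    for start in range(0, len(sequence), seg_len):
        segment = sequence[start:start + seg_len]
        # Backprop would happen here, over this segment only; the hidden
        # state is carried forward but gradients would not flow across
        # the segment boundary.
        h = run_segment(segment, h)
    return h

full = run_segment(list(range(8)), 0.0)
trunc = truncated_run(list(range(8)), seg_len=4)
print(full == trunc)  # the forward pass is identical; only gradients differ
```

The point is that truncation changes only what the gradient sees, not the forward computation, so the model can still condition on information from before the segment boundary.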
1714
01:13:54,719 --> 01:13:57,840
cool um that's all I have for today are
1715
01:13:56,320 --> 01:13:59,760
there any questions people want to ask
1716
01:13:57,840 --> 01:14:02,760
before we wrap
1717
01:13:59,760 --> 01:14:02,760
up
1718
01:14:12,840 --> 01:14:20,000
yeah yeah so for conditioned
1719
01:14:16,960 --> 01:14:25,040
prediction what is source X and target Y
1720
01:14:20,000 --> 01:14:26,520
um I think I kind of maybe carried over
1721
01:14:25,040 --> 01:14:28,679
uh some terminology from machine
1722
01:14:26,520 --> 01:14:31,400
translation uh by accident maybe it
1723
01:14:28,679 --> 01:14:34,080
should be input X and output Y uh that
1724
01:14:31,400 --> 01:14:36,600
would be a better way to put it and so
1725
01:14:34,080 --> 01:14:38,080
uh it could be anything for translation
1726
01:14:36,600 --> 01:14:39,560
it's like something in the source
1727
01:14:38,080 --> 01:14:42,600
language and something in the target
1728
01:14:39,560 --> 01:14:44,520
language so like English and Japanese um
1729
01:14:42,600 --> 01:14:47,280
if it's just a regular language model it
1730
01:14:44,520 --> 01:14:50,560
could be something like a prompt and the
1731
01:14:47,280 --> 01:14:55,280
output so for
1732
01:14:50,560 --> 01:14:55,280
unconditioned Y an example of that
1733
01:14:57,400 --> 01:15:01,400
yeah so for unconditioned prediction
1734
01:14:59,760 --> 01:15:03,840
that could just be straight up language
1735
01:15:01,400 --> 01:15:07,040
modeling for example so um language
1736
01:15:03,840 --> 01:15:11,840
modeling with no not necessarily any
1737
01:15:07,040 --> 01:15:11,840
prompts okay thanks and anything
1738
01:15:12,440 --> 01:15:17,880
else okay great thanks a lot I'm happy
1739
01:15:14,639 --> 01:15:17,880
to take questions