WEBVTT
00:00:00.040 --> 00:00:06.600
started in a moment since it's now
00:00:03.959 --> 00:00:08.839
12:30 are there any questions before we
00:00:06.600 --> 00:00:08.839
get
00:00:11.840 --> 00:00:17.240
started okay I don't see any
00:00:14.679 --> 00:00:18.640
so I guess we can jump right in this
00:00:17.240 --> 00:00:22.080
time I'll be talking about sequence
00:00:18.640 --> 00:00:24.560
modeling in NLP first I'm going to be
00:00:22.080 --> 00:00:26.359
talking about why we do sequence
00:00:24.560 --> 00:00:29.160
modeling what varieties of sequence
00:00:26.359 --> 00:00:31.199
modeling exist and then after that I'm
00:00:29.160 --> 00:00:34.120
going to talk about three basic
00:00:31.199 --> 00:00:36.320
techniques for sequence modeling namely
00:00:34.120 --> 00:00:38.879
recurrent neural networks convolutional
00:00:36.320 --> 00:00:38.879
networks and
00:00:39.360 --> 00:00:44.079
attention so when we talk about sequence
00:00:41.920 --> 00:00:46.680
modeling in NLP I've kind of already
00:00:44.079 --> 00:00:50.039
given the motivation for doing this but
00:00:46.680 --> 00:00:51.920
basically NLP is full of sequential data
00:00:50.039 --> 00:00:56.120
and this can be everything from words
00:00:51.920 --> 00:00:59.399
in sentences or tokens in sentences to
00:00:56.120 --> 00:01:01.920
characters in words to sentences in
00:00:59.399 --> 00:01:04.640
a discourse or a paragraph or a
00:01:01.920 --> 00:01:06.640
document it can also be multiple
00:01:04.640 --> 00:01:08.840
documents in time multiple social media
00:01:06.640 --> 00:01:12.320
posts whatever else you want there's
00:01:08.840 --> 00:01:15.159
just sequences all over
00:01:12.320 --> 00:01:16.640
NLP and I mentioned this last time
00:01:15.159 --> 00:01:19.240
also but there's also long-distance
00:01:16.640 --> 00:01:20.840
dependencies in language so just to
00:01:19.240 --> 00:01:23.720
give an example there's agreement in
00:01:20.840 --> 00:01:25.799
number gender etc. so in order to
00:01:23.720 --> 00:01:28.439
create a fluent language model you'll
00:01:25.799 --> 00:01:30.320
have to handle this agreement so if
00:01:28.439 --> 00:01:32.920
you say he does not have very much
00:01:30.320 --> 00:01:35.280
confidence in it would have to be
00:01:32.920 --> 00:01:36.680
himself but if you say she does not have
00:01:35.280 --> 00:01:39.360
very much confidence in it would have to
00:01:36.680 --> 00:01:41.360
be herself and this gender
00:01:39.360 --> 00:01:44.159
agreement is not super frequent in
00:01:41.360 --> 00:01:47.600
English but it's very frequent in other
00:01:44.159 --> 00:01:50.119
languages like French or
00:01:47.600 --> 00:01:51.759
most languages in the world in some
00:01:50.119 --> 00:01:53.799
way or
00:01:51.759 --> 00:01:55.320
another then separately from that you
00:01:53.799 --> 00:01:58.520
also have things like selectional
00:01:55.320 --> 00:02:00.119
preferences like the reign has lasted
00:01:58.520 --> 00:02:01.799
as long as the life of the queen and the
00:02:00.119 --> 00:02:04.439
rain has lasted as long as the life of
00:02:01.799 --> 00:02:07.360
the clouds in American English the
00:02:04.439 --> 00:02:09.119
only way you could know which word
00:02:07.360 --> 00:02:13.520
was meant if you were doing speech
00:02:09.119 --> 00:02:17.400
recognition is if you had that
00:02:13.520 --> 00:02:20.319
kind of semantic idea that
00:02:17.400 --> 00:02:22.040
these agree with each other in some way
00:02:20.319 --> 00:02:23.920
and there's also factual knowledge
00:02:22.040 --> 00:02:27.680
there's all kinds of other things
00:02:23.920 --> 00:02:27.680
that you need to carry over long
00:02:28.319 --> 00:02:33.800
contexts these can be
00:02:30.840 --> 00:02:36.360
complicated so this is a nice example
00:02:33.800 --> 00:02:39.400
so if we try to figure out what it
00:02:36.360 --> 00:02:41.239
refers to here the trophy would not
00:02:39.400 --> 00:02:45.680
fit in the brown suitcase because it was
00:02:41.239 --> 00:02:45.680
too big what is it
00:02:46.680 --> 00:02:51.360
here the trophy yeah and then what about
00:02:49.879 --> 00:02:53.120
the trophy would not fit in the brown
00:02:51.360 --> 00:02:57.080
suitcase because it was too
00:02:53.120 --> 00:02:58.680
small the suitcase right does anyone
00:02:57.080 --> 00:03:01.760
know what the name of something like
00:02:58.680 --> 00:03:01.760
this is
00:03:03.599 --> 00:03:07.840
has anyone heard of this challenge
00:03:09.280 --> 00:03:14.840
before no one okay this is
00:03:12.239 --> 00:03:17.200
called the Winograd schema challenge or
00:03:14.840 --> 00:03:22.760
these are called Winograd schemas and
00:03:17.200 --> 00:03:26.319
basically Winograd schemas are a type
00:03:22.760 --> 00:03:29.280
of linguistic
00:03:26.319 --> 00:03:30.439
challenge where you create two paired
00:03:29.280 --> 00:03:33.799
examples
00:03:30.439 --> 00:03:37.360
that you vary in very minimal ways where
00:03:33.799 --> 00:03:40.599
the answer differs between the two
00:03:37.360 --> 00:03:42.000
and there's lots of other examples
00:03:40.599 --> 00:03:44.080
of how you can create these things
00:03:42.000 --> 00:03:45.720
and they're good for testing whether
00:03:44.080 --> 00:03:48.239
language models are able to do things
00:03:45.720 --> 00:03:50.920
because they're able to
00:03:48.239 --> 00:03:54.239
control for the fact that
00:03:50.920 --> 00:04:01.079
the answer might be
00:03:54.239 --> 00:04:03.000
more frequent
00:04:01.079 --> 00:04:04.560
or less frequent and so
00:04:03.000 --> 00:04:07.720
the language model could just pick that
00:04:04.560 --> 00:04:11.040
so another example is we came up
00:04:07.720 --> 00:04:12.239
with a benchmark of figurative language
00:04:11.040 --> 00:04:14.239
where we tried to figure out whether
00:04:12.239 --> 00:04:17.160
language models would be able
00:04:14.239 --> 00:04:19.720
to interpret figurative language
00:04:17.160 --> 00:04:22.720
and I actually have the multilingual
00:04:19.720 --> 00:04:24.160
version on the suggested projects on
00:04:22.720 --> 00:04:26.240
the Piazza oh yeah that's one
00:04:24.160 --> 00:04:28.360
announcement I posted a big list of
00:04:26.240 --> 00:04:30.080
suggested projects on Piazza I think a lot
00:04:28.360 --> 00:04:31.639
of people saw it you don't have to
00:04:30.080 --> 00:04:33.160
follow these but if you're interested in
00:04:31.639 --> 00:04:34.440
them feel free to talk to the contacts
00:04:33.160 --> 00:04:38.880
and we can give you more information
00:04:34.440 --> 00:04:41.039
about them but anyway so in this
00:04:38.880 --> 00:04:43.080
data set what we did is we came up with
00:04:41.039 --> 00:04:46.039
some figurative language like this movie
00:04:43.080 --> 00:04:47.880
had the depth of a wading pool and
00:04:46.039 --> 00:04:50.919
this movie had the depth of a diving
00:04:47.880 --> 00:04:54.120
pool and so then after that you would
00:04:50.919 --> 00:04:56.199
have two choices this
00:04:54.120 --> 00:04:58.400
movie was very deep and interesting this
00:04:56.199 --> 00:05:01.000
movie was not very deep and interesting
00:04:58.400 --> 00:05:02.800
and so you have these two
00:05:01.000 --> 00:05:04.759
pairs of questions and answers and you
00:05:02.800 --> 00:05:06.240
need to decide between them and
00:05:04.759 --> 00:05:07.759
depending on what the input is the
00:05:06.240 --> 00:05:10.639
output will change and so that's a good
00:05:07.759 --> 00:05:11.919
way to control for and test whether
00:05:10.639 --> 00:05:13.600
language models really understand
00:05:11.919 --> 00:05:15.080
something so if you're interested in
00:05:13.600 --> 00:05:17.199
benchmarking or other things like that
00:05:15.080 --> 00:05:19.160
it's a good paradigm to think about
00:05:17.199 --> 00:05:22.759
anyway that's a little bit of an aside
00:05:19.160 --> 00:05:25.960
so now I'd like to go on to types of
00:05:22.759 --> 00:05:28.360
sequential prediction problems
00:05:25.960 --> 00:05:30.880
and types of prediction problems in
00:05:28.360 --> 00:05:32.560
general binary and multiclass we
00:05:30.880 --> 00:05:35.240
already talked about that's when we're
00:05:32.560 --> 00:05:37.199
doing for example classification
00:05:35.240 --> 00:05:38.960
between two classes or classification
00:05:37.199 --> 00:05:41.280
between multiple
00:05:38.960 --> 00:05:42.880
classes but there's also another variety
00:05:41.280 --> 00:05:45.120
of prediction called structured
00:05:42.880 --> 00:05:47.120
prediction and structured prediction is
00:05:45.120 --> 00:05:49.639
when you have a very large number of
00:05:47.120 --> 00:05:53.680
labels it's not a small fixed number
00:05:49.639 --> 00:05:56.560
of labels and so that would be
00:05:53.680 --> 00:05:58.160
something like if you take in an
00:05:56.560 --> 00:06:00.680
input and you want to predict all of the
00:05:58.160 --> 00:06:04.000
parts of speech of all the words in the
00:06:00.680 --> 00:06:06.840
input and if you had like 50 parts of
00:06:04.000 --> 00:06:09.039
speech the number of labels that you
00:06:06.840 --> 00:06:11.360
would have for each sentence
00:06:09.039 --> 00:06:15.280
is any
00:06:11.360 --> 00:06:17.919
ideas 50 parts of speech and
00:06:15.280 --> 00:06:17.919
let's say four
00:06:19.880 --> 00:06:31.400
words it's every combination
00:06:26.039 --> 00:06:31.400
of parts of speech for every word so
00:06:32.039 --> 00:06:38.440
close but maybe the opposite it's
00:06:35.520 --> 00:06:40.720
50 to the fourth because you have 50
00:06:38.440 --> 00:06:42.400
choices here 50 choices here so it's a
00:06:40.720 --> 00:06:45.599
cross product of all of the
00:06:42.400 --> 00:06:48.560
choices and so that becomes very
00:06:45.599 --> 00:06:50.280
quickly untenable.
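To make the combinatorics concrete, here is a quick sanity check (a minimal sketch; the 50-tag, 4-word numbers are just the ones from the example above):

```python
# Size of the label space for tagging a whole sentence at once:
# with 50 possible part-of-speech tags and a 4-word sentence,
# every word independently takes one of the 50 tags.
num_tags = 50
sentence_length = 4
num_label_sequences = num_tags ** sentence_length
print(num_label_sequences)  # 6250000 possible tag sequences
```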
00:06:48.560 --> 00:06:53.120
Let's say you're talking about translation from English
00:06:50.280 --> 00:06:54.800
to Japanese now you don't really even
00:06:53.120 --> 00:06:57.240
have a finite number of choices because
00:06:54.800 --> 00:06:58.440
the length could be even longer the
00:06:57.240 --> 00:07:01.400
length of the output could be even
00:06:58.440 --> 00:07:01.400
longer than the input
00:07:04.840 --> 00:07:08.879
combining
00:07:06.520 --> 00:07:11.319
rules
00:07:08.879 --> 00:07:14.879
together makes it
00:07:11.319 --> 00:07:17.400
fewer yeah so really good question so
00:07:14.879 --> 00:07:19.319
the question or
00:07:17.400 --> 00:07:21.160
comment was if there are certain rules
00:07:19.319 --> 00:07:22.759
about one thing not ever being able to
00:07:21.160 --> 00:07:25.080
follow the other you can actually reduce
00:07:22.759 --> 00:07:28.319
the number you could do that with a
00:07:25.080 --> 00:07:30.280
hard constraint and kind
00:07:28.319 --> 00:07:32.520
of
00:07:30.280 --> 00:07:34.240
actually cut off things that
00:07:32.520 --> 00:07:36.280
have zero probability but in
00:07:34.240 --> 00:07:38.680
reality what people do is they just trim
00:07:36.280 --> 00:07:41.319
hypotheses that have low probability and
00:07:38.680 --> 00:07:43.319
so that has kind of the same effect like
00:07:41.319 --> 00:07:47.599
you almost never see a determiner after
00:07:43.319 --> 00:07:49.720
a determiner in English and so yeah
00:07:47.599 --> 00:07:52.400
we're going to talk about algorithms
00:07:49.720 --> 00:07:53.960
to do this in the generation section so
00:07:52.400 --> 00:07:57.240
we could talk more about that
00:07:53.960 --> 00:08:00.080
but anyway the basic idea behind
00:07:57.240 --> 00:08:02.400
structured prediction is that
00:08:00.080 --> 00:08:04.280
like language modeling as I said last
00:08:02.400 --> 00:08:06.240
time you don't predict the
00:08:04.280 --> 00:08:08.319
whole sequence at once you usually
00:08:06.240 --> 00:08:10.440
predict each element one at a time and then
00:08:08.319 --> 00:08:12.080
somehow calculate the conditional
00:08:10.440 --> 00:08:13.720
probability of the next element given
00:08:12.080 --> 00:08:15.879
the current element or other things
00:08:13.720 --> 00:08:18.840
like that so that's how we solve
00:08:15.879 --> 00:08:18.840
structured prediction
00:08:18.919 --> 00:08:22.960
problems another thing is unconditioned
00:08:21.319 --> 00:08:25.120
versus conditioned prediction so
00:08:22.960 --> 00:08:28.520
unconditioned prediction we don't do this
00:08:25.120 --> 00:08:31.240
very often but basically we
00:08:28.520 --> 00:08:34.039
predict the probability of a single
00:08:31.240 --> 00:08:35.880
variable or generate a single variable
00:08:34.039 --> 00:08:37.599
and conditioned prediction is
00:08:35.880 --> 00:08:41.000
predicting the probability of an output
00:08:37.599 --> 00:08:45.120
variable given an input like
00:08:41.000 --> 00:08:48.040
this so for unconditioned prediction
00:08:45.120 --> 00:08:50.000
the way we can do this is left to
00:08:48.040 --> 00:08:51.399
right autoregressive models and these are
00:08:50.000 --> 00:08:52.600
the ones that I talked about last time
00:08:51.399 --> 00:08:56.360
when I was talking about how we build
00:08:52.600 --> 00:08:59.000
language models and these could be
00:08:56.360 --> 00:09:01.880
specifically this kind is one
00:08:59.000 --> 00:09:03.480
that doesn't have any context limit so
00:09:01.880 --> 00:09:05.680
it's looking all the way back to the
00:09:03.480 --> 00:09:07.519
beginning of the sequence and this
00:09:05.680 --> 00:09:09.440
could be like an infinite-length n-gram
00:09:07.519 --> 00:09:10.440
model but practically we can't use those
00:09:09.440 --> 00:09:12.519
because they would have too many
00:09:10.440 --> 00:09:15.360
parameters they would be too sparse for
00:09:12.519 --> 00:09:17.079
us to estimate the parameters so what
00:09:15.360 --> 00:09:19.120
we do instead with n-gram models which I
00:09:17.079 --> 00:09:21.240
talked about last time is we limit the
00:09:19.120 --> 00:09:23.600
context length so we have something
00:09:21.240 --> 00:09:25.760
like a trigram model where we don't
00:09:23.600 --> 00:09:28.680
actually reference all of the previous
00:09:25.760 --> 00:09:30.680
outputs when we make a prediction oh
00:09:28.680 --> 00:09:34.440
and sorry actually I should explain
00:09:30.680 --> 00:09:37.640
how do we read this
00:09:34.440 --> 00:09:40.519
graph so here we're
00:09:37.640 --> 00:09:42.680
predicting word
00:09:40.519 --> 00:09:45.240
number one and we're
00:09:42.680 --> 00:09:47.640
not conditioning on anything
00:09:45.240 --> 00:09:49.040
we're predicting word number two we're
00:09:47.640 --> 00:09:50.480
conditioning on word number one we're
00:09:49.040 --> 00:09:53.040
predicting word number three we're
00:09:50.480 --> 00:09:55.640
conditioning on word number two so here
00:09:53.040 --> 00:09:58.320
we would be predicting word number
00:09:55.640 --> 00:09:59.920
four conditioning on words number three
00:09:58.320 --> 00:10:02.200
and two but not number one so that would
00:09:59.920 --> 00:10:07.600
be like a trigram
00:10:02.200 --> 00:10:07.600
model.
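To make that concrete, here is a minimal count-based sketch of a trigram model; the tiny corpus and the lack of smoothing are illustrative assumptions, not anything from the lecture:

```python
from collections import Counter

# Estimate P(w_t | w_{t-2}, w_{t-1}) from raw corpus counts.
corpus = "the cat sat on the mat the cat ran".split()

trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
bigrams = Counter(zip(corpus, corpus[1:]))

def trigram_prob(w2, w1, w):
    # count(w2, w1, w) / count(w2, w1)
    return trigrams[(w2, w1, w)] / bigrams[(w2, w1)]

# "the cat" occurs twice, followed once by "sat" and once by "ran".
print(trigram_prob("the", "cat", "sat"))  # 0.5
```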
00:10:08.600 --> 00:10:15.240
what is this is there a robot
00:10:11.399 --> 00:10:17.480
walking around somewhere oh it's a drill
00:10:15.240 --> 00:10:20.440
okay it'd be a lot more fun if it was a
00:10:17.480 --> 00:10:22.560
robot so
00:10:20.440 --> 00:10:25.519
the things we're going to talk about
00:10:22.560 --> 00:10:28.360
today are largely going to be ones that
00:10:25.519 --> 00:10:31.200
have unlimited-length context and so
00:10:28.360 --> 00:10:33.440
we'll talk about some examples
00:10:31.200 --> 00:10:35.680
here and then there's also
00:10:33.440 --> 00:10:37.279
independent prediction so this would
00:10:35.680 --> 00:10:39.160
be something like a unigram model where
00:10:37.279 --> 00:10:41.560
you would just not condition on any
00:10:39.160 --> 00:10:41.560
previous
00:10:41.880 --> 00:10:45.959
context there's also bidirectional
00:10:44.279 --> 00:10:47.959
prediction where basically when you
00:10:45.959 --> 00:10:50.440
predict each element you predict based
00:10:47.959 --> 00:10:52.680
on all of the other elements not the
00:10:50.440 --> 00:10:55.519
element itself this could be
00:10:52.680 --> 00:10:59.720
something like a masked language model
00:10:55.519 --> 00:11:02.160
but note here that I put a slash
00:10:59.720 --> 00:11:04.000
through here because this is not a
00:11:02.160 --> 00:11:06.800
well-formed probability because as I
00:11:04.000 --> 00:11:08.760
mentioned last time in order to have
00:11:06.800 --> 00:11:11.000
a well-formed probability you need to
00:11:08.760 --> 00:11:12.440
predict the elements based on all of the
00:11:11.000 --> 00:11:14.120
elements that you predicted before and
00:11:12.440 --> 00:11:16.519
you can't predict based on future
00:11:14.120 --> 00:11:18.519
elements so this is not actually a
00:11:16.519 --> 00:11:20.760
probabilistic model but sometimes people
00:11:18.519 --> 00:11:22.240
use this to kind of learn
00:11:20.760 --> 00:11:24.720
representations that could be used
00:11:22.240 --> 00:11:28.680
downstream for some
00:11:24.720 --> 00:11:30.959
reason.
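As a quick reference, this is the standard chain-rule factorization that makes left-to-right prediction well-formed; this is general math, not something specific to these slides:

```latex
% Left-to-right models factor the joint by the chain rule, so the
% per-token conditionals multiply to a valid distribution over X:
P(X) \;=\; \prod_{t=1}^{|X|} P\bigl(x_t \mid x_1, \dots, x_{t-1}\bigr)
% A masked LM instead scores \prod_t P(x_t \mid x_{\neq t}), conditioning
% each token on future tokens too, so the product is not a valid P(X).
```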
00:11:28.680 --> 00:11:30.959
Cool, is this clear? Any questions or comments?
00:11:32.680 --> 00:11:39.839
yeah so these are all not
00:11:36.800 --> 00:11:42.000
conditioning on any prior context so
00:11:39.839 --> 00:11:43.959
when you predict each word it's
00:11:42.000 --> 00:11:46.880
conditioning on context that you
00:11:43.959 --> 00:11:50.160
previously generated or previously
00:11:46.880 --> 00:11:52.279
predicted yeah and if I go to the
00:11:50.160 --> 00:11:55.399
conditioned ones these are where you
00:11:52.279 --> 00:11:56.800
have like a source X where you're
00:11:55.399 --> 00:11:58.480
given this and then you want to
00:11:56.800 --> 00:11:59.639
calculate the conditional probability of
00:11:58.480 --> 00:12:04.279
something else
00:11:59.639 --> 00:12:06.839
so to give some examples of this
00:12:04.279 --> 00:12:10.320
this is autoregressive conditioned
00:12:06.839 --> 00:12:12.920
prediction and this could be like a
00:12:10.320 --> 00:12:14.440
standard sequence-to-sequence model
00:12:12.920 --> 00:12:16.079
or it could be a language model where
00:12:14.440 --> 00:12:18.600
you're given a prompt and you want to
00:12:16.079 --> 00:12:20.560
predict the following output like we
00:12:18.600 --> 00:12:24.160
often do with ChatGPT or something like
00:12:20.560 --> 00:12:27.880
this and so
00:12:24.160 --> 00:12:30.199
yeah I don't think you
00:12:27.880 --> 00:12:32.279
can
00:12:30.199 --> 00:12:34.639
I don't know if there's any way you can use
00:12:32.279 --> 00:12:37.680
ChatGPT without any conditioning
00:12:34.639 --> 00:12:39.959
context but there were people who
00:12:37.680 --> 00:12:41.240
were sending I saw this about a week
00:12:39.959 --> 00:12:44.079
or two ago there were people who were
00:12:41.240 --> 00:12:47.839
sending things to the GPT-
00:12:44.079 --> 00:12:50.480
3.5 or GPT-4 API with no input and it
00:12:47.839 --> 00:12:52.279
would output random questions
00:12:50.480 --> 00:12:54.800
or something like that so that's
00:12:52.279 --> 00:12:56.720
what happens when you send things
00:12:54.800 --> 00:12:58.120
to ChatGPT without any prior
00:12:56.720 --> 00:13:00.120
conditioning context but normally what you
00:12:58.120 --> 00:13:01.440
do is you put in your prompt
00:13:00.120 --> 00:13:05.320
and then it follows up with your prompt
00:13:01.440 --> 00:13:05.320
and that would be in this
00:13:06.000 --> 00:13:11.279
paradigm there's also something called
00:13:08.240 --> 00:13:14.199
non-autoregressive conditioned prediction
00:13:11.279 --> 00:13:16.760
and this can be used for something
00:13:14.199 --> 00:13:19.160
like sequence labeling or non-
00:13:16.760 --> 00:13:20.760
autoregressive machine translation I'll talk
00:13:19.160 --> 00:13:22.839
about the first one in this class and
00:13:20.760 --> 00:13:25.600
I'll talk about the second one maybe
00:13:22.839 --> 00:13:27.399
later it's kind of a minor topic now
00:13:25.600 --> 00:13:30.040
it used to be popular a few years ago so
00:13:27.399 --> 00:13:33.279
I'm not sure whether I'll cover it but
00:13:30.040 --> 00:13:33.279
00:13:33.399 --> 00:13:39.279
cool so the basic modeling paradigm
00:13:37.079 --> 00:13:41.199
that we use for things like this is
00:13:39.279 --> 00:13:42.760
extracting features and predicting so
00:13:41.199 --> 00:13:44.839
this is exactly the same as the bag of
00:13:42.760 --> 00:13:46.680
words model right the bag-of-words
00:13:44.839 --> 00:13:48.680
model that I talked about the first time
00:13:46.680 --> 00:13:50.959
we extracted features based on those
00:13:48.680 --> 00:13:53.440
features we made predictions so it's no
00:13:50.959 --> 00:13:55.480
different when we do sequence modeling
00:13:53.440 --> 00:13:57.680
but the methods that we use for
00:13:55.480 --> 00:14:01.120
feature extraction are different so given
00:13:57.680 --> 00:14:03.920
the input text X we extract features
00:14:01.120 --> 00:14:06.519
H and predict labels
00:14:03.920 --> 00:14:10.320
Y and for something like text
00:14:06.519 --> 00:14:12.600
classification what we do is so
00:14:10.320 --> 00:14:15.440
for example we have text classification
00:14:12.600 --> 00:14:17.920
or sequence labeling and for text
00:14:15.440 --> 00:14:19.720
classification usually what we would do
00:14:17.920 --> 00:14:21.360
is we would have a feature extractor
00:14:19.720 --> 00:14:23.120
from this feature extractor we take the
00:14:21.360 --> 00:14:25.199
sequence and we convert it into a single
00:14:23.120 --> 00:14:28.040
vector and then based on this vector we
00:14:25.199 --> 00:14:30.160
make a prediction so that's what we
00:14:28.040 --> 00:14:33.160
do for
00:14:30.160 --> 00:14:35.480
classification for sequence labeling
00:14:33.160 --> 00:14:37.160
normally what we do is we extract one
00:14:35.480 --> 00:14:40.240
vector for each thing that we would like
00:14:37.160 --> 00:14:42.079
to predict about so here that might be
00:14:40.240 --> 00:14:45.639
one vector for each
00:14:42.079 --> 00:14:47.720
word and then based on this we
00:14:45.639 --> 00:14:49.120
would predict something for each word so
00:14:47.720 --> 00:14:50.360
this is an example of part of speech
00:14:49.120 --> 00:14:53.079
tagging but there's a lot of other
00:14:50.360 --> 00:14:56.920
sequence labeling tasks also.
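Here is a minimal sketch of the "extract features, then predict" paradigm just described; all shapes, the mean-pooling choice, and the random parameters are placeholder assumptions, not a specific model from the lecture:

```python
import numpy as np

vocab_size, embed_dim, num_labels, seq_len = 1000, 64, 50, 6

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(vocab_size, embed_dim))
W_out = rng.normal(size=(embed_dim, num_labels))

x = rng.integers(0, vocab_size, size=seq_len)   # input token ids
H = embeddings[x]                               # one feature vector per token

# Text classification: pool the sequence into a single vector, one label.
sentence_vec = H.mean(axis=0)
class_scores = sentence_vec @ W_out             # shape: (num_labels,)

# Sequence labeling: keep one vector per token, one label per token.
token_scores = H @ W_out                        # shape: (seq_len, num_labels)
print(class_scores.shape, token_scores.shape)
```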
00:14:53.079 --> 00:14:58.839
And what tasks exist for something
00:14:56.920 --> 00:15:03.040
like sequence labeling so sequence
00:14:58.839 --> 00:15:06.240
labeling is a pretty
00:15:03.040 --> 00:15:09.000
big subset of NLP tasks you can express
00:15:06.240 --> 00:15:11.040
a lot of things as sequence labeling and
00:15:09.000 --> 00:15:13.000
basically given an input text X we
00:15:11.040 --> 00:15:16.079
predict an output label sequence y of
00:15:13.000 --> 00:15:17.560
equal length so this can be used for
00:15:16.079 --> 00:15:20.160
things like part of speech tagging to
00:15:17.560 --> 00:15:22.000
get the parts of speech of each word
00:15:20.160 --> 00:15:24.639
it can also be used for something like
00:15:22.000 --> 00:15:26.959
lemmatization and
00:15:24.639 --> 00:15:29.880
lemmatization is predicting the base
00:15:26.959 --> 00:15:31.480
form of each word and this can be
00:15:29.880 --> 00:15:34.560
used for normalization if you want to
00:15:31.480 --> 00:15:36.360
find for example all instances of
00:15:34.560 --> 00:15:38.480
a particular verb being used or all
00:15:36.360 --> 00:15:40.800
instances of a particular noun being
00:15:38.480 --> 00:15:42.720
used this is a little bit different than
00:15:40.800 --> 00:15:45.000
something like stemming so stemming
00:15:42.720 --> 00:15:48.160
normally what stemming would do is it
00:15:45.000 --> 00:15:50.560
would chop off the plural here it
00:15:48.160 --> 00:15:53.240
would chop off the s but it wouldn't be able
00:15:50.560 --> 00:15:56.279
to do things like normalize saw into see
00:15:53.240 --> 00:15:57.759
because stemming just removes
00:15:56.279 --> 00:15:59.240
suffixes it doesn't do any sort of
00:15:57.759 --> 00:16:02.720
normalization so that's the difference
00:15:59.240 --> 00:16:05.199
between lemmatization and
00:16:02.720 --> 00:16:08.079
stemming there's also something called
00:16:05.199 --> 00:16:09.680
morphological tagging in
00:16:08.079 --> 00:16:11.639
morphological tagging basically what
00:16:09.680 --> 00:16:14.360
this is doing is this is a
00:16:11.639 --> 00:16:17.040
more advanced version of part of speech
00:16:14.360 --> 00:16:20.360
tagging that predicts things like
00:16:17.040 --> 00:16:23.600
okay this is a past-tense verb this
00:16:20.360 --> 00:16:25.639
is a plural this is a particular verb
00:16:23.600 --> 00:16:27.240
form and you have multiple tags here
00:16:25.639 --> 00:16:28.959
this is less interesting in English
00:16:27.240 --> 00:16:30.920
because English is kind of a boring
00:16:28.959 --> 00:16:32.319
language morphologically it
00:16:30.920 --> 00:16:33.399
doesn't have a lot of conjugation and
00:16:32.319 --> 00:16:35.839
other stuff but it's a lot more
00:16:33.399 --> 00:16:38.319
interesting in more complex languages
00:16:35.839 --> 00:16:40.040
like Japanese or Hindi or other things
00:16:38.319 --> 00:16:42.480
like
00:16:40.040 --> 00:16:43.920
that Chinese is even more boring than
00:16:42.480 --> 00:16:46.120
English so if you're interested in
00:16:43.920 --> 00:16:47.000
Chinese then you don't need to worry
00:16:46.120 --> 00:16:50.680
about
00:16:47.000 --> 00:16:52.560
that cool but actually what's maybe
00:16:50.680 --> 00:16:55.000
more widely used from the sequence
00:16:52.560 --> 00:16:57.480
labeling perspective is span labeling
00:16:55.000 --> 00:17:01.040
and here you want to predict spans and
00:16:57.480 --> 00:17:03.560
labels and this could be named entity
00:17:01.040 --> 00:17:05.360
recognition so if you say Graham Neubig
00:17:03.560 --> 00:17:07.199
is teaching at Carnegie Mellon University
00:17:05.360 --> 00:17:09.520
you would want to identify each entity
00:17:07.199 --> 00:17:11.480
as being like a person organization
00:17:09.520 --> 00:17:16.039
place governmental entity other stuff
00:17:11.480 --> 00:17:18.760
like that there's also
00:17:16.039 --> 00:17:20.439
things like syntactic chunking where
00:17:18.760 --> 00:17:23.640
you want to find all noun phrases and
00:17:20.439 --> 00:17:26.799
verb phrases also semantic role
00:17:23.640 --> 00:17:30.360
labeling where semantic role labeling is
00:17:26.799 --> 00:17:32.480
demonstrating who did what to whom so
00:17:30.360 --> 00:17:34.440
it's saying this is the actor the
00:17:32.480 --> 00:17:36.120
person who did the thing this is the
00:17:34.440 --> 00:17:38.520
thing that is being done and this is the
00:17:36.120 --> 00:17:40.280
place where it's being done so uh this
00:17:38.520 --> 00:17:42.840
can be useful if you want to do any sort
00:17:40.280 --> 00:17:45.559
of analysis about who does what to whom
00:17:42.840 --> 00:17:48.160
uh other things like
00:17:45.559 --> 00:17:50.360
that there's also a more complicated
00:17:48.160 --> 00:17:52.080
thing called entity linking which
00:17:50.360 --> 00:17:54.559
isn't really a span labeling task but
00:17:52.080 --> 00:17:58.400
it's basically named entity recognition
00:17:54.559 --> 00:18:00.799
and you link it
00:17:58.400 --> 00:18:04.200
to a database like Wikidata or
00:18:00.799 --> 00:18:06.600
Wikipedia or something like this and
00:18:04.200 --> 00:18:09.520
this doesn't seem very glamorous perhaps
00:18:06.600 --> 00:18:10.799
a lot of people might not
00:18:09.520 --> 00:18:13.400
be
00:18:10.799 --> 00:18:15.000
immediately excited
00:18:13.400 --> 00:18:16.799
immediately excited by entity linking
00:18:15.000 --> 00:18:18.520
but actually it's super important
00:18:16.799 --> 00:18:20.080
for things like news aggregation and
00:18:18.520 --> 00:18:21.640
other stuff like that find all the news
00:18:20.080 --> 00:18:23.799
articles about a celebrity or
00:18:21.640 --> 00:18:26.919
something like this find all of the
00:18:23.799 --> 00:18:29.720
mentions of our company's
00:18:26.919 --> 00:18:33.400
product on social media so
00:18:29.720 --> 00:18:33.400
it's actually a really widely used
00:18:33.720 --> 00:18:38.000
technology and then finally span
00:18:36.039 --> 00:18:40.240
labeling can also be treated as sequence
00:18:38.000 --> 00:18:43.240
labeling and the way we normally do
00:18:40.240 --> 00:18:45.600
this is we use something called BIO tags
00:18:43.240 --> 00:18:47.760
and here you predict the beginning
00:18:45.600 --> 00:18:50.200
in and out tags for each word
00:18:47.760 --> 00:18:52.400
so if we have this example of spans
00:18:50.200 --> 00:18:56.120
we just convert this into tags where
00:18:52.400 --> 00:18:57.760
you say begin-person in-person and O
00:18:56.120 --> 00:18:59.640
means it's not an entity then begin-
00:18:57.760 --> 00:19:02.799
organization in-organization and then
00:18:59.640 --> 00:19:05.520
you convert that back into these
00:19:02.799 --> 00:19:09.880
spans so this makes it relatively easy
00:19:05.520 --> 00:19:09.880
to do the span
00:19:10.480 --> 00:19:15.120
prediction.
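A small sketch of decoding BIO tags back into labeled spans; the "B-PER"/"I-PER"/"O" tag names follow a common convention and are an assumption, since the slides may format them differently:

```python
def bio_to_spans(tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):       # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((start, i, label))  # [start, end) plus its label
            start, label = (i, tag[2:]) if tag.startswith("B-") else (None, None)
        # "I-" tags just extend the current span
    return spans

tags = ["B-PER", "I-PER", "O", "O", "B-ORG", "I-ORG", "I-ORG"]
print(bio_to_spans(tags))  # [(0, 2, 'PER'), (4, 7, 'ORG')]
```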
00:19:13.640 --> 00:19:16.600
Cool, so now you know what to do if you want to
00:19:15.120 --> 00:19:18.280
predict entities or other things like
00:19:16.600 --> 00:19:20.240
that there's a lot of models on
00:19:18.280 --> 00:19:22.400
Hugging Face for example that allow
00:19:20.240 --> 00:19:25.640
you to do these things are there any
00:19:22.400 --> 00:19:25.640
questions before I move
00:19:27.080 --> 00:19:32.440
on okay
00:19:28.799 --> 00:19:34.039
cool I'll just go forward then so now
00:19:32.440 --> 00:19:37.000
I'm going to talk about how we actually
00:19:34.039 --> 00:19:38.559
model these in machine learning models
00:19:37.000 --> 00:19:40.919
and there's three major types of
00:19:38.559 --> 00:19:43.120
sequence models there are other types
00:19:40.919 --> 00:19:45.320
of sequence models but I'd say the great
00:19:43.120 --> 00:19:47.840
majority of work uses one of these three
00:19:45.320 --> 00:19:51.720
different types and the first one is
00:19:47.840 --> 00:19:54.840
recurrence what recurrence does is
00:19:51.720 --> 00:19:56.240
it conditions representations on an
00:19:54.840 --> 00:19:58.720
encoding of the
00:19:56.240 --> 00:20:01.360
history and the way this works
00:19:58.720 --> 00:20:04.679
is essentially you have your input
00:20:01.360 --> 00:20:06.280
vectors like this usually word
00:20:04.679 --> 00:20:08.600
embeddings or embeddings from the
00:20:06.280 --> 00:20:10.880
previous layer of the model and you have
00:20:08.600 --> 00:20:12.840
a recurrent neural network and the
00:20:10.880 --> 00:20:14.600
recurrent neural network at the very
00:20:12.840 --> 00:20:17.280
beginning might only take the first
00:20:14.600 --> 00:20:19.480
vector but every subsequent step it
00:20:17.280 --> 00:20:23.760
takes the input vector and it takes the
00:20:19.480 --> 00:20:23.760
hidden vector from the previous
00:20:24.080 --> 00:20:32.280
input and then you keep on going
00:20:29.039 --> 00:20:32.280
like this all the way through the
00:20:32.320 --> 00:20:37.600
sequence then convolution is
00:20:35.799 --> 00:20:40.880
conditioning representations on local
00:20:37.600 --> 00:20:44.200
context so you have the inputs like this
00:20:40.880 --> 00:20:47.200
and here you're conditioning on the word
00:20:44.200 --> 00:20:51.240
itself and the surrounding words on
00:20:47.200 --> 00:20:52.960
the right or the left so you would do
00:20:51.240 --> 00:20:55.240
something like this this is a typical
00:20:52.960 --> 00:20:57.480
convolution where you have this
00:20:55.240 --> 00:20:59.039
center one here and the left one and
00:20:57.480 --> 00:21:01.080
the right one and this would be a size
00:20:59.039 --> 00:21:03.480
three convolution you could also have a
00:21:01.080 --> 00:21:06.520
size five convolution seven nine
00:21:03.480 --> 00:21:08.600
whatever else that would take in more
00:21:06.520 --> 00:21:11.520
surrounding words like
00:21:08.600 --> 00:21:13.720
this.
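Here is a minimal size-3 convolution over a token sequence, where each output position depends on the word itself plus one neighbor on each side; the shapes, the zero padding, and the single filter matrix are illustrative assumptions:

```python
import numpy as np

seq_len, dim = 5, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, dim))          # one input vector per token
W = rng.normal(size=(3 * dim, dim))          # size-3 convolution filter

padded = np.vstack([np.zeros((1, dim)), X, np.zeros((1, dim))])
H = np.stack([
    np.tanh(padded[i:i + 3].reshape(-1) @ W)  # left, center, right vectors
    for i in range(seq_len)
])
print(H.shape)  # (5, 4): one output vector per token
```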
00:21:11.520 --> 00:21:15.640
Then finally we have attention, which conditions
00:21:13.720 --> 00:21:19.080
representations on a weighted average of
00:21:15.640 --> 00:21:21.000
all tokens in the sequence and so here
00:21:19.080 --> 00:21:24.600
we're conditioning on all of the
00:21:21.000 --> 00:21:26.279
other tokens in the sequence but the
00:21:24.600 --> 00:21:28.919
amount that we condition on each of the
00:21:26.279 --> 00:21:32.039
tokens differs
00:21:28.919 --> 00:21:34.919
so we might get more of this token less
00:21:32.039 --> 00:21:37.600
of that token and other things like that.
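A bare-bones version of that weighted average: score one query vector against every token, softmax the scores, and mix the token vectors by those weights. This shows just the averaging idea; real attention adds learned query, key, and value projections, and the vectors here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 8))             # 6 token vectors, dimension 8
query = rng.normal(size=8)

scores = tokens @ query                      # one score per token
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax: weights sum to 1
context = weights @ tokens                   # weighted average, shape (8,)
print(weights.round(2), context.shape)
```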
00:21:34.919 --> 00:21:39.720
and I'll go into the mechanisms of each
00:21:37.600 --> 00:21:43.159
of
00:21:39.720 --> 00:21:45.720
these one important thing to think about
00:21:43.159 --> 00:21:49.279
is the computational complexity of
00:21:45.720 --> 00:21:51.960
each of these and the computational
00:21:49.279 --> 00:21:56.240
complexity can be
00:21:51.960 --> 00:21:58.600
expressed in terms of the sequence length let's
00:21:56.240 --> 00:22:00.840
call the sequence length n and
00:21:58.600 --> 00:22:02.520
convolution has a convolution window
00:22:00.840 --> 00:22:05.080
size so I'll call that
00:22:02.520 --> 00:22:08.039
W so does anyone have an idea of the
00:22:05.080 --> 00:22:10.360
computational complexity of a recurrent
00:22:08.039 --> 00:22:10.360
neural
00:22:11.480 --> 00:22:16.640
network so how quickly does the
00:22:15.120 --> 00:22:18.640
computation of a recurrent neural
00:22:16.640 --> 00:22:20.760
network grow and one way you can look at
00:22:18.640 --> 00:22:24.360
this is to figure out the number of
00:22:20.760 --> 00:22:24.360
arrows that you see
00:22:24.480 --> 00:22:29.080
here yeah it's linear so it's
00:22:27.440 --> 00:22:32.520
basically
00:22:29.080 --> 00:22:35.520
n what about
00:22:32.520 --> 00:22:36.760
convolution any other ideas any ideas
00:22:35.520 --> 00:22:42.039
about
00:22:36.760 --> 00:22:45.120
convolution n times w yeah n times
00:22:42.039 --> 00:22:47.559
w and what about
00:22:45.120 --> 00:22:52.200
attention n squared
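To put rough numbers on those growth rates (the sequence length and window size below are made-up examples, and constants and the hidden dimension are ignored):

```python
n, w = 10_000, 3              # sequence length, convolution window size
print(n)                      # recurrence:  O(n)   ->      10,000 steps
print(n * w)                  # convolution: O(n*w) ->      30,000 steps
print(n ** 2)                 # attention:   O(n^2) -> 100,000,000 steps
```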
00:22:47.559 --> 00:22:53.559
yeah so what you can see is for very
00:22:52.200 --> 00:22:58.000
long
00:22:53.559 --> 00:23:00.400
sequences for very long sequences the
00:22:58.000 --> 00:23:04.480
asymptotic complexity of running a
00:23:00.400 --> 00:23:06.039
recurrent neural network is lower so
00:23:04.480 --> 00:23:08.960
you can run a recurrent neural network
00:23:06.039 --> 00:23:10.480
over a sequence of length 20
00:23:08.960 --> 00:23:12.480
million or something like that and as
00:23:10.480 --> 00:23:15.200
long as you had enough memory it would
00:23:12.480 --> 00:23:16.520
take linear time but if you do
00:23:15.200 --> 00:23:18.400
something like attention over a really
00:23:16.520 --> 00:23:20.240
long sequence it would be more difficult
00:23:18.400 --> 00:23:22.080
there's a lot of caveats here because
00:23:20.240 --> 00:23:23.320
attention and convolution are easily
00:23:22.080 --> 00:23:26.200
00:23:23.320 --> 00:23:28.520
parallelized whereas recurrence is
00:23:26.200 --> 00:23:30.919
not and I'll talk about that in a second
00:23:28.520 --> 00:23:32.679
but anyway it's a good thing to keep
00:23:30.919 --> 00:23:36.240
in
00:23:32.679 --> 00:23:37.679
mind cool so the first
00:23:36.240 --> 00:23:39.799
sequence model I want to introduce is
00:23:37.679 --> 00:23:42.559
recurrent neural networks oh sorry
00:23:39.799 --> 00:23:45.799
one other thing I want to mention is all
00:23:42.559 --> 00:23:47.600
of these are still used it might seem
00:23:45.799 --> 00:23:49.960
like if you're very plugged into
00:23:47.600 --> 00:23:52.640
NLP it might seem like well everybody's
00:23:49.960 --> 00:23:55.080
using attention so why do we need to
00:23:52.640 --> 00:23:56.880
learn about the other ones but
00:23:55.080 --> 00:23:59.679
actually all of these are used and
00:23:56.880 --> 00:24:02.600
usually recurrence and convolution are
00:23:59.679 --> 00:24:04.960
used in combination with attention in
00:24:02.600 --> 00:24:07.799
some way for particular applications
00:24:04.960 --> 00:24:09.960
where recurrence or
00:24:07.799 --> 00:24:12.640
convolution are useful so I'll
00:24:09.960 --> 00:24:15.279
go into details of that
00:24:12.640 --> 00:24:18.159
later so let's talk about the first sequence
00:24:15.279 --> 00:24:20.600
model uh recurrent neural networks so
00:24:18.159 --> 00:24:22.919
recurrent neural networks they're
00:24:20.600 --> 00:24:26.399
basically tools to remember information
00:24:22.919 --> 00:24:28.520
they were invented around
00:24:26.399 --> 00:24:30.520
1990 and
00:24:28.520 --> 00:24:34.120
the way they work is a feedforward
00:24:30.520 --> 00:24:35.600
neural network looks a bit like this we
00:24:34.120 --> 00:24:38.000
have some sort of look up over the
00:24:35.600 --> 00:24:40.120
context we calculate embeddings we do a
00:24:38.000 --> 00:24:41.000
transform we get a hidden State and we
00:24:40.120 --> 00:24:43.039
make the
00:24:41.000 --> 00:24:46.159
prediction whereas a recurrent neural
00:24:43.039 --> 00:24:49.360
network uh feeds in the previous hidden
00:24:46.159 --> 00:24:53.360
state and a very simple Elman-style
00:24:49.360 --> 00:24:54.840
neural network or rather I'll contrast
00:24:53.360 --> 00:24:56.559
the feed forward neural network that we
00:24:54.840 --> 00:24:58.279
already know with an Elman-style
00:24:56.559 --> 00:25:00.399
recurrent neural
00:24:58.279 --> 00:25:01.880
network so basically
00:25:00.399 --> 00:25:06.120
the feedforward network that we already
00:25:01.880 --> 00:25:07.840
know does a linear transform over the
00:25:06.120 --> 00:25:09.279
input and then it runs it through a
00:25:07.840 --> 00:25:11.640
nonlinear function and this could be
00:25:09.279 --> 00:25:14.200
like a tanh function or a ReLU function or
00:25:11.640 --> 00:25:17.080
anything like that in a recurrent neural
00:25:14.200 --> 00:25:19.559
network we add multiplication by
00:25:17.080 --> 00:25:22.080
the previous hidden state so it
00:25:19.559 --> 00:25:25.120
looks like
00:25:22.080 --> 00:25:27.000
this.
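As a minimal sketch of that Elman-style step: the feedforward transform of the input plus a multiplication of the previous hidden state, passed through a nonlinearity. The dimensions and random parameters are illustrative assumptions:

```python
import numpy as np

in_dim, hid_dim = 4, 3
rng = np.random.default_rng(0)
W_x = rng.normal(size=(in_dim, hid_dim))
W_h = rng.normal(size=(hid_dim, hid_dim))
b = np.zeros(hid_dim)

def rnn_step(x_t, h_prev):
    # h_t = tanh(x_t W_x + h_{t-1} W_h + b); same parameters at every step
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

h = np.zeros(hid_dim)                     # initial state (all zeros here)
for x_t in rng.normal(size=(5, in_dim)):  # a length-5 input sequence
    h = rnn_step(x_t, h)
print(h)
```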
00:25:25.120 --> 00:25:29.080
And if we look at what processing a sequence looks like
00:25:27.000 --> 00:25:31.080
basically what we do is we start out
00:25:29.080 --> 00:25:32.720
with an initial state this initial state
00:25:31.080 --> 00:25:34.320
could be like all zeros or it could be
00:25:32.720 --> 00:25:35.200
randomized or it could be learned or
00:25:34.320 --> 00:25:38.480
whatever
00:25:35.200 --> 00:25:42.080
else and then based on this
00:25:38.480 --> 00:25:44.279
we run it through an RNN function and
00:25:42.080 --> 00:25:46.600
then we calculate the hidden
00:25:44.279 --> 00:25:48.960
state use it to make a prediction we
00:25:46.600 --> 00:25:50.760
have the RNN function make a
00:25:48.960 --> 00:25:51.760
prediction RNN make a prediction RNN
00:25:50.760 --> 00:25:54.520
make a
00:25:51.760 --> 00:25:56.960
prediction so one important thing here
00:25:54.520 --> 00:25:58.360
is that this RNN is exactly the same
00:25:56.960 --> 00:26:01.880
function
00:25:58.360 --> 00:26:04.960
no matter which position it appears in
00:26:01.880 --> 00:26:06.640
and so because of that no matter
00:26:04.960 --> 00:26:08.279
how long the sequence becomes we always
00:26:06.640 --> 00:26:10.200
have the same number of parameters which
00:26:08.279 --> 00:26:12.600
is really important for a
00:26:10.200 --> 00:26:15.120
sequence model so that's what this
00:26:12.600 --> 00:26:15.120
looks like
00:26:15.799 --> 00:26:20.480
here so how do we train
00:26:18.320 --> 00:26:22.679
RNNs
00:26:20.480 --> 00:26:24.399
basically if you remember we can train
00:26:22.679 --> 00:26:27.159
neural networks as long as we have a
00:26:24.399 --> 00:26:29.240
directed acyclic graph that calculates
00:26:27.159 --> 00:26:30.919
our loss function and then
00:26:29.240 --> 00:26:32.640
forward propagation and back propagation
00:26:30.919 --> 00:26:35.720
will do all the rest to calculate our
00:26:32.640 --> 00:26:38.760
gradients and we update the
00:26:35.720 --> 00:26:40.480
parameters so the way this works is
00:26:38.760 --> 00:26:42.000
let's say we're doing sequence labeling
00:26:40.480 --> 00:26:45.200
and each of these predictions is a part
00:26:42.000 --> 00:26:47.559
of speech each of these labels is a
00:26:45.200 --> 00:26:49.000
true part of speech label or sorry each
00:26:47.559 --> 00:26:50.760
of these predictions is like a
00:26:49.000 --> 00:26:52.919
probability over the parts of
00:26:50.760 --> 00:26:55.720
speech for that sequence each of these
00:26:52.919 --> 00:26:57.640
labels is a true part of speech label so
00:26:55.720 --> 00:26:59.320
basically what we do is from this we
00:26:57.640 --> 00:27:02.200
calculate the negative log likelihood of
00:26:59.320 --> 00:27:05.559
the true part of speech we get a
00:27:02.200 --> 00:27:09.120
loss and so now we have four losses
00:27:05.559 --> 00:27:11.559
here this is no longer a nice directed
00:27:09.120 --> 00:27:13.000
acyclic graph that ends in a single
00:27:11.559 --> 00:27:15.279
loss function which is kind of what we
00:27:13.000 --> 00:27:17.559
needed for back propagation right so
00:27:15.279 --> 00:27:20.240
what do we do very simple we just add
00:27:17.559 --> 00:27:22.440
them together we take the sum and now
00:27:20.240 --> 00:27:24.120
we have a single loss function which
00:27:22.440 --> 00:27:26.240
is the sum of all of the loss functions
00:27:24.120 --> 00:27:28.679
for each prediction that we
00:27:26.240 --> 00:27:30.799
made and that's our total loss and now
00:27:28.679 --> 00:27:32.600
we do have a directed acyclic graph where
00:27:30.799 --> 00:27:34.320
this is the terminal node and we can do
00:27:32.600 --> 00:27:36.480
backprop like this.
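A sketch of turning those per-step predictions into a single scalar loss: take the negative log likelihood of the true tag at each position and sum them. The probabilities below are made up purely for illustration:

```python
import numpy as np

probs = np.array([                  # per-step distributions over 3 tags
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
gold = np.array([0, 1, 2, 0])       # true tag index at each step

step_losses = -np.log(probs[np.arange(4), gold])
total_loss = step_losses.sum()      # single terminal node for backprop
print(step_losses.round(3), total_loss.round(3))
```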
00:27:34.320 --> 00:27:37.799
This is true for all sequence
00:27:36.480 --> 00:27:39.320
models I'm going to talk about today I'm
00:27:37.799 --> 00:27:41.559
just illustrating it with recurrent
00:27:39.320 --> 00:27:43.279
networks any questions here
00:27:41.559 --> 00:27:45.240
everything
00:27:43.279 --> 00:27:47.919
good
00:27:45.240 --> 00:27:50.279
okay cool so now we have the
00:27:47.919 --> 00:27:52.960
loss it's a well-formed DAG we can run
00:27:50.279 --> 00:27:55.320
backprop so basically what we do is we
00:27:52.960 --> 00:27:58.399
just run backprop and our loss goes
00:27:55.320 --> 00:28:01.120
out back into all of the
00:27:58.399 --> 00:28:04.200
places now parameters are tied across
00:28:01.120 --> 00:28:06.080
time so the derivatives into the
00:28:04.200 --> 00:28:07.200
parameters are aggregated over all of
00:28:06.080 --> 00:28:10.760
the time
00:28:07.200 --> 00:28:13.760
steps and this has been called back
00:28:10.760 --> 00:28:16.320
propagation through time since
00:28:13.760 --> 00:28:18.679
these were originally invented so
00:28:16.320 --> 00:28:21.720
basically what it looks like is because
00:28:18.679 --> 00:28:25.600
the parameters for this RNN function are
00:28:21.720 --> 00:28:27.120
shared they'll essentially be updated
00:28:25.600 --> 00:28:29.480
they'll only be updated once but they're
00:28:27.120 --> 00:28:32.640
updated from like four different
00:28:29.480 --> 00:28:32.640
positions in this network
00:28:34.120 --> 00:28:38.440
essentially yeah and this is the same
00:28:36.120 --> 00:28:40.559
for all sequence models that
00:28:38.440 --> 00:28:43.519
I'm going to talk about
00:28:40.559 --> 00:28:45.360
today another variety of models that
00:28:43.519 --> 00:28:47.559
people use are bidirectional RNNs and
00:28:45.360 --> 00:28:49.880
these are used when you want to
00:28:47.559 --> 00:28:52.960
do something like sequence labeling
00:28:49.880 --> 00:28:54.399
and so you just run two RNNs you
00:28:52.960 --> 00:28:56.279
run one from the beginning one from the
00:28:54.399 --> 00:28:59.399
end and concatenate them together like
00:28:56.279 --> 00:28:59.399
this make predictions
00:29:01.200 --> 00:29:08.200
cool any questions yeah if you run
00:29:05.559 --> 00:29:09.960
the bidirectional one does that change your
00:29:08.200 --> 00:29:11.679
complexity does this change the
00:29:09.960 --> 00:29:13.000
complexity it doesn't change the asymptotic
00:29:11.679 --> 00:29:16.519
complexity because you're
00:29:13.000 --> 00:29:18.320
multiplying by two and big O
00:29:16.519 --> 00:29:21.559
notation doesn't care if you multiply by
00:29:18.320 --> 00:29:23.880
a constant but it does double the time
00:29:21.559 --> 00:29:23.880
that it would
00:29:24.080 --> 00:29:28.080
take cool any
00:29:26.320 --> 00:29:32.799
other
00:29:28.080 --> 00:29:35.720
okay let's go forward another problem
00:29:32.799 --> 00:29:37.240
that is particularly salient in RNNs and
00:29:35.720 --> 00:29:40.440
part of the reason why attention models
00:29:37.240 --> 00:29:42.000
are so useful is vanishing gradients but
00:29:40.440 --> 00:29:43.880
you should be aware of this regardless
00:29:42.000 --> 00:29:46.799
of which model
00:29:43.880 --> 00:29:48.799
you're using and thinking about it
00:29:46.799 --> 00:29:50.720
very carefully is actually a really good
00:29:48.799 --> 00:29:52.399
way to design better architectures if
00:29:50.720 --> 00:29:54.000
you're going to be designing
00:29:52.399 --> 00:29:56.039
00:29:54.000 --> 00:29:58.000
architectures so basically the problem
00:29:56.039 --> 00:29:59.399
with vanishing gradients is let's
00:29:58.000 --> 00:30:01.799
say we have a prediction task where
00:29:59.399 --> 00:30:03.960
we're calculating a regression we're
00:30:01.799 --> 00:30:05.519
inputting a whole bunch of tokens and
00:30:03.960 --> 00:30:08.080
then calculating a regression at the
00:30:05.519 --> 00:30:12.840
very end using a squared error loss
00:30:08.080 --> 00:30:16.360
function if we do something like this
00:30:12.840 --> 00:30:17.919
the problem is if we have a standard RNN
00:30:16.360 --> 00:30:21.279
when we do back
00:30:17.919 --> 00:30:25.480
prop we'll have a big gradient
00:30:21.279 --> 00:30:27.000
probably for the first RNN unit here but
00:30:25.480 --> 00:30:30.120
every time because we're running this
00:30:27.000 --> 00:30:33.679
through some sort of
00:30:30.120 --> 00:30:37.080
nonlinearity for example if our
00:30:33.679 --> 00:30:39.240
nonlinearity is a tanh function the
00:30:37.080 --> 00:30:42.000
gradient of the tanh function looks a
00:30:39.240 --> 00:30:42.000
little bit like
00:30:42.120 --> 00:30:50.000
this and here if I am not mistaken
00:30:47.200 --> 00:30:53.480
this peaks at one and everywhere else
00:30:50.000 --> 00:30:56.919
goes to zero and so because this peaks at
00:30:53.480 --> 00:30:58.679
one let's say
00:30:56.919 --> 00:31:01.360
we have an input way over here like
00:30:58.679 --> 00:31:03.080
minus 3 or something like that if
00:31:01.360 --> 00:31:04.760
we have that it basically destroys our
00:31:03.080 --> 00:31:10.760
gradient our gradient disappears for
00:31:04.760 --> 00:31:13.559
that particular unit.
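To see this in miniature: the derivative of tanh is 1 - tanh(x)^2, which is 1 at x = 0 and nearly 0 for an input like -3, and multiplying many such factors shrinks the gradient. The specific pre-activation values below are made up for illustration:

```python
import numpy as np

def dtanh(x):
    # derivative of tanh
    return 1.0 - np.tanh(x) ** 2

print(dtanh(0.0))    # 1.0: the peak of the derivative
print(dtanh(-3.0))   # ~0.0099: this unit nearly kills the gradient

grad = 1.0
for x in [-3.0, 0.5, 2.0, -1.0, 0.1]:    # pre-activations along the chain
    grad *= dtanh(x)                      # chain rule across time steps
print(grad)          # tiny: the signal reaching early steps has vanished
```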
00:31:10.760 --> 00:31:15.399
Maybe one thing that you might say is oh
00:31:13.559 --> 00:31:17.039
well if this is getting so
00:31:15.399 --> 00:31:19.320
small because this only goes up to one
00:31:17.039 --> 00:31:22.960
let's do like 100 times
00:31:19.320 --> 00:31:24.880
tanh as our activation function
00:31:22.960 --> 00:31:26.600
we'll do 100 times tanh and so now this
00:31:24.880 --> 00:31:28.279
goes up to 100 and now our gradients are
00:31:26.600 --> 00:31:30.080
not going to disappear but then you
00:31:28.279 --> 00:31:31.720
have the opposite problem you have
00:31:30.080 --> 00:31:34.760
exploding gradients where it goes up by
00:31:31.720 --> 00:31:36.360
100 every time it gets unmanageable
00:31:34.760 --> 00:31:40.000
and destroys your gradient descent
00:31:36.360 --> 00:31:41.720
itself so basically we have
00:31:40.000 --> 00:31:43.200
this problem because if you apply a
00:31:41.720 --> 00:31:45.639
function over and over again your
00:31:43.200 --> 00:31:47.240
gradient gets smaller and
00:31:45.639 --> 00:31:49.080
smaller or bigger and bigger
00:31:47.240 --> 00:31:50.480
every time you do that and you have
00:31:49.080 --> 00:31:51.720
the vanishing gradient or exploding
00:31:50.480 --> 00:31:54.799
gradient
00:31:51.720 --> 00:31:56.919
problem it's not just a problem with
00:31:54.799 --> 00:31:59.039
nonlinearities so it also happens when
00:31:56.919 --> 00:32:00.480
you do your weight matrix multiplies
00:31:59.039 --> 00:32:03.840
and other stuff like that basically
00:32:00.480 --> 00:32:05.960
anytime you map the input into
00:32:03.840 --> 00:32:07.720
a different output it will have a
00:32:05.960 --> 00:32:10.240
gradient and so it will either be bigger
00:32:07.720 --> 00:32:14.000
than one or less than
00:32:10.240 --> 00:32:16.000
one so I mentioned this is a problem
00:32:14.000 --> 00:32:18.120
for RNNs it's particularly a problem for
00:32:16.000 --> 00:32:20.799
RNNs over long sequences but it's also a
00:32:18.120 --> 00:32:23.039
problem for any other model you use and
00:32:20.799 --> 00:32:24.960
the reason why this is important to know
00:32:23.039 --> 00:32:26.799
is if there's important information in
00:32:24.960 --> 00:32:29.000
your model finding a way that you can
00:32:26.799 --> 00:32:30.559
get a direct path from that important
00:32:29.000 --> 00:32:32.600
information to wherever you're making a
00:32:30.559 --> 00:32:34.440
prediction often is a way to improve
00:32:32.600 --> 00:32:39.120
your model
00:32:34.440 --> 00:32:41.159
improve your model performance and on
00:32:39.120 --> 00:32:42.919
the contrary if there's unimportant
00:32:41.159 --> 00:32:45.320
information if there's information that
00:32:42.919 --> 00:32:47.159
you think is likely to be unimportant
00:32:45.320 --> 00:32:49.159
putting it farther away or making it a
00:32:47.159 --> 00:32:51.279
more indirect path so the model has to
00:32:49.159 --> 00:32:53.200
kind of work harder to use it is a good
00:32:51.279 --> 00:32:54.840
way to prevent the model from being
00:32:53.200 --> 00:32:57.679
distracted by like tons and tons of
00:32:54.840 --> 00:33:00.200
information some of
00:32:57.679 --> 00:33:03.960
which may be irrelevant so it's a good
00:33:00.200 --> 00:33:03.960
thing to know about in general for model
00:33:05.360 --> 00:33:13.080
design so how did RNNs solve this
00:33:09.559 --> 00:33:15.360
problem of the vanishing gradient
00:33:13.080 --> 00:33:16.880
there is a method called long short-term
00:33:15.360 --> 00:33:20.360
memory
00:33:16.880 --> 00:33:22.840
and the basic idea is to make
00:33:20.360 --> 00:33:24.360
additive connections between time
00:33:22.840 --> 00:33:29.919
steps
00:33:24.360 --> 00:33:32.799
and so addition
00:33:29.919 --> 00:33:36.399
or kind of like the
00:33:32.799 --> 00:33:38.159
identity is the only thing that does not
00:33:36.399 --> 00:33:40.880
change the gradient it's guaranteed to
00:33:38.159 --> 00:33:43.279
not change the gradient because the
00:33:40.880 --> 00:33:46.639
identity function is like f
00:33:43.279 --> 00:33:49.159
of x equals x and if you take the
00:33:46.639 --> 00:33:51.480
derivative of this it's one so you're
00:33:49.159 --> 00:33:55.440
guaranteed to always have a gradient of
00:33:51.480 --> 00:33:57.360
one according to this function so
00:33:55.440 --> 00:33:59.559
long short-term memory makes sure that
00:33:57.360 --> 00:34:01.840
you have this additive input between
00:33:59.559 --> 00:34:04.600
time steps and this is what it looks
00:34:01.840 --> 00:34:05.919
like it's not super important to
00:34:04.600 --> 00:34:09.119
understand everything that's going on
00:34:05.919 --> 00:34:12.200
here but just to explain it very quickly
00:34:09.119 --> 00:34:15.720
this C here is something called the
00:34:12.200 --> 00:34:20.520
memory cell it's passed on linearly like
00:34:15.720 --> 00:34:24.679
this and then you have some gates the
00:34:20.520 --> 00:34:27.320
update gate is determining
00:34:24.679 --> 00:34:28.919
whether you update this hidden state or
00:34:27.320 --> 00:34:31.440
how much you update given this hidden
00:34:28.919 --> 00:34:34.480
state the input gate is deciding how
00:34:31.440 --> 00:34:36.760
much of the input you take in and
00:34:34.480 --> 00:34:39.879
then the output gate is deciding how
00:34:36.760 --> 00:34:43.280
much of the output from the cell you
00:34:39.879 --> 00:34:45.599
you basically push out after using
00:34:43.280 --> 00:34:47.079
the cell so it has these three gates
00:34:45.599 --> 00:34:48.760
that control the information flow and
00:34:47.079 --> 00:34:51.520
the model can learn to turn them on or
00:34:48.760 --> 00:34:53.720
off or something like that so
00:34:51.520 --> 00:34:55.679
that's the basic idea of the
00:34:53.720 --> 00:34:57.240
LSTM and there's lots of other
00:34:55.679 --> 00:34:59.359
variants of this like gated recurrent
00:34:57.240 --> 00:35:01.520
units that are a little bit simpler but
00:34:59.359 --> 00:35:03.920
the basic idea of an additive connection
00:35:01.520 --> 00:35:07.240
plus gating is something that appears
00:35:03.920 --> 00:35:07.240
a lot in many different types of
00:35:07.440 --> 00:35:14.240
architectures.
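A minimal sketch of one LSTM step, to show the "additive connection plus gating" idea. The gate names and parameterization follow a common textbook form applied to the concatenation of the input and previous hidden state; the lecture's exact notation may differ, and the dimensions and random weights are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dim = 4
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2 * dim, dim)) * 0.1  # update, input, output, candidate

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    u = sigmoid(z @ W[0])                 # update gate
    i = sigmoid(z @ W[1])                 # input gate
    o = sigmoid(z @ W[2])                 # output gate
    c_tilde = np.tanh(z @ W[3])           # candidate memory
    c = u * c_prev + i * c_tilde          # additive memory-cell update
    h = o * np.tanh(c)                    # gated output
    return h, c

h, c = np.zeros(dim), np.zeros(dim)
h, c = lstm_step(rng.normal(size=dim), h, c)
print(h.round(3), c.round(3))
```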
00:35:12.079 --> 00:35:15.760
Any questions here? Another thing I should mention that
00:35:14.240 --> 00:35:19.200
I just realized I don't have on my
00:35:15.760 --> 00:35:24.480
slides but it's a good thing to know is
00:35:19.200 --> 00:35:29.040
that this is also used in deep
00:35:24.480 --> 00:35:32.440
networks and multi-layer
00:35:29.040 --> 00:35:32.440
networks and so
00:35:34.240 --> 00:35:39.520
basically with LSTMs this is
00:35:39.720 --> 00:35:45.359
time LSTMs have this additive connection
00:35:43.359 --> 00:35:47.599
between the memory cell where you're
00:35:45.359 --> 00:35:50.079
always
00:35:47.599 --> 00:35:53.119
adding this into whatever
00:35:50.079 --> 00:35:53.119
input you
00:35:54.200 --> 00:36:00.720
get and then you get an input and
00:35:57.000 --> 00:36:00.720
you add this in you get an
00:36:00.839 --> 00:36:07.000
input and so this makes sure you
00:36:03.440 --> 00:36:09.640
pass your gradients backward in
00:36:07.000 --> 00:36:11.720
time there's also something called
00:36:09.640 --> 00:36:13.000
residual connections which I think a lot
00:36:11.720 --> 00:36:14.319
of people have heard of if you've done a
00:36:13.000 --> 00:36:16.000
deep learning class or something like
00:36:14.319 --> 00:36:18.079
that but if you haven't they're a
00:36:16.000 --> 00:36:20.599
good thing to know residual connections
00:36:18.079 --> 00:36:22.440
are used when you run your input through
00:36:20.599 --> 00:36:25.720
multiple
00:36:22.440 --> 00:36:28.720
layers like let's say you have a block
00:36:25.720 --> 00:36:28.720
here
00:36:36.480 --> 00:36:41.280
let's call this an RNN for now
00:36:38.560 --> 00:36:44.280
because we know about RNNs
00:36:41.280 --> 00:36:44.280
already.
00:36:45.119 --> 00:36:49.560
So this connection here is
00:36:48.319 --> 00:36:50.920
called the residual connection and
00:36:49.560 --> 00:36:55.240
basically it's adding an additive
00:36:50.920 --> 00:36:57.280
connection from before to after the layers. So
00:36:55.240 --> 00:36:58.640
this allows you to pass information from
00:36:57.280 --> 00:37:00.880
the very beginning of a network to the
00:36:58.640 --> 00:37:03.520
very end of a network um through
00:37:00.880 --> 00:37:05.480
multiple layers and it also is there to
00:37:03.520 --> 00:37:08.800
help prevent the gradient vanishing
00:37:05.480 --> 00:37:11.520
problem so like in a way you can view uh
00:37:08.800 --> 00:37:14.560
you can view what LSTMs are doing
00:37:11.520 --> 00:37:15.800
is preventing loss of gradient in time
00:37:14.560 --> 00:37:17.280
and these are preventing loss of
00:37:15.800 --> 00:37:19.480
gradient as you go through like multiple
00:37:17.280 --> 00:37:21.119
layers of the network and this is super
00:37:19.480 --> 00:37:24.079
standard this is used in all like
00:37:21.119 --> 00:37:25.599
Transformer models and LLaMA and GPT and
00:37:24.079 --> 00:37:31.200
whatever
00:37:25.599 --> 00:37:31.200
else.
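As a minimal sketch of a residual connection in PyTorch (the wrapped layer is arbitrary; the names here are illustrative):

import torch.nn as nn

class ResidualBlock(nn.Module):
    # Wraps any layer so that its input is added back to its output.
    # Gradients can flow through the identity path unchanged, which
    # helps prevent vanishing gradients across many layers.
    def __init__(self, layer):
        super().__init__()
        self.layer = layer

    def forward(self, x):
        return x + self.layer(x)  # the addition is the residual connection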
00:37:32.760 --> 00:37:39.079
Cool, any other questions about that? Okay, so next I'd like to go
00:37:36.880 --> 00:37:41.760
into convolution. One thing I
00:37:39.079 --> 00:37:44.760
should mention is RNNs or RNN-style
00:37:41.760 --> 00:37:46.920
models are used extensively in very long
00:37:44.760 --> 00:37:48.160
sequence modeling and we're going to
00:37:46.920 --> 00:37:50.440
talk more about like actual
00:37:48.160 --> 00:37:52.640
architectures that people use uh to do
00:37:50.440 --> 00:37:55.119
this um usually in combination with
00:37:52.640 --> 00:37:57.720
attention based models uh but they're
00:37:55.119 --> 00:38:01.800
used in very long sequence modeling
00:37:57.720 --> 00:38:05.640
convolutions tend to be used a lot
00:38:01.800 --> 00:38:07.160
in speech and image processing uh and
00:38:05.640 --> 00:38:10.880
the reason why they're used a lot in
00:38:07.160 --> 00:38:13.560
speech and image processing is
00:38:10.880 --> 00:38:16.800
because when we're processing
00:38:13.560 --> 00:38:18.599
language uh we have like
00:38:16.800 --> 00:38:22.720
um
00:38:18.599 --> 00:38:22.720
this is
00:38:23.599 --> 00:38:29.400
wonderful. Like, 'this is wonderful' is
00:38:26.599 --> 00:38:33.319
three tokens in language but if we look
00:38:29.400 --> 00:38:36.960
at it in speech it's going to be
00:38:33.319 --> 00:38:36.960
like many many
00:38:37.560 --> 00:38:46.079
frames so kind of
00:38:41.200 --> 00:38:47.680
the semantics of language is already
00:38:46.079 --> 00:38:48.960
kind of like if you look at a single
00:38:47.680 --> 00:38:51.599
token you already get something
00:38:48.960 --> 00:38:52.839
semantically meaningful um but in
00:38:51.599 --> 00:38:54.560
contrast if you're looking at like
00:38:52.839 --> 00:38:56.000
speech or you're looking at pixels and
00:38:54.560 --> 00:38:57.400
images or something like that you're not
00:38:56.000 --> 00:39:00.359
going to get something semantically
00:38:57.400 --> 00:39:01.920
meaningful uh so uh convolution is used
00:39:00.359 --> 00:39:03.359
a lot in that case and also you could
00:39:01.920 --> 00:39:06.079
create a convolutional model over
00:39:03.359 --> 00:39:08.599
characters as well
00:39:06.079 --> 00:39:10.599
um so what is convolution in the first
00:39:08.599 --> 00:39:13.319
place um as I mentioned before basically
00:39:10.599 --> 00:39:16.359
you take the local window uh around an
00:39:13.319 --> 00:39:19.680
input and you run it through um
00:39:16.359 --> 00:39:22.079
basically a model and a a good way to
00:39:19.680 --> 00:39:24.400
think about it is it's essentially a
00:39:22.079 --> 00:39:26.440
feed-forward network where you
00:39:24.400 --> 00:39:28.240
concatenate all of the surrounding
00:39:26.440 --> 00:39:30.280
vectors together and run them through a
00:39:28.240 --> 00:39:34.400
linear transform like this. So you
00:39:30.280 --> 00:39:34.400
concatenate x t minus one, x t, and x t
00:39:35.880 --> 00:39:43.040
plus one. Convolution can also be used in
00:39:39.440 --> 00:39:45.400
autoregressive models, and normally
00:39:43.040 --> 00:39:48.079
we think of it like this so we think
00:39:45.400 --> 00:39:50.640
that we're taking the previous one the
00:39:48.079 --> 00:39:53.839
current one and the next one and making
00:39:50.640 --> 00:39:54.960
a prediction based on this but this
00:39:53.839 --> 00:39:56.440
would be good for something like
00:39:54.960 --> 00:39:57.720
sequence labeling, but it's not good
00:39:56.440 --> 00:39:59.040
for something like language modeling
00:39:57.720 --> 00:40:01.400
because in language modeling we can't
00:39:59.040 --> 00:40:05.200
look at the future right but there's a
00:40:01.400 --> 00:40:07.280
super simple uh solution to this which
00:40:05.200 --> 00:40:11.280
is you have a convolution that just
00:40:07.280 --> 00:40:13.720
looks at the past basically um and
00:40:11.280 --> 00:40:15.319
predicts the next word based on the the
00:40:13.720 --> 00:40:16.760
you know current word in the past so
00:40:15.319 --> 00:40:19.520
here you would be predicting the word
00:40:16.760 --> 00:40:21.040
movie um this is actually essentially
00:40:19.520 --> 00:40:23.839
equivalent to the feed forward language
00:40:21.040 --> 00:40:25.880
model that I talked about last time uh
00:40:23.839 --> 00:40:27.240
so you can also think of that as a
00:40:25.880 --> 00:40:30.599
convolution
00:40:27.240 --> 00:40:32.119
a convolutional language model. So
00:40:30.599 --> 00:40:33.359
whenever you say feed-forward or
00:40:32.119 --> 00:40:36.160
convolutional language model they're
00:40:33.359 --> 00:40:38.880
basically the same uh modulo some uh
00:40:36.160 --> 00:40:42.359
some details about striding and stuff
00:40:38.880 --> 00:40:42.359
which I'm going to talk about in class today.
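As a minimal sketch of a causal (past-only) convolution in PyTorch, with illustrative shapes: padding only on the left means the output at each position depends only on the current and previous tokens, so it can be used for language modeling:

import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    # A 1-D convolution that only looks at the past.
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(channels, channels, kernel_size)

    def forward(self, x):              # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))    # pad on the left only
        return self.conv(x)            # position t sees tokens t-k+1 .. t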
00:40:43.000 --> 00:40:49.359
Cool. I covered convolution very
00:40:47.400 --> 00:40:51.440
briefly because it's also the least used
00:40:49.359 --> 00:40:53.400
of the three uh sequence modeling things
00:40:51.440 --> 00:40:55.400
in NLP nowadays but um are there any
00:40:53.400 --> 00:40:58.319
questions there or can I just run into
00:40:55.400 --> 00:40:58.319
attention
00:40:59.119 --> 00:41:04.040
okay cool I'll go into attention next so
00:41:02.400 --> 00:41:06.400
uh the basic idea about
00:41:04.040 --> 00:41:11.119
attention um
00:41:06.400 --> 00:41:12.839
is that we encode each token in the
00:41:11.119 --> 00:41:14.440
sequence into a
00:41:12.839 --> 00:41:19.119
vector
00:41:14.440 --> 00:41:21.640
Or rather, we have an input
00:41:19.119 --> 00:41:24.240
sequence that we'd like to encode over
00:41:21.640 --> 00:41:27.800
and we perform a linear combination of
00:41:24.240 --> 00:41:30.640
the vectors weighted by attention weight
00:41:27.800 --> 00:41:33.359
and there's two varieties of attention
00:41:30.640 --> 00:41:35.160
uh that are good to know about the first
00:41:33.359 --> 00:41:37.440
one is cross
00:41:35.160 --> 00:41:40.040
attention, where each element in a sequence
00:41:37.440 --> 00:41:41.960
attends to elements of another sequence
00:41:40.040 --> 00:41:44.280
and this is widely used in encoder
00:41:41.960 --> 00:41:47.359
decoder models where you have one
00:41:44.280 --> 00:41:50.319
encoder and you have a separate decoder
00:41:47.359 --> 00:41:51.880
um these models the popular models that
00:41:50.319 --> 00:41:55.119
are like this that people still use a
00:41:51.880 --> 00:41:57.480
lot are T5 uh is a example of an encoder
00:41:55.119 --> 00:42:00.760
decoder model, or mBART is another
00:41:57.480 --> 00:42:03.160
example of an encoder-decoder model. But
00:42:00.760 --> 00:42:07.880
basically the uh The Way Cross attention
00:42:03.160 --> 00:42:10.359
works is we have for example an English
00:42:07.880 --> 00:42:14.079
uh sentence here and we want to
00:42:10.359 --> 00:42:17.560
translate it into uh into a Japanese
00:42:14.079 --> 00:42:23.040
sentence and so when we output the first
00:42:17.560 --> 00:42:25.119
word we would mostly uh upweight this or
00:42:23.040 --> 00:42:26.800
sorry we have a we have a Japanese
00:42:25.119 --> 00:42:29.119
sentence and we would like to translate it
00:42:26.800 --> 00:42:31.680
into an English sentence for example so
00:42:29.119 --> 00:42:35.160
when we generate the first word in
00:42:31.680 --> 00:42:38.400
Japanese, which means 'this', so in order to
00:42:35.160 --> 00:42:40.079
Output the first word we would first uh
00:42:38.400 --> 00:42:43.559
do a weighted sum of all of the
00:42:40.079 --> 00:42:46.240
embeddings of the Japanese sentence and
00:42:43.559 --> 00:42:49.359
we would focus probably most on this
00:42:46.240 --> 00:42:51.920
word up here because it corresponds to
00:42:49.359 --> 00:42:51.920
the word
00:42:53.160 --> 00:42:59.800
this in the next step of generating an
00:42:55.960 --> 00:43:01.319
output, we would attend to
00:42:59.800 --> 00:43:04.119
different words because different words
00:43:01.319 --> 00:43:07.680
correspond to 'is', so you would attend to
00:43:04.119 --> 00:43:11.040
the word which corresponds to 'is'. When you
00:43:07.680 --> 00:43:12.599
output 'an', actually there's no word in the
00:43:11.040 --> 00:43:16.839
Japanese sentence that corresponds to 'an', and
00:43:12.599 --> 00:43:18.720
so you might get a blob-like
00:43:16.839 --> 00:43:21.319
attention weight that
00:43:18.720 --> 00:43:23.319
looks very smooth, not very
00:43:21.319 --> 00:43:25.119
peaky. And then when you output 'example' you'd
00:43:23.319 --> 00:43:27.880
have strong attention on the word
00:43:25.119 --> 00:43:29.400
that corresponds to example
00:43:27.880 --> 00:43:31.599
there's also self
00:43:29.400 --> 00:43:33.480
attention and um self attention
00:43:31.599 --> 00:43:36.000
basically what it does is each element
00:43:33.480 --> 00:43:38.640
in a sequence attends to elements of the
00:43:36.000 --> 00:43:40.240
same sequence and so this is a good way
00:43:38.640 --> 00:43:43.359
of doing sequence encoding just like we
00:43:40.240 --> 00:43:46.280
used RNNs or convolutional
00:43:43.359 --> 00:43:47.559
neural networks and so um the reason why
00:43:46.280 --> 00:43:50.119
you would want to do something like this
00:43:47.559 --> 00:43:52.760
just to give an example let's say we
00:43:50.119 --> 00:43:54.280
wanted to encode
00:43:52.760 --> 00:43:56.920
the English sentence before doing
00:43:54.280 --> 00:44:00.040
something like translation into Japanese
00:43:56.920 --> 00:44:01.559
and if we did that, for 'this', maybe we
00:44:00.040 --> 00:44:02.960
don't need to attend to a whole lot of
00:44:01.559 --> 00:44:06.440
other things because it's kind of clear
00:44:02.960 --> 00:44:08.920
what 'this' means. But for 'is',
00:44:06.440 --> 00:44:10.880
the way you would translate it would
00:44:08.920 --> 00:44:12.280
be rather heavily dependent on what the
00:44:10.880 --> 00:44:13.640
other words in the sentence are, so you might
00:44:12.280 --> 00:44:17.280
want to attend to all the other words in
00:44:13.640 --> 00:44:20.559
the sentence and say, oh, 'is' is
00:44:17.280 --> 00:44:22.839
co-occurring with 'this' and 'example', and so
00:44:20.559 --> 00:44:24.440
if that's the case then well we would
00:44:22.839 --> 00:44:26.920
need to translate it in this way, or we'd
00:44:24.440 --> 00:44:28.960
need to handle it in this way and that's
00:44:26.920 --> 00:44:29.880
exactly the same for you know any other
00:44:28.960 --> 00:44:32.720
sort of
00:44:29.880 --> 00:44:35.880
disambiguation uh style
00:44:32.720 --> 00:44:37.720
task so uh yeah we do something similar
00:44:35.880 --> 00:44:39.040
like this so basically cross attention
00:44:37.720 --> 00:44:42.520
is attending to a different sequence
00:44:39.040 --> 00:44:42.520
self attention is attending to the same
00:44:42.680 --> 00:44:46.559
sequence so how do we do this
00:44:44.960 --> 00:44:48.200
mechanistically in the first place so
00:44:46.559 --> 00:44:51.480
like let's say we're translating from
00:44:48.200 --> 00:44:52.880
Japanese to English um we would have uh
00:44:51.480 --> 00:44:55.960
and we're doing it with an encoder
00:44:52.880 --> 00:44:57.480
decoder model where we have already
00:44:55.960 --> 00:45:00.640
encoded the
00:44:57.480 --> 00:45:02.920
input sequence and now we're generating
00:45:00.640 --> 00:45:05.240
the output sequence with a for example a
00:45:02.920 --> 00:45:09.880
recurrent neural network um and so if
00:45:05.240 --> 00:45:12.400
that's the case, we have 'I hate'
00:45:09.880 --> 00:45:14.440
like this and we want to predict the
00:45:12.400 --> 00:45:17.280
next word so what we would do is we
00:45:14.440 --> 00:45:19.480
would take the current state
00:45:17.280 --> 00:45:21.480
here and uh we use something called a
00:45:19.480 --> 00:45:22.760
query vector and the query Vector is
00:45:21.480 --> 00:45:24.880
essentially the vector that we want to
00:45:22.760 --> 00:45:28.720
use to decide what to attend
00:45:24.880 --> 00:45:31.800
to. We then have key vectors, and the key
00:45:28.720 --> 00:45:35.319
vectors are the vectors that we would
00:45:31.800 --> 00:45:37.480
like to use to decide which ones we
00:45:35.319 --> 00:45:40.720
should be attending
00:45:37.480 --> 00:45:42.040
to and then for each query key pair we
00:45:40.720 --> 00:45:45.319
calculate a
00:45:42.040 --> 00:45:48.319
weight and we do it like this um this
00:45:45.319 --> 00:45:50.680
gear here is some function that takes in
00:45:48.319 --> 00:45:53.200
the uh query vector and the key vector
00:45:50.680 --> 00:45:55.599
and outputs a weight and notably we use
00:45:53.200 --> 00:45:57.559
the same function every single time this
00:45:55.599 --> 00:46:00.960
is really important again because like
00:45:57.559 --> 00:46:03.760
an RNN, that allows us to extrapolate to
00:46:00.960 --> 00:46:05.960
unlimited-length sequences, because we
00:46:03.760 --> 00:46:08.280
only have one set of you know we only
00:46:05.960 --> 00:46:10.359
have one function no matter how long the
00:46:08.280 --> 00:46:13.200
sequence gets so we can just apply it
00:46:10.359 --> 00:46:15.839
over and over and over
00:46:13.200 --> 00:46:17.920
again uh once we calculate these values
00:46:15.839 --> 00:46:20.839
we normalize so that they add up to one
00:46:17.920 --> 00:46:22.559
using the softmax function and um
00:46:20.839 --> 00:46:27.800
basically in this case that would be
00:46:22.559 --> 00:46:27.800
like 0.76, et cetera.
00:46:28.800 --> 00:46:33.559
so step number two is once we have this
00:46:32.280 --> 00:46:37.839
uh these
00:46:33.559 --> 00:46:40.160
attention values here. Notably, these
00:46:37.839 --> 00:46:41.359
values aren't really probabilities uh
00:46:40.160 --> 00:46:42.800
despite the fact that they're between
00:46:41.359 --> 00:46:44.240
zero and one and they add up to one
00:46:42.800 --> 00:46:47.440
because all we're doing is we're using
00:46:44.240 --> 00:46:50.480
them to combine together
00:46:47.440 --> 00:46:51.800
multiple vectors. So we don't really
00:46:50.480 --> 00:46:53.319
normally call them attention
00:46:51.800 --> 00:46:54.680
probabilities or anything like that I
00:46:53.319 --> 00:46:56.319
just call them attention values or
00:46:54.680 --> 00:46:59.680
normalized attention values
00:46:56.319 --> 00:47:03.760
But once we have these
00:46:59.680 --> 00:47:05.760
attention weights, we have
00:47:03.760 --> 00:47:07.200
value vectors and these value vectors
00:47:05.760 --> 00:47:10.000
are the vectors that we would actually
00:47:07.200 --> 00:47:12.319
like to combine together to get the uh
00:47:10.000 --> 00:47:14.000
encoding here and so we take these
00:47:12.319 --> 00:47:17.559
vectors, we do a weighted sum of the
00:47:14.000 --> 00:47:21.200
vectors and get a final sum
00:47:17.559 --> 00:47:22.920
here, and we can take this sum and
00:47:21.200 --> 00:47:26.920
use it in any part of the model that we
00:47:22.920 --> 00:47:29.079
would like. And so this is very broad; it
00:47:26.920 --> 00:47:31.200
can be used in any way now the most
00:47:29.079 --> 00:47:33.240
common way to use it is just have lots
00:47:31.200 --> 00:47:35.000
of self-attention layers, like
00:47:33.240 --> 00:47:37.440
in a Transformer, but you
00:47:35.000 --> 00:47:40.160
can also use it in decoder or other
00:47:37.440 --> 00:47:42.920
things like that as well.
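As a minimal sketch of this query/key/value computation (illustrative shapes, with a plain dot product standing in for the scoring function):

import torch
import torch.nn.functional as F

def attend(query, keys, values):
    # query: (d,), keys: (n, d), values: (n, d)
    scores = keys @ query                # one score per key, same function every time
    weights = F.softmax(scores, dim=0)   # normalize so the weights add up to one
    return weights @ values              # weighted sum of the value vectors

q = torch.randn(8)
k = torch.randn(5, 8)
v = torch.randn(5, 8)
context = attend(q, k, v)                # an (8,)-vector usable anywhere in the model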
00:47:40.160 --> 00:47:45.480
This is an actual graphical example
00:47:42.920 --> 00:47:47.319
from the original attention paper um I'm
00:47:45.480 --> 00:47:50.000
going to give some other examples from
00:47:47.319 --> 00:47:52.480
Transformers in the next class but
00:47:50.000 --> 00:47:55.400
basically you can see that the attention
00:47:52.480 --> 00:47:57.559
weights uh for this English to French I
00:47:55.400 --> 00:48:00.520
think it's English French translation
00:47:57.559 --> 00:48:02.920
task basically um overlap with what you
00:48:00.520 --> 00:48:04.440
would expect uh if you can read English
00:48:02.920 --> 00:48:06.599
and French it's kind of the words that
00:48:04.440 --> 00:48:09.319
are semantically similar to each other
00:48:06.599 --> 00:48:12.920
um it even learns to do this reordering
00:48:09.319 --> 00:48:14.880
uh in an appropriate way here and all of
00:48:12.920 --> 00:48:16.720
this is completely unsupervised so you
00:48:14.880 --> 00:48:18.079
never actually give the model
00:48:16.720 --> 00:48:19.440
information about what it should be
00:48:18.079 --> 00:48:21.559
attending to it's all learned through
00:48:19.440 --> 00:48:23.520
gradient descent and the model learns to
00:48:21.559 --> 00:48:27.640
do this by making the embeddings of the
00:48:23.520 --> 00:48:27.640
key and query vectors closer together
00:48:28.440 --> 00:48:33.240
cool
00:48:30.000 --> 00:48:33.240
um any
00:48:33.800 --> 00:48:40.040
questions okay so um next I'd like to go
00:48:38.440 --> 00:48:41.680
a little bit into how we actually
00:48:40.040 --> 00:48:43.599
calculate the attention score function
00:48:41.680 --> 00:48:44.839
so that's the little gear that I had on
00:48:43.599 --> 00:48:50.280
my
00:48:44.839 --> 00:48:53.559
uh my slide before so here Q is a query
00:48:50.280 --> 00:48:56.440
and K is the key um the original
00:48:53.559 --> 00:48:58.400
attention paper used
00:48:56.440 --> 00:49:00.119
a multi-layer neural network to
00:48:58.400 --> 00:49:02.440
calculate this so basically what it did
00:49:00.119 --> 00:49:05.319
is it concatenated the query and key
00:49:02.440 --> 00:49:08.000
Vector together multiplied it by a
00:49:05.319 --> 00:49:12.240
weight matrix, calculated a tanh, and
00:49:08.000 --> 00:49:15.040
then ran it through uh a weight
00:49:12.240 --> 00:49:19.799
vector. So this
00:49:15.040 --> 00:49:22.480
is essentially very expressive
00:49:19.799 --> 00:49:24.799
It's flexible, it's often good with
00:49:22.480 --> 00:49:27.960
large data but it adds extra parameters
00:49:24.799 --> 00:49:30.359
and uh computation time uh to your
00:49:27.960 --> 00:49:31.559
calculations here so it's not as widely
00:49:30.359 --> 00:49:34.359
used
00:49:31.559 --> 00:49:37.799
anymore. The other thing, which was
00:49:34.359 --> 00:49:41.599
proposed by Luong et al., is a bilinear
00:49:37.799 --> 00:49:43.200
function um and a bilinear function
00:49:41.599 --> 00:49:45.920
basically what it does is it has your
00:49:43.200 --> 00:49:48.319
key Vector it has your query vector and
00:49:45.920 --> 00:49:51.440
it has a matrix in between them like
00:49:48.319 --> 00:49:53.000
this, and then you
00:49:51.440 --> 00:49:54.520
calculate the
00:49:53.000 --> 00:49:56.680
weight.
00:49:54.520 --> 00:49:59.880
so
00:49:56.680 --> 00:50:03.200
this is nice because it basically
00:49:59.880 --> 00:50:05.760
can transform the key and
00:50:03.200 --> 00:50:08.760
query uh together
00:50:05.760 --> 00:50:08.760
here
00:50:09.119 --> 00:50:13.559
um people have also experimented with
00:50:11.760 --> 00:50:16.079
DOT product and the dot product is
00:50:13.559 --> 00:50:19.839
basically query times
00:50:16.079 --> 00:50:23.480
key, query transpose times key, or
00:50:19.839 --> 00:50:25.760
query dot key. This is okay, but the problem
00:50:23.480 --> 00:50:27.280
with this is then the query vector and
00:50:25.760 --> 00:50:30.160
the key vectors have to be in exactly
00:50:27.280 --> 00:50:31.920
the same space and that's kind of too
00:50:30.160 --> 00:50:34.799
hard of a constraint so it doesn't scale
00:50:31.920 --> 00:50:38.000
very well if you're
00:50:34.799 --> 00:50:40.839
training on
00:50:38.000 --> 00:50:45.400
lots of data. Then the scaled dot
00:50:40.839 --> 00:50:47.880
product, the scaled dot product here,
00:50:45.400 --> 00:50:50.079
one problem is that the scale of the dot
00:50:47.880 --> 00:50:53.680
product increases as the dimensions get
00:50:50.079 --> 00:50:55.880
larger and so there's a fix to scale by
00:50:53.680 --> 00:50:58.839
the square root of the length of one of
00:50:55.880 --> 00:51:00.680
the vectors um and so basically you're
00:50:58.839 --> 00:51:04.559
taking the dot
00:51:00.680 --> 00:51:06.559
product but you're dividing by the uh
00:51:04.559 --> 00:51:09.359
the square root of the length of one of
00:51:06.559 --> 00:51:11.839
the vectors uh does anyone have an idea
00:51:09.359 --> 00:51:13.599
why you might take the square root here
00:51:11.839 --> 00:51:16.920
if you've taken a machine
00:51:13.599 --> 00:51:20.000
learning uh or maybe statistics class
00:51:16.920 --> 00:51:20.000
you might have an
00:51:20.599 --> 00:51:26.599
idea. Any ideas? Yeah, it's normalization
00:51:24.720 --> 00:51:29.079
to make sure
00:51:26.599 --> 00:51:32.760
because otherwise it will impact the
00:51:29.079 --> 00:51:35.640
result, because we want to normalize it. Yes,
00:51:32.760 --> 00:51:37.920
so we do want to normalize it,
00:51:35.640 --> 00:51:40.000
and so that's the reason why we divide
00:51:37.920 --> 00:51:41.920
by the length um and that prevents it
00:51:40.000 --> 00:51:43.839
from getting too large
00:51:41.920 --> 00:51:45.920
specifically does anyone have an idea
00:51:43.839 --> 00:51:49.440
why you take the square root here as
00:51:45.920 --> 00:51:49.440
opposed to dividing just by the length
00:51:52.400 --> 00:51:59.480
overall. So this is pretty
00:51:55.400 --> 00:52:01.720
tough, and actually I didn't know
00:51:59.480 --> 00:52:04.359
one of the last times I did this class
00:52:01.720 --> 00:52:06.640
uh and had to actually go look for it
00:52:04.359 --> 00:52:09.000
but basically the reason why is because
00:52:06.640 --> 00:52:11.400
if you um if you have a whole bunch of
00:52:09.000 --> 00:52:12.720
random variables so let's say you have a
00:52:11.400 --> 00:52:14.040
whole bunch of random variables no
00:52:12.720 --> 00:52:15.240
matter what kind they are as long as
00:52:14.040 --> 00:52:19.680
they're from the same distribution
00:52:15.240 --> 00:52:19.680
they're IID and you add them all
00:52:20.160 --> 00:52:25.720
together, then the variance of the sum
00:52:23.200 --> 00:52:27.760
grows linearly with the number of terms,
00:52:25.720 --> 00:52:31.119
so the standard deviation of the sum
00:52:27.760 --> 00:52:33.319
goes up with the square root of the
00:52:31.119 --> 00:52:35.640
number of terms. The dot product is a
00:52:33.319 --> 00:52:38.880
sum of d elementwise products, so
00:52:35.640 --> 00:52:41.040
dividing by the square root of d is
00:52:38.880 --> 00:52:44.040
dividing by something proportional to
00:52:41.040 --> 00:52:48.240
the standard deviation, so it's like
00:52:44.040 --> 00:52:51.040
normalizing by that. That's actually, I
00:52:48.240 --> 00:52:53.359
don't think, explicitly explained in the
00:52:51.040 --> 00:52:54.720
Attention Is All You Need paper,
00:52:53.359 --> 00:52:57.920
the Vaswani paper where they introduce
00:52:54.720 --> 00:53:01.079
this, but that's the basic idea. In terms
00:52:57.920 --> 00:53:03.839
of what people use most widely nowadays
00:53:01.079 --> 00:53:07.680
um they
00:53:03.839 --> 00:53:07.680
are basically doing
00:53:24.160 --> 00:53:27.160
this
00:53:30.280 --> 00:53:34.880
so they're taking the hidden state
00:53:33.000 --> 00:53:36.599
from the keys and multiplying it by a
00:53:34.880 --> 00:53:39.440
matrix, and the hidden state from the queries
00:53:36.599 --> 00:53:41.680
and multiplying it by a matrix. This
00:53:39.440 --> 00:53:46.559
is what is done in
00:53:41.680 --> 00:53:50.280
Transformers, and then they're
00:53:46.559 --> 00:53:54.160
using this, and they're normalizing it
00:53:50.280 --> 00:53:57.160
by this square root here
00:53:54.160 --> 00:53:57.160
and
00:53:59.440 --> 00:54:05.040
so this is essentially a bilinear
00:54:02.240 --> 00:54:07.680
model, a bilinear model that is
00:54:05.040 --> 00:54:09.119
normalized. They call it scaled dot
00:54:07.680 --> 00:54:11.119
product attention, but actually because
00:54:09.119 --> 00:54:15.520
they have these weight matrices uh it's
00:54:11.119 --> 00:54:18.839
a bilinear model. So that's the
00:54:15.520 --> 00:54:18.839
most standard thing used nowadays.
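As a minimal sketch of that standard score function (a scaled bilinear form; the matrix names are illustrative):

import math
import torch

d = 64                   # dimensionality of the hidden states
W_q = torch.randn(d, d)  # learned projection applied to the query side
W_k = torch.randn(d, d)  # learned projection applied to the key side

def score(h_q, h_k):
    # Project both hidden states, then take a scaled dot product.
    # Dividing by sqrt(d) keeps the score's standard deviation roughly
    # constant as the dimensionality grows.
    q = h_q @ W_q
    k = h_k @ W_k
    return (q @ k) / math.sqrt(d)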
00:54:20.200 --> 00:54:24.079
Cool, any questions about
00:54:22.520 --> 00:54:27.079
this
00:54:24.079 --> 00:54:27.079
part
00:54:28.240 --> 00:54:36.559
okay so um finally when you actually
00:54:32.280 --> 00:54:36.559
train the model um as I mentioned
00:54:41.960 --> 00:54:45.680
before right at the very
00:54:48.040 --> 00:54:52.400
beginning
00:54:49.839 --> 00:54:55.760
when we're training an auto
00:54:52.400 --> 00:54:57.400
regressive model we don't want to be
00:54:55.760 --> 00:54:59.799
referring to the future, to things in the
00:54:57.400 --> 00:55:01.240
future um because then you know
00:54:59.799 --> 00:55:03.079
basically we'd be cheating and we'd have
00:55:01.240 --> 00:55:04.599
a nonprobabilistic model it wouldn't be
00:55:03.079 --> 00:55:08.960
good when we actually have to generate
00:55:04.599 --> 00:55:12.119
left to right um and
00:55:08.960 --> 00:55:15.720
so we essentially want to prevent
00:55:12.119 --> 00:55:17.480
ourselves from using information from
00:55:15.720 --> 00:55:20.319
the
00:55:17.480 --> 00:55:22.839
future
00:55:20.319 --> 00:55:24.240
and in an unconditioned model we want to
00:55:22.839 --> 00:55:27.400
prevent ourselves from using any
00:55:24.240 --> 00:55:29.680
information in the future here. In a
00:55:27.400 --> 00:55:31.520
conditioned model we're okay with doing
00:55:29.680 --> 00:55:33.480
kind of
00:55:31.520 --> 00:55:35.880
bidirectional conditioning here to
00:55:33.480 --> 00:55:37.359
calculate the representations but we're
00:55:35.880 --> 00:55:40.440
not okay with doing it on the target
00:55:37.359 --> 00:55:40.440
side so basically what we
00:55:44.240 --> 00:55:50.960
do is we create a
00:55:47.920 --> 00:55:52.400
mask that prevents us from attending to
00:55:50.960 --> 00:55:54.559
any of the information in the future
00:55:52.400 --> 00:55:56.440
when we're
00:55:54.559 --> 00:56:00.799
calculating the representations of the
00:55:56.440 --> 00:56:04.880
current word, and
00:56:00.799 --> 00:56:08.280
technically how we do this is we have
00:56:04.880 --> 00:56:08.280
the attention
00:56:09.079 --> 00:56:13.799
values, like
00:56:11.680 --> 00:56:15.480
2.1
00:56:13.799 --> 00:56:17.880
attention
00:56:15.480 --> 00:56:19.920
0.3 and
00:56:17.880 --> 00:56:22.480
attention uh
00:56:19.920 --> 00:56:24.960
0.5 or something like
00:56:22.480 --> 00:56:27.480
that these are eventually going to be
00:56:24.960 --> 00:56:29.799
fed through the softmax to calculate
00:56:27.480 --> 00:56:32.119
the attention values that we use to do
00:56:29.799 --> 00:56:33.680
the weighting. So what we do is, any ones we
00:56:32.119 --> 00:56:36.160
don't want to attend to we just add
00:56:33.680 --> 00:56:39.799
negative infinity or add a very large
00:56:36.160 --> 00:56:42.119
negative number so we uh cross that out
00:56:39.799 --> 00:56:44.000
and set it to negative infinity, and
00:56:42.119 --> 00:56:45.440
so then when we take the softmax, basically
00:56:44.000 --> 00:56:47.839
the value goes to zero and we don't
00:56:45.440 --> 00:56:49.359
attend to it so um this is called the
00:56:47.839 --> 00:56:53.240
attention mask and you'll see it when
00:56:49.359 --> 00:56:53.240
you have to implement attention.
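As a minimal sketch of such a mask (illustrative values, matching the scores above):

import torch
import torch.nn.functional as F

scores = torch.tensor([2.1, 0.3, 0.5])       # raw attention scores
future = torch.tensor([False, False, True])  # True = position we must not attend to

# Set masked positions to negative infinity, so the softmax sends them to zero.
masked = scores.masked_fill(future, float("-inf"))
weights = F.softmax(masked, dim=-1)          # roughly [0.86, 0.14, 0.00]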
00:56:53.440 --> 00:56:56.880
Cool.
00:56:57.039 --> 00:57:00.200
Any questions about
00:57:02.079 --> 00:57:08.599
this okay great um so next I'd like to
00:57:05.839 --> 00:57:11.039
go to Applications of sequence models um
00:57:08.599 --> 00:57:13.200
there's a bunch of ways that you can use
00:57:11.039 --> 00:57:16.160
sequence models of any variety I wrote
00:57:13.200 --> 00:57:18.400
RNN here arbitrarily but it could be
00:57:16.160 --> 00:57:21.720
convolution or Transformer or anything
00:57:18.400 --> 00:57:23.559
else so the first one is encoding
00:57:21.720 --> 00:57:26.839
sequences
00:57:23.559 --> 00:57:29.240
um and essentially if you do it with an
00:57:26.839 --> 00:57:31.559
RNN this is one way you can encode a
00:57:29.240 --> 00:57:35.799
sequence basically you take the
00:57:31.559 --> 00:57:36.960
last value here and you use it to
00:57:35.799 --> 00:57:40.559
encode the
00:57:36.960 --> 00:57:42.720
output this can be used for any sort of
00:57:40.559 --> 00:57:45.839
uh like binary or multiclass prediction
00:57:42.720 --> 00:57:48.280
problem it's also right now used very
00:57:45.839 --> 00:57:50.920
widely in sentence representations for
00:57:48.280 --> 00:57:54.200
retrieval uh so for example you build a
00:57:50.920 --> 00:57:55.520
big retrieval index uh with these
00:57:54.200 --> 00:57:57.920
vectors
00:57:55.520 --> 00:57:59.480
and then you also
00:57:57.920 --> 00:58:02.119
encode a query and you do a vector
00:57:59.480 --> 00:58:04.760
nearest neighbor search to look up uh
00:58:02.119 --> 00:58:06.760
the most similar sentence here so this
00:58:04.760 --> 00:58:10.160
is uh these are two applications where
00:58:06.760 --> 00:58:13.440
you use something like this. Right, on
00:58:10.160 --> 00:58:15.520
this slide I wrote that you use the last
00:58:13.440 --> 00:58:17.359
Vector here but actually a lot of the
00:58:15.520 --> 00:58:20.039
time it's also a good idea to just take
00:58:17.359 --> 00:58:22.599
the mean of the vectors or take the max
00:58:20.039 --> 00:58:26.640
of all of the vectors
00:58:22.599 --> 00:58:29.119
In fact, I would almost
00:58:26.640 --> 00:58:30.520
say that that's usually a better choice
00:58:29.119 --> 00:58:32.760
if you're doing any sort of thing where
00:58:30.520 --> 00:58:35.359
you need a single Vector unless your
00:58:32.760 --> 00:58:38.200
model has been specifically trained to
00:58:35.359 --> 00:58:41.480
have good like output vectors uh from
00:58:38.200 --> 00:58:44.359
the final Vector here so um you could
00:58:41.480 --> 00:58:46.880
also just take the mean of all of
00:58:44.359 --> 00:58:46.880
the purple ones.
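As a minimal sketch of those pooling choices (illustrative shapes):

import torch

# hidden: (sequence_length, hidden_size), e.g. one vector per token
hidden = torch.randn(6, 128)

last_vector = hidden[-1]                # the final state as the sequence encoding
mean_vector = hidden.mean(dim=0)        # mean pooling, often a safer default
max_vector = hidden.max(dim=0).values   # max pooling over each dimension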
00:58:48.240 --> 00:58:52.960
Another thing you can do is
00:58:50.280 --> 00:58:54.359
encode tokens for sequence labeling.
00:58:52.960 --> 00:58:56.200
this can also be used for language
00:58:54.359 --> 00:58:58.280
modeling. And what do I mean, it can be
00:58:56.200 --> 00:59:00.039
used for language
00:58:58.280 --> 00:59:03.319
modeling
00:59:00.039 --> 00:59:06.599
basically you can view this as first
00:59:03.319 --> 00:59:09.200
running the sequence encoding and then
00:59:06.599 --> 00:59:12.319
after that making all of the predictions
00:59:09.200 --> 00:59:15.240
um it's also a good thing to know
00:59:12.319 --> 00:59:18.440
computationally because um often you can
00:59:15.240 --> 00:59:20.720
do sequence encoding uh kind of all in
00:59:18.440 --> 00:59:22.440
parallel and yeah actually I said I was
00:59:20.720 --> 00:59:23.359
going to
00:59:22.440 --> 00:59:25.079
mention that but I don't think I
00:59:23.359 --> 00:59:27.319
actually have a slide about it but um
00:59:25.079 --> 00:59:29.720
one important thing about RNNs compared
00:59:27.319 --> 00:59:33.079
to convolution or Transformers uh sorry
00:59:29.720 --> 00:59:34.839
convolution or attention is that with RNNs, in
00:59:33.079 --> 00:59:37.440
order to calculate this RNN you need to
00:59:34.839 --> 00:59:39.599
wait for this RNN to finish so it's
00:59:37.440 --> 00:59:41.200
sequential and you need to go like here
00:59:39.599 --> 00:59:43.480
and then here and then here and then
00:59:41.200 --> 00:59:45.720
here and then here and that's a pretty
00:59:43.480 --> 00:59:48.200
big bottleneck because uh things like
00:59:45.720 --> 00:59:50.760
gpus or tpus they're actually really
00:59:48.200 --> 00:59:52.839
good at doing a bunch of things at once
00:59:50.760 --> 00:59:56.440
and so attention, even though its
00:59:52.839 --> 00:59:57.400
asymptotic complexity is worse, O(n^2),
00:59:56.440 --> 00:59:59.319
just because you don't have that
00:59:57.400 --> 01:00:01.680
bottleneck of doing things sequentially
00:59:59.319 --> 01:00:03.640
it can be way way faster on a GPU
01:00:01.680 --> 01:00:04.960
because you're not wasting your time
01:00:03.640 --> 01:00:07.640
waiting for the previous thing to be
01:00:04.960 --> 01:00:11.039
calculated so that's actually why uh
01:00:07.640 --> 01:00:13.520
Transformers are so fast
01:00:11.039 --> 01:00:14.599
um uh Transformers and attention models
01:00:13.520 --> 01:00:17.160
are so
01:00:14.599 --> 01:00:21.119
fast
01:00:17.160 --> 01:00:23.079
um another thing to note so that's one
01:00:21.119 --> 01:00:25.039
of the big reasons why attention models
01:00:23.079 --> 01:00:27.359
are so popular nowadays, because they're fast to
01:00:25.039 --> 01:00:30.200
calculate on modern hardware. Another
01:00:27.359 --> 01:00:33.520
reason why attention models are popular
01:00:30.200 --> 01:00:34.799
nowadays does anyone have a um does
01:00:33.520 --> 01:00:37.280
anyone have an
01:00:34.799 --> 01:00:38.839
idea about another reason? It's based
01:00:37.280 --> 01:00:41.200
on how easy they are to learn and
01:00:38.839 --> 01:00:43.680
there's a reason why and that reason why
01:00:41.200 --> 01:00:46.240
has to do with
01:00:43.680 --> 01:00:48.520
um that reason why has to do with uh
01:00:46.240 --> 01:00:49.400
something I introduced in this lecture
01:00:48.520 --> 01:00:52.039
uh
01:00:49.400 --> 01:00:54.720
earlier I'll give a
01:00:52.039 --> 01:00:58.079
hint. Gradients? Yeah. More
01:00:54.720 --> 01:01:00.480
specifically, what's nice about
01:00:58.079 --> 01:01:02.920
attention with respect to gradients or
01:01:00.480 --> 01:01:02.920
Vanishing
01:01:04.119 --> 01:01:07.319
gradients any
01:01:07.680 --> 01:01:15.160
ideas let's say we have a really long
01:01:10.160 --> 01:01:17.839
sentence it's like X1 X2 X3
01:01:15.160 --> 01:01:21.799
X4 um
01:01:17.839 --> 01:01:26.440
X200 over here and in order to predict
01:01:21.799 --> 01:01:26.440
X200 you need to pay attention to X3
01:01:27.359 --> 01:01:29.640
any
01:01:33.079 --> 01:01:37.359
ideas? Another hint: how many
01:01:35.599 --> 01:01:38.960
nonlinearities do you have to pass
01:01:37.359 --> 01:01:41.440
through in order to pass that
01:01:38.960 --> 01:01:44.839
information from X3 to
01:01:41.440 --> 01:01:48.839
X200 in a recurrent Network um in a
01:01:44.839 --> 01:01:48.839
recurrent Network or
01:01:51.920 --> 01:01:57.160
attention network? It should be
01:01:54.960 --> 01:02:00.680
197? Yeah, in a recurrent network it's
01:01:57.160 --> 01:02:03.480
basically 197 or maybe 196, I haven't
01:02:00.680 --> 01:02:06.319
checked, but every time
01:02:03.480 --> 01:02:08.319
you pass it to the hidden
01:02:06.319 --> 01:02:10.200
state it has to go through a
01:02:08.319 --> 01:02:13.240
nonlinearity so it goes through like
01:02:10.200 --> 01:02:17.119
197 nonlinearities and even if you're
01:02:13.240 --> 01:02:19.680
using an LSTM, it's still, the LSTM
01:02:17.119 --> 01:02:21.559
hidden cell is getting information added
01:02:19.680 --> 01:02:23.400
to it and subtracted from it and other
01:02:21.559 --> 01:02:24.960
things like that so it's still a bit
01:02:23.400 --> 01:02:27.880
tricky
01:02:24.960 --> 01:02:27.880
um what about
01:02:28.119 --> 01:02:35.160
attention? Yeah, basically one time. So
01:02:31.520 --> 01:02:39.319
attention um in the next layer here
01:02:35.160 --> 01:02:41.119
you're passing it all the way you're
01:02:39.319 --> 01:02:45.000
passing all of the information directly
01:02:41.119 --> 01:02:46.480
in and the only qualifying thing is that
01:02:45.000 --> 01:02:47.760
your weight has to be good it has to
01:02:46.480 --> 01:02:49.079
find a good attention weight so that
01:02:47.760 --> 01:02:50.920
it's actually paying attention to that
01:02:49.079 --> 01:02:53.039
information so this is actually
01:02:50.920 --> 01:02:54.400
discussed in the Vaswani et al.
01:02:53.039 --> 01:02:57.359
attention is all you need paper that
01:02:54.400 --> 01:02:59.920
introduced Transformers um convolutions
01:02:57.359 --> 01:03:03.640
are kind of in the middle so like let's
01:02:59.920 --> 01:03:06.400
say you have a convolution of length 10
01:03:03.640 --> 01:03:09.880
um and then you have two layers of it um
01:03:06.400 --> 01:03:09.880
if you have a convolution of length
01:03:10.200 --> 01:03:15.880
10 or yeah let's say you have a
01:03:12.559 --> 01:03:18.520
convolution of length 10 you would need
01:03:15.880 --> 01:03:19.520
basically you would pass from 10
01:03:18.520 --> 01:03:21.720
previous
01:03:19.520 --> 01:03:23.319
ones and then you would pass again from
01:03:21.720 --> 01:03:27.359
10 previous ones and then you would have
01:03:23.319 --> 01:03:29.160
to go through like 16 or like I guess
01:03:27.359 --> 01:03:31.279
almost 20 layers of convolution in order
01:03:29.160 --> 01:03:34.720
to pass that information along so it's
01:03:31.279 --> 01:03:39.200
kind of in the middle between RNNs
01:03:34.720 --> 01:03:43.480
and attention models.
01:03:39.200 --> 01:03:47.359
Yeah, a question? So regarding how you
01:03:43.480 --> 01:03:51.319
have to wait for one RNN before the next one, can
01:03:47.359 --> 01:03:53.000
you do inference on one RNN once it's done
01:03:51.319 --> 01:03:54.839
even though the next one's computing off
01:03:53.000 --> 01:03:58.400
that one
01:03:54.839 --> 01:04:01.160
yes, yeah, you can do
01:03:58.400 --> 01:04:03.880
inference. Well, so as long
01:04:01.160 --> 01:04:03.880
as
01:04:05.599 --> 01:04:10.640
the output doesn't affect
01:04:08.079 --> 01:04:14.000
the next input. So in this
01:04:10.640 --> 01:04:17.119
case, because for language
01:04:14.000 --> 01:04:19.400
modeling or generation the output
01:04:17.119 --> 01:04:21.000
affects the next input, if
01:04:19.400 --> 01:04:22.440
you're predicting the output you have to
01:04:21.000 --> 01:04:26.680
wait. But if you know
01:04:22.440 --> 01:04:28.920
the output already,
01:04:26.680 --> 01:04:30.599
you could
01:04:28.920 --> 01:04:33.599
make the prediction at the same time
01:04:30.599 --> 01:04:34.799
as calculating the next hidden state.
01:04:33.599 --> 01:04:36.200
so if you're just calculating the
01:04:34.799 --> 01:04:38.559
probability you could do that and that's
01:04:36.200 --> 01:04:40.880
actually where Transformers or attention
01:04:38.559 --> 01:04:44.839
models shine attention models actually
01:04:40.880 --> 01:04:46.000
aren't great for generation. And the
01:04:44.839 --> 01:04:49.279
reason why they're not great for
01:04:46.000 --> 01:04:52.279
generation is because they're
01:04:49.279 --> 01:04:52.279
um
01:04:52.799 --> 01:04:57.680
like when you're generating the
01:04:55.039 --> 01:04:59.200
next token you still need to wait you
01:04:57.680 --> 01:05:00.559
can't calculate in parallel because you
01:04:59.200 --> 01:05:03.039
need to generate the next token
01:05:00.559 --> 01:05:04.839
before you can
01:05:03.039 --> 01:05:07.119
encode it,
01:05:04.839 --> 01:05:08.680
so you can't do
01:05:07.119 --> 01:05:10.359
everything in parallel so Transformers
01:05:08.680 --> 01:05:15.039
for generation are actually
01:05:10.359 --> 01:05:16.559
slow and um there are models uh I don't
01:05:15.039 --> 01:05:18.520
know if people are using them super
01:05:16.559 --> 01:05:22.200
widely now but there were actually
01:05:18.520 --> 01:05:23.640
Transformer, uh, language models, sorry,
01:05:22.200 --> 01:05:26.319
machine translation models that were in
01:05:23.640 --> 01:05:28.279
production they had a really big strong
01:05:26.319 --> 01:05:34.359
Transformer encoder and then they had a
01:05:28.279 --> 01:05:34.359
tiny fast RNN decoder um
01:05:35.440 --> 01:05:40.960
And if you want an actual
01:05:52.000 --> 01:05:59.440
reference, there's
01:05:55.079 --> 01:05:59.440
this deep encoder, shallow
01:05:59.559 --> 01:06:05.520
decoder work, and then there's also the
01:06:03.079 --> 01:06:07.599
Marian machine translation toolkit that
01:06:05.520 --> 01:06:11.119
supports uh supports those types of
01:06:07.599 --> 01:06:13.839
things as well so um it's also the
01:06:11.119 --> 01:06:16.200
reason why, if you're
01:06:13.839 --> 01:06:18.839
using, like, the GPT models through the
01:06:16.200 --> 01:06:21.680
API that decoding is more expensive
01:06:18.839 --> 01:06:21.680
right like
01:06:22.119 --> 01:06:27.960
encoding? I forget exactly, is it $0.03
01:06:26.279 --> 01:06:30.839
for 1,000 tokens for encoding and
01:06:27.960 --> 01:06:33.039
$0.06 for 1,000 tokens for decoding
01:06:30.839 --> 01:06:34.799
in like GPT-4 or something like this? The
01:06:33.039 --> 01:06:36.839
reason why is precisely that just
01:06:34.799 --> 01:06:37.760
because it's so much more expensive to
01:06:36.839 --> 01:06:41.599
run the
01:06:37.760 --> 01:06:45.160
decoder um cool I have a few final
01:06:41.599 --> 01:06:47.039
things also about efficiency so um these
01:06:45.160 --> 01:06:50.720
go back to the efficiency things that I
01:06:47.039 --> 01:06:52.279
talked about last time um handling mini
01:06:50.720 --> 01:06:54.440
batching so what do we have to do when
01:06:52.279 --> 01:06:56.359
we're handling mini batching if we were
01:06:54.440 --> 01:06:59.440
handling mini batching in feed forward
01:06:56.359 --> 01:07:02.880
networks it's actually relatively easy
01:06:59.440 --> 01:07:04.880
because all of our computations
01:07:02.880 --> 01:07:06.400
are the same shape so we just
01:07:04.880 --> 01:07:09.359
concatenate them all together into a big
01:07:06.400 --> 01:07:11.000
tensor and run over it. We saw
01:07:09.359 --> 01:07:12.599
mini batching makes things much faster
01:07:11.000 --> 01:07:15.160
but mini batching and sequence modeling
01:07:12.599 --> 01:07:17.240
is harder than in feed forward networks
01:07:15.160 --> 01:07:20.240
One reason is in RNNs each word
01:07:17.240 --> 01:07:22.680
depends on the previous word um also
01:07:20.240 --> 01:07:26.359
because sequences are of various
01:07:22.680 --> 01:07:30.279
lengths. So what we do to handle this
01:07:26.359 --> 01:07:33.480
is uh we do padding and masking uh
01:07:30.279 --> 01:07:35.680
so we can do padding like this uh so we
01:07:33.480 --> 01:07:37.279
just add an extra token at the end to
01:07:35.680 --> 01:07:40.440
make all of the sequences at the same
01:07:37.279 --> 01:07:44.480
length um if we are doing an encoder
01:07:40.440 --> 01:07:47.160
decoder style model uh where we have an
01:07:44.480 --> 01:07:48.440
input and then we want to generate all
01:07:47.160 --> 01:07:50.640
the outputs based on the input one of
01:07:48.440 --> 01:07:54.920
the easy things is to add pads to the
01:07:50.640 --> 01:07:56.520
beginning um and then so yeah it doesn't
01:07:54.920 --> 01:07:58.000
really matter but you can add pads to
01:07:56.520 --> 01:07:59.440
the beginning so they're all starting at
01:07:58.000 --> 01:08:03.079
the same place especially if you're
01:07:59.440 --> 01:08:05.799
using RNN style models um then we
01:08:03.079 --> 01:08:08.920
calculate the loss over the output for
01:08:05.799 --> 01:08:11.000
example we multiply the loss by a mask
01:08:08.920 --> 01:08:13.480
to remove the loss over the tokens that
01:08:11.000 --> 01:08:16.880
we don't care about and we take the sum
01:08:13.480 --> 01:08:19.120
of these and so luckily most of this is
01:08:16.880 --> 01:08:20.719
implemented in, for example, PyTorch or
01:08:19.120 --> 01:08:22.279
Hugging Face Transformers already so you
01:08:20.719 --> 01:08:23.560
don't need to worry about it but it is a
01:08:22.279 --> 01:08:24.799
good idea to know what's going on under
01:08:23.560 --> 01:08:28.560
the hood if you want to implement
01:08:24.799 --> 01:08:32.440
anything unusual.
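As a minimal sketch of what that padding and loss masking looks like under the hood (illustrative token IDs and shapes; real code would use the tokenizer's pad ID):

import torch
import torch.nn.functional as F

pad_id = 0
batch = torch.tensor([[5, 7, 2, pad_id],   # two sequences padded to one length
                      [3, 9, 4, 6]])
logits = torch.randn(2, 4, 100)            # (batch, time, vocab) model outputs

# Per-token loss, then multiply by a mask so pad positions contribute nothing.
loss = F.cross_entropy(logits.reshape(-1, 100), batch.reshape(-1),
                       reduction="none").reshape(2, 4)
mask = (batch != pad_id).float()           # 1 for real tokens, 0 for padding
total_loss = (loss * mask).sum()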
01:08:28.560 --> 01:08:35.600
It's also good to know for the following reason,
01:08:32.440 --> 01:08:38.799
which is bucketing and
01:08:35.600 --> 01:08:40.319
sorting so if we use sentences of vastly
01:08:38.799 --> 01:08:43.359
different lengths and we put them in the
01:08:40.319 --> 01:08:46.640
same mini batch this can uh waste a
01:08:43.359 --> 01:08:48.000
really large amount of computation so
01:08:46.640 --> 01:08:50.759
like let's say we're processing
01:08:48.000 --> 01:08:52.480
documents or movie reviews or something
01:08:50.759 --> 01:08:54.799
like that and you have a most movie
01:08:52.480 --> 01:08:57.719
reviews are like
01:08:54.799 --> 01:09:00.080
10 words long but you have one movie
01:08:57.719 --> 01:09:02.319
review in your mini batch of uh a
01:09:00.080 --> 01:09:04.359
thousand words so basically what that
01:09:02.319 --> 01:09:08.279
means is you're padding most of your
01:09:04.359 --> 01:09:11.120
sequences with 990 pad tokens to process
01:09:08.279 --> 01:09:12.120
10-word sequences, which is a lot of waste,
01:09:11.120 --> 01:09:14.000
right because you're running them all
01:09:12.120 --> 01:09:16.799
through your GPU and other things like
01:09:14.000 --> 01:09:19.080
that so one way to remedy this is to
01:09:16.799 --> 01:09:22.719
sort sentences so similar-length
01:09:19.080 --> 01:09:27.480
sentences are in the same batch so you
01:09:22.719 --> 01:09:29.920
uh you first sort before building all of
01:09:27.480 --> 01:09:31.640
your batches and then uh that makes it
01:09:29.920 --> 01:09:32.960
so that similarly sized ones are in the
01:09:31.640 --> 01:09:35.239
same
01:09:32.960 --> 01:09:37.040
batch this goes into the problem that I
01:09:35.239 --> 01:09:39.359
mentioned before but only in passing
01:09:37.040 --> 01:09:42.440
which is uh let's say you're calculating
01:09:39.359 --> 01:09:44.199
your batch based on the number of
01:09:42.440 --> 01:09:47.679
sequences that you're
01:09:44.199 --> 01:09:51.400
processing if you say Okay I want 64
01:09:47.679 --> 01:09:53.359
sequences in my mini batch um if most of
01:09:51.400 --> 01:09:55.159
the time those 64 sequences are are 10
01:09:53.359 --> 01:09:57.480
tokens that's fine but then when you get
01:09:55.159 --> 01:10:01.440
the One Mini batch that has a thousand
01:09:57.480 --> 01:10:02.760
tokens in each sentence or each sequence
01:10:01.440 --> 01:10:04.920
um suddenly you're going to run out of
01:10:02.760 --> 01:10:07.800
GPU memory and your training is
01:10:04.920 --> 01:10:08.920
going to crash, right? And you really
01:10:07.800 --> 01:10:10.440
don't want that to happen when you
01:10:08.920 --> 01:10:12.440
started running your homework assignment
01:10:10.440 --> 01:10:15.560
and then went to bed and then woke up
01:10:12.440 --> 01:10:18.440
and it crashed you know uh 15 minutes
01:10:15.560 --> 01:10:21.040
into Computing or something so uh this
01:10:18.440 --> 01:10:23.440
is an important thing to be aware of
01:10:21.040 --> 01:10:26.760
practically uh again this can be solved
01:10:23.440 --> 01:10:29.239
by a lot of toolkits, like, I know fairseq
01:10:26.760 --> 01:10:30.840
does it and Hugging Face does it if you
01:10:29.239 --> 01:10:33.159
set the appropriate settings but it's
01:10:30.840 --> 01:10:36.239
something you should be aware of um
01:10:33.159 --> 01:10:37.880
another note is that if you do this it's
01:10:36.239 --> 01:10:41.280
reducing the randomness in your
01:10:37.880 --> 01:10:42.880
distribution of data so um stochastic
01:10:41.280 --> 01:10:44.520
gradient descent is really heavily
01:10:42.880 --> 01:10:47.480
reliant on the fact that your ordering
01:10:44.520 --> 01:10:49.440
of data is randomized or at least it's a
01:10:47.480 --> 01:10:52.159
distributed appropriately so it's
01:10:49.440 --> 01:10:56.840
something to definitely be aware of um
01:10:52.159 --> 01:10:59.560
so this is a good thing to think about.
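As a minimal sketch of sorting plus bucketing, batching by a total token budget rather than a fixed sequence count, so one long sequence can't blow up memory (illustrative function; toolkits implement more careful versions of this):

def make_batches(sequences, max_tokens=4096):
    # Sort by length so similar-length sequences share a batch, then
    # cap each batch by padded size (longest sequence times batch size).
    batches, batch, longest = [], [], 0
    for seq in sorted(sequences, key=len):
        longest = max(longest, len(seq))
        if batch and longest * (len(batch) + 1) > max_tokens:
            batches.append(batch)
            batch, longest = [], len(seq)
        batch.append(seq)
    if batch:
        batches.append(batch)
    return batches  # remember to re-shuffle the batch order for SGD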
01:10:56.840 --> 01:11:01.400
Another really useful thing to
01:10:59.560 --> 01:11:03.800
think about is strided
01:11:01.400 --> 01:11:05.440
architectures um strided architectures
01:11:03.800 --> 01:11:07.520
appear in rnns they appear in
01:11:05.440 --> 01:11:10.080
convolution, they appear in
01:11:07.520 --> 01:11:12.320
Transformers or attention based models
01:11:10.080 --> 01:11:15.199
um they're called different things in
01:11:12.320 --> 01:11:18.159
each of them so in rnns they're called
01:11:15.199 --> 01:11:21.280
pyramidal rnns in convolution they're
01:11:18.159 --> 01:11:22.400
called strided architectures and in
01:11:21.280 --> 01:11:25.080
attention they're called sparse
01:11:22.400 --> 01:11:27.440
attention usually they all actually kind
01:11:25.080 --> 01:11:30.800
of mean the same thing um and basically
01:11:27.440 --> 01:11:33.440
what they mean is you have a
01:11:30.800 --> 01:11:37.040
multi-layer model and when you have a
01:11:33.440 --> 01:11:40.920
multi-layer model you don't process
01:11:37.040 --> 01:11:43.920
every input from the
01:11:40.920 --> 01:11:45.560
previous layer so here's an example um
01:11:43.920 --> 01:11:47.840
like let's say you have a whole bunch of
01:11:45.560 --> 01:11:50.199
inputs um each of the inputs is
01:11:47.840 --> 01:11:53.159
processed in the first layer in some way
01:11:50.199 --> 01:11:56.639
but in the second layer you actually
01:11:53.159 --> 01:12:01.520
input for example uh two inputs to the
01:11:56.639 --> 01:12:03.560
RNN but you skip, so you have one
01:12:01.520 --> 01:12:05.440
state that corresponds to state number
01:12:03.560 --> 01:12:06.840
one and two another state that
01:12:05.440 --> 01:12:08.440
corresponds to state number two and
01:12:06.840 --> 01:12:10.920
three another state that corresponds to
01:12:08.440 --> 01:12:13.280
state number three and four so what that
01:12:10.920 --> 01:12:15.199
means is you can gradually decrease the
01:12:13.280 --> 01:12:18.199
number like the length of the sequence
01:12:15.199 --> 01:12:20.719
every time you process so uh this is a
01:12:18.199 --> 01:12:22.360
really useful thing to do if you're
01:12:20.719 --> 01:12:25.480
processing very long sequences so you
01:12:22.360 --> 01:12:25.480
should be aware of it.
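As a minimal sketch of this kind of striding (pyramidal style, merging each pair of adjacent states so every layer halves the sequence length; shapes are illustrative):

import torch

def subsample_pairs(states):
    # states: (batch, time, hidden); drop the last step if time is odd,
    # then concatenate each pair of adjacent states into one state.
    b, t, h = states.shape
    t = t - (t % 2)
    return states[:, :t].reshape(b, t // 2, 2 * h)

x = torch.randn(4, 100, 256)
y = subsample_pairs(x)   # (4, 50, 512): half the length after one layer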
01:12:27.440 --> 01:12:34.120
cool um everything
01:12:30.639 --> 01:12:36.920
okay okay the final thing is truncated
01:12:34.120 --> 01:12:39.239
back propagation through time and uh
01:12:36.920 --> 01:12:41.000
truncated back propagation Through Time
01:12:39.239 --> 01:12:43.560
what this is doing is basically you do
01:12:41.000 --> 01:12:46.120
backprop over shorter segments but
01:12:43.560 --> 01:12:47.840
you initialize with the state from the
01:12:46.120 --> 01:12:51.040
previous
01:12:47.840 --> 01:12:52.440
segment and the way this works is uh
01:12:51.040 --> 01:12:56.080
like for example if you're running an
01:12:52.440 --> 01:12:57.600
RNN uh you would run the RNN over the
01:12:56.080 --> 01:12:59.400
previous segment maybe it's length four
01:12:57.600 --> 01:13:02.120
maybe it's length 400 it doesn't really
01:12:59.400 --> 01:13:04.520
matter, but it's a consistent-length
01:13:02.120 --> 01:13:06.360
segment and then when you do the next
01:13:04.520 --> 01:13:08.840
segment what you do is you only pass the
01:13:06.360 --> 01:13:12.960
hidden state but you throw away the rest
01:13:08.840 --> 01:13:16.360
of the previous computation graph and
01:13:12.960 --> 01:13:18.040
then walk through like this, so you
01:13:16.360 --> 01:13:22.159
won't actually be updating the
01:13:18.040 --> 01:13:24.080
parameters of this based on the result
01:13:22.159 --> 01:13:25.800
the loss from this, but you're still
01:13:24.080 --> 01:13:28.159
passing the information so this can use
01:13:25.800 --> 01:13:30.400
the information for the previous state
01:13:28.159 --> 01:13:32.239
so this is an example from RNNs; this is
01:13:30.400 --> 01:13:35.159
used pretty widely in RNNs, but there's
01:13:32.239 --> 01:13:38.000
also a lot of Transformer architectures
01:13:35.159 --> 01:13:39.400
that do things like this um the original
01:13:38.000 --> 01:13:41.000
one is something called Transformer-
01:13:39.400 --> 01:13:44.560
XL that was actually created here at
01:13:41.000 --> 01:13:46.560
CMU but this is also um used in the new
01:13:44.560 --> 01:13:48.719
Mistral models and other things like this
01:13:46.560 --> 01:13:51.719
as well so um it's something that's
01:13:48.719 --> 01:13:54.719
still very much alive and well nowadays
01:13:51.719 --> 01:13:56.320
as well.
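As a minimal sketch of truncated backpropagation through time in PyTorch (illustrative names, assuming a model that returns an output and a single hidden-state tensor): detaching the hidden state keeps its values but cuts the computation graph, so backprop never reaches into earlier segments:

import torch.nn.functional as F

def train_tbptt(model, optimizer, segments, hidden):
    # segments: consecutive (input, target) chunks of one long sequence
    for x, y in segments:
        output, hidden = model(x, hidden)
        loss = F.cross_entropy(output.reshape(-1, output.size(-1)),
                               y.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        hidden = hidden.detach()  # pass the state on, throw away the graph
    return hidden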
01:13:54.719 --> 01:13:57.840
cool um that's all I have for today are
01:13:56.320 --> 01:13:59.760
there any questions people want to ask
01:13:57.840 --> 01:14:02.760
before we wrap
01:13:59.760 --> 01:14:02.760
up
01:14:12.840 --> 01:14:20.000
Yeah? So for conditioned
01:14:16.960 --> 01:14:25.040
prediction, what is source X and target Y?
01:14:20.000 --> 01:14:26.520
um I think I kind of maybe carried over
01:14:25.040 --> 01:14:28.679
uh some terminology from machine
01:14:26.520 --> 01:14:31.400
translation uh by accident maybe it
01:14:28.679 --> 01:14:34.080
should be input X and output y uh that
01:14:31.400 --> 01:14:36.600
would be a better way to put it and so
01:14:34.080 --> 01:14:38.080
uh it could be anything for translation
01:14:36.600 --> 01:14:39.560
it's like something in the source
01:14:38.080 --> 01:14:42.600
language and something in the target
01:14:39.560 --> 01:14:44.520
language so like English and Japanese um
01:14:42.600 --> 01:14:47.280
if it's just a regular language model it
01:14:44.520 --> 01:14:50.560
could be something like a prompt and the
01:14:47.280 --> 01:14:55.280
output. So for
01:14:50.560 --> 01:14:55.280
unconditioned, what would be an example of that?
01:14:57.400 --> 01:15:01.400
yeah so for unconditioned prediction
01:14:59.760 --> 01:15:03.840
that could just be straight up language
01:15:01.400 --> 01:15:07.040
modeling for example so um language
01:15:03.840 --> 01:15:11.840
modeling with not necessarily any
01:15:07.040 --> 01:15:11.840
prompts. Okay, thanks. And anything
01:15:12.440 --> 01:15:17.880
else okay great thanks a lot I'm happy
01:15:14.639 --> 01:15:17.880
to take questions