WEBVTT
00:00:03.879 --> 00:00:07.480
cool um so this time I'm going to talk
00:00:05.480 --> 00:00:08.880
about word representation and text
00:00:07.480 --> 00:00:11.480
classifiers these are kind of the
00:00:08.880 --> 00:00:14.080
foundations that you need to know uh in
00:00:11.480 --> 00:00:15.640
order to move on to the more complex
00:00:14.080 --> 00:00:17.920
things that we'll be talking in future
00:00:15.640 --> 00:00:19.640
classes uh but actually the in
00:00:17.920 --> 00:00:22.760
particular the word representation part
00:00:19.640 --> 00:00:25.439
is pretty important it's a major uh
00:00:22.760 --> 00:00:31.800
thing that we need to do for all NLP
00:00:25.439 --> 00:00:34.239
models so uh let's go into it
00:00:31.800 --> 00:00:38.200
so last class I talked about the bag of
00:00:34.239 --> 00:00:40.239
words model um and just to review this
00:00:38.200 --> 00:00:43.920
was a model where basically we take each
00:00:40.239 --> 00:00:45.520
word we represent it as a one hot Vector
00:00:43.920 --> 00:00:48.760
uh like
00:00:45.520 --> 00:00:51.120
this and we add all of these vectors
00:00:48.760 --> 00:00:53.160
together we multiply the resulting
00:00:51.120 --> 00:00:55.160
frequency vector by some weights and we
00:00:53.160 --> 00:00:57.239
get a score out of this and we can use
00:00:55.160 --> 00:00:58.559
this score for binary classification or
00:00:57.239 --> 00:01:00.239
if we want to do multiclass
00:00:58.559 --> 00:01:02.519
classification we get you know multiple
00:01:00.239 --> 00:01:05.720
scores for each
00:01:02.519 --> 00:01:08.040
class and the features F were just based
00:01:05.720 --> 00:01:08.920
on our word identities and the weights
00:01:08.040 --> 00:01:12.159
were
00:01:08.920 --> 00:01:14.680
learned and um if we look at what's
00:01:12.159 --> 00:01:17.520
missing in bag of words
00:01:14.680 --> 00:01:19.600
models um we talked about handling of
00:01:17.520 --> 00:01:23.280
conjugated or compound
00:01:19.600 --> 00:01:25.439
words we talked about handling of word
00:01:23.280 --> 00:01:27.880
similarity and we talked about handling
00:01:25.439 --> 00:01:30.240
of combination features and handling of
00:01:27.880 --> 00:01:33.280
sentence structure and so all of these
00:01:30.240 --> 00:01:35.000
are are tricky problems uh we saw that
00:01:33.280 --> 00:01:37.000
you know creating a rule-based system to
00:01:35.000 --> 00:01:39.000
solve these problems is non-trivial and
00:01:37.000 --> 00:01:41.399
at the very least would take a lot of
00:01:39.000 --> 00:01:44.079
time and so now I want to talk about
00:01:41.399 --> 00:01:47.119
some solutions to the problems in this
00:01:44.079 --> 00:01:49.280
class so the first the solution to the
00:01:47.119 --> 00:01:52.240
first problem or a solution to the first
00:01:49.280 --> 00:01:54.880
problem is uh subword or character based
00:01:52.240 --> 00:01:57.520
models and that's what I'll talk about
00:01:54.880 --> 00:02:00.719
first handling of word similarity this
00:01:57.520 --> 00:02:02.960
can be handled uh using word embeddings
00:02:00.719 --> 00:02:05.079
and the word embeddings uh will be
00:02:02.960 --> 00:02:07.159
another thing we'll talk about this time
00:02:05.079 --> 00:02:08.879
handling of combination features uh we
00:02:07.159 --> 00:02:11.039
can handle through neural networks which
00:02:08.879 --> 00:02:14.040
we'll also talk about this time and then
00:02:11.039 --> 00:02:15.560
handling of sentence structure uh the
00:02:14.040 --> 00:02:17.720
kind of standard way of handling this
00:02:15.560 --> 00:02:20.120
now is through sequence-based models and
00:02:17.720 --> 00:02:24.879
that will be uh starting in a few
00:02:20.120 --> 00:02:28.080
classes so uh let's jump into
00:02:24.879 --> 00:02:30.000
it so subword models uh as I mentioned
00:02:28.080 --> 00:02:31.840
this is a really really important part
00:02:30.000 --> 00:02:33.360
all of the models that we're building
00:02:31.840 --> 00:02:35.480
nowadays including you know
00:02:33.360 --> 00:02:38.239
state-of-the-art language models and and
00:02:35.480 --> 00:02:42.200
things like this and the basic idea
00:02:38.239 --> 00:02:44.720
behind this is that we want to split uh
00:02:42.200 --> 00:02:48.040
in particular split less common words up
00:02:44.720 --> 00:02:50.200
into multiple subword tokens so to give
00:02:48.040 --> 00:02:52.200
an example of this uh if we have
00:02:50.200 --> 00:02:55.040
something like the companies are
00:02:52.200 --> 00:02:57.000
expanding uh it might split companies
00:02:55.040 --> 00:03:02.120
into compan + ies
00:02:57.000 --> 00:03:05.000
and expanding into expand + ing like this and there are
00:03:02.120 --> 00:03:08.480
a few benefits of this uh the first
00:03:05.000 --> 00:03:10.760
benefit is that this allows you to
00:03:08.480 --> 00:03:13.360
share parameters between word varieties or
00:03:10.760 --> 00:03:15.200
compound words and the other one is to
00:03:13.360 --> 00:03:17.400
reduce parameter size and save compute
00:03:15.200 --> 00:03:19.720
and memory and both of these are kind of
00:03:17.400 --> 00:03:23.239
like equally important things that we
00:03:19.720 --> 00:03:25.519
need to be uh we need to be considering
00:03:23.239 --> 00:03:26.440
so does anyone know how many words there
00:03:25.519 --> 00:03:28.680
are in
00:03:26.440 --> 00:03:31.680
English any
00:03:28.680 --> 00:03:31.680
ideas
00:03:36.799 --> 00:03:43.400
yeah two
00:03:38.599 --> 00:03:45.560
million pretty good um any other
00:03:43.400 --> 00:03:47.159
ideas
00:03:45.560 --> 00:03:50.360
yeah
00:03:47.159 --> 00:03:53.599
60,000 some models use 60,000 I I think
00:03:50.360 --> 00:03:56.200
60,000 is probably for these subword models
00:03:53.599 --> 00:03:58.079
uh when you're talking about this so
00:03:56.200 --> 00:03:59.319
they can use subword models to take the 2
00:03:58.079 --> 00:04:03.480
million which I think is a reasonable
00:03:59.319 --> 00:04:07.400
guess to 60,000 any other
00:04:03.480 --> 00:04:08.840
ideas 700,000 okay pretty good um so
00:04:07.400 --> 00:04:11.799
this was a trick question it doesn't
00:04:08.840 --> 00:04:14.760
really have a good answer um but 2
00:04:11.799 --> 00:04:17.479
million's probably pretty good 6 or
00:04:14.760 --> 00:04:19.160
700,000 is pretty good the reason why
00:04:17.479 --> 00:04:21.360
this is a trick question is because are
00:04:19.160 --> 00:04:24.440
company and companies different
00:04:21.360 --> 00:04:26.840
words uh maybe maybe not right because
00:04:24.440 --> 00:04:30.120
if we know the word company we can you
00:04:26.840 --> 00:04:32.520
know guess what the word companies means
00:04:30.120 --> 00:04:35.720
um what about automobile is that a
00:04:32.520 --> 00:04:37.400
different word well maybe if we know
00:04:35.720 --> 00:04:39.400
Auto and mobile we can kind of guess
00:04:37.400 --> 00:04:41.160
what automobile means but not really so
00:04:39.400 --> 00:04:43.479
maybe that's a different word there's
00:04:41.160 --> 00:04:45.960
all kinds of Shades of Gray there and
00:04:43.479 --> 00:04:48.120
also we have really frequent words that
00:04:45.960 --> 00:04:50.360
everybody can probably acknowledge are
00:04:48.120 --> 00:04:52.320
words like
00:04:50.360 --> 00:04:55.639
the and
00:04:52.320 --> 00:04:58.520
a and um maybe
00:04:55.639 --> 00:05:00.680
car and then we have words down here
00:04:58.520 --> 00:05:02.320
which are like misspellings or
00:05:00.680 --> 00:05:04.160
something like that misspellings of
00:05:02.320 --> 00:05:06.520
actual correct words or
00:05:04.160 --> 00:05:09.199
slay uh or other things like that and
00:05:06.520 --> 00:05:12.520
then it's questionable whether those are
00:05:09.199 --> 00:05:17.199
actual words or not so um there's a
00:05:12.520 --> 00:05:19.520
famous uh law called Zipf's
00:05:17.199 --> 00:05:21.280
law um which probably a lot of people
00:05:19.520 --> 00:05:23.360
have heard of it's also the source of
00:05:21.280 --> 00:05:26.919
your zip
00:05:23.360 --> 00:05:30.160
file um which is using Zipf's law to
00:05:26.919 --> 00:05:32.400
compress uh compress output by making
00:05:30.160 --> 00:05:34.880
the uh more frequent words have shorter
00:05:32.400 --> 00:05:37.520
byte strings and less frequent words
00:05:34.880 --> 00:05:38.800
have uh you know longer byte
00:05:37.520 --> 00:05:43.120
strings but basically like we're going
00:05:38.800 --> 00:05:45.120
to have an infinite number of words or
00:05:43.120 --> 00:05:46.360
at least strings that are separated by
00:05:45.120 --> 00:05:49.280
white space so we need to handle this
00:05:46.360 --> 00:05:53.199
somehow and that's what subword units
00:05:49.280 --> 00:05:54.560
do so um 60,000 was a good guess for the
00:05:53.199 --> 00:05:57.160
number of subword units you might use in
00:05:54.560 --> 00:06:00.759
a model and so uh by using subword units we
00:05:57.160 --> 00:06:04.840
can limit to about that much
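A quick illustrative sketch of what this splitting looks like with an off-the-shelf tokenizer (the library and model name here are just examples, not ones named in the lecture, and the exact splits depend on that model's learned vocabulary):

# requires the Hugging Face `transformers` package
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # example model only
print(tok.tokenize("The companies are expanding"))
# frequent words usually stay whole; rarer or longer words get split into
# pieces (marked with ## by this particular tokenizer)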
00:06:00.759 --> 00:06:08.160
so there's a couple of common uh ways to
00:06:04.840 --> 00:06:10.440
create these subword units and basically
00:06:08.160 --> 00:06:14.560
all of them rely on the fact that you
00:06:10.440 --> 00:06:16.039
want more common strings to become
00:06:14.560 --> 00:06:19.599
subword
00:06:16.039 --> 00:06:22.199
units um or actually sorry I realize
00:06:19.599 --> 00:06:24.280
maybe before doing that I could explain
00:06:22.199 --> 00:06:26.360
an alternative to creating subword units
00:06:24.280 --> 00:06:29.639
so the alternative to creating subword
00:06:26.360 --> 00:06:33.560
units is to treat every character or
00:06:29.639 --> 00:06:36.919
maybe every bite in a string as a single
00:06:33.560 --> 00:06:38.560
thing that you encode so in
00:06:36.919 --> 00:06:42.520
other words instead of trying to model
00:06:38.560 --> 00:06:47.919
the companies are expanding we Model T h
00:06:42.520 --> 00:06:50.199
e space c o m uh etc etc can anyone
00:06:47.919 --> 00:06:53.199
think of any downsides of
00:06:50.199 --> 00:06:53.199
this
00:06:57.039 --> 00:07:01.879
yeah yeah the set of these will be very
00:07:00.080 --> 00:07:05.000
will be very small but that's not
00:07:01.879 --> 00:07:05.000
necessarily a problem
00:07:08.560 --> 00:07:15.599
right yeah um and any other
00:07:12.599 --> 00:07:15.599
ideas
00:07:19.520 --> 00:07:24.360
yeah yeah the resulting sequences will
00:07:22.080 --> 00:07:25.520
be very long um and when you say
00:07:24.360 --> 00:07:27.160
difficult to use it could be difficult
00:07:25.520 --> 00:07:29.560
to use for a couple of reasons there's
00:07:27.160 --> 00:07:31.840
mainly two reasons actually any any ideas
00:07:29.560 --> 00:07:31.840
about
00:07:33.479 --> 00:07:37.800
this any
00:07:46.280 --> 00:07:50.599
yeah yeah that's a little bit of a
00:07:49.000 --> 00:07:52.319
separate problem than the character
00:07:50.599 --> 00:07:53.919
based model so let me get back to that
00:07:52.319 --> 00:07:56.400
but uh let let's finish the discussion
00:07:53.919 --> 00:07:58.360
of the character based models so if it's
00:07:56.400 --> 00:08:00.120
really if it's really long maybe a
00:07:58.360 --> 00:08:01.879
simple thing like uh let's say you have
00:08:00.120 --> 00:08:06.560
a big neural network and it's processing
00:08:01.879 --> 00:08:06.560
a really long sequence any ideas what
00:08:06.919 --> 00:08:10.879
happens basically you run out of memory
00:08:09.280 --> 00:08:13.440
or it takes a really long time right so
00:08:10.879 --> 00:08:16.840
you have computational problems another
00:08:13.440 --> 00:08:18.479
reason why is um think of what a bag of
00:08:16.840 --> 00:08:21.400
words model would look like if it was a
00:08:18.479 --> 00:08:21.400
bag of characters
00:08:21.800 --> 00:08:25.919
model it wouldn't be very informative
00:08:24.199 --> 00:08:27.599
about whether like a sentence is
00:08:25.919 --> 00:08:30.919
positive sentiment or negative sentiment
00:08:27.599 --> 00:08:32.959
right because instead of having uh go o
00:08:30.919 --> 00:08:35.039
you would have uh instead of having good
00:08:32.959 --> 00:08:36.360
you would have g o o d and that doesn't
00:08:35.039 --> 00:08:38.560
really directly tell you whether it's
00:08:36.360 --> 00:08:41.719
positive sentiment or not so those are
00:08:38.560 --> 00:08:43.680
basically the two problems um compute
00:08:41.719 --> 00:08:45.320
and lack of expressiveness in the
00:08:43.680 --> 00:08:50.720
underlying representations so you need
00:08:45.320 --> 00:08:52.080
to handle both of those yes so if we uh
00:08:50.720 --> 00:08:54.480
move from
00:08:52.080 --> 00:08:56.440
characters to words we get better expressiveness and we
00:08:54.480 --> 00:08:58.920
assume that if we just get the bigger
00:08:56.440 --> 00:09:00.120
and bigger paragraphs we'll get even
00:08:58.920 --> 00:09:02.760
better
00:09:00.120 --> 00:09:05.120
yeah so a very good question I'll repeat
00:09:02.760 --> 00:09:06.560
it um and actually this also goes back
00:09:05.120 --> 00:09:08.040
to the other question you asked about
00:09:06.560 --> 00:09:09.519
words that look the same but are
00:09:08.040 --> 00:09:12.160
pronounced differently or have different
00:09:09.519 --> 00:09:14.360
meanings and so like let's say we just
00:09:12.160 --> 00:09:15.920
remembered this whole sentence right the
00:09:14.360 --> 00:09:18.279
companies are
00:09:15.920 --> 00:09:21.600
expanding um and that was like a single
00:09:18.279 --> 00:09:22.680
embedding and we somehow embedded it the
00:09:21.600 --> 00:09:25.720
problem would be we're never going to
00:09:22.680 --> 00:09:27.120
see that sentence again um or if we go
00:09:25.720 --> 00:09:29.480
to longer sentences we're never going to
00:09:27.120 --> 00:09:31.839
see the longer sentences again so it
00:09:29.480 --> 00:09:34.320
becomes too sparse so there's kind of a
00:09:31.839 --> 00:09:37.240
sweet spot between
00:09:34.320 --> 00:09:40.279
like long enough to be expressive and
00:09:37.240 --> 00:09:42.480
short enough to occur many times so that
00:09:40.279 --> 00:09:43.959
you can learn appropriately and that's
00:09:42.480 --> 00:09:47.120
kind of what subword models are aiming
00:09:43.959 --> 00:09:48.360
for and if you get longer subwords then
00:09:47.120 --> 00:09:50.200
you'll get things that are more
00:09:48.360 --> 00:09:52.959
expressive but more sparse in shorter
00:09:50.200 --> 00:09:55.440
subwords you'll get things that are like
00:09:52.959 --> 00:09:57.279
uh less expressive but less sparse so you
00:09:55.440 --> 00:09:59.120
need to balance between them and then
00:09:57.279 --> 00:10:00.600
once we get into sequence modeling they
00:09:59.120 --> 00:10:02.600
start being able to model like which
00:10:00.600 --> 00:10:04.120
words are next to each other uh which
00:10:02.600 --> 00:10:06.040
tokens are next to each other and stuff
00:10:04.120 --> 00:10:07.800
like that so even if they are less
00:10:06.040 --> 00:10:11.279
expressive the combination between them
00:10:07.800 --> 00:10:12.600
can be expressive so um yeah that's kind
00:10:11.279 --> 00:10:13.440
of a preview of what we're going to be
00:10:12.600 --> 00:10:17.320
doing
00:10:13.440 --> 00:10:19.279
next okay so um let's assume that we
00:10:17.320 --> 00:10:21.320
want to have some subwords that are
00:10:19.279 --> 00:10:23.000
longer than characters but shorter than
00:10:21.320 --> 00:10:26.240
tokens how do we make these in a
00:10:23.000 --> 00:10:28.680
consistent way there's two major ways of
00:10:26.240 --> 00:10:31.480
doing this uh the first one is byte pair
00:10:28.680 --> 00:10:32.839
encoding and this is uh very very simple
00:10:31.480 --> 00:10:35.839
in fact it's so
00:10:32.839 --> 00:10:35.839
simple
00:10:36.600 --> 00:10:40.839
that we can implement
00:10:41.839 --> 00:10:47.240
it in this notebook here which you can
00:10:44.600 --> 00:10:51.720
click through to on the
00:10:47.240 --> 00:10:55.440
slides and it's uh
00:10:51.720 --> 00:10:58.040
about 10 lines of code um and so
00:10:55.440 --> 00:11:01.040
basically what byte pair encoding
00:10:58.040 --> 00:11:01.040
does
00:11:04.600 --> 00:11:09.560
is that you start out with um all of the
00:11:07.000 --> 00:11:14.360
vocabulary that you want to process
00:11:09.560 --> 00:11:17.560
where each vocabulary item is split into
00:11:14.360 --> 00:11:21.240
uh the characters and an end of word
00:11:17.560 --> 00:11:23.360
symbol and you have a corresponding
00:11:21.240 --> 00:11:27.519
frequency of
00:11:23.360 --> 00:11:31.120
this you then uh get statistics about
00:11:27.519 --> 00:11:33.279
the most common pairs of tokens that
00:11:31.120 --> 00:11:34.880
occur next to each other and so here the
00:11:33.279 --> 00:11:38.240
most common pairs of tokens that occur
00:11:34.880 --> 00:11:41.920
next to each other are e s because it
00:11:38.240 --> 00:11:46.560
occurs nine times because it occurs in
00:11:41.920 --> 00:11:48.279
newest and widest also s and t w
00:11:46.560 --> 00:11:51.440
because those occur there too and then
00:11:48.279 --> 00:11:53.519
you have we and other things like that
00:11:51.440 --> 00:11:56.000
so out of all the most frequent ones you
00:11:53.519 --> 00:11:59.920
just merge them together and that gives
00:11:56.000 --> 00:12:02.720
you uh n e w es t and w i d
00:11:59.920 --> 00:12:05.200
es t for newest and widest
00:12:02.720 --> 00:12:09.360
and then you do the same thing this
00:12:05.200 --> 00:12:12.519
time now you get EST so now you get this
00:12:09.360 --> 00:12:14.279
uh suffix EST and that looks pretty
00:12:12.519 --> 00:12:16.399
reasonable for English right you know
00:12:14.279 --> 00:12:19.040
EST is a common suffix that we use it
00:12:16.399 --> 00:12:22.399
seems like it should be a single token
00:12:19.040 --> 00:12:25.880
and um so you just do this over and over
00:12:22.399 --> 00:12:29.279
again if you want a vocabulary of 60,000
00:12:25.880 --> 00:12:31.120
for example you would do um 60,000 minus
00:12:29.279 --> 00:12:33.079
number of characters merge operations
00:12:31.120 --> 00:12:37.160
and eventually you would get a vocabulary of
00:12:33.079 --> 00:12:41.920
60,000 um and yeah very very simple
00:12:37.160 --> 00:12:41.920
method to do this um any questions about
00:12:43.160 --> 00:12:46.160
that
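Since the notebook linked on the slides implements exactly this, here is a minimal sketch of the merge loop just described (roughly the "10 lines of code" mentioned; the toy counts follow the newest/widest example above):

import collections, re

def get_stats(vocab):
    # count how often each adjacent pair of symbols occurs, weighted by word frequency
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    # merge the chosen pair into a single symbol everywhere it appears
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# each word is pre-split into characters plus an end-of-word symbol
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(10):          # number of merges = target vocab size minus base characters
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best)              # first merges here: ('e', 's'), then ('es', 't'), ...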
00:12:57.839 --> 00:13:00.839
yeah
00:13:15.600 --> 00:13:20.959
yeah so uh just to repeat the the
00:13:18.040 --> 00:13:23.560
comment uh this seems like a greedy
00:13:20.959 --> 00:13:25.320
version of Huffman encoding which is a
00:13:23.560 --> 00:13:28.839
you know similar to what you're using in
00:13:25.320 --> 00:13:32.000
your zip file a way to shorten things by
00:13:28.839 --> 00:13:36.560
getting longer uh more frequent things
00:13:32.000 --> 00:13:39.120
being encoded as a single token um I think
00:13:36.560 --> 00:13:40.760
byte pair encoding did originally start
00:13:39.120 --> 00:13:43.720
like that that's part of the reason why
00:13:40.760 --> 00:13:45.760
the encoding uh thing is here I think it
00:13:43.720 --> 00:13:47.360
originally started there I haven't read
00:13:45.760 --> 00:13:49.360
really deeply into this but I can talk
00:13:47.360 --> 00:13:53.240
more about how the next one corresponds
00:13:49.360 --> 00:13:54.440
to information theory and on Tuesday I'm
00:13:53.240 --> 00:13:55.720
going to talk even more about how
00:13:54.440 --> 00:13:57.720
language models correspond to
00:13:55.720 --> 00:14:00.040
information theory so we can uh we can
00:13:57.720 --> 00:14:04.519
discuss maybe in more detail
00:14:00.040 --> 00:14:07.639
too um so the the alternative option is
00:14:04.519 --> 00:14:10.000
to use unigram models and unigram models
00:14:07.639 --> 00:14:12.240
are the simplest type of language model
00:14:10.000 --> 00:14:15.079
I'm going to talk more in detail about
00:14:12.240 --> 00:14:18.279
them next time but basically uh the way
00:14:15.079 --> 00:14:20.759
it works is you create a model that
00:14:18.279 --> 00:14:23.600
generates all word uh words in the
00:14:20.759 --> 00:14:26.199
sequence independently sorry I thought I
00:14:23.600 --> 00:14:26.199
had a
00:14:26.320 --> 00:14:31.800
um I thought I had an equation but
00:14:28.800 --> 00:14:31.800
basically the
00:14:32.240 --> 00:14:35.759
equation looks
00:14:38.079 --> 00:14:41.079
like
00:14:47.720 --> 00:14:52.120
this so you say the probability of the
00:14:50.360 --> 00:14:53.440
sequence is the product of the
00:14:52.120 --> 00:14:54.279
probabilities of each of the words in
00:14:53.440 --> 00:14:55.959
the
00:14:54.279 --> 00:15:00.079
sequence
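In symbols, the equation meant for the slide is presumably just (notation mine, with x_1, ..., x_n the tokens of the sequence):

P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i)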
00:14:55.959 --> 00:15:04.079
and uh then you try to pick a vocabulary
00:15:00.079 --> 00:15:06.839
that maximizes the probability of the
00:15:04.079 --> 00:15:09.320
Corpus given a fixed vocabulary size so
00:15:06.839 --> 00:15:10.320
you try to say okay you get a vocabulary
00:15:09.320 --> 00:15:14.440
size of
00:15:10.320 --> 00:15:16.920
60,000 how do you um how do you pick the
00:15:14.440 --> 00:15:19.680
best 60,000 vocabulary to maximize the
00:15:16.920 --> 00:15:22.440
probability of the the Corpus and that
00:15:19.680 --> 00:15:25.959
will result in something very similar uh
00:15:22.440 --> 00:15:27.920
it will also try to give longer uh
00:15:25.959 --> 00:15:29.880
vocabulary uh sorry more common
00:15:27.920 --> 00:15:32.240
vocabulary long sequences because that
00:15:29.880 --> 00:15:35.560
allows you to to maximize this
00:15:32.240 --> 00:15:36.959
objective um the optimization for this
00:15:35.560 --> 00:15:40.040
is performed using something called the
00:15:36.959 --> 00:15:44.440
EM algorithm where basically you uh
00:15:40.040 --> 00:15:48.560
predict the uh the probability of each
00:15:44.440 --> 00:15:51.600
token showing up and uh then select the
00:15:48.560 --> 00:15:53.279
most common tokens and then trim off the
00:15:51.600 --> 00:15:54.759
ones that are less common and then just
00:15:53.279 --> 00:15:58.120
do this over and over again until you
00:15:54.759 --> 00:15:59.839
drop down to the 60,000 token vocabulary so the
00:15:58.120 --> 00:16:02.040
details for this are not important for
00:15:59.839 --> 00:16:04.160
most people in this class uh because
00:16:02.040 --> 00:16:07.480
you're going to just be using a toolkit
00:16:04.160 --> 00:16:08.880
that implements this for you um but if
00:16:07.480 --> 00:16:10.759
you're interested in this I'm happy to
00:16:08.880 --> 00:16:14.199
talk to you about it
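For the curious, here is a rough sketch of the core computation, segmenting one word so as to maximize the product of unigram token probabilities; the log-probabilities are made up, and in practice a library does this (plus the EM-based vocabulary selection) for you:

import math

log_prob = {'e': -6.0, 's': -6.5, 't': -6.3, 'es': -4.0,
            'est': -3.0, 'new': -3.5, 'newest': -7.5}   # toy unigram log-probs

def best_segmentation(word):
    # best[i] = (score, tokens) for the best segmentation of word[:i]
    best = [(0.0, [])] + [(-math.inf, None)] * len(word)
    for i in range(1, len(word) + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in log_prob and best[j][0] + log_prob[piece] > best[i][0]:
                best[i] = (best[j][0] + log_prob[piece], best[j][1] + [piece])
    return best[len(word)]

print(best_segmentation('newest'))   # with these numbers: (-6.5, ['new', 'est'])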
00:16:10.759 --> 00:16:14.199
yeah is there
00:16:14.680 --> 00:16:18.959
problem Oh in unigram models there's a
00:16:17.199 --> 00:16:20.959
huge problem with assuming Independence
00:16:18.959 --> 00:16:22.720
in language models because then you
00:16:20.959 --> 00:16:25.120
could rearrange the order of words in
00:16:22.720 --> 00:16:26.600
sentences um that that's something we're
00:16:25.120 --> 00:16:27.519
going to talk about in language model
00:16:26.600 --> 00:16:30.560
next
00:16:27.519 --> 00:16:32.839
time but the the good thing about this
00:16:30.560 --> 00:16:34.519
is the EM algorithm requires dynamic
00:16:32.839 --> 00:16:36.079
programming in this case and you can't
00:16:34.519 --> 00:16:37.800
easily do dynamic programming if you
00:16:36.079 --> 00:16:40.160
don't make that
00:16:37.800 --> 00:16:41.880
assumption um and then finally after
00:16:40.160 --> 00:16:43.560
you've picked your vocabulary and you've
00:16:41.880 --> 00:16:45.720
assigned a probability to each word in
00:16:43.560 --> 00:16:47.800
the vocabulary you then find a
00:16:45.720 --> 00:16:49.639
segmentation of the input that maximizes
00:16:47.800 --> 00:16:52.600
the unigram
00:16:49.639 --> 00:16:54.880
probabilities um so this is basically
00:16:52.600 --> 00:16:56.519
the idea of what's going on here um I'm
00:16:54.880 --> 00:16:58.120
not going to go into a lot of detail
00:16:56.519 --> 00:17:00.560
about this because most people are just
00:16:58.120 --> 00:17:02.279
going to be users of this algorithm so
00:17:00.560 --> 00:17:06.240
it's not super super
00:17:02.279 --> 00:17:09.400
important um the one important thing
00:17:06.240 --> 00:17:11.240
about this is that there's a library
00:17:09.400 --> 00:17:15.520
called sentence piece that's used very
00:17:11.240 --> 00:17:19.199
widely in order to build these um in
00:17:15.520 --> 00:17:22.000
order to build these subword units and
00:17:19.199 --> 00:17:23.720
uh basically what you do is you run the
00:17:22.000 --> 00:17:27.600
sentence piece
00:17:23.720 --> 00:17:30.200
train uh model or sorry uh program and
00:17:27.600 --> 00:17:32.640
that gives you uh you select your vocab
00:17:30.200 --> 00:17:34.240
size uh this also this character
00:17:32.640 --> 00:17:36.120
coverage is basically how well do you
00:17:34.240 --> 00:17:39.760
need to cover all of the characters in
00:17:36.120 --> 00:17:41.840
your vocabulary or in your input text um
00:17:39.760 --> 00:17:45.240
what model type do you use and then you
00:17:41.840 --> 00:17:48.640
run this uh sentence piece en code file
00:17:45.240 --> 00:17:51.039
uh to uh encode the output and split the
00:17:48.640 --> 00:17:54.799
output and there's also python bindings
00:17:51.039 --> 00:17:56.240
available for this and by the one thing
00:17:54.799 --> 00:17:57.919
that you should know is by default it
00:17:56.240 --> 00:18:00.600
uses the unigram model but it also
00:17:57.919 --> 00:18:01.960
supports BPE in my experience it doesn't
00:18:00.600 --> 00:18:05.159
make a huge difference about which one
00:18:01.960 --> 00:18:07.640
you use the bigger thing is how um how
00:18:05.159 --> 00:18:10.159
big is your vocabulary size and if your
00:18:07.640 --> 00:18:11.880
vocabulary size is smaller then things
00:18:10.159 --> 00:18:13.760
will be more efficient but less
00:18:11.880 --> 00:18:17.480
expressive if your vocabulary size is
00:18:13.760 --> 00:18:21.280
bigger things will be um will
00:18:17.480 --> 00:18:23.240
be more expressive but less efficient
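Concretely, the workflow just described looks roughly like this through the Python bindings (a sketch; the file names are placeholders and the option values are only examples):

import sentencepiece as spm

# train a subword model: choose vocab size, character coverage, and model type
spm.SentencePieceTrainer.train(
    '--input=corpus.txt --model_prefix=m '
    '--vocab_size=8000 --character_coverage=0.9995 --model_type=unigram')

# load the trained model and segment new text with it
sp = spm.SentencePieceProcessor()
sp.load('m.model')
print(sp.encode_as_pieces('The companies are expanding'))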
00:18:21.280 --> 00:18:25.360
and A good rule of thumb is like
00:18:23.240 --> 00:18:26.960
something like 60,000 to 80,000 is
00:18:25.360 --> 00:18:29.120
pretty reasonable if you're only doing
00:18:26.960 --> 00:18:31.320
English if you're spreading out to
00:18:29.120 --> 00:18:32.600
things that do other languages um which
00:18:31.320 --> 00:18:35.960
I'll talk about in a second then you
00:18:32.600 --> 00:18:38.720
need a much bigger vocabulary
00:18:35.960 --> 00:18:40.559
size so there's two considerations here
00:18:38.720 --> 00:18:42.440
two important considerations when using
00:18:40.559 --> 00:18:46.320
these models uh the first is
00:18:42.440 --> 00:18:48.760
multilinguality as I said so when you're
00:18:46.320 --> 00:18:50.760
using um subword
00:18:48.760 --> 00:18:54.710
models they're hard to use
00:18:50.760 --> 00:18:55.840
multilingually because as I said before
00:18:55.840 --> 00:19:03.799
they give longer strings to more
00:18:59.799 --> 00:19:06.520
frequent strings basically so then
00:19:03.799 --> 00:19:09.559
imagine what happens if 50% of your
00:19:06.520 --> 00:19:11.919
Corpus is English another 30% of your
00:19:09.559 --> 00:19:15.400
Corpus is
00:19:11.919 --> 00:19:17.200
other languages written in Latin script
00:19:15.400 --> 00:19:21.720
10% is
00:19:17.200 --> 00:19:25.480
Chinese uh 5% is Cyrillic script languages
00:19:21.720 --> 00:19:27.240
four 4% is 3% is Japanese and then you
00:19:25.480 --> 00:19:31.080
have like
00:19:27.240 --> 00:19:33.320
0.01% written in like Burmese or
00:19:31.080 --> 00:19:35.520
something like that suddenly Burmese just
00:19:33.320 --> 00:19:37.400
gets chunked up really really tiny
00:19:35.520 --> 00:19:38.360
really long sequences and it doesn't
00:19:37.400 --> 00:19:45.559
work as
00:19:38.360 --> 00:19:45.559
well um so one way that people fix this
00:19:45.919 --> 00:19:50.520
um and actually there's a really nice uh
00:19:48.760 --> 00:19:52.600
blog post about this called exploring
00:19:50.520 --> 00:19:53.760
BERT's vocabulary which I referenced here
00:19:52.600 --> 00:19:58.039
if you're interested in learning more
00:19:53.760 --> 00:20:02.960
about that um but one way that people
00:19:58.039 --> 00:20:05.240
work around this is if your
00:20:02.960 --> 00:20:07.960
actual uh data
00:20:05.240 --> 00:20:11.559
distribution looks like this like
00:20:07.960 --> 00:20:11.559
English uh
00:20:17.039 --> 00:20:23.159
Ty we actually sorry I took out the
00:20:19.280 --> 00:20:23.159
Indian languages in my example
00:20:24.960 --> 00:20:30.159
apologies
00:20:27.159 --> 00:20:30.159
so
00:20:30.400 --> 00:20:35.919
um what you do is you essentially create
00:20:33.640 --> 00:20:40.000
a different distribution that like
00:20:35.919 --> 00:20:43.559
downweights English a little bit and up
00:20:40.000 --> 00:20:47.000
weights up weights all of the other
00:20:43.559 --> 00:20:49.480
languages um so that you get more of
00:20:47.000 --> 00:20:53.159
other languages when creating so this is
00:20:49.480 --> 00:20:53.159
a common work around that you can do for
00:20:54.200 --> 00:20:59.960
this um the
00:20:56.799 --> 00:21:03.000
second problem with these is
00:20:59.960 --> 00:21:08.000
arbitrariness so as you saw in my
00:21:03.000 --> 00:21:11.240
example with BPE e s, s t, and t plus the end of
00:21:08.000 --> 00:21:13.520
word symbol all have the same probability
00:21:11.240 --> 00:21:16.960
or have the same frequency right so if
00:21:13.520 --> 00:21:21.520
we get to that point do we segment es or
00:21:16.960 --> 00:21:25.039
do we segment uh EST or do we segment e
00:21:21.520 --> 00:21:26.559
s and so this is also a problem and it
00:21:25.039 --> 00:21:29.000
actually can affect your results
00:21:26.559 --> 00:21:30.480
especially if you like don't have a
00:21:29.000 --> 00:21:31.760
really strong vocabulary for the
00:21:30.480 --> 00:21:33.279
language you're working in or you're
00:21:31.760 --> 00:21:37.200
working in a new
00:21:33.279 --> 00:21:40.159
domain and so there's a few workarounds
00:21:37.200 --> 00:21:41.520
for this uh one workaround for this is
00:21:40.159 --> 00:21:44.000
uh called subword
00:21:41.520 --> 00:21:46.279
regularization and the way it works is
00:21:44.000 --> 00:21:49.400
instead
00:21:46.279 --> 00:21:51.640
of just having a single segmentation and
00:21:49.400 --> 00:21:54.679
getting the kind of
00:21:51.640 --> 00:21:56.200
maximally probable segmentation or the
00:21:54.679 --> 00:21:58.480
one the greedy one that you get out of
00:21:56.200 --> 00:22:01.360
BP instead you sample different
00:21:58.480 --> 00:22:03.000
segmentations in training time and use
00:22:01.360 --> 00:22:05.720
the different segmentations and that
00:22:03.000 --> 00:22:09.200
makes your model more robust to this
00:22:05.720 --> 00:22:10.840
kind of variation and that's also
00:22:09.200 --> 00:22:15.679
actually the reason why sentence piece
00:22:10.840 --> 00:22:17.919
was released was through this um subword
00:22:15.679 --> 00:22:19.559
regularization paper so that's also
00:22:17.919 --> 00:22:22.720
implemented in sentence piece if that's
00:22:19.559 --> 00:22:22.720
something you're interested in
00:22:24.919 --> 00:22:32.520
trying cool um are there any questions
00:22:28.480 --> 00:22:32.520
or discussions about this
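Sticking with the sentencepiece sketch from earlier, subword regularization amounts to asking for sampled segmentations at training time instead of the single best one (a sketch; parameter values here are arbitrary):

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load('m.model')   # placeholder name for the model trained earlier
for _ in range(3):
    # nbest_size=-1 samples over all candidate segmentations; alpha controls
    # how peaked the sampling distribution is
    print(sp.sample_encode_as_pieces('The companies are expanding', -1, 0.1))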
00:22:53.279 --> 00:22:56.279
yeah
00:22:56.960 --> 00:22:59.960
already
00:23:06.799 --> 00:23:11.080
yeah so this is a good question um just
00:23:08.960 --> 00:23:12.760
to repeat the question it was like let's
00:23:11.080 --> 00:23:16.080
say we have a big
00:23:12.760 --> 00:23:19.640
multilingual um subword
00:23:16.080 --> 00:23:23.440
model and we want to add a new language
00:23:19.640 --> 00:23:26.240
in some way uh how can we reuse the
00:23:23.440 --> 00:23:28.880
existing model but add a new
00:23:26.240 --> 00:23:31.080
language it's a good question if you're
00:23:28.880 --> 00:23:33.679
only using it for subword
00:23:31.080 --> 00:23:36.320
segmentation um one one nice thing about
00:23:33.679 --> 00:23:36.320
the unigram
00:23:36.400 --> 00:23:41.799
model here is this is kind of a
00:23:38.880 --> 00:23:43.679
probabilistic model so it's very easy to
00:23:41.799 --> 00:23:46.360
do the kind of standard things that we
00:23:43.679 --> 00:23:48.240
do with probabilistic models which is
00:23:46.360 --> 00:23:50.559
like let's say we had an
00:23:48.240 --> 00:23:53.919
old uh an
00:23:50.559 --> 00:23:56.880
old vocabulary for
00:23:53.919 --> 00:23:59.880
this um we could just
00:23:56.880 --> 00:23:59.880
interpolate
00:24:07.159 --> 00:24:12.320
um we could interpolate like this and
00:24:09.559 --> 00:24:13.840
just you know uh combine the
00:24:12.320 --> 00:24:17.080
probabilities of the two and then use
00:24:13.840 --> 00:24:19.520
that combine probability in order to
00:24:17.080 --> 00:24:21.320
segment the new language um things like
00:24:19.520 --> 00:24:24.159
this have been uh done before but I
00:24:21.320 --> 00:24:26.159
don't remember the exact references uh
00:24:24.159 --> 00:24:30.440
for them but that that's what I would do
00:24:26.159 --> 00:24:31.960
here another interesting thing is um
00:24:30.440 --> 00:24:35.399
this might be getting a little ahead of
00:24:31.960 --> 00:24:35.399
myself but there's
00:24:48.559 --> 00:24:58.279
a there's a paper that talks about um
00:24:55.360 --> 00:25:00.159
how you can take things that are trained
00:24:58.279 --> 00:25:03.360
with another
00:25:00.159 --> 00:25:05.480
vocabulary and basically the idea is um
00:25:03.360 --> 00:25:09.320
you pre-train on whatever languages you
00:25:05.480 --> 00:25:10.679
have and then uh you learn embeddings in
00:25:09.320 --> 00:25:11.880
the new language you freeze the body of
00:25:10.679 --> 00:25:14.360
the model and learn embeddings in the
00:25:11.880 --> 00:25:15.880
new language so that's another uh method
00:25:14.360 --> 00:25:19.080
that's used it's called on the cross
00:25:15.880 --> 00:25:19.080
lingual transferability of monolingual
00:25:21.840 --> 00:25:26.159
representations and I'll probably talk
00:25:23.840 --> 00:25:28.480
about that in the last class of this uh
00:25:26.159 --> 00:25:30.720
thing so you can remember that
00:25:28.480 --> 00:25:33.720
then cool any other
00:25:30.720 --> 00:25:33.720
questions
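The interpolation idea mentioned above is just a mixture of the two unigram distributions, something like this (a sketch; the dictionaries and mixing weight are placeholders):

lam = 0.5                                  # mixing weight, to be tuned
p_old = {'comp': 0.010, 'an': 0.020}       # unigram probs from the existing model
p_new = {'comp': 0.001, 'xyz': 0.030}      # unigram probs estimated on the new language

p_mixed = {tok: lam * p_old.get(tok, 0.0) + (1 - lam) * p_new.get(tok, 0.0)
           for tok in set(p_old) | set(p_new)}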
00:25:38.480 --> 00:25:42.640
yeah is bag of words a first step to
00:25:41.039 --> 00:25:46.640
process your data if you want to do
00:25:42.640 --> 00:25:49.919
Generation Um do you mean like
00:25:46.640 --> 00:25:52.440
uh a word based model or a subword based
00:25:49.919 --> 00:25:52.440
model
00:25:56.679 --> 00:26:00.480
or like is
00:26:02.360 --> 00:26:08.000
this so the subword segmentation is the
00:26:05.919 --> 00:26:10.640
first step of creating just about any
00:26:08.000 --> 00:26:13.080
model nowadays like every model every
00:26:10.640 --> 00:26:16.600
model uses this and they usually use
00:26:13.080 --> 00:26:21.520
this either to segment characters or
00:26:16.600 --> 00:26:23.559
bytes um characters are like Unicode code
00:26:21.520 --> 00:26:25.799
points so they actually correspond to an
00:26:23.559 --> 00:26:28.279
actual visual character and then bites
00:26:25.799 --> 00:26:31.120
are many unicode characters are like
00:26:28.279 --> 00:26:35.000
three bytes like a Chinese character is
00:26:31.120 --> 00:26:37.159
three bytes if I remember correctly so um
00:26:35.000 --> 00:26:38.640
the byte-based segmentation is nice because
00:26:37.159 --> 00:26:41.240
you don't even need to worry about unic
00:26:38.640 --> 00:26:43.880
code you can just do the like you can
00:26:41.240 --> 00:26:45.640
just segment the pile like literally as
00:26:43.880 --> 00:26:49.440
is and so a lot of people do it that way
00:26:45.640 --> 00:26:53.279
too uh llama as far as I know is
00:26:49.440 --> 00:26:55.720
bytes I believe GPT is also bytes um but
00:26:53.279 --> 00:26:58.799
pre previous to like three or four years
00:26:55.720 --> 00:27:02.799
ago people used characters I
00:26:58.799 --> 00:27:05.000
cool um okay so this is really really
00:27:02.799 --> 00:27:05.919
important it's not like super complex
00:27:05.000 --> 00:27:09.760
and
00:27:05.919 --> 00:27:13.039
practically uh you will just maybe maybe
00:27:09.760 --> 00:27:15.840
train or maybe just use a tokenizer um
00:27:13.039 --> 00:27:18.559
but uh that that's an important thing to
00:27:15.840 --> 00:27:20.760
me cool uh next I'd like to move on to
00:27:18.559 --> 00:27:24.399
continuous word embeddings
00:27:20.760 --> 00:27:26.720
so the basic idea is that previously we
00:27:24.399 --> 00:27:28.240
represented words with a sparse Vector
00:27:26.720 --> 00:27:30.120
uh with a single one
00:27:28.240 --> 00:27:31.960
also known as a one-hot vector so it
00:27:30.120 --> 00:27:35.720
looked a little bit like
00:27:31.960 --> 00:27:37.640
this and instead what continuous word
00:27:35.720 --> 00:27:39.640
embeddings do is they look up a dense
00:27:37.640 --> 00:27:42.320
vector and so you get a dense
00:27:39.640 --> 00:27:45.760
representation where the entire Vector
00:27:42.320 --> 00:27:45.760
has continuous values in
00:27:46.000 --> 00:27:51.919
it and I talked about a bag of words
00:27:49.200 --> 00:27:54.320
model but we could also create a
00:27:51.919 --> 00:27:58.360
continuous bag of words model and the
00:27:54.320 --> 00:28:01.159
way this works is you look up the
00:27:58.360 --> 00:28:03.720
values of each Vector the embeddings of
00:28:01.159 --> 00:28:06.320
each Vector this gives you an embedding
00:28:03.720 --> 00:28:08.440
Vector for the entire sequence and then
00:28:06.320 --> 00:28:15.120
you multiply this by a weight
00:28:08.440 --> 00:28:17.559
Matrix uh where the so this is column so
00:28:15.120 --> 00:28:19.960
the rows of the weight Matrix uh
00:28:17.559 --> 00:28:22.919
correspond to to the size of this
00:28:19.960 --> 00:28:24.760
continuous embedding and The Columns of
00:28:22.919 --> 00:28:28.320
the weight Matrix would correspond to
00:28:24.760 --> 00:28:30.919
the uh overall um
00:28:28.320 --> 00:28:32.559
to the overall uh number of labels that
00:28:30.919 --> 00:28:36.919
you would have here and then that would
00:28:32.559 --> 00:28:40.120
give you scores and so this uh basically
00:28:36.919 --> 00:28:41.679
what this is saying is each Vector now
00:28:40.120 --> 00:28:43.440
instead of having a single thing that
00:28:41.679 --> 00:28:46.799
represents which vocabulary item you're
00:28:43.440 --> 00:28:48.679
looking at uh you would kind of hope
00:28:46.799 --> 00:28:52.120
that you would get vectors where words
00:28:48.679 --> 00:28:54.919
that are similar uh by some notion of
00:28:52.120 --> 00:28:57.760
by some concept of similarity like syntactic
00:28:54.919 --> 00:28:59.679
uh syntax semantics whether they're in
00:28:57.760 --> 00:29:03.120
the same language or not are close in
00:28:59.679 --> 00:29:06.679
the vector space and each Vector element
00:29:03.120 --> 00:29:09.399
is a feature uh so for example each
00:29:06.679 --> 00:29:11.519
Vector element corresponds to is this an
00:29:09.399 --> 00:29:14.960
animate object or is this a positive
00:29:11.519 --> 00:29:17.399
word or other Vector other things like
00:29:14.960 --> 00:29:19.399
that so just to give an example here
00:29:17.399 --> 00:29:21.760
this is totally made up I just made it
00:29:19.399 --> 00:29:24.360
in keynote so it's not natural Vector
00:29:21.760 --> 00:29:26.279
space but to Ill illustrate the concept
00:29:24.360 --> 00:29:27.960
I showed here what if we had a
00:29:26.279 --> 00:29:30.240
two-dimensional vector
00:29:27.960 --> 00:29:33.399
space where the two-dimensional Vector
00:29:30.240 --> 00:29:36.240
space the x-axis here is corresponding to
00:29:33.399 --> 00:29:38.679
whether it's animate or not and the the
00:29:36.240 --> 00:29:41.480
y-axis here is corresponding to whether
00:29:38.679 --> 00:29:44.080
it's like positive sentiment or not and
00:29:41.480 --> 00:29:46.399
so this is kind of like our ideal uh
00:29:44.080 --> 00:29:49.799
goal
00:29:46.399 --> 00:29:52.279
here um so why would we want to do this
00:29:49.799 --> 00:29:52.279
yeah sorry
00:29:56.320 --> 00:30:03.399
guys what do the like in the one it's
00:30:00.919 --> 00:30:06.399
one
00:30:03.399 --> 00:30:06.399
yep
00:30:07.200 --> 00:30:12.519
like so what would the four entries do
00:30:09.880 --> 00:30:14.799
here the four entries here are learned
00:30:12.519 --> 00:30:17.039
so they are um they're learned just
00:30:14.799 --> 00:30:18.519
together with the model um and I'm going
00:30:17.039 --> 00:30:22.120
to talk about exactly how we learn them
00:30:18.519 --> 00:30:24.000
soon but the the final goal is that
00:30:22.120 --> 00:30:25.399
after learning has happened they look
00:30:24.000 --> 00:30:26.799
they have these two properties like
00:30:25.399 --> 00:30:28.600
similar words are close together in the
00:30:26.799 --> 00:30:30.080
vector space
00:30:28.600 --> 00:30:32.640
and
00:30:30.080 --> 00:30:35.679
um that's like number one that's the
00:30:32.640 --> 00:30:37.600
most important and then number two is
00:30:35.679 --> 00:30:39.279
ideally these uh features would have
00:30:37.600 --> 00:30:41.200
some meaning uh maybe human
00:30:39.279 --> 00:30:44.720
interpretable meaning maybe not human
00:30:41.200 --> 00:30:47.880
interpretable meaning but
00:30:44.720 --> 00:30:50.880
yeah so um one thing that I should
00:30:47.880 --> 00:30:53.159
mention is I I showed a contrast between
00:30:50.880 --> 00:30:55.159
the bag of words uh the one hot
00:30:53.159 --> 00:30:57.000
representations here and the dense
00:30:55.159 --> 00:31:00.880
representations here and I used this
00:30:57.000 --> 00:31:03.880
look look up operation for both of them
00:31:00.880 --> 00:31:07.399
and this this lookup
00:31:03.880 --> 00:31:09.559
operation actually um can be viewed as
00:31:07.399 --> 00:31:11.799
grabbing a single Vector from a big
00:31:09.559 --> 00:31:14.919
Matrix of word
00:31:11.799 --> 00:31:17.760
embeddings and
00:31:14.919 --> 00:31:19.760
so the way it can work is like we have
00:31:17.760 --> 00:31:22.919
this big vector and then we look up word
00:31:19.760 --> 00:31:25.919
number two in a zero index Matrix and it
00:31:22.919 --> 00:31:27.799
would just grab this out of that Matrix
00:31:25.919 --> 00:31:29.880
and that's practically what most like
00:31:27.799 --> 00:31:32.240
deep learning libraries or or whatever
00:31:29.880 --> 00:31:35.840
Library you use are going to be
00:31:32.240 --> 00:31:38.000
doing but another uh way you can view it
00:31:35.840 --> 00:31:40.880
is you can view it as multiplying by a
00:31:38.000 --> 00:31:43.880
one hot vector and so you have this
00:31:40.880 --> 00:31:48.679
Vector uh exactly the same Matrix uh but
00:31:43.880 --> 00:31:50.799
you just multiply by a vector uh 0 0 1 0
00:31:48.679 --> 00:31:55.720
and that gives you exactly the same
00:31:50.799 --> 00:31:58.200
things um so the Practical imple
00:31:55.720 --> 00:31:59.720
implementations of this uh uh tend to be
00:31:58.200 --> 00:32:01.279
the first one because the first one's a
00:31:59.720 --> 00:32:04.679
lot faster to implement you don't need
00:32:01.279 --> 00:32:06.760
to multiply like this big thing by a
00:32:04.679 --> 00:32:11.000
huge Vector but there
00:32:06.760 --> 00:32:13.880
are advantages of knowing the second one
00:32:11.000 --> 00:32:15.519
uh just to give an example what if you
00:32:13.880 --> 00:32:19.600
for whatever reason you came up with
00:32:15.519 --> 00:32:21.440
like an a crazy model that predicts a
00:32:19.600 --> 00:32:24.120
probability distribution over words
00:32:21.440 --> 00:32:25.720
instead of just words maybe it's a
00:32:24.120 --> 00:32:27.679
language model that has an idea of what
00:32:25.720 --> 00:32:30.200
the next word is going to look like
00:32:27.679 --> 00:32:32.159
and maybe your um maybe your model
00:32:30.200 --> 00:32:35.279
thinks the next word has a 50%
00:32:32.159 --> 00:32:36.600
probability of being cat 30%
00:32:35.279 --> 00:32:42.279
probability of being
00:32:36.600 --> 00:32:44.960
dog and uh 2% probability uh sorry uh
00:32:42.279 --> 00:32:47.200
20% probability being
00:32:44.960 --> 00:32:50.000
bird you can take this vector and
00:32:47.200 --> 00:32:51.480
multiply it by The Matrix and get like a
00:32:50.000 --> 00:32:53.639
word embedding that's kind of a mix of
00:32:51.480 --> 00:32:55.639
all of those words which might be
00:32:53.639 --> 00:32:57.960
interesting and let you do creative
00:32:55.639 --> 00:33:02.120
things so um knowing that these two
00:32:57.960 --> 00:33:05.360
things are the same are the same is kind
00:33:02.120 --> 00:33:05.360
of useful for that kind of
00:33:05.919 --> 00:33:11.480
thing um any any questions about this
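A small sketch of the two equivalent views of the lookup, the probability-mixture trick, and the continuous bag-of-words score described above (all sizes and numbers made up):

import numpy as np

vocab_size, emb_size, n_labels = 5, 4, 3
E = np.random.randn(vocab_size, emb_size)       # word embedding matrix
W = np.random.randn(emb_size, n_labels)         # weight matrix: embedding size x labels

# view 1: lookup = grab one row of the matrix (word number 2, zero-indexed)
v_lookup = E[2]

# view 2: multiply a one-hot vector by the same matrix; identical result
one_hot = np.zeros(vocab_size)
one_hot[2] = 1.0
assert np.allclose(v_lookup, one_hot @ E)

# the one-hot view generalizes: a distribution over words gives a mixture embedding
probs = np.array([0.0, 0.5, 0.3, 0.2, 0.0])     # e.g. 50% cat, 30% dog, 20% bird
v_mixture = probs @ E

# continuous bag of words: sum the embeddings of the words in the sentence,
# then multiply by the weight matrix to get one score per label
sentence_ids = [2, 0, 3]
scores = sum(E[i] for i in sentence_ids) @ W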
00:33:09.120 --> 00:33:13.919
I'm going to talk about how we train next so
00:33:11.480 --> 00:33:18.159
maybe maybe I can go into
00:33:13.919 --> 00:33:23.159
that okay cool so how do we get the
00:33:18.159 --> 00:33:25.840
vectors uh like the question uh so up
00:33:23.159 --> 00:33:27.519
until now we trained a bag of words
00:33:25.840 --> 00:33:29.080
model and the way we trained a bag of
00:33:27.519 --> 00:33:31.159
words model was using the structured
00:33:29.080 --> 00:33:35.440
perceptron algorithm where if the model
00:33:31.159 --> 00:33:39.639
got the answer wrong we would either
00:33:35.440 --> 00:33:42.799
increment or decrement the embeddings
00:33:39.639 --> 00:33:45.080
based on whether uh whether the label
00:33:42.799 --> 00:33:46.559
was positive or negative right so I
00:33:45.080 --> 00:33:48.919
showed an example of this very simple
00:33:46.559 --> 00:33:51.039
algorithm you don't even uh need to
00:33:48.919 --> 00:33:52.480
write any like numpy or anything like
00:33:51.039 --> 00:33:55.919
that to implement that
00:33:52.480 --> 00:33:59.559
algorithm uh so here here it is so we
00:33:55.919 --> 00:34:02.320
have like for x, y in uh data we extract
00:33:59.559 --> 00:34:04.639
the features we run the classifier uh we
00:34:02.320 --> 00:34:07.440
have the predicted y and then we
00:34:04.639 --> 00:34:09.480
increment or decrement
00:34:07.440 --> 00:34:12.679
features but how do we train more
00:34:09.480 --> 00:34:15.599
complex models so I think most people
00:34:12.679 --> 00:34:17.079
here have taken a uh machine learning
00:34:15.599 --> 00:34:19.159
class of some kind so this will be
00:34:17.079 --> 00:34:21.079
reviewed for a lot of people uh but
00:34:19.159 --> 00:34:22.280
basically we do this uh by doing
00:34:21.079 --> 00:34:24.839
gradient
00:34:22.280 --> 00:34:27.240
descent and in order to do so we write
00:34:24.839 --> 00:34:29.919
down a loss function calculate the
00:34:27.240 --> 00:34:30.919
derivatives of the loss function with
00:34:29.919 --> 00:34:35.079
respect to the
00:34:30.919 --> 00:34:37.320
parameters and move uh the parameters in
00:34:35.079 --> 00:34:40.839
the direction that reduces the loss
00:34:37.320 --> 00:34:42.720
function and so specifically for this bag
00:34:40.839 --> 00:34:45.560
of words or continuous bag of words
00:34:42.720 --> 00:34:48.240
model um we want this loss function
00:34:45.560 --> 00:34:50.839
to be a loss function that gets lower as
00:34:48.240 --> 00:34:52.240
the model gets better and I'm going to
00:34:50.839 --> 00:34:54.000
give two examples from binary
00:34:52.240 --> 00:34:57.400
classification both of these are used in
00:34:54.000 --> 00:34:58.839
NLP models uh reasonably frequently
00:34:57.400 --> 00:35:01.440
uh there's a bunch of other loss
00:34:58.839 --> 00:35:02.800
functions but these are kind of the two
00:35:01.440 --> 00:35:05.480
major
00:35:02.800 --> 00:35:08.160
ones so the first one um which is
00:35:05.480 --> 00:35:10.160
actually less frequent is the hinge loss
00:35:08.160 --> 00:35:13.400
and then the second one is taking a
00:35:10.160 --> 00:35:15.800
sigmoid and then doing negative log
00:35:13.400 --> 00:35:19.760
likelyhood so the hinge loss basically
00:35:15.800 --> 00:35:22.760
what we do is we uh take the max of the
00:35:19.760 --> 00:35:26.119
label times the score that is output by
00:35:22.760 --> 00:35:29.200
the model and zero and what this looks
00:35:26.119 --> 00:35:33.480
like is we have a hinged loss uh where
00:35:29.200 --> 00:35:36.880
if Y is equal to one the loss if Y is
00:35:33.480 --> 00:35:39.520
greater than zero is zero so as long as
00:35:36.880 --> 00:35:42.680
we get basically as long as we get the
00:35:39.520 --> 00:35:45.079
answer right there's no loss um as the
00:35:42.680 --> 00:35:47.400
answer gets more wrong the loss gets
00:35:45.079 --> 00:35:49.880
worse like this and then similarly if
00:35:47.400 --> 00:35:53.160
the label is negative if we get a
00:35:49.880 --> 00:35:54.839
negative score uh then we get zero loss
00:35:53.160 --> 00:35:55.800
and the loss increases if we have a
00:35:54.839 --> 00:35:58.800
positive
00:35:55.800 --> 00:36:00.800
score so the sigmoid plus negative log
00:35:58.800 --> 00:36:05.440
likelihood the way this works is you
00:36:00.800 --> 00:36:07.400
multiply y * the score here and um then
00:36:05.440 --> 00:36:09.960
we have the sigmoid function which is
00:36:07.400 --> 00:36:14.079
just kind of a nice function that looks
00:36:09.960 --> 00:36:15.440
like this with zero and one centered
00:36:14.079 --> 00:36:19.480
around
00:36:15.440 --> 00:36:21.240
zero and then we take the negative log
00:36:19.480 --> 00:36:22.319
of this sigmoid function or the negative
00:36:21.240 --> 00:36:27.160
log
00:36:22.319 --> 00:36:28.520
likelihood and that gives us a uh loss that
00:36:27.160 --> 00:36:30.440
looks a little bit like this so
00:36:28.520 --> 00:36:32.640
basically you can see that these look
00:36:30.440 --> 00:36:36.040
very similar right the difference being
00:36:32.640 --> 00:36:37.760
that the hinge loss is uh sharp and we
00:36:36.040 --> 00:36:41.119
get exactly a zero loss if we get the
00:36:37.760 --> 00:36:44.319
answer right and the sigmoid is smooth
00:36:41.119 --> 00:36:48.440
uh and we never get a zero
00:36:44.319 --> 00:36:50.680
loss um so does anyone have an idea of
00:36:48.440 --> 00:36:53.119
the benefits and disadvantages of
00:36:50.680 --> 00:36:55.680
these I kind of flashed one on the
00:36:53.119 --> 00:36:57.599
screen already
00:36:55.680 --> 00:36:59.400
but
00:36:57.599 --> 00:37:01.359
so I flash that on the screen so I'll
00:36:59.400 --> 00:37:03.680
give this one and then I can have a quiz
00:37:01.359 --> 00:37:06.319
about the sigmoid but the the hinge loss
00:37:03.680 --> 00:37:07.720
is more closely linked to accuracy and
00:37:06.319 --> 00:37:10.400
the reason why it's more closely linked
00:37:07.720 --> 00:37:13.640
to accuracy is because basically we will
00:37:10.400 --> 00:37:16.079
get a zero loss if the model gets the
00:37:13.640 --> 00:37:18.319
answer right so when the model gets all
00:37:16.079 --> 00:37:20.240
of the answers right we will just stop
00:37:18.319 --> 00:37:22.760
updating our model whatsoever because we
00:37:20.240 --> 00:37:25.440
never we don't have any loss whatsoever
00:37:22.760 --> 00:37:27.720
and the gradient of the loss is zero um
00:37:25.440 --> 00:37:29.960
what about the sigmoid uh a negative log
00:37:27.720 --> 00:37:33.160
likelihood uh there there's kind of two
00:37:29.960 --> 00:37:36.160
major advantages of this anyone want to
00:37:33.160 --> 00:37:36.160
review their machine learning
00:37:38.240 --> 00:37:41.800
test sorry what was
00:37:43.800 --> 00:37:49.960
that for for R uh yeah maybe there's a
00:37:48.200 --> 00:37:51.319
more direct I think I know what you're
00:37:49.960 --> 00:37:54.560
saying but maybe there's a more direct
00:37:51.319 --> 00:37:54.560
way to say that um
00:37:54.839 --> 00:38:00.760
yeah yeah so the gradient is nonzero
00:37:57.560 --> 00:38:04.240
everywhere and uh the gradient also kind
00:38:00.760 --> 00:38:05.839
of increases as your score gets worse so
00:38:04.240 --> 00:38:08.440
those are that's one advantage it makes
00:38:05.839 --> 00:38:11.240
it easier to optimize models um another
00:38:08.440 --> 00:38:13.839
one linked to the ROC score but maybe we
00:38:11.240 --> 00:38:13.839
could say it more
00:38:16.119 --> 00:38:19.400
directly any
00:38:20.040 --> 00:38:26.920
ideas okay um basically the sigmoid can
00:38:23.240 --> 00:38:30.160
be interpreted as a probability so um if
00:38:26.920 --> 00:38:32.839
the the sigmoid is between Zer and one
00:38:30.160 --> 00:38:34.640
uh and because it's between zero and one
00:38:32.839 --> 00:38:36.720
we can say the sigmoid is a
00:38:34.640 --> 00:38:38.640
probability um and that can be useful
00:38:36.720 --> 00:38:40.119
for various things like if we want a
00:38:38.640 --> 00:38:41.960
downstream model or if we want a
00:38:40.119 --> 00:38:45.480
confidence prediction out of the model
00:38:41.960 --> 00:38:48.200
so those are two uh advantages of using
00:38:45.480 --> 00:38:49.920
a s plus negative log likelihood there's
00:38:48.200 --> 00:38:53.160
no probabilistic interpretation to
00:38:49.920 --> 00:38:56.560
something trained with the hinge loss
00:38:53.160 --> 00:38:59.200
basically cool um so the next thing that
00:38:56.560 --> 00:39:01.240
that we do is we calculate derivatives
00:38:59.200 --> 00:39:04.040
and we calculate the derivative of the
00:39:01.240 --> 00:39:05.920
loss function with respect to the parameters um to
00:39:04.040 --> 00:39:09.839
give an example of the bag of words
00:39:05.920 --> 00:39:13.480
model and the hinge loss um the hinge
00:39:09.839 --> 00:39:16.480
loss as I said is the max of the score
00:39:13.480 --> 00:39:19.359
times y and zero in the bag of words model
00:39:16.480 --> 00:39:22.640
the score was the frequency of that
00:39:19.359 --> 00:39:25.880
vocabulary item in the input multiplied
00:39:22.640 --> 00:39:27.680
by the weight here and so if we this is
00:39:25.880 --> 00:39:29.520
a simple a function that I can just do
00:39:27.680 --> 00:39:34.440
the derivative by hand and if I do the
00:39:29.520 --> 00:39:36.920
deriva by hand what comes out is if y *
00:39:34.440 --> 00:39:39.319
this value is greater than zero so in
00:39:36.920 --> 00:39:44.640
other words if this Max uh picks this
00:39:39.319 --> 00:39:48.319
instead of this then the derivative is y
00:39:44.640 --> 00:39:52.359
times the frequency and otherwise uh it
00:39:48.319 --> 00:39:52.359
is in the opposite
00:39:55.400 --> 00:40:00.160
direction
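Written out as code, the two losses and the hand-derived hinge gradient for the bag-of-words score look roughly like this (a sketch; labels y are +1/-1, and the exact sign convention depends on how the score is written on the slide):

import numpy as np

def hinge_loss(y, s):
    # zero loss whenever the score has the same sign as the label,
    # growing linearly as the score gets more wrong
    return max(0.0, -y * s)

def sigmoid_nll_loss(y, s):
    # -log(sigmoid(y * s)); smooth, never exactly zero, and sigmoid(s)
    # can be read as a probability
    return np.log1p(np.exp(-y * s))

def hinge_gradient(y, freq, w):
    # derivative of the hinge loss with respect to the weights for s = freq . w:
    # zero once the example is classified correctly, -y * freq otherwise
    s = freq @ w
    return np.zeros_like(w) if y * s > 0 else -y * freq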
00:39:56.920 --> 00:40:02.839
then uh optimizing gradients uh we do
00:40:00.160 --> 00:40:06.200
standard uh in standard stochastic
00:40:02.839 --> 00:40:07.839
gradient descent uh which is the most
00:40:06.200 --> 00:40:10.920
standard optimization algorithm for
00:40:07.839 --> 00:40:14.440
these models uh we basically have a
00:40:10.920 --> 00:40:17.440
gradient over uh you take the gradient
00:40:14.440 --> 00:40:20.040
over the parameter of the loss function
00:40:17.440 --> 00:40:22.480
and we call it GT so here um sorry I
00:40:20.040 --> 00:40:25.599
switched my terminology between W and
00:40:22.480 --> 00:40:28.280
Theta so this could be W uh the previous
00:40:25.599 --> 00:40:31.000
value of w
00:40:28.280 --> 00:40:35.440
um and this is the gradient of the loss
00:40:31.000 --> 00:40:37.040
and then uh we take the previous value
00:40:35.440 --> 00:40:39.680
and then we subtract out the learning
00:40:37.040 --> 00:40:39.680
rate times the
00:40:40.680 --> 00:40:45.720
gradient and uh there are many many
00:40:43.200 --> 00:40:47.280
other optimization options uh I'll cover
00:40:45.720 --> 00:40:50.960
the more frequent one called Adam at the
00:40:47.280 --> 00:40:54.319
end of this uh this lecture but um this
00:40:50.960 --> 00:40:57.160
is the basic way of optimizing the
00:40:54.319 --> 00:41:00.599
model.
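The update just described, as a one-line sketch (`loss_gradient` stands in for whichever loss you picked above):

def sgd_step(w, x, y, loss_gradient, learning_rate=0.1):
    g = loss_gradient(w, x, y)      # g_t: gradient of the loss at the previous parameters
    return w - learning_rate * g    # w_t = w_{t-1} - eta * g_t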
00:40:57.160 --> 00:41:03.359
then my question now is what is this
00:41:00.599 --> 00:41:07.000
algorithm with respect
00:41:03.359 --> 00:41:10.119
to this is an algorithm that is
00:41:07.000 --> 00:41:12.280
taking that has a loss function it's
00:41:10.119 --> 00:41:14.079
calculating derivatives and it's
00:41:12.280 --> 00:41:17.240
optimizing gradients using stochastic
00:41:14.079 --> 00:41:18.839
gradient descent so does anyone have a
00:41:17.240 --> 00:41:20.960
guess about what the loss function is
00:41:18.839 --> 00:41:23.520
here and maybe what is the learning rate
00:41:20.960 --> 00:41:23.520
of stochastic
00:41:24.319 --> 00:41:29.480
gradient I kind of gave you a hint about
00:41:26.599 --> 00:41:29.480
the loss one
00:41:31.640 --> 00:41:37.839
actually and just to recap what this is
00:41:34.440 --> 00:41:41.440
doing here it's um if predicted Y is
00:41:37.839 --> 00:41:44.560
equal to Y then it is moving the uh the
00:41:41.440 --> 00:41:48.240
feature weights in the direction of Y
00:41:44.560 --> 00:41:48.240
times the frequency
00:41:52.599 --> 00:41:56.960
Vector
00:41:55.240 --> 00:41:59.079
yeah
00:41:56.960 --> 00:42:01.640
yeah exactly so the loss function is
00:41:59.079 --> 00:42:05.800
hinge loss and the learning rate is one
00:42:01.640 --> 00:42:07.880
um and just to show how that you know
00:42:05.800 --> 00:42:12.359
corresponds we have this if statement
00:42:07.880 --> 00:42:12.359
here and we have the increment of the
00:42:12.960 --> 00:42:20.240
features and this is what the um what
00:42:16.920 --> 00:42:21.599
the loss sorry the derivative looked like
00:42:20.240 --> 00:42:24.240
so we have
00:42:21.599 --> 00:42:26.920
if this is moving in the right direction
00:42:24.240 --> 00:42:29.520
for the label uh then we increment
00:42:26.920 --> 00:42:31.599
otherwise we do nothing so
00:42:29.520 --> 00:42:33.559
basically you can see that even this
00:42:31.599 --> 00:42:35.200
really simple algorithm that I you know
00:42:33.559 --> 00:42:37.480
implemented with a few lines of python
00:42:35.200 --> 00:42:38.839
is essentially equivalent to this uh
00:42:37.480 --> 00:42:40.760
stochastic gradient descent that we're
00:42:38.839 --> 00:42:44.559
doing when we train these models
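To spell the equivalence out, here is a sketch of that few-lines-of-Python update written so the connection to SGD on the hinge loss with learning rate 1 is visible (dictionary-based features, illustrative names, not the exact course code):

    def perceptron_style_update(weights, features, y):
        # features: dict mapping feature name -> count; y is +1 or -1
        score = sum(weights.get(name, 0.0) * count for name, count in features.items())
        if y * score <= 0:                                     # the hinge loss is active
            for name, count in features.items():
                # w <- w - 1 * (-y * count): an SGD step with learning rate 1
                weights[name] = weights.get(name, 0.0) + y * count
        return weights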
00:42:40.760 --> 00:42:46.359
so the good news about this is
00:42:44.559 --> 00:42:48.359
you know this this is really simple but
00:42:46.359 --> 00:42:50.599
it only really works for like a bag of
00:42:48.359 --> 00:42:55.400
words model or a simple feature based
00:42:50.599 --> 00:42:57.200
model uh but it opens up a lot of uh new
00:42:55.400 --> 00:43:00.440
possibilities for how we can optimize
00:42:57.200 --> 00:43:01.599
models and in particular I mentioned uh
00:43:00.440 --> 00:43:04.839
that there was a problem with
00:43:01.599 --> 00:43:08.200
combination features last class like
00:43:04.839 --> 00:43:11.200
don't hate and don't love are not just
00:43:08.200 --> 00:43:12.760
you know hate plus don't and love plus
00:43:11.200 --> 00:43:14.119
don't it's actually the combination of
00:43:12.760 --> 00:43:17.680
the two is really
00:43:14.119 --> 00:43:20.160
important and so um yeah just to give an
00:43:17.680 --> 00:43:23.440
example we have "don't love" is maybe bad
00:43:20.160 --> 00:43:26.960
uh "nothing I don't love" is very
00:43:23.440 --> 00:43:30.960
good and so in order
00:43:26.960 --> 00:43:34.040
to solve this problem we turn to neural
00:43:30.960 --> 00:43:37.160
networks and the way we do this is we
00:43:34.040 --> 00:43:39.119
have a lookup of dense embeddings sorry
00:43:37.160 --> 00:43:41.839
I actually I just realized my coloring
00:43:39.119 --> 00:43:44.119
is off I was using red to indicate dense
00:43:41.839 --> 00:43:46.480
embeddings so this should be maybe red
00:43:44.119 --> 00:43:49.319
instead of blue but um we take these
00:43:46.480 --> 00:43:51.200
dense embeddings and then we create
00:43:49.319 --> 00:43:53.720
some complicated function to extract
00:43:51.200 --> 00:43:55.079
combination features um and then use
00:43:53.720 --> 00:43:57.359
those to calculate
00:43:55.079 --> 00:44:02.200
scores
00:43:57.359 --> 00:44:04.480
um and so we calculate these combination
00:44:02.200 --> 00:44:08.240
features and what we want to do is we
00:44:04.480 --> 00:44:12.880
want to extract vectors from the input
00:44:08.240 --> 00:44:12.880
where each Vector has features
00:44:15.839 --> 00:44:21.040
um sorry this is in the wrong order so
00:44:18.240 --> 00:44:22.559
I'll I'll get back to this um so this
00:44:21.040 --> 00:44:25.319
this was talking about the
00:44:22.559 --> 00:44:27.200
Continuous bag of words features so the
00:44:25.319 --> 00:44:30.960
problem with the continuous bag of words
00:44:27.200 --> 00:44:30.960
features was we were extracting
00:44:31.359 --> 00:44:36.359
features
00:44:33.079 --> 00:44:36.359
um like
00:44:36.839 --> 00:44:41.400
this but then we were directly using the
00:44:39.760 --> 00:44:43.359
the feature the dense features that we
00:44:41.400 --> 00:44:45.559
extracted to make predictions without
00:44:43.359 --> 00:44:48.839
actually allowing for any interactions
00:44:45.559 --> 00:44:51.839
between the features um and
00:44:48.839 --> 00:44:55.160
so uh neural networks the way we fix
00:44:51.839 --> 00:44:57.079
this is we first extract these features
00:44:55.160 --> 00:44:59.440
uh we take these these features of each
00:44:57.079 --> 00:45:04.000
word embedding and then we run them
00:44:59.440 --> 00:45:07.240
through uh kind of linear transforms
00:45:04.000 --> 00:45:09.880
like matrix multiplications
00:45:07.240 --> 00:45:10.880
and then nonlinear transforms to extract
00:45:09.880 --> 00:45:13.920
additional
00:45:10.880 --> 00:45:15.839
features and uh finally run this through
00:45:13.920 --> 00:45:18.640
several layers and then use the
00:45:15.839 --> 00:45:21.119
resulting features to make our
00:45:18.640 --> 00:45:23.200
predictions and when we do this this
00:45:21.119 --> 00:45:25.319
allows us to do more uh interesting
00:45:23.200 --> 00:45:28.319
things so like for example we could
00:45:25.319 --> 00:45:30.000
learn feature combination a node in the
00:45:28.319 --> 00:45:32.599
second layer might be feature one and
00:45:30.000 --> 00:45:35.240
feature five are active so that could be
00:45:32.599 --> 00:45:38.680
like feature one corresponds to negative
00:45:35.240 --> 00:45:43.640
sentiment words like hate
00:45:38.680 --> 00:45:45.839
despise um and other things like that so
00:45:43.640 --> 00:45:50.079
for hate and despise feature one would
00:45:45.839 --> 00:45:53.119
have a high value like 8.0 and then
00:45:50.079 --> 00:45:55.480
7.2 and then we also have negation words
00:45:53.119 --> 00:45:57.040
like don't or not or something like that
00:45:55.480 --> 00:46:00.040
and those would
00:45:57.040 --> 00:46:00.040
have
00:46:03.720 --> 00:46:08.640
don't would have a high value for like feature
00:46:11.880 --> 00:46:15.839
five and so these would be the word
00:46:14.200 --> 00:46:18.040
embeddings where each word embedding
00:46:15.839 --> 00:46:20.599
corresponded to you know features of the
00:46:18.040 --> 00:46:23.480
words and
00:46:20.599 --> 00:46:25.480
then um after that we would extract
00:46:23.480 --> 00:46:29.319
feature combinations in this second
00:46:25.480 --> 00:46:32.079
layer that say oh we see at least one
00:46:29.319 --> 00:46:33.760
word where the first feature is active
00:46:32.079 --> 00:46:36.359
and we see at least one word where the
00:46:33.760 --> 00:46:37.920
fifth feature is active so now that
00:46:36.359 --> 00:46:40.640
allows us to capture the fact that we
00:46:37.920 --> 00:46:42.319
saw like don't hate or don't despise or
00:46:40.640 --> 00:46:44.559
not hate or not despise or something
00:46:42.319 --> 00:46:44.559
like
00:46:45.079 --> 00:46:51.760
that so this is the way uh kind of this
00:46:49.680 --> 00:46:54.839
is a deep uh continuous bag of words
00:46:51.760 --> 00:46:56.839
model um this actually was proposed in
00:46:54.839 --> 00:46:58.119
2015 I don't think I have the
00:46:56.839 --> 00:47:02.599
reference on the slide but I think it's
00:46:58.119 --> 00:47:05.040
in the notes um on the website and
00:47:02.599 --> 00:47:07.200
actually at that point in time
00:47:05.040 --> 00:47:09.200
there were several interesting
00:47:07.200 --> 00:47:11.960
results that showed that even this like
00:47:09.200 --> 00:47:13.960
really simple model did really well uh
00:47:11.960 --> 00:47:16.319
at text classification and other simple
00:47:13.960 --> 00:47:18.640
tasks like that because it was able to
00:47:16.319 --> 00:47:21.720
you know share features of the words and
00:47:18.640 --> 00:47:23.800
then extract combinations of the
00:47:21.720 --> 00:47:28.200
features
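As a concrete picture of this kind of model, here is a minimal PyTorch sketch of a deep continuous bag-of-words classifier; the layer sizes and names are illustrative, not the exact code from the course repository:

    import torch
    import torch.nn as nn

    class DeepCBoW(nn.Module):
        def __init__(self, vocab_size, emb_size, hidden_size, num_layers, num_classes):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_size)   # dense word embeddings
            layers, in_size = [], emb_size
            for _ in range(num_layers):
                layers += [nn.Linear(in_size, hidden_size), nn.Tanh()]   # linear then nonlinear
                in_size = hidden_size
            self.hidden = nn.Sequential(*layers)          # extracts combination features
            self.output = nn.Linear(in_size, num_classes)

        def forward(self, word_ids):                       # word_ids: 1-D tensor of word indices
            summed = self.embedding(word_ids).sum(dim=0)   # continuous bag of words
            return self.output(self.hidden(summed))        # scores for each class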
00:47:23.800 --> 00:47:29.760
so um in order to learn these we
00:47:28.200 --> 00:47:30.920
need to start turning to neural networks
00:47:29.760 --> 00:47:34.400
and the reason why we need to start
00:47:30.920 --> 00:47:38.040
turning to neural networks is
00:47:34.400 --> 00:47:41.920
because while I can calculate the loss
00:47:38.040 --> 00:47:43.280
function of the while I can calculate
00:47:41.920 --> 00:47:44.839
the derivative of the hinge loss for
00:47:43.280 --> 00:47:47.720
a bag of words model by hand I
00:47:44.839 --> 00:47:49.359
definitely don't I probably could but
00:47:47.720 --> 00:47:51.240
don't want to do it for a model that
00:47:49.359 --> 00:47:53.200
starts to become as complicated as this
00:47:51.240 --> 00:47:57.440
with multiple Matrix multiplications
00:47:53.200 --> 00:48:00.520
and nonlinearities and stuff like that so the way we
00:47:57.440 --> 00:48:05.000
do this just a very brief uh coverage of
00:48:00.520 --> 00:48:06.200
this uh for because um I think probably
00:48:05.000 --> 00:48:08.400
a lot of people have dealt with neural
00:48:06.200 --> 00:48:10.200
networks before um the original
00:48:08.400 --> 00:48:12.880
motivation was that we had neurons in
00:48:10.200 --> 00:48:16.160
the brain uh where
00:48:12.880 --> 00:48:18.839
each of the neuron synapses took in
00:48:16.160 --> 00:48:21.480
an electrical signal and once they got
00:48:18.839 --> 00:48:24.079
enough electrical signal they would fire
00:48:21.480 --> 00:48:25.960
um but now the current conception of
00:48:24.079 --> 00:48:28.160
neural networks or deep learning models
00:48:25.960 --> 00:48:30.440
is basically computation
00:48:28.160 --> 00:48:32.400
graphs and the way a computation graph
00:48:30.440 --> 00:48:34.760
Works um and I'm especially going to
00:48:32.400 --> 00:48:36.240
talk about the way it works in natural
00:48:34.760 --> 00:48:38.119
language processing which might be a
00:48:36.240 --> 00:48:42.319
contrast to the way it works in computer
00:48:38.119 --> 00:48:43.960
vision is um we have an expression uh
00:48:42.319 --> 00:48:46.480
that looks like this and maybe maybe
00:48:43.960 --> 00:48:47.640
it's the expression X corresponding to
00:48:46.480 --> 00:48:51.880
uh a
00:48:47.640 --> 00:48:53.400
scalar um and each node corresponds to
00:48:51.880 --> 00:48:55.599
something like a tensor a matrix a
00:48:53.400 --> 00:48:57.599
vector a scalar so a scalar is uh
00:48:55.599 --> 00:49:00.480
kind of Zero Dimensional it's a single
00:48:57.599 --> 00:49:01.720
value one dimensional two dimensional or
00:49:00.480 --> 00:49:04.200
arbitrary
00:49:01.720 --> 00:49:06.040
dimensional um and then we also have
00:49:04.200 --> 00:49:08.000
nodes that correspond to the result of
00:49:06.040 --> 00:49:11.480
function applications so if we have X be
00:49:08.000 --> 00:49:14.079
a vector uh we take the vector transpose
00:49:11.480 --> 00:49:18.160
and so each Edge represents a function
00:49:14.079 --> 00:49:20.559
argument and also a data
00:49:18.160 --> 00:49:23.960
dependency and a node with an incoming
00:49:20.559 --> 00:49:27.000
Edge is a function of that Edge's tail
00:49:23.960 --> 00:49:29.040
node and importantly each node knows how
00:49:27.000 --> 00:49:30.640
to compute its value and the value of
00:49:29.040 --> 00:49:32.640
its derivative with respect to each
00:49:30.640 --> 00:49:34.440
argument times the derivative of an
00:49:32.640 --> 00:49:37.920
arbitrary
00:49:34.440 --> 00:49:41.000
input and functions could be basically
00:49:37.920 --> 00:49:45.400
arbitrary functions they can be unary
00:49:41.000 --> 00:49:49.440
binary or n-ary often unary or binary
00:49:45.400 --> 00:49:52.400
and computation graphs are directed and
00:49:49.440 --> 00:49:57.040
acyclic and um one important thing to
00:49:52.400 --> 00:50:00.640
note is that you can um have multiple
00:49:57.040 --> 00:50:02.559
ways of expressing the same function so
00:50:00.640 --> 00:50:04.839
this is actually really important as you
00:50:02.559 --> 00:50:06.920
start implementing things and the reason
00:50:04.839 --> 00:50:09.359
why is the left graph and the right
00:50:06.920 --> 00:50:12.960
graph both express the same thing the
00:50:09.359 --> 00:50:18.640
left graph expresses X
00:50:12.960 --> 00:50:22.559
transpose times A times X whereas
00:50:18.640 --> 00:50:27.160
this one has x and A and then it puts them
00:50:22.559 --> 00:50:28.760
into a single node that computes X transpose A x
00:50:27.160 --> 00:50:30.319
and so these Express exactly the same
00:50:28.760 --> 00:50:32.319
thing but the graph on the left is
00:50:30.319 --> 00:50:33.760
larger and the reason why this is
00:50:32.319 --> 00:50:38.920
important is for practical
00:50:33.760 --> 00:50:40.359
implementation of neural networks um you
00:50:38.920 --> 00:50:43.200
the larger graphs are going to take more
00:50:40.359 --> 00:50:46.799
memory and going to be slower usually
00:50:43.200 --> 00:50:48.200
and so often um in a neural network we
00:50:46.799 --> 00:50:49.559
look at like PyTorch which we're going
00:50:48.200 --> 00:50:52.160
to look at in a
00:50:49.559 --> 00:50:55.520
second
00:50:52.160 --> 00:50:57.920
um you will have something you will be
00:50:55.520 --> 00:50:57.920
able to
00:50:58.680 --> 00:51:01.680
do
00:51:03.079 --> 00:51:07.880
this or you'll be able to do
00:51:18.760 --> 00:51:22.880
like
00:51:20.359 --> 00:51:24.839
this so these are two different options
00:51:22.880 --> 00:51:26.920
this one is using more operations and
00:51:24.839 --> 00:51:29.559
this one is using fewer operations
00:51:26.920 --> 00:51:31.000
and this is going to be faster because
00:51:29.559 --> 00:51:33.119
basically the implementation within
00:51:31.000 --> 00:51:34.799
PyTorch will have been optimized for you
00:51:33.119 --> 00:51:36.799
it will only require one graph node
00:51:34.799 --> 00:51:37.880
instead of multiple graph nodes and
00:51:36.799 --> 00:51:39.799
that's even more important when you
00:51:37.880 --> 00:51:41.040
start talking about like attention or
00:51:39.799 --> 00:51:43.920
something like that which we're going to
00:51:41.040 --> 00:51:46.079
be covering very soon um attention
00:51:43.920 --> 00:51:47.359
or multi-head attention or something
00:51:46.079 --> 00:51:49.839
like that is a very complicated
00:51:47.359 --> 00:51:52.079
operation so you want to make sure that
00:51:49.839 --> 00:51:54.359
you're using the operators that are
00:51:52.079 --> 00:51:57.359
available to you to make this more
00:51:54.359 --> 00:51:57.359
efficient
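As an illustration of that point (a sketch, not the exact code shown on the slide), here are two ways of computing x transpose A x in PyTorch; the second builds a single graph node where the first builds several:

    import torch

    x = torch.randn(5, requires_grad=True)
    A = torch.randn(5, 5)

    # option 1: two matmuls, so two intermediate nodes in the graph
    y1 = torch.matmul(torch.matmul(x, A), x)

    # option 2: one einsum call computes the same x^T A x in a single node
    y2 = torch.einsum("i,ij,j->", x, A, x)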
00:51:57.440 --> 00:52:00.760
um and then finally we could like add
00:51:59.280 --> 00:52:01.920
all of these together at the end we
00:52:00.760 --> 00:52:04.000
could add a
00:52:01.920 --> 00:52:05.880
constant um and then we get this
00:52:04.000 --> 00:52:09.520
expression here which gives us kind of a
00:52:05.880 --> 00:52:09.520
polynomial
00:52:09.680 --> 00:52:15.760
expression um also another thing to note
00:52:13.480 --> 00:52:17.599
is within a neural network computation
00:52:15.760 --> 00:52:21.920
graph variable names are just labelings
00:52:17.599 --> 00:52:25.359
of nodes and so if you're using a
00:52:21.920 --> 00:52:27.680
computation graph like this you might
00:52:25.359 --> 00:52:29.240
only be declaring one variable here but
00:52:27.680 --> 00:52:30.839
actually there's a whole bunch of stuff
00:52:29.240 --> 00:52:32.359
going on behind the scenes and all of
00:52:30.839 --> 00:52:34.240
that will take memory and computation
00:52:32.359 --> 00:52:35.440
time and stuff like that so it's
00:52:34.240 --> 00:52:37.119
important to be aware of that if you
00:52:35.440 --> 00:52:40.400
want to make your implementations more
00:52:37.119 --> 00:52:40.400
efficient among other
00:52:41.119 --> 00:52:46.680
things so we have several algorithms
00:52:44.480 --> 00:52:49.079
that go into implementing neural nets um
00:52:46.680 --> 00:52:50.760
the first one is graph construction uh
00:52:49.079 --> 00:52:53.480
the second one is forward
00:52:50.760 --> 00:52:54.839
propagation uh and graph construction is
00:52:53.480 --> 00:52:56.359
basically constructing the graph
00:52:54.839 --> 00:52:58.680
declaring all the variables stuff
00:52:56.359 --> 00:53:01.520
like this the second one is forward
00:52:58.680 --> 00:53:03.880
propagation and um the way you do this
00:53:01.520 --> 00:53:06.480
is in topological order uh you compute
00:53:03.880 --> 00:53:08.280
the value of a node given its inputs and
00:53:06.480 --> 00:53:11.000
so basically you start out with all of
00:53:08.280 --> 00:53:12.680
the nodes that you give as input and
00:53:11.000 --> 00:53:16.040
then you find any node in the graph
00:53:12.680 --> 00:53:17.799
where all of its uh all of its tail
00:53:16.040 --> 00:53:20.280
nodes or all of its children have been
00:53:17.799 --> 00:53:22.119
calculated so in this case that would be
00:53:20.280 --> 00:53:24.640
these two nodes and then in arbitrary
00:53:22.119 --> 00:53:27.000
order or even in parallel you calculate
00:53:24.640 --> 00:53:28.280
the value of all of the satisfied nodes
00:53:27.000 --> 00:53:31.799
until you get to the
00:53:28.280 --> 00:53:34.280
end and then uh the remaining algorithms
00:53:31.799 --> 00:53:36.200
are back propagation and parameter
00:53:34.280 --> 00:53:38.240
update I already talked about parameter
00:53:36.200 --> 00:53:40.799
update uh using stochastic gradient
00:53:38.240 --> 00:53:42.760
descent but for back propagation we then
00:53:40.799 --> 00:53:45.400
process examples in Reverse topological
00:53:42.760 --> 00:53:47.640
order uh calculate derivatives of
00:53:45.400 --> 00:53:50.400
the final value with respect to the parameters
00:53:47.640 --> 00:53:52.319
and so we start out with the very
00:53:50.400 --> 00:53:54.200
final value usually this is your loss
00:53:52.319 --> 00:53:56.200
function and then you just step
00:53:54.200 --> 00:54:00.440
backwards in topological order to
00:53:56.200 --> 00:54:04.160
calculate the derivatives of all these
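In PyTorch all of these steps show up in a few lines; here is a minimal sketch (with an illustrative hinge loss, not the course code) of one full cycle of graph construction, forward propagation, back propagation, and parameter update:

    import torch

    w = torch.randn(3, requires_grad=True)              # parameters
    f = torch.tensor([1.0, 0.0, 2.0])                   # a frequency vector
    y = 1.0                                             # the label, +1 or -1

    loss = torch.clamp(-y * torch.dot(w, f), min=0.0)   # graph construction + forward propagation
    loss.backward()                                     # back propagation in reverse topological order
    with torch.no_grad():
        w -= 0.1 * w.grad                               # parameter update (an SGD step)
        w.grad.zero_()                                  # clear the gradient for the next example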
00:54:00.440 --> 00:54:05.920
so um this is pretty simple I think a
00:54:04.160 --> 00:54:08.040
lot of people may have seen this already
00:54:05.920 --> 00:54:09.920
but keeping this in mind as you're
00:54:08.040 --> 00:54:12.480
implementing NLP models especially
00:54:09.920 --> 00:54:14.240
models that are really memory intensive
00:54:12.480 --> 00:54:16.559
or things like that is pretty important
00:54:14.240 --> 00:54:19.040
because if you accidentally like for
00:54:16.559 --> 00:54:21.799
example calculate the same thing twice
00:54:19.040 --> 00:54:23.559
or accidentally create a graph that is
00:54:21.799 --> 00:54:25.720
manipulating very large tensors and
00:54:23.559 --> 00:54:27.319
creating very large intermediate States
00:54:25.720 --> 00:54:29.720
that can kill your memory and and cause
00:54:27.319 --> 00:54:31.839
big problems so it's an important thing
00:54:29.720 --> 00:54:31.839
to
00:54:34.359 --> 00:54:38.880
be aware of um cool any any questions about
00:54:39.040 --> 00:54:44.440
this okay if not I will go on to the
00:54:41.680 --> 00:54:45.680
next one so neural network Frameworks
00:54:44.440 --> 00:54:48.920
there's several neural network
00:54:45.680 --> 00:54:52.880
Frameworks but in NLP nowadays I really
00:54:48.920 --> 00:54:55.079
only see two and mostly only see one um
00:54:52.880 --> 00:54:57.960
so that one that almost everybody
00:54:55.079 --> 00:55:01.240
uses is PyTorch um and I would
00:54:57.960 --> 00:55:04.559
recommend using it unless you uh you
00:55:01.240 --> 00:55:07.480
know if you're a fan of like rust or you
00:55:04.559 --> 00:55:09.200
know esoteric uh not esoteric but like
00:55:07.480 --> 00:55:11.960
unusual programming languages and you
00:55:09.200 --> 00:55:14.720
like Beauty and things like this another
00:55:11.960 --> 00:55:15.799
option might be JAX uh so I'll explain
00:55:14.720 --> 00:55:18.440
a little bit about the difference
00:55:15.799 --> 00:55:19.960
between them uh and you can pick
00:55:18.440 --> 00:55:23.559
accordingly
00:55:19.960 --> 00:55:25.359
um first uh both of these Frameworks uh
00:55:23.559 --> 00:55:26.839
are developed by big companies and they
00:55:25.359 --> 00:55:28.520
have a lot of engineering support behind
00:55:26.839 --> 00:55:29.720
them that's kind of an important thing
00:55:28.520 --> 00:55:31.280
to think about when you're deciding
00:55:29.720 --> 00:55:32.599
which framework to use because you know
00:55:31.280 --> 00:55:36.000
it'll be well
00:55:32.599 --> 00:55:38.039
supported um pytorch is definitely most
00:55:36.000 --> 00:55:40.400
widely used in NLP especially NLP
00:55:38.039 --> 00:55:44.240
research um and it's used in some NLP
00:55:40.400 --> 00:55:47.359
projects and JAX is used in some NLP
00:55:44.240 --> 00:55:49.960
projects um pytorch favors Dynamic
00:55:47.359 --> 00:55:53.760
execution so what dynamic execution
00:55:49.960 --> 00:55:55.880
means is um you basically create a
00:55:53.760 --> 00:55:59.760
computation graph and and then execute
00:55:55.880 --> 00:56:02.760
it uh every time you process an input uh
00:55:59.760 --> 00:56:04.680
in contrast there's also the approach where you define the
00:56:02.760 --> 00:56:07.200
computation graph first and then execute
00:56:04.680 --> 00:56:09.280
it over and over again so in other words
00:56:07.200 --> 00:56:10.680
the graph construction step only happens
00:56:09.280 --> 00:56:13.119
once kind of at the beginning of
00:56:10.680 --> 00:56:16.799
computation and then you compile it
00:56:13.119 --> 00:56:20.039
afterwards and it's actually pytorch
00:56:16.799 --> 00:56:23.359
supports kind of defining and compiling
00:56:20.039 --> 00:56:27.480
and Jax supports more Dynamic things but
00:56:23.359 --> 00:56:30.160
the way they were designed is uh is kind
00:56:27.480 --> 00:56:32.960
of favoring Dynamic execution or
00:56:30.160 --> 00:56:37.079
favoring definition and compilation
00:56:32.960 --> 00:56:39.200
and the difference between these two is
00:56:37.079 --> 00:56:41.760
this one gives you more flexibility this
00:56:39.200 --> 00:56:45.440
one gives you better optimization and more
00:56:41.760 --> 00:56:49.760
speed if you want to if you want to do
00:56:45.440 --> 00:56:52.400
that um another thing about Jax is um
00:56:49.760 --> 00:56:55.200
it's kind of very close to numpy in a
00:56:52.400 --> 00:56:57.440
way like it uses an interface or something
00:56:55.200 --> 00:56:59.960
that's kind of close to numpy it's very
00:56:57.440 --> 00:57:02.359
heavily based on tensors and so because
00:56:59.960 --> 00:57:04.640
of this you can kind of easily do some
00:57:02.359 --> 00:57:06.640
interesting things like okay I want to
00:57:04.640 --> 00:57:11.319
take this tensor and I want to split it
00:57:06.640 --> 00:57:14.000
over two gpus um and this is good if
00:57:11.319 --> 00:57:17.119
you're training like a very large model
00:57:14.000 --> 00:57:20.920
and you want to put kind
00:57:17.119 --> 00:57:20.920
of this part of the
00:57:22.119 --> 00:57:26.520
model uh you want to put this part of
00:57:24.119 --> 00:57:30.079
the model on GPU 1 this on GPU 2 this on
00:57:26.520 --> 00:57:31.599
GPU 3 and so on it's slightly simpler
00:57:30.079 --> 00:57:34.400
conceptually to do in JAX but it's
00:57:31.599 --> 00:57:37.160
also possible to do in
00:57:34.400 --> 00:57:39.119
PyTorch and PyTorch by far has the most
00:57:37.160 --> 00:57:41.640
vibrant ecosystem so like as I said
00:57:39.119 --> 00:57:44.200
pytorch is a good default choice but you
00:57:41.640 --> 00:57:47.480
can consider using JAX if you uh if you
00:57:44.200 --> 00:57:47.480
like new things
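As a small illustration of what dynamic execution buys you (a sketch with illustrative names, not anything from the course code): in PyTorch the graph is rebuilt on every call, so ordinary Python control flow that depends on the input just works:

    import torch
    import torch.nn as nn

    embedding = nn.Embedding(num_embeddings=1000, embedding_dim=16)
    w = torch.randn(16, requires_grad=True)

    def score(word_ids):
        # word_ids can have a different length every time; a fresh graph is built per call
        summed = embedding(word_ids).sum(dim=0)
        if summed.norm() > 1.0:                # data-dependent Python branching is fine
            summed = summed / summed.norm()
        return torch.dot(w, summed)

    s = score(torch.tensor([3, 14, 159]))      # one sentence's worth of word indices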
00:57:48.079 --> 00:57:55.480
cool um yeah actually I already
00:57:51.599 --> 00:57:58.079
talked about that so in the interest of
00:57:55.480 --> 00:58:02.119
time I may not go into these very deeply
00:57:58.079 --> 00:58:05.799
but it's important to note that we have
00:58:02.119 --> 00:58:05.799
examples of all of
00:58:06.920 --> 00:58:12.520
the models that I talked about in the
00:58:09.359 --> 00:58:16.720
class today these are created for
00:58:12.520 --> 00:58:17.520
Simplicity not for Speed or efficiency
00:58:16.720 --> 00:58:20.480
of
00:58:17.520 --> 00:58:24.920
implementation um so these are kind of
00:58:20.480 --> 00:58:27.760
PyTorch based uh examples uh where
00:58:24.920 --> 00:58:31.599
you can create the bag of words
00:58:27.760 --> 00:58:36.440
Model A continuous bag of words
00:58:31.599 --> 00:58:39.640
model um and
00:58:36.440 --> 00:58:41.640
a deep continuous bag of words
00:58:39.640 --> 00:58:44.359
model
00:58:41.640 --> 00:58:46.039
and all of these I believe are
00:58:44.359 --> 00:58:48.760
implemented in
00:58:46.039 --> 00:58:51.960
model.py and the most important thing is
00:58:48.760 --> 00:58:54.960
where you define the forward pass and
00:58:51.960 --> 00:58:57.319
maybe I can just give a simple example
00:58:54.960 --> 00:58:58.200
of this but here this is where you do the
00:58:57.319 --> 00:59:01.839
word
00:58:58.200 --> 00:59:04.400
embedding this is where you sum up all
00:59:01.839 --> 00:59:08.119
of the embeddings and add a
00:59:04.400 --> 00:59:10.200
bias um and then this is uh where you
00:59:08.119 --> 00:59:13.960
return the the
00:59:10.200 --> 00:59:13.960
score and then oh
00:59:14.799 --> 00:59:19.119
sorry the continuous bag of words model
00:59:17.520 --> 00:59:22.160
sums up some
00:59:19.119 --> 00:59:23.640
embeddings uh or gets the embeddings
00:59:22.160 --> 00:59:25.799
sums up some
00:59:23.640 --> 00:59:28.079
embeddings
00:59:25.799 --> 00:59:30.599
uh gets the score here and then runs it
00:59:28.079 --> 00:59:33.200
through a linear or changes the view
00:59:30.599 --> 00:59:35.119
runs it through a linear layer and then
00:59:33.200 --> 00:59:38.319
the Deep continuous bag of words model
00:59:35.119 --> 00:59:41.160
also adds a few layers of uh like linear
00:59:38.319 --> 00:59:43.119
transformations and tanh so you should be
00:59:41.160 --> 00:59:44.640
able to see that these correspond pretty
00:59:43.119 --> 00:59:47.440
closely to the things that I had on the
00:59:44.640 --> 00:59:49.280
slides so um hopefully that's a good
00:59:47.440 --> 00:59:51.839
start if you're not very familiar with
00:59:49.280 --> 00:59:51.839
implementing these models
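For reference, a forward pass like the one just described might look roughly like this minimal sketch (illustrative names, not the actual model.py):

    import torch
    import torch.nn as nn

    class BoW(nn.Module):
        def __init__(self, vocab_size, num_classes):
            super().__init__()
            # one score vector per word, plus a per-class bias
            self.embedding = nn.Embedding(vocab_size, num_classes)
            self.bias = nn.Parameter(torch.zeros(num_classes))

        def forward(self, word_ids):
            emb = self.embedding(word_ids)       # word embedding lookup
            return emb.sum(dim=0) + self.bias    # sum the embeddings, add a bias, return the scores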
00:59:53.119 --> 00:59:58.440
oh and yes the recitation uh will
00:59:56.599 --> 00:59:59.799
be about playing around with
00:59:58.440 --> 01:00:01.200
SentencePiece and playing around with these so
00:59:59.799 --> 01:00:02.839
if you look at them and have any
01:00:01.200 --> 01:00:05.000
questions you're welcome to show up
01:00:02.839 --> 01:00:09.880
where I walk
01:00:05.000 --> 01:00:09.880
through cool um any any questions about
01:00:12.839 --> 01:00:19.720
these okay so a few more final important
01:00:16.720 --> 01:00:21.720
Concepts um another concept that you
01:00:19.720 --> 01:00:25.440
should definitely be aware of is the
01:00:21.720 --> 01:00:27.280
Adam optimizer uh so there's lots of uh
01:00:25.440 --> 01:00:30.559
optimizers that you could be using but
01:00:27.280 --> 01:00:32.200
almost all research in NLP uses some uh
01:00:30.559 --> 01:00:38.440
variety of the Adam
01:00:32.200 --> 01:00:40.839
optimizer and uh the way this works
01:00:38.440 --> 01:00:42.559
is it
01:00:40.839 --> 01:00:45.640
optimizes
01:00:42.559 --> 01:00:48.480
the um it optimizes the model considering
01:00:45.640 --> 01:00:49.359
the rolling average of the gradient and
01:00:48.480 --> 01:00:53.160
uh
01:00:49.359 --> 01:00:55.920
momentum and the way it works is here we
01:00:53.160 --> 01:00:58.839
have a gradient here we have
01:00:55.920 --> 01:01:04.000
momentum and what you can see is
01:00:58.839 --> 01:01:06.680
happening here is we add a little bit of
01:01:04.000 --> 01:01:09.200
the gradient in uh how much you add in
01:01:06.680 --> 01:01:12.720
is with respect to the size of this beta
01:01:09.200 --> 01:01:16.000
1 parameter and you add it into uh the
01:01:12.720 --> 01:01:18.640
momentum term so this momentum term like
01:01:16.000 --> 01:01:20.440
gradually increases and decreases so in
01:01:18.640 --> 01:01:23.440
contrast to standard gradient descent
01:01:20.440 --> 01:01:25.839
which could be
01:01:23.440 --> 01:01:28.440
updating
01:01:25.839 --> 01:01:31.440
uh each parameter kind of like very
01:01:28.440 --> 01:01:33.359
differently on each time step this will
01:01:31.440 --> 01:01:35.680
make the momentum kind of transition
01:01:33.359 --> 01:01:37.240
more smoothly by taking the rolling
01:01:35.680 --> 01:01:39.880
average of the
01:01:37.240 --> 01:01:43.400
gradient and then the the second thing
01:01:39.880 --> 01:01:47.640
is um by taking the momentum this is the
01:01:43.400 --> 01:01:51.000
rolling average of the I guess gradient
01:01:47.640 --> 01:01:54.440
uh variance sorry I this should be
01:01:51.000 --> 01:01:58.079
variance and the reason why you need
01:01:54.440 --> 01:02:01.319
need to keep track of the variance is
01:01:58.079 --> 01:02:03.319
some uh some parameters will have very
01:02:01.319 --> 01:02:06.559
large variance in their gradients and
01:02:03.319 --> 01:02:11.480
might fluctuate very uh strongly and
01:02:06.559 --> 01:02:13.039
others might have a smaller uh
01:02:11.480 --> 01:02:15.240
variance in their gradients and not
01:02:13.039 --> 01:02:18.240
fluctuate very much but we want to make
01:02:15.240 --> 01:02:20.200
sure that we update the ones we still
01:02:18.240 --> 01:02:22.240
update the ones that have a very small
01:02:20.200 --> 01:02:25.760
uh change of their variance and the
01:02:22.240 --> 01:02:27.440
reason why is kind of let's say you have
01:02:25.760 --> 01:02:30.440
a
01:02:27.440 --> 01:02:30.440
multi-layer
01:02:32.480 --> 01:02:38.720
network
01:02:34.480 --> 01:02:41.240
um or actually sorry a better
01:02:38.720 --> 01:02:44.319
um a better example is like let's say we
01:02:41.240 --> 01:02:47.559
have a big word embedding Matrix and
01:02:44.319 --> 01:02:53.359
over here we have like really frequent
01:02:47.559 --> 01:02:56.279
words and then over here we have uh
01:02:53.359 --> 01:02:59.319
gradi
01:02:56.279 --> 01:03:00.880
no we have like less frequent words we
01:02:59.319 --> 01:03:02.799
want to make sure that all of these get
01:03:00.880 --> 01:03:06.160
updated appropriately all of these get
01:03:02.799 --> 01:03:08.640
like enough updates and so over here
01:03:06.160 --> 01:03:10.760
this one will have lots of updates and
01:03:08.640 --> 01:03:13.680
so uh kind of
01:03:10.760 --> 01:03:16.599
the amount that we
01:03:13.680 --> 01:03:20.039
update or the the amount that we update
01:03:16.599 --> 01:03:21.799
the uh this will be relatively large
01:03:20.039 --> 01:03:23.119
whereas over here this will not have
01:03:21.799 --> 01:03:24.880
very many updates we'll have lots of
01:03:23.119 --> 01:03:26.480
zero updates also
01:03:24.880 --> 01:03:29.160
and so the amount that we update this
01:03:26.480 --> 01:03:32.520
will be relatively small and so this
01:03:29.160 --> 01:03:36.119
kind of squared gradient here will uh
01:03:32.520 --> 01:03:38.400
be smaller for the values over here and
01:03:36.119 --> 01:03:41.359
what that allows us to do is it allows
01:03:38.400 --> 01:03:44.200
us to maybe I can just go to the bottom
01:03:41.359 --> 01:03:46.039
we end up uh dividing by the square root
01:03:44.200 --> 01:03:47.599
of this and because we divide by the
01:03:46.039 --> 01:03:51.000
square root of this if this is really
01:03:47.599 --> 01:03:55.680
large like 50 and 70 and then this over
01:03:51.000 --> 01:03:59.480
here is like one 0.5
01:03:55.680 --> 01:04:01.920
uh or something we will be upweighting the
01:03:59.480 --> 01:04:03.920
ones that have like smaller squared
01:04:01.920 --> 01:04:06.880
gradients so it allows you to
01:04:03.920 --> 01:04:08.760
upweight the less common gradients more
01:04:06.880 --> 01:04:10.440
frequently and then there's also some
01:04:08.760 --> 01:04:13.400
terms for correcting bias early in
01:04:10.440 --> 01:04:16.440
training because these momentum in uh in
01:04:13.400 --> 01:04:19.559
variance or momentum in squared gradient
01:04:16.440 --> 01:04:23.119
terms are not going to be like well
01:04:19.559 --> 01:04:24.839
calibrated yet so it prevents them from
01:04:23.119 --> 01:04:28.880
going haywire at the very beginning of
01:04:24.839 --> 01:04:30.839
training so this is uh the details of
01:04:28.880 --> 01:04:33.640
this again are not like super super
01:04:30.839 --> 01:04:37.359
important um another thing that I didn't
01:04:33.640 --> 01:04:40.200
write on the slides is uh now in
01:04:37.359 --> 01:04:43.920
Transformers it's also super common to
01:04:40.200 --> 01:04:47.400
have an overall learning rate schedule so
01:04:43.920 --> 01:04:50.520
even um even Adam has this
01:04:47.400 --> 01:04:53.440
learning rate parameter here and we what
01:04:50.520 --> 01:04:55.240
we often do is we adjust this so we
01:04:53.440 --> 01:04:57.839
start at low
01:04:55.240 --> 01:04:59.640
we raise it up and then we have a Decay
01:04:57.839 --> 01:05:03.039
uh at the end and exactly how much you
01:04:59.640 --> 01:05:04.440
do this kind of depends on um you know
01:05:03.039 --> 01:05:06.160
how big your model is how much data
01:05:04.440 --> 01:05:09.160
you're training on eventually and the
01:05:06.160 --> 01:05:12.440
reason why we do this is transformers
01:05:09.160 --> 01:05:13.839
are unfortunately super sensitive to
01:05:12.440 --> 01:05:15.359
having a high learning rate right at the
01:05:13.839 --> 01:05:16.559
very beginning so if you update them
01:05:15.359 --> 01:05:17.920
with a high learning rate right at the
01:05:16.559 --> 01:05:22.920
very beginning they go haywire and you
01:05:17.920 --> 01:05:24.400
get a really weird model um and but you
01:05:22.920 --> 01:05:26.760
want to raise it eventually so your
01:05:24.400 --> 01:05:28.920
model is learning appropriately and then
01:05:26.760 --> 01:05:30.400
in all stochastic gradient descent no
01:05:28.920 --> 01:05:31.680
matter whether you're using atom or
01:05:30.400 --> 01:05:33.400
anything else it's a good idea to
01:05:31.680 --> 01:05:36.200
gradually decrease the learning rate at
01:05:33.400 --> 01:05:38.119
the end to prevent the model from
01:05:36.200 --> 01:05:40.480
continuing to fluctuate and getting it
01:05:38.119 --> 01:05:42.760
to a stable point that gives you good
01:05:40.480 --> 01:05:45.559
accuracy over a large part of data so
01:05:42.760 --> 01:05:47.480
this is often included like if you look
01:05:45.559 --> 01:05:51.000
at any standard Transformer training
01:05:47.480 --> 01:05:53.079
recipe it will have this so that's
01:05:51.000 --> 01:05:54.799
kind of the go-to
01:05:53.079 --> 01:05:58.960
optimizer
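Putting the pieces together, here is a minimal sketch of one Adam update with the usual default hyperparameters; the notation is mine, not necessarily the slide's:

    import numpy as np

    def adam_step(theta, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad          # rolling average of the gradient (momentum)
        v = beta2 * v + (1 - beta2) * grad ** 2     # rolling average of the squared gradient
        m_hat = m / (1 - beta1 ** t)                # bias correction early in training (t starts at 1)
        v_hat = v / (1 - beta2 ** t)
        theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # parameters with small squared gradients get relatively larger updates
        return theta, m, v

In practice you would normally just use torch.optim.Adam or AdamW together with a warmup-then-decay learning rate scheduler, as described above, rather than writing this yourself.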
01:05:54.799 --> 01:06:01.039
um are there any questions or
01:05:58.960 --> 01:06:02.599
discussion there's also tricky things
01:06:01.039 --> 01:06:04.000
like cyclic learning rates where you
01:06:02.599 --> 01:06:06.599
decrease the learning rate increase it
01:06:04.000 --> 01:06:08.559
and stuff like that but I won't go into
01:06:06.599 --> 01:06:11.000
that and don't actually use it that
01:06:08.559 --> 01:06:12.760
much second thing is visualization of
01:06:11.000 --> 01:06:15.400
embeddings so normally when we have word
01:06:12.760 --> 01:06:19.760
embeddings usually they're kind of large
01:06:15.400 --> 01:06:21.559
um and they can be like 512 or 1024
01:06:19.760 --> 01:06:25.079
dimensions
01:06:21.559 --> 01:06:28.720
and so one thing that we can do is we
01:06:25.079 --> 01:06:31.079
can down weight them or sorry down uh
01:06:28.720 --> 01:06:34.400
like reduce the dimensions or perform
01:06:31.079 --> 01:06:35.880
dimensionality reduction and put them in
01:06:34.400 --> 01:06:37.680
like two or three dimensions which are
01:06:35.880 --> 01:06:40.200
easy for humans to
01:06:37.680 --> 01:06:42.000
visualize this is an example using
01:06:40.200 --> 01:06:44.839
principal component analysis which is a
01:06:42.000 --> 01:06:48.279
linear Dimension reduction technique and
01:06:44.839 --> 01:06:50.680
this is uh an example from 10 years ago
01:06:48.279 --> 01:06:52.359
now uh one of the first major word
01:06:50.680 --> 01:06:55.240
embedding papers where they demonstrated
01:06:52.359 --> 01:06:57.720
that if you do this sort of linear
01:06:55.240 --> 01:06:59.440
Dimension reduction uh you get actually
01:06:57.720 --> 01:07:01.279
some interesting things where you can
01:06:59.440 --> 01:07:03.240
draw a vector that's almost the same
01:07:01.279 --> 01:07:06.400
direction between like countries and
01:07:03.240 --> 01:07:09.319
their uh countries and their capitals
01:07:06.400 --> 01:07:13.720
for example so this is a good thing to
01:07:09.319 --> 01:07:16.559
do but actually PCA uh doesn't give
01:07:13.720 --> 01:07:20.760
you in some cases PCA doesn't give you
01:07:16.559 --> 01:07:22.920
super great uh visualizations sorry yeah
01:07:20.760 --> 01:07:25.920
well for like if it's
01:07:22.920 --> 01:07:25.920
like
01:07:29.880 --> 01:07:35.039
um for things like this I think you
01:07:33.119 --> 01:07:37.359
probably would still see vectors in the
01:07:35.039 --> 01:07:38.760
same direction but I don't think it like
01:07:37.359 --> 01:07:40.920
there's a reason why I'm introducing
01:07:38.760 --> 01:07:44.279
nonlinear projections next because the
01:07:40.920 --> 01:07:46.799
more standard way to do this is uh
01:07:44.279 --> 01:07:50.640
nonlinear projections in in particular a
01:07:46.799 --> 01:07:54.880
method called t-SNE and the way um they
01:07:50.640 --> 01:07:56.880
do this is they try to group
01:07:54.880 --> 01:07:59.000
things that are close together in high
01:07:56.880 --> 01:08:01.240
dimensional space so that they're also
01:07:59.000 --> 01:08:04.440
close together in low dimensional space
01:08:01.240 --> 01:08:08.520
but they remove the Restriction that
01:08:04.440 --> 01:08:10.799
this is uh that this is linear so this
01:08:08.520 --> 01:08:15.480
is an example of just grouping together
01:08:10.799 --> 01:08:18.040
some digits uh from the MNIST data
01:08:15.480 --> 01:08:20.279
set or sorry reducing the dimension of
01:08:18.040 --> 01:08:23.640
digits from the MNIST data
01:08:20.279 --> 01:08:25.640
set according to PCA and you can see it
01:08:23.640 --> 01:08:28.000
gives these kind of blobs that overlap
01:08:25.640 --> 01:08:29.799
with each other and stuff like this but
01:08:28.000 --> 01:08:31.679
if you do it with t-SNE this is
01:08:29.799 --> 01:08:34.799
completely unsupervised actually it's
01:08:31.679 --> 01:08:37.080
not training any model for labeling the
01:08:34.799 --> 01:08:39.239
labels are just used to draw the colors
01:08:37.080 --> 01:08:42.520
and you can see that it gets pretty
01:08:39.239 --> 01:08:44.520
coherent um clusters that correspond to
01:08:42.520 --> 01:08:48.120
like what the actual digits
01:08:44.520 --> 01:08:50.120
are um however uh one problem with
01:08:48.120 --> 01:08:53.159
t-SNE I still think it's better than
01:08:50.120 --> 01:08:55.000
PCA for a large number of uh
01:08:53.159 --> 01:08:59.199
applications
01:08:55.000 --> 01:09:01.040
but settings of t-SNE matter and t-SNE has
01:08:59.199 --> 01:09:02.920
a few settings kind of the most
01:09:01.040 --> 01:09:04.120
important ones are the overall
01:09:02.920 --> 01:09:06.560
perplexity
01:09:04.120 --> 01:09:09.040
hyperparameter and uh the number of
01:09:06.560 --> 01:09:12.319
steps that you perform and there's a
01:09:09.040 --> 01:09:14.920
nice example uh of a paper or kind of
01:09:12.319 --> 01:09:16.359
like online post uh that demonstrates
01:09:14.920 --> 01:09:18.560
how if you change these parameters you
01:09:16.359 --> 01:09:22.279
can get very different things so if this
01:09:18.560 --> 01:09:24.080
is the original data you run t-SNE and it
01:09:22.279 --> 01:09:26.640
gives you very different things based on
01:09:24.080 --> 01:09:29.279
the hyperparameters that you change um
01:09:26.640 --> 01:09:32.880
and here's another example uh you have
01:09:29.279 --> 01:09:36.960
two linear uh things like this and so
01:09:32.880 --> 01:09:40.839
PCA no matter how you ran PCA you would
01:09:36.960 --> 01:09:44.080
still get a linear output from this so
01:09:40.839 --> 01:09:45.960
normally uh you know it might change the
01:09:44.080 --> 01:09:49.239
order it might squash it a little bit or
01:09:45.960 --> 01:09:51.239
something like this but um if you run
01:09:49.239 --> 01:09:53.400
t-SNE it gives you crazy things it even
01:09:51.239 --> 01:09:56.040
gives you like DNA and other stuff like
01:09:53.400 --> 01:09:58.040
that so so um you do need to be a little
01:09:56.040 --> 01:10:00.600
bit careful that uh this is not
01:09:58.040 --> 01:10:02.320
necessarily going to tell you nice
01:10:00.600 --> 01:10:04.400
linear correlations like this so like
01:10:02.320 --> 01:10:06.159
let's say this correlation existed if
01:10:04.400 --> 01:10:09.199
you use t-SNE it might not necessarily
01:10:06.159 --> 01:10:09.199
come out in
01:10:09.320 --> 01:10:14.880
t-SNE
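If you want to try this on your own embeddings, a minimal scikit-learn sketch looks like the following; the perplexity setting is exactly the kind of knob discussed above, and the random matrix is just a stand-in for a real embedding table:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    embeddings = np.random.randn(1000, 512)      # stand-in for a (vocab size, dim) embedding matrix

    pca_2d = PCA(n_components=2).fit_transform(embeddings)                       # linear projection
    tsne_2d = TSNE(n_components=2, perplexity=30.0).fit_transform(embeddings)    # nonlinear projection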
01:10:11.800 --> 01:10:16.920
cool yep uh that that's my final thing
01:10:14.880 --> 01:10:18.520
actually I said sequence models would be
01:10:16.920 --> 01:10:19.679
in the next class but it's in the class
01:10:18.520 --> 01:10:21.440
after this I'm going to be talking about
01:10:19.679 --> 01:10:24.199
language
01:10:21.440 --> 01:10:27.159
modeling uh cool any any questions
01:10:24.199 --> 01:10:27.159
or