|
WEBVTT |
|
|
|
00:00:00.399 --> 00:00:04.720 |
|
great um yeah so today we're going to be |
|
|
|
00:00:03.320 --> 00:00:07.040 |
|
talking a little bit about generation |
|
|
|
00:00:04.720 --> 00:00:08.639 |
|
algorithms um this will be sort of a |
|
|
|
00:00:07.040 --> 00:00:10.160 |
|
tour through some of the most common |
|
|
|
00:00:08.639 --> 00:00:12.080 |
|
methods and we're going to talk a little |
|
|
|
00:00:10.160 --> 00:00:13.480 |
|
bit about the theory behind them as well |
|
|
|
00:00:12.080 --> 00:00:15.080 |
|
um if you're looking at the slides on |
|
|
|
00:00:13.480 --> 00:00:18.359 |
|
the website these might be ever so |
|
|
|
00:00:15.080 --> 00:00:20.000 |
|
slightly different um but yeah I'll try |
|
|
|
00:00:18.359 --> 00:00:21.640 |
|
to stop at each section boundary for |
|
|
|
00:00:20.000 --> 00:00:23.840 |
|
questions also feel free to sort of |
|
|
|
00:00:21.640 --> 00:00:25.720 |
|
interrupt at any point for |
|
|
|
00:00:23.840 --> 00:00:27.720 |
|
clarifications so we're starting off |
|
|
|
00:00:25.720 --> 00:00:29.560 |
|
today with some great news um let's say |
|
|
|
00:00:27.720 --> 00:00:31.199 |
|
that you have some friend who maybe owns |
|
|
|
00:00:29.560 --> 00:00:34.800 |
|
a giant tech company and they've gifted |
|
|
|
00:00:31.199 --> 00:00:36.480 |
|
you this absolutely massive new model M |
|
|
|
00:00:34.800 --> 00:00:38.079 |
|
um it's a great model it's pre-trained |
|
|
|
00:00:36.480 --> 00:00:40.879 |
|
with the latest architecture it's |
|
|
|
00:00:38.079 --> 00:00:42.920 |
|
pre-trained on um trillions of tokens of |
|
|
|
00:00:40.879 --> 00:00:44.520 |
|
text it's got seven billion parameters |
|
|
|
00:00:42.920 --> 00:00:46.399 |
|
it looks like a really promising new |
|
|
|
00:00:44.520 --> 00:00:48.399 |
|
model you know it's the top of all these |
|
|
|
00:00:46.399 --> 00:00:50.320 |
|
leaderboards um but if you actually take |
|
|
|
00:00:48.399 --> 00:00:52.520 |
|
your new model M and you sort of open up |
|
|
|
00:00:50.320 --> 00:00:53.719 |
|
this box and kind of shake it out maybe
|
|
|
00:00:52.520 --> 00:00:55.239 |
|
from last class you know a little bit |
|
|
|
00:00:53.719 --> 00:00:57.000 |
|
architecturally what this model might |
|
|
|
00:00:55.239 --> 00:00:58.239 |
|
look like but if you actually kind of |
|
|
|
00:00:57.000 --> 00:01:00.320 |
|
take a closer look at it from a |
|
|
|
00:00:58.239 --> 00:01:01.719 |
|
different angle what you see is that M
|
|
|
00:01:00.320 --> 00:01:04.920 |
|
is actually just a conditional |
|
|
|
00:01:01.719 --> 00:01:07.200 |
|
probability distribution um you put some |
|
|
|
00:01:04.920 --> 00:01:09.680 |
|
input X into your model and you get some |
|
|
|
00:01:07.200 --> 00:01:10.680 |
|
probability out for any given sequence |
|
|
|
00:01:09.680 --> 00:01:13.360 |
|
that you're sort of interested in |
|
|
|
00:01:10.680 --> 00:01:14.960 |
|
evaluating right um and in particular M |
|
|
|
00:01:13.360 --> 00:01:17.560 |
|
gives you a probability distribution |
|
|
|
00:01:14.960 --> 00:01:19.439 |
|
over all tokens in its vocabulary to |
|
|
|
00:01:17.560 --> 00:01:21.040 |
|
predict like what token you would output |
|
|
|
00:01:19.439 --> 00:01:24.840 |
|
next right and so this is what this |
|
|
|
00:01:21.040 --> 00:01:26.880 |
|
equation says um given some input X and |
|
|
|
00:01:24.840 --> 00:01:29.520 |
|
everything that you've predicted so far |
|
|
|
00:01:26.880 --> 00:01:32.399 |
|
you get the probability of the next |
|
|
|
00:01:29.520 --> 00:01:33.600 |
|
token y_j and if you multiply this out
|
|
|
00:01:32.399 --> 00:01:34.840 |
|
over all the probabilities in your |
|
|
|
00:01:33.600 --> 00:01:37.159 |
|
sequence you can calculate the |
|
|
|
00:01:34.840 --> 00:01:41.240 |
|
probability of any output y given your |
|
|
|
00:01:37.159 --> 00:01:42.640 |
|
input X so what this like super fancy |
|
|
|
00:01:41.240 --> 00:01:44.119 |
|
model that you spend a lot of money to |
|
|
|
00:01:42.640 --> 00:01:46.280 |
|
train is really just a conditional |
|
|
|
00:01:44.119 --> 00:01:47.920 |
|
probability distribution um but this |
|
|
|
00:01:46.280 --> 00:01:49.600 |
|
turns out to be okay because you can use |
|
|
|
00:01:47.920 --> 00:01:51.920 |
|
a conditional probability distribution |
|
|
|
00:01:49.600 --> 00:01:54.399 |
|
to do sort of any task that we're really |
|
|
|
00:01:51.920 --> 00:01:56.719 |
|
interested in in NLP um pretty much any |
|
|
|
00:01:54.399 --> 00:01:58.680 |
|
task right so by changing what you |
|
|
|
00:01:56.719 --> 00:02:01.360 |
|
consider your input X and your output y |
|
|
|
00:01:58.680 --> 00:02:03.560 |
|
to be you can get outputs from this
|
|
|
00:02:01.360 --> 00:02:06.479 |
|
model for things like translation for |
|
|
|
00:02:03.560 --> 00:02:08.720 |
|
summarization for reasoning tasks um just
|
|
|
00:02:06.479 --> 00:02:10.520 |
|
by sort of changing what you consider |
|
|
|
00:02:08.720 --> 00:02:12.760 |
|
your inputs and outputs in this |
|
|
|
00:02:10.520 --> 00:02:14.239 |
|
setting but there's sort of both good |
|
|
|
00:02:12.760 --> 00:02:15.920 |
|
and bad things about your model being a |
|
|
|
00:02:14.239 --> 00:02:17.120 |
|
probability distribution instead of just |
|
|
|
00:02:15.920 --> 00:02:20.599 |
|
an oracle that gives you sort of a |
|
|
|
00:02:17.120 --> 00:02:22.080 |
|
single answer for every input um one |
|
|
|
00:02:20.599 --> 00:02:24.480 |
|
kind of nice thing about this |
|
|
|
00:02:22.080 --> 00:02:26.080 |
|
distribution um is that you can get at |
|
|
|
00:02:24.480 --> 00:02:27.720 |
|
an idea of something like confidence |
|
|
|
00:02:26.080 --> 00:02:30.120 |
|
right if you give your model the input 2 |
|
|
|
00:02:27.720 --> 00:02:32.480 |
|
plus 2 equals and almost all the |
|
|
|
00:02:30.120 --> 00:02:34.200 |
|
probability mass is on the token four
|
|
|
00:02:32.480 --> 00:02:35.760 |
|
you can say like the model predicts with |
|
|
|
00:02:34.200 --> 00:02:38.319 |
|
pretty high confidence that 2 plus 2 |
|
|
|
00:02:35.760 --> 00:02:39.480 |
|
equals four um versus if you give it |
|
|
|
00:02:38.319 --> 00:02:40.959 |
|
something that's maybe a little more |
|
|
|
00:02:39.480 --> 00:02:43.120 |
|
open-ended like you ask it to predict |
|
|
|
00:02:40.959 --> 00:02:44.640 |
|
Graham's favorite color and you see this |
|
|
|
00:02:43.120 --> 00:02:47.040 |
|
distribution that's sort of a lot |
|
|
|
00:02:44.640 --> 00:02:48.440 |
|
flatter you know the most likely output |
|
|
|
00:02:47.040 --> 00:02:49.720 |
|
is green but maybe we don't have a lot |
|
|
|
00:02:48.440 --> 00:02:51.560 |
|
of confidence that that's the correct |
|
|
|
00:02:49.720 --> 00:02:53.040 |
|
answer um this is really closely tied |
|
|
|
00:02:51.560 --> 00:02:55.200 |
|
into the idea of calibration which you |
|
|
|
00:02:53.040 --> 00:02:58.879 |
|
guys talked about um I guess a couple of |
|
|
|
00:02:55.200 --> 00:03:00.640 |
|
classes ago now the flip side of this |
|
|
|
00:02:58.879 --> 00:03:03.680 |
|
though is that you know notice that for
|
|
|
00:03:00.640 --> 00:03:06.760 |
|
this case like 2 plus 2 equals 4 not all of
|
|
|
00:03:03.680 --> 00:03:08.519 |
|
the probability mass is on four um and |
|
|
|
00:03:06.760 --> 00:03:09.720 |
|
so models that are conditional |
|
|
|
00:03:08.519 --> 00:03:11.560 |
|
probability distributions can |
|
|
|
00:03:09.720 --> 00:03:13.560 |
|
hallucinate right um pretty much no |
|
|
|
00:03:11.560 --> 00:03:15.799 |
|
matter what you do there's going to be |
|
|
|
00:03:13.560 --> 00:03:17.680 |
|
some nonzero probability to some output |
|
|
|
00:03:15.799 --> 00:03:19.920 |
|
that's incorrect or |
|
|
|
00:03:17.680 --> 00:03:21.239 |
|
undesirable um in some cases maybe even |
|
|
|
00:03:19.920 --> 00:03:23.760 |
|
offensive something that you don't want |
|
|
|
00:03:21.239 --> 00:03:25.280 |
|
the model to Output um and this is sort |
|
|
|
00:03:23.760 --> 00:03:27.840 |
|
of an artifact of the way these models |
|
|
|
00:03:25.280 --> 00:03:29.280 |
|
are trained and there's some great work
|
|
|
00:03:27.840 --> 00:03:31.400 |
|
kind of more on the theory side here |
|
|
|
00:03:29.280 --> 00:03:32.840 |
|
that shows that this is actually true |
|
|
|
00:03:31.400 --> 00:03:35.120 |
|
even if everything in your input |
|
|
|
00:03:32.840 --> 00:03:36.920 |
|
training data is sort of correct and |
|
|
|
00:03:35.120 --> 00:03:38.439 |
|
factual and doesn't have any errors |
|
|
|
00:03:36.920 --> 00:03:41.200 |
|
you'll still wind up with a situation |
|
|
|
00:03:38.439 --> 00:03:44.480 |
|
where some nonzero probability mass is |
|
|
|
00:03:41.200 --> 00:03:47.000 |
|
on some outputs that are undesirable or |
|
|
|
00:03:44.480 --> 00:03:50.120 |
|
hallucinatory for sort of most inputs |
|
|
|
00:03:47.000 --> 00:03:52.159 |
|
that you care about evaluating so if we |
|
|
|
00:03:50.120 --> 00:03:55.079 |
|
have these issues how do we actually get |
|
|
|
00:03:52.159 --> 00:03:56.519 |
|
a good output out of the model um and to |
|
|
|
00:03:55.079 --> 00:03:58.640 |
|
do that we're first going to talk about |
|
|
|
00:03:56.519 --> 00:04:00.079 |
|
some sampling methods um but I want to |
|
|
|
00:03:58.640 --> 00:04:01.879 |
|
pause here in case there are any
|
|
|
00:04:00.079 --> 00:04:04.159 |
|
questions on this idea of a model is a |
|
|
|
00:04:01.879 --> 00:04:04.159 |
|
conditional |
|
|
|
00:04:05.040 --> 00:04:11.680 |
|
distribution great so we can jump right |
|
|
|
00:04:07.519 --> 00:04:13.560 |
|
in so we have this model right we know |
|
|
|
00:04:11.680 --> 00:04:15.959 |
|
at each step at each token we might want |
|
|
|
00:04:13.560 --> 00:04:17.919 |
|
to decode the distribution of likelihood |
|
|
|
00:04:15.959 --> 00:04:18.959 |
|
over all vocabulary tokens right this |
|
|
|
00:04:17.919 --> 00:04:21.680 |
|
conditional distribution we've been |
|
|
|
00:04:18.959 --> 00:04:24.240 |
|
talking about um for the next time step |
|
|
|
00:04:21.680 --> 00:04:26.400 |
|
and what we want out of this is a good |
|
|
|
00:04:24.240 --> 00:04:28.000 |
|
output um for some definition of good |
|
|
|
00:04:26.400 --> 00:04:30.919 |
|
that we can sort of develop as we go |
|
|
|
00:04:28.000 --> 00:04:32.479 |
|
here so maybe the natural first thing to |
|
|
|
00:04:30.919 --> 00:04:34.880 |
|
try is we have a probability |
|
|
|
00:04:32.479 --> 00:04:36.600 |
|
distribution can we just sample from it |
|
|
|
00:04:34.880 --> 00:04:39.600 |
|
right and this is something called |
|
|
|
00:04:36.600 --> 00:04:41.639 |
|
ancestral sampling so at each time step |
|
|
|
00:04:39.600 --> 00:04:43.560 |
|
we're going to draw a token from this |
|
|
|
00:04:41.639 --> 00:04:45.039 |
|
distribution sort of according to its |
|
|
|
00:04:43.560 --> 00:04:47.199 |
|
relative probability right so if |
|
|
|
00:04:45.039 --> 00:04:48.639 |
|
something has twice as much probability |
|
|
|
00:04:47.199 --> 00:04:51.280 |
|
mass according to the model we'll draw
|
|
|
00:04:48.639 --> 00:04:54.000 |
|
it twice as often um and we can sample |
|
|
|
00:04:51.280 --> 00:04:55.560 |
|
from this distribution at each time step |
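To make this loop concrete, here is a minimal Python sketch of ancestral sampling; the five-token vocabulary, the logit values, and the next_logits_fn callback are made-up stand-ins for a real model's output layer, not any actual model M:

```python
import math
import random

# Hypothetical tiny vocabulary standing in for a real model's 32,000 tokens.
VOCAB = ["the", "a", "cat", "dog", "<eos>"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ancestral_sample(next_logits_fn, max_len=10, rng=random):
    """Draw one exact sample from the model distribution, token by token."""
    output = []
    for _ in range(max_len):
        # The model gives logits for the next token given everything so far.
        probs = softmax(next_logits_fn(output))
        # Draw a token index with probability proportional to its mass:
        # twice the mass means it is drawn twice as often.
        token = rng.choices(range(len(probs)), weights=probs, k=1)[0]
        if VOCAB[token] == "<eos>":
            break
        output.append(token)
    return [VOCAB[t] for t in output]
```

Repeating this many times would, in the limit, reproduce sequences with exactly the probabilities the model distribution assigns, which is the property described next.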
|
|
|
00:04:54.000 --> 00:04:58.080 |
|
and this is sort of a
|
|
|
00:04:55.560 --> 00:05:00.199 |
|
nice setup um we get exact samples from |
|
|
|
00:04:58.080 --> 00:05:02.639 |
|
the model distribution so using the |
|
|
|
00:05:00.199 --> 00:05:04.479 |
|
setup if you can imagine like
|
|
|
00:05:02.639 --> 00:05:06.680 |
|
drawing an almost infinite number of |
|
|
|
00:05:04.479 --> 00:05:08.320 |
|
samples like a ridiculously large number |
|
|
|
00:05:06.680 --> 00:05:10.160 |
|
and you look at their probabilities |
|
|
|
00:05:08.320 --> 00:05:11.840 |
|
you'd sort of get something from this |
|
|
|
00:05:10.160 --> 00:05:13.039 |
|
distribution with exactly the |
|
|
|
00:05:11.840 --> 00:05:15.720 |
|
probability that the real model |
|
|
|
00:05:13.039 --> 00:05:17.280 |
|
distribution is giving you um so this is
|
|
|
00:05:15.720 --> 00:05:19.039 |
|
great this gives us an exact sample from |
|
|
|
00:05:17.280 --> 00:05:21.400 |
|
the model this seems to be exactly what |
|
|
|
00:05:19.039 --> 00:05:22.880 |
|
we want um but you can guess probably by |
|
|
|
00:05:21.400 --> 00:05:24.639 |
|
the fact that we're only like 10 minutes |
|
|
|
00:05:22.880 --> 00:05:27.000 |
|
into class here this is not really the |
|
|
|
00:05:24.639 --> 00:05:28.280 |
|
end of the story um and there's actually |
|
|
|
00:05:27.000 --> 00:05:30.800 |
|
a couple of problems with sampling |
|
|
|
00:05:28.280 --> 00:05:32.560 |
|
directly from our model distribution
|
|
|
00:05:30.800 --> 00:05:35.280 |
|
the one that we're really going to focus |
|
|
|
00:05:32.560 --> 00:05:37.919 |
|
on first here is this idea of a long |
|
|
|
00:05:35.280 --> 00:05:41.400 |
|
tail so a model like Llama and maybe our
|
|
|
00:05:37.919 --> 00:05:43.639 |
|
new model M um has 32,000 vocabulary |
|
|
|
00:05:41.400 --> 00:05:46.280 |
|
tokens and you can imagine maybe out of |
|
|
|
00:05:43.639 --> 00:05:48.000 |
|
those tokens there might be 1,000 or even
|
|
|
00:05:46.280 --> 00:05:49.720 |
|
2,000 of those tokens that are sort of a |
|
|
|
00:05:48.000 --> 00:05:51.919 |
|
reasonable next thing to predict for a |
|
|
|
00:05:49.720 --> 00:05:53.479 |
|
really open-ended task right but there's |
|
|
|
00:05:51.919 --> 00:05:55.440 |
|
going to be all kinds of things in that |
|
|
|
00:05:53.479 --> 00:05:57.039 |
|
distribution um that are maybe like |
|
|
|
00:05:55.440 --> 00:05:58.440 |
|
punctuation there may be tokens that
|
|
|
00:05:57.039 --> 00:06:00.280 |
|
won't actually lead to the correct |
|
|
|
00:05:58.440 --> 00:06:01.840 |
|
answer like there's a lot of things in |
|
|
|
00:06:00.280 --> 00:06:04.560 |
|
this distribution that would be all |
|
|
|
00:06:01.840 --> 00:06:06.160 |
|
really low likelihood and this is fine |
|
|
|
00:06:04.560 --> 00:06:08.759 |
|
these things just get low probability |
|
|
|
00:06:06.160 --> 00:06:11.039 |
|
mass but the problem is if you give sort
|
|
|
00:06:08.759 --> 00:06:13.639 |
|
of a small amount of probability mass to
|
|
|
00:06:11.039 --> 00:06:16.599 |
|
30,000 different things that mass will |
|
|
|
00:06:13.639 --> 00:06:19.360 |
|
add up pretty quickly um and to see this |
|
|
|
00:06:16.599 --> 00:06:20.360 |
|
we have sort of this illustration here |
|
|
|
00:06:19.360 --> 00:06:21.560 |
|
um I don't know if you can see the |
|
|
|
00:06:20.360 --> 00:06:23.280 |
|
difference between the green and the |
|
|
|
00:06:21.560 --> 00:06:25.720 |
|
yellow but I've also drawn a little bar |
|
|
|
00:06:23.280 --> 00:06:27.800 |
|
between them this is a really long-tailed
|
|
|
00:06:25.720 --> 00:06:29.720 |
|
distribution and the green part of the |
|
|
|
00:06:27.800 --> 00:06:31.960 |
|
distribution which is a lot of tokens |
|
|
|
00:06:29.720 --> 00:06:34.000 |
|
with high likelihood has 50% of the |
|
|
|
00:06:31.960 --> 00:06:35.560 |
|
total probability the yellow part which
|
|
|
00:06:34.000 --> 00:06:37.360 |
|
is a lot of things that are all
|
|
|
00:06:35.560 --> 00:06:40.280 |
|
individually not super likely is the |
|
|
|
00:06:37.360 --> 00:06:41.720 |
|
other 50% of the probability and so what |
|
|
|
00:06:40.280 --> 00:06:44.360 |
|
that means is if you're doing something |
|
|
|
00:06:41.720 --> 00:06:46.120 |
|
like ancestral sampling 50% of the time |
|
|
|
00:06:44.360 --> 00:06:49.160 |
|
you'll be sampling something really |
|
|
|
00:06:46.120 --> 00:06:51.520 |
|
unlikely from this long tail um that |
|
|
|
00:06:49.160 --> 00:06:53.759 |
|
seems sort of not like what we want |
|
|
|
00:06:51.520 --> 00:06:56.080 |
|
right um so is there anything we can do |
|
|
|
00:06:53.759 --> 00:06:58.080 |
|
about this and the obvious first solution
|
|
|
00:06:56.080 --> 00:06:59.400 |
|
here is can we just cut off that tail |
|
|
|
00:06:58.080 --> 00:07:01.680 |
|
like if we know these tokens are not |
|
|
|
00:06:59.400 --> 00:07:03.039 |
|
super likely can we just ignore them and |
|
|
|
00:07:01.680 --> 00:07:05.039 |
|
there's a couple of different ways to do |
|
|
|
00:07:03.039 --> 00:07:07.919 |
|
that um the first of these is something |
|
|
|
00:07:05.039 --> 00:07:10.080 |
|
called top-k sampling where we say okay
|
|
|
00:07:07.919 --> 00:07:12.479 |
|
you know maybe we think there are 10 |
|
|
|
00:07:10.080 --> 00:07:14.000 |
|
reasonable like outputs right maybe
|
|
|
00:07:12.479 --> 00:07:17.280 |
|
we'll just sample from the 10 most |
|
|
|
00:07:14.000 --> 00:07:19.759 |
|
probable tokens um here maybe we say if |
|
|
|
00:07:17.280 --> 00:07:21.479 |
|
we want to do top-6 sampling we'll
|
|
|
00:07:19.759 --> 00:07:23.919 |
|
sample from just the six most probable |
|
|
|
00:07:21.479 --> 00:07:26.240 |
|
tokens and so in this example you can |
|
|
|
00:07:23.919 --> 00:07:27.680 |
|
see we originally had 10 tokens and |
|
|
|
00:07:26.240 --> 00:07:30.560 |
|
we're going to sample from just the blue |
|
|
|
00:07:27.680 --> 00:07:32.919 |
|
ones just the six most likely tokens |
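The truncation step of top-k sampling might be sketched as follows; this is plain Python over a toy probability list, where a real implementation would filter logits over the full vocabulary:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable entries of a distribution and
    renormalize them so the survivors sum to 1; the tail gets zero mass."""
    # Indices of the k largest probabilities.
    keep = set(sorted(range(len(probs)), key=lambda i: probs[i],
                      reverse=True)[:k])
    total = sum(p for i, p in enumerate(probs) if i in keep)
    return [p / total if i in keep else 0.0 for i, p in enumerate(probs)]
```

Sampling then proceeds exactly as before, just from this filtered distribution instead of the full one.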
|
|
|
00:07:30.560 --> 00:07:34.360 |
|
um in this example this distribution is |
|
|
|
00:07:32.919 --> 00:07:37.280 |
|
pretty flat there's a lot of things that |
|
|
|
00:07:34.360 --> 00:07:40.120 |
|
are like kind of likely right so that |
|
|
|
00:07:37.280 --> 00:07:43.000 |
|
those six tokens are only 68% of the |
|
|
|
00:07:40.120 --> 00:07:45.360 |
|
total probability Mass um if we go like |
|
|
|
00:07:43.000 --> 00:07:47.240 |
|
one time step further here we might have |
|
|
|
00:07:45.360 --> 00:07:49.360 |
|
a distribution that's a lot peakier most
|
|
|
00:07:47.240 --> 00:07:51.759 |
|
of the mass is on just a single token |
|
|
|
00:07:49.360 --> 00:07:53.919 |
|
and so sampling from just the top six |
|
|
|
00:07:51.759 --> 00:07:56.400 |
|
tokens actually captures 99% of the |
|
|
|
00:07:53.919 --> 00:07:58.360 |
|
probability mass maybe we say that seems
|
|
|
00:07:56.400 --> 00:08:01.199 |
|
a little excessive right we don't really |
|
|
|
00:07:58.360 --> 00:08:03.400 |
|
need um maybe all of these tokens that |
|
|
|
00:08:01.199 --> 00:08:05.479 |
|
are all kind of low probability maybe we |
|
|
|
00:08:03.400 --> 00:08:07.000 |
|
just want to sort of sample from the top |
|
|
|
00:08:05.479 --> 00:08:08.080 |
|
half of our distribution or something or |
|
|
|
00:08:07.000 --> 00:08:10.840 |
|
the top |
|
|
|
00:08:08.080 --> 00:08:12.919 |
|
90% um so instead of choosing a top |
|
|
|
00:08:10.840 --> 00:08:15.560 |
|
number of tokens to sample from you |
|
|
|
00:08:12.919 --> 00:08:17.400 |
|
could choose a top amount of probability |
|
|
|
00:08:15.560 --> 00:08:20.000 |
|
and this is something called top-p or
|
|
|
00:08:17.400 --> 00:08:21.520 |
|
nucleus sampling so P here is the amount |
|
|
|
00:08:20.000 --> 00:08:24.039 |
|
of probability from your distribution |
|
|
|
00:08:21.520 --> 00:08:26.639 |
|
you want to consider so if you decide |
|
|
|
00:08:24.039 --> 00:08:29.280 |
|
your p is about like 94% of the |
|
|
|
00:08:26.639 --> 00:08:31.639 |
|
probability mass you in this first
|
|
|
00:08:29.280 --> 00:08:33.719 |
|
example here would choose almost all of |
|
|
|
00:08:31.639 --> 00:08:35.440 |
|
the tokens you keep adding tokens in |
|
|
|
00:08:33.719 --> 00:08:37.159 |
|
until you reach an amount of total |
|
|
|
00:08:35.440 --> 00:08:39.479 |
|
probability that's about |
|
|
|
00:08:37.159 --> 00:08:40.880 |
|
0.94 but then when you get to the second
|
|
|
00:08:39.479 --> 00:08:43.240 |
|
step where you have a couple of really
|
|
|
00:08:40.880 --> 00:08:45.959 |
|
highly probable tokens you'd only need a |
|
|
|
00:08:43.240 --> 00:08:47.959 |
|
couple of tokens to add up to 0.94 or
|
|
|
00:08:45.959 --> 00:08:50.320 |
|
even higher than 0.94 and so you would |
|
|
|
00:08:47.959 --> 00:08:52.200 |
|
just sample from a smaller set of tokens |
|
|
|
00:08:50.320 --> 00:08:54.600 |
|
so in top-k sampling the total amount of
|
|
|
00:08:52.200 --> 00:08:56.560 |
|
probability you're sampling from can move
|
|
|
00:08:54.600 --> 00:08:58.120 |
|
around in top-p sampling the total
|
|
|
00:08:56.560 --> 00:08:59.839 |
|
number of tokens you're sampling from |
|
|
|
00:08:58.120 --> 00:09:01.959 |
|
might change |
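The top-p (nucleus) counterpart of the earlier top-k sketch might look like this; again the probabilities are toy values, and the 0.94 threshold mentioned above is just one example setting for p:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of most-probable tokens whose cumulative
    probability reaches p (the nucleus), renormalized; zero out the rest."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cumulative = set(), 0.0
    for i in order:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= p:  # the nucleus is complete
            break
    total = sum(probs[i] for i in keep)
    return [q / total if i in keep else 0.0 for i, q in enumerate(probs)]
```

On a flat distribution the loop keeps many tokens before reaching p; on a peaky one it may stop after just one or two, which is exactly the behavior described above.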
|
|
|
00:08:59.839 --> 00:09:04.760 |
|
um but maybe we sort of don't want to |
|
|
|
00:09:01.959 --> 00:09:07.279 |
|
impose a strong constraint like we want |
|
|
|
00:09:04.760 --> 00:09:09.279 |
|
like 94% here maybe just what we really |
|
|
|
00:09:07.279 --> 00:09:11.040 |
|
care about is saying that we're not |
|
|
|
00:09:09.279 --> 00:09:14.000 |
|
going to sample anything that's really |
|
|
|
00:09:11.040 --> 00:09:16.800 |
|
really unlikely right another way of |
|
|
|
00:09:14.000 --> 00:09:18.560 |
|
doing this is called Epsilon sampling |
|
|
|
00:09:16.800 --> 00:09:20.519 |
|
where we just sample tokens that have at |
|
|
|
00:09:18.560 --> 00:09:22.920 |
|
least some minimum amount of probability |
|
|
|
00:09:20.519 --> 00:09:24.720 |
|
to them right so maybe we just want |
|
|
|
00:09:22.920 --> 00:09:29.519 |
|
tokens that have probability of at least |
|
|
|
00:09:24.720 --> 00:09:31.240 |
|
0.05 here um in this first um example |
|
|
|
00:09:29.519 --> 00:09:32.640 |
|
everything has at least some reasonable |
|
|
|
00:09:31.240 --> 00:09:34.240 |
|
amount of probability so we're actually |
|
|
|
00:09:32.640 --> 00:09:36.240 |
|
going to sample from our full |
|
|
|
00:09:34.240 --> 00:09:37.720 |
|
distribution and then in the second |
|
|
|
00:09:36.240 --> 00:09:39.279 |
|
example when we have a lot of things |
|
|
|
00:09:37.720 --> 00:09:41.160 |
|
that are really unlikely we'll only |
|
|
|
00:09:39.279 --> 00:09:43.800 |
|
sample from sort of the more likely part |
|
|
|
00:09:41.160 --> 00:09:45.240 |
|
of the distribution um so all three of |
|
|
|
00:09:43.800 --> 00:09:47.000 |
|
these methods are sort of different ways |
|
|
|
00:09:45.240 --> 00:09:49.399 |
|
of trying to cut off the long tail using |
|
|
|
00:09:47.000 --> 00:09:51.480 |
|
sort of different |
|
|
|
00:09:49.399 --> 00:09:53.000 |
|
characteristics the tail of the |
|
|
|
00:09:51.480 --> 00:09:55.680 |
|
distribution though isn't the only thing |
|
|
|
00:09:53.000 --> 00:09:58.000 |
|
we could choose to modify um we could |
|
|
|
00:09:55.680 --> 00:09:59.880 |
|
also choose to modify this sort of |
|
|
|
00:09:58.000 --> 00:10:02.120 |
|
peakiness of the distribution |
|
|
|
00:09:59.880 --> 00:10:03.880 |
|
so if you look here at the middle of |
|
|
|
00:10:02.120 --> 00:10:06.600 |
|
these diagrams say this is your original |
|
|
|
00:10:03.880 --> 00:10:08.519 |
|
distribution over next tokens and maybe |
|
|
|
00:10:06.600 --> 00:10:11.040 |
|
you want to modify some properties of |
|
|
|
00:10:08.519 --> 00:10:12.640 |
|
this distribution like you say I want an |
|
|
|
00:10:11.040 --> 00:10:14.200 |
|
output that's really diverse and |
|
|
|
00:10:12.640 --> 00:10:15.680 |
|
interesting and open-ended like maybe |
|
|
|
00:10:14.200 --> 00:10:17.920 |
|
this is something like story generation |
|
|
|
00:10:15.680 --> 00:10:20.120 |
|
where you want to have sort of a lot of |
|
|
|
00:10:17.920 --> 00:10:21.279 |
|
maybe surprising things in your output |
|
|
|
00:10:20.120 --> 00:10:23.480 |
|
you could say I want to sort of |
|
|
|
00:10:21.279 --> 00:10:26.440 |
|
distribute my probability mass more over
|
|
|
00:10:23.480 --> 00:10:28.399 |
|
the token space and you can do this um |
|
|
|
00:10:26.440 --> 00:10:32.720 |
|
by sort of flattening this distribution |
|
|
|
00:10:28.399 --> 00:10:34.240 |
|
like you see on the right here um
|
|
|
00:10:32.720 --> 00:10:36.800 |
|
where now there's sort of more |
|
|
|
00:10:34.240 --> 00:10:39.040 |
|
probability mass spread over this um
|
|
|
00:10:36.800 --> 00:10:40.320 |
|
like wider set of tokens you could also |
|
|
|
00:10:39.040 --> 00:10:42.720 |
|
say the opposite right you could say |
|
|
|
00:10:40.320 --> 00:10:44.120 |
|
maybe I'm doing something like math |
|
|
|
00:10:42.720 --> 00:10:45.519 |
|
where there shouldn't really be a lot of |
|
|
|
00:10:44.120 --> 00:10:47.800 |
|
correct answers there should be really |
|
|
|
00:10:45.519 --> 00:10:50.399 |
|
only one or maybe only like a few |
|
|
|
00:10:47.800 --> 00:10:52.320 |
|
potential reasonable next answers and so |
|
|
|
00:10:50.399 --> 00:10:54.160 |
|
you can make your distribution peakier or
|
|
|
00:10:52.320 --> 00:10:56.639 |
|
sharper so that more of the probability |
|
|
|
00:10:54.160 --> 00:11:00.200 |
|
mass is on the things at the very top um |
|
|
|
00:10:56.639 --> 00:11:02.000 |
|
the way you do this is you modify your
|
|
|
00:11:00.200 --> 00:11:04.320 |
|
logits your outputs of the last layer of
|
|
|
00:11:02.000 --> 00:11:06.399 |
|
the model before you apply softmax so when
|
|
|
00:11:04.320 --> 00:11:08.360 |
|
you're predicting you get your outputs |
|
|
|
00:11:06.399 --> 00:11:10.040 |
|
of the last layer of the model and then |
|
|
|
00:11:08.360 --> 00:11:11.560 |
|
you apply softmax which turns those |
|
|
|
00:11:10.040 --> 00:11:15.240 |
|
outputs into a distribution right they |
|
|
|
00:11:11.560 --> 00:11:17.399 |
|
all sum up the um like mass over all
|
|
|
00:11:15.240 --> 00:11:18.839 |
|
vocabulary tokens sums to one and so |
|
|
|
00:11:17.399 --> 00:11:21.920 |
|
that is sort of a distribution you could |
|
|
|
00:11:18.839 --> 00:11:23.519 |
|
sample from if you divide those logits
|
|
|
00:11:21.920 --> 00:11:26.000 |
|
by some number before you apply that |
|
|
|
00:11:23.519 --> 00:11:27.880 |
|
softmax you can make that distribution |
|
|
|
00:11:26.000 --> 00:11:30.760 |
|
flatter by using a number greater than |
|
|
|
00:11:27.880 --> 00:11:32.440 |
|
one or peakier by using a number less than
|
|
|
00:11:30.760 --> 00:11:35.079 |
|
one and this type of parameter
|
|
|
00:11:32.440 --> 00:11:36.839 |
|
is called temperature um you can apply |
|
|
|
00:11:35.079 --> 00:11:38.480 |
|
this with any of the other methods for |
|
|
|
00:11:36.839 --> 00:11:40.279 |
|
sort of cutting off the long tail but |
|
|
|
00:11:38.480 --> 00:11:41.920 |
|
what people will often do is just apply |
|
|
|
00:11:40.279 --> 00:11:43.639 |
|
a temperature and then sample from that |
|
|
|
00:11:41.920 --> 00:11:45.320 |
|
distribution and that's what we call |
|
|
|
00:11:43.639 --> 00:11:48.720 |
|
temperature |
|
|
|
00:11:45.320 --> 00:11:49.920 |
|
sampling so these I think most of you |
|
|
|
00:11:48.720 --> 00:11:51.320 |
|
might already have been at least a |
|
|
|
00:11:49.920 --> 00:11:53.000 |
|
little bit familiar with some of these |
|
|
|
00:11:51.320 --> 00:11:56.079 |
|
methods I want to touch briefly on a |
|
|
|
00:11:53.000 --> 00:11:58.160 |
|
couple of other ideas for modifying this |
|
|
|
00:11:56.079 --> 00:11:59.680 |
|
distribution maybe some more complex and |
|
|
|
00:11:58.160 --> 00:12:01.839 |
|
more recent ideas and the one that I |
|
|
|
00:11:59.680 --> 00:12:04.279 |
|
want to talk about in more detail is |
|
|
|
00:12:01.839 --> 00:12:05.399 |
|
something called contrastive decoding so |
|
|
|
00:12:04.279 --> 00:12:07.360 |
|
the idea here is that we could |
|
|
|
00:12:05.399 --> 00:12:10.800 |
|
incorporate some extra information at |
|
|
|
00:12:07.360 --> 00:12:12.760 |
|
decoding time um using some other |
|
|
|
00:12:10.800 --> 00:12:15.320 |
|
distribution some other data or in this |
|
|
|
00:12:12.760 --> 00:12:17.320 |
|
case some other model so if you've ever |
|
|
|
00:12:15.320 --> 00:12:19.240 |
|
played around with a really like |
|
|
|
00:12:17.320 --> 00:12:21.800 |
|
relatively small language model maybe |
|
|
|
00:12:19.240 --> 00:12:23.320 |
|
something like GPT-2 small um you
|
|
|
00:12:21.800 --> 00:12:26.560 |
|
probably noticed you try to give it some |
|
|
|
00:12:23.320 --> 00:12:28.240 |
|
inputs and maybe it degenerates into |
|
|
|
00:12:26.560 --> 00:12:30.160 |
|
just repeating the same sequence over |
|
|
|
00:12:28.240 --> 00:12:31.720 |
|
and over maybe it gives you outputs that |
|
|
|
00:12:30.160 --> 00:12:33.399 |
|
are just completely incorrect like you |
|
|
|
00:12:31.720 --> 00:12:35.320 |
|
ask it a factual question and it gets it |
|
|
|
00:12:33.399 --> 00:12:37.120 |
|
wrong um and you don't see those |
|
|
|
00:12:35.320 --> 00:12:39.519 |
|
problems if you look at sort of a larger |
|
|
|
00:12:37.120 --> 00:12:41.399 |
|
model that's trained on more data so the |
|
|
|
00:12:39.519 --> 00:12:43.199 |
|
question here is can you use what that |
|
|
|
00:12:41.399 --> 00:12:46.480 |
|
smaller model is getting wrong to make |
|
|
|
00:12:43.199 --> 00:12:49.120 |
|
your larger model even better um and the |
|
|
|
00:12:46.480 --> 00:12:51.360 |
|
way we do this is by sort of the |
|
|
|
00:12:49.120 --> 00:12:52.880 |
|
intuition that if the smaller model |
|
|
|
00:12:51.360 --> 00:12:55.079 |
|
doesn't have a lot of probability on |
|
|
|
00:12:52.880 --> 00:12:57.160 |
|
some answer but the larger model
|
|
|
00:12:55.079 --> 00:12:58.519 |
|
does it's likely because that larger |
|
|
|
00:12:57.160 --> 00:13:02.279 |
|
model has learned something that the
|
|
|
00:12:58.519 --> 00:13:04.000 |
|
smaller model didn't know and so here we |
|
|
|
00:13:02.279 --> 00:13:06.199 |
|
modify the probability distribution |
|
|
|
00:13:04.000 --> 00:13:08.199 |
|
coming out of the larger model to choose |
|
|
|
00:13:06.199 --> 00:13:11.120 |
|
outputs that that model thinks are very |
|
|
|
00:13:08.199 --> 00:13:12.600 |
|
likely and the amateur, or the weaker
|
|
|
00:13:11.120 --> 00:13:15.480 |
|
model thinks are not |
|
|
|
00:13:12.600 --> 00:13:20.000 |
|
likely so in this example here from |
|
|
|
00:13:15.480 --> 00:13:22.560 |
|
their paper um if you have sort of a |
|
|
|
00:13:20.000 --> 00:13:27.199 |
|
input like Barack Obama was born in |
|
|
|
00:13:22.560 --> 00:13:29.720 |
|
Hawaii he was born in... um the smaller
|
|
|
00:13:27.199 --> 00:13:31.360 |
|
model would often do something like |
|
|
|
00:13:29.720 --> 00:13:35.399 |
|
start repeating and actually if you |
|
|
|
00:13:31.360 --> 00:13:36.720 |
|
sample sort of naively from the um |
|
|
|
00:13:35.399 --> 00:13:38.560 |
|
larger model you can wind up in these |
|
|
|
00:13:36.720 --> 00:13:40.000 |
|
situations as well right so if you just |
|
|
|
00:13:38.560 --> 00:13:41.959 |
|
choose the most likely thing at each |
|
|
|
00:13:40.000 --> 00:13:43.399 |
|
step you wind up in this loop where it's
|
|
|
00:13:41.959 --> 00:13:45.560 |
|
like he was born in Hawaii he was born |
|
|
|
00:13:43.399 --> 00:13:48.199 |
|
in Hawaii he was born in Hawaii um and |
|
|
|
00:13:45.560 --> 00:13:51.320 |
|
this is behavior we generally don't want |
|
|
|
00:13:48.199 --> 00:13:52.680 |
|
um if you do something like nucleus or |
|
|
|
00:13:51.320 --> 00:13:53.720 |
|
top-p sampling you can wind up with
|
|
|
00:13:52.680 --> 00:13:55.880 |
|
things that are actually completely |
|
|
|
00:13:53.720 --> 00:13:58.839 |
|
incorrect like he was born in Washington |
|
|
|
00:13:55.880 --> 00:14:01.480 |
|
DC um but if you use contrastive |
|
|
|
00:13:58.839 --> 00:14:04.120 |
|
decoding you take the outputs coming out |
|
|
|
00:14:01.480 --> 00:14:05.720 |
|
of your expert model here and you |
|
|
|
00:14:04.120 --> 00:14:07.680 |
|
subtract out the probabilities coming |
|
|
|
00:14:05.720 --> 00:14:10.160 |
|
out of the weaker model and you can wind |
|
|
|
00:14:07.680 --> 00:14:11.880 |
|
up with things that the
|
|
|
00:14:10.160 --> 00:14:13.759 |
|
stronger model ascribed probability to |
|
|
|
00:14:11.880 --> 00:14:15.480 |
|
but the weaker model did not likely |
|
|
|
00:14:13.759 --> 00:14:16.920 |
|
because these are sort of facts that the |
|
|
|
00:14:15.480 --> 00:14:18.959 |
|
larger model knows that the smaller |
|
|
|
00:14:16.920 --> 00:14:20.800 |
|
model does not so here we actually get |
|
|
|
00:14:18.959 --> 00:14:23.199 |
|
the year Barack Obama was born which is |
|
|
|
00:14:20.800 --> 00:14:25.800 |
|
maybe a fact that the larger model knows |
|
|
|
00:14:23.199 --> 00:14:27.639 |
|
and the smaller model didn't know um and |
|
|
|
00:14:25.800 --> 00:14:29.759 |
|
so this is just one of sort of a broad |
|
|
|
00:14:27.639 --> 00:14:32.560 |
|
class of methods where you use external |
|
|
|
00:14:29.759 --> 00:14:35.199 |
|
information to improve your decoding by |
|
|
|
00:14:32.560 --> 00:14:38.720 |
|
modifying this distribution at each |
|
|
|
00:14:35.199 --> 00:14:40.720 |
|
step um that was sort of a brief tour of |
|
|
|
00:14:38.720 --> 00:14:43.920 |
|
a couple of different sampling methods |
|
|
|
00:14:40.720 --> 00:14:43.920 |
|
before we move into search |
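The subtraction just described can be sketched per decoding step in pure Python. Everything here is an illustrative assumption rather than the paper's exact code: `contrastive_decode_step` is a made-up helper, and the `alpha` plausibility cutoff (only consider tokens whose expert probability is within a factor of the expert's maximum) mirrors the constraint used in the contrastive decoding paper.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits)) + m
    return [x - z for x in logits]

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1):
    """One next-token step of contrastive decoding (illustrative sketch).

    Scores each token by expert log-prob minus amateur log-prob, restricted
    to tokens whose expert probability is at least alpha times the expert's
    maximum probability, and returns the highest-scoring token index.
    """
    exp_lp = log_softmax(expert_logits)
    ama_lp = log_softmax(amateur_logits)
    cutoff = math.log(alpha) + max(exp_lp)  # plausibility constraint
    best, best_score = None, -math.inf
    for i, (e, a) in enumerate(zip(exp_lp, ama_lp)):
        if e < cutoff:
            continue  # expert finds this token too unlikely to consider
        if e - a > best_score:
            best, best_score = i, e - a
    return best
```

A token both models like (a degenerate continuation) scores near zero after the subtraction, while a token only the expert likes keeps a large positive score, which is the intuition from the slide.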
|
|
|
00:14:44.600 --> 00:14:50.440 |
|
yeah |
|
|
|
00:14:46.279 --> 00:14:54.880 |
|
yeah is it going to improve upon just |
|
|
|
00:14:50.440 --> 00:14:57.240 |
|
the yeah it generally does um and the |
|
|
|
00:14:54.880 --> 00:14:59.800 |
|
intuition for why this might be I think |
|
|
|
00:14:57.240 --> 00:15:01.680 |
|
is that there are sort of these |
|
|
|
00:14:59.800 --> 00:15:04.560 |
|
degenerate cases like just repeating |
|
|
|
00:15:01.680 --> 00:15:06.120 |
|
over and over that both the expert and |
|
|
|
00:15:04.560 --> 00:15:09.000 |
|
the weak model would give relatively |
|
|
|
00:15:06.120 --> 00:15:10.880 |
|
high probability to um maybe the expert |
|
|
|
00:15:09.000 --> 00:15:13.199 |
|
model is like slightly less likely to do |
|
|
|
00:15:10.880 --> 00:15:14.959 |
|
these things but it's still like sort of |
|
|
|
00:15:13.199 --> 00:15:16.639 |
|
an easy case for the model to learn and |
|
|
|
00:15:14.959 --> 00:15:18.120 |
|
so both of those models will have high |
|
|
|
00:15:16.639 --> 00:15:20.079 |
|
probability for those things but the |
|
|
|
00:15:18.120 --> 00:15:21.800 |
|
things that are genuinely like good |
|
|
|
00:15:20.079 --> 00:15:23.880 |
|
outputs that only the expert would get |
|
|
|
00:15:21.800 --> 00:15:25.519 |
|
right those will have low probability |
|
|
|
00:15:23.880 --> 00:15:27.600 |
|
under the weak model and so you're sort |
|
|
|
00:15:25.519 --> 00:15:30.880 |
|
of subtracting out all the degenerate |
|
|
|
00:15:27.600 --> 00:15:33.759 |
|
behaviors and keeping the really good outputs |
|
|
|
00:15:30.880 --> 00:15:35.240 |
|
this if you're generating a longer |
|
|
|
00:15:33.759 --> 00:15:37.440 |
|
sequence with |
|
|
|
00:15:35.240 --> 00:15:40.759 |
|
contrastive decoding how do you know which steps |
|
|
|
00:15:37.440 --> 00:15:45.120 |
|
you want to bring it in yeah this is a |
|
|
|
00:15:40.759 --> 00:15:48.560 |
|
great question so for this particular |
|
|
|
00:15:45.120 --> 00:15:50.560 |
|
case oh yeah sorry so this was if you're |
|
|
|
00:15:48.560 --> 00:15:52.279 |
|
doing contrastive decoding over a really |
|
|
|
00:15:50.560 --> 00:15:54.399 |
|
long sequence like when do you choose to |
|
|
|
00:15:52.279 --> 00:15:55.800 |
|
bring in the expert right and for |
|
|
|
00:15:54.399 --> 00:15:58.600 |
|
contrastive decoding we're actually |
|
|
|
00:15:55.800 --> 00:16:00.759 |
|
going to do this at every individual |
|
|
|
00:15:58.600 --> 00:16:02.440 |
|
time step so we're going to use the |
|
|
|
00:16:00.759 --> 00:16:04.800 |
|
expert model to decode and we're going |
|
|
|
00:16:02.440 --> 00:16:07.000 |
|
to bring in the amateur to sort of |
|
|
|
00:16:04.800 --> 00:16:09.079 |
|
subtract out probabilities at each next |
|
|
|
00:16:07.000 --> 00:16:10.399 |
|
token prediction um you don't have to do |
|
|
|
00:16:09.079 --> 00:16:12.800 |
|
that I think that's that's what they do |
|
|
|
00:16:10.399 --> 00:16:15.000 |
|
in the paper um you could also decide to |
|
|
|
00:16:12.800 --> 00:16:16.680 |
|
only do this sort of if you have high |
|
|
|
00:16:15.000 --> 00:16:19.639 |
|
uncertainty or something if you don't |
|
|
|
00:16:16.680 --> 00:16:22.639 |
|
have a really sharp probability |
|
|
|
00:16:19.639 --> 00:16:22.639 |
|
distribution |
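One way to realize that "only when the distribution isn't sharp" idea is an entropy gate. This is purely a hypothetical illustration: the helper names and the threshold value are invented, not from the paper or the lecture.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_contrast(expert_probs, threshold=1.0):
    """Apply the amateur-subtraction step only when the expert's next-token
    distribution is flat enough (high entropy); the threshold is arbitrary."""
    return entropy(expert_probs) > threshold
```

A sharp distribution (one token near probability 1) has entropy near zero, so the gate skips the contrastive step there and falls back to ordinary decoding.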
|
|
|
00:16:23.160 --> 00:16:28.160 |
|
yeah yeah how weak should the weak |
|
|
|
00:16:25.399 --> 00:16:30.199 |
|
predictor be um in the in the paper what |
|
|
|
00:16:28.160 --> 00:16:31.600 |
|
they're look at is actually not a huge |
|
|
|
00:16:30.199 --> 00:16:34.560 |
|
difference between the two models so you |
|
|
|
00:16:31.600 --> 00:16:35.800 |
|
can see here this is GPT-2 XL and small |
|
|
|
00:16:34.560 --> 00:16:37.319 |
|
so there's a difference in parameter |
|
|
|
00:16:35.800 --> 00:16:39.519 |
|
counts and like a bit of a difference in |
|
|
|
00:16:37.319 --> 00:16:42.160 |
|
data I think here but these are actually |
|
|
|
00:16:39.519 --> 00:16:44.959 |
|
not like GPT-2 XL is certainly not like a |
|
|
|
00:16:42.160 --> 00:16:48.399 |
|
super strong model now um I think they |
|
|
|
00:16:44.959 --> 00:16:50.920 |
|
try a couple of different settings and |
|
|
|
00:16:48.399 --> 00:16:52.319 |
|
the general intuition I think if I'm |
|
|
|
00:16:50.920 --> 00:16:54.880 |
|
remembering it correctly is that you |
|
|
|
00:16:52.319 --> 00:16:56.319 |
|
want a model that's not like so close in |
|
|
|
00:16:54.880 --> 00:16:58.000 |
|
performance to your expert that you're |
|
|
|
00:16:56.319 --> 00:16:59.839 |
|
basically just subtracting out useful |
|
|
|
00:16:58.000 --> 00:17:02.240 |
|
things but you also don't want a model |
|
|
|
00:16:59.839 --> 00:17:03.519 |
|
that's like so degenerate that it |
|
|
|
00:17:02.240 --> 00:17:04.959 |
|
hasn't learned anything useful about |
|
|
|
00:17:03.519 --> 00:17:06.839 |
|
your task at all so I think it might |
|
|
|
00:17:04.959 --> 00:17:09.600 |
|
depend on what task you're looking |
|
|
|
00:17:06.839 --> 00:17:12.919 |
|
at |
|
|
|
00:17:09.600 --> 00:17:14.559 |
|
yes this is for inference um so actually |
|
|
|
00:17:12.919 --> 00:17:17.640 |
|
everything we look at today will not |
|
|
|
00:17:14.559 --> 00:17:17.640 |
|
require any training of the |
|
|
|
00:17:19.360 --> 00:17:26.559 |
|
model Okay cool so now we're going to |
|
|
|
00:17:24.000 --> 00:17:30.039 |
|
step into sort of a slightly different |
|
|
|
00:17:26.559 --> 00:17:31.280 |
|
um set of strategies here which is maybe |
|
|
|
00:17:30.039 --> 00:17:33.039 |
|
we don't just want something from the |
|
|
|
00:17:31.280 --> 00:17:35.160 |
|
model distribution or something from a |
|
|
|
00:17:33.039 --> 00:17:37.760 |
|
modified distribution maybe we actually |
|
|
|
00:17:35.160 --> 00:17:39.840 |
|
just want the quote unquote best thing |
|
|
|
00:17:37.760 --> 00:17:42.960 |
|
the single most likely output given our |
|
|
|
00:17:39.840 --> 00:17:45.200 |
|
input right and here this would be the Y |
|
|
|
00:17:42.960 --> 00:17:48.039 |
|
hat the single sequence |
|
|
|
00:17:45.200 --> 00:17:51.919 |
|
that has the highest score p(y|x) |
|
|
|
00:17:48.039 --> 00:17:54.240 |
|
for the X that we gave the model um this |
|
|
|
00:17:51.919 --> 00:17:56.000 |
|
is this section is called mode seeking |
|
|
|
00:17:54.240 --> 00:17:58.039 |
|
search because this is the mode of the |
|
|
|
00:17:56.000 --> 00:18:00.440 |
|
distribution over outputs if you sampled |
|
|
|
00:17:58.039 --> 00:18:01.760 |
|
a huge huge number of times and you |
|
|
|
00:18:00.440 --> 00:18:04.720 |
|
looked at the single most likely |
|
|
|
00:18:01.760 --> 00:18:06.720 |
|
sequence you got it would be this y hat |
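Written out, the object of this mode-seeking search is the mode of the model's output distribution, where the sequence probability factorizes over next-token predictions:

```latex
\hat{y} \;=\; \operatorname*{argmax}_{y} \; p(y \mid x)
        \;=\; \operatorname*{argmax}_{y} \; \prod_{t=1}^{|y|} p(y_t \mid y_{<t}, x)
```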
|
|
|
00:18:04.720 --> 00:18:09.280 |
|
and so how do we find this |
|
|
|
00:18:06.720 --> 00:18:11.600 |
|
thing well one idea is we know the |
|
|
|
00:18:09.280 --> 00:18:13.159 |
|
distribution at each individual step |
|
|
|
00:18:11.600 --> 00:18:16.000 |
|
can we just pick the most likely thing |
|
|
|
00:18:13.159 --> 00:18:18.960 |
|
from that distribution and so in Greedy |
|
|
|
00:18:16.000 --> 00:18:21.080 |
|
decoding we take the argmax the single |
|
|
|
00:18:18.960 --> 00:18:22.720 |
|
highest probability token at each step |
|
|
|
00:18:21.080 --> 00:18:24.840 |
|
and we continue generating until the |
|
|
|
00:18:22.720 --> 00:18:26.600 |
|
single highest |
|
|
|
00:18:24.840 --> 00:18:28.840 |
|
probability token is the stop token |
|
|
|
00:18:26.600 --> 00:18:31.559 |
|
right the end of sequence token |
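That loop can be sketched in a few lines; `step_logits` here is an assumed stand-in for the model's next-token scores given the prefix, not any real API.

```python
def greedy_decode(step_logits, eos_id, max_len=50):
    """Greedy decoding sketch: take the argmax token at every step.

    step_logits(prefix) is assumed to return a list of scores over the
    vocabulary for the next token, given the token ids generated so far.
    Stops when the argmax token is eos_id or max_len is reached.
    """
    out = []
    for _ in range(max_len):
        scores = step_logits(out)
        tok = max(range(len(scores)), key=scores.__getitem__)
        if tok == eos_id:
            break
        out.append(tok)
    return out
```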
|
|
|
00:18:28.840 --> 00:18:33.400 |
|
um for an individual token right if we |
|
|
|
00:18:31.559 --> 00:18:35.559 |
|
only want a single token output this is |
|
|
|
00:18:33.400 --> 00:18:38.320 |
|
exactly what we want this is the single |
|
|
|
00:18:35.559 --> 00:18:40.400 |
|
most likely output um and that's great |
|
|
|
00:18:38.320 --> 00:18:44.000 |
|
but if we're looking at something that |
|
|
|
00:18:40.400 --> 00:18:45.120 |
|
is maybe several tokens long are we |
|
|
|
00:18:44.000 --> 00:18:47.360 |
|
actually going to get the highest |
|
|
|
00:18:45.120 --> 00:18:49.720 |
|
probability thing and if you kind of |
|
|
|
00:18:47.360 --> 00:18:52.159 |
|
squint at this you can see that maybe we |
|
|
|
00:18:49.720 --> 00:18:54.120 |
|
have a problem here where the highest |
|
|
|
00:18:52.159 --> 00:18:56.320 |
|
probability sequence that you get from |
|
|
|
00:18:54.120 --> 00:18:58.039 |
|
multiplying across multiple steps |
|
|
|
00:18:56.320 --> 00:18:59.559 |
|
doesn't necessarily start with the token |
|
|
|
00:18:58.039 --> 00:19:01.600 |
|
that was highest probability at time |
|
|
|
00:18:59.559 --> 00:19:03.200 |
|
step one right maybe if you're doing |
|
|
|
00:19:01.600 --> 00:19:04.720 |
|
something like unconditional generation |
|
|
|
00:19:03.200 --> 00:19:06.720 |
|
the highest probability token at time |
|
|
|
00:19:04.720 --> 00:19:08.360 |
|
step one is always 'the' but there could |
|
|
|
00:19:06.720 --> 00:19:09.919 |
|
be a really probable sentence that just |
|
|
|
00:19:08.360 --> 00:19:11.480 |
|
doesn't happen to start with the |
|
|
|
00:19:09.919 --> 00:19:12.720 |
|
word 'the' and you would never find it |
|
|
|
00:19:11.480 --> 00:19:15.080 |
|
using greedy |
|
|
|
00:19:12.720 --> 00:19:17.360 |
|
decoding so this isn't going to give us |
|
|
|
00:19:15.080 --> 00:19:19.799 |
|
the highest probability output over a |
|
|
|
00:19:17.360 --> 00:19:22.000 |
|
sequence that's more than one token long |
|
|
|
00:19:19.799 --> 00:19:23.360 |
|
can we do anything better to try to find |
|
|
|
00:19:22.000 --> 00:19:25.640 |
|
this um |
|
|
|
00:19:23.360 --> 00:19:27.559 |
|
output and here we get into sort of one |
|
|
|
00:19:25.640 --> 00:19:29.520 |
|
of the most popular decoding methods the |
|
|
|
00:19:27.559 --> 00:19:32.600 |
|
one that you maybe heard of before which |
|
|
|
00:19:29.520 --> 00:19:35.080 |
|
is beam search the idea here is that we |
|
|
|
00:19:32.600 --> 00:19:36.559 |
|
don't want to miss a high probability |
|
|
|
00:19:35.080 --> 00:19:38.880 |
|
token that's hidden behind a lower |
|
|
|
00:19:36.559 --> 00:19:40.200 |
|
probability prefix so we want to kind of |
|
|
|
00:19:38.880 --> 00:19:42.000 |
|
search through a couple of different |
|
|
|
00:19:40.200 --> 00:19:43.760 |
|
options so that we don't discard |
|
|
|
00:19:42.000 --> 00:19:47.120 |
|
something too early that might have high |
|
|
|
00:19:43.760 --> 00:19:49.360 |
|
probability um later on in generation |
|
|
|
00:19:47.120 --> 00:19:50.919 |
|
and this is a type of breadth-first search |
|
|
|
00:19:49.360 --> 00:19:53.200 |
|
so we're going to look at a wide variety |
|
|
|
00:19:50.919 --> 00:19:54.600 |
|
of options at a given time step we're |
|
|
|
00:19:53.200 --> 00:19:55.600 |
|
going to pick some set of them to |
|
|
|
00:19:54.600 --> 00:19:57.120 |
|
continue and then we're going to look at |
|
|
|
00:19:55.600 --> 00:19:58.919 |
|
a wide variety of options for the next |
|
|
|
00:19:57.120 --> 00:19:59.960 |
|
time step instead of generating all the |
|
|
|
00:19:58.919 --> 00:20:02.200 |
|
way through a sequence and then |
|
|
|
00:19:59.960 --> 00:20:04.320 |
|
generating all the way through another |
|
|
|
00:20:02.200 --> 00:20:05.760 |
|
sequence um and how this works is we're |
|
|
|
00:20:04.320 --> 00:20:07.559 |
|
going to pick sort of a number of |
|
|
|
00:20:05.760 --> 00:20:09.400 |
|
candidates we'd like to explore a beam |
|
|
|
00:20:07.559 --> 00:20:11.039 |
|
width so in this example we're going to |
|
|
|
00:20:09.400 --> 00:20:12.799 |
|
pick three and we're going to say all |
|
|
|
00:20:11.039 --> 00:20:15.480 |
|
right here are maybe three options for |
|
|
|
00:20:12.799 --> 00:20:17.640 |
|
time step one and if we pick each of |
|
|
|
00:20:15.480 --> 00:20:19.760 |
|
those three options what would be the |
|
|
|
00:20:17.640 --> 00:20:21.799 |
|
three most likely things for time step |
|
|
|
00:20:19.760 --> 00:20:23.200 |
|
two right rather than choosing just the |
|
|
|
00:20:21.799 --> 00:20:24.520 |
|
single most likely thing in Greedy |
|
|
|
00:20:23.200 --> 00:20:26.960 |
|
decoding we're going to pick three |
|
|
|
00:20:24.520 --> 00:20:29.120 |
|
options and so now we have three options |
|
|
|
00:20:26.960 --> 00:20:32.559 |
|
for time step one three options for time |
|
|
|
00:20:29.120 --> 00:20:34.280 |
|
step two we now have nine options um |
|
|
|
00:20:32.559 --> 00:20:36.320 |
|
here right three options and then three |
|
|
|
00:20:34.280 --> 00:20:37.679 |
|
more for each of these and we don't want |
|
|
|
00:20:36.320 --> 00:20:40.159 |
|
to continue doing this because this is |
|
|
|
00:20:37.679 --> 00:20:41.960 |
|
going to sort of combinatorially explode so |
|
|
|
00:20:40.159 --> 00:20:44.080 |
|
we need to choose some subset of these |
|
|
|
00:20:41.960 --> 00:20:45.880 |
|
to continue with and the way we do that |
|
|
|
00:20:44.080 --> 00:20:47.799 |
|
is we look at the probability over this |
|
|
|
00:20:45.880 --> 00:20:49.240 |
|
two token sequence and we choose the two |
|
|
|
00:20:47.799 --> 00:20:51.520 |
|
that have the highest probability |
|
|
|
00:20:49.240 --> 00:20:53.400 |
|
overall so in this instance we've chosen |
|
|
|
00:20:51.520 --> 00:20:55.679 |
|
sort of one thing from this first group |
|
|
|
00:20:53.400 --> 00:20:57.760 |
|
and two things from the second group and |
|
|
|
00:20:55.679 --> 00:20:59.760 |
|
now we're back down to three hypotheses |
|
|
|
00:20:57.760 --> 00:21:02.120 |
|
each now two tokens long and we'll |
|
|
|
00:20:59.760 --> 00:21:04.000 |
|
continue generating to time step three |
|
|
|
00:21:02.120 --> 00:21:05.600 |
|
we'll get nine options we'll prune it back |
|
|
|
00:21:04.000 --> 00:21:07.760 |
|
down to three and we'll continue until |
|
|
|
00:21:05.600 --> 00:21:09.159 |
|
the end of generation where we now have |
|
|
|
00:21:07.760 --> 00:21:10.679 |
|
three sequences and we'll just pick the |
|
|
|
00:21:09.159 --> 00:21:14.000 |
|
one that's highest probability out of |
|
|
|
00:21:10.679 --> 00:21:15.679 |
|
those three to return um this is not |
|
|
|
00:21:14.000 --> 00:21:17.360 |
|
guaranteed to get you the highest |
|
|
|
00:21:15.679 --> 00:21:18.480 |
|
probability thing right you still have |
|
|
|
00:21:17.360 --> 00:21:20.039 |
|
this risk that you could be sort of |
|
|
|
00:21:18.480 --> 00:21:22.279 |
|
pruning out something that's high |
|
|
|
00:21:20.039 --> 00:21:24.159 |
|
probability but in general this sort of |
|
|
|
00:21:22.279 --> 00:21:26.600 |
|
works um much better than greedy |
|
|
|
00:21:24.159 --> 00:21:28.520 |
|
decoding and this is if you have a |
|
|
|
00:21:26.600 --> 00:21:31.120 |
|
language model and you're sort of not |
|
|
|
00:21:28.520 --> 00:21:32.440 |
|
sure what decoding method it's using odds |
|
|
|
00:21:31.120 --> 00:21:34.200 |
|
are pretty good it's either beam search |
|
|
|
00:21:32.440 --> 00:21:37.120 |
|
or temperature sampling right this is |
|
|
|
00:21:34.200 --> 00:21:40.039 |
|
very effective this is used um pretty |
|
|
|
00:21:37.120 --> 00:21:41.760 |
|
broadly there are however some issues |
|
|
|
00:21:40.039 --> 00:21:43.760 |
|
with beam search and one of the biggest |
|
|
|
00:21:41.760 --> 00:21:46.159 |
|
ones is that when you're doing this |
|
|
|
00:21:43.760 --> 00:21:47.679 |
|
maximum likelihood search using the |
|
|
|
00:21:46.159 --> 00:21:50.080 |
|
search to find something |
|
|
|
00:21:47.679 --> 00:21:51.760 |
|
that's very high likelihood um you |
|
|
|
00:21:50.080 --> 00:21:53.679 |
|
really sacrifice a lot of diversity in |
|
|
|
00:21:51.760 --> 00:21:55.320 |
|
your outputs and in particular you could |
|
|
|
00:21:53.679 --> 00:21:57.279 |
|
wind up at the end of beam search with |
|
|
|
00:21:55.320 --> 00:21:58.919 |
|
three different outputs to choose from |
|
|
|
00:21:57.279 --> 00:22:00.120 |
|
that are all pretty pretty much the same |
|
|
|
00:21:58.919 --> 00:22:02.640 |
|
like they're slightly different token |
|
|
|
00:22:00.120 --> 00:22:04.559 |
|
sequences but they look very similar and |
|
|
|
00:22:02.640 --> 00:22:07.480 |
|
so maybe you want to get sort of a |
|
|
|
00:22:04.559 --> 00:22:08.919 |
|
more diverse set um there's a couple of |
|
|
|
00:22:07.480 --> 00:22:10.640 |
|
different methods in this category I'm |
|
|
|
00:22:08.919 --> 00:22:12.679 |
|
going to very briefly shout out two of |
|
|
|
00:22:10.640 --> 00:22:14.200 |
|
them um but the idea here is to sort of |
|
|
|
00:22:12.679 --> 00:22:16.440 |
|
reintroduce some of the benefits of |
|
|
|
00:22:14.200 --> 00:22:19.120 |
|
sampling while still doing this kind of |
|
|
|
00:22:16.440 --> 00:22:20.919 |
|
search for high probability things um |
|
|
|
00:22:19.120 --> 00:22:22.600 |
|
diverse beam search is one of these |
|
|
|
00:22:20.919 --> 00:22:25.520 |
|
methods and here the idea is that we |
|
|
|
00:22:22.600 --> 00:22:27.279 |
|
want to modify that scoring step when we |
|
|
|
00:22:25.520 --> 00:22:28.600 |
|
choose which three out of our nine beams |
|
|
|
00:22:27.279 --> 00:22:30.200 |
|
we want to continue |
|
|
|
00:22:28.600 --> 00:22:32.000 |
|
to avoid choosing things that are really |
|
|
|
00:22:30.200 --> 00:22:34.320 |
|
really close to each other right so |
|
|
|
00:22:32.000 --> 00:22:36.039 |
|
maybe our highest probability thing is |
|
|
|
00:22:34.320 --> 00:22:37.559 |
|
some sequence a and then if we look at |
|
|
|
00:22:36.039 --> 00:22:39.520 |
|
the other sequences there's one that's |
|
|
|
00:22:37.559 --> 00:22:41.279 |
|
pretty high probability but very similar |
|
|
|
00:22:39.520 --> 00:22:43.600 |
|
to that sequence and there's one that's |
|
|
|
00:22:41.279 --> 00:22:45.320 |
|
like slightly lower probability but very |
|
|
|
00:22:43.600 --> 00:22:47.200 |
|
different and so maybe we would choose a |
|
|
|
00:22:45.320 --> 00:22:49.679 |
|
sequence that is a little lower |
|
|
|
00:22:47.200 --> 00:22:51.760 |
|
probability to maximize diversity in our |
|
|
|
00:22:49.679 --> 00:22:53.799 |
|
set to try to get like sort of a wider |
|
|
|
00:22:51.760 --> 00:22:56.200 |
|
range of options to choose from later in |
|
|
|
00:22:53.799 --> 00:22:58.200 |
|
generation so this modifies the scoring |
|
|
|
00:22:56.200 --> 00:23:00.120 |
|
to not just take into account likelihood |
|
|
|
00:22:58.200 --> 00:23:03.200 |
|
but also similarity to other |
|
|
|
00:23:00.120 --> 00:23:05.400 |
|
beams another option down this path is |
|
|
|
00:23:03.200 --> 00:23:07.640 |
|
stochastic beam search where we're going |
|
|
|
00:23:05.400 --> 00:23:09.279 |
|
to keep the scoring the same but rather |
|
|
|
00:23:07.640 --> 00:23:11.679 |
|
than choosing just the top three most |
|
|
|
00:23:09.279 --> 00:23:13.279 |
|
likely tokens to expand out each beam |
|
|
|
00:23:11.679 --> 00:23:15.200 |
|
we're actually going to sample from some |
|
|
|
00:23:13.279 --> 00:23:17.000 |
|
distribution and you could sample from |
|
|
|
00:23:15.200 --> 00:23:18.760 |
|
the model distribution directly using |
|
|
|
00:23:17.000 --> 00:23:20.200 |
|
ancestral sampling or you could use any |
|
|
|
00:23:18.760 --> 00:23:22.679 |
|
of our sampling methods we talked about |
|
|
|
00:23:20.200 --> 00:23:24.200 |
|
in the last section to do this and the |
|
|
|
00:23:22.679 --> 00:23:25.799 |
|
the idea here is sort of similar to |
|
|
|
00:23:24.200 --> 00:23:29.279 |
|
diverse beam search we want to get sort |
|
|
|
00:23:25.799 --> 00:23:31.240 |
|
of a wider exploration of our models |
|
|
|
00:23:29.279 --> 00:23:33.520 |
|
like output space you know we want to |
|
|
|
00:23:31.240 --> 00:23:35.360 |
|
sort of explore more things instead of |
|
|
|
00:23:33.520 --> 00:23:36.760 |
|
just seeking winding up with a bunch of |
|
|
|
00:23:35.360 --> 00:23:39.679 |
|
outputs that look very similar at the |
|
|
|
00:23:36.760 --> 00:23:41.120 |
|
end of beam search um if folks are |
|
|
|
00:23:39.679 --> 00:23:43.679 |
|
interested in these I think these are |
|
|
|
00:23:41.120 --> 00:23:46.159 |
|
both linked on the website um the the |
|
|
|
00:23:43.679 --> 00:23:48.679 |
|
papers that both of these ideas came |
|
|
|
00:23:46.159 --> 00:23:51.480 |
|
from |
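In the spirit of the diverse beam search selection step just described, pruning can trade log-probability against similarity to already-chosen beams. This is a sketch under stated assumptions: the token-overlap measure and the `penalty` weight are illustrative choices, not the paper's exact formulation.

```python
def select_diverse(candidates, k, penalty=1.0):
    """Pick k beams, trading off log-probability against similarity.

    candidates: list of (token_ids, logprob) pairs. Each pick penalizes
    remaining candidates by their maximum fractional token overlap with
    the beams already chosen.
    """
    chosen = []
    pool = list(candidates)
    while pool and len(chosen) < k:
        def score(cand):
            toks, lp = cand
            if not chosen:
                return lp  # first pick is purely by likelihood
            overlap = max(
                len(set(toks) & set(c_toks)) / max(len(toks), 1)
                for c_toks, _ in chosen
            )
            return lp - penalty * overlap
        best = max(pool, key=score)
        pool.remove(best)
        chosen.append(best)
    return chosen
```

With a large enough penalty, a slightly lower-probability but very different beam wins over a near-duplicate of the best beam, which is the behavior the slide describes.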
|
|
|
00:23:48.679 --> 00:23:54.400 |
|
Yes um for stochastic |
|
|
|
00:23:51.480 --> 00:23:57.039 |
|
beam search the sampling probability takes into |
|
|
|
00:23:54.400 --> 00:23:59.039 |
|
account the current path that we already |
|
|
|
00:23:57.039 --> 00:24:02.000 |
|
traveled okay |
|
|
|
00:23:59.039 --> 00:24:04.320 |
|
yeah exactly so it's this um like |
|
|
|
00:24:02.000 --> 00:24:05.640 |
|
selection step here but we're instead of |
|
|
|
00:24:04.320 --> 00:24:07.760 |
|
just doing greedy selection we're going |
|
|
|
00:24:05.640 --> 00:24:11.760 |
|
to do |
|
|
|
00:24:07.760 --> 00:24:17.520 |
|
sampling yes my question was on the |
|
|
|
00:24:11.760 --> 00:24:23.200 |
|
yeah like you for something super simple |
|
|
|
00:24:17.520 --> 00:24:26.520 |
|
like if both of them have a high are you |
|
|
|
00:24:23.200 --> 00:24:28.120 |
|
like yeah so you would if it has a |
|
|
|
00:24:26.520 --> 00:24:30.080 |
|
really high probability under both |
|
|
|
00:24:28.120 --> 00:24:32.880 |
|
models it would have a lower probability |
|
|
|
00:24:30.080 --> 00:24:35.080 |
|
after doing this sort of contrastive |
|
|
|
00:24:32.880 --> 00:24:36.600 |
|
decoding right so if the smaller |
|
|
|
00:24:35.080 --> 00:24:38.799 |
|
model's really good at your task this |
|
|
|
00:24:36.600 --> 00:24:40.960 |
|
might not work very |
|
|
|
00:24:38.799 --> 00:24:43.360 |
|
well yeah I think in the paper they're |
|
|
|
00:24:40.960 --> 00:24:45.320 |
|
generally evaluating on these sort of |
|
|
|
00:24:43.360 --> 00:24:48.279 |
|
like open ended generation task I bet |
|
|
|
00:24:45.320 --> 00:24:51.279 |
|
this works a lot worse for |
|
|
|
00:24:48.279 --> 00:24:51.279 |
|
now |
|
|
|
00:24:56.760 --> 00:24:59.760 |
|
yes |
|
|
|
00:25:02.440 --> 00:25:08.120 |
|
you yeah this is a great question um and |
|
|
|
00:25:05.960 --> 00:25:11.559 |
|
so the question is how do we measure |
|
|
|
00:25:08.120 --> 00:25:14.120 |
|
similar beams um you can sort of Define |
|
|
|
00:25:11.559 --> 00:25:15.559 |
|
any kind of similarity function you like |
|
|
|
00:25:14.120 --> 00:25:17.520 |
|
here um anything that you'd use to |
|
|
|
00:25:15.559 --> 00:25:20.440 |
|
evaluate like how similar something is |
|
|
|
00:25:17.520 --> 00:25:22.360 |
|
to a gold reference right um I think in |
|
|
|
00:25:20.440 --> 00:25:25.039 |
|
the original diverse beam search they do |
|
|
|
00:25:22.360 --> 00:25:27.760 |
|
this by looking at like exact token |
|
|
|
00:25:25.039 --> 00:25:30.640 |
|
match across the two right like if these |
|
|
|
00:25:27.760 --> 00:25:33.880 |
|
beams are the same in all but one of the |
|
|
|
00:25:30.640 --> 00:25:35.600 |
|
tokens or they have like you know 50% of |
|
|
|
00:25:33.880 --> 00:25:37.120 |
|
the tokens are shared across the beams |
|
|
|
00:25:35.600 --> 00:25:38.559 |
|
and maybe these are really similar and |
|
|
|
00:25:37.120 --> 00:25:40.559 |
|
they should try to choose two things |
|
|
|
00:25:38.559 --> 00:25:42.600 |
|
that are different um but you could swap |
|
|
|
00:25:40.559 --> 00:25:46.200 |
|
that out for any |
|
|
|
00:25:42.600 --> 00:25:49.440 |
|
metc yes so |
|
|
|
00:25:46.200 --> 00:25:50.960 |
|
there's kind of like a shift that's happening |
|
|
|
00:25:49.440 --> 00:25:53.360 |
|
at |
|
|
|
00:25:50.960 --> 00:25:55.000 |
|
every step for the stochastic beam search |
|
|
|
00:25:53.360 --> 00:25:57.720 |
|
there's like a shift what do you mean |
|
|
|
00:25:55.000 --> 00:26:00.520 |
|
by a shift so it says modify the next |
|
|
|
00:25:57.720 --> 00:26:03.000 |
|
search selection because they're like um |
|
|
|
00:26:00.520 --> 00:26:06.919 |
|
it is searching at a different space and |
|
|
|
00:26:03.000 --> 00:26:09.679 |
|
it's not searching within the same |
|
|
|
00:26:06.919 --> 00:26:14.080 |
|
space is it searching in a different space |
|
|
|
00:26:09.679 --> 00:26:15.799 |
|
yeah so it's um in the same probability |
|
|
|
00:26:14.080 --> 00:26:18.399 |
|
distribution but it'll see a different |
|
|
|
00:26:15.799 --> 00:26:20.840 |
|
part of the distribution so when you're |
|
|
|
00:26:18.399 --> 00:26:22.640 |
|
doing the greedy search you'll only ever |
|
|
|
00:26:20.840 --> 00:26:24.559 |
|
look at the top three tokens in the next |
|
|
|
00:26:22.640 --> 00:26:27.120 |
|
token distribution because you're just |
|
|
|
00:26:24.559 --> 00:26:29.840 |
|
selecting like the maximums um but in |
|
|
|
00:26:27.120 --> 00:26:31.360 |
|
sampling you could you could get the |
|
|
|
00:26:29.840 --> 00:26:32.880 |
|
same tokens right if they're really high |
|
|
|
00:26:31.360 --> 00:26:35.720 |
|
likelihood but you could also sample |
|
|
|
00:26:32.880 --> 00:26:38.399 |
|
something that's further down in the |
|
|
|
00:26:35.720 --> 00:26:42.760 |
|
distribution yeah as a followup to that |
|
|
|
00:26:38.399 --> 00:26:44.880 |
|
like in our sampling do we take into |
|
|
|
00:26:42.760 --> 00:26:46.960 |
|
account the probability of the prefix |
|
|
|
00:26:44.880 --> 00:26:50.679 |
|
like the current hypothesis right |
|
|
|
00:26:46.960 --> 00:26:51.760 |
|
because otherwise it is the same as just |
|
|
|
00:26:50.679 --> 00:26:54.279 |
|
uh |
|
|
|
00:26:51.760 --> 00:26:57.159 |
|
in yeah so in the sampling we're taking |
|
|
|
00:26:54.279 --> 00:27:00.120 |
|
into account the prefix |
|
|
|
00:26:57.159 --> 00:27:02.600 |
|
yeah so so it we will take into account |
|
|
|
00:27:00.120 --> 00:27:06.200 |
|
the prefix but this sampling mechanism |
|
|
|
00:27:02.600 --> 00:27:08.320 |
|
here could be ancestral sampling um the |
|
|
|
00:27:06.200 --> 00:27:10.480 |
|
only the difference here is that we're |
|
|
|
00:27:08.320 --> 00:27:12.600 |
|
also doing a sort of search step on top |
|
|
|
00:27:10.480 --> 00:27:14.679 |
|
of that to choose the maximum likelihood |
|
|
|
00:27:12.600 --> 00:27:18.080 |
|
things across multiple |
|
|
|
00:27:14.679 --> 00:27:20.559 |
|
steps another important thing um is you |
|
|
|
00:27:18.080 --> 00:27:22.279 |
|
sample without replacement and so |
|
|
|
00:27:20.559 --> 00:27:24.120 |
|
normally you sample with replacement and |
|
|
|
00:27:22.279 --> 00:27:25.840 |
|
you might get exactly the same thing but |
|
|
|
00:27:24.120 --> 00:27:28.000 |
|
when you're doing stochastic beam search you |
|
|
|
00:27:25.840 --> 00:27:30.240 |
|
sample without replacement so you get |
|
|
|
00:27:28.000 --> 00:27:33.279 |
|
like three ones according to the |
|
|
|
00:27:30.240 --> 00:27:36.080 |
|
probability but they're guaranteed to be |
|
|
|
00:27:33.279 --> 00:27:37.799 |
|
different right so beam search like one |
|
|
|
00:27:36.080 --> 00:27:39.559 |
|
of the characteristics of beam search is |
|
|
|
00:27:37.799 --> 00:27:41.640 |
|
you always get three different things |
|
|
|
00:27:39.559 --> 00:27:44.240 |
|
because you're picking the three top |
|
|
|
00:27:41.640 --> 00:27:45.760 |
|
when you do sampling uh like stochastic |
|
|
|
00:27:44.240 --> 00:27:47.399 |
|
beam search you get three different |
|
|
|
00:27:45.760 --> 00:27:49.440 |
|
things they're not guaranteed to be the |
|
|
|
00:27:47.399 --> 00:27:51.760 |
|
top they could be distributed according |
|
|
|
00:27:49.440 --> 00:27:54.360 |
|
to the probability distribution but they're |
|
|
|
00:27:51.760 --> 00:27:55.840 |
|
guaranteed to be different so um you can take a look at |
|
|
|
00:27:54.360 --> 00:27:58.039 |
|
the paper for more details of exactly |
|
|
|
00:27:55.840 --> 00:28:00.159 |
|
how it looks but that that's |
|
|
|
00:27:58.039 --> 00:28:03.039 |
|
so then is the main difference that |
|
|
|
00:28:00.159 --> 00:28:05.120 |
|
compared to just sampling that we have n |
|
|
|
00:28:03.039 --> 00:28:08.519 |
|
options that we're keeping track of instead of |
|
|
|
00:28:05.120 --> 00:28:10.320 |
|
going with only one and |
|
|
|
00:28:08.519 --> 00:28:11.200 |
|
you can't yeah you can't sample the same |
|
|
|
00:28:10.320 --> 00:28:14.960 |
|
thing |
|
|
|
00:28:11.200 --> 00:28:16.919 |
|
right yeah so just uh repeating for the |
|
|
|
00:28:14.960 --> 00:28:19.159 |
|
recording there are n options we're keeping track of |
|
|
|
00:28:16.919 --> 00:28:22.240 |
|
and they're all going to be unique token |
|
|
|
00:28:19.159 --> 00:28:24.240 |
|
sequences at least um you can actually |
|
|
|
00:28:22.240 --> 00:28:26.200 |
|
get the same output sequence from two |
|
|
|
00:28:24.240 --> 00:28:28.120 |
|
different token sequences if you tokenize |
|
|
|
00:28:26.200 --> 00:28:32.360 |
|
slightly differently um but these will |
|
|
|
00:28:28.120 --> 00:28:37.840 |
|
always be unique tokens |
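Sampling several distinct continuations without replacement, as just described, can be done with the Gumbel-top-k trick. This is a sketch of the core idea only; the actual stochastic beam search paper applies it over whole prefixes with extra bookkeeping, and the function name here is an invented helper.

```python
import math
import random

def gumbel_topk(logprobs, k, rng=None):
    """Sample k distinct indices without replacement by perturbing each
    log-probability with Gumbel noise and taking the top-k keys."""
    rng = rng or random.Random()
    keys = []
    for lp in logprobs:
        u = rng.random()
        while u == 0.0:  # guard: log(0) below would fail
            u = rng.random()
        keys.append(lp - math.log(-math.log(u)))  # Gumbel perturbation
    # Indices sorted by perturbed key, largest first; the top k are a
    # without-replacement sample that tracks the distribution.
    order = sorted(range(len(logprobs)), key=keys.__getitem__, reverse=True)
    return order[:k]
```

Because the k winners are distinct by construction, you always get k different tokens, unlike repeated independent sampling which can draw the same token twice.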
|
|
|
00:28:32.360 --> 00:28:39.279 |
|
so that was sort of a |
|
|
|
00:28:37.840 --> 00:28:41.320 |
|
set of methods that we've developed to |
|
|
|
00:28:39.279 --> 00:28:43.600 |
|
try to find the most probable sequence |
|
|
|
00:28:41.320 --> 00:28:44.480 |
|
out of the model um but in the next |
|
|
|
00:28:43.600 --> 00:28:46.039 |
|
section here we're going to sort of |
|
|
|
00:28:44.480 --> 00:28:50.240 |
|
think about whether that's actually what |
|
|
|
00:28:46.039 --> 00:28:51.679 |
|
we want to do at all um so the question |
|
|
|
00:28:50.240 --> 00:28:54.240 |
|
is do we really want the highest |
|
|
|
00:28:51.679 --> 00:28:56.880 |
|
probability thing um we know that |
|
|
|
00:28:54.240 --> 00:28:58.600 |
|
outputs with really low probability tend |
|
|
|
00:28:56.880 --> 00:29:00.640 |
|
to be really like worse than outputs |
|
|
|
00:28:58.600 --> 00:29:03.240 |
|
with high probability right maybe I'm |
|
|
|
00:29:00.640 --> 00:29:05.840 |
|
trying to predict like what the next |
|
|
|
00:29:03.240 --> 00:29:08.640 |
|
sentence should be after the cat saw the |
|
|
|
00:29:05.840 --> 00:29:11.240 |
|
dog right the cat sat down is way higher |
|
|
|
00:29:08.640 --> 00:29:12.559 |
|
probability than the cat grew wings and |
|
|
|
00:29:11.240 --> 00:29:14.039 |
|
at least with the cats I've met that |
|
|
|
00:29:12.559 --> 00:29:15.679 |
|
sounds pretty that sounds pretty much |
|
|
|
00:29:14.039 --> 00:29:19.559 |
|
right right like this is a much better |
|
|
|
00:29:15.679 --> 00:29:21.720 |
|
output than the cat grew wings but if you |
|
|
|
00:29:19.559 --> 00:29:24.159 |
|
look at just the outputs with relatively |
|
|
|
00:29:21.720 --> 00:29:25.960 |
|
high probability it's sort of less clear |
|
|
|
00:29:24.159 --> 00:29:27.880 |
|
that this defines an exact ranking |
|
|
|
00:29:25.960 --> 00:29:30.559 |
|
between those outputs right |
|
|
|
00:29:27.880 --> 00:29:32.600 |
|
is the cat sat down necessarily better |
|
|
|
00:29:30.559 --> 00:29:34.519 |
|
than the cat ran away these both seem |
|
|
|
00:29:32.600 --> 00:29:35.720 |
|
like pretty reasonable outputs to me |
|
|
|
00:29:34.519 --> 00:29:40.200 |
|
even though one of them is slightly |
|
|
|
00:29:35.720 --> 00:29:42.799 |
|
higher probability and so do we |
|
|
|
00:29:40.200 --> 00:29:45.240 |
|
really like necessarily need to recover |
|
|
|
00:29:42.799 --> 00:29:47.200 |
|
the cat sat down um and this gets a |
|
|
|
00:29:45.240 --> 00:29:49.399 |
|
little a little more complicated still |
|
|
|
00:29:47.200 --> 00:29:51.120 |
|
if we look at sort of a range of outputs |
|
|
|
00:29:49.399 --> 00:29:53.120 |
|
so say there's sort of six outputs that |
|
|
|
00:29:51.120 --> 00:29:55.240 |
|
our model could give us um and here |
|
|
|
00:29:53.120 --> 00:29:57.559 |
|
we're looking at sort of full sequences |
|
|
|
00:29:55.240 --> 00:30:00.120 |
|
not individual tokens just for clarity |
|
|
|
00:29:57.559 --> 00:30:02.640 |
|
so maybe our outputs in order of |
|
|
|
00:30:00.120 --> 00:30:05.840 |
|
probability are the cat sat down it ran |
|
|
|
00:30:02.640 --> 00:30:08.240 |
|
away it sprinted off it got out of there |
|
|
|
00:30:05.840 --> 00:30:09.720 |
|
it's very small and it grew Wings right |
|
|
|
00:30:08.240 --> 00:30:11.440 |
|
so we're definitely sure that the cat |
|
|
|
00:30:09.720 --> 00:30:13.159 |
|
sat down is a better output than the cat |
|
|
|
00:30:11.440 --> 00:30:15.360 |
|
grew wings and if we're doing a mode |
|
|
|
00:30:13.159 --> 00:30:17.600 |
|
seeking search we would find that as our |
|
|
|
00:30:15.360 --> 00:30:19.440 |
|
most likely thing if we're if we you |
|
|
|
00:30:17.600 --> 00:30:21.440 |
|
know do a good job searching and we'd |
|
|
|
00:30:19.440 --> 00:30:23.519 |
|
return that as our output but if you |
|
|
|
00:30:21.440 --> 00:30:25.919 |
|
look at the rest of this distribution |
|
|
|
00:30:23.519 --> 00:30:27.880 |
|
you see that there's actually a whole |
|
|
|
00:30:25.919 --> 00:30:29.240 |
|
set of outputs after that all say |
|
|
|
00:30:27.880 --> 00:30:31.720 |
|
something that kind of means the cat |
|
|
|
00:30:29.240 --> 00:30:33.480 |
|
left the area right it's just that this |
|
|
|
00:30:31.720 --> 00:30:35.200 |
|
probability is split over these three |
|
|
|
00:30:33.480 --> 00:30:37.080 |
|
different generations and if you |
|
|
|
00:30:35.200 --> 00:30:39.120 |
|
actually add up the probability mass of |
|
|
|
00:30:37.080 --> 00:30:40.880 |
|
all three of these sequences this is |
|
|
|
00:30:39.120 --> 00:30:42.919 |
|
double the probability mass of the cat |
|
|
|
00:30:40.880 --> 00:30:44.360 |
|
sat down but because none of these |
|
|
|
00:30:42.919 --> 00:30:45.960 |
|
individual sequences is higher |
|
|
|
00:30:44.360 --> 00:30:47.399 |
|
probability if you're doing mode seeking |
|
|
|
00:30:45.960 --> 00:30:50.640 |
|
search you wouldn't be able |
|
|
|
00:30:47.399 --> 00:30:52.480 |
|
to see this effect right so do we really |
|
|
|
00:30:50.640 --> 00:30:53.760 |
|
want to return the cat sat down or do we |
|
|
|
00:30:52.480 --> 00:30:55.200 |
|
want to return something that means the |
|
|
|
00:30:53.760 --> 00:30:57.559 |
|
cat left the |
|
|
|
00:30:55.200 --> 00:30:59.200 |
|
area the question then is like if it's |
|
|
|
00:30:57.559 --> 00:31:03.120 |
|
not probability that makes an output |
|
|
|
00:30:59.200 --> 00:31:04.679 |
|
good what is it so we have this one |
|
|
|
00:31:03.120 --> 00:31:06.039 |
|
output that's really high probability |
|
|
|
00:31:04.679 --> 00:31:09.000 |
|
but it's very different from everything |
|
|
|
00:31:06.039 --> 00:31:10.720 |
|
else in our set and then we have a |
|
|
|
00:31:09.000 --> 00:31:13.200 |
|
couple of outputs that are all pretty |
|
|
|
00:31:10.720 --> 00:31:15.080 |
|
high probability and similar to a bunch |
|
|
|
00:31:13.200 --> 00:31:17.840 |
|
of other relatively high probability |
|
|
|
00:31:15.080 --> 00:31:19.720 |
|
things so maybe it's sort of less risky |
|
|
|
00:31:17.840 --> 00:31:21.399 |
|
to return one of these right a thing |
|
|
|
00:31:19.720 --> 00:31:23.200 |
|
that's higher probability but different |
|
|
|
00:31:21.399 --> 00:31:24.600 |
|
than everything else could be different |
|
|
|
00:31:23.200 --> 00:31:26.840 |
|
because it's way better or it could be |
|
|
|
00:31:24.600 --> 00:31:29.000 |
|
different because it's way worse um |
|
|
|
00:31:26.840 --> 00:31:31.120 |
|
another way to think about this is you |
|
|
|
00:31:29.000 --> 00:31:32.600 |
|
know maybe if you and your friends were |
|
|
|
00:31:31.120 --> 00:31:34.200 |
|
cheating on a test which you shouldn't |
|
|
|
00:31:32.600 --> 00:31:35.480 |
|
do but if you were going to do it and |
|
|
|
00:31:34.200 --> 00:31:37.519 |
|
all of your friends sent you their |
|
|
|
00:31:35.480 --> 00:31:39.240 |
|
answers um maybe one of your friends has |
|
|
|
00:31:37.519 --> 00:31:40.960 |
|
a slightly higher score in the class |
|
|
|
00:31:39.240 --> 00:31:42.519 |
|
than everyone else but they said the |
|
|
|
00:31:40.960 --> 00:31:44.480 |
|
answer was answer a and everyone else |
|
|
|
00:31:42.519 --> 00:31:45.799 |
|
said the answer was B right you still |
|
|
|
00:31:44.480 --> 00:31:48.480 |
|
might go with the answer that everyone |
|
|
|
00:31:45.799 --> 00:31:50.679 |
|
else said because it |
|
|
|
00:31:48.480 --> 00:31:52.679 |
|
sort of feels less risky like maybe |
|
|
|
00:31:50.679 --> 00:31:54.440 |
|
everyone else got that |
|
|
|
00:31:52.679 --> 00:31:55.880 |
|
answer and so your one friend could be |
|
|
|
00:31:54.440 --> 00:31:56.919 |
|
right when everyone else is wrong or |
|
|
|
00:31:55.880 --> 00:31:59.679 |
|
they could have made a mistake that no |
|
|
|
00:31:56.919 --> 00:32:01.240 |
|
one El else is making so this is sort of |
|
|
|
00:31:59.679 --> 00:32:03.519 |
|
the same concept right we want an output |
|
|
|
00:32:01.240 --> 00:32:06.320 |
|
that's relatively high probability but |
|
|
|
00:32:03.519 --> 00:32:09.399 |
|
also relatively low |
|
|
|
00:32:06.320 --> 00:32:11.320 |
|
risk and so here maybe if we were using |
|
|
|
00:32:09.399 --> 00:32:13.679 |
|
this criteria we'd return the cat ran |
|
|
|
00:32:11.320 --> 00:32:14.720 |
|
away as our sort of as our sort of |
|
|
|
00:32:13.679 --> 00:32:16.720 |
|
single |
|
|
|
00:32:14.720 --> 00:32:19.440 |
|
output so how do you find something |
|
|
|
00:32:16.720 --> 00:32:21.000 |
|
that's high probability and low risk |
|
|
|
00:32:19.440 --> 00:32:22.480 |
|
there's sort of two questions here right |
|
|
|
00:32:21.000 --> 00:32:24.399 |
|
we have to figure out how to estimate |
|
|
|
00:32:22.480 --> 00:32:26.120 |
|
probability and if we're looking at a |
|
|
|
00:32:24.399 --> 00:32:28.519 |
|
set of outputs like the six we saw |
|
|
|
00:32:26.120 --> 00:32:29.880 |
|
before maybe we can just do this by |
|
|
|
00:32:28.519 --> 00:32:31.720 |
|
counting right we could sample |
|
|
|
00:32:29.880 --> 00:32:34.000 |
|
everything from the model and get exact |
|
|
|
00:32:31.720 --> 00:32:35.200 |
|
probability or we could take a sample |
|
|
|
00:32:34.000 --> 00:32:38.080 |
|
from the model and just look at |
|
|
|
00:32:35.200 --> 00:32:40.200 |
|
probabilities in that set and from there |
|
|
|
00:32:38.080 --> 00:32:41.840 |
|
from that sample um sort of one |
|
|
|
00:32:40.200 --> 00:32:43.559 |
|
reasonable thing to do is just count |
|
|
|
00:32:41.840 --> 00:32:45.320 |
|
frequency right if something's in our |
|
|
|
00:32:43.559 --> 00:32:47.919 |
|
sample twice as often we just say it's |
|
|
|
00:32:45.320 --> 00:32:49.799 |
|
twice as frequent or it's twice as |
|
|
|
00:32:47.919 --> 00:32:52.880 |
|
probable um this is something called |
|
|
|
00:32:49.799 --> 00:32:54.440 |
|
Monte Carlo sampling if you do this um |
|
|
|
00:32:52.880 --> 00:32:56.039 |
|
enough times like if you sample an |
|
|
|
00:32:54.440 --> 00:32:58.279 |
|
infinite set this would give you |
|
|
|
00:32:56.039 --> 00:33:00.880 |
|
exactly the model distribution um |
|
|
|
00:32:58.279 --> 00:33:02.840 |
|
but for the sort of reasonable size sets |
|
|
|
00:33:00.880 --> 00:33:04.200 |
|
we're working with maybe like a 100 |
|
|
|
00:33:02.840 --> 00:33:06.320 |
|
samples this gives us a sort of |
|
|
|
00:33:04.200 --> 00:33:09.440 |
|
reasonable approximation for |
|
|
|
00:33:06.320 --> 00:33:10.840 |
|
what we need to do here at least so |
|
|
|
00:33:09.440 --> 00:33:12.000 |
|
we're just going to take a sample to get |
|
|
|
00:33:10.840 --> 00:33:13.440 |
|
probability and we're just going to |
|
|
|
00:33:12.000 --> 00:33:15.519 |
|
count things in that sample to see how |
|
|
|
00:33:13.440 --> 00:33:17.320 |
|
likely things are that doesn't seem too |
|
|
|
00:33:15.519 --> 00:33:20.080 |
|
bad how do we estimate |
|
|
|
00:33:17.320 --> 00:33:21.679 |
|
risk the idea here is that we have a |
|
|
|
00:33:20.080 --> 00:33:24.080 |
|
bunch of other things in this set of |
|
|
|
00:33:21.679 --> 00:33:26.080 |
|
outputs and we can treat those as sort |
|
|
|
00:33:24.080 --> 00:33:27.880 |
|
of like pseudo references right we can |
|
|
|
00:33:26.080 --> 00:33:29.840 |
|
evaluate agreement between the thing |
|
|
|
00:33:27.880 --> 00:33:31.519 |
|
we're looking at and each of those other |
|
|
|
00:33:29.840 --> 00:33:33.480 |
|
references and this is sort of the same |
|
|
|
00:33:31.519 --> 00:33:35.519 |
|
idea of calculating similarity in |
|
|
|
00:33:33.480 --> 00:33:37.159 |
|
diverse beam search we're going to use |
|
|
|
00:33:35.519 --> 00:33:39.639 |
|
some kind of metric to compare how |
|
|
|
00:33:37.159 --> 00:33:41.279 |
|
similar these things are um this metric |
|
|
|
00:33:39.639 --> 00:33:43.080 |
|
could be anything you use Downstream it |
|
|
|
00:33:41.279 --> 00:33:44.840 |
|
could be like an engram overlap metric |
|
|
|
00:33:43.080 --> 00:33:48.600 |
|
like Rouge or blue or it could also be |
|
|
|
00:33:44.840 --> 00:33:51.120 |
|
something um neural or semantic like um |
|
|
|
00:33:48.600 --> 00:33:54.799 |
|
something like BERTScore or |
|
|
|
00:33:51.120 --> 00:33:56.600 |
|
BARTScore and so this concept um is a type |
|
|
|
00:33:54.799 --> 00:33:57.919 |
|
of decoding called minimum Bayes risk |
|
|
|
00:33:56.600 --> 00:33:59.600 |
|
decoding |
|
|
|
00:33:57.919 --> 00:34:01.840 |
|
and what this equation captures is |
|
|
|
00:33:59.600 --> 00:34:03.919 |
|
exactly the intuition that we were um |
|
|
|
00:34:01.840 --> 00:34:06.600 |
|
sort of talking about just a slide ago |
|
|
|
00:34:03.919 --> 00:34:08.159 |
|
where we're going to choose something |
|
|
|
00:34:06.600 --> 00:34:09.919 |
|
that is low risk which means it's |
|
|
|
00:34:08.159 --> 00:34:11.960 |
|
similar to a lot of other things in this |
|
|
|
00:34:09.919 --> 00:34:12.800 |
|
set of outputs we've sampled and we're |
|
|
|
00:34:11.960 --> 00:34:14.800 |
|
going to choose something that's |
|
|
|
00:34:12.800 --> 00:34:17.560 |
|
relatively high probability which means |
|
|
|
00:34:14.800 --> 00:34:19.159 |
|
that sort of when we sum up over this if |
|
|
|
00:34:17.560 --> 00:34:21.399 |
|
something occurs in our set a bunch of |
|
|
|
00:34:19.159 --> 00:34:23.320 |
|
times it's going to have pretty strong |
|
|
|
00:34:21.399 --> 00:34:25.800 |
|
weight in picking which um of these |
|
|
|
00:34:23.320 --> 00:34:27.000 |
|
outputs are similar right if sort of |
|
|
|
00:34:25.800 --> 00:34:28.399 |
|
there's one thing in the set that |
|
|
|
00:34:27.000 --> 00:34:29.919 |
|
appears a bunch of times it's going to |
|
|
|
00:34:28.399 --> 00:34:32.040 |
|
have a strong influence on which thing |
|
|
|
00:34:29.919 --> 00:34:34.119 |
|
we pick and so that kind of captures |
|
|
|
00:34:32.040 --> 00:34:38.520 |
|
high probability in this |
|
|
|
00:34:34.119 --> 00:34:41.119 |
|
setting so to see how this works we can |
|
|
|
00:34:38.520 --> 00:34:44.639 |
|
look at an example um in |
|
|
|
00:34:41.119 --> 00:34:47.399 |
|
summarization so we choose some Metric |
|
|
|
00:34:44.639 --> 00:34:49.639 |
|
maybe we choose um Rouge which is an |
|
|
|
00:34:47.399 --> 00:34:51.399 |
|
engram overlap metric for summarization |
|
|
|
00:34:49.639 --> 00:34:52.879 |
|
and we say we're going to sample 100 |
|
|
|
00:34:51.399 --> 00:34:55.960 |
|
things and we're going to use this |
|
|
|
00:34:52.879 --> 00:35:00.359 |
|
equation to choose the one that has the |
|
|
|
00:34:55.960 --> 00:35:03.960 |
|
sort of lowest risk according to MBR |
|
|
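To make the MBR recipe concrete, here is a minimal sketch. The unigram-F1 similarity below is a toy stand-in for whatever metric you'd actually use (ROUGE, BERTScore, and so on), and the samples are assumed to have already been drawn from the model:

```python
from collections import Counter

def unigram_f1(a, b):
    # Toy token-overlap similarity; a stand-in for a real utility
    # metric like ROUGE or BERTScore.
    ta, tb = Counter(a.split()), Counter(b.split())
    overlap = sum((ta & tb).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(tb.values())
    recall = overlap / sum(ta.values())
    return 2 * precision * recall / (precision + recall)

def mbr_decode(samples, similarity=unigram_f1):
    # Score each sample by its total similarity to every other sample
    # (its pseudo-references). Duplicates act as Monte Carlo weights:
    # an output that appears often pulls the choice toward everything
    # that agrees with it. Ties go to the earliest sample.
    def expected_utility(i):
        return sum(similarity(samples[i], samples[j])
                   for j in range(len(samples)) if j != i)
    best = max(range(len(samples)), key=expected_utility)
    return samples[best]
```

With the cat example, a sample like `["the cat sat down", "the cat ran away", "the cat ran away", "the cat got out of there"]` returns `"the cat ran away"`: it agrees both with its duplicate and with the other "cat left" output.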
|
00:35:00.359 --> 00:35:06.480 |
|
um so if we do that and we look at this |
|
|
|
00:35:03.960 --> 00:35:07.560 |
|
sort of table of results here um you can |
|
|
|
00:35:06.480 --> 00:35:09.680 |
|
see that this |
|
|
|
00:35:07.560 --> 00:35:11.320 |
|
outperforms the other sampling methods |
|
|
|
00:35:09.680 --> 00:35:13.720 |
|
that we've looked at before so greedy |
|
|
|
00:35:11.320 --> 00:35:15.640 |
|
decoding here is just sampling the |
|
|
|
00:35:13.720 --> 00:35:18.760 |
|
single most likely thing in each step |
|
|
|
00:35:15.640 --> 00:35:21.800 |
|
beam search here is the BS with five or |
|
|
|
00:35:18.760 --> 00:35:24.359 |
|
10 beams and DBS is the diverse beam |
|
|
|
00:35:21.800 --> 00:35:27.040 |
|
search we were talking about um if we |
|
|
|
00:35:24.359 --> 00:35:29.440 |
|
use minimum Bayes risk and we use Rouge |
|
|
|
00:35:27.040 --> 00:35:31.240 |
|
as the sort of determiner of similarity |
|
|
|
00:35:29.440 --> 00:35:32.680 |
|
we do way better across all of our |
|
|
|
00:35:31.240 --> 00:35:33.960 |
|
metrics but we especially do really good |
|
|
|
00:35:32.680 --> 00:35:36.680 |
|
at Rouge because that's sort of the |
|
|
|
00:35:33.960 --> 00:35:38.119 |
|
metric that we've been using to evaluate |
|
|
|
00:35:36.680 --> 00:35:40.240 |
|
and then if we swap this out for other |
|
|
|
00:35:38.119 --> 00:35:43.599 |
|
metrics you still see a performance |
|
|
|
00:35:40.240 --> 00:35:46.440 |
|
improvement over these um search methods |
|
|
|
00:35:43.599 --> 00:35:48.119 |
|
here um what's the sort of catch here |
|
|
|
00:35:46.440 --> 00:35:49.920 |
|
the catch here is that MBR requires you |
|
|
|
00:35:48.119 --> 00:35:51.599 |
|
to sample a hundred things and so this |
|
|
|
00:35:49.920 --> 00:35:54.760 |
|
is a lot more expensive it's a lot |
|
|
|
00:35:51.599 --> 00:35:54.760 |
|
slower at inference |
|
|
|
00:35:54.800 --> 00:35:58.800 |
|
time um yes |
|
|
|
00:36:04.200 --> 00:36:10.040 |
|
yes a great question why does the beam |
|
|
|
00:36:07.000 --> 00:36:14.000 |
|
search with more beams perform worse um |
|
|
|
00:36:10.040 --> 00:36:16.720 |
|
this is a relatively well-known |
|
|
|
00:36:14.000 --> 00:36:19.359 |
|
phenomenon called the curse of beam search |
|
|
|
00:36:16.720 --> 00:36:21.640 |
|
which is we actually lost your mic so |
|
|
|
00:36:19.359 --> 00:36:24.599 |
|
we'll speak up okay yeah so this |
|
|
|
00:36:21.640 --> 00:36:26.079 |
|
is called the curse of beam search um and |
|
|
|
00:36:24.599 --> 00:36:27.760 |
|
the idea here is that beam search is |
|
|
|
00:36:26.079 --> 00:36:29.359 |
|
like an approximate search right so if you |
|
|
|
00:36:27.760 --> 00:36:31.200 |
|
add more beams you should be doing |
|
|
|
00:36:29.359 --> 00:36:33.319 |
|
better and better at finding the maximum |
|
|
|
00:36:31.200 --> 00:36:34.800 |
|
likelihood thing and generally you are |
|
|
|
00:36:33.319 --> 00:36:37.160 |
|
you get something that is higher |
|
|
|
00:36:34.800 --> 00:36:39.160 |
|
probability but as you add more beams |
|
|
|
00:36:37.160 --> 00:36:42.319 |
|
you also often get something that does |
|
|
|
00:36:39.160 --> 00:36:42.319 |
|
worse on your Downstream |
|
|
|
00:36:44.160 --> 00:36:47.560 |
|
metrics back up |
|
|
|
00:36:54.240 --> 00:36:58.680 |
|
there is that back online |
|
|
|
00:36:59.119 --> 00:37:06.520 |
|
yeah is that back is that any louder no |
|
|
|
00:37:03.520 --> 00:37:06.520 |
|
it |
|
|
|
00:37:07.000 --> 00:37:12.640 |
|
question oh there we go is that better |
|
|
|
00:37:09.599 --> 00:37:13.760 |
|
great um yeah so why why does this |
|
|
|
00:37:12.640 --> 00:37:16.040 |
|
happen right why do you get something |
|
|
|
00:37:13.760 --> 00:37:18.560 |
|
that's higher likelihood but um lower |
|
|
|
00:37:16.040 --> 00:37:22.040 |
|
performance Downstream um and this is |
|
|
|
00:37:18.560 --> 00:37:24.000 |
|
like another sort of degeneracy of beam |
|
|
|
00:37:22.040 --> 00:37:25.680 |
|
search this idea that the thing |
|
|
|
00:37:24.000 --> 00:37:27.440 |
|
that is the absolute highest likelihood |
|
|
|
00:37:25.680 --> 00:37:28.599 |
|
in your distribution might not actually |
|
|
|
00:37:27.440 --> 00:37:31.079 |
|
be what you want |
|
|
|
00:37:28.599 --> 00:37:33.960 |
|
Downstream um this is sort of one of the |
|
|
|
00:37:31.079 --> 00:37:35.200 |
|
other things that people use to motivate |
|
|
|
00:37:33.960 --> 00:37:37.599 |
|
why you might want to do something like |
|
|
|
00:37:35.200 --> 00:37:39.400 |
|
MBR instead um and there's a great paper |
|
|
|
00:37:37.599 --> 00:37:41.640 |
|
about this problem called the inadequacy |
|
|
|
00:37:39.400 --> 00:37:43.680 |
|
of the mode because beam search is |
|
|
|
00:37:41.640 --> 00:37:45.520 |
|
looking for the mode of the |
|
|
|
00:37:43.680 --> 00:37:47.880 |
|
distribution well one other thing I'd |
|
|
|
00:37:45.520 --> 00:37:49.680 |
|
like to mention is it also goes together |
|
|
|
00:37:47.880 --> 00:37:51.119 |
|
with how you train your models because |
|
|
|
00:37:49.680 --> 00:37:53.760 |
|
most of our models are trained using |
|
|
|
00:37:51.119 --> 00:37:57.079 |
|
maximum likelihood maximum likelihood |
|
|
|
00:37:53.760 --> 00:37:59.040 |
|
isn't explicitly maximizing our ability |
|
|
|
00:37:57.079 --> 00:38:01.079 |
|
to get the best answer it's explicitly |
|
|
|
00:37:59.040 --> 00:38:05.720 |
|
maximizing our ability to estimate the |
|
|
|
00:38:01.079 --> 00:38:10.160 |
|
distribution of answers so if I |
|
|
|
00:38:05.720 --> 00:38:13.040 |
|
say um if you said like what is what is |
|
|
|
00:38:10.160 --> 00:38:15.839 |
|
your favorite hobby or something like |
|
|
|
00:38:13.040 --> 00:38:17.680 |
|
that uh what is your favorite hobby in a |
|
|
|
00:38:15.839 --> 00:38:19.280 |
|
dialogue system often it'll answer I |
|
|
|
00:38:17.680 --> 00:38:22.400 |
|
don't know or something like that |
|
|
|
00:38:19.280 --> 00:38:24.920 |
|
because it like you know that that's |
|
|
|
00:38:22.400 --> 00:38:26.599 |
|
more likely than answering any specific |
|
|
|
00:38:24.920 --> 00:38:29.240 |
|
hobby like it's more likely than |
|
|
|
00:38:26.599 --> 00:38:32.119 |
|
answering basketball bowling you know |
|
|
|
00:38:29.240 --> 00:38:35.040 |
|
whatever else because you have many many |
|
|
|
00:38:32.119 --> 00:38:36.560 |
|
different options and so like especially |
|
|
|
00:38:35.040 --> 00:38:39.880 |
|
if it's something that's a little bit |
|
|
|
00:38:36.560 --> 00:38:42.160 |
|
more complicated it will avoid |
|
|
|
00:38:39.880 --> 00:38:44.680 |
|
answering that and in particular it ends |
|
|
|
00:38:42.160 --> 00:38:47.240 |
|
up answering very short things for |
|
|
|
00:38:44.680 --> 00:38:49.280 |
|
example um or sometimes it ends up |
|
|
|
00:38:47.240 --> 00:38:51.160 |
|
repeating itself over and over again or |
|
|
|
00:38:49.280 --> 00:38:53.240 |
|
or things like that so it also goes |
|
|
|
00:38:51.160 --> 00:38:57.760 |
|
together with like the training of the |
|
|
|
00:38:53.240 --> 00:38:59.359 |
|
model yeah and this is um one of the |
|
|
|
00:38:57.760 --> 00:39:01.079 |
|
this is still a problem in modern |
|
|
|
00:38:59.359 --> 00:39:02.560 |
|
systems so if you actually look at the |
|
|
|
00:39:01.079 --> 00:39:03.839 |
|
single like if you could enumerate |
|
|
|
00:39:02.560 --> 00:39:05.680 |
|
everything and see the single most |
|
|
|
00:39:03.839 --> 00:39:07.520 |
|
likely sequence it's often the empty |
|
|
|
00:39:05.680 --> 00:39:10.920 |
|
sequence just not outputting anything at |
|
|
|
00:39:07.520 --> 00:39:12.640 |
|
all um and so if that's your true mode |
|
|
|
00:39:10.920 --> 00:39:16.119 |
|
of the distribution then doing better at |
|
|
|
00:39:12.640 --> 00:39:16.119 |
|
mode seeking is not always like |
|
|
|
00:39:16.599 --> 00:39:19.599 |
|
helpful |
|
|
|
00:39:25.440 --> 00:39:32.960 |
|
yes could this be influenced by the |
|
|
|
00:39:28.200 --> 00:39:32.960 |
|
confidence problem like um how |
|
|
|
00:39:37.560 --> 00:39:41.079 |
|
[inaudible] |
|
|
|
00:39:49.760 --> 00:39:53.599 |
|
[inaudible] |
|
|
|
00:39:51.010 --> 00:39:57.280 |
|
[inaudible] |
|
|
|
00:39:53.599 --> 00:39:59.760 |
|
might right I think I see |
|
|
|
00:39:57.280 --> 00:40:02.000 |
|
what you're saying which is that like |
|
|
|
00:39:59.760 --> 00:40:04.200 |
|
the the confidence gives you the |
|
|
|
00:40:02.000 --> 00:40:06.680 |
|
confidence of like a single exact |
|
|
|
00:40:04.200 --> 00:40:11.000 |
|
sequence right not the like actual sort |
|
|
|
00:40:06.680 --> 00:40:13.200 |
|
of semantic space and so yeah if you |
|
|
|
00:40:11.000 --> 00:40:14.920 |
|
look at |
|
|
|
00:40:13.200 --> 00:40:17.000 |
|
just the probability scores you get the |
|
|
|
00:40:14.920 --> 00:40:18.520 |
|
probability of an exact string when what |
|
|
|
00:40:17.000 --> 00:40:20.119 |
|
you really actually care about with |
|
|
|
00:40:18.520 --> 00:40:22.319 |
|
confidence is the probability of sort of |
|
|
|
00:40:20.119 --> 00:40:23.800 |
|
like things that mean the same thing |
|
|
|
00:40:22.319 --> 00:40:25.359 |
|
yeah this is um part of why like |
|
|
|
00:40:23.800 --> 00:40:28.359 |
|
calibration is really hard for long |
|
|
|
00:40:25.359 --> 00:40:28.359 |
|
sequences |
|
|
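The distinction in this exchange can be made concrete with the earlier cat example; the numbers below are made up for illustration, but they mirror the slide: no single "cat left" string beats the mode, while the meaning cluster carries double its mass.

```python
# Toy sequence-level distribution, loosely following the earlier slide.
probs = {
    "the cat sat down": 0.30,
    "it ran away": 0.20,
    "it sprinted off": 0.20,
    "it got out of there": 0.20,
    "it's very small": 0.06,
    "it grew wings": 0.04,
}
# Hypothetical semantic grouping: strings that mean "the cat left".
left_the_area = ["it ran away", "it sprinted off", "it got out of there"]

mode = max(probs, key=probs.get)  # what mode-seeking search recovers
cluster_mass = sum(probs[s] for s in left_the_area)
# The mode is "the cat sat down" at 0.30, but the "cat left" meaning
# carries 0.60 -- exact-string probability misses the semantic mass.
```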
|
00:40:30.720 --> 00:40:37.319 |
|
great so we're going to touch sort of |
|
|
|
00:40:34.359 --> 00:40:39.520 |
|
briefly on a couple of other things that |
|
|
|
00:40:37.319 --> 00:40:40.920 |
|
aren't sort of always explicitly |
|
|
|
00:40:39.520 --> 00:40:42.480 |
|
described in this framework but that you |
|
|
|
00:40:40.920 --> 00:40:45.040 |
|
can think of as variants of minimum |
|
|
|
00:40:42.480 --> 00:40:46.960 |
|
Bayes risk um and if you're interested |
|
|
|
00:40:45.040 --> 00:40:49.560 |
|
in this analysis um I think as Graham |
|
|
|
00:40:46.960 --> 00:40:51.800 |
|
mentioned earlier um Alex Z is a first |
|
|
|
00:40:49.560 --> 00:40:53.680 |
|
year MLT and I wrote a paper about this |
|
|
|
00:40:51.800 --> 00:40:57.839 |
|
um which you can check out if you're |
|
|
|
00:40:53.680 --> 00:41:01.200 |
|
interested so the um two that I really |
|
|
|
00:40:57.839 --> 00:41:03.800 |
|
want to touch on here are other sort of |
|
|
|
00:41:01.200 --> 00:41:05.240 |
|
inference time things you can consider |
|
|
|
00:41:03.800 --> 00:41:07.520 |
|
which might look a little bit different |
|
|
|
00:41:05.240 --> 00:41:09.480 |
|
at first blush um the first of these is |
|
|
|
00:41:07.520 --> 00:41:11.680 |
|
output ensembling so say you have |
|
|
|
00:41:09.480 --> 00:41:13.240 |
|
multiple different models and you get |
|
|
|
00:41:11.680 --> 00:41:15.480 |
|
outputs from all of them and now you |
|
|
|
00:41:13.240 --> 00:41:19.560 |
|
need to choose a best output among that |
|
|
|
00:41:15.480 --> 00:41:21.599 |
|
set um one of the sort of common ways to |
|
|
|
00:41:19.560 --> 00:41:24.480 |
|
do this is to compare like an embedding |
|
|
|
00:41:21.599 --> 00:41:25.920 |
|
similarity across models like does model |
|
|
|
00:41:24.480 --> 00:41:27.560 |
|
one think these two things are really |
|
|
|
00:41:25.920 --> 00:41:28.880 |
|
similar does model two think these two |
|
|
|
00:41:27.560 --> 00:41:32.599 |
|
things are really similar and try to |
|
|
|
00:41:28.880 --> 00:41:34.680 |
|
choose something that um has really |
|
|
|
00:41:32.599 --> 00:41:37.319 |
|
high similarity with a lot of other |
|
|
|
00:41:34.680 --> 00:41:39.200 |
|
outputs um of course now that we've just |
|
|
|
00:41:37.319 --> 00:41:41.440 |
|
recently been talking about MBR you can |
|
|
|
00:41:39.200 --> 00:41:44.920 |
|
probably see that this |
|
|
|
00:41:41.440 --> 00:41:46.280 |
|
is um the same general formulation just |
|
|
|
00:41:44.920 --> 00:41:47.880 |
|
rather than summing over a set of |
|
|
|
00:41:46.280 --> 00:41:49.520 |
|
outputs from a single model now you're |
|
|
|
00:41:47.880 --> 00:41:52.160 |
|
looking at outputs over a whole set of |
|
|
|
00:41:49.520 --> 00:41:54.640 |
|
models um so some types of ensembling |
|
|
|
00:41:52.160 --> 00:41:57.319 |
|
fall into this category of minimum Bayes |
|
|
|
00:41:54.640 --> 00:42:00.680 |
|
risk methods another thing in this |
|
|
|
00:41:57.319 --> 00:42:03.280 |
|
category is a um sort of recent decoding |
|
|
|
00:42:00.680 --> 00:42:06.079 |
|
method called self-consistency and the |
|
|
|
00:42:03.280 --> 00:42:08.200 |
|
idea here is that you want to do |
|
|
|
00:42:06.079 --> 00:42:09.359 |
|
something like mathematical reasoning |
|
|
|
00:42:08.200 --> 00:42:10.599 |
|
and you really care about getting the |
|
|
|
00:42:09.359 --> 00:42:12.000 |
|
final answer right but you don't |
|
|
|
00:42:10.599 --> 00:42:15.000 |
|
necessarily care about getting all of |
|
|
|
00:42:12.000 --> 00:42:18.079 |
|
the the reasoning steps in between right |
|
|
|
00:42:15.000 --> 00:42:19.520 |
|
so you prompt the model for an answer um |
|
|
|
00:42:18.079 --> 00:42:20.800 |
|
using something like Chain of Thought |
|
|
|
00:42:19.520 --> 00:42:22.680 |
|
right you ask it to sort of talk through |
|
|
|
00:42:20.800 --> 00:42:26.440 |
|
the steps it's going to do and then give |
|
|
|
00:42:22.680 --> 00:42:28.599 |
|
you a final answer um you sample many |
|
|
|
00:42:26.440 --> 00:42:30.400 |
|
outputs using this and then you completely |
|
|
|
00:42:28.599 --> 00:42:32.200 |
|
throw away the chains of thought um and |
|
|
|
00:42:30.400 --> 00:42:35.359 |
|
you just take the answer from each |
|
|
|
00:42:32.200 --> 00:42:37.640 |
|
output um you have that set of answers |
|
|
|
00:42:35.359 --> 00:42:38.960 |
|
maybe you have like 20 30 100 answers |
|
|
|
00:42:37.640 --> 00:42:40.000 |
|
you just return the one that was most |
|
|
|
00:42:38.960 --> 00:42:43.720 |
|
frequently |
|
|
|
00:42:40.000 --> 00:42:46.119 |
|
generated um what this is doing is a |
|
|
|
00:42:43.720 --> 00:42:48.800 |
|
type of MBR where the metric that you |
|
|
|
00:42:46.119 --> 00:42:51.160 |
|
actually care about is exact match of |
|
|
|
00:42:48.800 --> 00:42:51.839 |
|
this answer right ignoring the rest of |
|
|
|
00:42:51.160 --> 00:42:54.079 |
|
the |
|
|
|
00:42:51.839 --> 00:42:55.800 |
|
generation um and so here we have sort |
|
|
|
00:42:54.079 --> 00:42:56.839 |
|
of the same intuition that we want an |
|
|
|
00:42:55.800 --> 00:42:59.160 |
|
output |
|
|
|
00:42:56.839 --> 00:43:01.520 |
|
that is high probability right we're |
|
|
|
00:42:59.160 --> 00:43:03.359 |
|
getting it generated a lot but also low |
|
|
|
00:43:01.520 --> 00:43:06.079 |
|
risk not a lot of the other outputs in |
|
|
|
00:43:03.359 --> 00:43:08.440 |
|
in our set disagree with this |
|
|
|
00:43:06.079 --> 00:43:10.359 |
|
answer so those are a couple of |
|
|
|
00:43:08.440 --> 00:43:11.920 |
|
different variants of methods where |
|
|
|
00:43:10.359 --> 00:43:13.880 |
|
we're sort of sampling a wide set of |
|
|
|
00:43:11.920 --> 00:43:17.359 |
|
sequences and trying to choose the best |
|
|
|
00:43:13.880 --> 00:43:20.960 |
|
one um MBR is one type of |
|
|
|
00:43:17.359 --> 00:43:22.680 |
|
sort of sequence set reranking method um |
|
|
|
00:43:20.960 --> 00:43:24.760 |
|
you could do other things to rerank sets |
|
|
|
00:43:22.680 --> 00:43:27.400 |
|
as well but this is sort of one |
|
|
|
00:43:24.760 --> 00:43:30.359 |
|
representative class of these yes uh or |
|
|
|
00:43:27.400 --> 00:43:32.280 |
|
of these methods before we get |
|
|
|
00:43:30.359 --> 00:43:35.200 |
|
into constrained generation those are sort |
|
|
|
00:43:32.280 --> 00:43:37.000 |
|
of the three broad categories of |
|
|
|
00:43:35.200 --> 00:43:39.480 |
|
inference methods we'll discuss which is |
|
|
|
00:43:37.000 --> 00:43:41.680 |
|
sort of sampling from some distribution |
|
|
|
00:43:39.480 --> 00:43:45.040 |
|
searching over some space of |
|
|
|
00:43:41.680 --> 00:43:47.400 |
|
distributions and doing some kind of um |
|
|
|
00:43:45.040 --> 00:43:48.559 |
|
analysis over a set of samples to choose |
|
|
|
00:43:47.400 --> 00:43:51.359 |
|
which ones to |
|
|
|
00:43:48.559 --> 00:43:52.559 |
|
return um does anyone have any questions |
|
|
|
00:43:51.359 --> 00:43:55.079 |
|
at this |
|
|
|
00:43:52.559 --> 00:44:00.680 |
|
point |
|
|
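The self-consistency vote described above fits in a few lines. A minimal sketch — the `"the answer is"` marker is a hypothetical answer format chosen for illustration, not something the method prescribes:

```python
from collections import Counter

def extract_answer(generation, marker="the answer is"):
    # Hypothetical extraction step: keep only what follows the final
    # answer marker; the chain of thought before it is discarded.
    return generation.lower().rsplit(marker, 1)[-1].strip().rstrip(".")

def self_consistency(generations):
    # Majority vote over extracted answers -- MBR where the metric is
    # exact match on the final answer and the reasoning is ignored.
    answers = [extract_answer(g) for g in generations]
    return Counter(answers).most_common(1)[0][0]
```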
|
00:43:55.079 --> 00:44:00.680 |
|
yeah that a model |
|
|
|
00:44:05.800 --> 00:44:12.760 |
|
cannot yeah like why is averaging model |
|
|
|
00:44:08.359 --> 00:44:16.400 |
|
weights not MBR um I think it's not MBR |
|
|
|
00:44:12.760 --> 00:44:18.559 |
|
because um the key thing that I |
|
|
|
00:44:16.400 --> 00:44:20.880 |
|
think really makes a method MBR is this |
|
|
|
00:44:18.559 --> 00:44:22.480 |
|
concept of comparing between multiple um |
|
|
|
00:44:20.880 --> 00:44:24.880 |
|
sort of pseudo |
|
|
|
00:44:22.480 --> 00:44:26.839 |
|
references um and there you don't have |
|
|
|
00:44:24.880 --> 00:44:28.359 |
|
the same thing like when you average model weights you |
|
|
|
00:44:26.839 --> 00:44:32.440 |
|
wind up with sort of a single output on |
|
|
|
00:44:28.359 --> 00:44:34.040 |
|
the end that maybe is like using like |
|
|
|
00:44:32.440 --> 00:44:35.800 |
|
information from these two model |
|
|
|
00:44:34.040 --> 00:44:38.240 |
|
distributions that you've sort of smushed |
|
|
|
00:44:35.800 --> 00:44:41.160 |
|
together um but it's not the same |
|
|
|
00:44:38.240 --> 00:44:44.720 |
|
concept of like comparing against pseudo |
|
|
|
00:44:41.160 --> 00:44:44.720 |
|
references or ranking in a |
|
|
|
00:44:48.920 --> 00:44:55.599 |
|
set right so now this |
|
|
|
00:44:52.720 --> 00:44:57.559 |
|
was a wide variety of methods to try to |
|
|
|
00:44:55.599 --> 00:44:59.040 |
|
find an output that's just sort of good |
|
|
|
00:44:57.559 --> 00:45:01.440 |
|
right we want an output that that is |
|
|
|
00:44:59.040 --> 00:45:03.480 |
|
nice out of our model um but now we'd |
|
|
|
00:45:01.440 --> 00:45:05.880 |
|
like to maybe impose a few additional |
|
|
|
00:45:03.480 --> 00:45:08.280 |
|
constraints so say I'm asking our model |
|
|
|
00:45:05.880 --> 00:45:10.720 |
|
for some Hobbies I could use to stay in |
|
|
|
00:45:08.280 --> 00:45:11.920 |
|
to stay in shape and no matter what I |
|
|
|
00:45:10.720 --> 00:45:14.160 |
|
don't want the model to recommend |
|
|
|
00:45:11.920 --> 00:45:16.880 |
|
climbing like I I just I don't want this |
|
|
|
00:45:14.160 --> 00:45:18.400 |
|
as an option I've tried it I'm not a fan |
|
|
|
00:45:16.880 --> 00:45:21.240 |
|
um how do I get the model to stop |
|
|
|
00:45:18.400 --> 00:45:22.760 |
|
suggesting climbing to me and if you've |
|
|
|
00:45:21.240 --> 00:45:24.559 |
|
sort of played around with some of the |
|
|
|
00:45:22.760 --> 00:45:26.200 |
|
more recent llms you'd say maybe this is |
|
|
|
00:45:24.559 --> 00:45:27.480 |
|
easy right you just tell the model the |
|
|
|
00:45:26.200 --> 00:45:30.160 |
|
instruction that you don't want to talk |
|
|
|
00:45:27.480 --> 00:45:31.640 |
|
about climbing and having talked to Bard |
|
|
|
00:45:30.160 --> 00:45:33.640 |
|
recently I can tell you unfortunately |
|
|
|
00:45:31.640 --> 00:45:34.800 |
|
that it's not that easy so I tell the |
|
|
|
00:45:33.640 --> 00:45:36.599 |
|
model I don't want to talk about |
|
|
|
00:45:34.800 --> 00:45:38.000 |
|
climbing it does okay for a little bit |
|
|
|
00:45:36.599 --> 00:45:40.920 |
|
and then it's like all right but maybe |
|
|
|
00:45:38.000 --> 00:45:42.359 |
|
you want to try rope climbing um and so |
|
|
|
00:45:40.920 --> 00:45:44.559 |
|
we could continue trying to give |
|
|
|
00:45:42.359 --> 00:45:46.200 |
|
instructions to our model but maybe there's sort of a |
|
|
|
00:45:44.559 --> 00:45:49.079 |
|
way to impose this constraint on the |
|
|
|
00:45:46.200 --> 00:45:50.680 |
|
decoding side instead and so I'd say all |
|
|
|
00:45:49.079 --> 00:45:52.960 |
|
right I'm going to do something dramatic |
|
|
|
00:45:50.680 --> 00:45:54.440 |
|
right I know I can manipulate the |
|
|
|
00:45:52.960 --> 00:45:56.200 |
|
probability distribution I'm just going |
|
|
|
00:45:54.440 --> 00:45:57.920 |
|
to set the probability of climbing to be |
|
|
|
00:45:56.200 --> 00:46:00.440 |
|
zero I don't want to see this token like |
|
|
|
00:45:57.920 --> 00:46:02.640 |
|
I'm I'm completely over it um and this |
|
|
|
00:46:00.440 --> 00:46:04.839 |
|
is sort of nice in some sense because |
|
|
|
00:46:02.640 --> 00:46:06.720 |
|
this is pretty easy to do um remember |
|
|
|
00:46:04.839 --> 00:46:08.440 |
|
we're doing a softmax over the outputs |
|
|
|
00:46:06.720 --> 00:46:10.599 |
|
to choose this probability distribution |
|
|
|
00:46:08.440 --> 00:46:12.400 |
|
and so if we add a huge negative number |
|
|
|
00:46:10.599 --> 00:46:14.160 |
|
to the logit for climbing before we do |
|
|
|
00:46:12.400 --> 00:46:15.520 |
|
this softmax its probability is going to |
|
|
|
00:46:14.160 --> 00:46:18.640 |
|
be basically zero and we're never going |
|
|
|
00:46:15.520 --> 00:46:20.240 |
|
to see it as an output um but this |
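As an aside, that logit-masking trick can be sketched in a few lines (a toy example with made-up logits and a tiny vocabulary, not any particular model's API):

```python
import math

# Toy vocabulary and raw next-token logits (hypothetical values).
vocab = ["hiking", "swimming", "climbing", "cycling"]
logits = [2.0, 1.5, 3.0, 0.5]

# Ban "climbing": add a huge negative number to its logit before the softmax.
banned = vocab.index("climbing")
logits[banned] += -1e9

# Softmax over the modified logits.
exps = [math.exp(x - max(logits)) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]
# probs[banned] is now effectively 0, so the token can never be sampled.
```

Note that even though "climbing" originally had the highest logit, after masking it gets essentially zero probability mass.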
|
|
|
00:46:18.640 --> 00:46:22.480 |
|
doesn't seem like a perfect solution |
|
|
|
00:46:20.240 --> 00:46:24.400 |
|
right because you know what if the model |
|
|
|
00:46:22.480 --> 00:46:26.160 |
|
recommends bouldering to me do I have to |
|
|
|
00:46:24.400 --> 00:46:28.599 |
|
write like a sort of a list of every |
|
|
|
00:46:26.160 --> 00:46:30.599 |
|
possible climbing synonym in the world |
|
|
|
00:46:28.599 --> 00:46:32.079 |
|
um what if there's sort of an allowable |
|
|
|
00:46:30.599 --> 00:46:33.920 |
|
way to use this token like I want the |
|
|
|
00:46:32.079 --> 00:46:35.319 |
|
model to suggest hiking because climbing |
|
|
|
00:46:33.920 --> 00:46:37.480 |
|
up a mountain to see a good view is |
|
|
|
00:46:35.319 --> 00:46:38.720 |
|
relaxing but that's a use of the word |
|
|
|
00:46:37.480 --> 00:46:41.400 |
|
climbing and we just said that we can't |
|
|
|
00:46:38.720 --> 00:46:43.520 |
|
use the word climbing um or what if we |
|
|
|
00:46:41.400 --> 00:46:45.480 |
|
sort of generate other related terms |
|
|
|
00:46:43.520 --> 00:46:47.520 |
|
before we get to the restricted term |
|
|
|
00:46:45.480 --> 00:46:49.359 |
|
like the model starts suggesting maybe |
|
|
|
00:46:47.520 --> 00:46:51.480 |
|
you can work out by going to an indoor |
|
|
|
00:46:49.359 --> 00:46:52.920 |
|
rock blank and then what are we going to |
|
|
|
00:46:51.480 --> 00:46:54.800 |
|
say there's not we can't say rock |
|
|
|
00:46:52.920 --> 00:46:57.079 |
|
climbing so maybe the model suggests |
|
|
|
00:46:54.800 --> 00:46:58.640 |
|
rock collecting is a |
|
|
|
00:46:57.079 --> 00:47:01.400 |
|
hobby to stay in shape and that doesn't |
|
|
|
00:46:58.640 --> 00:47:03.480 |
|
sound good either um you could continue |
|
|
|
00:47:01.400 --> 00:47:05.640 |
|
like sort of engineering more and more |
|
|
|
00:47:03.480 --> 00:47:06.599 |
|
complicated rules here but maybe we |
|
|
|
00:47:05.640 --> 00:47:08.760 |
|
could do something that's a little |
|
|
|
00:47:06.599 --> 00:47:10.559 |
|
simpler so what if I just sample a bunch |
|
|
|
00:47:08.760 --> 00:47:11.920 |
|
of outputs from the model and then I |
|
|
|
00:47:10.559 --> 00:47:14.359 |
|
check if they're about climbing and I |
|
|
|
00:47:11.920 --> 00:47:16.280 |
|
get rid of them if they are right um |
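A minimal sketch of this sample-and-filter (rejection sampling) loop, using a hypothetical candidate pool and a toy keyword check standing in for a real model and classifier:

```python
import random

random.seed(0)

# Stand-in for sampling from the model: a toy pool of candidate outputs.
candidates = [
    "Try rock climbing at a local gym.",
    "Go for a swim a few times a week.",
    "Bouldering is a great full-body workout.",
    "Cycling to work keeps you in shape.",
]

def violates_constraint(text):
    # Toy keyword check; a real filter might be a trained classifier.
    return any(w in text.lower() for w in ("climbing", "bouldering"))

def sample_until_ok(max_tries=100):
    # Rejection sampling: draw outputs, discard any that violate the constraint.
    for _ in range(max_tries):
        out = random.choice(candidates)
        if not violates_constraint(out):
            return out
    return None  # gave up: everything sampled violated the constraint

safe_output = sample_until_ok()
```

The weakness described next in the lecture shows up directly here: if almost all of the model's probability mass violates the constraint, `max_tries` has to be very large.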
|
|
|
00:47:14.359 --> 00:47:18.200 |
|
this is sort of the advantage that it's |
|
|
|
00:47:16.280 --> 00:47:19.599 |
|
pretty easy to check after the fact if |
|
|
|
00:47:18.200 --> 00:47:22.480 |
|
the sequence has satisfied this |
|
|
|
00:47:19.599 --> 00:47:24.400 |
|
constraint you know we could train some |
|
|
|
00:47:22.480 --> 00:47:26.200 |
|
smaller model to guess if the topic of a |
|
|
|
00:47:24.400 --> 00:47:27.960 |
|
sentence is about climbing could check |
|
|
|
00:47:26.200 --> 00:47:30.040 |
|
for keywords we could have a friend |
|
|
|
00:47:27.960 --> 00:47:31.359 |
|
who's willing to see this content like |
|
|
|
00:47:30.040 --> 00:47:33.040 |
|
filter through it and throw everything |
|
|
|
00:47:31.359 --> 00:47:36.480 |
|
out that is |
|
|
|
00:47:33.040 --> 00:47:38.280 |
|
about climbing but if this model um |
|
|
|
00:47:36.480 --> 00:47:40.119 |
|
ascribes really high likelihood to this |
|
|
|
00:47:38.280 --> 00:47:42.559 |
|
like if this model was trained on you |
|
|
|
00:47:40.119 --> 00:47:44.760 |
|
know data from CS PhD students this |
|
|
|
00:47:42.559 --> 00:47:46.240 |
|
could be an extremely high likelihood |
|
|
|
00:47:44.760 --> 00:47:48.319 |
|
suggestion and so we might need to |
|
|
|
00:47:46.240 --> 00:47:49.839 |
|
regenerate hundreds or thousands of |
|
|
|
00:47:48.319 --> 00:47:52.559 |
|
sequences to find something that's not |
|
|
|
00:47:49.839 --> 00:47:55.240 |
|
about climbing um and that feels a little |
|
|
|
00:47:52.559 --> 00:47:56.920 |
|
bit inefficient right so is there |
|
|
|
00:47:55.240 --> 00:47:59.040 |
|
something that we can do that's a little |
|
|
|
00:47:56.920 --> 00:48:01.599 |
|
bit better than that well really we'd |
|
|
|
00:47:59.040 --> 00:48:03.200 |
|
like to guess at some point during our |
|
|
|
00:48:01.599 --> 00:48:05.200 |
|
generation if the sequence is going to |
|
|
|
00:48:03.200 --> 00:48:08.000 |
|
be about climbing and maybe like |
|
|
|
00:48:05.200 --> 00:48:10.640 |
|
recalibrate or you know we could even |
|
|
|
00:48:08.000 --> 00:48:12.079 |
|
restart or sort of shape our generations |
|
|
|
00:48:10.640 --> 00:48:14.520 |
|
so that we don't wind up with a sequence |
|
|
|
00:48:12.079 --> 00:48:16.319 |
|
that's about climbing in the first place |
|
|
|
00:48:14.520 --> 00:48:19.359 |
|
um one of the methods that we'll discuss |
|
|
|
00:48:16.319 --> 00:48:20.920 |
|
to do this is a method called FUDGE um |
|
|
|
00:48:19.359 --> 00:48:22.800 |
|
and unfortunately in their paper they |
|
|
|
00:48:20.920 --> 00:48:24.240 |
|
don't have the same anti-climbing bias I |
|
|
|
00:48:22.800 --> 00:48:27.000 |
|
do so this example is actually about |
|
|
|
00:48:24.240 --> 00:48:29.000 |
|
formality instead um the idea here is |
|
|
|
00:48:27.000 --> 00:48:32.079 |
|
that we want a sequence output of the |
|
|
|
00:48:29.000 --> 00:48:34.079 |
|
model that sort of satisfies this |
|
|
|
00:48:32.079 --> 00:48:36.079 |
|
constraint of being formal and the way |
|
|
|
00:48:34.079 --> 00:48:39.960 |
|
we're going to do this is at each step |
|
|
|
00:48:36.079 --> 00:48:41.640 |
|
of prediction we get the outputs of what |
|
|
|
00:48:39.960 --> 00:48:44.160 |
|
the model predicts is the next token |
|
|
|
00:48:41.640 --> 00:48:47.319 |
|
right this sort of distribution here in |
|
|
|
00:48:44.160 --> 00:48:49.760 |
|
blue and we also have some second |
|
|
|
00:48:47.319 --> 00:48:52.079 |
|
distribution which says given sort of |
|
|
|
00:48:49.760 --> 00:48:54.480 |
|
what we have so far How likely is this |
|
|
|
00:48:52.079 --> 00:48:56.920 |
|
to be a formal sentence at the end right |
|
|
|
00:48:54.480 --> 00:48:58.880 |
|
does a sentence that starts do you want |
|
|
|
00:48:56.920 --> 00:49:01.200 |
|
have a high likelihood of being formal |
|
|
|
00:48:58.880 --> 00:49:04.559 |
|
versus a sentence that starts do you |
|
|
|
00:49:01.200 --> 00:49:07.200 |
|
prefer and so this sort of guess at what |
|
|
|
00:49:04.559 --> 00:49:09.520 |
|
will be formal at the end of the um |
|
|
|
00:49:07.200 --> 00:49:10.960 |
|
generation will put high likelihood on |
|
|
|
00:49:09.520 --> 00:49:13.599 |
|
things that result in really formal |
|
|
|
00:49:10.960 --> 00:49:15.880 |
|
sentences like do you prefer or do you |
|
|
|
00:49:13.599 --> 00:49:17.200 |
|
thus whereas the original model might |
|
|
|
00:49:15.880 --> 00:49:19.440 |
|
have higher likelihood on things that |
|
|
|
00:49:17.200 --> 00:49:22.559 |
|
are maybe more commonly said like do you |
|
|
|
00:49:19.440 --> 00:49:24.319 |
|
want um so we combine these two |
|
|
|
00:49:22.559 --> 00:49:26.280 |
|
distributions you can just multiply them |
|
|
|
00:49:24.319 --> 00:49:29.079 |
|
together and then we sample from this |
|
|
|
00:49:26.280 --> 00:49:30.520 |
|
modified distribution which now has some |
|
|
|
00:49:29.079 --> 00:49:32.359 |
|
sort of high weight on things that the |
|
|
|
00:49:30.520 --> 00:49:33.559 |
|
model thinks are likely but also takes |
|
|
|
00:49:32.359 --> 00:49:35.960 |
|
into account the likelihood of |
|
|
|
00:49:33.559 --> 00:49:38.240 |
|
satisfying a constraint um this is |
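The combination step can be sketched like this (made-up probabilities over three candidate tokens; the real method does this over the full vocabulary at every decoding step):

```python
import random

random.seed(0)

# P(token | prefix) from the language model (hypothetical values).
lm_probs = {"want": 0.6, "prefer": 0.3, "thus": 0.1}
# P(formal at the end | prefix + token) from the future discriminator.
formal_probs = {"want": 0.2, "prefer": 0.7, "thus": 0.9}

# Multiply the two distributions elementwise and renormalize.
combined = {t: lm_probs[t] * formal_probs[t] for t in lm_probs}
z = sum(combined.values())
combined = {t: p / z for t, p in combined.items()}

# Sample the next token from the modified distribution.
tokens, weights = zip(*combined.items())
next_token = random.choices(tokens, weights=weights)[0]
```

With these numbers "prefer" overtakes "want": the LM alone preferred "want", but the product shifts mass toward tokens likely to end in a formal sentence.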
|
|
|
00:49:35.960 --> 00:49:40.640 |
|
another sort of method of modifying our |
|
|
|
00:49:38.240 --> 00:49:42.520 |
|
sampling distribution um with some |
|
|
|
00:49:40.640 --> 00:49:44.520 |
|
external information here and so this |
|
|
|
00:49:42.520 --> 00:49:47.440 |
|
results in sequences that wind up being |
|
|
|
00:49:44.520 --> 00:49:48.799 |
|
sort of more likely to be formal without |
|
|
|
00:49:47.440 --> 00:49:50.280 |
|
having to sample a whole bunch of |
|
|
|
00:49:48.799 --> 00:49:52.880 |
|
sentences and reject the ones that we |
|
|
|
00:49:50.280 --> 00:49:54.720 |
|
think don't satisfy this constraint so |
|
|
|
00:49:52.880 --> 00:49:57.119 |
|
how do we get sort of a guess of what |
|
|
|
00:49:54.720 --> 00:49:58.839 |
|
will be formal at the end of generation |
|
|
|
00:49:57.119 --> 00:50:01.319 |
|
um this is where the name FUDGE comes |
|
|
|
00:49:58.839 --> 00:50:03.319 |
|
from the FUD stands for future |
|
|
|
00:50:01.319 --> 00:50:06.640 |
|
discriminators and so what they do is |
|
|
|
00:50:03.319 --> 00:50:08.920 |
|
they train a model on prefixes to guess |
|
|
|
00:50:06.640 --> 00:50:10.400 |
|
whether that sequence will be formal um |
|
|
|
00:50:08.920 --> 00:50:12.040 |
|
you can do this if you have a bunch of |
|
|
|
00:50:10.400 --> 00:50:15.319 |
|
data that's sort of sorted into formal |
|
|
|
00:50:12.040 --> 00:50:17.720 |
|
and not formal right every um sort of |
|
|
|
00:50:15.319 --> 00:50:20.119 |
|
prefix of a sentence in the formal |
|
|
|
00:50:17.720 --> 00:50:21.480 |
|
category is a training example right you |
|
|
|
00:50:20.119 --> 00:50:23.720 |
|
know a sentence that starts do you |
|
|
|
00:50:21.480 --> 00:50:27.599 |
|
prefer you can chop off each token to |
|
|
|
00:50:23.720 --> 00:50:29.920 |
|
get sort of a um set of prefixes |
|
|
|
00:50:27.599 --> 00:50:31.160 |
|
of sequences that have the label formal |
|
|
|
00:50:29.920 --> 00:50:33.559 |
|
and you can do the same thing to your |
|
|
|
00:50:31.160 --> 00:50:34.920 |
|
informal set and train a discriminator |
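Building that prefix-labeled training set can be sketched as follows (a toy example; the paper's actual data pipeline may differ in details):

```python
# Every token prefix of a labeled sentence becomes a training example
# carrying that sentence's label ("formal" or "informal").
def prefix_examples(tokens, label):
    return [(tokens[:i], label) for i in range(1, len(tokens) + 1)]

formal_sentence = ["do", "you", "prefer", "coffee"]
examples = prefix_examples(formal_sentence, "formal")
# [(["do"], "formal"), (["do", "you"], "formal"),
#  (["do", "you", "prefer"], "formal"), (["do", "you", "prefer", "coffee"], "formal")]
```

Doing the same over the informal set gives negative examples, and the discriminator is trained on the union of the two.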
|
|
|
00:50:33.559 --> 00:50:36.559 |
|
to choose between them to say like |
|
|
|
00:50:34.920 --> 00:50:38.400 |
|
what's the probability that the sentence |
|
|
|
00:50:36.559 --> 00:50:41.160 |
|
will belong to the formal set when we |
|
|
|
00:50:38.400 --> 00:50:43.319 |
|
finish and so this idea of sort of |
|
|
|
00:50:41.160 --> 00:50:44.359 |
|
trying to guess at a given decoding step |
|
|
|
00:50:43.319 --> 00:50:49.480 |
|
if we're going to wind up with our |
|
|
|
00:50:44.359 --> 00:50:50.799 |
|
constraints satisfied at the end um is a |
|
|
|
00:50:49.480 --> 00:50:53.000 |
|
sort of key way to do constraint |
|
|
|
00:50:50.799 --> 00:50:56.000 |
|
decoding um and one that we'll return to |
|
|
|
00:50:53.000 --> 00:50:58.280 |
|
in just a couple slides here |
|
|
|
00:50:56.000 --> 00:51:00.440 |
|
I want to touch on something |
|
|
|
00:50:58.280 --> 00:51:03.079 |
|
slightly different which is that maybe |
|
|
|
00:51:00.440 --> 00:51:04.599 |
|
one of the constraints we care about is |
|
|
|
00:51:03.079 --> 00:51:07.319 |
|
something a little more nebulous like we |
|
|
|
00:51:04.599 --> 00:51:09.160 |
|
want to match human preference um the |
|
|
|
00:51:07.319 --> 00:51:12.079 |
|
way that we usually accomplish this |
|
|
|
00:51:09.160 --> 00:51:14.920 |
|
constraint is a little bit different |
|
|
|
00:51:12.079 --> 00:51:16.040 |
|
right um this we'd usually do through |
|
|
|
00:51:14.920 --> 00:51:18.839 |
|
like reinforcement learning from |
|
|
|
00:51:16.040 --> 00:51:21.559 |
|
human feedback um and so we take sort of |
|
|
|
00:51:18.839 --> 00:51:24.960 |
|
our original model distribution and we |
|
|
|
00:51:21.559 --> 00:51:27.960 |
|
take a sort of really like tight like |
|
|
|
00:51:24.960 --> 00:51:30.200 |
|
distribution of evidence that says like |
|
|
|
00:51:27.960 --> 00:51:31.680 |
|
um this model says that this sequence is |
|
|
|
00:51:30.200 --> 00:51:33.960 |
|
really high reward this sequence is |
|
|
|
00:51:31.680 --> 00:51:35.640 |
|
really low reward and we try to sort of |
|
|
|
00:51:33.960 --> 00:51:38.200 |
|
combine them somehow through training so |
|
|
|
00:51:35.640 --> 00:51:41.240 |
|
we get a new model that is um quote |
|
|
|
00:51:38.200 --> 00:51:43.240 |
|
unquote aligned and that it has like a |
|
|
|
00:51:41.240 --> 00:51:45.280 |
|
higher likelihood of giving us things |
|
|
|
00:51:43.240 --> 00:51:48.640 |
|
that have really high reward according |
|
|
|
00:51:45.280 --> 00:51:51.319 |
|
to our reward distribution um you can |
|
|
|
00:51:48.640 --> 00:51:53.599 |
|
view this though as a type of Bayesian |
|
|
|
00:51:51.319 --> 00:51:55.119 |
|
inference and so what this means is the |
|
|
|
00:51:53.599 --> 00:51:57.440 |
|
distribution that we really want to get |
|
|
|
00:51:55.119 --> 00:51:59.880 |
|
at the end is a distribution that |
|
|
|
00:51:57.440 --> 00:52:03.160 |
|
combines our original models |
|
|
|
00:51:59.880 --> 00:52:05.680 |
|
distribution and some idea of like How |
|
|
|
00:52:03.160 --> 00:52:08.480 |
|
likely we are to satisfy the reward |
|
|
|
00:52:05.680 --> 00:52:10.720 |
|
right um this we do through |
|
|
|
00:52:08.480 --> 00:52:12.359 |
|
reinforcement learning but if we sort of |
|
|
|
00:52:10.720 --> 00:52:14.480 |
|
know what these two distributions look |
|
|
|
00:52:12.359 --> 00:52:16.119 |
|
like we've we've just been talking about |
|
|
|
00:52:14.480 --> 00:52:17.680 |
|
a lot of methods that modify the |
|
|
|
00:52:16.119 --> 00:52:20.119 |
|
original models distribution with |
|
|
|
00:52:17.680 --> 00:52:21.880 |
|
external information it seems like maybe |
|
|
|
00:52:20.119 --> 00:52:24.760 |
|
we could just add that external |
|
|
|
00:52:21.880 --> 00:52:26.200 |
|
information in at decoding time to get |
|
|
|
00:52:24.760 --> 00:52:29.040 |
|
some of the same |
|
|
|
00:52:26.200 --> 00:52:31.040 |
|
effects um and it turns out you can do |
|
|
|
00:52:29.040 --> 00:52:32.799 |
|
exactly this so this is a paper from |
|
|
|
00:52:31.040 --> 00:52:36.680 |
|
last year called reward augmented |
|
|
|
00:52:32.799 --> 00:52:39.079 |
|
decoding and the idea here is sort of um |
|
|
|
00:52:36.680 --> 00:52:41.839 |
|
in the same conceptual class as FUDGE |
|
|
|
00:52:39.079 --> 00:52:44.079 |
|
but instead of um predicting whether |
|
|
|
00:52:41.839 --> 00:52:46.079 |
|
we're likely to satisfy the constraint |
|
|
|
00:52:44.079 --> 00:52:47.599 |
|
we're predicting how much reward we |
|
|
|
00:52:46.079 --> 00:52:49.880 |
|
think that sequence will have at the end |
|
|
|
00:52:47.599 --> 00:52:52.599 |
|
of generation so we take our original |
|
|
|
00:52:49.880 --> 00:52:54.839 |
|
model without doing any RLHF and we get |
|
|
|
00:52:52.599 --> 00:52:58.160 |
|
the output we get the predictions for |
|
|
|
00:52:54.839 --> 00:52:59.400 |
|
the next token and then we use a model |
|
|
|
00:52:58.160 --> 00:53:02.359 |
|
that's been trained to predict the |
|
|
|
00:52:59.400 --> 00:53:05.040 |
|
likely reward given some prefix like a |
|
|
|
00:53:02.359 --> 00:53:06.720 |
|
future discriminator and we calculate |
|
|
|
00:53:05.040 --> 00:53:08.200 |
|
the likely reward if we pick each of |
|
|
|
00:53:06.720 --> 00:53:09.799 |
|
those tokens and then we use the |
|
|
|
00:53:08.200 --> 00:53:12.319 |
|
combination of those two distributions |
|
|
|
00:53:09.799 --> 00:53:13.720 |
|
to choose what to decode next um and |
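A toy sketch of one such decoding step (hypothetical numbers; the weighting of the reward term, here called `beta`, is an assumed hyperparameter, and real reward-augmented decoding scores only the top candidate tokens):

```python
# LM logits for candidate next tokens (hypothetical values).
lm_logits = {"want": 2.0, "prefer": 1.2, "thus": 0.3}
# Predicted downstream reward if we append each token, from a
# FUDGE-style prefix reward model (hypothetical values).
predicted_reward = {"want": 0.1, "prefer": 0.9, "thus": 0.5}
beta = 2.0  # weight on the reward term

# Combine the two signals and pick the best-scoring token greedily.
scores = {t: lm_logits[t] + beta * predicted_reward[t] for t in lm_logits}
next_token = max(scores, key=scores.get)
```

Here "prefer" wins (score 3.0) even though the raw LM preferred "want", which is the whole point: the reward model reshapes decoding without any RL fine-tuning of the base model.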
|
|
|
00:53:12.319 --> 00:53:16.000 |
|
this sort of gives you some of the |
|
|
|
00:53:13.720 --> 00:53:18.440 |
|
benefits of RLHF without actually having |
|
|
|
00:53:16.000 --> 00:53:21.200 |
|
to do reinforcement learning so it's a |
|
|
|
00:53:18.440 --> 00:53:23.160 |
|
way of treating like aligning to human |
|
|
|
00:53:21.200 --> 00:53:26.839 |
|
feedback as just another constraint that |
|
|
|
00:53:23.160 --> 00:53:30.400 |
|
you can impose at decoding time |
|
|
|
00:53:26.839 --> 00:53:32.319 |
|
so those were sort of a a subset of the |
|
|
|
00:53:30.400 --> 00:53:34.280 |
|
um constrained decoding strategies that |
|
|
|
00:53:32.319 --> 00:53:35.799 |
|
people use um before we get into the |
|
|
|
00:53:34.280 --> 00:53:38.400 |
|
human in the loop stuff are there any |
|
|
|
00:53:35.799 --> 00:53:38.400 |
|
questions on |
|
|
|
00:53:39.040 --> 00:53:43.599 |
|
this yes for |
|
|
|
00:53:44.960 --> 00:53:48.319 |
|
the do you have |
|
|
|
00:53:52.799 --> 00:53:57.440 |
|
to right so for the discriminator do you |
|
|
|
00:53:55.640 --> 00:54:00.000 |
|
need to train one for every constraint |
|
|
|
00:53:57.440 --> 00:54:01.440 |
|
and you do yeah so you need to have some |
|
|
|
00:54:00.000 --> 00:54:02.920 |
|
set of data that satisfies your |
|
|
|
00:54:01.440 --> 00:54:05.319 |
|
constraint and some set of data that |
|
|
|
00:54:02.920 --> 00:54:08.200 |
|
doesn't before you can enforce a new |
|
|
|
00:54:05.319 --> 00:54:10.200 |
|
constraint and an alternative might be |
|
|
|
00:54:08.200 --> 00:54:12.040 |
|
like in the paper that's what they did |
|
|
|
00:54:10.200 --> 00:54:16.400 |
|
but an alternative might be just to |
|
|
|
00:54:12.040 --> 00:54:18.359 |
|
train a discriminator to determine |
|
|
|
00:54:16.400 --> 00:54:20.880 |
|
whether any constraint was violated so |
|
|
|
00:54:18.359 --> 00:54:23.359 |
|
if you have 100 constraints you could do |
|
|
|
00:54:20.880 --> 00:54:25.599 |
|
a binary predictor about whether any |
|
|
|
00:54:23.359 --> 00:54:26.880 |
|
constraint is violated and then |
|
|
|
00:54:25.599 --> 00:54:29.040 |
|
that would also be |
|
|
|
00:54:26.880 --> 00:54:30.559 |
|
sufficient but if you wanted to add a |
|
|
|
00:54:29.040 --> 00:54:34.079 |
|
new constraint you'd still have to |
|
|
|
00:54:30.559 --> 00:54:34.079 |
|
retrain or you have to retrain |
|
|
|
00:54:35.160 --> 00:54:41.319 |
|
or the the reason that this is sort of |
|
|
|
00:54:38.119 --> 00:54:43.119 |
|
relatively reasonable to do is that this |
|
|
|
00:54:41.319 --> 00:54:45.240 |
|
determination of if a constraint is |
|
|
|
00:54:43.119 --> 00:54:46.960 |
|
likely to be violated is sort of a a |
|
|
|
00:54:45.240 --> 00:54:48.520 |
|
lighter weight or an easier task to |
|
|
|
00:54:46.960 --> 00:54:50.520 |
|
learn you can use a relatively small |
|
|
|
00:54:48.520 --> 00:54:52.079 |
|
model for this versus like your big |
|
|
|
00:54:50.520 --> 00:54:53.680 |
|
model that has to be able to |
|
|
|
00:54:52.079 --> 00:54:55.920 |
|
predict the next token for any sequence |
|
|
|
00:54:53.680 --> 00:54:58.400 |
|
anymore yeah another another like |
|
|
|
00:54:55.920 --> 00:55:00.760 |
|
interesting thing is if you think about |
|
|
|
00:54:58.400 --> 00:55:01.520 |
|
it normally you're predicting with your |
|
|
|
00:55:00.760 --> 00:55:04.119 |
|
big |
|
|
|
00:55:01.520 --> 00:55:06.359 |
|
softmax like this over all of your |
|
|
|
00:55:04.119 --> 00:55:09.680 |
|
vocabulary you can even use the same |
|
|
|
00:55:06.359 --> 00:55:11.920 |
|
representations here to predict with a |
|
|
|
00:55:09.680 --> 00:55:13.359 |
|
binary classifier uh whether the |
|
|
|
00:55:11.920 --> 00:55:14.559 |
|
constraint is violated let's say you |
|
|
|
00:55:13.359 --> 00:55:17.240 |
|
have 100 |
|
|
|
00:55:14.559 --> 00:55:19.240 |
|
constraints this is still a vector of |
|
|
|
00:55:17.240 --> 00:55:21.520 |
|
size 100 compared to your vector of size |
|
|
|
00:55:19.240 --> 00:55:26.240 |
|
32,000 that you're using for LLaMA right |
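To illustrate that size argument, here is a toy sketch with random weights standing in for trained heads (the hidden size and values are made up; the point is just the 100-dim head next to the 32,000-dim vocabulary head on the same representation):

```python
import random

random.seed(0)
hidden_size, vocab_size, n_constraints = 16, 32000, 100

# The model's final hidden state for the current prefix (toy values).
hidden = [random.gauss(0, 1) for _ in range(hidden_size)]

def linear(x, n_out):
    # Toy linear layer: random weights stand in for a trained projection.
    return [sum(xi * random.gauss(0, 1) for xi in x) for _ in range(n_out)]

# Both heads read the SAME hidden state, so the constraint head adds
# almost no inference cost next to the vocabulary softmax.
vocab_logits = linear(hidden, vocab_size)          # next-token prediction
constraint_logits = linear(hidden, n_constraints)  # per-constraint violation scores
```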
|
|
|
00:55:21.520 --> 00:55:28.280 |
|
so it's not like this adds the training |
|
|
|
00:55:26.240 --> 00:55:32.799 |
|
would cost some time but it adds very |
|
|
|
00:55:28.280 --> 00:55:32.799 |
|
little like inference time I guess |
|
|
|
00:55:33.440 --> 00:55:38.960 |
|
basically the rock |
|
|
|
00:55:35.880 --> 00:55:41.400 |
|
sound so when you do the constraint you |
|
|
|
00:55:38.960 --> 00:55:43.160 |
|
use like a more general |
|
|
|
00:55:41.400 --> 00:55:44.680 |
|
like do |
|
|
|
00:55:43.160 --> 00:55:48.160 |
|
notest |
|
|
|
00:55:44.680 --> 00:55:50.799 |
|
or I guess like in that constraint for |
|
|
|
00:55:48.160 --> 00:55:50.799 |
|
you can add |
|
|
|
00:55:52.559 --> 00:55:57.000 |
|
like, is there |
|
|
|
00:55:57.880 --> 00:56:00.720 |
|
like is there a way to generalize your |
|
|
|
00:55:59.400 --> 00:56:04.760 |
|
constraint would be like don't talk |
|
|
|
00:56:00.720 --> 00:56:07.039 |
|
about this whole set of hobbies um you |
|
|
|
00:56:04.760 --> 00:56:08.960 |
|
could do that by training a |
|
|
|
00:56:07.039 --> 00:56:10.400 |
|
discriminator um by training one |
|
|
|
00:56:08.960 --> 00:56:12.359 |
|
discriminator that considers all of |
|
|
|
00:56:10.400 --> 00:56:15.119 |
|
those or by training like a hundred |
|
|
|
00:56:12.359 --> 00:56:17.559 |
|
different discriminators and then um |
|
|
|
00:56:15.119 --> 00:56:19.520 |
|
sort of taking like the maximum score |
|
|
|
00:56:17.559 --> 00:56:21.240 |
|
from any of them right like you want to |
|
|
|
00:56:19.520 --> 00:56:23.240 |
|
you want to be able to exclude all of |
|
|
|
00:56:21.240 --> 00:56:27.799 |
|
these things so you consider if any of |
|
|
|
00:56:23.240 --> 00:56:30.720 |
|
them are violated yeah and for um reward |
|
|
|
00:56:27.799 --> 00:56:32.839 |
|
augmented recoding how do we sort of |
|
|
|
00:56:30.720 --> 00:56:36.039 |
|
like frame that reward model or is that |
|
|
|
00:56:32.839 --> 00:56:38.400 |
|
just come from the previously done RLHF |
|
|
|
00:56:36.039 --> 00:56:41.079 |
|
data that they store from there and then |
|
|
|
00:56:38.400 --> 00:56:44.119 |
|
you sort of like train another |
|
|
|
00:56:41.079 --> 00:56:47.880 |
|
discriminator but this one |
|
|
|
00:56:44.119 --> 00:56:50.799 |
|
now I I fully understand yeah so how do |
|
|
|
00:56:47.880 --> 00:56:52.920 |
|
we get the the reward model here this is |
|
|
|
00:56:50.799 --> 00:56:55.280 |
|
we can use the same data that we'd use |
|
|
|
00:56:52.920 --> 00:56:58.000 |
|
for RLHF but we need a slightly different |
|
|
|
00:56:55.280 --> 00:57:01.119 |
|
model so for RLHF we'll train a reward |
|
|
|
00:56:58.000 --> 00:57:02.599 |
|
model over full sequences right and here |
|
|
|
00:57:01.119 --> 00:57:05.280 |
|
we need to do the same trick where we |
|
|
|
00:57:02.599 --> 00:57:07.280 |
|
sort of look at just prefixes and try to |
|
|
|
00:57:05.280 --> 00:57:09.640 |
|
guess the reward downstream but if we |
|
|
|
00:57:07.280 --> 00:57:12.440 |
|
already have preference data then |
|
|
|
00:57:09.640 --> 00:57:15.119 |
|
we have some um like we have a data |
|
|
|
00:57:12.440 --> 00:57:16.720 |
|
source to do this with I think if I'm |
|
|
|
00:57:15.119 --> 00:57:19.240 |
|
remembering correctly they also had a |
|
|
|
00:57:16.720 --> 00:57:20.920 |
|
couple more sort of tricks for data |
|
|
|
00:57:19.240 --> 00:57:22.640 |
|
augmentation to get this to work this is |
|
|
|
00:57:20.920 --> 00:57:25.720 |
|
sort of like a non-trivial thing to |
|
|
|
00:57:22.640 --> 00:57:28.039 |
|
figure out um because like reward is |
|
|
|
00:57:25.720 --> 00:57:30.200 |
|
generally a sequence-level |
|
|
|
00:57:28.039 --> 00:57:32.280 |
|
attribute and also if you don't know |
|
|
|
00:57:30.200 --> 00:57:34.160 |
|
very much about rhf we're going to cover |
|
|
|
00:57:32.280 --> 00:57:36.400 |
|
that in a future class so don't worry if |
|
|
|
00:57:34.160 --> 00:57:37.880 |
|
this is a yeah sorry to jump ahead a |
|
|
|
00:57:36.400 --> 00:57:39.880 |
|
little no no |
|
|
|
00:57:37.880 --> 00:57:43.640 |
|
worries |
|
|
|
00:57:39.880 --> 00:57:47.240 |
|
yeah application this like why would we |
|
|
|
00:57:43.640 --> 00:57:49.640 |
|
doing this to ensure it could be like |
|
|
|
00:57:47.240 --> 00:57:52.839 |
|
our llm would want to highlight certain |
|
|
|
00:57:49.640 --> 00:57:53.799 |
|
qualities like we want our evence to be |
|
|
|
00:57:52.839 --> 00:57:55.960 |
|
more |
|
|
|
00:57:53.799 --> 00:57:57.839 |
|
empathetic is there |
|
|
|
00:57:55.960 --> 00:57:59.440 |
|
something yeah like what are the real |
|
|
|
00:57:57.839 --> 00:58:01.280 |
|
world applications like could we use |
|
|
|
00:57:59.440 --> 00:58:03.680 |
|
this to make LLMs more empathetic or |
|
|
|
00:58:01.280 --> 00:58:06.359 |
|
something yeah any any real attribute |
|
|
|
00:58:03.680 --> 00:58:08.000 |
|
that you can sort of collect like |
|
|
|
00:58:06.359 --> 00:58:09.839 |
|
positive and negative data for you could |
|
|
|
00:58:08.000 --> 00:58:12.200 |
|
do this kind of constraints for I think |
|
|
|
00:58:09.839 --> 00:58:15.119 |
|
the the ones you see most commonly are |
|
|
|
00:58:12.200 --> 00:58:16.480 |
|
the human preference and then like |
|
|
|
00:58:15.119 --> 00:58:18.839 |
|
negative constraints like you don't want |
|
|
|
00:58:16.480 --> 00:58:20.000 |
|
your model to generate offensive content |
|
|
|
00:58:18.839 --> 00:58:21.839 |
|
and if you can build like a good |
|
|
|
00:58:20.000 --> 00:58:23.319 |
|
discriminator for is a sentence going in |
|
|
|
00:58:21.839 --> 00:58:26.160 |
|
a really offensive direction you can |
|
|
|
00:58:23.319 --> 00:58:28.440 |
|
kind of stop it from generating it |
|
|
|
00:58:26.160 --> 00:58:30.480 |
|
yeah would it be a good idea if you |
|
|
|
00:58:28.440 --> 00:58:33.760 |
|
generate a bunch of samples and ask the |
|
|
|
00:58:30.480 --> 00:58:35.480 |
|
model itself whether it violates the |
|
|
|
00:58:33.760 --> 00:58:37.319 |
|
yeah you could do that for sure could |
|
|
|
00:58:35.480 --> 00:58:38.920 |
|
you ask like could you generate a bunch |
|
|
|
00:58:37.319 --> 00:58:42.440 |
|
of samples and ask the model if it |
|
|
|
00:58:38.920 --> 00:58:44.720 |
|
violates the constraint um this is also |
|
|
|
00:58:42.440 --> 00:58:47.119 |
|
a type of sort of sample and then rerank |
|
|
|
00:58:44.720 --> 00:58:52.319 |
|
strategy um but yeah this would be sort |
|
|
|
00:58:47.119 --> 00:58:54.000 |
|
of a more um clever like less |
|
|
|
00:58:52.319 --> 00:58:55.559 |
|
heavyweight version of this checking if |
|
|
|
00:58:54.000 --> 00:58:57.319 |
|
it's about climbing I mean right you'd |
|
|
|
00:58:55.559 --> 00:58:58.520 |
|
like ask the model if it violated the |
|
|
|
00:58:57.319 --> 00:59:00.160 |
|
constraint and if it's a good enough |
|
|
|
00:58:58.520 --> 00:59:02.480 |
|
model it could probably do that pretty |
|
|
|
00:59:00.160 --> 00:59:05.160 |
|
well I suppose in that case you don't |
|
|
|
00:59:02.480 --> 00:59:08.160 |
|
have to train anything yeah yeah and |
|
|
|
00:59:05.160 --> 00:59:10.359 |
|
this is sort of a general like the |
|
|
|
00:59:08.160 --> 00:59:12.240 |
|
generating text that like satisfies a |
|
|
|
00:59:10.359 --> 00:59:14.079 |
|
constraint is harder than checking if a |
|
|
|
00:59:12.240 --> 00:59:16.280 |
|
text satisfies a constraint so even if |
|
|
|
00:59:14.079 --> 00:59:17.880 |
|
the model isn't good about like not |
|
|
|
00:59:16.280 --> 00:59:19.440 |
|
generating text about climbing when you |
|
|
|
00:59:17.880 --> 00:59:20.520 |
|
tell it to it might be able to tell if |
|
|
|
00:59:19.440 --> 00:59:23.640 |
|
text is |
|
|
|
00:59:20.520 --> 00:59:26.640 |
|
about yeah yeah so how do |
|
|
|
00:59:23.640 --> 00:59:26.640 |
|
you |
|
|
|
00:59:28.400 --> 00:59:32.359 |
|
have different |
|
|
|
00:59:32.920 --> 00:59:36.319 |
|
different you have |
|
|
|
00:59:36.599 --> 00:59:42.119 |
|
to yeah like how do you collect the data |
|
|
|
00:59:38.839 --> 00:59:45.720 |
|
to train this discriminator um generally |
|
|
|
00:59:42.119 --> 00:59:47.160 |
|
you're going to see like you'll look to |
|
|
|
00:59:45.720 --> 00:59:48.720 |
|
see if there are data sets that already |
|
|
|
00:59:47.160 --> 00:59:50.160 |
|
captured this attribute or you could |
|
|
|
00:59:48.720 --> 00:59:51.599 |
|
sort of write heuristics to try to |
|
|
|
00:59:50.160 --> 00:59:53.839 |
|
recover it if it's an attribute that not |
|
|
|
00:59:51.599 --> 00:59:55.480 |
|
a lot of other people care about like |
|
|
|
00:59:53.839 --> 00:59:58.280 |
|
you could write your heuristic to check |
|
|
|
00:59:55.480 --> 01:00:00.160 |
|
if text is about climbing for instance |
|
|
|
00:59:58.280 --> 01:00:02.359 |
|
um and then try to recover somewhat noisy |
|
|
|
01:00:00.160 --> 01:00:04.200 |
|
samples of data that is or is not about |
|
|
|
01:00:02.359 --> 01:00:05.559 |
|
climbing maybe you could scrape a |
|
|
|
01:00:04.200 --> 01:00:07.000 |
|
climbing forum and then scrape like a |
|
|
|
01:00:05.559 --> 01:00:09.079 |
|
hiking forum and use the difference |
|
|
|
01:00:07.000 --> 01:00:10.319 |
|
between them um but for a lot of tests |
|
|
|
01:00:09.079 --> 01:00:11.760 |
|
there's actually pretty good data sets |
|
|
|
01:00:10.319 --> 01:00:14.400 |
|
already out there for this so there's |
|
|
|
01:00:11.760 --> 01:00:17.480 |
|
like in there's a lot of style transfer |
|
|
|
01:00:14.400 --> 01:00:20.200 |
|
tasks that are like go from informal to |
|
|
|
01:00:17.480 --> 01:00:22.240 |
|
formal or go from this to that or like |
|
|
|
01:00:20.200 --> 01:00:24.039 |
|
make this text in iambic pentameter and |
|
|
|
01:00:22.240 --> 01:00:26.559 |
|
you can find like data from those |
|
|
|
01:00:24.039 --> 01:00:26.559 |
|
sources |
|
|
|
01:00:26.799 --> 01:00:31.599 |
|
we never like talked about RLHF yet but I'm |
|
|
|
01:00:29.520 --> 01:00:34.520 |
|
really curious with like the reward |
|
|
|
01:00:31.599 --> 01:00:38.039 |
|
augmented decoding whether it would perform better |
|
|
|
01:00:34.520 --> 01:00:39.079 |
|
than like fine-tuning on RLHF like certainly |
|
|
|
01:00:38.039 --> 01:00:42.720 |
|
more |
|
|
|
01:00:39.079 --> 01:00:45.039 |
|
efficient but I I was I think this is a |
|
|
|
01:00:42.720 --> 01:00:49.760 |
|
comparison they make in their paper but |
|
|
|
01:00:45.039 --> 01:00:52.520 |
|
I don't remember their numbers on yeah um in |
|
|
|
01:00:49.760 --> 01:00:55.280 |
|
general there's this sort of a like you |
|
|
|
01:00:52.520 --> 01:00:57.039 |
|
can pay a one-time kind of heavy cost to |
|
|
|
01:00:55.280 --> 01:00:58.880 |
|
fine-tune or you can pay costs at |
|
|
|
01:00:57.039 --> 01:01:01.160 |
|
inference time every time to make sort |
|
|
|
01:00:58.880 --> 01:01:03.880 |
|
of a to make your model better in any of |
|
|
|
01:01:01.160 --> 01:01:06.160 |
|
these ways and depending on how much |
|
|
|
01:01:03.880 --> 01:01:09.119 |
|
inference you're planning to do like one or |
|
|
|
01:01:06.160 --> 01:01:09.119 |
|
the other of these could be |
|
|
|
01:01:11.240 --> 01:01:16.400 |
|
better |
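To make the fine-tune-once versus pay-per-inference trade-off concrete, a toy break-even calculation (the dollar figures are invented for illustration):

```python
def break_even_requests(finetune_cost: float, extra_cost_per_request: float) -> float:
    """Number of requests at which a one-time fine-tuning cost becomes
    cheaper than paying a decoding-time overhead on every request."""
    return finetune_cost / extra_cost_per_request

# Invented numbers: $500 to fine-tune once, versus an extra $0.002 of
# compute per request for a heavier decoding method.
n = break_even_requests(500.0, 0.002)
print(round(n))  # -> 250000: past roughly 250k requests, fine-tuning wins
```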
|
|
|
01:01:12.839 --> 01:01:19.200 |
|
great so now we're going to talk about |
|
|
|
01:01:16.400 --> 01:01:21.160 |
|
sort of methods for introducing human |
|
|
|
01:01:19.200 --> 01:01:22.680 |
|
interaction into the decoding process |
|
|
|
01:01:21.160 --> 01:01:25.240 |
|
and everything we've looked at so far |
|
|
|
01:01:22.680 --> 01:01:26.920 |
|
has been very sort of black box, kind |
|
|
|
01:01:25.240 --> 01:01:28.920 |
|
of hands off right like you give the |
|
|
|
01:01:26.920 --> 01:01:30.640 |
|
model M some input maybe we do some kind |
|
|
|
01:01:28.920 --> 01:01:33.640 |
|
of manipulation on the decoding side you |
|
|
|
01:01:30.640 --> 01:01:37.160 |
|
get one output back right um but in a |
|
|
|
01:01:33.640 --> 01:01:38.920 |
|
lot of situations where maybe you have |
|
|
|
01:01:37.160 --> 01:01:40.960 |
|
some high-risk application and you need |
|
|
|
01:01:38.920 --> 01:01:42.640 |
|
somebody to be consistently monitoring |
|
|
|
01:01:40.960 --> 01:01:43.799 |
|
and maybe intervening or you're doing |
|
|
|
01:01:42.640 --> 01:01:46.359 |
|
something where you want to do some kind |
|
|
|
01:01:43.799 --> 01:01:47.960 |
|
of human AI collaboration um and you |
|
|
|
01:01:46.359 --> 01:01:49.160 |
|
want to be able to go back and forth or |
|
|
|
01:01:47.960 --> 01:01:50.960 |
|
you want to have a conversation with the |
|
|
|
01:01:49.160 --> 01:01:53.480 |
|
model what you're actually doing is sort |
|
|
|
01:01:50.960 --> 01:01:54.960 |
|
of a series of decodings with human |
|
|
|
01:01:53.480 --> 01:01:56.319 |
|
intervention in between |
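That series of decodings with human intervention in between is, schematically, just a loop; here is a minimal sketch with stub functions standing in for the model and the human:

```python
def interleaved_session(generate, human_edit, prompt, turns=3):
    """Alternate model continuations and human edits: the model always
    conditions on the mixed human/model text produced so far."""
    text = prompt
    for _ in range(turns):
        text = text + generate(text)  # model decodes a continuation
        text = human_edit(text)       # human revises, appends, or constrains
    return text

# Stubs for illustration; a real system would call an LLM and a UI here.
generate = lambda ctx: " [model continuation]"
human_edit = lambda txt: txt + " [human edit]"
print(interleaved_session(generate, human_edit, "Once upon a time,", turns=2))
```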
|
|
|
01:01:54.960 --> 01:01:58.640 |
|
um and I'm going to talk about a couple |
|
|
|
01:01:56.319 --> 01:02:00.760 |
|
of these strategies briefly I think if |
|
|
|
01:01:58.640 --> 01:02:02.200 |
|
you've used sort of a modern llm you're |
|
|
|
01:02:00.760 --> 01:02:04.440 |
|
probably familiar with at least a few of |
|
|
|
01:02:02.200 --> 01:02:06.720 |
|
them already um we'll sort of put names |
|
|
|
01:02:04.440 --> 01:02:08.359 |
|
to each of them and the set of examples |
|
|
|
01:02:06.720 --> 01:02:10.880 |
|
that we're running with here are from a |
|
|
|
01:02:08.359 --> 01:02:13.880 |
|
paper called wordcraft which is about um |
|
|
|
01:02:10.880 --> 01:02:15.480 |
|
story generation with llm assistants but |
|
|
|
01:02:13.880 --> 01:02:17.559 |
|
these can also be applied sort of more |
|
|
|
01:02:15.480 --> 01:02:20.319 |
|
generally to any kind of task where |
|
|
|
01:02:17.559 --> 01:02:23.799 |
|
you'd want to go back and forth with a |
|
|
|
01:02:20.319 --> 01:02:25.319 |
|
model um the sort of easiest or maybe |
|
|
|
01:02:23.799 --> 01:02:27.599 |
|
simplest place to start here is just |
|
|
|
01:02:25.319 --> 01:02:29.760 |
|
with interleaving text right you can |
|
|
|
01:02:27.599 --> 01:02:31.400 |
|
choose when the model starts and stops |
|
|
|
01:02:29.760 --> 01:02:33.720 |
|
decoding and you can choose when a human |
|
|
|
01:02:31.400 --> 01:02:34.920 |
|
is writing text in between and you can |
|
|
|
01:02:33.720 --> 01:02:36.680 |
|
condition your model in sort of a |
|
|
|
01:02:34.920 --> 01:02:39.240 |
|
mixture of human and model generated |
|
|
|
01:02:36.680 --> 01:02:41.279 |
|
text to choose what to continue next um |
|
|
|
01:02:39.240 --> 01:02:43.680 |
|
you can also do something like have the |
|
|
|
01:02:41.279 --> 01:02:45.319 |
|
model generate a set of text edit that |
|
|
|
01:02:43.680 --> 01:02:47.119 |
|
text in some way maybe the human is |
|
|
|
01:02:45.319 --> 01:02:48.640 |
|
imposing some really subtle constraint |
|
|
|
01:02:47.119 --> 01:02:50.559 |
|
like I want it to sound like my writing |
|
|
|
01:02:48.640 --> 01:02:52.200 |
|
style we don't have a discriminator for |
|
|
|
01:02:50.559 --> 01:02:54.119 |
|
this but the human can sort of modify |
|
|
|
01:02:52.200 --> 01:02:55.680 |
|
the text and then continue generating |
|
|
|
01:02:54.119 --> 01:02:57.160 |
|
from that point and that will influence |
|
|
|
01:02:55.680 --> 01:03:01.160 |
|
the style of the text that continues |
|
|
|
01:02:57.160 --> 01:03:03.240 |
|
being generated. Um, the case here is |
|
|
|
01:03:01.160 --> 01:03:04.720 |
|
sort of a you're writing a story |
|
|
|
01:03:03.240 --> 01:03:06.520 |
|
together and so you're going back and |
|
|
|
01:03:04.720 --> 01:03:07.799 |
|
forth and editing the text like that but |
|
|
|
01:03:06.520 --> 01:03:10.319 |
|
you can also think of any kind of |
|
|
|
01:03:07.799 --> 01:03:11.920 |
|
conversation with a model as the same |
|
|
|
01:03:10.319 --> 01:03:15.319 |
|
kind of interleaving of text right the |
|
|
|
01:03:11.920 --> 01:03:17.000 |
|
model gives some um text you provide |
|
|
|
01:03:15.319 --> 01:03:18.599 |
|
some text you go back and forth on like |
|
|
|
01:03:17.000 --> 01:03:20.480 |
|
who's providing the text that conditions |
|
|
|
01:03:18.599 --> 01:03:23.039 |
|
the |
|
|
|
01:03:20.480 --> 01:03:24.880 |
|
model you also might want to do things |
|
|
|
01:03:23.039 --> 01:03:26.760 |
|
like more fine-grained replacement |
|
|
|
01:03:24.880 --> 01:03:28.559 |
|
so here the person has highlighted some |
|
|
|
01:03:26.760 --> 01:03:31.640 |
|
text and said like make this more |
|
|
|
01:03:28.559 --> 01:03:33.960 |
|
descriptive or shorten this to two words |
|
|
|
01:03:31.640 --> 01:03:36.079 |
|
or maybe you want some additional |
|
|
|
01:03:33.960 --> 01:03:38.520 |
|
constraint like can this be happier can |
|
|
|
01:03:36.079 --> 01:03:40.960 |
|
this be sad like change the ending or |
|
|
|
01:03:38.520 --> 01:03:43.760 |
|
something um you can accomplish this in |
|
|
|
01:03:40.960 --> 01:03:45.799 |
|
a variety of ways um here this is done |
|
|
|
01:03:43.760 --> 01:03:47.680 |
|
through input manipulation so you prompt |
|
|
|
01:03:45.799 --> 01:03:50.359 |
|
your model differently with different |
|
|
|
01:03:47.680 --> 01:03:52.200 |
|
constraints you can also do this with an |
|
|
|
01:03:50.359 --> 01:03:54.440 |
|
actual modeling change like if you want |
|
|
|
01:03:52.200 --> 01:03:56.119 |
|
some kind of infilling model um |
|
|
|
01:03:54.440 --> 01:03:57.720 |
|
particularly for things like code this |
|
|
|
01:03:56.119 --> 01:04:01.119 |
|
can be helpful so you want context from |
|
|
|
01:03:57.720 --> 01:04:02.440 |
|
left and right sides um or you can do |
|
|
|
01:04:01.119 --> 01:04:03.799 |
|
this with the decoding changes that we |
|
|
|
01:04:02.440 --> 01:04:05.960 |
|
talked about in the previous section |
|
|
|
01:04:03.799 --> 01:04:07.799 |
|
right you could add a discriminator for |
|
|
|
01:04:05.960 --> 01:04:09.680 |
|
descriptiveness of text or you could do |
|
|
|
01:04:07.799 --> 01:04:11.680 |
|
some kind of sampling ranking method to |
|
|
|
01:04:09.680 --> 01:04:13.880 |
|
recover a more descriptive |
|
|
|
01:04:11.680 --> 01:04:16.640 |
|
output another thing that's very common |
|
|
|
01:04:13.880 --> 01:04:17.960 |
|
in this space is sampling and reranking |
|
|
|
01:04:16.640 --> 01:04:20.839 |
|
methods where the human is the one |
|
|
|
01:04:17.960 --> 01:04:23.640 |
|
choosing what to return right so in |
|
|
|
01:04:20.839 --> 01:04:25.960 |
|
wordcraft you see a set of choices and |
|
|
|
01:04:23.640 --> 01:04:28.200 |
|
you can choose text to insert but more |
|
|
|
01:04:25.960 --> 01:04:30.720 |
|
commonly in something like um ChatGPT |
|
|
|
01:04:28.200 --> 01:04:33.160 |
|
or Bard you see this little option to |
|
|
|
01:04:30.720 --> 01:04:34.880 |
|
regenerate text right you as the human |
|
|
|
01:04:33.160 --> 01:04:36.160 |
|
can reject the text and say like no I |
|
|
|
01:04:34.880 --> 01:04:38.680 |
|
don't like this give me a different |
|
|
|
01:04:36.160 --> 01:04:41.359 |
|
output and this is also sort of a way of |
|
|
|
01:04:38.680 --> 01:04:44.079 |
|
controlling decoding um just by doing it |
|
|
|
01:04:41.359 --> 01:04:46.319 |
|
on a human rather than an algorithmic |
|
|
|
01:04:44.079 --> 01:04:49.279 |
|
level. Of course you don't necessarily |
|
|
|
01:04:46.319 --> 01:04:51.200 |
|
need a human in here and so um some |
|
|
|
01:04:49.279 --> 01:04:52.960 |
|
recent work has looked at functionally |
|
|
|
01:04:51.200 --> 01:04:55.799 |
|
using models to make these decisions |
|
|
|
01:04:52.960 --> 01:04:57.480 |
|
instead um this is a prompting paper |
|
|
|
01:04:55.799 --> 01:05:00.359 |
|
called tree of thoughts which was sort of |
|
|
|
01:04:57.480 --> 01:05:02.279 |
|
very popular on Twitter last summer um |
|
|
|
01:05:00.359 --> 01:05:06.119 |
|
and the idea here is that you're going |
|
|
|
01:05:02.279 --> 01:05:08.480 |
|
to generate um several smaller sequences |
|
|
|
01:05:06.119 --> 01:05:11.200 |
|
um like a couple of sentences a |
|
|
|
01:05:08.480 --> 01:05:13.160 |
|
reasoning step or a thought in the paper |
|
|
|
01:05:11.200 --> 01:05:14.839 |
|
and you're going to use a model to |
|
|
|
01:05:13.160 --> 01:05:16.839 |
|
choose which ones to continue and you |
|
|
|
01:05:14.839 --> 01:05:19.000 |
|
can do different sort of constraints |
|
|
|
01:05:16.839 --> 01:05:21.960 |
|
here like I want to sort of rank this |
|
|
|
01:05:19.000 --> 01:05:25.079 |
|
set of three or maybe I want to predict |
|
|
|
01:05:21.960 --> 01:05:26.839 |
|
if any in this set is wrong like is this |
|
|
|
01:05:25.079 --> 01:05:29.400 |
|
a good reasoning step and if the model |
|
|
|
01:05:26.839 --> 01:05:32.240 |
|
says no you no longer continue that but |
|
|
|
01:05:29.400 --> 01:05:33.559 |
|
the idea here is through prompting |
|
|
|
01:05:32.240 --> 01:05:35.640 |
|
really achieving something that's sort |
|
|
|
01:05:33.559 --> 01:05:38.960 |
|
of if you squint at it looks a lot like |
|
|
|
01:05:35.640 --> 01:05:41.279 |
|
beam search right instead of doing a um |
|
|
|
01:05:38.960 --> 01:05:43.160 |
|
like token level thing and making a |
|
|
|
01:05:41.279 --> 01:05:45.079 |
|
decision based on likelihood you're |
|
|
|
01:05:43.160 --> 01:05:47.880 |
|
generating sort of several sentences at |
|
|
|
01:05:45.079 --> 01:05:50.599 |
|
a time and making a decision based on |
|
|
|
01:05:47.880 --> 01:05:52.359 |
|
this model's feedback right this signal |
|
|
|
01:05:50.599 --> 01:05:53.799 |
|
from an external source which here is a |
|
|
|
01:05:52.359 --> 01:05:55.279 |
|
model but could also be a human if |
|
|
|
01:05:53.799 --> 01:05:57.920 |
|
you're willing to sort of wait |
|
|
|
01:05:55.279 --> 01:06:01.559 |
|
around for them to make the decision and |
|
|
|
01:05:57.920 --> 01:06:03.839 |
|
so this is a way of sort of giving |
|
|
|
01:06:01.559 --> 01:06:06.640 |
|
feedback on a broader level than single |
|
|
|
01:06:03.839 --> 01:06:09.079 |
|
tokens um to guide a decoding process to |
|
|
|
01:06:06.640 --> 01:06:09.079 |
|
a final |
|
|
|
01:06:09.839 --> 01:06:15.079 |
|
output. So the last couple of things we'll |
|
|
|
01:06:12.760 --> 01:06:17.520 |
|
talk about here are sort of practical |
|
|
|
01:06:15.079 --> 01:06:19.839 |
|
considerations speed choosing decoding |
|
|
|
01:06:17.520 --> 01:06:22.599 |
|
methods um but I can take any questions |
|
|
|
01:06:19.839 --> 01:06:22.599 |
|
before that |
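As a reference point for the thought-level search just described, a minimal sketch: propose a few candidate "thoughts," score each with an external evaluator (a model, or a patient human), keep the best, and repeat. The proposer and scorer below are toy stand-ins, not the prompting setup from the tree-of-thoughts paper:

```python
def thought_search(propose, score, root, beam_width=2, depth=2):
    """Beam-search-like loop over multi-sentence 'thoughts' rather than
    tokens: propose(state) -> candidate continuations, score(state) ->
    feedback from an external evaluator."""
    beam = [root]
    for _ in range(depth):
        candidates = [state + t for state in beam for t in propose(state)]
        # keep the beam_width candidates the evaluator likes best
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

# Toy stand-ins: two fixed thoughts per step, and a scorer preferring length.
best = thought_search(lambda s: [" step-a", " step-b"], len, "Q:", beam_width=2, depth=2)
print(best)  # -> "Q: step-a step-a" (ties keep proposal order)
```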
|
|
|
|
|
|
01:06:26.760 --> 01:06:32.920 |
|
great so how do you make this fast and |
|
|
|
01:06:30.359 --> 01:06:34.920 |
|
in particular if you've ever tried to |
|
|
|
01:06:32.920 --> 01:06:36.920 |
|
sort of Benchmark performance of a model |
|
|
|
01:06:34.920 --> 01:06:38.720 |
|
what you realize pretty quickly is that |
|
|
|
01:06:36.920 --> 01:06:40.720 |
|
the vast majority of time is actually |
|
|
|
01:06:38.720 --> 01:06:43.440 |
|
spent in decoding you have to generate |
|
|
|
01:06:40.720 --> 01:06:45.319 |
|
one token at a time you have to sort of |
|
|
|
01:06:43.440 --> 01:06:46.920 |
|
pass that back through the model to get |
|
|
|
01:06:45.319 --> 01:06:51.279 |
|
conditioning to generate the next token |
|
|
|
01:06:46.920 --> 01:06:53.599 |
|
and so this is um generally fairly slow |
|
|
|
01:06:51.279 --> 01:06:54.839 |
|
um this is sort of a a major impediment |
|
|
|
01:06:53.599 --> 01:06:56.359 |
|
if you're trying to do something like a |
|
|
|
01:06:54.839 --> 01:06:57.839 |
|
streaming application, or |
|
|
|
01:06:56.359 --> 01:06:59.559 |
|
a chat application where you don't want |
|
|
|
01:06:57.839 --> 01:07:03.599 |
|
the person to be waiting around for an |
|
|
|
01:06:59.559 --> 01:07:06.799 |
|
answer um one way to do this is a method |
|
|
|
01:07:03.599 --> 01:07:09.160 |
|
called speculative decoding and this is a |
|
|
|
01:07:06.799 --> 01:07:12.599 |
|
method where you're using a smaller |
|
|
|
01:07:09.160 --> 01:07:14.039 |
|
model um not like in contrastive |
|
|
|
01:07:12.599 --> 01:07:16.240 |
|
decoding right there we used a smaller |
|
|
|
01:07:14.039 --> 01:07:17.559 |
|
model to decide what not to generate but |
|
|
|
01:07:16.240 --> 01:07:20.119 |
|
here we're using a smaller model to |
|
|
|
01:07:17.559 --> 01:07:21.880 |
|
decide what to generate um and the |
|
|
|
01:07:20.119 --> 01:07:24.960 |
|
idea here is that most of these tokens |
|
|
|
01:07:21.880 --> 01:07:26.480 |
|
are maybe not super hard to decide, it's |
|
|
|
01:07:24.960 --> 01:07:27.400 |
|
just that occasionally the bigger model |
|
|
|
01:07:26.480 --> 01:07:30.240 |
|
might want to go in a different |
|
|
|
01:07:27.400 --> 01:07:32.920 |
|
direction so these green tokens here are |
|
|
|
01:07:30.240 --> 01:07:35.160 |
|
generated by a smaller model our amateur |
|
|
|
01:07:32.920 --> 01:07:37.079 |
|
model here and the larger model acts |
|
|
|
01:07:35.160 --> 01:07:39.960 |
|
largely as a verifier and what it does |
|
|
|
01:07:37.079 --> 01:07:43.000 |
|
is it checks if the output so far is |
|
|
|
01:07:39.960 --> 01:07:44.920 |
|
going in a direction that's sort of |
|
|
|
01:07:43.000 --> 01:07:46.400 |
|
in distribution for the big model like |
|
|
|
01:07:44.920 --> 01:07:49.240 |
|
something that's within the realm of |
|
|
|
01:07:46.400 --> 01:07:50.720 |
|
what it might sample, and there's sort of |
|
|
|
01:07:49.240 --> 01:07:52.400 |
|
an involved discussion in this paper of |
|
|
|
01:07:50.720 --> 01:07:55.200 |
|
how you determine if something is in |
|
|
|
01:07:52.400 --> 01:07:58.000 |
|
distribution um so here the smaller |
|
|
|
01:07:55.200 --> 01:08:00.240 |
|
model generates like five or six tokens |
|
|
|
01:07:58.000 --> 01:08:02.559 |
|
that the larger model says okay this |
|
|
|
01:08:00.240 --> 01:08:03.680 |
|
looks great until it hits a token that |
|
|
|
01:08:02.559 --> 01:08:06.079 |
|
the larger model would not have |
|
|
|
01:08:03.680 --> 01:08:07.920 |
|
generated in that circumstance and then |
|
|
|
01:08:06.079 --> 01:08:10.279 |
|
the larger model rejects that token and |
|
|
|
01:08:07.920 --> 01:08:13.000 |
|
generates a different token instead. |
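The accept/reject loop just described can be sketched with toy stand-in models. Note this simplification checks for exact agreement between draft and target, whereas the actual method uses a probabilistic acceptance rule:

```python
def speculative_decode(draft, target, prompt, n_tokens, k=4):
    """Toy speculative decoding: a cheap draft model proposes k tokens, the
    expensive target model verifies the block, accepting the longest agreeing
    prefix and decoding its own token at the first mismatch.  A 'model' here
    is any context -> next-token function."""
    out = list(prompt)
    target_passes = 0
    while len(out) < len(prompt) + n_tokens:
        proposal = []
        for _ in range(k):                 # cheap autoregressive drafting
            proposal.append(draft(out + proposal))
        target_passes += 1                 # one (parallelizable) verification pass
        for tok in proposal:
            if len(out) >= len(prompt) + n_tokens:
                break
            if target(out) == tok:         # target agrees: accept the draft token
                out.append(tok)
            else:
                out.append(target(out))    # reject: target supplies this token
                break
    return "".join(out), target_passes

# Deterministic toy models whose next token depends only on context length.
target = lambda ctx: "ab"[len(ctx) % 2]
good_draft = target                        # a draft that always agrees
print(speculative_decode(good_draft, target, "ab", n_tokens=8))  # -> ('ababababab', 2)
```

With a perfectly agreeing draft, eight tokens cost only two target passes; a draft that always disagrees degenerates back to one target pass per token.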
|
|
|
01:08:10.279 --> 01:08:15.440 |
|
you can see here each of these red and |
|
|
|
01:08:13.000 --> 01:08:17.600 |
|
then blue sections is where the larger |
|
|
|
01:08:15.440 --> 01:08:19.400 |
|
model has rejected something and has to |
|
|
|
01:08:17.600 --> 01:08:21.920 |
|
actually autoregressively decode a |
|
|
|
01:08:19.400 --> 01:08:24.199 |
|
single token by contrast if you were |
|
|
|
01:08:21.920 --> 01:08:27.359 |
|
doing regular decoding at each |
|
|
|
01:08:24.199 --> 01:08:28.799 |
|
individual token in this sequence the um |
|
|
|
01:08:27.359 --> 01:08:31.640 |
|
larger model would have had to make the |
|
|
|
01:08:28.799 --> 01:08:35.359 |
|
full forward pass to decode a token so |
|
|
|
01:08:31.640 --> 01:08:37.359 |
|
here rather than doing maybe what |
|
|
|
01:08:35.359 --> 01:08:39.239 |
|
probably like 20ish decoding steps to |
|
|
|
01:08:37.359 --> 01:08:41.560 |
|
get this full sequence the larger model |
|
|
|
01:08:39.239 --> 01:08:43.040 |
|
has done about eight decoding steps and |
|
|
|
01:08:41.560 --> 01:08:47.560 |
|
everything else is able to sort of |
|
|
|
01:08:43.040 --> 01:08:49.759 |
|
verify a block of tokens at once um this |
|
|
|
01:08:47.560 --> 01:08:51.400 |
|
sort of idea of like using a smaller |
|
|
|
01:08:49.759 --> 01:08:54.120 |
|
model as an approximation is pretty |
|
|
|
01:08:51.400 --> 01:08:55.839 |
|
powerful um and there's some great um |
|
|
|
01:08:54.120 --> 01:08:58.159 |
|
follow-up work on speculative decoding and |
|
|
|
01:08:55.839 --> 01:08:59.000 |
|
sort of ways to do this faster or with |
|
|
|
01:08:58.159 --> 01:09:01.520 |
|
stronger |
|
|
|
01:08:59.000 --> 01:09:04.839 |
|
guarantees. Um, but this general concept |
|
|
|
01:09:01.520 --> 01:09:06.920 |
|
is I would bet probably how models like |
|
|
|
01:09:04.839 --> 01:09:09.080 |
|
um part of how models like ChatGPT or |
|
|
|
01:09:06.920 --> 01:09:11.159 |
|
Bard are sort of generating text so |
|
|
|
01:09:09.080 --> 01:09:13.120 |
|
quickly um there's another element here |
|
|
|
01:09:11.159 --> 01:09:16.159 |
|
which is like the model architecture |
|
|
|
01:09:13.120 --> 01:09:17.679 |
|
being sparse but I think that um if you |
|
|
|
01:09:16.159 --> 01:09:19.920 |
|
folks talk about mixture of experts we |
|
|
|
01:09:17.679 --> 01:09:22.880 |
|
might get into that |
|
|
|
01:09:19.920 --> 01:09:26.080 |
|
later um how do you do this kind of fast |
|
|
|
01:09:22.880 --> 01:09:27.679 |
|
inference. Um, libraries like vLLM will |
|
|
|
01:09:26.080 --> 01:09:29.440 |
|
implement things, I think they implement |
|
|
|
01:09:27.679 --> 01:09:32.199 |
|
speculative decoding, and implement sort |
|
|
|
01:09:29.440 --> 01:09:34.400 |
|
of hardware-level tricks like choosing |
|
|
|
01:09:32.199 --> 01:09:37.799 |
|
which attention weights to cache where |
|
|
|
01:09:34.400 --> 01:09:39.199 |
|
to do faster inference. Um, there's also |
|
|
|
01:09:37.799 --> 01:09:40.799 |
|
great libraries for doing things like |
|
|
|
01:09:39.199 --> 01:09:42.679 |
|
constrained decoding, so things like |
|
|
|
01:09:40.799 --> 01:09:45.520 |
|
outlines will let you set constraints |
|
|
|
01:09:42.679 --> 01:09:46.960 |
|
like I want my outputs to all be JSON |
|
|
|
01:09:45.520 --> 01:09:48.640 |
|
and it will impose additional |
|
|
|
01:09:46.960 --> 01:09:50.839 |
|
constraints during decoding to ensure |
|
|
|
01:09:48.640 --> 01:09:52.279 |
|
that that happens and then pretty much |
|
|
|
01:09:50.839 --> 01:09:53.960 |
|
anything in these first couple of |
|
|
|
01:09:52.279 --> 01:09:56.560 |
|
sections we talked about um like |
|
|
|
01:09:53.960 --> 01:09:58.440 |
|
sampling, mode-seeking search, and |
|
|
|
01:09:56.560 --> 01:10:00.400 |
|
sometimes MBR will also be implemented |
|
|
|
01:09:58.440 --> 01:10:05.080 |
|
in pretty much any Library you use for |
|
|
|
01:10:00.400 --> 01:10:07.679 |
|
models like Hugging Face, fairseq, or |
|
|
|
01:10:05.080 --> 01:10:10.000 |
|
JAX. So to kind of take a step back |
|
|
|
01:10:07.679 --> 01:10:12.520 |
|
here as we get to the end of class |
|
|
|
01:10:10.000 --> 01:10:15.640 |
|
um there's really two broad categories |
|
|
|
01:10:12.520 --> 01:10:17.679 |
|
of methods that we talked about today um |
|
|
|
01:10:15.640 --> 01:10:20.360 |
|
given our initial distribution from the |
|
|
|
01:10:17.679 --> 01:10:22.600 |
|
model for a next token given our |
|
|
|
01:10:20.360 --> 01:10:24.920 |
|
input we can do two kind of different |
|
|
|
01:10:22.600 --> 01:10:26.400 |
|
things: we can, at each individual decoding |
|
|
|
01:10:24.920 --> 01:10:28.360 |
|
step choose some kind of function to |
|
|
|
01:10:26.400 --> 01:10:30.280 |
|
manipulate this distribution and this |
|
|
|
01:10:28.360 --> 01:10:32.280 |
|
could be something like |
|
|
|
01:10:30.280 --> 01:10:33.960 |
|
cutting off the long tail like modifying |
|
|
|
01:10:32.280 --> 01:10:36.239 |
|
the temperature or adding external |
|
|
|
01:10:33.960 --> 01:10:38.400 |
|
information from another model or from a |
|
|
|
01:10:36.239 --> 01:10:41.480 |
|
discriminator model |
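A minimal sketch of that first category, manipulating the next-token distribution at a single step: temperature scaling plus nucleus (top-p) tail-cutting over a toy distribution (the distribution itself is invented for illustration):

```python
import math

def rescale(probs, temperature=1.0, top_p=1.0):
    """Manipulate a next-token distribution: apply temperature in log space,
    then cut off the long tail, keeping the smallest set of most-probable
    tokens whose mass reaches top_p (nucleus truncation), and renormalize."""
    logits = [math.log(p) / temperature for p in probs.values()]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]           # numerically stable softmax
    z = sum(exps)
    scaled = {tok: e / z for tok, e in zip(probs, exps)}
    kept, mass = {}, 0.0
    for tok, p in sorted(scaled.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        mass += p
        if mass >= top_p:
            break
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

# An invented next-token distribution for illustration.
dist = {"the": 0.5, "a": 0.3, "cat": 0.15, "zygote": 0.05}
print(rescale(dist, temperature=0.5, top_p=0.9))  # sharper, with the tail removed
```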
|
|
|
01:10:38.400 --> 01:10:43.159 |
|
Or we can, over a larger part of |
|
|
|
01:10:41.480 --> 01:10:45.120 |
|
the decoding process choose some |
|
|
|
01:10:43.159 --> 01:10:47.120 |
|
function to choose between sequences and |
|
|
|
01:10:45.120 --> 01:10:49.199 |
|
this could be like choosing between next |
|
|
|
01:10:47.120 --> 01:10:51.679 |
|
tokens in beam search when we're pruning |
|
|
|
01:10:49.199 --> 01:10:53.120 |
|
beams, this could be choosing from full |
|
|
|
01:10:51.679 --> 01:10:56.760 |
|
sequences when we're doing something |
|
|
|
01:10:53.120 --> 01:10:58.040 |
|
like MBR or sample-and-rerank methods |
|
|
|
01:10:56.760 --> 01:11:00.239 |
|
um and you can do these two things in |
|
|
|
01:10:58.040 --> 01:11:01.440 |
|
parallel right you can choose like a |
|
|
|
01:11:00.239 --> 01:11:03.159 |
|
different function to manipulate the |
|
|
|
01:11:01.440 --> 01:11:04.760 |
|
next token distribution and then some |
|
|
|
01:11:03.159 --> 01:11:06.199 |
|
sort of like broader thing to choose |
|
|
|
01:11:04.760 --> 01:11:08.280 |
|
what you do with the full sequences you |
|
|
|
01:11:06.199 --> 01:11:09.920 |
|
get out of that distribution um but |
|
|
|
01:11:08.280 --> 01:11:12.040 |
|
there are sort of these two broad |
|
|
|
01:11:09.920 --> 01:11:14.880 |
|
categories of |
|
|
|
01:11:12.040 --> 01:11:17.440 |
|
decoding so what should you take away |
|
|
|
01:11:14.880 --> 01:11:19.400 |
|
from this um I think a couple of things |
|
|
|
01:11:17.440 --> 01:11:21.000 |
|
decoding methods can be really |
|
|
|
01:11:19.400 --> 01:11:23.040 |
|
powerful to control features of your |
|
|
|
01:11:21.000 --> 01:11:25.040 |
|
output if you want to impose particular |
|
|
|
01:11:23.040 --> 01:11:26.679 |
|
constraints if you want to factor in |
|
|
|
01:11:25.040 --> 01:11:27.960 |
|
a reward function or factor in a data |
|
|
|
01:11:26.679 --> 01:11:31.800 |
|
source that you maybe didn't have at |
|
|
|
01:11:27.960 --> 01:11:34.239 |
|
training time um and to some extent you |
|
|
|
01:11:31.800 --> 01:11:36.120 |
|
can do a more expensive decoding method |
|
|
|
01:11:34.239 --> 01:11:37.520 |
|
to compensate for a worse model or to |
|
|
|
01:11:36.120 --> 01:11:39.080 |
|
compensate for a model that hasn't been |
|
|
|
01:11:37.520 --> 01:11:42.480 |
|
trained to do exactly the thing you want |
|
|
|
01:11:39.080 --> 01:11:44.800 |
|
it to do um of course you can't you know |
|
|
|
01:11:42.480 --> 01:11:47.679 |
|
use this to make GPT-2 small as good as |
|
|
|
01:11:44.800 --> 01:11:49.840 |
|
GPT-4 but you can sort of for some points |
|
|
|
01:11:47.679 --> 01:11:51.679 |
|
in the middle spend more compute at |
|
|
|
01:11:49.840 --> 01:11:53.159 |
|
inference time to pay for not spending |
|
|
|
01:11:51.679 --> 01:11:55.639 |
|
as much compute at training time and |
|
|
|
01:11:53.159 --> 01:11:57.440 |
|
particularly if you don't have access to |
|
|
|
01:11:55.639 --> 01:11:59.400 |
|
the kind of giant gpus you might need to |
|
|
|
01:11:57.440 --> 01:12:01.840 |
|
continue fine-tuning your model this can |
|
|
|
01:11:59.400 --> 01:12:05.679 |
|
be a really powerful |
|
|
|
01:12:01.840 --> 01:12:07.800 |
|
alternative um yeah so say like you're |
|
|
|
01:12:05.679 --> 01:12:12.560 |
|
building like something in production |
|
|
|
01:12:07.800 --> 01:12:15.920 |
|
right people usually do um sort of like |
|
|
|
01:12:12.560 --> 01:12:18.760 |
|
that you know inference before scaling to see |
|
|
|
01:12:15.920 --> 01:12:21.840 |
|
if it's going to work and do |
|
|
|
01:12:18.760 --> 01:12:25.080 |
|
that like try to see like if you have a |
|
|
|
01:12:21.840 --> 01:12:26.800 |
|
model that you can do some kind of |
|
|
|
01:12:25.080 --> 01:12:29.199 |
|
expensive decoding method for to get |
|
|
|
01:12:26.800 --> 01:12:31.120 |
|
good outputs, is it then worth |
|
|
|
01:12:29.199 --> 01:12:34.000 |
|
training that model right um there's |
|
|
|
01:12:31.120 --> 01:12:36.560 |
|
some great recent work on like training |
|
|
|
01:12:34.000 --> 01:12:39.400 |
|
models to produce the same kind of |
|
|
|
01:12:36.560 --> 01:12:40.760 |
|
outputs you get out of MBR without um |
|
|
|
01:12:39.400 --> 01:12:43.239 |
|
actually doing a really expensive |
|
|
|
01:12:40.760 --> 01:12:45.600 |
|
inference step. So at some level like yeah |
|
|
|
01:12:43.239 --> 01:12:48.120 |
|
you can decide like this model is good |
|
|
|
01:12:45.600 --> 01:12:49.920 |
|
enough with its expensive method we can |
|
|
|
01:12:48.120 --> 01:12:50.920 |
|
try to make it cheaper by spending more |
|
|
|
01:12:49.920 --> 01:12:53.960 |
|
money on |
|
|
|
01:12:50.920 --> 01:12:55.520 |
|
fine-tuning. Um, but it's not |
|
|
|
01:12:53.960 --> 01:12:57.320 |
|
necessarily guaranteed that that will |
|
|
|
01:12:55.520 --> 01:13:00.679 |
|
be the case |
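For reference, the MBR-style sample-and-rerank selection mentioned throughout can be sketched as: sample several candidates, then return the one with the highest total similarity to the others. The word-overlap similarity here is a toy stand-in for a real utility function like BLEU or BERTScore:

```python
def word_overlap(a, b):
    """Toy similarity: Jaccard overlap of word sets (a stand-in for a
    real utility function)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def mbr_select(candidates, similarity=word_overlap):
    """Minimum Bayes risk selection: return the sampled candidate with the
    highest total similarity to the other samples (the 'consensus' output)."""
    def total_sim(c):
        return sum(similarity(c, other) for other in candidates if other is not c)
    return max(candidates, key=total_sim)

samples = [
    "the cat sat on the mat",
    "a cat sat on a mat",
    "the cat sat on the mat today",
    "completely unrelated output",
]
print(mbr_select(samples))  # -> "the cat sat on the mat"
```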
|
|
|
01:12:57.320 --> 01:13:03.040 |
|
Okay um the methods that we looked at |
|
|
|
01:13:00.679 --> 01:13:06.199 |
|
have these sort of trade-offs in quality |
|
|
|
01:13:03.040 --> 01:13:07.960 |
|
in diversity and in inference speed so |
|
|
|
01:13:06.199 --> 01:13:10.320 |
|
sampling from your model directly is |
|
|
|
01:13:07.960 --> 01:13:13.120 |
|
pretty fast to do you get really diverse |
|
|
|
01:13:10.320 --> 01:13:14.960 |
|
outputs but it tends to be lower quality |
|
|
|
01:13:13.120 --> 01:13:16.320 |
|
um whereas more restricted sampling |
|
|
|
01:13:14.960 --> 01:13:18.520 |
|
these sort of mode seeking search |
|
|
|
01:13:16.320 --> 01:13:20.639 |
|
methods tend to be higher quality but |
|
|
|
01:13:18.520 --> 01:13:21.880 |
|
you get less diverse outputs and |
|
|
|
01:13:20.639 --> 01:13:23.560 |
|
that's why we have these methods like |
|
|
|
01:13:21.880 --> 01:13:26.719 |
|
diverse and stochastic beam search to |
|
|
|
01:13:23.560 --> 01:13:28.760 |
|
counter this a bit um and then methods |
|
|
|
01:13:26.719 --> 01:13:30.400 |
|
like MBR or other sample and rerank |
|
|
|
01:13:28.760 --> 01:13:32.679 |
|
methods tend to be very high quality |
|
|
|
01:13:30.400 --> 01:13:34.280 |
|
outputs but you pay for this with much |
|
|
|
01:13:32.679 --> 01:13:36.520 |
|
slower inference |
|
|
|
01:13:34.280 --> 01:13:38.679 |
|
time um but if I can kind of convince |
|
|
|
01:13:36.520 --> 01:13:41.560 |
|
you of anything today I think it would |
|
|
|
01:13:38.679 --> 01:13:43.600 |
|
be this, which is that the decoding |
|
|
|
01:13:41.560 --> 01:13:45.600 |
|
method you choose for your model has a |
|
|
|
01:13:43.600 --> 01:13:47.960 |
|
really strong impact on performance |
|
|
|
01:13:45.600 --> 01:13:49.520 |
|
Downstream um you can get radically |
|
|
|
01:13:47.960 --> 01:13:51.239 |
|
different results out of the same model |
|
|
|
01:13:49.520 --> 01:13:52.639 |
|
without doing any additional training |
|
|
|
01:13:51.239 --> 01:13:55.120 |
|
just by choosing the different decoding |
|
|
|
01:13:52.639 --> 01:13:57.880 |
|
method that you might want to try and so |
|
|
|
01:13:55.120 --> 01:13:59.679 |
|
when you sort of let your libraries pick |
|
|
|
01:13:57.880 --> 01:14:01.159 |
|
a quote unquote like sensible default |
|
|
|
01:13:59.679 --> 01:14:03.760 |
|
you can leave a lot of performance on |
|
|
|
01:14:01.159 --> 01:14:06.480 |
|
the table. So I encourage |
|
|
|
01:14:03.760 --> 01:14:08.199 |
|
you folks, if you're um deploying |
|
|
|
01:14:06.480 --> 01:14:09.760 |
|
models in production or if you're doing |
|
|
|
01:14:08.199 --> 01:14:10.840 |
|
research or you know maybe look at your |
|
|
|
01:14:09.760 --> 01:14:13.280 |
|
outputs and your model has some |
|
|
|
01:14:10.840 --> 01:14:15.320 |
|
undesirable behaviors, to consider whether the |
|
|
|
01:14:13.280 --> 01:14:17.800 |
|
decoding method you're using is imposing |
|
|
|
01:14:15.320 --> 01:14:20.000 |
|
some kind of intuition or some kind of |
|
|
|
01:14:17.800 --> 01:14:21.840 |
|
inductive bias and if you can alter that |
|
|
|
01:14:20.000 --> 01:14:24.239 |
|
to get some of these behaviors without |
|
|
|
01:14:21.840 --> 01:14:26.320 |
|
resorting to additional training |
|
|
|
01:14:24.239 --> 01:14:28.719 |
|
um and that's sort of the end I can take |
|
|
|
01:14:26.320 --> 01:14:28.719 |
|
any other |
|
|
|
01:14:34.320 --> 01:14:38.719 |
|
questions okay um yeah I guess we don't |
|
|
|
01:14:37.199 --> 01:14:41.360 |
|
have any questions we can take questions |
|
|
|
01:14:38.719 --> 01:14:45.560 |
|
up here um one one thing I'd like to |
|
|
|
01:14:41.360 --> 01:14:47.679 |
|
point out also is that um I I love the |
|
|
|
01:14:45.560 --> 01:14:50.760 |
|
final thing that Amanda said here |
|
|
|
01:14:47.679 --> 01:14:54.199 |
|
another thing is that my impression from |
|
|
|
01:14:50.760 --> 01:14:56.400 |
|
dealing with things is that it's a lot |
|
|
|
01:14:54.199 --> 01:14:58.159 |
|
easier to predict the effect of |
|
|
|
01:14:56.400 --> 01:14:59.920 |
|
inference time decoding time |
|
|
|
01:14:58.159 --> 01:15:01.120 |
|
manipulations than it is to predict the |
|
|
|
01:14:59.920 --> 01:15:04.239 |
|
effect of |
|
|
|
01:15:01.120 --> 01:15:07.480 |
|
like um fine-tuning or something like |
|
|
|
01:15:04.239 --> 01:15:11.040 |
|
this. Just to give an |
|
|
|
01:15:07.480 --> 01:15:12.480 |
|
example beam search with the maximum |
|
|
|
01:15:11.040 --> 01:15:15.199 |
|
likelihood trained model tends to |
|
|
|
01:15:12.480 --> 01:15:16.719 |
|
generate things that are shorter um |
|
|
|
01:15:15.199 --> 01:15:18.040 |
|
whereas greedy decoding tends to |
|
|
|
01:15:16.719 --> 01:15:19.639 |
|
generate things that are longer and |
|
|
|
01:15:18.040 --> 01:15:22.000 |
|
repeat more often and stuff like that |
|
|
|
01:15:19.639 --> 01:15:25.920 |
|
and if you try a few methods like this |
|
|
|
01:15:22.000 --> 01:15:28.920 |
|
you'll quickly find these kinds of quirks of |
|
|
|
01:15:25.920 --> 01:15:31.320 |
|
each of the methods and so by forming a |
|
|
|
01:15:28.920 --> 01:15:32.719 |
|
good intuition of this you will also |
|
|
|
01:15:31.320 --> 01:15:34.000 |
|
know how to fix these problems when you |
|
|
|
01:15:32.719 --> 01:15:35.600 |
|
see them it's like oh my model's |
|
|
|
01:15:34.000 --> 01:15:37.320 |
|
repeating itself a lot maybe I shouldn't |
|
|
|
01:15:35.600 --> 01:15:38.679 |
|
be using greedy search, I should be |
|
|
|
01:15:37.320 --> 01:15:41.199 |
|
switching over to something else or |
|
|
|
01:15:38.679 --> 01:15:43.320 |
|
something like that so um this is a good |
|
|
|
01:15:41.199 --> 01:15:45.880 |
|
thing to know and play around with yeah |
|
|
|
01:15:43.320 --> 01:15:47.239 |
|
and I think pretty underutilized too um |
|
|
|
01:15:45.880 --> 01:15:48.880 |
|
a lot of folks will not think about a |
|
|
|
01:15:47.239 --> 01:15:50.920 |
|
decoding method to fix their problem |
|
|
|
01:15:48.880 --> 01:15:52.280 |
|
even if like your model might actually |
|
|
|
01:15:50.920 --> 01:15:53.760 |
|
be perfectly fine under a different |
|
|
|
01:15:52.280 --> 01:15:56.000 |
|
decoding strategy |
|
|
|
01:15:53.760 --> 01:15:58.320 |
|
great okay thanks a lot everyone you can |
|
|
|
01:15:56.000 --> 01:15:58.320 |
|
uh |
|
|
|
01:16:02.280 --> 01:16:05.280 |
|
finish |
|
|