|
1 |
|
00:00:00,399 --> 00:00:04,720 |
|
great um yeah so today we're going to be |
|
|
|
2 |
|
00:00:03,320 --> 00:00:07,040 |
|
talking a little bit about generation |
|
|
|
3 |
|
00:00:04,720 --> 00:00:08,639 |
|
algorithms um this will be sort of a |
|
|
|
4 |
|
00:00:07,040 --> 00:00:10,160 |
|
tour through some of the most common |
|
|
|
5 |
|
00:00:08,639 --> 00:00:12,080 |
|
methods and we're going to talk a little |
|
|
|
6 |
|
00:00:10,160 --> 00:00:13,480 |
|
bit about the theory behind them as well |
|
|
|
7 |
|
00:00:12,080 --> 00:00:15,080 |
|
um if you're looking at the slides on |
|
|
|
8 |
|
00:00:13,480 --> 00:00:18,359 |
|
the website these might be ever so |
|
|
|
9 |
|
00:00:15,080 --> 00:00:20,000 |
|
slightly different um but yeah I'll try |
|
|
|
10 |
|
00:00:18,359 --> 00:00:21,640 |
|
to stop at each section boundary for |
|
|
|
11 |
|
00:00:20,000 --> 00:00:23,840 |
|
questions also feel free to sort of |
|
|
|
12 |
|
00:00:21,640 --> 00:00:25,720 |
|
interrupt at any point for |
|
|
|
13 |
|
00:00:23,840 --> 00:00:27,720 |
|
clarifications so we're starting off |
|
|
|
14 |
|
00:00:25,720 --> 00:00:29,560 |
|
today with some great news um let's say |
|
|
|
15 |
|
00:00:27,720 --> 00:00:31,199 |
|
that you have some friend who maybe owns |
|
|
|
16 |
|
00:00:29,560 --> 00:00:34,800 |
|
a giant tech company and they've gifted |
|
|
|
17 |
|
00:00:31,199 --> 00:00:36,480 |
|
you this absolutely massive new model M |
|
|
|
18 |
|
00:00:34,800 --> 00:00:38,079 |
|
um it's a great model it's pre-trained |
|
|
|
19 |
|
00:00:36,480 --> 00:00:40,879 |
|
with the latest architecture it's |
|
|
|
20 |
|
00:00:38,079 --> 00:00:42,920 |
|
pre-trained on um trillions of tokens of |
|
|
|
21 |
|
00:00:40,879 --> 00:00:44,520 |
|
text it's got seven billion parameters |
|
|
|
22 |
|
00:00:42,920 --> 00:00:46,399 |
|
it looks like a really promising new |
|
|
|
23 |
|
00:00:44,520 --> 00:00:48,399 |
|
model you know it's the top of all these |
|
|
|
24 |
|
00:00:46,399 --> 00:00:50,320 |
|
leaderboards um but if you actually take |
|
|
|
25 |
|
00:00:48,399 --> 00:00:52,520 |
|
your new model M and you sort of open up |
|
|
|
26 |
|
00:00:50,320 --> 00:00:53,719 |
|
this box and kind of shake it out maybe
|
|
|
27 |
|
00:00:52,520 --> 00:00:55,239 |
|
from last class you know a little bit |
|
|
|
28 |
|
00:00:53,719 --> 00:00:57,000 |
|
architecturally what this model might |
|
|
|
29 |
|
00:00:55,239 --> 00:00:58,239 |
|
look like but if you actually kind of |
|
|
|
30 |
|
00:00:57,000 --> 00:01:00,320 |
|
take a closer look at it from a |
|
|
|
31 |
|
00:00:58,239 --> 00:01:01,719 |
|
different angle what you see is that M
|
|
|
32 |
|
00:01:00,320 --> 00:01:04,920 |
|
is actually just a conditional |
|
|
|
33 |
|
00:01:01,719 --> 00:01:07,200 |
|
probability distribution um you put some |
|
|
|
34 |
|
00:01:04,920 --> 00:01:09,680 |
|
input X into your model and you get some |
|
|
|
35 |
|
00:01:07,200 --> 00:01:10,680 |
|
probability out for any given sequence |
|
|
|
36 |
|
00:01:09,680 --> 00:01:13,360 |
|
that you're sort of interested in |
|
|
|
37 |
|
00:01:10,680 --> 00:01:14,960 |
|
evaluating right um and in particular M |
|
|
|
38 |
|
00:01:13,360 --> 00:01:17,560 |
|
gives you a probability distribution |
|
|
|
39 |
|
00:01:14,960 --> 00:01:19,439 |
|
over all tokens in its vocabulary to |
|
|
|
40 |
|
00:01:17,560 --> 00:01:21,040 |
|
predict like what token you would output |
|
|
|
41 |
|
00:01:19,439 --> 00:01:24,840 |
|
next right and so this is what this |
|
|
|
42 |
|
00:01:21,040 --> 00:01:26,880 |
|
equation says um given some input X and |
|
|
|
43 |
|
00:01:24,840 --> 00:01:29,520 |
|
everything that you've predicted so far |
|
|
|
44 |
|
00:01:26,880 --> 00:01:32,399 |
|
you get the probability of the next |
|
|
|
45 |
|
00:01:29,520 --> 00:01:33,600 |
|
token y_j and if you multiply this out
|
|
|
46 |
|
00:01:32,399 --> 00:01:34,840 |
|
over all the probabilities in your |
|
|
|
47 |
|
00:01:33,600 --> 00:01:37,159 |
|
sequence you can calculate the |
|
|
|
48 |
|
00:01:34,840 --> 00:01:41,240 |
|
probability of any output y given your |
|
|
|
49 |
|
00:01:37,159 --> 00:01:42,640 |
|
input X so this like super fancy
|
|
|
50 |
|
00:01:41,240 --> 00:01:44,119 |
|
model that you spent a lot of money to
|
|
|
51 |
|
00:01:42,640 --> 00:01:46,280 |
|
train is really just a conditional |
|
|
|
52 |
|
00:01:44,119 --> 00:01:47,920 |
|
probability distribution um but this |
|
|
|
53 |
|
00:01:46,280 --> 00:01:49,600 |
|
turns out to be okay because you can use |
|
|
|
54 |
|
00:01:47,920 --> 00:01:51,920 |
|
a conditional probability distribution |
|
|
|
55 |
|
00:01:49,600 --> 00:01:54,399 |
|
to do sort of any task that we're really |
|
|
|
56 |
|
00:01:51,920 --> 00:01:56,719 |
|
interested in in NLP um pretty much any |
|
|
|
57 |
|
00:01:54,399 --> 00:01:58,680 |
|
task right so by changing what you |
|
|
|
58 |
|
00:01:56,719 --> 00:02:01,360 |
|
consider your input X and your output y |
|
|
|
59 |
|
00:01:58,680 --> 00:02:03,560 |
|
to be you can get outputs from this
|
|
|
60 |
|
00:02:01,360 --> 00:02:06,479 |
|
model for things like translation for |
|
|
|
61 |
|
00:02:03,560 --> 00:02:08,720 |
|
summarization for reasoning tasks um just
|
|
|
62 |
|
00:02:06,479 --> 00:02:10,520 |
|
by sort of changing what you consider |
|
|
|
63 |
|
00:02:08,720 --> 00:02:12,760 |
|
your inputs and outputs in this |
|
|
|
64 |
|
00:02:10,520 --> 00:02:14,239 |
|
setting but there's sort of both good |
|
|
|
65 |
|
00:02:12,760 --> 00:02:15,920 |
|
and bad things about your model being a |
|
|
|
66 |
|
00:02:14,239 --> 00:02:17,120 |
|
probability distribution instead of just |
|
|
|
67 |
|
00:02:15,920 --> 00:02:20,599 |
|
an oracle that gives you sort of a |
|
|
|
68 |
|
00:02:17,120 --> 00:02:22,080 |
|
single answer for every input um one |
|
|
|
69 |
|
00:02:20,599 --> 00:02:24,480 |
|
kind of nice thing about this |
|
|
|
70 |
|
00:02:22,080 --> 00:02:26,080 |
|
distribution um is that you can get at |
|
|
|
71 |
|
00:02:24,480 --> 00:02:27,720 |
|
an idea of something like confidence |
|
|
|
72 |
|
00:02:26,080 --> 00:02:30,120 |
|
right if you give your model the input 2 |
|
|
|
73 |
|
00:02:27,720 --> 00:02:32,480 |
|
plus 2 equals and almost all the |
|
|
|
74 |
|
00:02:30,120 --> 00:02:34,200 |
|
probability mass is on the token of four |
|
|
|
75 |
|
00:02:32,480 --> 00:02:35,760 |
|
you can say like the model predicts with |
|
|
|
76 |
|
00:02:34,200 --> 00:02:38,319 |
|
pretty high confidence that 2 plus 2 |
|
|
|
77 |
|
00:02:35,760 --> 00:02:39,480 |
|
equals four um versus if you give it |
|
|
|
78 |
|
00:02:38,319 --> 00:02:40,959 |
|
something that's maybe a little more |
|
|
|
79 |
|
00:02:39,480 --> 00:02:43,120 |
|
open-ended like you ask it to predict |
|
|
|
80 |
|
00:02:40,959 --> 00:02:44,640 |
|
Graham's favorite color and you see this |
|
|
|
81 |
|
00:02:43,120 --> 00:02:47,040 |
|
distribution that's sort of a lot |
|
|
|
82 |
|
00:02:44,640 --> 00:02:48,440 |
|
flatter you know the most likely output |
|
|
|
83 |
|
00:02:47,040 --> 00:02:49,720 |
|
is green but maybe we don't have a lot |
|
|
|
84 |
|
00:02:48,440 --> 00:02:51,560 |
|
of confidence that that's the correct |
|
|
|
85 |
|
00:02:49,720 --> 00:02:53,040 |
|
answer um this is really closely tied |
|
|
|
86 |
|
00:02:51,560 --> 00:02:55,200 |
|
into the idea of calibration which you |
|
|
|
87 |
|
00:02:53,040 --> 00:02:58,879 |
|
guys talked about um I guess a couple of |
|
|
|
88 |
|
00:02:55,200 --> 00:03:00,640 |
|
classes ago now the flip side of this |
|
|
|
89 |
|
00:02:58,879 --> 00:03:03,680 |
|
though is that you know notice that for
|
|
|
90 |
|
00:03:00,640 --> 00:03:06,760 |
|
this case like 2 plus 2 equals 4 not all of
|
|
|
91 |
|
00:03:03,680 --> 00:03:08,519 |
|
the probability mass is on four um and |
|
|
|
92 |
|
00:03:06,760 --> 00:03:09,720 |
|
so models that are conditional |
|
|
|
93 |
|
00:03:08,519 --> 00:03:11,560 |
|
probability distributions can |
|
|
|
94 |
|
00:03:09,720 --> 00:03:13,560 |
|
hallucinate right um pretty much no |
|
|
|
95 |
|
00:03:11,560 --> 00:03:15,799 |
|
matter what you do there's going to be |
|
|
|
96 |
|
00:03:13,560 --> 00:03:17,680 |
|
some nonzero probability to some output |
|
|
|
97 |
|
00:03:15,799 --> 00:03:19,920 |
|
that's incorrect or |
|
|
|
98 |
|
00:03:17,680 --> 00:03:21,239 |
|
undesirable um in some cases maybe even |
|
|
|
99 |
|
00:03:19,920 --> 00:03:23,760 |
|
offensive something that you don't want |
|
|
|
100 |
|
00:03:21,239 --> 00:03:25,280 |
|
the model to Output um and this is sort |
|
|
|
101 |
|
00:03:23,760 --> 00:03:27,840 |
|
of an artifact of the way these models |
|
|
|
102 |
|
00:03:25,280 --> 00:03:29,280 |
|
are trained there's some great work
|
|
|
103 |
|
00:03:27,840 --> 00:03:31,400 |
|
kind of more on the theory side here |
|
|
|
104 |
|
00:03:29,280 --> 00:03:32,840 |
|
that shows that this is actually true |
|
|
|
105 |
|
00:03:31,400 --> 00:03:35,120 |
|
even if everything in your input |
|
|
|
106 |
|
00:03:32,840 --> 00:03:36,920 |
|
training data is sort of correct and |
|
|
|
107 |
|
00:03:35,120 --> 00:03:38,439 |
|
factual and doesn't have any errors |
|
|
|
108 |
|
00:03:36,920 --> 00:03:41,200 |
|
you'll still wind up with a situation |
|
|
|
109 |
|
00:03:38,439 --> 00:03:44,480 |
|
where some nonzero probability mass is |
|
|
|
110 |
|
00:03:41,200 --> 00:03:47,000 |
|
on some outputs that are undesirable or |
|
|
|
111 |
|
00:03:44,480 --> 00:03:50,120 |
|
hallucinatory for sort of most inputs |
|
|
|
112 |
|
00:03:47,000 --> 00:03:52,159 |
|
that you care about evaluating so if we |
|
|
|
113 |
|
00:03:50,120 --> 00:03:55,079 |
|
have these issues how do we actually get |
|
|
|
114 |
|
00:03:52,159 --> 00:03:56,519 |
|
a good output out of the model um and to |
|
|
|
115 |
|
00:03:55,079 --> 00:03:58,640 |
|
do that we're first going to talk about |
|
|
|
116 |
|
00:03:56,519 --> 00:04:00,079 |
|
some sampling methods um but I want to |
|
|
|
117 |
|
00:03:58,640 --> 00:04:01,879 |
|
pause here in case there are any
|
|
|
118 |
|
00:04:00,079 --> 00:04:04,159 |
|
questions on this idea of a model as a
|
|
|
119 |
|
00:04:01,879 --> 00:04:04,159 |
|
conditional distribution
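To make the chain-rule factorization concrete, here is a minimal sketch in Python/PyTorch of scoring an output y under the model, summing log p(y_j | x, y_<j) over the sequence; the `model` interface (token ids in, next-token logits out) is a hypothetical stand-in, not any particular library's API:

```python
import torch

def sequence_log_prob(model, x_ids, y_ids):
    """Chain rule: log p(y|x) = sum_j log p(y_j | x, y_<j)."""
    ids = torch.cat([x_ids, y_ids])
    logits = model(ids.unsqueeze(0)).squeeze(0)   # (seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for j in range(len(y_ids)):
        pos = len(x_ids) + j - 1                  # the step that predicts y_j
        total += log_probs[pos, y_ids[j]].item()
    return total
```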
|
|
|
120 |
|
00:04:05,040 --> 00:04:11,680 |
|
great so we can jump right in
|
|
|
121 |
|
00:04:07,519 --> 00:04:13,560 |
|
in so we have this model right we know |
|
|
|
122 |
|
00:04:11,680 --> 00:04:15,959 |
|
at each step at each token we might want |
|
|
|
123 |
|
00:04:13,560 --> 00:04:17,919 |
|
to decode the distribution of likelihood |
|
|
|
124 |
|
00:04:15,959 --> 00:04:18,959 |
|
over all vocabulary tokens right this |
|
|
|
125 |
|
00:04:17,919 --> 00:04:21,680 |
|
conditional distribution we've been |
|
|
|
126 |
|
00:04:18,959 --> 00:04:24,240 |
|
talking about um for the next time step |
|
|
|
127 |
|
00:04:21,680 --> 00:04:26,400 |
|
and what we want out of this is a good |
|
|
|
128 |
|
00:04:24,240 --> 00:04:28,000 |
|
output um for some definition of good |
|
|
|
129 |
|
00:04:26,400 --> 00:04:30,919 |
|
that we can sort of develop as we go |
|
|
|
130 |
|
00:04:28,000 --> 00:04:32,479 |
|
here so maybe the natural first thing to |
|
|
|
131 |
|
00:04:30,919 --> 00:04:34,880 |
|
try is we have a probability |
|
|
|
132 |
|
00:04:32,479 --> 00:04:36,600 |
|
distribution can we just sample from it |
|
|
|
133 |
|
00:04:34,880 --> 00:04:39,600 |
|
right and this is something called |
|
|
|
134 |
|
00:04:36,600 --> 00:04:41,639 |
|
ancestral sampling so at each time step |
|
|
|
135 |
|
00:04:39,600 --> 00:04:43,560 |
|
we're going to draw a token from this |
|
|
|
136 |
|
00:04:41,639 --> 00:04:45,039 |
|
distribution sort of according to its |
|
|
|
137 |
|
00:04:43,560 --> 00:04:47,199 |
|
relative probability right so if |
|
|
|
138 |
|
00:04:45,039 --> 00:04:48,639 |
|
something has twice as much probability |
|
|
|
139 |
|
00:04:47,199 --> 00:04:51,280 |
|
mass according to the model we'll draw
|
|
|
140 |
|
00:04:48,639 --> 00:04:54,000 |
|
it twice as often um and we can sample |
|
|
|
141 |
|
00:04:51,280 --> 00:04:55,560 |
|
from this distribution at each time step |
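A minimal sketch of ancestral sampling, again assuming the same hypothetical `model` interface (token ids in, next-token logits out); each step draws the next token in proportion to its model probability:

```python
import torch

def ancestral_sample(model, x_ids, eos_id, max_len=100):
    ids = x_ids.clone()
    for _ in range(max_len):
        logits = model(ids.unsqueeze(0))[0, -1]              # next-token logits
        probs = torch.softmax(logits, dim=-1)
        next_tok = torch.multinomial(probs, num_samples=1)   # proportional draw
        ids = torch.cat([ids, next_tok])
        if next_tok.item() == eos_id:
            break
    return ids[len(x_ids):]
```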
|
|
|
142 |
|
00:04:54,000 --> 00:04:58,080 |
|
and this is sort of a
|
|
|
143 |
|
00:04:55,560 --> 00:05:00,199 |
|
nice setup um we get exact samples from |
|
|
|
144 |
|
00:04:58,080 --> 00:05:02,639 |
|
the model distribution so using the |
|
|
|
145 |
|
00:05:00,199 --> 00:05:04,479 |
|
setup if you imagine like
|
|
|
146 |
|
00:05:02,639 --> 00:05:06,680 |
|
drawing an almost infinite number of |
|
|
|
147 |
|
00:05:04,479 --> 00:05:08,320 |
|
samples like a ridiculously large number |
|
|
|
148 |
|
00:05:06,680 --> 00:05:10,160 |
|
and you look at their probabilities |
|
|
|
149 |
|
00:05:08,320 --> 00:05:11,840 |
|
you'd sort of get something from this |
|
|
|
150 |
|
00:05:10,160 --> 00:05:13,039 |
|
distribution with exactly the |
|
|
|
151 |
|
00:05:11,840 --> 00:05:15,720 |
|
probability that the real model |
|
|
|
152 |
|
00:05:13,039 --> 00:05:17,280 |
|
distribution is giving you um so this is
|
|
|
153 |
|
00:05:15,720 --> 00:05:19,039 |
|
great this gives us an exact sample from |
|
|
|
154 |
|
00:05:17,280 --> 00:05:21,400 |
|
the model this seems to be exactly what |
|
|
|
155 |
|
00:05:19,039 --> 00:05:22,880 |
|
we want um but you can guess probably by |
|
|
|
156 |
|
00:05:21,400 --> 00:05:24,639 |
|
the fact that we're only like 10 minutes |
|
|
|
157 |
|
00:05:22,880 --> 00:05:27,000 |
|
into class here this is not really the |
|
|
|
158 |
|
00:05:24,639 --> 00:05:28,280 |
|
end of the story um and there's actually |
|
|
|
159 |
|
00:05:27,000 --> 00:05:30,800 |
|
a couple of problems with sampling |
|
|
|
160 |
|
00:05:28,280 --> 00:05:32,560 |
|
directly from our model distribution
|
|
|
161 |
|
00:05:30,800 --> 00:05:35,280 |
|
the one that we're really going to focus |
|
|
|
162 |
|
00:05:32,560 --> 00:05:37,919 |
|
on first here is this idea of a long |
|
|
|
163 |
|
00:05:35,280 --> 00:05:41,400 |
|
tail so a model like llama and maybe our |
|
|
|
164 |
|
00:05:37,919 --> 00:05:43,639 |
|
new model M um has 32,000 vocabulary |
|
|
|
165 |
|
00:05:41,400 --> 00:05:46,280 |
|
tokens and you can imagine maybe out of |
|
|
|
166 |
|
00:05:43,639 --> 00:05:48,000 |
|
those tokens there might be one or even |
|
|
|
167 |
|
00:05:46,280 --> 00:05:49,720 |
|
2,000 of those tokens that are sort of a |
|
|
|
168 |
|
00:05:48,000 --> 00:05:51,919 |
|
reasonable next thing to predict for a |
|
|
|
169 |
|
00:05:49,720 --> 00:05:53,479 |
|
really open-ended task right but there's |
|
|
|
170 |
|
00:05:51,919 --> 00:05:55,440 |
|
going to be all kinds of things in that |
|
|
|
171 |
|
00:05:53,479 --> 00:05:57,039 |
|
distribution um that are maybe like |
|
|
|
172 |
|
00:05:55,440 --> 00:05:58,440 |
|
punctuation there may be tokens that
|
|
|
173 |
|
00:05:57,039 --> 00:06:00,280 |
|
won't actually lead to the correct |
|
|
|
174 |
|
00:05:58,440 --> 00:06:01,840 |
|
answer like there's a lot of things in |
|
|
|
175 |
|
00:06:00,280 --> 00:06:04,560 |
|
this distribution that would be all |
|
|
|
176 |
|
00:06:01,840 --> 00:06:06,160 |
|
really low likelihood and this is fine |
|
|
|
177 |
|
00:06:04,560 --> 00:06:08,759 |
|
these things just get low probability |
|
|
|
178 |
|
00:06:06,160 --> 00:06:11,039 |
|
Mass but the problem is if you give sort |
|
|
|
179 |
|
00:06:08,759 --> 00:06:13,639 |
|
of a small amount of probability mass to
|
|
|
180 |
|
00:06:11,039 --> 00:06:16,599 |
|
30,000 different things that mass will |
|
|
|
181 |
|
00:06:13,639 --> 00:06:19,360 |
|
add up pretty quickly um and to see this |
|
|
|
182 |
|
00:06:16,599 --> 00:06:20,360 |
|
we have sort of this illustration here |
|
|
|
183 |
|
00:06:19,360 --> 00:06:21,560 |
|
um I don't know if you can see the |
|
|
|
184 |
|
00:06:20,360 --> 00:06:23,280 |
|
difference between the green and the |
|
|
|
185 |
|
00:06:21,560 --> 00:06:25,720 |
|
yellow but I've also drawn a little bar |
|
|
|
186 |
|
00:06:23,280 --> 00:06:27,800 |
|
between them this is a really long-tailed
|
|
|
187 |
|
00:06:25,720 --> 00:06:29,720 |
|
distribution and the green part of the |
|
|
|
188 |
|
00:06:27,800 --> 00:06:31,960 |
|
distribution which is a lot of tokens |
|
|
|
189 |
|
00:06:29,720 --> 00:06:34,000 |
|
with high likelihood has 50% of the |
|
|
|
190 |
|
00:06:31,960 --> 00:06:35,560 |
|
total probability the Yellow Part which |
|
|
|
191 |
|
00:06:34,000 --> 00:06:37,360 |
|
is a lot of things that are all
|
|
|
192 |
|
00:06:35,560 --> 00:06:40,280 |
|
individually not super likely is the |
|
|
|
193 |
|
00:06:37,360 --> 00:06:41,720 |
|
other 50% of the probability and so what |
|
|
|
194 |
|
00:06:40,280 --> 00:06:44,360 |
|
that means is if you're doing something |
|
|
|
195 |
|
00:06:41,720 --> 00:06:46,120 |
|
like ancestral sampling 50% of the time |
|
|
|
196 |
|
00:06:44,360 --> 00:06:49,160 |
|
you'll be sampling something really |
|
|
|
197 |
|
00:06:46,120 --> 00:06:51,520 |
|
unlikely from this long tail um that |
|
|
|
198 |
|
00:06:49,160 --> 00:06:53,759 |
|
seems sort of not like what we want |
|
|
|
199 |
|
00:06:51,520 --> 00:06:56,080 |
|
right um so is there anything we can do |
|
|
|
200 |
|
00:06:53,759 --> 00:06:58,080 |
|
about this and the obvious first solution
|
|
|
201 |
|
00:06:56,080 --> 00:06:59,400 |
|
here is can we just cut off that tail |
|
|
|
202 |
|
00:06:58,080 --> 00:07:01,680 |
|
like if we know these tokens are not |
|
|
|
203 |
|
00:06:59,400 --> 00:07:03,039 |
|
super likely can we just ignore them and |
|
|
|
204 |
|
00:07:01,680 --> 00:07:05,039 |
|
there's a couple of different ways to do |
|
|
|
205 |
|
00:07:03,039 --> 00:07:07,919 |
|
that um the first of these is something |
|
|
|
206 |
|
00:07:05,039 --> 00:07:10,080 |
|
called top-k sampling where we say okay
|
|
|
207 |
|
00:07:07,919 --> 00:07:12,479 |
|
you know maybe we think there are 10 |
|
|
|
208 |
|
00:07:10,080 --> 00:07:14,000 |
|
reasonable like outputs right maybe
|
|
|
209 |
|
00:07:12,479 --> 00:07:17,280 |
|
we'll just sample from the 10 most |
|
|
|
210 |
|
00:07:14,000 --> 00:07:19,759 |
|
probable tokens um here maybe we say if |
|
|
|
211 |
|
00:07:17,280 --> 00:07:21,479 |
|
we want to do top-six sampling we'll
|
|
|
212 |
|
00:07:19,759 --> 00:07:23,919 |
|
sample from just the six most probable |
|
|
|
213 |
|
00:07:21,479 --> 00:07:26,240 |
|
tokens and so in this example you can |
|
|
|
214 |
|
00:07:23,919 --> 00:07:27,680 |
|
see we originally had 10 tokens and |
|
|
|
215 |
|
00:07:26,240 --> 00:07:30,560 |
|
we're going to sample from just the blue |
|
|
|
216 |
|
00:07:27,680 --> 00:07:32,919 |
|
ones just the six most likely tokens |
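The filtering step might look like this sketch: keep the k highest-probability tokens, zero out everything else, and renormalize before sampling:

```python
import torch

def top_k_filter(logits, k=6):
    topk = torch.topk(logits, k)
    filtered = torch.full_like(logits, float("-inf"))  # tail gets zero mass
    filtered[topk.indices] = topk.values
    return torch.softmax(filtered, dim=-1)             # renormalized over top k
```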
|
|
|
217 |
|
00:07:30,560 --> 00:07:34,360 |
|
um in this example this distribution is |
|
|
|
218 |
|
00:07:32,919 --> 00:07:37,280 |
|
pretty flat there's a lot of things that |
|
|
|
219 |
|
00:07:34,360 --> 00:07:40,120 |
|
are like kind of likely right so that |
|
|
|
220 |
|
00:07:37,280 --> 00:07:43,000 |
|
those six tokens are only 68% of the |
|
|
|
221 |
|
00:07:40,120 --> 00:07:45,360 |
|
total probability mass um if we go like
|
|
|
222 |
|
00:07:43,000 --> 00:07:47,240 |
|
one time step further here we might have |
|
|
|
223 |
|
00:07:45,360 --> 00:07:49,360 |
|
a distribution that's a lot peakier most
|
|
|
224 |
|
00:07:47,240 --> 00:07:51,759 |
|
of the mass is on just a single token |
|
|
|
225 |
|
00:07:49,360 --> 00:07:53,919 |
|
and so sampling from just the top six |
|
|
|
226 |
|
00:07:51,759 --> 00:07:56,400 |
|
tokens actually captures 99% of the |
|
|
|
227 |
|
00:07:53,919 --> 00:07:58,360 |
|
probability mass maybe we say that seems
|
|
|
228 |
|
00:07:56,400 --> 00:08:01,199 |
|
a little excessive right we don't really |
|
|
|
229 |
|
00:07:58,360 --> 00:08:03,400 |
|
need um maybe all of these tokens that |
|
|
|
230 |
|
00:08:01,199 --> 00:08:05,479 |
|
are all kind of low probability maybe we |
|
|
|
231 |
|
00:08:03,400 --> 00:08:07,000 |
|
just want to sort of sample from the top |
|
|
|
232 |
|
00:08:05,479 --> 00:08:08,080 |
|
half of our distribution or something or |
|
|
|
233 |
|
00:08:07,000 --> 00:08:10,840 |
|
the top |
|
|
|
234 |
|
00:08:08,080 --> 00:08:12,919 |
|
90% um so instead of choosing a top |
|
|
|
235 |
|
00:08:10,840 --> 00:08:15,560 |
|
number of tokens to sample from you |
|
|
|
236 |
|
00:08:12,919 --> 00:08:17,400 |
|
could choose a top amount of probability |
|
|
|
237 |
|
00:08:15,560 --> 00:08:20,000 |
|
and this is something called top P or |
|
|
|
238 |
|
00:08:17,400 --> 00:08:21,520 |
|
nucleus sampling so P here is the amount |
|
|
|
239 |
|
00:08:20,000 --> 00:08:24,039 |
|
of probability from your distribution |
|
|
|
240 |
|
00:08:21,520 --> 00:08:26,639 |
|
you want to consider so if you decide |
|
|
|
241 |
|
00:08:24,039 --> 00:08:29,280 |
|
your p is about like 94% of the |
|
|
|
242 |
|
00:08:26,639 --> 00:08:31,639 |
|
probability mass you in this first
|
|
|
243 |
|
00:08:29,280 --> 00:08:33,719 |
|
example here would choose almost all of |
|
|
|
244 |
|
00:08:31,639 --> 00:08:35,440 |
|
the tokens you keep adding tokens in |
|
|
|
245 |
|
00:08:33,719 --> 00:08:37,159 |
|
until you reach an amount of total |
|
|
|
246 |
|
00:08:35,440 --> 00:08:39,479 |
|
probability that's about |
|
|
|
247 |
|
00:08:37,159 --> 00:08:40,880 |
|
0.94 but then when you get to the second
|
|
|
248 |
|
00:08:39,479 --> 00:08:43,240 |
|
step where you have a couple of really
|
|
|
249 |
|
00:08:40,880 --> 00:08:45,959 |
|
highly probable tokens you'd only need a |
|
|
|
250 |
|
00:08:43,240 --> 00:08:47,959 |
|
couple of tokens to add up to 0.94 or
|
|
|
251 |
|
00:08:45,959 --> 00:08:50,320 |
|
even higher than 0.94 and so you would |
|
|
|
252 |
|
00:08:47,959 --> 00:08:52,200 |
|
just sample from a smaller set of tokens |
|
|
|
253 |
|
00:08:50,320 --> 00:08:54,600 |
|
so in top-k sampling the total amount of
|
|
|
254 |
|
00:08:52,200 --> 00:08:56,560 |
|
probability you're sampling from can move
|
|
|
255 |
|
00:08:54,600 --> 00:08:58,120 |
|
around in top-p sampling the total
|
|
|
256 |
|
00:08:56,560 --> 00:08:59,839 |
|
number of tokens you're sampling from |
|
|
|
257 |
|
00:08:58,120 --> 00:09:01,959 |
|
might change |
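A sketch of the nucleus (top-p) filtering step: sort tokens by probability, keep the smallest prefix whose cumulative mass reaches p, and renormalize; how many tokens survive depends on how peaked the distribution is:

```python
import torch

def top_p_filter(logits, p=0.94):
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # drop a token if at least p mass was already accumulated before it
    sorted_probs[cumulative - sorted_probs >= p] = 0.0
    filtered = torch.zeros_like(probs).scatter(0, sorted_idx, sorted_probs)
    return filtered / filtered.sum()
```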
|
|
|
258 |
|
00:08:59,839 --> 00:09:04,760 |
|
um but maybe we sort of don't want to |
|
|
|
259 |
|
00:09:01,959 --> 00:09:07,279 |
|
impose a strong constraint like we want |
|
|
|
260 |
|
00:09:04,760 --> 00:09:09,279 |
|
like 94% here maybe just what we really |
|
|
|
261 |
|
00:09:07,279 --> 00:09:11,040 |
|
care about is saying that we're not |
|
|
|
262 |
|
00:09:09,279 --> 00:09:14,000 |
|
going to sample anything that's really |
|
|
|
263 |
|
00:09:11,040 --> 00:09:16,800 |
|
really unlikely right another way of |
|
|
|
264 |
|
00:09:14,000 --> 00:09:18,560 |
|
doing this is called Epsilon sampling |
|
|
|
265 |
|
00:09:16,800 --> 00:09:20,519 |
|
where we just sample tokens that have at |
|
|
|
266 |
|
00:09:18,560 --> 00:09:22,920 |
|
least some minimum amount of probability |
|
|
|
267 |
|
00:09:20,519 --> 00:09:24,720 |
|
to them right so maybe we just want |
|
|
|
268 |
|
00:09:22,920 --> 00:09:29,519 |
|
tokens that have probability of at least |
|
|
|
269 |
|
00:09:24,720 --> 00:09:31,240 |
|
0.05 here um in this first um example |
|
|
|
270 |
|
00:09:29,519 --> 00:09:32,640 |
|
everything has at least some reasonable |
|
|
|
271 |
|
00:09:31,240 --> 00:09:34,240 |
|
amount of probability so we're actually |
|
|
|
272 |
|
00:09:32,640 --> 00:09:36,240 |
|
going to sample from our full |
|
|
|
273 |
|
00:09:34,240 --> 00:09:37,720 |
|
distribution and then in the second |
|
|
|
274 |
|
00:09:36,240 --> 00:09:39,279 |
|
example when we have a lot of things |
|
|
|
275 |
|
00:09:37,720 --> 00:09:41,160 |
|
that are really unlikely we'll only |
|
|
|
276 |
|
00:09:39,279 --> 00:09:43,800 |
|
sample from sort of the more likely part of the distribution
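And a sketch of epsilon sampling: every token below a fixed probability floor is dropped, and whatever survives is renormalized (this assumes at least one token clears the floor):

```python
import torch

def epsilon_filter(logits, eps=0.05):
    probs = torch.softmax(logits, dim=-1)
    probs[probs < eps] = 0.0          # zero out the long tail
    return probs / probs.sum()        # renormalize the survivors
```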
|
|
|
277 |
|
00:09:41,160 --> 00:09:45,240 |
|
um so all three of
|
|
|
278 |
|
00:09:43,800 --> 00:09:47,000 |
|
these methods are sort of different ways |
|
|
|
279 |
|
00:09:45,240 --> 00:09:49,399 |
|
of trying to cut off the long tail using |
|
|
|
280 |
|
00:09:47,000 --> 00:09:51,480 |
|
sort of different |
|
|
|
281 |
|
00:09:49,399 --> 00:09:53,000 |
|
characteristics the tail of the |
|
|
|
282 |
|
00:09:51,480 --> 00:09:55,680 |
|
distribution though isn't the only thing |
|
|
|
283 |
|
00:09:53,000 --> 00:09:58,000 |
|
we could choose to modify um we could |
|
|
|
284 |
|
00:09:55,680 --> 00:09:59,880 |
|
also choose to modify this sort of |
|
|
|
285 |
|
00:09:58,000 --> 00:10:02,120 |
|
peakiness of the distribution |
|
|
|
286 |
|
00:09:59,880 --> 00:10:03,880 |
|
so if you look here at the middle of |
|
|
|
287 |
|
00:10:02,120 --> 00:10:06,600 |
|
these diagrams say this is your original |
|
|
|
288 |
|
00:10:03,880 --> 00:10:08,519 |
|
distribution over next tokens and maybe |
|
|
|
289 |
|
00:10:06,600 --> 00:10:11,040 |
|
you want to modify some properties of |
|
|
|
290 |
|
00:10:08,519 --> 00:10:12,640 |
|
this distribution like you say I want an |
|
|
|
291 |
|
00:10:11,040 --> 00:10:14,200 |
|
output that's really diverse and |
|
|
|
292 |
|
00:10:12,640 --> 00:10:15,680 |
|
interesting and open-ended like maybe |
|
|
|
293 |
|
00:10:14,200 --> 00:10:17,920 |
|
this is something like story generation |
|
|
|
294 |
|
00:10:15,680 --> 00:10:20,120 |
|
where you want to have sort of a lot of |
|
|
|
295 |
|
00:10:17,920 --> 00:10:21,279 |
|
maybe surprising things in your output |
|
|
|
296 |
|
00:10:20,120 --> 00:10:23,480 |
|
you could say I want to sort of |
|
|
|
297 |
|
00:10:21,279 --> 00:10:26,440 |
|
distribute my probability mass more over
|
|
|
298 |
|
00:10:23,480 --> 00:10:28,399 |
|
the token space and you can do this um |
|
|
|
299 |
|
00:10:26,440 --> 00:10:32,720 |
|
by sort of flattening this distribution |
|
|
|
300 |
|
00:10:28,399 --> 00:10:34,240 |
|
like you see on the right here um
|
|
|
301 |
|
00:10:32,720 --> 00:10:36,800 |
|
where now there's sort of more |
|
|
|
302 |
|
00:10:34,240 --> 00:10:39,040 |
|
probability mass spread over this um
|
|
|
303 |
|
00:10:36,800 --> 00:10:40,320 |
|
like wider set of tokens you could also |
|
|
|
304 |
|
00:10:39,040 --> 00:10:42,720 |
|
say the opposite right you could say |
|
|
|
305 |
|
00:10:40,320 --> 00:10:44,120 |
|
maybe I'm doing something like math |
|
|
|
306 |
|
00:10:42,720 --> 00:10:45,519 |
|
where there shouldn't really be a lot of |
|
|
|
307 |
|
00:10:44,120 --> 00:10:47,800 |
|
correct answers there should be really |
|
|
|
308 |
|
00:10:45,519 --> 00:10:50,399 |
|
only one or maybe only like a few |
|
|
|
309 |
|
00:10:47,800 --> 00:10:52,320 |
|
potential reasonable next answers and so |
|
|
|
310 |
|
00:10:50,399 --> 00:10:54,160 |
|
you can make your distribution peakier or
|
|
|
311 |
|
00:10:52,320 --> 00:10:56,639 |
|
sharper so that more of the probability |
|
|
|
312 |
|
00:10:54,160 --> 00:11:00,200 |
|
mass is on the things at the very top um |
|
|
|
313 |
|
00:10:56,639 --> 00:11:02,000 |
|
the way you do this is you modify your
|
|
|
314 |
|
00:11:00,200 --> 00:11:04,320 |
|
logits your outputs of the last layer of
|
|
|
315 |
|
00:11:02,000 --> 00:11:06,399 |
|
the model before you apply softmax so when
|
|
|
316 |
|
00:11:04,320 --> 00:11:08,360 |
|
you're predicting you get your outputs |
|
|
|
317 |
|
00:11:06,399 --> 00:11:10,040 |
|
of the last layer of the model and then |
|
|
|
318 |
|
00:11:08,360 --> 00:11:11,560 |
|
you apply softmax which turns those |
|
|
|
319 |
|
00:11:10,040 --> 00:11:15,240 |
|
outputs into a distribution right they |
|
|
|
320 |
|
00:11:11,560 --> 00:11:17,399 |
|
all sum up um the mass over all
|
|
|
321 |
|
00:11:15,240 --> 00:11:18,839 |
|
vocabulary tokens sums to one and so |
|
|
|
322 |
|
00:11:17,399 --> 00:11:21,920 |
|
that is sort of a distribution you could |
|
|
|
323 |
|
00:11:18,839 --> 00:11:23,519 |
|
sample from if you divide those logits
|
|
|
324 |
|
00:11:21,920 --> 00:11:26,000 |
|
by some number before you apply that |
|
|
|
325 |
|
00:11:23,519 --> 00:11:27,880 |
|
softmax you can make that distribution |
|
|
|
326 |
|
00:11:26,000 --> 00:11:30,760 |
|
flatter by using a number greater than |
|
|
|
327 |
|
00:11:27,880 --> 00:11:32,440 |
|
one or peier by using a number less than |
|
|
|
328 |
|
00:11:30,760 --> 00:11:35,079 |
|
one and this type of parameter
|
|
|
329 |
|
00:11:32,440 --> 00:11:36,839 |
|
is called temperature um you can apply |
|
|
|
330 |
|
00:11:35,079 --> 00:11:38,480 |
|
this with any of the other methods for |
|
|
|
331 |
|
00:11:36,839 --> 00:11:40,279 |
|
sort of cutting off the long tail but |
|
|
|
332 |
|
00:11:38,480 --> 00:11:41,920 |
|
what people will often do is just apply |
|
|
|
333 |
|
00:11:40,279 --> 00:11:43,639 |
|
a temperature and then sample from that |
|
|
|
334 |
|
00:11:41,920 --> 00:11:45,320 |
|
distribution and that's what we call |
|
|
|
335 |
|
00:11:43,639 --> 00:11:48,720 |
|
temperature sampling
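A sketch of the temperature step: divide the logits by T before the softmax, so T > 1 flattens the distribution and T < 1 sharpens it, then sample as usual:

```python
import torch

def temperature_sample(logits, temperature=1.0):
    # T > 1 flattens, T < 1 sharpens; T -> 0 approaches greedy decoding
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```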
|
|
|
336 |
|
00:11:45,320 --> 00:11:49,920 |
|
so these I think most of you
|
|
|
337 |
|
00:11:48,720 --> 00:11:51,320 |
|
might already have been at least a |
|
|
|
338 |
|
00:11:49,920 --> 00:11:53,000 |
|
little bit familiar with some of these |
|
|
|
339 |
|
00:11:51,320 --> 00:11:56,079 |
|
methods I want to touch briefly on a |
|
|
|
340 |
|
00:11:53,000 --> 00:11:58,160 |
|
couple of other ideas for modifying this |
|
|
|
341 |
|
00:11:56,079 --> 00:11:59,680 |
|
distribution maybe some more complex and |
|
|
|
342 |
|
00:11:58,160 --> 00:12:01,839 |
|
more recent ideas and the one that I |
|
|
|
343 |
|
00:11:59,680 --> 00:12:04,279 |
|
want to talk about in more detail is |
|
|
|
344 |
|
00:12:01,839 --> 00:12:05,399 |
|
something called contrastive decoding so |
|
|
|
345 |
|
00:12:04,279 --> 00:12:07,360 |
|
the idea here is that we could |
|
|
|
346 |
|
00:12:05,399 --> 00:12:10,800 |
|
incorporate some extra information at |
|
|
|
347 |
|
00:12:07,360 --> 00:12:12,760 |
|
decoding time um using some other |
|
|
|
348 |
|
00:12:10,800 --> 00:12:15,320 |
|
distribution some other data or in this |
|
|
|
349 |
|
00:12:12,760 --> 00:12:17,320 |
|
case some other model so if you've ever |
|
|
|
350 |
|
00:12:15,320 --> 00:12:19,240 |
|
played around with a really like |
|
|
|
351 |
|
00:12:17,320 --> 00:12:21,800 |
|
relatively small language model maybe |
|
|
|
352 |
|
00:12:19,240 --> 00:12:23,320 |
|
something like GPT-2 Small um you
|
|
|
353 |
|
00:12:21,800 --> 00:12:26,560 |
|
probably noticed you try to give it some |
|
|
|
354 |
|
00:12:23,320 --> 00:12:28,240 |
|
inputs and maybe it degenerates into |
|
|
|
355 |
|
00:12:26,560 --> 00:12:30,160 |
|
just repeating the same sequence over |
|
|
|
356 |
|
00:12:28,240 --> 00:12:31,720 |
|
and over maybe it gives you outputs that |
|
|
|
357 |
|
00:12:30,160 --> 00:12:33,399 |
|
are just completely incorrect like you |
|
|
|
358 |
|
00:12:31,720 --> 00:12:35,320 |
|
ask it a factual question and it gets it |
|
|
|
359 |
|
00:12:33,399 --> 00:12:37,120 |
|
wrong um and you don't see those |
|
|
|
360 |
|
00:12:35,320 --> 00:12:39,519 |
|
problems if you look at sort of a larger |
|
|
|
361 |
|
00:12:37,120 --> 00:12:41,399 |
|
model that's trained on more data so the |
|
|
|
362 |
|
00:12:39,519 --> 00:12:43,199 |
|
question here is can you use what that |
|
|
|
363 |
|
00:12:41,399 --> 00:12:46,480 |
|
smaller model is getting wrong to make |
|
|
|
364 |
|
00:12:43,199 --> 00:12:49,120 |
|
your larger model even better um and the |
|
|
|
365 |
|
00:12:46,480 --> 00:12:51,360 |
|
way we do this is by sort of the |
|
|
|
366 |
|
00:12:49,120 --> 00:12:52,880 |
|
intuition that if the smaller model |
|
|
|
367 |
|
00:12:51,360 --> 00:12:55,079 |
|
doesn't have a lot of probability on |
|
|
|
368 |
|
00:12:52,880 --> 00:12:57,160 |
|
some answer but the the larger model |
|
|
|
369 |
|
00:12:55,079 --> 00:12:58,519 |
|
does it's likely because that larger |
|
|
|
370 |
|
00:12:57,160 --> 00:13:02,279 |
|
model has learned something that the
|
|
|
371 |
|
00:12:58,519 --> 00:13:04,000 |
|
smaller model didn't know and so here we |
|
|
|
372 |
|
00:13:02,279 --> 00:13:06,199 |
|
modify the probability distribution |
|
|
|
373 |
|
00:13:04,000 --> 00:13:08,199 |
|
coming out of the larger model to choose |
|
|
|
374 |
|
00:13:06,199 --> 00:13:11,120 |
|
outputs that that model thinks are very |
|
|
|
375 |
|
00:13:08,199 --> 00:13:12,600 |
|
likely and the amateur or the weaker
|
|
|
376 |
|
00:13:11,120 --> 00:13:15,480 |
|
model thinks are not |
|
|
|
377 |
|
00:13:12,600 --> 00:13:20,000 |
|
likely so in this example here from |
|
|
|
378 |
|
00:13:15,480 --> 00:13:22,560 |
|
their paper um if you have sort of a |
|
|
|
379 |
|
00:13:20,000 --> 00:13:27,199 |
|
input like Barack Obama was born in |
|
|
|
380 |
|
00:13:22,560 --> 00:13:29,720 |
|
Hawaii he was born in um the smaller
|
|
|
381 |
|
00:13:27,199 --> 00:13:31,360 |
|
model would often do something like |
|
|
|
382 |
|
00:13:29,720 --> 00:13:35,399 |
|
start repeating and actually if you |
|
|
|
383 |
|
00:13:31,360 --> 00:13:36,720 |
|
sample sort of naively from the um |
|
|
|
384 |
|
00:13:35,399 --> 00:13:38,560 |
|
larger model you can wind up in these |
|
|
|
385 |
|
00:13:36,720 --> 00:13:40,000 |
|
situations as well right so if you just |
|
|
|
386 |
|
00:13:38,560 --> 00:13:41,959 |
|
choose the most likely thing at each |
|
|
|
387 |
|
00:13:40,000 --> 00:13:43,399 |
|
step you wind up in this Loop where it's |
|
|
|
388 |
|
00:13:41,959 --> 00:13:45,560 |
|
like he was born in Hawaii he was born |
|
|
|
389 |
|
00:13:43,399 --> 00:13:48,199 |
|
in Hawaii he was born in Hawaii um and |
|
|
|
390 |
|
00:13:45,560 --> 00:13:51,320 |
|
this is behavior we generally don't want |
|
|
|
391 |
|
00:13:48,199 --> 00:13:52,680 |
|
um if you do something like nucleus or |
|
|
|
392 |
|
00:13:51,320 --> 00:13:53,720 |
|
top-p sampling you can wind up with
|
|
|
393 |
|
00:13:52,680 --> 00:13:55,880 |
|
things that are actually completely |
|
|
|
394 |
|
00:13:53,720 --> 00:13:58,839 |
|
incorrect like he was born in Washington |
|
|
|
395 |
|
00:13:55,880 --> 00:14:01,480 |
|
DC um but if you use contrastive |
|
|
|
396 |
|
00:13:58,839 --> 00:14:04,120 |
|
decoding you take the outputs coming out |
|
|
|
397 |
|
00:14:01,480 --> 00:14:05,720 |
|
of your expert model here and you |
|
|
|
398 |
|
00:14:04,120 --> 00:14:07,680 |
|
subtract out the probabilities coming |
|
|
|
399 |
|
00:14:05,720 --> 00:14:10,160 |
|
out of the weaker model and you can wind |
|
|
|
400 |
|
00:14:07,680 --> 00:14:11,880 |
|
up with things that the higher model the |
|
|
|
401 |
|
00:14:10,160 --> 00:14:13,759 |
|
stronger model ascribed probability to |
|
|
|
402 |
|
00:14:11,880 --> 00:14:15,480 |
|
but the weaker model did not likely |
|
|
|
403 |
|
00:14:13,759 --> 00:14:16,920 |
|
because these are sort of facts that the |
|
|
|
404 |
|
00:14:15,480 --> 00:14:18,959 |
|
larger model knows that the smaller |
|
|
|
405 |
|
00:14:16,920 --> 00:14:20,800 |
|
model does not so here we actually get |
|
|
|
406 |
|
00:14:18,959 --> 00:14:23,199 |
|
the year Barack Obama was born which is |
|
|
|
407 |
|
00:14:20,800 --> 00:14:25,800 |
|
maybe a fact that the larger model knows |
|
|
|
408 |
|
00:14:23,199 --> 00:14:27,639 |
|
and the smaller model didn't know um and |
|
|
|
409 |
|
00:14:25,800 --> 00:14:29,759 |
|
so this is just one of sort of a broad |
|
|
|
410 |
|
00:14:27,639 --> 00:14:32,560 |
|
class of methods where you use external |
|
|
|
411 |
|
00:14:29,759 --> 00:14:35,199 |
|
information to improve your decoding by |
|
|
|
412 |
|
00:14:32,560 --> 00:14:38,720 |
|
modifying this distribution at each step
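A simplified per-step sketch of contrastive decoding, working in log space: score each token by the expert's log probability minus the amateur's, after restricting to tokens the expert itself finds reasonably plausible (the alpha cutoff here is my reading of the paper's plausibility constraint, so treat the details as an assumption):

```python
import math
import torch

def contrastive_scores(expert_logits, amateur_logits, alpha=0.1):
    expert_lp = torch.log_softmax(expert_logits, dim=-1)
    amateur_lp = torch.log_softmax(amateur_logits, dim=-1)
    # only consider tokens within a factor alpha of the expert's best guess
    plausible = expert_lp >= expert_lp.max() + math.log(alpha)
    scores = expert_lp - amateur_lp            # expert minus amateur
    scores[~plausible] = float("-inf")
    return scores                              # argmax or search over these
```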
|
|
|
413 |
|
00:14:35,199 --> 00:14:40,720 |
|
um those are sort of a brief tour of
|
|
|
414 |
|
00:14:38,720 --> 00:14:43,920 |
|
a couple of different sampling methods |
|
|
|
415 |
|
00:14:40,720 --> 00:14:43,920 |
|
before we move into search |
|
|
|
416 |
|
00:14:44,600 --> 00:14:50,440 |
|
yeah |
|
|
|
417 |
|
00:14:46,279 --> 00:14:54,880 |
|
yeah is it going to improve upon just |
|
|
|
418 |
|
00:14:50,440 --> 00:14:57,240 |
|
the yeah it generally does um and the |
|
|
|
419 |
|
00:14:54,880 --> 00:14:59,800 |
|
intuition for why this might be I think |
|
|
|
420 |
|
00:14:57,240 --> 00:15:01,680 |
|
is that there are sort of these |
|
|
|
421 |
|
00:14:59,800 --> 00:15:04,560 |
|
degenerate cases like just repeating |
|
|
|
422 |
|
00:15:01,680 --> 00:15:06,120 |
|
over and over that both the expert and |
|
|
|
423 |
|
00:15:04,560 --> 00:15:09,000 |
|
the weak model would give relatively |
|
|
|
424 |
|
00:15:06,120 --> 00:15:10,880 |
|
high probability to um maybe the expert |
|
|
|
425 |
|
00:15:09,000 --> 00:15:13,199 |
|
model is like slightly less likely to do |
|
|
|
426 |
|
00:15:10,880 --> 00:15:14,959 |
|
these things but it's still like sort of |
|
|
|
427 |
|
00:15:13,199 --> 00:15:16,639 |
|
an easy case for the model to learn and |
|
|
|
428 |
|
00:15:14,959 --> 00:15:18,120 |
|
so both of those models will have high |
|
|
|
429 |
|
00:15:16,639 --> 00:15:20,079 |
|
probability for those things but the |
|
|
|
430 |
|
00:15:18,120 --> 00:15:21,800 |
|
things that are genuinely like good |
|
|
|
431 |
|
00:15:20,079 --> 00:15:23,880 |
|
outputs that only the expert would get |
|
|
|
432 |
|
00:15:21,800 --> 00:15:25,519 |
|
right those will have low probability |
|
|
|
433 |
|
00:15:23,880 --> 00:15:27,600 |
|
under the weak model and so you're sort |
|
|
|
434 |
|
00:15:25,519 --> 00:15:30,880 |
|
of subtracting out all the degenerate |
|
|
|
435 |
|
00:15:27,600 --> 00:15:33,759 |
|
behaviors and keeping the really good outputs
|
|
|
436 |
|
00:15:30,880 --> 00:15:35,240 |
|
this if you're generating a longer |
|
|
|
437 |
|
00:15:33,759 --> 00:15:37,440 |
|
sequence with
|
|
|
438 |
|
00:15:35,240 --> 00:15:40,759 |
|
contrastive decoding how do you know which steps
|
|
|
439 |
|
00:15:37,440 --> 00:15:45,120 |
|
you want to bring it in yeah this is a
|
|
|
440 |
|
00:15:40,759 --> 00:15:48,560 |
|
great question so for this particular |
|
|
|
441 |
|
00:15:45,120 --> 00:15:50,560 |
|
case oh yeah sorry so this was if you're |
|
|
|
442 |
|
00:15:48,560 --> 00:15:52,279 |
|
doing contrastive decoding over a really |
|
|
|
443 |
|
00:15:50,560 --> 00:15:54,399 |
|
long sequence like when do you choose to |
|
|
|
444 |
|
00:15:52,279 --> 00:15:55,800 |
|
bring in the expert right and for |
|
|
|
445 |
|
00:15:54,399 --> 00:15:58,600 |
|
contrastive decoding we're actually |
|
|
|
446 |
|
00:15:55,800 --> 00:16:00,759 |
|
going to do this at every individual |
|
|
|
447 |
|
00:15:58,600 --> 00:16:02,440 |
|
time step so we're going to use the |
|
|
|
448 |
|
00:16:00,759 --> 00:16:04,800 |
|
expert model to decode and we're going |
|
|
|
449 |
|
00:16:02,440 --> 00:16:07,000 |
|
to bring in the amateur to sort of |
|
|
|
450 |
|
00:16:04,800 --> 00:16:09,079 |
|
subtract out probabilities at each next |
|
|
|
451 |
|
00:16:07,000 --> 00:16:10,399 |
|
token prediction um you don't have to do |
|
|
|
452 |
|
00:16:09,079 --> 00:16:12,800 |
|
that I think that's that's what they do |
|
|
|
453 |
|
00:16:10,399 --> 00:16:15,000 |
|
in the paper um you could also decide to |
|
|
|
454 |
|
00:16:12,800 --> 00:16:16,680 |
|
only do this sort of if you have high |
|
|
|
455 |
|
00:16:15,000 --> 00:16:19,639 |
|
uncertainty or something if you don't |
|
|
|
456 |
|
00:16:16,680 --> 00:16:22,639 |
|
have a really sharp probability |
|
|
|
457 |
|
00:16:19,639 --> 00:16:22,639 |
|
distribution |
|
|
|
458 |
|
00:16:23,160 --> 00:16:28,160 |
|
yeah yeah how weak should the weak |
|
|
|
459 |
|
00:16:25,399 --> 00:16:30,199 |
|
predictor be um in the in the paper what |
|
|
|
460 |
|
00:16:28,160 --> 00:16:31,600 |
|
they're look at is actually not a huge |
|
|
|
461 |
|
00:16:30,199 --> 00:16:34,560 |
|
difference between the two models so you |
|
|
|
462 |
|
00:16:31,600 --> 00:16:35,800 |
|
can see here this is GPT-2 XL and Small
|
|
|
463 |
|
00:16:34,560 --> 00:16:37,319 |
|
so there's a difference in parameter |
|
|
|
464 |
|
00:16:35,800 --> 00:16:39,519 |
|
counts and like a bit of a difference in |
|
|
|
465 |
|
00:16:37,319 --> 00:16:42,160 |
|
data I think here but these are actually |
|
|
|
466 |
|
00:16:39,519 --> 00:16:44,959 |
|
not like GPT-2 XL is certainly not like a
|
|
|
467 |
|
00:16:42,160 --> 00:16:48,399 |
|
super strong model now um I think they |
|
|
|
468 |
|
00:16:44,959 --> 00:16:50,920 |
|
try a couple of different settings and |
|
|
|
469 |
|
00:16:48,399 --> 00:16:52,319 |
|
the general intuition I think if I'm |
|
|
|
470 |
|
00:16:50,920 --> 00:16:54,880 |
|
remembering it correctly is that you |
|
|
|
471 |
|
00:16:52,319 --> 00:16:56,319 |
|
want a model that's not like so close in |
|
|
|
472 |
|
00:16:54,880 --> 00:16:58,000 |
|
performance to your expert that you're |
|
|
|
473 |
|
00:16:56,319 --> 00:16:59,839 |
|
basically just subtracting out useful |
|
|
|
474 |
|
00:16:58,000 --> 00:17:02,240 |
|
things but you also don't want a model |
|
|
|
475 |
|
00:16:59,839 --> 00:17:03,519 |
|
that's like so degenerate that it
|
|
|
476 |
|
00:17:02,240 --> 00:17:04,959 |
|
hasn't learned anything useful about |
|
|
|
477 |
|
00:17:03,519 --> 00:17:06,839 |
|
your task at all so I think it might |
|
|
|
478 |
|
00:17:04,959 --> 00:17:09,600 |
|
depend on what task you're looking |
|
|
|
479 |
|
00:17:06,839 --> 00:17:12,919 |
|
at |
|
|
|
480 |
|
00:17:09,600 --> 00:17:14,559 |
|
yes this is for inference um so actually |
|
|
|
481 |
|
00:17:12,919 --> 00:17:17,640 |
|
everything we look at today will not |
|
|
|
482 |
|
00:17:14,559 --> 00:17:17,640 |
|
require any training of the
|
|
|
483 |
|
00:17:19,360 --> 00:17:26,559 |
|
model Okay cool so now we're going to |
|
|
|
484 |
|
00:17:24,000 --> 00:17:30,039 |
|
step into sort of a slightly different |
|
|
|
485 |
|
00:17:26,559 --> 00:17:31,280 |
|
um set of strategies here which is maybe |
|
|
|
486 |
|
00:17:30,039 --> 00:17:33,039 |
|
we don't just want something from the |
|
|
|
487 |
|
00:17:31,280 --> 00:17:35,160 |
|
model distribution or something from a |
|
|
|
488 |
|
00:17:33,039 --> 00:17:37,760 |
|
modified distribution maybe we actually |
|
|
|
489 |
|
00:17:35,160 --> 00:17:39,840 |
|
just want the quote unquote best thing |
|
|
|
490 |
|
00:17:37,760 --> 00:17:42,960 |
|
the single most likely output given our |
|
|
|
491 |
|
00:17:39,840 --> 00:17:45,200 |
|
input right and here this would be the Y |
|
|
|
492 |
|
00:17:42,960 --> 00:17:48,039 |
|
hat the single sequence that
|
|
|
493 |
|
00:17:45,200 --> 00:17:51,919 |
|
has the highest score p(y|x)
|
|
|
494 |
|
00:17:48,039 --> 00:17:54,240 |
|
for the X that we gave the model um this |
|
|
|
495 |
|
00:17:51,919 --> 00:17:56,000 |
|
is this section is called mode seeking |
|
|
|
496 |
|
00:17:54,240 --> 00:17:58,039 |
|
search because this is the mode of the |
|
|
|
497 |
|
00:17:56,000 --> 00:18:00,440 |
|
distribution over outputs if you sampled |
|
|
|
498 |
|
00:17:58,039 --> 00:18:01,760 |
|
a huge huge number of times and you |
|
|
|
499 |
|
00:18:00,440 --> 00:18:04,720 |
|
looked at the single most likely |
|
|
|
500 |
|
00:18:01,760 --> 00:18:06,720 |
|
sequence you got it would be this y hat |
|
|
|
501 |
|
00:18:04,720 --> 00:18:09,280 |
|
and so how do we find this |
|
|
|
502 |
|
00:18:06,720 --> 00:18:11,600 |
|
thing well one idea is we know the |
|
|
|
503 |
|
00:18:09,280 --> 00:18:13,159 |
|
distribution at each individual step
|
|
|
504 |
|
00:18:11,600 --> 00:18:16,000 |
|
can we just pick the most likely thing |
|
|
|
505 |
|
00:18:13,159 --> 00:18:18,960 |
|
from that distribution and so in Greedy |
|
|
|
506 |
|
00:18:16,000 --> 00:18:21,080 |
|
decoding we take the argmax the single |
|
|
|
507 |
|
00:18:18,960 --> 00:18:22,720 |
|
highest probability token at each step |
|
|
|
508 |
|
00:18:21,080 --> 00:18:24,840 |
|
and we continue generating until the |
|
|
|
509 |
|
00:18:22,720 --> 00:18:26,600 |
|
single highest
|
|
|
510 |
|
00:18:24,840 --> 00:18:28,840 |
|
probability token is the stop token |
|
|
|
511 |
|
00:18:26,600 --> 00:18:31,559 |
|
right the end of sequence token |
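A minimal greedy decoding loop, with the same hypothetical `model` interface as before; at each step take the argmax token and stop when it is end-of-sequence:

```python
import torch

def greedy_decode(model, x_ids, eos_id, max_len=100):
    ids = x_ids.clone()
    for _ in range(max_len):
        logits = model(ids.unsqueeze(0))[0, -1]
        next_tok = torch.argmax(logits)        # single most likely token
        if next_tok.item() == eos_id:
            break
        ids = torch.cat([ids, next_tok.view(1)])
    return ids[len(x_ids):]
```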
|
|
|
512 |
|
00:18:28,840 --> 00:18:33,400 |
|
um for an individual token right if we |
|
|
|
513 |
|
00:18:31,559 --> 00:18:35,559 |
|
only want a single token output this is |
|
|
|
514 |
|
00:18:33,400 --> 00:18:38,320 |
|
exactly what we want this is the single |
|
|
|
515 |
|
00:18:35,559 --> 00:18:40,400 |
|
most likely output um and that's great |
|
|
|
516 |
|
00:18:38,320 --> 00:18:44,000 |
|
but if we're looking at something that |
|
|
|
517 |
|
00:18:40,400 --> 00:18:45,120 |
|
is maybe several tokens long are we |
|
|
|
518 |
|
00:18:44,000 --> 00:18:47,360 |
|
actually going to get the highest |
|
|
|
519 |
|
00:18:45,120 --> 00:18:49,720 |
|
probability thing and if you kind of |
|
|
|
520 |
|
00:18:47,360 --> 00:18:52,159 |
|
squint at this you can see that maybe we |
|
|
|
521 |
|
00:18:49,720 --> 00:18:54,120 |
|
have a problem here where the highest |
|
|
|
522 |
|
00:18:52,159 --> 00:18:56,320 |
|
probability sequence that you get from |
|
|
|
523 |
|
00:18:54,120 --> 00:18:58,039 |
|
multiplying across multiple steps |
|
|
|
524 |
|
00:18:56,320 --> 00:18:59,559 |
|
doesn't necessarily start with the token |
|
|
|
525 |
|
00:18:58,039 --> 00:19:01,600 |
|
that was highest probability at time |
|
|
|
526 |
|
00:18:59,559 --> 00:19:03,200 |
|
step one right maybe if you're doing |
|
|
|
527 |
|
00:19:01,600 --> 00:19:04,720 |
|
something like unconditional generation |
|
|
|
528 |
|
00:19:03,200 --> 00:19:06,720 |
|
the highest probability token at time |
|
|
|
529 |
|
00:19:04,720 --> 00:19:08,360 |
|
step one is always "the" but there could
|
|
|
530 |
|
00:19:06,720 --> 00:19:09,919 |
|
be a really probable sentence that just |
|
|
|
531 |
|
00:19:08,360 --> 00:19:11,480 |
|
doesn't happen to start with the the |
|
|
|
532 |
|
00:19:09,919 --> 00:19:12,720 |
|
word "the" and you would never find it
|
|
|
533 |
|
00:19:11,480 --> 00:19:15,080 |
|
using greedy
|
|
|
534 |
|
00:19:12,720 --> 00:19:17,360 |
|
decoding so this isn't going to give us |
|
|
|
535 |
|
00:19:15,080 --> 00:19:19,799 |
|
the highest probability output over a |
|
|
|
536 |
|
00:19:17,360 --> 00:19:22,000 |
|
sequence that's more than one token long
|
|
|
537 |
|
00:19:19,799 --> 00:19:23,360 |
|
can we do anything better to try to find |
|
|
|
538 |
|
00:19:22,000 --> 00:19:25,640 |
|
this um |
|
|
|
539 |
|
00:19:23,360 --> 00:19:27,559 |
|
output and here we get into sort of one |
|
|
|
540 |
|
00:19:25,640 --> 00:19:29,520 |
|
of the most popular decoding methods the |
|
|
|
541 |
|
00:19:27,559 --> 00:19:32,600 |
|
one that you maybe heard of before which |
|
|
|
542 |
|
00:19:29,520 --> 00:19:35,080 |
|
is beam search the idea here is that we |
|
|
|
543 |
|
00:19:32,600 --> 00:19:36,559 |
|
don't want to miss a high probability |
|
|
|
544 |
|
00:19:35,080 --> 00:19:38,880 |
|
token that's hidden behind a lower |
|
|
|
545 |
|
00:19:36,559 --> 00:19:40,200 |
|
probability prefix so we want to kind of |
|
|
|
546 |
|
00:19:38,880 --> 00:19:42,000 |
|
search through a couple of different |
|
|
|
547 |
|
00:19:40,200 --> 00:19:43,760 |
|
options so that we don't discard |
|
|
|
548 |
|
00:19:42,000 --> 00:19:47,120 |
|
something too early that might have high |
|
|
|
549 |
|
00:19:43,760 --> 00:19:49,360 |
|
probability um later on in generation |
|
|
|
550 |
|
00:19:47,120 --> 00:19:50,919 |
|
and this is a type of breadth-first search
|
|
|
551 |
|
00:19:49,360 --> 00:19:53,200 |
|
so we're going to look at a wide variety |
|
|
|
552 |
|
00:19:50,919 --> 00:19:54,600 |
|
of options at a given time step we're |
|
|
|
553 |
|
00:19:53,200 --> 00:19:55,600 |
|
going to pick some set of them to |
|
|
|
554 |
|
00:19:54,600 --> 00:19:57,120 |
|
continue and then we're going to look at |
|
|
|
555 |
|
00:19:55,600 --> 00:19:58,919 |
|
a wide variety of options for the next |
|
|
|
556 |
|
00:19:57,120 --> 00:19:59,960 |
|
time step instead of generating all the |
|
|
|
557 |
|
00:19:58,919 --> 00:20:02,200 |
|
way through a sequence and then |
|
|
|
558 |
|
00:19:59,960 --> 00:20:04,320 |
|
generating all the way through another |
|
|
|
559 |
|
00:20:02,200 --> 00:20:05,760 |
|
sequence um and how this works is we're |
|
|
|
560 |
|
00:20:04,320 --> 00:20:07,559 |
|
going to pick sort of a number of |
|
|
|
561 |
|
00:20:05,760 --> 00:20:09,400 |
|
candidates we'd like to explore a beam |
|
|
|
562 |
|
00:20:07,559 --> 00:20:11,039 |
|
width so in this example we're going to
|
|
|
563 |
|
00:20:09,400 --> 00:20:12,799 |
|
pick three and we're going to say all |
|
|
|
564 |
|
00:20:11,039 --> 00:20:15,480 |
|
right here are maybe three options for |
|
|
|
565 |
|
00:20:12,799 --> 00:20:17,640 |
|
time step one and if we pick each of
|
|
|
566 |
|
00:20:15,480 --> 00:20:19,760 |
|
those three options what would be the |
|
|
|
567 |
|
00:20:17,640 --> 00:20:21,799 |
|
three most likely things for time step |
|
|
|
568 |
|
00:20:19,760 --> 00:20:23,200 |
|
two right rather than choosing just the |
|
|
|
569 |
|
00:20:21,799 --> 00:20:24,520 |
|
single most likely thing in Greedy |
|
|
|
570 |
|
00:20:23,200 --> 00:20:26,960 |
|
decoding we're going to pick three |
|
|
|
571 |
|
00:20:24,520 --> 00:20:29,120 |
|
options and so now we have three options |
|
|
|
572 |
|
00:20:26,960 --> 00:20:32,559 |
|
for time step one three options for time |
|
|
|
573 |
|
00:20:29,120 --> 00:20:34,280 |
|
step two we now have nine options um |
|
|
|
574 |
|
00:20:32,559 --> 00:20:36,320 |
|
here right three options and then three |
|
|
|
575 |
|
00:20:34,280 --> 00:20:37,679 |
|
more for each of these and we don't want |
|
|
|
576 |
|
00:20:36,320 --> 00:20:40,159 |
|
to continue doing this because this is |
|
|
|
577 |
|
00:20:37,679 --> 00:20:41,960 |
|
going to sort of combinatorially explode so
|
|
|
578 |
|
00:20:40,159 --> 00:20:44,080 |
|
we need to choose some subset of these |
|
|
|
579 |
|
00:20:41,960 --> 00:20:45,880 |
|
to continue with and the way we do that |
|
|
|
580 |
|
00:20:44,080 --> 00:20:47,799 |
|
is we look at the probability over this |
|
|
|
581 |
|
00:20:45,880 --> 00:20:49,240 |
|
two token sequence and we choose the two |
|
|
|
582 |
|
00:20:47,799 --> 00:20:51,520 |
|
that have the highest probability |
|
|
|
583 |
|
00:20:49,240 --> 00:20:53,400 |
|
overall so in this instance we've chosen |
|
|
|
584 |
|
00:20:51,520 --> 00:20:55,679 |
|
sort of one thing from this first group |
|
|
|
585 |
|
00:20:53,400 --> 00:20:57,760 |
|
and two things from the second group and |
|
|
|
586 |
|
00:20:55,679 --> 00:20:59,760 |
|
now we're back down to three hypotheses |
|
|
|
587 |
|
00:20:57,760 --> 00:21:02,120 |
|
each now two tokens long and we'll |
|
|
|
588 |
|
00:20:59,760 --> 00:21:04,000 |
|
continue generating to time step three |
|
|
|
589 |
|
00:21:02,120 --> 00:21:05,600 |
|
we'll get nine options we'll prune it back
|
|
|
590 |
|
00:21:04,000 --> 00:21:07,760 |
|
down to three and we'll continue until |
|
|
|
591 |
|
00:21:05,600 --> 00:21:09,159 |
|
the end of generation where we now have |
|
|
|
592 |
|
00:21:07,760 --> 00:21:10,679 |
|
three sequences and we'll just pick the |
|
|
|
593 |
|
00:21:09,159 --> 00:21:14,000 |
|
one that's highest probability out of |
|
|
|
594 |
|
00:21:10,679 --> 00:21:15,679 |
|
those three to return
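A minimal beam search sketch under the same hypothetical `model` interface, without the length normalization real implementations usually add: each step expands every beam with its top continuations, then prunes back to the best beam_size sequences by total log probability:

```python
import torch

def beam_search(model, x_ids, eos_id, beam_size=3, max_len=50):
    beams = [(x_ids, 0.0)]                      # (token ids, total log prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for ids, score in beams:
            lp = torch.log_softmax(model(ids.unsqueeze(0))[0, -1], dim=-1)
            top = torch.topk(lp, beam_size)
            for v, tok in zip(top.values, top.indices):
                candidates.append((torch.cat([ids, tok.view(1)]), score + v.item()))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []                              # prune back down to beam_size
        for ids, score in candidates:
            if ids[-1].item() == eos_id:
                finished.append((ids, score))   # hypothesis ended, set it aside
            elif len(beams) < beam_size:
                beams.append((ids, score))
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0][len(x_ids):]
```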
|
|
|
595 |
|
00:21:14,000 --> 00:21:17,360 |
|
um this is not guaranteed to get you the highest
|
|
|
596 |
|
00:21:15,679 --> 00:21:18,480 |
|
probability thing right you still have |
|
|
|
597 |
|
00:21:17,360 --> 00:21:20,039 |
|
this risk that you could be sort of |
|
|
|
598 |
|
00:21:18,480 --> 00:21:22,279 |
|
pruning out something that's high |
|
|
|
599 |
|
00:21:20,039 --> 00:21:24,159 |
|
probability but in general this sort of |
|
|
|
600 |
|
00:21:22,279 --> 00:21:26,600 |
|
works um much better than greedy |
|
|
|
601 |
|
00:21:24,159 --> 00:21:28,520 |
|
decoding and this is if you have a |
|
|
|
602 |
|
00:21:26,600 --> 00:21:31,120 |
|
language model and you're sort of not |
|
|
|
603 |
|
00:21:28,520 --> 00:21:32,440 |
|
sure what um decoding method it's using and the outputs
|
|
|
604 |
|
00:21:31,120 --> 00:21:34,200 |
|
are pretty good it's either beam search |
|
|
|
605 |
|
00:21:32,440 --> 00:21:37,120 |
|
or temperature sampling right this is
|
|
|
606 |
|
00:21:34,200 --> 00:21:40,039 |
|
very effective this is used um pretty |
|
|
|
607 |
|
00:21:37,120 --> 00:21:41,760 |
|
broadly there are however some issues |
|
|
|
608 |
|
00:21:40,039 --> 00:21:43,760 |
|
with beam search and one of the biggest |
|
|
|
609 |
|
00:21:41,760 --> 00:21:46,159 |
|
ones is that when you're doing this |
|
|
|
610 |
|
00:21:43,760 --> 00:21:47,679 |
|
maximum likelihood search um using
|
|
|
611 |
|
00:21:46,159 --> 00:21:50,080 |
|
the search to find something
|
|
|
612 |
|
00:21:47,679 --> 00:21:51,760 |
|
that's very high likelihood um you |
|
|
|
613 |
|
00:21:50,080 --> 00:21:53,679 |
|
really sacrifice a lot of diversity in |
|
|
|
614 |
|
00:21:51,760 --> 00:21:55,320 |
|
your outputs and in particular you could |
|
|
|
615 |
|
00:21:53,679 --> 00:21:57,279 |
|
wind up at the end of beam search with |
|
|
|
616 |
|
00:21:55,320 --> 00:21:58,919 |
|
three different outputs to choose from |
|
|
|
617 |
|
00:21:57,279 --> 00:22:00,120 |
|
that are all pretty much the same
|
|
|
618 |
|
00:21:58,919 --> 00:22:02,640 |
|
like they're slightly different token |
|
|
|
619 |
|
00:22:00,120 --> 00:22:04,559 |
|
sequences but they look very similar and |
|
|
|
620 |
|
00:22:02,640 --> 00:22:07,480 |
|
so maybe you want to get sort of a
|
|
|
621 |
|
00:22:04,559 --> 00:22:08,919 |
|
more diverse set um there's a couple of |
|
|
|
622 |
|
00:22:07,480 --> 00:22:10,640 |
|
different methods in this category I'm |
|
|
|
623 |
|
00:22:08,919 --> 00:22:12,679 |
|
going to very briefly shout out two of |
|
|
|
624 |
|
00:22:10,640 --> 00:22:14,200 |
|
them um but the idea here is to sort of |
|
|
|
625 |
|
00:22:12,679 --> 00:22:16,440 |
|
reintroduce some of the benefits of |
|
|
|
626 |
|
00:22:14,200 --> 00:22:19,120 |
|
sampling while still doing this kind of |
|
|
|
627 |
|
00:22:16,440 --> 00:22:20,919 |
|
search for high probability things um |
|
|
|
628 |
|
00:22:19,120 --> 00:22:22,600 |
|
diverse beam search is one of these |
|
|
|
629 |
|
00:22:20,919 --> 00:22:25,520 |
|
methods and here the idea is that we |
|
|
|
630 |
|
00:22:22,600 --> 00:22:27,279 |
|
want to modify that scoring step when we |
|
|
|
631 |
|
00:22:25,520 --> 00:22:28,600 |
|
choose which three out of our nine beams |
|
|
|
632 |
|
00:22:27,279 --> 00:22:30,200 |
|
we want to continue |
|
|
|
633 |
|
00:22:28,600 --> 00:22:32,000 |
|
to avoid choosing things that are really |
|
|
|
634 |
|
00:22:30,200 --> 00:22:34,320 |
|
really close to each other right so |
|
|
|
635 |
|
00:22:32,000 --> 00:22:36,039 |
|
maybe our highest probability thing is |
|
|
|
636 |
|
00:22:34,320 --> 00:22:37,559 |
|
some sequence a and then if we look at |
|
|
|
637 |
|
00:22:36,039 --> 00:22:39,520 |
|
the other sequences there's one that's |
|
|
|
638 |
|
00:22:37,559 --> 00:22:41,279 |
|
pretty high probability but very similar |
|
|
|
639 |
|
00:22:39,520 --> 00:22:43,600 |
|
to that sequence and there's one that's |
|
|
|
640 |
|
00:22:41,279 --> 00:22:45,320 |
|
like slightly lower probability but very |
|
|
|
641 |
|
00:22:43,600 --> 00:22:47,200 |
|
different and so maybe we would choose a |
|
|
|
642 |
|
00:22:45,320 --> 00:22:49,679 |
|
sequence that is a little lower |
|
|
|
643 |
|
00:22:47,200 --> 00:22:51,760 |
|
probability to maximize diversity in our |
|
|
|
644 |
|
00:22:49,679 --> 00:22:53,799 |
|
set to try to get like sort of a wider |
|
|
|
645 |
|
00:22:51,760 --> 00:22:56,200 |
|
range of options to choose from later in |
|
|
|
646 |
|
00:22:53,799 --> 00:22:58,200 |
|
generation so this modifies the scoring |
|
|
|
647 |
|
00:22:56,200 --> 00:23:00,120 |
|
to not just take into account likelihood |
|
|
|
648 |
|
00:22:58,200 --> 00:23:03,200 |
|
but also similarity to other candidates
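A rough sketch of that diversity-aware selection step, in plain Python: when pruning candidates, penalize each one's log probability by its token overlap with hypotheses already kept (the overlap measure is a crude stand-in; the actual diverse beam search paper uses per-group dissimilarity terms):

```python
def select_diverse(candidates, k=3, lam=0.5):
    """candidates: list of (token_id_list, log_prob) pairs."""
    def overlap(a, b):
        return len(set(a) & set(b)) / max(len(a), len(b), 1)

    chosen, pool = [], list(candidates)
    while pool and len(chosen) < k:
        # likelihood minus a penalty for similarity to already-chosen beams
        best = max(pool, key=lambda c: c[1] - lam * sum(
            overlap(c[0], s[0]) for s in chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen
```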
|
|
|
649 |
|
00:23:00,120 --> 00:23:05,400 |
|
another option down this path is
|
|
|
650 |
|
00:23:03,200 --> 00:23:07,640 |
|
stochastic beam search where we're going |
|
|
|
651 |
|
00:23:05,400 --> 00:23:09,279 |
|
to keep the scoring the same but rather |
|
|
|
652 |
|
00:23:07,640 --> 00:23:11,679 |
|
than choosing just the top three most |
|
|
|
653 |
|
00:23:09,279 --> 00:23:13,279 |
|
likely tokens to expand out each beam |
|
|
|
654 |
|
00:23:11,679 --> 00:23:15,200 |
|
we're actually going to sample from some |
|
|
|
655 |
|
00:23:13,279 --> 00:23:17,000 |
|
distribution and you could sample from |
|
|
|
656 |
|
00:23:15,200 --> 00:23:18,760 |
|
the model distribution directly using |
|
|
|
657 |
|
00:23:17,000 --> 00:23:20,200 |
|
ancestral sampling or you could use any |
|
|
|
658 |
|
00:23:18,760 --> 00:23:22,679 |
|
of our sampling methods we talked about |
|
|
|
659 |
|
00:23:20,200 --> 00:23:24,200 |
|
in the last section to do this and the |
|
|
|
660 |
|
00:23:22,679 --> 00:23:25,799 |
|
the idea here is sort of similar to |
|
|
|
661 |
|
00:23:24,200 --> 00:23:29,279 |
|
diverse beam search we want to get sort |
|
|
|
662 |
|
00:23:25,799 --> 00:23:31,240 |
|
of a wider exploration of our models |
|
|
|
663 |
|
00:23:29,279 --> 00:23:33,520 |
|
like output space you know we want to |
|
|
|
664 |
|
00:23:31,240 --> 00:23:35,360 |
|
sort of explore more things instead of |
|
|
|
665 |
|
00:23:33,520 --> 00:23:36,760 |
|
just winding up with a bunch of
|
|
|
666 |
|
00:23:35,360 --> 00:23:39,679 |
|
outputs that look very similar at the |
|
|
|
667 |
|
00:23:36,760 --> 00:23:41,120 |
|
end of beam search um if folks are |
|
|
|
668 |
|
00:23:39,679 --> 00:23:43,679 |
|
interested in these I think these are |
|
|
|
669 |
|
00:23:41,120 --> 00:23:46,159 |
|
both linked on the website um the
|
|
|
670 |
|
00:23:43,679 --> 00:23:48,679 |
|
papers that both of these ideas came |
|
|
|
671 |
|
00:23:46,159 --> 00:23:51,480 |
|
from |
|
|
|
672 |
|
00:23:48,679 --> 00:23:54,400 |
|
Yes um for stochastic |
|
|
|
673 |
|
00:23:51,480 --> 00:23:57,039 |
|
beam search the sample probability takes into
|
|
|
674 |
|
00:23:54,400 --> 00:23:59,039 |
|
account the current part that we already |
|
|
|
675 |
|
00:23:57,039 --> 00:24:02,000 |
|
traveled okay
|
|
|
676 |
|
00:23:59,039 --> 00:24:04,320 |
|
yeah exactly so it's this um like |
|
|
|
677 |
|
00:24:02,000 --> 00:24:05,640 |
|
selection step here but we're instead of |
|
|
|
678 |
|
00:24:04,320 --> 00:24:07,760 |
|
just doing greedy selection we're going |
|
|
|
679 |
|
00:24:05,640 --> 00:24:11,760 |
|
to do |
|
|
|
680 |
|
00:24:07,760 --> 00:24:17,520 |
|
sampling yes my question was on the
|
|
|
681 |
|
00:24:11,760 --> 00:24:23,200 |
|
yeah like you for something super simple |
|
|
|
682 |
|
00:24:17,520 --> 00:24:26,520 |
|
like if both of them have a high are you |
|
|
|
683 |
|
00:24:23,200 --> 00:24:28,120 |
|
like yeah so if it has a
|
|
|
684 |
|
00:24:26,520 --> 00:24:30,080 |
|
really high probability under both |
|
|
|
685 |
|
00:24:28,120 --> 00:24:32,880 |
|
models it would have a lower probability |
|
|
|
686 |
|
00:24:30,080 --> 00:24:35,080 |
|
after doing this sort of contrastive
|
|
|
687 |
|
00:24:32,880 --> 00:24:36,600 |
|
decoding right so if the smaller
|
|
|
688 |
|
00:24:35,080 --> 00:24:38,799 |
|
model's really good at your task this |
|
|
|
689 |
|
00:24:36,600 --> 00:24:40,960 |
|
might not work very |
|
|
|
690 |
|
00:24:38,799 --> 00:24:43,360 |
|
well yeah I think in the paper they're |
|
|
|
691 |
|
00:24:40,960 --> 00:24:45,320 |
|
generally evaluating on these sort of |
|
|
|
692 |
|
00:24:43,360 --> 00:24:48,279 |
|
like open-ended generation tasks I bet
|
|
|
693 |
|
00:24:45,320 --> 00:24:51,279 |
|
this works a lot worse for |
|
|
|
694 |
|
00:24:48,279 --> 00:24:51,279 |
|
now |
|
|
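(As a rough aside on the contrastive decoding idea raised here: the sketch below scores tokens by the gap between a large "expert" model's and a small "amateur" model's log-probabilities, so anything both models like gets down-weighted. The arrays are toy numbers, and the real method also restricts scoring to tokens the expert already finds plausible.)

```python
import numpy as np

def contrastive_scores(logp_expert, logp_amateur):
    """Score tokens by the gap between a large 'expert' model's and a
    small 'amateur' model's log-probabilities; tokens that are likely
    under both models get down-weighted."""
    return logp_expert - logp_amateur

logp_expert  = np.log(np.array([0.50, 0.30, 0.20]))
logp_amateur = np.log(np.array([0.55, 0.10, 0.35]))
print(contrastive_scores(logp_expert, logp_amateur))
# token 0 is likely under *both* models, so its contrastive score drops
```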
|
695 |
|
00:24:56,760 --> 00:24:59,760 |
|
yes |
|
|
|
696 |
|
00:25:02,440 --> 00:25:08,120 |
|
you yeah this is a great question um and |
|
|
|
697 |
|
00:25:05,960 --> 00:25:11,559 |
|
so the question is how do we measure |
|
|
|
698 |
|
00:25:08,120 --> 00:25:14,120 |
|
similar beams um you can sort of Define |
|
|
|
699 |
|
00:25:11,559 --> 00:25:15,559 |
|
any kind of similarity function you like |
|
|
|
700 |
|
00:25:14,120 --> 00:25:17,520 |
|
here um anything that you'd use to |
|
|
|
701 |
|
00:25:15,559 --> 00:25:20,440 |
|
evaluate like how similar something is |
|
|
|
702 |
|
00:25:17,520 --> 00:25:22,360 |
|
to a gold reference right um I think in |
|
|
|
703 |
|
00:25:20,440 --> 00:25:25,039 |
|
the original diverse beam search paper they do
|
|
|
704 |
|
00:25:22,360 --> 00:25:27,760 |
|
this by looking at like exact token |
|
|
|
705 |
|
00:25:25,039 --> 00:25:30,640 |
|
match across the two right like if these |
|
|
|
706 |
|
00:25:27,760 --> 00:25:33,880 |
|
beams are the same in all but one of the |
|
|
|
707 |
|
00:25:30,640 --> 00:25:35,600 |
|
tokens or they have like you know 50% of |
|
|
|
708 |
|
00:25:33,880 --> 00:25:37,120 |
|
the tokens are shared across the beams |
|
|
|
709 |
|
00:25:35,600 --> 00:25:38,559 |
|
and maybe these are really similar and |
|
|
|
710 |
|
00:25:37,120 --> 00:25:40,559 |
|
they should try to choose two things |
|
|
|
711 |
|
00:25:38,559 --> 00:25:42,600 |
|
that are different um but you could swap |
|
|
|
712 |
|
00:25:40,559 --> 00:25:46,200 |
|
that out for any metric
|
|
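A minimal sketch of the diversity-penalized selection idea in Python; this is not the exact Hamming-diversity objective from the diverse beam search paper, and the token-overlap function and penalty weight `lam` are illustrative stand-ins for whatever similarity metric and strength you choose:

```python
def token_overlap(a, b):
    """Jaccard overlap between two token lists (an illustrative
    similarity; the paper uses other diversity terms)."""
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

def select_diverse_beams(candidates, k, lam=1.5):
    """candidates: list of (tokens, logprob). Greedily pick k beams,
    scoring each by logprob minus a penalty for overlapping with
    beams already chosen."""
    chosen, remaining = [], list(candidates)
    while remaining and len(chosen) < k:
        def score(cand):
            tokens, logprob = cand
            return logprob - lam * sum(token_overlap(tokens, c[0]) for c in chosen)
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Three near-duplicates and one distinct candidate: with the penalty,
# the distinct (lower-probability) one makes it into the top two.
cands = [(["the", "cat", "sat"], -1.0),
         (["the", "cat", "sat", "down"], -1.1),
         (["the", "cat", "ran"], -1.6),
         (["a", "dog", "barked"], -2.0)]
print(select_diverse_beams(cands, k=2))
```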
|
713 |
|
00:25:42,600 --> 00:25:49,440 |
|
yes so
|
|
|
714 |
|
00:25:46,200 --> 00:25:50,960 |
|
there's kind of like a thing that's happening
|
|
|
715 |
|
00:25:49,440 --> 00:25:53,360 |
|
at |
|
|
|
716 |
|
00:25:50,960 --> 00:25:55,000 |
|
every step for the stochastic beam search
|
|
|
717 |
|
00:25:53,360 --> 00:25:57,720 |
|
there's like a searching what do you mean
|
|
|
718 |
|
00:25:55,000 --> 00:26:00,520 |
|
by a searching so it says modify the next
|
|
|
719 |
|
00:25:57,720 --> 00:26:03,000 |
|
search selection because they're like um
|
|
|
720 |
|
00:26:00,520 --> 00:26:06,919 |
|
it is searching at a different space and |
|
|
|
721 |
|
00:26:03,000 --> 00:26:09,679 |
|
it's not searching within the same search
|
|
|
722 |
|
00:26:06,919 --> 00:26:14,080 |
|
space is it searching in a different space
|
|
|
723 |
|
00:26:09,679 --> 00:26:15,799 |
|
yeah so it's um in the same probability |
|
|
|
724 |
|
00:26:14,080 --> 00:26:18,399 |
|
distribution but it'll see a different |
|
|
|
725 |
|
00:26:15,799 --> 00:26:20,840 |
|
part of the distribution so when you're |
|
|
|
726 |
|
00:26:18,399 --> 00:26:22,640 |
|
doing the greedy search you'll only ever
|
|
|
727 |
|
00:26:20,840 --> 00:26:24,559 |
|
look at the top three tokens in the next |
|
|
|
728 |
|
00:26:22,640 --> 00:26:27,120 |
|
token distribution because you're just |
|
|
|
729 |
|
00:26:24,559 --> 00:26:29,840 |
|
selecting like the maximums um but in |
|
|
|
730 |
|
00:26:27,120 --> 00:26:31,360 |
|
sampling you could you could get the |
|
|
|
731 |
|
00:26:29,840 --> 00:26:32,880 |
|
same tokens right if they're really high |
|
|
|
732 |
|
00:26:31,360 --> 00:26:35,720 |
|
likelihood but you could also sample |
|
|
|
733 |
|
00:26:32,880 --> 00:26:38,399 |
|
something that's further down in the |
|
|
|
734 |
|
00:26:35,720 --> 00:26:42,760 |
|
distribution yeah as a followup to that |
|
|
|
735 |
|
00:26:38,399 --> 00:26:44,880 |
|
like in uh our sampling we take into
|
|
|
736 |
|
00:26:42,760 --> 00:26:46,960 |
|
account the probability of the prefix |
|
|
|
737 |
|
00:26:44,880 --> 00:26:50,679 |
|
like the current hypothesis right |
|
|
|
738 |
|
00:26:46,960 --> 00:26:51,760 |
|
because otherwise it is the same as just |
|
|
|
739 |
|
00:26:50,679 --> 00:26:54,279 |
|
uh |
|
|
|
740 |
|
00:26:51,760 --> 00:26:57,159 |
|
in yeah so in the sampling we're taking |
|
|
|
741 |
|
00:26:54,279 --> 00:27:00,120 |
|
into account the previous the prefix |
|
|
|
742 |
|
00:26:57,159 --> 00:27:02,600 |
|
yeah so we will take into account
|
|
|
743 |
|
00:27:00,120 --> 00:27:06,200 |
|
the prefix but this sampling mechanism |
|
|
|
744 |
|
00:27:02,600 --> 00:27:08,320 |
|
here could be ancestral sampling um the |
|
|
|
745 |
|
00:27:06,200 --> 00:27:10,480 |
|
only difference here is that we're
|
|
|
746 |
|
00:27:08,320 --> 00:27:12,600 |
|
also doing a sort of search step on top |
|
|
|
747 |
|
00:27:10,480 --> 00:27:14,679 |
|
of that to choose the maximum likelihood |
|
|
|
748 |
|
00:27:12,600 --> 00:27:18,080 |
|
things across multiple |
|
|
|
749 |
|
00:27:14,679 --> 00:27:20,559 |
|
beams another important thing um is you
|
|
|
750 |
|
00:27:18,080 --> 00:27:22,279 |
|
sample without replacement and so |
|
|
|
751 |
|
00:27:20,559 --> 00:27:24,120 |
|
normally you sample with replacement and |
|
|
|
752 |
|
00:27:22,279 --> 00:27:25,840 |
|
you might get exactly the same thing but |
|
|
|
753 |
|
00:27:24,120 --> 00:27:28,000 |
|
when you're doing stochastic beam search you
|
|
|
754 |
|
00:27:25,840 --> 00:27:30,240 |
|
sample without replacement so you get |
|
|
|
755 |
|
00:27:28,000 --> 00:27:33,279 |
|
like three ones according to the |
|
|
|
756 |
|
00:27:30,240 --> 00:27:36,080 |
|
probability but they're guaranteed to be |
|
|
|
757 |
|
00:27:33,279 --> 00:27:37,799 |
|
different right so beam search like one |
|
|
|
758 |
|
00:27:36,080 --> 00:27:39,559 |
|
of the characteristics of beam search is |
|
|
|
759 |
|
00:27:37,799 --> 00:27:41,640 |
|
you always get three different things |
|
|
|
760 |
|
00:27:39,559 --> 00:27:44,240 |
|
because you're picking the three top |
|
|
|
761 |
|
00:27:41,640 --> 00:27:45,760 |
|
when you do sampling uh like stochastic |
|
|
|
762 |
|
00:27:44,240 --> 00:27:47,399 |
|
beam search you get three different
|
|
|
763 |
|
00:27:45,760 --> 00:27:49,440 |
|
things they're not guaranteed to be the |
|
|
|
764 |
|
00:27:47,399 --> 00:27:51,760 |
|
top they could be distributed according |
|
|
|
765 |
|
00:27:49,440 --> 00:27:54,360 |
|
to the probability distribution but they're
|
|
|
766 |
|
00:27:51,760 --> 00:27:55,840 |
|
guaranteed to be different so um you can take a look at
|
|
|
767 |
|
00:27:54,360 --> 00:27:58,039 |
|
the paper for more details of exactly |
|
|
|
768 |
|
00:27:55,840 --> 00:28:00,159 |
|
how it looks but that's the idea
|
|
|
769 |
|
00:27:58,039 --> 00:28:03,039 |
|
so then is the main difference that |
|
|
|
770 |
|
00:28:00,159 --> 00:28:05,120 |
|
compared to just sampling that we have n
|
|
|
771 |
|
00:28:03,039 --> 00:28:08,519 |
|
options that we're keeping track of instead of
|
|
|
772 |
|
00:28:05,120 --> 00:28:10,320 |
|
going with only one and
|
|
|
773 |
|
00:28:08,519 --> 00:28:11,200 |
|
you can't yeah you can't sample the same
|
|
|
774 |
|
00:28:10,320 --> 00:28:14,960 |
|
thing |
|
|
|
775 |
|
00:28:11,200 --> 00:28:16,919 |
|
right yeah so just uh repeating for the recording
|
|
|
776 |
|
00:28:14,960 --> 00:28:19,159 |
|
it's that n options we're keeping track of
|
|
|
777 |
|
00:28:16,919 --> 00:28:22,240 |
|
and they're all going to be unique token |
|
|
|
778 |
|
00:28:19,159 --> 00:28:24,240 |
|
sequences at least um you can actually |
|
|
|
779 |
|
00:28:22,240 --> 00:28:26,200 |
|
get the same output sequence from two |
|
|
|
780 |
|
00:28:24,240 --> 00:28:28,120 |
|
different token sequences if you tokenize
|
|
|
781 |
|
00:28:26,200 --> 00:28:32,360 |
|
slightly differently um but these will |
|
|
|
782 |
|
00:28:28,120 --> 00:28:37,840 |
|
always be unique token sequences
|
|
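Here is a simplified sketch of one expansion step, assuming a toy next-token distribution; the actual stochastic beam search paper uses a Gumbel-top-k trick, so treat the multinomial version below as a stand-in that shows the key property, k distinct continuations sampled rather than taken greedily:

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_beam_stochastically(beam_logprob, next_token_logprobs, k):
    """Score each continuation by prefix logprob + next-token logprob,
    then draw k *distinct* continuations from the implied distribution
    (sampling without replacement) instead of taking the top k."""
    total = beam_logprob + next_token_logprobs
    probs = np.exp(total - total.max())
    probs /= probs.sum()
    # replace=False is what guarantees k different continuations.
    return rng.choice(len(probs), size=k, replace=False, p=probs)

# Toy next-token distribution over a 6-token vocabulary.
logits = np.array([2.0, 1.5, 1.4, 0.1, -1.0, -2.0])
logp = logits - np.log(np.exp(logits).sum())
print(expand_beam_stochastically(-0.7, logp, k=3))
```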
|
783 |
|
00:28:32,360 --> 00:28:39,279 |
|
so that was sort of like a
|
|
|
784 |
|
00:28:37,840 --> 00:28:41,320 |
|
set of methods that we've developed to |
|
|
|
785 |
|
00:28:39,279 --> 00:28:43,600 |
|
try to find the most probable sequence |
|
|
|
786 |
|
00:28:41,320 --> 00:28:44,480 |
|
out of the model um but in the next |
|
|
|
787 |
|
00:28:43,600 --> 00:28:46,039 |
|
section here we're going to sort of |
|
|
|
788 |
|
00:28:44,480 --> 00:28:50,240 |
|
think about whether that's actually what |
|
|
|
789 |
|
00:28:46,039 --> 00:28:51,679 |
|
we want to do at all um so what is like |
|
|
|
790 |
|
00:28:50,240 --> 00:28:54,240 |
|
is do we really want the highest |
|
|
|
791 |
|
00:28:51,679 --> 00:28:56,880 |
|
probability thing um we know that |
|
|
|
792 |
|
00:28:54,240 --> 00:28:58,600 |
|
outputs with really low probability tend |
|
|
|
793 |
|
00:28:56,880 --> 00:29:00,640 |
|
to be really like worse than outputs
|
|
|
794 |
|
00:28:58,600 --> 00:29:03,240 |
|
with high probability right maybe I'm |
|
|
|
795 |
|
00:29:00,640 --> 00:29:05,840 |
|
trying to predict like what the next |
|
|
|
796 |
|
00:29:03,240 --> 00:29:08,640 |
|
sentence should be after the cat saw the |
|
|
|
797 |
|
00:29:05,840 --> 00:29:11,240 |
|
dog right the cat sat down is way higher |
|
|
|
798 |
|
00:29:08,640 --> 00:29:12,559 |
|
probability than the cat grew wings and |
|
|
|
799 |
|
00:29:11,240 --> 00:29:14,039 |
|
at least with the cats I've met that |
|
|
|
800 |
|
00:29:12,559 --> 00:29:15,679 |
|
sounds pretty much
|
|
|
801 |
|
00:29:14,039 --> 00:29:19,559 |
|
right right like this is a much better |
|
|
|
802 |
|
00:29:15,679 --> 00:29:21,720 |
|
output than the cat grew wings but if you
|
|
|
803 |
|
00:29:19,559 --> 00:29:24,159 |
|
look at just the outputs with relatively |
|
|
|
804 |
|
00:29:21,720 --> 00:29:25,960 |
|
high probability it's sort of less clear |
|
|
|
805 |
|
00:29:24,159 --> 00:29:27,880 |
|
that this defines an exact ranking |
|
|
|
806 |
|
00:29:25,960 --> 00:29:30,559 |
|
between those outputs right |
|
|
|
807 |
|
00:29:27,880 --> 00:29:32,600 |
|
is the cat sat down necessarily better |
|
|
|
808 |
|
00:29:30,559 --> 00:29:34,519 |
|
than the cat ran away these both seem |
|
|
|
809 |
|
00:29:32,600 --> 00:29:35,720 |
|
like pretty reasonable outputs to me |
|
|
|
810 |
|
00:29:34,519 --> 00:29:40,200 |
|
even though one of them is slightly |
|
|
|
811 |
|
00:29:35,720 --> 00:29:42,799 |
|
higher probability and so do we
|
|
|
812 |
|
00:29:40,200 --> 00:29:45,240 |
|
really like necessarily need to recover |
|
|
|
813 |
|
00:29:42,799 --> 00:29:47,200 |
|
the cat sat down um and this gets a
|
|
|
814 |
|
00:29:45,240 --> 00:29:49,399 |
|
little a little more complicated still |
|
|
|
815 |
|
00:29:47,200 --> 00:29:51,120 |
|
if we look at sort of a range of outputs |
|
|
|
816 |
|
00:29:49,399 --> 00:29:53,120 |
|
so say there's sort of six outputs that |
|
|
|
817 |
|
00:29:51,120 --> 00:29:55,240 |
|
our model could give us um and here |
|
|
|
818 |
|
00:29:53,120 --> 00:29:57,559 |
|
we're looking at sort of full sequences |
|
|
|
819 |
|
00:29:55,240 --> 00:30:00,120 |
|
not individual tokens just for clarity |
|
|
|
820 |
|
00:29:57,559 --> 00:30:02,640 |
|
so maybe our outputs in order of |
|
|
|
821 |
|
00:30:00,120 --> 00:30:05,840 |
|
probability are the cat sat down it ran |
|
|
|
822 |
|
00:30:02,640 --> 00:30:08,240 |
|
away it sprinted off it got out of there |
|
|
|
823 |
|
00:30:05,840 --> 00:30:09,720 |
|
it's very small and it grew Wings right |
|
|
|
824 |
|
00:30:08,240 --> 00:30:11,440 |
|
so we're definitely sure that the cat |
|
|
|
825 |
|
00:30:09,720 --> 00:30:13,159 |
|
sat down is a better output than the cat |
|
|
|
826 |
|
00:30:11,440 --> 00:30:15,360 |
|
grew wings and if we're doing a mode
|
|
|
827 |
|
00:30:13,159 --> 00:30:17,600 |
|
seeking search we would find that as our |
|
|
|
828 |
|
00:30:15,360 --> 00:30:19,440 |
|
most likely thing if we you
|
|
|
829 |
|
00:30:17,600 --> 00:30:21,440 |
|
know do a good job searching and we'd |
|
|
|
830 |
|
00:30:19,440 --> 00:30:23,519 |
|
return that as our output but if you |
|
|
|
831 |
|
00:30:21,440 --> 00:30:25,919 |
|
look at the rest of this distribution |
|
|
|
832 |
|
00:30:23,519 --> 00:30:27,880 |
|
you see that there's actually a whole |
|
|
|
833 |
|
00:30:25,919 --> 00:30:29,240 |
|
set of outputs after that all say |
|
|
|
834 |
|
00:30:27,880 --> 00:30:31,720 |
|
something that kind of means the cat |
|
|
|
835 |
|
00:30:29,240 --> 00:30:33,480 |
|
left the area right it's just that this |
|
|
|
836 |
|
00:30:31,720 --> 00:30:35,200 |
|
probability is split over these three |
|
|
|
837 |
|
00:30:33,480 --> 00:30:37,080 |
|
different generations and if you |
|
|
|
838 |
|
00:30:35,200 --> 00:30:39,120 |
|
actually add up the probability mass of |
|
|
|
839 |
|
00:30:37,080 --> 00:30:40,880 |
|
all three of these sequences this is |
|
|
|
840 |
|
00:30:39,120 --> 00:30:42,919 |
|
double the probability mass of the cat |
|
|
|
841 |
|
00:30:40,880 --> 00:30:44,360 |
|
sat down but because none of these |
|
|
|
842 |
|
00:30:42,919 --> 00:30:45,960 |
|
individual sequences is higher |
|
|
|
843 |
|
00:30:44,360 --> 00:30:47,399 |
|
probability if you're doing mode seeking |
|
|
|
844 |
|
00:30:45,960 --> 00:30:50,640 |
|
search you wouldn't be able
|
|
|
845 |
|
00:30:47,399 --> 00:30:52,480 |
|
to see this effect right so do we really |
|
|
|
846 |
|
00:30:50,640 --> 00:30:53,760 |
|
want to return the cat sat down or do we |
|
|
|
847 |
|
00:30:52,480 --> 00:30:55,200 |
|
want to return something that means the |
|
|
|
848 |
|
00:30:53,760 --> 00:30:57,559 |
|
cat left the |
|
|
|
849 |
|
00:30:55,200 --> 00:30:59,200 |
|
area the question then is like if it's |
|
|
|
850 |
|
00:30:57,559 --> 00:31:03,120 |
|
not probability that makes an output |
|
|
|
851 |
|
00:30:59,200 --> 00:31:04,679 |
|
good what is it so we have this one |
|
|
|
852 |
|
00:31:03,120 --> 00:31:06,039 |
|
output that's really high probability |
|
|
|
853 |
|
00:31:04,679 --> 00:31:09,000 |
|
but it's very different from everything |
|
|
|
854 |
|
00:31:06,039 --> 00:31:10,720 |
|
else in our set and then we have a |
|
|
|
855 |
|
00:31:09,000 --> 00:31:13,200 |
|
couple of outputs that are all pretty |
|
|
|
856 |
|
00:31:10,720 --> 00:31:15,080 |
|
high probability and similar to a bunch |
|
|
|
857 |
|
00:31:13,200 --> 00:31:17,840 |
|
of other relatively high probability |
|
|
|
858 |
|
00:31:15,080 --> 00:31:19,720 |
|
things so maybe it's sort of less risky |
|
|
|
859 |
|
00:31:17,840 --> 00:31:21,399 |
|
to return one of these right a thing
|
|
|
860 |
|
00:31:19,720 --> 00:31:23,200 |
|
that's higher probability but different |
|
|
|
861 |
|
00:31:21,399 --> 00:31:24,600 |
|
than everything else could be different |
|
|
|
862 |
|
00:31:23,200 --> 00:31:26,840 |
|
because it's way better or it could be |
|
|
|
863 |
|
00:31:24,600 --> 00:31:29,000 |
|
different because it's way worse um |
|
|
|
864 |
|
00:31:26,840 --> 00:31:31,120 |
|
another way to think about this is you |
|
|
|
865 |
|
00:31:29,000 --> 00:31:32,600 |
|
know maybe if you and your friends were |
|
|
|
866 |
|
00:31:31,120 --> 00:31:34,200 |
|
cheating on a test which you shouldn't |
|
|
|
867 |
|
00:31:32,600 --> 00:31:35,480 |
|
do but if you were going to do it and |
|
|
|
868 |
|
00:31:34,200 --> 00:31:37,519 |
|
all of your friends sent you their |
|
|
|
869 |
|
00:31:35,480 --> 00:31:39,240 |
|
answers um maybe one of your friends has |
|
|
|
870 |
|
00:31:37,519 --> 00:31:40,960 |
|
a slightly higher score in the class |
|
|
|
871 |
|
00:31:39,240 --> 00:31:42,519 |
|
than everyone else but they said the |
|
|
|
872 |
|
00:31:40,960 --> 00:31:44,480 |
|
answer was answer a and everyone else |
|
|
|
873 |
|
00:31:42,519 --> 00:31:45,799 |
|
said the answer was B right you still |
|
|
|
874 |
|
00:31:44,480 --> 00:31:48,480 |
|
might go with the answer that everyone |
|
|
|
875 |
|
00:31:45,799 --> 00:31:50,679 |
|
else said because it
|
|
|
876 |
|
00:31:48,480 --> 00:31:52,679 |
|
sort of feels less risky like maybe |
|
|
|
877 |
|
00:31:50,679 --> 00:31:54,440 |
|
everyone else got that
|
|
|
878 |
|
00:31:52,679 --> 00:31:55,880 |
|
answer and so your one friend could be |
|
|
|
879 |
|
00:31:54,440 --> 00:31:56,919 |
|
right when everyone else is wrong or |
|
|
|
880 |
|
00:31:55,880 --> 00:31:59,679 |
|
they could have made a mistake that no |
|
|
|
881 |
|
00:31:56,919 --> 00:32:01,240 |
|
one else is making so this is sort of
|
|
|
882 |
|
00:31:59,679 --> 00:32:03,519 |
|
the same concept right we want an output |
|
|
|
883 |
|
00:32:01,240 --> 00:32:06,320 |
|
that's relatively high probability but |
|
|
|
884 |
|
00:32:03,519 --> 00:32:09,399 |
|
also relatively low |
|
|
|
885 |
|
00:32:06,320 --> 00:32:11,320 |
|
risk and so here maybe if we were using |
|
|
|
886 |
|
00:32:09,399 --> 00:32:13,679 |
|
this criterion we'd return the cat ran
|
|
|
887 |
|
00:32:11,320 --> 00:32:14,720 |
|
away as our sort of
|
|
|
888 |
|
00:32:13,679 --> 00:32:16,720 |
|
single |
|
|
|
889 |
|
00:32:14,720 --> 00:32:19,440 |
|
output so how do you find something |
|
|
|
890 |
|
00:32:16,720 --> 00:32:21,000 |
|
that's high probability and low risk |
|
|
|
891 |
|
00:32:19,440 --> 00:32:22,480 |
|
there's sort of two questions here right |
|
|
|
892 |
|
00:32:21,000 --> 00:32:24,399 |
|
we have to figure out how to estimate |
|
|
|
893 |
|
00:32:22,480 --> 00:32:26,120 |
|
probability and if we're looking at a |
|
|
|
894 |
|
00:32:24,399 --> 00:32:28,519 |
|
set of outputs like the six we saw |
|
|
|
895 |
|
00:32:26,120 --> 00:32:29,880 |
|
before maybe we can just do this by |
|
|
|
896 |
|
00:32:28,519 --> 00:32:31,720 |
|
counting right we could sample |
|
|
|
897 |
|
00:32:29,880 --> 00:32:34,000 |
|
everything from the model and get exact |
|
|
|
898 |
|
00:32:31,720 --> 00:32:35,200 |
|
probability or we could take a sample |
|
|
|
899 |
|
00:32:34,000 --> 00:32:38,080 |
|
from the model and just look at |
|
|
|
900 |
|
00:32:35,200 --> 00:32:40,200 |
|
probabilities in that set and from there |
|
|
|
901 |
|
00:32:38,080 --> 00:32:41,840 |
|
from that sample um sort of one |
|
|
|
902 |
|
00:32:40,200 --> 00:32:43,559 |
|
reasonable thing to do is just count |
|
|
|
903 |
|
00:32:41,840 --> 00:32:45,320 |
|
frequency right if something's in our |
|
|
|
904 |
|
00:32:43,559 --> 00:32:47,919 |
|
sample twice as often we just say it's |
|
|
|
905 |
|
00:32:45,320 --> 00:32:49,799 |
|
twice as frequent or it's twice as |
|
|
|
906 |
|
00:32:47,919 --> 00:32:52,880 |
|
probable um this is something called |
|
|
|
907 |
|
00:32:49,799 --> 00:32:54,440 |
|
Monte Carlo sampling if you do this um
|
|
|
908 |
|
00:32:52,880 --> 00:32:56,039 |
|
enough times like if you sample an |
|
|
|
909 |
|
00:32:54,440 --> 00:32:58,279 |
|
infinite set this would give you
|
|
|
910 |
|
00:32:56,039 --> 00:33:00,880 |
|
exactly the model distribution um
|
|
|
911 |
|
00:32:58,279 --> 00:33:02,840 |
|
but for the sort of reasonable size sets |
|
|
|
912 |
|
00:33:00,880 --> 00:33:04,200 |
|
we're working with maybe like a 100 |
|
|
|
913 |
|
00:33:02,840 --> 00:33:06,320 |
|
samples this gives us a sort of |
|
|
|
914 |
|
00:33:04,200 --> 00:33:09,440 |
|
reasonable approximation for
|
|
|
915 |
|
00:33:06,320 --> 00:33:10,840 |
|
what we need to do here at least so |
|
|
|
916 |
|
00:33:09,440 --> 00:33:12,000 |
|
we're just going to take a sample to get |
|
|
|
917 |
|
00:33:10,840 --> 00:33:13,440 |
|
probability and we're just going to |
|
|
|
918 |
|
00:33:12,000 --> 00:33:15,519 |
|
count things in that sample to see how |
|
|
|
919 |
|
00:33:13,440 --> 00:33:17,320 |
|
likely things are that doesn't seem too |
|
|
|
920 |
|
00:33:15,519 --> 00:33:20,080 |
|
bad
|
|
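A quick sketch of that counting idea, with a random choice over a fixed pool standing in for the model's sampler (both the pool and the sampler below are toy assumptions):

```python
import random
from collections import Counter

random.seed(0)

def estimate_probs(sample_from_model, n=100):
    """Monte Carlo estimate: draw n outputs and treat each output's
    relative frequency in the sample as its probability."""
    counts = Counter(sample_from_model() for _ in range(n))
    return {out: c / n for out, c in counts.items()}

# Toy stand-in "model" that picks uniformly from a small pool.
pool = ["it ran away", "it ran away", "the cat sat down", "it grew wings"]
print(estimate_probs(lambda: random.choice(pool)))
```

how do we estimate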
|
921 |
|
00:33:17,320 --> 00:33:21,679 |
|
risk the idea here is that we have a |
|
|
|
922 |
|
00:33:20,080 --> 00:33:24,080 |
|
bunch of other things in this set of |
|
|
|
923 |
|
00:33:21,679 --> 00:33:26,080 |
|
outputs and we can treat those as sort |
|
|
|
924 |
|
00:33:24,080 --> 00:33:27,880 |
|
of like pseudo references right we can |
|
|
|
925 |
|
00:33:26,080 --> 00:33:29,840 |
|
evaluate agreement between the thing |
|
|
|
926 |
|
00:33:27,880 --> 00:33:31,519 |
|
we're looking at and each of those other |
|
|
|
927 |
|
00:33:29,840 --> 00:33:33,480 |
|
references and this is sort of the same |
|
|
|
928 |
|
00:33:31,519 --> 00:33:35,519 |
|
idea of calculating similarity in |
|
|
|
929 |
|
00:33:33,480 --> 00:33:37,159 |
|
diverse beam search we're going to use |
|
|
|
930 |
|
00:33:35,519 --> 00:33:39,639 |
|
some kind of metric to compare how |
|
|
|
931 |
|
00:33:37,159 --> 00:33:41,279 |
|
similar these things are um this metric |
|
|
|
932 |
|
00:33:39,639 --> 00:33:43,080 |
|
could be anything you use Downstream it |
|
|
|
933 |
|
00:33:41,279 --> 00:33:44,840 |
|
could be like an n-gram overlap metric
|
|
|
934 |
|
00:33:43,080 --> 00:33:48,600 |
|
like ROUGE or BLEU or it could also be
|
|
|
935 |
|
00:33:44,840 --> 00:33:51,120 |
|
something um neural or semantic like um |
|
|
|
936 |
|
00:33:48,600 --> 00:33:54,799 |
|
something like BERTScore or
|
|
|
937 |
|
00:33:51,120 --> 00:33:56,600 |
|
BARTScore and so this concept um is a type
|
|
|
938 |
|
00:33:54,799 --> 00:33:57,919 |
|
of decoding called minimum Bayes risk
|
|
|
939 |
|
00:33:56,600 --> 00:33:59,600 |
|
decoding |
|
|
|
940 |
|
00:33:57,919 --> 00:34:01,840 |
|
and what this equation captures is |
|
|
|
941 |
|
00:33:59,600 --> 00:34:03,919 |
|
exactly the intuition that we were um |
|
|
|
942 |
|
00:34:01,840 --> 00:34:06,600 |
|
sort of talking about just a slide ago |
|
|
|
943 |
|
00:34:03,919 --> 00:34:08,159 |
|
where we're going to choose something |
|
|
|
944 |
|
00:34:06,600 --> 00:34:09,919 |
|
that is low risk which means it's |
|
|
|
945 |
|
00:34:08,159 --> 00:34:11,960 |
|
similar to a lot of other things in this |
|
|
|
946 |
|
00:34:09,919 --> 00:34:12,800 |
|
set of outputs we've sampled and we're |
|
|
|
947 |
|
00:34:11,960 --> 00:34:14,800 |
|
going to choose something that's |
|
|
|
948 |
|
00:34:12,800 --> 00:34:17,560 |
|
relatively high probability which means |
|
|
|
949 |
|
00:34:14,800 --> 00:34:19,159 |
|
that sort of when we sum up over this if |
|
|
|
950 |
|
00:34:17,560 --> 00:34:21,399 |
|
something occurs in our set a bunch of |
|
|
|
951 |
|
00:34:19,159 --> 00:34:23,320 |
|
times it's going to have pretty strong |
|
|
|
952 |
|
00:34:21,399 --> 00:34:25,800 |
|
weight in picking which um of these |
|
|
|
953 |
|
00:34:23,320 --> 00:34:27,000 |
|
outputs are similar right if sort of |
|
|
|
954 |
|
00:34:25,800 --> 00:34:28,399 |
|
there's one thing in the set that |
|
|
|
955 |
|
00:34:27,000 --> 00:34:29,919 |
|
appears a bunch of times it's going to |
|
|
|
956 |
|
00:34:28,399 --> 00:34:32,040 |
|
have a strong influence on which thing |
|
|
|
957 |
|
00:34:29,919 --> 00:34:34,119 |
|
we pick and so that kind of captures |
|
|
|
958 |
|
00:34:32,040 --> 00:34:38,520 |
|
high probability in this setting
|
|
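A minimal sketch of this selection rule; unigram overlap is a toy stand-in for ROUGE or BERTScore, and the sample set below is made up:

```python
def unigram_overlap(hyp, ref):
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)

def mbr_select(samples, metric=unigram_overlap):
    """samples: outputs drawn from the model, duplicates included, so
    frequent outputs carry more weight just by appearing more often.
    Return the sample with the highest total similarity to the rest
    (the pseudo-references)."""
    return max(samples, key=lambda y: sum(metric(y, ref) for ref in samples))

samples = ["the cat sat down",
           "it ran away", "it ran away",
           "it sprinted off", "it got out of there"]
print(mbr_select(samples))  # picks "it ran away", backed by the similar outputs
```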
|
959 |
|
00:34:34,119 --> 00:34:41,119 |
|
so to see how this works we can
|
|
|
960 |
|
00:34:38,520 --> 00:34:44,639 |
|
look at an example um in |
|
|
|
961 |
|
00:34:41,119 --> 00:34:47,399 |
|
summarization so we choose some Metric |
|
|
|
962 |
|
00:34:44,639 --> 00:34:49,639 |
|
maybe we choose um ROUGE which is an
|
|
|
963 |
|
00:34:47,399 --> 00:34:51,399 |
|
n-gram overlap metric for summarization
|
|
|
964 |
|
00:34:49,639 --> 00:34:52,879 |
|
and we say we're going to sample 100 |
|
|
|
965 |
|
00:34:51,399 --> 00:34:55,960 |
|
things and we're going to use this |
|
|
|
966 |
|
00:34:52,879 --> 00:35:00,359 |
|
equation to choose the one that has the |
|
|
|
967 |
|
00:34:55,960 --> 00:35:03,960 |
|
sort of lowest risk according to MBR
|
|
|
968 |
|
00:35:00,359 --> 00:35:06,480 |
|
um so if we do that and we look at this |
|
|
|
969 |
|
00:35:03,960 --> 00:35:07,560 |
|
sort of table of results here um you can |
|
|
|
970 |
|
00:35:06,480 --> 00:35:09,680 |
|
see that this |
|
|
|
971 |
|
00:35:07,560 --> 00:35:11,320 |
|
outperforms the other sampling methods |
|
|
|
972 |
|
00:35:09,680 --> 00:35:13,720 |
|
that we've looked at before so greedy |
|
|
|
973 |
|
00:35:11,320 --> 00:35:15,640 |
|
decoding here is just taking the
|
|
|
974 |
|
00:35:13,720 --> 00:35:18,760 |
|
single most likely thing in each step |
|
|
|
975 |
|
00:35:15,640 --> 00:35:21,800 |
|
beam search here is the BS with five or |
|
|
|
976 |
|
00:35:18,760 --> 00:35:24,359 |
|
10 beams and DBS is the diverse beam |
|
|
|
977 |
|
00:35:21,800 --> 00:35:27,040 |
|
search we were talking about um if we |
|
|
|
978 |
|
00:35:24,359 --> 00:35:29,440 |
|
use minimum Bayes risk and we use ROUGE
|
|
|
979 |
|
00:35:27,040 --> 00:35:31,240 |
|
as the sort of determiner of similarity
|
|
|
980 |
|
00:35:29,440 --> 00:35:32,680 |
|
we do way better across all of our |
|
|
|
981 |
|
00:35:31,240 --> 00:35:33,960 |
|
metrics but we especially do really well
|
|
|
982 |
|
00:35:32,680 --> 00:35:36,680 |
|
at Rouge because that's sort of the |
|
|
|
983 |
|
00:35:33,960 --> 00:35:38,119 |
|
metric that we've been using to evaluate |
|
|
|
984 |
|
00:35:36,680 --> 00:35:40,240 |
|
and then if we swap this out for other |
|
|
|
985 |
|
00:35:38,119 --> 00:35:43,599 |
|
metrics you still see a performance
|
|
|
986 |
|
00:35:40,240 --> 00:35:46,440 |
|
improvement over these um search methods |
|
|
|
987 |
|
00:35:43,599 --> 00:35:48,119 |
|
here um what's the sort of catch here |
|
|
|
988 |
|
00:35:46,440 --> 00:35:49,920 |
|
the catch here is that MBR requires you |
|
|
|
989 |
|
00:35:48,119 --> 00:35:51,599 |
|
to sample a hundred things and so this |
|
|
|
990 |
|
00:35:49,920 --> 00:35:54,760 |
|
is a lot more expensive it's a lot |
|
|
|
991 |
|
00:35:51,599 --> 00:35:54,760 |
|
slower at inference
|
|
|
992 |
|
00:35:54,800 --> 00:35:58,800 |
|
time um yes |
|
|
|
993 |
|
00:36:04,200 --> 00:36:10,040 |
|
yes a great question why does the beam |
|
|
|
994 |
|
00:36:07,000 --> 00:36:14,000 |
|
search with more beams perform worse um |
|
|
|
995 |
|
00:36:10,040 --> 00:36:16,720 |
|
this is a relatively well-known
|
|
|
996 |
|
00:36:14,000 --> 00:36:19,359 |
|
phenomenon called the curse of beam search
|
|
|
997 |
|
00:36:16,720 --> 00:36:21,640 |
|
which is we actually lost your mic so you
|
|
|
998 |
|
00:36:19,359 --> 00:36:24,599 |
|
can just speak okay yeah so this
|
|
|
999 |
|
00:36:21,640 --> 00:36:26,079 |
|
is called the curse of beam search um and
|
|
|
1000 |
|
00:36:24,599 --> 00:36:27,760 |
|
the idea here is that beam search is |
|
|
|
1001 |
|
00:36:26,079 --> 00:36:29,359 |
|
like an approximate search right so if you
|
|
|
1002 |
|
00:36:27,760 --> 00:36:31,200 |
|
add more beams you should be doing |
|
|
|
1003 |
|
00:36:29,359 --> 00:36:33,319 |
|
better and better at finding the maximum |
|
|
|
1004 |
|
00:36:31,200 --> 00:36:34,800 |
|
likelihood thing and generally you are |
|
|
|
1005 |
|
00:36:33,319 --> 00:36:37,160 |
|
you get something that is higher |
|
|
|
1006 |
|
00:36:34,800 --> 00:36:39,160 |
|
probability but as you add more beams |
|
|
|
1007 |
|
00:36:37,160 --> 00:36:42,319 |
|
you also often get something that does |
|
|
|
1008 |
|
00:36:39,160 --> 00:36:42,319 |
|
worse on your Downstream |
|
|
|
1009 |
|
00:36:44,160 --> 00:36:47,560 |
|
metrics back up |
|
|
|
1010 |
|
00:36:54,240 --> 00:36:58,680 |
|
there is that back online |
|
|
|
1011 |
|
00:36:59,119 --> 00:37:06,520 |
|
yeah is that back is that any louder no |
|
|
|
1012 |
|
00:37:03,520 --> 00:37:06,520 |
|
it |
|
|
|
1013 |
|
00:37:07,000 --> 00:37:12,640 |
|
question oh there we go is that better |
|
|
|
1014 |
|
00:37:09,599 --> 00:37:13,760 |
|
great um yeah so why why does this |
|
|
|
1015 |
|
00:37:12,640 --> 00:37:16,040 |
|
happen right why do you get something |
|
|
|
1016 |
|
00:37:13,760 --> 00:37:18,560 |
|
that's higher likelihood but um lower |
|
|
|
1017 |
|
00:37:16,040 --> 00:37:22,040 |
|
performance Downstream um and this is |
|
|
|
1018 |
|
00:37:18,560 --> 00:37:24,000 |
|
like another sort of degeneracy of beam |
|
|
|
1019 |
|
00:37:22,040 --> 00:37:25,680 |
|
search this idea that the thing
|
|
|
1020 |
|
00:37:24,000 --> 00:37:27,440 |
|
that is the absolute highest likelihood |
|
|
|
1021 |
|
00:37:25,680 --> 00:37:28,599 |
|
in your distribution might not actually |
|
|
|
1022 |
|
00:37:27,440 --> 00:37:31,079 |
|
be what you want |
|
|
|
1023 |
|
00:37:28,599 --> 00:37:33,960 |
|
Downstream um this is sort of one of the |
|
|
|
1024 |
|
00:37:31,079 --> 00:37:35,200 |
|
other things that people use to motivate |
|
|
|
1025 |
|
00:37:33,960 --> 00:37:37,599 |
|
why you might want to do something like |
|
|
|
1026 |
|
00:37:35,200 --> 00:37:39,400 |
|
MBR instead um and there's a great paper |
|
|
|
1027 |
|
00:37:37,599 --> 00:37:41,640 |
|
about this problem called the inadequacy |
|
|
|
1028 |
|
00:37:39,400 --> 00:37:43,680 |
|
of the mode because beam search is |
|
|
|
1029 |
|
00:37:41,640 --> 00:37:45,520 |
|
looking for the mode of the |
|
|
|
1030 |
|
00:37:43,680 --> 00:37:47,880 |
|
distribution well one other thing I'd |
|
|
|
1031 |
|
00:37:45,520 --> 00:37:49,680 |
|
like to mention is it also goes together |
|
|
|
1032 |
|
00:37:47,880 --> 00:37:51,119 |
|
with how you train your models because |
|
|
|
1033 |
|
00:37:49,680 --> 00:37:53,760 |
|
most of our models are trained using |
|
|
|
1034 |
|
00:37:51,119 --> 00:37:57,079 |
|
maximum likelihood maximum likelihood |
|
|
|
1035 |
|
00:37:53,760 --> 00:37:59,040 |
|
isn't explicitly maximizing our ability |
|
|
|
1036 |
|
00:37:57,079 --> 00:38:01,079 |
|
to get the best answer it's explicitly |
|
|
|
1037 |
|
00:37:59,040 --> 00:38:05,720 |
|
maximizing our ability to estimate the |
|
|
|
1038 |
|
00:38:01,079 --> 00:38:10,160 |
|
distribution of answers so if I
|
|
|
1039 |
|
00:38:05,720 --> 00:38:13,040 |
|
say um if you said like what is
|
|
|
1040 |
|
00:38:10,160 --> 00:38:15,839 |
|
your favorite hobby or something like |
|
|
|
1041 |
|
00:38:13,040 --> 00:38:17,680 |
|
that uh what is your favorite hobby in a |
|
|
|
1042 |
|
00:38:15,839 --> 00:38:19,280 |
|
dialogue system often it'll answer I |
|
|
|
1043 |
|
00:38:17,680 --> 00:38:22,400 |
|
don't know or something like that |
|
|
|
1044 |
|
00:38:19,280 --> 00:38:24,920 |
|
because it like you know that that's |
|
|
|
1045 |
|
00:38:22,400 --> 00:38:26,599 |
|
more likely than answering any specific |
|
|
|
1046 |
|
00:38:24,920 --> 00:38:29,240 |
|
hobby like it's more likely than |
|
|
|
1047 |
|
00:38:26,599 --> 00:38:32,119 |
|
answering basketball bowling you know |
|
|
|
1048 |
|
00:38:29,240 --> 00:38:35,040 |
|
whatever else because you have many many |
|
|
|
1049 |
|
00:38:32,119 --> 00:38:36,560 |
|
different options and so like especially |
|
|
|
1050 |
|
00:38:35,040 --> 00:38:39,880 |
|
if it's something that's a little bit |
|
|
|
1051 |
|
00:38:36,560 --> 00:38:42,160 |
|
more complicated it will avoid
|
|
|
1052 |
|
00:38:39,880 --> 00:38:44,680 |
|
answering that and in particular it ends |
|
|
|
1053 |
|
00:38:42,160 --> 00:38:47,240 |
|
up answering very short things for |
|
|
|
1054 |
|
00:38:44,680 --> 00:38:49,280 |
|
example um or sometimes it ends up |
|
|
|
1055 |
|
00:38:47,240 --> 00:38:51,160 |
|
repeating itself over and over again or |
|
|
|
1056 |
|
00:38:49,280 --> 00:38:53,240 |
|
or things like that so it also goes |
|
|
|
1057 |
|
00:38:51,160 --> 00:38:57,760 |
|
together with like the training of the |
|
|
|
1058 |
|
00:38:53,240 --> 00:38:59,359 |
|
model yeah and this is um one of the |
|
|
|
1059 |
|
00:38:57,760 --> 00:39:01,079 |
|
this is still a problem in modern |
|
|
|
1060 |
|
00:38:59,359 --> 00:39:02,560 |
|
systems so if you actually look at the |
|
|
|
1061 |
|
00:39:01,079 --> 00:39:03,839 |
|
single like if you could enumerate |
|
|
|
1062 |
|
00:39:02,560 --> 00:39:05,680 |
|
everything and see the single most |
|
|
|
1063 |
|
00:39:03,839 --> 00:39:07,520 |
|
likely sequence it's often the empty |
|
|
|
1064 |
|
00:39:05,680 --> 00:39:10,920 |
|
sequence just not outputting anything at
|
|
|
1065 |
|
00:39:07,520 --> 00:39:12,640 |
|
all um and so if that's your true mode |
|
|
|
1066 |
|
00:39:10,920 --> 00:39:16,119 |
|
of the distribution then doing better at |
|
|
|
1067 |
|
00:39:12,640 --> 00:39:16,119 |
|
mode seeking is not always like |
|
|
|
1068 |
|
00:39:16,599 --> 00:39:19,599 |
|
helpful |
|
|
|
1069 |
|
00:39:25,440 --> 00:39:32,960 |
|
yes could this be influenced by the |
|
|
|
1070 |
|
00:39:28,200 --> 00:39:32,960 |
|
confidence problem like um how |
|
|
|
1071 |
|
00:39:37,560 --> 00:39:41,079 |
|
so seems |
|
|
|
1072 |
|
00:39:49,760 --> 00:39:53,599 |
|
bees |
|
|
|
1073 |
|
00:39:51,010 --> 00:39:57,280 |
|
[Music] |
|
|
|
1074 |
|
00:39:53,599 --> 00:39:59,760 |
|
might right I think I see
|
|
|
1075 |
|
00:39:57,280 --> 00:40:02,000 |
|
what you're saying which is that like |
|
|
|
1076 |
|
00:39:59,760 --> 00:40:04,200 |
|
the confidence gives you the
|
|
|
1077 |
|
00:40:02,000 --> 00:40:06,680 |
|
confidence of like a single exact |
|
|
|
1078 |
|
00:40:04,200 --> 00:40:11,000 |
|
sequence right not the like actual sort |
|
|
|
1079 |
|
00:40:06,680 --> 00:40:13,200 |
|
of semantic space and so yeah if you
|
|
|
1080 |
|
00:40:11,000 --> 00:40:14,920 |
|
looked at
|
|
|
1081 |
|
00:40:13,200 --> 00:40:17,000 |
|
just the probability scores you get the |
|
|
|
1082 |
|
00:40:14,920 --> 00:40:18,520 |
|
probability of an exact string when what |
|
|
|
1083 |
|
00:40:17,000 --> 00:40:20,119 |
|
you really actually care about with |
|
|
|
1084 |
|
00:40:18,520 --> 00:40:22,319 |
|
confidence is the probability of sort of |
|
|
|
1085 |
|
00:40:20,119 --> 00:40:23,800 |
|
like things that mean the same thing |
|
|
|
1086 |
|
00:40:22,319 --> 00:40:25,359 |
|
yeah this is um part of why like |
|
|
|
1087 |
|
00:40:23,800 --> 00:40:28,359 |
|
calibration is really hard for long |
|
|
|
1088 |
|
00:40:25,359 --> 00:40:28,359 |
|
sequences |
|
|
|
1089 |
|
00:40:30,720 --> 00:40:37,319 |
|
great so we're going to touch sort of
|
|
|
1090 |
|
00:40:34,359 --> 00:40:39,520 |
|
briefly on a couple of other things that |
|
|
|
1091 |
|
00:40:37,319 --> 00:40:40,920 |
|
aren't sort of always explicitly |
|
|
|
1092 |
|
00:40:39,520 --> 00:40:42,480 |
|
described in this framework but that you |
|
|
|
1093 |
|
00:40:40,920 --> 00:40:45,040 |
|
can think of as variants of minimum
|
|
|
1094 |
|
00:40:42,480 --> 00:40:46,960 |
|
Bayes risk um and if you're interested
|
|
|
1095 |
|
00:40:45,040 --> 00:40:49,560 |
|
in this analysis um I think as Graham |
|
|
|
1096 |
|
00:40:46,960 --> 00:40:51,800 |
|
mentioned earlier um Alex Z is a first |
|
|
|
1097 |
|
00:40:49,560 --> 00:40:53,680 |
|
year MLT and I wrote a paper about this |
|
|
|
1098 |
|
00:40:51,800 --> 00:40:57,839 |
|
um which you can check out if you're |
|
|
|
1099 |
|
00:40:53,680 --> 00:41:01,200 |
|
interested so the um two that I really |
|
|
|
1100 |
|
00:40:57,839 --> 00:41:03,800 |
|
want to touch on here are other sort of |
|
|
|
1101 |
|
00:41:01,200 --> 00:41:05,240 |
|
inference time things you can consider |
|
|
|
1102 |
|
00:41:03,800 --> 00:41:07,520 |
|
which might look a little bit different |
|
|
|
1103 |
|
00:41:05,240 --> 00:41:09,480 |
|
at first blush um the first of these is
|
|
|
1104 |
|
00:41:07,520 --> 00:41:11,680 |
|
output ensembling so say you have |
|
|
|
1105 |
|
00:41:09,480 --> 00:41:13,240 |
|
multiple different models and you get |
|
|
|
1106 |
|
00:41:11,680 --> 00:41:15,480 |
|
outputs from all of them and now you |
|
|
|
1107 |
|
00:41:13,240 --> 00:41:19,560 |
|
need to choose a best output among that |
|
|
|
1108 |
|
00:41:15,480 --> 00:41:21,599 |
|
set um one of the sort of common ways to |
|
|
|
1109 |
|
00:41:19,560 --> 00:41:24,480 |
|
do this is to compare like an embedding |
|
|
|
1110 |
|
00:41:21,599 --> 00:41:25,920 |
|
similarity across models like does model |
|
|
|
1111 |
|
00:41:24,480 --> 00:41:27,560 |
|
one think these two things are really |
|
|
|
1112 |
|
00:41:25,920 --> 00:41:28,880 |
|
similar does model two think these two |
|
|
|
1113 |
|
00:41:27,560 --> 00:41:32,599 |
|
things are really similar and try to |
|
|
|
1114 |
|
00:41:28,880 --> 00:41:34,680 |
|
choose something that um has really
|
|
|
1115 |
|
00:41:32,599 --> 00:41:37,319 |
|
high similarity with a lot of other |
|
|
|
1116 |
|
00:41:34,680 --> 00:41:39,200 |
|
outputs um of course now that we've just |
|
|
|
1117 |
|
00:41:37,319 --> 00:41:41,440 |
|
recently been talking about MBR you can |
|
|
|
1118 |
|
00:41:39,200 --> 00:41:44,920 |
|
probably see that this
|
|
|
1119 |
|
00:41:41,440 --> 00:41:46,280 |
|
is um the same general formulation just |
|
|
|
1120 |
|
00:41:44,920 --> 00:41:47,880 |
|
rather than summing over a set of |
|
|
|
1121 |
|
00:41:46,280 --> 00:41:49,520 |
|
outputs from a single model now you're |
|
|
|
1122 |
|
00:41:47,880 --> 00:41:52,160 |
|
looking at outputs over a whole set of |
|
|
|
1123 |
|
00:41:49,520 --> 00:41:54,640 |
|
models um so some types of ensembling |
|
|
|
1124 |
|
00:41:52,160 --> 00:41:57,319 |
|
fall into this category of minimum Bayes
|
|
|
1125 |
|
00:41:54,640 --> 00:42:00,680 |
|
risk methods
|
|
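A rough sketch of that cross-model selection, assuming one output per model and some sentence embedder; the random-vector `embed` below is just a placeholder for a real embedding model:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ensemble_select(outputs, embed):
    """Pick the output with the highest total embedding similarity
    to everything else in the set -- the same MBR shape, just with
    outputs pooled across models."""
    vecs = [embed(o) for o in outputs]
    scores = [sum(cosine(v, w) for w in vecs) for v in vecs]
    return outputs[int(np.argmax(scores))]

_cache = {}
def embed(text):
    """Placeholder embedder: a fixed random vector per string."""
    if text not in _cache:
        _cache[text] = rng.normal(size=8)
    return _cache[text]

outputs = ["the cat left", "the cat ran away", "the cat sat down"]
print(ensemble_select(outputs, embed))
```

another thing in this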
|
1126 |
|
00:41:57,319 --> 00:42:03,280 |
|
category is a um sort of recent decoding |
|
|
|
1127 |
|
00:42:00,680 --> 00:42:06,079 |
|
method called self-consistency and the |
|
|
|
1128 |
|
00:42:03,280 --> 00:42:08,200 |
|
idea here is that you want to do |
|
|
|
1129 |
|
00:42:06,079 --> 00:42:09,359 |
|
something like mathematical reasoning |
|
|
|
1130 |
|
00:42:08,200 --> 00:42:10,599 |
|
and you really care about getting the |
|
|
|
1131 |
|
00:42:09,359 --> 00:42:12,000 |
|
final answer right but you don't |
|
|
|
1132 |
|
00:42:10,599 --> 00:42:15,000 |
|
necessarily care about getting all of |
|
|
|
1133 |
|
00:42:12,000 --> 00:42:18,079 |
|
the reasoning steps in between right
|
|
|
1134 |
|
00:42:15,000 --> 00:42:19,520 |
|
so you prompt the model for an answer um |
|
|
|
1135 |
|
00:42:18,079 --> 00:42:20,800 |
|
using something like Chain of Thought |
|
|
|
1136 |
|
00:42:19,520 --> 00:42:22,680 |
|
right you ask it to sort of talk through |
|
|
|
1137 |
|
00:42:20,800 --> 00:42:26,440 |
|
the steps it's going to do and then give |
|
|
|
1138 |
|
00:42:22,680 --> 00:42:28,599 |
|
you a final answer um you sample many |
|
|
|
1139 |
|
00:42:26,440 --> 00:42:30,400 |
|
outputs using this and then you completely
|
|
|
1140 |
|
00:42:28,599 --> 00:42:32,200 |
|
throw away the chains of thought um and |
|
|
|
1141 |
|
00:42:30,400 --> 00:42:35,359 |
|
you just take the answer from each |
|
|
|
1142 |
|
00:42:32,200 --> 00:42:37,640 |
|
output um you have that set of answers |
|
|
|
1143 |
|
00:42:35,359 --> 00:42:38,960 |
|
maybe you have like 20 30 100 answers |
|
|
|
1144 |
|
00:42:37,640 --> 00:42:40,000 |
|
you just return the one that was most |
|
|
|
1145 |
|
00:42:38,960 --> 00:42:43,720 |
|
frequently |
|
|
|
1146 |
|
00:42:40,000 --> 00:42:46,119 |
|
generated um what this is doing is a |
|
|
|
1147 |
|
00:42:43,720 --> 00:42:48,800 |
|
type of MBR where the metric that you |
|
|
|
1148 |
|
00:42:46,119 --> 00:42:51,160 |
|
actually care about is exact match of |
|
|
|
1149 |
|
00:42:48,800 --> 00:42:51,839 |
|
this answer right ignoring the rest of |
|
|
|
1150 |
|
00:42:51,160 --> 00:42:54,079 |
|
the |
|
|
|
1151 |
|
00:42:51,839 --> 00:42:55,800 |
|
generation um and so here we have sort |
|
|
|
1152 |
|
00:42:54,079 --> 00:42:56,839 |
|
of the same intuition that we want an |
|
|
|
1153 |
|
00:42:55,800 --> 00:42:59,160 |
|
output |
|
|
|
1154 |
|
00:42:56,839 --> 00:43:01,520 |
|
that is high probability right we're |
|
|
|
1155 |
|
00:42:59,160 --> 00:43:03,359 |
|
getting it generated a lot but also low |
|
|
|
1156 |
|
00:43:01,520 --> 00:43:06,079 |
|
risk not a lot of the other outputs in |
|
|
|
1157 |
|
00:43:03,359 --> 00:43:08,440 |
|
in our set disagree with this
|
|
|
1158 |
|
00:43:06,079 --> 00:43:10,359 |
|
answer
|
|
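A minimal sketch of self-consistency, where `generate_cot` and `extract_answer` are placeholders for the model call and the answer parser:

```python
import random
from collections import Counter

random.seed(0)

def self_consistency(generate_cot, extract_answer, n=20):
    """Sample n chain-of-thought generations, keep only the final
    answers, and return the majority answer."""
    answers = [extract_answer(generate_cot()) for _ in range(n)]
    answer, _ = Counter(answers).most_common(1)[0]
    return answer

# Toy stand-ins for the model call and the answer parser.
fake_outputs = ["... so the total is 42\nAnswer: 42",
                "... therefore we get 42\nAnswer: 42",
                "... hmm, I think it's 41\nAnswer: 41"]
print(self_consistency(lambda: random.choice(fake_outputs),
                       lambda text: text.split("Answer:")[-1].strip()))
```

so those are a couple of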
|
1159 |
|
00:43:08,440 --> 00:43:11,920 |
|
different variants of methods where |
|
|
|
1160 |
|
00:43:10,359 --> 00:43:13,880 |
|
we're sort of sampling a wide set of |
|
|
|
1161 |
|
00:43:11,920 --> 00:43:17,359 |
|
sequences and trying to choose the best |
|
|
|
1162 |
|
00:43:13,880 --> 00:43:20,960 |
|
one um MBR is one type of
|
|
|
1163 |
|
00:43:17,359 --> 00:43:22,680 |
|
sort of sequence set reranking method um |
|
|
|
1164 |
|
00:43:20,960 --> 00:43:24,760 |
|
you could do other things to rerank sets |
|
|
|
1165 |
|
00:43:22,680 --> 00:43:27,400 |
|
as well but this is sort of one |
|
|
|
1166 |
|
00:43:24,760 --> 00:43:30,359 |
|
representative class of these
|
|
|
1167 |
|
00:43:27,400 --> 00:43:32,280 |
|
methods before we get
|
|
|
1168 |
|
00:43:30,359 --> 00:43:35,200 |
|
into constrained generation those are sort
|
|
|
1169 |
|
00:43:32,280 --> 00:43:37,000 |
|
of the three broad categories of |
|
|
|
1170 |
|
00:43:35,200 --> 00:43:39,480 |
|
inference methods we'll discuss which is |
|
|
|
1171 |
|
00:43:37,000 --> 00:43:41,680 |
|
sort of sampling from some distribution |
|
|
|
1172 |
|
00:43:39,480 --> 00:43:45,040 |
|
searching over some space of |
|
|
|
1173 |
|
00:43:41,680 --> 00:43:47,400 |
|
distributions and doing some kind of um |
|
|
|
1174 |
|
00:43:45,040 --> 00:43:48,559 |
|
analysis over a set of samples to choose |
|
|
|
1175 |
|
00:43:47,400 --> 00:43:51,359 |
|
which ones to
|
|
|
1176 |
|
00:43:48,559 --> 00:43:52,559 |
|
return um does anyone have any questions |
|
|
|
1177 |
|
00:43:51,359 --> 00:43:55,079 |
|
at this |
|
|
|
1178 |
|
00:43:52,559 --> 00:44:00,680 |
|
point |
|
|
|
1179 |
|
00:43:55,079 --> 00:44:00,680 |
|
yeah that a model |
|
|
|
1180 |
|
00:44:05,800 --> 00:44:12,760 |
|
cannot yeah like why is averaging model |
|
|
|
1181 |
|
00:44:08,359 --> 00:44:16,400 |
|
weights not MBR um I think it's not MBR |
|
|
|
1182 |
|
00:44:12,760 --> 00:44:18,559 |
|
because um the key thing that I
|
|
|
1183 |
|
00:44:16,400 --> 00:44:20,880 |
|
think really makes a method MBR is this |
|
|
|
1184 |
|
00:44:18,559 --> 00:44:22,480 |
|
concept of comparing between multiple um |
|
|
|
1185 |
|
00:44:20,880 --> 00:44:24,880 |
|
sort of pseudo |
|
|
|
1186 |
|
00:44:22,480 --> 00:44:26,839 |
|
references um and there you don't have |
|
|
|
1187 |
|
00:44:24,880 --> 00:44:28,359 |
|
the same like when you average model weights you can
|
|
|
1188 |
|
00:44:26,839 --> 00:44:32,440 |
|
wind up with sort of a single output on |
|
|
|
1189 |
|
00:44:28,359 --> 00:44:34,040 |
|
the end that maybe is like using like |
|
|
|
1190 |
|
00:44:32,440 --> 00:44:35,800 |
|
information from these two model |
|
|
|
1191 |
|
00:44:34,040 --> 00:44:38,240 |
|
distributions that you've sort of smushed
|
|
|
1192 |
|
00:44:35,800 --> 00:44:41,160 |
|
together um but it's not the same |
|
|
|
1193 |
|
00:44:38,240 --> 00:44:44,720 |
|
concept of like comparing against pseudo |
|
|
|
1194 |
|
00:44:41,160 --> 00:44:44,720 |
|
references or ranking in a |
|
|
|
1195 |
|
00:44:48,920 --> 00:44:55,599 |
|
set right so now this
|
|
|
1196 |
|
00:44:52,720 --> 00:44:57,559 |
|
was a wide variety of methods to try to |
|
|
|
1197 |
|
00:44:55,599 --> 00:44:59,040 |
|
find an output that's just sort of good |
|
|
|
1198 |
|
00:44:57,559 --> 00:45:01,440 |
|
right we want an output that is
|
|
|
1199 |
|
00:44:59,040 --> 00:45:03,480 |
|
nice out of our model um but now we'd |
|
|
|
1200 |
|
00:45:01,440 --> 00:45:05,880 |
|
like to maybe impose a few additional
|
|
|
1201 |
|
00:45:03,480 --> 00:45:08,280 |
|
constraints so say I'm asking our model |
|
|
|
1202 |
|
00:45:05,880 --> 00:45:10,720 |
|
for some hobbies I could use to stay
|
|
|
1203 |
|
00:45:08,280 --> 00:45:11,920 |
|
in shape and no matter what I
|
|
|
1204 |
|
00:45:10,720 --> 00:45:14,160 |
|
don't want the model to recommend |
|
|
|
1205 |
|
00:45:11,920 --> 00:45:16,880 |
|
climbing like I just don't want this
|
|
|
1206 |
|
00:45:14,160 --> 00:45:18,400 |
|
as an option I've tried it I'm not a fan |
|
|
|
1207 |
|
00:45:16,880 --> 00:45:21,240 |
|
um how do I get the model to stop |
|
|
|
1208 |
|
00:45:18,400 --> 00:45:22,760 |
|
suggesting climbing to me and if you've |
|
|
|
1209 |
|
00:45:21,240 --> 00:45:24,559 |
|
sort of played around with some of the |
|
|
|
1210 |
|
00:45:22,760 --> 00:45:26,200 |
|
more recent LLMs you'd say maybe this is
|
|
|
1211 |
|
00:45:24,559 --> 00:45:27,480 |
|
easy right you just tell the model the |
|
|
|
1212 |
|
00:45:26,200 --> 00:45:30,160 |
|
instruction that you don't want to talk |
|
|
|
1213 |
|
00:45:27,480 --> 00:45:31,640 |
|
about climbing and having talked to Bard |
|
|
|
1214 |
|
00:45:30,160 --> 00:45:33,640 |
|
recently I can tell you unfortunately |
|
|
|
1215 |
|
00:45:31,640 --> 00:45:34,800 |
|
that it's not that easy so I tell the |
|
|
|
1216 |
|
00:45:33,640 --> 00:45:36,599 |
|
model I don't want to talk about |
|
|
|
1217 |
|
00:45:34,800 --> 00:45:38,000 |
|
climbing it does okay for a little bit |
|
|
|
1218 |
|
00:45:36,599 --> 00:45:40,920 |
|
and then it's like all right but maybe |
|
|
|
1219 |
|
00:45:38,000 --> 00:45:42,359 |
|
you want to try rock climbing um and so
|
|
|
1220 |
|
00:45:40,920 --> 00:45:44,559 |
|
we could continue trying to give instructions
|
|
|
1221 |
|
00:45:42,359 --> 00:45:46,200 |
|
to our model but maybe there's sort of a |
|
|
|
1222 |
|
00:45:44,559 --> 00:45:49,079 |
|
way to impose this constraint on the |
|
|
|
1223 |
|
00:45:46,200 --> 00:45:50,680 |
|
decoding side instead and so I'd say all |
|
|
|
1224 |
|
00:45:49,079 --> 00:45:52,960 |
|
right I'm going to do something dramatic |
|
|
|
1225 |
|
00:45:50,680 --> 00:45:54,440 |
|
right I know I can manipulate the |
|
|
|
1226 |
|
00:45:52,960 --> 00:45:56,200 |
|
probability distribution I'm just going |
|
|
|
1227 |
|
00:45:54,440 --> 00:45:57,920 |
|
to set the probability of climbing to be |
|
|
|
1228 |
|
00:45:56,200 --> 00:46:00,440 |
|
zero I don't want to see this token like |
|
|
|
1229 |
|
00:45:57,920 --> 00:46:02,640 |
|
I'm completely over it um and this
|
|
|
1230 |
|
00:46:00,440 --> 00:46:04,839 |
|
is sort of nice in some sense because |
|
|
|
1231 |
|
00:46:02,640 --> 00:46:06,720 |
|
this is pretty easy to do um remember |
|
|
|
1232 |
|
00:46:04,839 --> 00:46:08,440 |
|
we're doing a softmax over the outputs
|
|
|
1233 |
|
00:46:06,720 --> 00:46:10,599 |
|
to choose this probability distribution |
|
|
|
1234 |
|
00:46:08,440 --> 00:46:12,400 |
|
and so if we add a huge negative number |
|
|
|
1235 |
|
00:46:10,599 --> 00:46:14,160 |
|
to the logit for climbing before we do
|
|
|
1236 |
|
00:46:12,400 --> 00:46:15,520 |
|
this softmax its probability is going to |
|
|
|
1237 |
|
00:46:14,160 --> 00:46:18,640 |
|
be basically zero and we're never going |
|
|
|
1238 |
|
00:46:15,520 --> 00:46:20,240 |
|
to see it as an output
|
|
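In code, the masking trick is just a big negative constant added to the banned logits before the softmax; the vocabulary and logits below are toy values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ban_tokens(logits, banned_ids):
    """Add a huge negative number to the banned logits before the
    softmax, so those tokens get effectively zero probability."""
    masked = logits.copy()
    masked[banned_ids] = -1e9
    return masked

# Toy 4-token vocabulary; pretend id 2 is "climbing".
logits = np.array([1.0, 0.5, 3.0, 0.2])
print(softmax(ban_tokens(logits, [2])))  # p(id 2) is ~0
```

um but this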
|
1239 |
|
00:46:18,640 --> 00:46:22,480 |
|
doesn't seem like a perfect solution |
|
|
|
1240 |
|
00:46:20,240 --> 00:46:24,400 |
|
right because you know what if the model |
|
|
|
1241 |
|
00:46:22,480 --> 00:46:26,160 |
|
recommends bouldering to me do I have to |
|
|
|
1242 |
|
00:46:24,400 --> 00:46:28,599 |
|
write like a sort of a list of every |
|
|
|
1243 |
|
00:46:26,160 --> 00:46:30,599 |
|
possible climbing synonym in the world |
|
|
|
1244 |
|
00:46:28,599 --> 00:46:32,079 |
|
um what if there's sort of an allowable |
|
|
|
1245 |
|
00:46:30,599 --> 00:46:33,920 |
|
way to use this token like I want the |
|
|
|
1246 |
|
00:46:32,079 --> 00:46:35,319 |
|
model to suggest hiking because climbing |
|
|
|
1247 |
|
00:46:33,920 --> 00:46:37,480 |
|
up a mountain to see a good view is |
|
|
|
1248 |
|
00:46:35,319 --> 00:46:38,720 |
|
relaxing but that's a use of the word |
|
|
|
1249 |
|
00:46:37,480 --> 00:46:41,400 |
|
climbing and we just said that we can't |
|
|
|
1250 |
|
00:46:38,720 --> 00:46:43,520 |
|
use the word climbing um or what if we |
|
|
|
1251 |
|
00:46:41,400 --> 00:46:45,480 |
|
sort of generate other related terms |
|
|
|
1252 |
|
00:46:43,520 --> 00:46:47,520 |
|
before we get to the restricted term |
|
|
|
1253 |
|
00:46:45,480 --> 00:46:49,359 |
|
like the model starts suggesting maybe |
|
|
|
1254 |
|
00:46:47,520 --> 00:46:51,480 |
|
you can work out by going to an indoor |
|
|
|
1255 |
|
00:46:49,359 --> 00:46:52,920 |
|
rock blank and then what are we going to |
|
|
|
1256 |
|
00:46:51,480 --> 00:46:54,800 |
|
say there we can't say rock
|
|
|
1257 |
|
00:46:52,920 --> 00:46:57,079 |
|
climbing so maybe the model suggests |
|
|
|
1258 |
|
00:46:54,800 --> 00:46:58,640 |
|
rock collecting is a
|
|
|
1259 |
|
00:46:57,079 --> 00:47:01,400 |
|
hobby to stay in shape and that doesn't |
|
|
|
1260 |
|
00:46:58,640 --> 00:47:03,480 |
|
sound good either um you could continue |
|
|
|
1261 |
|
00:47:01,400 --> 00:47:05,640 |
|
like sort of engineering more and more |
|
|
|
1262 |
|
00:47:03,480 --> 00:47:06,599 |
|
complicated rules here but maybe we |
|
|
|
1263 |
|
00:47:05,640 --> 00:47:08,760 |
|
could do something that's a little |
|
|
|
1264 |
|
00:47:06,599 --> 00:47:10,559 |
|
simpler so what if I just sample a bunch |
|
|
|
1265 |
|
00:47:08,760 --> 00:47:11,920 |
|
of outputs from the model and then I |
|
|
|
1266 |
|
00:47:10,559 --> 00:47:14,359 |
|
check if they're about climbing and I |
|
|
|
1267 |
|
00:47:11,920 --> 00:47:16,280 |
|
get rid of them if they are right um |
|
|
|
1268 |
|
00:47:14,359 --> 00:47:18,200 |
|
this has sort of the advantage that it's
|
|
|
1269 |
|
00:47:16,280 --> 00:47:19,599 |
|
pretty easy to check after the fact if |
|
|
|
1270 |
|
00:47:18,200 --> 00:47:22,480 |
|
the sequence has satisfied this |
|
|
|
1271 |
|
00:47:19,599 --> 00:47:24,400 |
|
constraint
|
|
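A sketch of that sample-then-filter loop, with a crude keyword check standing in for a real constraint checker (the pool and the banned words are made-up examples):

```python
import random

random.seed(0)

def rejection_sample(generate, violates, max_tries=100):
    """Keep drawing outputs until one passes the constraint check."""
    for _ in range(max_tries):
        out = generate()
        if not violates(out):
            return out
    raise RuntimeError("no constraint-satisfying sample found")

pool = ["try rock climbing", "try bouldering", "try swimming", "try running"]
banned = ("climbing", "bouldering")
print(rejection_sample(lambda: random.choice(pool),
                       lambda s: any(word in s for word in banned)))
```

you know we could train some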
|
1272 |
|
00:47:22,480 --> 00:47:26,200 |
|
smaller model to guess if the topic of a |
|
|
|
1273 |
|
00:47:24,400 --> 00:47:27,960 |
|
sentence is about climbing we could check
|
|
|
1274 |
|
00:47:26,200 --> 00:47:30,040 |
|
for keywords we could have a friend |
|
|
|
1275 |
|
00:47:27,960 --> 00:47:31,359 |
|
who's willing to see this content like |
|
|
|
1276 |
|
00:47:30,040 --> 00:47:33,040 |
|
filter through it and throw everything |
|
|
|
1277 |
|
00:47:31,359 --> 00:47:36,480 |
|
out that is
|
|
|
1278 |
|
00:47:33,040 --> 00:47:38,280 |
|
about climbing but if this model um |
|
|
|
1279 |
|
00:47:36,480 --> 00:47:40,119 |
|
ascribes really high likelihood to this |
|
|
|
1280 |
|
00:47:38,280 --> 00:47:42,559 |
|
like if this model was trained on you |
|
|
|
1281 |
|
00:47:40,119 --> 00:47:44,760 |
|
know data from CS PhD students this |
|
|
|
1282 |
|
00:47:42,559 --> 00:47:46,240 |
|
could be an extremely high likelihood |
|
|
|
1283 |
|
00:47:44,760 --> 00:47:48,319 |
|
suggestion and so we might need to |
|
|
|
1284 |
|
00:47:46,240 --> 00:47:49,839 |
|
regenerate hundreds or thousands of |
|
|
|
1285 |
|
00:47:48,319 --> 00:47:52,559 |
|
sequences to find something that's not |
|
|
|
1286 |
|
00:47:49,839 --> 00:47:55,240 |
|
about climbing um and that feels a little
|
|
|
1287 |
|
00:47:52,559 --> 00:47:56,920 |
|
bit inefficient right so is there |
|
|
|
1288 |
|
00:47:55,240 --> 00:47:59,040 |
|
something that we can do that's a little |
|
|
|
1289 |
|
00:47:56,920 --> 00:48:01,599 |
|
bit better than that well really we'd |
|
|
|
1290 |
|
00:47:59,040 --> 00:48:03,200 |
|
like to guess at some point during our |
|
|
|
1291 |
|
00:48:01,599 --> 00:48:05,200 |
|
generation if the sequence is going to |
|
|
|
1292 |
|
00:48:03,200 --> 00:48:08,000 |
|
be about climbing and maybe like |
|
|
|
1293 |
|
00:48:05,200 --> 00:48:10,640 |
|
recalibrate or you know we could even |
|
|
|
1294 |
|
00:48:08,000 --> 00:48:12,079 |
|
restart or sort of shape Our Generations |
|
|
|
1295 |
|
00:48:10,640 --> 00:48:14,520 |
|
so that we don't wind up with a sequence |
|
|
|
1296 |
|
00:48:12,079 --> 00:48:16,319 |
|
that's about climbing in the first place |
|
|
|
1297 |
|
00:48:14,520 --> 00:48:19,359 |
|
um one of the methods that we'll discuss |
|
|
|
1298 |
|
00:48:16,319 --> 00:48:20,920 |
|
to do this is a method called FUDGE um
|
|
|
1299 |
|
00:48:19,359 --> 00:48:22,800 |
|
and unfortunately in their paper they |
|
|
|
1300 |
|
00:48:20,920 --> 00:48:24,240 |
|
don't have the same anti-climbing bias I |
|
|
|
1301 |
|
00:48:22,800 --> 00:48:27,000 |
|
do so this example is actually about |
|
|
|
1302 |
|
00:48:24,240 --> 00:48:29,000 |
|
formality instead um the idea here is |
|
|
|
1303 |
|
00:48:27,000 --> 00:48:32,079 |
|
that we want a sequence output of the |
|
|
|
1304 |
|
00:48:29,000 --> 00:48:34,079 |
|
model that sort of satisfies this
|
|
|
1305 |
|
00:48:32,079 --> 00:48:36,079 |
|
constraint of being formal and the way |
|
|
|
1306 |
|
00:48:34,079 --> 00:48:39,960 |
|
we're going to do this is at each step |
|
|
|
1307 |
|
00:48:36,079 --> 00:48:41,640 |
|
of prediction we get the outputs of what |
|
|
|
1308 |
|
00:48:39,960 --> 00:48:44,160 |
|
the model predicts is the next token |
|
|
|
1309 |
|
00:48:41,640 --> 00:48:47,319 |
|
right this sort of distribution here in |
|
|
|
1310 |
|
00:48:44,160 --> 00:48:49,760 |
|
blue and we also have some second |
|
|
|
1311 |
|
00:48:47,319 --> 00:48:52,079 |
|
distribution which says given sort of |
|
|
|
1312 |
|
00:48:49,760 --> 00:48:54,480 |
|
what we have so far How likely is this |
|
|
|
1313 |
|
00:48:52,079 --> 00:48:56,920 |
|
to be a formal sentence at the end right |
|
|
|
1314 |
|
00:48:54,480 --> 00:48:58,880 |
|
does a sentence that starts do you want |
|
|
|
1315 |
|
00:48:56,920 --> 00:49:01,200 |
|
have a high likelihood of being formal |
|
|
|
1316 |
|
00:48:58,880 --> 00:49:04,559 |
|
versus a sentence that starts do you |
|
|
|
1317 |
|
00:49:01,200 --> 00:49:07,200 |
|
prefer and so this sort of guess at what |
|
|
|
1318 |
|
00:49:04,559 --> 00:49:09,520 |
|
will be formal at the end of the um |
|
|
|
1319 |
|
00:49:07,200 --> 00:49:10,960 |
|
generation will put High likelihood on |
|
|
|
1320 |
|
00:49:09,520 --> 00:49:13,599 |
|
things that result in really formal |
|
|
|
1321 |
|
00:49:10,960 --> 00:49:15,880 |
|
sentences like do you prefer or do you |
|
|
|
1322 |
|
00:49:13,599 --> 00:49:17,200 |
|
thus whereas the original model might |
|
|
|
1323 |
|
00:49:15,880 --> 00:49:19,440 |
|
have higher likelihood on things that |
|
|
|
1324 |
|
00:49:17,200 --> 00:49:22,559 |
|
are maybe more commonly said like do you |
|
|
|
1325 |
|
00:49:19,440 --> 00:49:24,319 |
|
want um so we combine these two |
|
|
|
1326 |
|
00:49:22,559 --> 00:49:26,280 |
|
distributions you can just multiply them |
|
|
|
1327 |
|
00:49:24,319 --> 00:49:29,079 |
|
together and then we sample from this |
|
|
|
1328 |
|
00:49:26,280 --> 00:49:30,520 |
|
modified distribution which now has some |
|
|
|
1329 |
|
00:49:29,079 --> 00:49:32,359 |
|
sort of high weight on things that the |
|
|
|
1330 |
|
00:49:30,520 --> 00:49:33,559 |
|
model thinks are likely but also takes |
|
|
|
1331 |
|
00:49:32,359 --> 00:49:35,960 |
|
into account the likelihood of |
|
|
|
1332 |
|
00:49:33,559 --> 00:49:38,240 |
|
satisfying a constraint um this is |
|
|
|
1333 |
|
00:49:35,960 --> 00:49:40,640 |
|
another sort of method of modifying our
|
|
|
1334 |
|
00:49:38,240 --> 00:49:42,520 |
|
sampling distribution um with some |
|
|
|
1335 |
|
00:49:40,640 --> 00:49:44,520 |
|
external information here and so there's |
|
|
|
1336 |
|
00:49:42,520 --> 00:49:47,440 |
|
results in sequences that wind up being
|
|
|
1337 |
|
00:49:44,520 --> 00:49:48,799 |
|
sort of more likely to be formal without |
|
|
|
1338 |
|
00:49:47,440 --> 00:49:50,280 |
|
having to sample a whole bunch of |
|
|
|
1339 |
|
00:49:48,799 --> 00:49:52,880 |
|
sentences and reject the ones that we |
|
|
|
1340 |
|
00:49:50,280 --> 00:49:54,720 |
|
think don't satisfy this constraint
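
A rough sketch of this per-step combination, with `lm_next_probs` and `p_formal` as hypothetical stand-ins for the model and the formality guesser (the actual FUDGE implementation works in log space and only scores the top-k candidates):

```python
import numpy as np

# A rough sketch of the per-step combination, with hypothetical stand-ins:
# `lm_next_probs(prefix)` is the model's next-token distribution (the blue
# one) and `p_formal(prefix)` guesses how likely the finished sentence is to
# be formal. FUDGE itself works in log space and only scores top-k tokens.
def combined_step(prefix, vocab_size):
    p_lm = lm_next_probs(prefix)                                 # model
    p_c = np.array([p_formal(prefix + [t]) for t in range(vocab_size)])
    p = p_lm * p_c                                               # multiply
    p /= p.sum()                                                 # renormalize
    return np.random.choice(vocab_size, p=p)                     # sample
```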
|
|
|
1341 |
|
00:49:52,880 --> 00:49:57,119 |
|
so how do we get sort of a guess of what
|
|
|
1342 |
|
00:49:54,720 --> 00:49:58,839 |
|
will be formal at the end of Generation |
|
|
|
1343 |
|
00:49:57,119 --> 00:50:01,319 |
|
um this is where the name FUDGE comes
|
|
|
1344 |
|
00:49:58,839 --> 00:50:03,319 |
|
from the FUD stands for future
|
|
|
1345 |
|
00:50:01,319 --> 00:50:06,640 |
|
discriminator and so what they do is |
|
|
|
1346 |
|
00:50:03,319 --> 00:50:08,920 |
|
they train a model on prefixes to guess |
|
|
|
1347 |
|
00:50:06,640 --> 00:50:10,400 |
|
whether that sequence will be formal um |
|
|
|
1348 |
|
00:50:08,920 --> 00:50:12,040 |
|
you can do this if you have a bunch of |
|
|
|
1349 |
|
00:50:10,400 --> 00:50:15,319 |
|
data that's sort of sorted into formal |
|
|
|
1350 |
|
00:50:12,040 --> 00:50:17,720 |
|
and not formal right every um sort of |
|
|
|
1351 |
|
00:50:15,319 --> 00:50:20,119 |
|
prefix of a sentence in the formal |
|
|
|
1352 |
|
00:50:17,720 --> 00:50:21,480 |
|
category is a training example right you |
|
|
|
1353 |
|
00:50:20,119 --> 00:50:23,720 |
|
know a sentence that starts do you |
|
|
|
1354 |
|
00:50:21,480 --> 00:50:27,599 |
|
prefer you can chop off each token to
|
|
|
1355 |
|
00:50:23,720 --> 00:50:29,920 |
|
get sort of a um set of prefixes
|
|
|
1356 |
|
00:50:27,599 --> 00:50:31,160 |
|
of sequences that have the label formal
|
|
|
1357 |
|
00:50:29,920 --> 00:50:33,559 |
|
and you can do the same thing to your |
|
|
|
1358 |
|
00:50:31,160 --> 00:50:34,920 |
|
informal set and train a discriminator |
|
|
|
1359 |
|
00:50:33,559 --> 00:50:36,559 |
|
to choose between them to say like |
|
|
|
1360 |
|
00:50:34,920 --> 00:50:38,400 |
|
what's the probability that the sentence
|
|
|
1361 |
|
00:50:36,559 --> 00:50:41,160 |
|
will belong to the formal set when we |
|
|
|
1362 |
|
00:50:38,400 --> 00:50:43,319 |
|
finish and so this idea of sort of |
|
|
|
1363 |
|
00:50:41,160 --> 00:50:44,359 |
|
trying to guess at a given decoding step |
|
|
|
1364 |
|
00:50:43,319 --> 00:50:49,480 |
|
if we're going to wind up with our |
|
|
|
1365 |
|
00:50:44,359 --> 00:50:50,799 |
|
constraints satisfied at the end um is a |
|
|
|
1366 |
|
00:50:49,480 --> 00:50:53,000 |
|
sort of key way to do constrained
|
|
|
1367 |
|
00:50:50,799 --> 00:50:56,000 |
|
decoding um and one that we'll return to |
|
|
|
1368 |
|
00:50:53,000 --> 00:50:58,280 |
|
in just a couple slides here |
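
A sketch of how the future discriminator's training data could be assembled, assuming `formal` and `informal` are lists of tokenized sentences:

```python
# A sketch of how the future discriminator's training set could be built,
# assuming `formal` and `informal` are lists of tokenized sentences. Every
# prefix of every sentence becomes one labeled example, so the classifier
# learns to guess the final label from a partial sequence.
def prefix_dataset(formal, informal):
    examples = []
    for label, corpus in [(1, formal), (0, informal)]:
        for sentence in corpus:
            for i in range(1, len(sentence) + 1):
                examples.append((sentence[:i], label))  # chop one token at a time
    return examples
```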
|
|
|
1369 |
|
00:50:56,000 --> 00:51:00,440 |
|
I want to touch on something
|
|
|
1370 |
|
00:50:58,280 --> 00:51:03,079 |
|
slightly different which is that maybe |
|
|
|
1371 |
|
00:51:00,440 --> 00:51:04,599 |
|
one of the constraints we care about is |
|
|
|
1372 |
|
00:51:03,079 --> 00:51:07,319 |
|
something a little more nebulous like we |
|
|
|
1373 |
|
00:51:04,599 --> 00:51:09,160 |
|
want to match human preference um the |
|
|
|
1374 |
|
00:51:07,319 --> 00:51:12,079 |
|
way that we usually accomplish this |
|
|
|
1375 |
|
00:51:09,160 --> 00:51:14,920 |
|
constraint is a little bit different |
|
|
|
1376 |
|
00:51:12,079 --> 00:51:16,040 |
|
right um this we'd usually do through
|
|
|
1377 |
|
00:51:14,920 --> 00:51:18,839 |
|
like reinforcement learning from
|
|
|
1378 |
|
00:51:16,040 --> 00:51:21,559 |
|
human feedback um and so we take sort of |
|
|
|
1379 |
|
00:51:18,839 --> 00:51:24,960 |
|
our original model distribution and we |
|
|
|
1380 |
|
00:51:21,559 --> 00:51:27,960 |
|
take a sort of really like tight like |
|
|
|
1381 |
|
00:51:24,960 --> 00:51:30,200 |
|
distribution of evidence that says like
|
|
|
1382 |
|
00:51:27,960 --> 00:51:31,680 |
|
um this model says that this sequence is |
|
|
|
1383 |
|
00:51:30,200 --> 00:51:33,960 |
|
really high reward this sequence is |
|
|
|
1384 |
|
00:51:31,680 --> 00:51:35,640 |
|
really low reward and we try to sort of |
|
|
|
1385 |
|
00:51:33,960 --> 00:51:38,200 |
|
combine them somehow through training so |
|
|
|
1386 |
|
00:51:35,640 --> 00:51:41,240 |
|
we get a new model that is um quote |
|
|
|
1387 |
|
00:51:38,200 --> 00:51:43,240 |
|
unquote aligned in that it has like a
|
|
|
1388 |
|
00:51:41,240 --> 00:51:45,280 |
|
higher likelihood of giving us things |
|
|
|
1389 |
|
00:51:43,240 --> 00:51:48,640 |
|
that have really high reward according |
|
|
|
1390 |
|
00:51:45,280 --> 00:51:51,319 |
|
to our reward distribution um you can |
|
|
|
1391 |
|
00:51:48,640 --> 00:51:53,599 |
|
view this though as a type of Bayesian
|
|
|
1392 |
|
00:51:51,319 --> 00:51:55,119 |
|
inference and so what this means is the |
|
|
|
1393 |
|
00:51:53,599 --> 00:51:57,440 |
|
distribution that we really want to get |
|
|
|
1394 |
|
00:51:55,119 --> 00:51:59,880 |
|
at the end is a distribution that |
|
|
|
1395 |
|
00:51:57,440 --> 00:52:03,160 |
|
combines our original model's
|
|
|
1396 |
|
00:51:59,880 --> 00:52:05,680 |
|
distribution and some idea of like How |
|
|
|
1397 |
|
00:52:03,160 --> 00:52:08,480 |
|
likely we are to satisfy the reward |
|
|
|
1398 |
|
00:52:05,680 --> 00:52:10,720 |
|
right um this we do through |
|
|
|
1399 |
|
00:52:08,480 --> 00:52:12,359 |
|
reinforcement learning but if we sort of |
|
|
|
1400 |
|
00:52:10,720 --> 00:52:14,480 |
|
know what these two distributions look |
|
|
|
1401 |
|
00:52:12,359 --> 00:52:16,119 |
|
like we've we've just been talking about |
|
|
|
1402 |
|
00:52:14,480 --> 00:52:17,680 |
|
a lot of methods that modify the |
|
|
|
1403 |
|
00:52:16,119 --> 00:52:20,119 |
|
original model's distribution with
|
|
|
1404 |
|
00:52:17,680 --> 00:52:21,880 |
|
external information it seems like maybe |
|
|
|
1405 |
|
00:52:20,119 --> 00:52:24,760 |
|
we could just add that external |
|
|
|
1406 |
|
00:52:21,880 --> 00:52:26,200 |
|
information in at decoding time to get |
|
|
|
1407 |
|
00:52:24,760 --> 00:52:29,040 |
|
some of the same |
|
|
|
1408 |
|
00:52:26,200 --> 00:52:31,040 |
|
effects um and it turns out you can do |
|
|
|
1409 |
|
00:52:29,040 --> 00:52:32,799 |
|
exactly this so this is a paper from |
|
|
|
1410 |
|
00:52:31,040 --> 00:52:36,680 |
|
last year called reward augmented |
|
|
|
1411 |
|
00:52:32,799 --> 00:52:39,079 |
|
decoding and the idea here is sort of um |
|
|
|
1412 |
|
00:52:36,680 --> 00:52:41,839 |
|
in the same conceptual class as fudge |
|
|
|
1413 |
|
00:52:39,079 --> 00:52:44,079 |
|
but instead of um predicting whether |
|
|
|
1414 |
|
00:52:41,839 --> 00:52:46,079 |
|
we're likely to satisfy the constraint |
|
|
|
1415 |
|
00:52:44,079 --> 00:52:47,599 |
|
we're predicting how much reward we |
|
|
|
1416 |
|
00:52:46,079 --> 00:52:49,880 |
|
think that sequence will have at the end |
|
|
|
1417 |
|
00:52:47,599 --> 00:52:52,599 |
|
of generation so we take our original |
|
|
|
1418 |
|
00:52:49,880 --> 00:52:54,839 |
|
model without doing any RLHF and we get
|
|
|
1419 |
|
00:52:52,599 --> 00:52:58,160 |
|
the output we get the predictions for |
|
|
|
1420 |
|
00:52:54,839 --> 00:52:59,400 |
|
the next token and then we use a model |
|
|
|
1421 |
|
00:52:58,160 --> 00:53:02,359 |
|
that's been trained to predict the |
|
|
|
1422 |
|
00:52:59,400 --> 00:53:05,040 |
|
likely reward given some prefix like a |
|
|
|
1423 |
|
00:53:02,359 --> 00:53:06,720 |
|
future discriminator and we calculate |
|
|
|
1424 |
|
00:53:05,040 --> 00:53:08,200 |
|
the likely reward if we pick each of |
|
|
|
1425 |
|
00:53:06,720 --> 00:53:09,799 |
|
those tokens and then we use the |
|
|
|
1426 |
|
00:53:08,200 --> 00:53:12,319 |
|
combination of those two distributions |
|
|
|
1427 |
|
00:53:09,799 --> 00:53:13,720 |
|
to choose what to decode next um and |
|
|
|
1428 |
|
00:53:12,319 --> 00:53:16,000 |
|
this sort of gives you some of the |
|
|
|
1429 |
|
00:53:13,720 --> 00:53:18,440 |
|
benefits of RLHF without actually having
|
|
|
1430 |
|
00:53:16,000 --> 00:53:21,200 |
|
to do reinforcement learning so it's a |
|
|
|
1431 |
|
00:53:18,440 --> 00:53:23,160 |
|
way of treating like aligning to human |
|
|
|
1432 |
|
00:53:21,200 --> 00:53:26,839 |
|
feedback as just another constraint that |
|
|
|
1433 |
|
00:53:23,160 --> 00:53:30,400 |
|
you can impose at decoding time
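
A sketch of one such step under these assumptions: `lm_logits` is the unaligned model's next-token scores and `predicted_reward` is the prefix reward model, both hypothetical stand-ins; `beta` trades fluency against reward:

```python
import numpy as np

# A sketch of one reward-augmented decoding step. `lm_logits(prefix)` and
# `predicted_reward(prefix)` are hypothetical stand-ins for the unaligned
# model and the prefix reward model (a "future discriminator" for reward).
# Like the paper, this only rescores the top-k candidate tokens.
def rad_step(prefix, beta=1.0, k=20):
    logits = lm_logits(prefix)
    top = np.argsort(logits)[-k:]                   # candidate next tokens
    scores = np.array([logits[t] + beta * predicted_reward(prefix + [int(t)])
                       for t in top])
    p = np.exp(scores - scores.max())               # softmax over candidates
    p /= p.sum()
    return int(top[np.random.choice(len(top), p=p)])
```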
|
|
|
1434 |
|
00:53:26,839 --> 00:53:32,319 |
|
so those were sort of a a subset of the |
|
|
|
1435 |
|
00:53:30,400 --> 00:53:34,280 |
|
um constrained decoding strategies that
|
|
|
1436 |
|
00:53:32,319 --> 00:53:35,799 |
|
people use um before we get into the |
|
|
|
1437 |
|
00:53:34,280 --> 00:53:38,400 |
|
human-in-the-loop stuff are there any
|
|
|
1438 |
|
00:53:35,799 --> 00:53:38,400 |
|
questions on |
|
|
|
1439 |
|
00:53:39,040 --> 00:53:43,599 |
|
this yes for |
|
|
|
1440 |
|
00:53:44,960 --> 00:53:48,319 |
|
the do you have |
|
|
|
1441 |
|
00:53:52,799 --> 00:53:57,440 |
|
to right so for the discriminator do you
|
|
|
1442 |
|
00:53:55,640 --> 00:54:00,000 |
|
need to train one for every constraint |
|
|
|
1443 |
|
00:53:57,440 --> 00:54:01,440 |
|
and you do yeah so you need to have some |
|
|
|
1444 |
|
00:54:00,000 --> 00:54:02,920 |
|
set of data that satisfies your |
|
|
|
1445 |
|
00:54:01,440 --> 00:54:05,319 |
|
constraint and some set of data that |
|
|
|
1446 |
|
00:54:02,920 --> 00:54:08,200 |
|
doesn't before you can enforce a new |
|
|
|
1447 |
|
00:54:05,319 --> 00:54:10,200 |
|
constraint um an alternative might be
|
|
|
1448 |
|
00:54:08,200 --> 00:54:12,040 |
|
like in the paper that's what they did |
|
|
|
1449 |
|
00:54:10,200 --> 00:54:16,400 |
|
but an alternative might be just to |
|
|
|
1450 |
|
00:54:12,040 --> 00:54:18,359 |
|
train a discriminator to determine |
|
|
|
1451 |
|
00:54:16,400 --> 00:54:20,880 |
|
whether any constraint was violated so |
|
|
|
1452 |
|
00:54:18,359 --> 00:54:23,359 |
|
if you have 100 constraints you could do |
|
|
|
1453 |
|
00:54:20,880 --> 00:54:25,599 |
|
a binary predictor about whether any
|
|
|
1454 |
|
00:54:23,359 --> 00:54:26,880 |
|
constraint is violated and then |
|
|
|
1455 |
|
00:54:25,599 --> 00:54:29,040 |
|
that would also be
|
|
|
1456 |
|
00:54:26,880 --> 00:54:30,559 |
|
sufficient but if you wanted to add a |
|
|
|
1457 |
|
00:54:29,040 --> 00:54:34,079 |
|
new constraint you'd still have to |
|
|
|
1458 |
|
00:54:30,559 --> 00:54:34,079 |
|
retrain or you have to retrain |
|
|
|
1459 |
|
00:54:35,160 --> 00:54:41,319 |
|
or the the reason that this is sort of |
|
|
|
1460 |
|
00:54:38,119 --> 00:54:43,119 |
|
relatively reasonable to do is that this |
|
|
|
1461 |
|
00:54:41,319 --> 00:54:45,240 |
|
determination of if a constraint is |
|
|
|
1462 |
|
00:54:43,119 --> 00:54:46,960 |
|
likely to be violated is sort of a a |
|
|
|
1463 |
|
00:54:45,240 --> 00:54:48,520 |
|
lighter weight or an easier task to |
|
|
|
1464 |
|
00:54:46,960 --> 00:54:50,520 |
|
learn you can use a relatively small |
|
|
|
1465 |
|
00:54:48,520 --> 00:54:52,079 |
|
model for this versus like your big |
|
|
|
1466 |
|
00:54:50,520 --> 00:54:53,680 |
|
model that just has to be able to
|
|
|
1467 |
|
00:54:52,079 --> 00:54:55,920 |
|
predict the next token for any sequence |
|
|
|
1468 |
|
00:54:53,680 --> 00:54:58,400 |
|
yeah another like
|
|
|
1469 |
|
00:54:55,920 --> 00:55:00,760 |
|
interesting thing is if you think about |
|
|
|
1470 |
|
00:54:58,400 --> 00:55:01,520 |
|
it normally you're predicting with your |
|
|
|
1471 |
|
00:55:00,760 --> 00:55:04,119 |
|
big |
|
|
|
1472 |
|
00:55:01,520 --> 00:55:06,359 |
|
softmax like this over all of your |
|
|
|
1473 |
|
00:55:04,119 --> 00:55:09,680 |
|
vocabulary you can even use the same |
|
|
|
1474 |
|
00:55:06,359 --> 00:55:11,920 |
|
representations here to predict with a |
|
|
|
1475 |
|
00:55:09,680 --> 00:55:13,359 |
|
binary classifier uh whether the |
|
|
|
1476 |
|
00:55:11,920 --> 00:55:14,559 |
|
constraint is violated let's say you |
|
|
|
1477 |
|
00:55:13,359 --> 00:55:17,240 |
|
have 100 |
|
|
|
1478 |
|
00:55:14,559 --> 00:55:19,240 |
|
constraints this is still a vector of |
|
|
|
1479 |
|
00:55:17,240 --> 00:55:21,520 |
|
size 100 compared to your vector of size |
|
|
|
1480 |
|
00:55:19,240 --> 00:55:26,240 |
|
32,000 that you're using for llama right |
|
|
|
1481 |
|
00:55:21,520 --> 00:55:28,280 |
|
so it's not like this adds the training |
|
|
|
1482 |
|
00:55:26,240 --> 00:55:32,799 |
|
would cost some time but it adds very |
|
|
|
1483 |
|
00:55:28,280 --> 00:55:32,799 |
|
little like inference time I guess |
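
As a sketch of that point, with illustrative sizes (all names and dimensions here are assumptions, not any particular model's):

```python
import torch
import torch.nn as nn

# A sketch of the point above: the same final hidden state that feeds the
# 32,000-way LM softmax can also feed a tiny extra head over, say, 100
# constraints. All sizes are illustrative; `hidden` would come from the LM.
hidden_dim, vocab_size, n_constraints = 4096, 32000, 100
lm_head = nn.Linear(hidden_dim, vocab_size)             # the expensive part
constraint_head = nn.Linear(hidden_dim, n_constraints)  # the cheap add-on

hidden = torch.randn(1, hidden_dim)                     # stand-in hidden state
next_token_logits = lm_head(hidden)                     # usual LM prediction
p_violated = torch.sigmoid(constraint_head(hidden))     # per-constraint P(violated)
```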
|
|
|
1484 |
|
00:55:33,440 --> 00:55:38,960 |
|
basically the rock |
|
|
|
1485 |
|
00:55:35,880 --> 00:55:41,400 |
|
sound so when you do the constraint you |
|
|
|
1486 |
|
00:55:38,960 --> 00:55:43,160 |
|
use like a more General |
|
|
|
1487 |
|
00:55:41,400 --> 00:55:44,680 |
|
like do |
|
|
|
1488 |
|
00:55:43,160 --> 00:55:48,160 |
|
notest |
|
|
|
1489 |
|
00:55:44,680 --> 00:55:50,799 |
|
or I guess like in that constraint for |
|
|
|
1490 |
|
00:55:48,160 --> 00:55:50,799 |
|
you can add |
|
|
|
1491 |
|
00:55:52,559 --> 00:55:57,000 |
|
like, is there |
|
|
|
1492 |
|
00:55:57,880 --> 00:56:00,720 |
|
like is there a way to generalize your |
|
|
|
1493 |
|
00:55:59,400 --> 00:56:04,760 |
|
constraint would be like don't talk |
|
|
|
1494 |
|
00:56:00,720 --> 00:56:07,039 |
|
about this whole set of hobbies um you
|
|
|
1495 |
|
00:56:04,760 --> 00:56:08,960 |
|
could do that by training a |
|
|
|
1496 |
|
00:56:07,039 --> 00:56:10,400 |
|
discriminator um by training one |
|
|
|
1497 |
|
00:56:08,960 --> 00:56:12,359 |
|
discriminator that considers all of |
|
|
|
1498 |
|
00:56:10,400 --> 00:56:15,119 |
|
those or by training like a hundred |
|
|
|
1499 |
|
00:56:12,359 --> 00:56:17,559 |
|
different discriminators and then um |
|
|
|
1500 |
|
00:56:15,119 --> 00:56:19,520 |
|
sort of taking like the maximum score |
|
|
|
1501 |
|
00:56:17,559 --> 00:56:21,240 |
|
from any of them right like you want to |
|
|
|
1502 |
|
00:56:19,520 --> 00:56:23,240 |
|
you want to be able to exclude all of |
|
|
|
1503 |
|
00:56:21,240 --> 00:56:27,799 |
|
these things so you consider if any of |
|
|
|
1504 |
|
00:56:23,240 --> 00:56:30,720 |
|
them are violated yeah and for um reward |
|
|
|
1505 |
|
00:56:27,799 --> 00:56:32,839 |
|
augmented decoding how do we sort of
|
|
|
1506 |
|
00:56:30,720 --> 00:56:36,039 |
|
like frame that reward model or is that |
|
|
|
1507 |
|
00:56:32,839 --> 00:56:38,400 |
|
just come from the previously done RLHF
|
|
|
1508 |
|
00:56:36,039 --> 00:56:41,079 |
|
data that the store from there and then |
|
|
|
1509 |
|
00:56:38,400 --> 00:56:44,119 |
|
you sort of like train another
|
|
|
1510 |
|
00:56:41,079 --> 00:56:47,880 |
|
discriminator but this one |
|
|
|
1511 |
|
00:56:44,119 --> 00:56:50,799 |
|
I don't fully understand yeah so how do
|
|
|
1512 |
|
00:56:47,880 --> 00:56:52,920 |
|
we get the the reward model here this is |
|
|
|
1513 |
|
00:56:50,799 --> 00:56:55,280 |
|
we can use the same data that we'd use
|
|
|
1514 |
|
00:56:52,920 --> 00:56:58,000 |
|
for RLHF but we need a slightly different
|
|
|
1515 |
|
00:56:55,280 --> 00:57:01,119 |
|
model so for RLHF we'll train a reward
|
|
|
1516 |
|
00:56:58,000 --> 00:57:02,599 |
|
model over full sequences right and here |
|
|
|
1517 |
|
00:57:01,119 --> 00:57:05,280 |
|
we need to do the same trick where we |
|
|
|
1518 |
|
00:57:02,599 --> 00:57:07,280 |
|
sort of look at just prefixes and try to |
|
|
|
1519 |
|
00:57:05,280 --> 00:57:09,640 |
|
guess the reward Downstream but if we |
|
|
|
1520 |
|
00:57:07,280 --> 00:57:12,440 |
|
already have preference data then
|
|
|
1521 |
|
00:57:09,640 --> 00:57:15,119 |
|
we have some um like we have a data |
|
|
|
1522 |
|
00:57:12,440 --> 00:57:16,720 |
|
source to do this with I think if I'm |
|
|
|
1523 |
|
00:57:15,119 --> 00:57:19,240 |
|
remembering correctly they also had a |
|
|
|
1524 |
|
00:57:16,720 --> 00:57:20,920 |
|
couple more sort of tricks for data |
|
|
|
1525 |
|
00:57:19,240 --> 00:57:22,640 |
|
augmentation to get this to work this is |
|
|
|
1526 |
|
00:57:20,920 --> 00:57:25,720 |
|
sort of like a non-trivial thing to |
|
|
|
1527 |
|
00:57:22,640 --> 00:57:28,039 |
|
figure out um because like reward is |
|
|
|
1528 |
|
00:57:25,720 --> 00:57:30,200 |
|
generally a sequence-level
|
|
|
1529 |
|
00:57:28,039 --> 00:57:32,280 |
|
attribute and also if you don't know |
|
|
|
1530 |
|
00:57:30,200 --> 00:57:34,160 |
|
very much about rhf we're going to cover |
|
|
|
1531 |
|
00:57:32,280 --> 00:57:36,400 |
|
that in a future class so don't worry if
|
|
|
1532 |
|
00:57:34,160 --> 00:57:37,880 |
|
this is a yeah sorry to Jump Ahead a |
|
|
|
1533 |
|
00:57:36,400 --> 00:57:39,880 |
|
little no no |
|
|
|
1534 |
|
00:57:37,880 --> 00:57:43,640 |
|
worries
|
|
|
1535 |
|
00:57:39,880 --> 00:57:47,240 |
|
yeah application this like why would we |
|
|
|
1536 |
|
00:57:43,640 --> 00:57:49,640 |
|
doing this to ensure it could be like |
|
|
|
1537 |
|
00:57:47,240 --> 00:57:52,839 |
|
our llm would want to highlight certain |
|
|
|
1538 |
|
00:57:49,640 --> 00:57:53,799 |
|
qualities like we want our LLMs to be
|
|
|
1539 |
|
00:57:52,839 --> 00:57:55,960 |
|
more |
|
|
|
1540 |
|
00:57:53,799 --> 00:57:57,839 |
|
empathetic is there |
|
|
|
1541 |
|
00:57:55,960 --> 00:57:59,440 |
|
something yeah like what are the real |
|
|
|
1542 |
|
00:57:57,839 --> 00:58:01,280 |
|
world applications like could we use |
|
|
|
1543 |
|
00:57:59,440 --> 00:58:03,680 |
|
this to make LLMs more empathetic or
|
|
|
1544 |
|
00:58:01,280 --> 00:58:06,359 |
|
something yeah any any real attribute |
|
|
|
1545 |
|
00:58:03,680 --> 00:58:08,000 |
|
that you can sort of collect like |
|
|
|
1546 |
|
00:58:06,359 --> 00:58:09,839 |
|
positive and negative data for you could |
|
|
|
1547 |
|
00:58:08,000 --> 00:58:12,200 |
|
do this kind of constraints for I think |
|
|
|
1548 |
|
00:58:09,839 --> 00:58:15,119 |
|
the the ones you see most commonly are |
|
|
|
1549 |
|
00:58:12,200 --> 00:58:16,480 |
|
the human preference and then like |
|
|
|
1550 |
|
00:58:15,119 --> 00:58:18,839 |
|
negative constraints like you don't want |
|
|
|
1551 |
|
00:58:16,480 --> 00:58:20,000 |
|
your model to generate offensive content |
|
|
|
1552 |
|
00:58:18,839 --> 00:58:21,839 |
|
and if you can build like a good |
|
|
|
1553 |
|
00:58:20,000 --> 00:58:23,319 |
|
discriminator for is a sentence going in |
|
|
|
1554 |
|
00:58:21,839 --> 00:58:26,160 |
|
a really offensive Direction you can |
|
|
|
1555 |
|
00:58:23,319 --> 00:58:28,440 |
|
kind of stop it from generating
|
|
|
1556 |
|
00:58:26,160 --> 00:58:30,480 |
|
yeah would it be a good idea if you |
|
|
|
1557 |
|
00:58:28,440 --> 00:58:33,760 |
|
generate a bunch of samples and ask the
|
|
|
1558 |
|
00:58:30,480 --> 00:58:35,480 |
|
model itself whether it violates the |
|
|
|
1559 |
|
00:58:33,760 --> 00:58:37,319 |
|
yeah you could do that for sure could |
|
|
|
1560 |
|
00:58:35,480 --> 00:58:38,920 |
|
you ask like could you generate a bunch |
|
|
|
1561 |
|
00:58:37,319 --> 00:58:42,440 |
|
of samples and ask the model if it |
|
|
|
1562 |
|
00:58:38,920 --> 00:58:44,720 |
|
violates the constraint um this is also |
|
|
|
1563 |
|
00:58:42,440 --> 00:58:47,119 |
|
a type of sort of sample and then rerank |
|
|
|
1564 |
|
00:58:44,720 --> 00:58:52,319 |
|
strategy um but yeah this would be sort |
|
|
|
1565 |
|
00:58:47,119 --> 00:58:54,000 |
|
of a more um clever like less |
|
|
|
1566 |
|
00:58:52,319 --> 00:58:55,559 |
|
heavyweight version of this checking if |
|
|
|
1567 |
|
00:58:54,000 --> 00:58:57,319 |
|
it's about climbing I mean right you'd
|
|
|
1568 |
|
00:58:55,559 --> 00:58:58,520 |
|
like ask the model if it violated the |
|
|
|
1569 |
|
00:58:57,319 --> 00:59:00,160 |
|
constraint and if it's a good enough |
|
|
|
1570 |
|
00:58:58,520 --> 00:59:02,480 |
|
model it could probably do that pretty well
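
A sketch of this sample-then-self-check idea, where `generate` and `ask_yes_no` are hypothetical prompted calls to the same model:

```python
# A sketch of the suggestion above: sample, then let the model judge its own
# outputs. `generate(prompt)` and `ask_yes_no(question)` are hypothetical
# prompted calls to the same model; checking a constraint is usually easier
# than generating under it, so this can work even when prompting alone fails.
def sample_with_self_check(prompt, n=10):
    candidates = [generate(prompt) for _ in range(n)]
    kept = [c for c in candidates
            if ask_yes_no(f"Is the following text about climbing?\n{c}") == "no"]
    return kept[0] if kept else None   # None if every sample violated it
```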
|
|
|
1571 |
|
00:59:00,160 --> 00:59:05,160 |
|
I suppose in that case you don't
|
|
|
1572 |
|
00:59:02,480 --> 00:59:08,160 |
|
have to train anything yeah yeah and
|
|
|
1573 |
|
00:59:05,160 --> 00:59:10,359 |
|
this is sort of a general like the |
|
|
|
1574 |
|
00:59:08,160 --> 00:59:12,240 |
|
generating text that like satisfies a |
|
|
|
1575 |
|
00:59:10,359 --> 00:59:14,079 |
|
constraint is harder than checking if a |
|
|
|
1576 |
|
00:59:12,240 --> 00:59:16,280 |
|
text satisfies a constraint so even if |
|
|
|
1577 |
|
00:59:14,079 --> 00:59:17,880 |
|
the model isn't good about like not |
|
|
|
1578 |
|
00:59:16,280 --> 00:59:19,440 |
|
generating text about climbing when you |
|
|
|
1579 |
|
00:59:17,880 --> 00:59:20,520 |
|
tell it to it might be able to tell if |
|
|
|
1580 |
|
00:59:19,440 --> 00:59:23,640 |
|
text is |
|
|
|
1581 |
|
00:59:20,520 --> 00:59:26,640 |
|
about climbing yeah yeah so how do
|
|
|
1582 |
|
00:59:23,640 --> 00:59:26,640 |
|
you |
|
|
|
1583 |
|
00:59:28,400 --> 00:59:32,359 |
|
have different |
|
|
|
1584 |
|
00:59:32,920 --> 00:59:36,319 |
|
different you have |
|
|
|
1585 |
|
00:59:36,599 --> 00:59:42,119 |
|
to yeah like how do you collect the data |
|
|
|
1586 |
|
00:59:38,839 --> 00:59:45,720 |
|
to train this discriminator um generally |
|
|
|
1587 |
|
00:59:42,119 --> 00:59:47,160 |
|
you're going to see like you'll look to |
|
|
|
1588 |
|
00:59:45,720 --> 00:59:48,720 |
|
see if there are data sets that already |
|
|
|
1589 |
|
00:59:47,160 --> 00:59:50,160 |
|
captured this attribute or you could |
|
|
|
1590 |
|
00:59:48,720 --> 00:59:51,599 |
|
sort of write heuristics to try to
|
|
|
1591 |
|
00:59:50,160 --> 00:59:53,839 |
|
recover it if it's an attribute that not |
|
|
|
1592 |
|
00:59:51,599 --> 00:59:55,480 |
|
a lot of other people care about like |
|
|
|
1593 |
|
00:59:53,839 --> 00:59:58,280 |
|
you could write a heuristic to check
|
|
|
1594 |
|
00:59:55,480 --> 01:00:00,160 |
|
if text is about climbing for instance |
|
|
|
1595 |
|
00:59:58,280 --> 01:00:02,359 |
|
um and then try to recover what noisy |
|
|
|
1596 |
|
01:00:00,160 --> 01:00:04,200 |
|
samples of data that is or is not about |
|
|
|
1597 |
|
01:00:02,359 --> 01:00:05,559 |
|
climbing maybe you could scrape a |
|
|
|
1598 |
|
01:00:04,200 --> 01:00:07,000 |
|
climbing forum and then scrape like a |
|
|
|
1599 |
|
01:00:05,559 --> 01:00:09,079 |
|
hiking forum and use the difference |
|
|
|
1600 |
|
01:00:07,000 --> 01:00:10,319 |
|
between them um but for a lot of tests |
|
|
|
1601 |
|
01:00:09,079 --> 01:00:11,760 |
|
there's actually pretty good data sets |
|
|
|
1602 |
|
01:00:10,319 --> 01:00:14,400 |
|
already out there for this so there's |
|
|
|
1603 |
|
01:00:11,760 --> 01:00:17,480 |
|
like in there's a lot of style transfer |
|
|
|
1604 |
|
01:00:14,400 --> 01:00:20,200 |
|
tasks that are like go from informal to |
|
|
|
1605 |
|
01:00:17,480 --> 01:00:22,240 |
|
formal or go from this to that or like |
|
|
|
1606 |
|
01:00:20,200 --> 01:00:24,039 |
|
make this text in iambic pentameter and
|
|
|
1607 |
|
01:00:22,240 --> 01:00:26,559 |
|
you can find like data from those |
|
|
|
1608 |
|
01:00:24,039 --> 01:00:26,559 |
|
sources |
|
|
|
1609 |
|
01:00:26,799 --> 01:00:31,599 |
|
we never like talked about RLHF yet but I'm
|
|
|
1610 |
|
01:00:29,520 --> 01:00:34,520 |
|
really curious with like the reward
|
|
|
1611 |
|
01:00:31,599 --> 01:00:38,039 |
|
augmented decoding whether it would perform better
|
|
|
1612 |
|
01:00:34,520 --> 01:00:39,079 |
|
than like fine-tuning with RLHF like certainly
|
|
|
1613 |
|
01:00:38,039 --> 01:00:42,720 |
|
more |
|
|
|
1614 |
|
01:00:39,079 --> 01:00:45,039 |
|
efficient but I I was I think this is a |
|
|
|
1615 |
|
01:00:42,720 --> 01:00:49,760 |
|
comparison they make in their paper but |
|
|
|
1616 |
|
01:00:45,039 --> 01:00:52,520 |
|
I don't remember their numbers on it yeah um in
|
|
|
1617 |
|
01:00:49,760 --> 01:00:55,280 |
|
general there's this sort of a like you |
|
|
|
1618 |
|
01:00:52,520 --> 01:00:57,039 |
|
can pay a onetime kind of heavy cost to |
|
|
|
1619 |
|
01:00:55,280 --> 01:00:58,880 |
|
fine-tune or you can pay costs at |
|
|
|
1620 |
|
01:00:57,039 --> 01:01:01,160 |
|
inference time every time to make sort |
|
|
|
1621 |
|
01:00:58,880 --> 01:01:03,880 |
|
of a to make your model better in any of |
|
|
|
1622 |
|
01:01:01,160 --> 01:01:06,160 |
|
these ways and depending on how much |
|
|
|
1623 |
|
01:01:03,880 --> 01:01:09,119 |
|
inference you're planning to do like one or
|
|
|
1624 |
|
01:01:06,160 --> 01:01:09,119 |
|
the other of these could be |
|
|
|
1625 |
|
01:01:11,240 --> 01:01:16,400 |
|
better |
|
|
|
1626 |
|
01:01:12,839 --> 01:01:19,200 |
|
great so now we're going to talk about |
|
|
|
1627 |
|
01:01:16,400 --> 01:01:21,160 |
|
sort of methods for introducing human |
|
|
|
1628 |
|
01:01:19,200 --> 01:01:22,680 |
|
interaction into the decoding process |
|
|
|
1629 |
|
01:01:21,160 --> 01:01:25,240 |
|
and everything we've looked at so far |
|
|
|
1630 |
|
01:01:22,680 --> 01:01:26,920 |
|
has been very sort of black box kind
|
|
|
1631 |
|
01:01:25,240 --> 01:01:28,920 |
|
of hands off right like you give the |
|
|
|
1632 |
|
01:01:26,920 --> 01:01:30,640 |
|
model M some input maybe we do some kind |
|
|
|
1633 |
|
01:01:28,920 --> 01:01:33,640 |
|
of manipulation on the decoding side you |
|
|
|
1634 |
|
01:01:30,640 --> 01:01:37,160 |
|
get one output back right um but in a |
|
|
|
1635 |
|
01:01:33,640 --> 01:01:38,920 |
|
lot of situations where maybe you have |
|
|
|
1636 |
|
01:01:37,160 --> 01:01:40,960 |
|
some high-risk application and you need |
|
|
|
1637 |
|
01:01:38,920 --> 01:01:42,640 |
|
somebody to be consistently monitoring |
|
|
|
1638 |
|
01:01:40,960 --> 01:01:43,799 |
|
and maybe intervening or you're doing |
|
|
|
1639 |
|
01:01:42,640 --> 01:01:46,359 |
|
something where you want to do some kind |
|
|
|
1640 |
|
01:01:43,799 --> 01:01:47,960 |
|
of human AI collaboration um and you |
|
|
|
1641 |
|
01:01:46,359 --> 01:01:49,160 |
|
want to be able to go back and forth or |
|
|
|
1642 |
|
01:01:47,960 --> 01:01:50,960 |
|
you want to have a conversation with the |
|
|
|
1643 |
|
01:01:49,160 --> 01:01:53,480 |
|
model what you're actually doing is sort |
|
|
|
1644 |
|
01:01:50,960 --> 01:01:54,960 |
|
of a series of decodings with human |
|
|
|
1645 |
|
01:01:53,480 --> 01:01:56,319 |
|
intervention in between |
|
|
|
1646 |
|
01:01:54,960 --> 01:01:58,640 |
|
um and I'm going to talk about a couple |
|
|
|
1647 |
|
01:01:56,319 --> 01:02:00,760 |
|
of these strategies briefly I think if |
|
|
|
1648 |
|
01:01:58,640 --> 01:02:02,200 |
|
you've used sort of a modern llm you're |
|
|
|
1649 |
|
01:02:00,760 --> 01:02:04,440 |
|
probably familiar with at least a few of |
|
|
|
1650 |
|
01:02:02,200 --> 01:02:06,720 |
|
them already um we'll sort of put names |
|
|
|
1651 |
|
01:02:04,440 --> 01:02:08,359 |
|
to each of them and the set of examples |
|
|
|
1652 |
|
01:02:06,720 --> 01:02:10,880 |
|
that we're running with here are from a |
|
|
|
1653 |
|
01:02:08,359 --> 01:02:13,880 |
|
paper called wordcraft which is about um |
|
|
|
1654 |
|
01:02:10,880 --> 01:02:15,480 |
|
story generation with llm assistants but |
|
|
|
1655 |
|
01:02:13,880 --> 01:02:17,559 |
|
these can also be applied sort of more |
|
|
|
1656 |
|
01:02:15,480 --> 01:02:20,319 |
|
generally to any kind of task where |
|
|
|
1657 |
|
01:02:17,559 --> 01:02:23,799 |
|
you'd want to go back and forth with a |
|
|
|
1658 |
|
01:02:20,319 --> 01:02:25,319 |
|
model um the sort of easiest or maybe |
|
|
|
1659 |
|
01:02:23,799 --> 01:02:27,599 |
|
simplest place to start here is just |
|
|
|
1660 |
|
01:02:25,319 --> 01:02:29,760 |
|
with interleaving text right you can |
|
|
|
1661 |
|
01:02:27,599 --> 01:02:31,400 |
|
choose when the model starts and stops |
|
|
|
1662 |
|
01:02:29,760 --> 01:02:33,720 |
|
decoding and you can choose when a human |
|
|
|
1663 |
|
01:02:31,400 --> 01:02:34,920 |
|
is writing text in between and you can |
|
|
|
1664 |
|
01:02:33,720 --> 01:02:36,680 |
|
condition your model on sort of a
|
|
|
1665 |
|
01:02:34,920 --> 01:02:39,240 |
|
mixture of human and model generated |
|
|
|
1666 |
|
01:02:36,680 --> 01:02:41,279 |
|
text to choose what to continue next um |
|
|
|
1667 |
|
01:02:39,240 --> 01:02:43,680 |
|
you can also do something like have the |
|
|
|
1668 |
|
01:02:41,279 --> 01:02:45,319 |
|
model generate a set of text edit that |
|
|
|
1669 |
|
01:02:43,680 --> 01:02:47,119 |
|
text in some way maybe the human is |
|
|
|
1670 |
|
01:02:45,319 --> 01:02:48,640 |
|
imposing some really subtle constraint |
|
|
|
1671 |
|
01:02:47,119 --> 01:02:50,559 |
|
like I want it to sound like my writing |
|
|
|
1672 |
|
01:02:48,640 --> 01:02:52,200 |
|
style we don't have a discriminator for |
|
|
|
1673 |
|
01:02:50,559 --> 01:02:54,119 |
|
this but the human can sort of modify |
|
|
|
1674 |
|
01:02:52,200 --> 01:02:55,680 |
|
the text and then continue generating |
|
|
|
1675 |
|
01:02:54,119 --> 01:02:57,160 |
|
from that point and that will influence |
|
|
|
1676 |
|
01:02:55,680 --> 01:03:01,160 |
|
the style of the text that continues |
|
|
|
1677 |
|
01:02:57,160 --> 01:03:03,240 |
|
being generated um the case here is
|
|
|
1678 |
|
01:03:01,160 --> 01:03:04,720 |
|
sort of a you're writing a story |
|
|
|
1679 |
|
01:03:03,240 --> 01:03:06,520 |
|
together and so you're going back and |
|
|
|
1680 |
|
01:03:04,720 --> 01:03:07,799 |
|
forth and editing the text like that but |
|
|
|
1681 |
|
01:03:06,520 --> 01:03:10,319 |
|
you can also think of any kind of |
|
|
|
1682 |
|
01:03:07,799 --> 01:03:11,920 |
|
conversation with a model as the same |
|
|
|
1683 |
|
01:03:10,319 --> 01:03:15,319 |
|
kind of interleaving of text right the |
|
|
|
1684 |
|
01:03:11,920 --> 01:03:17,000 |
|
model gives some um text you provide |
|
|
|
1685 |
|
01:03:15,319 --> 01:03:18,599 |
|
some text you go back and forth on like |
|
|
|
1686 |
|
01:03:17,000 --> 01:03:20,480 |
|
who's providing the text that conditions |
|
|
|
1687 |
|
01:03:18,599 --> 01:03:23,039 |
|
the |
|
|
|
1688 |
|
01:03:20,480 --> 01:03:24,880 |
|
model you also might want to do things |
|
|
|
1689 |
|
01:03:23,039 --> 01:03:26,760 |
|
like more fine-grained replacement
|
|
|
1690 |
|
01:03:24,880 --> 01:03:28,559 |
|
so here the person has highlighted some |
|
|
|
1691 |
|
01:03:26,760 --> 01:03:31,640 |
|
text and said like make this more |
|
|
|
1692 |
|
01:03:28,559 --> 01:03:33,960 |
|
descriptive or shorten this to two words |
|
|
|
1693 |
|
01:03:31,640 --> 01:03:36,079 |
|
or maybe you want some additional |
|
|
|
1694 |
|
01:03:33,960 --> 01:03:38,520 |
|
constraint like can this be happier can |
|
|
|
1695 |
|
01:03:36,079 --> 01:03:40,960 |
|
this be sad like change the ending or |
|
|
|
1696 |
|
01:03:38,520 --> 01:03:43,760 |
|
something um you can accomplish this in |
|
|
|
1697 |
|
01:03:40,960 --> 01:03:45,799 |
|
a variety of ways um here this is done |
|
|
|
1698 |
|
01:03:43,760 --> 01:03:47,680 |
|
through input manipulation so you prompt |
|
|
|
1699 |
|
01:03:45,799 --> 01:03:50,359 |
|
your model differently with different |
|
|
|
1700 |
|
01:03:47,680 --> 01:03:52,200 |
|
constraints you can also do this with an |
|
|
|
1701 |
|
01:03:50,359 --> 01:03:54,440 |
|
actual modeling change like if you want |
|
|
|
1702 |
|
01:03:52,200 --> 01:03:56,119 |
|
some kind of infilling model um |
|
|
|
1703 |
|
01:03:54,440 --> 01:03:57,720 |
|
particularly for things like code this |
|
|
|
1704 |
|
01:03:56,119 --> 01:04:01,119 |
|
can be helpful so you want context from |
|
|
|
1705 |
|
01:03:57,720 --> 01:04:02,440 |
|
left and right sides um or you can do |
|
|
|
1706 |
|
01:04:01,119 --> 01:04:03,799 |
|
this with the decoding changes that we |
|
|
|
1707 |
|
01:04:02,440 --> 01:04:05,960 |
|
talked about in the previous section |
|
|
|
1708 |
|
01:04:03,799 --> 01:04:07,799 |
|
right you could add a discriminator for |
|
|
|
1709 |
|
01:04:05,960 --> 01:04:09,680 |
|
descriptiveness of text or you could do |
|
|
|
1710 |
|
01:04:07,799 --> 01:04:11,680 |
|
some kind of sampling ranking method to |
|
|
|
1711 |
|
01:04:09,680 --> 01:04:13,880 |
|
recover a more descriptive |
|
|
|
1712 |
|
01:04:11,680 --> 01:04:16,640 |
|
output another thing that's very common |
|
|
|
1713 |
|
01:04:13,880 --> 01:04:17,960 |
|
in this space is sampling and reranking |
|
|
|
1714 |
|
01:04:16,640 --> 01:04:20,839 |
|
methods where the human is the one |
|
|
|
1715 |
|
01:04:17,960 --> 01:04:23,640 |
|
choosing what to return right so in |
|
|
|
1716 |
|
01:04:20,839 --> 01:04:25,960 |
|
wordcraft you see a set of choices and |
|
|
|
1717 |
|
01:04:23,640 --> 01:04:28,200 |
|
you can choose text to insert but more |
|
|
|
1718 |
|
01:04:25,960 --> 01:04:30,720 |
|
commonly in something like um ChatGPT
|
|
|
1719 |
|
01:04:28,200 --> 01:04:33,160 |
|
or Bard you see this little option to |
|
|
|
1720 |
|
01:04:30,720 --> 01:04:34,880 |
|
regenerate text right you as the human |
|
|
|
1721 |
|
01:04:33,160 --> 01:04:36,160 |
|
can reject the text and say like no I |
|
|
|
1722 |
|
01:04:34,880 --> 01:04:38,680 |
|
don't like this give me a different |
|
|
|
1723 |
|
01:04:36,160 --> 01:04:41,359 |
|
output and this is also sort of a way of |
|
|
|
1724 |
|
01:04:38,680 --> 01:04:44,079 |
|
controlling decoding um just by doing it |
|
|
|
1725 |
|
01:04:41,359 --> 01:04:46,319 |
|
on on a human rather in an algorithmic |
|
|
|
1726 |
|
01:04:44,079 --> 01:04:49,279 |
|
level of course you don't necessarily |
|
|
|
1727 |
|
01:04:46,319 --> 01:04:51,200 |
|
need a human in here and so um some |
|
|
|
1728 |
|
01:04:49,279 --> 01:04:52,960 |
|
recent work has looked at functionally |
|
|
|
1729 |
|
01:04:51,200 --> 01:04:55,799 |
|
using models to make these decisions |
|
|
|
1730 |
|
01:04:52,960 --> 01:04:57,480 |
|
instead um this is a a a prompting paper |
|
|
|
1731 |
|
01:04:55,799 --> 01:05:00,359 |
|
called tree of thought which was sort of
|
|
|
1732 |
|
01:04:57,480 --> 01:05:02,279 |
|
very popular on Twitter last summer um |
|
|
|
1733 |
|
01:05:00,359 --> 01:05:06,119 |
|
and the idea here is that you're going |
|
|
|
1734 |
|
01:05:02,279 --> 01:05:08,480 |
|
to generate um several smaller sequences |
|
|
|
1735 |
|
01:05:06,119 --> 01:05:11,200 |
|
um like a couple of sentences a |
|
|
|
1736 |
|
01:05:08,480 --> 01:05:13,160 |
|
reasoning step or a thought in the paper |
|
|
|
1737 |
|
01:05:11,200 --> 01:05:14,839 |
|
and you're going to use a model to |
|
|
|
1738 |
|
01:05:13,160 --> 01:05:16,839 |
|
choose which ones to continue and you |
|
|
|
1739 |
|
01:05:14,839 --> 01:05:19,000 |
|
can do different sort of constraints |
|
|
|
1740 |
|
01:05:16,839 --> 01:05:21,960 |
|
here like I want to sort of rank this |
|
|
|
1741 |
|
01:05:19,000 --> 01:05:25,079 |
|
set of three or maybe I want to predict |
|
|
|
1742 |
|
01:05:21,960 --> 01:05:26,839 |
|
if any in this set is wrong like is this |
|
|
|
1743 |
|
01:05:25,079 --> 01:05:29,400 |
|
a good reasoning step and if the model |
|
|
|
1744 |
|
01:05:26,839 --> 01:05:32,240 |
|
says no you no longer continue that but |
|
|
|
1745 |
|
01:05:29,400 --> 01:05:33,559 |
|
the idea here is through prompting |
|
|
|
1746 |
|
01:05:32,240 --> 01:05:35,640 |
|
really achieving something that's sort |
|
|
|
1747 |
|
01:05:33,559 --> 01:05:38,960 |
|
of if you squint at it looks a lot like |
|
|
|
1748 |
|
01:05:35,640 --> 01:05:41,279 |
|
beam search right instead of doing a um |
|
|
|
1749 |
|
01:05:38,960 --> 01:05:43,160 |
|
like token level thing and making a |
|
|
|
1750 |
|
01:05:41,279 --> 01:05:45,079 |
|
decision based on likelihood you're |
|
|
|
1751 |
|
01:05:43,160 --> 01:05:47,880 |
|
generating sort of several sentences out |
|
|
|
1752 |
|
01:05:45,079 --> 01:05:50,599 |
|
a time and making a decision based on |
|
|
|
1753 |
|
01:05:47,880 --> 01:05:52,359 |
|
this model's feedback right this signal
|
|
|
1754 |
|
01:05:50,599 --> 01:05:53,799 |
|
from an external source which here is a |
|
|
|
1755 |
|
01:05:52,359 --> 01:05:55,279 |
|
model but could also be a human if |
|
|
|
1756 |
|
01:05:53,799 --> 01:05:57,920 |
|
you're willing willing to sort of wait |
|
|
|
1757 |
|
01:05:55,279 --> 01:06:01,559 |
|
around for them to make the decision and |
|
|
|
1758 |
|
01:05:57,920 --> 01:06:03,839 |
|
so this is a way of sort of giving |
|
|
|
1759 |
|
01:06:01,559 --> 01:06:06,640 |
|
feedback on a broader level than single |
|
|
|
1760 |
|
01:06:03,839 --> 01:06:09,079 |
|
tokens um to guide a decoding process to |
|
|
|
1761 |
|
01:06:06,640 --> 01:06:09,079 |
|
a final output
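
If you squint, a sketch of that loop really does look like beam search over thoughts; `propose_thoughts` and `score_thought` here are hypothetical prompted model calls:

```python
# A sketch of the tree-of-thought-style loop: propose a few multi-sentence
# "thoughts", score them with a model (or a patient human), and keep only the
# best ones to continue -- beam search over reasoning steps instead of tokens.
# `propose_thoughts` and `score_thought` are hypothetical prompted model calls.
def thought_search(prompt, depth=3, breadth=3, keep=2):
    beams = [prompt]
    for _ in range(depth):
        candidates = [b + t for b in beams
                      for t in propose_thoughts(b, n=breadth)]
        candidates.sort(key=score_thought, reverse=True)
        beams = candidates[:keep]        # drop branches judged to be bad steps
    return beams[0]
```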
|
|
|
1762 |
|
01:06:09,839 --> 01:06:15,079 |
|
so the last couple of things we'll
|
|
|
1763 |
|
01:06:12,760 --> 01:06:17,520 |
|
talk about here are sort of practical |
|
|
|
1764 |
|
01:06:15,079 --> 01:06:19,839 |
|
considerations speed choosing decoding |
|
|
|
1765 |
|
01:06:17,520 --> 01:06:22,599 |
|
methods um but I can take any questions |
|
|
|
1766 |
|
01:06:19,839 --> 01:06:22,599 |
|
before that |
|
|
|
1767 |
|
01:06:23,000 --> 01:06:26,000 |
|
to |
|
|
|
1768 |
|
01:06:26,760 --> 01:06:32,920 |
|
great so how do you make this fast and |
|
|
|
1769 |
|
01:06:30,359 --> 01:06:34,920 |
|
in particular if you've ever tried to |
|
|
|
1770 |
|
01:06:32,920 --> 01:06:36,920 |
|
sort of Benchmark performance of a model |
|
|
|
1771 |
|
01:06:34,920 --> 01:06:38,720 |
|
what you realize pretty quickly is that |
|
|
|
1772 |
|
01:06:36,920 --> 01:06:40,720 |
|
the vast majority of time is actually |
|
|
|
1773 |
|
01:06:38,720 --> 01:06:43,440 |
|
spent in decoding you have to generate |
|
|
|
1774 |
|
01:06:40,720 --> 01:06:45,319 |
|
one token at a time you have to sort of |
|
|
|
1775 |
|
01:06:43,440 --> 01:06:46,920 |
|
pass that back through the model to get |
|
|
|
1776 |
|
01:06:45,319 --> 01:06:51,279 |
|
conditioning to generate the next token |
|
|
|
1777 |
|
01:06:46,920 --> 01:06:53,599 |
|
and so this is um generally fairly slow |
|
|
|
1778 |
|
01:06:51,279 --> 01:06:54,839 |
|
um this is sort of a a major impediment |
|
|
|
1779 |
|
01:06:53,599 --> 01:06:56,359 |
|
if you're trying to do something like a
|
|
|
1780 |
|
01:06:54,839 --> 01:06:57,839 |
|
streaming application where you want or |
|
|
|
1781 |
|
01:06:56,359 --> 01:06:59,559 |
|
a chat application where you don't want |
|
|
|
1782 |
|
01:06:57,839 --> 01:07:03,599 |
|
the person to be waiting around for an |
|
|
|
1783 |
|
01:06:59,559 --> 01:07:06,799 |
|
answer
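
As a sketch of why that's slow, here is the plain autoregressive loop, with `big_model` and `sample` as hypothetical stand-ins:

```python
# A sketch of why plain decoding dominates latency: every new token costs one
# forward pass of the big model, strictly in sequence. `big_model` and
# `sample` are hypothetical; a KV cache makes each step cheaper but the steps
# still cannot be parallelized.
def autoregressive_decode(prompt_ids, n_tokens):
    ids = list(prompt_ids)
    for _ in range(n_tokens):       # n_tokens sequential forward passes
        logits = big_model(ids)     # condition on everything generated so far
        ids.append(sample(logits))  # emit one token, then go around again
    return ids
```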
|
|
|
1784 |
|
01:07:03,599 --> 01:07:09,160 |
|
um one way to do this is a method called speculative decoding and this is a
|
|
|
1785 |
|
01:07:06,799 --> 01:07:12,599 |
|
method where you're using a smaller |
|
|
|
1786 |
|
01:07:09,160 --> 01:07:14,039 |
|
model um not like in contrastive
|
|
|
1787 |
|
01:07:12,599 --> 01:07:16,240 |
|
decoding right where we're using a smaller
|
|
|
1788 |
|
01:07:14,039 --> 01:07:17,559 |
|
model to decide what not to generate but |
|
|
|
1789 |
|
01:07:16,240 --> 01:07:20,119 |
|
here we're using a smaller model to |
|
|
|
1790 |
|
01:07:17,559 --> 01:07:21,880 |
|
decide what to generate um and the
|
|
|
1791 |
|
01:07:20,119 --> 01:07:24,960 |
|
idea here is that most of these tokens |
|
|
|
1792 |
|
01:07:21,880 --> 01:07:26,480 |
|
are maybe not super hard to decide it's
|
|
|
1793 |
|
01:07:24,960 --> 01:07:27,400 |
|
just that occasionally the bigger model |
|
|
|
1794 |
|
01:07:26,480 --> 01:07:30,240 |
|
might want to go in a different |
|
|
|
1795 |
|
01:07:27,400 --> 01:07:32,920 |
|
direction so these green tokens here are |
|
|
|
1796 |
|
01:07:30,240 --> 01:07:35,160 |
|
generated by a smaller model our amateur |
|
|
|
1797 |
|
01:07:32,920 --> 01:07:37,079 |
|
model here and the larger model acts |
|
|
|
1798 |
|
01:07:35,160 --> 01:07:39,960 |
|
largely as a verifier and what it does |
|
|
|
1799 |
|
01:07:37,079 --> 01:07:43,000 |
|
is it checks if the output so far is |
|
|
|
1800 |
|
01:07:39,960 --> 01:07:44,920 |
|
going in a direction that's sort of
|
|
|
1801 |
|
01:07:43,000 --> 01:07:46,400 |
|
in distribution for the big model like |
|
|
|
1802 |
|
01:07:44,920 --> 01:07:49,240 |
|
something that's within the realm of |
|
|
|
1803 |
|
01:07:46,400 --> 01:07:50,720 |
|
what it might sample and there's sort of
|
|
|
1804 |
|
01:07:49,240 --> 01:07:52,400 |
|
an involved discussion in this paper of |
|
|
|
1805 |
|
01:07:50,720 --> 01:07:55,200 |
|
how you determine if something is in |
|
|
|
1806 |
|
01:07:52,400 --> 01:07:58,000 |
|
distribution um so here the smaller |
|
|
|
1807 |
|
01:07:55,200 --> 01:08:00,240 |
|
model generates like five or six tokens
|
|
|
1808 |
|
01:07:58,000 --> 01:08:02,559 |
|
that the larger model says okay this |
|
|
|
1809 |
|
01:08:00,240 --> 01:08:03,680 |
|
looks great until it hits a token that |
|
|
|
1810 |
|
01:08:02,559 --> 01:08:06,079 |
|
the larger model would not have |
|
|
|
1811 |
|
01:08:03,680 --> 01:08:07,920 |
|
generated in that circumstance and then |
|
|
|
1812 |
|
01:08:06,079 --> 01:08:10,279 |
|
the larger model rejects that token and |
|
|
|
1813 |
|
01:08:07,920 --> 01:08:13,000 |
|
generates a different token instead so |
|
|
|
1814 |
|
01:08:10,279 --> 01:08:15,440 |
|
you can see here each of these red and |
|
|
|
1815 |
|
01:08:13,000 --> 01:08:17,600 |
|
then blue sections is where the larger |
|
|
|
1816 |
|
01:08:15,440 --> 01:08:19,400 |
|
model has rejected something and has to |
|
|
|
1817 |
|
01:08:17,600 --> 01:08:21,920 |
|
actually autoregressively decode a
|
|
|
1818 |
|
01:08:19,400 --> 01:08:24,199 |
|
single token by contrast if you were |
|
|
|
1819 |
|
01:08:21,920 --> 01:08:27,359 |
|
doing regular decoding at each |
|
|
|
1820 |
|
01:08:24,199 --> 01:08:28,799 |
|
individual token in this sequence the um |
|
|
|
1821 |
|
01:08:27,359 --> 01:08:31,640 |
|
larger model would have had to make the |
|
|
|
1822 |
|
01:08:28,799 --> 01:08:35,359 |
|
full forward pass to decode a token so
|
|
|
1823 |
|
01:08:31,640 --> 01:08:37,359 |
|
here rather than doing maybe what
|
|
|
1824 |
|
01:08:35,359 --> 01:08:39,239 |
|
probably like 20ish decoding steps to |
|
|
|
1825 |
|
01:08:37,359 --> 01:08:41,560 |
|
get this full sequence the larger model |
|
|
|
1826 |
|
01:08:39,239 --> 01:08:43,040 |
|
has done about eight decoding steps and
|
|
|
1827 |
|
01:08:41,560 --> 01:08:47,560 |
|
everything else is able to sort of |
|
|
|
1828 |
|
01:08:43,040 --> 01:08:49,759 |
|
verify a block of tokens at once
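
A much-simplified, greedy sketch of that loop (the real algorithm uses a probabilistic accept/reject rule that provably preserves the large model's distribution; all helper functions here are hypothetical):

```python
# A much-simplified, greedy sketch of the draft-then-verify loop. The small
# model drafts a block of tokens one at a time (cheap); one forward pass of
# the big model then yields, in parallel, the token it *would* have picked
# after each draft prefix. We keep the draft up to the first disagreement.
# `draft_next` and `big_next_for_prefixes` are hypothetical stand-ins, and
# the real algorithm replaces exact match with a probabilistic accept rule.
def speculative_step(ids, block=5):
    draft = []
    for _ in range(block):                            # small model drafts
        draft.append(draft_next(ids + draft))
    big_choices = big_next_for_prefixes(ids, draft)   # one big forward pass
    for i, tok in enumerate(draft):
        if big_choices[i] != tok:                     # first disagreement:
            return ids + draft[:i] + [big_choices[i]]   # big model overrides
    return ids + draft                                # whole block accepted
```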
|
|
|
1829 |
|
01:08:47,560 --> 01:08:51,400 |
|
um this sort of idea of like using a smaller
|
|
|
1830 |
|
01:08:49,759 --> 01:08:54,120 |
|
model as an approximation is pretty |
|
|
|
1831 |
|
01:08:51,400 --> 01:08:55,839 |
|
powerful um and there's some great um |
|
|
|
1832 |
|
01:08:54,120 --> 01:08:58,159 |
|
follow-up work on speculative decoding and
|
|
|
1833 |
|
01:08:55,839 --> 01:08:59,000 |
|
sort of ways to do this faster or with |
|
|
|
1834 |
|
01:08:58,159 --> 01:09:01,520 |
|
stronger |
|
|
|
1835 |
|
01:08:59,000 --> 01:09:04,839 |
|
guarantees um but this General concept |
|
|
|
1836 |
|
01:09:01,520 --> 01:09:06,920 |
|
is I would bet probably how models like |
|
|
|
1837 |
|
01:09:04,839 --> 01:09:09,080 |
|
um part of how models like ChatGPT or
|
|
|
1838 |
|
01:09:06,920 --> 01:09:11,159 |
|
Bard are sort of generating text so |
|
|
|
1839 |
|
01:09:09,080 --> 01:09:13,120 |
|
quickly um there's another element here |
|
|
|
1840 |
|
01:09:11,159 --> 01:09:16,159 |
|
which is like the model architecture |
|
|
|
1841 |
|
01:09:13,120 --> 01:09:17,679 |
|
being sparse but I think that um if you |
|
|
|
1842 |
|
01:09:16,159 --> 01:09:19,920 |
|
folks talk about mixture of experts we |
|
|
|
1843 |
|
01:09:17,679 --> 01:09:22,880 |
|
might get into that |
|
|
|
1844 |
|
01:09:19,920 --> 01:09:26,080 |
|
later um how do you do this kind of fast |
|
|
|
1845 |
|
01:09:22,880 --> 01:09:27,679 |
|
inference um libraries like vLLM will
|
|
|
1846 |
|
01:09:26,080 --> 01:09:29,440 |
|
implement things I think implement
|
|
|
1847 |
|
01:09:27,679 --> 01:09:32,199 |
|
speculative decoding and implement sort
|
|
|
1848 |
|
01:09:29,440 --> 01:09:34,400 |
|
of Hardware level tricks like choosing |
|
|
|
1849 |
|
01:09:32,199 --> 01:09:37,799 |
|
which attention um weights to cache where
|
|
|
1850 |
|
01:09:34,400 --> 01:09:39,199 |
|
to do faster inference um there's also
|
|
|
1851 |
|
01:09:37,799 --> 01:09:40,799 |
|
great libraries for doing things like |
|
|
|
1852 |
|
01:09:39,199 --> 01:09:42,679 |
|
constraint decoding so things like |
|
|
|
1853 |
|
01:09:40,799 --> 01:09:45,520 |
|
outlines will let you set constraints |
|
|
|
1854 |
|
01:09:42,679 --> 01:09:46,960 |
|
like I want my outputs to all be Json |
|
|
|
1855 |
|
01:09:45,520 --> 01:09:48,640 |
|
and it will impose additional |
|
|
|
1856 |
|
01:09:46,960 --> 01:09:50,839 |
|
constraints during decoding to ensure |
|
|
|
1857 |
|
01:09:48,640 --> 01:09:52,279 |
|
that that happens
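
A rough sketch of what such a library does under the hood: mask out every token that would break the format before sampling. `lm_logits` and `valid_next_tokens` are hypothetical stand-ins (roughly, outlines derives the set of legal next tokens by compiling the format into a state machine):

```python
import numpy as np

# A rough sketch of structured decoding: at each step, rule out every token
# that would break the format, then sample from what is left. `lm_logits` and
# `valid_next_tokens` (e.g. derived from a JSON grammar) are hypothetical;
# libraries like outlines build this machinery for you.
def constrained_step(prefix):
    logits = lm_logits(prefix)
    mask = np.full_like(logits, -np.inf)
    allowed = valid_next_tokens(prefix)       # tokens legal under the grammar
    mask[allowed] = 0.0
    masked = logits + mask                    # illegal tokens get -inf
    p = np.exp(masked - masked.max())
    p /= p.sum()
    return int(np.random.choice(len(p), p=p))
```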
|
|
|
1858 |
|
01:09:50,839 --> 01:09:53,960 |
|
and then pretty much anything in these first couple of
|
|
|
1859 |
|
01:09:52,279 --> 01:09:56,560 |
|
sections we talked about um like |
|
|
|
1860 |
|
01:09:53,960 --> 01:09:58,440 |
|
sampling mode seeking search and |
|
|
|
1861 |
|
01:09:56,560 --> 01:10:00,400 |
|
sometimes MBR will also be implemented |
|
|
|
1862 |
|
01:09:58,440 --> 01:10:05,080 |
|
in pretty much any Library you use for |
|
|
|
1863 |
|
01:10:00,400 --> 01:10:07,679 |
|
models like Hugging Face fairseq or
|
|
|
1864 |
|
01:10:05,080 --> 01:10:10,000 |
|
JAX so to kind of take a step back
|
|
|
1865 |
|
01:10:07,679 --> 01:10:12,520 |
|
here as we get to the end of class
|
|
|
1866 |
|
01:10:10,000 --> 01:10:15,640 |
|
um there's really two broad categories |
|
|
|
1867 |
|
01:10:12,520 --> 01:10:17,679 |
|
of methods that we talked about today um |
|
|
|
1868 |
|
01:10:15,640 --> 01:10:20,360 |
|
given our initial distribution from the |
|
|
|
1869 |
|
01:10:17,679 --> 01:10:22,600 |
|
model for a next token given our our |
|
|
|
1870 |
|
01:10:20,360 --> 01:10:24,920 |
|
input we can do two kind of different |
|
|
|
1871 |
|
01:10:22,600 --> 01:10:26,400 |
|
things we can at each individual decoding
|
|
|
1872 |
|
01:10:24,920 --> 01:10:28,360 |
|
step choose some kind of function to |
|
|
|
1873 |
|
01:10:26,400 --> 01:10:30,280 |
|
manipulate this distribution and this |
|
|
|
1874 |
|
01:10:28,360 --> 01:10:32,280 |
|
could be something like short like |
|
|
|
1875 |
|
01:10:30,280 --> 01:10:33,960 |
|
cutting off the long tail like modifying |
|
|
|
1876 |
|
01:10:32,280 --> 01:10:36,239 |
|
the temperature or adding external |
|
|
|
1877 |
|
01:10:33,960 --> 01:10:38,400 |
|
information from another model or from a |
|
|
|
1878 |
|
01:10:36,239 --> 01:10:41,480 |
|
discriminator model |
|
|
|
1879 |
|
01:10:38,400 --> 01:10:43,159 |
|
right or we can over a larger part of |
|
|
|
1880 |
|
01:10:41,480 --> 01:10:45,120 |
|
the decoding process choose some |
|
|
|
1881 |
|
01:10:43,159 --> 01:10:47,120 |
|
function to choose between sequences and |
|
|
|
1882 |
|
01:10:45,120 --> 01:10:49,199 |
|
this could be like choosing between next |
|
|
|
1883 |
|
01:10:47,120 --> 01:10:51,679 |
|
tokens in beam search when we're pruning
|
|
|
1884 |
|
01:10:49,199 --> 01:10:53,120 |
|
beams this could be choosing from Full |
|
|
|
1885 |
|
01:10:51,679 --> 01:10:56,760 |
|
sequences when we're doing something |
|
|
|
1886 |
|
01:10:53,120 --> 01:10:58,040 |
|
like MBR or sample and rerank methods
|
|
|
1887 |
|
01:10:56,760 --> 01:11:00,239 |
|
um and you can do these two things in |
|
|
|
1888 |
|
01:10:58,040 --> 01:11:01,440 |
|
parallel right you can choose like a |
|
|
|
1889 |
|
01:11:00,239 --> 01:11:03,159 |
|
different function to manipulate the |
|
|
|
1890 |
|
01:11:01,440 --> 01:11:04,760 |
|
next token distribution and then some |
|
|
|
1891 |
|
01:11:03,159 --> 01:11:06,199 |
|
sort of like broader thing to choose |
|
|
|
1892 |
|
01:11:04,760 --> 01:11:08,280 |
|
what you do with the full sequences you |
|
|
|
1893 |
|
01:11:06,199 --> 01:11:09,920 |
|
get out of that distribution um but |
|
|
|
1894 |
|
01:11:08,280 --> 01:11:12,040 |
|
there are sort of these two broad |
|
|
|
1895 |
|
01:11:09,920 --> 01:11:14,880 |
|
categories of decoding
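
As a final sketch, the two categories side by side: a per-step function that reshapes the next-token distribution, and a sequence-level function that picks among finished candidates (`score` is a hypothetical utility or reward function):

```python
import numpy as np

# Sketches of the two broad categories. (1) A per-step function that reshapes
# the next-token distribution: here temperature scaling plus cutting off the
# long tail. (2) A sequence-level function that chooses among finished
# candidates: here a rerank by a hypothetical `score`, which could be an MBR
# utility or a reward model.
def manipulate_step(logits, temperature=0.8, top_k=50):
    logits = logits / temperature                        # sharpen or flatten
    cutoff = np.sort(logits)[-top_k]                     # assumes len >= top_k
    logits = np.where(logits < cutoff, -np.inf, logits)  # drop the long tail
    p = np.exp(logits - logits.max())
    return p / p.sum()

def choose_sequence(candidates):
    return max(candidates, key=score)                    # pick the best sample
```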
|
|
|
1896 |
|
01:11:12,040 --> 01:11:17,440 |
|
so what should you take away
|
|
|
1897 |
|
01:11:14,880 --> 01:11:19,400 |
|
from this um I think a couple of things |
|
|
|
1898 |
|
01:11:17,440 --> 01:11:21,000 |
|
decoding methods can be really
|
|
|
1899 |
|
01:11:19,400 --> 01:11:23,040 |
|
powerful to control features of your |
|
|
|
1900 |
|
01:11:21,000 --> 01:11:25,040 |
|
output if you want to impose particular |
|
|
|
1901 |
|
01:11:23,040 --> 01:11:26,679 |
|
constraints if you want to factor in |
|
|
|
1902 |
|
01:11:25,040 --> 01:11:27,960 |
|
a reward function or factor in a data
|
|
|
1903 |
|
01:11:26,679 --> 01:11:31,800 |
|
source that you maybe didn't have at |
|
|
|
1904 |
|
01:11:27,960 --> 01:11:34,239 |
|
training time um and to some extent you |
|
|
|
1905 |
|
01:11:31,800 --> 01:11:36,120 |
|
can do a more expensive decoding method |
|
|
|
1906 |
|
01:11:34,239 --> 01:11:37,520 |
|
to compensate for a worse model or to |
|
|
|
1907 |
|
01:11:36,120 --> 01:11:39,080 |
|
compensate for a model that hasn't been |
|
|
|
1908 |
|
01:11:37,520 --> 01:11:42,480 |
|
trained to do exactly the thing you want |
|
|
|
1909 |
|
01:11:39,080 --> 01:11:44,800 |
|
it to do um of course you can't you know |
|
|
|
1910 |
|
01:11:42,480 --> 01:11:47,679 |
|
use this to make GPT-2 small as good as
|
|
|
1911 |
|
01:11:44,800 --> 01:11:49,840 |
|
GPT-4 but you can sort of for some points
|
|
|
1912 |
|
01:11:47,679 --> 01:11:51,679 |
|
in the middle spend more um compute at
|
|
|
1913 |
|
01:11:49,840 --> 01:11:53,159 |
|
inference time to pay for not spending |
|
|
|
1914 |
|
01:11:51,679 --> 01:11:55,639 |
|
as much compute at training time and
|
|
|
1915 |
|
01:11:53,159 --> 01:11:57,440 |
|
particularly if you don't have access to |
|
|
|
1916 |
|
01:11:55,639 --> 01:11:59,400 |
|
the kind of giant gpus you might need to |
|
|
|
1917 |
|
01:11:57,440 --> 01:12:01,840 |
|
continue fine-tuning your model this can |
|
|
|
1918 |
|
01:11:59,400 --> 01:12:05,679 |
|
be a really a really powerful |
|
|
|
1919 |
|
01:12:01,840 --> 01:12:07,800 |
|
alternative um yeah so say like you're |
|
|
|
1920 |
|
01:12:05,679 --> 01:12:12,560 |
|
building like something in production |
|
|
|
1921 |
|
01:12:07,800 --> 01:12:15,920 |
|
right people usually do um sort of like |
|
|
|
1922 |
|
01:12:12,560 --> 01:12:18,760 |
|
that you know inference before scaling to see
|
|
|
1923 |
|
01:12:15,920 --> 01:12:21,840 |
|
if it's going to work
|
|
|
1924 |
|
01:12:18,760 --> 01:12:25,080 |
|
that like try to see like if you have a |
|
|
|
1925 |
|
01:12:21,840 --> 01:12:26,800 |
|
model that you can do some kind of |
|
|
|
1926 |
|
01:12:25,080 --> 01:12:29,199 |
|
expensive decoding method for to get |
|
|
|
1927 |
|
01:12:26,800 --> 01:12:31,120 |
|
good outputs is it then worth
|
|
|
1928 |
|
01:12:29,199 --> 01:12:34,000 |
|
training that model right um there's |
|
|
|
1929 |
|
01:12:31,120 --> 01:12:36,560 |
|
some great recent work on like training |
|
|
|
1930 |
|
01:12:34,000 --> 01:12:39,400 |
|
models to produce the same kind of |
|
|
|
1931 |
|
01:12:36,560 --> 01:12:40,760 |
|
outputs you get out of MBR without um
|
|
|
1932 |
|
01:12:39,400 --> 01:12:43,239 |
|
actually doing a really expensive |
|
|
|
1933 |
|
01:12:40,760 --> 01:12:45,600 |
|
inference step so at some level like yeah
|
|
|
1934 |
|
01:12:43,239 --> 01:12:48,120 |
|
you can decide like this model is good |
|
|
|
1935 |
|
01:12:45,600 --> 01:12:49,920 |
|
enough with its expensive method we can |
|
|
|
1936 |
|
01:12:48,120 --> 01:12:50,920 |
|
try to make it cheaper by spending more |
|
|
|
1937 |
|
01:12:49,920 --> 01:12:53,960 |
|
money on |
|
|
|
1938 |
|
01:12:50,920 --> 01:12:55,520 |
|
fine-tuning um but it's not like
|
|
|
1939 |
|
01:12:53,960 --> 01:12:57,320 |
|
necessarily guaranteed that that will
|
|
|
1940 |
|
01:12:55,520 --> 01:13:00,679 |
|
be the case |
|
|
|
1941 |
|
01:12:57,320 --> 01:13:03,040 |
|
Okay um the methods that we looked at |
|
|
|
1942 |
|
01:13:00,679 --> 01:13:06,199 |
|
have these sort of trade-offs in quality |
|
|
|
1943 |
|
01:13:03,040 --> 01:13:07,960 |
|
in diversity and in inference speed so |
|
|
|
1944 |
|
01:13:06,199 --> 01:13:10,320 |
|
sampling from your model directly is |
|
|
|
1945 |
|
01:13:07,960 --> 01:13:13,120 |
|
pretty fast to do you get really diverse |
|
|
|
1946 |
|
01:13:10,320 --> 01:13:14,960 |
|
outputs but it tends to be lower quality |
|
|
|
1947 |
|
01:13:13,120 --> 01:13:16,320 |
|
um whereas more restricted sampling |
|
|
|
1948 |
|
01:13:14,960 --> 01:13:18,520 |
|
these sort of mode seeking search |
|
|
|
1949 |
|
01:13:16,320 --> 01:13:20,639 |
|
methods tend to be higher quality but |
|
|
|
1950 |
|
01:13:18,520 --> 01:13:21,880 |
|
you get less less diverse outputs and |
|
|
|
1951 |
|
01:13:20,639 --> 01:13:23,560 |
|
that's why we have these methods like |
|
|
|
1952 |
|
01:13:21,880 --> 01:13:26,719 |
|
diverse and stochastic beam search to
|
|
|
1953 |
|
01:13:23,560 --> 01:13:28,760 |
|
counter this a bit um and then methods |
|
|
|
1954 |
|
01:13:26,719 --> 01:13:30,400 |
|
like MBR or other sample and rerank |
|
|
|
1955 |
|
01:13:28,760 --> 01:13:32,679 |
|
methods tend to be very high quality |
|
|
|
1956 |
|
01:13:30,400 --> 01:13:34,280 |
|
outputs but you pay for this with much |
|
|
|
1957 |
|
01:13:32,679 --> 01:13:36,520 |
|
slower inference |
|
|
|
1958 |
|
01:13:34,280 --> 01:13:38,679 |
|
time um but if I can kind of convince |
|
|
|
1959 |
|
01:13:36,520 --> 01:13:41,560 |
|
you of anything today I think it would |
|
|
|
1960 |
|
01:13:38,679 --> 01:13:43,600 |
|
be this which is that the decoding
|
|
|
1961 |
|
01:13:41,560 --> 01:13:45,600 |
|
method you choose for your model has a |
|
|
|
1962 |
|
01:13:43,600 --> 01:13:47,960 |
|
really strong impact on performance |
|
|
|
1963 |
|
01:13:45,600 --> 01:13:49,520 |
|
downstream um you can get radically
|
|
|
1964 |
|
01:13:47,960 --> 01:13:51,239 |
|
different results out of the same model |
|
|
|
1965 |
|
01:13:49,520 --> 01:13:52,639 |
|
without doing any additional training |
|
|
|
1966 |
|
01:13:51,239 --> 01:13:55,120 |
|
just by choosing a different decoding
|
|
|
1967 |
|
01:13:52,639 --> 01:13:57,880 |
|
method that you might want to try and so |
|
|
|
1968 |
|
01:13:55,120 --> 01:13:59,679 |
|
when you sort of let your libraries pick |
|
|
|
1969 |
|
01:13:57,880 --> 01:14:01,159 |
|
a quote unquote like sensible default |
|
|
|
1970 |
|
01:13:59,679 --> 01:14:03,760 |
|
you can leave a lot of performance on |
|
|
|
1971 |
|
01:14:01,159 --> 01:14:06,480 |
|
the table so I encourage
|
|
|
1972 |
|
01:14:03,760 --> 01:14:08,199 |
|
you folks that if you're um deploying
|
|
|
1973 |
|
01:14:06,480 --> 01:14:09,760 |
|
models in production or if you're doing |
|
|
|
1974 |
|
01:14:08,199 --> 01:14:10,840 |
|
research or you know maybe look at your |
|
|
|
1975 |
|
01:14:09,760 --> 01:14:13,280 |
|
outputs and if your model has some
|
|
|
1976 |
|
01:14:10,840 --> 01:14:15,320 |
|
undesirable behaviors to consider if the |
|
|
|
1977 |
|
01:14:13,280 --> 01:14:17,800 |
|
decoding method you're using is imposing |
|
|
|
1978 |
|
01:14:15,320 --> 01:14:20,000 |
|
some kind of intuition or some kind of
|
|
|
1979 |
|
01:14:17,800 --> 01:14:21,840 |
|
inductive bias and if you can alter that |
|
|
|
1980 |
|
01:14:20,000 --> 01:14:24,239 |
|
to get some of these behaviors without |
|
|
|
1981 |
|
01:14:21,840 --> 01:14:26,320 |
|
resorting to additional training |
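
As one concrete way to act on that, here is a sketch using Hugging Face `transformers` that runs the same model under three decoding strategies instead of accepting the library default; `gpt2` is just a small stand-in checkpoint, and the prompt and parameter values are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The main trade-off in decoding is", return_tensors="pt")

strategies = {
    "greedy": dict(do_sample=False),                             # fast, prone to repetition
    "beam-5": dict(num_beams=5, do_sample=False),                # mode-seeking, often shorter
    "top-p":  dict(do_sample=True, top_p=0.9, temperature=0.8),  # diverse, noisier
}
for name, kwargs in strategies.items():
    out = model.generate(**inputs, max_new_tokens=40,
                         pad_token_id=tok.eos_token_id, **kwargs)
    print(f"{name}: {tok.decode(out[0], skip_special_tokens=True)}\n")
```

Reading the three continuations side by side is usually enough to tell whether the default you were handed is actually the right one for your task.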
|
|
|
1982 |
|
01:14:24,239 --> 01:14:28,719 |
|
um and that's sort of the end I can take |
|
|
|
1983 |
|
01:14:26,320 --> 01:14:28,719 |
|
any other |
|
|
|
1984 |
|
01:14:34,320 --> 01:14:38,719 |
|
questions okay um yeah I guess we don't |
|
|
|
1985 |
|
01:14:37,199 --> 01:14:41,360 |
|
have any questions we can take questions |
|
|
|
1986 |
|
01:14:38,719 --> 01:14:45,560 |
|
up here um one thing I'd like to
|
|
|
1987 |
|
01:14:41,360 --> 01:14:47,679 |
|
point out also is that um I love the
|
|
|
1988 |
|
01:14:45,560 --> 01:14:50,760 |
|
final thing that Amanda said here |
|
|
|
1989 |
|
01:14:47,679 --> 01:14:54,199 |
|
another thing is that my impression from |
|
|
|
1990 |
|
01:14:50,760 --> 01:14:56,400 |
|
dealing with things is that it's a lot |
|
|
|
1991 |
|
01:14:54,199 --> 01:14:58,159 |
|
easier to predict the effect of |
|
|
|
1992 |
|
01:14:56,400 --> 01:14:59,920 |
|
inference-time decoding-time
|
|
|
1993 |
|
01:14:58,159 --> 01:15:01,120 |
|
manipulations than it is to predict the |
|
|
|
1994 |
|
01:14:59,920 --> 01:15:04,239 |
|
effect of |
|
|
|
1995 |
|
01:15:01,120 --> 01:15:07,480 |
|
like um fine-tuning or something like |
|
|
|
1996 |
|
01:15:04,239 --> 01:15:11,040 |
|
this like just to give an
|
|
|
1997 |
|
01:15:07,480 --> 01:15:12,480 |
|
example beam search with a maximum
|
|
|
1998 |
|
01:15:11,040 --> 01:15:15,199 |
|
likelihood-trained model tends to
|
|
|
1999 |
|
01:15:12,480 --> 01:15:16,719 |
|
generate things that are shorter um |
|
|
|
2000 |
|
01:15:15,199 --> 01:15:18,040 |
|
whereas greedy decoding tends to |
|
|
|
2001 |
|
01:15:16,719 --> 01:15:19,639 |
|
generate things that are longer and |
|
|
|
2002 |
|
01:15:18,040 --> 01:15:22,000 |
|
repeat more often and stuff like that |
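
A cheap way to spot one of these quirks in your own outputs is to measure repetition directly. The distinct-n helper below is a hypothetical toy, not from any particular library: it reports the fraction of unique n-grams, and a low value flags the kind of repetition loop that greedy or beam decoding can fall into.

```python
def distinct_n(text: str, n: int = 2) -> float:
    """Fraction of n-grams in `text` that are unique (1.0 = no repetition)."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print(distinct_n("the cat sat on the mat"))  # 1.0: all bigrams unique
print(distinct_n("I am I am I am I am"))     # ~0.29: stuck in a loop
```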
|
|
|
2003 |
|
01:15:19,639 --> 01:15:25,920 |
|
and if you try a few methods like this |
|
|
|
2004 |
|
01:15:22,000 --> 01:15:28,920 |
|
you'll quickly find these kinds of quirks of
|
|
|
2005 |
|
01:15:25,920 --> 01:15:31,320 |
|
each of the methods and so by forming a |
|
|
|
2006 |
|
01:15:28,920 --> 01:15:32,719 |
|
good intuition of this you will also |
|
|
|
2007 |
|
01:15:31,320 --> 01:15:34,000 |
|
know how to fix these problems when you |
|
|
|
2008 |
|
01:15:32,719 --> 01:15:35,600 |
|
see them it's like oh my model's |
|
|
|
2009 |
|
01:15:34,000 --> 01:15:37,320 |
|
repeating itself a lot maybe I shouldn't |
|
|
|
2010 |
|
01:15:35,600 --> 01:15:38,679 |
|
be using greedy search I should be
|
|
|
2011 |
|
01:15:37,320 --> 01:15:41,199 |
|
switching over to something else or |
|
|
|
2012 |
|
01:15:38,679 --> 01:15:43,320 |
|
something like that so um this is a good |
|
|
|
2013 |
|
01:15:41,199 --> 01:15:45,880 |
|
thing to know and play around with yeah |
|
|
|
2014 |
|
01:15:43,320 --> 01:15:47,239 |
|
and I think pretty underutilized too um |
|
|
|
2015 |
|
01:15:45,880 --> 01:15:48,880 |
|
a lot of folks will not think about a |
|
|
|
2016 |
|
01:15:47,239 --> 01:15:50,920 |
|
decoding method to fix their problem |
|
|
|
2017 |
|
01:15:48,880 --> 01:15:52,280 |
|
even if like your model might actually |
|
|
|
2018 |
|
01:15:50,920 --> 01:15:53,760 |
|
be perfectly fine under a different |
|
|
|
2019 |
|
01:15:52,280 --> 01:15:56,000 |
|
decoding strategy |
|
|
|
2020 |
|
01:15:53,760 --> 01:15:58,320 |
|
great okay thanks a lot everyone you can |
|
|
|
2021 |
|
01:15:56,000 --> 01:15:58,320 |
|
uh |
|
|
|
2022 |
|
01:16:02,280 --> 01:16:05,280 |
|
finish |