1
00:00:00,840 --> 00:00:05,920
okay so uh let's get started um today
2
00:00:04,200 --> 00:00:08,000
I'm going to be talking about learning
3
00:00:05,920 --> 00:00:09,480
from Human feedback I wrote
4
00:00:08,000 --> 00:00:12,160
reinforcement learning from Human
5
00:00:09,480 --> 00:00:14,519
feedback because that's what um you know
6
00:00:12,160 --> 00:00:15,759
a lot of people talk about nowadays but
7
00:00:14,519 --> 00:00:18,880
actually there's other methods of
8
00:00:15,759 --> 00:00:21,840
learning from Human feedback so first
9
00:00:18,880 --> 00:00:24,760
I'm going to be talking about the ways
10
00:00:21,840 --> 00:00:27,920
we can get uh human feedback for the
11
00:00:24,760 --> 00:00:31,039
generations of models and mostly focus
12
00:00:27,920 --> 00:00:32,960
on generation tasks because um
13
00:00:31,039 --> 00:00:35,800
generation tasks are harder than like
14
00:00:32,960 --> 00:00:38,559
classification tasks that we uh we deal
15
00:00:35,800 --> 00:00:40,000
with normally so I'll spend a fair
16
00:00:38,559 --> 00:00:42,239
amount of time talking about how we do
17
00:00:40,000 --> 00:00:45,760
that and then after I talk about how we
18
00:00:42,239 --> 00:00:48,360
do that we'll move into um how we
19
00:00:45,760 --> 00:00:51,160
actually learn from that
20
00:00:48,360 --> 00:00:53,399
signal so normally what we've done up
21
00:00:51,160 --> 00:00:56,399
until this point is maximum likelihood
22
00:00:53,399 --> 00:00:58,199
training uh this is just an overview
23
00:00:56,399 --> 00:00:59,559
slide so we what we want to do is we
24
00:00:58,199 --> 00:01:00,760
want to maximize the likelihood of
25
00:00:59,559 --> 00:01:03,280
predicting the next word and the
26
00:01:00,760 --> 00:01:05,960
reference given the previous words uh
27
00:01:03,280 --> 00:01:08,119
which gives us the loss of the output
28
00:01:05,960 --> 00:01:09,799
given the input uh where you know the
29
00:01:08,119 --> 00:01:13,960
input can be the prompt the output can
30
00:01:09,799 --> 00:01:16,080
be the answer uh to the prompt
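For reference, the loss being described here is the standard token-level maximum likelihood objective; written out in common notation (the slide's exact symbols may differ slightly), it is:

```latex
% Negative log-likelihood for an input X (e.g. a prompt) and a reference
% output Y = (y_1, ..., y_T), summed over output tokens:
\mathcal{L}_{\mathrm{MLE}}(X, Y) = -\sum_{t=1}^{T} \log P_{\theta}\!\left(y_t \mid X, y_{<t}\right)
```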
31
00:01:13,960 --> 00:01:18,360
there's uh lots of problems with
32
00:01:16,080 --> 00:01:20,439
learning from maximum likelihood and I'm
33
00:01:18,360 --> 00:01:22,079
going to give three examples here I
34
00:01:20,439 --> 00:01:24,159
think all of these are actually real
35
00:01:22,079 --> 00:01:26,880
problems uh that we need to be worried
36
00:01:24,159 --> 00:01:30,240
about so the first one is that some
37
00:01:26,880 --> 00:01:32,439
mistakes are worse than others so um in
38
00:01:30,240 --> 00:01:33,560
the end we want good outputs and some
39
00:01:32,439 --> 00:01:36,520
mistaken
40
00:01:33,560 --> 00:01:38,200
predictions uh can be a bigger problem
41
00:01:36,520 --> 00:01:42,680
for the output being
42
00:01:38,200 --> 00:01:46,000
good so to give an example uh let's say
43
00:01:42,680 --> 00:01:47,600
what we actually wanted from like a
44
00:01:46,000 --> 00:01:49,320
speech recognition system or a
45
00:01:47,600 --> 00:01:54,040
translation system or something like
46
00:01:49,320 --> 00:01:54,040
that is uh please send this package to
47
00:01:54,280 --> 00:01:58,920
Pittsburgh if I write please send a
48
00:01:56,880 --> 00:02:01,560
package to Pittsburgh then this is not a
49
00:01:58,920 --> 00:02:03,560
huge problem
50
00:02:01,560 --> 00:02:06,479
if I write uh please send this package
51
00:02:03,560 --> 00:02:07,719
to Tokyo then that might be a big
52
00:02:06,479 --> 00:02:09,640
problem because the package you wanted
53
00:02:07,719 --> 00:02:12,760
to come to Pittsburgh goes to Tokyo
54
00:02:09,640 --> 00:02:13,680
instead and uh you might not want that
55
00:02:12,760 --> 00:02:16,080
to
56
00:02:13,680 --> 00:02:18,000
happen you might also have it say
57
00:02:16,080 --> 00:02:20,400
bleeping send this package to Pittsburgh
58
00:02:18,000 --> 00:02:22,200
instead of please um and that would be a
59
00:02:20,400 --> 00:02:24,200
problem in a customer service system
60
00:02:22,200 --> 00:02:28,400
right because your customer would uh
61
00:02:24,200 --> 00:02:28,400
leave and never come back
62
00:02:28,840 --> 00:02:32,040
so
63
00:02:30,360 --> 00:02:33,720
messing up a determiner like this is not going to
64
00:02:32,040 --> 00:02:35,640
cause a huge issue uh messing up other
65
00:02:33,720 --> 00:02:37,519
things is going to cause a larger
66
00:02:35,640 --> 00:02:39,519
issue but from the point of view of
67
00:02:37,519 --> 00:02:42,680
maximum likelihood all of these are just
68
00:02:39,519 --> 00:02:44,560
tokens and messing up one token is the
69
00:02:42,680 --> 00:02:47,519
same as messing up another token so
70
00:02:44,560 --> 00:02:50,040
that's uh you know an
71
00:02:47,519 --> 00:02:52,080
issue another problem is that the gold
72
00:02:50,040 --> 00:02:54,640
standard and maximum likelihood
73
00:02:52,080 --> 00:02:57,480
estimation can be bad it can be like not
74
00:02:54,640 --> 00:02:59,239
what you want and uh corpora are full of
75
00:02:57,480 --> 00:03:02,400
outputs that we wouldn't want a language
76
00:02:59,239 --> 00:03:05,400
model producing so for example uh toxic
77
00:03:02,400 --> 00:03:07,799
comments on Reddit uh
78
00:03:05,400 --> 00:03:09,959
disinformation um another thing that a
79
00:03:07,799 --> 00:03:13,000
lot of people don't think about uh quite
80
00:03:09,959 --> 00:03:15,640
as much is a lot of the data online is
81
00:03:13,000 --> 00:03:17,680
uh automatically generated
82
00:03:15,640 --> 00:03:19,720
nowadays for example from machine
83
00:03:17,680 --> 00:03:24,080
translation a lot of the translations
84
00:03:19,720 --> 00:03:25,720
online are from uh 2016 Google translate
85
00:03:24,080 --> 00:03:27,560
uh when Google translate was a lot less
86
00:03:25,720 --> 00:03:29,120
good than it is now and so you have like
87
00:03:27,560 --> 00:03:31,760
poor quality translations that were
88
00:03:29,120 --> 00:03:31,760
automatically generated
89
00:03:33,040 --> 00:03:37,959
a final problem is uh something that's
90
00:03:35,280 --> 00:03:40,360
called exposure bias and exposure bias
91
00:03:37,959 --> 00:03:44,000
basically what it means is MLE training
92
00:03:40,360 --> 00:03:46,000
doesn't consider um the
93
00:03:44,000 --> 00:03:48,599
necessity for generation and it relies
94
00:03:46,000 --> 00:03:51,360
on gold standard context so if we go
95
00:03:48,599 --> 00:03:54,159
back to the MLE equation when we're
96
00:03:51,360 --> 00:03:57,200
calculating MLE this y_{<t} term is
97
00:03:54,159 --> 00:03:59,200
always correct it's always a good output
98
00:03:57,200 --> 00:04:01,439
and so what the model does is it learns
99
00:03:59,200 --> 00:04:04,280
to over rely on good
100
00:04:01,439 --> 00:04:06,079
outputs and one example of a problem
101
00:04:04,280 --> 00:04:08,360
that this causes is models tend to
102
00:04:06,079 --> 00:04:10,560
repeat themselves over and over again
103
00:04:08,360 --> 00:04:12,319
for example um when you use some
104
00:04:10,560 --> 00:04:15,079
generation algorithms and the reason why
105
00:04:12,319 --> 00:04:18,519
this happens is because in a gold
106
00:04:15,079 --> 00:04:22,079
standard output if a word has appeared
107
00:04:18,519 --> 00:04:25,840
previously that word is more likely to
108
00:04:22,079 --> 00:04:28,560
happen next so like if you say um like I
109
00:04:25,840 --> 00:04:29,759
am going um I am going to Pittsburgh
110
00:04:28,560 --> 00:04:31,880
you're much more likely to say
111
00:04:29,759 --> 00:04:33,000
Pittsburgh again in the future because
112
00:04:31,880 --> 00:04:35,720
you're talking about Pittsburgh
113
00:04:33,000 --> 00:04:37,400
topically it's coherent so what you get is
114
00:04:35,720 --> 00:04:38,639
you get MLE-trained models saying I'm
115
00:04:37,400 --> 00:04:40,160
going to Pittsburgh I am going to
116
00:04:38,639 --> 00:04:41,680
Pittsburgh I am going to Pittsburgh I
117
00:04:40,160 --> 00:04:45,280
going to Pittsburgh you've probably seen
118
00:04:41,680 --> 00:04:47,320
this before uh at some point and so um
119
00:04:45,280 --> 00:04:49,320
exposure bias is basically that the
120
00:04:47,320 --> 00:04:51,039
model has never been exposed to mistakes
121
00:04:49,320 --> 00:04:55,240
in the past and so it can't deal with
122
00:04:51,039 --> 00:04:56,840
them so what this does is um if you have
123
00:04:55,240 --> 00:04:58,560
an alternative training algorithm you
124
00:04:56,840 --> 00:05:02,120
can fix this by generating a whole bunch
125
00:04:58,560 --> 00:05:04,880
of outputs uh like scoring some of
126
00:05:02,120 --> 00:05:06,880
them poorly and penalizing the model for
127
00:05:04,880 --> 00:05:09,960
uh generating poor outputs and so that can
128
00:05:06,880 --> 00:05:09,960
fix these problems as
129
00:05:10,800 --> 00:05:18,440
well uh any questions about this all
130
00:05:15,199 --> 00:05:20,800
good Okay cool so now I'd like to get
131
00:05:18,440 --> 00:05:23,919
into how we measure how good an output
132
00:05:20,800 --> 00:05:26,360
is and there's different ways of doing
133
00:05:23,919 --> 00:05:30,319
this um the first one is objective
134
00:05:26,360 --> 00:05:32,680
assessment so for some uh tasks or for
135
00:05:30,319 --> 00:05:35,400
many tasks there's kind of objectively a
136
00:05:32,680 --> 00:05:37,280
correct answer there's also human
137
00:05:35,400 --> 00:05:40,360
subjective annotations so you can ask
138
00:05:37,280 --> 00:05:42,919
humans to do annotation for you there's
139
00:05:40,360 --> 00:05:45,400
machine prediction of human
140
00:05:42,919 --> 00:05:48,319
preferences and there's also use in
141
00:05:45,400 --> 00:05:50,840
another system in a downstream
142
00:05:48,319 --> 00:05:52,960
task so the way objective assessment
143
00:05:50,840 --> 00:05:54,919
works is you have an annotated correct
144
00:05:52,960 --> 00:05:57,080
answer and match against this so like if
145
00:05:54,919 --> 00:06:00,600
you're solving math problems uh
146
00:05:57,080 --> 00:06:02,560
answering objective questions and
147
00:06:00,600 --> 00:06:04,280
you know you can pick any arbitrary
148
00:06:02,560 --> 00:06:06,840
example you can pick your classification
149
00:06:04,280 --> 00:06:09,800
example from uh like your text
150
00:06:06,840 --> 00:06:11,880
classification tasks an even clearer
151
00:06:09,800 --> 00:06:13,880
example is if you have math problems
152
00:06:11,880 --> 00:06:15,639
there's kind of objectively one answer
153
00:06:13,880 --> 00:06:18,080
to any math problem and there's no other
154
00:06:15,639 --> 00:06:19,680
answer that could be correct so this
155
00:06:18,080 --> 00:06:21,160
makes your life easy if you're handling
156
00:06:19,680 --> 00:06:22,560
this type of problem but of course
157
00:06:21,160 --> 00:06:24,120
there's many other types of problems we
158
00:06:22,560 --> 00:06:26,039
want to handle that don't have objective
159
00:06:24,120 --> 00:06:29,039
answers like
160
00:06:26,039 --> 00:06:31,440
this so let's say we're handling
161
00:06:29,039 --> 00:06:34,680
a generation task where we don't have an
162
00:06:31,440 --> 00:06:36,360
objective answer um in this case kind of
163
00:06:34,680 --> 00:06:39,440
one of our gold standards is human
164
00:06:36,360 --> 00:06:42,360
evaluation so we might have a source
165
00:06:39,440 --> 00:06:44,919
input like a prompt or an input text for
166
00:06:42,360 --> 00:06:47,240
machine translation we have one or
167
00:06:44,919 --> 00:06:49,960
several hypotheses and we ask a human
168
00:06:47,240 --> 00:06:53,280
annotator to basically give uh a score
169
00:06:49,960 --> 00:06:55,759
for them or do some sort of other
170
00:06:53,280 --> 00:06:59,759
annotation and the different varieties
171
00:06:55,759 --> 00:07:03,080
of annotation that we can give are um
172
00:06:59,759 --> 00:07:04,599
something called direct assessment so uh
173
00:07:03,080 --> 00:07:06,599
direct assessment is a term that comes
174
00:07:04,599 --> 00:07:09,280
from machine translation uh so you might
175
00:07:06,599 --> 00:07:11,039
not see it used uh lots of other places
176
00:07:09,280 --> 00:07:13,120
but it's basically just give a score
177
00:07:11,039 --> 00:07:15,759
directly to how good the output is so
178
00:07:13,120 --> 00:07:17,199
you can say like if the
179
00:07:15,759 --> 00:07:18,960
translation is please send this
180
00:07:17,199 --> 00:07:21,759
package to Tokyo we give it a score of
181
00:07:18,960 --> 00:07:24,360
two out of 10 or something like
182
00:07:21,759 --> 00:07:28,000
this
183
00:07:24,360 --> 00:07:30,840
so the question here is like
184
00:07:28,000 --> 00:07:32,400
let's say I gave a score of
185
00:07:30,840 --> 00:07:34,520
two out of 10 for please send this
186
00:07:32,400 --> 00:07:37,680
package to Tokyo what score should I
187
00:07:34,520 --> 00:07:40,240
give for please send a package to Tokyo
188
00:07:37,680 --> 00:07:42,360
anyone have any ideas the correct
189
00:07:40,240 --> 00:07:46,520
answer is please send this package to
190
00:07:42,360 --> 00:07:48,000
Tokyo is eight out of 10 yeah but you
191
00:07:46,520 --> 00:07:50,440
might disagree on that right it's kind
192
00:07:48,000 --> 00:07:52,159
of like subjective um one of the
193
00:07:50,440 --> 00:07:54,039
difficulties of direct assessment is
194
00:07:52,159 --> 00:07:55,520
giving a number like this is pretty
195
00:07:54,039 --> 00:07:57,800
difficult if you don't have a very clear
196
00:07:55,520 --> 00:07:59,720
rubric and very skilled annotators and
197
00:07:57,800 --> 00:08:02,879
it's hard to get consistency between
198
00:07:59,720 --> 00:08:04,400
people when you do this so the advantage
199
00:08:02,879 --> 00:08:05,599
is it kind of gives you an idea of how
200
00:08:04,400 --> 00:08:07,520
good things are overall but the
201
00:08:05,599 --> 00:08:09,280
disadvantage is it's more difficult to
202
00:08:07,520 --> 00:08:11,319
annotate and get
203
00:08:09,280 --> 00:08:13,159
consistency um another thing that I
204
00:08:11,319 --> 00:08:15,319
should point out is often scores are
205
00:08:13,159 --> 00:08:18,680
assigned separately based on desirable
206
00:08:15,319 --> 00:08:20,960
traits so um we don't necessarily just
207
00:08:18,680 --> 00:08:23,479
say how good is it we say how fluent is
208
00:08:20,960 --> 00:08:26,120
it like is it fluent uh
209
00:08:23,479 --> 00:08:28,159
English in Translation there's a concept
210
00:08:26,120 --> 00:08:30,720
called adequacy which is how well does
211
00:08:28,159 --> 00:08:34,599
the output reflect the input
212
00:08:30,720 --> 00:08:36,519
semantics um and if you're assessing
213
00:08:34,599 --> 00:08:38,440
translation systems actually it's common
214
00:08:36,519 --> 00:08:40,519
to assess fluency without even looking
215
00:08:38,440 --> 00:08:43,200
at the input because then you can just
216
00:08:40,519 --> 00:08:44,880
say how fluent is it but for adequacy
217
00:08:43,200 --> 00:08:46,320
you definitely need to understand the
218
00:08:44,880 --> 00:08:49,600
input so you need to be a bilingual
219
00:08:46,320 --> 00:08:54,680
speaker to be able to assess
220
00:08:49,600 --> 00:08:57,560
that um factuality um and so factuality
221
00:08:54,680 --> 00:09:00,160
is tricky um it can either be factuality
222
00:08:57,560 --> 00:09:03,880
grounded in a particular input text in
223
00:09:00,160 --> 00:09:05,600
which case um the facts would have to be
224
00:09:03,880 --> 00:09:07,680
you know things that were said in the
225
00:09:05,600 --> 00:09:09,399
input or it can be just kind of is the
226
00:09:07,680 --> 00:09:11,120
statement factual in general in which
227
00:09:09,399 --> 00:09:13,720
case you need to go online you need to
228
00:09:11,120 --> 00:09:16,480
search for things and like uh check
229
00:09:13,720 --> 00:09:18,480
whether the statement is factual or not
230
00:09:16,480 --> 00:09:20,480
um other things are like coherence does
231
00:09:18,480 --> 00:09:21,480
the output fit coherently within the
232
00:09:20,480 --> 00:09:23,680
larger
233
00:09:21,480 --> 00:09:25,680
discourse um and there's many many other
234
00:09:23,680 --> 00:09:28,120
ones of these this is also task
235
00:09:25,680 --> 00:09:29,760
dependent so like the things you will
236
00:09:28,120 --> 00:09:31,000
evaluate for machine translation are
237
00:09:29,760 --> 00:09:32,880
different than the ones you would do for
238
00:09:31,000 --> 00:09:35,760
dialog which are different than the ones
239
00:09:32,880 --> 00:09:38,200
you would do for a general purpose
240
00:09:35,760 --> 00:09:41,279
chatbot uh which is different than the kind of things
241
00:09:38,200 --> 00:09:44,120
you would do for um summarization for
242
00:09:41,279 --> 00:09:46,320
example so if you're interested in doing
243
00:09:44,120 --> 00:09:47,519
something like this uh then I definitely
244
00:09:46,320 --> 00:09:48,800
encourage you to look at what other
245
00:09:47,519 --> 00:09:51,399
people have done for the tasks you're
246
00:09:48,800 --> 00:09:53,079
interested in uh previously and uh find
247
00:09:51,399 --> 00:09:54,880
out the different types of traits that
248
00:09:53,079 --> 00:09:58,320
they did
249
00:09:54,880 --> 00:10:00,760
uh any questions about this
250
00:09:58,320 --> 00:10:03,079
also
251
00:10:00,760 --> 00:10:06,920
okay the next type of feedback is
252
00:10:03,079 --> 00:10:09,839
preference ratings um and so this is uh
253
00:10:06,920 --> 00:10:12,600
basically what you do is you have two or
254
00:10:09,839 --> 00:10:14,240
more outputs from different models or
255
00:10:12,600 --> 00:10:16,440
different Generations from an individual
256
00:10:14,240 --> 00:10:18,839
model and you ask a human which one is
257
00:10:16,440 --> 00:10:22,320
better like is one better than the other
258
00:10:18,839 --> 00:10:23,839
or are they tied and so in this case um
259
00:10:22,320 --> 00:10:26,320
you might have please send this package
260
00:10:23,839 --> 00:10:28,880
to Tokyo please send a package to
261
00:10:26,320 --> 00:10:31,040
Tokyo we might disagree on how like good
262
00:10:28,880 --> 00:10:33,959
or bad each of them are but I think most
263
00:10:31,040 --> 00:10:35,959
people would agree that this one is like
264
00:10:33,959 --> 00:10:37,480
despite the fact that it got this wrong
265
00:10:35,959 --> 00:10:40,160
the second one is better than the first
266
00:10:37,480 --> 00:10:42,240
one so this is a little bit of an easier
267
00:10:40,160 --> 00:10:45,040
task it's easier to uh get people to
268
00:10:42,240 --> 00:10:46,839
annotate these things
269
00:10:45,040 --> 00:10:50,519
consistently however it has the
270
00:10:46,839 --> 00:10:52,839
disadvantage that you can't really tell
271
00:10:50,519 --> 00:10:55,360
uh whether systems are really good or
272
00:10:52,839 --> 00:10:57,200
really bad so let's say you have a bunch
273
00:10:55,360 --> 00:11:00,279
of really bad systems that you're
274
00:10:57,200 --> 00:11:01,839
comparing with each other um you might
275
00:11:00,279 --> 00:11:03,680
find that one is better than the other
276
00:11:01,839 --> 00:11:06,000
but that still doesn't mean it's ready
277
00:11:03,680 --> 00:11:07,399
to be deployed or if you have a bunch of
278
00:11:06,000 --> 00:11:11,040
really good systems they're all
279
00:11:07,399 --> 00:11:13,000
basically you know very very similar to
280
00:11:11,040 --> 00:11:14,399
one another but one is like slightly more
281
00:11:13,000 --> 00:11:18,639
fluent than the other you might still
282
00:11:14,399 --> 00:11:20,680
get a similar result um and so that also
283
00:11:18,639 --> 00:11:22,760
makes it uh you know a little bit
284
00:11:20,680 --> 00:11:24,880
difficult to use practically in some
285
00:11:22,760 --> 00:11:27,040
ways I didn't put it on the slide but
286
00:11:24,880 --> 00:11:30,680
there's another way you can kind of get
287
00:11:27,040 --> 00:11:33,920
the best of both worlds um which is a
288
00:11:30,680 --> 00:11:35,560
side by side assessment and side by-side
289
00:11:33,920 --> 00:11:38,440
assessment basically what you would do
290
00:11:35,560 --> 00:11:40,560
is you would say um please send this
291
00:11:38,440 --> 00:11:43,399
package to Tokyo please send a package
292
00:11:40,560 --> 00:11:47,279
to Pittsburgh give each of them a direct
293
00:11:43,399 --> 00:11:48,839
score um but you can use decimal places
294
00:11:47,279 --> 00:11:51,120
and you can't use the same score for all
295
00:11:48,839 --> 00:11:55,920
of them and so it's
296
00:11:51,120 --> 00:11:57,480
like 5.00 and 4.99 out of five or
297
00:11:55,920 --> 00:11:59,519
something like that like you like one
298
00:11:57,480 --> 00:12:02,639
slightly better than the other or
299
00:11:59,519 --> 00:12:04,480
something like that um so there are ways
300
00:12:02,639 --> 00:12:07,240
to kind of get Best of Both Worlds if
301
00:12:04,480 --> 00:12:11,720
you're interested in doing
302
00:12:07,240 --> 00:12:11,720
that um
303
00:12:14,920 --> 00:12:20,519
so one other problem with
304
00:12:18,279 --> 00:12:22,519
preference rankings is that there's a
305
00:12:20,519 --> 00:12:24,440
limited number of things that humans can
306
00:12:22,519 --> 00:12:28,160
compare before they get really
307
00:12:24,440 --> 00:12:32,360
overwhelmed so if you say I
308
00:12:28,160 --> 00:12:35,560
want like I want to
309
00:12:32,360 --> 00:12:36,920
rate 15 systems or 20 systems with
310
00:12:35,560 --> 00:12:39,120
respect to how good they are with
311
00:12:36,920 --> 00:12:40,639
respect to each other it's going to be
312
00:12:39,120 --> 00:12:43,680
impossible for humans to come up with a
313
00:12:40,639 --> 00:12:46,959
good preference ranking between them and
314
00:12:43,680 --> 00:12:49,480
so the typical way around this um which
315
00:12:46,959 --> 00:12:52,360
is also used in uh things like the
316
00:12:49,480 --> 00:12:55,440
Chatbot Arena by LMSYS and other things
317
00:12:52,360 --> 00:12:58,720
like this is to use uh something like an
318
00:12:55,440 --> 00:13:00,959
Elo or TrueSkill ranking and what these
319
00:12:58,720 --> 00:13:03,079
are is these are things that were
320
00:13:00,959 --> 00:13:05,760
created for the ranking of like chess
321
00:13:03,079 --> 00:13:09,160
players or video game players or other
322
00:13:05,760 --> 00:13:11,720
things where they like battle against
323
00:13:09,160 --> 00:13:13,920
each other in multiple matches uh
324
00:13:11,720 --> 00:13:16,440
pair-wise and then you put all of the
325
00:13:13,920 --> 00:13:18,399
wins and losses into these ranking
326
00:13:16,440 --> 00:13:20,600
algorithms and they give you a score
327
00:13:18,399 --> 00:13:22,920
about how good like each of the each of
328
00:13:20,600 --> 00:13:27,079
the players are so if you do something
329
00:13:22,920 --> 00:13:29,480
like this you can um get basically a
330
00:13:27,079 --> 00:13:32,120
ranking of systems despite the fact that you
331
00:13:29,480 --> 00:13:35,240
only did pairwise assessments so these
332
00:13:32,120 --> 00:13:35,240
are also a good thing to know about
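As a concrete illustration of turning pairwise judgments into per-system scores, here is a minimal Elo-style sketch; the system names, judgments, and K-factor are hypothetical, and real leaderboards use more refined variants (Bradley-Terry fits, TrueSkill, and so on):

```python
# Minimal Elo-style rating from pairwise human preferences.
def expected_score(r_a, r_b):
    """Probability that system A is preferred over system B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings, winner, loser, k=32):
    """Update both ratings in place after one pairwise comparison."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_w)          # winner gains
    ratings[loser] += k * (0 - (1 - e_w))     # loser loses the same amount

# Hypothetical systems all start at the same rating.
ratings = {"system_A": 1500.0, "system_B": 1500.0, "system_C": 1500.0}

# Hypothetical pairwise judgments: (preferred output, dispreferred output).
judgments = [("system_A", "system_B"), ("system_A", "system_C"),
             ("system_B", "system_C"), ("system_A", "system_B")]

for winner, loser in judgments:
    update_elo(ratings, winner, loser)

# Higher rating = more often preferred, even though only pairwise data was collected.
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```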
333
00:13:37,399 --> 00:13:43,839
a final variety of human feedback
334
00:13:40,600 --> 00:13:45,320
uh that we create is uh error annotation
335
00:13:43,839 --> 00:13:47,519
and this can be useful for a number of
336
00:13:45,320 --> 00:13:49,839
reasons um but basically the way it
337
00:13:47,519 --> 00:13:53,839
works is you annotate individual errors
338
00:13:49,839 --> 00:13:55,639
within the outputs and um oh one thing I
339
00:13:53,839 --> 00:13:58,120
should mention is that um I'm giving a
340
00:13:55,639 --> 00:14:00,880
lot of examples from machine translation
341
00:13:58,120 --> 00:14:02,800
um I feel like machine translation has
342
00:14:00,880 --> 00:14:04,519
been doing evaluation of generated
343
00:14:02,800 --> 00:14:07,600
outputs for a lot longer than a lot of
344
00:14:04,519 --> 00:14:09,000
other uh fields of NLP have and
345
00:14:07,600 --> 00:14:11,800
therefore their methodology is more
346
00:14:09,000 --> 00:14:13,480
developed than a lot of other fields um
347
00:14:11,800 --> 00:14:16,199
but a lot of these things can also be
348
00:14:13,480 --> 00:14:18,079
applied to uh other tasks as
349
00:14:16,199 --> 00:14:19,079
well but anyway getting back to this
350
00:14:18,079 --> 00:14:20,680
there's something for machine
351
00:14:19,079 --> 00:14:23,639
translation called multi-dimensional
352
00:14:20,680 --> 00:14:26,240
quality metrics and the multidimensional
353
00:14:23,639 --> 00:14:29,160
quality metrics basically what they do
354
00:14:26,240 --> 00:14:32,199
is they annotate spans in the output
355
00:14:29,160 --> 00:14:34,800
where each Span in the output is given a
356
00:14:32,199 --> 00:14:38,079
severity ranking of the error and it's
357
00:14:34,800 --> 00:14:40,199
given a type of the error and there's
358
00:14:38,079 --> 00:14:42,600
about eight different types of Errors
359
00:14:40,199 --> 00:14:44,839
like this violates
360
00:14:42,600 --> 00:14:47,399
linguistic conventions by using
361
00:14:44,839 --> 00:14:49,880
the word a instead of this
362
00:14:47,399 --> 00:14:51,639
here
363
00:14:49,880 --> 00:14:55,079
and then this is an accuracy error
364
00:14:51,639 --> 00:14:57,839
because it's not accurately
365
00:14:55,079 --> 00:15:01,720
conveying the input and then this error
366
00:14:57,839 --> 00:15:04,600
is minor uh this error is Major um and
367
00:15:01,720 --> 00:15:06,399
then there's also like severe
368
00:15:04,600 --> 00:15:07,440
versus major but minor and major is a
369
00:15:06,399 --> 00:15:09,680
more important distinction
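To picture what one of these annotations looks like in practice, here is a small sketch; the field names, character offsets, and severity weights are illustrative assumptions rather than the official MQM schema:

```python
# Sketch of an MQM-style span annotation: each error span gets a category
# and a severity, and a weighted sum turns the spans into a single penalty.
from dataclasses import dataclass

@dataclass
class ErrorSpan:
    start: int        # character offset where the error span begins
    end: int          # character offset where the error span ends (exclusive)
    category: str     # e.g. "accuracy" or "linguistic conventions"
    severity: str     # e.g. "minor", "major", "critical"

hypothesis = "please send a package to Tokyo"
annotations = [
    ErrorSpan(12, 13, "linguistic conventions", "minor"),  # "a" instead of "this"
    ErrorSpan(25, 30, "accuracy", "major"),                # "Tokyo" instead of "Pittsburgh"
]

# Illustrative severity weights (real MQM setups define their own weighting).
severity_weights = {"minor": 1, "major": 5, "critical": 10}
penalty = -sum(severity_weights[a.severity] for a in annotations)
print(penalty)  # -6 for this hypothetical annotation
```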
370
00:15:07,440 --> 00:15:11,839
um so the advantage of this
371
00:15:09,680 --> 00:15:14,279
is a couple fold number one it gives you
372
00:15:11,839 --> 00:15:16,440
more fine grained feedback uh in that
373
00:15:14,279 --> 00:15:19,199
you can say okay this system has a lot
374
00:15:16,440 --> 00:15:22,199
of uh accuracy errors this system has a
375
00:15:19,199 --> 00:15:24,880
lot of linguistic conventions errors um
376
00:15:22,199 --> 00:15:28,600
it also can be more consistent because
377
00:15:24,880 --> 00:15:29,839
if you just say to people which output
378
00:15:28,600 --> 00:15:31,800
is better
379
00:15:29,839 --> 00:15:34,560
or what is the score of this output
380
00:15:31,800 --> 00:15:36,360
people have trouble deciding about that
381
00:15:34,560 --> 00:15:39,560
because it's a more subjective
382
00:15:36,360 --> 00:15:41,680
evaluation but if I say is this word
383
00:15:39,560 --> 00:15:43,000
correct it's a little bit easier for
384
00:15:41,680 --> 00:15:44,759
people to do so you can get more
385
00:15:43,000 --> 00:15:46,920
consistent annotations
386
00:15:44,759 --> 00:15:49,720
here the problem with this is this can
387
00:15:46,920 --> 00:15:50,839
be very time consuming so um you know
388
00:15:49,720 --> 00:15:52,480
obviously you need to go through and
389
00:15:50,839 --> 00:15:56,440
annotate every single error if it's for
390
00:15:52,480 --> 00:15:56,440
long outputs or something that's a
391
00:15:56,959 --> 00:16:03,519
problem so anyway these are just three
392
00:15:59,800 --> 00:16:05,680
uh ways of collecting human feedback um
393
00:16:03,519 --> 00:16:08,639
and then there's an alternative which is
394
00:16:05,680 --> 00:16:10,079
automatic evaluation of outputs and um
395
00:16:08,639 --> 00:16:14,399
there's a bunch of different ways we can
396
00:16:10,079 --> 00:16:16,800
do this the basic idea here is we have a
397
00:16:14,399 --> 00:16:20,199
source um we have a couple
398
00:16:16,800 --> 00:16:22,800
hypotheses and uh we have an automatic
399
00:16:20,199 --> 00:16:26,000
system that generates outputs uh like
400
00:16:22,800 --> 00:16:28,279
scores and we optionally have a
401
00:16:26,000 --> 00:16:30,839
reference output so the reference output
402
00:16:28,279 --> 00:16:33,519
is a human created gold standard output
403
00:16:30,839 --> 00:16:35,120
with respect to how good that um uh with
404
00:16:33,519 --> 00:16:38,240
respect to like what the output should
405
00:16:35,120 --> 00:16:38,240
be in an ideal
406
00:16:38,279 --> 00:16:47,079
case and basically the goal of automatic
407
00:16:43,199 --> 00:16:50,199
evaluation is to
408
00:16:47,079 --> 00:16:52,839
predict human preferences or to predict
409
00:16:50,199 --> 00:16:56,240
what the human scores would be um
410
00:16:52,839 --> 00:16:58,600
because still at this point um we mostly
411
00:16:56,240 --> 00:16:59,480
view what humans think of the output to
412
00:16:58,600 --> 00:17:01,680
be
413
00:16:59,480 --> 00:17:03,280
uh kind of the
414
00:17:01,680 --> 00:17:06,199
standard
415
00:17:03,280 --> 00:17:08,439
and this is called a variety of things
416
00:17:06,199 --> 00:17:10,600
depending on what field you're in um in
417
00:17:08,439 --> 00:17:12,559
machine translation and summarization
418
00:17:10,600 --> 00:17:13,520
it's called automatic evaluation also a
419
00:17:12,559 --> 00:17:16,520
lot in
420
00:17:13,520 --> 00:17:18,400
dialogue um if you're talking about
421
00:17:16,520 --> 00:17:21,000
people from reinforcement learning or
422
00:17:18,400 --> 00:17:24,600
other things um or chat Bots or things
423
00:17:21,000 --> 00:17:28,240
like that uh a lot of people or uh like
424
00:17:24,600 --> 00:17:31,280
AGI or whatever um a lot of people call
425
00:17:28,240 --> 00:17:32,520
it uh a reward model um because that
426
00:17:31,280 --> 00:17:34,480
specifically comes from the point of
427
00:17:32,520 --> 00:17:36,440
view of like learning from this feedback
428
00:17:34,480 --> 00:17:37,960
but essentially they're the same thing
429
00:17:36,440 --> 00:17:41,080
uh from my point of view they're trying
430
00:17:37,960 --> 00:17:42,520
to predict how good an output is and how
431
00:17:41,080 --> 00:17:44,240
much you should reward the model for
432
00:17:42,520 --> 00:17:46,559
producing that
433
00:17:44,240 --> 00:17:48,679
output
434
00:17:46,559 --> 00:17:50,520
um so there's a bunch of different
435
00:17:48,679 --> 00:17:51,720
methods to do this I'm not going to
436
00:17:50,520 --> 00:17:53,799
cover all of them I'm just going to
437
00:17:51,720 --> 00:17:55,240
cover three paradigms for doing this so
438
00:17:53,799 --> 00:17:57,880
you know where to look further if you're
439
00:17:55,240 --> 00:18:00,039
interested in doing these things um the
440
00:17:57,880 --> 00:18:02,400
first one is embedding based
441
00:18:00,039 --> 00:18:04,679
evaluation and the way embedding based
442
00:18:02,400 --> 00:18:06,600
evaluation works is usually it's
443
00:18:04,679 --> 00:18:11,400
unsupervised calculation based on
444
00:18:06,600 --> 00:18:14,880
embedding similarity between um
445
00:18:11,400 --> 00:18:18,080
the output that the model generated and
446
00:18:14,880 --> 00:18:20,840
a reference output that uh you have
447
00:18:18,080 --> 00:18:23,400
created so sorry this is very small but
448
00:18:20,840 --> 00:18:25,559
we have a reference here that says the
449
00:18:23,400 --> 00:18:27,640
weather is cold today and we have a
450
00:18:25,559 --> 00:18:30,240
candidate that says it is freezing today
451
00:18:27,640 --> 00:18:33,000
so this is probably you know like a good
452
00:18:30,240 --> 00:18:35,480
um a reasonably good
453
00:18:33,000 --> 00:18:37,640
output and we run this through some
454
00:18:35,480 --> 00:18:39,120
embedding model uh this is called BERTScore
455
00:18:37,640 --> 00:18:40,679
and so of course you can run it
456
00:18:39,120 --> 00:18:42,240
through BERT but basically it can be any
457
00:18:40,679 --> 00:18:43,799
embedding model that gives you embedding
458
00:18:42,240 --> 00:18:46,200
for each token in the
459
00:18:43,799 --> 00:18:47,640
sequence and so there are five tokens in
460
00:18:46,200 --> 00:18:49,720
this sequence four tokens in this
461
00:18:47,640 --> 00:18:51,960
sequence you get five tokens and then
462
00:18:49,720 --> 00:18:54,799
four sorry five embeddings and then four
463
00:18:51,960 --> 00:18:57,400
embeddings you calculate pairwise cosine
464
00:18:54,799 --> 00:18:59,880
similarity between all of them and this
465
00:18:57,400 --> 00:19:03,480
gives you cosine
466
00:18:59,880 --> 00:19:06,480
similarity Matrix and then you take the
467
00:19:03,480 --> 00:19:09,120
argmax or you take the maximum
468
00:19:06,480 --> 00:19:11,280
similarity along either the
469
00:19:09,120 --> 00:19:15,799
rows or the
470
00:19:11,280 --> 00:19:19,559
columns and here the rows correspond
471
00:19:15,799 --> 00:19:22,400
to tokens in the reference and because
472
00:19:19,559 --> 00:19:24,039
the rows correspond to tokens in the
473
00:19:22,400 --> 00:19:26,960
reference
474
00:19:24,039 --> 00:19:28,320
how well you find something that is
475
00:19:26,960 --> 00:19:31,679
similar to each of the tokens in the
476
00:19:28,320 --> 00:19:34,000
reference is like a recall based method
477
00:19:31,679 --> 00:19:35,919
because it's saying how many tokens in
478
00:19:34,000 --> 00:19:39,520
the reference have a good match in the
479
00:19:35,919 --> 00:19:41,120
output and then if you look at the
480
00:19:39,520 --> 00:19:42,799
columns if you look at the max and the
481
00:19:41,120 --> 00:19:44,960
columns this is like a precision based
482
00:19:42,799 --> 00:19:47,000
metric because it's saying how many of
483
00:19:44,960 --> 00:19:49,360
the things in the output are similar
484
00:19:47,000 --> 00:19:51,240
have a similar match in the reference so
485
00:19:49,360 --> 00:19:54,480
basically you can calculate recall and
486
00:19:51,240 --> 00:19:56,200
precision over all of the tokens and
487
00:19:54,480 --> 00:20:00,200
then feed this into something that looks
488
00:19:56,200 --> 00:20:02,400
like F-measure and you can also use TF-IDF
489
00:20:00,200 --> 00:20:06,000
weighting um like what I talked about in
490
00:20:02,400 --> 00:20:07,799
the rag lecture uh to upweight low
491
00:20:06,000 --> 00:20:09,520
frequency words because low frequency
492
00:20:07,799 --> 00:20:11,440
words tend to be more content words and
493
00:20:09,520 --> 00:20:13,120
going back to my example you know if you
494
00:20:11,440 --> 00:20:14,280
make a mistake from Pittsburgh to Tokyo
495
00:20:13,120 --> 00:20:17,880
that's going to be more painful than
496
00:20:14,280 --> 00:20:21,000
making a mistake from this to a
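Here is a minimal sketch of the greedy-matching computation just described, assuming we already have one embedding per token; the real BERTScore takes these from a pretrained encoder and can add IDF weighting, which is omitted here, and the random vectors below are placeholders:

```python
# Greedy-matching recall / precision / F-measure over a token similarity matrix.
import numpy as np

rng = np.random.default_rng(0)
ref_emb = rng.normal(size=(5, 768))    # e.g. "the weather is cold today" (5 tokens)
cand_emb = rng.normal(size=(4, 768))   # e.g. "it is freezing today"      (4 tokens)

# Normalize rows so dot products are cosine similarities.
ref_emb /= np.linalg.norm(ref_emb, axis=1, keepdims=True)
cand_emb /= np.linalg.norm(cand_emb, axis=1, keepdims=True)

sim = ref_emb @ cand_emb.T             # (ref_tokens, cand_tokens) cosine matrix

recall = sim.max(axis=1).mean()        # best candidate match for each reference token
precision = sim.max(axis=0).mean()     # best reference match for each candidate token
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```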
497
00:20:17,880 --> 00:20:22,520
so actually if you were paying
498
00:20:21,000 --> 00:20:25,480
close attention to the rag lecture this
499
00:20:22,520 --> 00:20:27,360
looks really similar to the ColBERT um
500
00:20:25,480 --> 00:20:29,559
the ColBERT retrieval objective that I
501
00:20:27,360 --> 00:20:30,960
talked about in the RAG lecture um I don't
502
00:20:29,559 --> 00:20:32,840
think it's a coincidence they both came
503
00:20:30,960 --> 00:20:34,360
out around the same time uh so people
504
00:20:32,840 --> 00:20:36,360
were thinking about the same thing but
505
00:20:34,360 --> 00:20:37,600
um this is one method that's pretty
506
00:20:36,360 --> 00:20:40,200
widely
507
00:20:37,600 --> 00:20:43,480
used the BERTScore codebase is also
508
00:20:40,200 --> 00:20:45,440
really nice and easy to use so um if uh
509
00:20:43,480 --> 00:20:47,640
you want to try it out feel free to take
510
00:20:45,440 --> 00:20:47,640
a
511
00:20:48,159 --> 00:20:53,840
look cool um the next one I'd like to
512
00:20:51,600 --> 00:20:56,080
talk about is a regression based
513
00:20:53,840 --> 00:20:58,760
evaluation and the way this works is
514
00:20:56,080 --> 00:21:02,600
this is usually used in a supervised uh
515
00:20:58,760 --> 00:21:04,320
setting so uh the way what you have to
516
00:21:02,600 --> 00:21:07,600
do is you have to calculate a whole
517
00:21:04,320 --> 00:21:09,799
bunch of like actual human
518
00:21:07,600 --> 00:21:12,440
judgments and
519
00:21:09,799 --> 00:21:15,000
usually these judgments can either be
520
00:21:12,440 --> 00:21:16,960
direct assessment uh where you actually
521
00:21:15,000 --> 00:21:19,120
have a score or they can be pairwise
522
00:21:16,960 --> 00:21:20,840
judgments and then if you have direct
523
00:21:19,120 --> 00:21:23,640
assessment you use a regression based
524
00:21:20,840 --> 00:21:26,039
loss like uh mean squared error if
525
00:21:23,640 --> 00:21:27,520
you have pairwise uh you use a ranking
526
00:21:26,039 --> 00:21:29,039
based loss that tries to upweight the
527
00:21:27,520 --> 00:21:31,360
ones that are higher scoring and downweight
528
00:21:29,039 --> 00:21:33,200
the ones that are lower scoring one
529
00:21:31,360 --> 00:21:35,720
typical example of this is Comet which
530
00:21:33,200 --> 00:21:37,200
is or has been at least for a very long
531
00:21:35,720 --> 00:21:39,880
time the state of the art in machine
532
00:21:37,200 --> 00:21:41,279
translation evaluation and the reason
533
00:21:39,880 --> 00:21:43,440
why it works so well is because we have
534
00:21:41,279 --> 00:21:44,720
a bunch of evaluations for machine
535
00:21:43,440 --> 00:21:46,080
translation they've been doing
536
00:21:44,720 --> 00:21:47,600
evaluation of machine translation
537
00:21:46,080 --> 00:21:50,480
systems for years and you can use that
538
00:21:47,600 --> 00:21:52,720
as lots of supervised training data so
539
00:21:50,480 --> 00:21:54,640
basically you just take um these
540
00:21:52,720 --> 00:21:56,440
evaluation data you have human
541
00:21:54,640 --> 00:21:59,080
annotations you have the output
542
00:21:56,440 --> 00:22:00,320
according to a model like comet um you
543
00:21:59,080 --> 00:22:02,679
calculate the difference between them
544
00:22:00,320 --> 00:22:05,640
and you update the model parameters
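A rough sketch of the two training setups just mentioned, assuming each (source, hypothesis, reference) triple has already been encoded into a fixed-size feature vector; COMET itself builds these features from a pretrained encoder, which is omitted here, and the dimensions and data are placeholders:

```python
# Regression onto direct-assessment scores, plus a pairwise ranking loss.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(768, 256), nn.Tanh(), nn.Linear(256, 1))

# Direct-assessment setting: regress onto human scores with squared error.
features = torch.randn(8, 768)       # hypothetical features for 8 outputs
human_scores = torch.rand(8, 1)      # hypothetical human DA scores in [0, 1]
mse_loss = nn.functional.mse_loss(scorer(features), human_scores)

# Pairwise-preference setting: push the preferred output's score above the
# dispreferred one's (a Bradley-Terry style ranking loss).
better = torch.randn(8, 768)         # features of the preferred outputs
worse = torch.randn(8, 768)          # features of the dispreferred outputs
rank_loss = -nn.functional.logsigmoid(scorer(better) - scorer(worse)).mean()

# Backpropagate to update the scorer's parameters.
(mse_loss + rank_loss).backward()
```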
545
00:22:02,679 --> 00:22:07,080
um this is great
546
00:22:05,640 --> 00:22:08,520
if you have lots of training data the
547
00:22:07,080 --> 00:22:10,640
problem with this is for a lot of tasks
548
00:22:08,520 --> 00:22:12,360
we don't have lots of training data so
549
00:22:10,640 --> 00:22:14,720
um you know training these is a little
550
00:22:12,360 --> 00:22:14,720
bit less
551
00:22:15,400 --> 00:22:22,919
feasible and now recently uh what we
552
00:22:19,600 --> 00:22:25,279
have been moving into is a QA-based
553
00:22:22,919 --> 00:22:27,120
evaluation which is basically where we
554
00:22:25,279 --> 00:22:30,760
ask a language model how good the output
555
00:22:27,120 --> 00:22:32,279
is and so uh GEMBA is one of
556
00:22:30,760 --> 00:22:34,559
the early examples of this for machine
557
00:22:32,279 --> 00:22:37,320
translation evaluation uh where they
558
00:22:34,559 --> 00:22:39,840
basically just ask GPT-4 like score
559
00:22:37,320 --> 00:22:41,600
the following translation from Source
560
00:22:39,840 --> 00:22:44,000
language to target language with respect
561
00:22:41,600 --> 00:22:47,080
to the human reference um on a
562
00:22:44,000 --> 00:22:49,200
continuous scale from 0 to 100 uh where
563
00:22:47,080 --> 00:22:51,320
the score of zero means no meaning
564
00:22:49,200 --> 00:22:54,039
preserved and the score of 100 means a
565
00:22:51,320 --> 00:22:56,880
perfect meaning and grammar uh you feed
566
00:22:54,039 --> 00:22:58,760
in the source um you feed in the
567
00:22:56,880 --> 00:23:01,000
human reference optionally if you have a
568
00:22:58,760 --> 00:23:03,320
human reference and then you feed in the
569
00:23:01,000 --> 00:23:06,760
Target um and you get a
570
00:23:03,320 --> 00:23:09,919
score and um so this works pretty well
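Here is a sketch of the kind of prompt being described; the wording is paraphrased from this description rather than copied from the GEMBA paper, and the source, reference, and hypothesis strings are hypothetical, so treat it as a template to adapt:

```python
# Build a GEMBA-style direct-assessment prompt for an LLM judge.
GEMBA_DA_PROMPT = (
    "Score the following translation from {src_lang} to {tgt_lang} with respect "
    "to the human reference on a continuous scale from 0 to 100, where a score "
    'of zero means "no meaning preserved" and a score of one hundred means '
    '"perfect meaning and grammar".\n\n'
    '{src_lang} source: "{source}"\n'
    '{tgt_lang} human reference: "{reference}"\n'
    '{tgt_lang} translation: "{hypothesis}"\n'
    "Score:"
)

prompt = GEMBA_DA_PROMPT.format(
    src_lang="German",
    tgt_lang="English",
    source="Bitte schicken Sie dieses Paket nach Pittsburgh.",  # hypothetical example
    reference="Please send this package to Pittsburgh.",
    hypothesis="Please send a package to Tokyo.",
)
print(prompt)  # send this to the LLM you are using as a judge, then parse the returned number
```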
571
00:23:06,760 --> 00:23:12,720
it can give you uh better results
572
00:23:09,919 --> 00:23:15,159
um especially if you have a
573
00:23:12,720 --> 00:23:16,960
strong language model the problem is
574
00:23:15,159 --> 00:23:18,279
it's very unpredictable whether this is
575
00:23:16,960 --> 00:23:20,120
going to work well and it's very
576
00:23:18,279 --> 00:23:23,039
dependent on the prompt that you're
577
00:23:20,120 --> 00:23:25,279
using so um right now A lot of people
578
00:23:23,039 --> 00:23:27,279
are using GPT-4 without actually
579
00:23:25,279 --> 00:23:29,039
validating whether it does a good job at
580
00:23:27,279 --> 00:23:33,080
evaluation and
581
00:23:29,039 --> 00:23:34,919
and the results are all across the
582
00:23:33,080 --> 00:23:36,880
board it can be anywhere from very very
583
00:23:34,919 --> 00:23:38,640
good to very very bad at evaluating
584
00:23:36,880 --> 00:23:41,320
particular tasks so I would be at least
585
00:23:38,640 --> 00:23:43,559
a little bit suspicious of whether GPT-4
586
00:23:41,320 --> 00:23:45,679
is doing a good job evaluating for your
587
00:23:43,559 --> 00:23:49,320
task especially more complex
588
00:23:45,679 --> 00:23:51,960
tasks um I would especially be
589
00:23:49,320 --> 00:23:54,000
suspicious if you're doing uh any of
590
00:23:51,960 --> 00:23:56,760
the two following things number one if
591
00:23:54,000 --> 00:23:59,880
you're comparing GPT-4 or any model
592
00:23:56,760 --> 00:24:02,400
against itself and another model because
593
00:23:59,880 --> 00:24:05,200
GPT-4 really likes
594
00:24:02,400 --> 00:24:06,880
GPT-4 it really likes its own outputs and
595
00:24:05,200 --> 00:24:08,120
there are papers uh sorry I don't
596
00:24:06,880 --> 00:24:09,679
actually have the references here but I
597
00:24:08,120 --> 00:24:11,200
can follow up if people are interested
598
00:24:09,679 --> 00:24:13,080
but there are papers that demonstrate
599
00:24:11,200 --> 00:24:15,799
that gp4 likes it you know its own
600
00:24:13,080 --> 00:24:19,200
outputs more than others also if you're
601
00:24:15,799 --> 00:24:22,120
explicitly optimizing the outputs using
602
00:24:19,200 --> 00:24:24,640
RLHF um there is something called
603
00:24:22,120 --> 00:24:27,120
Goodhart's law which is basically anytime
604
00:24:24,640 --> 00:24:29,520
you uh start optimizing towards a metric
605
00:24:27,120 --> 00:24:32,559
it becomes a bad metric and that also
606
00:24:29,520 --> 00:24:35,000
happens for GPT-4-based evaluations so if
607
00:24:32,559 --> 00:24:37,200
you start optimizing for GPT-4-based
608
00:24:35,000 --> 00:24:38,960
evaluations especially for reference-less
609
00:24:37,200 --> 00:24:41,679
metrics that don't use a reference
610
00:24:38,960 --> 00:24:44,840
output then um you start basically
611
00:24:41,679 --> 00:24:47,440
exploiting the metric
612
00:24:44,840 --> 00:24:49,840
um another thing that you can do with QA
613
00:24:47,440 --> 00:24:53,279
based evaluation is ask about fine-grained
614
00:24:49,840 --> 00:24:54,919
mistakes and so this is a paper by um uh
615
00:24:53,279 --> 00:24:56,480
Patrick Fernandez who's a student who's
616
00:24:54,919 --> 00:25:02,080
working with me and basically what we
617
00:24:56,480 --> 00:25:05,240
did is we asked the model to um not give
618
00:25:02,080 --> 00:25:07,360
a particular score but actually identify
619
00:25:05,240 --> 00:25:08,880
the mistakes in the output and when we
620
00:25:07,360 --> 00:25:10,559
asked it to identify the mistakes in the
621
00:25:08,880 --> 00:25:13,720
output we found that this gave more
622
00:25:10,559 --> 00:25:17,320
consistent uh results so kind of
623
00:25:13,720 --> 00:25:18,840
interestingly we ask humans to identify
624
00:25:17,320 --> 00:25:21,120
individual mistakes and the output that
625
00:25:18,840 --> 00:25:24,240
gives humans more consistent results
626
00:25:21,120 --> 00:25:25,559
it's the same thing for GPT-4 so um that
627
00:25:24,240 --> 00:25:27,320
that's another paper you can look at if
628
00:25:25,559 --> 00:25:29,640
you're
629
00:25:27,320 --> 00:25:32,679
interested
630
00:25:29,640 --> 00:25:38,000
cool um so I mentioned that you could
631
00:25:32,679 --> 00:25:38,000
or could not uh trust uh yeah sorry go
632
00:25:44,679 --> 00:25:51,279
ahead uh correct so yeah basically
633
00:25:47,360 --> 00:25:53,279
just what you do is you have the source
634
00:25:51,279 --> 00:25:54,960
um ideally you'll also have a reference
635
00:25:53,279 --> 00:25:57,840
output that was created by skilled
636
00:25:54,960 --> 00:25:59,720
humans and then you put in the Target
637
00:25:57,840 --> 00:26:02,279
you know output basically you have the
638
00:25:59,720 --> 00:26:08,000
input ideally a reference output created
639
00:26:02,279 --> 00:26:08,000
by skilled humans and uh the
640
00:26:15,159 --> 00:26:20,240
hypothesis yeah I
641
00:26:17,919 --> 00:26:24,559
mean it's a good question and I don't
642
00:26:20,240 --> 00:26:26,919
know if we actually have a very clear
643
00:26:24,559 --> 00:26:31,399
empirical like evidence of why this is
644
00:26:26,919 --> 00:26:33,320
the case but my hypothesis about this is
645
00:26:31,399 --> 00:26:36,159
yes we kind of would expect models to be
646
00:26:33,320 --> 00:26:38,200
more biased towards their own outputs
647
00:26:36,159 --> 00:26:40,919
and the reason why is because
648
00:26:38,200 --> 00:26:43,080
essentially you know models
649
00:26:40,919 --> 00:26:44,279
are within their embeddings they're
650
00:26:43,080 --> 00:26:45,760
encoding when they're in a high
651
00:26:44,279 --> 00:26:47,600
probability part of the space and when
652
00:26:45,760 --> 00:26:50,200
they're in a low probability part of the
653
00:26:47,600 --> 00:26:51,120
space and like the high probability part
654
00:26:50,200 --> 00:26:54,600
of the
655
00:26:51,120 --> 00:26:56,200
space
656
00:26:54,600 --> 00:26:58,600
is going
657
00:26:56,200 --> 00:27:02,559
to be associated with good outputs
658
00:26:58,600 --> 00:27:07,000
because like when
659
00:27:02,559 --> 00:27:08,600
models are more sure of their outputs
660
00:27:07,000 --> 00:27:11,960
they're more likely to be
661
00:27:08,600 --> 00:27:13,520
good just because that indicates that
662
00:27:11,960 --> 00:27:15,240
like they're closer to the training data
663
00:27:13,520 --> 00:27:17,760
that it had and other things like that
664
00:27:15,240 --> 00:27:21,600
so model probabilities are associated
665
00:27:17,760 --> 00:27:23,760
uh with good
666
00:27:21,600 --> 00:27:26,600
outputs but just
667
00:27:23,760 --> 00:27:29,440
correlated separately from
668
00:27:26,600 --> 00:27:32,120
that I believe a model can identify when
669
00:27:29,440 --> 00:27:33,320
it's in a high probability segment of
670
00:27:32,120 --> 00:27:35,799
the space and when it's in a low
671
00:27:33,320 --> 00:27:39,399
probability segment of the space and
672
00:27:35,799 --> 00:27:39,399
because of that I expect
673
00:27:39,519 --> 00:27:45,519
that I like there are segments of the
674
00:27:43,240 --> 00:27:47,120
embedding space where it's more likely
675
00:27:45,519 --> 00:27:48,360
to answer yes about something being good
676
00:27:47,120 --> 00:27:50,960
or not and those are going to be
677
00:27:48,360 --> 00:27:54,760
associated with high uh like high
678
00:27:50,960 --> 00:27:56,159
probability outputs as well and also
679
00:27:54,760 --> 00:27:57,760
models are more likely to generate
680
00:27:56,159 --> 00:28:00,240
outputs that are high probability
681
00:27:57,760 --> 00:28:02,320
according to their model by definition
682
00:28:00,240 --> 00:28:03,880
so all three of those effects together
683
00:28:02,320 --> 00:28:05,640
would basically go into a model being
684
00:28:03,880 --> 00:28:09,120
biased towards its own outputs compared
685
00:28:05,640 --> 00:28:11,559
to the outputs of another model but um
686
00:28:09,120 --> 00:28:13,279
yeah this is a very handwavy explanation
687
00:28:11,559 --> 00:28:15,519
but like putting the three
688
00:28:13,279 --> 00:28:18,600
together models output high probability
689
00:28:15,519 --> 00:28:20,880
things from their own probability Space
690
00:28:18,600 --> 00:28:23,440
by definition
691
00:28:20,880 --> 00:28:25,760
um things that are high probability are
692
00:28:23,440 --> 00:28:27,519
associated with being good uh just
693
00:28:25,760 --> 00:28:29,279
because otherwise a model would be
694
00:28:27,519 --> 00:28:31,840
outputting garbage
695
00:28:29,279 --> 00:28:33,840
and um the final thing which is more
696
00:28:31,840 --> 00:28:35,679
tenuous is if the model is in a high
697
00:28:33,840 --> 00:28:37,919
probability segment of the space it's
698
00:28:35,679 --> 00:28:39,760
more likely to Output yes according to a
699
00:28:37,919 --> 00:28:41,480
question of it being good and I think
700
00:28:39,760 --> 00:28:44,360
that's probably true but I'm not 100%
701
00:28:41,480 --> 00:28:44,360
sure about the
702
00:28:45,559 --> 00:28:51,039
final one um maybe someone wants to
703
00:28:49,000 --> 00:28:52,840
examine that as a final
704
00:28:51,039 --> 00:28:54,200
project it seems like an
705
00:28:52,840 --> 00:28:57,080
interesting
706
00:28:54,200 --> 00:29:00,039
question um cool uh were there any other
707
00:28:57,080 --> 00:29:00,039
questions about these methods
708
00:29:00,159 --> 00:29:07,120
here um okay so when I say like an
709
00:29:03,960 --> 00:29:11,080
evaluation metric is good or not what do
710
00:29:07,120 --> 00:29:13,200
I mean by this being good or not um or a
711
00:29:11,080 --> 00:29:16,880
reward model or whatever else and
712
00:29:13,200 --> 00:29:18,440
basically the um the way we typically do
713
00:29:16,880 --> 00:29:19,840
this is by doing something called meta
714
00:29:18,440 --> 00:29:22,440
evaluation so it's called meta
715
00:29:19,840 --> 00:29:25,799
evaluation because it's evaluation of
716
00:29:22,440 --> 00:29:29,279
evaluation and uh the way we do this is
717
00:29:25,799 --> 00:29:32,519
we have human uh scores and we have
718
00:29:29,279 --> 00:29:34,760
automatic scores and we usually
719
00:29:32,519 --> 00:29:38,640
calculate some sort of correlation
720
00:29:34,760 --> 00:29:41,000
between the scores so um typical ones
721
00:29:38,640 --> 00:29:46,440
are rank correlations like Pearson's
722
00:29:41,000 --> 00:29:48,799
correlation or tendle uh Tow and uh so
723
00:29:46,440 --> 00:29:51,200
the more Associated the automatic scores
724
00:29:48,799 --> 00:29:53,960
are with the human scores the higher
725
00:29:51,200 --> 00:29:55,159
these correlations are going to be um
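For instance, a minimal meta-evaluation sketch along these lines might look as follows; the score lists are made-up numbers purely for illustration:

```python
# Correlate automatic metric scores against human scores for the same outputs.
from scipy.stats import pearsonr, kendalltau

human_scores = [82.0, 45.0, 90.0, 60.0, 30.0, 75.0]   # hypothetical human judgments
metric_scores = [0.79, 0.41, 0.88, 0.66, 0.35, 0.70]  # hypothetical automatic scores

pearson_r, _ = pearsonr(human_scores, metric_scores)
kendall_tau, _ = kendalltau(human_scores, metric_scores)
print(f"Pearson r = {pearson_r:.3f}, Kendall tau = {kendall_tau:.3f}")

# For pairwise preference data you can instead report accuracy: how often the
# metric ranks the human-preferred output higher.
```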
726
00:29:53,960 --> 00:29:57,559
there's other things that you can
727
00:29:55,159 --> 00:30:00,080
calculate so if you're trying to figure
728
00:29:57,559 --> 00:30:01,640
out whether a model um matches human
729
00:30:00,080 --> 00:30:04,279
pairwise preferences you can just
730
00:30:01,640 --> 00:30:06,440
calculate accuracy so I didn't put that
731
00:30:04,279 --> 00:30:08,080
on um I didn't put that on the slide
732
00:30:06,440 --> 00:30:10,880
here but you can just calculate accuracy
733
00:30:08,080 --> 00:30:13,120
of pairwise preferences um you can also
734
00:30:10,880 --> 00:30:15,360
calculate the absolute error between the
735
00:30:13,120 --> 00:30:19,320
judgments if you want to know uh
736
00:30:15,360 --> 00:30:21,720
whether the absolute error matches so um
737
00:30:19,320 --> 00:30:24,159
the these are good things to do if you
738
00:30:21,720 --> 00:30:25,600
want to use an evaluation metric but you
739
00:30:24,159 --> 00:30:27,200
aren't sure whether it's good or not I
740
00:30:25,600 --> 00:30:29,640
would check to see whether the authors
741
00:30:27,200 --> 00:30:32,000
have done this sort of meta evaluation
742
00:30:29,640 --> 00:30:33,760
if they haven't be a little bit
743
00:30:32,000 --> 00:30:36,960
suspicious if they have be a little bit
744
00:30:33,760 --> 00:30:39,799
less suspicious but um
745
00:30:36,960 --> 00:30:42,960
yeah how do people do this typically uh
746
00:30:39,799 --> 00:30:45,640
usually they create uh data sets like
747
00:30:42,960 --> 00:30:49,440
the WM they use data sets like the WMT
748
00:30:45,640 --> 00:30:53,960
shared tasks um or
749
00:30:49,440 --> 00:30:57,679
uh uh like some evl um but there's also
750
00:30:53,960 --> 00:30:59,960
other ways to create um uh there's also
751
00:30:57,679 --> 00:31:01,639
Lots other data sets but in order to do
752
00:30:59,960 --> 00:31:05,639
this reliably you need a fairly large
753
00:31:01,639 --> 00:31:05,639
data set so it's one thing to be aware
754
00:31:07,080 --> 00:31:10,760
of
755
00:31:08,720 --> 00:31:14,200
cool
756
00:31:10,760 --> 00:31:16,360
um then the final thing um all of the
757
00:31:14,200 --> 00:31:17,919
automatic evaluation methods that I
758
00:31:16,360 --> 00:31:20,240
talked about now are trying to match
759
00:31:17,919 --> 00:31:22,679
human preferences but that's not the
760
00:31:20,240 --> 00:31:24,960
only thing that you necessarily want to
761
00:31:22,679 --> 00:31:28,440
do the final thing that you might want
762
00:31:24,960 --> 00:31:30,840
to do is uh use the model outputs in a
763
00:31:28,440 --> 00:31:34,200
downstream system and see whether they
764
00:31:30,840 --> 00:31:36,399
are effective for that so there's two
765
00:31:34,200 --> 00:31:39,080
concepts of intrinsic evaluation and
766
00:31:36,399 --> 00:31:41,720
extrinsic evaluation so intrinsic
767
00:31:39,080 --> 00:31:44,159
evaluation um evaluates the quality of
768
00:31:41,720 --> 00:31:45,720
the output itself and so that would be
769
00:31:44,159 --> 00:31:48,639
like asking a human directly about how
770
00:31:45,720 --> 00:31:50,720
good is this output extrinsic evaluation
771
00:31:48,639 --> 00:31:53,679
is evaluating output quality by its
772
00:31:50,720 --> 00:31:57,000
utility um and so just to give one
773
00:31:53,679 --> 00:31:58,360
example um you can evaluate large
774
00:31:57,000 --> 00:32:00,200
language model summary
775
00:31:58,360 --> 00:32:04,200
through question answering
776
00:32:00,200 --> 00:32:05,880
accuracy um and so you can take the
777
00:32:04,200 --> 00:32:07,399
output of an llm and feed it through a
778
00:32:05,880 --> 00:32:09,600
question answering model and see whether
779
00:32:07,399 --> 00:32:12,399
you're able to answer questions based on
780
00:32:09,600 --> 00:32:15,799
this and that kind of gives you a better
781
00:32:12,399 --> 00:32:18,279
idea of whether the summary uh
782
00:32:15,799 --> 00:32:20,120
incorporates the requisite information
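As a tiny illustration of that extrinsic setup, here is a sketch where a stub stands in for the downstream QA model; the summary and the question-answer pairs are hypothetical:

```python
# Judge a summary by how well a downstream QA step answers questions from it.
def answer_question(context: str, question: str) -> str:
    # Stub: a real system would run a QA model over the summary here; this
    # placeholder just returns a naive string so the sketch is runnable.
    return "Pittsburgh" if "Pittsburgh" in context else "unknown"

summary = "The package was shipped to Pittsburgh on Monday."   # hypothetical LLM summary
qa_pairs = [("Where was the package sent?", "Pittsburgh"),
            ("When was it shipped?", "Monday")]                # hypothetical gold QA pairs

correct = sum(answer_question(summary, q).lower() == a.lower() for q, a in qa_pairs)
print(f"extrinsic QA accuracy: {correct / len(qa_pairs):.2f}")
```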
783
00:32:18,279 --> 00:32:22,120
but if you think about anything an LLM can
784
00:32:20,120 --> 00:32:23,760
be used for usually it's part of a
785
00:32:22,120 --> 00:32:26,679
bigger system so you can evaluate it as
786
00:32:23,760 --> 00:32:28,399
a part of that bigger system um the
787
00:32:26,679 --> 00:32:30,639
problem with this is it's a very
788
00:32:28,399 --> 00:32:33,960
indirect way of assessing things so like
789
00:32:30,639 --> 00:32:36,080
let's say your QA model is just bad uh
790
00:32:33,960 --> 00:32:38,480
how can you disentangle the effect of
791
00:32:36,080 --> 00:32:41,679
the LLM summary versus the QA model that's
792
00:32:38,480 --> 00:32:44,120
not a trivial thing to do so ideally
793
00:32:41,679 --> 00:32:47,000
like a combination of these two is
794
00:32:44,120 --> 00:32:47,000
practically the best way
795
00:32:48,039 --> 00:32:52,200
to go cool so
796
00:32:56,039 --> 00:32:59,960
yeah yeah I wouldn't necessarily
797
00:32:58,360 --> 00:33:05,679
say it's harder to do it might even be
798
00:32:59,960 --> 00:33:05,679
easier to do um which is like let's
799
00:33:06,679 --> 00:33:11,720
say Let me let me see if I can come up
800
00:33:09,360 --> 00:33:11,720
with
801
00:33:12,639 --> 00:33:17,600
example what let's
802
00:33:15,000 --> 00:33:19,670
say you
803
00:33:17,600 --> 00:33:22,979
are trying
804
00:33:19,670 --> 00:33:22,979
805
00:33:24,639 --> 00:33:29,760
to let's say you're trying to
806
00:33:30,559 --> 00:33:33,559
guess
807
00:33:39,000 --> 00:33:45,399
whether let's say you're trying to guess
808
00:33:42,399 --> 00:33:46,559
whether someone will be hired at a
809
00:33:45,399 --> 00:33:52,039
company or
810
00:33:46,559 --> 00:33:53,880
not based on an llm generated summary of
811
00:33:52,039 --> 00:33:58,880
their qualifications for a position or
812
00:33:53,880 --> 00:34:01,799
something like that um and
813
00:33:58,880 --> 00:34:03,080
actually maybe this is not a
814
00:34:01,799 --> 00:34:04,720
great example because whether you should
815
00:34:03,080 --> 00:34:06,960
be doing this ethically is a little bit
816
00:34:04,720 --> 00:34:08,159
unclear but let's say you were doing
817
00:34:06,960 --> 00:34:09,560
let's say you were doing something like
818
00:34:08,159 --> 00:34:11,520
that just because it's one example I can
819
00:34:09,560 --> 00:34:14,320
think of right now whether they will get
820
00:34:11,520 --> 00:34:16,320
hired or not is um is clear because you
821
00:34:14,320 --> 00:34:19,399
have an objective answer right whether
822
00:34:16,320 --> 00:34:21,480
they were hired or not um or maybe maybe
823
00:34:19,399 --> 00:34:23,800
another example would be like let's say
824
00:34:21,480 --> 00:34:26,320
um let's say you want to predict the
825
00:34:23,800 --> 00:34:29,599
diagnosis in a medical application based
826
00:34:26,320 --> 00:34:32,960
on an LLM generated
827
00:34:29,599 --> 00:34:35,919
summary of
828
00:34:32,960 --> 00:34:38,480
somebody's you know past medical history
829
00:34:35,919 --> 00:34:40,839
and all this stuff and here you want the
830
00:34:38,480 --> 00:34:43,440
llm generated summary you definitely
831
00:34:40,839 --> 00:34:44,879
want the summary because the summary is
832
00:34:43,440 --> 00:34:47,560
going to be viewed by a doctor who will
833
00:34:44,879 --> 00:34:49,359
make the final decision but you also
834
00:34:47,560 --> 00:34:50,760
have information about the diagnoses of
835
00:34:49,359 --> 00:34:52,399
all the people in your medical system
836
00:34:50,760 --> 00:34:54,560
later because you know they went through
837
00:34:52,399 --> 00:34:56,480
your medical system for years and you
838
00:34:54,560 --> 00:34:58,200
know later like through lots of tests
839
00:34:56,480 --> 00:35:00,800
and stuff uh whether how they were
840
00:34:58,200 --> 00:35:02,320
diagnosed so you generate an LLM based
841
00:35:00,800 --> 00:35:05,000
summary and then you predict the
842
00:35:02,320 --> 00:35:06,599
diagnosis from the summary so there the
843
00:35:05,000 --> 00:35:08,040
evaluation of the diagnosis is very
844
00:35:06,599 --> 00:35:11,480
clear because you kind of have a gold
845
00:35:08,040 --> 00:35:12,599
standard answer um but the intrinsic
846
00:35:11,480 --> 00:35:14,839
evaluation of whether it's a good
847
00:35:12,599 --> 00:35:16,839
summary or not is not as clear because
848
00:35:14,839 --> 00:35:19,400
you'd have to assess whether it's a good and
849
00:35:16,839 --> 00:35:21,079
understandable summary so the extrinsic
850
00:35:19,400 --> 00:35:24,920
evaluation might be easier because it's
851
00:35:21,079 --> 00:35:26,480
clearer um so there are cases like that
852
00:35:24,920 --> 00:35:30,720
um the problem is you would have to have
853
00:35:26,480 --> 00:35:33,800
that data in order to do that um yeah do
854
00:35:30,720 --> 00:35:38,240
like evaluation yeah I was just
855
00:35:33,800 --> 00:35:40,800
wondering typically the
856
00:35:38,240 --> 00:35:42,880
like how do you accommodate the
857
00:35:40,800 --> 00:35:47,160
diversity oh yeah that's a great
858
00:35:42,880 --> 00:35:50,240
question um so how do
859
00:35:47,160 --> 00:35:50,240
you get these scores
860
00:35:50,720 --> 00:35:55,800
here there's a number of different
861
00:35:53,200 --> 00:35:59,160
things in the WMT shared tasks what they
862
00:35:55,800 --> 00:36:00,280
did is they did
863
00:35:59,160 --> 00:36:03,200
the first thing they do is they
864
00:36:00,280 --> 00:36:06,319
normalize by annotator and what they do
865
00:36:03,200 --> 00:36:10,400
is they basically take the z-score
866
00:36:06,319 --> 00:36:12,240
of the human annotator's
867
00:36:10,400 --> 00:36:14,880
actual scores because some people are
868
00:36:12,240 --> 00:36:16,400
more harsh than other people and so what
869
00:36:14,880 --> 00:36:20,680
that means is you basically normalize to
870
00:36:16,400 --> 00:36:22,119
have zero mean and unit variance um and
871
00:36:20,680 --> 00:36:24,119
then after they've normalized to zero
872
00:36:22,119 --> 00:36:29,560
mean and unit variance then I think they
873
00:36:24,119 --> 00:36:29,560
average together different humans so um
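To make that normalization step concrete, here is a minimal sketch of z-scoring each annotator's raw scores and then averaging across annotators per segment. It uses pandas; the column names and toy scores are made up for illustration and this is not the actual WMT tooling.

```python
import pandas as pd

# Toy direct-assessment style data: one row per (annotator, segment) judgment.
df = pd.DataFrame({
    "annotator": ["a1", "a1", "a2", "a2", "a2"],
    "segment":   ["s1", "s2", "s1", "s2", "s3"],
    "score":     [80.0, 60.0, 95.0, 90.0, 85.0],
})

# Normalize each annotator's scores to zero mean and unit variance,
# so harsh and lenient raters become comparable.
df["z"] = df.groupby("annotator")["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

# Average the normalized scores from different annotators for each segment.
segment_scores = df.groupby("segment")["z"].mean()
print(segment_scores)
```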
874
00:36:30,160 --> 00:36:36,520
then for how do you deal with the fact
875
00:36:33,680 --> 00:36:38,040
that humans disagree on things and I
876
00:36:36,520 --> 00:36:39,480
think it's pretty varied I don't know if
877
00:36:38,040 --> 00:36:42,160
there's any gold standard way of doing
878
00:36:39,480 --> 00:36:43,839
it but sometimes you just average
879
00:36:42,160 --> 00:36:46,359
sometimes you throw away examples where
880
00:36:43,839 --> 00:36:47,960
humans disagree a lot um because like
881
00:36:46,359 --> 00:36:50,200
you can't get the humans to agree how
882
00:36:47,960 --> 00:36:53,319
could you expect a
883
00:36:50,200 --> 00:36:55,119
machine to do well um so I think it it's
884
00:36:53,319 --> 00:36:59,200
a little bit task
885
00:36:55,119 --> 00:37:01,560
dependent yeah so for
886
00:36:59,200 --> 00:37:04,560
generation intrinsic
887
00:37:01,560 --> 00:37:06,280
and extrinsic yeah so for code generation that's
888
00:37:04,560 --> 00:37:08,200
I I I love this example because I've
889
00:37:06,280 --> 00:37:09,960
worked on code generation a lot of
890
00:37:08,200 --> 00:37:12,680
people only think about extrinsic
891
00:37:09,960 --> 00:37:14,400
evaluation of code Generation Um or I
892
00:37:12,680 --> 00:37:16,160
don't know if it's extrinsic but only
893
00:37:14,400 --> 00:37:19,160
think about execution based evaluation
894
00:37:16,160 --> 00:37:20,520
of code generation which is like you
895
00:37:19,160 --> 00:37:22,400
execute the code you see whether it
896
00:37:20,520 --> 00:37:25,040
passes unit tests and other things like
897
00:37:22,400 --> 00:37:26,839
this but in reality actually there's a
898
00:37:25,040 --> 00:37:28,599
lot of other important things for code
899
00:37:26,839 --> 00:37:30,560
like readability and other stuff like
900
00:37:28,599 --> 00:37:32,160
that and you should be evaluating those
901
00:37:30,560 --> 00:37:34,920
things but I think a lot of people like
902
00:37:32,160 --> 00:37:36,520
kind of ignore that so um there there
903
00:37:34,920 --> 00:37:38,880
are a few papers that do that but most of
904
00:37:36,520 --> 00:37:41,000
the time people just execute the code
905
00:37:38,880 --> 00:37:45,520
process
906
00:37:41,000 --> 00:37:47,760
cool okay um so yeah moving on to the
907
00:37:45,520 --> 00:37:51,160
learning part so now I'd like to talk
908
00:37:47,760 --> 00:37:55,280
about uh learning and the first thing
909
00:37:51,160 --> 00:37:59,480
I'll cover is error and risk and so
910
00:37:55,280 --> 00:38:02,280
basically um the way we calculate error is
911
00:37:59,480 --> 00:38:03,119
we generate an output and we calculate
912
00:38:02,280 --> 00:38:07,680
its
913
00:38:03,119 --> 00:38:09,480
Badness um and so generating the output
914
00:38:07,680 --> 00:38:13,160
could be argmax it could be sampling it
915
00:38:09,480 --> 00:38:15,800
could be anything else like that um and
916
00:38:13,160 --> 00:38:18,640
we calculate its badness which could be
917
00:38:15,800 --> 00:38:21,040
like how bad is
918
00:38:18,640 --> 00:38:22,720
the output uh if you have a
919
00:38:21,040 --> 00:38:24,760
Badness measure or it could be one minus
920
00:38:22,720 --> 00:38:28,400
the evaluation score to calculate its
921
00:38:24,760 --> 00:38:30,160
Badness and this is defined as error
922
00:38:28,400 --> 00:38:31,440
and generally what you want to do is you
923
00:38:30,160 --> 00:38:33,520
want to minimize
924
00:38:31,440 --> 00:38:36,800
error
925
00:38:33,520 --> 00:38:39,400
um because in the end you're going to be
926
00:38:36,800 --> 00:38:42,359
deploying A system that just outputs you
927
00:38:39,400 --> 00:38:46,079
know one thing and uh you're going to
928
00:38:42,359 --> 00:38:49,800
want that to be as good a thing as
929
00:38:46,079 --> 00:38:53,000
possible um but the problem with this is
930
00:38:49,800 --> 00:38:56,400
there's no easy way to actually optimize
931
00:38:53,000 --> 00:38:59,079
this value especially in a text
932
00:38:56,400 --> 00:39:01,800
generation setting but even in the
933
00:38:59,079 --> 00:39:06,839
classification setting we can't easily
934
00:39:01,800 --> 00:39:06,839
minimize error because um if you look at
935
00:39:09,040 --> 00:39:14,200
the error surface uh
936
00:39:12,760 --> 00:39:15,960
at some point you're going to have a
937
00:39:14,200 --> 00:39:18,319
non-differentiable part when you take
938
00:39:15,960 --> 00:39:21,119
the argmax and or when you do sampling
939
00:39:18,319 --> 00:39:23,319
or anything like that so um you're not
940
00:39:21,119 --> 00:39:27,119
going to be able to do gradient based
941
00:39:23,319 --> 00:39:29,200
optimization so what we do normally is
942
00:39:27,119 --> 00:39:33,400
um
943
00:39:29,200 --> 00:39:37,000
we instead calculate something uh called
944
00:39:33,400 --> 00:39:38,560
risk and what risk looks like is uh we
945
00:39:37,000 --> 00:39:40,599
talked a little bit about minimum Bayes
946
00:39:38,560 --> 00:39:43,520
risk for decoding but this is for uh
947
00:39:40,599 --> 00:39:46,160
training time and what it looks like is
948
00:39:43,520 --> 00:39:49,040
it's essentially the expected error of the
949
00:39:46,160 --> 00:39:52,359
output and the expected error of the
950
00:39:49,040 --> 00:39:54,760
output um includes a probability in the
951
00:39:52,359 --> 00:39:58,240
objective function here and that
952
00:39:54,760 --> 00:40:01,079
probability uh is differentiable basically
953
00:39:58,240 --> 00:40:02,319
so we can um uh we can easily do
954
00:40:01,079 --> 00:40:05,720
gradient based
955
00:40:02,319 --> 00:40:09,119
optimization through it um the problem
956
00:40:05,720 --> 00:40:12,200
with this is It's differentiable but for
957
00:40:09,119 --> 00:40:17,160
text generation for example the sum is
958
00:40:12,200 --> 00:40:20,319
intractable because we have a combinatorially
959
00:40:17,160 --> 00:40:23,880
large number of potential outputs um
960
00:40:20,319 --> 00:40:25,520
because you know if this is we've talked
961
00:40:23,880 --> 00:40:28,720
about this before but if this is like
962
00:40:25,520 --> 00:40:30,680
length you know 50 and we have a 30,000
963
00:40:28,720 --> 00:40:32,839
vocabulary that's 30,000 to the 50
964
00:40:30,680 --> 00:40:34,599
possibilities we can't take a sum over
965
00:40:32,839 --> 00:40:36,359
that many
966
00:40:34,599 --> 00:40:38,400
possibilities
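Written out, the two quantities being contrasted here are roughly the following; the notation is a sketch of the idea and may differ in detail from the slide.

```latex
% Error: decode one output (e.g. the argmax) and measure its badness.
\hat{y} = \arg\max_{y} P(y \mid x; \theta), \qquad
\mathrm{error}(\theta) = \mathrm{err}(\hat{y},\, y^{*})

% Risk: the expected badness under the model distribution. It is
% differentiable through P, but the sum over all outputs is intractable.
\mathrm{risk}(\theta)
  = \sum_{y} P(y \mid x; \theta)\, \mathrm{err}(y,\, y^{*})
  = \mathbb{E}_{y \sim P(\cdot \mid x; \theta)}\!\left[\mathrm{err}(y,\, y^{*})\right]
```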
967
00:40:36,359 --> 00:40:42,680
um
968
00:40:38,400 --> 00:40:45,839
so minimum risk training uh tries to
969
00:40:42,680 --> 00:40:48,440
minimize risk reinforcement learning
970
00:40:45,839 --> 00:40:50,040
also many of the models especially
971
00:40:48,440 --> 00:40:53,599
policy gradient models are trying to
972
00:40:50,040 --> 00:40:55,240
minimize risk as well so um but the
973
00:40:53,599 --> 00:40:58,040
reason why I wanted to talk about risk
974
00:40:55,240 --> 00:41:00,440
first is because this is very simple to
975
00:40:58,040 --> 00:41:01,640
get to from the uh the point of view of
976
00:41:00,440 --> 00:41:06,560
like all the things that we've studied
977
00:41:01,640 --> 00:41:06,560
so far so I think it's worth talking about
978
00:41:06,760 --> 00:41:11,800
that
979
00:41:08,319 --> 00:41:15,520
um one other thing that I should mention
980
00:41:11,800 --> 00:41:18,400
about is
981
00:41:15,520 --> 00:41:23,079
um or no sorry I'll I'll talk about that
982
00:41:18,400 --> 00:41:26,880
later so when we want to optimize risk
983
00:41:23,079 --> 00:41:30,560
um what we do is we sample in order to
984
00:41:26,880 --> 00:41:35,520
make this tractable so a very simple way to
985
00:41:30,560 --> 00:41:37,640
minimize risk is instead of um instead
986
00:41:35,520 --> 00:41:39,359
of summing over all of the possible
987
00:41:37,640 --> 00:41:42,760
outputs we sum over a small number of
988
00:41:39,359 --> 00:41:46,079
possible outputs and we upweight uh and
989
00:41:42,760 --> 00:41:47,359
we uh sorry normalize uh to make this
990
00:41:46,079 --> 00:41:51,200
all add up to
991
00:41:47,359 --> 00:41:52,839
one and so this normalizer here is
992
00:41:51,200 --> 00:41:55,319
basically the sum over all of the
993
00:41:52,839 --> 00:41:58,599
probabilities that we have uh on the top
994
00:41:55,319 --> 00:42:02,119
part here and and these samples can be
995
00:41:58,599 --> 00:42:05,480
created either using sampling or n best
996
00:42:02,119 --> 00:42:07,040
search we don't need to have from the
997
00:42:05,480 --> 00:42:11,040
point of view of doing this sort of
998
00:42:07,040 --> 00:42:13,960
minimum risk training the kind of
999
00:42:11,040 --> 00:42:16,880
correct way of doing this is sampling
1000
00:42:13,960 --> 00:42:19,880
using ancestral sampling uh like we
1001
00:42:16,880 --> 00:42:23,079
talked about before and um in minimizing
1002
00:42:19,880 --> 00:42:25,839
the output based on the the samples but
1003
00:42:23,079 --> 00:42:28,480
the problem with that is um as many of
1004
00:42:25,839 --> 00:42:31,440
you also might have seen when you were
1005
00:42:28,480 --> 00:42:33,599
sampling from your language model uh
1006
00:42:31,440 --> 00:42:35,160
from assignment one if you sample with
1007
00:42:33,599 --> 00:42:38,040
temperature one it gives you a lot of
1008
00:42:35,160 --> 00:42:40,720
like not very good outputs right and so
1009
00:42:38,040 --> 00:42:43,400
if you're sampling with temperature one
1010
00:42:40,720 --> 00:42:45,000
um you'll be exploring a a very large
1011
00:42:43,400 --> 00:42:47,880
part of the space that actually isn't
1012
00:42:45,000 --> 00:42:49,720
very good and so because of this uh some
1013
00:42:47,880 --> 00:42:51,480
other Alternatives that you can use is
1014
00:42:49,720 --> 00:42:53,400
you can just do n-best search to find the
1015
00:42:51,480 --> 00:42:55,280
best outputs or you can sample with a
1016
00:42:53,400 --> 00:42:58,079
temperature that's not one or something
1017
00:42:55,280 --> 00:43:00,240
like that and basically create uh you
1018
00:42:58,079 --> 00:43:02,520
know a list of possible hypotheses and
1019
00:43:00,240 --> 00:43:04,079
then normalize over that so that's another
1020
00:43:02,520 --> 00:43:06,240
option and very often not using
1021
00:43:04,079 --> 00:43:11,200
temperature one is a better
1022
00:43:06,240 --> 00:43:15,280
way um if you're not sampling with
1023
00:43:11,200 --> 00:43:18,640
temperature one and you are um
1024
00:43:15,280 --> 00:43:20,920
potentially getting multiple outputs you
1025
00:43:18,640 --> 00:43:23,400
should try to de-duplicate or sample
1026
00:43:20,920 --> 00:43:25,480
without replacement because if you get
1027
00:43:23,400 --> 00:43:27,559
multiple outputs here it messes up your
1028
00:43:25,480 --> 00:43:30,680
equations if you basically uh have the
1029
00:43:27,559 --> 00:43:30,680
same one in there multiple times
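As a rough sketch, one step of this sampled minimum risk training could look like the code below. It assumes a Hugging Face style causal LM and tokenizer and some err_fn that returns a badness score in [0, 1]; batching, padding/EOS handling, and de-duplication are simplified, so treat it as an illustration rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def mrt_step(model, tokenizer, prompt, err_fn, optimizer,
             num_samples=8, temperature=0.8, alpha=1.0):
    """One sketched step of sampled minimum risk training.
    err_fn(text) is assumed to return a badness score in [0, 1]."""
    enc = tokenizer(prompt, return_tensors="pt")
    # Sample a small set of candidate outputs (ideally de-duplicated).
    gen = model.generate(**enc, do_sample=True, num_return_sequences=num_samples,
                         temperature=temperature, max_new_tokens=64)
    prompt_len = enc["input_ids"].shape[1]

    seq_logprobs, errors = [], []
    for seq in gen:  # (padding / EOS handling omitted for brevity)
        out = model(seq.unsqueeze(0))
        logits = out.logits[:, :-1, :]                 # position t predicts token t+1
        targets = seq.unsqueeze(0)[:, 1:]
        logp = F.log_softmax(logits, dim=-1)
        tok_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        seq_logprobs.append(tok_logp[:, prompt_len - 1:].sum())   # generated part only
        errors.append(err_fn(tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)))

    seq_logprobs = torch.stack(seq_logprobs)
    errors = torch.tensor(errors)
    # Renormalize the model probabilities over just the sampled candidates.
    q = F.softmax(alpha * seq_logprobs, dim=0)
    risk = (q * errors).sum()            # expected error over the sample set
    optimizer.zero_grad()
    risk.backward()
    optimizer.step()
    return risk.item()
```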
1030
00:43:32,160 --> 00:43:37,800
cool so this is a really simple
1031
00:43:35,880 --> 00:43:40,079
example of how you can do minimum risk
1032
00:43:37,800 --> 00:43:42,119
training but now I want to get into uh
1033
00:43:40,079 --> 00:43:44,640
like reinforcement learning which is the
1034
00:43:42,119 --> 00:43:48,119
framing that most um
1035
00:43:44,640 --> 00:43:50,760
modern works follow uh one
1036
00:43:48,119 --> 00:43:52,559
thing I should mention is there are
1037
00:43:50,760 --> 00:43:55,240
actually other alternatives to learning
1038
00:43:52,559 --> 00:43:57,359
from uh human feedback including like
1039
00:43:55,240 --> 00:43:59,359
margin loss margin based losses and
1040
00:43:57,359 --> 00:44:00,960
other stuff like that but most people
1041
00:43:59,359 --> 00:44:03,440
nowadays use reinforcement learning so
1042
00:44:00,960 --> 00:44:06,359
I'm only going to cover that
1043
00:44:03,440 --> 00:44:08,440
here so what is reinforcement learning
1044
00:44:06,359 --> 00:44:11,000
um learning reinforcement learning is
1045
00:44:08,440 --> 00:44:14,559
learning where we have an environment uh
1046
00:44:11,000 --> 00:44:16,079
x uh ability to make actions a and get a
1047
00:44:14,559 --> 00:44:20,160
delayed reward
1048
00:44:16,079 --> 00:44:21,880
R and um there's a really nice example
1049
00:44:20,160 --> 00:44:24,400
uh if you're not familiar with the
1050
00:44:21,880 --> 00:44:27,480
basics of policy gradient by Andrej
1051
00:44:24,400 --> 00:44:28,800
Karpathy which I linked in the
1052
00:44:27,480 --> 00:44:29,680
recommended reading so you can take a
1053
00:44:28,800 --> 00:44:34,680
look at
1054
00:44:29,680 --> 00:44:37,240
that um and it gives an
1055
00:44:34,680 --> 00:44:39,440
example of pong uh where you're playing
1056
00:44:37,240 --> 00:44:42,640
the game pong where X is your observed
1057
00:44:39,440 --> 00:44:45,640
image a is up or down and R is the win
1058
00:44:42,640 --> 00:44:47,480
or loss at the end of the game uh does
1059
00:44:45,640 --> 00:44:50,559
anyone have an idea about uh what this
1060
00:44:47,480 --> 00:44:52,119
looks like for any arbitrary NLP task
1061
00:44:50,559 --> 00:44:56,520
that we might want to do reinforcement
1062
00:44:52,119 --> 00:44:59,040
learning for so what what is X what is a
1063
00:44:56,520 --> 00:44:59,040
and what is
1064
00:45:00,040 --> 00:45:04,680
R pick your favorite uh your favorite
1065
00:45:06,920 --> 00:45:09,920
task
1066
00:45:10,960 --> 00:45:18,400
anybody
1067
00:45:12,520 --> 00:45:18,400
yeah be or what what's X first
1068
00:45:19,680 --> 00:45:28,720
yeah you have generate okay is the
1069
00:45:24,440 --> 00:45:29,720
next be like the button like whether or
1070
00:45:28,720 --> 00:45:32,520
not
1071
00:45:29,720 --> 00:45:35,240
you okay yeah I I think this is very
1072
00:45:32,520 --> 00:45:37,119
close just to repeat it it's like X is
1073
00:45:35,240 --> 00:45:39,599
what you've generated so far a is the
1074
00:45:37,119 --> 00:45:41,559
next token and R is the button that the
1075
00:45:39,599 --> 00:45:45,400
user clicks about whether it's good or
1076
00:45:41,559 --> 00:45:46,920
not um I think that's reasonably good
1077
00:45:45,400 --> 00:45:48,760
although I don't know if we'd expect
1078
00:45:46,920 --> 00:45:52,960
them to click the button every token we
1079
00:45:48,760 --> 00:45:54,880
generate right so um it might be that X
1080
00:45:52,960 --> 00:45:57,880
is the conversational history up till
1081
00:45:54,880 --> 00:46:02,319
this point um a
1082
00:45:57,880 --> 00:46:04,280
a could be a next token generation and
1083
00:46:02,319 --> 00:46:06,520
then R is a reward we get in an
1084
00:46:04,280 --> 00:46:08,280
arbitrary time point it might not be
1085
00:46:06,520 --> 00:46:09,960
like immediately after generating the
1086
00:46:08,280 --> 00:46:12,040
next token but it might be later and
1087
00:46:09,960 --> 00:46:13,480
that's actually really really important
1088
00:46:12,040 --> 00:46:15,040
from the point of view of reinforcement
1089
00:46:13,480 --> 00:46:19,599
learning and I'll I'll talk about that
1090
00:46:15,040 --> 00:46:23,040
in a second um anyone have an idea from
1091
00:46:19,599 --> 00:46:24,960
I don't know uh code generation or
1092
00:46:23,040 --> 00:46:28,119
translation or some other
1093
00:46:24,960 --> 00:46:31,160
things for code generation maybe X is a
1094
00:46:28,119 --> 00:46:33,040
compiler or like the gra scpt and then
1095
00:46:31,160 --> 00:46:37,000
the action
1096
00:46:33,040 --> 00:46:42,520
is the actual code right and the reward
1097
00:46:37,000 --> 00:46:44,839
is yep um so X could be the compiler
1098
00:46:42,520 --> 00:46:47,559
it's probably the compiler and all of
1099
00:46:44,839 --> 00:46:50,200
the surrounding code context like what
1100
00:46:47,559 --> 00:46:52,520
what is the natural language output and
1101
00:46:50,200 --> 00:46:53,960
it's also um you know what is the
1102
00:46:52,520 --> 00:46:57,280
project that you're you're working on
1103
00:46:53,960 --> 00:47:00,079
and stuff like that um a i think
1104
00:46:57,280 --> 00:47:02,800
typically we would treat each token in
1105
00:47:00,079 --> 00:47:04,160
the code to be an action um and then R
1106
00:47:02,800 --> 00:47:06,599
would be the reward after a long
1107
00:47:04,160 --> 00:47:08,640
sequence of actions um and it could be
1108
00:47:06,599 --> 00:47:11,119
the reward from the compiler it could be
1109
00:47:08,640 --> 00:47:13,160
the reward from a code readability model
1110
00:47:11,119 --> 00:47:15,720
it could be the reward from
1111
00:47:13,160 --> 00:47:17,079
execution speed and stuff like that so
1112
00:47:15,720 --> 00:47:18,839
like one of the interesting things about
1113
00:47:17,079 --> 00:47:22,640
R is you can be really creative about
1114
00:47:18,839 --> 00:47:25,400
how you form R um which is not easy to
1115
00:47:22,640 --> 00:47:27,319
do uh if you're just doing maximum
1116
00:47:25,400 --> 00:47:29,240
likelihood also so you can come up with
1117
00:47:27,319 --> 00:47:32,920
a r that really matches with like what
1118
00:47:29,240 --> 00:47:36,559
you want um what you want in an output
1119
00:47:32,920 --> 00:47:40,079
so why reinforcement learning in NLP um
1120
00:47:36,559 --> 00:47:42,599
and I think there's basically three um
1121
00:47:40,079 --> 00:47:44,240
three answers the first one is you have
1122
00:47:42,599 --> 00:47:49,000
a typical reinforcement learning
1123
00:47:44,240 --> 00:47:51,119
scenario um where you have a dialogue
1124
00:47:49,000 --> 00:47:52,720
where you get lots of responses and then
1125
00:47:51,119 --> 00:47:54,559
you get a reward at the end so the
1126
00:47:52,720 --> 00:47:57,359
thumbs up and thumbs down from humans is
1127
00:47:54,559 --> 00:47:59,839
a very typical example of
1128
00:47:57,359 --> 00:48:02,800
uh reinforcement learning because you
1129
00:47:59,839 --> 00:48:05,000
get a delayed reward uh at some point in
1130
00:48:02,800 --> 00:48:07,599
the dialogue when a human presses up or
1131
00:48:05,000 --> 00:48:09,280
down um another like actually more
1132
00:48:07,599 --> 00:48:11,680
technical scenario where reinforcement
1133
00:48:09,280 --> 00:48:14,960
learning has been used um for a long
1134
00:48:11,680 --> 00:48:17,400
time is call centers so we've had
1135
00:48:14,960 --> 00:48:20,680
dialogue systems for call centers and
1136
00:48:17,400 --> 00:48:23,160
then if you complete a ticket purchase
1137
00:48:20,680 --> 00:48:24,839
um or you complete resolve a ticket
1138
00:48:23,160 --> 00:48:27,480
without ever having to go to a human
1139
00:48:24,839 --> 00:48:30,800
operator you get a really big reward
1140
00:48:27,480 --> 00:48:33,640
if you have to go to the human operator
1141
00:48:30,800 --> 00:48:36,400
you get maybe a smaller reward and if
1142
00:48:33,640 --> 00:48:39,200
the person yells at you and hangs up
1143
00:48:36,400 --> 00:48:41,640
then you get a really negative reward so
1144
00:48:39,200 --> 00:48:43,040
um this is kind of the typical example
1145
00:48:41,640 --> 00:48:45,599
reinforcement learning has been used for
1146
00:48:43,040 --> 00:48:48,520
a long time there another example is if
1147
00:48:45,599 --> 00:48:53,280
you have like latent variables uh chains
1148
00:48:48,520 --> 00:48:55,799
of thought where um you decide the
1149
00:48:53,280 --> 00:48:58,839
latent variable and then get a reward um
1150
00:48:55,799 --> 00:49:02,799
you get a reward based on how those
1151
00:48:58,839 --> 00:49:03,920
latent variables affect the output so um
1152
00:49:02,799 --> 00:49:07,200
this
1153
00:49:03,920 --> 00:49:09,799
is uh this is another example
1154
00:49:07,200 --> 00:49:12,599
because the Chain of Thought itself
1155
00:49:09,799 --> 00:49:13,880
might not actually be good you might
1156
00:49:12,599 --> 00:49:15,839
have a bad Chain of Thought and still
1157
00:49:13,880 --> 00:49:17,760
get the correct answer so you don't
1158
00:49:15,839 --> 00:49:19,640
actually know for sure that a chain of
1159
00:49:17,760 --> 00:49:22,359
thought that was automatically generated
1160
00:49:19,640 --> 00:49:24,799
is good or not but um that so that kind
1161
00:49:22,359 --> 00:49:27,000
of makes it a reinforcement learning
1162
00:49:24,799 --> 00:49:29,520
problem and another thing is you might
1163
00:49:27,000 --> 00:49:32,520
have a sequence level evaluation metric
1164
00:49:29,520 --> 00:49:34,240
um so that you can't optimize the
1165
00:49:32,520 --> 00:49:36,839
evaluation metric without uh first
1166
00:49:34,240 --> 00:49:38,480
generating the whole like sequence so
1167
00:49:36,839 --> 00:49:40,880
that would be any of the evaluation
1168
00:49:38,480 --> 00:49:42,400
metrics that I talked about before so um
1169
00:49:40,880 --> 00:49:44,720
these are three scenarios where you can
1170
00:49:42,400 --> 00:49:47,079
use reinforcement
1171
00:49:44,720 --> 00:49:50,000
learning so
1172
00:49:47,079 --> 00:49:51,400
um I'm going to make a few steps through
1173
00:49:50,000 --> 00:49:54,640
but like let's start again with our
1174
00:49:51,400 --> 00:49:57,359
supervised mle loss and uh that's just
1175
00:49:54,640 --> 00:50:01,799
the log probability here um in the
1176
00:49:57,359 --> 00:50:04,160
context of reinforcement learning this
1177
00:50:01,799 --> 00:50:07,079
is also called imitation
1178
00:50:04,160 --> 00:50:08,880
learning because um essentially you're
1179
00:50:07,079 --> 00:50:12,680
learning how to perform actions by
1180
00:50:08,880 --> 00:50:14,559
imitating a teacher um and imitation
1181
00:50:12,680 --> 00:50:15,960
learning is not just supervised mle
1182
00:50:14,559 --> 00:50:18,440
there's also other varieties of
1183
00:50:15,960 --> 00:50:21,440
imitation learning but um this is one
1184
00:50:18,440 --> 00:50:21,440
variety of imitation
1185
00:50:22,520 --> 00:50:27,640
learning the next thing I'd like to talk
1186
00:50:24,599 --> 00:50:30,079
about is self-training and basically
1187
00:50:27,640 --> 00:50:31,760
self-training the idea is that you
1188
00:50:30,079 --> 00:50:33,720
sample or argmax according to the
1189
00:50:31,760 --> 00:50:36,119
current model so you have your current
1190
00:50:33,720 --> 00:50:38,000
model and you get a sample from it and
1191
00:50:36,119 --> 00:50:41,520
then you use the sample or samples to
1192
00:50:38,000 --> 00:50:43,680
maximize likelihood so um basically
1193
00:50:41,520 --> 00:50:47,520
instead of doing maximum likelihood with
1194
00:50:43,680 --> 00:50:49,520
respect to the a gold standard output
1195
00:50:47,520 --> 00:50:51,280
you're doing it with respect to your own
1196
00:50:49,520 --> 00:50:55,280
output
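In sketch form, the loop being described is just the following. generate_text and mle_update are hypothetical stand-ins for whatever decoding routine and single MLE gradient step you already have; unlabeled_prompts is assumed to be some iterable of inputs.

```python
def self_train(model, unlabeled_prompts, generate_text, mle_update):
    """Self-training sketch: treat the model's own outputs as references.
    generate_text(model, prompt) and mle_update(model, prompt, target) are
    hypothetical helpers for decoding and one MLE update step."""
    for prompt in unlabeled_prompts:
        pseudo_reference = generate_text(model, prompt)   # sample or argmax decode
        mle_update(model, prompt, pseudo_reference)       # maximize log P(pseudo_reference | prompt)
```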
1197
00:50:51,280 --> 00:50:55,280
so does this seem like a good
1198
00:50:55,640 --> 00:51:03,880
idea I see a few people shaking heads um
1199
00:51:00,480 --> 00:51:03,880
any ideas why this is not a good
1200
00:51:04,680 --> 00:51:07,680
idea
1201
00:51:15,040 --> 00:51:20,599
yeah yeah exactly so if you don't have
1202
00:51:17,720 --> 00:51:23,760
any access to any notion of whether it's good
1203
00:51:20,599 --> 00:51:27,480
um this will be optimizing towards good
1204
00:51:23,760 --> 00:51:28,839
outputs and bad outputs right so um your
1205
00:51:27,480 --> 00:51:30,200
model might be outputting bad outputs
1206
00:51:28,839 --> 00:51:32,839
and you're just reinforcing the errors
1207
00:51:30,200 --> 00:51:35,160
that the model already has nonetheless like
1208
00:51:32,839 --> 00:51:37,799
self-training actually improves your
1209
00:51:35,160 --> 00:51:39,680
accuracy somewhat in some cases like for
1210
00:51:37,799 --> 00:51:43,040
example if your
1211
00:51:39,680 --> 00:51:45,520
model is Right more often than not um
1212
00:51:43,040 --> 00:51:49,119
basically optimizing towards the more
1213
00:51:45,520 --> 00:51:51,720
often than not right outputs can actually
1214
00:51:49,119 --> 00:51:53,640
um due to the implicit regularization
1215
00:51:51,720 --> 00:51:55,000
that models have and early stopping and
1216
00:51:53,640 --> 00:51:56,559
other things like that it can actually
1217
00:51:55,000 --> 00:51:59,280
move you in the right direction and
1218
00:51:56,559 --> 00:52:01,559
improve accuracy
1219
00:51:59,280 --> 00:52:05,000
um
1220
00:52:01,559 --> 00:52:06,640
so there are alternatives to this that
1221
00:52:05,000 --> 00:52:09,520
further improve accuracy so like for
1222
00:52:06,640 --> 00:52:12,720
example if you have multiple models and
1223
00:52:09,520 --> 00:52:16,200
um you only generate sentences where the
1224
00:52:12,720 --> 00:52:17,760
models agree then this can improve your
1225
00:52:16,200 --> 00:52:20,000
uh overall accuracy
1226
00:52:17,760 --> 00:52:24,240
further um this is called co-training
1227
00:52:20,000 --> 00:52:27,799
it was actually uh created by uh uh
1228
00:52:24,240 --> 00:52:30,160
people at at CMU as well and another
1229
00:52:27,799 --> 00:52:32,280
successful alternative uh is adding
1230
00:52:30,160 --> 00:52:34,920
noise to the input to match the noise
1231
00:52:32,280 --> 00:52:38,760
that you find in the output so if you uh
1232
00:52:34,920 --> 00:52:40,720
add like word uh word-based Dropout or
1233
00:52:38,760 --> 00:52:44,000
other things like that this can also
1234
00:52:40,720 --> 00:52:47,400
help uh accommodate these things but
1235
00:52:44,000 --> 00:52:48,920
anyway um so self-training is useful
1236
00:52:47,400 --> 00:52:50,480
but there are better Alternatives if you
1237
00:52:48,920 --> 00:52:54,079
can get a reward
1238
00:52:50,480 --> 00:52:55,559
function so um the simplest variety of
1239
00:52:54,079 --> 00:52:56,960
this is something called policy gradient
1240
00:52:55,559 --> 00:52:59,720
or reinforce
1241
00:52:56,960 --> 00:53:02,319
um or more specifically reinforce and
1242
00:52:59,720 --> 00:53:06,280
basically what this does is this adds a
1243
00:53:02,319 --> 00:53:08,359
term that scales the loss by the reward
1244
00:53:06,280 --> 00:53:12,400
so if you can get a reward for each
1245
00:53:08,359 --> 00:53:15,680
output basically this
1246
00:53:12,400 --> 00:53:18,119
um you uh instead of doing self-training
1247
00:53:15,680 --> 00:53:21,760
entirely by itself you multiply it by a
1248
00:53:18,119 --> 00:53:23,119
reward and this allows you to increase
1249
00:53:21,760 --> 00:53:24,640
the likelihood of things that get a high
1250
00:53:23,119 --> 00:53:28,440
reward decrease the likelihood of things
1251
00:53:24,640 --> 00:53:28,440
that get a low reward
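A minimal sketch of that objective on top of per-sequence log-probabilities: this is plain REINFORCE without a baseline, assuming you already have a log_probs tensor (sum of token log-probabilities per sampled output) and a rewards tensor.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """log_probs: sum of token log-probabilities for each sampled output, shape [B].
    rewards:   scalar reward for each sampled output, shape [B], treated as constants.
    Scaling the negative log-likelihood by the reward upweights high-reward samples
    and downweights (or actively pushes down, if negative) low-reward ones."""
    return -(rewards.detach() * log_probs).mean()
```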
1252
00:53:29,680 --> 00:53:34,960
so uh a brief quiz here under what
1253
00:53:32,440 --> 00:53:37,599
conditions is this equal equivalent to
1254
00:53:34,960 --> 00:53:41,480
MLE or essentially equivalent to maximum
1255
00:53:37,599 --> 00:53:43,079
likelihood estimation and so like in order
1256
00:53:41,480 --> 00:53:45,480
to make this quiz easier I'll go back to
1257
00:53:43,079 --> 00:53:47,720
maximum likelihood estimation so it
1258
00:53:45,480 --> 00:53:50,359
looked a bit like this um you calculated
1259
00:53:47,720 --> 00:53:53,440
the log probability of the true output
1260
00:53:50,359 --> 00:53:55,440
and now let me go uh to
1261
00:53:53,440 --> 00:53:56,960
here any
1262
00:53:55,440 --> 00:54:00,119
ideas
1263
00:53:56,960 --> 00:54:05,040
yeah when your reward equals to
1264
00:54:00,119 --> 00:54:05,040
one sometimes and zero other times
1265
00:54:07,760 --> 00:54:10,960
what any
1266
00:54:12,760 --> 00:54:17,520
ideas what when when does your reward
1267
00:54:15,280 --> 00:54:19,640
need to be equal to one in order to make
1268
00:54:17,520 --> 00:54:23,400
this
1269
00:54:19,640 --> 00:54:23,400
equation equivalent to this
1270
00:54:24,960 --> 00:54:31,680
equation yeah when Y and Y hat are the
1271
00:54:27,319 --> 00:54:36,119
same so um basically
1272
00:54:31,680 --> 00:54:38,880
this objective is equivalent to the mle
1273
00:54:36,119 --> 00:54:43,160
objective when you're using a zero-one
1274
00:54:38,880 --> 00:54:44,480
loss um where or you're using an
1275
00:54:43,160 --> 00:54:46,359
evaluation function that gives you a
1276
00:54:44,480 --> 00:54:50,920
score of one when it's exact match and
1277
00:54:46,359 --> 00:54:51,720
zero when it's not exact match so um but
1278
00:54:50,920 --> 00:54:54,480
that
1279
00:54:51,720 --> 00:54:56,440
also demonstrates that this can be more
1280
00:54:54,480 --> 00:54:58,400
flexible because you can have other
1281
00:54:56,440 --> 00:55:00,160
rewards that are not just one and zero
1282
00:54:58,400 --> 00:55:02,599
for exact match but you can use things
1283
00:55:00,160 --> 00:55:05,359
that give you partial credit you can use
1284
00:55:02,599 --> 00:55:06,880
things that upweight multiple potential uh
1285
00:55:05,359 --> 00:55:08,880
potentially correct outputs and other
1286
00:55:06,880 --> 00:55:13,400
things like
1287
00:55:08,880 --> 00:55:17,160
that so one problem with these methods
1288
00:55:13,400 --> 00:55:21,799
is um how do we know which action led to
1289
00:55:17,160 --> 00:55:24,720
the reward so the best scenario is after
1290
00:55:21,799 --> 00:55:26,359
each action you get a reward so after
1291
00:55:24,720 --> 00:55:28,960
each token that you generated you get
1292
00:55:26,359 --> 00:55:31,240
a thumbs up or thumbs down uh from
1293
00:55:28,960 --> 00:55:34,280
the user about whether they like that
1294
00:55:31,240 --> 00:55:36,000
token or not um and how much happier
1295
00:55:34,280 --> 00:55:37,720
they are after you generated that token
1296
00:55:36,000 --> 00:55:42,400
than they were before you generated that
1297
00:55:37,720 --> 00:55:44,200
token um the problem with this is that
1298
00:55:42,400 --> 00:55:45,799
that's completely infeasible right like
1299
00:55:44,200 --> 00:55:47,039
every time after you use ChatGPT you're
1300
00:55:45,799 --> 00:55:50,480
not going to press thumbs up and thumbs
1301
00:55:47,039 --> 00:55:52,559
down after each token so um in reality
1302
00:55:50,480 --> 00:55:55,559
what we get is usually we get it at the
1303
00:55:52,559 --> 00:55:57,000
end of a rollout of many many
1304
00:55:55,559 --> 00:55:58,640
different actions and we're not sure
1305
00:55:57,000 --> 00:55:59,720
which action is responsible for giving
1306
00:55:58,640 --> 00:56:02,559
us the
1307
00:55:59,720 --> 00:56:05,440
reward and
1308
00:56:02,559 --> 00:56:08,000
so there's a few typical ways of dealing
1309
00:56:05,440 --> 00:56:09,640
with this um the most typical way of
1310
00:56:08,000 --> 00:56:13,359
dealing with this right now is just not
1311
00:56:09,640 --> 00:56:15,440
dealing with it um and just hoping that
1312
00:56:13,359 --> 00:56:17,200
your optimization algorithm internally
1313
00:56:15,440 --> 00:56:21,480
will be able to do credit
1314
00:56:17,200 --> 00:56:24,520
assignment um and so what that entails
1315
00:56:21,480 --> 00:56:27,319
is essentially you um give an equal
1316
00:56:24,520 --> 00:56:29,880
reward for each token in the output
1317
00:56:27,319 --> 00:56:32,480
other ways that you can deal with it are
1318
00:56:29,880 --> 00:56:35,640
um you can assign decaying rewards from
1319
00:56:32,480 --> 00:56:37,559
future events so like let's say let's
1320
00:56:35,640 --> 00:56:41,839
say you're talking about a chat bot for
1321
00:56:37,559 --> 00:56:44,119
example maybe this is the the most uh
1322
00:56:41,839 --> 00:56:46,599
kind of intuitive way of thinking about
1323
00:56:44,119 --> 00:56:50,400
it but you you have a chat bot you have
1324
00:56:46,599 --> 00:56:52,599
like 20 chat turns and you have the user
1325
00:56:50,400 --> 00:56:55,640
give a thumbs up or a thumbs down on the
1326
00:56:52,599 --> 00:56:58,920
20th chat turn there you would assign a
1327
00:56:55,640 --> 00:57:01,440
reward of um like let's say it gave a
1328
00:56:58,920 --> 00:57:03,640
thumbs up there you would re assign a
1329
00:57:01,440 --> 00:57:06,559
reward of one for the previous chat turn
1330
00:57:03,640 --> 00:57:09,839
a reward of like 0.5 for the second to
1331
00:57:06,559 --> 00:57:11,720
previous chat turn a reward of 0.25 for
1332
00:57:09,839 --> 00:57:14,319
the third to previous chat turn to
1333
00:57:11,720 --> 00:57:16,160
basically say yeah like the user is
1334
00:57:14,319 --> 00:57:18,240
feeling good at the moment they gave the
1335
00:57:16,160 --> 00:57:20,359
thumbs up and that's probably more
1336
00:57:18,240 --> 00:57:23,400
likely due to the things that happened
1337
00:57:20,359 --> 00:57:23,400
recently so
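One way to sketch that decaying credit assignment (the 1, 0.5, 0.25, ... pattern, i.e. an exponential decay with factor 0.5; the exact scheme is a design choice, not a fixed recipe):

```python
def decayed_rewards(final_reward: float, num_turns: int, decay: float = 0.5) -> list[float]:
    """Assign a delayed reward to earlier turns with exponential decay:
    the most recent turn gets the full reward, the one before half, and so on."""
    return [final_reward * decay ** (num_turns - 1 - t) for t in range(num_turns)]

# e.g. a thumbs-up (+1) after 4 turns -> [0.125, 0.25, 0.5, 1.0]
print(decayed_rewards(1.0, 4))
```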
1338
00:57:23,559 --> 00:57:28,119
yeah we have a
1339
00:57:26,680 --> 00:57:32,280
like not
1340
00:57:28,119 --> 00:57:34,160
learning so the reward model can be any
1341
00:57:32,280 --> 00:57:35,839
of the methods that I talked about
1342
00:57:34,160 --> 00:57:37,480
before so it can be human feedback
1343
00:57:35,839 --> 00:57:39,000
directly like a thumbs up or a thumbs
1344
00:57:37,480 --> 00:57:42,200
down it could also be from a reward
1345
00:57:39,000 --> 00:57:44,599
model uh that was pre-trained you could
1346
00:57:42,200 --> 00:57:47,680
also theoretically learn the reward
1347
00:57:44,599 --> 00:57:52,720
model simultaneously but you'd have to learn it
1348
00:57:47,680 --> 00:57:55,200
simultaneously with the model itself um
1349
00:57:52,720 --> 00:57:57,280
so yeah I'm going to talk a little bit
1350
00:57:55,200 --> 00:58:00,359
about DPO which kind of does that a
1351
00:57:57,280 --> 00:58:01,720
little bit but um I I would basically
1352
00:58:00,359 --> 00:58:03,160
say that wherever you're getting your
1353
00:58:01,720 --> 00:58:06,280
reward is probably from one of the
1354
00:58:03,160 --> 00:58:06,280
things I talked about earlier
1355
00:58:06,359 --> 00:58:14,960
today cool any other
1356
00:58:09,319 --> 00:58:17,720
questions okay um so that's the basic
1357
00:58:14,960 --> 00:58:20,640
the basic idea the very simplest thing
1358
00:58:17,720 --> 00:58:23,359
that you can do is you can just sample
1359
00:58:20,640 --> 00:58:26,079
and optimize this objective function this
1360
00:58:23,359 --> 00:58:28,359
is dead easy it's not hard to
1361
00:58:26,079 --> 00:58:30,799
implement at all as long as you have some
1362
00:58:28,359 --> 00:58:32,760
source of reward signal um but the
1363
00:58:30,799 --> 00:58:35,559
problem is uh reinforcement learning can
1364
00:58:32,760 --> 00:58:38,599
be very unstable and it's hard to get it
1365
00:58:35,559 --> 00:58:40,160
to uh you know work properly if you uh
1366
00:58:38,599 --> 00:58:42,400
don't do some additional tricks so I'd
1367
00:58:40,160 --> 00:58:45,720
like to talk about this
1368
00:58:42,400 --> 00:58:45,720
next oh yeah
1369
00:58:48,880 --> 00:58:51,880
sir
1370
00:58:55,039 --> 00:58:58,039
yeah
1371
00:59:03,280 --> 00:59:08,960
yeah the typical the typical way is you
1372
00:59:05,440 --> 00:59:12,960
just have an exponential decay um so you
1373
00:59:08,960 --> 00:59:16,200
you multiply each time by like 0.5 or
1374
00:59:12,960 --> 00:59:19,400
something like that
1375
00:59:16,200 --> 00:59:19,400
um from
1376
00:59:20,319 --> 00:59:27,720
A6 um cool okay
1377
00:59:25,039 --> 00:59:30,720
so
1378
00:59:27,720 --> 00:59:33,319
and that's one option and sorry just to
1379
00:59:30,720 --> 00:59:35,760
clarify the most common option nowadays
1380
00:59:33,319 --> 00:59:37,920
um at least from the point of view of
1381
00:59:35,760 --> 00:59:39,839
models is not to Decay it at all and
1382
00:59:37,920 --> 00:59:43,880
just assign the same amount for each
1383
00:59:39,839 --> 00:59:45,319
token um I'm not actually 100% sure what
1384
00:59:43,880 --> 00:59:47,319
people are doing with respect to like
1385
00:59:45,319 --> 00:59:49,280
long chat things I think probably
1386
00:59:47,319 --> 00:59:51,720
they're only assigning it to the current
1387
00:59:49,280 --> 00:59:54,240
like utterance and then not optimizing
1388
00:59:51,720 --> 00:59:57,240
the previous utterances so like if they
1389
00:59:54,240 --> 00:59:59,039
get a thumbs up or thumbs down signal um
1390
00:59:57,240 --> 01:00:00,720
then they they would assign an
1391
00:59:59,039 --> 01:00:02,440
equivalent reward for all of the tokens
1392
01:00:00,720 --> 01:00:04,640
in the current utterance and zero
1393
01:00:02,440 --> 01:00:06,119
reward for the previous ones but I'm not
1394
01:00:04,640 --> 01:00:08,480
100% sure about that there might be
1395
01:00:06,119 --> 01:00:11,200
other methods that people are
1396
01:00:08,480 --> 01:00:13,960
using um
1397
01:00:11,200 --> 01:00:16,680
cool so uh stabilizing reinforcement
1398
01:00:13,960 --> 01:00:18,520
learning so um stabilizing reinforcement
1399
01:00:16,680 --> 01:00:21,839
learning there's a lot of reasons why
1400
01:00:18,520 --> 01:00:23,880
it's unstable um the first reason is
1401
01:00:21,839 --> 01:00:27,200
you're sampling an individual output and
1402
01:00:23,880 --> 01:00:30,160
calculating the um uh calculating based
1403
01:00:27,200 --> 01:00:32,039
on the individual sampled output and
1404
01:00:30,160 --> 01:00:33,440
then there's an Infinity of other
1405
01:00:32,039 --> 01:00:36,480
outputs that you could be optimizing
1406
01:00:33,440 --> 01:00:39,119
over for mle this is not a problem
1407
01:00:36,480 --> 01:00:41,319
because for mle you're always
1408
01:00:39,119 --> 01:00:45,359
contrasting the gold standard output to
1409
01:00:41,319 --> 01:00:46,599
all of the other outputs in the space um
1410
01:00:45,359 --> 01:00:48,280
and you're saying I want to upweight the
1411
01:00:46,599 --> 01:00:51,200
gold standard output and downweight all of
1412
01:00:48,280 --> 01:00:53,039
the other ones but for reinforcement
1413
01:00:51,200 --> 01:00:54,760
learning you only have a single sampled
1414
01:00:53,039 --> 01:00:57,520
output that output might be wrong and
1415
01:00:54,760 --> 01:00:59,359
that's a source of instability this is
1416
01:00:57,520 --> 01:01:02,079
particularly a problem when using bigger
1417
01:00:59,359 --> 01:01:05,960
output spaces like all of the tokens in the
1418
01:01:02,079 --> 01:01:07,920
vocabulary another problem is uh anytime
1419
01:01:05,960 --> 01:01:11,599
you start using negative
1420
01:01:07,920 --> 01:01:15,160
rewards um because if you start using
1421
01:01:11,599 --> 01:01:17,559
negative rewards those rewards will be
1422
01:01:15,160 --> 01:01:19,520
downweighting the probability of a
1423
01:01:17,559 --> 01:01:20,680
particular output sequence and that
1424
01:01:19,520 --> 01:01:22,440
might be a good idea maybe you're
1425
01:01:20,680 --> 01:01:24,319
getting a toxic output or something like
1426
01:01:22,440 --> 01:01:25,960
that and you want to downweight it but at the
1427
01:01:24,319 --> 01:01:28,280
same time in addition to that toxic
1428
01:01:25,960 --> 01:01:30,000
output there's like you know a
1429
01:01:28,280 --> 01:01:31,599
combinatorial number of completely
1430
01:01:30,000 --> 01:01:33,880
nonsense outputs that aren't even
1431
01:01:31,599 --> 01:01:36,599
English and so basically you can start
1432
01:01:33,880 --> 01:01:38,920
to
1433
01:01:36,599 --> 01:01:40,799
diverge from the natural like language
1434
01:01:38,920 --> 01:01:44,720
modeling distribution that you have
1435
01:01:40,799 --> 01:01:49,079
before so this is a big uh a big
1436
01:01:44,720 --> 01:01:51,880
problem so a number of uh strategies can
1437
01:01:49,079 --> 01:01:53,880
be used to stabilize the first one is
1438
01:01:51,880 --> 01:01:55,480
this is completely obvious right now and
1439
01:01:53,880 --> 01:01:57,240
nobody in their right mind would avoid
1440
01:01:55,480 --> 01:02:00,119
doing this but the first one is
1441
01:01:57,240 --> 01:02:02,839
pre-training with mle and so you start
1442
01:02:00,119 --> 01:02:04,920
with a pre-trained model um and then
1443
01:02:02,839 --> 01:02:09,359
switch over to RL after you finished
1444
01:02:04,920 --> 01:02:11,520
pre-training the model um and so
1445
01:02:09,359 --> 01:02:13,279
this makes a lot of sense if you're
1446
01:02:11,520 --> 01:02:14,960
training a language model which I assume
1447
01:02:13,279 --> 01:02:17,039
that almost everybody in this class is
1448
01:02:14,960 --> 01:02:20,279
going to be doing but it does only work
1449
01:02:17,039 --> 01:02:22,720
in scenarios where you can run mle and
1450
01:02:20,279 --> 01:02:24,359
so it doesn't work if you're predicting
1451
01:02:22,720 --> 01:02:27,240
like latent variables that aren't
1452
01:02:24,359 --> 01:02:28,760
included in the original space
1453
01:02:27,240 --> 01:02:31,960
um it
1454
01:02:28,760 --> 01:02:34,279
also doesn't work in a setting where
1455
01:02:31,960 --> 01:02:36,640
like you want to learn a
1456
01:02:34,279 --> 01:02:40,799
chatbot you want to learn a chatbot for
1457
01:02:36,640 --> 01:02:44,200
customer service for a
1458
01:02:40,799 --> 01:02:48,039
company that
1459
01:02:44,200 --> 01:02:49,960
has like for example a product catalog
1460
01:02:48,039 --> 01:02:53,559
that the language model has never seen
1461
01:02:49,960 --> 01:02:56,000
before and so if the language model has
1462
01:02:53,559 --> 01:02:57,359
no information about the product catalog
1463
01:02:56,000 --> 01:02:59,920
whatsoever you don't provide it through
1464
01:02:57,359 --> 01:03:02,440
rag or something like that it's going to
1465
01:02:59,920 --> 01:03:04,039
have to explore infinitely or not
1466
01:03:02,440 --> 01:03:05,599
infinitely but it's going to have to
1467
01:03:04,039 --> 01:03:08,359
explore too large of a space and you're
1468
01:03:05,599 --> 01:03:10,000
never going to converge with um with
1469
01:03:08,359 --> 01:03:12,359
your language modeling objectives so you
1470
01:03:10,000 --> 01:03:15,000
need to basically be able to create at
1471
01:03:12,359 --> 01:03:16,079
least some supervised training data to
1472
01:03:15,000 --> 01:03:19,279
train with
1473
01:03:16,079 --> 01:03:20,720
mle um but assuming you can do that I'm
1474
01:03:19,279 --> 01:03:22,920
assuming that almost everybody is going
1475
01:03:20,720 --> 01:03:26,400
to do some sort of pre-training with
1476
01:03:22,920 --> 01:03:27,880
MLE um the next step that people use uh
1477
01:03:26,400 --> 01:03:30,520
in reinforcement learning that's really
1478
01:03:27,880 --> 01:03:34,319
important to stabilize is regularization
1479
01:03:30,520 --> 01:03:35,880
to an existing model and you have an
1480
01:03:34,319 --> 01:03:39,039
existing model and you want to prevent
1481
01:03:35,880 --> 01:03:40,559
it from getting too far away and the
1482
01:03:39,039 --> 01:03:42,279
reason why you want to do this is like
1483
01:03:40,559 --> 01:03:45,720
let's say you start assigning a negative
1484
01:03:42,279 --> 01:03:47,440
reward to toxic utterances for example
1485
01:03:45,720 --> 01:03:49,200
if your model stops being a language
1486
01:03:47,440 --> 01:03:51,920
model whatsoever that's a bad idea so
1487
01:03:49,200 --> 01:03:53,400
you want to keep it as a language model
1488
01:03:51,920 --> 01:03:55,599
keep it close enough to still being a
1489
01:03:53,400 --> 01:03:57,559
competent language model while you know
1490
01:03:55,599 --> 01:03:59,599
like removing the toxic
1491
01:03:57,559 --> 01:04:03,039
utterances so there's a number of
1492
01:03:59,599 --> 01:04:05,680
methods that people use to do this um uh
1493
01:04:03,039 --> 01:04:08,359
the most prominent ones are KL
1494
01:04:05,680 --> 01:04:10,279
regularization uh well so the the first
1495
01:04:08,359 --> 01:04:13,119
most prominent one is KL regularization
1496
01:04:10,279 --> 01:04:15,839
and the way this works is basically in
1497
01:04:13,119 --> 01:04:19,400
addition you add you have two
1498
01:04:15,839 --> 01:04:22,279
terms the first term is a term that
1499
01:04:19,400 --> 01:04:25,760
improves your reward so you have your
1500
01:04:22,279 --> 01:04:28,039
old model where your old model is
1501
01:04:25,760 --> 01:04:31,279
creating a
1502
01:04:28,039 --> 01:04:32,440
probability uh it has a probability here
1503
01:04:31,279 --> 01:04:34,960
and then you have the probability
1504
01:04:32,440 --> 01:04:38,160
assigned by your new model and then you
1505
01:04:34,960 --> 01:04:41,200
have your reward signal here and so this
1506
01:04:38,160 --> 01:04:43,599
is basically improving the log odds or
1507
01:04:41,200 --> 01:04:46,960
improving the odds of getting a good
1508
01:04:43,599 --> 01:04:49,720
reward for high reward
1509
01:04:46,960 --> 01:04:52,920
sequences separately from this you have
1510
01:04:49,720 --> 01:04:55,920
this KL regularization term and this KL
1511
01:04:52,920 --> 01:04:58,119
regularization term is keeping the
1512
01:04:55,920 --> 01:05:00,279
scores of or it's keeping the
1513
01:04:58,119 --> 01:05:02,400
probability distribution of your new
1514
01:05:00,279 --> 01:05:03,960
model similar to the probability
1515
01:05:02,400 --> 01:05:09,200
distribution of your old
1516
01:05:03,960 --> 01:05:11,359
model and this beta parameter basically
1517
01:05:09,200 --> 01:05:15,240
you can increase it or decrease it based
1518
01:05:11,359 --> 01:05:18,400
on how similar you want to keep the
1519
01:05:15,240 --> 01:05:18,400
new model to the old model
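The objective being described is commonly written in a form like the one below, with beta trading off reward against staying close to the old model; the exact parameterization on the slide may differ from this sketch.

```latex
J(\theta)
  = \mathbb{E}_{y \sim p_{\theta}(\cdot \mid x)}\!\left[\, r(x, y) \,\right]
  \;-\; \beta \,\mathrm{KL}\!\left( p_{\theta}(\cdot \mid x) \,\|\, p_{\mathrm{old}}(\cdot \mid x) \right)
```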
1520
01:05:20,720 --> 01:05:24,640
another method that people use is
1521
01:05:23,160 --> 01:05:29,279
something called proximal policy
1522
01:05:24,640 --> 01:05:30,920
optimization or PPO and this is a
1523
01:05:29,279 --> 01:05:33,920
method that is based on
1524
01:05:30,920 --> 01:05:38,160
clipping uh the
1525
01:05:33,920 --> 01:05:40,920
outputs and we Define uh this ratio
1526
01:05:38,160 --> 01:05:43,880
here so this ratio is equivalent to this
1527
01:05:40,920 --> 01:05:46,160
here so it's basically um kind of the
1528
01:05:43,880 --> 01:05:47,839
amount that you're learning or the
1529
01:05:46,160 --> 01:05:51,720
amount that the new model up weights
1530
01:05:47,839 --> 01:05:54,039
High reward sequences and so here we
1531
01:05:51,720 --> 01:05:58,200
have the same thing that we had
1532
01:05:54,039 --> 01:06:01,200
above so it it looks like this but over
1533
01:05:58,200 --> 01:06:03,720
here we have a clipped version of this
1534
01:06:01,200 --> 01:06:07,000
where essentially what we do is we
1535
01:06:03,720 --> 01:06:07,000
clip this
1536
01:06:21,119 --> 01:06:27,880
ratio this ratio to be within uh a
1537
01:06:24,720 --> 01:06:32,160
certain range of the original ratio and
1538
01:06:27,880 --> 01:06:37,880
what this is doing is this is
1539
01:06:32,160 --> 01:06:41,400
essentially forcing the model to um not
1540
01:06:37,880 --> 01:06:44,000
reward large jumps in the space um
1541
01:06:41,400 --> 01:06:47,559
because if you take the
1542
01:06:44,000 --> 01:06:49,160
minimum and actually I'm I'm sorry I
1543
01:06:47,559 --> 01:06:50,720
just realized I I might have done
1544
01:06:49,160 --> 01:06:52,520
something confusing here because this is
1545
01:06:50,720 --> 01:06:53,960
actually higher is better so this isn't
1546
01:06:52,520 --> 01:06:56,079
really a loss function this is something
1547
01:06:53,960 --> 01:06:57,680
you're attempting to maximize so
1548
01:06:56,079 --> 01:06:59,839
in contrast to all of the other things I
1549
01:06:57,680 --> 01:07:01,680
was talking about before um this is
1550
01:06:59,839 --> 01:07:04,400
something where higher is better instead
1551
01:07:01,680 --> 01:07:07,599
of lower is better but anyway basically
1552
01:07:04,400 --> 01:07:09,599
by taking the minimum of this you're
1553
01:07:07,599 --> 01:07:11,960
encouraging the model
1554
01:07:09,599 --> 01:07:16,279
to
1555
01:07:11,960 --> 01:07:18,559
uh keep examining the space where you
1556
01:07:16,279 --> 01:07:20,799
don't diverge much from the original
1557
01:07:18,559 --> 01:07:22,920
model and if the space where the
1558
01:07:20,799 --> 01:07:25,240
original model was in is better than the
1559
01:07:22,920 --> 01:07:27,440
new space that your model has moved into
1560
01:07:25,240 --> 01:07:30,920
you move back towards the original model
1561
01:07:27,440 --> 01:07:33,000
so basically like if you had um if you
1562
01:07:30,920 --> 01:07:34,960
learned a model if you started learning
1563
01:07:33,000 --> 01:07:37,960
a model that looked like it was
1564
01:07:34,960 --> 01:07:40,279
optimizing uh your your reward but then
1565
01:07:37,960 --> 01:07:43,119
suddenly the model went off the rails
1566
01:07:40,279 --> 01:07:45,000
and um it starts generating completely
1567
01:07:43,119 --> 01:07:47,319
nonsense outputs that get really bad
1568
01:07:45,000 --> 01:07:49,119
reward this will push it back towards
1569
01:07:47,319 --> 01:07:50,920
the original policy and that's the basic
1570
01:07:49,119 --> 01:07:54,279
idea behind PPO
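In code, the clipped surrogate looks roughly like this. It is a per-sample sketch that assumes you already have new and old log-probabilities and an advantage term adv; in practice you would more likely rely on a library such as TRL than roll your own.

```python
import torch

def ppo_clip_objective(logp_new: torch.Tensor, logp_old: torch.Tensor,
                       adv: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate objective (higher is better, so you maximize it or
    minimize its negative). logp_new, logp_old, and adv all have shape [B]."""
    ratio = torch.exp(logp_new - logp_old.detach())           # pi_new / pi_old
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv  # limit how far the ratio can help
    return torch.min(unclipped, clipped).mean()
```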
1571
01:07:50,920 --> 01:07:57,640
um in terms of what I see people using
1572
01:07:54,279 --> 01:07:59,799
um PPO was like really really popular for
1573
01:07:57,640 --> 01:08:01,880
a while but I've started to see people
1574
01:07:59,799 --> 01:08:04,799
use alternative strategies that use KL
1575
01:08:01,880 --> 01:08:06,880
regularization so I don't I don't think
1576
01:08:04,799 --> 01:08:08,520
either one of them is like particularly
1577
01:08:06,880 --> 01:08:10,039
more popular than any of the others and
1578
01:08:08,520 --> 01:08:13,720
this one's a little bit simpler
1579
01:08:10,039 --> 01:08:13,720
conceptually so I like that
1580
01:08:14,880 --> 01:08:19,279
one cool um any questions about
1581
01:08:20,359 --> 01:08:26,759
this okay um and actually one thing I
1582
01:08:24,640 --> 01:08:29,679
should mention is um all of these things
1583
01:08:26,759 --> 01:08:32,120
are implemented uh in you know whatever
1584
01:08:29,679 --> 01:08:33,759
libraries you use like hugging face TRL
1585
01:08:32,120 --> 01:08:35,679
Transformer reinforcement learning as an
1586
01:08:33,759 --> 01:08:37,040
example Library all of these methods are
1587
01:08:35,679 --> 01:08:38,400
implemented there so if you actually
1588
01:08:37,040 --> 01:08:40,600
want to use these in practice that's a
1589
01:08:38,400 --> 01:08:40,600
good
1590
01:08:40,839 --> 01:08:46,359
place the next thing is adding a
1591
01:08:42,920 --> 01:08:48,679
Baseline and so the basic idea is that
1592
01:08:46,359 --> 01:08:52,199
you have ex expectations about your
1593
01:08:48,679 --> 01:08:54,640
reward for a particular sentence and um
1594
01:08:52,199 --> 01:08:56,560
like let's say we wanted to uh translate
1595
01:08:54,640 --> 01:08:58,400
a sentence and we have uh something like
1596
01:08:56,560 --> 01:09:01,279
this is an easy sentence and buffalo
1597
01:08:58,400 --> 01:09:02,920
buffalo buffalo which is a harder
1598
01:09:01,279 --> 01:09:07,799
sentence to
1599
01:09:02,920 --> 01:09:09,679
translate and so we have a reward um if
1600
01:09:07,799 --> 01:09:11,759
if you're not familiar with this example
1601
01:09:09,679 --> 01:09:13,480
you can search on Wikipedia for buffalo
1602
01:09:11,759 --> 01:09:16,759
buffalo buffalo and you'll you'll find
1603
01:09:13,480 --> 01:09:19,520
out what I'm talking about um but uh
1604
01:09:16,759 --> 01:09:21,440
there's a reward uh and let's say you
1605
01:09:19,520 --> 01:09:24,359
got a reward of 0.8 for the first one
1606
01:09:21,440 --> 01:09:29,679
and a reward of 0.3 for the second
1607
01:09:24,359 --> 01:09:31,679
one but the problem is if um the first
1608
01:09:29,679 --> 01:09:33,640
one actually is really easy and the
1609
01:09:31,679 --> 01:09:36,120
second one is really hard getting a
1610
01:09:33,640 --> 01:09:37,799
reward of 0.8 for the first one for
1611
01:09:36,120 --> 01:09:40,080
like a translation or something is
1612
01:09:37,799 --> 01:09:41,120
actually bad right and a reward of 0.3
1613
01:09:40,080 --> 01:09:45,239
is good because you're moving in the
1614
01:09:41,120 --> 01:09:49,359
right direction and so you basically um
1615
01:09:45,239 --> 01:09:52,239
you have uh the Baseline uh minus reward
1616
01:09:49,359 --> 01:09:54,960
or sorry reward minus Baseline and this
1617
01:09:52,239 --> 01:09:56,520
would give you a negative value for this
1618
01:09:54,960 --> 01:09:59,320
first one a positive value for the
1619
01:09:56,520 --> 01:10:01,360
second one and so the basic idea is can
1620
01:09:59,320 --> 01:10:04,400
we predict a priori how difficult this
1621
01:10:01,360 --> 01:10:05,440
example is and then uh adjust our reward
1622
01:10:04,400 --> 01:10:08,360
based on
1623
01:10:05,440 --> 01:10:10,960
that and
1624
01:10:08,360 --> 01:10:13,679
so that's the basic idea you just have
1625
01:10:10,960 --> 01:10:15,560
kind of like a baseline model um you
1626
01:10:13,679 --> 01:10:19,320
have a baseline model that predicts this
1627
01:10:15,560 --> 01:10:19,320
and uh you adjust uh
1628
01:10:19,760 --> 01:10:25,000
appropriately. There are two major ways
1629
01:10:22,719 --> 01:10:27,600
you can do this. The
1630
01:10:25,000 --> 01:10:29,800
baseline doesn't need to be anything in particular;
1631
01:10:27,600 --> 01:10:32,960
the only hope is that it decreases the
1632
01:10:29,800 --> 01:10:35,960
variance in your reward uh and makes
1633
01:10:32,960 --> 01:10:38,239
learning more stable. There are two
1634
01:10:35,960 --> 01:10:40,159
options that I see done pretty widely.
1635
01:10:38,239 --> 01:10:43,000
The first one is predicting the final
1636
01:10:40,159 --> 01:10:47,360
reward
1637
01:10:43,000 --> 01:10:50,960
using a model that doesn't look at
1638
01:10:47,360 --> 01:10:53,400
the answer that you provided at all; it
1639
01:10:50,960 --> 01:10:55,880
only looks at the input or it only looks
1640
01:10:53,400 --> 01:10:58,840
at the intermediate States of uh you
1641
01:10:55,880 --> 01:11:00,480
know a model or something and so at the
1642
01:10:58,840 --> 01:11:03,280
sentence level you can have one Baseline
1643
01:11:00,480 --> 01:11:04,719
per sentence um you can also do it at
1644
01:11:03,280 --> 01:11:10,560
each decoder
1645
01:11:04,719 --> 01:11:11,640
State and this is uh basically you can
1646
01:11:10,560 --> 01:11:13,040
do this anytime you're doing
1647
01:11:11,640 --> 01:11:15,199
reinforcement learning by just training
1648
01:11:13,040 --> 01:11:18,199
a regression model that does this for
1649
01:11:15,199 --> 01:11:19,679
you based on the rewards you get the
1650
01:11:18,199 --> 01:11:21,040
important thing is the Baseline is not
1651
01:11:19,679 --> 01:11:22,640
allowed to use any of your actual
1652
01:11:21,040 --> 01:11:25,679
predictions because once you start using
1653
01:11:22,640 --> 01:11:26,640
the predictions, then it's not
1654
01:11:25,679 --> 01:11:28,679
a
1655
01:11:26,640 --> 01:11:30,840
baseline anymore. Another option, which is
1656
01:11:28,679 --> 01:11:33,440
relatively easy to implement but can
1657
01:11:30,840 --> 01:11:36,320
still be effective is you calculate the
1658
01:11:33,440 --> 01:11:38,719
mean of the rewards in a batch and so if
1659
01:11:36,320 --> 01:11:40,880
you have a big batch of data and your
1660
01:11:38,719 --> 01:11:44,440
average reward in the batch is like
1661
01:11:40,880 --> 01:11:46,480
0.4 uh then you just subtract that 0.4
1662
01:11:44,440 --> 01:11:50,080
uh and calculate your reward based on
1663
01:11:46,480 --> 01:11:50,080
that. So that's another option that you can
1664
01:11:51,800 --> 01:11:57,800
use.
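(To make the baseline idea concrete, here is a small self-contained sketch of the two options just described. The numbers and variable names are made up for illustration; in practice the log-probabilities come from your model and the learned baseline is a small regression network over the input only.)

# Illustrative sketch of advantage = reward - baseline, with made-up numbers.
import torch

rewards = torch.tensor([0.8, 0.3])  # easy sentence vs. hard "buffalo" sentence
logprobs = torch.tensor([-4.0, -12.0], requires_grad=True)  # log p(output | input)

# Option 1: batch-mean baseline, subtract the average reward in the batch.
advantages_mean = rewards - rewards.mean()          # [ 0.25, -0.25]

# Option 2: learned baseline that only sees the input, never the prediction.
predicted_baseline = torch.tensor([0.9, 0.2])       # easy input -> high expected reward
advantages_learned = rewards - predicted_baseline   # [-0.10,  0.10]

# REINFORCE-style update: upweight outputs that beat their baseline.
loss = -(advantages_learned.detach() * logprobs).mean()
loss.backward()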
1665
01:11:53,639 --> 01:12:00,000
um a kind of extreme example of this uh
1666
01:11:57,800 --> 01:12:01,199
of creating a baseline is contrasting
1667
01:12:00,000 --> 01:12:03,639
pairwise
1668
01:12:01,199 --> 01:12:05,880
examples um or
1669
01:12:03,639 --> 01:12:08,280
contrasting different outputs for the
1670
01:12:05,880 --> 01:12:12,040
same input
1671
01:12:08,280 --> 01:12:13,920
and you can easily learn uh directly
1672
01:12:12,040 --> 01:12:16,239
from pairwise Human
1673
01:12:13,920 --> 01:12:18,199
preferences uh which can provide more
1674
01:12:16,239 --> 01:12:20,760
stability because you know one is better
1675
01:12:18,199 --> 01:12:23,880
than the other so you essentially can be
1676
01:12:20,760 --> 01:12:26,199
sure that uh you're upweighting a better
1677
01:12:23,880 --> 01:12:29,560
one and downweighting a worse one.
1678
01:12:26,199 --> 01:12:31,400
um this is the idea behind DPO which is
1679
01:12:29,560 --> 01:12:33,719
a recently pretty popular method, but
1680
01:12:31,400 --> 01:12:36,800
there's also other previous methods that
1681
01:12:33,719 --> 01:12:40,199
did similar things and the way DPO works
1682
01:12:36,800 --> 01:12:45,040
is it basically calculates this ratio of
1683
01:12:40,199 --> 01:12:49,280
the probability of the new
1684
01:12:45,040 --> 01:12:51,639
model to the old model, but it upweights this
1685
01:12:49,280 --> 01:12:53,639
probability for a good output and it
1686
01:12:51,639 --> 01:12:56,280
downweights this probability for a bad
1687
01:12:53,639 --> 01:12:57,679
output and so
1688
01:12:56,280 --> 01:13:00,120
here we have our better outputs over
1689
01:12:57,679 --> 01:13:02,040
here here we have our worse outputs and
1690
01:13:00,120 --> 01:13:03,600
it's basically learning to
1691
01:13:02,040 --> 01:13:05,639
upweight the probability and downweight the
1692
01:13:03,600 --> 01:13:09,320
probability
1693
01:13:05,639 --> 01:13:09,320
accordingly.
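(Here is a minimal sketch of that loss, written out by hand rather than taken from any library; the inputs are assumed to be summed token log-probabilities for the chosen and rejected outputs under the current policy and the frozen reference model.)

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Log probability ratios of the new (policy) model vs. the old (reference) model.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # Push the ratio up for the preferred output and down for the dispreferred one.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()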
1694
01:13:09,360 --> 01:13:15,040
um you can notice that DPO is very
1695
01:13:12,280 --> 01:13:18,040
similar to PPO, in that it's learning
1696
01:13:15,040 --> 01:13:19,679
uh it's using these ratios but the
1697
01:13:18,040 --> 01:13:21,520
disadvantage of this is you obviously
1698
01:13:19,679 --> 01:13:23,120
require pairwise judgments and you can't
1699
01:13:21,520 --> 01:13:26,120
learn a model if you don't have these
1700
01:13:23,120 --> 01:13:28,080
pairwise judgments. So
1701
01:13:26,120 --> 01:13:30,760
the
1702
01:13:28,080 --> 01:13:33,159
beta? Yeah, so the beta term is
1703
01:13:30,760 --> 01:13:35,840
basically a normalization term it's a
1704
01:13:33,159 --> 01:13:39,960
hyperparameter
1705
01:13:35,840 --> 01:13:41,840
for DPO. Sorry, I read the paper right
1706
01:13:39,960 --> 01:13:43,639
when it came out and I don't remember if
1707
01:13:41,840 --> 01:13:45,600
it's a direct derivation from the KL
1708
01:13:43,639 --> 01:13:47,960
divergence term or not, but I think it
1709
01:13:45,600 --> 01:13:49,800
might be um I'd have to go back and look
1710
01:13:47,960 --> 01:13:50,480
at the paper, but basically
1711
01:13:49,800 --> 01:13:53,600
the
1712
01:13:50,480 --> 01:13:56,760
larger this is, the larger the
1713
01:13:53,600 --> 01:13:59,320
gradient steps you'll take.
1714
01:13:56,760 --> 01:14:00,639
Also,
1715
01:13:59,320 --> 01:14:03,400
sorry I didn't mention this but you'll
1716
01:14:00,639 --> 01:14:06,120
notice there's a sigmoid term here so
1717
01:14:03,400 --> 01:14:09,000
the
1718
01:14:06,120 --> 01:14:10,080
beta: the larger you make the beta,
1719
01:14:09,000 --> 01:14:13,239
the
1720
01:14:10,080 --> 01:14:16,600
more small differences in these
1721
01:14:13,239 --> 01:14:18,719
values matter. It basically stretches
1722
01:14:16,600 --> 01:14:22,280
or shrinks the sigmoid with respect to
1723
01:14:18,719 --> 01:14:24,120
how peaked it is, so it will
1724
01:14:22,280 --> 01:14:25,800
affect how much small differences
1725
01:14:24,120 --> 01:14:27,960
in these values affect the loss.
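(A quick numerical illustration of that point, my own example rather than one from the slides: the same small gap between the chosen and rejected log-ratios maps to quite different loss values as beta changes.)

import torch
import torch.nn.functional as F

margin = torch.tensor(0.2)  # small gap between chosen and rejected log-ratios
for beta in (0.05, 0.1, 0.5, 1.0):
    loss = -F.logsigmoid(beta * margin)
    print(f"beta={beta}: loss={loss.item():.4f}")
# Larger beta stretches the sigmoid's input, so the same small margin is
# treated as a bigger difference, changing the loss value and gradient scale.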
1726
01:14:25,800 --> 01:14:30,120
But I think this was derived from the
1727
01:14:27,960 --> 01:14:31,760
KL regularization term that we had
1728
01:14:30,120 --> 01:14:34,400
previously in
1729
01:14:31,760 --> 01:14:35,800
um in this slide here but I have to go
1730
01:14:34,400 --> 01:14:40,520
back and double check unless somebody
1731
01:14:35,800 --> 01:14:43,239
knows. It is? Okay, good, yeah.
1732
01:14:40,520 --> 01:14:45,000
so I don't want to say wrong things but
1733
01:14:43,239 --> 01:14:48,239
I also don't want
1734
01:14:45,000 --> 01:14:50,920
to. Okay, cool. And so then, increasing
1735
01:14:48,239 --> 01:14:55,080
batch size
1736
01:14:50,920 --> 01:14:57,360
Another thing is that
1737
01:14:55,080 --> 01:14:58,440
reinforcement
1738
01:14:57,360 --> 01:14:59,920
learning is necessarily going to have higher
1739
01:14:58,440 --> 01:15:01,400
variance than maximum likelihood
1740
01:14:59,920 --> 01:15:04,199
estimation, just because we're doing
1741
01:15:01,400 --> 01:15:07,840
sampling and other things like this.
1742
01:15:04,199 --> 01:15:09,440
So one very simple thing you can do
1743
01:15:07,840 --> 01:15:11,280
is just increase the number of examples
1744
01:15:09,440 --> 01:15:13,679
or rollouts that you do before an update
1745
01:15:11,280 --> 01:15:15,800
to stabilize things. And so I would definitely
1746
01:15:13,679 --> 01:15:17,480
suggest that if you're seeing any
1747
01:15:15,800 --> 01:15:18,679
instability after doing all of the tricks
1748
01:15:17,480 --> 01:15:20,400
that I mentioned before that you
1749
01:15:18,679 --> 01:15:23,040
increase your batch size and often that
1750
01:15:20,400 --> 01:15:25,480
can just resolve your problems
1751
01:15:23,040 --> 01:15:28,760
um another uh
1752
01:15:25,480 --> 01:15:30,560
thing that people often do is um save
1753
01:15:28,760 --> 01:15:32,040
many many previous rollouts because
1754
01:15:30,560 --> 01:15:34,199
generally doing rollouts
1755
01:15:32,040 --> 01:15:37,840
and collecting
1756
01:15:34,199 --> 01:15:39,560
rewards is more expensive, and so you
1757
01:15:37,840 --> 01:15:42,360
can save the rollouts that you have
1758
01:15:39,560 --> 01:15:43,840
done before and uh keep them around so
1759
01:15:42,360 --> 01:15:46,600
you can update parameters with larger
1760
01:15:43,840 --> 01:15:50,800
batches in a more efficient way.
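(A toy, self-contained sketch of that last point; all of the helper functions here are stand-ins I made up so the snippet runs, not any particular library's API. The idea is just to accumulate rollouts and their rewards into a buffer and then update once per large batch.)

import random

# Stand-in helpers so the sketch runs; in practice these come from your model and reward.
def sample_prompt():
    return random.choice(["this is an easy sentence", "buffalo buffalo buffalo"])

def sample_output(prompt):
    return f"output for: {prompt}", random.uniform(-20.0, -1.0)  # (sampled text, log-prob)

def reward_fn(prompt, output):
    return random.random()

def update_policy(batch):
    mean_reward = sum(reward for _, _, reward, _ in batch) / len(batch)
    print(f"update over {len(batch)} rollouts, mean reward {mean_reward:.2f}")

buffer, rollout_batch_size = [], 256  # more rollouts per update -> lower variance
for step in range(1024):
    prompt = sample_prompt()
    output, logprob = sample_output(prompt)
    buffer.append((prompt, output, reward_fn(prompt, output), logprob))
    if len(buffer) >= rollout_batch_size:
        update_policy(buffer)   # one larger, more stable parameter update
        buffer.clear()          # or keep some rollouts around for off-policy reuse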
1761
01:15:46,600 --> 01:15:53,120
Cool, so that's all I have. I just
1762
01:15:50,800 --> 01:15:54,400
realized we're exactly at time so uh I
1763
01:15:53,120 --> 01:15:56,440
should finish up here but I'll be happy
1764
01:15:54,400 --> 01:15:59,440
to take any questions.
1765
01:15:56,440 --> 01:15:59,440
for
1766
01:16:01,679 --> 01:16:04,679
thanks