|
WEBVTT |
|
|
|
00:00:00.840 --> 00:00:05.920 |
|
okay so uh let's get started um today |
|
|
|
00:00:04.200 --> 00:00:08.000 |
|
I'm going to be talking about learning |
|
|
|
00:00:05.920 --> 00:00:09.480 |
|
from human feedback. I wrote
|
|
|
00:00:08.000 --> 00:00:12.160 |
|
reinforcement learning from human
|
|
|
00:00:09.480 --> 00:00:14.519 |
|
feedback because that's what um you know |
|
|
|
00:00:12.160 --> 00:00:15.759 |
|
a lot of people talk about nowadays but |
|
|
|
00:00:14.519 --> 00:00:18.880 |
|
actually there's other methods of |
|
|
|
00:00:15.759 --> 00:00:21.840 |
|
learning from human feedback. So first
|
|
|
00:00:18.880 --> 00:00:24.760 |
|
I'm going to be talking about the ways |
|
|
|
00:00:21.840 --> 00:00:27.920 |
|
we can get uh human feedback for the |
|
|
|
00:00:24.760 --> 00:00:31.039 |
|
generations of models and mostly focus |
|
|
|
00:00:27.920 --> 00:00:32.960 |
|
on generation tasks, because, um,
|
|
|
00:00:31.039 --> 00:00:35.800 |
|
generation tasks are harder than like |
|
|
|
00:00:32.960 --> 00:00:38.559 |
|
classification tasks that we uh we deal |
|
|
|
00:00:35.800 --> 00:00:40.000 |
|
with normally so I'll spend a fair |
|
|
|
00:00:38.559 --> 00:00:42.239 |
|
amount of time talking about how we do |
|
|
|
00:00:40.000 --> 00:00:45.760 |
|
that and then after I talk about how we |
|
|
|
00:00:42.239 --> 00:00:48.360 |
|
do that we'll move into um how we |
|
|
|
00:00:45.760 --> 00:00:51.160 |
|
actually learn from that |
|
|
|
00:00:48.360 --> 00:00:53.399 |
|
signal so normally what we've done up |
|
|
|
00:00:51.160 --> 00:00:56.399 |
|
until this point is maximum likelihood |
|
|
|
00:00:53.399 --> 00:00:58.199 |
|
training uh this is just an overview |
|
|
|
00:00:56.399 --> 00:00:59.559 |
|
slide. So what we want to do is we
|
|
|
00:00:58.199 --> 00:01:00.760 |
|
want to maximize the likelihood of |
|
|
|
00:00:59.559 --> 00:01:03.280 |
|
predicting the next word in the
|
|
|
00:01:00.760 --> 00:01:05.960 |
|
reference given the previous words uh |
|
|
|
00:01:03.280 --> 00:01:08.119 |
|
which gives us the loss of the output |
|
|
|
00:01:05.960 --> 00:01:09.799 |
|
given the input uh where you know the |
|
|
|
00:01:08.119 --> 00:01:13.960 |
|
input can be the prompt the output can |
|
|
|
00:01:09.799 --> 00:01:16.080 |
|
be the answer to the prompt.
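As a minimal sketch, this is what that token-level MLE objective looks like in PyTorch (the logits and targets are hypothetical stand-ins for a real model and batch):

```python
import torch
import torch.nn.functional as F

def mle_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the reference tokens (teacher forcing).

    logits:  (batch, seq_len, vocab) next-token scores, conditioned on the
             gold prefix y_{<t} and the input x.
    targets: (batch, seq_len) gold token ids y_t from the reference.
    """
    # Cross-entropy is -log P(y_t | y_{<t}, x), averaged over all tokens.
    # Note every token is weighted equally -- exactly the "all mistakes
    # cost the same" problem discussed next.
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```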
|
|
|
00:01:13.960 --> 00:01:18.360 |
|
But there are, uh, lots of problems with
|
|
|
00:01:16.080 --> 00:01:20.439 |
|
learning from maximum likelihood, and I'm
|
|
|
00:01:18.360 --> 00:01:22.079 |
|
going to give three examples here I |
|
|
|
00:01:20.439 --> 00:01:24.159 |
|
think all of these are actually real |
|
|
|
00:01:22.079 --> 00:01:26.880 |
|
problems uh that we need to be worried |
|
|
|
00:01:24.159 --> 00:01:30.240 |
|
about so the first one is that some |
|
|
|
00:01:26.880 --> 00:01:32.439 |
|
mistakes are worse than others so um in |
|
|
|
00:01:30.240 --> 00:01:33.560 |
|
the end we want good outputs and some |
|
|
|
00:01:32.439 --> 00:01:36.520 |
|
mistaken |
|
|
|
00:01:33.560 --> 00:01:38.200 |
|
predictions uh can be a bigger problem |
|
|
|
00:01:36.520 --> 00:01:42.680 |
|
for the output being |
|
|
|
00:01:38.200 --> 00:01:46.000 |
|
good so to give an example uh let's say |
|
|
|
00:01:42.680 --> 00:01:47.600 |
|
what we actually wanted from like a |
|
|
|
00:01:46.000 --> 00:01:49.320 |
|
speech recognition system or a |
|
|
|
00:01:47.600 --> 00:01:54.040 |
|
translation system or something like |
|
|
|
00:01:49.320 --> 00:01:54.040 |
|
that is uh please send this package to |
|
|
|
00:01:54.280 --> 00:01:58.920 |
|
Pittsburgh if I write please send a |
|
|
|
00:01:56.880 --> 00:02:01.560 |
|
package to Pittsburgh then this is not a |
|
|
|
00:01:58.920 --> 00:02:03.560 |
|
huge problem |
|
|
|
00:02:01.560 --> 00:02:06.479 |
|
if I write uh please send this package |
|
|
|
00:02:03.560 --> 00:02:07.719 |
|
to Tokyo then that might be a big |
|
|
|
00:02:06.479 --> 00:02:09.640 |
|
problem because the package you wanted |
|
|
|
00:02:07.719 --> 00:02:12.760 |
|
to come to Pittsburgh goes to Tokyo |
|
|
|
00:02:09.640 --> 00:02:13.680 |
|
instead and uh you might not want that |
|
|
|
00:02:12.760 --> 00:02:16.080 |
|
to |
|
|
|
00:02:13.680 --> 00:02:18.000 |
|
happen you might also have it say |
|
|
|
00:02:16.080 --> 00:02:20.400 |
|
bleeping send this package to Pittsburgh |
|
|
|
00:02:18.000 --> 00:02:22.200 |
|
instead of please, um, and that would be a
|
|
|
00:02:20.400 --> 00:02:24.200 |
|
problem in a customer service system |
|
|
|
00:02:22.200 --> 00:02:28.400 |
|
right because your customer would uh |
|
|
|
00:02:24.200 --> 00:02:28.400 |
|
leave and never come back |
|
|
|
00:02:28.840 --> 00:02:32.040 |
|
so |
|
|
|
00:02:30.360 --> 00:02:33.720 |
|
messing up a determiner like this is not going to
|
|
|
00:02:32.040 --> 00:02:35.640 |
|
cause a huge issue. Uh, messing up other
|
|
|
00:02:33.720 --> 00:02:37.519 |
|
things is going to cause a larger |
|
|
|
00:02:35.640 --> 00:02:39.519 |
|
issue but from the point of view of |
|
|
|
00:02:37.519 --> 00:02:42.680 |
|
maximum likelihood, all of these are just
|
|
|
00:02:39.519 --> 00:02:44.560 |
|
tokens and messing up one token is the |
|
|
|
00:02:42.680 --> 00:02:47.519 |
|
same as messing up another token so |
|
|
|
00:02:44.560 --> 00:02:50.040 |
|
that's uh you know an |
|
|
|
00:02:47.519 --> 00:02:52.080 |
|
issue another problem is that the gold |
|
|
|
00:02:50.040 --> 00:02:54.640 |
|
standard in maximum likelihood
|
|
|
00:02:52.080 --> 00:02:57.480 |
|
estimation can be bad it can be like not |
|
|
|
00:02:54.640 --> 00:02:59.239 |
|
what you want, and uh corpora are full of
|
|
|
00:02:57.480 --> 00:03:02.400 |
|
outputs that we wouldn't want a language |
|
|
|
00:02:59.239 --> 00:03:05.400 |
|
model producing so for example uh toxic |
|
|
|
00:03:02.400 --> 00:03:07.799 |
|
comments on Reddit uh |
|
|
|
00:03:05.400 --> 00:03:09.959 |
|
disinformation um another thing that a |
|
|
|
00:03:07.799 --> 00:03:13.000 |
|
lot of people don't think about uh quite |
|
|
|
00:03:09.959 --> 00:03:15.640 |
|
as much is a lot of the data online is |
|
|
|
00:03:13.000 --> 00:03:17.680 |
|
uh, is automatically generated
|
|
|
00:03:15.640 --> 00:03:19.720 |
|
nowadays for example from machine |
|
|
|
00:03:17.680 --> 00:03:24.080 |
|
translation a lot of the translations |
|
|
|
00:03:19.720 --> 00:03:25.720 |
|
online are from uh 2016 Google translate |
|
|
|
00:03:24.080 --> 00:03:27.560 |
|
uh when Google translate was a lot less |
|
|
|
00:03:25.720 --> 00:03:29.120 |
|
good than it is now and so you have like |
|
|
|
00:03:27.560 --> 00:03:31.760 |
|
poor quality translations that were |
|
|
|
00:03:29.120 --> 00:03:31.760 |
|
automatically generated.
|
|
|
00:03:33.040 --> 00:03:37.959 |
|
A final problem is, uh, something that's
|
|
|
00:03:35.280 --> 00:03:40.360 |
|
called exposure bias and exposure bias |
|
|
|
00:03:37.959 --> 00:03:44.000 |
|
basically what it means is MLE training
|
|
|
00:03:40.360 --> 00:03:46.000 |
|
doesn't consider, um, the
|
|
|
00:03:44.000 --> 00:03:48.599 |
|
necessity for generation and it relies |
|
|
|
00:03:46.000 --> 00:03:51.360 |
|
on gold standard context so if we go |
|
|
|
00:03:48.599 --> 00:03:54.159 |
|
back to the MLE equation, when we're
|
|
|
00:03:51.360 --> 00:03:57.200 |
|
calculating MLE, this y_{<t} term is
|
|
|
00:03:54.159 --> 00:03:59.200 |
|
always correct it's always a good output |
|
|
|
00:03:57.200 --> 00:04:01.439 |
|
and so what the model does is it learns |
|
|
|
00:03:59.200 --> 00:04:04.280 |
|
to over rely on good |
|
|
|
00:04:01.439 --> 00:04:06.079 |
|
outputs and one example of a problem |
|
|
|
00:04:04.280 --> 00:04:08.360 |
|
that this causes is models tend to |
|
|
|
00:04:06.079 --> 00:04:10.560 |
|
repeat themselves over and over again |
|
|
|
00:04:08.360 --> 00:04:12.319 |
|
for example um when you use some |
|
|
|
00:04:10.560 --> 00:04:15.079 |
|
generation algorithms and the reason why |
|
|
|
00:04:12.319 --> 00:04:18.519 |
|
this happens is because in a gold |
|
|
|
00:04:15.079 --> 00:04:22.079 |
|
standard output if a word has appeared |
|
|
|
00:04:18.519 --> 00:04:25.840 |
|
previously that word is more likely to |
|
|
|
00:04:22.079 --> 00:04:28.560 |
|
happen next so like if you say um like I |
|
|
|
00:04:25.840 --> 00:04:29.759 |
|
am going um I am going to Pittsburgh |
|
|
|
00:04:28.560 --> 00:04:31.880 |
|
you're much more likely to say |
|
|
|
00:04:29.759 --> 00:04:33.000 |
|
Pittsburgh again in the future because |
|
|
|
00:04:31.880 --> 00:04:35.720 |
|
you're talking about Pittsburgh |
|
|
|
00:04:33.000 --> 00:04:37.400 |
|
the topic is coherent. So what you get is
|
|
|
00:04:35.720 --> 00:04:38.639 |
|
you get mle trained models saying I'm |
|
|
|
00:04:37.400 --> 00:04:40.160 |
|
going to Pittsburgh I am going to |
|
|
|
00:04:38.639 --> 00:04:41.680 |
|
Pittsburgh I am going to Pittsburgh I |
|
|
|
00:04:40.160 --> 00:04:45.280 |
|
am going to Pittsburgh. You've probably seen
|
|
|
00:04:41.680 --> 00:04:47.320 |
|
this before uh at some point and so um |
|
|
|
00:04:45.280 --> 00:04:49.320 |
|
exposure bias is basically that the |
|
|
|
00:04:47.320 --> 00:04:51.039 |
|
model has never been exposed to mistakes |
|
|
|
00:04:49.320 --> 00:04:55.240 |
|
in the past and so it can't deal with |
|
|
|
00:04:51.039 --> 00:04:56.840 |
|
them so what this does is um if you have |
|
|
|
00:04:55.240 --> 00:04:58.560 |
|
an alternative training algorithm you |
|
|
|
00:04:56.840 --> 00:05:02.120 |
|
can fix this by generating a whole bunch |
|
|
|
00:04:58.560 --> 00:05:04.880 |
|
of outputs, uh, scoring some of
|
|
|
00:05:02.120 --> 00:05:06.880 |
|
them poorly and penalizing the model for |
|
|
|
00:05:04.880 --> 00:05:09.960 |
|
uh generating poor outputs, and so that can
|
|
|
00:05:06.880 --> 00:05:09.960 |
|
fix these problems as well.
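As a rough sketch of that idea, a REINFORCE-style objective samples from the model itself and weights each sample's log-likelihood by its score (the `generate`, `sequence_log_prob`, and `reward_fn` helpers here are hypothetical):

```python
import torch

def sample_and_score_loss(model, prompt_ids, reward_fn, num_samples=4):
    """Expose the model to its own generations: sample outputs, score them,
    and penalize the model for producing low-scoring ones."""
    losses = []
    for _ in range(num_samples):
        output_ids = model.generate(prompt_ids)             # y ~ p_model (not gold!)
        log_prob = model.sequence_log_prob(prompt_ids, output_ids)
        reward = reward_fn(prompt_ids, output_ids)          # human or automatic score
        losses.append(-reward * log_prob)                   # low reward pushes prob down
    return torch.stack(losses).mean()
```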
|
|
|
00:05:10.800 --> 00:05:18.440 |
|
Uh, any questions about this? All
|
|
|
00:05:15.199 --> 00:05:20.800 |
|
good? Okay, cool. So now I'd like to get
|
|
|
00:05:18.440 --> 00:05:23.919 |
|
into how we measure how good an output |
|
|
|
00:05:20.800 --> 00:05:26.360 |
|
is and there's different ways of doing |
|
|
|
00:05:23.919 --> 00:05:30.319 |
|
this um the first one is objective |
|
|
|
00:05:26.360 --> 00:05:32.680 |
|
assessment so for some uh tasks or for |
|
|
|
00:05:30.319 --> 00:05:35.400 |
|
many tasks there's kind of objectively a |
|
|
|
00:05:32.680 --> 00:05:37.280 |
|
correct answer there's also human |
|
|
|
00:05:35.400 --> 00:05:40.360 |
|
subjective annotations so you can ask |
|
|
|
00:05:37.280 --> 00:05:42.919 |
|
humans to do annotation for you there's |
|
|
|
00:05:40.360 --> 00:05:45.400 |
|
machine prediction of human |
|
|
|
00:05:42.919 --> 00:05:48.319 |
|
preferences and there's also use in |
|
|
|
00:05:45.400 --> 00:05:50.840 |
|
another system in a downstream |
|
|
|
00:05:48.319 --> 00:05:52.960 |
|
task so the way objective assessment |
|
|
|
00:05:50.840 --> 00:05:54.919 |
|
works is you have an annotated correct |
|
|
|
00:05:52.960 --> 00:05:57.080 |
|
answer and match against this. So like if
|
|
|
00:05:54.919 --> 00:06:00.600 |
|
you're solving math problems uh |
|
|
|
00:05:57.080 --> 00:06:02.560 |
|
answering objective questions, and
|
|
|
00:06:00.600 --> 00:06:04.280 |
|
you know you can pick any arbitrary |
|
|
|
00:06:02.560 --> 00:06:06.840 |
|
example you can pick your classification |
|
|
|
00:06:04.280 --> 00:06:09.800 |
|
example from uh like your text |
|
|
|
00:06:06.840 --> 00:06:11.880 |
|
classification tasks an even clearer |
|
|
|
00:06:09.800 --> 00:06:13.880 |
|
example is if you have math problems |
|
|
|
00:06:11.880 --> 00:06:15.639 |
|
there's kind of objectively one answer |
|
|
|
00:06:13.880 --> 00:06:18.080 |
|
to any math problem and there's no other |
|
|
|
00:06:15.639 --> 00:06:19.680 |
|
answer that could be correct so this |
|
|
|
00:06:18.080 --> 00:06:21.160 |
|
makes your life easy if you're handling |
|
|
|
00:06:19.680 --> 00:06:22.560 |
|
this type of problem but of course |
|
|
|
00:06:21.160 --> 00:06:24.120 |
|
there's many other types of problems we |
|
|
|
00:06:22.560 --> 00:06:26.039 |
|
want to handle that don't have objective |
|
|
|
00:06:24.120 --> 00:06:29.039 |
|
answers like |
|
|
|
00:06:26.039 --> 00:06:31.440 |
|
this. So let's say we're handling
|
|
|
00:06:29.039 --> 00:06:34.680 |
|
a generation task where we don't have an |
|
|
|
00:06:31.440 --> 00:06:36.360 |
|
objective answer. Um, in this case, kind of
|
|
|
00:06:34.680 --> 00:06:39.440 |
|
one of our gold standards is human |
|
|
|
00:06:36.360 --> 00:06:42.360 |
|
evaluation so we might have a source |
|
|
|
00:06:39.440 --> 00:06:44.919 |
|
input like a prompt or an input text for |
|
|
|
00:06:42.360 --> 00:06:47.240 |
|
machine translation we have one or |
|
|
|
00:06:44.919 --> 00:06:49.960 |
|
several hypotheses and we ask a human |
|
|
|
00:06:47.240 --> 00:06:53.280 |
|
annotator to basically give uh a score |
|
|
|
00:06:49.960 --> 00:06:55.759 |
|
for them or do some sort of other |
|
|
|
00:06:53.280 --> 00:06:59.759 |
|
annotation and the different varieties |
|
|
|
00:06:55.759 --> 00:07:03.080 |
|
of annotation that we can give are um |
|
|
|
00:06:59.759 --> 00:07:04.599 |
|
something called direct assessment so uh |
|
|
|
00:07:03.080 --> 00:07:06.599 |
|
direct assessment is a term that comes |
|
|
|
00:07:04.599 --> 00:07:09.280 |
|
from machine translation uh so you might |
|
|
|
00:07:06.599 --> 00:07:11.039 |
|
not see it used uh lots of other places |
|
|
|
00:07:09.280 --> 00:07:13.120 |
|
but it's basically just give a score |
|
|
|
00:07:11.039 --> 00:07:15.759 |
|
directly to how good the output is so |
|
|
|
00:07:13.120 --> 00:07:17.199 |
|
you can say, like, if the
|
|
|
00:07:15.759 --> 00:07:18.960 |
|
translation is "please send this
|
|
|
00:07:17.199 --> 00:07:21.759 |
|
package to Tokyo", we give it a score of
|
|
|
00:07:18.960 --> 00:07:24.360 |
|
two out of 10 or something like |
|
|
|
00:07:21.759 --> 00:07:28.000 |
|
this |
|
|
|
00:07:24.360 --> 00:07:30.840 |
|
So the question here is like what
|
|
|
00:07:28.000 --> 00:07:32.400 |
|
does like let's say I gave a score of |
|
|
|
00:07:30.840 --> 00:07:34.520 |
|
two out of 10 for please send this |
|
|
|
00:07:32.400 --> 00:07:37.680 |
|
package to Tokyo what score should I |
|
|
|
00:07:34.520 --> 00:07:40.240 |
|
give for please send a package to Tokyo |
|
|
|
00:07:37.680 --> 00:07:42.360 |
|
anyone have any ideas? The correct
|
|
|
00:07:40.240 --> 00:07:46.520 |
|
answer is, uh, eight
|
|
|
00:07:42.360 --> 00:07:48.000 |
|
out of 10, yeah. But you
|
|
|
00:07:46.520 --> 00:07:50.440 |
|
might disagree on that right it's kind |
|
|
|
00:07:48.000 --> 00:07:52.159 |
|
of like subjective um one of the |
|
|
|
00:07:50.440 --> 00:07:54.039 |
|
difficulties of direct assessment is |
|
|
|
00:07:52.159 --> 00:07:55.520 |
|
giving a number like this is pretty |
|
|
|
00:07:54.039 --> 00:07:57.800 |
|
difficult if you don't have a very clear |
|
|
|
00:07:55.520 --> 00:07:59.720 |
|
rubric and very skilled annotators and |
|
|
|
00:07:57.800 --> 00:08:02.879 |
|
it's hard to get consistency between |
|
|
|
00:07:59.720 --> 00:08:04.400 |
|
people when you do this so the advantage |
|
|
|
00:08:02.879 --> 00:08:05.599 |
|
is it kind of gives you an idea of how |
|
|
|
00:08:04.400 --> 00:08:07.520 |
|
good things are overall but the |
|
|
|
00:08:05.599 --> 00:08:09.280 |
|
disadvantage is it's more difficult to |
|
|
|
00:08:07.520 --> 00:08:11.319 |
|
annotate and get |
|
|
|
00:08:09.280 --> 00:08:13.159 |
|
consistency um another thing that I |
|
|
|
00:08:11.319 --> 00:08:15.319 |
|
should point out is often scores are |
|
|
|
00:08:13.159 --> 00:08:18.680 |
|
assigned separately based on desirable |
|
|
|
00:08:15.319 --> 00:08:20.960 |
|
traits so um we don't necessarily just |
|
|
|
00:08:18.680 --> 00:08:23.479 |
|
say how good is it we say how fluent is |
|
|
|
00:08:20.960 --> 00:08:26.120 |
|
it like is it fluent uh |
|
|
|
00:08:23.479 --> 00:08:28.159 |
|
English? In translation, there's a concept
|
|
|
00:08:26.120 --> 00:08:30.720 |
|
called adequacy which is how well does |
|
|
|
00:08:28.159 --> 00:08:34.599 |
|
the output reflect the input |
|
|
|
00:08:30.720 --> 00:08:36.519 |
|
semantics um and if you're assessing |
|
|
|
00:08:34.599 --> 00:08:38.440 |
|
translation systems actually it's common |
|
|
|
00:08:36.519 --> 00:08:40.519 |
|
to assess fluency without even looking |
|
|
|
00:08:38.440 --> 00:08:43.200 |
|
at the input because then you can just |
|
|
|
00:08:40.519 --> 00:08:44.880 |
|
say how fluent is it but for adequacy |
|
|
|
00:08:43.200 --> 00:08:46.320 |
|
you definitely need to understand the |
|
|
|
00:08:44.880 --> 00:08:49.600 |
|
input so you need to be a bilingual |
|
|
|
00:08:46.320 --> 00:08:54.680 |
|
speaker to be able to assess |
|
|
|
00:08:49.600 --> 00:08:57.560 |
|
that um factuality um and so factuality |
|
|
|
00:08:54.680 --> 00:09:00.160 |
|
is tricky um it can either be factuality |
|
|
|
00:08:57.560 --> 00:09:03.880 |
|
grounded in a particular input text in |
|
|
|
00:09:00.160 --> 00:09:05.600 |
|
which case um the facts would have to be |
|
|
|
00:09:03.880 --> 00:09:07.680 |
|
you know things that were said in the |
|
|
|
00:09:05.600 --> 00:09:09.399 |
|
input or it can be just kind of is the |
|
|
|
00:09:07.680 --> 00:09:11.120 |
|
statement factual in general in which |
|
|
|
00:09:09.399 --> 00:09:13.720 |
|
case you need to go online you need to |
|
|
|
00:09:11.120 --> 00:09:16.480 |
|
search for things and like uh check |
|
|
|
00:09:13.720 --> 00:09:18.480 |
|
whether the statement is factual or not |
|
|
|
00:09:16.480 --> 00:09:20.480 |
|
um other things are like coherence does |
|
|
|
00:09:18.480 --> 00:09:21.480 |
|
the output fit coherently within the |
|
|
|
00:09:20.480 --> 00:09:23.680 |
|
larger |
|
|
|
00:09:21.480 --> 00:09:25.680 |
|
discourse. Um, and there's many many other
|
|
|
00:09:23.680 --> 00:09:28.120 |
|
ones of these this is also task |
|
|
|
00:09:25.680 --> 00:09:29.760 |
|
dependent so like the things you will |
|
|
|
00:09:28.120 --> 00:09:31.000 |
|
evaluate for machine translation are
|
|
|
00:09:29.760 --> 00:09:32.880 |
|
different than the ones you would do for |
|
|
|
00:09:31.000 --> 00:09:35.760 |
|
dialog which are different than the ones |
|
|
|
00:09:32.880 --> 00:09:38.200 |
|
you would do for a general purpose |
|
|
|
00:09:35.760 --> 00:09:41.279 |
|
chatbot, uh, which are different from the kinds of things
|
|
|
00:09:38.200 --> 00:09:44.120 |
|
you would do for um summarization for |
|
|
|
00:09:41.279 --> 00:09:46.320 |
|
example so if you're interested in doing |
|
|
|
00:09:44.120 --> 00:09:47.519 |
|
something like this uh then I definitely |
|
|
|
00:09:46.320 --> 00:09:48.800 |
|
encourage you to look at what other |
|
|
|
00:09:47.519 --> 00:09:51.399 |
|
people have done for the tasks you're |
|
|
|
00:09:48.800 --> 00:09:53.079 |
|
interested in uh previously and uh find |
|
|
|
00:09:51.399 --> 00:09:54.880 |
|
out the different types of traits that |
|
|
|
00:09:53.079 --> 00:09:58.320 |
|
they used.
|
|
|
00:09:54.880 --> 00:10:00.760 |
|
Uh, any questions about this
|
|
|
00:09:58.320 --> 00:10:03.079 |
|
also?
|
|
|
00:10:00.760 --> 00:10:06.920 |
|
okay the next type of feedback is |
|
|
|
00:10:03.079 --> 00:10:09.839 |
|
preference ratings um and so this is uh |
|
|
|
00:10:06.920 --> 00:10:12.600 |
|
basically what you do is you have two or |
|
|
|
00:10:09.839 --> 00:10:14.240 |
|
more outputs from different models or |
|
|
|
00:10:12.600 --> 00:10:16.440 |
|
different generations from an individual
|
|
|
00:10:14.240 --> 00:10:18.839 |
|
model and you ask a human which one is |
|
|
|
00:10:16.440 --> 00:10:22.320 |
|
better like is one better than the other |
|
|
|
00:10:18.839 --> 00:10:23.839 |
|
or are they tied and so in this case um |
|
|
|
00:10:22.320 --> 00:10:26.320 |
|
you might have please send this package |
|
|
|
00:10:23.839 --> 00:10:28.880 |
|
to Tokyo please send a package to |
|
|
|
00:10:26.320 --> 00:10:31.040 |
|
Tokyo we might disagree on how like good |
|
|
|
00:10:28.880 --> 00:10:33.959 |
|
or bad each of them are but I think most |
|
|
|
00:10:31.040 --> 00:10:35.959 |
|
people would agree that this one is like |
|
|
|
00:10:33.959 --> 00:10:37.480 |
|
despite the fact that it got this wrong |
|
|
|
00:10:35.959 --> 00:10:40.160 |
|
the second one is better than the first |
|
|
|
00:10:37.480 --> 00:10:42.240 |
|
one so this is a little bit of an easier |
|
|
|
00:10:40.160 --> 00:10:45.040 |
|
task it's easier to uh get people to |
|
|
|
00:10:42.240 --> 00:10:46.839 |
|
annotate these things |
|
|
|
00:10:45.040 --> 00:10:50.519 |
|
consistently however it has the |
|
|
|
00:10:46.839 --> 00:10:52.839 |
|
disadvantage that you can't really tell |
|
|
|
00:10:50.519 --> 00:10:55.360 |
|
uh whether systems are really good or |
|
|
|
00:10:52.839 --> 00:10:57.200 |
|
really bad so let's say you have a bunch |
|
|
|
00:10:55.360 --> 00:11:00.279 |
|
of really bad systems that you're |
|
|
|
00:10:57.200 --> 00:11:01.839 |
|
comparing with each other um you might |
|
|
|
00:11:00.279 --> 00:11:03.680 |
|
find that one is better than the other |
|
|
|
00:11:01.839 --> 00:11:06.000 |
|
but that still doesn't mean it's ready |
|
|
|
00:11:03.680 --> 00:11:07.399 |
|
to be deployed or if you have a bunch of |
|
|
|
00:11:06.000 --> 00:11:11.040 |
|
really good systems they're all |
|
|
|
00:11:07.399 --> 00:11:13.000 |
|
basically you know very very similar to |
|
|
|
00:11:11.040 --> 00:11:14.399 |
|
one another, but one is like slightly more
|
|
|
00:11:13.000 --> 00:11:18.639 |
|
fluent than the other you might still |
|
|
|
00:11:14.399 --> 00:11:20.680 |
|
get a similar result um and so that also |
|
|
|
00:11:18.639 --> 00:11:22.760 |
|
makes it uh you know a little bit |
|
|
|
00:11:20.680 --> 00:11:24.880 |
|
difficult to use practically in some |
|
|
|
00:11:22.760 --> 00:11:27.040 |
|
ways I didn't put it on the slide but |
|
|
|
00:11:24.880 --> 00:11:30.680 |
|
there's another way you can kind of get |
|
|
|
00:11:27.040 --> 00:11:33.920 |
|
the best of both worlds um which is a |
|
|
|
00:11:30.680 --> 00:11:35.560 |
|
side by side assessment and side by-side |
|
|
|
00:11:33.920 --> 00:11:38.440 |
|
assessment basically what you would do |
|
|
|
00:11:35.560 --> 00:11:40.560 |
|
is you would say um please send this |
|
|
|
00:11:38.440 --> 00:11:43.399 |
|
package to Tokyo please send a package |
|
|
|
00:11:40.560 --> 00:11:47.279 |
|
to Pittsburgh give each of them a direct |
|
|
|
00:11:43.399 --> 00:11:48.839 |
|
score um but you can use decimal places |
|
|
|
00:11:47.279 --> 00:11:51.120 |
|
and you can't use the same score for all |
|
|
|
00:11:48.839 --> 00:11:55.920 |
|
of them and so it's |
|
|
|
00:11:51.120 --> 00:11:57.480 |
|
like 5.00 and 4.99 out of five, or
|
|
|
00:11:55.920 --> 00:11:59.519 |
|
something like that, like you rank one
|
|
|
00:11:57.480 --> 00:12:02.639 |
|
slightly better than the other, or
|
|
|
00:11:59.519 --> 00:12:04.480 |
|
something like that um so there are ways |
|
|
|
00:12:02.639 --> 00:12:07.240 |
|
to kind of get Best of Both Worlds if |
|
|
|
00:12:04.480 --> 00:12:11.720 |
|
you're interested in doing |
|
|
|
00:12:07.240 --> 00:12:11.720 |
|
that um |
|
|
|
00:12:14.920 --> 00:12:20.519 |
|
So one other problem with
|
|
|
00:12:18.279 --> 00:12:22.519 |
|
preference rankings is that there's a |
|
|
|
00:12:20.519 --> 00:12:24.440 |
|
limited number of things that humans can |
|
|
|
00:12:22.519 --> 00:12:28.160 |
|
compare before they get really |
|
|
|
00:12:24.440 --> 00:12:32.360 |
|
overwhelmed so if you say I |
|
|
|
00:12:28.160 --> 00:12:35.560 |
|
want to
|
|
|
00:12:32.360 --> 00:12:36.920 |
|
rate 15 systems or 20 systems with |
|
|
|
00:12:35.560 --> 00:12:39.120 |
|
respect to how good they are with |
|
|
|
00:12:36.920 --> 00:12:40.639 |
|
respect to each other it's going to be |
|
|
|
00:12:39.120 --> 00:12:43.680 |
|
impossible for humans to come up with a |
|
|
|
00:12:40.639 --> 00:12:46.959 |
|
good preference ranking between them and |
|
|
|
00:12:43.680 --> 00:12:49.480 |
|
so the typical way around this um which |
|
|
|
00:12:46.959 --> 00:12:52.360 |
|
is also used in uh things like the |
|
|
|
00:12:49.480 --> 00:12:55.440 |
|
Chatbot Arena by LMSYS and other things
|
|
|
00:12:52.360 --> 00:12:58.720 |
|
like this is to use uh something like an |
|
|
|
00:12:55.440 --> 00:13:00.959 |
|
Elo or TrueSkill rating, and what these
|
|
|
00:12:58.720 --> 00:13:03.079 |
|
are is these are things that were |
|
|
|
00:13:00.959 --> 00:13:05.760 |
|
created for the ranking of like chess |
|
|
|
00:13:03.079 --> 00:13:09.160 |
|
players or video game players or other |
|
|
|
00:13:05.760 --> 00:13:11.720 |
|
things where they, like, battle against
|
|
|
00:13:09.160 --> 00:13:13.920 |
|
each other in multiple matches uh |
|
|
|
00:13:11.720 --> 00:13:16.440 |
|
pair-wise and then you put all of the |
|
|
|
00:13:13.920 --> 00:13:18.399 |
|
wins and losses into these ranking |
|
|
|
00:13:16.440 --> 00:13:20.600 |
|
algorithms and they give you a score |
|
|
|
00:13:18.399 --> 00:13:22.920 |
|
about how good like each of the each of |
|
|
|
00:13:20.600 --> 00:13:27.079 |
|
the players are so if you do something |
|
|
|
00:13:22.920 --> 00:13:29.480 |
|
like this you can um get basically a |
|
|
|
00:13:27.079 --> 00:13:32.120 |
|
ranking of systems despite the that you |
|
|
|
00:13:29.480 --> 00:13:35.240 |
|
only did pairwise assessments so these |
|
|
|
00:13:32.120 --> 00:13:35.240 |
|
are also a good thing to know |
|
|
|
00:13:37.399 --> 00:13:43.839 |
|
A final variety of human feedback
|
|
|
00:13:40.600 --> 00:13:45.320 |
|
uh that we create is, uh, error annotation,
|
|
|
00:13:43.839 --> 00:13:47.519 |
|
and this can be useful for a number of |
|
|
|
00:13:45.320 --> 00:13:49.839 |
|
reasons um but basically the way it |
|
|
|
00:13:47.519 --> 00:13:53.839 |
|
works is you annotate individual errors |
|
|
|
00:13:49.839 --> 00:13:55.639 |
|
within the outputs and um oh one thing I |
|
|
|
00:13:53.839 --> 00:13:58.120 |
|
should mention is that um I'm giving a |
|
|
|
00:13:55.639 --> 00:14:00.880 |
|
lot of examples from machine translation |
|
|
|
00:13:58.120 --> 00:14:02.800 |
|
um I feel like machine translation has |
|
|
|
00:14:00.880 --> 00:14:04.519 |
|
been doing evaluation of generated |
|
|
|
00:14:02.800 --> 00:14:07.600 |
|
outputs for a lot longer than a lot of |
|
|
|
00:14:04.519 --> 00:14:09.000 |
|
other uh fields of NLP have and |
|
|
|
00:14:07.600 --> 00:14:11.800 |
|
therefore their methodology is more |
|
|
|
00:14:09.000 --> 00:14:13.480 |
|
developed than a lot of other fields um |
|
|
|
00:14:11.800 --> 00:14:16.199 |
|
but a lot of these things can also be |
|
|
|
00:14:13.480 --> 00:14:18.079 |
|
applied to uh other uh other tasks as |
|
|
|
00:14:16.199 --> 00:14:19.079 |
|
well but anyway getting back to this |
|
|
|
00:14:18.079 --> 00:14:20.680 |
|
there's something for machine |
|
|
|
00:14:19.079 --> 00:14:23.639 |
|
translation called multi-dimensional |
|
|
|
00:14:20.680 --> 00:14:26.240 |
|
quality metrics (MQM), and
|
|
|
00:14:23.639 --> 00:14:29.160 |
|
basically what they do
|
|
|
00:14:26.240 --> 00:14:32.199 |
|
is they annotate spans in the output |
|
|
|
00:14:29.160 --> 00:14:34.800 |
|
where each span in the output is given a
|
|
|
00:14:32.199 --> 00:14:38.079 |
|
severity ranking of the error and it's |
|
|
|
00:14:34.800 --> 00:14:40.199 |
|
given a type of the error and there's |
|
|
|
00:14:38.079 --> 00:14:42.600 |
|
about eight different types of errors,
|
|
|
00:14:40.199 --> 00:14:44.839 |
|
like, this one
|
|
|
00:14:42.600 --> 00:14:47.399 |
|
violates linguistic conventions by
|
|
|
00:14:44.839 --> 00:14:49.880 |
|
using the word "a"
|
|
|
00:14:47.399 --> 00:14:51.639 |
|
instead of "this" here,
|
|
|
00:14:49.880 --> 00:14:55.079 |
|
and then this is an accuracy error |
|
|
|
00:14:51.639 --> 00:14:57.839 |
|
because it's not accurately
|
|
|
00:14:55.079 --> 00:15:01.720 |
|
conveying the input, and then this error
|
|
|
00:14:57.839 --> 00:15:04.600 |
|
is minor, uh, this error is major, um, and
|
|
|
00:15:01.720 --> 00:15:06.399 |
|
then there's also like severe
|
|
|
00:15:04.600 --> 00:15:07.440 |
|
versus major, but minor versus major is the
|
|
|
00:15:06.399 --> 00:15:09.680 |
|
more important |
|
|
|
00:15:07.440 --> 00:15:11.839 |
|
distinction um so the advantage of this |
|
|
|
00:15:09.680 --> 00:15:14.279 |
|
is a couple-fold. Number one, it gives you
|
|
|
00:15:11.839 --> 00:15:16.440 |
|
more fine grained feedback uh in that |
|
|
|
00:15:14.279 --> 00:15:19.199 |
|
you can say okay this system has a lot |
|
|
|
00:15:16.440 --> 00:15:22.199 |
|
of uh accuracy errors this system has a |
|
|
|
00:15:19.199 --> 00:15:24.880 |
|
lot of linguistic conventions errors um |
|
|
|
00:15:22.199 --> 00:15:28.600 |
|
it also can be more consistent because |
|
|
|
00:15:24.880 --> 00:15:29.839 |
|
if you just say to people which output |
|
|
|
00:15:28.600 --> 00:15:31.800 |
|
is better |
|
|
|
00:15:29.839 --> 00:15:34.560 |
|
or what is the score of this output |
|
|
|
00:15:31.800 --> 00:15:36.360 |
|
people have trouble deciding about that |
|
|
|
00:15:34.560 --> 00:15:39.560 |
|
because it's a more subjective |
|
|
|
00:15:36.360 --> 00:15:41.680 |
|
evaluation but if I say is this word |
|
|
|
00:15:39.560 --> 00:15:43.000 |
|
correct it's a little bit easier for |
|
|
|
00:15:41.680 --> 00:15:44.759 |
|
people to do so you can get more |
|
|
|
00:15:43.000 --> 00:15:46.920 |
|
consistent annotations here.
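To make the annotation format above concrete, here is a sketch of MQM-style span annotations and the score derived from them (the severity weights, minor = 1 and major = 5, follow one common convention, and the spans are made up):

```python
# Hypothetical annotations for one output: character span, error type, severity.
annotations = [
    {"span": (12, 13), "type": "linguistic-conventions", "severity": "minor"},
    {"span": (28, 33), "type": "accuracy", "severity": "major"},
]

SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0}

def mqm_penalty(annotations):
    """Sum of severity-weighted error penalties; lower is better."""
    return sum(SEVERITY_WEIGHTS[a["severity"]] for a in annotations)

print(mqm_penalty(annotations))  # 6.0
```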
|
|
|
00:15:44.759 --> 00:15:49.720 |
|
The problem with this is it can
|
|
|
00:15:46.920 --> 00:15:50.839 |
|
be very time consuming so um you know |
|
|
|
00:15:49.720 --> 00:15:52.480 |
|
obviously you need to go through and |
|
|
|
00:15:50.839 --> 00:15:56.440 |
|
annotate every single error, and if you have
|
|
|
00:15:52.480 --> 00:15:56.440 |
|
long outputs or something, that's a
|
|
|
00:15:56.959 --> 00:16:03.519 |
|
problem. So anyway, these are just three
|
|
|
00:15:59.800 --> 00:16:05.680 |
|
uh ways of collecting human feedback um |
|
|
|
00:16:03.519 --> 00:16:08.639 |
|
and then there's an alternative which is |
|
|
|
00:16:05.680 --> 00:16:10.079 |
|
automatic evaluation of outputs and um |
|
|
|
00:16:08.639 --> 00:16:14.399 |
|
there's a bunch of different ways we can |
|
|
|
00:16:10.079 --> 00:16:16.800 |
|
do this the basic idea here is we have a |
|
|
|
00:16:14.399 --> 00:16:20.199 |
|
source um we have a couple |
|
|
|
00:16:16.800 --> 00:16:22.800 |
|
hypotheses and uh we have an automatic |
|
|
|
00:16:20.199 --> 00:16:26.000 |
|
system that generates outputs uh like |
|
|
|
00:16:22.800 --> 00:16:28.279 |
|
scores and we optionally have a |
|
|
|
00:16:26.000 --> 00:16:30.839 |
|
reference output so the reference output |
|
|
|
00:16:28.279 --> 00:16:33.519 |
|
is a human created gold standard output |
|
|
|
00:16:30.839 --> 00:16:35.120 |
|
um, with
|
|
|
00:16:33.519 --> 00:16:38.240 |
|
respect to like what the output should |
|
|
|
00:16:35.120 --> 00:16:38.240 |
|
be in an ideal |
|
|
|
00:16:38.279 --> 00:16:47.079 |
|
case and basically the goal of automatic |
|
|
|
00:16:43.199 --> 00:16:50.199 |
|
evaluation is to |
|
|
|
00:16:47.079 --> 00:16:52.839 |
|
predict human preferences or to predict |
|
|
|
00:16:50.199 --> 00:16:56.240 |
|
what the human scores would be um |
|
|
|
00:16:52.839 --> 00:16:58.600 |
|
because still at this point um we mostly |
|
|
|
00:16:56.240 --> 00:16:59.480 |
|
view what humans think of the output to |
|
|
|
00:16:58.600 --> 00:17:01.680 |
|
be |
|
|
|
00:16:59.480 --> 00:17:03.280 |
|
uh kind of the |
|
|
|
00:17:01.680 --> 00:17:06.199 |
|
standard |
|
|
|
00:17:03.280 --> 00:17:08.439 |
|
and this is called a variety of things |
|
|
|
00:17:06.199 --> 00:17:10.600 |
|
depending on what field you're in um in |
|
|
|
00:17:08.439 --> 00:17:12.559 |
|
machine translation and summarization |
|
|
|
00:17:10.600 --> 00:17:13.520 |
|
it's called automatic evaluation also a |
|
|
|
00:17:12.559 --> 00:17:16.520 |
|
lot in |
|
|
|
00:17:13.520 --> 00:17:18.400 |
|
dialogue um if you're talking about |
|
|
|
00:17:16.520 --> 00:17:21.000 |
|
people from reinforcement learning or |
|
|
|
00:17:18.400 --> 00:17:24.600 |
|
other things um or chat Bots or things |
|
|
|
00:17:21.000 --> 00:17:28.240 |
|
like that uh a lot of people or uh like |
|
|
|
00:17:24.600 --> 00:17:31.280 |
|
AGI or whatever um a lot of people call |
|
|
|
00:17:28.240 --> 00:17:32.520 |
|
it, uh, a reward model, um, because that
|
|
|
00:17:31.280 --> 00:17:34.480 |
|
specifically comes from the point of |
|
|
|
00:17:32.520 --> 00:17:36.440 |
|
view of like learning from this feedback |
|
|
|
00:17:34.480 --> 00:17:37.960 |
|
but essentially they're the same thing |
|
|
|
00:17:36.440 --> 00:17:41.080 |
|
uh from my point of view they're trying |
|
|
|
00:17:37.960 --> 00:17:42.520 |
|
to predict how good an output is and how |
|
|
|
00:17:41.080 --> 00:17:44.240 |
|
much you should reward the model for |
|
|
|
00:17:42.520 --> 00:17:46.559 |
|
producing that |
|
|
|
00:17:44.240 --> 00:17:48.679 |
|
output |
|
|
|
00:17:46.559 --> 00:17:50.520 |
|
um so there's a bunch of different |
|
|
|
00:17:48.679 --> 00:17:51.720 |
|
methods to do this I'm not going to |
|
|
|
00:17:50.520 --> 00:17:53.799 |
|
cover all of them I'm just going to |
|
|
|
00:17:51.720 --> 00:17:55.240 |
|
cover three paradigms for doing this so |
|
|
|
00:17:53.799 --> 00:17:57.880 |
|
you know where to look further if you're |
|
|
|
00:17:55.240 --> 00:18:00.039 |
|
interested in doing these things um the |
|
|
|
00:17:57.880 --> 00:18:02.400 |
|
first one is embedding based |
|
|
|
00:18:00.039 --> 00:18:04.679 |
|
evaluation and the way embedding based |
|
|
|
00:18:02.400 --> 00:18:06.600 |
|
evaluation works is usually it's |
|
|
|
00:18:04.679 --> 00:18:11.400 |
|
unsupervised calculation based on |
|
|
|
00:18:06.600 --> 00:18:14.880 |
|
embedding similarity between, um,
|
|
|
00:18:11.400 --> 00:18:18.080 |
|
the output that the model generated and |
|
|
|
00:18:14.880 --> 00:18:20.840 |
|
a reference output that uh you have |
|
|
|
00:18:18.080 --> 00:18:23.400 |
|
created so sorry this is very small but |
|
|
|
00:18:20.840 --> 00:18:25.559 |
|
we have a reference here that says the |
|
|
|
00:18:23.400 --> 00:18:27.640 |
|
weather is cold today and we have a |
|
|
|
00:18:25.559 --> 00:18:30.240 |
|
candidate that says it is freezing today |
|
|
|
00:18:27.640 --> 00:18:33.000 |
|
so this is probably you know like a good |
|
|
|
00:18:30.240 --> 00:18:35.480 |
|
um a reasonably good |
|
|
|
00:18:33.000 --> 00:18:37.640 |
|
output and we run this through some |
|
|
|
00:18:35.480 --> 00:18:39.120 |
|
embedding model. Uh, this metric is called BERT
|
|
|
00:18:37.640 --> 00:18:40.679 |
|
Score, and so of course you can run it
|
|
|
00:18:39.120 --> 00:18:42.240 |
|
through BERT, but basically it can be any
|
|
|
00:18:40.679 --> 00:18:43.799 |
|
embedding model that gives you an embedding
|
|
|
00:18:42.240 --> 00:18:46.200 |
|
for each token in the |
|
|
|
00:18:43.799 --> 00:18:47.640 |
|
sequence and so there are five tokens in |
|
|
|
00:18:46.200 --> 00:18:49.720 |
|
this sequence four tokens in this |
|
|
|
00:18:47.640 --> 00:18:51.960 |
|
sequence you get five tokens and then |
|
|
|
00:18:49.720 --> 00:18:54.799 |
|
four — sorry, five embeddings and then four
|
|
|
00:18:51.960 --> 00:18:57.400 |
|
embeddings. You calculate pairwise cosine
|
|
|
00:18:54.799 --> 00:18:59.880 |
|
similarity between all of them and this |
|
|
|
00:18:57.400 --> 00:19:03.480 |
|
gives you cosine |
|
|
|
00:18:59.880 --> 00:19:06.480 |
|
similarity matrix, and then you take the
|
|
|
00:19:03.480 --> 00:19:09.120 |
|
argmax, or you take the maximum
|
|
|
00:19:06.480 --> 00:19:11.280 |
|
similarity along either the |
|
|
|
00:19:09.120 --> 00:19:15.799 |
|
rows or the |
|
|
|
00:19:11.280 --> 00:19:19.559 |
|
columns and here the rows correspond |
|
|
|
00:19:15.799 --> 00:19:22.400 |
|
to tokens in the reference and because |
|
|
|
00:19:19.559 --> 00:19:24.039 |
|
the rows correspond to tokens in the |
|
|
|
00:19:22.400 --> 00:19:26.960 |
|
reference |
|
|
|
00:19:24.039 --> 00:19:28.320 |
|
how well you find something that is
|
|
|
00:19:26.960 --> 00:19:31.679 |
|
similar to each of the tokens in the |
|
|
|
00:19:28.320 --> 00:19:34.000 |
|
reference is like a recall based method |
|
|
|
00:19:31.679 --> 00:19:35.919 |
|
because it's saying how many tokens in |
|
|
|
00:19:34.000 --> 00:19:39.520 |
|
the reference have a good match in the |
|
|
|
00:19:35.919 --> 00:19:41.120 |
|
output and then if you look at the |
|
|
|
00:19:39.520 --> 00:19:42.799 |
|
columns if you look at the max and the |
|
|
|
00:19:41.120 --> 00:19:44.960 |
|
columns this is like a precision based |
|
|
|
00:19:42.799 --> 00:19:47.000 |
|
metric because it's saying how many of |
|
|
|
00:19:44.960 --> 00:19:49.360 |
|
the things in the output are similar |
|
|
|
00:19:47.000 --> 00:19:51.240 |
|
have a similar match in the reference so |
|
|
|
00:19:49.360 --> 00:19:54.480 |
|
basically you can calculate recall and |
|
|
|
00:19:51.240 --> 00:19:56.200 |
|
precision over all of the tokens and |
|
|
|
00:19:54.480 --> 00:20:00.200 |
|
then feed this into something that looks |
|
|
|
00:19:56.200 --> 00:20:02.400 |
|
like F-measure.
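A minimal sketch of that greedy-matching computation (random tensors stand in for the token embeddings a model like BERT would produce):

```python
import torch
import torch.nn.functional as F

def bertscore_f1(ref_emb, cand_emb):
    """ref_emb: (ref_tokens, dim); cand_emb: (cand_tokens, dim)."""
    # Pairwise cosine similarity matrix: rows = reference, columns = candidate.
    sim = F.normalize(ref_emb, dim=-1) @ F.normalize(cand_emb, dim=-1).T
    recall = sim.max(dim=1).values.mean()     # best match per reference token (rows)
    precision = sim.max(dim=0).values.mean()  # best match per candidate token (columns)
    return 2 * precision * recall / (precision + recall)

# 5 reference tokens, 4 candidate tokens, 768-dim embeddings:
score = bertscore_f1(torch.randn(5, 768), torch.randn(4, 768))
```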
|
|
|
00:20:00.200 --> 00:20:06.000 |
|
You can also use TF-IDF weighting, um, like what I talked about in
|
|
|
00:20:02.400 --> 00:20:07.799 |
|
the RAG lecture, uh, to upweight low
|
|
|
00:20:06.000 --> 00:20:09.520 |
|
frequency words because low frequency |
|
|
|
00:20:07.799 --> 00:20:11.440 |
|
words tend to be more content words and |
|
|
|
00:20:09.520 --> 00:20:13.120 |
|
going back to my example you know if you |
|
|
|
00:20:11.440 --> 00:20:14.280 |
|
make a mistake from Pittsburgh to Tokyo |
|
|
|
00:20:13.120 --> 00:20:17.880 |
|
that's going to be more painful than |
|
|
|
00:20:14.280 --> 00:20:21.000 |
|
making a mistake from "this" to "a". Um, so
|
|
|
00:20:17.880 --> 00:20:22.520 |
|
actually if you'll uh if you were paying |
|
|
|
00:20:21.000 --> 00:20:25.480 |
|
close attention to the RAG lecture, this
|
|
|
00:20:22.520 --> 00:20:27.360 |
|
looks really similar to the ColBERT, um,
|
|
|
00:20:25.480 --> 00:20:29.559 |
|
the ColBERT retrieval objective that I
|
|
|
00:20:27.360 --> 00:20:30.960 |
|
talked about in the RAG lecture. Um, I don't
|
|
|
00:20:29.559 --> 00:20:32.840 |
|
think it's a coincidence they both came |
|
|
|
00:20:30.960 --> 00:20:34.360 |
|
out around the same time uh so people |
|
|
|
00:20:32.840 --> 00:20:36.360 |
|
were thinking about the same thing but |
|
|
|
00:20:34.360 --> 00:20:37.600 |
|
um this is one method that's pretty |
|
|
|
00:20:36.360 --> 00:20:40.200 |
|
widely |
|
|
|
00:20:37.600 --> 00:20:43.480 |
|
used. The BERTScore code base is also
|
|
|
00:20:40.200 --> 00:20:45.440 |
|
really nice and easy to use so um if uh |
|
|
|
00:20:43.480 --> 00:20:47.640 |
|
you want to try it out feel free to take |
|
|
|
00:20:45.440 --> 00:20:47.640 |
|
a look.
|
|
|
00:20:48.159 --> 00:20:53.840 |
|
Cool. Um, the next one I'd like to
|
|
|
00:20:51.600 --> 00:20:56.080 |
|
talk about is a regression based |
|
|
|
00:20:53.840 --> 00:20:58.760 |
|
evaluation and the way this works is |
|
|
|
00:20:56.080 --> 00:21:02.600 |
|
this is usually used in a supervised uh |
|
|
|
00:20:58.760 --> 00:21:04.320 |
|
setting. So, uh, what you have to
|
|
|
00:21:02.600 --> 00:21:07.600 |
|
do is you have to collect a whole
|
|
|
00:21:04.320 --> 00:21:09.799 |
|
bunch of like actual human |
|
|
|
00:21:07.600 --> 00:21:12.440 |
|
judgments and |
|
|
|
00:21:09.799 --> 00:21:15.000 |
|
usually these judgments can either be |
|
|
|
00:21:12.440 --> 00:21:16.960 |
|
direct assessment uh where you actually |
|
|
|
00:21:15.000 --> 00:21:19.120 |
|
have a score or they can be pairwise |
|
|
|
00:21:16.960 --> 00:21:20.840 |
|
judgments and then if you have direct |
|
|
|
00:21:19.120 --> 00:21:23.640 |
|
assessment you use a regression based |
|
|
|
00:21:20.840 --> 00:21:26.039 |
|
loss like, uh, mean squared error. If
|
|
|
00:21:23.640 --> 00:21:27.520 |
|
you have pairwise uh you use a ranking |
|
|
|
00:21:26.039 --> 00:21:29.039 |
|
based loss that tries to upweight the |
|
|
|
00:21:27.520 --> 00:21:31.360 |
|
ones that are higher scoring and downweight
|
|
|
00:21:29.039 --> 00:21:33.200 |
|
the ones that are lower scoring.
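A minimal sketch of those two loss options, where the scalar scores would come from a learned quality-estimation head (the numbers are made up):

```python
import torch
import torch.nn.functional as F

# Direct assessment: regress predicted quality onto human scores (MSE).
pred = torch.tensor([0.72, 0.31])
human = torch.tensor([0.80, 0.25])
regression_loss = F.mse_loss(pred, human)

# Pairwise judgments: a ranking loss that pushes the preferred output's
# score above the dispreferred one's by some margin.
score_preferred = torch.tensor([0.72])
score_dispreferred = torch.tensor([0.31])
ranking_loss = F.margin_ranking_loss(
    score_preferred, score_dispreferred,
    target=torch.ones(1),   # +1: first argument should be ranked higher
    margin=0.1,
)
```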
|
|
|
00:21:31.360 --> 00:21:35.720 |
|
One typical example of this is COMET, which
|
|
|
00:21:33.200 --> 00:21:37.200 |
|
is or has been at least for a very long |
|
|
|
00:21:35.720 --> 00:21:39.880 |
|
time the state of the art in machine
|
|
|
00:21:37.200 --> 00:21:41.279 |
|
translation evaluation and the reason |
|
|
|
00:21:39.880 --> 00:21:43.440 |
|
why it works so well is because we have |
|
|
|
00:21:41.279 --> 00:21:44.720 |
|
a bunch of evaluations for machine |
|
|
|
00:21:43.440 --> 00:21:46.080 |
|
translation they've been doing |
|
|
|
00:21:44.720 --> 00:21:47.600 |
|
evaluation of machine translation
|
|
|
00:21:46.080 --> 00:21:50.480 |
|
systems for years and you can use that |
|
|
|
00:21:47.600 --> 00:21:52.720 |
|
as lots of supervised training data so |
|
|
|
00:21:50.480 --> 00:21:54.640 |
|
basically you just take um these |
|
|
|
00:21:52.720 --> 00:21:56.440 |
|
evaluation data you have human |
|
|
|
00:21:54.640 --> 00:21:59.080 |
|
annotations you have the output |
|
|
|
00:21:56.440 --> 00:22:00.320 |
|
according to a model like COMET, um, you
|
|
|
00:21:59.080 --> 00:22:02.679 |
|
calculate the difference between them |
|
|
|
00:22:00.320 --> 00:22:05.640 |
|
and you update model |
|
|
|
00:22:02.679 --> 00:22:07.080 |
|
parameters. Um, this is great
|
|
|
00:22:05.640 --> 00:22:08.520 |
|
if you have lots of training data the |
|
|
|
00:22:07.080 --> 00:22:10.640 |
|
problem with this is for a lot of tasks |
|
|
|
00:22:08.520 --> 00:22:12.360 |
|
we don't have lots of training data so |
|
|
|
00:22:10.640 --> 00:22:14.720 |
|
um you know training these is a little |
|
|
|
00:22:12.360 --> 00:22:14.720 |
|
bit less |
|
|
|
00:22:15.400 --> 00:22:22.919 |
|
feasible. And now recently, uh, what we
|
|
|
00:22:19.600 --> 00:22:25.279 |
|
have been moving into is a QA-based
|
|
|
00:22:22.919 --> 00:22:27.120 |
|
evaluation which is basically where we |
|
|
|
00:22:25.279 --> 00:22:30.760 |
|
ask a language model how good the output |
|
|
|
00:22:27.120 --> 00:22:32.279 |
|
is. And so, uh, GEMBA is one of
|
|
|
00:22:30.760 --> 00:22:34.559 |
|
the early examples of this for machine |
|
|
|
00:22:32.279 --> 00:22:37.320 |
|
translation evaluation uh where they |
|
|
|
00:22:34.559 --> 00:22:39.840 |
|
basically just ask GPT-4, like, score
|
|
|
00:22:37.320 --> 00:22:41.600 |
|
the following translation from Source |
|
|
|
00:22:39.840 --> 00:22:44.000 |
|
language to target language with respect |
|
|
|
00:22:41.600 --> 00:22:47.080 |
|
to the human reference um on a |
|
|
|
00:22:44.000 --> 00:22:49.200 |
|
continuous scale from 0 to 100, uh, where
|
|
|
00:22:47.080 --> 00:22:51.320 |
|
the score of zero means no meaning |
|
|
|
00:22:49.200 --> 00:22:54.039 |
|
preserved and the score of 100 means a |
|
|
|
00:22:51.320 --> 00:22:56.880 |
|
perfect meaning and grammar. Uh, you feed
|
|
|
00:22:54.039 --> 00:22:58.760 |
|
in the source, um, you feed in the
|
|
|
00:22:56.880 --> 00:23:01.000 |
|
human reference optionally if you have a |
|
|
|
00:22:58.760 --> 00:23:03.320 |
|
human reference and then you feed in the |
|
|
|
00:23:01.000 --> 00:23:06.760 |
|
target, um, and you get a score.
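A sketch of that kind of prompt, paraphrasing the GEMBA setup (this assumes the OpenAI Python client; the wording and placeholders are illustrative, not the exact paper prompt):

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "Score the following translation from English to Japanese with respect "
    "to the human reference on a continuous scale from 0 to 100, where a "
    'score of zero means "no meaning preserved" and a score of one hundred '
    'means "perfect meaning and grammar".\n\n'
    "Source: Please send this package to Pittsburgh.\n"
    "Human reference: <gold translation, if available>\n"
    "Translation: <system output to score>\n"
    "Score:"
)

response = client.chat.completions.create(
    model="gpt-4", messages=[{"role": "user", "content": prompt}]
)
score_text = response.choices[0].message.content  # parse the number out of this
```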
|
|
|
00:23:03.320 --> 00:23:09.919 |
|
And, um, so this works pretty
|
|
|
00:23:06.760 --> 00:23:12.720 |
|
well this can give you uh better results |
|
|
|
00:23:09.919 --> 00:23:15.159 |
|
um, especially if you have a
|
|
|
00:23:12.720 --> 00:23:16.960 |
|
strong language model the problem is |
|
|
|
00:23:15.159 --> 00:23:18.279 |
|
it's very unpredictable whether this is |
|
|
|
00:23:16.960 --> 00:23:20.120 |
|
going to work well and it's very |
|
|
|
00:23:18.279 --> 00:23:23.039 |
|
dependent on the prompt that you're |
|
|
|
00:23:20.120 --> 00:23:25.279 |
|
using. So, um, right now a lot of people
|
|
|
00:23:23.039 --> 00:23:27.279 |
|
are using GPT-4 without actually
|
|
|
00:23:25.279 --> 00:23:29.039 |
|
validating whether it does a good job at |
|
|
|
00:23:27.279 --> 00:23:33.080 |
|
evaluation and |
|
|
|
00:23:29.039 --> 00:23:34.919 |
|
the results are all across the
|
|
|
00:23:33.080 --> 00:23:36.880 |
|
board it can be anywhere from very very |
|
|
|
00:23:34.919 --> 00:23:38.640 |
|
good to very very bad at evaluating |
|
|
|
00:23:36.880 --> 00:23:41.320 |
|
particular tasks so I would be at least |
|
|
|
00:23:38.640 --> 00:23:43.559 |
|
a little bit suspicious of whether GPT-4
|
|
|
00:23:41.320 --> 00:23:45.679 |
|
is doing a good job evaluating for your |
|
|
|
00:23:43.559 --> 00:23:49.320 |
|
task especially more complex |
|
|
|
00:23:45.679 --> 00:23:51.960 |
|
tasks. Um, I would especially be
|
|
|
00:23:49.320 --> 00:23:54.000 |
|
suspicious if you're doing, uh, any of
|
|
|
00:23:51.960 --> 00:23:56.760 |
|
the two following things. Number one, if
|
|
|
00:23:54.000 --> 00:23:59.880 |
|
you're comparing GPT-4 or any model
|
|
|
00:23:56.760 --> 00:24:02.400 |
|
against itself and another model, because
|
|
|
00:23:59.880 --> 00:24:05.200 |
|
GPT-4 really likes
|
|
|
00:24:02.400 --> 00:24:06.880 |
|
GPT-4 — it really likes its own outputs, and
|
|
|
00:24:05.200 --> 00:24:08.120 |
|
there are papers uh sorry I don't |
|
|
|
00:24:06.880 --> 00:24:09.679 |
|
actually have the references here but I |
|
|
|
00:24:08.120 --> 00:24:11.200 |
|
can follow up if people are interested |
|
|
|
00:24:09.679 --> 00:24:13.080 |
|
but there are papers that demonstrate |
|
|
|
00:24:11.200 --> 00:24:15.799 |
|
that GPT-4 likes, you know, its own
|
|
|
00:24:13.080 --> 00:24:19.200 |
|
outputs more than others also if you're |
|
|
|
00:24:15.799 --> 00:24:22.120 |
|
explicitly optimizing the outputs using |
|
|
|
00:24:19.200 --> 00:24:24.640 |
|
RLHF. Um, there is something called
|
|
|
00:24:22.120 --> 00:24:27.120 |
|
Goodhart's law, which is basically anytime
|
|
|
00:24:24.640 --> 00:24:29.520 |
|
you uh start optimizing towards a metric |
|
|
|
00:24:27.120 --> 00:24:32.559 |
|
it becomes a bad metric and that also |
|
|
|
00:24:29.520 --> 00:24:35.000 |
|
happens for GPT-4-based evaluations. So if
|
|
|
00:24:32.559 --> 00:24:37.200 |
|
you start optimizing for GPT-4-based
|
|
|
00:24:35.000 --> 00:24:38.960 |
|
evaluations, especially for reference-
|
|
|
00:24:37.200 --> 00:24:41.679 |
|
less metrics that don't use a reference
|
|
|
00:24:38.960 --> 00:24:44.840 |
|
output then um you start basically |
|
|
|
00:24:41.679 --> 00:24:47.440 |
|
exploiting the metric |
|
|
|
00:24:44.840 --> 00:24:49.840 |
|
um another thing that you can do with QA |
|
|
|
00:24:47.440 --> 00:24:53.279 |
|
based evaluation is ask about fine-grained
|
|
|
00:24:49.840 --> 00:24:54.919 |
|
mistakes and so this is a paper by um uh |
|
|
|
00:24:53.279 --> 00:24:56.480 |
|
Patrick Fernandez who's a student who's |
|
|
|
00:24:54.919 --> 00:25:02.080 |
|
working with me and basically what we |
|
|
|
00:24:56.480 --> 00:25:05.240 |
|
did is we asked the model to um not give |
|
|
|
00:25:02.080 --> 00:25:07.360 |
|
a particular score but actually identify |
|
|
|
00:25:05.240 --> 00:25:08.880 |
|
the mistakes in the output and when we |
|
|
|
00:25:07.360 --> 00:25:10.559 |
|
asked it to identify the mistakes in the |
|
|
|
00:25:08.880 --> 00:25:13.720 |
|
output we found that this gave more |
|
|
|
00:25:10.559 --> 00:25:17.320 |
|
consistent uh results so kind of |
|
|
|
00:25:13.720 --> 00:25:18.840 |
|
interestingly, when we ask humans to identify
|
|
|
00:25:17.320 --> 00:25:21.120 |
|
individual mistakes in the output, that
|
|
|
00:25:18.840 --> 00:25:24.240 |
|
gives humans more consistent results, and
|
|
|
00:25:21.120 --> 00:25:25.559 |
|
it's the same thing for GPT-4. So, um, that
|
|
|
00:25:24.240 --> 00:25:27.320 |
|
that's another paper you can look at if |
|
|
|
00:25:25.559 --> 00:25:29.640 |
|
you're |
|
|
|
00:25:27.320 --> 00:25:32.679 |
|
interested |
|
|
|
00:25:29.640 --> 00:25:38.000 |
|
Cool. Um, so I mentioned that you could
|
|
|
00:25:32.679 --> 00:25:38.000 |
|
or could not uh trust uh yeah sorry go |
|
|
|
00:25:44.679 --> 00:25:51.279 |
|
ahead. Uh, correct. So yeah, basically
|
|
|
00:25:47.360 --> 00:25:53.279 |
|
just what you do is you have the source |
|
|
|
00:25:51.279 --> 00:25:54.960 |
|
um ideally you'll also have a reference |
|
|
|
00:25:53.279 --> 00:25:57.840 |
|
output that was created by skilled |
|
|
|
00:25:54.960 --> 00:25:59.720 |
|
humans, and then you put in the target
|
|
|
00:25:57.840 --> 00:26:02.279 |
|
you know output basically you have the |
|
|
|
00:25:59.720 --> 00:26:08.000 |
|
input ideally a reference output created |
|
|
|
00:26:02.279 --> 00:26:08.000 |
|
by skilled humans, and, uh, the
|
|
|
00:26:15.159 --> 00:26:20.240 |
|
hypothesis. Yeah, I
|
|
|
00:26:17.919 --> 00:26:24.559 |
|
mean it's a good question and I don't |
|
|
|
00:26:20.240 --> 00:26:26.919 |
|
know if we actually have a very clear
|
|
|
00:26:24.559 --> 00:26:31.399 |
|
empirical like evidence of why this is |
|
|
|
00:26:26.919 --> 00:26:33.320 |
|
the case but my hypothesis about this is |
|
|
|
00:26:31.399 --> 00:26:36.159 |
|
yes we kind of would expect models to be |
|
|
|
00:26:33.320 --> 00:26:38.200 |
|
more biased towards their own outputs |
|
|
|
00:26:36.159 --> 00:26:40.919 |
|
and the reason why is because |
|
|
|
00:26:38.200 --> 00:26:43.080 |
|
essentially you know models |
|
|
|
00:26:40.919 --> 00:26:44.279 |
|
are, within their embeddings,
|
|
|
00:26:43.080 --> 00:26:45.760 |
|
encoding when they're in a high |
|
|
|
00:26:44.279 --> 00:26:47.600 |
|
probability part of the space and when |
|
|
|
00:26:45.760 --> 00:26:50.200 |
|
they're in a low probability part of the |
|
|
|
00:26:47.600 --> 00:26:51.120 |
|
space and like the high probability part |
|
|
|
00:26:50.200 --> 00:26:54.600 |
|
of the |
|
|
|
00:26:51.120 --> 00:26:56.200 |
|
space — the high
|
|
|
00:26:54.600 --> 00:26:58.600 |
|
probability part of the space is going |
|
|
|
00:26:56.200 --> 00:27:02.559 |
|
to be associated with good outputs |
|
|
|
00:26:58.600 --> 00:27:07.000 |
|
because like when |
|
|
|
00:27:02.559 --> 00:27:08.600 |
|
models are more sure of their outputs |
|
|
|
00:27:07.000 --> 00:27:11.960 |
|
they're more likely to be |
|
|
|
00:27:08.600 --> 00:27:13.520 |
|
good just because that indicates that |
|
|
|
00:27:11.960 --> 00:27:15.240 |
|
like they're closer to the training data |
|
|
|
00:27:13.520 --> 00:27:17.760 |
|
that it had and other things like that |
|
|
|
00:27:15.240 --> 00:27:21.600 |
|
so model probabilities are associated |
|
|
|
00:27:17.760 --> 00:27:23.760 |
|
with, uh, with good
|
|
|
00:27:21.600 --> 00:27:26.600 |
|
outputs.
|
|
|
00:27:23.760 --> 00:27:29.440 |
|
Separately from
|
|
|
00:27:26.600 --> 00:27:32.120 |
|
that I believe a model can identify when |
|
|
|
00:27:29.440 --> 00:27:33.320 |
|
it's in a high probability segment of |
|
|
|
00:27:32.120 --> 00:27:35.799 |
|
the space and when it's in a low |
|
|
|
00:27:33.320 --> 00:27:39.399 |
|
probability segment of the space and |
|
|
|
00:27:35.799 --> 00:27:39.399 |
|
because of that I expect |
|
|
|
00:27:39.519 --> 00:27:45.519 |
|
that, like, there are segments of the
|
|
|
00:27:43.240 --> 00:27:47.120 |
|
embedding space where it's more likely |
|
|
|
00:27:45.519 --> 00:27:48.360 |
|
to answer yes about something being good |
|
|
|
00:27:47.120 --> 00:27:50.960 |
|
or not and those are going to be |
|
|
|
00:27:48.360 --> 00:27:54.760 |
|
associated with high uh like high |
|
|
|
00:27:50.960 --> 00:27:56.159 |
|
probability outputs as well. And also,
|
|
|
00:27:54.760 --> 00:27:57.760 |
|
models are more likely to generate |
|
|
|
00:27:56.159 --> 00:28:00.240 |
|
outputs that are high probability |
|
|
|
00:27:57.760 --> 00:28:02.320 |
|
according to their model, by definition,
|
|
|
00:28:00.240 --> 00:28:03.880 |
|
so all three of those effects together |
|
|
|
00:28:02.320 --> 00:28:05.640 |
|
would basically go into a model being |
|
|
|
00:28:03.880 --> 00:28:09.120 |
|
biased towards its own outputs compared
|
|
|
00:28:05.640 --> 00:28:11.559 |
|
to the outputs of another model. But, um,
|
|
|
00:28:09.120 --> 00:28:13.279 |
|
yeah this is a very handwavy explanation |
|
|
|
00:28:11.559 --> 00:28:15.519 |
|
but like putting the three
|
|
|
00:28:13.279 --> 00:28:18.600 |
|
together models output high probability |
|
|
|
00:28:15.519 --> 00:28:20.880 |
|
things from their own probability space
|
|
|
00:28:18.600 --> 00:28:23.440 |
|
by definition |
|
|
|
00:28:20.880 --> 00:28:25.760 |
|
um things that are high probability are |
|
|
|
00:28:23.440 --> 00:28:27.519 |
|
associated with being good uh just |
|
|
|
00:28:25.760 --> 00:28:29.279 |
|
because otherwise a model would be |
|
|
|
00:28:27.519 --> 00:28:31.840 |
|
outputting garbage |
|
|
|
00:28:29.279 --> 00:28:33.840 |
|
and um the final thing which is more |
|
|
|
00:28:31.840 --> 00:28:35.679 |
|
tenuous is if the model is in a high |
|
|
|
00:28:33.840 --> 00:28:37.919 |
|
probability segment of the space it's |
|
|
|
00:28:35.679 --> 00:28:39.760 |
|
more likely to Output yes according to a |
|
|
|
00:28:37.919 --> 00:28:41.480 |
|
question of it being good and I I think |
|
|
|
00:28:39.760 --> 00:28:44.360 |
|
that's probably true but I'm not 100% |
|
|
|
00:28:41.480 --> 00:28:44.360 |
|
sure about the
|
|
|
00:28:45.559 --> 00:28:51.039 |
|
final one. Um, maybe someone wants to
|
|
|
00:28:49.000 --> 00:28:52.840 |
|
examine that as a final
|
|
|
00:28:51.039 --> 00:28:54.200 |
|
project. It seems like an
|
|
|
00:28:52.840 --> 00:28:57.080 |
|
interesting |
|
|
|
00:28:54.200 --> 00:29:00.039 |
|
question um cool uh were there any other |
|
|
|
00:28:57.080 --> 00:29:00.039 |
|
questions about these methods |
|
|
|
00:29:00.159 --> 00:29:07.120 |
|
here um okay so when I say like an |
|
|
|
00:29:03.960 --> 00:29:11.080 |
|
evaluation metric is good or not what do |
|
|
|
00:29:07.120 --> 00:29:13.200 |
|
I mean by this being good or not um or a |
|
|
|
00:29:11.080 --> 00:29:16.880 |
|
reward model or whatever else and |
|
|
|
00:29:13.200 --> 00:29:18.440 |
|
basically the um the way we typically do |
|
|
|
00:29:16.880 --> 00:29:19.840 |
|
this is by doing something called meta |
|
|
|
00:29:18.440 --> 00:29:22.440 |
|
evaluation so it's called meta |
|
|
|
00:29:19.840 --> 00:29:25.799 |
|
evaluation because it's evaluation of |
|
|
|
00:29:22.440 --> 00:29:29.279 |
|
evaluation and uh the way we do this is |
|
|
|
00:29:25.799 --> 00:29:32.519 |
|
we have human uh scores and we have |
|
|
|
00:29:29.279 --> 00:29:34.760 |
|
automatic scores and we usually |
|
|
|
00:29:32.519 --> 00:29:38.640 |
|
calculate some sort of correlation |
|
|
|
00:29:34.760 --> 00:29:41.000 |
|
between the scores so um typical ones |
|
|
|
00:29:38.640 --> 00:29:46.440 |
|
are rank correlations like Pearson's |
|
|
|
00:29:41.000 --> 00:29:48.799 |
|
correlation or tendle uh Tow and uh so |
|
|
|
00:29:46.440 --> 00:29:51.200 |
|
the more Associated the automatic scores |
|
|
|
00:29:48.799 --> 00:29:53.960 |
|
are with the human scores the higher |
|
|
|
00:29:51.200 --> 00:29:55.159 |
|
these correlations are going to be um |
|
|
|
00:29:53.960 --> 00:29:57.559 |
|
there's other things that you can |
|
|
|
00:29:55.159 --> 00:30:00.080 |
|
calculate so if you're trying to figure |
|
|
|
00:29:57.559 --> 00:30:01.640 |
|
out whether a model um matches human |
|
|
|
00:30:00.080 --> 00:30:04.279 |
|
pairwise preferences you can just |
|
|
|
00:30:01.640 --> 00:30:06.440 |
|
calculate accuracy so I didn't put that |
|
|
|
00:30:04.279 --> 00:30:08.080 |
|
on um I didn't put that on the slide |
|
|
|
00:30:06.440 --> 00:30:10.880 |
|
here but you can just calculate accuracy |
|
|
|
00:30:08.080 --> 00:30:13.120 |
|
of pairwise preferences um you can also |
|
|
|
00:30:10.880 --> 00:30:15.360 |
|
calculate the absolute error between the |
|
|
|
00:30:13.120 --> 00:30:19.320 |
|
the judgments if you want to know uh |
|
|
|
00:30:15.360 --> 00:30:21.720 |
|
whether the absolute error matches so um |
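
As a concrete illustration, here is a minimal meta-evaluation sketch in Python (the scores are made-up toy values; in practice they come from a dataset of human judgments):

```python
# Toy meta-evaluation: correlate automatic metric scores with human scores.
from scipy.stats import pearsonr, kendalltau

human  = [4.5, 2.0, 3.5, 1.0, 5.0]       # human judgments per output
metric = [0.81, 0.40, 0.62, 0.35, 0.90]  # automatic metric scores

r, _   = pearsonr(human, metric)    # Pearson correlation
tau, _ = kendalltau(human, metric)  # Kendall's Tau rank correlation
print(f"Pearson r = {r:.3f}, Kendall tau = {tau:.3f}")

# Pairwise preference accuracy: how often the metric orders two outputs
# the same way the humans do.
pairs = [(i, j) for i in range(len(human)) for j in range(i + 1, len(human))]
acc = sum((human[i] > human[j]) == (metric[i] > metric[j])
          for i, j in pairs) / len(pairs)
print(f"pairwise accuracy = {acc:.3f}")
```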
|
|
|
00:30:19.320 --> 00:30:24.159 |
|
these are good things to do if you |
|
|
|
00:30:21.720 --> 00:30:25.600 |
|
want to use an evaluation metric but you |
|
|
|
00:30:24.159 --> 00:30:27.200 |
|
aren't sure whether it's good or not I |
|
|
|
00:30:25.600 --> 00:30:29.640 |
|
would check to see whether the authors |
|
|
|
00:30:27.200 --> 00:30:32.000 |
|
have done this sort of meta evaluation |
|
|
|
00:30:29.640 --> 00:30:33.760 |
|
if they haven't be a little bit |
|
|
|
00:30:32.000 --> 00:30:36.960 |
|
suspicious if they have be a little bit |
|
|
|
00:30:33.760 --> 00:30:39.799 |
|
less suspicious but um |
|
|
|
00:30:36.960 --> 00:30:42.960 |
|
yeah how do people do this typically uh |
|
|
|
00:30:39.799 --> 00:30:45.640 |
|
usually they use data sets like |
|
|
|
00:30:42.960 --> 00:30:49.440 |
|
the WMT |
|
|
|
00:30:45.640 --> 00:30:53.960 |
|
shared tasks um or |
|
|
|
00:30:49.440 --> 00:30:57.679 |
|
uh like SummEval um but there's also |
|
|
|
00:30:53.960 --> 00:30:59.960 |
|
other ways to create these um there's also |
|
|
|
00:30:57.679 --> 00:31:01.639 |
|
lots of other data sets but in order to do |
|
|
|
00:30:59.960 --> 00:31:05.639 |
|
this reliably you need a fairly large |
|
|
|
00:31:01.639 --> 00:31:05.639 |
|
data set so that's one thing to be aware |
|
|
|
00:31:07.080 --> 00:31:10.760 |
|
of |
|
|
|
00:31:08.720 --> 00:31:14.200 |
|
cool |
|
|
|
00:31:10.760 --> 00:31:16.360 |
|
um then the final thing um all of the |
|
|
|
00:31:14.200 --> 00:31:17.919 |
|
automatic evaluation methods that I |
|
|
|
00:31:16.360 --> 00:31:20.240 |
|
talked about now are trying to match |
|
|
|
00:31:17.919 --> 00:31:22.679 |
|
human preferences but that's not the |
|
|
|
00:31:20.240 --> 00:31:24.960 |
|
only thing that you necessarily want to |
|
|
|
00:31:22.679 --> 00:31:28.440 |
|
do the final thing that you might want |
|
|
|
00:31:24.960 --> 00:31:30.840 |
|
to do is uh use the model outputs in a |
|
|
|
00:31:28.440 --> 00:31:34.200 |
|
downstream system and see whether they |
|
|
|
00:31:30.840 --> 00:31:36.399 |
|
are effective for that so there's two |
|
|
|
00:31:34.200 --> 00:31:39.080 |
|
concepts of intrinsic evaluation and |
|
|
|
00:31:36.399 --> 00:31:41.720 |
|
extrinsic evaluation so intrinsic |
|
|
|
00:31:39.080 --> 00:31:44.159 |
|
evaluation um evaluates the quality of |
|
|
|
00:31:41.720 --> 00:31:45.720 |
|
the output itself and so that would be |
|
|
|
00:31:44.159 --> 00:31:48.639 |
|
like asking a human directly about how |
|
|
|
00:31:45.720 --> 00:31:50.720 |
|
good is this output extrinsic evaluation |
|
|
|
00:31:48.639 --> 00:31:53.679 |
|
is evaluating output quality by its |
|
|
|
00:31:50.720 --> 00:31:57.000 |
|
utility um and so just to give one |
|
|
|
00:31:53.679 --> 00:31:58.360 |
|
example um you can evaluate large |
|
|
|
00:31:57.000 --> 00:32:00.200 |
|
language model summaries |
|
|
|
00:31:58.360 --> 00:32:04.200 |
|
through question answering |
|
|
|
00:32:00.200 --> 00:32:05.880 |
|
accuracy um and so you can take the |
|
|
|
00:32:04.200 --> 00:32:07.399 |
|
output of an LLM and feed it through a |
|
|
|
00:32:05.880 --> 00:32:09.600 |
|
question answering model and see whether |
|
|
|
00:32:07.399 --> 00:32:12.399 |
|
you're able to answer questions based on |
|
|
|
00:32:09.600 --> 00:32:15.799 |
|
this and that kind of gives you a better |
|
|
|
00:32:12.399 --> 00:32:18.279 |
|
idea of whether the summary |
|
|
|
00:32:15.799 --> 00:32:20.120 |
|
incorporates the requisite information. |
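
A rough sketch of that extrinsic setup might look like the following (purely illustrative: `summarize`, `answer`, and `qa_pairs` are hypothetical stand-ins, not a real API):

```python
# Extrinsic evaluation of a summary via downstream QA accuracy (sketch).
def extrinsic_qa_score(document, qa_pairs, summarize, answer):
    summary = summarize(document)           # LLM output under evaluation
    correct = sum(
        answer(summary, question) == gold   # QA model sees only the summary
        for question, gold in qa_pairs
    )
    return correct / len(qa_pairs)          # higher = more requisite info kept
```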
|
|
|
00:32:18.279 --> 00:32:22.120 |
|
but if you think about anything an LLM can |
|
|
|
00:32:20.120 --> 00:32:23.760 |
|
be used for usually it's part of a |
|
|
|
00:32:22.120 --> 00:32:26.679 |
|
bigger system so you can evaluate it as |
|
|
|
00:32:23.760 --> 00:32:28.399 |
|
a part of that bigger system um the |
|
|
|
00:32:26.679 --> 00:32:30.639 |
|
problem with this is it's a very |
|
|
|
00:32:28.399 --> 00:32:33.960 |
|
indirect way of assessing things so like |
|
|
|
00:32:30.639 --> 00:32:36.080 |
|
let's say your QA model is just bad uh |
|
|
|
00:32:33.960 --> 00:32:38.480 |
|
how can you disentangle the effect of |
|
|
|
00:32:36.080 --> 00:32:41.679 |
|
the LLM summary versus the QA model that's |
|
|
|
00:32:38.480 --> 00:32:44.120 |
|
not a trivial thing to do so ideally |
|
|
|
00:32:41.679 --> 00:32:47.000 |
|
like a combination of these two is |
|
|
|
00:32:44.120 --> 00:32:47.000 |
|
practically the best way |
|
|
|
00:32:48.039 --> 00:32:52.200 |
|
go cool so |
|
|
|
00:32:56.039 --> 00:32:59.960 |
|
yeah yeah I wouldn't necessarily |
|
|
|
00:32:58.360 --> 00:33:05.679 |
|
say it's harder to do it might even be |
|
|
|
00:32:59.960 --> 00:33:05.679 |
|
easier to do um which is like let's |
|
|
|
00:33:06.679 --> 00:33:11.720 |
|
say let me see if I can come up |
|
|
|
00:33:09.360 --> 00:33:11.720 |
|
with |
|
|
|
00:33:12.639 --> 00:33:17.600 |
|
an example let's |
|
|
|
00:33:15.000 --> 00:33:19.670 |
|
say you |
|
|
|
00:33:17.600 --> 00:33:22.979 |
|
are trying |
|
|
|
|
|
|
00:33:24.639 --> 00:33:29.760 |
|
to |
|
|
|
00:33:30.559 --> 00:33:33.559 |
|
guess |
|
|
|
00:33:39.000 --> 00:33:45.399 |
|
whether |
|
|
|
00:33:42.399 --> 00:33:46.559 |
|
someone will be hired at a |
|
|
|
00:33:45.399 --> 00:33:52.039 |
|
company or |
|
|
|
00:33:46.559 --> 00:33:53.880 |
|
not based on an LLM generated summary of |
|
|
|
00:33:52.039 --> 00:33:58.880 |
|
their qualifications for a position or |
|
|
|
00:33:53.880 --> 00:34:01.799 |
|
something like that um and |
|
|
|
00:33:58.880 --> 00:34:03.080 |
|
you know what actually maybe this is not a |
|
|
|
00:34:01.799 --> 00:34:04.720 |
|
great example because whether you should |
|
|
|
00:34:03.080 --> 00:34:06.960 |
|
be doing this ethically is a little bit |
|
|
|
00:34:04.720 --> 00:34:08.159 |
|
unclear but let's say you were doing |
|
|
|
00:34:06.960 --> 00:34:09.560 |
|
let's say you were doing something like |
|
|
|
00:34:08.159 --> 00:34:11.520 |
|
that just because it's one example I can |
|
|
|
00:34:09.560 --> 00:34:14.320 |
|
think of right now whether they will get |
|
|
|
00:34:11.520 --> 00:34:16.320 |
|
hired or not is um is clear because you |
|
|
|
00:34:14.320 --> 00:34:19.399 |
|
have an objective answer right whether |
|
|
|
00:34:16.320 --> 00:34:21.480 |
|
they were hired or not um or maybe |
|
|
|
00:34:19.399 --> 00:34:23.800 |
|
another example would be like let's say |
|
|
|
00:34:21.480 --> 00:34:26.320 |
|
um let's say you want to predict the |
|
|
|
00:34:23.800 --> 00:34:29.599 |
|
diagnosis in a medical application based |
|
|
|
00:34:26.320 --> 00:34:32.960 |
|
on an LLM generated |
|
|
|
00:34:29.599 --> 00:34:35.919 |
|
uh you know summary of |
|
|
|
00:34:32.960 --> 00:34:38.480 |
|
somebody's you know past medical history |
|
|
|
00:34:35.919 --> 00:34:40.839 |
|
and all this stuff and here you want the |
|
|
|
00:34:38.480 --> 00:34:43.440 |
|
LLM generated summary you definitely |
|
|
|
00:34:40.839 --> 00:34:44.879 |
|
want the summary because the summary is |
|
|
|
00:34:43.440 --> 00:34:47.560 |
|
going to be viewed by a doctor who will |
|
|
|
00:34:44.879 --> 00:34:49.359 |
|
make the final decision but you also |
|
|
|
00:34:47.560 --> 00:34:50.760 |
|
have information about the diagnoses of |
|
|
|
00:34:49.359 --> 00:34:52.399 |
|
all the people in your medical system |
|
|
|
00:34:50.760 --> 00:34:54.560 |
|
later because you know they went through |
|
|
|
00:34:52.399 --> 00:34:56.480 |
|
your medical system for years and you |
|
|
|
00:34:54.560 --> 00:34:58.200 |
|
know later like through lots of tests |
|
|
|
00:34:56.480 --> 00:35:00.800 |
|
and stuff uh how they were |
|
|
|
00:34:58.200 --> 00:35:02.320 |
|
diagnosed so you generate an LLM based |
|
|
|
00:35:00.800 --> 00:35:05.000 |
|
summary and then you predict the |
|
|
|
00:35:02.320 --> 00:35:06.599 |
|
diagnosis from the summary so there the |
|
|
|
00:35:05.000 --> 00:35:08.040 |
|
evaluation of the diagnosis is very |
|
|
|
00:35:06.599 --> 00:35:11.480 |
|
clear because you kind of have a gold |
|
|
|
00:35:08.040 --> 00:35:12.599 |
|
standard answer um but the intrinsic |
|
|
|
00:35:11.480 --> 00:35:14.839 |
|
evaluation of whether it's a good |
|
|
|
00:35:12.599 --> 00:35:16.839 |
|
summary or not is not as clear because |
|
|
|
00:35:14.839 --> 00:35:19.400 |
|
you'd have to assess whether it's a good and |
|
|
|
00:35:16.839 --> 00:35:21.079 |
|
understandable summary so the extrinsic |
|
|
|
00:35:19.400 --> 00:35:24.920 |
|
evaluation might be easier because it's |
|
|
|
00:35:21.079 --> 00:35:26.480 |
|
clearer um so there are cases like that |
|
|
|
00:35:24.920 --> 00:35:30.720 |
|
um the problem is you would have to have |
|
|
|
00:35:26.480 --> 00:35:33.800 |
|
that data in order to do that um yeah do |
|
|
|
00:35:30.720 --> 00:35:38.240 |
|
like evaluation yeah I was just |
|
|
|
00:35:33.800 --> 00:35:40.800 |
|
wondering typically the |
|
|
|
00:35:38.240 --> 00:35:42.880 |
|
like how do you accommodate the |
|
|
|
00:35:40.800 --> 00:35:47.160 |
|
diversity oh yeah that's a great that's |
|
|
|
00:35:42.880 --> 00:35:50.240 |
|
a great question um so how do you how do |
|
|
|
00:35:47.160 --> 00:35:50.240 |
|
you get these scores |
|
|
|
00:35:50.720 --> 00:35:55.800 |
|
here there's a number of different |
|
|
|
00:35:53.200 --> 00:35:59.160 |
|
things in the WMT shared tasks what they |
|
|
|
00:35:55.800 --> 00:36:00.280 |
|
did is |
|
|
|
00:35:59.160 --> 00:36:03.200 |
|
the first thing they do is they |
|
|
|
00:36:00.280 --> 00:36:06.319 |
|
normalize by annotator and what they do |
|
|
|
00:36:03.200 --> 00:36:10.400 |
|
is they basically take the |
|
|
|
00:36:06.319 --> 00:36:12.240 |
|
z-score of the human annotator's |
|
|
|
00:36:10.400 --> 00:36:14.880 |
|
actual scores because some people are |
|
|
|
00:36:12.240 --> 00:36:16.400 |
|
more harsh than other people and so what |
|
|
|
00:36:14.880 --> 00:36:20.680 |
|
that means is you basically normalize to |
|
|
|
00:36:16.400 --> 00:36:22.119 |
|
have zero mean and unit variance um and |
|
|
|
00:36:20.680 --> 00:36:24.119 |
|
then after they've normalized to zero |
|
|
|
00:36:22.119 --> 00:36:29.560 |
|
mean and unit variance then I think they |
|
|
|
00:36:24.119 --> 00:36:29.560 |
|
average together different humans so um |
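
A minimal sketch of that normalization, assuming a dict mapping each annotator to their raw scores:

```python
import numpy as np

def z_normalize(scores_by_annotator):
    # Zero mean, unit variance per annotator, to remove harshness bias.
    normalized = {}
    for annotator, scores in scores_by_annotator.items():
        s = np.asarray(scores, dtype=float)
        normalized[annotator] = (s - s.mean()) / s.std()
    return normalized

# Normalized scores for the same segment from different annotators
# are then averaged together.
```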
|
|
|
00:36:30.160 --> 00:36:36.520 |
|
then as for how you deal with the fact |
|
|
|
00:36:33.680 --> 00:36:38.040 |
|
that humans disagree on things and I |
|
|
|
00:36:36.520 --> 00:36:39.480 |
|
think it's pretty varied I don't know if |
|
|
|
00:36:38.040 --> 00:36:42.160 |
|
there's any gold standard way of doing |
|
|
|
00:36:39.480 --> 00:36:43.839 |
|
it but sometimes you just average |
|
|
|
00:36:42.160 --> 00:36:46.359 |
|
sometimes you throw away examples where |
|
|
|
00:36:43.839 --> 00:36:47.960 |
|
humans disagree a lot um because like |
|
|
|
00:36:46.359 --> 00:36:50.200 |
|
you can't get the humans to agree how |
|
|
|
00:36:47.960 --> 00:36:53.319 |
|
could you expect a |
|
|
|
00:36:50.200 --> 00:36:55.119 |
|
machine to do well um so I think it's |
|
|
|
00:36:53.319 --> 00:36:59.200 |
|
a little bit task |
|
|
|
00:36:55.119 --> 00:37:01.560 |
|
dependent yeah so for |
|
|
|
00:36:59.200 --> 00:37:04.560 |
|
generation intrinsic |
|
|
|
00:37:01.560 --> 00:37:06.280 |
|
and extrinsic yeah so for code generation that's |
|
|
|
00:37:04.560 --> 00:37:08.200 |
|
I love this example because I've |
|
|
|
00:37:06.280 --> 00:37:09.960 |
|
worked on code generation a lot of |
|
|
|
00:37:08.200 --> 00:37:12.680 |
|
people only think about extrinsic |
|
|
|
00:37:09.960 --> 00:37:14.400 |
|
evaluation of code generation um or I |
|
|
|
00:37:12.680 --> 00:37:16.160 |
|
don't know if it's extrinsic but only |
|
|
|
00:37:14.400 --> 00:37:19.160 |
|
think about execution based evaluation |
|
|
|
00:37:16.160 --> 00:37:20.520 |
|
of code generation which is like you |
|
|
|
00:37:19.160 --> 00:37:22.400 |
|
execute the code you see whether it |
|
|
|
00:37:20.520 --> 00:37:25.040 |
|
passes unit tests and other things like |
|
|
|
00:37:22.400 --> 00:37:26.839 |
|
this but in reality actually there's a |
|
|
|
00:37:25.040 --> 00:37:28.599 |
|
lot of other important things for code |
|
|
|
00:37:26.839 --> 00:37:30.560 |
|
like readability and other stuff like |
|
|
|
00:37:28.599 --> 00:37:32.160 |
|
that and you should be evaluating those |
|
|
|
00:37:30.560 --> 00:37:34.920 |
|
things but I think a lot of people like |
|
|
|
00:37:32.160 --> 00:37:36.520 |
|
kind of ignore that so um there |
|
|
|
00:37:34.920 --> 00:37:38.880 |
|
are a few papers that do that but most of |
|
|
|
00:37:36.520 --> 00:37:41.000 |
|
the time people just execute the code |
|
|
|
00:37:38.880 --> 00:37:45.520 |
|
um |
|
|
|
00:37:41.000 --> 00:37:47.760 |
|
cool okay um so yeah moving on to the |
|
|
|
00:37:45.520 --> 00:37:51.160 |
|
learning part so now I'd like to talk |
|
|
|
00:37:47.760 --> 00:37:55.280 |
|
about uh learning and the first thing |
|
|
|
00:37:51.160 --> 00:37:59.480 |
|
I'll cover is error and risk and so |
|
|
|
00:37:55.280 --> 00:38:02.280 |
|
basically um the way we calculate error is |
|
|
|
00:37:59.480 --> 00:38:03.119 |
|
we generate an output and we calculate |
|
|
|
00:38:02.280 --> 00:38:07.680 |
|
its |
|
|
|
00:38:03.119 --> 00:38:09.480 |
|
badness um and so generating the output |
|
|
|
00:38:07.680 --> 00:38:13.160 |
|
could be argmax it could be sampling it |
|
|
|
00:38:09.480 --> 00:38:15.800 |
|
could be anything else like that um and |
|
|
|
00:38:13.160 --> 00:38:18.640 |
|
we calculate its badness uh which |
|
|
|
00:38:15.800 --> 00:38:21.040 |
|
could be like how bad is |
|
|
|
00:38:18.640 --> 00:38:22.720 |
|
the output uh if you have a |
|
|
|
00:38:21.040 --> 00:38:24.760 |
|
badness measure or it could be one minus |
|
|
|
00:38:22.720 --> 00:38:28.400 |
|
the evaluation score to calculate its |
|
|
|
00:38:24.760 --> 00:38:30.160 |
|
badness and this is defined as error |
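
In symbols, this is roughly (with ŷ the generated output and y* the reference):

```latex
\hat{y} = \arg\max_{y} P(y \mid x) \ \ \text{(or a sample)}, \qquad
\mathrm{err}(x, y^{*}) = 1 - \mathrm{eval}(\hat{y}, y^{*})
```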
|
|
|
00:38:28.400 --> 00:38:31.440 |
|
and generally what you want to do is you |
|
|
|
00:38:30.160 --> 00:38:33.520 |
|
want to minimize |
|
|
|
00:38:31.440 --> 00:38:36.800 |
|
error |
|
|
|
00:38:33.520 --> 00:38:39.400 |
|
um because in the end you're going to be |
|
|
|
00:38:36.800 --> 00:38:42.359 |
|
deploying a system that just outputs you |
|
|
|
00:38:39.400 --> 00:38:46.079 |
|
know one thing and uh you're going to |
|
|
|
00:38:42.359 --> 00:38:49.800 |
|
want that to be as good a thing as |
|
|
|
00:38:46.079 --> 00:38:53.000 |
|
possible um but the problem with this is |
|
|
|
00:38:49.800 --> 00:38:56.400 |
|
there's no easy way to actually optimize |
|
|
|
00:38:53.000 --> 00:38:59.079 |
|
this value especially in a text |
|
|
|
00:38:56.400 --> 00:39:01.800 |
|
generation setting but even in the |
|
|
|
00:38:59.079 --> 00:39:06.839 |
|
classification setting we can't easily |
|
|
|
00:39:01.800 --> 00:39:06.839 |
|
minimize error because um if you look at |
|
|
|
00:39:09.040 --> 00:39:14.200 |
|
the surface of error uh |
|
|
|
00:39:12.760 --> 00:39:15.960 |
|
at some point you're going to have a |
|
|
|
00:39:14.200 --> 00:39:18.319 |
|
non-differentiable part when you take |
|
|
|
00:39:15.960 --> 00:39:21.119 |
|
the argmax or when you do sampling |
|
|
|
00:39:18.319 --> 00:39:23.319 |
|
or anything like that so um you're not |
|
|
|
00:39:21.119 --> 00:39:27.119 |
|
going to be able to do gradient based |
|
|
|
00:39:23.319 --> 00:39:29.200 |
|
optimization so what we do normally is |
|
|
|
00:39:27.119 --> 00:39:33.400 |
|
um |
|
|
|
00:39:29.200 --> 00:39:37.000 |
|
we instead calculate something uh called |
|
|
|
00:39:33.400 --> 00:39:38.560 |
|
risk and what risk looks like is uh we |
|
|
|
00:39:37.000 --> 00:39:40.599 |
|
talked a little bit about minimum Bayes |
|
|
|
00:39:38.560 --> 00:39:43.520 |
|
risk for decoding but this is for uh |
|
|
|
00:39:40.599 --> 00:39:46.160 |
|
training time and what it looks like is |
|
|
|
00:39:43.520 --> 00:39:49.040 |
|
it's essentially the expected error of the |
|
|
|
00:39:46.160 --> 00:39:52.359 |
|
output and the expected error of the |
|
|
|
00:39:49.040 --> 00:39:54.760 |
|
output um includes a probability in the |
|
|
|
00:39:52.359 --> 00:39:58.240 |
|
objective function here and that |
|
|
|
00:39:54.760 --> 00:40:01.079 |
|
probability uh is differentiable basically |
|
|
|
00:39:58.240 --> 00:40:02.319 |
|
so we can easily do |
|
|
|
00:40:01.079 --> 00:40:05.720 |
|
gradient based |
|
|
|
00:40:02.319 --> 00:40:09.119 |
|
optimization through it um the problem |
|
|
|
00:40:05.720 --> 00:40:12.200 |
|
with this is it's differentiable but for |
|
|
|
00:40:09.119 --> 00:40:17.160 |
|
text generation for example the sum is |
|
|
|
00:40:12.200 --> 00:40:20.319 |
|
intractable because we have a combinatorially |
|
|
|
00:40:17.160 --> 00:40:23.880 |
|
large number of potential outputs um |
|
|
|
00:40:20.319 --> 00:40:25.520 |
|
because you know if this is we've talked |
|
|
|
00:40:23.880 --> 00:40:28.720 |
|
about this before but if this is like |
|
|
|
00:40:25.520 --> 00:40:30.680 |
|
length you know 50 and we have a 30,000 |
|
|
|
00:40:28.720 --> 00:40:32.839 |
|
vocabulary that's 30,000 to the 50 |
|
|
|
00:40:30.680 --> 00:40:34.599 |
|
possibilities we can't take a sum over |
|
|
|
00:40:32.839 --> 00:40:36.359 |
|
that many |
|
|
|
00:40:34.599 --> 00:40:38.400 |
|
possibilities |
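
Written out, the risk being described is roughly:

```latex
% expected error under the model's own distribution
\mathrm{risk}(x, y^{*}) = \sum_{\hat{y} \in \mathcal{Y}} P(\hat{y} \mid x)\,\mathrm{err}(\hat{y}, y^{*})
% the sum over \mathcal{Y} is intractable: about 30{,}000^{50} outputs in the example above
```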
|
|
|
00:40:36.359 --> 00:40:42.680 |
|
um |
|
|
|
00:40:38.400 --> 00:40:45.839 |
|
so minimum risk training uh tries to |
|
|
|
00:40:42.680 --> 00:40:48.440 |
|
minimize risk reinforcement learning |
|
|
|
00:40:45.839 --> 00:40:50.040 |
|
also many of the models especially |
|
|
|
00:40:48.440 --> 00:40:53.599 |
|
policy gradient models are trying to |
|
|
|
00:40:50.040 --> 00:40:55.240 |
|
minimize risk as well so um but the |
|
|
|
00:40:53.599 --> 00:40:58.040 |
|
reason why I wanted to talk about risk |
|
|
|
00:40:55.240 --> 00:41:00.440 |
|
first is because this is very simple to |
|
|
|
00:40:58.040 --> 00:41:01.640 |
|
get to from the uh the point of view of |
|
|
|
00:41:00.440 --> 00:41:06.560 |
|
like all the things that we've studied |
|
|
|
00:41:01.640 --> 00:41:06.560 |
|
so far so I think it's worth talking about |
|
|
|
00:41:06.760 --> 00:41:11.800 |
|
that |
|
|
|
00:41:08.319 --> 00:41:15.520 |
|
um one other thing that I should mention |
|
|
|
00:41:11.800 --> 00:41:18.400 |
|
about is |
|
|
|
00:41:15.520 --> 00:41:23.079 |
|
um or no sorry I'll I'll talk about that |
|
|
|
00:41:18.400 --> 00:41:26.880 |
|
later so when we want to optimize risk |
|
|
|
00:41:23.079 --> 00:41:30.560 |
|
um what we do is we sample in order to |
|
|
|
00:41:26.880 --> 00:41:35.520 |
|
make this tractable so a very simple way to |
|
|
|
00:41:30.560 --> 00:41:37.640 |
|
minimize risk is instead |
|
|
|
00:41:35.520 --> 00:41:39.359 |
|
of summing over all of the possible |
|
|
|
00:41:37.640 --> 00:41:42.760 |
|
outputs we sum over a small number of |
|
|
|
00:41:39.359 --> 00:41:46.079 |
|
possible outputs and |
|
|
|
00:41:42.760 --> 00:41:47.359 |
|
we normalize uh to make this |
|
|
|
00:41:46.079 --> 00:41:51.200 |
|
all add up to |
|
|
|
00:41:47.359 --> 00:41:52.839 |
|
one and so this normalizer here is |
|
|
|
00:41:51.200 --> 00:41:55.319 |
|
basically the sum over all of the |
|
|
|
00:41:52.839 --> 00:41:58.599 |
|
probabilities that we have uh on the top |
|
|
|
00:41:55.319 --> 00:42:02.119 |
|
part here. |
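
A minimal PyTorch-style sketch of this sampled, renormalized objective (variable names are illustrative):

```python
import torch

def minimum_risk_loss(seq_logprobs, errors):
    # seq_logprobs: (k,) log P(y_hat | x) for k sampled or n-best outputs
    # errors:       (k,) err(y_hat, y*) = 1 - evaluation score
    # Softmax over sequence log-probs renormalizes the model's
    # probabilities so they sum to one over just this sample set.
    q = torch.softmax(seq_logprobs, dim=0)
    return (q * errors).sum()  # differentiable estimate of the risk
```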
|
|
|
00:41:58.599 --> 00:42:05.480 |
|
and these samples can be created either using sampling or n-best |
|
|
|
00:42:02.119 --> 00:42:07.040 |
|
search from the |
|
|
|
00:42:05.480 --> 00:42:11.040 |
|
point of view of doing this sort of |
|
|
|
00:42:07.040 --> 00:42:13.960 |
|
minimum risk training the kind of |
|
|
|
00:42:11.040 --> 00:42:16.880 |
|
correct way of doing this is sampling |
|
|
|
00:42:13.960 --> 00:42:19.880 |
|
using ancestral sampling uh like we |
|
|
|
00:42:16.880 --> 00:42:23.079 |
|
talked about before and um minimizing |
|
|
|
00:42:19.880 --> 00:42:25.839 |
|
the objective based on the samples but |
|
|
|
00:42:23.079 --> 00:42:28.480 |
|
the problem with that is um as many of |
|
|
|
00:42:25.839 --> 00:42:31.440 |
|
you also might have seen when you were |
|
|
|
00:42:28.480 --> 00:42:33.599 |
|
sampling from your language model uh |
|
|
|
00:42:31.440 --> 00:42:35.160 |
|
from assignment one if you sample with |
|
|
|
00:42:33.599 --> 00:42:38.040 |
|
temperature one it gives you a lot of |
|
|
|
00:42:35.160 --> 00:42:40.720 |
|
like not very good outputs right and so |
|
|
|
00:42:38.040 --> 00:42:43.400 |
|
if you're sampling with temperature one |
|
|
|
00:42:40.720 --> 00:42:45.000 |
|
um you'll be exploring a very large |
|
|
|
00:42:43.400 --> 00:42:47.880 |
|
part of the space that actually isn't |
|
|
|
00:42:45.000 --> 00:42:49.720 |
|
very good and so because of this uh some |
|
|
|
00:42:47.880 --> 00:42:51.480 |
|
other alternatives that you can use are |
|
|
|
00:42:49.720 --> 00:42:53.400 |
|
you can just do n-best search to find the |
|
|
|
00:42:51.480 --> 00:42:55.280 |
|
best outputs or you can sample with a |
|
|
|
00:42:53.400 --> 00:42:58.079 |
|
temperature that's not one or something |
|
|
|
00:42:55.280 --> 00:43:00.240 |
|
like that and basically create uh you |
|
|
|
00:42:58.079 --> 00:43:02.520 |
|
know a list of possible hypotheses and |
|
|
|
00:43:00.240 --> 00:43:04.079 |
|
then normalize over them so that's another |
|
|
|
00:43:02.520 --> 00:43:06.240 |
|
option and very often not using |
|
|
|
00:43:04.079 --> 00:43:11.200 |
|
temperature one is a better |
|
|
|
00:43:06.240 --> 00:43:15.280 |
|
way um if you're sampling with a temperature |
|
|
|
00:43:11.200 --> 00:43:18.640 |
|
that's not one and you are um |
|
|
|
00:43:15.280 --> 00:43:20.920 |
|
potentially getting multiple outputs you |
|
|
|
00:43:18.640 --> 00:43:23.400 |
|
should try to de-duplicate or sample |
|
|
|
00:43:20.920 --> 00:43:25.480 |
|
without replacement because if you get |
|
|
|
00:43:23.400 --> 00:43:27.559 |
|
multiple outputs here it messes up your |
|
|
|
00:43:25.480 --> 00:43:30.680 |
|
equations if you basically uh have the |
|
|
|
00:43:27.559 --> 00:43:30.680 |
|
same one in there multiple |
|
|
|
00:43:32.160 --> 00:43:37.800 |
|
times cool so this is a really simple |
|
|
|
00:43:35.880 --> 00:43:40.079 |
|
example of how you can do minimum risk |
|
|
|
00:43:37.800 --> 00:43:42.119 |
|
training but now I want to get into uh |
|
|
|
00:43:40.079 --> 00:43:44.640 |
|
like reinforcement learning which is the |
|
|
|
00:43:42.119 --> 00:43:48.119 |
|
framing that most um |
|
|
|
00:43:44.640 --> 00:43:50.760 |
|
modern works about this follow uh one |
|
|
|
00:43:48.119 --> 00:43:52.559 |
|
thing I should mention is there are |
|
|
|
00:43:50.760 --> 00:43:55.240 |
|
actually other alternatives to learning |
|
|
|
00:43:52.559 --> 00:43:57.359 |
|
from uh human feedback including like |
|
|
|
00:43:55.240 --> 00:43:59.359 |
|
margin based losses and |
|
|
|
00:43:57.359 --> 00:44:00.960 |
|
other stuff like that but most people |
|
|
|
00:43:59.359 --> 00:44:03.440 |
|
nowadays use reinforcement learning so |
|
|
|
00:44:00.960 --> 00:44:06.359 |
|
I'm only going to cover that |
|
|
|
00:44:03.440 --> 00:44:08.440 |
|
here so what is reinforcement learning |
|
|
|
00:44:06.359 --> 00:44:11.000 |
|
um reinforcement learning is |
|
|
|
00:44:08.440 --> 00:44:14.559 |
|
learning where we have an environment uh |
|
|
|
00:44:11.000 --> 00:44:16.079 |
|
X uh the ability to make actions A and get a |
|
|
|
00:44:14.559 --> 00:44:20.160 |
|
delayed reward |
|
|
|
00:44:16.079 --> 00:44:21.880 |
|
R and um there's a really nice example |
|
|
|
00:44:20.160 --> 00:44:24.400 |
|
uh if you're not familiar with the |
|
|
|
00:44:21.880 --> 00:44:27.480 |
|
basics of policy gradient by Andre |
|
|
|
00:44:24.400 --> 00:44:28.800 |
|
karpathy which I linked in the um in the |
|
|
|
00:44:27.480 --> 00:44:29.680 |
|
recommended reading so you can take a |
|
|
|
00:44:28.800 --> 00:44:34.680 |
|
look at |
|
|
|
00:44:29.680 --> 00:44:37.240 |
|
that um but in that example gives an |
|
|
|
00:44:34.680 --> 00:44:39.440 |
|
example of pong uh where you're playing |
|
|
|
00:44:37.240 --> 00:44:42.640 |
|
the game pong where X is your observed |
|
|
|
00:44:39.440 --> 00:44:45.640 |
|
image a is up or down and R is the wind |
|
|
|
00:44:42.640 --> 00:44:47.480 |
|
loss at the end of the game uh does |
|
|
|
00:44:45.640 --> 00:44:50.559 |
|
anyone have an idea about uh what this |
|
|
|
00:44:47.480 --> 00:44:52.119 |
|
looks like for any arbitrary NLP task |
|
|
|
00:44:50.559 --> 00:44:56.520 |
|
that we might want to do reinforcement |
|
|
|
00:44:52.119 --> 00:44:59.040 |
|
learning for so what is X what is A |
|
|
|
00:44:56.520 --> 00:44:59.040 |
|
and what is |
|
|
|
00:45:00.040 --> 00:45:04.680 |
|
R pick your favorite |
|
|
|
00:45:06.920 --> 00:45:09.920 |
|
task |
|
|
|
00:45:10.960 --> 00:45:18.400 |
|
anybody |
|
|
|
00:45:12.520 --> 00:45:18.400 |
|
yeah what's X first |
|
|
|
00:45:19.680 --> 00:45:28.720 |
|
yeah what you have generated okay A is the |
|
|
|
00:45:24.440 --> 00:45:29.720 |
|
next token and R is like the button like whether or |
|
|
|
00:45:28.720 --> 00:45:32.520 |
|
not |
|
|
|
00:45:29.720 --> 00:45:35.240 |
|
you okay yeah I think this is very |
|
|
|
00:45:32.520 --> 00:45:37.119 |
|
close just to repeat it it's like X is |
|
|
|
00:45:35.240 --> 00:45:39.599 |
|
what you've generated so far A is the |
|
|
|
00:45:37.119 --> 00:45:41.559 |
|
next token and R is the button that the |
|
|
|
00:45:39.599 --> 00:45:45.400 |
|
user clicks about whether it's good or |
|
|
|
00:45:41.559 --> 00:45:46.920 |
|
not um I think that's reasonably good |
|
|
|
00:45:45.400 --> 00:45:48.760 |
|
although I don't know if we'd expect |
|
|
|
00:45:46.920 --> 00:45:52.960 |
|
them to click the button after every token we |
|
|
|
00:45:48.760 --> 00:45:54.880 |
|
generate right so um it might be that X |
|
|
|
00:45:52.960 --> 00:45:57.880 |
|
is the conversational history up till |
|
|
|
00:45:54.880 --> 00:46:02.319 |
|
this point um a |
|
|
|
00:45:57.880 --> 00:46:04.280 |
|
A could be the next token generated and |
|
|
|
00:46:02.319 --> 00:46:06.520 |
|
then R is a reward we get at an |
|
|
|
00:46:04.280 --> 00:46:08.280 |
|
arbitrary time point it might not be |
|
|
|
00:46:06.520 --> 00:46:09.960 |
|
like immediately after generating the |
|
|
|
00:46:08.280 --> 00:46:12.040 |
|
next token but it might be later and |
|
|
|
00:46:09.960 --> 00:46:13.480 |
|
that's actually really really important |
|
|
|
00:46:12.040 --> 00:46:15.040 |
|
from the point of view of reinforcement |
|
|
|
00:46:13.480 --> 00:46:19.599 |
|
learning and I'll I'll talk about that |
|
|
|
00:46:15.040 --> 00:46:23.040 |
|
in a second um anyone have an idea from |
|
|
|
00:46:19.599 --> 00:46:24.960 |
|
I don't know uh code generation or |
|
|
|
00:46:23.040 --> 00:46:28.119 |
|
translation or some other |
|
|
|
00:46:24.960 --> 00:46:31.160 |
|
things C generation maybe s is a |
|
|
|
00:46:28.119 --> 00:46:33.040 |
|
compiler or like the gra scpt and then |
|
|
|
00:46:31.160 --> 00:46:37.000 |
|
the |
|
|
|
00:46:33.040 --> 00:46:42.520 |
|
is the actual code that right and reward |
|
|
|
00:46:37.000 --> 00:46:44.839 |
|
is yep um so X could be the compiler |
|
|
|
00:46:42.520 --> 00:46:47.559 |
|
it's probably the compiler and all of |
|
|
|
00:46:44.839 --> 00:46:50.200 |
|
the surrounding code context like what |
|
|
|
00:46:47.559 --> 00:46:52.520 |
|
is the natural language input and |
|
|
|
00:46:50.200 --> 00:46:53.960 |
|
it's also um you know what is the |
|
|
|
00:46:52.520 --> 00:46:57.280 |
|
project that you're working on |
|
|
|
00:46:53.960 --> 00:47:00.079 |
|
and stuff like that um A I think |
|
|
|
00:46:57.280 --> 00:47:02.800 |
|
typically we would treat each token in |
|
|
|
00:47:00.079 --> 00:47:04.160 |
|
the code to be an action um and then R |
|
|
|
00:47:02.800 --> 00:47:06.599 |
|
would be the reward after a long |
|
|
|
00:47:04.160 --> 00:47:08.640 |
|
sequence of actions um and it could be |
|
|
|
00:47:06.599 --> 00:47:11.119 |
|
the reward from the compiler it could be |
|
|
|
00:47:08.640 --> 00:47:13.160 |
|
the reward from a code readability model |
|
|
|
00:47:11.119 --> 00:47:15.720 |
|
it could be the reward from |
|
|
|
00:47:13.160 --> 00:47:17.079 |
|
execution speed and stuff like that so |
|
|
|
00:47:15.720 --> 00:47:18.839 |
|
like one of the interesting things about |
|
|
|
00:47:17.079 --> 00:47:22.640 |
|
R is you can be really creative about |
|
|
|
00:47:18.839 --> 00:47:25.400 |
|
how you form R um which is not easy to |
|
|
|
00:47:22.640 --> 00:47:27.319 |
|
do uh if you're just doing maximum |
|
|
|
00:47:25.400 --> 00:47:29.240 |
|
likelihood also so you can come up with |
|
|
|
00:47:27.319 --> 00:47:32.920 |
|
a r that really matches with like what |
|
|
|
00:47:29.240 --> 00:47:36.559 |
|
you want um what you want in an output |
|
|
|
00:47:32.920 --> 00:47:40.079 |
|
so why reinforcement learning in NLP um |
|
|
|
00:47:36.559 --> 00:47:42.599 |
|
and I think there's basically three um |
|
|
|
00:47:40.079 --> 00:47:44.240 |
|
three answers the first one is you have |
|
|
|
00:47:42.599 --> 00:47:49.000 |
|
a typical reinforcement learning |
|
|
|
00:47:44.240 --> 00:47:51.119 |
|
scenario um where you have a dialogue |
|
|
|
00:47:49.000 --> 00:47:52.720 |
|
where you get lots of responses and then |
|
|
|
00:47:51.119 --> 00:47:54.559 |
|
you get a reward at the end so the |
|
|
|
00:47:52.720 --> 00:47:57.359 |
|
thumbs up and thumbs down from humans is |
|
|
|
00:47:54.559 --> 00:47:59.839 |
|
a very typical example of |
|
|
|
00:47:57.359 --> 00:48:02.800 |
|
uh reinforcement learning because you |
|
|
|
00:47:59.839 --> 00:48:05.000 |
|
get a delayed reward uh at some point in |
|
|
|
00:48:02.800 --> 00:48:07.599 |
|
the dialogue when a human presses up or |
|
|
|
00:48:05.000 --> 00:48:09.280 |
|
down um another like actually more |
|
|
|
00:48:07.599 --> 00:48:11.680 |
|
technical scenario where reinforcement |
|
|
|
00:48:09.280 --> 00:48:14.960 |
|
learning has been used um for a long |
|
|
|
00:48:11.680 --> 00:48:17.400 |
|
time is call centers so we've had |
|
|
|
00:48:14.960 --> 00:48:20.680 |
|
dialogue systems for call centers and |
|
|
|
00:48:17.400 --> 00:48:23.160 |
|
then if you complete a ticket purchase |
|
|
|
00:48:20.680 --> 00:48:24.839 |
|
um or you complete resolve a ticket |
|
|
|
00:48:23.160 --> 00:48:27.480 |
|
without ever having to go to a human |
|
|
|
00:48:24.839 --> 00:48:30.800 |
|
operator you get a really big reward |
|
|
|
00:48:27.480 --> 00:48:33.640 |
|
if you have to go to the human operator |
|
|
|
00:48:30.800 --> 00:48:36.400 |
|
you get maybe a smaller reward and if |
|
|
|
00:48:33.640 --> 00:48:39.200 |
|
the person yells at you and hangs up |
|
|
|
00:48:36.400 --> 00:48:41.640 |
|
then you get a really negative reward so |
|
|
|
00:48:39.200 --> 00:48:43.040 |
|
um this is kind of the typical example |
|
|
|
00:48:41.640 --> 00:48:45.599 |
|
reinforcement learning has been used for |
|
|
|
00:48:43.040 --> 00:48:48.520 |
|
a long time another example is if |
|
|
|
00:48:45.599 --> 00:48:53.280 |
|
you have like latent variables uh chains |
|
|
|
00:48:48.520 --> 00:48:55.799 |
|
of thought where um you decide the |
|
|
|
00:48:53.280 --> 00:48:58.839 |
|
latent variable and then get a reward um |
|
|
|
00:48:55.799 --> 00:49:02.799 |
|
you get a reward based on how those |
|
|
|
00:48:58.839 --> 00:49:03.920 |
|
latent variables affect the output so um |
|
|
|
00:49:02.799 --> 00:49:07.200 |
|
this |
|
|
|
00:49:03.920 --> 00:49:09.799 |
|
is uh this is another example |
|
|
|
00:49:07.200 --> 00:49:12.599 |
|
because the Chain of Thought itself |
|
|
|
00:49:09.799 --> 00:49:13.880 |
|
might not actually be good you might |
|
|
|
00:49:12.599 --> 00:49:15.839 |
|
have a bad Chain of Thought and still |
|
|
|
00:49:13.880 --> 00:49:17.760 |
|
get the correct answer so you don't |
|
|
|
00:49:15.839 --> 00:49:19.640 |
|
actually know for sure that a chain of |
|
|
|
00:49:17.760 --> 00:49:22.359 |
|
thought that was automatically generated |
|
|
|
00:49:19.640 --> 00:49:24.799 |
|
is good or not but um that so that kind |
|
|
|
00:49:22.359 --> 00:49:27.000 |
|
of makes it a reinforcement learning |
|
|
|
00:49:24.799 --> 00:49:29.520 |
|
problem and another thing is you might |
|
|
|
00:49:27.000 --> 00:49:32.520 |
|
have a sequence level evaluation metric |
|
|
|
00:49:29.520 --> 00:49:34.240 |
|
um so that you can't optimize the |
|
|
|
00:49:32.520 --> 00:49:36.839 |
|
evaluation metric without uh first |
|
|
|
00:49:34.240 --> 00:49:38.480 |
|
generating the whole like sequence so |
|
|
|
00:49:36.839 --> 00:49:40.880 |
|
that would be any of the evaluation |
|
|
|
00:49:38.480 --> 00:49:42.400 |
|
metrics that I talked about before so um |
|
|
|
00:49:40.880 --> 00:49:44.720 |
|
these are three scenarios where you can |
|
|
|
00:49:42.400 --> 00:49:47.079 |
|
use reinforcement |
|
|
|
00:49:44.720 --> 00:49:50.000 |
|
learning so |
|
|
|
00:49:47.079 --> 00:49:51.400 |
|
um I'm going to move through a few steps |
|
|
|
00:49:50.000 --> 00:49:54.640 |
|
but like let's start again with our |
|
|
|
00:49:51.400 --> 00:49:57.359 |
|
supervised MLE loss and uh that's just |
|
|
|
00:49:54.640 --> 00:50:01.799 |
|
the log probability here um in the |
|
|
|
00:49:57.359 --> 00:50:04.160 |
|
context of reinforcement learning this |
|
|
|
00:50:01.799 --> 00:50:07.079 |
|
is also called imitation |
|
|
|
00:50:04.160 --> 00:50:08.880 |
|
learning because um essentially you're |
|
|
|
00:50:07.079 --> 00:50:12.680 |
|
learning how to perform actions by |
|
|
|
00:50:08.880 --> 00:50:14.559 |
|
imitating a teacher um and imitation |
|
|
|
00:50:12.680 --> 00:50:15.960 |
|
learning is not just supervised MLE |
|
|
|
00:50:14.559 --> 00:50:18.440 |
|
there's also other varieties of |
|
|
|
00:50:15.960 --> 00:50:21.440 |
|
imitation learning but um this is one |
|
|
|
00:50:18.440 --> 00:50:21.440 |
|
variety of imitation |
|
|
|
00:50:22.520 --> 00:50:27.640 |
|
learning the next thing I'd like to talk |
|
|
|
00:50:24.599 --> 00:50:30.079 |
|
about is self-training and basically |
|
|
|
00:50:27.640 --> 00:50:31.760 |
|
self-training the idea is that you |
|
|
|
00:50:30.079 --> 00:50:33.720 |
|
sample or argmax according to the |
|
|
|
00:50:31.760 --> 00:50:36.119 |
|
current model so you have your current |
|
|
|
00:50:33.720 --> 00:50:38.000 |
|
model and you get a sample from it and |
|
|
|
00:50:36.119 --> 00:50:41.520 |
|
then you use the sample or samples to |
|
|
|
00:50:38.000 --> 00:50:43.680 |
|
maximize likelihood so um basically |
|
|
|
00:50:41.520 --> 00:50:47.520 |
|
instead of doing maximum likelihood with |
|
|
|
00:50:43.680 --> 00:50:49.520 |
|
respect to the a gold standard output |
|
|
|
00:50:47.520 --> 00:50:51.280 |
|
you're doing it with respect to your own |
|
|
|
00:50:49.520 --> 00:50:55.280 |
|
output |
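
As a sketch (`model.sample` and `model.log_prob` are hypothetical helpers, not a real API):

```python
# Self-training: treat the model's own sample as if it were a gold output.
def self_training_step(model, optimizer, prompt):
    y_hat = model.sample(prompt)           # draw an output from the model
    loss = -model.log_prob(y_hat, prompt)  # NLL of the model's own sample
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```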
|
|
|
00:50:51.280 --> 00:50:55.280 |
|
so does this seem like a good |
|
|
|
00:50:55.640 --> 00:51:03.880 |
|
idea I see a few people shaking heads um |
|
|
|
00:51:00.480 --> 00:51:03.880 |
|
any ideas why this is not a good |
|
|
|
00:51:04.680 --> 00:51:07.680 |
|
idea |
|
|
|
00:51:15.040 --> 00:51:20.599 |
|
yeah yeah exactly so if you don't have |
|
|
|
00:51:17.720 --> 00:51:23.760 |
|
any access to any notion of whether it's good |
|
|
|
00:51:20.599 --> 00:51:27.480 |
|
um this will be optimizing towards good |
|
|
|
00:51:23.760 --> 00:51:28.839 |
|
outputs and bad outputs right so um your |
|
|
|
00:51:27.480 --> 00:51:30.200 |
|
model might be outputting bad outputs |
|
|
|
00:51:28.839 --> 00:51:32.839 |
|
and you're just reinforcing the errors |
|
|
|
00:51:30.200 --> 00:51:35.160 |
|
that the model already makes nonetheless |
|
|
|
00:51:32.839 --> 00:51:37.799 |
|
self-training actually improves your |
|
|
|
00:51:35.160 --> 00:51:39.680 |
|
accuracy somewhat in some cases like for |
|
|
|
00:51:37.799 --> 00:51:43.040 |
|
example if your |
|
|
|
00:51:39.680 --> 00:51:45.520 |
|
model is right more often than not um |
|
|
|
00:51:43.040 --> 00:51:49.119 |
|
basically optimizing towards the more |
|
|
|
00:51:45.520 --> 00:51:51.720 |
|
often than not right outputs can actually |
|
|
|
00:51:49.119 --> 00:51:53.640 |
|
um due to the implicit regularization |
|
|
|
00:51:51.720 --> 00:51:55.000 |
|
that models have and early stopping and |
|
|
|
00:51:53.640 --> 00:51:56.559 |
|
other things like that it can actually |
|
|
|
00:51:55.000 --> 00:51:59.280 |
|
move you in the right direction and |
|
|
|
00:51:56.559 --> 00:52:01.559 |
|
improve accuracy |
|
|
|
00:51:59.280 --> 00:52:05.000 |
|
um |
|
|
|
00:52:01.559 --> 00:52:06.640 |
|
so there are alternatives to this that |
|
|
|
00:52:05.000 --> 00:52:09.520 |
|
further improve accuracy so like for |
|
|
|
00:52:06.640 --> 00:52:12.720 |
|
example if you have multiple models and |
|
|
|
00:52:09.520 --> 00:52:16.200 |
|
um you only generate sentences where the |
|
|
|
00:52:12.720 --> 00:52:17.760 |
|
models agree then this can improve your |
|
|
|
00:52:16.200 --> 00:52:20.000 |
|
uh overall accuracy |
|
|
|
00:52:17.760 --> 00:52:24.240 |
|
further um this is called co-training |
|
|
|
00:52:20.000 --> 00:52:27.799 |
|
it was actually uh created by |
|
|
|
00:52:24.240 --> 00:52:30.160 |
|
people at CMU as well and another |
|
|
|
00:52:27.799 --> 00:52:32.280 |
|
successful alternative uh is adding |
|
|
|
00:52:30.160 --> 00:52:34.920 |
|
noise to the input to match the noise |
|
|
|
00:52:32.280 --> 00:52:38.760 |
|
that you find in the output so if you uh |
|
|
|
00:52:34.920 --> 00:52:40.720 |
|
add like word-based dropout or |
|
|
|
00:52:38.760 --> 00:52:44.000 |
|
other things like that this can also |
|
|
|
00:52:40.720 --> 00:52:47.400 |
|
help uh accommodate these things but |
|
|
|
00:52:44.000 --> 00:52:48.920 |
|
anyway um so self-training is useful |
|
|
|
00:52:47.400 --> 00:52:50.480 |
|
but there are better alternatives if you |
|
|
|
00:52:48.920 --> 00:52:54.079 |
|
can get a reward |
|
|
|
00:52:50.480 --> 00:52:55.559 |
|
function so um the simplest variety of |
|
|
|
00:52:54.079 --> 00:52:56.960 |
|
this is something called policy gradient |
|
|
|
00:52:55.559 --> 00:52:59.720 |
|
or REINFORCE |
|
|
|
00:52:56.960 --> 00:53:02.319 |
|
um or more specifically REINFORCE and |
|
|
|
00:52:59.720 --> 00:53:06.280 |
|
basically what this does is this adds a |
|
|
|
00:53:02.319 --> 00:53:08.359 |
|
term that scales the loss by the reward |
|
|
|
00:53:06.280 --> 00:53:12.400 |
|
so if you can get a reward for each |
|
|
|
00:53:08.359 --> 00:53:15.680 |
|
output basically this |
|
|
|
00:53:12.400 --> 00:53:18.119 |
|
um instead of doing self-training |
|
|
|
00:53:15.680 --> 00:53:21.760 |
|
entirely by itself you multiply it by a |
|
|
|
00:53:18.119 --> 00:53:23.119 |
|
reward and this allows you to increase |
|
|
|
00:53:21.760 --> 00:53:24.640 |
|
the likelihood of things that get a high |
|
|
|
00:53:23.119 --> 00:53:28.440 |
|
reward decrease the likelihood of things |
|
|
|
00:53:24.640 --> 00:53:28.440 |
|
that get a low reward |
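
Compared to the self-training sketch above, the only change is that the log-likelihood is scaled by the reward (again with hypothetical helper names):

```python
# REINFORCE-style policy gradient step (sketch).
def reinforce_step(model, optimizer, prompt, reward_fn):
    y_hat = model.sample(prompt)
    reward = reward_fn(prompt, y_hat)               # scalar reward signal
    loss = -reward * model.log_prob(y_hat, prompt)  # up/down-weight by reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```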
|
|
|
00:53:29.680 --> 00:53:34.960 |
|
so uh a brief quiz here under what |
|
|
|
00:53:32.440 --> 00:53:37.599 |
|
conditions is this equivalent to |
|
|
|
00:53:34.960 --> 00:53:41.480 |
|
MLE or essentially equivalent to maximum |
|
|
|
00:53:37.599 --> 00:53:43.079 |
|
likelihood estimation and so like in order |
|
|
|
00:53:41.480 --> 00:53:45.480 |
|
to make this quiz easier I'll go back to |
|
|
|
00:53:43.079 --> 00:53:47.720 |
|
maximum likelihood estimation so it |
|
|
|
00:53:45.480 --> 00:53:50.359 |
|
looked a bit like this um you calculated |
|
|
|
00:53:47.720 --> 00:53:53.440 |
|
the log probability of the true output |
|
|
|
00:53:50.359 --> 00:53:55.440 |
|
and now let me go uh to |
|
|
|
00:53:53.440 --> 00:53:56.960 |
|
here any |
|
|
|
00:53:55.440 --> 00:54:00.119 |
|
ideas |
|
|
|
00:53:56.960 --> 00:54:05.040 |
|
yeah when your reward equals to |
|
|
|
00:54:00.119 --> 00:54:05.040 |
|
one sometimes and zero other times |
|
|
|
00:54:07.760 --> 00:54:10.960 |
|
what any |
|
|
|
00:54:12.760 --> 00:54:17.520 |
|
ideas when does your reward |
|
|
|
00:54:15.280 --> 00:54:19.640 |
|
need to be equal to one in order to make |
|
|
|
00:54:17.520 --> 00:54:23.400 |
|
this |
|
|
|
00:54:19.640 --> 00:54:23.400 |
|
equation equivalent to this |
|
|
|
00:54:24.960 --> 00:54:31.680 |
|
equation yeah when Y and Y hat are the |
|
|
|
00:54:27.319 --> 00:54:36.119 |
|
same so um basically |
|
|
|
00:54:31.680 --> 00:54:38.880 |
|
this objective is equivalent to the MLE |
|
|
|
00:54:36.119 --> 00:54:43.160 |
|
objective when you're using a zero-one |
|
|
|
00:54:38.880 --> 00:54:44.480 |
|
loss um or you're using an |
|
|
|
00:54:43.160 --> 00:54:46.359 |
|
evaluation function that gives you a |
|
|
|
00:54:44.480 --> 00:54:50.920 |
|
score of one when it's exact match and |
|
|
|
00:54:46.359 --> 00:54:51.720 |
|
zero when it's not exact match so um but |
|
|
|
00:54:50.920 --> 00:54:54.480 |
|
that |
|
|
|
00:54:51.720 --> 00:54:56.440 |
|
also demonstrates that this can be more |
|
|
|
00:54:54.480 --> 00:54:58.400 |
|
flexible because you can have other |
|
|
|
00:54:56.440 --> 00:55:00.160 |
|
rewards that are not just one and zero |
|
|
|
00:54:58.400 --> 00:55:02.599 |
|
for exact match but you can use things |
|
|
|
00:55:00.160 --> 00:55:05.359 |
|
that give you partial credit you can use |
|
|
|
00:55:02.599 --> 00:55:06.880 |
|
things that upweight multiple |
|
|
|
00:55:05.359 --> 00:55:08.880 |
|
potentially correct outputs and other |
|
|
|
00:55:06.880 --> 00:55:13.400 |
|
things like |
|
|
|
00:55:08.880 --> 00:55:17.160 |
|
that so one problem with these methods |
|
|
|
00:55:13.400 --> 00:55:21.799 |
|
is um how do we know which action led to |
|
|
|
00:55:17.160 --> 00:55:24.720 |
|
the reward so the best scenario is after |
|
|
|
00:55:21.799 --> 00:55:26.359 |
|
each action you get a reward so after |
|
|
|
00:55:24.720 --> 00:55:28.960 |
|
each token that you generated you get |
|
|
|
00:55:26.359 --> 00:55:31.240 |
|
a thumbs up or thumbs down uh from |
|
|
|
00:55:28.960 --> 00:55:34.280 |
|
the user about whether they like that |
|
|
|
00:55:31.240 --> 00:55:36.000 |
|
token or not um and how much happier |
|
|
|
00:55:34.280 --> 00:55:37.720 |
|
they are after you generated that token |
|
|
|
00:55:36.000 --> 00:55:42.400 |
|
than they were before you generated that |
|
|
|
00:55:37.720 --> 00:55:44.200 |
|
token um the problem with this is that |
|
|
|
00:55:42.400 --> 00:55:45.799 |
|
that's completely infeasible right like |
|
|
|
00:55:44.200 --> 00:55:47.039 |
|
every time after you use ChatGPT you're |
|
|
|
00:55:45.799 --> 00:55:50.480 |
|
not going to press thumbs up and thumbs |
|
|
|
00:55:47.039 --> 00:55:52.559 |
|
down after each token so um in reality |
|
|
|
00:55:50.480 --> 00:55:55.559 |
|
what we get is usually we get it at the |
|
|
|
00:55:52.559 --> 00:55:57.000 |
|
end of a rollout of many many |
|
|
|
00:55:55.559 --> 00:55:58.640 |
|
different actions and we're not sure |
|
|
|
00:55:57.000 --> 00:55:59.720 |
|
which action is responsible for giving |
|
|
|
00:55:58.640 --> 00:56:02.559 |
|
us the |
|
|
|
00:55:59.720 --> 00:56:05.440 |
|
reward and |
|
|
|
00:56:02.559 --> 00:56:08.000 |
|
so there's a few typical ways of dealing |
|
|
|
00:56:05.440 --> 00:56:09.640 |
|
with this um the most typical way of |
|
|
|
00:56:08.000 --> 00:56:13.359 |
|
dealing with this right now is just not |
|
|
|
00:56:09.640 --> 00:56:15.440 |
|
dealing with it um and just hoping that |
|
|
|
00:56:13.359 --> 00:56:17.200 |
|
your optimization algorithm internally |
|
|
|
00:56:15.440 --> 00:56:21.480 |
|
will be able to do credit |
|
|
|
00:56:17.200 --> 00:56:24.520 |
|
assignment um and so what that entails |
|
|
|
00:56:21.480 --> 00:56:27.319 |
|
is essentially you um give an equal |
|
|
|
00:56:24.520 --> 00:56:29.880 |
|
reward for each token in the output |
|
|
|
00:56:27.319 --> 00:56:32.480 |
|
other ways that you can deal with it are |
|
|
|
00:56:29.880 --> 00:56:35.640 |
|
um you can assign decaying rewards from |
|
|
|
00:56:32.480 --> 00:56:37.559 |
|
future events so like let's say let's |
|
|
|
00:56:35.640 --> 00:56:41.839 |
|
say you're talking about a chat bot for |
|
|
|
00:56:37.559 --> 00:56:44.119 |
|
example maybe this is the most uh |
|
|
|
00:56:41.839 --> 00:56:46.599 |
|
kind of intuitive way of thinking about |
|
|
|
00:56:44.119 --> 00:56:50.400 |
|
it but you you have a chat bot you have |
|
|
|
00:56:46.599 --> 00:56:52.599 |
|
like 20 chat turns and you have the user |
|
|
|
00:56:50.400 --> 00:56:55.640 |
|
give a thumbs up or a thumbs down on the |
|
|
|
00:56:52.599 --> 00:56:58.920 |
|
20th chat turn there you would assign a |
|
|
|
00:56:55.640 --> 00:57:01.440 |
|
reward of um like let's say it gave a |
|
|
|
00:56:58.920 --> 00:57:03.640 |
|
thumbs up there you would assign a |
|
|
|
00:57:01.440 --> 00:57:06.559 |
|
reward of one for the previous chat turn |
|
|
|
00:57:03.640 --> 00:57:09.839 |
|
a reward of like 0.5 for the second to |
|
|
|
00:57:06.559 --> 00:57:11.720 |
|
previous chat turn a reward of 0.25 for |
|
|
|
00:57:09.839 --> 00:57:14.319 |
|
the third to previous chat turn to |
|
|
|
00:57:11.720 --> 00:57:16.160 |
|
basically say yeah like the user is |
|
|
|
00:57:14.319 --> 00:57:18.240 |
|
feeling good at the moment they gave the |
|
|
|
00:57:16.160 --> 00:57:20.359 |
|
thumbs up and that's probably more |
|
|
|
00:57:18.240 --> 00:57:23.400 |
|
likely due to the things that happened |
|
|
|
00:57:20.359 --> 00:57:23.400 |
|
recently so |
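
A sketch of that decayed credit assignment (matching the 1 / 0.5 / 0.25 example above; the decay factor is a free choice):

```python
# Spread a final reward backwards over turns with exponential decay.
def assign_turn_rewards(num_turns, final_reward, decay=0.5):
    rewards = [0.0] * num_turns
    for i in range(num_turns):
        steps_back = (num_turns - 1) - i  # distance from the rewarded turn
        rewards[i] = final_reward * (decay ** steps_back)
    return rewards

print(assign_turn_rewards(4, 1.0))  # [0.125, 0.25, 0.5, 1.0]
```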
|
|
|
00:57:23.559 --> 00:57:28.119 |
|
yeah do we have a |
|
|
|
00:57:26.680 --> 00:57:32.280 |
|
like learned reward model or not |
|
|
|
00:57:28.119 --> 00:57:34.160 |
|
so the reward model can be any |
|
|
|
00:57:32.280 --> 00:57:35.839 |
|
of the methods that I talked about |
|
|
|
00:57:34.160 --> 00:57:37.480 |
|
before so it can be human feedback |
|
|
|
00:57:35.839 --> 00:57:39.000 |
|
directly like a thumbs up or a thumbs |
|
|
|
00:57:37.480 --> 00:57:42.200 |
|
down it could also be from a reward |
|
|
|
00:57:39.000 --> 00:57:44.599 |
|
model uh that was pre-trained you could |
|
|
|
00:57:42.200 --> 00:57:47.680 |
|
also theoretically learn the reward |
|
|
|
00:57:44.599 --> 00:57:52.720 |
|
model simultaneously but you'd have to learn it |
|
|
|
00:57:47.680 --> 00:57:55.200 |
|
simultaneously with the model itself um |
|
|
|
00:57:52.720 --> 00:57:57.280 |
|
so yeah I'm going to talk a little bit |
|
|
|
00:57:55.200 --> 00:58:00.359 |
|
about DPO which kind of does that a |
|
|
|
00:57:57.280 --> 00:58:01.720 |
|
little bit but um I would basically |
|
|
|
00:58:00.359 --> 00:58:03.160 |
|
say that wherever you're getting your |
|
|
|
00:58:01.720 --> 00:58:06.280 |
|
reward is probably from one of the |
|
|
|
00:58:03.160 --> 00:58:06.280 |
|
things I talked about earlier |
|
|
|
00:58:06.359 --> 00:58:14.960 |
|
today cool any other |
|
|
|
00:58:09.319 --> 00:58:17.720 |
|
questions okay um so that's the basic |
|
|
|
00:58:14.960 --> 00:58:20.640 |
|
idea the very simplest thing |
|
|
|
00:58:17.720 --> 00:58:23.359 |
|
that you can do is you can just sample |
|
|
|
00:58:20.640 --> 00:58:26.079 |
|
um optimize this objective function this |
|
|
|
00:58:23.359 --> 00:58:28.359 |
|
is dead easy it's not hard to |
|
|
|
00:58:26.079 --> 00:58:30.799 |
|
implement at all as long as you have some |
|
|
|
00:58:28.359 --> 00:58:32.760 |
|
source of reward signal um but the |
|
|
|
00:58:30.799 --> 00:58:35.559 |
|
problem is uh reinforcement learning can |
|
|
|
00:58:32.760 --> 00:58:38.599 |
|
be very unstable and it's hard to get it |
|
|
|
00:58:35.559 --> 00:58:40.160 |
|
to uh you know work properly if you uh |
|
|
|
00:58:38.599 --> 00:58:42.400 |
|
don't do some additional tricks so I'd |
|
|
|
00:58:40.160 --> 00:58:45.720 |
|
like to talk about this |
|
|
|
00:58:42.400 --> 00:58:45.720 |
|
next oh yeah |
|
|
|
00:58:48.880 --> 00:58:51.880 |
|
sir |
|
|
|
00:58:55.039 --> 00:58:58.039 |
|
yeah |
|
|
|
00:59:03.280 --> 00:59:08.960 |
|
yeah the typical way is you |
|
|
|
00:59:05.440 --> 00:59:12.960 |
|
just have an exponential decay um so you |
|
|
|
00:59:08.960 --> 00:59:16.200 |
|
you multiply each time by what 0.5 or |
|
|
|
00:59:12.960 --> 00:59:19.400 |
|
something like that |
|
|
|
00:59:16.200 --> 00:59:19.400 |
|
um |
|
|
|
00:59:20.319 --> 00:59:27.720 |
|
cool okay |
|
|
|
00:59:25.039 --> 00:59:30.720 |
|
so |
|
|
|
00:59:27.720 --> 00:59:33.319 |
|
and that's one option and sorry just to |
|
|
|
00:59:30.720 --> 00:59:35.760 |
|
clarify the most common option nowadays |
|
|
|
00:59:33.319 --> 00:59:37.920 |
|
um at least from the point of view of |
|
|
|
00:59:35.760 --> 00:59:39.839 |
|
models is not to Decay it at all and |
|
|
|
00:59:37.920 --> 00:59:43.880 |
|
just assign the same amount for each |
|
|
|
00:59:39.839 --> 00:59:45.319 |
|
token um I'm not actually 100% sure what |
|
|
|
00:59:43.880 --> 00:59:47.319 |
|
people are doing with respect to like |
|
|
|
00:59:45.319 --> 00:59:49.280 |
|
long chat things I think probably |
|
|
|
00:59:47.319 --> 00:59:51.720 |
|
they're only assigning it to the current |
|
|
|
00:59:49.280 --> 00:59:54.240 |
|
like utterance and then not optimizing |
|
|
|
00:59:51.720 --> 00:59:57.240 |
|
the previous utterances so like if they |
|
|
|
00:59:54.240 --> 00:59:59.039 |
|
get a thumbs up or thumbs down signal um |
|
|
|
00:59:57.240 --> 01:00:00.720 |
|
then they would assign an |
|
|
|
00:59:59.039 --> 01:00:02.440 |
|
equivalent reward for all of the tokens |
|
|
|
01:00:00.720 --> 01:00:04.640 |
|
and the current utterance and zero |
|
|
|
01:00:02.440 --> 01:00:06.119 |
|
reward for the previous ones but I'm not |
|
|
|
01:00:04.640 --> 01:00:08.480 |
|
100% sure about that there might be |
|
|
|
01:00:06.119 --> 01:00:11.200 |
|
other methods that people are |
|
|
|
01:00:08.480 --> 01:00:13.960 |
|
using um |
|
|
|
01:00:11.200 --> 01:00:16.680 |
|
cool so uh stabilizing reinforcement |
|
|
|
01:00:13.960 --> 01:00:18.520 |
|
learning so um stabilizing reinforcement |
|
|
|
01:00:16.680 --> 01:00:21.839 |
|
learning there's a lot of reasons why |
|
|
|
01:00:18.520 --> 01:00:23.880 |
|
it's unstable um the first reason is |
|
|
|
01:00:21.839 --> 01:00:27.200 |
|
you're sampling an individual output and |
|
|
|
01:00:23.880 --> 01:00:30.160 |
|
calculating um based |
|
|
|
01:00:27.200 --> 01:00:32.039 |
|
on the individual sampled output and |
|
|
|
01:00:30.160 --> 01:00:33.440 |
|
then there's an Infinity of other |
|
|
|
01:00:32.039 --> 01:00:36.480 |
|
outputs that you could be optimizing |
|
|
|
01:00:33.440 --> 01:00:39.119 |
|
over for mle this is not a problem |
|
|
|
01:00:36.480 --> 01:00:41.319 |
|
because for MLE you're always |
|
|
|
01:00:39.119 --> 01:00:45.359 |
|
contrasting the gold standard output to |
|
|
|
01:00:41.319 --> 01:00:46.599 |
|
all of the other outputs in the space um |
|
|
|
01:00:45.359 --> 01:00:48.280 |
|
and you're saying I want to upweight the |
|
|
|
01:00:46.599 --> 01:00:51.200 |
|
gold standard output and downweight all of |
|
|
|
01:00:48.280 --> 01:00:53.039 |
|
the other ones but for reinforcement |
|
|
|
01:00:51.200 --> 01:00:54.760 |
|
learning you only have a single sampled |
|
|
|
01:00:53.039 --> 01:00:57.520 |
|
output that output might be wrong and |
|
|
|
01:00:54.760 --> 01:00:59.359 |
|
that's a source of instability this is |
|
|
|
01:00:57.520 --> 01:01:02.079 |
|
particularly a problem when using bigger |
|
|
|
01:00:59.359 --> 01:01:05.960 |
|
output spaces like all of the tokens in the |
|
|
|
01:01:02.079 --> 01:01:07.920 |
|
vocabulary another problem is uh anytime |
|
|
|
01:01:05.960 --> 01:01:11.599 |
|
you start using negative |
|
|
|
01:01:07.920 --> 01:01:15.160 |
|
rewards um because if you start using |
|
|
|
01:01:11.599 --> 01:01:17.559 |
|
negative rewards those rewards will be |
|
|
|
01:01:15.160 --> 01:01:19.520 |
|
downweighting the probability of a |
|
|
|
01:01:17.559 --> 01:01:20.680 |
|
particular output sequence and that |
|
|
|
01:01:19.520 --> 01:01:22.440 |
|
might be a good idea maybe you're |
|
|
|
01:01:20.680 --> 01:01:24.319 |
|
getting a toxic output or something like |
|
|
|
01:01:22.440 --> 01:01:25.960 |
|
that and you want to downweight it but at the |
|
|
|
01:01:24.319 --> 01:01:28.280 |
|
same time in addition to that toxic |
|
|
|
01:01:25.960 --> 01:01:30.000 |
|
output there's like you know a |
|
|
|
01:01:28.280 --> 01:01:31.599 |
|
combinatorial number of completely |
|
|
|
01:01:30.000 --> 01:01:33.880 |
|
nonsense outputs that aren't even |
|
|
|
01:01:31.599 --> 01:01:36.599 |
|
English and so basically you can start |
|
|
|
01:01:33.880 --> 01:01:38.920 |
|
diverge from the N starting start to |
|
|
|
01:01:36.599 --> 01:01:40.799 |
|
diverge from the natural like language |
|
|
|
01:01:38.920 --> 01:01:44.720 |
|
modeling distribution that you have |
|
|
|
01:01:40.799 --> 01:01:49.079 |
|
before so this is a big uh a big |
|
|
|
01:01:44.720 --> 01:01:51.880 |
|
problem so a number of uh strategies can |
|
|
|
01:01:49.079 --> 01:01:53.880 |
|
be used to stabilize the first one is |
|
|
|
01:01:51.880 --> 01:01:55.480 |
|
this is completely obvious right now and |
|
|
|
01:01:53.880 --> 01:01:57.240 |
|
nobody in their right mind would avoid |
|
|
|
01:01:55.480 --> 01:02:00.119 |
|
doing this but the first one is |
|
|
|
01:01:57.240 --> 01:02:02.839 |
|
pre-training with mle and so you start |
|
|
|
01:02:00.119 --> 01:02:04.920 |
|
with a pre-trained model um and then |
|
|
|
01:02:02.839 --> 01:02:09.359 |
|
switch over to RL after you finished |
|
|
|
01:02:04.920 --> 01:02:11.520 |
|
pre-training the model um and so |
|
|
|
01:02:09.359 --> 01:02:13.279 |
|
this makes a lot of sense if you're |
|
|
|
01:02:11.520 --> 01:02:14.960 |
|
training a language model which I assume |
|
|
|
01:02:13.279 --> 01:02:17.039 |
|
that almost everybody in this class is |
|
|
|
01:02:14.960 --> 01:02:20.279 |
|
going to be doing but it does only work |
|
|
|
01:02:17.039 --> 01:02:22.720 |
|
in scenarios where you can run MLE and |
|
|
|
01:02:20.279 --> 01:02:24.359 |
|
so it doesn't work if you're predicting |
|
|
|
01:02:22.720 --> 01:02:27.240 |
|
like latent variables that aren't |
|
|
|
01:02:24.359 --> 01:02:28.760 |
|
included in the original space |
|
|
|
01:02:27.240 --> 01:02:31.960 |
|
um it |
|
|
|
01:02:28.760 --> 01:02:34.279 |
|
also doesn't work in a setting where |
|
|
|
01:02:31.960 --> 01:02:36.640 |
|
like you want to learn a |
|
|
|
01:02:34.279 --> 01:02:40.799 |
|
chatbot for |
|
|
|
01:02:36.640 --> 01:02:44.200 |
|
customer service for a |
|
|
|
01:02:40.799 --> 01:02:48.039 |
|
company that |
|
|
|
01:02:44.200 --> 01:02:49.960 |
|
has like for example a product catalog |
|
|
|
01:02:48.039 --> 01:02:53.559 |
|
that the language model has never seen |
|
|
|
01:02:49.960 --> 01:02:56.000 |
|
before and so if the language model has |
|
|
|
01:02:53.559 --> 01:02:57.359 |
|
no information about the product catalog |
|
|
|
01:02:56.000 --> 01:02:59.920 |
|
whatsoever and you don't provide it through |
|
|
|
01:02:57.359 --> 01:03:02.440 |
|
RAG or something like that it's going to |
|
|
|
01:02:59.920 --> 01:03:04.039 |
|
have to explore infinitely or not |
|
|
|
01:03:02.440 --> 01:03:05.599 |
|
infinitely but it's going to have to |
|
|
|
01:03:04.039 --> 01:03:08.359 |
|
explore too large of a space and you're |
|
|
|
01:03:05.599 --> 01:03:10.000 |
|
never going to converge with |
|
|
|
01:03:08.359 --> 01:03:12.359 |
|
your language modeling objectives so you |
|
|
|
01:03:10.000 --> 01:03:15.000 |
|
need to basically be able to create at |
|
|
|
01:03:12.359 --> 01:03:16.079 |
|
least some supervised training data to |
|
|
|
01:03:15.000 --> 01:03:19.279 |
|
train with |
|
|
|
01:03:16.079 --> 01:03:20.720 |
|
MLE um but assuming you can do that I'm |
|
|
|
01:03:19.279 --> 01:03:22.920 |
|
assuming that almost everybody is going |
|
|
|
01:03:20.720 --> 01:03:26.400 |
|
to do some sort of pre-training with |
|
|
|
01:03:22.920 --> 01:03:27.880 |
|
MLE um the next step that people use uh |
|
|
|
01:03:26.400 --> 01:03:30.520 |
|
in reinforcement learning that's really |
|
|
|
01:03:27.880 --> 01:03:34.319 |
|
important to stabilize is regularization |
|
|
|
01:03:30.520 --> 01:03:35.880 |
|
to an existing model and you have an |
|
|
|
01:03:34.319 --> 01:03:39.039 |
|
existing model and you want to prevent |
|
|
|
01:03:35.880 --> 01:03:40.559 |
|
it from getting too far away and the |
|
|
|
01:03:39.039 --> 01:03:42.279 |
|
reason why you want to do this is like |
|
|
|
01:03:40.559 --> 01:03:45.720 |
|
let's say you start assigning a negative |
|
|
|
01:03:42.279 --> 01:03:47.440 |
|
reward to toxic utterances for example |
|
|
|
01:03:45.720 --> 01:03:49.200 |
|
if your model stops being a language |
|
|
|
01:03:47.440 --> 01:03:51.920 |
|
model whatsoever that's a bad idea so |
|
|
|
01:03:49.200 --> 01:03:53.400 |
|
you want to keep it as a language model |
|
|
|
01:03:51.920 --> 01:03:55.599 |
|
keep it close enough to still being a |
|
|
|
01:03:53.400 --> 01:03:57.559 |
|
competent language model while you know |
|
|
|
01:03:55.599 --> 01:03:59.599 |
|
like removing the toxic |
|
|
|
01:03:57.559 --> 01:04:03.039 |
|
utterances so there's a number of |
|
|
|
01:03:59.599 --> 01:04:05.680 |
|
methods that people use to do this um uh |
|
|
|
01:04:03.039 --> 01:04:08.359 |
|
the most prominent ones are KL |
|
|
|
01:04:05.680 --> 01:04:10.279 |
|
regularization uh well so the first |
|
|
|
01:04:08.359 --> 01:04:13.119 |
|
most prominent one is KL regularization |
|
|
|
01:04:10.279 --> 01:04:15.839 |
|
and the way this works is basically |
|
|
|
01:04:13.119 --> 01:04:19.400 |
|
you have two |
|
|
|
01:04:15.839 --> 01:04:22.279 |
|
terms the first term is a term that |
|
|
|
01:04:19.400 --> 01:04:25.760 |
|
improves your reward so you have your |
|
|
|
01:04:22.279 --> 01:04:28.039 |
|
old model where your old model is |
|
|
|
01:04:25.760 --> 01:04:31.279 |
|
creating a |
|
|
|
01:04:28.039 --> 01:04:32.440 |
|
probability uh it has a probability here |
|
|
|
01:04:31.279 --> 01:04:34.960 |
|
and then you have the probability |
|
|
|
01:04:32.440 --> 01:04:38.160 |
|
assigned by your new model and then you |
|
|
|
01:04:34.960 --> 01:04:41.200 |
|
have your reward signal here and so this |
|
|
|
01:04:38.160 --> 01:04:43.599 |
|
is basically improving the log odds or |
|
|
|
01:04:41.200 --> 01:04:46.960 |
|
improving the odds of getting a good |
|
|
|
01:04:43.599 --> 01:04:49.720 |
|
reward for high reward |
|
|
|
01:04:46.960 --> 01:04:52.920 |
|
sequences separately from this you have |
|
|
|
01:04:49.720 --> 01:04:55.920 |
|
this KL regularization term and this KL |
|
|
|
01:04:52.920 --> 01:04:58.119 |
|
regularization term is keeping the |
|
|
|
01:04:55.920 --> 01:05:00.279 |
|
scores or rather the |
|
|
|
01:04:58.119 --> 01:05:02.400 |
|
probability distribution of your new |
|
|
|
01:05:00.279 --> 01:05:03.960 |
|
model similar to the probability |
|
|
|
01:05:02.400 --> 01:05:09.200 |
|
distribution of your old |
|
|
|
01:05:03.960 --> 01:05:11.359 |
|
model and this beta parameter basically |
|
|
|
01:05:09.200 --> 01:05:15.240 |
|
you can increase it or decrease it based |
|
|
|
01:05:11.359 --> 01:05:18.400 |
|
on how similar you want |
|
|
|
01:05:15.240 --> 01:05:18.400 |
|
to keep the model |
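
As a minimal sketch of this KL-regularized objective (assuming PyTorch, sequence-level log-probabilities under the new model and the frozen old model, and one scalar reward per sampled sequence; a toy rendering rather than any specific library's implementation):

    import torch

    def kl_regularized_objective(reward: torch.Tensor,
                                 logp_new: torch.Tensor,
                                 logp_old: torch.Tensor,
                                 beta: float = 0.1) -> torch.Tensor:
        # Quantity to maximize, one term per sampled sequence. The reward is
        # shaped by -beta * (log p_new - log p_old), a per-sample estimate of
        # the KL divergence, so the new model is penalized for drifting away
        # from the old one; raising beta keeps the two distributions closer.
        shaped_reward = reward - beta * (logp_new - logp_old)
        # The log-probability term carries the gradient into the new model.
        return (shaped_reward.detach() * logp_new).mean()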
|
|
|
01:05:20.720 --> 01:05:24.640 |
|
another method that people use is |
|
|
|
01:05:23.160 --> 01:05:29.279 |
|
something called proximal policy |
|
|
|
01:05:24.640 --> 01:05:30.920 |
|
optimization or PPO and this is a |
|
|
|
01:05:29.279 --> 01:05:33.920 |
|
method that is based on |
|
|
|
01:05:30.920 --> 01:05:38.160 |
|
clipping uh the |
|
|
|
01:05:33.920 --> 01:05:40.920 |
|
outputs and we define uh this ratio |
|
|
|
01:05:38.160 --> 01:05:43.880 |
|
here so this ratio is equivalent to this |
|
|
|
01:05:40.920 --> 01:05:46.160 |
|
here so it's basically um kind of the |
|
|
|
01:05:43.880 --> 01:05:47.839 |
|
amount that you're learning or the |
|
|
|
01:05:46.160 --> 01:05:51.720 |
|
amount that the new model upweights |
|
|
|
01:05:47.839 --> 01:05:54.039 |
|
high reward sequences and so here we |
|
|
|
01:05:51.720 --> 01:05:58.200 |
|
have the same thing that we had |
|
|
|
01:05:54.039 --> 01:06:01.200 |
|
above so it looks like this but over |
|
|
|
01:05:58.200 --> 01:06:03.720 |
|
here we have a clipped version of this |
|
|
|
01:06:01.200 --> 01:06:07.000 |
|
where essentially what we do is we |
|
|
|
01:06:03.720 --> 01:06:07.000 |
|
clip this |
|
|
|
01:06:21.119 --> 01:06:27.880 |
|
ratio to be within uh a |
|
|
|
01:06:24.720 --> 01:06:32.160 |
|
certain range of the original ratio and |
|
|
|
01:06:27.880 --> 01:06:37.880 |
|
what this is doing is this is |
|
|
|
01:06:32.160 --> 01:06:41.400 |
|
essentially forcing the model to um not |
|
|
|
01:06:37.880 --> 01:06:44.000 |
|
reward large jumps in the space um |
|
|
|
01:06:41.400 --> 01:06:47.559 |
|
because if you take the |
|
|
|
01:06:44.000 --> 01:06:49.160 |
|
minimum and actually I'm sorry I |
|
|
|
01:06:47.559 --> 01:06:50.720 |
|
just realized I might have done |
|
|
|
01:06:49.160 --> 01:06:52.520 |
|
something confusing here because this is |
|
|
|
01:06:50.720 --> 01:06:53.960 |
|
actually higher is better so this isn't |
|
|
|
01:06:52.520 --> 01:06:56.079 |
|
really a loss function this is something |
|
|
|
01:06:53.960 --> 01:06:57.680 |
|
you're attempting to maximize so |
|
|
|
01:06:56.079 --> 01:06:59.839 |
|
in contrast to all of the other things I |
|
|
|
01:06:57.680 --> 01:07:01.680 |
|
was talking about before um this is |
|
|
|
01:06:59.839 --> 01:07:04.400 |
|
something where higher is better instead |
|
|
|
01:07:01.680 --> 01:07:07.599 |
|
of lower is better but anyway basically |
|
|
|
01:07:04.400 --> 01:07:09.599 |
|
by taking the minimum of this you're |
|
|
|
01:07:07.599 --> 01:07:11.960 |
|
encouraging the model |
|
|
|
01:07:09.599 --> 01:07:16.279 |
|
to |
|
|
|
01:07:11.960 --> 01:07:18.559 |
|
uh keep examining the space where you |
|
|
|
01:07:16.279 --> 01:07:20.799 |
|
don't diverge much from the original |
|
|
|
01:07:18.559 --> 01:07:22.920 |
|
model and if the space where the |
|
|
|
01:07:20.799 --> 01:07:25.240 |
|
original model was in is better than the |
|
|
|
01:07:22.920 --> 01:07:27.440 |
|
new space that your model has moved into |
|
|
|
01:07:25.240 --> 01:07:30.920 |
|
you move back towards the original model |
|
|
|
01:07:27.440 --> 01:07:33.000 |
|
so basically like if you |
|
|
|
01:07:30.920 --> 01:07:34.960 |
|
started learning |
|
|
|
01:07:33.000 --> 01:07:37.960 |
|
a model that looked like it was |
|
|
|
01:07:34.960 --> 01:07:40.279 |
|
optimizing uh your reward but then |
|
|
|
01:07:37.960 --> 01:07:43.119 |
|
suddenly the model went off the rails |
|
|
|
01:07:40.279 --> 01:07:45.000 |
|
and um it started generating completely |
|
|
|
01:07:43.119 --> 01:07:47.319 |
|
nonsense outputs that get really bad |
|
|
|
01:07:45.000 --> 01:07:49.119 |
|
reward this will push it back towards |
|
|
|
01:07:47.319 --> 01:07:50.920 |
|
the original policy and that's the basic |
|
|
|
01:07:49.119 --> 01:07:54.279 |
|
idea behind PPO |
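
Here is a sketch of that clipped objective in the same toy PyTorch setup as before (higher is better, matching the correction above):

    import torch

    def ppo_clipped_objective(reward: torch.Tensor,
                              logp_new: torch.Tensor,
                              logp_old: torch.Tensor,
                              eps: float = 0.2) -> torch.Tensor:
        # The ratio p_new / p_old is the amount by which the new model
        # upweights each sampled sequence relative to the old one.
        ratio = torch.exp(logp_new - logp_old)
        unclipped = ratio * reward
        # Clipping the ratio to [1 - eps, 1 + eps] and taking the elementwise
        # minimum removes the incentive for large jumps away from the old
        # policy: an extreme ratio earns no extra objective, and if the move
        # was bad the unclipped term pulls the model back.
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * reward
        return torch.min(unclipped, clipped).mean()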
|
|
|
01:07:50.920 --> 01:07:57.640 |
|
um in terms of what I see people using |
|
|
|
01:07:54.279 --> 01:07:59.799 |
|
um PPO was like really really popular for |
|
|
|
01:07:57.640 --> 01:08:01.880 |
|
a while but I've started to see people |
|
|
|
01:07:59.799 --> 01:08:04.799 |
|
use alternative strategies that use KL |
|
|
|
01:08:01.880 --> 01:08:06.880 |
|
regularization so I don't think |
|
|
|
01:08:04.799 --> 01:08:08.520 |
|
either one of them is like particularly |
|
|
|
01:08:06.880 --> 01:08:10.039 |
|
more popular than any of the others and |
|
|
|
01:08:08.520 --> 01:08:13.720 |
|
this one's a little bit simpler |
|
|
|
01:08:10.039 --> 01:08:13.720 |
|
conceptually so I like this |
|
|
|
01:08:14.880 --> 01:08:19.279 |
|
one cool um any questions about |
|
|
|
01:08:20.359 --> 01:08:26.759 |
|
this okay um and actually one thing I |
|
|
|
01:08:24.640 --> 01:08:29.679 |
|
should mention is um all of these things |
|
|
|
01:08:26.759 --> 01:08:32.120 |
|
are implemented uh in you know whatever |
|
|
|
01:08:29.679 --> 01:08:33.759 |
|
libraries you use like Hugging Face TRL |
|
|
|
01:08:32.120 --> 01:08:35.679 |
|
Transformer Reinforcement Learning as an |
|
|
|
01:08:33.759 --> 01:08:37.040 |
|
example library all of these methods are |
|
|
|
01:08:35.679 --> 01:08:38.400 |
|
implemented there so if you actually |
|
|
|
01:08:37.040 --> 01:08:40.600 |
|
want to use these in practice that's a |
|
|
|
01:08:38.400 --> 01:08:40.600 |
|
good |
|
|
|
01:08:40.839 --> 01:08:46.359 |
|
place to look the next thing is adding a |
|
|
|
01:08:42.920 --> 01:08:48.679 |
|
baseline and so the basic idea is that |
|
|
|
01:08:46.359 --> 01:08:52.199 |
|
you have expectations about your |
|
|
|
01:08:48.679 --> 01:08:54.640 |
|
reward for a particular sentence and um |
|
|
|
01:08:52.199 --> 01:08:56.560 |
|
like let's say we wanted to uh translate |
|
|
|
01:08:54.640 --> 01:08:58.400 |
|
a sentence and we have uh something like |
|
|
|
01:08:56.560 --> 01:09:01.279 |
|
this is an easy sentence and buffalo |
|
|
|
01:08:58.400 --> 01:09:02.920 |
|
buffalo buffalo which is a harder |
|
|
|
01:09:01.279 --> 01:09:07.799 |
|
sentence to |
|
|
|
01:09:02.920 --> 01:09:09.679 |
|
translate and so we have a reward um |
|
|
|
01:09:07.799 --> 01:09:11.759 |
|
if you're not familiar with this example |
|
|
|
01:09:09.679 --> 01:09:13.480 |
|
you can search on Wikipedia for buffalo |
|
|
|
01:09:11.759 --> 01:09:16.759 |
|
buffalo buffalo and you'll find |
|
|
|
01:09:13.480 --> 01:09:19.520 |
|
out what I'm talking about um but uh |
|
|
|
01:09:16.759 --> 01:09:21.440 |
|
there's a reward uh and let's say you |
|
|
|
01:09:19.520 --> 01:09:24.359 |
|
got a reward of 0.8 for the first one |
|
|
|
01:09:21.440 --> 01:09:29.679 |
|
and a reward of 0.3 for the second |
|
|
|
01:09:24.359 --> 01:09:31.679 |
|
one but the problem is if um the first |
|
|
|
01:09:29.679 --> 01:09:33.640 |
|
one actually is really easy and the |
|
|
|
01:09:31.679 --> 01:09:36.120 |
|
second one is really hard getting a |
|
|
|
01:09:33.640 --> 01:09:37.799 |
|
reward of 0.8 for the first one for |
|
|
|
01:09:36.120 --> 01:09:40.080 |
|
like a translation or something is |
|
|
|
01:09:37.799 --> 01:09:41.120 |
|
actually bad right and a reward of 0.3 |
|
|
|
01:09:40.080 --> 01:09:45.239 |
|
is good because you're moving in the |
|
|
|
01:09:41.120 --> 01:09:49.359 |
|
right direction and so you basically um |
|
|
|
01:09:45.239 --> 01:09:52.239 |
|
you have uh reward |
|
|
|
01:09:49.359 --> 01:09:54.960 |
|
minus baseline and this |
|
|
|
01:09:52.239 --> 01:09:56.520 |
|
would give you a negative value for this |
|
|
|
01:09:54.960 --> 01:09:59.320 |
|
first one a positive value for the |
|
|
|
01:09:56.520 --> 01:10:01.360 |
|
second one and so the basic idea is can |
|
|
|
01:09:59.320 --> 01:10:04.400 |
|
we predict a priori how difficult this |
|
|
|
01:10:01.360 --> 01:10:05.440 |
|
example is and then uh adjust our reward |
|
|
|
01:10:04.400 --> 01:10:08.360 |
|
based on |
|
|
|
01:10:05.440 --> 01:10:10.960 |
|
that and |
|
|
|
01:10:08.360 --> 01:10:13.679 |
|
so that's the basic idea you just have |
|
|
|
01:10:10.960 --> 01:10:15.560 |
|
kind of like a baseline model um |
|
|
|
01:10:13.679 --> 01:10:19.320 |
|
that predicts this |
|
|
|
01:10:15.560 --> 01:10:19.320 |
|
and uh you adjust uh |
|
|
|
01:10:19.760 --> 01:10:25.000 |
|
appropriately um there's two major ways |
|
|
|
01:10:22.719 --> 01:10:27.600 |
|
you can do this the first one um the |
|
|
|
01:10:25.000 --> 01:10:29.800 |
|
baseline doesn't need to be anything in particular um |
|
|
|
01:10:27.600 --> 01:10:32.960 |
|
the only hope is that it decreases the |
|
|
|
01:10:29.800 --> 01:10:35.960 |
|
variance in your reward uh and makes |
|
|
|
01:10:32.960 --> 01:10:38.239 |
|
learning more stable um there's two |
|
|
|
01:10:35.960 --> 01:10:40.159 |
|
options that I see done pretty widely |
|
|
|
01:10:38.239 --> 01:10:43.000 |
|
the first one is predicting the final |
|
|
|
01:10:40.159 --> 01:10:47.360 |
|
reward um |
|
|
|
01:10:43.000 --> 01:10:50.960 |
|
using a model that doesn't look at |
|
|
|
01:10:47.360 --> 01:10:53.400 |
|
the answer that you provided at all it |
|
|
|
01:10:50.960 --> 01:10:55.880 |
|
only looks at the input or it only looks |
|
|
|
01:10:53.400 --> 01:10:58.840 |
|
at the intermediate states of uh you |
|
|
|
01:10:55.880 --> 01:11:00.480 |
|
know a model or something and so at the |
|
|
|
01:10:58.840 --> 01:11:03.280 |
|
sentence level you can have one baseline |
|
|
|
01:11:00.480 --> 01:11:04.719 |
|
per sentence um you can also do it at |
|
|
|
01:11:03.280 --> 01:11:10.560 |
|
each decoder |
|
|
|
01:11:04.719 --> 01:11:11.640 |
|
state and this is uh basically you can |
|
|
|
01:11:10.560 --> 01:11:13.040 |
|
do this anytime you're doing |
|
|
|
01:11:11.640 --> 01:11:15.199 |
|
reinforcement learning by just training |
|
|
|
01:11:13.040 --> 01:11:18.199 |
|
a regression model that does this for |
|
|
|
01:11:15.199 --> 01:11:19.679 |
|
you based on the rewards you get the |
|
|
|
01:11:18.199 --> 01:11:21.040 |
|
important thing is the baseline is not |
|
|
|
01:11:19.679 --> 01:11:22.640 |
|
allowed to use any of your actual |
|
|
|
01:11:21.040 --> 01:11:25.679 |
|
predictions because once you start using |
|
|
|
01:11:22.640 --> 01:11:26.640 |
|
the predictions then um it's not |
|
|
|
01:11:25.679 --> 01:11:28.679 |
|
a baseline |
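
A minimal sketch of such a baseline (a hypothetical PyTorch module: it regresses the expected reward from an input-side representation only, so it never conditions on the model's actual prediction):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InputOnlyBaseline(nn.Module):
        # Predicts the expected reward from the input representation alone;
        # because it never sees the sampled output, subtracting it still
        # yields a valid baseline.
        def __init__(self, hidden_dim: int):
            super().__init__()
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, input_repr: torch.Tensor) -> torch.Tensor:
            return self.head(input_repr).squeeze(-1)

    # Trained by regression against observed rewards, e.g.
    #   baseline_loss = F.mse_loss(baseline(input_repr), rewards)
    # and used as:
    #   advantage = rewards - baseline(input_repr).detach()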
|
|
|
01:11:26.640 --> 01:11:30.840 |
|
another option which is |
|
|
|
01:11:28.679 --> 01:11:33.440 |
|
relatively easy to implement but can |
|
|
|
01:11:30.840 --> 01:11:36.320 |
|
still be effective is you calculate the |
|
|
|
01:11:33.440 --> 01:11:38.719 |
|
mean of the rewards in a batch and so if |
|
|
|
01:11:36.320 --> 01:11:40.880 |
|
you have a big batch of data and your |
|
|
|
01:11:38.719 --> 01:11:44.440 |
|
average reward in the batch is like |
|
|
|
01:11:40.880 --> 01:11:46.480 |
|
0.4 uh then you just subtract that 0.4 |
|
|
|
01:11:44.440 --> 01:11:50.080 |
|
uh and calculate your reward based on |
|
|
|
01:11:46.480 --> 01:11:50.080 |
|
that so that's another option that you can |
|
|
|
01:11:51.800 --> 01:11:57.800 |
|
use |
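
The batch-mean version is essentially one line (again a toy PyTorch sketch):

    import torch

    def batch_mean_advantage(rewards: torch.Tensor) -> torch.Tensor:
        # Subtract the batch's average reward: with a batch mean of 0.4, a
        # reward of 0.8 becomes +0.4 and a reward of 0.3 becomes -0.1, so
        # only above-average samples get upweighted.
        return rewards - rewards.mean()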
|
|
|
01:11:53.639 --> 01:12:00.000 |
|
um a kind of extreme example of this uh |
|
|
|
01:11:57.800 --> 01:12:01.199 |
|
of creating a baseline is contrasting |
|
|
|
01:12:00.000 --> 01:12:03.639 |
|
pairwise |
|
|
|
01:12:01.199 --> 01:12:05.880 |
|
examples um or |
|
|
|
01:12:03.639 --> 01:12:08.280 |
|
contrasting different outputs for the |
|
|
|
01:12:05.880 --> 01:12:12.040 |
|
same input |
|
|
|
01:12:08.280 --> 01:12:13.920 |
|
and you can easily learn uh directly |
|
|
|
01:12:12.040 --> 01:12:16.239 |
|
from pairwise human |
|
|
|
01:12:13.920 --> 01:12:18.199 |
|
preferences uh which can provide more |
|
|
|
01:12:16.239 --> 01:12:20.760 |
|
stability because you know one is better |
|
|
|
01:12:18.199 --> 01:12:23.880 |
|
than the other so you essentially can be |
|
|
|
01:12:20.760 --> 01:12:26.199 |
|
sure that uh you're upweighting a better |
|
|
|
01:12:23.880 --> 01:12:29.560 |
|
one and downweighting a worse one |
|
|
|
01:12:26.199 --> 01:12:31.400 |
|
um this is the idea behind DPO which is |
|
|
|
01:12:29.560 --> 01:12:33.719 |
|
a recently pretty popular method but |
|
|
|
01:12:31.400 --> 01:12:36.800 |
|
there's also other previous methods that |
|
|
|
01:12:33.719 --> 01:12:40.199 |
|
did similar things and the way DPO works |
|
|
|
01:12:36.800 --> 01:12:45.040 |
|
is it basically calculates this ratio of |
|
|
|
01:12:40.199 --> 01:12:49.280 |
|
uh the probability of the new |
|
|
|
01:12:45.040 --> 01:12:51.639 |
|
model to the old model but it upweights this |
|
|
|
01:12:49.280 --> 01:12:53.639 |
|
probability for a good output and it |
|
|
|
01:12:51.639 --> 01:12:56.280 |
|
downweights this probability for a bad |
|
|
|
01:12:53.639 --> 01:12:57.679 |
|
output and so |
|
|
|
01:12:56.280 --> 01:13:00.120 |
|
here we have our better outputs over |
|
|
|
01:12:57.679 --> 01:13:02.040 |
|
here and here we have our worse outputs and |
|
|
|
01:13:00.120 --> 01:13:03.600 |
|
it's basically learning to |
|
|
|
01:13:02.040 --> 01:13:05.639 |
|
upweight the probability and downweight |
|
|
|
01:13:03.600 --> 01:13:09.320 |
|
the probability |
|
|
|
01:13:05.639 --> 01:13:09.320 |
|
accordingly |
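
A compact sketch of the DPO loss as described (assuming PyTorch and sequence-level log-probabilities for each preferred/worse pair under the new model and the frozen reference model):

    import torch
    import torch.nn.functional as F

    def dpo_loss(logp_new_better: torch.Tensor, logp_old_better: torch.Tensor,
                 logp_new_worse: torch.Tensor, logp_old_worse: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
        # Each (logp_new - logp_old) is the log of the new-to-old probability
        # ratio; the sigmoid-based loss pushes that ratio up for the preferred
        # output and down for the worse one, with beta scaling how sharply
        # small differences in the ratios move the loss.
        margin = beta * ((logp_new_better - logp_old_better)
                         - (logp_new_worse - logp_old_worse))
        return -F.logsigmoid(margin).mean()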
|
|
|
01:13:09.360 --> 01:13:15.040 |
|
um you can notice that DPO is very |
|
|
|
01:13:12.280 --> 01:13:18.040 |
|
similar to PPO in that it's |
|
|
|
01:13:15.040 --> 01:13:19.679 |
|
using these ratios but the |
|
|
|
01:13:18.040 --> 01:13:21.520 |
|
disadvantage of this is you obviously |
|
|
|
01:13:19.679 --> 01:13:23.120 |
|
require pairwise judgments and you can't |
|
|
|
01:13:21.520 --> 01:13:26.120 |
|
learn a model if you don't have these |
|
|
|
01:13:23.120 --> 01:13:28.080 |
|
pairwise judgments so |
|
|
|
01:13:26.120 --> 01:13:30.760 |
|
the |
|
|
|
01:13:28.080 --> 01:13:33.159 |
|
beta yeah so the beta term is |
|
|
|
01:13:30.760 --> 01:13:35.840 |
|
basically a normalization term it's a |
|
|
|
01:13:33.159 --> 01:13:39.960 |
|
hyperparameter um |
|
|
|
01:13:35.840 --> 01:13:41.840 |
|
for DPO sorry I read the paper right |
|
|
|
01:13:39.960 --> 01:13:43.639 |
|
when it came out and I don't remember if |
|
|
|
01:13:41.840 --> 01:13:45.600 |
|
it's a direct derivation from the KL |
|
|
|
01:13:43.639 --> 01:13:47.960 |
|
divergence term or not but I think it |
|
|
|
01:13:45.600 --> 01:13:49.800 |
|
might be um I'd have to go back and look |
|
|
|
01:13:47.960 --> 01:13:50.480 |
|
at the paper but basically |
|
|
|
01:13:49.800 --> 01:13:53.600 |
|
the |
|
|
|
01:13:50.480 --> 01:13:56.760 |
|
larger this is the larger the |
|
|
|
01:13:53.600 --> 01:13:59.320 |
|
gradient steps you'll be taking |
|
|
|
01:13:56.760 --> 01:14:00.639 |
|
it also um like you'll notice there |
|
|
|
01:13:59.320 --> 01:14:03.400 |
|
sorry I didn't mention this but you'll |
|
|
|
01:14:00.639 --> 01:14:06.120 |
|
notice there's a sigmoid term here so |
|
|
|
01:14:03.400 --> 01:14:09.000 |
|
the |
|
|
|
01:14:06.120 --> 01:14:10.080 |
|
larger you increase the beta |
|
|
|
01:14:09.000 --> 01:14:13.239 |
|
the |
|
|
|
01:14:10.080 --> 01:14:16.600 |
|
more small differences in these |
|
|
|
01:14:13.239 --> 01:14:18.719 |
|
values matter it basically stretches |
|
|
|
01:14:16.600 --> 01:14:22.280 |
|
or shrinks the sigmoid with respect to |
|
|
|
01:14:18.719 --> 01:14:24.120 |
|
how big the beta is so it will |
|
|
|
01:14:22.280 --> 01:14:25.800 |
|
affect how much small differences |
|
|
|
01:14:24.120 --> 01:14:27.960 |
|
in this will affect the loss |
|
|
|
01:14:25.800 --> 01:14:30.120 |
|
but I think this was derived from the |
|
|
|
01:14:27.960 --> 01:14:31.760 |
|
KL regularization term that we had |
|
|
|
01:14:30.120 --> 01:14:34.400 |
|
previously in |
|
|
|
01:14:31.760 --> 01:14:35.800 |
|
this slide here but I have to go |
|
|
|
01:14:34.400 --> 01:14:40.520 |
|
back and double check unless somebody |
|
|
|
01:14:35.800 --> 01:14:43.239 |
|
knows it is okay good yeah |
|
|
|
01:14:40.520 --> 01:14:45.000 |
|
so I don't want to say wrong things but |
|
|
|
01:14:43.239 --> 01:14:48.239 |
|
I also don't want |
|
|
|
01:14:45.000 --> 01:14:50.920 |
|
to okay cool um and so then increasing |
|
|
|
01:14:48.239 --> 01:14:55.080 |
|
batch size |
|
|
|
01:14:50.920 --> 01:14:57.360 |
|
um another thing is that |
|
|
|
01:14:55.080 --> 01:14:58.440 |
|
reinforcement |
|
|
|
01:14:57.360 --> 01:14:59.920 |
|
learning is going to have higher |
|
|
|
01:14:58.440 --> 01:15:01.400 |
|
variance than maximum likelihood |
|
|
|
01:14:59.920 --> 01:15:04.199 |
|
estimation just because we're doing |
|
|
|
01:15:01.400 --> 01:15:07.840 |
|
sampling and other things like this and |
|
|
|
01:15:04.199 --> 01:15:09.440 |
|
um so one very simple thing you can do |
|
|
|
01:15:07.840 --> 01:15:11.280 |
|
is just increase the number of examples |
|
|
|
01:15:09.440 --> 01:15:13.679 |
|
or rollouts that you do before an update |
|
|
|
01:15:11.280 --> 01:15:15.800 |
|
to stabilize and so I would definitely |
|
|
|
01:15:13.679 --> 01:15:17.480 |
|
suggest that if you're seeing any |
|
|
|
01:15:15.800 --> 01:15:18.679 |
|
instability after doing all of the tricks |
|
|
|
01:15:17.480 --> 01:15:20.400 |
|
that I mentioned before that you |
|
|
|
01:15:18.679 --> 01:15:23.040 |
|
increase your batch size and often that |
|
|
|
01:15:20.400 --> 01:15:25.480 |
|
can just resolve your problems |
|
|
|
01:15:23.040 --> 01:15:28.760 |
|
um another uh |
|
|
|
01:15:25.480 --> 01:15:30.560 |
|
thing that people often do is um save |
|
|
|
01:15:28.760 --> 01:15:32.040 |
|
many many previous rollouts because |
|
|
|
01:15:30.560 --> 01:15:34.199 |
|
generally doing rollouts |
|
|
|
01:15:32.040 --> 01:15:37.840 |
|
and collecting |
|
|
|
01:15:34.199 --> 01:15:39.560 |
|
rewards is more expensive and so um you |
|
|
|
01:15:37.840 --> 01:15:42.360 |
|
can save the rollouts that you have |
|
|
|
01:15:39.560 --> 01:15:43.840 |
|
done before and uh keep them around so |
|
|
|
01:15:42.360 --> 01:15:46.600 |
|
you can update parameters with larger |
|
|
|
01:15:43.840 --> 01:15:50.800 |
|
batches in a more efficient way |
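
A toy sketch of that rollout reuse (the sampling function is a stand-in for generation plus reward collection; note that updating on rollouts from an older policy is the situation the probability ratios above are meant to correct for):

    import random
    from collections import deque

    def sample_rollouts():
        # Stand-in for the expensive step: generate outputs and collect
        # rewards; here it just fabricates (prompt, output, reward) tuples.
        return [("prompt", "output", random.random()) for _ in range(4)]

    buffer = deque(maxlen=4096)   # keep many previous rollouts around

    for step in range(100):
        buffer.extend(sample_rollouts())   # expensive: rollout + reward
        if len(buffer) >= 64:
            batch = random.sample(list(buffer), 64)
            # ... compute the loss on `batch` and update parameters here,
            # using a larger and therefore lower-variance batch ...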
|
|
|
01:15:46.600 --> 01:15:53.120 |
|
cool so that's all I have uh I just |
|
|
|
01:15:50.800 --> 01:15:54.400 |
|
realized we're exactly at time so uh I |
|
|
|
01:15:53.120 --> 01:15:56.440 |
|
should finish up here but I'll be happy |
|
|
|
01:15:54.400 --> 01:15:59.440 |
|
to take any |
|
|
|
01:15:56.440 --> 01:15:59.440 |
|
questions |
|
|
|
01:16:01.679 --> 01:16:04.679 |
|
thanks |
|
|