|
WEBVTT |
|
|
|
00:00:01.319 --> 00:00:07.560 |
|
um today I want to talk about prompting |
|
|
|
00:00:03.919 --> 00:00:09.639 |
|
and uh prompting is kind of a new uh |
|
|
|
00:00:07.560 --> 00:00:11.320 |
|
Paradigm as of a few years ago with |
|
|
|
00:00:09.639 --> 00:00:15.120 |
|
interacting with models it's now kind of |
|
|
|
00:00:11.320 --> 00:00:16.880 |
|
the standard uh in doing so and |
|
|
|
00:00:15.120 --> 00:00:19.880 |
|
basically what we do is we encourage a |
|
|
|
00:00:16.880 --> 00:00:21.840 |
|
pre-trained model to make predictions by |
|
|
|
00:00:19.880 --> 00:00:24.039 |
|
providing a textual prompt specifying |
|
|
|
00:00:21.840 --> 00:00:25.960 |
|
the task to be done this is how you |
|
|
|
00:00:24.039 --> 00:00:28.960 |
|
always interact with chat GPT or |
|
|
|
00:00:25.960 --> 00:00:33.200 |
|
anything else like this |
|
|
|
00:00:28.960 --> 00:00:36.200 |
|
um so prompting fundamentals uh the way |
|
|
|
00:00:33.200 --> 00:00:38.360 |
|
that basic prompting works is you append |
|
|
|
00:00:36.200 --> 00:00:42.079 |
|
a textual string to the beginning of the |
|
|
|
00:00:38.360 --> 00:00:44.079 |
|
output and you complete it and exactly |
|
|
|
00:00:42.079 --> 00:00:45.800 |
|
how you complete it can be based on any |
|
|
|
00:00:44.079 --> 00:00:48.800 |
|
of the generation methods that we talked |
|
|
|
00:00:45.800 --> 00:00:51.559 |
|
about in the previous class uh you know |
|
|
|
00:00:48.800 --> 00:00:55.160 |
|
beam search it can be uh sampling it can |
|
|
|
00:00:51.559 --> 00:00:58.480 |
|
be MBR or self-consistency or whatever |
|
|
|
00:00:55.160 --> 00:01:00.960 |
|
else um so I I put in when a dog sees a |
|
|
|
00:00:58.480 --> 00:01:03.680 |
|
squirrel it will usually |
|
|
|
00:01:00.960 --> 00:01:06.280 |
|
um into GPT-2 small which is a very small
|
|
|
00:01:03.680 --> 00:01:08.960 |
|
language model it says be afraid of
|
|
|
00:01:06.280 --> 00:01:10.560 |
|
anything unusual as an exception that's
|
|
|
00:01:08.960 --> 00:01:13.720 |
|
when a squirrel is usually afraid to |
|
|
|
00:01:10.560 --> 00:01:16.280 |
|
bite um so as you can see if the model
|
|
|
00:01:13.720 --> 00:01:19.560 |
|
is not super great you get a kind of not |
|
|
|
00:01:16.280 --> 00:01:24.119 |
|
very great response also um but then I |
|
|
|
00:01:19.560 --> 00:01:25.960 |
|
fed it into GPT-2 XL and uh what it says
|
|
|
00:01:24.119 --> 00:01:28.159 |
|
when a dog sees a squirrel it will |
|
|
|
00:01:25.960 --> 00:01:30.640 |
|
usually lick the squirrel it will also |
|
|
|
00:01:28.159 --> 00:01:34.000 |
|
touch its nose to the squirrel the tail |
|
|
|
00:01:30.640 --> 00:01:37.880 |
|
and nose if it can um which might be |
|
|
|
00:01:34.000 --> 00:01:40.280 |
|
true um one thing I I should note is |
|
|
|
00:01:37.880 --> 00:01:43.040 |
|
when I generated these I used uh like |
|
|
|
00:01:40.280 --> 00:01:45.200 |
|
actual regular ancestral sampling so I |
|
|
|
00:01:43.040 --> 00:01:47.159 |
|
set the temperature to one I didn't do |
|
|
|
00:01:45.200 --> 00:01:49.600 |
|
top-p didn't do top-k or anything
|
|
|
00:01:47.159 --> 00:01:51.040 |
|
like this so this is a raw view of what |
|
|
|
00:01:49.600 --> 00:01:53.799 |
|
the language model thinks is like |
|
|
|
00:01:51.040 --> 00:01:58.479 |
|
actually a reasonable answer um if I |
|
|
|
00:01:53.799 --> 00:02:00.159 |
|
modified the code to do something else |
|
|
|
00:01:58.479 --> 00:02:02.560 |
|
actually maybe I can I can do that that |
|
|
|
00:02:00.159 --> 00:02:04.960 |
|
right now but if I modified the code to |
|
|
|
00:02:02.560 --> 00:02:08.879 |
|
use a |
|
|
|
00:02:04.960 --> 00:02:12.119 |
|
different output we can actually see uh |
|
|
|
00:02:08.879 --> 00:02:12.119 |
|
the different result that we |
|
|
|
00:02:13.599 --> 00:02:17.959 |
|
get since I I have it here |
|
|
|
00:02:18.360 --> 00:02:23.879 |
|
anyway actually sorry I'll need to |
|
|
|
00:02:20.360 --> 00:02:27.239 |
|
modify the code on my my screen here |
|
|
|
00:02:23.879 --> 00:02:32.120 |
|
um so I will |
|
|
|
00:02:27.239 --> 00:02:35.040 |
|
set uh top-k to 50 top-p to
|
|
|
00:02:32.120 --> 00:02:38.360 |
|
0.95 so you see I I changed the |
|
|
|
00:02:35.040 --> 00:02:38.360 |
|
generation parameters |
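The top-k and top-p settings just mentioned can be sketched in plain Python. This is a minimal illustration of what those filters do to the next-token distribution before sampling; the probabilities and token names below are invented for illustration.

```python
# A minimal sketch (plain Python, invented probabilities) of top-k and
# top-p (nucleus) filtering applied to a next-token distribution.
def filter_top_k_top_p(probs, top_k, top_p):
    # Keep only the top_k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Then keep the smallest prefix whose cumulative mass reaches top_p.
    kept, total = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    # Renormalize the surviving tokens into a proper distribution.
    z = sum(p for _, p in kept)
    return {tok: p / z for tok, p in kept}

probs = {"bark": 0.50, "chase": 0.30, "lick": 0.15, "fly": 0.05}
print(filter_top_k_top_p(probs, top_k=3, top_p=0.9))
# "fly" is removed by top_k=3; the remaining tokens are renormalized
```

Sampling then draws from this filtered distribution instead of the raw one, which is why the outputs look more "typical" than pure ancestral sampling.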
|
|
|
00:02:38.760 --> 00:02:46.400 |
|
here and I'll uh run all of |
|
|
|
00:02:43.159 --> 00:02:50.319 |
|
them you can see the uh the result that |
|
|
|
00:02:46.400 --> 00:02:51.840 |
|
we get in a little bit but basically um |
|
|
|
00:02:50.319 --> 00:02:54.800 |
|
so this is the standard method for |
|
|
|
00:02:51.840 --> 00:02:57.319 |
|
prompting I intentionally used GPT-2 small
|
|
|
00:02:54.800 --> 00:02:58.800 |
|
and GPT-2 XL here because these are raw
|
|
|
00:02:57.319 --> 00:03:01.879 |
|
base language models they were just
|
|
|
00:02:58.800 --> 00:03:05.440 |
|
pre-trained as language models and so |
|
|
|
00:03:01.879 --> 00:03:06.920 |
|
when we prompt them we're getting a |
|
|
|
00:03:05.440 --> 00:03:09.200 |
|
language model that was just trained on |
|
|
|
00:03:06.920 --> 00:03:12.280 |
|
lots of text's view of what is likely
|
|
|
00:03:09.200 --> 00:03:13.760 |
|
next text um there are other ways to |
|
|
|
00:03:12.280 --> 00:03:15.599 |
|
train language models like instruction |
|
|
|
00:03:13.760 --> 00:03:18.040 |
|
tuning and RLHF which I'm going to be
|
|
|
00:03:15.599 --> 00:03:19.480 |
|
talking about in future classes and if that's
|
|
|
00:03:18.040 --> 00:03:21.760 |
|
the case you might get a different |
|
|
|
00:03:19.480 --> 00:03:23.159 |
|
response here so when a dog sees a |
|
|
|
00:03:21.760 --> 00:03:25.720 |
|
squirrel it will usually get angry |
|
|
|
00:03:23.159 --> 00:03:27.319 |
|
scratched the squirrel and run off uh |
|
|
|
00:03:25.720 --> 00:03:29.080 |
|
some dogs may also attempt to capture |
|
|
|
00:03:27.319 --> 00:03:30.799 |
|
the squirrel or attempt to eat it dogs |
|
|
|
00:03:29.080 --> 00:03:32.599 |
|
will often try to pick up the squirrel and
|
|
|
00:03:30.799 --> 00:03:36.400 |
|
eat it |
|
|
|
00:03:32.599 --> 00:03:40.680 |
|
it was more violent than I
|
|
|
00:03:36.400 --> 00:03:44.280 |
|
expected um
|
|
|
00:03:40.680 --> 00:03:45.720 |
|
so but anyway I think that like actually |
|
|
|
00:03:44.280 --> 00:03:47.080 |
|
you can see that when I used the |
|
|
|
00:03:45.720 --> 00:03:48.920 |
|
different generation parameters it |
|
|
|
00:03:47.080 --> 00:03:51.480 |
|
actually gave me something that was |
|
|
|
00:03:48.920 --> 00:03:54.319 |
|
maybe more typical than lick so lick is |
|
|
|
00:03:51.480 --> 00:03:56.840 |
|
maybe a kind of unusual uh answer here |
|
|
|
00:03:54.319 --> 00:03:58.680 |
|
but anyway |
|
|
|
00:03:56.840 --> 00:04:03.040 |
|
cool |
|
|
|
00:03:58.680 --> 00:04:05.680 |
|
so that's the basic idea of prompting we |
|
|
|
00:04:03.040 --> 00:04:08.480 |
|
tend to use prompting to try to solve |
|
|
|
00:04:05.680 --> 00:04:10.680 |
|
problems also so it's not just to |
|
|
|
00:04:08.480 --> 00:04:14.200 |
|
complete text although completing text |
|
|
|
00:04:10.680 --> 00:04:17.320 |
|
is useful and important like I complete |
|
|
|
00:04:14.200 --> 00:04:19.199 |
|
text in my Gmail all the time uh you |
|
|
|
00:04:17.320 --> 00:04:20.600 |
|
know it it's constantly giving me |
|
|
|
00:04:19.199 --> 00:04:23.440 |
|
suggestions about what I should write |
|
|
|
00:04:20.600 --> 00:04:24.800 |
|
next and I do tab autocomplete um you
|
|
|
00:04:23.440 --> 00:04:28.040 |
|
know on your phone you're doing that |
|
|
|
00:04:24.800 --> 00:04:29.919 |
|
that's also using a language model um |
|
|
|
00:04:28.040 --> 00:04:32.320 |
|
but very often we'll use prompting to do |
|
|
|
00:04:29.919 --> 00:04:34.440 |
|
things other than just completing texts
|
|
|
00:04:32.320 --> 00:04:36.000 |
|
and when we do this uh this is kind of |
|
|
|
00:04:34.440 --> 00:04:38.199 |
|
the standard workflow for how we solve |
|
|
|
00:04:36.000 --> 00:04:41.280 |
|
NLP tasks with prompting the way we do |
|
|
|
00:04:38.199 --> 00:04:43.360 |
|
this is we fill in a prompt template |
|
|
|
00:04:41.280 --> 00:04:46.080 |
|
predict the answer and post-process the |
|
|
|
00:04:43.360 --> 00:04:46.080 |
|
answer in some |
|
|
|
00:04:46.320 --> 00:04:51.880 |
|
way so prompt templates are templates |
|
|
|
00:04:49.280 --> 00:04:55.280 |
|
that you will
|
|
|
00:04:51.880 --> 00:04:57.479 |
|
fill in with an actual input and so if |
|
|
|
00:04:55.280 --> 00:05:00.479 |
|
we have an input X which is something |
|
|
|
00:04:57.479 --> 00:05:04.880 |
|
like I love this movie our template will |
|
|
|
00:05:00.479 --> 00:05:08.360 |
|
be something like X overall it was Z or |
|
|
|
00:05:04.880 --> 00:05:10.680 |
|
overall it was and so if we do that when |
|
|
|
00:05:08.360 --> 00:05:13.320 |
|
we actually want to make a prediction we |
|
|
|
00:05:10.680 --> 00:05:14.840 |
|
will uh convert this into the actual |
|
|
|
00:05:13.320 --> 00:05:16.880 |
|
prompt we feed into the language model |
|
|
|
00:05:14.840 --> 00:05:20.639 |
|
by filling in the template um I love |
|
|
|
00:05:16.880 --> 00:05:24.919 |
|
this movie overall it was blank and then |
|
|
|
00:05:20.639 --> 00:05:24.919 |
|
fill this uh continuation |
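The template-filling step just described, using the lecture's example input, might look like this in Python; the "[X]" slot marker is just an illustrative choice, not a standard.

```python
# A sketch of filling a prompt template with an actual input before
# asking the model to continue it. "[X]" as the slot marker is an
# illustrative convention, not from the lecture.
def fill_template(template: str, x: str) -> str:
    return template.replace("[X]", x)

template = "[X] Overall, it was"
prompt = fill_template(template, "I love this movie.")
print(prompt)  # I love this movie. Overall, it was
```

The model's continuation of this prompt (e.g. "fantastic") is then what gets post-processed into an answer.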
|
|
|
00:05:25.840 --> 00:05:31.919 |
|
in a particular variety uh |
|
|
|
00:05:30.000 --> 00:05:34.039 |
|
that we use very broadly nowadays |
|
|
|
00:05:31.919 --> 00:05:36.240 |
|
because a lot of models are trained as |
|
|
|
00:05:34.039 --> 00:05:38.240 |
|
chatbots um but actually even if they're |
|
|
|
00:05:36.240 --> 00:05:41.199 |
|
not trained as chatbots this still works |
|
|
|
00:05:38.240 --> 00:05:46.199 |
|
to some extent um is a chat |
|
|
|
00:05:41.199 --> 00:05:49.919 |
|
prompt and so usually the way we we do |
|
|
|
00:05:46.199 --> 00:05:53.240 |
|
this is we specify inputs in a format |
|
|
|
00:05:49.919 --> 00:05:55.800 |
|
called the open AI messages format and |
|
|
|
00:05:53.240 --> 00:05:58.199 |
|
uh this is this is what it looks like |
|
|
|
00:05:55.800 --> 00:06:03.759 |
|
each we have a |
|
|
|
00:05:58.199 --> 00:06:07.680 |
|
list of messages each message is given a
|
|
|
00:06:03.759 --> 00:06:10.280 |
|
role and content and here so we have the |
|
|
|
00:06:07.680 --> 00:06:12.479 |
|
role of system and the content is please |
|
|
|
00:06:10.280 --> 00:06:15.319 |
|
classify movie reviews as positive or |
|
|
|
00:06:12.479 --> 00:06:17.400 |
|
negative uh then we have the role user |
|
|
|
00:06:15.319 --> 00:06:21.039 |
|
uh this movie is a |
|
|
|
00:06:17.400 --> 00:06:24.919 |
|
banger um and then we have roles uh |
|
|
|
00:06:21.039 --> 00:06:27.240 |
|
system message uh so as for the roles we
|
|
|
00:06:24.919 --> 00:06:29.639 |
|
have the system and the system is a |
|
|
|
00:06:27.240 --> 00:06:31.560 |
|
message provided to the system to |
|
|
|
00:06:29.639 --> 00:06:33.560 |
|
influence its behavior it's to explain
|
|
|
00:06:31.560 --> 00:06:39.240 |
|
to it |
|
|
|
00:06:33.560 --> 00:06:40.840 |
|
like how it should be working um and so |
|
|
|
00:06:39.240 --> 00:06:43.199 |
|
you can see that this is explaining to |
|
|
|
00:06:40.840 --> 00:06:46.400 |
|
the system how it should be working user |
|
|
|
00:06:43.199 --> 00:06:48.680 |
|
is the message input by the user um and |
|
|
|
00:06:46.400 --> 00:06:51.160 |
|
so this could be just a single message |
|
|
|
00:06:48.680 --> 00:06:53.520 |
|
or if you have a multi-turn dialogue it |
|
|
|
00:06:51.160 --> 00:06:55.080 |
|
can be like user and then assistant and |
|
|
|
00:06:53.520 --> 00:06:56.680 |
|
then user and then assistant and then |
|
|
|
00:06:55.080 --> 00:06:59.400 |
|
user and then assistant and that makes |
|
|
|
00:06:56.680 --> 00:07:00.680 |
|
it clear that it's a multi-turn dialogue
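A chat prompt in the OpenAI messages format described here, built from the lecture's example; the assistant turn and second user turn are invented to show the multi-turn alternation.

```python
# The OpenAI messages format: a list of messages, each a dict with a
# "role" and "content". The assistant/second-user turns below are
# invented to illustrate multi-turn alternation.
messages = [
    {"role": "system",
     "content": "Please classify movie reviews as positive or negative."},
    {"role": "user", "content": "This movie is a banger."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "This movie put me to sleep."},
]
print([m["role"] for m in messages])
# ['system', 'user', 'assistant', 'user']
```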
|
|
|
00:06:59.400 --> 00:07:02.800 |
|
so if you have a multi-turn dialogue in
|
|
|
00:07:00.680 --> 00:07:06.319 |
|
chat GPT that's how they're feeding it |
|
|
|
00:07:02.800 --> 00:07:06.319 |
|
in um into the |
|
|
|
00:07:06.479 --> 00:07:12.440 |
|
system so what's happening behind the |
|
|
|
00:07:08.840 --> 00:07:14.160 |
|
scenes with these chat prompts basically |
|
|
|
00:07:12.440 --> 00:07:17.720 |
|
they're being converted into token |
|
|
|
00:07:14.160 --> 00:07:19.680 |
|
strings and then fed into the model so |
|
|
|
00:07:17.720 --> 00:07:21.800 |
|
despite the fact that this is fed in in |
|
|
|
00:07:19.680 --> 00:07:23.560 |
|
this format and it makes you think that |
|
|
|
00:07:21.800 --> 00:07:25.120 |
|
maybe something special is going on |
|
|
|
00:07:23.560 --> 00:07:28.360 |
|
actually in most cases these are just |
|
|
|
00:07:25.120 --> 00:07:30.199 |
|
being fed into the model uh as a prompt |
|
|
|
00:07:28.360 --> 00:07:34.560 |
|
so these are just kind of special |
|
|
|
00:07:30.199 --> 00:07:36.879 |
|
version of a template so here we
|
|
|
00:07:34.560 --> 00:07:40.560 |
|
have um this is what the Llama template |
|
|
|
00:07:36.879 --> 00:07:43.319 |
|
looks like so basically you have um |
|
|
|
00:07:40.560 --> 00:07:46.560 |
|
square bracket INST and then for the
|
|
|
00:07:43.319 --> 00:07:49.280 |
|
system message it's angle bracket angle



00:07:46.560 --> 00:07:51.240

bracket SYS close angle bracket close



00:07:49.280 --> 00:07:53.720

angle bracket and
|
|
|
00:07:51.240 --> 00:07:55.759 |
|
then the actual system message and then |
|
|
|
00:07:53.720 --> 00:07:58.479 |
|
you have uh this closing out the system |
|
|
|
00:07:55.759 --> 00:08:01.240 |
|
message this closing out the instruction |
|
|
|
00:07:58.479 --> 00:08:04.120 |
|
then the user message is surrounded by INST and
|
|
|
00:08:01.240 --> 00:08:06.599 |
|
then the assistant is just like a |
|
|
|
00:08:04.120 --> 00:08:08.400 |
|
regular string so this is what the |
|
|
|
00:08:06.599 --> 00:08:12.319 |
|
actual textual string that's fed into |
|
|
|
00:08:08.400 --> 00:08:14.199 |
|
Llama chat models we can contrast
|
|
|
00:08:12.319 --> 00:08:19.440 |
|
that to some other models so Alpaca
|
|
|
00:08:14.199 --> 00:08:22.400 |
|
looks like this um uh so we have like |
|
|
|
00:08:19.440 --> 00:08:24.879 |
|
hash instruction colon and then the |
|
|
|
00:08:22.400 --> 00:08:26.639 |
|
instruction for the user there's
|
|
|
00:08:24.879 --> 00:08:28.879 |
|
no distinction between system and user |
|
|
|
00:08:26.639 --> 00:08:31.960 |
|
so it's like hash instruction and then |
|
|
|
00:08:28.879 --> 00:08:35.240 |
|
the user message and then hash response |
|
|
|
00:08:31.960 --> 00:08:37.760 |
|
and then the assistant so it's not super
|
|
|
00:08:35.240 --> 00:08:39.640 |
|
important which one we use here um the |
|
|
|
00:08:37.760 --> 00:08:41.919 |
|
important thing is that this matches |
|
|
|
00:08:39.640 --> 00:08:44.039 |
|
with what the model was trained on and
|
|
|
00:08:41.919 --> 00:08:46.640 |
|
I'll show you some example uh you know |
|
|
|
00:08:44.039 --> 00:08:50.680 |
|
I'll talk about that in more detail |
|
|
|
00:08:46.640 --> 00:08:52.880 |
|
later and we have a reference uh that I |
|
|
|
00:08:50.680 --> 00:08:56.600 |
|
got this uh |
|
|
|
00:08:52.880 --> 00:08:58.519 |
|
from and there's this toolkit that I um |
|
|
|
00:08:56.600 --> 00:09:02.680 |
|
I rather like recently it's called



00:08:58.519 --> 00:09:05.079

LiteLLM it makes it very easy to uh query
|
|
|
00:09:02.680 --> 00:09:07.240 |
|
different llms uh and kind of like |
|
|
|
00:09:05.079 --> 00:09:09.320 |
|
unified things so basically you can |
|
|
|
00:09:07.240 --> 00:09:11.800 |
|
query many different types of LLMs like
|
|
|
00:09:09.320 --> 00:09:14.440 |
|
OpenAI or open source models or other
|
|
|
00:09:11.800 --> 00:09:17.079 |
|
things like that and what happens behind |
|
|
|
00:09:14.440 --> 00:09:19.120 |
|
the scene is it basically takes um the |
|
|
|
00:09:17.079 --> 00:09:20.839 |
|
open AI messages format and converts it |
|
|
|
00:09:19.120 --> 00:09:22.880 |
|
into the appropriate prompt format for |
|
|
|
00:09:20.839 --> 00:09:24.680 |
|
whatever model you're using or the |
|
|
|
00:09:22.880 --> 00:09:27.120 |
|
appropriate API calls for whatever thing |
|
|
|
00:09:24.680 --> 00:09:29.800 |
|
you're using but |
|
|
|
00:09:27.120 --> 00:09:31.399 |
|
um this here basically |
|
|
|
00:09:29.800 --> 00:09:33.800 |
|
um if you click through this link shows |
|
|
|
00:09:31.399 --> 00:09:35.959 |
|
you okay this is what it looks like for |
|
|
|
00:09:33.800 --> 00:09:37.880 |
|
alpaca um so you have the instruction |
|
|
|
00:09:35.959 --> 00:09:40.920 |
|
instruction response this is what it |
|
|
|
00:09:37.880 --> 00:09:44.880 |
|
looks like for llama 2 chat this is what |
|
|
|
00:09:40.920 --> 00:09:48.480 |
|
it looks like for Ollama
|
|
|
00:09:44.880 --> 00:09:49.920 |
|
this is what it looks like for Mistral
|
|
|
00:09:48.480 --> 00:09:52.160 |
|
and other things like that so you see |
|
|
|
00:09:49.920 --> 00:09:53.440 |
|
all of these are very similar but |
|
|
|
00:09:52.160 --> 00:09:55.000 |
|
they're like slightly different and |
|
|
|
00:09:53.440 --> 00:09:58.120 |
|
getting these right is actually kind of |
|
|
|
00:09:55.000 --> 00:10:01.120 |
|
important for the model doing a good |
|
|
|
00:09:58.120 --> 00:10:01.120 |
|
job |
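The chat-to-string flattening just described can be sketched as below. The format strings are paraphrased from the lecture's slides for Llama-2-chat and Alpaca; the exact whitespace and special-token details should be checked against the actual model cards before relying on them.

```python
# A sketch of flattening a chat prompt into the raw text string a model
# was trained on. Format details are paraphrased from the slides, not
# authoritative.
def to_llama2_chat(system: str, user: str) -> str:
    # Llama-2-chat style: [INST] ... [/INST] with a <<SYS>> block inside.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def to_alpaca(instruction: str) -> str:
    # Alpaca style: no separate system role, just headed sections.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(to_llama2_chat("Classify movie reviews as positive or negative.",
                     "This movie is a banger."))
```

A library like LiteLLM does this kind of conversion behind the scenes, turning the messages format into whatever string or API call the chosen model expects.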
|
|
|
00:10:03.640 --> 00:10:10.399 |
|
um any questions about |
|
|
|
00:10:05.880 --> 00:10:15.360 |
|
this yeah like say you start the prompt with
|
|
|
00:10:10.399 --> 00:10:18.160 |
|
this um input and then you started similar
|
|
|
00:10:15.360 --> 00:10:21.320 |
|
without |
|
|
|
00:10:18.160 --> 00:10:24.640 |
|
model could you give an example yeah so |
|
|
|
00:10:21.320 --> 00:10:28.040 |
|
say um my account is a great movie or |
|
|
|
00:10:24.640 --> 00:10:31.040 |
|
this movie is great in front of I put |
|
|
|
00:10:28.040 --> 00:10:31.040 |
|
UMR |
|
|
|
00:10:34.279 --> 00:10:39.519 |
|
model |
|
|
|
00:10:36.399 --> 00:10:42.440 |
|
so it depends a lot on the
|
|
|
00:10:39.519 --> 00:10:45.959 |
|
model the reason why this system
|
|
|
00:10:42.440 --> 00:10:48.720 |
|
message was input here in the first |
|
|
|
00:10:45.959 --> 00:10:52.440 |
|
place was this wasn't originally a |
|
|
|
00:10:48.720 --> 00:10:54.240 |
|
feature of open AI models uh open AI was |
|
|
|
00:10:52.440 --> 00:10:56.440 |
|
the first place to introduce this which |
|
|
|
00:10:54.240 --> 00:10:58.519 |
|
is why I'm calling it the OpenAI messages
|
|
|
00:10:56.440 --> 00:10:59.800 |
|
format they didn't originally have
|
|
|
00:10:58.519 --> 00:11:02.360 |
|
something like this but they were having |
|
|
|
00:10:59.800 --> 00:11:04.360 |
|
lots of trouble with um people trying to |
|
|
|
00:11:02.360 --> 00:11:07.600 |
|
reveal the prompts that were given to |
|
|
|
00:11:04.360 --> 00:11:09.680 |
|
systems uh like called like prompt |
|
|
|
00:11:07.600 --> 00:11:12.040 |
|
injection attacks or like jailbreaking |
|
|
|
00:11:09.680 --> 00:11:15.399 |
|
attacks or stuff like that and so the
|
|
|
00:11:12.040 --> 00:11:17.079 |
|
models would basically reveal this |
|
|
|
00:11:15.399 --> 00:11:19.600 |
|
prompt that was being used behind the |
|
|
|
00:11:17.079 --> 00:11:22.760 |
|
scenes by whatever customer of OpenAI
|
|
|
00:11:19.600 --> 00:11:26.120 |
|
was like deploying a system and so in |
|
|
|
00:11:22.760 --> 00:11:29.120 |
|
order to fix this basically what open AI |
|
|
|
00:11:26.120 --> 00:11:30.480 |
|
did I believe like they
|
|
|
00:11:29.120 --> 00:11:32.279 |
|
don't actually tell you exactly what |
|
|
|
00:11:30.480 --> 00:11:36.040 |
|
they did ever but I'm assuming what they |
|
|
|
00:11:32.279 --> 00:11:37.680 |
|
did is they trained uh their models so |
|
|
|
00:11:36.040 --> 00:11:39.240 |
|
that the models would not output |
|
|
|
00:11:37.680 --> 00:11:41.639 |
|
anything that's included in the system |
|
|
|
00:11:39.240 --> 00:11:43.839 |
|
message so the system message is used to |
|
|
|
00:11:41.639 --> 00:11:46.120 |
|
influence behavior but it like they're |
|
|
|
00:11:43.839 --> 00:11:48.200 |
|
explicitly trained to not output things |
|
|
|
00:11:46.120 --> 00:11:49.880 |
|
that are included in there and so if you |
|
|
|
00:11:48.200 --> 00:11:53.360 |
|
put the |
|
|
|
00:11:49.880 --> 00:11:56.200 |
|
actual if you put the actual thing that |
|
|
|
00:11:53.360 --> 00:11:59.639 |
|
you wanted to evaluate within the system |
|
|
|
00:11:56.200 --> 00:12:01.839 |
|
message it might still predict |
|
|
|
00:11:59.639 --> 00:12:04.839 |
|
the sentiment correctly but it won't |
|
|
|
00:12:01.839 --> 00:12:06.920 |
|
repeat the stuff that was in the system
|
|
|
00:12:04.839 --> 00:12:09.920 |
|
message |
|
|
|
00:12:06.920 --> 00:12:09.920 |
|
B |
|
|
|
00:12:14.160 --> 00:12:20.480 |
|
yeah after we give it the yeah yeah so |
|
|
|
00:12:18.320 --> 00:12:23.040 |
|
the that's a great question so typically |
|
|
|
00:12:20.480 --> 00:12:26.480 |
|
this is hand created so you you create |
|
|
|
00:12:23.040 --> 00:12:29.680 |
|
something like this um I I have a a |
|
|
|
00:12:26.480 --> 00:12:32.120 |
|
bracket X here but another way people |
|
|
|
00:12:29.680 --> 00:12:33.800 |
|
typically specify this is you just have |
|
|
|
00:12:32.120 --> 00:12:36.880 |
|
a |
|
|
|
00:12:33.800 --> 00:12:41.199 |
|
big um you just have a big python string |
|
|
|
00:12:36.880 --> 00:12:41.199 |
|
which is like um you know |
|
|
|
00:12:42.040 --> 00:12:46.480 |
|
please um please |
|
|
|
00:12:49.279 --> 00:12:55.440 |
|
specify and then you |
|
|
|
00:12:52.440 --> 00:12:55.440 |
|
have |
|
|
|
00:12:56.160 --> 00:13:02.240 |
|
um and then you substitute in uh like |
|
|
|
00:12:59.880 --> 00:13:04.440 |
|
the input into this place here so you |
|
|
|
00:13:02.240 --> 00:13:07.760 |
|
usually hand-write it I'm going to
|
|
|
00:13:04.440 --> 00:13:07.760 |
|
talk excuse |
|
|
|
00:13:07.800 --> 00:13:14.120 |
|
me at the end about some methods to
|
|
|
00:13:10.320 --> 00:13:16.120 |
|
learn these also but um I'd say like 90 |
|
|
|
00:13:14.120 --> 00:13:18.320 |
|
to 95% of the time people are just writing
|
|
|
00:13:16.120 --> 00:13:18.320 |
|
them



00:13:19.959 --> 00:13:24.560

manually yep I would
|
|
|
00:13:25.920 --> 00:13:31.639 |
|
write |
|
|
|
00:13:27.760 --> 00:13:31.639 |
|
and real input that |
|
|
|
00:13:33.240 --> 00:13:38.040 |
|
I yeah so typically the template is |
|
|
|
00:13:36.360 --> 00:13:39.800 |
|
written when you decide what system you |
|
|
|
00:13:38.040 --> 00:13:41.839 |
|
want to create so you decide you want to |
|
|
|
00:13:39.800 --> 00:13:44.519 |
|
create a sentiment analysis system so |
|
|
|
00:13:41.839 --> 00:13:46.760 |
|
you create a template that either says |
|
|
|
00:13:44.519 --> 00:13:48.079 |
|
like please classify the topic in the |
|
|
|
00:13:46.760 --> 00:13:50.959 |
|
case of a model that was trained to |
|
|
|
00:13:48.079 --> 00:13:52.240 |
|
follow instructions or if you have a |
|
|
|
00:13:50.959 --> 00:13:54.240 |
|
base model that was not trained to |
|
|
|
00:13:52.240 --> 00:13:58.079 |
|
follow instructions which is rare
|
|
|
00:13:54.240 --> 00:14:00.279 |
|
nowadays but GPT-2 or Llama 2 without
|
|
|
00:13:58.079 --> 00:14:02.320 |
|
chat tuning is an example of that
|
|
|
00:14:00.279 --> 00:14:05.600 |
|
then you would need to create a template |
|
|
|
00:14:02.320 --> 00:14:10.040 |
|
that looks like this um where |
|
|
|
00:14:05.600 --> 00:14:11.360 |
|
you put the model in a situation where |
|
|
|
00:14:10.040 --> 00:14:13.839 |
|
the |
|
|
|
00:14:11.360 --> 00:14:15.240 |
|
next word that follows up should be |
|
|
|
00:14:13.839 --> 00:14:17.120 |
|
indicative of the answer to your |
|
|
|
00:14:15.240 --> 00:14:20.120 |
|
question so like positive or negative or |
|
|
|
00:14:17.120 --> 00:14:21.800 |
|
something like that so um but either way |
|
|
|
00:14:20.120 --> 00:14:24.639 |
|
like usually you handw write this when |
|
|
|
00:14:21.800 --> 00:14:27.199 |
|
you decide what task is you want to do |
|
|
|
00:14:24.639 --> 00:14:29.000 |
|
then this input X this comes at test |
|
|
|
00:14:27.199 --> 00:14:32.920 |
|
time this comes when you actually deploy
|
|
|
00:14:29.000 --> 00:14:34.240 |
|
your system um so this would be like an |
|
|
|
00:14:32.920 --> 00:14:37.040 |
|
Amazon review that you wanted to |
|
|
|
00:14:34.240 --> 00:14:37.040 |
|
classify using an |
|
|
|
00:14:37.720 --> 00:14:42.720 |
|
LLM cool any other
|
|
|
00:14:40.519 --> 00:14:46.480 |
|
questions okay let's |
|
|
|
00:14:42.720 --> 00:14:48.160 |
|
move on um so basically this is what is
|
|
|
00:14:46.480 --> 00:14:49.920 |
|
happening behind the scenes I don't know |
|
|
|
00:14:48.160 --> 00:14:53.040 |
|
what OpenAI's format is because they
|
|
|
00:14:49.920 --> 00:14:54.639 |
|
won't tell us of course um but you know |
|
|
|
00:14:53.040 --> 00:14:56.000 |
|
I'm assuming that that's similar to |
|
|
|
00:14:54.639 --> 00:14:59.399 |
|
what's happening in |
|
|
|
00:14:56.000 --> 00:15:01.959 |
|
OpenAI okay um so the next thing that we do
|
|
|
00:14:59.399 --> 00:15:05.360 |
|
is answer prediction so given uh The |
|
|
|
00:15:01.959 --> 00:15:08.320 |
|
Prompt we predict the answer um and so |
|
|
|
00:15:05.360 --> 00:15:11.880 |
|
using whatever algorithm we want to use |
|
|
|
00:15:08.320 --> 00:15:14.880 |
|
uh we predict you know fantastic |
|
|
|
00:15:11.880 --> 00:15:14.880 |
|
here |
|
|
|
00:15:15.120 --> 00:15:21.639 |
|
um and actually it might not predict |
|
|
|
00:15:19.959 --> 00:15:26.399 |
|
fantastic it might predict something |
|
|
|
00:15:21.639 --> 00:15:28.120 |
|
else like overall it was um a really |
|
|
|
00:15:26.399 --> 00:15:30.000 |
|
fantastic movie that I liked a lot or |
|
|
|
00:15:28.120 --> 00:15:33.839 |
|
something like that so it might also do
|
|
|
00:15:30.000 --> 00:15:36.880 |
|
something like that so based on that we |
|
|
|
00:15:33.839 --> 00:15:39.600 |
|
want to select the actual output out of |
|
|
|
00:15:36.880 --> 00:15:41.160 |
|
the generated uh outputs and I'm calling |
|
|
|
00:15:39.600 --> 00:15:43.639 |
|
this uh |
|
|
|
00:15:41.160 --> 00:15:45.959 |
|
postprocessing so for instance we might |
|
|
|
00:15:43.639 --> 00:15:48.240 |
|
take the output as is so for something |
|
|
|
00:15:45.959 --> 00:15:50.880 |
|
like just you interacting with chat |
|
|
|
00:15:48.240 --> 00:15:53.360 |
|
GPT um or interacting with a chat model
|
|
|
00:15:50.880 --> 00:15:55.639 |
|
you might be looking at the text as is |
|
|
|
00:15:53.360 --> 00:15:58.319 |
|
or it might be formatting the output for |
|
|
|
00:15:55.639 --> 00:16:00.079 |
|
easy visualization selecting only
|
|
|
00:15:58.319 --> 00:16:02.440 |
|
parts of the output that you want to use |
|
|
|
00:16:00.079 --> 00:16:04.560 |
|
or mapping the output to other |
|
|
|
00:16:02.440 --> 00:16:07.600 |
|
actions so to give an example of |
|
|
|
00:16:04.560 --> 00:16:10.079 |
|
formatting this is a feature of uh chat |
|
|
|
00:16:07.600 --> 00:16:13.440 |
|
GPT or Bard or anything that you interact
|
|
|
00:16:10.079 --> 00:16:14.920 |
|
with but um I wrote please write a table |
|
|
|
00:16:13.440 --> 00:16:18.759 |
|
with the last five presidents and their |
|
|
|
00:16:14.920 --> 00:16:20.319 |
|
birth dates and chat GPT is happy to do |
|
|
|
00:16:18.759 --> 00:16:22.000 |
|
this for me it says here is a table with |
|
|
|
00:16:20.319 --> 00:16:24.920 |
|
the last five US presidents and their |
|
|
|
00:16:22.000 --> 00:16:27.639 |
|
birth dates um Joe Biden Donald Trump |
|
|
|
00:16:24.920 --> 00:16:31.720 |
|
Barack Obama George W. Bush Bill Clinton
|
|
|
00:16:27.639 --> 00:16:33.600 |
|
um but this is written in markdown um or |
|
|
|
00:16:31.720 --> 00:16:35.079 |
|
I assume it's written in markdown so it |
|
|
|
00:16:33.600 --> 00:16:37.880 |
|
basically makes this table and then |
|
|
|
00:16:35.079 --> 00:16:39.319 |
|
renders it in an easy to view way so |
|
|
|
00:16:37.880 --> 00:16:41.000 |
|
this is really important if you're |
|
|
|
00:16:39.319 --> 00:16:42.440 |
|
building a user facing system because |
|
|
|
00:16:41.000 --> 00:16:44.279 |
|
you want to be able to render these |
|
|
|
00:16:42.440 --> 00:16:46.279 |
|
things but the only thing a large |
|
|
|
00:16:44.279 --> 00:16:48.880 |
|
language model can output is text right |
|
|
|
00:16:46.279 --> 00:16:50.279 |
|
it can output a string of tokens so uh |
|
|
|
00:16:48.880 --> 00:16:54.000 |
|
this is a really good way to interact |
|
|
|
00:16:50.279 --> 00:16:55.759 |
|
with it um I I followed by saying output |
|
|
|
00:16:54.000 --> 00:16:58.720 |
|
that in Json format so it says here's |
|
|
|
00:16:55.759 --> 00:17:00.360 |
|
the information in Json format and |
|
|
|
00:16:58.720 --> 00:17:02.000 |
|
instead of just giving me a big Json |
|
|
|
00:17:00.360 --> 00:17:04.199 |
|
string it gives me syntax highlighting |
|
|
|
00:17:02.000 --> 00:17:06.880 |
|
and all the other stuff like this um |
|
|
|
00:17:04.199 --> 00:17:09.760 |
|
presumably what it's doing here is it's |
|
|
|
00:17:06.880 --> 00:17:12.839 |
|
outputting um like a triple backtick or
|
|
|
00:17:09.760 --> 00:17:15.160 |
|
something like this um the reason why I |
|
|
|
00:17:12.839 --> 00:17:17.600 |
|
know that is because |
|
|
|
00:17:15.160 --> 00:17:21.079 |
|
it seems to be making a mistake down
|
|
|
00:17:17.600 --> 00:17:23.280 |
|
here for some reason um like uh |
|
|
|
00:17:21.079 --> 00:17:25.079 |
|
outputting a weirdly formatted thing at
|
|
|
00:17:23.280 --> 00:17:26.160 |
|
that and so even chat GPT makes mistakes |
|
|
|
00:17:25.079 --> 00:17:30.320 |
|
some of the |
|
|
|
00:17:26.160 --> 00:17:32.400 |
|
time um |
|
|
|
00:17:30.320 --> 00:17:33.960 |
|
cool um another thing that you might |
|
|
|
00:17:32.400 --> 00:17:35.520 |
|
want to do is especially if you're not |
|
|
|
00:17:33.960 --> 00:17:37.360 |
|
using it in like a a directly |
|
|
|
00:17:35.520 --> 00:17:40.200 |
|
user-facing application but you want to |
|
|
|
00:17:37.360 --> 00:17:41.840 |
|
use it to extract some information or |
|
|
|
00:17:40.200 --> 00:17:45.440 |
|
make some classification decision or |
|
|
|
00:17:41.840 --> 00:17:47.280 |
|
something like that um you often select |
|
|
|
00:17:45.440 --> 00:17:49.880 |
|
information that's indicative of the |
|
|
|
00:17:47.280 --> 00:17:52.360 |
|
answer and so I love this movie overall |
|
|
|
00:17:49.880 --> 00:17:53.960 |
|
it was a movie that was simply fantastic |
|
|
|
00:17:52.360 --> 00:17:56.600 |
|
um you can do things like extract |
|
|
|
00:17:53.960 --> 00:17:59.440 |
|
keywords like fantastic and use that to |
|
|
|
00:17:56.600 --> 00:18:01.360 |
|
indicate positive sentiment |
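The keyword-selection heuristic just described could be sketched like this; the word lists are illustrative, not from the lecture.

```python
import re

# Heuristic post-processing: scan the generated text for sentiment-
# bearing keywords. The word lists are illustrative, not exhaustive.
POSITIVE = {"fantastic", "great", "loved", "wonderful"}
NEGATIVE = {"terrible", "boring", "awful", "hated"}

def keyword_sentiment(generated: str) -> str:
    words = set(re.findall(r"[a-z']+", generated.lower()))
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "unknown"

print(keyword_sentiment("Overall, it was a movie that was simply fantastic."))
# positive
```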
|
|
|
00:17:59.440 --> 00:18:04.080 |
|
there's various methods for doing this |
|
|
|
00:18:01.360 --> 00:18:05.919 |
|
and these are also used in the |
|
|
|
00:18:04.080 --> 00:18:08.679 |
|
benchmarks that are used to evaluate |
|
|
|
00:18:05.919 --> 00:18:09.799 |
|
language models so it's you know like |
|
|
|
00:18:08.679 --> 00:18:11.039 |
|
even if you're not building an |
|
|
|
00:18:09.799 --> 00:18:12.679 |
|
application directly but you're just |
|
|
|
00:18:11.039 --> 00:18:14.120 |
|
trying to do well in this class and get |
|
|
|
00:18:12.679 --> 00:18:15.679 |
|
like a high score on a leaderboard or |
|
|
|
00:18:14.120 --> 00:18:20.320 |
|
something it's still useful to know |
|
|
|
00:18:15.679 --> 00:18:22.159 |
|
about these things so um for things like |
|
|
|
00:18:20.320 --> 00:18:24.039 |
|
classification um you can identify |
|
|
|
00:18:22.159 --> 00:18:27.159 |
|
keywords like fantastic that might be |
|
|
|
00:18:24.039 --> 00:18:29.120 |
|
indicative of the class another thing |
|
|
|
00:18:27.159 --> 00:18:31.559 |
|
that's uh pretty common is for |
|
|
|
00:18:29.120 --> 00:18:34.480 |
|
regression or numerical problems you |
|
|
|
00:18:31.559 --> 00:18:37.440 |
|
identify numbers and pull out the |
|
|
|
00:18:34.480 --> 00:18:40.400 |
|
numbers and use those numbers as the |
|
|
|
00:18:37.440 --> 00:18:42.360 |
|
answer um for code uh you can pull out |
|
|
|
00:18:40.400 --> 00:18:45.080 |
|
code snippets in triple backticks and |
|
|
|
00:18:42.360 --> 00:18:46.960 |
|
then execute the code for example so all |
|
|
|
00:18:45.080 --> 00:18:48.600 |
|
of these things are basically heuristic |
|
|
|
00:18:46.960 --> 00:18:50.159 |
|
methods but they can be used to pull out |
|
|
|
00:18:48.600 --> 00:18:53.440 |
|
the actual answer that you want from the |
|
|
|
00:18:50.159 --> 00:18:53.440 |
|
text that's generated |
|
|
|
00:18:54.480 --> 00:19:00.320 |
|
cool uh any questions about that |
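The extraction heuristics just described (spotting keywords, pulling out numbers, grabbing code from triple-backtick fences) can be sketched with a few regular expressions; the function names and keyword lists below are illustrative, not from any particular library:

```python
import re

def extract_keyword(text, keyword_to_label):
    """Scan generated text for indicator keywords and map the first hit to a label."""
    for kw, label in keyword_to_label.items():
        if re.search(r"\b" + re.escape(kw) + r"\b", text, re.IGNORECASE):
            return label
    return None

def extract_number(text):
    """Pull the last number out of generated text (common for math answers)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text)
    return float(matches[-1]) if matches else None

def extract_code(text):
    """Pull code snippets out of triple-backtick fences."""
    return re.findall(r"```(?:\w*\n)?(.*?)```", text, re.DOTALL)
```

All three are heuristics in exactly the sense above: they can misfire on unusual generations, but they turn free-form model output into something a benchmark script can score.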
|
|
|
00:19:02.280 --> 00:19:07.880 |
|
the final thing is output mapping um |
|
|
|
00:19:04.640 --> 00:19:11.120 |
|
given an answer uh map it into a class |
|
|
|
00:19:07.880 --> 00:19:13.360 |
|
label or a continuous value and so this |
|
|
|
00:19:11.120 --> 00:19:16.000 |
|
is doing something like taking fantastic |
|
|
|
00:19:13.360 --> 00:19:18.480 |
|
and mapping it into the class |
|
|
|
00:19:16.000 --> 00:19:21.000 |
|
positive uh and so you know if we want |
|
|
|
00:19:18.480 --> 00:19:23.000 |
|
to extract one to five star ratings |
|
|
|
00:19:21.000 --> 00:19:25.559 |
|
from reviews this is something you would |
|
|
|
00:19:23.000 --> 00:19:29.360 |
|
need to do and very often it's like a |
|
|
|
00:19:25.559 --> 00:19:33.880 |
|
one class to |
|
|
|
00:19:29.360 --> 00:19:35.720 |
|
many words mapping and uh by |
|
|
|
00:19:33.880 --> 00:19:37.400 |
|
doing this you can basically get a more |
|
|
|
00:19:35.720 --> 00:19:38.720 |
|
robust mapping onto the number that you |
|
|
|
00:19:37.400 --> 00:19:42.400 |
|
actually |
|
|
|
00:19:38.720 --> 00:19:42.400 |
|
want I actually |
|
|
|
00:19:42.720 --> 00:19:48.919 |
|
coincidentally on uh on Twitter saw a |
|
|
|
00:19:45.280 --> 00:19:48.919 |
|
really good example of this like a week |
|
|
|
00:19:55.880 --> 00:20:00.520 |
|
ago and yeah I don't know if I'm going |
|
|
|
00:19:59.120 --> 00:20:05.440 |
|
to be able to find it in a reasonable |
|
|
|
00:20:00.520 --> 00:20:08.520 |
|
time frame but basically um there was |
|
|
|
00:20:05.440 --> 00:20:11.080 |
|
a person who was using GPT-4 to create a |
|
|
|
00:20:08.520 --> 00:20:14.120 |
|
model uh to like reward open source |
|
|
|
00:20:11.080 --> 00:20:15.880 |
|
models for good and bad you know |
|
|
|
00:20:14.120 --> 00:20:18.320 |
|
responses |
|
|
|
00:20:15.880 --> 00:20:20.799 |
|
and they started out with giving it a |
|
|
|
00:20:18.320 --> 00:20:24.480 |
|
one to five star rating and then they |
|
|
|
00:20:20.799 --> 00:20:28.360 |
|
switched it into very good good okay bad |
|
|
|
00:20:24.480 --> 00:20:31.280 |
|
very bad and then um then asked to |
|
|
|
00:20:28.360 --> 00:20:34.520 |
|
generate you know those like very good |
|
|
|
00:20:31.280 --> 00:20:37.039 |
|
good okay bad very bad instead of |
|
|
|
00:20:34.520 --> 00:20:40.360 |
|
one to five and that worked a lot better |
|
|
|
00:20:37.039 --> 00:20:43.480 |
|
like the GPT model was a lot more uh |
|
|
|
00:20:40.360 --> 00:20:46.039 |
|
like likely to get the answer correct um |
|
|
|
00:20:43.480 --> 00:20:48.880 |
|
than it was if you gave a one to five |
|
|
|
00:20:46.039 --> 00:20:50.799 |
|
star rating so this is something you |
|
|
|
00:20:48.880 --> 00:20:54.280 |
|
should think about pretty seriously and |
|
|
|
00:20:50.799 --> 00:20:57.440 |
|
the way you can think about it is How |
|
|
|
00:20:54.280 --> 00:20:59.679 |
|
likely was this data to appear in a |
|
|
|
00:20:57.440 --> 00:21:02.520 |
|
large corpus of data on the |
|
|
|
00:20:59.679 --> 00:21:04.760 |
|
internet and it might be like a lot less |
|
|
|
00:21:02.520 --> 00:21:08.679 |
|
likely that it's like how good is this |
|
|
|
00:21:04.760 --> 00:21:11.400 |
|
movie five than how good is this movie |
|
|
|
00:21:08.679 --> 00:21:13.960 |
|
really good like just think of like the |
|
|
|
00:21:11.400 --> 00:21:16.200 |
|
occurrence probability and you can even |
|
|
|
00:21:13.960 --> 00:21:18.600 |
|
um like mine this data from the the web |
|
|
|
00:21:16.200 --> 00:21:21.320 |
|
if you want to to try to find out the |
|
|
|
00:21:18.600 --> 00:21:24.520 |
|
best you know |
|
|
|
00:21:21.320 --> 00:21:30.039 |
|
like the best things |
|
|
|
00:21:24.520 --> 00:21:30.039 |
|
there cool um any questions about this |
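An output mapping of the kind described, where one class label covers many surface words (a "verbalizer"), can be sketched as a small lookup table; the word lists here are made-up examples, not taken from any benchmark:

```python
# One-class-to-many-words mapping: several surface forms in the generated
# text all collapse onto one label. Word lists are illustrative only.
VERBALIZER = {
    "positive": ["fantastic", "great", "very good", "good", "excellent"],
    "negative": ["terrible", "very bad", "awful", "bad"],
}

def map_output(generated):
    text = generated.lower()
    # Try longer phrases first so "very bad" is matched before "bad".
    pairs = [(p, label) for label, phrases in VERBALIZER.items() for p in phrases]
    for phrase, label in sorted(pairs, key=lambda x: -len(x[0])):
        if phrase in text:
            return label
    return None
```

Choosing label words the model has seen often in pre-training (per the occurrence-probability point above) matters more than the mapping code itself.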
|
|
|
00:21:35.360 --> 00:21:39.480 |
|
yeah how is |
|
|
|
00:21:37.720 --> 00:21:43.039 |
|
it |
|
|
|
00:21:39.480 --> 00:21:45.919 |
|
learning so the model is |
|
|
|
00:21:43.039 --> 00:21:47.600 |
|
predicting text and like actually it's |
|
|
|
00:21:45.919 --> 00:21:50.200 |
|
not even predicting the word fantastic |
|
|
|
00:21:47.600 --> 00:21:54.480 |
|
it's predicting the token ID like |
|
|
|
00:21:50.200 --> 00:21:57.600 |
|
73521 or something like that um but you |
|
|
|
00:21:54.480 --> 00:21:58.679 |
|
know if it has seen that token ID more |
|
|
|
00:21:57.600 --> 00:22:00.840 |
|
frequently |
|
|
|
00:21:58.679 --> 00:22:04.240 |
|
after reviews than it has seen the token |
|
|
|
00:22:00.840 --> 00:22:06.000 |
|
ID for the number one or the number five |
|
|
|
00:22:04.240 --> 00:22:07.520 |
|
then it's more likely to predict that |
|
|
|
00:22:06.000 --> 00:22:10.279 |
|
accurately right it's more likely to |
|
|
|
00:22:07.520 --> 00:22:11.880 |
|
predict fantastic than it is to predict |
|
|
|
00:22:10.279 --> 00:22:14.679 |
|
five star or something like that just |
|
|
|
00:22:11.880 --> 00:22:16.720 |
|
because fantastic is more frequent and |
|
|
|
00:22:14.679 --> 00:22:18.880 |
|
so because of that if you think about |
|
|
|
00:22:16.720 --> 00:22:22.120 |
|
like what has it seen in all of the data |
|
|
|
00:22:18.880 --> 00:22:24.240 |
|
on the internet and like model your um |
|
|
|
00:22:22.120 --> 00:22:26.960 |
|
model your answers here appropriately |
|
|
|
00:22:24.240 --> 00:22:28.520 |
|
then that can give you |
|
|
|
00:22:26.960 --> 00:22:30.320 |
|
better answers |
|
|
|
00:22:28.520 --> 00:22:32.120 |
|
this is a very important rule of thumb |
|
|
|
00:22:30.320 --> 00:22:33.400 |
|
like don't try to make a language model |
|
|
|
00:22:32.120 --> 00:22:35.039 |
|
do something it's never seen in the |
|
|
|
00:22:33.400 --> 00:22:38.200 |
|
pre-training data and it will make your |
|
|
|
00:22:35.039 --> 00:22:40.240 |
|
life a lot easier so um you can think |
|
|
|
00:22:38.200 --> 00:22:41.880 |
|
about that going forward |
|
|
|
00:22:40.240 --> 00:22:44.679 |
|
|
|
|
|
00:22:41.880 --> 00:22:48.559 |
|
cool so next I want to move into few-shot |
|
|
|
00:22:44.679 --> 00:22:49.679 |
|
prompting or in context learning um so |
|
|
|
00:22:48.559 --> 00:22:52.159 |
|
few-shot |
|
|
|
00:22:49.679 --> 00:22:54.440 |
|
prompting basically what we do is we |
|
|
|
00:22:52.159 --> 00:22:55.799 |
|
provide a few examples of the task |
|
|
|
00:22:54.440 --> 00:22:58.440 |
|
together with the |
|
|
|
00:22:55.799 --> 00:23:00.080 |
|
instruction and the way this works |
|
|
|
00:22:58.440 --> 00:23:02.360 |
|
is you write an instruction like please |
|
|
|
00:23:00.080 --> 00:23:05.919 |
|
classify movie reviews as positive or |
|
|
|
00:23:02.360 --> 00:23:08.120 |
|
negative and add like input uh I really |
|
|
|
00:23:05.919 --> 00:23:10.320 |
|
don't like this movie output negative uh |
|
|
|
00:23:08.120 --> 00:23:12.480 |
|
input this movie is great output |
|
|
|
00:23:10.320 --> 00:23:16.640 |
|
positive |
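Assembling that instruction plus the Input/Output demonstrations into a single few-shot prompt string can be sketched like this; the format is one common choice (the helper name and query text are illustrative):

```python
# Build a few-shot prompt: instruction, then Input/Output demonstrations,
# then the query with a trailing "Output:" for the model to complete.
def build_fewshot_prompt(instruction, examples, query):
    parts = [instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(
    "Please classify movie reviews as 'positive' or 'negative'.",
    [("I really don't like this movie.", "negative"),
     ("This movie is great!", "positive")],
    "The acting was superb.",
)
```

Because the demonstrations end with bare labels, the model is nudged to complete the final "Output:" with a bare label in the same format.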
|
|
|
00:23:12.480 --> 00:23:18.880 |
|
and this is um pretty effective the |
|
|
|
00:23:16.640 --> 00:23:21.799 |
|
thing it's most effective for are |
|
|
|
00:23:18.880 --> 00:23:24.400 |
|
twofold it's most effective for making |
|
|
|
00:23:21.799 --> 00:23:26.360 |
|
sure that you get the formatting right |
|
|
|
00:23:24.400 --> 00:23:27.640 |
|
uh because if you have a few examples |
|
|
|
00:23:26.360 --> 00:23:28.679 |
|
the model will tend to follow those |
|
|
|
00:23:27.640 --> 00:23:30.840 |
|
examples |
|
|
|
00:23:28.679 --> 00:23:34.440 |
|
with respect to formatting especially if |
|
|
|
00:23:30.840 --> 00:23:37.320 |
|
we're talking about like GPT-4 models um |
|
|
|
00:23:34.440 --> 00:23:40.400 |
|
or strong GPT models it's also effective |
|
|
|
00:23:37.320 --> 00:23:42.400 |
|
if you're using weaker models so like |
|
|
|
00:23:40.400 --> 00:23:44.720 |
|
stronger models like GPT-4 tend to be |
|
|
|
00:23:42.400 --> 00:23:46.720 |
|
pretty good at following instructions so |
|
|
|
00:23:44.720 --> 00:23:49.520 |
|
if you say |
|
|
|
00:23:46.720 --> 00:23:51.640 |
|
um please classify movie reviews as |
|
|
|
00:23:49.520 --> 00:23:54.000 |
|
positive or negative it will be more |
|
|
|
00:23:51.640 --> 00:23:56.279 |
|
likely to just output positive or |
|
|
|
00:23:54.000 --> 00:23:58.760 |
|
negative um but if you have weaker |
|
|
|
00:23:56.279 --> 00:24:01.720 |
|
models it might say I really don't like |
|
|
|
00:23:58.760 --> 00:24:03.559 |
|
this movie output uh I think I think |
|
|
|
00:24:01.720 --> 00:24:05.640 |
|
this is probably negative or something |
|
|
|
00:24:03.559 --> 00:24:07.240 |
|
like that it will you know it might not |
|
|
|
00:24:05.640 --> 00:24:10.080 |
|
follow the instructions as well and it's |
|
|
|
00:24:07.240 --> 00:24:14.240 |
|
more effective to provide them as in context |
|
|
|
00:24:10.080 --> 00:24:17.600 |
|
examples um so this is one |
|
|
|
00:24:14.240 --> 00:24:19.480 |
|
thing to remember one thing I |
|
|
|
00:24:17.600 --> 00:24:22.120 |
|
should mention also is when I say few-shot |
|
|
|
00:24:19.480 --> 00:24:25.720 |
|
prompting and in context learning these |
|
|
|
00:24:22.120 --> 00:24:27.880 |
|
are basically the same thing uh they |
|
|
|
00:24:25.720 --> 00:24:29.720 |
|
basically refer to the same concept but |
|
|
|
00:24:27.880 --> 00:24:31.919 |
|
just from slightly |
|
|
|
00:24:29.720 --> 00:24:34.799 |
|
different |
|
|
|
00:24:31.919 --> 00:24:36.919 |
|
angles few-shot is in contrast |
|
|
|
00:24:34.799 --> 00:24:39.320 |
|
to zero shot so zero shot means you're |
|
|
|
00:24:36.919 --> 00:24:43.039 |
|
providing no examples so zero shot |
|
|
|
00:24:39.320 --> 00:24:45.720 |
|
prompting you would have none uh few |
|
|
|
00:24:43.039 --> 00:24:47.240 |
|
shot you have several examples in |
|
|
|
00:24:45.720 --> 00:24:49.679 |
|
context learning means that you're |
|
|
|
00:24:47.240 --> 00:24:51.640 |
|
learning how to do a task but instead of |
|
|
|
00:24:49.679 --> 00:24:54.320 |
|
providing the model with fine-tuning |
|
|
|
00:24:51.640 --> 00:24:56.679 |
|
data you're providing the examples in |
|
|
|
00:24:54.320 --> 00:24:58.080 |
|
the language model's context so they both |
|
|
|
00:24:56.679 --> 00:25:00.919 |
|
basically mean the same thing but |
|
|
|
00:24:58.080 --> 00:25:03.159 |
|
they're they're just contrasting to like |
|
|
|
00:25:00.919 --> 00:25:06.559 |
|
either a zero shot or fine tuning which |
|
|
|
00:25:03.159 --> 00:25:06.559 |
|
is why the terminology is |
|
|
|
00:25:06.880 --> 00:25:13.520 |
|
different so they usering interface |
|
|
|
00:25:11.320 --> 00:25:16.080 |
|
and for the |
|
|
|
00:25:13.520 --> 00:25:17.760 |
|
rendering uh yes you can definitely do |
|
|
|
00:25:16.080 --> 00:25:20.039 |
|
few-shot prompting I'm actually going to |
|
|
|
00:25:17.760 --> 00:25:23.440 |
|
talk about exactly how you do |
|
|
|
00:25:20.039 --> 00:25:26.320 |
|
this in like an open AI model um here |
|
|
|
00:25:23.440 --> 00:25:28.240 |
|
which is for open AI models there's a |
|
|
|
00:25:26.320 --> 00:25:31.320 |
|
couple ways that you could do this one |
|
|
|
00:25:28.240 --> 00:25:33.640 |
|
way you could do this is you could um |
|
|
|
00:25:31.320 --> 00:25:36.279 |
|
you could have the role be user and the |
|
|
|
00:25:33.640 --> 00:25:39.279 |
|
role be assistant and just add like |
|
|
|
00:25:36.279 --> 00:25:41.159 |
|
additional conversational history into |
|
|
|
00:25:39.279 --> 00:25:43.159 |
|
the the messages that you're sending to |
|
|
|
00:25:41.159 --> 00:25:46.240 |
|
the language model but actually the |
|
|
|
00:25:43.159 --> 00:25:49.120 |
|
recommended way of doing this um which |
|
|
|
00:25:46.240 --> 00:25:51.880 |
|
is in the openi cookbook uh which is in |
|
|
|
00:25:49.120 --> 00:25:53.919 |
|
the reference is that you send this as a |
|
|
|
00:25:51.880 --> 00:25:58.200 |
|
system message but you provide this like |
|
|
|
00:25:53.919 --> 00:26:00.840 |
|
additional name variable here um with |
|
|
|
00:25:58.200 --> 00:26:02.840 |
|
example user and example assistant the |
|
|
|
00:26:00.840 --> 00:26:06.200 |
|
main reason why you do this is just |
|
|
|
00:26:02.840 --> 00:26:08.080 |
|
because if you don't um if you send it |
|
|
|
00:26:06.200 --> 00:26:10.600 |
|
in as the like user and assistant the |
|
|
|
00:26:08.080 --> 00:26:12.799 |
|
model might refer back to the few shot |
|
|
|
00:26:10.600 --> 00:26:14.320 |
|
examples as something that happened |
|
|
|
00:26:12.799 --> 00:26:15.760 |
|
previously in the conversation whereas |
|
|
|
00:26:14.320 --> 00:26:18.200 |
|
if you send it in the system message |
|
|
|
00:26:15.760 --> 00:26:19.799 |
|
it's guaranteed to not do that so I |
|
|
|
00:26:18.200 --> 00:26:23.600 |
|
think it's like less of an accuracy |
|
|
|
00:26:19.799 --> 00:26:26.360 |
|
thing it's more of a |
|
|
|
00:26:23.600 --> 00:26:29.120 |
|
prompt privacy thing uh than |
|
|
|
00:26:26.360 --> 00:26:30.880 |
|
anything else so this is a recommended |
|
|
|
00:26:29.120 --> 00:26:33.159 |
|
way of doing this on the other hand if |
|
|
|
00:26:30.880 --> 00:26:34.600 |
|
you're using like an open source model |
|
|
|
00:26:33.159 --> 00:26:36.600 |
|
uh you need to be careful because this |
|
|
|
00:26:34.600 --> 00:26:38.279 |
|
name might not even be included in the |
|
|
|
00:26:36.600 --> 00:26:40.080 |
|
prompt template like for example in the |
|
|
|
00:26:38.279 --> 00:26:41.840 |
|
LiteLLM prompt templates that I was |
|
|
|
00:26:40.080 --> 00:26:44.080 |
|
sending in this is not even included at |
|
|
|
00:26:41.840 --> 00:26:46.480 |
|
all so you might just get a weird system |
|
|
|
00:26:44.080 --> 00:26:49.720 |
|
message that uh is poorly formatted so you |
|
|
|
00:26:46.480 --> 00:26:53.600 |
|
need to be a little bit conscious |
|
|
|
00:26:49.720 --> 00:26:55.799 |
|
of this um cool any questions here does |
|
|
|
00:26:53.600 --> 00:26:58.880 |
|
that answer the |
|
|
|
00:26:55.799 --> 00:27:02.279 |
|
question okay |
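The cookbook pattern just described, packaging few-shot demonstrations as system messages tagged with a `name` field, can be sketched as follows. Only the message list is built here; actually sending it would need the real OpenAI client, and the example texts are assumptions for illustration:

```python
# Few-shot examples as system messages with name="example_user" /
# name="example_assistant", per the OpenAI cookbook pattern described above,
# so the model does not treat them as earlier turns of the real conversation.
def few_shot_messages(system_prompt, examples, query):
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:
        messages.append(
            {"role": "system", "name": "example_user", "content": user_text})
        messages.append(
            {"role": "system", "name": "example_assistant", "content": assistant_text})
    # The actual query still goes in as an ordinary user message.
    messages.append({"role": "user", "content": query})
    return messages

msgs = few_shot_messages(
    "Classify movie reviews as positive or negative.",
    [("I really don't like this movie.", "negative"),
     ("This movie is great!", "positive")],
    "The plot dragged badly.",
)
```

With an open-source model, as noted above, check whether its prompt template renders the `name` field at all before relying on this layout.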
|
|
|
00:26:58.880 --> 00:27:05.000 |
|
um so one one thing to be aware of is |
|
|
|
00:27:02.279 --> 00:27:07.039 |
|
llms are sensitive to small changes and |
|
|
|
00:27:05.000 --> 00:27:12.080 |
|
in context examples that you provide to |
|
|
|
00:27:07.039 --> 00:27:14.600 |
|
them so uh previous work has examined |
|
|
|
00:27:12.080 --> 00:27:19.399 |
|
this from a number of angles there's a |
|
|
|
00:27:14.600 --> 00:27:22.679 |
|
paper by Lu et al. and they examine the |
|
|
|
00:27:19.399 --> 00:27:25.000 |
|
sensitivity to example ordering so like |
|
|
|
00:27:22.679 --> 00:27:28.399 |
|
if you take the same examples and you |
|
|
|
00:27:25.000 --> 00:27:30.840 |
|
just order them in different orders um |
|
|
|
00:27:28.399 --> 00:27:32.679 |
|
you can actually get very wildly |
|
|
|
00:27:30.840 --> 00:27:35.600 |
|
different |
|
|
|
00:27:32.679 --> 00:27:37.520 |
|
results um and this is especially true |
|
|
|
00:27:35.600 --> 00:27:40.320 |
|
for smaller models so the smaller models |
|
|
|
00:27:37.520 --> 00:27:42.720 |
|
here are like the gpt2 models the larger |
|
|
|
00:27:40.320 --> 00:27:47.440 |
|
models here are like the GPT the larger |
|
|
|
00:27:42.720 --> 00:27:47.440 |
|
model here is GPT-3.5 uh I |
|
|
|
00:27:48.399 --> 00:27:54.120 |
|
believe other things that people have |
|
|
|
00:27:50.559 --> 00:27:56.760 |
|
looked at are label balance so um how |
|
|
|
00:27:54.120 --> 00:27:58.559 |
|
important is it for the labels to be |
|
|
|
00:27:56.760 --> 00:28:01.440 |
|
balanced |
|
|
|
00:27:58.559 --> 00:28:02.799 |
|
um and if you're doing sentiment |
|
|
|
00:28:01.440 --> 00:28:05.240 |
|
classification for example you might |
|
|
|
00:28:02.799 --> 00:28:07.519 |
|
have only positive examples or only |
|
|
|
00:28:05.240 --> 00:28:10.000 |
|
negative examples and if you have only |
|
|
|
00:28:07.519 --> 00:28:13.279 |
|
positive or negative examples this can |
|
|
|
00:28:10.000 --> 00:28:15.559 |
|
uh help or hurt your accuracy uh for |
|
|
|
00:28:13.279 --> 00:28:17.200 |
|
example on this Amazon review data set |
|
|
|
00:28:15.559 --> 00:28:18.679 |
|
most of the reviews are positive so you |
|
|
|
00:28:17.200 --> 00:28:20.840 |
|
actually do better by having lots of |
|
|
|
00:28:18.679 --> 00:28:23.640 |
|
positive examples in your in context |
|
|
|
00:28:20.840 --> 00:28:26.600 |
|
examples on the other hand for sst2 this |
|
|
|
00:28:23.640 --> 00:28:29.159 |
|
is label balanced so having only |
|
|
|
00:28:26.600 --> 00:28:31.799 |
|
positive or negative is worse on average |
|
|
|
00:28:29.159 --> 00:28:34.279 |
|
than having three positive and one |
|
|
|
00:28:31.799 --> 00:28:36.679 |
|
negative another thing is label coverage |
|
|
|
00:28:34.279 --> 00:28:38.679 |
|
so if we're talking about multi class |
|
|
|
00:28:36.679 --> 00:28:41.120 |
|
classification um |
|
|
|
00:28:38.679 --> 00:28:42.919 |
|
having good coverage of all of the |
|
|
|
00:28:41.120 --> 00:28:45.919 |
|
classes that you want to include in your |
|
|
|
00:28:42.919 --> 00:28:49.120 |
|
multiclass classification is important |
|
|
|
00:28:45.919 --> 00:28:51.720 |
|
um to some extent but if you have uh |
|
|
|
00:28:49.120 --> 00:28:53.440 |
|
more uh you can also confuse some models |
|
|
|
00:28:51.720 --> 00:28:55.840 |
|
especially if they're minority labels so |
|
|
|
00:28:53.440 --> 00:28:57.799 |
|
if you have a whole bunch of like random |
|
|
|
00:28:55.840 --> 00:28:59.080 |
|
minority labels and that can cause confusion so |
|
|
|
00:28:57.799 --> 00:29:01.399 |
|
this is something important to think |
|
|
|
00:28:59.080 --> 00:29:04.640 |
|
about if you're planning on solving kind |
|
|
|
00:29:01.399 --> 00:29:08.640 |
|
of like classification tasks um I I've |
|
|
|
00:29:04.640 --> 00:29:11.000 |
|
also had my own experience with uh using |
|
|
|
00:29:08.640 --> 00:29:13.159 |
|
GPT for evaluation for machine |
|
|
|
00:29:11.000 --> 00:29:14.760 |
|
translation and when we use GPT for |
|
|
|
00:29:13.159 --> 00:29:18.559 |
|
evaluation for machine translation it |
|
|
|
00:29:14.760 --> 00:29:20.799 |
|
was very important to add um like high |
|
|
|
00:29:18.559 --> 00:29:22.760 |
|
scoring outputs low |
|
|
|
00:29:20.799 --> 00:29:26.320 |
|
scoring outputs and some |
|
|
|
00:29:22.760 --> 00:29:27.840 |
|
in the middle um and so it's also the |
|
|
|
00:29:26.320 --> 00:29:30.760 |
|
case for regression |
|
|
|
00:29:27.840 --> 00:29:30.760 |
|
uh problems as |
|
|
|
00:29:32.600 --> 00:29:37.320 |
|
well cool um any questions |
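One way to see the ordering sensitivity Lu et al. report is simply to enumerate every permutation of the same demonstrations and score each resulting prompt with your model; the sketch below only builds the k! prompt variants (the example texts are made up, and the scoring call is left out):

```python
from itertools import permutations

# The same few-shot examples in every possible order: with k examples there
# are k! prompts, and accuracy can swing widely across them, especially for
# smaller models.
examples = [("This movie is great!", "positive"),
            ("The plot was boring.", "negative"),
            ("I loved every minute.", "positive")]

def prompt_variants(examples):
    for order in permutations(examples):
        yield "\n".join(f"Input: {i}\nOutput: {o}" for i, o in order)

variants = list(prompt_variants(examples))  # 3! = 6 distinct prompts
```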
|
|
|
00:29:38.159 --> 00:29:45.000 |
|
here um however this is not super |
|
|
|
00:29:42.240 --> 00:29:46.600 |
|
predictable um so there's not like any |
|
|
|
00:29:45.000 --> 00:29:48.399 |
|
rule of thumb that tells you like this |
|
|
|
00:29:46.600 --> 00:29:49.720 |
|
is or as far as I know there's not any |
|
|
|
00:29:48.399 --> 00:29:51.640 |
|
rule of thumb that tells you this is the |
|
|
|
00:29:49.720 --> 00:29:54.000 |
|
way you should construct in context |
|
|
|
00:29:51.640 --> 00:29:55.880 |
|
examples uh there are lots of papers |
|
|
|
00:29:54.000 --> 00:29:57.799 |
|
that say they have methods that work |
|
|
|
00:29:55.880 --> 00:30:01.000 |
|
better but I don't know if there's any |
|
|
|
00:29:57.799 --> 00:30:02.559 |
|
like gold standard industry practice |
|
|
|
00:30:01.000 --> 00:30:05.799 |
|
for doing something like this at the |
|
|
|
00:30:02.559 --> 00:30:07.799 |
|
moment so just to give an example uh |
|
|
|
00:30:05.799 --> 00:30:10.399 |
|
this paper it's a really nice paper |
|
|
|
00:30:07.799 --> 00:30:13.440 |
|
examining why uh in context Learning |
|
|
|
00:30:10.399 --> 00:30:17.279 |
|
Works one thing one interesting finding |
|
|
|
00:30:13.440 --> 00:30:19.760 |
|
that they have is they take |
|
|
|
00:30:17.279 --> 00:30:22.720 |
|
in context examples but they randomize |
|
|
|
00:30:19.760 --> 00:30:27.320 |
|
the labels they make the labels wrong |
|
|
|
00:30:22.720 --> 00:30:29.519 |
|
some of the time so even with completely |
|
|
|
00:30:27.320 --> 00:30:32.120 |
|
wrong labels even with labels that are |
|
|
|
00:30:29.519 --> 00:30:34.399 |
|
correct 0% of the time you still get |
|
|
|
00:30:32.120 --> 00:30:37.360 |
|
much much better accuracy than if you |
|
|
|
00:30:34.399 --> 00:30:39.440 |
|
use no in-context examples and why is |
|
|
|
00:30:37.360 --> 00:30:41.640 |
|
this probably you know it's getting the |
|
|
|
00:30:39.440 --> 00:30:44.600 |
|
model formatting correct it's getting |
|
|
|
00:30:41.640 --> 00:30:47.679 |
|
like the names of the labels correct |
|
|
|
00:30:44.600 --> 00:30:49.039 |
|
even if it's not uh accurate so it seems |
|
|
|
00:30:47.679 --> 00:30:50.519 |
|
like it's not really using these for |
|
|
|
00:30:49.039 --> 00:30:52.640 |
|
training data it's using them more just |
|
|
|
00:30:50.519 --> 00:30:56.240 |
|
to know the formatting |
|
|
|
00:30:52.640 --> 00:30:59.399 |
|
appropriately |
|
|
|
00:30:56.240 --> 00:31:01.399 |
|
you so you already |
|
|
|
00:30:59.399 --> 00:31:03.760 |
|
have |
|
|
|
00:31:01.399 --> 00:31:08.840 |
|
right how is it |
|
|
|
00:31:03.760 --> 00:31:11.240 |
|
Ma like is it just y one y i gu I'm just |
|
|
|
00:31:08.840 --> 00:31:15.000 |
|
ask how you would interpret |
|
|
|
00:31:11.240 --> 00:31:16.480 |
|
that so this is you're not training the |
|
|
|
00:31:15.000 --> 00:31:17.880 |
|
model at the moment we're going to talk |
|
|
|
00:31:16.480 --> 00:31:19.360 |
|
about that next class but right now |
|
|
|
00:31:17.880 --> 00:31:21.279 |
|
you're taking a model that has already |
|
|
|
00:31:19.360 --> 00:31:22.840 |
|
been trained and you're providing it |
|
|
|
00:31:21.279 --> 00:31:25.519 |
|
with a few examples and then you're |
|
|
|
00:31:22.840 --> 00:31:28.679 |
|
asking it to fill in um the following |
|
|
|
00:31:25.519 --> 00:31:30.880 |
|
examples just examples |
|
|
|
00:31:28.679 --> 00:31:32.960 |
|
yes |
|
|
|
00:31:30.880 --> 00:31:34.679 |
|
exactly and it's pretty amazing that |
|
|
|
00:31:32.960 --> 00:31:36.440 |
|
that works in the first place especially |
|
|
|
00:31:34.679 --> 00:31:39.840 |
|
with a model that hasn't been explicitly |
|
|
|
00:31:36.440 --> 00:31:41.200 |
|
trained that way but um there's a a fair |
|
|
|
00:31:39.840 --> 00:31:42.320 |
|
amount of research that I think we're |
|
|
|
00:31:41.200 --> 00:31:43.960 |
|
probably going to be talking about in |
|
|
|
00:31:42.320 --> 00:31:47.000 |
|
the interpretability class about why |
|
|
|
00:31:43.960 --> 00:31:49.600 |
|
this happens but um |
|
|
|
00:31:47.000 --> 00:31:51.279 |
|
basically my my interpretation for why |
|
|
|
00:31:49.600 --> 00:31:53.679 |
|
this happens is because there's so much |
|
|
|
00:31:51.279 --> 00:31:56.000 |
|
repetitive stuff on the internet right |
|
|
|
00:31:53.679 --> 00:31:58.240 |
|
there's a bunch of examples of math |
|
|
|
00:31:56.000 --> 00:32:00.399 |
|
problems which is like |
|
|
|
00:31:58.240 --> 00:32:02.279 |
|
question one and then the math problem |
|
|
|
00:32:00.399 --> 00:32:04.320 |
|
and then the answer question two math |
|
|
|
00:32:02.279 --> 00:32:06.440 |
|
problem and then the answer so in order |
|
|
|
00:32:04.320 --> 00:32:08.320 |
|
to model the text on the internet it |
|
|
|
00:32:06.440 --> 00:32:12.120 |
|
needs to learn how to be able to do |
|
|
|
00:32:08.320 --> 00:32:15.399 |
|
these things but so um |
|
|
|
00:32:12.120 --> 00:32:17.760 |
|
cool the second thing is uh more |
|
|
|
00:32:15.399 --> 00:32:20.000 |
|
demonstrations can sometimes hurt |
|
|
|
00:32:17.760 --> 00:32:22.120 |
|
accuracy so this is like binary |
|
|
|
00:32:20.000 --> 00:32:25.080 |
|
classification versus multiple choice |
|
|
|
00:32:22.120 --> 00:32:27.440 |
|
question answering um and actually with |
|
|
|
00:32:25.080 --> 00:32:30.919 |
|
binary classification the model ends up |
|
|
|
00:32:27.440 --> 00:32:33.159 |
|
getting worse um with uh more examples |
|
|
|
00:32:30.919 --> 00:32:36.799 |
|
probably just because the longer context |
|
|
|
00:32:33.159 --> 00:32:39.320 |
|
uh you know confuses the model or moves |
|
|
|
00:32:36.799 --> 00:32:41.320 |
|
the instructions that are provided to |
|
|
|
00:32:39.320 --> 00:32:44.279 |
|
the model farther away in the context so |
|
|
|
00:32:41.320 --> 00:32:48.120 |
|
it starts forgetting them so |
|
|
|
00:32:44.279 --> 00:32:50.240 |
|
um basically what I want to say is uh |
|
|
|
00:32:48.120 --> 00:32:51.760 |
|
you know this is more of an art than a |
|
|
|
00:32:50.240 --> 00:32:53.279 |
|
science you might not get entirely |
|
|
|
00:32:51.760 --> 00:32:55.840 |
|
predictable results but don't worry it's |
|
|
|
00:32:53.279 --> 00:32:59.320 |
|
not just |
|
|
|
00:32:55.840 --> 00:32:59.320 |
|
you cool cool |
|
|
|
00:33:09.200 --> 00:33:15.320 |
|
yeah it can't so the question is if the |
|
|
|
00:33:12.639 --> 00:33:17.039 |
|
in context examples reflect the data |
|
|
|
00:33:15.320 --> 00:33:18.919 |
|
distribution well would that boost the |
|
|
|
00:33:17.039 --> 00:33:24.240 |
|
accuracy I think the answer is probably |
|
|
|
00:33:18.919 --> 00:33:26.039 |
|
yes yeah um I don't know if that it's |
|
|
|
00:33:24.240 --> 00:33:27.679 |
|
that clear because like what I would |
|
|
|
00:33:26.039 --> 00:33:29.919 |
|
expect |
|
|
|
00:33:27.679 --> 00:33:33.559 |
|
is better |
|
|
|
00:33:29.919 --> 00:33:37.240 |
|
coverage is probably more |
|
|
|
00:33:33.559 --> 00:33:39.760 |
|
important than better representativeness |
|
|
|
00:33:37.240 --> 00:33:41.960 |
|
so like even if you have some minority |
|
|
|
00:33:39.760 --> 00:33:43.639 |
|
labels um it's probably better for the |
|
|
|
00:33:41.960 --> 00:33:44.880 |
|
model to know what those minority labels |
|
|
|
00:33:43.639 --> 00:33:47.279 |
|
look like and that's going to be |
|
|
|
00:33:44.880 --> 00:33:49.120 |
|
especially true for like stronger models |
|
|
|
00:33:47.279 --> 00:33:50.679 |
|
um I think |
|
|
|
00:33:49.120 --> 00:33:54.320 |
|
so |
|
|
|
00:33:50.679 --> 00:33:56.440 |
|
cool okay so uh next I want to talk |
|
|
|
00:33:54.320 --> 00:33:59.000 |
|
about Chain of Thought prompting um so |
|
|
|
00:33:56.440 --> 00:34:01.320 |
|
Chain of Thought prompting is a very |
|
|
|
00:33:59.000 --> 00:34:04.080 |
|
popular way of prompting |
|
|
|
00:34:01.320 --> 00:34:06.080 |
|
models and the way it works is you get |
|
|
|
00:34:04.080 --> 00:34:07.839 |
|
the model to explain its reasoning |
|
|
|
00:34:06.080 --> 00:34:12.679 |
|
before making an |
|
|
|
00:34:07.839 --> 00:34:14.520 |
|
answer um and so sorry this example is a |
|
|
|
00:34:12.679 --> 00:34:18.879 |
|
little bit small but like the standard |
|
|
|
00:34:14.520 --> 00:34:20.480 |
|
prompting method is uh like Roger has |
|
|
|
00:34:18.879 --> 00:34:22.000 |
|
five tennis balls he buys two more cans |
|
|
|
00:34:20.480 --> 00:34:23.480 |
|
of tennis balls each can has three |
|
|
|
00:34:22.000 --> 00:34:28.200 |
|
tennis balls how many tennis balls does |
|
|
|
00:34:23.480 --> 00:34:29.359 |
|
he have now um the answer is 11 and so |
|
|
|
00:34:28.200 --> 00:34:32.119 |
|
um this |
|
|
|
00:34:29.359 --> 00:34:34.320 |
|
is an in context example and then you |
|
|
|
00:34:32.119 --> 00:34:37.240 |
|
have your input which has a different |
|
|
|
00:34:34.320 --> 00:34:39.000 |
|
problem uh the cafeteria has 23 apples |
|
|
|
00:34:37.240 --> 00:34:40.639 |
|
if they used 20 to make lunch and bought |
|
|
|
00:34:39.000 --> 00:34:41.800 |
|
six more how many apples do they have |
|
|
|
00:34:40.639 --> 00:34:46.720 |
|
the answer is |
|
|
|
00:34:41.800 --> 00:34:49.000 |
|
27 um and so this is wrong so what Chain |
|
|
|
00:34:46.720 --> 00:34:52.000 |
|
of Thought prompting does is instead of |
|
|
|
00:34:49.000 --> 00:34:54.960 |
|
just giving the answer it gives you an |
|
|
|
00:34:52.000 --> 00:34:57.079 |
|
additional reasoning chain uh that says |
|
|
|
00:34:54.960 --> 00:34:59.680 |
|
Roger started with five balls two cans of |
|
|
|
00:34:57.079 --> 00:35:01.800 |
|
three tennis balls each is six tennis |
|
|
|
00:34:59.680 --> 00:35:04.520 |
|
balls 5 plus 6 equals 11 the answer is |
|
|
|
00:35:01.800 --> 00:35:06.280 |
|
11 and so then when you feed this in |
|
|
|
00:35:04.520 --> 00:35:08.000 |
|
basically the model will generate a |
|
|
|
00:35:06.280 --> 00:35:10.240 |
|
similar reasoning chain and then it's |
|
|
|
00:35:08.000 --> 00:35:13.400 |
|
more likely to get the answer correct |
|
|
|
00:35:10.240 --> 00:35:15.720 |
|
and this very robustly works |
|
|
|
00:35:13.400 --> 00:35:19.440 |
|
for many |
|
|
|
00:35:15.720 --> 00:35:21.440 |
|
different problems where a reasoning |
|
|
|
00:35:19.440 --> 00:35:23.520 |
|
chain is |
|
|
|
00:35:21.440 --> 00:35:27.839 |
|
necessary and if you think about the |
|
|
|
00:35:23.520 --> 00:35:30.359 |
|
reason why this uh why this works I |
|
|
|
00:35:27.839 --> 00:35:33.040 |
|
think there's basically two reasons why |
|
|
|
00:35:30.359 --> 00:35:34.440 |
|
um the first reason is I I only wrote |
|
|
|
00:35:33.040 --> 00:35:36.560 |
|
one on the thing here but the first |
|
|
|
00:35:34.440 --> 00:35:38.760 |
|
reason is it allows the model to |
|
|
|
00:35:36.560 --> 00:35:41.359 |
|
decompose harder problems into simpler |
|
|
|
00:35:38.760 --> 00:35:45.119 |
|
problems and simpler problems are easier |
|
|
|
00:35:41.359 --> 00:35:47.560 |
|
right so um instead |
|
|
|
00:35:45.119 --> 00:35:51.319 |
|
of immediately trying to solve the whole |
|
|
|
00:35:47.560 --> 00:35:53.800 |
|
problem in a single go it will first |
|
|
|
00:35:51.319 --> 00:35:56.520 |
|
solve the problem of like what how many |
|
|
|
00:35:53.800 --> 00:35:58.920 |
|
are left after you use 20 and so it |
|
|
|
00:35:56.520 --> 00:36:00.240 |
|
gets three and so now it has this three |
|
|
|
00:35:58.920 --> 00:36:02.480 |
|
here so now it can solve the next |
|
|
|
00:36:00.240 --> 00:36:05.160 |
|
problem of adding six that's equal to 9 |
|
|
|
00:36:02.480 --> 00:36:07.880 |
|
so it's solving simpler sub problems |
|
|
|
00:36:05.160 --> 00:36:11.440 |
|
uh compared to harder |
|
|
|
00:36:07.880 --> 00:36:13.920 |
|
ones another reason why is it allows for |
|
|
|
00:36:11.440 --> 00:36:17.319 |
|
adaptive computation time so if you |
|
|
|
00:36:13.920 --> 00:36:17.319 |
|
think about like a Transformer |
|
|
|
00:36:19.000 --> 00:36:23.119 |
|
model um if you think about a |
|
|
|
00:36:21.280 --> 00:36:25.560 |
|
Transformer model a Transformer model |
|
|
|
00:36:23.119 --> 00:36:27.200 |
|
has fixed computation time for |
|
|
|
00:36:25.560 --> 00:36:29.920 |
|
predicting each token right a fixed |
|
|
|
00:36:27.200 --> 00:36:31.560 |
|
number of layers it um and based on that |
|
|
|
00:36:29.920 --> 00:36:33.839 |
|
fixed number of layers it passes all the |
|
|
|
00:36:31.560 --> 00:36:36.520 |
|
information through and makes a |
|
|
|
00:36:33.839 --> 00:36:38.200 |
|
prediction and some problems are harder |
|
|
|
00:36:36.520 --> 00:36:39.599 |
|
than others right so it would be very |
|
|
|
00:36:38.200 --> 00:36:42.480 |
|
wasteful to have a really big |
|
|
|
00:36:39.599 --> 00:36:45.640 |
|
Transformer that could solve you know |
|
|
|
00:36:42.480 --> 00:36:49.119 |
|
really complex math problems in the same |
|
|
|
00:36:45.640 --> 00:36:53.359 |
|
amount of time it takes to predict that |
|
|
|
00:36:49.119 --> 00:36:55.280 |
|
the next word is like uh dog after the |
|
|
|
00:36:53.359 --> 00:36:57.280 |
|
words "the big" or something like that |
|
|
|
00:36:55.280 --> 00:36:58.560 |
|
right so there are some things that are |
|
|
|
00:36:57.280 --> 00:37:00.000 |
|
easy we can do in a second there are |
|
|
|
00:36:58.560 --> 00:37:01.839 |
|
some things that take us more time and |
|
|
|
00:37:00.000 --> 00:37:05.880 |
|
essentially this Chain of Thought |
|
|
|
00:37:01.839 --> 00:37:09.280 |
|
reasoning is um is doing that it's |
|
|
|
00:37:05.880 --> 00:37:12.280 |
|
giving it more time to solve the harder |
|
|
|
00:37:09.280 --> 00:37:12.280 |
|
problems |
|
|
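The adaptive-computation point can be made concrete with a toy cost model: per-token compute is fixed, so the only way a Transformer spends more compute on a hard problem is by generating more tokens first. The numbers below are made up for illustration.

```python
# Rough illustration of "adaptive computation time" via chain of thought:
# a Transformer spends a fixed number of layers per generated token, so
# generating a reasoning chain before the answer multiplies the total
# compute spent on the problem. Numbers are illustrative only.
def decode_cost(num_layers: int, generated_tokens: int) -> int:
    # One unit of work per layer per generated token (very simplified).
    return num_layers * generated_tokens

layers = 32
direct_answer = decode_cost(layers, generated_tokens=2)        # "9" + EOS
with_chain_of_thought = decode_cost(layers, generated_tokens=40)

assert with_chain_of_thought > direct_answer
print(direct_answer, with_chain_of_thought)  # 64 1280
```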
|
00:37:17.200 --> 00:37:22.440 |
|
yes |
|
|
|
00:37:18.839 --> 00:37:23.960 |
|
okay yeah good question so |
|
|
|
00:37:22.440 --> 00:37:26.200 |
|
that's what um that's what this next |
|
|
|
00:37:23.960 --> 00:37:27.920 |
|
paper does so uh the question was |
|
|
|
00:37:26.200 --> 00:37:31.160 |
|
what happens if we just ask it to |
|
|
|
00:37:27.920 --> 00:37:34.800 |
|
reason and the answer is it still works |
|
|
|
00:37:31.160 --> 00:37:37.000 |
|
um and this paper was really like I |
|
|
|
00:37:34.800 --> 00:37:39.760 |
|
love this paper for its simplicity and |
|
|
|
00:37:37.000 --> 00:37:43.160 |
|
cleverness and basically uh they |
|
|
|
00:37:39.760 --> 00:37:45.000 |
|
contrast few shot learning few shot |
|
|
|
00:37:43.160 --> 00:37:49.200 |
|
Chain of Thought where you provide Chain |
|
|
|
00:37:45.000 --> 00:37:52.160 |
|
of Thought examples zero shot prompting |
|
|
|
00:37:49.200 --> 00:37:54.560 |
|
basically and zero shot Chain of Thought |
|
|
|
00:37:52.160 --> 00:37:58.720 |
|
So what they do is they just |
|
|
|
00:37:54.560 --> 00:38:00.280 |
|
add uh let's think step by step that |
|
|
|
00:37:58.720 --> 00:38:04.200 |
|
they add that phrase to the end of the |
|
|
|
00:38:00.280 --> 00:38:06.079 |
|
prompt and then that elicits the model |
|
|
|
00:38:04.200 --> 00:38:08.000 |
|
to basically do Chain of Thought |
|
|
|
00:38:06.079 --> 00:38:09.240 |
|
reasoning without any further examples |
|
|
|
00:38:08.000 --> 00:38:12.599 |
|
of how that Chain of Thought reasoning |
|
|
|
00:38:09.240 --> 00:38:14.440 |
|
works why does this work again because |
|
|
|
00:38:12.599 --> 00:38:16.760 |
|
like on the internet there's a bunch of |
|
|
|
00:38:14.440 --> 00:38:20.240 |
|
examples of math problem solving data |
|
|
|
00:38:16.760 --> 00:38:22.800 |
|
sets or QA corpora where it says let's |
|
|
|
00:38:20.240 --> 00:38:24.480 |
|
think step by step and after that you |
|
|
|
00:38:22.800 --> 00:38:28.040 |
|
you know consistently have this sort of |
|
|
|
00:38:24.480 --> 00:38:29.800 |
|
reasoning chain added there so um good |
|
|
|
00:38:28.040 --> 00:38:31.200 |
|
good intuition uh that this paper |
|
|
|
00:38:29.800 --> 00:38:32.480 |
|
answers the question and this like |
|
|
|
00:38:31.200 --> 00:38:36.119 |
|
actually does |
|
|
|
00:38:32.480 --> 00:38:39.119 |
|
work one interesting thing is |
|
|
|
00:38:36.119 --> 00:38:39.119 |
|
um |
|
|
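The zero-shot trick just described amounts to appending one trigger phrase to the prompt. A minimal sketch (the juggler question is a stand-in example, not from the lecture):

```python
# Sketch of zero-shot chain of thought: instead of providing worked
# examples, append a single trigger phrase to the prompt and let the
# model produce the reasoning chain itself. The resulting string would
# then be sent to whatever language model API you are using.
def zero_shot_cot(question: str) -> str:
    return question.strip() + "\nA: Let's think step by step."

prompt = zero_shot_cot(
    "Q: A juggler has 16 balls. Half are golf balls, and half of the "
    "golf balls are blue. How many blue golf balls are there?"
)
print(prompt)
```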
|
00:38:39.720 --> 00:38:45.200 |
|
now if I go to chat |
|
|
|
00:38:47.319 --> 00:38:52.240 |
|
GPT and I say |
|
|
|
00:38:50.480 --> 00:38:58.520 |
|
um |
|
|
|
00:38:52.240 --> 00:38:58.520 |
|
I am teaching a class with 98 |
|
|
|
00:38:58.720 --> 00:39:06.000 |
|
students |
|
|
|
00:39:01.480 --> 00:39:08.400 |
|
70% turn in the |
|
|
|
00:39:06.000 --> 00:39:10.720 |
|
assignment on |
|
|
|
00:39:08.400 --> 00:39:17.720 |
|
time uh |
|
|
|
00:39:10.720 --> 00:39:21.880 |
|
10% turn it in late how many did not turn |
|
|
|
00:39:17.720 --> 00:39:21.880 |
|
it in let's see let's see if this |
|
|
|
00:39:25.079 --> 00:39:29.200 |
|
works okay it's writing code for |
|
|
|
00:39:29.440 --> 00:39:34.599 |
|
me which is a feature that I |
|
|
|
00:39:32.119 --> 00:39:37.319 |
|
kind of didn't |
|
|
|
00:39:34.599 --> 00:39:37.319 |
|
want it to do |
|
|
|
00:39:40.040 --> 00:39:44.160 |
|
that okay |
|
|
|
00:39:45.920 --> 00:39:50.359 |
|
um I do not |
|
|
|
00:39:54.280 --> 00:39:59.720 |
|
like okay so um |
|
|
|
00:39:57.040 --> 00:39:59.720 |
|
it's a little bit |
|
|
|
00:40:15.720 --> 00:40:21.839 |
|
slow okay so there there that worked but |
|
|
|
00:40:19.680 --> 00:40:24.640 |
|
note that I did not say let's think step |
|
|
|
00:40:21.839 --> 00:40:27.760 |
|
by step I didn't ask it to do |
|
|
|
00:40:24.640 --> 00:40:29.240 |
|
this um and the reason why is um we're |
|
|
|
00:40:27.760 --> 00:40:31.119 |
|
going to talk about instruction tuning |
|
|
|
00:40:29.240 --> 00:40:34.359 |
|
next time but basically GPT has been |
|
|
|
00:40:31.119 --> 00:40:36.560 |
|
tuned to do this reasoning even if you |
|
|
|
00:40:34.359 --> 00:40:38.480 |
|
don't ask it to do that uh it wouldn't |
|
|
|
00:40:36.560 --> 00:40:40.839 |
|
do that naturally but it's because lots |
|
|
|
00:40:38.480 --> 00:40:43.880 |
|
of supervised data has been added into |
|
|
|
00:40:40.839 --> 00:40:46.920 |
|
this model so like another thing is like |
|
|
|
00:40:43.880 --> 00:40:48.960 |
|
if you are planning on doing anything |
|
|
|
00:40:46.920 --> 00:40:51.240 |
|
about like Chain of Thought reasoning or |
|
|
|
00:40:48.960 --> 00:40:53.000 |
|
or stuff like that as a class project |
|
|
|
00:40:51.240 --> 00:40:54.960 |
|
you need to keep in mind that the like |
|
|
|
00:40:53.000 --> 00:40:58.280 |
|
GPT models have already been trained to |
|
|
|
00:40:54.960 --> 00:41:00.040 |
|
do this and so so if you want to like |
|
|
|
00:40:58.280 --> 00:41:01.599 |
|
try to find out a better way to elicit |
|
|
|
00:41:00.040 --> 00:41:03.960 |
|
this from a raw model you'll need to use |
|
|
|
00:41:01.599 --> 00:41:07.119 |
|
a raw model like Llama 2 with no chat |
|
|
|
00:41:03.960 --> 00:41:10.200 |
|
tuning or stuff like that um in order to |
|
|
|
00:41:07.119 --> 00:41:12.520 |
|
uh do that in a neutral in a neutral |
|
|
|
00:41:10.200 --> 00:41:14.960 |
|
setting that hasn't been contaminated by |
|
|
|
00:41:12.520 --> 00:41:14.960 |
|
like supervised |
|
|
|
00:41:15.960 --> 00:41:20.520 |
|
labels cool um any |
|
|
|
00:41:21.079 --> 00:41:27.720 |
|
questions okay um so next I want to talk |
|
|
|
00:41:24.720 --> 00:41:31.280 |
|
about prompting in programs that uh Chat |
|
|
|
00:41:27.720 --> 00:41:35.560 |
|
GPT gave me a good example of why |
|
|
|
00:41:31.280 --> 00:41:37.160 |
|
this is useful or important um so |
|
|
|
00:41:35.560 --> 00:41:40.640 |
|
there's two results actually both of |
|
|
|
00:41:37.160 --> 00:41:43.440 |
|
these are are from my uh collaborators |
|
|
|
00:41:40.640 --> 00:41:45.839 |
|
but the first one is um it demonstrates |
|
|
|
00:41:43.440 --> 00:41:48.720 |
|
that structuring outputs as programs can |
|
|
|
00:41:45.839 --> 00:41:51.599 |
|
help you get better results even if the |
|
|
|
00:41:48.720 --> 00:41:55.119 |
|
task isn't a programmatic task so this |
|
|
|
00:41:51.599 --> 00:41:57.000 |
|
is kind of interesting um so we were |
|
|
|
00:41:55.119 --> 00:41:59.319 |
|
looking at predicting structured |
|
|
|
00:41:57.000 --> 00:42:01.640 |
|
outputs and these structured outputs |
|
|
|
00:41:59.319 --> 00:42:03.839 |
|
specifically are procedural knowledge |
|
|
|
00:42:01.640 --> 00:42:06.920 |
|
like this so like how do we cook a pie |
|
|
|
00:42:03.839 --> 00:42:09.040 |
|
or how do we serve pot pies on a plate |
|
|
|
00:42:06.920 --> 00:42:10.800 |
|
and we had this procedural knowledge |
|
|
|
00:42:09.040 --> 00:42:14.040 |
|
like take the pies out to cool open the |
|
|
|
00:42:10.800 --> 00:42:16.079 |
|
cabinet drawer take out several plates |
|
|
|
00:42:14.040 --> 00:42:17.720 |
|
and we wanted to know the dependencies |
|
|
|
00:42:16.079 --> 00:42:19.520 |
|
between these so we could create a |
|
|
|
00:42:17.720 --> 00:42:22.559 |
|
structured like procedural knowledge |
|
|
|
00:42:19.520 --> 00:42:25.599 |
|
base so this is not an inherently code |
|
|
|
00:42:22.559 --> 00:42:27.200 |
|
based task it's not a you know you could |
|
|
|
00:42:25.599 --> 00:42:28.880 |
|
just ask |
|
|
|
00:42:27.200 --> 00:42:32.160 |
|
the model in natural language and that |
|
|
|
00:42:28.880 --> 00:42:35.000 |
|
would work as well so we structured |
|
|
|
00:42:32.160 --> 00:42:37.720 |
|
things in a couple varieties so we had a |
|
|
|
00:42:35.000 --> 00:42:39.800 |
|
textual format we had uh something in |
|
|
|
00:42:37.720 --> 00:42:43.079 |
|
the dot format which is a way to draw |
|
|
|
00:42:39.800 --> 00:42:45.480 |
|
graphs and then we had we also tried |
|
|
|
00:42:43.079 --> 00:42:47.240 |
|
structuring the output in Python so |
|
|
|
00:42:45.480 --> 00:42:48.960 |
|
these are just different ways to format |
|
|
|
00:42:47.240 --> 00:42:50.720 |
|
the output they all say the same thing |
|
|
|
00:42:48.960 --> 00:42:54.599 |
|
and we can extract the answer from all |
|
|
|
00:42:50.720 --> 00:42:56.920 |
|
of them um but we found that structuring |
|
|
|
00:42:54.599 --> 00:42:58.480 |
|
it in Python basically is the more |
|
|
|
00:42:56.920 --> 00:43:02.920 |
|
effective way of doing |
|
|
|
00:42:58.480 --> 00:43:04.680 |
|
this so why is this the case the |
|
|
|
00:43:02.920 --> 00:43:06.280 |
|
answer is essentially the same thing |
|
|
|
00:43:04.680 --> 00:43:08.680 |
|
that I was talking about before with you |
|
|
|
00:43:06.280 --> 00:43:11.480 |
|
know predicting excellent instead of |
|
|
|
00:43:08.680 --> 00:43:13.319 |
|
five right you know it's seen a ton of |
|
|
|
00:43:11.480 --> 00:43:15.960 |
|
python in its training data so it's |
|
|
|
00:43:13.319 --> 00:43:17.760 |
|
very good at predicting python uh it's |
|
|
|
00:43:15.960 --> 00:43:20.359 |
|
less good at predicting dot format |
|
|
|
00:43:17.760 --> 00:43:24.240 |
|
because it's seen less dot format and it |
|
|
|
00:43:20.359 --> 00:43:26.640 |
|
hasn't seen very much of this text format |
|
|
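The contrast between formats can be sketched as follows. This is an illustrative rendering of the pot-pie procedure in plain text versus a Python-style structure with explicit dependencies; it is not the exact format from the paper.

```python
# The same procedural knowledge rendered two ways. The Python rendering
# makes dependencies explicit (the final step refers back to earlier
# steps by id), which is the kind of structure models have seen a lot
# of in pretraining. Illustrative sketch only.
as_text = (
    "1. Take the pies out to cool.\n"
    "2. Open the cabinet drawer and take out several plates.\n"
    "3. Put one pot pie on each plate."
)

# Python-style rendering with explicit dependencies between steps:
steps = []
steps.append({"id": 0, "action": "take pies out to cool", "needs": []})
steps.append({"id": 1, "action": "take out several plates", "needs": []})
steps.append({"id": 2, "action": "put one pie on each plate",
              "needs": [0, 1]})  # depends on cooled pies and plates

assert steps[2]["needs"] == [0, 1]
```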
|
00:43:24.240 --> 00:43:29.960 |
|
um another |
|
|
|
00:43:26.640 --> 00:43:32.359 |
|
comment is code is very highly |
|
|
|
00:43:29.960 --> 00:43:33.559 |
|
structured compared to natural language |
|
|
|
00:43:32.359 --> 00:43:35.599 |
|
and because code is very highly |
|
|
|
00:43:33.559 --> 00:43:37.520 |
|
structured we have things like |
|
|
|
00:43:35.599 --> 00:43:39.079 |
|
dependencies where we refer back to |
|
|
|
00:43:37.520 --> 00:43:41.079 |
|
variables that we defined before and |
|
|
|
00:43:39.079 --> 00:43:44.119 |
|
other things like this so I think when |
|
|
|
00:43:41.079 --> 00:43:46.760 |
|
it starts outputting code the models get |
|
|
|
00:43:44.119 --> 00:43:48.359 |
|
into this mode which says yes please |
|
|
|
00:43:46.760 --> 00:43:51.280 |
|
refer back to the things you've seen |
|
|
|
00:43:48.359 --> 00:43:53.440 |
|
previously more often like attend to |
|
|
|
00:43:51.280 --> 00:43:57.040 |
|
previous stuff more often and don't just |
|
|
|
00:43:53.440 --> 00:43:59.760 |
|
like generate things you know uh |
|
|
|
00:43:57.040 --> 00:44:02.440 |
|
arbitrarily and hallucinate you know new |
|
|
|
00:43:59.760 --> 00:44:04.119 |
|
content and because of this for |
|
|
|
00:44:02.440 --> 00:44:05.559 |
|
generating structured outputs even if |
|
|
|
00:44:04.119 --> 00:44:08.920 |
|
the structured outputs don't need to be |
|
|
|
00:44:05.559 --> 00:44:11.520 |
|
code you can benefit by doing |
|
|
|
00:44:08.920 --> 00:44:13.200 |
|
this another thing that's a really handy |
|
|
|
00:44:11.520 --> 00:44:16.319 |
|
trick is anytime you want to get a |
|
|
|
00:44:13.200 --> 00:44:19.079 |
|
structured output out of a model |
|
|
|
00:44:16.319 --> 00:44:22.760 |
|
um you can ask it to generate something |
|
|
|
00:44:19.079 --> 00:44:24.839 |
|
in Json instead of generating it in uh |
|
|
|
00:44:22.760 --> 00:44:26.640 |
|
in text and the reason why Json is |
|
|
|
00:44:24.839 --> 00:44:28.079 |
|
useful is you can parse the JSON you can |
|
|
|
00:44:26.640 --> 00:44:32.319 |
|
pull out the strings and other stuff |
|
|
|
00:44:28.079 --> 00:44:34.839 |
|
like that um so this can be very |
|
|
|
00:44:32.319 --> 00:44:37.960 |
|
effective because if you just add an |
|
|
|
00:44:34.839 --> 00:44:40.200 |
|
instruction that says please um please |
|
|
|
00:44:37.960 --> 00:44:42.440 |
|
format things in this particular |
|
|
|
00:44:40.200 --> 00:44:43.680 |
|
format often the model won't listen to |
|
|
|
00:44:42.440 --> 00:44:44.800 |
|
you and it will output something in a |
|
|
|
00:44:43.680 --> 00:44:46.280 |
|
different format you need to write a |
|
|
|
00:44:44.800 --> 00:44:48.599 |
|
really annoying parser to pull out the |
|
|
|
00:44:46.280 --> 00:44:50.280 |
|
information that you actually want but |
|
|
|
00:44:48.599 --> 00:44:51.960 |
|
it gets Json right almost all of the |
|
|
|
00:44:50.280 --> 00:44:54.040 |
|
time just because it's seen so much Json |
|
|
|
00:44:51.960 --> 00:44:57.880 |
|
so that's a nice trick if you want to do |
|
|
|
00:44:54.040 --> 00:45:01.520 |
|
something like that |
|
|
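The JSON trick just described can be sketched with the standard library parser. The instruction wording and the model reply below are made-up stand-ins for whatever your model actually returns.

```python
import json

# Sketch of the JSON trick: ask the model to answer in JSON, then parse
# the reply with a standard parser instead of writing a fragile ad-hoc
# one. `model_reply` stands in for an actual model response.
instruction = (
    "Extract the dependencies between the steps and reply ONLY with "
    'JSON of the form {"steps": [...], "edges": [[i, j], ...]}.'
)

model_reply = '{"steps": ["cool pies", "get plates"], "edges": [[0, 1]]}'

data = json.loads(model_reply)   # fails loudly if the model broke format
print(data["edges"])             # [[0, 1]]
```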
|
00:44:57.880 --> 00:45:03.559 |
|
another uh thing is a paper uh called |
|
|
|
00:45:01.520 --> 00:45:08.079 |
|
program-aided language models that we did |
|
|
|
00:45:03.559 --> 00:45:10.200 |
|
about a year ago and the method that we |
|
|
|
00:45:08.079 --> 00:45:13.760 |
|
proposed here is using a program to |
|
|
|
00:45:10.200 --> 00:45:16.480 |
|
generate outputs uh using a program to |
|
|
|
00:45:13.760 --> 00:45:19.440 |
|
generate outputs and this can be more |
|
|
|
00:45:16.480 --> 00:45:22.319 |
|
precise than asking an LM to do so and |
|
|
|
00:45:19.440 --> 00:45:26.720 |
|
so instead of doing Chain of Thought |
|
|
|
00:45:22.319 --> 00:45:30.640 |
|
prompting we created a few few-shot |
|
|
|
00:45:26.720 --> 00:45:34.319 |
|
examples where we wrote like the text |
|
|
|
00:45:30.640 --> 00:45:37.160 |
|
here and then the text in English and |
|
|
|
00:45:34.319 --> 00:45:40.160 |
|
then we had code corresponding code the |
|
|
|
00:45:37.160 --> 00:45:42.280 |
|
text in English corresponding code and |
|
|
|
00:45:40.160 --> 00:45:44.960 |
|
then the answer is and then the final |
|
|
|
00:45:42.280 --> 00:45:48.160 |
|
code and then we basically generate this |
|
|
|
00:45:44.960 --> 00:45:49.640 |
|
code and execute it to get the answer so |
|
|
|
00:45:48.160 --> 00:45:52.319 |
|
like as you saw this is implemented in |
|
|
|
00:45:49.640 --> 00:45:54.280 |
|
chat GPT now it's uh you write something |
|
|
|
00:45:52.319 --> 00:45:56.319 |
|
out it will decide whether it wants to |
|
|
|
00:45:54.280 --> 00:45:58.599 |
|
generate code or generate text depending |
|
|
|
00:45:56.319 --> 00:46:00.760 |
|
on the type of problem and it's just |
|
|
|
00:45:58.599 --> 00:46:03.559 |
|
more precise it can solve like actually |
|
|
|
00:46:00.760 --> 00:46:05.200 |
|
rather complex problems like uh you know |
|
|
|
00:46:03.559 --> 00:46:07.880 |
|
calculating how much tax you need to be |
|
|
|
00:46:05.200 --> 00:46:10.880 |
|
paying or something like that |
|
|
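A minimal sketch of this program-aided pattern, reusing the class-attendance question from the earlier demo: the model writes Python for the word problem and we execute it, so the arithmetic is exact. The "generated" code below is hand-written to stand in for a model completion.

```python
# Sketch of program-aided reasoning (PAL-style): execute the program the
# model wrote instead of trusting the model's own arithmetic. The code
# string below is a hand-written stand-in for a model completion.
generated_code = """
students = 98
on_time_pct = 70
late_pct = 10
did_not_turn_in = students * (100 - on_time_pct - late_pct) / 100
"""

namespace = {}
exec(generated_code, namespace)          # run the model's program
answer = namespace["did_not_turn_in"]
print(answer)  # 19.6 -- exact, no arithmetic hallucination
```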
|
00:46:07.880 --> 00:46:12.480 |
|
um it's especially useful for numeric |
|
|
|
00:46:10.880 --> 00:46:14.599 |
|
questions and it's implemented in things |
|
|
|
00:46:12.480 --> 00:46:17.040 |
|
like the chat GPT uh code interpreter |
|
|
|
00:46:14.599 --> 00:46:18.640 |
|
Bard tool execution other things like that |
|
|
|
00:46:17.040 --> 00:46:22.079 |
|
it's pretty cool it can actually do |
|
|
|
00:46:18.640 --> 00:46:24.440 |
|
visualizations for you for papers also |
|
|
|
00:46:22.079 --> 00:46:28.000 |
|
so if you ask it to visualize data for |
|
|
|
00:46:24.440 --> 00:46:30.200 |
|
you um chat GPT now does a pretty good |
|
|
|
00:46:28.000 --> 00:46:32.640 |
|
job of doing this like to give an |
|
|
|
00:46:30.200 --> 00:46:34.319 |
|
example I asked it I gave it a big |
|
|
|
00:46:32.640 --> 00:46:35.760 |
|
python list and asked it to generate a |
|
|
|
00:46:34.319 --> 00:46:37.839 |
|
histogram and it did a really good job |
|
|
|
00:46:35.760 --> 00:46:40.240 |
|
of it for me it also gives you the code |
|
|
|
00:46:37.839 --> 00:46:42.839 |
|
so you can go in and modify it later so |
|
|
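Roughly the kind of code ChatGPT writes for that histogram request; the score list here is made up.

```python
# Sketch of the histogram code a code-interpreter style model typically
# produces when handed a Python list. Data is made up for illustration.
import matplotlib
matplotlib.use("Agg")            # headless backend so this runs anywhere
import matplotlib.pyplot as plt

scores = [71, 88, 93, 67, 85, 90, 78, 95, 82, 74, 88, 91]

plt.hist(scores, bins=5, edgecolor="black")
plt.xlabel("score")
plt.ylabel("count")
plt.title("Assignment scores")
plt.savefig("histogram.png")     # or plt.show() interactively
```

Getting the code back (not just the image) is what lets you tweak bin counts or labels afterward.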
|
00:46:40.240 --> 00:46:44.720 |
|
um I would definitely recommend you know |
|
|
|
00:46:42.839 --> 00:46:46.200 |
|
thinking about using this uh either in |
|
|
|
00:46:44.720 --> 00:46:49.200 |
|
your research or just to write your |
|
|
|
00:46:46.200 --> 00:46:51.480 |
|
reports uh for this class so um this |
|
|
|
00:46:49.200 --> 00:46:55.839 |
|
class is uh generative AI friendly |
|
|
|
00:46:51.480 --> 00:46:57.760 |
|
mostly so like I do want you to |
|
|
|
00:46:55.839 --> 00:46:59.880 |
|
learn the things we expect you to learn |
|
|
|
00:46:57.760 --> 00:47:02.480 |
|
which is why I suggest that you don't |
|
|
|
00:46:59.880 --> 00:47:04.400 |
|
like just write everything for |
|
|
|
00:47:02.480 --> 00:47:06.280 |
|
assignment number one with chat GPT |
|
|
|
00:47:04.400 --> 00:47:07.720 |
|
but I think even if you tried to do that |
|
|
|
00:47:06.280 --> 00:47:09.640 |
|
it'd probably get it wrong in subtle |
|
|
|
00:47:07.720 --> 00:47:10.920 |
|
ways so you're probably better off |
|
|
|
00:47:09.640 --> 00:47:13.880 |
|
understanding the content |
|
|
|
00:47:10.920 --> 00:47:16.400 |
|
anyway um |
|
|
|
00:47:13.880 --> 00:47:18.160 |
|
cool this can also be expanded a whole |
|
|
|
00:47:16.400 --> 00:47:21.559 |
|
lot into like agents and tools and I'm |
|
|
|
00:47:18.160 --> 00:47:21.559 |
|
going to talk about that separately |
|
|
|
00:47:22.800 --> 00:47:27.720 |
|
later cool uh any questions about this |
|
|
|
00:47:29.040 --> 00:47:34.200 |
|
okay I'm uh I'm going to go next so |
|
|
|
00:47:31.800 --> 00:47:36.079 |
|
prompt engineering um when you're |
|
|
|
00:47:34.200 --> 00:47:37.280 |
|
designing prompts uh there's a number of |
|
|
|
00:47:36.079 --> 00:47:38.240 |
|
different ways you can do this you can |
|
|
|
00:47:37.280 --> 00:47:41.559 |
|
do this |
|
|
|
00:47:38.240 --> 00:47:42.960 |
|
manually uh to do this you configure |
|
|
|
00:47:41.559 --> 00:47:44.520 |
|
a manual template based on the |
|
|
|
00:47:42.960 --> 00:47:46.160 |
|
characteristics of the task using all of |
|
|
|
00:47:44.520 --> 00:47:48.880 |
|
the knowledge that I told you |
|
|
|
00:47:46.160 --> 00:47:50.119 |
|
before you can also do automated search |
|
|
|
00:47:48.880 --> 00:47:52.079 |
|
and there's a number of different ways |
|
|
|
00:47:50.119 --> 00:47:55.119 |
|
to do automated search for |
|
|
|
00:47:52.079 --> 00:47:58.319 |
|
prompts uh the first one is doing some |
|
|
|
00:47:55.119 --> 00:48:00.599 |
|
sort of search in discrete space uh so you |
|
|
|
00:47:58.319 --> 00:48:02.720 |
|
find a prompt that is |
|
|
|
00:48:00.599 --> 00:48:04.680 |
|
essentially |
|
|
|
00:48:02.720 --> 00:48:06.640 |
|
text the other one is search in |
|
|
|
00:48:04.680 --> 00:48:08.559 |
|
continuous space so you find a prompt |
|
|
|
00:48:06.640 --> 00:48:10.680 |
|
that isn't actually comprehensible text |
|
|
|
00:48:08.559 --> 00:48:14.760 |
|
but nonetheless is a good |
|
|
|
00:48:10.680 --> 00:48:16.960 |
|
prompt so looking at manual engineering |
|
|
|
00:48:14.760 --> 00:48:19.000 |
|
um making sure that the format matches |
|
|
|
00:48:16.960 --> 00:48:21.680 |
|
that of a trained model uh such as the |
|
|
|
00:48:19.000 --> 00:48:24.359 |
|
chat format is actually really really |
|
|
|
00:48:21.680 --> 00:48:26.119 |
|
important um and this can have a a large |
|
|
|
00:48:24.359 --> 00:48:28.119 |
|
effect on models there's a really nice paper |
|
|
|
00:48:26.119 --> 00:48:30.000 |
|
that demonstrated this convincingly |
|
|
|
00:48:28.119 --> 00:48:33.200 |
|
before and also releases some software |
|
|
|
00:48:30.000 --> 00:48:35.880 |
|
that allows you to do this um kind of in |
|
|
|
00:48:33.200 --> 00:48:38.079 |
|
an efficient manner and what this is |
|
|
|
00:48:35.880 --> 00:48:41.200 |
|
showing is |
|
|
|
00:48:38.079 --> 00:48:45.079 |
|
um this is the original formatting of a |
|
|
|
00:48:41.200 --> 00:48:48.400 |
|
prompt that was given I believe by uh |
|
|
|
00:48:45.079 --> 00:48:50.119 |
|
some sort of like uh machine reading or |
|
|
|
00:48:48.400 --> 00:48:52.799 |
|
document based question answering data |
|
|
|
00:48:50.119 --> 00:48:55.480 |
|
set which was like passage |
|
|
|
00:48:52.799 --> 00:48:58.440 |
|
answer if you modify the spacing between |
|
|
|
00:48:55.480 --> 00:49:01.680 |
|
the fields that increases your score |
|
|
|
00:48:58.440 --> 00:49:04.280 |
|
by several percentage points um if you |
|
|
|
00:49:01.680 --> 00:49:06.880 |
|
remove the colons that increases your |
|
|
|
00:49:04.280 --> 00:49:08.720 |
|
score by a few more percentage points |
|
|
|
00:49:06.880 --> 00:49:10.119 |
|
it's kind of silly but like little |
|
|
|
00:49:08.720 --> 00:49:11.040 |
|
things like this actually can make a |
|
|
|
00:49:10.119 --> 00:49:14.240 |
|
really big |
|
|
|
00:49:11.040 --> 00:49:17.599 |
|
difference um if you modify the casing |
|
|
|
00:49:14.240 --> 00:49:19.960 |
|
this decreases by a lot if you modify |
|
|
|
00:49:17.599 --> 00:49:22.440 |
|
the casing and remove colons so the |
|
|
|
00:49:19.960 --> 00:49:25.200 |
|
thing that was useful like adding colons |
|
|
|
00:49:22.440 --> 00:49:26.720 |
|
here remove colons uh that further |
|
|
|
00:49:25.200 --> 00:49:29.280 |
|
decreases it |
|
|
|
00:49:26.720 --> 00:49:31.400 |
|
if you forget to add a space between the |
|
|
|
00:49:29.280 --> 00:49:32.559 |
|
passage and the text that really hurts |
|
|
|
00:49:31.400 --> 00:49:35.599 |
|
your |
|
|
|
00:49:32.559 --> 00:49:38.000 |
|
accuracy so this is pretty painful right |
|
|
|
00:49:35.599 --> 00:49:40.599 |
|
like you don't want to be getting uh |
|
|
|
00:49:38.000 --> 00:49:44.160 |
|
0.036% accuracy when adding a space |
|
|
|
00:49:40.599 --> 00:49:48.680 |
|
would give you like 75% accuracy |
|
|
|
00:49:44.160 --> 00:49:50.799 |
|
right um and one interesting thing is um |
|
|
|
00:49:48.680 --> 00:49:53.160 |
|
this is looking |
|
|
|
00:49:50.799 --> 00:49:56.559 |
|
at different |
|
|
|
00:49:53.160 --> 00:49:58.520 |
|
models and um |
|
|
|
00:49:56.559 --> 00:50:00.640 |
|
with different models it's pretty |
|
|
|
00:49:58.520 --> 00:50:03.599 |
|
consistent that many different plausible |
|
|
|
00:50:00.640 --> 00:50:05.400 |
|
formats that you try the average gives |
|
|
|
00:50:03.599 --> 00:50:07.240 |
|
you a really low accuracy but there's a |
|
|
|
00:50:05.400 --> 00:50:08.760 |
|
few outliers that give you really good |
|
|
|
00:50:07.240 --> 00:50:11.119 |
|
accuracy and these probably correspond |
|
|
|
00:50:08.760 --> 00:50:13.400 |
|
to the things that it was trained on um |
|
|
|
00:50:11.119 --> 00:50:15.880 |
|
instruction tuned on or or other things |
|
|
|
00:50:13.400 --> 00:50:17.480 |
|
like this so number one make sure you're |
|
|
|
00:50:15.880 --> 00:50:19.799 |
|
using like the canonical prompt |
|
|
|
00:50:17.480 --> 00:50:21.240 |
|
formatting for the model for sure number |
|
|
|
00:50:19.799 --> 00:50:22.640 |
|
two you might want to do a little bit of |
|
|
|
00:50:21.240 --> 00:50:24.720 |
|
additional search to see if you can do |
|
|
|
00:50:22.640 --> 00:50:26.960 |
|
even better than that so um this is |
|
|
|
00:50:24.720 --> 00:50:29.480 |
|
something to be very aware |
|
|
|
00:50:26.960 --> 00:50:32.480 |
|
of |
|
|
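The "do a little additional search over formats" idea can be sketched as enumerating plausible renderings of the same fields and keeping whichever scores best on dev data. The field names and variants here are illustrative; an `evaluate` step against your own dev set and model is assumed but not shown.

```python
# Sketch of searching over prompt formats: render the same fields with
# different separators and casings, then keep whichever variant scores
# best on held-out data (evaluation against a model is assumed, not
# shown). Field names and variants are illustrative.
from itertools import product

def render(passage, answer, sep, field_case):
    p, a = (("PASSAGE", "ANSWER") if field_case == "upper"
            else ("Passage", "Answer"))
    return f"{p}{sep}{passage}\n{a}{sep}{answer}"

separators = [": ", " : ", " - ", "\n"]
casings = ["title", "upper"]
formats = list(product(separators, casings))
assert len(formats) == 8   # 4 separators x 2 casings

example = render("The dog saw a squirrel.", "", ": ", "title")
print(example)
```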
|
00:50:29.480 --> 00:50:32.480 |
|
um |
|
|
|
00:50:34.200 --> 00:50:37.680 |
|
okay do you have a |
|
|
|
00:50:39.599 --> 00:50:43.720 |
|
question this is dependent on what it |
|
|
|
00:50:41.720 --> 00:50:47.680 |
|
sees at training time another thing |
|
|
|
00:50:43.720 --> 00:50:51.920 |
|
actually is um this will definitely be |
|
|
|
00:50:47.680 --> 00:50:53.200 |
|
tighter for uh like a chat GPT or GPT 4 |
|
|
|
00:50:51.920 --> 00:50:56.599 |
|
um because it's been trained on many |
|
|
|
00:50:53.200 --> 00:50:59.319 |
|
different formats at training time um |
|
|
|
00:50:56.599 --> 00:51:00.880 |
|
and so the more the model has been |
|
|
|
00:50:59.319 --> 00:51:03.520 |
|
trained on a lot of different formats |
|
|
|
00:51:00.880 --> 00:51:05.559 |
|
the less this is going to have an |
|
|
|
00:51:03.520 --> 00:51:06.920 |
|
effect but you know you're probably not |
|
|
|
00:51:05.559 --> 00:51:09.440 |
|
going to be retraining a model that |
|
|
|
00:51:06.920 --> 00:51:10.799 |
|
somebody gives you uh so like this is |
|
|
|
00:51:09.440 --> 00:51:12.880 |
|
something to be very aware of if you're |
|
|
|
00:51:10.799 --> 00:51:14.839 |
|
just a downstream user of a model especially |
|
|
|
00:51:12.880 --> 00:51:17.599 |
|
an open source |
|
|
|
00:51:14.839 --> 00:51:19.359 |
|
model um another thing is how do you |
|
|
|
00:51:17.599 --> 00:51:22.280 |
|
give instructions to |
|
|
|
00:51:19.359 --> 00:51:25.000 |
|
models um instructions should be clear |
|
|
|
00:51:22.280 --> 00:51:29.280 |
|
concise and easy to understand one very |
|
|
|
00:51:25.000 --> 00:51:31.559 |
|
funny thing is um I think now like |
|
|
|
00:51:29.280 --> 00:51:33.280 |
|
actually prompting language models is |
|
|
|
00:51:31.559 --> 00:51:34.960 |
|
very similar to prompting humans |
|
|
|
00:51:33.280 --> 00:51:37.119 |
|
especially if we're talking about like |
|
|
|
00:51:34.960 --> 00:51:38.760 |
|
GPT-4 so if you're not very good at |
|
|
|
00:51:37.119 --> 00:51:41.599 |
|
explaining things to humans that might |
|
|
|
00:51:38.760 --> 00:51:45.440 |
|
actually be bad um and you might want to |
|
|
|
00:51:41.599 --> 00:51:47.359 |
|
practice that and explaining things to |
|
|
|
00:51:45.440 --> 00:51:50.319 |
|
models might be a good way to practice |
|
|
|
00:51:47.359 --> 00:51:51.799 |
|
that right so you know um it actually |
|
|
|
00:51:50.319 --> 00:51:54.040 |
|
can give you feedback without annoying |
|
|
|
00:51:51.799 --> 00:51:55.359 |
|
your friends by having you explain uh |
|
|
|
00:51:54.040 --> 00:51:58.160 |
|
things to them in several different ways |
|
|
|
00:51:55.359 --> 00:52:00.040 |
|
and seeing how they react so um but |
|
|
|
00:51:58.160 --> 00:52:03.680 |
|
anyway clear concise easy to understand |
|
|
|
00:52:00.040 --> 00:52:05.319 |
|
is good um there's this prompting guide |
|
|
|
00:52:03.680 --> 00:52:08.599 |
|
uh which I I can |
|
|
|
00:52:05.319 --> 00:52:13.240 |
|
open um this has a prompt engineering |
|
|
|
00:52:08.599 --> 00:52:14.520 |
|
guide I I I like this site but it it |
|
|
|
00:52:13.240 --> 00:52:17.400 |
|
does have a bit |
|
|
|
00:52:14.520 --> 00:52:18.880 |
|
of like variance in the importance of |
|
|
|
00:52:17.400 --> 00:52:21.760 |
|
the information that it tells you but like |
|
|
|
00:52:18.880 --> 00:52:23.960 |
|
this particular page is nice I feel so |
|
|
|
00:52:21.760 --> 00:52:26.160 |
|
start simple start with simple |
|
|
|
00:52:23.960 --> 00:52:29.520 |
|
instructions um |
|
|
|
00:52:26.160 --> 00:52:32.119 |
|
you should tell the model what it should |
|
|
|
00:52:29.520 --> 00:52:36.839 |
|
be doing so make sure you say write |
|
|
|
00:52:32.119 --> 00:52:39.799 |
|
classify summarize translate order um |
|
|
|
00:52:36.839 --> 00:52:41.960 |
|
and things like this uh it also gives |
|
|
|
00:52:39.799 --> 00:52:45.440 |
|
some good examples of the level of |
|
|
|
00:52:41.960 --> 00:52:47.559 |
|
specificity that you should be giving so |
|
|
|
00:52:45.440 --> 00:52:49.680 |
|
something that's less precise is explain |
|
|
|
00:52:47.559 --> 00:52:51.559 |
|
the concept of prompt engineering keep |
|
|
|
00:52:49.680 --> 00:52:53.920 |
|
the explanation short only a few |
|
|
|
00:52:51.559 --> 00:52:57.119 |
|
sentences and don't be too |
|
|
|
00:52:53.920 --> 00:52:58.799 |
|
descriptive um it use two to three |
|
|
|
00:52:57.119 --> 00:53:00.240 |
|
sentences to explain the concept of |
|
|
|
00:52:58.799 --> 00:53:02.599 |
|
prompt engineering to a high school |
|
|
|
00:53:00.240 --> 00:53:04.839 |
|
student so what this does is this tells |
|
|
|
00:53:02.599 --> 00:53:07.839 |
|
you like the reading |
|
|
|
00:53:04.839 --> 00:53:07.839 |
|
level |
|
|
|
00:53:07.960 --> 00:53:12.520 |
|
um so this doesn't even tell you the |
|
|
|
00:53:10.200 --> 00:53:14.319 |
|
reading level I guess um and then two to |
|
|
|
00:53:12.520 --> 00:53:16.240 |
|
three sentences is more precise than |
|
|
|
00:53:14.319 --> 00:53:19.200 |
|
keep it a few sentences don't be too |
|
|
|
00:53:16.240 --> 00:53:22.440 |
|
descriptive so um the more precise you |
|
|
|
00:53:19.200 --> 00:53:25.760 |
|
can be the the better it is um one |
|
|
|
00:53:22.440 --> 00:53:27.040 |
|
interesting thing is like if you ask |
|
|
|
00:53:25.760 --> 00:53:28.359 |
|
your friend to do something and they |
|
|
|
00:53:27.040 --> 00:53:32.400 |
|
don't know how to do it they'll complain |
|
|
|
00:53:28.359 --> 00:53:34.240 |
|
to you but right now uh LLMs don't |
|
|
|
00:53:32.400 --> 00:53:35.720 |
|
complain to you they may in the future |
|
|
|
00:53:34.240 --> 00:53:38.680 |
|
uh that might be like actually an |
|
|
|
00:53:35.720 --> 00:53:40.799 |
|
interesting thing to find uh the you |
|
|
|
00:53:38.680 --> 00:53:42.319 |
|
know interesting methodological thing to |
|
|
|
00:53:40.799 --> 00:53:45.240 |
|
look at for a project or something like |
|
|
|
00:53:42.319 --> 00:53:47.960 |
|
that but um right now you need to be |
|
|
|
00:53:45.240 --> 00:53:49.040 |
|
precise and like there's it doesn't give |
|
|
|
00:53:47.960 --> 00:53:51.799 |
|
you feedback when you're not being |
|
|
|
00:53:49.040 --> 00:53:51.799 |
|
precise so you need |
|
|
|
00:53:52.000 --> 00:53:56.359 |
|
to um separately from this there are |
|
|
|
00:53:54.200 --> 00:53:59.160 |
|
methods for automatic prompt engineering |
|
|
|
00:53:56.359 --> 00:54:00.960 |
|
so uh prompt paraphrasing gradient based |
|
|
|
00:53:59.160 --> 00:54:02.240 |
|
discrete prompt search prompt tuning |
|
|
|
00:54:00.960 --> 00:54:06.160 |
|
prefix |
|
|
|
00:54:02.240 --> 00:54:09.880 |
|
tuning so prompt paraphrasing um this is |
|
|
|
00:54:06.160 --> 00:54:12.559 |
|
a method that uh we proposed a while ago |
|
|
|
00:54:09.880 --> 00:54:15.760 |
|
um to basically paraphrase an existing |
|
|
|
00:54:12.559 --> 00:54:17.280 |
|
prompt to get other candidates um it's |
|
|
|
00:54:15.760 --> 00:54:19.240 |
|
rather simple basically you take a |
|
|
|
00:54:17.280 --> 00:54:21.960 |
|
prompt you put it through a paraphrasing |
|
|
|
00:54:19.240 --> 00:54:24.280 |
|
model and it will give you new prompts |
|
|
|
00:54:21.960 --> 00:54:25.440 |
|
and this is good because it will tend to |
|
|
|
00:54:24.280 --> 00:54:28.319 |
|
give you things that are natural |
|
|
|
00:54:25.440 --> 00:54:29.839 |
|
language um you can paraphrase 50 times |
|
|
|
00:54:28.319 --> 00:54:32.480 |
|
try all of them see which one gives you |
|
|
|
00:54:29.839 --> 00:54:37.079 |
|
the highest accuracy and then use that |
|
|
|
00:54:32.480 --> 00:54:39.280 |
|
one um there's also an interesting paper |
|
|
|
00:54:37.079 --> 00:54:43.079 |
|
uh that demonstrates that you can do |
|
|
|
00:54:39.280 --> 00:54:45.240 |
|
this iteratively so you paraphrase once |
|
|
|
00:54:43.079 --> 00:54:46.599 |
|
um you filter down all the candidates |
|
|
|
00:54:45.240 --> 00:54:48.119 |
|
that do well and then you go in and |
|
|
|
00:54:46.599 --> 00:54:49.960 |
|
paraphrase them again and you just do |
|
|
|
00:54:48.119 --> 00:54:51.960 |
|
this over and over again and that can |
|
|
|
00:54:49.960 --> 00:54:54.079 |
|
give you better results than kind of |
|
|
|
00:54:51.960 --> 00:54:57.079 |
|
one-off |
|
|
|
00:54:54.079 --> 00:54:57.079 |
|
paraphrasing |
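The paraphrase-filter-repeat loop just described can be sketched as follows. This is a minimal illustration, not any paper's exact method: `paraphrase` and `score` here are toy stand-ins for a real paraphrasing model and a real dev-set accuracy evaluation.

```python
import random

def paraphrase(prompt, n):
    # Stand-in for a paraphrasing model: random synonym swaps.
    # A real system would call a seq2seq paraphraser or an LLM.
    synonyms = {"translate": ["convert", "render"],
                "sentence": ["text", "phrase"]}
    out = []
    for _ in range(n):
        words = [random.choice([w] + synonyms.get(w, []))
                 for w in prompt.split()]
        out.append(" ".join(words))
    return out

def score(prompt):
    # Stand-in for dev-set accuracy when prompting a model with
    # this string (toy deterministic function).
    return sum(len(w) for w in prompt.split()) % 7

def iterative_paraphrase_search(seed, rounds=3, width=8, keep=2):
    candidates = [seed]
    for _ in range(rounds):
        # Keep current candidates in the pool, add their paraphrases,
        # filter down to the ones that score best, then repeat.
        pool = list(candidates)
        for c in candidates:
            pool += paraphrase(c, width)
        candidates = sorted(set(pool), key=score, reverse=True)[:keep]
    return candidates[0]
```

Because the current best candidate always stays in the pool, the best score can only go up (or stay flat) across rounds, which is the appeal of doing this iteratively rather than one-off.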
|
|
|
00:54:59.240 --> 00:55:02.079 |
|
so that's very simple you can even use a |
|
|
|
00:55:01.079 --> 00:55:04.160 |
|
large language model to do the |
|
|
|
00:55:02.079 --> 00:55:06.599 |
|
paraphrasing for you um another thing |
|
|
|
00:55:04.160 --> 00:55:08.920 |
|
that you can do is gradient based search |
|
|
|
00:55:06.599 --> 00:55:11.119 |
|
so the way this works is you need to |
|
|
|
00:55:08.920 --> 00:55:16.319 |
|
have a model that you can calculate |
|
|
|
00:55:11.119 --> 00:55:19.920 |
|
gradients for and what you do is you |
|
|
|
00:55:16.319 --> 00:55:22.240 |
|
create a seed prompt and |
|
|
|
00:55:19.920 --> 00:55:26.000 |
|
then you calculate gradients into that |
|
|
|
00:55:22.240 --> 00:55:29.760 |
|
seed prompt so |
|
|
|
00:55:26.000 --> 00:55:33.160 |
|
um you treat each of the tokens here |
|
|
|
00:55:29.760 --> 00:55:36.680 |
|
like T1 T2 T3 T4 |
|
|
|
00:55:33.160 --> 00:55:38.240 |
|
T5 as their own embeddings you do back |
|
|
|
00:55:36.680 --> 00:55:39.920 |
|
propagation into those embeddings and you |
|
|
|
00:55:38.240 --> 00:55:42.799 |
|
optimize them to get high accuracy on |
|
|
|
00:55:39.920 --> 00:55:44.720 |
|
your data set then after you're done |
|
|
|
00:55:42.799 --> 00:55:47.319 |
|
optimizing them to get high accuracy on |
|
|
|
00:55:44.720 --> 00:55:49.079 |
|
your data set you clamp them onto the |
|
|
|
00:55:47.319 --> 00:55:52.160 |
|
nearest neighbor embedding that you |
|
|
|
00:55:49.079 --> 00:55:53.520 |
|
already have so you basically say okay |
|
|
|
00:55:52.160 --> 00:55:56.720 |
|
the nearest neighbor to the embedding |
|
|
|
00:55:53.520 --> 00:55:58.920 |
|
that I learned is atmosphere then |
|
|
|
00:55:56.720 --> 00:56:02.240 |
|
a lot dialogue clone |
|
|
|
00:55:58.920 --> 00:56:03.799 |
|
totally and so this is this will |
|
|
|
00:56:02.240 --> 00:56:05.599 |
|
actually give you better results than |
|
|
|
00:56:03.799 --> 00:56:07.839 |
|
paraphrasing in many cases because the |
|
|
|
00:56:05.599 --> 00:56:11.520 |
|
search space is less constrained you can |
|
|
|
00:56:07.839 --> 00:56:12.960 |
|
get these very unnatural prompts uh that |
|
|
|
00:56:11.520 --> 00:56:16.280 |
|
don't seem to make sense but actually |
|
|
|
00:56:12.960 --> 00:56:20.280 |
|
work well this has particularly been |
|
|
|
00:56:16.280 --> 00:56:22.960 |
|
widely used in um adversarial attacks on |
|
|
|
00:56:20.280 --> 00:56:25.599 |
|
language models so how can you come up |
|
|
|
00:56:22.960 --> 00:56:27.720 |
|
with um |
|
|
|
00:56:25.599 --> 00:56:31.559 |
|
with prompts that |
|
|
|
00:56:27.720 --> 00:56:33.319 |
|
cause language models to uh do things |
|
|
|
00:56:31.559 --> 00:56:36.039 |
|
that you don't want them to be |
|
|
|
00:56:33.319 --> 00:56:38.920 |
|
doing and um there's actually this nice |
|
|
|
00:56:36.039 --> 00:56:41.440 |
|
paper uh also by people at CMU called |
|
|
|
00:56:38.920 --> 00:56:42.960 |
|
Universal and transferable adversarial |
|
|
|
00:56:41.440 --> 00:56:45.400 |
|
attacks on aligned language |
|
|
|
00:56:42.960 --> 00:56:50.559 |
|
models and basically what they do is |
|
|
|
00:56:45.400 --> 00:56:53.880 |
|
they try to |
|
|
|
00:56:50.559 --> 00:56:56.839 |
|
optimize the prompt to create a prompt |
|
|
|
00:56:53.880 --> 00:56:58.599 |
|
that causes the model to do bad things |
|
|
|
00:56:56.839 --> 00:57:00.039 |
|
basically and they try to do it even on |
|
|
|
00:56:58.599 --> 00:57:03.440 |
|
models that have been trying to not do |
|
|
|
00:57:00.039 --> 00:57:05.039 |
|
bad things and they demonstrate that |
|
|
|
00:57:03.440 --> 00:57:07.359 |
|
number one you can cause things like |
|
|
|
00:57:05.039 --> 00:57:09.599 |
|
models like llama to do bad you know bad |
|
|
|
00:57:07.359 --> 00:57:12.559 |
|
things like output toxic things tell you |
|
|
|
00:57:09.599 --> 00:57:15.599 |
|
how to build bombs stuff like that but |
|
|
|
00:57:12.559 --> 00:57:18.480 |
|
also the same prompts work on like |
|
|
|
00:57:15.599 --> 00:57:22.319 |
|
GPT models uh which is kind of like |
|
|
|
00:57:18.480 --> 00:57:23.839 |
|
interesting and and very uh you know |
|
|
|
00:57:22.319 --> 00:57:26.520 |
|
confusing in a way because you thought |
|
|
|
00:57:23.839 --> 00:57:28.160 |
|
this might be exploiting idiosyncrasies of a |
|
|
|
00:57:26.520 --> 00:57:32.440 |
|
particular language model but actually |
|
|
|
00:57:28.160 --> 00:57:32.440 |
|
it's not so I find this kind of |
|
|
|
00:57:33.880 --> 00:57:39.520 |
|
fascinating |
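The key mechanical step in the gradient-based search above, optimizing continuous token embeddings and then clamping each one to its nearest real token embedding, can be illustrated in isolation. Everything here is toy data under stated assumptions: a made-up vocabulary, random embeddings, and "optimized" vectors faked by perturbing real ones instead of actually backpropagating through a model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table (in a real model this is the
# input embedding matrix of the language model).
vocab = ["atmosphere", "alot", "dialogue", "clone",
         "totally", "the", "squirrel"]
E = rng.normal(size=(len(vocab), 4))

def clamp_to_vocab(soft_prompt):
    """Project each continuously optimized prompt vector onto its
    Euclidean nearest neighbor in the real embedding table."""
    ids = [int(np.argmin(np.linalg.norm(E - v, axis=1)))
           for v in soft_prompt]
    return [vocab[i] for i in ids], E[ids]

# Pretend these vectors came out of a few gradient steps on task
# loss (here: real embeddings plus a small perturbation).
soft_prompt = E[[0, 2, 3]] + 0.05 * rng.normal(size=(3, 4))
tokens, hard_prompt = clamp_to_vocab(soft_prompt)
```

The clamped result is a sequence of real tokens, which is why these methods produce unnatural-looking but usable discrete prompts, and why the same machinery powers the adversarial-suffix attacks mentioned above.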
|
|
|
00:57:36.039 --> 00:57:42.240 |
|
so if you take that a step further one |
|
|
|
00:57:39.520 --> 00:57:44.079 |
|
thing that you can do is you can say oh |
|
|
|
00:57:42.240 --> 00:57:46.280 |
|
actually there's no reason why we need |
|
|
|
00:57:44.079 --> 00:57:48.520 |
|
to clamp these embeddings back to an |
|
|
|
00:57:46.280 --> 00:57:52.240 |
|
existing embedding right so we could |
|
|
|
00:57:48.520 --> 00:57:56.079 |
|
just optimize the prompts the embeddings |
|
|
|
00:57:52.240 --> 00:57:57.720 |
|
of the prompts that go for a task and |
|
|
|
00:57:56.079 --> 00:58:02.000 |
|
not clamp them back to embeddings and |
|
|
|
00:57:57.720 --> 00:58:03.599 |
|
just keep them as is so um what I mean |
|
|
|
00:58:02.000 --> 00:58:07.079 |
|
by that is like right here it's |
|
|
|
00:58:03.599 --> 00:58:09.160 |
|
optimizing T1 T2 T3 T4 T5 and then |
|
|
|
00:58:07.079 --> 00:58:11.359 |
|
clamping that back to Atmosphere a lot |
|
|
|
00:58:09.160 --> 00:58:13.960 |
|
dialog clone totally but just keep them |
|
|
|
00:58:11.359 --> 00:58:16.160 |
|
as is and don't worry about them like |
|
|
|
00:58:13.960 --> 00:58:18.039 |
|
actually being a token in the model |
|
|
|
00:58:16.160 --> 00:58:19.400 |
|
because if you have control over your |
|
|
|
00:58:18.039 --> 00:58:21.200 |
|
model you can just add them as new |
|
|
|
00:58:19.400 --> 00:58:25.960 |
|
elements in the vocabulary and you're |
|
|
|
00:58:21.200 --> 00:58:28.440 |
|
fine right so what they demonstrate in |
|
|
|
00:58:25.960 --> 00:58:31.520 |
|
this paper is that instead of taking |
|
|
|
00:58:28.440 --> 00:58:33.440 |
|
your 11 billion parameter model and |
|
|
|
00:58:31.520 --> 00:58:35.920 |
|
training the whole 11 billion parameter |
|
|
|
00:58:33.440 --> 00:58:38.359 |
|
model for many different tasks on many |
|
|
|
00:58:35.920 --> 00:58:40.079 |
|
different data sets they just train |
|
|
|
00:58:38.359 --> 00:58:42.039 |
|
these prompts which are like 20K |
|
|
|
00:58:40.079 --> 00:58:44.039 |
|
parameters each I I forget how long it |
|
|
|
00:58:42.039 --> 00:58:46.280 |
|
is it's like 10 tokens or 20 tokens or |
|
|
|
00:58:44.039 --> 00:58:48.079 |
|
something like that um and train it on |
|
|
|
00:58:46.280 --> 00:58:49.640 |
|
all of the data sets here and you |
|
|
|
00:58:48.079 --> 00:58:50.680 |
|
don't actually need to do multitask |
|
|
|
00:58:49.640 --> 00:58:52.200 |
|
learning you don't need to train on |
|
|
|
00:58:50.680 --> 00:58:53.720 |
|
multiple tasks at the same time you can |
|
|
|
00:58:52.200 --> 00:58:56.119 |
|
just train on a single |
|
|
|
00:58:53.720 --> 00:58:58.599 |
|
task |
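A minimal sketch of prompt tuning's bookkeeping: the only trainable tensor is the prepended soft prompt, while the full model stays frozen. The dimensions are illustrative assumptions (20 prompt tokens at hidden size 1024 gives roughly the 20K parameters mentioned above, against an 11B-parameter frozen model).

```python
import numpy as np

d_model, prompt_len = 1024, 20
full_model_params = 11_000_000_000   # frozen, e.g. the 11B model above

# The ONLY trainable parameters: one embedding per prompt token.
soft_prompt = np.zeros((prompt_len, d_model))

def forward(frozen_input_embeds):
    # Prepend the trained prompt embeddings to the (frozen) input
    # embeddings; everything downstream is the frozen LM.
    return np.concatenate([soft_prompt, frozen_input_embeds], axis=0)

x = np.random.default_rng(0).normal(size=(7, d_model))  # a 7-token input
h = forward(x)
```

Per task you store and train only `soft_prompt`, so serving many tasks means swapping a 20K-parameter tensor, not an 11B-parameter checkpoint.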
|
|
|
00:58:56.119 --> 00:59:01.000 |
|
so now let's take that even a step |
|
|
|
00:58:58.599 --> 00:59:03.640 |
|
further so this is only training the |
|
|
|
00:59:01.000 --> 00:59:06.359 |
|
embeddings that you input into the model |
|
|
|
00:59:03.640 --> 00:59:08.160 |
|
there's a method called prefix tuning |
|
|
|
00:59:06.359 --> 00:59:10.319 |
|
and the way prefix tuning works is |
|
|
|
00:59:08.160 --> 00:59:12.280 |
|
instead of training only the embeddings |
|
|
|
00:59:10.319 --> 00:59:14.799 |
|
that go into the model they actually |
|
|
|
00:59:12.280 --> 00:59:18.920 |
|
train a prefix that you then append to |
|
|
|
00:59:14.799 --> 00:59:20.839 |
|
every layer of the model so prompt |
|
|
|
00:59:18.920 --> 00:59:23.319 |
|
tuning basically does this for the first |
|
|
|
00:59:20.839 --> 00:59:24.839 |
|
layer of the model prefix tuning does |
|
|
|
00:59:23.319 --> 00:59:28.400 |
|
this for every layer of the model you |
|
|
|
00:59:24.839 --> 00:59:30.319 |
|
append a prefix uh for every layer so it's |
|
|
|
00:59:28.400 --> 00:59:32.200 |
|
just a more expressive version of |
|
|
|
00:59:30.319 --> 00:59:36.119 |
|
prompting |
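The prompt tuning versus prefix tuning distinction is easiest to see in the parameter counts: prompt tuning trains vectors at the input layer only, while prefix tuning injects a trained prefix (in practice, extra key/value vectors) into the attention of every layer. A sketch with assumed dimensions:

```python
import numpy as np

n_layers, prefix_len, d_model = 24, 10, 1024

# Prompt tuning: trainable vectors at the input layer only.
prompt_tuning_params = prefix_len * d_model

# Prefix tuning: trained key AND value prefixes at every layer,
# which is the "more expressive" version described above.
prefix_tuning_params = n_layers * 2 * prefix_len * d_model

def attend_with_prefix(keys, values, prefix_k, prefix_v):
    """Prepend this layer's trained prefix to its keys/values so
    every query can attend to it (attention math itself omitted)."""
    return (np.concatenate([prefix_k, keys], axis=0),
            np.concatenate([prefix_v, values], axis=0))

k = np.zeros((5, d_model)); v = np.zeros((5, d_model))
pk = np.ones((prefix_len, d_model)); pv = np.ones((prefix_len, d_model))
k2, v2 = attend_with_prefix(k, v, pk, pv)
```

Prefix tuning pays for its extra expressiveness with roughly `2 * n_layers` times as many trained parameters as prompt tuning, though both are tiny next to the frozen model.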
|
|
|
00:59:32.200 --> 00:59:40.200 |
|
essentially so these are all kinds of |
|
|
|
00:59:36.119 --> 00:59:43.680 |
|
gradual steps from a human created |
|
|
|
00:59:40.200 --> 00:59:47.880 |
|
prompt into something that is basically |
|
|
|
00:59:43.680 --> 00:59:50.839 |
|
training a prompt or a prefix to the |
|
|
|
00:59:47.880 --> 00:59:52.960 |
|
model so I would take questions but |
|
|
|
00:59:50.839 --> 00:59:55.200 |
|
let me get to the end of this section uh |
|
|
|
00:59:52.960 --> 00:59:58.839 |
|
also because uh I think there's |
|
|
|
00:59:55.200 --> 01:00:00.720 |
|
interesting analogies here so in the |
|
|
|
00:59:58.839 --> 01:00:02.880 |
|
next class I'm going to talk about |
|
|
|
01:00:00.720 --> 01:00:04.440 |
|
parameter efficient fine-tuning methods |
|
|
|
01:00:02.880 --> 01:00:06.960 |
|
which is kind of a more |
|
|
|
01:00:04.440 --> 01:00:10.000 |
|
general it's a more |
|
|
|
01:00:06.960 --> 01:00:11.480 |
|
general version of prompt tuning or |
|
|
|
01:00:10.000 --> 01:00:13.280 |
|
prefix tuning there are methods that |
|
|
|
01:00:11.480 --> 01:00:15.960 |
|
tune a small number of parameters to get |
|
|
|
01:00:13.280 --> 01:00:17.400 |
|
the model to do something and there's a |
|
|
|
01:00:15.960 --> 01:00:18.880 |
|
bunch of different parameter efficient |
|
|
|
01:00:17.400 --> 01:00:21.520 |
|
tuning methods many people may have |
|
|
|
01:00:18.880 --> 01:00:23.880 |
|
heard of something like LoRA uh or |
|
|
|
01:00:21.520 --> 01:00:25.440 |
|
adapters um I just talked about prefix |
|
|
|
01:00:23.880 --> 01:00:28.119 |
|
tuning |
|
|
|
01:00:25.440 --> 01:00:30.960 |
|
so essentially prompt tuning and prefix |
|
|
|
01:00:28.119 --> 01:00:33.359 |
|
tuning are part of this more general |
|
|
|
01:00:30.960 --> 01:00:36.680 |
|
class of parameter efficient fine-tuning |
|
|
|
01:00:33.359 --> 01:00:39.240 |
|
methods and so what we can say is |
|
|
|
01:00:36.680 --> 01:00:41.119 |
|
actually prompting is fine-tuning |
|
|
|
01:00:39.240 --> 01:00:42.920 |
|
prompting is a way of fine-tuning the |
|
|
|
01:00:41.119 --> 01:00:46.799 |
|
model or getting the model to perform a |
|
|
|
01:00:42.920 --> 01:00:49.839 |
|
particular task um and we have this |
|
|
|
01:00:46.799 --> 01:00:53.720 |
|
taxonomy of we have prompts in natural |
|
|
|
01:00:49.839 --> 01:00:55.160 |
|
language that are created uh by humans |
|
|
|
01:00:53.720 --> 01:00:57.240 |
|
actually maybe I should say manual |
|
|
|
01:00:55.160 --> 01:00:59.559 |
|
prompt engineering here this was first |
|
|
|
01:00:57.240 --> 01:01:01.480 |
|
done in the GPT-2 paper where they |
|
|
|
01:00:59.559 --> 01:01:04.359 |
|
demonstrate that models could |
|
|
|
01:01:01.480 --> 01:01:06.200 |
|
solve tasks by doing it this way prompt |
|
|
|
01:01:04.359 --> 01:01:07.760 |
|
paraphrasing is a step up from this |
|
|
|
01:01:06.200 --> 01:01:09.799 |
|
because it's no longer relying on human |
|
|
|
01:01:07.760 --> 01:01:12.680 |
|
engineering and you can you know expand |
|
|
|
01:01:09.799 --> 01:01:15.280 |
|
to a broader set of prompts um it can |
|
|
|
01:01:12.680 --> 01:01:17.359 |
|
always start with human created prompts |
|
|
|
01:01:15.280 --> 01:01:20.240 |
|
so it's kind of like broader uh than |
|
|
|
01:01:17.359 --> 01:01:21.799 |
|
that discrete prompt search doesn't |
|
|
|
01:01:20.240 --> 01:01:23.599 |
|
necessarily need to rely on a |
|
|
|
01:01:21.799 --> 01:01:25.559 |
|
paraphrasing model it could rely on like |
|
|
|
01:01:23.599 --> 01:01:26.760 |
|
gradient-based models or something else |
|
|
|
01:01:25.559 --> 01:01:29.240 |
|
like that to give you something that's |
|
|
|
01:01:26.760 --> 01:01:32.559 |
|
not actually natural language uh kind of |
|
|
|
01:01:29.240 --> 01:01:35.920 |
|
just random tokens continuous prompts or |
|
|
|
01:01:32.559 --> 01:01:38.119 |
|
prompt tuning is a step above that |
|
|
|
01:01:35.920 --> 01:01:41.039 |
|
multi-layer continuous prompts or prefix |
|
|
|
01:01:38.119 --> 01:01:42.520 |
|
tuning is a layer above that parameter |
|
|
|
01:01:41.039 --> 01:01:43.520 |
|
efficient tuning is more general than |
|
|
|
01:01:42.520 --> 01:01:45.359 |
|
that and then you have all training |
|
|
|
01:01:43.520 --> 01:01:49.160 |
|
methods so including fine tuning your |
|
|
|
01:01:45.359 --> 01:01:52.680 |
|
model and so what are the implications |
|
|
|
01:01:49.160 --> 01:01:55.760 |
|
of this um I think so a lot of people |
|
|
|
01:01:52.680 --> 01:01:58.720 |
|
when prompting came out they were like |
|
|
|
01:01:55.760 --> 01:02:00.640 |
|
prompting methods are very hacky I don't |
|
|
|
01:01:58.720 --> 01:02:03.839 |
|
like how we have to do manual prompt |
|
|
|
01:02:00.640 --> 01:02:08.160 |
|
engineering um it seems like a dark art |
|
|
|
01:02:03.839 --> 01:02:11.000 |
|
as opposed to like you know actually you |
|
|
|
01:02:08.160 --> 01:02:14.160 |
|
know some sort of well understood |
|
|
|
01:02:11.000 --> 01:02:16.839 |
|
fine-tuning method that we could use um |
|
|
|
01:02:14.160 --> 01:02:20.520 |
|
but I I actually like them I like |
|
|
|
01:02:16.839 --> 01:02:23.920 |
|
prompting a lot because um if anybody is |
|
|
|
01:02:20.520 --> 01:02:25.960 |
|
familiar with like Bayesian |
|
|
|
01:02:23.920 --> 01:02:27.920 |
|
statistics or machine learning we have |
|
|
|
01:02:25.960 --> 01:02:28.799 |
|
the concept of like a prior probability |
|
|
|
01:02:27.920 --> 01:02:31.200 |
|
over |
|
|
|
01:02:28.799 --> 01:02:32.359 |
|
parameters and then a probability that |
|
|
|
01:02:31.200 --> 01:02:34.680 |
|
we get |
|
|
|
01:02:32.359 --> 01:02:37.880 |
|
after after fine tuning the model or |
|
|
|
01:02:34.680 --> 01:02:40.440 |
|
after training the model and prompts in |
|
|
|
01:02:37.880 --> 01:02:42.640 |
|
a way are our first like good prior over |
|
|
|
01:02:40.440 --> 01:02:43.880 |
|
neural network models they give us the |
|
|
|
01:02:42.640 --> 01:02:46.319 |
|
ability to |
|
|
|
01:02:43.880 --> 01:02:48.559 |
|
specify what task the model should be |
|
|
|
01:02:46.319 --> 01:02:51.880 |
|
doing or like a general idea of what |
|
|
|
01:02:48.559 --> 01:02:54.200 |
|
task the model should be doing before we |
|
|
|
01:02:51.880 --> 01:02:56.359 |
|
ask the model to actually do the task |
|
|
|
01:02:54.200 --> 01:02:58.640 |
|
and and so we can either use that prior |
|
|
|
01:02:56.359 --> 01:03:02.119 |
|
as is we can use a prompted model as is |
|
|
|
01:02:58.640 --> 01:03:04.839 |
|
without doing any additional tuning or |
|
|
|
01:03:02.119 --> 01:03:06.480 |
|
we could take the prior that we have |
|
|
|
01:03:04.839 --> 01:03:07.920 |
|
given to the model by using a natural |
|
|
|
01:03:06.480 --> 01:03:09.039 |
|
language description of the task it |
|
|
|
01:03:07.920 --> 01:03:12.079 |
|
should be |
|
|
|
01:03:09.039 --> 01:03:14.799 |
|
doing and then combine it with fine-tuning |
|
|
|
01:03:12.079 --> 01:03:17.039 |
|
so we can take the prompted |
|
|
|
01:03:14.799 --> 01:03:19.279 |
|
model we can |
|
|
|
01:03:17.039 --> 01:03:21.640 |
|
we can initialize the |
|
|
|
01:03:19.279 --> 01:03:23.960 |
|
distribution of this cast as a prompt |
|
|
|
01:03:21.640 --> 01:03:25.720 |
|
using a human created |
|
|
|
01:03:23.960 --> 01:03:28.160 |
|
prompt and then go on and fine-tune it |
|
|
|
01:03:25.720 --> 01:03:30.960 |
|
on lots of training data as well and |
|
|
|
01:03:28.160 --> 01:03:33.799 |
|
there's a method for doing that um by |
|
|
|
01:03:30.960 --> 01:03:35.880 |
|
Schick and Schütze uh called uh pattern |
|
|
|
01:03:33.799 --> 01:03:37.559 |
|
exploiting training where they do |
|
|
|
01:03:35.880 --> 01:03:39.799 |
|
exactly that they basically initialize |
|
|
|
01:03:37.559 --> 01:03:41.720 |
|
with a manually created prompt and then |
|
|
|
01:03:39.799 --> 01:03:44.559 |
|
they fine-tune the model on training data |
|
|
|
01:03:41.720 --> 01:03:46.400 |
|
after that so um that's a reason why I |
|
|
|
01:03:44.559 --> 01:03:47.920 |
|
like prompting based methods they they |
|
|
|
01:03:46.400 --> 01:03:49.720 |
|
give us this like really nice way to |
|
|
|
01:03:47.920 --> 01:03:53.039 |
|
very quickly create a system but we can |
|
|
|
01:03:49.720 --> 01:03:56.079 |
|
also have you know whatever level of |
|
|
|
01:03:53.039 --> 01:03:59.880 |
|
additional training on top of that |
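The prompt-as-prior recipe, initialize from a human-written prompt and then keep training on data, can be sketched at the embedding level. This is a toy illustration of the general idea, not the exact pattern-exploiting training procedure: the vocabulary and embeddings are made up, and a single random gradient step stands in for real fine-tuning on labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen input embedding table for a made-up vocabulary.
vocab = {"classify": 0, "the": 1, "sentiment": 2, ":": 3}
E = rng.normal(size=(len(vocab), 8))

# Step 1: initialize the tunable prompt from a manually written
# prompt (the prior), rather than from random vectors.
manual_prompt = ["classify", "the", "sentiment", ":"]
soft_prompt = E[[vocab[w] for w in manual_prompt]].copy()

# Step 2: continue training on task data (here a single toy
# gradient step stands in for actual fine-tuning).
task_grad = rng.normal(size=soft_prompt.shape)
soft_prompt -= 0.01 * task_grad
```

The point of the initialization is that training starts from a distribution that already encodes the task description, so little labeled data is needed to move beyond the zero-shot prior.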
|
|
|
01:03:56.079 --> 01:03:59.880 |
|
cool so that's a little bit early I'm |
|
|