WEBVTT
00:00:00.719 --> 00:00:07.480
so to get started I want to show an
00:00:04.120 --> 00:00:10.320
example of the scientific method I took
00:00:07.480 --> 00:00:12.920
this directly from Wikipedia but it's
00:00:10.320 --> 00:00:15.320
actually uh pretty nice it's a pretty
00:00:12.920 --> 00:00:17.480
nice and concise summary of what we
00:00:15.320 --> 00:00:19.439
should do when we're coming up with new
00:00:17.480 --> 00:00:22.160
uh kind of research
00:00:19.439 --> 00:00:24.039
projects and we start with an
00:00:22.160 --> 00:00:26.840
observation or question we do research
00:00:24.039 --> 00:00:28.599
of the topic area we form a hypothesis
00:00:26.840 --> 00:00:31.439
we test it with an experiment analyze
00:00:28.599 --> 00:00:33.600
data and Report conclusions
00:00:31.439 --> 00:00:35.640
and even if we're doing kind of an
00:00:33.600 --> 00:00:37.480
engineering based project still this
00:00:35.640 --> 00:00:42.079
thinking of the stuff that we're doing
00:00:37.480 --> 00:00:44.399
in a framework like this can help you a
00:00:42.079 --> 00:00:46.079
lot so uh the first thing I'd like to
00:00:44.399 --> 00:00:49.120
talk about is identifying good research
00:00:46.079 --> 00:00:51.800
directions and so I'm going to look at
00:00:49.120 --> 00:00:53.640
that from the observation question
00:00:51.800 --> 00:00:56.320
perspective
00:00:53.640 --> 00:00:58.480
here so if we think about why we do
00:00:56.320 --> 00:01:01.160
research uh particularly why we do
00:00:58.480 --> 00:01:04.199
research on natural language processing
00:01:01.160 --> 00:01:07.159
um there's a couple reasons why the
00:01:04.199 --> 00:01:09.439
first is application driven research and
00:01:07.159 --> 00:01:13.159
usually this is I would like to make a
00:01:09.439 --> 00:01:15.040
useful system or make one work better so
00:01:13.159 --> 00:01:18.479
uh you know this is probably the great
00:01:15.040 --> 00:01:20.280
majority of NLP research then separately
00:01:18.479 --> 00:01:21.960
from that there's curiosity driven
00:01:20.280 --> 00:01:24.560
research which is like I would like to
00:01:21.960 --> 00:01:27.360
know more about language or the world
00:01:24.560 --> 00:01:29.159
viewed through language and so this
00:01:27.360 --> 00:01:31.840
doesn't necessarily have to be
00:01:29.159 --> 00:01:31.840
immediately
00:01:32.000 --> 00:01:37.280
like a downstream application that users
00:01:35.399 --> 00:01:39.159
are using will immediately get better
00:01:37.280 --> 00:01:40.439
it's more like we have a burning
00:01:39.159 --> 00:01:43.159
question that we would like to answer
00:01:40.439 --> 00:01:47.399
and we want to answer
00:01:43.159 --> 00:01:48.640
it so NLP encompasses both uh sometimes
00:01:47.399 --> 00:01:50.479
if you read a paper you'll have
00:01:48.640 --> 00:01:54.360
something that's doing both uh
00:01:50.479 --> 00:01:56.439
especially like analyzing the internals
00:01:54.360 --> 00:01:58.079
or training dynamics of a neural
00:01:56.439 --> 00:01:59.920
network to answer a curiosity-driven
00:01:58.079 --> 00:02:02.439
question and then applying that to come
00:01:59.920 --> 00:02:04.840
up with a better method that makes things work
00:02:02.439 --> 00:02:06.560
better I would like to say though that
00:02:04.840 --> 00:02:09.119
it's kind of rare that there's a paper
00:02:06.560 --> 00:02:10.879
that does both of them really well uh
00:02:09.119 --> 00:02:13.160
and so usually one of them is kind of
00:02:10.879 --> 00:02:14.599
the main focus and I think you can be
00:02:13.160 --> 00:02:17.680
well served by choosing which one is
00:02:14.599 --> 00:02:20.560
your main focus and then kind of uh the
00:02:17.680 --> 00:02:23.560
other might come as a additional uh
00:02:20.560 --> 00:02:23.560
bonus on top of
00:02:23.920 --> 00:02:28.760
that so here are a few examples of
00:02:27.160 --> 00:02:32.800
application driven
00:02:28.760 --> 00:02:35.239
research so for example pay at all uh
00:02:32.800 --> 00:02:37.840
they proposed the task of sentiment
00:02:35.239 --> 00:02:39.879
analysis um so actually there was a
00:02:37.840 --> 00:02:41.879
paper 22 years ago that proposed the
00:02:39.879 --> 00:02:44.879
task of sentiment analysis it might seem
00:02:41.879 --> 00:02:46.760
very you know normal nowadays but uh
00:02:44.879 --> 00:02:49.519
there was a paper that proposed it back
00:02:46.760 --> 00:02:52.840
then and they proposed sentiment
00:02:49.519 --> 00:02:54.200
analysis because um labeling articles
00:02:52.840 --> 00:02:57.480
with their sentiment would provide
00:02:54.200 --> 00:02:59.760
succinct summaries to the readers um so
00:02:57.480 --> 00:03:03.319
they basically wanted to provide
00:02:59.760 --> 00:03:03.319
information to readers and that would be
00:03:03.400 --> 00:03:09.000
useful another paper by Reddy et al.
00:03:06.440 --> 00:03:11.519
2019 proposes a task of conversational
00:03:09.000 --> 00:03:13.640
question answering uh because an
00:03:11.519 --> 00:03:15.599
inability to build and maintain common
00:03:13.640 --> 00:03:17.680
ground is part of the reason why virtual
00:03:15.599 --> 00:03:20.159
assistants usually don't seem like
00:03:17.680 --> 00:03:22.040
competent conversational partners so
00:03:20.159 --> 00:03:24.519
when you're talking to your Alexa or
00:03:22.040 --> 00:03:27.000
your Google Home or something like
00:03:24.519 --> 00:03:28.599
this you might ask it a question and
00:03:27.000 --> 00:03:30.120
then after you asked it a question you
00:03:28.599 --> 00:03:31.480
ask it another question but it doesn't
00:03:30.120 --> 00:03:32.879
go back to the context that you had
00:03:31.480 --> 00:03:34.519
before and they wanted to solve this
00:03:32.879 --> 00:03:36.040
problem so they proposed this data set
00:03:34.519 --> 00:03:40.000
for
00:03:36.040 --> 00:03:41.720
it um Gehrmann et al. proposed a method for bottom
00:03:40.000 --> 00:03:43.159
up abstractive summarization because
00:03:41.720 --> 00:03:44.760
neural network-based methods for
00:03:43.159 --> 00:03:46.879
abstractive summarization produce
00:03:44.760 --> 00:03:49.000
outputs that are fluent but perform
00:03:46.879 --> 00:03:51.120
poorly at content selection so they had a
00:03:49.000 --> 00:03:53.000
problem they had a task already in mind
00:03:51.120 --> 00:03:54.239
they weren't proposing a new task and
00:03:53.000 --> 00:03:56.040
and there was a problem with the
00:03:54.239 --> 00:03:58.760
existing system so they fixed
00:03:56.040 --> 00:04:00.400
it and then Kudo and Richardson proposed
00:03:58.760 --> 00:04:02.920
a method for unsupervised word
00:04:00.400 --> 00:04:04.799
segmentation namely SentencePiece uh
00:04:02.920 --> 00:04:06.439
because language dependent processing
00:04:04.799 --> 00:04:08.920
makes it hard to train multilingual
00:04:06.439 --> 00:04:10.360
models as we have to carefully manage
00:04:08.920 --> 00:04:12.720
the configurations of pre- and
00:04:10.360 --> 00:04:15.879
post-processors per language so they
00:04:12.720 --> 00:04:17.519
tried to make things easier uh so like
00:04:15.879 --> 00:04:19.600
you can see all of these things like the
00:04:17.519 --> 00:04:21.919
first two are proposing new tasks to
00:04:19.600 --> 00:04:23.880
solve and they're doing it from the
00:04:21.919 --> 00:04:25.919
point of view of uh creating something
00:04:23.880 --> 00:04:29.120
useful for users the second two are
00:04:25.919 --> 00:04:30.440
proposing new methods the first one is
00:04:29.120 --> 00:04:34.360
like improving
00:04:30.440 --> 00:04:36.320
accuracy um so this is the most
00:04:34.360 --> 00:04:37.639
common case most commonly people say I have a
00:04:36.320 --> 00:04:39.120
task that I want to solve there's a
00:04:37.639 --> 00:04:41.280
problem with accuracy I want to improve
00:04:39.120 --> 00:04:43.960
it but you can also improve other things
00:04:41.280 --> 00:04:45.880
so you can improve like convenience or
00:04:43.960 --> 00:04:47.320
uh you can improve efficiency or
00:04:45.880 --> 00:04:51.720
other things like that so all of those
00:04:47.320 --> 00:04:51.720
are you know perfectly reasonable
00:04:52.120 --> 00:04:57.320
things I also have some examples of
00:04:54.639 --> 00:04:59.120
curiosity driven research these are
00:04:57.320 --> 00:05:00.360
actually harder to find in the ACL
00:04:59.120 --> 00:05:03.120
anthology
00:05:00.360 --> 00:05:06.400
it's definitely the minority case but
00:05:03.120 --> 00:05:09.160
they still do exist um so for example
00:05:06.400 --> 00:05:10.960
Rashkin et al. 2017 asked what is the
00:05:09.160 --> 00:05:13.800
difference between the language of real
00:05:10.960 --> 00:05:17.000
news with that of satire hoaxes and
00:05:13.800 --> 00:05:18.800
propaganda so they were not attempting
00:05:17.000 --> 00:05:21.039
to create a system for fake news
00:05:18.800 --> 00:05:23.199
detection that was not their goal here
00:05:21.039 --> 00:05:24.600
their goal was just to figure
00:05:23.199 --> 00:05:26.240
out what were the different linguistic
00:05:24.600 --> 00:05:28.000
characteristics and they found that
00:05:26.240 --> 00:05:29.720
scientifically interesting maybe
00:05:28.000 --> 00:05:31.280
Downstream that would be useful but that
00:05:29.720 --> 00:05:35.080
wasn't the point of their
00:05:31.280 --> 00:05:36.960
paper another one uh Cotterell et al. ask
00:05:35.080 --> 00:05:38.960
are all languages equally hard to
00:05:36.960 --> 00:05:41.000
language model and so basically they
00:05:38.960 --> 00:05:42.440
wanted to know are all languages just
00:05:41.000 --> 00:05:45.520
character strings and so language
00:05:42.440 --> 00:05:47.479
modeling them is uh similarly easy or
00:05:45.520 --> 00:05:49.120
are there certain characteristics of
00:05:47.479 --> 00:05:51.080
language that make them easier or harder
00:05:49.120 --> 00:05:54.000
to model with the current architectures
00:05:51.080 --> 00:05:55.520
that we have um and so they didn't
00:05:54.000 --> 00:05:57.039
propose a new architecture they didn't
00:05:55.520 --> 00:06:00.479
propose to improve anything they just
00:05:57.039 --> 00:06:02.400
proposed to examine this question
00:06:00.479 --> 00:06:04.280
um and also Tenney et al. this is
00:06:02.400 --> 00:06:06.880
actually an extremely impactful work
00:06:04.280 --> 00:06:09.319
Downstream but uh they weren't improving
00:06:06.880 --> 00:06:11.520
anything they just Quantified where
00:06:09.319 --> 00:06:14.440
specific types of linguistic information
00:06:11.520 --> 00:06:16.720
are encoded in BERT so they found that
00:06:14.440 --> 00:06:18.840
for example syntax was encoded better in
00:06:16.720 --> 00:06:20.560
the early layers semantics in the later
00:06:18.840 --> 00:06:22.520
layers and then if you go further you
00:06:20.560 --> 00:06:25.280
you have other fine-grained things like
00:06:22.520 --> 00:06:27.599
pragmatics-style
00:06:25.280 --> 00:06:30.400
information so I I think you can kind of
00:06:27.599 --> 00:06:32.120
see the difference between these two um
00:06:30.400 --> 00:06:34.800
are there any questions
00:06:32.120 --> 00:06:40.199
about
00:06:34.800 --> 00:06:41.720
this no okay let's leave it at that so the next
00:06:40.199 --> 00:06:43.680
question which I think a lot of people
00:06:41.720 --> 00:06:46.240
might be asking particularly with
00:06:43.680 --> 00:06:47.720
respect to assignment 4 which requires
00:06:46.240 --> 00:06:51.039
you to come up with something novel to
00:06:47.720 --> 00:06:53.240
do is how do we uh get research
00:06:51.039 --> 00:06:57.360
ideas
00:06:53.240 --> 00:07:02.280
and the way we can do this is uh twofold
00:06:57.360 --> 00:07:04.479
so um one is kind of we want to turn a
00:07:02.280 --> 00:07:07.120
concrete understanding of existing
00:07:04.479 --> 00:07:10.120
research's failings into a higher level
00:07:07.120 --> 00:07:12.560
experimental question and the two ways
00:07:10.120 --> 00:07:15.240
that I normally characterize doing this
00:07:12.560 --> 00:07:19.319
are bottom up discovery of research
00:07:15.240 --> 00:07:21.080
ideas um
00:07:19.319 --> 00:07:24.479
and this
00:07:21.080 --> 00:07:27.000
is a great
00:07:24.479 --> 00:07:29.120
tool for making incremental progress on
00:07:27.000 --> 00:07:32.039
existing systems on tasks that we really
00:07:29.120 --> 00:07:35.400
care about or expanding the scope of a
00:07:32.039 --> 00:07:37.680
task that we care about so uh some
00:07:35.400 --> 00:07:41.879
examples of this would be like in
00:07:37.680 --> 00:07:45.639
assignment number three uh
00:07:41.879 --> 00:07:47.720
let's say you're looking at
00:07:45.639 --> 00:07:50.159
the
00:07:47.720 --> 00:07:53.840
question answering performance
00:07:50.159 --> 00:07:58.280
of multilingual models on
00:07:53.840 --> 00:08:01.479
different languages um and for
00:07:58.280 --> 00:08:03.159
assignment three you implement a couple
00:08:01.479 --> 00:08:05.240
multilingual models on different
00:08:03.159 --> 00:08:06.560
languages you run them you look at the
00:08:05.240 --> 00:08:08.400
results and you identify that
00:08:06.560 --> 00:08:10.080
multilingual models are particularly bad
00:08:08.400 --> 00:08:12.919
at answering questions about named
00:08:10.080 --> 00:08:14.680
entities and so now you have looked at
00:08:12.919 --> 00:08:17.759
the output you have decided that that's
00:08:14.680 --> 00:08:20.199
a big problem um you can go in and
00:08:17.759 --> 00:08:22.080
improve it so this is a great tool for
00:08:20.199 --> 00:08:23.720
incremental progress and like in fact
00:08:22.080 --> 00:08:26.520
doing this really effectively has been
00:08:23.720 --> 00:08:31.000
very effective in my own research career
00:08:26.520 --> 00:08:34.680
like uh I feel like I like to
00:08:31.000 --> 00:08:36.279
look at data I try to do that a lot and
00:08:34.680 --> 00:08:38.440
by doing that I identify the most
00:08:36.279 --> 00:08:40.200
frequent problems and because of that
00:08:38.440 --> 00:08:42.039
when I fix those problems my accuracy
00:08:40.200 --> 00:08:44.560
goes up a lot more than people who pick
00:08:42.039 --> 00:08:46.880
the less good problems right and so if
00:08:44.560 --> 00:08:49.440
we want our accuracy to go up uh I'm
00:08:46.880 --> 00:08:51.360
more efficient at you know improving
00:08:49.440 --> 00:08:53.240
things on the other hand there's
00:08:51.360 --> 00:08:55.399
something uh from the opposite direction
00:08:53.240 --> 00:08:57.080
is moving from a higher level question
00:08:55.399 --> 00:08:57.800
to a lower level concrete testing of
00:08:57.080 --> 00:09:00.120
that
00:08:57.800 --> 00:09:01.760
question um so this could be top-down
00:09:00.120 --> 00:09:02.760
design this is top-down design of
00:09:01.760 --> 00:09:06.360
research
00:09:02.760 --> 00:09:08.399
ideas this favors bigger ideas but these
00:09:06.360 --> 00:09:10.240
ideas can be disconnected from reality
00:09:08.399 --> 00:09:13.880
or they could be not solving the right
00:09:10.240 --> 00:09:17.079
problems so the typical like very very
00:09:13.880 --> 00:09:18.800
successful example of this is um neural
00:09:17.079 --> 00:09:20.800
machine translation or something like
00:09:18.800 --> 00:09:22.720
this neural machine translation is neural
00:09:20.800 --> 00:09:26.399
sequence-to-sequence
00:09:22.720 --> 00:09:30.040
models this came out of a few people
00:09:26.399 --> 00:09:32.040
like Geoff Hinton and Yoshua Bengio
00:09:30.040 --> 00:09:33.480
believing for a very long time that
00:09:32.040 --> 00:09:35.760
neural networks were the right way to
00:09:33.480 --> 00:09:37.800
solve lots of problems uh despite the
00:09:35.760 --> 00:09:39.640
fact that there wasn't like super
00:09:37.800 --> 00:09:42.279
concrete evidence of that for a long
00:09:39.640 --> 00:09:43.399
time and so they had this idea which was
00:09:42.279 --> 00:09:47.399
like we should be doing things with
00:09:43.399 --> 00:09:49.440
neural networks and uh they you know
00:09:47.399 --> 00:09:50.720
they successfully executed that and now
00:09:49.440 --> 00:09:52.200
everybody is doing things with neural
00:09:50.720 --> 00:09:56.560
networks so they made a really huge
00:09:52.200 --> 00:09:58.160
revolution in the research space um that
00:09:56.560 --> 00:09:59.720
that's great that's a great example of a
00:09:58.160 --> 00:10:02.839
successful top-down idea but the
00:09:59.720 --> 00:10:05.519
problem is uh for every example like
00:10:02.839 --> 00:10:07.560
that there's a thousand uh top down
00:10:05.519 --> 00:10:10.760
ideas in the graveyard of not being very
00:10:07.560 --> 00:10:12.600
you know effective so I think um in
00:10:10.760 --> 00:10:14.519
order to do something like this you
00:10:12.600 --> 00:10:16.200
better have a very strong conviction or
00:10:14.519 --> 00:10:18.079
you better have maybe some initial
00:10:16.200 --> 00:10:20.920
evidence or a very strong intuition
00:10:18.079 --> 00:10:22.320
about why this might be a good idea and
00:10:20.920 --> 00:10:25.240
uh you would be able to test that
00:10:22.320 --> 00:10:27.240
intuition through intermediate steps uh
00:10:25.240 --> 00:10:31.040
to demonstrate like through toy data
00:10:27.240 --> 00:10:31.040
or other stuff like that
00:10:31.720 --> 00:10:38.360
um cool so these are kind of the general
00:10:36.360 --> 00:10:40.839
ways that we can come up with research
00:10:38.360 --> 00:10:42.519
ideas the next thing that we want to do
00:10:40.839 --> 00:10:44.480
is research our topic area were there
00:10:42.519 --> 00:10:46.720
any questions about bottom up versus top
00:10:44.480 --> 00:10:49.120
down I'm going to talk about effective
00:10:46.720 --> 00:10:51.920
strategies for bottom-up stuff uh in
00:10:49.120 --> 00:10:54.360
two weeks uh so we can talk more about
00:10:51.920 --> 00:10:56.800
that then
00:10:54.360 --> 00:11:00.959
but okay if not I'll move
00:10:56.800 --> 00:11:05.079
on so next uh we have research topic
00:11:00.959 --> 00:11:07.360
areas so this is about how you will do
00:11:05.079 --> 00:11:10.320
assignment three which is researching a
00:11:07.360 --> 00:11:13.240
topic area and forming a very good
00:11:10.320 --> 00:11:15.680
understanding of the topic that you're
00:11:13.240 --> 00:11:18.800
trying to handle and so there's a bunch
00:11:15.680 --> 00:11:22.800
of different ways you can do this uh the
00:11:18.800 --> 00:11:25.680
first one is keyword search and so you
00:11:22.800 --> 00:11:27.839
look something up on Google Scholar or
00:11:25.680 --> 00:11:29.480
something uh finding older and newer
00:11:27.839 --> 00:11:32.880
papers so this is like following the
00:11:29.480 --> 00:11:35.360
tracks of papers you can uh read the
00:11:32.880 --> 00:11:39.160
abstract and intro uh read the details
00:11:35.360 --> 00:11:43.760
of most relevant papers and I don't do
00:11:39.160 --> 00:11:45.440
this as much now but um when I was a
00:11:43.760 --> 00:11:47.360
graduate student I would often make a
00:11:45.440 --> 00:11:49.800
short summary of the paper to make sure
00:11:47.360 --> 00:11:54.680
I really understood the details uh
00:11:49.800 --> 00:11:56.000
because also now I teach a class um and
00:11:54.680 --> 00:11:58.240
actually making these slides is very
00:11:56.000 --> 00:12:00.120
useful for me so going back into the
00:11:58.240 --> 00:12:03.440
Transformer slides you know that
00:12:00.120 --> 00:12:05.160
kind of serves as my um you know my way
00:12:03.440 --> 00:12:06.800
of digesting papers and making sure that
00:12:05.160 --> 00:12:08.160
I can explain them and if you're not
00:12:06.800 --> 00:12:10.480
teaching a class you can go in and
00:12:08.160 --> 00:12:13.560
make a summary of it yourself so
00:12:10.480 --> 00:12:16.480
that can confirm uh solidify your memory
00:12:13.560 --> 00:12:19.360
and like confirm your uh ability to
00:12:16.480 --> 00:12:19.360
understand everything that's in
00:12:20.639 --> 00:12:27.120
there cool um so next I'd like to talk
00:12:23.639 --> 00:12:29.600
about some sources of papers in NLP um
00:12:27.120 --> 00:12:31.800
one really good source uh is the ACL
00:12:29.600 --> 00:12:33.720
Anthology another good source is Google
00:12:31.800 --> 00:12:36.120
Scholar um they both have their
00:12:33.720 --> 00:12:37.959
advantages and their disadvantages um
00:12:36.120 --> 00:12:39.800
increasingly actually I realized now
00:12:37.959 --> 00:12:41.959
that I should add this to my slides but
00:12:39.800 --> 00:12:43.639
increasingly a lot of good uh papers in
00:12:41.959 --> 00:12:47.120
NLP are also published in machine
00:12:43.639 --> 00:12:51.199
learning conferences so like ICML or NeurIPS
00:12:47.120 --> 00:12:53.040
or um ICLR or things like that the
00:12:51.199 --> 00:12:54.920
problem is the ACL Anthology is way
00:12:53.040 --> 00:12:56.600
better than any of them at like
00:12:54.920 --> 00:13:00.360
organizing the papers in an easy to
00:12:56.600 --> 00:13:03.560
process way so I think um I'll talk
00:13:00.360 --> 00:13:06.000
about this uh for now and so the ACL
00:13:03.560 --> 00:13:08.800
Anthology covers many uh prestigious
00:13:06.000 --> 00:13:11.639
venues in NLP it has all of these ones
00:13:08.800 --> 00:13:15.160
here this figure is a little bit old uh
00:13:11.639 --> 00:13:18.839
I I made it in 21 2021 but you know it
00:13:15.160 --> 00:13:22.959
reaches up to the present day and what I
00:13:18.839 --> 00:13:25.880
do often is I can start with the past 3
00:13:22.959 --> 00:13:30.160
to 5 years of several top venues in here
00:13:25.880 --> 00:13:33.880
like ACL, EMNLP, uh NAACL, and TACL and
00:13:30.160 --> 00:13:36.360
go in and do uh keyword search and so
00:13:33.880 --> 00:13:36.360
like let's
00:13:38.760 --> 00:13:43.600
let's say I was interested in
00:13:44.639 --> 00:13:49.519
multilingual large language
00:13:47.600 --> 00:13:52.079
models and evaluating them in some way
00:13:49.519 --> 00:13:54.279
so I would go to ACL and then I would
00:13:52.079 --> 00:13:57.560
just put in multi
00:13:54.279 --> 00:14:01.360
lingual um and you get a wonderful paper
00:13:57.560 --> 00:14:01.360
by some researcher
00:14:01.480 --> 00:14:06.440
named that was not intentional I didn't
00:14:03.639 --> 00:14:08.800
know that was going to happen but um so
00:14:06.440 --> 00:14:11.240
on the Fly crosslingual masking for
00:14:08.800 --> 00:14:12.959
multilingual pre-training um scaling
00:14:11.240 --> 00:14:15.040
multilingual corpora and language models
00:14:12.959 --> 00:14:18.120
to 500 languages that seems pretty
00:14:15.040 --> 00:14:19.880
pretty relevant evaluating multilingual
00:14:18.120 --> 00:14:22.000
compositional generalization so you can
00:14:19.880 --> 00:14:27.680
just go through here and see a bunch of
00:14:22.000 --> 00:14:30.680
papers that could be
00:14:27.680 --> 00:14:30.680
useful
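A minimal Python sketch of this kind of keyword filtering over the Anthology, assuming it still publishes a full BibTeX export at the URL below (check aclanthology.org for the current location):

    import re
    import urllib.request

    # Assumed export URL for the full Anthology BibTeX file.
    URL = "https://aclanthology.org/anthology.bib"
    bib = urllib.request.urlopen(URL).read().decode("utf-8")

    # Pull out paper titles and keep the ones matching a keyword of interest.
    titles = re.findall(r'title = "([^"]+)"', bib)
    for title in titles:
        if "multilingual" in title.lower():
            print(title)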
00:14:32.240 --> 00:14:35.199
and you could uh if you're doing a more
00:14:33.800 --> 00:14:36.920
machine learning oriented thing you can
00:14:35.199 --> 00:14:38.920
do the same thing for like the nurs
00:14:36.920 --> 00:14:41.480
proceedings or the icml proceedings or
00:14:38.920 --> 00:14:41.480
something like
00:14:41.800 --> 00:14:48.120
that um separately from this you can go
00:14:44.839 --> 00:14:50.920
through Google Scholar um this allows
00:14:48.120 --> 00:14:52.560
for a search of papers by keyword and so
00:14:50.920 --> 00:14:54.440
if I write like neural entity
00:14:52.560 --> 00:14:56.360
recognition it will give neural
00:14:54.440 --> 00:15:00.040
architectures for identity recognition
00:14:56.360 --> 00:15:03.399
all of these things like this um you can
00:15:00.040 --> 00:15:06.800
view the more recent papers so like for
00:15:03.399 --> 00:15:10.120
example uh if you're researching uh kind
00:15:06.800 --> 00:15:12.759
of generic topic that a lot of people
00:15:10.120 --> 00:15:14.639
do research on
00:15:12.759 --> 00:15:18.399
you might be getting papers from like
00:15:14.639 --> 00:15:19.920
1998 or something like this and you know
00:15:18.399 --> 00:15:21.639
they might be useful but honestly the
00:15:19.920 --> 00:15:23.519
methodology has changed so much since
00:15:21.639 --> 00:15:24.680
then that most methods papers from
00:15:23.519 --> 00:15:26.959
that long ago are probably not going to
00:15:24.680 --> 00:15:29.480
be very useful um so you can view the
00:15:26.959 --> 00:15:31.079
recent papers another really useful
00:15:29.480 --> 00:15:33.759
thing that you can do is view papers
00:15:31.079 --> 00:15:35.319
that cite the current paper and you can
00:15:33.759 --> 00:15:39.560
even click on this and then you can
00:15:35.319 --> 00:15:42.519
search within the citing papers so
00:15:39.560 --> 00:15:44.399
um like let's say I want to know about
00:15:42.519 --> 00:15:45.620
how
00:15:44.399 --> 00:15:48.730
people
00:15:50.720 --> 00:15:55.720
do let's say I want to see if anybody
00:15:53.199 --> 00:15:59.639
does neural entity recognition with uh
00:15:55.720 --> 00:16:02.160
state space models so I search for state
00:15:59.639 --> 00:16:05.399
space
00:16:02.160 --> 00:16:09.040
model and then I search within the
00:16:05.399 --> 00:16:12.279
citing articles and I'm able to find
00:16:09.040 --> 00:16:14.319
three articles that at least cite this
00:16:12.279 --> 00:16:17.759
paper and talk about state space
00:16:14.319 --> 00:16:20.319
models so
00:16:17.759 --> 00:16:21.600
um none of these seem particularly
00:16:20.319 --> 00:16:23.240
relevant to what I was looking for but
00:16:21.600 --> 00:16:26.800
you get the idea like this can be a
00:16:23.240 --> 00:16:26.800
useful tool for finding more recent
00:16:27.519 --> 00:16:30.519
things
00:16:33.639 --> 00:16:40.480
and then finding older papers this is
00:16:36.279 --> 00:16:42.839
also relatively easy um so you read the
00:16:40.480 --> 00:16:44.319
papers that you're interested in and
00:16:42.839 --> 00:16:45.480
then it will have links back to older
00:16:44.319 --> 00:16:47.519
papers and you look them up in the
00:16:45.480 --> 00:16:50.000
references this is how I find older
00:16:47.519 --> 00:16:53.600
papers that might be
00:16:50.000 --> 00:16:57.800
relevant um and so these are the
00:16:53.600 --> 00:16:59.720
tools that I use um so I'd
00:16:57.800 --> 00:17:03.600
like to give a few caveats about Google
00:16:59.720 --> 00:17:06.120
Scholar and uh things like Twitter or
00:17:03.600 --> 00:17:08.360
LinkedIn or something like this they
00:17:06.120 --> 00:17:10.720
give you very biased views on all the
00:17:08.360 --> 00:17:14.600
papers that are out there um because
00:17:10.720 --> 00:17:16.919
they sort by popularity basically so um
00:17:14.600 --> 00:17:19.439
actually if you're looking at like
00:17:16.919 --> 00:17:22.000
Twitter or LinkedIn or something like
00:17:19.439 --> 00:17:23.679
that you can actually get a pretty bleak
00:17:22.000 --> 00:17:25.360
view on natural language processing and
00:17:23.679 --> 00:17:28.000
say all anybody is doing is training
00:17:25.360 --> 00:17:30.080
large language models because you know
00:17:28.000 --> 00:17:31.720
these things tend to become you know
00:17:30.080 --> 00:17:33.520
popular and then they get amplified by
00:17:31.720 --> 00:17:35.840
algorithms and stuff like that when in
00:17:33.520 --> 00:17:37.440
fact like the landscape is much richer
00:17:35.840 --> 00:17:40.400
which is why I do definitely suggest
00:17:37.440 --> 00:17:42.000
that you like actually look through uh
00:17:40.400 --> 00:17:43.880
conference proceedings and stuff and
00:17:42.000 --> 00:17:46.720
find papers that are not you know
00:17:43.880 --> 00:17:48.520
amplified as much so um I definitely
00:17:46.720 --> 00:17:50.840
highly recommend doing this in addition
00:17:48.520 --> 00:17:52.480
to you know Google Scholar or social
00:17:50.840 --> 00:17:54.640
media or other things like that that
00:17:52.480 --> 00:17:54.640
might
00:17:56.600 --> 00:18:01.760
be cool um I'd also like to mention a
00:18:00.200 --> 00:18:04.000
thing about the ups and downs of
00:18:01.760 --> 00:18:07.559
preemptive surveys
00:18:04.000 --> 00:18:10.440
so um surveying extensively before doing
00:18:07.559 --> 00:18:12.840
research uh has a bunch of good sides so
00:18:10.440 --> 00:18:14.000
it prevents you from duplicating work so
00:18:12.840 --> 00:18:15.039
somebody else might have done a very
00:18:14.000 --> 00:18:18.080
similar
00:18:15.039 --> 00:18:20.480
thing um it also increases your toolbox
00:18:18.080 --> 00:18:21.600
of methods so you know if it's a problem
00:18:20.480 --> 00:18:25.400
that a lot of people have worked on
00:18:21.600 --> 00:18:27.120
before then you know it helps uh give
00:18:25.400 --> 00:18:30.320
you ideas of methods that you could be
00:18:27.120 --> 00:18:35.600
using um however in a way it also kind
00:18:30.320 --> 00:18:38.720
of constrains your thinking so um if you
00:18:35.600 --> 00:18:42.480
like on once you have built up a very
00:18:38.720 --> 00:18:45.440
extensive survey of like ways to do
00:18:42.480 --> 00:18:47.240
things you tend to not move away from
00:18:45.440 --> 00:18:48.799
there when in fact like if you
00:18:47.240 --> 00:18:50.080
just thought of ways to solve problems
00:18:48.799 --> 00:18:52.360
without looking at everything you might
00:18:50.080 --> 00:18:54.799
come up with something over here that might
00:18:52.360 --> 00:18:56.400
actually be a good idea right um and so
00:18:54.799 --> 00:18:58.600
there's this really nice essay it was
00:18:56.400 --> 00:19:00.799
actually shared uh shared with me by
00:18:58.600 --> 00:19:02.440
Chris Manning from Stanford um it's
00:19:00.799 --> 00:19:04.720
called how to build an economic model
00:19:02.440 --> 00:19:06.679
in your spare time it's from
00:19:04.720 --> 00:19:08.880
a Nobel Prize winner in economics but
00:19:06.679 --> 00:19:10.480
he's talking about how when he tries to
00:19:08.880 --> 00:19:13.039
come up with new and like important
00:19:10.480 --> 00:19:15.840
ideas he doesn't look at economics
00:19:13.039 --> 00:19:19.679
journals he looks at the newspaper and
00:19:15.840 --> 00:19:21.919
tries to uh you know
00:19:19.679 --> 00:19:23.480
like look at problems that people are
00:19:21.919 --> 00:19:24.840
talking about in the newspaper and think
00:19:23.480 --> 00:19:27.159
about whether there's an economic
00:19:24.840 --> 00:19:29.919
solution to them and so if we think
00:19:27.159 --> 00:19:32.880
about the analog of how we can do this in
00:19:29.919 --> 00:19:35.600
natural language processing you know
00:19:32.880 --> 00:19:37.360
maybe you don't necessarily right away
00:19:35.600 --> 00:19:38.799
want to do a really extensive survey
00:19:37.360 --> 00:19:41.080
first you might just think about like
00:19:38.799 --> 00:19:44.080
what's bothering you like when you're
00:19:41.080 --> 00:19:46.799
using ChatGPT what is really
00:19:44.080 --> 00:19:49.600
frustrating to you uh about how it gives
00:19:46.799 --> 00:19:51.280
responses or um what are the things you
00:19:49.600 --> 00:19:53.159
wish it were possible to do through
00:19:51.280 --> 00:19:56.240
natural language processing but are
00:19:53.159 --> 00:19:57.640
not possible to do and um then you can
00:19:56.240 --> 00:20:00.679
start from there you can look at you
00:19:57.640 --> 00:20:03.440
know what companies are doing in their
00:20:00.679 --> 00:20:05.799
Tech demos uh because the tech demos
00:20:03.440 --> 00:20:08.640
might be nice but they almost never work
00:20:05.799 --> 00:20:11.240
as well as the tech demo makes them seem
00:20:08.640 --> 00:20:13.840
like they work so that could be another
00:20:11.240 --> 00:20:15.720
place to get ideas um or you can look at
00:20:13.840 --> 00:20:17.039
papers in a related field like machine
00:20:15.720 --> 00:20:18.760
learning like let's say you're a machine
00:20:17.039 --> 00:20:21.280
learning oriented person and you really
00:20:18.760 --> 00:20:23.000
love like math and stuff like that it's
00:20:21.280 --> 00:20:25.799
like well there's this good mathematical
00:20:23.000 --> 00:20:27.760
tool that I think could be applicable to
00:20:25.799 --> 00:20:30.440
um a certain problem in NLP or something
00:20:27.760 --> 00:20:31.960
like that so you could do that too um
00:20:30.440 --> 00:20:33.960
the the final one you know comes with
00:20:31.960 --> 00:20:35.799
all the caveats of doing top-down research
00:20:33.960 --> 00:20:37.320
of course so you know you need to make
00:20:35.799 --> 00:20:39.799
sure that that really is the correct
00:20:37.320 --> 00:20:42.159
tool for whatever you want to solve but
00:20:39.799 --> 00:20:45.280
um definitely this is something to think
00:20:42.159 --> 00:20:48.240
about um however for assignment three
00:20:45.280 --> 00:20:49.559
you need to do a survey so I'm
00:20:48.240 --> 00:20:50.720
forcing you to do a survey for
00:20:49.559 --> 00:20:52.200
assignment three so if you're going to
00:20:50.720 --> 00:20:53.640
do something like this you can do it
00:20:52.200 --> 00:20:56.600
before assignment 3 and start thinking
00:20:53.640 --> 00:21:00.000
about what you want to be doing so um
00:20:56.600 --> 00:21:01.520
that's something
00:21:00.000 --> 00:21:03.200
uh any questions or discussion about
00:21:01.520 --> 00:21:06.799
that
00:21:03.200 --> 00:21:07.840
part this is hard I'm
00:21:06.799 --> 00:21:11.120
happy to
00:21:07.840 --> 00:21:14.039
discuss either now or in office hours or
00:21:11.120 --> 00:21:14.039
anything like this
00:21:14.200 --> 00:21:19.720
but Okay
00:21:17.080 --> 00:21:24.279
cool so the next thing is forming a
00:21:19.720 --> 00:21:25.640
hypothesis so uh once you
00:21:24.279 --> 00:21:28.600
have a general idea of what you want to
00:21:25.640 --> 00:21:31.240
do um and you have done a survey of related
00:21:28.600 --> 00:21:32.480
work you can devise a final research
00:21:31.240 --> 00:21:34.159
question or
00:21:32.480 --> 00:21:37.760
hypothesis
00:21:34.159 --> 00:21:40.039
and so a research question is one or
00:21:37.760 --> 00:21:43.400
several explicit questions regarding the
00:21:40.039 --> 00:21:45.919
thing that you want to know um
00:21:43.400 --> 00:21:47.400
and this is actually pretty hard for
00:21:45.919 --> 00:21:49.080
people like I ask people to write
00:21:47.400 --> 00:21:50.880
research questions and very often they
00:21:49.080 --> 00:21:53.080
don't write research questions in this
00:21:50.880 --> 00:21:57.720
format and I have to ask people to try
00:21:53.080 --> 00:21:59.919
to change them and what they what I
00:21:57.720 --> 00:22:03.159
think they in general should be are yes
00:21:59.919 --> 00:22:08.120
no questions so
00:22:03.159 --> 00:22:10.400
it um yes no questions and you have a
00:22:08.120 --> 00:22:13.120
hypothesis uh about what you think the
00:22:10.400 --> 00:22:14.600
answer to the question may be a priori
00:22:13.120 --> 00:22:17.520
and that hypothesis should be
00:22:14.600 --> 00:22:19.919
falsifiable so basically it's if you get
00:22:17.520 --> 00:22:21.240
a certain result you can demonstrate
00:22:19.919 --> 00:22:23.120
that the answer to this question is
00:22:21.240 --> 00:22:24.679
probably yes if you get a different
00:22:23.120 --> 00:22:27.520
result you can demonstrate that the
00:22:24.679 --> 00:22:29.640
answer to the question is probably no
00:22:27.520 --> 00:22:32.400
and just to make this a little bit more
00:22:29.640 --> 00:22:34.360
concrete I can give a few curiosity
00:22:32.400 --> 00:22:36.880
driven questions and
00:22:34.360 --> 00:22:40.720
hypotheses so the curiosity driven
00:22:36.880 --> 00:22:43.480
questions are a little bit easier so um
00:22:40.720 --> 00:22:45.600
we have the Curiosity driven question of
00:22:43.480 --> 00:22:49.679
are all
00:22:45.600 --> 00:22:53.559
languages equally hard to language model
00:22:49.679 --> 00:22:55.400
and they say uh it is unlikely that all
00:22:53.559 --> 00:22:56.760
languages are equally easy or that
00:22:55.400 --> 00:22:58.799
methods are equally good at all
00:22:56.760 --> 00:23:01.159
languages um so so that's their
00:22:58.799 --> 00:23:04.120
hypothesis so they think a priori that
00:23:01.159 --> 00:23:05.919
that's the case um but that might be
00:23:04.120 --> 00:23:08.400
falsified by getting a very strong
00:23:05.919 --> 00:23:10.679
result that says like no matter which
00:23:08.400 --> 00:23:13.760
language you're modeling many models
00:23:10.679 --> 00:23:18.120
that we use get similar results on
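One hedged sketch of how such a falsifiable comparison could be set up: score each language's test set with the same model and compare bits per byte, which, unlike per-token perplexity, is comparable across tokenizers and scripts (the file name and loss value below are purely illustrative):

    import math

    def bits_per_byte(total_nll_nats: float, text: str) -> float:
        # Convert a corpus-level negative log-likelihood (in nats)
        # to bits per UTF-8 byte of the raw test text.
        n_bytes = len(text.encode("utf-8"))
        return total_nll_nats / (math.log(2) * n_bytes)

    # Near-identical values across languages would be evidence against the
    # hypothesis that languages differ in modeling difficulty, e.g.:
    # print(bits_per_byte(1.2e6, open("test.en").read()))  # illustrative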
00:23:13.760 --> 00:23:20.400
um what makes a particular podcast
00:23:18.120 --> 00:23:21.320
broadly engaging so this was an analysis
00:23:20.400 --> 00:23:24.400
of
00:23:21.320 --> 00:23:27.960
podcasts uh where they compared popular
00:23:24.400 --> 00:23:29.720
podcasts and unpopular podcasts or
00:23:27.960 --> 00:23:32.400
engaging and unengaging
00:23:29.720 --> 00:23:34.400
podcasts and it says uh tips such as
00:23:32.400 --> 00:23:37.039
reducing filler words and disfluencies
00:23:34.400 --> 00:23:38.840
or incorporating emotion are things that
00:23:37.039 --> 00:23:41.400
people had anecdotally written on the
00:23:38.840 --> 00:23:43.039
internet as tips to make a good podcast
00:23:41.400 --> 00:23:45.760
but nobody had actually empirically
00:23:43.039 --> 00:23:48.440
validated that so they wanted to
00:23:45.760 --> 00:23:50.000
like actually go validate that so they
00:23:48.440 --> 00:23:51.679
came up with hypotheses and they could
00:23:50.000 --> 00:23:55.720
demonstrate that those had good or bad
00:23:51.679 --> 00:23:55.720
correlation with a podcast being judged as
00:23:56.880 --> 00:24:03.600
engaging application driven questions
00:23:59.039 --> 00:24:03.600
and hypotheses are a little bit harder
00:24:04.520 --> 00:24:10.480
so here is an
00:24:07.640 --> 00:24:13.039
example this is an example from a paper
00:24:10.480 --> 00:24:18.720
that I wrote previously which
00:24:13.039 --> 00:24:22.080
was when and why do
00:24:18.720 --> 00:24:22.960
pre-trained word embeddings help neural
00:24:22.080 --> 00:24:25.080
machine
00:24:22.960 --> 00:24:26.760
translation and this was back when
00:24:25.080 --> 00:24:28.279
pre-training was mostly like word
00:24:26.760 --> 00:24:31.880
embeddings we weren't preing the whole
00:24:28.279 --> 00:24:34.480
body of the neural net so
00:24:31.880 --> 00:24:36.640
now the answers to this question are a
00:24:34.480 --> 00:24:37.919
little bit different but basically the
00:24:36.640 --> 00:24:40.080
questions that we asked is is the
00:24:37.919 --> 00:24:42.360
behavior of pre-training affected by
00:24:40.080 --> 00:24:45.960
language families and other linguistic
00:24:42.360 --> 00:24:49.520
features of source and Target languages
00:24:45.960 --> 00:24:51.360
so uh we expected that the answer to
00:24:49.520 --> 00:24:53.640
this would be yes it would vary across
00:24:51.360 --> 00:24:54.960
them do pre-trained embeddings help more
00:24:53.640 --> 00:24:57.760
when the size of the training data is
00:24:54.960 --> 00:24:59.039
small we expected that this would be yes
00:24:57.760 --> 00:25:00.640
how much does the similarity of the
00:24:59.039 --> 00:25:03.720
source and Target languages affect the
00:25:00.640 --> 00:25:06.200
efficacy of using pre-trained embeddings uh
00:25:03.720 --> 00:25:08.399
we didn't have a hypothesis about
00:25:06.200 --> 00:25:10.600
whether it would or not and is it
00:25:08.399 --> 00:25:12.320
helpful to align the embedding spaces
00:25:10.600 --> 00:25:14.520
between the source and Target languages
00:25:12.320 --> 00:25:16.039
we assume this would be yes and do
00:25:14.520 --> 00:25:17.640
pre-trained embeddings help more in
00:25:16.039 --> 00:25:19.360
multilingual systems as compared to
00:25:17.640 --> 00:25:22.679
bilingual systems and we didn't have a
00:25:19.360 --> 00:25:26.279
good hypothesis about that
00:25:22.679 --> 00:25:29.559
another one is uh
00:25:26.279 --> 00:25:32.760
sorry the question of whether and how
00:25:29.559 --> 00:25:35.039
contextual information benefits end-to-end
00:25:32.760 --> 00:25:38.960
speech translation has received little
00:25:35.039 --> 00:25:42.480
attention and so their guess was that it
00:25:38.960 --> 00:25:44.880
probably would help so application
00:25:42.480 --> 00:25:47.120
oriented questions are a little bit
00:25:44.880 --> 00:25:49.200
tricky because the obvious one is like
00:25:47.120 --> 00:25:52.200
does X make Y
00:25:49.200 --> 00:25:54.080
better and so you you have a method you
00:25:52.200 --> 00:25:55.559
think it's going to make the output
00:25:54.080 --> 00:25:58.120
better and so that's kind of your
00:25:55.559 --> 00:26:00.000
obvious research question but the
00:25:58.120 --> 00:26:02.080
problem is the above question or
00:26:00.000 --> 00:26:04.279
hypothesis is natural but it's very
00:26:02.080 --> 00:26:06.679
indirect so normally you also have a
00:26:04.279 --> 00:26:09.760
hypothesis about like why it will help
00:26:06.679 --> 00:26:13.279
or something like this and so if the
00:26:09.760 --> 00:26:15.440
answer is no after your experiments why
00:26:13.279 --> 00:26:18.080
is the answer
00:26:15.440 --> 00:26:20.640
no it could be that your original
00:26:18.080 --> 00:26:23.720
assumption about why a particular method
00:26:20.640 --> 00:26:25.039
would help was wrong which is the worst
00:26:23.720 --> 00:26:28.360
case scenario but you also could just
00:26:25.039 --> 00:26:30.559
have a bug in your code or uh your
00:26:28.360 --> 00:26:32.000
data set your test set might not be
00:26:30.559 --> 00:26:34.279
large enough so you wouldn't be able to
00:26:32.000 --> 00:26:35.840
get a statistically significant result
00:26:34.279 --> 00:26:40.039
based on the amount that it helped you
00:26:35.840 --> 00:26:42.960
improve or other things like that so
00:26:40.039 --> 00:26:44.960
what I like to do in this case is try to
00:26:42.960 --> 00:26:48.399
come up with the intuition about why X
00:26:44.960 --> 00:26:50.360
will make Y better and can you think of
00:26:48.399 --> 00:26:52.080
other research questions or hypotheses
00:26:50.360 --> 00:26:54.240
that confirm or falsify these
00:26:52.080 --> 00:26:56.640
assumptions
00:26:54.240 --> 00:26:59.559
so uh some things that you can do are
00:26:56.640 --> 00:27:01.240
come up with like toy data or come up
00:26:59.559 --> 00:27:03.840
with a subset of the data where you
00:27:01.240 --> 00:27:06.600
think this might be correct so just to
00:27:03.840 --> 00:27:09.279
give an example let's say we have a
00:27:06.600 --> 00:27:12.159
translation model and we have a
00:27:09.279 --> 00:27:14.279
hypothesis that improving entity
00:27:12.159 --> 00:27:16.520
translation in low-resource languages
00:27:14.279 --> 00:27:18.799
will improve translation accuracy and we
00:27:16.520 --> 00:27:21.399
run an experiment or actually maybe this
00:27:18.799 --> 00:27:23.760
is an even better one we we have a
00:27:21.399 --> 00:27:26.240
hypothesis that incorporating contextual
00:27:23.760 --> 00:27:28.799
information in speech translation will
00:27:26.240 --> 00:27:31.760
help translation results
00:27:28.799 --> 00:27:36.480
so incorporating context in machine
00:27:31.760 --> 00:27:37.600
translation has been a very old topic
00:27:36.480 --> 00:27:41.279
like people have been trying to do this
00:27:37.600 --> 00:27:43.559
for a very long time but for a long time
00:27:41.279 --> 00:27:45.200
the conclusion was that it essentially
00:27:43.559 --> 00:27:46.519
wasn't helping translation people would
00:27:45.200 --> 00:27:48.039
incorporate context through neural
00:27:46.519 --> 00:27:50.960
networks or other things like that and
00:27:48.039 --> 00:27:53.320
it just wasn't improving the results
00:27:50.960 --> 00:27:55.320
significantly and in the end the reason
00:27:53.320 --> 00:27:57.960
why was because there just weren't
00:27:55.320 --> 00:27:59.799
enough examples where contextual
00:27:57.960 --> 00:28:02.200
information was useful in the data sets
00:27:59.799 --> 00:28:06.360
that everybody was using so people were
00:28:02.200 --> 00:28:09.080
using really long news sentences to try
00:28:06.360 --> 00:28:10.880
to figure out uh whether context
00:28:09.080 --> 00:28:12.440
was helping but really long news
00:28:10.880 --> 00:28:14.000
sentences have so much information
00:28:12.440 --> 00:28:16.080
included in them that you can mostly
00:28:14.000 --> 00:28:20.120
translate sentence by sentence and get
00:28:16.080 --> 00:28:21.880
it right like 95% of the time so the
00:28:20.120 --> 00:28:23.600
problem wasn't that any of the methods
00:28:21.880 --> 00:28:26.799
that people were proposing were bad it
00:28:23.600 --> 00:28:29.559
was just that they weren't effective
00:28:26.799 --> 00:28:31.440
enough to see big enough uh results and
00:28:29.559 --> 00:28:33.159
so then people changed the data set to
00:28:31.440 --> 00:28:34.720
like conversations or something like
00:28:33.159 --> 00:28:37.399
that and in conversations they're very
00:28:34.720 --> 00:28:39.159
contextual with very short utterances
00:28:37.399 --> 00:28:41.440
and once you started doing things like
00:28:39.159 --> 00:28:45.840
that then the same methods like exactly
00:28:41.440 --> 00:28:48.640
the same methods um were helping
00:28:45.840 --> 00:28:51.120
when they weren't helping before and
00:28:48.640 --> 00:28:52.720
so the underlying assumption about
00:28:51.120 --> 00:28:56.240
incorporating context information is
00:28:52.720 --> 00:28:58.159
that context will be helpful and or
00:28:56.240 --> 00:29:01.760
context is necessary
00:28:58.159 --> 00:29:03.880
to you know do translation well so does
00:29:01.760 --> 00:29:06.880
anyone have an idea about how you could
00:29:03.880 --> 00:29:06.880
like actually verify that
00:29:10.880 --> 00:29:16.519
assumption any idea yeah simplest way
00:29:14.000 --> 00:29:19.120
would be just give an El way to set and
00:29:16.519 --> 00:29:21.000
then have a measure of okay if it in
00:29:19.120 --> 00:29:23.679
more than
00:29:21.000 --> 00:29:25.519
x% um and how would that verify the
00:29:23.679 --> 00:29:28.480
assumption that context is
00:29:25.519 --> 00:29:30.720
necessary so we're asking a question
00:29:28.480 --> 00:29:33.480
whether context is helpful in the project
00:29:30.720 --> 00:29:36.000
you're doing that uh we're asking
00:29:33.480 --> 00:29:39.240
whether
00:29:36.000 --> 00:29:40.840
so we're asking kind of a two-part question the
00:29:39.240 --> 00:29:44.080
main question is whether context is
00:29:40.840 --> 00:29:45.559
helpful given a particular you know
00:29:44.080 --> 00:29:47.240
experimental setup right so like
00:29:45.559 --> 00:29:50.440
training data
00:29:47.240 --> 00:29:52.039
set modeling method and training
00:29:50.440 --> 00:29:54.679
algorithm and evaluation algorithm
00:29:52.039 --> 00:29:56.480
that's kind of the big final result that
00:29:54.679 --> 00:29:58.840
you want to get in your paper but
00:29:56.480 --> 00:30:01.399
there's kind of a sub-question which is
00:29:58.840 --> 00:30:04.360
is context even necessary to translate
00:30:01.399 --> 00:30:06.559
well you train a model with context and
00:30:04.360 --> 00:30:08.200
one without context you train a model
00:30:06.559 --> 00:30:10.679
with context and one without context but
00:30:08.200 --> 00:30:14.080
what if your model of context is really
00:30:10.679 --> 00:30:15.399
bad yeah the same model you have the same
00:30:14.080 --> 00:30:16.840
model architecture but let's say your
00:30:15.399 --> 00:30:18.559
model architecture is really bad at
00:30:16.840 --> 00:30:19.919
capturing context so then maybe it's a
00:30:18.559 --> 00:30:22.399
problem of your model architecture and
00:30:19.919 --> 00:30:24.720
context is necessary or helpful but your
00:30:22.399 --> 00:30:27.399
model just isn't very good at capturing
00:30:24.720 --> 00:30:29.720
it yeah exactly so this is one thing
00:30:27.399 --> 00:30:31.960
that people can do so there was a
00:30:29.720 --> 00:30:34.240
interesting paper um let me see if I can
00:30:31.960 --> 00:30:34.240
find
00:30:39.960 --> 00:30:49.080
it so this is a paper from a long time
00:30:45.760 --> 00:30:51.600
ago where they did something like
00:30:49.080 --> 00:30:53.360
this um it's evaluating machine
00:30:51.600 --> 00:30:54.480
translation systems with second language
00:30:53.360 --> 00:30:57.399
proficiency
00:30:54.480 --> 00:31:01.240
tests and basically what they did is
00:30:57.399 --> 00:31:03.519
they had these English proficiency tests
00:31:01.240 --> 00:31:05.320
for uh I think it was like middle
00:31:03.519 --> 00:31:07.480
schoolers or high schoolers or something
00:31:05.320 --> 00:31:09.600
like this and then they used machine
00:31:07.480 --> 00:31:11.240
translation systems to translate them
00:31:09.600 --> 00:31:13.600
into Japanese and then they asked
00:31:11.240 --> 00:31:19.720
Japanese students to solve them in
00:31:13.600 --> 00:31:19.720
Japanese and so what they did is they
00:31:20.000 --> 00:31:26.159
asked uh anonymous system G and
00:31:23.679 --> 00:31:28.200
anonymous system Y which are Google and
00:31:26.159 --> 00:31:32.360
Yahoo
00:31:28.200 --> 00:31:34.720
and uh a human without context and a
00:31:32.360 --> 00:31:36.279
human with context to translate them so
00:31:34.720 --> 00:31:38.720
they ask humans to translate each
00:31:36.279 --> 00:31:40.880
sentence without giving any context and
00:31:38.720 --> 00:31:44.320
they ask humans to translate each uh
00:31:40.880 --> 00:31:46.399
sentence with giving context and what
00:31:44.320 --> 00:31:48.960
they were able to find was in this case
00:31:46.399 --> 00:31:50.080
humans with context the Japanese
00:31:48.960 --> 00:31:53.080
students were able to answer the
00:31:50.080 --> 00:31:55.360
questions most of the time um whereas if
00:31:53.080 --> 00:31:57.559
they translated without context like G
00:31:55.360 --> 00:31:59.039
and Y were doing at that time actually
00:31:57.559 --> 00:32:01.320
Y was almost as good as human
00:31:59.039 --> 00:32:04.080
translators at you know achieving the
00:32:01.320 --> 00:32:05.440
task so basically like the
00:32:04.080 --> 00:32:09.159
important thing here is they were able
00:32:05.440 --> 00:32:11.039
to confirm their you know idea that in
00:32:09.159 --> 00:32:12.519
this case humans with context were much
00:32:11.039 --> 00:32:13.799
better than humans without context so
00:32:12.519 --> 00:32:16.279
that would verify your like sub
00:32:13.799 --> 00:32:18.080
assumption right and so this is just
00:32:16.279 --> 00:32:20.279
like one
00:32:18.080 --> 00:32:22.240
example this is just one example of
00:32:20.279 --> 00:32:25.960
something that you can
00:32:22.240 --> 00:32:27.480
do uh but the basic idea is like your
00:32:25.960 --> 00:32:29.320
final result is that you want to build a
00:32:27.480 --> 00:32:30.799
system that does better on some
00:32:29.320 --> 00:32:32.159
Benchmark that you care about there's a
00:32:30.799 --> 00:32:33.600
bunch of things that go into whether it
00:32:32.159 --> 00:32:36.159
does better or not your evaluation
00:32:33.600 --> 00:32:38.960
system your model your training data
00:32:36.159 --> 00:32:41.559
your evaluation data set
00:32:38.960 --> 00:32:43.080
um and things like that so can you break
00:32:41.559 --> 00:32:45.360
that down into sub questions that you
00:32:43.080 --> 00:32:48.039
could ask where you could verify that
00:32:45.360 --> 00:32:49.720
it's working or not uh based on whether
00:32:48.039 --> 00:32:51.600
those things are happening another thing
00:32:49.720 --> 00:32:53.159
people do an ml oriented things is
00:32:51.600 --> 00:32:54.919
create a toy data set where they know
00:32:53.159 --> 00:32:57.200
the phenomenon they're interested in
00:32:54.919 --> 00:32:59.679
exists and train their models on there
00:32:57.200 --> 00:33:02.919
and make sure that they work there um so
00:32:59.679 --> 00:33:02.919
that's another thing that you can do
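A minimal illustration of that idea for the context-in-translation hypothesis discussed above (the examples, the French targets, and the model.translate API are all invented for illustration):

    # Toy contrastive data where context is guaranteed to matter: each
    # source sentence is ambiguous on its own, and the preceding context
    # sentence disambiguates the correct translation.
    toy_data = [
        {"context": "The engineer finished her report.",
         "source": "She submitted it.",
         "target": "Elle l'a soumis."},
        {"context": "The engineers finished their reports.",
         "source": "They submitted them.",
         "target": "Ils les ont soumis."},
    ]

    def accuracy(model, data, use_context):
        correct = 0
        for ex in data:
            src = (ex["context"] + " " + ex["source"]) if use_context else ex["source"]
            correct += int(model.translate(src) == ex["target"])  # hypothetical API
        return correct / len(data)

    # If accuracy with context is no better than without on data built to
    # need context, the model architecture is the likely bottleneck.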
00:33:03.120 --> 00:33:07.639
cool um any questions about
00:33:08.080 --> 00:33:12.760
this okay
00:33:10.200 --> 00:33:16.519
so the next thing is running
00:33:12.760 --> 00:33:19.000
experiments um so in order to do this
00:33:16.519 --> 00:33:21.399
you'll find data that will answer your
00:33:19.000 --> 00:33:23.639
research question uh run experiments and
00:33:21.399 --> 00:33:25.720
calculate numbers uh calculate
00:33:23.639 --> 00:33:28.279
significant differences and analyze
00:33:25.720 --> 00:33:31.080
effects whoops
00:33:28.279 --> 00:33:35.519
and so this is a basic pipeline that we
00:33:31.080 --> 00:33:37.760
want to follow so obtaining test data so
00:33:35.519 --> 00:33:41.200
in order to obtain test data uh we would
00:33:37.760 --> 00:33:42.799
like to find data sets um so if you're
00:33:41.200 --> 00:33:46.200
building on previous work the safest
00:33:42.799 --> 00:33:48.960
thing that you can do um is start with
00:33:46.200 --> 00:33:51.919
the same data sets if you're answering a
00:33:48.960 --> 00:33:53.799
new question um you can think about can
00:33:51.919 --> 00:33:55.399
you repurpose other data sets to answer
00:33:53.799 --> 00:33:57.679
the question so very often there will be
00:33:55.399 --> 00:34:00.080
a data set that is uh appropriate for
00:33:57.679 --> 00:34:03.360
answering your question um and
00:34:00.080 --> 00:34:05.760
you can go and find that um actually our
00:34:03.360 --> 00:34:06.919
wonderful TJ has created a system
00:34:05.760 --> 00:34:08.800
called DataFinder that will
00:34:06.919 --> 00:34:11.159
automatically find it for you so if you
00:34:08.800 --> 00:34:13.679
want to uh search for data sets you can
00:34:11.159 --> 00:34:16.760
use his system or ask him about it but
00:34:13.679 --> 00:34:20.359
um uh but if no appropriate data set
00:34:16.760 --> 00:34:24.359
exists you can uh create your own and
00:34:20.359 --> 00:34:25.879
particularly for industry use cases it's
00:34:24.359 --> 00:34:28.119
very common that you need to go in and
00:34:25.879 --> 00:34:30.040
create your own or if you're planning on
00:34:28.119 --> 00:34:31.639
doing research in Academia afterwards
00:34:30.040 --> 00:34:33.119
very often you'll come up with a
00:34:31.639 --> 00:34:34.639
research question where no data set
00:34:33.119 --> 00:34:36.679
exists so you'll have to create your own
00:34:34.639 --> 00:34:38.960
anyway so this is something that's
00:34:36.679 --> 00:34:41.639
really important to be able to do well
00:34:38.960 --> 00:34:44.639
uh in most
00:34:41.639 --> 00:34:49.240
cases um so I'll be talking about how to
00:34:44.639 --> 00:34:53.280
do all of these so data set lists um the
00:34:49.240 --> 00:34:55.159
best one I think by far in uh natural
00:34:53.280 --> 00:34:58.359
language processing nowadays is Hugging
00:34:55.159 --> 00:35:02.960
Face Datasets um there's also other
00:34:58.359 --> 00:35:05.359
data resources like um ELRA is uh
00:35:02.960 --> 00:35:07.240
another one kind of by the more
00:35:05.359 --> 00:35:09.800
traditional natural language processing
00:35:07.240 --> 00:35:12.960
Community there's also the LDC the
00:35:09.800 --> 00:35:15.680
Linguistic Data Consortium and there
00:35:12.960 --> 00:35:17.119
are some older heavily annotated data
00:35:15.680 --> 00:35:20.040
sets that are only available through
00:35:17.119 --> 00:35:22.000
those at CMU you have the ability to
00:35:20.040 --> 00:35:24.520
download things from LDC so if you find
00:35:22.000 --> 00:35:26.960
an LDC data set in any papers that
00:35:24.520 --> 00:35:29.640
you're doing or online um you need
00:35:26.960 --> 00:35:31.000
register for that and I'm the person
00:35:29.640 --> 00:35:33.280
who's in charge of it so I'll give you
00:35:31.000 --> 00:35:35.520
access and then uh and then you can use
00:35:33.280 --> 00:35:37.400
it um there's also things like papers
00:35:35.520 --> 00:35:39.680
with code and papers with code basically
00:35:37.400 --> 00:35:41.359
automatically extracts uh kind of like
00:35:39.680 --> 00:35:42.839
the names of data sets so even some
00:35:41.359 --> 00:35:45.599
things that don't appear on Hugging
00:35:42.839 --> 00:35:45.599
Face will appear there.
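To make this concrete, here's a minimal sketch of pulling down a data set with Hugging Face Datasets; GLUE's SST-2 task here is just an arbitrary example:

```python
# A minimal sketch of loading a data set with Hugging Face Datasets;
# GLUE's SST-2 is just an arbitrary example task.
from datasets import load_dataset

ds = load_dataset("glue", "sst2")   # downloads and caches the data
print(ds["train"][0])               # inspect one example
print(ds["train"].features)         # and the label/field schema
```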
00:35:46.359 --> 00:35:52.440
So, annotating data: um, when you
00:35:50.640 --> 00:35:54.599
annotate data you first need to decide
00:35:52.440 --> 00:35:57.599
how much to annotate, sample appropriate
00:35:54.599 --> 00:36:00.240
data, create annotation guidelines,
00:35:57.599 --> 00:36:03.160
uh either annotate yourself or hire and
00:36:00.240 --> 00:36:05.839
supervise annotators, and evaluate
00:36:03.160 --> 00:36:07.720
quality. So a very common question that a
00:36:05.839 --> 00:36:10.240
lot of people ask me is how much test
00:36:07.720 --> 00:36:12.800
data do you need
00:36:10.240 --> 00:36:14.800
and I'm going to talk about uh
00:36:12.800 --> 00:36:17.520
statistical significance tests in a
00:36:14.800 --> 00:36:19.520
second but um basically you need to have
00:36:17.520 --> 00:36:23.240
enough to have a statistically
00:36:19.520 --> 00:36:28.119
significant difference um between
00:36:23.240 --> 00:36:32.079
methods. And the way you do this — actually,
00:36:28.119 --> 00:36:32.079
sorry very quickly let me
00:36:33.240 --> 00:36:37.599
check: I rearranged my slides and I want
00:36:35.560 --> 00:36:40.359
to make sure that I didn't accidentally
00:36:37.599 --> 00:36:42.280
um I didn't accidentally remove the
00:36:40.359 --> 00:36:44.520
slides on statistical significance which
00:36:42.280 --> 00:36:44.520
would be
00:36:51.680 --> 00:36:57.880
a problem. Okay,
00:36:55.240 --> 00:36:59.200
um sorry hang on one second I just
00:36:57.880 --> 00:37:02.240
realized that I don't have the slides
00:36:59.200 --> 00:37:03.839
for statistical significance in this
00:37:02.240 --> 00:37:05.280
presentation so let me grab them from
00:37:03.839 --> 00:37:09.440
the
00:37:05.280 --> 00:37:09.440
last uh the last
00:37:10.520 --> 00:37:14.640
class. This is pretty
00:37:25.599 --> 00:37:28.599
important
00:37:33.160 --> 00:37:38.599
okay so yeah let me explain statistical
00:37:35.560 --> 00:37:40.319
significance here um so basically when
00:37:38.599 --> 00:37:43.319
we're doing statistical
00:37:40.319 --> 00:37:44.680
testing um let's say we have two models
00:37:43.319 --> 00:37:47.800
with similar
00:37:44.680 --> 00:37:50.160
accuracies and these models with similar
00:37:47.800 --> 00:37:52.240
accuracies let's say model one is a
00:37:50.160 --> 00:37:56.880
generative model model two is a
00:37:52.240 --> 00:37:58.520
discriminative model. And we say, uh, on data
00:37:56.880 --> 00:38:00.200
set one we have this result on data set
00:37:58.520 --> 00:38:02.480
two we have another result on data set
00:38:00.200 --> 00:38:04.720
three we have uh another
00:38:02.480 --> 00:38:06.440
result and so then the question is how
00:38:04.720 --> 00:38:09.480
can we tell if the differences are due
00:38:06.440 --> 00:38:13.839
to consistent trends that uh will hold
00:38:09.480 --> 00:38:16.119
on other data sets or um if they are
00:38:13.839 --> 00:38:18.480
kind of random noise due to the fact
00:38:16.119 --> 00:38:21.000
that, you know,
00:38:18.480 --> 00:38:24.200
data
00:38:21.000 --> 00:38:25.640
sets vary and models vary. Um, and so the way
00:38:24.200 --> 00:38:28.319
we do this is through statistical
00:38:25.640 --> 00:38:31.839
significance testing
00:38:28.319 --> 00:38:34.319
um so I'm going to cover this briefly in
00:38:31.839 --> 00:38:36.920
this class, but you can see Dror et
00:38:34.319 --> 00:38:38.640
al. for an overview, and also we're going
00:38:36.920 --> 00:38:41.520
to have a recitation on how to actually
00:38:38.640 --> 00:38:44.280
run statistical significance tests so um
00:38:41.520 --> 00:38:47.920
you can take a look at that
00:38:44.280 --> 00:38:51.680
there and so the basic idea is given a
00:38:47.920 --> 00:38:54.280
quantity, we test, um, certain measures of
00:38:51.680 --> 00:38:57.880
uncertainty with respect to the quantity
00:38:54.280 --> 00:38:59.960
so number one is a p value and the P
00:38:57.880 --> 00:39:02.240
value is what is the probability that a
00:38:59.960 --> 00:39:06.119
difference with another quantity is by
00:39:02.240 --> 00:39:08.359
chance and so a lower uh P value means
00:39:06.119 --> 00:39:11.839
a higher likelihood of a significant
00:39:08.359 --> 00:39:13.200
difference. Usually the threshold for
00:39:11.839 --> 00:39:16.520
saying that we have a significant
00:39:13.200 --> 00:39:20.280
difference is: there's a 5% chance,
00:39:16.520 --> 00:39:22.160
0.05, that this difference between the
00:39:20.280 --> 00:39:25.760
models was due to chance or like data
00:39:22.160 --> 00:39:28.520
sampling or things like that. Uh, so p
00:39:25.760 --> 00:39:30.880
less than 0.05 is kind of a threshold
00:39:28.520 --> 00:39:30.880
for
00:39:31.119 --> 00:39:35.680
significance another thing that we can
00:39:33.040 --> 00:39:38.720
measure is confidence intervals and the
00:39:35.680 --> 00:39:40.760
confidence interval is um what is the
00:39:38.720 --> 00:39:42.560
range under which we could expect
00:39:40.760 --> 00:39:44.760
another trial to fall and I'll talk
00:39:42.560 --> 00:39:47.359
about both of
00:39:44.760 --> 00:39:49.280
these um there's another concept called
00:39:47.359 --> 00:39:53.880
paired versus unpaired
00:39:49.280 --> 00:39:56.680
tests. An unpaired test — this
00:39:53.880 --> 00:39:59.480
means um we compare the means of a
00:39:56.680 --> 00:40:02.359
quantity on two unrelated
00:39:59.480 --> 00:40:04.040
groups so an example could be the test
00:40:02.359 --> 00:40:07.040
of the significance of a difference of
00:40:04.040 --> 00:40:09.160
accuracies of a model on two data sets
00:40:07.040 --> 00:40:12.400
so like let's say I have data set number
00:40:09.160 --> 00:40:16.440
one and data set number two what is the
00:40:12.400 --> 00:40:18.000
likelihood that the um there's actually
00:40:16.440 --> 00:40:20.839
a real difference in the data sets as
00:40:18.000 --> 00:40:23.400
opposed to just random uh random
00:40:20.839 --> 00:40:26.599
sampling errors between
00:40:23.400 --> 00:40:28.560
them. In contrast, a paired test compares the
00:40:26.599 --> 00:40:31.400
means of a quantity on one data set
00:40:28.560 --> 00:40:32.480
under two conditions and so an example
00:40:31.400 --> 00:40:33.760
of this could be testing the
00:40:32.480 --> 00:40:37.319
significance of a difference of
00:40:33.760 --> 00:40:39.640
accuracies of two models on one data set
00:40:37.319 --> 00:40:42.000
so this is a really important difference
00:40:39.640 --> 00:40:43.960
and the reason why it's a really
00:40:42.000 --> 00:40:45.520
important difference well number one
00:40:43.960 --> 00:40:49.119
we're most commonly interested in the
00:40:45.520 --> 00:40:51.839
latter. Number two, if we can make
00:40:49.119 --> 00:40:54.280
assumptions about
00:40:51.839 --> 00:40:56.079
the association of the points in the
00:40:54.280 --> 00:40:58.680
data set we're much much more likely to
00:40:56.079 --> 00:41:00.440
get a significant result because we can
00:40:58.680 --> 00:41:02.240
um we can look at the difference of the
00:41:00.440 --> 00:41:06.000
models on individual data points as
00:41:02.240 --> 00:41:10.400
opposed to, um, looking
00:41:06.000 --> 00:41:10.400
at just the difference in the
00:41:10.520 --> 00:41:16.839
means so one example of a statistical
00:41:13.760 --> 00:41:18.280
significance test is a bootstrap test
00:41:16.839 --> 00:41:19.760
and the bootstrap test is really
00:41:18.280 --> 00:41:21.680
convenient because you can implement it
00:41:19.760 --> 00:41:25.160
for any evaluation metric that you want
00:41:21.680 --> 00:41:26.880
to be using and so in NLP we can use
00:41:25.160 --> 00:41:29.560
lots of different evaluation metrics; we
00:41:26.880 --> 00:41:31.119
can use an evaluation metric like um
00:41:29.560 --> 00:41:34.160
accuracy but we can also use an
00:41:31.119 --> 00:41:37.400
evaluation metric like F-measure for
00:41:34.160 --> 00:41:40.560
classification, or a BLEU score or
00:41:37.400 --> 00:41:43.599
character F-score or word error rate or
00:41:40.560 --> 00:41:48.440
something like that for um for various
00:41:43.599 --> 00:41:50.720
tasks. And this is applicable to any
00:41:48.440 --> 00:41:54.000
metric you want to use uh any quantity
00:41:50.720 --> 00:41:57.319
you want to measure also so the basic
00:41:54.000 --> 00:41:59.079
idea of a bootstrap test is a method
00:41:57.319 --> 00:42:02.520
that can measure P values and confidence
00:41:59.079 --> 00:42:06.040
intervals by resampling data and so the
00:42:02.520 --> 00:42:08.480
way you do this is you sample subsets
00:42:06.040 --> 00:42:11.960
from your dev or test set with
00:42:08.480 --> 00:42:14.720
replacement so you might sample 10,000
00:42:11.960 --> 00:42:19.599
times and you measure accuracy on these
00:42:14.720 --> 00:42:22.520
many subsets, and then you
00:42:19.599 --> 00:42:25.640
look at all of the accuracies
00:42:22.520 --> 00:42:27.680
that you got on these subsampled data
00:42:25.640 --> 00:42:31.079
sets and then you take the middle
00:42:27.680 --> 00:42:32.640
percentile range, like 2.5 to 97.5, and
00:42:31.079 --> 00:42:34.960
you can treat that as a confidence
00:42:32.640 --> 00:42:37.640
interval the 95% confidence interval
00:42:34.960 --> 00:42:40.720
about where you're like 95% certain that
00:42:37.640 --> 00:42:40.720
your results will fall.
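As a rough sketch, that bootstrap confidence interval might look like this in code, assuming hypothetical gold and pred label lists and accuracy as the metric:

```python
# A minimal bootstrap confidence-interval sketch; `gold` and `pred`
# are hypothetical parallel lists of labels, and any metric works.
import random

def bootstrap_ci(gold, pred, n_samples=10000, alpha=0.05):
    n = len(gold)
    accs = []
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]  # resample with replacement
        accs.append(sum(gold[i] == pred[i] for i in idx) / n)
    accs.sort()
    lower = accs[int((alpha / 2) * n_samples)]
    upper = accs[int((1 - alpha / 2) * n_samples) - 1]
    return lower, upper  # e.g. the 2.5th and 97.5th percentiles
```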
00:42:40.880 --> 00:42:48.240
Another thing that you can do is
00:42:45.119 --> 00:42:50.040
you can do a paired test and what the
00:42:48.240 --> 00:42:51.200
paired test does is it measures the
00:42:50.040 --> 00:42:53.359
number of
00:42:51.200 --> 00:42:55.839
wins, um —
00:42:53.359 --> 00:42:57.720
and you measure the percentage of
00:42:55.839 --> 00:43:00.920
wins, and this is the confidence that a
00:42:57.720 --> 00:43:03.280
gain in accuracy is not by chance um and
00:43:00.920 --> 00:43:05.920
so this could be one minus the P value
00:43:03.280 --> 00:43:07.960
of the paired test so this is easy to
00:43:05.920 --> 00:43:09.960
implement and applicable to any evaluation
00:43:07.960 --> 00:43:13.480
measure but somewhat biased on small
00:43:09.960 --> 00:43:17.240
data sets. Um, maybe I can give a
00:43:13.480 --> 00:43:19.920
more concrete example so let's say we
00:43:17.240 --> 00:43:27.520
have a classification data set. What you
00:43:19.920 --> 00:43:30.400
can do is, um — let's say we have five examples,
00:43:27.520 --> 00:43:36.960
call them
00:43:30.400 --> 00:43:39.559
x1, x2, x3, x4,
00:43:36.960 --> 00:43:44.520
x5. So this is our classification
00:43:39.559 --> 00:43:47.440
data set, and, um, we have system
00:43:44.520 --> 00:43:52.000
one and system
00:43:47.440 --> 00:43:53.760
two, and we mark whether each system got each example right or
00:43:52.000 --> 00:43:56.599
wrong —
00:43:53.760 --> 00:44:00.440
say system one gets three right and system two gets four
00:43:56.599 --> 00:44:03.040
right, or something like this. And so
00:44:00.440 --> 00:44:07.079
what we do is we randomly sample a sub-
00:44:03.040 --> 00:44:08.760
data set, um, and let's say this is like
00:44:07.079 --> 00:44:10.440
x3,
00:44:08.760 --> 00:44:13.599
x2,
00:44:10.440 --> 00:44:17.599
x4, x1,
00:44:13.599 --> 00:44:20.440
x2. And so this is our sub-data set. Uh,
00:44:17.599 --> 00:44:20.440
what we do
00:44:20.640 --> 00:44:28.920
is, um — writing each as (system one, system two) — so x3 would be
00:44:23.520 --> 00:44:34.559
(0, 1), x2 would be (1, 1), x4 would be (1, 0),
00:44:28.920 --> 00:44:39.079
x1 would be (1, 1), and
00:44:34.559 --> 00:44:42.319
then, uh, the repeated x2 would be (1, 1). And so the
00:44:39.079 --> 00:44:45.319
overall accuracy on the full data set
00:44:42.319 --> 00:44:45.319
is
00:44:45.480 --> 00:44:50.240
60% and
00:44:47.440 --> 00:44:51.880
80%. So if we didn't do any statistical
00:44:50.240 --> 00:44:55.400
significance test we might say oh system
00:44:51.880 --> 00:44:57.680
2 is better obviously um but if we do
00:44:55.400 --> 00:45:01.079
the significance test this is one sample
00:44:57.680 --> 00:45:03.119
from the bootstrap test in
00:45:01.079 --> 00:45:07.040
here
00:45:03.119 --> 00:45:09.079
now we get like 80% and 80% and it's
00:45:07.040 --> 00:45:11.079
like okay actually maybe in some cases
00:45:09.079 --> 00:45:13.480
these systems are equally good, maybe
00:45:11.079 --> 00:45:16.079
there's a tie or if we sampled another
00:45:13.480 --> 00:45:19.079
one uh let's say we
00:45:16.079 --> 00:45:19.079
sampled
00:45:19.359 --> 00:45:27.319
uh
00:45:20.960 --> 00:45:30.680
x4, x1, x2, x4, x1,
00:45:27.319 --> 00:45:36.160
um — then system one would get
00:45:30.680 --> 00:45:37.559
1, 1, 1, 1, 1 and system two would get 0, 1, 1, 0, 1, and this
00:45:36.160 --> 00:45:40.440
would be
00:45:37.559 --> 00:45:42.559
100% and this would be
00:45:40.440 --> 00:45:44.960
60% and
00:45:42.559 --> 00:45:47.000
so in some cases depending on how we
00:45:44.960 --> 00:45:48.440
sample actually system one wins and so
00:45:47.000 --> 00:45:51.440
you count the number of times that
00:45:48.440 --> 00:45:52.880
system two wins based on
00:45:51.440 --> 00:45:54.280
these subsamples, you count the number
00:45:52.880 --> 00:45:56.400
of times that system one wins and you
00:45:54.280 --> 00:45:59.000
count the number of times you get a tie
00:45:56.400 --> 00:46:00.920
and only in the case where system two or
00:45:59.000 --> 00:46:03.680
like the better system wins more than
00:46:00.920 --> 00:46:06.280
95% of the time you say that there's a
00:46:03.680 --> 00:46:08.599
significant difference between these. Or,
00:46:06.280 --> 00:46:10.720
alternatively you could also look at the
00:46:08.599 --> 00:46:15.960
confidence intervals by saying okay I
00:46:10.720 --> 00:46:19.000
sampled, um — like, 95% of the time, uh,
00:46:15.960 --> 00:46:20.920
the accuracy of system one is uh like
00:46:19.000 --> 00:46:23.640
80% or lower and so that would give you
00:46:20.920 --> 00:46:23.640
the upper limit of the confidence interval.
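Here's a minimal sketch of that win-counting paired bootstrap, again with hypothetical gold, sys1, and sys2 label lists:

```python
# A minimal paired-bootstrap sketch: count how often each system
# wins on resampled test sets; inputs are hypothetical label lists.
import random

def paired_bootstrap(gold, sys1, sys2, n_samples=10000):
    n = len(gold)
    wins1 = wins2 = ties = 0
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]  # sample with replacement
        acc1 = sum(gold[i] == sys1[i] for i in idx) / n
        acc2 = sum(gold[i] == sys2[i] for i in idx) / n
        if acc1 > acc2:
            wins1 += 1
        elif acc2 > acc1:
            wins2 += 1
        else:
            ties += 1
    # If the better system wins on more than 95% of the samples,
    # we can call the difference significant at p < 0.05.
    return wins1 / n_samples, wins2 / n_samples, ties / n_samples
```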
00:46:23.760 --> 00:46:29.599
So yeah, sorry, this is a very
00:46:27.480 --> 00:46:31.760
uh very quick overview of this but the
00:46:29.599 --> 00:46:34.240
reason why this is useful is let's say
00:46:31.760 --> 00:46:36.160
you create a very small data set if you
00:46:34.240 --> 00:46:38.400
create a very small data set this is
00:46:36.160 --> 00:46:39.880
going to — it's going to
00:46:38.400 --> 00:46:41.319
be very hard to get a statistically
00:46:39.880 --> 00:46:44.319
significant result on this data set
00:46:41.319 --> 00:46:47.200
because it's tiny right and you know
00:46:44.319 --> 00:46:50.640
quite frequently you're going to be
00:46:47.200 --> 00:46:53.400
sampling um you're going to be sampling
00:46:50.640 --> 00:46:55.400
data sets like this
00:46:53.400 --> 00:46:56.640
where model one wins; quite frequently
00:46:55.400 --> 00:46:58.520
you're going to be sampling other data
00:46:56.640 --> 00:47:00.359
sets where model two wins, and basically you're
00:46:58.520 --> 00:47:02.920
not going to be able to say with
00:47:00.359 --> 00:47:04.480
confidence which model is better because
00:47:02.920 --> 00:47:06.359
you just don't have enough data to say
00:47:04.480 --> 00:47:07.880
that but as you make your data set
00:47:06.359 --> 00:47:11.119
bigger and bigger it becomes easier and
00:47:07.880 --> 00:47:14.240
easier to get a significant result and
00:47:11.119 --> 00:47:17.400
so uh because you're more sure that you
00:47:14.240 --> 00:47:20.960
didn't just randomly pick data that
00:47:17.400 --> 00:47:25.400
model two is better at
00:47:20.960 --> 00:47:28.440
uh so um there's also other varieties
00:47:25.400 --> 00:47:31.240
of tests; there's things like t-tests for
00:47:28.440 --> 00:47:34.720
unpaired outputs and paired t-
00:47:31.240 --> 00:47:38.079
tests for paired outputs. Those work when
00:47:34.720 --> 00:47:40.440
your, um, outputs are additive, so they work
00:47:38.079 --> 00:47:43.599
for accuracy because the accuracy is
00:47:40.440 --> 00:47:46.440
just — you add all the ones
00:47:43.599 --> 00:47:48.680
and then divide by, um, the number of
00:47:46.440 --> 00:47:50.960
instances and that gives you an accuracy
00:47:48.680 --> 00:47:57.880
that doesn't work for something like
00:47:50.960 --> 00:48:03.599
F-measure, um, because F-measure is, um, 2 ×
00:47:57.880 --> 00:48:07.319
precision × recall / (precision +
00:48:03.599 --> 00:48:08.040
recall), um — and precision and recall, uh,
00:48:07.319 --> 00:48:10.640
you
00:48:08.040 --> 00:48:12.920
can — like, a t-test works for those, but
00:48:10.640 --> 00:48:15.160
there's a non-additive component of F-
00:48:12.920 --> 00:48:16.680
measure, so you can't calculate
00:48:15.160 --> 00:48:19.280
statistically significant differences in
00:48:16.680 --> 00:48:21.079
F-measure using a t-test. In that case
00:48:19.280 --> 00:48:23.000
you basically have to use a
00:48:21.079 --> 00:48:24.920
bootstrap method like this in order to
00:48:23.000 --> 00:48:29.040
get it to work or you need to do some
00:48:24.920 --> 00:48:29.040
really complex math, so I just use the bootstrap.
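For the additive case, here's a sketch with SciPy; the 0/1 correctness arrays are made-up illustrative values:

```python
# A minimal sketch of unpaired vs. paired t-tests on per-example
# 0/1 correctness; the arrays are made-up illustrative values.
from scipy import stats

sys1 = [1, 1, 0, 1, 0, 1, 1, 0]
sys2 = [1, 1, 1, 0, 1, 1, 1, 1]

t_u, p_u = stats.ttest_ind(sys1, sys2)  # unpaired: two unrelated groups
t_p, p_p = stats.ttest_rel(sys1, sys2)  # paired: same examples, two systems
print(p_u, p_p)  # the paired test is usually more powerful
```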
00:48:29.760 --> 00:48:33.920
Cool. Um, are there any questions
00:48:32.680 --> 00:48:35.520
about this? I guess we'll have a code
00:48:33.920 --> 00:48:37.680
example in the recitation so you can go
00:48:35.520 --> 00:48:39.599
in and take a look at that there's also
00:48:37.680 --> 00:48:42.599
tons of code examples
00:48:39.599 --> 00:48:42.599
online
00:48:42.960 --> 00:48:49.440
um is that
00:48:45.720 --> 00:48:52.400
okay? Okay, sounds good. Um, so now let me
00:48:49.440 --> 00:48:54.599
uh let me go back to the actual slides
00:48:52.400 --> 00:48:57.400
for
00:48:54.599 --> 00:49:00.559
today. And given those, uh, the
00:48:57.400 --> 00:49:04.119
results about statistical significance, um,
00:49:00.559 --> 00:49:06.040
how can we estimate how much testing
00:49:04.119 --> 00:49:07.920
data is enough and there's a method
00:49:06.040 --> 00:49:11.079
called Power analysis that allows you to
00:49:07.920 --> 00:49:13.359
do this and basically the idea of power
00:49:11.079 --> 00:49:16.680
analysis is that you make an assumption
00:49:13.359 --> 00:49:18.880
about the effect size between settings
00:49:16.680 --> 00:49:20.680
um for example the expected accuracy
00:49:18.880 --> 00:49:23.480
difference between tested
00:49:20.680 --> 00:49:26.480
models and given the effect size a
00:49:23.480 --> 00:49:28.880
significance
00:49:26.480 --> 00:49:30.839
threshold, you can determine how much
00:49:28.880 --> 00:49:32.680
data is necessary to get a significant
00:49:30.839 --> 00:49:36.680
effect in most
00:49:32.680 --> 00:49:39.319
cases. And so to give an example
00:49:36.680 --> 00:49:41.559
again let's say we're talking about the
00:49:39.319 --> 00:49:45.880
accuracy let's say we have a baseline
00:49:41.559 --> 00:49:49.079
model, um,
00:49:45.880 --> 00:49:52.280
and then we also have our
00:49:49.079 --> 00:49:54.000
uh, proposed model, and we know kind of from
00:49:52.280 --> 00:49:55.599
experience that the Baseline model is
00:49:54.000 --> 00:49:58.400
probably going to get around 90%
00:49:55.599 --> 00:50:00.559
accuracy — we know by, like, eyeballing
00:49:58.400 --> 00:50:06.240
the data or something like
00:50:00.559 --> 00:50:09.599
that. And then we think, um, we think
00:50:06.240 --> 00:50:13.799
our model is going to get 93%
00:50:09.599 --> 00:50:17.160
accuracy, uh, and we want a
00:50:13.799 --> 00:50:19.440
significance threshold of p
00:50:17.160 --> 00:50:22.319
less than
00:50:19.440 --> 00:50:26.000
0.05. Given these
00:50:22.319 --> 00:50:30.559
two quantities we can basically go in
00:50:26.000 --> 00:50:33.720
and say, okay, now we need, uh,
00:50:30.559 --> 00:50:36.200
500 test examples in order to say with
00:50:33.720 --> 00:50:38.920
confidence that we will be able
00:50:36.200 --> 00:50:40.599
to, um,
00:50:38.920 --> 00:50:42.640
distinguish between two models with 90
00:50:40.599 --> 00:50:44.400
and 93%
00:50:42.640 --> 00:50:48.240
accuracy
00:50:44.400 --> 00:50:51.079
and I can show the algorithm
00:50:48.240 --> 00:50:51.079
that they have in this
00:50:54.440 --> 00:50:57.440
paper
00:51:01.760 --> 00:51:04.960
but basically the way this
00:51:13.040 --> 00:51:19.720
works um is you sample a data set um
00:51:17.799 --> 00:51:22.960
compute the effect of interest on the
00:51:19.720 --> 00:51:25.880
sample, compute the p-value, and then
00:51:22.960 --> 00:51:29.319
you can calculate the power uh
00:51:25.880 --> 00:51:31.520
by basically um checking the number of
00:51:29.319 --> 00:51:34.480
times that the P value is less than your
00:51:31.520 --> 00:51:36.319
threshold um multiplied by uh the fact
00:51:34.480 --> 00:51:38.920
that the sign is in a particular
00:51:36.319 --> 00:51:41.200
direction and by doing this you can
00:51:38.920 --> 00:51:43.280
essentially, um,
00:51:41.200 --> 00:51:46.200
calculate how much data you would need
00:51:43.280 --> 00:51:48.319
or sorry you can calculate the uh the
00:51:46.200 --> 00:51:50.319
statistical power and then you can do
00:51:48.319 --> 00:51:52.000
this for various sizes of data set so
00:51:50.319 --> 00:51:53.559
you can gradually increase the size of
00:51:52.000 --> 00:51:57.160
the data set or decrease the size of the
00:51:53.559 --> 00:51:59.040
data set and that allows you to figure
00:51:57.160 --> 00:52:02.200
out how big your data set needs to be in
00:51:59.040 --> 00:52:04.640
order to get a statistically significant
00:52:02.200 --> 00:52:08.839
effect on the data set.
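As a rough simulation-based sketch — not the exact algorithm from the paper — assuming the hypothetical 90% baseline vs. 93% proposed accuracies from the example:

```python
# A minimal power-analysis-by-simulation sketch, assuming the 90%
# baseline vs. 93% proposed accuracies from the example above; the
# paper's actual algorithm differs in details.
import random
from scipy import stats

def estimate_power(n, acc1=0.90, acc2=0.93, n_sims=1000, alpha=0.05):
    significant = 0
    for _ in range(n_sims):
        # Simulate per-example 0/1 correctness on a test set of size n
        s1 = [1 if random.random() < acc1 else 0 for _ in range(n)]
        s2 = [1 if random.random() < acc2 else 0 for _ in range(n)]
        t, p = stats.ttest_ind(s1, s2)
        # Count the run only if significant and in the expected direction
        if p < alpha and sum(s2) > sum(s1):
            significant += 1
    return significant / n_sims  # power = fraction of significant sims

# Grow n until the power is acceptable (0.8 is a common target)
for n in [100, 250, 500, 1000]:
    print(n, estimate_power(n))
```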
00:52:04.640 --> 00:52:10.720
And so, like, many people ask me
00:52:08.839 --> 00:52:12.599
the question like how big of a data set
00:52:10.720 --> 00:52:14.440
do we need to make this is basically the
00:52:12.599 --> 00:52:17.280
statistically like quote unquote correct
00:52:14.440 --> 00:52:19.520
answer for how you can do this and also
00:52:17.280 --> 00:52:20.440
uh for assignment two we're going to ask
00:52:19.520 --> 00:52:24.559
you to
00:52:20.440 --> 00:52:26.720
justify uh your choice of creation of a
00:52:24.559 --> 00:52:30.359
data set of particular size for testing
00:52:26.720 --> 00:52:31.799
based on this. So, um, pay attention
00:52:30.359 --> 00:52:34.720
and please look at the references here
00:52:31.799 --> 00:52:38.760
and you should be able to
00:52:34.720 --> 00:52:41.280
do that. Cool. Um, any
00:52:38.760 --> 00:52:43.119
questions? I didn't go, like, really
00:52:41.280 --> 00:52:44.319
deeply into the formulas here; you'll
00:52:43.119 --> 00:52:45.720
probably have to look them up in
00:52:44.319 --> 00:52:48.119
the paper but hopefully that gives you
00:52:45.720 --> 00:52:51.799
the general
00:52:48.119 --> 00:52:52.680
idea. Okay, next: um, how much training data
00:52:51.799 --> 00:52:55.599
do I
00:52:52.680 --> 00:52:58.160
need so in general more is usually
00:52:55.599 --> 00:53:00.760
better if you're fine tuning a model um
00:52:58.160 --> 00:53:02.880
so I can't tell you like you don't need
00:53:00.760 --> 00:53:05.480
to make more data because
00:53:02.880 --> 00:53:06.280
probably you do if you're not happy with
00:53:05.480 --> 00:53:10.799
your
00:53:06.280 --> 00:53:12.599
performance um but recently you can get
00:53:10.799 --> 00:53:14.680
very reasonable performance with few
00:53:12.599 --> 00:53:17.319
shot or zero shot or pre-trained models
00:53:14.680 --> 00:53:19.760
and prompting and because of this in
00:53:17.319 --> 00:53:21.240
some cases maybe the answer is zero
00:53:19.760 --> 00:53:22.960
maybe you don't need any training data
00:53:21.240 --> 00:53:26.559
and you could just use a zero-shot pre-trained
00:53:22.960 --> 00:53:29.240
model. So, um, you need to choose, like,
00:53:26.559 --> 00:53:31.319
what your accuracy threshold is um you
00:53:29.240 --> 00:53:32.720
need to decide whether you want to be
00:53:31.319 --> 00:53:34.480
fine-tuning a model to improve
00:53:32.720 --> 00:53:36.319
performance or doing other things like
00:53:34.480 --> 00:53:39.119
prompt engineering or other stuff like
00:53:36.319 --> 00:53:41.520
that so basically there's no uh correct
00:53:39.119 --> 00:53:45.440
answer to this
00:53:41.520 --> 00:53:47.359
um one thing to be aware of is uh
00:53:45.440 --> 00:53:51.440
sometimes if you select data
00:53:47.359 --> 00:53:52.880
intelligently you can uh improve more
00:53:51.440 --> 00:53:54.359
quickly with something like Active
00:53:52.880 --> 00:53:56.520
Learning and active learning chooses
00:53:54.359 --> 00:54:00.000
representative and difficult data that
00:53:56.520 --> 00:54:02.559
you can um be
00:54:00.000 --> 00:54:04.839
using so when you sample data for fine
00:54:02.559 --> 00:54:07.440
tuning uh what you want to be doing is
00:54:04.839 --> 00:54:08.839
you want to be sampling data that has
00:54:07.440 --> 00:54:10.040
good coverage of the domains that you
00:54:08.839 --> 00:54:12.760
want to
00:54:10.040 --> 00:54:15.079
cover um you also want to be covering
00:54:12.760 --> 00:54:18.599
for example language uh languages or
00:54:15.079 --> 00:54:23.200
language varieties or demographics of
00:54:18.599 --> 00:54:25.520
users um and another thing is uh when
00:54:23.200 --> 00:54:29.440
you're doing this it's often good idea
00:54:25.520 --> 00:54:31.400
to document how you're creating data and
00:54:29.440 --> 00:54:34.079
uh there's this paper data statements
00:54:31.400 --> 00:54:35.520
for NLP by Bender and Friedman, uh, which
00:54:34.079 --> 00:54:37.440
suggests a bunch of different things
00:54:35.520 --> 00:54:39.520
that you can use to document your data
00:54:37.440 --> 00:54:41.520
collection and like why and how you
00:54:39.520 --> 00:54:44.960
collected the data and this gives you
00:54:41.520 --> 00:54:47.200
some pieces of information that uh could
00:54:44.960 --> 00:54:49.359
be useful this has been incorporated
00:54:47.200 --> 00:54:51.880
into the Hugging Face Datasets dataset
00:54:49.359 --> 00:54:53.520
cards, and now Hugging Face Datasets
00:54:51.880 --> 00:54:56.040
actually has lots of metadata that's
00:54:53.520 --> 00:54:58.359
kind of inspired by uh this although
00:54:56.040 --> 00:55:01.799
it's been adjusted for more kind of like
00:54:58.359 --> 00:55:01.799
practical industry use
00:55:02.119 --> 00:55:06.480
cases another thing is annotation
00:55:04.400 --> 00:55:09.160
guidelines so if you're asking humans to
00:55:06.480 --> 00:55:11.319
do anything um or for that matter if
00:55:09.160 --> 00:55:16.119
you're asking GPT-4 to generate data for
00:55:11.319 --> 00:55:21.480
you, um, you need to tell people or GPT-4, in,
00:55:16.119 --> 00:55:24.440
um, you know, a clear manner, how
00:55:21.480 --> 00:55:28.119
it should be creating data.
00:55:24.440 --> 00:55:29.920
So the first thing
00:55:28.119 --> 00:55:32.960
that you can do is
00:55:29.920 --> 00:55:34.240
you can try to annotate yourself um and
00:55:32.960 --> 00:55:37.039
if you actually try to solve the
00:55:34.240 --> 00:55:38.440
annotation task yourself then you'll
00:55:37.039 --> 00:55:41.160
realize that there's lots of corner
00:55:38.440 --> 00:55:43.799
cases that are hard to decide on um
00:55:41.160 --> 00:55:45.440
and other things like that. So, like, if you're
00:55:43.799 --> 00:55:47.520
annotating sentiment what is the
00:55:45.440 --> 00:55:49.799
boundary between very positive and
00:55:47.520 --> 00:55:50.880
positive um if you're annotating
00:55:49.799 --> 00:55:54.000
question
00:55:50.880 --> 00:55:56.280
answering um like for
00:55:54.000 --> 00:55:57.720
example do you want to answer in a whole
00:55:56.280 --> 00:56:01.119
sentence or do you want to answer with
00:55:57.720 --> 00:56:03.760
only a short concise answer like these
00:56:01.119 --> 00:56:05.400
sorts of things you'll need to tell uh
00:56:03.760 --> 00:56:07.839
either an annotator or a model that
00:56:05.400 --> 00:56:10.960
you're asking to do annotation to give
00:56:07.839 --> 00:56:12.760
some examples from Penn Treebank, uh,
00:56:10.960 --> 00:56:15.440
part of speech annotation guidelines
00:56:12.760 --> 00:56:18.079
this is very old it's from 1990 but
00:56:15.440 --> 00:56:21.200
basically they have uh like adverb this
00:56:18.079 --> 00:56:25.559
category includes most words that end in
00:56:21.200 --> 00:56:30.680
-ly, as well as degree words like
00:56:25.559 --> 00:56:33.079
"quite", um, etc. It has other things for
00:56:30.680 --> 00:56:36.200
adverbs and then it has like confusing
00:56:33.079 --> 00:56:38.039
parts of speech with examples uh one
00:56:36.200 --> 00:56:39.640
thing that I found like really really
00:56:38.039 --> 00:56:42.640
interesting is like if you look at these
00:56:39.640 --> 00:56:46.160
annotation guidelines it's like uh
00:56:42.640 --> 00:56:48.319
prompts so if you look at this it's like
00:56:46.160 --> 00:56:49.880
these are your prompts, your zero-
00:56:48.319 --> 00:56:52.359
shot prompts, and these are few-shot
00:56:49.880 --> 00:56:54.480
examples so like even for humans we were
00:56:52.359 --> 00:56:56.520
doing few-shot prompting with examples
00:56:54.480 --> 00:57:00.880
when they were doing annotations so uh
00:56:56.520 --> 00:57:03.119
it's kind of fun. Um, hiring
00:57:00.880 --> 00:57:05.000
annotators so like let's say you want to
00:57:03.119 --> 00:57:08.319
actually build a data set and and pay
00:57:05.000 --> 00:57:10.359
people to do things um for smaller scale
00:57:08.319 --> 00:57:13.359
projects uh very often you can just
00:57:10.359 --> 00:57:15.240
annotate yourself and that's fine um
00:57:13.359 --> 00:57:16.720
there's a fixed set of overhead to get
00:57:15.240 --> 00:57:19.480
other people to do something and train
00:57:16.720 --> 00:57:23.200
them and stuff so you know I often just
00:57:19.480 --> 00:57:25.079
annotate things myself um you can also
00:57:23.200 --> 00:57:26.520
find friends or other students or
00:57:25.079 --> 00:57:29.559
co-workers who can help you out with
00:57:26.520 --> 00:57:33.359
things. You can bribe them with, uh,
00:57:29.559 --> 00:57:37.280
pizza or whatever favorite uh food or
00:57:33.359 --> 00:57:39.400
beverage that they like um then for
00:57:37.280 --> 00:57:42.440
finding people online there's a lot of
00:57:39.400 --> 00:57:45.160
things that you can do um I very often
00:57:42.440 --> 00:57:46.000
hire Freelancers uh through platforms
00:57:45.160 --> 00:57:50.400
such as
00:57:46.000 --> 00:57:51.799
Upwork. Um, this is good and bad; the bad
00:57:50.400 --> 00:57:53.760
thing about it is that this is often
00:57:51.799 --> 00:57:56.280
more expensive the good thing about it
00:57:53.760 --> 00:57:58.640
is um you get people who have pride in
00:57:56.280 --> 00:58:00.440
their work and accountability and
00:57:58.640 --> 00:58:02.440
motivation because like if they get
00:58:00.440 --> 00:58:04.480
rated poorly they it's going to be
00:58:02.440 --> 00:58:06.720
harder to get work and often they're
00:58:04.480 --> 00:58:08.160
Professionals in their fields so like if
00:58:06.720 --> 00:58:12.079
you want to get a code generation data
00:58:08.160 --> 00:58:15.880
set you can hire good um Freelancers
00:58:12.079 --> 00:58:18.520
I've actually heard rumors that uh
00:58:15.880 --> 00:58:20.119
people like OpenAI, they hire people and
00:58:18.520 --> 00:58:21.599
pay them $60 an hour to do the
00:58:20.119 --> 00:58:23.599
annotation because they really want
00:58:21.599 --> 00:58:27.119
people who are very professional and do
00:58:23.599 --> 00:58:30.000
a very good job um I don't pay that
00:58:27.119 --> 00:58:34.240
much but I do pay well more than minimum
00:58:30.000 --> 00:58:35.880
wage, and, uh, you know, I pay a
00:58:34.240 --> 00:58:38.039
competitive price for these freelancing
00:58:35.880 --> 00:58:40.319
sites when I get people to do
00:58:38.039 --> 00:58:42.000
that. Another thing you can do is crowd
00:58:40.319 --> 00:58:44.400
workers, and this could be through
00:58:42.000 --> 00:58:45.960
sites like Mechanical Turk or Prolific
00:58:44.400 --> 00:58:48.960
or other things like this so that's
00:58:45.960 --> 00:58:51.680
another option um here quality control
00:58:48.960 --> 00:58:55.240
becomes very difficult and um we're
00:58:51.680 --> 00:58:57.799
getting to the point where number one
00:58:55.240 --> 00:58:59.400
um, if you aren't very careful with
00:58:57.799 --> 00:59:01.920
quality control language models actually
00:58:59.400 --> 00:59:03.400
do a similarly good job as crowd workers
00:59:01.920 --> 00:59:06.960
and number two all the crowd workers are
00:59:03.400 --> 00:59:10.000
using GPT-4 anyway. So, um, you do need to be
00:59:06.960 --> 00:59:12.319
careful about that um one thing that I
00:59:10.000 --> 00:59:14.039
often do is I hire for a small job first
00:59:12.319 --> 00:59:16.880
to gauge timeliness and accuracy and
00:59:14.039 --> 00:59:18.920
then hire for a bigger job so um just
00:59:16.880 --> 00:59:21.720
hire people to do you know 50 examples
00:59:18.920 --> 00:59:23.319
or 20 examples first and then uh you
00:59:21.720 --> 00:59:26.240
know if they do a good job with it then
00:59:23.319 --> 00:59:27.960
I hire them to do a couple thousand
00:59:26.240 --> 00:59:30.799
examples
00:59:27.960 --> 00:59:34.720
um one thing to note is that if you're
00:59:30.799 --> 00:59:36.599
doing research in a university um you
00:59:34.720 --> 00:59:39.400
might need to get approval from an
00:59:36.599 --> 00:59:41.480
Institutional review board and this is
00:59:39.400 --> 00:59:43.000
in particular the case for subjective
00:59:41.480 --> 00:59:45.880
task so this is when you're asking
00:59:43.000 --> 00:59:47.440
people how do you feel about this output
00:59:45.880 --> 00:59:50.039
um do you think this output is
00:59:47.440 --> 00:59:51.720
representative of your beliefs or things
00:59:50.039 --> 00:59:54.760
like that where it doesn't have a
00:59:51.720 --> 00:59:56.319
correct answer a yes and no answer if
00:59:54.760 --> 00:59:58.680
it's something that does have a
00:59:56.319 --> 01:00:03.640
yes and no answer which is like how many
00:59:58.680 --> 01:00:05.640
verbs are in this sentence or um how do
01:00:03.640 --> 01:00:07.280
you translate the sentence into another
01:00:05.640 --> 01:00:09.880
language or something like that then you
01:00:07.280 --> 01:00:12.039
don't need IRB approval. Um, but if
01:00:09.880 --> 01:00:15.000
it's borderline you might want to check
01:00:12.039 --> 01:00:17.280
anyway. Um, so that's something to be
01:00:15.000 --> 01:00:17.280
aware
01:00:18.640 --> 01:00:26.240
of next is assessing annotation quality
01:00:22.640 --> 01:00:27.680
so um one of my favorite ways to do this
01:00:26.240 --> 01:00:30.039
is to assess human
01:00:27.680 --> 01:00:32.240
performance. And the way we do this is:
01:00:30.039 --> 01:00:34.119
you double annotate some data and then
01:00:32.240 --> 01:00:37.160
you measure whatever metric you want to
01:00:34.119 --> 01:00:39.200
measure for machines just with respect
01:00:37.160 --> 01:00:41.039
to human agreement and so for
01:00:39.200 --> 01:00:43.839
translation, if you're using BLEU score
01:00:41.039 --> 01:00:45.440
or chrF score or something like this, then
01:00:43.839 --> 01:00:47.079
you would want to use this for
01:00:45.440 --> 01:00:50.440
assessment of the
01:00:47.079 --> 01:00:56.039
outputs um the advantage of doing this
01:00:50.440 --> 01:00:58.760
is that you get a human quality score
01:00:56.039 --> 01:01:00.960
and the human quality score is directly
01:00:58.760 --> 01:01:02.480
comparable to the machine quality score
01:01:00.960 --> 01:01:04.599
and so you can say well humans got the
01:01:02.480 --> 01:01:07.280
task right 90% of the time and GPT-4 got
01:01:04.599 --> 01:01:11.280
the task right 16% of the time so humans
01:01:07.280 --> 01:01:13.760
are way better than GPT-4. Or, um, you know,
01:01:11.280 --> 01:01:16.559
humans got it right 80% of the time and
01:01:13.760 --> 01:01:19.599
GPT-4 got it right 78% of the time, so this
01:01:16.559 --> 01:01:21.000
task is you know this task or maybe not
01:01:19.599 --> 01:01:23.640
necessarily the task but at least the
01:01:21.000 --> 01:01:25.079
data set has more or less, uh, been solved by
01:01:23.640 --> 01:01:26.640
the strongest language models so now we
01:01:25.079 --> 01:01:28.920
need to catch up with open-source models,
01:01:26.640 --> 01:01:31.680
smaller or weaker ones, or something like
01:01:28.920 --> 01:01:32.880
that um there are things that you can
01:01:31.680 --> 01:01:34.880
measure you can measure things like
01:01:32.880 --> 01:01:36.880
Kappa statistics this is particularly
01:01:34.880 --> 01:01:39.799
useful for um kind of just
01:01:36.880 --> 01:01:41.799
classification tasks and what this tells
01:01:39.799 --> 01:01:43.880
you is this tells you how much higher is
01:01:41.799 --> 01:01:48.000
the agreement that you would get than if
01:01:43.880 --> 01:01:49.920
you got it by chance and so for example
01:01:48.000 --> 01:01:53.279
let's say you're classifying
01:01:49.920 --> 01:01:54.760
spam uh or you're classifying you know
01:01:53.279 --> 01:01:59.520
toxic content or something
01:01:54.760 --> 01:02:03.400
like that. And 99% of the
01:01:59.520 --> 01:02:07.480
time the content is not toxic and 1% of
01:02:03.400 --> 01:02:11.799
the time the content is toxic and then
01:02:07.480 --> 01:02:14.079
you hire some annotators and you get 98%
01:02:11.799 --> 01:02:16.279
accuracy that's kind of bad right you
01:02:14.079 --> 01:02:19.200
know if you just said not toxic all the
01:02:16.279 --> 01:02:20.880
time you would get 99%. Um, what the kappa
01:02:19.200 --> 01:02:24.599
statistic does is it accounts for this
01:02:20.880 --> 01:02:26.559
basically it says, um, how much better
01:02:24.599 --> 01:02:28.440
this is than chance, and if you just had
01:02:26.559 --> 01:02:30.720
chance accuracy you would get zero if
01:02:28.440 --> 01:02:33.200
you had perfect accuracy you would get
01:02:30.720 --> 01:02:34.920
one and you normally get something in
01:02:33.200 --> 01:02:37.359
between
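Here's a minimal sketch of Cohen's kappa for two annotators, say on the toxic / not-toxic labels from the example:

```python
# A minimal Cohen's kappa sketch for two annotators' label lists,
# e.g. the toxic / not-toxic example above.
def cohens_kappa(ann1, ann2):
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Chance agreement from each annotator's label frequencies
    labels = set(ann1) | set(ann2)
    expected = sum((ann1.count(l) / n) * (ann2.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)  # 0 = chance, 1 = perfect
```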
01:02:34.920 --> 01:02:39.200
Um, so if it's low you may need to
01:02:37.359 --> 01:02:41.319
revisit guidelines, hire better
01:02:39.200 --> 01:02:44.480
annotators or rethink whether the task
01:02:41.319 --> 01:02:46.559
is possible at all or not um and you
01:02:44.480 --> 01:02:48.599
know some tasks are just impossible like
01:02:46.559 --> 01:02:51.599
if um I'm
01:02:48.599 --> 01:02:51.599
asking
01:02:52.240 --> 01:02:58.160
— well, or, um, they're very hard for
01:02:55.960 --> 01:03:00.039
annotators so like to give one example
01:02:58.160 --> 01:03:04.039
um annotators are really horrible at
01:03:00.039 --> 01:03:06.200
identifying fake reviews um and so like
01:03:04.039 --> 01:03:07.640
even if you hire annotators to
01:03:06.200 --> 01:03:09.279
identify fake reviews, they're bad at
01:03:07.640 --> 01:03:11.359
doing that so you're not likely to get
01:03:09.279 --> 01:03:14.680
high
01:03:11.359 --> 01:03:17.920
agreement um cool I'm going to skip over
01:03:14.680 --> 01:03:23.279
this part because I already talked about
01:03:17.920 --> 01:03:26.640
it. Okay, um, any questions
01:03:23.279 --> 01:03:29.079
here okay sounds good uh next I'd like
01:03:26.640 --> 01:03:30.640
to get into running experiments so
01:03:29.079 --> 01:03:34.359
running experiments one thing I find
01:03:30.640 --> 01:03:37.200
very helpful is workflow automation um
01:03:34.359 --> 01:03:40.079
and basically what I like to do is I
01:03:37.200 --> 01:03:41.839
like to modularize each step of an
01:03:40.079 --> 01:03:44.119
experiment into a
01:03:41.839 --> 01:03:47.240
directory
01:03:44.119 --> 01:03:51.039
um where uh you have like a directory as
01:03:47.240 --> 01:03:53.279
input and a directory as output
01:03:51.039 --> 01:03:54.559
um this is my personal way of doing
01:03:53.279 --> 01:03:56.799
things there are other ways of doing
01:03:54.559 --> 01:03:58.640
things that are also good but um very
01:03:56.799 --> 01:04:00.760
often like just to give an example
01:03:58.640 --> 01:04:04.680
you'll need to do —
01:04:00.760 --> 01:04:07.480
uh, you'll need to do
01:04:04.680 --> 01:04:09.119
data selection so you'll need to select
01:04:07.480 --> 01:04:11.039
which data sets you're training on
01:04:09.119 --> 01:04:13.520
you'll need to do pre-processing of them
01:04:11.039 --> 01:04:16.160
with a tokenization model and then you
01:04:13.520 --> 01:04:18.359
will need to run an
01:04:16.160 --> 01:04:20.000
experiment and then you'll need to do
01:04:18.359 --> 01:04:23.240
evaluation and those are all kind of
01:04:20.000 --> 01:04:25.079
like discrete steps, where the data
01:04:23.240 --> 01:04:27.760
selection takes in your big pool of data
01:04:25.079 --> 01:04:31.200
and outputs a data set that's been
01:04:27.760 --> 01:04:33.680
selected the tokenization
01:04:31.200 --> 01:04:35.480
will uh take a tokenizer model maybe
01:04:33.680 --> 01:04:38.599
train a tokenizer model, and split it
01:04:35.480 --> 01:04:40.400
up into different tokens um the training
01:04:38.599 --> 01:04:42.079
will train it might output a whole bunch
01:04:40.400 --> 01:04:44.720
of checkpoints and the evaluation will
01:04:42.079 --> 01:04:47.039
evaluate one checkpoint and so those are
01:04:44.720 --> 01:04:48.400
all kind of modular and you can actually
01:04:47.039 --> 01:04:50.039
think of each one of them as like a
01:04:48.400 --> 01:04:52.760
function in your Python
01:04:50.039 --> 01:04:56.400
program
01:04:52.760 --> 01:04:58.160
and you kind of want to avoid rerunning
01:04:56.400 --> 01:05:00.200
data set selection and tokenization
01:04:58.160 --> 01:05:01.720
every time you do a new evaluation right
01:05:00.200 --> 01:05:03.359
like that would be kind of silly you
01:05:01.720 --> 01:05:04.680
definitely want to avoid rerunning
01:05:03.359 --> 01:05:09.119
training every time you evaluate a
01:05:04.680 --> 01:05:11.200
checkpoint so um what I do is I often
01:05:09.119 --> 01:05:12.799
name directories by parameters where
01:05:11.200 --> 01:05:16.079
it's like
01:05:12.799 --> 01:05:18.640
transformer-layers-8-nodes-512-
01:05:16.079 --> 01:05:21.279
dropout-0.5-label-smooth-
01:05:18.640 --> 01:05:25.880
0.02, um, and so I have all the parameters
01:05:21.279 --> 01:05:26.880
in there and then
01:05:25.880 --> 01:05:29.680
the
01:05:26.880 --> 01:05:31.960
training process will output a whole
01:05:29.680 --> 01:05:33.960
bunch of checkpoints in here and then
01:05:31.960 --> 01:05:35.520
for my evaluation I have evaluation
01:05:33.960 --> 01:05:38.119
metrics and I have the checkpoint I'm
01:05:35.520 --> 01:05:41.680
evaluating so uh when I do
01:05:38.119 --> 01:05:45.119
evaluation I will then append checkpoint
01:05:41.680 --> 01:05:47.279
6, uh, metric F-measure, or something like
01:05:45.119 --> 01:05:49.079
that and so I keep around all of the
01:05:47.279 --> 01:05:52.520
previous information and just append
01:05:49.079 --> 01:05:54.599
append, append, append, and so, um, this
01:05:52.520 --> 01:05:56.680
allows you to avoid rerunning things
01:05:54.599 --> 01:05:58.359
because you can uh just have your python
01:05:56.680 --> 01:06:00.520
code to check if the directory already
01:05:58.359 --> 01:06:01.839
exists and already has been completed
01:06:00.520 --> 01:06:03.559
and then read in the result if it
01:06:01.839 --> 01:06:06.319
already has been or run the experiment
01:06:03.559 --> 01:06:08.079
if it hasn't been. So, um, you can write
01:06:06.319 --> 01:06:10.279
this in pure Python by
01:06:08.079 --> 01:06:11.599
just adding like some if statements at
01:06:10.279 --> 01:06:14.079
the beginning of the function and some,
01:06:11.599 --> 01:06:16.799
um, like, output
01:06:14.079 --> 01:06:19.440
statements at the end of the function.
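Here's a minimal sketch of that directory-based caching idea in pure Python; the step function, parameters, and "DONE" marker file are all hypothetical:

```python
# A minimal sketch of directory-based experiment caching; the step
# function, parameters, and "DONE" marker file are all hypothetical.
import json, os

def run_step(step_fn, base_dir, **params):
    # Name the output directory by the step's parameters
    name = "-".join(f"{k}-{v}" for k, v in sorted(params.items()))
    out_dir = os.path.join(base_dir, name)
    done = os.path.join(out_dir, "DONE")
    if os.path.exists(done):
        return out_dir  # already completed: skip rerunning
    os.makedirs(out_dir, exist_ok=True)
    step_fn(out_dir, **params)  # run the step, writing into out_dir
    with open(done, "w") as f:
        json.dump(params, f)
    return out_dir

# e.g. train_dir = run_step(train, "exp", layers=8, nodes=512, dropout=0.5)
```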
01:06:16.799 --> 01:06:22.000
Um, there are more sophisticated
01:06:19.440 --> 01:06:24.200
methods; so there's, like, a toolkit called
01:06:22.000 --> 01:06:28.079
ducttape that was originally created
01:06:24.200 --> 01:06:31.760
here at CMU and um my uh student Patrick
01:06:28.079 --> 01:06:33.079
is maintaining now, at this link. Um, so you
01:06:31.760 --> 01:06:34.960
can either just roll something on your
01:06:33.079 --> 01:06:36.880
own or look into one of these more
01:06:34.960 --> 01:06:39.359
complex workflow automation things
01:06:36.880 --> 01:06:39.359
to save you
01:06:39.400 --> 01:06:47.279
time. Okay, evaluation. Um, so I talked
01:06:43.400 --> 01:06:49.000
about this to some extent um uh so yeah
01:06:47.279 --> 01:06:51.000
I'll just skip over
01:06:49.000 --> 01:06:54.559
that
01:06:51.000 --> 01:06:57.200
and result reporting um
01:06:54.559 --> 01:06:59.160
for papers one thing that I really like
01:06:57.200 --> 01:07:01.960
to do is plan the result section in
01:06:59.160 --> 01:07:07.039
advance or at least imagine the result
01:07:01.960 --> 01:07:07.039
section in advance um
01:07:07.200 --> 01:07:11.640
So what I think of is, like, what
01:07:09.559 --> 01:07:14.520
experimental claims would I like to make
01:07:11.640 --> 01:07:15.760
how am I going to support them by the
01:07:14.520 --> 01:07:19.039
experiments that I'm going to show in a
01:07:15.760 --> 01:07:21.160
result section um and this identifies
01:07:19.039 --> 01:07:24.640
unjustified experimental claims like so
01:07:21.160 --> 01:07:27.119
let's say you're saying
01:07:24.640 --> 01:07:29.000
something like uh this method improves
01:07:27.119 --> 01:07:30.440
across a wide variety of languages and
01:07:29.000 --> 01:07:32.520
then you realize that you only have one
01:07:30.440 --> 01:07:34.720
language, uh, in your
01:07:32.520 --> 01:07:37.960
experiment section that's a problem
01:07:34.720 --> 01:07:40.640
obviously. Um, also, I really enjoy, like,
01:07:37.960 --> 01:07:43.599
assuming that all of my experiments are
01:07:40.640 --> 01:07:46.520
going really really well um and you know
01:07:43.599 --> 01:07:49.440
none of my runs crash with
01:07:46.520 --> 01:07:52.000
CUDA out-of-memory errors, and, you know,
01:07:49.440 --> 01:07:55.319
all of the experiments come out as
01:07:52.000 --> 01:07:57.960
expected and if you do something like
01:07:55.319 --> 01:07:59.960
that you can be ambitious and say okay
01:07:57.960 --> 01:08:03.119
how can I make this research project
01:07:59.960 --> 01:08:04.960
really impactful like um and another
01:08:03.119 --> 01:08:08.240
thing that I like to ask my students or
01:08:04.960 --> 01:08:11.200
people I'm working with recently is like
01:08:08.240 --> 01:08:13.440
who are like three people in the world
01:08:11.200 --> 01:08:17.440
who will be really excited by your paper
01:08:13.440 --> 01:08:19.040
like name actual people um and where do
01:08:17.440 --> 01:08:20.839
those people work what do they care
01:08:19.040 --> 01:08:22.359
about what sort of evidence would you
01:08:20.839 --> 01:08:24.560
need in your paper to make them really
01:08:22.359 --> 01:08:26.560
excited about your paper or something
01:08:24.560 --> 01:08:29.679
like that and very often people will
01:08:26.560 --> 01:08:31.480
reply to me like oh I think people in um
01:08:29.679 --> 01:08:32.799
in Google will be very excited about
01:08:31.480 --> 01:08:34.440
this and they're going to use it and I'm
01:08:32.799 --> 01:08:38.719
like well you're writing all your code
01:08:34.440 --> 01:08:39.839
in pytorch and they don't use pytorch so
01:08:38.719 --> 01:08:41.000
how are you going to convince them to
01:08:39.839 --> 01:08:42.640
use your work? They're going to have to
01:08:41.000 --> 01:08:46.120
reimplement it in JAX, and that's going
01:08:42.640 --> 01:08:47.520
to suck for them so like uh you know
01:08:46.120 --> 01:08:49.040
what are the barriers for them actually
01:08:47.520 --> 01:08:50.799
using it and then maybe the people are
01:08:49.040 --> 01:08:52.159
like oh well maybe actually I don't want
01:08:50.799 --> 01:08:54.199
people at Google to use this and I can
01:08:52.159 --> 01:08:56.560
think of somebody else and it's like
01:08:54.199 --> 01:08:58.920
well great so now release it open source
01:08:56.560 --> 01:09:00.520
and people will have it open source
01:08:58.920 --> 01:09:01.920
so you can kind of think about like the
01:09:00.520 --> 01:09:03.719
types of evidence that you would need to
01:09:01.920 --> 01:09:05.440
convince people to use your work and
01:09:03.719 --> 01:09:08.040
that can result in your work being more
01:09:05.440 --> 01:09:09.319
impactful in the long run and if you
01:09:08.040 --> 01:09:10.400
think about it from the very beginning
01:09:09.319 --> 01:09:11.839
that also helps you plan your
01:09:10.400 --> 01:09:13.520
experiments like what sort of evidence
01:09:11.839 --> 01:09:15.359
is necessary for people to get excited
01:09:13.520 --> 01:09:18.440
about it in this
01:09:15.359 --> 01:09:20.120
space. Um, another thing that I like to do
01:09:18.440 --> 01:09:24.000
with result reporting is result
01:09:20.120 --> 01:09:26.880
generation scripts um so uh I often
01:09:24.000 --> 01:09:29.159
generate paper LaTeX directly from log
01:09:26.880 --> 01:09:31.799
files uh there's two reasons why I do
01:09:29.159 --> 01:09:34.480
this um number one it's efficient and
01:09:31.799 --> 01:09:36.719
minimizes errors number two it allows
01:09:34.480 --> 01:09:39.080
you to preemptively plan experiments
01:09:36.719 --> 01:09:41.120
that you want to run so like for example
01:09:39.080 --> 01:09:44.440
if we go back to the, um, the
01:09:41.120 --> 01:09:46.199
directory that I talked about before um
01:09:44.440 --> 01:09:50.359
I can write
01:09:46.199 --> 01:09:52.719
a script that reads in 20 evaluation
01:09:50.359 --> 01:09:54.800
results from 20 different directories
01:09:52.719 --> 01:09:56.920
and fills in a table and if that
01:09:54.800 --> 01:09:58.600
directory doesn't exist yet it will put
01:09:56.920 --> 01:10:01.239
like TBD or something like that in the
01:09:58.600 --> 01:10:03.960
table so I can very quickly see okay
01:10:01.239 --> 01:10:05.880
these things are TBD um oh this thing
01:10:03.960 --> 01:10:07.480
has been TBD for a very long time — has my
01:10:05.880 --> 01:10:09.400
experiment crashed? Do I need to go back
01:10:07.480 --> 01:10:12.239
and like restart my experiment or
01:10:09.400 --> 01:10:13.719
something like that so um it's an
01:10:12.239 --> 01:10:17.280
efficient way and when you finish the
01:10:13.719 --> 01:10:17.280
last TBD it's a very good feeling, also.
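A minimal sketch of that kind of table-generation script; the directory layout and result file name are hypothetical:

```python
# A minimal sketch of generating LaTeX table rows from experiment
# directories; the directory layout and file name are hypothetical.
import os

def latex_rows(exp_dirs, result_file="accuracy.txt"):
    rows = []
    for d in exp_dirs:
        path = os.path.join(d, result_file)
        if os.path.exists(path):
            with open(path) as f:
                cell = f.read().strip()
        else:
            cell = "TBD"  # not finished yet (or crashed?)
        rows.append(f"{os.path.basename(d)} & {cell} \\\\")
    return "\n".join(rows)

# print(latex_rows(["exp/layers-8", "exp/layers-16"]))
```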
01:10:18.280 --> 01:10:23.719
Cool. Um, next: computational
01:10:21.760 --> 01:10:26.159
resources actually I kind of already
01:10:23.719 --> 01:10:28.600
talked about this a little bit um but on
01:10:26.159 --> 01:10:30.280
Amazon web services we have uh class
01:10:28.600 --> 01:10:32.080
credits that we're going to be issuing
01:10:30.280 --> 01:10:34.880
as soon as uh the assignment one
01:10:32.080 --> 01:10:37.560
deadline is over um there's also Google
01:10:34.880 --> 01:10:39.440
cloud and collab um you can get
01:10:37.560 --> 01:10:44.000
commodity gpus and other things like
01:10:39.440 --> 01:10:47.800
that so um you can also consider
01:10:44.000 --> 01:10:53.159
that. Okay, let me get into data analysis
01:10:47.800 --> 01:10:55.440
um so I'm going to cover this a lot more
01:10:53.159 --> 01:10:58.480
in an interpretation lecture and this is
01:10:55.440 --> 01:10:59.520
going to be in three classes so this is
01:10:58.480 --> 01:11:02.239
going to
01:10:59.520 --> 01:11:07.000
be the
01:11:02.239 --> 01:11:09.719
Tuesday after next um so uh very
01:11:07.000 --> 01:11:11.000
important things though uh look at data
01:11:09.719 --> 01:11:13.679
um you'll want to do quantitative
01:11:11.000 --> 01:11:16.239
analysis and qualitative analysis um you
01:11:13.679 --> 01:11:17.440
can also look at model explanations so
01:11:16.239 --> 01:11:18.719
I'm going to cover how to do all of
01:11:17.440 --> 01:11:21.520
these things in that lecture I don't
01:11:18.719 --> 01:11:24.440
have enough time to do it
01:11:21.520 --> 01:11:26.960
today. Then the final thing is reporting
01:11:24.440 --> 01:11:30.840
conclusions um this is also too much for
01:11:26.960 --> 01:11:34.000
a single class but um I very highly
01:11:30.840 --> 01:11:35.920
recommend this lecture um uh sorry these
01:11:34.000 --> 01:11:39.320
lecture slides they don't take that long
01:11:35.920 --> 01:11:40.880
to look through they're maybe um 20
01:11:39.320 --> 01:11:42.880
minutes or so but they're very very
01:11:40.880 --> 01:11:45.480
helpful um they talk about how to
01:11:42.880 --> 01:11:48.199
structure a paper uh other things like
01:11:45.480 --> 01:11:51.440
this and if you follow this advice for
01:11:48.199 --> 01:11:53.239
writing your reports for like three and
01:11:51.440 --> 01:11:54.960
four — assignment three and assignment
01:11:53.239 --> 01:11:57.800
four, even assignment two — I think you
01:11:54.960 --> 01:11:59.400
can't really go wrong uh actually three
01:11:57.800 --> 01:12:00.840
and four is probably better uh than
01:11:59.400 --> 01:12:03.320
assignment two assignment two can be
01:12:00.840 --> 01:12:05.360
more descriptive so definitely take a
01:12:03.320 --> 01:12:08.600
look at that if
01:12:05.360 --> 01:12:08.600
you can. Cool.