|
1 |
|
00:00:00,719 --> 00:00:07,480 |
|
so to get started I want to show an |
|
|
|
2 |
|
00:00:04,120 --> 00:00:10,320 |
|
example of the scientific method I took |
|
|
|
3 |
|
00:00:07,480 --> 00:00:12,920 |
|
this directly from Wikipedia but it's |
|
|
|
4 |
|
00:00:10,320 --> 00:00:15,320 |
|
actually uh pretty nice it's a pretty |
|
|
|
5 |
|
00:00:12,920 --> 00:00:17,480 |
|
nice and concise summary of what we |
|
|
|
6 |
|
00:00:15,320 --> 00:00:19,439 |
|
should do when we're coming up with new |
|
|
|
7 |
|
00:00:17,480 --> 00:00:22,160 |
|
uh kind of research |
|
|
|
8 |
|
00:00:19,439 --> 00:00:24,039 |
|
projects and we start with an |
|
|
|
9 |
|
00:00:22,160 --> 00:00:26,840 |
|
observation or question we do research |
|
|
|
10 |
|
00:00:24,039 --> 00:00:28,599 |
|
of the topic area we form a hypothesis |
|
|
|
11 |
|
00:00:26,840 --> 00:00:31,439 |
|
we test it with an experiment analyze |
|
|
|
12 |
|
00:00:28,599 --> 00:00:33,600 |
|
data and report conclusions
|
|
|
13 |
|
00:00:31,439 --> 00:00:35,640 |
|
and even if we're doing kind of an |
|
|
|
14 |
|
00:00:33,600 --> 00:00:37,480 |
|
engineering based project still this |
|
|
|
15 |
|
00:00:35,640 --> 00:00:42,079 |
|
thinking of the stuff that we're doing |
|
|
|
16 |
|
00:00:37,480 --> 00:00:44,399 |
|
in a framework like this can help you a |
|
|
|
17 |
|
00:00:42,079 --> 00:00:46,079 |
|
lot so uh the first thing I'd like to |
|
|
|
18 |
|
00:00:44,399 --> 00:00:49,120 |
|
talk about is identifying good research |
|
|
|
19 |
|
00:00:46,079 --> 00:00:51,800 |
|
directions and so I'm going to look at |
|
|
|
20 |
|
00:00:49,120 --> 00:00:53,640 |
|
that from the observation question |
|
|
|
21 |
|
00:00:51,800 --> 00:00:56,320 |
|
perspective |
|
|
|
22 |
|
00:00:53,640 --> 00:00:58,480 |
|
here so if we think about why we do |
|
|
|
23 |
|
00:00:56,320 --> 00:01:01,160 |
|
research uh particularly why we do |
|
|
|
24 |
|
00:00:58,480 --> 00:01:04,199 |
|
research on natural language processing
|
|
|
25 |
|
00:01:01,160 --> 00:01:07,159 |
|
um there are a couple of reasons why the
|
|
|
26 |
|
00:01:04,199 --> 00:01:09,439 |
|
first is application driven research and |
|
|
|
27 |
|
00:01:07,159 --> 00:01:13,159 |
|
usually this is I would like to make a |
|
|
|
28 |
|
00:01:09,439 --> 00:01:15,040 |
|
useful system or make one work better so |
|
|
|
29 |
|
00:01:13,159 --> 00:01:18,479 |
|
uh you know this is probably the great |
|
|
|
30 |
|
00:01:15,040 --> 00:01:20,280 |
|
majority of NLP research then separately |
|
|
|
31 |
|
00:01:18,479 --> 00:01:21,960 |
|
from that there's curiosity driven |
|
|
|
32 |
|
00:01:20,280 --> 00:01:24,560 |
|
research which is like I would like to |
|
|
|
33 |
|
00:01:21,960 --> 00:01:27,360 |
|
know more about language or the world |
|
|
|
34 |
|
00:01:24,560 --> 00:01:29,159 |
|
viewed through language and so this |
|
|
|
35 |
|
00:01:27,360 --> 00:01:31,840 |
|
doesn't necessarily have to be |
|
|
|
36 |
|
00:01:29,159 --> 00:01:31,840 |
|
immediately |
|
|
|
37 |
|
00:01:32,000 --> 00:01:37,280 |
|
like a downstream application that users |
|
|
|
38 |
|
00:01:35,399 --> 00:01:39,159 |
|
are using will immediately get better |
|
|
|
39 |
|
00:01:37,280 --> 00:01:40,439 |
|
it's more like we have a burning |
|
|
|
40 |
|
00:01:39,159 --> 00:01:43,159 |
|
question that we would like to answer |
|
|
|
41 |
|
00:01:40,439 --> 00:01:47,399 |
|
and we want to answer |
|
|
|
42 |
|
00:01:43,159 --> 00:01:48,640 |
|
it so NLP encompasses both uh sometimes |
|
|
|
43 |
|
00:01:47,399 --> 00:01:50,479 |
|
if you read a paper you'll have |
|
|
|
44 |
|
00:01:48,640 --> 00:01:54,360 |
|
something that's doing both uh |
|
|
|
45 |
|
00:01:50,479 --> 00:01:56,439 |
|
especially like analyzing the internals |
|
|
|
46 |
|
00:01:54,360 --> 00:01:58,079 |
|
or training dynamics of a neural
|
|
|
47 |
|
00:01:56,439 --> 00:01:59,920 |
|
network to answer a curiosity-driven |
|
|
|
48 |
|
00:01:58,079 --> 00:02:02,439 |
|
question and then applying that to come |
|
|
|
49 |
|
00:01:59,920 --> 00:02:04,840 |
|
up with a better method that makes things work
|
|
|
50 |
|
00:02:02,439 --> 00:02:06,560 |
|
better I would like to say though that
|
|
|
51 |
|
00:02:04,840 --> 00:02:09,119 |
|
it's kind of rare that there's a paper |
|
|
|
52 |
|
00:02:06,560 --> 00:02:10,879 |
|
that does both of them really well uh |
|
|
|
53 |
|
00:02:09,119 --> 00:02:13,160 |
|
and so usually one of them is kind of |
|
|
|
54 |
|
00:02:10,879 --> 00:02:14,599 |
|
the main focus and I think you can be |
|
|
|
55 |
|
00:02:13,160 --> 00:02:17,680 |
|
well served by choosing which one is |
|
|
|
56 |
|
00:02:14,599 --> 00:02:20,560 |
|
your main focus and then kind of uh the |
|
|
|
57 |
|
00:02:17,680 --> 00:02:23,560 |
|
other might come as an additional uh
|
|
|
58 |
|
00:02:20,560 --> 00:02:23,560 |
|
bonus on top of |
|
|
|
59 |
|
00:02:23,920 --> 00:02:28,760 |
|
that so here are a few examples of |
|
|
|
60 |
|
00:02:27,160 --> 00:02:32,800 |
|
application driven |
|
|
|
61 |
|
00:02:28,760 --> 00:02:35,239 |
|
research so for example Pang et al. uh
|
|
|
62 |
|
00:02:32,800 --> 00:02:37,840 |
|
they proposed the task of sentiment |
|
|
|
63 |
|
00:02:35,239 --> 00:02:39,879 |
|
analysis um so actually there was a |
|
|
|
64 |
|
00:02:37,840 --> 00:02:41,879 |
|
paper 22 years ago that proposed the |
|
|
|
65 |
|
00:02:39,879 --> 00:02:44,879 |
|
task of sentiment analysis it might seem |
|
|
|
66 |
|
00:02:41,879 --> 00:02:46,760 |
|
very you know normal nowadays but uh |
|
|
|
67 |
|
00:02:44,879 --> 00:02:49,519 |
|
there was a paper that proposed it back |
|
|
|
68 |
|
00:02:46,760 --> 00:02:52,840 |
|
then and they proposed sentiment |
|
|
|
69 |
|
00:02:49,519 --> 00:02:54,200 |
|
analysis because um labeling articles |
|
|
|
70 |
|
00:02:52,840 --> 00:02:57,480 |
|
with their sentiment would provide |
|
|
|
71 |
|
00:02:54,200 --> 00:02:59,760 |
|
succinct summaries to the readers um so |
|
|
|
72 |
|
00:02:57,480 --> 00:03:03,319 |
|
they basically wanted to provide |
|
|
|
73 |
|
00:02:59,760 --> 00:03:03,319 |
|
information to readers and that would be |
|
|
|
74 |
|
00:03:03,400 --> 00:03:09,000 |
|
useful another paper by Reddy et al.
|
|
|
75 |
|
00:03:06,440 --> 00:03:11,519 |
|
2019 proposes a task of conversational |
|
|
|
76 |
|
00:03:09,000 --> 00:03:13,640 |
|
question answering uh because an |
|
|
|
77 |
|
00:03:11,519 --> 00:03:15,599 |
|
inability to build and maintain common |
|
|
|
78 |
|
00:03:13,640 --> 00:03:17,680 |
|
ground is part of the reason why virtual |
|
|
|
79 |
|
00:03:15,599 --> 00:03:20,159 |
|
assistants usually don't seem like
|
|
|
80 |
|
00:03:17,680 --> 00:03:22,040 |
|
competent conversational partners so
|
|
|
81 |
|
00:03:20,159 --> 00:03:24,519 |
|
when you're talking to your Alexa or |
|
|
|
82 |
|
00:03:22,040 --> 00:03:27,000 |
|
your Google uh home or something like |
|
|
|
83 |
|
00:03:24,519 --> 00:03:28,599 |
|
this you might ask it a question and |
|
|
|
84 |
|
00:03:27,000 --> 00:03:30,120 |
|
then after you asked it a question you |
|
|
|
85 |
|
00:03:28,599 --> 00:03:31,480 |
|
ask it another question but it doesn't |
|
|
|
86 |
|
00:03:30,120 --> 00:03:32,879 |
|
go back to the context that you had
|
|
|
87 |
|
00:03:31,480 --> 00:03:34,519 |
|
before and they wanted to solve this |
|
|
|
88 |
|
00:03:32,879 --> 00:03:36,040 |
|
problem so they proposed this data set |
|
|
|
89 |
|
00:03:34,519 --> 00:03:40,000 |
|
for |
|
|
|
90 |
|
00:03:36,040 --> 00:03:41,720 |
|
it um Gehrmann et al. propose a method for bottom
|
|
|
91 |
|
00:03:40,000 --> 00:03:43,159 |
|
up abstractive summarization because |
|
|
|
92 |
|
00:03:41,720 --> 00:03:44,760 |
|
neural network-based methods for |
|
|
|
93 |
|
00:03:43,159 --> 00:03:46,879 |
|
abstractive summarization produce |
|
|
|
94 |
|
00:03:44,760 --> 00:03:49,000 |
|
outputs that are fluent but perform |
|
|
|
95 |
|
00:03:46,879 --> 00:03:51,120 |
|
poorly at content selection so they had a
|
|
|
96 |
|
00:03:49,000 --> 00:03:53,000 |
|
problem they had a task already in mind |
|
|
|
97 |
|
00:03:51,120 --> 00:03:54,239 |
|
they weren't proposing a new task and |
|
|
|
98 |
|
00:03:53,000 --> 00:03:56,040 |
|
there was a problem with the
|
|
|
99 |
|
00:03:54,239 --> 00:03:58,760 |
|
existing system so they fixed |
|
|
|
100 |
|
00:03:56,040 --> 00:04:00,400 |
|
it and then Kudo and Richardson proposed |
|
|
|
101 |
|
00:03:58,760 --> 00:04:02,920 |
|
a method for unsupervised word
|
|
|
102 |
|
00:04:00,400 --> 00:04:04,799 |
|
segmentation namely SentencePiece uh
|
|
|
103 |
|
00:04:02,920 --> 00:04:06,439 |
|
because language dependent processing |
|
|
|
104 |
|
00:04:04,799 --> 00:04:08,920 |
|
makes it hard to train multilingual |
|
|
|
105 |
|
00:04:06,439 --> 00:04:10,360 |
|
models as we have to carefully manage |
|
|
|
106 |
|
00:04:08,920 --> 00:04:12,720 |
|
the configurations of pre- and |
|
|
|
107 |
|
00:04:10,360 --> 00:04:15,879 |
|
post-processors per language so they |
|
|
|
108 |
|
00:04:12,720 --> 00:04:17,519 |
|
tried to make things easier uh so like |
|
|
|
109 |
|
00:04:15,879 --> 00:04:19,600 |
|
you can see all of these things like the |
|
|
|
110 |
|
00:04:17,519 --> 00:04:21,919 |
|
first two are proposing new tasks to |
|
|
|
111 |
|
00:04:19,600 --> 00:04:23,880 |
|
solve and they're doing it from the |
|
|
|
112 |
|
00:04:21,919 --> 00:04:25,919 |
|
point of view of uh creating something |
|
|
|
113 |
|
00:04:23,880 --> 00:04:29,120 |
|
useful for users the second two are |
|
|
|
114 |
|
00:04:25,919 --> 00:04:30,440 |
|
proposing new methods the first one is |
|
|
|
115 |
|
00:04:29,120 --> 00:04:34,360 |
|
like improving |
|
|
|
116 |
|
00:04:30,440 --> 00:04:36,320 |
|
accuracy um so this is the most
|
|
|
117 |
|
00:04:34,360 --> 00:04:37,639 |
|
common most commonly people say I have a |
|
|
|
118 |
|
00:04:36,320 --> 00:04:39,120 |
|
task that I want to solve there's a
|
|
|
119 |
|
00:04:37,639 --> 00:04:41,280 |
|
problem with accuracy I want to improve |
|
|
|
120 |
|
00:04:39,120 --> 00:04:43,960 |
|
it but you can also improve other things |
|
|
|
121 |
|
00:04:41,280 --> 00:04:45,880 |
|
so you can improve like convenience or |
|
|
|
122 |
|
00:04:43,960 --> 00:04:47,320 |
|
uh you can improve efficiency or
|
|
|
123 |
|
00:04:45,880 --> 00:04:51,720 |
|
other things like that so all of those |
|
|
|
124 |
|
00:04:47,320 --> 00:04:51,720 |
|
are you know perfectly reasonable |
|
|
|
125 |
|
00:04:52,120 --> 00:04:57,320 |
|
things I also have some examples of |
|
|
|
126 |
|
00:04:54,639 --> 00:04:59,120 |
|
curiosity driven research these are |
|
|
|
127 |
|
00:04:57,320 --> 00:05:00,360 |
|
actually harder to find in the ACL |
|
|
|
128 |
|
00:04:59,120 --> 00:05:03,120 |
|
Anthology
|
|
|
129 |
|
00:05:00,360 --> 00:05:06,400 |
|
it's definitely the minority case but |
|
|
|
130 |
|
00:05:03,120 --> 00:05:09,160 |
|
they still do exist um so for example |
|
|
|
131 |
|
00:05:06,400 --> 00:05:10,960 |
|
Rashkin et al. 2017 asked what is the
|
|
|
132 |
|
00:05:09,160 --> 00:05:13,800 |
|
difference between the language of real |
|
|
|
133 |
|
00:05:10,960 --> 00:05:17,000 |
|
news with that of satire hoaxes and |
|
|
|
134 |
|
00:05:13,800 --> 00:05:18,800 |
|
propaganda so they were not attempting |
|
|
|
135 |
|
00:05:17,000 --> 00:05:21,039 |
|
to create a system for fake news |
|
|
|
136 |
|
00:05:18,800 --> 00:05:23,199 |
|
detection that was not their goal here |
|
|
|
137 |
|
00:05:21,039 --> 00:05:24,600 |
|
their goal was just to figure
|
|
|
138 |
|
00:05:23,199 --> 00:05:26,240 |
|
out what were the different linguistic |
|
|
|
139 |
|
00:05:24,600 --> 00:05:28,000 |
|
characteristics and they found that |
|
|
|
140 |
|
00:05:26,240 --> 00:05:29,720 |
|
scientifically interesting maybe |
|
|
|
141 |
|
00:05:28,000 --> 00:05:31,280 |
|
downstream that would be useful but that
|
|
|
142 |
|
00:05:29,720 --> 00:05:35,080 |
|
wasn't the point of their |
|
|
|
143 |
|
00:05:31,280 --> 00:05:36,960 |
|
paper another one uh Cotterell et al. ask
|
|
|
144 |
|
00:05:35,080 --> 00:05:38,960 |
|
are all languages equally hard to |
|
|
|
145 |
|
00:05:36,960 --> 00:05:41,000 |
|
language model and so basically they |
|
|
|
146 |
|
00:05:38,960 --> 00:05:42,440 |
|
wanted to know are all languages just |
|
|
|
147 |
|
00:05:41,000 --> 00:05:45,520 |
|
character strings and so language |
|
|
|
148 |
|
00:05:42,440 --> 00:05:47,479 |
|
modeling them is uh similarly easy or |
|
|
|
149 |
|
00:05:45,520 --> 00:05:49,120 |
|
are there certain characteristics of |
|
|
|
150 |
|
00:05:47,479 --> 00:05:51,080 |
|
language that make them easier or harder |
|
|
|
151 |
|
00:05:49,120 --> 00:05:54,000 |
|
to model with the current architectures |
|
|
|
152 |
|
00:05:51,080 --> 00:05:55,520 |
|
that we have um and so they didn't |
|
|
|
153 |
|
00:05:54,000 --> 00:05:57,039 |
|
propose a new architecture they didn't |
|
|
|
154 |
|
00:05:55,520 --> 00:06:00,479 |
|
propose to improve anything they just |
|
|
|
155 |
|
00:05:57,039 --> 00:06:02,400 |
|
proposed to examine this question |
|
|
|
156 |
|
00:06:00,479 --> 00:06:04,280 |
|
um and also Tenney et al. this is
|
|
|
157 |
|
00:06:02,400 --> 00:06:06,880 |
|
actually an extremely impactful work |
|
|
|
158 |
|
00:06:04,280 --> 00:06:09,319 |
|
downstream but uh they weren't improving
|
|
|
159 |
|
00:06:06,880 --> 00:06:11,520 |
|
anything they just quantified where
|
|
|
160 |
|
00:06:09,319 --> 00:06:14,440 |
|
specific types of linguistic information |
|
|
|
161 |
|
00:06:11,520 --> 00:06:16,720 |
|
are encoded in BERT so they found that
|
|
|
162 |
|
00:06:14,440 --> 00:06:18,840 |
|
for example syntax was encoded better in |
|
|
|
163 |
|
00:06:16,720 --> 00:06:20,560 |
|
the early layers semantics in the later |
|
|
|
164 |
|
00:06:18,840 --> 00:06:22,520 |
|
layers and then if you go further you |
|
|
|
165 |
|
00:06:20,560 --> 00:06:25,280 |
|
have other fine-grained things like
|
|
|
166 |
|
00:06:22,520 --> 00:06:27,599 |
|
pragmatics-style
|
|
|
167 |
|
00:06:25,280 --> 00:06:30,400 |
|
information so I think you can kind of
|
|
|
168 |
|
00:06:27,599 --> 00:06:32,120 |
|
see the difference between these two um |
|
|
|
169 |
|
00:06:30,400 --> 00:06:34,800 |
|
are there any questions |
|
|
|
170 |
|
00:06:32,120 --> 00:06:40,199 |
|
about |
|
|
|
171 |
|
00:06:34,800 --> 00:06:41,720 |
|
this no okay let's leave it at that so the next
|
|
|
172 |
|
00:06:40,199 --> 00:06:43,680 |
|
question which I think a lot of people |
|
|
|
173 |
|
00:06:41,720 --> 00:06:46,240 |
|
might be asking particularly with |
|
|
|
174 |
|
00:06:43,680 --> 00:06:47,720 |
|
respect to assignment 4 which requires |
|
|
|
175 |
|
00:06:46,240 --> 00:06:51,039 |
|
you to come up with something novel to |
|
|
|
176 |
|
00:06:47,720 --> 00:06:53,240 |
|
do is how do we uh get research |
|
|
|
177 |
|
00:06:51,039 --> 00:06:57,360 |
|
ideas |
|
|
|
178 |
|
00:06:53,240 --> 00:07:02,280 |
|
and the way we can do this is uh twofold |
|
|
|
179 |
|
00:06:57,360 --> 00:07:04,479 |
|
so um one is kind of we want to turn a |
|
|
|
180 |
|
00:07:02,280 --> 00:07:07,120 |
|
concrete understanding of existing |
|
|
|
181 |
|
00:07:04,479 --> 00:07:10,120 |
|
research's failings into a higher level |
|
|
|
182 |
|
00:07:07,120 --> 00:07:12,560 |
|
experimental question and the two ways |
|
|
|
183 |
|
00:07:10,120 --> 00:07:15,240 |
|
that I normally characterize doing this |
|
|
|
184 |
|
00:07:12,560 --> 00:07:19,319 |
|
are bottom up discovery of research |
|
|
|
185 |
|
00:07:15,240 --> 00:07:21,080 |
|
ideas um or the way I
|
|
|
186 |
|
00:07:19,319 --> 00:07:24,479 |
|
characterize this is bottom up discovery |
|
|
|
187 |
|
00:07:21,080 --> 00:07:27,000 |
|
of research ideas and this is a great |
|
|
|
188 |
|
00:07:24,479 --> 00:07:29,120 |
|
tool for making incremental progress on |
|
|
|
189 |
|
00:07:27,000 --> 00:07:32,039 |
|
existing systems on tasks that we really |
|
|
|
190 |
|
00:07:29,120 --> 00:07:35,400 |
|
care about or expanding the scope of a |
|
|
|
191 |
|
00:07:32,039 --> 00:07:37,680 |
|
task that we care about so uh some |
|
|
|
192 |
|
00:07:35,400 --> 00:07:41,879 |
|
examples of this would be like in |
|
|
|
193 |
|
00:07:37,680 --> 00:07:45,639 |
|
assignment number three you uh look |
|
|
|
194 |
|
00:07:41,879 --> 00:07:47,720 |
|
let's say you're looking at |
|
|
|
195 |
|
00:07:45,639 --> 00:07:50,159 |
|
um let's say you're looking at the |
|
|
|
196 |
|
00:07:47,720 --> 00:07:53,840 |
|
question answering performance |
|
|
|
197 |
|
00:07:50,159 --> 00:07:58,280 |
|
of multilingual models on
|
|
|
198 |
|
00:07:53,840 --> 00:08:01,479 |
|
different languages um and you for |
|
|
|
199 |
|
00:07:58,280 --> 00:08:03,159 |
|
assignment three you implement a couple |
|
|
|
200 |
|
00:08:01,479 --> 00:08:05,240 |
|
multilingual models on different |
|
|
|
201 |
|
00:08:03,159 --> 00:08:06,560 |
|
languages you run them you look at the |
|
|
|
202 |
|
00:08:05,240 --> 00:08:08,400 |
|
results and you identify that |
|
|
|
203 |
|
00:08:06,560 --> 00:08:10,080 |
|
multilingual models are particularly bad |
|
|
|
204 |
|
00:08:08,400 --> 00:08:12,919 |
|
at answering questions about named |
|
|
|
205 |
|
00:08:10,080 --> 00:08:14,680 |
|
entities and so now you have looked at |
|
|
|
206 |
|
00:08:12,919 --> 00:08:17,759 |
|
the output you have decided that that's |
|
|
|
207 |
|
00:08:14,680 --> 00:08:20,199 |
|
a big problem um you can go in and |
|
|
|
208 |
|
00:08:17,759 --> 00:08:22,080 |
|
improve it so this is a great tool for |
|
|
|
209 |
|
00:08:20,199 --> 00:08:23,720 |
|
incremental progress and like in fact |
|
|
|
210 |
|
00:08:22,080 --> 00:08:26,520 |
|
doing this really effectively has been |
|
|
|
211 |
|
00:08:23,720 --> 00:08:31,000 |
|
very effective in my own research career |
|
|
|
212 |
|
00:08:26,520 --> 00:08:34,680 |
|
uh I feel like I like to
|
|
|
213 |
|
00:08:31,000 --> 00:08:36,279 |
|
look at data I try to do that a lot and |
|
|
|
214 |
|
00:08:34,680 --> 00:08:38,440 |
|
by doing that I identify the most |
|
|
|
215 |
|
00:08:36,279 --> 00:08:40,200 |
|
frequent problems and because of that |
|
|
|
216 |
|
00:08:38,440 --> 00:08:42,039 |
|
when I fix those problems my accuracy |
|
|
|
217 |
|
00:08:40,200 --> 00:08:44,560 |
|
goes up a lot more than people who pick |
|
|
|
218 |
|
00:08:42,039 --> 00:08:46,880 |
|
the less good problems right and so if |
|
|
|
219 |
|
00:08:44,560 --> 00:08:49,440 |
|
we want our accuracy to go up uh I'm |
|
|
|
220 |
|
00:08:46,880 --> 00:08:51,360 |
|
more efficient at you know improving |
|
|
|
221 |
|
00:08:49,440 --> 00:08:53,240 |
|
things on the other hand there's |
|
|
|
222 |
|
00:08:51,360 --> 00:08:55,399 |
|
something uh from the opposite direction |
|
|
|
223 |
|
00:08:53,240 --> 00:08:57,080 |
|
which is moving from a higher level question
|
|
|
224 |
|
00:08:55,399 --> 00:08:57,800 |
|
to a lower level concrete testing of |
|
|
|
225 |
|
00:08:57,080 --> 00:09:00,120 |
|
that |
|
|
|
226 |
|
00:08:57,800 --> 00:09:01,760 |
|
question um so this could be top-down
|
|
|
227 |
|
00:09:00,120 --> 00:09:02,760 |
|
design this is top-down design of
|
|
|
228 |
|
00:09:01,760 --> 00:09:06,360 |
|
research |
|
|
|
229 |
|
00:09:02,760 --> 00:09:08,399 |
|
ideas this favors bigger ideas but these |
|
|
|
230 |
|
00:09:06,360 --> 00:09:10,240 |
|
ideas can be disconnected from reality |
|
|
|
231 |
|
00:09:08,399 --> 00:09:13,880 |
|
or they could be not solving the right |
|
|
|
232 |
|
00:09:10,240 --> 00:09:17,079 |
|
problems so the typical like very very |
|
|
|
233 |
|
00:09:13,880 --> 00:09:18,800 |
|
successful example of this is um neural |
|
|
|
234 |
|
00:09:17,079 --> 00:09:20,800 |
|
machine translation or something like |
|
|
|
235 |
|
00:09:18,800 --> 00:09:22,720 |
|
this neural machine translation's neural
|
|
|
236 |
|
00:09:20,800 --> 00:09:26,399 |
|
sequence-to-sequence
|
|
|
237 |
|
00:09:22,720 --> 00:09:30,040 |
|
models this came out of a few people |
|
|
|
238 |
|
00:09:26,399 --> 00:09:32,040 |
|
like Geoff Hinton and Yoshua Bengio
|
|
|
239 |
|
00:09:30,040 --> 00:09:33,480 |
|
believing for a very long time that |
|
|
|
240 |
|
00:09:32,040 --> 00:09:35,760 |
|
neural networks were the right way to |
|
|
|
241 |
|
00:09:33,480 --> 00:09:37,800 |
|
solve lots of problems uh despite the |
|
|
|
242 |
|
00:09:35,760 --> 00:09:39,640 |
|
fact that there wasn't like super |
|
|
|
243 |
|
00:09:37,800 --> 00:09:42,279 |
|
concrete evidence of that for a long |
|
|
|
244 |
|
00:09:39,640 --> 00:09:43,399 |
|
time and so they had this idea which was |
|
|
|
245 |
|
00:09:42,279 --> 00:09:47,399 |
|
like we should be doing things with |
|
|
|
246 |
|
00:09:43,399 --> 00:09:49,440 |
|
neural networks and uh they you know |
|
|
|
247 |
|
00:09:47,399 --> 00:09:50,720 |
|
they successfully executed that and now |
|
|
|
248 |
|
00:09:49,440 --> 00:09:52,200 |
|
everybody is doing things with neural |
|
|
|
249 |
|
00:09:50,720 --> 00:09:56,560 |
|
networks so they made a really huge |
|
|
|
250 |
|
00:09:52,200 --> 00:09:58,160 |
|
revolution in the research space um that |
|
|
|
251 |
|
00:09:56,560 --> 00:09:59,720 |
|
that's great that's a great example of a |
|
|
|
252 |
|
00:09:58,160 --> 00:10:02,839 |
|
successful top-down idea but the
|
|
|
253 |
|
00:09:59,720 --> 00:10:05,519 |
|
problem is uh for every example like |
|
|
|
254 |
|
00:10:02,839 --> 00:10:07,560 |
|
that there's a thousand uh top down |
|
|
|
255 |
|
00:10:05,519 --> 00:10:10,760 |
|
ideas in the graveyard of not being very |
|
|
|
256 |
|
00:10:07,560 --> 00:10:12,600 |
|
you know effective so I think um in
|
|
|
257 |
|
00:10:10,760 --> 00:10:14,519 |
|
order to do something like this you |
|
|
|
258 |
|
00:10:12,600 --> 00:10:16,200 |
|
better have a very strong conviction or |
|
|
|
259 |
|
00:10:14,519 --> 00:10:18,079 |
|
you better have maybe some initial |
|
|
|
260 |
|
00:10:16,200 --> 00:10:20,920 |
|
evidence or a very strong intuition |
|
|
|
261 |
|
00:10:18,079 --> 00:10:22,320 |
|
about why this might be a good idea and |
|
|
|
262 |
|
00:10:20,920 --> 00:10:25,240 |
|
uh you would be able to test that |
|
|
|
263 |
|
00:10:22,320 --> 00:10:27,240 |
|
intuition through intermediate steps uh |
|
|
|
264 |
|
00:10:25,240 --> 00:10:31,040 |
|
to demonstrate like through toy data
|
|
|
265 |
|
00:10:27,240 --> 00:10:31,040 |
|
or other stuff like that |
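As a concrete illustration of the bottom-up workflow just described (run a model, look at its outputs, and tally which kinds of examples it fails on most often), here is a minimal Python sketch. The prediction file name, its format, and the named-entity heuristic are hypothetical placeholders, not part of any particular assignment or dataset.

```python
import json
from collections import Counter

def categorize(example):
    # Hypothetical rule: treat gold answers that look like capitalized
    # multi-word spans as "named entity" answers, everything else as "other".
    tokens = example["gold_answer"].split()
    if tokens and all(tok[:1].isupper() for tok in tokens):
        return "named_entity_answer"
    return "other"

error_counts = Counter()
# Hypothetical file: one JSON object per line with "prediction" and "gold_answer".
with open("model_predictions.jsonl") as f:
    for line in f:
        ex = json.loads(line)
        if ex["prediction"].strip().lower() != ex["gold_answer"].strip().lower():
            error_counts[categorize(ex)] += 1

# The most frequent error categories are the ones worth fixing first,
# since fixing them moves accuracy the most.
for category, count in error_counts.most_common():
    print(category, count)
```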
|
|
|
266 |
|
00:10:31,720 --> 00:10:38,360 |
|
um cool so these are kind of the general |
|
|
|
267 |
|
00:10:36,360 --> 00:10:40,839 |
|
ways that we can come up with research |
|
|
|
268 |
|
00:10:38,360 --> 00:10:42,519 |
|
ideas the next thing that we want to do |
|
|
|
269 |
|
00:10:40,839 --> 00:10:44,480 |
|
is research our topic area were there |
|
|
|
270 |
|
00:10:42,519 --> 00:10:46,720 |
|
any questions about bottom up versus top |
|
|
|
271 |
|
00:10:44,480 --> 00:10:49,120 |
|
down I'm going to talk about effective |
|
|
|
272 |
|
00:10:46,720 --> 00:10:51,920 |
|
strategies for bottom up stuff in uh in
|
|
|
273 |
|
00:10:49,120 --> 00:10:54,360 |
|
two weeks uh so we can talk more about |
|
|
|
274 |
|
00:10:51,920 --> 00:10:56,800 |
|
that then |
|
|
|
275 |
|
00:10:54,360 --> 00:11:00,959 |
|
but okay if not I'll move |
|
|
|
276 |
|
00:10:56,800 --> 00:11:05,079 |
|
on so next uh we have research topic |
|
|
|
277 |
|
00:11:00,959 --> 00:11:07,360 |
|
areas so this is about how you will do |
|
|
|
278 |
|
00:11:05,079 --> 00:11:10,320 |
|
assignment three which is researching uh |
|
|
|
279 |
|
00:11:07,360 --> 00:11:13,240 |
|
topic area forming a very good
|
|
|
280 |
|
00:11:10,320 --> 00:11:15,680 |
|
understanding of the topic that you're |
|
|
|
281 |
|
00:11:13,240 --> 00:11:18,800 |
|
trying to handle and so there's a bunch |
|
|
|
282 |
|
00:11:15,680 --> 00:11:22,800 |
|
of different ways you can do this uh the |
|
|
|
283 |
|
00:11:18,800 --> 00:11:25,680 |
|
first one is keyword search and so you |
|
|
|
284 |
|
00:11:22,800 --> 00:11:27,839 |
|
look something up on Google Scholar or |
|
|
|
285 |
|
00:11:25,680 --> 00:11:29,480 |
|
something uh finding older and newer |
|
|
|
286 |
|
00:11:27,839 --> 00:11:32,880 |
|
papers so this is like following the |
|
|
|
287 |
|
00:11:29,480 --> 00:11:35,360 |
|
tracks of papers you can uh read the |
|
|
|
288 |
|
00:11:32,880 --> 00:11:39,160 |
|
abstract and intro uh read the details |
|
|
|
289 |
|
00:11:35,360 --> 00:11:43,760 |
|
of the most relevant papers and I don't do
|
|
|
290 |
|
00:11:39,160 --> 00:11:45,440 |
|
this as much now but um when I was a |
|
|
|
291 |
|
00:11:43,760 --> 00:11:47,360 |
|
graduate student I would often make a |
|
|
|
292 |
|
00:11:45,440 --> 00:11:49,800 |
|
short summary of the paper to make sure |
|
|
|
293 |
|
00:11:47,360 --> 00:11:54,680 |
|
I really understood the details uh |
|
|
|
294 |
|
00:11:49,800 --> 00:11:56,000 |
|
because also now I teach a class um and |
|
|
|
295 |
|
00:11:54,680 --> 00:11:58,240 |
|
actually making these slides is very |
|
|
|
296 |
|
00:11:56,000 --> 00:12:00,120 |
|
useful for me so going back into the |
|
|
|
297 |
|
00:11:58,240 --> 00:12:03,440 |
|
Transformer slides you know that
|
|
|
298 |
|
00:12:00,120 --> 00:12:05,160 |
|
kind of serves as my um you know my way |
|
|
|
299 |
|
00:12:03,440 --> 00:12:06,800 |
|
of digesting papers and making sure that |
|
|
|
300 |
|
00:12:05,160 --> 00:12:08,160 |
|
I can explain them and if you're not |
|
|
|
301 |
|
00:12:06,800 --> 00:12:10,480 |
|
teaching a class you can go in and
|
|
|
302 |
|
00:12:08,160 --> 00:12:13,560 |
|
make a summary of it yourself so
|
|
|
303 |
|
00:12:10,480 --> 00:12:16,480 |
|
that can confirm uh solidify your memory |
|
|
|
304 |
|
00:12:13,560 --> 00:12:19,360 |
|
and like confirm your uh ability to |
|
|
|
305 |
|
00:12:16,480 --> 00:12:19,360 |
|
understand everything that's in |
|
|
|
306 |
|
00:12:20,639 --> 00:12:27,120 |
|
there cool um so next I'd like to talk |
|
|
|
307 |
|
00:12:23,639 --> 00:12:29,600 |
|
about some sources of papers in NLP um |
|
|
|
308 |
|
00:12:27,120 --> 00:12:31,800 |
|
one really good source uh is the ACL |
|
|
|
309 |
|
00:12:29,600 --> 00:12:33,720 |
|
Anthology another good source is Google |
|
|
|
310 |
|
00:12:31,800 --> 00:12:36,120 |
|
Scholar um they both have their |
|
|
|
311 |
|
00:12:33,720 --> 00:12:37,959 |
|
advantages and their disadvantages um |
|
|
|
312 |
|
00:12:36,120 --> 00:12:39,800 |
|
increasingly actually I realized now |
|
|
|
313 |
|
00:12:37,959 --> 00:12:41,959 |
|
that I should add this to my slides but |
|
|
|
314 |
|
00:12:39,800 --> 00:12:43,639 |
|
increasingly a lot of good uh papers in |
|
|
|
315 |
|
00:12:41,959 --> 00:12:47,120 |
|
NLP are also published in machine |
|
|
|
316 |
|
00:12:43,639 --> 00:12:51,199 |
|
learning conferences so like ICML or NeurIPS
|
|
|
317 |
|
00:12:47,120 --> 00:12:53,040 |
|
or um uh ICLR or things like that the
|
|
|
318 |
|
00:12:51,199 --> 00:12:54,920 |
|
problem is the ACL Anthology is way |
|
|
|
319 |
|
00:12:53,040 --> 00:12:56,600 |
|
better than any of them at like |
|
|
|
320 |
|
00:12:54,920 --> 00:13:00,360 |
|
organizing the papers in an easy to |
|
|
|
321 |
|
00:12:56,600 --> 00:13:03,560 |
|
process way so I think um I'll talk
|
|
|
322 |
|
00:13:00,360 --> 00:13:06,000 |
|
about this uh for now and so the ACL |
|
|
|
323 |
|
00:13:03,560 --> 00:13:08,800 |
|
Anthology covers many uh prestigious |
|
|
|
324 |
|
00:13:06,000 --> 00:13:11,639 |
|
venues in NLP it has all of these ones |
|
|
|
325 |
|
00:13:08,800 --> 00:13:15,160 |
|
here this figure is a little bit old uh |
|
|
|
326 |
|
00:13:11,639 --> 00:13:18,839 |
|
I made it in 2021 but you know it
|
|
|
327 |
|
00:13:15,160 --> 00:13:22,959 |
|
reaches up to the present day and what I |
|
|
|
328 |
|
00:13:18,839 --> 00:13:25,880 |
|
do often is I can start with the past 3 |
|
|
|
329 |
|
00:13:22,959 --> 00:13:30,160 |
|
to 5 years of several top venues in here |
|
|
|
330 |
|
00:13:25,880 --> 00:13:33,880 |
|
like ACL EMNLP uh NAACL and TACL and
|
|
|
331 |
|
00:13:30,160 --> 00:13:36,360 |
|
go in and do uh keyword search and so |
|
|
|
332 |
|
00:13:33,880 --> 00:13:36,360 |
|
like let's |
|
|
|
333 |
|
00:13:38,760 --> 00:13:43,600 |
|
say let's say I was interested in |
|
|
|
334 |
|
00:13:44,639 --> 00:13:49,519 |
|
multilingual large language
|
|
|
335 |
|
00:13:47,600 --> 00:13:52,079 |
|
models and evaluating them or some way |
|
|
|
336 |
|
00:13:49,519 --> 00:13:54,279 |
|
so I would go to ACL and then I would |
|
|
|
337 |
|
00:13:52,079 --> 00:13:57,560 |
|
just put in multi |
|
|
|
338 |
|
00:13:54,279 --> 00:14:01,360 |
|
lingual um and you get a wonderful paper |
|
|
|
339 |
|
00:13:57,560 --> 00:14:01,360 |
|
by some researcher
|
|
|
340 |
|
00:14:01,480 --> 00:14:06,440 |
|
named that was not intentional I didn't |
|
|
|
341 |
|
00:14:03,639 --> 00:14:08,800 |
|
know that was going to happen but um so |
|
|
|
342 |
|
00:14:06,440 --> 00:14:11,240 |
|
On-the-fly Cross-lingual Masking for
|
|
|
343 |
|
00:14:08,800 --> 00:14:12,959 |
|
multilingual pre-training um scaling |
|
|
|
344 |
|
00:14:11,240 --> 00:14:15,040 |
|
multilingual corpora and language models |
|
|
|
345 |
|
00:14:12,959 --> 00:14:18,120 |
|
to 500 languages that seems pretty |
|
|
|
346 |
|
00:14:15,040 --> 00:14:19,880 |
|
pretty relevant evaluating multilingual |
|
|
|
347 |
|
00:14:18,120 --> 00:14:22,000 |
|
compositional generalization so you can |
|
|
|
348 |
|
00:14:19,880 --> 00:14:27,680 |
|
just go through here and see a bunch of |
|
|
|
349 |
|
00:14:22,000 --> 00:14:30,680 |
|
papers that like um that could be |
|
|
|
350 |
|
00:14:27,680 --> 00:14:30,680 |
|
useful |
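The same keyword filtering can also be done programmatically over the ACL Anthology's full BibTeX export, which is handy when scanning the past few years of several venues at once. This is a rough sketch only: the export URL is an assumption (check aclanthology.org for the current download link), and the file is large.

```python
import gzip
import re
import urllib.request

# Assumed location of the Anthology's full BibTeX export.
URL = "https://aclanthology.org/anthology.bib.gz"
KEYWORD = "multilingual"

with urllib.request.urlopen(URL) as resp:
    bibtex = gzip.decompress(resp.read()).decode("utf-8")

# Pull out title fields (quoted or braced) and keep the ones containing the keyword.
titles = re.findall(r'title\s*=\s*[{"]([^}"]+)[}"]', bibtex)
matches = [t for t in titles if KEYWORD.lower() in t.lower()]

print(f"{len(matches)} titles mention '{KEYWORD}'")
for title in matches[:20]:
    print("-", title)
```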
|
|
|
351 |
|
00:14:32,240 --> 00:14:35,199 |
|
and you could uh if you're doing a more |
|
|
|
352 |
|
00:14:33,800 --> 00:14:36,920 |
|
machine learning oriented thing you can |
|
|
|
353 |
|
00:14:35,199 --> 00:14:38,920 |
|
do the same thing for like the NeurIPS
|
|
|
354 |
|
00:14:36,920 --> 00:14:41,480 |
|
proceedings or the ICML proceedings or
|
|
|
355 |
|
00:14:38,920 --> 00:14:41,480 |
|
something like |
|
|
|
356 |
|
00:14:41,800 --> 00:14:48,120 |
|
that um separately from this you can go |
|
|
|
357 |
|
00:14:44,839 --> 00:14:50,920 |
|
through Google Scholar um this allows |
|
|
|
358 |
|
00:14:48,120 --> 00:14:52,560 |
|
for a search of papers by keyword and so |
|
|
|
359 |
|
00:14:50,920 --> 00:14:54,440 |
|
if I write like neural entity |
|
|
|
360 |
|
00:14:52,560 --> 00:14:56,360 |
|
recognition it will give neural |
|
|
|
361 |
|
00:14:54,440 --> 00:15:00,040 |
|
architectures for named entity recognition
|
|
|
362 |
|
00:14:56,360 --> 00:15:03,399 |
|
all of these things like this um you can |
|
|
|
363 |
|
00:15:00,040 --> 00:15:06,800 |
|
view the more recent papers so like for |
|
|
|
364 |
|
00:15:03,399 --> 00:15:10,120 |
|
example uh if you're researching uh kind |
|
|
|
365 |
|
00:15:06,800 --> 00:15:12,759 |
|
of generic topic that a lot of people |
|
|
|
366 |
|
00:15:10,120 --> 00:15:14,639 |
|
use uh a lot of people do research on |
|
|
|
367 |
|
00:15:12,759 --> 00:15:18,399 |
|
you might be getting papers from like |
|
|
|
368 |
|
00:15:14,639 --> 00:15:19,920 |
|
1998 or something like this and you know |
|
|
|
369 |
|
00:15:18,399 --> 00:15:21,639 |
|
they might be useful but honestly the |
|
|
|
370 |
|
00:15:19,920 --> 00:15:23,519 |
|
methodology has changed so much since |
|
|
|
371 |
|
00:15:21,639 --> 00:15:24,680 |
|
then that most methodological papers from
|
|
|
372 |
|
00:15:23,519 --> 00:15:26,959 |
|
that long ago are probably not going to |
|
|
|
373 |
|
00:15:24,680 --> 00:15:29,480 |
|
be very useful um so you can view the |
|
|
|
374 |
|
00:15:26,959 --> 00:15:31,079 |
|
recent papers another really useful |
|
|
|
375 |
|
00:15:29,480 --> 00:15:33,759 |
|
thing that you can do is view papers |
|
|
|
376 |
|
00:15:31,079 --> 00:15:35,319 |
|
that cite the current paper and you can
|
|
|
377 |
|
00:15:33,759 --> 00:15:39,560 |
|
even click on this and then you can |
|
|
|
378 |
|
00:15:35,319 --> 00:15:42,519 |
|
search within the citing papers so
|
|
|
379 |
|
00:15:39,560 --> 00:15:44,399 |
|
um like let's say I want to know about |
|
|
|
380 |
|
00:15:42,519 --> 00:15:45,620 |
|
how |
|
|
|
381 |
|
00:15:44,399 --> 00:15:48,730 |
|
people |
|
|
|
382 |
|
00:15:45,620 --> 00:15:48,730 |
|
[Music] |
|
|
|
383 |
|
00:15:50,720 --> 00:15:55,720 |
|
do let's say I want to see if anybody |
|
|
|
384 |
|
00:15:53,199 --> 00:15:59,639 |
|
does neural entity recognition with uh |
|
|
|
385 |
|
00:15:55,720 --> 00:16:02,160 |
|
state space models so I do like state
|
|
|
386 |
|
00:15:59,639 --> 00:16:05,399 |
|
space |
|
|
|
387 |
|
00:16:02,160 --> 00:16:09,040 |
|
model and then I search within the |
|
|
|
388 |
|
00:16:05,399 --> 00:16:12,279 |
|
citing articles and I'm able to find |
|
|
|
389 |
|
00:16:09,040 --> 00:16:14,319 |
|
three articles that at least cite this |
|
|
|
390 |
|
00:16:12,279 --> 00:16:17,759 |
|
paper and talk about state space
|
|
|
391 |
|
00:16:14,319 --> 00:16:20,319 |
|
models so |
|
|
|
392 |
|
00:16:17,759 --> 00:16:21,600 |
|
um none of these seem particularly |
|
|
|
393 |
|
00:16:20,319 --> 00:16:23,240 |
|
relevant to what I was looking for but |
|
|
|
394 |
|
00:16:21,600 --> 00:16:26,800 |
|
you get the idea like this can be a |
|
|
|
395 |
|
00:16:23,240 --> 00:16:26,800 |
|
useful tool for finding more recent |
|
|
|
396 |
|
00:16:27,519 --> 00:16:30,519 |
|
things |
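Google Scholar itself has no official API, so as one alternative for the "search within the citing papers" step, the Semantic Scholar Graph API can list the papers that cite a given paper and let you filter their titles by keyword. A minimal sketch follows; the paper ID and keyword are placeholders, and the citations endpoint caps how many results come back per request.

```python
import requests

# Placeholder paper ID (assumed to be "Neural Architectures for Named Entity
# Recognition"); swap in the paper you actually care about.
PAPER_ID = "ARXIV:1603.01360"
KEYWORD = "state space"

url = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}/citations"
resp = requests.get(url, params={"fields": "title,year", "limit": 1000})
resp.raise_for_status()

for item in resp.json().get("data", []):
    citing = item.get("citingPaper", {})
    title = citing.get("title") or ""
    if KEYWORD.lower() in title.lower():
        print(citing.get("year"), title)
```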
|
|
|
397 |
|
00:16:33,639 --> 00:16:40,480 |
|
and then finding older papers this is |
|
|
|
398 |
|
00:16:36,279 --> 00:16:42,839 |
|
also relatively easy um so you read the |
|
|
|
399 |
|
00:16:40,480 --> 00:16:44,319 |
|
papers that you're interested in and |
|
|
|
400 |
|
00:16:42,839 --> 00:16:45,480 |
|
then it will have backlinks to older
|
|
|
401 |
|
00:16:44,319 --> 00:16:47,519 |
|
papers and you look them up in the |
|
|
|
402 |
|
00:16:45,480 --> 00:16:50,000 |
|
references this is how I find older
|
|
|
403 |
|
00:16:47,519 --> 00:16:53,600 |
|
papers that might be |
|
|
|
404 |
|
00:16:50,000 --> 00:16:57,800 |
|
relevant um and so these are the
|
|
|
405 |
|
00:16:53,600 --> 00:16:59,720 |
|
tools that I use um so I'd
|
|
|
406 |
|
00:16:57,800 --> 00:17:03,600 |
|
like to give a few caveats about Google |
|
|
|
407 |
|
00:16:59,720 --> 00:17:06,120 |
|
Scholar and uh things like Twitter or |
|
|
|
408 |
|
00:17:03,600 --> 00:17:08,360 |
|
LinkedIn or something like this they |
|
|
|
409 |
|
00:17:06,120 --> 00:17:10,720 |
|
give you very biased views on all the |
|
|
|
410 |
|
00:17:08,360 --> 00:17:14,600 |
|
papers that are out there um because |
|
|
|
411 |
|
00:17:10,720 --> 00:17:16,919 |
|
they sort by popularity basically so um
|
|
|
412 |
|
00:17:14,600 --> 00:17:19,439 |
|
actually if you're looking at like |
|
|
|
413 |
|
00:17:16,919 --> 00:17:22,000 |
|
Twitter or LinkedIn or something like |
|
|
|
414 |
|
00:17:19,439 --> 00:17:23,679 |
|
that you can actually get a pretty bleak |
|
|
|
415 |
|
00:17:22,000 --> 00:17:25,360 |
|
view on natural language processing and |
|
|
|
416 |
|
00:17:23,679 --> 00:17:28,000 |
|
say all anybody is doing is training |
|
|
|
417 |
|
00:17:25,360 --> 00:17:30,080 |
|
large language models because you know |
|
|
|
418 |
|
00:17:28,000 --> 00:17:31,720 |
|
these things tend to become you know |
|
|
|
419 |
|
00:17:30,080 --> 00:17:33,520 |
|
popular and then they get amplified by
|
|
|
420 |
|
00:17:31,720 --> 00:17:35,840 |
|
algorithms and stuff like that when in |
|
|
|
421 |
|
00:17:33,520 --> 00:17:37,440 |
|
fact like the landscape is much richer |
|
|
|
422 |
|
00:17:35,840 --> 00:17:40,400 |
|
which is why I do definitely suggest |
|
|
|
423 |
|
00:17:37,440 --> 00:17:42,000 |
|
that you like actually look through uh |
|
|
|
424 |
|
00:17:40,400 --> 00:17:43,880 |
|
conference proceedings and stuff and |
|
|
|
425 |
|
00:17:42,000 --> 00:17:46,720 |
|
find papers that are not you know |
|
|
|
426 |
|
00:17:43,880 --> 00:17:48,520 |
|
amplified as much so um I definitely
|
|
|
427 |
|
00:17:46,720 --> 00:17:50,840 |
|
highly recommend doing this in addition |
|
|
|
428 |
|
00:17:48,520 --> 00:17:52,480 |
|
to you know Google Scholar or social |
|
|
|
429 |
|
00:17:50,840 --> 00:17:54,640 |
|
media or other things like that that |
|
|
|
430 |
|
00:17:52,480 --> 00:17:54,640 |
|
might |
|
|
|
431 |
|
00:17:56,600 --> 00:18:01,760 |
|
be cool um I'd also like to mention a |
|
|
|
432 |
|
00:18:00,200 --> 00:18:04,000 |
|
thing about the ups and downs of |
|
|
|
433 |
|
00:18:01,760 --> 00:18:07,559 |
|
preemptive surveys |
|
|
|
434 |
|
00:18:04,000 --> 00:18:10,440 |
|
so um surveying extensively before doing |
|
|
|
435 |
|
00:18:07,559 --> 00:18:12,840 |
|
research uh has a bunch of good sides so |
|
|
|
436 |
|
00:18:10,440 --> 00:18:14,000 |
|
it prevents you from duplicating work so |
|
|
|
437 |
|
00:18:12,840 --> 00:18:15,039 |
|
somebody else might have done a very |
|
|
|
438 |
|
00:18:14,000 --> 00:18:18,080 |
|
similar |
|
|
|
439 |
|
00:18:15,039 --> 00:18:20,480 |
|
thing um it also increases your toolbox |
|
|
|
440 |
|
00:18:18,080 --> 00:18:21,600 |
|
of methods so you know if it's a problem |
|
|
|
441 |
|
00:18:20,480 --> 00:18:25,400 |
|
that a lot of people have worked on |
|
|
|
442 |
|
00:18:21,600 --> 00:18:27,120 |
|
before then you know it helps uh give |
|
|
|
443 |
|
00:18:25,400 --> 00:18:30,320 |
|
you ideas of methods that you could be |
|
|
|
444 |
|
00:18:27,120 --> 00:18:35,600 |
|
using um however in a way it also kind |
|
|
|
445 |
|
00:18:30,320 --> 00:18:38,720 |
|
of constrains your thinking so um if you |
|
|
|
446 |
|
00:18:35,600 --> 00:18:42,480 |
|
like once you have built up a very
|
|
|
447 |
|
00:18:38,720 --> 00:18:45,440 |
|
extensive survey of like ways to do |
|
|
|
448 |
|
00:18:42,480 --> 00:18:47,240 |
|
things you tend to like not move away from
|
|
|
449 |
|
00:18:45,440 --> 00:18:48,799 |
|
there when in fact like if you
|
|
|
450 |
|
00:18:47,240 --> 00:18:50,080 |
|
just thought of ways to solve problems |
|
|
|
451 |
|
00:18:48,799 --> 00:18:52,360 |
|
without looking at everything you might |
|
|
|
452 |
|
00:18:50,080 --> 00:18:54,799 |
|
come up with something over here that might
|
|
|
453 |
|
00:18:52,360 --> 00:18:56,400 |
|
actually be a good idea right um and so |
|
|
|
454 |
|
00:18:54,799 --> 00:18:58,600 |
|
there's this really nice essay it was |
|
|
|
455 |
|
00:18:56,400 --> 00:19:00,799 |
|
actually shared with me by
|
|
|
456 |
|
00:18:58,600 --> 00:19:02,440 |
|
Chris Manning from Stanford um it's
|
|
|
457 |
|
00:19:00,799 --> 00:19:04,720 |
|
called how to build an economic model
|
|
|
458 |
|
00:19:02,440 --> 00:19:06,679 |
|
in your spare time it's from
|
|
|
459 |
|
00:19:04,720 --> 00:19:08,880 |
|
a Nobel Prize winner in economics but |
|
|
|
460 |
|
00:19:06,679 --> 00:19:10,480 |
|
he's talking about how when he tries to |
|
|
|
461 |
|
00:19:08,880 --> 00:19:13,039 |
|
come up with new and like important |
|
|
|
462 |
|
00:19:10,480 --> 00:19:15,840 |
|
ideas he doesn't look at economics |
|
|
|
463 |
|
00:19:13,039 --> 00:19:19,679 |
|
journals he looks at the newspaper and |
|
|
|
464 |
|
00:19:15,840 --> 00:19:21,919 |
|
tries to uh you know |
|
|
|
465 |
|
00:19:19,679 --> 00:19:23,480 |
|
like look at problems that people are |
|
|
|
466 |
|
00:19:21,919 --> 00:19:24,840 |
|
talking about in the newspaper and think |
|
|
|
467 |
|
00:19:23,480 --> 00:19:27,159 |
|
about whether there's an economic |
|
|
|
468 |
|
00:19:24,840 --> 00:19:29,919 |
|
solution to them and so if we think |
|
|
|
469 |
|
00:19:27,159 --> 00:19:32,880 |
|
about the analog of how we can do this in
|
|
|
470 |
|
00:19:29,919 --> 00:19:35,600 |
|
natural language processing you know |
|
|
|
471 |
|
00:19:32,880 --> 00:19:37,360 |
|
maybe you don't necessarily right away |
|
|
|
472 |
|
00:19:35,600 --> 00:19:38,799 |
|
want to do a really extensive survey |
|
|
|
473 |
|
00:19:37,360 --> 00:19:41,080 |
|
first you might just think about like |
|
|
|
474 |
|
00:19:38,799 --> 00:19:44,080 |
|
what's bothering you like when you're |
|
|
|
475 |
|
00:19:41,080 --> 00:19:46,799 |
|
using ChatGPT what is really
|
|
|
476 |
|
00:19:44,080 --> 00:19:49,600 |
|
frustrating to you uh about how it gives |
|
|
|
477 |
|
00:19:46,799 --> 00:19:51,280 |
|
responses or um what are the things you |
|
|
|
478 |
|
00:19:49,600 --> 00:19:53,159 |
|
wish it were possible to do through |
|
|
|
479 |
|
00:19:51,280 --> 00:19:56,240 |
|
natural language processing but are
|
|
|
480 |
|
00:19:53,159 --> 00:19:57,640 |
|
not possible to do and um then you can |
|
|
|
481 |
|
00:19:56,240 --> 00:20:00,679 |
|
start from there you can look at you |
|
|
|
482 |
|
00:19:57,640 --> 00:20:03,440 |
|
know what companies are doing in their |
|
|
|
483 |
|
00:20:00,679 --> 00:20:05,799 |
|
tech demos uh because the tech demos
|
|
|
484 |
|
00:20:03,440 --> 00:20:08,640 |
|
might be nice but they almost never work |
|
|
|
485 |
|
00:20:05,799 --> 00:20:11,240 |
|
as well as the tech demo makes them seem |
|
|
|
486 |
|
00:20:08,640 --> 00:20:13,840 |
|
like they work so that could be another |
|
|
|
487 |
|
00:20:11,240 --> 00:20:15,720 |
|
place to get ideas um or you can look at |
|
|
|
488 |
|
00:20:13,840 --> 00:20:17,039 |
|
papers in a related field like machine |
|
|
|
489 |
|
00:20:15,720 --> 00:20:18,760 |
|
learning like let's say you're a machine |
|
|
|
490 |
|
00:20:17,039 --> 00:20:21,280 |
|
learning oriented person and you really |
|
|
|
491 |
|
00:20:18,760 --> 00:20:23,000 |
|
love like math and stuff like that it's |
|
|
|
492 |
|
00:20:21,280 --> 00:20:25,799 |
|
like well there's this good mathematical |
|
|
|
493 |
|
00:20:23,000 --> 00:20:27,760 |
|
tool that I think could be applicable to |
|
|
|
494 |
|
00:20:25,799 --> 00:20:30,440 |
|
um a certain problem in NLP or something |
|
|
|
495 |
|
00:20:27,760 --> 00:20:31,960 |
|
like that so you could do that too um |
|
|
|
496 |
|
00:20:30,440 --> 00:20:33,960 |
|
the final one you know comes with
|
|
|
497 |
|
00:20:31,960 --> 00:20:35,799 |
|
all the caveats of doing top-down research
|
|
|
498 |
|
00:20:33,960 --> 00:20:37,320 |
|
of course so you know you need to make |
|
|
|
499 |
|
00:20:35,799 --> 00:20:39,799 |
|
sure that that really is the correct |
|
|
|
500 |
|
00:20:37,320 --> 00:20:42,159 |
|
tool for whatever you want to solve but
|
|
|
501 |
|
00:20:39,799 --> 00:20:45,280 |
|
um definitely this is something to think |
|
|
|
502 |
|
00:20:42,159 --> 00:20:48,240 |
|
about um however for assignment three |
|
|
|
503 |
|
00:20:45,280 --> 00:20:49,559 |
|
you need to do a survey so I'm
|
|
|
504 |
|
00:20:48,240 --> 00:20:50,720 |
|
forcing you to do a survey for |
|
|
|
505 |
|
00:20:49,559 --> 00:20:52,200 |
|
assignment three so if you're going to |
|
|
|
506 |
|
00:20:50,720 --> 00:20:53,640 |
|
do something like this you can do it |
|
|
|
507 |
|
00:20:52,200 --> 00:20:56,600 |
|
before assignment 3 and start thinking |
|
|
|
508 |
|
00:20:53,640 --> 00:21:00,000 |
|
about what you want to be doing so um |
|
|
|
509 |
|
00:20:56,600 --> 00:21:01,520 |
|
that's something |
|
|
|
510 |
|
00:21:00,000 --> 00:21:03,200 |
|
uh any questions or discussion about |
|
|
|
511 |
|
00:21:01,520 --> 00:21:06,799 |
|
that |
|
|
|
512 |
|
00:21:03,200 --> 00:21:07,840 |
|
part this is hard I'm
|
|
|
513 |
|
00:21:06,799 --> 00:21:11,120 |
|
happy to |
|
|
|
514 |
|
00:21:07,840 --> 00:21:14,039 |
|
discuss either now or in office hours or |
|
|
|
515 |
|
00:21:11,120 --> 00:21:14,039 |
|
anything like this |
|
|
|
516 |
|
00:21:14,200 --> 00:21:19,720 |
|
but Okay |
|
|
|
517 |
|
00:21:17,080 --> 00:21:24,279 |
|
cool so the next thing is forming a
|
|
|
518 |
|
00:21:19,720 --> 00:21:25,640 |
|
hypothesis so uh once you
|
|
|
519 |
|
00:21:24,279 --> 00:21:28,600 |
|
have a general idea of what you want to |
|
|
|
520 |
|
00:21:25,640 --> 00:21:31,240 |
|
do um and you have done a survey of related
|
|
|
521 |
|
00:21:28,600 --> 00:21:32,480 |
|
work you can devise a final research |
|
|
|
522 |
|
00:21:31,240 --> 00:21:34,159 |
|
question or |
|
|
|
523 |
|
00:21:32,480 --> 00:21:37,760 |
|
hypothesis |
|
|
|
524 |
|
00:21:34,159 --> 00:21:40,039 |
|
and so a research question is one or |
|
|
|
525 |
|
00:21:37,760 --> 00:21:43,400 |
|
several explicit questions regarding the |
|
|
|
526 |
|
00:21:40,039 --> 00:21:45,919 |
|
thing that you want to know um |
|
|
|
527 |
|
00:21:43,400 --> 00:21:47,400 |
|
and this is actually pretty hard for |
|
|
|
528 |
|
00:21:45,919 --> 00:21:49,080 |
|
people like I ask people to write |
|
|
|
529 |
|
00:21:47,400 --> 00:21:50,880 |
|
research questions and very often they |
|
|
|
530 |
|
00:21:49,080 --> 00:21:53,080 |
|
don't write research questions in this |
|
|
|
531 |
|
00:21:50,880 --> 00:21:57,720 |
|
format and I have to ask people to try |
|
|
|
532 |
|
00:21:53,080 --> 00:21:59,919 |
|
to change them and what I
|
|
|
533 |
|
00:21:57,720 --> 00:22:03,159 |
|
think they in general should be are yes |
|
|
|
534 |
|
00:21:59,919 --> 00:22:08,120 |
|
no questions so |
|
|
|
535 |
|
00:22:03,159 --> 00:22:10,400 |
|
um yes no questions and you have a
|
|
|
536 |
|
00:22:08,120 --> 00:22:13,120 |
|
hypothesis uh about what you think the |
|
|
|
537 |
|
00:22:10,400 --> 00:22:14,600 |
|
answer to the question may be a priori |
|
|
|
538 |
|
00:22:13,120 --> 00:22:17,520 |
|
and that hypothesis should be |
|
|
|
539 |
|
00:22:14,600 --> 00:22:19,919 |
|
falsifiable so basically if you get
|
|
|
540 |
|
00:22:17,520 --> 00:22:21,240 |
|
a certain result you can demonstrate |
|
|
|
541 |
|
00:22:19,919 --> 00:22:23,120 |
|
that the answer to this question is |
|
|
|
542 |
|
00:22:21,240 --> 00:22:24,679 |
|
probably yes if you get a different |
|
|
|
543 |
|
00:22:23,120 --> 00:22:27,520 |
|
result you can demonstrate that the |
|
|
|
544 |
|
00:22:24,679 --> 00:22:29,640 |
|
answer to the question is probably no |
|
|
|
545 |
|
00:22:27,520 --> 00:22:32,400 |
|
and just to make this a little bit more |
|
|
|
546 |
|
00:22:29,640 --> 00:22:34,360 |
|
concrete I can give a few curiosity |
|
|
|
547 |
|
00:22:32,400 --> 00:22:36,880 |
|
driven questions and |
|
|
|
548 |
|
00:22:34,360 --> 00:22:40,720 |
|
hypotheses the curiosity driven
|
|
|
549 |
|
00:22:36,880 --> 00:22:43,480 |
|
questions are a little bit easier so um |
|
|
|
550 |
|
00:22:40,720 --> 00:22:45,600 |
|
we have the curiosity driven question of
|
|
|
551 |
|
00:22:43,480 --> 00:22:49,679 |
|
are all
|
|
|
552 |
|
00:22:45,600 --> 00:22:53,559 |
|
languages equally hard to language model |
|
|
|
553 |
|
00:22:49,679 --> 00:22:55,400 |
|
and they say uh it is unlikely that all |
|
|
|
554 |
|
00:22:53,559 --> 00:22:56,760 |
|
languages are equally easy or that |
|
|
|
555 |
|
00:22:55,400 --> 00:22:58,799 |
|
methods are equally good at all |
|
|
|
556 |
|
00:22:56,760 --> 00:23:01,159 |
|
languages um so that's their
|
|
|
557 |
|
00:22:58,799 --> 00:23:04,120 |
|
hypothesis so they think a priori that |
|
|
|
558 |
|
00:23:01,159 --> 00:23:05,919 |
|
that's the case um but that might be |
|
|
|
559 |
|
00:23:04,120 --> 00:23:08,400 |
|
falsified by getting a very strong |
|
|
|
560 |
|
00:23:05,919 --> 00:23:10,679 |
|
result that says like no matter which |
|
|
|
561 |
|
00:23:08,400 --> 00:23:13,760 |
|
language you're modeling many models |
|
|
|
562 |
|
00:23:10,679 --> 00:23:18,120 |
|
that we use get similar results
|
|
|
563 |
|
00:23:13,760 --> 00:23:20,400 |
|
on um what makes a particular podcast |
|
|
|
564 |
|
00:23:18,120 --> 00:23:21,320 |
|
broadly engaging so this was an analysis |
|
|
|
565 |
|
00:23:20,400 --> 00:23:24,400 |
|
of |
|
|
|
566 |
|
00:23:21,320 --> 00:23:27,960 |
|
podcasts uh where they compared popular |
|
|
|
567 |
|
00:23:24,400 --> 00:23:29,720 |
|
podcasts and unpopular podcasts or |
|
|
|
568 |
|
00:23:27,960 --> 00:23:32,400 |
|
engaging and unengaging |
|
|
|
569 |
|
00:23:29,720 --> 00:23:34,400 |
|
podcasts and it says uh tips such as |
|
|
|
570 |
|
00:23:32,400 --> 00:23:37,039 |
|
reducing filler words and disfluencies |
|
|
|
571 |
|
00:23:34,400 --> 00:23:38,840 |
|
or incorporating emotion are things that |
|
|
|
572 |
|
00:23:37,039 --> 00:23:41,400 |
|
people had anecdotally written on the |
|
|
|
573 |
|
00:23:38,840 --> 00:23:43,039 |
|
internet as tips to make a good podcast |
|
|
|
574 |
|
00:23:41,400 --> 00:23:45,760 |
|
but nobody had actually empirically |
|
|
|
575 |
|
00:23:43,039 --> 00:23:48,440 |
|
validated that so they wanted to
|
|
|
576 |
|
00:23:45,760 --> 00:23:50,000 |
|
like actually go validate that so they
|
|
|
577 |
|
00:23:48,440 --> 00:23:51,679 |
|
came up with hypotheses and they could |
|
|
|
578 |
|
00:23:50,000 --> 00:23:55,720 |
|
demonstrate that those had good or bad |
|
|
|
579 |
|
00:23:51,679 --> 00:23:55,720 |
|
correlation with podcasts being judged as
|
|
|
580 |
|
00:23:56,880 --> 00:24:03,600 |
|
engaging application driven questions |
|
|
|
581 |
|
00:23:59,039 --> 00:24:03,600 |
|
and hypotheses are a little bit harder |
|
|
|
582 |
|
00:24:04,520 --> 00:24:10,480 |
|
so here is an |
|
|
|
583 |
|
00:24:07,640 --> 00:24:13,039 |
|
example this is an example from a paper |
|
|
|
584 |
|
00:24:10,480 --> 00:24:18,720 |
|
that I wrote previously which |
|
|
|
585 |
|
00:24:13,039 --> 00:24:22,080 |
|
was when and why or how and why do
|
|
|
586 |
|
00:24:18,720 --> 00:24:22,960 |
|
pre-trained word embeddings help neural |
|
|
|
587 |
|
00:24:22,080 --> 00:24:25,080 |
|
machine |
|
|
|
588 |
|
00:24:22,960 --> 00:24:26,760 |
|
translation and this was back when |
|
|
|
589 |
|
00:24:25,080 --> 00:24:28,279 |
|
pre-training was mostly like word |
|
|
|
590 |
|
00:24:26,760 --> 00:24:31,880 |
|
embeddings we weren't pre-training the whole
|
|
|
591 |
|
00:24:28,279 --> 00:24:34,480 |
|
body of the neural net so |
|
|
|
592 |
|
00:24:31,880 --> 00:24:36,640 |
|
now the answers to this question are a |
|
|
|
593 |
|
00:24:34,480 --> 00:24:37,919 |
|
little bit different but basically the |
|
|
|
594 |
|
00:24:36,640 --> 00:24:40,080 |
|
questions that we asked were is the
|
|
|
595 |
|
00:24:37,919 --> 00:24:42,360 |
|
behavior of pre-training affected by |
|
|
|
596 |
|
00:24:40,080 --> 00:24:45,960 |
|
language families and other linguistic |
|
|
|
597 |
|
00:24:42,360 --> 00:24:49,520 |
|
features of source and Target languages |
|
|
|
598 |
|
00:24:45,960 --> 00:24:51,360 |
|
so uh we expected that the answer to |
|
|
|
599 |
|
00:24:49,520 --> 00:24:53,640 |
|
this would be yes it would vary across |
|
|
|
600 |
|
00:24:51,360 --> 00:24:54,960 |
|
them do pre-trained embeddings help more
|
|
|
601 |
|
00:24:53,640 --> 00:24:57,760 |
|
when the size of the training data is |
|
|
|
602 |
|
00:24:54,960 --> 00:24:59,039 |
|
small we expected that this would be yes |
|
|
|
603 |
|
00:24:57,760 --> 00:25:00,640 |
|
how much does the similarity of the |
|
|
|
604 |
|
00:24:59,039 --> 00:25:03,720 |
|
source and Target languages affect the |
|
|
|
605 |
|
00:25:00,640 --> 00:25:06,200 |
|
efficacy of using pre-trained embeddings uh
|
|
|
606 |
|
00:25:03,720 --> 00:25:08,399 |
|
we didn't have a hypothesis about |
|
|
|
607 |
|
00:25:06,200 --> 00:25:10,600 |
|
whether it would or not and is it |
|
|
|
608 |
|
00:25:08,399 --> 00:25:12,320 |
|
helpful to align the embedding spaces |
|
|
|
609 |
|
00:25:10,600 --> 00:25:14,520 |
|
between the source and Target languages |
|
|
|
610 |
|
00:25:12,320 --> 00:25:16,039 |
|
we assumed this would be yes and do
|
|
|
611 |
|
00:25:14,520 --> 00:25:17,640 |
|
pre-trained embeddings help more in
|
|
|
612 |
|
00:25:16,039 --> 00:25:19,360 |
|
multilingual systems as compared to |
|
|
|
613 |
|
00:25:17,640 --> 00:25:22,679 |
|
bilingual systems and we didn't have a |
|
|
|
614 |
|
00:25:19,360 --> 00:25:26,279 |
|
good hypothesis about that |
|
|
|
615 |
|
00:25:22,679 --> 00:25:29,559 |
|
another one is although recent studies uh
|
|
|
616 |
|
00:25:26,279 --> 00:25:32,760 |
|
sorry the question of whether and how |
|
|
|
617 |
|
00:25:29,559 --> 00:25:35,039 |
|
contextual information benefits end-to-end
|
|
|
618 |
|
00:25:32,760 --> 00:25:38,960 |
|
speech translation has received little |
|
|
|
619 |
|
00:25:35,039 --> 00:25:42,480 |
|
attention and so their guess was that it |
|
|
|
620 |
|
00:25:38,960 --> 00:25:44,880 |
|
probably would help so application |
|
|
|
621 |
|
00:25:42,480 --> 00:25:47,120 |
|
oriented questions are a little bit |
|
|
|
622 |
|
00:25:44,880 --> 00:25:49,200 |
|
tricky because the obvious one is like |
|
|
|
623 |
|
00:25:47,120 --> 00:25:52,200 |
|
does X make Y
|
|
|
624 |
|
00:25:49,200 --> 00:25:54,080 |
|
better and so you have a method you
|
|
|
625 |
|
00:25:52,200 --> 00:25:55,559 |
|
think it's going to make the output |
|
|
|
626 |
|
00:25:54,080 --> 00:25:58,120 |
|
better and so that's kind of your |
|
|
|
627 |
|
00:25:55,559 --> 00:26:00,000 |
|
obvious research question but the |
|
|
|
628 |
|
00:25:58,120 --> 00:26:02,080 |
|
problem is the above question or |
|
|
|
629 |
|
00:26:00,000 --> 00:26:04,279 |
|
hypothesis is natural but it's very |
|
|
|
630 |
|
00:26:02,080 --> 00:26:06,679 |
|
indirect so normally you also have a |
|
|
|
631 |
|
00:26:04,279 --> 00:26:09,760 |
|
hypothesis about like why it will help |
|
|
|
632 |
|
00:26:06,679 --> 00:26:13,279 |
|
or something like this and so if the |
|
|
|
633 |
|
00:26:09,760 --> 00:26:15,440 |
|
answer is no after your experiments why |
|
|
|
634 |
|
00:26:13,279 --> 00:26:18,080 |
|
is the answer |
|
|
|
635 |
|
00:26:15,440 --> 00:26:20,640 |
|
no it could be that your original |
|
|
|
636 |
|
00:26:18,080 --> 00:26:23,720 |
|
assumption about why a particular method |
|
|
|
637 |
|
00:26:20,640 --> 00:26:25,039 |
|
would help was wrong which is the worst |
|
|
|
638 |
|
00:26:23,720 --> 00:26:28,360 |
|
case scenario but you also could just |
|
|
|
639 |
|
00:26:25,039 --> 00:26:30,559 |
|
have a bug in your code or uh your |
|
|
|
640 |
|
00:26:28,360 --> 00:26:32,000 |
|
data set your test set might not be |
|
|
|
641 |
|
00:26:30,559 --> 00:26:34,279 |
|
large enough so you wouldn't be able to |
|
|
|
642 |
|
00:26:32,000 --> 00:26:35,840 |
|
get a statistically significant result |
|
|
|
643 |
|
00:26:34,279 --> 00:26:40,039 |
|
based on the amount that it helped you |
|
|
|
644 |
|
00:26:35,840 --> 00:26:42,960 |
|
improve or other things like that so |
|
|
|
645 |
|
00:26:40,039 --> 00:26:44,960 |
|
what I like to do in this case is try to |
|
|
|
646 |
|
00:26:42,960 --> 00:26:48,399 |
|
come up with the intuition about why X |
|
|
|
647 |
|
00:26:44,960 --> 00:26:50,360 |
|
will make Y better and can you think of
|
|
|
648 |
|
00:26:48,399 --> 00:26:52,080 |
|
other research questions or hypotheses |
|
|
|
649 |
|
00:26:50,360 --> 00:26:54,240 |
|
that confirm or falsify these
|
|
|
650 |
|
00:26:52,080 --> 00:26:56,640 |
|
assumptions |
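On the test-set-size worry mentioned a moment ago, one standard check in NLP evaluation is paired bootstrap resampling (in the style of Koehn, 2004): estimate how often the new system would still beat the baseline if the test set were redrawn. Below is a minimal sketch, assuming you already have one score per test example for the baseline and for the new system.

```python
import random

def paired_bootstrap(baseline_scores, system_scores, n_resamples=10000, seed=0):
    assert len(baseline_scores) == len(system_scores)
    rng = random.Random(seed)
    n = len(baseline_scores)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample the test set with replacement
        base = sum(baseline_scores[i] for i in idx) / n
        syst = sum(system_scores[i] for i in idx) / n
        if syst > base:
            wins += 1
    # Fraction of resamples in which the new system wins.
    return wins / n_resamples

# Toy usage with made-up per-example accuracies:
baseline = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
system = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
print(paired_bootstrap(baseline, system))
```

Roughly, if the new system wins in 95% or more of the resamples, the improvement is unlikely to be an artifact of a small test set.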
|
|
|
651 |
|
00:26:54,240 --> 00:26:59,559 |
|
so uh some things that you can do are |
|
|
|
652 |
|
00:26:56,640 --> 00:27:01,240 |
|
come up with like toy data or come up |
|
|
|
653 |
|
00:26:59,559 --> 00:27:03,840 |
|
with a subset of the data where you |
|
|
|
654 |
|
00:27:01,240 --> 00:27:06,600 |
|
think this might be correct so just to |
|
|
|
655 |
|
00:27:03,840 --> 00:27:09,279 |
|
give an example let's say we have a |
|
|
|
656 |
|
00:27:06,600 --> 00:27:12,159 |
|
translation model and we have a |
|
|
|
657 |
|
00:27:09,279 --> 00:27:14,279 |
|
hypothesis that improving entity |
|
|
|
658 |
|
00:27:12,159 --> 00:27:16,520 |
|
translation in low resource languages
|
|
|
659 |
|
00:27:14,279 --> 00:27:18,799 |
|
will improve translation accuracy and we |
|
|
|
660 |
|
00:27:16,520 --> 00:27:21,399 |
|
run an experiment or actually maybe this |
|
|
|
661 |
|
00:27:18,799 --> 00:27:23,760 |
|
is an even better one we have a
|
|
|
662 |
|
00:27:21,399 --> 00:27:26,240 |
|
hypothesis that incorporating contextual |
|
|
|
663 |
|
00:27:23,760 --> 00:27:28,799 |
|
information in speech translation will |
|
|
|
664 |
|
00:27:26,240 --> 00:27:31,760 |
|
help translation results |
|
|
|
665 |
|
00:27:28,799 --> 00:27:36,480 |
|
so incorporating context in machine |
|
|
|
666 |
|
00:27:31,760 --> 00:27:37,600 |
|
translation has been a very old topic |
|
|
|
667 |
|
00:27:36,480 --> 00:27:41,279 |
|
like people have been trying to do this |
|
|
|
668 |
|
00:27:37,600 --> 00:27:43,559 |
|
for a very long time but for a long time |
|
|
|
669 |
|
00:27:41,279 --> 00:27:45,200 |
|
the conclusion was that it essentially |
|
|
|
670 |
|
00:27:43,559 --> 00:27:46,519 |
|
wasn't helping translation people would |
|
|
|
671 |
|
00:27:45,200 --> 00:27:48,039 |
|
incorporate context through neural
|
|
|
672 |
|
00:27:46,519 --> 00:27:50,960 |
|
networks or other things like that and |
|
|
|
673 |
|
00:27:48,039 --> 00:27:53,320 |
|
it just wasn't improving the results |
|
|
|
674 |
|
00:27:50,960 --> 00:27:55,320 |
|
significantly and in the end the reason |
|
|
|
675 |
|
00:27:53,320 --> 00:27:57,960 |
|
why was because there just weren't |
|
|
|
676 |
|
00:27:55,320 --> 00:27:59,799 |
|
enough examples where contextual |
|
|
|
677 |
|
00:27:57,960 --> 00:28:02,200 |
|
information was useful in the data sets |
|
|
|
678 |
|
00:27:59,799 --> 00:28:06,360 |
|
that everybody was using so people were |
|
|
|
679 |
|
00:28:02,200 --> 00:28:09,080 |
|
using really long news sentences to try |
|
|
|
680 |
|
00:28:06,360 --> 00:28:10,880 |
|
to figure out whether context
|
|
|
681 |
|
00:28:09,080 --> 00:28:12,440 |
|
was helping but really long news
|
|
|
682 |
|
00:28:10,880 --> 00:28:14,000 |
|
sentences have so much information |
|
|
|
683 |
|
00:28:12,440 --> 00:28:16,080 |
|
included in them that you can mostly |
|
|
|
684 |
|
00:28:14,000 --> 00:28:20,120 |
|
translate sentence by sentence and get |
|
|
|
685 |
|
00:28:16,080 --> 00:28:21,880 |
|
it right like 95% of the time so the |
|
|
|
686 |
|
00:28:20,120 --> 00:28:23,600 |
|
problem wasn't that any of the methods |
|
|
|
687 |
|
00:28:21,880 --> 00:28:26,799 |
|
that people were proposing were bad it |
|
|
|
688 |
|
00:28:23,600 --> 00:28:29,559 |
|
was just that they weren't effective |
|
|
|
689 |
|
00:28:26,799 --> 00:28:31,440 |
|
enough to see big enough uh results and |
|
|
|
690 |
|
00:28:29,559 --> 00:28:33,159 |
|
so then people changed the data set to
|
|
|
691 |
|
00:28:31,440 --> 00:28:34,720 |
|
like conversations or something like |
|
|
|
692 |
|
00:28:33,159 --> 00:28:37,399 |
|
that and in conversations the utterances are very
|
|
|
693 |
|
00:28:34,720 --> 00:28:39,159 |
|
contextual and very short
|
|
|
694 |
|
00:28:37,399 --> 00:28:41,440 |
|
and once you started doing things like |
|
|
|
695 |
|
00:28:39,159 --> 00:28:45,840 |
|
that then the same methods like exactly |
|
|
|
696 |
|
00:28:41,440 --> 00:28:48,640 |
|
the same methods were helping
|
|
|
697 |
|
00:28:45,840 --> 00:28:51,120 |
|
when they weren't helping before and |
|
|
|
698 |
|
00:28:48,640 --> 00:28:52,720 |
|
so the underlying assumption about |
|
|
|
699 |
|
00:28:51,120 --> 00:28:56,240 |
|
incorporating context information is |
|
|
|
700 |
|
00:28:52,720 --> 00:28:58,159 |
|
that context will be helpful and or |
|
|
|
701 |
|
00:28:56,240 --> 00:29:01,760 |
|
context is necessary |
|
|
|
702 |
|
00:28:58,159 --> 00:29:03,880 |
|
to you know do translation well so does |
|
|
|
703 |
|
00:29:01,760 --> 00:29:06,880 |
|
anyone have an idea about how you could |
|
|
|
704 |
|
00:29:03,880 --> 00:29:06,880 |
|
like actually verify that |
|
|
|
705 |
|
00:29:10,880 --> 00:29:16,519 |
|
assumption any idea yeah simplest way |
|
|
|
706 |
|
00:29:14,000 --> 00:29:19,120 |
|
would be to just give it an eval set and
|
|
|
707 |
|
00:29:16,519 --> 00:29:21,000 |
|
then have a measure of okay if it improves by
|
|
|
708 |
|
00:29:19,120 --> 00:29:23,679 |
|
more than |
|
|
|
709 |
|
00:29:21,000 --> 00:29:25,519 |
|
x% um and how would that verify the |
|
|
|
710 |
|
00:29:23,679 --> 00:29:28,480 |
|
assumption that context is |
|
|
|
711 |
|
00:29:25,519 --> 00:29:30,720 |
|
necessary so we're asking a question |
|
|
|
712 |
|
00:29:28,480 --> 00:29:33,480 |
|
whether context is helpful in the project
|
|
|
713 |
|
00:29:30,720 --> 00:29:36,000 |
|
you're doing so we're asking kind of
|
|
|
714 |
|
00:29:33,480 --> 00:29:39,240 |
|
a
|
|
|
715 |
|
00:29:36,000 --> 00:29:40,840 |
|
two-part question the
|
|
|
716 |
|
00:29:39,240 --> 00:29:44,080 |
|
main question is whether context is |
|
|
|
717 |
|
00:29:40,840 --> 00:29:45,559 |
|
helpful given a particular you know |
|
|
|
718 |
|
00:29:44,080 --> 00:29:47,240 |
|
experimental setup right so like |
|
|
|
719 |
|
00:29:45,559 --> 00:29:50,440 |
|
training data |
|
|
|
720 |
|
00:29:47,240 --> 00:29:52,039 |
|
set modeling method and training |
|
|
|
721 |
|
00:29:50,440 --> 00:29:54,679 |
|
algorithm and evaluation algorithm |
|
|
|
722 |
|
00:29:52,039 --> 00:29:56,480 |
|
that's kind of the big final result that |
|
|
|
723 |
|
00:29:54,679 --> 00:29:58,840 |
|
you want to get in your paper but |
|
|
|
724 |
|
00:29:56,480 --> 00:30:01,399 |
|
there's kind of a sub-question which is
|
|
|
725 |
|
00:29:58,840 --> 00:30:04,360 |
|
is context even necessary to translate |
|
|
|
726 |
|
00:30:01,399 --> 00:30:06,559 |
|
well you train a model with context and |
|
|
|
727 |
|
00:30:04,360 --> 00:30:08,200 |
|
one without context you train a model |
|
|
|
728 |
|
00:30:06,559 --> 00:30:10,679 |
|
with context and one without context but |
|
|
|
729 |
|
00:30:08,200 --> 00:30:14,080 |
|
what if your model of context is really |
|
|
|
730 |
|
00:30:10,679 --> 00:30:15,399 |
|
bad like the same model you have the same
|
|
|
731 |
|
00:30:14,080 --> 00:30:16,840 |
|
model architecture but let's say your |
|
|
|
732 |
|
00:30:15,399 --> 00:30:18,559 |
|
model architecture is really bad at |
|
|
|
733 |
|
00:30:16,840 --> 00:30:19,919 |
|
capturing context so then maybe it's a |
|
|
|
734 |
|
00:30:18,559 --> 00:30:22,399 |
|
problem of your model architecture and |
|
|
|
735 |
|
00:30:19,919 --> 00:30:24,720 |
|
context is necessary or helpful but your |
|
|
|
736 |
|
00:30:22,399 --> 00:30:27,399 |
|
model just isn't very good at capturing
|
|
|
737 |
|
00:30:24,720 --> 00:30:29,720 |
|
it yeah exactly so this is one thing
|
|
|
738 |
|
00:30:27,399 --> 00:30:31,960 |
|
that people can do so there was an
|
|
|
739 |
|
00:30:29,720 --> 00:30:34,240 |
|
interesting paper um let me see if I can |
|
|
|
740 |
|
00:30:31,960 --> 00:30:34,240 |
|
find |
|
|
|
741 |
|
00:30:39,960 --> 00:30:49,080 |
|
it so this is a paper from a long time |
|
|
|
742 |
|
00:30:45,760 --> 00:30:51,600 |
|
ago where they did something like |
|
|
|
743 |
|
00:30:49,080 --> 00:30:53,360 |
|
this um it's evaluating machine |
|
|
|
744 |
|
00:30:51,600 --> 00:30:54,480 |
|
translation systems with second language |
|
|
|
745 |
|
00:30:53,360 --> 00:30:57,399 |
|
proficiency |
|
|
|
746 |
|
00:30:54,480 --> 00:31:01,240 |
|
tests and basically what they did is |
|
|
|
747 |
|
00:30:57,399 --> 00:31:03,519 |
|
they had these English proficiency tests |
|
|
|
748 |
|
00:31:01,240 --> 00:31:05,320 |
|
for uh I think it was like middle |
|
|
|
749 |
|
00:31:03,519 --> 00:31:07,480 |
|
schoolers or high schoolers or something |
|
|
|
750 |
|
00:31:05,320 --> 00:31:09,600 |
|
like this and then they used machine |
|
|
|
751 |
|
00:31:07,480 --> 00:31:11,240 |
|
translation systems to translate them |
|
|
|
752 |
|
00:31:09,600 --> 00:31:13,600 |
|
into Japanese and then they asked |
|
|
|
753 |
|
00:31:11,240 --> 00:31:19,720 |
|
Japanese students to solve them in |
|
|
|
754 |
|
00:31:13,600 --> 00:31:19,720 |
|
Japanese and so what they did is they
|
|
|
755 |
|
00:31:20,000 --> 00:31:26,159 |
|
asked uh Anonymous system G and |
|
|
|
756 |
|
00:31:23,679 --> 00:31:28,200 |
|
Anonymous system Y which are Google and |
|
|
|
757 |
|
00:31:26,159 --> 00:31:32,360 |
|
Yahoo |
|
|
|
758 |
|
00:31:28,200 --> 00:31:34,720 |
|
and uh and a human without context and a |
|
|
|
759 |
|
00:31:32,360 --> 00:31:36,279 |
|
human with context to translate them so |
|
|
|
760 |
|
00:31:34,720 --> 00:31:38,720 |
|
they ask humans to translate each |
|
|
|
761 |
|
00:31:36,279 --> 00:31:40,880 |
|
sentence without giving any context and |
|
|
|
762 |
|
00:31:38,720 --> 00:31:44,320 |
|
they ask humans to translate each uh |
|
|
|
763 |
|
00:31:40,880 --> 00:31:46,399 |
|
sentence with giving context and what |
|
|
|
764 |
|
00:31:44,320 --> 00:31:48,960 |
|
they were able to find was in this case |
|
|
|
765 |
|
00:31:46,399 --> 00:31:50,080 |
|
humans with context the Japanese |
|
|
|
766 |
|
00:31:48,960 --> 00:31:53,080 |
|
students were able to answer the |
|
|
|
767 |
|
00:31:50,080 --> 00:31:55,360 |
|
questions most of the time um whereas if |
|
|
|
768 |
|
00:31:53,080 --> 00:31:57,559 |
|
they translated without context like G
|
|
|
769 |
|
00:31:55,360 --> 00:31:59,039 |
|
and Y were doing at that time actually |
|
|
|
770 |
|
00:31:57,559 --> 00:32:01,320 |
|
Y was almost as good as human
|
|
|
771 |
|
00:31:59,039 --> 00:32:04,080 |
|
translators at you know achieving the |
|
|
|
772 |
|
00:32:01,320 --> 00:32:05,440 |
|
task so but basically like the
|
|
|
773 |
|
00:32:04,080 --> 00:32:09,159 |
|
important thing here is they were able |
|
|
|
774 |
|
00:32:05,440 --> 00:32:11,039 |
|
to confirm their you know idea that in |
|
|
|
775 |
|
00:32:09,159 --> 00:32:12,519 |
|
this case humans with context were much |
|
|
|
776 |
|
00:32:11,039 --> 00:32:13,799 |
|
better than humans without context so |
|
|
|
777 |
|
00:32:12,519 --> 00:32:16,279 |
|
that would verify your like sub |
|
|
|
778 |
|
00:32:13,799 --> 00:32:18,080 |
|
assumption right and so this is just |
|
|
|
779 |
|
00:32:16,279 --> 00:32:20,279 |
|
like one |
|
|
|
780 |
|
00:32:18,080 --> 00:32:22,240 |
|
example of
|
|
|
781 |
|
00:32:20,279 --> 00:32:25,960 |
|
something that you can |
|
|
|
782 |
|
00:32:22,240 --> 00:32:27,480 |
|
do uh but the basic idea is like your |
|
|
|
783 |
|
00:32:25,960 --> 00:32:29,320 |
|
final result is that you want to build a
|
|
|
784 |
|
00:32:27,480 --> 00:32:30,799 |
|
system that does better on some |
|
|
|
785 |
|
00:32:29,320 --> 00:32:32,159 |
|
Benchmark that you care about there's a |
|
|
|
786 |
|
00:32:30,799 --> 00:32:33,600 |
|
bunch of things that go into whether it |
|
|
|
787 |
|
00:32:32,159 --> 00:32:36,159 |
|
does better or not your evaluation |
|
|
|
788 |
|
00:32:33,600 --> 00:32:38,960 |
|
system your model your training data |
|
|
|
789 |
|
00:32:36,159 --> 00:32:41,559 |
|
your evaluation data set
|
|
|
790 |
|
00:32:38,960 --> 00:32:43,080 |
|
um and things like that so can you break |
|
|
|
791 |
|
00:32:41,559 --> 00:32:45,360 |
|
that down into sub questions that you |
|
|
|
792 |
|
00:32:43,080 --> 00:32:48,039 |
|
could ask where you could verify that |
|
|
|
793 |
|
00:32:45,360 --> 00:32:49,720 |
|
it's working or not uh based on whether |
|
|
|
794 |
|
00:32:48,039 --> 00:32:51,600 |
|
those things are happening another thing |
|
|
|
795 |
|
00:32:49,720 --> 00:32:53,159 |
|
people do in ML-oriented settings is
|
|
|
796 |
|
00:32:51,600 --> 00:32:54,919 |
|
create a toy data set where they know |
|
|
|
797 |
|
00:32:53,159 --> 00:32:57,200 |
|
the phenomenon they're interested in |
|
|
|
798 |
|
00:32:54,919 --> 00:32:59,679 |
|
exists and train their models on it
|
|
|
799 |
|
00:32:57,200 --> 00:33:02,919 |
|
and make sure that they work there um so |
|
|
|
800 |
|
00:32:59,679 --> 00:33:02,919 |
|
that's another thing that you can do
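
As a rough illustration of this "toy data / targeted subset" idea, here is a minimal Python sketch. It is not from the lecture; the example fields and the needs_context predicate are hypothetical placeholders for whatever phenomenon you care about.

```python
# Sketch: score two systems on the full test set and on the slice where we
# believe the phenomenon (e.g., context dependence) actually matters.
from statistics import mean

def accuracy(examples, predict):
    return mean(1.0 if predict(x["input"]) == x["label"] else 0.0 for x in examples)

def compare_on_subset(test_set, predict_baseline, predict_proposed, needs_context):
    subset = [x for x in test_set if needs_context(x)]  # phenomenon-specific slice
    report = {}
    for name, examples in [("full", test_set), ("subset", subset)]:
        report[name] = {
            "baseline": accuracy(examples, predict_baseline),
            "proposed": accuracy(examples, predict_proposed),
            "n": len(examples),
        }
    return report
```

If the proposed method only wins on the targeted subset, that by itself tells you which sub-assumption is doing the work.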
|
|
|
801 |
|
00:33:03,120 --> 00:33:07,639 |
|
cool um any questions about
|
|
|
802 |
|
00:33:08,080 --> 00:33:12,760 |
|
this okay |
|
|
|
803 |
|
00:33:10,200 --> 00:33:16,519 |
|
so the next thing is running
|
|
|
804 |
|
00:33:12,760 --> 00:33:19,000 |
|
experiments um so in order to do this |
|
|
|
805 |
|
00:33:16,519 --> 00:33:21,399 |
|
you'll find data that will answer your |
|
|
|
806 |
|
00:33:19,000 --> 00:33:23,639 |
|
research question uh run experiments and |
|
|
|
807 |
|
00:33:21,399 --> 00:33:25,720 |
|
calculate numbers uh calculate |
|
|
|
808 |
|
00:33:23,639 --> 00:33:28,279 |
|
significant differences and analyze |
|
|
|
809 |
|
00:33:25,720 --> 00:33:31,080 |
|
effects whoops |
|
|
|
810 |
|
00:33:28,279 --> 00:33:35,519 |
|
and so this is a basic pipeline that we |
|
|
|
811 |
|
00:33:31,080 --> 00:33:37,760 |
|
want to follow so obtaining test data so |
|
|
|
812 |
|
00:33:35,519 --> 00:33:41,200 |
|
in order to obtain test data uh we would |
|
|
|
813 |
|
00:33:37,760 --> 00:33:42,799 |
|
like to find data sets um so if you're |
|
|
|
814 |
|
00:33:41,200 --> 00:33:46,200 |
|
building on previous work the safest |
|
|
|
815 |
|
00:33:42,799 --> 00:33:48,960 |
|
thing that you can do um is start with |
|
|
|
816 |
|
00:33:46,200 --> 00:33:51,919 |
|
the same data sets if you're answering a |
|
|
|
817 |
|
00:33:48,960 --> 00:33:53,799 |
|
new question um you can think about can |
|
|
|
818 |
|
00:33:51,919 --> 00:33:55,399 |
|
you repurpose other data sets to answer |
|
|
|
819 |
|
00:33:53,799 --> 00:33:57,679 |
|
the question so very often there will be |
|
|
|
820 |
|
00:33:55,399 --> 00:34:00,080 |
|
a data set that is uh appropriate for |
|
|
|
821 |
|
00:33:57,679 --> 00:34:03,360 |
|
answering your question um and
|
|
|
822 |
|
00:34:00,080 --> 00:34:05,760 |
|
you can go and find that um actually
|
|
|
823 |
|
00:34:03,360 --> 00:34:06,919 |
|
our wonderful TJ has created a system |
|
|
|
824 |
|
00:34:05,760 --> 00:34:08,800 |
|
called datafinder that will |
|
|
|
825 |
|
00:34:06,919 --> 00:34:11,159 |
|
automatically find it for you so if you |
|
|
|
826 |
|
00:34:08,800 --> 00:34:13,679 |
|
want to uh search for data sets you can |
|
|
|
827 |
|
00:34:11,159 --> 00:34:16,760 |
|
use his system or ask him about it but |
|
|
|
828 |
|
00:34:13,679 --> 00:34:20,359 |
|
um uh but if no appropriate data set |
|
|
|
829 |
|
00:34:16,760 --> 00:34:24,359 |
|
exists you can uh create your own and |
|
|
|
830 |
|
00:34:20,359 --> 00:34:25,879 |
|
particularly for industry use cases it's |
|
|
|
831 |
|
00:34:24,359 --> 00:34:28,119 |
|
very common that you need to go in and |
|
|
|
832 |
|
00:34:25,879 --> 00:34:30,040 |
|
create your own or if you're planning on |
|
|
|
833 |
|
00:34:28,119 --> 00:34:31,639 |
|
doing research in Academia afterwards |
|
|
|
834 |
|
00:34:30,040 --> 00:34:33,119 |
|
very often you'll come up with a |
|
|
|
835 |
|
00:34:31,639 --> 00:34:34,639 |
|
research question where no data set |
|
|
|
836 |
|
00:34:33,119 --> 00:34:36,679 |
|
exists so you'll have to create your own |
|
|
|
837 |
|
00:34:34,639 --> 00:34:38,960 |
|
anyway so this is something that's |
|
|
|
838 |
|
00:34:36,679 --> 00:34:41,639 |
|
really important to be able to do well |
|
|
|
839 |
|
00:34:38,960 --> 00:34:44,639 |
|
uh in most |
|
|
|
840 |
|
00:34:41,639 --> 00:34:49,240 |
|
cases um so I'll be talking about how to |
|
|
|
841 |
|
00:34:44,639 --> 00:34:53,280 |
|
do all of these so data set lists um the |
|
|
|
842 |
|
00:34:49,240 --> 00:34:55,159 |
|
best one I think by far in uh natural |
|
|
|
843 |
|
00:34:53,280 --> 00:34:58,359 |
|
language processing nowadays is Hugging
|
|
|
844 |
|
00:34:55,159 --> 00:35:02,960 |
|
Face Datasets um there's also other
|
|
|
845 |
|
00:34:58,359 --> 00:35:05,359 |
|
data resources like um ELRA is uh
|
|
|
846 |
|
00:35:02,960 --> 00:35:07,240 |
|
another one kind of by the more |
|
|
|
847 |
|
00:35:05,359 --> 00:35:09,800 |
|
traditional natural language processing |
|
|
|
848 |
|
00:35:07,240 --> 00:35:12,960 |
|
Community there's also the LDC the |
|
|
|
849 |
|
00:35:09,800 --> 00:35:15,680 |
|
Linguistic Data Consortium and there
|
|
|
850 |
|
00:35:12,960 --> 00:35:17,119 |
|
are some older heavily annotated data |
|
|
|
851 |
|
00:35:15,680 --> 00:35:20,040 |
|
sets that are only available through |
|
|
|
852 |
|
00:35:17,119 --> 00:35:22,000 |
|
those at CMU you have the ability to |
|
|
|
853 |
|
00:35:20,040 --> 00:35:24,520 |
|
download things from LDC so if you find |
|
|
|
854 |
|
00:35:22,000 --> 00:35:26,960 |
|
an LDC data set in any papers that |
|
|
|
855 |
|
00:35:24,520 --> 00:35:29,640 |
|
you're reading or online um you need to
|
|
|
856 |
|
00:35:26,960 --> 00:35:31,000 |
|
register for that and I'm the person
|
|
|
857 |
|
00:35:29,640 --> 00:35:33,280 |
|
who's in charge of it so I'll give you |
|
|
|
858 |
|
00:35:31,000 --> 00:35:35,520 |
|
access and then you can use
|
|
|
859 |
|
00:35:33,280 --> 00:35:37,400 |
|
it um there's also things like Papers
|
|
|
860 |
|
00:35:35,520 --> 00:35:39,680 |
|
with Code and Papers with Code basically
|
|
|
861 |
|
00:35:37,400 --> 00:35:41,359 |
|
automatically extracts uh kind of like |
|
|
|
862 |
|
00:35:39,680 --> 00:35:42,839 |
|
the names of data sets so even some |
|
|
|
863 |
|
00:35:41,359 --> 00:35:45,599 |
|
things that don't appear on Hugging
|
|
|
864 |
|
00:35:42,839 --> 00:35:45,599 |
|
Face will appear there
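
For reference, a minimal sketch of pulling an existing benchmark through the Hugging Face datasets library; the dataset name below is just an illustration, substitute whatever fits your task.

```python
# Load a dataset from the Hugging Face Hub; it is downloaded and cached locally.
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")   # illustrative choice of benchmark
print(dataset)                           # available splits: train / validation / test
print(dataset["validation"][0])          # one example, e.g. {"sentence": ..., "label": ..., "idx": ...}
```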
|
|
|
865 |
|
00:35:46,359 --> 00:35:52,440 |
|
so annotating data um when you
|
|
|
866 |
|
00:35:50,640 --> 00:35:54,599 |
|
annotate data you first need to decide |
|
|
|
867 |
|
00:35:52,440 --> 00:35:57,599 |
|
how much to annotate sample appropriate |
|
|
|
868 |
|
00:35:54,599 --> 00:36:00,240 |
|
data create annotation guidelines |
|
|
|
869 |
|
00:35:57,599 --> 00:36:03,160 |
|
uh either annotate yourself or hire and |
|
|
|
870 |
|
00:36:00,240 --> 00:36:05,839 |
|
supervise annotators and evaluate
|
|
|
871 |
|
00:36:03,160 --> 00:36:07,720 |
|
quality so a very common question that a
|
|
|
872 |
|
00:36:05,839 --> 00:36:10,240 |
|
lot of people ask me is how much test |
|
|
|
873 |
|
00:36:07,720 --> 00:36:12,800 |
|
data do you need |
|
|
|
874 |
|
00:36:10,240 --> 00:36:14,800 |
|
and I'm going to talk about uh |
|
|
|
875 |
|
00:36:12,800 --> 00:36:17,520 |
|
statistical significance tests in a |
|
|
|
876 |
|
00:36:14,800 --> 00:36:19,520 |
|
second but um basically you need to have |
|
|
|
877 |
|
00:36:17,520 --> 00:36:23,240 |
|
enough to have a statistically |
|
|
|
878 |
|
00:36:19,520 --> 00:36:28,119 |
|
significant difference um between |
|
|
|
879 |
|
00:36:23,240 --> 00:36:32,079 |
|
methods and the way you do this actually |
|
|
|
880 |
|
00:36:28,119 --> 00:36:32,079 |
|
sorry very quickly let me |
|
|
|
881 |
|
00:36:33,240 --> 00:36:37,599 |
|
check I rearranged my slides and I want
|
|
|
882 |
|
00:36:35,560 --> 00:36:40,359 |
|
to make sure that I didn't accidentally |
|
|
|
883 |
|
00:36:37,599 --> 00:36:42,280 |
|
remove the
|
|
|
884 |
|
00:36:40,359 --> 00:36:44,520 |
|
slides on statistical significance which |
|
|
|
885 |
|
00:36:42,280 --> 00:36:44,520 |
|
would be |
|
|
|
886 |
|
00:36:51,680 --> 00:36:57,880 |
|
a problem okay
|
|
|
887 |
|
00:36:55,240 --> 00:36:59,200 |
|
um sorry hang on one second I just |
|
|
|
888 |
|
00:36:57,880 --> 00:37:02,240 |
|
realized that I don't have the slides |
|
|
|
889 |
|
00:36:59,200 --> 00:37:03,839 |
|
for statistical significance in this
|
|
|
890 |
|
00:37:02,240 --> 00:37:05,280 |
|
presentation so let me grab them from |
|
|
|
891 |
|
00:37:03,839 --> 00:37:09,440 |
|
the |
|
|
|
892 |
|
00:37:05,280 --> 00:37:09,440 |
|
last uh the last |
|
|
|
893 |
|
00:37:10,520 --> 00:37:14,640 |
|
this is pretty
|
|
|
894 |
|
00:37:25,599 --> 00:37:28,599 |
|
important |
|
|
|
895 |
|
00:37:33,160 --> 00:37:38,599 |
|
okay so yeah let me explain statistical |
|
|
|
896 |
|
00:37:35,560 --> 00:37:40,319 |
|
significance here um so basically when |
|
|
|
897 |
|
00:37:38,599 --> 00:37:43,319 |
|
we're doing statistical |
|
|
|
898 |
|
00:37:40,319 --> 00:37:44,680 |
|
testing um let's say we have two models |
|
|
|
899 |
|
00:37:43,319 --> 00:37:47,800 |
|
with similar |
|
|
|
900 |
|
00:37:44,680 --> 00:37:50,160 |
|
accuracies and these models with similar |
|
|
|
901 |
|
00:37:47,800 --> 00:37:52,240 |
|
accuracies let's say model one is a |
|
|
|
902 |
|
00:37:50,160 --> 00:37:56,880 |
|
generative model model two is a |
|
|
|
903 |
|
00:37:52,240 --> 00:37:58,520 |
|
discriminative model and on data
|
|
|
904 |
|
00:37:56,880 --> 00:38:00,200 |
|
set one we have this result on data set |
|
|
|
905 |
|
00:37:58,520 --> 00:38:02,480 |
|
two we have another result on data set |
|
|
|
906 |
|
00:38:00,200 --> 00:38:04,720 |
|
three we have uh another |
|
|
|
907 |
|
00:38:02,480 --> 00:38:06,440 |
|
result and so then the question is how |
|
|
|
908 |
|
00:38:04,720 --> 00:38:09,480 |
|
can we tell if the differences are due |
|
|
|
909 |
|
00:38:06,440 --> 00:38:13,839 |
|
to consistent trends that uh will hold |
|
|
|
910 |
|
00:38:09,480 --> 00:38:16,119 |
|
on other data sets or um if they are |
|
|
|
911 |
|
00:38:13,839 --> 00:38:18,480 |
|
kind of random noise due to the fact |
|
|
|
912 |
|
00:38:16,119 --> 00:38:21,000 |
|
that
|
|
|
913 |
|
00:38:18,480 --> 00:38:24,200 |
|
you know data
|
|
|
914 |
|
00:38:21,000 --> 00:38:25,640 |
|
sets vary models vary um and so the way |
|
|
|
915 |
|
00:38:24,200 --> 00:38:28,319 |
|
we do this is through statistical |
|
|
|
916 |
|
00:38:25,640 --> 00:38:31,839 |
|
significance testing |
|
|
|
917 |
|
00:38:28,319 --> 00:38:34,319 |
|
um so I'm going to cover this briefly in |
|
|
|
918 |
|
00:38:31,839 --> 00:38:36,920 |
|
this class but you can see Dror et
|
|
|
919 |
|
00:38:34,319 --> 00:38:38,640 |
|
al. for an overview and also we're going
|
|
|
920 |
|
00:38:36,920 --> 00:38:41,520 |
|
to have a recitation on how to actually |
|
|
|
921 |
|
00:38:38,640 --> 00:38:44,280 |
|
run statistical significance tests so um |
|
|
|
922 |
|
00:38:41,520 --> 00:38:47,920 |
|
you can take a look at that |
|
|
|
923 |
|
00:38:44,280 --> 00:38:51,680 |
|
there and so the basic idea is given a |
|
|
|
924 |
|
00:38:47,920 --> 00:38:54,280 |
|
quantity we compute certain measures of
|
|
|
925 |
|
00:38:51,680 --> 00:38:57,880 |
|
uncertainty with respect to the quantity |
|
|
|
926 |
|
00:38:54,280 --> 00:38:59,960 |
|
so number one is a p-value and the p-
|
|
|
927 |
|
00:38:57,880 --> 00:39:02,240 |
|
value is what is the probability that a |
|
|
|
928 |
|
00:38:59,960 --> 00:39:06,119 |
|
difference with another quantity is by |
|
|
|
929 |
|
00:39:02,240 --> 00:39:08,359 |
|
chance and so a lower p-value means
|
|
|
930 |
|
00:39:06,119 --> 00:39:11,839 |
|
more likelihood of having a significant |
|
|
|
931 |
|
00:39:08,359 --> 00:39:13,200 |
|
difference usually the threshold for |
|
|
|
932 |
|
00:39:11,839 --> 00:39:16,520 |
|
saying that we have a significant |
|
|
|
933 |
|
00:39:13,200 --> 00:39:20,280 |
|
difference is there's a 5% chance |
|
|
|
934 |
|
00:39:16,520 --> 00:39:22,160 |
|
0.05 that this difference between the |
|
|
|
935 |
|
00:39:20,280 --> 00:39:25,760 |
|
models was due to chance or like data |
|
|
|
936 |
|
00:39:22,160 --> 00:39:28,520 |
|
sampling or things like that so p
|
|
|
937 |
|
00:39:25,760 --> 00:39:30,880 |
|
less than 0.05 is kind of a threshold |
|
|
|
938 |
|
00:39:28,520 --> 00:39:30,880 |
|
for |
|
|
|
939 |
|
00:39:31,119 --> 00:39:35,680 |
|
significance another thing that we can |
|
|
|
940 |
|
00:39:33,040 --> 00:39:38,720 |
|
measure is confidence intervals and the |
|
|
|
941 |
|
00:39:35,680 --> 00:39:40,760 |
|
confidence interval is um what is the |
|
|
|
942 |
|
00:39:38,720 --> 00:39:42,560 |
|
range under which we could expect |
|
|
|
943 |
|
00:39:40,760 --> 00:39:44,760 |
|
another trial to fall and I'll talk |
|
|
|
944 |
|
00:39:42,560 --> 00:39:47,359 |
|
about both of |
|
|
|
945 |
|
00:39:44,760 --> 00:39:49,280 |
|
these um there's another concept called |
|
|
|
946 |
|
00:39:47,359 --> 00:39:53,880 |
|
paired versus unpaired |
|
|
|
947 |
|
00:39:49,280 --> 00:39:56,680 |
|
tests and an unpaired test
|
|
|
948 |
|
00:39:53,880 --> 00:39:59,480 |
|
compares the means of a
|
|
|
949 |
|
00:39:56,680 --> 00:40:02,359 |
|
quantity on two unrelated |
|
|
|
950 |
|
00:39:59,480 --> 00:40:04,040 |
|
groups so an example could be the test |
|
|
|
951 |
|
00:40:02,359 --> 00:40:07,040 |
|
of the significance of a difference of |
|
|
|
952 |
|
00:40:04,040 --> 00:40:09,160 |
|
accuracies of a model on two data sets |
|
|
|
953 |
|
00:40:07,040 --> 00:40:12,400 |
|
so like let's say I have data set number |
|
|
|
954 |
|
00:40:09,160 --> 00:40:16,440 |
|
one and data set number two what is the |
|
|
|
955 |
|
00:40:12,400 --> 00:40:18,000 |
|
likelihood that the um there's actually |
|
|
|
956 |
|
00:40:16,440 --> 00:40:20,839 |
|
a real difference in the data sets as |
|
|
|
957 |
|
00:40:18,000 --> 00:40:23,400 |
|
opposed to just random
|
|
|
958 |
|
00:40:20,839 --> 00:40:26,599 |
|
sampling error between
|
|
|
959 |
|
00:40:23,400 --> 00:40:28,560 |
|
them in contrast a paired test compares the
|
|
|
960 |
|
00:40:26,599 --> 00:40:31,400 |
|
means of a quantity on one data set |
|
|
|
961 |
|
00:40:28,560 --> 00:40:32,480 |
|
under two conditions and so an example |
|
|
|
962 |
|
00:40:31,400 --> 00:40:33,760 |
|
of this could be testing the |
|
|
|
963 |
|
00:40:32,480 --> 00:40:37,319 |
|
significance of a difference of |
|
|
|
964 |
|
00:40:33,760 --> 00:40:39,640 |
|
accuracies of two models on one data set |
|
|
|
965 |
|
00:40:37,319 --> 00:40:42,000 |
|
so this is a really important difference |
|
|
|
966 |
|
00:40:39,640 --> 00:40:43,960 |
|
and the reason why it's a really |
|
|
|
967 |
|
00:40:42,000 --> 00:40:45,520 |
|
important difference well number one |
|
|
|
968 |
|
00:40:43,960 --> 00:40:49,119 |
|
we're most commonly interested in the |
|
|
|
969 |
|
00:40:45,520 --> 00:40:51,839 |
|
latter number two if we can make
|
|
|
970 |
|
00:40:49,119 --> 00:40:54,280 |
|
assumptions about |
|
|
|
971 |
|
00:40:51,839 --> 00:40:56,079 |
|
the association of the points in the |
|
|
|
972 |
|
00:40:54,280 --> 00:40:58,680 |
|
data set we're much much more likely to |
|
|
|
973 |
|
00:40:56,079 --> 00:41:00,440 |
|
get a significant result because we can |
|
|
|
974 |
|
00:40:58,680 --> 00:41:02,240 |
|
look at the difference of the
|
|
|
975 |
|
00:41:00,440 --> 00:41:06,000 |
|
models on individual data points as |
|
|
|
976 |
|
00:41:02,240 --> 00:41:10,400 |
|
opposed to looking
|
|
|
977 |
|
00:41:06,000 --> 00:41:10,400 |
|
at just the difference in the means
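
As a hedged sketch of this distinction, here is a toy comparison in Python using SciPy on simulated per-example 0/1 correctness scores; the numbers are made up, not real results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Per-example correctness of two systems on the SAME 200 test examples.
sys1 = rng.binomial(1, 0.70, size=200)
extra = rng.binomial(1, 0.10, size=200)
sys2 = np.clip(sys1 + extra, 0, 1)       # sys2 additionally gets ~10% of sys1's errors right

# Unpaired: treats the two score lists as unrelated groups, ignoring the pairing.
print(stats.ttest_ind(sys1, sys2))

# Paired: works on the per-example differences; usually far more sensitive when
# both systems are evaluated on the same examples.
print(stats.ttest_rel(sys1, sys2))
```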
|
|
|
978 |
|
00:41:10,520 --> 00:41:16,839 |
|
so one example of a statistical
|
|
|
979 |
|
00:41:13,760 --> 00:41:18,280 |
|
significance test is a bootstrap test |
|
|
|
980 |
|
00:41:16,839 --> 00:41:19,760 |
|
and the bootstrap test is really |
|
|
|
981 |
|
00:41:18,280 --> 00:41:21,680 |
|
convenient because you can implement it |
|
|
|
982 |
|
00:41:19,760 --> 00:41:25,160 |
|
for any evaluation metric that you want |
|
|
|
983 |
|
00:41:21,680 --> 00:41:26,880 |
|
to be using and so in NLP we can use |
|
|
|
984 |
|
00:41:25,160 --> 00:41:29,560 |
|
lots of different evaluation metrics we
|
|
|
985 |
|
00:41:26,880 --> 00:41:31,119 |
|
can use an evaluation metric like um |
|
|
|
986 |
|
00:41:29,560 --> 00:41:34,160 |
|
accuracy but we can also use an |
|
|
|
987 |
|
00:41:31,119 --> 00:41:37,400 |
|
evaluation metric like F-measure for
|
|
|
988 |
|
00:41:34,160 --> 00:41:40,560 |
|
classification or a BLEU score or
|
|
|
989 |
|
00:41:37,400 --> 00:41:43,599 |
|
character F score or word error rate or |
|
|
|
990 |
|
00:41:40,560 --> 00:41:48,440 |
|
something like that for um for various |
|
|
|
991 |
|
00:41:43,599 --> 00:41:50,720 |
|
tasks and this is applicable to any
|
|
|
992 |
|
00:41:48,440 --> 00:41:54,000 |
|
metric you want to use uh any quantity |
|
|
|
993 |
|
00:41:50,720 --> 00:41:57,319 |
|
you want to measure also so the basic |
|
|
|
994 |
|
00:41:54,000 --> 00:41:59,079 |
|
idea of a bootstrap test is a method |
|
|
|
995 |
|
00:41:57,319 --> 00:42:02,520 |
|
that can measure P values and confidence |
|
|
|
996 |
|
00:41:59,079 --> 00:42:06,040 |
|
intervals by resampling data and so the |
|
|
|
997 |
|
00:42:02,520 --> 00:42:08,480 |
|
way you do this is you sample subsets |
|
|
|
998 |
|
00:42:06,040 --> 00:42:11,960 |
|
from your dev or test set with
|
|
|
999 |
|
00:42:08,480 --> 00:42:14,720 |
|
replacement so you might sample 10,000 |
|
|
|
1000 |
|
00:42:11,960 --> 00:42:19,599 |
|
times and you measure accuracy on these |
|
|
|
1001 |
|
00:42:14,720 --> 00:42:22,520 |
|
many subsets and then you
|
|
|
1002 |
|
00:42:19,599 --> 00:42:25,640 |
|
look at all of the accuracies
|
|
|
1003 |
|
00:42:22,520 --> 00:42:27,680 |
|
that you got on these subsampled data
|
|
|
1004 |
|
00:42:25,640 --> 00:42:31,079 |
|
sets and then you take the middle |
|
|
|
1005 |
|
00:42:27,680 --> 00:42:32,640 |
|
percentile range like 2.5 to 97.5 and |
|
|
|
1006 |
|
00:42:31,079 --> 00:42:34,960 |
|
you can treat that as a confidence |
|
|
|
1007 |
|
00:42:32,640 --> 00:42:37,640 |
|
interval the 95% confidence interval |
|
|
|
1008 |
|
00:42:34,960 --> 00:42:40,720 |
|
about where you're like 95% certain that |
|
|
|
1009 |
|
00:42:37,640 --> 00:42:40,720 |
|
your results will fall
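
A minimal sketch of the percentile-bootstrap confidence interval just described, assuming you have per-example scores such as 0/1 correctness; the aggregate metric can be swapped out.

```python
import numpy as np

def bootstrap_ci(scores, metric=np.mean, n_resamples=10_000, seed=0):
    """Resample the test set with replacement and take the middle 95% of the metric."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    resampled = [metric(rng.choice(scores, size=len(scores), replace=True))
                 for _ in range(n_resamples)]
    return np.percentile(resampled, [2.5, 97.5])

# Example with fake data: 0/1 correctness on a 100-example test set.
low, high = bootstrap_ci(np.random.default_rng(1).binomial(1, 0.8, size=100))
print(f"95% CI for accuracy: [{low:.3f}, {high:.3f}]")
```

With only 100 examples the interval comes out wide, which is exactly the point made later about small test sets.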
|
|
|
1010 |
|
00:42:40,880 --> 00:42:48,240 |
|
another thing that you can do is
|
|
|
1011 |
|
00:42:45,119 --> 00:42:50,040 |
|
you can do a paired test and what the |
|
|
|
1012 |
|
00:42:48,240 --> 00:42:51,200 |
|
paired test does is it measures the |
|
|
|
1013 |
|
00:42:50,040 --> 00:42:53,359 |
|
number of |
|
|
|
1014 |
|
00:42:51,200 --> 00:42:55,839 |
|
wins um
|
|
|
1015 |
|
00:42:53,359 --> 00:42:57,720 |
|
and you measure the percentage of
|
|
|
1016 |
|
00:42:55,839 --> 00:43:00,920 |
|
wins and this is the confidence that a
|
|
|
1017 |
|
00:42:57,720 --> 00:43:03,280 |
|
gain in accuracy is not by chance um and |
|
|
|
1018 |
|
00:43:00,920 --> 00:43:05,920 |
|
so this could be one minus the P value |
|
|
|
1019 |
|
00:43:03,280 --> 00:43:07,960 |
|
of the paired test so this is easy to |
|
|
|
1020 |
|
00:43:05,920 --> 00:43:09,960 |
|
implement applicable to any evaluation |
|
|
|
1021 |
|
00:43:07,960 --> 00:43:13,480 |
|
measure but somewhat biased on small |
|
|
|
1022 |
|
00:43:09,960 --> 00:43:17,240 |
|
data sets um just to maybe I can give a |
|
|
|
1023 |
|
00:43:13,480 --> 00:43:19,920 |
|
more concrete example so let's say we |
|
|
|
1024 |
|
00:43:17,240 --> 00:43:27,520 |
|
have a classification data set what you |
|
|
|
1025 |
|
00:43:19,920 --> 00:43:30,400 |
|
can do is um let's say we have a b c d e |
|
|
|
1026 |
|
00:43:27,520 --> 00:43:36,960 |
|
or
|
|
|
1027 |
|
00:43:30,400 --> 00:43:39,559 |
|
um X1 X2 X3 X4 |
|
|
|
1028 |
|
00:43:36,960 --> 00:43:44,520 |
|
X5 so this is our classification
|
|
|
1029 |
|
00:43:39,559 --> 00:43:47,440 |
|
data set and um we have system |
|
|
|
1030 |
|
00:43:44,520 --> 00:43:52,000 |
|
one system |
|
|
|
1031 |
|
00:43:47,440 --> 00:43:53,760 |
|
two and we have right right right right |
|
|
|
1032 |
|
00:43:52,000 --> 00:43:56,599 |
|
wrong |
|
|
|
1033 |
|
00:43:53,760 --> 00:44:00,440 |
|
right uh right wrong |
|
|
|
1034 |
|
00:43:56,599 --> 00:44:03,040 |
|
wrong right or something like this and so
|
|
|
1035 |
|
00:44:00,440 --> 00:44:07,079 |
|
what we do is we randomly sample a sub |
|
|
|
1036 |
|
00:44:03,040 --> 00:44:08,760 |
|
data set um and let's say this is like |
|
|
|
1037 |
|
00:44:07,079 --> 00:44:10,440 |
|
X3 |
|
|
|
1038 |
|
00:44:08,760 --> 00:44:13,599 |
|
X2 |
|
|
|
1039 |
|
00:44:10,440 --> 00:44:17,599 |
|
X4 X1 |
|
|
|
1040 |
|
00:44:13,599 --> 00:44:20,440 |
|
X2 and so this is our sub data set uh
|
|
|
1041 |
|
00:44:17,599 --> 00:44:20,440 |
|
what we do |
|
|
|
1042 |
|
00:44:20,640 --> 00:44:28,920 |
|
is um so X3 would be |
|
|
|
1043 |
|
00:44:23,520 --> 00:44:34,559 |
|
0 1 X2 would be 1 1 X4 would be 1 0
|
|
|
1044 |
|
00:44:28,920 --> 00:44:39,079 |
|
X1 would be 1 1 and
|
|
|
1045 |
|
00:44:34,559 --> 00:44:42,319 |
|
then uh X2 would be 1 1 and so the
|
|
|
1046 |
|
00:44:39,079 --> 00:44:45,319 |
|
overall accuracy here |
|
|
|
1047 |
|
00:44:42,319 --> 00:44:45,319 |
|
is |
|
|
|
1048 |
|
00:44:45,480 --> 00:44:50,240 |
|
60% and |
|
|
|
1049 |
|
00:44:47,440 --> 00:44:51,880 |
|
80% so if we didn't do any statistical |
|
|
|
1050 |
|
00:44:50,240 --> 00:44:55,400 |
|
significance test we might say oh system |
|
|
|
1051 |
|
00:44:51,880 --> 00:44:57,680 |
|
2 is better obviously um but if we do |
|
|
|
1052 |
|
00:44:55,400 --> 00:45:01,079 |
|
the significance test this is one sample |
|
|
|
1053 |
|
00:44:57,680 --> 00:45:03,119 |
|
from the bootstrap test in |
|
|
|
1054 |
|
00:45:01,079 --> 00:45:07,040 |
|
here |
|
|
|
1055 |
|
00:45:03,119 --> 00:45:09,079 |
|
now we get like 80% and 80% and it's |
|
|
|
1056 |
|
00:45:07,040 --> 00:45:11,079 |
|
like okay actually maybe in some cases |
|
|
|
1057 |
|
00:45:09,079 --> 00:45:13,480 |
|
these systems are equally good maybe
|
|
|
1058 |
|
00:45:11,079 --> 00:45:16,079 |
|
there's a tie or if we sampled another |
|
|
|
1059 |
|
00:45:13,480 --> 00:45:19,079 |
|
one uh let's say we |
|
|
|
1060 |
|
00:45:16,079 --> 00:45:19,079 |
|
sampled |
|
|
|
1061 |
|
00:45:19,359 --> 00:45:27,319 |
|
uh |
|
|
|
1062 |
|
00:45:20,960 --> 00:45:30,680 |
|
X4 X1 X2 X4 X1 |
|
|
|
1063 |
|
00:45:27,319 --> 00:45:36,160 |
|
um then we would get something like
|
|
|
1064 |
|
00:45:30,680 --> 00:45:37,559 |
|
1 0 1 1 1 1 1 0 1 1 this
|
|
|
1065 |
|
00:45:36,160 --> 00:45:40,440 |
|
would be |
|
|
|
1066 |
|
00:45:37,559 --> 00:45:42,559 |
|
100% And this would be |
|
|
|
1067 |
|
00:45:40,440 --> 00:45:44,960 |
|
60% and |
|
|
|
1068 |
|
00:45:42,559 --> 00:45:47,000 |
|
so in some cases depending on how we |
|
|
|
1069 |
|
00:45:44,960 --> 00:45:48,440 |
|
sample actually system one wins and so |
|
|
|
1070 |
|
00:45:47,000 --> 00:45:51,440 |
|
you count the number of times that |
|
|
|
1071 |
|
00:45:48,440 --> 00:45:52,880 |
|
system two wins based on
|
|
|
1072 |
|
00:45:51,440 --> 00:45:54,280 |
|
these sub samples you count the number |
|
|
|
1073 |
|
00:45:52,880 --> 00:45:56,400 |
|
of times that system one wins and you |
|
|
|
1074 |
|
00:45:54,280 --> 00:45:59,000 |
|
count the number of times you get a tie |
|
|
|
1075 |
|
00:45:56,400 --> 00:46:00,920 |
|
and only in the case where system two or |
|
|
|
1076 |
|
00:45:59,000 --> 00:46:03,680 |
|
like the better system wins more than |
|
|
|
1077 |
|
00:46:00,920 --> 00:46:06,280 |
|
95% of the time you say that there's a |
|
|
|
1078 |
|
00:46:03,680 --> 00:46:08,599 |
|
significant difference between these or
|
|
|
1079 |
|
00:46:06,280 --> 00:46:10,720 |
|
alternatively you could also look at the |
|
|
|
1080 |
|
00:46:08,599 --> 00:46:15,960 |
|
confidence intervals by saying okay I |
|
|
|
1081 |
|
00:46:10,720 --> 00:46:19,000 |
|
sampled um like 95% of the time uh
|
|
|
1082 |
|
00:46:15,960 --> 00:46:20,920 |
|
the accuracy of system one is uh like |
|
|
|
1083 |
|
00:46:19,000 --> 00:46:23,640 |
|
80% or lower and so that would give you |
|
|
|
1084 |
|
00:46:20,920 --> 00:46:23,640 |
|
the upper limit calculation
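
A sketch of the "count the wins" paired bootstrap being described; the per-example scores below are illustrative toy values for a 60% vs. 80% pair of systems on five examples.

```python
import numpy as np

def paired_bootstrap(scores1, scores2, n_resamples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    scores1, scores2 = np.asarray(scores1), np.asarray(scores2)
    n = len(scores1)
    wins1 = wins2 = ties = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)            # resample example indices with replacement
        m1, m2 = scores1[idx].mean(), scores2[idx].mean()
        if m1 > m2:
            wins1 += 1
        elif m2 > m1:
            wins2 += 1
        else:
            ties += 1
    return wins1 / n_resamples, wins2 / n_resamples, ties / n_resamples

sys1 = [1, 1, 0, 1, 0]   # 60% accuracy
sys2 = [1, 1, 1, 0, 1]   # 80% accuracy
print(paired_bootstrap(sys1, sys2))   # with only 5 examples, neither system wins 95% of the time
```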
|
|
|
1085 |
|
00:46:23,760 --> 00:46:29,599 |
|
so yeah sorry this is a very
|
|
|
1086 |
|
00:46:27,480 --> 00:46:31,760 |
|
uh very quick overview of this but the |
|
|
|
1087 |
|
00:46:29,599 --> 00:46:34,240 |
|
reason why this is useful is let's say |
|
|
|
1088 |
|
00:46:31,760 --> 00:46:36,160 |
|
you create a very small data set if you |
|
|
|
1089 |
|
00:46:34,240 --> 00:46:38,400 |
|
create a very small data set it's
|
|
|
1090 |
|
00:46:36,160 --> 00:46:39,880 |
|
going to
|
|
|
1091 |
|
00:46:38,400 --> 00:46:41,319 |
|
be very hard to get a statistically |
|
|
|
1092 |
|
00:46:39,880 --> 00:46:44,319 |
|
significant result on this data set |
|
|
|
1093 |
|
00:46:41,319 --> 00:46:47,200 |
|
because it's tiny right and you know |
|
|
|
1094 |
|
00:46:44,319 --> 00:46:50,640 |
|
quite frequently you're going to be |
|
|
|
1095 |
|
00:46:47,200 --> 00:46:53,400 |
|
sampling
|
|
|
1096 |
|
00:46:50,640 --> 00:46:55,400 |
|
data sets like this
|
|
|
1097 |
|
00:46:53,400 --> 00:46:56,640 |
|
where model one wins quite frequently |
|
|
|
1098 |
|
00:46:55,400 --> 00:46:58,520 |
|
you're going to be sampling other data |
|
|
|
1099 |
|
00:46:56,640 --> 00:47:00,359 |
|
sets where the other one wins and basically you're
|
|
|
1100 |
|
00:46:58,520 --> 00:47:02,920 |
|
not going to be able to say with |
|
|
|
1101 |
|
00:47:00,359 --> 00:47:04,480 |
|
confidence which model is better because |
|
|
|
1102 |
|
00:47:02,920 --> 00:47:06,359 |
|
you just don't have enough data to say |
|
|
|
1103 |
|
00:47:04,480 --> 00:47:07,880 |
|
that but as you make your data set |
|
|
|
1104 |
|
00:47:06,359 --> 00:47:11,119 |
|
bigger and bigger it becomes easier and |
|
|
|
1105 |
|
00:47:07,880 --> 00:47:14,240 |
|
easier to get a significant result and |
|
|
|
1106 |
|
00:47:11,119 --> 00:47:17,400 |
|
so uh because you're more sure that you |
|
|
|
1107 |
|
00:47:14,240 --> 00:47:20,960 |
|
didn't just randomly pick data that |
|
|
|
1108 |
|
00:47:17,400 --> 00:47:25,400 |
|
model two is better at |
|
|
|
1109 |
|
00:47:20,960 --> 00:47:28,440 |
|
uh so um there's also other varieties |
|
|
|
1110 |
|
00:47:25,400 --> 00:47:31,240 |
|
of tests there's things like t-tests for
|
|
|
1111 |
|
00:47:28,440 --> 00:47:34,720 |
|
unpaired outputs and paired t-
|
|
|
1112 |
|
00:47:31,240 --> 00:47:38,079 |
|
tests for paired outputs those work when |
|
|
|
1113 |
|
00:47:34,720 --> 00:47:40,440 |
|
your um outputs are additive so they work
|
|
|
1114 |
|
00:47:38,079 --> 00:47:43,599 |
|
for accuracy because the accuracy is |
|
|
|
1115 |
|
00:47:40,440 --> 00:47:46,440 |
|
just you add up all the ones
|
|
|
1116 |
|
00:47:43,599 --> 00:47:48,680 |
|
and then divide by the um the number of |
|
|
|
1117 |
|
00:47:46,440 --> 00:47:50,960 |
|
instances and that gives you an accuracy |
|
|
|
1118 |
|
00:47:48,680 --> 00:47:57,880 |
|
that doesn't work for something like |
|
|
|
1119 |
|
00:47:50,960 --> 00:48:03,599 |
|
F-measure um because F-measure is 2 ×
|
|
|
1120 |
|
00:47:57,880 --> 00:48:07,319 |
|
precision × recall divided by (precision plus
|
|
|
1121 |
|
00:48:03,599 --> 00:48:08,040 |
|
recall) um and precision and recall uh
|
|
|
1122 |
|
00:48:07,319 --> 00:48:10,640 |
|
you |
|
|
|
1123 |
|
00:48:08,040 --> 00:48:12,920 |
|
can like a t-test works for this but
|
|
|
1124 |
|
00:48:10,640 --> 00:48:15,160 |
|
there's a non-additive component of F-
|
|
|
1125 |
|
00:48:12,920 --> 00:48:16,680 |
|
measure so you can't calculate |
|
|
|
1126 |
|
00:48:15,160 --> 00:48:19,280 |
|
statistically significant differences in |
|
|
|
1127 |
|
00:48:16,680 --> 00:48:21,079 |
|
F-measure using a t-test in that case
|
|
|
1128 |
|
00:48:19,280 --> 00:48:23,000 |
|
you basically have to use a
|
|
|
1129 |
|
00:48:21,079 --> 00:48:24,920 |
|
bootstrap method like this in order to |
|
|
|
1130 |
|
00:48:23,000 --> 00:48:29,040 |
|
get it to work or you need to do some |
|
|
|
1131 |
|
00:48:24,920 --> 00:48:29,040 |
|
really complex math but I just
|
|
|
1132 |
|
00:48:29,760 --> 00:48:33,920 |
|
use the bootstrap cool um are there any questions
|
|
|
1133 |
|
00:48:32,680 --> 00:48:35,520 |
|
about this I guess we'll have a code |
|
|
|
1134 |
|
00:48:33,920 --> 00:48:37,680 |
|
example in the recitation so you can go |
|
|
|
1135 |
|
00:48:35,520 --> 00:48:39,599 |
|
in and take a look at that there's also |
|
|
|
1136 |
|
00:48:37,680 --> 00:48:42,599 |
|
tons of code examples |
|
|
|
1137 |
|
00:48:39,599 --> 00:48:42,599 |
|
online |
|
|
|
1138 |
|
00:48:42,960 --> 00:48:49,440 |
|
um is that |
|
|
|
1139 |
|
00:48:45,720 --> 00:48:52,400 |
|
okay okay sounds good um so now let me |
|
|
|
1140 |
|
00:48:49,440 --> 00:48:54,599 |
|
go back to the actual slides
|
|
|
1141 |
|
00:48:52,400 --> 00:48:57,400 |
|
for |
|
|
|
1142 |
|
00:48:54,599 --> 00:49:00,559 |
|
today and given the
|
|
|
1143 |
|
00:48:57,400 --> 00:49:04,119 |
|
results about statistical significance um
|
|
|
1144 |
|
00:49:00,559 --> 00:49:06,040 |
|
how can we estimate how much testing |
|
|
|
1145 |
|
00:49:04,119 --> 00:49:07,920 |
|
data is enough and there's a method |
|
|
|
1146 |
|
00:49:06,040 --> 00:49:11,079 |
|
called Power analysis that allows you to |
|
|
|
1147 |
|
00:49:07,920 --> 00:49:13,359 |
|
do this and basically the idea of power |
|
|
|
1148 |
|
00:49:11,079 --> 00:49:16,680 |
|
analysis is that you make an assumption |
|
|
|
1149 |
|
00:49:13,359 --> 00:49:18,880 |
|
about the effect size between settings |
|
|
|
1150 |
|
00:49:16,680 --> 00:49:20,680 |
|
um for example the expected accuracy |
|
|
|
1151 |
|
00:49:18,880 --> 00:49:23,480 |
|
difference between tested |
|
|
|
1152 |
|
00:49:20,680 --> 00:49:26,480 |
|
models and given the effect size and a
|
|
|
1153 |
|
00:49:23,480 --> 00:49:28,880 |
|
significance
|
|
|
1154 |
|
00:49:26,480 --> 00:49:30,839 |
|
threshold you can determine how much |
|
|
|
1155 |
|
00:49:28,880 --> 00:49:32,680 |
|
data is necessary to get a significant |
|
|
|
1156 |
|
00:49:30,839 --> 00:49:36,680 |
|
effect in most |
|
|
|
1157 |
|
00:49:32,680 --> 00:49:39,319 |
|
cases and so to give an example
|
|
|
1158 |
|
00:49:36,680 --> 00:49:41,559 |
|
again let's say we're talking about the |
|
|
|
1159 |
|
00:49:39,319 --> 00:49:45,880 |
|
accuracy let's say we have a baseline |
|
|
|
1160 |
|
00:49:41,559 --> 00:49:49,079 |
|
model
|
|
|
1161 |
|
00:49:45,880 --> 00:49:52,280 |
|
and then we also have our
|
|
|
1162 |
|
00:49:49,079 --> 00:49:54,000 |
|
uh proposed model and we know kind of from
|
|
|
1163 |
|
00:49:52,280 --> 00:49:55,599 |
|
experience that the Baseline model is |
|
|
|
1164 |
|
00:49:54,000 --> 00:49:58,400 |
|
probably going to get around 90% |
|
|
|
1165 |
|
00:49:55,599 --> 00:50:00,559 |
|
accuracy We Know by like eyeballing |
|
|
|
1166 |
|
00:49:58,400 --> 00:50:06,240 |
|
the data or something like
|
|
|
1167 |
|
00:50:00,559 --> 00:50:09,599 |
|
that and then we think
|
|
|
1168 |
|
00:50:06,240 --> 00:50:13,799 |
|
our model is going to get 93% |
|
|
|
1169 |
|
00:50:09,599 --> 00:50:17,160 |
|
accuracy uh and we want a significance
|
|
|
1170 |
|
00:50:13,799 --> 00:50:19,440 |
|
threshold of p
|
|
|
1171 |
|
00:50:17,160 --> 00:50:22,319 |
|
less than |
|
|
|
1172 |
|
00:50:19,440 --> 00:50:26,000 |
|
0.05 given these |
|
|
|
1173 |
|
00:50:22,319 --> 00:50:30,559 |
|
two quantities we can basically go in |
|
|
|
1174 |
|
00:50:26,000 --> 00:50:33,720 |
|
and say okay now we need
|
|
|
1175 |
|
00:50:30,559 --> 00:50:36,200 |
|
500 test examples in order to say with |
|
|
|
1176 |
|
00:50:33,720 --> 00:50:38,920 |
|
confidence that we will be able |
|
|
|
1177 |
|
00:50:36,200 --> 00:50:40,599 |
|
to
|
|
|
1178 |
|
00:50:38,920 --> 00:50:42,640 |
|
distinguish between two models with 90 |
|
|
|
1179 |
|
00:50:40,599 --> 00:50:44,400 |
|
and 93% |
|
|
|
1180 |
|
00:50:42,640 --> 00:50:48,240 |
|
accuracy |
|
|
|
1181 |
|
00:50:44,400 --> 00:50:51,079 |
|
and I can show the algorithm
|
|
|
1182 |
|
00:50:48,240 --> 00:50:51,079 |
|
that they have in this |
|
|
|
1183 |
|
00:50:54,440 --> 00:50:57,440 |
|
paper |
|
|
|
1184 |
|
00:51:01,760 --> 00:51:04,960 |
|
but basically the way this |
|
|
|
1185 |
|
00:51:13,040 --> 00:51:19,720 |
|
works um is you sample a data set um |
|
|
|
1186 |
|
00:51:17,799 --> 00:51:22,960 |
|
compute the effect of interest on the
|
|
|
1187 |
|
00:51:19,720 --> 00:51:25,880 |
|
sample compute the p-value and then
|
|
|
1188 |
|
00:51:22,960 --> 00:51:29,319 |
|
you can calculate the power uh |
|
|
|
1189 |
|
00:51:25,880 --> 00:51:31,520 |
|
by basically um checking the number of |
|
|
|
1190 |
|
00:51:29,319 --> 00:51:34,480 |
|
times that the P value is less than your |
|
|
|
1191 |
|
00:51:31,520 --> 00:51:36,319 |
|
threshold um multiplied by uh the fact |
|
|
|
1192 |
|
00:51:34,480 --> 00:51:38,920 |
|
that the sign is in a particular |
|
|
|
1193 |
|
00:51:36,319 --> 00:51:41,200 |
|
direction and by doing this you can |
|
|
|
1194 |
|
00:51:38,920 --> 00:51:43,280 |
|
essentially
|
|
|
1195 |
|
00:51:41,200 --> 00:51:46,200 |
|
calculate how much data you would need |
|
|
|
1196 |
|
00:51:43,280 --> 00:51:48,319 |
|
or sorry you can calculate the uh the |
|
|
|
1197 |
|
00:51:46,200 --> 00:51:50,319 |
|
statistical power and then you can do |
|
|
|
1198 |
|
00:51:48,319 --> 00:51:52,000 |
|
this for various sizes of data set so |
|
|
|
1199 |
|
00:51:50,319 --> 00:51:53,559 |
|
you can gradually increase the size of |
|
|
|
1200 |
|
00:51:52,000 --> 00:51:57,160 |
|
the data set or decrease the size of the |
|
|
|
1201 |
|
00:51:53,559 --> 00:51:59,040 |
|
data set and that allows you to figure |
|
|
|
1202 |
|
00:51:57,160 --> 00:52:02,200 |
|
out how big your data set needs to be in |
|
|
|
1203 |
|
00:51:59,040 --> 00:52:04,640 |
|
order to get a statistically significant |
|
|
|
1204 |
|
00:52:02,200 --> 00:52:08,839 |
|
effect on the data set
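
A hedged, simulation-style sketch in the spirit of this procedure (not the exact algorithm from the paper): assume the two accuracies, simulate test sets of different sizes, and see how often the difference comes out significant. For simplicity the two systems are simulated independently here, which ignores the extra power you get from paired evaluation on the same examples.

```python
import numpy as np
from scipy import stats

def estimated_power(n, acc_base=0.90, acc_new=0.93, alpha=0.05, n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        base = rng.binomial(1, acc_base, size=n)   # simulated per-example correctness
        new = rng.binomial(1, acc_new, size=n)
        t, p = stats.ttest_ind(new, base)
        if p < alpha and t > 0:                    # significant AND in the expected direction
            hits += 1
    return hits / n_sims

for n in [250, 500, 1000, 2000]:
    print(n, estimated_power(n))                   # pick the smallest n with acceptable power
```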
|
|
|
1205 |
|
00:52:04,640 --> 00:52:10,720 |
|
and so like many many people ask me
|
|
|
1206 |
|
00:52:08,839 --> 00:52:12,599 |
|
the question like how big of a data set |
|
|
|
1207 |
|
00:52:10,720 --> 00:52:14,440 |
|
do we need to make this is basically the |
|
|
|
1208 |
|
00:52:12,599 --> 00:52:17,280 |
|
statistically like quote unquote correct |
|
|
|
1209 |
|
00:52:14,440 --> 00:52:19,520 |
|
answer for how you can do this and also |
|
|
|
1210 |
|
00:52:17,280 --> 00:52:20,440 |
|
uh for assignment two we're going to ask |
|
|
|
1211 |
|
00:52:19,520 --> 00:52:24,559 |
|
you to |
|
|
|
1212 |
|
00:52:20,440 --> 00:52:26,720 |
|
justify uh your choice of creation of a |
|
|
|
1213 |
|
00:52:24,559 --> 00:52:30,359 |
|
data set of particular size for testing |
|
|
|
1214 |
|
00:52:26,720 --> 00:52:31,799 |
|
based on this so pay attention
|
|
|
1215 |
|
00:52:30,359 --> 00:52:34,720 |
|
and please look at the references here |
|
|
|
1216 |
|
00:52:31,799 --> 00:52:38,760 |
|
and you should be able to |
|
|
|
1217 |
|
00:52:34,720 --> 00:52:41,280 |
|
do that cool um any
|
|
|
1218 |
|
00:52:38,760 --> 00:52:43,119 |
|
questions I didn't go like really
|
|
|
1219 |
|
00:52:41,280 --> 00:52:44,319 |
|
deeply into the formulas here
|
|
|
1220 |
|
00:52:43,119 --> 00:52:45,720 |
|
you'll probably have to look them up in |
|
|
|
1221 |
|
00:52:44,319 --> 00:52:48,119 |
|
the paper but hopefully that gives you |
|
|
|
1222 |
|
00:52:45,720 --> 00:52:51,799 |
|
the general |
|
|
|
1223 |
|
00:52:48,119 --> 00:52:52,680 |
|
idea okay next um how much training data |
|
|
|
1224 |
|
00:52:51,799 --> 00:52:55,599 |
|
do I |
|
|
|
1225 |
|
00:52:52,680 --> 00:52:58,160 |
|
need so in general more is usually |
|
|
|
1226 |
|
00:52:55,599 --> 00:53:00,760 |
|
better if you're fine tuning a model um |
|
|
|
1227 |
|
00:52:58,160 --> 00:53:02,880 |
|
so I can't tell you like you don't need |
|
|
|
1228 |
|
00:53:00,760 --> 00:53:05,480 |
|
to make more data because |
|
|
|
1229 |
|
00:53:02,880 --> 00:53:06,280 |
|
probably you do if you're not happy with |
|
|
|
1230 |
|
00:53:05,480 --> 00:53:10,799 |
|
your |
|
|
|
1231 |
|
00:53:06,280 --> 00:53:12,599 |
|
performance um but recently you can get |
|
|
|
1232 |
|
00:53:10,799 --> 00:53:14,680 |
|
very reasonable performance with few |
|
|
|
1233 |
|
00:53:12,599 --> 00:53:17,319 |
|
shot or zero shot or pre-trained models |
|
|
|
1234 |
|
00:53:14,680 --> 00:53:19,760 |
|
and prompting and because of this in |
|
|
|
1235 |
|
00:53:17,319 --> 00:53:21,240 |
|
some cases maybe the answer is zero |
|
|
|
1236 |
|
00:53:19,760 --> 00:53:22,960 |
|
maybe you don't need any training data |
|
|
|
1237 |
|
00:53:21,240 --> 00:53:26,559 |
|
and you could just use a zero-shot pre-trained
|
|
|
1238 |
|
00:53:22,960 --> 00:53:29,240 |
|
model so um you need to choose like
|
|
|
1239 |
|
00:53:26,559 --> 00:53:31,319 |
|
what your accuracy threshold is um you |
|
|
|
1240 |
|
00:53:29,240 --> 00:53:32,720 |
|
need to decide whether you want to be |
|
|
|
1241 |
|
00:53:31,319 --> 00:53:34,480 |
|
fine-tuning a model to improve |
|
|
|
1242 |
|
00:53:32,720 --> 00:53:36,319 |
|
performance or doing other things like |
|
|
|
1243 |
|
00:53:34,480 --> 00:53:39,119 |
|
prompt engineering or other stuff like |
|
|
|
1244 |
|
00:53:36,319 --> 00:53:41,520 |
|
that so basically there's no uh correct |
|
|
|
1245 |
|
00:53:39,119 --> 00:53:45,440 |
|
answer to this |
|
|
|
1246 |
|
00:53:41,520 --> 00:53:47,359 |
|
um one thing to be aware of is uh |
|
|
|
1247 |
|
00:53:45,440 --> 00:53:51,440 |
|
sometimes if you select data |
|
|
|
1248 |
|
00:53:47,359 --> 00:53:52,880 |
|
intelligently you can uh improve more |
|
|
|
1249 |
|
00:53:51,440 --> 00:53:54,359 |
|
quickly with something like Active |
|
|
|
1250 |
|
00:53:52,880 --> 00:53:56,520 |
|
Learning and active learning chooses |
|
|
|
1251 |
|
00:53:54,359 --> 00:54:00,000 |
|
representative and difficult data that |
|
|
|
1252 |
|
00:53:56,520 --> 00:54:02,559 |
|
you can be using
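
As a toy sketch of one common active-learning strategy, uncertainty sampling; model, featurize, and the scikit-learn-style predict_proba are assumptions for illustration, not anything specific from the lecture.

```python
import numpy as np

def pick_examples_to_annotate(model, unlabeled_texts, featurize, budget=100):
    """Return the unlabeled examples the current model is least confident about."""
    probs = model.predict_proba(featurize(unlabeled_texts))   # shape: (n_examples, n_classes)
    confidence = probs.max(axis=1)                            # probability of the top class
    most_uncertain = np.argsort(confidence)[:budget]          # least confident first
    return [unlabeled_texts[i] for i in most_uncertain]
```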
|
|
|
1253 |
|
00:54:00,000 --> 00:54:04,839 |
|
so when you sample data for fine
|
|
|
1254 |
|
00:54:02,559 --> 00:54:07,440 |
|
tuning uh what you want to be doing is |
|
|
|
1255 |
|
00:54:04,839 --> 00:54:08,839 |
|
you want to be sampling data that has |
|
|
|
1256 |
|
00:54:07,440 --> 00:54:10,040 |
|
good coverage of the domains that you |
|
|
|
1257 |
|
00:54:08,839 --> 00:54:12,760 |
|
want to |
|
|
|
1258 |
|
00:54:10,040 --> 00:54:15,079 |
|
cover um you also want to be covering |
|
|
|
1259 |
|
00:54:12,760 --> 00:54:18,599 |
|
for example languages or
|
|
|
1260 |
|
00:54:15,079 --> 00:54:23,200 |
|
language varieties or demographics of |
|
|
|
1261 |
|
00:54:18,599 --> 00:54:25,520 |
|
users um and another thing is uh when |
|
|
|
1262 |
|
00:54:23,200 --> 00:54:29,440 |
|
you're doing this it's often a good idea
|
|
|
1263 |
|
00:54:25,520 --> 00:54:31,400 |
|
to document how you're creating data and |
|
|
|
1264 |
|
00:54:29,440 --> 00:54:34,079 |
|
uh there's this paper Data Statements
|
|
|
1265 |
|
00:54:31,400 --> 00:54:35,520 |
|
for NLP by Bender and Friedman uh which
|
|
|
1266 |
|
00:54:34,079 --> 00:54:37,440 |
|
suggests a bunch of different things |
|
|
|
1267 |
|
00:54:35,520 --> 00:54:39,520 |
|
that you can use to document your data |
|
|
|
1268 |
|
00:54:37,440 --> 00:54:41,520 |
|
collection and like why and how you |
|
|
|
1269 |
|
00:54:39,520 --> 00:54:44,960 |
|
collected the data and this gives you |
|
|
|
1270 |
|
00:54:41,520 --> 00:54:47,200 |
|
some pieces of information that uh could |
|
|
|
1271 |
|
00:54:44,960 --> 00:54:49,359 |
|
be useful this has been incorporated |
|
|
|
1272 |
|
00:54:47,200 --> 00:54:51,880 |
|
into the Hugging Face Datasets dataset
|
|
|
1273 |
|
00:54:49,359 --> 00:54:53,520 |
|
cards and now Hugging Face Datasets
|
|
|
1274 |
|
00:54:51,880 --> 00:54:56,040 |
|
actually has lots of metadata that's |
|
|
|
1275 |
|
00:54:53,520 --> 00:54:58,359 |
|
kind of inspired by uh this although |
|
|
|
1276 |
|
00:54:56,040 --> 00:55:01,799 |
|
it's been adjusted for more kind of like |
|
|
|
1277 |
|
00:54:58,359 --> 00:55:01,799 |
|
practical industry use |
|
|
|
1278 |
|
00:55:02,119 --> 00:55:06,480 |
|
cases another thing is annotation |
|
|
|
1279 |
|
00:55:04,400 --> 00:55:09,160 |
|
guidelines so if you're asking humans to |
|
|
|
1280 |
|
00:55:06,480 --> 00:55:11,319 |
|
do anything um or for that matter if |
|
|
|
1281 |
|
00:55:09,160 --> 00:55:16,119 |
|
you're asking GPT-4 to generate data for
|
|
|
1282 |
|
00:55:11,319 --> 00:55:21,480 |
|
you um you need to tell people or GPT-4 in
|
|
|
1283 |
|
00:55:16,119 --> 00:55:24,440 |
|
um you know a clear manner
|
|
|
1284 |
|
00:55:21,480 --> 00:55:28,119 |
|
um like how it should be creating data |
|
|
|
1285 |
|
00:55:24,440 --> 00:55:29,920 |
|
so the first thing
|
|
|
1286 |
|
00:55:28,119 --> 00:55:32,960 |
|
that you can do is
|
|
|
1287 |
|
00:55:29,920 --> 00:55:34,240 |
|
you can try to annotate yourself um and |
|
|
|
1288 |
|
00:55:32,960 --> 00:55:37,039 |
|
if you actually try to solve The |
|
|
|
1289 |
|
00:55:34,240 --> 00:55:38,440 |
|
annotation task yourself then you'll |
|
|
|
1290 |
|
00:55:37,039 --> 00:55:41,160 |
|
realize that there's lots of corner |
|
|
|
1291 |
|
00:55:38,440 --> 00:55:43,799 |
|
cases that are hard to decide on um |
|
|
|
1292 |
|
00:55:41,160 --> 00:55:45,440 |
|
other things like that so like if you're |
|
|
|
1293 |
|
00:55:43,799 --> 00:55:47,520 |
|
annotating sentiment what is the |
|
|
|
1294 |
|
00:55:45,440 --> 00:55:49,799 |
|
boundary between very positive and |
|
|
|
1295 |
|
00:55:47,520 --> 00:55:50,880 |
|
positive um if you're annotating |
|
|
|
1296 |
|
00:55:49,799 --> 00:55:54,000 |
|
question |
|
|
|
1297 |
|
00:55:50,880 --> 00:55:56,280 |
|
answering um like for |
|
|
|
1298 |
|
00:55:54,000 --> 00:55:57,720 |
|
example do you want to answer in a whole |
|
|
|
1299 |
|
00:55:56,280 --> 00:56:01,119 |
|
sentence or do you want to answer with |
|
|
|
1300 |
|
00:55:57,720 --> 00:56:03,760 |
|
only a short concise answer like these |
|
|
|
1301 |
|
00:56:01,119 --> 00:56:05,400 |
|
sorts of things you'll need to tell uh |
|
|
|
1302 |
|
00:56:03,760 --> 00:56:07,839 |
|
either an annotator or a model that |
|
|
|
1303 |
|
00:56:05,400 --> 00:56:10,960 |
|
you're asking to do annotation to give |
|
|
|
1304 |
|
00:56:07,839 --> 00:56:12,760 |
|
some examples from the Penn Treebank uh
|
|
|
1305 |
|
00:56:10,960 --> 00:56:15,440 |
|
part of speech annotation guidelines |
|
|
|
1306 |
|
00:56:12,760 --> 00:56:18,079 |
|
this is very old it's from 1990 but |
|
|
|
1307 |
|
00:56:15,440 --> 00:56:21,200 |
|
basically they have uh like adverb this |
|
|
|
1308 |
|
00:56:18,079 --> 00:56:25,559 |
|
category includes most words that end in |
|
|
|
1309 |
|
00:56:21,200 --> 00:56:30,680 |
|
-ly as well as degree words like
|
|
|
1310 |
|
00:56:25,559 --> 00:56:33,079 |
|
quite um etc etc it has other things for |
|
|
|
1311 |
|
00:56:30,680 --> 00:56:36,200 |
|
adverbs and then it has like confusing |
|
|
|
1312 |
|
00:56:33,079 --> 00:56:38,039 |
|
parts of speech with examples uh one |
|
|
|
1313 |
|
00:56:36,200 --> 00:56:39,640 |
|
thing that I found like really really |
|
|
|
1314 |
|
00:56:38,039 --> 00:56:42,640 |
|
interesting is like if you look at these |
|
|
|
1315 |
|
00:56:39,640 --> 00:56:46,160 |
|
annotation guidelines it's like uh |
|
|
|
1316 |
|
00:56:42,640 --> 00:56:48,319 |
|
prompts so if you look at this it's like |
|
|
|
1317 |
|
00:56:46,160 --> 00:56:49,880 |
|
these are your prompts your zero
|
|
|
1318 |
|
00:56:48,319 --> 00:56:52,359 |
|
shot prompts and these are few-shot
|
|
|
1319 |
|
00:56:49,880 --> 00:56:54,480 |
|
examples so like even for humans we were |
|
|
|
1320 |
|
00:56:52,359 --> 00:56:56,520 |
|
doing few-shot prompting with examples
|
|
|
1321 |
|
00:56:54,480 --> 00:57:00,880 |
|
when they were doing annotations so uh |
|
|
|
1322 |
|
00:56:56,520 --> 00:57:03,119 |
|
it's kind of fun um hiring
|
|
|
1323 |
|
00:57:00,880 --> 00:57:05,000 |
|
annotators so like let's say you want to |
|
|
|
1324 |
|
00:57:03,119 --> 00:57:08,319 |
|
actually build a data set and pay
|
|
|
1325 |
|
00:57:05,000 --> 00:57:10,359 |
|
people to do things um for smaller scale |
|
|
|
1326 |
|
00:57:08,319 --> 00:57:13,359 |
|
projects uh very often you can just |
|
|
|
1327 |
|
00:57:10,359 --> 00:57:15,240 |
|
annotate yourself and that's fine um |
|
|
|
1328 |
|
00:57:13,359 --> 00:57:16,720 |
|
there's a fixed amount of overhead to get
|
|
|
1329 |
|
00:57:15,240 --> 00:57:19,480 |
|
other people to do something and train |
|
|
|
1330 |
|
00:57:16,720 --> 00:57:23,200 |
|
them and stuff so you know I often just |
|
|
|
1331 |
|
00:57:19,480 --> 00:57:25,079 |
|
annotate things myself um you can also |
|
|
|
1332 |
|
00:57:23,200 --> 00:57:26,520 |
|
find friends or other students or |
|
|
|
1333 |
|
00:57:25,079 --> 00:57:29,559 |
|
co-workers who can help you out with |
|
|
|
1334 |
|
00:57:26,520 --> 00:57:33,359 |
|
things you can bribe them with uh
|
|
|
1335 |
|
00:57:29,559 --> 00:57:37,280 |
|
pizza or whatever favorite uh food or |
|
|
|
1336 |
|
00:57:33,359 --> 00:57:39,400 |
|
beverage that they like um then for |
|
|
|
1337 |
|
00:57:37,280 --> 00:57:42,440 |
|
finding people online there's a lot of |
|
|
|
1338 |
|
00:57:39,400 --> 00:57:45,160 |
|
things that you can do um I very often |
|
|
|
1339 |
|
00:57:42,440 --> 00:57:46,000 |
|
hire Freelancers uh through platforms |
|
|
|
1340 |
|
00:57:45,160 --> 00:57:50,400 |
|
such as |
|
|
|
1341 |
|
00:57:46,000 --> 00:57:51,799 |
|
Upwork um this is good and bad the bad
|
|
|
1342 |
|
00:57:50,400 --> 00:57:53,760 |
|
thing about it is that this is often |
|
|
|
1343 |
|
00:57:51,799 --> 00:57:56,280 |
|
more expensive the good thing about it |
|
|
|
1344 |
|
00:57:53,760 --> 00:57:58,640 |
|
is um you get people who have pride in |
|
|
|
1345 |
|
00:57:56,280 --> 00:58:00,440 |
|
their work and accountability and |
|
|
|
1346 |
|
00:57:58,640 --> 00:58:02,440 |
|
motivation because like if they get |
|
|
|
1347 |
|
00:58:00,440 --> 00:58:04,480 |
|
rated poorly it's going to be
|
|
|
1348 |
|
00:58:02,440 --> 00:58:06,720 |
|
harder to get work and often they're |
|
|
|
1349 |
|
00:58:04,480 --> 00:58:08,160 |
|
Professionals in their fields so like if |
|
|
|
1350 |
|
00:58:06,720 --> 00:58:12,079 |
|
you want to get a code generation data |
|
|
|
1351 |
|
00:58:08,160 --> 00:58:15,880 |
|
set you can hire good um Freelancers |
|
|
|
1352 |
|
00:58:12,079 --> 00:58:18,520 |
|
I've actually heard rumors that uh |
|
|
|
1353 |
|
00:58:15,880 --> 00:58:20,119 |
|
people like OpenAI hire people and
|
|
|
1354 |
|
00:58:18,520 --> 00:58:21,599 |
|
pay them $60 an hour to do The |
|
|
|
1355 |
|
00:58:20,119 --> 00:58:23,599 |
|
annotation because they really want |
|
|
|
1356 |
|
00:58:21,599 --> 00:58:27,119 |
|
people who are very professional and do |
|
|
|
1357 |
|
00:58:23,599 --> 00:58:30,000 |
|
a very good job um I don't pay that |
|
|
|
1358 |
|
00:58:27,119 --> 00:58:34,240 |
|
much but I do pay well more than minimum |
|
|
|
1359 |
|
00:58:30,000 --> 00:58:35,880 |
|
wage and uh you know I pay a
|
|
|
1360 |
|
00:58:34,240 --> 00:58:38,039 |
|
competitive price for these freelancing |
|
|
|
1361 |
|
00:58:35,880 --> 00:58:40,319 |
|
sites when I get people to do |
|
|
|
1362 |
|
00:58:38,039 --> 00:58:42,000 |
|
that another thing you can do is crowd
|
|
|
1363 |
|
00:58:40,319 --> 00:58:44,400 |
|
workers and this could be through
|
|
|
1364 |
|
00:58:42,000 --> 00:58:45,960 |
|
sites like Mechanical Turk or Prolific
|
|
|
1365 |
|
00:58:44,400 --> 00:58:48,960 |
|
or other things like this so that's |
|
|
|
1366 |
|
00:58:45,960 --> 00:58:51,680 |
|
another option um here quality control |
|
|
|
1367 |
|
00:58:48,960 --> 00:58:55,240 |
|
becomes very difficult and um we're |
|
|
|
1368 |
|
00:58:51,680 --> 00:58:57,799 |
|
getting to the point where number one |
|
|
|
1369 |
|
00:58:55,240 --> 00:58:59,400 |
|
um if you aren't very careful with
|
|
|
1370 |
|
00:58:57,799 --> 00:59:01,920 |
|
quality control language models actually |
|
|
|
1371 |
|
00:58:59,400 --> 00:59:03,400 |
|
do a similarly good job as crowd workers |
|
|
|
1372 |
|
00:59:01,920 --> 00:59:06,960 |
|
and number two all the crowd workers are |
|
|
|
1373 |
|
00:59:03,400 --> 00:59:10,000 |
|
using GPT-4 anyway so um you do need to be
|
|
|
1374 |
|
00:59:06,960 --> 00:59:12,319 |
|
careful about that um one thing that I |
|
|
|
1375 |
|
00:59:10,000 --> 00:59:14,039 |
|
often do is I hire for a small job first |
|
|
|
1376 |
|
00:59:12,319 --> 00:59:16,880 |
|
to gauge timeliness and accuracy and |
|
|
|
1377 |
|
00:59:14,039 --> 00:59:18,920 |
|
then hire for a bigger job so um just |
|
|
|
1378 |
|
00:59:16,880 --> 00:59:21,720 |
|
hire people to do you know 50 examples |
|
|
|
1379 |
|
00:59:18,920 --> 00:59:23,319 |
|
or 20 examples first and then uh you |
|
|
|
1380 |
|
00:59:21,720 --> 00:59:26,240 |
|
know if they do a good job with it then |
|
|
|
1381 |
|
00:59:23,319 --> 00:59:27,960 |
|
I hire them to do 2,000
|
|
|
1382 |
|
00:59:26,240 --> 00:59:30,799 |
|
examples |
|
|
|
1383 |
|
00:59:27,960 --> 00:59:34,720 |
|
um one thing to note is that if you're |
|
|
|
1384 |
|
00:59:30,799 --> 00:59:36,599 |
|
doing research in a university um you |
|
|
|
1385 |
|
00:59:34,720 --> 00:59:39,400 |
|
might need to get approval from an |
|
|
|
1386 |
|
00:59:36,599 --> 00:59:41,480 |
|
Institutional review board and this is |
|
|
|
1387 |
|
00:59:39,400 --> 00:59:43,000 |
|
in particular the case for subjective |
|
|
|
1388 |
|
00:59:41,480 --> 00:59:45,880 |
|
tasks so this is when you're asking
|
|
|
1389 |
|
00:59:43,000 --> 00:59:47,440 |
|
people how do you feel about this output |
|
|
|
1390 |
|
00:59:45,880 --> 00:59:50,039 |
|
um do you think this output is |
|
|
|
1391 |
|
00:59:47,440 --> 00:59:51,720 |
|
representative of your beliefs or things |
|
|
|
1392 |
|
00:59:50,039 --> 00:59:54,760 |
|
like that where it doesn't have a |
|
|
|
1393 |
|
00:59:51,720 --> 00:59:56,319 |
|
correct answer, a yes-or-no answer; if
|
|
|
1394 |
|
00:59:54,760 --> 00:59:58,680 |
|
it's something that does have a
|
|
|
1395 |
|
00:59:56,319 --> 01:00:03,640 |
|
yes-or-no answer, like how many
|
|
|
1396 |
|
00:59:58,680 --> 01:00:05,640 |
|
verbs are in this sentence or um how do |
|
|
|
1397 |
|
01:00:03,640 --> 01:00:07,280 |
|
you translate the sentence into another |
|
|
|
1398 |
|
01:00:05,640 --> 01:00:09,880 |
|
language or something like that then you |
|
|
|
1399 |
|
01:00:07,280 --> 01:00:12,039 |
|
don't need an IRB approval um but if |
|
|
|
1400 |
|
01:00:09,880 --> 01:00:15,000 |
|
it's borderline you might want to check |
|
|
|
1401 |
|
01:00:12,039 --> 01:00:17,280 |
|
anyway um so that that's something to be |
|
|
|
1402 |
|
01:00:15,000 --> 01:00:17,280 |
|
aware |
|
|
|
1403 |
|
01:00:18,640 --> 01:00:26,240 |
|
of next is assessing annotation quality |
|
|
|
1404 |
|
01:00:22,640 --> 01:00:27,680 |
|
so um one of my favorite ways to do this |
|
|
|
1405 |
|
01:00:26,240 --> 01:00:30,039 |
|
is to assess human
|
|
|
1406 |
|
01:00:27,680 --> 01:00:32,240 |
|
performance and so the way we do this is
|
|
|
1407 |
|
01:00:30,039 --> 01:00:34,119 |
|
you double annotate some data and then |
|
|
|
1408 |
|
01:00:32,240 --> 01:00:37,160 |
|
you measure whatever metric you want to |
|
|
|
1409 |
|
01:00:34,119 --> 01:00:39,200 |
|
measure for machines just with respect |
|
|
|
1410 |
|
01:00:37,160 --> 01:00:41,039 |
|
to human agreement and so for |
|
|
|
1411 |
|
01:00:39,200 --> 01:00:43,839 |
|
translation if you're using BLEU score
|
|
|
1412 |
|
01:00:41,039 --> 01:00:45,440 |
|
or chrF score or something like this then
|
|
|
1413 |
|
01:00:43,839 --> 01:00:47,079 |
|
you would want to use this for |
|
|
|
1414 |
|
01:00:45,440 --> 01:00:50,440 |
|
assessment of the |
|
|
|
1415 |
|
01:00:47,079 --> 01:00:56,039 |
|
outputs um the advantage of doing this |
|
|
|
1416 |
|
01:00:50,440 --> 01:00:58,760 |
|
is that you get a human quality score |
|
|
|
1417 |
|
01:00:56,039 --> 01:01:00,960 |
|
and the human quality score is directly |
|
|
|
1418 |
|
01:00:58,760 --> 01:01:02,480 |
|
comparable to the machine quality score |
|
|
|
1419 |
|
01:01:00,960 --> 01:01:04,599 |
|
and so you can say well humans got the |
|
|
|
1420 |
|
01:01:02,480 --> 01:01:07,280 |
|
task right 90% of the time and GPT-4 got
|
|
|
1421 |
|
01:01:04,599 --> 01:01:11,280 |
|
the task right 16% of the time so humans |
|
|
|
1422 |
|
01:01:07,280 --> 01:01:13,760 |
|
are way better than GPT-4 or um you know
|
|
|
1423 |
|
01:01:11,280 --> 01:01:16,559 |
|
humans got it right 80% of the time and |
|
|
|
1424 |
|
01:01:13,760 --> 01:01:19,599 |
|
GPT-4 got it right 78% of the time so this
|
|
|
1425 |
|
01:01:16,559 --> 01:01:21,000 |
|
task is you know this task or maybe not |
|
|
|
1426 |
|
01:01:19,599 --> 01:01:23,640 |
|
necessarily the task but at least the |
|
|
|
1427 |
|
01:01:21,000 --> 01:01:25,079 |
|
data set has more or less been solved by
|
|
|
1428 |
|
01:01:23,640 --> 01:01:26,640 |
|
the strongest language models so now we |
|
|
|
1429 |
|
01:01:25,079 --> 01:01:28,920 |
|
need to catch up with open-source models,
|
|
|
1430 |
|
01:01:26,640 --> 01:01:31,680 |
|
smaller ones or something like that
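Going back to the double-annotation idea, here is a minimal sketch (assuming a simple classification task with made-up labels; for translation you would swap accuracy for BLEU or chrF):

def accuracy(predictions, references):
    # Fraction of items where the prediction matches the reference label.
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

annotator_a = ["pos", "neg", "neg", "pos", "neu"]   # treated as the reference
annotator_b = ["pos", "neg", "pos", "pos", "neu"]   # second human annotation
model_preds = ["pos", "pos", "neg", "pos", "neg"]   # system outputs

# Same metric for the second human and for the model, so the two numbers are
# directly comparable.
human_score = accuracy(annotator_b, annotator_a)
model_score = accuracy(model_preds, annotator_a)
print(f"human: {human_score:.2f}  model: {model_score:.2f}")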
|
|
|
1431 |
|
01:01:28,920 --> 01:01:32,880 |
|
um there are things that you can
|
|
|
1432 |
|
01:01:31,680 --> 01:01:34,880 |
|
measure you can measure things like |
|
|
|
1433 |
|
01:01:32,880 --> 01:01:36,880 |
|
Kappa statistics this is particularly |
|
|
|
1434 |
|
01:01:34,880 --> 01:01:39,799 |
|
useful for um kind of just |
|
|
|
1435 |
|
01:01:36,880 --> 01:01:41,799 |
|
classification tasks and what this tells |
|
|
|
1436 |
|
01:01:39,799 --> 01:01:43,880 |
|
you is this tells you how much higher is |
|
|
|
1437 |
|
01:01:41,799 --> 01:01:48,000 |
|
the agreement that you would get than if |
|
|
|
1438 |
|
01:01:43,880 --> 01:01:49,920 |
|
you got it by chance and so for example |
|
|
|
1439 |
|
01:01:48,000 --> 01:01:53,279 |
|
let's say you're classifying |
|
|
|
1440 |
|
01:01:49,920 --> 01:01:54,760 |
|
spam uh or you're classifying you know |
|
|
|
1441 |
|
01:01:53,279 --> 01:01:59,520 |
|
toxic content or something
|
|
|
1442 |
|
01:01:54,760 --> 01:02:03,400 |
|
like that and 99% of the
|
|
|
1443 |
|
01:01:59,520 --> 01:02:07,480 |
|
time the content is not toxic and 1% of |
|
|
|
1444 |
|
01:02:03,400 --> 01:02:11,799 |
|
the time the content is toxic and then |
|
|
|
1445 |
|
01:02:07,480 --> 01:02:14,079 |
|
you hire some annotators and you get 98% |
|
|
|
1446 |
|
01:02:11,799 --> 01:02:16,279 |
|
accuracy that's kind of bad right you |
|
|
|
1447 |
|
01:02:14,079 --> 01:02:19,200 |
|
know if you just said not toxic all the |
|
|
|
1448 |
|
01:02:16,279 --> 01:02:20,880 |
|
time you would get 99% um what the kappa
|
|
|
1449 |
|
01:02:19,200 --> 01:02:24,599 |
|
statistic does is it accounts for this |
|
|
|
1450 |
|
01:02:20,880 --> 01:02:26,559 |
|
basically it says how much better the
|
|
|
1451 |
|
01:02:24,599 --> 01:02:28,440 |
|
agreement is than chance and if you just had
|
|
|
1452 |
|
01:02:26,559 --> 01:02:30,720 |
|
chance accuracy you would get zero if |
|
|
|
1453 |
|
01:02:28,440 --> 01:02:33,200 |
|
you had perfect accuracy you would get |
|
|
|
1454 |
|
01:02:30,720 --> 01:02:34,920 |
|
one and you normally get something in |
|
|
|
1455 |
|
01:02:33,200 --> 01:02:37,359 |
|
between |
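Here is a minimal sketch of Cohen's kappa for two annotators (the labels below are made up; for more than two annotators you would use something like Fleiss' kappa instead):

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    # Observed agreement: fraction of items the two annotators label the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Skewed toxicity labels: raw agreement is 98% but kappa comes out around 0.5.
a = ["not_toxic"] * 97 + ["toxic"] * 3
b = ["not_toxic"] * 98 + ["toxic", "not_toxic"]
print(f"kappa = {cohens_kappa(a, b):.2f}")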
|
|
|
1456 |
|
01:02:34,920 --> 01:02:39,200 |
|
um so if it's low you may need to
|
|
|
1457 |
|
01:02:37,359 --> 01:02:41,319 |
|
revisit guidelines, hire better
|
|
|
1458 |
|
01:02:39,200 --> 01:02:44,480 |
|
annotators or rethink whether the task |
|
|
|
1459 |
|
01:02:41,319 --> 01:02:46,559 |
|
is possible at all or not um and you |
|
|
|
1460 |
|
01:02:44,480 --> 01:02:48,599 |
|
know some tasks are just impossible like |
|
|
|
1461 |
|
01:02:46,559 --> 01:02:51,599 |
|
if um I'm |
|
|
|
1462 |
|
01:02:48,599 --> 01:02:51,599 |
|
asking |
|
|
|
1463 |
|
01:02:52,240 --> 01:02:58,160 |
|
well, or they're very hard for
|
|
|
1464 |
|
01:02:55,960 --> 01:03:00,039 |
|
annotators so like to give one example |
|
|
|
1465 |
|
01:02:58,160 --> 01:03:04,039 |
|
um annotators are really horrible at |
|
|
|
1466 |
|
01:03:00,039 --> 01:03:06,200 |
|
identifying fake reviews um and so like |
|
|
|
1467 |
|
01:03:04,039 --> 01:03:07,640 |
|
even if you hire annotators to
|
|
|
1468 |
|
01:03:06,200 --> 01:03:09,279 |
|
identify fake reviews they're bad at
|
|
|
1469 |
|
01:03:07,640 --> 01:03:11,359 |
|
doing that so you're not likely to get |
|
|
|
1470 |
|
01:03:09,279 --> 01:03:14,680 |
|
high |
|
|
|
1471 |
|
01:03:11,359 --> 01:03:17,920 |
|
agreement um cool I'm going to skip over |
|
|
|
1472 |
|
01:03:14,680 --> 01:03:23,279 |
|
this part because I already talked about |
|
|
|
1473 |
|
01:03:17,920 --> 01:03:26,640 |
|
it okay um any questions
|
|
|
1474 |
|
01:03:23,279 --> 01:03:29,079 |
|
here okay sounds good uh next I'd like |
|
|
|
1475 |
|
01:03:26,640 --> 01:03:30,640 |
|
to get into running experiments so |
|
|
|
1476 |
|
01:03:29,079 --> 01:03:34,359 |
|
running experiments one thing I find |
|
|
|
1477 |
|
01:03:30,640 --> 01:03:37,200 |
|
very helpful is workflow automation um |
|
|
|
1478 |
|
01:03:34,359 --> 01:03:40,079 |
|
and basically what I like to do is I
|
|
|
1479 |
|
01:03:37,200 --> 01:03:41,839 |
|
like to modularize each step of an
|
|
|
1480 |
|
01:03:40,079 --> 01:03:44,119 |
|
experiment into a |
|
|
|
1481 |
|
01:03:41,839 --> 01:03:47,240 |
|
directory |
|
|
|
1482 |
|
01:03:44,119 --> 01:03:51,039 |
|
um where uh you have like a directory as |
|
|
|
1483 |
|
01:03:47,240 --> 01:03:53,279 |
|
input and a directory as output |
|
|
|
1484 |
|
01:03:51,039 --> 01:03:54,559 |
|
um this is my personal way of doing |
|
|
|
1485 |
|
01:03:53,279 --> 01:03:56,799 |
|
things there are other ways of doing |
|
|
|
1486 |
|
01:03:54,559 --> 01:03:58,640 |
|
things that are also good but um very |
|
|
|
1487 |
|
01:03:56,799 --> 01:04:00,760 |
|
often like just to give an example |
|
|
|
1488 |
|
01:03:58,640 --> 01:04:04,680 |
|
you'll need to do pre-processing |
|
|
|
1489 |
|
01:04:00,760 --> 01:04:07,480 |
|
according to some uh you'll need to do
|
|
|
1490 |
|
01:04:04,680 --> 01:04:09,119 |
|
data selection so you'll need to select |
|
|
|
1491 |
|
01:04:07,480 --> 01:04:11,039 |
|
which data sets you're training on |
|
|
|
1492 |
|
01:04:09,119 --> 01:04:13,520 |
|
you'll need to do pre-processing of them |
|
|
|
1493 |
|
01:04:11,039 --> 01:04:16,160 |
|
with a tokenization model and then you |
|
|
|
1494 |
|
01:04:13,520 --> 01:04:18,359 |
|
will need to run an |
|
|
|
1495 |
|
01:04:16,160 --> 01:04:20,000 |
|
experiment and then you'll need to do |
|
|
|
1496 |
|
01:04:18,359 --> 01:04:23,240 |
|
evaluation and those are all kind of |
|
|
|
1497 |
|
01:04:20,000 --> 01:04:25,079 |
|
like discrete steps where the data
|
|
|
1498 |
|
01:04:23,240 --> 01:04:27,760 |
|
selection takes in your big pool of data |
|
|
|
1499 |
|
01:04:25,079 --> 01:04:31,200 |
|
and outputs a data set that's been
|
|
|
1500 |
|
01:04:27,760 --> 01:04:33,680 |
|
selected the tokenization |
|
|
|
1501 |
|
01:04:31,200 --> 01:04:35,480 |
|
will uh take a tokenizer model maybe |
|
|
|
1502 |
|
01:04:33,680 --> 01:04:38,599 |
|
train a tokenizer model and split it
|
|
|
1503 |
|
01:04:35,480 --> 01:04:40,400 |
|
up into different tokens um the training |
|
|
|
1504 |
|
01:04:38,599 --> 01:04:42,079 |
|
will train and might output a whole bunch
|
|
|
1505 |
|
01:04:40,400 --> 01:04:44,720 |
|
of checkpoints and the evaluation will |
|
|
|
1506 |
|
01:04:42,079 --> 01:04:47,039 |
|
evaluate one checkpoint and so those are |
|
|
|
1507 |
|
01:04:44,720 --> 01:04:48,400 |
|
all kind of modular and you can actually |
|
|
|
1508 |
|
01:04:47,039 --> 01:04:50,039 |
|
think of each one of them as like a |
|
|
|
1509 |
|
01:04:48,400 --> 01:04:52,760 |
|
function in your Python |
|
|
|
1510 |
|
01:04:50,039 --> 01:04:56,400 |
|
program |
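As a minimal sketch of that kind of modularization (the directory layout and step names are made up), each step really can be a Python function that maps an input directory to an output directory:

from pathlib import Path

def select_data(raw_dir: Path, out_dir: Path):
    # Read the big pool of data from raw_dir, write the selected subset to out_dir.
    out_dir.mkdir(parents=True, exist_ok=True)

def tokenize(data_dir: Path, out_dir: Path):
    # Train/apply a tokenizer on data_dir, write tokenized files to out_dir.
    out_dir.mkdir(parents=True, exist_ok=True)

def train(token_dir: Path, out_dir: Path):
    # Train a model, writing a series of checkpoints into out_dir.
    out_dir.mkdir(parents=True, exist_ok=True)

def evaluate(checkpoint: Path, out_dir: Path):
    # Score a single checkpoint, writing metric values into out_dir.
    out_dir.mkdir(parents=True, exist_ok=True)

base = Path("experiments")
select_data(Path("raw_data"), base / "selected")
tokenize(base / "selected", base / "tokenized")
train(base / "tokenized", base / "model")
evaluate(base / "model" / "checkpoint_6.pt", base / "eval")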
|
|
|
1511 |
|
01:04:52,760 --> 01:04:58,160 |
|
and you kind of want to avoid rerunning |
|
|
|
1512 |
|
01:04:56,400 --> 01:05:00,200 |
|
data set selection and tokenization |
|
|
|
1513 |
|
01:04:58,160 --> 01:05:01,720 |
|
every time you do a new evaluation right |
|
|
|
1514 |
|
01:05:00,200 --> 01:05:03,359 |
|
like that would be kind of silly you |
|
|
|
1515 |
|
01:05:01,720 --> 01:05:04,680 |
|
definitely want to avoid rerunning |
|
|
|
1516 |
|
01:05:03,359 --> 01:05:09,119 |
|
training every time you evaluate a |
|
|
|
1517 |
|
01:05:04,680 --> 01:05:11,200 |
|
checkpoint so um what I do is I often |
|
|
|
1518 |
|
01:05:09,119 --> 01:05:12,799 |
|
name directories by parameters where |
|
|
|
1519 |
|
01:05:11,200 --> 01:05:16,079 |
|
it's like Transformer |
|
|
|
1520 |
|
01:05:12,799 --> 01:05:18,640 |
|
layers 8, nodes 512,
|
|
|
1521 |
|
01:05:16,079 --> 01:05:21,279 |
|
Dropout 0.5 label smooth |
|
|
|
1522 |
|
01:05:18,640 --> 01:05:25,880 |
|
0.02 um and so I have all the parameters |
|
|
|
1523 |
|
01:05:21,279 --> 01:05:26,880 |
|
in there and then |
|
|
|
1524 |
|
01:05:25,880 --> 01:05:29,680 |
|
the |
|
|
|
1525 |
|
01:05:26,880 --> 01:05:31,960 |
|
training process will output a whole |
|
|
|
1526 |
|
01:05:29,680 --> 01:05:33,960 |
|
bunch of checkpoints in here and then |
|
|
|
1527 |
|
01:05:31,960 --> 01:05:35,520 |
|
for my evaluation I have evaluation |
|
|
|
1528 |
|
01:05:33,960 --> 01:05:38,119 |
|
metrics and I have the checkpoint I'm |
|
|
|
1529 |
|
01:05:35,520 --> 01:05:41,680 |
|
evaluating so uh when I do |
|
|
|
1530 |
|
01:05:38,119 --> 01:05:45,119 |
|
evaluation I will then append checkpoint |
|
|
|
1531 |
|
01:05:41,680 --> 01:05:47,279 |
|
6 uh metric F measure or something like |
|
|
|
1532 |
|
01:05:45,119 --> 01:05:49,079 |
|
that and so I keep around all of the |
|
|
|
1533 |
|
01:05:47,279 --> 01:05:52,520 |
|
previous information and just append |
|
|
|
1534 |
|
01:05:49,079 --> 01:05:54,599 |
|
append append append and so um this |
|
|
|
1535 |
|
01:05:52,520 --> 01:05:56,680 |
|
allows you to avoid rerunning things |
|
|
|
1536 |
|
01:05:54,599 --> 01:05:58,359 |
|
because you can uh just have your python |
|
|
|
1537 |
|
01:05:56,680 --> 01:06:00,520 |
|
code to check if the directory already |
|
|
|
1538 |
|
01:05:58,359 --> 01:06:01,839 |
|
exists and already has been completed |
|
|
|
1539 |
|
01:06:00,520 --> 01:06:03,559 |
|
and then read in the result if it |
|
|
|
1540 |
|
01:06:01,839 --> 01:06:06,319 |
|
already has been or run the experiment |
|
|
|
1541 |
|
01:06:03,559 --> 01:06:08,079 |
|
if it hasn't been so um
|
|
|
1542 |
|
01:06:06,319 --> 01:06:10,279 |
|
you can write this in pure python by |
|
|
|
1543 |
|
01:06:08,079 --> 01:06:11,599 |
|
just adding like some if statements at |
|
|
|
1544 |
|
01:06:10,279 --> 01:06:14,079 |
|
the beginning of the function some if |
|
|
|
1545 |
|
01:06:11,599 --> 01:06:16,799 |
|
statements at um some like output |
|
|
|
1546 |
|
01:06:14,079 --> 01:06:19,440 |
|
statements at the end of the function um |
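Here is a minimal sketch of that skip-if-already-done pattern (the parameter names, file names, and DONE-marker convention are just assumptions for illustration):

import json
from pathlib import Path

def run_step(name, params, fn, base=Path("experiments")):
    # Output directory named from the parameters, e.g. eval_dropout-0.5_layers-8_nodes-512
    out_dir = base / (name + "_" + "_".join(f"{k}-{v}" for k, v in sorted(params.items())))
    done = out_dir / "DONE"
    if done.exists():
        # Step already finished: read back the stored result instead of rerunning.
        return json.loads((out_dir / "result.json").read_text())
    out_dir.mkdir(parents=True, exist_ok=True)
    result = fn(out_dir, **params)                      # actually run the step
    (out_dir / "result.json").write_text(json.dumps(result))
    done.touch()
    return result

def fake_eval(out_dir, layers, nodes, dropout):         # stand-in for a real step
    return {"f_measure": 0.78}

score = run_step("eval", {"layers": 8, "nodes": 512, "dropout": 0.5}, fake_eval)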
|
|
|
1547 |
|
01:06:16,799 --> 01:06:22,000 |
|
there are more sophisticated
|
|
|
1548 |
|
01:06:19,440 --> 01:06:24,200 |
|
methods so there's like a toolkit called |
|
|
|
1549 |
|
01:06:22,000 --> 01:06:28,079 |
|
ducttape that was originally created
|
|
|
1550 |
|
01:06:24,200 --> 01:06:31,760 |
|
here at CMU and um my uh student Patrick |
|
|
|
1551 |
|
01:06:28,079 --> 01:06:33,079 |
|
is now maintaining at this link um so you
|
|
|
1552 |
|
01:06:31,760 --> 01:06:34,960 |
|
can either just roll something on your |
|
|
|
1553 |
|
01:06:33,079 --> 01:06:36,880 |
|
own or look into one of these more |
|
|
|
1554 |
|
01:06:34,960 --> 01:06:39,359 |
|
complex workflow automation things
|
|
|
1555 |
|
01:06:36,880 --> 01:06:39,359 |
|
to save you
|
|
|
1556 |
|
01:06:39,400 --> 01:06:47,279 |
|
time okay evaluation um so I talked |
|
|
|
1557 |
|
01:06:43,400 --> 01:06:49,000 |
|
about this to some extent um uh so yeah |
|
|
|
1558 |
|
01:06:47,279 --> 01:06:51,000 |
|
I'll just skip over |
|
|
|
1559 |
|
01:06:49,000 --> 01:06:54,559 |
|
that |
|
|
|
1560 |
|
01:06:51,000 --> 01:06:57,200 |
|
and result reporting um |
|
|
|
1561 |
|
01:06:54,559 --> 01:06:59,160 |
|
for papers one thing that I really like |
|
|
|
1562 |
|
01:06:57,200 --> 01:07:01,960 |
|
to do is plan the result section in |
|
|
|
1563 |
|
01:06:59,160 --> 01:07:07,039 |
|
advance or at least imagine the result |
|
|
|
1564 |
|
01:07:01,960 --> 01:07:07,039 |
|
section in advance um |
|
|
|
1565 |
|
01:07:07,200 --> 01:07:11,640 |
|
so what I think of is like what
|
|
|
1566 |
|
01:07:09,559 --> 01:07:14,520 |
|
experimental claims would I like to make |
|
|
|
1567 |
|
01:07:11,640 --> 01:07:15,760 |
|
how am I going to support them by the |
|
|
|
1568 |
|
01:07:14,520 --> 01:07:19,039 |
|
experiments that I'm going to show in a |
|
|
|
1569 |
|
01:07:15,760 --> 01:07:21,160 |
|
result section um and this identifies |
|
|
|
1570 |
|
01:07:19,039 --> 01:07:24,640 |
|
unjustified experimental claims like so |
|
|
|
1571 |
|
01:07:21,160 --> 01:07:27,119 |
|
let's say for your method you're saying
|
|
|
1572 |
|
01:07:24,640 --> 01:07:29,000 |
|
something like uh this method improves |
|
|
|
1573 |
|
01:07:27,119 --> 01:07:30,440 |
|
across a wide variety of languages and |
|
|
|
1574 |
|
01:07:29,000 --> 01:07:32,520 |
|
then you realize that you only have one |
|
|
|
1575 |
|
01:07:30,440 --> 01:07:34,720 |
|
language in your
|
|
|
1576 |
|
01:07:32,520 --> 01:07:37,960 |
|
experiment section that's a problem |
|
|
|
1577 |
|
01:07:34,720 --> 01:07:40,640 |
|
obviously um also I really enjoy like
|
|
|
1578 |
|
01:07:37,960 --> 01:07:43,599 |
|
assuming that all of my experiments are |
|
|
|
1579 |
|
01:07:40,640 --> 01:07:46,520 |
|
going really really well um and you know |
|
|
|
1580 |
|
01:07:43,599 --> 01:07:49,440 |
|
none of my runs crash with
|
|
|
1581 |
|
01:07:46,520 --> 01:07:52,000 |
|
CUDA out of memory errors and you know
|
|
|
1582 |
|
01:07:49,440 --> 01:07:55,319 |
|
all of the experiments appear as
|
|
|
1583 |
|
01:07:52,000 --> 01:07:57,960 |
|
expected and if you do something like |
|
|
|
1584 |
|
01:07:55,319 --> 01:07:59,960 |
|
that you can be ambitious and say okay |
|
|
|
1585 |
|
01:07:57,960 --> 01:08:03,119 |
|
how can I make this research project |
|
|
|
1586 |
|
01:07:59,960 --> 01:08:04,960 |
|
really impactful like um and another |
|
|
|
1587 |
|
01:08:03,119 --> 01:08:08,240 |
|
thing that I like to ask my students or |
|
|
|
1588 |
|
01:08:04,960 --> 01:08:11,200 |
|
people I'm working with recently is like |
|
|
|
1589 |
|
01:08:08,240 --> 01:08:13,440 |
|
who are like three people in the world |
|
|
|
1590 |
|
01:08:11,200 --> 01:08:17,440 |
|
who will be really excited by your paper |
|
|
|
1591 |
|
01:08:13,440 --> 01:08:19,040 |
|
like name actual people um and where do |
|
|
|
1592 |
|
01:08:17,440 --> 01:08:20,839 |
|
those people work what do they care |
|
|
|
1593 |
|
01:08:19,040 --> 01:08:22,359 |
|
about what sort of evidence would you |
|
|
|
1594 |
|
01:08:20,839 --> 01:08:24,560 |
|
need in your paper to make them really |
|
|
|
1595 |
|
01:08:22,359 --> 01:08:26,560 |
|
excited about your paper or something |
|
|
|
1596 |
|
01:08:24,560 --> 01:08:29,679 |
|
like that and very often people will |
|
|
|
1597 |
|
01:08:26,560 --> 01:08:31,480 |
|
reply to me like oh I think people in um |
|
|
|
1598 |
|
01:08:29,679 --> 01:08:32,799 |
|
in Google will be very excited about |
|
|
|
1599 |
|
01:08:31,480 --> 01:08:34,440 |
|
this and they're going to use it and I'm |
|
|
|
1600 |
|
01:08:32,799 --> 01:08:38,719 |
|
like well you're writing all your code |
|
|
|
1601 |
|
01:08:34,440 --> 01:08:39,839 |
|
in PyTorch and they don't use PyTorch so
|
|
|
1602 |
|
01:08:38,719 --> 01:08:41,000 |
|
how are you going to convince them to |
|
|
|
1603 |
|
01:08:39,839 --> 01:08:42,640 |
|
use your paper they're going to have to
|
|
|
1604 |
|
01:08:41,000 --> 01:08:46,120 |
|
reimplement it in JAX and that's going
|
|
|
1605 |
|
01:08:42,640 --> 01:08:47,520 |
|
to suck for them so like uh you know |
|
|
|
1606 |
|
01:08:46,120 --> 01:08:49,040 |
|
what are the barriers for them actually |
|
|
|
1607 |
|
01:08:47,520 --> 01:08:50,799 |
|
using it and then maybe the people are |
|
|
|
1608 |
|
01:08:49,040 --> 01:08:52,159 |
|
like oh well maybe actually I don't want |
|
|
|
1609 |
|
01:08:50,799 --> 01:08:54,199 |
|
people at Google to use this and I can |
|
|
|
1610 |
|
01:08:52,159 --> 01:08:56,560 |
|
think of somebody else and it's like |
|
|
|
1611 |
|
01:08:54,199 --> 01:08:58,920 |
|
well great so now release it open source |
|
|
|
1612 |
|
01:08:56,560 --> 01:09:00,520 |
|
and people will have it open source
|
|
|
1613 |
|
01:08:58,920 --> 01:09:01,920 |
|
so you can kind of think about like the |
|
|
|
1614 |
|
01:09:00,520 --> 01:09:03,719 |
|
types of evidence that you would need to |
|
|
|
1615 |
|
01:09:01,920 --> 01:09:05,440 |
|
convince people to use your work and |
|
|
|
1616 |
|
01:09:03,719 --> 01:09:08,040 |
|
that can result in your work being more |
|
|
|
1617 |
|
01:09:05,440 --> 01:09:09,319 |
|
impactful in the long run and if you |
|
|
|
1618 |
|
01:09:08,040 --> 01:09:10,400 |
|
think about it from the very beginning |
|
|
|
1619 |
|
01:09:09,319 --> 01:09:11,839 |
|
that also helps you plan your |
|
|
|
1620 |
|
01:09:10,400 --> 01:09:13,520 |
|
experiments like what sort of evidence |
|
|
|
1621 |
|
01:09:11,839 --> 01:09:15,359 |
|
is necessary for people to get excited |
|
|
|
1622 |
|
01:09:13,520 --> 01:09:18,440 |
|
about it in this
|
|
|
1623 |
|
01:09:15,359 --> 01:09:20,120 |
|
space um another thing that I like to do
|
|
|
1624 |
|
01:09:18,440 --> 01:09:24,000 |
|
with result reporting is result |
|
|
|
1625 |
|
01:09:20,120 --> 01:09:26,880 |
|
generation scripts um so uh I often |
|
|
|
1626 |
|
01:09:24,000 --> 01:09:29,159 |
|
generate paper LaTeX directly from log
|
|
|
1627 |
|
01:09:26,880 --> 01:09:31,799 |
|
files uh there's two reasons why I do |
|
|
|
1628 |
|
01:09:29,159 --> 01:09:34,480 |
|
this um number one it's efficient and |
|
|
|
1629 |
|
01:09:31,799 --> 01:09:36,719 |
|
minimizes errors number two it allows |
|
|
|
1630 |
|
01:09:34,480 --> 01:09:39,080 |
|
you to preemptively plan experiments |
|
|
|
1631 |
|
01:09:36,719 --> 01:09:41,120 |
|
that you want to run so like for example |
|
|
|
1632 |
|
01:09:39,080 --> 01:09:44,440 |
|
if we go back to the doc um the
|
|
|
1633 |
|
01:09:41,120 --> 01:09:46,199 |
|
directory that I talked about before um |
|
|
|
1634 |
|
01:09:44,440 --> 01:09:50,359 |
|
I can write |
|
|
|
1635 |
|
01:09:46,199 --> 01:09:52,719 |
|
a script that reads in 20 evaluation
|
|
|
1636 |
|
01:09:50,359 --> 01:09:54,800 |
|
results from 20 different directories |
|
|
|
1637 |
|
01:09:52,719 --> 01:09:56,920 |
|
and fills in a table and if that |
|
|
|
1638 |
|
01:09:54,800 --> 01:09:58,600 |
|
directory doesn't exist yet it will put |
|
|
|
1639 |
|
01:09:56,920 --> 01:10:01,239 |
|
like TBD or something like that in the
|
|
|
1640 |
|
01:09:58,600 --> 01:10:03,960 |
|
table so I can very quickly see okay |
|
|
|
1641 |
|
01:10:01,239 --> 01:10:05,880 |
|
these things are TBD um oh this thing |
|
|
|
1642 |
|
01:10:03,960 --> 01:10:07,480 |
|
has been TBD for a very long time is my |
|
|
|
1643 |
|
01:10:05,880 --> 01:10:09,400 |
|
experiment crashed do I need to go back |
|
|
|
1644 |
|
01:10:07,480 --> 01:10:12,239 |
|
and like restart my experiment or |
|
|
|
1645 |
|
01:10:09,400 --> 01:10:13,719 |
|
something like that so um it's an |
|
|
|
1646 |
|
01:10:12,239 --> 01:10:17,280 |
|
efficient way and when you finish the |
|
|
|
1647 |
|
01:10:13,719 --> 01:10:17,280 |
|
last TBD it's a very good feeling |
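A minimal sketch of such a table-generation script (the directory layout, experiment names, and metric key are made up):

import json
from pathlib import Path

systems = ["baseline", "ours_small", "ours_large"]      # hypothetical experiments
rows = []
for system in systems:
    result_file = Path("experiments") / system / "result.json"
    if result_file.exists():
        score = f"{json.loads(result_file.read_text())['f_measure']:.1f}"
    else:
        score = "TBD"                                   # not finished (or crashed) yet
    rows.append(f"{system} & {score} \\\\")

table = "\n".join([r"\begin{tabular}{lr}", r"System & F1 \\ \hline", *rows, r"\end{tabular}"])
Path("results_table.tex").write_text(table)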
|
|
|
1648 |
|
01:10:18,280 --> 01:10:23,719 |
|
cool um next is computational
|
|
|
1649 |
|
01:10:21,760 --> 01:10:26,159 |
|
resources actually I kind of already |
|
|
|
1650 |
|
01:10:23,719 --> 01:10:28,600 |
|
talked about this a little bit um but on |
|
|
|
1651 |
|
01:10:26,159 --> 01:10:30,280 |
|
Amazon Web Services we have uh class
|
|
|
1652 |
|
01:10:28,600 --> 01:10:32,080 |
|
credits that we're going to be issuing |
|
|
|
1653 |
|
01:10:30,280 --> 01:10:34,880 |
|
as soon as uh the assignment one |
|
|
|
1654 |
|
01:10:32,080 --> 01:10:37,560 |
|
deadline is over um there's also Google |
|
|
|
1655 |
|
01:10:34,880 --> 01:10:39,440 |
|
Cloud and Colab um you can get
|
|
|
1656 |
|
01:10:37,560 --> 01:10:44,000 |
|
commodity GPUs and other things like
|
|
|
1657 |
|
01:10:39,440 --> 01:10:47,800 |
|
that so um you can also consider |
|
|
|
1658 |
|
01:10:44,000 --> 01:10:53,159 |
|
that okay let me get into Data analysis |
|
|
|
1659 |
|
01:10:47,800 --> 01:10:55,440 |
|
um so I'm going to cover this a lot more |
|
|
|
1660 |
|
01:10:53,159 --> 01:10:58,480 |
|
in an interpretation lecture and this is |
|
|
|
1661 |
|
01:10:55,440 --> 01:10:59,520 |
|
going to be in three classes so this is |
|
|
|
1662 |
|
01:10:58,480 --> 01:11:02,239 |
|
going to |
|
|
|
1663 |
|
01:10:59,520 --> 01:11:07,000 |
|
be the |
|
|
|
1664 |
|
01:11:02,239 --> 01:11:09,719 |
|
Tuesday after next um so uh very |
|
|
|
1665 |
|
01:11:07,000 --> 01:11:11,000 |
|
important things though uh look at data |
|
|
|
1666 |
|
01:11:09,719 --> 01:11:13,679 |
|
um you'll want to do quantitative |
|
|
|
1667 |
|
01:11:11,000 --> 01:11:16,239 |
|
analysis and qualitative analysis um you |
|
|
|
1668 |
|
01:11:13,679 --> 01:11:17,440 |
|
can also look at model explanations so |
|
|
|
1669 |
|
01:11:16,239 --> 01:11:18,719 |
|
I'm going to cover how to do all of |
|
|
|
1670 |
|
01:11:17,440 --> 01:11:21,520 |
|
these things in that lecture I don't |
|
|
|
1671 |
|
01:11:18,719 --> 01:11:24,440 |
|
have enough time to do it |
|
|
|
1672 |
|
01:11:21,520 --> 01:11:26,960 |
|
today then the final thing is reporting
|
|
|
1673 |
|
01:11:24,440 --> 01:11:30,840 |
|
conclusions um this is also too much for |
|
|
|
1674 |
|
01:11:26,960 --> 01:11:34,000 |
|
a single class but um I very highly |
|
|
|
1675 |
|
01:11:30,840 --> 01:11:35,920 |
|
recommend this lecture um uh sorry these |
|
|
|
1676 |
|
01:11:34,000 --> 01:11:39,320 |
|
lecture slides they don't take that long |
|
|
|
1677 |
|
01:11:35,920 --> 01:11:40,880 |
|
to look through they're maybe um 20 |
|
|
|
1678 |
|
01:11:39,320 --> 01:11:42,880 |
|
minutes or so but they're very very |
|
|
|
1679 |
|
01:11:40,880 --> 01:11:45,480 |
|
helpful um they talk about how to |
|
|
|
1680 |
|
01:11:42,880 --> 01:11:48,199 |
|
structure a paper uh other things like |
|
|
|
1681 |
|
01:11:45,480 --> 01:11:51,440 |
|
this and if you follow this advice for |
|
|
|
1682 |
|
01:11:48,199 --> 01:11:53,239 |
|
writing your reports for like three and |
|
|
|
1683 |
|
01:11:51,440 --> 01:11:54,960 |
|
four assignment three and assignment |
|
|
|
1684 |
|
01:11:53,239 --> 01:11:57,800 |
|
four even assignment two I think you |
|
|
|
1685 |
|
01:11:54,960 --> 01:11:59,400 |
|
can't really go wrong uh actually three |
|
|
|
1686 |
|
01:11:57,800 --> 01:12:00,840 |
|
and four is probably better uh than |
|
|
|
1687 |
|
01:11:59,400 --> 01:12:03,320 |
|
assignment two assignment two can be |
|
|
|
1688 |
|
01:12:00,840 --> 01:12:05,360 |
|
more descriptive so definitely take a |
|
|
|
1689 |
|
01:12:03,320 --> 01:12:08,600 |
|
look at that if |
|
|
|
1690 |
|
01:12:05,360 --> 01:12:08,600 |
|
you cool |