1
00:00:01,280 --> 00:00:06,759
So the class today is, uh, Introduction to
2
00:00:04,680 --> 00:00:09,480
Natural Language Processing, and I'll be
3
00:00:06,759 --> 00:00:11,200
talking a little bit about you know what
4
00:00:09,480 --> 00:00:14,719
is natural language processing why we're
5
00:00:11,200 --> 00:00:16,720
motivated to do it and also some of the
6
00:00:14,719 --> 00:00:18,039
difficulties that we encounter. And
7
00:00:16,720 --> 00:00:19,880
at the end I'll also be talking about
8
00:00:18,039 --> 00:00:22,519
class logistics, so you can ask any
9
00:00:19,880 --> 00:00:25,439
logistics questions at that
10
00:00:22,519 --> 00:00:27,720
time so if we talk about what is NLP
11
00:00:25,439 --> 00:00:29,320
anyway uh does anyone have any opinions
12
00:00:27,720 --> 00:00:31,439
about the definition of what natural
13
00:00:29,320 --> 00:00:33,239
language processing would be. Oh, one other
14
00:00:31,439 --> 00:00:35,680
thing I should mention is I am recording
15
00:00:33,239 --> 00:00:38,600
the class uh I put the class on YouTube
16
00:00:35,680 --> 00:00:40,520
uh afterwards I will not take pictures
17
00:00:38,600 --> 00:00:41,920
or video of any of you uh but if you
18
00:00:40,520 --> 00:00:44,719
talk your voice might come in the
19
00:00:41,920 --> 00:00:47,440
background so just uh be aware of that
20
00:00:44,719 --> 00:00:49,000
Um, usually it won't; it's a directional mic, so
21
00:00:47,440 --> 00:00:51,559
I try to repeat the questions after
22
00:00:49,000 --> 00:00:54,079
everybody um but uh for the people who
23
00:00:51,559 --> 00:00:57,680
are, uh, listening to the
24
00:00:54,079 --> 00:00:59,320
recordings um so anyway what is NLP
25
00:00:57,680 --> 00:01:03,120
anyway does anybody have any ideas about
26
00:00:59,320 --> 00:01:03,120
the definition of what NLP might
27
00:01:06,119 --> 00:01:09,119
be
28
00:01:15,439 --> 00:01:21,759
Yes? Okay, um, so the answer was: it
29
00:01:19,240 --> 00:01:25,759
helps machines understand language
30
00:01:21,759 --> 00:01:27,920
better, uh, so to facilitate human-human
31
00:01:25,759 --> 00:01:31,159
and human-machine interactions. I think
32
00:01:27,920 --> 00:01:32,759
that's very good um it's
33
00:01:31,159 --> 00:01:36,520
uh similar to what I have written on my
34
00:01:32,759 --> 00:01:38,040
slide here. Uh, but in addition to
35
00:01:36,520 --> 00:01:41,280
natural language understanding there's
36
00:01:38,040 --> 00:01:46,000
one other major segment of NLP. Uh, does
37
00:01:41,280 --> 00:01:46,000
anyone uh have an idea what that might
38
00:01:48,719 --> 00:01:53,079
be? We often have a dichotomy between two
39
00:01:51,399 --> 00:01:55,240
major segments natural language
40
00:01:53,079 --> 00:01:57,520
understanding and natural language
41
00:01:55,240 --> 00:01:59,439
generation. Yeah, exactly. So I would say
42
00:01:57,520 --> 00:02:03,119
that's almost perfect if you had said
43
00:01:59,439 --> 00:02:06,640
understand and generate so very good um
44
00:02:03,119 --> 00:02:08,560
so I say NLP is technology to handle
45
00:02:06,640 --> 00:02:11,400
human language usually text using
46
00:02:08,560 --> 00:02:13,200
computers, uh, to aid human-machine
47
00:02:11,400 --> 00:02:15,480
communication and this can include
48
00:02:13,200 --> 00:02:17,879
things like question answering dialogue
49
00:02:15,480 --> 00:02:20,840
or generation of code that can be
50
00:02:17,879 --> 00:02:23,239
executed with uh
51
00:02:20,840 --> 00:02:25,080
computers. It can also aid human-human
52
00:02:23,239 --> 00:02:27,440
communication and this can include
53
00:02:25,080 --> 00:02:30,440
things like machine translation or spell
54
00:02:27,440 --> 00:02:32,640
checking or assisted writing
55
00:02:30,440 --> 00:02:34,560
and then a final uh segment that people
56
00:02:32,640 --> 00:02:37,400
might think about a little bit less is
57
00:02:34,560 --> 00:02:39,400
analyzing and understanding a language
58
00:02:37,400 --> 00:02:42,400
and this includes things like syntactic
59
00:02:39,400 --> 00:02:44,959
analysis text classification entity
60
00:02:42,400 --> 00:02:47,400
recognition and linking and these can be
61
00:02:44,959 --> 00:02:49,159
used for uh various reasons not
62
00:02:47,400 --> 00:02:51,000
necessarily for direct human machine
63
00:02:49,159 --> 00:02:52,720
communication but also for like
64
00:02:51,000 --> 00:02:54,400
aggregating information across large
65
00:02:52,720 --> 00:02:55,760
things for scientific studies and other
66
00:02:54,400 --> 00:02:57,519
things like that I'll give a few
67
00:02:55,760 --> 00:03:00,920
examples of
68
00:02:57,519 --> 00:03:04,040
this. Um, we now use NLP many times a day,
69
00:03:00,920 --> 00:03:06,480
sometimes without even knowing it so uh
70
00:03:04,040 --> 00:03:09,400
whenever you're typing a doc in Google
71
00:03:06,480 --> 00:03:11,599
Docs there's you know spell checking and
72
00:03:09,400 --> 00:03:13,959
grammar checking going on behind. It's
73
00:03:11,599 --> 00:03:15,920
gotten frighteningly good
74
00:03:13,959 --> 00:03:18,280
recently, where it catches like most
75
00:03:15,920 --> 00:03:20,720
of my mistakes and rarely flags things
76
00:03:18,280 --> 00:03:22,799
that are not mistakes so obviously they
77
00:03:20,720 --> 00:03:25,080
have powerful models running behind that
78
00:03:22,799 --> 00:03:25,080
uh
79
00:03:25,640 --> 00:03:33,080
So NLP can do things like answer
80
00:03:28,720 --> 00:03:34,599
questions. Uh, so I asked ChatGPT who is
81
00:03:33,080 --> 00:03:37,000
the current president of Carnegie Mellon
82
00:03:34,599 --> 00:03:38,920
University, and ChatGPT said: I did a
83
00:03:37,000 --> 00:03:40,920
quick search for more information here
84
00:03:38,920 --> 00:03:43,439
is what I found uh the current president
85
00:03:40,920 --> 00:03:47,120
of Carnegie Mellon University is Farnam Jahanian; he
86
00:03:43,439 --> 00:03:50,040
has been serving since July 1 etc etc so
87
00:03:47,120 --> 00:03:50,040
as far as I can tell that's
88
00:03:50,400 --> 00:03:56,319
correct um at the same time I asked how
89
00:03:53,799 --> 00:04:00,280
many layers are included in the GPT-3.5
90
00:03:56,319 --> 00:04:02,360
Turbo architecture, and it said to me:
91
00:04:00,280 --> 00:04:05,400
GPT-3.5 Turbo, which is an optimized
92
00:04:02,360 --> 00:04:07,239
version of GPT 3.5 for faster responses
93
00:04:05,400 --> 00:04:08,959
doesn't have a specific layer
94
00:04:07,239 --> 00:04:11,720
architecture like the traditional GPT-3
95
00:04:08,959 --> 00:04:13,560
models um and I don't know if this is
96
00:04:11,720 --> 00:04:16,600
true or not but I'm pretty sure it's not
97
00:04:13,560 --> 00:04:18,840
true I'm pretty sure that you know GPT
98
00:04:16,600 --> 00:04:20,560
is a model that's much like other models
99
00:04:18,840 --> 00:04:21,560
uh so it basically just made up the spec
100
00:04:20,560 --> 00:04:22,880
because it didn't have any information
101
00:04:21,560 --> 00:04:26,000
on the Internet or couldn't talk about
102
00:04:22,880 --> 00:04:26,000
it so
103
00:04:26,120 --> 00:04:33,479
um another thing is uh NLP can translate
104
00:04:29,639 --> 00:04:37,759
text pretty well so I ran um Google
105
00:04:33,479 --> 00:04:39,560
Translate, uh, on Japanese. Uh, this example
106
00:04:37,759 --> 00:04:41,639
is a little bit old it's from uh you
107
00:04:39,560 --> 00:04:44,639
know, a few years ago, about COVID, but I
108
00:04:41,639 --> 00:04:46,240
retranslated it a few days ago and it
109
00:04:44,639 --> 00:04:47,680
comes up pretty good uh you can
110
00:04:46,240 --> 00:04:49,639
basically understand what's going on
111
00:04:47,680 --> 00:04:53,520
here it's not perfect but you can
112
00:04:49,639 --> 00:04:56,400
understand the uh the general uh
113
00:04:53,520 --> 00:04:58,560
gist at the same time uh if I put in a
114
00:04:56,400 --> 00:05:02,280
relatively low resource language this is
115
00:04:58,560 --> 00:05:05,759
Kurdish um it has a number of problems
116
00:05:02,280 --> 00:05:08,199
when you try to understand it and just
117
00:05:05,759 --> 00:05:12,400
to give an example this is talking about
118
00:05:08,199 --> 00:05:14,320
uh some uh paleontology Discovery it
119
00:05:12,400 --> 00:05:15,800
called this person a fossil scientist
120
00:05:14,320 --> 00:05:17,440
instead of the kind of obvious English
121
00:05:15,800 --> 00:05:20,120
term
122
00:05:17,440 --> 00:05:23,520
paleontologist um and it's talking about
123
00:05:20,120 --> 00:05:25,240
three different uh T-Rex species uh how
124
00:05:23,520 --> 00:05:27,039
T-Rex should actually be split into
125
00:05:25,240 --> 00:05:29,639
three species, where T. rex means king of
126
00:05:27,039 --> 00:05:31,560
ferocious lizards, T. imperator means emperor
127
00:05:29,639 --> 00:05:33,720
of savage lizards, and then T. regina
128
00:05:31,560 --> 00:05:35,120
means queen of ferocious snail. I'm
129
00:05:33,720 --> 00:05:37,240
pretty sure that's not snail I'm pretty
130
00:05:35,120 --> 00:05:41,080
sure that's lizard so uh you can see
131
00:05:37,240 --> 00:05:41,080
that this is not uh this is not perfect
132
00:05:41,280 --> 00:05:46,680
either some people might be thinking why
133
00:05:43,960 --> 00:05:48,400
Google Translate and why not GPT? Well, it
134
00:05:46,680 --> 00:05:49,960
turns out um according to one of the
135
00:05:48,400 --> 00:05:51,759
recent studies we've done, GPT is even
136
00:05:49,960 --> 00:05:55,479
worse at these low-resource languages,
137
00:05:51,759 --> 00:05:58,120
so I use the best thing that's out
138
00:05:55,479 --> 00:06:00,440
there um another thing is language
139
00:05:58,120 --> 00:06:02,039
analysis can aid scientific inquiry,
140
00:06:00,440 --> 00:06:03,600
so this is an example that I've been
141
00:06:02,039 --> 00:06:06,120
using for a long time it's actually from
142
00:06:03,600 --> 00:06:09,160
Maarten Sap, another faculty member here,
143
00:06:06,120 --> 00:06:12,440
uh but I have been using it since uh
144
00:06:09,160 --> 00:06:14,160
like before he joined and it uh this is
145
00:06:12,440 --> 00:06:16,039
an example from computational social
146
00:06:14,160 --> 00:06:18,599
science uh answering questions about
147
00:06:16,039 --> 00:06:20,240
Society given observational data and
148
00:06:18,599 --> 00:06:22,280
their question was do movie scripts
149
00:06:20,240 --> 00:06:24,599
portray female or male characters with
150
00:06:22,280 --> 00:06:27,520
more power or agency in
151
00:06:24,599 --> 00:06:30,120
films? So it's asking kind of a
152
00:06:27,520 --> 00:06:32,160
societal question by using NLP
153
00:06:30,120 --> 00:06:35,360
technology and the way they did it is
154
00:06:32,160 --> 00:06:36,880
they basically analyzed text trying to
155
00:06:35,360 --> 00:06:43,080
find
156
00:06:36,880 --> 00:06:45,280
uh, the agents and patients in a
157
00:06:43,080 --> 00:06:46,479
particular text, which are the things
158
00:06:45,280 --> 00:06:49,280
that are doing things and the things
159
00:06:46,479 --> 00:06:52,639
that things are being done to and you
160
00:06:49,280 --> 00:06:54,440
can see that essentially male characters
161
00:06:52,639 --> 00:06:56,560
in these movie scripts were given more
162
00:06:54,440 --> 00:06:58,080
power and agency, and female characters
163
00:06:56,560 --> 00:06:59,960
were given less power and agency. And they
164
00:06:58,080 --> 00:07:02,680
were able to do this because they had
165
00:06:59,960 --> 00:07:04,840
NLP technology that analyzed and
166
00:07:02,680 --> 00:07:08,960
extracted useful data and turned it
167
00:07:04,840 --> 00:07:11,520
into a very easy form to do kind of
168
00:07:08,960 --> 00:07:15,840
analysis of the variety that they want
169
00:07:11,520 --> 00:07:17,400
so um I think that's a major use case of
170
00:07:15,840 --> 00:07:19,400
NLP technology that does language
171
00:07:17,400 --> 00:07:20,919
analysis nowadays: turning text into a form
172
00:07:19,400 --> 00:07:23,960
that allows you to very quickly do
173
00:07:20,919 --> 00:07:27,440
aggregate queries and other things like
174
00:07:23,960 --> 00:07:30,479
this um but at the same time uh language
175
00:07:27,440 --> 00:07:33,520
analysis tools fail at very basic tasks
176
00:07:30,479 --> 00:07:36,000
so these are
177
00:07:33,520 --> 00:07:38,199
some things that I ran through a named
178
00:07:36,000 --> 00:07:41,080
entity recognizer and these were kind of
179
00:07:38,199 --> 00:07:43,160
very nice named entity recognizers uh
180
00:07:41,080 --> 00:07:46,240
that a lot of people were using for
181
00:07:43,160 --> 00:07:48,039
example Stanford CoreNLP and spaCy, and
182
00:07:46,240 --> 00:07:50,319
both of them I just threw in the first
183
00:07:48,039 --> 00:07:53,120
thing that I found on the New York Times
184
00:07:50,319 --> 00:07:55,199
at the time and it basically made at
185
00:07:53,120 --> 00:07:58,319
least one mistake in the first sentence
186
00:07:55,199 --> 00:08:00,840
and here it recognizes Baton Rouge as an
187
00:07:58,319 --> 00:08:04,720
organization and here it recognized
188
00:08:00,840 --> 00:08:07,000
Hurricane Ida as an organization.
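(As a hedged illustration of the kind of tool being discussed, here is a minimal spaCy sketch; the sentence is invented, and it assumes the small English model has been installed:)

import spacy

# A minimal sketch of off-the-shelf NER with spaCy; assumes
# `python -m spacy download en_core_web_sm` has been run first.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hurricane Ida brought heavy flooding to Baton Rouge.")
for ent in doc.ents:
    print(ent.text, ent.label_)
# Whether these come out as EVENT/GPE or get mislabeled as ORG
# varies by model and version, which is exactly the failure mode
# being described here.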
189
00:08:04,720 --> 00:08:08,879
So, um, like, even these things that we expect
190
00:08:07,000 --> 00:08:10,360
should work pretty well make pretty
191
00:08:08,879 --> 00:08:13,360
silly
192
00:08:10,360 --> 00:08:16,199
mistakes so in the class uh basically
193
00:08:13,360 --> 00:08:18,479
what I want to cover is uh what goes
194
00:08:16,199 --> 00:08:20,360
into building uh state-of-the-art NLP
195
00:08:18,479 --> 00:08:24,000
systems that work really well on a wide
196
00:08:20,360 --> 00:08:26,240
variety of tasks um where do current
197
00:08:24,000 --> 00:08:28,840
systems
198
00:08:26,240 --> 00:08:30,479
fail and how can we make appropriate
199
00:08:28,840 --> 00:08:35,000
improvements and achieve whatever we
200
00:08:30,479 --> 00:08:37,719
want to do with NLP. And this set of
201
00:08:35,000 --> 00:08:39,360
questions that I'm asking here is
202
00:08:37,719 --> 00:08:40,919
exactly the same as the set of questions
203
00:08:39,360 --> 00:08:43,519
that I was asking two years ago before
204
00:08:40,919 --> 00:08:45,480
ChatGPT. Uh, I still think they're
205
00:08:43,519 --> 00:08:46,920
important questions but I think the
206
00:08:45,480 --> 00:08:48,399
answers to these questions are very
207
00:08:46,920 --> 00:08:50,040
different and because of that we're
208
00:08:48,399 --> 00:08:52,120
updating the class materials to try to
209
00:08:50,040 --> 00:08:54,399
cover you know the answers to these
210
00:08:52,120 --> 00:08:56,000
questions and uh in kind of the era of
211
00:08:54,399 --> 00:08:58,200
large language models and other things
212
00:08:56,000 --> 00:08:59,720
like
213
00:08:58,200 --> 00:09:02,079
that
214
00:08:59,720 --> 00:09:03,360
so that's all I have for the intro maybe
215
00:09:02,079 --> 00:09:06,640
pretty straightforward. Are there
216
00:09:03,360 --> 00:09:08,480
any questions or comments so far if not
217
00:09:06,640 --> 00:09:14,399
I'll just go
218
00:09:08,480 --> 00:09:17,160
on. Okay, great. So I want to, uh, first go
219
00:09:14,399 --> 00:09:19,480
into a very high-level overview of NLP
220
00:09:17,160 --> 00:09:20,839
system building and most of the stuff
221
00:09:19,480 --> 00:09:22,399
that I want to do today is to set the
222
00:09:20,839 --> 00:09:24,320
stage for what I'm going to be talking
223
00:09:22,399 --> 00:09:25,040
about in more detail uh over the rest of
224
00:09:24,320 --> 00:09:29,200
the
225
00:09:25,040 --> 00:09:31,720
class. And we can think of NLP
226
00:09:29,200 --> 00:09:34,040
systems through this kind of general
227
00:09:31,720 --> 00:09:36,560
framework where we want to create a
228
00:09:34,040 --> 00:09:40,600
function to map an input X into an
229
00:09:36,560 --> 00:09:44,440
output Y, uh, where X and/or Y involve
230
00:09:40,600 --> 00:09:47,000
language.
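(A minimal sketch of this framework in Python; the signatures are illustrative, not from the lecture's code:)

# An NLP system as a function f(x) -> y, where x and/or y involve
# language. Illustrative signatures only.

def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """x = text in one language, y = text in another."""
    ...

def classify_sentiment(text: str) -> int:
    """x = a sentence, y = a label (1 positive, 0 neutral, -1 negative)."""
    ...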
231
00:09:44,440 --> 00:09:50,120
And, uh, do some people have favorite NLP tasks, or NLP tasks that you
232
00:09:47,000 --> 00:09:52,399
want to be handling in some
233
00:09:50,120 --> 00:09:57,000
way or maybe what what do you think are
234
00:09:52,399 --> 00:09:57,000
the most popular and important NLP tasks
235
00:09:58,120 --> 00:10:03,200
nowadays
236
00:10:00,800 --> 00:10:06,120
okay so translation is maybe easy what's
237
00:10:03,200 --> 00:10:06,120
the input and output of
238
00:10:11,440 --> 00:10:15,720
translation? Okay, yeah, so, uh, in
239
00:10:13,800 --> 00:10:17,959
translation, the input is text in one language,
240
00:10:15,720 --> 00:10:21,760
output is text in another language and
241
00:10:17,959 --> 00:10:21,760
then what is a good
242
00:10:27,680 --> 00:10:32,160
translation? Yeah, correct, or the same meaning as
243
00:10:30,320 --> 00:10:35,839
the input, basically, yes. Um, it also
244
00:10:32,160 --> 00:10:37,760
should be fluent but I agree any other
245
00:10:35,839 --> 00:10:39,839
things? Generation. The reason why I said
246
00:10:37,760 --> 00:10:41,519
it's tough is it's pretty broad um and
247
00:10:39,839 --> 00:10:43,360
it's, like, we could be doing
248
00:10:41,519 --> 00:10:46,360
generation with lots of different inputs
249
00:10:43,360 --> 00:10:51,440
but, um, yeah. Any other things, maybe a
250
00:10:46,360 --> 00:10:51,440
little bit different yeah like
251
00:10:51,480 --> 00:10:55,959
scenario a scenario and a multiple
252
00:10:54,000 --> 00:10:58,200
choice question about the scenario and
253
00:10:55,959 --> 00:10:59,680
so the scenario and the
254
00:10:58,200 --> 00:11:01,760
multiple choice question are probably
255
00:10:59,680 --> 00:11:04,040
the input and then the output
256
00:11:01,760 --> 00:11:06,480
is an answer to the multiple choice
257
00:11:04,040 --> 00:11:07,920
question um and then there it's kind of
258
00:11:06,480 --> 00:11:12,279
obvious like what is good it's the
259
00:11:07,920 --> 00:11:14,880
correct answer sure um interestingly I
260
00:11:12,279 --> 00:11:17,440
think a lot of LLM evaluation is done on
261
00:11:14,880 --> 00:11:21,160
these multiple choice questions but I'm
262
00:11:17,440 --> 00:11:22,320
yet to encounter an actual application
263
00:11:21,160 --> 00:11:24,880
that cares about multiple choice
264
00:11:22,320 --> 00:11:26,880
question answering so uh there's kind of
265
00:11:24,880 --> 00:11:30,959
a funny disconnect there but uh yeah I
266
00:11:26,880 --> 00:11:33,519
saw a hand back there. Vector search?
267
00:11:30,959 --> 00:11:36,360
Yeah, vector search, uh, that's very good.
268
00:11:33,519 --> 00:11:36,360
so the input
269
00:11:37,120 --> 00:11:45,000
is [inaudible student
270
00:11:42,560 --> 00:11:45,000
response]
271
00:11:47,360 --> 00:11:53,760
Okay, yeah, so I'd say the input
272
00:11:49,880 --> 00:11:56,160
there is a query and a document base um
273
00:11:53,760 --> 00:11:57,959
and then the output is maybe an index
274
00:11:56,160 --> 00:11:59,800
into the document or something else
275
00:11:57,959 --> 00:12:01,279
like that sure um and then something
276
00:11:59,800 --> 00:12:05,040
that's good. Here's a good question:
277
00:12:01,279 --> 00:12:05,040
what's a good result from
278
00:12:06,560 --> 00:12:10,200
that? What's a good
279
00:12:10,839 --> 00:12:19,279
output? Be sort of similar. The major
280
00:12:15,560 --> 00:12:21,680
problem there I see is how you define similar
281
00:12:19,279 --> 00:12:26,199
and how you
282
00:12:21,680 --> 00:12:29,760
always, like, you understand
283
00:12:26,199 --> 00:12:33,000
whether it is actually [inaudible]
284
00:12:29,760 --> 00:12:35,079
yeah exactly so that um just to repeat
285
00:12:33,000 --> 00:12:36,880
it's like uh we need to have a
286
00:12:35,079 --> 00:12:38,399
a good similarity metric, we
287
00:12:36,880 --> 00:12:40,120
need to have a good threshold where we
288
00:12:38,399 --> 00:12:41,760
get like the ones we want and we don't
289
00:12:40,120 --> 00:12:43,240
get the ones we don't want we're going
290
00:12:41,760 --> 00:12:44,959
to talk more about that in the retrieval
291
00:12:43,240 --> 00:12:48,440
lecture exactly how we evaluate and
292
00:12:44,959 --> 00:12:49,920
stuff but um yeah good so this is a good
293
00:12:48,440 --> 00:12:53,279
uh here are some good examples I have
294
00:12:49,920 --> 00:12:55,519
some examples of my own um the first one
295
00:12:53,279 --> 00:12:58,360
is uh kind of the very generic one maybe
296
00:12:55,519 --> 00:13:00,800
kind of like generation here, but: text in,
297
00:12:58,360 --> 00:13:02,959
continuing text out. Uh, so this is language
298
00:13:00,800 --> 00:13:04,160
modeling so you have a text and then you
299
00:13:02,959 --> 00:13:05,440
have the continuation you want to
300
00:13:04,160 --> 00:13:07,680
predict the
301
00:13:05,440 --> 00:13:10,480
continuation. Um, text in, text in another
302
00:13:07,680 --> 00:13:13,040
language out is translation. Uh, text in, a
303
00:13:10,480 --> 00:13:15,800
label out could be text classification. Uh,
304
00:13:13,040 --> 00:13:17,760
text in, linguistic structure out, or, uh, some
305
00:13:15,800 --> 00:13:21,360
kind of entities or something like
306
00:13:17,760 --> 00:13:22,680
that could be uh language analysis or um
307
00:13:21,360 --> 00:13:24,839
information
308
00:13:22,680 --> 00:13:29,440
extraction uh we could also have image
309
00:13:24,839 --> 00:13:31,320
in and text out, uh, which is image captioning. Um,
310
00:13:29,440 --> 00:13:33,560
or speech in and text out, which is speech
311
00:13:31,320 --> 00:13:35,240
recognition.
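(The input/output pairs just listed, written out as a small illustrative data structure:)

# Illustrative only: the tasks above as (input, output) pairs.
TASKS = {
    "language modeling":   ("text", "continuing text"),
    "translation":         ("text", "text in another language"),
    "text classification": ("text", "label"),
    "language analysis":   ("text", "linguistic structure / entities"),
    "image captioning":    ("image", "text"),
    "speech recognition":  ("speech", "text"),
}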
312
00:13:33,560 --> 00:13:38,000
And I take the very broad view of natural language processing,
313
00:13:35,240 --> 00:13:39,519
which is if it's any variety of language
314
00:13:38,000 --> 00:13:41,519
uh if you're handling language in some
315
00:13:39,519 --> 00:13:42,800
way it's natural language processing it
316
00:13:41,519 --> 00:13:45,880
doesn't necessarily have to be text
317
00:13:42,800 --> 00:13:47,480
input text output um so that's relevant
318
00:13:45,880 --> 00:13:50,199
for the projects that you're thinking
319
00:13:47,480 --> 00:13:52,160
about too at the end of this course so
320
00:13:50,199 --> 00:13:55,519
the most common FAQ for this course
321
00:13:52,160 --> 00:13:57,839
is does my project count and if you're
322
00:13:55,519 --> 00:13:59,360
uncertain you should ask but usually
323
00:13:57,839 --> 00:14:01,040
like if it has some sort of language
324
00:13:59,360 --> 00:14:05,079
involved then I'll usually say yes it
325
00:14:01,040 --> 00:14:07,920
does count. So, um, if it's, like, uh, code-to-
326
00:14:05,079 --> 00:14:09,680
code, then... code is not
327
00:14:07,920 --> 00:14:11,480
natural language it is language but it's
328
00:14:09,680 --> 00:14:13,000
not natural language so that might be
329
00:14:11,480 --> 00:14:15,320
borderline we might have to discuss
330
00:14:13,000 --> 00:14:15,320
about
331
00:14:15,759 --> 00:14:21,800
that cool um so next I'd like to talk
332
00:14:18,880 --> 00:14:25,240
about methods for creating NLP systems
333
00:14:21,800 --> 00:14:27,839
um and there's a lot of different ways
334
00:14:25,240 --> 00:14:29,720
to create NLP systems; all of these are
335
00:14:27,839 --> 00:14:32,880
alive and well in
336
00:14:29,720 --> 00:14:35,759
2024. Uh, the first one is
337
00:14:32,880 --> 00:14:37,959
rule-based system creation and so the
338
00:14:35,759 --> 00:14:40,399
way this works is like let's say you
339
00:14:37,959 --> 00:14:42,480
want to build a text classifier you just
340
00:14:40,399 --> 00:14:46,560
write a simple Python function that
341
00:14:42,480 --> 00:14:48,639
classifies things into uh sports or
342
00:14:46,560 --> 00:14:50,240
other and the way it classifies it into
343
00:14:48,639 --> 00:14:52,959
sports or other is it checks whether
344
00:14:50,240 --> 00:14:55,160
baseball, soccer, football, and tennis are
345
00:14:52,959 --> 00:14:59,399
included in the document and classifies
346
00:14:55,160 --> 00:15:01,959
it into, uh, sports if so, other if not.
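(A minimal sketch of that simple Python function; the keyword set and names are illustrative, not the lecture's actual code:)

SPORTS_WORDS = {"baseball", "soccer", "football", "tennis"}

def classify(document: str) -> str:
    # Return "sports" if any sports keyword appears, else "other".
    words = document.lower().split()
    return "sports" if any(w in SPORTS_WORDS for w in words) else "other"

print(classify("I love to play baseball"))      # -> sports
print(classify("the stock price is going up"))  # -> other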
347
00:14:59,399 --> 00:15:05,279
so has anyone written something like
348
00:15:01,959 --> 00:15:09,720
this maybe not a text classifier but um
349
00:15:05,279 --> 00:15:11,880
you know to identify entities or uh
350
00:15:09,720 --> 00:15:14,279
split words
351
00:15:11,880 --> 00:15:16,680
or something like
352
00:15:14,279 --> 00:15:18,399
that has anybody not ever written
353
00:15:16,680 --> 00:15:22,800
anything like
354
00:15:18,399 --> 00:15:24,639
this yeah that's what I thought so um
355
00:15:22,800 --> 00:15:26,079
rule-based systems are very convenient
356
00:15:24,639 --> 00:15:28,920
when you don't really care about how
357
00:15:26,079 --> 00:15:30,759
good your system is, um, or you're doing something
358
00:15:28,920 --> 00:15:32,360
that's really really simple and like
359
00:15:30,759 --> 00:15:35,600
it'll be perfect even if you do the very
360
00:15:32,360 --> 00:15:37,079
simple thing and so I I think it's worth
361
00:15:35,600 --> 00:15:39,959
talking a little bit about them and I'll
362
00:15:37,079 --> 00:15:43,319
talk a little bit about that uh this
363
00:15:39,959 --> 00:15:45,680
time the second thing which like very
364
00:15:43,319 --> 00:15:47,680
rapidly over the course of maybe three
365
00:15:45,680 --> 00:15:50,279
years or so has become actually maybe
366
00:15:47,680 --> 00:15:52,720
the dominant paradigm in NLP, is
367
00:15:50,279 --> 00:15:56,360
prompting uh in prompting a language
368
00:15:52,720 --> 00:15:58,560
model and the way this works is uh you
369
00:15:56,360 --> 00:16:00,720
ask a language model if the following
370
00:15:58,560 --> 00:16:03,079
sentence is about sports, reply sports,
371
00:16:00,720 --> 00:16:06,120
otherwise reply other and you feed it to
372
00:16:03,079 --> 00:16:08,480
your favorite LM uh usually that's GPT
373
00:16:06,120 --> 00:16:11,399
something or other uh sometimes it's an
374
00:16:08,480 --> 00:16:14,440
open source model of some variety and
375
00:16:11,399 --> 00:16:17,759
then, uh, it will give you the answer.
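(A hedged sketch of prompting as classification, assuming the OpenAI Python client; the model name and prompt wording are illustrative:)

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_by_prompting(sentence: str) -> str:
    prompt = (
        "If the following sentence is about sports, reply 'sports'. "
        "Otherwise reply 'other'.\n\nSentence: " + sentence
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()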
376
00:16:14,440 --> 00:16:20,639
And then, finally, uh, fine-tuning:
377
00:16:17,759 --> 00:16:22,240
uh so you take some paired data and you
378
00:16:20,639 --> 00:16:23,600
do machine learning from paired data
379
00:16:22,240 --> 00:16:25,680
where you have something like I love to
380
00:16:23,600 --> 00:16:27,440
play baseball uh the stock price is
381
00:16:25,680 --> 00:16:29,519
going up, he got a hat trick yesterday, he
382
00:16:27,440 --> 00:16:32,759
is wearing tennis shoes and you assign
383
00:16:29,519 --> 00:16:35,319
all these uh labels to them training a
384
00:16:32,759 --> 00:16:38,160
model and you can even start out with a
385
00:16:35,319 --> 00:16:41,480
prompting-based model and fine-tune a
386
00:16:38,160 --> 00:16:41,480
language model also.
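(A minimal sketch of learning from paired data, using scikit-learn's bag-of-words features plus logistic regression as a stand-in; the labels assigned to the slide's examples are my guess:)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love to play baseball",
    "the stock price is going up",
    "he got a hat trick yesterday",
    "he is wearing tennis shoes",
]
train_labels = ["sports", "other", "sports", "other"]  # assumed labels

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)
print(model.predict(["she plays soccer"]))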
387
00:16:42,920 --> 00:16:49,399
So one major consideration when
388
00:16:47,519 --> 00:16:52,000
you're Building Systems like this is the
389
00:16:49,399 --> 00:16:56,440
data requirements for building such a
390
00:16:52,000 --> 00:16:59,319
system and for rules or prompting where
391
00:16:56,440 --> 00:17:02,240
it's just based on intuition really no
392
00:16:59,319 --> 00:17:04,640
data is needed whatsoever. You don't
393
00:17:02,240 --> 00:17:08,240
need a single example and you can start
394
00:17:04,640 --> 00:17:11,000
writing rules. Or, like, just to give
395
00:17:08,240 --> 00:17:12,640
an example the rules and prompts I wrote
396
00:17:11,000 --> 00:17:14,679
here I didn't look at any examples and I
397
00:17:12,640 --> 00:17:17,240
just wrote them uh so this is something
398
00:17:14,679 --> 00:17:20,000
that you could start out
399
00:17:17,240 --> 00:17:21,559
with uh the problem is you also have no
400
00:17:20,000 --> 00:17:24,720
idea how well it works if you don't have
401
00:17:21,559 --> 00:17:26,760
any data whatsoever, right? So, um,
402
00:17:24,720 --> 00:17:30,400
you might be in trouble if you think
403
00:17:26,760 --> 00:17:30,400
something should be working
404
00:17:30,919 --> 00:17:34,440
so normally the next thing that people
405
00:17:32,919 --> 00:17:36,880
move to nowadays when they're building
406
00:17:34,440 --> 00:17:39,559
practical systems is rules or prompting
407
00:17:36,880 --> 00:17:41,240
based on spot checks so that basically
408
00:17:39,559 --> 00:17:42,919
means that you start out with a
409
00:17:41,240 --> 00:17:45,840
rule-based system or a prompting based
410
00:17:42,919 --> 00:17:47,240
system and then you go in and you run it
411
00:17:45,840 --> 00:17:48,720
on some data that you're interested in
412
00:17:47,240 --> 00:17:50,799
you just kind of qualitatively look at
413
00:17:48,720 --> 00:17:52,160
the data and say oh it's messing up here
414
00:17:50,799 --> 00:17:53,440
then you go in and fix your prompt a
415
00:17:52,160 --> 00:17:54,919
little bit or you go in and fix your
416
00:17:53,440 --> 00:17:57,320
rules a little bit or something like
417
00:17:54,919 --> 00:18:00,400
that so uh this is kind of the second
418
00:17:57,320 --> 00:18:00,400
level of difficulty
419
00:18:01,400 --> 00:18:04,640
so the third level of difficulty would
420
00:18:03,159 --> 00:18:07,400
be something like rules or prompting
421
00:18:04,640 --> 00:18:09,039
with rigorous evaluation and so here you
422
00:18:07,400 --> 00:18:12,840
would create a development set with
423
00:18:09,039 --> 00:18:14,840
inputs and outputs uh so you uh create
424
00:18:12,840 --> 00:18:17,039
maybe 200 to 2,000
425
00:18:14,840 --> 00:18:20,080
examples um
426
00:18:17,039 --> 00:18:21,720
and then evaluate your actual accuracy
427
00:18:20,080 --> 00:18:23,880
so you need an evaluation metric you
428
00:18:21,720 --> 00:18:26,120
need other things like this this is the
429
00:18:23,880 --> 00:18:28,400
next level of difficulty but if you're
430
00:18:26,120 --> 00:18:30,240
going to be a serious you know NLP
431
00:18:28,400 --> 00:18:33,000
engineer or something like this you
432
00:18:30,240 --> 00:18:34,720
definitely will be doing this a lot I
433
00:18:33,000 --> 00:18:37,760
feel and
434
00:18:34,720 --> 00:18:40,360
then. So here now you start needing
435
00:18:37,760 --> 00:18:41,960
a dev set and a test set. And then,
436
00:18:40,360 --> 00:18:46,280
finally fine-tuning you need an
437
00:18:41,960 --> 00:18:48,480
additional training set um and uh this
438
00:18:46,280 --> 00:18:52,240
will generally be a lot bigger than 200
439
00:18:48,480 --> 00:18:56,080
to 2,000 examples and generally the rule
440
00:18:52,240 --> 00:18:56,080
is that every time you
441
00:18:57,320 --> 00:19:01,080
double
442
00:18:59,520 --> 00:19:02,400
every time you double your training set
443
00:19:01,080 --> 00:19:07,480
size you get about a constant
444
00:19:02,400 --> 00:19:07,480
improvement. So if you start
445
00:19:07,799 --> 00:19:15,080
out if you start out down here with
446
00:19:12,240 --> 00:19:17,039
um zero shot accuracy with a language
447
00:19:15,080 --> 00:19:21,559
model, you create a small training
448
00:19:17,039 --> 00:19:21,559
set and you get you know a pretty big
449
00:19:22,000 --> 00:19:29,120
increase and then every time you double
450
00:19:26,320 --> 00:19:30,799
it, it increases by a constant factor. It's
451
00:19:29,120 --> 00:19:32,480
kind of like just in general in machine
452
00:19:30,799 --> 00:19:37,360
learning this is a trend that we tend to
453
00:19:32,480 --> 00:19:40,679
see. So, um, based on this,
454
00:19:37,360 --> 00:19:41,880
uh there's kind of like you get a big
455
00:19:40,679 --> 00:19:44,200
gain from having a little bit of
456
00:19:41,880 --> 00:19:45,760
training data but the gains very quickly
457
00:19:44,200 --> 00:19:48,919
drop off and you start spending a lot of
458
00:19:45,760 --> 00:19:48,919
time annotating.
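(Illustrative numbers only: "a constant improvement per doubling" means accuracy grows roughly linearly in log2 of the training set size:)

import math

base_acc, gain_per_doubling = 50.0, 3.0  # made-up values
for n in [250, 500, 1000, 2000, 4000]:
    acc = base_acc + gain_per_doubling * math.log2(n / 250)
    print(f"n={n:5d}  accuracy ~ {acc:.1f}%")
# Each doubling buys about the same number of points, so the
# marginal value of more annotation falls off quickly.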
459
00:19:51,000 --> 00:19:55,880
So, um, yeah, this is the general
460
00:19:54,760 --> 00:19:58,280
overview of the different types of
461
00:19:55,880 --> 00:20:00,000
system building. Uh, any
462
00:19:58,280 --> 00:20:01,559
questions about this or comments or
463
00:20:00,000 --> 00:20:04,000
things like
464
00:20:01,559 --> 00:20:05,840
this I think one thing that's changed
465
00:20:04,000 --> 00:20:08,159
really drastically from the last time I
466
00:20:05,840 --> 00:20:09,600
taught this class is the fact that
467
00:20:08,159 --> 00:20:11,000
number one and number two are the things
468
00:20:09,600 --> 00:20:13,799
that people are actually doing in
469
00:20:11,000 --> 00:20:15,360
practice uh which was you know people
470
00:20:13,799 --> 00:20:16,679
who actually care about systems are
471
00:20:15,360 --> 00:20:18,880
doing number one and number two is the
472
00:20:16,679 --> 00:20:20,440
main thing. It used to be that if you
473
00:20:18,880 --> 00:20:22,679
were actually serious about building a
474
00:20:20,440 --> 00:20:24,320
system uh you really needed to do the
475
00:20:22,679 --> 00:20:27,080
fine-tuning, and now it's kind of more
476
00:20:24,320 --> 00:20:27,080
optional
477
00:20:27,159 --> 00:20:30,159
so
478
00:20:44,039 --> 00:20:50,960
[inaudible student question] Yeah,
479
00:20:46,320 --> 00:20:53,960
so it's definitely an empirical
480
00:20:50,960 --> 00:20:53,960
observation
481
00:20:54,720 --> 00:21:01,080
um in terms of the theoretical
482
00:20:57,640 --> 00:21:03,120
background, I can't immediately
483
00:21:01,080 --> 00:21:05,840
point to a
484
00:21:03,120 --> 00:21:10,039
particular paper that does that but I
485
00:21:05,840 --> 00:21:12,720
think if you think about
486
00:21:10,039 --> 00:21:14,720
the... I think I have seen that they do
487
00:21:12,720 --> 00:21:17,039
exist in the past, but I can't think of
488
00:21:14,720 --> 00:21:19,000
it right now. I can try to, uh, come
489
00:21:17,039 --> 00:21:23,720
up with an example of
490
00:21:19,000 --> 00:21:23,720
that. So, yeah, I should take
491
00:21:26,799 --> 00:21:31,960
notes. Or if someone wants to share one on
492
00:21:29,360 --> 00:21:33,360
Piazza... uh, if you have any ideas and want
493
00:21:31,960 --> 00:21:34,520
to share on Piazza, I'm sure that would be
494
00:21:33,360 --> 00:21:35,640
great it'd be great to have a discussion
495
00:21:34,520 --> 00:21:39,320
on
496
00:21:35,640 --> 00:21:44,960
Piazza.
497
00:21:39,320 --> 00:21:46,880
Cool, okay, so next I want to try to
498
00:21:44,960 --> 00:21:48,200
make a rule-based system and I'm going
499
00:21:46,880 --> 00:21:49,360
to make a rule-based system for
500
00:21:48,200 --> 00:21:51,799
sentiment
501
00:21:49,360 --> 00:21:53,480
analysis uh and this is a bad idea I
502
00:21:51,799 --> 00:21:55,400
would not encourage you to ever do this
503
00:21:53,480 --> 00:21:57,440
in real life but I want to do it here to
504
00:21:55,400 --> 00:21:59,640
show you why it's a bad idea and like
505
00:21:57,440 --> 00:22:01,200
what are some of the hard problems that
506
00:21:59,640 --> 00:22:03,960
you encounter when trying to create a
507
00:22:01,200 --> 00:22:06,600
system based on rules
508
00:22:03,960 --> 00:22:08,080
and then we'll move into building a
509
00:22:06,600 --> 00:22:12,360
machine learning-based system after we
510
00:22:08,080 --> 00:22:15,400
finish this so if we look at the example
511
00:22:12,360 --> 00:22:18,559
task: this is review sentiment analysis.
512
00:22:15,400 --> 00:22:21,799
it's one of the most valuable uh tasks
513
00:22:18,559 --> 00:22:24,039
uh that people do in NLP nowadays
514
00:22:21,799 --> 00:22:26,400
because it allows people to know how
515
00:22:24,039 --> 00:22:29,200
customers are thinking about products uh
516
00:22:26,400 --> 00:22:30,799
improve their you know their product
517
00:22:29,200 --> 00:22:32,919
development and other things like that
518
00:22:30,799 --> 00:22:34,799
maybe monitor people's, you know,
519
00:22:32,919 --> 00:22:36,760
satisfaction with their social media
520
00:22:34,799 --> 00:22:39,200
service other things like this so
521
00:22:36,760 --> 00:22:42,720
basically the way it works is um you
522
00:22:39,200 --> 00:22:44,400
have, uh, sentences as
523
00:22:42,720 --> 00:22:46,720
inputs, like: I hate this movie, I love
524
00:22:44,400 --> 00:22:48,520
this movie, I saw this movie. And this
525
00:22:46,720 --> 00:22:50,600
gets mapped into positive neutral or
526
00:22:48,520 --> 00:22:53,120
negative so I hate this movie would be
527
00:22:50,600 --> 00:22:55,480
negative I love this movie positive and
528
00:22:53,120 --> 00:22:59,039
I saw this movie is
529
00:22:55,480 --> 00:23:01,200
neutral so um
530
00:22:59,039 --> 00:23:05,200
that's the task: input text, output
531
00:23:01,200 --> 00:23:08,880
labels. Uh, concretely, uh, sentence
532
00:23:05,200 --> 00:23:11,679
to label. And in order to do this, uh, we
533
00:23:08,880 --> 00:23:13,120
would like to build a model um and we're
534
00:23:11,679 --> 00:23:16,159
going to build the model in a rule based
535
00:23:13,120 --> 00:23:19,000
way, but we'll still call it a model.
536
00:23:16,159 --> 00:23:21,600
and the way it works is we do feature
537
00:23:19,000 --> 00:23:23,159
extraction, um, so we extract the salient
538
00:23:21,600 --> 00:23:25,279
features for making the decision about
539
00:23:23,159 --> 00:23:27,320
what to output. Next we do score
540
00:23:25,279 --> 00:23:29,880
calculation: calculate a score for one or
541
00:23:27,320 --> 00:23:32,320
more possibilities, and we have a decision
542
00:23:29,880 --> 00:23:33,520
function so we choose one of those
543
00:23:32,320 --> 00:23:37,679
several
544
00:23:33,520 --> 00:23:40,120
possibilities and so for feature
545
00:23:37,679 --> 00:23:42,200
extraction uh formally what this looks
546
00:23:40,120 --> 00:23:44,240
like is we have some function and it
547
00:23:42,200 --> 00:23:48,039
extracts a feature
548
00:23:44,240 --> 00:23:51,159
vector. For score calculation, um, we
549
00:23:48,039 --> 00:23:54,240
calculate the scores based on either a
550
00:23:51,159 --> 00:23:56,279
binary classification uh where we have a
551
00:23:54,240 --> 00:23:58,279
weight vector and we take the dot
552
00:23:56,279 --> 00:24:00,120
product with our feature vector or we
553
00:23:58,279 --> 00:24:02,480
have multi class classification where we
554
00:24:00,120 --> 00:24:04,520
have a weight matrix and we take the
555
00:24:02,480 --> 00:24:08,640
product with uh the vector and that
556
00:24:04,520 --> 00:24:08,640
gives us, you know, scores over multiple
557
00:24:08,919 --> 00:24:14,840
classes.
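(A hedged numpy sketch of these two scoring setups; shapes and numbers are illustrative:)

import numpy as np

phi = np.array([2.0, 1.0, 1.0])  # feature vector f(x)

# Binary: weight vector, dot product -> a single scalar score.
w = np.array([1.0, -1.0, -0.5])
score = w @ phi                  # 2 - 1 - 0.5 = 0.5

# Multi-class: weight matrix, product -> one score per class.
W = np.array([[ 1.0, -1.0, -0.5],
              [-1.0,  1.0,  0.5],
              [ 0.2,  0.2,  0.0]])
scores = W @ phi                 # shape (3,)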
558
00:24:11,600 --> 00:24:17,520
And then we have a decision rule. So this decision rule tells us what
559
00:24:14,840 --> 00:24:20,080
the output is going to be um does anyone
560
00:24:17,520 --> 00:24:22,200
know what a typical decision rule is
561
00:24:20,080 --> 00:24:24,520
maybe so obvious that you don't
562
00:24:22,200 --> 00:24:28,760
think about it often
563
00:24:24,520 --> 00:24:31,000
but... uh, a threshold. Um, so, like, would
564
00:24:28,760 --> 00:24:34,440
that be for binary, a single
565
00:24:31,000 --> 00:24:37,000
scalar score, or a multiple
566
00:24:34,440 --> 00:24:38,520
class? Binary, yeah. So then you would
567
00:24:37,000 --> 00:24:39,960
pick a threshold and if it's over the
568
00:24:38,520 --> 00:24:42,919
threshold
569
00:24:39,960 --> 00:24:45,760
you say yes and if it's under the
570
00:24:42,919 --> 00:24:50,279
threshold you say no um another option
571
00:24:45,760 --> 00:24:51,679
would be um you have a threshold and you
572
00:24:50,279 --> 00:24:56,080
say
573
00:24:51,679 --> 00:24:56,080
yes, no,
574
00:24:56,200 --> 00:25:00,559
or abstain, so, you know, you don't give an
575
00:24:58,360 --> 00:25:02,520
answer and depending on how you're
576
00:25:00,559 --> 00:25:03,720
evaluated, what is a good classifier,
577
00:25:02,520 --> 00:25:07,799
you might want to abstain some of the
578
00:25:03,720 --> 00:25:10,960
time also. Um, for multiclass, what's
579
00:25:07,799 --> 00:25:10,960
a standard decision rule for
580
00:25:11,120 --> 00:25:16,720
multiclass? Argmax, yeah, exactly. So, um,
581
00:25:14,279 --> 00:25:19,520
basically you find the index that
582
00:25:16,720 --> 00:25:22,000
has the highest score and you output
583
00:25:19,520 --> 00:25:24,480
it.
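(Hedged sketches of those decision rules; numpy is assumed:)

import numpy as np

def decide_binary(score: float, threshold: float = 0.0) -> str:
    # Threshold a scalar score (an abstain band could also be added).
    return "yes" if score > threshold else "no"

scores = np.array([0.5, 2.0, -1.0])
prediction = int(np.argmax(scores))  # multiclass: pick the top index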
584
00:25:22,000 --> 00:25:26,559
We're going to be talking about other decision rules also, um, like
585
00:25:24,480 --> 00:25:29,480
self-consistency and minimum Bayes risk,
586
00:25:26,559 --> 00:25:30,760
later uh for text generation so you can
587
00:25:29,480 --> 00:25:33,000
just keep that in mind and then we'll
588
00:25:30,760 --> 00:25:36,279
forget about it for like several
589
00:25:33,000 --> 00:25:39,559
classes. Um, so for sentiment
590
00:25:36,279 --> 00:25:42,159
classification, um, I have a code
591
00:25:39,559 --> 00:25:45,159
walkthrough
592
00:25:42,159 --> 00:25:45,159
here
593
00:25:46,240 --> 00:25:54,320
and this is pretty simple um but if
594
00:25:50,320 --> 00:25:58,559
you're bored uh of the class and would
595
00:25:54,320 --> 00:26:01,000
like to, um, try it out yourself, you can
596
00:25:58,559 --> 00:26:04,480
take the challenge and try to get a better score
597
00:26:01,000 --> 00:26:06,120
than I do um over the next few minutes
598
00:26:04,480 --> 00:26:06,880
but we have this rule based classifier
599
00:26:06,120 --> 00:26:10,240
in
600
00:26:06,880 --> 00:26:12,640
here, and I will open it up in my VS
601
00:26:10,240 --> 00:26:15,360
Code
602
00:26:12,640 --> 00:26:18,360
to try to create a rule-based classifier
603
00:26:15,360 --> 00:26:18,360
and basically the way this
604
00:26:22,799 --> 00:26:29,960
works is
605
00:26:25,159 --> 00:26:29,960
that we have a feature
606
00:26:31,720 --> 00:26:37,720
extraction, we
607
00:26:34,120 --> 00:26:40,679
have scoring and we have um a decision
608
00:26:37,720 --> 00:26:43,480
rule. So here, for our feature extraction, I
609
00:26:40,679 --> 00:26:44,720
have created a list of good words and a
610
00:26:43,480 --> 00:26:46,720
list of bad
611
00:26:44,720 --> 00:26:48,960
words
612
00:26:46,720 --> 00:26:51,320
and what we do is we just count the
613
00:26:48,960 --> 00:26:53,000
number of good words that appeared and
614
00:26:51,320 --> 00:26:55,320
count the number of bad words that
615
00:26:53,000 --> 00:26:57,880
appeared. Then we also have a bias
616
00:26:55,320 --> 00:27:01,159
feature so the bias feature is a feature
617
00:26:57,880 --> 00:27:03,679
that's always one and so what that
618
00:27:01,159 --> 00:27:06,799
results in is we have a dimension three
619
00:27:03,679 --> 00:27:08,880
feature vector, um, where this is like the
620
00:27:06,799 --> 00:27:11,320
number of good words this is the number
621
00:27:08,880 --> 00:27:15,320
of bad words and then you have the
622
00:27:11,320 --> 00:27:17,760
bias. And then I also define the feature
623
00:27:15,320 --> 00:27:20,039
weights, so for every good word we
624
00:27:17,760 --> 00:27:22,200
add one to our score for every bad word
625
00:27:20,039 --> 00:27:25,559
we, uh, subtract one from our score,
626
00:27:22,200 --> 00:27:29,399
and the bias gets its own weight. And so we then
627
00:27:25,559 --> 00:27:30,480
take the dot product between
628
00:27:29,399 --> 00:27:34,360
these
629
00:27:30,480 --> 00:27:36,919
two and we get minus
630
00:27:34,360 --> 00:27:37,640
0.5, and that gives us
631
00:27:36,919 --> 00:27:41,000
the
632
00:27:37,640 --> 00:27:46,000
score.
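(A hedged reconstruction of the code being walked through; the word lists are abbreviated and the bias weight is inferred from the -0.5 in the example:)

good_words = {"love", "good", "great"}
bad_words = {"hate", "bad", "boring"}

def extract_features(text: str) -> list[float]:
    words = text.lower().split()
    return [
        float(sum(w in good_words for w in words)),  # good-word count
        float(sum(w in bad_words for w in words)),   # bad-word count
        1.0,                                         # bias, always one
    ]

feature_weights = [1.0, -1.0, -0.5]  # good, bad, bias (assumed)

def score(features: list[float]) -> float:
    # Dot product of the weight vector and the feature vector.
    return sum(w * f for w, f in zip(feature_weights, features))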
633
00:27:41,000 --> 00:27:50,320
So let's run that. Um, and I read in some
634
00:27:46,000 --> 00:27:52,600
data and what this data looks like is
635
00:27:50,320 --> 00:27:55,000
basically we have a
636
00:27:52,600 --> 00:27:57,559
review, um, which says: The Rock is
637
00:27:55,000 --> 00:27:59,480
destined to be the 21st Century's new
638
00:27:57,559 --> 00:28:01,240
Conan and that he's going to make a
639
00:27:59,480 --> 00:28:03,600
splash even greater than Arnold
640
00:28:01,240 --> 00:28:07,000
Schwarzenegger, Jean-Claude Van Damme, or
641
00:28:03,600 --> 00:28:09,519
Steven Seagal um so this seems pretty
642
00:28:07,000 --> 00:28:10,840
positive, right? Like, that's a pretty
643
00:28:09,519 --> 00:28:13,200
high order to be better than Arnold
644
00:28:10,840 --> 00:28:16,080
Schwarzenegger or Jean-Claude Van Damme, uh,
645
00:28:13,200 --> 00:28:19,519
if you're familiar with action movies um
646
00:28:16,080 --> 00:28:22,840
and so of course this gets a positive
647
00:28:19,519 --> 00:28:24,120
label. And so, uh, we have run classifier,
648
00:28:22,840 --> 00:28:25,240
actually maybe I should call this
649
00:28:24,120 --> 00:28:27,600
decision rule because this is
650
00:28:25,240 --> 00:28:29,120
essentially our decision rule, and here
651
00:28:27,600 --> 00:28:32,600
we basically do the thing that I mentioned
652
00:28:29,120 --> 00:28:35,440
here, the yes/no/abstain, or in this case
653
00:28:32,600 --> 00:28:38,360
positive negative neutral so if the
654
00:28:35,440 --> 00:28:40,159
score is greater than zero we uh return
655
00:28:38,360 --> 00:28:42,480
one if the score is less than zero we
656
00:28:40,159 --> 00:28:44,679
return negative one which is negative
657
00:28:42,480 --> 00:28:47,240
and otherwise we return
658
00:28:44,679 --> 00:28:48,760
zero.
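(A hedged sketch of that decision rule plus the accuracy calculation that comes next; names are illustrative:)

def decision_rule(score: float) -> int:
    if score > 0:
        return 1    # positive
    elif score < 0:
        return -1   # negative
    return 0        # neutral

def accuracy(predictions: list[int], labels: list[int]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)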
659
00:28:47,240 --> 00:28:51,519
Um, we have an accuracy calculation function, just calculating whether the outputs
660
00:28:48,760 --> 00:28:55,840
are good. And
661
00:28:51,519 --> 00:28:57,440
um this is uh the overall label count in
662
00:28:55,840 --> 00:28:59,919
the output, so we can see there are
663
00:28:57,440 --> 00:29:03,120
slightly more positives than there are
664
00:28:59,919 --> 00:29:06,080
negatives and then we can run this and
665
00:29:03,120 --> 00:29:10,200
we get a score of
666
00:29:06,080 --> 00:29:14,760
43. And so one thing that I have
667
00:29:10,200 --> 00:29:19,279
found, um, is... I do a lot of kind
668
00:29:14,760 --> 00:29:21,240
of research on how to make NLP systems
669
00:29:19,279 --> 00:29:23,600
better and one of the things I found
670
00:29:21,240 --> 00:29:26,679
really invaluable
671
00:29:23,600 --> 00:29:27,840
is if you're in a situation where you
672
00:29:26,679 --> 00:29:29,720
have a
673
00:29:27,840 --> 00:29:31,760
set task and you just want to make the
674
00:29:29,720 --> 00:29:33,760
system better on the set task doing
675
00:29:31,760 --> 00:29:35,159
comprehensive error analysis and
676
00:29:33,760 --> 00:29:37,320
understanding where your system is
677
00:29:35,159 --> 00:29:39,880
failing is one of the best ways to do
678
00:29:37,320 --> 00:29:42,200
that and I would like to do a very
679
00:29:39,880 --> 00:29:43,640
rudimentary version of this here and
680
00:29:42,200 --> 00:29:46,519
what I'm doing essentially is I'm just
681
00:29:43,640 --> 00:29:47,480
randomly picking uh several examples
682
00:29:46,519 --> 00:29:49,320
that were
683
00:29:47,480 --> 00:29:52,000
incorrect
684
00:29:49,320 --> 00:29:54,840
Um, and so, like, let's look at the
685
00:29:52,000 --> 00:29:58,200
examples here um here the true label is
686
00:29:54,840 --> 00:30:00,760
zero, um, and it's predicted one. Um: it may
687
00:29:58,200 --> 00:30:03,440
not be as cutting, as witty, or as true as
688
00:30:00,760 --> 00:30:05,039
back in the glory days of, uh, Weekend and
689
00:30:03,440 --> 00:30:07,440
Two or Three Things I Know About
690
00:30:05,039 --> 00:30:09,640
Her, but who else engaged in filmmaking
691
00:30:07,440 --> 00:30:12,679
today is so cognizant of the cultural
692
00:30:09,640 --> 00:30:14,480
and moral issues involved in the process
693
00:30:12,679 --> 00:30:17,600
so what words in here are a good
694
00:30:14,480 --> 00:30:20,840
indication that this is a neutral
695
00:30:17,600 --> 00:30:20,840
sentence any
696
00:30:23,760 --> 00:30:28,399
ideas? A little bit tough,
697
00:30:26,240 --> 00:30:30,919
huh? Starting to think maybe we should be
698
00:30:28,399 --> 00:30:30,919
using machine
699
00:30:31,480 --> 00:30:37,440
learning
700
00:30:34,080 --> 00:30:40,320
um even by the intentionally low
701
00:30:37,440 --> 00:30:41,559
standards of frat-boy humor, Sorority Boys
702
00:30:40,320 --> 00:30:43,840
is a
703
00:30:41,559 --> 00:30:46,080
bowser. I think frat boy is maybe
704
00:30:43,840 --> 00:30:47,360
negative sentiment if you're familiar
705
00:30:46,080 --> 00:30:50,360
with
706
00:30:47,360 --> 00:30:51,960
the US. I don't have any negative
707
00:30:50,360 --> 00:30:54,519
sentiment but the people who say it that
708
00:30:51,960 --> 00:30:55,960
way have negative sentiment, maybe. So if we
709
00:30:54,519 --> 00:31:01,080
wanted to go in and do that we could
710
00:30:55,960 --> 00:31:01,080
maybe I won't save this but
711
00:31:01,519 --> 00:31:08,919
uh
712
00:31:04,240 --> 00:31:11,840
um oh whoops I'll go back and fix it uh
713
00:31:08,919 --> 00:31:14,840
crass. Crass is pretty obviously negative,
714
00:31:11,840 --> 00:31:14,840
right so I can add
715
00:31:17,039 --> 00:31:21,080
crass actually let me just add
716
00:31:21,760 --> 00:31:29,159
crass. And then, um, I'll go back and check our
717
00:31:26,559 --> 00:31:29,159
training accuracy
718
00:31:32,159 --> 00:31:36,240
Wait, maybe I need to run the whole
719
00:31:33,960 --> 00:31:36,240
thing
720
00:31:36,960 --> 00:31:39,960
again
721
00:31:40,960 --> 00:31:45,880
and that budged the training accuracy a
722
00:31:43,679 --> 00:31:50,360
little, um, the dev set accuracy not very
723
00:31:45,880 --> 00:31:53,919
much so I could go through and do this
724
00:31:50,360 --> 00:31:53,919
um let me add
725
00:31:54,000 --> 00:31:58,320
unengaging. So I could go through and do
726
00:31:56,000 --> 00:32:01,720
this all day, and you'd probably be very
727
00:31:58,320 --> 00:32:01,720
bored.
728
00:32:04,240 --> 00:32:08,360
But I won't do that, uh, because we
729
00:32:06,919 --> 00:32:10,679
have much more important things to be
730
00:32:08,360 --> 00:32:14,679
doing
731
00:32:10,679 --> 00:32:16,440
Um, and, uh, so anyway, we could go
732
00:32:14,679 --> 00:32:18,919
through and design all the features here
733
00:32:16,440 --> 00:32:21,279
but like why is this complicated like
734
00:32:18,919 --> 00:32:22,600
the reason why it was complicated
735
00:32:21,279 --> 00:32:25,840
became pretty
736
00:32:22,600 --> 00:32:27,840
clear from, uh, the very
737
00:32:25,840 --> 00:32:29,639
beginning uh the very first example I
738
00:32:27,840 --> 00:32:32,200
showed you, which was a really
739
00:32:29,639 --> 00:32:34,720
complicated sentence like all of us
740
00:32:32,200 --> 00:32:36,240
could see that it wasn't like really
741
00:32:34,720 --> 00:32:38,679
strongly positive it wasn't really
742
00:32:36,240 --> 00:32:40,519
strongly negative it was kind of like in
743
00:32:38,679 --> 00:32:42,919
the middle but it was in the middle and
744
00:32:40,519 --> 00:32:44,600
it said it in a very long way uh you
745
00:32:42,919 --> 00:32:46,120
know not using any clearly positive
746
00:32:44,600 --> 00:32:47,639
sentiment words not using any clearly
747
00:32:46,120 --> 00:32:49,760
negative sentiment
748
00:32:47,639 --> 00:32:53,760
words
749
00:32:49,760 --> 00:32:56,519
Um, so yeah, basically I
750
00:32:53,760 --> 00:33:00,559
improved it, um, but what are the difficult
751
00:32:56,519 --> 00:33:03,720
cases uh that we saw here so the first
752
00:33:00,559 --> 00:33:07,639
one is low frequency
753
00:33:03,720 --> 00:33:09,760
words so um here's an example the action
754
00:33:07,639 --> 00:33:11,519
switches between past and present but
755
00:33:09,760 --> 00:33:13,120
the material link is too tenuous to
756
00:33:11,519 --> 00:33:16,840
anchor the emotional connections that
757
00:33:13,120 --> 00:33:19,519
purport to span a 125-year divide. So
758
00:33:16,840 --> 00:33:21,080
this is negative um tenuous is kind of a
759
00:33:19,519 --> 00:33:22,799
negative word purport is kind of a
760
00:33:21,080 --> 00:33:24,760
negative word but it doesn't appear very
761
00:33:22,799 --> 00:33:26,159
frequently so I would need to spend all
762
00:33:24,760 --> 00:33:29,720
my time looking for these words and
763
00:33:26,159 --> 00:33:32,480
trying to add them in. Um, here's yet another:
764
00:33:29,720 --> 00:33:34,240
horror franchise mucking up its storyline
765
00:33:32,480 --> 00:33:36,639
with glitches casual fans could correct
766
00:33:34,240 --> 00:33:40,159
in their sleep. Negative
767
00:33:36,639 --> 00:33:42,600
again um so the solutions here are keep
768
00:33:40,159 --> 00:33:46,880
working until we get all of them which
769
00:33:42,600 --> 00:33:49,159
is maybe not super fun um or incorporate
770
00:33:46,880 --> 00:33:51,639
external resources such as sentiment
771
00:33:49,159 --> 00:33:52,880
dictionaries that people created uh we
772
00:33:51,639 --> 00:33:55,960
could do that but that's a lot of
773
00:33:52,880 --> 00:33:57,480
engineering effort to make something
774
00:33:55,960 --> 00:34:00,639
work
775
00:33:57,480 --> 00:34:03,720
um another one is conjugation so we saw
776
00:34:00,639 --> 00:34:06,600
unengaging I guess that's an example of
777
00:34:03,720 --> 00:34:08,359
conjugation uh some other ones are
778
00:34:06,600 --> 00:34:10,520
operatic sprawling picture that's
779
00:34:08,359 --> 00:34:12,040
entertainingly acted magnificently shot
780
00:34:10,520 --> 00:34:15,480
and gripping enough to sustain most of
781
00:34:12,040 --> 00:34:17,399
its 170-minute length. So here we have
782
00:34:15,480 --> 00:34:19,079
magnificently so even if I added
783
00:34:17,399 --> 00:34:20,480
magnificent this wouldn't have been
784
00:34:19,079 --> 00:34:23,800
clocked
785
00:34:20,480 --> 00:34:26,599
right um it's basically an overlong
786
00:34:23,800 --> 00:34:28,839
episode of Tales from the Crypt. So that's
787
00:34:26,599 --> 00:34:31,480
maybe another
788
00:34:28,839 --> 00:34:33,040
example um so some things that we could
789
00:34:31,480 --> 00:34:35,320
do or what we would have done before the
790
00:34:33,040 --> 00:34:37,720
modern paradigm of machine learning is
791
00:34:35,320 --> 00:34:40,079
we would run some sort of normalizer
792
00:34:37,720 --> 00:34:42,800
like a stemmer or other things like this
793
00:34:40,079 --> 00:34:45,240
in order to convert this into uh the
794
00:34:42,800 --> 00:34:48,599
root words that we have already seen
795
00:34:45,240 --> 00:34:52,040
somewhere in our data or have already
796
00:34:48,599 --> 00:34:54,040
handled. So that requires, um, conjugation
797
00:34:52,040 --> 00:34:55,879
analysis or morphological analysis as we
798
00:34:54,040 --> 00:34:57,400
say it in
799
00:34:55,879 --> 00:35:00,680
technical terms.
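(A minimal sketch of that normalization step with NLTK's Porter stemmer; note it only strips suffixes, so prefixes like the "un-" in "unengaging" still need real morphological analysis:)

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["magnificently", "magnificent"]:
    print(word, "->", stemmer.stem(word))
# Both forms reduce toward the same root, so one entry in a
# sentiment dictionary can cover the whole word family.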
800
00:34:57,400 --> 00:35:03,960
Negation, this is a tricky one. So this
801
00:35:00,680 --> 00:35:06,760
one's: not nearly as dreadful as expected.
802
00:35:03,960 --> 00:35:08,800
So dreadful is a pretty bad word, right?
803
00:35:06,760 --> 00:35:13,000
But not nearly as dreadful as expected
804
00:35:08,800 --> 00:35:14,440
is, like, solidly neutral, um, you know, or
805
00:35:13,000 --> 00:35:16,359
maybe even
806
00:35:14,440 --> 00:35:18,920
positive. I would say that's
807
00:35:16,359 --> 00:35:20,640
neutral but you know uh neutral or
808
00:35:18,920 --> 00:35:23,800
positive it's definitely not
809
00:35:20,640 --> 00:35:26,359
negative. Um, the next one: doesn't serve up a
810
00:35:23,800 --> 00:35:29,480
whole lot of laughs. So laughs is
811
00:35:26,359 --> 00:35:31,880
obviously positive, but not serving them up
812
00:35:29,480 --> 00:35:34,440
is obviously
813
00:35:31,880 --> 00:35:36,839
negative. So if negation modifies the
814
00:35:34,440 --> 00:35:38,240
word, we should disregard it, and now we would probably
815
00:35:36,839 --> 00:35:41,440
need to do some sort of syntactic
816
00:35:38,240 --> 00:35:45,599
analysis or semantic analysis of
817
00:35:41,440 --> 00:35:47,520
some sort. Metaphor and analogy: so, puts a human
818
00:35:45,599 --> 00:35:50,640
face on a land most Westerners are
819
00:35:47,520 --> 00:35:52,880
unfamiliar with. Uh, this is
820
00:35:50,640 --> 00:35:54,960
positive. Green might want to hang on to
821
00:35:52,880 --> 00:35:58,800
that ski mask as robbery may be the only
822
00:35:54,960 --> 00:35:58,800
way to pay for his next project.
823
00:35:58,839 --> 00:36:03,640
So this is saying that the movie
824
00:36:01,960 --> 00:36:05,560
was so bad that the director will have
825
00:36:03,640 --> 00:36:08,359
to rob people in order to get money for
826
00:36:05,560 --> 00:36:11,000
the next project so that's kind of bad I
827
00:36:08,359 --> 00:36:12,880
guess. Um: has all the depth of a wading
828
00:36:11,000 --> 00:36:14,520
pool this is kind of my favorite one
829
00:36:12,880 --> 00:36:15,880
because it's really short and sweet but
830
00:36:14,520 --> 00:36:18,800
you know you need to know how deep a
831
00:36:15,880 --> 00:36:21,440
wading pool is um so that's
832
00:36:18,800 --> 00:36:22,960
negative so the solution here I don't
833
00:36:21,440 --> 00:36:24,680
really even know how to handle this with
834
00:36:22,960 --> 00:36:26,880
a rule based system I have no idea how
835
00:36:24,680 --> 00:36:30,040
we would possibly do this yeah machine
836
00:36:26,880 --> 00:36:32,400
learning based models seem to be pretty
837
00:36:30,040 --> 00:36:37,000
adaptive okay and then I start doing
838
00:36:32,400 --> 00:36:37,000
these ones um anyone have a good
839
00:36:38,160 --> 00:36:46,800
idea any any other friends who know
840
00:36:42,520 --> 00:36:50,040
Japanese no okay um so yeah that's
841
00:36:46,800 --> 00:36:52,839
positive um that one's negative uh and
842
00:36:50,040 --> 00:36:54,920
the solution here is learn Japanese I
843
00:36:52,839 --> 00:36:56,800
guess or whatever other language you
844
00:36:54,920 --> 00:37:00,040
want to process so like obviously
845
00:36:56,800 --> 00:37:03,720
rule-based systems don't scale very
846
00:37:00,040 --> 00:37:05,119
well so um we've moved on but like rule
847
00:37:03,720 --> 00:37:06,319
based systems don't scale very well
848
00:37:05,119 --> 00:37:08,160
we're not going to be using them for
849
00:37:06,319 --> 00:37:11,400
most of the things we do in this class
850
00:37:08,160 --> 00:37:14,240
but I do think it's sometimes useful to
851
00:37:11,400 --> 00:37:15,640
try to create one for your task maybe
852
00:37:14,240 --> 00:37:16,680
right at the very beginning of a project
853
00:37:15,640 --> 00:37:18,560
because it gives you an idea about
854
00:37:16,680 --> 00:37:21,160
what's really hard about the task in
855
00:37:18,560 --> 00:37:22,480
some cases so um yeah I wouldn't
856
00:37:21,160 --> 00:37:25,599
entirely discount them I'm not
857
00:37:22,480 --> 00:37:27,400
introducing them for no reason
858
00:37:25,599 --> 00:37:29,880
whatsoever
859
00:37:27,400 --> 00:37:34,160
so next is machine learning based analysis
860
00:37:29,880 --> 00:37:35,400
and machine learning uh in general uh I
861
00:37:34,160 --> 00:37:36,640
here actually when I say machine
862
00:37:35,400 --> 00:37:38,160
learning I'm going to be talking about
863
00:37:36,640 --> 00:37:39,560
the traditional fine-tuning approach
864
00:37:38,160 --> 00:37:43,520
where we have a training set Dev set
865
00:37:39,560 --> 00:37:46,359
test set and so we take our training set
866
00:37:43,520 --> 00:37:49,680
we run some learning algorithm over it
867
00:37:46,359 --> 00:37:52,319
we get a
868
00:37:49,680 --> 00:37:55,839
possibly learned feature extractor f and a
869
00:37:52,319 --> 00:37:57,880
possibly learned scoring function W and
870
00:37:55,839 --> 00:38:00,800
uh then we apply our inference algorithm
871
00:37:57,880 --> 00:38:02,839
our decision rule and make decisions.
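(To make the shape of that pipeline concrete, here is a minimal Python sketch; the function names and the binary labels in {1, -1} are illustrative assumptions, not code from the course.)

def extract_features(x):
    # f: map an input text to a feature vector, here a dict of word counts
    features = {}
    for word in x.split(" "):
        features[word] = features.get(word, 0) + 1
    return features

def classify(x, w):
    # scoring function: dot product between the features and the weights w
    features = extract_features(x)
    score = sum(w.get(name, 0.0) * value for name, value in features.items())
    # inference / decision rule: threshold the score at zero
    return 1 if score > 0 else -1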
872
00:38:00,800 --> 00:38:04,200
when I say possibly learned actually the
873
00:38:02,839 --> 00:38:06,119
first example I'm going to give of a
874
00:38:04,200 --> 00:38:07,760
machine learning based technique is uh
875
00:38:06,119 --> 00:38:10,079
doesn't have a learned feature extractor
876
00:38:07,760 --> 00:38:12,800
but most things that we use nowadays do
877
00:38:10,079 --> 00:38:12,800
have learned feature
878
00:38:13,200 --> 00:38:18,040
extractors so our first attempt is going
879
00:38:15,640 --> 00:38:21,760
to be a bag of words model uh and the
880
00:38:18,040 --> 00:38:27,119
way a bag of words model works is uh
881
00:38:21,760 --> 00:38:30,160
essentially we start out by looking up a
882
00:38:27,119 --> 00:38:33,240
Vector where one element in the vector
883
00:38:30,160 --> 00:38:36,240
is uh is one and all the other elements
884
00:38:33,240 --> 00:38:38,040
in the vector are zero and so if the
885
00:38:36,240 --> 00:38:40,319
word is different the position in the
886
00:38:38,040 --> 00:38:42,839
vector that's one will be different we
887
00:38:40,319 --> 00:38:46,280
add all of these together and this gives
888
00:38:42,839 --> 00:38:48,200
us a vector where each element is the
889
00:38:46,280 --> 00:38:50,359
frequency of that word in the input and
890
00:38:48,200 --> 00:38:52,520
then we multiply that by weights and we
891
00:38:50,359 --> 00:38:55,520
get a
892
00:38:52,520 --> 00:38:57,160
score and um here as I said this is not
893
00:38:55,520 --> 00:39:00,359
a learned feature
894
00:38:57,160 --> 00:39:02,079
uh Vector this is basically uh sorry not
895
00:39:00,359 --> 00:39:04,359
a learned feature extractor this is
896
00:39:02,079 --> 00:39:06,200
basically a fixed feature extractor but
897
00:39:04,359 --> 00:39:09,839
the weights themselves are
898
00:39:06,200 --> 00:39:11,640
learned.
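(Here is a small numpy sketch of that one-hot view, using a made-up three-word vocabulary and hand-set weights purely for illustration.)

import numpy as np

vocab = {"i": 0, "love": 1, "movie": 2}  # word -> position in the vector

def one_hot(word):
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0  # one element is one, all the others are zero
    return v

# adding the one-hot vectors together gives per-word frequencies
x = one_hot("i") + one_hot("love") + one_hot("movie") + one_hot("love")
w = np.array([0.0, 1.5, 0.1])  # weights (these would normally be learned)
score = float(x @ w)           # multiply by the weights to get a score

print(x)      # [1. 2. 1.]  ("love" appeared twice)
print(score)  # 3.1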
899
00:39:09,839 --> 00:39:14,599
Um, so my question is: I mentioned a whole lot of problems before
900
00:39:11,640 --> 00:39:17,480
I mentioned infrequent words I mentioned
901
00:39:14,599 --> 00:39:20,760
conjugation I mentioned uh different
902
00:39:17,480 --> 00:39:22,880
languages I mentioned syntax and
903
00:39:20,760 --> 00:39:24,599
metaphor so which of these do we think
904
00:39:22,880 --> 00:39:25,440
would be fixed by this sort of learning
905
00:39:24,599 --> 00:39:27,400
based
906
00:39:25,440 --> 00:39:29,640
approach
907
00:39:27,400 --> 00:39:29,640
any
908
00:39:29,920 --> 00:39:35,200
ideas maybe not fixed maybe made
909
00:39:32,520 --> 00:39:35,200
significantly
910
00:39:36,880 --> 00:39:41,560
better any Brave uh brave
911
00:39:44,880 --> 00:39:48,440
people maybe maybe
912
00:39:53,720 --> 00:39:58,400
negation okay so maybe doesn't when it
913
00:39:55,760 --> 00:39:58,400
has a negative qu
914
00:40:02,960 --> 00:40:07,560
yeah yeah so for the conjugation if we
915
00:40:05,520 --> 00:40:09,200
had the conjugations of the stems mapped
916
00:40:07,560 --> 00:40:11,119
in the same position that might fix a
917
00:40:09,200 --> 00:40:12,920
conjugation problem but I would say if
918
00:40:11,119 --> 00:40:15,200
you don't do that then this kind of
919
00:40:12,920 --> 00:40:18,160
fixes conjugation a little bit but maybe
920
00:40:15,200 --> 00:40:21,319
not not really yeah kind of fix
921
00:40:18,160 --> 00:40:24,079
conjugation because like they're using
922
00:40:21,319 --> 00:40:26,760
the same there
923
00:40:24,079 --> 00:40:28,400
probably different variations so we
924
00:40:26,760 --> 00:40:31,359
learn how to
925
00:40:28,400 --> 00:40:33,400
classify surrounding
926
00:40:31,359 --> 00:40:35,000
structure yeah if it's a big enough
927
00:40:33,400 --> 00:40:36,760
training set you might have covered the
928
00:40:35,000 --> 00:40:37,880
various conjugations but if you haven't
929
00:40:36,760 --> 00:40:43,000
and you don't have any rule-based
930
00:40:37,880 --> 00:40:43,000
processing there might still be problems
931
00:40:45,400 --> 00:40:50,359
yeah yeah so infrequent words if you
932
00:40:48,280 --> 00:40:52,560
have a large enough training set yeah
933
00:40:50,359 --> 00:40:54,599
you'll be able to fix it to some extent
934
00:40:52,560 --> 00:40:56,480
so none of the problems are entirely
935
00:40:54,599 --> 00:40:57,880
fixed but a lot of them are made better
936
00:40:56,480 --> 00:40:58,960
different languages is also made better
937
00:40:57,880 --> 00:41:00,119
if you have training data in that
938
00:40:58,960 --> 00:41:04,599
language but if you don't then you're
939
00:41:00,119 --> 00:41:06,240
out of luck so um so now what I'd like to
940
00:41:04,599 --> 00:41:10,800
do is I'd like to look at what
941
00:41:06,240 --> 00:41:15,079
our vectors represent so basically um in
942
00:41:10,800 --> 00:41:16,880
uh in binary classification each word um
943
00:41:15,079 --> 00:41:19,119
sorry so the vectors themselves
944
00:41:16,880 --> 00:41:21,880
represent the counts of the words here
945
00:41:19,119 --> 00:41:25,319
I'm talking about what the weight uh
946
00:41:21,880 --> 00:41:28,520
vectors or matrices correspond to and
947
00:41:25,319 --> 00:41:31,640
the weight uh Vector here will be
948
00:41:28,520 --> 00:41:33,680
positive if the word tends to be
949
00:41:31,640 --> 00:41:36,680
positive in the binary classification
950
00:41:33,680 --> 00:41:38,400
case. In the multiclass classification case
951
00:41:36,680 --> 00:41:42,480
we'll actually have a matrix that looks
952
00:41:38,400 --> 00:41:45,480
like this where um each column or row uh
953
00:41:42,480 --> 00:41:47,079
corresponds to the word and each row or
954
00:41:45,480 --> 00:41:49,319
column corresponds to a label and it
955
00:41:47,079 --> 00:41:51,960
will be higher if that word tends to uh
956
00:41:49,319 --> 00:41:54,800
correlate with that label a
957
00:41:51,960 --> 00:41:56,920
little
958
00:41:54,800 --> 00:41:59,240
bit.
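(A rough numpy sketch of the multiclass case; I am arbitrarily choosing rows for labels and columns for words here, since the slide could be drawn either way, and the numbers are invented.)

import numpy as np

x = np.array([1.0, 2.0, 0.0])  # bag-of-words counts for a 3-word vocabulary

# weight matrix: entry (label, word) is higher if that word tends to
# correlate with that label
W = np.array([[ 0.2,  1.0, -0.3],   # e.g. "positive"
              [ 0.1, -0.8,  0.4],   # e.g. "negative"
              [ 0.0,  0.1,  0.1]])  # e.g. "neutral"

scores = W @ x                   # one score per label
label = int(np.argmax(scores))   # predict the highest-scoring label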
959
00:41:56,920 --> 00:42:04,079
So this training of the bag of words
960
00:41:59,240 --> 00:42:07,720
model can be done uh so simply that
961
00:42:04,079 --> 00:42:10,200
we uh can put it in a single slide so
962
00:42:07,720 --> 00:42:11,599
basically here uh what we do is we start
963
00:42:10,200 --> 00:42:14,760
out with the feature
964
00:42:11,599 --> 00:42:18,880
weights and for each example in our data
965
00:42:14,760 --> 00:42:20,800
set we extract features um the exact way
966
00:42:18,880 --> 00:42:23,920
I'm extracting features is basically
967
00:42:20,800 --> 00:42:25,720
splitting uh splitting the words using
968
00:42:23,920 --> 00:42:28,000
the python split function and then uh
969
00:42:25,720 --> 00:42:31,319
counting the number of times each word
970
00:42:28,000 --> 00:42:33,160
exists uh we then run the classifier so
971
00:42:31,319 --> 00:42:36,280
actually running the classifier is
972
00:42:33,160 --> 00:42:38,200
exactly the same as what we did for the
973
00:42:36,280 --> 00:42:42,640
uh the rule based system it's just that
974
00:42:38,200 --> 00:42:47,359
we have feature vectors instead and
975
00:42:42,640 --> 00:42:51,559
then if the predicted value is
976
00:42:47,359 --> 00:42:55,160
not equal to the true value then for each of the
977
00:42:51,559 --> 00:42:56,680
features uh in the feature space we
978
00:42:55,160 --> 00:43:02,200
upweight
979
00:42:56,680 --> 00:43:03,599
the weights by the
980
00:43:02,200 --> 00:43:06,000
feature
981
00:43:03,599 --> 00:43:09,920
values of the vector
982
00:43:06,000 --> 00:43:13,240
if Y is positive and we downweight the
983
00:43:09,920 --> 00:43:16,240
weights uh by the values of the vector if Y
984
00:43:13,240 --> 00:43:18,520
is negative so this is really really
985
00:43:16,240 --> 00:43:20,559
simple it's uh probably the simplest
986
00:43:18,520 --> 00:43:25,079
possible algorithm for training one of
987
00:43:20,559 --> 00:43:27,559
these models.
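(Here is a minimal sketch of that loop, essentially a perceptron update, reusing the helpers sketched earlier; the data format is an assumption for illustration, not the actual course notebook.)

import random

def extract_features(x):
    # split on whitespace and count how many times each word appears
    features = {}
    for word in x.split(" "):
        features[word] = features.get(word, 0) + 1
    return features

def run_classifier(features, w):
    score = sum(w.get(name, 0.0) * value for name, value in features.items())
    return 1 if score > 0 else -1

def train(data, epochs=5):
    # data: list of (text, label) pairs, with labels in {1, -1}
    w = {}  # start out with empty (zero) feature weights
    for _ in range(epochs):
        random.shuffle(data)  # important for incremental algorithms, see below
        for x, y in data:
            features = extract_features(x)
            if run_classifier(features, w) != y:
                # upweight by the feature values if y is positive,
                # downweight by the feature values if y is negative
                for name, value in features.items():
                    w[name] = w.get(name, 0.0) + y * value
    return w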
988
00:43:25,079 --> 00:43:30,040
Um, but I also have an example that you can take a
989
00:43:27,559 --> 00:43:31,960
look at here's a trained bag of words
990
00:43:30,040 --> 00:43:33,680
classifier and we could step through
991
00:43:31,960 --> 00:43:34,960
this is on exactly the same data set as
992
00:43:33,680 --> 00:43:37,240
I did before we're training on the
993
00:43:34,960 --> 00:43:42,359
training set
994
00:43:37,240 --> 00:43:43,640
um and uh evaluating on the dev set um I
995
00:43:42,359 --> 00:43:45,880
also have some extra stuff like I'm
996
00:43:43,640 --> 00:43:47,079
Shuffling the order of the data IDs
997
00:43:45,880 --> 00:43:49,440
which is really important if you're
998
00:43:47,079 --> 00:43:53,160
doing this sort of incremental algorithm
999
00:43:49,440 --> 00:43:54,960
uh because uh what if what if your
1000
00:43:53,160 --> 00:43:57,400
training data set was ordered in this
1001
00:43:54,960 --> 00:44:00,040
way where you have all of the positive
1002
00:43:57,400 --> 00:44:00,040
labels on
1003
00:44:00,359 --> 00:44:04,520
top and then you have all of the
1004
00:44:02,280 --> 00:44:06,680
negative labels on the
1005
00:44:04,520 --> 00:44:08,200
bottom if you do something like this it
1006
00:44:06,680 --> 00:44:10,200
would see only negative labels at the
1007
00:44:08,200 --> 00:44:11,800
end of training and you might have
1008
00:44:10,200 --> 00:44:14,400
problems because your model would only
1009
00:44:11,800 --> 00:44:17,440
predict negatives so we also Shuffle
1010
00:44:14,400 --> 00:44:20,319
data um and then step through we run the
1011
00:44:17,440 --> 00:44:22,559
classifier and I'm going to run uh five
1012
00:44:20,319 --> 00:44:23,640
epochs of training through the data set
1013
00:44:22,559 --> 00:44:27,160
uh very
1014
00:44:23,640 --> 00:44:29,599
fast and calculate our accuracy
1015
00:44:27,160 --> 00:44:33,280
and this got 75% accuracy on the
1016
00:44:29,599 --> 00:44:36,160
training data set and uh 56% accuracy on
1017
00:44:33,280 --> 00:44:40,000
the dev data set so uh if you remember
1018
00:44:36,160 --> 00:44:41,520
our rule-based classifier had uh 42%
1019
00:44:40,000 --> 00:44:43,880
accuracy and now our training based
1020
00:44:41,520 --> 00:44:45,760
classifier has 56% accuracy but it's
1021
00:44:43,880 --> 00:44:49,359
overfitting heavily to the training set
1022
00:44:45,760 --> 00:44:50,880
so um basically this is a pretty strong
1023
00:44:49,359 --> 00:44:53,480
advertisement for why we should be using
1024
00:44:50,880 --> 00:44:54,960
machine learning you know the amount
1025
00:44:53,480 --> 00:44:57,800
of code that we had for this machine
1026
00:44:54,960 --> 00:44:59,720
learning model is basically very similar
1027
00:44:57,800 --> 00:45:02,680
um it's not using any external libraries
1028
00:44:59,720 --> 00:45:02,680
but we're getting better at
1029
00:45:03,599 --> 00:45:08,800
this
1030
00:45:05,800 --> 00:45:08,800
cool
1031
00:45:09,559 --> 00:45:16,000
so cool any any questions
1032
00:45:13,520 --> 00:45:18,240
here and so I'm going to talk about the
1033
00:45:16,000 --> 00:45:20,760
connection between this algorithm and
1034
00:45:18,240 --> 00:45:22,839
neural networks in the next class um
1035
00:45:20,760 --> 00:45:24,200
because this actually is using a very
1036
00:45:22,839 --> 00:45:26,319
similar training algorithm to what we
1037
00:45:24,200 --> 00:45:27,480
use in neural networks with some uh
1038
00:45:26,319 --> 00:45:30,079
particular
1039
00:45:27,480 --> 00:45:32,839
assumptions cool um so what's missing in
1040
00:45:30,079 --> 00:45:34,800
bag of words um still handling of
1041
00:45:32,839 --> 00:45:36,880
conjugation or compound words is not
1042
00:45:34,800 --> 00:45:39,160
perfect uh we can do it to some extent
1043
00:45:36,880 --> 00:45:41,079
to the point where we can uh memorize
1044
00:45:39,160 --> 00:45:44,079
things so I love this movie I loved this
1045
00:45:41,079 --> 00:45:46,920
movie another thing is handling word
1046
00:45:44,079 --> 00:45:49,240
uh similarities so I love this movie and
1047
00:45:46,920 --> 00:45:50,720
I adore this movie uh these basically
1048
00:45:49,240 --> 00:45:52,119
mean the same thing as humans we know
1049
00:45:50,720 --> 00:45:54,200
they mean the same thing so we should be
1050
00:45:52,119 --> 00:45:56,079
able to take advantage of that fact to
1051
00:45:54,200 --> 00:45:57,839
learn better models but we're not doing
1052
00:45:56,079 --> 00:46:02,760
that in this model at the moment because
1053
00:45:57,839 --> 00:46:05,440
each unit is uh treated as an atomic unit
1054
00:46:02,760 --> 00:46:08,040
and there's no idea of
1055
00:46:05,440 --> 00:46:11,040
similarity also handling of combination
1056
00:46:08,040 --> 00:46:12,760
features so um I love this movie and I
1057
00:46:11,040 --> 00:46:14,920
don't love this movie I hate this movie
1058
00:46:12,760 --> 00:46:17,079
and I don't hate this movie actually
1059
00:46:14,920 --> 00:46:20,400
this is a little bit tricky because
1060
00:46:17,079 --> 00:46:23,240
negative words are slightly indicative
1061
00:46:20,400 --> 00:46:25,280
of it being negative but actually what
1062
00:46:23,240 --> 00:46:28,119
they do is they negate the other things
1063
00:46:25,280 --> 00:46:28,119
that you're saying in the
1064
00:46:28,240 --> 00:46:36,559
sentence
1065
00:46:30,720 --> 00:46:40,480
so um like love is positive hate is
1066
00:46:36,559 --> 00:46:40,480
negative but like don't
1067
00:46:50,359 --> 00:46:56,079
love it's actually kind of like this
1068
00:46:52,839 --> 00:46:59,359
right like um love is very positive
1069
00:46:56,079 --> 00:47:01,760
hate is very negative but don't love is
1070
00:46:59,359 --> 00:47:04,680
like slightly less positive than don't
1071
00:47:01,760 --> 00:47:06,160
hate right so um It's actually kind of
1072
00:47:04,680 --> 00:47:07,559
tricky because you need to combine them
1073
00:47:06,160 --> 00:47:10,720
together and figure out what's going on
1074
00:47:07,559 --> 00:47:12,280
based on that another example that a lot
1075
00:47:10,720 --> 00:47:14,160
of people might not think of immediately
1076
00:47:12,280 --> 00:47:17,880
but is super super common in sentiment
1077
00:47:14,160 --> 00:47:20,160
analysis or any other thing is 'but' so
1078
00:47:17,880 --> 00:47:22,599
basically what but does is it throws
1079
00:47:20,160 --> 00:47:24,160
away all the stuff that you said before
1080
00:47:22,599 --> 00:47:26,119
um and you can just pay attention to the
1081
00:47:24,160 --> 00:47:29,000
stuff that comes afterward so like we
1082
00:47:26,119 --> 00:47:30,440
could even add this to our um like if
1083
00:47:29,000 --> 00:47:31,760
you want to add this to your rule based
1084
00:47:30,440 --> 00:47:33,240
classifier you can do that you just
1085
00:47:31,760 --> 00:47:34,640
search for 'but' and delete everything
1086
00:47:33,240 --> 00:47:37,240
before it and see if that improves your
1087
00:47:34,640 --> 00:47:39,240
accuracy it might be a fun very
1088
00:47:37,240 --> 00:47:43,480
quick thing to try.
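(A hypothetical Python version of that rule; whether it actually improves accuracy is exactly the kind of thing you would check on the dev set.)

def keep_after_but(text):
    # throw away everything before the last "but", if there is one
    words = text.split(" ")
    if "but" in words:
        last = len(words) - 1 - words[::-1].index("but")
        return " ".join(words[last + 1:])
    return text

print(keep_after_but("the acting is great but the plot is dreadful"))
# -> "the plot is dreadful"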
1089
00:47:39,240 --> 00:47:44,880
Cool, so the better solution, which is
1090
00:47:43,480 --> 00:47:46,800
what we're going to talk about for every
1091
00:47:44,880 --> 00:47:49,480
other class other than uh other than
1092
00:47:46,800 --> 00:47:52,160
this one is neural network models and
1093
00:47:49,480 --> 00:47:55,800
basically uh what they do is they do a
1094
00:47:52,160 --> 00:47:59,400
lookup of uh dense word embeddings so
1095
00:47:55,800 --> 00:48:02,520
instead of looking up uh individual uh
1096
00:47:59,400 --> 00:48:04,640
sparse uh vectors individual one hot
1097
00:48:02,520 --> 00:48:06,920
vectors they look up dense word
1098
00:48:04,640 --> 00:48:09,680
embeddings and then throw them into some
1099
00:48:06,920 --> 00:48:11,880
complicated function to extract features
1100
00:48:09,680 --> 00:48:16,359
and based on the features uh multiply by
1101
00:48:11,880 --> 00:48:18,280
weights and get a score.
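(A tiny numpy sketch of that flow, in the spirit of a continuous-bag-of-words classifier; the dimensions, the single tanh layer, and the random initialization are illustrative assumptions, and in a real model the embeddings and weights would all be learned.)

import numpy as np

rng = np.random.default_rng(0)
vocab = {"i": 0, "adore": 1, "this": 2, "movie": 3}

emb = rng.normal(size=(len(vocab), 8))  # dense word embeddings, 8 dims each
W1 = rng.normal(size=(8, 8))            # the "complicated function" (one layer)
w2 = rng.normal(size=8)                 # scoring weights

def score(words):
    vecs = [emb[vocab[w]] for w in words]    # look up dense embeddings
    h = np.tanh(np.mean(vecs, axis=0) @ W1)  # extract features nonlinearly
    return float(h @ w2)                     # multiply by weights -> a score

print(score(["i", "adore", "this", "movie"]))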
1102
00:48:16,359 --> 00:48:20,359
Um, and if you're doing text classification in the
1103
00:48:18,280 --> 00:48:22,520
traditional way this is normally what
1104
00:48:20,359 --> 00:48:23,760
you do um if you're doing text
1105
00:48:22,520 --> 00:48:25,960
classification with something like
1106
00:48:23,760 --> 00:48:27,280
prompting you're still actually doing
1107
00:48:25,960 --> 00:48:29,960
this because you're calculating the
1108
00:48:27,280 --> 00:48:32,960
score of the next word to predict and
1109
00:48:29,960 --> 00:48:34,720
that's done in exactly the same way so
1110
00:48:32,960 --> 00:48:37,760
uh even if you're using a large language
1111
00:48:34,720 --> 00:48:39,359
model like GPT this is still probably
1112
00:48:37,760 --> 00:48:41,800
happening under the hood unless open the
1113
00:48:39,359 --> 00:48:43,400
eye invented something that very
1114
00:48:41,800 --> 00:48:45,559
different in Alien than anything else
1115
00:48:43,400 --> 00:48:48,440
that we know of but I I'm guessing that
1116
00:48:45,559 --> 00:48:48,440
that probably hasn't
1117
00:48:48,480 --> 00:48:52,880
happened um one nice thing about neural
1118
00:48:50,880 --> 00:48:54,480
networks is neural networks
1119
00:48:52,880 --> 00:48:57,559
theoretically are powerful enough to
1120
00:48:54,480 --> 00:49:00,000
solve any task if you make them uh deep
1121
00:48:57,559 --> 00:49:01,160
enough or wide enough uh like if you
1122
00:49:00,000 --> 00:49:04,520
make them wide enough and then if you
1123
00:49:01,160 --> 00:49:06,799
make them deep it also helps further so
1124
00:49:04,520 --> 00:49:08,079
anytime somebody says well you can't
1125
00:49:06,799 --> 00:49:11,119
just solve that problem with neural
1126
00:49:08,079 --> 00:49:13,240
networks you know that they're lying
1127
00:49:11,119 --> 00:49:15,720
basically because they theoretically can
1128
00:49:13,240 --> 00:49:17,359
solve every problem uh but you have you
1129
00:49:15,720 --> 00:49:19,799
have issues of data you have issues of
1130
00:49:17,359 --> 00:49:23,079
other things like that so you know they
1131
00:49:19,799 --> 00:49:23,079
don't just necessarily work
1132
00:49:23,119 --> 00:49:28,040
out of the box cool um so the final thing I'd like
1133
00:49:26,400 --> 00:49:29,319
to talk about is the road map going
1134
00:49:28,040 --> 00:49:31,319
forward some of the things I'm going to
1135
00:49:29,319 --> 00:49:32,799
cover in the class and some of the
1136
00:49:31,319 --> 00:49:35,200
logistics
1137
00:49:32,799 --> 00:49:36,799
issues so um the first thing I'm going
1138
00:49:35,200 --> 00:49:38,240
to talk about in the class is language
1139
00:49:36,799 --> 00:49:40,559
modeling
1140
00:49:38,240 --> 00:49:42,720
fundamentals and uh so this could
1141
00:49:40,559 --> 00:49:44,240
include language models uh that just
1142
00:49:42,720 --> 00:49:46,559
predict the next words it could include
1143
00:49:44,240 --> 00:49:50,559
language models that predict the output
1144
00:49:46,559 --> 00:49:51,599
given the uh the input or the prompt um
1145
00:49:50,559 --> 00:49:54,559
I'm going to be talking about
1146
00:49:51,599 --> 00:49:56,520
representing words uh how how we get
1147
00:49:54,559 --> 00:49:59,319
word representation subword models other
1148
00:49:56,520 --> 00:50:01,440
things like that uh then go kind of
1149
00:49:59,319 --> 00:50:04,200
deeper into language modeling uh how do
1150
00:50:01,440 --> 00:50:07,799
we do it how do we evaluate it other
1151
00:50:04,200 --> 00:50:10,920
things um sequence encoding uh and this
1152
00:50:07,799 --> 00:50:13,240
is going to cover things like uh
1153
00:50:10,920 --> 00:50:16,280
Transformers uh self-attention models
1154
00:50:13,240 --> 00:50:18,559
but also very quickly CNNs and RNNs
1155
00:50:16,280 --> 00:50:20,880
which are useful in some
1156
00:50:18,559 --> 00:50:22,200
cases um and then we're going to
1157
00:50:20,880 --> 00:50:24,040
specifically go very deep into the
1158
00:50:22,200 --> 00:50:25,960
Transformer architecture and also talk a
1159
00:50:24,040 --> 00:50:27,280
little bit about some of the modern uh
1160
00:50:25,960 --> 00:50:30,240
improvements to the Transformer
1161
00:50:27,280 --> 00:50:31,839
architecture so the Transformer we're
1162
00:50:30,240 --> 00:50:33,839
using nowadays is very different than
1163
00:50:31,839 --> 00:50:36,200
the Transformer that was invented in
1164
00:50:33,839 --> 00:50:37,240
2017 uh so we're going to talk well I
1165
00:50:36,200 --> 00:50:38,760
wouldn't say very different but
1166
00:50:37,240 --> 00:50:41,359
different enough that it's important so
1167
00:50:38,760 --> 00:50:43,280
we're going to talk about some of those
1168
00:50:41,359 --> 00:50:45,079
things second thing I'd like to talk
1169
00:50:43,280 --> 00:50:47,000
about is training and inference methods
1170
00:50:45,079 --> 00:50:48,839
so this includes uh generation
1171
00:50:47,000 --> 00:50:52,119
algorithms uh so we're going to have a
1172
00:50:48,839 --> 00:50:55,520
whole class on how we generate text uh
1173
00:50:52,119 --> 00:50:58,319
in different ways uh prompting how uh we
1174
00:50:55,520 --> 00:50:59,720
can prompt things I hear uh world class
1175
00:50:58,319 --> 00:51:01,799
prompt engineers make a lot of money
1176
00:50:59,720 --> 00:51:05,480
nowadays so uh you'll want to pay
1177
00:51:01,799 --> 00:51:08,760
attention to that one um and instruction
1178
00:51:05,480 --> 00:51:11,520
tuning uh so how do we train models to
1179
00:51:08,760 --> 00:51:13,720
handle a lot of different tasks and
1180
00:51:11,520 --> 00:51:15,839
reinforcement learning so how do we uh
1181
00:51:13,720 --> 00:51:18,520
you know like actually generate outputs
1182
00:51:15,839 --> 00:51:19,839
uh kind of Judge them and then learn
1183
00:51:18,520 --> 00:51:22,599
from
1184
00:51:19,839 --> 00:51:25,880
there also experimental design and
1185
00:51:22,599 --> 00:51:28,079
evaluation so experimental design uh so
1186
00:51:25,880 --> 00:51:30,480
how do we design an experiment well uh
1187
00:51:28,079 --> 00:51:32,000
so that it backs up
1188
00:51:30,480 --> 00:51:34,559
the conclusions that we want to be
1189
00:51:32,000 --> 00:51:37,000
backing up how do we do human annotation
1190
00:51:34,559 --> 00:51:38,880
of data in a reliable way this is
1191
00:51:37,000 --> 00:51:41,160
getting harder and harder as models get
1192
00:51:38,880 --> 00:51:43,359
better and better because uh getting
1193
00:51:41,160 --> 00:51:45,000
humans who don't care very much about
1194
00:51:43,359 --> 00:51:48,559
The annotation task they might do worse
1195
00:51:45,000 --> 00:51:51,119
than GPT-4 so um you need to be careful of
1196
00:51:48,559 --> 00:51:52,240
that also debugging and interpretation
1197
00:51:51,119 --> 00:51:53,960
techniques so what are some of the
1198
00:51:52,240 --> 00:51:55,160
automatic techniques that you can do to
1199
00:51:53,960 --> 00:51:57,720
quickly figure out what's going wrong
1200
00:51:55,160 --> 00:52:00,040
with your models and improve
1201
00:51:57,720 --> 00:52:01,599
them and uh bias and fairness
1202
00:52:00,040 --> 00:52:04,200
considerations so it's really really
1203
00:52:01,599 --> 00:52:05,799
important nowadays uh that models are
1204
00:52:04,200 --> 00:52:07,880
being deployed to real people in the
1205
00:52:05,799 --> 00:52:09,880
real world and like actually causing
1206
00:52:07,880 --> 00:52:11,760
harm to people in some cases that we
1207
00:52:09,880 --> 00:52:15,160
need to be worried about
1208
00:52:11,760 --> 00:52:17,000
that advanced training and architectures
1209
00:52:15,160 --> 00:52:19,280
so we're going to talk about
1210
00:52:17,000 --> 00:52:21,400
distillation and quantization how can we
1211
00:52:19,280 --> 00:52:23,520
make small language models uh that
1212
00:52:21,400 --> 00:52:24,880
actually still work well like not large
1213
00:52:23,520 --> 00:52:27,559
you can run them on your phone you can
1214
00:52:24,880 --> 00:52:29,920
run them on your local
1215
00:52:27,559 --> 00:52:31,640
laptop um ensembling and mixtures of
1216
00:52:29,920 --> 00:52:33,480
experts how can we combine together
1217
00:52:31,640 --> 00:52:34,760
multiple models in order to create
1218
00:52:33,480 --> 00:52:35,880
models that are better than the sum of
1219
00:52:34,760 --> 00:52:38,799
their
1220
00:52:35,880 --> 00:52:40,720
parts and um retrieval and retrieval
1221
00:52:38,799 --> 00:52:43,920
augmented
1222
00:52:40,720 --> 00:52:45,480
generation long sequence models uh so
1223
00:52:43,920 --> 00:52:49,920
how do we handle long
1224
00:52:45,480 --> 00:52:52,240
outputs um and uh we're going to talk
1225
00:52:49,920 --> 00:52:55,760
about applications to complex reasoning
1226
00:52:52,240 --> 00:52:57,760
tasks code generation language agents
1227
00:52:55,760 --> 00:52:59,920
and knowledge-based QA and information
1228
00:52:57,760 --> 00:53:04,160
extraction I picked
1229
00:52:59,920 --> 00:53:06,760
these because they seem to be maybe the
1230
00:53:04,160 --> 00:53:09,880
most important at least in research
1231
00:53:06,760 --> 00:53:11,440
nowadays and also they cover uh the
1232
00:53:09,880 --> 00:53:13,640
things that when I talk to people in
1233
00:53:11,440 --> 00:53:15,280
Industry are kind of most interested in
1234
00:53:13,640 --> 00:53:17,559
so hopefully it'll be useful regardless
1235
00:53:15,280 --> 00:53:19,799
of uh whether you plan on doing research
1236
00:53:17,559 --> 00:53:22,839
or or plan on doing industry related
1237
00:53:19,799 --> 00:53:24,160
things uh by by the way the two things
1238
00:53:22,839 --> 00:53:25,920
that when I talk to people in Industry
1239
00:53:24,160 --> 00:53:29,599
they're most interested in are RAG and
1240
00:53:25,920 --> 00:53:31,079
code generation at the moment for now um
1241
00:53:29,599 --> 00:53:32,319
so those are ones that you'll want to
1242
00:53:31,079 --> 00:53:34,680
pay attention
1243
00:53:32,319 --> 00:53:36,599
to and then finally we have a few
1244
00:53:34,680 --> 00:53:40,079
lectures on Linguistics and
1245
00:53:36,599 --> 00:53:42,720
multilinguality um I love Linguistics
1246
00:53:40,079 --> 00:53:44,839
but uh to be honest at the moment most
1247
00:53:42,720 --> 00:53:47,760
of our Cutting Edge models don't
1248
00:53:44,839 --> 00:53:49,240
explicitly use linguistic structure um
1249
00:53:47,760 --> 00:53:50,799
but I still think it's useful to know
1250
00:53:49,240 --> 00:53:52,760
about it especially if you're working on
1251
00:53:50,799 --> 00:53:54,880
multilingual things especially if you're
1252
00:53:52,760 --> 00:53:57,040
interested in very robust generalization
1253
00:53:54,880 --> 00:53:58,920
to new models so we're going to talk a
1254
00:53:57,040 --> 00:54:02,599
little bit about that and also
1255
00:53:58,920 --> 00:54:06,079
multilingual NLP I'm going to have
1256
00:54:02,599 --> 00:54:09,119
a guest lecture so also if you have any suggestions
1257
00:54:06,079 --> 00:54:11,400
um we have two guest lecture slots still
1258
00:54:09,119 --> 00:54:12,799
open uh that I'm trying to fill so if
1259
00:54:11,400 --> 00:54:15,440
you have any things that you really want
1260
00:54:12,799 --> 00:54:16,440
to hear about um I could either add them
1261
00:54:15,440 --> 00:54:19,319
to the
1262
00:54:16,440 --> 00:54:21,079
existing you know content or I could
1263
00:54:19,319 --> 00:54:23,240
invite a guest lecturer who's working on
1264
00:54:21,079 --> 00:54:24,079
that topic so you know please feel free
1265
00:54:23,240 --> 00:54:26,760
to tell
1266
00:54:24,079 --> 00:54:29,160
me um then the class format and
1267
00:54:26,760 --> 00:54:32,280
structure uh the class
1268
00:54:29,160 --> 00:54:34,000
content my goal is for you to learn in detail
1269
00:54:32,280 --> 00:54:36,640
about building NLP systems from a
1270
00:54:34,000 --> 00:54:40,520
research perspective so this is a 700
1271
00:54:36,640 --> 00:54:43,599
level course so it's aiming to be for
1272
00:54:40,520 --> 00:54:46,960
people who really want to try new and
1273
00:54:43,599 --> 00:54:49,280
Innovative things in uh kind of natural
1274
00:54:46,960 --> 00:54:51,359
language processing it's not going to
1275
00:54:49,280 --> 00:54:52,760
focus solely on reimplementing things
1276
00:54:51,359 --> 00:54:54,319
that have been done before including in
1277
00:54:52,760 --> 00:54:55,280
the project I'm going to be expecting
1278
00:54:54,319 --> 00:54:58,480
everybody to do something
1279
00:54:55,280 --> 00:54:59,920
that's kind of new whether it's coming
1280
00:54:58,480 --> 00:55:01,359
up with a new method or applying
1281
00:54:59,920 --> 00:55:03,559
existing methods to a place where they
1282
00:55:01,359 --> 00:55:05,079
haven't been used before or building out
1283
00:55:03,559 --> 00:55:06,640
things for a new language or something
1284
00:55:05,079 --> 00:55:08,359
like that so that's kind of one of the
1285
00:55:06,640 --> 00:55:11,480
major goals of this
1286
00:55:08,359 --> 00:55:13,000
class um learn basic and advanced topics
1287
00:55:11,480 --> 00:55:15,559
in machine learning approaches to NLP
1288
00:55:13,000 --> 00:55:18,359
and language models learn some basic
1289
00:55:15,559 --> 00:55:21,480
linguistic knowledge useful in NLP uh
1290
00:55:18,359 --> 00:55:23,200
see case studies of NLP applications and
1291
00:55:21,480 --> 00:55:25,680
learn how to identify unique problems
1292
00:55:23,200 --> 00:55:29,039
for each um one thing I'd like to point
1293
00:55:25,680 --> 00:55:31,160
out is I'm not going to cover every NLP
1294
00:55:29,039 --> 00:55:32,920
application ever because that would be
1295
00:55:31,160 --> 00:55:35,520
absolutely impossible NLP is being used
1296
00:55:32,920 --> 00:55:37,079
in so many different areas nowadays but
1297
00:55:35,520 --> 00:55:38,960
what I want people to pay attention to
1298
00:55:37,079 --> 00:55:41,280
like even if you're not super interested
1299
00:55:38,960 --> 00:55:42,400
in code generation for example what you
1300
00:55:41,280 --> 00:55:44,200
can do is you can look at code
1301
00:55:42,400 --> 00:55:46,160
generation look at how people identify
1302
00:55:44,200 --> 00:55:47,680
problems look at the methods that people
1303
00:55:46,160 --> 00:55:50,880
have proposed to solve those unique
1304
00:55:47,680 --> 00:55:53,039
problems and then kind of map that try
1305
00:55:50,880 --> 00:55:54,799
to do some generalization onto your own
1306
00:55:53,039 --> 00:55:57,799
problems of Interest so uh that's kind
1307
00:55:54,799 --> 00:56:00,280
of the goal of the NLP
1308
00:55:57,799 --> 00:56:02,440
applications finally uh learning how to
1309
00:56:00,280 --> 00:56:05,160
debug when and where NLP systems fail
1310
00:56:02,440 --> 00:56:08,200
and build improvements based on this so
1311
00:56:05,160 --> 00:56:10,200
um ever since I was a graduate student
1312
00:56:08,200 --> 00:56:12,720
this has been like one of the really
1313
00:56:10,200 --> 00:56:15,920
important things that I feel like I've
1314
00:56:12,720 --> 00:56:17,440
done well or done better than some other
1315
00:56:15,920 --> 00:56:19,280
people and I I feel like it's a really
1316
00:56:17,440 --> 00:56:21,119
good way to like even if you're only
1317
00:56:19,280 --> 00:56:22,680
interested in improving accuracy knowing
1318
00:56:21,119 --> 00:56:25,039
why your system's failing still is the
1319
00:56:22,680 --> 00:56:27,599
best way to do that so I'm going to
1320
00:56:25,039 --> 00:56:30,559
put a lot of emphasis on
1321
00:56:27,599 --> 00:56:32,559
that in terms of the class format um
1322
00:56:30,559 --> 00:56:36,280
before class for some classes there are
1323
00:56:32,559 --> 00:56:37,880
recommended reading uh this can be
1324
00:56:36,280 --> 00:56:39,559
helpful to read I'm never going to
1325
00:56:37,880 --> 00:56:41,119
expect you to definitely have read it
1326
00:56:39,559 --> 00:56:42,480
before the class but I would suggest
1327
00:56:41,119 --> 00:56:45,160
that maybe you'll get more out of the
1328
00:56:42,480 --> 00:56:47,319
class if you do that um during class
1329
00:56:45,160 --> 00:56:48,079
we'll have the lecture um and discussion
1330
00:56:47,319 --> 00:56:50,559
with
1331
00:56:48,079 --> 00:56:52,359
everybody um sometimes we'll have a code
1332
00:56:50,559 --> 00:56:55,839
or data walk
1333
00:56:52,359 --> 00:56:58,760
um actually this is a a little bit old I
1334
00:56:55,839 --> 00:57:01,880
I have this slide but this year we're
1335
00:56:58,760 --> 00:57:04,160
going to be adding more uh code and data
1336
00:57:01,880 --> 00:57:07,400
walks during office hours and the way it
1337
00:57:04,160 --> 00:57:09,400
will work is one of the TAs we have
1338
00:57:07,400 --> 00:57:11,160
seven TAs who I'm going to introduce
1339
00:57:09,400 --> 00:57:15,000
very soon but one of the TAs will be
1340
00:57:11,160 --> 00:57:16,839
doing this kind of recitation where you
1341
00:57:15,000 --> 00:57:18,200
um where we go over a library so if
1342
00:57:16,839 --> 00:57:19,480
you're not familiar with the library and
1343
00:57:18,200 --> 00:57:21,960
you want to be more familiar with the
1344
00:57:19,480 --> 00:57:23,720
library you can join this and uh then
1345
00:57:21,960 --> 00:57:25,400
we'll be able to do this and this will
1346
00:57:23,720 --> 00:57:28,240
cover things like
1347
00:57:25,400 --> 00:57:31,039
um PyTorch and SentencePiece uh we're
1348
00:57:28,240 --> 00:57:33,280
going to start out with Hugging Face um
1349
00:57:31,039 --> 00:57:36,559
inference stuff like
1350
00:57:33,280 --> 00:57:41,520
vLLM uh debugging software like
1351
00:57:36,559 --> 00:57:41,520
Zeno um what were the other
1352
00:57:41,960 --> 00:57:47,200
ones oh the OpenAI API and LiteLLM
1353
00:57:45,680 --> 00:57:50,520
other stuff like that so we we have lots
1354
00:57:47,200 --> 00:57:53,599
of them planned we'll uh uh we'll update
1355
00:57:50,520 --> 00:57:54,839
that um and then after class after
1356
00:57:53,599 --> 00:57:58,079
almost every class we'll have a question
1357
00:57:54,839 --> 00:58:00,079
quiz um and the quiz is intended to just
1358
00:57:58,079 --> 00:58:02,000
you know make sure that you uh paid
1359
00:58:00,079 --> 00:58:04,480
attention to the material and are able
1360
00:58:02,000 --> 00:58:07,520
to answer questions about it we will aim
1361
00:58:04,480 --> 00:58:09,559
to release it on the day of the course
1362
00:58:07,520 --> 00:58:11,599
the day of the actual lecture and it
1363
00:58:09,559 --> 00:58:14,559
will be due at the end of the following
1364
00:58:11,599 --> 00:58:15,960
day of the lecture so um it will be
1365
00:58:14,559 --> 00:58:18,920
three questions it probably shouldn't
1366
00:58:15,960 --> 00:58:20,680
take a whole lot of time but um uh yeah
1367
00:58:18,920 --> 00:58:23,400
so we'll have
1368
00:58:20,680 --> 00:58:26,319
that in terms of assignments assignment
1369
00:58:23,400 --> 00:58:28,640
one is going to be build your own llama
1370
00:58:26,319 --> 00:58:30,200
and so what this is going to look like
1371
00:58:28,640 --> 00:58:32,680
is we're going to give you a partial
1372
00:58:30,200 --> 00:58:34,319
implementation of llama which is kind of
1373
00:58:32,680 --> 00:58:37,960
the most popular open source language
1374
00:58:34,319 --> 00:58:40,160
model nowadays and ask you to fill in um
1375
00:58:37,960 --> 00:58:42,839
ask you to fill in the parts we're going
1376
00:58:40,160 --> 00:58:45,920
to train a very small version of llama
1377
00:58:42,839 --> 00:58:47,319
on a small data set and get it to work
1378
00:58:45,920 --> 00:58:48,880
and the reason why it's very small is
1379
00:58:47,319 --> 00:58:50,480
because the smallest actual version of
1380
00:58:48,880 --> 00:58:53,039
llama is 7 billion
1381
00:58:50,480 --> 00:58:55,359
parameters um and that might be a little
1382
00:58:53,039 --> 00:58:58,400
bit difficult to train with
1383
00:58:55,359 --> 00:59:00,680
resources um for assignment two we're
1384
00:58:58,400 --> 00:59:04,559
going to try to do an NLP task from
1385
00:59:00,680 --> 00:59:06,920
scratch and so the way this will work is
1386
00:59:04,559 --> 00:59:08,520
we're going to give you an assignment
1387
00:59:06,920 --> 00:59:10,880
which we're not going to give you an
1388
00:59:08,520 --> 00:59:13,400
actual data set and instead we're going
1389
00:59:10,880 --> 00:59:15,760
to ask you to uh perform data creation
1390
00:59:13,400 --> 00:59:19,359
modeling and evaluation for a specified
1391
00:59:15,760 --> 00:59:20,640
task and so we're going to tell you uh
1392
00:59:19,359 --> 00:59:22,599
what to do but we're not going to tell
1393
00:59:20,640 --> 00:59:26,400
you exactly how to do it but we're going
1394
00:59:22,599 --> 00:59:29,680
to try to give as concrete directions as
1395
00:59:26,400 --> 00:59:32,359
we can um
1396
00:59:29,680 --> 00:59:34,160
yeah will you be given a parameter limit
1397
00:59:32,359 --> 00:59:36,559
on the model so that's a good question
1398
00:59:34,160 --> 00:59:39,119
or like an expense limit or something
1399
00:59:36,559 --> 00:59:40,440
like that um I maybe actually I should
1400
00:59:39,119 --> 00:59:44,240
take a break from the assignments and
1401
00:59:40,440 --> 00:59:46,520
talk about compute so right now um for
1402
00:59:44,240 --> 00:59:49,319
assignment one we're planning on having
1403
00:59:46,520 --> 00:59:51,599
this be able to be done either on a Mac
1404
00:59:49,319 --> 00:59:53,520
laptop with an M1 or M2 processor which
1405
00:59:51,599 --> 00:59:57,079
I think a lot of people have or Google
1406
00:59:53,520 --> 00:59:59,839
collab um so it should be like
1407
00:59:57,079 --> 01:00:02,160
sufficient to use free computational
1408
00:59:59,839 --> 01:00:03,640
resources that you have for number two
1409
01:00:02,160 --> 01:00:06,079
we'll think about that I think that's
1410
01:00:03,640 --> 01:00:08,280
important we do have Google cloud
1411
01:00:06,079 --> 01:00:11,520
credits for $50 for everybody and I'm
1412
01:00:08,280 --> 01:00:13,440
working to get AWS credits for more um
1413
01:00:11,520 --> 01:00:18,160
but the cloud providers nowadays are
1414
01:00:13,440 --> 01:00:19,680
being very stingy so um so it's uh been
1415
01:00:18,160 --> 01:00:22,160
a little bit of a fight to get uh
1416
01:00:19,680 --> 01:00:23,680
credits but I I it is very important so
1417
01:00:22,160 --> 01:00:28,480
I'm going to try to get as as many as we
1418
01:00:23,680 --> 01:00:31,119
can um and so yeah I I think basically
1419
01:00:28,480 --> 01:00:32,280
uh there will be some sort of like limit
1420
01:00:31,119 --> 01:00:34,480
on the amount of things you can
1421
01:00:32,280 --> 01:00:36,240
practically do and so because of that
1422
01:00:34,480 --> 01:00:39,920
I'm hoping that people will rely very
1423
01:00:36,240 --> 01:00:43,359
heavily on pre-trained models um or uh
1424
01:00:39,920 --> 01:00:46,079
yeah pre-trained models
1425
01:00:43,359 --> 01:00:49,599
and yeah so that that's the the short
1426
01:00:46,079 --> 01:00:52,799
story um the second thing uh the
1427
01:00:49,599 --> 01:00:54,720
assignment three is to do a survey of
1428
01:00:52,799 --> 01:00:57,920
some sort of state-of-the-art research
1429
01:00:54,720 --> 01:01:00,760
and do a reimplementation of
1430
01:00:57,920 --> 01:01:02,000
this and in doing this again you will
1431
01:01:00,760 --> 01:01:03,440
have to think about something that's
1432
01:01:02,000 --> 01:01:06,359
feasible within computational
1433
01:01:03,440 --> 01:01:08,680
constraints um and so you can discuss
1434
01:01:06,359 --> 01:01:11,839
with your TAs about uh the best
1435
01:01:08,680 --> 01:01:13,920
way to do this um and then the final
1436
01:01:11,839 --> 01:01:15,400
project is to perform a unique project
1437
01:01:13,920 --> 01:01:17,559
that either improves on the state-of-the
1438
01:01:15,400 --> 01:01:21,000
art with respect to whatever you would
1439
01:01:17,559 --> 01:01:23,440
like to improve this could be uh
1440
01:01:21,000 --> 01:01:25,280
accuracy for sure this could be
1441
01:01:23,440 --> 01:01:27,760
efficiency
1442
01:01:25,280 --> 01:01:29,599
it could be some sense of
1443
01:01:27,760 --> 01:01:31,520
interpretability but if it's going to be
1444
01:01:29,599 --> 01:01:33,599
something like interpretability you'll
1445
01:01:31,520 --> 01:01:35,440
have to discuss with us what that means
1446
01:01:33,599 --> 01:01:37,240
like how we measure that how we can like
1447
01:01:35,440 --> 01:01:40,839
actually say that you did a good job
1448
01:01:37,240 --> 01:01:42,839
with improving that um another thing
1449
01:01:40,839 --> 01:01:44,680
that you can do is take whatever you
1450
01:01:42,839 --> 01:01:47,280
implemented for assignment 3 and apply
1451
01:01:44,680 --> 01:01:49,039
it to a new task or apply it to a new
1452
01:01:47,280 --> 01:01:50,760
language that has never been examined
1453
01:01:49,039 --> 01:01:53,119
before so these are also acceptable
1454
01:01:50,760 --> 01:01:54,240
final projects but basically the idea is
1455
01:01:53,119 --> 01:01:55,559
for the final project you need to do
1456
01:01:54,240 --> 01:01:57,480
something new that hasn't been
1457
01:01:55,559 --> 01:01:59,880
done before and create new knowledge
1458
01:01:57,480 --> 01:02:04,520
with respect
1459
01:01:59,880 --> 01:02:07,640
to it um so for this the instructor is me
1460
01:02:04,520 --> 01:02:09,920
um I'm uh looking forward to you know
1461
01:02:07,640 --> 01:02:13,599
discussing and working with all of you
1462
01:02:09,920 --> 01:02:16,119
um for TAs we have seven TAs uh two of
1463
01:02:13,599 --> 01:02:18,319
them are in transit so they're not here
1464
01:02:16,119 --> 01:02:22,279
today um the other ones uh Tas would you
1465
01:02:18,319 --> 01:02:22,279
mind coming up uh to introduce
1466
01:02:23,359 --> 01:02:26,359
yourself
1467
01:02:28,400 --> 01:02:32,839
so um yeah nhir and akshai couldn't be
1468
01:02:31,599 --> 01:02:34,039
here today because they're traveling
1469
01:02:32,839 --> 01:02:37,119
I'll introduce them later because
1470
01:02:34,039 --> 01:02:37,119
they're coming uh next
1471
01:02:40,359 --> 01:02:46,480
time cool and what I'd like everybody to
1472
01:02:43,000 --> 01:02:48,680
do is say um like you know what your
1473
01:02:46,480 --> 01:02:53,079
name is uh what
1474
01:02:48,680 --> 01:02:55,799
your like maybe what you're interested
1475
01:02:53,079 --> 01:02:57,319
in um and the reason the goal of this is
1476
01:02:55,799 --> 01:02:59,200
number one for everybody to know who you
1477
01:02:57,319 --> 01:03:00,720
are and number two for everybody to know
1478
01:02:59,200 --> 01:03:03,440
who the best person to talk to is if
1479
01:03:00,720 --> 01:03:03,440
they're interested in
1480
01:03:04,200 --> 01:03:09,079
a particular topic hi uh I'm
1481
01:03:07,000 --> 01:03:15,520
Aila second
1482
01:03:09,079 --> 01:03:15,520
year I work on language and social
1483
01:03:16,200 --> 01:03:24,559
and I'm a second year PhD
1484
01:03:21,160 --> 01:03:26,799
student Grand and Shar with you my research
1485
01:03:24,559 --> 01:03:28,480
is like started at the border of NLP and
1486
01:03:26,799 --> 01:03:31,000
human-computer interaction with a lot of work
1487
01:03:28,480 --> 01:03:32,640
on automating parts of the developer
1488
01:03:31,000 --> 01:03:35,319
experience to make it easier for anyone
1489
01:03:32,640 --> 01:03:35,319
to
1490
01:03:39,090 --> 01:03:42,179
[Music]
1491
01:03:47,520 --> 01:03:53,279
orif
1492
01:03:50,079 --> 01:03:54,680
everyone first
1493
01:03:53,279 --> 01:03:57,119
year
1494
01:03:54,680 --> 01:04:00,119
[Music]
1495
01:03:57,119 --> 01:04:03,559
I don't like updating primar models I
1496
01:04:00,119 --> 01:04:03,559
hope to not update Prim
1497
01:04:14,599 --> 01:04:19,400
models yeah thanks a lot everyone and
1498
01:04:17,200 --> 01:04:19,400
yeah
1499
01:04:20,839 --> 01:04:29,400
thanks and so we will um we'll have people
1500
01:04:25,640 --> 01:04:30,799
uh kind of have office hours uh every ta
1501
01:04:29,400 --> 01:04:32,880
has office hours at a regular time
1502
01:04:30,799 --> 01:04:34,480
during the week uh please feel free to
1503
01:04:32,880 --> 01:04:38,400
come to their office hours or my office
1504
01:04:34,480 --> 01:04:41,960
hours um I think they are visha are they
1505
01:04:38,400 --> 01:04:43,880
posted on the site or okay yeah they
1506
01:04:41,960 --> 01:04:47,240
they either are or will be posted on the
1507
01:04:43,880 --> 01:04:49,720
site very soon um and come by to talk
1508
01:04:47,240 --> 01:04:51,480
about anything uh if there's nobody in
1509
01:04:49,720 --> 01:04:53,079
my office hours I'm happy to talk about
1510
01:04:51,480 --> 01:04:54,599
things that are unrelated but if there's
1511
01:04:53,079 --> 01:04:58,039
lots of people waiting outside I
1512
01:04:54,599 --> 01:05:00,319
might limit it to uh like um just things
1513
01:04:58,039 --> 01:05:02,480
about the class so cool and we have
1514
01:05:00,319 --> 01:05:04,760
Piazza we'll be checking that regularly
1515
01:05:02,480 --> 01:05:06,839
uh striving to get you an answer in 24
1516
01:05:04,760 --> 01:05:12,240
hours on weekdays over weekends we might
1517
01:05:06,839 --> 01:05:16,000
not so um yeah so that's all for today
1518
01:05:12,240 --> 01:05:16,000
are there any questions