1
00:00:00,120 --> 00:00:04,880
everyone, today I'd like to talk about
2
00:00:02,760 --> 00:00:07,399
uh learning from knowledge bases uh
3
00:00:04,880 --> 00:00:11,440
learning from and for knowledge bases
4
00:00:07,399 --> 00:00:14,799
this is kind of a a shift uh from a lot
5
00:00:11,440 --> 00:00:16,480
of the stuff that we've done so far uh
6
00:00:14,799 --> 00:00:18,439
and I'm going to be talking about like a
7
00:00:16,480 --> 00:00:20,480
different information Source some
8
00:00:18,439 --> 00:00:21,960
relatively different algorithms compared
9
00:00:20,480 --> 00:00:26,080
to the stuff that we talked about up
10
00:00:21,960 --> 00:00:28,880
until this point so um you know it might
11
00:00:26,080 --> 00:00:32,360
be uh interesting it might be different
12
00:00:28,880 --> 00:00:35,640
so uh get started with
13
00:00:32,360 --> 00:00:37,360
that so I'm going to be talking about
14
00:00:35,640 --> 00:00:40,000
knowledge bases and knowledge bases are
15
00:00:37,360 --> 00:00:43,039
basically structured databases of
16
00:00:40,000 --> 00:00:46,079
knowledge and they can contain a lot of
17
00:00:43,039 --> 00:00:48,559
things but most commonly when people are
18
00:00:46,079 --> 00:00:50,600
talking about them they are talking
19
00:00:48,559 --> 00:00:53,160
about relational knowledge bases that
20
00:00:50,600 --> 00:00:55,559
include things like entities which are
21
00:00:53,160 --> 00:00:57,399
nodes in a graph and relations which are
22
00:00:55,559 --> 00:01:00,239
edges between
23
00:00:57,399 --> 00:01:02,079
nodes and
24
00:01:00,239 --> 00:01:03,879
I'll I'll talk about some examples of
25
00:01:02,079 --> 00:01:05,479
this in a little bit to make that a
26
00:01:03,879 --> 00:01:08,040
little bit more concrete and then some
27
00:01:05,479 --> 00:01:11,240
of the questions that we ask about these
28
00:01:08,040 --> 00:01:14,400
are how can we learn to create and
29
00:01:11,240 --> 00:01:16,799
expand knowledge bases with uh you know
30
00:01:14,400 --> 00:01:18,439
neural network based methods and then
31
00:01:16,799 --> 00:01:20,200
the second question is how can we learn
32
00:01:18,439 --> 00:01:22,600
from the information in knowledge bases
33
00:01:20,200 --> 00:01:24,720
to improve like neural network models or
34
00:01:22,600 --> 00:01:27,560
uh use them in effective
35
00:01:24,720 --> 00:01:31,479
ways and how can we use uh structured
36
00:01:27,560 --> 00:01:31,479
knowledge to answer questions
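To make the structure just described concrete, here is a minimal sketch of a relational knowledge base: entities as nodes, relations as labeled edges, stored as (head, relation, tail) triples. The facts below are illustrative examples invented for this sketch, not from a real knowledge base.

```python
# Entities are nodes, relations are labeled edges; each fact is one triple.
# These example facts are hand-made for illustration.
triples = {
    ("Pittsburgh", "located_in", "Pennsylvania"),
    ("CMU", "located_in", "Pittsburgh"),
    ("CMU", "instance_of", "university"),
}

def tails(head, relation):
    """All nodes reachable from `head` along an edge labeled `relation`."""
    return {t for h, r, t in triples if h == head and r == relation}

print(tails("CMU", "located_in"))  # → {'Pittsburgh'}
```

Real knowledge bases store millions of such triples, but queries still reduce to this kind of edge traversal.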
37
00:01:32,200 --> 00:01:37,159
so the first uh thing I'd like to talk
38
00:01:35,000 --> 00:01:40,960
about a little bit is types of knowledge
39
00:01:37,159 --> 00:01:43,079
bases and they come in several different
40
00:01:40,960 --> 00:01:46,119
varieties the first one I'd like to talk
41
00:01:43,079 --> 00:01:48,560
about is a very uh classical one called
42
00:01:46,119 --> 00:01:50,960
WordNet has anyone actually ever used
43
00:01:48,560 --> 00:01:53,479
WordNet
44
00:01:50,960 --> 00:01:55,520
before I see at least one person raising
45
00:01:53,479 --> 00:01:57,640
their hand so it's not entirely uh
46
00:01:55,520 --> 00:02:00,119
hasn't entirely disappeared has anyone
47
00:01:57,640 --> 00:02:03,240
heard of WordNet before
48
00:02:00,119 --> 00:02:05,079
okay more more people um so basically
49
00:02:03,240 --> 00:02:06,960
this used to be a really big thing in in
50
00:02:05,079 --> 00:02:10,440
natural language processing it's not so
51
00:02:06,960 --> 00:02:12,319
much anymore um but I want to explain
52
00:02:10,440 --> 00:02:14,800
about it because I want to explain why
53
00:02:12,319 --> 00:02:17,360
this is maybe like less necessary to use
54
00:02:14,800 --> 00:02:19,599
but actual knowledge bases are still
55
00:02:17,360 --> 00:02:23,160
more necessary to
56
00:02:19,599 --> 00:02:26,280
use and so wordnet is a large database
57
00:02:23,160 --> 00:02:29,560
of words and specifically what it does
58
00:02:26,280 --> 00:02:32,720
is each word or something they call a
59
00:02:29,560 --> 00:02:37,120
synset is a node and then there are
60
00:02:32,720 --> 00:02:42,560
relationships between nodes and the
61
00:02:37,120 --> 00:02:44,319
nodes can correspond to nouns um and or
62
00:02:42,560 --> 00:02:45,920
verbs or
63
00:02:44,319 --> 00:02:48,360
adjectives
64
00:02:45,920 --> 00:02:49,959
and nouns have different types of
65
00:02:48,360 --> 00:02:53,360
relations between them so they have
66
00:02:49,959 --> 00:02:56,280
things like an is a relation so like a
67
00:02:53,360 --> 00:03:00,040
hatchback is a type of car there are part
68
00:02:56,280 --> 00:03:02,840
of relations uh where a wheel is a part
69
00:03:00,040 --> 00:03:05,720
of a car um and they also make
70
00:03:02,840 --> 00:03:09,799
distinctions between types and instances
71
00:03:05,720 --> 00:03:12,400
so like Joe Biden is an instance of a
72
00:03:09,799 --> 00:03:16,560
president and president is the
73
00:03:12,400 --> 00:03:19,239
type so um verb relations are ordered by
74
00:03:16,560 --> 00:03:22,680
specificity so like communicate is more
75
00:03:19,239 --> 00:03:25,799
broad than talk so talk is you know
76
00:03:22,680 --> 00:03:27,519
generally a sub class of communicate and
77
00:03:25,799 --> 00:03:30,720
then whisper is generally a subclass of
78
00:03:27,519 --> 00:03:33,159
talk so it's ordered in this way
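The is-a ordering just described can be sketched with a toy hypernym graph. This mini-graph is hand-made for illustration, not real WordNet data (the real resource, available e.g. through NLTK's wordnet corpus, stores synsets rather than single words):

```python
# Toy is-a (hypernym) chains mirroring the examples in the lecture.
hypernym = {
    "hatchback": "car",
    "sedan": "car",
    "car": "vehicle",
    "whisper": "talk",
    "talk": "communicate",
}

def is_a(word, ancestor):
    """Walk up the hypernym chain and check whether `ancestor` is reached."""
    while word in hypernym:
        word = hypernym[word]
        if word == ancestor:
            return True
    return False

# The "find all the cars" task discussed later reduces to this lookup.
mentions = ["hatchback", "whisper", "sedan", "talk"]
cars = [w for w in mentions if is_a(w, "car")]
print(cars)  # → ['hatchback', 'sedan']
```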
79
00:03:30,720 --> 00:03:35,920
and then adjective relations are mostly
80
00:03:33,159 --> 00:03:37,720
antonyms so like wet versus dry
81
00:03:35,920 --> 00:03:43,599
and other things like
82
00:03:37,720 --> 00:03:47,080
this um when I said synsets uh actually
83
00:03:43,599 --> 00:03:50,239
each node is not a word despite the
84
00:03:47,080 --> 00:03:53,239
name WordNet it's a set of words that
85
00:03:50,239 --> 00:03:56,200
all have the same meaning so you might
86
00:03:53,239 --> 00:03:59,120
have artifact and thing would both
87
00:03:56,200 --> 00:04:00,879
correspond to this um node because they
88
00:03:59,120 --> 00:04:02,599
both mean basically the same thing so
89
00:04:00,879 --> 00:04:04,159
it's like sets of synonyms and this is
90
00:04:02,599 --> 00:04:07,599
also important when we talk about other
91
00:04:04,159 --> 00:04:09,920
types of uh knowledge bases as well and
92
00:04:07,599 --> 00:04:13,920
so what was this used for um this was
93
00:04:09,920 --> 00:04:17,160
used for for example uh trying to figure
94
00:04:13,920 --> 00:04:22,400
out whether trying to find all the cars
95
00:04:17,160 --> 00:04:24,440
that were mentioned in like a in a large
96
00:04:22,400 --> 00:04:27,440
set of text so you would go through you
97
00:04:24,440 --> 00:04:30,280
would identify all
98
00:04:27,440 --> 00:04:32,120
synsets or you would identify all words
99
00:04:30,280 --> 00:04:34,120
that corresponded to these synsets and
100
00:04:32,120 --> 00:04:35,720
then you would take a step up and find
101
00:04:34,120 --> 00:04:38,800
motor car and you would know that like
102
00:04:35,720 --> 00:04:42,320
all of those were mentions of cars so
103
00:04:38,800 --> 00:04:45,520
like why don't we use WordNet very much
104
00:04:42,320 --> 00:04:45,520
anymore any
105
00:04:49,160 --> 00:04:52,840
ideas what would you do
106
00:04:51,080 --> 00:04:55,560
instead if I told you find all the cars
107
00:04:52,840 --> 00:04:55,560
in a big piece of
108
00:04:55,960 --> 00:05:00,160
text yeah just do something with the
109
00:04:58,280 --> 00:05:02,880
embedding just do something with
110
00:05:00,160 --> 00:05:04,560
embeddings yeah so you might get um you
111
00:05:02,880 --> 00:05:06,720
might get something and find all things
112
00:05:04,560 --> 00:05:10,360
that were close in embedding space to a
113
00:05:06,720 --> 00:05:10,360
car what what's another thing you might
114
00:05:11,560 --> 00:05:15,520
do like what I would do is I would
115
00:05:13,639 --> 00:05:17,080
download Mistral and say does this
116
00:05:15,520 --> 00:05:19,880
sentence talk about a car and it would
117
00:05:17,080 --> 00:05:22,199
say yes or no and I I would you know or
118
00:05:19,880 --> 00:05:23,479
I would say find all the cars in this uh
119
00:05:22,199 --> 00:05:25,319
that are mentioned in the sentence and
120
00:05:23,479 --> 00:05:28,720
it would get them and sure that's like
121
00:05:25,319 --> 00:05:31,319
expensive but it's really easy so um you
122
00:05:28,720 --> 00:05:32,919
know there are other options that might
123
00:05:31,319 --> 00:05:36,720
be less expensive but that could solve a
124
00:05:32,919 --> 00:05:39,520
lot of the things so WordNet you know
125
00:05:36,720 --> 00:05:41,039
it
126
00:05:39,520 --> 00:05:42,600
started out being very popular in
127
00:05:41,039 --> 00:05:44,039
natural language processing but now it's
128
00:05:42,600 --> 00:05:45,440
less so because we can get a lot of it
129
00:05:44,039 --> 00:05:47,639
from embeddings we can get a lot of it
130
00:05:45,440 --> 00:05:50,520
from language models
131
00:05:47,639 --> 00:05:52,759
themselves um another thing that started
132
00:05:50,520 --> 00:05:55,759
maybe before WordNet or even around the
133
00:05:52,759 --> 00:05:58,840
same time as WordNet was this uh data
134
00:05:55,759 --> 00:06:00,800
base called Cyc and it was a manually
135
00:05:58,840 --> 00:06:04,160
curated database attempting to encode
136
00:06:00,800 --> 00:06:06,280
all common sense knowledge um and the
137
00:06:04,160 --> 00:06:08,759
project itself lasted for about 30 to 40
138
00:06:06,280 --> 00:06:11,840
years it might even still
139
00:06:08,759 --> 00:06:13,319
exist um and so they had this huge uh
140
00:06:11,840 --> 00:06:15,199
like hierarchy of all the different
141
00:06:13,319 --> 00:06:17,680
types of knowledge you could have it
142
00:06:15,199 --> 00:06:19,680
encoded knowledge about like events and
143
00:06:17,680 --> 00:06:21,479
like which events happened before other
144
00:06:19,680 --> 00:06:26,840
events and all these other stuff like
145
00:06:21,479 --> 00:06:29,039
this um but the problem with this is uh
146
00:06:26,840 --> 00:06:31,000
this was just too ambitious basically it
147
00:06:29,039 --> 00:06:35,680
was not possible to encode all of this
148
00:06:31,000 --> 00:06:37,440
manually by hand so people um like it it
149
00:06:35,680 --> 00:06:38,840
did it got part of the way there but
150
00:06:37,440 --> 00:06:40,240
that part of the way there was not
151
00:06:38,840 --> 00:06:42,560
enough for it to be really useful in
152
00:06:40,240 --> 00:06:45,199
practical systems so this sort
153
00:06:42,560 --> 00:06:47,800
of method is not used as frequently
154
00:06:45,199 --> 00:06:51,240
now
155
00:06:47,800 --> 00:06:56,000
um a follow-up one
156
00:06:51,240 --> 00:06:57,479
um which is its successor is now uh the
157
00:06:56,000 --> 00:06:59,879
most widely used knowledge base is
158
00:06:57,479 --> 00:07:03,240
something called DBpedia and the basic
159
00:06:59,879 --> 00:07:06,120
idea behind DBpedia is that while Cyc
160
00:07:03,240 --> 00:07:07,840
is too difficult because they had people
161
00:07:06,120 --> 00:07:12,400
on the Cyc project who would go in and
162
00:07:07,840 --> 00:07:12,400
curate rules um for
163
00:07:13,280 --> 00:07:19,080
machines Wikipedia basically they have a
164
00:07:17,160 --> 00:07:21,080
very very large number of humans
165
00:07:19,080 --> 00:07:23,639
curating this structured data about
166
00:07:21,080 --> 00:07:25,199
entities in the world for humans they're
167
00:07:23,639 --> 00:07:27,879
creating it for humans because then you
168
00:07:25,199 --> 00:07:29,599
can put it on a Wikipedia page and you
169
00:07:27,879 --> 00:07:31,440
can look and see it says Carnegie Mellon
170
00:07:29,599 --> 00:07:34,160
University it has the former names of
171
00:07:31,440 --> 00:07:36,919
Carnegie Mellon um it has the motto of
172
00:07:34,160 --> 00:07:38,759
Carnegie Mellon the type of entity who it
173
00:07:36,919 --> 00:07:41,360
was established by and when and other
174
00:07:38,759 --> 00:07:42,840
stuff like that and because people are
175
00:07:41,360 --> 00:07:44,280
no longer creating it for machines
176
00:07:42,840 --> 00:07:46,280
they're creating it for humans people
177
00:07:44,280 --> 00:07:47,840
are like motivated to do this so like
178
00:07:46,280 --> 00:07:49,960
lots of people will do it for free so
179
00:07:47,840 --> 00:07:51,960
you can actually get a reasonably sized
180
00:07:49,960 --> 00:07:53,639
amount of data from this and actually
181
00:07:51,960 --> 00:07:55,720
cover you know like most of the entities
182
00:07:53,639 --> 00:07:57,080
in the world or not most of the entities
183
00:07:55,720 --> 00:08:00,120
in the world but most of the notable
184
00:07:57,080 --> 00:08:03,319
entities in uh part of the world that
185
00:08:00,120 --> 00:08:03,319
have high participation in
186
00:08:03,479 --> 00:08:09,800
Wikipedia um so now the the thing that a
187
00:08:08,039 --> 00:08:13,319
lot of people use is something called
188
00:08:09,800 --> 00:08:14,919
Wikidata this name is a
189
00:08:13,319 --> 00:08:17,039
little bit of a misnomer because it's
190
00:08:14,919 --> 00:08:18,960
not actually that closely connected to
191
00:08:17,039 --> 00:08:20,639
Wikipedia they extract data from
192
00:08:18,960 --> 00:08:21,720
Wikipedia but they also extract it from
193
00:08:20,639 --> 00:08:24,400
lots of other
194
00:08:21,720 --> 00:08:27,520
sources and this is a curated database
195
00:08:24,400 --> 00:08:30,360
of entities um it's linked it's
196
00:08:27,520 --> 00:08:33,959
extremely large scale and it's
197
00:08:30,360 --> 00:08:38,080
multilingual and um this is an example
198
00:08:33,959 --> 00:08:39,680
of a thing from Richard Feynman um where
199
00:08:38,080 --> 00:08:42,680
people can go in and they can actually
200
00:08:39,680 --> 00:08:45,320
like add information and stuff like that
201
00:08:42,680 --> 00:08:47,440
um and you know it gives information
202
00:08:45,320 --> 00:08:50,959
about education and all kinds of other
203
00:08:47,440 --> 00:08:52,600
stuff so um for fun I can go to the
204
00:08:50,959 --> 00:08:55,040
Wikidata
205
00:08:52,600 --> 00:08:59,360
site does anyone have an entity they'd
206
00:08:55,040 --> 00:08:59,360
like to know more about
207
00:09:01,640 --> 00:09:07,320
any any ideas maybe something that has
208
00:09:03,959 --> 00:09:07,320
been in the news recently
209
00:09:10,680 --> 00:09:16,160
or nobody brave enough to come up with
210
00:09:13,040 --> 00:09:18,360
an entity yeah
211
00:09:16,160 --> 00:09:20,640
Mamba that's a good one I'm actually not
212
00:09:18,360 --> 00:09:23,800
sure if that one's going to be in here
213
00:09:20,640 --> 00:09:27,720
um there's lots of mambas but I don't
214
00:09:23,800 --> 00:09:27,720
know about that particular Mamba let me
215
00:09:27,839 --> 00:09:31,200
see do you want to know about a
216
00:09:29,720 --> 00:09:33,399
different Mamba do you want to know
217
00:09:31,200 --> 00:09:36,040
about Mamba the research
218
00:09:33,399 --> 00:09:38,399
group so Mamba is a research group it's
219
00:09:36,040 --> 00:09:41,800
the Modeling and Analysis for Medicine
220
00:09:38,399 --> 00:09:44,800
research group um it focuses on
221
00:09:41,800 --> 00:09:48,000
mathematical biology and it's in the uh
222
00:09:44,800 --> 00:09:51,120
in this National Center for Scientific
223
00:09:48,000 --> 00:09:52,519
Research in France um the chairperson is
224
00:09:51,120 --> 00:09:55,360
this person and stuff like that so you
225
00:09:52,519 --> 00:10:00,200
can see it has all of these things so
226
00:09:55,360 --> 00:10:03,920
Mamba this Mamba is a node in the graph
227
00:10:00,200 --> 00:10:06,839
and then the edges are pointing um the
228
00:10:03,920 --> 00:10:09,440
edges are labeled with like instance of
229
00:10:06,839 --> 00:10:11,200
and then the next node is research group
230
00:10:09,440 --> 00:10:13,000
so research group is like another node
231
00:10:11,200 --> 00:10:17,120
in the graph and so you can click
232
00:10:13,000 --> 00:10:18,680
through this and it has its own ID and
233
00:10:17,120 --> 00:10:21,200
other things like
234
00:10:18,680 --> 00:10:22,839
this also you'll notice that research
235
00:10:21,200 --> 00:10:24,160
group is translated into lots of
236
00:10:22,839 --> 00:10:27,440
different languages in the world so you
237
00:10:24,160 --> 00:10:30,120
can use it multi multilingually and um
238
00:10:27,440 --> 00:10:33,880
and other things like that
239
00:10:30,120 --> 00:10:37,000
um even minor entities like Graham
240
00:10:33,880 --> 00:10:40,160
Neubig are included in this and it has a
241
00:10:37,000 --> 00:10:42,240
little bit of um like information about
242
00:10:40,160 --> 00:10:45,480
me like my PhD was in Kyoto University
243
00:10:42,240 --> 00:10:45,480
in 2012 I am a
244
00:10:45,600 --> 00:10:52,079
human I I am male uh and first name last
245
00:10:50,519 --> 00:10:53,720
name University teacher computer
246
00:10:52,079 --> 00:10:56,279
scientist natural language processing
247
00:10:53,720 --> 00:10:58,639
this is all right um because this is
248
00:10:56,279 --> 00:11:00,240
mostly hand curated it even has the IDs
249
00:10:58,639 --> 00:11:04,240
of my
250
00:11:00,240 --> 00:11:06,519
advisers um the reason why it has all of
251
00:11:04,240 --> 00:11:09,839
this stuff actually is because like 15
252
00:11:06,519 --> 00:11:12,160
years ago or like 10 years ago I entered
253
00:11:09,839 --> 00:11:14,399
in my uh my information into the
254
00:11:12,160 --> 00:11:16,240
mathematical genealogy project uh which
255
00:11:14,399 --> 00:11:18,880
is this project about who your advisers
256
00:11:16,240 --> 00:11:20,680
were because I wanted to see like who my
257
00:11:18,880 --> 00:11:22,800
mathematical like siblings were and
258
00:11:20,680 --> 00:11:24,519
stuff like that and uh somehow they
259
00:11:22,800 --> 00:11:27,360
managed to pull that out and keep this
260
00:11:24,519 --> 00:11:28,760
like 10 years later so um basically
261
00:11:27,360 --> 00:11:30,519
they're pulling information from like
262
00:11:28,760 --> 00:11:32,800
many many different structured data
263
00:11:30,519 --> 00:11:34,160
sources that they can use so uh they can
264
00:11:32,800 --> 00:11:37,480
pull it in there I don't know where they
265
00:11:34,160 --> 00:11:39,440
got that I'm human uh but maybe that was
266
00:11:37,480 --> 00:11:43,240
inferred from some piece of data
267
00:11:39,440 --> 00:11:44,760
somewhere online or something cool um
268
00:11:43,240 --> 00:11:46,839
another good thing about this that
269
00:11:44,760 --> 00:11:52,680
actually I didn't mention directly in
270
00:11:46,839 --> 00:11:52,680
the um in the lecture note or
271
00:11:54,680 --> 00:12:01,120
slides is that there's a query language
272
00:11:57,360 --> 00:12:04,320
for this yeah and a query language this
273
00:12:01,120 --> 00:12:06,839
query language is called SPARQL so
274
00:12:04,320 --> 00:12:10,680
there's SQL for querying relational
275
00:12:06,839 --> 00:12:14,399
databases and SPARQL is for querying
276
00:12:10,680 --> 00:12:15,240
these uh knowledge bases and let me see
277
00:12:14,399 --> 00:12:18,279
if I
278
00:12:15,240 --> 00:12:22,560
can I asked Chat
279
00:12:18,279 --> 00:12:24,560
GPT to write me a SPARQL query to find
280
00:12:22,560 --> 00:12:26,839
all presidents of Carnegie Mellon
281
00:12:24,560 --> 00:12:31,160
University so let's see if ChatGPT is
282
00:12:26,839 --> 00:12:31,160
capable of doing that um
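For reference, a query along the lines being asked for might look roughly like the sketch below, sent to the public Wikidata SPARQL endpoint from Python. The identifiers are assumptions that would need checking on Wikidata: Q190080 for Carnegie Mellon University and P488 ("chairperson") for the head-of-institution relation; the helper only builds the request URL and makes no network call.

```python
from urllib.parse import urlencode

CMU = "wd:Q190080"    # assumed QID for Carnegie Mellon University
CHAIR = "wdt:P488"    # assumed property ID for "chairperson"

# SPARQL: follow the chairperson edge out of the CMU node, with English labels.
query = f"""
SELECT ?president ?presidentLabel WHERE {{
  {CMU} {CHAIR} ?president .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

def request_url(sparql: str) -> str:
    """Build the GET URL for the Wikidata query service (no network call)."""
    return "https://query.wikidata.org/sparql?" + urlencode(
        {"query": sparql, "format": "json"}
    )

url = request_url(query)
```

Pasting the query text into the Wikidata Query Service web interface is the easiest way to check whether these identifiers are right.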
283
00:12:35,639 --> 00:12:39,680
okay that's a problem let me
284
00:12:41,279 --> 00:12:47,000
see okay there's an error there
285
00:12:43,880 --> 00:12:48,360
but like if uh uh if I could find a I
286
00:12:47,000 --> 00:12:50,160
don't want to waste time in class like
287
00:12:48,360 --> 00:12:52,079
finding a working query but basically
288
00:12:50,160 --> 00:12:53,399
you can put it in a query and it allows
289
00:12:52,079 --> 00:12:56,120
you to do a lot of things that are
290
00:12:53,399 --> 00:13:00,519
similar to what you can do in SQL so you
291
00:12:56,120 --> 00:13:02,720
can find like all of the edges of nodes
292
00:13:00,519 --> 00:13:05,279
that satisfy a particular relation so
293
00:13:02,720 --> 00:13:07,360
you could say I want for Carnegie Mellon
294
00:13:05,279 --> 00:13:10,160
University to find all things that
295
00:13:07,360 --> 00:13:13,519
followed the like president of relation
296
00:13:10,160 --> 00:13:14,959
and that would give me all um you know
297
00:13:13,519 --> 00:13:18,680
all presidents of Carnegie Mellon
298
00:13:14,959 --> 00:13:20,440
University you can also like filter um
299
00:13:18,680 --> 00:13:22,160
filter by their start date and end date
300
00:13:20,440 --> 00:13:24,120
so find all of the presidents between a
301
00:13:22,160 --> 00:13:25,839
certain time and a another time or
302
00:13:24,120 --> 00:13:30,480
things like
303
00:13:25,839 --> 00:13:34,199
that so this is good if you want to get
304
00:13:30,480 --> 00:13:36,600
like high reliability data um
305
00:13:34,199 --> 00:13:39,839
in a scalable way because like if I ask
306
00:13:36,600 --> 00:13:41,920
ChatGPT like one of my favorite um one
307
00:13:39,839 --> 00:13:45,720
of my favorite queries for ChatGPT is
308
00:13:41,920 --> 00:13:48,600
like name all of the
309
00:13:45,720 --> 00:13:51,959
presidents that were born uh east of the
310
00:13:48,600 --> 00:13:53,880
Mississippi River um and I've never
311
00:13:51,959 --> 00:13:56,519
successfully gotten ChatGPT to be able
312
00:13:53,880 --> 00:13:57,800
to do this um because there's lots of
313
00:13:56,519 --> 00:13:59,560
presidents who were born east of the
314
00:13:57,800 --> 00:14:02,320
Mississippi River and it starts counting
315
00:13:59,560 --> 00:14:04,079
them it can't distinguish what position
316
00:14:02,320 --> 00:14:05,639
is east of the Mississippi and what
317
00:14:04,079 --> 00:14:09,120
position is west of the
318
00:14:05,639 --> 00:14:11,279
Mississippi but if you write a uh like a
319
00:14:09,120 --> 00:14:14,759
SPARQL query it's not that hard to do
320
00:14:11,279 --> 00:14:16,480
that so there are um you know there are
321
00:14:14,759 --> 00:14:18,639
certain types of questions especially
322
00:14:16,480 --> 00:14:20,399
information aggregation and complex
323
00:14:18,639 --> 00:14:22,839
relations and stuff that uh language
324
00:14:20,399 --> 00:14:26,600
models are not very good
325
00:14:22,839 --> 00:14:28,120
at cool um so that's kind of an intro to
326
00:14:26,600 --> 00:14:31,240
knowledge bases why you might want to
327
00:14:28,120 --> 00:14:33,759
think about them any questions so far
328
00:14:31,240 --> 00:14:33,759
for
329
00:14:34,759 --> 00:14:39,720
discussion okay um I will move on next
330
00:14:38,320 --> 00:14:41,199
so the next thing I'd like to talk about
331
00:14:39,720 --> 00:14:43,839
is learning representations for
332
00:14:41,199 --> 00:14:45,519
knowledge bases um so knowledge bases
333
00:14:43,839 --> 00:14:48,000
are great but one problem is they're
334
00:14:45,519 --> 00:14:51,040
like inherently
335
00:14:48,000 --> 00:14:55,040
incomplete and even with extremely large
336
00:14:51,040 --> 00:14:58,279
scale uh it becomes impossible to have
337
00:14:55,040 --> 00:15:00,360
them be complete and the reason why is
338
00:14:58,279 --> 00:15:03,639
uh for example in Freebase which
339
00:15:00,360 --> 00:15:05,480
was the predecessor to Wikidata um 71%
340
00:15:03,639 --> 00:15:08,560
of humans didn't have a date of
341
00:15:05,480 --> 00:15:10,560
birth um and probably every human
342
00:15:08,560 --> 00:15:12,079
actually has a date of birth right um
343
00:15:10,560 --> 00:15:15,880
you know we're pretty much guaranteed
344
00:15:12,079 --> 00:15:17,639
for that to be the case so the issue is
345
00:15:15,880 --> 00:15:19,160
like for very famous entities you want
346
00:15:17,639 --> 00:15:21,040
lots of detailed information like you
347
00:15:19,160 --> 00:15:24,000
can know absolutely everything about Joe
348
00:15:21,040 --> 00:15:25,759
Biden or Barack Obama but you know at
349
00:15:24,000 --> 00:15:26,880
the same time for less major entities
350
00:15:25,759 --> 00:15:28,079
you still want them in the knowledge
351
00:15:26,880 --> 00:15:30,079
base but you're not going to be able to
352
00:15:28,079 --> 00:15:31,519
get all that information or should you
353
00:15:30,079 --> 00:15:35,600
for privacy
354
00:15:31,519 --> 00:15:36,680
purposes and so the idea is um for
355
00:15:35,600 --> 00:15:38,079
information that's written on the
356
00:15:36,680 --> 00:15:40,600
internet somewhere can you perform
357
00:15:38,079 --> 00:15:42,759
relation extraction which essentially
358
00:15:40,600 --> 00:15:44,600
allows you to extract this information
359
00:15:42,759 --> 00:15:46,360
and create your own knowledge bases and
360
00:15:44,600 --> 00:15:47,680
stuff like this and this can also be
361
00:15:46,360 --> 00:15:50,079
useful if you want to create it for like
362
00:15:47,680 --> 00:15:52,199
a specialized domain or um or other
363
00:15:50,079 --> 00:15:55,000
stuff like
364
00:15:52,199 --> 00:15:59,519
that so there's a bunch of ways that
365
00:15:55,000 --> 00:16:03,079
people do this um and one kind of
366
00:15:59,519 --> 00:16:06,120
popular way that people have tried to do
367
00:16:03,079 --> 00:16:09,199
relation extraction is through uh
368
00:16:06,120 --> 00:16:12,560
leveraging consistency in embedding
369
00:16:09,199 --> 00:16:15,319
space and so this is the most famous
370
00:16:12,560 --> 00:16:17,959
example from word2vec uh what seems like
371
00:16:15,319 --> 00:16:21,880
ages ago uh in
372
00:16:17,959 --> 00:16:23,920
2013 and in the word2vec paper one of
373
00:16:21,880 --> 00:16:26,279
the big you know exciting things was
374
00:16:23,920 --> 00:16:28,639
essentially they demonstrated that
375
00:16:26,279 --> 00:16:30,120
vectors in embedding space had kind of
376
00:16:28,639 --> 00:16:31,839
a
377
00:16:30,120 --> 00:16:33,160
you know meaning and actually the
378
00:16:31,839 --> 00:16:34,600
vectors in embedding space could
379
00:16:33,160 --> 00:16:37,639
correspond to relations between
380
00:16:34,600 --> 00:16:39,480
embeddings so like uh we would have man
381
00:16:37,639 --> 00:16:41,000
pointing to woman in approximately the
382
00:16:39,480 --> 00:16:42,920
same direction that we had Uncle
383
00:16:41,000 --> 00:16:46,600
pointing to Aunt and King pointing to
384
00:16:42,920 --> 00:16:49,680
Queen and so um then you could do things
385
00:16:46,600 --> 00:16:51,440
like you could take Kings subtract out
386
00:16:49,680 --> 00:16:53,560
the vector that corresponded to
387
00:16:51,440 --> 00:16:58,360
plurality uh add the vector that
388
00:16:53,560 --> 00:17:00,839
corresponded to um you know uh to going
389
00:16:58,360 --> 00:17:04,319
from masculine to feminine words and
390
00:17:00,839 --> 00:17:05,559
then um re-add the vector that
391
00:17:04,319 --> 00:17:07,160
was plural and you'd be able to
392
00:17:05,559 --> 00:17:09,439
identify the plural by just knowing
393
00:17:07,160 --> 00:17:11,000
these two uh vectors the plural of queen
394
00:17:09,439 --> 00:17:14,000
by just knowing those two
395
00:17:11,000 --> 00:17:14,000
vectors
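The analogy arithmetic just described can be sketched with toy vectors. These 2-D vectors are hand-made so the "gender" and "royalty" directions are explicit; real word2vec embeddings are learned from text and only approximately behave this way.

```python
import math

# Hand-made toy embeddings: the second dimension encodes "feminine",
# the first encodes "royalty". Not real word2vec vectors.
emb = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [3.0, 0.0],
    "queen": [3.0, 1.0],
}

def analogy(a, b, c):
    """Return the word closest to emb[b] - emb[a] + emb[c] (a:b :: c:?)."""
    target = [emb[b][i] - emb[a][i] + emb[c][i] for i in range(2)]
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    # Exclude the query words themselves, as word2vec's analogy tool does.
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("man", "woman", "king"))  # → queen
```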
396
00:17:14,160 --> 00:17:21,880
um but it turns out that you can either
397
00:17:18,199 --> 00:17:21,880
learn embeddings
398
00:17:22,720 --> 00:17:28,240
from like uh you can either learn
399
00:17:25,000 --> 00:17:30,400
embeddings from text or you can use the
400
00:17:28,240 --> 00:17:32,039
fact that you have a big knowledge base
401
00:17:30,400 --> 00:17:34,880
that was curated by humans like
402
00:17:32,039 --> 00:17:36,120
Wikidata to improve the embeddings of a
403
00:17:34,880 --> 00:17:39,559
neural model
404
00:17:36,120 --> 00:17:41,799
itself and so another pretty large uh
405
00:17:39,559 --> 00:17:43,600
research area that a lot of people have
406
00:17:41,799 --> 00:17:47,120
focused on is how do you get good
407
00:17:43,600 --> 00:17:48,720
embeddings of a Knowledge Graph and this
408
00:17:47,120 --> 00:17:50,600
is important if you want to do any sort
409
00:17:48,720 --> 00:17:52,799
of like Knowledge Graph Search or other
410
00:17:50,600 --> 00:17:54,160
things like this like for example one of
411
00:17:52,799 --> 00:17:56,799
the really nice things about knowledge
412
00:17:54,160 --> 00:17:58,880
graphs is they have information about a
413
00:17:56,799 --> 00:18:00,200
whole bunch of really sparse entities
414
00:17:58,880 --> 00:18:03,240
that aren't mentioned very much on the
415
00:18:00,200 --> 00:18:05,679
internet for example and so because of
416
00:18:03,240 --> 00:18:07,440
that you can um you can leverage the
417
00:18:05,679 --> 00:18:10,720
knowledge graph structure together with
418
00:18:07,440 --> 00:18:10,720
text to learn better embeddings
419
00:18:11,240 --> 00:18:18,520
overall and so this particular paper is
420
00:18:15,280 --> 00:18:20,960
one example of it um and the way they do
421
00:18:18,520 --> 00:18:23,280
this is they express uh knowledge graph
422
00:18:20,960 --> 00:18:25,919
triples as additive
423
00:18:23,280 --> 00:18:28,480
transformations and they minimize the
424
00:18:25,919 --> 00:18:31,640
distance uh of existing triples with a
425
00:18:28,480 --> 00:18:35,039
margin based loss so the way they do
426
00:18:31,640 --> 00:18:38,240
this is they have the head um and the
427
00:18:35,039 --> 00:18:40,799
tail and l is the vector corresponding
428
00:18:38,240 --> 00:18:42,679
to like the link between the things that
429
00:18:40,799 --> 00:18:47,960
corresponds to a
430
00:18:42,679 --> 00:18:52,159
relation and so you go uh you have H and
431
00:18:47,960 --> 00:18:53,559
T and here um like this is L but here
432
00:18:52,159 --> 00:18:55,640
it's written as r because I got this
433
00:18:53,559 --> 00:18:58,120
from a different paper and basically you
434
00:18:55,640 --> 00:18:59,480
you try to go from H to T um according
435
00:18:58,120 --> 00:19:00,919
to the relation
436
00:18:59,480 --> 00:19:05,120
uh vector
437
00:19:00,919 --> 00:19:07,200
r and you use a hinge loss where um
438
00:19:05,120 --> 00:19:10,039
for the hinge loss you you have a hinge
439
00:19:07,200 --> 00:19:12,640
parameter and then you try to upweight
440
00:19:10,039 --> 00:19:15,760
the example of a true triple and
441
00:19:12,640 --> 00:19:17,960
downweight the example of a of a false
442
00:19:15,760 --> 00:19:19,880
triple so this could be one that was
443
00:19:17,960 --> 00:19:22,080
like randomly sampled to be incorrect
444
00:19:19,880 --> 00:19:22,080
for
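The margin-based objective just described (the TransE-style loss) can be sketched in a few lines: score a triple (h, r, t) by the distance between h + r and t, and push true triples to score lower than corrupted ones by at least a margin. The vectors and facts below are toy values made up for illustration, not trained embeddings.

```python
# Score a triple by ||h + r - t||: small distance means the triple fits.
def distance(h, r, t):
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

def margin_loss(pos, neg, gamma=1.0):
    """Hinge loss: zero once the true triple beats the corrupted one by gamma."""
    return max(0.0, gamma + distance(*pos) - distance(*neg))

# Toy example: a true triple versus a randomly corrupted (false) tail.
h = [1.0, 0.0]          # head entity embedding
r = [0.0, 1.0]          # relation embedding
t_true = [1.0, 1.0]     # tail satisfying h + r ≈ t
t_false = [5.0, 5.0]    # corrupted tail sampled to be incorrect

loss = margin_loss((h, r, t_true), (h, r, t_false))
print(loss)  # 0.0 — the true triple already beats the corrupted one by the margin
```

Training updates the embeddings by gradient descent on this loss, summed over triples in the knowledge graph and their corruptions.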
445
00:19:23,760 --> 00:19:29,080
example um one interesting thing about
446
00:19:26,880 --> 00:19:31,559
knowledge graph embeddings is like a lot
447
00:19:29,080 --> 00:19:33,600
of famous AI researchers got their start
448
00:19:31,559 --> 00:19:36,000
in Knowledge Graph embeddings and so
449
00:19:33,600 --> 00:19:39,760
Richard Socher is one of them if you know
450
00:19:36,000 --> 00:19:44,320
he's the CEO of the you.com search engine now
451
00:19:39,760 --> 00:19:46,679
um and uh this was a first attempt at
452
00:19:44,320 --> 00:19:49,679
predicting relations they basically
453
00:19:46,679 --> 00:19:55,400
created a um MLP that tries to predict
454
00:19:49,679 --> 00:19:58,880
whether a relation exists so they have
455
00:19:55,400 --> 00:20:00,760
a matrix for the left side of the
456
00:19:58,880 --> 00:20:03,320
relation a matrix for the right side of
457
00:20:00,760 --> 00:20:05,080
the relation and then they feed in the
458
00:20:03,320 --> 00:20:07,559
embeddings of each of the entities in
459
00:20:05,080 --> 00:20:08,919
the relation they have a nonlinearity
460
00:20:07,559 --> 00:20:11,799
and then they have another Vector that
461
00:20:08,919 --> 00:20:14,720
tries to predict the um the probability
462
00:20:11,799 --> 00:20:16,679
of the uh actual relation being correct
463
00:20:14,720 --> 00:20:18,960
so you would run this through a sigmoid
464
00:20:16,679 --> 00:20:21,000
and then uh if it was one the relation
465
00:20:18,960 --> 00:20:24,039
was likely to exist, if it was zero then
466
00:20:21,000 --> 00:20:25,480
the relation was likely to not exist and
467
00:20:24,039 --> 00:20:27,799
then they also proposed something called a
468
00:20:25,480 --> 00:20:31,480
neural tensor Network and this adds a
469
00:20:27,799 --> 00:20:34,000
bilinear feature extractor um and so
470
00:20:31,480 --> 00:20:37,440
basically what this is saying is we have
471
00:20:34,000 --> 00:20:40,000
the embedding here the embedding here we
472
00:20:37,440 --> 00:20:41,840
have a matrix and then we calculate the
473
00:20:40,000 --> 00:20:43,080
dot product between the embedding after
474
00:20:41,840 --> 00:20:45,799
transformation it looks a lot like
475
00:20:43,080 --> 00:20:47,720
attention actually in a way um because
476
00:20:45,799 --> 00:20:50,000
we had the bilinear attention so it's
477
00:20:47,720 --> 00:20:53,640
similar to that as well and then we also
478
00:20:50,000 --> 00:20:56,840
have the MLP so this part corresponds to
479
00:20:53,640 --> 00:21:00,320
MLP and then we have a bias
480
00:20:56,840 --> 00:21:02,200
term and um this is a powerful model but
481
00:21:00,320 --> 00:21:05,400
it's a bit overparameterized so
482
00:21:02,200 --> 00:21:08,120
actually later um uh this kind of fell
483
00:21:05,400 --> 00:21:10,360
out of uh favor towards these more
484
00:21:08,120 --> 00:21:14,520
simple models that we're using, uh, kind
485
00:21:10,360 --> 00:21:14,520
of just linear projections between the two.
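The neural tensor network scoring function just described, a bilinear term per tensor slice, plus the plain MLP term over the concatenated embeddings, plus a bias, fed through a nonlinearity and a final output vector, can be written out roughly like this; the dimensions and random parameters are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Sketch of a neural-tensor-network-style scorer:
#   score(e1, e2) = u^T tanh( e1^T W[i] e2  +  V [e1; e2]  +  b )
# with one d x d bilinear matrix per slice i.

rng = np.random.default_rng(0)
d, k = 4, 3                      # embedding dim, number of tensor slices
W = rng.normal(size=(k, d, d))   # bilinear tensor (the part resembling bilinear attention)
V = rng.normal(size=(k, 2 * d))  # the plain MLP part
b = rng.normal(size=k)           # bias term
u = rng.normal(size=k)           # output vector

def ntn_score(e1, e2):
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    mlp = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + mlp + b))
```

Dropping the `W` term recovers the simpler MLP model mentioned earlier; dropping `V` and `b` leaves just the bilinear part.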
486
00:21:17,600 --> 00:21:22,279
So there's um there's a lot of
487
00:21:20,120 --> 00:21:25,320
methods like this these methods are
488
00:21:22,279 --> 00:21:27,039
basically assuming that we have either
489
00:21:25,320 --> 00:21:29,080
Knowledge Graph
490
00:21:27,039 --> 00:21:30,799
embeddings um and we want to learn
491
00:21:29,080 --> 00:21:32,480
relations or they're assuming that we
492
00:21:30,799 --> 00:21:34,320
don't have any information at all about
493
00:21:32,480 --> 00:21:36,840
the knowledge graph and we want to learn
494
00:21:34,320 --> 00:21:40,039
the knowledge graph embeddings themselves
495
00:21:36,840 --> 00:21:42,400
it's been used for both of them but um I
496
00:21:40,039 --> 00:21:44,000
I'd say now it's probably most useful
497
00:21:42,400 --> 00:21:45,520
for learning Knowledge Graph embeddings
498
00:21:44,000 --> 00:21:50,480
if you want to do any sort of Knowledge
499
00:21:45,520 --> 00:21:50,480
Graph based modeling uh which can be
500
00:21:51,240 --> 00:21:55,919
useful um cool any questions about these
501
00:21:57,360 --> 00:22:01,679
ones okay
502
00:21:59,520 --> 00:22:04,360
next um actually this part might be a
503
00:22:01,679 --> 00:22:06,600
little bit simpler than the uh than the
504
00:22:04,360 --> 00:22:09,000
like knowledge graph based approaches so
505
00:22:06,600 --> 00:22:10,960
another method for relation extraction
506
00:22:09,000 --> 00:22:13,440
is learning from text
507
00:22:10,960 --> 00:22:16,120
directly
508
00:22:13,440 --> 00:22:19,080
and the first question about this is how
509
00:22:16,120 --> 00:22:22,200
do you get training data to learn
510
00:22:19,080 --> 00:22:24,480
relation extraction
511
00:22:22,200 --> 00:22:26,720
and so there was this very influential
512
00:22:24,480 --> 00:22:28,279
paper, Distant Supervision for Relation
513
00:22:26,720 --> 00:22:31,120
extraction I would say it's almost one
514
00:22:28,279 --> 00:22:32,880
of the first or certainly one of the
515
00:22:31,120 --> 00:22:34,559
most influential papers on like data
516
00:22:32,880 --> 00:22:35,960
augmentation or synthetic data for
517
00:22:34,559 --> 00:22:38,400
natural language
518
00:22:35,960 --> 00:22:40,440
processing and basically the idea is you
519
00:22:38,400 --> 00:22:44,279
already have a knowledge base that has
520
00:22:40,440 --> 00:22:47,440
some entries in it like Wiki data and so
521
00:22:44,279 --> 00:22:50,919
then given entity-relation-entity
522
00:22:47,440 --> 00:22:52,919
triples um can you extract all text that
523
00:22:50,919 --> 00:22:54,799
matches this particular relation type
524
00:22:52,919 --> 00:22:56,480
and use it to train a relation extractor
525
00:22:54,799 --> 00:22:59,640
a supervised relation
526
00:22:56,480 --> 00:23:01,880
extractor so the way this works
527
00:22:59,640 --> 00:23:04,039
is like let's say we have this is an old
528
00:23:01,880 --> 00:23:06,120
paper so the examples are also old but
529
00:23:04,039 --> 00:23:08,039
um let's say we have Steven Spielberg
530
00:23:06,120 --> 00:23:10,159
being a director of the film Saving
531
00:23:08,039 --> 00:23:12,840
Private Ryan and that's included in our
532
00:23:10,159 --> 00:23:14,840
uh our knowledge base so what it would
533
00:23:12,840 --> 00:23:17,080
do is it would find all sentences that
534
00:23:14,840 --> 00:23:19,400
have Steven Spielberg and Saving Private
535
00:23:17,080 --> 00:23:22,080
Ryan included in them and it would label
536
00:23:19,400 --> 00:23:24,159
this as like a positive example of that
537
00:23:22,080 --> 00:23:28,240
relation so this
538
00:23:24,159 --> 00:23:30,760
is in general often okay, it
539
00:23:28,240 --> 00:23:34,480
works reasonably well but the problem
540
00:23:30,760 --> 00:23:37,200
with this is there are also um negative
541
00:23:34,480 --> 00:23:38,840
examples of this so like for example
542
00:23:37,200 --> 00:23:40,480
here I think the first one is kind of a
543
00:23:38,840 --> 00:23:43,240
negative example for the director
544
00:23:40,480 --> 00:23:45,880
relation because Steven Spielberg's film
545
00:23:43,240 --> 00:23:48,120
Saving Private Ryan doesn't actually
546
00:23:45,880 --> 00:23:50,000
tell you he's the director it just tells
547
00:23:48,120 --> 00:23:52,520
you that he's somehow affiliated with it
548
00:23:50,000 --> 00:23:54,840
he could be the writer or he could be uh
549
00:23:52,520 --> 00:23:57,679
the actor or something else like that
550
00:23:54,840 --> 00:24:00,440
so this is a nice way to create data for
551
00:23:57,679 --> 00:24:03,640
basically free but at the same time uh
552
00:24:00,440 --> 00:24:06,159
you can like create noisy examples and
553
00:24:03,640 --> 00:24:06,159
that can be a problem.
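A minimal sketch of the distant supervision idea just described: given knowledge base triples, every sentence containing both entities gets labeled as a (noisy) positive example of the relation. The triple and sentences below are illustrative, and the naive substring matching is an assumption, which is exactly where the noise comes from.

```python
# Distant supervision: use an existing KB to auto-label text for free.

kb = [("Steven Spielberg", "director_of", "Saving Private Ryan")]

sentences = [
    "Steven Spielberg's film Saving Private Ryan won five Academy Awards.",
    "Steven Spielberg directed Saving Private Ryan in 1998.",
    "Saving Private Ryan is a 1998 war film.",
]

def distant_labels(kb, sentences):
    examples = []
    for head, relation, tail in kb:
        for sent in sentences:
            if head in sent and tail in sent:
                # Noisy: the first sentence matches both entities but never
                # actually states the "director" relation.
                examples.append((sent, head, tail, relation))
    return examples

data = distant_labels(kb, sentences)
```

Here the first matched sentence is a false positive for `director_of`, illustrating why later work had to model this label noise.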
554
00:24:07,159 --> 00:24:14,600
So um there's been a lot of work
555
00:24:11,400 --> 00:24:16,000
about this, um, relation
556
00:24:14,600 --> 00:24:17,840
classification with neural networks
557
00:24:16,000 --> 00:24:20,840
there's a lot of uh different methods
558
00:24:17,840 --> 00:24:23,159
that could be uh doing this most of them
559
00:24:20,840 --> 00:24:24,919
work by extracting features and then
560
00:24:23,159 --> 00:24:27,039
classifying somehow although there are
561
00:24:24,919 --> 00:24:29,960
some uh large language model based
562
00:24:27,039 --> 00:24:33,120
methods now um one one thing about
563
00:24:29,960 --> 00:24:35,440
relation extraction, or kind of like
564
00:24:33,120 --> 00:24:36,799
information extraction in general is
565
00:24:35,440 --> 00:24:38,559
that very often you want to run this
566
00:24:36,799 --> 00:24:40,200
over like a huge Corpus you want to run
567
00:24:38,559 --> 00:24:42,320
it over the whole internet or other
568
00:24:40,200 --> 00:24:45,000
things like that so from that point of
569
00:24:42,320 --> 00:24:47,159
view like I I said I could just ask
570
00:24:45,000 --> 00:24:49,480
Mistral to give me the answer about like
571
00:24:47,159 --> 00:24:52,440
whether cars are included in sentences
572
00:24:49,480 --> 00:24:55,120
but if you want to run you know GPT-4 over
573
00:24:52,440 --> 00:24:56,799
the whole internet that's a pretty big
574
00:24:55,120 --> 00:25:00,159
budget and you might want to reconsider
575
00:24:56,799 --> 00:25:02,440
that. So um there is also
576
00:25:00,159 --> 00:25:04,440
some you know benefit in having cheap
577
00:25:02,440 --> 00:25:07,200
and lightweight
578
00:25:04,440 --> 00:25:09,159
methods so basically what this
579
00:25:07,200 --> 00:25:11,279
particular paper did is it extracted
580
00:25:09,159 --> 00:25:12,760
features and then classified, so it
581
00:25:11,279 --> 00:25:15,600
extracted lexical features of the
582
00:25:12,760 --> 00:25:20,240
entities themselves and features of the
583
00:25:15,600 --> 00:25:22,360
whole span and so like the way uh most
584
00:25:20,240 --> 00:25:26,960
modern methods for this do this is they
585
00:25:22,360 --> 00:25:29,399
basically um extract features from the
586
00:25:26,960 --> 00:25:31,679
first part of the first entity, the
587
00:25:29,399 --> 00:25:33,760
last part of the first entity, the
588
00:25:31,679 --> 00:25:36,360
first part of the second entity, and the
589
00:25:33,760 --> 00:25:37,720
last part of the second entity, and
590
00:25:36,360 --> 00:25:39,600
take all of those embeddings feed them
591
00:25:37,720 --> 00:25:41,440
into like an MLP or something like that
592
00:25:39,600 --> 00:25:44,039
and then make a prediction about whether
593
00:25:41,440 --> 00:25:45,760
that relation exists so if you have an
594
00:25:44,039 --> 00:25:47,840
embedding model this is relatively easy
595
00:25:45,760 --> 00:25:50,360
to do you feed it through like uh
596
00:25:47,840 --> 00:25:51,919
RoBERTa or you feed it through Mistral
597
00:25:50,360 --> 00:25:54,559
and get the embeddings for each of the
598
00:25:51,919 --> 00:25:55,840
tokens and um and then you make a
599
00:25:54,559 --> 00:25:58,840
prediction based on those four
600
00:25:55,840 --> 00:25:58,840
embeddings
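The four-embedding recipe just described might look roughly like this; the random token embeddings and MLP weights are stand-ins for what an encoder such as RoBERTa would give you, and the sizes and relation inventory are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: represent an entity pair by the boundary-token embeddings of each
# entity span, concatenate the four vectors, and feed them through a small
# MLP that produces a logit per relation type.

rng = np.random.default_rng(0)
d, n_relations = 8, 5
tokens = rng.normal(size=(20, d))          # contextual embeddings for one sentence
W1 = rng.normal(size=(16, 4 * d))          # hidden layer of the MLP
W2 = rng.normal(size=(n_relations, 16))    # output layer

def relation_logits(span1, span2):
    (s1, e1), (s2, e2) = span1, span2
    feats = np.concatenate([tokens[s1], tokens[e1], tokens[s2], tokens[e2]])
    hidden = np.maximum(0.0, W1 @ feats)   # ReLU
    return W2 @ hidden

logits = relation_logits((2, 4), (10, 12))  # entity spans given by token indices
predicted = int(np.argmax(logits))
```

This stays cheap enough to run over a very large corpus, which is the point made above about not wanting to call a huge model on the whole internet.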
601
00:26:00,600 --> 00:26:04,840
um the details of that are like not
602
00:26:03,520 --> 00:26:07,320
super important unless you're going to
603
00:26:04,840 --> 00:26:09,279
go in and implement it yourself so you
604
00:26:07,320 --> 00:26:10,919
can um like if you're actually going to
605
00:26:09,279 --> 00:26:12,120
be doing relation extraction obviously
606
00:26:10,919 --> 00:26:14,279
the details are important but I'm
607
00:26:12,120 --> 00:26:16,000
assuming that most people won't be uh
608
00:26:14,279 --> 00:26:19,720
you know doing that as your final
609
00:26:16,000 --> 00:26:21,240
project but um one really interesting
610
00:26:19,720 --> 00:26:22,919
thing that is relevant even if you're
611
00:26:21,240 --> 00:26:26,360
not doing relation
612
00:26:22,919 --> 00:26:29,360
extraction is how you can model noise
613
00:26:26,360 --> 00:26:32,600
because this um as I said they're
614
00:26:29,360 --> 00:26:35,720
creating lots of like semi noisy data
615
00:26:32,600 --> 00:26:38,919
and a lot of the work in getting good
616
00:26:35,720 --> 00:26:40,360
models for relation extraction has been
617
00:26:38,919 --> 00:26:41,799
how do we deal with this distant
618
00:26:40,360 --> 00:26:43,799
supervision noise and I'm just going to
619
00:26:41,799 --> 00:26:45,760
give one example here but there's like a
620
00:26:43,799 --> 00:26:49,120
series of papers after this that also
621
00:26:45,760 --> 00:26:50,600
tried to do similar things so the idea
622
00:26:49,120 --> 00:26:53,600
is that there's noise in the distant
623
00:26:50,600 --> 00:26:56,559
supervision labels um and so we want to
624
00:26:53,600 --> 00:27:01,039
model and mitigate that noise and the
625
00:26:56,559 --> 00:27:03,919
way this paper does this is they have an
626
00:27:01,039 --> 00:27:06,679
encoder and from the encoder you
627
00:27:03,919 --> 00:27:10,960
calculate embeddings and make
628
00:27:06,679 --> 00:27:14,279
predictions and so you have a small set
629
00:27:10,960 --> 00:27:16,080
of like very high quality data and this
630
00:27:14,279 --> 00:27:17,760
small set of very high quality data you
631
00:27:16,080 --> 00:27:19,880
can basically trust that all of the data
632
00:27:17,760 --> 00:27:22,320
is not noisy like maybe it's manually
633
00:27:19,880 --> 00:27:23,720
annotated data and you have like 5,000
634
00:27:22,320 --> 00:27:25,000
examples of it or something like that
635
00:27:23,720 --> 00:27:26,880
and then separately from that you have
636
00:27:25,000 --> 00:27:28,440
like 5 million examples of automatically
637
00:27:26,880 --> 00:27:30,799
labeled data that might be good might
638
00:27:28,440 --> 00:27:32,679
not be good and so what they do is
639
00:27:30,799 --> 00:27:34,200
essentially at the beginning they take
640
00:27:32,679 --> 00:27:36,520
this encoder get embeddings make
641
00:27:34,200 --> 00:27:38,000
predictions over the high quality data
642
00:27:36,520 --> 00:27:40,320
and then they have a separate noise
643
00:27:38,000 --> 00:27:43,440
modeling layer where what this noise
644
00:27:40,320 --> 00:27:46,919
modeling layer does is it has a
645
00:27:43,440 --> 00:27:50,039
transition matrix which says, given that
646
00:27:46,919 --> 00:27:53,279
we made a particular
647
00:27:50,039 --> 00:27:55,159
prediction over classes because this is
648
00:27:53,279 --> 00:27:59,919
essentially a multiclass classification
649
00:27:55,159 --> 00:28:01,519
problem they transform the
650
00:27:59,919 --> 00:28:03,159
sorry I don't remember if they transform
651
00:28:01,519 --> 00:28:04,640
the probabilities or the logits, I
652
00:28:03,159 --> 00:28:07,320
think it's the probabilities but they
653
00:28:04,640 --> 00:28:12,799
transform the probabilities and get a
654
00:28:07,320 --> 00:28:14,720
final uh distribution after noise and so
655
00:28:12,799 --> 00:28:17,399
that means that you can basically smooth
656
00:28:14,720 --> 00:28:19,240
out this uh distribution and account for
657
00:28:17,399 --> 00:28:20,880
the fact that the labels may be noisy or
658
00:28:19,240 --> 00:28:24,399
may may not be
659
00:28:20,880 --> 00:28:26,600
noisy um then they add additional
660
00:28:24,399 --> 00:28:28,559
normalization on this transition Matrix
661
00:28:26,600 --> 00:28:32,440
using something called trace
662
00:28:28,559 --> 00:28:35,840
normalization to move this matrix closer to
663
00:28:32,440 --> 00:28:38,480
the identity matrix, which says that
664
00:28:35,840 --> 00:28:40,720
the predictions are probably not wrong
665
00:28:38,480 --> 00:28:43,159
all the time uh the predictions are
666
00:28:40,720 --> 00:28:45,360
probably correct you know a lot of the
667
00:28:43,159 --> 00:28:46,600
time they're not correct all the time uh
668
00:28:45,360 --> 00:28:49,720
so then you have that Trace
669
00:28:46,600 --> 00:28:51,880
normalization competing with um this uh
670
00:28:49,720 --> 00:28:55,440
trying to give you like a more smooth
671
00:28:51,880 --> 00:28:58,760
distribution and reduce your, uh,
671
00:28:51,880 --> 00:28:58,760
like, reduce your loss. So um I think
673
00:28:58,760 --> 00:29:02,559
this is actually a pretty interesting
674
00:29:00,320 --> 00:29:04,480
idea and it can be used not just for
675
00:29:02,559 --> 00:29:08,600
relation extraction but also in cases
676
00:29:04,480 --> 00:29:08,600
where um you might have noisy labels overall.
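A rough sketch of the transition matrix idea, with illustrative numbers: the model's distribution over true labels is multiplied by a row-stochastic transition matrix (P(observed label | true label)) to get the distribution over noisy observed labels, and a trace penalty is one simple way to write the regularizer that pulls the matrix toward the identity, encoding the assumption that the labels are usually, but not always, correct.

```python
import numpy as np

n_classes = 3
p_clean = np.array([0.7, 0.2, 0.1])   # model's prediction over true labels

T = np.array([[0.8, 0.1, 0.1],        # transition matrix: rows sum to 1
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

p_noisy = p_clean @ T                 # smoothed distribution over observed labels

# Trace penalty: zero when T is the identity, so adding it to the loss
# discourages the model from explaining everything away as label noise.
trace_penalty = n_classes - np.trace(T)
```

Training then balances this penalty against the data likelihood, which is the competition between trace normalization and loss reduction mentioned above.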
677
00:29:08,799 --> 00:29:14,320
Um so are there any questions
678
00:29:12,360 --> 00:29:15,720
about this or any of the things that are
679
00:29:14,320 --> 00:29:18,480
going on
680
00:29:15,720 --> 00:29:20,279
here um even if you're completely
681
00:29:18,480 --> 00:29:21,960
uninterested in relation extraction I'd
682
00:29:20,279 --> 00:29:23,720
encourage you to think about like what
683
00:29:21,960 --> 00:29:26,159
are
684
00:29:23,720 --> 00:29:27,360
some examples of things that you are
685
00:29:26,159 --> 00:29:29,519
interested in where you could get
686
00:29:27,360 --> 00:29:31,840
potentially labels and how could you
687
00:29:29,519 --> 00:29:34,880
denoise there, like that might be uh you
688
00:29:31,840 --> 00:29:34,880
know a thing to think
689
00:29:35,679 --> 00:29:39,919
about. Okay so this was a very very brief
690
00:29:38,320 --> 00:29:42,679
overview of how we create knowledge
691
00:29:39,919 --> 00:29:44,080
bases uh from textual data or from
692
00:29:42,679 --> 00:29:47,159
Knowledge Graph data structured
693
00:29:44,080 --> 00:29:48,840
knowledge graph data. Um so now I'd like to
694
00:29:47,159 --> 00:29:51,519
talk a little bit about how to use
695
00:29:48,840 --> 00:29:53,960
knowledge bases to inform neural
696
00:29:51,519 --> 00:29:56,159
models and there's a bunch of different
697
00:29:53,960 --> 00:29:59,519
ways to do this
698
00:29:56,159 --> 00:30:02,600
um the
699
00:29:59,519 --> 00:30:06,960
the first way um is to
700
00:30:02,600 --> 00:30:09,840
improve embeddings uh
701
00:30:06,960 --> 00:30:11,960
with existing lexicons and this example
702
00:30:09,840 --> 00:30:14,679
is using non-contextual embeddings like
703
00:30:11,960 --> 00:30:16,240
not the not the ones we get from neural
704
00:30:14,679 --> 00:30:17,919
language models but once we get from
705
00:30:16,240 --> 00:30:20,919
just running an embedding model like
706
00:30:17,919 --> 00:30:22,960
word2vec or something like this um and what
707
00:30:20,919 --> 00:30:25,640
they did in this paper is they
708
00:30:22,960 --> 00:30:27,600
essentially um retrofitted embeddings to
709
00:30:25,640 --> 00:30:30,840
existing lexicons by doing post-hoc
710
00:30:27,600 --> 00:30:34,080
transformation of the embeddings so that they
711
00:30:30,840 --> 00:30:36,840
matched the um the knowledge graph or
712
00:30:34,080 --> 00:30:39,080
lexicon better and so the way they did
713
00:30:36,840 --> 00:30:41,880
this is
714
00:30:39,080 --> 00:30:43,720
um they started out with pre-trained
715
00:30:41,880 --> 00:30:45,399
embeddings and they had a double
716
00:30:43,720 --> 00:30:47,240
objective of making the transformed
717
00:30:45,399 --> 00:30:49,120
embeddings close to the neighbors and
718
00:30:47,240 --> 00:30:52,519
close to the original
719
00:30:49,120 --> 00:30:58,840
embedding and the way they did this is
720
00:30:52,519 --> 00:30:58,840
they essentially had um this
721
00:30:59,799 --> 00:31:03,720
this regularization term over here so
722
00:31:01,880 --> 00:31:06,200
this regularization term is basically
723
00:31:03,720 --> 00:31:08,279
saying um I don't want you to move your
724
00:31:06,200 --> 00:31:09,360
embeddings too far away from how they
725
00:31:08,279 --> 00:31:11,679
were
726
00:31:09,360 --> 00:31:14,799
initialized and then at the same time I
727
00:31:11,679 --> 00:31:17,279
would like you to make these uh
728
00:31:14,799 --> 00:31:19,600
embeddings closer to each other if they
729
00:31:17,279 --> 00:31:21,240
are synonyms of each other so they did
730
00:31:19,600 --> 00:31:23,600
this using word net and they basically
731
00:31:21,240 --> 00:31:26,200
took the words uh that were synonyms to
732
00:31:23,600 --> 00:31:28,679
each other in synsets with each other and
733
00:31:26,200 --> 00:31:30,000
they tried to regularize the synonyms to
734
00:31:28,679 --> 00:31:32,120
be closer together but also the
735
00:31:30,000 --> 00:31:33,639
embeddings to be closer to how they
736
00:31:32,120 --> 00:31:35,960
started out.
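The double objective just described, move synonym embeddings toward each other while keeping every embedding near its original pre-trained position, has a simple iterative update. This sketch follows the general retrofitting recipe with toy 2-d vectors and a single synonym pair; the weighting `alpha` and the tiny lexicon are my own illustrative choices.

```python
import numpy as np

# Retrofitting sketch: each word's vector is repeatedly replaced by a
# weighted average of its lexicon neighbors and its original embedding.

vectors = {"happy": np.array([1.0, 0.0]),
           "glad":  np.array([0.0, 1.0]),
           "sad":   np.array([-1.0, 0.0])}
synonyms = {"happy": ["glad"], "glad": ["happy"], "sad": []}

def retrofit(vectors, synonyms, alpha=1.0, iters=10):
    orig = {w: v.copy() for w, v in vectors.items()}
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for w, neighbors in synonyms.items():
            if not neighbors:
                continue  # no lexicon info: the vector stays where it started
            # pull toward neighbors, regularize toward the original embedding
            total = alpha * orig[w] + sum(new[n] for n in neighbors)
            new[w] = total / (alpha + len(neighbors))
    return new

fitted = retrofit(vectors, synonyms)
```

After a few iterations the synonyms end up closer together than they started, while words with no lexicon entries are untouched.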
737
00:31:33,639 --> 00:31:38,799
And there were also examples of
738
00:31:35,960 --> 00:31:40,720
forcing antonyms away from each other so
739
00:31:38,799 --> 00:31:42,480
like if you're um this is a little bit
740
00:31:40,720 --> 00:31:44,799
of an older work so it was working on
741
00:31:42,480 --> 00:31:47,600
non-contextualized embeddings but we
742
00:31:44,799 --> 00:31:49,399
could do something very similar for um
743
00:31:47,600 --> 00:31:52,000
more modern models in like Knowledge
744
00:31:49,399 --> 00:31:55,320
Graph embeddings for example so let's
745
00:31:52,000 --> 00:31:58,960
say we had
746
00:31:55,320 --> 00:32:03,240
um a model that identifies
747
00:31:58,960 --> 00:32:06,600
entities and then different examples of
748
00:32:03,240 --> 00:32:06,600
those entities across different
749
00:32:07,159 --> 00:32:11,480
contexts um let's go back to the wiki
750
00:32:20,639 --> 00:32:26,840
data and so um if we had lots of
751
00:32:23,960 --> 00:32:29,360
examples of Joe Biden um Joe Biden is
752
00:32:26,840 --> 00:32:35,159
referred to in a number of ways like Joe
753
00:32:29,360 --> 00:32:44,440
Biden, Joseph Biden, Joseph R. Biden, um,
754
00:32:35,159 --> 00:32:47,880
JRB I guess, um, POTUS 48, 46 sorry, um, and uh
755
00:32:44,440 --> 00:32:50,799
so you could find different examples of
756
00:32:47,880 --> 00:32:52,799
things that match these strings um and
757
00:32:50,799 --> 00:32:55,360
even do entity linking uh which I'll
758
00:32:52,799 --> 00:32:57,200
I'll talk about in a little bit and then
759
00:32:55,360 --> 00:32:58,760
encourage the embeddings for all of these
760
00:32:57,200 --> 00:33:01,360
different instances to be closer
761
00:32:58,760 --> 00:33:04,039
together to make your model, like,
761
00:33:01,360 --> 00:33:06,799
distinguish them less and ensure that
763
00:33:04,039 --> 00:33:08,399
they uh they get closer embeddings and that
764
00:33:06,799 --> 00:33:11,639
could improve like question answering
765
00:33:08,399 --> 00:33:11,639
look up other stuff like
766
00:33:12,960 --> 00:33:19,880
that
767
00:33:14,919 --> 00:33:23,399
cool um yeah I have a question about
768
00:33:19,880 --> 00:33:25,399
this so what happens if you do like subword
769
00:33:23,399 --> 00:33:28,000
modeling and then you don't have like
770
00:33:25,399 --> 00:33:30,440
the embedding for that entire string
771
00:33:28,000 --> 00:33:32,320
that is supposed to be close yeah what
772
00:33:30,440 --> 00:33:34,279
happens if you do subword modeling and
773
00:33:32,320 --> 00:33:35,480
you don't have the embedding uh you
774
00:33:34,279 --> 00:33:37,159
don't have a single embedding that
775
00:33:35,480 --> 00:33:40,360
corresponds to an entity so that's a
776
00:33:37,159 --> 00:33:42,559
really good question um let me
777
00:33:40,360 --> 00:33:44,240
check I don't think I actually have
778
00:33:42,559 --> 00:33:46,600
these on the slide so I might have to
779
00:33:44,240 --> 00:33:46,600
open a
780
00:33:53,639 --> 00:33:59,720
paper yeah okay so there's a lot of
781
00:33:56,440 --> 00:33:59,720
different ways to handle this
782
00:34:11,520 --> 00:34:18,079
so there there's two papers um the first
783
00:34:14,879 --> 00:34:20,000
paper is uh a really nice paper very
784
00:34:18,079 --> 00:34:22,359
influential on the subject of
785
00:34:20,000 --> 00:34:25,359
co-reference resolution and co-reference
786
00:34:22,359 --> 00:34:27,240
resolution um is essentially trying to
787
00:34:25,359 --> 00:34:30,000
identify when two spans correspond to
788
00:34:27,240 --> 00:34:32,320
each other so like if I say Joe
789
00:34:30,000 --> 00:34:34,359
Biden early in a document and then later
790
00:34:32,320 --> 00:34:35,480
in a document it just says Biden we want
791
00:34:34,359 --> 00:34:38,839
to know that those two things are
792
00:34:35,480 --> 00:34:40,919
referring to each other and then um we
793
00:34:38,839 --> 00:34:42,839
had a paper later where we generalized
794
00:34:40,919 --> 00:34:44,839
this and applied you know very similar
795
00:34:42,839 --> 00:34:48,079
methodology to like lots and lots of
796
00:34:44,839 --> 00:34:50,760
different analysis tasks but I can um I
797
00:34:48,079 --> 00:34:53,839
can show the beginning here and
798
00:34:50,760 --> 00:34:59,320
basically the methodology that they use
799
00:34:53,839 --> 00:35:02,440
here um is they add
800
00:34:59,320 --> 00:35:04,440
a and this is specifically for modeling
801
00:35:02,440 --> 00:35:08,240
spans and getting embeddings out of
802
00:35:04,440 --> 00:35:09,040
spans of uh tokens and what they did is
803
00:35:08,240 --> 00:35:13,079
they
804
00:35:09,040 --> 00:35:14,920
essentially have a model where you take
805
00:35:13,079 --> 00:35:16,440
the thing from the beginning the
806
00:35:14,920 --> 00:35:18,760
embedding from the beginning of the span
807
00:35:16,440 --> 00:35:22,040
the embedding from the end of the span
808
00:35:18,760 --> 00:35:24,280
and the average embedding of all of the
809
00:35:22,040 --> 00:35:26,280
embeddings in the span and that gives
810
00:35:24,280 --> 00:35:27,480
you three vectors for any span right
811
00:35:26,280 --> 00:35:30,160
because you can always get the beginning,
813
00:35:30,160 --> 00:35:36,560
the end, and the mean, and then based on
813
00:35:30,160 --> 00:35:36,560
that they feed that through um like a
814
00:35:33,280 --> 00:35:37,800
neural network and get a new embedding so
815
00:35:36,560 --> 00:35:40,000
they feed that through a transformation
816
00:35:37,800 --> 00:35:42,520
and get a new embedding and so that's the
817
00:35:40,000 --> 00:35:44,200
method that they used and I think our
818
00:35:42,520 --> 00:35:46,640
paper actually has a
819
00:35:44,200 --> 00:35:49,640
better
820
00:35:46,640 --> 00:35:52,640
um a better figure of how you can
821
00:35:49,640 --> 00:35:56,680
actually use that actually maybe it
822
00:35:52,640 --> 00:35:58,160
doesn't okay but anyway um yeah because
823
00:35:56,680 --> 00:36:00,240
uh yeah here's the figure
824
00:35:58,160 --> 00:36:01,520
so then you can use that for a number of
825
00:36:00,240 --> 00:36:03,040
things you could use that to like look
826
00:36:01,520 --> 00:36:06,359
up something in a knowledge base you
827
00:36:03,040 --> 00:36:08,599
could also use that to um decide whether
828
00:36:06,359 --> 00:36:10,440
two spans are co-referent by feeding in
829
00:36:08,599 --> 00:36:12,800
like the first span and the second Span
830
00:36:10,440 --> 00:36:14,960
in and then predicting whether those two
831
00:36:12,800 --> 00:36:19,640
spans correspond to each other or
832
00:36:14,960 --> 00:36:21,240
not so this general idea of modeling
833
00:36:19,640 --> 00:36:22,960
spans and then modeling relations
834
00:36:21,240 --> 00:36:24,520
between the spans allows you to solve
835
00:36:22,960 --> 00:36:26,119
like lots of different tasks like part
836
00:36:24,520 --> 00:36:27,920
of speech tagging or named entity
837
00:36:26,119 --> 00:36:30,319
recognition or relation extraction or
838
00:36:27,920 --> 00:36:31,920
other stuff like that.
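The span representation discussed above, the embedding of the span's first token, its last token, and the mean of all token embeddings in the span, fed through a learned transformation, can be sketched as follows; the token embeddings and projection matrix are random placeholders for what a trained encoder and learned transformation would provide.

```python
import numpy as np

# Span embedding sketch: [first token; last token; mean of span] -> project.

rng = np.random.default_rng(0)
d = 6
tokens = rng.normal(size=(15, d))     # contextual token embeddings
W = rng.normal(size=(d, 3 * d))       # span projection (the learned transformation)

def span_embedding(start, end):
    """Inclusive token span [start, end] -> a single span vector."""
    first, last = tokens[start], tokens[end]
    mean = tokens[start:end + 1].mean(axis=0)
    return W @ np.concatenate([first, last, mean])

v = span_embedding(3, 7)
```

Feeding pairs of such span vectors into a scorer is then enough for coreference, relation extraction, named entity recognition, and the other span-level tasks mentioned.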
839
00:36:30,319 --> 00:36:34,040
actually I realized now that I should
840
00:36:31,920 --> 00:36:35,079
have probably talked about these in the
841
00:36:34,040 --> 00:36:36,560
slides where I was talking about
842
00:36:35,079 --> 00:36:38,599
modeling but that that would be my
843
00:36:36,560 --> 00:36:42,319
recommended way of doing
844
00:36:38,599 --> 00:36:42,319
it cool any other
845
00:36:43,839 --> 00:36:49,480
questions nice okay
846
00:36:46,880 --> 00:36:52,880
um
847
00:36:49,480 --> 00:36:55,119
so another question is how can we inject
848
00:36:52,880 --> 00:36:56,640
knowledge into language models um
849
00:36:55,119 --> 00:36:58,720
there's a bunch of different ways to do
850
00:36:56,640 --> 00:37:03,079
this um
851
00:36:58,720 --> 00:37:05,000
one very easy way is to somehow look up
852
00:37:03,079 --> 00:37:09,640
relevant knowledge in your knowledge
853
00:37:05,000 --> 00:37:09,640
graph and um oh
854
00:37:10,280 --> 00:37:15,440
sorry I was presenting on my own screen
855
00:37:13,040 --> 00:37:18,240
not the screen that everybody can see so
856
00:37:15,440 --> 00:37:22,000
um to look up all of the uh knowledge in
857
00:37:18,240 --> 00:37:24,000
a Knowledge Graph and um somehow provide
858
00:37:22,000 --> 00:37:26,800
it to the model one way you can provide
859
00:37:24,000 --> 00:37:28,720
it to the model is through prompting um
860
00:37:26,800 --> 00:37:32,400
but the problem with with prompting is
861
00:37:28,720 --> 00:37:33,920
that you're not necessarily going to uh
862
00:37:32,400 --> 00:37:37,319
be able
863
00:37:33,920 --> 00:37:41,359
to utilize knowledge that is kind of
864
00:37:37,319 --> 00:37:43,920
like minority knowledge because the
865
00:37:41,359 --> 00:37:47,560
embeddings of the entities that you're
866
00:37:43,920 --> 00:37:49,440
presenting may not be you know like well
867
00:37:47,560 --> 00:37:51,839
learned so
868
00:37:49,440 --> 00:37:53,200
you're requiring essentially the model
869
00:37:51,839 --> 00:37:55,359
to be able to generalize from the
870
00:37:53,200 --> 00:37:57,880
knowledge you provide in
871
00:37:55,359 --> 00:38:00,839
the prompt despite the fact that the
872
00:37:57,880 --> 00:38:02,240
prompt is like minor entities or other
873
00:38:00,839 --> 00:38:07,040
things like that that are not as well
874
00:38:02,240 --> 00:38:10,400
learned so as another um method to
875
00:38:07,040 --> 00:38:13,440
handle this um we previously proposed a
876
00:38:10,400 --> 00:38:15,599
method that allows you
877
00:38:13,440 --> 00:38:18,319
to essentially
878
00:38:15,599 --> 00:38:21,319
predict instead of predicting directly
879
00:38:18,319 --> 00:38:24,920
the words here you can predict a tag
880
00:38:21,319 --> 00:38:27,200
that says birth name or a given name or
881
00:38:24,920 --> 00:38:31,480
family name or something like that and
882
00:38:27,200 --> 00:38:32,839
then post-hoc the model will fill in uh
883
00:38:31,480 --> 00:38:36,720
that like birth
884
00:38:32,839 --> 00:38:39,400
name text based on a knowledge base so
885
00:38:36,720 --> 00:38:41,079
um you know if you have a a Wikipedia
886
00:38:39,400 --> 00:38:44,240
article about Barack Obama that you're
887
00:38:41,079 --> 00:38:48,680
trying to write it could predict um
888
00:38:44,240 --> 00:38:52,040
birth name born uh birth name comma born
889
00:38:48,680 --> 00:38:55,359
in birth date and that's like a very
890
00:38:52,040 --> 00:38:56,880
very common thing in Wikipedia right so
891
00:38:55,359 --> 00:39:00,960
because of that it can predict it very
892
00:38:56,880 --> 00:39:03,160
consistently very uh formulaically and
893
00:39:00,960 --> 00:39:04,599
that allows you to um you know with high
894
00:39:03,160 --> 00:39:06,079
confidence get something that makes
895
00:39:04,599 --> 00:39:08,599
sense and is factual and reduce
896
00:39:06,079 --> 00:39:11,400
hallucination and other stuff like that
897
00:39:08,599 --> 00:39:12,599
so um basically how could you inject
898
00:39:11,400 --> 00:39:14,280
this into language models there's
899
00:39:12,599 --> 00:39:16,240
multiple ways one is prompting that's
900
00:39:14,280 --> 00:39:18,160
maybe the easier way another way is
901
00:39:16,240 --> 00:39:21,520
through like templatic generation like
902
00:39:18,160 --> 00:39:23,200
this where you generate placeholders uh
903
00:39:21,520 --> 00:39:25,200
for all the information you want to add
904
00:39:23,200 --> 00:39:26,480
and then you add the information uh
905
00:39:25,200 --> 00:39:29,359
directly from the knowledge base through
906
00:39:26,480 --> 00:39:29,359
the placeholders, like that.
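A toy sketch of the placeholder-filling step: the model generates tags like `[birth_name]` instead of the literal strings, and a post-hoc pass substitutes the values from the knowledge base. The tag syntax, the entity entry, and the template are illustrative assumptions rather than the paper's exact format.

```python
import re

# Templatic generation sketch: fill KB values into generated placeholders.

kb = {"Barack Obama": {"birth_name": "Barack Hussein Obama II",
                       "birth_date": "August 4, 1961"}}

template = "[birth_name] (born [birth_date]) is an American politician."

def fill(template, entity, kb):
    # Replace each [tag] with the corresponding KB field for the entity.
    return re.sub(r"\[(\w+)\]", lambda m: kb[entity][m.group(1)], template)

text = fill(template, "Barack Obama", kb)
```

Because the surrounding template is highly formulaic, the model can predict it consistently while the factual content comes verbatim from the knowledge base, which is what reduces hallucination.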
907
00:39:30,680 --> 00:39:36,800
Cool, um there's details about this
908
00:39:34,240 --> 00:39:38,920
in the paper like how we um formulate a
909
00:39:36,800 --> 00:39:41,319
training objective for something like
910
00:39:38,920 --> 00:39:43,480
this and the difficulty in formulating a
911
00:39:41,319 --> 00:39:46,400
training objective is that you need to
912
00:39:43,480 --> 00:39:48,280
figure out when you want to replace
913
00:39:46,400 --> 00:39:49,720
things so like you might not always want
914
00:39:48,280 --> 00:39:51,000
to replace with birth name you might
915
00:39:49,720 --> 00:39:53,920
want to replace with given name and
916
00:39:51,000 --> 00:39:55,839
family name and we demonstrate that you
917
00:39:53,920 --> 00:39:58,400
can figure out how to do this by
918
00:39:55,839 --> 00:40:00,960
essentially like marginalizing over the
919
00:39:58,400 --> 00:40:03,520
various ways of uh of doing this but
920
00:40:00,960 --> 00:40:05,880
that's kind of more complex detail
921
00:40:03,520 --> 00:40:05,880
that's in the
922
00:40:08,440 --> 00:40:15,480
paper another really interesting
923
00:40:11,000 --> 00:40:17,319
question um and this is also a
924
00:40:15,480 --> 00:40:19,440
paper that I was involved in from uh
925
00:40:17,319 --> 00:40:22,040
four years ago but I feel like this is
926
00:40:19,440 --> 00:40:25,040
not entirely solved even in like modern
927
00:40:22,040 --> 00:40:26,920
rag systems uh today is how can we
928
00:40:25,040 --> 00:40:28,880
reason over a lot of text that's
929
00:40:26,920 --> 00:40:32,440
included in a knowledge
930
00:40:28,880 --> 00:40:35,839
base um oh sorry reason over a text corpus
931
00:40:32,440 --> 00:40:40,480
like we reason over knowledge bases
932
00:40:35,839 --> 00:40:43,280
and basically uh what we did was we
933
00:40:40,480 --> 00:40:44,960
answered questions using text corpora as
934
00:40:43,280 --> 00:40:48,680
a traceable knowledge
935
00:40:44,960 --> 00:40:52,800
base and we did relevance matching over
936
00:40:48,680 --> 00:40:54,920
mentions um and the way we did this is
937
00:40:52,800 --> 00:40:57,440
we created mention
938
00:40:54,920 --> 00:40:59,480
vectors and the mention vectors are
939
00:40:57,440 --> 00:41:01,720
vectors of all of the mentions in the
940
00:40:59,480 --> 00:41:04,920
knowledge base of particular
941
00:41:01,720 --> 00:41:05,920
entities um and then we retrieved
942
00:41:04,920 --> 00:41:09,599
relevant
943
00:41:05,920 --> 00:41:13,440
mentions um from pre-trained Models uh
944
00:41:09,599 --> 00:41:15,040
so we ran embeddings and generated uh
945
00:41:13,440 --> 00:41:16,000
embeddings for each of the mentions in
946
00:41:15,040 --> 00:41:20,440
the whole
947
00:41:16,000 --> 00:41:25,440
Corpus and based on this let
948
00:41:20,440 --> 00:41:29,119
me find the place over here so based on
949
00:41:25,440 --> 00:41:32,720
this we basically um encoded all of
950
00:41:29,119 --> 00:41:35,040
these uh in here and then we had a dense
951
00:41:32,720 --> 00:41:37,359
query vector and the dense query Vector
952
00:41:35,040 --> 00:41:41,640
was specifically trained so that it
953
00:41:37,359 --> 00:41:44,280
would be able to identify entity
954
00:41:41,640 --> 00:41:46,760
mentions that answered the problem so if
955
00:41:44,280 --> 00:41:50,240
we had like when was The Grateful Dead
956
00:41:46,760 --> 00:41:52,520
and uh Bob Dylan album released uh we
957
00:41:50,240 --> 00:41:54,760
would have Bob Dylan be one vector The
958
00:41:52,520 --> 00:41:56,560
Grateful Dead be another vector and the
959
00:41:54,760 --> 00:41:58,200
model would be specifically trained so
960
00:41:56,560 --> 00:42:00,040
that when you took the entity
961
00:41:58,200 --> 00:42:03,319
embedding of this and matched it with an
962
00:42:00,040 --> 00:42:05,400
entity embedding in this big Corpus of
963
00:42:03,319 --> 00:42:07,920
encoded things here it would be most
964
00:42:05,400 --> 00:42:10,400
likely to return relevant information to
965
00:42:07,920 --> 00:42:13,160
answer these like entity relation
966
00:42:10,400 --> 00:42:14,680
questions so then the question is how do
967
00:42:13,160 --> 00:42:18,040
we train a model like this how do we
968
00:42:14,680 --> 00:42:20,280
train like a dense uh embedding model so
969
00:42:18,040 --> 00:42:21,520
that it gets relevant information for
970
00:42:20,280 --> 00:42:23,800
answering
971
00:42:21,520 --> 00:42:26,920
questions and basically the way we did
972
00:42:23,800 --> 00:42:29,280
this was through weak supervision uh
973
00:42:26,920 --> 00:42:31,640
just like I talked about for relation
974
00:42:29,280 --> 00:42:33,599
extraction in relation extraction we can
975
00:42:31,640 --> 00:42:35,680
create weak supervision by taking a big
976
00:42:33,599 --> 00:42:37,960
existing knowledge base and identifying
977
00:42:35,680 --> 00:42:40,920
all of the sentences where the answer is
978
00:42:37,960 --> 00:42:43,319
included and so what we did is we took
979
00:42:40,920 --> 00:42:45,880
this big existing knowledge base and
980
00:42:43,319 --> 00:42:47,920
said okay what are some of the relations
981
00:42:45,880 --> 00:42:49,800
in the knowledge base one example of a
982
00:42:47,920 --> 00:42:51,559
relation in the knowledge base is Steven
983
00:42:49,800 --> 00:42:54,359
Spielberg is the director of Saving
984
00:42:51,559 --> 00:42:57,319
Private Ryan so we created questions
985
00:42:54,359 --> 00:42:59,119
that said um
986
00:42:57,319 --> 00:43:01,079
was the director of Saving Private Ryan
987
00:42:59,119 --> 00:43:03,920
we can create those with templates uh
988
00:43:01,079 --> 00:43:06,359
easily for many different relations and
989
00:43:03,920 --> 00:43:09,480
then we took the embedding for Saving
990
00:43:06,359 --> 00:43:10,760
Private Ryan in that question and we
991
00:43:09,480 --> 00:43:14,200
tried to
992
00:43:10,760 --> 00:43:17,119
upweight all of the Saving Private Ryan
993
00:43:14,200 --> 00:43:19,680
embeddings over all of Wikipedia where
994
00:43:17,119 --> 00:43:23,160
Steven Spielberg cooccurred in that
995
00:43:19,680 --> 00:43:25,640
sentence so that tries to match um you
996
00:43:23,160 --> 00:43:27,079
know artificially created questions with
997
00:43:25,640 --> 00:43:29,040
sentences that would be the answer
998
00:43:27,079 --> 00:43:31,040
answer to that question and so that
999
00:43:29,040 --> 00:43:32,480
gives you like supervision it gives you
1000
00:43:31,040 --> 00:43:35,079
a lot of data to train over it gives you
1001
00:43:32,480 --> 00:43:38,920
a good model so that that allowed us to
1002
00:43:35,079 --> 00:43:41,319
learn this model well so um this is one
1003
00:43:38,920 --> 00:43:43,160
example of how you can do like rag
1004
00:43:41,319 --> 00:43:46,200
specifically like informed by knowledge
1005
00:43:43,160 --> 00:43:46,200
bases and stuff like
1006
00:43:47,280 --> 00:43:52,160
that um any any questions about this
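As a rough sketch of the dense mention retrieval just described (the toy vectors here are invented for illustration; the real system trains both encoders): every entity mention in the corpus is pre-embedded into an index, and the question's entity embedding is matched against that index by inner product.

```python
# Toy sketch of dense mention retrieval: each mention in the corpus has a
# precomputed embedding, and a query entity embedding is matched against the
# index by inner product. Vectors are invented; in the real system the
# encoders are trained with weak supervision from a knowledge base.
mention_index = {
    "Bob Dylan (in 'Dylan released the album ...')": [0.9, 0.1, 0.0],
    "Grateful Dead (in 'the Grateful Dead played ...')": [0.1, 0.9, 0.1],
    "Chicago (in 'United has a hub in Chicago')": [0.0, 0.1, 0.9],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_vec, index, k=1):
    """Return the top-k mention strings ranked by inner product with the query."""
    ranked = sorted(index.items(), key=lambda kv: dot(query_vec, kv[1]), reverse=True)
    return [mention for mention, _ in ranked[:k]]

# A query embedding that (after training) points at Bob Dylan mentions:
top = retrieve([1.0, 0.0, 0.0], mention_index, k=1)
```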
1007
00:43:53,480 --> 00:43:57,680
or
1008
00:43:55,079 --> 00:44:00,079
okay so another thing that I I'd like to
1009
00:43:57,680 --> 00:44:03,599
go into is uh something we call schema
1010
00:44:00,079 --> 00:44:06,240
free extraction and so if I go back to
1011
00:44:03,599 --> 00:44:09,960
the wiki Data
1012
00:44:06,240 --> 00:44:10,760
Page um Wiki data has something we call
1013
00:44:09,960 --> 00:44:13,599
a
1014
00:44:10,760 --> 00:44:16,880
schema and the schema is basically like
1015
00:44:13,599 --> 00:44:19,640
what are the relations that are included
1016
00:44:16,880 --> 00:44:21,000
in the database so one of the relations
1017
00:44:19,640 --> 00:44:25,079
that's included in the database is
1018
00:44:21,000 --> 00:44:25,079
instance of I guess also
1019
00:44:25,200 --> 00:44:29,040
image lots of images
1020
00:44:29,079 --> 00:44:33,880
um
1021
00:44:30,440 --> 00:44:35,680
signature uh sex or gender country of
1022
00:44:33,880 --> 00:44:38,319
citizenship and these relations are like
1023
00:44:35,680 --> 00:44:41,079
decided a priori by the people who
1024
00:44:38,319 --> 00:44:43,200
created Wiki data um and there's lots
1025
00:44:41,079 --> 00:44:45,880
and lots of them but that doesn't
1026
00:44:43,200 --> 00:44:48,880
necessarily mean
1027
00:44:45,880 --> 00:44:50,400
that like similarly to the problem of
1028
00:44:48,880 --> 00:44:51,839
not having all of the entities we can't
1029
00:44:50,400 --> 00:44:55,119
have all of the relations and just to
1030
00:44:51,839 --> 00:44:57,280
give one example I was um in preparation
1031
00:44:55,119 --> 00:44:59,680
for our large language models lecture I
1032
00:44:57,280 --> 00:45:02,640
actually created some structured data
1033
00:44:59,680 --> 00:45:04,319
about large language models and some of
1034
00:45:02,640 --> 00:45:06,119
the structured data about
1035
00:45:04,319 --> 00:45:09,319
large language models that I created was
1036
00:45:06,119 --> 00:45:11,440
like what is the variety of positional
1037
00:45:09,319 --> 00:45:13,079
embedding that they're using or
1038
00:45:11,440 --> 00:45:15,800
positional embedding variety and
1039
00:45:13,079 --> 00:45:18,720
positional embedding variety is not in
1040
00:45:15,800 --> 00:45:20,359
Wiki data I think um I'd be surprised if
1041
00:45:18,720 --> 00:45:23,200
it was in Wiki data but I think it's not
1042
00:45:20,359 --> 00:45:25,760
in Wiki data um so like as you go down
1043
00:45:23,200 --> 00:45:27,760
to like more esoteric Concepts or like
1044
00:45:25,760 --> 00:45:29,599
specialized domains or stuff like that
1045
00:45:27,760 --> 00:45:31,359
you're almost always guaranteed to not
1046
00:45:29,599 --> 00:45:34,040
you know have all the entities you need
1047
00:45:31,359 --> 00:45:36,680
or not have all the relations you need
1048
00:45:34,040 --> 00:45:38,160
so that's the problem that schema free
1049
00:45:36,680 --> 00:45:39,920
extraction is trying to solve it's
1050
00:45:38,160 --> 00:45:41,680
trying to figure out how we can like
1051
00:45:39,920 --> 00:45:45,920
jointly figure out the schema together
1052
00:45:41,680 --> 00:45:45,920
with uh the information you want to
1053
00:45:48,480 --> 00:45:54,040
extract and the um the most famous
1054
00:45:52,319 --> 00:45:55,599
example of this is something called open
1055
00:45:54,040 --> 00:45:57,200
information extraction in open
1056
00:45:55,599 --> 00:46:01,160
information extraction basically what
1057
00:45:57,200 --> 00:46:04,040
it's saying is um we don't need a schema
1058
00:46:01,160 --> 00:46:06,359
uh there's no there's no schema um the
1059
00:46:04,040 --> 00:46:08,720
only schema that we have is the actual
1060
00:46:06,359 --> 00:46:12,200
text in the sentences that we're
1061
00:46:08,720 --> 00:46:14,520
referring to um the entities so if we
1062
00:46:12,200 --> 00:46:16,040
have United has a Hub in Chicago
1063
00:46:14,520 --> 00:46:17,359
which is the headquarters of United
1064
00:46:16,040 --> 00:46:21,200
Continental
1065
00:46:17,359 --> 00:46:25,880
Holdings um the relation is literally
1066
00:46:21,200 --> 00:46:29,359
has a Hub in um that that's the relation
1067
00:46:25,880 --> 00:46:33,359
um and then for this we have Chicago is
1068
00:46:29,359 --> 00:46:35,559
the headquarters of um but the problem
1069
00:46:33,359 --> 00:46:37,520
with this uh is that this cannot
1070
00:46:35,559 --> 00:46:40,359
abstract away so if we had another
1071
00:46:37,520 --> 00:46:42,000
sentence that said Chicago or United
1072
00:46:40,359 --> 00:46:44,319
Continental Holdings has its
1073
00:46:42,000 --> 00:46:45,720
headquarters in Chicago that would be
1074
00:46:44,319 --> 00:46:49,800
treated as completely different you
1075
00:46:45,720 --> 00:46:49,800
wouldn't be able to like group those two
1076
00:46:51,119 --> 00:46:57,720
together so um in open information
1077
00:46:55,000 --> 00:47:00,079
extraction actually a lot of the methods
1078
00:46:57,720 --> 00:47:02,800
this is one of the few things where
1079
00:47:00,079 --> 00:47:05,480
people still use rule-based systems as
1080
00:47:02,800 --> 00:47:07,640
kind of like uh you know almost
1081
00:47:05,480 --> 00:47:09,319
state-of-the-art systems but basically
1082
00:47:07,640 --> 00:47:11,559
the reason why you're able to do this is
1083
00:47:09,319 --> 00:47:14,440
it's not actually that hard to extract
1084
00:47:11,559 --> 00:47:16,839
kind of the relevant strings between uh
1085
00:47:14,440 --> 00:47:19,599
two entities and so the both the
1086
00:47:16,839 --> 00:47:21,359
Precision and recall are pretty high and
1087
00:47:19,599 --> 00:47:24,079
another reason why people use rule-based
1088
00:47:21,359 --> 00:47:25,760
systems is because they um like you want
1089
00:47:24,079 --> 00:47:27,440
to run it over the whole web and running
1090
00:47:25,760 --> 00:47:29,079
a neural model over the whole web is
1091
00:47:27,440 --> 00:47:32,000
expensive so you can use a rule-based
1092
00:47:29,079 --> 00:47:35,319
model so some examples of this include
1093
00:47:32,000 --> 00:47:37,640
text Runner and Reverb um the basic
1094
00:47:35,319 --> 00:47:41,000
ideas behind them is that you use a
1095
00:47:37,640 --> 00:47:43,720
parser to extract um to do a syntactic
1096
00:47:41,000 --> 00:47:45,760
analysis of the sentence um and extract
1097
00:47:43,720 --> 00:47:47,640
according to rules so for example
1098
00:47:45,760 --> 00:47:50,160
the relation must contain a
1099
00:47:47,640 --> 00:47:52,720
predicate um the subject and object must
1100
00:47:50,160 --> 00:47:56,040
be noun phrases other things like
1101
00:47:52,720 --> 00:47:57,640
this um and then what they did later is
1102
00:47:56,040 --> 00:47:59,240
what they did in this this paper
1103
00:47:57,640 --> 00:48:00,800
arguably this is maybe no longer
1104
00:47:59,240 --> 00:48:02,280
necessary with the compute power we have
1105
00:48:00,800 --> 00:48:04,000
now but they trained an even faster
1106
00:48:02,280 --> 00:48:06,960
model to extract over large amounts of
1107
00:48:04,000 --> 00:48:08,720
data so they basically um use this as a
1108
00:48:06,960 --> 00:48:10,599
su weak supervision and then train a
1109
00:48:08,720 --> 00:48:12,160
model that could do it even faster with
1110
00:48:10,599 --> 00:48:14,680
the sequence base
1111
00:48:12,160 --> 00:48:18,119
model
1112
00:48:14,680 --> 00:48:19,880
um another thing that they did was um
1113
00:48:18,119 --> 00:48:22,280
they aggregated multiple pieces of
1114
00:48:19,880 --> 00:48:24,480
evidence heuristically to find common and
1115
00:48:22,280 --> 00:48:28,760
therefore potentially reliable
1116
00:48:24,480 --> 00:48:28,760
extractions so like
1117
00:48:29,800 --> 00:48:36,960
any piece of text on the internet like
1118
00:48:31,559 --> 00:48:40,200
could be a lie right so um you know
1119
00:48:36,960 --> 00:48:43,400
if I I might write on my blog United has
1120
00:48:40,200 --> 00:48:45,119
a Hub in like Denver or on the other
1121
00:48:43,400 --> 00:48:48,240
hand
1122
00:48:45,119 --> 00:48:50,839
um wait a second
1123
00:48:48,240 --> 00:48:52,680
right something has a Hub in Denver
1124
00:48:50,839 --> 00:48:54,960
but United has a Hub in Pittsburgh is
1125
00:48:52,680 --> 00:48:58,040
definitely wrong so let's uh let's go
1126
00:48:54,960 --> 00:49:00,000
with that um uh so somebody could write
1127
00:48:58,040 --> 00:49:02,359
that on the internet and in fact because
1128
00:49:00,000 --> 00:49:06,440
I just said it it's probably in YouTube
1129
00:49:02,359 --> 00:49:09,119
comments somewhere but um uh
1130
00:49:06,440 --> 00:49:10,760
like any any piece of information on the
1131
00:49:09,119 --> 00:49:13,079
internet could be wrong so basically
1132
00:49:10,760 --> 00:49:16,680
they had um heuristic methods to filter
1133
00:49:13,079 --> 00:49:19,559
these out and usually these were
1134
00:49:16,680 --> 00:49:21,559
frequency based so it's like um if both
1135
00:49:19,559 --> 00:49:23,520
United and Pittsburgh are very common
1136
00:49:21,559 --> 00:49:26,000
but it's very rare for somebody to
1137
00:49:23,520 --> 00:49:27,799
say United has a Hub in Pittsburgh then
1138
00:49:26,000 --> 00:49:29,200
that means it's statistically unlikely
1139
00:49:27,799 --> 00:49:30,799
for this to be correct because if it
1140
00:49:29,200 --> 00:49:33,280
were correct we'd expect to see it much
1141
00:49:30,799 --> 00:49:36,799
more frequently so um those were the
1142
00:49:33,280 --> 00:49:36,799
kind of things that they they did
1143
00:49:37,520 --> 00:49:44,440
here there's also some neural models for
1144
00:49:40,400 --> 00:49:46,839
open IE um I I think these are uh used
1145
00:49:44,440 --> 00:49:48,440
maybe a little bit less often um but
1146
00:49:46,839 --> 00:49:52,559
basically heuristics are still not
1147
00:49:48,440 --> 00:49:55,280
perfect and so the problem
1148
00:49:52,559 --> 00:49:56,720
with um like not relying on heuristics
1149
00:49:55,280 --> 00:49:58,880
is you need to get training data from
1150
00:49:56,720 --> 00:50:01,880
somewhere so there's a rather clever
1151
00:49:58,880 --> 00:50:03,599
paper um and again if you're not
1152
00:50:01,880 --> 00:50:05,119
interested in relation extraction in
1153
00:50:03,599 --> 00:50:07,559
particular I think this is one thing
1154
00:50:05,119 --> 00:50:10,000
that's still worth paying attention to
1155
00:50:07,559 --> 00:50:12,680
um which is
1156
00:50:10,000 --> 00:50:14,559
they demonstrated that it's possible to
1157
00:50:12,680 --> 00:50:16,319
create relatively large data sets by
1158
00:50:14,559 --> 00:50:18,160
asking people simple
1159
00:50:16,319 --> 00:50:21,440
questions
1160
00:50:18,160 --> 00:50:24,480
and in particular they wanted to
1161
00:50:21,440 --> 00:50:27,119
get relation extraction data sets that
1162
00:50:24,480 --> 00:50:30,799
are like um
1163
00:50:27,119 --> 00:50:34,200
who finished something like UCD finished
1164
00:50:30,799 --> 00:50:37,760
the 2006 championships and if you
1165
00:50:34,200 --> 00:50:40,720
ask people like okay select this span um
1166
00:50:37,760 --> 00:50:44,559
select the entity span the relations
1167
00:50:40,720 --> 00:50:46,160
span and the second entity the
1168
00:50:44,559 --> 00:50:49,079
head entity the relation and the tail
1169
00:50:46,160 --> 00:50:51,839
entity select it on this interface and
1170
00:50:49,079 --> 00:50:54,200
then uh tell me is it this relation or
1171
00:50:51,839 --> 00:50:55,640
this relation or this relation that's
1172
00:50:54,200 --> 00:50:58,160
actually pretty hard and getting like
1173
00:50:55,640 --> 00:51:01,280
crowd workers to start learning how to
1174
00:50:58,160 --> 00:51:03,280
do that task is a bit tricky and it
1175
00:51:01,280 --> 00:51:06,400
takes some you know it takes some time
1176
00:51:03,280 --> 00:51:07,799
to get them onboarded basically um but
1177
00:51:06,400 --> 00:51:09,760
basically what they said is instead
1178
00:51:07,799 --> 00:51:11,359
we'll just ask them questions where the
1179
00:51:09,760 --> 00:51:14,240
answer to the question basically gives
1180
00:51:11,359 --> 00:51:17,160
us the answer to what the relation is so
1181
00:51:14,240 --> 00:51:20,319
they ask like who finished something and
1182
00:51:17,160 --> 00:51:23,680
the answer is like UCD and um what did
1183
00:51:20,319 --> 00:51:25,359
someone finish the 2006 Championship
1184
00:51:23,680 --> 00:51:28,920
what did someone finish
1185
00:51:25,359 --> 00:51:31,760
something as and basically um in doing
1186
00:51:28,920 --> 00:51:33,319
this they created uh something called
1187
00:51:31,760 --> 00:51:34,359
semantic roles which we're actually
1188
00:51:33,319 --> 00:51:35,960
probably going to talk about a little
1189
00:51:34,359 --> 00:51:37,559
bit later but you can take the semantic
1190
00:51:35,960 --> 00:51:41,200
roles and then you can use them to
1191
00:51:37,559 --> 00:51:43,920
annotate uh relation extraction data and
1192
00:51:41,200 --> 00:51:46,720
then they trained a supervised neural
1193
00:51:43,920 --> 00:51:46,720
tagger for
1194
00:51:48,799 --> 00:51:53,480
this
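Going back to open information extraction, the rule-based idea — take the literal text between two entity noun phrases as the relation, then trust extractions that recur — can be sketched as below. This is a toy heuristic, not TextRunner or ReVerb themselves; the real systems use a syntactic parse rather than a capitalization regex.

```python
# Toy open-IE heuristic (not the actual TextRunner/ReVerb code): treat
# capitalized spans as entity noun phrases, take the literal text between them
# as the relation string, and count repeated extractions so frequent triples
# can be trusted over one-off (possibly wrong) ones.
import re
from collections import Counter

ENTITY = r"((?:[A-Z][a-z]+ ?)+)"

def extract(sentence):
    """Return a (head, relation-as-text, tail) triple, or None."""
    m = re.search(ENTITY + r" (.+?) " + ENTITY, sentence)
    return tuple(s.strip() for s in m.groups()) if m else None

counts = Counter()
corpus = [
    "United has a hub in Chicago",
    "United has a hub in Chicago",
    "United has a hub in Pittsburgh",  # rare extraction -> less trusted
]
for sentence in corpus:
    triple = extract(sentence)
    if triple:
        counts[triple] += 1
```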
1195
00:51:50,480 --> 00:51:56,040
cool um so another thing I'd like to
1196
00:51:53,480 --> 00:51:57,880
talk about is I talked about learning um
1197
00:51:56,040 --> 00:51:59,920
information about entities from entity
1198
00:51:57,880 --> 00:52:02,079
embeddings but you can actually learn
1199
00:51:59,920 --> 00:52:04,520
information about relations from
1200
00:52:02,079 --> 00:52:07,680
relation information about other
1201
00:52:04,520 --> 00:52:12,359
relations and this can help solve the
1202
00:52:07,680 --> 00:52:16,119
problem um of like essentially the fact
1203
00:52:12,359 --> 00:52:18,760
that open IE is not able to abstract and
1204
00:52:16,119 --> 00:52:20,680
generalize so word embeddings or entity
1205
00:52:18,760 --> 00:52:23,079
embeddings give information of the word
1206
00:52:20,680 --> 00:52:26,920
in context um which can be indicative
1207
00:52:23,079 --> 00:52:29,640
for knowledge uh knowledge bases
1208
00:52:26,920 --> 00:52:32,640
but other relations or combinations
1209
00:52:29,640 --> 00:52:34,960
thereof are also indicative of them and
1210
00:52:32,640 --> 00:52:36,960
um if anybody is familiar with graphs or
1211
00:52:34,960 --> 00:52:39,520
graph processing there's the whole idea
1212
00:52:36,960 --> 00:52:41,400
of um link prediction where you're given
1213
00:52:39,520 --> 00:52:42,680
like a a small number of links in a
1214
00:52:41,400 --> 00:52:45,760
graph and you want to predict what other
1215
00:52:42,680 --> 00:52:50,559
links are likely to uh
1216
00:52:45,760 --> 00:52:52,920
exist and like as I said um a lot of uh
1217
00:52:50,559 --> 00:52:54,839
you know very prominent AI researchers
1218
00:52:52,920 --> 00:52:57,440
got their start in uh relation
1219
00:52:54,839 --> 00:53:01,480
extraction and uh Sutskever is another one
1220
00:52:57,440 --> 00:53:04,319
of them actually um and uh basically
1221
00:53:01,480 --> 00:53:07,880
this 2009 paper proposed to use tensor
1222
00:53:04,319 --> 00:53:09,400
decomposition to do uh induction of
1223
00:53:07,880 --> 00:53:13,520
relations
1224
00:53:09,400 --> 00:53:15,319
and the way it worked is um you model
1225
00:53:13,520 --> 00:53:18,400
relations by decomposing a tensor
1226
00:53:15,319 --> 00:53:21,599
containing entity relation entity tuples
1227
00:53:18,400 --> 00:53:24,000
so you have the left entity the right
1228
00:53:21,599 --> 00:53:27,160
entity and whether the relation exists
1229
00:53:24,000 --> 00:53:31,319
is this big um uh big tensor in the
1230
00:53:27,160 --> 00:53:33,160
middle where these are embeddings of the
1231
00:53:31,319 --> 00:53:35,760
left entity these are embeddings of the
1232
00:53:33,160 --> 00:53:38,839
right entity and then the the depth of
1233
00:53:35,760 --> 00:53:40,680
the tensor is like which relations exist
1234
00:53:38,839 --> 00:53:43,760
and so we know that some exist so we
1235
00:53:40,680 --> 00:53:46,640
give them a one we know others
1236
00:53:43,760 --> 00:53:48,680
don't exist so we give them a zero um
1237
00:53:46,640 --> 00:53:51,040
and then we do a low rank approximation
1238
00:53:48,680 --> 00:53:52,559
of this tensor and if we do a low rank
1239
00:53:51,040 --> 00:53:55,720
approximation of the tensor we have
1240
00:53:52,559 --> 00:53:57,280
reconstruction error basically so when we
1241
00:53:55,720 --> 00:53:59,960
reconstruct there are some things that
1242
00:53:57,280 --> 00:54:01,960
were previously zero become one and so
1243
00:53:59,960 --> 00:54:04,760
the things that were previously zero and
1244
00:54:01,960 --> 00:54:07,880
then become close to one are the ones
1245
00:54:04,760 --> 00:54:10,559
that we think like actually might exist
1246
00:54:07,880 --> 00:54:12,000
they might be real um they might be real
1247
00:54:10,559 --> 00:54:13,640
relations that we were just missing
1248
00:54:12,000 --> 00:54:16,599
because our previous knowledge base was
1249
00:54:13,640 --> 00:54:16,599
in fact
1250
00:54:18,640 --> 00:54:26,880
incomplete and um one thing that takes
1251
00:54:21,799 --> 00:54:28,559
us a step further is uh what if if we
1252
00:54:26,880 --> 00:54:30,079
actually do have a knowledge base or
1253
00:54:28,559 --> 00:54:31,839
what if we even have multiple knowledge
1254
00:54:30,079 --> 00:54:35,520
bases like what if we have Wiki data and
1255
00:54:31,839 --> 00:54:36,640
we have wordnet and we have um uh other
1256
00:54:35,520 --> 00:54:38,920
things like
1257
00:54:36,640 --> 00:54:40,680
this and in addition to that we also
1258
00:54:38,920 --> 00:54:43,400
have open IE
1259
00:54:40,680 --> 00:54:45,960
extractions so there's an idea of
1260
00:54:43,400 --> 00:54:47,880
something called Universal schema and
1261
00:54:45,960 --> 00:54:50,200
what Universal schema do is they embed
1262
00:54:47,880 --> 00:54:55,119
relations from multiple schema or
1263
00:54:50,200 --> 00:54:56,960
schemata in the same space and based on
1264
00:54:55,119 --> 00:54:59,559
this they then
1265
00:54:56,960 --> 00:55:01,359
predict which ones are likely to
1266
00:54:59,559 --> 00:55:04,400
exist or which ones are not likely to
1267
00:55:01,359 --> 00:55:06,680
exist so here we might have a free base
1268
00:55:04,400 --> 00:55:08,640
or Wiki data we might have another uh
1269
00:55:06,680 --> 00:55:11,559
kind of relation extraction data set
1270
00:55:08,640 --> 00:55:15,480
called Tac and then on the training data
1271
00:55:11,559 --> 00:55:17,040
set we have um like all of these uh
1272
00:55:15,480 --> 00:55:20,240
things that are like positive or
1273
00:55:17,040 --> 00:55:23,960
negative or something like this and then
1274
00:55:20,240 --> 00:55:26,960
on the heldout data set we have only
1275
00:55:23,960 --> 00:55:29,480
information about like open IE relations
1276
00:55:26,960 --> 00:55:30,920
for example so um for all of the
1277
00:55:29,480 --> 00:55:33,079
entities that exist in the knowledge
1278
00:55:30,920 --> 00:55:34,839
base we know you know whether the
1279
00:55:33,079 --> 00:55:36,039
relations exist for them but for all the
1280
00:55:34,839 --> 00:55:39,640
entities that don't exist in the
1281
00:55:36,039 --> 00:55:41,760
database we don't know and so uh then
1282
00:55:39,640 --> 00:55:43,839
just from the existence of open IE
1283
00:55:41,760 --> 00:55:45,480
relations or non-existence of open IE
1284
00:55:43,839 --> 00:55:47,920
relations we can predict that other
1285
00:55:45,480 --> 00:55:49,359
relations might exist for example so
1286
00:55:47,920 --> 00:55:51,079
this is a great way to combine the two
1287
00:55:49,359 --> 00:55:53,920
together like open IE you can run it
1288
00:55:51,079 --> 00:55:55,880
over you know very large data sets um
1289
00:55:53,920 --> 00:55:58,000
but it doesn't have a good schema
1290
00:55:55,880 --> 00:56:00,400
uh Wiki data has a good schema but you
1291
00:55:58,000 --> 00:56:02,960
can't you know it's all manually created
1292
00:56:00,400 --> 00:56:04,720
so you can suggest other ones and one
1293
00:56:02,960 --> 00:56:07,960
other like interesting thing is you can
1294
00:56:04,720 --> 00:56:09,640
suggest other um things that might exist
1295
00:56:07,960 --> 00:56:13,039
in Wiki data but you could also track
1296
00:56:09,640 --> 00:56:15,039
that back to the original text that
1297
00:56:13,039 --> 00:56:17,000
indicated that it might exist in Wiki
1298
00:56:15,039 --> 00:56:18,720
data so then you could have a human go
1299
00:56:17,000 --> 00:56:20,520
back and check it to make sure that
1300
00:56:18,720 --> 00:56:24,200
that's actually true and trustworthy and
1301
00:56:20,520 --> 00:56:24,200
other things like that
1302
00:56:26,400 --> 00:56:31,400
cool um so if you like uh you like
1303
00:56:29,400 --> 00:56:33,160
tensors or you like linear algebra or
1304
00:56:31,400 --> 00:56:34,720
things like this this is maybe something
1305
00:56:33,160 --> 00:56:37,880
that you could take a look at and think
1306
00:56:34,720 --> 00:56:40,240
a little bit more about um any any
1307
00:56:37,880 --> 00:56:40,240
questions
1308
00:56:42,799 --> 00:56:46,240
here okay
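The low-rank intuition behind both the tensor decomposition and the universal schema idea can be shown with a tiny matrix completion example (the scores here are invented): if the entity-pair-by-relation matrix is approximately low rank — in this toy case exactly rank 1 — a missing cell is pinned down by the observed cells around it.

```python
# Tiny matrix-completion sketch of the universal schema intuition: rows are
# entity pairs, columns are relations from different schemas (a KB relation
# and an open-IE pattern). If the matrix is rank 1, M[i][j] = r[i] * c[j],
# so a missing cell follows from observed cells. (All scores are invented.)
pairs = ["(United, Chicago)", "(Delta, Atlanta)"]
relations = ["hub_in (KB)", "'has a hub in' (open IE)"]
M = [
    [0.9, 0.8],   # (United, Chicago): observed under both schemas
    [0.9, None],  # (Delta, Atlanta): only the KB relation observed
]

def complete_rank1(M, i, j, ref_row, ref_col):
    """Fill M[i][j] under a rank-1 assumption, using fully observed references."""
    return M[i][ref_col] * M[ref_row][j] / M[ref_row][ref_col]

pred = complete_rank1(M, 1, 1, ref_row=0, ref_col=0)  # 0.9 * 0.8 / 0.9
```

Real systems replace this exact rank-1 identity with a learned low-rank factorization, so reconstruction is approximate rather than closed-form.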
1309
00:56:46,880 --> 00:56:53,680
cool um so another thing I'd like to
1310
00:56:50,640 --> 00:56:56,920
talk about is uh modeling relation paths
1311
00:56:53,680 --> 00:57:00,359
so this is a really nice uh idea
1312
00:56:56,920 --> 00:57:00,359
which is you
1313
00:57:00,440 --> 00:57:05,000
can make inferences across multiple hops
1314
00:57:04,240 --> 00:57:08,400
of
1315
00:57:05,000 --> 00:57:12,280
relations um based on uh particular
1316
00:57:08,400 --> 00:57:14,200
relations existing and so um multi-step
1317
00:57:12,280 --> 00:57:17,280
paths can be informative for indicating
1318
00:57:14,200 --> 00:57:20,000
whether individual relations exist so um
1319
00:57:17,280 --> 00:57:24,400
for example uh given a word given a
1320
00:57:20,000 --> 00:57:27,960
particular word in a paper title
1321
00:57:24,400 --> 00:57:29,880
recommend a venue in which to publish the paper
1322
00:57:27,960 --> 00:57:32,559
and so this is the the problem that they
1323
00:57:29,880 --> 00:57:36,079
were trying to solve and then basically
1324
00:57:32,559 --> 00:57:38,440
you have a word um you
1325
00:57:36,079 --> 00:57:41,119
find if you have that word in your paper
1326
00:57:38,440 --> 00:57:42,920
title you then find other papers that
1327
00:57:41,119 --> 00:57:45,280
have that title uh that have that word
1328
00:57:42,920 --> 00:57:48,359
in their title and those papers are in a
1329
00:57:45,280 --> 00:57:52,039
journal and that gets a high weight with
1330
00:57:48,359 --> 00:57:54,119
respect to like your paper being
1331
00:57:52,039 --> 00:57:56,839
you know relevant to that particular
1332
00:57:54,119 --> 00:57:59,880
Journal you can also say
1333
00:57:56,839 --> 00:58:01,000
okay I have a word find papers with
1334
00:57:59,880 --> 00:58:03,240
that word in the
1335
00:58:01,000 --> 00:58:07,240
title find the first author of that
1336
00:58:03,240 --> 00:58:09,280
paper find another paper uh that had
1337
00:58:07,240 --> 00:58:11,599
that author as a first author and then
1338
00:58:09,280 --> 00:58:13,240
find the Journal of it and they
1339
00:58:11,599 --> 00:58:15,839
demonstrate a way where you can like
1340
00:58:13,240 --> 00:58:18,280
expand these paths and feed them into a
1341
00:58:15,839 --> 00:58:22,400
prediction model and use that to predict
1342
00:58:18,280 --> 00:58:25,480
um you know additional relations so
1343
00:58:22,400 --> 00:58:26,680
unlike this method here this method was
1344
00:58:25,480 --> 00:58:29,240
saying like
1345
00:58:26,680 --> 00:58:30,920
other single relations are indicative of
1346
00:58:29,240 --> 00:58:34,160
a particular relation
1347
00:58:30,920 --> 00:58:36,880
existing this paper is saying not just
1348
00:58:34,160 --> 00:58:38,720
individual relations are indicative of
1349
00:58:36,880 --> 00:58:40,640
another relation existing but actually
1350
00:58:38,720 --> 00:58:43,839
relation paths are indicative of a
1351
00:58:40,640 --> 00:58:46,400
relation existing so this is more um
1352
00:58:43,839 --> 00:58:46,400
expressive
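A relation path like the paper-venue example above can be sketched as a simple graph traversal (the edges here are invented; path-ranking methods additionally learn a weight for each path type as a feature for predicting the target relation).

```python
# Toy relation-path traversal: follow a path of relation types edge by edge,
# e.g. word -has_word_inv-> papers containing the word -in_journal-> venues.
# Reachable targets become evidence for a candidate relation such as
# "relevant venue for this word". (Edges are invented illustrative data.)
edges = {
    ("neural", "has_word_inv"): {"paper1", "paper2"},
    ("paper1", "in_journal"): {"JMLR"},
    ("paper2", "in_journal"): {"JMLR"},
}

def follow_path(start, path, edges):
    """Return all nodes reachable from start by following the relation path."""
    frontier = {start}
    for relation in path:
        frontier = set().union(*(edges.get((node, relation), set()) for node in frontier))
    return frontier

venues = follow_path("neural", ["has_word_inv", "in_journal"], edges)
```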
1353
00:58:47,520 --> 00:58:55,359
basically um and this followup paper
1354
00:58:52,640 --> 00:58:57,480
uh using differentiable logic rules
1355
00:58:55,359 --> 00:59:00,799
actually made this endtoend
1356
00:58:57,480 --> 00:59:03,079
trainable so this allows you to consider
1357
00:59:00,799 --> 00:59:07,599
whole paths in a differentiable
1358
00:59:03,079 --> 00:59:09,960
framework and so the way they did this
1359
00:59:07,599 --> 00:59:13,359
is like if you have you know City in
1360
00:59:09,960 --> 00:59:16,440
country and has office in country um
1361
00:59:13,359 --> 00:59:18,920
that or sorry City and Country and has
1362
00:59:16,440 --> 00:59:22,200
office in city that indicates has office
1363
00:59:18,920 --> 00:59:24,160
in country and I I'm sure you know many
1364
00:59:22,200 --> 00:59:26,760
people here have thought like learned
1365
00:59:24,160 --> 00:59:29,520
about logic and you know and induction
1366
00:59:26,760 --> 00:59:32,720
from or deduction from uh logic rules
1367
00:59:29,520 --> 00:59:34,359
and stuff like this but the problem is
1368
00:59:32,720 --> 00:59:37,079
deduction from logic rules is very
1369
00:59:34,359 --> 00:59:39,039
fragile like there are cases where there
1370
00:59:37,079 --> 00:59:41,119
are counter examples so if you say that
1371
00:59:39,039 --> 00:59:43,280
something is always true deductively
1372
00:59:41,119 --> 00:59:45,839
then um that can cause problems so in
1373
00:59:43,280 --> 00:59:47,839
reality it's like if you have two pieces
1374
00:59:45,839 --> 00:59:52,400
of information something can become much
1375
00:59:47,839 --> 00:59:56,920
much more likely um and so you know just
1376
00:59:52,400 --> 00:59:59,880
to give an example um somebody studying
1377
00:59:56,920 --> 01:00:01,280
at CMU makes it
1378
00:59:59,880 --> 01:00:03,799
much more likely that they're studying
1379
01:00:01,280 --> 01:00:06,359
computer science and much less likely
1380
01:00:03,799 --> 01:00:08,000
that they're studying medicine or
1381
01:00:06,359 --> 01:00:09,520
something like that but that doesn't
1382
01:00:08,000 --> 01:00:11,720
mean that the
1383
01:00:09,520 --> 01:00:13,559
first one is definitely not
1384
01:00:11,720 --> 01:00:15,480
entirely implied and I'm sure there's
1385
01:00:13,559 --> 01:00:16,760
like a few people at CMU who are somehow
1386
01:00:15,480 --> 01:00:18,440
studying medicine through a joint
1387
01:00:16,760 --> 01:00:21,480
program with Pitt or something like that
1388
01:00:18,440 --> 01:00:24,400
so you know it's very rare
1389
01:00:21,480 --> 01:00:26,799
that logic rules are hard and fast and
1390
01:00:24,400 --> 01:00:28,480
so basically what they do is they treat
1391
01:00:26,799 --> 01:00:30,559
each path as a sequence of matrix
1392
01:00:28,480 --> 01:00:34,839
multiplies where they have a rule
1393
01:00:30,559 --> 01:00:36,599
weight um like this and um in the end
1394
01:00:34,839 --> 01:00:38,359
that allows you to make a a prediction
1395
01:00:36,599 --> 01:00:40,839
about whether a predicate logic rule is
1396
01:00:38,359 --> 01:00:40,839
correct or
1397
01:00:40,880 --> 01:00:49,319
not um so this is uh I've been
1398
01:00:46,880 --> 01:00:51,119
working mostly in like structured
1399
01:00:49,319 --> 01:00:54,480
knowledge spaces structured knowledge
1400
01:00:51,119 --> 01:00:56,599
graphs and other things like this
1401
01:00:54,480 --> 01:00:59,760
um I I don't
1402
01:00:56,599 --> 01:01:02,720
think there's a whole lot of work that
1403
01:00:59,760 --> 01:01:05,640
directly applies this to language models
1404
01:01:02,720 --> 01:01:07,319
um like differentiable logic rules and
1405
01:01:05,640 --> 01:01:10,079
language models or things like that just
1406
01:01:07,319 --> 01:01:12,440
because it's less clean it's you know uh
1407
01:01:10,079 --> 01:01:13,839
harder um there there's a little bit of
1408
01:01:12,440 --> 01:01:16,079
work which I'm going to talk about now
1409
01:01:13,839 --> 01:01:18,599
but I think like this kind of work is
1410
01:01:16,079 --> 01:01:21,440
interesting because a lot of models are
1411
01:01:18,599 --> 01:01:23,119
not super great at reasoning and how to
1412
01:01:21,440 --> 01:01:25,119
like allow them to be better at
1413
01:01:23,119 --> 01:01:26,559
reasoning is kind of an open problem so
1414
01:01:25,119 --> 01:01:28,039
learning from these old older works that
1415
01:01:26,559 --> 01:01:30,200
did it in a more structured space and
1416
01:01:28,039 --> 01:01:32,160
trying to figure out how to apply them
1417
01:01:30,200 --> 01:01:34,400
to less structured spaces is still
1418
01:01:32,160 --> 01:01:36,240
interesting I think
1419
01:01:34,400 --> 01:01:39,160
so
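The path-as-matrix-multiplies idea from a few minutes earlier can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the entities, the toy adjacency matrices, and the rule weight are all made up.

```python
# Sketch of scoring a relation path as a product of adjacency matrices,
# in the spirit of differentiable logic rules. All names and the rule
# weight below are illustrative.

def matmul(a, b):
    """Multiply two small adjacency matrices (lists of lists)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Entities: 0 = AcmeCorp, 1 = Pittsburgh, 2 = USA (hypothetical)
has_office_in_city = [[0, 1, 0],
                      [0, 0, 0],
                      [0, 0, 0]]
city_in_country = [[0, 0, 0],
                   [0, 0, 1],
                   [0, 0, 0]]

# Composing the two relations along the path gives a soft prediction for
# has_office_in_country; a learned rule weight scales how much to trust it.
rule_weight = 0.9
path = matmul(has_office_in_city, city_in_country)
score = rule_weight * path[0][2]  # AcmeCorp -> USA
print(score)  # 0.9
```

Because every step is a matrix multiply and a scalar weight, the whole rule score stays differentiable, which is what makes end-to-end training possible.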
1420
01:01:36,240 --> 01:01:40,720
cool um then the final talk topic I want
1421
01:01:39,160 --> 01:01:42,920
to talk about is probing knowledge in
1422
01:01:40,720 --> 01:01:44,920
LMs and so we have these knowledge bases
1423
01:01:42,920 --> 01:01:47,319
that encode you know tons and tons of
1424
01:01:44,920 --> 01:01:49,880
knowledge um which allows us to figure
1425
01:01:47,319 --> 01:01:52,200
out you know oh well how well do uh
1426
01:01:49,880 --> 01:01:56,200
language models know about these
1427
01:01:52,200 --> 01:01:59,079
things and so
1428
01:01:56,200 --> 01:02:02,760
traditional um kind of QA machine
1429
01:01:59,079 --> 01:02:04,799
reading comprehension RAG models um
1430
01:02:02,760 --> 01:02:06,359
usually referred to external resources
1431
01:02:04,799 --> 01:02:10,039
to answer questions like Wikipedia
1432
01:02:06,359 --> 01:02:14,359
articles um or things like this but then
1433
01:02:10,039 --> 01:02:16,119
the question is without doing RAG can we
1434
01:02:14,359 --> 01:02:18,160
you know answer questions like what
1435
01:02:16,119 --> 01:02:20,920
knowledge is
1436
01:02:18,160 --> 01:02:24,079
encoded and so the first paper that kind
1437
01:02:20,920 --> 01:02:26,520
of handled this sort of problem uh is
1438
01:02:24,079 --> 01:02:29,200
this paper which actually was also
1439
01:02:26,520 --> 01:02:33,359
called uh
1440
01:02:29,200 --> 01:02:35,960
LAMA surprisingly um or released a
1441
01:02:33,359 --> 01:02:41,000
resource called LAMA except it was L A M
1442
01:02:35,960 --> 01:02:44,880
A um but what they did is they
1443
01:02:41,000 --> 01:02:46,960
uh in contrast to using
1444
01:02:41,000 --> 01:02:46,960
structured queries like SQL or
1445
01:02:44,880 --> 01:02:50,000
SPARQL to query KBs they tried to use
1446
01:02:46,960 --> 01:02:52,119
natural language prompts to query LMs so
1447
01:02:52,119 --> 01:02:58,160
this was actually one of the first
1448
01:02:54,240 --> 01:03:02,359
uh papers on prompting
1449
01:02:58,160 --> 01:03:05,079
for uh language models in a way and the
1450
01:03:02,359 --> 01:03:08,359
way they did this is they had um they
1451
01:03:05,079 --> 01:03:10,039
did like Dante was born in [MASK] and then
1452
01:03:08,359 --> 01:03:13,279
they tried to fill in the mask using a
1453
01:03:10,039 --> 01:03:15,839
masked language model and output
1454
01:03:13,279 --> 01:03:18,559
Florence so
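The cloze-style probe just described can be sketched as follows. The `toy_lm` function stands in for a real masked language model such as BERT; its outputs are hard-coded here purely for illustration.

```python
# Sketch of cloze-style knowledge probing: turn a KB triple into a
# "Dante was born in [MASK]" query and check the model's top prediction.
# `toy_lm` is a hypothetical stand-in for a real masked LM.

def toy_lm(prompt):
    """Return a ranked list of fillers for [MASK] (stand-in for a real LM)."""
    completions = {
        "Dante was born in [MASK].": ["Florence", "Italy", "Rome"],
    }
    return completions.get(prompt, [])

triple = ("Dante", "place_of_birth", "Florence")
prompt = f"{triple[0]} was born in [MASK]."
prediction = toy_lm(prompt)[0]
print(prediction == triple[2])  # True: the KB object is recovered
```

The probe counts the fact as "known" when the model's top filler matches the object stored in the knowledge base.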
1455
01:03:15,839 --> 01:03:19,960
um when they did this work now we
1456
01:03:18,559 --> 01:03:21,359
don't do this quite as much but when
1457
01:03:19,960 --> 01:03:23,520
they did this work they basically used
1458
01:03:21,359 --> 01:03:25,440
the knowledge base as the ground truth
1459
01:03:23,520 --> 01:03:28,880
and tried to probe whether the knowledge
1460
01:03:25,440 --> 01:03:31,520
in in um in the knowledge base was also
1461
01:03:28,880 --> 01:03:34,880
uh recoverable from the neural
1462
01:03:31,520 --> 01:03:37,720
LM um and they proposed the LAMA
1463
01:03:34,880 --> 01:03:39,760
Benchmark um basically it was manual
1464
01:03:37,720 --> 01:03:42,480
prompts for 41 relations they created
1465
01:03:39,760 --> 01:03:44,839
the prompts manually uh so like X was
1466
01:03:42,480 --> 01:03:46,480
founded in Y as the prompt template and
1467
01:03:44,839 --> 01:03:49,400
they filled in the subjects and had the
1468
01:03:46,480 --> 01:03:52,160
LMs uh such as BERT predict the
1468
01:03:46,480 --> 01:03:52,160
objects uh like Bloomberg L.P. was founded
1469
01:03:49,400 --> 01:03:55,839
in [MASK] and they demonstrated that like
1471
01:03:55,839 --> 01:04:02,440
basically ELMo uh Transformer-XL and
1472
01:03:59,000 --> 01:04:04,960
BERT base got uh you know up to 31%
1473
01:04:02,440 --> 01:04:06,480
accuracy now I'm sure uh the modern
1474
01:04:04,960 --> 01:04:09,200
language models would have much higher
1475
01:04:06,480 --> 01:04:11,279
accuracy than
1476
01:04:09,200 --> 01:04:13,920
that
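The template-based probing setup described here can be sketched in a few lines: one manual template per relation, fill in the subject, ask the model for the object, and score against the KB. The tiny KB and the always-answers-"Paris" stub model are illustrative, not from the benchmark.

```python
# Sketch of template-based relation probing: manual templates, subjects
# filled in, objects predicted, accuracy measured against the KB.
# The mini-KB and stub model are illustrative.

templates = {
    "founded_in": "{subj} was founded in [MASK].",
    "capital_of": "The capital of {subj} is [MASK].",
}
kb = [
    ("founded_in", "Bloomberg L.P.", "1981"),
    ("capital_of", "France", "Paris"),
]

def stub_predict(prompt):
    # Hypothetical stand-in for a masked LM; always answers "Paris".
    return "Paris"

correct = 0
for rel, subj, obj in kb:
    prompt = templates[rel].format(subj=subj)
    if stub_predict(prompt) == obj:
        correct += 1
accuracy = correct / len(kb)
print(accuracy)  # 0.5 for this stub model
```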
1477
01:04:11,279 --> 01:04:17,839
um this is a a follow-up paper that we
1478
01:04:13,920 --> 01:04:21,160
did to this um where we tried to do this
1479
01:04:17,839 --> 01:04:23,400
multilingually um
1480
01:04:21,160 --> 01:04:25,680
let me
1481
01:04:23,400 --> 01:04:29,520
say I think one thing that's interesting
1482
01:04:25,680 --> 01:04:31,960
about this paper is um even
1483
01:04:29,520 --> 01:04:37,240
if you're not interested in multilingual
1484
01:04:31,960 --> 01:04:38,920
stuff per se there is an interesting
1485
01:04:37,240 --> 01:04:40,760
dichotomy about like what knowledge is
1486
01:04:38,920 --> 01:04:43,079
included in LMs and whether we can
1487
01:04:40,760 --> 01:04:46,000
retrieve it and the reason why I'm
1488
01:04:43,079 --> 01:04:48,359
saying this is because in this paper
1489
01:04:46,000 --> 01:04:51,200
we created
1490
01:04:48,359 --> 01:04:52,599
queries from a knowledge base and
1491
01:04:51,200 --> 01:04:54,160
because we created queries from a
1492
01:04:52,599 --> 01:04:55,760
knowledge base and knowledge bases are
1493
01:04:54,160 --> 01:04:57,240
multilingual we can also create
1494
01:04:55,760 --> 01:05:00,039
multilingual queries from knowledge
1495
01:04:57,240 --> 01:05:01,720
bases right so we can use exactly the
1496
01:05:00,039 --> 01:05:03,359
same entities but just ask the same
1497
01:05:01,720 --> 01:05:05,920
question in different languages and so
1498
01:05:03,359 --> 01:05:07,480
we had a bunch of people manually uh
1499
01:05:05,920 --> 01:05:10,119
create prompts for all of these
1500
01:05:07,480 --> 01:05:13,000
languages here and you can see that in
1501
01:05:10,119 --> 01:05:15,960
English it's much better at responding
1502
01:05:13,000 --> 01:05:19,000
uh to these queries than it is in any
1503
01:05:15,960 --> 01:05:21,039
other language and in particular like
1504
01:05:19,000 --> 01:05:22,880
lower resource languages or languages
1505
01:05:21,039 --> 01:05:26,400
that are less similar to English it did
1506
01:05:22,880 --> 01:05:29,079
much worse and notably we had
1507
01:05:26,400 --> 01:05:32,160
two settings
1508
01:05:29,079 --> 01:05:34,279
um one setting is
1509
01:05:32,160 --> 01:05:35,799
we counted the answer correct only
1510
01:05:34,279 --> 01:05:38,359
if it answered in the language we
1511
01:05:35,799 --> 01:05:39,680
queried it in but in the other setting we
1512
01:05:38,359 --> 01:05:42,640
also counted the answer correct if it
1513
01:05:39,680 --> 01:05:44,200
answered in any language so um it
1514
01:05:42,640 --> 01:05:46,640
didn't necessarily have to even know the
1515
01:05:44,200 --> 01:05:48,200
name of the entity in that uh language
1516
01:05:46,640 --> 01:05:50,520
and we would still count it
1517
01:05:48,200 --> 01:05:54,720
correct and so what I mean by there's a
1518
01:05:50,520 --> 01:05:56,440
dichotomy between the information that
1519
01:05:54,720 --> 01:05:59,240
language models have
1520
01:05:56,440 --> 01:06:02,480
encoded and whether they're able to
1521
01:05:59,240 --> 01:06:02,480
retrieve it
1522
01:06:02,680 --> 01:06:07,640
is in English the
1523
01:06:06,000 --> 01:06:10,799
models we tested were able to answer
1524
01:06:07,640 --> 01:06:13,000
like 17% of queries
1525
01:06:10,799 --> 01:06:14,359
but the fact that they're able to
1526
01:06:13,000 --> 01:06:16,160
answer in English means that the
1527
01:06:14,359 --> 01:06:18,520
language model quote unquote knows the
1528
01:06:16,160 --> 01:06:20,200
answer right like it knows the answer in
1529
01:06:18,520 --> 01:06:22,680
English we're asking exactly the same
1530
01:06:20,200 --> 01:06:24,400
question in all the other languages so
1531
01:06:22,680 --> 01:06:26,079
you know it should know the answer in
1532
01:06:24,400 --> 01:06:27,680
the other languages too
1533
01:06:26,079 --> 01:06:30,000
but it's not able to retrieve the answer
1534
01:06:27,680 --> 01:06:33,079
because we asked in another language
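The multilingual setup described above, including the two scoring modes, can be sketched like this. The per-language templates and gold surface forms are illustrative, not the actual prompts from the paper.

```python
# Sketch of multilingual KB probing: the same entity, per-language
# templates, and a strict vs lenient scoring mode. All strings here
# are illustrative.

templates = {
    "en": "{subj} was born in [MASK].",
    "es": "{subj} nació en [MASK].",
    "fr": "{subj} est né à [MASK].",
}
# The gold object, with its surface form in each language.
gold = {"en": "Florence", "es": "Florencia", "fr": "Florence"}

def is_correct(prediction, query_lang, strict):
    if strict:  # must answer in the language of the query
        return prediction == gold[query_lang]
    return prediction in gold.values()  # any language counts

print(is_correct("Florencia", "en", strict=True))   # False
print(is_correct("Florencia", "en", strict=False))  # True
```

The lenient mode is the one where the model doesn't even have to know the entity's name in the query language to be counted correct.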
1535
01:06:30,000 --> 01:06:35,920
so um that brings up some interesting
1536
01:06:33,079 --> 01:06:38,079
questions about how we can make models
1537
01:06:35,920 --> 01:06:39,680
better at retrieving the the knowledge
1538
01:06:38,079 --> 01:06:43,559
that they already know in English when
1539
01:06:39,680 --> 01:06:45,520
you query them in other languages or um
1540
01:06:43,559 --> 01:06:48,119
and there was another paper recently I
1541
01:06:45,520 --> 01:06:52,720
don't know if I'd be able to find it um
1542
01:06:48,119 --> 01:06:56,119
exactly which is um they
1543
01:06:52,720 --> 01:07:01,799
prompted models with personas and so
1544
01:06:56,119 --> 01:07:04,599
they said I um you know I am an old man I
1545
01:07:01,799 --> 01:07:07,160
am an old woman I am a young man I am
1546
01:07:04,599 --> 01:07:10,039
young woman I am a child or something
1547
01:07:07,160 --> 01:07:12,799
like that um or they also talked about
1548
01:07:10,039 --> 01:07:15,640
things like uh physical disabilities and
1549
01:07:12,799 --> 01:07:17,200
things and they said um please answer
1550
01:07:15,640 --> 01:07:19,640
this question after they prompted with a
1551
01:07:17,200 --> 01:07:22,680
Persona and just having that Persona
1552
01:07:19,640 --> 01:07:24,839
greatly changed the ability of the model
1553
01:07:22,680 --> 01:07:26,400
to answer questions so it's this very
1554
01:07:24,839 --> 01:07:28,200
weird thing which is like the
1555
01:07:26,400 --> 01:07:29,799
models are actually capable of answering
1556
01:07:28,200 --> 01:07:31,520
the questions but based on how you probe
1557
01:07:29,799 --> 01:07:32,880
them whether it's in like different
1558
01:07:31,520 --> 01:07:34,599
languages or if you give them a
1559
01:07:32,880 --> 01:07:36,839
different Persona they manage to answer
1560
01:07:34,599 --> 01:07:39,000
things differently and so on the minus
1561
01:07:36,839 --> 01:07:42,920
side like you can make
1562
01:07:39,000 --> 01:07:44,799
ways to reduce the language model's
1563
01:07:42,920 --> 01:07:45,920
performance by giving it like a Persona
1564
01:07:44,799 --> 01:07:49,839
that shouldn't be good at answering
1565
01:07:45,920 --> 01:07:53,279
questions or something like that um
1566
01:07:49,839 --> 01:07:54,839
but on the plus side um like when you're
1567
01:07:53,279 --> 01:07:57,279
doing code generation there was this
1568
01:07:54,839 --> 01:07:58,960
magic prompt which is like um I have
1569
01:07:57,279 --> 01:08:01,319
checked this carefully and all the unit
1570
01:07:58,960 --> 01:08:03,240
tests pass and that would improve your
1571
01:08:01,319 --> 01:08:05,760
code generation accuracy by like five
1572
01:08:03,240 --> 01:08:07,559
points or something like that so um
1573
01:08:05,760 --> 01:08:09,240
you just get the the model in the right
1574
01:08:07,559 --> 01:08:11,359
mood to answer the question accurately
1575
01:08:09,240 --> 01:08:13,319
and it does a better job at doing it so
1576
01:08:11,359 --> 01:08:15,960
it's kind of uh it goes in both
1577
01:08:13,319 --> 01:08:15,960
directions I
1578
01:08:16,679 --> 01:08:27,080
guess cool um yeah uh any questions
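The persona and "magic prompt" effects just discussed amount to prepending or appending fixed strings to the same underlying query. A minimal sketch, where all the prefix and suffix strings are illustrative paraphrases rather than exact prompts from any paper:

```python
# Sketch of persona / assurance prompting: the same question wrapped in
# different fixed strings, which can change answer quality. All prompt
# wording here is illustrative.

question = "In which year was Bloomberg L.P. founded?"

personas = [
    "I am an old man.",
    "I am a young woman.",
]
prompts = [f"{p} Please answer this question: {question}" for p in personas]

# For code generation, an assurance-style suffix played a similar role:
magic = "I have checked this carefully and all the unit tests pass."
code_prompt = f"Write a sorting function. {magic}"

print(prompts[0])
```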
1579
01:08:23,679 --> 01:08:30,120
here um another thing that you can do uh
1580
01:08:27,080 --> 01:08:31,000
is fine-tune models specifically so
1581
01:08:30,120 --> 01:08:34,080
they're good at answering
1582
01:08:31,000 --> 01:08:35,560
knowledge-based questions so um uh this
1583
01:08:34,080 --> 01:08:38,080
paper demonstrated that you could fine
1584
01:08:35,560 --> 01:08:39,480
tune models uh on synthetically created
1585
01:08:38,080 --> 01:08:41,159
knowledge based questions and that would
1586
01:08:39,480 --> 01:08:42,920
improve the ability of the model to
1587
01:08:41,159 --> 01:08:47,679
answer questions about knowledge
1588
01:08:42,920 --> 01:08:47,679
bases um it's
1589
01:08:49,120 --> 01:08:57,440
uh yeah um it's pretty straightforward
1590
01:08:53,199 --> 01:08:57,440
so uh there's that
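The synthetic-data idea above — turning KB triples into question-answer pairs for fine-tuning — can be sketched like this. The templates and triples are illustrative, not the paper's actual data.

```python
# Sketch of generating synthetic QA pairs from KB triples to fine-tune a
# model on knowledge questions. Templates and triples are illustrative.

question_templates = {
    "capital_of": "What is the capital of {subj}?",
    "born_in": "Where was {subj} born?",
}
triples = [
    ("capital_of", "Japan", "Tokyo"),
    ("born_in", "Dante", "Florence"),
]

training_pairs = [
    (question_templates[rel].format(subj=subj), obj)
    for rel, subj, obj in triples
]
print(training_pairs[0])  # ('What is the capital of Japan?', 'Tokyo')
```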
1591
01:08:57,799 --> 01:09:03,120
um yeah we already talked about this in
1592
01:09:00,000 --> 01:09:07,560
the rag class so I think I might skip
1593
01:09:03,120 --> 01:09:10,239
that um a final paper that I'd like to
1594
01:09:07,560 --> 01:09:12,600
talk about this is also a paper uh done
1595
01:09:10,239 --> 01:09:13,759
by my student Jung B Jong and this is
1596
01:09:12,600 --> 01:09:16,080
interesting from the point of view of
1597
01:09:13,759 --> 01:09:18,000
multihop reasoning and so I talked a
1598
01:09:16,080 --> 01:09:19,679
little bit about like multihop reasoning
1599
01:09:18,000 --> 01:09:23,239
along reasoning
1600
01:09:19,679 --> 01:09:26,159
chains um in knowledge bases and this is
1601
01:09:23,239 --> 01:09:28,520
one example of multihop reasoning
1602
01:09:26,159 --> 01:09:30,080
along reasoning chains within the
1603
01:09:28,520 --> 01:09:33,400
parameters of the model so testing
1604
01:09:30,080 --> 01:09:36,759
whether models can answer
1605
01:09:33,400 --> 01:09:38,480
multihop questions and
1606
01:09:36,759 --> 01:09:40,839
basically what we did here is we took a
1607
01:09:38,480 --> 01:09:42,679
knowledge base and a knowledge base can
1608
01:09:40,839 --> 01:09:44,279
have
1609
01:09:42,679 --> 01:09:49,480
relations
1610
01:09:44,279 --> 01:09:49,480
like uh the country US has a
1611
01:09:49,600 --> 01:09:52,600
president
1612
01:09:53,480 --> 01:09:58,600
who then has a
1613
01:10:00,880 --> 01:10:06,560
birthday um and so we can create these
1614
01:10:04,280 --> 01:10:08,640
multihop questions right uh and just
1615
01:10:06,560 --> 01:10:10,280
follow the relation links and then we
1616
01:10:08,640 --> 01:10:11,440
know the answer to the multihop question
1617
01:10:10,280 --> 01:10:13,560
by following the link and we can
1618
01:10:11,440 --> 01:10:18,159
generate you know the question given a
1619
01:10:13,560 --> 01:10:19,800
template um so we did this and had like
1620
01:10:18,159 --> 01:10:22,800
question one which is return the artist
1621
01:10:19,800 --> 01:10:25,719
who recorded Party Ain't Over um and then
1622
01:10:22,800 --> 01:10:28,159
where in Georgia does uh Usher live and
1623
01:10:25,719 --> 01:10:29,920
then we can turn this into a question
1624
01:10:28,159 --> 01:10:31,679
which is in which part of
1624
01:10:28,159 --> 01:10:31,679
Georgia does the artist that recorded
1625
01:10:29,920 --> 01:10:34,239
Party Ain't Over live and so we now have a
1626
01:10:31,679 --> 01:10:37,560
multihop question and what we did
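Composing two KB relations into a multi-hop question, as just described, can be sketched by nesting one template inside another and following the relation links to get the gold answer. The mini-KB (the song attribution and Atlanta as the place) is illustrative, echoing the lecture's example rather than the paper's data.

```python
# Sketch of generating a multi-hop question from a KB by nesting
# templates and following relation links. The mini-KB is illustrative.

kb = {
    ("Party Ain't Over", "recorded_by"): "Usher",
    ("Usher", "lives_in"): "Atlanta",
}

t1 = "the artist who recorded {song}"  # hop 1: song -> artist
t2 = "Where does {x} live?"            # hop 2: artist -> place

song = "Party Ain't Over"
artist = kb[(song, "recorded_by")]     # follow the first link
answer = kb[(artist, "lives_in")]      # follow the second link

compound_question = t2.format(x=t1.format(song=song))
print(compound_question)  # Where does the artist who recorded Party Ain't Over live?
print(answer)             # Atlanta
```

Because the gold answer comes from following the links, we can score the model on the compound question and on each hop separately.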
1628
01:10:37,560 --> 01:10:47,440
is we measured whether um the model was
1629
01:10:45,000 --> 01:10:49,760
able to answer the first question the
1630
01:10:47,440 --> 01:10:53,320
second question and the comp like
1631
01:10:49,760 --> 01:10:56,120
compound question and what we found is
1632
01:10:53,320 --> 01:10:59,440
like what we would expect
1633
01:10:56,120 --> 01:11:01,719
if models were like perfect knowledge
1634
01:10:59,440 --> 01:11:04,360
processors right
1635
01:11:01,719 --> 01:11:08,120
is we have
1636
01:11:04,360 --> 01:11:10,800
like yes or no on the first question
1637
01:11:08,120 --> 01:11:14,000
and
1638
01:11:10,800 --> 01:11:16,560
yes or no
1639
01:11:14,000 --> 01:11:16,560
on the second
1640
01:11:17,199 --> 01:11:24,760
question and we would expect that
1641
01:11:21,920 --> 01:11:26,080
basically if it knew both of the answers
1642
01:11:24,760 --> 01:11:27,239
to the first question and the second
1643
01:11:26,080 --> 01:11:30,600
question it would get the compound
1644
01:11:27,239 --> 01:11:31,800
question right and if it got uh like
1645
01:11:30,600 --> 01:11:34,800
either of them wrong it would get it
1646
01:11:31,800 --> 01:11:37,120
wrong right um you know in the
1647
01:11:34,800 --> 01:11:39,400
ideal world where the knowledge of the
1648
01:11:37,120 --> 01:11:41,280
two sub questions is necessary to answer
1649
01:11:39,400 --> 01:11:43,880
the comp composite question and the
1650
01:11:41,280 --> 01:11:45,840
model is a perfect knowledge processor
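The "perfect knowledge processor" expectation can be written down directly: the compound question should be answered correctly exactly when both sub-questions are. A small sketch, with the per-question outcomes made up for illustration:

```python
# Sketch of the ideal-world prediction: compound correct iff both
# sub-questions correct. The outcomes below are illustrative.

results = [
    # (first_correct, second_correct, compound_correct)
    (True,  True,  True),   # matches the ideal prediction
    (True,  False, False),  # matches
    (False, True,  True),   # violates it: second answer alone sufficed
]

matches = sum((q1 and q2) == comp for q1, q2, comp in results)
agreement = matches / len(results)
print(agreement)
```

Measuring how far real models fall from agreement of 1.0 is exactly the kind of gap the lecture says they found.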
1651
01:11:43,880 --> 01:11:47,120
and basically what we found we tried a
1652
01:11:45,840 --> 01:11:49,280
whole bunch of different types of
1653
01:11:47,120 --> 01:11:51,199
questions and what we found is this is
1654
01:11:49,280 --> 01:11:55,960
totally not the case like it's not the
1655
01:11:51,199 --> 01:11:58,520
case at all um and what we found instead
1656
01:11:55,960 --> 01:12:01,560
is if it's able to answer the second
1657
01:11:58,520 --> 01:12:04,120
question correctly it was much more
1658
01:12:01,560 --> 01:12:07,480
likely to be able to answer the
1659
01:12:04,120 --> 01:12:08,840
composite question um whether it can
1660
01:12:07,480 --> 01:12:11,000
answer the first question has
1661
01:12:08,840 --> 01:12:13,120
almost no relation with whether it could
1662
01:12:11,000 --> 01:12:15,520
answer the composite question at all so
1663
01:12:13,120 --> 01:12:17,679
it's more like somehow from the answer
1664
01:12:15,520 --> 01:12:19,320
to the second question it was able to
1665
01:12:17,679 --> 01:12:22,280
get the answer right and it kind of
1666
01:12:19,320 --> 01:12:24,040
makes sense actually because like um
1667
01:12:22,280 --> 01:12:26,320
let's say the answer to the second
1668
01:12:24,040 --> 01:12:27,920
question is some like really long list
1669
01:12:26,320 --> 01:12:30,719
like who are all the presidents of the
1670
01:12:27,920 --> 01:12:33,320
United States um or something like that
1671
01:12:30,719 --> 01:12:35,639
that's just hard to answer um so if I
1672
01:12:33,320 --> 01:12:38,000
said who are all the presidents of the
1673
01:12:35,639 --> 01:12:40,800
country where Washington DC is located
1674
01:12:38,000 --> 01:12:42,679
in um you know like the second question
1675
01:12:40,800 --> 01:12:44,040
is really hard so that's hard to get but
1676
01:12:42,679 --> 01:12:46,120
if I say
1677
01:12:44,040 --> 01:12:49,920
um
1678
01:12:46,120 --> 01:12:53,520
uh what is the
1679
01:12:49,920 --> 01:12:57,120
capital of the country
1680
01:12:53,520 --> 01:12:57,120
uh
1681
01:12:57,400 --> 01:13:02,440
what is the capital of the
1682
01:12:58,840 --> 01:13:05,400
country where the most
1683
01:13:02,440 --> 01:13:06,800
um people live or something like that
1684
01:13:05,400 --> 01:13:08,679
even if you weren't sure about the
1685
01:13:06,800 --> 01:13:10,880
country where the most people live you
1686
01:13:08,679 --> 01:13:13,040
could pick a random capital and get it
1687
01:13:10,880 --> 01:13:16,199
right some of the time or something like
1688
01:13:13,040 --> 01:13:18,239
that so um that's what we found in this
1689
01:13:16,199 --> 01:13:19,800
paper and I I think like another nice
1690
01:13:18,239 --> 01:13:22,360
thing about knowledge bases is they
1691
01:13:19,800 --> 01:13:24,880
allow you to ask like really interesting
1692
01:13:22,360 --> 01:13:26,400
questions like this about what language
1693
01:13:24,880 --> 01:13:29,120
models know or what language models don't
1694
01:13:26,400 --> 01:13:31,040
know in a structured way so um I think
1695
01:13:29,120 --> 01:13:32,280
if you're interested in probing language
1696
01:13:31,040 --> 01:13:35,320
models and what they know and what they
1697
01:13:32,280 --> 01:13:38,639
can infer what logic they can do that's
1698
01:13:35,320 --> 01:13:42,320
good um cool yeah that's all I have for
1699
01:13:38,639 --> 01:13:44,920
today um are there any questions or
1700
01:13:42,320 --> 01:13:48,679
discussion or things like that or happy
1701
01:13:44,920 --> 01:13:48,679
to talk up here too