|
WEBVTT |
|
|
|
00:00:00.120 --> 00:00:04.880 |
|
everyone, today I'd like to talk about
|
|
|
00:00:02.760 --> 00:00:07.399 |
|
uh learning from knowledge bases uh |
|
|
|
00:00:04.880 --> 00:00:11.440 |
|
learning from and for knowledge bases
|
|
|
00:00:07.399 --> 00:00:14.799 |
|
this is kind of a shift uh from a lot
|
|
|
00:00:11.440 --> 00:00:16.480 |
|
of the stuff that we've done so far uh |
|
|
|
00:00:14.799 --> 00:00:18.439 |
|
and I'm going to be talking about like a |
|
|
|
00:00:16.480 --> 00:00:20.480 |
|
different information Source some |
|
|
|
00:00:18.439 --> 00:00:21.960 |
|
relatively different algorithms compared |
|
|
|
00:00:20.480 --> 00:00:26.080 |
|
to the stuff that we talked about up |
|
|
|
00:00:21.960 --> 00:00:28.880 |
|
until this point so um you know it might |
|
|
|
00:00:26.080 --> 00:00:32.360 |
|
be uh interesting it might be different |
|
|
|
00:00:28.880 --> 00:00:35.640 |
|
so uh let's get started with
|
|
|
00:00:32.360 --> 00:00:37.360 |
|
that so I'm going to be talking about |
|
|
|
00:00:35.640 --> 00:00:40.000 |
|
knowledge bases and knowledge bases are |
|
|
|
00:00:37.360 --> 00:00:43.039 |
|
basically structured databases of
|
|
|
00:00:40.000 --> 00:00:46.079 |
|
knowledge and they can contain a lot of |
|
|
|
00:00:43.039 --> 00:00:48.559 |
|
things but most commonly when people are |
|
|
|
00:00:46.079 --> 00:00:50.600 |
|
talking about them they are talking |
|
|
|
00:00:48.559 --> 00:00:53.160 |
|
about relational knowledge bases that |
|
|
|
00:00:50.600 --> 00:00:55.559 |
|
include things like entities which are |
|
|
|
00:00:53.160 --> 00:00:57.399 |
|
nodes in a graph and relations which are |
|
|
|
00:00:55.559 --> 00:01:00.239 |
|
edges between |
|
|
|
00:00:57.399 --> 00:01:02.079 |
|
nodes and |
|
|
|
00:01:00.239 --> 00:01:03.879 |
|
I'll talk about some examples of
|
|
|
00:01:02.079 --> 00:01:05.479 |
|
this in a little bit to make that a |
|
|
|
00:01:03.879 --> 00:01:08.040 |
|
little bit more concrete and then some |
|
|
|
00:01:05.479 --> 00:01:11.240 |
|
of the questions that we ask about these |
|
|
|
00:01:08.040 --> 00:01:14.400 |
|
are how can we learn to create and |
|
|
|
00:01:11.240 --> 00:01:16.799 |
|
expand knowledge bases with uh you know |
|
|
|
00:01:14.400 --> 00:01:18.439 |
|
neural network based methods and then |
|
|
|
00:01:16.799 --> 00:01:20.200 |
|
the second question is how can we learn |
|
|
|
00:01:18.439 --> 00:01:22.600 |
|
from the information in knowledge bases |
|
|
|
00:01:20.200 --> 00:01:24.720 |
|
to improve like neural network models or |
|
|
|
00:01:22.600 --> 00:01:27.560 |
|
uh use them in effective |
|
|
|
00:01:24.720 --> 00:01:31.479 |
|
ways and how can we use uh structured |
|
|
|
00:01:27.560 --> 00:01:31.479 |
|
knowledge to answer questions |
|
|
|
00:01:32.200 --> 00:01:37.159 |
|
so the first uh thing I'd like to talk |
|
|
|
00:01:35.000 --> 00:01:40.960 |
|
about a little bit is types of knowledge |
|
|
|
00:01:37.159 --> 00:01:43.079 |
|
bases and they come in several different |
|
|
|
00:01:40.960 --> 00:01:46.119 |
|
varieties the first one I'd like to talk |
|
|
|
00:01:43.079 --> 00:01:48.560 |
|
about is a very uh classical one called |
|
|
|
00:01:46.119 --> 00:01:50.960 |
|
WordNet. Has anyone actually ever used
|
|
|
00:01:48.560 --> 00:01:53.479 |
|
WordNet
|
|
|
00:01:50.960 --> 00:01:55.520 |
|
before I see at least one person raising |
|
|
|
00:01:53.479 --> 00:01:57.640 |
|
their hand so it's not entirely uh |
|
|
|
00:01:55.520 --> 00:02:00.119 |
|
hasn't entirely disappeared has anyone |
|
|
|
00:01:57.640 --> 00:02:03.240 |
|
heard of WordNet before
|
|
|
00:02:00.119 --> 00:02:05.079 |
|
okay, more people. Um, so basically
|
|
|
00:02:03.240 --> 00:02:06.960 |
|
this used to be a really big thing in in |
|
|
|
00:02:05.079 --> 00:02:10.440 |
|
natural language processing, it's not so
|
|
|
00:02:06.960 --> 00:02:12.319 |
|
much anymore, um, but I want to explain
|
|
|
00:02:10.440 --> 00:02:14.800 |
|
about it because I want to explain why |
|
|
|
00:02:12.319 --> 00:02:17.360 |
|
this is maybe like less necessary to use |
|
|
|
00:02:14.800 --> 00:02:19.599 |
|
but actual knowledge bases are still |
|
|
|
00:02:17.360 --> 00:02:23.160 |
|
more necessary to |
|
|
|
00:02:19.599 --> 00:02:26.280 |
|
use. And so WordNet is a large database
|
|
|
00:02:23.160 --> 00:02:29.560 |
|
of words and specifically what it does |
|
|
|
00:02:26.280 --> 00:02:32.720 |
|
is each word or something they call a |
|
|
|
00:02:29.560 --> 00:02:37.120 |
|
synset is a node and then there are
|
|
|
00:02:32.720 --> 00:02:42.560 |
|
relationships between nodes and the |
|
|
|
00:02:37.120 --> 00:02:44.319 |
|
nodes can correspond to nouns um and or |
|
|
|
00:02:42.560 --> 00:02:45.920 |
|
verbs or |
|
|
|
00:02:44.319 --> 00:02:48.360 |
|
adjectives |
|
|
|
00:02:45.920 --> 00:02:49.959 |
|
and nouns have different types of |
|
|
|
00:02:48.360 --> 00:02:53.360 |
|
relations between them so they have |
|
|
|
00:02:49.959 --> 00:02:56.280 |
|
things like an is-a relation, so like a
|
|
|
00:02:53.360 --> 00:03:00.040 |
|
hatchback is a type of car. There are part-
|
|
|
00:02:56.280 --> 00:03:02.840 |
|
of relations uh where a wheel is a part |
|
|
|
00:03:00.040 --> 00:03:05.720 |
|
of a car um and they also make |
|
|
|
00:03:02.840 --> 00:03:09.799 |
|
distinctions between types and instances |
|
|
|
00:03:05.720 --> 00:03:12.400 |
|
so like Joe Biden is an instance of a |
|
|
|
00:03:09.799 --> 00:03:16.560 |
|
president and president is the |
|
|
|
00:03:12.400 --> 00:03:19.239 |
|
type so um verb relations are ordered by |
|
|
|
00:03:16.560 --> 00:03:22.680 |
|
specificity so like communicate is more |
|
|
|
00:03:19.239 --> 00:03:25.799 |
|
broad than talk so talk is you know |
|
|
|
00:03:22.680 --> 00:03:27.519 |
|
generally a subclass of communicate and
|
|
|
00:03:25.799 --> 00:03:30.720 |
|
then whisper is generally a subclass of
|
|
|
00:03:27.519 --> 00:03:33.159 |
|
talk so it's ordered in this way |
|
|
|
00:03:30.720 --> 00:03:35.920 |
|
and then adjective relations are mostly |
|
|
|
00:03:33.159 --> 00:03:37.720 |
|
antonyms, so like wet versus dry
|
|
|
00:03:35.920 --> 00:03:43.599 |
|
and other things like |
|
|
|
00:03:37.720 --> 00:03:47.080 |
|
this. Um, when I said synsets, uh, actually
|
|
|
00:03:43.599 --> 00:03:50.239 |
|
each node is not a word, despite the
|
|
|
00:03:47.080 --> 00:03:53.239 |
|
name WordNet, it's a set of words that
|
|
|
00:03:50.239 --> 00:03:56.200 |
|
all have the same meaning so you might |
|
|
|
00:03:53.239 --> 00:03:59.120 |
|
have artifact and thing would both |
|
|
|
00:03:56.200 --> 00:04:00.879 |
|
correspond to this um node because they |
|
|
|
00:03:59.120 --> 00:04:02.599 |
|
both mean basically the same thing so |
|
|
|
00:04:00.879 --> 00:04:04.159 |
|
it's like sets of synonyms and this is |
|
|
|
00:04:02.599 --> 00:04:07.599 |
|
also important when we talk about other |
|
|
|
00:04:04.159 --> 00:04:09.920 |
|
types of uh knowledge bases as well and |
|
|
|
00:04:07.599 --> 00:04:13.920 |
|
so what was this used for um this was |
|
|
|
00:04:09.920 --> 00:04:17.160 |
|
used for, for example, uh, trying to figure
|
|
|
00:04:13.920 --> 00:04:22.400 |
|
out... trying to find all the cars
|
|
|
00:04:17.160 --> 00:04:24.440 |
|
that were mentioned in like a large
|
|
|
00:04:22.400 --> 00:04:27.440 |
|
set of text so you would go through you |
|
|
|
00:04:24.440 --> 00:04:30.280 |
|
would identify all |
|
|
|
00:04:27.440 --> 00:04:32.120 |
|
synsets, or you would identify all words
|
|
|
00:04:30.280 --> 00:04:34.120 |
|
that corresponded to these synsets and
|
|
|
00:04:32.120 --> 00:04:35.720 |
|
then you would take a step up and find |
|
|
|
00:04:34.120 --> 00:04:38.800 |
|
motorcar, and you would know that like
|
|
|
00:04:35.720 --> 00:04:42.320 |
|
all of those were mentions of cars.
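
NOTE
A minimal sketch of this lookup with NLTK's WordNet interface (assumes
nltk is installed and the 'wordnet' corpus has been downloaded):
    from nltk.corpus import wordnet as wn
    car = wn.synset('car.n.01')  # the "motorcar" synset
    # transitive closure over hyponym (is-a child) links
    car_types = set(car.closure(lambda s: s.hyponyms()))
    # every surface word that names some kind of car
    car_words = {lem.name() for s in car_types for lem in s.lemmas()}
    print('hatchback' in car_words)  # expected: True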
|
|
|
00:04:38.800 --> 00:04:45.520 |
|
So like, why don't we use WordNet very much
|
|
|
00:04:42.320 --> 00:04:45.520 |
|
anymore? Any
|
|
|
00:04:49.160 --> 00:04:52.840 |
|
ideas? What would you do
|
|
|
00:04:51.080 --> 00:04:55.560 |
|
instead if I told you find all the cars |
|
|
|
00:04:52.840 --> 00:04:55.560 |
|
in a big piece of |
|
|
|
00:04:55.960 --> 00:05:00.160 |
|
text? Yeah, just do something with the
|
|
|
00:04:58.280 --> 00:05:02.880 |
|
embeddings. Just do something with
|
|
|
00:05:00.160 --> 00:05:04.560 |
|
embeddings, yeah. So you might, um, you
|
|
|
00:05:02.880 --> 00:05:06.720 |
|
might get something and find all things |
|
|
|
00:05:04.560 --> 00:05:10.360 |
|
that were close in embedding space to a |
|
|
|
00:05:06.720 --> 00:05:10.360 |
|
car. What's another thing you might
|
|
|
00:05:11.560 --> 00:05:15.520 |
|
do like what I would do is I would |
|
|
|
00:05:13.639 --> 00:05:17.080 |
|
download Mistral and say, does this
|
|
|
00:05:15.520 --> 00:05:19.880 |
|
sentence talk about a car and it would |
|
|
|
00:05:17.080 --> 00:05:22.199 |
|
say yes or no, and I would, you know, or
|
|
|
00:05:19.880 --> 00:05:23.479 |
|
I would say find all the cars in this uh |
|
|
|
00:05:22.199 --> 00:05:25.319 |
|
that are mentioned in the sentence and |
|
|
|
00:05:23.479 --> 00:05:28.720 |
|
it would get them and sure that's like |
|
|
|
00:05:25.319 --> 00:05:31.319 |
|
expensive but it's really easy so um you |
|
|
|
00:05:28.720 --> 00:05:32.919 |
|
know there are other options that might |
|
|
|
00:05:31.319 --> 00:05:36.720 |
|
be less expensive but that could solve a |
|
|
|
00:05:32.919 --> 00:05:39.520 |
|
lot of the things. So WordNet, you know,
|
|
|
00:05:36.720 --> 00:05:41.039 |
|
started out... it
|
|
|
00:05:39.520 --> 00:05:42.600 |
|
started out being very popular in |
|
|
|
00:05:41.039 --> 00:05:44.039 |
|
natural language processing but now it's |
|
|
|
00:05:42.600 --> 00:05:45.440 |
|
less so because we can get a lot of it |
|
|
|
00:05:44.039 --> 00:05:47.639 |
|
from embeddings we can get a lot of it |
|
|
|
00:05:45.440 --> 00:05:50.520 |
|
from language models |
|
|
|
00:05:47.639 --> 00:05:52.759 |
|
itself um another thing that started |
|
|
|
00:05:50.520 --> 00:05:55.759 |
|
maybe before WordNet or even around the
|
|
|
00:05:52.759 --> 00:05:58.840 |
|
same time as WordNet was this uh data
|
|
|
00:05:55.759 --> 00:06:00.800 |
|
base called Cyc, and it was a manually
|
|
|
00:05:58.840 --> 00:06:04.160 |
|
curated database attempting to encode |
|
|
|
00:06:00.800 --> 00:06:06.280 |
|
all common sense knowledge um and the |
|
|
|
00:06:04.160 --> 00:06:08.759 |
|
project itself lasted for about 30 to 40 |
|
|
|
00:06:06.280 --> 00:06:11.840 |
|
years it might even still |
|
|
|
00:06:08.759 --> 00:06:13.319 |
|
exist um and so they had this huge uh |
|
|
|
00:06:11.840 --> 00:06:15.199 |
|
like hierarchy of all the different |
|
|
|
00:06:13.319 --> 00:06:17.680 |
|
types of knowledge you could have it |
|
|
|
00:06:15.199 --> 00:06:19.680 |
|
encoded knowledge about like events and |
|
|
|
00:06:17.680 --> 00:06:21.479 |
|
like which events happened before other |
|
|
|
00:06:19.680 --> 00:06:26.840 |
|
events and all this other stuff like
|
|
|
00:06:21.479 --> 00:06:29.039 |
|
this um but the problem with this is uh |
|
|
|
00:06:26.840 --> 00:06:31.000 |
|
this was just too ambitious basically it |
|
|
|
00:06:29.039 --> 00:06:35.680 |
|
was not possible to encode all of this |
|
|
|
00:06:31.000 --> 00:06:37.440 |
|
manually by hand. So, um, it
|
|
|
00:06:35.680 --> 00:06:38.840 |
|
did get part of the way there, but
|
|
|
00:06:37.440 --> 00:06:40.240 |
|
that part of the way there was not |
|
|
|
00:06:38.840 --> 00:06:42.560 |
|
enough for it to be really useful in |
|
|
|
00:06:40.240 --> 00:06:45.199 |
|
practical systems. So this sort
|
|
|
00:06:42.560 --> 00:06:47.800 |
|
of method is not used as frequently |
|
|
|
00:06:45.199 --> 00:06:51.240 |
|
now |
|
|
|
00:06:47.800 --> 00:06:56.000 |
|
um a follow-up one
|
|
|
00:06:51.240 --> 00:06:57.479 |
|
um, which is its successor, is now uh
|
|
|
00:06:56.000 --> 00:06:59.879 |
|
the most widely used knowledge base,
|
|
|
00:06:57.479 --> 00:07:03.240 |
|
something called DBpedia, and the basic
|
|
|
00:06:59.879 --> 00:07:06.120 |
|
idea behind DBpedia is that while Cyc
|
|
|
00:07:03.240 --> 00:07:07.840 |
|
is too difficult because they had people |
|
|
|
00:07:06.120 --> 00:07:12.400 |
|
on the Cyc project who would go in and
|
|
|
00:07:07.840 --> 00:07:12.400 |
|
curate rules um for |
|
|
|
00:07:13.280 --> 00:07:19.080 |
|
machines, Wikipedia basically they have a
|
|
|
00:07:17.160 --> 00:07:21.080 |
|
very very large number of humans |
|
|
|
00:07:19.080 --> 00:07:23.639 |
|
curating this structured data about |
|
|
|
00:07:21.080 --> 00:07:25.199 |
|
entities in the world for humans they're |
|
|
|
00:07:23.639 --> 00:07:27.879 |
|
creating it for humans because then you |
|
|
|
00:07:25.199 --> 00:07:29.599 |
|
can put it on a Wikipedia page and you |
|
|
|
00:07:27.879 --> 00:07:31.440 |
|
can look and see it says Carnegie Mellon
|
|
|
00:07:29.599 --> 00:07:34.160 |
|
University it has the former names of |
|
|
|
00:07:31.440 --> 00:07:36.919 |
|
Carnegie Mellon, um, it has the motto of
|
|
|
00:07:34.160 --> 00:07:38.759 |
|
Carnegie Mellon, the type of entity, who it
|
|
|
00:07:36.919 --> 00:07:41.360 |
|
was established by and when and other |
|
|
|
00:07:38.759 --> 00:07:42.840 |
|
stuff like that and because people are |
|
|
|
00:07:41.360 --> 00:07:44.280 |
|
no longer creating it for machines |
|
|
|
00:07:42.840 --> 00:07:46.280 |
|
they're creating it for humans people |
|
|
|
00:07:44.280 --> 00:07:47.840 |
|
are like motivated to do this so like |
|
|
|
00:07:46.280 --> 00:07:49.960 |
|
lots of people will do it for free so |
|
|
|
00:07:47.840 --> 00:07:51.960 |
|
you can actually get a reasonably sized |
|
|
|
00:07:49.960 --> 00:07:53.639 |
|
amount of data from this and actually |
|
|
|
00:07:51.960 --> 00:07:55.720 |
|
cover you know like most of the entities |
|
|
|
00:07:53.639 --> 00:07:57.080 |
|
in the world or not most of the entities |
|
|
|
00:07:55.720 --> 00:08:00.120 |
|
in the world but most of the notable |
|
|
|
00:07:57.080 --> 00:08:03.319 |
|
entities in uh part of the world that |
|
|
|
00:08:00.120 --> 00:08:03.319 |
|
have high participation in |
|
|
|
00:08:03.479 --> 00:08:09.800 |
|
Wikipedia. Um, so now the thing that a
|
|
|
00:08:08.039 --> 00:08:13.319 |
|
lot of people use is something called |
|
|
|
00:08:09.800 --> 00:08:14.919 |
|
Wikidata. This name is a
|
|
|
00:08:13.319 --> 00:08:17.039 |
|
little bit of a misnomer because it's |
|
|
|
00:08:14.919 --> 00:08:18.960 |
|
not actually that closely connected to |
|
|
|
00:08:17.039 --> 00:08:20.639 |
|
Wikipedia they extract data from |
|
|
|
00:08:18.960 --> 00:08:21.720 |
|
Wikipedia but they also extract it from |
|
|
|
00:08:20.639 --> 00:08:24.400 |
|
lots of other |
|
|
|
00:08:21.720 --> 00:08:27.520 |
|
sources and this is a curated database |
|
|
|
00:08:24.400 --> 00:08:30.360 |
|
of entities um it's linked it's |
|
|
|
00:08:27.520 --> 00:08:33.959 |
|
extremely large scale and it's |
|
|
|
00:08:30.360 --> 00:08:38.080 |
|
multilingual and um this is an example |
|
|
|
00:08:33.959 --> 00:08:39.680 |
|
of a thing from Richard Feynman, um, where
|
|
|
00:08:38.080 --> 00:08:42.680 |
|
people can go in and they can actually |
|
|
|
00:08:39.680 --> 00:08:45.320 |
|
like add information and stuff like that |
|
|
|
00:08:42.680 --> 00:08:47.440 |
|
um and you know it gives information |
|
|
|
00:08:45.320 --> 00:08:50.959 |
|
about education and all kinds of other |
|
|
|
00:08:47.440 --> 00:08:52.600 |
|
stuff. So, um, for fun I can go to the Wikidata
|
|
|
00:08:50.959 --> 00:08:55.040 |
|
site.
|
|
|
00:08:52.600 --> 00:08:59.360 |
|
Does anyone have an entity they'd
|
|
|
00:08:55.040 --> 00:08:59.360 |
|
like to know more about |
|
|
|
00:09:01.640 --> 00:09:07.320 |
|
any ideas, maybe something that has
|
|
|
00:09:03.959 --> 00:09:07.320 |
|
been in the news recently |
|
|
|
00:09:10.680 --> 00:09:16.160 |
|
or nobody brave enough to come up with |
|
|
|
00:09:13.040 --> 00:09:18.360 |
|
an entity yeah |
|
|
|
00:09:16.160 --> 00:09:20.640 |
|
Mamba that's a good one I'm actually not |
|
|
|
00:09:18.360 --> 00:09:23.800 |
|
sure if that one's going to be in here |
|
|
|
00:09:20.640 --> 00:09:27.720 |
|
um there's lots of mambas but I don't |
|
|
|
00:09:23.800 --> 00:09:27.720 |
|
know about that particular Mamba let me |
|
|
|
00:09:27.839 --> 00:09:31.200 |
|
see do you want to know about a |
|
|
|
00:09:29.720 --> 00:09:33.399 |
|
different Mamba, or do you want to know
|
|
|
00:09:31.200 --> 00:09:36.040 |
|
about Mamba the research |
|
|
|
00:09:33.399 --> 00:09:38.399 |
|
group so Mamba is a research group it's |
|
|
|
00:09:36.040 --> 00:09:41.800 |
|
the Modeling and Analysis for Medicine
|
|
|
00:09:38.399 --> 00:09:44.800 |
|
research group um it focuses on |
|
|
|
00:09:41.800 --> 00:09:48.000 |
|
mathematical biology and it's in the uh |
|
|
|
00:09:44.800 --> 00:09:51.120 |
|
in this National Center for Scientific
|
|
|
00:09:48.000 --> 00:09:52.519 |
|
Research in France, um, the chairperson is
|
|
|
00:09:51.120 --> 00:09:55.360 |
|
this person and stuff like that so you |
|
|
|
00:09:52.519 --> 00:10:00.200 |
|
can see it has all of these things so |
|
|
|
00:09:55.360 --> 00:10:03.920 |
|
Mamba this Mamba is a node in the graph |
|
|
|
00:10:00.200 --> 00:10:06.839 |
|
and then the edges are pointing um the |
|
|
|
00:10:03.920 --> 00:10:09.440 |
|
edges are labeled with like instance of |
|
|
|
00:10:06.839 --> 00:10:11.200 |
|
and then the next node is research group
|
|
|
00:10:09.440 --> 00:10:13.000 |
|
so research group is like another node
|
|
|
00:10:11.200 --> 00:10:17.120 |
|
in the graph and so you can click |
|
|
|
00:10:13.000 --> 00:10:18.680 |
|
through this and it has its own ID and |
|
|
|
00:10:17.120 --> 00:10:21.200 |
|
other things like |
|
|
|
00:10:18.680 --> 00:10:22.839 |
|
this also you'll notice that research |
|
|
|
00:10:21.200 --> 00:10:24.160 |
|
group is translated into lots of |
|
|
|
00:10:22.839 --> 00:10:27.440 |
|
different languages in the world so you |
|
|
|
00:10:24.160 --> 00:10:30.120 |
|
can use it multi multilingually and um |
|
|
|
00:10:27.440 --> 00:10:33.880 |
|
and other things like that |
|
|
|
00:10:30.120 --> 00:10:37.000 |
|
um even minor entities like Graham |
|
|
|
00:10:33.880 --> 00:10:40.160 |
|
Neubig are included in this, and it has a
|
|
|
00:10:37.000 --> 00:10:42.240 |
|
little bit of um like information about |
|
|
|
00:10:40.160 --> 00:10:45.480 |
|
me, like my PhD was at Kyoto University
|
|
|
00:10:42.240 --> 00:10:45.480 |
|
in 2012 I am a |
|
|
|
00:10:45.600 --> 00:10:52.079 |
|
human, I am male, uh, and first name, last
|
|
|
00:10:50.519 --> 00:10:53.720 |
|
name University teacher computer |
|
|
|
00:10:52.079 --> 00:10:56.279 |
|
scientist natural language processing |
|
|
|
00:10:53.720 --> 00:10:58.639 |
|
this is all right um because this is |
|
|
|
00:10:56.279 --> 00:11:00.240 |
|
mostly hand-curated, it even has the IDs
|
|
|
00:10:58.639 --> 00:11:04.240 |
|
of my
|
|
|
00:11:00.240 --> 00:11:06.519 |
|
advisers um the reason why it has all of |
|
|
|
00:11:04.240 --> 00:11:09.839 |
|
this stuff actually is because like 15 |
|
|
|
00:11:06.519 --> 00:11:12.160 |
|
years ago or like 10 years ago I entered |
|
|
|
00:11:09.839 --> 00:11:14.399 |
|
in my uh my information into the |
|
|
|
00:11:12.160 --> 00:11:16.240 |
|
Mathematics Genealogy Project, uh, which
|
|
|
00:11:14.399 --> 00:11:18.880 |
|
is this project about who your advisers |
|
|
|
00:11:16.240 --> 00:11:20.680 |
|
were because I wanted to see like who my |
|
|
|
00:11:18.880 --> 00:11:22.800 |
|
mathematical like siblings were and |
|
|
|
00:11:20.680 --> 00:11:24.519 |
|
stuff like that and uh somehow they |
|
|
|
00:11:22.800 --> 00:11:27.360 |
|
managed to pull that out and keep this |
|
|
|
00:11:24.519 --> 00:11:28.760 |
|
like 10 years later so um basically |
|
|
|
00:11:27.360 --> 00:11:30.519 |
|
they're pulling information from like |
|
|
|
00:11:28.760 --> 00:11:32.800 |
|
many many different structured data |
|
|
|
00:11:30.519 --> 00:11:34.160 |
|
sources that they can use so uh they can |
|
|
|
00:11:32.800 --> 00:11:37.480 |
|
pull it in there I don't know where they |
|
|
|
00:11:34.160 --> 00:11:39.440 |
|
got that I'm human uh but maybe that was |
|
|
|
00:11:37.480 --> 00:11:43.240 |
|
inferred from some piece of data |
|
|
|
00:11:39.440 --> 00:11:44.760 |
|
somewhere online or something cool um |
|
|
|
00:11:43.240 --> 00:11:46.839 |
|
another good thing about this that |
|
|
|
00:11:44.760 --> 00:11:52.680 |
|
actually I didn't mention directly in |
|
|
|
00:11:46.839 --> 00:11:52.680 |
|
the um in the lecture note or |
|
|
|
00:11:54.680 --> 00:12:01.120 |
|
slides is that there's a query language |
|
|
|
00:11:57.360 --> 00:12:04.320 |
|
for this. Yeah, and this
|
|
|
00:12:01.120 --> 00:12:06.839 |
|
query language is called SPARQL, so
|
|
|
00:12:04.320 --> 00:12:10.680 |
|
there's SQL for querying relational
|
|
|
00:12:06.839 --> 00:12:14.399 |
|
databases, and SPARQL is for querying
|
|
|
00:12:10.680 --> 00:12:15.240 |
|
these uh knowledge bases and let me see |
|
|
|
00:12:14.399 --> 00:12:18.279 |
|
if I |
|
|
|
00:12:15.240 --> 00:12:22.560 |
|
can. I asked Chat
|
|
|
00:12:18.279 --> 00:12:24.560 |
|
GPT to write me a SPARQL query to find
|
|
|
00:12:22.560 --> 00:12:26.839 |
|
all presidents of Carnegie Mellon
|
|
|
00:12:24.560 --> 00:12:31.160 |
|
University, so let's see if ChatGPT is
|
|
|
00:12:26.839 --> 00:12:31.160 |
|
capable of doing that um |
|
|
|
00:12:35.639 --> 00:12:39.680 |
|
okay that's a problem let me |
|
|
|
00:12:41.279 --> 00:12:47.000 |
|
see, okay, there's an error there
|
|
|
00:12:43.880 --> 00:12:48.360 |
|
but like, uh, if I could find one... I
|
|
|
00:12:47.000 --> 00:12:50.160 |
|
don't want to waste time in class like |
|
|
|
00:12:48.360 --> 00:12:52.079 |
|
finding a working query but basically |
|
|
|
00:12:50.160 --> 00:12:53.399 |
|
you can put it in a query and it allows |
|
|
|
00:12:52.079 --> 00:12:56.120 |
|
you to do a lot of things that are |
|
|
|
00:12:53.399 --> 00:13:00.519 |
|
similar to what you can do in SQL so you |
|
|
|
00:12:56.120 --> 00:13:02.720 |
|
can find like all of the edges of nodes |
|
|
|
00:13:00.519 --> 00:13:05.279 |
|
that satisfy a particular relation so |
|
|
|
00:13:02.720 --> 00:13:07.360 |
|
you could say I want, for Carnegie Mellon
|
|
|
00:13:05.279 --> 00:13:10.160 |
|
University to find all things that |
|
|
|
00:13:07.360 --> 00:13:13.519 |
|
followed the, like, president-of relation
|
|
|
00:13:10.160 --> 00:13:14.959 |
|
and that would give me all um you know |
|
|
|
00:13:13.519 --> 00:13:18.680 |
|
all presidents of Carnegie Mellon
|
|
|
00:13:14.959 --> 00:13:20.440 |
|
University you can also like filter um |
|
|
|
00:13:18.680 --> 00:13:22.160 |
|
filter by their start date and end date |
|
|
|
00:13:20.440 --> 00:13:24.120 |
|
so find all of the presidents between a
|
|
|
00:13:22.160 --> 00:13:25.839 |
|
certain time and another time, or
|
|
|
00:13:24.120 --> 00:13:30.480 |
|
things like that.
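
NOTE
A hedged sketch of such a query in Python with the SPARQLWrapper library.
The IDs are illustrative and should be verified on wikidata.org (e.g.,
whether Q190080 is Carnegie Mellon, and whether the relevant relation is
P488 "chairperson" or a position-held statement):
    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper
    query = """
    SELECT ?personLabel ?start ?end WHERE {
      wd:Q190080 p:P488 ?st .            # Q190080: CMU (verify); P488: chairperson
      ?st ps:P488 ?person .
      OPTIONAL { ?st pq:P580 ?start . }  # P580: start time qualifier
      OPTIONAL { ?st pq:P582 ?end . }    # P582: end time qualifier
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """
    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["personLabel"]["value"])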
|
|
|
00:13:25.839 --> 00:13:34.199 |
|
So this is good if you want to get
|
|
|
00:13:30.480 --> 00:13:36.600 |
|
like high-reliability data, um,
|
|
|
00:13:34.199 --> 00:13:39.839 |
|
in a scalable way because like if I ask |
|
|
|
00:13:36.600 --> 00:13:41.920 |
|
ChatGPT, like one of my favorite, um, one
|
|
|
00:13:39.839 --> 00:13:45.720 |
|
of my favorite queries for ChatGPT is
|
|
|
00:13:41.920 --> 00:13:48.600 |
|
like, name all of the
|
|
|
00:13:45.720 --> 00:13:51.959 |
|
presidents that were born uh east of the |
|
|
|
00:13:48.600 --> 00:13:53.880 |
|
Mississippi River um and I've never |
|
|
|
00:13:51.959 --> 00:13:56.519 |
|
successfully gotten ChatGPT to be able
|
|
|
00:13:53.880 --> 00:13:57.800 |
|
to do this um because there's lots of |
|
|
|
00:13:56.519 --> 00:13:59.560 |
|
presidents who were born east of the |
|
|
|
00:13:57.800 --> 00:14:02.320 |
|
Mississippi River and it starts counting |
|
|
|
00:13:59.560 --> 00:14:04.079 |
|
them it can't distinguish what position |
|
|
|
00:14:02.320 --> 00:14:05.639 |
|
is east of the Mississippi and what |
|
|
|
00:14:04.079 --> 00:14:09.120 |
|
position is west of the
|
|
|
00:14:05.639 --> 00:14:11.279 |
|
Mississippi but if you write a uh like a |
|
|
|
00:14:09.120 --> 00:14:14.759 |
|
SPARQL query, it's not that hard to do that.
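
NOTE
The same pattern handles the "born east of the Mississippi" question; a
sketch, assuming P39 = position held, Q11696 = President of the United
States, P19 = place of birth, P625 = coordinates, and using longitude
> -90 as a rough stand-in for the river (execute it with SPARQLWrapper
exactly as in the earlier sketch):
    query = """
    SELECT ?presidentLabel ?lon WHERE {
      ?president wdt:P39 wd:Q11696 ;    # position held: US president
                 wdt:P19 ?birthplace .  # place of birth
      ?birthplace p:P625/psv:P625/wikibase:geoLongitude ?lon .
      FILTER(?lon > -90.0)              # roughly east of the Mississippi
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """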
|
|
|
00:14:11.279 --> 00:14:16.480 |
|
So there are, um, you know, there are
|
|
|
00:14:14.759 --> 00:14:18.639 |
|
certain types of questions especially |
|
|
|
00:14:16.480 --> 00:14:20.399 |
|
information aggregation and complex |
|
|
|
00:14:18.639 --> 00:14:22.839 |
|
relations and stuff that uh language |
|
|
|
00:14:20.399 --> 00:14:26.600 |
|
models are not very good |
|
|
|
00:14:22.839 --> 00:14:28.120 |
|
at. Cool, um, so that's kind of an intro to
|
|
|
00:14:26.600 --> 00:14:31.240 |
|
knowledge bases why you might want to |
|
|
|
00:14:28.120 --> 00:14:33.759 |
|
think about them any questions so far |
|
|
|
00:14:31.240 --> 00:14:33.759 |
|
for |
|
|
|
00:14:34.759 --> 00:14:39.720 |
|
discussion okay um I will move on next |
|
|
|
00:14:38.320 --> 00:14:41.199 |
|
so the next thing I'd like to talk about |
|
|
|
00:14:39.720 --> 00:14:43.839 |
|
is learning representations for |
|
|
|
00:14:41.199 --> 00:14:45.519 |
|
knowledge bases um so knowledge bases |
|
|
|
00:14:43.839 --> 00:14:48.000 |
|
are great but one problem is they're |
|
|
|
00:14:45.519 --> 00:14:51.040 |
|
like inherently |
|
|
|
00:14:48.000 --> 00:14:55.040 |
|
incomplete and even with extremely large |
|
|
|
00:14:51.040 --> 00:14:58.279 |
|
scale uh it becomes impossible to have |
|
|
|
00:14:55.040 --> 00:15:00.360 |
|
them be complete and the reason why is |
|
|
|
00:14:58.279 --> 00:15:03.639 |
|
uh, for example in Freebase, which
|
|
|
00:15:00.360 --> 00:15:05.480 |
|
was the predecessor to Wikidata, um, 71%
|
|
|
00:15:03.639 --> 00:15:08.560 |
|
of humans didn't have a date of |
|
|
|
00:15:05.480 --> 00:15:10.560 |
|
birth um and probably every human |
|
|
|
00:15:08.560 --> 00:15:12.079 |
|
actually has a date of birth right um |
|
|
|
00:15:10.560 --> 00:15:15.880 |
|
you know we're pretty much guaranteed |
|
|
|
00:15:12.079 --> 00:15:17.639 |
|
for that to be the case so the issue is |
|
|
|
00:15:15.880 --> 00:15:19.160 |
|
like for very famous entities you want |
|
|
|
00:15:17.639 --> 00:15:21.040 |
|
lots of detailed information like you |
|
|
|
00:15:19.160 --> 00:15:24.000 |
|
can know absolutely everything about Joe |
|
|
|
00:15:21.040 --> 00:15:25.759 |
|
Biden or Barack Obama but you know at |
|
|
|
00:15:24.000 --> 00:15:26.880 |
|
the same time for less major entities
|
|
|
00:15:25.759 --> 00:15:28.079 |
|
you still want them in the knowledge |
|
|
|
00:15:26.880 --> 00:15:30.079 |
|
base but you're not going to be able to |
|
|
|
00:15:28.079 --> 00:15:31.519 |
|
get all that information or should you |
|
|
|
00:15:30.079 --> 00:15:35.600 |
|
for privacy |
|
|
|
00:15:31.519 --> 00:15:36.680 |
|
purposes and so the idea is um for |
|
|
|
00:15:35.600 --> 00:15:38.079 |
|
information that's written on the |
|
|
|
00:15:36.680 --> 00:15:40.600 |
|
internet somewhere can you perform |
|
|
|
00:15:38.079 --> 00:15:42.759 |
|
relation extraction which essentially |
|
|
|
00:15:40.600 --> 00:15:44.600 |
|
allows you to extract this information |
|
|
|
00:15:42.759 --> 00:15:46.360 |
|
and create your own knowledge bases and |
|
|
|
00:15:44.600 --> 00:15:47.680 |
|
stuff like this and this can also be |
|
|
|
00:15:46.360 --> 00:15:50.079 |
|
useful if you want to create it for like |
|
|
|
00:15:47.680 --> 00:15:52.199 |
|
a specialized domain or um or other |
|
|
|
00:15:50.079 --> 00:15:55.000 |
|
stuff like |
|
|
|
00:15:52.199 --> 00:15:59.519 |
|
that so there's a bunch of ways that |
|
|
|
00:15:55.000 --> 00:16:03.079 |
|
people do this um and one kind of |
|
|
|
00:15:59.519 --> 00:16:06.120 |
|
popular way that people have tried to do |
|
|
|
00:16:03.079 --> 00:16:09.199 |
|
relation extraction is through uh |
|
|
|
00:16:06.120 --> 00:16:12.560 |
|
leveraging consistency in embedding |
|
|
|
00:16:09.199 --> 00:16:15.319 |
|
space and so this is the most famous |
|
|
|
00:16:12.560 --> 00:16:17.959 |
|
example from word2vec, uh, what seems like
|
|
|
00:16:15.319 --> 00:16:21.880 |
|
ages ago uh in |
|
|
|
00:16:17.959 --> 00:16:23.920 |
|
2013, and in the word2vec paper one of
|
|
|
00:16:21.880 --> 00:16:26.279 |
|
the big you know exciting things was |
|
|
|
00:16:23.920 --> 00:16:28.639 |
|
essentially they demonstrated that |
|
|
|
00:16:26.279 --> 00:16:30.120 |
|
vectors in embedding space had kind of |
|
|
|
00:16:28.639 --> 00:16:31.839 |
|
uh,
|
|
|
00:16:30.120 --> 00:16:33.160 |
|
you know meaning and actually the |
|
|
|
00:16:31.839 --> 00:16:34.600 |
|
vectors in embedding space could |
|
|
|
00:16:33.160 --> 00:16:37.639 |
|
correspond to relations between |
|
|
|
00:16:34.600 --> 00:16:39.480 |
|
embeddings so like uh we would have man |
|
|
|
00:16:37.639 --> 00:16:41.000 |
|
pointing to woman in approximately the |
|
|
|
00:16:39.480 --> 00:16:42.920 |
|
same direction that we had Uncle |
|
|
|
00:16:41.000 --> 00:16:46.600 |
|
pointing to Aunt and King pointing to |
|
|
|
00:16:42.920 --> 00:16:49.680 |
|
Queen and so um then you could do things |
|
|
|
00:16:46.600 --> 00:16:51.440 |
|
like you could take kings, subtract out
|
|
|
00:16:49.680 --> 00:16:53.560 |
|
the vector that corresponded to |
|
|
|
00:16:51.440 --> 00:16:58.360 |
|
plurality uh add the vector that |
|
|
|
00:16:53.560 --> 00:17:00.839 |
|
corresponded to um you know uh to going |
|
|
|
00:16:58.360 --> 00:17:04.319 |
|
from masculine to feminine words and |
|
|
|
00:17:00.839 --> 00:17:05.559 |
|
then, um, like read off the vectors that
|
|
|
00:17:04.319 --> 00:17:07.160 |
|
were plural and you'd be able to |
|
|
|
00:17:05.559 --> 00:17:09.439 |
|
identify the plural by just knowing |
|
|
|
00:17:07.160 --> 00:17:11.000 |
|
these two uh vectors, the plural of queen
|
|
|
00:17:09.439 --> 00:17:14.000 |
|
by just knowing those two |
|
|
|
00:17:11.000 --> 00:17:14.000 |
|
vectors.
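
NOTE
A small illustration of this vector arithmetic with gensim (the vector
file path is hypothetical; any pretrained word2vec-format file works):
    from gensim.models import KeyedVectors  # pip install gensim
    wv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)
    # king - man + woman is approximately queen
    print(wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))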
|
|
|
00:17:14.160 --> 00:17:21.880 |
|
um but it turns out that you can either |
|
|
|
00:17:18.199 --> 00:17:21.880 |
|
learn embeddings |
|
|
|
00:17:22.720 --> 00:17:28.240 |
|
from like uh you can either learn |
|
|
|
00:17:25.000 --> 00:17:30.400 |
|
embeddings from text or you can use the |
|
|
|
00:17:28.240 --> 00:17:32.039 |
|
fact that you have a big knowledge base |
|
|
|
00:17:30.400 --> 00:17:34.880 |
|
that was curated by humans like Wiki |
|
|
|
00:17:32.039 --> 00:17:36.120 |
|
data to improve the embeddings of a |
|
|
|
00:17:34.880 --> 00:17:39.559 |
|
neural model |
|
|
|
00:17:36.120 --> 00:17:41.799 |
|
itself and so another pretty large uh |
|
|
|
00:17:39.559 --> 00:17:43.600 |
|
research area that a lot of people have |
|
|
|
00:17:41.799 --> 00:17:47.120 |
|
focused on is how do you get good |
|
|
|
00:17:43.600 --> 00:17:48.720 |
|
embeddings of a Knowledge Graph and this |
|
|
|
00:17:47.120 --> 00:17:50.600 |
|
is important if you want to do any sort |
|
|
|
00:17:48.720 --> 00:17:52.799 |
|
of like Knowledge Graph Search or other |
|
|
|
00:17:50.600 --> 00:17:54.160 |
|
things like this like for example one of |
|
|
|
00:17:52.799 --> 00:17:56.799 |
|
the really nice things about knowledge |
|
|
|
00:17:54.160 --> 00:17:58.880 |
|
graphs is they have information about a |
|
|
|
00:17:56.799 --> 00:18:00.200 |
|
whole bunch of really sparse entities |
|
|
|
00:17:58.880 --> 00:18:03.240 |
|
that aren't mentioned very much on the |
|
|
|
00:18:00.200 --> 00:18:05.679 |
|
internet for example and so because of |
|
|
|
00:18:03.240 --> 00:18:07.440 |
|
that you can um you can leverage the |
|
|
|
00:18:05.679 --> 00:18:10.720 |
|
knowledge graph structure together with |
|
|
|
00:18:07.440 --> 00:18:10.720 |
|
text to learn better embeddings |
|
|
|
00:18:11.240 --> 00:18:18.520 |
|
overall and so this particular paper is |
|
|
|
00:18:15.280 --> 00:18:20.960 |
|
one example of it um and the way they do |
|
|
|
00:18:18.520 --> 00:18:23.280 |
|
this is they express uh knowledge graph
|
|
|
00:18:20.960 --> 00:18:25.919 |
|
triples as additive
|
|
|
00:18:23.280 --> 00:18:28.480 |
|
transformations, and they minimize the
|
|
|
00:18:25.919 --> 00:18:31.640 |
|
distance uh of existing triples with a |
|
|
|
00:18:28.480 --> 00:18:35.039 |
|
margin-based loss. So the way they do
|
|
|
00:18:31.640 --> 00:18:38.240 |
|
this is they have the head, um, and the
|
|
|
00:18:35.039 --> 00:18:40.799 |
|
tail, and l is the vector corresponding
|
|
|
00:18:38.240 --> 00:18:42.679 |
|
to like the link between the things that |
|
|
|
00:18:40.799 --> 00:18:47.960 |
|
corresponds to a |
|
|
|
00:18:42.679 --> 00:18:52.159 |
|
relation, and so you go, uh, you have h and
|
|
|
00:18:47.960 --> 00:18:53.559 |
|
t, and here, um, like this is l, but here
|
|
|
00:18:52.159 --> 00:18:55.640 |
|
it's written as r because I got this
|
|
|
00:18:53.559 --> 00:18:58.120 |
|
from a different paper and basically you |
|
|
|
00:18:55.640 --> 00:18:59.480 |
|
try to go from h to t, um, according
|
|
|
00:18:58.120 --> 00:19:00.919 |
|
to the relation |
|
|
|
00:18:59.480 --> 00:19:05.120 |
|
uh vector
|
|
|
00:19:00.919 --> 00:19:07.200 |
|
r, and you use a hinge loss where, um,
|
|
|
00:19:05.120 --> 00:19:10.039 |
|
for the hinge loss you have a hinge
|
|
|
00:19:07.200 --> 00:19:12.640 |
|
parameter and then you try to upweight |
|
|
|
00:19:10.039 --> 00:19:15.760 |
|
the example of a true triple and |
|
|
|
00:19:12.640 --> 00:19:17.960 |
|
downweight the example of a false
|
|
|
00:19:15.760 --> 00:19:19.880 |
|
triple so this could be one that was |
|
|
|
00:19:17.960 --> 00:19:22.080 |
|
like randomly sampled to be incorrect |
|
|
|
00:19:19.880 --> 00:19:22.080 |
|
for example.
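
NOTE
A minimal PyTorch sketch of this margin-based (hinge) objective over
TransE-style triples, with randomly corrupted heads/tails as negatives:
    import torch
    def transe_margin_loss(h, r, t, h_neg, t_neg, gamma=1.0):
        # h, r, t: (batch, dim) head, relation, tail embeddings
        pos = torch.norm(h + r - t, p=2, dim=1)          # d(h + r, t)
        neg = torch.norm(h_neg + r - t_neg, p=2, dim=1)  # corrupted triple
        # hinge: true triples should score at least gamma better
        return torch.relu(gamma + pos - neg).mean()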
|
|
|
00:19:23.760 --> 00:19:29.080 |
|
Um, one interesting thing about
|
|
|
00:19:26.880 --> 00:19:31.559 |
|
knowledge graph embeddings is like a lot |
|
|
|
00:19:29.080 --> 00:19:33.600 |
|
of famous AI researchers got their start |
|
|
|
00:19:31.559 --> 00:19:36.000 |
|
in knowledge graph embeddings, and so
|
|
|
00:19:33.600 --> 00:19:39.760 |
|
Richard Socher is one of them, you know,
|
|
|
00:19:36.000 --> 00:19:44.320 |
|
he's the CEO of the you.com search engine now
|
|
|
00:19:39.760 --> 00:19:46.679 |
|
um and uh this was a first attempt at |
|
|
|
00:19:44.320 --> 00:19:49.679 |
|
predicting relations they basically |
|
|
|
00:19:46.679 --> 00:19:55.400 |
|
created a um MLP that tries to predict |
|
|
|
00:19:49.679 --> 00:19:58.880 |
|
whether a relation exists so they have |
|
|
|
00:19:55.400 --> 00:20:00.760 |
|
a matrix for the left side of the |
|
|
|
00:19:58.880 --> 00:20:03.320 |
|
relation a matrix for the right side of |
|
|
|
00:20:00.760 --> 00:20:05.080 |
|
the relation and then they feed in the |
|
|
|
00:20:03.320 --> 00:20:07.559 |
|
embeddings of each of the entities in |
|
|
|
00:20:05.080 --> 00:20:08.919 |
|
the relation they have a nonlinearity |
|
|
|
00:20:07.559 --> 00:20:11.799 |
|
and then they have another vector that
|
|
|
00:20:08.919 --> 00:20:14.720 |
|
tries to predict the um the probability |
|
|
|
00:20:11.799 --> 00:20:16.679 |
|
of the uh actual relation being correct |
|
|
|
00:20:14.720 --> 00:20:18.960 |
|
so you would run this through a sigmoid |
|
|
|
00:20:16.679 --> 00:20:21.000 |
|
and then uh if it was one the relation |
|
|
|
00:20:18.960 --> 00:20:24.039 |
|
was likely to exist, if it was zero then
|
|
|
00:20:21.000 --> 00:20:25.480 |
|
the relation was likely to not exist and |
|
|
|
00:20:24.039 --> 00:20:27.799 |
|
then they also proposed something called a
|
|
|
00:20:25.480 --> 00:20:31.480 |
|
neural tensor network, and this adds a
|
|
|
00:20:27.799 --> 00:20:34.000 |
|
bilinear feature extractor um and so |
|
|
|
00:20:31.480 --> 00:20:37.440 |
|
basically what this is saying is we have |
|
|
|
00:20:34.000 --> 00:20:40.000 |
|
the embedding here the embedding here we |
|
|
|
00:20:37.440 --> 00:20:41.840 |
|
have a matrix and then we calculate the |
|
|
|
00:20:40.000 --> 00:20:43.080 |
|
dot product between the embedding after |
|
|
|
00:20:41.840 --> 00:20:45.799 |
|
transformation it looks a lot like |
|
|
|
00:20:43.080 --> 00:20:47.720 |
|
attention actually in a way um because |
|
|
|
00:20:45.799 --> 00:20:50.000 |
|
we had the bilinear attention so it's |
|
|
|
00:20:47.720 --> 00:20:53.640 |
|
similar to that as well and then we also |
|
|
|
00:20:50.000 --> 00:20:56.840 |
|
have the MLP so this part corresponds to |
|
|
|
00:20:53.640 --> 00:21:00.320 |
|
MLP, and then we have a bias term.
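
NOTE
A sketch of this neural-tensor-network-style score (bilinear term plus
MLP term plus bias, read out by a vector u; shapes are illustrative):
    import torch
    def ntn_score(e1, e2, W, V, b, u):
        # e1, e2: (dim,) entity embeddings; W: (k, dim, dim) bilinear tensor
        # V: (k, 2*dim) MLP weights; b: (k,) bias; u: (k,) output vector
        bilinear = torch.einsum('i,kij,j->k', e1, W, e2)  # e1^T W_k e2 per slice
        mlp = V @ torch.cat([e1, e2])                     # the MLP part
        return u @ torch.tanh(bilinear + mlp + b)         # scalar relation score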
|
|
|
00:20:56.840 --> 00:21:02.200 |
|
And, um, this is a powerful model, but
|
|
|
00:21:00.320 --> 00:21:05.400 |
|
it's a bit overparameterized so we |
|
|
|
00:21:02.200 --> 00:21:08.120 |
|
actually later um uh this kind of fell |
|
|
|
00:21:05.400 --> 00:21:10.360 |
|
out of uh favor towards these more |
|
|
|
00:21:08.120 --> 00:21:14.520 |
|
simple models that we're using uh kind |
|
|
|
00:21:10.360 --> 00:21:14.520 |
|
of just linear projections between the |
|
|
|
00:21:17.600 --> 00:21:22.279 |
|
two so there's um there's a lot of |
|
|
|
00:21:20.120 --> 00:21:25.320 |
|
methods like this these methods are |
|
|
|
00:21:22.279 --> 00:21:27.039 |
|
basically assuming that we have either |
|
|
|
00:21:25.320 --> 00:21:29.080 |
|
knowledge graph
|
|
|
00:21:27.039 --> 00:21:30.799 |
|
embeddings um and we want to learn |
|
|
|
00:21:29.080 --> 00:21:32.480 |
|
relations or they're assuming that we |
|
|
|
00:21:30.799 --> 00:21:34.320 |
|
don't have any information at all about |
|
|
|
00:21:32.480 --> 00:21:36.840 |
|
the knowledge graph and we want to learn |
|
|
|
00:21:34.320 --> 00:21:40.039 |
|
the knowledge graph embeddings themselves
|
|
|
00:21:36.840 --> 00:21:42.400 |
|
it's been used for both of them but um I |
|
|
|
00:21:40.039 --> 00:21:44.000 |
|
I'd say now it's probably most useful |
|
|
|
00:21:42.400 --> 00:21:45.520 |
|
for learning knowledge graph embeddings
|
|
|
00:21:44.000 --> 00:21:50.480 |
|
if you want to do any sort of knowledge
|
|
|
00:21:45.520 --> 00:21:50.480 |
|
graph-based modeling, uh, which can be
|
|
|
00:21:51.240 --> 00:21:55.919 |
|
useful um cool any questions about these |
|
|
|
00:21:57.360 --> 00:22:01.679 |
|
ones okay |
|
|
|
00:21:59.520 --> 00:22:04.360 |
|
next um actually this part might be a |
|
|
|
00:22:01.679 --> 00:22:06.600 |
|
little bit simpler than the uh than the |
|
|
|
00:22:04.360 --> 00:22:09.000 |
|
like knowledge-graph-based approaches. So
|
|
|
00:22:06.600 --> 00:22:10.960 |
|
another method for relation extraction
|
|
|
00:22:09.000 --> 00:22:13.440 |
|
is learning from text |
|
|
|
00:22:10.960 --> 00:22:16.120 |
|
directly |
|
|
|
00:22:13.440 --> 00:22:19.080 |
|
and the first question about this is how |
|
|
|
00:22:16.120 --> 00:22:22.200 |
|
do you get training data to learn uh |
|
|
|
00:22:19.080 --> 00:22:24.480 |
|
relation extraction,
|
|
|
00:22:22.200 --> 00:22:26.720 |
|
and so there was this very influential |
|
|
|
00:22:24.480 --> 00:22:28.279 |
|
paper on distant supervision for relation
|
|
|
00:22:26.720 --> 00:22:31.120 |
|
extraction I would say it's almost one |
|
|
|
00:22:28.279 --> 00:22:32.880 |
|
of the first or certainly one of the |
|
|
|
00:22:31.120 --> 00:22:34.559 |
|
most influential papers on like data |
|
|
|
00:22:32.880 --> 00:22:35.960 |
|
augmentation or synthetic data for |
|
|
|
00:22:34.559 --> 00:22:38.400 |
|
natural language |
|
|
|
00:22:35.960 --> 00:22:40.440 |
|
processing and basically the idea is you |
|
|
|
00:22:38.400 --> 00:22:44.279 |
|
already have a knowledge base that has |
|
|
|
00:22:40.440 --> 00:22:47.440 |
|
some entries in it, like Wikidata, and so
|
|
|
00:22:44.279 --> 00:22:50.919 |
|
then given entity-relation-entity
|
|
|
00:22:47.440 --> 00:22:52.919 |
|
triples um can you extract all text that |
|
|
|
00:22:50.919 --> 00:22:54.799 |
|
matches this particular relation type |
|
|
|
00:22:52.919 --> 00:22:56.480 |
|
and use it to train a relation extractor |
|
|
|
00:22:54.799 --> 00:22:59.640 |
|
a supervised relation |
|
|
|
00:22:56.480 --> 00:23:01.880 |
|
extractor so the way this works |
|
|
|
00:22:59.640 --> 00:23:04.039 |
|
is like let's say we have this is an old |
|
|
|
00:23:01.880 --> 00:23:06.120 |
|
paper so the examples are also old but |
|
|
|
00:23:04.039 --> 00:23:08.039 |
|
um let's say we have Steven Spielberg |
|
|
|
00:23:06.120 --> 00:23:10.159 |
|
being a director of the film Saving |
|
|
|
00:23:08.039 --> 00:23:12.840 |
|
Private Ryan and that's included in our |
|
|
|
00:23:10.159 --> 00:23:14.840 |
|
uh our knowledge base so what it would |
|
|
|
00:23:12.840 --> 00:23:17.080 |
|
do is it would find all sentences that |
|
|
|
00:23:14.840 --> 00:23:19.400 |
|
have Steven Spielberg and Saving Private |
|
|
|
00:23:17.080 --> 00:23:22.080 |
|
Ryan included in them and it would label |
|
|
|
00:23:19.400 --> 00:23:24.159 |
|
this as like a positive example of that |
|
|
|
00:23:22.080 --> 00:23:28.240 |
|
relation.
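
NOTE
A toy sketch of this distant-supervision labeling loop (real systems use
entity linking rather than raw string matching):
    def distant_supervision_examples(kb_triples, sentences):
        examples = []
        for head, relation, tail in kb_triples:
            for sent in sentences:
                if head in sent and tail in sent:  # naive entity match
                    # treated as a (possibly noisy) positive example
                    examples.append((sent, head, tail, relation))
        return examples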
|
|
|
00:23:24.159 --> 00:23:30.760 |
|
So this in general is often okay, it
|
|
|
00:23:28.240 --> 00:23:34.480 |
|
works reasonably well but the problem |
|
|
|
00:23:30.760 --> 00:23:37.200 |
|
with this is there are also um negative |
|
|
|
00:23:34.480 --> 00:23:38.840 |
|
examples of this so like for example |
|
|
|
00:23:37.200 --> 00:23:40.480 |
|
here I think the first one is kind of a |
|
|
|
00:23:38.840 --> 00:23:43.240 |
|
negative example for the director |
|
|
|
00:23:40.480 --> 00:23:45.880 |
|
relation because Steven Spielberg's film |
|
|
|
00:23:43.240 --> 00:23:48.120 |
|
Saving Private Ryan doesn't actually |
|
|
|
00:23:45.880 --> 00:23:50.000 |
|
tell you he's the director it just tells |
|
|
|
00:23:48.120 --> 00:23:52.520 |
|
you that he's somehow affiliated with it |
|
|
|
00:23:50.000 --> 00:23:54.840 |
|
he could be the writer or he could be uh |
|
|
|
00:23:52.520 --> 00:23:57.679 |
|
the actor or or something else like that |
|
|
|
00:23:54.840 --> 00:24:00.440 |
|
so this is a nice way to create data for |
|
|
|
00:23:57.679 --> 00:24:03.640 |
|
basically free but at the same time uh |
|
|
|
00:24:00.440 --> 00:24:06.159 |
|
you can like create noisy examples and |
|
|
|
00:24:03.640 --> 00:24:06.159 |
|
that can be a |
|
|
|
00:24:07.159 --> 00:24:14.600 |
|
problem so um there's been a lot of work |
|
|
|
00:24:11.400 --> 00:24:16.000 |
|
on this, um, relation
|
|
|
00:24:14.600 --> 00:24:17.840 |
|
classification with neural networks |
|
|
|
00:24:16.000 --> 00:24:20.840 |
|
there's a lot of uh different methods |
|
|
|
00:24:17.840 --> 00:24:23.159 |
|
that could be uh doing this most of them |
|
|
|
00:24:20.840 --> 00:24:24.919 |
|
work by extracting features and then |
|
|
|
00:24:23.159 --> 00:24:27.039 |
|
classifying somehow although there are |
|
|
|
00:24:24.919 --> 00:24:29.960 |
|
some uh large language model based |
|
|
|
00:24:27.039 --> 00:24:33.120 |
|
methods now um one one thing about |
|
|
|
00:24:29.960 --> 00:24:35.440 |
|
relation extraction or not kind of like |
|
|
|
00:24:33.120 --> 00:24:36.799 |
|
information extraction in general is |
|
|
|
00:24:35.440 --> 00:24:38.559 |
|
that very often you want to run this |
|
|
|
00:24:36.799 --> 00:24:40.200 |
|
over like a huge Corpus you want to run |
|
|
|
00:24:38.559 --> 00:24:42.320 |
|
it over the whole internet or other |
|
|
|
00:24:40.200 --> 00:24:45.000 |
|
things like that so from that point of |
|
|
|
00:24:42.320 --> 00:24:47.159 |
|
view like I I said I could just ask |
|
|
|
00:24:45.000 --> 00:24:49.480 |
|
mistol to give me the answer about like |
|
|
|
00:24:47.159 --> 00:24:52.440 |
|
whether cars are included in sentences |
|
|
|
00:24:49.480 --> 00:24:55.120 |
|
but if you want to run you know gp4 over |
|
|
|
00:24:52.440 --> 00:24:56.799 |
|
the whole internet that's a pretty big |
|
|
|
00:24:55.120 --> 00:25:00.159 |
|
budget and you might want to reconsider |
|
|
|
00:24:56.799 --> 00:25:02.440 |
|
that so there are so um there is also |
|
|
|
00:25:00.159 --> 00:25:04.440 |
|
some you know benefit in having cheap |
|
|
|
00:25:02.440 --> 00:25:07.200 |
|
and lightweight |
|
|
|
00:25:04.440 --> 00:25:09.159 |
|
methods so basically what this |
|
|
|
00:25:07.200 --> 00:25:11.279 |
|
particular paper did is it extracted |
|
|
|
00:25:09.159 --> 00:25:12.760 |
|
features in in classified so it |
|
|
|
00:25:11.279 --> 00:25:15.600 |
|
extracted lexical features of the |
|
|
|
00:25:12.760 --> 00:25:20.240 |
|
entities themselves and features of the |
|
|
|
00:25:15.600 --> 00:25:22.360 |
|
whole span and so like the way I uh most |
|
|
|
00:25:20.240 --> 00:25:26.960 |
|
modern methods for this do this is they |
|
|
|
00:25:22.360 --> 00:25:29.399 |
|
basically um extract features from the |
|
|
|
00:25:26.960 --> 00:25:31.679 |
|
first part of the first entity the |
|
|
|
00:25:29.399 --> 00:25:33.760 |
|
second part of the the first entity the |
|
|
|
00:25:31.679 --> 00:25:36.360 |
|
first part of the second entity and the |
|
|
|
00:25:33.760 --> 00:25:37.720 |
|
last part of the uh second entity and |
|
|
|
00:25:36.360 --> 00:25:39.600 |
|
take all of those embeddings feed them |
|
|
|
00:25:37.720 --> 00:25:41.440 |
|
into like an MLP or something like that |
|
|
|
00:25:39.600 --> 00:25:44.039 |
|
and then make a prediction about whether |
|
|
|
00:25:41.440 --> 00:25:45.760 |
|
that relation exists so if you have an |
|
|
|
00:25:44.039 --> 00:25:47.840 |
|
embedding model this is relatively easy |
|
|
|
00:25:45.760 --> 00:25:50.360 |
|
to do you feed it through like uh |
|
|
|
00:25:47.840 --> 00:25:51.919 |
|
RoBERTa, or you feed it through Mistral,
|
|
|
00:25:50.360 --> 00:25:54.559 |
|
and get the embeddings for each of the |
|
|
|
00:25:51.919 --> 00:25:55.840 |
|
tokens and um and then you make a |
|
|
|
00:25:54.559 --> 00:25:58.840 |
|
prediction based on those four |
|
|
|
00:25:55.840 --> 00:25:58.840 |
|
embeddings |
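
NOTE
A sketch of this four-boundary-embedding classifier in PyTorch (layer
sizes are illustrative, not taken from any particular paper):
    import torch
    import torch.nn as nn
    class SpanPairClassifier(nn.Module):
        def __init__(self, dim, n_relations):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.Tanh(),
                                     nn.Linear(dim, n_relations))
        def forward(self, token_embs, span1, span2):
            # token_embs: (seq_len, dim), e.g. from RoBERTa; spans are (start, end)
            feats = torch.cat([token_embs[span1[0]], token_embs[span1[1]],
                               token_embs[span2[0]], token_embs[span2[1]]])
            return self.mlp(feats)  # relation logits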
|
|
|
00:26:00.600 --> 00:26:04.840 |
|
um the details of that are like not |
|
|
|
00:26:03.520 --> 00:26:07.320 |
|
super important unless you're going to |
|
|
|
00:26:04.840 --> 00:26:09.279 |
|
go in and implement it yourself so you |
|
|
|
00:26:07.320 --> 00:26:10.919 |
|
can um like if you're actually going to |
|
|
|
00:26:09.279 --> 00:26:12.120 |
|
be doing relation extraction obviously |
|
|
|
00:26:10.919 --> 00:26:14.279 |
|
the details are important but I'm |
|
|
|
00:26:12.120 --> 00:26:16.000 |
|
assuming that most people won't be uh |
|
|
|
00:26:14.279 --> 00:26:19.720 |
|
you know doing that as your final |
|
|
|
00:26:16.000 --> 00:26:21.240 |
|
project but um one really interesting |
|
|
|
00:26:19.720 --> 00:26:22.919 |
|
thing that is relevant even if you're |
|
|
|
00:26:21.240 --> 00:26:26.360 |
|
not doing, uh, relation
|
|
|
00:26:22.919 --> 00:26:29.360 |
|
extraction is how you can model noise |
|
|
|
00:26:26.360 --> 00:26:32.600 |
|
because this um as I said they're |
|
|
|
00:26:29.360 --> 00:26:35.720 |
|
creating lots of, like, semi-noisy data,
|
|
|
00:26:32.600 --> 00:26:38.919 |
|
and a lot of the work in getting good |
|
|
|
00:26:35.720 --> 00:26:40.360 |
|
models for relation extraction has been
|
|
|
00:26:38.919 --> 00:26:41.799 |
|
how do we deal with this distant |
|
|
|
00:26:40.360 --> 00:26:43.799 |
|
supervision noise and I'm just going to |
|
|
|
00:26:41.799 --> 00:26:45.760 |
|
give one example here but there's like a |
|
|
|
00:26:43.799 --> 00:26:49.120 |
|
series of papers after this that also |
|
|
|
00:26:45.760 --> 00:26:50.600 |
|
tried to do similar things so the idea |
|
|
|
00:26:49.120 --> 00:26:53.600 |
|
is that there's noise in the distant |
|
|
|
00:26:50.600 --> 00:26:56.559 |
|
supervision labels um and so we want to |
|
|
|
00:26:53.600 --> 00:27:01.039 |
|
model and mitigate that noise and the |
|
|
|
00:26:56.559 --> 00:27:03.919 |
|
way this paper does this is they have an |
|
|
|
00:27:01.039 --> 00:27:06.679 |
|
encoder and from the encoder you |
|
|
|
00:27:03.919 --> 00:27:10.960 |
|
calculate embeddings and make |
|
|
|
00:27:06.679 --> 00:27:14.279 |
|
predictions and so you have a small set |
|
|
|
00:27:10.960 --> 00:27:16.080 |
|
of like very high quality data and this |
|
|
|
00:27:14.279 --> 00:27:17.760 |
|
small set of very high quality data you |
|
|
|
00:27:16.080 --> 00:27:19.880 |
|
can basically trust that all of the data |
|
|
|
00:27:17.760 --> 00:27:22.320 |
|
is not noisy like maybe it's manually |
|
|
|
00:27:19.880 --> 00:27:23.720 |
|
annotated data and you have like 5,000 |
|
|
|
00:27:22.320 --> 00:27:25.000 |
|
examples of it or something like that |
|
|
|
00:27:23.720 --> 00:27:26.880 |
|
and then separately from that you have |
|
|
|
00:27:25.000 --> 00:27:28.440 |
|
like 5 million examples of automatically |
|
|
|
00:27:26.880 --> 00:27:30.799 |
|
labeled data that might be good might |
|
|
|
00:27:28.440 --> 00:27:32.679 |
|
not be good and so what they do is |
|
|
|
00:27:30.799 --> 00:27:34.200 |
|
essentially at the beginning they take |
|
|
|
00:27:32.679 --> 00:27:36.520 |
|
this encoder get embeddings make |
|
|
|
00:27:34.200 --> 00:27:38.000 |
|
predictions over the high quality data |
|
|
|
00:27:36.520 --> 00:27:40.320 |
|
and then they have a separate noise |
|
|
|
00:27:38.000 --> 00:27:43.440 |
|
modeling layer where what this noise |
|
|
|
00:27:40.320 --> 00:27:46.919 |
|
modeling layer does is it has a |
|
|
|
00:27:43.440 --> 00:27:50.039 |
|
transition matrix which says, given that
|
|
|
00:27:46.919 --> 00:27:53.279 |
|
we made a particular
|
|
|
00:27:50.039 --> 00:27:55.159 |
|
prediction over classes because this is |
|
|
|
00:27:53.279 --> 00:27:59.919 |
|
essentially a multiclass classification |
|
|
|
00:27:55.159 --> 00:28:01.519 |
|
problem they transform the |
|
|
|
00:27:59.919 --> 00:28:03.159 |
|
sorry I don't remember if they transform |
|
|
|
00:28:01.519 --> 00:28:04.640 |
|
the probabilities or the logits, I
|
|
|
00:28:03.159 --> 00:28:07.320 |
|
think it's the probabilities but they |
|
|
|
00:28:04.640 --> 00:28:12.799 |
|
transform the probabilities and get a |
|
|
|
00:28:07.320 --> 00:28:14.720 |
|
final uh distribution after noise and so |
|
|
|
00:28:12.799 --> 00:28:17.399 |
|
that means that you can basically smooth |
|
|
|
00:28:14.720 --> 00:28:19.240 |
|
out this uh distribution and account for |
|
|
|
00:28:17.399 --> 00:28:20.880 |
|
the fact that the labels may be noisy or |
|
|
|
00:28:19.240 --> 00:28:24.399 |
|
may not be
|
|
|
00:28:20.880 --> 00:28:26.600 |
|
noisy um then they add additional |
|
|
|
00:28:24.399 --> 00:28:28.559 |
|
normalization on this transition matrix
|
|
|
00:28:26.600 --> 00:28:32.440 |
|
using something called trace
|
|
|
00:28:28.559 --> 00:28:35.840 |
|
normalization to move this matrix closer to
|
|
|
00:28:32.440 --> 00:28:38.480 |
|
the identity, which says that
|
|
|
00:28:35.840 --> 00:28:40.720 |
|
the predictions are probably not wrong |
|
|
|
00:28:38.480 --> 00:28:43.159 |
|
all the time uh the predictions are |
|
|
|
00:28:40.720 --> 00:28:45.360 |
|
probably correct you know a lot of the |
|
|
|
00:28:43.159 --> 00:28:46.600 |
|
time they're not correct all the time uh |
|
|
|
00:28:45.360 --> 00:28:49.720 |
|
so then you have that trace
|
|
|
00:28:46.600 --> 00:28:51.880 |
|
normalization competing with um this uh |
|
|
|
00:28:49.720 --> 00:28:55.440 |
|
trying to give you like a more smooth |
|
|
|
00:28:51.880 --> 00:28:58.760 |
|
distribution and reduce your, uh,
|
|
|
00:28:55.440 --> 00:29:00.320 |
|
like, reduce your loss. So, um, I think
|
|
|
00:28:58.760 --> 00:29:02.559 |
|
this is actually a pretty interesting |
|
|
|
00:29:00.320 --> 00:29:04.480 |
|
idea and it can be used not just for |
|
|
|
00:29:02.559 --> 00:29:08.600 |
|
relation extraction but also in cases |
|
|
|
00:29:04.480 --> 00:29:08.600 |
|
where, um, you might have noisy labels overall.
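
NOTE
A sketch of such a noise layer (the paper's exact formulation may differ;
this version transforms probabilities and regularizes the trace):
    import torch
    import torch.nn as nn
    class NoiseLayer(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            # initialized near the identity: labels assumed mostly correct
            self.T_logits = nn.Parameter(torch.eye(n_classes) * 5.0)
        def forward(self, clean_probs):
            T = torch.softmax(self.T_logits, dim=1)  # row i: P(noisy j | true i)
            return clean_probs @ T                   # distribution over noisy labels
        def trace_penalty(self):
            # adding this (times a weight) to the loss maximizes the trace,
            # pulling T back toward the identity
            return -torch.trace(torch.softmax(self.T_logits, dim=1))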
|
|
|
00:29:08.799 --> 00:29:14.320 |
|
Um, so are there any questions
|
|
|
00:29:12.360 --> 00:29:15.720 |
|
about this or any of the things that are |
|
|
|
00:29:14.320 --> 00:29:18.480 |
|
going on |
|
|
|
00:29:15.720 --> 00:29:20.279 |
|
here um even if you're completely |
|
|
|
00:29:18.480 --> 00:29:21.960 |
|
uninterested in relation extraction I'd |
|
|
|
00:29:20.279 --> 00:29:23.720 |
|
encourage you to think about like what |
|
|
|
00:29:21.960 --> 00:29:26.159 |
|
are |
|
|
|
00:29:23.720 --> 00:29:27.360 |
|
some examples of things that you are |
|
|
|
00:29:26.159 --> 00:29:29.519 |
|
interested in where you could get |
|
|
|
00:29:27.360 --> 00:29:31.840 |
|
potentially get labels, and how could you
|
|
|
00:29:29.519 --> 00:29:34.880 |
|
denoise there, like, that might be, uh, you
|
|
|
00:29:31.840 --> 00:29:34.880 |
|
know, a thing to think
|
|
|
00:29:35.679 --> 00:29:39.919 |
|
about. Okay, so this was a very brief
|
|
|
00:29:38.320 --> 00:29:42.679 |
|
overview of how we create knowledge |
|
|
|
00:29:39.919 --> 00:29:44.080 |
|
bases uh from textual data or from |
|
|
|
00:29:42.679 --> 00:29:47.159 |
|
knowledge graph data, structured
|
|
|
00:29:44.080 --> 00:29:48.840 |
|
knowledge graph data. Um, so now I'd like to
|
|
|
00:29:47.159 --> 00:29:51.519 |
|
talk a little bit about how to use |
|
|
|
00:29:48.840 --> 00:29:53.960 |
|
knowledge bases to inform neural |
|
|
|
00:29:51.519 --> 00:29:56.159 |
|
models and there's a bunch of different |
|
|
|
00:29:53.960 --> 00:29:59.519 |
|
ways to do this |
|
|
|
00:29:56.159 --> 00:30:02.600 |
|
um the |
|
|
|
00:29:59.519 --> 00:30:06.960 |
|
first way, um, is to
|
|
|
00:30:02.600 --> 00:30:09.840 |
|
improve embeddings uh |
|
|
|
00:30:06.960 --> 00:30:11.960 |
|
with existing lexicons and this example |
|
|
|
00:30:09.840 --> 00:30:14.679 |
|
is using non-contextual embeddings like |
|
|
|
00:30:11.960 --> 00:30:16.240 |
|
not the ones we get from neural
|
|
|
00:30:14.679 --> 00:30:17.919 |
|
language models, but ones we get from
|
|
|
00:30:16.240 --> 00:30:20.919 |
|
just running an embedding model like
|
|
|
00:30:17.919 --> 00:30:22.960 |
|
word2vec or something like this. Um, and what
|
|
|
00:30:20.919 --> 00:30:25.640 |
|
they did in this paper is they |
|
|
|
00:30:22.960 --> 00:30:27.600 |
|
essentially um retrofitted embeddings to |
|
|
|
00:30:25.640 --> 00:30:30.840 |
|
existing lexicons by doing post-hoc
|
|
|
00:30:27.600 --> 00:30:34.080 |
|
transformations of the embeddings, so that they
|
|
|
00:30:30.840 --> 00:30:36.840 |
|
matched the, um, knowledge graph or
|
|
|
00:30:34.080 --> 00:30:39.080 |
|
lexicon better. And so the way they did
|
|
|
00:30:36.840 --> 00:30:41.880 |
|
this is |
|
|
|
00:30:39.080 --> 00:30:43.720 |
|
um they started out with pre-trained |
|
|
|
00:30:41.880 --> 00:30:45.399 |
|
embeddings and they had a double |
|
|
|
00:30:43.720 --> 00:30:47.240 |
|
objective of making the transformed
|
|
|
00:30:45.399 --> 00:30:49.120 |
|
embeddings close to the neighbors and |
|
|
|
00:30:47.240 --> 00:30:52.519 |
|
close to the original |
|
|
|
00:30:49.120 --> 00:30:58.840 |
|
embedding and the way they did this is |
|
|
|
00:30:52.519 --> 00:30:58.840 |
|
they essentially had um this |
|
|
|
00:30:59.799 --> 00:31:03.720 |
|
regularization term over here, so
|
|
|
00:31:01.880 --> 00:31:06.200 |
|
this regularization term is basically |
|
|
|
00:31:03.720 --> 00:31:08.279 |
|
saying um I don't want you to move your |
|
|
|
00:31:06.200 --> 00:31:09.360 |
|
embeddings too far away from how they |
|
|
|
00:31:08.279 --> 00:31:11.679 |
|
were |
|
|
|
00:31:09.360 --> 00:31:14.799 |
|
initialized and then at the same time I |
|
|
|
00:31:11.679 --> 00:31:17.279 |
|
would like you to make these uh |
|
|
|
00:31:14.799 --> 00:31:19.600 |
|
embeddings closer to each other if they |
|
|
|
00:31:17.279 --> 00:31:21.240 |
|
are synonyms of each other so they did |
|
|
|
00:31:19.600 --> 00:31:23.600 |
|
this using word net and they basically |
|
|
|
00:31:21.240 --> 00:31:26.200 |
|
took the words uh that were synonyms to |
|
|
|
00:31:23.600 --> 00:31:28.679 |
|
each other in synsets with each other and
|
|
|
00:31:26.200 --> 00:31:30.000 |
|
they tried to regularize the synonyms to |
|
|
|
00:31:28.679 --> 00:31:32.120 |
|
be closer together but also the |
|
|
|
00:31:30.000 --> 00:31:33.639 |
|
embeddings to be closer to how they |
|
|
|
00:31:32.120 --> 00:31:35.960 |
|
started out
|
|
|
00:31:33.639 --> 00:31:38.799 |
|
and there were also examples of
|
|
|
00:31:35.960 --> 00:31:40.720 |
|
forcing antonyms away from each other
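
As a minimal sketch of the idea (not the paper's exact code), the retrofitting update can be written as follows, assuming `emb` maps each word to a numpy vector and `synonyms` maps each word to its WordNet synonym list; `alpha` and `beta` are illustrative weights for the two halves of the objective:

```python
import numpy as np

def retrofit(emb, synonyms, n_iters=10, alpha=1.0, beta=1.0):
    # Start from the pre-trained vectors and iteratively update them.
    new = {w: v.copy() for w, v in emb.items()}
    for _ in range(n_iters):
        for w, nbrs in synonyms.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in new or not nbrs:
                continue
            # Closed-form update for the double objective: stay close to
            # the original vector while moving toward synonym vectors.
            new[w] = (alpha * emb[w] + beta * sum(new[n] for n in nbrs)) \
                     / (alpha + beta * len(nbrs))
    return new
```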
|
|
|
00:31:38.799 --> 00:31:42.480 |
|
this is a little bit
|
|
|
00:31:40.720 --> 00:31:44.799 |
|
of an older work so it was working on |
|
|
|
00:31:42.480 --> 00:31:47.600 |
|
non-contextualized embeddings but we |
|
|
|
00:31:44.799 --> 00:31:49.399 |
|
could do something very similar for um |
|
|
|
00:31:47.600 --> 00:31:52.000 |
|
more modern models in like Knowledge |
|
|
|
00:31:49.399 --> 00:31:55.320 |
|
Graph embeddings for example so let's |
|
|
|
00:31:52.000 --> 00:31:58.960 |
|
say we had |
|
|
|
00:31:55.320 --> 00:32:03.240 |
|
um a model that identifies
|
|
|
00:31:58.960 --> 00:32:06.600 |
|
entities and then different examples of |
|
|
|
00:32:03.240 --> 00:32:06.600 |
|
those entities across different |
|
|
|
00:32:07.159 --> 00:32:11.480 |
|
contexts um let's go back to the wiki |
|
|
|
00:32:20.639 --> 00:32:26.840 |
|
data and so um if we had lots of |
|
|
|
00:32:23.960 --> 00:32:29.360 |
|
examples of Joe Biden um Joe Biden is |
|
|
|
00:32:26.840 --> 00:32:35.159 |
|
referred to in a number ways like Joe |
|
|
|
00:32:29.360 --> 00:32:44.440 |
|
Biden Joseph Biden Joseph R Biden um J |
|
|
|
00:32:35.159 --> 00:32:47.880 |
|
JRB I guess um POTUS 48 46 sorry um and uh
|
|
|
00:32:44.440 --> 00:32:50.799 |
|
so you could find different examples of |
|
|
|
00:32:47.880 --> 00:32:52.799 |
|
things that match these strings um and |
|
|
|
00:32:50.799 --> 00:32:55.360 |
|
even do entity linking uh which I'll |
|
|
|
00:32:52.799 --> 00:32:57.200 |
|
I'll talk about in a little bit and then |
|
|
|
00:32:55.360 --> 00:32:58.760 |
|
encourage the embeddings for all of these
|
|
|
00:32:57.200 --> 00:33:01.360 |
|
different instances to be closer
|
|
|
00:32:58.760 --> 00:33:04.039 |
|
together to make your model
|
|
|
00:33:01.360 --> 00:33:06.799 |
|
distinguish them less and ensure that
|
|
|
00:33:04.039 --> 00:33:08.399 |
|
they uh they get closer embeddings and that
|
|
|
00:33:06.799 --> 00:33:11.639 |
|
could improve like question answering |
|
|
|
00:33:08.399 --> 00:33:11.639 |
|
look up other stuff like |
|
|
|
00:33:12.960 --> 00:33:19.880 |
|
that |
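
To make that concrete, here is a toy sketch (not from a specific paper) of an auxiliary loss that pulls the mention embeddings of the same entity toward their centroid; `mention_vecs` is a hypothetical list with one `(num_mentions, dim)` tensor per entity, e.g. encodings of "Joe Biden", "Joseph R Biden", and so on:

```python
import torch

def alias_tying_loss(mention_vecs):
    # Sum of squared distances from each mention vector to the centroid
    # of its entity's mentions; minimizing this ties aliases together.
    loss = torch.tensor(0.0)
    for vecs in mention_vecs:
        centroid = vecs.mean(dim=0, keepdim=True)
        loss = loss + ((vecs - centroid) ** 2).sum()
    return loss
```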
|
|
|
00:33:14.919 --> 00:33:23.399 |
|
cool um yeah I have a question about |
|
|
|
00:33:19.880 --> 00:33:25.399 |
|
this so what happens if you do like subword
|
|
|
00:33:23.399 --> 00:33:28.000 |
|
modeling and then you don't have like |
|
|
|
00:33:25.399 --> 00:33:30.440 |
|
the embedding for that entire string
|
|
|
00:33:28.000 --> 00:33:32.320 |
|
that is supposed to be close yeah what
|
|
|
00:33:30.440 --> 00:33:34.279 |
|
happens if you do subword modeling and |
|
|
|
00:33:32.320 --> 00:33:35.480 |
|
you don't have the embedding uh you |
|
|
|
00:33:34.279 --> 00:33:37.159 |
|
don't have a single embedding that |
|
|
|
00:33:35.480 --> 00:33:40.360 |
|
corresponds to an entity so that's a |
|
|
|
00:33:37.159 --> 00:33:42.559 |
|
really good question um let me |
|
|
|
00:33:40.360 --> 00:33:44.240 |
|
check I don't think I actually have |
|
|
|
00:33:42.559 --> 00:33:46.600 |
|
these on the slide so I might have to |
|
|
|
00:33:44.240 --> 00:33:46.600 |
|
open a |
|
|
|
00:33:53.639 --> 00:33:59.720 |
|
paper yeah okay so there's a lot of |
|
|
|
00:33:56.440 --> 00:33:59.720 |
|
different ways to handle this |
|
|
|
00:34:11.520 --> 00:34:18.079 |
|
so there's two papers um the first
|
|
|
00:34:14.879 --> 00:34:20.000 |
|
paper is uh a really nice paper very |
|
|
|
00:34:18.079 --> 00:34:22.359 |
|
influential on the subject of |
|
|
|
00:34:20.000 --> 00:34:25.359 |
|
co-reference resolution and co-reference |
|
|
|
00:34:22.359 --> 00:34:27.240 |
|
resolution um is essentially trying to |
|
|
|
00:34:25.359 --> 00:34:30.000 |
|
identify when two spans correspond to |
|
|
|
00:34:27.240 --> 00:34:32.320 |
|
each other so like if I say Joe
|
|
|
00:34:30.000 --> 00:34:34.359 |
|
Biden early in a document and then later |
|
|
|
00:34:32.320 --> 00:34:35.480 |
|
in a document it just says Biden we want |
|
|
|
00:34:34.359 --> 00:34:38.839 |
|
to know that those two things are |
|
|
|
00:34:35.480 --> 00:34:40.919 |
|
referring to each other and then um we |
|
|
|
00:34:38.839 --> 00:34:42.839 |
|
had a paper later where we generalized |
|
|
|
00:34:40.919 --> 00:34:44.839 |
|
this and applied you know very similar |
|
|
|
00:34:42.839 --> 00:34:48.079 |
|
methodology to like lots and lots of |
|
|
|
00:34:44.839 --> 00:34:50.760 |
|
different analysis tasks but I can um I |
|
|
|
00:34:48.079 --> 00:34:53.839 |
|
can show the beginning here and |
|
|
|
00:34:50.760 --> 00:34:59.320 |
|
basically the methodology that they use |
|
|
|
00:34:53.839 --> 00:35:02.440 |
|
here um is they add |
|
|
|
00:34:59.320 --> 00:35:04.440 |
|
a and this is specifically for modeling |
|
|
|
00:35:02.440 --> 00:35:08.240 |
|
spans and getting embeddings out of |
|
|
|
00:35:04.440 --> 00:35:09.040 |
|
spans of uh tokens and what they did is |
|
|
|
00:35:08.240 --> 00:35:13.079 |
|
they |
|
|
|
00:35:09.040 --> 00:35:14.920 |
|
essentially have a model where you take |
|
|
|
00:35:13.079 --> 00:35:16.440 |
|
the thing from the beginning the |
|
|
|
00:35:14.920 --> 00:35:18.760 |
|
embedding from the beginning of the span |
|
|
|
00:35:16.440 --> 00:35:22.040 |
|
the embedding from the end of the span |
|
|
|
00:35:18.760 --> 00:35:24.280 |
|
and the average embedding of all of the |
|
|
|
00:35:22.040 --> 00:35:26.280 |
|
embeddings in the span and that gives |
|
|
|
00:35:24.280 --> 00:35:27.480 |
|
you three vectors for any span right |
|
|
|
00:35:26.280 --> 00:35:30.160 |
|
because you can always get the beginning |
|
|
|
00:35:27.480 --> 00:35:33.280 |
|
the end and the mean and then based on
|
|
|
00:35:30.160 --> 00:35:36.560 |
|
that they feed that through um like a |
|
|
|
00:35:33.280 --> 00:35:37.800 |
|
neural network and get a new embedding so
|
|
|
00:35:36.560 --> 00:35:40.000 |
|
they feed that through a transformation |
|
|
|
00:35:37.800 --> 00:35:42.520 |
|
and get a new embedding and so that's the method that they used
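
A minimal sketch of that span representation, assuming per-token embeddings are already computed (the original coreference work uses an attention-weighted average over the span; the plain mean here follows the simplified description above):

```python
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    def __init__(self, d_model: int, d_span: int):
        super().__init__()
        # Transform [start; end; mean] into a single span embedding.
        self.proj = nn.Linear(3 * d_model, d_span)

    def forward(self, token_embs, start, end):
        # token_embs: (seq_len, d_model); the span covers tokens [start, end].
        span = token_embs[start:end + 1]
        feats = torch.cat([span[0], span[-1], span.mean(dim=0)])
        return torch.relu(self.proj(feats))
```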
|
|
|
00:35:40.000 --> 00:35:44.200 |
|
and I think our
|
|
|
00:35:42.520 --> 00:35:46.640 |
|
paper actually has a |
|
|
|
00:35:44.200 --> 00:35:49.640 |
|
better |
|
|
|
00:35:46.640 --> 00:35:52.640 |
|
um a better figure of how you can |
|
|
|
00:35:49.640 --> 00:35:56.680 |
|
actually use that actually maybe it |
|
|
|
00:35:52.640 --> 00:35:58.160 |
|
doesn't okay but anyway um yeah because |
|
|
|
00:35:56.680 --> 00:36:00.240 |
|
uh yeah here's the figure |
|
|
|
00:35:58.160 --> 00:36:01.520 |
|
so then you can use that for a number of |
|
|
|
00:36:00.240 --> 00:36:03.040 |
|
things you could use that to like look |
|
|
|
00:36:01.520 --> 00:36:06.359 |
|
up something in a knowledge base you |
|
|
|
00:36:03.040 --> 00:36:08.599 |
|
could also use that to um decide whether |
|
|
|
00:36:06.359 --> 00:36:10.440 |
|
two spans are co-referent by feeding in |
|
|
|
00:36:08.599 --> 00:36:12.800 |
|
like the first span and the second Span |
|
|
|
00:36:10.440 --> 00:36:14.960 |
|
in and then predicting whether those two |
|
|
|
00:36:12.800 --> 00:36:19.640 |
|
spans correspond to each other or not
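
A toy pairwise scorer on top of span embeddings like the ones above, predicting whether two spans are coreferent; the exact features and layer sizes are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class CorefScorer(nn.Module):
    def __init__(self, d_span: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * d_span, d_span), nn.ReLU(), nn.Linear(d_span, 1))

    def forward(self, span_a, span_b):
        # Concatenate both spans and their elementwise product.
        feats = torch.cat([span_a, span_b, span_a * span_b])
        return torch.sigmoid(self.mlp(feats))  # probability of coreference
```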
|
|
|
00:36:14.960 --> 00:36:21.240 |
|
so this general idea of modeling
|
|
|
00:36:19.640 --> 00:36:22.960 |
|
spans and then modeling relations |
|
|
|
00:36:21.240 --> 00:36:24.520 |
|
between the spans allows you to solve |
|
|
|
00:36:22.960 --> 00:36:26.119 |
|
like lots of different tasks like part |
|
|
|
00:36:24.520 --> 00:36:27.920 |
|
of speech tagging or named entity |
|
|
|
00:36:26.119 --> 00:36:30.319 |
|
recognition or relation extraction or |
|
|
|
00:36:27.920 --> 00:36:31.920 |
|
other stuff like that so um yeah |
|
|
|
00:36:30.319 --> 00:36:34.040 |
|
actually I realized now that I should |
|
|
|
00:36:31.920 --> 00:36:35.079 |
|
have probably talked about these in the |
|
|
|
00:36:34.040 --> 00:36:36.560 |
|
slides where I was talking about |
|
|
|
00:36:35.079 --> 00:36:38.599 |
|
modeling but that that would be my |
|
|
|
00:36:36.560 --> 00:36:42.319 |
|
recommended way of doing |
|
|
|
00:36:38.599 --> 00:36:42.319 |
|
it cool any other |
|
|
|
00:36:43.839 --> 00:36:49.480 |
|
questions nice okay |
|
|
|
00:36:46.880 --> 00:36:52.880 |
|
um |
|
|
|
00:36:49.480 --> 00:36:55.119 |
|
so another question is how can we inject |
|
|
|
00:36:52.880 --> 00:36:56.640 |
|
knowledge into language models um |
|
|
|
00:36:55.119 --> 00:36:58.720 |
|
there's a bunch of different ways to do |
|
|
|
00:36:56.640 --> 00:37:03.079 |
|
this um |
|
|
|
00:36:58.720 --> 00:37:05.000 |
|
one very easy way is to somehow look up |
|
|
|
00:37:03.079 --> 00:37:09.640 |
|
relevant knowledge in your knowledge |
|
|
|
00:37:05.000 --> 00:37:09.640 |
|
graph and um oh |
|
|
|
00:37:10.280 --> 00:37:15.440 |
|
sorry I was presenting on my own screen |
|
|
|
00:37:13.040 --> 00:37:18.240 |
|
not the screen that everybody can see so |
|
|
|
00:37:15.440 --> 00:37:22.000 |
|
um to look up all of the uh knowledge in |
|
|
|
00:37:18.240 --> 00:37:24.000 |
|
a Knowledge Graph and um somehow provide |
|
|
|
00:37:22.000 --> 00:37:26.800 |
|
it to the model one way you can provide |
|
|
|
00:37:24.000 --> 00:37:28.720 |
|
it to the model is through prompting um |
|
|
|
00:37:26.800 --> 00:37:32.400 |
|
but the problem with prompting is
|
|
|
00:37:28.720 --> 00:37:33.920 |
|
that you're not necessarily going to uh |
|
|
|
00:37:32.400 --> 00:37:37.319 |
|
be able |
|
|
|
00:37:33.920 --> 00:37:41.359 |
|
to utilize knowledge that is kind of |
|
|
|
00:37:37.319 --> 00:37:43.920 |
|
like minority knowledge because the |
|
|
|
00:37:41.359 --> 00:37:47.560 |
|
embeddings of the entities that you're |
|
|
|
00:37:43.920 --> 00:37:49.440 |
|
presenting may not be you know like well |
|
|
|
00:37:47.560 --> 00:37:51.839 |
|
learned so |
|
|
|
00:37:49.440 --> 00:37:53.200 |
|
you're requiring essentially the model |
|
|
|
00:37:51.839 --> 00:37:55.359 |
|
to be able to generalize from the |
|
|
|
00:37:53.200 --> 00:37:57.880 |
|
knowledge you provide in |
|
|
|
00:37:55.359 --> 00:38:00.839 |
|
the prompt despite the fact that the |
|
|
|
00:37:57.880 --> 00:38:02.240 |
|
prompt is like minor entities or other |
|
|
|
00:38:00.839 --> 00:38:07.040 |
|
things like that that are not as well |
|
|
|
00:38:02.240 --> 00:38:10.400 |
|
learned so as another um method to
|
|
|
00:38:07.040 --> 00:38:13.440 |
|
handle this um we previously proposed a |
|
|
|
00:38:10.400 --> 00:38:15.599 |
|
method that allows you |
|
|
|
00:38:13.440 --> 00:38:18.319 |
|
to essentially |
|
|
|
00:38:15.599 --> 00:38:21.319 |
|
predict instead of predicting directly |
|
|
|
00:38:18.319 --> 00:38:24.920 |
|
the words here you can predict a tag |
|
|
|
00:38:21.319 --> 00:38:27.200 |
|
that says birth name or a given name or |
|
|
|
00:38:24.920 --> 00:38:31.480 |
|
family name or something like that and |
|
|
|
00:38:27.200 --> 00:38:32.839 |
|
then post-hoc the model will fill in uh
|
|
|
00:38:31.480 --> 00:38:36.720 |
|
that like birth |
|
|
|
00:38:32.839 --> 00:38:39.400 |
|
name text based on a knowledge base so |
|
|
|
00:38:36.720 --> 00:38:41.079 |
|
um you know if you have a a Wikipedia |
|
|
|
00:38:39.400 --> 00:38:44.240 |
|
article about Barack Obama that you're |
|
|
|
00:38:41.079 --> 00:38:48.680 |
|
trying to write it could predict um |
|
|
|
00:38:44.240 --> 00:38:52.040 |
|
birth name born uh birth name comma born |
|
|
|
00:38:48.680 --> 00:38:55.359 |
|
in birth date and that's like a very |
|
|
|
00:38:52.040 --> 00:38:56.880 |
|
very common thing in Wikipedia right so |
|
|
|
00:38:55.359 --> 00:39:00.960 |
|
because of that it can predict it very |
|
|
|
00:38:56.880 --> 00:39:03.160 |
|
consistently very uh formulaically and |
|
|
|
00:39:00.960 --> 00:39:04.599 |
|
that allows you to um you know with high |
|
|
|
00:39:03.160 --> 00:39:06.079 |
|
confidence get something that makes |
|
|
|
00:39:04.599 --> 00:39:08.599 |
|
sense and is factual and reduce |
|
|
|
00:39:06.079 --> 00:39:11.400 |
|
hallucination and other stuff like that |
|
|
|
00:39:08.599 --> 00:39:12.599 |
|
so um basically how could you inject |
|
|
|
00:39:11.400 --> 00:39:14.280 |
|
this into language models there's |
|
|
|
00:39:12.599 --> 00:39:16.240 |
|
multiple ways one is prompting that's |
|
|
|
00:39:14.280 --> 00:39:18.160 |
|
maybe the easier way another way is |
|
|
|
00:39:16.240 --> 00:39:21.520 |
|
through like templatic generation like |
|
|
|
00:39:18.160 --> 00:39:23.200 |
|
this where you generate placeholders uh |
|
|
|
00:39:21.520 --> 00:39:25.200 |
|
for all the information you want to add |
|
|
|
00:39:23.200 --> 00:39:26.480 |
|
and then you add the information uh |
|
|
|
00:39:25.200 --> 00:39:29.359 |
|
directly from the knowledge base through |
|
|
|
00:39:26.480 --> 00:39:29.359 |
|
the placeholders like that
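
A toy sketch of that post-hoc fill-in step; the placeholder tag format and the knowledge-base entries here are hypothetical, just to show the mechanics:

```python
import re

kb = {"Barack Obama": {"birth_name": "Barack Hussein Obama II",
                       "birth_date": "August 4, 1961"}}

def fill_placeholders(generated: str, entity: str) -> str:
    # Replace each <tag> the model generated with the KB value, if any.
    return re.sub(r"<([a-z_]+)>",
                  lambda m: kb[entity].get(m.group(1), m.group(0)),
                  generated)

text = "<birth_name> (born <birth_date>) is an American politician."
print(fill_placeholders(text, "Barack Obama"))
```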
|
|
|
00:39:30.680 --> 00:39:36.800 |
|
cool um there's details about this
|
|
|
00:39:34.240 --> 00:39:38.920 |
|
in the paper like how we um formulate a |
|
|
|
00:39:36.800 --> 00:39:41.319 |
|
training objective for something like |
|
|
|
00:39:38.920 --> 00:39:43.480 |
|
this and the difficulty in formulating a |
|
|
|
00:39:41.319 --> 00:39:46.400 |
|
training objective is that you need to |
|
|
|
00:39:43.480 --> 00:39:48.280 |
|
figure out when you want to replace |
|
|
|
00:39:46.400 --> 00:39:49.720 |
|
things so like you might not always want |
|
|
|
00:39:48.280 --> 00:39:51.000 |
|
to replace with birth name you might |
|
|
|
00:39:49.720 --> 00:39:53.920 |
|
want to replace with given name and |
|
|
|
00:39:51.000 --> 00:39:55.839 |
|
family name and we demonstrate that you |
|
|
|
00:39:53.920 --> 00:39:58.400 |
|
can figure out how to do this by |
|
|
|
00:39:55.839 --> 00:40:00.960 |
|
essentially like marginalizing over the
|
|
|
00:39:58.400 --> 00:40:03.520 |
|
various ways of uh of doing this but |
|
|
|
00:40:00.960 --> 00:40:05.880 |
|
that's kind of more complex detail |
|
|
|
00:40:03.520 --> 00:40:05.880 |
|
that's in the |
|
|
|
00:40:08.440 --> 00:40:15.480 |
|
paper another really interesting |
|
|
|
00:40:11.000 --> 00:40:17.319 |
|
question um that uh this is also a
|
|
|
00:40:15.480 --> 00:40:19.440 |
|
paper that I was involved in from uh |
|
|
|
00:40:17.319 --> 00:40:22.040 |
|
four years ago but I feel like this is |
|
|
|
00:40:19.440 --> 00:40:25.040 |
|
not entirely solved even in like modern |
|
|
|
00:40:22.040 --> 00:40:26.920 |
|
RAG systems uh today is how can we
|
|
|
00:40:25.040 --> 00:40:28.880 |
|
reason over a lot of text that's |
|
|
|
00:40:26.920 --> 00:40:32.440 |
|
included in a knowledge |
|
|
|
00:40:28.880 --> 00:40:35.839 |
|
base um oh sorry reason over a text corpus
|
|
|
00:40:32.440 --> 00:40:40.480 |
|
like we reason over knowledge bases |
|
|
|
00:40:35.839 --> 00:40:43.280 |
|
and basically uh what we did was we |
|
|
|
00:40:40.480 --> 00:40:44.960 |
|
answered questions using text corpora as |
|
|
|
00:40:43.280 --> 00:40:48.680 |
|
traceable knowledge
|
|
|
00:40:44.960 --> 00:40:52.800 |
|
bases and we did relevance matching over |
|
|
|
00:40:48.680 --> 00:40:54.920 |
|
mentions um and the way we did this is |
|
|
|
00:40:52.800 --> 00:40:57.440 |
|
we created mention
|
|
|
00:40:54.920 --> 00:40:59.480 |
|
vectors and the mention
|
|
|
00:40:57.440 --> 00:41:01.720 |
|
vectors of all of the mentions in the |
|
|
|
00:40:59.480 --> 00:41:04.920 |
|
knowledge base of particular |
|
|
|
00:41:01.720 --> 00:41:05.920 |
|
entities um and then we retrieved |
|
|
|
00:41:04.920 --> 00:41:09.599 |
|
relevant |
|
|
|
00:41:05.920 --> 00:41:13.440 |
|
mentions um from pre-trained Models uh |
|
|
|
00:41:09.599 --> 00:41:15.040 |
|
so we we ran embeddings and generated uh |
|
|
|
00:41:13.440 --> 00:41:16.000 |
|
embeddings for each of the mentions in |
|
|
|
00:41:15.040 --> 00:41:20.440 |
|
the whole |
|
|
|
00:41:16.000 --> 00:41:25.440 |
|
Corpus and based on this let
|
|
|
00:41:20.440 --> 00:41:29.119 |
|
me find the place over here so based on |
|
|
|
00:41:25.440 --> 00:41:32.720 |
|
this we basically um encoded all of |
|
|
|
00:41:29.119 --> 00:41:35.040 |
|
these uh in here and then we had a dense |
|
|
|
00:41:32.720 --> 00:41:37.359 |
|
query vector and the dense query Vector |
|
|
|
00:41:35.040 --> 00:41:41.640 |
|
was specifically trained so that it |
|
|
|
00:41:37.359 --> 00:41:44.280 |
|
would be able to identify entity |
|
|
|
00:41:41.640 --> 00:41:46.760 |
|
mentions that answered the problem so if |
|
|
|
00:41:44.280 --> 00:41:50.240 |
|
we had like when was The Grateful Dead |
|
|
|
00:41:46.760 --> 00:41:52.520 |
|
and uh Bob Dylan album released uh we |
|
|
|
00:41:50.240 --> 00:41:54.760 |
|
would have Bob Dylan be one vector The |
|
|
|
00:41:52.520 --> 00:41:56.560 |
|
Grateful Dead be another vector and the |
|
|
|
00:41:54.760 --> 00:41:58.200 |
|
model would be specifically trained so |
|
|
|
00:41:56.560 --> 00:42:00.040 |
|
that when you took the entity
|
|
|
00:41:58.200 --> 00:42:03.319 |
|
embedding of this and matched it with an |
|
|
|
00:42:00.040 --> 00:42:05.400 |
|
entity embedding in this big Corpus of |
|
|
|
00:42:03.319 --> 00:42:07.920 |
|
encoded things here it would be most |
|
|
|
00:42:05.400 --> 00:42:10.400 |
|
likely to return relevant information to |
|
|
|
00:42:07.920 --> 00:42:13.160 |
|
answer these like entity relation |
|
|
|
00:42:10.400 --> 00:42:14.680 |
|
questions so then the question is how do |
|
|
|
00:42:13.160 --> 00:42:18.040 |
|
we train a model like this how do we |
|
|
|
00:42:14.680 --> 00:42:20.280 |
|
train like a dense uh embedding model so |
|
|
|
00:42:18.040 --> 00:42:21.520 |
|
that it gets relevant information for |
|
|
|
00:42:20.280 --> 00:42:23.800 |
|
answering |
|
|
|
00:42:21.520 --> 00:42:26.920 |
|
questions and basically the way we did |
|
|
|
00:42:23.800 --> 00:42:29.280 |
|
this was through weak supervision uh
|
|
|
00:42:26.920 --> 00:42:31.640 |
|
just like I talked about for relation |
|
|
|
00:42:29.280 --> 00:42:33.599 |
|
extraction in relation extraction we can |
|
|
|
00:42:31.640 --> 00:42:35.680 |
|
create weak supervision by taking a big |
|
|
|
00:42:33.599 --> 00:42:37.960 |
|
existing knowledge base and identifying |
|
|
|
00:42:35.680 --> 00:42:40.920 |
|
all of the sentences where the answer is |
|
|
|
00:42:37.960 --> 00:42:43.319 |
|
included and so what we did is we took |
|
|
|
00:42:40.920 --> 00:42:45.880 |
|
this big existing knowledge base and |
|
|
|
00:42:43.319 --> 00:42:47.920 |
|
said okay what are some of the relations |
|
|
|
00:42:45.880 --> 00:42:49.800 |
|
in the knowledge base one example of a |
|
|
|
00:42:47.920 --> 00:42:51.559 |
|
relation in the knowledge base is Steven |
|
|
|
00:42:49.800 --> 00:42:54.359 |
|
Spielberg is the director of Saving |
|
|
|
00:42:51.559 --> 00:42:57.319 |
|
Private Ryan so we created questions |
|
|
|
00:42:54.359 --> 00:42:59.119 |
|
that said um who
|
|
|
00:42:57.319 --> 00:43:01.079 |
|
was the director of Saving Private Ryan |
|
|
|
00:42:59.119 --> 00:43:03.920 |
|
we can create those with templates uh |
|
|
|
00:43:01.079 --> 00:43:06.359 |
|
easily for many different relations and |
|
|
|
00:43:03.920 --> 00:43:09.480 |
|
then we took the embedding for Saving |
|
|
|
00:43:06.359 --> 00:43:10.760 |
|
Private Ryan in that question and we |
|
|
|
00:43:09.480 --> 00:43:14.200 |
|
tried to |
|
|
|
00:43:10.760 --> 00:43:17.119 |
|
upweight all of the Saving Private Ryan |
|
|
|
00:43:14.200 --> 00:43:19.680 |
|
embeddings over all of Wikipedia where |
|
|
|
00:43:17.119 --> 00:43:23.160 |
|
Steven Spielberg cooccurred in that |
|
|
|
00:43:19.680 --> 00:43:25.640 |
|
sentence so that tries to match um you |
|
|
|
00:43:23.160 --> 00:43:27.079 |
|
know artificially created questions with |
|
|
|
00:43:25.640 --> 00:43:29.040 |
|
sentences that would be the answer |
|
|
|
00:43:27.079 --> 00:43:31.040 |
|
to that question and so that
|
|
|
00:43:29.040 --> 00:43:32.480 |
|
gives you like supervision it gives you |
|
|
|
00:43:31.040 --> 00:43:35.079 |
|
a lot of data to train over it gives you |
|
|
|
00:43:32.480 --> 00:43:38.920 |
|
a good model so that allowed us to
|
|
|
00:43:35.079 --> 00:43:41.319 |
|
learn this model well so um this is one |
|
|
|
00:43:38.920 --> 00:43:43.160 |
|
example of how you can do like RAG
|
|
|
00:43:41.319 --> 00:43:46.200 |
|
specifically like informed by knowledge |
|
|
|
00:43:43.160 --> 00:43:46.200 |
|
bases and stuff like that
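
A toy sketch of that weak supervision recipe: turn knowledge-base triples into templated questions, then treat corpus sentences where the two entities co-occur as weak positive retrieval targets. The templates and data here are illustrative:

```python
triples = [("Steven Spielberg", "director_of", "Saving Private Ryan")]
templates = {"director_of": "Who was the director of {tail}?"}

def make_training_pairs(triples, corpus):
    pairs = []
    for head, rel, tail in triples:
        question = templates[rel].format(tail=tail)
        for sent in corpus:
            # Weak positive: the answer entity co-occurs with the
            # question entity in the same sentence.
            if head in sent and tail in sent:
                pairs.append((question, sent))
    return pairs

corpus = ["Saving Private Ryan was directed by Steven Spielberg in 1998."]
print(make_training_pairs(triples, corpus))
```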
|
|
|
00:43:47.280 --> 00:43:52.160 |
|
um any questions about this
|
|
|
00:43:53.480 --> 00:43:57.680 |
|
or |
|
|
|
00:43:55.079 --> 00:44:00.079 |
|
okay so another thing that I'd like to
|
|
|
00:43:57.680 --> 00:44:03.599 |
|
go into is uh something we call schema |
|
|
|
00:44:00.079 --> 00:44:06.240 |
|
free extraction and so if I go back to |
|
|
|
00:44:03.599 --> 00:44:09.960 |
|
the wiki Data |
|
|
|
00:44:06.240 --> 00:44:10.760 |
|
Page um Wiki data has something we call |
|
|
|
00:44:09.960 --> 00:44:13.599 |
|
a |
|
|
|
00:44:10.760 --> 00:44:16.880 |
|
schema and the schema is basically like |
|
|
|
00:44:13.599 --> 00:44:19.640 |
|
what are the relations that are included |
|
|
|
00:44:16.880 --> 00:44:21.000 |
|
in the database so one of the relations |
|
|
|
00:44:19.640 --> 00:44:25.079 |
|
that's included in the database is
|
|
|
00:44:21.000 --> 00:44:25.079 |
|
instance of I guess also |
|
|
|
00:44:25.200 --> 00:44:29.040 |
|
image lots of images |
|
|
|
00:44:29.079 --> 00:44:33.880 |
|
um |
|
|
|
00:44:30.440 --> 00:44:35.680 |
|
signature uh sex or gender country of |
|
|
|
00:44:33.880 --> 00:44:38.319 |
|
citizenship and these relations are like |
|
|
|
00:44:35.680 --> 00:44:41.079 |
|
decided a priori by the people who |
|
|
|
00:44:38.319 --> 00:44:43.200 |
|
created Wiki data um and there's lots |
|
|
|
00:44:41.079 --> 00:44:45.880 |
|
and lots of them but that doesn't |
|
|
|
00:44:43.200 --> 00:44:48.880 |
|
necessarily mean |
|
|
|
00:44:45.880 --> 00:44:50.400 |
|
that like similarly to the problem of |
|
|
|
00:44:48.880 --> 00:44:51.839 |
|
not having all of the entities we can't |
|
|
|
00:44:50.400 --> 00:44:55.119 |
|
have all of the relations and just to |
|
|
|
00:44:51.839 --> 00:44:57.280 |
|
give one example I was um in preparation |
|
|
|
00:44:55.119 --> 00:44:59.680 |
|
for our large language models lecture I |
|
|
|
00:44:57.280 --> 00:45:02.640 |
|
actually created some structured data |
|
|
|
00:44:59.680 --> 00:45:04.319 |
|
about large language models and some of |
|
|
|
00:45:02.640 --> 00:45:06.119 |
|
the structured data about
|
|
|
00:45:04.319 --> 00:45:09.319 |
|
large language models that I created was |
|
|
|
00:45:06.119 --> 00:45:11.440 |
|
like what is the variety of positional |
|
|
|
00:45:09.319 --> 00:45:13.079 |
|
embedding that they're using or |
|
|
|
00:45:11.440 --> 00:45:15.800 |
|
positional embedding variety and |
|
|
|
00:45:13.079 --> 00:45:18.720 |
|
positional embedding variety is not in |
|
|
|
00:45:15.800 --> 00:45:20.359 |
|
Wiki data I think um I'd be surprised if |
|
|
|
00:45:18.720 --> 00:45:23.200 |
|
it was in Wiki data but I think it's not |
|
|
|
00:45:20.359 --> 00:45:25.760 |
|
in Wiki data um so like as you go down |
|
|
|
00:45:23.200 --> 00:45:27.760 |
|
to like more esoteric Concepts or like |
|
|
|
00:45:25.760 --> 00:45:29.599 |
|
specialized domains or stuff like that |
|
|
|
00:45:27.760 --> 00:45:31.359 |
|
you're almost always guaranteed to not |
|
|
|
00:45:29.599 --> 00:45:34.040 |
|
you know have all the entities you need |
|
|
|
00:45:31.359 --> 00:45:36.680 |
|
or not have all the relations you need |
|
|
|
00:45:34.040 --> 00:45:38.160 |
|
so that's the problem that schema free |
|
|
|
00:45:36.680 --> 00:45:39.920 |
|
extraction is trying to solve it's |
|
|
|
00:45:38.160 --> 00:45:41.680 |
|
trying to figure out how we can like |
|
|
|
00:45:39.920 --> 00:45:45.920 |
|
jointly figure out the schema together |
|
|
|
00:45:41.680 --> 00:45:45.920 |
|
with uh the information you want to |
|
|
|
00:45:48.480 --> 00:45:54.040 |
|
extract and the um the most famous |
|
|
|
00:45:52.319 --> 00:45:55.599 |
|
example of this is something called open |
|
|
|
00:45:54.040 --> 00:45:57.200 |
|
information extraction in open |
|
|
|
00:45:55.599 --> 00:46:01.160 |
|
information extraction basically what |
|
|
|
00:45:57.200 --> 00:46:04.040 |
|
it's saying is um we don't need a schema |
|
|
|
00:46:01.160 --> 00:46:06.359 |
|
uh there's no there's no schema um the |
|
|
|
00:46:04.040 --> 00:46:08.720 |
|
only schema that we have is the actual |
|
|
|
00:46:06.359 --> 00:46:12.200 |
|
text in the sentences that we're |
|
|
|
00:46:08.720 --> 00:46:14.520 |
|
referring to um the entities so if we |
|
|
|
00:46:12.200 --> 00:46:16.040 |
|
have United has a hub in Chicago
|
|
|
00:46:14.520 --> 00:46:17.359 |
|
which is the headquarters of United |
|
|
|
00:46:16.040 --> 00:46:21.200 |
|
Continental |
|
|
|
00:46:17.359 --> 00:46:25.880 |
|
Holdings um the relation is literally |
|
|
|
00:46:21.200 --> 00:46:29.359 |
|
has a hub in um that's the relation
|
|
|
00:46:25.880 --> 00:46:33.359 |
|
um and then for this we have Chicago is |
|
|
|
00:46:29.359 --> 00:46:35.559 |
|
the headquarters of um but the problem |
|
|
|
00:46:33.359 --> 00:46:37.520 |
|
with this uh is that this cannot |
|
|
|
00:46:35.559 --> 00:46:40.359 |
|
abstract away so if we had another |
|
|
|
00:46:37.520 --> 00:46:42.000 |
|
sentence that said Chicago or United |
|
|
|
00:46:40.359 --> 00:46:44.319 |
|
Continental Holdings has its |
|
|
|
00:46:42.000 --> 00:46:45.720 |
|
headquarters in Chicago that would be |
|
|
|
00:46:44.319 --> 00:46:49.800 |
|
treated as completely different you |
|
|
|
00:46:45.720 --> 00:46:49.800 |
|
wouldn't be able to like group those two |
|
|
|
00:46:51.119 --> 00:46:57.720 |
|
together so um in open information |
|
|
|
00:46:55.000 --> 00:47:00.079 |
|
extraction actually a lot of the methods |
|
|
|
00:46:57.720 --> 00:47:02.800 |
|
this is one of the few things where |
|
|
|
00:47:00.079 --> 00:47:05.480 |
|
people still use rule-based systems as |
|
|
|
00:47:02.800 --> 00:47:07.640 |
|
kind of like uh you know almost |
|
|
|
00:47:05.480 --> 00:47:09.319 |
|
state-of-the-art systems but basically |
|
|
|
00:47:07.640 --> 00:47:11.559 |
|
the reason why you're able to do this is |
|
|
|
00:47:09.319 --> 00:47:14.440 |
|
it's not actually that hard to extract |
|
|
|
00:47:11.559 --> 00:47:16.839 |
|
kind of the relevant strings between uh |
|
|
|
00:47:14.440 --> 00:47:19.599 |
|
two entities and so the both the |
|
|
|
00:47:16.839 --> 00:47:21.359 |
|
Precision and recall are pretty high and |
|
|
|
00:47:19.599 --> 00:47:24.079 |
|
another reason why people use rule-based |
|
|
|
00:47:21.359 --> 00:47:25.760 |
|
systems is because they um like you want |
|
|
|
00:47:24.079 --> 00:47:27.440 |
|
to run it over the whole web and running |
|
|
|
00:47:25.760 --> 00:47:29.079 |
|
a neural model over the whole web is |
|
|
|
00:47:27.440 --> 00:47:32.000 |
|
expensive so you can use a rule-based
|
|
|
00:47:29.079 --> 00:47:35.319 |
|
model so some examples of this include |
|
|
|
00:47:32.000 --> 00:47:37.640 |
|
TextRunner and ReVerb um the basic
|
|
|
00:47:35.319 --> 00:47:41.000 |
|
ideas behind them is that you use a |
|
|
|
00:47:37.640 --> 00:47:43.720 |
|
parser to extract um to do a syntactic |
|
|
|
00:47:41.000 --> 00:47:45.760 |
|
analysis of the sentence um and extract
|
|
|
00:47:43.720 --> 00:47:47.640 |
|
according to rules so for example
|
|
|
00:47:45.760 --> 00:47:50.160 |
|
the relation must contain a |
|
|
|
00:47:47.640 --> 00:47:52.720 |
|
predicate um the subject and object must |
|
|
|
00:47:50.160 --> 00:47:56.040 |
|
be noun phrases other things like this
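
A toy sketch in the spirit of those constraints (relation phrase must be a verb phrase, arguments must be noun phrases); the input is assumed to be pre-chunked (text, label) segments from a tagger or chunker, and real systems like ReVerb use much richer patterns:

```python
def extract_triples(chunks):
    triples = []
    for i, (text, label) in enumerate(chunks):
        if label != "VP":
            continue
        # Take the nearest noun phrase on each side of the verb phrase.
        left = next((t for t, l in reversed(chunks[:i]) if l == "NP"), None)
        right = next((t for t, l in chunks[i + 1:] if l == "NP"), None)
        if left and right:
            triples.append((left, text, right))
    return triples

chunks = [("United", "NP"), ("has a hub in", "VP"), ("Chicago", "NP")]
print(extract_triples(chunks))  # [('United', 'has a hub in', 'Chicago')]
```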
|
|
|
00:47:52.720 --> 00:47:57.640 |
|
um and then what they did later is
|
|
|
00:47:56.040 --> 00:47:59.240 |
|
what they did in this paper
|
|
|
00:47:57.640 --> 00:48:00.800 |
|
arguably this is maybe no longer |
|
|
|
00:47:59.240 --> 00:48:02.280 |
|
necessary with the compute power we have |
|
|
|
00:48:00.800 --> 00:48:04.000 |
|
now but they trained an even faster |
|
|
|
00:48:02.280 --> 00:48:06.960 |
|
model to extract over large amounts of |
|
|
|
00:48:04.000 --> 00:48:08.720 |
|
data so they basically um used this as
|
|
|
00:48:06.960 --> 00:48:10.599 |
|
weak supervision and then trained a
|
|
|
00:48:08.720 --> 00:48:12.160 |
|
model that could do it even faster with |
|
|
|
00:48:10.599 --> 00:48:14.680 |
|
a sequence-based
|
|
|
00:48:12.160 --> 00:48:18.119 |
|
model |
|
|
|
00:48:14.680 --> 00:48:19.880 |
|
um another thing that they did was um |
|
|
|
00:48:18.119 --> 00:48:22.280 |
|
they aggregated multiple pieces of |
|
|
|
00:48:19.880 --> 00:48:24.480 |
|
evidence heuristically to find common and
|
|
|
00:48:22.280 --> 00:48:28.760 |
|
therefore potentially reliable |
|
|
|
00:48:24.480 --> 00:48:28.760 |
|
extractions so like |
|
|
|
00:48:29.800 --> 00:48:36.960 |
|
any piece of text on the internet like |
|
|
|
00:48:31.559 --> 00:48:40.200 |
|
could be a lie right so um you know |
|
|
|
00:48:36.960 --> 00:48:43.400 |
|
if I might write on my blog United has
|
|
|
00:48:40.200 --> 00:48:45.119 |
|
a Hub in like Denver or on the other |
|
|
|
00:48:43.400 --> 00:48:48.240 |
|
hand |
|
|
|
00:48:45.119 --> 00:48:50.839 |
|
um wait is that
|
|
|
00:48:48.240 --> 00:48:52.680 |
|
right something has a hub in Denver
|
|
|
00:48:50.839 --> 00:48:54.960 |
|
but United has a Hub in Pittsburgh is |
|
|
|
00:48:52.680 --> 00:48:58.040 |
|
definitely wrong so let's uh let's go |
|
|
|
00:48:54.960 --> 00:49:00.000 |
|
with that um uh so somebody could write |
|
|
|
00:48:58.040 --> 00:49:02.359 |
|
that on the internet and in fact because |
|
|
|
00:49:00.000 --> 00:49:06.440 |
|
I just said it it's probably in YouTube |
|
|
|
00:49:02.359 --> 00:49:09.119 |
|
comments somewhere but um uh |
|
|
|
00:49:06.440 --> 00:49:10.760 |
|
like any any piece of information on the |
|
|
|
00:49:09.119 --> 00:49:13.079 |
|
internet could be wrong so basically |
|
|
|
00:49:10.760 --> 00:49:16.680 |
|
they had um heuristic methods to filter |
|
|
|
00:49:13.079 --> 00:49:19.559 |
|
these out and usually these were |
|
|
|
00:49:16.680 --> 00:49:21.559 |
|
frequency based so it's like um if both |
|
|
|
00:49:19.559 --> 00:49:23.520 |
|
United and Pittsburgh are very common |
|
|
|
00:49:21.559 --> 00:49:26.000 |
|
but it's very rare for somebody to
|
|
|
00:49:23.520 --> 00:49:27.799 |
|
say United has a Hub in Pittsburgh then |
|
|
|
00:49:26.000 --> 00:49:29.200 |
|
that means it's statistically unlikely |
|
|
|
00:49:27.799 --> 00:49:30.799 |
|
for this to be correct because if it |
|
|
|
00:49:29.200 --> 00:49:33.280 |
|
were correct we'd expect to see it much |
|
|
|
00:49:30.799 --> 00:49:36.799 |
|
more frequently so um those were the |
|
|
|
00:49:33.280 --> 00:49:36.799 |
|
kind of things that they did here
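
A toy sketch of frequency-based filtering: keep an extraction only if it is seen often enough across the corpus; the data and threshold here are illustrative:

```python
from collections import Counter

extractions = [
    ("United", "has a hub in", "Chicago"),
    ("United", "has a hub in", "Chicago"),
    ("United", "has a hub in", "Chicago"),
    ("United", "has a hub in", "Pittsburgh"),  # rare, likely wrong
]
counts = Counter(extractions)
reliable = [triple for triple, c in counts.items() if c >= 2]
print(reliable)  # keeps only the Chicago extraction
```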
|
|
|
00:49:37.520 --> 00:49:44.440 |
|
there's also some neural models for
|
|
|
00:49:40.400 --> 00:49:46.839 |
|
open IE um I think these are uh used
|
|
|
00:49:44.440 --> 00:49:48.440 |
|
maybe a little bit less often um but |
|
|
|
00:49:46.839 --> 00:49:52.559 |
|
basically heuristics are still not |
|
|
|
00:49:48.440 --> 00:49:55.280 |
|
perfect and so what they did the problem |
|
|
|
00:49:52.559 --> 00:49:56.720 |
|
with um like not relying on heuristics |
|
|
|
00:49:55.280 --> 00:49:58.880 |
|
is you need to get training data from |
|
|
|
00:49:56.720 --> 00:50:01.880 |
|
somewhere so there's a rather clever |
|
|
|
00:49:58.880 --> 00:50:03.599 |
|
paper um and again if you're not |
|
|
|
00:50:01.880 --> 00:50:05.119 |
|
interested in relation extraction in |
|
|
|
00:50:03.599 --> 00:50:07.559 |
|
particular I think this is one thing |
|
|
|
00:50:05.119 --> 00:50:10.000 |
|
that's still worth paying attention to |
|
|
|
00:50:07.559 --> 00:50:12.680 |
|
um which is |
|
|
|
00:50:10.000 --> 00:50:14.559 |
|
they demonstrated that it's possible to |
|
|
|
00:50:12.680 --> 00:50:16.319 |
|
create relatively large data sets by |
|
|
|
00:50:14.559 --> 00:50:18.160 |
|
asking people simple |
|
|
|
00:50:16.319 --> 00:50:21.440 |
|
questions |
|
|
|
00:50:18.160 --> 00:50:24.480 |
|
and in particular they wanted to |
|
|
|
00:50:21.440 --> 00:50:27.119 |
|
get relation extraction data sets that |
|
|
|
00:50:24.480 --> 00:50:30.799 |
|
are like um |
|
|
|
00:50:27.119 --> 00:50:34.200 |
|
who finished something like UCD finished |
|
|
|
00:50:30.799 --> 00:50:37.760 |
|
the 2006 championships and if you
|
|
|
00:50:34.200 --> 00:50:40.720 |
|
ask people like okay select this span um |
|
|
|
00:50:37.760 --> 00:50:44.559 |
|
select the entity span the relation
|
|
|
00:50:40.720 --> 00:50:46.160 |
|
span and the um the second entity the
|
|
|
00:50:44.559 --> 00:50:49.079 |
|
head entity the relation and the tail |
|
|
|
00:50:46.160 --> 00:50:51.839 |
|
entity select it on this interface and |
|
|
|
00:50:49.079 --> 00:50:54.200 |
|
then uh tell me is it this relation or |
|
|
|
00:50:51.839 --> 00:50:55.640 |
|
this relation or this relation that's |
|
|
|
00:50:54.200 --> 00:50:58.160 |
|
actually pretty hard and getting like |
|
|
|
00:50:55.640 --> 00:51:01.280 |
|
crowd workers to start learning how to |
|
|
|
00:50:58.160 --> 00:51:03.280 |
|
do that task is a bit tricky and it |
|
|
|
00:51:01.280 --> 00:51:06.400 |
|
takes some you know it takes some time |
|
|
|
00:51:03.280 --> 00:51:07.799 |
|
to get them onboarded basically um but |
|
|
|
00:51:06.400 --> 00:51:09.760 |
|
basically what they said is instead |
|
|
|
00:51:07.799 --> 00:51:11.359 |
|
we'll just ask them questions where the |
|
|
|
00:51:09.760 --> 00:51:14.240 |
|
answer to the question basically gives |
|
|
|
00:51:11.359 --> 00:51:17.160 |
|
us the answer to what the relation is so |
|
|
|
00:51:14.240 --> 00:51:20.319 |
|
they ask like who finished something and |
|
|
|
00:51:17.160 --> 00:51:23.680 |
|
the answer is like UCD and um what did |
|
|
|
00:51:20.319 --> 00:51:25.359 |
|
someone finish the 2006 Championship |
|
|
|
00:51:23.680 --> 00:51:28.920 |
|
what did someone finish
|
|
|
00:51:25.359 --> 00:51:31.760 |
|
something as and basically um in doing |
|
|
|
00:51:28.920 --> 00:51:33.319 |
|
this they created uh something called |
|
|
|
00:51:31.760 --> 00:51:34.359 |
|
semantic roles which we're actually |
|
|
|
00:51:33.319 --> 00:51:35.960 |
|
probably going to talk about a little |
|
|
|
00:51:34.359 --> 00:51:37.559 |
|
bit later but you can take the semantic |
|
|
|
00:51:35.960 --> 00:51:41.200 |
|
roles and then you can use them to |
|
|
|
00:51:37.559 --> 00:51:43.920 |
|
annotate uh relation extraction data and |
|
|
|
00:51:41.200 --> 00:51:46.720 |
|
then they trained a supervised neural |
|
|
|
00:51:43.920 --> 00:51:46.720 |
|
tagger for
|
|
|
00:51:48.799 --> 00:51:53.480 |
|
this |
|
|
|
00:51:50.480 --> 00:51:56.040 |
|
cool um so another thing I'd like to |
|
|
|
00:51:53.480 --> 00:51:57.880 |
|
talk about is I talked about learning um |
|
|
|
00:51:56.040 --> 00:51:59.920 |
|
information about entities from entity |
|
|
|
00:51:57.880 --> 00:52:02.079 |
|
embeddings but you can actually learn |
|
|
|
00:51:59.920 --> 00:52:04.520 |
|
information about relations from |
|
|
|
00:52:02.079 --> 00:52:07.680 |
|
relation information about other |
|
|
|
00:52:04.520 --> 00:52:12.359 |
|
relations and this can help solve the |
|
|
|
00:52:07.680 --> 00:52:16.119 |
|
problem um of like essentially the fact |
|
|
|
00:52:12.359 --> 00:52:18.760 |
|
that open IE is not able to abstract and |
|
|
|
00:52:16.119 --> 00:52:20.680 |
|
generalize so word embeddings or entity |
|
|
|
00:52:18.760 --> 00:52:23.079 |
|
embeddings give information of the word |
|
|
|
00:52:20.680 --> 00:52:26.920 |
|
in context um which can be indicative |
|
|
|
00:52:23.079 --> 00:52:29.640 |
|
for knowledge uh knowledge bases |
|
|
|
00:52:26.920 --> 00:52:32.640 |
|
but other relations or combinations |
|
|
|
00:52:29.640 --> 00:52:34.960 |
|
thereof are also indicative of them and |
|
|
|
00:52:32.640 --> 00:52:36.960 |
|
um if anybody is familiar with graphs or |
|
|
|
00:52:34.960 --> 00:52:39.520 |
|
graph processing there's the whole idea |
|
|
|
00:52:36.960 --> 00:52:41.400 |
|
of um link prediction where you're given |
|
|
|
00:52:39.520 --> 00:52:42.680 |
|
like a small number of links in a
|
|
|
00:52:41.400 --> 00:52:45.760 |
|
graph and you want to predict what other |
|
|
|
00:52:42.680 --> 00:52:50.559 |
|
links are likely to uh |
|
|
|
00:52:45.760 --> 00:52:52.920 |
|
exist and like as I said um a lot of uh |
|
|
|
00:52:50.559 --> 00:52:54.839 |
|
you know very prominent AI researchers |
|
|
|
00:52:52.920 --> 00:52:57.440 |
|
got their start in uh relation |
|
|
|
00:52:54.839 --> 00:53:01.480 |
|
extraction and uh Sutskever is another one
|
|
|
00:52:57.440 --> 00:53:04.319 |
|
of them actually um and uh basically |
|
|
|
00:53:01.480 --> 00:53:07.880 |
|
this 2009 paper proposed to use tensor |
|
|
|
00:53:04.319 --> 00:53:09.400 |
|
decomposition to do uh induction of
|
|
|
00:53:07.880 --> 00:53:13.520 |
|
relations |
|
|
|
00:53:09.400 --> 00:53:15.319 |
|
and the way it worked is um you model |
|
|
|
00:53:13.520 --> 00:53:18.400 |
|
relations by decomposing a tensor |
|
|
|
00:53:15.319 --> 00:53:21.599 |
|
containing entity relation entity tuples
|
|
|
00:53:18.400 --> 00:53:24.000 |
|
so you have the left entity the right |
|
|
|
00:53:21.599 --> 00:53:27.160 |
|
entity and whether the relation exists |
|
|
|
00:53:24.000 --> 00:53:31.319 |
|
is this big um tensor in the
|
|
|
00:53:27.160 --> 00:53:33.160 |
|
Middle where these are embeddings of the |
|
|
|
00:53:31.319 --> 00:53:35.760 |
|
left entity these are embeddings of the |
|
|
|
00:53:33.160 --> 00:53:38.839 |
|
right entity and then the the depth of |
|
|
|
00:53:35.760 --> 00:53:40.680 |
|
the tensor is like which relations exist |
|
|
|
00:53:38.839 --> 00:53:43.760 |
|
and so we know that some exist so we |
|
|
|
00:53:40.680 --> 00:53:46.640 |
|
give them a one we know others
|
|
|
00:53:43.760 --> 00:53:48.680 |
|
don't exist so we give them a zero um |
|
|
|
00:53:46.640 --> 00:53:51.040 |
|
and then we do a low rank approximation |
|
|
|
00:53:48.680 --> 00:53:52.559 |
|
of this tensor and if we do a low rank |
|
|
|
00:53:51.040 --> 00:53:55.720 |
|
approximation of the tensor we have |
|
|
|
00:53:52.559 --> 00:53:57.280 |
|
reconstruction error basically so when we
|
|
|
00:53:55.720 --> 00:53:59.960 |
|
reconstruct there are some things that
|
|
|
00:53:57.280 --> 00:54:01.960 |
|
were previously zero become one and so |
|
|
|
00:53:59.960 --> 00:54:04.760 |
|
the things that were previously zero and |
|
|
|
00:54:01.960 --> 00:54:07.880 |
|
then become close to one are the ones |
|
|
|
00:54:04.760 --> 00:54:10.559 |
|
that we think like actually might exist |
|
|
|
00:54:07.880 --> 00:54:12.000 |
|
they might be real um they might be real |
|
|
|
00:54:10.559 --> 00:54:13.640 |
|
relations that we were just missing |
|
|
|
00:54:12.000 --> 00:54:16.599 |
|
because our previous knowledge base was |
|
|
|
00:54:13.640 --> 00:54:16.599 |
|
incomplete
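
A toy sketch of link prediction by low-rank tensor reconstruction. `X` has shape (n_relations, n_entities, n_entities), with observed triples set to 1. This is a simplified RESCAL-style factorization fit by gradient descent, not the 2009 paper's exact Bayesian clustered model; cells whose reconstruction comes out near 1 are candidate missing facts:

```python
import numpy as np

def reconstruct(X, rank=4, n_iters=200, lr=0.01):
    n_rel, n_ent, _ = X.shape
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.1, size=(n_ent, rank))        # entity embeddings
    R = rng.normal(scale=0.1, size=(n_rel, rank, rank))  # relation matrices
    for _ in range(n_iters):
        for r in range(n_rel):
            # Gradient steps on the squared reconstruction error.
            err = A @ R[r] @ A.T - X[r]
            A -= lr * (err @ A @ R[r].T + err.T @ A @ R[r])
            R[r] -= lr * (A.T @ err @ A)
    return np.stack([A @ R[r] @ A.T for r in range(n_rel)])
```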
|
|
|
00:54:18.640 --> 00:54:26.880 |
|
and um one thing that takes
|
|
|
00:54:21.799 --> 00:54:28.559 |
|
us a step further is uh what if we
|
|
|
00:54:26.880 --> 00:54:30.079 |
|
actually do have a knowledge base or
|
|
|
00:54:28.559 --> 00:54:31.839 |
|
what if we even have multiple knowledge |
|
|
|
00:54:30.079 --> 00:54:35.520 |
|
bases like what if we have Wiki data and |
|
|
|
00:54:31.839 --> 00:54:36.640 |
|
we have wordnet and we have um uh other |
|
|
|
00:54:35.520 --> 00:54:38.920 |
|
things like |
|
|
|
00:54:36.640 --> 00:54:40.680 |
|
this and in addition to that we also |
|
|
|
00:54:38.920 --> 00:54:43.400 |
|
have open IE |
|
|
|
00:54:40.680 --> 00:54:45.960 |
|
extractions so there's an idea of |
|
|
|
00:54:43.400 --> 00:54:47.880 |
|
something called Universal schema and |
|
|
|
00:54:45.960 --> 00:54:50.200 |
|
what universal schemas do is they embed
|
|
|
00:54:47.880 --> 00:54:55.119 |
|
relations from multiple schemas or
|
|
|
00:54:50.200 --> 00:54:56.960 |
|
schemata in the same space and based on |
|
|
|
00:54:55.119 --> 00:54:59.559 |
|
this they then |
|
|
|
00:54:56.960 --> 00:55:01.359 |
|
predict which ones are likely to
|
|
|
00:54:59.559 --> 00:55:04.400 |
|
exist or which ones are not likely to |
|
|
|
00:55:01.359 --> 00:55:06.680 |
|
exist so here we might have Freebase
|
|
|
00:55:04.400 --> 00:55:08.640 |
|
or Wiki data we might have another uh |
|
|
|
00:55:06.680 --> 00:55:11.559 |
|
kind of relation extraction data set |
|
|
|
00:55:08.640 --> 00:55:15.480 |
|
called TAC and then on the training data
|
|
|
00:55:11.559 --> 00:55:17.040 |
|
set we have um like all of these uh |
|
|
|
00:55:15.480 --> 00:55:20.240 |
|
things that are like positive or |
|
|
|
00:55:17.040 --> 00:55:23.960 |
|
negative or something like this and then |
|
|
|
00:55:20.240 --> 00:55:26.960 |
|
on the heldout data set we have only |
|
|
|
00:55:23.960 --> 00:55:29.480 |
|
information about like open IE
|
|
|
00:55:26.960 --> 00:55:30.920 |
|
for example so um for all of the |
|
|
|
00:55:29.480 --> 00:55:33.079 |
|
entities that exist in the knowledge |
|
|
|
00:55:30.920 --> 00:55:34.839 |
|
base we know you know whether the |
|
|
|
00:55:33.079 --> 00:55:36.039 |
|
relations exist for them but for all the
|
|
|
00:55:34.839 --> 00:55:39.640 |
|
entities that don't exist in the |
|
|
|
00:55:36.039 --> 00:55:41.760 |
|
database we don't know and so uh then |
|
|
|
00:55:39.640 --> 00:55:43.839 |
|
just from the existence of open IE |
|
|
|
00:55:41.760 --> 00:55:45.480 |
|
relations or non-existence of open IE |
|
|
|
00:55:43.839 --> 00:55:47.920 |
|
relations we can predict that other |
|
|
|
00:55:45.480 --> 00:55:49.359 |
|
relations might exist for example so |
|
|
|
00:55:47.920 --> 00:55:51.079 |
|
this is a great way to combine the two |
|
|
|
00:55:49.359 --> 00:55:53.920 |
|
together like open IE you can run it |
|
|
|
00:55:51.079 --> 00:55:55.880 |
|
over you know very large data sets um |
|
|
|
00:55:53.920 --> 00:55:58.000 |
|
but it doesn't have a good schema
|
|
|
00:55:55.880 --> 00:56:00.400 |
|
uh Wiki data has a good schema but you |
|
|
|
00:55:58.000 --> 00:56:02.960 |
|
can't you know it's all manually created |
|
|
|
00:56:00.400 --> 00:56:04.720 |
|
so you can suggest other ones and one |
|
|
|
00:56:02.960 --> 00:56:07.960 |
|
other like interesting thing is you can |
|
|
|
00:56:04.720 --> 00:56:09.640 |
|
suggest other um things that might exist |
|
|
|
00:56:07.960 --> 00:56:13.039 |
|
in Wiki data but you could also track |
|
|
|
00:56:09.640 --> 00:56:15.039 |
|
that back to the original text that |
|
|
|
00:56:13.039 --> 00:56:17.000 |
|
indicated that it might exist in Wiki |
|
|
|
00:56:15.039 --> 00:56:18.720 |
|
data so then you could have a human go |
|
|
|
00:56:17.000 --> 00:56:20.520 |
|
back and check it to make sure that |
|
|
|
00:56:18.720 --> 00:56:24.200 |
|
that's actually true and trustworthy and |
|
|
|
00:56:20.520 --> 00:56:24.200 |
|
other things like that |
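
A toy sketch of universal schema as matrix factorization: rows are entity pairs, columns are relations from any source, so knowledge-base relations and OpenIE surface patterns share one embedding space. All the data here is made up, and with such a tiny example the learned scores should be taken as illustrative only:

```python
import numpy as np

pairs = ["(United, Chicago)", "(Delta, Atlanta)", "(Obama, Hawaii)"]
rels = ["has_hub_in", "'has a hub in'", "born_in"]  # KB + OpenIE columns
# Observed cells: 1 = seen in the KB or text, 0 = sampled negative.
cells = {(1, 0): 1, (1, 1): 1, (0, 1): 1, (2, 2): 1,
         (2, 0): 0, (2, 1): 0, (0, 2): 0, (1, 2): 0}

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(len(pairs), 8))  # entity-pair vectors
R = rng.normal(scale=0.1, size=(len(rels), 8))   # relation vectors

for _ in range(500):  # logistic matrix factorization on observed cells
    for (i, j), y in cells.items():
        g = 1 / (1 + np.exp(-P[i] @ R[j])) - y
        P[i], R[j] = P[i] - 0.1 * g * R[j], R[j] - 0.1 * g * P[i]

# Score the held-out KB cell: does has_hub_in hold for (United, Chicago)?
print(1 / (1 + np.exp(-P[0] @ R[0])))
```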
|
|
|
00:56:26.400 --> 00:56:31.400 |
|
cool um so if you like uh you like |
|
|
|
00:56:29.400 --> 00:56:33.160 |
|
tensors or you like linear algebra or |
|
|
|
00:56:31.400 --> 00:56:34.720 |
|
things like this this is maybe something |
|
|
|
00:56:33.160 --> 00:56:37.880 |
|
that you could take a look at and think |
|
|
|
00:56:34.720 --> 00:56:40.240 |
|
a little bit more about um any any |
|
|
|
00:56:37.880 --> 00:56:40.240 |
|
questions |
|
|
|
00:56:42.799 --> 00:56:46.240 |
|
here okay |
|
|
|
00:56:46.880 --> 00:56:53.680 |
|
cool um so another thing I'd like to |
|
|
|
00:56:50.640 --> 00:56:56.920 |
|
talk about is uh modeling relation paths |
|
|
|
00:56:53.680 --> 00:57:00.359 |
|
so this is a really nice uh idea |
|
|
|
00:56:56.920 --> 00:57:00.359 |
|
which is you |
|
|
|
00:57:00.440 --> 00:57:05.000 |
|
can make inferences across multiple hops |
|
|
|
00:57:04.240 --> 00:57:08.400 |
|
of |
|
|
|
00:57:05.000 --> 00:57:12.280 |
|
relations um based on uh particular |
|
|
|
00:57:08.400 --> 00:57:14.200 |
|
relations existing and so um multi-step |
|
|
|
00:57:12.280 --> 00:57:17.280 |
|
paths can be informative for indicating
|
|
|
00:57:14.200 --> 00:57:20.000 |
|
whether individual relations exist so um |
|
|
|
00:57:17.280 --> 00:57:24.400 |
|
for example uh given a word given a |
|
|
|
00:57:20.000 --> 00:57:27.960 |
|
particular word in a paper title |
|
|
|
00:57:24.400 --> 00:57:29.880 |
|
recommend a venue in which to publish the paper
|
|
|
00:57:27.960 --> 00:57:32.559 |
|
and so this is the the problem that they |
|
|
|
00:57:29.880 --> 00:57:36.079 |
|
were trying to solve and then basically |
|
|
|
00:57:32.559 --> 00:57:38.440 |
|
you have a word um you |
|
|
|
00:57:36.079 --> 00:57:41.119 |
|
find if you have that word in your paper |
|
|
|
00:57:38.440 --> 00:57:42.920 |
|
title you then find other papers that |
|
|
|
00:57:41.119 --> 00:57:45.280 |
|
have that word
|
|
|
00:57:42.920 --> 00:57:48.359 |
|
in their title and those papers are in a |
|
|
|
00:57:45.280 --> 00:57:52.039 |
|
journal and that gets a high weight with |
|
|
|
00:57:48.359 --> 00:57:54.119 |
|
respect to like that your paper being |
|
|
|
00:57:52.039 --> 00:57:56.839 |
|
you know relevant to that particular |
|
|
|
00:57:54.119 --> 00:57:59.880 |
|
Journal you can also say |
|
|
|
00:57:56.839 --> 00:58:01.000 |
|
okay I have a word find papers with
|
|
|
00:57:59.880 --> 00:58:03.240 |
|
that word in the |
|
|
|
00:58:01.000 --> 00:58:07.240 |
|
title find the first author of that |
|
|
|
00:58:03.240 --> 00:58:09.280 |
|
paper find another paper uh that had |
|
|
|
00:58:07.240 --> 00:58:11.599 |
|
that author as a first author and then |
|
|
|
00:58:09.280 --> 00:58:13.240 |
|
find the Journal of it and they |
|
|
|
00:58:11.599 --> 00:58:15.839 |
|
demonstrate a way where you can like |
|
|
|
00:58:13.240 --> 00:58:18.280 |
|
expand these paths and feed them into a |
|
|
|
00:58:15.839 --> 00:58:22.400 |
|
prediction model and use that to predict |
|
|
|
00:58:18.280 --> 00:58:25.480 |
|
um you know additional relations so |
|
|
|
00:58:22.400 --> 00:58:26.680 |
|
unlike this method here this method was |
|
|
|
00:58:25.480 --> 00:58:29.240 |
|
saying like |
|
|
|
00:58:26.680 --> 00:58:30.920 |
|
other single relations are indicative of |
|
|
|
00:58:29.240 --> 00:58:34.160 |
|
a particular relation |
|
|
|
00:58:30.920 --> 00:58:36.880 |
|
existing this paper is saying not just |
|
|
|
00:58:34.160 --> 00:58:38.720 |
|
individual relations are indicative of |
|
|
|
00:58:36.880 --> 00:58:40.640 |
|
another relation existing but actually |
|
|
|
00:58:38.720 --> 00:58:43.839 |
|
relation paths are indicative of a |
|
|
|
00:58:40.640 --> 00:58:46.400 |
|
relation existing so this is more um |
|
|
|
00:58:43.839 --> 00:58:46.400 |
|
expressive |
|
|
|
00:58:47.520 --> 00:58:55.359 |
|
basically um and this follow-up paper
|
|
|
00:58:52.640 --> 00:58:57.480 |
|
uh using differentiable logic rules |
|
|
|
00:58:55.359 --> 00:59:00.799 |
|
actually made this end-to-end
|
|
|
00:58:57.480 --> 00:59:03.079 |
|
trainable so this allows you to consider |
|
|
|
00:59:00.799 --> 00:59:07.599 |
|
whole paths in a differentiable |
|
|
|
00:59:03.079 --> 00:59:09.960 |
|
framework and so the way they did this |
|
|
|
00:59:07.599 --> 00:59:13.359 |
|
is like if you have you know City in |
|
|
|
00:59:09.960 --> 00:59:16.440 |
|
country and has office in country um |
|
|
|
00:59:13.359 --> 00:59:18.920 |
|
that or sorry city in country and has
|
|
|
00:59:16.440 --> 00:59:22.200 |
|
office in city that indicates has office |
|
|
|
00:59:18.920 --> 00:59:24.160 |
|
in country and I'm sure you know many
|
|
|
00:59:22.200 --> 00:59:26.760 |
|
people here have thought like learned |
|
|
|
00:59:24.160 --> 00:59:29.520 |
|
about logic and you know and induction |
|
|
|
00:59:26.760 --> 00:59:32.720 |
|
from or deduction from uh logic rules |
|
|
|
00:59:29.520 --> 00:59:34.359 |
|
and stuff like this but the problem is |
|
|
|
00:59:32.720 --> 00:59:37.079 |
|
deduction from logic rules is very |
|
|
|
00:59:34.359 --> 00:59:39.039 |
|
fragile like there are cases where there |
|
|
|
00:59:37.079 --> 00:59:41.119 |
|
are counter examples so if you say that |
|
|
|
00:59:39.039 --> 00:59:43.280 |
|
something is always true deductively |
|
|
|
00:59:41.119 --> 00:59:45.839 |
|
then um that can cause problems so in |
|
|
|
00:59:43.280 --> 00:59:47.839 |
|
reality it's like if you have two pieces |
|
|
|
00:59:45.839 --> 00:59:52.400 |
|
of information something can become much |
|
|
|
00:59:47.839 --> 00:59:56.920 |
|
much more likely um and so you know just |
|
|
|
00:59:52.400 --> 00:59:59.880 |
|
to give an example um somebody studying |
|
|
|
00:59:56.920 --> 01:00:01.280 |
|
studying at CMU makes it very likely |
|
|
|
00:59:59.880 --> 01:00:03.799 |
|
much more likely that they're studying |
|
|
|
01:00:01.280 --> 01:00:06.359 |
|
computer science and much less likely |
|
|
|
01:00:03.799 --> 01:00:08.000 |
|
that they're studying medicine or |
|
|
|
01:00:06.359 --> 01:00:09.520 |
|
something like that but that doesn't |
|
|
|
01:00:08.000 --> 01:00:11.720 |
|
mean that it's like entirely implied
|
|
|
01:00:09.520 --> 01:00:13.559 |
|
the first one is definitely not
|
|
|
01:00:11.720 --> 01:00:15.480 |
|
entirely implied and I'm sure there's |
|
|
|
01:00:13.559 --> 01:00:16.760 |
|
like a few people at CMU who are somehow |
|
|
|
01:00:15.480 --> 01:00:18.440 |
|
studying medicine through a joint |
|
|
|
01:00:16.760 --> 01:00:21.480 |
|
program with pit or something like that |
|
|
|
01:00:18.440 --> 01:00:24.400 |
|
so you know like very it's very rare |
|
|
|
01:00:21.480 --> 01:00:26.799 |
|
that logic rules are hard and fast and |
|
|
|
01:00:24.400 --> 01:00:28.480 |
|
so basically what they do is they treat |
|
|
|
01:00:26.799 --> 01:00:30.559 |
|
each path as a sequence of Matrix |
|
|
|
01:00:28.480 --> 01:00:34.839 |
|
multiplies it where they have a rule |
|
|
|
01:00:30.559 --> 01:00:36.599 |
|
weight um like this and um in the end |
|
|
|
01:00:34.839 --> 01:00:38.359 |
|
that allows you to make a a prediction |
|
|
|
01:00:36.599 --> 01:00:40.839 |
|
about whether a predicate logic rule is
|
|
|
01:00:38.359 --> 01:00:40.839 |
|
correct or not
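
A minimal sketch of treating a relation path as a product of relation adjacency matrices, in the spirit of differentiable rule learning; the rule weight here is a fixed illustrative constant, whereas in the actual method such weights are learned end to end:

```python
import numpy as np

n = 3  # entities: 0 = company, 1 = city, 2 = country
has_office_in_city = np.zeros((n, n))
has_office_in_city[0, 1] = 1.0
city_in_country = np.zeros((n, n))
city_in_country[1, 2] = 1.0

# Rule: has_office_in_country(x, z) <- has_office_in_city(x, y)
#                                      AND city_in_country(y, z)
rule_weight = 0.9
path_scores = rule_weight * has_office_in_city @ city_in_country
print(path_scores[0, 2])  # soft truth value for has_office_in_country(0, 2)
```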
|
|
|
01:00:40.880 --> 01:00:49.319 |
|
um so this is uh I've been
|
|
|
01:00:46.880 --> 01:00:51.119 |
|
working mostly in like structured |
|
|
|
01:00:49.319 --> 01:00:54.480 |
|
knowledge space structured knowledge |
|
|
|
01:00:51.119 --> 01:00:56.599 |
|
graphs other uh other things like this |
|
|
|
01:00:54.480 --> 01:00:59.760 |
|
um I don't
|
|
|
01:00:56.599 --> 01:01:02.720 |
|
think there's a whole lot of work that |
|
|
|
01:00:59.760 --> 01:01:05.640 |
|
directly applies this to language models |
|
|
|
01:01:02.720 --> 01:01:07.319 |
|
um like differentiable logic rules and |
|
|
|
01:01:05.640 --> 01:01:10.079 |
|
language models or things like that just |
|
|
|
01:01:07.319 --> 01:01:12.440 |
|
because it's less clean it's you know uh |
|
|
|
01:01:10.079 --> 01:01:13.839 |
|
harder um there's a little bit of
|
|
|
01:01:12.440 --> 01:01:16.079 |
|
work which I'm going to talk about now |
|
|
|
01:01:13.839 --> 01:01:18.599 |
|
but I think like this kind of work is |
|
|
|
01:01:16.079 --> 01:01:21.440 |
|
interesting because a lot of models are |
|
|
|
01:01:18.599 --> 01:01:23.119 |
|
not super great at reasoning and how to |
|
|
|
01:01:21.440 --> 01:01:25.119 |
|
like allow them to be better at |
|
|
|
01:01:23.119 --> 01:01:26.559 |
|
reasoning is kind of an open problem so |
|
|
|
01:01:25.119 --> 01:01:28.039 |
|
learning from these older works that
|
|
|
01:01:26.559 --> 01:01:30.200 |
|
did it in a more structured space and |
|
|
|
01:01:28.039 --> 01:01:32.160 |
|
trying to figure out how to apply them |
|
|
|
01:01:30.200 --> 01:01:34.400 |
|
to less structured spaces is still |
|
|
|
01:01:32.160 --> 01:01:36.240 |
|
interesting I think |
|
|
|
01:01:34.400 --> 01:01:39.160 |
|
so |
|
|
|
01:01:36.240 --> 01:01:40.720 |
|
cool um then the final topic I want
|
|
|
01:01:39.160 --> 01:01:42.920 |
|
to talk about is probing knowledge in |
|
|
|
01:01:40.720 --> 01:01:44.920 |
|
LMs and so we have these knowledge bases
|
|
|
01:01:42.920 --> 01:01:47.319 |
|
that encode you know tons and tons of |
|
|
|
01:01:44.920 --> 01:01:49.880 |
|
knowledge um which allows us to figure |
|
|
|
01:01:47.319 --> 01:01:52.200 |
|
out you know oh well how well do uh |
|
|
|
01:01:49.880 --> 01:01:56.200 |
|
language models know about these |
|
|
|
01:01:52.200 --> 01:01:59.079 |
|
things and so |
|
|
|
01:01:56.200 --> 01:02:02.760 |
|
traditional um kind of QA machine |
|
|
|
01:01:59.079 --> 01:02:04.799 |
|
reading comprehension rag models um |
|
|
|
01:02:02.760 --> 01:02:06.359 |
|
usually referred to external resources |
|
|
|
01:02:04.799 --> 01:02:10.039 |
|
to answer questions like Wikipedia |
|
|
|
01:02:06.359 --> 01:02:14.359 |
|
articles um or things like this but then |
|
|
|
01:02:10.039 --> 01:02:16.119 |
|
the question is without doing RAG can we
|
|
|
01:02:14.359 --> 01:02:18.160 |
|
you know answer questions like what |
|
|
|
01:02:16.119 --> 01:02:20.920 |
|
knowledge is |
|
|
|
01:02:18.160 --> 01:02:24.079 |
|
encoded and so the first paper that kind |
|
|
|
01:02:20.920 --> 01:02:26.520 |
|
of handled this sort of problem uh is |
|
|
|
01:02:24.079 --> 01:02:29.200 |
|
this paper which actually was also |
|
|
|
01:02:26.520 --> 01:02:33.359 |
|
called uh |
|
|
|
01:02:29.200 --> 01:02:35.960 |
|
LAMA surprisingly um or released a
|
|
|
01:02:33.359 --> 01:02:41.000 |
|
resource called LAMA except it was L A M
|
|
|
01:02:35.960 --> 01:02:44.880 |
|
A um but what they did is they
|
|
|
01:02:41.000 --> 01:02:46.960 |
|
uh in contrast to using
|
|
|
01:02:44.880 --> 01:02:50.000 |
|
structured queries like SQL or
|
|
|
01:02:46.960 --> 01:02:52.119 |
|
SPARQL to query KBs they tried to use
|
|
|
01:02:50.000 --> 01:02:54.240 |
|
natural language prompts to query LMs so
|
|
|
01:02:52.119 --> 01:02:58.160 |
|
this was actually one of the first
|
|
|
01:02:54.240 --> 01:03:02.359 |
|
uh kind of paper on prompts uh prompting |
|
|
|
01:02:58.160 --> 01:03:05.079 |
|
for uh language models in a way and the |
|
|
|
01:03:02.359 --> 01:03:08.359 |
|
way they did this is they had um they |
|
|
|
01:03:05.079 --> 01:03:10.039 |
|
did like Dante was born in [MASK] and then
|
|
|
01:03:08.359 --> 01:03:13.279 |
|
they tried to fill in the mask using a |
|
|
|
01:03:10.039 --> 01:03:15.839 |
|
masked language model and uh output
|
|
|
01:03:13.279 --> 01:03:18.559 |
|
Florence
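
A minimal LAMA-style probe, assuming the Hugging Face transformers library is available; the model name and prompt are illustrative:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# Top predictions for the masked object of the relation.
for pred in fill("Dante was born in [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```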
|
|
|
01:03:15.839 --> 01:03:19.960 |
|
so um when they did this work now we
|
|
|
01:03:18.559 --> 01:03:21.359 |
|
don't do this quite as much but when |
|
|
|
01:03:19.960 --> 01:03:23.520 |
|
they did this work they basically used |
|
|
|
01:03:21.359 --> 01:03:25.440 |
|
the knowledge base as the ground truth |
|
|
|
01:03:23.520 --> 01:03:28.880 |
|
and tried to probe whether the knowledge |
|
|
|
01:03:25.440 --> 01:03:31.520 |
|
in in um in the knowledge base was also |
|
|
|
01:03:28.880 --> 01:03:34.880 |
|
uh recoverable from the neural |
|
|
|
01:03:31.520 --> 01:03:37.720 |
|
model um and they proposed the LAMA
|
|
|
01:03:34.880 --> 01:03:39.760 |
|
Benchmark um basically it was manual |
|
|
|
01:03:37.720 --> 01:03:42.480 |
|
prompts for 41 relations they created |
|
|
|
01:03:39.760 --> 01:03:44.839 |
|
the prompts manually uh so like X was |
|
|
|
01:03:42.480 --> 01:03:46.480 |
|
founded in Y as the prompt template and
|
|
|
01:03:44.839 --> 01:03:49.400 |
|
they filled in the subjects and had the |
|
|
|
01:03:46.480 --> 01:03:52.160 |
|
LMs uh such as BERT predict the
|
|
|
01:03:49.400 --> 01:03:55.839 |
|
objects uh like Bloomberg L.P. was founded
|
|
|
01:03:52.160 --> 01:03:59.000 |
|
in [MASK] and they demonstrated that like
|
|
|
01:03:55.839 --> 01:04:02.440 |
|
basically ELMo uh Transformer-XL and
|
|
|
01:03:59.000 --> 01:04:04.960 |
|
BERT-base got uh you know up to 31%
|
|
|
01:04:02.440 --> 01:04:06.480 |
|
accuracy now I'm sure uh the modern |
|
|
|
01:04:04.960 --> 01:04:09.200 |
|
language models would have much higher |
|
|
|
01:04:06.480 --> 01:04:11.279 |
|
accuracy than |
|
|
|
01:04:09.200 --> 01:04:13.920 |
|
that |
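A sketch of how this benchmark-style evaluation works, with illustrative templates and triples (these are assumptions for illustration, not the actual LAMA data):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-cased")

# One manually written template per relation; the KB supplies gold objects.
templates = {"founded_in": "[X] was founded in [MASK]."}
triples = [  # (subject, relation, gold object) -- illustrative examples
    ("Bloomberg L.P.", "founded_in", "1981"),
    ("Microsoft", "founded_in", "1975"),
]

correct = 0
for subj, rel, gold in triples:
    prompt = templates[rel].replace("[X]", subj)
    top_prediction = unmasker(prompt, top_k=1)[0]["token_str"]
    correct += int(top_prediction.strip() == gold)

print(f"precision@1: {correct / len(triples):.2f}")
```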
|
|
|
01:04:11.279 --> 01:04:17.839 |
|
um this is a a follow-up paper that we |
|
|
|
01:04:13.920 --> 01:04:21.160 |
|
did to this um where we tried to do this |
|
|
|
01:04:17.839 --> 01:04:23.400 |
|
multilingually um I think
|
|
|
01:04:21.160 --> 01:04:25.680 |
|
one thing that's really
|
|
|
01:04:23.400 --> 01:04:29.520 |
|
interesting about this paper is
|
|
|
01:04:25.680 --> 01:04:31.960 |
|
um even
|
|
|
01:04:29.520 --> 01:04:37.240 |
|
if you're not interested in multilingual |
|
|
|
01:04:31.960 --> 01:04:38.920 |
|
stuff per se there is an interesting |
|
|
|
01:04:37.240 --> 01:04:40.760 |
|
dichotomy about like what knowledge is |
|
|
|
01:04:38.920 --> 01:04:43.079 |
|
included in LMs and whether we can
|
|
|
01:04:40.760 --> 01:04:46.000 |
|
retrieve it and the reason why I'm |
|
|
|
01:04:43.079 --> 01:04:48.359 |
|
saying this is because in this paper |
|
|
|
01:04:46.000 --> 01:04:51.200 |
|
we created |
|
|
|
01:04:48.359 --> 01:04:52.599 |
|
queries from a knowledge base and |
|
|
|
01:04:51.200 --> 01:04:54.160 |
|
because we created queries from a |
|
|
|
01:04:52.599 --> 01:04:55.760 |
|
knowledge base and knowledge bases are |
|
|
|
01:04:54.160 --> 01:04:57.240 |
|
multilingual we can also create |
|
|
|
01:04:55.760 --> 01:05:00.039 |
|
multilingual queries from knowledge |
|
|
|
01:04:57.240 --> 01:05:01.720 |
|
bases right so we can use exactly the |
|
|
|
01:05:00.039 --> 01:05:03.359 |
|
same entities but just ask the same |
|
|
|
01:05:01.720 --> 01:05:05.920 |
|
question in different languages and so |
|
|
|
01:05:03.359 --> 01:05:07.480 |
|
we had a bunch of people manually uh |
|
|
|
01:05:05.920 --> 01:05:10.119 |
|
create prompts for all of these |
|
|
|
01:05:07.480 --> 01:05:13.000 |
|
languages here and you can see that in |
|
|
|
01:05:10.119 --> 01:05:15.960 |
|
English it's much better at responding |
|
|
|
01:05:13.000 --> 01:05:19.000 |
|
uh to these queries than it is in any |
|
|
|
01:05:15.960 --> 01:05:21.039 |
|
other language and in particular like |
|
|
|
01:05:19.000 --> 01:05:22.880 |
|
lower resource languages or languages |
|
|
|
01:05:21.039 --> 01:05:26.400 |
|
that are less similar to English it did |
|
|
|
01:05:22.880 --> 01:05:29.079 |
|
much worse and notably we counted the
|
|
|
01:05:26.400 --> 01:05:32.160 |
|
answer correct if it got it |
|
|
|
01:05:29.079 --> 01:05:34.279 |
|
um we had two settings one setting is
|
|
|
01:05:32.160 --> 01:05:35.799 |
|
we counted the answer correct only
|
|
|
01:05:34.279 --> 01:05:38.359 |
|
if it answered in the language we |
|
|
|
01:05:35.799 --> 01:05:39.680 |
|
queried it in but in the other setting we
|
|
|
01:05:38.359 --> 01:05:42.640 |
|
also counted the answer correct if it |
|
|
|
01:05:39.680 --> 01:05:44.200 |
|
answered in any language so um it
|
|
|
01:05:42.640 --> 01:05:46.640 |
|
didn't necessarily have to even know the |
|
|
|
01:05:44.200 --> 01:05:48.200 |
|
name of the entity in that uh language |
|
|
|
01:05:46.640 --> 01:05:50.520 |
|
and we would still count it |
|
|
|
01:05:48.200 --> 01:05:54.720 |
|
correct and so what I mean by the
|
|
|
01:05:50.520 --> 01:05:56.440 |
|
dichotomy between the information that |
|
|
|
01:05:54.720 --> 01:05:59.240 |
|
language models have |
|
|
|
01:05:56.440 --> 01:06:02.480 |
|
encoded and whether they're able to |
|
|
|
01:05:59.240 --> 01:06:02.480 |
|
retrieve it |
|
|
|
01:06:02.680 --> 01:06:07.640 |
|
is this in English the
|
|
|
01:06:06.000 --> 01:06:10.799 |
|
models we tested were able to answer |
|
|
|
01:06:07.640 --> 01:06:13.000 |
|
like 17.7% of queries
|
|
|
01:06:10.799 --> 01:06:14.359 |
|
but the fact that they're able to
|
|
|
01:06:13.000 --> 01:06:16.160 |
|
answer in English means that the |
|
|
|
01:06:14.359 --> 01:06:18.520 |
|
language model quote unquote knows the |
|
|
|
01:06:16.160 --> 01:06:20.200 |
|
answer right like it knows the answer in |
|
|
|
01:06:18.520 --> 01:06:22.680 |
|
English we're asking exactly the same |
|
|
|
01:06:20.200 --> 01:06:24.400 |
|
question in all the other languages so |
|
|
|
01:06:22.680 --> 01:06:26.079 |
|
you know it should know the answer in |
|
|
|
01:06:24.400 --> 01:06:27.680 |
|
the other languages too |
|
|
|
01:06:26.079 --> 01:06:30.000 |
|
but it's not able to retrieve the answer |
|
|
|
01:06:27.680 --> 01:06:33.079 |
|
because we asked in another language |
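A sketch of the two scoring settings just described, with illustrative data structures (the real evaluation matched predictions against entity names from the KB):

```python
# Two scoring settings: "strict" requires the answer in the query language;
# "any-language" accepts the entity's name in any language in the KB.
def score(prediction: str, names_by_lang: dict, query_lang: str):
    strict = prediction == names_by_lang[query_lang]
    any_language = prediction in names_by_lang.values()
    return strict, any_language

# e.g. one entity's names across languages, as stored in the KB:
gold_names = {"en": "Florence", "it": "Firenze", "de": "Florenz"}

# A model queried in German that answers with the Italian name:
print(score("Firenze", gold_names, query_lang="de"))  # -> (False, True)
```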
|
|
|
01:06:30.000 --> 01:06:35.920 |
|
so um that brings up some interesting |
|
|
|
01:06:33.079 --> 01:06:38.079 |
|
questions about how we can make models |
|
|
|
01:06:35.920 --> 01:06:39.680 |
|
better at retrieving the the knowledge |
|
|
|
01:06:38.079 --> 01:06:43.559 |
|
that they already know in English when |
|
|
|
01:06:39.680 --> 01:06:45.520 |
|
you query them in other languages um
|
|
|
01:06:43.559 --> 01:06:48.119 |
|
and there was another paper recently I |
|
|
|
01:06:45.520 --> 01:06:52.720 |
|
don't know if I'd be able to find it um |
|
|
|
01:06:48.119 --> 01:06:56.119 |
|
exactly um but they
|
|
|
01:06:52.720 --> 01:07:01.799 |
|
prompted models with personas and so |
|
|
|
01:06:56.119 --> 01:07:04.599 |
|
they said um you know I am an old man I
|
|
|
01:07:01.799 --> 01:07:07.160 |
|
am an old woman I am a young man I am |
|
|
|
01:07:04.599 --> 01:07:10.039 |
|
a young woman I am a child or something
|
|
|
01:07:07.160 --> 01:07:12.799 |
|
like that um or they also talked about |
|
|
|
01:07:10.039 --> 01:07:15.640 |
|
things like uh physical disabilities and |
|
|
|
01:07:12.799 --> 01:07:17.200 |
|
things and they said um please answer |
|
|
|
01:07:15.640 --> 01:07:19.640 |
|
this question after they prompted with a |
|
|
|
01:07:17.200 --> 01:07:22.680 |
|
persona and just having that persona
|
|
|
01:07:19.640 --> 01:07:24.839 |
|
greatly changed the ability of the model |
|
|
|
01:07:22.680 --> 01:07:26.400 |
|
to answer questions so it's this very |
|
|
|
01:07:24.839 --> 01:07:28.200 |
|
weird thing which is like the
|
|
|
01:07:26.400 --> 01:07:29.799 |
|
models are actually capable of answering |
|
|
|
01:07:28.200 --> 01:07:31.520 |
|
the questions but based on how you probe |
|
|
|
01:07:29.799 --> 01:07:32.880 |
|
them whether it's in like different |
|
|
|
01:07:31.520 --> 01:07:34.599 |
|
languages or if you give them a |
|
|
|
01:07:32.880 --> 01:07:36.839 |
|
different persona they manage to answer
|
|
|
01:07:34.599 --> 01:07:39.000 |
|
things differently and so on the minus
|
|
|
01:07:36.839 --> 01:07:42.920 |
|
side like you can make
|
|
|
01:07:39.000 --> 01:07:44.799 |
|
ways to reduce a language model's
|
|
|
01:07:42.920 --> 01:07:45.920 |
|
performance by giving it like a persona
|
|
|
01:07:44.799 --> 01:07:49.839 |
|
that shouldn't be good at answering |
|
|
|
01:07:45.920 --> 01:07:53.279 |
|
questions or something like that
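A sketch of this kind of prompt-sensitivity probing; the persona wordings are paraphrased assumptions rather than the paper's exact prompts, and `query_model` is a hypothetical stand-in for whatever LM API you probe:

```python
# Ask the same factual question under different persona prefixes and compare
# accuracy; large gaps suggest knowledge that is encoded but not reliably
# retrievable.
personas = [
    "",                      # no persona, as a baseline
    "I am an old man. ",
    "I am a young woman. ",
    "I am a child. ",
]
question = "In which country is Washington D.C. located?"

for persona in personas:
    prompt = persona + "Please answer this question: " + question
    # answer = query_model(prompt)  # hypothetical call; compare across personas
    print(prompt)
```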
|
|
|
01:07:49.839 --> 01:07:54.839 |
|
but on the plus side um like when you're |
|
|
|
01:07:53.279 --> 01:07:57.279 |
|
doing code generation there was this |
|
|
|
01:07:54.839 --> 01:07:58.960 |
|
magic prompt which is like um I have |
|
|
|
01:07:57.279 --> 01:08:01.319 |
|
checked this carefully and all the unit
|
|
|
01:07:58.960 --> 01:08:03.240 |
|
tests pass and that would improve your |
|
|
|
01:08:01.319 --> 01:08:05.760 |
|
code generation accuracy by like five |
|
|
|
01:08:03.240 --> 01:08:07.559 |
|
points or something like that so um
|
|
|
01:08:05.760 --> 01:08:09.240 |
|
you just get the model in the right
|
|
|
01:08:07.559 --> 01:08:11.359 |
|
mood to answer the question accurately |
|
|
|
01:08:09.240 --> 01:08:13.319 |
|
and it does a better job at doing it so |
|
|
|
01:08:11.359 --> 01:08:15.960 |
|
it's kind of uh it goes in both |
|
|
|
01:08:13.319 --> 01:08:15.960 |
|
directions I |
|
|
|
01:08:16.679 --> 01:08:27.080 |
|
guess cool um yeah uh any questions
|
|
|
01:08:23.679 --> 01:08:30.120 |
|
here um another thing that you can do uh |
|
|
|
01:08:27.080 --> 01:08:31.000 |
|
is fine-tune models specifically so |
|
|
|
01:08:30.120 --> 01:08:34.080 |
|
they're good at answering |
|
|
|
01:08:31.000 --> 01:08:35.560 |
|
knowledge base questions so um uh this
|
|
|
01:08:34.080 --> 01:08:38.080 |
|
paper demonstrated that you could fine-
|
|
|
01:08:35.560 --> 01:08:39.480 |
|
tune models uh on synthetically created |
|
|
|
01:08:38.080 --> 01:08:41.159 |
|
knowledge base questions and that would
|
|
|
01:08:39.480 --> 01:08:42.920 |
|
improve the ability of the model to |
|
|
|
01:08:41.159 --> 01:08:47.679 |
|
answer questions about knowledge |
|
|
|
01:08:42.920 --> 01:08:47.679 |
|
bases um
|
|
|
01:08:49.120 --> 01:08:57.440 |
|
yeah um it's pretty straightforward
|
|
|
01:08:53.199 --> 01:08:57.440 |
|
so uh there's that |
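A sketch of generating synthetic fine-tuning data from KB triples, in the spirit of that paper; the templates, triples, and file name are illustrative assumptions:

```python
import json

# One question template per relation; objects become the target answers.
templates = {
    "capital_of": "What is the capital of {subj}?",
    "president_of": "Who is the president of {subj}?",
}
triples = [  # illustrative (subject, relation, object) triples
    ("France", "capital_of", "Paris"),
    ("Japan", "capital_of", "Tokyo"),
]

with open("kb_finetune.jsonl", "w") as f:
    for subj, rel, obj in triples:
        example = {"prompt": templates[rel].format(subj=subj), "completion": obj}
        f.write(json.dumps(example) + "\n")
# Fine-tuning on many such synthetic pairs improved KB question answering.
```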
|
|
|
01:08:57.799 --> 01:09:03.120 |
|
um yeah we already talked about this in |
|
|
|
01:09:00.000 --> 01:09:07.560 |
|
the RAG class so I think I might skip
|
|
|
01:09:03.120 --> 01:09:10.239 |
|
that um a final paper that I'd like to |
|
|
|
01:09:07.560 --> 01:09:12.600 |
|
talk about is also a paper uh done
|
|
|
01:09:10.239 --> 01:09:13.759 |
|
by my student Zhengbao Jiang and this is
|
|
|
01:09:12.600 --> 01:09:16.080 |
|
interesting from the point of view of |
|
|
|
01:09:13.759 --> 01:09:18.000 |
|
multihop reasoning and so I talked a |
|
|
|
01:09:16.080 --> 01:09:19.679 |
|
little bit about like multihop reasoning |
|
|
|
01:09:18.000 --> 01:09:23.239 |
|
along reasoning |
|
|
|
01:09:19.679 --> 01:09:26.159 |
|
chains um in knowledge bases and this is |
|
|
|
01:09:23.239 --> 01:09:28.520 |
|
one example of multihop reasoning |
|
|
|
01:09:26.159 --> 01:09:30.080 |
|
along reasoning chains within the
|
|
|
01:09:28.520 --> 01:09:33.400 |
|
parameters of the model so testing |
|
|
|
01:09:30.080 --> 01:09:36.759 |
|
whether models can answer |
|
|
|
01:09:33.400 --> 01:09:38.480 |
|
multihop questions and
|
|
|
01:09:36.759 --> 01:09:40.839 |
|
basically what we did here is we took a |
|
|
|
01:09:38.480 --> 01:09:42.679 |
|
knowledge base and a knowledge base can |
|
|
|
01:09:40.839 --> 01:09:44.279 |
|
have |
|
|
|
01:09:42.679 --> 01:09:49.480 |
|
um |
|
|
|
01:09:44.279 --> 01:09:49.480 |
|
like uh a country relation where the country is
|
|
|
01:09:49.600 --> 01:09:52.600 |
|
US |
|
|
|
01:09:53.480 --> 01:09:58.600 |
|
a president relation um and then a
|
|
|
01:10:00.880 --> 01:10:06.560 |
|
birthday relation um and so we can create these
|
|
|
01:10:04.280 --> 01:10:08.640 |
|
multihop questions right uh and just |
|
|
|
01:10:06.560 --> 01:10:10.280 |
|
follow the relation links and then we |
|
|
|
01:10:08.640 --> 01:10:11.440 |
|
know the answer to the multihop question |
|
|
|
01:10:10.280 --> 01:10:13.560 |
|
by following the links and we can
|
|
|
01:10:11.440 --> 01:10:18.159 |
|
generate you know the question given a |
|
|
|
01:10:13.560 --> 01:10:19.800 |
|
template
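A sketch of how two-hop questions can be composed by following relation links in the KB; the relation names and the song-to-artist pairing below are illustrative assumptions, not the dataset's actual schema:

```python
# Following relation links: the object of hop 1 becomes the subject of hop 2,
# which also gives us the gold answer for the compound question.
kb = {
    ("Party Ain't Over", "recorded_by"): "Usher",
    ("Usher", "lives_in"): "Atlanta",
}

def answer_two_hop(entity, rel1, rel2):
    bridge = kb[(entity, rel1)]   # hop 1: song -> artist
    return kb[(bridge, rel2)]     # hop 2: artist -> city

# Compound question from a template, e.g.
# "In which part of Georgia does the artist that recorded {song} live?"
print(answer_two_hop("Party Ain't Over", "recorded_by", "lives_in"))  # Atlanta
```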
|
|
|
01:10:18.159 --> 01:10:22.800 |
|
um so we did this and had like question one which is return the artist
|
|
|
01:10:19.800 --> 01:10:25.719 |
|
who recorded Party Ain't Over um and then
|
|
|
01:10:22.800 --> 01:10:28.159 |
|
where in Georgia does uh Usher live and |
|
|
|
01:10:25.719 --> 01:10:29.920 |
|
then we can turn this into a question |
|
|
|
01:10:28.159 --> 01:10:31.679 |
|
in which part of
|
|
|
01:10:29.920 --> 01:10:34.239 |
|
Georgia does the artist that recorded |
|
|
|
01:10:31.679 --> 01:10:37.560 |
|
Party Ain't Over live and so we now have a
|
|
|
01:10:34.239 --> 01:10:45.000 |
|
multihop question and what we did
|
|
|
01:10:37.560 --> 01:10:47.440 |
|
is we measured whether um the model was |
|
|
|
01:10:45.000 --> 01:10:49.760 |
|
able to answer the first question the |
|
|
|
01:10:47.440 --> 01:10:53.320 |
|
second question and the
|
|
|
01:10:49.760 --> 01:10:56.120 |
|
compound question and what we found is |
|
|
|
01:10:53.320 --> 01:10:59.440 |
|
like what we would expect |
|
|
|
01:10:56.120 --> 01:11:01.719 |
|
if models were like perfect knowledge |
|
|
|
01:10:59.440 --> 01:11:04.360 |
|
processors right |
|
|
|
01:11:01.719 --> 01:11:08.120 |
|
is we have |
|
|
|
01:11:04.360 --> 01:11:10.800 |
|
like yes or no on the first question
|
|
|
01:11:08.120 --> 01:11:14.000 |
|
and
|
|
|
01:11:10.800 --> 01:11:16.560 |
|
um yes or no
|
|
|
01:11:14.000 --> 01:11:16.560 |
|
on the second
|
|
|
01:11:17.199 --> 01:11:24.760 |
|
question and we would expect that |
|
|
|
01:11:21.920 --> 01:11:26.080 |
|
basically if it knew both of the answers |
|
|
|
01:11:24.760 --> 01:11:27.239 |
|
to the first question and the second |
|
|
|
01:11:26.080 --> 01:11:30.600 |
|
question it would get the compound |
|
|
|
01:11:27.239 --> 01:11:31.800 |
|
question right and if it got uh like |
|
|
|
01:11:30.600 --> 01:11:34.800 |
|
either of them wrong it would get it |
|
|
|
01:11:31.800 --> 01:11:37.120 |
|
wrong right um you know in the in the |
|
|
|
01:11:34.800 --> 01:11:39.400 |
|
ideal world where the knowledge of the |
|
|
|
01:11:37.120 --> 01:11:41.280 |
|
two sub questions is necessary to answer |
|
|
|
01:11:39.400 --> 01:11:43.880 |
|
the composite question and the
|
|
|
01:11:41.280 --> 01:11:45.840 |
|
model is a perfect knowledge processor |
|
|
|
01:11:43.880 --> 01:11:47.120 |
|
and basically we tried a
|
|
|
01:11:45.840 --> 01:11:49.280 |
|
whole bunch of different types of |
|
|
|
01:11:47.120 --> 01:11:51.199 |
|
questions and what we found is this is |
|
|
|
01:11:49.280 --> 01:11:55.960 |
|
totally not the case like it's not the |
|
|
|
01:11:51.199 --> 01:11:58.520 |
|
case at all um and what we found instead
|
|
|
01:11:55.960 --> 01:12:01.560 |
|
is if it's able to answer the second |
|
|
|
01:11:58.520 --> 01:12:04.120 |
|
question correctly it was much more |
|
|
|
01:12:01.560 --> 01:12:07.480 |
|
likely to be able to answer the |
|
|
|
01:12:04.120 --> 01:12:08.840 |
|
composite question um even if it can |
|
|
|
01:12:07.480 --> 01:12:11.000 |
|
answer the first question that has |
|
|
|
01:12:08.840 --> 01:12:13.120 |
|
almost no relation with whether it could |
|
|
|
01:12:11.000 --> 01:12:15.520 |
|
answer the composite question at all
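A sketch of the analysis described here: conditioning compound-question accuracy on whether each sub-question was answered correctly. The records below are illustrative stand-ins for real model outputs, not the paper's results:

```python
# Each record: (first sub-question correct, second sub-question correct,
# compound question correct). Illustrative data only.
results = [
    (True, True, True),
    (True, False, False),
    (False, True, True),
    (True, True, False),
]

def p_compound_given(sub_idx):
    """P(compound correct | sub-question `sub_idx` correct)."""
    rows = [r for r in results if r[sub_idx]]
    return sum(r[2] for r in rows) / len(rows)

print("P(compound | Q1 correct):", round(p_compound_given(0), 2))  # 0.33
print("P(compound | Q2 correct):", round(p_compound_given(1), 2))  # 0.67
```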
|
|
|
01:12:13.120 --> 01:12:17.679 |
|
so it's more like somehow from the answer
|
|
|
01:12:15.520 --> 01:12:19.320 |
|
to the second question it was able to
|
|
|
01:12:17.679 --> 01:12:22.280 |
|
get the answer right and it kind of |
|
|
|
01:12:19.320 --> 01:12:24.040 |
|
makes sense actually because like um |
|
|
|
01:12:22.280 --> 01:12:26.320 |
|
let's say the answer to the second |
|
|
|
01:12:24.040 --> 01:12:27.920 |
|
question is some like really long list |
|
|
|
01:12:26.320 --> 01:12:30.719 |
|
like who are all the presidents of the |
|
|
|
01:12:27.920 --> 01:12:33.320 |
|
United States um or something like that |
|
|
|
01:12:30.719 --> 01:12:35.639 |
|
that's just hard to answer um so if I |
|
|
|
01:12:33.320 --> 01:12:38.000 |
|
said who are all the presidents of the |
|
|
|
01:12:35.639 --> 01:12:40.800 |
|
country where Washington DC is located |
|
|
|
01:12:38.000 --> 01:12:42.679 |
|
in um you know like the second question |
|
|
|
01:12:40.800 --> 01:12:44.040 |
|
is really hard so that's hard to get but |
|
|
|
01:12:42.679 --> 01:12:46.120 |
|
if I say |
|
|
|
01:12:44.040 --> 01:12:49.920 |
|
um |
|
|
|
01:12:46.120 --> 01:12:53.520 |
|
uh what is the
|
|
|
01:12:49.920 --> 01:12:57.120 |
|
capital of the
|
|
|
01:12:53.520 --> 01:12:57.120 |
|
country
|
|
|
01:12:57.400 --> 01:13:02.440 |
|
where the
|
|
|
01:12:58.840 --> 01:13:05.400 |
|
most people live
|
|
|
01:13:02.440 --> 01:13:06.800 |
|
or something like that
|
|
|
01:13:05.400 --> 01:13:08.679 |
|
even if you weren't sure about the |
|
|
|
01:13:06.800 --> 01:13:10.880 |
|
country where the most people live you |
|
|
|
01:13:08.679 --> 01:13:13.040 |
|
could pick a random capital and get it |
|
|
|
01:13:10.880 --> 01:13:16.199 |
|
right some of the time or something like |
|
|
|
01:13:13.040 --> 01:13:18.239 |
|
that so um that's what we found in this |
|
|
|
01:13:16.199 --> 01:13:19.800 |
|
paper and I I think like another nice |
|
|
|
01:13:18.239 --> 01:13:22.360 |
|
thing about knowledge bases is they |
|
|
|
01:13:19.800 --> 01:13:24.880 |
|
allow you to ask like really interesting |
|
|
|
01:13:22.360 --> 01:13:26.400 |
|
questions like this about what language |
|
|
|
01:13:24.880 --> 01:13:29.120 |
|
models know or what language models don't
|
|
|
01:13:26.400 --> 01:13:31.040 |
|
know in a structured way so um I think |
|
|
|
01:13:29.120 --> 01:13:32.280 |
|
if you're interested in probing language |
|
|
|
01:13:31.040 --> 01:13:35.320 |
|
models and what they know and what they |
|
|
|
01:13:32.280 --> 01:13:38.639 |
|
can infer what logic they can do this is
|
|
|
01:13:35.320 --> 01:13:42.320 |
|
a good approach um cool yeah that's all I have for
|
|
|
01:13:38.639 --> 01:13:44.920 |
|
today um are there any questions or |
|
|
|
01:13:42.320 --> 01:13:48.679 |
|
discussion or things like that or I'm happy
|
|
|
01:13:44.920 --> 01:13:48.679 |
|
to talk up here too |
|
|