Dataset columns:
  title        string (lengths 12-112)
  published    string (lengths 19-23)
  url          string (length 28)
  video_id     string (length 11)
  channel_id   string (5 classes)
  id           string (lengths 16-31)
  text         string (lengths 0-596)
  start        float64 (range 0-37.8k)
  end          float64 (range 2.18-37.8k)
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1214.08
And labels.
1,214.08
1,222.32
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1214.8
Okay. So that's quite a lot going into our model. And now what we want to do is extract the loss
1,214.8
1,230.32
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1222.32
from that. Then we calculate the gradient of the loss for every parameter in our model. And then, using that,
1,222.32
1,237.84
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1230.32
we can update our model parameters using our optimizer. And then what we want to do is print the relevant
1,230.32
1,246.96
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1237.84
info to our progress bar that we set up using tqdm as loop. So loop.set_description.
1,237.84
1,256.32
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1250.24
And here I was going to put the epoch info. So the epoch we're currently on.
1,250.24
1,260.64
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1258.08
And then I also want to set the postfix.
1,258.08
1,268.64
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1260.64
Which will contain the loss information. So loss.item(). Okay. We can run that.
1,260.64
1,276.4
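A minimal sketch of the training loop being described, assuming the model, DataLoader (loader), optimizer (optim, e.g. AdamW) and device were already set up earlier in the video; the variable names are illustrative, not the exact code on screen.

```python
from tqdm import tqdm  # progress bar

epochs = 2  # illustrative value

for epoch in range(epochs):
    # wrap the dataloader with tqdm so we can attach info to the bar
    loop = tqdm(loader, leave=True)
    for batch in loop:
        optim.zero_grad()
        # move the batch tensors onto the same device as the model
        input_ids = batch['input_ids'].to(device)
        token_type_ids = batch['token_type_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        next_sentence_label = batch['next_sentence_label'].to(device)

        outputs = model(input_ids,
                        token_type_ids=token_type_ids,
                        attention_mask=attention_mask,
                        labels=labels,
                        next_sentence_label=next_sentence_label)

        # extract the loss, backpropagate, and step the optimizer
        loss = outputs.loss
        loss.backward()
        optim.step()

        # print the relevant info to the progress bar
        loop.set_description(f'Epoch {epoch}')
        loop.set_postfix(loss=loss.item())
```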
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1270.24
And you see that our model is now training. So we're now training a model using both
1,270.24
1,283.04
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1276.4
masked language modeling and next sentence prediction. And we haven't needed to take any structured data
1,276.4
1,288.48
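For reference, a minimal sketch of what training on both objectives looks like in the Transformers library: a single BertForPreTraining forward pass takes masked-token labels for MLM and a next_sentence_label for NSP (the sentence pair and label values here are illustrative).

```python
import torch
from transformers import BertTokenizerFast, BertForPreTraining

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertForPreTraining.from_pretrained('bert-base-uncased')

# a sentence pair: sentence B may or may not really follow sentence A
inputs = tokenizer('After this we went to the market.',
                   'There we bought some fruit.',
                   return_tensors='pt')

# MLM labels: a copy of input_ids (masking of input_ids happens separately)
labels = inputs.input_ids.clone()

# NSP label: 0 = B follows A, 1 = B is a random sentence
next_sentence_label = torch.LongTensor([0])

outputs = model(**inputs,
                labels=labels,
                next_sentence_label=next_sentence_label)

# combined MLM + NSP loss, plus the logits from both heads
print(outputs.loss,
      outputs.prediction_logits.shape,
      outputs.seq_relationship_logits.shape)
```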
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1283.0400000000002
to set up the model. So we can just run that. And we can see that our model is now training.
1,283.04
1,294.64
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1288.48
We haven't needed to take any structured data. We've just taken a book, pulled out all the data, and
1,288.48
1,299.2
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1294.64
formatted it in the correct way for us to actually train a BERT model, which I think is really
1,294.64
1,318.72
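The data preparation isn't shown in this clip, but the formatting being described amounts to roughly the following: split the raw book text into sentences, build 50/50 genuine/random sentence pairs for next sentence prediction, and mask around 15% of ordinary tokens for masked language modeling. The file name and exact masking rules here are assumptions, a sketch rather than the code from the video.

```python
import random
import torch
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

# unstructured text pulled from a book ('book.txt' is a placeholder)
text = open('book.txt', encoding='utf-8').read()
sentences = [s.strip() for s in text.split('.') if s.strip()]

# sentence pairs for NSP: 50% genuine next sentence (label 0),
# 50% random sentence (label 1)
pairs, nsp_labels = [], []
for i in range(len(sentences) - 1):
    if random.random() < 0.5:
        pairs.append((sentences[i], sentences[i + 1]))
        nsp_labels.append(0)
    else:
        pairs.append((sentences[i], random.choice(sentences)))
        nsp_labels.append(1)

inputs = tokenizer([a for a, b in pairs], [b for a, b in pairs],
                   padding='max_length', truncation=True,
                   max_length=128, return_tensors='pt')
inputs['next_sentence_label'] = torch.LongTensor(nsp_labels)

# MLM: keep the original ids as labels, then replace ~15% of the
# non-special tokens in input_ids with the [MASK] token
inputs['labels'] = inputs.input_ids.clone()
rand = torch.rand(inputs.input_ids.shape)
mask = ((rand < 0.15)
        & (inputs.input_ids != tokenizer.cls_token_id)
        & (inputs.input_ids != tokenizer.sep_token_id)
        & (inputs.input_ids != tokenizer.pad_token_id))
inputs.input_ids[mask] = tokenizer.mask_token_id
```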
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t1299.2
1,299.2
1,318.72
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t0.0
Okay, so in the previous video what we did was set up our Elasticsearch document store to contain all of our
0
11.64
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t9.64
paragraphs from Meditations
9.64
15.2
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t12.3
so we did that in this script here and
12.3
23.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t16.2
Altogether we don't have that much data: 508 paragraphs, or documents, within our document store
16.2
25.2
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t23.76
so
23.76
30
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t25.2
What we now want to do is set up the next part of our
25.2
34.44
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t31.04
retriever-reader stack, which is the retriever, and
31.04
37.2
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t35.2
What the retriever will do is
35.2
43.48
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t37.8
given a query it will communicate with our Elasticsearch document store and
37.8
45.8
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t44.28
return a
44.28
52.6
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t45.8
Certain number of contexts which are the paragraphs in our case that it thinks are most relevant to our query
45.8
54.6
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t52.6
So
52.6
58.56
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t55.760000000000005
That's what we are going to be doing here and
55.76
67.14
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t59.120000000000005
The first thing that we need to do is initialize our document store again, so I'm just going to copy these
59.12
70.16
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t68.48
and
68.48
72.16
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t70.16
paste them here and
70.16
81.04
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t74.92
This would just initialize it from what we've already built so it's using the same index that already exists
74.92
82.92
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t81.04
so
81.04
89.36
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t82.92
Just initialize that and once we have our document store. Okay, cool. We have that now
82.92
96.88
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t90.58000000000001
Now what we want to do is set up our DPR, which is a dense passage retriever, which
90.58
99.92
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t97.92
essentially uses
97.92
102.42
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t100.42
dense vectors and
100.42
105.6
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t102.56
a type of efficient similarity search
102.56
109.72
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t106.78
to embed these indexes as
106.78
114.24
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t109.72
dense vectors and then once it comes to actually searching
109.72
117.44
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t114.8
And finding the most similar or the most relevant
114.8
124.78
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t118.68
Documents later on it will use those dense vectors and find the most similar ones
118.68
130
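The idea of searching over dense vectors can be illustrated with plain dot products over a toy set of vectors (numpy, made-up numbers); real deployments use an optimized similarity search under the hood.

```python
import numpy as np

# toy dense vectors: one 768-d vector per stored passage, plus a query
passage_vectors = np.random.rand(508, 768)
query_vector = np.random.rand(768)

# dot-product similarity between the query and every passage
scores = passage_vectors @ query_vector

# indices of the most relevant passages, best first
top_k = np.argsort(scores)[::-1][:3]
print(top_k, scores[top_k])
```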
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t125.8
So I'll explain that a little bit better in a moment
125.8
135.46
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t132.0
So first what we want to do is actually initialize that
132
143.3
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t135.46
So we do from Haystack dense retriever
135.46
148.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t145.14000000000001
import DensePassageRetriever
145.14
158.54
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t155.14000000000001
Sorry, it's the other way around here, so retriever.dense
155.14
163.54
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t162.34
And
162.34
166.18
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t163.54
then we'll put it into a
163.54
175.38
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t169.29999999999998
variable called retriever, which uses the DensePassageRetriever from up here
169.3
184.42
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t176.57999999999998
And in here we need to pass a few parameters. So the first thing is the document store. So the document store is
176.58
187.78
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t185.38
just what we've already initialized above, and
185.38
195.98
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t187.78
Then we need to initialize two different models so it's the query embedding model
187.78
204.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t200.86
And the passage embedding model
200.86
212.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t207.66
Now behind the scenes Haystack is using the Hugging Face Transformers library
207.66
214.54
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t212.06
So what we'll do is we'll head over to the
212.06
219.66
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t214.54
Models over there and see which embedding models we can use for DPR
214.54
237.42
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t229.34
Okay, so here let's just search for DPR and you'll find we have all of these models from Facebook AI
229.34
240.86
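A sketch of the retriever initialization being described, using the Facebook AI DPR checkpoints from the model hub; the exact checkpoint names (the single-nq-base pair) and the import path are assumptions based on the Haystack version current at the time.

```python
from haystack.retriever.dense import DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=doc_store,  # the store initialized above
    query_embedding_model='facebook/dpr-question_encoder-single-nq-base',
    passage_embedding_model='facebook/dpr-ctx_encoder-single-nq-base',
    use_gpu=True
)
```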
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t238.85999999999999
Now with DPR
238.86
246.22
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t240.86
the reason that it's so useful for question answering is that we have
240.86
251.5
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t247.26000000000002
two different models that encode
247.26
254.46
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t252.06
the text that we pass into it so we have
252.06
258.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t255.34
this sort of setup during training and
255.34
261.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t259.26
What we see down here
259.26
265.34
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t263.34000000000003
Are these two models we have this
263.34
267.98
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t265.98
EP BERT and EQ BERT
265.98
271.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t267.98
models we have this EP BERT encoder
267.98
279.98
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t271.98
And we also have this EQ BERT encoder. Now the EP BERT encoder encodes the passages, or the contexts
271.98
283.74
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t280.3
So essentially the paragraphs that we have
280.3
286.54
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t284.46000000000004
fed into our Elasticsearch document store
284.46
289.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t287.26
This is what we'll be
287.26
293.02
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t289.74
encoding them into these vectors here
289.74
299.1
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t293.02
Now, this whole graph is during training. So all we will actually see
293.02
303.5
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t299.74
when we're encoding these vectors is we will see the
299.74
307.98
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t305.97999999999996
EP encoder
305.98
315.98
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t312.78
And this will create the EP vectors
312.78
320.06
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t315.98
And all we're going to do is feed in all of the documents
315.98
324.62
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t321.26
from Elasticsearch into this
321.26
330.3
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t326.86
Now once all of these have been encoded
326.86
335.66
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t331.34000000000003
We then have a new set of dense vectors
331.34
343.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t339.98
And then we'll have a new set of dense vectors
339.98
345.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t343.26
And all of those
343.26
351.42
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t346.3
Will be fed back into our document store, so back into Elasticsearch
346.3
358.7
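In Haystack, running every stored document through the passage (EP) encoder and writing the resulting dense vectors back into Elasticsearch is roughly a single call (again assuming the older API):

```python
# encode every document in the store with the passage encoder and
# write the resulting embeddings back into the Elasticsearch index
doc_store.update_embeddings(retriever=retriever)
```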
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t354.3
Now when it comes to performing similarity search later on
354.3
365.42
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t359.58
We're going to ask a question and that question will be processed by the EQ encoder
359.58
369.82
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t367.65999999999997
So here we have our
367.66
372.46
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t369.82
EQ encoder
369.82
377.5
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t373.98
And we have our question so that will go into here
373.98
383.58
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t379.9
And that will encode our question
379.9
389.58
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t384.38
And then send it over to Elasticsearch and say, okay, what are the most similar vectors to
384.38
392.86
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t390.3
this vector that we created from a question
390.3
397.18
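At query time the question goes through the EQ encoder and is compared against those stored passage vectors; with the retriever above that looks roughly like this (the question text is just an example):

```python
# encode the question and ask Elasticsearch for the most similar passages
results = retriever.retrieve(query='What does Marcus Aurelius say about death?',
                             top_k=3)
for doc in results:
    print(doc.text[:100])
```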
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t394.14
And the reason that we're asking this question is because
394.14
400.86
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t397.18
We're going to be using a new set of vectors to train our question
397.18
405.18
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t402.3
And the reason that DPR is so good is
402.3
408.14
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t406.14
That if you look at the training down here
406.14
417.26
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t408.86
We are creating these EP vectors and these EQ vectors that are matching so where we have a matching question to a matching context
408.86
420.14
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t418.54
We are training them
418.54
423.1
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t420.14
To maximize the dot product
420.14
428.22
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t423.1
And the alignment between those two vectors so what happens is
423.1
432.46
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t429.1
That a relevant passage and a relevant question
429.1
437.18
Q&A Document Retrieval With DPR
2021-04-15 15:00:10 UTC
https://youtu.be/DBsxUSUhfRg
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
DBsxUSUhfRg-t433.82000000000005
Will come out to have a very similar vector
433.82
441.82
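The training objective being described, maximizing the dot product between matching question and passage vectors relative to non-matching ones, can be sketched with in-batch scores and a cross-entropy loss (random vectors stand in for the EQ and EP encoder outputs):

```python
import torch
import torch.nn.functional as F

# illustrative batch: 4 question vectors and their 4 matching passage vectors
q_vectors = torch.randn(4, 768)
p_vectors = torch.randn(4, 768)

# dot product of every question with every passage in the batch
scores = q_vectors @ p_vectors.T        # shape (4, 4)

# the matching passage for question i sits at position i, so training
# pushes each diagonal score up relative to the rest of its row
targets = torch.arange(4)
loss = F.cross_entropy(scores, targets)
print(loss)
```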