Dataset schema (one row per transcript segment):
- title: string (12-112 chars)
- published: string (19-23 chars)
- url: string (28 chars)
- video_id: string (11 chars)
- channel_id: string (5 distinct values)
- id: string (16-31 chars)
- text: string (0-596 chars)
- start: float64 (0 to 37.8k)
- end: float64 (2.18 to 37.8k)

Video: How to Index Q&A Data With Haystack and Elasticsearch
Published: 2021-04-12 15:00:11 UTC
URL: https://youtu.be/Vwq7Ucp9UCw (video_id: Vwq7Ucp9UCw, channel_id: UCv83tO5cePwHMt1952IVVHw)

[379.3-425.8] No worries. Then we also need to specify our index. At the moment we don't have an "aurelius" index, and that's fine, because this will initialize it for us, so we'll just call it "aurelius". Now, if we go down here, we can see what it actually did: it sent a PUT request to localhost:9200/aurelius. So that's how you create a new index.
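In code, that step looks roughly like the sketch below; the import path shown is the one used by older farm-haystack releases (newer versions expose it as haystack.document_stores), so adjust it to your installed version.

```python
# Sketch only: assumes Elasticsearch is already running locally on port 9200.
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore

doc_store = ElasticsearchDocumentStore(
    host="localhost",
    port=9200,
    index="aurelius",  # Haystack sends PUT http://localhost:9200/aurelius if the index doesn't exist yet
)
```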
[416.2-478.7] After that, what we want to do is import our data. So we have the data here, which I got from this website and processed with this script, which you can find on GitHub; I'll keep a link in the description so you can just go and copy it if you need to. I haven't really done much pre-processing; it's pretty straightforward, and all you need to do here is actually open that data. We do that with open: the data file is located two folders up, in a data folder, and it's called meditations.txt. I'm going to be reading it, and all we do is data = f.read().
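A minimal sketch of that read step; the relative path is an assumption based on the "two folders up, in a data folder" description.

```python
# Open the raw text of Meditations and read it in as one long string.
with open("../../data/meditations.txt", "r", encoding="utf-8") as f:
    data = f.read()
```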
[470.6-521.8] Then, if we have a quick look at the first 100 characters, we see this newline character, which signifies a new paragraph in the text. So what we want to do is split the data by newline, and if we then check the length of the result we see that we have 508 separate paragraphs in there.
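Roughly, that inspection and split look like this:

```python
# Peek at the start of the text, then break it into paragraphs on the newline separator.
print(data[:100])        # the '\n' characters mark paragraph boundaries

data = data.split("\n")  # list of paragraphs
print(len(data))         # 508 for this copy of Meditations
```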
[512.8-561.9] What we now want to do is modify this data so that it's in the correct format for Haystack and Elasticsearch. That format looks like this: it expects a list of dictionaries, where each dictionary has a text field, and inside that we put our paragraph, so each one of these items here. Then there's another, optional field called meta; meta contains a dictionary, and in here we can put whatever we want.
[554.9-614.1] For us, I don't think there's really that much to put in there at the moment other than where it came from, so the book, or maybe "source" is probably a better word to use here, and all of these are coming from Meditations. Later on we will probably add a few other books as well, and then the source will be different, so when we return an item from our retriever and our reader we'll at least be able to see which book it came from. It would also be pretty cool to include something like a page number, but this copy has no page numbers included, so we're not doing that for now. So that's the format that we need, and it's going to be a list of these.
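So each entry looks roughly like this; the field names come from the narration, while the sample paragraph and the exact source label are just illustrative.

```python
# One document in the format expected by Haystack's write_documents:
# a "text" field holding the paragraph, plus an optional "meta" dictionary.
example_doc = {
    "text": "From my grandfather Verus I learned good morals and the government of my temper.",
    "meta": {"source": "meditations"},
}
```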
[610.6-655.4] To build that, we'll just use a list comprehension. So we're going to write this out: let's just copy this dictionary, which should be fine, indent it, and in here we have our paragraph, with the source set to Meditations for all of them; then we just write for paragraph in data. Okay, that should work.
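In code, that comprehension is roughly the following; the data_json variable name is an assumption.

```python
# Wrap every paragraph in the document format shown above.
data_json = [
    {"text": paragraph, "meta": {"source": "meditations"}}
    for paragraph in data
]
```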
[645.4-678.8] If we just check what we have here: okay, that's what we want. We have text, we have the paragraph, and then in here we have the meta with a source, which is always Meditations at the moment. That looks pretty good, and we'll just double-check the length again; it should still be 508.
[667.5-708.5] Okay, perfect. Now what we need to do is index all of these documents into our Elasticsearch instance, and to do that it's super easy: because we're doing this through Haystack now, all we do is call write_documents on the document store and pass in our data, and that should work.
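That call is a one-liner, assuming the doc_store and data_json names from the earlier sketches:

```python
# Index all 508 documents into the "aurelius" index through Haystack.
doc_store.write_documents(data_json)
```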
[698.2-734.8] Okay, cool. We can see here what it's done: it sent a POST request to the Bulk API, and it sent two of them, I assume because it can only send so many documents at once. So that's pretty cool, and now what I want to check is that we actually have 508 documents in our Elasticsearch instance.
[728.2-776.6] To do that, we're going to revert back to requests, so we'll do requests.get again, go to our localhost:9200, and here we need to specify the index whose entries we want to count; then all we do is add _count onto the end. This returns a JSON object, so we call .json() on it so that we can see it, and sure enough we have 508 items in that document store.
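As a sketch, that count check is:

```python
import requests

# Ask Elasticsearch how many documents the index now contains.
resp = requests.get("http://localhost:9200/aurelius/_count")
print(resp.json())  # expect something like {"count": 508, ...}
```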
[767.4-819.6] So if we head back to our original plan: up here we had Meditations, and we've now got that, and we've also set up the first part of our stack over here, so Elasticsearch now has Meditations in there and we can cross that off. The next step is setting up our retriever, which we'll cover next.

Video: Training BERT #5 - Training With BertForPretraining
Published: 2021-06-15 15:00:19 UTC
URL: https://youtu.be/IC9FaVPKlYc (video_id: IC9FaVPKlYc, channel_id: UCv83tO5cePwHMt1952IVVHw)

[0.0-53.4] Hi, welcome to the video. Here we're going to have a look at how we can pre-train BERT. What I mean by pre-train is fine-tuning BERT using the same approaches that are used to pre-train BERT itself. We would use these when we want to teach BERT to better understand the style of language in our specific use cases. We'll jump straight into it, but what we're going to see is essentially two different methods applied together: when we're pre-training, we're using something called masked language modeling, or MLM, and also next sentence prediction, or NSP.
[47.2-78.0] In a few previous videos I've covered both of these, so if you want to go into a little more depth I would definitely recommend having a look at those. In this video, though, we're just going to go straight into actually training a BERT model using both of those methods through the pre-training class. So first we need to import everything we need.
[72.7-132.6] I'm going to import requests, because I'm going to use requests to download the data we're using, which is from here; you'll find a link in the description for that. We also need to import our tokenizer and model classes from transformers, so from transformers we're going to import BertTokenizer and also BertForPreTraining. Like I said before, this BertForPreTraining class contains both an MLM head and an NSP head. Once we have that, we also need torch, so let me import torch as well.
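Those imports are simply:

```python
import requests  # used to download the raw training text
import torch     # tensor utilities used later in training

# BertForPreTraining bundles both the MLM head and the NSP head.
from transformers import BertTokenizer, BertForPreTraining
```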
[123.9-158.0] Once we have those, we can initialize our tokenizer and model. We initialize the tokenizer like this, with BertTokenizer.from_pretrained, and we're going to be using the bert-base-uncased model; obviously, you can use whichever BERT model you'd like. And for our model we have the BertForPreTraining class. So that's our tokenizer and model.
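A sketch of that initialization:

```python
# Load the pre-trained bert-base-uncased checkpoint; any BERT checkpoint works here.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")
```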
[153.3-205.0] Now let's get our data. You don't need to worry about that warning; it's just telling us that we need to train the model before using it for inference predictions. So we get our data: we're going to pull it from here, so let me copy that, and it's just requests.get, pasted in there. We should see a 200 status code, which is good, and then we just extract the data using the text attribute, so text equals that. We also need to split it, because it's a set of paragraphs that are separated by a newline character, and we can see those in here.
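A rough sketch of that download; the URL below is a placeholder, since the real link is only given in the video description.

```python
# Placeholder URL: substitute the link from the video description.
DATA_URL = "https://example.com/training-text.txt"

res = requests.get(DATA_URL)
print(res.status_code)       # expect 200

text = res.text.split("\n")  # one paragraph per list entry
```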
[197.7-255.4] Now we need to prepare data for both NSP and MLM, and we'll go with NSP first. To do that we need to create sentence pairs, sentence A and sentence B. For roughly 50% of them we want sentence B to be a random sentence that is not related to sentence A, and for the other 50% we want sentence A to actually be followed by sentence B, so the pair is coherent. We're basically teaching BERT to distinguish coherence between sentences, so longer-term dependencies.
[248.0-324.4] We just want to be aware that within our text a single paragraph can contain multiple sentences, so when we split by the period we get several sentences from each one. We need to create, essentially, a list of all of the different sentences that we have, a bag that we can pull from when we're creating our training data for NSP. To do that we're going to use a comprehension: we write sentence, for each paragraph in the text (that's this variable), for sentence in para.split, which is where we're getting our sentence variable from. One thing to be aware of: if we have a look at one of these, we see we get this empty sentence, and we get that for all of our paragraphs, so we don't want to include those; we say if sentence is not equal to that empty string. We're also going to need the length of that bag for later.
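A minimal sketch of that bag-building comprehension; the bag and bag_size names are assumptions based on the narration.

```python
# Flatten every paragraph into one bag of sentences,
# dropping the empty strings left behind by trailing periods.
bag = [
    sentence
    for para in text
    for sentence in para.split(".")
    if sentence != ""
]
bag_size = len(bag)  # needed later when sampling random sentence B's
```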
[317.1-384.6] Now what we do is create our NSP training data. We want that 50-50 split, so we're going to use the random library to create the randomness. We initialize a list of sentence A's, a list of sentence B's, and also a list of labels, and then we loop through each paragraph in our text, so for paragraph in text. We want to extract each sentence from the paragraph, similar to what we've done above: sentences is going to be a list of all the sentences within each paragraph, so sentence for sentence in para.split by the period character, again making sure we're not including the empty ones.
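The transcript cuts off here, but based on the description above, the loop being built looks roughly like the sketch below. This is a hedged reconstruction of the 50/50 pair construction, not the video's exact code; the variable names follow the narration, and the 0/1 label convention is the one used by BertForPreTraining (0 = sentence B really follows A, 1 = random sentence).

```python
import random

sentence_a, sentence_b, labels = [], [], []

for paragraph in text:
    # All non-empty sentences in this paragraph.
    sentences = [s for s in paragraph.split(".") if s != ""]
    if len(sentences) < 2:
        continue  # need at least two sentences to form an A/B pair

    start = random.randint(0, len(sentences) - 2)
    sentence_a.append(sentences[start])
    if random.random() >= 0.5:
        # Coherent pair: sentence B really does follow sentence A.
        sentence_b.append(sentences[start + 1])
        labels.append(0)
    else:
        # Incoherent pair: sentence B is a random sentence pulled from the bag.
        sentence_b.append(bag[random.randint(0, bag_size - 1)])
        labels.append(1)
```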