Training BERT #5 - Training With BertForPretraining
Published: 2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
Video ID: IC9FaVPKlYc | Channel ID: UCv83tO5cePwHMt1952IVVHw

Transcript (from t=374.4s):
If the sentence is not empty, then once we're there, what we want to do is get the number of sentences within each sentences variable, so we just take its length. The reason we do that is that we want to check it a couple of times in the next few lines of code, and the first time we check it is now: we check that the number of sentences is
greater than one. This is because we're concatenating two sentences to create our training data; we don't want just one sentence. We need cases where, as in this one, we have multiple sentences, so that we can select one sentence followed by the next. We can't do that with the single-sentence paragraphs, because there's no guarantee that one paragraph is going to be talking about the same topic as the next paragraph. So we just avoid that.
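As a rough sketch of that filter (the sentence splitting on ". " and all names here are illustrative, not taken from the video):

```python
# Hypothetical sketch of the paragraph filter described above: split each
# paragraph into sentences and keep only paragraphs with more than one
# sentence, since only those can supply a genuine (A, B) pair.
paragraphs = [
    "First sentence. Second sentence. Third sentence.",
    "A lone sentence.",
]

bag = []             # every sentence, used later for random sentence B picks
multi_sentence = []  # paragraphs that pass the num_sentences > 1 check

for paragraph in paragraphs:
    sentences = [s.strip() for s in paragraph.split(". ") if s.strip() != ""]
    bag.extend(sentences)
    num_sentences = len(sentences)  # "just get length"
    if num_sentences > 1:           # the check described above
        multi_sentence.append(sentences)
```

Only the first paragraph survives the check; the lone sentence still goes into the bag, where it can serve as a random (not-next) sentence B later.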
And in here, the first thing we want to do is set our start sentence; this is where sentence A is going to come from. We're going to randomly select, say for this example, any of the first one, two, three sentences. We'd want to select any of those three, but not the last one, because if that were sentence A, we wouldn't have a sentence B that follows it to extract.
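Putting this constraint together with the selection logic described next, a minimal sketch (with illustrative sentences and a seeded RNG so the run is reproducible) looks like:

```python
import random

random.seed(0)  # seeded only so the sketch is reproducible

bag = ["alpha.", "beta.", "gamma.", "delta."]  # sentences from the whole text
sentences = ["one.", "two.", "three."]         # one paragraph's sentences
num_sentences = len(sentences)

sentence_a, sentence_b, label = [], [], []

if num_sentences > 1:
    # never pick the last sentence as A: it has no following sentence B
    start = random.randint(0, num_sentences - 2)
    sentence_a.append(sentences[start])
    if random.random() > 0.5:
        # coherent pair: B genuinely follows A, label 0 (IsNext)
        sentence_b.append(sentences[start + 1])
        label.append(0)
    else:
        # incoherent pair: B is a random sentence from the bag, label 1
        sentence_b.append(bag[random.randint(0, len(bag) - 1)])
        label.append(1)
```

In the video this runs inside the loop over paragraphs, so the three lists grow to one entry per training pair.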
So we write random.randint(0, num_sentences - 2). Now we can get our sentence A, which we append as sentences[start]. Then for our sentence B, 50% of the time we want to select a random sentence from the bag up here, and 50% of the time we want to select the genuine next sentence. So we say: if random.random(), which gives a random float between 0 and 1, is greater than 0.5, then sentence B is going to be our coherent version, sentences[start + 1]. That means our label will have to be zero, because these two sentences are coherent; sentence B does follow sentence A. Otherwise, we select a random sentence for sentence B: we append from bag, and to pick a random one we do random.randint(0, len(bag) - 1), the same as we did earlier for the start. We also need to append the label, which is going to be one in this case. We can execute that, and it will work. I go a
little more in depth on this in the previous NSP video, so I'll leave a link to that in the description if you want to go through it. Now what we can do is tokenize our data. To do that, we just write inputs and we use a tokenizer; this is just normal Hugging Face transformers. We pass in sentence A and sentence B, and Hugging Face transformers will know what we want to do with that; it will deal with the formatting for us, which is pretty useful. We want to return PyTorch tensors, so return_tensors='pt'. We need to set everything to a max length of 512 tokens, so max_length=512; truncation needs to be set to True; and we also need to set padding='max_length'.
Okay, so that creates three different tensors for us: input IDs, token type IDs, and attention mask. For the pretraining model, we need two more tensors. First, we need our next sentence label tensor. To create that, we write inputs['next_sentence_label'], and that needs to be a LongTensor containing the labels we created before, in the correct dimensionality; that's why we're using the list here and the transpose. And we can have a look at what that creates as well.
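A minimal sketch of that tensor construction, with a stand-in `inputs` dict and an example `label` list: wrapping the flat list in another list and transposing turns shape `(1, n)` into the column shape `(n, 1)` the model expects.

```python
import torch

label = [0, 1, 0]  # example NSP labels built earlier
inputs = {}        # stand-in for the tokenizer's output dict

# wrap in a list then transpose: shape (1, n) -> (n, 1)
inputs["next_sentence_label"] = torch.LongTensor([label]).T
```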
667.2
683.84
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t674.96
So look at the first 10. We get that. Okay. And now what we want to do is create our mask data. So
674.96
691.92
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t684.5600000000001
we need the labels for our mask first. So when we do this, what we'll do is we're going to clone
684.56
698
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t691.92
the input IDs tensor. We're going to use that clone for the labels tensor. And then we're going to go
691.92
704.48
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t698.0
back to our input IDs and mask around 15% of the tokens in that tensor. So let's create that labels
698
719.76
Training BERT #5 - Training With BertForPretraining
2021-06-15 15:00:19 UTC
https://youtu.be/IC9FaVPKlYc
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
IC9FaVPKlYc-t704.4799999999999
tensor. It's going to be equal to inputs, input IDs, detach, and clone. Okay. So now we've got
our labels tensor. Now we'll see in here that we have all of the tensors we need, but we still need to mask around 15% of the input IDs before moving on to training our model. To do that, we'll create a random array using torch.rand, in the same shape as our input IDs; that just creates a big tensor of values between zero and one. What we want to do is mask around 15% of positions, so we write something like this. That will give us our mask here, but we also don't want to mask special tokens, which we currently are doing: we're masking classification tokens and also masking padding tokens up here. So we need to add a little more logic. Let me assign this to a variable and add the logic that says the input ID is not equal to 101, which is our CLS token; that's what we get down here, and you can see the impact, we get False now. We also want to do the same for our separator tokens, which are 102 (we can't see any of those here), and for our padding tokens we use zero. You can see these all go False now, like so. So that's our masking array.
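Together with the `labels` clone from a moment ago, the masking array can be sketched like this (the input IDs are made up for the example; 101, 102 and 0 are the CLS, SEP and PAD token IDs for bert-base-uncased):

```python
import torch

torch.manual_seed(0)  # seeded only so the sketch is reproducible

inputs = {"input_ids": torch.tensor([[101, 2023, 2003, 1037, 7099, 102, 0, 0]])}
inputs["labels"] = inputs["input_ids"].detach().clone()  # keep originals as MLM labels

rand = torch.rand(inputs["input_ids"].shape)  # uniform floats in [0, 1)
mask_arr = (rand < 0.15) \
    * (inputs["input_ids"] != 101) \
    * (inputs["input_ids"] != 102) \
    * (inputs["input_ids"] != 0)
```

Multiplying the boolean tensors acts as a logical AND, so a position is only maskable when its random draw falls under 0.15 and it is not a special token.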
Now what we want to do is loop through all of these, extract the positions at which they are not False, so where we have the mask, and use those index values to mask our actual input IDs up here. To do that, we write for i in range(inputs.input_ids.shape[0]), which is like iterating through each row. For each row we get selection, the indices where we have True values in the mask array; we do that using torch.flatten on the mask array at the given index, where the values are nonzero, and we create a list from that. So we have that. I want to quickly show you what the selection looks like: it's just a selection of indices to mask. We want to apply that to inputs.input_ids at the current index; we select those specific items and set them equal to 103, which is the masking token ID.
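A sketch of that row-by-row masking, using a fixed example mask so the result is deterministic (103 is the [MASK] token ID for bert-base-uncased):

```python
import torch

input_ids = torch.tensor([[101, 2023, 2003, 1037, 7099, 102, 0, 0]])
mask_arr = torch.tensor([[False, True, False, True, False, False, False, False]])

for i in range(input_ids.shape[0]):  # iterate over each row
    # indices of the True values in this row's mask
    selection = torch.flatten(mask_arr[i].nonzero()).tolist()
    input_ids[i, selection] = 103    # replace selected tokens with [MASK]
```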
Okay, so that's our masking. Now we need to take all of our data here and load it into a PyTorch DataLoader, and to do that we need to reformat our data into a PyTorch Dataset object, which we do here. The main thing to note is that we pass our data into the initialization, which assigns it to the self.encodings attribute. Then here we say: given a certain index, we want to extract the tensors in dictionary format for that index. And here we're just returning the length, how many samples we have in the full dataset. So run that. We initialize our dataset using that class: dataset = MeditationsDataset, passing our data in there, which is inputs. And with that, we can create our data loader like this: torch.utils.data.DataLoader, and we pass our dataset.
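A sketch of the dataset class and loader described here (MeditationsDataset is the name used in this video series; the video adds batch_size=16 and shuffle=True a little later when setting up the training loop, so they're included now, and the stand-in encodings replace the real tokenizer output):

```python
import torch

class MeditationsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings  # the dict of tensors built above

    def __getitem__(self, idx):
        # one sample, as a dict of tensors for the given index
        return {key: val[idx] for key, val in self.encodings.items()}

    def __len__(self):
        # number of samples in the full dataset
        return self.encodings["input_ids"].shape[0]

# stand-in encodings; the real ones come from the tokenizer steps above
inputs = {
    "input_ids": torch.zeros(32, 8, dtype=torch.long),
    "labels": torch.zeros(32, 8, dtype=torch.long),
}
dataset = MeditationsDataset(inputs)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
```

The default collate function stacks the per-sample dicts back into batched tensors, which is why returning a plain dict from __getitem__ is enough.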
Okay, so that's ready. Now we need to set up our training loop. The first thing we need to do is check whether we are on GPU or not; if we are, we use it, and we do that like so: device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu'). That's saying use the GPU if we have a CUDA-enabled GPU, otherwise use the CPU. Then we want to move our model over to that device, and we also want to activate the training mode of our model.
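A sketch of that device setup; a tiny nn.Linear stands in for the BertForPreTraining model so the snippet stays self-contained:

```python
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = torch.nn.Linear(8, 2)  # stand-in for the BERT model
model.to(device)               # move the model to GPU if one is available
model.train()                  # activate training mode (enables dropout, etc.)
```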
Then we need to initialize our optimizer. I'm going to be using Adam with weight decay: from transformers import AdamW, and we initialize it like this, optim = AdamW. We pass our model parameters to that, and we also pass a learning rate, which is going to be 5e-5. Okay, and now we can create our training loop.
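The video imports AdamW from transformers; torch.optim.AdamW is the equivalent optimizer and is used here so the sketch stays self-contained (the tiny model is again a stand-in):

```python
import torch

model = torch.nn.Linear(8, 2)  # stand-in for the BERT model
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
```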
We're going to use tqdm to create the progress bar, and we're going to go through two epochs. So for epoch in range(2), we initialize our loop by wrapping our data loader in tqdm, and we set leave=True so that we can see the progress bar. Then we loop through each batch within that loop. Up here, I didn't actually set the batches, my mistake; so up where we initialize the data loader, we want to set batch_size=16 and also shuffle the dataset as well. Okay. So for batch in loop, here we want to zero the gradients on our optimizer, and then we need to load in
each of our tensors, and there are quite a few of them; we can see them with inputs.keys(). We need to load in each one: input_ids = batch['input_ids'], accessing the batch like a dictionary, and we also want to move each of the tensors that we're using to our device. We do that for each one: the attention mask, the next sentence labels, and also the labels. Okay, and now we can actually process that through our model. In here, we just need to pass all of the tensors that we have: input_ids, then token_type_ids (just copy this), attention_mask, next_sentence_label.
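Pulling the loop together, here is a runnable sketch with a stand-in regression model in place of BertForPreTraining; in the video, tqdm wraps the loader for the progress bar and the model's output carries a .loss, both of which are simplified here:

```python
import torch

torch.manual_seed(0)

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = torch.nn.Linear(8, 1).to(device)
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

# stand-in dataset/loader; the real ones were built above
dataset = torch.utils.data.TensorDataset(torch.randn(32, 8), torch.randn(32, 1))
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model.train()
for epoch in range(2):
    for batch in loader:
        optim.zero_grad()                     # zero the gradients
        x, y = (t.to(device) for t in batch)  # move each tensor to our device
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()                       # backpropagate
        optim.step()                          # update the weights
```

With the real model, the forward call takes input_ids, token_type_ids, attention_mask, next_sentence_label and labels as keyword arguments, and the combined MLM + NSP loss is read from the returned output.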