How to Build Custom Q&A Transformer Models in Python
2021-02-12 13:30:03 UTC
https://youtu.be/ZIRmXkHp0-c
for the very first item every single time. So let's just update that, because obviously that won't get us very far, and update this one as well. Now this should look a little bit better. It's lucky we checked.

Okay, so our data at the moment is in the right format.
We just need to use it to create a PyTorch dataset object. To do that we obviously need to import PyTorch, and then we define the dataset using a class, passing in torch.utils.data.Dataset as the base class. We need to initialize that, like so. This is coming straight from the Hugging Face Transformers documentation, so I don't take credit for it. We essentially need to do this so that we can load our data through the PyTorch data loader later on, which makes things incredibly easy. Then we just have one more method here, with its return, and this one as well. That should be okay. So we apply this to our encodings to create the dataset objects, first for the training set and then the same again for the validation set. Okay, so that is our data.
Almost fully prepared. All we do now is load it into a data loader object, but this is everything on the data side done. Which is great, because I know this bit does take some time, and I know it's not the most interesting part, but it's just something we need to do, and need to understand as well. So now we get to the more interesting bit. We'll just add the imports in here. First we need our data loader.
We're going to import the Adam optimizer with weight decay (AdamW), which is pretty commonly used when fine-tuning transformer models. Because transformer models are generally very large, they can overfit very easily, and this weight-decayed Adam optimizer essentially reduces the chances of that happening, which is supposed to be very useful and quite important. So obviously we're going to use that. And the final bit is tqdm. tqdm is the progress bar we're going to be using, so that we can actually see the progress of our training. Otherwise we're just going to sit there for probably quite a long time not knowing what is actually happening, and trust me, it won't take long before you start questioning whether anything is happening, because it takes a long time to train these models. So those are our imports. And I'm being stupid again here, I typed from twice. Okay, so that's all good.
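The imports, sketched out. Note that AdamW was importable from transformers at the time; newer versions deprecate it in favour of torch.optim.AdamW.

```python
from torch.utils.data import DataLoader
from transformers import AdamW  # or: from torch.optim import AdamW
from tqdm import tqdm
```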
Now we just need to do a few little bits of setup. We need to tell PyTorch whether we're using CPU or GPU. In my case it will be a GPU. If you're using a CPU this is going to take you a very long time to train, and it's still going to take a long time on GPU, so just be aware of that. What we're going to do here is set our device to CUDA if CUDA is available, and otherwise use the CPU (good luck if that's what you're doing). Once we've defined the device we want to move our model over to it, so we just call model.to(device). This .to() method is essentially a way of transferring data between different hardware components, your CPU or GPU. It's quite useful.
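As a sketch, the device setup is just:

```python
# Train on the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# Move the model's parameters across to that device.
model.to(device)
```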
Then we want to activate our model for training. There are two modes here: .train() and .eval(). A lot of layers and parts of your model behave differently depending on whether you're using the model for training or for inference, which is making predictions, so we just need to make sure our model is in the right mode for whatever we're doing. Later on we'll switch it to eval to make some predictions. So that's almost everything.
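That's a one-liner:

```python
# Put the model in training mode; layers like dropout behave differently
# here than after model.eval().
model.train()
```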
Now we just need to initialize the optimizer, and here we're using that weight-decayed Adam optimizer, AdamW. We need to pass in our model parameters and also give it a learning rate, and we're going to use this value here. These are the recommended parameters for what we're doing.
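Something like this; the exact learning rate used on screen isn't visible in the transcript, so the 5e-5 below is an assumption (it's a typical fine-tuning value):

```python
# AdamW over all model parameters; lr=5e-5 is an assumed typical
# fine-tuning learning rate, not a value confirmed in the transcript.
optim = AdamW(model.parameters(), lr=5e-5)
```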
The one thing that I have somehow missed is actually initializing the model, so let's just add that in. All we're doing here is again loading a pre-trained one, like we did before when we loaded the Transformers tokenizer, but this time it's for question answering. DistilBertForQuestionAnswering is the DistilBERT model with a question-answering head added onto the end of it. Essentially, with Transformers you have all these different heads that you can add on, and the model will do different things depending on which head it has. So let's initialize that from pre-trained, using the same checkpoint we used up here, which is distilbert-base-uncased. Sometimes you will need to download that. Fortunately I don't need to, as I've already done it, but this can also take a little bit of time. Not too long though, and hopefully you get a nice progress bar as well.
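A sketch of that initialization:

```python
from transformers import DistilBertForQuestionAnswering

# DistilBERT with a question-answering head; the weights are downloaded
# on first use and cached after that.
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')
```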
Okay, so now that is all set up we can initialize our data loader. All we're doing here is using the PyTorch DataLoader object. We just pass in our training dataset; the batch size, which is how many samples we want to train on at once, in parallel, before updating the model weights, and that will be 16; and we also want to shuffle the data. We don't want the model to train on one batch where it just learns about Beyonce, then the next where it's learning about Chopin, forever switching between those but never having a good mix of different things to learn about within a single batch. Hmm, this dataset variable seems a bit of a weird name to me, so I'm just going to change it. And apparently I also can't spell. There we go. And that is everything.
Now we can actually begin our training loop. We're going to go for three epochs, and what we want to start with here is a loop object. We do this mainly because we're using tqdm as a progress bar; otherwise we wouldn't need to, and there would be no point in doing it. All this does is kind of pre-initialize the loop that we're going to go through. We're obviously going to loop through every batch within the train loader, so we just add that in here. Then there's this other parameter, which I don't know if we need, so let's leave it, but essentially you can add leave=True in order to keep your progress bar in the same place with every epoch. Whereas at the moment, with every epoch, it will create a new progress bar. We are going to create a new progress bar, but if you don't want that, and you want it to just stay in the same place, you add leave=True into this function here.
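The skeleton so far, as a sketch:

```python
from tqdm import tqdm

for epoch in range(3):
    # Wrap the DataLoader in tqdm to get a live progress bar for the epoch;
    # leave=True keeps each finished bar on screen instead of replacing it.
    loop = tqdm(train_loader, leave=True)
    for batch in loop:
        ...  # training step goes here
```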
After that, we go through each batch within our loop, and the first thing we need to do is set all of our calculated gradients to zero. With every iteration, every batch, we calculate gradients at the end of it, which tell the model in which direction to change its weights, and when we go into the next iteration we don't want those gradients to still be there. So all we're doing here is reinitializing the gradients at the start of every loop, giving us a fresh set to work with every time.
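Inside the batch loop, that's just:

```python
# Clear gradients left over from the previous batch before this
# batch's forward and backward pass.
optim.zero_grad()
```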
Next we just want to pull in our data. This is everything relevant that we're going to be feeding into the training process, everything within our batch, and in here we have all of our different items. We want to pull all of these in, and we also want to move them across to the GPU in my case, or whatever device you are working on. We do that for the input IDs, the attention mask, the start positions, and the end positions. These start and end positions are essentially the labels, the targets that we want our model to optimize for, while the input IDs and attention mask are the inputs. So now that we have those defined, we just need to feed them into our model.
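Sketching that unpacking step, assuming the batch keys match the fields we put into the encodings earlier:

```python
# Pull each tensor out of the batch and move it onto our device.
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
start_positions = batch['start_positions'].to(device)
end_positions = batch['end_positions'].to(device)
```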