How to Build Custom Q&A Transformer Models in Python
Published: 2021-02-12 13:30:03 UTC
https://youtu.be/ZIRmXkHp0-c
a few of these end positions as well. OK, so I think that looks pretty good, and that means we can move on to actually encoding our text.
To tokenize, or encode, our text, this is where we bring in a BERT tokenizer, so we need to import the Transformers library for this. And from Transformers, we are going to import DistilBERT. DistilBERT is a smaller version of BERT, which is just going to run a bit quicker, but it will still take a fairly long time.
And we're going to import the fast version of this tokenizer, because this allows us to more easily convert our character start and end locations into token start and end locations later on. So first, we need to actually initialize our tokenizer, which is super easy. All we're doing is loading it from a pre-trained model.
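In code, that looks something like the following sketch (the checkpoint name distilbert-base-uncased is the standard pre-trained DistilBERT checkpoint, and is an assumption about the exact one used here):

```python
from transformers import DistilBertTokenizerFast

# load the fast (Rust-backed) tokenizer from a pre-trained checkpoint
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
```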
And then all we do to create our encodings is call the tokenizer. So we'll do the training set first: we call the tokenizer, and in here we include our training contexts and the training questions. What this will do is actually merge these two strings together, so what we will have is our context, then a separator token, followed by the question, and this will be fed into DistilBERT during training. I just want to add padding there as well, and then we'll copy this and do the same for our validation set. Okay, and this will convert our data into encoding objects.
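A minimal sketch of that call, assuming the train_contexts, train_questions, val_contexts, and val_questions lists were built earlier in the video:

```python
# encode context/question pairs; truncation caps each sample at the
# model's maximum length and padding fills the remaining space
train_encodings = tokenizer(train_contexts, train_questions,
                            truncation=True, padding=True)
val_encodings = tokenizer(val_contexts, val_questions,
                          truncation=True, padding=True)
```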
So what we can do here is print out the different parts that we have within our encodings. In here you have the input IDs, so let's access those, and you'll find we have a big list of all of our samples; check that we have 130K, and let's open one of those. Okay, we have these token IDs, and this is what BERT will be reading. Now, if we want to have a look at what this actually is in sort-of human-readable language, we can use the tokenizer to just decode it for us.
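For example (a sketch; the roughly 130K sample count is the figure mentioned in the video and depends on the dataset split):

```python
# how many encoded samples do we have?
print(len(train_encodings['input_ids']))   # roughly 130K

# the raw token IDs for one sample - this is what the model reads
print(train_encodings['input_ids'][0])

# decode back to human-readable text to inspect the format
print(tokenizer.decode(train_encodings['input_ids'][0]))
```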
Okay, this is what we're feeding in. We have a couple of these special tokens, which just mark out the sequence, and in here we have a processed form of our original context. Now, you'll find that the context actually ends here, and, like I said before, we have this separator token, and then after that we have our actual question. This is what is being fed into BERT, but obviously the token ID version, so it's just good to be aware of what is actually being fed in and what we're actually using here; this is the format that BERT is expecting. Then after that we have another separator token, followed by all of our padding tokens, because BERT is going to be expecting 512 tokens for every one sample, so we just need to fill that space, essentially; that's all the padding is doing. So let's remove those and we can continue.
So the next thing we need to add to our encodings is the start and end positions, because at the moment we just don't have them in there. To do that, we need to add an additional bit of logic. We use this char_to_token method. So if we just take out one of these (let's take the first one), okay, we have this, and what we can do is actually modify it to use the char_to_token method. We remove the input IDs, because we just need to pass in the index of whichever encoding we want to get the start and end position of. All this is doing is converting from the character that we have found a position for to the token that we want to find a position for. And what we need to add is the train answers; we index our position again, because the answers and the encodings (the context and question) need to match up to the answer that we're asking about; and we use the answer start. So here we're just feeding in the position of the character, and we're expecting to get back the position of the token, which is position 64.
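As a sketch of that single lookup (assuming the answer records carry the 'answer_start' character index prepared earlier, and noting that 64 is just the value seen for this particular sample):

```python
# convert the answer's first character index into a token index,
# for sample 0 in the batch
start = train_encodings.char_to_token(0, train_answers[0]['answer_start'])
print(start)   # e.g. 64
```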
So all we need to do now is do this for both of those, the start position and the end position; see, here we should get a different value. But this is one of the limitations of this method: sometimes it is going to return nothing, as you can see it's not returning anything here, and that is because sometimes the character position actually lands on a space. When the tokenizer looks at that space, it says, okay, that's nothing, we're not concerned about spaces, and it returns this None value that you can see here. So this is something that we need to consider and build in some added logic for.
So to do that, again, we're going to use a function to contain all of this, and we'll call it add_token_positions. Here we'll have our encodings and our answers, and then we just modify this code so we have the encodings and we have the answers. And because we're collecting all of the token positions, we also need to initialize lists to contain them, so we create start_positions, an empty list, and end_positions. Now we just want to loop through every single answer and encoding that we have, like so. And here we have our start position, so we need to append that to our start_positions list, and we just do the same for our end positions, which is here.
Now, here we can deal with the problem that we had. If we find that the most recent end position, the minus-one index, is None, that means it wasn't found, which means the character was a space. So what we do is change it to instead use the answer end minus one, and all this needs to do is update the end positions here. Okay, that's great, but in some cases this also happens with the start position, and that is for a different reason. The reason it will occasionally happen with the start position is that the passage of data we're adding in here (so, you saw before, we had the context, the separator token, and then the question) sometimes has the context truncated in order to fit in the question. So some of it will be cut off, and in that case we do have a bit of a problem, but we still need to allow our code to run without any errors. So what we do is modify the start positions again, just like we did with the end positions, obviously only if it's a None, and we just set it to be equal to the maximum length that has been defined by the tokenizer.
It's as simple as that. Now, the only final thing we need to do, because we're using the encodings, is actually update those encodings to include this data, because as of yet we haven't added it back in. To do that we can use this quite handy update method and just add in our data as a dictionary: so we have our start positions, and we also have our end positions. And then, again, we just need to apply this to our training and validation sets, so let's just modify that. We add the training encodings here and the train answers, and we do that again for the validation set.
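Putting those steps together, the function described here looks something like the following sketch (it assumes each answer dict carries the 'answer_start' and 'answer_end' character indices prepared earlier in the video):

```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        # convert character positions to token positions
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))
        # a None start position means the context was truncated to fit
        # the question; fall back to the tokenizer's maximum length
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        # a None end position means char_to_token landed on a space,
        # so step back one character
        if end_positions[-1] is None:
            end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] - 1)
    # write the new fields back into the encoding objects
    encodings.update({'start_positions': start_positions,
                      'end_positions': end_positions})

add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
```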
So now let's take a look at our encodings, and here we can see, great, we now have those start positions and end positions. We can even have a quick look at what they look like.
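For example (a sketch):

```python
# the encodings now expose the two new label fields
print(train_encodings.keys())
print(train_encodings['start_positions'][:5])
print(train_encodings['end_positions'][:5])
```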
What we've done is actually not included the index here, so we're just taking it