Transcript segments. Fields per segment: title | published | url | video_id | channel_id | id | text | start | end. start and end are offsets in seconds; id is the video_id plus the segment start time (e.g. scJsty_DR3o-t873.54).
How to Build Q&A Models in Python (Transformers)
Published: 2021-02-19 15:00:21 UTC
URL: https://youtu.be/scJsty_DR3o (video_id: scJsty_DR3o, channel_id: UCv83tO5cePwHMt1952IVVHw)

[873.54-882.02] And the output of that tokenizer, our token IDs, will be fed into BERT.
[877.46-887.06] BERT will return us a span start and span end,
[882.02-890.18] which is essentially two numbers, which signify the start position
[887.06-894.34] and end position of our answer within the context.
[890.18-898.26] And this pipeline will take those two numbers and apply them to our context
[894.34-902.98] to get the text, which is our answer, from that.
[898.98-906.34] So it's essentially just a little wrapper, and it adds a few functionalities
[902.98-910.82] so that we don't have to worry about converting all of these things.
[907.78-912.34] So now we just need to pass in our model.
[910.82-916.02] And the tokenizer as well.
[914.98-918.18] And it's as simple as that.
[916.02-923.22] That's our pipeline set up.
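
A minimal sketch of that setup. The checkpoint name is an assumption (the transcript never states which SQuAD-tuned BERT model is loaded); any BERT model fine-tuned for question answering would slot in the same way:

```python
from transformers import BertForQuestionAnswering, BertTokenizer, pipeline

# Assumed checkpoint -- the video does not name the exact model it uses.
model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"

model = BertForQuestionAnswering.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)

# The pipeline is the "little wrapper" described above: it tokenizes the input,
# runs BERT, and converts the predicted start/end span back into answer text.
nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
```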
[918.82-926.66] So if we want to use that now, all we need to do is write nlp.
[924.74-933.22] And then here we pass a dictionary.
[927.62-934.9] And this dictionary, like I said before, needs to contain our question and context.
[934.18-940.82] So the question.
[934.9-946.02] And for this, we will just pass the first of our questions up here again.
[940.82-950.18] So this is questions at index zero.
[948.66-954.58] And then we also pass our context,
[951.22-957.94] which is inside the context variable up here.
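
As a sketch, with placeholder data (the notebook's actual questions list and IPCC context passage are not reproduced in the transcript):

```python
# Hypothetical stand-ins for the notebook's `questions` list and `context` string.
questions = ["What organization is the IPCC a part of?"]
context = "The Intergovernmental Panel on Climate Change (IPCC) is a body of the United Nations ..."

# The question-answering pipeline accepts a dict with these two keys.
answer = nlp({
    "question": questions[0],
    "context": context,
})
```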
[957.38-966.5] Okay.
[957.94-970.18] And this will output a dictionary containing, well, we can see the score of the answer.
[966.5-980.02] So that is the model's confidence that this is actually an answer.
[970.9-981.38] Like I said before, the start index and end index, and what those start and end indices map to,
[980.02-984.82] which is United Nations.
[981.38-988.58] So our question was, what is the answer?
[984.82-993.78] And we got United Nations, which is correct.
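
The returned dictionary has this shape. The 118/132 indices and the answer string are the values read out in the video; the score shown is illustrative:

```python
print(answer)
# {'score': 0.99, 'start': 118, 'end': 132, 'answer': 'United Nations'}
#   score -> the model's confidence that this is actually an answer (illustrative value)
#   start -> index of the first character of the answer within the context
#   end   -> index one past the last character of the answer
```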
[988.58-1001.94] So let me just show you what I mean with this start and end.
[993.78-1005.3] So if we do 118 here, we get the first letter of our answer, because we are going through here
[1001.94-1011.06] and it is pulling out this specific character.
[1007.3-1019.38] If we then take the first letter of our answer
[1011.06-1026.66] and go all the way up to our end, which is at 132,
[1020.66-1035.06] we get the full answer, because what we're doing here is pulling out all the characters
[1026.66-1039.62] from character 118 all the way up to character 132, which is actually this comma here.
[1035.06-1042.58] But because Python slicing excludes the end index, we stop at the character before.
[1039.62-1046.42] And that gives us United Nations, which is our answer.
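
In code, reusing the answer dictionary and context from above:

```python
start = answer["start"]  # 118
end = answer["end"]      # 132 -- the comma immediately after the answer

print(context[start])      # 'U', the first letter of the answer
print(context[start:end])  # 'United Nations' -- the slice stops before the comma
```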
[1044.42-1053.22] So let's ask another question.
[1048.98-1060.82] We have "what UN organizations established the IPCC?"
[1055.54-1066.5] And we get this: WMO and United Nations Environment Program, UNEP.
[1062.66-1072.82] So if we go in here, we can see it was first established in 1988
[1066.5-1076.74] by two United Nations organizations, the World Meteorological Organization,
[1072.82-1084.1] WMO, and the United Nations Environment Program, UNEP.
[1077.78-1091.14] So here we have two organizations, and it is only actually pulling out the full name of one of those.
[1085.7-1097.06] So I think the reason for that is all that it is reading is WMO and United Nations Environment
[1091.14-1099.46] Program. So it is pulling out those two organizations in the end, just not the full name of the first one.
[1097.86-1103.78] So it's still a pretty good result.
[1100.1-1109.06] And let's go down to this final question.
[1105.78-1114.74] So, what does the UN want to stabilize?
[1110.74-1122.74] And here we're getting the answer of greenhouse gas concentrations in the atmosphere.
[1114.74-1128.02] So if we go down here, we can see the ultimate objective of the UNFCCC
[1122.74-1131.54] is to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent
[1128.02-1137.78] dangerous anthropogenic interference with the climate system.
[1132.5-1143.7] So again, we are getting the answer, stabilize greenhouse gas concentrations.
[1137.78-1145.78] So our model has gone through each one of those questions and successfully answered them.
[1143.7-1149.54] And all we've done is written a few lines of code.
[1147.06-1154.98] And this is without us fine-tuning the model at all.
[1150.1-1159.22] Now, when you do go and apply these to your own problems, sometimes you won't need to do
[1154.98-1161.86] any fine-tuning, and the model as-is will be more than enough.
[1159.22-1164.98] But a lot of the time you will need to fine-tune it.
[1161.86-1169.54] And in that case,
[1164.98-1174.66] you fine-tune it, and there are a few extra steps.
[1170.5-1177.38] But for this introduction, that's everything I wanted to cover there.
[1174.66-1179.94] In terms of fine-tuning, I have covered that in another video.
[1177.38-1182.5] So I will put a link to that in the description.
[1180.58-1184.74] But that's everything for this video.
[1182.5-1188.02] So thank you very much for watching.
[1184.74-1188.5] I hope you enjoyed it, and I will see you again next time.
[1188.02-1195.22] Thanks.
Language Generation with OpenAI's GPT-2 in Python
Published: 2020-11-24 14:22:46 UTC
URL: https://youtu.be/YvVQgvAz9dY (video_id: YvVQgvAz9dY, channel_id: UCv83tO5cePwHMt1952IVVHw)

[0.0-14.64] Hi, and welcome to the video. We're going to go through language generation using GPT-2.
[6.8-22.0] Now, this is actually incredibly easy to do, and we can build this entire model, including the imports,
[14.64-29.6] the tokenizer, the model, and outputting our generated text, with just seven lines of code, which is
[22.0-49.76] pretty insane. Now, the only libraries we need for this are PyTorch and Transformers. So we'll go
[29.6-63.6] ahead and import them now. Now, all we need from the Transformers library are the GPT-2 LM head
[49.76-85.92] model and the GPT-2 tokenizer. So we can initialize both of those as well now, and both will be from
[63.6-96.72] pre-trained. So now we have initialized our tokenizer and model.
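
A sketch of those first lines. The checkpoint size is an assumption; the video simply loads the pretrained GPT-2:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# 'gpt2' is the base checkpoint; the video does not say which size it uses,
# so this is an assumption (larger variants such as 'gpt2-medium' work too).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
```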
[85.92-105.76] We just need a sequence of text to feed in and get our model going. So I've taken a snippet of text from the Wikipedia page
[96.72-112.56] of Winston Churchill, which is here, and it's just a small little snippet talking about when he took
[105.76-119.84] office during World War II. Now, from this, I've tested it briefly and it seems to give some
[112.56-134.16] pretty interesting results. So we will go ahead and use this. All we need to do is tokenize it.
[126.56-143.28] Now, all we're doing here is taking each of these words and splitting them into tokens. So that would
[134.16-151.36] be a list where each word is its own item. So for "he began his premiership", each one of those words would be
[143.28-164.08] a separate value within that list. Once we have them in that tokenized format, our tokenizer will
[151.36-169.68] then convert them into numerical IDs, which map to a word vector that's been trained to work with GPT-2.
[164.08-179.44] Now, because we're using PyTorch, we just need to remember to return PyTorch ("pt") tensors here.
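
A sketch of the tokenization step; the sequence string is a placeholder for the Churchill passage used in the video:

```python
# Placeholder text standing in for the Wikipedia snippet about Churchill
# taking office -- the full passage is not reproduced in the transcript.
sequence = "He began his premiership during the Second World War ..."

# Split the text into tokens, map them to numerical IDs, and return
# a PyTorch tensor (return_tensors='pt').
inputs = tokenizer.encode(sequence, return_tensors="pt")
```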
[169.68-192.96] So now we have our inputs, we just need to feed them into our model. So we can do that using
[179.44-201.76] model.generate. We add our inputs. Now, we also need to tell the generate method how long we want our generated
[192.96-212.48] sequence to be. So all we do for that is add a max length. And this will act as the cut-off point.
[201.76-223.04] Anything longer than this will simply be cut off. And now here we are just generating our output.
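
A sketch of the generation call; the max_length value is an assumption, since the video does not state the number it uses:

```python
# max_length caps the length of the generated sequence; anything beyond
# it is simply cut off. 200 is an assumed value here.
outputs = model.generate(inputs, max_length=200)
```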
[212.48-232.0] We also need to assign this to the outputs variable here so that we can actually read from it and
[223.04-236.64] decode it. So, to decode our output IDs, because it will output numerical IDs representing words, just
[232.0-246.0] like we fed into it, we need to use the tokenizer decode method.
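
In code, that decoding step looks like this:

```python
# The generated IDs sit at index 0 of the outputs object; skip_special_tokens
# drops end-of-sequence, padding, and unknown-word tokens from the output.
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```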
[236.64-254.08] And our output IDs are in the zero index of the outputs object. And we also want to skip any
[246.0-265.2] special tokens. So this would be stuff like end-of-sequence tokens, padding tokens, unknown-word
[254.08-271.44] tokens, and so on. And then we can print the text. Now, we can see here that it's basically just going
[265.2-277.44] over and over again, saying the same things, which is not really what we want. So this is a pretty
[271.44-287.84] common problem, and all we need to do to fix this is add a new argument. So we can just add a new
[277.44-292.32] argument to our generate method here. So we simply do do_sample=True. And then we can rerun this.
[290.48-300.24] And this looks pretty good now.
[295.2-307.04] So we can add more randomness and restrict the number of possible texts.
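
A sketch of the sampling setup. do_sample is the argument named in the video; temperature and top_k are assumed additions that implement the "more randomness" and "restrict the number of possible texts" ideas mentioned in the last line:

```python
outputs = model.generate(
    inputs,
    max_length=200,   # assumed value, as before
    do_sample=True,   # sample instead of always picking the most likely token,
                      # which is what causes the repetition seen above
    temperature=0.8,  # higher values add randomness to the sampling
    top_k=50,         # restrict sampling to the 50 most likely next tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```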