Training BERT #4 - Train With Next Sentence Prediction (NSP)
Published: 2021-05-27 16:15:39 UTC
URL: https://youtu.be/x1lAcT3xl5M
Video ID: x1lAcT3xl5M
Channel ID: UCv83tO5cePwHMt1952IVVHw
Segments: x1lAcT3xl5M-t2116.0 through x1lAcT3xl5M-t2204.0 (2,116 s to 2,232 s)

...calculated here. And that is all we actually need for our training loop. We do also have the tqdm up here as well, so I just want to use that. And what we're going to do is we're just going to set the description of our loop at the current step equal to the epoch. So this is purely aesthetic; we don't need this for training, but it's just so we can see what is going on. And we also want loop.set_postfix, and here I'm going to add in our loss, which is just going to be loss=loss.item(), like that.

Now that should be okay. Let's give it a go and see what happens. Okay, so that looks pretty good. You can see that our model is training; the loss is decreasing. Now, there isn't that much training data, so we're not going to see anything crazy here, but we can see that it is moving in the right direction. So that's pretty good.

So that's everything for this video. It's a pretty long video; I recorded for 41 minutes, although it'll probably be a little bit shorter for you. But yeah, that's long. So that's everything for this video. I hope it's been useful, and I will see you in the...
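The tqdm calls described in the transcript can be sketched as follows. This is a minimal illustration, not the video's notebook: the DataLoader, BERT model, and optimizer are replaced by a toy loop with a fake decreasing loss, so that the `set_description` and `set_postfix` calls can be seen in isolation.

```python
from tqdm import tqdm

# Toy stand-in for the training loop: two epochs over four fake batches.
epochs = 2
history = []
for epoch in range(epochs):
    loop = tqdm(range(4), leave=True)
    for step in loop:
        # In the real loop this would be loss.item() from the model output.
        loss_value = 1.0 / (epoch * 4 + step + 1)
        history.append(loss_value)
        # Purely cosmetic: label the bar with the epoch and current loss.
        loop.set_description(f"Epoch {epoch}")
        loop.set_postfix(loss=loss_value)
print(round(history[-1], 3))  # 0.125
```

In the real loop, `loop = tqdm(loader, leave=True)` wraps the DataLoader, and the same two calls sit at the end of each training step.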
Build NLP Pipelines with HuggingFace Datasets
Published: 2021-09-23 13:30:07 UTC
URL: https://youtu.be/r-zQQ16wTCA
Video ID: r-zQQ16wTCA
Channel ID: UCv83tO5cePwHMt1952IVVHw
Segments: r-zQQ16wTCA-t0.0 onward (0 s to 303.2 s)

Hi, welcome to this video. We're going to have a look at Hugging Face's Datasets library. We're going to have a look at some of what I think are the most useful datasets, and we're going to look at how we can use the library to build what I think are very good pipelines, or data input pipelines, for NLP. So let's get started.

So the first thing we want to do is actually install datasets. So we'll go pip install datasets, and that will install the library for you. After this, we'll want to go ahead and import datasets, and then we can start having a look at which datasets are available to us.

Now, there are two ways that you can have a look at all of the datasets. The first one is using the datasets viewer, which you can find on Google. You just type in "datasets viewer", and it's an interactive app which allows you to go through and have a look at the different datasets. I've already spoken about that a lot before, and it's super easy to use, so we're not going to go through it. Instead, we're just going to have a look at how we can view everything in Python, which is the second option.

So first we just list all of our datasets. Now, I'm going to write ds_list here, and from this we will get, I think, something like 1,400 datasets now. So it's quite a lot. So if we go len(ds_list)... so yeah, it's 1.4 thousand datasets, which is obviously a lot.
And some of these are massive as well. So if we, for example, were to look at the OSCAR datasets in ds_list, we could go: dataset for dataset in ds_list, if 'oscar' is in the dataset. So these are just dataset names.
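The filter from the transcript is just a list comprehension over the names. Here is the pattern with a small hard-coded stand-in for ds_list (the names below are illustrative only), so it runs without touching the hub.

```python
# Hypothetical stand-in for datasets.list_datasets(); a few sample names.
ds_list = ['squad', 'oscar', 'oscar-corpus/oscar', 'glue', 'nlpl/oscar-pt']

# Keep only the dataset names that mention 'oscar'.
oscar_sets = [dataset for dataset in ds_list if 'oscar' in dataset.lower()]
print(oscar_sets)  # ['oscar', 'oscar-corpus/oscar', 'nlpl/oscar-pt']
```

In the video the comprehension runs over the full list returned by `datasets.list_datasets()`, which is why both the official OSCAR dataset and the user-uploaded variants show up.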
Okay, and we have OSCAR. I think PT is... what is PT? I imagine it's probably Portuguese. And then we have all these other ones as well. But those are user-uploaded OSCAR datasets; this is the actual OSCAR dataset that's provided by Hugging Face, and it's huge. It contains, I think, more than 160 languages, and some of them, for example English (obviously English is one of the biggest ones), contain 1.2 terabytes of data. So there's a lot of data in there, but that's just unstructured text.

What I want to have a look at is the SQuAD datasets. So we're just going to use the original SQuAD in this video, but you can see that we have a few different ones here: Italian, Spanish, Korean, you have Thai QA SQuAD here, and then also French as well at the bottom. So you have plenty of choice. Now, obviously, you kind of need to know what sort of dataset you're looking for. I know I'm looking for a SQuAD dataset, so I've gone and looked for "squad". There are other ones as well, actually; if I change this to lower, you'll see those also pop up. Okay, so we have this one here and this one. This one doesn't seem to work; it's fine.

Now, to load one of those datasets (obviously we're going to be using SQuAD), we write dataset = datasets.load_dataset(), and then in here we just write our dataset name, so 'squad'. Now, there are two ways to download your data. So if we do this, this is the default method: we are going to download and cache the whole dataset, which for SQuAD is fine; it's not a huge dataset, so it's not really a problem. But when you think, okay, if we wanted the English OSCAR dataset, that's massive; that's 1.2 terabytes.