Video metadata (constant across all transcript segments below):

| title | published | url | video_id | channel_id |
|---|---|---|---|---|
| Build NLP Pipelines with HuggingFace Datasets | 2021-09-23 13:30:07 UTC | https://youtu.be/r-zQQ16wTCA | r-zQQ16wTCA | UCv83tO5cePwHMt1952IVVHw |

Transcript segments (start and end times in seconds):

| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t297.12 | 297.12 | 309.44 | So in those cases, you probably don't want to download it all onto your machine. |
| r-zQQ16wTCA-t305.04 | 305.04 | 315.6 | So what you can do instead is you set streaming equal to true. |
| r-zQQ16wTCA-t309.44 | 309.44 | 317.36 | And when streaming is equal to true, you do need to make some changes to your code, |
| r-zQQ16wTCA-t316.16 | 316.16 | 322.48 | which I'll show you. |
| r-zQQ16wTCA-t318.08 | 318.08 | 324.32 | And there are also some things, particularly filtering, which we will cover later on, |
| r-zQQ16wTCA-t322.48 | 322.48 | 328.96 | which we can't do with streaming. |
| r-zQQ16wTCA-t325.04 | 325.04 | 332.32 | But we will just go ahead and for now we're going to use streaming. |
| r-zQQ16wTCA-t328.96 | 328.96 | 338.56 | We'll switch over to not streaming later on. |
| r-zQQ16wTCA-t332.32 | 332.32 | 344.8 | And this creates an iterable dataset object. |
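The streaming behavior described above can be sketched in code. The commented `load_dataset` call is the real Hugging Face `datasets` API mentioned in the video; the toy generator below it is only an illustration of why an iterable dataset keeps memory flat, and its record contents are made up.

```python
# Real-API usage for orientation (requires the `datasets` package and a
# network connection, so it is not executed here):
#   from datasets import load_dataset
#   dataset = load_dataset("squad", streaming=True)  # iterable datasets

# Toy illustration of the streaming idea: a generator yields one record
# at a time, so only the record currently being processed is in memory.
def stream_records(n):
    for i in range(n):
        yield {"id": f"rec-{i}", "text": f"record number {i}"}

stream = stream_records(1_000_000)   # nothing is materialized yet
first = next(stream)                 # only this single record is produced
print(first["id"])                   # → rec-0
```

Calling `next` repeatedly (or iterating with a `for` loop) walks through the data without ever holding more than one record at a time.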
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t338.56 | 338.56 | 353.76 | And it means that whenever we are calling a specific record within that data set, |
| r-zQQ16wTCA-t344.8 | 344.8 | 355.76 | it is only going to download or store that single record or multiple records in our memory at once. |
| r-zQQ16wTCA-t353.76 | 353.76 | 360.88 | So we're not downloading the data set. |
| r-zQQ16wTCA-t355.76 | 355.76 | 363.28 | We're just processing it as we get it, which is, I think, very important. |
| r-zQQ16wTCA-t360.88 | 360.88 | 371.04 | And it is, I think, very useful. |
| r-zQQ16wTCA-t365.36 | 365.36 | 377.36 | Now, you can see here we have two actual subsets within our data. |
| r-zQQ16wTCA-t371.44 | 371.44 | 380.08 | If we want to select a specific subset, all we have to do is rewrite the load_dataset call again. |
| r-zQQ16wTCA-t377.36 | 377.36 | 387.28 | So let me actually copy this. |
| r-zQQ16wTCA-t380.08 | 380.08 | 390.8 | So we copy that. |
| r-zQQ16wTCA-t387.28 | 387.28 | 395.36 | And if we just want a subset, we write split. |
| r-zQQ16wTCA-t392.0 | 392 | 399.36 | And in this case, it would be train or validation. |
| r-zQQ16wTCA-t396.16 | 396.16 | 402.64 | And if I just execute that. |
| r-zQQ16wTCA-t399.36 | 399.36 | 405.2 | So I'm not going to store that in our data set variable here, |
| r-zQQ16wTCA-t402.64 | 402.64 | 409.28 | because I don't want to use just train. |
| r-zQQ16wTCA-t405.2 | 405.2 | 414.16 | We have this single iterable data set object. |
| r-zQQ16wTCA-t409.28 | 409.28 | 416.08 | So we're just pulling in this single part of it or single subset. |
| r-zQQ16wTCA-t415.12 | 415.12 | 418.64 | And we can also view. |
| r-zQQ16wTCA-t416.08 | 416.08 | 428 | So here we can see we have train and validation. |
| r-zQQ16wTCA-t418.64 | 418.64 | 430.16 | If you want to see it in a clearer way, you can use dictionary syntax. |
| r-zQQ16wTCA-t428.0 | 428 | 433.2 | So sorry, data set keys. |
| r-zQQ16wTCA-t430.72 | 430.72 | 434.88 | You can use dictionary syntax for most of this. |
| r-zQQ16wTCA-t433.2 | 433.2 | 440.88 | So we have train and validation. |
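The split selection and dictionary syntax above can be sketched as follows. The commented lines show the real `load_dataset` signature (`split` parameter); the plain dict underneath is a stand-in for the dict-like object of splits, with made-up records.

```python
# With the real library (not executed here), omitting `split` gives a
# dict-like object of splits, while split="train" gives a single dataset:
#   dataset = load_dataset("squad", streaming=True)                # all splits
#   train   = load_dataset("squad", split="train", streaming=True) # one split

# A plain dict models the same dictionary syntax used in the video:
dataset = {
    "train": iter([{"id": "a"}, {"id": "b"}]),
    "validation": iter([{"id": "c"}]),
}

print(list(dataset.keys()))   # → ['train', 'validation']
train = dataset["train"]      # select one subset with dictionary syntax
```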
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t435.84 | 435.84 | 442.32 | Now there's also, so the moment we have our data set, |
| r-zQQ16wTCA-t440.88 | 440.88 | 444.24 | we don't really know anything about it. |
| r-zQQ16wTCA-t442.32 | 442.32 | 447.52 | So we have this train subset. |
| r-zQQ16wTCA-t444.24 | 444.24 | 452.96 | And let's say I want to understand what is in there. |
| r-zQQ16wTCA-t448.4 | 448.4 | 456.88 | So what I can do to start is I write a data set train. |
| r-zQQ16wTCA-t454.0 | 454 | 457.68 | And I can write, for example, the data set size. |
| r-zQQ16wTCA-t456.88 | 456.88 | 464 | So how big is it? |
| r-zQQ16wTCA-t457.68 | 457.68 | 465.6 | Right, data set size, data set, not data size, size. |
| r-zQQ16wTCA-t464.0 | 464 | 472.56 | Don't know what I was doing there. |
| r-zQQ16wTCA-t466.08 | 466.08 | 476.88 | Let me see what we get, so it's about 90 megabytes. |
| r-zQQ16wTCA-t473.28 | 473.28 | 482.8 | So reasonably big, but it's not anything huge, nothing crazy. |
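The size check above reports bytes, which the speaker reads off as roughly 90 megabytes. A small sketch of that conversion; the `dataset_size` byte count below is a stand-in value, not the exact figure from the video, and the attribute name follows the one used on screen.

```python
# In the library, the size attribute (dataset_size) reports bytes;
# 89_819_000 below is a stand-in for the ~90 MB reported for SQuAD train.
def to_megabytes(num_bytes: int) -> float:
    """Convert a byte count to megabytes (1 MB = 1024 * 1024 bytes)."""
    return num_bytes / (1024 * 1024)

dataset_size = 89_819_000                      # stand-in value in bytes
print(f"{to_megabytes(dataset_size):.1f} MB")  # → 85.7 MB
```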
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t479.44 | 479.44 | 494 | We can also, so we have that. |
| r-zQQ16wTCA-t482.8 | 482.8 | 502.16 | We can also, if I copy this, get a description. |
| r-zQQ16wTCA-t500.16 | 500.16 | 505.36 | Let me see what the data set is. |
| r-zQQ16wTCA-t502.16 | 502.16 | 508.8 | So SQuAD, I didn't even mention it yet, |
| r-zQQ16wTCA-t505.36 | 505.36 | 513.28 | but SQuAD is the Stanford Question Answering Dataset. |
| r-zQQ16wTCA-t508.8 | 508.8 | 518.56 | So it's used generally for training Q&A models or testing Q&A models. |
| r-zQQ16wTCA-t514.88 | 514.88 | 525.12 | And you can pause and read that if you want to. |
| r-zQQ16wTCA-t522.48 | 522.48 | 528.96 | And then another thing that is pretty important is |
| r-zQQ16wTCA-t525.84 | 525.84 | 533.6 | what are the features that we have inside here? |
| r-zQQ16wTCA-t528.96 | 528.96 | 536.32 | Now we can also just print out one of the samples, |
| r-zQQ16wTCA-t534.24 | 534.24 | 539.2 | but it's useful to know, I think. |
| r-zQQ16wTCA-t536.32 | 536.32 | 542.64 | And this also gives you data types, so it's kind of useful. |
| r-zQQ16wTCA-t539.2 | 539.2 | 546.8 | So we have ID, title, context, question, and answers. |
| r-zQQ16wTCA-t543.92 | 543.92 | 552.24 | All of them are strings. |
| r-zQQ16wTCA-t547.76 | 547.76 | 555.6 | Answers is actually, so within answers we have, it says, |
| r-zQQ16wTCA-t552.24 | 552.24 | 560.88 | Sequence, which we can view as a dictionary. |
| r-zQQ16wTCA-t556.96 | 556.96 | 563.84 | But we have a text attribute and also an answer_start attribute. |
| r-zQQ16wTCA-t561.6 | 561.6 | 570 | So that's pretty useful to know, I think. |
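The feature layout the speaker walks through can be written down as a plain Python dict. This is only a sketch of the shape: the field names match what is read out in the video, but the type descriptions are informal strings, not the library's actual `Value`/`Sequence` feature types.

```python
# Sketch of the structure reported for SQuAD's features: four string
# fields, plus `answers`, a sequence with two aligned sub-fields.
features = {
    "id": "string",
    "title": "string",
    "context": "string",
    "question": "string",
    "answers": {"text": "list of strings", "answer_start": "list of ints"},
}

assert set(features) == {"id", "title", "context", "question", "answers"}
```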
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t563.84 | 563.84 | 572.32 | And to view one of our samples, |
| r-zQQ16wTCA-t570.0 | 570 | 575.44 | so we have all the features here, |
| r-zQQ16wTCA-t572.32 | 572.32 | 579.6 | but let's say we just want to see what it actually looks like. |
| r-zQQ16wTCA-t575.84 | 575.84 | 585.36 | We can write data set and we go train. |
| r-zQQ16wTCA-t580.08 | 580.08 | 588.88 | And when we have streaming set to false, we can write this. |
| r-zQQ16wTCA-t585.36 | 585.36 | 592.32 | But because we have streaming set to true, we can't do this. |
| r-zQQ16wTCA-t588.88 | 588.88 | 596.16 | So instead what we have to do is we |
| r-zQQ16wTCA-t592.32 | 592.32 | 599.52 | actually just iterate through the data set. |
| r-zQQ16wTCA-t596.16 | 596.16 | 604.56 | So we just go for sample in data set. |
| r-zQQ16wTCA-t602.24 | 602.24 | 606.72 | And we just want to print a single sample. |
| r-zQQ16wTCA-t604.56 | 604.56 | 608.96 | And then I don't want to print anymore, |
| r-zQQ16wTCA-t606.72 | 606.72 | 611.36 | so I'm going to write break after that. |
| r-zQQ16wTCA-t608.96 | 608.96 | 615.52 | So we just print one of those samples. |
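The iterate-and-break pattern described above looks like this. A small list of made-up records stands in for the streamed train split, since a streaming dataset can be iterated but not indexed.

```python
# Stand-in for the streamed train split (iteration works the same way):
dataset_train = [
    {"id": "s1", "title": "University_of_Notre_Dame"},
    {"id": "s2", "title": "Beyonce"},
]

# A streaming dataset can't be indexed like dataset_train[0]; instead,
# iterate and break after the first record, as in the video:
for sample in dataset_train:
    print(sample)
    break  # stop after printing a single sample
```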
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t612.32 | 612.32 | 621.12 | And then we see, okay, we have the ID, we have title. |
| r-zQQ16wTCA-t615.52 | 615.52 | 622.64 | So each of these samples is being pulled from a different Wikipedia, |
| r-zQQ16wTCA-t621.12 | 621.12 | 625.76 | pulled from a different Wikipedia page. |
| r-zQQ16wTCA-t623.36 | 623.36 | 629.76 | In this case, the title is the title of that page. |
| r-zQQ16wTCA-t625.76 | 625.76 | 631.6 | So this one is from the University of Notre Dame Wikipedia page. |
| r-zQQ16wTCA-t630.64 | 630.64 | 636.8 | We have answers. |
| r-zQQ16wTCA-t631.6 | 631.6 | 640.08 | So further down, we're going to ask a question, and these are the answers here. |
| r-zQQ16wTCA-t636.8 | 636.8 | 642 | So we have the text, which is the text answer. |
| r-zQQ16wTCA-t640.08 | 640.08 | 646.56 | And then we have the position, |
| r-zQQ16wTCA-t642.0 | 642 | 648.16 | so the character position where the answer starts within the context, |
| r-zQQ16wTCA-t646.56 | 646.56 | 651.84 | which is what you can see here. |
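What "character position where the answer starts" means can be checked directly. The record below is invented for illustration (it mimics SQuAD's shape but is not an actual SQuAD row): slicing the context at `answer_start` for the length of the answer text recovers the answer.

```python
# A made-up record in SQuAD's shape, showing what answer_start encodes:
sample = {
    "context": "The university is in Notre Dame, Indiana.",
    "question": "Where is the university?",
    "answers": {"text": ["Notre Dame, Indiana"], "answer_start": [21]},
}

# answer_start is the character offset of the answer inside the context:
start = sample["answers"]["answer_start"][0]
text = sample["answers"]["text"][0]
assert sample["context"][start:start + len(text)] == text
```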
| id | start | end | text |
|---|---|---|---|
| r-zQQ16wTCA-t648.16 | 648.16 | 654.56 | Now we have a question here, which we're asking. |
| r-zQQ16wTCA-t651.84 | 651.84 | 659.04 | And then the model, the Q&A model, is going to |
| r-zQQ16wTCA-t655.28 | 655.28 | 660.66 | extract the answer from our context there. |
| r-zQQ16wTCA-t660.16 | 660.16 | 666.24 | Okay. |
| r-zQQ16wTCA-t662.4 | 662.4 | 669.76 | So we're not going to be training a model in this video or anything like that. |
| r-zQQ16wTCA-t666.24 | 666.24 | 672.4 | We're just experimenting with the data sets library. |
| r-zQQ16wTCA-t670.32 | 670.32 | 679.2 | We don't need to worry so much about that. |
| r-zQQ16wTCA-t672.4 | 672.4 | 684.64 | So the first thing I want to do is have a look at how we can modify some of the features in our data. |
| r-zQQ16wTCA-t679.2 | 679.2 | 690.48 | So with SQuAD, when we are training a model, |
| r-zQQ16wTCA-t685.2 | 685.2 | 695.76 | one of the first things we would do is we take our answer start and the text |
| r-zQQ16wTCA-t691.28 | 691.28 | 699.12 | and we will use that to get the answer end position as well. |
| r-zQQ16wTCA-t696.64 | 696.64 | 707.36 | So let's go ahead and do that. |
| r-zQQ16wTCA-t699.12 | 699.12 | 711.36 | So first I want to just have a look, okay, for sample in the data set train, |
| r-zQQ16wTCA-t707.36 | 707.36 | 716.96 | I'm just going to print out a few of the answer features. |
| r-zQQ16wTCA-t711.36 | 711.36 | 719.6 | So we have sample answer or answers, sorry. |
| r-zQQ16wTCA-t716.96 | 716.96 | 721.28 | And I just want to print that. |
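The transformation the transcript is building toward, deriving an answer end position from `answer_start` and the answer text, can be sketched as below. The function name `add_answer_end` is invented here; only the `answers`, `text`, and `answer_start` field names come from the dataset itself.

```python
# Sketch of deriving answer_end from answer_start and the answer text
# (answer_end = answer_start + length of the answer string):
def add_answer_end(sample):
    starts = sample["answers"]["answer_start"]
    texts = sample["answers"]["text"]
    sample["answers"]["answer_end"] = [
        s + len(t) for s, t in zip(starts, texts)
    ]
    return sample

sample = {"answers": {"text": ["Notre Dame, Indiana"], "answer_start": [21]}}
print(add_answer_end(sample)["answers"]["answer_end"])  # → [40]
```

In a real pipeline this per-record function would be applied across the whole split, for example via the library's `map` method.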