Build NLP Pipelines with HuggingFace Datasets
Published: 2021-09-23 13:30:07 UTC
https://youtu.be/r-zQQ16wTCA
Video ID: r-zQQ16wTCA | Channel ID: UCv83tO5cePwHMt1952IVVHw
So this allows us to perform this operation in batches, and we can also specify our batch size, so batch_size equals, let's say, 32. Now when we run this, the map function here is going to tokenize our question and context in batches of 32.
So let's go ahead and do that. Okay, and you can see the processing happening there. That's really all we need to do, so I think that's probably it for the map method. I'll fast forward, and then we'll continue with a few other methods that I think are quite useful as well.
Okay, so that's just finishing up now, and we can go ahead and have a look at what we've actually produced. So come down here and print dataset['train']. What do we have now? We have answers like we did before, but now we also have attention_mask, input_ids, and token_type_ids, which are the three tensors that we usually get out of the tokenizer.
So we now have those in there as well. Another thing: rather than looping through our dataset, because we set streaming=False rather than streaming=True this time, we can index it directly. We can see that we have the attention_mask, and it's not going to show me everything because it's quite large, so I'll just delete that, but you can see that the attention mask is in there.
Now, say I want to be quite pedantic, and I don't like the fact that we have a feature called title. Maybe I want it to be called topic, because it's the topic of the context and the question. To modify that, I can call dataset['train'].rename_column. To be honest, you can of course use it for something like this, but you're more likely to use it when you need to rename a column so it aligns with the expected inputs of, for example, a transformer model. That's where you would really use it; I'm just using this as an example. So I'm going to rename the column title to topic, and then print out dataset['train'] again.
So down here we have title, and in a moment we're going to have topic. Okay, so now we have topic. That's rename_column; like I said, it can come in useful, maybe not so much in this case, but generally it is.
Now, what I may also want to do is remove certain records from this dataset. So far we've been printing out this feature, which is now topic, and we have University of Notre Dame. Maybe, for whatever reason, we don't want to include those topics, so we can say
something very similar to before: we write dataset['train'] equals dataset['train'] again, but this time I'm going to filter, so we're going to filter out the records that we don't want. Again, the syntax is very similar to what we used with the map function, a lambda, and in it we just specify the condition for the samples that we do want to keep. In this case we say: keep rows wherever the topic is not equal to University of Notre Dame.
Okay, so we'll run this and have a look at what we produce. In dataset['train'] we have the number of rows, which is just over 88,000, and we should get a lower number now. Remember that we have streaming (which I keep calling shuffle) set to false this time, so it's going to run through the whole dataset and perform this filtering operation. I'll just fast forward again to when this finishes. Okay, it's finished: before we had around 88,000 rows, and now we have around 87,300.
And we should see, so let me take dataset['train']['topic'] and look at, say, the first five of those. Okay, now they're all Beyoncé, rather than before, where it was University of Notre Dame.
So we have those. Now, say for example we're performing inference for Q&A with a transformer model: we don't really need all of the features that we have here. We would only need the attention_mask, the input_ids, and the token_type_ids. So what we can do now is remove some of those columns. We'll do dataset['train'] equals dataset['train'] again, and we want to remove those columns, so remove_columns, and we'll remove all of them other than the ones that we want: answers, context, id, question, and topic.
Okay, and then let's have a look at what we have left. That's it: we have those final features, and these are the ones we would input into a transformer model for training. Now, there's nothing else I really want to cover. I think that is pretty much all you need to know on HuggingFace Datasets to get started and build pretty good input pipelines using some of the datasets that are available. So we'll leave it there. Thank you very much for watching, and I will see you again in the next one.
How-to do Sentiment Analysis with Flair in Python
Published: 2020-12-04 14:00:03 UTC
https://youtu.be/DFtP1THE8fE
Video ID: DFtP1THE8fE | Channel ID: UCv83tO5cePwHMt1952IVVHw
Hi, welcome to this video on sentiment analysis using the Flair library. Flair is an incredibly simple, easy-to-use library which contains a load of pre-built models for NLP that we can simply import and use to make predictions, and it actually allows us to use some of the most powerful models out there as well. In this tutorial we're going to be using the DistilBERT model, which is based on BERT but is a lot smaller, while being almost as powerful as BERT itself. So we're going to go ahead and begin.
First, if you haven't already, you need to pip install flair, and alongside Flair you're also going to need PyTorch. If you haven't got PyTorch installed already, you'll need to head over to the PyTorch website, where they give you instructions on exactly what you need to install. We come down here and see: for me, I have Windows, I want to install using Conda, using Python, and then CUDA. That last option is for if you have a CUDA-enabled GPU on your machine; if you don't know what that means, you probably don't have one, so in that case just click None. For me, I have CUDA 10.2. So all we need to do is copy the command underneath here and run it in our Anaconda prompt. I already have these installed, so I'm going to go ahead and actually begin coding.
So we're going to need to use pandas and also Flair. Now that we have imported Flair, we can actually load a sentiment model straight away.