Columns: title | published | url | video_id | channel_id | id | text | start | end

All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages)
published: 2021-11-04 13:00:10 UTC | url: https://youtu.be/NNS5pOpjvAQ | video_id: NNS5pOpjvAQ | channel_id: UCv83tO5cePwHMt1952IVVHw

[2137.28–2162.24] You will select in a moment. Do we want to warm up first? Yes, we do. I'm going to warm up for 10% of the training data, which is just the length of the loader multiplied by 0.1.
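
As a minimal sketch, that warmup calculation might look like this in Python; `loader` is assumed to be the training DataLoader built earlier in the video:

```python
# Warm up for the first 10% of training steps, as described above.
# `loader` is assumed to be the training DataLoader built earlier.
warmup_steps = int(len(loader) * 0.1)
```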

[2156.48–2171.36] Okay, and from there, where do you want to save the model? I'm going to save it in xml-ted.

[2165.28–2217.2] Now, the optimizer parameters. We're going to set a learning rate of 2e-5 and an epsilon of 1e-6, and we're also going to set correct bias equal to false. Okay, those are the optimizer parameters. Then we can also save the best model, so save best model equal to true, and then we run it. Okay, so run that; it's going to take a long time.
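
A hedged sketch of the sentence-transformers `fit` call being described; `model`, `train_dataloader`, and `train_loss` are assumed from earlier steps in the video, and only the values read out above are taken as given:

```python
# Sketch of the training call described above (sentence-transformers style).
# `model`, `train_dataloader`, and `train_loss` are assumed from earlier steps.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,                   # the video trains for a single epoch
    warmup_steps=warmup_steps,  # 10% of the loader, computed above
    optimizer_params={
        "lr": 2e-5,             # learning rate 2e-5
        "eps": 1e-6,            # epsilon 1e-6
        "correct_bias": False,
    },
    output_path="xml-ted",      # the save directory chosen above
    save_best_model=True,
)
```

The `optimizer_params` dict is how sentence-transformers forwards those three settings to the underlying AdamW optimizer.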

[2211.6–2238.48] So I'm actually going to stop it, because I've already run it, and let's have a look at actually evaluating it and at the results. Okay, so I just have this notebook where I've evaluated the model.

[2232.24–2277.36] I'm using the STS (semantic textual similarity) benchmark dataset, which is multilingual. I'm getting the English data and also the Italian, and you can see they are similar: each row in the English dataset corresponds to the same row in the other language datasets. So here, sentence one at row zero in the English means the same thing as sentence one at row zero in the Italian; the same goes for sentence two, and the similarity score is the same. So the first thing we do is normalize that similarity score.
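
As an illustrative sketch, loading and normalizing could look like this; the `stsb_multi_mt` dataset name and the `test` split are my assumptions about what is on screen, but the divide-by-5 normalization is what the video describes (scores arrive on a 0 to 5 scale):

```python
from datasets import load_dataset

# Assumed dataset: the multilingual STS benchmark mirror on the Hugging Face
# hub; its 'en' and 'it' configs line up row for row.
en = load_dataset("stsb_multi_mt", "en", split="test")
it = load_dataset("stsb_multi_mt", "it", split="test")

# Scores arrive on a 0-5 scale; normalize them into the 0-1 range.
en = en.map(lambda x: {"similarity_score": x["similarity_score"] / 5.0})
it = it.map(lambda x: {"similarity_score": x["similarity_score"] / 5.0})
```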

[2259.92–2314.16] Then we go down a little bit, and we reformat the data using the sentence-transformers InputExample class. Through this I've created three different evaluation sets: English to English, Italian to Italian, and then English to Italian. And then what we do here is initialize a similarity evaluator for each of these datasets. Again, we're using sentence-transformers; it just makes life a lot easier. We initialize those, and then we can just pass our model to each one of those evaluators to get its performance.
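
A sketch of that evaluation setup, assuming the `en` and `it` datasets from above; `InputExample` and `EmbeddingSimilarityEvaluator` are the sentence-transformers classes being described, while the variable names are mine:

```python
from sentence_transformers import InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Build the three evaluation sets: en-en, it-it, and cross-lingual en-it.
en_en, it_it, en_it = [], [], []
for e, i in zip(en, it):
    score = e["similarity_score"]
    en_en.append(InputExample(texts=[e["sentence1"], e["sentence2"]], label=score))
    it_it.append(InputExample(texts=[i["sentence1"], i["sentence2"]], label=score))
    en_it.append(InputExample(texts=[e["sentence1"], i["sentence2"]], label=score))

# One similarity evaluator per set; calling it with the model returns its score.
for name, examples in [("en-en", en_en), ("it-it", it_it), ("en-it", en_it)]:
    evaluator = EmbeddingSimilarityEvaluator.from_input_examples(examples, name=name)
    print(name, evaluator(model))
```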

[2302.72–2334.8] So here we get 81.6 on the English set, 74.3, and 71 here. Now, I just trained for one epoch; if you want better performance, you can train for more epochs, and you should be able to get more towards 80%, or maybe a little bit higher. So pretty straightforward and incredibly easy.

[2328.96–2370.72] And then here I just wanted to compare that to the student before we trained it. So I initialized a new student and had a look, and you can see the evaluation is pretty low: for English, 47.5; for Italian, actually 50%, surprisingly, although it's already a multilingual model, so it does make sense that it can understand Italian. And then from English to Italian, it really drops, down to 23.

[2361.36–2409.12] So that's it for this video. I think it's been pretty useful; at least for me, I can see how you can build a sentence transformer in a lot of different languages using this, which is, I think, really cool and will probably be useful for a lot of people. So I hope you enjoyed the video.

Build a Custom Transformer Tokenizer - Transformers From Scratch #2
published: 2021-06-24 14:00:06 UTC | url: https://youtu.be/JIeAB8vvBQo | video_id: JIeAB8vvBQo | channel_id: UCv83tO5cePwHMt1952IVVHw

[0.0–29.84] Hi, welcome to the video. We're going to have a look at how we can build our own tokenizer in Transformers from scratch. So this is the second video in our Transformers From Scratch series, and what we're going to be covering is the actual tokenizer itself.

[22.24–88.72] We've already got our data, so we can cross that off and move on to the tokenizer. So let's move over to our code. In the previous video, we created all these files here; these are just a lot of text files that contain the Italian subset of the OSCAR dataset. Now let's maybe open one, and we just get all this Italian. Each sample in these text files is separated by a newline character.
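
For illustration, a minimal sketch of peeking at one of those files (the filename here is hypothetical):

```python
# Peek at one of the plain-text OSCAR files; samples are newline-separated.
with open("text_0.txt", "r", encoding="utf-8") as f:
    samples = f.read().split("\n")
print(samples[:3])  # a few Italian samples
```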

[69.12–140.0] So let's go ahead and begin using that data to build our tokenizer. We first want to get a list of all the paths to our files, so we're going to be using pathlib; you could also use os.listdir as well, it's up to you. So from pathlib, import Path. I'm using this one because, I don't know, I've noticed that people are using it a lot at the moment for machine learning stuff. I'm not sure why you would use it over os.listdir, but it's what people are using, so let's give it a go and see how it is.

[133.28–192.56] So we have this, and we just want to create a string from each path object that we get. So for x in, and then in here we need to write Path, and we just want to tell it where to look. We're using Path here and we're just in the same directory, so we don't really need to do anything there; that's fine. And then at the end, we're going to use glob; I think this is why people are using pathlib. We just create a wildcard, like we want all text files in this directory, so we just write that. Now let's run that, look at the first five, and see that we have our text files now.
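
Assembled from that narration, the path-collection snippet is probably close to this sketch; the `'*.txt'` pattern is the "all text files in this directory" wildcard just described:

```python
from pathlib import Path

# Collect every .txt file in the current directory as a plain string path.
paths = [str(x) for x in Path(".").glob("*.txt")]
print(paths[:5])  # sanity-check the first five
```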

[184.96–265.04] So that's good, and what we can now do is move on to actually training the tokenizer. The tokenizer we're going to be using is a byte-level byte pair encoding tokenizer, or BPE tokenizer. Essentially, what that means is that it's going to break down our text into bytes. With most tokenizers that we've probably used, we tend to have unknown tokens; for BERT, for example, we use WordPiece encodings, and we have to have this unknown token for when we don't have a token for a specific word, like some new word. With the BPE tokenizer, we're breaking things down into bytes, so essentially we don't actually need an unknown token anymore, which I think is pretty cool.

[254.0–300.64] Now, to use that, we need to do from tokenizers; this is another Hugging Face package, so maybe you'll need to install it: pip install tokenizers. And you want ByteLevelBPETokenizer, like that. Okay, now we take that and initialize our tokenizer. So we just write that, and that's our tokenizer initialized.
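
That install-import-initialize step, as a short sketch:

```python
# The `tokenizers` package is separate from `transformers`:
#   pip install tokenizers
from tokenizers import ByteLevelBPETokenizer

# An untrained byte-level BPE tokenizer; training comes next.
tokenizer = ByteLevelBPETokenizer()
```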

[290.56–331.68] We haven't trained it yet. To train it, we need to write tokenizer.train, and in here we need to include the files that we're training on. This is why we have that paths variable up here: it's just a list of all the text files that we created, where each sample is separated by a newline character.

[313.12–365.92] Now, the vocab size: we're going to be using a RoBERTa model here, and I think the typical RoBERTa vocab size is 50k. You can use that if you want, it's up to you, but I'm going to stick with the typical BERT size, just because I don't think we need that much; we're just figuring things out here. So this is going to mean less training time, and that's a good thing, in my opinion.

[358.08–387.68] We'll set the min frequency. This is saying: what is the minimum number of times you want to see a word, or a part of a word, or a byte (it's kind of weird with this tokenizer) before you add it into our vocabulary? So that's all that is.

[381.76–449.28] Okay, and then we also need to include our special tokens; we're using the RoBERTa special tokens here. So we write special tokens, and in here we have our start-of-sequence token <s>, which I'm going to put on a new line; the padding token <pad>; the end-of-sequence token </s>; the unknown token <unk>, which, with this being a byte-level encoding, you'd hope it doesn't need to use very much, but it's there anyway; and the mask token <mask>. So that's everything we need to train our model.
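
Putting the pieces above together, the training call is likely close to this sketch; 30,522 stands in for the "typical BERT size" mentioned, and `min_frequency=2` is an assumption, since the exact number isn't read out:

```python
# Train the byte-level BPE tokenizer on our newline-separated text files.
tokenizer.train(
    files=paths,
    vocab_size=30_522,  # "typical BERT size" rather than RoBERTa's 50k
    min_frequency=2,    # assumed value; the video doesn't state the number
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # RoBERTa set
)
```

The order of `special_tokens` matters, since each token's position determines its ID in the finished vocabulary.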

[439.12–488.0] And one thing I do remember is that if you train on all of those files, it takes a really very, very long time, which is fine if you're training it overnight or something, but that's not what we're doing here. So I'm just going to shorten that to the first 100 files, and maybe I'll train it after this with the full set; let's see. So I will leave that to train for a while, and I'll be back when it's done.

[470.48–545.6] Okay, so it's finished training our tokenizer, and we can go ahead and actually save it. So I'm going to import os, just so I can make a new directory to store the tokenizer files in. And a typical Italian name, or so I've been told, is Filiberto, which fits really well; so this is our Italian BERT model's name, Filiberto. So that is our new directory, and if we just come over to here, we have this working directory, which is what I'm in, and then we have this new directory, Filiberto, in here. That's where we're going to save our tokenizer. So we just write tokenizer.save_model, and we can see here we can do save or save_model.
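
A short sketch of that save step; the lowercase directory name is an assumption:

```python
import os

# Make a directory for the tokenizer files, then write them out.
os.mkdir("filiberto")
tokenizer.save_model("filiberto")  # produces vocab.json and merges.txt
```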

[535.44–583.04] save just saves a JSON file with our tokenizer data inside it, but I don't think that's the standard way of doing it; I think save_model is the way you want to be doing it. And we're saving it as Filiberto, like that. So we'll do that, and we see that we get these two new files, vocab.json and merges.txt. Now, if we look over here, we see both of those, and these are essentially the two steps of tokenization for our tokenizer.
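
Those two files are all you need to reload the tokenizer later; a hedged sketch:

```python
from tokenizers import ByteLevelBPETokenizer

# Reload from the two saved files: vocab.json (the token-to-ID map) and
# merges.txt (the learned BPE merge rules applied at encode time).
tokenizer = ByteLevelBPETokenizer(
    "filiberto/vocab.json",
    "filiberto/merges.txt",
)
print(tokenizer.encode("ciao, come va?").tokens)
```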