title (string, 12-112 chars) | published (string, 19-23 chars) | url (string, 28 chars) | video_id (string, 11 chars) | channel_id (string, 5 values) | id (string, 16-31 chars) | text (string, 0-596 chars) | start (float64, 0-37.8k) | end (float64, 2.18-37.8k) |
---|---|---|---|---|---|---|---|---|
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1341.76 | in a couple of sentences, there'll be a couple of blank lines where you need to, you know, | 1,341.76 | 1,351.84 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1346.5600000000002 | guess what the correct word should be. If you only have a couple of those blanks, you know, | 1,346.56 | 1,359.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1352.96 | as a person, you can probably guess accurately. And the same for Bert. Bert can probably guess | 1,352.96 | 1,366.4 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1359.2 | accurately what the occasional unknown token is. But you know, if you're a kid in school, | 1,359.2 | 1,376.48 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1366.4 | you can guess what the actual unknown token is. But if in school they gave you a sheet and they | 1,366.4 | 1,381.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1376.48 | said, okay, fill out these blanks. And it was actually just a paragraph of blank and you had | 1,376.48 | 1,387.12 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1381.2 | to guess it correctly, you've probably, I don't know, I think your chances are pretty slim of | 1,381.2 | 1,394.88 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1387.1200000000001 | getting that correct. So the same is true for Bert. Bert, for example, in our Georgian example | 1,387.12 | 1,403.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1394.88 | though. So the tokenizer from Bert is not suitable for non-Latin character languages whatsoever. | 1,394.88 | 1,409.52 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1403.44 | And then it does know some Greek characters here and maybe it knows all of them. So I suppose Greek | 1,403.44 | 1,416.56 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1409.5200000000002 | feeds into Latin languages a bit more than Georgian or Chinese, but it doesn't know what | 1,409.52 | 1,421.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1416.5600000000002 | to do with them. They're all single character tokens. And the issue with single character | 1,416.56 | 1,426.8 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1421.28 | tokens is that you can't really encode that much information into a single character. | 1,421.28 | 1,434.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1427.52 | Because that, you know, if you have 24 characters in your alphabet, that means you have 24 encodings | 1,427.52 | 1,440.72 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1434.24 | to represent your entire language, which is not going to happen. So, you know, that's also not | 1,434.24 | 1,449.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1440.72 | good. So basically don't use a Bert tokenizer. It's not a good idea. What you can do is, okay, | 1,440.72 | 1,460.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1449.28 | how about this XLM tokenizer? Now, XLM is trained for multilingual comprehension. | 1,449.28 | 1,467.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1460.8 | It uses a SentencePiece tokenizer, which uses byte-level logic to split up the sentences or | 1,460.8 | 1,475.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1467.28 | the words. So it can deal with tokens it's never seen before, which is pretty nice. And the | 1,467.28 | 1,483.68 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1475.2 | vocabulary size for this is not 30k. I think it's 250k. I could be off by a few k there, but it's | 1,475.2 | 1,492.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1483.68 | around that mark. And it's been trained on many languages. So it's obviously a much better option | 1,483.68 | 1,501.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1492.64 | for our student model. So let's have a look at how we initialize that. So this XLM-R model is just | 1,492.64 | 1,509.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1501.2 | coming from transformers. Okay. So I need to convert that model from just a transformer model | 1,501.2 | 1,516 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1509.44 | into, or initialize it as, a sentence transformer model using the sentence-transformers library. | 1,509.44 | 1,523.84 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1516.0 | Okay. So from sentence transformers, I'm going to import models and also SentenceTransformer. | 1,516 | 1,530.56 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1523.84 | So XLM-R, so this is going to be our actual transformer model. We're going to write models.Transformer. | 1,523.84 | 1,538.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1530.56 | Sentence transformers under the hood uses Hugging Face transformers as well. So we would access this | 1,530.56 | 1,545.92 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1538.6399999999999 | with the normal model identifier that we would use with normal Hugging Face transformers, | 1,538.64 | 1,554.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1545.92 | which is xlm-roberta-base. Okay. As well as that, we need a pooling layer. | 1,545.92 | 1,565.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1555.44 | So we write models.Pooling. And in here, we need to pass the output embeddings dimension. So it's | 1,555.44 | 1,572.56 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1565.44 | this get_word_embedding_dimension for our model. And also what type of pooling we'd like to do. | 1,565.44 | 1,583.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1572.56 | We have max pooling, CLS token pooling, and what we want is mean pooling. So it's pooling | 1,572.56 | 1,597.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1586.32 | mode mean tokens equals true. Okay. So that's the two components of our sentence transformer. | 1,586.32 | 1,604.08 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1597.28 | And then from there, we can initialize our student. So student equals SentenceTransformer. | 1,597.28 | 1,613.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1605.04 | And we're initializing that using the modules, which is just a list of our two components. So | 1,605.04 | 1,622.48 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1613.44 | XLM-R followed by pooling. And that's it. So let's have a look at what we have there. | 1,613.44 | 1,627.04 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1622.48 | Okay. We can just ignore this top bit here. We just want to focus on this. So you see, | 1,622.48 | 1,633.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1627.04 | we have our transformer model followed by the pooling here. And we also see that we're using | 1,627.04 | 1,639.6 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1633.28 | the mean tokens pooling set to true, and the rest of them are false. Okay. So that's our student model | 1,633.28 | 1,647.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1639.6 | initialized. And now what we want to do is initialize our teacher model. Now, the teacher model, | 1,639.6 | 1,652.4 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1647.28 | let me show you, you just have to be a little bit careful with this. So sentence transformer. | 1,647.28 | 1,662.08 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1653.28 | So maybe you'd like to use one of the top performing ones, and a lot of them are the "all" models. | 1,653.28 | 1,671.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1662.8 | So these are monolingual models, all-mpnet-base-v2. | 1,662.8 | 1,682.16 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1671.28 | And okay, let's initialize this and let's see what is inside it. Okay. So we have the transformer, | 1,671.28 | 1,689.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1682.16 | the pooling as we had before, but then we also have this normalization layer. So the outputs from | 1,682.16 | 1,697.52 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1689.28 | this model are normalized. And obviously if you're trying to make another model mimic the | 1,689.28 | 1,705.36 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1697.52 | normalization layer outputs, well, it's not ideal because the model is going to be trying to | 1,697.52 | 1,711.6 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1706.08 | normalize its own vectors. So you don't really want to do that. You want to choose a model. | 1,706.08 | 1,718.32 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1711.6 | You either want to remove the normalization layer or just choose a model that doesn't have | 1,711.6 | 1,723.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1718.32 | a normalization layer, which I think is probably the better option. So that's what I'm going to do. | 1,718.32 | 1,730.96 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1723.28 | So for the teacher, I'm going to use a sentence transformer. I'm going to use paraphrase models | 1,723.28 | 1,743.68 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1730.96 | because these don't use normalization layers. distilroberta-base-v2. Okay, let's have a look. | 1,730.96 | 1,750.8 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1743.68 | Okay. So now you see we have the transformer followed directly by the pooling. Now another | 1,743.68 | 1,755.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1750.8 | thing that you probably should just be aware of here is that we have this max sequence length here | 1,750.8 | 1,762.48 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1755.28 | is 512, which doesn't align with our paraphrase model here. But that's fine because I'm going to | 1,755.28 | 1,770.16 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1762.48 | limit the maximum sequence length anyway to 512. So that's fine. But I'm going to limit the | 1,762.48 | 1,778.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1770.16 | the maximum sequence length anyway to 250. So, you know, don't, you know, it's not really an issue, | 1,770.16 | 1,783.92 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1778.64 | but just, you know, look out for that if you're training your own models. This one's on 384. | 1,778.64 | 1,790.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1783.92 | So none of those align. But yeah, just be aware of that, that the sequence lengths might not | 1,783.92 | 1,800.72 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1790.64 | align there. So we've, okay, so we have our, we've sort of formatted our training data. | 1,790.64 | 1,808.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1800.72 | We have our two models, the teacher and the student. So now what we can do is prepare that | 1,800.72 | 1,814.96 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1808.64 | data for loading into our training process or fine tuning process. So as I said before, | 1,808.64 | 1,822.72 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1814.96 | we're going to be using the parallel sentences, sorry, from sentence transformers import | 1,814.96 | 1,830.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1823.68 | ParallelSentencesDataset. And the first thing we need to do here is actually initialize the object. | 1,823.68 | 1,838.16 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1831.04 | And that requires that we pass the two models that we're training with, because this kind of handles | 1,831.04 | 1,845.68 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1838.16 | the interaction between those two models as well. So obviously we have our student model, | 1,838.16 | 1,856.32 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1846.3200000000002 | which is our student. And we have the teacher model, which is our teacher. Alongside this, | 1,846.32 | 1,864.88 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1856.3200000000002 | we want batch size. I'm going to use 32, but I think actually you can probably use higher batches | 1,856.32 | 1,873.36 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1864.88 | here. Or you probably should use higher batches. 64 is one that I see used a lot in these training | 1,864.88 | 1,886.08 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1874.16 | codes. And we also set use embedding cache equal to true. Okay. So that initializes the | 1,874.16 | 1,893.6 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1887.0400000000002 | ParallelSentencesDataset object. And now what we want to do is add our data to it. So | 1,887.04 | 1,901.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1893.6 | we need our training files. So training files equals the os.listdir that we did before. | 1,893.6 | 1,906.96 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1901.1999999999998 | I think it's in the data file, in the data directory. | 1,901.2 | 1,919.52 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1910.48 | Yeah. So that's all we want. And what I'll do is just, for f in those training files, | 1,910.48 | 1,928.96 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1919.52 | I'm going to load each one of those into the dataset object. print(f) and data.load_data. | 1,919.52 | 1,937.12 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1930.24 | I need to make sure I include the path there, followed by the actual file name. | 1,930.24 | 1,944.48 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1938.48 | I need to pass max sentences here, which is the maximum number of sentences that you're | 1,938.48 | 1,950.16 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1944.48 | going to take from that load data batch. So basically the maximum number of sentences we're | 1,944.48 | 1,958.64 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1950.16 | going to use from each language there. Now I'm just going to set this to 250,000, | 1,950.16 | 1,965.12 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1959.68 | which is higher than any of the batches we have. That's fine. I don't think, I mean, | 1,959.68 | 1,968.56 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1965.1200000000001 | if you want to try and balance it out, that's fine. You can do that here. | 1,965.12 | 1,979.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1968.56 | And then the other option is where we set the maximum length of the sentences that we're going | 1,968.56 | 1,988.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1979.44 | to be processing. So that is max sentence length. And I said before, look, the maximum we have here | 1,979.44 | 2,002.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t1988.24 | is 256 or 512. So let's just trim all of those down to 256. Okay. That will load our data. | 1,988.24 | 2,009.2 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2003.36 | And now we just need to initialize a data loader. So we're just using PyTorch here. So, from | 2,003.36 | 2,022.4 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2009.2 | torch.utils.data, import DataLoader. Loader is equal to DataLoader. Pass our data. | 2,009.2 | 2,031.76 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2023.1200000000001 | We want to shuffle that data. And we also want to set the batch size, which is the same as before, 32. | 2,023.12 | 2,041.76 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2031.76 | Okay. So the model is ready, the data is ready. Now we initialize our loss function. So from | 2,031.76 | 2,049.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2042.64 | sentence transformers again, dot losses. Yep. Import MSELoss. | 2,042.64 | 2,063.44 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2049.44 | And then loss is equal to MSELoss. And then here we have model equals the student model. Okay. So we're | 2,049.44 | 2,069.52 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2063.44 | only optimizing our student model, not the teacher model. The teacher model is there to teach our | 2,063.44 | 2,078.24 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2069.52 | student, not the other way around. Okay. So that's everything we need ready for training. So let's | 2,069.52 | 2,084.8 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2078.24 | move on to the actual training function. So we can train, I'm going to train for one epoch, | 2,078.24 | 2,093.52 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2085.4399999999996 | but you can do more. I think in the actual, in the other codes that I've seen that do this, | 2,085.44 | 2,099.36 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2094.3199999999997 | they were training for like five epochs. But even just training for one epoch, | 2,094.32 | 2,108.56 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2099.36 | you actually get a pretty good model. So I think you don't need to train for too many, but | 2,099.36 | 2,113.68 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2108.56 | obviously, you know, if you want better performance, I would go with the five that I've seen in the | 2,108.56 | 2,123.84 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2113.6800000000003 | other codes. So we need to pass our train objectives here. So we have the data loader and then loss | 2,113.68 | 2,130 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2123.84 | function. Now I want to say, okay, how many epochs? Like I said before, I'm going to go with one | 2,123.84 | 2,137.28 |
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 13:00:10 UTC | https://youtu.be/NNS5pOpjvAQ | NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | NNS5pOpjvAQ-t2130.56 | number of warm up steps. So before you jump straight up to the learning rate that you, | 2,130.56 | 2,143.52 |
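
The transcript segments above walk through the full distillation setup step by step: an XLM-R student with mean pooling, a monolingual paraphrase teacher without a normalization layer, a ParallelSentencesDataset filled from parallel-translation files, a PyTorch DataLoader, an MSE loss over the student only, and a short fit call. Below is a minimal sketch of that workflow in Python using the sentence-transformers API; the `./data` directory, the 10% warm-up fraction, and the output path are illustrative assumptions rather than values taken from the video.

```python
import os

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Student: XLM-R transformer followed by a mean-pooling layer (no normalization layer).
xlmr = models.Transformer("xlm-roberta-base")
pooling = models.Pooling(
    xlmr.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
student = SentenceTransformer(modules=[xlmr, pooling])

# Teacher: a monolingual paraphrase model with no normalization layer,
# so the student is not forced to mimic normalized vectors.
teacher = SentenceTransformer("paraphrase-distilroberta-base-v2")

# Parallel (source + translation) sentence files; the directory name is an assumption.
data_dir = "./data"
train_files = os.listdir(data_dir)

# The dataset object embeds source sentences with the teacher and pairs those target
# vectors with both the source and the translated sentences for the student.
data = ParallelSentencesDataset(
    student_model=student,
    teacher_model=teacher,
    batch_size=32,
    use_embedding_cache=True,
)
for f in train_files:
    print(f)
    data.load_data(
        os.path.join(data_dir, f),
        max_sentences=250_000,    # cap on sentences taken per file/language
        max_sentence_length=256,  # trim long sentences, as in the walkthrough
    )

loader = DataLoader(data, shuffle=True, batch_size=32)
loss = losses.MSELoss(model=student)  # only the student receives gradients

# One epoch already gives a reasonable model; the scripts referenced in the
# transcript train for around five. The 10% warm-up fraction is an assumption.
student.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    warmup_steps=int(0.1 * len(loader)),
    output_path="./xlmr-multilingual-student",  # hypothetical output directory
)
```

Passing `model=student` to `MSELoss` is what keeps the roles straight: the teacher is only used to produce target embeddings, while the student's weights are updated to reproduce those vectors across languages.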