Unicode Normalization for NLP in Python
2021-03-17 13:30:00 UTC
https://youtu.be/9Od9-DV9kd8

And then that leads us on to our final normal form, which is normal form KC (NFKC). So normal form KC consists of two steps.
We have the compatibility decomposition, which is what we've just done, and then there's a second step, which is canonical composition. So we're building those different parts back up canonically. This allows us to normalize all variants of a given character into a single shared form.
So for example, with our fancy H, we can add the combining cedilla to it in order to make this horrible monstrosity of a character. And we would write that out as: we have the H here, so we can just put that straight in, and then we can come up here, get our cedilla Unicode character, and put that in. If we put those together, we get this weird character.
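As a rough sketch of what's on screen here, assuming the "fancy H" is U+210C (the black-letter capital H) and the combining cedilla is U+0327 (the video doesn't spell out the code points):

```python
fancy_h = "\u210c"   # ℌ BLACK-LETTER CAPITAL H, a compatibility character (assumed)
cedilla = "\u0327"   # COMBINING CEDILLA

weird = fancy_h + cedilla
print(weird, len(weird))  # renders as one glyph, but it is two code points
```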
Now, if we wanted to compare that to another character, the H with cedilla, which is a single Unicode character, we're going to have some issues, because that one is just one character. So if we use NFKD, we can give it a go.
So we'll add this in and try to compare it to this. Okay, we get false. That's because this is breaking this down into two different parts: an H and this combining cedilla. So if I just remove this and print them out, you see, okay, they look the same, but they're not the same, because we have those two characters again.
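A minimal sketch of that failing comparison, again assuming U+210C for the fancy H, with U+1E28 (Ḩ, the precomposed H with cedilla) as the single-code-point character:

```python
import unicodedata

weird = "\u210c\u0327"   # fancy H + combining cedilla (two code points)
h_cedilla = "\u1e28"     # Ḩ LATIN CAPITAL LETTER H WITH CEDILLA (one code point)

# NFKD only decomposes: the fancy H becomes a plain H, but the cedilla
# stays separate, so the left side is still two code points
print(unicodedata.normalize("NFKD", weird) == h_cedilla)            # False
print([hex(ord(c)) for c in unicodedata.normalize("NFKD", weird)])  # ['0x48', '0x327']
```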
So this is where we need canonical composition to bring those together into a single character. That looks like this: initially, we have our compatibility decomposition, and if we go across, we have this final step, which is the canonical composition. And this is the NFKC normal form, so normal form KC.
And to apply that, all we need to do is, obviously, adjust this to NFKC. And, okay, we run that and seem to get the same result. Then, if we add this, we can see, okay, now we're getting what we need.
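The same comparison with NFKC (same assumed code points): compatibility decomposition runs first, then canonical composition folds the H and combining cedilla back into the single precomposed character:

```python
import unicodedata

weird = "\u210c\u0327"   # fancy H + combining cedilla
h_cedilla = "\u1e28"     # Ḩ, one code point

# decompose (compatibility), then recompose (canonical)
print(unicodedata.normalize("NFKC", weird) == h_cedilla)  # True
```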
And in reality, I think for most cases, or almost all that I can think of anyway, you're going to use this NFKC form to normalize your text, because it's going to provide you with the cleanest, simplest, most normalized dataset. So when going forward with your language models, this is definitely the form that I would go with. Now, of course, you can mix it up; you can use different ones.
But I would definitely recommend, if this is quite confusing and hard to get a grasp of, just taking these Unicode characters, playing around with them a little bit, applying these normal form functions to them, and just seeing what happens. I think it will probably click quite quickly. So that's it for this video. I hope it's been useful and you've enjoyed it. Thank you for watching.
Sentence Similarity With Transformers and PyTorch (Python)
2021-05-05 15:00:20 UTC
https://youtu.be/jVPd7lEvjtg

Today we're going to have a look at how we can use transformers like BERT to create embeddings for sentences, and how we can then take those sentence vectors and use them to calculate the semantic similarity between different sentences.
So at a high level, what you can see on the screen right now is a BERT base model. Inside BERT base we have multiple encoders, and at the bottom we can see our tokenized text. We have 512 tokens here, and they get passed into our first encoder to create these hidden state vectors, which are of size 768 in BERT. These get processed through multiple encoders (there are 12 in total), and between every one of them there is a vector of size 768 for every single token that we have, so 512 tokens in this case.
Now what we're going to do is take the final tensor out here, this last hidden state tensor, and use mean pooling to compress it into a 768-by-1 vector. That is our sentence vector. Then, once we've built our sentence vector, we're going to use cosine similarity to compare different sentences and see if we can get something that works.
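As a rough sketch of those two steps before we get to the real code (the random tensors are just stand-ins to show the shapes involved):

```python
import torch
import torch.nn.functional as F

# stand-in for BERT's last hidden state: (batch=1, tokens=512, hidden=768)
last_hidden_state = torch.randn(1, 512, 768)
attention_mask = torch.ones(1, 512)

# mean pooling: average the 512 token vectors, ignoring padding positions
mask = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
sentence_vector = (last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_vector.shape)  # torch.Size([1, 768])

# cosine similarity between two such sentence vectors
other_vector = torch.randn(1, 768)
print(F.cosine_similarity(sentence_vector, other_vector))
```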
So, switching across to Python, these are the sentences we're going to be comparing, and there's two of them in particular. There's this one here, "three years later, the coffin was still full of jello", and that has the same meaning as this one here. I just rewrote it, but with completely different words, so I don't think there's really any word here that matches.
For "years" we have "dozens of months", for "jello", "jelly", and for "coffin", "person box". No normal human would ever say that second sentence. Well, no normal human would probably say either of those, but we definitely wouldn't use "person box" for coffin or "many dozens of months" for years.
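For reference, the matching pair looks something like this. The first sentence is quoted in the video; the second is a plausible reconstruction from the word swaps just mentioned, not a verbatim quote:

```python
sentences = [
    "Three years later, the coffin was still full of Jello.",
    # reconstructed from the swaps above (years -> dozens of months,
    # jello -> jelly, coffin -> person box); wording is approximate
    "The person box was packed with jelly many dozens of months later.",
]
```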
So it's reasonably complicated, but we'll see that this should work for similarity: we'll find that these two share the highest similarity score after we've encoded them with BERT and calculated our cosine similarity. And down here is the model we'll be using. So we're going to be using sentence-transformers' bert-base-nli-mean-tokens model. Now, there are two approaches we can take here.
There's the easy approach, using something called sentence-transformers, and I'm going to be covering that in another video. This approach is a little more involved: we're going to be using transformers and PyTorch directly. So the first thing we need to do is actually create our last hidden state tensor, and of course we need to import the libraries we're going to be using. From transformers, we're going to be using the auto tokenizer and the auto model, and then we need to import torch as well.
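That is, something along these lines:

```python
from transformers import AutoTokenizer, AutoModel
import torch
```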
And then, after we've imported these, we first need to initialize our tokenizer and model. For the tokenizer, we just do auto tokenizer, and for both of these we're going to use from_pretrained with the model name that I've already defined. These are coming from the Hugging Face library, obviously, and we can see the model here, so it's this one. And then our model is the auto model, from_pretrained again. Run those.
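A minimal sketch of that initialization, using the model named earlier:

```python
# the sentence-transformers BERT model mentioned above, from the Hugging Face hub
model_name = "sentence-transformers/bert-base-nli-mean-tokens"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```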
And now what we want to do is tokenize all of our sentences. To do this, we're going to use a tokens dictionary, and in here we're going to have input IDs, which will contain a list (you'll see why in a moment), and an attention mask, which will also contain a list. Now, we have to go through each sentence one by one: for sentence in sentences. And we're going to be using the tokenizer's encode_plus method.
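The transcript cuts off here, but as a sketch of where this step is heading, assuming typical encode_plus arguments (the exact max_length and padding settings are assumptions, not quoted from the video):

```python
tokens = {'input_ids': [], 'attention_mask': []}

for sentence in sentences:
    # encode_plus returns the input IDs and attention mask for one sentence
    new_tokens = tokenizer.encode_plus(
        sentence,
        max_length=128,        # assumed; the video may use a different value
        truncation=True,
        padding='max_length',
        return_tensors='pt',
    )
    tokens['input_ids'].append(new_tokens['input_ids'][0])
    tokens['attention_mask'].append(new_tokens['attention_mask'][0])

# stack the per-sentence tensors into a single batch
tokens['input_ids'] = torch.stack(tokens['input_ids'])
tokens['attention_mask'] = torch.stack(tokens['attention_mask'])
```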