Sentence Similarity With Transformers and PyTorch (Python)
Published: 2021-05-05 15:00:20 UTC
Video: https://youtu.be/jVPd7lEvjtg (video_id: jVPd7lEvjtg, channel_id: UCv83tO5cePwHMt1952IVVHw)

Transcript (timestamps in seconds):
[732s] So what we need to do is add this other dimension, which is the 768. And then we can just multiply those two tensors together, and this will remove the embedding values where there shouldn't be embedding values. And to do that, we'll assign it to mask. But we'll do it later, actually.
[754s] So, the attention mask. And what we want to do is use the unsqueeze method. And if we start looking at the shape, we can see what is actually happening here. See that we've added this other dimension, and then what that allows us to do is expand that dimension out to 768, which will then match the correct shape that we need to multiply those two together.
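A minimal sketch of that step, assuming `tokens` is the Hugging Face tokenizer output for our six sentences, padded to 128 tokens (the variable names here are illustrative):

```python
import torch

# attention_mask is 1 for real tokens and 0 for padding: shape (6, 128)
mask = tokens['attention_mask'].unsqueeze(-1)
print(mask.shape)  # torch.Size([6, 128, 1]) -- the added dimension
```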
[783s] So we do expand. And here what we want is we'll take embeddings, and we want to expand it out to the embeddings shape that we have already used up here. So that will compare these two, and see that we need to expand this one dimension out to 768. And if we execute that, we can see that it has worked.
[815s] So the final thing that we need to do there is convert that into a float tensor, and then we assign that to the mask here. So this float at the end, that's just converting it from an integer to a float.
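Putting the whole step together, a sketch of the mask construction described above, assuming `embeddings` is the model's last hidden state with shape (6, 128, 768):

```python
# Add a trailing dim, expand it out to 768, and cast the integer mask to float
mask = tokens['attention_mask'].unsqueeze(-1).expand(embeddings.size()).float()
print(mask.shape)  # torch.Size([6, 128, 768])
```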
[832s] So now what we can do is apply this mask to our embeddings. So we'll call this one masked_embeddings, and it is very simple: we just do embeddings multiplied by mask. And now if we just compare embeddings, have a look at what we have here. So it's quite a lot.
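The multiply itself, as a sketch:

```python
# Zero out embedding values at padding positions; real-token values are unchanged
masked_embeddings = embeddings * mask
print(masked_embeddings.shape)  # torch.Size([6, 128, 768])
```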
[858s] And now we have a look at masked_embeddings, and you see here that we have the same values. So looking at the top, these are the same, but then these values here have been mapped to zero, because they are just padding tokens and we don't want to pay attention to those. So that's the point of the masking operation there: to remove those.
[894s] And now what we want to do is take all of those embeddings, because if we have a look at the shape that we have, we still have these 128 tokens. We want to convert those down into just one vector. And there are two operations that we need to do here, so we're doing a mean pooling operation.
[919s] So we need to calculate the sum within each of these. So if we summed all of these up together, that's what we are going to be doing, pushing them into a single value. And then we also need to count all of those values, but only where we were supposed to be paying attention. So where we converted them into zeros, we don't want to count those values. And then we divide that sum by the count to get our mean.
[948s] So to get summed, we do torch.sum, and then it's just masked_embeddings. And this is in dimension 1, which is this dimension here, if we look at the shape that we have. Okay, so now we can see that we've removed this dimension.
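A sketch of that sum over the token dimension:

```python
# Collapse the 128-token dimension (dim 1): (6, 128, 768) -> (6, 768)
summed = torch.sum(masked_embeddings, 1)
print(summed.shape)  # torch.Size([6, 768])
```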
[977s] And now what we want to do is create our counts. And to do this, we use a slightly different approach: we just do torch.clamp, and then inside here we do mask.sum, again in dimension 1. And then we also add a min argument here, which just stops us from creating any divide-by-zero error. So we do 1e... and all this needs to be is a very small number. I think by default it's 1e-8, but I usually just use 1e-9, although in reality it shouldn't really make a difference. And, sorry, just put counts there. Okay, so that's our summed and our counts.
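And a sketch of the counts, with the min argument guarding the later division:

```python
# Count real-token positions by summing the mask over dim 1; clamp to a tiny
# minimum so the upcoming division can never divide by zero
counts = torch.clamp(mask.sum(1), min=1e-9)
print(counts.shape)  # torch.Size([6, 768])
```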
[1040s] And now we get the mean pooled. So we do mean_pooled equals summed divided by the counts, and we'll just check the size of that again. Okay, so that is our sentence vector.
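As a sketch:

```python
# Mean pooling: divide the summed embeddings by the number of real tokens
mean_pooled = summed / counts
print(mean_pooled.shape)  # torch.Size([6, 768])
```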
[1062s] So we have six of them here, and each one contains just 768 values. And let's have a look at what they look like: we just get these values here. Now, what we can do is compare each of these and see which ones get the highest cosine similarity value.
[1086s] Now, we're going to be using the sklearn implementation, which is sklearn.metrics.pairwise: we import cosine_similarity. And then this would expect NumPy arrays; obviously, we have PyTorch tensors, so we are going to get an error. I'm going to show you so you at least see it, and then I'll show you how to fix it.
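The import being described:

```python
from sklearn.metrics.pairwise import cosine_similarity
```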
[1117s] So we call cosine_similarity, and in here we want to pass a single vector that we are going to be comparing. So I'm going to compare the first text sentence. So if we just take these and put them down here, I'm going to take the very first one of those, which is mean_pooled[0]. And because we are extracting this out directly, that means we get a list format; we want it to be in a vector format, so it's a list within a list. And then we want to extract the remaining five sentences: so go from 1 all the way to the end. So that's those last five there.
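A sketch of that call; cosine_similarity expects 2-D inputs, hence the extra brackets that make the first argument a list within a list:

```python
# Compare sentence 0 against sentences 1-5
scores = cosine_similarity(
    [mean_pooled[0]],  # wrapped in a list so it is 2-D: (1, 768)
    mean_pooled[1:]    # the remaining five vectors: (5, 768)
)
```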
[1176s] Now if we run this, we're going to get this runtime error. We go down and we see: can't call numpy() on a Tensor that requires grad. So this is just with PyTorch: this tensor is currently within our PyTorch model.
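The fix the error message itself suggests is to detach the tensor from the autograd graph before converting it; a sketch under the same assumptions:

```python
# Detach from the computation graph, then NumPy conversion is allowed
mean_pooled = mean_pooled.detach().numpy()

scores = cosine_similarity([mean_pooled[0]], mean_pooled[1:])
print(scores)  # similarity of sentence 0 to each of the other five
```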