Sentence Similarity With Transformers and PyTorch (Python)
2021-05-05 15:00:20 UTC
https://youtu.be/jVPd7lEvjtg
So, tokenizer.encode_plus. And then in here, we need to pass our sentence. We need to pass the maximum length of our sequence. With BERT, usually we would set this to 512, but because we're using this BERT-base NLI mean-tokens model, it should actually be set to 128. So we set max_length to 128. Anything longer than this, we want to truncate, so we set truncation equal to True. And anything shorter than this, which they all will be in our case, we pad up to that max length, so we set padding equal to 'max_length'. And then here, we want to say return_tensors, and we set this equal to 'pt', because we're using PyTorch.
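As a rough sketch of what that call looks like in code (the checkpoint name is our inference from the "BERT-base NLI mean-tokens" model the video names, and the example sentence is hypothetical):

```python
from transformers import AutoTokenizer

# assumed checkpoint, inferred from the model named in the video
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')

sentence = "the quick brown fox jumped over the lazy dog"  # hypothetical example input

new_tokens = tokenizer.encode_plus(
    sentence,
    max_length=128,        # 128 for this model; plain BERT usually allows up to 512
    truncation=True,       # cut off anything longer than max_length
    padding='max_length',  # pad anything shorter up to max_length
    return_tensors='pt'    # return PyTorch tensors
)
```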
Now this will return a dictionary containing input IDs and an attention mask for a single sentence. So we'll take the new tokens and assign them to that new_tokens variable. And then what we're going to do is access our tokens dictionary, input IDs first, and append the input IDs for the single sentence from the new_tokens variable. So, input IDs. And then we do the same for our attention mask. Okay, so that gives us those.
There's another thing as well. These come back wrapped, so we also want to just extract the first element there, because they're almost like lists within a list, but in tensor format, and we want to extract the inner list.
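Putting those steps together, a minimal sketch of the loop (the tokens dictionary of empty lists and the sentences list are assumed to have been set up earlier in the video):

```python
tokens = {'input_ids': [], 'attention_mask': []}  # assumed earlier setup

for sentence in sentences:  # assumed: the six example sentences defined earlier
    new_tokens = tokenizer.encode_plus(sentence, max_length=128,
                                       truncation=True, padding='max_length',
                                       return_tensors='pt')
    # each tensor has shape (1, 128); [0] unwraps the outer "list within a list"
    tokens['input_ids'].append(new_tokens['input_ids'][0])
    tokens['attention_mask'].append(new_tokens['attention_mask'][0])
```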
Now that's good, but obviously we're using PyTorch here. We want PyTorch tensors, not lists. Within these lists, we do have PyTorch tensors. So in fact, let me just show you. If we have a look in here, we'll see that we have our PyTorch tensors, but they're contained within a normal Python list. We can even check that: we do type, we see that we get a list, and inside there we have the torch tensor, which is what we want for all of them.
So, to convert this list of PyTorch tensors into a single PyTorch tensor, what we do is take torch and use the stack method. What the stack method does is take a list, and within that list it expects PyTorch tensors, and it will stack all of those on top of each other, essentially adding another dimension and stacking them all along it. Hence the name. So take that, and we want to do it for both input IDs and attention mask.
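In code, that stacking step might look like this, reusing the assumed tokens dictionary from above:

```python
import torch

# restack each list of (128,) tensors into a single (num_sentences, 128) tensor
tokens['input_ids'] = torch.stack(tokens['input_ids'])
tokens['attention_mask'] = torch.stack(tokens['attention_mask'])
```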
And then let's have a look at what we have. So let's go input IDs. And now we just have a single tensor. Okay, so we do type, and now we just have a tensor. Now, that's great. Check its size: we have six sentences that have all been encoded into 128 tokens each, ready to go into our model.
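Those quick checks, with the shapes we'd expect for the six sentences:

```python
type(tokens['input_ids'])   # now torch.Tensor rather than a plain list
tokens['input_ids'].size()  # torch.Size([6, 128]): six sentences, 128 tokens each
```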
So, to process these through our model, we'll assign the outputs to this outputs variable, and we take our model and pass our tokens as keyword arguments into the model there. So we process that, and that will give us this output object.
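A sketch of that forward pass, assuming the model was loaded from the same checkpoint as the tokenizer:

```python
from transformers import AutoModel

# assumed checkpoint, matching the tokenizer above
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-mean-tokens')

# ** unpacks the dictionary, so input_ids and attention_mask
# are passed to the model as keyword arguments
outputs = model(**tokens)
```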
Inside this output object, we have the last hidden state tensor here. We can also see that if we print out the keys: you see that we have the last hidden state, and we also have this pooler output. Now, we want to take our last hidden state tensor and then perform the mean pooling operation to convert it into a sentence vector.
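Inspecting the output object as described (the exact printed form may vary by transformers version):

```python
outputs.keys()
# odict_keys(['last_hidden_state', 'pooler_output'])
```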
So, to get that last hidden state, we will assign it to this embeddings variable, and we extract it using last_hidden_state, like that. And let's just check what we have here, so we'll just call shape. You see now we have the six sentences, we have the 128 tokens, and then we have the 768 dimension size, which is just the hidden state dimension within BERT.
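That extraction and shape check:

```python
embeddings = outputs.last_hidden_state
embeddings.shape  # torch.Size([6, 128, 768]): sentences x tokens x hidden size
```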
So what we have at the moment is this last hidden state tensor, and what we're going to do now is convert it into a sentence vector using a mean pooling operation. The first thing we need to do is multiply every value within this last hidden state tensor by zero wherever we shouldn't have a real token. If we look up here, we've padded all of these, and obviously there are more padding tokens in some sentences than in others. So we need to take each of those attention mask tensors that we took here, which just contain ones and zeros, ones where there are real tokens and zeros where there are padding tokens, and multiply that out to remove any activations where there should just be padding tokens, i.e. set them to zero.
Now, the only problem is that if we have a look at our attention mask, so tokens attention_mask, if we have a look at the size, we get a six by 128 tensor.
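The section cuts off here, but a minimal sketch of the masking step it is building toward, assuming the mask is broadcast up to the embedding size before multiplying:

```python
# (6, 128) -> (6, 128, 1) -> (6, 128, 768), matching the embeddings tensor
mask = tokens['attention_mask'].unsqueeze(-1).expand(embeddings.size()).float()

# zero out activations at padding positions; real tokens pass through unchanged
masked_embeddings = embeddings * mask
```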