title | published | url | video_id | channel_id | id | text | start | end
---|---|---|---|---|---|---|---|---
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1195.0 | And we need to detach it from PyTorch | 1195 | 1202
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1197.0 | in order to convert it into something that PyTorch cannot read anymore. | 1197 | 1205
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1202.0 | And it actually tells us exactly what we need to do. | 1202 | 1208
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1205.0 | So: use tensor.detach().numpy() instead. | 1205 | 1213
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1208.0 | So we take .detach() and .numpy(). | 1208 | 1224
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1213.0 | And all we need to do is write mean_pooled.detach().numpy() and then rerun it. | 1213 | 1227
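The fix the transcript is describing can be sketched as below. The variable name `mean_pooled` and the tensor shape are assumptions for illustration; the point is that a tensor still attached to the autograd graph cannot be converted to NumPy directly.

```python
import torch

# Stand-in for the transcript's mean_pooled embeddings: a tensor that is
# still attached to the autograd graph (requires_grad=True).
mean_pooled = torch.randn(5, 768, requires_grad=True)

# Calling mean_pooled.numpy() here raises:
#   "Can't call numpy() on Tensor that requires grad.
#    Use tensor.detach().numpy() instead."
# Detaching first removes the tensor from the graph, so NumPy can take over.
arr = mean_pooled.detach().numpy()
```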
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1224.0 | And we get our similarity scores. | 1224 | 1235
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1227.0 | So straight away, we got 0.33174455. | 1227 | 1241
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1235.0 | This one has the highest similarity, 0.72, by a fair bit as well. | 1235 | 1253
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1241.0 | So that is comparing this sentence and the sentence at index one of our last five, | 1241 | 1255
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1253.0 | which is this one. | 1253 | 1261
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1255.0 | So there we've calculated similarity, and it's clearly working. | 1255 | 1264
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1261.0 | So that's it for this video. | 1261 | 1265
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1264.0 | I hope it's been useful. | 1264 | 1267
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1265.0 | I think this is really cool. | 1265 | 1287
Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-05 15:00:20 UTC | https://youtu.be/jVPd7lEvjtg | jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | jVPd7lEvjtg-t1267.0 |  | 1267 | 1287
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t0.0 | Hi, welcome to this video. We're going to be covering Facebook AI Similarity Search | 0 | 14.38
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t5.96 | or Faiss. And we're going to be covering what Faiss is and how we can actually begin using | 5.96 | 21.3
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t14.38 | it, and we'll introduce a few of the key indexes that we can use. | 14.38 | 27.08
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t21.3 | So just as a quick introduction to Faiss: as you can probably tell from the name, it's | 21.3 | 35.34
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t27.08 | a similarity search library from Facebook AI that allows us | 27.08 | 46.16
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t35.34 | to compare vectors with very high efficiency. So if you've seen any of my videos before | 35.34 | 52.88
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t46.16 | on building sentence embeddings and comparing sentence embeddings, in those videos I just | 46.16 | 58.2
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t52.88 | used a generic Python loop to go through and compare each embedding, and that's very | 52.88 | 63.68
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t58.2 | slow. Now if you're only working with maybe 100 vectors, it's probably OK, you can deal | 58.2 | 67.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t63.68 | with that. But in reality, we're probably never going to be working with that small a | 63.68 | 75.08
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t67.76 | dataset. Facebook AI Similarity Search can scale to tens or hundreds of thousands, or up | 67.76 | 86.72
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t75.08 | to millions and even billions. So this is incredibly good for efficient similarity search. | 75.08 | 94.6
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t86.72 | But before we get into it, I'll just sort of visualize what this index looks like. So | 86.72 | 103.22
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t94.6 | if we imagine that we have all of the vectors that we have created and we put them into our | 94.6 | 109.64
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t103.22 | similarity search index. Now they could look like this. So this is only a three-dimensional | 103.22 | 118.48
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t109.64 | space, but in reality there would be hundreds of dimensions here. In our use case, we're | 109.64 | 130.36
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t118.48 | going to be using a dimensionality of 768. So, you know, there's a fair bit in there. Now when | 118.48 | 137.88
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t130.36 | we search, we would introduce a new vector into here. So let's say here, this is our query | 130.36 | 145.96
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t137.88 | vector. So xq. Now if we were comparing every item here, we would have to calculate | 137.88 | 154.32
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t145.96 | the distance between every single item. So we would calculate between our query vector | 145.96 | 158.28
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t154.32 | and every other vector that is already in there, in order to find the vectors which are | 154.32 | 168.48
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t158.28 | closest to it. Now we can optimize this. We can reduce the number of | 158.28 | 172.8
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t168.48 | dimensions in each of our vectors, and do it in an intelligent way so they take up less | 168.48 | 179.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t172.8 | space and the calculations are faster. And we can also restrict our search. So in this | 172.8 | 185.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t179.44 | case, rather than comparing every single item, we might restrict our search to just this | 179.44 | 193.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t185.76 | area here. And these are a few of the optimizations, at a very high level, that we can do with Faiss. | 185.76 | 201.28
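The exhaustive comparison the speaker describes can be sketched in plain NumPy with toy random vectors standing in for real embeddings. Computing the L2 distance from a query to every stored vector is what a flat index does; the dimensionality-reduction and restricted-search optimizations mentioned above exist to avoid this full scan.

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((1000, 768)).astype("float32")  # toy index contents
xq = rng.random(768).astype("float32")               # toy query vector

# Exhaustive (flat) search: L2 distance from xq to every stored vector.
dists = np.linalg.norm(vectors - xq, axis=1)
nearest = np.argsort(dists)[:4]  # indices of the four closest vectors
```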
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t193.76 | So that's enough for the introduction to Faiss. Let's actually jump straight into the code. | 193.76 | 209.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t201.28 | Okay, so this is our code. In here, this is how we are loading in all of our sentence | 201.28 | 212.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t209.44 | embeddings. So I've gone ahead and processed some already, because they do take a little | 209.44 | 219.08
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t212.76 | bit of time to actually build. But we're building them from this file here. We'll load this | 212.76 | 226.48
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t219.08 | into Python as well. But I mean, it's pretty straightforward, just a load of sentences that | 219.08 | 232.2
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t226.48 | have been separated by a newline character. And then here we have all of those NumPy binary | 226.48 | 237.6
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t232.2 | files. Now these are NumPy binary files. Like I said, we're getting them from GitHub, which | 232.2 | 245.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t237.6 | are over here. That's where we're pulling them all in, using this cell here. Now that | 237.6 | 250.32
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t245.44 | saves everything to file. And then we just read in each of those files and we append | 245.44 | 258.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t250.32 | them all into a single NumPy array here. And that gives us these 14.5 thousand samples. | 250.32 | 266.92
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t258.76 | Each embedding is a vector with 768 values inside. So that's how we're loading in our | 258.76 | 281.04
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t266.92 | data. I'll also load in that text file as well. So we just want to do with open sentences.txt. | 266.92 | 287.16
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t281.04 | And then we'll just read that in as a normal file. And we just write, I'm going to put | 281.04 | 293.92
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t287.16 | lines = fp.read(). And like I said, we're splitting that by newline characters. So we | 287.16 | 311.24
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t293.92 | just write that. Sorry, sentences. And we see a few of those as well. Okay. Now to convert | 293.92 | 317.64
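The loading steps walked through here might look like the sketch below. The file names and contents are stand-ins written out first so the snippet is self-contained; in the video, sentences.txt and the .npy embedding chunks are pulled from GitHub.

```python
import numpy as np

# Stand-in files: a newline-separated sentences.txt and two .npy embedding
# chunks (hypothetical names; the video downloads the real ones from GitHub).
with open("sentences.txt", "w") as fp:
    fp.write("a dog runs\nsomeone plays football\na man sprints")
np.save("chunk_0.npy", np.random.rand(2, 768).astype("float32"))
np.save("chunk_1.npy", np.random.rand(1, 768).astype("float32"))

# Load the sentences, splitting on newline characters.
with open("sentences.txt") as fp:
    lines = fp.read().split("\n")

# Read each NumPy binary file and append them into a single array.
sentence_embeddings = np.concatenate(
    [np.load(f) for f in ("chunk_0.npy", "chunk_1.npy")]
)
```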
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t311.24 | from those sentences into those sentence embeddings, I need to import this anyway for later on | 311.24 | 321.6
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t317.64 | when we're building our query vectors. I'll just show you how I do that now. What we do | 317.64 | 328.04
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t321.6 | is, from sentence transformers, which is the library we're using to create those embeddings, | 321.6 | 340.28
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t328.04 | import SentenceTransformer. And then for our model, we're using SentenceTransformer again. | 328.04 | 350.82
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t340.28 | And we're using the bert-base-nli-mean-tokens model. Okay. So that's how we initialize | 340.28 | 355.12
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t350.82 | our model. And then when we're encoding our text, we'll see in a moment, we just write | 350.82 | 360.68
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t355.12 | model.encode. And then we write something in here, hello world. Okay. And that will | 355.12 | 369.12
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t360.68 | encode, that will give us a sentence embedding. Okay. So that is what we have inside here. | 360.68 | 378.28
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t369.12 | We just have the sentence embeddings of all of our lines here. Now, I think we have everything | 369.12 | 386.2
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t378.28 | we need to get started. So let's build our first Faiss index. So the first one we're going | 378.28 | 395.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t386.2 | to build is called IndexFlatL2. And this is a flat index, which means that all | 386.2 | 402.88
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t395.44 | the vectors are just flat vectors. We're not modifying them in any way. And the L2 stands | 395.44 | 410.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t402.88 | for the distance metric that we're using to measure the similarity of each vector, or the | 402.88 | 418.24
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t410.44 | proximity of each vector. And L2 is just Euclidean distance. So it's a pretty straightforward | 410.44 | 425.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t418.24 | function. Now, to initialize that, we just write faiss. Oh, we haven't imported it, so we need | 418.24 | 435.4
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t425.76 | to import faiss. And then we write index = faiss.IndexFlatL2. And then in here, | 425.76 | 442.04
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t435.4 | we need to pass the dimensionality of our vectors, or our sentence embeddings. Now, what | 435.4 | 454.12
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t442.04 | is our dimensionality? So each one is 768 values long. So if we'd like a nicer way of | 442.04 | 466.04
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t454.12 | writing it out, we put sentence_embeddings.shape[1]. OK. And our index requires | 454.12 | 474.24
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t466.04 | that in order to be properly initialized. So do that. That will be initialized. Let | 466.04 | 485.36
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t474.24 | me run it again. I think my notebook just restarted. It did restart. That's weird. OK, | 474.24 | 495.8
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t485.36 | one minute. So that's going to initialize the index. And there is one thing that we | 485.36 | 503.76
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t495.8 | need to be aware of. So sometimes with these indexes, we will need to train them. So if | 495.8 | 508.56
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t503.76 | the index is going to do any clustering, we would need to train that clustering algorithm | 503.76 | 515.08
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t508.56 | on our data. Now in this case, we can check if an index needs training, or is trained | 508.56 | 523.68
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t515.08 | already, using the is_trained attribute. And we'll see with this index, because it's just | 515.08 | 531.28
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t523.68 | a flat L2 index, it's not doing anything special. Because it's not doing anything | 523.68 | 536.4
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t531.28 | special, we don't need to train it. And we can see that when we write is_trained, it | 531.28 | 540.88
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t536.4 | says it's already trained. It just means that we don't actually need to train it. So that's | 536.4 | 548.32
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t540.88 | good. Now, how do we add our vectors, our sentence embeddings? All we need to do is | 540.88 | 557.12
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t548.32 | write index.add. And then we just add our embeddings like so. So pretty straightforward. So add | 548.32 | 564.28
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t557.12 | sentence embeddings. And then from there, we can check that they've been added properly | 557.12 | 569.44
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t564.28 | by looking at the ntotal value. So this is the number of embeddings, or vectors, that we | 564.28 | 576.64
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t569.44 | have in our index. And with that, we can go ahead and start querying. So let's first create | 569.44 | 583.84
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t576.64 | a query. So we'll do xq, which is our query vector. And we want to do the model.encode | 576.64 | 595.6
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t583.84 | that we did before. Now, I'm going to write someone sprints with a football. OK. That's | 583.84 | 604.84
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t595.6 | going to be our query vector. And to search, we do this. So we write D, I = index.search | 595.6 | 614.56
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t604.84 | xq. And then in here, we need to add k as well. So k, let me define it above here. So | 604.84 | 621.12
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t614.56 | k is the number of items, or vectors, similar vectors, that we'd like to return. So I'm going | 614.56 | 631.88
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t621.12 | to return 4. So with this, we will return four index IDs into this I variable | 621.12 | 638
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t631.88 | here. I'm going to time it as well, just so we see how long it takes. And let's print | 631.88 | 650.16
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t638.0 | I. You can see that we get these four items. Now, these align to our lines. So the text | 638 | 656.4
Faiss - Introduction to Similarity Search | 2021-07-13 15:00:19 UTC | https://youtu.be/sKyvsdEv6rk | sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | sKyvsdEv6rk-t650.16 | that we have up here, that will align. So what we can do is we can print all of those | 650.16 | 670.72
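Mapping the returned index IDs back to the sentences, as described here, might look like this. Both `lines` and the search result are toy stand-ins; the real `lines` list comes from sentences.txt and the real `I` from index.search.

```python
# Toy stand-ins for the sentence list and a hypothetical search result.
lines = ["a dog runs", "someone plays football", "a man sprints", "it rains"]
I = [[2, 1, 0, 3]]  # hypothetical index IDs for one query

# Each returned ID aligns with a position in the sentence list.
matches = [lines[i] for i in I[0]]
```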