Faiss - Introduction to Similarity Search
2021-07-13 15:00:19 UTC
https://youtu.be/sKyvsdEv6rk
video_id: sKyvsdEv6rk | channel_id: UCv83tO5cePwHMt1952IVVHw

[656.4–687.8] out. So let's do i. And then in here, we want to write lines[i] for i. Sorry, let me end
[670.72–691.04] that. For i in I. OK. Ah, sorry. So this is 0 here. OK. So these are the sentences, or
[687.8–696.64] the similar sentences, that we've got back. And we see, obviously, it seems to be working
[691.04–703.2] pretty well. All of them are talking about football or being on a football field. So
[696.64–712.04] that looks pretty good, right? The only problem is that this takes a long time. We don't have
[703.2–720.6] that many vectors in there, and it took 57.4 milliseconds. So it's a little bit long, and
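What the flat index is doing under the hood can be sketched in NumPy: an exhaustive L2 search compares the query against every stored vector, which is why the query time above grows with index size. This is a minimal sketch with random stand-in data, not the video's actual sentence embeddings, and the 768 dimensions are an assumption about the encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768                                                  # embedding size (assumed)
xb = rng.standard_normal((15_000, d)).astype("float32")  # stand-in for the indexed vectors
xq = rng.standard_normal(d).astype("float32")            # stand-in query vector

# Exhaustive (flat) L2 search: compute the distance to every vector,
# so cost grows linearly with the number of vectors in the index.
dists = ((xb - xq) ** 2).sum(axis=1)
k = 4
I = np.argsort(dists)[:k]   # indices of the k nearest vectors
```

The `I` array here plays the same role as the indices returned by `index.search` in the video: positions of the most similar sentences.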
[712.04–725.08] something that we can actually improve. OK. So before we move on to the next index, I
[720.6–733.64] just want to have a look at the sort of speed that we would expect from this, because
[725.08–736.96] this is a very small data set. So what else could we expect? So if we go over here, I've
[733.64–744.6] already written all this code. If you'd like to go through this notebook, I'll leave a
[736.96–752.5] link in the description. So come down here, we have this flat L2 index. And this is the
[744.6–759.44] query time. So this is for a randomly generated vector with a dimension size of 100. And this
[752.5–765.6] is the number of vectors within that index. So we go up to 1 million here. And this is
[759.44–772.16] the query time in milliseconds. You can see it increases quite quickly. Now, this is in
[765.6–781.4] Faiss, but it's still an exhaustive search. We're not really optimizing how we search.
[772.16–790.12] We're not using the approximate search capabilities of Faiss. So if we switch back over to Faiss,
[781.4–797.6] we can begin using that approximate search by adding partitioning into our index. Now,
[790.12–804.52] the most popular of these uses a technique very similar to something called Voronoi cells.
[797.6–815.44] I'm not sure how you pronounce it. I think that's about right. And I can show you what
[804.52–823.8] that looks like. So over here, if we go here, we have all of these. So this is called a
[815.44–837.4] Voronoi diagram. And each of the sort of squares, or the cells, that you see are called Voronoi
[823.8–845.48] cells. So here we have Voronoi cells. And that is just what you see here. So this, this,
[837.4–853.24] all of these kind of squares are each a cell. Now, as well as those, we also have our centroids.
[845.48–861.76] So I'm just going to write this down. So, centroids. And these are simply the centers of those
[853.24–868.2] cells. Now, when we introduce a new vector or our query vector into this, what we're
[861.76–875.56] doing is essentially, so we have our query vector, and let's say it appears
[868.2–883.56] here. Now, within these Voronoi cells, we actually have a lot of other vectors. So we
[875.56–891.04] could have millions in each cell. So there's a lot in there. And if we
[883.56–895.6] compare that query vector, this thing here, to every single one of those vectors, it would
[891.04–901.52] obviously take a long time. We'd be going through every single one, and we don't want to do that.
[895.6–909.52] So what this approach allows us to do is, instead of checking against every one of those vectors,
[901.52–920.04] we just check it against every centroid. And once we figure out which centroid is the closest,
[909.52–925.04] we limit our search scope to only the vectors that are within that centroid's Voronoi cell.
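That two-step idea can be sketched with NumPy on toy 2-D data. In a real IVF index the centroids come from k-means during training; here they are random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, nlist = 2, 50
centroids = rng.standard_normal((nlist, d))   # cell centers (learned by training in practice)
xb = rng.standard_normal((10_000, d))         # the indexed vectors

# Each stored vector belongs to the cell of its nearest centroid.
cell = ((xb[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)

xq = rng.standard_normal(d)                        # query vector
nearest = ((centroids - xq) ** 2).sum(1).argmin()  # compare against 50 centroids only...
scope = xb[cell == nearest]                        # ...then search just that cell's vectors
```

Instead of 10,000 distance computations, the query only needs 50 centroid comparisons plus however many vectors live in the winning cell.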
[920.04–933] So in this case, it would probably be this centroid here, which is the closest. And then
[925.04–938.44] we would just limit our search to only be within these boundaries. Now, what we might
[933–947.6] find is that maybe the closest vector here is actually here, whereas the closest vector
[938.44–955.08] here is right there. So in reality, this vector here, this one, might actually be a better
[947.6–963.04] approximation, or it might be more similar to our query. And that's why this is
[955.08–969.88] approximate search, not exhaustive search: because we might miss out on something. But
[963.04–977.28] that is kind of outweighed by the fact that this is just a lot, a lot faster. So there are
[969.88–983.84] pros and cons. It's whatever is going to work best for your use case. Now, if we
[977.28–989.88] want to implement that in code, the first thing that we want to do is define how many of those
[983.84–997.6] cells we would like. So I'm going to go with 50, using this nlist parameter. And
[989.88–1005.82] then from there, we can set up our quantizer, which is almost like another step in the process.
[997.6–1017.52] So with our index, we are still going to be measuring the L2 distance. So we still actually
[1005.82–1023.76] need that index in there. So to do that, we need to write faiss.IndexFlatL2, and we pass
[1017.52–1029.76] our dimensions again, just like we did before. And like I said, that's just a step in the
[1023.76–1035.12] process. That's not our full index. Our full index is going to look like this. So we write
[1029.76–1046.16] index. And in here, we're going to use faiss again. And this is a new index. So this is
[1035.12–1056.92] the one that is creating those partitions. So we write faiss.IndexIVFFlat. And in there,
[1046.16–1066.32] we need to pass our quantizer, the dimensions, and also the nlist.
[1056.92–1072.46] OK. Now, if you remember what I said before, in some cases we'll need to train our index.
[1066.32–1078.52] Now, this is an example of one of those times. Because we're doing the clustering and creating
[1072.46–1090.72] those Voronoi cells, we do need to train it. And we can see that because index.is_trained is false.
[1078.52–1101.04] Now, to train it, we just need to write index.train. And in here, we want to pass all
[1090.72–1107.92] of our sentence embeddings. So, sentence embeddings, like so. Let's run that. It's very quick.
[1101.04–1116.08] And then we can check index.is_trained, and we see that's true. So now our index is essentially
[1107.92–1123.32] ready to receive our data. So we do this exactly the same way as we did before. We write index
[1116.08–1130.16] .add, and we pass our sentence embeddings again. And we can check that everything is in there
[1123.32–1137.68] with index.ntotal. OK. So now we see that we have our index. It's ready. And we can
[1130.16–1144.44] begin querying it. So what I'm going to do is use the exact same query vector that we
[1137.68–1152.36] used before. I'm going to time it, so that we can see how quick this is compared to our previous
[1144.44–1167.96] query. And we're actually going to write the exact same thing we wrote before. So can I
[1152.36–1176.36] actually just copy it? So take that, bring it here. There we go. So now, let's have a
[1167.96–1184.04] look. So in total, 7.22 milliseconds. And bringing it up here, we had 57.4. Now, this is maybe a little
[1176.36–1194.84] bit slow. We'll see that the times do vary a little bit, quite randomly. But maybe that's
[1184.04–1200.44] a little bit slow. But it's probably pretty realistic. So that took 57 milliseconds; this
[1194.84–1207.44] one, 7. Now, let's have a look. So these are the indices we've got back. Let's compare them
[1200.44–1214.56] to what we had before. And I believe they're all the same. So we've just shortened the
[1207.44–1221.12] time by a lot, and we're getting the exact same results. So that's pretty good. Now,
[1214.56–1228.72] sometimes we will find that we do get different results. And a lot of the time, that's fine. But
[1221.12–1234.28] maybe if you find the results are not that great when you add this sort of index, then
[1228.72–1239.64] that just means that this search is not exhaustive enough. We are using approximate search,
[1234.28–1250] but maybe we should approximate a little bit less and be slightly more exhaustive. And
[1239.64–1256.96] we can do that by setting the nprobe value. So, nprobe, I'll explain in a minute. So let
[1250–1263.4] me actually first just run this. And we can see it will probably take slightly longer.
[1256.96–1270.8] So yeah, we get 15 milliseconds here. Of course, we get the same results again, because there
[1263.4–1280.84] were no accuracy issues here anyway. But let me just explain what that is actually doing.
[1270.8–1287.96] So in this case here, what you can see is an IVF search where we are using an nprobe
[1280.84–1295.56] value of 1. So we're just searching one cell, based on the first
[1287.96–1304.88] nearest centroid to our query vector. Now, if we increase this up to 8... let's use a smaller
[1295.56–1317.04] number in this example. So maybe we increase it to 4: our four nearest centroids. So I
[1304.88–1321.68] would say probably these: this one, this one, this one, and the one we've already highlighted.
[1317.04–1329.52] All of those would now be in scope, because our nprobe value, the number of cells that
[1321.68–1336.08] we are going to search, is 4. Now, if we increase again to, say, 6, these two cells might also
[1329.52–1346] be included. Now, of course, when we do that, we are searching more. So we might get better
[1336.08–1353.36] performance, better accuracy. But the query time is
[1346–1360.36] also going to increase, and we don't want time to increase. So there's a trade-off between
[1353.36–1373.68] those two. In our case, we don't really need to increase this, so we don't really need to
[1360.36–1383.16] worry about it. So that is the IVF index. And we have one more that I want to look at.
[1373.68–1393.48] And that is the product quantization index. So with this, we actually use IVF and then
[1383.16–1402.08] we also use product quantization. So it's probably better if I try and draw this out.
[1393.48–1409.2] So when we use product quantization, imagine we have one vector here. So this is our vector.
[1402.08–1416.8] Now, the first step in product quantization is to split this into sub-vectors. So we split
[1409.2–1425.36] this into several, and then we take them out. We pull these out, and they are now their own
[1416.8–1432.32] sort of mini-vectors. And this is just one vector that I'm visualizing here, but we would
[1425.36–1440.48] obviously do this with many, many vectors. So there would be many, many more. In our
[1432.32–1453.04] case, that's just under 15,000. Now, that means that we have a lot of these
[1440.48–1460.44] sub-vectors. And what we do with these is run them through their own clustering algorithm.
[1453.04–1467.84] So what we do is we end up getting clusters, and each of those clusters is going to have
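The splitting step can be sketched in NumPy. The sizes here are assumptions for illustration (768 dimensions split into 8 sub-vectors); in Faiss this happens inside the PQ index, and each sub-vector position then gets its own set of cluster centroids:

```python
import numpy as np

d, m = 768, 8                  # full vector size and number of sub-vectors (m must divide d)
rng = np.random.default_rng(0)
x = rng.standard_normal(d).astype("float32")   # one full vector

# Split the vector into m equal-length sub-vectors; clustering is then
# run separately over each sub-vector position across the whole dataset.
subvectors = x.reshape(m, d // m)
print(subvectors.shape)        # (8, 96)
```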