title: Fast intro to multi-modal ML with OpenAI's CLIP
published: 2022-08-11 13:03:08 UTC
url: https://youtu.be/989aKUVBfbk
video_id: 989aKUVBfbk
channel_id: UCv83tO5cePwHMt1952IVVHw

Transcript segments (start-end times in seconds; segments overlap slightly):
[476.5-487.5] OK. And then we can have a look at the... I'm going to show you the image.
[483.5-492.5] First we'll have a look at the shape, and one other thing as well.
[487.5-496.5] So, OK, I can show you. OK.
[492.5-500.5] OK. In here we actually have these pixel values, so we need to extract that.
[496.5-507.5] So we're going to put it here. I'm going to move those to the device as well.
[500.5-511.5] I think the device I have set up right now is actually CPU, so it doesn't make a difference for me, but it's fine.
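A minimal sketch of the device setup being described, assuming `model` is the Hugging Face CLIP model loaded earlier in the video (the variable names are illustrative, not taken verbatim from the notebook):

```python
import torch

# pick a GPU if one is available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# `model` is assumed to be the CLIP model loaded earlier in the video
model.to(device)
```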
[507.5-518.5] So let's have a look at the shape.
[511.5-526.5] OK. So you see that we have this 224 by 224 image with three color channels.
[518.5-530.5] So this is just the expected shape that will be consumed by the vision transformer of CLIP.
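A sketch of the extraction step being shown, assuming `processor` is the `CLIPProcessor` loaded earlier in the video and `pil_image` is one of the dataset's PIL images (both names are assumptions):

```python
# preprocess the image for CLIP's vision transformer
inputs = processor(images=pil_image, return_tensors="pt")

# pull the preprocessed pixel values out and move them to the device
image = inputs["pixel_values"].to(device)
print(image.shape)  # torch.Size([1, 3, 224, 224]): batch, channels, height, width
```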
[526.5-536.5] OK. And I'm going to import matplotlib.
[530.5-544.5] pyplot as plt. And I want to show you this image.
[536.5-549.5] So, this resized image. So plt.imshow, image.
[544.5-556.5] And I need to... so I need to resize it. Let me show you what I'm actually doing here.
[549.5-562.5] So image.squeeze(0). So I'm going to remove that first dimension.
[556.5-568.5] Now I'm going to transpose it, so we put the three color channels at the back.
[562.5-574.5] This is for matplotlib to be able to actually show us this.
[568.5-583.5] So I'm going to take that. I'm going to put it here.
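A rough sketch of what is happening on screen, assuming `image` is the `pixel_values` tensor from above:

```python
import matplotlib.pyplot as plt

# (1, 3, 224, 224) -> (3, 224, 224): remove the batch dimension
arr = image.squeeze(0)
# (3, 224, 224) -> (224, 224, 3): matplotlib wants the channels last;
# .T reverses all axes, which also mirrors the picture (hence "backwards and flipped").
# the processor has normalized the pixel values, so the colors will look off too
plt.imshow(arr.T)
plt.show()
```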
[574.5-588.5] OK. And you can see the minimum and maximum color values. All of the color values,
[583.5-594.5] the pixel values, are modified when we process it through the processor.
[588.5-600.5] So the colors are kind of messed up, but you can see that this is like a resized
[594.5-607.5] version of what we saw before. OK. So it's the Sony, just kind of backwards now and flipped.
[600.5-613.5] We can sort of see that it is that Sony radio. So with that, we can go ahead and get the image features.
[607.5-619.5] I think it just showed me... model.get_image_features.
[613.5-626.5] So, on an image. OK. And then let's have a look at the shape.
[619.5-631.5] Cool. OK. So similar to before, we have that 512-dimensional embedding vector.
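A minimal sketch of this step, using the `get_image_features` method the narrator names:

```python
# embed the preprocessed image with CLIP's vision transformer
image_features = model.get_image_features(pixel_values=image)
print(image_features.shape)  # torch.Size([1, 512])
```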
[626.5-641.5] OK. So that's cool. And from here we can do a lot of things.
[631.5-648.5] What I'm going to show you how to do is how to kind of search through this, or at least compare a small number of images against our prompt,
[641.5-653.5] so that we can actually see which one of those images is the most similar to "a dog in the snow".
[648.5-658.5] OK. So to do that, we're going to want to embed more of these images.
[653.5-666.5] I'm not going to embed loads of them, just 100 images. Nothing crazy.
[658.5-669.5] So we're going to import numpy as np. np.random.seed.
[666.5-678.5] So this is just so you can replicate what I am doing.
[669.5-682.5] And this will randomly generate a set of random numbers. OK.
[678.5-687.5] So the reason I'm doing this is because we want to take a sample out of the data set.
[682.5-694.5] We don't want to have the whole data set. I want it to be at least somewhat random.
[687.5-703.5] So to do that we want to go: so sample
[694.5-708.5] indices are going to be equal to np.random.randint, from zero up to the length of ImageNet.
[703.5-712.5] It's actually plus one. And we need 100 of those.
[708.5-717.5] And then we're going to convert that into a list.
[712.5-722.5] OK. I can just have a quick look at what is in there.
[717.5-733.5] OK. So, just all these numbers here. OK.
[722.5-740.5] So yeah, cool. And if we run it again, because we have that random seed set, the random set of numbers doesn't change.
[733.5-747.5] And what I'm going to do is just create a list of images using those values.
[740.5-753.5] So, for i in sample_idx. OK.
[747.5-765.5] Check. OK. So now: 100 images from our data set.
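A sketch of the sampling step, assuming `imagenet` is the dataset loaded earlier and that its image column is named `"image"`; the seed value is also an assumption. The narrator mentions a plus-one on the upper bound, but note that `np.random.randint` treats the upper bound as exclusive, so `len(imagenet)` already covers every valid index:

```python
import numpy as np

np.random.seed(0)  # any fixed seed works; it just makes the sample reproducible

# 100 random row indices into the dataset
sample_idx = np.random.randint(0, len(imagenet), 100).tolist()

# keep only the PIL image from each sampled record, not the full record
images = [imagenet[i]["image"] for i in sample_idx]
```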
[753.5-770.5] And now we want to just go ahead and literally take everything we've just done and put it into a for loop, to create the embeddings for all of these images.
[765.5-779.5] OK. So that will look something like this. I'm using tqdm here.
[770.5-787.5] This is just a progress bar, so we can see where we are. Batch size is saying how many images to perform this for in any one go.
[779.5-795.5] You can increase this if you're using a bigger GPU, or whatever else.
[787.5-804.5] The image array, I'm setting that to None for now. We initialize that in the first loop.
[795.5-813.5] OK. And then we're just doing the same thing as before. So, from this, I'm selecting a batch of images based on the batch size.
[804.5-818.5] And then we are processing and resizing the images from that batch, and we're getting the image features. Look, exactly the same thing.
[813.5-827.5] I think before I actually didn't include pixel_values, but it's the same thing.
[818.5-834.5] It's just a default argument. Converting into a NumPy array... did I show you this before? I don't actually think so.
[827.5-841.5] No, maybe not. But here, the squeeze is very similar. It's the same thing as what I showed you up here.
[834.5-856.5] So we squeeze the first dimension out of that, like we did here.
[841.5-861.5] And then we are moving that batch of embeddings to the CPU, if it's not already on the CPU, and we're detaching it from the gradient, like the training graph of PyTorch,
[856.5-871.5] the PyTorch model, e.g. CLIP. And then we're converting into a NumPy array.
[861.5-881.5] OK. And then I'm going to add that batch of embeddings to a larger array of all image embeddings. OK. And that's where the image array comes in.
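Roughly what the loop shown on screen does; the batch size and variable names here are assumptions:

```python
from tqdm.auto import tqdm

batch_size = 16   # raise this on a bigger GPU
image_arr = None  # filled in on the first pass through the loop

for i in tqdm(range(0, len(images), batch_size)):
    # select a batch of images based on the batch size
    batch = images[i : i + batch_size]
    # process and resize the batch, then embed it, exactly as before
    pixel_values = processor(images=batch, return_tensors="pt")["pixel_values"].to(device)
    batch_emb = model.get_image_features(pixel_values=pixel_values)
    # squeeze (a no-op for batches larger than one), move to CPU,
    # detach from PyTorch's gradient graph, and convert to NumPy
    batch_emb = batch_emb.squeeze(0).cpu().detach().numpy()
    # append this batch to the larger array of all image embeddings
    if image_arr is None:
        image_arr = batch_emb
    else:
        image_arr = np.concatenate([image_arr, batch_emb], axis=0)
```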
[871.5-889.5] OK. So let's run that. OK. So we come up here... I made a mistake in the code.
[881.5-901.5] So here I'm actually pulling in the full row, or record, at any one time. We don't want to do that. We want the image itself.
[889.5-908.5] OK. So, run that again. OK. And now if we check the type of images[0], we should see it's a PIL image.
[901.5-926.5] Yeah, cool. Yeah, PIL here. Now we can run this. OK. It won't take long.
[908.5-940.5] And now we have one hundred 512-dimensional image embeddings from our data set, and we can now use them to compare to our initial text embedding, and see which one of these matches most closely to that text embedding.
[926.5-945.5] OK. So I'm going to be using dot product similarity. So there's just one thing to be aware of with that, and that is that it considers both the magnitude of the vector and also the angle.
[940.5-963.5] So in this case, that can throw off our results.
[945.5-969.5] So we should normalize all of the image embeddings, so that we are not looking at the magnitude of the vectors, and we're only focusing on the angular similarity between our text embedding and these image embeddings.
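(For reference: for two vectors a and b, the dot product is a · b = ‖a‖ ‖b‖ cos θ, where θ is the angle between them. Once every embedding is scaled to unit length, ‖a‖ = ‖b‖ = 1, so the dot product reduces to cos θ, which is exactly cosine similarity.)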
[963.5-978.5] So to do that, we need to... I'll just show you quickly.
[969.5-984.5] So, look at the minimum and maximum. You know, they're kind of all over the place. So to normalize, we need to do this.
[978.5-991.5] So we do: image array divided by np.
[984.5-995.5] linalg.norm. And here we have the image array.
[991.5-1015.5] OK. axis equals 1.
[995.5-1021.5] And let me show you what that is. So we have all these numbers, and these are basically telling us, for each one of these vectors, what we should divide it by in order to bring each of them to within a set...
[1015.5-1029.5] within a set magnitude, pretty much. So,
[1021.5-1034.5] take a look at the shape: it will be 100. So yeah, we do that.
[1029.5-1041.5] So I think I need to
[1034.5-1051.5] transpose this. OK.
[1041.5-1056.5] And then, so the image array, the shape is going to be transposed now, so I'm going to transpose it again. Yeah.
[1051.5-1062.5] Image array equals image array, transposed. OK.
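A sketch of the normalization the narrator describes, including the transpose dance that makes the division broadcast:

```python
# L2 norm of each of the 100 embeddings; shape (100,)
norms = np.linalg.norm(image_arr, axis=1)

# transpose so the division broadcasts across rows, then transpose back;
# each row of image_arr now has unit length
image_arr = (image_arr.T / norms).T
```

The same thing can be written without the double transpose as `image_arr / norms[:, None]`.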
[1056.5-1068.5] Cool. And now if we have a look at the minimum and maximum...
[1062.5-1076.5] So, minimum and maximum, we get these values, which are more reasonable.
[1068.5-1086.5] OK. So now what we can do is use dot product similarity to actually compare these.
[1076.5-1094.5] So, text embedding: I'm going to take the text embedding, and similar to before, what we did is we need to move it to the CPU,
[1086.5-1100.5] detach it from the PyTorch graph, and then convert to a NumPy array. OK.
[1094.5-1109.5] Yeah. And then for the scores, all we need to do is np.dot.
[1100.5-1114.5] And we are going to put the text embedding followed by the image array. And actually, I think I need to transpose this again. So...
[1109.5-1121.5] maybe we could have avoided transposing up here.
[1114.5-1127.5] OK. Yeah. So the scores that we get here: we get a single score for every single vector.
[1121.5-1134.5] As we can see, shape 100, and they are the dot product similarity scores.
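A sketch of the scoring step, assuming `text_emb` is the "a dog in the snow" embedding created earlier in the video:

```python
# same treatment as the image embeddings: CPU, detach, NumPy
text_emb = text_emb.cpu().detach().numpy()  # shape (1, 512)

# one dot product score per image embedding
scores = np.dot(text_emb, image_arr.T)      # shape (1, 100)
```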
[1127.5-1143.5] So what we can now do is sort based on this scores array and just return, like, the top...
[1134.5-1153.5] so the top five images, and see what the top five most similar images are for our particular query.
[1143.5-1159.5] OK. So we're going to return the top k. So top k is going to be the five most similar, or the five items with the highest score.
[1153.5-1168.5] And then we want to take the index values using np.argsort.
[1159.5-1175.5] We're going to add the negative of the scores there, and just make sure we take... because scores has this here.
[1168.5-1184.5] So we're actually just taking the... let me show you: scores[0].
[1175.5-1196.5] shape. OK. So it's taking the 100 values there, and then I want to take the top k from that.
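A minimal sketch of the top-k selection being described:

```python
top_k = 5
# np.argsort sorts ascending, so negate the scores to put the highest first;
# scores has shape (1, 100), hence the [0]
idx = np.argsort(-scores[0])[:top_k]
```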
[1184.5-1201.5] OK. So what we're left with is these five index values, which are essentially indexes of the image embeddings,
[1196.5-1209.5] and therefore of the images, that are the most similar to our query.
[1201.5-1219.5] So we use matplotlib again to visualize those. So we do: for i in idx,
[1209.5-1229.5] let's print the score first. So, scores[i].
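A sketch of the visualization loop the narrator is starting here, assuming the `idx`, `scores`, and `images` variables from the sketches above:

```python
for i in idx:
    print(scores[0][i])    # the similarity score for this image
    plt.imshow(images[i])  # the original PIL image from our sample
    plt.show()
```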