Dataset schema: title (string, 12–112 chars), published (string, 19–23 chars), url (string, 28 chars), video_id (string, 11 chars), channel_id (string, 5 classes), id (string, 16–31 chars), text (string, 0–596 chars), start (float64, 0–37.8k), end (float64, 2.18–37.8k).
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1310.6
Okay, let's save this and
1,310.6
1,317.96
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1312.6
And try again. Okay, so I'm going to copy this over into Hugging Face, come here
1,312.6
1,320.68
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1318.6799999999998
Uh, not here, here
1,318.68
1,322.76
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1320.9199999999998
edit
1,320.92
1,327.8
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1322.76
And come here, select all, paste, and I am going to commit those changes
1,322.76
1,333.32
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1329.08
Now let's have a look at what happens if we load the dataset, so
1,329.08
1,336.12
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1334.12
Come back over here, test dataset
1,334.12
1,338.44
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1336.4399999999998
Uh, let's run this
1,336.44
1,340.52
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1338.76
Let's see what happens
1,338.76
1,345
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1340.52
Okay, it loaded... well, it loaded correctly. That's a good sign. Come down here
1,340.52
1,351.72
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1345.72
And now we can see that these are no longer strings, but they're actually floating point numbers. Okay, so
1,345.72
1,354.76
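(As a reference sketch, loading the dataset at this point would look roughly like the following; the repo id is a hypothetical stand-in for wherever you committed the builder script.)

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset repo you committed
# the builder script to on the Hugging Face hub.
data = load_dataset("your-username/your-dataset", split="train")

# The numeric columns should now load as floats rather than strings,
# because the builder script declares float features.
print(data.features)
print(data[0])
```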
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1353.32
That is
1,353.32
1,359.96
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1354.76
That's everything. There are maybe a few aesthetic things to change here. So the
1,354.76
1,362.6
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1360.68
Like the citation
1,360.68
1,367.24
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1362.6
We'll change that up here. I can change this as well, but we're not going to go through that in this
1,362.6
1,373.74
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1367.24
Uh, in this video, but I don't think you want to watch me change citations. So yeah, that's everything
1,367.24
1,376.2
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1374.76
For this video. In the next video
1,374.76
1,380.52
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1376.2
What we're going to do is take a look at taking this a little bit further
1,376.2
1,385.64
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1381.0
And adding more advanced data types, like images, into our datasets
1,381
1,387.48
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1386.28
so
1,386.28
1,389.96
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1387.48
Until then I hope this has been useful
1,387.48
1,396.86
Hugging Face Datasets #2 - Dataset Builder Scripts
2022-09-23 14:45:22 UTC
https://youtu.be/ODdKC30dT8c
ODdKC30dT8c
UCv83tO5cePwHMt1952IVVHw
ODdKC30dT8c-t1389.96
1,389.96
1,396.86
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t0.0
In this video we're going to have a quick introduction to OpenAI's CLIP and how we can use it to almost move between the modalities of both language and images.
0
19.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t15.0
Now, before we dive in, let's just quickly understand what CLIP is.
15
23
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t19.5
So it consists of two big models.
19.5
29.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t23.0
In this implementation we're going to be using a vision transformer that will embed images.
23
35
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t29.5
And we're going to use a normal text transformer that will embed text.
29.5
51
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t35.0
During pre-training, OpenAI trained the model on pairs of images and text, and it trained them both to output embedding vectors that are as close as possible to each other.
35
68
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t51.0
So the text transformer was trained to output a single 512-dimensional embedding that was as close as possible to the vision transformer's image embedding for the image-text pair.
51
79.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t68.0
So what that means is that CLIP is able to take both images and text and embed them both into a similar vector space.
68
81.5
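(A conceptual sketch, not OpenAI's training code: because both encoders map into the same 512-dimensional space, a text embedding and an image embedding can be compared directly, for example with cosine similarity. The tensors below are random placeholders for the encoders' outputs.)

```python
import torch

# Placeholders standing in for CLIP's text and vision encoder outputs,
# each a (1, 512) tensor.
text_emb = torch.randn(1, 512)
image_emb = torch.randn(1, 512)

# After CLIP's contrastive pre-training, cosine similarity is high
# for matching image-text pairs and low for mismatched ones.
sim = torch.nn.functional.cosine_similarity(text_emb, image_emb)
print(sim)
```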
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t79.5
And with that we can do a lot of things.
79.5
84.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t81.5
You can do image and text classification.
81.5
88.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t84.5
You can do image and text search and a huge number of things.
84.5
93.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t88.5
Anything to do with images and text, there's a good chance we can do it with CLIP.
88.5
98
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t93.5
So let's have a look at how we actually use CLIP.
93.5
102
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t98.0
OpenAI released a GitHub repository, openai/CLIP, here.
98
107.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t102.0
This contains CLIP, but we're not going to use this implementation.
102
111
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t107.5
We're actually going to use this implementation of CLIP.
107.5
113
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t111.0
So this is on Hugging Face.
111
118
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t113.0
So we're going to be using Hugging Face transformers and this is still from OpenAI.
113
119
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t118.0
It's still clip.
118
134
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t119.0
It's just an easy-to-use implementation of it through the Hugging Face transformers library, which is a more standard library for doing anything with NLP, and now also computer vision and some other things as well.
119
137.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t134.0
So to get started I'd recommend you install these libraries.
134
145.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t137.5
To install Torch you should probably go through the PyTorch.org instructions rather than following this here.
137.5
157.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t145.5
So go to pytorch.org and just install PyTorch using the specific install command they give for your platform or your OS from here.
145.5
160.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t157.5
And then pip install transformers and datasets.
157.5
166
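(The exact install commands aren't shown in the transcript; as a sketch, install torch via the platform-specific command from pytorch.org, then transformers and datasets via pip, and sanity-check the imports like this.)

```python
# Install sketch: get torch from the pytorch.org command for your
# platform, then `pip install transformers datasets`.
import torch
import transformers
import datasets

print(torch.__version__, transformers.__version__, datasets.__version__)
```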
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t160.5
You can still just use this command, but I'd recommend installing PyTorch from here instead.
160.5
171
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t166.0
Now after that we're going to need our dataset.
166
175.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t171.0
So this is just a very simple dataset.
171
183.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t175.5
It contains, I think, just under 10,000 images, and we only care about the images here.
175.5
191.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t183.5
So if we have a look, we have ImageNet; we'll go to the first item and we'll just have a look at the image.
183.5
196.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t191.5
And we have this Sony radio and we have other things as well.
191.5
200
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t196.5
So if we go ImageNet.
196.5
204
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t200.0
It's 6494.
200
206.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t204.0
There's another image here of a dog.
204
214.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t206.5
OK, just to point out that we have a lot of images in the dataset, covering a range of things.
206.5
220.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t214.5
There's not a huge number of different categories here, but they have dogs, they have radios, and a few other things.
214.5
224.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t220.5
Now I'm just going to go ahead and initialize everything.
220.5
227.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t224.5
So there's a few things here.
224.5
231.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t227.5
From transformers, we're importing the CLIP tokenizer.
227.5
240.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t231.5
So the tokenizer is what's going to handle the pre-processing of our text into token ID tensors and other tensors.
231.5
246.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t240.5
We have the CLIP processor, which is like the tokenizer but for images.
240.5
259.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t246.5
So this is actually just going to resize our images into the size that CLIP expects and also modify the pixel values as well.
246.5
263.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t259.5
And then we have the CLIP model. The CLIP model is CLIP itself.
259.5
274.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t263.5
OK, so if you have CUDA, or MPS if you're on an M1 Mac, you just set that with this.
263.5
279.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t274.5
OK and then we're ready to actually initialize all of this.
274.5
285.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t279.5
So the model ID is going to be what we saw before.
279.5
294.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t285.5
So you come over here, we have the tokenizer, openai/clip-vit-base-patch32; copy that.
285.5
297.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t294.5
And here we go. OK.
294.5
301.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t297.5
And now we just need to... look, I'm being told what to do already.
297.5
305.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t301.5
OK, so model is CLIPModel.from_pretrained(model_id).
301.5
308.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t305.5
I'm going to... I don't normally set device like that.
305.5
313.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t308.5
I don't know if you can. I am going to do it like this.
308.5
319.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t313.5
OK and tokenizer. OK good job. And processor. Cool.
313.5
325.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t319.5
Almost there. It's from_pretrained.
319.5
333.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t325.5
OK, and it got a little bit confused. So, model ID.
325.5
337.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t333.5
OK that looks good. Let's run that.
333.5
345.5
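(Put together, the initialization described above looks roughly like this.)

```python
import torch
from transformers import CLIPTokenizer, CLIPProcessor, CLIPModel

# Use CUDA if available, MPS on Apple silicon, otherwise CPU.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)

# The model id shown on the Hugging Face hub.
model_id = "openai/clip-vit-base-patch32"

model = CLIPModel.from_pretrained(model_id).to(device)
tokenizer = CLIPTokenizer.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
```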
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t337.5
OK, cool. So now what we're going to do is take a look at how we actually create the text embeddings through CLIP.
337.5
350.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t345.5
So we start with a prompt. I'm going to go with a dog in the snow.
345.5
356.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t350.5
There's not many pictures of dogs in the snow in the dataset but there are some.
350.5
359.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t356.5
And what we need to do is tokenize the prompt.
356.5
363.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t359.5
Yeah that's true. OK. I'm not going to do it like that.
359.5
371.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t363.5
We're going to go with tokenizing the prompt, and we need to return tensors using PyTorch.
363.5
375.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t371.5
So we're going to be using PyTorch behind the scenes here.
371.5
383.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t375.5
So make sure we do that. And let's just have a look at what is actually in inputs.
375.5
391.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t383.5
OK, so we get this input IDs tensor; you'll recognize this if you've used Hugging Face transformers before.
383.5
397.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t391.5
These are just the token IDs that represent the words from the prompt.
391.5
402.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t397.5
OK. And this is the attention mask.
397.5
410.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t402.5
Now, for us it's all going to be ones, but if we had padding in here, anything beyond the length of our prompt would become a zero.
402.5
415.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t410.5
Telling the model not to pay attention to that part of the prompt.
410.5
429.5
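(A small illustrative example of that: with two prompts of different lengths and padding enabled, the shorter prompt's attention mask is zeroed past its real tokens.)

```python
batch = tokenizer(
    ["a dog in the snow", "a dog"],
    padding=True,
    return_tensors="pt",
)
# The second row's attention_mask turns to zeros where padding begins,
# telling the model to ignore those positions.
print(batch["attention_mask"])
```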
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t415.5
And from there we can process this through CLIP, so we do model.get_text_features, I think.
415.5
437.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t429.5
And we pass in those inputs. OK. And let's have a look at the shape of that.
429.5
442.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t437.5
OK, so we have a 512-dimensional vector.
437.5
447.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t442.5
OK. So that's the text embedding side of things.
442.5
451.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t447.5
Now we need to go ahead and do the image embedding side of things.
447.5
456.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t451.5
OK. So we're going to resize the image first with the processor.
451.5
462.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t456.5
We're not adding any text in here, but you can also process text through this processor.
456.5
466.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t462.5
I'm just keeping it separate because it makes more sense to me.
462.5
472.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t466.5
The image should be images actually.
466.5
476.5
Fast intro to multi-modal ML with OpenAI's CLIP
2022-08-11 13:03:08 UTC
https://youtu.be/989aKUVBfbk
989aKUVBfbk
UCv83tO5cePwHMt1952IVVHw
989aKUVBfbk-t472.5
Again, we want to return tensors using PyTorch.
472.5
483.5
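(The image side as a sketch, using the processor and dataset from above; 224x224 is the input size this CLIP checkpoint expects.)

```python
# The processor resizes the image and normalizes its pixel values,
# returning PyTorch tensors ready for CLIP's vision transformer.
image = processor(images=imagenet[0]["image"], return_tensors="pt")
pixel_values = image["pixel_values"].to(device)
print(pixel_values.shape)  # torch.Size([1, 3, 224, 224])
```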