CLIP Explained | Multi-modal ML
2022-09-15 13:00:22 UTC
https://youtu.be/fGwH2YoQkDM
especially for this. So we have this image-text dataset, and there isn't much in here: it's just 21 image-text pairs, and we can see what they look like. For example, we have the text "aerial shot of a futuristic city with a large motorway". I tried to describe that image as best I could, and that's what I got. As you just saw, there are 21 of these image-text pairs in there, so let's go ahead and download the dataset.
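As a rough sketch of that download step only — the transcript never names the dataset, so the repository ID and column names below are placeholders, not the ones actually used in the video:

```python
from datasets import load_dataset

# Hypothetical repo ID; swap in the dataset actually shown in the video.
data = load_dataset("your-username/image-text-demo", split="train")

text = list(data["text"])     # assumed column name for the 21 captions
images = list(data["image"])  # assumed column name for the 21 PIL images
```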
Next we initialize CLIP for our use. The model ID on Hugging Face is this one here; if we go to huggingface.co we can type it in and find the model there. This is the model we're using, from OpenAI, and with it we use two things: a processor and a model. The model is CLIP itself, and the processor is almost like a pre-processor for both our text and our images.
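A minimal sketch of that initialization, assuming the `openai/clip-vit-base-patch32` checkpoint (the exact model ID is read off-screen in the video):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed checkpoint

model = CLIPModel.from_pretrained(model_id)          # the CLIP model itself
processor = CLIPProcessor.from_pretrained(model_id)  # pre-processor for text and images
```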
One thing we would do here: if we have a CUDA device available, we can move our model to the CUDA device. At the moment, if you try to do this with MPS — so if you're on a Mac with Apple silicon — there are some operations inside CLIP that don't work on MPS yet, so I would stick with CPU. We're only doing inference, so it's still reasonably fast.
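A small sketch of that device check, sticking to CUDA-or-CPU as suggested (MPS deliberately left out):

```python
# Prefer a CUDA GPU when available, otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```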
Now, as I was mentioning, the processor is what handles both the text and image preparation that needs to happen before we feed them into the actual encoder models that make up CLIP. For text, it works like a normal text tokenizer: the same kind of tokenizer used for text transformer models, which translates our human-readable text into transformer-readable token IDs.
So we pass the text here, and we make sure we say there are no images included, because if the processor is given both images and text it can process them at the same time. We could do that here as well, but I want to show you each step separately so you can see what it is actually doing. We need to set padding to true, and that is because different sentences can have different lengths: "aerial shot of a futuristic city" and "aerial shot of a city" have different lengths, and a transformer model needs to see the same length for every input within a single batch. So what the processor is going to do is add what are called padding tokens, appending a few of them to each sequence up to the length of the longest sequence within that batch of text items — and in here we have those 21 sentences. That's all we're doing there.
Then we are returning those as PyTorch tensors, and finally just moving them to whichever device we're using. I'm using CPU here, so it's not actually necessary to do this, but I'm doing it in case you do the same on a CUDA-enabled device.
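Put together, that text pre-processing step looks roughly like this, assuming `text` is the list of 21 captions from the dataset:

```python
# Pad every caption to the longest in the batch, return PyTorch tensors,
# and move them to whichever device we're using.
tokens = processor(
    text=text,
    images=None,
    padding=True,
    return_tensors="pt",
).to(device)
```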
From there we have these input IDs and an attention mask, so let's have a quick look at what those are. If we go into tokens and look at the input IDs, you see we get all these integer values, and a lot of them have this 49407 at the end: those are the padding tokens. They are not represented as strings but as these integer IDs, and we know they're the padding tokens because they appear several times at the end of each sequence, and none of the sequences I fed in ended with the same repeated words. We also see there's an initialization-of-sequence token at the start, and everything in between is a token that represents a word, or part of a word, from our original text. So that's the input IDs.
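A quick way to confirm which integer IDs are the special tokens — the values in the comments are what the CLIP tokenizer bundled with transformers reports, but check your own processor rather than taking them on trust:

```python
print(tokens["input_ids"][0])            # one caption as integer token IDs
print(processor.tokenizer.bos_token_id)  # start-of-sequence token (49406 here)
print(processor.tokenizer.pad_token_id)  # padding token (49407, shared with end-of-sequence)
```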
The attention mask, as you can see here, is just ones and zeros. The ones represent real tokens — words that were actually in our text inputs — and the zeros represent where the processor has added padding tokens. It is used by the internal mechanisms of the text transformer to know which tokens to pay attention to and which to ignore, because we don't want to focus on those padding tokens: they're meaningless, they're just there to make sure we have same-size inputs going into our transformer model. That's all that is.
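And the matching check for the mask, just to see the ones-and-zeros pattern line up with the padding:

```python
print(tokens["attention_mask"].shape)  # (21, length of the longest tokenized caption)
print(tokens["attention_mask"][0])     # 1 = real token, 0 = padding
```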
After we have our tokens, what we do is use CLIP to encode all of them with get_text_features. We pass in our tokens — I've got a move-to-device call here twice, but I already moved them to the device, so we can remove the second one. And what do we get? We get 21 vectors, which makes sense for 21 text inputs, each 512-dimensional. These are our text embeddings, one for each of the sentences we just passed in.
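A sketch of that encoding step, with the duplicate device move dropped as described:

```python
# Encode all 21 captions into CLIP's shared embedding space.
text_emb = model.get_text_features(**tokens)
print(text_emb.shape)                  # torch.Size([21, 512])
print(text_emb.min(), text_emb.max())  # values well outside [-1, 1]: not normalized
```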
One other thing I wanted to point out here is the min and max values: they're pretty big, so these embeddings are clearly not normalized. Whether that matters depends on what you're doing. If you want to compare these vectors, you need to make sure you're not using a similarity metric that considers the magnitude of your vectors — you want to consider only the angle. You can do that with cosine similarity, or the alternative is to normalize the vectors, and then you can also use dot-product similarity.
To normalize, if you wanted to use dot-product similarity, you would do this: here we're just detaching our text embeddings from the PyTorch graph, moving them to CPU if needed (we don't actually need to here, but do it anyway), and converting them into a NumPy array. Then we calculate the value that we will normalize each vector by: for each vector we compute a single number, its norm, and that number is what we divide the vector by. After that you can see the minimum and maximum are minus 0.15 and plus 0.53, so neither goes beyond minus one or plus one.
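A minimal sketch of that normalization, dividing each embedding by its own L2 norm (the exact NumPy calls in the notebook may differ):

```python
import numpy as np

# Detach from the autograd graph, move to CPU, and convert to NumPy.
text_emb = text_emb.detach().cpu().numpy()

# One number per vector (its L2 norm), then divide each vector by it.
text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

print(text_emb.min(), text_emb.max())  # now everything sits inside [-1, 1]
```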
Now, when it comes to encoding images, we do the same thing, or a very similar thing. Images are also pre-processed using the processor, as we did with our text, we just use slightly different parameters. The reason we're processing these images is that CLIP expects a certain image size when we feed images into it, and it expects the pixel values to be normalized as well. By default, RGB pixel values range from zero to 255, so we need to normalize them, and we also need to resize the images. You can see that here: the first image is a pretty big image — this is its width and height.
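For reference, assuming the images are the PIL images loaded earlier, you can check that original size like so:

```python
print(images[0].size)  # (width, height) of the first, still full-size, image
```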
Here we take all the images and process them, making sure we say text is None; that means the processor will only output one tensor, the pixel values tensor, so we just extract that straight out and also move it to the hardware device we set, in this case just CPU. Now let's have a look at these images: we can see we have this tensor with three color channels — the RGB channels — and a height and width of 224, so each image has been squeezed into a smaller size, and there are 21 of them because we fed in all of our images. So this is how we use the processor; it's just resizing and normalizing our images ready for the vision transformer encoder of CLIP.
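That image pre-processing step, sketched under the same assumptions (`images` is the list of 21 PIL images):

```python
# Resize to 224x224 and normalize pixel values; text=None means we only get pixel_values back.
image_batch = processor(
    text=None,
    images=images,
    return_tensors="pt",
)["pixel_values"].to(device)

print(image_batch.shape)  # torch.Size([21, 3, 224, 224])
```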
Very similar to before: where we used get_text_features, now we're going to use get_image_features, and we pass in those processed images. Again, as you might expect, the resulting embeddings are not normalized — you can see that here — and, as we'd also expect, they have the same dimensionality as our text embeddings, which means we can compare them. But before comparing them, as before, we should normalize them; it's the same process again, and we can see the values have changed.
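And the equivalent encoding step for images, mirroring the text side:

```python
# Encode all 21 images, then L2-normalize exactly as we did for the text embeddings.
img_emb = model.get_image_features(image_batch)
print(img_emb.shape)  # torch.Size([21, 512])

img_emb = img_emb.detach().cpu().numpy()
img_emb = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
```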
What we now want to do is calculate the similarity between all of our image embeddings and all of our text embeddings. We can do that in a few different ways: cosine similarity or dot-product similarity. The reason we can use dot-product similarity is that we normalized the embeddings, but I'm going to show you how to do both, so that if you don't normalize you can just use cosine similarity. Cosine similarity is actually just a dot product between the text embeddings and the image embeddings in the numerator, with the product of the norms of both vectors in the denominator — that's really all it is, so it's pretty simple. If we plot those similarity scores we get this.
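A sketch of both options: with the embeddings already normalized, the dot product alone gives the same scores as full cosine similarity (the plotting call is just one way to show the matrix):

```python
import matplotlib.pyplot as plt

# Dot-product similarity (valid here because both sets of embeddings are normalized).
dot_sim = np.dot(text_emb, img_emb.T)

# Cosine similarity: dot product over the product of the norms, so it also works
# on un-normalized embeddings (here it simply reproduces dot_sim).
cos_sim = np.dot(text_emb, img_emb.T) / (
    np.linalg.norm(text_emb, axis=1, keepdims=True)
    * np.linalg.norm(img_emb, axis=1, keepdims=True).T
)

# 21x21 grid of scores: rows are captions, columns are images.
plt.imshow(cos_sim)
plt.show()
```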
Along this diagonal we'd expect the highest similarity values, since those represent the true pairs between the images and the text. Now, we have some that are not quite there, like here, and there is this image-text pair which comes out as even more similar.