CLIP Explained | Multi-modal ML
Published: 2022-09-15 13:00:22 UTC
https://youtu.be/fGwH2YoQkDM
It makes sense that that is the sort of direction machine learning and AI may also go in.

So, to achieve this multi-modality in CLIP, we actually use two models that are trained to almost speak the same language. One of these models is a text encoder and the other is an image encoder, and both create a vector representation of whatever they are given as input. The text encoder might get a sentence, say "two dogs running across a frosty field", and then we have an image of two dogs running across a frosty field. CLIP is trained so that the text encoder consumes our sentence and outputs a vector representation that is very closely aligned to what the image encoder outputs for the image of the same concept.
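As a rough illustration of that idea, here is a minimal sketch using the Hugging Face transformers implementation of CLIP; the image file path is a made-up example, and any matching caption and image pair would do:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the ViT-B/32 version of CLIP and its paired pre-processing.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

caption = "two dogs running across a frosty field"
image = Image.open("dogs_frosty_field.jpg")  # hypothetical local file

with torch.no_grad():
    text_inputs = processor(text=[caption], return_tensors="pt", padding=True)
    image_inputs = processor(images=image, return_tensors="pt")
    text_vec = model.get_text_features(**text_inputs)     # shape (1, 512)
    image_vec = model.get_image_features(**image_inputs)  # shape (1, 512)

# For a matching caption/image pair these two vectors should be close.
similarity = torch.nn.functional.cosine_similarity(text_vec, image_vec)
print(text_vec.shape, image_vec.shape, similarity.item())
```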
Now, by training both of these models to encode these vectors into a similar vector space, we are teaching them to speak the same vector language. This is very abstract: this vector language is a 512-dimensional space, so it's very difficult for us to directly understand what is actually happening in there. But the two models do output patterns that are logical and make sense, and we can see some of this by comparing the similarity between the vectors they output.
We can see that the two vectors for dogs running across a frosty field, the text vector and the image vector, sit within a very similar region of the vector space, whereas something else, like elephants in the Serengeti, whether it is text or image, is not here with our two dogs running across the frosty field; it is somewhere over here, in a completely different part of the space. So what we can do with that is calculate the similarity between these vectors and identify which ones are similar or not similar according to CLIP.
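A sketch of that comparison, reusing the model and processor from the previous snippet; the two image paths are hypothetical stand-ins for the dogs and elephants pictures:

```python
# Reuses `model`, `processor`, `torch`, and `Image` from the previous snippet.
dogs_image = Image.open("dogs_frosty_field.jpg")        # hypothetical file
elephants_image = Image.open("elephants_serengeti.jpg")  # hypothetical file

with torch.no_grad():
    text_vec = model.get_text_features(
        **processor(text=["two dogs running across a frosty field"],
                    return_tensors="pt", padding=True)
    )
    image_vecs = model.get_image_features(
        **processor(images=[dogs_image, elephants_image], return_tensors="pt")
    )

# Cosine similarity of the caption against each image vector; the dogs image
# should score noticeably higher than the elephants image.
sims = torch.nn.functional.cosine_similarity(text_vec, image_vecs)
print(sims)
```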
From these meaningful vectors that CLIP outputs, we are able to create a content-based image retrieval system. Content-based image retrieval is basically where, using some text, or maybe even another image, we can search for images based on their content, and not just on some metadata that has been attached to them. And with CLIP, unlike other content-based image retrieval systems, the model is incredibly good at capturing the meaning across the entire image. For example, with our two dogs running across a frosty field, we might describe the background of that image without mentioning that there are two dogs in it, and if our description aligns well with what is actually in that image, we might still return the image based on that. So we're not just focusing on one thing in the image; CLIP allows us to focus on many things in the image.
An example of that: within the dataset I've been using here, there is only one single image of the food "hot dog". So I tried searching for that, and the first image returned is a dog eating a hot dog, which is pretty relevant. Of course there are no other images of hot dogs in this dataset, so the other images that are returned are quite interesting, because in some way or another they are kind of showing a "hot dog": first we have a dog looking pretty cozy in a warm room with a fire in the background, then a dog in a big woolly jumper, and another dog posing for the camera. So, weirdly enough, we got a load of "hot dog" images. Maybe it's not exactly what we meant when we said "hot dog", but a person could understand how that term and those images are related.
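A text-to-image query like the hot-dog search above can be sketched as a nearest-neighbour lookup over pre-computed image vectors. At scale you would likely use a vector database, but plain tensors are enough to show the idea, and the helper names here are just for illustration:

```python
import torch
import torch.nn.functional as F

def embed_images(images):
    """Encode a list of PIL images into L2-normalised CLIP vectors."""
    # Reuses `model` and `processor` loaded in the earlier snippet.
    with torch.no_grad():
        vecs = model.get_image_features(
            **processor(images=images, return_tensors="pt")
        )
    return F.normalize(vecs, dim=-1)

def search(query_text, image_vecs, k=3):
    """Return scores and indices of the k images most similar to the query."""
    with torch.no_grad():
        q = model.get_text_features(
            **processor(text=[query_text], return_tensors="pt", padding=True)
        )
    q = F.normalize(q, dim=-1)
    scores = (q @ image_vecs.T).squeeze(0)  # cosine similarity via dot product
    return scores.topk(k)

# index = embed_images(list_of_pil_images)   # built once, up front
# scores, indices = search("a hot dog", index)
```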
Now, we're not actually restricted to text-to-image search. When we encode our data, whether we encode text or we encode images, we are just creating vectors, so we can search across that space in any direction, with any combination of modalities. We could do a text-to-text search or an image-to-image search, we can do an image-to-text search, or we could use some text to search across both text and images at once. We can go in any direction and use any modality we want.
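Because every modality ends up as a vector in the same space, the ranking step is identical whichever encoder produced the query; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def search_vectors(query_vec, index_vecs, k=3):
    """Rank indexed vectors against a query vector from either encoder."""
    q = F.normalize(query_vec, dim=-1)
    scores = (q @ F.normalize(index_vecs, dim=-1).T).squeeze(0)
    return scores.topk(k)

# Text query:  search_vectors(model.get_text_features(**text_inputs), index)
# Image query: search_vectors(model.get_image_features(**image_inputs), index)
```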
Now let's go into a little more detail on what the architecture of CLIP actually looks like. As I mentioned, CLIP is these two models, and they are trained in parallel. One of them is the text encoder, which is just a fairly generic 12-layer transformer text encoder. On the image encoder side there are two different options I've spoken about: there is a Vision Transformer model and also a ResNet model, and they use a few different sizes for ResNet as well. Both of these encoder models output a single 512-dimensional vector.
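These numbers can be checked on the model configuration exposed by Hugging Face transformers, assuming the ViT-B/32 checkpoint loaded earlier:

```python
# Inspect the architecture described above on the loaded ViT-B/32 checkpoint.
print(model.config.text_config.num_hidden_layers)    # 12-layer text transformer
print(model.config.vision_config.num_hidden_layers)  # ViT image encoder layers
print(model.config.projection_dim)                   # 512-d shared output space
```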
The way these models are trained is kind of in the name of CLIP: CLIP stands for Contrastive Language-Image Pre-training, so the training used during pre-training is contrastive. Now, across both NLP and computer vision, large models dominate the state of the art, and the idea behind this is that just by giving a large model a huge amount of data, it can learn general patterns from what it sees and almost internalize a general rule set for that data. So these models manage to recognize general patterns in their modality: in language, they may be able to internalize the grammar rules and patterns of the English language; for vision models, that may be the general patterns you notice across different scenes and different objects.
Now, the problem with these different models, the reason they don't already fit together very well, is that they're trained separately, so by default these state-of-the-art models have no understanding of each other. That's where CLIP is different; that's what CLIP has brought to the table. With CLIP, the text and image encoders are trained while considering the context of the other modality: the text encoder is trained considering the concepts learned by the image encoder, and the image encoder does the same for the text encoder. We can almost think of this as the image and text encoders sharing an almost indirect understanding of the other modality.
Now, contrastive training works by taking an image and text pair, for example the two dogs running across a frosty field, putting those through the text encoder and the image encoder, and learning to encode them both as closely as possible. For this to work well, we also need negative pairs, something to compare against. This is a general rule in contrastive learning: you can't just have positive pairs, because then everything can be encoded into the same tiny little region of the space and you have no way to separate the pairs that are dissimilar. So we need both positive and negative pairs. We already have positive pairs, and to get negative pairs we can essentially just take all the positive pairs in our dataset and swap them around: for the pair T1 and I1, we can mix T1 with different images, so T1 with I2, I3, I4 and so on. We're basically just swapping the pairs, and we can assume those swapped pairs are probably not going to be similar, as long as our dataset is relatively large. Occasionally we might get a swapped pair that is actually similar, but as long as our dataset is large enough that this doesn't happen too frequently, it's not going to affect our training; it will be a negligible problem.
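In practice these swapped pairs come for free inside a training batch: with N caption and image pairs we can score every caption against every image, and only the diagonal entries of that N-by-N grid are true pairs. A minimal sketch with random stand-in embeddings:

```python
import torch
import torch.nn.functional as F

N, d = 8, 512
# Stand-ins for one batch of encoder outputs; in real training these would
# come from CLIP's text and image encoders.
text_embeds = F.normalize(torch.randn(N, d), dim=-1)
image_embeds = F.normalize(torch.randn(N, d), dim=-1)

# (N, N) similarity grid: entry [i, j] compares caption i with image j.
similarity = text_embeds @ image_embeds.T

# Caption i's true partner is image i, so positives sit on the diagonal and
# every off-diagonal entry is one of the "swapped" negative pairs.
targets = torch.arange(N)
positives = similarity[targets, targets]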
So, with this idea, we can use a loss function that minimizes the difference between positive pairs and maximizes the difference between negative pairs. That will look something like this: our positive pairs sit along the diagonal of the similarity matrix, and those dot products are what we want to maximize, while everything else in the matrix should be minimized. The image you see here is the pre-training step for a single batch. One interesting thing to note here: if we have a small batch, say a batch size of two, it's going to be very easy for our model to identify which items are similar and which are not, whereas if we have 64 items in our batch it will be much harder, because the model has to find more nuanced differences between them, and the odds of randomly guessing correctly are much smaller. So a larger batch size is a good thing to aim for in this contrastive pre-training approach.
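Putting that together gives the symmetric cross-entropy loss used in CLIP-style contrastive pre-training. This sketch follows the pseudocode in the CLIP paper rather than any official training code; logit_scale plays the role of the learned temperature:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(text_embeds, image_embeds, logit_scale):
    """Symmetric cross-entropy over the batch similarity matrix."""
    text_embeds = F.normalize(text_embeds, dim=-1)
    image_embeds = F.normalize(image_embeds, dim=-1)

    # Scaled (N, N) similarity matrix; diagonal entries are the positive pairs.
    logits_per_text = logit_scale * text_embeds @ image_embeds.T

    # Each caption should "classify" its own image, and vice versa.
    targets = torch.arange(text_embeds.size(0))
    loss_t = F.cross_entropy(logits_per_text, targets)
    loss_i = F.cross_entropy(logits_per_text.T, targets)
    return (loss_t + loss_i) / 2

# With a batch size of 2 this is a 2-way classification per row; with 64 it is
# a 64-way classification, which is the harder task described above.
loss = clip_contrastive_loss(torch.randn(64, 512), torch.randn(64, 512),
                             logit_scale=torch.tensor(100.0))
```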
So, with that, I think we now have a good idea of how CLIP can be used and also how it has been trained. What I really want to do now is show you how you might be able to use it as well. We're going to be using the Vision Transformer version of CLIP; remember, I said there are ResNet and Vision Transformer options for the image encoder, and we're going to use a Vision Transformer version. OpenAI have released this model through the Hugging Face library, so we can go to Hugging Face and use it directly from there, which makes it really easy for us to get started with it. So let's go ahead and do that now. For this we will need to install a few libraries: we have transformers, torch, and datasets. We need datasets to actually get a dataset, and I've prepared one...
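The setup described here might look roughly like the following; the install command goes in a terminal or notebook cell, and the dataset ID is a placeholder because the actual dataset isn't named in this part of the video:

```python
# pip install transformers torch datasets

from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Placeholder dataset ID; substitute the image dataset you want to search.
dataset = load_dataset("your-username/your-image-dataset", split="train")

# The Vision Transformer (ViT-B/32) version of CLIP released by OpenAI.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
```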