title (string, 12-112 chars) | published (string, 19-23 chars) | url (string, 28 chars) | video_id (string, 11 chars) | channel_id (string, 5 classes) | id (string, 16-31 chars) | text (string, 0-596 chars) | start (float64, 0 to 37.8k) | end (float64, 2.18 to 37.8k) |
---|---|---|---|---|---|---|---|---|
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t624.8
|
And we have all of these sentence A's and we have all these sentence B's. Now if we take one sentence
| 624.8 | 641.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t635.04
|
A, it's already matched up to one sentence B. And what we can do is say, OK, I want to randomly
| 635.04 | 652.64 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t641.12
|
sample some other sentence B's and match them up to our sentence A. So we have three more pairs now.
| 641.12 | 661.52 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t652.64
|
OK, so if we did this, if we took three sentence A's, three sentence B's, and we made new pairs
| 652.64 | 666.96 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t661.52
|
from all of them, not really random sampling, just taking all the possible pairs, we end up with
| 661.52 | 676.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t669.28
|
nine new or nine pairs in total, which is much better if you
| 669.28 | 684.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t676.32
|
extend that a little further. So from just a thousand pairs, we can end up with one million pairs.
| 676.32 | 692 |
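As a minimal sketch of that pairing step (the sentences and the sampling choice here are illustrative, not taken from the video):

```python
from itertools import product
import random

# A toy "gold" set: three sentence A's already paired with three sentence B's.
sentences_a = ["A man plays a guitar.", "A dog runs on the beach.", "She is cooking pasta."]
sentences_b = ["Someone plays an instrument.", "An animal is outdoors.", "A woman prepares dinner."]

# Taking every possible combination gives 3 x 3 = 9 pairs;
# with 1,000 of each it would give 1,000,000 pairs.
all_pairs = list(product(sentences_a, sentences_b))
print(len(all_pairs))  # 9

# Alternatively, randomly sample a couple of extra B's for each A
# instead of building the full cross product.
sampled_pairs = [(a, b) for a in sentences_a for b in random.sample(sentences_b, k=2)]
```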
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t684.96
|
So you can see quite quickly, you can take a small data set and very quickly create a big data set
| 684.96 | 699.04 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t692.0
|
with it. Now this is just one part of the problem though, because our smaller data set
| 692 | 707.92 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t699.04
|
will have similarity scores or natural language inference labels, but the new data set
| 699.04 | 714.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t707.92
|
that we've just created, the augmented data set, doesn't have any of those, just randomly sampled
| 707.92 | 719.44 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t714.16
|
new sentence pairs. So there are no scores or labels there, and we need those to actually train a model.
| 714.16 | 727.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t719.44
|
So what we can do is take a slightly different approach or add another step into here.
| 719.44 | 738.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t729.2
|
Now that other step is using something called a cross encoder. So in semantic similarity,
| 729.2 | 745.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t739.7600000000001
|
we can use two different types of models. We can use a cross encoder, which is over here,
| 739.76 | 751.52 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t745.76
|
or we can use a bi-encoder, or what I would usually call a sentence transformer.
| 745.76 | 762.56 |
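As a minimal sketch of the two model types using the sentence-transformers library (the checkpoint names are common public models, assumed here for illustration rather than taken from the video):

```python
from sentence_transformers import CrossEncoder, SentenceTransformer, util

sent_a = "A man is playing a guitar."
sent_b = "Someone plays an instrument."

# Cross encoder: both sentences are fed through the model together,
# and it outputs a similarity score directly.
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")
ce_score = cross_encoder.predict([(sent_a, sent_b)])

# Bi-encoder (sentence transformer): each sentence is encoded separately
# into a vector, then the vectors are compared with cosine similarity.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb_a, emb_b = bi_encoder.encode([sent_a, sent_b])
cos_score = util.cos_sim(emb_a, emb_b)

print(ce_score, cos_score)
```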
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t751.52
|
Now a cross encoder is the sort of old way of doing it and it works by simply putting
| 751.52 | 768.72 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t762.56
|
sentence A and sentence B into a BERT model together at once. So we have sentence A,
| 762.56 | 775.68 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t768.72
|
separator token, sentence B, feed that into a BERT model, and from that BERT model we will get all of our
| 768.72 | 780.56 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t775.6800000000001
|
embeddings, output embeddings over here and they all get fed into a linear layer,
| 775.68 | 788.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t780.5600000000001
|
which converts all of those into a similarity score up here. Now that similarity score is
| 780.56 | 794.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t788.48
|
typically going to be more accurate than a similarity score that you get from a bi-encoder or a
| 788.48 | 804.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t794.48
|
sentence transformer. But the problem here is from our sentence transformer we are outputting
| 794.48 | 813.84 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t804.48
|
sentence vectors and if we have two sentence vectors we can perform a cosine similarity or
| 804.48 | 822.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t813.84
|
a Euclidean distance calculation to get the similarity of those two vectors. And the cosine
| 813.84 | 833.2 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t822.1600000000001
|
similarity calculation or operation is much quicker than a full BERT inference step,
| 822.16 | 840.08 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t833.2
|
which is what we need with a cross encoder. So I think it is something like this:
| 833.2 | 850.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t840.08
|
for clustering maybe 10,000 sentences using a cross encoder,
| 840.08 | 858.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t850.8000000000001
|
a BERT cross encoder, it would take you something like 65 hours, whereas with a bi-encoder it's going
| 850.8 | 868.72 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t858.32
|
to take you about five seconds. So it's much much quicker. And that's why we use bi-encoders
| 858.32 | 874.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t868.72
|
or sentence transformers. Now the reason I'm talking about cross encoders is because we get
| 868.72 | 884.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t874.8000000000001
|
this more accurate similarity score which we can use as a label. And another very key thing here
| 874.8 | 891.36 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t884.48
|
is that we need less data to train a cross encoder. With a bi-encoder, I think the
| 884.48 | 899.92 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t891.36
|
SBERT model itself was trained on something like one million sentence pairs and some new
| 891.36 | 907.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t899.92
|
models are trained on a billion or more. Whereas for a cross encoder, we can train a reasonable cross
| 899.92 | 915.6 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t907.12
|
encoder on something like 5k or maybe even fewer sentence pairs. So we need much less data, and
| 907.12 | 920 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t915.6
|
that works quite well with what we've been talking about with data augmentation. We can take a small data set,
| 915.6 | 927.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t920.0
|
we can augment it to create more sentence pairs and then what we do is train on that original
| 920 | 938.08 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t927.76
|
data set which we call the gold data set. We train our cross encoder using that and then we use that
| 927.76 | 946.96 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t938.08
|
fine-tuned cross encoder to label the augmented data set without labels, and that creates an augmented,
| 938.08 | 959.52 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t946.96
|
labeled data set that we call the silver data set. So that sort of strategy of creating a silver
| 946.96 | 969.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t959.52
|
data set which we would then use to fine-tune our bi-encoder model is what we refer to as the
| 959.52 | 990.24 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t969.12
|
in-domain augmented SBERT training strategy. And this flow diagram that you can see here
| 969.12 | 1,000.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t990.24
|
is basically every step that we need to do to create an in-domain augmented SBERT training process.
| 990.24 | 1,006.8 |
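As a rough end-to-end sketch of that flow with toy data (the checkpoint names, scores, and hyperparameters below are assumptions for illustration, not the exact setup from the video):

```python
from itertools import product
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, SentenceTransformer, InputExample, losses

# 1. Gold data: a small labelled set of (sentence_a, sentence_b, score in [0, 1]).
gold = [
    ("A man plays a guitar.", "Someone plays an instrument.", 0.9),
    ("A dog runs on the beach.", "An animal is outdoors.", 0.7),
    ("She is cooking pasta.", "A rocket launches at night.", 0.1),
]

# 2. Augment: recombine the sentence A's and B's into many unlabelled pairs.
unlabeled = list(product([a for a, _, _ in gold], [b for _, b, _ in gold]))

# 3. Fine-tune a cross encoder on the gold data (it needs relatively little data).
gold_examples = [InputExample(texts=[a, b], label=score) for a, b, score in gold]
cross_encoder = CrossEncoder("bert-base-uncased", num_labels=1)
cross_encoder.fit(
    train_dataloader=DataLoader(gold_examples, shuffle=True, batch_size=16),
    epochs=1,
)

# 4. Label the unlabelled pairs with the cross encoder -> the "silver" data set.
silver_scores = cross_encoder.predict(unlabeled)
silver = [(a, b, float(s)) for (a, b), s in zip(unlabeled, silver_scores)]

# 5. Fine-tune the bi-encoder (sentence transformer) on gold + silver.
train_examples = [InputExample(texts=[a, b], label=score) for a, b, score in gold + silver]
bi_encoder = SentenceTransformer("bert-base-uncased")
loss = losses.CosineSimilarityLoss(model=bi_encoder)
bi_encoder.fit(
    train_objectives=[(DataLoader(train_examples, shuffle=True, batch_size=16), loss)],
    epochs=1,
)
```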
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1000.32
|
So we've already described most of this so we get our gold data set, the original data set.
| 1,000.32 | 1,013.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1006.8
|
That's going to be quite small let's say one to five thousand sentence pairs that are labeled.
| 1,006.8 | 1,018.64 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1013.84
|
From that we're going to use something like random sampling which I'll just
| 1,013.84 | 1,027.6 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1018.64
|
call random sample. We're going to use that to create a larger data set. Let's say we create
| 1,018.64 | 1,035.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1027.6
|
something like a hundred thousand sentence pairs but these are not labeled. We don't have any
| 1,027.6 | 1,046.56 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1035.6
|
similarity scores or natural language inference labels for these. So what we do is we take that
| 1,035.6 | 1,052.72 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1046.56
|
gold data set and we take it down here and we fine-tune a cross encoder using that gold data
| 1,046.56 | 1,060.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1052.72
|
because we need less data to train a reasonably good cross encoder. So we take that and we
| 1,052.72 | 1,066.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1060.1599999999999
|
fine-tune the cross encoder, and then we use that cross encoder alongside our unlabeled data set
| 1,060.16 | 1,075.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1066.8
|
to create a new silver data set. Now the cross encoder is going to predict the similarity scores
| 1,066.8 | 1,086.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1075.76
|
or NLI labels for every pair in that data set. So with that we have our silver data. We also have
| 1,075.76 | 1,098.08 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1086.8
|
the gold data which is up here and we actually take both those together and we fine-tune the
| 1,086.8 | 1,105.84 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1098.08
|
bi-encoder or the sentence transformer on both the gold data and the silver data. Now one thing I
| 1,098.08 | 1,113.92 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1105.84
|
would say here is it's useful to separate some of your gold data at the very start so don't even
| 1,105.84 | 1,121.12 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1113.92
|
train your cross encoder on those. It's good to separate them as your evaluation or test set
| 1,113.92 | 1,129.52 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1121.1200000000001
|
and evaluate both the cross encoder performance and also your bi-encoder performance on that
| 1,121.12 | 1,134.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1129.52
|
separate set. So don't include that in your training data for any of your models. Keep that
| 1,129.52 | 1,140.64 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1134.16
|
separate and then you can use that to figure out is this working or is it not working. So
| 1,134.16 | 1,151.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1140.64
|
that is the in-domain augmented SBERT strategy, and as you can sort of see, this is the same as what you saw before,
| 1,140.64 | 1,158.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1151.76
|
just another view; this is the training approach. So we have the gold-trained cross encoder.
| 1,151.76 | 1,164.08 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1160.16
|
We have our unlabeled pairs which have come from random sampling our gold data.
| 1,160.16 | 1,170.08 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1164.08
|
We process those with our cross encoder to create the silver data set, and then the silver and the gold
| 1,164.08 | 1,180.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1170.8799999999999
|
come over here to fine-tune a bi-encoder. So that's it for the theory and the concepts,
| 1,170.88 | 1,187.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1181.9199999999998
|
and now what I want to do is actually go through the code, and we'll work through an example
| 1,181.92 | 1,195.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1187.76
|
of how we can actually do this. Okay, so we have downloaded both the training and the validation
| 1,187.76 | 1,203.68 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1195.76
|
sets for our STSb data, and let's have a look at what some of that data looks like. So STSb
| 1,195.76 | 1,213.28 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1205.36
|
zero. So we have a sentence pair, sentence one and sentence two, just simple sentences, and we have
| 1,205.36 | 1,220.48 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1213.28
|
a label which is our similarity score. Now that similarity score varies between zero and five,
| 1,213.28 | 1,230.4 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1220.48
|
where zero is no similarity, no relation between the two sentences, and five is they mean the
| 1,220.48 | 1,234.96 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1230.3999999999999
|
same thing. Now see here these two mean the same thing as we
| 1,230.4 | 1,244.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1234.96
|
Now we can see here that these two mean the same thing as we would expect. So we first want to
| 1,234.96 | 1,251.44 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1244.16
|
modify that score a little bit because we are going to be training using cosine similarity loss
| 1,244.16 | 1,260.24 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1251.44
|
and we would expect our label to not go up to a value of five but we would expect it to go up to
| 1,251.44 | 1,270.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1260.24
|
a value of one. So all I'm doing here is changing that score so that we are dividing everything by
| 1,260.24 | 1,279.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1270.32
|
five, normalizing everything. So we do that, no problem, and now what we can do is load our
| 1,270.32 | 1,285.36 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1279.76
|
training data into a data loader. So to do that we first need to load our training data into a
| 1,279.76 | 1,294.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1285.36
|
data loader. So to do that we first form everything into an InputExample and then load that
| 1,285.36 | 1,303.6 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1294.32
|
into our PyTorch data loader. So I'll run that and then at the same time during training I also
| 1,294.32 | 1,313.2 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1303.6
|
want to output an evaluation score. So, how does the cross encoder do on the evaluation data?
| 1,303.6 | 1,325.36 |
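A minimal sketch of that data preparation step, assuming the STSb data is loaded with the Hugging Face datasets library (the dataset identifier and batch size are assumptions):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample

# Load the STSb training split (identifier assumed).
stsb_train = load_dataset("glue", "stsb", split="train")

# Each row becomes an InputExample; the 0-5 similarity label is
# normalized down to the 0-1 range by dividing by five.
train_examples = [
    InputExample(texts=[row["sentence1"], row["sentence2"]], label=row["label"] / 5.0)
    for row in stsb_train
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
```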
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1313.2
|
So to do that, here we're importing from sentence transformers cross encoder
| 1,313.2 | 1,330.8 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1325.3600000000001
|
evaluation. I'm importing the cross encoder CECorrelationEvaluator.
| 1,325.36 | 1,337.6 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1332.56
|
I again am using input examples, as we're working with the sentence transformers library,
| 1,332.56 | 1,344.32 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1337.6
|
and I am putting in both the texts and the labels.
| 1,337.6 | 1,354.72 |
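A minimal sketch of building that evaluator from the validation split (same assumed dataset identifier as above):

```python
from datasets import load_dataset
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator

# Validation pairs go into InputExamples with texts and normalized labels;
# the evaluator then reports how well predicted scores correlate with them.
stsb_dev = load_dataset("glue", "stsb", split="validation")
dev_examples = [
    InputExample(texts=[row["sentence1"], row["sentence2"]], label=row["label"] / 5.0)
    for row in stsb_dev
]
evaluator = CECorrelationEvaluator.from_input_examples(dev_examples, name="stsb-dev")
```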
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1346.7199999999998
|
And here I am putting all of that development data, or all of that validation data,
| 1,346.72 | 1,363.6 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1354.7199999999998
|
into that evaluator. Okay now I can run that and then we can move on to initializing a cross encoder
| 1,354.72 | 1,371.76 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1363.6
|
and training it and also evaluating it. So to do that we're going to import from sentence
| 1,363.6 | 1,380.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1371.76
|
transformers. So from sentence transformers and I'll just make sure I'm working in Python.
| 1,371.76 | 1,383.84 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1381.9199999999998
|
I'm going to import from cross encoder
| 1,381.92 | 1,393.44 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1383.84
|
a cross encoder. Okay and to initialize that cross encoder model I'll call it C.
| 1,383.84 | 1,400.96 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1394.6399999999999
|
All I need to do is write cross encoder very similar to when we write sentence transformer
| 1,394.64 | 1,410.16 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1400.9599999999998
|
initializer and model. We specify the model from Hugging Face transformers that we'd like to
| 1,400.96 | 1,416.88 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1410.16
|
initialize a cross encoder from. So that's a BERT base model, and also the number of labels that we'd like to
| 1,410.16 | 1,425.2 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1416.88
|
use. So in this case we are just targeting a similarity score between 0 and 1. So we just
| 1,416.88 | 1,433.92 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1425.2
|
want a single label there. If we were doing for example NLI labels where we have entailment
| 1,425.2 | 1,442.4 |
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1433.92
|
contradiction and neutral labels or some other labels we would change this to for example 3.
| 1,433.92 | 1,450.24 |
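A minimal sketch of that initialization and the training call that typically follows, continuing from the data loader and evaluator sketched above (the checkpoint name, epochs, warmup steps, and output path are assumptions):

```python
from sentence_transformers.cross_encoder import CrossEncoder

# One output label for a 0-1 similarity score; num_labels=3 would suit
# entailment / contradiction / neutral style NLI labels instead.
ce = CrossEncoder("bert-base-uncased", num_labels=1)

# Fine-tune on the gold training pairs, evaluating on the validation set as we go.
ce.fit(
    train_dataloader=train_loader,  # DataLoader of gold InputExamples from earlier
    evaluator=evaluator,            # CECorrelationEvaluator on the validation split
    epochs=1,
    warmup_steps=100,
    output_path="./cross-encoder-stsb",  # hypothetical save path
)
```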
Making The Most of Data: Augmented SBERT
|
2021-12-17 14:24:40 UTC
|
https://youtu.be/3IPCEeh4xTg
|
3IPCEeh4xTg
|
UCv83tO5cePwHMt1952IVVHw
|
3IPCEeh4xTg-t1442.4
|
But in this case 1. We can initialize our cross encoder and then from there we move on to actually
| 1,442.4 | 1,459.04 |