Making The Most of Data: Augmented SBERT
Published: 2021-12-17 14:24:40 UTC
URL: https://youtu.be/3IPCEeh4xTg (video ID: 3IPCEeh4xTg, channel ID: UCv83tO5cePwHMt1952IVVHw)

Transcript (segment start-end times in seconds shown in brackets):

[1450.24-1482.64] ...training. So we call `ce.fit` on our cross encoder model, and we want to specify the data loader. This is slightly different to the `fit` function we usually use with sentence transformers: here we want `train_dataloader`, and we specify the loader that we initialized just up here.

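As a minimal sketch of what that loader setup might look like, assuming the gold STSB pairs have been wrapped into `InputExample` objects (the `train_samples` name and the example pairs are hypothetical, not from this excerpt):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample

# Hypothetical training samples built from the gold STSB pairs.
train_samples = [
    InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=1.0),
    InputExample(texts=["A man is playing a flute.", "A man is eating food."], label=0.1),
]

# A plain PyTorch DataLoader is all the cross encoder's fit() needs;
# it attaches its own collate function internally.
loader = DataLoader(train_samples, shuffle=True, batch_size=16)
```
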
[1474.88-1500.40] An evaluator isn't strictly required, but if you are going to evaluate your model during training you'll also want to add one in. This one is the `CECorrelationEvaluator`; make sure you're using a cross-encoder evaluation class here.

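A sketch of that evaluator setup; `dev_samples` is a hypothetical held-out list of labeled `InputExample`s, not shown in this excerpt:

```python
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder.evaluation import CECorrelationEvaluator

# Hypothetical held-out pairs with gold similarity scores.
dev_samples = [
    InputExample(texts=["A dog runs.", "A dog is running."], label=0.9),
    InputExample(texts=["A dog runs.", "A man cooks."], label=0.0),
]

# Measures the correlation between predicted and gold scores during training.
evaluator = CECorrelationEvaluator.from_input_examples(dev_samples, name="sts-dev")
```
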
[1490.96-1567.76] We would like to run for, say, one epoch, and we should define this because, while we're training, I would also like to include some warm-up steps as well; quite a lot of warm-up steps, actually, though I'll talk about why in a moment. So I would say the number of epochs is equal to one, and for the warm-up I would like to take the integer of the length of the loader, so the number of batches we have in our dataset, multiplied by 0.4. So we're going to do warm-up steps for 40 percent of our total number of batches. We also need to multiply that by the number of epochs: if we were training for two epochs we would multiply by two, but in this case it's just one, so it's not necessary, but it's there.

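In code, that warm-up calculation is just the following (using the transcript's 0.4 ratio):

```python
num_epochs = 1

# Warm up for 40% of all training steps:
# (batches per epoch) * 0.4 * (number of epochs).
warmup_steps = int(len(loader) * 0.4 * num_epochs)
```
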
[1560.16-1590.08] So we're actually performing warm-up for 40 percent of the training steps, and I found this works better than something like 10, 15, or 20 percent. That being said, I think you could also achieve a similar result by just decreasing the learning rate of your model.

[1584.24-1625.76] So if I write in the epochs here, we'll define the warm-up steps. By default, this will use optimizer params with a learning rate of 2e-5. OK, so if you want to decrease that a little bit, you could go to, let's say, 5e-6.

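If you did want to lower the learning rate instead, `fit` accepts an `optimizer_params` dict; a sketch of that variant:

```python
# fit() defaults to optimizer_params={"lr": 2e-5}; passing a smaller
# learning rate can stand in for a long warm-up.
ce.fit(
    train_dataloader=loader,
    optimizer_params={"lr": 5e-6},
    # ...remaining arguments as in the full call further below...
)
```
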
[1620.80-1646.96] That would probably have a similar effect to having such a significant number of warm-up steps, and in that case you could decrease the warm-up to 10 percent or so. But the way I've tested this, I've ended up going with 40 percent warm-up steps, and that works quite well.

[1640.16-1666.64] The final step here is where we want to save our model. So I'm going to say I want to save it into bert-base-cross-encoder, or rather, let's say, bert-stsb-cross-encoder.

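Putting the pieces from this section together, the training call looks roughly like this. The `CrossEncoder` initialization with `bert-base-uncased` and `num_labels=1` is an assumption carried over from earlier in the video, not shown in this excerpt:

```python
from sentence_transformers.cross_encoder import CrossEncoder

# Assumed from earlier in the video: a BERT-base cross encoder with a
# single regression output for STSB-style similarity scores.
ce = CrossEncoder("bert-base-uncased", num_labels=1)

ce.fit(
    train_dataloader=loader,
    evaluator=evaluator,
    epochs=num_epochs,
    warmup_steps=warmup_steps,
    output_path="bert-stsb-cross-encoder",  # where the trained model is saved
)
```
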
[1660.48-1702.16] We can run that, and it will run everything for us. I'll just make sure it's actually running... yep, there we go. So you can see it's running, but I'm not going to run it in full because I've already done it; let me pause that and move on to the next step. OK, so we now have our gold dataset, which we pulled from Hugging Face Datasets, and we've just fine-tuned a cross encoder, so let's cross both of those off here.

[1694.88-1744.16] This and this. Now, before we actually go on to predicting labels with the cross encoder, we need to create that unlabeled dataset, so let's do that through random sampling using the gold dataset we already have; then we can move on to the next steps. OK, I'll just add a little bit of separation in here. So now we're going to go ahead and create the augmented data, and as I said, we're going to be using random sampling for that.

[1737.36-1814.72] I find the easiest way to do that is to use a Pandas DataFrame rather than the Hugging Face Dataset object we currently have, so I'm going to go ahead and initialize that. We have our gold data, which will be a `pd.DataFrame`. In here we're going to have sentence one and sentence two: sentence one is going to be equal to the stsb sentence one, and sentence two is going to be the stsb sentence two. Now, we may also want to include our label in there. I wouldn't say it's really necessary, but let's add it in; our label is just the stsb label.

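A sketch of that step, assuming `stsb` is the Hugging Face dataset split loaded earlier in the video:

```python
import pandas as pd

# Reformat the Hugging Face dataset into a plain DataFrame.
gold = pd.DataFrame({
    "sentence1": stsb["sentence1"],
    "sentence2": stsb["sentence2"],
    "label": stsb["label"],  # optional, but easy to carry along
})
```
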
[1804.48-1845.60] If I have a look here... I'm going to overwrite anything already called gold, and then have a look at it as well, so we can see a few examples of what we're actually working with. I'll just go ahead and rerun these cells too. OK, so there we have our gold data.

[1839.60-1858.88] Now, because we've reformatted that into a DataFrame, we can use the `sample` method to randomly sample different sentences. To do that, I'll want to create a new DataFrame.

[1853.84-1888.08] This is going to be our unlabeled, not-yet-silver dataset; it's not a silver dataset yet because we don't have the labels or scores, but this is where we will put them. In here we again have sentence one and also sentence two, but at the moment they're empty; there's nothing in there yet.

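A sketch of that empty frame:

```python
# Empty frame that will hold the unlabeled (eventually silver) pairs.
pairs = pd.DataFrame({"sentence1": [], "sentence2": []})
```
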
[1880.88-1944.88] So what we need to do is iterate through all of the rows in here. Before that, I'm just going to do `from tqdm.auto import tqdm`; that's just a progress bar so we can see where we are, because I don't really like waiting with no idea how long something is taking to process. Then, for each sentence one in tqdm (so we have the progress bar), I want to take a list of a set: we're taking all the unique values in the gold DataFrame for sentence one. That will just loop through every single unique sentence one item in there.

[1937.20-2045.12] I'm going to use each of those and randomly sample five sentences from the other column, sentence two, to be paired with that sentence one. The sentence two phrases we sample will of course come from the gold data, and we only want to sample from rows where sentence one is not equal to the current sentence one, because otherwise we could introduce duplicates. We're going to remove duplicates anyway, but let's just exclude them from the sampling in the first place. So we take all of the gold dataset where sentence one is not equal to the current sentence one, sample five of those rows, extract the five sentence two phrases from them, and convert those into a list. Then, for each sentence two in the sampled list we just created, I'm going to append a new pair: sentence one set to the current sentence one, and sentence two set to the sampled sentence two. Now, this will take a little while.

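A sketch of the whole sampling loop. One caveat: the 2021-era pandas in the video could append to a DataFrame row by row with `DataFrame.append`, which has since been removed, so this version gathers the pairs in a list and then replaces the empty `pairs` frame in one go:

```python
from tqdm.auto import tqdm  # progress bar so we can see how far along we are

new_pairs = []
# Loop through every unique sentence1 in the gold data.
for sentence1 in tqdm(list(set(gold["sentence1"]))):
    # Sample 5 sentence2 values from rows whose sentence1 differs from the
    # current one, so we don't re-create existing gold pairs.
    sampled = gold[gold["sentence1"] != sentence1]["sentence2"].sample(5).tolist()
    for sentence2 in sampled:
        new_pairs.append({"sentence1": sentence1, "sentence2": sentence2})

# Replace the empty pairs frame in one go.
pairs = pd.DataFrame(new_pairs)
```
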
[2038.24-2087.12] So what I'm going to do is maybe not include the full dataset here. Let me just go with, say, the first 500 and see how long that takes. I'll also want to have a look at what we get from that. So yes, it's much quicker. So we have sentence one there; let me remove that from the cell.

[2079.84-2120.64] And let's just look at the top 10. Because we're taking five samples for every sentence one and randomly sampling them, we can see that we have a few repeats of each. Another thing we might want to do is remove any duplicates. There probably aren't any duplicates here, but we can check: pairs equals `pairs.drop_duplicates`, and then we'll check the length of pairs again.

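That check is just:

```python
print(len(pairs))
pairs = pairs.drop_duplicates()
print(len(pairs))  # unchanged length means there were no duplicates
```
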
[2108.96-2140.64] And also print the length; let me run this again with the print. OK, so there weren't any duplicates anyway, but it's a good idea to add that in just in case.

[2132.24-2168.80] Now what I want to do is take the cross encoder. In fact, let's first go back to our little flowchart. We have now created our larger unlabeled dataset, which is good, and now we move on to predicting labels with our cross encoder.

[2159.28-2205.28] So down here, I'm going to take the cross encoder code from before. I've actually trained this already and uploaded it to the Hugging Face model hub, so what you can do, and what I can do, is this: I'm going to write jamescalam, and the model is called bert-stsb-cross-encoder. OK, so that's our cross encoder.

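Loading that pretrained cross encoder from the hub would look like this; the `jamescalam/bert-stsb-cross-encoder` model ID is my reading of the transcript's "James Callum ... BERT STSB cross encoder":

```python
from sentence_transformers.cross_encoder import CrossEncoder

# Load the already fine-tuned cross encoder from the Hugging Face hub.
ce = CrossEncoder("jamescalam/bert-stsb-cross-encoder")
```
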
[2197.60-2257.04] Now I want to use that cross encoder to create our labels, which will give us our silver dataset. To do that, I'm going to call it silver for now; this isn't really the silver dataset yet, but that's fine. I'm going to create a list, zipping both of the columns from our pairs: pairs sentence one and pairs sentence two. That will give us all of our pairs again, and you can take a look at those.

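That zip is simply:

```python
# Zip the two columns into a list of (sentence1, sentence2) pairs.
silver = list(zip(pairs["sentence1"], pairs["sentence2"]))
```
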
[2249.76-2291.28] OK, so it looks just like this. What we want to do now is create our scores. So we take the cross encoder (what did we load it as? `ce`) and call `ce.predict`, passing in that silver data. Let's run it; it might take a moment. OK, it's definitely taking a moment, so let me pause it and just run, let's say, 10 of them, because I already have the full dataset and can show you that elsewhere.

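A sketch of that prediction step, with the same slice-to-10 shortcut the video uses:

```python
# Score the pairs with the cross encoder; slicing to 10 keeps the demo quick.
scores = ce.predict(silver[:10])
print(scores[:3])  # the first three similarity predictions
```
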
[2283.60-2335.52] Now let's have a look at what we have in those scores, say the first three of them. We have an array with these scores; those are our predictions, our similarity predictions, for the first three pairs. Because the pairs are randomly sampled, a lot of them are negatives: if we look at the silver pairs, they're mostly not relevant to each other. And we can see that here, not particularly relevant; that's the first issue with this approach.
