Making The Most of Data: Augmented SBERT
Published: 2021-12-17 14:24:40 UTC
Video: https://youtu.be/3IPCEeh4xTg (video_id 3IPCEeh4xTg, channel_id UCv83tO5cePwHMt1952IVVHw)

Transcript (timestamps are segment start times, in seconds):
[2324.56] And you can try and modify that by, after creating your scores: if you've oversampled and got a lot of values, or a lot of records, then just go ahead and remove most of the low-scoring samples and keep all of your high-scoring samples. That will help you deal with that imbalance in your data.
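A minimal sketch of that rebalancing idea, assuming the pairs sit in a pandas DataFrame with the cross-encoder scores already attached as a label column (the sentences, values, and the 0.5 threshold below are placeholders, not the video's actual data):

```python
import pandas as pd

# Toy stand-in for the silver pairs; the labels would really come from
# the cross-encoder's predictions.
pairs = pd.DataFrame({
    "sentence1": ["a plane is taking off", "a man plays guitar", "kids are dancing"],
    "sentence2": ["an air plane is taking off", "a woman reads a book", "children are dancing"],
    "label": [0.97, 0.04, 0.91],
})

# Keep all high-scoring pairs, down-sample the (usually far more numerous)
# low-scoring ones to counter the label imbalance.
high = pairs[pairs["label"] >= 0.5]
low = pairs[pairs["label"] < 0.5].sample(frac=0.25, random_state=0)
pairs = pd.concat([high, low], ignore_index=True)
```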
[2354.56] So what I'm going to do is add to the labels column those scores, which will not actually cover all of them, because we only have ten in here. So let me maybe multiply that. So this isn't... you shouldn't do this, obviously; it's just so they fit. Okay, and let's have a look. Okay, so we now have sentence one, sentence two and some labels.
[2390.72] And what you would do, although I'm not going to run this, is write pairs.to_csv. You don't necessarily need to do this if you're running everything in the same notebook, but it's probably a good idea. So with to_csv, I'm going to say the silver data is a tab-separated file, and obviously the separator for that type of file is a tab character. And I don't want to include the index. Okay, and that will create the silver data file that we can train with.
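Continuing the sketch, the write-out is one line (silver.tsv and the tab separator come from the video; index=False is the assumed reading of "I don't want to include the index"):

```python
# Save the silver pairs as a tab-separated file, without the index column,
# so they can be reloaded later without re-scoring everything.
pairs.to_csv("silver.tsv", sep="\t", index=False)
```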
[2425.28] Which I do already have. So if we come over here, we can see that I have this file, and we have all of these different sentence pairs and the scores that our encoder has assigned to them. So I'm going to close that, and I'm going to go back to the demo.
[2457.28] And what I'm now going to do is... well, first, go back to the flow chart that we had. I'm going to cross off "predict labels", and we're going to go ahead and fine-tune the bi-encoder on both gold and silver data.
[2475.92] So we have the gold data; let's have a look at what we have. Yes. And the silver I'm going to load from file: so pd.read_csv, silver.tsv, and the separator is a tab character. And let's have a look at what we have, and make sure it's all loaded correctly. Looks good.
[2512.88] Now what I'm going to do is put both of those together. So all_data is equal to gold.append(silver), and we set ignore_index, so we don't get an index error... sorry, True. And all_data.head(). Okay, we can see that we hopefully now have all of the data in there. So let's check the length. Yeah, so it's definitely a bigger dataset now than before, with just gold.
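A sketch of that step, continuing from the file written above. Here gold is a placeholder for the labelled data built earlier in the video, and pd.concat stands in for the now-deprecated DataFrame.append:

```python
import pandas as pd

# Placeholder for the gold (human-labelled) pairs from earlier in the notebook.
gold = pd.DataFrame({
    "sentence1": ["a dog runs"],
    "sentence2": ["a dog is running"],
    "label": [1.0],
})

silver = pd.read_csv("silver.tsv", sep="\t")             # reload the silver pairs
all_data = pd.concat([gold, silver], ignore_index=True)  # gold.append(silver) in the video
print(all_data.head(), len(all_data))
```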
[2547.04] Okay, so we now have a larger dataset, and we can go ahead and use that to fine-tune the bi-encoder, or sentence transformer. So what I'm going to do is take the code from up here. So we have this train data, and I think I've already run this before, so I don't need to import InputExample here. [2580.8] But what we actually want to do here is for i, row in all_data.iterrows(), because this is a data frame, and iterrows goes through each row. We have the row's sentence one, sentence two, and also a label, so we load them into our train data. And we can have a look at that train data, see what it looks like.
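As a sketch, that loop reads roughly as follows, assuming the sentence1/sentence2/label column names used above:

```python
from sentence_transformers import InputExample

# One InputExample per row of the combined gold + silver DataFrame.
train_data = []
for i, row in all_data.iterrows():
    train_data.append(
        InputExample(texts=[row["sentence1"], row["sentence2"]],
                     label=float(row["label"]))  # the float cast matters later
    )
```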
[2609.2] Okay, we see that we get all these InputExample objects. If you want to see what one of those has inside, you can access the texts like this — I should probably do that in a new cell, so let me pull this down here. And you can also access the label, to see what we have in there. [2631.04] Okay, so that looks good, and we can now take that, like we did before, and load it into a data loader. So let me go up again and we'll copy that. Where are you? Take this, bring it down here, and we run this. That creates our data loader.
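That step is a standard PyTorch DataLoader over the examples; the batch size here is an assumption, since it isn't stated in this part of the video:

```python
from torch.utils.data import DataLoader

# Shuffle and batch the InputExamples for training.
loader = DataLoader(train_data, shuffle=True, batch_size=16)

# Peek at a single example's contents, as done in the video.
print(train_data[0].texts, train_data[0].label)
```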
[2657.84] And we can move on to actually initializing the sentence transformer, or bi-encoder, and actually training it. So, from sentence_transformers, we're going to import models, and we're also going to import SentenceTransformer.
[2668.24] Now, to initialize our sentence transformer: if you've been following along with the series of videos and articles, you will know that we do something that looks like this. So we're going to create our BERT module, and that is going to be models.Transformer — here we're just loading a model from Hugging Face transformers, so bert-base-uncased. And we also have our pooling layer: so models again, and we have Pooling.
[2716.0] And in here we want to include the dimensionality of the vectors that the pooling layer should expect, which is just going to be bert.get_word_embedding_dimension(). And it also needs to know what type of pooling we're going to use: are we going to use CLS pooling? Are we going to use mean pooling, max pooling, and so on? We are going to use mean pooling: so pooling_mode_mean_tokens — let me set that to True.
[2752.56] So there are the two, let's say, components in our sentence transformer, and we need to now put those together. So we're going to call model = SentenceTransformer, and we write modules, and then we just pass, as a list, bert and also pooling. Okay, so we run that.
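Put together, those steps look roughly like this, assuming bert-base-uncased as the checkpoint:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer module (BERT) plus a mean-pooling head on top of it.
bert = models.Transformer("bert-base-uncased")
pooling = models.Pooling(
    bert.get_word_embedding_dimension(),  # 768 for bert-base
    pooling_mode_mean_tokens=True,        # average the token embeddings
)
model = SentenceTransformer(modules=[bert, pooling])
```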
[2781.2] We can also have a look at what our model looks like. Okay, we have a SentenceTransformer object, and inside there we have two layers, or components. The first one is our Transformer — it's a BERT model — and the second one is our Pooling. And we can see here that the only pooling method set to True is pooling_mode_mean_tokens, which means we're going to take the mean across all the word embeddings output by BERT and use that to create our sentence embedding, or sentence vector.
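A quick sanity check, not shown in the video, is to encode a couple of sentences and confirm that one fixed-size vector comes out per sentence:

```python
# Each sentence becomes a single vector via mean pooling over its tokens.
embeddings = model.encode(["a plane is taking off", "kids are dancing"])
print(embeddings.shape)  # (2, 768) for a bert-base backbone
```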
[2815.6] So, with that model now defined, we can initialize our loss function. So we want to write from sentence_transformers.losses import CosineSimilarityLoss. And in here we need to pass the model, so it understands which parameters to actually optimize, and initialize that.
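In code, that is simply:

```python
from sentence_transformers.losses import CosineSimilarityLoss

# The loss wraps the model so it knows which parameters to optimize.
loss = CosineSimilarityLoss(model)
```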
[2849.12] And then we call our training function — the fit function. That's similar to before, with the cross encoder, although slightly different. So let me take that; it's a little further up from here. Take that, and we're just going to modify it. So, warm-up: I'm going to warm up for 15% of the number of steps that we're going to run through. We change this to model — it's not the cross encoder anymore. [2878.8] And, like I said, there are some differences here. So we have train_objectives — that's different — and this is just a list of all the training objectives we have. We are only using one, and we just pass loader and loss into that. Evaluator: we could use an evaluator, but I'm not going to; for this one, I'm going to evaluate everything afterwards. The epochs and warm-up steps are the same. The only thing that's different is the output path, which is going to be bert-stsb-aug. That's it. So go ahead and run that. It should run — let's check that it does.
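The fit call then looks something like this; the epoch count and the exact output path name are assumptions based on what's said above:

```python
# Warm up the learning rate over the first 15% of all training steps.
epochs = 1
warmup_steps = int(0.15 * len(loader) * epochs)

model.fit(
    train_objectives=[(loader, loss)],  # a list of (dataloader, loss) pairs
    epochs=epochs,
    warmup_steps=warmup_steps,
    output_path="bert-stsb-aug",        # assumed name for the saved model
)
```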
[2925.2] Okay, so I've got this error here — it's lucky that we checked. This RuntimeError: found dtype Long but expected Float. And if we come up here, it's going to be in the data loader, or in the data that we've initialized. So here I've put int for some reason — I'm not sure why that is. This should be a float: the label in your training data. And that should be the same up here as well.
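Sketched on a single example, the fix is just to cast the label:

```python
# CosineSimilarityLoss needs float labels; integer labels raise
# "RuntimeError: Found dtype Long but expected Float" during fit().
example = InputExample(texts=["a dog runs", "a dog is running"],
                       label=float(1))  # not label=1
```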
[2956.56] Okay, so here as well, with the cross encoder, we would expect a float value — so just be aware of that; I'll make sure there's a note earlier on in the video for that. Okay, let's continue through that and try and rerun it. It should be okay now. Oh, I need to actually rerun everything else as well. So rerun this. Okay, label 1.0 — okay, that's better. I'll just leave this for a moment,
[2999.28] just to be sure that it is actually running this time. But it does look good, so yeah, that's fine. For some reason in the notebook I'm only seeing the number of iterations — but okay, pause it now, and we can see that yes, it did run through two iterations. So it is running correctly now. That's great.
[3039.12] What I want to do now is actually show you the evaluation of these models. So, back to our flow chart quickly. Okay, fine-tune bi-encoder: we've just done it. So we've now finished with the in-domain augmented SBERT training strategy, and let's move on to the evaluation. Okay, so my evaluation script here is maybe not the easiest to read.
[3074.8] But basically all we're doing is importing the EmbeddingSimilarityEvaluator from down here. I'm loading the GLUE data — STSB again — and we're taking the validation split, which we didn't train on. We are converting it into InputExamples and feeding it into our EmbeddingSimilarityEvaluator, and loading the model; the model name I pass through some command line arguments from up here. And then it just prints out the score. So let me switch across to the command line.
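The script itself isn't shown line by line, but based on that description it would look roughly like this; the model path is the assumed output from the training step above:

```python
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# STS-B validation split, which the model never trained on. GLUE STS-B
# labels run 0-5, so scale them into the 0-1 range the evaluator expects.
stsb = load_dataset("glue", "stsb", split="validation")
examples = [
    InputExample(texts=[row["sentence1"], row["sentence2"]],
                 label=row["label"] / 5.0)
    for row in stsb
]

evaluator = EmbeddingSimilarityEvaluator.from_input_examples(examples)
model = SentenceTransformer("bert-stsb-aug")  # assumed path from training above
print(evaluator(model))  # similarity correlation score on the validation set
```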
[3112.24] We can see how that actually performs. Okay, so I've just switched across to my other desktop, because this is much faster, so I can actually run this quickly. So: python, and 03. So we're going to run