Making The Most of Data: Augmented SBERT
Published: 2021-12-17 14:24:40 UTC
Video: https://youtu.be/3IPCEeh4xTg
Perceptrons and neural networks: a lot of that was researched and discovered back in the 50s, 60s, and 70s, but we didn't really see it applied in industry, or anywhere really, until the past decade, and there are two main reasons for this. The first is that we didn't have enough compute power back in the 50s, 60s, and 70s to train the models we needed to train, and we also didn't have the data to actually train those models.
Now, compute power is not really a problem anymore. If we look at this graph, it depends on what model you're training, of course; if you are OpenAI and you're training GPT-4 or 5 or whatever, then maybe compute power is still pretty relevant. But most of us can get access to cloud machines or personal machines, wait a few hours or a couple of days, and fine-tune or pre-train a transformer model that gives good performance for what we need.
That obviously wasn't always the case until very recently. Going back to the 1960s, you can see on this graph we have the IBM 704, and on the Y axis we have floating-point operations per second, on a logarithmic scale. On a linear scale it would basically look like a straight line until a few years ago and then shoot up. It's pretty impressive how much progress has been made in terms of computing power.
Now, like I said, that's not really an issue for us anymore; in most cases we have the compute to do what we need to do. And data is not as much of a problem anymore either, but we'll talk about that in a moment.
So, data: again, we have a very big increase in data, not quite as big as the increase in computing power, and this graph doesn't go quite as far back. It only starts in 2010, where I believe it was at around 2 zettabytes, and now, in 2021, it's at 71 or so. So there's a fairly big increase over time, not quite as much as computing power, but still pretty massive.
Now, the thing with data is: yes, there's a lot of data out there, but is there that much data out there for what we need to train models to do? In a lot of cases, yes, there is. But it really depends on what you're doing, particularly if you are focusing on a more niche domain. What I have here on the left are a couple of niche domains.
There's not that much data out there on sentence pairs for climate evidence and claims, for example, where you have a piece of evidence and a claim and a label for whether the evidence supports the claim or not. There is a small dataset for this, the Climate FEVER dataset, but it's not big.
For agriculture, I assume there's not that much data within that industry, although I've never worked in it, so I'm not fully aware; I just assume there's probably not that much. And then there's niche finance, which I do at least have a bit more experience with, and I imagine this is something a lot of you will find useful as well, because finance is a big industry and there's a lot of finance data out there, but there are also plenty of niche projects and problems in finance where you find much less data. So yes, we have a lot more data nowadays, but we don't have enough of it for a lot of the tasks we need it for.
On the right here, we have a couple of examples of low-resource languages: Dhivehi, from the Maldives, and also Navajo. With these, we kind of need to find a different approach.
Now, depending on your use case, we can investigate unsupervised learning methods like TSDAE, which we have covered in a previous video and article, and that does work when you're trying to build a model that recognizes generic semantic similarity; it works very well for that. But with the climate claims data, for example, we are not trying to match sentence A and sentence B based on their semantic similarity. We're trying to match sentence A, which is a claim, to sentence B, which is a piece of evidence, based on whether that evidence supports the claim or not. So in that case, an unsupervised approach like TSDAE doesn't really work.
So what we have is very little data, and there aren't really any alternative training approaches we can use, so basically what we need to do is create more data. Now, data augmentation is difficult, particularly for language. Data augmentation is not specific to NLP; it's used across ML, and it's more established in the field of computer vision. That makes sense because in computer vision, say you have an image, you can modify that image using a few different approaches, and a person can still look at it and think, okay, that is the same image; maybe it's just been rotated a little bit, or the color grading or brightness has changed, or something along those lines. We've just modified it slightly, but it's still in essence the same image.
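To make that concrete, here is a minimal sketch of the kind of image augmentation pipeline being described; torchvision is used purely as an illustrative library here and is my own choice, not something referenced in the talk:

```python
from torchvision import transforms

# Each transform slightly modifies the image (rotation, colour/brightness changes, flips),
# but a person would still recognise it as essentially the same image.
image_augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
])

# augmented = image_augment(pil_image)  # pil_image: a PIL.Image loaded elsewhere
```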
Now, for language it's a bit more difficult, because language is very abstract and nuanced. If you start randomly changing certain words, the chances are you're going to produce something that doesn't make any sense, and when we're augmenting our data we don't want to just throw rubbish into our model; we want something that makes sense. So there are some data augmentation techniques, and we'll look at a couple of the simpler ones now.
There is a library called nlpaug, which I think is very good for this sort of thing; it's essentially a library that allows us to do data augmentation for NLP. What you can see here is two methods using word2vec vectors and similarity. We take this original sentence, "the quick brown fox jumps over the lazy dog", and we're just inserting some words using word2vec; we're trying to find which words word2vec thinks could go in there, i.e. which words are most similar to the surrounding words. And we get this "al-Ziari", which I don't know; it seems like a name to me, but I'm not sure, and I don't think it really fits there, so it's not great, it's not perfect. "Lazy superintendents dog" does kind of make sense; I feel like a lazy superintendent's dog is maybe a stereotype, or I'm sure it's been in The Simpsons or something before, so, okay, fair enough, I can see how that fits in there, although again it's a bit weird. Substitution, for me, seems to work better. Rather than "the quick brown fox" we have "the easy brown fox", and rather than "jumps over the lazy dog", "jumps around the lazy dog", which changes the meaning slightly. "Easy" is a bit weird there, to be fair, but we still have a sentence that kind of makes sense, so that's good, I think.
Now, we don't have to use word2vec; we can also use contextual word embeddings, like with BERT, and for me the results look better. For insertion, we get "even the quick brown fox usually jumps over the lazy dog", so we're adding some words there and it makes sense, which I think is good. For substitution, we're only changing one word here, and we change that to "a little quick brown fox" instead of just "quick brown fox", which I think also makes sense. So this is a good way of augmenting your data and getting more data from less.
But for us, because we are using sentence pairs, we can basically just take all of the data: say we have sentences A and B over here, and imagine this is a dataframe.
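As a loose sketch of the kind of dataframe recombination being set up, assuming the augmented SBERT idea of generating new candidate pairs from the existing gold pairs (the column names, the pairing step, and the example values are my assumptions, not code from the talk), it could look something like this:

```python
import itertools
import pandas as pd

# Hypothetical gold data: labelled sentence pairs in columns A and B.
gold = pd.DataFrame({
    "A": ["claim 1", "claim 2"],
    "B": ["evidence 1", "evidence 2"],
    "label": [1, 0],
})

# Recombine sentences from both columns into new, unlabelled candidate pairs.
candidates = pd.DataFrame(
    list(itertools.product(gold["A"].unique(), gold["B"].unique())),
    columns=["A", "B"],
)

# Keep only combinations not already present in the gold pairs; in the augmented
# SBERT setup these would then be labelled (e.g. by a cross-encoder) to create
# additional "silver" training data.
candidates = candidates.merge(gold[["A", "B"]], on=["A", "B"], how="left", indicator=True)
candidates = candidates[candidates["_merge"] == "left_only"].drop(columns="_merge")
print(candidates)
```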