Extracting Training Data from Large Language Models (Paper Explained)
Published: 2020-12-26 19:42:56
https://youtu.be/plK2WVdLTOY

Transcript excerpt (starting around 34:21):
If a human looks at this right here and sees, say, the name and address of a person, or a credit card number, we know that's not really highly likely text. And that's essentially the answer right here. But we say "if a human looks at it", and what is a human? A human is just another language model, among other things; the human is just another thing that has an intuition of how likely text is. So the basis of their approach is going to be the following.
Let's take a second data set, sampled in the same way, also from the internet, but not in exactly the same way. In fact, they use Common Crawl instead of the Reddit outbound links that GPT-2 used; but take any other data set, and I'm going to draw it. So here's a data point, here's a data point, maybe this data point is duplicated from the other data set, and here's one over here. So you're going to have somewhat different data points. But also, since you're sampling from the internet broadly, you're going to have the MIT license many times, and you're also going to have outliers in this data set.
Now the important part is: if you sample this differently, in the same fashion but a bit differently, you're probably not going to have that same outlier right here; you're probably not going to have it in your new data set. So in the new data set, I hope you can see this, you're going to have the same pattern extracted here, even though it's from slightly different data points; you're going to have maybe a pattern extracted here, maybe one here; and you're going to have this same cluster here, because the MIT license will appear even though it comes from other documents, it's copied over and over. And you're going to have this outlier right here. So what you can do to differentiate our two settings is this:
you can consider a second language model, and you can ask it. So here we have two things that the first language model thinks are very likely: you have this thing right here, and you have this thing right here, and both the first language model considers super likely. You ask the second language model, and it says: yes, the MIT license, I consider that to be super likely too; but this outlier over here, I've never seen that, what's that? That seems very unlikely. And so by the ratio of the likelihoods under the two different models, you can find samples that the first model finds super likely, but the second model thinks are not likely at all. And that's exactly the trick they use right here. In fact, they use many instances of that trick.
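To make that concrete, here is a minimal sketch of the likelihood-ratio trick, assuming the Hugging Face transformers library. The choice of "gpt2-xl" as the attacked model and "gpt2" as the small reference mirrors the strategies described next, but the helper names and the exact scoring details are mine, not the paper's.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
target = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device).eval()  # model under attack
reference = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()  # second "intuition"

@torch.no_grad()
def log_perplexity(model, text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    # With labels=ids, the model returns the mean next-token cross-entropy,
    # i.e. the log-perplexity of the sequence.
    return model(ids, labels=ids).loss.item()

def memorization_score(text: str) -> float:
    # Large when the target finds the text far more likely (lower
    # log-perplexity) than the reference does.
    return log_perplexity(reference, text) / log_perplexity(target, text)
```

Rank generated samples by memorization_score, and the suspicious ones, the likely-memorized outliers, rise to the top.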
So here are the strategies. "Perplexity" is simply what they used before: whatever's likely is probably memorized. Yes, it's memorized, but it's often memorized justifiably. Then they have these strategies "small" and "medium", and this is the ratio of the log-perplexities of the largest GPT-2 model, the one they attack, and the small GPT-2 model. And this ties into the following: you don't even need a model trained on different data, right, you can simply use a smaller one. The reason a
smaller model works is the following. On the Machine Learning Street Talk podcast (if you don't know it, it's a podcast where we talk to people from industry and from various research labs) we spoke with Sara Hooker. We talked about her paper The Hardware Lottery, but she also has other research
where she shows that if you have a neural network, with layers, layers, layers, and weights in these layers, then not all weights are equal. Some of the weights, let's say the weights here, will be allocated to these pattern-extraction things. So here we have training data, training data, outlier, outlier. You'll have these weights representing this pattern within a layer, and this other pattern will be represented by these weights right here. And then you'll have other weights that are allocated to remembering single or very few outliers. And these will be disproportionate: there will be many, many more data samples covered by, let's say, this piece of weight space than by this one (I should have drawn the bottom one smaller). So there might be 1,000 training examples covered by one piece of weight space, and only one piece of training data covered by this other piece of weight space. And that's simply because the model can extract a pattern from the one, but not from the other, so it needs to memorize it. And the larger we make these models, the more parameters we give them, the more ability and the more space they have
to do this remembering. So what Sara Hooker noticed in her paper is: if you then distill these models (distillation is the process of taking a model and putting its knowledge into a smaller model), then, since in distillation you usually lose performance, not all training data points lose performance equally. Namely, you lose performance on the training data points that are these outliers, the ones not often represented in the training data, where the model has a harder time extracting a pattern. So these will be seldom patterns, or just hard patterns. I would also assume that patterns that are harder to extract will fall away, so the more complicated patterns will also be sacrificed; but among the things sacrificed are these outliers. So if you train a smaller model, the smaller model has less ability to remember these outliers. And therefore you don't even have to do this on a different training data set, right? You can simply compare to a smaller version of the same model trained on the same training data set, because that one will probably not remember the outliers as much. It would have been interesting if these authors had actually distilled GPT-2, though they don't have access to the original training data, so I get why they didn't do it. But it would be interesting to see.
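Since distillation comes up here, a generic distillation objective, as a minimal sketch (the paper itself does not distill GPT-2; the function name and temperature are illustrative), matches the student's softened next-token distribution to the teacher's:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    # Soften both distributions with a temperature, then push the student
    # toward the teacher via KL divergence.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```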
That gives me an idea: maybe there is actually a way to look at the weights (I get that these authors treat the model as a black box and don't use the weights) and to spot which of the weights are associated with only single or very few training data points. Maybe during training you can count how many times a weight is updated by a substantial amount. And maybe, looking at the attention matrices, you can determine what kinds of patterns need to happen for this weight to be activated. So if there's a weight that's activated by lots of different patterns, maybe that weight is useful for many, many forward-propagated signals. But if there's another weight that's only activated by one specific pattern, then maybe that's one of these memorization weights. So maybe there's a way to recognize these in the weights directly.
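A very rough sketch of that counting idea (entirely my construction of what's being mused about here, with an arbitrary threshold; not anything from the paper):

```python
import torch
from torch import nn

def make_update_counter(model: nn.Module, threshold: float = 1e-4):
    # For every weight, count on how many batches it receives a substantial
    # gradient. Weights with very low counts at the end of training are
    # candidate "memorization" weights.
    counts = {name: torch.zeros_like(p) for name, p in model.named_parameters()}

    def record():
        # Call once after each loss.backward().
        for name, p in model.named_parameters():
            if p.grad is not None:
                counts[name] += (p.grad.abs() > threshold).float()

    return counts, record
```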
So distillation appears to be a sort of defense against this memorization, though that's not done in this particular paper. They also have different strategies, so you don't need to do this neurally: you can compare the perplexity that GPT-2 gives to the zlib entropy (zlib is simply a text-compression method), and you can even compare the perplexities between the original string and the lowercased version, and so on.
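Sketched out, those two non-neural signals could look like the following, reusing the hypothetical log_perplexity helper and target model from the earlier snippet; the paper's exact normalization may differ:

```python
import zlib

def zlib_score(text: str) -> float:
    # Compressed size as a crude, model-free entropy estimate of the string.
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return zlib_entropy / log_perplexity(target, text)

def lowercase_score(text: str) -> float:
    # Verbatim-memorized text tends to depend on exact casing, so lowercasing
    # should hurt its likelihood more than it hurts ordinary text.
    return log_perplexity(target, text.lower()) / log_perplexity(target, text)
```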
So they extract: for each of these configurations, they select 100 examples from among the top 1,000 samples. That is, they produce 1,000 samples, and they sample 100 from those 1,000. They mostly sample from the best-ranked samples, but they also explore some further down the ranking; they have a formula by which they sample. They deduplicate, and then they investigate: they do Google searches to see if they can find the thing they say is memorized.
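A toy version of that selection step (the dedup and the rank-sampling formula in the paper are more involved; the names here are mine):

```python
def select_candidates(samples, score_fn, top_k: int = 1000, inspect_n: int = 100):
    deduped = list(dict.fromkeys(samples))                # order-preserving dedup
    ranked = sorted(deduped, key=score_fn, reverse=True)  # best-scoring first
    pool = ranked[:top_k]
    # Simplest choice: inspect the very best candidates. The paper instead
    # samples 100 of the 1,000 by a formula that also explores lower ranks.
    return pool[:inspect_n]
```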
Alright, so they say: across all strategies, they identify 604 unique memorized training examples from among the 1,800 candidates, and their best variant has a true positive rate of 67%. That's quite remarkable, right? 67% of the things that this method delivers you automatically are actually memorized (note that 604 out of 1,800 is about 34% overall; the 67% is for the best single variant). Though you have to qualify that: if you want more than 1,000 examples, that rate is going to drop, since you select the top 1,000 examples, the ones most likely to be memorized. So if an attacker wants more, if they want to scale this attack up, their true-positive rate is going to plummet fairly quickly, I'm going to assume. It would also be interesting to see how that rate develops with the rank of the retrieved document. But I get it: they have to do Google searches, and then ask OpenAI, to figure out whether something is really a memorized training example.
As for their categories: they manually group the memorized samples into different categories, with the results shown in Table 1. Most memorized content is fairly canonical text from news headlines,