that this particular piece of information here is contained only once, plus it is a corporate contact. So again, to my point: the paper might be written a bit more scarily than it ultimately turns out to be. Though, you know, you have to make two different points about this particular piece of information. Yes, it might be written a bit scarily and gimmicky, with the blacked-out stuff. However, the paper has a point, namely that if, let's say, you as a company do this on internal data, it might very well be (and they do have examples where they reproduce data from just one document) that something like this happens to you internally, where in your internal document base you quasi-duplicate a document with the same information over and over, that doesn't get deduplicated, and then your language model memorizes it. So the paper does have a point; that's what I'm trying to say, and I hope that's clear. Alright, so we'll get to the results in a bit. I hope I've already given you some sort of a taste of what you can expect.
So first of all, they go into the definition of language models. The language model here is simply framed as a model that can give you the probability of a sequence of text in a stepwise fashion, so always the probability of the next word given the previous words, and you can evaluate that. The access to the model that they assume here is access to, let's say, the logits of the model, or the output distribution of the model.
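Just to make that access assumption concrete, here is a minimal sketch of my own (not code from the paper) using the public GPT-2 checkpoint from the Hugging Face transformers library: it reads off the model's distribution over the next token for a prefix and sums the stepwise log-probabilities into a sequence log-likelihood. The prefix string is just an arbitrary example.

```python
# Minimal sketch (not from the paper): the assumed access is the model's
# output distribution, i.e. the logits over the next token at every step.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prefix = "The quick brown fox jumps over the lazy"
input_ids = tokenizer(prefix, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits              # (1, seq_len, vocab_size)

# Distribution over the *next* token given the whole prefix.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for p, i in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([i])!r}: {p:.4f}")

# The stepwise (chain-rule) probability of the whole sequence is the product
# of the per-step next-token probabilities; in log space it is a sum.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
token_log_probs = log_probs.gather(1, input_ids[0, 1:].unsqueeze(1)).squeeze(1)
print("sequence log-likelihood:", token_log_probs.sum().item())
```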
And they say they use GPT-2 because it's trained on a large corpus of text, you can evaluate it, it's not as slow as GPT-3, I guess, and it's publicly available. However, the training data of GPT-2 is not publicly available. But they do have someone from OpenAI on the paper, and through this person they could sort of query OpenAI to make sure that a given piece of text they find is or isn't in the training data of GPT-2. So that's how they work: the OpenAI person acts as an API for the training data.
Right, so they define their attacks here, and they do a lot of things to set up cleanly what they do. They have two points right here. First, there is this notion of memorization. They say there are many ways to define memorization in language modeling. In this particular piece of work, they say it is okay to memorize some stuff: language models must, for example, memorize the correct spelling of individual words, because the words are made of word pieces and the language model needs to output them. So it's fine if the model memorizes this. Indeed, there is an entire area of research that analyzes neural networks as repositories of memorized knowledge. For example, when GPT-2 is prompted to complete the sentence "My address is 1 Main Street, San Francisco CA", it generates the next token "94107", a correct zip code for San Francisco, California.
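If you want to poke at that example yourself, here is a hedged one-off of my own (the paper used a larger GPT-2 model, so the small public checkpoint is not guaranteed to produce the exact "94107" completion they report):

```python
# Greedy completion of the paper's address prompt with the public GPT-2 small
# checkpoint; the paper reports "94107" from their model, your output may differ.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "My address is 1 Main Street, San Francisco CA"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=4, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[1]:]))
```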
They say: while this is clearly memorization in some abstract form, we aim to formalize our definition of memorization in order to restrict it to cases that we might consider unintended. So memorization as such isn't bad.
What is bad is what they call here the eidetic memorization of text. Eidetic memorization of text is when the model memorizes something that only appears very few times in the training data. So they say: we first define what it means for a model to have knowledge of a string; our definition is loosely inspired by, yada yada yada. A model f knows a string s if s can be extracted by interacting with the model. So if you can input whatever you need to input and the model outputs s, then you say the model knows s. And if s is a piece of training data, then you say the model has memorized s.

So here they say a string is extractable from a language model if there is a prefix (and the prefix here is the input to the model) such that, if you feed it to the model, the output will be the string. And then they define this eidetic memorization, or rather k-eidetic memorization (I have no clue whether I pronounce this correctly): a string s is k-eidetic memorized by a language model f if s is extractable from f (so that's the memorization part) and s appears in at most k examples in the training data.
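Written out compactly, the two definitions (my paraphrase of the paper's definitions, using the same symbols f, s, and k) are:

```latex
\textbf{Extractability.} A string $s$ is \emph{extractable} from a language
model $f$ if there exists a prefix (context) $c$ such that the model, prompted
with $c$, produces $s$: $f(c) = s$.

\textbf{$k$-eidetic memorization.} A string $s$ is \emph{$k$-eidetic memorized}
by $f$ if $s$ is extractable from $f$ and $s$ appears in at most $k$ examples
in the training data.
```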
So if the address of this person only appeared twice, but you could extract it verbatim from the language model, then that would be an example of 2-eidetic memorization, because k in that case would be two: it appears twice in the training data. Though they're not entirely clear about what they mean by "examples" in the training data, because usually this training data is chunked to make it fit into the language model, and so on. I think they do this on a document basis, so they would consider one document to be one example and a different document to be a different example. So take, for example, these IRC conversations that they are able to extract; they claim they can extract IRC conversations, or at least the usernames from those conversations. The usernames might appear hundreds or thousands of times, because the users chat with each other, and they will all be in one document, but the document will be so long that it will actually be chunked into different training data pieces. Maybe, I don't know; I don't know exactly what it means to be one example right here. But for sure, a piece of text can appear more than once even if it is only in one example, and in fact they actually analyze that situation.
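To see why "appears in one example" and "appears once" can come apart, here is a small illustration of my own (the block size and the preprocessing are assumptions about a typical GPT-2-style pipeline, not taken from the paper): one long document is split into many fixed-length training sequences, and a hypothetical username can recur inside every one of them.

```python
# One long IRC-style document gets chunked into fixed-length token windows
# before training, so "one example" can still contribute many sequences and
# many repetitions of the same string (here, the made-up usernames).
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

document = "<alice> hi bob\n<bob> hi alice\n" * 500   # one document, one "example"
token_ids = tokenizer(document).input_ids

block_size = 1024                                      # GPT-2's context length
chunks = [token_ids[i:i + block_size]
          for i in range(0, len(token_ids), block_size)]

print(f"1 document -> {len(chunks)} training sequences,",
      f"username 'alice' occurs {document.count('alice')} times")
```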
Alright, so we've defined this k-eidetic memorization; that's what we're looking for. That's sort of the problematic regime: if k is very small, in the extreme k is one, then one piece of training data contains a string and we can extract that string from the trained language model. They also say that, for any given k, memorizing longer strings is intuitively more harmful than shorter ones, which kind of makes sense. And they even go into corner cases. They say this definition admits certain pathological corner cases: for example, many language models, when prompted with the sequence "repeat the following sentence" followed by a sentence, will do so correctly, and this technically allows any string to be "known" under their definition. But of course they don't do that; they assume they don't know the training data, so they can't just say "repeat the following sentence" and so on. But you do see that it is actually fairly hard to even define the problem right here, even though we as humans have a sort of intuition about what it means for a language model to do unintended memorization.
Alright, so the adversary's objective here is to extract memorized training data from the language model. The strength of the attack is measured by how private (so how k-eidetic) a particular example is: stronger attacks extract more examples in total, and examples with lower values of k. They say: we do not aim to extract targeted pieces of training data, but rather indiscriminately extract training data; while targeted attacks have the potential to be more harmful, our goal is to study the ability of language models to memorize data generally, not to create an attack that can be operationalized by real adversaries to target specific users. So you can see that they simply want some training data; they don't really care what it is, they just want to get some, so they're going to search for sort of the easiest-to-get training data. They frame it as: we don't want to devise an attack that can attack individual users.
But there is a different component to it. If you had to guess the password of any one particular user, that would be, you know, fairly hard. However, if you had to guess a password that was used by any user at all, that's fairly easy, right? Even if you discard the fact that most people use "password" as their password and so on; even if people just uniformly sampled words from the dictionary as their passwords, you'd still have a decent chance of figuring out a password. You'd have a decent chance of figuring out, you know, not super-high-entropy things like maybe credit cards; you'd have a decent chance of figuring out some credit card number just by guessing. So this is the regime we are in here, and it's an entirely different regime, I think, from trying to attack individual users.
Essentially, what they're going to do right here is they're going to say: look, there's training data right here. Now, some training data these models can extract a pattern from, right? This is what we do with machine learning. We say, okay, this data right here all has some pattern, and this data over here has some other pattern, and you can learn from that. So the machine learns to sort of abstract away from the individual training data samples, and so on. But here is a data point that doesn't really fall into any of these categories. So what the model will do is it will simply say: well, this is sort of its own little group, I'll remember that, and I can extract