Extracting Training Data from Large Language Models (Paper Explained)
2020-12-26 19:42:56
https://youtu.be/plK2WVdLTOY
log files, entries from forums or wikis, or religious texts. However, they also identify a significant amount of unique data: 128-bit UUIDs, correctly resolving URLs containing random strings, and contact information of individual people. Okay, so as I said, this is fairly interesting, but also a bit expected, right?
If I give you the start of a UUID, then there is no pattern to extract, except, I guess, the UUID structure itself; there is no deeper pattern to extract. So all the model can really do is memorize the UUID, especially if there aren't too many UUIDs in the training data, or if this particular UUID is, as I said, this outlier type of situation. The same goes for URLs containing random strings: these are just not pattern-extractable, and are therefore more easily memorized by the model than learned.
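To make that concrete, here is a minimal sketch (mine, not from the paper) of why a UUID has surface structure but no learnable pattern:

```python
import uuid

# A version-4 UUID has a fixed surface structure (hyphen positions, the
# version nibble "4", a variant nibble in {8, 9, a, b}), but the other
# 122 bits are uniformly random -- there is nothing deeper to learn.
for _ in range(3):
    s = str(uuid.uuid4())
    assert s[8] == s[13] == s[18] == s[23] == "-"  # fixed hyphen positions
    assert s[14] == "4" and s[19] in "89ab"        # fixed/constrained nibbles
    print(s)

# Predicting the remaining hex digits of an unseen UUID is pure chance,
# so a model that completes one correctly has almost certainly memorized it.
```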
So you can see the breakdown right here, where they show how many of each category they extract: contact info, 32; named individuals not appearing in news, 46; and so on. That's a fair amount of things you can extract from GPT-2. You have to say that this is over all of GPT-2: in total, you get approximately 100 things that are names or contact information. So, as I said, not too bad, especially considering what I've shown you here, right? That's one of these pieces of contact information. And they do say in the paper that this person's information was obviously released in the context of this software project. The problem is only that the model might output it in a different context, right? The model might think: oh, now I need to output some sort of name and address. What kind of name and address should I put? Well, this name and address appears pretty often, so I'm going to put it here.
And so that's a failure case, you know, that these things exhibit. So here is a sort of a graph, and they have more of these graphs later. You can see that here, for example, is the GPT-2 perplexity, and here is the zlib entropy. If you plot them against one another, most things fall on this diagonal right here, with the giant blob around here for most text from the internet. But there is a region where GPT-2 assigns fairly low perplexity while zlib considers the text relatively high-entropy; these are candidates for memorization. The red and blue points here are the ones the authors selected for checking, and the blue ones are those they found were memorized from the internet. So a fairly high percentage, in fact 67% of the samples selected by this method, was indeed memorized. Though, as I said, you can see that there aren't many more, right? This is all the samples; I don't know how many more they could generate, but you can see that it gets pretty sparse out here. Okay.
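As a rough sketch of that comparison (my reconstruction of the idea, not the authors' code, assuming the HuggingFace gpt2 checkpoint):

```python
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def gpt2_perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return float(torch.exp(loss))

def zlib_entropy(text: str) -> int:
    # Compressed size in bytes: a cheap, model-free proxy for entropy.
    return len(zlib.compress(text.encode("utf-8")))

# Samples with low GPT-2 perplexity but high zlib entropy sit off the
# diagonal of the plot and are flagged as memorization candidates.
sample = "some generated sample text"
print(gpt2_perplexity(sample), zlib_entropy(sample))
```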
Yeah, so examples of memorized content: personally identifiable information. They say there are several examples of individual people's names, phone numbers, addresses, and social media accounts. Some of this memorized content is exclusive to just a few documents. For example, they extract the usernames of six users participating in an IRC conversation that appeared in exactly one document. So I guess the question is: how often did those usernames appear within that one document, and how distinct are these usernames from other usernames? Because if they're very distinct and they have a long conversation, it's easy to see that the model will remember that. I'm not saying this is not a problem; I'm just saying the models don't randomly memorize stuff, it takes fairly specific conditions for them to memorize things.
So they say: we identify 50 examples of memorized URLs that correctly resolve to live web pages. Many of these URLs contain uncommon pieces of text, such as random numbers or base64-encoded strings. Again, this random element right here means you can't extract a pattern.
They also say they identify 31 generated samples that contain snippets of memorized source code. And they can actually extend that: they can take these snippets, which I think are always generated at a length of 256 tokens, and extend them to recover the source code more or less verbatim. That's also, you know, fairly interesting.
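A minimal sketch of that kind of extension (my own, not the authors' procedure, with the HuggingFace gpt2 checkpoint standing in for the attacked model):

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def extend_snippet(snippet: str, rounds: int = 4, chunk: int = 256) -> str:
    """Repeatedly feed the model its own output; if the snippet was
    memorized, each round can surface more of the original verbatim."""
    text = snippet
    for _ in range(rounds):
        ids = tok(text, return_tensors="pt").input_ids[:, -512:]  # bound context
        out = model.generate(ids, max_new_tokens=chunk, do_sample=False)  # greedy
        text += tok.decode(out[0, ids.shape[1]:])
    return text
```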
And unnatural text: yeah, these UUIDs. A Google search for one such string identifies just three documents containing this UUID, and it is contained in just one GPT-2 training document, okay, though again, we are not told how often it appears within that document. They say table 3 gives nine examples of k equals one memorized content, each of which is a random sequence between 10 and 87 characters long. You can see the table right here. So these are examples of random strings that, for some reason, appear in the training data in exactly one document. However, this string right here, for example, appears 10 times, and this string right here appears 311 times. So again, it's a random string, but 10 times is fairly often for the very same piece of text to appear, especially one that is not pattern-close to any other piece of text. It seems okay that the model memorizes it; it seems expected, right?
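For clarity, here is a tiny sketch (mine, assuming the training set is just a list of document strings) of what that k counts:

```python
def k_count(s: str, documents: list[str]) -> int:
    """Number of training documents containing the string s.
    k == 1 is the paper's most worrying case: extractable content
    that occurred in exactly one training document."""
    return sum(1 for doc in documents if s in doc)

docs = ["config id=7A2A91 ...", "hello world", "hello again"]
print(k_count("7A2A91", docs))  # 1
print(k_count("hello", docs))   # 2
```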
So here they also say that there are samples that contain two or more snippets of memorized text that are unrelated to one another. In one example, GPT-2 generates a news article about the real murder of a woman in 2013, but then attributes the murder to one of the victims of a nightclub shooting in Orlando in 2016.
And this I found very, very interesting, right? Because that's exactly what I said GPT-3 does. In GPT-3, they have this example of GPT-3 writing an entire news article about, I'm not even sure, some pastors, some split in the Mormon Church or something like this, if I remember correctly. I was able to Google that, and I did not find the verbatim sequence, but I found the article that GPT-3 wrote many, many times in sort of different words, written down in books, reported on, and so on. So what GPT-3 did, I would guess, is interpolate between these things. And here they find the same thing: GPT-2 takes two pieces of text, finds that they're close, and sort of interpolates between the two. I would call this memorization too. And they say that, under their definition of memorized text, this does not count as memorized, but effectively it is, right?
So it sort of mixes up different training data points together. And this, I think, is very strong evidence for how these language models work: they take training data points and kind of mix them together, and they can do this in a grammatically well-founded fashion; they can also change individual words of a sentence and so on. By the way, that doesn't mean people are doing anything smarter; the best arguments I hear are that people are kind of doing the same thing, they just recount the training samples in a bit of their own words. But yeah, this I found extremely, extremely interesting.
Also, you know, what I found from GPT-3 with this Google example is that the memorization problem may be even way worse than what they analyze in this paper, because they look for direct overlap in text, whereas they wouldn't catch strings that are reformulated.
Okay, so lastly they say they can extend text, and this thing here I find very interesting. If they put in the prompt 3.14159, GPT-2 will complete the first 25 digits of pi correctly. Interestingly, when they input "pi is 3.14159", it gives the first 799 digits; and if they say "e is this and pi is this", it gets the first 824 digits correct. So they make the point that the memorization problem could actually be much worse if you only knew what prefix to input.
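A quick sketch of that probe (mine, not the paper's exact setup, again assuming the HuggingFace gpt2 checkpoint):

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("3.14159", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=False)  # greedy decoding
completion = tok.decode(out[0])

# Count how long the completion matches a reference expansion of pi.
PI = "3.14159265358979323846264338327950288419716939937510"
match = 0
for a, b in zip(completion, PI):
    if a != b:
        break
    match += 1
print(completion[:40], "| matching prefix length:", match)
```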
This strengthens my case for the future job description of prompt engineer, right? It seems to be quite a magical power to know what to input into these language models to make them output what you want, in this context, but also in contexts where you actually want them to do something useful.
And here is where they investigate this number k. You might have noticed, and this is a bit of my criticism of this paper up to this point: yes, they have the k equals one examples right here, and they sometimes say a string is only found in very few examples. But essentially, they investigate this memorization largely in the absence of k, the very quantity they themselves defined to be what makes it problematic. They say, well, it's problematic if it only appears in a few training examples, yet much of the analysis is done without reference to k. And here is where they do investigate it, and the experiments here are fairly clever. They find one document, a pastebin document, that is sort of a JSON document, and it has lots of links.
And I found that giant document, okay, and it's a giant JSON document with these