Thoughts on “Does GPT-2 Know Your Phone Number?”
Via Hacker News: “Does GPT-2 Know Your Phone Number?” (Eric Wallace, Florian Tramèr, Matthew Jagielski, and Ariel Herbert-Voss, December 2020).
> When prompted with a short snippet of Internet text, the model accurately generates [one Peter W.’s] contact information, including his work address, email, phone, and fax.
>
> […] In another case, the model generates a news story about the murder of M. R. (a real event). However, GPT-2 incorrectly attributes the murder to A. D., who was in fact a murder victim in an unrelated crime.
>
> […] Below, we prompt GPT-3 with the beginning of chapter 3 of *Harry Potter and the Philosopher’s Stone*. The model correctly reproduces about one full page of the book (about 240 words) before making its first mistake.
No answers here, just thoughts.
The general theme of GPT-2 (and now GPT-3) slurping up “real-world-significant” data and regurgitating it on command reminds me of the way arbitrary data can be (intentionally) stored on the Bitcoin blockchain, again with potential legal consequences for… well, somebody, but it’s not clear whom. See “A Quantitative Analysis of the Impact of Arbitrary Blockchain Content on Bitcoin” (Matzutt, Hiller, Henze, Ziegeldorf, Müllmann, Hohlfeld, Wehrle; 2018).
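(If you’re curious how “arbitrary data on the blockchain” works mechanically: the standard vehicle is an OP_RETURN output, whose script carries a small opaque payload that every full node then stores forever. Here’s a minimal sketch in Python; the payload is of course hypothetical.)

```python
# Minimal sketch: the scriptPubKey of a Bitcoin OP_RETURN output, the
# standard way to embed a small arbitrary payload on-chain. Standardness
# rules cap the payload around 80 bytes; this sketch handles only the
# simple case of a direct push (75 bytes or fewer).

OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    assert len(payload) <= 75, "a direct push handles at most 75 bytes"
    # One byte of opcode, one byte of push-length, then the raw payload.
    return bytes([OP_RETURN, len(payload)]) + payload

script = op_return_script(b"arbitrary data, stored forever")
print(script.hex())  # 6a1e617262697472617279...
```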
In the case of the murder attributed to the wrong person, it strikes me that at least three distinct things happened there:

1. The computer program was given training data from the real world. (As the researchers say, this might have been a GDPR violation or something.)

2. The computer program, through random processes, produced the sequence of tokens “A--- D---, 35, was indicted by...” (See the sketch just after this list.)

3. A human reader (quite reasonably) might have interpreted that sequence of tokens as a statement of truth about the real world.
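To make the second step concrete, here’s a minimal sketch of what that token-generation looks like in code, using the Hugging Face transformers library’s GPT-2 model. (The prompt is hypothetical; the point is just that the tokens fall out of a random sampling process, not out of a database of vetted facts.)

```python
# Minimal sketch of step 2: a language model samples tokens one at a
# time, with no notion of whether the resulting sentence is true.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The suspect in the case,", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,       # sample randomly from the next-token distribution...
    top_k=40,             # ...restricted to the 40 likeliest candidates
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```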
Without the third step, there wouldn’t really be a problem, at least not from a utilitarian point of view. Having a computer generate false statements into `/dev/null` might be considered totally fine.
The third step certainly depends somewhat on the first step; but it also depends on the presentation of the program’s output in a context where the human is predisposed to trust what it says, such as a customer-service chatbot or a bot that writes news articles.
You might think that the output of a program is obviously nothing but fiction and falsehood, and only a fool would take anything generated by it as fact. (Also known as the Tucker Carlson defense.) But average people don’t work that way.
Another big problem here is that the people doing the training (step 1) don’t control how the program is going to be used (step 3). Once the real-world-significant data has been encoded into a model and released into the wild, anyone can use it to generate news articles or other trusted content.
The idea that AI models can generate false statements about private individuals is worrisome, but it strikes me that AI models generate just as many false statements that aren’t about individuals. In terms of how we want society to look, is there really much difference between “A--- D---, 35, was indicted” and “The European Union currently has six member states” or “Dominion Voting Systems’ CEO is Cesar Chavez, brother of Hugo”?
In other words, our society dislikes giving platforms to gossips and/or freeloaders, and we’ve built laws (GDPR, HIPAA, DMCA) that reflect those biases; but we also dislike giving platforms to indiscriminate liars, yet we don’t really have any laws around that. And unfortunately it’s really easy to make money by being an indiscriminate liar (and people need money), so people do that.
“Automatic lie generators” like GPT-2 and GPT-3 simply increase those people’s profit margins.
It’d be nice — but unrealistic, I know — to see the AI field collectively abandon their decades-long focus on mathematics-driven mimicry and return to some more Hofstadterian idea of “true AI” — the pursuit of some kind of program that could somehow be taught to value “honesty” — or for that matter to “value” anything. (What does that mean? We don’t know, and that’s why AI is hard, and that’s why philosophy-free machine learning has effectively taken over.)
No answers here, just thoughts.