Machine-generated content helping spread fake news

I recently participated in a discussion about the role of machine-generated text in the spread of fake news.

The context of this discussion was the report titled How Language Models Could Change Disinformation. The progress made by the industry in algorithmic text generation has led to concerns that such systems could be used to generate automated disinformation at scale. The report examines the capabilities of GPT-3, an AI system that writes text, to assess its potential use for promoting disinformation (i.e., fake news).

The report reads:

In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.

What follows is my take on this.

What machines can and cannot write

In my opinion, we need to distinguish between AI’s ability to produce Turing-test-passing articles (articles that look as if they were written by a human) and its ability to generate intellectual property. On the former: I’m sure it does an excellent job, just as I’m sure a less excellent job would have been equally useful. A long piece of text that expands on the message of a couple of sentences has its uses, such as when listening while driving or reading while tired. We are not always as sharp as we want to be, and we match the text we consume to our current mental capacity. On the latter: AI text generation works no miracles. It can write an op-ed that you will understand and resonate with, but probably not one that you will appreciate intellectually, word for word. I’m sure GPT-3 can produce good, readable text, but I am equally sure it cannot produce knowledge.

Machines’ role in spreading fake news

Fake news is an excellent use case for machine-generated content. Readers consume fake news because they enjoy seeing those maybe-facts in writing. More text repeating the same mantra is even better, as long as it’s not exactly the same text by exactly the same author. If fake news is a problem, then this problem is rooted in the desire to consume content that comes through a questionable supply chain, regardless of who wrote that content.

Unfortunately, the problem of fake news is here to stay, with AI or without it. Human text generators are already cheap, perhaps almost as cheap as their AI equivalents. They do not generate intellectual property; they take a message and expand it into multiple forms, in a way that is engaging to read: precisely what fake news needs. Technology did not create the fake-news phenomenon, and it will not solve the problem either.

For more, see a previous post: The Fake News problem will not be solved by technology.

