Machine-generated content helping spread fake news
I recently participated in a discussion about the role of machine-generated text in the spread of fake news.
The context of this discussion was the report titled How Language Models Could Change Disinformation. Progress in algorithmic text generation has raised concerns that such systems could be used to produce automated disinformation at scale. The report examines the capabilities of GPT-3, an AI system that writes text, to assess its potential use for promoting disinformation (i.e., fake news).
The report reads:
In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns? If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets.
What follows is my take on this.