
AI and Fake News: How Technology Has Changed Modern Propaganda


“The misinformation crisis is a symptom of the enormously significant ways that the internet has changed our information ecosystem, which I don’t think anybody, least of all governments and traditional media, has fully got to grips with.” Al Baker, Managing Editor at Logically

If you’ve been on social media at all in the past decade, you’ve likely spoken to at least one synthetic person without even realizing it.

To be honest, probably more than one. In the early months of 2020, Twitter purged 174,000 bots it claimed were being used by the Chinese government.

This is what propaganda looks like today. But no, you say, propaganda is kitschy, cartoonish posters from the 1940s, and, once upon a time, you’d be right. The truth is, though, modern propaganda is almost completely digital – and not nearly as easy to spot.

Dictators use bots to stifle uprisings. Political candidates use bots to promote their platforms. Civil rights advocates use bots to schedule protests and share information. That new startup has bots to raise its brand profile, and if you’ve ever scheduled an automated post, you’ve used one, too.

Like most technology, bots aren’t malevolent by nature; how they’re used, however, is another matter entirely.

This article is going to examine the methods and origins of modern propaganda and what’s been done to combat it.

Onward, we go:

Is there a difference between “fake news” and propaganda?

The short answer is: no. The long answer is, well. Long.

People have been trying to influence each other’s opinions for thousands of years and, undoubtedly, they will continue to manipulate each other for many thousands more. As long as there are commodities and consumers, there will be propaganda.


“War is peace
freedom is slavery
ignorance is strength”
George Orwell, 1984

Fake news. Clickbait. Disinformation. Misinformation. Deepfakes. Propaganda. What does it all really mean, and how do you know which you’re dealing with?

To answer that question, let’s rewind a bit and look at how modern propaganda got its start.

A brief history of propaganda

In the Kermanshah Province of Iran, west of Tehran, lies the small village of Behistun. Just outside the village stands Mount Behistun, home to one of the most important historical sites in the world: the Behistun Inscription.

“I am Darius, the great king, king of kings, the king of Persia, the king of countries, the son of Hystaspes, the grandson of Arsames, the Achaemenid.” King Darius I the Great, approximately 520 BCE

The inscription goes on for an impressive 515 lines in the original Persian cuneiform – just shy of 10 A4 pages once translated to English.

What does an ancient Persian king have to do with modern propaganda?

The Behistun Inscription is important for two reasons. First, because the monument is multilingual, it serves as the equivalent of the Rosetta Stone for cuneiform.

It is also, according to many historians, one of the earliest known examples of propaganda.

Modern Propaganda: Behistun Inscription
(Source)

The word “propaganda” evokes certain feelings and ideas in all of us. You may think of regimes like North Korea or the Third Reich. Perhaps your mind leans more towards organizations like Hydra and Vought. Maybe you guiltily think of the copy you wrote this morning to promote the latest app to hit iTunes. (Don’t forget to leave a review! RTs pls!)

Maybe it’s my age, but I think of George Orwell and 1984.

Lately, that distinctive, bold red and black eye seems to be even more pernicious than when I first read the book as a precocious teenager excited by the idea of rebelling against state (read: parental) oppression.

We are all rebels at 16, but propaganda hasn’t always been propaganda.

Modern Propaganda: 1984
(Source)

I’ve already mentioned King Darius I. He was not the only person from the ancient world to use propaganda, though. Themistocles (Greece, 480 BCE) used it to defeat Xerxes. Alexander the Great (Macedonia, 356-323 BCE) put his face on coins, monuments, statues, and pretty much anything else. Julius Caesar (Rome, 100-44 BCE) was considered an expert propagandist, as were many other Roman writers.

The first use of the word, however, isn’t until 1718: a shortened version of “Congregatio de Propaganda Fide” (congregation for propagating the faith). This was a committee of Catholic cardinals established to oversee foreign missions.

The modern definition of propaganda – material meant to persuade or advance a cause – only dates from around 1920, in the wake of the First World War. Even then, though, propaganda itself was more neutral than negative. Propaganda in the early 20th century was most often used as a means to rally and unite the populace in times of war.

Posters urged children to save their quarters for war stamps, households to ration food, and everyone to buy liberty bonds to support the war effort. And, of course, we still recognize Uncle Sam’s iconic portrait some 100 years later.

That is not to say that those posters didn’t serve a political agenda, or that they never blatantly appealed to racial stereotypes and emotive responses to further those agendas.

Okay. War propaganda – not the greatest example. Let’s talk about cookies.

In early October, Oreo posted a series of tweets in conjunction with PFLAG, using Oreos to depict some of the flags of the LGBTQ+ community as part of National Coming Out Day. They followed it up the next day with the #ProudParent hashtag, which users could share to enter a giveaway for the featured rainbow cookies.

Modern Propaganda: Oreo Tweet
(Source)

Is this an instance of a corporation jumping on the bandwagon of social responsibility to increase profits? Maybe.

Is it harmful? Not really. Who doesn’t love cookies?

Is it promoting a particular agenda? Obviously.

Is it modern propaganda? Absolutely.

This ad is actually performing three different functions as propaganda:

  1. It says Oreo promotes inclusivity and if you promote inclusivity, too, buy their cookies.
  2. It tells the LGBTQ+ community that Oreo supports them, so they should buy more cookies, too.
  3. It’s 5 linked tweets prominently featuring their product (at 7.6K retweets and 26.3K likes) so… buy the cookies already!

Granted, not everyone reacted positively to the tweets, and cynicism aside, Oreo has been consistent with this campaign. The point is, though, 7.6 thousand people engaged with the Oreo brand because of one tweet.

Propaganda is everywhere, but don’t freak out. Like the bots, modern propaganda itself isn’t inherently nefarious. How it’s used, however, is a huge problem, particularly in the digital age where anyone with an internet connection and a clever idea can go viral with the right amount of effort.

But surely, if we’re aware of the effect, it won’t really work, right?

Wrong.

There are a number of reasons we fall for fake news and propaganda. Most significant, though, is the fact that all propaganda – whether from a political campaign, a cookie company, or the high school cheerleading team – is specifically designed to make you react.

So how does this whole propaganda thing work, anyway?

Is modern propaganda really doing anything new?

In 1937, the Institute for Propaganda Analysis (IPA) created a list of seven tactics used in propaganda. The IPA closed in 1942 due to its inability to remain impartial during World War II, but many still use its list as a starting point. Others have expanded the list, cataloging sometimes up to 50 different propaganda tactics.

What makes modern propaganda effective, though, is really very simple: it makes you feel.

It’s not a matter of feeling happy or sad; propaganda targets things like fear, security, identity, pride. It taps into those deeply-held, visceral emotions you can’t even describe, all those biases you don’t even know you have, and it exploits them.

Let’s talk about earworms. You know, that song you heard for five seconds while you popped into the shop but five days later the chorus is still on repeat in your head?

Propaganda is the earworm.

Researchers claim that earworms need five components to work: surprise, predictability, rhythmic repetition, melodic potency and receptiveness.

(You’re welcome.)

The difference between the propaganda of last century and the modern iteration is volume. Television, YouTube, radio, social media, blogs, apps – we consume ridiculous amounts of information every day. Our brains, however, can only take in so much. The more we consume, the harder it is to process; the harder it is to process, the less receptive we are.

The human brain uses an immense amount of energy, so it cuts corners. These cognitive shortcuts essentially mean that the easier something is to process – through rhyming, legibility, or repetition, for example – the more we like it. The more positive we feel about a thought, the more we believe it.

Modern Propaganda: Hands. Face. Space.
(Source)

With what is known as “information pollution” (fake news) pumped into our daily feeds nonstop, combined with the fact that we find ourselves outside our circle of competence more and more, deciding what is true and what is not becomes nearly impossible.

This is called the Illusory Truth Effect. It means, essentially, that no matter how well-informed, intelligent, or discriminating we are, we are all likely to believe something that isn’t true at some point.

Modern Propaganda: Hillary Clinton
(Source)

*This never happened.

What is “fake news”?

Fake news does what it says on the tin: it’s untrue information spread under the guise of factual news.

The origin of the term is contentious. Inarguably, it’s gained increased traction since 2016 via news and media outlets, and several researchers, journalists, and the occasional world leader claim to have coined the term – though, fittingly, no one can really back that up with any evidence.

While the phrase “fake news” may or may not have been in use as long ago as the 19th century, everyone agrees that yellow journalism very much was.

In 1883, Joseph Pulitzer (yes, that Pulitzer) bought the New York World and built it into the highest-circulation newspaper in the US by using sensationalism, scandal-mongering, and eye-catching headlines with the sole purpose of enticing readers.

A few years later, William Randolph Hearst came on the scene. The intense rivalry over who could provide the most lurid and outlandish reporting quickly came to be known as yellow journalism. It changed American journalism forever. We don’t call it yellow journalism anymore, but modern propaganda still uses the same tools today.

Things like banner headlines, color comics, illustrations, and pretty much all of the Internet.

Modern Propaganda: Connectivity
(Source)

With so much misinformation and disinformation circulating, “fake news” seems like a pretty appropriate phrase to use. Except it’s not that simple. In a somewhat ironic turn of events, the term, “fake news,” has become propaganda itself.

While some still use “fake news” to refer solely to false news stories, others use it as a catch-all term for any disinformation being pushed on the public. Still others use it as an accusation of media bias and to discredit anyone who doesn’t agree with them. Even respectable mainstays of legitimate journalism have been tagged as “fake news.”

This is where the term becomes really dangerous. How many times have you been on Twitter, scanned an article quickly, and hit that retweet button only to find out it was satire or parody? How many times have you seen a tweet and thought, This can’t be true, to learn later that it very much was?

We live in – let’s face it – a very bizarre moment in history. The drive to have more, better, faster connectivity is facilitating the spread of information and discourse like never before. News stories are being published, then revised, then updated, then even sometimes removed completely minute by minute. We read our news on Twitter, Facebook, YouTube; commuting to work, walking down the street, grabbing a sandwich between meetings.

Information is spreading faster and faster, and we are giving it less and less critical attention. So, what’s the solution?

Meet the AI generating synthetic content

Earlier this year, NPR reported that researchers at Carnegie Mellon found that 45% of the Twitter accounts spreading information about COVID-19 were bots – or, at least, accounts that behaved more like robots than humans.

It’s no secret that artificial intelligence (AI) has been used to create and disseminate “fake news,” or, more accurately in this case, synthetic content. Not all AI is created equal, however, and, for the most part, the AI at work on your social media is nowhere near the likes of Data or HAL.

Your average chatbot is what’s considered Weak AI. This means it has a specific, programmed task to perform, and, when it comes right down to it, chatbots are not that bright. Over short interactions, they may be difficult to pick out, but over time, it becomes pretty easy to pinpoint who is bot and who is not.
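
To make the “not that bright” point concrete, here is a toy sketch of how a narrow, keyword-matching bot might behave. Everything in it – the keywords, the canned replies, the function name – is hypothetical and invented purely for illustration; it is not any real bot’s code, just a picture of how a scripted, single-task program gives itself away over a longer conversation.

```python
import random

# Hypothetical, illustration-only sketch of a "Weak AI" amplification bot:
# it pattern-matches a few keywords and recycles canned replies, which is
# why longer conversations tend to expose the script.
CANNED_REPLIES = {
    "election": ["The polls are rigged, everyone knows it!",
                 "Wake up, the result was decided in advance."],
    "vaccine": ["Do your own research before trusting them.",
                "They never show you the real numbers."],
}
FALLBACK = ["Interesting.", "Exactly!", "RT if you agree."]

def bot_reply(message: str) -> str:
    """Return a canned reply if a keyword matches, otherwise generic filler."""
    for keyword, replies in CANNED_REPLIES.items():
        if keyword in message.lower():
            return random.choice(replies)
    return random.choice(FALLBACK)

print(bot_reply("Did you watch the election coverage last night?"))
print(bot_reply("What did you have for breakfast?"))  # the filler gives it away
```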

But, since AI is being used to create fake news, people began to ask: Could AI be used to detect it?

GPT-3

OpenAI’s GPT-3 is likely the text-generating AI you’ve heard most about. The third iteration of OpenAI’s deep learning language model generated headlines when the organization, citing the potential for misuse, restricted access to a gated API rather than releasing the model publicly. OpenAI has been pretty tight-lipped about the API in general, but presumably GPT-3 isn’t going to take over the world any time soon.

The idea behind GPT-3, though, is to perfect it to the point where it can generate large amounts of synthetic text indistinguishable from something a human would create.

GPT-3 even wrote an article for The Guardian, which is a more or less convincing demonstration of its capabilities. I’ve certainly read worse journalism.

However, while its creators have (responsibly) flagged the tech’s potential misuse for abuse, spam, plagiarism, and violating legal processes, others, such as Gary Marcus writing in MIT Technology Review, have conducted their own experiments. According to an article Marcus wrote shortly before The Guardian piece appeared, GPT-3 doesn’t hold up to closer scrutiny.


“Although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says.” Gary Marcus

Additionally, GPT-3 needs a human to input text to get it started. This could be a sentence or a phrase, but, while GPT-3 is an important step forward in the development of AI possibilities, it still depends on human support.
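
To see what “depends on human support” looks like in practice, here is a minimal sketch of prompt-seeded text generation. GPT-3 itself sits behind OpenAI’s gated API, so this sketch uses the openly available GPT-2 model via Hugging Face’s transformers library as a stand-in; the prompt text is just an invented example, not anything the models above were actually fed.

```python
# Minimal sketch of prompt-seeded text generation, assuming the Hugging Face
# transformers library is installed. GPT-2 stands in here for GPT-3, which is
# only reachable through OpenAI's gated API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"  # the human-supplied seed text
results = generator(prompt, max_length=60, num_return_sequences=2)

for i, result in enumerate(results, start=1):
    print(f"--- Continuation {i} ---")
    print(result["generated_text"])
```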

Modern Propaganda: Artificial Intelligence
(Source)

Grover

In May 2019, a group of researchers from the University of Washington’s Paul G. Allen School of Computer Science and Engineering published a paper on what they call a “state-of-the-art defense against neural fake news.”

Or, Grover.

Like GPT-3, Grover is a deep learning language model. The goal with Grover, however, is to first teach the AI how to generate text (based on a headline) and, in doing so, make it better at detecting modern propaganda.

I had the chance to play around with Grover – and it was an interesting experience (not to mention plain fun). Grover has two main functions:

  1. Generate text: You input a title, then select a publication to mirror, and even an author. Grover does some research and analysis, then writes an article in that style.
  2. Synthetic detection: You provide the text, and Grover determines whether it was written by a human or a machine (a rough sketch of this kind of detector follows below).
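
Grover’s own detector only runs behind the researchers’ demo site, so here is a hedged stand-in showing what synthetic-text detection looks like when framed as an ordinary classification task. It uses OpenAI’s public GPT-2 output detector from the Hugging Face Hub; the sample sentences are invented, and the exact label names and scores should be verified rather than taken from this sketch.

```python
# Stand-in sketch of machine-generated text detection (not Grover's API):
# detection framed as binary text classification with a fine-tuned model.
# The model below is OpenAI's GPT-2 output detector on the Hugging Face Hub;
# check its labels ("Real"/"Fake") and scores before relying on them.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

samples = [
    "The city council voted on Tuesday to approve next year's transit budget.",
    "In a stunning turn, scientists confirm the moon is, in fact, made of cheese.",
]

for text in samples:
    result = detector(text)[0]  # e.g. {"label": "Real", "score": 0.97}
    print(f"{result['label']:>5}  {result['score']:.2f}  {text}")
```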

Obviously, we tested both of these features out, and the results are… mixed. Last month, we published an article written by Grover: The Ultimate Guide to Content Creation Automation.

Modern Propaganda: Grover's Article

As far as rough drafts go, it isn’t bad. Sure, his syntax is a little… different, and I’m not sure exactly what a MODEL MODEL module is, but you can imagine a somewhat harried content writer churning it out with some placeholder text they forgot to edit out.

Some of our readers seemed equally impressed with the progress of AI-generated content over recent years:

“This is much better than what I’ve seen a few years ago. At a glance, you might not notice. Once you start to read it, the flaws are obvious. It’s certainly improving. I think, with many things, the evolution will be AI making humans more efficient, working in tandem.” – Craig Inzana

Others, however, felt that AI still has a ways to go before it can replicate the human essence:

“Much funnier to read in a bot voice I felt it is missing the human spin, swag to really capture engagement.” – Matt Johnson

On a lighter note, I couldn’t resist checking the Process Street content team for potential AI hiding in our midst. I’m happy to report that Grover certified every one of us as human, except our editor. We don’t hold it against him, though; he’s way more advanced than Grover.

University of Waterloo

Researchers at the University of Waterloo developed their own AI tool in December 2019. This one doesn’t have a cute name and isn’t designed to operate autonomously. It uses deep learning algorithms to determine whether the claims in a post are supported by other stories on the same subject.

While it reaches an impressive 90% accuracy in stance detection (determining whether one piece of text supports, opposes, or is neutral toward another), the creators are clear that this tool is designed to augment human efforts and flag concerns rather than work independently.
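
The Waterloo team hasn’t released their tool as a library, but you can get a rough feel for claim-versus-evidence stance checking with an off-the-shelf natural language inference model. The sketch below repurposes facebook/bart-large-mnli through the zero-shot classification pipeline; the claim, evidence sentences, and label wording are all invented for illustration and are not the Waterloo system.

```python
# Rough sketch of stance detection (does this text support or refute a claim?)
# using an off-the-shelf NLI model via zero-shot classification. This shows
# the general technique only, not the Waterloo team's actual system.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "the city has banned single-use plastic bags"
labels = [
    f"supports the claim that {claim}",
    f"refutes the claim that {claim}",
]

evidence = [
    "Council members voted 7-2 on Monday to prohibit single-use plastic bags.",
    "The proposed bag ban was rejected by the council last week.",
]

for sentence in evidence:
    result = classifier(sentence, candidate_labels=labels,
                        hypothesis_template="This text {}.")
    print(f"{result['scores'][0]:.2f}  {result['labels'][0]}")
    print(f"      {sentence}\n")
```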


“It isn’t designed to replace people, but to help them fact-check faster and more reliably.” Alexander Wong, Waterloo Artificial Intelligence Institute

While these advances in artificial intelligence and fact-checking are both impressive and exciting, we can’t become complacent in the idea that machines will do all the work for us when it comes to modern propaganda.

As Chris Dulhanty, one of the graduate students on the Waterloo project, emphasized, it’s on us as consumers to hold journalists accountable and empower them to keep us informed.

As individuals, though, we also have a responsibility to be more discerning with the information we interact with.

Modern Propaganda: Fake News
(Source)

How you can spot modern propaganda


“If we are not serious about facts and what’s true and what’s not, if we can’t discriminate between serious arguments and propaganda, then we have problems.” President Obama, Nov 10, 2016

How do you decide what’s true and what isn’t? How do you know, for example, if the latest statistic about COVID-19 is accurate or misleading? With so many headlines, sources, and conflicting statements coming at you, it can be difficult to decide who to listen to.

Fortunately, you’re not alone.

Here are eight simple tactics you can use yourself to get better at spotting modern propaganda:

  • Consider the source. Where is the story coming from? Check out the site, its mission, and contact info. Check out other articles it’s published. Ask yourself: Is this site reliable?
  • Read beyond. Don’t just read the headline. Headlines are designed to get clicks. There are guides and formulas ad infinitum about how to entice readers with clickable headlines. I used one for the title of this post. The point is: get the whole story.
  • Check the author. Like the site, is the author credible? Is the author even real? Check them out. What else have they written? Who have they written for? What are their credentials?
  • Supporting sources. Click the links to those sources. Do they support what the author is saying, or have they been misconstrued in some way? Are those sources also credible?
  • Check the date. Old news is reposted as new information all the time, especially on social media. A screenshot of an interview from ten years ago will be edited to fit the narrative of a current issue, or a statement given on a different topic will be repurposed to make the speaker appear a certain way. Make sure your information is as up-to-date and relevant as possible.
  • Is it parody? Sometimes satire and parody get mistaken for real news. For the most part, these sites make it clear that they aren’t serious, but a hastily retweeted link without verification can make something that was meant as a joke appear serious. If something sounds too outlandish (I know, it can be hard to tell these days), do some research to make sure.
  • Check your bias. It’s okay. We all have them. It’s how we understand the world. The important thing, though, is to be aware of them. Are your beliefs affecting your judgment? If you read a negative story about a politician you like, are you more incredulous than with a similar story about a politician you disagree with?
  • Ask the experts. This is probably the most important thing you can do. While our devices do overload us with information, they also allow us to research topics quickly. With very little effort, you can access facts and ideas from experts in any field you can imagine.

If you combine those eight tactics with the apps and sites below, you’ll be much better equipped to tell the difference between fact and fiction.

Logically.ai

When I spoke with Al Baker, Managing Editor at Logically, he advised that, in addition to thinking about whether or not something is true, it’s equally important to consider why it’s being said. As he pointed out, it’s just as possible – and just as common – to deceive people by saying true things as it is by saying false things:

“Think about the way that politicians will so often respond to an awkward question by changing the subject: that’s not a lie, but there’s certainly something dishonest about it. […] Ask yourself, ‘Is this something that a person would tell me who wanted me to understand the issue?'”

Logically is an app for iOS and Android, as well as a Google Chrome extension. Using a combination of AI and human intelligence, Logically provides nearly instant fact-checking while you’re browsing content.

Logically uses what is called “extended intelligence,” which, similar to Wong’s work at the University of Waterloo, is focused on using technology to supplement and support human work. Since I got to speak with Al Baker, I’ll let him explain exactly how it works:

“Our tech team has developed several advanced AI pipelines which ingest massive amounts of open source social media, news, research, and other data, and can point us to interesting things to investigate, tell us what people are interested in knowing about, and point us quickly to high-quality evidence relevant to the claims we fact check.”

To give you an idea, I took it for a spin around Harvard Business Review.

Modern Propaganda: Logically

As you can see, the app evaluates four different criteria:

  • Source credibility
  • Article credibility
  • Sentiment
  • Key players

A trusted site will be shown with a green box; HBR, as is to be expected, has a high rating for source credibility. In addition, if you’re browsing a social media site and a post comes up that contains sensitive or questionable content, Logically will let you know and give you the option of seeing it or passing it by:

Modern Propaganda: Logically Flags Twitter

NewsGuard

This is another app that rates the reliability of websites while you’re browsing. The NewsGuard extension is available on Safari, Firefox, and Chrome, as well as iOS and Android.

I only recently downloaded this extension because a colleague swears by it, but already I’m impressed.

Modern Propaganda: NewsGuard

NewsGuard also gives you a quick overview of whether or not the site is trustworthy, with the added advantage of being able to click on the “Full Nutrition Label,” which gives you the rundown on ownership, credibility, links to sources, history, and all the things you need to know when deciding if a source is reliable or not.

Fact-Checking Sites

In addition to apps, there are also numerous fact-checking websites whose mission is to verify and debunk claims made by various outlets. Make sure you check out these sites if you want to see whether that story your aunt’s cousin’s neighbor’s niece posted on Facebook is true, or just another case of modern propaganda.

  • Media Bias/Fact Check: This website rates news media by factual accuracy and political bias. You can click on different biases (left, left-center, right, etc.) for a list of media with that leaning, or you can search for a publication. For example, searching HBR revealed that it’s rated least biased, with highly factual reporting.
  • FactCheck.org: A project of the Annenberg Public Policy Center at the University of Pennsylvania, FactCheck.org describes itself as a consumer advocate that aims to “reduce the level of deception and confusion in US politics.”
  • our.news: This is another mobile app/browser extension for Chrome and Firefox that provides a nutrition label (Newstrition) for all online news stories. Users can also contribute sources, reviews, and ratings of news and content.
  • Snopes: Snopes started out investigating urban legends and hoaxes, but has since evolved into the largest investigative reporting and fact-checking site out there.


“This isn’t a problem we can talk about solving yet, not until there is a more general realisation that the information crisis is a feature of our new media ecosystem, and not of passing political circumstances.” Al Baker, Logically

How do you feel about AI being used to detect propaganda? Is it too much of a risk that the technology will be used to promote propaganda instead? Let us know what you think!


Leks Drakos

Leks Drakos, Ph.D. is a rogue academic with a PhD from the University of Kent (Paris and Canterbury). Research interests include HR, DEIA, contemporary culture, post-apocalyptica, and monster studies. Twitter: @leksikality [he/him]
