This article is part of a Globe initiative to cover disinformation and misinformation. E-mail us to share tips or feedback at disinfodesk@globeandmail.com.

Have you ever shared something online before checking to see if it was correct? Maybe you did so because it made you angry, or because it was so outrageous you felt as though you had to respond. This is something we’ve probably all done, with no harm intended. But it’s a doorway into the murky realm of disinformation.

Disinformation and misinformation are terms often used interchangeably. The key difference between them is intent.

Disinformation is false content created and shared with the intent to deceive.

Misinformation is the unwitting sharing of something misleading.

These are not new concerns. Author and satirist Jonathan Swift wrote in 1710 that “Falsehood flies, and the Truth comes limping after it.” In 1835, the New York Sun published fantastical articles about life on the moon. What is new today is the adoption of generative artificial intelligence. Generative AI systems produce new content – including images, video, audio and text – based on the data they have been trained on. Generative AI has countless positive uses, but the technology’s ease of access and high-quality outputs can supercharge the creation of misleading information.

This image of Pope Francis in a huge puffer jacket has signs of being AI-generated, including the way his glasses and their shadow appear on his face and a distorted hand.

Specialist journalists, fact-checkers and the open-source intelligence community have been investigating and battling disinformation for years. The Globe and Mail is increasing its reporting on disinformation, and will share insights into how information is verified. But we can’t debunk everything.

This article provides a broad overview of disinformation and misinformation, based on a presentation given to Globe journalists by Patrick Dell, The Globe’s senior visuals editor, and Kat Eschner, a journalist and disinformation trainer. It draws on a range of articles and online resources about disinformation.

Key characteristics of disinformation

Successful disinformation has an element of truth to it. It often uses real people or events to seem more legitimate. An example, identified by Scientific American, is a real video of former U.S. House speaker Nancy Pelosi that was slowed down by someone sowing disinformation so it appeared as though she was slurring her words. The video led to online speculation that she was drunk.

Another element of disinformation is the clever use of psychological tricks, such as rage-baiting to fuel engagement on social media. Someone promoting falsehoods might also use confirmation bias (the tendency of people to believe information that matches their worldviews) to get social media users to like, comment and share.

Disinformation is designed to be shareable, and tends to be visual. Memes, maps, charts, photos, video clips and annotated screenshots can seem relatable and trustworthy at first glance. Peeling back the layers to see where a photo is from, or if a chart uses real data, can be difficult and time-consuming. Disinformation actors hope you won’t do that kind of checking.

Types of disinformation

While intention to deceive is a key part of disinformation, some attempts to spread falsehoods are more malicious than others. First Draft News created this chart, which catalogues different types of misinformation and disinformation, ranking them by their intent to mislead.

First Draft News outlines the spectrum of disinformation, increasing in intention to cause harm from left to right. First Draft News

By this ranking, fabricated content is the most dangerous. Generative AI systems make this type of disinformation very easy to create.

Reuters reported on a 2023 Ron DeSantis campaign ad attacking Donald Trump and his connection to Anthony Fauci. (Mr. Fauci, the now-former director of the U.S. National Institute of Allergy and Infectious Diseases, became a target of right-wing critics because of his prominent role in the country’s pandemic response.) A section of the ad shows six pictures. Three of them are real photographs of the men, but the three where they are embracing were very likely generated by a text-to-image AI system such as Midjourney.

Three out of six images of Donald Trump and Anthony Fauci together in a Ron DeSantis campaign ad are very likely generated by AI. The original ad was posted to X. DeSantis War Room

Mixing real and fabricated content like this makes the fakes look more authentic, and makes them more likely to play to the biases of people viewing them. The Globe is not linking directly to this or other disinformation, to avoid amplifying it.

Do your own analysis

It pays to be extra skeptical. Look past bold claims to see if they are inflated and what, if anything, supports them. The same applies to photos and videos. You may see an image shared online that is a too-perfect match for a story. It could be an old photo recirculating with a new caption, edited in Photoshop or made using AI.

A reverse image search is an excellent way to see if a photo is current, relevant and real. Browser extensions such as RevEye allow users to do this in a couple of clicks.

AI image systems can make pictures that are extremely detailed and seemingly authentic. Websites such as AI or Not can make an educated guess as to whether a photo was produced by one of the big image generators, such as Midjourney or DALL-E. But these sites aren't completely accurate, so use your judgment.

Combination of images made by Midjourney, Bing (top middle) and DALL-E (bottom middle). Midjourney, Bing, DALL-E

To detect an AI-generated image on your own, examine the image's background and other elements for inconsistencies. These can be very subtle, such as distant power lines that don't align or distorted anatomy. If a scene lacks such telltale flaws, it may be impossible to separate real from fake.

Learn more

We highly recommend these additional sources of information:

Note: The disinformation graphic from First Draft News used above is made available under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Generative AI tools that easily make or edit images are cluttering our digital lives with misleading or out-and-out fake content, with consequences for our view of the past, present and future. Patrick Dell, The Globe's senior visuals editor, highlights the challenges we face separating the real from the AI-created.