Ada Lovelace and AI
Ada Lovelace was one of the pioneers of modern computing, working alongside Charles Babbage. She argued that a machine could not be considered “intelligent” unless it produced genuinely original work, not something derivative of the instructions it was given.
AI in 2024
Now, in 2024, generative AI seems to pass the “Lovelace Test”: it can produce work that appears original and was never explicitly written into its programming.
But even the most advanced models available today can only recombine material from their training data and programming; they do not conjure new ideas out of thin air.
Modern AI also lacks a genuine understanding of context, which is why it falls short of true “intelligence” and why it cannot hold opinions of its own.
From robocalls to deepfake videos of candidates, disinformation is easier than ever to create and spread. Thanks to advances in technology, it’s also getting harder to detect.
Types of AI-generated disinformation
Fake still photographs are only one form of the problem; the public should also be on the lookout for other types of AI-generated content.
Deepfakes
Merriam-Webster defines a deepfake as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
One example is a video that appears to show Trump saying, “We need to make antisemitism great again.”
The fabricated Trump says a lot in the video, but perhaps the most impactful words are:
“Mark my words, anti-semites are going to be the ones that save this country. … Bottom line is, if we want to make America great again, we’ve got to make antisemitism great again.”
Trump never said these words. This video was posted on Twitter/X and was briefly circulated before being debunked.
Deepfakes are becoming more difficult to detect as technology improves, but here are some things to watch for:
- Watch the speaker’s lips. Do they move in sync with the audio and form the right shapes for the sounds being spoken?
- What language is the audio in? Is it in a language that makes sense for the speaker? (e.g. is Joe Biden suddenly speaking fluent Tagalog?)
- Are their lips blurry and sometimes offset from the rest of their face?
- Listen to the audio. Does their voice sound metallic, as if they’re trapped in a tin can, or sound odd when speaking certain words? Do they breathe?
- How believable is what the speaker is saying? Does it fall within reason that they would say something like this, or is it not like them at all?
- Current AI-generated videos have trouble keeping a subject consistent from frame to frame, so watch for morphing facial features and body parts that seem to flow like liquid (a rough automated version of this check is sketched after this list). These artifacts will likely become less noticeable as the technology improves.
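For the technically inclined, the consistency cue above can be roughed out in code. The sketch below is a minimal illustration, not a real deepfake detector: it assumes Python with the opencv-python package installed and a hypothetical local file named suspect.mp4, and it simply flags frames where the detected face’s position or size jumps abruptly.

```python
# Minimal sketch of an automated face-consistency check.
# Assumes opencv-python is installed; "suspect.mp4" is a hypothetical file.
# Large, erratic jumps in the face's bounding box can hint at the
# "morphing" artifacts described above. This is a heuristic, not a detector.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture("suspect.mp4")  # hypothetical input file

prev_box = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]  # first detected face
    if prev_box is not None:
        px, py, pw, ph = prev_box
        # Flag frames where the face moves or resizes abruptly.
        if abs(w - pw) > 0.3 * pw or abs(x - px) > 0.3 * pw:
            frame_no = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
            print(f"Inconsistent face geometry near frame {frame_no}")
    prev_box = (x, y, w, h)
cap.release()
```

The 30% threshold is an arbitrary choice for illustration; real forensic tools use far more sophisticated signals, such as facial landmarks and lighting consistency.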
AI audio
Disinformation isn’t limited to deepfakes and low-budget “cheapfakes.” It also comes in the form of AI-generated audio, like the fake Biden robocall that went to New Hampshire voters in January.
- Several of the cues from the video section (the language, the sound of the voice, the believability of the claims) apply to audio as well. AI-generated audio may skip words or sound metallic, as if the speaker is trapped in a tin can.
- If you’re still not sure whether it’s AI-generated, you can inspect the audio’s spectrogram in software like Adobe Audition; a rough do-it-yourself version of that inspection is sketched below.
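For readers who want to go beyond listening, the kind of spectrogram view Adobe Audition offers can be approximated in a few lines of Python. This is a minimal sketch, assuming matplotlib and scipy are installed and a hypothetical local file named robocall.wav; AI-generated speech sometimes shows unnaturally clean harmonics, abrupt spectral cuts, or missing breath noise between phrases.

```python
# Minimal sketch of spectrogram inspection for suspect audio.
# Assumes matplotlib and scipy are installed; "robocall.wav" is hypothetical.
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, samples = wavfile.read("robocall.wav")
if samples.ndim > 1:  # mix stereo down to mono
    samples = samples.mean(axis=1)

# Plot the spectrogram; look for missing breath noise between phrases
# and abrupt cuts in the frequency content.
plt.specgram(samples, Fs=rate, NFFT=1024, noverlap=512)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of suspect audio")
plt.show()
```

Reading a spectrogram takes practice, and none of these cues is conclusive on its own; treat the plot as one more data point alongside the listening checks above.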
Seeing is not believing
With AI-generated content becoming harder to distinguish from reality every day, avoiding disinformation can be difficult. But there are still ways to protect yourself.
Look at everything – especially on social media – with a skeptical eye.
Ask questions like:
- Where is this information coming from? Is the original source reputable?
- What kind of claims are being made? Are they reasonable or out of the ordinary?
- Does the information use a neutral tone and language to get the point across, or is it inflammatory and trying to evoke an emotional reaction?
- Is the organization or the person who shared the information selling something?
- What motive could be behind sharing the information?
Finally, check whether several news outlets are reporting the same thing; independent confirmation helps verify facts and can surface additional information.