Dear Readers,
For the last three days, I've been pondering a crucial question: how do AI-generated fake images impact our ability to function, especially in times of crisis?
The question arose from the sheer volume of AI-generated images I encountered on Twitter while researching Hurricane Milton.
One picture showed a girl shivering and holding her puppy in a boat on a flooded street. It was widely shared by Twitter users, including a senior leader of the U.S. Republican Party.
Other AI-generated pictures portrayed former President Donald Trump in a favorable light. In one, he stood on a flooded road, cradling a newborn child with a U.S. flag in the background.
In another, he was shown wearing a life jacket and wading through floodwaters, and in yet another, handing out paper towels to children.
The underlying message was clear: the Republican leader was actively helping America during a crisis, while the sitting president was not. A photo of President Biden calmly sitting on a beach further fueled this narrative.
The Spread of Misinformation During Natural Disasters
Is this a new phenomenon? The answer is both yes and no.
Image manipulation has long been used for propaganda; Nazi propaganda is a case in point.
Lisa Fazio, an associate professor of psychology at Vanderbilt University, discusses this in a review published in Nature. She writes:
The psychology and history of misinformation cannot be fully grasped without taking into account contemporary technology. Misinformation helped bring Roman emperors to power, who used messages on coins as a form of mass communication, and Nazi propaganda heavily relied on the printed press, radio, and cinema.
So, while misinformation isn't new, the tools and reach have evolved. The one-word explanation for this shift is "AI."
The paper further elaborates:
Today, misinformation campaigns can leverage digital infrastructure that is unparalleled in its reach. The internet reaches billions of individuals and enables senders to tailor persuasive messages to the specific psychological profiles of individual users. Moreover, social media users’ exposure to information that challenges their worldviews can be limited when communication environments foster confirmation of previous beliefs — so-called echo chambers.
Let's unpack that last point about echo chambers.
For example, if you believe that the earth is flat, you are more likely to come across social media content that confirms your bias rather than challenges it.
Then there is belief perseverance: once you have accepted a belief as your own, you tend to hold onto it even when it is challenged.
Just ask yourself: if you somehow had the tools and the science to check whether God exists, would you even want to?
Reality Becomes a Perception Game
Our perception of reality is essentially a collection of pieces of information that we accept as real, factual things.
This perceived reality is not static; it expands with each new piece of information we come across.
If we come across a claim that the earth is flat, that the moon landing never happened, or that the assassination attempt on Trump in July was staged, we weigh it against this collection, often without ever realizing that we are doing so.
Would you accept a revered stone as a spaceship just because you feel like it? Probably not, because the idea challenges everything you believe about spaceships.
So, if you received a WhatsApp message claiming the stone is a spaceship, you would immediately reject it as total BS, or, in other words, misinformation.
But if such pieces of misinformation do infiltrate this collection of facts, they can profoundly impact your decision-making.
Research indicates that even after misinformation is corrected, it can continue to influence reasoning, a phenomenon known as the "continued influence effect."
In the context of natural disasters, misinformation can have more immediate and dangerous consequences.
If an AI-generated image, say a fake leaked government document, convinces you that the crisis is orchestrated or exaggerated, you might decide against evacuating when rescue workers arrive. Such a decision could be the difference between life and death.
Filling the Gaps with Lies
History shows that crises are breeding grounds for conspiracy theories, misinformation, and disinformation.
The period between the two World Wars and the Cold War era are prime long-term examples of this phenomenon.
The Kennedy assassination, the Illuminati, and the MK-Ultra project, to name just a few.
These conspiracy theories fascinate people to this day, and likely will in the future, because there are gaps to be filled.
Take, for instance: who is Satoshi Nakamoto, the creator of Bitcoin, the world's first cryptocurrency?
Many documentaries have tried, and failed, to answer this very question, and more will follow, because there is a gap in the world's collective information on the subject. That gap is exactly why such questions fascinate people.
Crises create a similar vacuum where misinformation can easily thrive. When official communication is slow or ambiguous, people seek answers to fill the void, often clinging to whatever information is available. This was evident during the COVID-19 pandemic, when theories about the origins of the virus proliferated in the absence of verified data.
Fazio explains:
Misinformation offers a way for people to plug gaps of uncertainty with at least something. When communication systems go down, when family members can’t be contacted, and when official responses haven’t yet been issued, rumors take root.
AI Is Getting Smarter
This raises a question: wouldn't people be left to their own devices, given that public trust in major institutions is eroding while misinformation such as deepfake videos, images, and audio becomes ever more believable as AI technology rapidly advances?
I found the answer in the World Economic Forum's Global Risks Report.
It highlights misinformation as one of the most significant threats in the coming years:
Societies may become polarized not only in their political affiliations but also in their perceptions of reality, posing a serious challenge to social cohesion and even mental health. When emotions and ideologies overshadow facts, manipulative narratives can infiltrate the public discourse on issues ranging from public health to social justice and education to the environment.
Returning to Trusted Sources
This raises a fundamental question: whom should we trust during crises, especially natural disasters?
Casey Newton, a tech journalist at Platformer, suggests on the New York Times podcast Hard Fork:
People may need to revert to traditional institutions for reliable information, as they did before because no matter how emotional or partisan people feel, they want the truth at the end of the day. And, in a world dominated by AI slop, you're probably going to need an institution to help you figure that out.
But wouldn't that be like putting all our eggs in one basket?
Instead, we need a mix of many small, creative solutions: running mass digital literacy campaigns, holding social media companies accountable for what they let go viral on their platforms, and putting checks and balances on major institutions like governments and newspapers.
The Cost of Lies
Because in the absence of all of the above and more, you end up with costly lies.
And what is that cost? When all or nearly all images on the web of a developing natural disaster come from AI tools that outsmart the real ones, we won't just mistake the fake for the real; we will lose our very ability to discern fake from real.
The opening monologue from HBO's Chernobyl resonates deeply here:
"What is the cost of lies? It's not that we'll mistake them for the truth. The real danger is that if we hear enough lies, then we no longer recognize the truth at all. What can we do then? What else is left but to abandon even the hope of truth and content ourselves instead with stories?"
When it comes to crises like Hurricane Milton, AI-generated misinformation doesn't just muddy the waters; it can drown us in them.
Further Reading:
How fake Hurricane Milton AI images can have real consequences
I’m Running Out of Ways to Explain How Bad This Is