In this series, I explore various ways we can adapt to the coming onslaught of fake information and AI-generated content. Subscribe to my blog to be notified when the next ones are published!
In the 2000s and early 2010s, the internet brought a radical shift in how we understood what was real. Remember the Arab Spring? A "revolution fought over Twitter," where direct access to powerful megaphones allowed protesters to share their messages with the world in real time?
At the same time, news organizations lost subscription revenue and were forced to compete in a clickbait attention economy, eroding both their quality and their ability to uncover facts.
We now interpret an image from a cell phone as more "true" than one printed in a newspaper. Millions of people with cameras can handily beat a few professionals. We have all become reporters.
It's a natural human inclination to believe the individual over the institution, especially when we distrust our institutions and see their agendas and biases.
Enter the fakes
Into this world enters AI-generated content. We've anticipated this for a while, and now it's happening. The tsunami is coming ashore, and fake photos and videos are already tricking people.
A few days ago, an AI-generated photo of an explosion near the Pentagon caused panic, briefly wiping an estimated $500B off the S&P 500 before markets recovered. The image wasn't real, but its impact certainly was.
We should all be wondering this week — when 99% of the photos we see are fake and most of the "people" online are bots, how will we know if the Pentagon is on fire?
Avoiding Magical Thinking
It's a common tendency to wish for magical innovations that will revert the information landscape back to what it used to be. I'll be going through the issues with some proposals like deepfake detectors and sensemaking AIs later in this series.
If such innovations happen I'll be thrilled, but to truly prepare for this possible future, we must internalize these assumptions:
- It will not be possible to tell whether an image or person on the internet is real. Full stop.
- AI-generated content will be more than twice as persuasive as any content we have yet experienced. Yes, dear reader, it will persuade even you, as intelligent as you are.
- The response by the people who are worst at media literacy is what matters most. A fake that is unconvincing to you but nevertheless tricks millions impacts all of us.
Truly internalizing the above brings the coming tsunami into sharper focus.
Only Institutions Remain
If we can't get trusted information from an online account that belongs to a stranger, that leaves us with two options:
1. People we know personally
2. Institutions
Everything else will become noise.
And because we cannot learn much about the broader world from our personal connections alone, we are left with institutions.
The best way to solve this problem is to build and maintain institutions that are accountable, transparent, and tasked with discovering the truth.
This means going back to trusting the New York Times, Wall Street Journal, Associated Press, etc. It means paying them to discern the truth.
This still allows for networks of trust, but bolstered by large, centralized, accountable organizations.
I know, I know. Institutions SUCK. They've lied to us over and over, colluded with governments, hidden scandals, and demonstrated extreme bias. Now that reporting is decentralized, we can better see how narrow their coverage has always been. Going back to the old paradigm of three cable news channels feels like a painful death.
And yet, I maintain that we have no other option.
How will Reddit be usable in the age of GPT-10?
So what do we do?
To trust institutions, we need to create institutions that we trust
I'm not saying that we need to roll over and accept the hegemony of the existing order. We can create new organizations and have input on how they operate, or fix existing ones.
What's critical is:
1. We pay accountable entities, whose job is to discover what is true.
2. These entities are large enough that accountability can be enforced.
Once these institutions exist and are trusted, they can delegate trust to others, certifying reporting by their standards.
There is no limit to how many such institutions we build, as long as they are large enough that many people can hold them accountable.
It's unfortunate that the very organizations built to do this have been gutted by the rise of the internet. We will have to reverse this trend — to build them up better and stronger. And I am sure that we can do better than the news we had in the late 20th century.
Going back to the old way, "this person might be lying to me," sucks...
But the new way, "this person probably doesn't exist," is worse.
I want to hear from you. What do you think will work?