On a recent flight from DC, my neighbor remarked to me, “The scariest thing about AI is that people are literally designing it to control us.” I think he’s very wrong.
He was a well-preserved grandfather, an economist and former CIA analyst, whose progeny were sitting across the aisle. He gestured at them, their vacant eyes transfixed by iPads, and I felt a surge of pity and understanding.
The grandfather was wrong, though, because the scariest thing about AI is not that it is being designed to control us. The scariest thing about AI is that it will change us to be easily controlled, whether we program it to do so or not. I think we should call this AI’s Realignment Problem. (This is in contrast to the Alignment Problem, which gets plenty of press.)
Jonathan Haidt, the Facebook Files, and others have exposed the way tech companies deliberately try to addict their customers. I think this emphasis is seriously misplaced. If we’re most worried about what firms do deliberately, we will miss the dangerous ways the market might be causing the technology to evolve accidentally.
Here's the Realignment Problem: the AI ecosystem will reward AI that succeeds in
“dumbing down” its audience. Not its content, its audience. And this is new. The Marvel Cinematic Universe was by no means the first cataract of crap to drown American culture by catering to the lowest common denominator, but the MCU and its ilk all catered to a taste that already existed. Our culture got the garbage it wanted. But imagine if AI could cultivate—could groom—the tastes of its audience. Imagine it could train you to like some things, and not like others. What then?
Say you instruct an AI to “maximize clicks.” You’re not being evil—you’re not using cognitive psychology to find creative ways to exploit human vulnerabilities—you’re just giving it a generic command. What will happen? The market will reward AI that learns to exploit human psychology. It will reward an AI that learns to addict you and infantilize you. And no human, indeed maybe not even the AI itself, will know what it’s doing or how it’s doing it.
The same problem, of course, happened long before AI. The market rewards casinos, MSG, and sugary sodas. Billboards are written in ugly, attention-grabbing sans-serif fonts. Beer ads use beautiful women to sell Bud Light (or at least, they used to). Etc. Is the Realignment Problem really that new? The Marxists have long complained, after all, that capitalism manufactures the very desires it then satisfies. What’s so different about AI?
The difference, I think, is twofold: the pre-AI market evolved slowly enough for us to adapt, and AI is uniquely able to trap a person in his own present. Let me explain.
Norms need time to evolve. It took thousands of years to invent “do unto others as they do unto you.” It took another half-millennium to get to “do unto others as you would have them do unto you,” and even then we needed divine revelation. And since the Industrial Revolution, technology has often outpaced morality. It took us decades to realize that child labor, tolerable on a farm, is not acceptable in a factory. That in a mass, urbanized setting we would need strong norms against littering. That dropping nuclear bombs on your enemies, even when justified, might nonetheless lead to a very, very dangerous world. Yet, eventually, we did learn and implement these new norms. We adapted.
For the most part, human mores have been able, eventually, to catch up to technological advances. We restricted the supply of casinos, and the market now rewards products that can proudly declare “No MSG!”
With AI, though, we have invented a technology that, by its nature, will tend to evolve faster than human society. It also lacks a stopping point. Thermonuclear weapons upended several thousand years of evolved social norms. We suddenly needed new norms to govern coercion, diplomacy, and military power. And after a tense decade of social and technical evolution, we solidified MAD. But what would have happened if the cutting edge of nuclear technology hadn’t plateaued? It would have been like trying to plant a garden during an earthquake. Nothing would have stayed still long enough for us to wrestle with it.
So that’s my first worry—that AI, unlike previous technologies, will not give us a chance to catch up to it.
Here’s the second reason I worry about Realignment: AI will tend to reproduce our immaturities. It will tend to give us what we want today. On its face, this doesn’t sound so bad. I’m honestly not worried about Skynet. I don’t think the Battlestar Galactica cyborgs are coming for us. Nor are we going to bury the world in aluminum paper clips. In the decades ahead, I think AI will almost always be much more eager to give us what we want than it will be to start shooting us. I’m not worried about it exterminating us; I’m worried about it enslaving us with our own appetites.
The Christian (and the Buddhist) is supposed to mortify the flesh, to put to death the desires that enslave him and thus pull him away from God. Christianity requires repentance, and part of Christianity’s problem in the West is that people don’t recognize how impoverished is modern life. “We are far too easily pleased.” Sin isn’t always or even usually dramatic. Rather, sin is usually soporific. It lulls you asleep. ‘Why do I need to repent? I’m a pretty good person with a pretty good life’—say the damned.
Likewise, educators are supposed to lead our students to something higher: “You like Phantom of the Opera? Now try Beethoven.” Or, “You like romcoms? Now try Jane Austen.” It’s not up to a student to intuit what better culture she ought to be consuming; it’s up to her educators to show her. Quoth Aristotle: “the purpose of education is to inculcate good taste in the young.”
But trying to draw someone upward is a risky strategy. A student might not resonate with Beethoven, or he might find Austen too difficult. The safe strategy, if you want to be sure someone will like what you recommend, is to give him more of the same. Which is exactly what AI does.
It’s the same logic that gives rise to the internet’s echo chambers. Many people have traced the decline of journalism to internet-age business models: to the need to give the customer what he wants. (My favorite piece in this vein is Jon Askonas’ “How Jon Stewart Made Tucker Carlson.”) When journalism sought to reach the most people possible for generic ad revenue, it needed to at least pretend to objectivity; when instead it began seeking subscriber revenues, it needed to give its subscribers what they wanted to hear.
AI enables each of us to create an echo chamber of one. And not just with our politics: our taste in sports, novels, films, everything. The age of screens is an age of narcissistic mirrors. And like Narcissus, we will eventually die from staring at nothing but ourselves.
So that’s why I’m worried. So far as I can tell, the smartest people in the room are usually talking about the Alignment Problem, whether they’re worried about it (Eliezer Yudkowsky, Toby Ord, the Effective Altruists, Scott Alexander) or whether they think it’s overblown (Brian Chau, Tyler Cowen). The overeducated, middle-brow folks in the room (professors, technocrats) are usually fretting about misinformation or racism. The inattentive public, whether smart or dumb, is worried about Skynet. People with a tech background know enough to worry about energy consumption, which the lay public never connects to AI because it runs so cheaply after it’s trained. And so forth. But honestly, I don’t see any group of people consistently worrying about what I’ve called the Realignment Problem. It occasionally crosses the EAs’ radar, but otherwise it seems to be far down everyone’s list. I think it should be at the top.
Let me go back to the distinction between deliberate and accidental malevolence. Our political discourse is riddled with conspiracy theories these days, and they’re almost all wrong. Why? Because conspiracy theories are almost always cope.
I’ll say it again. Conspiracy theories are cope. If there’s a conspiracy, there’s an easy solution—expose the conspiracy, jail the baddies, and the problem goes away. But if there’s not a conspiracy—what then?
What if AI will enslave us not because an evil conspiracy deliberately programs it to do so, and not because it becomes superintelligent and escapes our control—but because the market will reward AI for giving us exactly what we want?




Good piece!
I think this is actually a subset of the alignment problem as opposed to a different problem entirely. Part of one’s values is one’s meta-values (i.e., the things one would want oneself to want). It seems like you’re arguing that AI can change what we want (as the social media algorithms already have). However, what we want to want is itself just a preference that AI can then be misaligned to.
I do like the framing you give a lot.
One issue that I have with this piece, though, is that it frames this as a larger problem than alignment (and the associated risks) itself without much justification. While I agree that this type of thing is a problem (and can be somewhat distinguished from alignment as it’s normally stated), I don’t see why it would be a bigger deal than the potential risks associated with super-capable misaligned AI systems.
Excellent work. I especially enjoyed this paragraph:
"The Christian (and the Buddhist) is supposed to mortify the flesh, to put to death the desires that enslave him and thus pull him away from God. Christianity requires repentance, and part of Christianity’s problem in the West is that people don’t recognize how impoverished is modern life. “We are far too easily pleased.” Sin isn’t always or even usually dramatic. Rather, sin is usually soporific. It lulls you asleep. ‘Why do I need to repent? I’m a pretty good person with a pretty good life’—say the damned."