4 Comments
Noah Birnbaum

Good piece!

I think this is actually a subset of the alignment problem as opposed to a different problem entirely. Part of one's values is one's meta-values (i.e., the things one would want oneself to want). It seems like you're arguing that AI can change what we want (as social media algorithms have). However, what we want to want is itself just a preference that the AI is then misaligned to.

I do like the framing you give a lot.

One issue I have with this piece, though, is that it frames this as a larger problem than alignment (and its associated risks) without much justification. While I agree that this type of thing is a problem (and can be somewhat distinguished from alignment as it's normally stated), I don't see why it would be a bigger deal than the potential risks of super-capable misaligned AI systems.

Winston Margaritis

Excellent work. I especially enjoyed this paragraph:

"The Christian (and the Buddhist) is supposed to mortify the flesh, to put to death the desires that enslave him and thus pull him away from God. Christianity requires repentance, and part of Christianity’s problem in the West is that people don’t recognize how impoverished is modern life. “We are far too easily pleased.” Sin isn’t always or even usually dramatic. Rather, sin is usually soporific. It lulls you asleep. ‘Why do I need to repent? I’m a pretty good person with a pretty good life’—say the damned."

Thomas

Great piece. Changing customers' desires is Sales 101. It is as old as dirt and is the heartbeat of fashion. The MSG analogy is apt, but MSG, casinos, and sugary sodas create demand by creating addiction, intentionally changing desires. This makes me think that the engineered addictiveness highlighted by Haidt et al. fundamentally intersects with your concern. It's like selling drugs to a kid: get him hooked, and he becomes an addict with new appetites that govern his behavior and drive the market. The market always rewards those who exploit human psychology and infantilize us, right? I suppose the accidental facet is indeed different with AI. There is now a tertium quid between seller and consumer that both muddles the manipulation and helps us exonerate ourselves from complicity ("I just told it to maximize clicks!"). This article seemed to make some related claims: https://christianscholars.com/the-real-problem-with-chatbot-personas-in-response-to-derek-schuurman/

Ross Byrd

Brilliant piece. "In the decades ahead, I think AI will almost always be much more eager to give us what we want than it will be to start shooting us. I’m not worried about it exterminating us; I’m worried about it enslaving us with our own appetites." Yes. I've been thinking/writing about the same sort of thing. Keep it up!