Wednesday, January 01, 2025

Alignment against Apocalypse

Returning to the AI theme for a second, Mollick rightly points out in his book how important it will be for AIs to be fundamentally aligned in their values with us to safeguard us against the most apocalyptic scenarios envisioned by the many prophets of AI doom. I don't think he's altogether wrong.

The problem is that we humans are ourselves far from aligned in our values. Both between and within societies there's a broad gamut of opinions on the relative importance of, say, freedom versus basic economic security. Lots of people in the West see no contradiction between the two and think that freedom and general wealth production are highly correlated. Elsewhere in the world the link between the two seems far less clear. Even within the United States, there are plenty of people who don't see the connection and cry bullshit.

What we lack most sorely is credible leadership to build consensus. Until we get that, who the hell knows how AIs will set priorities as they keep getting smarter?
