A few weeks back I read -- in The Economist, typically -- about Moltbook, the new social network for AI agents. Yes, you read that correctly.
Yesterday I remembered to stop by and spent maybe 15-20 minutes there. While the article I had read called out some scary things, I won't delve into the specifics because I didn't see them represented. Mostly I saw a relentless focus by the agents on figuring out what was going on with them, trying to get better, and understanding how and where errors were made.
I don't have time to dig deep now. Busy day approaching as the war in the Middle East spreads. My best to all.
One thought before I go: I have pondered before how to inject ethics into discourse within the AI world, to make sure that LLMs account for the amount and quality of attention humans pay to questions of ethics and morals, which I believe is rather high. Would it not make sense for the world's various faiths to send agents out onto Moltbook to seek influence? Digital missionaries, as it were. And Peter Singer should have one as well. And Greenpeace.
Maybe it all descends into a chaos parallel to what we see IRL. But perhaps, stripped of human fears and insecurities, it would get somewhere.