Two pieces have appeared in the NY Times in the past couple of days that relate to the argument I was making in the Cathedral Lectures. The first, from yesterday, is a Ross Douthat interview with Daniel Kokotajlo entitled "An Interview with the Herald of the Apocalypse"; the second, from today, is by Cade Metz, entitled "Why We're Unlikely to Get Artificial General Intelligence Anytime Soon". The titles tell you what you need to know about their respective takes on AGI. Kokotajlo co-wrote a paper predicting that AI will reach 'superintelligence' by 2027; Metz thinks probably not. Kokotajlo's hair is on fire; Metz is chill.
I don't have the time to do a deep dive into the details of these pieces, so I recommend that you read or listen to them yourselves, but a couple of quick thoughts:
What strikes me after reading both is how fuzzy everybody seems to be about what 'intelligence' is, or, as I argued in the Cathedral Lectures, what a human being is. And the problem, as I argued, is not so much that machines are becoming more human, but that humans are becoming more machine-like. This has become mainstream through the transhumanist project embraced by some of the leading figures in Silicon Valley to whom we are entrusting the future of humanity. These are people who, I have argued, have the moral maturity and emotional intelligence of middle schoolers. So that alone is pretty disturbing. That's the first issue, and it's a big deal, but at least the more alert and sane among us can find ways to resist becoming more machine-like if we choose to.
The second issue relates to Metz's contention that there will always be things that humans can do that machines can't. I agree, but that does not address Kokotajlo's concern. Kokotajlo doesn't argue that machines can do everything humans can, but that they will develop capabilities that so far surpass our ability to understand or control them that they pose an unprecedented threat to the existence of humanity itself. And he argues, alarmingly, that a threshold in this regard will be reached in 2027/28. The real issue is not so much what machines can't do that humans can, but what machines can do that humans can't. If there is even a 10% chance that Kokotajlo's scenario will happen–whether in 2027 or 2037–his alarm is justified. Unlike becoming more machine-like, the extinction of humanity is not something the sane among us can resist simply by choosing to.
BTW, I've started reading Capitalism and Its Critics: A History from the Industrial Revolution to AI by John Cassidy. It's such a depressing story because it's a story of Capitalism's seeming inevitability and the callow justification of its cruelties. Yes, its cruelty in crushing the lives of ordinary people, but also in the ways it dehumanized its elites. Sad, shriveled souls like Trump and Musk are its apotheosis. And sad, too, because of the futility of all those lively souls who, from Capitalism's onset, tried to resist it and to imagine something better.
Since its beginning, Capitalism has been rejected by anybody with half a soul, whether on the cultural left or right. Some of the greatest figures on the Right, from Carlyle and Nietzsche through Heidegger, have been among its fiercest haters. It's not a left/right thing; it never has been. Rather it's a soul/soulless thing. The challenge going forward is to recognize that for all its material benefits Capitalism has been a spiritual-cultural disaster. While the MAGA Right has no answers, neither do Marxism and the secular Left. Although I accept much of its trenchant critique, Marxism provides no solution because it occupies the opposite face of the same materialist coin as Capitalism. We need a new coin made from a different metal, something new, something from the origins.