AI and the Current Zeitgeist

Apologies if I'm throwing more AI stuff at you than you care to read, but things in the world of AI are happening fast, and the argument I've been making here for the last couple of years is that what's happening there is more important to understand and to resist than whatever is happening in the political sphere. It certainly doesn't help that we have given almost complete power to a party that has become a mindless/spineless personality cult and that as such is utterly incapable of meeting this moment. But the Democrats aren't much better. Do we really think that Chuck Schumer and Hakeem Jeffries are capable of meeting the AI moment?

So my focus here at ATF is two-pronged: the first is to keep up with the reporting on what's happening with AI and how it's shaping the zeitgeist; the second is to pursue the themes I've been developing in the “Rescuing Aristotle” posts. I was interested to see this morning an NYT piece by Leif Weatherby, the Director of NYU's Digital Theory Lab, that addresses the latter; he closes his piece with these grafs:

The head of Anthropic, which runs the ChatGPT competitor Claude, recently admitted that we have “no idea” how A.I. works. He didn’t mean that engineers don’t know how to code up a large language model. He meant that we don’t yet understand how these systems interact with meaning. Why is a chatbot able to use language? What does that ability imply about the mathematical processes that underlie it? Deep questions like these require humans to answer them. To understand our own culture in the age of A.I., we will need high-level math classes and deep study of literature and linguistics. It might be that the crucial insight a student needs will come from a course on “Don Quixote” — as quixotic as that may sound.

In other words, I think we have to return to the liberal arts to make our way forward in the age of A.I. For students today, studying math and language might turn out to be the only way to be flexible enough to face a rapidly changing market in which a computer science degree is no longer a guarantee of a job. But it’s also a way to deepen our humanity in the face of these strange machines we have built, and to understand them. And that is something that A.I. will never do.

We humans are symbol-making, meaning-making creatures, and one of the big problems with life in the TCM is that the power of symbols, and so our experience of meaning, has become so profoundly diminished; symbols have lost their sacramental power and have become one-dimensional and inert for all but a few.1 A restoration of the humanities means an awakening to the multidimensionality of the human being, and that means a recovery of sacramental religion. A religion that is sacramental is not at all what fundamentalists and reactionary dogmatists mean by religion.

So, yes, a return to the liberal arts would be refreshing if by such a return we mean a recovery of the humanist tradition that traces back to our Greco-Judaean roots, as well as retrieving the post-Axial roots of all the great global civilizations, all of which have a rich, multidimensional understanding of the human being. The global intelligentsia needs to start having a conversation about what it means to be human, and that must necessarily open things up to what our premodern global ancestors thought about it. Pope Leo might be the guy to get the ball rolling on that. He seems uniquely situated to do it.

But today, I want to talk more about the first prong, AI and the zeitgeist. I've excerpted below from Joshua Rothman's piece in the New Yorker entitled "Two Paths for A.I." Rothman reports on his interviews with Daniel Kokotajlo, a former safety guy at OpenAI who quit because the boomer/accelerationist culture there was hostile to his doomer fears. (I wrote about Kokotajlo's podcast interview with Ross Douthat several days ago.) Rothman contrasts Kokotajlo's hair-on-fire anxieties about the imminence of AI superintelligence before the end of the decade with the more conservative, skeptical views of Kapoor and Narayanan, whom he also interviewed. So who's right?

Please read the whole article to get a more objective view, because I am clearly not an objective guide. I'm excerpting the following grafs from Rothman's piece because they contain the nut that I want to crack:

Which is it: business as usual or the end of the world? “The test of a first-rate intelligence,” F. Scott Fitzgerald famously claimed, “is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” Reading these reports back-to-back, I found myself losing that ability, and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. “AI 2027” and “AI as Normal Technology” aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope.

In the parable of the blind men and the elephant, a group of well-intentioned people grapple with an unfamiliar object, failing to agree on its nature because each believes that the part he’s encountered defines the whole. That’s part of the problem with A.I.—it’s hard to see the whole of something new. But it’s also true, as Kapoor and Narayanan write, that “today’s AI safety discourse is characterized by deep differences in worldviews.” If I were to sum up those differences, I’d say that, broadly speaking, West Coast, Silicon Valley thinkers are drawn to visions of rapid transformation, while East Coast academics recoil from them; that A.I. researchers believe in quick experimental progress, while other computer scientists yearn for theoretical rigor; and that people in the A.I. industry want to make history, while those outside of it are bored of tech hype. Meanwhile, there are barely articulated differences on political and human questions—about what people want, how technology evolves, how societies change, how minds work, what “thinking” is, and so on—that help push people into one camp or the other.

When a technology becomes important enough to shape the course of society, the discourse around it needs to change. Debates among specialists need to make room for a consensus upon which the rest of us can act. The lack of such a consensus about A.I. is starting to have real costs. When experts get together to make a unified recommendation, it’s hard to ignore them; when they divide themselves into duelling groups, it becomes easier for decision-makers to dismiss both sides and do nothing. Currently, nothing appears to be the plan. A.I. companies aren’t substantially altering the balance between capability and safety in their products; in the budget-reconciliation bill that just passed the House, a clause prohibits state governments from regulating “artificial intelligence models, artificial intelligence systems, or automated decision systems” for ten years. If “AI 2027” is right, and that bill is signed into law, then by the time we’re allowed to regulate A.I. it might be regulating us. We need to make sense of the safety discourse now, before the game is over.

You think?!

Reading this article in the ambient glow of having recently watched Mountainhead, I'm struck that, of the differences between Silicon Valley and the Eastern academics, the most important is that the former are doing stuff and the latter are just watching them do it. The former are making all the decisions about how fast and how safely they want to go, and the latter are just studying what happens as they do. And, as the article points out, much of what the AI companies are doing is invisible to academics like Kapoor and Narayanan, especially as it relates to the military. (No worries there with Pete Hegseth in charge.)

What's obvious to me, having spent most of my career around academics, is that Kapoor and Narayanan are conventional academic types who assume that no assertion can be taken seriously until there is irrefutable, unambiguous proof to establish it. So basically Kapoor and Narayanan are just saying that they haven't seen enough from the AI companies to prove that what those companies claim is likely to happen will happen any time soon. Maybe they are right. Maybe the view from Silicon Valley is all Adderall-and-shrooms-driven hyperbole. I hope so. But I really, really doubt it.

Why? Because we have an accelerationist-dominated Silicon Valley culture that is run by moral morons who revel in moving fast and breaking things, who see themselves as uebermenschen who know better than the rest of us peons, and who have contempt for any chickenshit, decel, p(doom) traitors from their own ranks, guys like Daniel Kokotajlo.

I know. It's hard for normal outsiders to believe such people exist except on the fringe. "Really," you might say, "nobody who runs a huge, successful American company is as ridiculous as the clowns depicted in Mountainhead. That's just way, way over the top. Right?" But even if they are not nearly as bad as all that, isn't it profoundly disturbing that they have such enormous, unconstrained power?

And I think we have good reason to fear it is as bad as all that. We're approaching the ten-year anniversary of Trump coming down the escalator. Did anybody in June 2015 take him seriously? Did anybody believe then that such an ignorant, morally vacuous clown would be running the oldest democracy and the most powerful economy and military on earth? Did anybody expect then what we've seen happen since? And isn't it true that the more we find out, the more absurd and criminal what he and those around him have done, and are doing, turns out to be?

We should assume that what we don't know is far worse than what we do. What reason do we have to believe that the ignorance and moral vacuousness of the people surrounding Trump isn't precisely the same as the ignorance and moral vacuousness of the tech billionaires? Should we all not be seriously disturbed by Elon Musk's bizarre behavior? Do we think he's just an aberration? So what possible rational basis do we have to assume it's going to be any different with all of these AI accelerationists? Haven't we learned yet that preferring something not to be true doesn't make it so?

We are living in a world where there are simply no longer any norms or constraints on the richest and most powerful, and they have proved time and again that they are capable of any bizarre, improbable thing. That's the unsettling truth to which Mountainhead points. We are fools if we expect any of these billionaires, in tech or in any other sector, to behave like normal humans. Their wealth and power have bent them. Clearly these people have no self-restraint, no inconvenient taboos, and, so far, no one else has demonstrated the power or moral authority to impose any constraints on them.

Honestly, have we not yet learned that the stupidest possible way to think about the future is to expect things to go pretty much the way they always have? But don't worry, Narayanan and Kapoor think it's gonna be ok, because it usually is. We've gotten through so far, right? We haven't blown ourselves up with nukes, so probably we'll get through this, too. Go shopping, everybody. No worries.

Notes

1. I address this theme in the second of the Cathedral lectures, where I talk about Baudrillard’s ‘precession of the simulacra’. The last stage, what Baudrillard calls the “Semiotic Stage”, is where we are now. The ‘semiotic’ means for Baudrillard that meanings for humans have no depth; they are only lateral in their reference. Everything is one-dimensional. For Baudrillard, the last moment when the sacramental, or the multidimensional, played a central civilizational role in the West was the Renaissance, but I’d argue that it’s always there, even if only at the fringes, among some artists and saints.
