There is a view that AI is just hype, no different from crypto or Web3. I think this is wrong. I was going to write about why, but I just heard a now month-old episode of The Ezra Klein Show where he lays it out perfectly. Here is a transcript…
If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations: an explanation that would be impossible to understand. That’s the point. That is the weirdest thing about what we are building. The thinking, for lack of a better word, is utterly inhuman. But we have trained it to present as deeply human. And the more inhuman these systems get, the more billions of connections they draw in, the more layers and parameters and nodes and computing power they acquire, the more human they come to seem to us.
The stakes here are material: about the jobs we have or don’t have, about income and capital. And they’re social: about what kinds of personalities we spend time with and how we relate to each other. And they’re metaphysical too.
The writer Meghan O’Gieblyn observes,
“As AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon, arguably we already do, find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with AI. O’Gieblyn writes,
“If there were gods, they would surely be laughing their heads off at the inconsistency of our logic.”

If we had eons to adjust, perhaps we could do so cleanly. But we don’t. The major tech companies are in a race for AI dominance. The US and China are in a race for AI dominance. Money is gushing towards companies with AI expertise. They’re going faster. To suggest they go slower, or even stop entirely, has come to seem somehow childish. If one company slows down, another is going to speed up. If one country hits pause, the other is going to push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
I think Katja Grace, who’s an AI safety researcher, summed up the illogic of this position really well. “Slowing down,” she wrote,
“would involve coordinating numerous people. We may be arrogant enough to think that we might build a God machine that can take over the world and remake it as a paradise, but we aren’t delusional.”
I think one of two things must happen, or should happen. Either humanity accelerates its adaptation to these technologies and its governance of them, or a collective, enforceable decision has to be made to slow them down.