
Marc Andreessen says it very succinctly in his June 6th article in the Andreessen Horowitz newsletter:
And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. … This cult is why there are a set of AI risk doomers who sound so extreme … they’ve whipped themselves into a frenzy and really are … extremely extreme. It turns out that this type of cult isn’t new – there is a longstanding Western tradition of millenarianism which generates apocalyptic cults. [Underlining by the author]
In 1983 I organized the first Facing Apocalypse conference, the beginning of a series of conferences, running from 1983 to 1992, about the apocalyptic imagination in politics. My hypothesis was that millenarianism would increase around the advent of a new millennium. It had happened to a degree around the year 1000, and there was no reason to assume that a similar increase would not occur around the year 2000. We picked several themes: in 1983, the apocalyptic imagination in the nuclear issue (the cold nuclear winter); in 1990, ecological apocalypse (the hot climate crisis) and the apocalypse of the Soviet Union, both in Newport, Rhode Island, under the auspices of Senator Claiborne Pell, then Chairman of the Senate Foreign Relations Committee; followed by a 1992 conference in Moscow about the apocalyptic imagination in Holy War, with the participation of, among others, Mikhail Gorbachev, HH the Dalai Lama and Crown Prince Hassan of Jordan. Then in 1999 I was asked to participate in meetings organized by the Naval War College in Newport around the global threat of Y2K, where I spoke about millenarianism.
At all these conferences and meetings, the theme of the apocalyptic imagination was the same: there would be global human extinction or gigantic catastrophes, with the doomsday clock permanently stuck at a few minutes to midnight. If we did not control these dangers immediately and radically, they would spin out of control. Usually, the time frame between now and irreversible doom was about ten years.
Political millenarianism only develops around issues that carry profound inherent danger. The nuclear issue in the Cold War of the eighties was objectively dangerous, as is climate change. This time, bad actors have been handed an undreamed-of Christmas present in the form of generative AI with which to wreak havoc. But the latter is not what the apocalyptic worriers about human extinction fear. It is AI itself: AI by itself, like a wrathful god, will cause humanity to be superseded by an uncontrollable greater intelligence.
The United States itself is based on apocalyptic imagination: the City on the Hill; Novus Ordo Seclorum (a new order of the ages), says the US $1 bill. These days we have become a brave new world of the lucky few keeping out the ones who clamour to enter, much like the Rapture.
What is the apocalypse? It is the Holy War between good and evil; the revelation of a New World, the Heavenly Jerusalem of purified spirit; the destruction of mankind except for a few who shall be saved in what is called the Rapture, in which the True Believers are exempted and pass over into heaven on earth. This is called the Millennium.
Now the extinction-by-AI adherents wage a holy war against a perceived future in which humanity will be superseded by a greater intelligence, pure spirit. I don’t know whether, in their opinion, anyone will be saved as in the Rapture. Others see in generative AI the revelation of a glorious, almost heavenly future for humanity, a vision that is also of the apocalyptic ilk.
One element that was raised at all Facing Apocalypse conferences was that there is a heroic aspect to the imminent extinction of humanity. It makes us live at the End of Days as a very special generation with very special responsibilities to immediately save the earth. In my opinion we are just one of the many generations in the march of the endless mystery of Continuity. Nothing special.
What I’m NOT saying is that AI is not dangerous. I’m saying that precisely because of its potential for great harm in the hands of some nasty humans, it is a perfect issue around which the apocalyptic imagination can gather and build up a head of steam.
Attune Media Labs, PBC, is the fulfillment of an old dream I started to work on in 1997: to create empathic, emotionally intelligent AI companions that can read your emotions and respond in a way that mimics human consciousness. Frequently when we tell people about this, especially now that we have a functioning prototype, they react with fear. This fear is always cloaked in rational arguments: people should communicate with people; AI is dangerous, even its original creators say so. I’m not surprised to sense the same fear I felt during the Facing Apocalypse conferences. It has two elements to it: rational fear and a non-specific undercurrent of dread. The rational fear stems from the fact that AI in the hands of bad actors does indeed carry dangers within it. And I encourage us all to approach generative AI with caution, the way one would approach a wild animal one knows could have predatory inclinations under particular conditions.
But the underlying dread is another matter altogether. This dread has its origin in awe, as in awesome, being in awe, or awful. Awe is the numinous experience that arises when we meet the totally Other, an entity we have never encountered before that has undreamed-of powers. We are in awe. This may act upon the perceiver as a moment of numinous wonderment (this is awesome), or as a source of dread (this is awful), or both simultaneously.
In his encounter with dreadful awe at the first detonation of a nuclear weapon on July 16, 1945, Robert Oppenheimer quoted the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” The quote comes from a moment when the warrior Arjuna asks the god Krishna to reveal himself. In the eleventh chapter of the Gita, Krishna manifests as a sublime, terrifying being of many mouths and eyes. It is this moment that entered Oppenheimer’s mind in July 1945. His translation of it was: “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one.” Recalling the fireball at the Trinity nuclear test site, Oppenheimer said, “We knew the world would not be the same. A few people laughed, a few people cried, most people were silent.”
We are again confronted with mighty splendor, a presence with seemingly endless potential and power. Some people laugh, some people cry, but most are in silent awe. Again, dreadful destruction can come of it if humans unleash it in order to destroy. But if we realize that we are faced with something that far exceeds our power of comprehension, throwing us into an uncertain future, it may be best to silently reflect on our dread or elation, now that we know once more that the world will never be the same again.
The potential of generative AI has us in awe. The dread this generates may blind us to the caution we need when entering this new world that triggers our apocalyptic imagination. Dread doesn’t make us more cautious. Dread paralyses us.
As a psychoanalyst, I have found that separating rational fear from archetypal dread helps a person be more alert and gets them out of their frozen state of overwhelming feelings. After working with the dreams and nightmares of victims of the tsunami in Japan for the last decade, I have seen time and again that if the victim on the shore can identify with the power of the wave coming at them and feel it rushing through their body, they can handle their overwhelming feelings much better and become more alert in their lives, leaving their post-tsunami paralysis behind.
Let’s feel the awesome power of the tsunami of generative AI coming at us so we can be on alert and avoid paralysis.



