AI Armageddon: Calculating Humanity's Doom or Brilliant Power Play?

I was caught off guard when someone first asked me about my "peedoom" at a conference about the future of AI. Since then, I’ve learned that it is written as P(Doom) and represents the probability (P) that artificial intelligence poses an existential risk to humanity.


In other words: it measures the chance of AI bringing about our downfall, and the term has been widely adopted by the tech 'in crowd'. Inquiring about someone's P(Doom) is akin to asking about their health or the wellbeing of their children. At that conference, I heard a very diverse set of P(Doom) estimates, ranging from a few percent to well over 50%.

A few years ago, I had the privilege of meeting the Swedish philosopher Nick Bostrom, who is not exactly a cheerful chap, to be honest. He assigns a P(Doom) of 100 percent and wrote about it passionately in his bestseller 'Superintelligence'. At the Future of Humanity Institute in Oxford, he and a group of mathematicians designed models to calculate our chances of survival if computers become smarter than us. His conclusion: zero. A word of advice: never book Bostrom as a motivational speaker; his audience is sure to leave the room radically depressed.

Public Manipulation

Another key figure sounding the AI threat alarm is Sam Altman, co-founder and CEO of OpenAI. He's on track to become the poster boy of this AI revolution and recently went on a highly publicized global tour, persuading world leaders that 'AI is very dangerous, you guys'.

His most impressive performance was undoubtedly his testimony before the US Senate. Rarely have I seen such a masterful display of mass influence, a masterclass in public manipulation, from which even old Vlad Putin could learn a lot. If you haven't seen it: make some popcorn, grab a glass of pinot noir, and enjoy two hours of pure entertainment.

Altman goes to great lengths to instill a deep fear of AI's risks in the senators. Then he says something like: 'Look at medication. Fortunately, we have the FDA (the US food and drug regulator), otherwise so many accidents would happen, right?' Brilliantly constructed. The senators naturally conclude that a federal agency should regulate AI.

But the absolute highlight of that spectacle has to be when a senator asks 'Mr. Altman, thank you so much for warning us. If we were to establish such an agency, would you be interested in leading it?' To which the cunning fox superbly replies something like 'Well, I'm incredibly flattered. But I really like my current job.'

Altman is also one of the signatories of the infamous statement from the 'Center for AI Safety', along with other tech glitterati like Bill Gates and Ray Kurzweil. The statement consists of a single sentence: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' It's been quite a while since I read something so utterly bland.

Altman even threatened to bypass Europe if we pushed our "idiotic" AI regulation through. It's clear that he has a lot of demands and knows how to play his cards exquisitely. Who remembers the ordeal of Meta CEO Mark Zuckerberg and his awkward confrontation with politicians? Altman, on the other hand, is clearly a next-generation public affairs master who brilliantly understands how to manipulate politicians.

Regulation is obviously necessary. But whether we should blindly accept the solutions of this new nerdy emperor is another matter. As Virgil wrote in the Aeneid: Timeo Danaos et dona ferentes, 'I fear the Greeks, even when they bear gifts.' I believe we would be a lot better off with less doomsday thinking, fear-mongering, and hysteria over AI.

My personal “peedoom” is less than 1 percent, in case you wondered.