What Makes Us Human In The Never Normal

My conversation with bestselling author, essayist and columnist Meghan O'Gieblyn.


The pace of change in AI has been so phenomenal that I believe we’ll need a new set of mental tools for looking at the world and the role of technology in it. That’s why I greatly enjoyed renowned author, essayist and columnist Meghan O'Gieblyn’s book “God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning”, because it investigates the impact of technology on core human concepts like identity and knowledge, as well as the very nature and purpose of life itself.

If we want to build a future world that’s worth living in, we’ll need philosophers, historians, artists and other humanities experts as well as engineers and managers. And we’ll need to reinstate the humanities in our educational foundations. That’s why, as an engineer myself, I was so excited to be able to interview Meghan about her uniquely philosophical, historical and even (former) religious perspective on AI.

Literate about the history of ideas

It was interesting to notice how Meghan seemed a bit wary of the tech crowd’s obsession with novelty and predictions about the next big thing. She is temperamentally more drawn to the past and to how certain conversations and challenges keep getting recycled through different lenses. Many quite technical conversations about AI and free will, for instance, or AI and immortality (by uploading your mind to a computer) and AI and identity, are very similar to the debates among the early church fathers in the third and fourth centuries about resurrection and the afterlife.

The context was entirely different, obviously, but they are essentially the same highly speculative questions. That’s why Meghan believes that we can benefit from becoming more literate about the history of those ideas, because our current version of them could be enriched by the knowledge of previous solutions and answers to those questions.

When it comes to “old” insights and ideas being useful for contemporary ones, I loved her example of the pioneers of quantum physics and the depth of their philosophical knowledge. “Heisenberg was very interested in the Greek philosophers and was able to connect learnings from Plato and Zeno with the workings of quantum physics”, she said. “And that really enriched their work.”

Shifting goalposts

We may also see machine intelligence offer an alternative to traditional religions, and it will probably even create schisms within certain religions. Meghan gave the example of a group of Christians who had reinterpreted the Bible in ways inspired by transhumanism and singularitarianism, seeing these as conduits for becoming immortal and going to “heaven” at the end of time.

One of the most fascinating aspects of AI is how our thinking about it keeps evolving as well. Even top experts like Rodney Brooks are questioning their own past assumptions. He recently wrote, for instance, that the latest evolutions have basically rendered the Turing test obsolete. It seems we just keep moving the goalposts and keep shifting our perspectives on the matter. “The conversation about what makes humans unique is probably one of the most fundamental here”, stated Meghan. “It's remarkable how quickly we keep moving the area of our human exceptionality and where we seem to define those last islands of uniqueness as humans: the Turing test, chess, the game of Go and even creativity.”

“In my opinion these ideas will always keep moving because it is not a fixed objective”, explained Meghan. “It's a dialectic conversation, in the sense that we're always trying to figure out what it means to be human in reference to something else. For a long time, we tried to figure out what makes humans distinct compared to animals. Now we are doing the same with machines”.

Hope or fear?

I also find it truly fascinating how the tech community responds with either hope or fear to artificial intelligence, even some of the most prominent tech luminaries like Elon Musk. In a recent interview, he talked about the falling-out he had with the Google founders. They were incredibly excited about the latest evolutions, but Elon asked them to be cautious because he believed that AI also holds the potential to destroy humankind. Apparently, they called him a “speciesist” for being overly concerned with preserving humanity. The Google founders were convinced they might create something new that would be beautiful in its own right, and they believed that we as humans do not own the exclusive right to be where we are forever.

And then you also have people like Marc Andreessen, a strong proponent of the effective accelerationist movement, who believes that AI is capable of ushering us into an era of enormous positive outcomes. AI seems to create a lot of emotional polarization when it comes to its impact on humanity’s future.

“I don't think it's coincidental that the people who are the most optimistic about the technology are people who stand to gain a lot of money from it”, added Meghan. “But the truth is just that we don't know. I'm not a doomer by any means but I’m also not an accelerationist at all. I tend to be most interested in people who are sort of cautiously optimistic and who have a more measured approach to the development of AI.”

“I’m also skeptical of that impulse to imply a speciesism when it comes to fear of the possibilities of AI. Human exceptionalism has certainly had devastating effects on our environment and on other biological creatures. And there's indeed a temptation to extend that to AI and say “who are we to say that we're unique?” and “why can't we make space for other types of consciousness, other types of intelligence?”. But the difference here is that we're the ones who are directing that evolution. And there are a lot of important conversations that need to be had about what that looks like. I am skeptical of how corporations are using that language as some sort of politically correct way of selling this technology that they're going to profit from, obviously.”

Ethics: all talk, (almost) no action

In the book, Meghan writes about the need for a debate about AI and ethics. And while there is a lot of talk about the subject in many companies and at industry events, there is very little evidence of companies putting it into practice. On the contrary, as Meghan explained, “a lot of the people who have been concerned about ethics at major corporations, like Google’s Timnit Gebru and Margaret Mitchell, have been fired. There's interest in ethics until it threatens the business model. These types of difficult questions require a slower form of development. But these companies are all in the middle of an AI arms race and nobody can really afford to get behind right now. So ethics has become some sort of afterthought for them.”

“People really like to make comparisons with earlier technologies, where we did figure out how to make them work in ways that were beneficial to humanity”, Meghan explained. “But we also had a lot more government involvement at the time. And a lot of the research was happening at universities where there was a lot less pressure over the bottom line. So the context of ethics in commercial and very powerful companies is very different from before.”

A stability board

I thought Ian Bremmer’s input was interesting. He wrote that we should approach the AI industry like the financial sector: since individual actors could cause a systemic crisis, we need a stability board that could identify such a crisis, should it arise, and respond immediately. Even more interesting was who he thought should be on that board, which included a lot of leaders from the big corporations. I personally found that strange, but he has a point in that we need mechanisms and governance to be able to deal with these types of crises.

And you could say that legislation is one part of the answer. But on the one hand we have the executive order on AI from the White House, which was basically drafted to protect the role of the US in the AI debate and arms race. And on the other, we have the European AI Act, which is indeed based on values and ethics, but which also has AI researchers in Europe wondering if they should leave the continent because there are a lot of things they can't do anymore. So, rather than installing effective guardrails, legislation has also become part of a very interesting geopolitical debate.

Either way, I think it is safe to say that we have a lot of thinking (and acting) to do if we want to steer the AI industry in ways that will be beneficial for humanity, society, the environment and the economy. And Meghan’s book is a great way to start that journey.

If you're curious about the impact of AI on your organization, check my updated keynote page, featuring two new keynotes about AI, including a duo presentation with China (tech) keynote speaker Pascal Coppens.