A Conversation with Professor David De Cremer on Humanity, Leadership, and the Future of Work.
Having a meaningful conversation about leadership and humanity, in an age where AI seems set on changing everything, is more relevant than ever. That’s why I was excited to interview Professor David De Cremer, Dean of the D'Amore-McKim School of Business at Northeastern University and a professor of management and technology.
David’s background spans an interesting array of domains, ranging from psychology and behavioral economics to philosophy and computer science. His latest research and books, including "The AI Savvy Leader: 9 ways to take back control and make AI work" (Harvard Business Review Press), focus on understanding more deeply how leaders can be effective and transformative in the new technology era.
David firmly believes that we should use AI to create value in a holistic way, which benefits not just business but society as a whole: “it would be very reductionistic to only use AI to make sure that people work faster and more efficiently and create more money”. That’s why we need to think in terms of augmentation, rather than replacement, especially when it comes to preserving meaningfulness.
“Jobs will be replaced and disappear, but companies will need to invest in the human element, too”, explained David. “Because, certainly for now, we are not evolutionarily programmed to let go of that aspect. To give an example: research has shown that AI can be creative and even empathic, but whenever users are informed that what they read or see or hear has been built by an AI, they don’t like it anymore. Maybe that will change over the course of several generations, but certainly not now.”
“People will not regard your AI-induced products and services as meaningful anymore if you just cut humans out of the equation. We don’t want humans to start thinking like machines, right? We want humans to think like humans and we want human-centered AI. If only because if people stop seeing meaning in what we do, AI will suffer as well. We will not use it optimally.”
That’s why academic experts like David believe that we will experience a “Feeling Economy” in the future, where the largest part of our salaries will be determined by our soft skills rather than our hard skills, which will be the domain of AI. “That’s where Moravec’s paradox comes in: what's easy for humans is difficult for AI and vice versa. Humans will never measure up to AI’s talent for calculating big numbers. But what a one-year-old baby can do, in terms of intuition or vision, AI certainly can’t, for now.”
What both David and I are really passionate about is how AI is in the process of changing the very nature of what it means to be human. “For quite some time now, we have outsourced our bodies to technology, as early as the steam engine or the car”, explained David. “But now AI has arrived at the level where we may be outsourcing our cognition and our minds. And if we outsource both our body and our mind, we suddenly start questioning what is left for us. That realization is so powerful. And that's why we are having all these discussions about the end of humanity, wondering ‘Is there anything left for us?’”
For me, the biggest shift is that new technologies used to ‘only’ change markets, economics, and consumer behavior. Yes, there was often a spillover effect to the human side, to how we organize work for instance. But it probably never touched the very nature of humans and the very nature of work the way AI is doing now.
So, probably for the first time in my career, I'm also beginning to wonder if I am overly optimistic about the consequences of AI. Because, this time, the very nature of work is up for debate. Leaders who understand that will have tremendous potential to turn their organization into a more powerful version of itself.
On top of all the benefits that AI will bring us, there are also many challenges to deal with. A lot of them are related to some form of uneven distribution: of research budgets, of talent, and of who gets to benefit.
First of all, AI research has become so expensive that the most leading-edge work now happens outside of universities, because they simply no longer have the budgets for it. And it’s not just about overall research budgets; it’s also about energy costs and access to chips.
And this uneven distribution will also be reflected in the domain of skills. AI is known to augment the talents and skills of people in the middle of the scale, while the most skilled profiles tend to reap far fewer benefits from it. In that way, AI is levelling the playing field. “Some believe that this means that we will all be above average and there will be a lot more equality”, said David. “But that’s not how it works. We can’t all be above average, of course. What it actually means is that we're devaluing human meaningfulness and human input.”
And on top of this existential crisis, the economics of our system will also prevent this so-called greater equality. “To give an example: if all creators perform at the same level, prices will drop. No one will want to pay for art anymore. But that's not how economics works, and that's not how humans work. We’ll probably see a rising demand for authenticity, for the human premium. And that's where AI will suffer, because users don't see AI as an authentic entity. So, at the end of the day, in line with the Feeling Economy, the human premium will be where the real value comes from.”
I’ve written a free e-book with Steven Van Belleghem about that exact situation: GenAI’s impact on the distribution of the quality of our work, ranging from awful to mediocre to brilliant. This is how I believe that distribution will shift:
We will experience an exponential rise of mediocre content, which will be perceived as inauthentic by many. The good news is that “awful” will probably disappear. But how and where will we find true brilliance, and how will we value it? That will become a really interesting debate in the coming years.
All of this will also have a tremendous impact on how we educate the next generations, because homework and papers will no longer be the best way to teach or evaluate students. “What we need to make students understand is: what does what you generated mean for your own life path?”, explained David. “AI will help create content that helps students more personally. That is why I designed my classes to be purpose-driven rather than purely knowledge-driven. It’s about asking questions like ‘What does what you have just generated mean?’ and ‘How will you use it?’”
And that is a completely different form of teaching, one that most schools aren’t ready for yet. What we need to do is train future generations in ways that allow us to unleash them in companies and in society, and that help them understand their position in both.
If you're interested in how AI and other seismic shocks are changing your business, check out my keynote page.