David Shrier, a notable professor at Imperial College London and co-director of the Trusted AI Institute, stands at the intersection of technology and humanity. His work, transcending academia, delves into the commercialization of AI and digital assets, influencing top-tier corporations and governments alike. Through our conversation, it is clear he is not just a thinker; he's a builder of tomorrow.
In the global arena, Shrier has spearheaded over $10 billion in innovation initiatives and significant investments, shaping the future of technology and finance.
Just as the steam engine and electricity revolutionized their eras, generative AI is now transforming ours. Does he view this as hyperbole or a real and substantial technological shift? David's response is enlightening: "The revolution of AI is not exaggerated; if anything, it's understated. The infusion of AI is not just a progression but a transformation. AI has the potential to help address some of humanity's biggest problems, whether that be human longevity, cancer, or climate."
Yet, with the awe of possibilities comes the shadow of disruption. He acknowledges AI’s dual nature: a catalyst for economic growth and a disruptor of jobs and social structures, necessitating careful navigation.
Are broader society’s fears regarding AI justified, too? “For centuries, we’ve feared human-created technology attacking and destroying us – and media makes a lot of money out of fear,” he says. Hollywood’s fascination doesn’t surprise me – nor David. “However, some of that fear is justified,” he adds. “We need a better framework for dealing with AI, and I’ve started that with Trusted AI.”
David’s venture into Trusted AI echoes a commitment to safety, responsibility, and assurance. “We want to create a better future by assuring and opening up safe and responsible AI,” he explains. “At Trusted AI, we’re building an open-source workbench: it is software and code, policies, and corporate governance documents. It will provide resources for people on what to do and where to get started.”
With over 3,000 researchers and collaborations with esteemed institutions like MIT and Oxford, Trusted AI seeks to be the torchbearer in the odyssey of AI integration. He shares: "We are looking at a huge array of topics, such as managing AI risk to safeguard both security and societal impact, promoting inclusion, including financial inclusion, and using AI to mitigate algorithmic bias, improve public infrastructure and transportation, and help with water and food security."
We run the risk of losing the essence of humanity when companies utilize AI. Greed and power are significant motivators. "By keeping a human in the loop (HITL) in AI, we can maintain control and AI's relevancy for humanity," David counters. It is precisely why government intervention such as the EU AI Act has never been more timely. It's not a free flight but a journey with guardrails, where the human is integral and oversight is paramount. He emphasizes that relying solely on the goodwill of corporations is utterly insufficient. He elucidates: "AI, being a human-created technology, means its applications and impacts are under human control. Therein lies the power and the duty. Therefore, it's crucial to make decisions ensuring AI benefits society as a whole."
Trust, truth, and ethics are sorely lacking. Will AI not diminish what little of them society has left?
David answers: “Interestingly, the trust deficit is a consequence of unregulated AI.” He points out that the rise of AI-driven social media has led to societal fragmentation and declining trust in institutions. Social media’s tendency to echo users’ beliefs, motivated by the dopamine-driven engagement that agreement brings, has shifted public discourse away from a middle ground towards more extreme views. This shift, propelled by AI’s role in content delivery on these platforms, significantly contributes to the weakening of modern institutions. His voice is a call for awareness, management, and responsibility. “We get to decide what it does, so we should decide that it does good things for society,” asserts David.
That makes me wonder who defines ethics in AI. According to David, ethics in AI is a dance of global norms and local nuances. He sees a world where ethical norms are adapted but connected, shaped by local values but echoing universal principles. It’s a discourse not of imposition but adaptation, a dialogue where the human, the local, and the global intersect.
As profoundly as AI impacts our world today, its effect on the next generations will be nearly unfathomable. David sees Gen Z as pivotal in AI's future, advocating for a blend of their technological fluency with the experience of older generations to foster balanced innovation. He believes Gen Z is a generation not just defined by technology but propelled by a vision for global transformation. However, I posit that the marriage of technology and youth isn't without its complexities. The speed and efficiency brought forth by digital nativity sometimes overshadow depth and context. David insists on the importance of a multi-generational dialogue. "Max Planck, the physicist, had a poignant expression: 'Innovation happens one funeral at a time.' While I hope for change to occur more swiftly than that, my worry is that the older generation isn't moving quickly enough," he cautions.
David shares more on the Trusted AI program. The Trusted AI initiative focuses on translational research with real-world applications. It's not just theoretical; it's about creating tangible solutions in collaboration with global institutions and impacting millions. A notable project is the European Commission's personal data framework, which affects nearly 450 million people and led to a successful commercial venture with Fidelity overseeing 350 million financial accounts, impacting 800 million people.
My presentations emphasize the importance of embracing our humanity alongside technology and policy discussions. Love, intuition, creativity, imagination, spirit, judgment, heart, and soul are just as crucial, if not more so, than understanding the technological aspects of the future. I liken technology such as AI to a horse: it's more powerful, faster, and stronger than I am. But the key lies in learning how to harness its power.
We can look toward a responsible and promising future if we manage this with empathy, love, kindness, generosity, and similar human values. David asserts: "Remember, AI-induced job disruptions mostly impact technical, mathematical, and analytical fields. AI falls short in areas like empathy, teamwork, and collaboration, what we used to call 'soft skills' in school. But now these soft skills are vital in this post-AI era. These inherently human traits will be essential for humanity to stay relevant in the future."
David’s core message: “Embrace creativity and humanity in the AI era.” He advocates for harnessing AI’s power with empathy and kindness, ensuring technology assists human ingenuity and spirit.