In the riveting expanse of technological advancements, few minds have pierced the veil of the future as incisively as Dr. Usama Fayyad. My conversation with him was nothing short of a voyage into the realms of possibility, a journey through the labyrinth of artificial intelligence (AI) and its seismic impact on our world.
A visionary in his field, Dr. Fayyad is the Chairman and Founder of Open Insights, a role he has held since 2008, with a brief hiatus when he served as the Chief Data Officer at Barclays Bank in London from 2013 to 2016. He is also the inaugural executive director of the Institute for Experiential AI at Northeastern University. His career is marked by significant contributions to AI and data science, including co-founding OODA Health to revolutionize the U.S. healthcare system and holding advisory roles at Stella.AI and MTN’s innovation division.
His tenure as Chief Data Officer at Yahoo! was particularly notable: there he grew revenues 20-fold in four years and founded Yahoo! Research Labs. His background also includes co-founding Audience Science, leading the Data Mining & Exploration group at Microsoft Research, and a pivotal role at NASA/JPL, where his work earned him top honors from Caltech and NASA.
With a Ph.D. in engineering from the University of Michigan and multiple degrees in electrical engineering, computer science engineering, and mathematics, Dr. Fayyad is a published author, patent holder, and a respected figure in academic circles, being a Fellow of both the Association for the Advancement of Artificial Intelligence (AAAI) and the Association for Computing Machinery (ACM). His contributions extend beyond academia and industry: he is an active angel investor in the U.S., the EU, and the Middle East.
I ask Dr. Fayyad whether AI’s impact on society is as monumental as some suggest, comparable to the introduction of the steam engine some three centuries ago or of electricity more than a century ago. Dr. Fayyad’s answer sheds considerable light: “No, the change is not an exaggeration. People may exaggerate what they expect the technology to be capable of doing.” Many people assume AI systems can fully understand context, make ethical decisions, and surpass human intelligence.
However, this isn’t the case. “We are definitely in the world of narrow AI where narrow means a solution that works on a given problem and might work better than humans like a chess player has nothing to say about how to play another game,” he asserts. “Computers can only do better than humans in settings where you have a lot of data and little understanding of the processes underneath them.”
Dr. Fayyad explains that AI, essentially, is a predictive technology that has reached a critical point of societal influence, primarily due to two factors. First is the explosion of data, driven by digitization and computerization. This growth, especially when combined with machine learning and predictive algorithms, has spawned an industry focused on approximating human intelligence by analyzing data to make informed decisions under certain conditions.
Despite historical breakthroughs in AI, like creating a chess player superior to humans, this doesn’t equate to cracking the code of intelligence. The real game-changer now, which is the second factor, is the emergence of the knowledge economy. In wealthier nations, this economy constitutes a significant portion of the overall economy. The combination of vast data and computational power allows AI to expedite many aspects of knowledge work, much of which is repetitive and mundane. This capability of AI to handle tasks traditionally done by humans is what makes it so impactful, especially from an economic standpoint. While generative AI might not be scientifically groundbreaking in the next few years, its economic implications are profound, particularly in the knowledge economy. This distinguishes AI’s current state from simply being a better chess or game player – it’s about its substantial influence on the knowledge economy.
“We are still mostly in the world of narrow AI where narrow means a solution that works on a given problem and might work better than humans like a chess player has nothing to say about how to play another game.”
Provocatively, I ask Dr. Fayyad: as AI technology advances and large language models such as ChatGPT improve, does that make AI omnipotent and omniscient? His answer dispels common AI myths: “Like when Google Search first appeared, if you told me that it could reach 90% or 95% of everything that’s been written by humans, I’ll say yes. Well, that hasn’t changed how we work. It changed what it means for us to retain and learn, and when and how to learn. It made us more powerful and hasn’t replaced the need for human judgment.” He explains that none of the AI algorithms truly understand the data they’re consuming. Such systems are what he terms “stochastic parrots.” “They don’t understand what they’re saying, they can generate variety in their outputs, but they’re just saying stuff. When we perceive it, we attribute intelligence; it’s very far from understanding what’s being generated,” he elucidates.
I’ve come across numerous presentations calling data the ‘new oil.’ For CEOs and leaders, what are the elements for utilizing their data assets for data modernization and monetization? He declares: “Data is so essential, yet so misunderstood; it is a resource with immense potential if harnessed correctly.” Many executives and CEOs believe they have ample data and understand their business through analytics. However, they typically lack a comprehensive understanding of how to capture and utilize data effectively in context. There’s also a misunderstanding about the nature of data useful for humans versus algorithms. He says: “While business analytics focus on summarized, aggregated data to provide an overview, algorithms require detailed, granular data to function effectively. This fine-grained data, often seen as overwhelming or irrelevant by humans, is vital for machine learning.” For Dr. Fayyad, the challenge lies not just in recording data but in organizing and representing it so that it is accessible and usable by algorithms.
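The distinction Dr. Fayyad draws can be made concrete with a minimal sketch (the transaction data and fields here are invented for illustration): a dashboard aggregate serves the human overview, but it discards exactly the event-level detail a learning algorithm would consume.

```python
from collections import defaultdict

# Granular, event-level records: the kind of data an algorithm learns from.
transactions = [
    {"customer": "a", "amount": 120.0, "hour": 9,  "channel": "web"},
    {"customer": "a", "amount": 15.5,  "hour": 23, "channel": "mobile"},
    {"customer": "b", "amount": 300.0, "hour": 14, "channel": "web"},
    {"customer": "b", "amount": 8.0,   "hour": 2,  "channel": "mobile"},
]

def summarize(rows):
    """Aggregated view: per-customer totals, as a business dashboard shows."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["customer"]] += r["amount"]
    return dict(totals)

summary = summarize(transactions)
print(summary)  # per-customer totals only

# The aggregate has discarded the hour-of-day and channel signals that a
# fraud or churn model would need; a model sees per-event feature tuples:
features = [(r["amount"], r["hour"], r["channel"] == "mobile")
            for r in transactions]
print(features[1])  # (15.5, 23, True)
```

The summary answers the human question (“how much did each customer spend?”) while erasing the fine-grained structure machine learning depends on, which is the gap Dr. Fayyad describes.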
This skill, which he has focused on at Open Insights and in his work at Yahoo!, Barclays, and Microsoft, is crucial yet lacking in many organizations. Companies like Google, Amazon, and Meta excel in this area, investing heavily in collecting and systematizing data for algorithmic use. “The disparity between these AI-savvy companies and the vast majority, who don’t leverage their data effectively, is growing. This divide separates the few ‘AI haves’ from the overwhelming majority of ‘AI have-nots.’ The key for these have-nots is to harness the wealth of data generated from their operations, which contains valuable intellectual property, and use it to their advantage. The failure to do so results in a significant waste of potential. AI is a competitive advantage we must be aware of and understand. As important as understanding what the technology can do is understanding what it cannot do and why it fails,” he shares.
The conversation takes a philosophical turn when we discuss the human aspects of AI and technology. Trust, truth, and ethics are fundamental to our existence, especially moving forward. Dr. Fayyad shares his insights on the importance of ethical AI and the human role in shaping AI’s future: “One of the biggest areas of emphasis at the Institute for Experiential AI is Responsible AI – How to avoid ethical issues, and assess and mitigate risks. How do we build trust in AI? We must ensure we’re not doing anything unethical with the algorithms. These algorithms are very biased to the data: if the data has any bias, the algorithm will draw a definitive conclusion to very biased data that could be bad for humanity, depending on what you use it for.”
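Dr. Fayyad’s point about biased data producing definitive but biased conclusions can be seen in a toy example (the loan-decision history below is entirely invented): a model that merely learns decision frequencies from skewed historical data reproduces that skew, now automated.

```python
from collections import Counter

# Hypothetical historical loan decisions, skewed against group "B".
history = ([("A", "approve")] * 80 + [("A", "deny")] * 20
           + [("B", "approve")] * 30 + [("B", "deny")] * 70)

def train(rows):
    """Learn, per group, the majority historical decision."""
    counts = {}
    for group, decision in rows:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # the historical bias, faithfully reproduced
```

Nothing in the algorithm is malicious; it simply draws a confident conclusion from biased data, which is exactly the risk he describes.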
He explains that “trustworthy AI” can be built through “accuracy, consistency, and stability.” He muses: “How do we get people to trust algorithms and believe that they won’t go off on an arbitrary tangent?” Algorithms behave differently from humans and focus on very different features, and people must be helped to get used to that. Trust comes from ensuring accuracy, testability, verification, and stability of behavior under different circumstances. We must figure out how to systematically detect potential points of failure and instability and govern the algorithms correctly. That’s critical for the acceptance of AI and for doing AI responsibly.
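One of the checks described above, probing for points of instability, can be sketched as follows. The scoring function and thresholds here are hypothetical stand-ins for a real model, not anyone’s actual system: the idea is to verify that a decision does not flip under tiny perturbations of its inputs.

```python
def score(income, debt):
    # Hypothetical credit score: higher income and lower debt score better.
    return 0.7 * income - 0.4 * debt

def decide(income, debt, threshold=50.0):
    return "approve" if score(income, debt) >= threshold else "deny"

def is_stable(income, debt, eps=0.01, steps=20):
    """Check the decision is unchanged across a small grid of perturbations
    (up to +/- eps relative change on each input)."""
    base = decide(income, debt)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            if decide(income * (1 + eps * i / steps),
                      debt * (1 + eps * j / steps)) != base:
                return False
    return True

print(is_stable(100.0, 40.0))  # far from the threshold: True (stable)
print(is_stable(72.0, 1.0))    # sits on the threshold: False (can flip)
```

Systematically sweeping inputs like this is one simple way to surface the “arbitrary tangents” that undermine trust before an algorithm is deployed.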
“A trustworthy AI” can be built through “accuracy, consistency, and stability.”
That raises another ethical dilemma: how can governments and institutions control AI in its current form? We can’t simply leave it to those pursuing it purely for profit, whose money in turn buys them power. “We must have guardrails,” he stresses. He uses an analogy: “We build sports cars that have great brakes not because we want the cars to go slower, but to enable them to go faster, safer.” According to him, such guardrails advance the technology faster. They also create liability and accountability. “Realistically, my solution to this problem – and I’m not a lawyer – is that we must insist that the machine does not have a right to decide, that every decision or recommendation is attributable to a human or a legal entity that bears the responsibility. We can’t say the algorithm did it. It has to be a human who’s held liable and responsible for what that algorithm does,” he emphasizes.
One of the most powerful messages from Dr. Fayyad is his advice to the younger generation and leaders. He urges them to leverage technology responsibly and to understand its limitations. He warns against overestimating AI’s capabilities, emphasizing the need for continuous human intervention and judgment. He shares: “In reality, the abilities of these AI tools are definitely increasing, but they cannot reason, innovate, think, or understand.”
Dr. Fayyad’s final thought, which he encapsulates in a metaphorical message in a bottle for 2030, resonates deeply: “Are you capturing all the data you need to capture? Are you putting it in a place where it can be managed? Is it catching all the context? Because that is one of the secret ingredients of making things work.” This summarizes the essence of his message for the future – a call for meticulous attention to data, its management, and context.
My conversation with Dr. Fayyad is not just an exploration of AI and its potential; it is a deep dive into the philosophy of technology, ethics, and the future of human civilization. His insights are a beacon for anyone navigating the complex and ever-evolving world of AI and data science, reminding us of the need to balance technological advancement with ethical responsibility and human insight.