AsianFin -- Joseph Sifakis, a Turing Award-winning computer scientist and foreign member of the Chinese Academy of Sciences, is raising red flags about the current global AI boom.
In an interview with EduGuide, the renowned expert cautioned that today's artificial intelligence systems are often misrepresented as truly intelligent, when in fact they lack fundamental capabilities such as common-sense reasoning, decision-making, and moral judgment.
Sifakis said current systems cannot yet be considered intelligent in the true sense. He noted that the real industrial impact of AI remains limited, despite growing market hype.
His remarks come at a time when conversational AI platforms like ChatGPT and DeepSeek are gaining mainstream popularity, and autonomous driving technologies are marketed as humanity's next great leap. However, Sifakis insists that these technologies are still far from achieving what he considers true intelligence — the ability to perceive the world, understand complex situations, and act purposefully toward goals.
Sifakis emphasized that current AI is largely based on statistical modeling rather than genuine understanding. He explained that such systems merely process data without grasping its real-world context. Autonomous driving systems, for instance, often fail to interpret social conventions on the road, such as yielding to emergency vehicles, which undermines their reliability and safety.
He cited recent incidents involving Xiaomi's electric vehicles, in which drivers relying on assisted-driving features fell asleep behind the wheel, leading to accidents. He warned that the underlying technology is not yet mature and that industry standards are inadequate, and he called for stricter regulation.
He also pointed to the hallucination problem in generative AI systems, where responses that appear fluent and credible may in fact be false or unverifiable, a risk that many users are ill-equipped to detect.
According to Sifakis, society increasingly confuses information with knowledge. He argued that while data is abundant, true understanding remains scarce. Knowledge, he said, must be both useful and actionable — not merely statistically plausible.
This confusion is also affecting education. Sifakis expressed concern that students are using AI tools to bypass learning, and that career choices are increasingly driven by salary rather than personal development or creativity. He said education should foster independent thinking and a sense of responsibility, rather than simply delivering answers.
Despite his criticism, Sifakis remains hopeful about AI's future. Still, he stressed that the technology should not be deployed in critical fields such as healthcare and transportation without internationally accepted safety standards and transparency mechanisms.
Unlike traditional safety-critical systems such as aircraft or medical devices, today's AI systems operate without formal certification protocols. Sifakis described current practice as self-certification, an approach he believes is fundamentally flawed when human lives are at risk.
He called on governments and institutions to define clear boundaries and ethical frameworks for AI development. AI, he concluded, should be seen not as a replacement for human judgment, but as a tool to enhance it.