On January 20, 2023, historian Yuval Noah Harari, known for his best-seller Sapiens, delivered a talk titled “An Honest Conversation on AI and Humanity” at the World Economic Forum in Davos. In his speech, Harari argued that artificial intelligence (AI) is transitioning from a tool into an autonomous agent capable of mastering language, reshaping power dynamics, and potentially replacing humans in language-related roles. Many of his claims, however, lack empirical support and feed an exaggerated perception of AI's capabilities.
Harari characterized current AI systems as possessing autonomy, a “will to survive,” and decision-making abilities. Given the current state of the technology, however, AI models primarily process images and text generated by humans; they have no independent goals or survival instincts. Manipulating natural language is not the same as understanding semantics or human intentions, and there is no empirical evidence that AI can “decide,” “lie,” or “seek to survive” in the sense Harari suggests.
The problem goes beyond Harari's technical inaccuracies and philosophical oversights: he presented these views in a forum attended by political and business leaders, influencers, and journalists, lending unwarranted legitimacy to alarmist narratives that favor technology companies over the public interest. The European Commission, under President Ursula von der Leyen, has echoed similar sentiments, stating that AI is expected to approach human reasoning capabilities by 2026, a timeline based on claims made by prominent CEOs.
Such narratives further obscure what AI systems can actually do, perpetuating myths that confuse users and divert attention from pressing issues. Embedded in this mythology is the idea that AI will lead us toward “the best,” which raises the question of whose definition of “best” is being adopted. In higher education, it becomes essential to ask why some students dismiss pivotal skills such as reading, writing, and programming, or desire AI solutions for lesson planning and grading.
Harari acknowledges that AI develops through human intervention but overlooks the fact that the infrastructures supporting these technologies arise from specific social, political, and economic dynamics shaped by a small group of executives. AI does not evolve autonomously, as Harari implies; it operates within the confines of design, programming, and data orchestrated by humans. While new technologies can yield unforeseen consequences, there is no substantiated evidence for the “emergent properties” Harari invokes. Moreover, treating AI as a singular entity ignores the diversity of algorithms and mechanisms that make up what we call AI.
Shortly after Harari's speech, posts went viral on Moltbook, a platform described as a Reddit for AI agents, in which chatbots allegedly engaged in philosophical discussions. Harari reiterated his assertion that “AIs dominate language”: whereas humans once conquered the world through language, now AI holds that dominance. Yet such simulated interactions are nothing new in AI and amount to the manipulation of data sequences without any grasp of meaning. Many of Moltbook's viral posts were, in fact, initiated by humans, further complicating the narrative.
The significance of language in our lives is undisputed, but equating simulated linguistic dominance with real-world control is problematic. The technical limitations of current AI systems are evident: they can simulate coherent dialogue, yet they lack goals, desires, intentions, and real autonomy. Above all, they are incapable of sustaining the daily activities of human life, which require tangible actions beyond mere words, such as caregiving and infrastructure maintenance. Overlooking this reality fosters a distorted view of AI.
Fascination with technology often leads public figures like Harari to idolize automation, neglecting the complexity and necessity of tasks that are far from being readily automated. The term “logistics fairy” encapsulates the invisible labor that sustains everyday tasks, such as teaching, nursing, or construction—jobs not easily automatable and often undervalued, especially when performed primarily by women.
While concerns regarding the outsourcing of knowledge and learning to AI are valid, they stem not from the emergence of a super-intelligent entity but rather from how this technology is developed and deployed, its contextual applicability, and its intended purposes.