The Intersection of Philosophy, Morality, and Technology
The interplay between philosophy, morality, and technology is experiencing both a resurgence and a new urgency. Justo Hidalgo, a 51-year-old Madrid native, serves as director of artificial intelligence (AI) and vice president of the Spanish Association of the Digital Economy (Adigital), which represents around 500 companies, including many leading tech firms. He has authored three books, including the recently released Patterns of Emergence: How Complexity Drives Artificial Intelligence (Amazon, 2025, currently available only in English). In this work, Hidalgo explores how complex systems develop “emergent abilities,” which may have unforeseen consequences that some suggest could pose risks to humanity.
Understanding Emergent Patterns
Question: What are emergent patterns?
Answer: Emergent patterns can be observed in nature, society, and AI. Their components, whether cells, ants, birds, atoms, or nodes in a neural network, may seem simple and unintelligent individually, but as they form complex structures they develop qualitatively new capabilities. In AI, for instance, certain language models exhibit emergent properties after crossing complexity thresholds, such as 100 billion parameters. A notable example is translation: a system trained on English and Spanish can handle other languages when given just a few examples, a capability beyond what it was explicitly trained for.
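The translation example describes what is usually called in-context or few-shot learning: the model's weights are not updated, it simply sees a handful of demonstrations in the prompt and continues the pattern. A minimal sketch of how such a prompt might be assembled (the language pair and example sentences are illustrative, not from the interview):

```python
# A minimal few-shot prompt for translation via in-context learning.
# The model sees a few demonstrations and is asked to continue the
# pattern; no weights are updated. All examples here are illustrative.

FEW_SHOT_EXAMPLES = [
    ("English: The house is big.", "Portuguese: A casa é grande."),
    ("English: I like coffee.", "Portuguese: Eu gosto de café."),
    ("English: Where is the station?", "Portuguese: Onde fica a estação?"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot translation prompt for a new sentence."""
    lines = []
    for source, target in FEW_SHOT_EXAMPLES:
        lines.append(source)
        lines.append(target)
    lines.append(f"English: {sentence}")
    lines.append("Portuguese:")  # the model is expected to complete this line
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("The weather is nice today."))
    # Passed to a sufficiently large language model, a prompt like this
    # often elicits a correct translation even for pairs that were sparse
    # in training -- the threshold behavior Hidalgo describes.
```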
Potential Risks of AI
Question: Stuart Russell, a professor at the University of California, Berkeley, warns about “insecure and opaque systems” that could endanger humanity. How do emergent patterns contribute to this risk?
Answer: Emergent properties may yield capabilities that are unpredictable or not properly aligned with our objectives. The concern lies in not knowing the extent or timing of these occurrences. It is essential to prioritize the control and governance of models, ensuring companies understand how to use them safely. Rather than abandoning AI, we should demand rigorous testing that keeps pace with advanced safety measures, and clearly differentiate these systems from traditional programs.
We have to develop ways to measure behaviors that did not exist before and that can affect us.
The Black Box Dilemma
Question: What do you mean by “black box nature”?
Answer: When asked why a self-driving car made a particular decision, its explanation could include a complex web of connections among billions of nodes that are virtually incomprehensible to us. Therefore, improvements are necessary to enable systems to articulate their decision-making processes more transparently.
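One concrete family of responses to the black-box problem is post-hoc explanation: probe the opaque model from the outside by perturbing its inputs and watching how the output moves. Below is a toy sketch of occlusion-style attribution, with a stand-in scoring function in place of any real driving model (all names and numbers are illustrative):

```python
import numpy as np

# Toy post-hoc explanation by input perturbation (occlusion).
# The "model" is a stand-in scoring function, not a real driving system;
# the point is only the probing technique: vary one input at a time and
# measure how much the black box's output moves.

rng = np.random.default_rng(0)
W = rng.normal(size=8)  # hidden "weights" we pretend not to know

def black_box(x: np.ndarray) -> float:
    """Opaque model: we can query it, but not inspect its internals."""
    return float(np.tanh(W @ x))

def occlusion_importance(x: np.ndarray) -> np.ndarray:
    """Attribute the prediction to inputs by zeroing each one in turn."""
    base = black_box(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = 0.0  # "occlude" feature i
        scores[i] = abs(base - black_box(perturbed))
    return scores

x = rng.normal(size=8)
print("importance per input:", np.round(occlusion_importance(x), 3))
```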
Future of AI: Self-Replication and Superintelligence
Question: Can machines replicate themselves?
Answer: As of now, true self-replication by machines has not occurred. While minor examples exist, they are not significant. The distinction lies between understanding oneself and the capacity for expansive self-generation, a concern for future advancements.
Question: What does the future hold?
Answer: The next phase could involve superintelligence: an AI system that surpasses human intelligence. This would not imply consciousness but rather superior problem-solving capability, such as identifying optimal experiments. While it won't lead to self-replication, it will likely accelerate the process of idea generation.
Question: What are the potential downsides?
Answer: One major concern is alignment, particularly when systems operate outside expected frameworks. If not properly controlled, this misalignment could outpace society's ability to adapt, turning the user's role into managing a group of agents with diminished human oversight.
Are We Nearing Superintelligence?
Question: Are we close to achieving superintelligence?
Answer: We still have a considerable journey ahead. Current large language models (LLMs) are not yet the technology that will yield superintelligence. Significant advancements in reinforcement learning and other research areas will be vital to reach this milestone.
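For readers unfamiliar with the reinforcement learning Hidalgo mentions, the core loop is: act, observe a reward, update a value estimate. A minimal tabular Q-learning sketch on a toy five-state chain (the environment and hyperparameters are illustrative, not anything Hidalgo proposes):

```python
import numpy as np

# Minimal tabular Q-learning on a 5-state chain: act, observe reward,
# update a value estimate. The environment is a toy chosen only so the
# whole reinforcement-learning loop fits in a few lines.

N_STATES, ACTIONS = 5, (-1, +1)  # move left or right along the chain
q = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s != N_STATES - 1:  # episode ends at the right edge
        # epsilon-greedy action selection
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s = s2

print(np.round(q, 2))  # the "move right" column dominates after training
```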
Research Needs and AI Consciousness
Question: What kind of research is necessary?
Answer: Exploring models that can comprehend the world, whether physical, neural, or psychological, holds great potential. Humans learn effectively from minimal information; a similar approach may benefit AI development.
Question: Does AI have consciousness?
Answer: Theories suggest that increased information integration might lead to some form of consciousness. While there are debates about whether AI will achieve consciousness, it could develop sufficient knowledge to act as if it were conscious, which might suffice for various applications.
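The “information integration” idea alluded to here is usually associated with Integrated Information Theory, whose actual measure (phi) is notoriously expensive to compute. As a loose intuition only, mutual information between two halves of a system can serve as a crude proxy for how integrated they are; a toy sketch under that simplifying assumption:

```python
import numpy as np

# Crude proxy for "information integration": the mutual information
# between two parts of a system. This is NOT Integrated Information
# Theory's phi, only a toy illustration that more statistical coupling
# between parts means more shared (integrated) information.

def mutual_information(joint: np.ndarray) -> float:
    """I(X;Y) in bits for a joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # skip zero-probability cells
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])  # parts ignore each other
coupled = np.array([[0.45, 0.05],
                    [0.05, 0.45]])      # parts strongly co-vary

print(mutual_information(independent))  # ~0.0 bits
print(mutual_information(coupled))      # ~0.53 bits
```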
Moral Values in AI
Question: What about moral values in AI?
Answer: A consciousness that arises from complexity could differ significantly from our own, and it may not share the moral or ethical values we currently understand. This raises crucial questions about the implications of such an emergence.
We do not know what kind of consciousness could emerge from that complexity and therefore it may not possess the moral or ethical values that we currently understand.
Shaping Moral AI
Question: Should moral AI be developed?
Answer: At Adigital, we have initiated a governance parliament consisting of specialized agents, including a moral and deontological agent. This initiative aims to aid those engaging with AI in addressing social and moral impacts, recognizing that while a lawyer might be on staff, a philosopher likely is not.
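The interview does not describe how Adigital's governance parliament is implemented, so the following is a purely hypothetical sketch of the pattern it names: several specialized agents, including a deontological one, each review a proposal, and their opinions are aggregated into a verdict. Every name and rule below is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "governance parliament" of specialized agents.
# Adigital's real system is not described in the interview; each agent
# here is a trivial rule-based stub that returns an opinion on a
# proposal, and the parliament aggregates them into a verdict.

@dataclass
class Opinion:
    agent: str
    approves: bool
    reason: str

def legal_agent(proposal: dict) -> Opinion:
    ok = proposal.get("complies_with_law", False)
    return Opinion("legal", ok, "regulatory compliance check")

def deontological_agent(proposal: dict) -> Opinion:
    # A deontological rule: some actions are off-limits regardless of benefit.
    ok = not proposal.get("treats_users_as_means", False)
    return Opinion("moral/deontological", ok, "duty-based constraint")

def technical_agent(proposal: dict) -> Opinion:
    ok = proposal.get("passed_safety_tests", False)
    return Opinion("technical", ok, "safety test results")

PARLIAMENT = [legal_agent, deontological_agent, technical_agent]

def deliberate(proposal: dict) -> bool:
    opinions = [agent(proposal) for agent in PARLIAMENT]
    for o in opinions:
        print(f"{o.agent}: {'approve' if o.approves else 'reject'} ({o.reason})")
    return all(o.approves for o in opinions)  # unanimity required in this sketch

proposal = {"complies_with_law": True,
            "treats_users_as_means": False,
            "passed_safety_tests": True}
print("verdict:", "approved" if deliberate(proposal) else "rejected")
```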
The Need for Regulation
Question: Do you support AI regulation?
Answer: Any system with societal implications should have an appropriate level of governance. The degree of regulation will depend on specific cases, though my concern is over-regulation that could complicate implementation or fragment regulations across jurisdictions. A balance is necessary to formulate regulations effectively without stifling innovation.