Artificial intelligence as a metaphor: promises, limits and misunderstandings.

Luis Germán Rodríguez Leal. January 11, 2026. Universidad Central de Venezuela
luisger.rodl@gmail.com
Abstract
This paper offers a critical analysis of the semantic expansion and public use of the term "artificial intelligence" (AI), showing how this expansion has fueled inflated and confusing expectations regarding the relationship between functional simulation and conscious experience.
Following a historical and conceptual review of the term and an examination of recent developments—language models, deep learning, and artificial agents—the text distinguishes the instrumental capabilities of AI-based systems from human intelligence, as understood by cognitive science and neuroscience.
It is argued that contemporary AI exhibits high performance on specific tasks through statistical and correlational processing, but lacks subjective experience, self-awareness, and situated understanding. From the perspective of embodied cognition, it is emphasized that human intelligence emerges from the interaction between body, environment, evolutionary history, and social practices: dimensions that remain absent from today's AI.
The final discussion articulates the tension between AI as a bridge to useful and regulated forms of human-machine cooperation and AI as a conceptual and practical abyss that can distort our understanding of humanity, generate technological dependence, and erode ethical and social responsibilities.
Faced with these risks, the paper identifies concrete actions at four complementary levels: (a) personal and educational, through critical digital literacy and information verification practices; (b) organizational, through ethics committees, independent audits, and responsible use protocols; (c) public and regulatory, through impact assessments, accountability frameworks, and regulation of sensitive uses; and (d) global, through support for international manifestos, multi-level governance mechanisms, and technological cooperation.
The paper concludes that recognizing the ontological limits of AI does not imply halting its development, but rather situating it within a humanist and democratic framework that allows us to harness its benefits without compromising the dignity, autonomy, and security of individuals and societies.


