The term “Artificial Intelligence” has gained wide currency in today’s world. From classrooms and everyday conversation to high-level meetings and even political speeches, it appears with increasing frequency. Yet it is often used with little or no philosophical scrutiny. Ours is an era in which a technocratic mindset – concerned only with what is produced and consumed – threatens to overshadow the sapiential mindset, which asks the deeper question of why human beings produce and consume at all. Without this balance, technology risks becoming unmoored from ethics and human purpose.
Artificial Intelligence is a contradiction in terms. To gloss over this contradiction is to overlook the ethical problems that arise when a society prioritises machines and algorithms while neglecting the human mind and moral responsibility. To bear it in mind is to better understand what an intelligent agent truly is.
An intelligent agent, by definition, is endowed with self-consciousness. It is not only conscious of objects, but also of itself as conscious of those objects. It relates to an object and is aware of itself as relating to the object. It sees and is aware of itself as seeing; it smells and is aware of itself as smelling; it touches and is aware of itself as touching or being touched.
Self-consciousness is tied to intelligence. As the Canadian philosopher Bernard Lonergan explained in his seminal work Insight: A Study of Human Understanding, genuine intelligence involves the ability to ask, “Am I a knower?” That question may appear simple, even silly, but it is not. It is profoundly human. It shows that intelligence is not only about perceiving objects but about reflecting on one’s own act of knowing.
Self-consciousness naturally links to self-volition. An intelligent agent not only knows, but also wills. It makes choices, exercises freedom, and takes responsibility for its actions. Human beings are intelligent agents precisely because we are conscious of what we know and are capable of choosing how to act based on that knowledge. By contrast, what we call “artificial intelligence” lacks both self-consciousness and self-volition. A so-called AI system does not ask questions of itself; it is programmed. It does not choose; it is chosen for. It is not, properly speaking, intelligence. It is better described as a digital agent, a tool created and directed by intelligent human beings.
This distinction is not merely semantic. It has real ethical and developmental implications. A digital agent cannot bear moral responsibility; only its inventors and operators can. Just as a pen cannot be sued for libel but its writer can, so too must those who deploy digital systems be held accountable for their use. Ethical responsibility, which is the capacity to perform actions that are praiseworthy or blameworthy, can only be attributed to beings with self-consciousness and self-volition.
A digital agent does not operate itself but is operated by an intelligent agent. Precisely because it lacks self-volition, it cannot decide whether or not to act. Its operations are determined by computational algorithms; its mode of operation is mathematical. While a human person may operate mathematically, he or she operates first and foremost as an intelligent agent capable of asking existential questions. The difference between intelligent agency and digital agency, therefore, is that while the former can act both mathematically and existentially, the latter is confined to mathematical operations predetermined by human beings.
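The contrast can be made concrete with a minimal sketch in code. The example below is purely illustrative and hypothetical – the function name, the threshold, and the actions do not belong to any real AI system – but it shows what “mathematical operation predetermined by human beings” means in practice: every response is fixed in advance by a rule the programmer chose.

```python
# A minimal illustrative sketch, not any real AI system: a "digital agent"
# whose behaviour is wholly determined by a rule its human author wrote.

def digital_agent(reading: float) -> str:
    """Map a sensor reading to an action by a predetermined rule."""
    # The threshold and the actions were decided by a human being.
    # The agent cannot ask whether it *should* act; it only computes.
    if reading > 30.0:
        return "activate cooling"
    return "remain idle"

# The same input always yields the same output: no deliberation, no choice.
print(digital_agent(35.0))  # -> activate cooling
```

Whatever moral weight attaches to “activate cooling” belongs to the person who wrote the rule, not to the function that executes it.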
It is here that the misnomer “artificial intelligence” becomes dangerous. The phrase creates the illusion that machines can think, choose, or act independently of us. They cannot. What they do is process information according to rules established by human intelligence. They mimic patterns, predict outcomes, and generate outputs, but they do not reflect, will, or assume responsibility. To mistake simulation for consciousness is to blur the line between tool and agent, and to risk abandoning accountability.
This distinction has far-reaching implications for ethics, law, and social order. If machines are thought of as “intelligent,” their operators may be tempted to abdicate responsibility for what machines produce. Debates around algorithmic bias, automated weapons, and surveillance systems already reveal how easy it is to hide behind “the system” as if it were a moral actor. But the truth remains: machines do not bear guilt, blame, or praise. Only human beings do.
Recognising this keeps the focus where it belongs – on human agency. Digital agents must be guided by human wisdom, ethical reflection, and a sense of purpose that transcends efficiency. Technology should never eclipse the fundamental questions of meaning, truth, and responsibility. A society that abandons these questions in favour of blind faith in algorithms risks reducing persons to data and freedom to code.
In the end, the phrase “artificial intelligence” obscures more than it reveals. Intelligence, properly understood, cannot be artificial. It is conscious, reflective, and free. What we call AI is more accurately a digital agent – powerful, useful, and transformative, but not intelligent in itself. To acknowledge this is not to diminish technology but to put it in its rightful place, as a servant of human beings, not their substitute.
