Module 8 – Justice and the Future
Lesson 3
The Ethics of Artificial Intelligence: Tools, Not Judges
Guiding Questions
• Can a machine be just — or is that a human illusion?
• Who is morally responsible for what artificial intelligence does?
• Is AI shaping society — or simply reflecting human intentions?
Machines Don’t Choose — People Do
Artificial Intelligence has been widely introduced into many systems of power — law, healthcare, finance, policing, warfare.
From so-called “autonomous” drones to predictive algorithms, AI is often spoken of as though it acts. But we must ask:
Who designed the system? Who selected the training data? Who authorized its use?
Machines do not possess:
• Free will
• Moral awareness
• Responsibility
• Desire, conscience, or intent
An algorithm doesn’t “want” anything. It simply follows a set of programmed instructions — which originated in human minds, with human values, errors, and sometimes, human cruelty.
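To make this concrete, here is a deliberately toy sketch in Python. Every name and number in it is invented for illustration; the point is only that an automated “decision” is human-chosen weights and a human-chosen threshold, applied mechanically.

    # A hypothetical loan-screening rule. The function has no desires or
    # intentions; it only applies weights that some person decided on.
    def loan_score(income, missed_payments, years_at_address):
        return (0.5 * income / 1000        # weight chosen by a person
                - 4.0 * missed_payments    # weight chosen by a person
                + 1.5 * years_at_address)  # weight chosen by a person

    def decide(applicant):
        # The threshold 20 is also a human choice, not a machine's "opinion".
        score = loan_score(applicant["income"],
                           applicant["missed_payments"],
                           applicant["years_at_address"])
        return "approve" if score > 20 else "reject"

    print(decide({"income": 38000, "missed_payments": 2, "years_at_address": 1}))

When such a rule “rejects” someone, that rejection was authored in advance by whoever picked the weights and the threshold.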
The Myth of Autonomous Blame
When injustice happens — biased arrests, wrongful rejections, even killings by drones — some blame “the algorithm.” But algorithms are not moral beings.
To say “the AI decided” is to disguise responsibility.
Examples:
• An AI drone targets a village: Who armed it? Who approved the code?
• A sentencing algorithm gives longer terms to Black defendants: Who chose the input data? Who let it replace human judgment?
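The sentencing example can be sketched the same way. The data below are entirely invented and real systems are far more complex, but the mechanism is the same: a model “trained” on biased past decisions learns the bias and hands it back as a recommendation.

    # Invented historical sentencing data (years), already shaped by biased
    # human decisions. Nobody "told" the model to discriminate; the humans
    # who chose and produced this data did.
    history = [
        {"group": "A", "sentence": 3}, {"group": "A", "sentence": 4},
        {"group": "B", "sentence": 6}, {"group": "B", "sentence": 7},
    ]

    def train(records):
        # "Training" here is just averaging past outcomes per group.
        totals, counts = {}, {}
        for r in records:
            totals[r["group"]] = totals.get(r["group"], 0) + r["sentence"]
            counts[r["group"]] = counts.get(r["group"], 0) + 1
        return {g: totals[g] / counts[g] for g in totals}

    model = train(history)

    def recommend(group):
        # The "prediction" is only the past, repeated.
        return model[group]

    print(recommend("A"), recommend("B"))  # 3.5 versus 6.5: the bias survives intact

The “algorithm’s decision” is yesterday’s human decisions, averaged and automated.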
Even ChatGPT is not “writing” this lesson: the human teacher is the author, deciding what is used, accepted, or rejected.
A key philosophical reminder:
A tool cannot be evil — but people can do evil with tools.
Philosophical Perspectives
Martin Heidegger – The Question Concerning Technology
Technology is not neutral; it frames how humans relate to the world. Heidegger calls the danger “enframing” (Gestell): coming to see the world, and other people, only as resources to be measured, ordered, and calculated.
Hannah Arendt – Responsibility and Judgment
True evil often comes from thoughtlessness — when people obey systems without questioning them. Automating judgment may only deepen this moral blindness.
Immanuel Kant – Moral Agency
Kant believed morality requires rational autonomy. Since machines cannot reason morally or act freely, they cannot be moral agents — only instruments.
The Real Danger: Abdication, Not Automation
The deeper problem is not AI replacing us.
It’s humans surrendering moral responsibility to machines.
• Judges relying on scoring systems
• Policymakers hiding behind technical complexity
• Soldiers using remote drones as moral shields
• Citizens trusting opaque algorithms to sort truth from lies
The issue is not what machines do, but what humans stop doing — thinking, feeling, judging, and standing accountable.
A Thought Experiment
A company builds a self-learning AI weapon.
It mistakenly attacks a school.
The CEO says: “The machine made a mistake.”
The engineer says: “It wasn’t my code.”
The government says: “We trusted the experts.”
Who is morally responsible?
The machine — or the chain of humans who built, approved, and deployed it?
Principles for Human-Guided AI
If AI must be used, then justice requires:
• Human accountability – No system without a responsible human decision-maker (see the sketch after this list)
• Transparency – All algorithms open to audit
• Philosophical humility – A tool can never replace conscience
• Ethical limits – Some choices (e.g. life and death) should never be outsourced
• Moral literacy – Citizens and leaders must stay alert to technology’s effects on dignity and freedom
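The first two principles can be pictured in code. The sketch below is hypothetical (the names and scenario are invented, not any real system’s interface); it only shows the shape of the idea: the system refuses to act unless a named human takes responsibility, and every action is recorded where auditors can see it.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import List, Optional

    @dataclass
    class Decision:
        # Every automated action is tied to a named human and kept for audit.
        action: str
        approved_by: str   # a real, accountable person, never "the AI"
        rationale: str
        timestamp: str

    audit_log: List[Decision] = []

    def execute(action: str, approved_by: Optional[str], rationale: str) -> str:
        # Human accountability: no named approver, no action.
        if not approved_by:
            raise PermissionError("No responsible human named; action refused.")
        record = Decision(action, approved_by, rationale,
                          datetime.now(timezone.utc).isoformat())
        audit_log.append(record)  # Transparency: the record stays open to audit.
        return action + ", approved by " + approved_by

    print(execute("flag application for manual review",
                  "J. Rivera (credit officer)",
                  "score below automated threshold"))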
Reflect and Discuss
• Can a machine “choose” to be unjust?
• What is lost when humans let tools make moral decisions?
• Is there a line that technology should never cross — even if it “works”?
Suggested Readings
• Martin Heidegger – The Question Concerning Technology
• Hannah Arendt – Responsibility and Judgment
• Immanuel Kant – Groundwork of the Metaphysics of Morals
• Sherry Turkle – Reclaiming Conversation
• Virginia Eubanks – Automating Inequality