Your map app reroutes you into a dangerous neighborhood, or a hiring model quietly filters out qualified candidates. Was it “just the algorithm”—or did someone, somewhere, make a moral choice?
THE NEW “MANY HANDS” PROBLEM
Ethicists talk about the “problem of many hands”: when harm results from a system so complex that no single person seems fully responsible. AI intensifies this—data collectors, model builders, product managers, executives, and users all contribute small decisions that add up to big consequences. It’s like a relay race where everyone touches the baton, but the finish line is a lawsuit—or a life changed for the worse.
A useful distinction is between causal responsibility (what caused the harm) and moral responsibility (who deserves blame or must make amends). The model may be a causal factor, but the moral burden typically lands on humans and institutions that chose goals, tolerances for error, and where to deploy the system. Responsibility often follows control and foreseeability: if you could have predicted the risk and shaped the outcome, you’re on the moral hook.
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”
— Often attributed to Aristotle
ALGORITHMIC HARM: NOT JUST BUGS, BUT VALUES
Some harms look like technical glitches—misclassification, hallucinated facts, unstable outputs. But many are value-laden: biased training data reflecting historical injustice, metrics optimized for profit rather than fairness, or “efficiency” that quietly increases surveillance. Think of an AI system as a fast student: it will learn whatever you grade it on, not what you meant to teach.
Accuracy, cost, engagement, safety, fairness—picking what to optimize is not neutral. Treat every KPI like a small ethical constitution for your product.
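To make that concrete, here is a minimal sketch of how a "technical" scoring choice becomes a value choice. It imagines a hiring model judged two ways: by accuracy alone, and by accuracy minus a penalty for unequal selection rates between two groups. The function names (such as demographic_parity_gap), the toy data, and the fairness weight are illustrative assumptions, not any particular library's API.

```python
# Minimal sketch: two ways to "grade" the same hiring model.
# The metric is the constitution; the model is tuned to whatever we score.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def selection_rate(y_pred, group, value):
    """Fraction of candidates in one group who are selected (predicted 1)."""
    preds = [p for p, g in zip(y_pred, group) if g == value]
    return sum(preds) / len(preds) if preds else 0.0

def demographic_parity_gap(y_pred, group):
    """Absolute difference in selection rates between groups 'A' and 'B'."""
    return abs(selection_rate(y_pred, group, "A") - selection_rate(y_pred, group, "B"))

def score_accuracy_only(y_true, y_pred, group):
    """KPI #1: reward accuracy alone; fairness is invisible to the optimizer."""
    return accuracy(y_true, y_pred)

def score_with_fairness(y_true, y_pred, group, fairness_weight=1.0):
    """KPI #2: the same accuracy, minus a penalty for unequal selection rates."""
    return accuracy(y_true, y_pred) - fairness_weight * demographic_parity_gap(y_pred, group)

# Toy data: identical predictions, judged under two different "constitutions".
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0]
group  = ["A", "A", "A", "B", "B", "B"]

print(score_accuracy_only(y_true, y_pred, group))   # ~0.67: looks fine
print(score_with_fairness(y_true, y_pred, group))   # ~0.33: drops once the group gap counts
```

Nothing in the model changed between the two scores; only what we decided to reward did. That is the sense in which a KPI is an ethical commitment.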
CHOOSING WHEN ETHICS DISAGREE
Real decisions rarely come with a single moral theory attached. A utilitarian asks: does this maximize overall well-being? A deontologist asks: does it respect rights and duties, even if outcomes look good? A virtue ethicist asks: what kind of people (and institutions) are we becoming by deploying this system?
Moral uncertainty is the honest admission that you may not know which theory is correct—or how to apply it. One practical response is “robust” decision-making: choose actions that are acceptable across multiple frameworks. If a model increases efficiency but violates consent, or improves profits but predictably harms a vulnerable group, it’s not just a trade-off—it’s a warning light.
Outcome-focused lens (utilitarian):
- Aim: maximize total benefit, minimize total harm
- Key question: Do the benefits outweigh the risks for everyone affected?
- Typical tools: impact assessments, cost-benefit analysis, harm mitigation

Rights-and-character lens (deontology and virtue):
- Aim: respect rights/duties; cultivate trustworthy institutions
- Key question: Are we treating people as ends, not merely data points?
- Typical tools: consent, transparency, audits, governance, red lines
Before deployment, ask: (1) Who could be harmed, and how badly? (2) What rights or duties are at stake (consent, privacy, due process)? (3) What habits are we building—care, honesty, courage—or complacency and denial?
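One way to make those three questions bite is to turn them into a launch gate that a named human must sign. The sketch below is illustrative, not a standard: the field names, the pass rule, and the example review are assumptions, and a real process would attach evidence to each answer.

```python
# Minimal sketch of a pre-deployment review gate: the system ships only if the
# plan is acceptable under every lens AND a named human owns the decision.

from dataclasses import dataclass

@dataclass
class EthicsReview:
    feature: str
    owner: str                      # the accountable human, not "the algorithm"
    outcomes_acceptable: bool       # (1) harms identified and outweighed by benefits?
    rights_respected: bool          # (2) consent, privacy, due process preserved?
    habits_defensible: bool         # (3) does shipping this build care or complacency?
    notes: str = ""

def deployment_gate(review: EthicsReview) -> bool:
    """A 'robust' choice must clear every lens; one red light blocks the launch."""
    checks = [
        review.outcomes_acceptable,
        review.rights_respected,
        review.habits_defensible,
        bool(review.owner.strip()),   # no owner, no launch
    ]
    return all(checks)

review = EthicsReview(
    feature="resume-screening model v2",
    owner="Head of Talent",
    outcomes_acceptable=True,
    rights_respected=False,           # e.g., candidates never consented to scoring
    habits_defensible=True,
    notes="Consent flow missing; revisit before launch.",
)

print(deployment_gate(review))        # False: a rights violation is a warning light, not a trade-off
```

The design choice worth noticing is the all-lenses rule: a gain on one lens cannot buy back a failure on another, which is exactly the "robust" posture recommended under moral uncertainty.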
- AI harms often arise from many small choices; moral responsibility tracks control and foreseeability, not just causation.
- Algorithms embed values through data, metrics, and deployment context—“technical” decisions can be ethical commitments.
- Under moral uncertainty, prefer robust choices that look defensible across outcomes, rights, and character-based ethics.
- Treat optimization targets as moral levers, and use checklists, audits, and governance to keep accountability human.