Commentary

The Algorithm That Couldn’t See: AI Ethics and the Halakhic Discipline of Perception

“One who walks along the way reviewing his studies, and interrupts his study and says, ‘How beautiful is this tree! How beautiful is this plowed field!’—Scripture regards him as if he had forfeited his life.”
Pirkei Avot 3:7

This ancient teaching anticipates a distinctly modern crisis: the outsourcing of moral perception to systems that cannot see what matters. When we delegate judgment to algorithms, we risk more than efficiency losses. We risk a degradation of attention itself — a thinning of the human capacity to recognize what stands before us as morally salient.

The Torah declares that humanity is created be-tzelem Elokim, in the image of God (Genesis 1:26-27). That image is not a static attribute but a vocation: we are meant to have the capacity for encounter — to see and to be seen, to recognize and be recognized as persons (Maimonides, Guide of the Perplexed I:1; Soloveitchik, The Lonely Man of Faith). When systems treat individuals as data points, they do not merely err statistically. They commit a theological violation, reducing the image-bearer to an object of calculation. Judaism has never rejected tools as such, but it has always insisted on training the eye that wields them. The question before us is whether artificial intelligence can be used without forfeiting the human art of seeing panim el panim, face to face.

Halakhah, I want to suggest, offers not merely ethical conclusions but a disciplined way of seeing. That perceptual discipline exposes a blind spot in contemporary AI ethics, one that cannot be resolved by better data, fairer models, or more transparent code alone.

In 1991, I volunteered at an absorption center in Israel during Operation Solomon, processing Ethiopian Jewish refugees arriving from Gondar Province. My task was simple: verify names against a 1976 census of Jewish villages. Anyone on the list was presumed Jewish and entered Israel immediately. If your name was not on the list, you were sent to secondary review. The system was designed for speed; we processed hundreds of people each day.

An elderly man approached my desk. His name was not on the list. Through a translator, he insisted that he had been counted in 1976, that his family was already inside. I checked again. Nothing. I sent him to the other line.

Hours later, he returned. He had been beaten by others who accused him of being an impostor. His face was bruised, his dignity broken. I looked again at the census. This time I saw it: his name was there; it was simply spelled slightly differently.

He and his family ultimately went to Israel. But I have never forgotten his face.

At the time, I understood this as a personal failure — of attention, of care, of seeing what was before me rather than what the system told me to see. I now think it points to something larger. We often say that Judaism trains the will through discipline. That is true. But before it trains the will, it trains the eye.

The architecture of halakhah functions as a technology of perception. It teaches not only what to choose but what to notice. Artificial intelligence does not merely automate decisions; it mediates perception itself, shaping what appears real, relevant, or worthy of response.

Much of contemporary AI ethics focuses on outcomes: bias, fairness, accountability, transparency. These concerns matter. But they remain downstream of a deeper issue. AI systems do not simply decide differently from humans; they see differently. They construct reality through statistical proximity rather than presence, through correlation rather than encounter.

A widely deployed sepsis-prediction system learned to identify patients at risk — but only those who resembled populations previously diagnosed, because its training data reflected historical patterns of diagnosis and care; groups historically under-diagnosed — and therefore under-labeled in the data — remained invisible to the model (Obermeyer et al., Science 2019; Colacci et al., 2025). The COMPAS algorithm used in American courts predicts recidivism risk, but it does not see remorse, transformation, or moral growth. Hiring algorithms rank candidates by resemblance to past hires, seeing correlation rather than promise by design.

The danger lies not only in what such systems miss, but in how they train us to see — through their eyes, within their frames, mistaking partial vision for objectivity. “AI” names a diverse family of tools — rule-based systems, statistical models, neural networks — but they share a common epistemic posture: they perceive through pattern rather than presence (Hubert Dreyfus, What Computers Still Can’t Do).

Jewish law offers a counter-model that is instructive precisely because it is not primarily concerned with outcomes. Halakhah distinguishes rigorously between classification and guidance — between determining the objective legal status of an act or object and determining how the law should be applied to a particular person in a particular circumstance. This distinction is not an afterthought. It is structural.

Consider the laws of kashrut. An animal with a disqualifying defect is not kosher. That status does not change based on circumstance. Yet classical halakhah is equally explicit that, in cases of significant hardship or loss, a decisor may rely on lenient positions and must refrain from imposing additional stringencies. The Rema’s formulation — u-bemakom hefseid merubeh yesh lehakeil, in situations of major financial loss, leniency is warranted — is not a loophole but a meta-instruction about perception and responsibility: the food’s legal status remains unchanged, while what shifts is the obligation imposed on the person (Rema, Shulhan Arukh, Yoreh Dei‘ah 116:5).

What matters here is not the law of tereifot but the architecture it reveals. Halakhah formally separates legal classification from pastoral guidance. A system that merely encodes rules and applies them uniformly would miss this distinction entirely. It would know how to classify, but not how to see.

This distinction bears directly on algorithmic systems. AI collapses classification and guidance into a single output. The score is the decision. The risk assessment becomes the sentence. The ranking becomes the hire. There is no preserved space for judgment at the point where mechanical rigor produces injustice. Halakhah, by contrast, insists that this space remains human.

This is not a concession to subjectivity or bias. It is a disciplined refusal to confuse rule-application with moral sight. And it points to a further requirement: not only that judgment remains human, but that the human judge be formed — trained — in the very virtues that keep discretion from collapsing into arbitrariness. A further dimension of this discipline concerns the formation of judgment itself. Halakhah does not merely preserve space for human judgment; it conditions the kind of judgment that may occupy that space. The name it gives to that conditioning is a virtue often mistranslated as humility: anavah.

In contemporary discourse, humility is frequently understood as self-effacement — the shrinking of ego, the refusal of authority, the withdrawal of assertion. In this view, to be humble is to make oneself small. But such a conception is epistemically unstable. A self that disappears cannot judge, cannot correct, and cannot bear responsibility. Jewish law advances a sharply different understanding. Anavah does not denote self-negation, a notion closer to the Hasidic concept of bitul. Rather, the term describes a disciplined relationship between the self and truth.

This distinction matters because halakhic discretion depends on agency. Judges must rule, decisors must decide, and leaders must bear responsibility for outcomes that affect real lives. The tradition does not imagine a system in which authority is neutralized. Instead, it trains authority so that it does not collapse into entitlement. Anavah restrains the ego’s tendency to distort perception — overconfidence, premature closure, resistance to correction — without erasing the self who must judge.

Truth, emet, occupies a non-negotiable place in Jewish thought. Scripture identifies truth as a divine attribute, and the sages go further: “The seal of the Holy One, blessed be He, is truth.” In such a tradition, virtues are evaluated by their relationship to truth. A humility that disables judgment, suppresses correction, or evades responsibility would therefore be religiously deficient. Anavah is a moral virtue precisely because it is an epistemic one.

This epistemic function explains why halakhah can preserve discretion without succumbing to arbitrariness. When halakhic sources instruct that, in cases of significant hardship or loss, additional stringencies should not be imposed, they are not authorizing subjective leniency. They are presupposing a decisor trained to recognize when the application of law, rather than its content, must be constrained. Discretion here is not bias; it is disciplined restraint exercised in fidelity to truth.

Algorithmic systems cannot presently be treated as inhabiting this structure — not because of a settled metaphysical claim about their ultimate capacities, but because humility is not a property that can be inspected, verified, or reliably attributed in non-human systems. Whether future forms of artificial intelligence might approximate something like this posture is a question we need not resolve. What matters is that no existing or foreseeable system can responsibly be relied upon as if it possessed it for purposes of delegated moral authority.

Where humility cannot be assumed or verified, its demands must instead be honored through institutional design. Systems may be built to defer judgment, to preserve human review, to resist the collapse of classification into decision, and to penalize unwarranted confidence rather than reward it. In this sense, halakhah offers not a blueprint for machine virtue but a set of normative constraints on machine authority. It teaches us not how to make machines moral, but how to prevent them from usurping moral roles they cannot responsibly occupy.

Anavah thus names the internal discipline that makes halakhic discretion possible without arbitrariness. But halakhah does not leave this discipline at the level of character alone. It gives it concrete perceptual form — structuring how one looks at others, how one weighs competing claims, and how one resists the impulse to see too quickly or too confidently. The discipline of seeing is therefore not exhausted by the formation of the judge; it is instantiated in a set of presumptions and constraints that govern perception itself. It is to these outward-facing dimensions of halakhic sight that we now turn.

One such dimension is a presumption of trust: Jewish law begins not from suspicion but from confidence in ordinary human integrity. You see a neighbor carrying on Shabbat and assume the presence of an eiruv. You see someone eating and presume the food is kosher. This principle, known as chezkat kashrut, is not naïveté. It is moral formation. The observer is trained to see others as lawful unless proven otherwise.

The algorithmic gaze often begins from the opposite stance. Facial-recognition systems presume fraud until identity is verified. Predictive policing presumes criminality until innocence is demonstrated. Risk-scoring systems treat deviation as danger. The starting point of perception has shifted from trust to fear. This is not a neutral technical choice; it reshapes moral possibility by defining what counts as salient in advance.

A second dimension concerns dignity itself. Kevod ha-beriyot, the dignity of God’s creatures, is not an external value imported to soften the law, but a higher-order principle embedded within it. The Talmud permits the violation of certain rabbinic prohibitions to prevent humiliation, revealing that dignity was always part of the system’s moral architecture.

Crucially, dignity is not treated as a competing interest to be weighed against compliance. It is a constraint on how law may be applied to people. Dignity does not alter classification; it governs application. Halakhah thus preserves a domain in which human judgment must intervene, precisely where formalism would otherwise triumph.

Algorithmic systems cannot replicate this structure. Data demands labels. Dignity resists them. It is not a feature to be detected or optimized, but a mode of address that precedes categorization altogether.

Finally, halakhah insists on the irreducibility of face-to-face encounters. The Torah reserves its highest moments for panim el panim. Moses speaks with God this way; Israel receives revelation this way. Emmanuel Levinas famously argued that the face issues a silent command: “Do not kill me.” Algorithmic systems process faces endlessly, but they never encounter them in this moral sense. They classify, compare, and predict, but they do not meet.

The rabbinic principle ein la-dayan ella mah she-einav ro’ot — a judge has only what his eyes can see — sacralizes epistemic limitation as a condition of moral integrity. Where AI promises total vision, halakhah insists on bounded sight. Judgment depends on human presence, not omniscient sensors.

These disciplines — trust, dignity, encounter — are not nostalgic virtues. They are perceptual constraints designed to preserve moral agency.

If artificial intelligence mediates perception, then ethics must become an avodat ha-re’iyah, a discipline of seeing. Halakhah offers precisely such a discipline. It trains discernment through distinction, memory through ritual, and restraint through fences around power. Compassion appears not as sentiment but as contraction — a deliberate limitation of authority.

Some matters, halakhah insists, must remain face to face. Diagnosis, judgment, consolation — these cannot be delegated without loss. AI may assist where efficiency matters and dignity is not at stake. It may inform significant decisions while humans retain responsibility. But where identity, freedom, or life itself is in question — refugee status, parole, conversion — automation must yield entirely to encounter.

This boundary is not technical but categorical. Dignity is not a feature to be detected; it is a claim made upon us.

Skeptics often object that such ethics cannot scale. Halakhah evolved, they note, in small covenantal communities. But Judaism did scale — across centuries and continents — not through uniformity but through distributed human judgment. Local courts decided local cases; difficult ones rose upward. Precedent guided without replacing perception. Difference was preserved without surrendering coherence.

AI scales differently. It optimizes for convergence, eliminating discretion in the name of consistency. This is not morally neutral architecture. It reflects a belief that justice means treating everyone identically. Halakhah rejects this premise. The same act may be permitted to one person and forbidden to another, not as inconsistency but as moral precision. Context matters. The face before you matters.

Some things, perhaps, should not scale in the way technology promises.

I still see that man’s face in the absorption center. The system saw a spelling discrepancy. I saw a record. Neither of us saw him ba-asher hu — as he was. The database was not evil; it was necessary. The harm lay in the perceptual frame it imposed.

The Hebrew word rahamim, compassion, shares a root with rehem, womb. Compassion is perception — an awareness of vulnerability. Halakhic life trains that awareness until it becomes reflex. Pidyon nefesh, the redemption of a soul, is not an algorithmic act. It requires the capacity to be claimed by another.

We will use AI. But we must guard the disciplines of perception that make its use safe for the soul. Trust, dignity, encounter — these are not quaint values. They are perceptual technologies that are older and deeper than anything in silicon.

AI may offer infinite mirrors. Judaism still commands us to look through the glass and find, on the other side, the face of the Other.

Joseph Feit
Joseph Feit is an attorney and independent scholar whose work explores the intersection of halakhah, jurisprudence, and contemporary intellectual challenges. His writing addresses questions of moral perception, legal classification, and the limits of technological mediation within Jewish law. He has been involved for over three decades in legal and educational initiatives supporting Ethiopian Jewry. He lives in the United States.