Algorithms could help improve judicial decisions, say MIT economists

AI algorithms could help fix systemic biases in court decisions, the study suggests.

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. (photo credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

Decision-makers, such as doctors, judges, and managers, make consequential choices based on predictions of unknown outcomes. Do they make systematic prediction mistakes based on the available information? If so, in what ways are their predictions systematically biased?

Now, a new paper published by Oxford University Press in The Quarterly Journal of Economics, entitled “Identifying Prediction Mistakes in Observational Data,” found that replacing certain judicial decision-making functions with algorithms could improve outcomes for defendants by eliminating some of the systematic biases of judges.

Decision-makers make choices based on predictions of unknown outcomes, said the authors, led by MIT economics Prof. Ashesh Rambachan. Judges, in particular, decide whether to grant bail to defendants and how to sentence those convicted.

The researchers tested one such behavioral assumption – whether decision-makers make systematic prediction mistakes – and further developed methods for estimating the ways that their predictions are systematically biased. Analyzing the New York City pretrial system, the research reveals that a substantial portion of judges make systematic prediction mistakes about pretrial misconduct risk given defendant characteristics, including race, age, and prior behavior.
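To make the idea concrete, here is a minimal sketch (in Python, with hypothetical column names such as "released" and "failed_to_appear") of one way to probe for systematic prediction mistakes: estimate misconduct risk from observed outcomes, then check whether release decisions shift with a characteristic such as age even after holding that risk fixed. This illustrates the intuition only; it is not the paper’s actual econometric framework.

```python
# Illustrative sketch only -- not the paper's method. All column names
# and the file name are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pretrial_cases.csv")  # hypothetical dataset

# Step 1: estimate failure-to-appear risk from released defendants,
# for whom the outcome is actually observed.
released = df[df["released"] == 1]
risk_model = smf.logit(
    "failed_to_appear ~ prior_fta + charge_severity", data=released
).fit()
df["predicted_risk"] = risk_model.predict(df)

# Step 2: if the judge's beliefs are accurate, release decisions should
# track predicted risk; a significant coefficient on defendant_age,
# holding risk fixed, is suggestive of a systematic prediction mistake.
decision_model = smf.logit(
    "released ~ predicted_risk + defendant_age", data=df
).fit()
print(decision_model.summary())
```

One caveat baked into the paper’s design: because misconduct is observed only for released defendants, a naive version of Step 1 suffers from selection bias, which is where the quasi-random assignment of judges described below comes in.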
The US Supreme Court building is seen in Washington, US, April 6, 2023. (credit: REUTERS)

The research used data from judges in New York City who are quasi-randomly assigned to cases within cells defined by courtroom and shift. The study tested whether judges’ release decisions reflect accurate beliefs about the risk of a defendant failing to appear for trial, among other outcomes. It drew on 1,460,462 New York City cases, of which 758,027 were subject to a pretrial release decision.
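Quasi-random assignment is what makes such comparisons credible: within a courtroom-by-shift cell, defendant characteristics should not predict which judge hears a case. A minimal balance-check sketch, again with hypothetical column names, might compare nested regressions with an F-test:

```python
# Balance-check sketch, assuming hypothetical columns "defendant_age",
# "judge_id", "courtroom", and "shift".
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("pretrial_cases.csv")  # hypothetical dataset

# Within courtroom-by-shift cells, judge identity should not predict a
# pre-determined characteristic; jointly significant judge indicators
# would cast doubt on quasi-random assignment.
restricted = smf.ols("defendant_age ~ C(courtroom):C(shift)", data=df).fit()
unrestricted = smf.ols(
    "defendant_age ~ C(judge_id) + C(courtroom):C(shift)", data=df
).fit()
print(anova_lm(restricted, unrestricted))  # F-test on the judge dummies
```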

The paper found that the decisions of at least 32% of judges in New York City are inconsistent with defendants’ actual ability to post a specified bail amount and with their risk of failing to appear for trial. The research indicates that when both defendant race and age are considered, the average judge makes systematic prediction mistakes on about 30% of the defendants assigned to them.

When both the defendant’s race and whether they were charged with a felony are considered, the average judge makes systematic prediction mistakes on 24% of the defendants assigned to them.

While the paper notes that replacing judges with an algorithmic decision rule has ambiguous effects that depend on the policymaker’s objective (for example, is the desired outcome one in which more defendants show up for trial, or one in which fewer defendants sit in jail awaiting trial?), it appears that replacing judges with an algorithmic decision rule would improve trial outcomes by 20%, as measured by the failure-to-appear rate among released defendants and the pretrial detention rate.
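The trade-off the paper describes can be made tangible with a simple simulation: sweeping a release threshold over predicted risk traces a frontier between the pretrial detention rate and the failure-to-appear rate among released defendants. The sketch below uses simulated data and illustrates the trade-off only, not the paper’s decision rule.

```python
# Simulated illustration of the detention / failure-to-appear trade-off.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
risk = rng.beta(2, 8, size=n)       # hypothetical predicted misconduct risk
fta = rng.random(n) < risk          # simulated failure-to-appear outcomes

for threshold in (0.2, 0.3, 0.4, 0.5):
    released = risk < threshold     # release everyone below the cutoff
    detention_rate = 1 - released.mean()
    fta_rate = fta[released].mean() # FTA rate among released defendants
    print(f"threshold={threshold:.1f}  detained={detention_rate:.1%}  "
          f"FTA among released={fta_rate:.1%}")
```

A stricter threshold lowers the failure-to-appear rate among the released but detains more defendants, which is why the effect of an algorithmic rule depends on what the policymaker is trying to achieve.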
“The effects of replacing human decision-makers with algorithms depend on the trade-off between whether the human makes systematic prediction mistakes based on observable information available to the algorithm versus whether the human observes any useful private information,” said Rambachan. “The econometric framework in this paper enables empirical researchers to provide direct evidence on these competing forces.”

