2024 / 2 / 29

Potential Human Rights Violations in Risk Assessment of Artificial Intelligence Systems: Two Cases of Recidivism Prediction Tools

Risk assessment is essential in regulating the development of artificial intelligence (henceforth AI). Two aspects of risk assessment deserve particular emphasis from those advocating it as the foundation for monitoring AI development: 1) whether fundamental human rights are under potential threat, and 2) who bears the risk, in particular, whether vulnerable groups bear most of it. The EU Artificial Intelligence Act, passed in 2023, also features the first of these two aspects in its risk assessment approach.

This article compares two similar AI-assisted systems to highlight the importance of who bears the risk and what sorts of risks are borne. One is the Artificial Intelligent System for Risk Assessment of Recidivism of Probationers and Parolees (henceforth RARPP), a system recently reported to be in the pilot stage in Taiwan. The other is Correctional Offender Management Profiling for Alternative Sanctions (henceforth COMPAS), a widely used system in the U.S. Both systems use algorithmic recidivism risk predictions, but in different ways: in COMPAS, the predictions are mainly used in sentencing and parole decisions, while in RARPP they are used in the assignment of probation conditions. As we argue below, using recidivism risk predictions in sentencing and parole decisions may carry a higher risk of human rights violations than using them in probation condition assignments.

A wrong prediction has drastically different implications in the two contexts. We first discuss the impact from an individual’s perspective. In Taiwan, an incorrect recidivism risk prediction would lead to a suboptimal assignment of probation conditions, hampering the intervention’s effectiveness in lowering the recidivism risk. In the U.S. context, by contrast, a wrong prediction might result in a person with a high recidivism probability being released on parole, or a person with a low recidivism probability being denied parole. A key observation is that a poor probation condition assignment does not deny a person probation altogether: everyone is still assigned some probation conditions. By analogy, in schooling we do not worry about human rights violations when a student does not receive instruction tailored to their learning level, as long as every student receives some education. The same logic applies here. In parole and sentencing decisions, however, personal freedom is directly affected whenever the recidivism risk predictions influence decision makers’ final decisions.

The EU AI Act reflects this concern about how AI applications might violate human rights. It specifies that an assessment of potential fundamental rights violations is required before any high-risk AI system (“AI systems that negatively affect safety or fundamental rights”) is put into use. Among proposals for regulating AI systems, the EU AI Act places the greatest emphasis on an AI system’s impact on fundamental human rights.

From a societal perspective, too, releasing a parolee with a high recidivism risk may have a larger impact on society as a whole (e.g., in terms of the number of crimes committed), a negative outcome far greater than that of assigning a parolee with a high recidivism risk to less intensive probation conditions.

The discussion of AI bias should also include an assessment of the risk of fundamental human rights violations by the AI system. AI systems may treat protected groups differently, and such scenarios are of particular concern. For example, there is considerable debate over whether COMPAS predictions disfavor people of color, particularly Black people, and in turn lead to unfair sentencing and parole decisions. We argue that it is essential not only to examine whether the accuracy of predictions differs across groups but also to determine whether fundamental human rights are potentially violated when the predictions are applied.
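
To make the first part of that examination concrete, the following sketch shows one common way to compare a recidivism classifier’s error rates across groups. It is a minimal illustration, not an analysis of actual COMPAS or RARPP data: the group labels, column names, and toy records are all assumptions made for this example.

```python
import pandas as pd

# Toy data for illustration only; the column names ("group", "predicted_high_risk",
# "reoffended") are hypothetical, not the schema of COMPAS or RARPP data.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   0,   1,   0,   1,   1,   0,   0],
    "reoffended":          [0,   0,   1,   1,   0,   1,   1,   0],
})

def error_rates(g: pd.DataFrame) -> pd.Series:
    """False positive and false negative rates for one group."""
    fp = ((g["predicted_high_risk"] == 1) & (g["reoffended"] == 0)).sum()
    fn = ((g["predicted_high_risk"] == 0) & (g["reoffended"] == 1)).sum()
    negatives = (g["reoffended"] == 0).sum()  # people who did not reoffend
    positives = (g["reoffended"] == 1).sum()  # people who did reoffend
    return pd.Series({
        "false_positive_rate": fp / negatives if negatives else float("nan"),
        "false_negative_rate": fn / positives if positives else float("nan"),
    })

# A large gap between groups (e.g., a much higher false positive rate for one group)
# is the kind of disparity at the center of the COMPAS debate.
print(df.groupby("group")[["predicted_high_risk", "reoffended"]].apply(error_rates))
```

Note that equal overall accuracy can coexist with very different false positive rates across groups, which is why the comparison is made per group and per error type, and why, as argued above, such disparities matter most when the predictions feed into decisions that restrict personal freedom.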

We now turn to the question of whom AI systems affect most. Who would benefit from AI systems, and who could be harmed by them? Specifically, do vulnerable groups bear most of the risk entailed by the introduction of AI systems?

Both offenders and correctional officers might benefit from the two AI systems discussed in this article. The systems can reduce the workload of correctional officers. They may also mitigate potential bias (against people with particular characteristics) held by decision-makers in the correctional system, leading to a more consistent and fairer justice system, which benefits offenders. On the other hand, the risk-bearers in both cases are the offenders, who are more vulnerable than other actors in the justice system, such as judges and probation officers. Yet the feedback of these risk-bearers on AI systems is rarely discussed.

Comparing the applications of RARPP and COMPAS, we find that the severity of potential human rights violations from AI systems depends on how the government applies the systems’ predictions. Given this variability, gathering risk-bearers’ perspectives on introducing AI systems into judicial systems is essential. The focus on risk-bearers is critical because, as discussed above, the risk-bearers in these scenarios are also the vulnerable group, namely the offenders. We therefore suggest that police and judicial departments that currently have a strong interest in introducing AI systems to assist their decision-making discuss the following questions as part of the risk assessment of those systems (potentially during pilot stages):

  1. What channels do offenders have for providing feedback on decisions assisted by AI systems?
  2. What feedback have offenders actually provided?
  3. What risks do offenders bear, and are their human rights violated?
  4. Is it necessary to design and introduce an AI system that targets offenders?

(Authors: Zhen-Rong Gan is a researcher at the Research Institute for Democracy, Society and Emerging Technology, with research interests in AI ethics and governance; Wei-Lin Chen is an overseas researcher at the Institute, with research interests in public economics. We thank the Deputy Director of the Institute, Mu-Yi Chou, and overseas researcher You-Hao Lai for their comments.)
