News
October 3, 2024

Artificial Intelligence in the Colombian Judicial System: A Summary and Analysis of Decision T-323 of 2024

By: Luisa Herrera - Professor at Universidad Externado and PhD Student at Queen Mary University of London

Introduction

This brief note for Universidad Externado de Colombia aims to summarize and offer insights into Decision T-323 of 2024 of the Colombian Constitutional Court, which addresses the use of artificial intelligence (AI) by judges in Colombia. The ruling raises crucial questions about the interaction between technology and law, focusing on the potential impact of AI on judicial independence and impartiality, as well as whether its use could undermine due process. The judgment examines the risks associated with the use of AI tools such as ChatGPT in judicial decision-making and sets forth key criteria for their application.

Definition of the “Tutela” Action

In the Colombian legal system, the “tutela” is a constitutional mechanism established under the 1991 Constitution. It allows any individual to request immediate protection of their fundamental rights when these are threatened or violated by the actions or omissions of public authorities or private individuals. The tutela seeks effective protection of fundamental rights, and its decisions are mandatory and enforceable. Although provisional in nature, tutela decisions may be reviewed by the Constitutional Court, which selects and reviews cases it deems significant for jurisprudential development.

Analysis of Decision T-323 of 2024

The Constitutional Court first reviewed (i) whether the tutela met the procedural requirements and whether the case had become moot due to supervening circumstances. It then addressed (ii) the use of artificial intelligence (AI) in tutela decisions and (iii) issues related to co-payments and moderating fees within the General Social Security Health System (SGSSS), the provision of transportation services, and the right to comprehensive treatment for a child diagnosed with autism spectrum disorder (ASD).

On the Use of Artificial Intelligence in Tutela Decision-Making

Since the judge in the second instance used AI, specifically ChatGPT 3.5, to issue the decision in this tutela case, the Court examined whether this use violated the fundamental right to due process. Concerns arose about whether the decision was made by a human judge or by an AI system and whether it was properly reasoned or the result of AI-generated hallucinations or biases.

The Court addressed several key points: (i) the right to due process, (ii) due process in a jurisdictional system that uses AI, (iii) basic concepts of AI and its functioning, (iv) societal impacts of AI tools, (v) the status of AI in Colombia, (vi) international AI regulations, including soft law and national regulatory initiatives, (vii) specific experiences of AI in judicial practice, (viii) the principle of the natural judge in a jurisdictional system that employs AI, and (ix) evidentiary due process in such a system.

On Co-payments, Moderating Fees, Transportation, and Comprehensive Treatment for a Child with ASD

On the merits, the Court evaluated whether the fundamental rights to health, dignity, and social security of a child with ASD had been violated by the child’s Health Insurance Provider (EPS), as he had not been exempted from co-payments and moderating fees, had not been provided inter-city transportation, and had not received comprehensive treatment.

The Court explored the following issues: (i) children and adolescents as subjects of special constitutional protection, (ii) individuals with disabilities as subjects of special constitutional protection, (iii) the right to health, (iv) protection of children and adolescents with disabilities in relation to their fundamental right to health, (v) prohibition of administrative barriers to health services, (vi) comprehensive treatment, (vii) the recognition of transportation expenses for the patient and a companion, and (viii) exemption from co-payments and moderating fees for individuals with physical or cognitive disabilities.

Court’s Findings

Regarding the alleged violation of due process due to the use of ChatGPT, the Court found that there was no improper substitution of the judge’s jurisdictional function by AI, as the judge had already reached a decision before consulting AI for additional input. However, the Court emphasized that the principles of transparency and responsibility were not fully observed in the use of AI, even though privacy was upheld, since no personal data relating to the child or the parties involved was used.

On the substantive issue, the Court ruled that the child should be exempt from all co-payments and moderating fees for services and medications related to his treatment. Additionally, it concluded that the EPS had not properly authorized transportation services for all necessary consultations and treatments and ordered an expansion of coverage.

The Court also acknowledged the complexities and challenges in governing AI, noting the absence of consistent national and international standards aligned toward common purposes: “The lack of shared standards and reference points between national and multinational risk management frameworks, as well as the multiple definitions of AI used in these frameworks, has complicated AI governance, despite the need for space to accommodate different regulatory approaches reflecting the world’s social and cultural diversity.”

Court’s Decision

The Court ruled that the second-instance decision had not violated due process, thus validating the judicial proceedings. However, it stressed the importance of the Superior Council of the Judiciary issuing clear guidelines on AI use within the judicial system, ensuring that judges adopt best practices to prevent these technologies from compromising judicial autonomy and independence.

Concerning the child’s rights, the Court partially upheld the lower courts’ rulings and ordered the EPS to fully implement the co-payment exemptions and ensure comprehensive transportation coverage for all medical needs of the child.

Brief Commentary on the Judgment

Judgment T-323 of 2024 offers several definitions of AI, with Martín Tironi highlighting that “the phenomenon of artificial intelligence cannot be reduced to a purely technological issue, as its applications are increasingly shaping the functioning of democratic systems, public policies, innovation, cultural production, the economy, and even the intimate spaces of personal relationships.”

In his book Nexus, Harari warns of the dangers of AI and information control: “If only a few dictators decide to trust AI, this could have far-reaching consequences for all humanity,” especially if AI systems begin to make political and dictatorial decisions. This reflection underscores the risks of giving too much control to AI tools. Although Harari does not view this as a prophecy, he suggests that, like nuclear weapons, AI must be handled cautiously. Dictators who believe AI will necessarily benefit them may find themselves caught by the technology they seek to control.

Judicial Matters and AI

One key issue in the judgment is the use of AI in judicial decision-making, raising questions about potential due process violations: Was the decision made by a human judge or an AI system? Was it properly motivated, or was it based on AI-generated biases?

Decision T-323 specifically examines whether using ChatGPT could infringe on due process and includes a recent definition from the OECD that describes AI as a system that infers outcomes—such as predictions or decisions—from input data, influencing physical or virtual environments. This raises concerns about whether the responsibility of judicial decision-making is being improperly transferred to an autonomous system, potentially violating the principle of the natural judge.

AI Lifecycle and Governance

The ruling also delves into the lifecycle of AI systems, which includes stages such as design, data handling, validation, deployment, and supervision. Autonomy and adaptability are core features of AI, necessitating constant evaluation of the extent to which responsibilities are delegated to technology versus retained under human oversight. Autonomy refers to the degree of human intervention required, as current AI systems are capable of automating decision-making processes and even executing decisions independently of human input. Adaptability reflects the system’s ability to modify its behavior based on incoming data.

In terms of AI governance, there is no global consensus on how to implement AI, nor is there interoperability between different legal frameworks, which presents challenges regarding access and security. Furthermore, the use of AI in judicial decisions could compromise the independence of judges, as AI systems might carry biases from the data used in their training.

AI and the Colombian Judicial System

With regard to Colombia, the ruling emphasizes the need to advance AI regulation, especially in the public sector. Although there is currently no specialized legal framework, it is crucial to establish strategies for self-regulation and oversight to ensure the ethical and responsible use of AI; the legislative initiatives currently before the Congress of the Republic, including the 2023 Statutory Law Project No. 200, are discussed below.

The modernization of the judicial system has been accelerated by the COVID-19 pandemic, which prompted the digitalization of judicial services through Decree 806 of 2020, CONPES 4024 of 2021, and Law 2213 of 2022. These measures introduced information technology to streamline judicial processes, improve service delivery, and support economic recovery.

In Colombia, it is essential to make progress in defining the appropriate use of AI, particularly in the public sector, and to establish clear boundaries. While there is no binding legal framework specifically addressing AI, the country does have regulations concerning information technology. As such, it is important to consider the adoption of self-regulation, oversight mechanisms, and collective efforts to ensure the responsible use of AI.

The Congress of the Republic is currently deliberating various legislative initiatives aimed at regulating AI in Colombia. The Senate has reported on bills such as No. 130 of 2023, No. 091 of 2023, and No. 059 of 2023, although Bill No. 253 of 2022 was archived. Meanwhile, the House of Representatives is discussing the 2023 Statutory Law Project No. 200, which aims to define and regulate AI, establishing limits on its development, use, and implementation.

The regulatory approach taken will be critical, as it will determine the scope of AI in the public sector, particularly in the judicial branch, and the protection of fundamental rights. Globally, various models of AI governance have been developed, including risk-based regulation, fundamental rights-based approaches, principle-based frameworks, standards-based regulation, and “command-and-control” rules.

In terms of the use of information technology in Colombia’s public sector, the judicial system’s modernization has made significant progress over the last decade, with the COVID-19 pandemic further accelerating the process. Since 2015, the government has promoted digital transformation within public administration through the implementation of a digital government policy.

While awaiting the passage of specific AI legislation, it will be necessary to rely on self-regulation and oversight mechanisms as effective strategies to foster innovation and maximize the benefits of these technologies without compromising fundamental rights. This process must be guided by ethical and responsible practices, respecting constitutional guarantees and the framework of a Social Rule of Law.

Moreover, emerging technologies such as AI are covered under existing legislation on information and communications technologies (ICT), as established by Law 1341 of 2009. This law defines ICT as the set of resources enabling the processing and transmission of information, including AI systems.

  • Decree 1078 of 2015 strengthens this regulatory framework by mandating the implementation of the Digital Government Policy in coordination with the legislative and judicial branches and oversight bodies. It also establishes principles such as harmonization, trust, innovation, and respect for human rights.
  • Additionally, CONPES Document 3975 of 2019 defines AI as a field of computer science dedicated to solving cognitive problems typically associated with human intelligence. It highlights the importance of preparing Colombia for the economic and social changes brought about by AI and other technologies of the Fourth Industrial Revolution.
  • In November 2020, the “Task Force for the Development and Implementation of Artificial Intelligence in Colombia” was launched, proposing mechanisms for adopting emerging technologies in both the public and private sectors. The government has also released several publications on this topic, such as a sandbox on privacy by design in AI projects and a governance model for infrastructure related to emerging technologies.
  • In April 2021, the National Planning Department issued a plan to continue implementing international AI principles and standards. In October 2021, the “Ethical Framework for Artificial Intelligence in Colombia” was published, offering recommendations to public entities on the ethical use of AI.
  • Key ethical principles include transparency, security, non-discrimination, and human oversight in automated decisions. These principles aim to ensure that AI is used responsibly and for the benefit of society.

During the pandemic, Colombia promoted the digitalization of the judicial system through regulations such as CONPES 4024 of 2021 and Decree 806 of 2020. These measures were made permanent by Law 2213 of 2022, allowing for the continued use of information technologies and streamlining judicial processes in response to the need for remote proceedings.

  • The main objectives of Decree 806 of 2020 were to implement ICT in judicial proceedings, expedite legal processes, provide more flexible access to justice, and support economic recovery. The decree’s 16 articles focused on two key areas: the implementation of information technologies and procedural modifications to expedite case processing.
  • Finally, regarding personal data protection, Statutory Laws 1266 of 2008 and 1581 of 2012, which regulate data protection in Colombia, stipulate that any AI system handling personal data must comply with these legal frameworks. These laws classify data as public, semi-private, or private and emphasize the importance of ensuring that processed data is accurate, complete, and comprehensible.

While Colombia does not yet have a specific AI regulatory framework, there is a system of technological safeguards that protects fundamental rights in the use of these technologies.

Data Protection and Regulatory Framework

It is important to note that while there is no specific regulation for AI, Statutory Laws 1266 of 2008 and 1581 of 2012, which govern data protection in Colombia, apply to all technological tools, including AI. Law 1581 of 2012 defines personal data, including “sensitive data,” whose protection is essential in any system that utilizes AI. Adherence to these principles of accuracy and data quality is critical to ensure that AI-based decisions do not result in discrimination or errors.

The Supreme Court of Justice, in its 2023 SC370 ruling, recognized the human right to access and benefit from technological advancements, as established in various international treaties. Consequently, any use of AI in Colombia must align with these principles.

Conclusion

Decision T-323 of 2024 provides critical insights into the responsible use of AI in Colombia’s judicial system. While AI offers significant potential for improving judicial efficiency, its application must be governed by principles of transparency, responsibility, and human supervision to preserve the fundamental rights of due process, judicial independence, and impartiality. The ruling establishes that AI cannot replace human judgment in judicial decision-making but can serve as a valuable support tool in administrative and auxiliary tasks, as long as it remains under the strict supervision of qualified legal professionals.