Artificial “Judges”? – Thoughts on AI in Arbitration Law

The following is a guest post by Viktoria Simone Fritz, a foreign law intern working with Foreign Law Specialist Jenny Gesley at the Global Legal Research Directorate of the Law Library of Congress.

Wouldn’t it be great to just put all documents submitted and produced in a specific legal dispute into a machine, wait a few seconds – or let’s say a few minutes for starters – and receive a decision or an arbitral award? Sounds tempting, doesn’t it? Well, maybe this will be an option in the not too distant future.

Almost all of us already use some kind of technology in our jobs, be it our laptops or computers, various search engines, or virtual meeting software while sitting in our home offices all over the world. Especially in times of a global pandemic, challenges emerge – not only, but also – for courts, practitioners, and arbitration. Suspending proceedings could create even more legal problems, especially with regard to the parties' financial situation. To deal with these issues, artificial intelligence (AI) tools may be able to facilitate a smooth and uninterrupted process. However, some of these tools are not yet available or are used infrequently, and of course, there is also a certain amount of skepticism towards the unknown. Are we ready for technology to take over the positions of judges and arbitrators, to render awards, and eventually decide our legal fates?

This blog post starts with a definition of AI, then presents different AI tools that may be used throughout arbitral proceedings, highlights the main criticisms raised by practitioners, and ends with a short conclusion.

Artificial Intelligence. Photo by Flickr user 6eo tech, Jan. 26, 2019. Used under Creative Commons license.

Definition of AI

There is no single common definition of AI, but rather many similar ones. According to the World Intellectual Property Organization (WIPO):

AI is generally considered to be a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence. Machine learning and deep learning are two subsets of AI. In recent years, with the development of new neural networks techniques and hardware, AI is usually perceived as a synonym for “deep supervised machine learning”.

Types of AI Tools Used in Arbitral Proceedings

Similar to researchers and staff at the Library of Congress, practitioners in arbitration are used to doing a lot of research online as well as working with online research programs. Many jurisdictions, such as Austria, Germany, Liechtenstein, the European Union, or the US, among others, already provide online research tools and online collections of legal documents or judicial decisions, especially from higher courts. However, due to the high level of confidentiality in arbitration, this is not very common with regard to arbitral proceedings. Nevertheless, there is some tendency towards more transparency, especially in investor-state arbitration. In addition, certain research tools may also be used to facilitate the gathering of relevant legal material.

Other AI tools may help parties and practitioners select arbitrators, counsel, experts, or even witnesses and find the “best fit” for their specific case. With these kinds of tools, people may be able to save time and costs, and reach a conclusion faster if a machine presents the optimal candidate to them.

Another category of AI tools can facilitate certain procedural phases: for example, voice recognition software that records and transcribes proceedings instead of a court reporter producing a transcript, tools for searching and summarizing evidence, translation tools, or even tools for drafting the compilatory parts of the arbitral award or other legal documents.

Probably the most interesting group of AI tools are those used in the adjudication phase, and one day, these might include those tools or machines I mentioned in the beginning: the “AI judge” that renders decisions for us. However, some jurisdictions, such as France (French Code of Civil Procedure, art. 1450) or the Netherlands (Dutch Code of Civil Procedure, art. 1023), explicitly require a human being as an arbitrator. AI tools used in the adjudication phase include predictive justice software.

On the topic of predictive justice, in 2019, France enacted article 33 of the Justice Reform Act which states that “[n]o personally identifiable data concerning judges or court clerks may be subject to any reuse with the purpose or result of evaluating, analyzing or predicting their actual or supposed professional practices.” According to this provision, judicial analytics, which uses “statistics and machine learning to understand or predict judicial behavior,” is basically prohibited. There are already some legal prediction technology tools being used in Colombia to support corporate litigation.

Criticism of AI Tools

Before concluding this post, I would like to highlight some of the criticism of these AI tools raised by practitioners.

AI tools are able to facilitate human tasks and could help to make them more efficient and cost-effective (as often requested by the parties). However, the algorithms used in these processes are not perfect. Intentionally or unintentionally incomplete or selected data, or data programmed in a selective way, could lead to biased or unreliable results. Furthermore, AI tools require a lot of information, in particular sensitive data, and personal details have to be collected, processed, and stored somewhere, which might raise privacy and data protection concerns in some countries. In addition, even correctly programmed tools may be used in a dysfunctional way. (Cecilia Carrara, The Impact of Cognitive Science and Artificial Intelligence on Arbitral Proceedings – Ethical Issues, in Austrian Yearbook on International Arbitration 2020, at 522 & 527.)

Being programmed by humans, robots tend to replicate their programmers' biases. While the biases of human arbitrators are something we can usually deal with, it is unclear how one might detect the possible hidden biases of robots. (Christian Aschauer, Arbitration in the Current Wave of New Technologies, in LCIA Perspectives.)

During the procedural phase in particular – for example, in the selection of experts and witnesses during the taking of evidence – unequal access to these AI tools may raise concerns about the principle of due process and, eventually, about the outcome of the proceedings.

Last but not least, an important point with regard to decision-making is the issue of "reasoning." AI tools may be able to render arbitral awards based on information that was once provided to them, and under specific circumstances gain even more understanding of it through machine-learning algorithms, but they are not able to provide reasoning, or at least not in a way intelligible to humans. (Maxi Scherer, International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution, in Austrian Yearbook on International Arbitration 2019, at 512.) This might be the biggest flaw of AI tools in the context of arbitration (or with regard to decision-making in general) right now, and it may lead to further issues: parties need reasoning in order to understand and accept the rendered award or, if an appeal is even possible, to prepare a challenge.

Conclusion

AI tools can and should facilitate the work of practitioners and help the parties throughout an arbitral proceeding. However, it must be ensured that AI tools do not infringe rights or benefit only certain people because of specific access requirements or unequal treatment. If a high standard of confidentiality and data security can be provided and unbiased use is possible, AI might be able to bring some change to the world of arbitration.
