Abeer Sharma & Vishwas Kumar Tripathi
Introduction
International Arbitration (IA), which has experienced rapid recent growth, is a form of dispute resolution in which parties opt for private international arbitrators instead of national courts. IA is generally governed by individual states’ arbitral laws and by global frameworks such as the UNCITRAL Model Law and the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards.
However, the introduction of Artificial Intelligence (AI) into IA has had significant consequences, affecting its operation and enforcement within different national frameworks. AI usage in IA can also conflict with nations’ public policy (each nation’s specific legal and moral standards), as most arbitral laws prohibit enforcement of an award that violates the public policy of the state where enforcement is sought. Countries such as the United States, Singapore, and the United Kingdom have developed arbitral laws aligned with their public policy, with varying stages of AI integration. Against this backdrop, this blog explores AI usage in IA, its impact on public policy in specific jurisdictions, and solutions to the challenges of this evolving landscape.
The Use of AI in IA and its Effects on Public Policy
Artificial Intelligence (AI) refers to the capability of digital computers or robots to perform tasks commonly associated with intelligent beings. The technology has generated significant buzz in the legal community, affecting both lawyers and judges. While some embrace AI for its advantages in legal research, drafting, and business efficiency, others are sceptical, mainly citing concerns about transparency, fairness, and public policy. Widely available AI tools are now used by courts in general proceedings and by arbitrators in IA, such as computer-assisted review tools used for translation and transcription in proceedings.
One of the most practical applications of AI tools is AI-driven electronic data review, which excels at processing large volumes of data and was given due consideration by the English court in Pyrrho Investments Limited v MWB Property Limited. There, the claimant sued the defendant over unpaid money, and a large number of electronic documents relating to the dispute were reviewed through predictive coding (computer-assisted review), which helped provide a straightforward interpretation of the documents. The court pointed to favourable factors, noting that the cost of manually searching the documents would have been enormous, amounting to several million pounds at least, and that the accuracy of the software was commendable. This illustrated the court’s willingness to allow the use of AI and its encouragement of growing judicial acceptance of AI in the adjudication process.
Beyond document review, AI facilitates document production, identifies relevant materials, redacts sensitive data, and offers translation and transcription services, making it particularly valuable in IA. AI also assists parties in selecting arbitrators by examining candidates from similar past disputes and providing informed suggestions. Moreover, AI streamlines the administrative workload, allowing arbitrators and lawyers to focus on the parts of the process that require the most human time and judgment.
Despite its growing use, AI is still employed in arbitration under human oversight, mainly as a tool for secondary research. Its future primary usage, however, could challenge the very principles of public policy, which rests on abstract, generalised notions such as fairness and impartiality. Parties to a dispute might be tempted to use unfair means or manipulate AI systems in their favour, making AI usage detrimental to the general laws and principles of various countries and an obstacle to the applicability of public policy frameworks, as discussed in the next section.
The Challenges for AI Inclusion in IA Concerning Public Policy of Major Jurisdictions
As discussed above, the integration of AI into IA has had multidimensional effects on public policy across countries, shaped by their respective principles and needs. This section analyses the public policy of major jurisdictions, including the U.S., U.K., and Singapore, offering a global overview of how AI usage in IA influences these policies.
The United States
The U.S. has ratified the New York Convention, Article V(2)(b) of which denies enforcement of an arbitral award if its “recognition or enforcement of the award would be contrary to the public policy of that country”. This empowers the U.S. to refuse enforcement of any arbitral award that contradicts its public policy. In the context of arbitration, U.S. public policy is reflected in Section 10 of the Federal Arbitration Act, which allows parties to challenge an arbitral award on grounds such as an ‘award procured by corruption, fraud, or undue means’ (Section 10(a)(1)), ‘evident partiality or corruption in the arbitrators’ (Section 10(a)(2)), and arbitrators guilty of misconduct, such as refusing to hear pertinent and material evidence (Section 10(a)(3)). Moreover, in cases such as Parsons & Whittemore Overseas Co Inc v Societe Generale de l’Industrie du Papier and Eastern Associated Coal Corp. v. United Mine Workers of America, which concerned disputes between private parties affecting society at large, U.S. courts clarified the concept of public policy, holding that disputes tried in a justiciable civil manner fall outside its scope. In other words, a dispute arbitrated in accordance with the principles of natural justice, morality, and due process of law, as recognised by the courts, does not attract the public policy exception.
AI usage in IA can violate the above provisions and precedents. For instance, as described above, AI systems are used as translation software in international arbitrations to sift through large volumes of foreign-language documents. These translations can sometimes be inaccurate, as seen in Occidental Petroleum v Ecuador, where a party lost part of its claim because the tribunal premised its award on a mistranslation. Such errors can amount to misconduct on the part of arbitrators and be unjust to foreign parties; if such an award were hypothetically sought to be enforced in the United States, enforcement might be denied for violating public policy. AI can also be used for complex tasks such as predicting case outcomes, conducting legal research, or drafting deposition questions. These applications can sometimes produce wrong answers, as the mathematical and semantic workings of AI algorithms can generate hallucinations and fictitious cases, which can lead to an award procured by fraud; if arbitrators accept such output, it can raise an assumption of partiality and corruption against them, contrary to morality and the due process of law. An example was observed in New York, where two lawyers were sanctioned USD 5,000 for relying entirely on fictitious cases created by AI.
The United Kingdom
The U.K. has been a party to the New York Convention since 1975, empowering it under Article V(2)(b) to deny enforcement of any international arbitral award contrary to its public policy. U.K. public policy concerning arbitration is governed by the Arbitration Act 1996, which includes Section 68(2)(g), addressing serious irregularity where the award is obtained by fraud or is procured in a way contrary to public policy. Further, Sections 69(3)(c)(ii) and 69(3)(d) of the Act provide grounds for appeal where the question is one of general public importance and the tribunal’s decision is open to serious doubt, and where, despite the agreement to arbitrate, it is just and proper in all the circumstances for the court to determine the question, respectively. Moreover, Section 58, while making an award final and binding, preserves the right to challenge it, implicitly recognising public policy among the available grounds. Additionally, case law such as Patsystems Holding Ltd v Neilly, which dealt with the alignment of reasonable restrictive covenants with public policy, recognised public policy as a ground of challenge where an award would be injurious to the public good or wholly offensive to a reasonable and informed member of the public.
The usage of AI in IA can conflict with these provisions. AI software assists in drafting by structuring and setting out arguments and aids arbitrators in judgment writing. The problem lies in AI’s algorithmic opacity, known as the “black box problem”: the input and output are known, but the process between them, involving deep neural networks with non-linearities and combinations of models, remains opaque and incomprehensible even to a prudent observer. This lack of transparency can create serious irregularity in the process and raise serious doubt about an award’s legitimacy, because an illegal matter could be made arbitrable under the cover of AI arbitration, violating the principles of natural justice underlying public policy. Additionally, as the primary use of AI tools grows, AI could replace human arbitrators or be used to select arbitrators by analysing their track records, as seen with Jus Mundi’s Conflict Checker, whose database contains information about arbitrators and matches parties with arbitrators according to their specific needs. AI, however, operates on the limited data curated by its developer, which may confer limited options on the parties or produce errors in judgment, affecting the dispute resolution process and everyone involved in it. Ultimately, this makes the court the more appropriate forum to decide such disputes or to help parties appoint arbitrators with better legal and social knowledge of the case.
Singapore
Singapore, a party to the New York Convention, has emerged as a hub of international arbitration, supported by laws such as the Arbitration Act 2001 and the International Arbitration Act 1994, which contain public policy exceptions. For instance, Section 48(1)(b)(ii) of the Arbitration Act allows an arbitral award to be set aside if it is ‘contrary to public policy’. Sections 49(5)(c)(ii) and 49(5)(d) of the Act, similar to the U.K. provisions, allow an appeal against an arbitral award on public policy grounds, particularly where the question is one of general public importance and the tribunal’s decision is open to ‘serious doubt’, making it just and proper in all the circumstances for the court to determine the question. Further, Section 24 of the International Arbitration Act 1994 allows an international arbitral award to be set aside on the grounds provided under Article 34(2) of the UNCITRAL Model Law on International Commercial Arbitration, which includes conflict with the public policy of the State. Public policy is thus a crucial factor in determining the validity of arbitral awards in Singapore.
Additionally, in PT Asuransi Jasa Indonesia (Persero) v. Dexia Bank, which involved a dispute over the validity of an arbitration agreement, the Singapore court described the pillars of public policy as ‘fundamental notions of justice and morality’, meaning that proceedings should be conducted fairly, with honesty and good faith from both parties. Further, in Tan Poh Leng Stanley v. Tang Boon Jek Jeffrey, where an arbitrator issued an additional award after the final award, the court held that the arbitrator’s actions were contrary to public policy, described as adherence to the process established by law and the promotion of actions that maintain public confidence and the integrity of the judicial process.
The application of AI in IA can conflict with these precedents and provisions in various ways. IA primarily involves two parties with a strong focus on privacy, but AI usage may introduce broader implications extending beyond the immediate parties and potentially affecting public interests. For instance, Article 30 of the London Court of International Arbitration (LCIA) Rules imposes a duty of confidentiality on the tribunal and all persons assisting it. Yet publicly accessible large language models such as ChatGPT and Gemini, which could be used in arbitration, explicitly state in their privacy policies that they collect personal information, including inputs and file uploads. This practice can contradict the fairness of the arbitral process and the process established by law to protect privacy, undermining public confidence in arbitration and compromising the privacy rights of the parties and other stakeholders in the dispute. Moreover, technology makes errors, and the question of who is liable for AI’s errors remains unresolved. For human arbitrators, the LCIA Rules and the English Arbitration Act address independence and impartiality, as well as removal for misconduct or apparent bias. But when an AI causes misconduct or bias, whether used as an assisting tool or in place of an arbitrator, it is unclear whether liability lies with the developing company, the institution, or the arbitrator using it. This ambiguity raises serious doubt about impartiality and makes it just for the court to determine the question in confidential or complex disputes.
Solutions for Balancing AI in IA with Public Policy
So far, we have examined AI in IA and its global effect on public policy. This section outlines solutions for integrating AI into IA, focusing on a comprehensive framework, transparent mechanisms, and a limited role for AI, while minimising the impact on public policy.
Creation of a Comprehensive Regulatory Framework
Currently, no specific framework exists for AI in arbitration. While the U.S., U.K., and Singapore emphasise fairness, impartiality, and privacy protection in their public policy, misuse of AI, such as erroneous or biased outputs of the kind seen in Occidental Petroleum v Ecuador, raises concerns about public policy violations. Although attempts exist in the form of the E.U. Artificial Intelligence Act and the Silicon Valley Arbitration & Mediation Centre’s draft guidelines on the use of AI, the former does not directly address arbitration and the latter lacks binding, universal reach. This gap calls for a convention specifically addressing AI use in arbitration, including IA. Alternatively, existing instruments such as the New York Convention, SIAC Rules, and LCIA Rules could be amended to regulate AI usage comprehensively. Additionally, an AI regulatory body could be established under arbitral institutions as an oversight mechanism, assisting arbitrators and ensuring the smooth and lawful use of AI during proceedings.
Creation of a More Transparent and Independent AI Mechanism
AI decisions that lack transparency or independence (the “black box” problem) can have serious repercussions, particularly in the U.K., where public policy demands fairness and the ability to scrutinise arbitral decisions, as seen in the U.K. Arbitration Act and the Patsystems Holding Ltd v Neilly case. AI can, however, be made more transparent and independent by combining several techniques. Explainable AI methods, such as distilling complex models into simpler rules or using visual representations like decision trees and feature importance analysis, could clarify the key factors influencing AI decisions. Furthermore, international guidelines could be developed collaboratively by legal and technology professionals to outline AI’s future role in arbitration, ensuring transparency and rationality. A similar collaboration produced the UNCITRAL Model Law on Electronic Commerce, for which legal professionals and technology experts drafted a legal framework for electronic transactions in international trade. Adopting similar guidelines for AI in arbitration could help promote and uphold public policy across countries.
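To make the “simpler rules” idea concrete, the sketch below shows what an explainable decision procedure looks like in code. It is a purely hypothetical toy, not any real arbitration tool: the document fields (mentions of the disputed contract, date, privilege) and the scoring thresholds are invented for illustration. The point is that, unlike a black-box model, every decision comes with an auditable list of the rules that produced it.

```python
# Hypothetical sketch: an explainable rule-based relevance check for a
# document-review step. All fields and thresholds are invented examples.

def review_document(doc):
    """Classify a document as relevant or not, and explain each step."""
    reasons = []
    score = 0
    if doc.get("mentions_contract", False):
        score += 2
        reasons.append("mentions the disputed contract (+2)")
    if doc.get("date_in_dispute_period", False):
        score += 1
        reasons.append("dated within the dispute period (+1)")
    if doc.get("privileged", False):
        # Privileged material is excluded outright, whatever the score.
        reasons.append("marked privileged: excluded regardless of score")
        return {"relevant": False, "reasons": reasons}
    decision = score >= 2  # hypothetical relevance threshold
    reasons.append(f"total score {score} vs threshold 2")
    return {"relevant": decision, "reasons": reasons}

result = review_document(
    {"mentions_contract": True, "date_in_dispute_period": True}
)
print(result["relevant"])   # True
for step in result["reasons"]:
    print("-", step)        # each step of the reasoning is auditable
```

A tribunal, counsel, or reviewing court could read the returned reasons line by line, which is exactly the property that opaque neural models lack and that explainable AI methods try to restore.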
Adoption of AI’s Limited Role
AI poses challenges through its potential for bias and limited contextual understanding, as seen in the U.S. and U.K. public policy precedents discussed above. Its best use therefore lies in limited adoption, consistent with the Federal Arbitration Act and the LCIA Rules, with AI assisting arbitrators in complementary roles under their supervision, such as legal research and document review. The primary use of AI as an arbitrator, however, must be avoided. Parties appoint arbitrators to resolve their dispute peacefully and with juridical open-mindedness, which requires multidimensional human and emotional intelligence, under which decisions rest not solely on law and logic but also on the existing public goods and policies of society. AI, which relies on logical-mathematical intelligence, cannot weigh these additional factors in its decision-making. Its role therefore remains a secondary one, until it attains parity with human intelligence.
Conclusion
AI integration in International Arbitration (IA) has enhanced efficiency in processes such as document review and live transcription, as seen in jurisdictions like the U.S., U.K., and Singapore. However, challenges remain: the “black box” problem highlighted in the U.K., the risk of fraud and bias in the U.S., and data privacy concerns in Singapore. Establishing a regulatory framework specific to AI in arbitration is essential, as is ensuring that AI supports, rather than replaces, human decision-making while prioritising public welfare. This approach allows us to harness AI’s advantages in IA while safeguarding public policy principles across jurisdictions.
The authors are second-year law students at the Rajiv Gandhi National University of Law, Punjab.
