
[Photo: Local man walking in front of high-rises in Doha, Qatar, by Flickr user Maggie Jones, January 30, 2011. Used under CC PDM 1.0, https://creativecommons.org/publicdomain/mark/1.0/.]

FALQs: AI Regulations in the Gulf Cooperation Council Member States – Part Two


The following is the second installment of a two-part guest post on Artificial Intelligence (AI) regulations in the Gulf Cooperation Council (GCC) by Muneera Al-Khalifa, legal research fellow, working with Foreign Law Specialist George Sadek at the Global Legal Research Directorate of the Law Library of Congress. Read part one on AI regulations and legal framework in the United Arab Emirates, Kingdom of Saudi Arabia, and Kingdom of Bahrain, here. This post is part of our Frequently Asked Legal Questions (FALQs) series.

This part of the blog post will focus on Artificial Intelligence (AI) regulations in the Gulf Cooperation Council (GCC) member states of Kuwait, Qatar, and Oman.

1. How is AI regulated in Kuwait?

The State of Kuwait is in the early stages of developing a regulatory framework for AI, aligning with its Vision 2035, which seeks to transform the nation into a knowledge-based economy. Although specific rules and guidelines for the regulation of AI have yet to be introduced, Kuwait has recently launched a draft of its National AI Strategy 2025-2028.

The strategy emphasizes Kuwait’s commitment to developing a broader legal and regulatory framework to govern AI and its applications. It also establishes a robust security baseline to safeguard sensitive data and mitigate cybersecurity risks, including those associated with AI technologies. The Kuwaiti government has outlined several key initiatives to guide policymakers in their regulatory and governance efforts for overseeing and deploying AI technologies. Additionally, Kuwait’s Communications and Information Technology Regulatory Authority (CITRA) contributes to the nation’s digital transformation and the development of its AI landscape.

2. How is AI regulated in Qatar?

The State of Qatar has developed a comprehensive framework to regulate and promote AI, aligning with its National Vision 2030. In 2019, Qatar launched its National AI Strategy, which is built around key thematic pillars, including health, entertainment, business activity, education, and research. In 2021, the Artificial Intelligence Committee was established within the Ministry of Communications and Information Technology pursuant to Cabinet Decision No. (10) of 2021. Its primary role is to coordinate with relevant ministries and authorities to implement Qatar’s AI Strategy and ensure its effective execution.

In February of 2024, Qatar’s National Cybersecurity Agency issued the Guidelines for Secure Usage and Adoption of AI to provide guidance to organizations on how they can securely adopt AI. The guidelines primarily focus on building stakeholder confidence in AI by promoting the responsible use of AI within organizations. Key regulatory provisions include:

  • Stakeholder Confidence in AI: The guidelines emphasize managing information security risks and ensuring the fair use of AI deployments.
  • Comprehensive Guidance: The guidelines provide comprehensive guidance on key issues and actionable measures for responsible AI adoption. Specific attention is given to generative AI, highlighting its associated threats and possible mitigation solutions.
  • Risk Management: Organizations are encouraged to implement an adaptive AI risk management framework that identifies and assesses risks and provides possible mitigation solutions.
  • Ethical and Fair AI Principles: The guidelines outline ethical principles such as transparency and explainability, accountability, safety, privacy, robustness, fairness and equity, societal and environmental well-being, and auditability. They also highlight the importance of human oversight and monitoring, enabling individuals managing high-impact AI systems to exercise meaningful and effective oversight.
  • Compliance and Enforcement: Organizations deploying AI are encouraged to implement robust risk mitigation strategies to ensure the secure, responsible, and efficient use of AI systems.

In September of 2024, the Qatar Central Bank (QCB) issued an AI Guideline to regulate the use of AI by QCB Licensed Entities. The guideline aims to ensure the safe, efficient, and transparent use of AI within financial institutions. Key regulatory provisions include the provisions mentioned above, along with the following:

  • Corporate Governance: Entities must create a robust AI strategy based on their needs and associated risks, which should be reviewed periodically. They should either establish a body overseeing AI-related matters or delegate this responsibility to an existing body within the entity. The board of directors and senior management are held accountable for the outcomes and decisions of the entity’s AI systems, including those systems that make decisions on behalf of the entity.
  • Registration and Disclosures: Entities must develop and maintain an updated register of all their AI system arrangements. They must also disclose to QCB the criteria used to determine whether a contract for a specific AI system is classified as high-risk, along with any high-level risk and impact assessments. The guideline defines high-risk AI as “AI systems that are risk assessed as having the potential to cause significant negative impact to an entity’s operations or financial system” and establishes a comprehensive framework for managing such systems.
  • Approvals: Entities must obtain official, prior approval from QCB before launching a new AI system as a provider and before making any material modification to an existing one. Entities must also obtain official QCB approval prior to signing any high-risk AI purchase, licensing, or outsourcing agreement. Moreover, entities that outsource any of their AI system activities must conduct due diligence on the outsourcing provider and obtain prior consent from QCB.
  • Customer Information and Consent: Entities are required to notify customers when they are interacting with an AI system. They must ensure that customers are adequately informed about any product or service utilizing AI, including the associated risks and limitations of the technology. Additionally, they must obtain the customer’s explicit consent to accept any risks associated with the use of AI before providing the service.
  • Exceptions: Entities requesting exemption from any requirement outlined in this guideline must submit a formal request to QCB for review and approval.

3. How is AI regulated in Oman?

The Sultanate of Oman was the first GCC country to introduce a long-term economic strategy with the launch of Future Vision 2020 in 1995. Building upon this foundation, Oman unveiled Vision 2040 in 2020, aiming to drive economic diversification and technological advancement. In alignment with Vision 2040, Oman’s Council of Ministers approved the National Program of AI and Advanced Digital Technologies on September 19, 2024, initiating the implementation of its projects and initiatives.

The National Program of AI and Advanced Digital Technologies focuses on three key pillars: promoting and adopting AI in the economic and development sectors, localizing AI technologies, and governing AI applications with a human-centered approach. The program aims to update relevant regulations, laws, and strategies to create a flexible regulatory environment aligned with the evolving needs of AI and advanced digital technologies. As part of the localization efforts outlined in the program, Oman developed Oman GPT in collaboration with public and private sector partners. Powered by generative AI, Oman GPT is a language model designed to capture and reflect Omani cultural, historical, artistic, scientific, civilizational, and political content.

To contribute to achieving the goals of Oman’s digital economy through the integration of AI technologies across various sectors, Oman opened a public consultation in August 2024 on the draft National Artificial Intelligence Policy. This policy includes the draft National Charter for Artificial Intelligence Ethics, designed to ensure the responsible and ethical deployment of AI.

The draft National Artificial Intelligence Policy defines the governance framework for data management and the development and use of AI systems in Oman. This policy applies to all entities engaged in the development or use of AI technologies in Oman. Entities developing AI systems must

  • Comply with National Standards and Ethics: Adhere to the National AI Policy and the National Charter for AI Ethics, ensuring alignment with ethical and technical standards as well as data protection measures.
  • Ensure Transparent Documentation: Maintain clear and transparent documentation of the development process of AI systems, including their purpose, technologies, and data used, and retain this information for future reference.
  • Conduct Impact Assessments: Perform ethical and social impact assessments before deploying AI systems and document the findings as part of the system’s records.

Entities using AI systems must

  • Comply with National Policies: Adhere to the National AI Policy, ensuring alignment with ethical and technical standards, as well as data protection measures.
  • Ensure Human Oversight: Implement mechanisms for supervision and human control over sensitive and impactful AI decisions, ensuring that such decisions are interpretable and traceable.
  • Monitor Performance: Continuously monitor the performance of AI systems, documenting errors, deviations, or negative impacts, and take timely corrective measures.
  • Facilitate Compliance Audits: Provide all relevant documentation and information about the AI system to competent authorities for compliance verification during audits or official investigations.

The National Charter for Artificial Intelligence Ethics (draft) establishes general rules and ethical practices for the use and development of AI systems. The charter is designed to mitigate potential risks and negative impacts, ensuring the responsible and safe use of AI systems. It applies to all government entities, private sector organizations, and academic and research institutions involved in the development or use of AI technologies in Oman. The charter governs all stages of AI lifecycle management, including data collection, storage, design, training, application, and the continuous evaluation and review of AI systems.

4. Are there any GCC-wide AI initiatives? 

While each GCC member state has developed its own AI regulatory framework and strategy, there is growing recognition of the need for a unified regional approach. In October 2023, the attorneys general and public prosecutors of the GCC member states approved a Unified Guiding Document for the Uses of AI in Public Prosecution, marking a significant step toward unified regulation. Looking ahead, more unified regulations are expected to develop, further strengthening regional collaboration in AI governance.

5. Where can I find additional resources?

For additional legal developments in the above-mentioned jurisdictions, visit the Law Library resource, the Global Legal Monitor.


