According to the United Nations (UN) Governing AI for Humanity report, 71% of experts surveyed are concerned or very concerned about the harms artificial intelligence (AI) could cause over the next eighteen months, underscoring the growing need for robust regulations and governance frameworks.

Over the years, AI’s accelerating development and adoption have been remarkable, driven by rapid technological advancements and network generation upgrades.

Since its surge in popularity in 2019, AI has revolutionized industries and enterprises, altering how we live and work. The development of smart cities embodies this rapid transformation, paving the way to a more intelligent and connected future.

However, the advent of AI brings risks and challenges that could reshape the digital landscape, underscoring the urgent need for globally implemented governance frameworks and ethical-use standards.

Notable Read: Dubai Elevates AI Governance with 22 Chief AI Officers

Global AI Governance

The AI evolution has brought technological innovation to center stage. Its transformative power calls for a collaborative effort to ensure responsible development and adoption by organizations.

According to the latest McKinsey Global Survey, 65% of organizations regularly use generative artificial intelligence (GenAI), and that figure is expected to rise sharply as adoption accelerates.

The UN High-Level Advisory Body on AI, established specifically to guide AI governance, brings together experts from the private sector, government, civil society, and academia to address AI governance challenges, including those facing under-represented communities.

The UN’s December 2023 interim report highlighted five guiding principles for the international governance of AI:

  1. AI should be governed inclusively by, and for, the benefit of all.
  2. AI must be governed with the public’s interest in mind.
  3. AI governance should be built in line with data governance and common data practices.
  4. AI governance must be universal, networked, and rooted in adaptive multi-stakeholder collaboration.
  5. AI governance should be anchored in the Charter of the United Nations, international human rights law, and other agreed international commitments, such as the Sustainable Development Goals (SDGs).

In the same year, the World Economic Forum (WEF) launched the AI Governance Alliance, delivering guidance on the responsible design, development, and deployment of AI systems.

The initiative centers on three key areas: safe systems and technologies, sustainable applications and transformation, and resilient governance and regulation.

WEF’s Safe Systems and Technology track, in cooperation with IBM Consulting, developed the Presidio AI Framework: Towards Safe Generative AI Models, which aims to analyze the challenges and opportunities brought about by GenAI.

Through its guidelines, the Presidio AI Framework promotes the early identification of risks, shared responsibility, and proactive risk management of AI.

Additionally, WEF’s Resilient Governance and Regulation track, in collaboration with Accenture, developed Generative AI Governance: Shaping a Collective Global Future as part of its AI Governance Alliance Briefing Paper Series 2024. The paper emphasizes international cooperation, inclusive access, and shared standards to prevent regulatory fragmentation.

Also Read: Saudi Arabia's Digital Transformation: The Power of AI in Governance

UN Recommendations for AI Governance

The rapid development and adoption of GenAI have amplified concerns regarding security and ethical repercussions, leading to the growing demand for establishing governance frameworks.

In 2024, the UN High-Level Advisory Body on AI published “Governing AI for Humanity,” a final report that addresses AI-related risks and policy gaps and builds on the group’s interim report.

The report highlighted the UN’s recommendations:

  1. An International Scientific Panel on AI: Experts from diverse disciplines will serve, on a voluntary basis, on an independent international scientific panel on AI. The panel will issue reports on AI-related opportunities and risks and contribute to the UN’s SDGs.
  2. Policy Dialogue on AI Governance: A biannual intergovernmental and multi-stakeholder policy dialogue on AI governance will be launched. The dialogue aims to promote the fulfillment of human rights, build a shared understanding of AI governance implementation, and enhance international interoperability.
  3. AI Standards Exchange: Representatives from national and international standard-development organizations, technology companies, civil society, and the global scientific panel will form the AI standards exchange to evaluate, develop, and maintain standards for AI systems.
  4. Capacity Development Network: This recommendation aims to establish an AI capacity development network to align regional and global AI capacity efforts, providing researchers and social entrepreneurs with computing and training data. A fellowship program is also proposed for people to spend time in academic institutions or technology companies.
  5. Global Fund for AI: A global fund of financial and in-kind contributions from public and private sources is recommended to ‘put a floor under the AI divide.’
  6. Global AI Data Framework: This recommendation proposes a framework that outlines data-related principles and establishes common standards for the global governance of AI training data, while promoting data stewardship and exchange mechanisms that support thriving AI ecosystems.
  7. AI Office within the Secretariat: The advisory body suggested inaugurating an AI office within the UN Secretariat. The office would report directly to the Secretary-General, supporting and coordinating implementation of the proposals in the report.

The report underscores a holistic and comprehensive approach to governing AI for humanity, promoting international stability and the equitable development of modern innovations.

Read More: H.E. Omar Sultan Al Olama Emphasizes the Importance of AI Governance

AI Risks and Challenges

AI’s rapid development and deployment pose substantial risks; as the UN notes, AI’s speed, opacity, and autonomy challenge traditional regulatory systems.

According to a McKinsey Global Survey, inaccuracy and intellectual property infringement are the GenAI risks organizations most often consider relevant, cited by 63% and 52% of respondents, respectively.

Inaccuracy in data can potentially lead to widespread public misinformation, increasing threats to peace and national security.

The UN’s Governing AI for Humanity report revealed that 78% of experts are concerned with AI damaging information integrity.

The intentional use of AI in armed conflict by state actors concerns 75% of the experts surveyed, while inequalities arising from differential control and ownership over AI technologies concern 74%. Discrimination and bias in AI-driven recruitment or criminal justice decisions concern another 67%.

For instance, Amazon developed an AI recruitment tool to streamline its hiring process, but the tool turned out to be biased against women. Similarly, the company was criticized when its facial recognition service, Amazon Rekognition, wrongly identified members of the U.S. Congress as suspects.

The emergence of deepfakes, voice clones, and automated disinformation campaigns exacerbates societal threats, eroding social trust. Experts also highlight other AI-related concerns, including mass surveillance, misdiagnosis by medical AI systems, violation of intellectual property rights, accelerating energy consumption and carbon emissions, disruption of labor markets, loss of human control over autonomous agents, and unintended multi-agent interactions among AI systems.

In another incident, Microsoft’s chatbot Tay generated racist comments on X (formerly Twitter) in 2016, prompting Microsoft to take it down only a day after its debut.

The UN report also highlighted the pressing need to protect children’s privacy, emphasizing that AI systems must be safe and appropriate for children to use.

These risks emphasize the urgency among experts to accelerate the implementation of AI regulations and governance frameworks. Without formal regulation and governance, the digital divide and distrust in AI will persist.

Read More: Internet and the Dilemma of its Governance

Final Thoughts

In a world driven by the race for technological superiority, governance over modern innovations should be paramount.

International collaboration and proactive engagement should be encouraged to ensure AI’s ethical use. A holistic approach towards the opportunities and challenges of AI is essential to harness and understand its full potential.

As AI advances at an unprecedented pace, developers and regulators must ensure its deployment for the common good. Robust governance frameworks should be established to guarantee that the opportunities delivered by this technological marvel are equally distributed, promoting a more inclusive and safe digital future for all.

Continue Reading: “We are Raising the Bar When it Comes to Governance”, Says Fadi Sidani, Governance Dynamics
