Shaping AI for just futures: Close-up on the global landscape of AI regulation

By University of Ottawa

Office of the Vice-President, Research and Innovation (OVPRI)

From left to right, Florian Martin-Bariteau, Célia Zolynski, Woodrow Hartzog, Sergio Branco, Nagla Rizk, Caroline Ncube and Marcelo Thompson.
To shed light on the international AI scene, leading academics from around the world were invited to the University of Ottawa to participate in Shaping AI for Just Futures, a two-day conference spearheaded by Professor Florian Martin-Bariteau and colleagues interested in the future of AI in society.

The conference kicked off on October 19 with a panel titled “Regulating AI Around the World,” featuring internationally renowned speakers discussing AI legislative trends in their regions.

While the conversation, chaired by Martin-Bariteau, highlighted divergent visions driven by local circumstances, one common challenge stood out: how to regulate AI effectively without stifling innovation. 

Data governance: The Hong Kong conundrum 

Marcelo Thompson, a law professor at the University of Hong Kong and a uOttawa alumnus, illuminated the contrasting realities of Hong Kong and China. While China boasts robust data security laws, Thompson said that Hong Kong’s data protection framework lags behind international standards and requires substantial upgrades. 

“Regulating technology and data without imposing political values is virtually impossible,” he said. Thompson pointed to the daunting task of preserving values while embracing progress within Hong Kong’s unique geographical and political context. 

China’s regulatory approach, he added, reflects an evolving role for the state in addressing technological challenges, raising broader questions about AI and state governance. His account offered a glimpse of the careful course Hong Kong and China must chart as they navigate the uncharted waters of AI regulation.

MENA’s contrasting realities 

“Despite significant cultural and historical ties, the Middle East and North Africa (MENA) exhibit remarkable disparities rooted in varying levels of wealth, labour resources and women’s participation in the workplace,” noted Nagla Rizk, a professor at the American University in Cairo, Egypt. 

At one end of the spectrum, countries like the UAE and Saudi Arabia aim to become global AI leaders by 2031, leaning on capital-heavy investments in testbeds and smart cities. At the other, Tunisia, driven by a different economic reality, grounds its vision in sustainable AI development through data representation and small-business support.

Rizk believes that sustainable AI regulation must stem from an integrated and equitable innovation strategy encompassing gig work, data privacy and skills development. That means increasing support for small entrepreneurship in AI and reducing data concentration, which acts as a market barrier between small businesses and large companies. This is a key concern for the economics professor, who pointed to the legacy of “trickle-down” policies in pre-Arab Spring MENA, under which large corporations held the upper hand.

African blueprint for AI regulation 

Professor Caroline Ncube of the University of Cape Town described a complex African tech environment. “The African AI regulatory sphere is a hard nut to crack, especially as many states within the continent have adopted these technologies as users,” she said. 

Ncube highlighted the African Union’s pivotal role in backing initiatives such as the African AI Working Group and the publication of a blueprint for national AI strategies.

The South African researcher underlined two critical issues in AI regulation: the need for transparency in governance to foster accountability and prevent regulatory capture, and the significance of human rights-centred regulation, particularly in safeguarding vulnerable populations. 

She referenced the African Commission on Human and Peoples’ Rights’ 2021 resolution, which champions transparency, compliance with human rights and the promotion of African expertise in AI technologies. 

Emerging bottom-up model in the U.S. 

Representing our neighbours to the south, Professor Woodrow Hartzog from Boston University described the intricate regulatory dynamics in the United States. Despite several bills and extensive discussion, federal-level AI regulation remains unsettled.

Nevertheless, bipartisan agreements are beginning to surface. Hartzog highlighted the growing focus on regulating both inputs and outputs, addressing data privacy, deepfakes and labour exploitation. “It’s a complex journey, but consensus is emerging,” he noted. 

Hartzog also noted the significance of state-level regulation, including bills addressing specific AI concerns, such as a Massachusetts bill to ban weapon-mounted robots. New city-level rules, such as bans on government use of facial recognition technology, are also making their mark. Viewed together, these examples point to a bottom-up trend in which American policy is being shaped from the local level up to the federal one.

Europe’s careful tread 

Professor Célia Zolynski, from Université Panthéon-Sorbonne in France, provided an overview of the EU’s risk-based AI regulation strategy. 

“Balance is key,” she emphasized. AI systems are grouped into high- and low-risk categories, with compliance and accountability measures, including transparency provisions for user awareness. Zolynski discussed the importance of harmonizing AI regulation with consumer law, while keeping a finger on the pulse of AI’s ever-evolving ecosystem.

Further comments revolved around AI system security, sustainability, systemic risks and generative AI-related issues, such as disinformation and copyright infringement. Striking a balance between fostering innovation and protecting individual rights seems to be at the heart of Europe’s regulatory concerns. 

“Brussels effect” in Brazil? 

The Brazilian vision, which cherishes a principles-based foundation, was presented by Professor Sergio Branco from the Institute for Technology and Society of Rio de Janeiro. 

Inspired in part by the European model, Brazil classifies AI systems into “unacceptable” and “high-risk” categories, covering areas like education, labour and migration. The country has also introduced strict liability laws, making both suppliers and operators accountable for AI-induced damages.

According to Branco, the Brazilian legal tradition combines principles, consensus-building and adaptability to create evergreen frameworks. “If laws are too specific, the ultrafast-moving AI industry would make them obsolete in no time. It would be like building on shifting sand,” he argued. 

While the conference offered a captivating case of ‘different strokes for different folks,’ it raised new questions for the global lawmaking community. Will diversity in AI governance persist, or will the world yield to the “Brussels effect,” whereby Europe ends up setting the benchmark for everyone else? Just how much influence does the EU hold in today’s global tech scene? And could a new “Brussels” slowly be shaping a universal blueprint for AI regulation?