1 year in: Our new Center for Sustainable Development takes stock

What a difference a year can make. After generating more than 130 public products in its first 365 days—research papers, journal articles, book chapters, policy reports, blogs, op-eds, podcasts, and public events—the Center for Sustainable Development (CSD) at Brookings celebrated its first birthday this past week, on October 21.

CSD was launched with a vision of providing leading research, insights, and convenings to advance global sustainable development and implement the Sustainable Development Goals (SDGs) within and across all countries—including advanced economies. In our public launch event last October, we were honored that so many extraordinary leaders from around the world conveyed encouragement and support. We were especially grateful that Ms. Amina Mohammed, U.N. deputy secretary-general, and Dr. Rajiv Shah, president of The Rockefeller Foundation, joined to discuss so many of the world’s frontier challenges of sustainable development. We took to heart the deputy secretary-general’s reminder that “None of us can achieve the SDGs alone,” and her challenge to the center “to strive to be a beacon of inspiration for the pursuit of sustainable development in all countries and communities around the world.”
Some highlights
One year later, as this stock-taking file shows, CSD scholar teams have taken up the challenge with vigor, making contributions across a wide range of sustainable development topics, including:

The SDGs & global sustainable development.
Climate change.
Cities and local leadership.
Workforce of the future.
Global debt crisis.
Ending extreme poverty and deprivation.
Global development cooperation.
U.S. domestic sustainable development policy.
U.S. global sustainable development policy.
Gender equality.
The global middle class.

However impressive the volume of CSD outputs might be, our team cares vastly more about the quality and results of its efforts. In the world of research and ideas, it is generally unwise for any single actor to try to claim too much credit, but we are fortunate to have a unique roster of scholars who are contributing in so many exceptional ways.  To share a few examples:

Amar Bhattacharya co-chaired the U.N. Secretary-General’s Independent Expert Group on Climate Finance, which published its seminal report last December on “Delivering on the $100 Billion Climate Finance Commitment and Transforming Climate Finance.” More recently, Amar has been named a member of the World Bank-IMF High-Level Advisory Group (HLAG) on Sustainable and Inclusive Recovery and Growth, co-chaired by Mari Pangestu of the World Bank, Ceyla Pazarbasioglu of the IMF, and Lord Nicholas Stern of the London School of Economics. Amar is deeply involved in global climate deliberations in the lead-up to the forthcoming COP 26 U.N. climate summit in Glasgow, U.K., including as adviser to the Coalition of Finance Ministers for Climate Action and adviser to the COP 26 presidency. He and Lord Stern recently co-authored an important op-ed on “Our Last, Best Chance on Climate.”
Marcela Escobari has continued to pioneer the Workforce of the Future initiative at Brookings, bringing extraordinary data richness and rigor to advance opportunities for place- and job-specific worker mobility in geographies across the United States. Her mobility pathway tool, a multiyear team project summarized in a blog with Natalie Geismar, generated important public interest, including high-profile coverage in the New York Times. More recently, Marcela published, along with Ian Seyal and Carlos Daboin Contreras, a Moving Up report that reveals multidimensional hurdles to labor mobility across America, especially for people in low-income occupations. We are very proud that, earlier this year, President Biden nominated Marcela to serve as USAID Assistant Administrator for Latin America and the Caribbean, a position she previously held under the Obama Administration. While awaiting Senate confirmation for the appointment, Marcela is continuing to press forward her research on the role companies can play in improving job quality.
George Ingram drew from his extensive policy experience to publish a series of important papers following the 2020 presidential election, including a prescription for renewing U.S. global partnership in a post-COVID-19 world. George also celebrated a victory for good data in a recent post co-authored with Sally Paxton of Publish What You Fund (PWYF). They noted the fruits of a multiyear collaborative research effort to address conflicting official U.S. aid data, which previously could vary by up to billions of dollars per year across different government websites. USAID and the Department of State recently agreed to consolidate competing aid data dashboards into a single data collection and reporting channel. In July, George partnered with PWYF to convene a public event on transparency in development assistance for gender equality, which generated several new commitments to improve donor reporting on aid for gender equality. He also issued a call for a U.S. initiative to help bridge the global digital divide among low- and middle-income countries.
Homi Kharas has been prolific in contributing to global economic debates during the COVID-19 crisis, with special emphasis on steps to avoid a developing country debt crisis amid the deepest and most widespread global recession in modern history. The U.N. Secretary-General recognized Homi’s paper with Meagan Dooley on “Debt Distress and Development Distress: Twin crises of 2021” as  foundational to his March 2021 U.N. report on “Liquidity and Debt Solutions to Invest in the SDGs: The Time to Act is Now.” Concurrently, Homi and co-authors generated world-leading empirical assessments of extreme deprivation (e.g., here and here), including extreme poverty in the context of COVID-19. He also published, with Raj Desai and Selen Özdoğan, important research on the spatial dimensions of global poverty reduction.  Impressively, Homi was recently named alongside Amar Bhattacharya to serve on the HLAG on Sustainable and Inclusive Recovery and Growth.
Tony Pipa has been advancing a remarkable range of efforts on localized leadership for sustainable development. This includes a City Playbook for Advancing the SDGs, co-edited with Max Bouchet, which captures an inspiring array of insights from across the global SDG Leadership Cities community of practice that Tony launched and facilitates. In parallel, last November, Tony and Natalie Geismar released a key report with recommendations to reimagine U.S. federal policy for U.S. rural development, informed by lessons and changes in U.S. policy and practice for sustainable development overseas. The report’s insights have been influential with Congress and the Biden administration as they develop new approaches to invest directly in rural America, including the proposed $4 billion Rural Partnership Program currently being considered as part of the budget reconciliation process. Meanwhile, Tony has also spearheaded CSD’s partnership with the U.N. Foundation to expand and connect American Leadership on the SDGs in communities across the United States.

We are also extremely proud of our collaboration with leaders of the Center for Universal Education (CUE) at Brookings, who amount to the “education team” for CSD. I never go anywhere on SDG 4 (Ensure quality education) without talking with CUE co-directors Emiliana Vegas and Rebecca Winthrop, who have both made enormous public contributions over the past year.  Emiliana, for example, co-authored a seminal study on the global cost of COVID-19 school closures in earnings and income, while also publishing an important series of reports on the implementation of computer science education in geographies around the globe. Rebecca has meanwhile led a major initiative on family-school engagement and collaboration to transform and improve education systems. She has also been a driving force in the global movement to advance education for tackling climate change.
For my own part, I have been privileged to co-chair the 17 Rooms initiative in collaboration with Zia Khan and our partners at The Rockefeller Foundation. Within the past year, we have made great progress in describing key design principles for this new approach to problem-solving across all 17 SDGs. This has largely been made possible by CSD’s small but mighty new 17 Rooms secretariat team, composed of Alexandra Bracken, Jacob Taylor, and Shrijana Khanal. All of them have been central to the progress of both the annual 17 Rooms global flagship process and the growing 17 Rooms-X community of practice. We were honored that U.N. Deputy Secretary-General Amina Mohammed joined the 2021 flagship summit as keynote listener for the second year in a row, commenting on the Room (working group) report-outs that will be published next month in a next wave of action plans and insights. Meanwhile, the 17 Rooms-X efforts are helping universities, communities, regions, and now countries to advance localized action, insight, and collaboration processes for the SDGs. The growing interest in 17 Rooms has helped inform an evolving vision of how the initiative could help fuel a new approach to multilateral cooperation, and even an annual global “17 Rooms Day” for communities around the world.


Looking forward
As much as we take pride in CSD’s accomplishments over the past year, we know we are only a small node in a vastly larger global network of contributors to the broader challenges of sustainable development. Over the coming year, we plan to continue our existing core workstreams while ramping up efforts on key priorities, in line with a spirit of networked leadership. We look forward to the culmination of some major research products, including a book on breakthrough technologies for the SDGs, and to launching a major new effort on gender equality and sustainable development. In parallel, we aim to ramp up work on aligning the private sector with the SDGs—in other words, bridging ESG to SDG. We are keen for our center to serve as a neutral platform that helps bring diverse constituencies together. In that spirit, we are also excited that we will soon announce CSD’s first-ever cohort of nonresident scholars. We are eager to tap into an ever-larger network of experts and allies who can share insights on both the substance of the center’s work and the opportunities for broader stakeholder engagement.

A center takes a village
Whatever the center has accomplished to date, all of it is only possible thanks to an extensive network of external colleagues, collaborators, contributors, champions, and even constructive critics around the globe. I often say that the center has nearly 8 billion clients, i.e., the full composition of humanity. But there are also countless people whose direct insights, queries, suggestions, and convenings all play a pivotal role in our work. Perhaps most importantly, there are huge numbers of people across the Global Economy and Development program and the broader Brookings Institution who have made each of the center’s first 365 days possible. I convey special personal thanks to Brookings leadership for so strongly backing the CSD enterprise from day one, and every single day thereafter. It’s our privilege to be part of such an extraordinary undertaking. It’s our responsibility to ensure we contribute even more over the year to come.

WEEKLY POLITICAL COMPASS

Voters in Japan will go to the polls. Turkey is heading for a major confrontation with its allies. A Senate report requests the indictment of Brazil’s president. The UK chancellor will present his autumn budget. South Africa is gearing up for municipal elections. The most signifi…

Enlightened climate policy for Africa

As the world convenes in Glasgow for the 26th United Nations Climate Change Conference of the Parties (COP26), it is time to recognize Africa’s role in averting a climate disaster without compromising the continent’s growth and poverty reduction. The world needs to transition away from fossil fuels. But access to electricity is a human right, as enshrined in Sustainable Development Goal 7. Electric power is vital for any economy to advance, and relegating African countries to greater poverty is not the solution to the global climate crisis.

The world must transition away from the fuels that powered industrialization in Europe, the U.S., and Asia. Today, coal still accounts for up to 38 percent of electricity generation worldwide, with China, India, the U.S., and the EU remaining the world’s largest consumers of coal. At the same time, international financing institutions are restricting investment in electric power projects in Africa to wind and solar on grounds of environmental concerns. Africa’s current energy demand is estimated at 700 TW, which is 4,000 times the 175 GW of wind and solar capacity the entire world added in 2020. Africa cannot industrialize on wind and solar energy alone.
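
The scale comparison above rests on a simple unit conversion; the minimal sanity check below takes the stated figures of 700 TW of demand and 175 GW of added capacity at face value and shows where the factor of 4,000 comes from.

```python
# Back-of-the-envelope check of the scale comparison above,
# using the figures as stated in the article.
africa_demand_tw = 700           # estimated African energy demand, in terawatts (as stated)
world_added_wind_solar_gw = 175  # wind and solar capacity added worldwide in 2020, in gigawatts

africa_demand_gw = africa_demand_tw * 1_000  # 1 TW = 1,000 GW
ratio = africa_demand_gw / world_added_wind_solar_gw
print(f"Estimated demand is {ratio:,.0f} times the capacity added in 2020")  # -> 4,000
```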
In sub-Saharan Africa, 12 million new people enter the workforce every year. They cannot run successful businesses in the dark. Today, nearly 600 million Africans lack access to electric power, a number that the International Energy Agency (IEA) projects will actually increase by 30 million due to the COVID-19 pandemic. To create jobs for Africa’s burgeoning youth population, we need to find ways to power the continent’s industrialization.
The world is facing an existential climate crisis and must come together in solidarity to stave off the potentially devastating impacts, but leaving 600 million Africans in the dark is not an option.
Importantly, Africa bears the least responsibility for the world’s climate crisis but faces its most severe consequences. Forty-eight sub-Saharan African countries outside of South Africa are responsible for just 0.55 percent of cumulative CO2 emissions. Yet, 7 of the 10 countries most vulnerable to climate change are in Africa.
Still, Africa will play a major role in solving the global crisis. The Congo Basin is the world’s second-largest rainforest and vital to stabilizing the world’s climate, absorbing 1.2 billion tons of CO2 each year. Without the Congo Basin and the Amazon, the world would be warming much more quickly.


The global transition to renewable energy will mean exponentially scaling up the production of batteries, electric vehicles (EVs), and other renewable energy systems, which require Africa’s mineral resources. For example, the Democratic Republic of the Congo (DRC) accounts for 70 percent of the world’s cobalt, the mineral vital to battery production. Cobalt demand is expected to double by 2030. Conversely, 84 million people (80 percent of the total population) in the DRC could still lack access to electric power in 2030.
We believe that we can achieve the global emission reduction targets without constraining Africa’s development. To power Africa’s economic growth and prevent the worst consequences of climate change, we propose a four-point agenda for action:

Utilize the African Continental Free Trade Agreement (AfCFTA). The AfCFTA will create the world’s largest free trade zone by integrating 54 African countries with a combined population of more than 1 billion people and a gross domestic product of more than $3.4 trillion. Africa’s commitment to lowering intra-African trade barriers can attract more private sector investment with larger, connected market opportunities.
Leverage green economic opportunities. Increased demand for electric vehicles, critical minerals, and renewable energy systems is an opportunity for Africa to capture larger portions of supply chains in the new green economy. Nations and firms can collaborate across borders to create a pipeline of bankable power projects to attract investment. Increasing local manufacturing and production capacity for resources, materials, and value-added products vital to green technology will create jobs locally.
Adopt just development finance. The large-scale power projects needed to industrialize economies are capital-intensive and often require investments from development finance institutions. Development finance institution funding should catalyze private sector resources. While we agree on the environmental and economic justification for not financing new coal-fired plants, these institutions should not limit support for natural gas, hydro, and geothermal power generation projects. Such restrictions create an unjust burden on economies that require a variety of sources to increase access and build resilience into their power infrastructure. It is hypocritical of the EU, the U.S., and China to utilize fossil fuels while effectively denying others the means to lift themselves out of poverty.
Embrace proportionate responsibility. China, the EU, and the U.S. emit over 40 percent of total global greenhouse gases, while all of Africa emits 7 percent. Prioritizing the transition to renewables and imposing higher emission reduction requirements in the EU, U.S., and China will ease the burden on those nations that still need a variety of power generation methods to increase energy access.

The world is facing an existential climate crisis and must come together in solidarity to stave off the potentially devastating impacts, but leaving 600 million Africans in the dark is not an option. We must avert a climate disaster and expand energy access in Africa at the same time.


TURKEY: Erdogan’s pyrrhic victory?

Even though the meeting of the Turkish cabinet is still ongoing (a press conference is expected at around 7pm local time), it seems that President Tayyip Erdogan will not follow through on his previous threat to declare ten Western Ambassadors “personae non gratae.” The pro-Ju…

Argentina: Orderly or Messy Adjustment?

When the legislative elections of November 14 pass, all eyes will return almost exclusively to the economy. Once the results are digested, the question will be how Argentina will resolve the many imbalances that are beginning to grow. They are, mainly, the fiscal issue, the monet…

Building an inclusive recovery in Latin America and the Caribbean

WASHINGTON, DC – Global poverty rose last year for the first time since 1998 as the economic fallout of the COVID-19 pandemic pushed an additional 97 million people below the international threshold of $1.90 per day. At first glance, Latin America and the Caribbean (LAC) appears to have fared relatively well: three million newly poor people as a result of the pandemic, compared to 58 million in South Asia. But poverty in LAC demands more attention than the headline statistic suggests.


The pandemic affected economic growth in LAC more significantly than in any other developing region last year, with GDP contracting by 6.5%. Although the region is expected to rebound strongly with 5.2% growth this year, the effect on the poor could be delayed, owing not only to relative vaccine shortages but also to traditionally slower declines in poverty relative to increases in per capita income. Moreover, those without access to the digital economy could experience additional erosion in future earnings from a looming learning and productivity crisis caused by some of the world’s longest lockdowns and school closures.
Even before the pandemic, most LAC countries were experiencing anemic growth and a slow transition from middle- to high-income status. COVID-19 likely will extend this “middle-income trap.” In its recently released annual country classifications by income level, the World Bank identified seven economies globally that dropped to a lower income category. Two, Panama and Belize, are in LAC. And countries in the region that remained in the same income category as last year saw greater divergence, rather than convergence, with wealthier economies. Specifically, the average upper-middle-income country in LAC moved 10.4% further away from achieving high-income country status in 2020 than in 2019, compared to the average global divergence of 3.8%.

Digging deeper into the data reveals important nuances and heterogeneity concerning pandemic-induced poverty and vulnerability in LAC, especially in terms of geographic distribution and demographic composition.
Geographically, recent World Bank findings show that Brazil’s success in mitigating the impact of COVID-19 on the poor in 2020 altered the overall regional picture. Rescue measures lifted a net 14 million people out of poverty in Brazil (measured by the $5.50 per day poverty line used to assess upper- and lower-middle-income countries). But, excluding Brazil, the largest regional economy, LAC recorded a net increase of 13.7 million new poor. Even in Brazil, policymakers must carefully balance the trade-offs between sustaining and withdrawing extraordinary liquidity support.
In terms of demographic composition, COVID-19 put an end to an impressive period of economic growth that, beginning in the early 2000s, expanded the middle class and reduced regional poverty by half. A majority of people in Latin America and the Caribbean were middle class in 2018, but the pandemic has now left the majority vulnerable.

There is a silver lining. Consistent with global trends, much of LAC’s “new poor” or “new vulnerable” appear to be better educated and more urban, with better access to basic services than the existing poor. This profile should position them to regain their footing quickly as the pandemic subsides.


With the international community’s help, LAC countries must work to ensure a sustained recovery for the poor. For starters, expanding vaccine access will give several countries the upper hand in the ongoing tug-of-war between sluggish vaccination rates and new outbreaks or variants. The region’s poor, prone to higher infection risks because they tend to be employed in informal, in-person, and close-proximity jobs, stand to benefit from such efforts.
More broadly, a vulnerability-based approach to understanding and tackling poverty is needed. The uneven distributional impact of the COVID-19 crisis has confirmed once again that, despite having partly achieved high-income or OECD status, LAC remains highly susceptible to economic shocks, from normal business cycles to extraordinary events such as pandemics and climate-related natural disasters. Because many in LAC’s middle class are one shock away from poverty or vulnerability, shielding people from such conditions is just as important as lifting people from them.
Addressing LAC’s numerous – and at times overlapping – forms of hardship and inequality requires considering all dimensions of vulnerability. Unleashing the region’s massive human potential requires a foundation of inclusive and redistributive policies. Social protection systems, combined with data-driven risk-management and prediction tools, must better target and cover new beneficiaries in need, such as unbanked populations in rural communities.

As countries begin to rethink their medium- to long-term economic strategies, policymakers should work closely with the private sector to facilitate and broaden access to the resources needed to generate resilient, self-sustaining growth, such as digital connectivity, education, high-quality and formal employment, and credit. Continued technical and financial assistance from multilateral organizations like the World Bank can play an important role in supporting this process.
Finally, LAC policymakers across the political spectrum should take stock of the rapidly evolving sentiments in society. The pandemic has aggravated popular frustration and polarization, and a number of key elections are on the horizon. In several countries, reaching a consensus on the path forward could be difficult. The stakes are high: not only accelerating an inclusive regional recovery, but also safeguarding the conditions for continuous macro- and micro-improvements in the post-pandemic future. 

Strengthening international cooperation on AI

Executive Summary
International cooperation on artificial intelligence—why, what, and how
Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.
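
For orientation, a quick back-of-the-envelope reading of the headline figures is sketched below; it assumes the 16 percent boost and the US$13 trillion gain refer to the same 2030 output baseline, which the text does not state explicitly.

```python
# Illustrative arithmetic on the headline figures cited above (assumptions noted).
boost_share = 0.16           # 16 percent estimated boost to global output by 2030
boost_value_usd_tn = 13.0    # US$13 trillion estimated gain

# Implied baseline, assuming both figures refer to the same 2030 output level.
implied_baseline_usd_tn = boost_value_usd_tn / boost_share
print(f"Implied global-output baseline: ~US${implied_baseline_usd_tn:.0f} trillion")  # -> ~81

corporate_ai_investment_2020_usd_bn = 60  # reported global corporate AI investment in 2020
print(f"'More than double by 2025' implies > US${2 * corporate_ai_investment_2020_usd_bn} billion")
```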

At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.
In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for adoption of regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.
Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.
In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions.
Why international cooperation on AI is important
Even more than many domains of science and engineering in the 21st century, the international AI landscape is deeply collaborative, especially when it comes to research, innovation, and standardization. There are several reasons to sustain and enhance international cooperation.

AI research and development is an increasingly complex and resource-intensive endeavor, in which scale is an important advantage. Cooperation among governments and AI researchers and developers across national boundaries can maximize the advantage of scale and exploit comparative advantages for mutual benefit. An absence of international cooperation would lead to competitive and duplicative investments in AI capacity, creating unnecessary costs and leaving each government worse off in AI outcomes. Several essential inputs used in the development of AI, including access to high-quality data (especially for supervised machine learning) and large-scale computing capacity, knowledge, and talent, benefit from scale.
International cooperation based on commonly agreed democratic principles for responsible AI can help focus on responsible AI development and build trust. While much progress has been made aligning on responsible AI, there remain differences—even among Forum for Cooperation on AI (FCAI) participants. The next steps in AI governance involve translating AI principles into policy, regulatory frameworks, and standards. These will require deeper understanding of how AI works in practice and working through the operation of principles in specific contexts and in the face of inevitable tradeoffs, such as may arise when seeking AI that is both accurate and explainable. Effective cooperation will require concrete steps in specific areas, which the recommendations of this report aim to suggest.
When it comes to regulation, divergent approaches can create barriers to innovation and diffusion. Governments’ efforts to boost domestic AI development around concepts of digital sovereignty can have negative spillovers, such as restrictions on access to data, data localization, discriminatory investment, and other requirements. Likewise, diverging risk classification regimes and regulatory requirements can increase costs for businesses seeking to serve the global AI market. Varying governmental AI regulations may necessitate building variations of AI models that can increase the work necessary to build an AI system, leading to higher compliance costs that disproportionately affect smaller firms. Differing regulations may also force variation in how data sets are collected and stored, creating additional complexity in data systems and reducing the general downstream usefulness of the data for AI. Such additional costs may apply to AI as a service as well as hardware-software systems that embed AI solutions, such as autonomous vehicles, robots, or digital medical devices. Enhanced cooperation is key to create a larger market in which different countries can try to leverage their own competitive advantage. For example, the EU seeks to achieve a competitive advantage in “industrial AI”: EU enterprises could exploit that AI without the prospect of having to engage in substantial reengineering to meet requirements of another jurisdiction.
Aligning key aspects of AI regulation can enable specialized firms in AI development to thrive. Such companies generate business by developing expertise in specialized AI systems, then licensing these to other companies as one part of a broader tool. As AI becomes more ubiquitous, complex stacks of specialized AI systems may emerge in many sectors. A more open global market would allow a company to take advantage of digital supply chains, using a single product with a natural language model built in Canada, a video analysis algorithm trained in Japan, and network analysis developed in France. Enabling global competition by such specialized firms will encourage healthier markets and more AI innovation.
Enhanced cooperation in trade is essential to avoid unjustified restrictions to the flow of goods and data, which would substantially reduce the prospective benefits of AI diffusion. While the strategic importance of data and sovereignty has in many countries given rise to legitimate industrial policy initiatives aimed at mapping and reducing dependencies on the rest of the world, protectionist measures can jeopardize global cooperation, impinge on global value chains, and negatively affect consumer choice, thereby reducing market size and overall incentives to invest in meaningful AI solutions.
Enhanced cooperation is needed to tap the potential of AI solutions to address global challenges. No country can “go it alone” in AI, especially when it comes to sharing data and applying AI to tackle global challenges like climate change or pandemic preparedness. The governments involved in the FCAI share interests in deploying AI for global social, humanitarian, and environmental benefit. For example, the EU is proposing to employ AI to support its Green Deal, and the G-7 and GPAI have called for harnessing AI for U.N. Sustainable Development Goals. Collaborative “moonshots” can pool resources to leverage the potential of AI and related technologies to address key global problems in domains such as health care, climate science, or agriculture at the same time as they provide a way to test approaches to responsible AI together.
Cooperation among likeminded countries is important to reaffirm key principles of openness and protection of democracy, freedom of expression, and other human rights. The risks associated with the unconstrained use of AI solutions by techno-authoritarian regimes—such as China’s—expose citizens to potential violations of human rights and threaten to split cyberspace into incompatible technology stacks and fragment the global AI R&D process.

The fact that international cooperation is an element of most governments’ AI strategies indicates that governments appreciate the connection between AI development and collaboration across borders. This report is about concrete ways to realize this connection.


At the same time, international cooperation should not be interpreted as complete global harmonization: countries legitimately differ in national strategic priorities, legal traditions, economic structures, demography, and geography. International collaboration can nonetheless create the level playing field that would enable countries to engage in fruitful “co-opetition” in AI: agreeing on basic principles and when possible seeking joint outcomes, but also competing for the best solutions to be scaled up at the global level. Robust cooperation based on common principles and values is a foundation for successful national development of AI.
Rules, standards, and R&D projects: Key areas for collaboration
Our exploration of international AI governance through roundtables, other discussions, and research led us to identify three main areas where enhanced collaboration would prove fruitful: regulatory policies, standard-setting, and joint research and development (R&D) projects. Below, we summarize ways in which cooperation may unfold in each of these areas, as well as the extent of collaboration conceivable in both the short and the longer term.
Cooperation on regulatory policy
AI policy development is in the relatively early stages in all countries, and so timely and focused international cooperation can help align AI policies and regulations.
International regulatory cooperation has the potential to reduce regulatory burdens and barriers to trade, incentivize AI development and use, and increase market competition at the global level. That said, countries differ in legal tradition, economic structure, comparative advantage in AI, weighing of civil and fundamental rights, and balance between ex ante regulation and ex post enforcement and litigation systems. Such differences will make it difficult to achieve complete regulatory convergence. Indeed, national AI strategies and policies reflect differences in countries’ willingness to move towards a comprehensive regulatory framework for AI. Despite these differences, AI policy development is in the relatively early stages in all countries, and so timely and focused international cooperation can help align AI policies and regulations.
Against this backdrop, it is reasonable to assume that AI policy development is less embedded in pre-existing legal tradition or frameworks at this stage, and thus that international cooperation in this field can achieve higher levels of integration. The following areas for cooperation emerged from the FCAI dialogues and our other explorations.

Building international cooperation into AI policies. FCAI governments should give effect to their recognition of the need for international engagement on AI by committing to pursue coordination with each other and other international partners prior to adopting domestic AI initiatives.
A common, technology-neutral definition of AI for regulatory purposes. Based on the definitions among FCAI participants and the work of the OECD expert group, converging on a common definition of AI and working together to gradually update the description of an AI system, and its possible configurations and techniques, appears feasible and already partly underway. A common definition is important to guide future cooperation in AI and determines the level of ambition that can be reached by such a process.
Building on a risk-based approach to AI regulation. A variety of governments and other bodies have endorsed a risk-based approach to AI in national strategies and in bilateral or multilateral contexts. Most notably, a risk-based approach is central to the policy frameworks of the two most prominent exemplars of AI policy development—the U.S. and the EU. These recent, broadly parallel developments have opened the door to developing international cooperation on ways to address risks while maximizing benefits. However, there remain challenges to convergence on a risk-based approach. Dialogue on clear identification and classification of risks, approaches to benefit-risk analysis, possible convergence on cases in which the risks are too high to be mitigated, and the type of risk assessment to be performed and who should perform it, would greatly benefit cooperation on a risk-based approach.
Sharing experiences and developing common criteria and standards for auditing AI systems. The field of accountability in AI and algorithms has been the subject of wide and valuable work by civil society organizations as well as governments. The exchange of good practices and—ultimately—a common, or at least a compatible, framework for AI auditing would eliminate significant barriers to the development of a truly international market for AI solutions. It also would facilitate the emergence of third-party auditing standards and an international market for AI auditing, with potential benefits in terms of quality, price, and access for auditing services for deployers of AI. Additionally, exchange of practices and international standards for AI auditing, monitoring, and oversight would significantly help the policy community keep up to speed in market monitoring.
A joint platform for regulatory sandboxes. Even without convergence on risk assessments or regulatory measures, an international platform for regulatory learning involving all governments that participate in FCAI and possibly others is a promising avenue for deepening international cooperation on AI. Such a platform could host an international repository of ongoing experiments on AI-enabled innovations, including regulatory sandboxes. As use of sandboxes becomes a more common way for governments to test the viability and conformity of new AI solutions under legislative and regulatory requirements, updating information on ongoing government initiatives could save resources and inform AI developers and policymakers. Aligning the criteria and overall design of AI sandboxes in different administrations could also increase the prospective benefits and impact of these processes, as developers willing to enter the global market might be able to go through the sandbox process in a single participating country. (A sketch of what a shared repository entry might record appears after this list.)
Cooperation on AI use in government: procurement and accountability. A natural candidate for further exchange and cooperation in FCAI is the adoption of AI solutions in government, including both “back office” solutions and more public-facing applications. The sharing of good practices and overall lessons on what works when deploying AI in government would also be an important achievement. Important areas in this respect are procurement and effective oversight of deployment.
Sectoral cooperation on AI use cases. A sector-specific approach can ensure higher levels of regulatory certainty. In sectors like finance, issues such as fairness, discrimination, and transparency have long been subject to extensive regulatory intervention, and sectoral regulation must ensure continuity while accounting for the increasing use of AI. In health and pharmaceuticals, the use of AI both as a stand-alone solution and embedded in medical devices has prompted a very specific, technical discussion regarding the risk-based approach to be adopted and has already enabled valuable sectoral initiatives. The adoption of different standards and criteria in sectoral regulation may increase regulatory costs for developers willing to serve more than one sector and country with their AI solutions. In such a cross-cutting framework, examples from mature areas of regulation such as finance and health can also become a form of regulatory sandbox to model regulation for other sectors in the future.
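To make the definition and risk-classification items above more concrete, the sketch below shows one hypothetical way a technology-neutral system descriptor and a coarse risk-tier assignment could be written down. It is an illustration only: the field names, the four tiers (loosely echoing the categories in the proposed EU framework), and the toy classification rule are assumptions made for the example, not anything agreed among FCAI participants.

```python
# Illustrative sketch only: the descriptor fields, risk tiers, and rules below
# are hypothetical assumptions, not an agreed FCAI or regulatory taxonomy.
from dataclasses import dataclass
from enum import Enum
from typing import List


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g., generalized social scoring by governments
    HIGH = "high"               # uses with significant safety or rights impact
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"         # no additional obligations


@dataclass
class AISystemDescriptor:
    """Technology-neutral description of an AI system (fields are assumed)."""
    name: str
    objective: str                  # what the system is used to achieve
    outputs: List[str]              # e.g., predictions, recommendations, decisions
    deployment_context: str         # sector and setting in which it operates
    affects_rights_or_safety: bool
    used_for_social_scoring: bool = False


def assign_risk_tier(system: AISystemDescriptor) -> RiskTier:
    """Toy classification rule; real criteria would be negotiated among governments."""
    if system.used_for_social_scoring:
        return RiskTier.PROHIBITED
    if system.affects_rights_or_safety:
        return RiskTier.HIGH
    if "recommendations" in system.outputs:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


triage = AISystemDescriptor(
    name="clinic-triage-assistant",
    objective="prioritize incoming patients",
    outputs=["recommendations"],
    deployment_context="health care",
    affects_rights_or_safety=True,
)
print(assign_risk_tier(triage))  # RiskTier.HIGH
```

The point of the exercise is that once governments converge on the descriptor fields and the tier boundaries, the same system description could be assessed consistently across jurisdictions.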

Cooperation on sharing data across borders
Data governance is a focal area for international cooperation on AI because of the importance of data as an input for AI R&D and because of the added complexity of regulatory regimes already in place that restrict certain information flows, including data protection and intellectual property laws. Effective international cooperation on AI needs a robust and coherent framework for data protection and data sharing. There are a variety of channels addressing these issues including the Asia-Pacific Economic Cooperation group, the working group on data governance of the Global Partnership on AI, and bilateral discussions between the EU and U.S. Nonetheless, the potential impact of such laws on data available for AI-driven medical and scientific research requires specific focus as the EU both reviews its General Data Protection Regulation and considers new legislation on private and public sector data sharing.
There are other significant data governance issues that may benefit from pooled efforts across borders and that, by and large, are already the subject of international cooperation. Key areas in this respect include opening up government data (including for international data sharing), improving data interoperability, and promoting technologies for trustworthy data sharing.
Cooperation on international standards for AI
As countries move from developing frameworks and policies to more concrete efforts to regulate AI, demand for AI standards will grow. These include standards for risk management, data governance, and technical documentation that can establish compliance with emerging legal requirements. International AI standards will also be needed to develop commonly accepted labeling practices that facilitate business-to-business (B2B) contracting and demonstrate conformity with AI regulations; to address the ethics of AI systems (transparency, neutrality/lack of bias, etc.); and to maximize harmonization and interoperability of AI systems globally. International standards from standards development organizations like the ISO/IEC and IEEE can help ensure that global AI systems are ethically sound, robust, and trustworthy, that opportunities from AI are widely distributed, and that standards are technically sound and research-driven regardless of sector or application.
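As a purely illustrative aid, the sketch below imagines the kind of machine-readable technical documentation such standards might harmonize, in the spirit of the widely discussed "model card" documentation practice. Every field name and value here is a hypothetical placeholder rather than an existing ISO/IEC or IEEE schema.

```python
# Hypothetical machine-readable documentation record; field names are
# illustrative assumptions, not an existing ISO/IEC or IEEE schema.
import json
from dataclasses import dataclass, asdict
from typing import Dict, List


@dataclass
class ModelDocumentation:
    system_name: str
    version: str
    intended_use: str
    training_data_summary: str
    evaluation_metrics: Dict[str, float]  # e.g., accuracy, false-positive rate
    known_limitations: List[str]
    risk_tier: str                        # e.g., "high" under a risk-based regime
    conformity_claims: List[str]          # standards or rules the supplier asserts


doc = ModelDocumentation(
    system_name="loan-screening-model",
    version="2.3.1",
    intended_use="pre-screening of consumer credit applications",
    training_data_summary="anonymized applications, 2015-2020, one national market",
    evaluation_metrics={"auc": 0.81, "false_positive_rate": 0.07},
    known_limitations=["not validated outside the original market"],
    risk_tier="high",
    conformity_claims=["internal bias audit completed"],
)

# A common format would let the same record travel with the system across borders.
print(json.dumps(asdict(doc), indent=2))
```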
The governments participating in the FCAI recognize and support industry-led standards setting. While FCAI participants differ in how they engage with industry-led standards bodies, a common element is support for the central role of the private sector in driving standards. That said, there is a range of steps FCAI participants can take to strengthen international cooperation on AI standards. FCAI participants' emphasis on industry-led development of international AI standards contrasts with the approach of other countries, such as China, where the state is at the center of standards-making activities. The Chinese government's more direct role in setting standards, driving the standards agenda, and aligning both with broader government priorities requires attention from all FCAI participants, with the aim of encouraging Chinese engagement in international AI standard-setting that remains consistent with technically robust, industry-driven outcomes.
Sound AI standards can also support international trade and investment in AI, expanding AI opportunity globally and increasing returns to investment in AI R&D. The relevance of the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Agreement to AI standards is limited by its application only to goods, whereas many AI standards will apply to services. Recent trade agreements have started to address AI issues, including support for AI standards, but more is needed. An effective international AI standards development process is also needed to avoid bifurcated AI standards, centered around China on the one hand and the West on the other; which outcome prevails will depend in part on progress toward that goal.
R&D cooperation: Selecting international AI projects
Productive discussion of AI ethics, regulation, risks, and benefits requires use cases because the issues are highly contextual. As a result, AI policy development has tended to move from broad principles to specific sectors or use cases. Considering this need, we suggest that international cooperation on AI be put into operation through specific use cases. To this end, we propose that FCAI participants expand efforts to deploy AI collectively on important global problems by working toward agreement on joint research aimed at a specific development project (or projects). Such an effort could stimulate the development of AI for social benefit and also provide a forcing function for overcoming differences in approaches to AI policy and regulation.
Criteria for the kinds of goals or projects to consider include the following:

Global significance. The project should be aimed at important global issues that demand transnational solutions. The shared importance of the issues should give all participants a common stake and, if successful, could contribute toward global welfare.
Global scale. The problem and the scope of the project should require resources on a large enough scale that the pooled support of leading governments and institutions adds significant value.
A public good. Given its significance and scale, the project would amount to a public good. In turn, the output of the project should also be a public good and both the project and the output should be available to all participants and less developed countries.
A collaborative test bed. Governance of the project is likely to necessitate addressing regulatory, ethical, and risk questions in a context that is concrete and in which the participants have incentives to achieve results. It would amount to a very large and shared regulatory sandbox.
Assessable impact. The project will need to be monitored commensurately with its scale, public visibility, and experimental nature. Participants will need to assess progress toward both defined project goals and broader impact.
A multistakeholder effort. Considering its public importance and the resources it should marshal, the project will need to be government-initiated. But the architecture and governance should be open to nongovernmental participation on a shared basis.

This proposal could be modeled on several large-scale international scientific collaborations: CERN, the Human Genome Project, or the International Space Station. It would also build on numerous initiatives toward collaborative research and development on AI. Similar global collaboration will be more difficult in a world of increased geopolitical and economic competition, nationalism, nativism, and protectionism among governments that have been key players in these efforts.
Recommendations
Below, we present recommendations for developing international cooperation on AI based on our discussions and work to date.
R1. Commit to considering international cooperation in drafting and implementing national AI policies.
This recommendation could be implemented within a relatively short timeframe and initially would take the form of firm declarations by individual countries. Ultimately this could lead to a joint declaration with clear commitments on the part of the governments involved.
R2. Refine a common approach to responsible AI development.
This recommendation requires enhanced cooperation among FCAI governments, which in turn can provide a good basis for further, incremental forms of cooperation.
R3. Agree on a common, technology-neutral definition of AI systems.
FCAI governments should work toward a common definition of AI that is technology-neutral and broad. This recommendation can be implemented in the relatively short term and requires joint action by FCAI governments. The window for action is narrow: the rather broad definition in the proposed EU AI Act is still working its way through the EU legislative process, and many other countries are still shaping their AI policy frameworks.
R4. Agree on the contours of a risk-based approach.
Alignment on this key element of AI policy would be an important step towards an interoperable system of responsible AI. It would also facilitate cooperation among FCAI governments, industry, and civil society working on AI standards in international SDOs. General agreement on a risk-based approach could be achieved in the short term; developing the contours of a risk-based classification system would probably take more time and require deeper cooperation among FCAI governments as well as stakeholders.
R5. Establish “redlines” in developing and deploying AI.
This may entail an iterative process. FCAI governments could agree on an initial, limited list of redlines, such as certain uses of AI for generalized social scoring by governments, and then gradually expand the list over time to include emerging AI uses on which there is substantial agreement that prohibition is warranted.
R6. Strengthen sectoral cooperation, starting with more developed policy domains.
Sectoral cooperation can be organized on relatively short timeframes, starting with sectors that present higher risks and already have well-developed regulatory systems, such as health care, transport, and finance, where existing sectoral regulation could be adapted to AI relatively swiftly.
R7. Create a joint platform for regulatory learning and experiments.
A joint repository could stimulate dialogue on how to design and implement sandboxes and secure sound governance, transparency, and reproducibility of results, and aid their transferability across jurisdictions and categories of users. This recommended action is independent of others and is feasible in the short term. It requires soft cooperation, in the form of a structured exchange of good practices. Over time, the repository should become richer in terms of content, and therefore more useful.
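For illustration, the sketch below shows how lightweight such a shared repository could be at the outset: a single table of sandbox entries that participating administrations populate and query. The schema, the jurisdictions, and the example rows are invented for the sketch and do not describe any existing FCAI system.

```python
# Minimal sketch of a shared sandbox repository; the schema and rows are
# invented for illustration, not an agreed FCAI design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sandbox_entries (
        jurisdiction   TEXT,
        sector         TEXT,
        ai_application TEXT,
        legal_basis    TEXT,
        status         TEXT,   -- e.g., 'ongoing' or 'completed'
        findings_url   TEXT
    )
""")
conn.executemany(
    "INSERT INTO sandbox_entries VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("Country-A", "finance", "AI credit scoring",
         "national fintech sandbox", "ongoing", "https://example.org/entries/1"),
        ("Country-B", "health care", "diagnostic imaging triage",
         "medical device pilot scheme", "completed", "https://example.org/entries/2"),
    ],
)

# A regulator or developer asks: what has already been tested in finance?
for row in conn.execute(
    "SELECT jurisdiction, ai_application, status FROM sandbox_entries WHERE sector = ?",
    ("finance",),
):
    print(row)
```

Even a minimal structure like this would let a regulator in one country see what a counterpart has already tested before designing its own sandbox.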
R8. Step up cooperation and exchange of practices on the use of AI in government.
FCAI governments could set up, either as a stand-alone initiative or within a broader framework for cooperation, a structured exchange on government uses of AI. The dialogue could cover AI applications that improve the functioning of public administration, such as the administration of public benefits or health care; AI-enabled regulation and regulatory governance practices; other government decision-making; and standards and procedures for AI procurement. This recommended action could be implemented in the short term, although collecting all experiences and setting the stage for further cooperation would require more time.
R9. Step up cooperation on accountability.
FCAI governments could profit from enhanced cooperation on accountability, whether through market oversight and enforcement, auditing requirements, or otherwise. This could combine with sectoral cooperation and possibly also with standards development for auditing AI systems.
R10. Assess the impact of AI on international data governance.
There is a need for a common understanding of how data governance rules affect AI R&D in areas such as health and other scientific research, and whether they inhibit the exploration that is essential to both scientific discovery and machine learning. There is also a need for a critical look at R&D methods to develop a deeper understanding of appropriate boundaries on the use of personal data or other protected information. In turn, R&D on, and understanding of, privacy-protecting technologies that enable exploration and discovery while protecting personal information should be expanded.
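One widely cited family of privacy-protecting technologies is differential privacy, which answers aggregate research queries with calibrated noise rather than exposing individual records. The minimal sketch below, with an arbitrarily chosen privacy budget and invented data, illustrates the basic mechanism; it is an example of the class of techniques the recommendation points to, not a method endorsed in the report.

```python
# Minimal differential-privacy illustration: answer an aggregate research query
# with calibrated noise instead of releasing individual records. The dataset and
# the privacy budget (epsilon) are arbitrary choices made for the example.
import random


def dp_count(records, predicate, epsilon=0.5):
    """Counting query with Laplace noise (the sensitivity of a count is 1)."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # The difference of two exponentials with equal scale is Laplace-distributed.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


patients = [
    {"age": 64, "condition": "diabetes"},
    {"age": 41, "condition": "asthma"},
    {"age": 72, "condition": "diabetes"},
]
# A researcher learns an approximate prevalence without seeing patient-level data.
print(dp_count(patients, lambda r: r["condition"] == "diabetes"))
```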
R11. Adopt a stepwise, inclusive approach to international AI standardization.
A stepwise approach to standards development is needed to allow time for technology development and experimentation and to gather the data and use cases needed to support robust standards. It would also ensure that discussions at the international level take place once a technology has reached a certain level of maturity or once a regulatory environment has been adopted. To support such an approach, it would be helpful to establish a comprehensive database of AI standards under development at the national and international levels.
R12. Develop a coordinated approach to AI standards development that encourages Chinese participation consistent with an industry-led, research-driven approach.
There is currently a risk of disconnect between governments and national security officials, who are increasingly alarmed by Chinese engagement in the standards process, and industry participants, who perceive the impact of Chinese participation in SDOs differently. To encourage constructive involvement and discourage self-serving standards, FCAI participants (and likeminded countries) should encourage Chinese engagement in international standards setting while also agreeing on costs for actions that use SDOs strategically to slow down or stall standards-making. This can be accomplished through trade and other measures but will require cooperation among FCAI participants to be effective.
R13. Expand trade rules for AI standards.
The rules governing the use of international standards in the WTO TBT Agreement and in free trade agreements are limited to goods, whereas AI standards will apply mainly to services. New trade rules are needed that extend the rules on international standards to services. As a starting point, such rules should be developed in bilateral free trade agreements or plurilateral agreements, with the aim of making them multilateral in the WTO. Trade rules are also needed to support data free flow with trust and to reduce barriers and costs to AI infrastructure. Consideration should also be given to linking participation in the development of AI standards in bodies such as ISO/IEC with broader trade policy goals and compliance with core WTO commitments.
R14. Increase funding for participation in SDOs.
Funding should be earmarked for academic and industry participation in SDOs, as well as for SDO meetings in FCAI countries and, more broadly, in less developed countries. Broadened participation is important to democratize the standards-making process and to strengthen the legitimacy and adoption of the resulting standards. Hosting meetings of standards bodies in a diverse set of countries can broaden exposure to standards-setting processes for AI and other critical technologies.
R15. Develop common criteria and governance arrangements for international large-scale R&D projects.
Joint research and development applied to large-scale global problems such as climate change or disease prevention and treatment can have two valuable effects: it can bring additional resources to pressing global challenges, and the collaboration can help find common ground on differences in approaches to AI. FCAI will seek to incubate a concrete roadmap for such R&D for adoption by FCAI participants as well as other governments and international organizations. Because collaboration on R&D is a mechanism for working through matters that affect international cooperation on AI policy, this recommendation should play out in the near term.

Proposed future topics for FCAI dialogues
– Scaling R&D cooperation on AI projects.
– China and AI: what are the risks, opportunities, and ways forward?
– Government use of AI: developing common approaches.
– Regulatory cooperation and harmonization: issues and mechanisms.
– A suitable international framework for data governance.
– Standards development.
– An AI trade agreement: partners, content, and strategy.

Download the full report
