21 May 2024
- IP24050 | Parsing the Inaugural China-US AI Talks
Although recently concluded talks between China and the United States on artificial intelligence (AI) were not aimed at any substantive outcomes, they nevertheless set an important precedent at a time when global efforts to regulate AI have gained momentum. Future bilateral talks could focus on the urgent issue of preserving human control over nuclear weapons.
COMMENTARY
On 14 May 2024, mid-level officials from China and the United States met in Geneva for the first time to discuss risks related to AI. The meeting arose from discussions at the summit between their heads of state held at Woodside, near San Francisco, in November 2023.
Although both countries set the bar low, with no expectation of tangible outcomes, the fact that they met – even if just to share their views – should be taken as a positive sign given how tense relations have been in recent months. More importantly, the talks set a precedent for future discussions that could improve mutual understanding and reduce the risks of miscommunication.
Having such a platform between the world’s two largest players in AI will be increasingly important as global efforts to regulate AI and manage the associated risks gather momentum. Earlier this year, the United Nations General Assembly adopted a landmark resolution on AI without a vote, capping off a period that saw several global summits being held on issues related to AI, such as the Responsible AI in the Military Domain (REAIM) summit at The Hague and the AI Safety summit at Bletchley Park in the United Kingdom.
Substance versus Summitry
The challenge is that none of these efforts has brought us meaningfully closer to legally binding agreements or international regulations, especially when it comes to the military use of AI. Consensus remains elusive, as demonstrated by the inconclusive discussions on lethal autonomous weapons systems (LAWS) that have dragged on for a decade at the United Nations, and countries continue to put their national security interests ahead of managing the risks of AI's proliferation in the military domain.
While these circumstances should not be surprising, the question remains: what can be meaningfully achieved in terms of global governance and arms control for AI? Platforms such as the REAIM summit and AI Safety summit have featured norm-building efforts such as a call to action and declaration, respectively, while the Group of Governmental Experts (GGE) discussing the regulation of LAWS at the UN adopted 11 guiding principles in 2019.
The fact that both China and the United States have contributed to and supported these norm-building efforts demonstrates that there is some common ground to work with, even if their tense relations suggest otherwise. However, future talks on AI between the two countries must find ways to make better use of these areas of broad agreement and strive for alignment on priorities.
Keeping Humans in the Loop
One potential issue that should be jointly prioritised is the extent of human control over decision-making by AI-based systems, particularly in the military domain where they can have an escalatory impact. Earlier this month, a US State Department official urged China to match an American commitment made in 2022 to preserve human control over nuclear weapons.
The urgent need to reach agreement on this issue is heightened by the rapid growth of China’s nuclear arsenal in recent years. According to an estimate by the Bulletin of the Atomic Scientists published in January 2024, China possesses approximately 500 warheads, up from 410 in 2023 and 350 in 2021. Even though this is far fewer than the 3,708 warheads in the United States’ arsenal, the pace at which China’s stockpile is growing is a concern.
Managing Derailers
Nevertheless, the overall temperature of relations will continue to play a part in how effective subsequent bilateral talks on AI will be. Managing both related and unrelated derailers will be important, especially since it is impossible to fully compartmentalise dialogue on specific issues like AI from the broader state of bilateral relations.
A key point in China’s readout from the talks was that it “expressed a stern stance on the US restrictions and pressure in the field of artificial intelligence.” This clearly refers to ongoing efforts by the US Department of Commerce to impose restrictions on China’s access to advanced AI technologies through export controls, which are likely to remain a sore point hanging over future bilateral talks on AI.
Indeed, a recent report by Reuters points to the United States potentially expanding the scope of export controls related to AI to include software. Restrictions on the export of advanced AI models would add to those already in place on chips used for training these models and those being proposed to limit access to cloud computing for the same purpose.
Looking Ahead
Presidential elections in the United States later this year are unlikely to change the broad trajectory of US policy and bipartisan legislative efforts to limit China’s technological capabilities related to AI. An administration led by Donald Trump from 2025 onwards can be expected to take an even more aggressive stance than the Biden administration, and there is a risk that it may suspend future bilateral talks on AI altogether.
This dire prospect underlines why other platforms for dialogue will remain important. Beyond the REAIM and AI Safety summits, all eyes will be on the UN GGE on LAWS as it strives for a legally binding instrument by 2026 under a more focused revised mandate. These efforts received a boost from a conference on LAWS held in Vienna in April 2024, convened to build on the UN General Assembly resolution on LAWS adopted in October 2023.
Singapore’s Role
Singapore has already been establishing itself in its traditional role as a trusted and substantive interlocutor at various platforms related to AI governance. In the military domain, on top of supporting the REAIM process by co-hosting a regional consultation event for Asian countries in February 2024, Singapore has also signed the United States’ Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
In the same way that it has led efforts to promote regional governance of civilian AI, Singapore should support a parallel effort for military AI and LAWS within ASEAN. Beyond raising overall governance and technical capacity across Southeast Asia, Singapore stands to gain significantly in terms of regional strategic stability if it can foster consensus among ASEAN member states on guardrails for the military use of AI.
Manoj HARJANI is Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies.