10 January 2025
- IP25004 | Military AI Governance in 2024: One Step Forward, Two Steps Back
SYNOPSIS
Multilateral governance of military AI saw progress in 2024 mainly around efforts to develop norms of behaviour. However, prospects for legally binding agreements remain slim while proliferation concerns are growing as more countries seek to advance their capabilities in military AI.
COMMENTARY
On the surface, 2024 could be considered a marquee year for military AI governance. In November 2024, the UN General Assembly’s First Committee adopted its first-ever resolution focused on military AI. The resolution was led by the Netherlands and South Korea, who have also collaborated to organise summits on responsible AI in the military domain since 2023.
Another important development occurred on the sidelines of the Asia-Pacific Economic Cooperation (APEC) Summit held in November 2024 in Lima, Peru, where presidents Joe Biden and Xi Jinping agreed that human beings rather than AI should make decisions regarding the use of nuclear weapons. Earlier in the year, the United States and China had also held an inaugural round of bilateral talks in Geneva on the risks posed by AI.
In contrast to the progress seen towards developing norms of behaviour, prospects for legally binding agreements remain slim. An almost decade-long effort on lethal autonomous weapon systems (LAWS) by countries that are signatories to the Convention on Certain Conventional Weapons (CCW) failed to make significant progress in 2024. This is despite UN secretary-general Antonio Guterres urging the conclusion of a legally binding agreement on LAWS through this platform by 2026.
While efforts to develop norms should not be discounted, it is difficult to impose enforceable constraints on the development and use of military AI without legally binding agreements. Observations from ongoing conflicts in Gaza and Ukraine have already demonstrated the urgent need for arms control, and we can expect militaries to spend more on AI as part of the larger global trend of rising defence spending. This will only deepen strategic instability, which is no longer underpinned solely by nuclear weapons.
In the year ahead, there will be three things to watch closely. First, the impact on military AI governance of any shifts in relations between China and the United States as a second administration led by Donald Trump takes office. Next, the steps that established military AI governance platforms will take to move beyond norms of behaviour. Finally, whether the conversation on military AI governance will become more inclusive and involve countries beyond those with the most advanced military capabilities or economies.
The Rise of Platforms for Developing Norms
In a climate of acrimonious relations between China and the United States, few would have expected any progress on military AI governance. The rise of platforms for developing norms of behaviour regarding military AI has not only highlighted the value of smaller countries coming together but has shaped the superpowers’ behaviour as well.
The Responsible AI in the Military Domain (REAIM) summits held since 2023 are a good example of this dynamic at play. At the inaugural REAIM summit, hosted by the Netherlands, the United States launched a parallel effort on military AI governance, the Political Declaration on Responsible Military Use of AI and Autonomy. It is telling that so far more countries have endorsed the Blueprint for Action put forward at the second REAIM summit, held in South Korea in September 2024, than the US-led Political Declaration.
The REAIM platform’s success has in turn nudged the superpowers in a positive direction. Both China and the United States voted in favour of the resolution on military AI led by the Netherlands and South Korea at the UN General Assembly’s First Committee in November 2024. While much has been made of the fact that China did not endorse the REAIM Blueprint for Action or co-sponsor the First Committee resolution on military AI like the United States did, this does not necessarily equate to China being against these initiatives.
Russia, by contrast, has been a consistent opponent of these norm-building efforts. However, it recently announced an AI Alliance Network under the BRICS grouping that appears to be aimed primarily at conducting joint R&D in AI. For now, it is unclear whether this initiative will have any impact on military AI governance, but the likelihood is low given the BRICS' traditional focus on economic issues.
Obstacles to Legally Binding Agreements
The picture is far less rosy when we turn to legally binding agreements. Although the Group of Governmental Experts (GGE) on LAWS involving parties to the CCW is aiming for a legally binding agreement under a more focused mandate approved in November 2023, its meetings in 2024 saw divisions entrenched over nearly a decade continue to dominate proceedings.
These divisions are primarily along two fault lines — the characteristics and definitions of LAWS and approaches to regulating them. Regarding definitions, a major challenge is that much of what is being discussed is prospective and has yet to be deployed. In terms of approaches to regulation, some countries favour banning only LAWS that cannot comply with international humanitarian law, while others desire an all-out ban. With the GGE on LAWS working on the basis of consensus, these divisions are very likely to continue hampering future progress towards a legally binding agreement.
Meanwhile, countries that have been frustrated with the slow progress of the GGE on LAWS have sought to create new venues for discussion. Austria led a resolution in the UN General Assembly that was passed in December 2023 and organised a conference in April 2024 to move discussion on the regulation of LAWS forward without the GGE’s constraint of requiring consensus. It also successfully led a follow-up resolution in 2024, which has now firmly placed LAWS on the agenda of the UN General Assembly’s First Committee. While this effort has been criticised for seeking to sidestep countries in the GGE on LAWS that want an all-out ban, the reality is that the GGE is not an entirely inclusive platform either, being restricted to CCW signatories.
Prospects for 2025
There are three things worth watching closely in the year ahead. First among them is a new US administration led by Donald Trump that will take office later this month, armed with a majority in the Senate and the House of Representatives. While observers generally expect that the Biden administration’s policies on AI will be rolled back, it is unclear how this will affect military AI.
In August 2024, China and the United States committed to a second round of bilateral talks on AI. It remains to be seen whether the incoming administration will retain this platform. Another uncertainty is the future of the Political Declaration on Responsible Military Use of AI and Autonomy. With only one plenary meeting of endorsing countries convened so far, in March 2024, it is possible that this platform will be abandoned. If the first Trump administration's actions are any guide, future American participation in other multilateral platforms on military AI governance is also in doubt.
The second thing to watch will be how existing military AI governance platforms aim to push forward beyond norms of behaviour. The REAIM process has already signalled its intentions, having progressed from a Call to Action at the first summit to a Blueprint for Action in 2024. Furthermore, at the upcoming 80th session of the UN General Assembly’s First Committee this year, there will be two agenda items related to military AI arising from the two resolutions that were adopted in 2024 — one on the implications of AI for international peace and security, and the other on LAWS. The GGE on LAWS will also convene and aim to make progress on a rolling text that guided discussions in 2024 towards an agreement.
Finally, it will be important to track whether the conversation on military AI governance will become more inclusive in 2025. The inclusion of military AI and LAWS on the agenda of the UN General Assembly could already be interpreted as progress on this front, but it remains to be seen whether other platforms will invite greater participation. If we consider Southeast Asia, just three countries from the region (Brunei, the Philippines, and Singapore) endorsed the REAIM Blueprint for Action, and only Singapore endorsed the US Political Declaration.
ASEAN has yet to substantively address the issue of military AI governance, and think tanks in the region have suggested that the ASEAN Defence Ministers’ Meeting (ADMM) could take the lead on this. Furthermore, with the ADMM-Plus platform that involves ASEAN’s Dialogue Partner countries — most of whom are key players in military AI, including China, the United States, Australia, India, Japan, and South Korea — the region is well positioned to plug into the larger multilateral conversation. The Philippines, which has been active at multilateral platforms on military AI governance in recent years, may well make this a reality if it ends up taking over Myanmar’s 2026 slot as rotating chair of ASEAN.
The year ahead therefore looks to be a critical one for military AI governance to mature. All eyes will be focused not just on China and the United States, but also on the smaller and less developed countries that are playing crucial roles in taking the multilateral conversation forward.
Manoj Harjani is Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies.