Think Tank (4/2022)
Obstacles to Scaling Artificial Intelligence: Implications for Singapore
14 Jul 2022

The Future Issues and Technology (FIT) Research Cluster, with support from the Centre of Excellence for National Security (CENS), RSIS, organised a webinar on 14 July 2022 to stimulate discussion and research at the intersection of national security and science and technology.

Artificial intelligence (AI) is a well-established concern, and RSIS researchers have produced a significant volume of commentary and research in recent years on issues ranging from AI’s dual-use potential to the challenges of regulating this rapidly evolving set of technologies.

However, we are now observing AI maturing and entering a stage where its limitations and negative social impacts are attracting the attention of policymakers. These include concerns such as biased output and unsustainable resource requirements. Such issues matter to policymakers as AI becomes more embedded in daily life, but it is unclear whether they will lead to another “AI winter”, in which investment and research decline.

The panel for this webinar comprised Mr Gerry Chng (Executive Director, Cyber Risk Advisory Practice, Deloitte) and Dr David Leslie (Director, Ethics and Responsible Innovation Research, Alan Turing Institute), with Dr Jungpil Hahn (Associate Professor, School of Computing, National University of Singapore) as the discussant. Ms Teo Yi-Ling (Senior Fellow, CENS) moderated the webinar.

Panellists highlighted a range of issues from both academic and industry perspectives, such as the trade-off between regulation and innovation that governments and companies have to manage. One challenge raised was the role of ethics in regulating AI, given the wide variety of views globally on what is considered ethical. The panel went on to discuss how regulation can be designed to induce desired outcomes from well-intentioned actors, and questioned whether regulation always has a negative impact on innovation.

Another topic of discussion was how to operationalise responsible AI and move from principles to practice. To this end, the panel discussed the effectiveness of ex post facto methods to improve accountability and transparency versus attempting to transform the mindsets and values of those involved in AI R&D and deployment throughout a project’s lifecycle. Panellists noted that it would be more cost-effective to consider refinements to AI systems during the design and development process, rather than after deployment.

The panel concluded with the question of how learning from examples of failure can be encouraged. Panellists noted that learning from failure is a powerful process, but one that carries stigma; sharing examples of failure is also challenging, owing in part to concerns over intellectual property misuse and the loss of competitive advantage. The panel pointed to growing public conversations as a catalyst for encouraging greater attention towards operationalising responsible AI principles.
