    CO21143 | Singapore’s AI ‘Living Lab’: Safety Rules Essential
    Manoj Harjani, Hawyee Auyong

    28 September 2021

    RSIS Commentary is a platform to provide timely and, where appropriate, policy-relevant commentary and analysis of topical and contemporary issues. The authors’ views are their own and do not represent the official position of the S. Rajaratnam School of International Studies (RSIS), NTU. These commentaries may be reproduced with prior permission from RSIS and due credit to the author(s) and RSIS. Please email to Editor RSIS Commentary at [email protected].

    SYNOPSIS

    The impact of emerging regulations emphasising the safety of artificial intelligence (AI) in the EU and US will reach Singapore sooner rather than later. To sustain its ambition of being a “living laboratory” for AI applications, Singapore should develop its own AI safety regulations.


    COMMENTARY

    EARLIER THIS year, the European Union (EU) released draft regulations for artificial intelligence (AI), prompting responses ranging from criticism to praise. In the same week, the US Federal Trade Commission (FTC) issued guidance on the use of AI, highlighting the need to ensure “truth, fairness and equity.”

    There are currently no federal regulations in the US for AI-driven systems. Nevertheless, the FTC said that companies deploying such systems must adhere to existing laws prohibiting unfair, deceptive or discriminatory practices. The EU’s proposed regulations similarly seek to ensure that AI-driven systems align with laws protecting fundamental rights and social values. These moves signal an emerging regulatory regime to ensure AI does not cause harm to society. Rules made in these influential jurisdictions can and will have global implications.

    Addressing AI Safety

    Given the government’s desire to position Singapore as a “living laboratory” to test and develop new AI applications for eventual export, there is a clear need to ensure that locally developed systems align with these emerging AI safety regulations.

    Singapore appears to be well-placed to do this given the groundwork laid by the Smart Nation initiative, Model AI Governance Framework, and National AI Strategy. These outline a citizen-centric approach to digital transformation, provide guidelines for ethical adoption of AI in the private sector, and highlight some priority areas for public investment in AI development and deployment.

    However, the EU’s draft AI legislation and US FTC’s guidance on AI highlight the need to look beyond AI-related risks caused directly by malicious actors. As AI-driven systems become more widespread and integrated with daily life, it will eventually be necessary to address the inherent risks of AI that can materialise even when such systems function as intended.

    Singapore’s current policy initiatives, however, have yet to formally address issues around the safety of AI-driven systems. Although the Model AI Governance Framework has set out a relatively robust set of guidelines for ethical deployment of AI-driven systems, its adoption is still voluntary.

    Fostering Public Trust: The Need for Legislation

    Given these circumstances, it is critical for Singapore to introduce more direct oversight of the development and deployment of AI-driven systems, and ensure accountability where there are risks of harm.

    Although voluntary initiatives such as the Model AI Governance Framework can be sensible at initial stages of a technology’s development and deployment, we may be fast approaching the point where trust and safety concerns need to be addressed in concrete ways through legislation or regulation.

    Just as food safety standards and their effective enforcement provide assurance to consumers, robust AI safety regulations will be crucial to fostering public trust. Singapore’s stringent food safety standards are also essential to the global success of its food manufacturing industry; ideally, AI safety regulations could achieve a similar effect.

    However, the Model AI Governance Framework deliberately excludes off-the-shelf software that is being updated to incorporate AI-based features. This exclusion could become problematic as increasingly advanced AI-based features are integrated more deeply into commonly used applications.

    Framework to Classify: Something Lacking?

    Another important dimension relates to the attribution of liability in cases where AI-driven systems cause unintentional harm. In a recently published report, the Law Reform Committee of the Singapore Academy of Law noted the need to legally define acceptable standards of conduct rather than letting the courts establish them over time.

    Although the National AI Strategy commits to developing and deploying AI based on a “human-centric” approach, it is unclear what exactly this means in practice. Furthermore, Singapore lacks a framework to classify AI-driven systems according to their potential for causing harm.

    In the EU’s draft AI legislation, a risk-based classification is used to identify obligations imposed on system providers and define activities that warrant greater scrutiny. Such an approach is worth considering to provide clarity on the scope of legislation and regulation while allaying concerns about “chilling effects” on innovation.
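
    To make this concrete, the sketch below illustrates, in purely hypothetical Python, how a risk-tier taxonomy along the lines of the EU draft (unacceptable, high, limited and minimal risk) could map a system to the obligations its provider must meet. The tier names follow the EU proposal, but the obligation lists, class names and example system are simplified inventions for illustration only, not a rendering of the draft legislation or of any Singapore framework.

    from enum import Enum
    from dataclasses import dataclass

    class RiskTier(Enum):
        """Risk tiers broadly following the EU's April 2021 draft AI legislation."""
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # strict obligations before deployment
        LIMITED = "limited"            # transparency obligations only
        MINIMAL = "minimal"            # no additional obligations

    # Simplified, illustrative obligations per tier; the actual draft text is far more detailed.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["deployment prohibited"],
        RiskTier.HIGH: [
            "pre-deployment conformity assessment",
            "risk management and record-keeping",
            "human oversight measures",
        ],
        RiskTier.LIMITED: ["disclose to users that they are interacting with an AI system"],
        RiskTier.MINIMAL: [],
    }

    @dataclass
    class AISystem:
        name: str
        tier: RiskTier

    def obligations_for(system: AISystem) -> list:
        """Return the obligations a provider would need to satisfy for a given system."""
        return OBLIGATIONS[system.tier]

    # Hypothetical example: a face-verification login service treated as high risk.
    face_login = AISystem(name="face-verification-login", tier=RiskTier.HIGH)
    for obligation in obligations_for(face_login):
        print(obligation)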

    Gearing for the Global Market

    The development and deployment of AI-driven systems is progressing amidst a geopolitical landscape marked by contestation and rivalry. As such, compliance with emerging AI regulatory regimes could well become a non-tariff barrier deployed by countries at the forefront of major AI research and development.

    Given Singapore’s small domestic market, products and services developed here are often tailored to larger export markets. The same economic logic applies to the emerging AI sector. For Singapore-made solutions to succeed in the large markets of the EU and US, they will need to comply with those jurisdictions’ respective AI safety regulations.

    Moreover, public trust needs to be carefully managed in our push to make Singapore a living laboratory for AI applications. Singapore has typically had a sense of optimism towards technology, and this has been a critical factor underpinning Smart Nation efforts.

    A Pew survey conducted between October 2019 and March 2020 showed that an overwhelming majority of respondents in Singapore (72%) felt the development of AI was good for society. However, this optimism should not be taken for granted, as there could be a severe backlash if the deployment of AI-driven systems in Singapore ends up causing unintended harm or running contrary to social mores.

    For example, Singapore has high hopes for the deployment of facial recognition technologies. This is seen in the launch of SingPass face verification, which aims to provide an additional, and perhaps more convenient, authentication option for individuals to access government digital services.

    Avoiding Backlash

    However, the application of facial recognition technologies in more intrusive ways without explicit knowledge or consent, such as to analyse emotions, poses significant risks.

    For example, the deployment of an AI-driven system could be found to have inadvertently led to discrimination against specific categories of individuals. In such a scenario, the resulting backlash could sour public willingness to embrace this whole class of technologies, which would in turn deal a blow to ongoing efforts to develop, test, and refine AI applications in Singapore.

    Singapore should therefore move to safeguard the long-term viability of its efforts to become a living laboratory for AI applications with robust AI safety regulations. A well-designed AI regulatory regime in Singapore will likely have an enduring positive impact.

    Moreover, as other Southeast Asian countries catch up in their adoption of AI, Singapore’s regulatory frameworks will serve as a tried and tested model that could potentially be adopted more broadly in the region.

    About the Authors

    Manoj Harjani is a Research Fellow with the Future Issues and Technology research cluster, S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Hawyee Auyong is an independent researcher and formerly a Research Fellow with the Lee Kuan Yew School of Public Policy (LKYSPP), National University of Singapore (NUS).

    Categories: Commentaries / Country and Region Studies / International Politics and Security / Non-Traditional Security / East Asia and Asia Pacific / Global / South Asia / Southeast Asia and ASEAN

    Last updated on 28/09/2021
