    CO19128 | Singapore Defence Technology Summit – AI Ethics 2.0: From Principles to Action
    Danit Gal

    02 July 2019

    RSIS Commentary is a platform to provide timely and, where appropriate, policy-relevant commentary and analysis of topical and contemporary issues. The authors’ views are their own and do not represent the official position of the S. Rajaratnam School of International Studies (RSIS), NTU. These commentaries may be reproduced with prior permission from RSIS and due credit to the author(s) and RSIS. Please email to Editor RSIS Commentary at [email protected].

    SYNOPSIS

    Trying to catch up with the fast-moving and increasingly pervasive development and use of AI, nations want to establish ground rules to ensure the technology benefits humanity. This is no easy task, further complicated by the mismatch between abstract AI ethics principles and existing technical capabilities and human practices.

    COMMENTARY

    IN THE past couple of years, discussions on AI ethics have become the norm, with a variety of actors putting forth over 40 sets of broadly similar principles. These principles include accountability, controllability, diversity, explainability, fairness, human-centricity, transparency, safety, security, and sustainability.

    While this creates a shared language assisting countries in addressing similar concerns, the local interpretation of these principles can differ widely, often leading to a deep sense of confusion. With the wide proliferation of such AI ethics principles, practitioners are getting closer to agreeing on what they should be in theory, but not on how to make them work in practice.

    Problem of Effective Implementation

    Principles like the ones recently published by the OECD are illustrative. They combine many existing works on AI ethics principles into another fairly generic guideline. This level of abstraction makes it appealing enough to create an important international consensus.

    It is also, however, vague enough to allow local actors to interpret the principles as they see fit within their own social and cultural contexts. This diversity of interpretation is essential in ensuring that benefits brought to humanity by using AI are inclusive. The problem is, therefore, one of effective implementation.

    The problem of effective implementation is the test bed of these principles, which can often be detached from technical capabilities and human practices. Can these AI ethics principles be codified into technical and human practices? In theory, yes. The abovementioned principles benefit us. They are intended to keep us safe and help us all benefit from the use of AI.

    In practice, however, they face real-life conflicts of interest such as corporate profitability, individual and collective biases and inequality, low general levels of technical literacy, the sanctification of progress, and the desire for constant convenience.

    Moving from Principles to Action

    The good news is that this problem is already being partially solved. Individuals and institutions working in the AI ethics field are moving from principles to action. The IEEE, the world’s largest association of technical professionals, launched the Global Initiative on Ethics of Autonomous and Intelligent Systems, which has led to the creation of a series of technical standards on AI ethics.

    The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) community brings together researchers and practitioners developing tangible solutions. The Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI (CHAI), and safety teams at DeepMind and OpenAI work to develop safe and robust AI. Institutions like AI NOW, Data & Society, and various academic centers are getting to the heart of socio-technical problems and how they already impact users.
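    To make concrete what “moving from principles to action” can look like, here is a minimal illustrative sketch (not taken from any of the groups named above) of how an abstract fairness principle can be codified into a testable technical check. The metric shown, the demographic parity difference, is one standard measure used in fairness research; the group data and the audit threshold are assumptions for illustration only.

    ```python
    # Illustrative sketch: codifying a fairness principle as a concrete check.
    # The "demographic parity difference" is the absolute gap between the
    # positive-outcome rates of two groups; a large gap is a signal an
    # auditor could flag. Data and threshold below are hypothetical.

    def positive_rate(outcomes):
        """Fraction of positive (1) outcomes in a group's decisions."""
        return sum(outcomes) / len(outcomes)

    def demographic_parity_difference(group_a, group_b):
        """Absolute gap in positive-outcome rates between two groups."""
        return abs(positive_rate(group_a) - positive_rate(group_b))

    # Hypothetical loan-approval decisions (1 = approved) for two groups.
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5 of 8 approved
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

    gap = demographic_parity_difference(group_a, group_b)
    print(round(gap, 3))                  # 0.25: a 25-point approval gap

    # An assumed audit threshold turns the principle into a pass/fail test.
    THRESHOLD = 0.1
    print("flag for review" if gap > THRESHOLD else "within tolerance")
    ```

    The point of such a check is not that one metric settles the fairness question, but that a vague principle becomes something a regulator or developer can measure, argue about, and enforce.
    
    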

    The bad news, however, is that while this work contributes immensely to the beneficial development and use of AI, it is still only the beginning. We need wider geographical participation and action to put AI ethics principles into practice and create inclusive benefits. Until we are able to achieve that, any benefits called for in AI ethics principles run the risk of staying as an idealistic vision for a speculative future.

    Coming to Terms with the Present

    While AI might still look futuristic, its early-stage applications are already as widespread as they are pervasive. To most, AI is invisible. Users cannot really see it or interact with it, and they often do not understand how it works or affects them and their actions. And yet, most regulations and AI ethics principles look towards the theoretical future and thus fail to address the implementation problem.

    Given their soft-governance nature, most AI ethics principles do not offer tangible solutions. More alarmingly, many government regulators remain unwilling to offer tangible solutions due to fears that overregulation will ‘stifle innovation’.

    This creates a false dichotomy where ethical and well-regulated developments ‘sacrifice’ speed or innovation to ensure benefit. In reality, not making this ‘sacrifice’ leads to development that is prone to structural errors, stalls in achieving market viability, and mostly just serves its developers.

    Additions to the over 40 existing sets of AI ethics principles are a positive and welcome development if they represent new concerns and population groups. But things will only change when local governments interpret and implement them in local regulations and more institutions develop technical tools and methods to put them into practice. Tangible solutions are within reach.

    The Small Country Advantage

    Smaller countries have an edge in solving this problem. As importers of technology from larger countries, smaller ones often find themselves relying on tools not developed with their social and technical needs in mind. This entails an adjustment and adaptation period for users and the technology itself.

    All governments must, therefore, invest in creating regulatory and technical sandboxes to ensure the adjustment and adaptation period goes as smoothly as possible and comes to positive conclusions. But small governments can do it faster and more efficiently. To that end, they should do two things:

    The first is to institute agile regulatory mechanisms that develop with and support the nation’s beneficial use of AI. The second is to invest in creating well-informed and resourced actors that put local AI ethics principles interpretations into practice.

    Investment in a competitive future should be about beneficial development, not just rapid development. Otherwise, our future will see us spending years trying to identify and fix the mistakes we have made in the name of careless progress, and that is the best-case scenario. In short: put well-considered theory into thoughtful local regulatory and technical practice, because 40+ sets of AI ethics principles will not work unless you do.

    About the Author

    Danit Gal is founder of the TechFlows Group technology geopolitics consultancy, and creator of the Collective Futures Network for young experts. She is a researcher working on AI ethics, safety and security. She contributed this to RSIS Commentary in cooperation with RSIS’ Military Transformations Programme.

    Categories: Commentaries / Country and Region Studies / Cybersecurity, Biosecurity and Nuclear Safety / Non-Traditional Security / Global / South Asia / Southeast Asia and ASEAN

    Last updated on 02/07/2019

