
AI, Ethics, & Your Business

What do we need to move discussions on AI ethics forward?

A post by Abhishek Gupta, Founder and Principal Researcher at the Montreal AI Ethics Institute and a Machine Learning Engineer and CSE Responsible AI Board Member at Microsoft. You can find out more about his work here.

Abhishek took part in a recent webinar entitled AI, Ethics, & Your Business along with four other AI professionals. You can watch a recording of the webinar here.

A gathering of some of the founding editorial members of the new AI & Ethics journal, this webinar dove into the motivations behind the journal's creation, what it seeks to achieve, and how industry can become an active, engaged participant in shaping both the technical and policy measures that guide the development of ethical AI systems.

AI has shown tremendous potential to alter our society, but just because we can do something with AI doesn't mean that we should. As practitioners and researchers, we need to engage deeply in asking critical questions that highlight the tradeoffs between fielding an AI-enabled system and the potential harms that might arise from its use. The journal itself aims to be a public square that brings together voices from all walks of life: AI will affect us all and hence requires active, engaged, and informed participation from everyone, no matter their background. Creating a space for interdisciplinary collaboration and the collision of ideas, the journal is unique in its open invitation to researchers, practitioners, and citizens doing critical work on the societal impacts of AI. While many journals take a more theoretical stance on the subject of ethics, this one concerns itself with the practical implications of ethics, something that can help the field move beyond its current logjam of myriad sets of principles.

To bring a practical, industry lens to the discussion, the panelists spoke about their experiences integrating the requirements elucidated in these sets of principles and the challenges they face. One recurring concern was unintended consequences, those that are not obvious at the outset of a system's creation. Talking through examples such as how conversational interfaces with smart voice assistants will reshape our speech patterns, bias in financial services applications, and loss of consumer trust, among others, the panelists highlighted the importance of involving the stakeholders who will be directly impacted by a system early on in its design and development. We all agree that we should use AI for good, but how to do so is where industry has faced the largest barriers.

While this is a sociotechnical problem and purely technical solutions will have obvious failings, the panelists stressed the importance of utilizing engineering as a diagnostic tool. Examples included runtime monitoring of the system and techniques like SHAP and LIME that provide explanations to stakeholders. An important consideration is to tailor the explanations to the audience meant to benefit from them: an engineer might care about technical details, while an everyday user would want insight into the capabilities and limitations of the system, and perhaps advice on what they could change so that they don't get stuck in algorithmic determinism.
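To make the explanation point concrete, here is a minimal sketch of generating per-feature attributions with the shap library. The dataset, model choice, and background sample below are illustrative assumptions, not anything the panelists prescribed; any sklearn-style classifier would work similarly.

    # A minimal sketch: explaining individual predictions with SHAP.
    # Dataset and model are illustrative assumptions only.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # A model-agnostic explainer over the prediction function; the background
    # sample defines the baseline the attributions are measured against.
    explainer = shap.Explainer(model.predict_proba, X.sample(100, random_state=0))
    explanation = explainer(X.iloc[:5])

    # Per-feature contributions to the positive-class probability for the
    # first prediction -- the raw material an engineer might inspect, and
    # which would need translation before being shown to an everyday user.
    for name, value in zip(X.columns, explanation[0, :, 1].values):
        print(f"{name}: {value:+.4f}")

The design point the panel raised still applies on top of output like this: the raw attribution numbers serve an engineer auditing the model, while a consumer-facing explanation would need to be reframed in terms of what the user can actually change.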

Thinking further about consumer trust: other products we use in our lives, like toothpaste, carry a safety checkmark indicating that a regulatory agency has approved the product for widespread use. Taking a similar approach to AI products might help elicit higher levels of trust from consumers. Treating digital assets in a manner similar to physical products will become increasingly important as they become more significant parts of our existence. Domains like nuclear security have done quite a bit of work ensuring the reliability and safety of their systems, offering lessons that the field of AI can borrow.

There was unanimity on taking a lifecycle view of incorporating solutions to ethical concerns; in particular, involving the appropriate stakeholders at different stages will ultimately help us build more robust systems. The climate movement has tried to address its concerns with this approach, leveraging public consciousness to trigger action. If we place the threats from AI systems gone awry on a similar scale, there may be lessons we can borrow from what has and hasn't worked in addressing climate change. There are no decidedly effective strategies just yet, but that is why open discussion and helping each other grow are essential components of our attempt to build more ethical AI systems.

The panel wrapped up on a hopeful note, highlighting work from countries like Singapore, the UK, Canada, and Australia, and from organizations like BMW, Microsoft, and IBM, as exemplars of moving the conversation from purely theoretical discussion to the practical application of ethics in the field of AI.

The journal has already published several pieces running the gamut of topics within AI ethics and is currently accepting submissions from people across the globe working in this critically important area. The societal impacts of AI are our generation's challenge, one that will have a long-lasting impact on the future of humanity itself.