Insights and Commentaries
Making AI more trustworthy to facilitate adoption and use
A lack of trust in AI and the absence of a humanistic care factor are two reasons why some people still hold negative attitudes towards medical AI. Because of this, data science research is often approached from the angle of making AI more reliable, accurate, explainable and actionable. These outcomes contribute to the broader objective of making AI more trustworthy so as to facilitate its adoption and use. However, trust is a multi-faceted issue influenced by a variety of factors, including understanding, familiarity, perceptions of risk and credibility.
In healthcare, patients are reluctant to use services provided by medical AI even when research suggests that it outperforms human doctors – many believe that their medical needs are unique and therefore cannot be adequately addressed by algorithms. The role of the public is particularly important in such industries, where the public, as end-users, are in close proximity to the AI application and have significant implications for its adoption. To foster public trust and adoption, practitioners and experts need to pay more attention to promoting the credibility of the technology and meeting patients' emotional needs, instead of focusing solely on technicalities.
Greater engagement and dialogue are also needed for scientists to better understand the public and their concerns. When and why is it important for experts to understand public perceptions of risk? How can data scientists work with implementation partners to better understand end-users and address concerns that may inhibit adoption?
These questions were explored in the inaugural UPP4DS workshop, organised in conjunction with the prestigious ACM SIGKDD 2021 conference.
The 2021 KDD Workshop on Understanding Public Perceptions for Applied Data Science (UPP4DS) was held on 13 August 2021. Over 30 researchers, practitioners and civil society representatives joined the workshop to examine the role of society in the development of acceptable technologies.
The workshop, organised by IPUR and the Korea Policy Center for the Fourth Industrial Revolution (KPC4IR), sought to approach the concept of AI trustworthiness from the perspective of the public as end-users. With healthcare as the context, the workshop invited world-leading experts in applied data science, healthcare, public policy, and social sciences to examine the role of public trust and public engagement in the development of healthcare AI applications.
One significant highlight of the virtual workshop was the presentation of "Using Artificial Intelligence to Support Healthcare Decisions: A Guide for Society", produced by IPUR, KPC4IR and Sense about Science. The guide focuses on issues of reliability and aims to provide useful guidelines and tools to motivate and equip the public (journalists, policy-makers, healthcare agencies, doctors and patients) to engage in deliberations around the reliability of AI.
Over the course of the half-day programme, two panel sessions were held featuring leading experts such as Prof. Dean Ho, Director of the N.1 Institute for Health and the Institute for Digital Medicine, NUS, and Tracey Brown, Director of Sense about Science. They shared insights on topics ranging from data privacy to public engagement.
For full details of the workshop proceedings and discussions, download the report. More content from the workshop can be accessed here.