AI Guide for Society in Healthcare

Using AI to support healthcare decisions

AI technology is increasingly used in the healthcare sector, where it has demonstrated accuracy and efficiency in diagnosing and predicting diseases. Yet alongside its growing impact on daily life across every sector of society, AI technology has drawbacks and carries risks, particularly from potentially biased algorithms. Because these deep and rapid technological changes will profoundly affect individuals and healthcare service providers alike, greater societal engagement on the subject is essential for sustainable development.
To motivate and equip the public to participate in deliberations surrounding technological risks, IPUR collaborated with the KAIST Korea Policy Center for the Fourth Industrial Revolution (KAIST KPC4IR) and Sense about Science, a UK non-profit organisation specialising in science communication, to produce “Understanding Artificial Intelligence to Support Healthcare Decisions: A Guide for Society”. Designed to serve as a reference for the responsible use of AI technologies in the healthcare sector, the guide sets out what should be considered when making clinical and healthcare decisions, to help reduce the chances of an AI system giving false or misleading results.

To develop the guide, the project team interviewed over 30 experts and practitioners working on AI and healthcare, and held feedback sessions to solicit input from the public.

“The guide was written to motivate and equip the public to participate in deliberations surrounding technological risks. This is especially important in today’s world with deep and rapid technological change that will profoundly affect individuals and healthcare service providers. Through increased societal engagement on the subject, the guide aims to help users understand the benefits and limitations of AI, and ask the pertinent questions which would prompt developers to improve the transparency and quality of AI solutions,” said Prof Koh Chan Ghee, Director of IPUR.  

Apart from providing an introduction to AI in healthcare (including commonly used terms, the AI development landscape and its applications), the guide focuses on the reliability of AI applications in the healthcare sector. For instance, it states that the source of the AI's data must be clearly known, that the data must have been collected or selected for the purpose for which it is used, that the limitations and assumptions for that purpose are stated, and that the solution has been properly tested in the real world. By being transparent and demonstrating the steps taken to check that the AI is reliable, researchers and developers can help give people the confidence to provide their data and to adopt the technology.
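As a purely illustrative sketch (not part of the guide itself, written here in Python with hypothetical names), these considerations can be read as a simple checklist that flags which reliability questions remain unanswered for a given AI solution:

```python
from dataclasses import dataclass, fields

# Illustrative only: a checklist mirroring the reliability questions the
# guide raises. The class and field names are assumptions for this sketch,
# not terminology from the guide.
@dataclass
class AIReliabilityChecklist:
    data_source_known: bool        # Is the source of the AI's data clearly known?
    data_fit_for_purpose: bool     # Was the data collected/selected for this use?
    limitations_stated: bool       # Are limitations and assumptions documented?
    tested_in_real_world: bool     # Has the solution been tested in real-world settings?

def unmet_checks(checklist: AIReliabilityChecklist) -> list[str]:
    """Return the names of any reliability questions that remain unanswered."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

# Example: a solution with undocumented limitations and no real-world testing.
example = AIReliabilityChecklist(
    data_source_known=True,
    data_fit_for_purpose=True,
    limitations_stated=False,
    tested_in_real_world=False,
)
print(unmet_checks(example))  # ['limitations_stated', 'tested_in_real_world']
```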

The guide was presented during a workshop at the 2021 SIG-KDD (Special Interest Group on Knowledge Discovery and Data Mining) Conference on 15 August 2021. The workshop summary can be accessed here.