Seminar: How AI can support better human decision making
2 June 2023
Interpretable machine learning can provide additional ways for a user to understand and interact with an AI. However, transparency designed primarily for oversight is also needed to best support human–AI interaction and the human decision-making process.
This was one of the key points made by Prof Finale Doshi-Velez (Head of the Data to Actionable Knowledge group at Harvard Computer Science) at a public seminar organised by IPUR on 1 June. Prof Finale presented findings from her lab and related work on how people respond to AI recommendations and explanations, including ways to increase critical engagement.
Her insightful talk drew many engaging questions, ranging from trust in AI to how people can be further empowered to make informed decisions. Prof Finale summarised several other key points:
Transparency can enable calibrated trust (use the AI when it’s right, ignore when it’s wrong).
Interactivity can enable more sophisticated teaming (humans correct elements of the AI's output; the AI provides options; etc.).