Seminar: How AI can support better human decision making
2 June 2023
Interpretable machine learning can provide additional ways for a user to understand and interact with an AI. However, transparency aimed primarily at oversight is also needed in order to best support human and AI interaction, and the human decision-making process.
This was one of the key points made by Prof Finale Doshi-Velez (Head of the Data to Actionable Knowledge group at Harvard Computer Science) at a public seminar organised by IPUR on 1 June. Prof Finale presented findings from her lab and related work on how people respond to AI recommendations and explanations, including ways to increase critical engagement.
Her insightful talk drew many engaging questions, from trust in AI to how people can be further empowered to make informed decisions. In summary, Prof Finale presented these other key points:
Transparency can enable calibrated trust (use the AI when it’s right, ignore when it’s wrong).
Interactivity can empower more sophisticated teaming (humans correct elements in the AI; the AI provides options, etc.)