Seminar: How AI can support better human decision making
2 June 2023
Interpretable machine learning can provide additional ways for a user to understand and interact with an AI. However, transparency designed with oversight in mind is also needed to best support human–AI interaction and the human decision-making process.
This was one of the key points made by Prof Finale Doshi-Velez (Head of the Data to Actionable Knowledge group at Harvard Computer Science) at a public seminar organised by IPUR on 1 June. Prof Finale presented some of her lab's findings and related work on how people respond to AI recommendations and explanations, including ways to increase critical engagement.
Her insightful talk drew many engaging questions, from trust in AI to how people can be further empowered to make informed decisions. In summary, other key points from her talk included:
Transparency can enable calibrated trust (use the AI when it’s right, ignore when it’s wrong).
Interactivity can empower more sophisticated teaming (humans correct elements in the AI; the AI provides options, etc.).