
TECHNICAL WORKSHOP SERIES


Introducing LIME and SHAP (as two great candidates to explain machine learning models) (1st Offering)

Presenter: Nasrin Tavakoli

Date: Wednesday, July 31st, 2024

Time: 10:00 AM

Location: 4th Floor at 300 Ouellette Avenue (School of Computer Science Advanced Computing Hub)


Abstract:

The session opens with a review of Explainable AI (XAI) and its role in making complex machine learning models interpretable. It then delves into LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), two leading methodologies in the XAI landscape. Participants will learn how LIME builds locally faithful surrogate explanations around individual predictions, and how SHAP applies cooperative game theory to attribute a prediction to its input features, attributions that can also be aggregated for global insight. A critical comparison of LIME and SHAP will highlight their distinct strengths and trade-offs, helping attendees make informed choices in their own XAI work.
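The two ideas above can be sketched in a few lines of NumPy. This is an illustrative toy, not the actual `lime` or `shap` library APIs: the black-box model, baseline, kernel width, and sample count are all assumptions chosen for the example, and the real packages handle sampling, kernels, and estimation far more carefully.

```python
import itertools
import math
import numpy as np

# Toy black-box model of three features (an assumption for illustration).
def model(X):
    return X[:, 0] * X[:, 1] + np.sin(X[:, 2])

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 0.5])  # the instance to explain

# --- LIME-style local surrogate (simplified) ---
# Perturb the instance, weight samples by proximity to x,
# and fit a weighted linear model as the local explanation.
Z = x + rng.normal(scale=0.1, size=(500, 3))       # local perturbations
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.05)   # proximity kernel
A = np.hstack([Z, np.ones((500, 1))])              # intercept column
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], model(Z) * sw, rcond=None)
lime_weights = coef[:3]  # local linear effect of each feature

# --- Exact Shapley values (SHAP's game-theoretic core) ---
# Feature i gets its average marginal contribution over all feature
# subsets, with "absent" features held at a baseline value.
baseline = np.zeros(3)

def f(subset):
    # Model output with only the features in `subset` set to x.
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return model(z[None, :])[0]

n = 3
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for S in itertools.chain.from_iterable(
            itertools.combinations(others, k) for k in range(n)):
        weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                  / math.factorial(n))
        phi[i] += weight * (f(set(S) | {i}) - f(S))

print("LIME local weights:", lime_weights)
print("Shapley values:", phi)
```

By the efficiency property of Shapley values, `phi` sums exactly to `model(x) - model(baseline)`, while the LIME weights approximate the model's local slope around `x` — a concrete preview of the local-vs-game-theoretic contrast the comparison section will explore.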


Workshop Outline:

  • A review of Explainable AI (XAI)
  • LIME: Local Interpretable Model-agnostic Explanations
  • SHAP: SHapley Additive exPlanations
  • Comparing LIME and SHAP
  • Conclusion

Prerequisites:

  • Basic Understanding of Machine Learning and AI
  • Understanding of Model Training and Evaluation


Biography:

Nasrin Tavakoli is a Ph.D. student in Computer Science at the University of Windsor. Her field of study is Artificial Intelligence and Machine Learning. During her master's program, she worked on breast cancer diagnosis based on deep features. In her Ph.D., she is continuing her research in Artificial Intelligence with a focus on Explainable AI.


MAC STUDENTS ONLY - Register here