The School of Computer Science at the University of Windsor is pleased to present…
Date: Friday, March 14, 2025
Time: 11:00 am
Location: Erie Hall, Room 3123
Knowledge Distillation (KD) is a widely used Deep Neural Network (DNN) compression method that largely preserves overall generalization performance. Despite its pervasive use, the effect of KD, and of the distillation temperature hyperparameter in particular, on the underlying function learned by the student model has not previously been studied. Using two common group fairness metrics, Demographic Parity Difference (DPD) and Equalized Odds Difference (EOD), on models trained with the CelebA, Trifeature, and HateXplain datasets, our results suggest that increasing the distillation temperature improves the distilled student model's fairness, and that at high temperatures the student's fairness can even surpass that of the teacher. Additionally, we examine individual fairness, i.e., whether similar instances receive similar predictions. Our results confirm that higher temperatures also improve the distilled student model's individual fairness. This study highlights the uneven effects of distillation across classes and its potentially significant role in fairness, emphasizing that caution is warranted when deploying distilled models in sensitive application domains.
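As background for the abstract, the sketch below shows how a temperature-scaled distillation loss and the DPD/EOD group fairness metrics are commonly computed (in PyTorch/NumPy). This is an illustrative sketch only, not the speaker's implementation; the function names, default temperature, and loss weighting are assumptions.

import numpy as np
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Hinton-style KD: softened KL term (teacher -> student) plus hard-label CE."""
    # Soften both output distributions with the temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions; the T^2 factor keeps gradient
    # magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """DPD: largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def equalized_odds_difference(y_true: np.ndarray, y_pred: np.ndarray,
                              groups: np.ndarray) -> float:
    """EOD: largest across-group gap in TPR (y_true == 1) or FPR (y_true == 0)."""
    gaps = []
    for y in (0, 1):
        mask = y_true == y
        rates = [y_pred[mask & (groups == g)].mean() for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return float(max(gaps))

Lower DPD and EOD values indicate a fairer model; the talk's central observation is how these quantities change for the student as the temperature argument above is increased.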
Yani Ioannou is an Assistant Professor and Schulich Research Chair in the Department of Electrical and Software Engineering, Schulich School of Engineering, at the University of Calgary, where he leads the Calgary Machine Learning Lab.
Yani was previously a Postdoctoral Research Fellow at the Vector Institute, working with Prof. Graham Taylor, and a Visiting Researcher at Google Brain Toronto (now Google DeepMind). He completed his PhD at the University of Cambridge in 2018, where he was supervised by Professor Roberto Cipolla and Dr. Antonio Criminisi and supported by a Microsoft Research PhD Scholarship.