Computer vision models trained on unprecedented amounts of data have
revolutionized many applications. However, more and more historical
societal biases are making their way into these seemingly innocuous
systems. We focus on two types of bias: (1) bias in the
form of inappropriate correlations between protected attributes (age,
gender expression, skin color, …) and the predictions of visual
recognition models, as well as (2) bias in the form of unintended
discrepancies in error rates of vision systems across different social,
demographic or cultural groups. In this talk, we’ll dive deeper into
both the technical reasons for bias in computer vision and viable
strategies for mitigating it. We’ll highlight a subset of our recent
work addressing bias in visual datasets (FAT* 2020; ECCV 2020), in
visual models (CVPR 2020; CVPR 2021; ICCV 2021), in evaluation metrics
(ICML 2021), as well as in the makeup of AI leadership.
Bio:
Olga Russakovsky is an assistant professor in the computer science
department at Princeton University. Her research is in computer vision,
closely integrated with the fields of machine learning, human-computer
interaction and fairness, accountability and transparency. She has been
awarded the AnitaB.org Emerging Leader Abie Award in honor of Denice
Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the
MIT Technology Review’s 35-under-35 Innovator award in 2017, the PAMI
Everingham Prize in 2016 and Foreign Policy Magazine’s 100 Leading
Global Thinkers award in 2015. In addition to her research, she
co-founded and continues to serve on the Board of Directors of the
AI4ALL foundation, dedicated to increasing diversity and inclusion in
Artificial Intelligence. She completed her Ph.D. at Stanford University
in 2015 and her postdoctoral fellowship at Carnegie Mellon University in
2017.