The annual James Baldwin Lecture series was launched on March 29, 2006, aiming to celebrate the work of Princeton faculty and to provide an occasion for the intellectual community to reflect on issues of race and American democracy. The lectures also honor the work of the late essayist James Baldwin, one of America’s most powerful cultural critics.
Discrimination is obvious to the people who face it. Given this, do we even need quantitative studies to test whether it exists? Regardless of the answer, quantitative studies such as ProPublica’s “Machine Bias” have had a galvanizing effect on racial justice, especially in the context of automated decision-making. In this talk, I will discuss what the quantitative approach can reveal and, more importantly, the situations in which it cannot tell us what we need to know. Both because of the inherent limits of quantification and because of the way knowledge is socially constructed in quantitative communities, such studies tend to drastically underestimate discrimination, oppression, and algorithmic harms. In the end, quantification is no substitute for centering the experiences of those harmed.
Arvind Narayanan
Arvind Narayanan is a professor of computer science at Princeton. His work was among the first to show how machine learning reflects cultural stereotypes, including racial and gender biases. He is co-authoring a textbook on fairness and machine learning. Narayanan also co-created an online course and textbook on Bitcoin and cryptocurrency technologies, which have been used in over 150 courses worldwide. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), a two-time recipient of the Privacy Enhancing Technologies Award, and a three-time recipient of the Privacy Papers for Policy Makers Award.