Algorithmic Arbitrariness in Content Moderation
We are pleased to host Caio Machado, who will be presenting the paper 'Algorithmic Arbitrariness in Content Moderation', which he co-authored.
Caio will discuss the challenges of predictive multiplicity in the use of machine learning (ML) for online content moderation. Predictive multiplicity occurs when different, equally good models yield conflicting predictions for the same content for arbitrary reasons. The study explores this phenomenon's impact on freedom of expression and fairness, employing the framework of the International Covenant on Civil and Political Rights. The authors analyze the variance among leading language models in detecting "toxic" content, their disparate impact across social groups, and their reliability on content that is universally regarded as toxic. The findings highlight the potential for an "algorithmic leviathan," prompting a need for greater transparency in ML applications to align with human rights standards.
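As a minimal illustration of predictive multiplicity (this is not the paper's code, and the toy data stands in for real comment features), the sketch below trains two logistic-regression classifiers that differ only in a random seed. Both reach near-identical accuracy, yet they disagree on some individual examples, so which label those items receive is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two overlapping Gaussian classes, a hypothetical stand-in
# for "toxic" vs. "non-toxic" comment embeddings.
n = 1000
X = np.vstack([rng.normal(-0.5, 1.0, (n, 5)), rng.normal(0.5, 1.0, (n, 5))])
y = np.r_[np.zeros(n), np.ones(n)]

def train_logreg(X, y, seed, epochs=200, lr=0.1):
    """Logistic regression via gradient descent on a bootstrap resample.

    The seed controls only the resample -- an 'arbitrary' training choice.
    """
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))
    Xb, yb = X[idx], y[idx]
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w + b)))  # sigmoid
        g = p - yb                               # gradient of log loss
        w -= lr * (Xb.T @ g) / len(Xb)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

w1, b1 = train_logreg(X, y, seed=1)
w2, b2 = train_logreg(X, y, seed=2)

acc1 = (predict(w1, b1, X) == y).mean()
acc2 = (predict(w2, b2, X) == y).mean()
disagree = (predict(w1, b1, X) != predict(w2, b2, X)).mean()
print(f"accuracy: {acc1:.3f} vs {acc2:.3f}; disagreement: {disagree:.1%}")
```

The point is that standard model-selection metrics (overall accuracy) cannot distinguish the two models, even though the content they flag is not the same.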
Please note that this event is hybrid. The Zoom link will be provided through our mailing list.
About the speaker
Caio Vieira Machado is a lawyer and social scientist focused on critical issues such as AI fairness, platform regulation, content moderation, and scientific disinformation. He is currently a PhD candidate at Oxford University studying COVID-19 disinformation. He is also a fellow at Harvard's School of Engineering and Applied Sciences and at the Berkman Klein Center for Internet & Society, where he conducts interdisciplinary research on machine learning and fairness. For four years, he led Instituto Vero, a research organization he founded that works with Brazil's leading social media creators to combat disinformation and influence tech policy.