Title: Automated detection of infringing content in Meta's Oversight Board: How online content moderation is shaping new limits of freedom of expression
Author: de Bittencourt Siqueira, Andressa
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date issued: 2024-10-21
Year: 2024
Type: Text/Conference Paper
Language: en
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; 2944-7682
DOI: 10.18420/inf2024_131
URL: https://dl.gi.de/handle/20.500.12116/45103
Keywords: Freedom of expression; Digital platforms; Content blocking; Automated systems

Abstract: Automated content detection systems have become a powerful tool for quickly categorizing infringing content, as online platform providers have gradually taken a more active role in moderating speech. The decision-making structure of Meta's Oversight Board (hereinafter "the Board") was therefore chosen as the subject of this study, based on a selection of 29 case decisions and policy opinions (hereinafter collectively referred to as "cases") published by the Board up to March 31st, 2024, that involve automated content moderation systems. From the data collected, the following research problem arises: how can the Board's decision-making structure help illustrate the way platforms have been drawing new limits on freedom of expression by using automated content detection to identify infringing content? The research follows an inductive method, analyzing a sample of Oversight Board decisions in order to draw broader conclusions.