People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise.
Lee, Cinoo; Gligoric, Kristina; Kalluri, Pratyusha Ria; Harrington, Maggie; Durmus, Esin; Sanchez, Kiara L; San, Nay; Tse, Danny; Zhao, Xuan; Hamedani, MarYam G; Markus, Hazel Rose; Jurafsky, Dan; Eberhardt, Jennifer L.
Affiliation
  • Lee C; Department of Psychology, Stanford University, Stanford, CA 94305.
  • Gligoric K; Stanford SPARQ, Department of Psychology, Stanford University, Stanford, CA 94305.
  • Kalluri PR; Stanford SPARQ, Department of Psychology, Stanford University, Stanford, CA 94305.
  • Harrington M; Department of Computer Science, Stanford University, Stanford, CA 94305.
  • Durmus E; Department of Computer Science, Stanford University, Stanford, CA 94305.
  • Sanchez KL; Department of Psychology, Stanford University, Stanford, CA 94305.
  • San N; Department of Computer Science, Stanford University, Stanford, CA 94305.
  • Tse D; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755.
  • Zhao X; Stanford SPARQ, Department of Psychology, Stanford University, Stanford, CA 94305.
  • Hamedani MG; Department of Linguistics, Stanford University, Stanford, CA 94305.
  • Markus HR; Department of Computer Science, Stanford University, Stanford, CA 94305.
  • Jurafsky D; Stanford SPARQ, Department of Psychology, Stanford University, Stanford, CA 94305.
  • Eberhardt JL; Stanford SPARQ, Department of Psychology, Stanford University, Stanford, CA 94305.
Proc Natl Acad Sci U S A ; 121(38): e2322764121, 2024 Sep 17.
Article in En | MEDLINE | ID: mdl-39250662
ABSTRACT
Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large language models. We show that human users disproportionately flag these disclosures for removal as well. Next, in a follow-up experiment, we demonstrate that merely witnessing such suppression negatively influences how Black Americans view the community and their place in it. Finally, to address these challenges to equity and inclusion in online spaces, we introduce a mitigation strategy, a guideline-reframing intervention, that is effective at reducing silencing behavior across the political spectrum.
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Social Media / Racism Limits: Humans Language: En Journal: Proc Natl Acad Sci U S A Year: 2024 Document type: Article Country of publication: United States