Personalized AI Tools Can Combat Ableism Online

Published 2 weeks ago · 1 minute read

By Grace Stanley

People with disabilities experience high levels of harassment online, including microaggressions and slurs. However, social media platforms frequently fail to address reports of disability-based harassment and offer only limited tools that simply hide hateful content.

New Cornell research reveals that social media users with disabilities prefer more personalized content moderation powered by AI systems that not only hide harmful content but also summarize or categorize it by the specific type of hate expressed.

“Our work showed that indicating the type of content – whether it associates disability with inability, promotes eugenicist ideas, and so on – supported transparency and trust, and increased user agency,” said the paper’s co-author, Shiri Azenkot, associate professor at Cornell Tech. She is also an associate professor at the Jacobs Technion-Cornell Institute and at the Cornell Ann S. Bowers College of Computing and Information Science.

Read more at the Cornell Chronicle.

Origin: Cornell Tech
