Meta admits wrongly suspending Facebook Groups

Published 10 hours ago | 2 minute read
Meta says AI is "central to our content review process". It says AI can detect and remove content that breaches its community standards before anyone reports it, while content is sent to human reviewers on certain occasions.

Meta adds that accounts may be disabled after a single severe violation, such as posting child sexual exploitation content.

Two people holding phones, with various social media graphics around them (Getty Images)

"We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson added.

The social media giant also told the BBC it uses a combination of technology and people to find and remove accounts that break its rules, and shares data about what action it takes in its Community Standards Enforcement Report.

In its most recent edition, covering January to March this year, Meta said it took action on 4.6m instances of child sexual exploitation - the lowest figure since the early months of 2021. The next edition of the transparency report is due to be published in a few months.

Meta says its child sexual exploitation policy relates to children and "non-real depictions with a human likeness", such as art, content generated by AI or fictional characters.

Meta also told the BBC it uses technology to identify potentially suspicious behaviours, such as adult accounts being reported by teen accounts, or adults repeatedly searching for "harmful" terms.

This could result in those accounts being blocked from contacting young people in future, or being removed completely.

Origin: BBC News
