Wednesday 28th June, 2017
How Facebook exposed its own anti-terror moderators to potential terrorists who might now seek vengeance

CALIFORNIA, U.S. - In a shocking exposé, it has been revealed that Facebook introduced a bug last year that exposed the identities of its own anti-terror moderators to potential terrorists.

According to reports, the bug was introduced into Facebook's content moderation software last year and eventually exposed the identities of the anti-terror moderators who police content on the social network to the very people being policed, raising the possibility of retribution.

A Facebook spokesperson was quoted as saying in a report, “Last year, we learned that the names of certain people who work for Facebook to enforce our policies could have been viewed by a specific set of Group admins within their admin activity log. As soon as we learned about this issue, we fixed it and began a thorough investigation to learn as much as possible about what happened."

Reports pointed out that Facebook added an activity log to Groups in October last year.

This activity log was visible to the other admins of a Group, and every time someone in the Group was promoted to admin, Facebook's software created a notification – a "Story" in Facebook parlance – that was posted to the activity log.

Further elaborating, the report stated that when Facebook workers take action to ban a Group admin for a terms-of-service violation - such as posting a beheading video or other disallowed content - that event isn't supposed to be logged. 

A bug in the content moderation software reportedly recorded the removal of the admin's original promotion Story, along with information about the Facebook moderator taking that action.

Other admins in that Group who chose to look at the activity log could therefore learn the identity of the person watching over them.
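To illustrate the class of flaw being described, the following is a minimal, purely hypothetical Python sketch. Facebook's internal moderation code is not public, and every name and structure below is invented: it shows a ban-handling routine that is meant to leave no trace in the admin-visible activity log, but that records the removal together with the acting moderator's identity.

    # Hypothetical sketch only -- not Facebook's actual code.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Story:
        text: str
        actor: str  # whoever performed the action

    @dataclass
    class Group:
        # Visible to every admin of the Group.
        activity_log: List[Story] = field(default_factory=list)

    def promote_admin(group: Group, member: str) -> None:
        # Promotion creates a "Story" that all Group admins can see.
        group.activity_log.append(
            Story(text=f"{member} was promoted to admin", actor=member))

    def ban_admin(group: Group, banned_admin: str, moderator: str) -> None:
        """Remove an admin for a terms-of-service violation.

        Intended behaviour: the removal leaves no trace in the admin-visible log.
        Buggy behaviour: the removal of the original promotion Story is itself
        recorded, along with the moderator who performed it.
        """
        # Remove the "promoted to admin" Story for the banned account.
        group.activity_log = [
            s for s in group.activity_log
            if not (s.actor == banned_admin and "promoted" in s.text)]

        # BUG: logging the removal exposes the moderator's name
        # to the remaining admins of the Group.
        group.activity_log.append(
            Story(text=f"Removed promotion of {banned_admin}", actor=moderator))

In this sketch, any remaining admin who reads the Group's activity log after a ban would see the removal Story and, with it, the name of the moderator who acted, which matches the behaviour the report describes.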

The security flaw was introduced in mid-October and identified in early November.

It was fixed two weeks later and affected roughly 1,000 Facebook workers across almost two dozen departments who use the company's content moderation software.

About 40 of them worked in the counter-terrorism division at Facebook's Ireland office.

The programming bug was first reported by The Guardian, which said Facebook had singled out six of its workers for special attention because Group admins with possible ties to terror groups may have viewed their information.

Meanwhile, Facebook has argued that it has seen no evidence of a credible risk to its workers.

Facebook's spokesperson said, “Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter. Even so, we contacted each of them individually to offer support, answer their questions, and take meaningful steps to ensure their safety."

The company spokesperson added that Facebook's technical fix involves creating administrative accounts that are not associated with personal Facebook accounts, since exposing personal information represents a security risk.
