Here’s how Facebook uses AI to remove abusive posts

A 'like' sign stands at the entrance of Facebook headquarters on May 18, 2012 in Menlo Park, Calif.


Facebook is using F8 to open up about how it uses AI to combat abuse on the social network.


James Martin/CNET

Facebook CEO Mark Zuckerberg sent a chorus of chuckles through the Twittersphere recently when he said an unexpected word during a company earnings call: nipple.

“It’s much easier to build an AI system that can detect a nipple than it is to determine what is linguistically hate speech,” he said, when asked about inappropriate content on the world’s largest social network.

His comment inspired a string of jokes, but Zuckerberg was making a serious point. Abuse on Facebook takes many shapes and forms, from nudity to racial slurs to scams and drug listings, and removing all of it is not a one-size-fits-all proposition. Whenever Zuckerberg discusses cleaning Facebook of inappropriate content, he consistently mentions two things:

1) Facebook will employ 20,000 content moderators by the end of the year to find and review objectionable material.

2) The company is investing in artificial intelligence tools to proactively detect abusive posts and take them down.

On Wednesday, during its F8 developers conference in San Jose, California, Facebook revealed for the first time exactly how it uses its AI tools for content moderation. The bottom line is that automated AI tools help mainly in seven areas: nudity, graphic violence, terrorist content, hate speech, spam, fake accounts and suicide prevention.

(Video: Facebook building AI tools to protect election integrity, 2:06)

For things like nudity and graphic violence, problematic posts are detected by technology called “computer vision,” software that’s trained to flag the content because of certain elements in the image. Sometimes that graphic content is taken down, and sometimes it’s put behind a warning screen.
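To make that remove-versus-warning-screen decision concrete, here is a minimal sketch of how threshold-based routing on classifier scores could work. Everything in it is assumed for illustration: the `ImageScores` fields, the threshold values and the action names are hypothetical, since Facebook hasn't published how its computer-vision models score or route images.

```python
# Minimal sketch of threshold-based routing for flagged images.
# The labels, thresholds and actions are hypothetical, not Facebook's.
from dataclasses import dataclass


@dataclass
class ImageScores:
    nudity: float            # model confidence, 0.0 to 1.0
    graphic_violence: float  # model confidence, 0.0 to 1.0


REMOVE_THRESHOLD = 0.95  # high confidence: take the post down
WARN_THRESHOLD = 0.70    # medium confidence: put it behind a warning screen


def route_image(scores: ImageScores) -> str:
    """Decide what to do with an image based on its worst score."""
    worst = max(scores.nudity, scores.graphic_violence)
    if worst >= REMOVE_THRESHOLD:
        return "remove"
    if worst >= WARN_THRESHOLD:
        return "warning_screen"
    return "allow"


print(route_image(ImageScores(nudity=0.98, graphic_violence=0.10)))  # remove
print(route_image(ImageScores(nudity=0.75, graphic_violence=0.20)))  # warning_screen
```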

Something like hate speech is harder to police solely with AI because there are often different intents behind that speech. It can be sarcastic or self-referential, or it may try to raise awareness about hate speech. It’s also harder to detect hate speech in languages that are less widely spoken, because the software has fewer examples to lean on.
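As a rough illustration of why hate speech still requires human judgment, here's a hypothetical sketch of a routing rule that escalates ambiguous scores, and anything in a language with scarce training data, to human moderators. The language codes, thresholds and function names are invented for this example and don't reflect Facebook's actual pipeline.

```python
# Hypothetical sketch: ambiguous intent or a low-resource language
# sends a post to a human moderator instead of an automatic takedown.
# None of these names or numbers come from Facebook.
LOW_RESOURCE_LANGUAGES = {"cy", "mt", "is"}  # illustrative examples


def route_text(hate_score: float, language: str) -> str:
    if language in LOW_RESOURCE_LANGUAGES:
        # Few training examples: trust the score less, always escalate.
        return "human_review"
    if hate_score >= 0.97:
        return "remove"
    if hate_score >= 0.60:
        # Intent is ambiguous; a person judges sarcasm and context.
        return "human_review"
    return "allow"


print(route_text(0.99, "en"))  # remove
print(route_text(0.72, "en"))  # human_review: ambiguous intent
print(route_text(0.99, "mt"))  # human_review: low-resource language
```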

“We have a lot of work ahead of us,” Guy Rosen, vice president of product management, said in an interview last week. “The goal will be to get to this content before anyone can see it.”

Falling through the cracks

Facebook is opening up about its AI tools after Zuckerberg and his team were slammed for a scandal last month involving Cambridge Analytica. The digital consultancy accessed personal data on up to 87 million Facebook users and used it without their permission. The controversy has prompted questions about Facebook’s policies, including what responsibilities it has in policing the content on its platform and to the more than 2.2 billion users who log into Facebook each month.

As part of its newfound aim to be transparent about how it works, Facebook also last week for the first time released the internal guidelines its content moderators use to assess and handle objectionable material. Up until now, users could see only surface-level descriptions of what kinds of content they weren't allowed to post.

But even with thousands of moderators and AI tools, objectionable content still falls through the cracks. For example, Facebook’s AI is used to detect fake accounts, but bots and scammers still exist on the platform. The New York Times reported last week that fake accounts pretending to be Zuckerberg and Facebook COO Sheryl Sandberg are being used to try to scam people out of their cash.

And when Zuckerberg testified before Congress last month, lawmakers repeatedly asked about decision making for policing content. Rep. David McKinley, a Republican from West Virginia, mentioned illegal listings for opioids posted on Facebook, and asked why they hadn’t been taken down. Other Republican lawmakers asked why the social network removed posts by Diamond and Silk, two African-American supporters of President Donald Trump with 1.6 million Facebook followers. In 10 hours of testimony over two days, Zuckerberg, 33, tried to convince legislators that Facebook had a handle — and a process in place — for handling these kinds of issues.

“The combination of building AI and hiring what is going to be tens of thousands of people to work on these problems, I think we’ll see us make very meaningful progress going forward,” Zuckerberg said last week after reporting earnings that topped Wall Street expectations. “These are not unsolvable problems.”

Facebook’s F8 Developer Conference: Follow CNET’s coverage.

Cambridge Analytica: Everything you need to know about Facebook’s data mining scandal.