Facebook explains why its AI didn’t catch the New Zealand shooter’s livestream

A shooter in New Zealand livestreamed his attack on two mosques.


NurPhoto/Getty

Facebook says its artificial intelligence systems weren’t prepared for an event like the livestreamed massacre at two mosques in New Zealand last week.

The world’s biggest social network has been criticized for the failure of its AI technology to detect video of the terrorist attack automatically.

In a blog post Thursday, Guy Rosen, Facebook’s vice president of product management, said that for AI to recognize something, it has to be trained on examples of what that thing is and isn’t. For example, you might need thousands of images of nudity or terrorist propaganda to teach the system to identify those things.

“We will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,” Rosen said in the post. He also noted that it’s a challenge for the system to recognize “visually similar” images that may be harmless, such as livestreamed video games.
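Rosen’s point maps onto the standard supervised-learning workflow: a classifier only learns a category from labeled examples, and a category with almost no examples is hard to learn. The toy sketch below illustrates that general principle with synthetic data and scikit-learn; it is not Facebook’s system, and every number and name in it is made up for illustration.

```python
# Toy illustration of why rare content is hard for a classifier to learn.
# This is NOT Facebook's system: just synthetic data and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_data(n_benign, n_harmful):
    """Fake 20-dimensional 'image features'; harmful items cluster slightly apart."""
    benign = rng.normal(0.0, 1.0, size=(n_benign, 20))
    harmful = rng.normal(0.7, 1.0, size=(n_harmful, 20))
    X = np.vstack([benign, harmful])
    y = np.array([0] * n_benign + [1] * n_harmful)
    return X, y

# A balanced test set, so recall on the harmful class is measured fairly.
X_test, y_test = make_data(5000, 5000)

# Same volume of benign data each time; only the harmful class shrinks.
for n_harmful in (5000, 50, 5):
    X, y = make_data(50000, n_harmful)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    rec = recall_score(y_test, clf.predict(X_test))
    print(f"harmful training examples: {n_harmful:5d} -> harmful-class recall: {rec:.2f}")
```

The trend in the printout is the point: as the harmful class shrinks from thousands of examples to a handful, the classifier increasingly defaults to the majority class and misses exactly the content it was meant to flag.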

“AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect,” Rosen said.

Facebook’s AI challenges also highlight how much the social network relies on user reports. The company didn’t receive a user report during the alleged shooter’s live broadcast. That matters, Rosen said, because Facebook prioritizes reports about live videos.

Now playing: Facebook deletes 1.5M videos after shooting, Democrats… (1:23)

Since the shooting in Christchurch, which left 50 dead, Facebook, Google and Twitter have also had to answer questions about how to control the spread of the gunman’s livestreamed footage. 

On Monday, Facebook said that fewer than 200 viewers saw the live broadcast and that the video reached about 4,000 views before it was taken down. The first user to report the video did so 12 minutes after the livestream ended. In the first 24 hours after the event, Facebook purged 1.5 million uploads of the video, 80 percent of which were blocked before going live on the social network.

Facebook has rules against expressing support or praise for terrorists.

Some social media experts have argued that Facebook should delay live videos the way TV stations sometimes do, but Rosen countered that it wouldn’t fix the problem because there are millions of live broadcasts on Facebook every day. 

“More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground,” he said. 

Thursday’s post also outlined steps Facebook still plans to take, including improving its matching technology, figuring out how to get user reports faster, and working further with the Global Internet Forum to Counter Terrorism.
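The “matching technology” Rosen refers to is broadly the family of techniques that fingerprint a known video so re-uploads can be recognized even after re-encoding. Facebook hasn’t published its implementation, so the sketch below shows a generic average-hash (aHash) comparison over a single grayscale frame; everything in it, including the match threshold, is a stand-in for illustration.

```python
# Generic sketch of hash-based matching of re-uploaded footage. Facebook's
# actual matching technology is unpublished; this is a textbook average-hash
# (aHash) over one grayscale frame, with an arbitrary match threshold.
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """64-bit perceptual hash: downsample to size x size, threshold at the mean."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # crop to a multiple of size
    blocks = frame.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Fingerprint a frame from a known banned video (here, a synthetic gradient).
banned_frame = np.linspace(0, 255, 480 * 640).reshape(480, 640)
banned_hash = average_hash(banned_frame)

# A re-upload with mild re-encoding noise still matches; an unrelated frame doesn't.
rng = np.random.default_rng(1)
reupload = banned_frame + rng.normal(0, 5, size=banned_frame.shape)
unrelated = rng.integers(0, 256, size=(480, 640)).astype(float)

for name, frame in (("re-upload", reupload), ("unrelated", unrelated)):
    dist = hamming(banned_hash, average_hash(frame))
    print(f"{name}: hamming distance {dist} -> {'match' if dist <= 10 else 'no match'}")
```

A brute-force comparison like this only conveys the idea; at Facebook’s scale the same check would have to run against an index of fingerprints, and evasive edits such as cropping or re-filming a screen defeat simple hashes, which is part of why “improving its matching technology” is listed as ongoing work.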

Originally published March 21, 7:09 a.m.
Update, 10:45 a.m.: Adds more background