Why Tech Didn't Stop the New Zealand Attack From Going Viral

I do not know about you, but I do not want my news moderated. I want to make that choice myself. What are your thoughts on this subject?

At least 49 people were murdered Friday at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the social media era. The shooter apparently seeded warnings on Twitter and 8chan before livestreaming the rampage on Facebook for 17 gut-wrenching minutes. Almost immediately, people copied and reposted versions of the video across the internet, including on Reddit, Twitter, and YouTube. News organizations also aired some of the footage as they reported on the massacre.

By the time Silicon Valley executives woke up Friday morning, the tech giants' algorithms and international content-moderating armies were already scrambling to contain the damage, without much success. Many hours after the shooting began, versions of the video were still readily searchable on YouTube using basic keywords, like the shooter's name.

This isn't the first time we've seen this pattern play out. It's been nearly four years since two news reporters were shot and killed on camera in Virginia, with the killer's first-person video spreading on Facebook and Twitter, and almost three years since footage of a mass shooting in Dallas went viral.

The Christchurch massacre has people wondering why, after all this time, tech companies still haven’t figured out a way to stop these videos from spreading. The answer may be a disappointingly simple one: It’s a lot harder than it sounds.

For years now, both Facebook and Google have been developing and implementing automated tools that can detect and remove photos, videos, and text that violate their policies. Facebook uses PhotoDNA, a tool developed by Microsoft, to spot known child pornography images and videos. Google has developed its own open source version of that tool. These companies have also invested in technology to spot extremist posts, banding together under a group called the Global Internet Forum to Counter Terrorism to share their repositories of known terrorist content. These programs generate digital signatures, known as hashes, for images and videos that have been flagged as problematic, preventing them from being uploaded again. What's more, Facebook and others have machine learning technology trained to spot new troubling content, such as a beheading or a video with an ISIS flag. All of that is in addition to AI tools that detect more prosaic issues, like copyright infringement.
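The hash-matching idea is simple to sketch, even if production systems are far more sophisticated. Below is a minimal illustration in Python using the open-source imagehash library as a stand-in for proprietary tools like PhotoDNA; the blocklist entry and distance threshold are made-up values for illustration, not parameters from any real platform.

```python
# Minimal sketch of hash-based matching against a blocklist of known
# content. PhotoDNA itself is proprietary; this uses the open-source
# "imagehash" library (pip install imagehash pillow) instead.
import imagehash
from PIL import Image

# Perceptual hashes of images already flagged as violating policy.
# In production these would come from a shared industry database;
# this entry is a placeholder, not a real hash.
KNOWN_BAD_HASHES = [
    imagehash.hex_to_hash("d1c4b0a89f3e2715"),
]

# Maximum Hamming distance at which two hashes count as a "match".
# An assumed value; real systems tune this to balance false positives
# against recall.
MATCH_THRESHOLD = 8

def is_known_bad(image_path: str) -> bool:
    """Return True if the image perceptually matches a blocklisted hash."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD
               for known in KNOWN_BAD_HASHES)

if __name__ == "__main__":
    print(is_known_bad("upload.jpg"))
```

Part of why this is harder than it sounds: a re-encoded, cropped, or filtered copy produces a different hash, so near-duplicate variants can slip past the blocklist, and every platform has to tune its match threshold to avoid flagging innocent content.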

Read the full article on Wired.
