
A new lawsuit alleges that Facebook, Twitter, and Google are partially responsible for the Orlando nightclub massacre

December 22, 2016

[Image: Man raises fist in solidarity during a vigil in memory of victims, one day after the mass shooting in Orlando, Florida, U.S.]

Tevin Eugene Crosby, Juan Guerrero, and Javier Jorge-Reyes were among the 49 people killed in the Orlando nightclub massacre on June 12, 2016. Now, their families are suing some of the world’s biggest tech firms for their alleged role in the crime.

A lawsuit filed in a Michigan court alleges that Facebook, Twitter, and Google (through YouTube) provided “material support” that helped terror outfits like ISIL in “spreading extremist propaganda, raising funds, and attracting new recruits.” Keith Altman, the lawyer representing the plaintiffs, also represents the father of Nohemi Gonzalez in a lawsuit against the same three companies over the Nov. 2015 Paris attacks, in which more than 130 people, including the 23-year-old Gonzalez, were killed.

Altman says his end goal is to create “behavior modification” among the internet titans and get them to act “reasonably and responsibly.”

Twitter has developed a reputation for allowing ISIL-associated accounts to proliferate, and on Facebook, ISIL operatives have been able to buy and sell sex slaves and purchase heavy-duty arms. ISIL leverages these sites to broadcast its activities and lure in new recruits. YouTube has also faced controversy for showing pre-roll ads on videos posted by ISIL-affiliated channels.

In Jan. 2016, a similar lawsuit against Twitter, brought by the families of two Americans killed in a Nov. 2015 ISIS attack in Jordan, was dismissed by a federal judge in San Francisco. In the decision, the judge cited Section 230 of the Communications Decency Act, a US federal law protecting platforms like Twitter that host third-party content but do not create their own.

The new lawsuit, however, argues that delivering tailor-made ad experiences to users makes the sites “information content providers,” meaning they would not be protected under Section 230. Social media platforms deny that they are publishers of editorial content, but the plaintiffs argue that the platforms’ proprietary algorithms function like publishers by coupling specific posts and ads together: “that is new content, that is unique content,” says Altman.

The lawsuit isn’t championing censorship of specific content. Instead of scouring for ISIL-related keywords and blocking posts, Altman suggests “detecting nefarious conduct” like someone recreating previously banned accounts or someone inorganically growing a huge network. “It is not normal for someone to have a name that you take down and they pop up again with another name and different sequential number,” Altman said. “It is not normal when someone who’s got an account for an hour sends out 500 requests.” The lawsuit details one such missed opportunity to clamp down on malicious activity:

According to the New York Times, the Twitter account of the pro-ISIS group Asawitiri Media has had 335 accounts. When its account @TurMedia333 was shut down, it started @TurMedia334. When that was shut down, it started @TurMedia335. This “naming convention — adding one digit to a new account after the last one is suspended — does not seem as if it would require artificial intelligence to spot.” Each of these accounts also used the same user photograph of a bearded man’s face over and over again. In the hours after the shooting attack in San Bernardino, California on December 2, 2015, @TurMedia335 tweeted: “California, we have already arrived with our soldiers. Decide how to be your end, with knife or bomb.”
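As the quote suggests, a pattern this mechanical would not require artificial intelligence to spot. Below is a minimal, hypothetical sketch in Python of the two signals Altman describes; the function names and thresholds are illustrative assumptions, not anything drawn from the platforms’ actual systems.

```python
import re

# Hypothetical sketch of two "nefarious conduct" signals: a suspended
# account reappearing under the same name with the trailing number
# incremented, and a brand-new account fanning out mass requests.
TRAILING_DIGITS = re.compile(r"^(?P<stem>.*?)(?P<num>\d+)$")

def is_sequential_rename(new_name, suspended_names):
    """Flag a new account whose name is a suspended name plus one,
    e.g. TurMedia334 suspended -> TurMedia335 registered."""
    m = TRAILING_DIGITS.match(new_name)
    if not m:
        return False
    predecessor = f"{m.group('stem')}{int(m.group('num')) - 1}"
    return predecessor in suspended_names

def is_request_burst(account_age_hours, requests_sent):
    """Flag an hour-old account that has already sent hundreds of
    requests, per Altman's "500 requests in an hour" example."""
    return account_age_hours <= 1.0 and requests_sent >= 500

suspended = {"TurMedia333", "TurMedia334"}
print(is_sequential_rename("TurMedia335", suspended))          # True
print(is_request_burst(account_age_hours=0.5, requests_sent=500))  # True
```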

Altman argues that the proliferation of this sort of content led to the radicalization of Orlando gunman Omar Mateen. “That is exactly the purpose of ISIS using social media sites—to meet people like Mateen who go off and do things like this,” Altman says. “Google, Facebook, and Twitter know this and there are things they could be doing.”

In relation to the Paris attack lawsuit, a Google spokesperson previously told Quartz that the site has “clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users.” Google did not respond to Quartz’s request for comment regarding the Orlando lawsuit. Twitter, which deleted more than 360,000 ISIL accounts over the past year, also declined to comment.

Facebook has strict policies against terrorism-related content. If a person or organization is a known threat, they are not only barred from posting propaganda but are not allowed on the platform at all. Facebook removes all posts related to terrorism and organized crime, including content that expresses support for such groups and their leaders. “We take swift action to remove this content when it’s reported to us,” Facebook said in a statement to Quartz. “We sympathize with the victims and their families.”

Regardless of the outcome of these lawsuits, tech’s bigwigs have already started tackling terrorism more aggressively. Earlier this month, Facebook, Google, Microsoft, and Twitter said they would create a shared database to track “violent terrorist imagery” and “terrorist recruitment videos or images,” helping each other more efficiently review content.
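As a rough illustration of how such a shared database might work: each participating company contributes fingerprints of imagery it has already removed, and the others check new uploads against the pool. The sketch below is a hypothetical simplification; real systems would use perceptual hashing that survives resizing and re-encoding, whereas a plain SHA-256 digest is used here only to keep the example self-contained.

```python
import hashlib

# Hypothetical sketch of a shared fingerprint pool for removed imagery.
# A SHA-256 digest stands in for the perceptual hashes a real system
# would need; exact-match hashing breaks on any re-encoding.
shared_hashes = set()

def fingerprint(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest()

def contribute(image_bytes):
    """Called by a platform after it removes a violating image."""
    shared_hashes.add(fingerprint(image_bytes))

def is_known_violation(image_bytes):
    """Called at upload time; a match would typically queue the
    upload for human review rather than auto-remove it."""
    return fingerprint(image_bytes) in shared_hashes

contribute(b"...removed propaganda image bytes...")
print(is_known_violation(b"...removed propaganda image bytes..."))  # True
```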




via Quartz http://qz.com
