As we hurtle through the innovative and endlessly updated second decade of the 21st century, the prospects seem brighter and better than ever that our new web and social media tools will help us better communicate and more effectively confront serious challenges like terrorism.
But then, there are the reminders that the Algorithmic Age is still in its infancy and that all the programming in the virtual world sometimes falls short of good old people brainpower. And therein lie the early warning signs that tech companies need to take free expression rights into consideration amid the inevitable, and perhaps even desirable, tilt toward AI over human “editors” controlling the flow of information.
Why not just use people instead of machines to oversee our posts, tweets, website content and such? ISIS shows why people alone are not enough. The terror group is in a running battle with social media sites as it promotes itself to the current and next generation of young people. Hundreds of thousands, perhaps millions, of bits of propaganda have been tossed into the internet information flow of billions of images, messages, rants and raves. Recruiting videos, images of beheadings, even a slick feature film threatening Twitter CEO Jack Dorsey and Facebook founder Mark Zuckerberg, are among the social media posts by ISIS and its offshoots.
The response to the persistent and global electronic tactics by these inhumane criminals requires constant sifting through the billions of messages, posts, sites and images that make up the World Wide Web — and that requires algorithmic surrogates to constantly prowl the internet.
Earlier this year, Twitter announced it had eliminated more than 125,000 accounts linked to ISIS. Facebook has deleted posts and blocked accounts. Google and its subsidiary YouTube have moved aggressively to block content submitted by the extremists. Hence the video threat, days later, from ISIS aimed at Dorsey and Zuckerberg.
But with the good comes the bad — or at least actions that are not in keeping with the web’s promise of free expression for all. Machines and methods are only as good as the people who create and instruct them, and technology alone does not guarantee freedom.
For example, you may have seen the brief international flap over an automated decision by Facebook to ban a Pulitzer Prize-winning photo of a young girl, naked and facing the camera, running down a road. The image — posted by several Norwegians — was removed because it violated the social media behemoth’s rules on nudity and child pornography.
If you viewed the photo through the lens of a mechanical eye, case closed. Full-frontal nudity, perhaps even child porn. Check. Delete.
Except that the image was photographer Nick Ut’s Pulitzer Prize-winning photo of nine-year-old Phan Thi Kim Phuc, screaming as she ran in 1972 from a napalm attack by South Vietnamese warplanes in Vietnam.
As Facebook COO Sheryl Sandberg admitted in a Sept. 10 letter to Norway’s prime minister about Facebook restoring the photo on its pages: “We don’t always get it right.”
Sandberg explained that the photo was restored because of its “global and historical importance,” even though on the surface, the photo conflicted with “global community standards.” Sandberg added that “screening millions of posts on a case-by-case basis every week is challenging. Nonetheless, we intend to do better.”
Well, that’s good — but not a guarantee.
Facebook and the U.S.-based social media community are not bound by the First Amendment. As private companies, they have the right to make their own decisions on overall standards. The amendment’s reach in any case applies only in the U.S., a fraction of the global communities now engaged in instant interaction. And the insistence by Google, Facebook, Twitter and others that they are merely “technology” companies would seem to suggest that content considerations are not their domain.
Still, it’s incumbent on the titans of social media to “do better” at considering and defending free expression. Their tremendous impact on our lives elevates them to “quasi-government” status, where core freedoms must be protected. A report by the Pew Research Center and the Knight Foundation found that Facebook and Twitter are now seen as prime news providers by 63 percent of their audiences.
Even as real governments turn to social media companies to help combat terrorism, there are concerns that the blocking tactics will have negative impacts: eliminating the shock and horror the civilized world may need to see to fully appreciate the depravity of its enemies; limiting full understanding and discussion of material such as recruiting videos once it is placed beyond the reach of discussants; and perhaps even hampering the work of anti-terrorist forces by pushing would-be ISIS supporters off the screen and into untraceable means and methods.
Human editors have always had to strike a balance between reporting the news we need and being manipulated by groups, particularly media-savvy ones, for their own ends. But that balance historically tilted toward “news”: more information, rather than less.
As social media operations increasingly deploy cyber editors to make those same decisions, users in their “communities” ought to insist that somewhere in those zillion bits of code and autonomous commands is at least the electronic spirit of the 45 words of the First Amendment.
About the author: Gene Policinski is chief operating officer of the Newseum Institute and senior vice president of the Institute’s First Amendment Center.
This article was published by the Newseum Institute.