We're Censoring Our Own Reality

22 December 2017

The Internet is a place where freedom of speech reigns; except where it doesn't. While everyone is free to share their thoughts, in whatever form they take, most sites impose limits on what can be shared. This isn't necessarily a bad thing; unlawful or hateful content shouldn't share a platform with lawful free speech. Sites usually explain what's acceptable to share in their Terms of Service or an equivalent document, typically written in more pages of legalese than any normal person would ever care to read. These limits don't stop everything, though, so sites rely on users to report unacceptable content, which triggers human moderation or prompts the site to take things down automatically.

Content moderation solves some problems and invites just as many others. Algorithms that try to moderate content automatically are not perfect. YouTube, for example, has been battling videos engineered to slip through its kid-friendly filters despite having disturbing storylines or content that isn't suitable for kids. Even when they work correctly, algorithms aren't necessarily unbiased; the software itself has no opinions and does what it's told, but the people who develop it might. The bias isn't always intentional, either. Some things are hard to measure directly and get measured through a proxy instead, such as using a family history of crime to decide how likely an individual is to commit a crime in the future.
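To make the proxy problem concrete, here is a minimal, made-up sketch in Python. The fields, weights, and the risk_score function are all assumptions for illustration; they aren't drawn from any real scoring system.

```python
# Hypothetical sketch (not any company's real system): a "risk" score that
# leans on an indirect proxy -- family history -- instead of anything the
# individual actually did. Fields and weights are invented for illustration.

def risk_score(person: dict) -> float:
    """Return a 0-1 risk score built partly from a proxy feature."""
    score = 0.0
    # The proxy: penalizes people for their relatives' records, which tracks
    # neighborhood and policing patterns rather than the person's own behavior.
    score += 0.6 if person["family_has_criminal_record"] else 0.0
    score += 0.4 if person["prior_offenses"] > 0 else 0.0
    return min(score, 1.0)

# Two people with identical personal histories get different scores
# purely because of who their relatives are.
a = {"family_has_criminal_record": True,  "prior_offenses": 0}
b = {"family_has_criminal_record": False, "prior_offenses": 0}
print(risk_score(a), risk_score(b))  # 0.6 vs 0.0
```

The point of the sketch is only that a proxy feature smuggles in bias even though the code itself "does what it's told."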

We don't really know how these algorithms decide what content is acceptable and what isn't. They are, for the most part, trade secrets of the companies that use them. Yet no matter how biased, manipulable, or downright wrong they can be, they're applied at massive scale. Facebook, YouTube, and other services rely on these algorithms to decide which posts stay up and what shows up in searches.

Algorithms are not the only moderation tool online, though. They miss things, as YouTube's ongoing battle shows. To compensate, sites also rely on teams of humans to review suspect posts, or on their communities of users to report them. Reddit, for example, lets users report posts to community moderators and to Reddit itself, as well as give content negative feedback. With enough negative feedback, posts can effectively disappear from the site. That pushes communities, and even entire sites, towards the beliefs of the majority of their users, contributing to the filter bubble effect. The algorithms that build individualized news feeds learn from this behavior as well.
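As a rough illustration of how negative feedback can make content vanish, here is a minimal sketch. The HIDE_THRESHOLD cutoff and the visible_posts function are invented for this example; this is not Reddit's actual ranking or hiding logic.

```python
# Hypothetical sketch of score-based hiding: once a post's net feedback drops
# below a threshold, it simply stops being shown, so unpopular-but-legitimate
# views quietly vanish from the default view.

HIDE_THRESHOLD = -5  # assumed cutoff, chosen for illustration only

def visible_posts(posts: list) -> list:
    """Keep only posts whose net score (upvotes - downvotes) clears the cutoff."""
    return [p for p in posts if p["upvotes"] - p["downvotes"] > HIDE_THRESHOLD]

feed = [
    {"title": "Popular take",  "upvotes": 120, "downvotes": 4},
    {"title": "Minority view", "upvotes": 3,   "downvotes": 15},
]
for post in visible_posts(feed):
    print(post["title"])  # only "Popular take" survives
```

Nothing in a mechanism like this distinguishes content that is genuinely unacceptable from content the majority simply dislikes, which is how the bias creeps in.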

Content moderation, both algorithm- and user-driven, can push anything offline, from individual posts to entire communities. Brigades of users have managed to get Facebook groups and pages taken down simply because they disagreed with them. The same user-driven moderation is also taking down content uploaded by people trying to expose atrocities in places such as Aleppo. While that content can be gory and inappropriate for some viewers, dropping it from a site entirely may make evidence of war crimes disappear. YouTube recently rolled out changes that took down over 900 channels documenting the civil war in Syria, and since September Facebook has been removing images documenting atrocities committed by the Myanmar government.

With social networks removing content that users or algorithms find distasteful, we're censoring the very networks that promise openness and global connection. While we worry, correctly, about ISPs and governments hiding content, we're also doing it to ourselves. Worse, there's little to no oversight to stop us, or the social networks we contribute to, from taking down things that are important.

Care about what the web is doing to our minds? Check out my book, The Thought Trap, at book.thenaterhood.com.
