Ben Goertz

Screaming into the Void of the Internet

What Orwell Got Wrong

Kirby Ferguson’s post “Censored For 48 Hours,” about the brief removal of one of his videos for “cyberbullying,” sparked some thoughts about ways to improve the content review process. I want to expand on several of his suggestions about transparency and censorship, but you should read his post first if you haven’t yet.

Kirby’s key point, that censoring material can produce the inverse of its intention via the Streisand effect (where being banned generates more attention), points toward the path forward. Attention is the scarce resource. Perception matters, and as Kirby points out, being banned can radicalize people whose views sit in the gray zone. Platforms are failing at an impossible mission by trying to play the outright guardian of truth for all users. These companies should flip the script by giving all users the power to opt in or out of filtering efforts. If you want to read the sitting President’s tweets, no matter how false the platform deems them, then opt out of the warning labels on his tweets. Make the process available but full of friction, so that the default choice means 90%+ of users experience content with additional context to fight disinformation. This step alone defuses the rhetorical attack of people screaming “censorship” by sidestepping the volley entirely.
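To make this concrete, here’s a minimal sketch of what such a preference could look like. Everything here is my own invention for illustration: the class, the field names, and the three-step friction count are hypothetical, not anything these platforms actually expose.

```python
from dataclasses import dataclass

# Hypothetical sketch: context labels are ON by default, and opting out
# takes several deliberate steps (the friction) to complete.
FRICTION_STEPS_REQUIRED = 3  # e.g., warning screen, confirmation, cooldown

@dataclass
class ModerationPrefs:
    show_context_labels: bool = True  # the 90%+ default experience
    opt_out_steps_done: int = 0

    def advance_opt_out(self) -> bool:
        """Complete one step of the opt-out flow; the default only
        flips after every step is finished."""
        self.opt_out_steps_done += 1
        if self.opt_out_steps_done >= FRICTION_STEPS_REQUIRED:
            self.show_context_labels = False
        return not self.show_context_labels

prefs = ModerationPrefs()
for _ in range(FRICTION_STEPS_REQUIRED):
    opted_out = prefs.advance_opt_out()
print(opted_out)  # True: labels disabled only after completing all steps
```

The design point is that the default does the work: most users never bother with the flow, so the filtered experience stays the norm without anyone being forbidden anything.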

Orwell famously raised the dystopian nightmare of censorship in 1984 - Big Brother, thought crimes, and the rest. The nuanced counterpoint (covered by many smarter folks) is that when YouTube, Facebook, or Twitter removes “free speech” from its platform, it’s not the government censoring speech but a private company no longer hosting it (for free). That complex truth is worth stating, but such nuance misses the simplistic attack that rings true enough to most Americans: having something you’ve written deemed “too much” for these sites feels Orwellian. Some now wear it as a badge of honor. I have a greater fear of the vapid mental intake of most social media than of platforms where political debates become too crass. Huxley’s vision of a world where we entertain ourselves into oblivion rings truer to me than Orwell’s direct assault on truth.

What is more dangerous for an idea: that it dies in obscurity (as most of our thoughts and works do) or that it’s deemed too dangerous (and gains the allure of the metaphorical apple from the tree of good and evil)? If you’ve ever tried to get attention online (for good or bad), you know that being ignored is far harsher than someone taking the time to disagree or even attack you. In many cases on the internet, and this is one of Trump’s greatest strengths, no news truly is bad news.

If a video strays too far from YouTube’s guidelines, the gradient of actions should only reach outright removal after the following steps (a sketch follows the list):

  1. Demonetize it (remove the economic incentive to dance along the edge of the gray zone)
  2. De-prioritize it in the recommendation system (how the majority of videos gain viral reach)
  3. Disable comments (reduce engagement and the chance for even worse content to be highlighted by bad actors)
  4. De-prioritize it in search (Google already does this for web results)
  5. De-list so it’s only available via direct link
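As a sketch only, here’s one way that ladder could be encoded. The enum names and the one-rung-at-a-time escalation are my own illustration of the idea, not YouTube’s actual enforcement logic.

```python
from enum import IntEnum

class Action(IntEnum):
    """Escalating enforcement rungs; outright removal is the last resort."""
    NONE = 0
    DEMONETIZE = 1           # 1. remove the economic incentive
    DEPRIORITIZE_RECS = 2    # 2. pull from the recommendation system
    DISABLE_COMMENTS = 3     # 3. cut off engagement from bad actors
    DEPRIORITIZE_SEARCH = 4  # 4. push down in search results
    DELIST = 5               # 5. reachable only via direct link
    REMOVE = 6               # final step, after everything above

def escalate(current: Action) -> Action:
    """Move one rung up the ladder rather than jumping straight to removal."""
    return Action(min(current + 1, Action.REMOVE))

# A repeat-offender video walks the full ladder before removal:
action = Action.NONE
while action != Action.REMOVE:
    action = escalate(action)
    print(action.name)
```

The point of the one-rung rule is that each step reduces spread while the content technically remains up; nothing reaches removal without passing through every cheaper option first.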

Maybe some of these actions already happen? Kirby’s request for more transparency holds here. A community review board, and other ways to learn why your views might be suppressed, are valid ideas. The key here is that suppression on the platform is not the same as censorship. Is this list of suggestions the perfect answer? No. And no system will be. I think the danger of the current approach from Twitter and Facebook in particular is that Congress may introduce oversight that could be far more complex and less effective (potentially; I know smart folks are debating options). The other clear outcome is that users are leaving for far more dangerous platforms where oversight might not even reach - Parler, 8chan, and dark corners of Reddit, for example (how on earth has the management of Reddit avoided being hauled before Congress?). We are better off sharing common spaces with ideas we disagree with than fracturing into corners where we no longer see any common ground.

Companies like YouTube want users to spend time on the site to sell advertising, and they don’t want the platform to become so toxic that users or advertisers leave. I also want to be on platforms with protections, so that I don’t receive death threats or view reprehensible things. Remember that many abhorrent types of content are already governed by strict laws in the U.S. My argument here is for speech that falls into the debatable gray zone - not beheadings.

There is a key distinction between stopping the spread of misinformation and trying to stop the existence of misinformation. Information, however bad or dangerous it might be, will find a way to replicate and persist on the internet as long as data is essentially free to copy and store. This post is hosted for free on GitHub and could be de-platformed, but I could take my markdown files to basically any other host and continue. At worst, my git repo (nerd speak for backups) that stores all the files is already copied to several of my machines. De-platforming me would do almost nothing to stop these ideas from existing, other than making it annoying for me to share them (which can be effective against low-propensity bullshitters!) and maybe helping the moderators at GitHub / Microsoft feel a little better. It’s the spread of “bad” ideas that platforms can help with, not cleaning up all bad ideas.

Social media is terrible, but some form of it will be with us for a long time. Leave terrible ideas to exist and scream into the dark, empty void of internet space. Don’t fuel bad ideas by giving them the allure of being dangerous. Use censorship only as the last resort. Reduce the attacks against platforms by giving users a burdensome process to opt out.