Spirit vs. Letter in Social Media Harassment Policies

Social media platforms have come under fire from critics recently for the way they’ve let radical groups take advantage of their platforms to attack and discredit others. People on Twitter are harassed, receiving death threats and worse, yet their harassers remain unbanned. Facebook has been inundated with fake news created by Russian propagandists, as well as racist advertising enabled by its own ad-targeting system. A recent article by Sara Wachter-Boettcher, titled “Facebook treats its ethical failures like software bugs, and that’s why they keep happening,” argues that Facebook’s approach lacks a true human dimension and fails to account for the subtle and nuanced ways people end up using social media. In other words, a whack-a-mole method of dealing with these problems ignores, unintentionally or otherwise, the underlying issue of people being attacked online.

I concur with this sentiment, but would like to add something. It’s not just that treating problems like racist ad targeting as bugs or glitches is the wrong way to go; it’s that trying to govern social media platforms with hard-and-fast rules creates a rigid system that inevitably produces loopholes that can be exploited.

I recently had a few discussions with friends and acquaintances, all programmers and software engineers. In one, I had a small debate with a friend who argued that laws should not be open to interpretation: ideally, what the law says, goes, and “wiggle room” just makes things messy. In another, the subject of self-driving cars came up. Many of the programmers (but not all, mind) shared the stance that giving humans more control than self-driving cars would open up the efficient, organized traffic of the future to the unpredictable and poor decision-making of the average driver. Additionally, any problems that occur due to the incompleteness of the self-driving AI could simply be solved after they arise.

I don’t mean to stereotype programmers as all having a certain way of thinking or a certain set of beliefs (you’ll find them on all sides of the political spectrum, for example), but I do find one desire common among them: a desire for the human-created mechanics of the world to make consistent, logical sense. And programmers are the main people driving social media platforms such as Facebook and Twitter behind the scenes. Faith in these rule-based systems, or perhaps a desire to believe in them, paired with the idea that the rules can simply be made more granular instead of taking a more humanistic direction, leads to holes that can be exploited.

No matter what parameters Twitter puts into its definition of harassment, people will always find ways to attack others without “technically” breaking the rules. This, I believe, is why so many people appear to be unjustly banned while other accounts that spew hate and encourage online attacks manage to stay active: one side is likely ignorant of rules X, Y, and Z, while the other deftly skirts them. Intent, something that requires closer analysis, is left by the wayside.
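
To make this concrete, here is a minimal sketch, in Python, of how a purely letter-of-the-rules filter behaves. The phrase list and example posts are invented for illustration; no platform publishes its actual rules.

    # A minimal sketch of letter-of-the-rules filtering.
    # The phrase list is hypothetical; no platform publishes its real rules.

    BANNED_PHRASES = {"kill yourself", "you deserve to die"}

    def violates_rules(post: str) -> bool:
        """Flag a post only if it literally contains a banned phrase."""
        text = post.lower()
        return any(phrase in text for phrase in BANNED_PHRASES)

    # A victim quoting their harasser trips the rule...
    print(violates_rules("He told me to kill yourself. Can someone report him?"))  # True

    # ...while a harasser who minds the letter of the rules sails through.
    print(violates_rules("Everyone knows what you should do to yourself. Do it."))  # False
    print(violates_rules("k1ll y0urself"))  # False: trivial obfuscation defeats the letter

The filter sees strings, not intent: the victim quoting an attack gets flagged, while coded or obfuscated phrasing passes untouched.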

Krang T. Nelson, a Twitter user named after a certain cartoon warlord from Dimension X, recently tested these limits. In a Vice article, Nelson describes how he decided to troll white supremacists by crafting the most intentionally absurd tweet possible, about “antifa supersoldiers” planning to behead white parents and small business owners. Not only was it a clearly tongue-in-cheek send-up of alt-right talking points, it was also loaded with buzzwords that white nationalists actively search for. Nelson goes on to explain that the white nationalist movement understands how to take advantage of Twitter’s policies, and that it used this knowledge to get him (temporarily) banned over a facetious remark. Here we see clear evidence that the groups known for Twitter harassment also know how to exploit its technicalities and parameters for their own ends.

Adhering to the letter rather than the spirit of policies and laws is what fuels the abuse of online social platforms. What needs to change is having actual people, at all levels, examining how Twitter, Facebook, and the rest are actually being used, instead of relying on hard-and-fast rules. Granted, “wiggle room” in rules means they can be exploited in a different way, but overly strict interpretations are clearly not working either.

2 thoughts on “Spirit vs. Letter in Social Media Harassment Policies”

  1. You know, it’s only too obvious that all this huffing and puffing about harassment has nothing to do with any harassment and everything to do with silencing voices deemed undesirable by the liberal orthodoxy. The main difficulty with these policies is crafting ones that quash dissenters without admitting it.


    • The less antagonistic take is that the policy that alienates the fewest people is the best policy.

      I think the problem with policing social media has always been based around enforcement. As networks grow to unreasonably large sizes, this problem will grow non-linearly. Using “human” judgment for this isn’t really a problem of policy, but a problem of implementation, of actually being able to do it. A network that publishes 10s of billions of posts a day would need 10s of millions of checkers, no? That’s ridiculous.

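      For what it’s worth, the arithmetic above roughly checks out. A back-of-envelope sketch in Python (every figure is an assumption for illustration, not platform data):

          # Back-of-envelope check of the scale argument above.
          # All figures are assumptions, not platform data.

          posts_per_day = 20_000_000_000   # "10s of billions of posts a day"
          posts_per_reviewer_day = 2_000   # ~1 post every 15 seconds over an 8-hour shift

          reviewers_needed = posts_per_day / posts_per_reviewer_day
          print(f"{reviewers_needed:,.0f} reviewers")  # 10,000,000: the order of magnitude claimed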
