There are ethical and moral principles that underlie platforms’ attempts to moderate content and misinformation. There is a tension between freedom of speech and protecting people from harmful content.
Attempts at content moderation have been met with resistance from those who believe that strong forms of content moderation violate freedom of speech. However, as a right, freedom of speech cannot be exercised against social media platforms: because they are private companies, they do not fall under the First Amendment of the US Constitution, as explained by Carnegie Library. Still, given the ubiquitous and global nature of platforms like Discord and YouTube, it seems important to me that they commit to upholding the principles of freedom of speech.
Historically, platforms have fought fiercely to define themselves as platforms rather than publishers in order to reduce their responsibility for content moderation. This may be one reason why Facebook has recently moved away from news content to a considerable extent. In this post I will analyze the misinformation and content moderation policies of YouTube and Discord.
YouTube Content Moderation Policies for Misinformation:
YouTube outlines their strategy to counter misinformation and proposes that the best way to address misinformation and content moderation is to consider how they treat all content, instead of focusing only on taking down harmful content. Their goal is to “preserve free expression” and “promote a vibrant community”. They treat all content according to the 4 R’s of responsibility:
1. Remove content that violates their policy as fast as possible.
2. Raise up authoritative voices when people are looking for breaking news and information.
3. Reward trusted creators and artists.
4. Reduce the spread of borderline content.
It also helps to imagine the magnitude of the task. YouTube recognizes that misinformation now stretches into every facet of society, sometimes spreads with “blistering speed”, and can arise around any topic. YouTube removes nearly 10 million videos per quarter.
They use algorithms, classifiers, and keywords to detect new misinformation narratives. When faced with uncertainty, such as during breaking news, they leave space for the ambiguity because they believe that tech companies should not play the role of arbiter when there is no certainty. They also adapt their strategies to each of the 100+ countries they operate in. However, recent reporting from the New York Times claims that, due to cuts, only one person is left in charge of misinformation policy worldwide, which is alarming to say the least.
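To make the detection step a little more concrete, here is a minimal, purely illustrative sketch in Python of keyword-based flagging. This is not YouTube’s actual system; the narrative names and phrases are hypothetical, and a real pipeline would combine machine-learned classifiers, per-country policies, and human review rather than simple phrase matching.

```python
# Illustrative sketch only: a toy keyword-based flagger, not any platform's real system.
# The idea: match text against phrases tied to tracked "narratives" and route
# uncertain matches to human review instead of removing content automatically.

from dataclasses import dataclass

# Hypothetical keyword lists for narratives a trust & safety team might track.
TRACKED_NARRATIVES = {
    "election-fraud": ["rigged ballots", "mass voter fraud"],
    "miracle-cures": ["cures all diseases", "doctors are hiding"],
}

@dataclass
class Flag:
    narrative: str
    matched_phrase: str

def flag_text(text: str) -> list[Flag]:
    """Return any tracked narratives whose phrases appear in the text."""
    lowered = text.lower()
    return [
        Flag(narrative, phrase)
        for narrative, phrases in TRACKED_NARRATIVES.items()
        for phrase in phrases
        if phrase in lowered
    ]

def route(text: str) -> str:
    """Send matches to human review rather than acting automatically."""
    return "send to human review" if flag_text(text) else "no action"

if __name__ == "__main__":
    print(route("They say this supplement cures all diseases!"))  # send to human review
    print(route("Here is my honest review of a new phone."))      # no action
```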
Finally, they also recognize the complexity of misinformation, noting that “one person’s misinformation is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact checker’s scrutiny.”
One interesting example is the handling of misinformation about the 2020 presidential election claiming that the results were due to mass voter fraud. YouTube initially removed any content questioning the integrity of the election, but two years later it reversed the policy to encourage free debate of ideas, even controversial and unproven ones.
I personally appreciate this policy. I think YouTube’s commitment to neutrality and promoting freedom of expression is one reason the platform has garnered appeal around the globe. They were careful to revoke the policy two years after the event, assessing that since enough time had passed it was now safe to encourage unfettered discussion. In the moment it is important to curb harm, such as the violent storming of the Capitol, and emergencies must be contained. But at a certain point events become history, and the more critical analysis and different perspectives we have on history the better. Making sure policies are dynamic, reconsidered, and reformulated over time demonstrates to me a sophisticated approach to curbing misinformation.
Overall, YouTube’s policies seem carefully thought out. They focus on the flow of all their content instead of limiting themselves to policing information. They pay just as much attention to the kind of information that should be spread, and they increase it by rewarding trustworthy creators. I think this is a good approach to curbing misinformation, because when people have access to quality information they can counter misinformation through lateral reading. It also means less policing of content, which lessens the tension between freedom of speech and preventing harm.
To improve their policies, I would suggest that YouTube conduct an independent audit of its community standards enforcement metrics, much like Facebook did. This would create more transparency about how the company functions and potentially identify blind spots in their policies, metrics, and analysis. They could then reconsider and reformulate their policies in line with the findings.
Discord Moderation Policies for Misinformation:
The second platform I am considering is Discord.
Discord started off as a way for people to connect in real time while playing video games together, but it has quickly become a hub of communities quite similar to Reddit, with many anonymous usernames and a lack of identifying information, unlike Instagram or Facebook. This creates a significant challenge for curbing misinformation: when digital identities are muddled or anonymous, people are more comfortable making inflammatory content, ‘messing around’, trolling, or spreading hot gossip, all of which fuels the spread of misinformation.
Upon a first reading of Discord’s misinformation policy, it seemed both narrower and vaguer than YouTube’s. The overarching statement was that users
“may not share false or misleading information on Discord that is likely to cause physical or societal harm”.
However, the definition of harm is not explored or explained in a clear way. This has led some users to speculate that the policy would be used as a means of censorship rather than moderation.
The actual misinformation policy then goes on to specifically delineate the rules for sharing health information. It bans anti-vax content, medically unsupported cures for diseases, content that distorts information about diseases, and any content that could hinder the resolution of public emergencies.
Discord takes action against individuals, groups, and servers that violate the policy. However, vaguer words are used for what is acceptable, such as the sharing of “personal health experiences, opinions and commentary and good faith discussions”, as long as they are based in “fact” and will not “lead to harm”.
While YouTube recognized that determining the facts of something was a challenging ask, especially for recent events, Discord seems to take a very simplistic view that something is either fact or not. And again, we see an incredibly vague definition of harm.
Health is a notoriously tricky subject: all our bodies are different, our lifestyles are different, and there are so many conflicting theories on things like nutrition and exercise that determining the facts is a very difficult task. This simple policy does not adequately address the challenge of moderating this kind of information.
Furthermore, Discord’s general content moderation policy is focused on removing harmful content and does not stipulate any broader goals like YouTube’s policy.
I’m going to take the same example of misinformation in the wake of the 2020 election results, the subsequent violence, and the conspiracy theories. At the time, a group called QAnon was spreading outlandish conspiracy theories and promoting anti-vax and voter fraud/illegitimate election misinformation. National Public Radio reports that when Discord found increased activity from QAnon, it deleted 334 related communities.
I find this to be a rather limited approach focused on content removal, and I don’t see anything to stop these communities from forming again under different names or new accounts.
To improve their moderation efforts, I have many suggestions. First, I think Discord needs to define a positive purpose for their content moderation and focus on creating the behavior they want. For example, YouTube supports and amplifies trustworthy creators; how can Discord take inspiration from this? Second, they need to address broader categories of information beyond health. Third, they need to create a clear ethical framework from which they define the word harm. Fourth, they need to more carefully distinguish fact, fiction, personal opinion, commentary, and “good faith” discussion from one another, and clarify how these tie into the ethical framework of harm to create a clear policy for users.
In conclusion, this was an interesting exercise for me because I learnt that my degree has great potential to be useful in this field. I have been gaining experience in policy making and realized I hated public policy but found company policy making interesting. Content moderation is a huge field within this, where the problem is still new and unsolved, so this is an exciting area for further work and research.