Elon's headache: moderation and free speech
Epistemic attitudes play a critical role in how we approach moderation
When his takeover of Twitter is complete, Elon Musk will face a range of difficult decisions if he wants to make Twitter - as per his stated wishes - a digital town square that welcomes freedom of speech. He may self-identify as a ‘free speech absolutist’ but that does not mean he can, or should, simply remove all moderation from Twitter.
For a start, there are legal restrictions. Perhaps more importantly, unmoderated online public spaces almost invariably end up becoming toxic and deeply unpleasant, which will drive people away from Twitter and undermine its role as a town square. Success for Musk therefore requires that he find ways to change the algorithms and shift the approach to moderation so that free speech is balanced with the responsibilities of being a town square.
In the previous post, we explored the role that philosophical assumptions play in ideas about freedom of speech, and especially the importance of epistemic attitudes. Thinking about moderation and limits to speech typically follows JS Mill in trying to identify some balance between the harms that can arise from speech and the benefits of free speech. The discussion rarely considers epistemic attitudes and their implications for the limits we might apply to, or the ways we might moderate, speech.
The basic relationship between epistemic attitudes and free speech we identified is that the more confident we are in the correctness of our knowledge, the less tolerant we will be of free speech. Most simply, if we are not sure whether we have things right, we will be open to hearing from others who disagree with us, whereas if we are sure, we are likely to be less open. This has interesting consequences for the type and extent of limits we might accept on speech.
This is easiest to see if we tease out the relationship for each of the four foundational epistemic attitudes we have covered previously. As a reminder, these attitudes are:
Epistemic certainty: the conviction that humans have (or at least the author has) achieved genuine and reliable knowledge (about certain topics) and no-one can reasonably doubt it.
Epistemic confidence: humans can acquire knowledge, and wide-ranging knowledge at that, with complete or at least considerable certainty.
Epistemic humility: genuine knowledge is difficult to acquire for everyone, even with the best of care and attention.
Epistemic skepticism: the belief that knowledge is not achievable and we genuinely know nothing (or very little).
We will go through each in turn and consider what a person (or a group) in a position of authority who accepts each attitude is likely to think about limits to speech. It is important to remember the focus is on the core attitudes held, not on whether the attitudes are justified. The attitudes may also be individual - perhaps ‘I am certain that what I know is right’ - or group-based - such as ‘I am not sure myself, but I know the experts definitely have it right’.
Epistemic certainty
If a person, or a group, believes they have epistemic certainty, that is they have achieved genuine and reliable knowledge that cannot be doubted, then there are no strong epistemic reasons for them to allow opposing views to be expressed. They may allow other views to humour people, to enable a teaching opportunity, or for a range of other social or political reasons. However, these will all come with risks that people will be misled by those views which are, for those with epistemic certainty, definitely false.
One classic argument for freedom of speech is that false views can be helpful by clarifying what is really true. For those with epistemic certainty, this is highly unlikely as they are already certain they know what is true.
This means they will likely have little innate regard for the freedom of speech and will be comfortable with, or champion, many limits on speech and therefore various forms of censorship. We see this, for example, in the many governments that claim they have the truth with complete certainty, whether for ideological or religious reasons, and therefore have active and strict censorship regimes.
It is worth noting that some have a universal epistemic certainty while others have epistemic certainty only about certain topics. A belief that, for example, the “science is settled” on a topic expresses that kind of certainty and is also often used to justify removing opposing views.
Epistemic skepticism
To shift to the other extreme, if our attitude is that we can't achieve definite knowledge and can never really know what is true (perhaps as truth doesn't exist), then there are no clear epistemic reasons for restricting any views at all. Every view is equally likely to be true and there is no epistemic danger in allowing different people to express their truths. We may decide to promote or restrict views for aesthetic, moral or social reasons, but there is no justification based on the truth or falsity of views as everything is similarly likely to be true.
While this attitude clearly motivates an absolutist position on free speech, it is notable that it isn't invoked in traditional justifications for free speech. Mill's arguments, for example, are all premised on the view that knowledge and truth are possible and he argues that freedom of speech helps us achieve it.
Epistemic confidence
A key difference between epistemic confidence and epistemic certainty is that confidence only presumes that we can acquire knowledge about topics, but we may not yet have achieved it widely. So we aren't certain we have it right but are confident that we can and will get there. With this attitude, there may be clear epistemic benefits to allowing a range of views on different topics, if they help us get closer to the definitive truth, especially for those topics on which we are aware we have not yet achieved certainty. Mill's arguments will have some traction:
though the silenced opinion be an error, it may, and very commonly does, contain a portion of truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied.
However, the tolerance of opposing views will likely vary with the level of confidence that we already have the whole truth and whether we think they will lead to greater knowledge. If we are very confident in our knowledge - or the correct direction to pursue - on certain topics, then acceptable limits on speech are likely to be similar to those from epistemic certainty. For those areas where we aren’t so sure, we are likely to allow significantly more freedom of opinion.
This will tend to create a dynamic around limits on speech where the acceptable range of speech varies with the epistemic confidence held about different topics. To pick simple examples, we might see no harm in censoring arguments for a flat earth, but be open to all ideas about theories of quantum gravity.
Epistemic humility
As we saw in the previous post, classic arguments, especially from JS Mill, for freedom of speech have depended on epistemic humility. Yet it is worth remembering our motivating insight: the more confident we are in the correctness of our knowledge, the less tolerant we will be of free speech. This suggests that an attitude of epistemic humility should be more tolerant of free speech than the attitudes of epistemic confidence or certainty but less tolerant than the 'anything goes' approach of epistemic skepticism. Are there sufficient differences between epistemic skepticism and humility to support this?
Epistemic humility assumes that it is possible to find truth, or at least things that are more true than others, even if it remains difficult. This means that, unlike epistemic skepticism, there are epistemic practices that help us get closer to genuine knowledge. If there aren't better ways of trying to find knowledge, there is no way of judging what is more or less true and we end up back in skepticism. Examples of what the practices might be were sketched out in the post on practical humility.
An attitude of epistemic humility therefore leads to natural distinctions between views based on the epistemic practices that underpin them. If there are better and worse ways of trying to find and justify knowledge, it would make sense (at least in some contexts) to restrict or deprioritise views based on poor epistemic practices. In this sense, epistemic humility motivates certain types of limits on speech and confirms the insight we started this post with.
Different types of limits
It is striking that different epistemic attitudes have motivated very different approaches to imposing limits on speech. If we have epistemic confidence (or certainty), it makes sense to restrict false views on topics for which we are highly confident our knowledge is accurate. To phrase it differently, we should moderate content (on certain topics) and remove false information as it is likely harmful.
Under epistemic humility, the limits aren’t focused on the factual information or the content, but rather on the way the views are derived or justified. This lines up neatly with traditional classroom or tutorial approaches: you can make any claim that you want, as long as you have a good argument or justification for it. Justifiable moderation is therefore more focused on behaviour than content.
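To make the contrast concrete, here is a minimal sketch in Python of how the two approaches might be expressed as rules. All names, topics and phrase lists are illustrative assumptions for this post, not a description of Twitter’s actual moderation systems, which rely on far more sophisticated classifiers and human review.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    topic: str

# Content moderation (epistemic confidence/certainty): remove posts that
# contradict knowledge we presume is settled, but only on selected topics.
SETTLED_TOPICS = {"shape-of-earth"}                # topics where we presume certainty
KNOWN_FALSE_CLAIMS = ("the earth is flat",)        # claims a fact-checker has ruled false

def content_moderation(post: Post) -> bool:
    """Return True if the post should be removed under content moderation."""
    if post.topic not in SETTLED_TOPICS:
        return False
    return any(claim in post.text.lower() for claim in KNOWN_FALSE_CLAIMS)

# Decorum moderation (epistemic humility): remove posts that attack the person
# rather than the claim, regardless of the topic or the truth of the claim.
PERSONAL_ATTACK_MARKERS = ("you idiot", "you're a liar", "shut up")

def decorum_moderation(post: Post) -> bool:
    """Return True if the post should be removed under decorum moderation."""
    return any(marker in post.text.lower() for marker in PERSONAL_ATTACK_MARKERS)

# Example: the same post is treated differently under the two policies.
post = Post(author="a", text="The earth is flat.", topic="shape-of-earth")
print(content_moderation(post))   # True  - content judged false on a settled topic
print(decorum_moderation(post))   # False - no personal attack, however wrong the claim
```

The point of the sketch is what each function inspects: content moderation needs a topic-specific list of claims presumed false, while decorum moderation ignores the substance of the claim and looks only at how it is made.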
Limits in practice
To take Twitter - Elon’s headache - as a case study, what types of moderation might we see under these different attitudes?
Unsurprisingly given dominant cultural assumptions, moderation based on epistemic confidence (or certainty) would look much like Twitter’s current approach. The focus is largely on content moderation, that is, on removing misleading and false information, but only on certain topics where we presume significant certainty. The reliance on fact-checkers to determine whether content should be moderated is a clear example of this mindset.
If we start with epistemic humility, which we argued previously is the only attitude consistent with Elon Musk’s concerns about freedom of speech, then moderation practices would have different priorities. They would be focused not on content but on epistemic behaviours.
As someone with a strong background in formal logic, I’m tempted to argue that Twitter should moderate to remove all logical fallacies and other types of flawed reasoning. While it would guarantee high-paying jobs for Philosophy PhDs, it would be wildly unfair as it would highly privilege certain types of education, and it is especially impractical for a platform with a 280-character limit on posts.
While eliminating all fallacies is not feasible, it would make sense to focus on the most egregious fallacy: ad hominem. This is the fallacy where you attack the person making the claim rather than the claim itself. To put it in the terms used in our post on practical reasoning, it focuses on the person rather than the reasons.
Decorum moderation
If it were to adopt this approach, Twitter moderation would primarily focus on removing direct personal attacks and abuse - or what we could refer to as manners or decorum. This is more of a reprioritisation of Twitter’s moderation and is not a new idea. There has been a longstanding legal distinction around speech such as blasphemy between the manner in which it is said and the matter being discussed. More recent, and well worth reading, is Jim Rutt’s detailed and well-argued proposal for Twitter to focus on decorum moderation.
This approach carries cultural baggage, as it is likely to bring to mind images of fusty European debating societies where people are disqualified from speaking for wearing the wrong tie or speaking with the wrong accent. Manners are often very culturally and class specific and moderating based on them could easily end up simply advantaging those who are already privileged.
If derived from an attitude of epistemic humility, the motivation for decorum moderation is quite specific and deliberately inclusive. Moderation would be in place to help all participants share insights and get us (as a whole) closer to truth by enforcing minimal epistemic standards. The emphasis on decorum is therefore to try to ensure anyone with genuinely held views can participate in the debates and isn’t silenced or driven out of the discussion. Ad hominem fallacies, or personal attacks, are common reasons for people to quit social media platforms.
A clear focus on decorum to encourage good epistemic practices, rather than merely to enforce acceptable social behaviours, should help ensure the moderation is inclusive. It is there to encourage and enable the free exchange of ideas, as we might want to see in a democratic town square.
There are many more aspects to freedom of speech and moderation on platforms like Twitter than have been covered here. However, it is striking that epistemic attitudes by themselves give us distinctly different approaches to moderation. One, deriving from the epistemic confidence we culturally presume, leads to a focus on content moderation and removing incorrect information.
The other starts from a point of epistemic humility and motivates a form of decorum moderation designed to enable inclusive discussions. Whether this can be successfully put in place at Twitter is a different and difficult question. However, it is a direction Elon Musk could pursue if he genuinely wants to build an open town square. In short, after taking over Twitter, he may need to send the moderators off to both philosophy and decorum classes.
A well-argued position. Two questions. Is your argument that adopting epistemic humility automatically leads to the establishment of standards of decorum, or are these in fact separate design decisions? Would epistemic humility require support for a range of different moderation techniques on the basis that you do not know which is actually best? Put another way, how do you reconcile the adoption of epistemic humility with the effective creation of a single moderation standard?