AI: the judge of speech
The insights shared in this article are a direct result of following the AI & Society minor. This program offers a unique, multidisciplinary platform to actively challenge your perspectives on the impact of AI in today’s digital society, together with fellow students.
The Hate Speech Context
People now communicate faster, and with more people, than ever before, all from the comfort of their own homes. It is far easier to launch into a diatribe anonymously, from your own bed, than from the middle of your town square, exposed to law enforcement or the anger of passers-by; this ease has made hate speech ubiquitous on digital platforms. To deal with this online tsunami of hate, social media companies use machine learning detection systems, particularly deep learning models, to remove such content from their platforms. These systems operate at a scale no manual human review can match, and can be adapted to detect newly coined terms (Jahan & Oussalah, 2021). They are undeniably important for combating hate speech and its negative consequences, but they also raise social, political, and legal questions.
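How such a system can "adjust to detect new terms" can be illustrated with a toy sketch; this is not any platform's actual system, just a tiny bag-of-words Naive Bayes classifier in plain Python. Given a handful of labelled examples, it picks up an entirely hypothetical new slur ("zorp") without anyone writing an explicit rule for it:

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {"hate": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text, label):
    """Log-probability of the text under the label's unigram model (add-one smoothing)."""
    vocab = set(counts["hate"]) | set(counts["ok"])
    total = sum(counts[label].values()) + len(vocab)
    return sum(math.log((counts[label][w] + 1) / total) for w in text.lower().split())

def classify(counts, text):
    return "hate" if score(counts, text, "hate") > score(counts, text, "ok") else "ok"

# 'zorp' stands in for a newly coined slur; all data here is invented.
data = [("you are a zorp", "hate"),
        ("zorp people ruin everything", "hate"),
        ("have a lovely day", "ok"),
        ("great match yesterday", "ok")]
model = train(data)
print(classify(model, "typical zorp behaviour"))  # → hate
```

Retraining on fresh labelled examples is all it takes for the model to generalise to the new term, which is why learned systems can keep pace with evolving slang in a way hand-written blocklists cannot.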
Defining Hate Speech
The first challenge facing hate speech detection is defining what we understand as ‘hate’. There is no universally accepted definition of hate speech, and in some cases speech can be offensive without being hateful. Some communities also reclaim or ironically use terms that would be considered hateful in other contexts; this nuance is especially difficult for automated content moderation to pick up on. As a result, some groups, often already marginalised ones, are disproportionately banned. Drag queens, for example, are the most-banned demographic on Twitter due to their colourful language and use of reclaimed slurs (Gomes, Antonialli, & Oliva, 2022).
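The context problem can be made concrete with a deliberately naive toy filter. No real platform works this simply, and `queen_slur` below is a placeholder token rather than an actual term; the point is only that keyword matching flags a reclaimed, self-referential use exactly as it flags a targeted insult:

```python
# Placeholder standing in for a real reclaimed slur.
SLUR_LIST = {"queen_slur"}

def naive_flag(message: str) -> bool:
    """Flag a message if any listed term appears, ignoring all context."""
    tokens = message.lower().split()
    return any(term in tokens for term in SLUR_LIST)

reclaimed = "proud queen_slur performing tonight"  # self-description
hateful = "get lost you queen_slur"                # targeted insult

print(naive_flag(reclaimed), naive_flag(hateful))  # → True True
```

Both messages are flagged identically, which is exactly the failure mode that leads to disproportionate bans of communities who reclaim slurs; distinguishing the two requires modelling speaker, target, and intent, not just vocabulary.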
Companies attempt to address this challenge of defining hate speech by setting company policies and internal guidelines on what constitutes ‘acceptable’ speech. These guidelines vary from company to company –– consider, for example, Twitter’s increasingly lenient content moderation after its purchase by Elon Musk –– but nearly every company maintains them.
The majority of these policies are applied across the globe by companies based in the United States, which can struggle to moderate content adequately in other languages, or to make moderation decisions that align with the sensibilities, norms, and laws of different cultures (Ahn et al., 2022; Díaz & Hecht-Felella, 2021).
Hate speech as a risk of societal deterioration?
Online hate speech has undoubtedly contributed to polarisation within today’s society. Political landscapes now take shape through heated online debates, where people openly voice their opinions on sensitive societal topics. The freedom to express thoughts is an undeniably valuable principle, but one cannot ignore the damage hate speech does to the cohesiveness of a society. The constant expression of hate, through a digital medium that reaches so many individuals, has the potential to fragment society, which is in itself a substantial risk to the existence of democracies.

However, constructing new boundaries around speech does not simply regulate society as a whole; it simultaneously shapes the way in which we communicate with each other. When we start to control and eradicate bias from human thought, we risk regulating expression itself, and with it the freedom to experiment and to challenge long-held views on science and society. The freedom to make shocking statements, such as claiming that the earth revolves around the sun or that humans evolved from apelike ancestors, has positively changed the course of humanity several times. To start limiting speech as we know it could mean setting our current understanding of an ideal society in stone, creating a philosophical ice age.
AI as a referee for civility
However, let us not forget that valuing human civility should not be overshadowed by an absolute pursuit of freedom of expression. After all, civility serves as one of society’s fundamental pillars. So why do these two values seem to collide head-on in the digital realm? Perhaps we can attribute this to a unique characteristic of the internet as a platform for sharing our thoughts and ideas: it seems to have provided people with a cloak of invisibility, making them feel immune to real-life consequences when they post distressing content. Public access to virtual battle arenas, such as those on Facebook or Twitter, has polarising effects. People give their opinions on news posts, and these comments are read by others and influence the readers’ own opinions. Almost half of adults in the U.S. say they at least sometimes get their news from social media (Atske, 2021). One cannot help but wonder, amidst the chaos of these digital opinion Olympics, whether online debating without a strong dose of civility and a referee is beneficial to society. Artificial intelligence could be that referee, upholding the standard of civility in our online society.
Hate Speech Detection and Art
An additional risk of hate speech detection technology touches a substantial part of human culture: art. Art, in its essence, encompasses a freedom of expression that is not limited to what fits within prevailing societal norms. It can be an outlet for thoughts, feelings, and ideas that do not necessarily conform. Literature, as a form of art, constitutes a large part of culture and has been a crucial part of human expression for millennia. The introduction of hate speech detection technology, however, raises fundamental questions about the freedom of artistic expression. As artificial intelligence becomes more advanced, there is a possibility that art touching on sensitive topics may be flagged as hate speech. Where should we draw the line between exaggerated depictions and actual hateful conduct or harassment against a group or person? And how should we interpret political caricatures, which have historically been known for their inflammatory drawing styles? These questions may interrupt the development of culture, as artists may self-censor or be discouraged from exploring certain themes for fear of being misunderstood and falsely flagged. Unfortunately, this issue will be difficult to solve until we can define the boundaries of hate speech.
More on the minor of AI & Society
With a class consisting of individuals from various backgrounds, ranging from law to engineering, a fertile ground for discussion is guaranteed in the AI & Society minor. Through lively debates and exchanges, valuable insights are gained and shared, as the success of this minor relies heavily on students’ creative viewpoints on the effects of AI on our society. By participating, students can deepen their understanding of AI’s impacts, its developments, and the underlying technology. This knowledge holds great significance for any student, considering the growing implementation of AI in our society. We wholeheartedly invite you to seize this opportunity and dive deep into the captivating world of artificial intelligence, to witness first-hand the incredible impact this fascinating technology holds for our society and beyond.