Intellectual freedom in the age of AI

By Arne Lubbers - 8 May 2025


This week it was the 5th of May, ‘Bevrijdingsdag’, the day on which the Netherlands celebrates the end of the Nazi occupation. Alongside all the parties and festivals held on this day, it is also a day of reflection, a moment to consider and cherish the freedoms we enjoy today. That last part is particularly important, as threats to freedom still exist, albeit in different, often subtler, forms than those of the past. Especially when considering current events in the world, and even within Europe itself, we realize that freedom is not simply a given, but something we must actively preserve. Now, I think it is unlikely that the Netherlands will be invaded by a foreign power in the near future, but there are many other, less violent ways in which one’s freedom can be restricted.


While we usually see our country as one with a relatively high degree of personal and intellectual freedom, a new threat has arisen worldwide: AI. No, I do not mean that Terminator-esque scenarios will occur. I am talking about current AI systems like ChatGPT or Copilot, tools you might be using every single day. These systems are being widely adopted in our workplaces, schools, and governments. The issue, or rather the issues, I am talking about all fall under the term generative epistemic injustice.


Let me start by explaining the term epistemic injustice. The philosophical understanding of epistemic injustice stresses how identity-based prejudice in an information ecosystem not only unfairly hinders the representation of marginalized groups, but also significantly hinders everyone’s potential to create knowledge. In plain, human-readable words, this means that when a certain group of people is excluded from society, it is not only bad for those people. There is also a collective loss of insight, innovation, and understanding, all of which harms the scientific community and, in turn, humanity itself.


The term generative epistemic injustice encompasses the epistemic harm stemming from the widespread adoption of AI models. When algorithms make decisions, especially in healthcare, business, or government, they exert power through their contributions to knowledge. They affect how knowledge is distributed, valued, and created. In turn, these models can shape what is known, who is heard, and what knowledge gets lost along the way.


One way in which this happens is through the amplification of existing biased viewpoints. AI models are trained on vast datasets, often scraped from the web, and so inherit the prejudices and biases of the people who created those sources. What makes this particularly concerning is the false sense of objectivity that people often have about AI. Because it is seen as neutral or fact-based, people are more likely to trust what it produces, even when that output reflects harmful misinformation. For example, an audit by NewsGuard showed that large language models can easily be prompted to repeat Chinese state propaganda or conspiracy theories, without corrections or added context. I hardly think I have to elaborate further on why this is problematic.


Another form of generative epistemic injustice occurs when AI systems erase or misrepresent marginalized groups. Generative models often lack a deeper comprehension of these groups, as the training data needed to construct a detailed cultural or historical understanding is scarce. When the AI starts filling in the gaps in its knowledge, important cultural elements can be erased. A problematic example of this is Midjourney (an image-generating AI) producing images in which certain groups are smiling, even though they historically would not have done so, a Western influence. These kinds of relatively subtle modifications to the depiction and understanding of marginalized groups can cause significant harm to their cultural memory and authenticity.


So, the next time you “collaborate” with ChatGPT on your homework, take a moment to think critically about the problems with these systems. Yes, they are powerful tools that can help you reach greater academic heights, but they are hardly reliable or unbiased. As we reflect on our freedom on a day like Bevrijdingsdag, take a moment to consider what intellectual freedom truly means. It is more than just having access to information: it means being aware of whose knowledge gets to count, and whose voices are simply forgotten, whether through the gradual erosion of collective memory or the implicit biases encoded in our algorithmic systems.

References 

Jenka. (2023). AI and the American Smile. Medium. https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf


Kay, J., Kasirzadeh, A., & Mohamed, S. (2024). Epistemic injustice in generative AI. arXiv. https://doi.org/10.48550/arxiv.2408.11441


NewsGuard. (2025). AI’s Multilingual Failure: NewsGuard Audit Finds Highest Failure Rates in Russian and Chinese. https://www.newsguardtech.com/press/ai-multilingual-failure-russian-chinese/

ABOUT

Intermate is the study association of the bachelor Technical Innovation Sciences, the majors Sustainable Innovation and Psychology & Technology and the masters Human Technology Interaction and Innovation Sciences.
