Dear Universities of The Netherlands, Dutch Universities of Applied Sciences, and Respective Executive Boards,
In response to the open letter titled "Stop the Uncritical Adoption of AI Technologies in Academia," we take a principled stand with this letter and ask that you keep 'AI' technologies in universities. While we agree with the original letter's position that the use of AI technologies should be actively considered in relation to basic pedagogical values and the principles of scientific integrity, we disagree with its stance that such use prevents individuals from maintaining standards of independence and transparency.
One of the claims of the original letter is that AI use has been shown to hinder learning and to deskill critical thought. As researchers such as Dr. Cat Hicks have pointed out, this is a misunderstanding of cognitive load theory and of the learning-sciences perspective on working memory. This theory describes how we balance our working memory by offloading tasks to external processes. Far from being limited to AI, such offloading includes common learning practices like note-taking; it is advantageous when used well, though it should not be over-relied upon. Put simply, it allows us to reduce our cognitive load in some areas so that we can focus on doing more in others.
As academics, and especially as university-level educators, we have a responsibility to educate our students, not to rubber-stamp degrees that bear no relationship to university-level skills. Our duty as educators is the cultivation of critical thinking and intellectual honesty; it is not our role to police or promote cheating, nor to normalise our students' and mentees' avoidance of deep thought. Universities are about engaging deeply with subject matter. The goal of academic training is not to solve problems as efficiently and quickly as possible, but to develop the skills to identify and deal with novel problems that have never been solved before. We expect students to be given the space and time to form their own deeply considered opinions, informed by our expertise and nurtured by our educational spaces.
In this regard, AI can be a useful tool for many students and professors, especially those with physical or mental disabilities. Of course, we also recognize that the term 'Artificial Intelligence' itself (which scientifically refers to a field of academic study) is widely misused, its conceptual unclarity co-opted to advance industry agendas and undermine scholarly discussion. It is our task to demystify and to challenge 'AI' in our teaching, our research, and our engagement with society. This does not, however, mean reacting against the use of tools that may prove useful to students.
This is also relevant to understanding the learning perspectives of other cultures that we engage with. Fear of novel technologies such as AI is much more prevalent in the Anglosphere than in collectivist cultures, such as those of many Asian countries. Understanding these perspectives at a cultural level may at times also require introducing tools used in those cultures' learning environments, making a ban unfavorable to the purposes of education and demystification.
We call upon you to:
• Fortify our academic freedom as university staff to enforce these principles and standards in our classrooms and our research, as well as on the computer systems we are obliged to use as part of our work. We as academics have the right to our own spaces.
• Sustain critical thinking on AI and promote critical engagement with technology on a firm academic footing. Scholarly discussion must be free from the conflicts of interest caused by industry funding, and reasoned resistance must always be an option.
• Protect disabled students' rights to accommodations, even when those accommodations include technologies, such as neural networks, that the public identifies with AI. This applies especially to technologies that enable text-to-speech access, increase accessibility in the classroom, and assure students' ability to communicate knowledge they already possess, alongside the ability to choose among diverse options the school can support, such as between Dragon speech recognition and Otter.ai.
• Allow professors to revitalize their teaching approaches in line with updates to the learning sciences and research on metacognition, regardless of whether this eventually leads them to promote or to diminish the use of AI.
• Enable reconsideration of biases professors may hold against newer mediums, such as anthropocentric or creativity biases regarding unique projects and forms of expression, alongside consideration of how models can be used by those with distinct theories of mind.
• Protect the freedom of individuals in the digital humanities to pursue their research alongside their active teaching of newer tools to the next generation.
Cargnelutti, M., Brobston, C., Hess, J., Cushman, J., Mukk, K., Scourtas, A., Courtney, K., Leppert, G., Watson, A., Whitehead, M., & Zittrain, J. (2025). Institutional Books 1.0: A 242B token dataset from Harvard Library’s collections, refined for accuracy and usability. ArXiv.org. https://arxiv.org/abs/2506.08300
Chukhlomin, V. (2024). Socratic Prompts: Engineered Dialogue as a Tool for AI-Enhanced Educational Inquiry. Latin American Business and Sustainability Review, 1(1), 1–13. https://doi.org/10.70469/labsreview.v1i1.10
Ipsos. (2024). The Ipsos AI Monitor 2024: A 32-country Ipsos Global Advisor survey. ipsos.com/sites/default/files/ct/news/documents/20...
Morrison, A. B., & Richmond, L. L. (2020). Offloading items from memory: individual differences in cognitive offloading in a short-term memory task. Cognitive Research: Principles and Implications, 5(1). https://doi.org/10.1186/s41235-019-0201-4
Giebl, S., Mena, S., Storm, B. C., Bjork, E. L., & Bjork, R. A. (2020). Answer First or Google First? Using the Internet in ways that Enhance, not Impair, One’s Subsequent Retention of Needed Information. Psychology Learning & Teaching, 20(1), 147572572096159. https://doi.org/10.1177/1475725720961593
Hicks, C. [@grimalkina.bsky.social]. (2025). [Post]. Bluesky. bsky.app/profile/grimalkina.bsky.social/post/3lslt...
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia. (2025). Openletter.earth. openletter.earth/open-letter-stop-the-uncritical-a...
Millet, K., Buehler, F., Du, G., & Kokkoris, M. (2023). Defending humankind: Anthropocentric bias in the appreciation of AI art. Computers in Human Behavior, 143, 107707. https://doi.org/10.1016/j.chb.2023.107707
Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A., Sarkar, A., Sellen, A., & Rintel, S. (2024). The Metacognitive Demands and Opportunities of Generative AI. https://doi.org/10.1145/3613904.3642902
How Culture Shapes What People Want From AI. (n.d.). arXiv. https://arxiv.org/html/2403.05104v1
Farrell, H. (2025). AI as Governance. Annual Review of Political Science, 28(1), 375–392. doi.org/10.1146/annurev-polisci-040723-013245...
Sakura, O. (2021). Robot and ukiyo-e: implications to cultural varieties in human–robot relationships. AI & SOCIETY. https://doi.org/10.1007/s00146-021-01243-8
Heidt, A. (2025). Walking in two worlds: how an Indigenous computer scientist is using AI to preserve threatened languages. Nature, 641(8062), 548–550. https://doi.org/10.1038/d41586-025-01354-y
SE Gyges. (2025, June 8). The Biggest Statistic About AI Water Use Is A Lie. Very Sane AI Newsletter. verysane.ai/p/the-biggest-statistic-about-ai-water...
Masley, A. (2025, May 26). All the ways I want the AI debate to be better. The Weird Turn Pro. andymasley.substack.com/p/all-the-ways-i-want-the-...
ScientistSeesSquirrel. (2025, May 20). No, the plagiarism machine isn’t burning down the planet (redux). Scientist Sees Squirrel. scientistseessquirrel.wordpress.com/2025/05/20/no-...
Model, A. (2025, May 15). Curb Cuts. curbcuts.co/blog/2025-5-15-gaad-foundation-service...
Otter.ai. (2024). Otter Voice Meeting Notes. Otter Voice Meeting Notes. https://otter.ai/
Dragon Speech Recognition - Get More Done by Voice. (2016). Nuance Communications. https://www.nuance.com/dragon.html
Magni, F., Park, J., & Man, M. (2023). Humans as Creativity Gatekeepers: Are We Biased Against AI Creativity? Journal of Business and Psychology, 39. https://doi.org/10.1007/s10869-023-09910-x
“It’s the only thing I can trust”: Envisioning Large Language Model Use by Autistic Workers for Communication Assistance. (2024). arXiv. https://arxiv.org/html/2403.03297v1
Beshay. (2025, April 3). 2. Views of risks, opportunities and regulation of AI. Pew Research Center. pewresearch.org/internet/2025/04/03/views-of-risks...