12 December 2025
Thoughtfully Shaping Our Digital Future

To the parties forming the government in the Senate and House of Representatives, as well as the outgoing administration,

We are writing to you in recognition of your crucial responsibility for shaping current and future AI policy, overseeing digitization, and upholding public values. We are a coalition of scientists, experts, and representatives of civil society organizations. We believe it is essential to address these matters together.

This letter has two objectives:

a) To provide context for current plans, including the AI Delta Plan, the AIC4NL position paper, and the Invest-NL AI Deep Dive. These investment proposals often rest on assumptions that lack scientific evidence and do not fully reflect public values.

b) To offer a constructive, well-substantiated alternative approach to digital futures based on people, nature, and democracy. We believe a collaborative process should guide decisions on the needs, scope, and nature of investments by bringing together scientists, civil society organizations, and stakeholders.

The decisions made now will shape the digital future for everyone in the Netherlands. The government plays a crucial role in ensuring that digitization enhances people's well-being and safeguards our autonomy. This moment presents a valuable opportunity to reduce reliance on large foreign tech companies and their harmful influence through algorithms and AI models.

The Netherlands cannot achieve this alone; collaboration with other European countries is essential to safeguard our digital infrastructure. Given the increasingly adversarial stance of the US, it is more important than ever that we unite our efforts.

At the same time, we cannot depend solely on Europe. The European Commission's proposed Omnibus risks weakening both the GDPR and the AI Act, potentially undermining the public interest in favor of large, often foreign, tech companies.

Therefore, an alternative, constructive, and thoughtful approach to digitization and artificial intelligence (AI) is urgently needed. This approach must ground ownership and action firmly in society, and it should start from public values and rigorous scientific validation of new applications.

Our principles and proposals are not new. The idea of machines and automation serving as neutral, cost-effective substitutes for human labor and intellect is a persistent dream that has often resulted in overlooking crucial social insights and quality-of-life concerns (Illich, 1973; Franklin, 1999). We believe that digitization and technology should be approached from a pragmatic and humane perspective, which deserves far greater attention than it currently receives (Zaga et al., 2022).

Principle 1: Value knowledge and expertise, and distinguish between hype and science to enable well-informed, careful, and thoughtful decision-making

“Artificial intelligence” (AI) has multiple meanings. Primarily, AI refers to a field of research. As a concept, system, and academic discipline, it has existed since the mid-20th century.

In recent years, AI has increasingly been used as a marketing term for applications like ChatGPT and as a catch-all label for various technologies and implementations (Guest et al., 2025). While these often fail to deliver on their grand promises, they continue to attract significant investment (The Economist, 2025). Unfortunately, established guidelines, recommendations, and standards for safe and responsible use are frequently overlooked (Dobbe, 2023; Dobbe, 2025; Murray-Rust, Alfrink, Zaga, 2025).

Relying irresponsibly on AI and implementing it hastily can have severe consequences, further eroding public trust. This was evident in the benefits scandal, where automated processes caused unprecedented harm to families. The mental health risks are also growing: chatbots have been linked to psychosis and, in some cases, fatal outcomes (Taylor, 2025; Preda, 2025).

We urge a clear distinction between hype, wishful thinking, and reality to ensure informed investment decisions and minimize risks.

Carefully caring for our digital futures requires expertise, and that expertise must view technology within a broader societal and global context. Many complex challenges, such as ecological issues, already have approaches that require more than technological fixes. AI models do not possess a genuine understanding of people or the world; by themselves, they cannot resolve these challenges. Rather, AI risks diverting attention from, or even adding to, the real work that must be done.

We are keen to help bridge the gap between experts, researchers, and policymakers by sharing existing knowledge and expertise.

Principle 2: AI applications should not receive special treatment in legislation and regulations

Currently, experimentation and rushed decisions have become commonplace in policy for the digital industry, often justified as ‘inevitable’ (AI Now Institute, 2025). Terms like ‘race,’ ‘reduced regulatory pressure,’ and ‘fear of missing out’ frequently accompany this narrative. We believe that policy and regulation serve a vital purpose: protecting what is valuable and ensuring a fair balance between costs and benefits.

The current investment proposals promote deregulation, effectively granting preferential treatment to a select group of investors and companies. For example, these plans include changes to dismissal laws, the removal of intellectual property rights, and expedited licensing that serves private interests. We view this as a violation of key public values and a step toward increased social unrest.

We are also concerned about the risks of deregulation and experimentation with applications in everyday environments, as advocated by some entrepreneurs and investors. These approaches are not without danger: past experiments with autonomous vehicles (Chougule et al., 2023) and the unregulated deployment of chatbots have already resulted in fatalities (Taylor, 2025).

The regulation of AI is essential. The Rathenau Institute has previously emphasized to the House of Representatives the importance of politicians and policymakers regulating AI (Rathenau Institute, 2024), just as we do for other services and products. Without strong and widely accepted regulations, we would not have achieved safe aviation, reliable consumer products, or effective medical treatments (Leveson, 2011).

The belief that regulation inherently stifles innovation is outdated, yet it remains prevalent in discussions about AI and digitization (Bradford, 2024). We stress the need for robust oversight, as effective regulation enables well-informed social and economic decisions.

Principle 3: Promote economic viability and ensure added value for everyone

Economists have long warned that growth in the AI industry relies on risky financing, loans, and circular investment schemes (Arun, 2025). Despite this, most companies have yet to deliver the promised productivity gains (The Economist, 2025).

These speculative investments pose risks to the Dutch economy, including our pension funds. In the proposed plans, it is mainly investors and company directors who set the direction. We find this concerning: the returns will accrue to a small group, while the risks may be borne by society.

We are also concerned about the invisible workforce behind AI (Williams and Miceli, 2023). While marketing highlights artificiality and automation, AI relies heavily on hidden labor, including data labeling and content moderation. Much of this work occurs in low-wage positions with unsafe conditions and precarious contracts (Munn, 2024). As a society that values fair work, the Netherlands must ensure that choices throughout the value chain do not cause harm and that fair compensation is provided.

Rather than placing blind faith in technologies unsuited to addressing urgent social and economic challenges, we must protect those adversely affected by their unregulated deployment: teachers, lawyers, researchers, civil servants, artists, healthcare professionals, and translators, among others.

AI applications often project a misleading sense of reliability (Suchman, 2019; Suchman, 2023; Anderl et al., 2024). Uncritical adoption and unsupported claims can cause significant societal harm, eroding essential skills and critical thinking. Sectors like education (Guest et al., 2025), journalism (NDP Nieuwsmedia, 2025), science (van Rooij, 2024), healthcare (Jeyaretnam, 2025), and the judiciary (Advocatie, 2025) are already suffering negative effects.

This applies especially in case law, where society depends on the accuracy and factual correctness of legal decisions. We recommend significant investment in critical AI literacy across essential sectors, including law, education, healthcare, and government.

Principle 4: Focus on reducing energy, water, and land use. Recognize potential harmful impacts.

Large AI models and data centers require vast amounts of water, energy, and land—often in environmentally sensitive regions (Jiménez Arandia et al., 2025; Suarez et al., 2025). In the Netherlands, this demand strains the power grid (NOS, 2025). Further expansion of data centers increases energy usage, greenhouse gas emissions, and water pollution from cooling processes (International Energy Agency, 2025; Gamazaychikov & Luccioni, 2025). These facilities also compete with essential needs like housing, food supply, and infrastructure (Netbeheer Nederland, 2025).

The Netherlands is falling short of its 2030 emissions reduction targets (Reuters, 2025). The construction of additional hyperscale data centers for AI threatens to further undermine the country's energy and climate transition (Green Screen Coalition, 2025).

Our concerns extend beyond the Netherlands. There is a pressing need for democratic, human- and nature-centered AI policies on a global scale, especially as democratic institutions face erosion and harmful power dynamics are amplified by AI. Addressing these challenges requires international cooperation and shared responsibility.

We are also troubled by the business practices of major tech companies. Research indicates that segments of the AI industry share troubling similarities with the oil industry, including exploitation, ecosystem destruction, colonization, surveillance, and the reinforcement of authoritarian regimes (Hao, 2022; Ricaurte, 2022).

We have a unique opportunity to choose a different path, and we recommend collaborating with all stakeholders to achieve meaningful change (Zaga et al., 2022).

Principle 5: Involve all voices, including civil society, in policy-making

The Netherlands possesses extensive expertise on the impact of AI. We propose explicitly incorporating critical perspectives into the design process. By prioritizing thoughtful decision-making over automated responses, the Netherlands can set a leading example in AI research and responsible, socially just digitization (Zaga et al., 2022).

We recognize the significant value of broad collaboration and the need to involve civil society (Murray-Rust, Alfrink & Zaga, 2025). The focus should remain on the ability of citizens and professionals to shape their own lives and organizations, rather than on technologies that promise solutions but fail to address complex social and economic challenges.

The Netherlands can lead by example in Europe by supporting civil society, academia, and the business community to develop a digital ecosystem firmly anchored in public values (PublicSpaces, 2025).

The way forward

The Netherlands can embark on a path to digitization that does justice to people, nature, and democracy. We advocate pursuing it jointly and carefully: with scientists, civil society organizations, and entrepreneurs. In this way, we can advance a broadly supported, constructive, and well-founded agenda that addresses fundamental questions vital to society.

We recommend starting with an analysis of the issue at hand rather than seeking technological solutions in order to secure funding. When considering possible implementations, we should apply the assessment framework we already use successfully as a society: the existing laws and regulations that protect us, along with our sense of community and creativity. This approach is grounded in care and thoughtful consideration: an approach of Carefully Caring for our Digital Futures.

Our recommendations:

Analyze the problem first, only then consider solutions

• Open the discussion to take stock of issues surrounding our digital future. Involve scientists, civil society, and stakeholders.

• Look at the practices that exist in the Netherlands and elsewhere. Check claims and compare them with investments from private and public funds.

• Consider the extent to which solutions reduce or increase dependence on large technology companies.

• Ensure all perspectives are represented, with special attention to critical views that have been underrepresented in current discussions.

• Address the systemic risks posed by AI applications in critical sectors such as education, healthcare, the judiciary, and science. Listen to science and societal experts. Give careful consideration to scientific insights, especially regarding the loss of essential skills and the erosion of critical thinking.

• Establish a scientific evaluation process for claims regarding the application of AI and digital systems, ensuring that public funding decisions are well-informed and careful.

• Choose a suitable governance model with the powers to match. Consider a dedicated ministry or minister.

Develop and implement a meaningfully transparent assessment framework

• No monopolies by a select few. When public funds are used for digitization or public infrastructure, the benefits must be shared with the public.

• Protect people through legislation and regulations against the harmful effects of AI and algorithmic decisions.

• Stop unregulated experiments in public spaces.

• Provide regulators (ACM, AP) with sufficient resources to fulfill their monitoring and enforcement duties.

• Establish a public AI council that includes broad, plural civil society representation.

• Uphold the principles of the rule of law. Proactively safeguard our critical digital infrastructure against interference and acquisition by foreign entities.

Safeguard current agreements that prioritize people and nature

• Uphold all relevant laws and human rights, along with environmental agreements and intellectual property protections.

• Guarantee fair wages and equitable contracts for all. Avoid granting preferential treatment to any select group.

• Ensure that considerations of the impacts of digitalization are fully integrated into all decisions related to nature, natural resources, and energy.

• Look beyond the Netherlands: weigh the harmful effects of extraction and exploitation in other countries.

• Invest in sustainable, people-oriented, value-driven solutions to provide an alternative.

Support initiatives that contribute to meaningful digitization and public values

• Invest in comprehensive critical AI literacy, prioritizing key sectors such as education, government, the judiciary, and healthcare.

• Engage scientists, experts, and civil society throughout the design and implementation process.

• Support organizations capable of making long-term, sustainable contributions within public-private-civil partnerships.

• Explore alternative economic models for public digital infrastructure, including utility-based frameworks that prioritize public benefit.

• Promote the open sharing and accessibility of knowledge and experience.

Signatures

We sign this appeal with the conviction that things can be done differently and better in our society. Let us join forces to build a future that is not only desirable but also truly worth striving for: one grounded in common sense rather than automated thoughtlessness.

12 December 2025

Coalitie Zorgvuldig & Zorgzaam Digitaal

dr. ir. Cristina Zaga, Assistant Professor, Transdisciplinary Design for Socially Just Digital Innovation and Automation, Director of the JEDAI Network: Social Justice in AI, University of Twente, NL

dr. Olivia Guest, Assistant Professor of Computational Cognitive Science, Radboud University, NL

dr. ir. Roel Dobbe, Assistant Professor of Responsible Digitalisation and Public AI Systems, Director Sociotechnical AI Systems Lab, TU Delft & Board member, Stichting PublicSpaces, NL

drs. Wiep Hamstra, Expert in government services and communication, NL

Lilian de Jong, MSc, co-founder Dutch AI Ethics Community, NL

Wouter Nieuwenhuizen, Researcher, Rathenau Instituut, NL

dr. Marcela Suarez Estrada, Lecturer in Critical Intersectional Perspectives on AI, Radboud University, NL

prof. dr. Iris van Rooij, Professor of Computational Cognitive Science, Chair of Cognitive Science and Artificial Intelligence, Radboud University, NL

--

prof. dr. F. Hermans, Professor of Computer Science Education, Vrije Universiteit, NL

prof. dr. M. Dingemanse, Professor of AI: Language Diversity and Communication Technologies, Radboud University, NL

dr. ir. Eelco Herder, Associate Professor, Universiteit Utrecht & Voorzitter ACM SIGWEB & Lid ACM Technology Policy Council, NL

Jelle van der Ster, General Director, SETUP, NL

drs. Siri Beerends, PhD candidate Philosophy of Technology, University of Twente / Researcher SETUP, NL

Emile van Bergen, Senior software engineer computer vision, NL

Paul Peters, Fluxology / Creative Reduction, NL

Dr. Nolen Gertz, Associate Professor of Applied Philosophy, University of Twente, NL

Gerry McGovern, author of World Wide Waste, and 99th Day

Cited References and Resources

Altmeyer, P., Demetriou, A. M., Bartlett, A., & Liem, C. C. S. (2024). Position: Stop Making Unscientific AGI Performance Claims. In R. Salakhutdinov, Z. Kolter, & K. Heller (Eds.), International Conference on Machine Learning (Vol. 235, pp. 1222-1242). (Proceedings of Machine Learning Research).

Rachovitsa, A., & Johann, N. (2022). The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case. Human Rights Law Review, 22(2), ngac010. https://doi.org/10.1093/hrlr/ngac010

Advocatie Redactie (2025, September 10). Rechtbank Rotterdam berispt advocaat wegens aanvoeren van niet-bestaande jurisprudentie. Advocatie. advocatie.nl/nieuws/rechtbank-rotterdam-berispt-ad...

Agnew, W., McKee, K. R., Gabriel, I., Kay, J., Isaac, W., Bergman, A. S., & Mohamed, S. (2023). Technologies of resistance to AI. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–13.

Reyes-Cruz, G., Spors, V., Muller, M., Ciolfi Felice, M., Bardzell, S., Williams, R. M., ... & Feldfeber, I. (2025, April). Resisting AI Solutionism: Where Do We Go From Here? In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–6).

AI Now Institute (2025, June 3). 1.3: AI Arms Race 2.0: From Deregulation to Industrial Policy. AI Now Institute. ainowinstitute.org/publications/research/1-3-ai-ar...

Anderl, C., Klein, S. H., Sarigül, B., Schneider, F. M., Han, J., Fiedler, P. L., & Utz, S. (2024). Conversational Presentation Mode Increases Credibility Judgements during Information Search with ChatGPT. Scientific Reports, 14(1), 17127. https://doi.org/10.1038/s41598-024-67829-6

Arun, A. (2025, November 12). Bubble or Nothing. Center for Public Enterprise. publicenterprise.org/report/bubble-or-nothing/...

Avraamidou, L. (2024). Can we disrupt the momentum of the AI colonization of science education?. Journal of Research in Science Teaching, 61(10), 2570-2574.

Bara, M. (2025, November 5). The Hidden Cost of AI at Work: Why ‘Workslop’ Is Quietly Undermining Productivity. Medium. ai.plainenglish.io/the-hidden-cost-of-ai-at...

Baumer, E. P. S., & Silberman, M. S. (2011, May 7). When the implication is not to design (technology). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’11: CHI Conference on Human Factors in Computing Systems, Vancouver BC Canada. https://doi.org/10.1145/1978942.1979275

Birhane, A. (2021). Algorithmic injustice: a relational ethics approach. Patterns, 2(2).

Birhane, A. & Guest, O. (2021). Towards Decolonising Computational Sciences. Women, Gender & Research. https://doi.org/10.7146/kkf.v29i2.124899

Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bao, M. (2022, June). The values encoded in machine learning research. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 173-184).

Birhane, A., Ruane, E., Laurent, T., S. Brown, M., Flowers, J., Ventresque, A., & L. Dancy, C. (2022, June). The forgotten margins of AI ethics. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 948-958).

Bradford, A. (2024). The false choice between digital regulation and innovation. Nw. UL Rev., 119, 377.

Chougule, A., et al. (2023). A comprehensive review on limitations of autonomous driving and its impact on accidents and collisions. IEEE Open Journal of Vehicular Technology, 5, 142–161.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Dingemanse, M. (2025, July 3). Waarom ik niet instem met het onderhandelingsakkoord voor de CAO universiteiten – The Ideophone. ideophone.org/waarom-ik-niet-instem-met-het-onderh...

Dobbe, R. (2023, November 22). ‘Safety Washing’ at the AI Safety Summit in the UK. Dutch version: iBestuur. ibestuur.nl/data-en-ai/toepassingen/safety-washing... English version: Linkedin. linkedin.com/pulse/safety-washing-ai-summit-roel-d...

Dobbe, R. (2025). AI Safety is Stuck in Technical Terms—A System Safety Response to the International AI Safety Report (No. arXiv:2503.04743). arXiv. https://doi.org/10.48550/arXiv.2503.04743

Dobbe, R. (2025, July 25). Opinie | De vloek van AI. NRC. nrc.nl/nieuws/2025/07/25/de-vloek-van-ai-a4901292...

The Economist (2025, November 26). Investors expect AI use to soar. That’s not happening. Retrieved December 2, 2025, from economist.com/finance-and-economics/2025/11/26/inv...

El-Sayed, S., Kickbusch, I., & Prainsack, B. (2025). Data solidarity: Operationalising public value through a digital tool. Global Public Health, 20(1), 2450403.

Erscoi, L., Kleinherenbrink, A., & Guest, O. (2023). Pygmalion Displacement: When Humanising AI Dehumanises Women. SocArXiv. https://doi.org/10.31235/osf.io/jqxb6

Franklin, U. M. (1999). The Real World of Technology. CBC Massey Lectures. Toronto: Anansi.

Gamazaychikov, B., & Luccioni, S. (2025, November 10). ⚡ Power, Heat, and Intelligence ☁️—AI Data Centers Explained 🏭. Hugging Face. huggingface.co/blog/sasha/ai-data-centers-explaine...

Gebru, T., & Torres, É. P. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday.

Government of the Netherlands. (2024). Government-wide vision on generative AI of the Netherlands. [Kamerstuk]. government.nl/documents/parliamentary-documents/20...

Green Screen Coalition. (2025, February 5). Within Bounds: Limiting AI’s environmental impact. Green Screen Coalition. https://greenscreen.network/en/blog/within-bounds-limiting-ai-environmental-impact/

Guest, O. (2025). What does “Human-Centred AI” mean? arXiv. https://doi.org/10.48550/arXiv.2507.19960

Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243

Guest, O., Suarez, M., Müller, B., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J.,Hermans, F., & van Rooij, I. (2025). Against the Uncritical Adoption of “AI” Technologies in Academia. Zenodo. https://doi.org/10.5281/ZENODO.17065099

Hao, K. (2022). An MIT Technology Review Series: AI Colonialism. MIT Technology Review. technologyreview.com/supertopic/ai-colonialism-sup...

Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman's OpenAI. Penguin Group.

Heikkilä, M. (2022). Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms. Politico. politico.eu/article/dutch-scandal-serves-...

Illich, I. (1973). Tools for Conviviality. Open Forum. London: Calder and Boyars.

International Energy Agency (2025, April 10). Energy and AI – Analysis. IEA. https://www.iea.org/reports/energy-and-ai

Jeyaretnam, M. (2025, August 13). Using AI Made Doctors Worse at Spotting Cancer Without Assistance. TIME. time.com/7309274/ai-lancet-study-artificial-i...

Jiménez Arandia, P., Dib, D., & Alarcón, M. (2025, August 8). The Backyard of AI: A Map of the 21st Century Gold Rush. Pulitzer Center. pulitzercenter.org/stories/backyard-ai-map-21st-ce...

Leveson, N. G. (2011). Engineering a safer world: Systems thinking applied to safety. The MIT Press. library.oapen.org/handle/20.500.12657/26043...

McQuillan, D. (2022). Resisting AI: An anti-fascist approach to artificial intelligence. Policy Press.

Mügge, D., Paul, R., & Stan, V. (2025). The AI matrix: Profits, power, politics. Agenda Publishing.

Munn, L. (2024). Digital labor, platforms, and AI. Introduction to Digital Humanism: A Textbook, 557–569.

Murray-Rust, D., Alfrink, K., & Zaga, C. (2025). Towards Meaningful Transparency in Civic AI Systems. arXiv preprint arXiv:2510.07889.

NDP Nieuwsmedia. (2025, November 20). Media sturen informateur brandbrief: “Techbedrijven bedreigen democratie ernstig.” NU. nu.nl/media/6376559/media-sturen-informateur-brand...

Netbeheer Nederland (2025, May 14). Netbeheer Nederland Scenario’s Editie 2025. netbeheernederland.nl/artikelen/nieuws/netbeheer-n...

NOS (2025, May 22). Nieuw onderzoek: AI verbruikt 11 tot 20 procent van wereldwijde stroom datacenters. nos.nl/nieuwsuur/artikel/2568297-nieuw-onderzoek-a...

O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

PublicSpaces. (2025). PublicSpaces.net. Retrieved December 2, 2025, from https://publicspaces.net

Preda, A. (2025). Special Report: AI-Induced Psychosis: A New Frontier in Mental Health. Psychiatric News. https://doi.org/10.1176/appi.pn.2025.10.10.5

Rathenau Instituut (2024). Politici en beleidsmakers moeten aan de slag met risico’s van generatieve AI. Den Haag.

Reuters. (2025, September 16). Netherlands will highly likely miss 2030 climate goal, experts say. Reuters. reuters.com/sustainability/cop/netherlands-will-hi...

Ricaurte, P. (2022). Ethics for the majority world: AI and the question of violence at scale. Media, Culture & Society, 44(4), 726–745.

Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., ... & Bengio, Y. (2022). Tackling climate change with machine learning. ACM Computing Surveys (CSUR), 55(2), 1-96.

Shrishak, K. (2025, November 14). European Commission breaches own AI guidelines by using ChatGPT in public documents. Irish Council for Civil Liberties. iccl.ie/news/european-commission-breaches-own-ai-g...

Stengers, I. (2018). Another science is possible. Cambridge, UK: Polity.

Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025, June 12). Critical AI Literacy: Beyond hegemonic perspectives on sustainability [Substack newsletter]. Sustainability Dispatch. https://doi.org/10.5281/zenodo.15677840

Suchman, L. A. (2019). Demystifying the Intelligent Machine. In T. Heffernan (Ed.), Cyborg Futures: Cross-Disciplinary Perspectives on Artificial Intelligence and Robotics (pp. 35–61). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-21836-2_3

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2), 20539517231206794. https://doi.org/10.1177/20539517231206794

Taylor, J. (2025, August 2). AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn. The Guardian. theguardian.com/australia-news/2025/aug/03/ai-chat...

van Rooij, I., Guest, O., Adolfi, F., De Haan, R., Kolokolova, A., & Rich, P. (2024). Reclaiming AI as a Theoretical Tool for Cognitive Science. Computational Brain & Behavior, 7(4), 616–636. https://doi.org/10.1007/s42113-024-00217-5

van Rooij, I. (2025) AI slop and the destruction of knowledge. Zenodo. https://doi.org/10.5281/zenodo.16905560

Verhagen, L. (2025, November 22). De AI-industrie buit mens en aarde uit, zoals de nootmuskaatwinning dat eerder deed, zegt Nederlandse expert. de Volkskrant. volkskrant.nl/tech/de-ai-industrie-buit-mens-en-aa...

Vincent, N., Li, H., Tilly, N., Chancellor, S., & Hecht, B. (2021, March 3). Data leverage: A framework for empowering the public in its relationship with technology companies. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT ’21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada. https://doi.org/10.1145/3442188.3445885

Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

Williams, R. M. (2025). AI’s Eugenic Legacies, Our Disabled Future: We’re Wrong About What’s in This Book. In Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI (pp. 1-13). Cham: Springer Nature Switzerland.

Williams, A., & Miceli, M. (2023). Data Work and its Layers of (In)visibility. JustTech, Social Science Research Council.

Zaga, C., & Lupetti, M. L. (2022). Diversity equity and inclusion in embodied AI: reflecting on and re-imagining our future with embodied AI. 4TU. Federation.

198 signatures (178 verified)
  1. dr. ir. Cristina Zaga, Assistant Professor, University of Twente, Enschede
  2. Roel Dobbe, Assistant Professor, TU Delft, Delft
  3. drs. Wiep Hamstra, Expert in government services and communication, Den Haag
  4. Iris van Rooij, Professor, Cognitive Science and AI, Radboud University, Nijmegen
  5. Lilian de Jong, Co-founder, Dutch AI Ethics Community, Nijmegen
  6. Wouter Nieuwenhuizen, Researcher, Rathenau Instituut, The Hague
  7. M. Dingemanse, Professor of AI, Language Diversity and Communication, Radboud University
  8. Peter Troxler, Lector, Hogeschool Rotterdam, Rotterdam
  9. Lieke Wouters, Co-director and curator, Stichting Platform POST, Nijmegen
  10. Jelle Van Dijk, Researcher, University of Twente, Utrecht
  11. Wladimir Mufty, Program Manager Digital Sovereignty, SURF, Utrecht
  12. Robert-Jan den Haan, Assistant Professor, University of Twente, Enschede
  13. Maartje de Graaf, Associate Professor of Human-Computer Interaction, Utrecht University, Utrecht
  14. Hans Parić-Schermer, I-advisor, Gemeente Zoetermeer, Rotterdam
  15. Laurens Vreekamp, Journalist, Future Journalism Today, Arnhem
  16. Joost Elshoff, Information manager, Zuyd Hogeschool, Heerlen
  17. Gerben de Vries, AI Research Engineer, Hilversum
  18. Pascal Wiggers, Lector Responsible IT, Hogeschool van Amsterdam, Amsterdam
  19. Paul Peters, Principal Architect · Change Agent · Interim CTO, Fluxology++
  20. Gaby Schram, Entrepreneur, BETTER, Arnhem
… 138 more verified signatures, including:
  1. Avery Dangerous Garnett, Backend Developer, Chordify, Utrecht
  2. Michel Klein, Associate Professor AI, Vrije Universiteit, Amsterdam
  3. Liam Hayes, Product Designer/Masters Student, Technical University of Eindhoven, Eindhoven
  4. Ema Vrînceanu, Director, Earth System Governance Foundation, Utrecht
  5. Riccardo Angius, PhD researcher, Trinity College Dublin, The University of Dublin, Dublin
  6. Alexandru Babeanu, Postdoctoral researcher, Leiden University
  7. Sara Colombo, Assistant Professor, Director Feminist Generative AI Lab, TU Delft, Delft
  8. Lorenzo Gennaro, Leiden
  9. Wim Pouw, Associate Professor, Tilburg University, Tilburg
  10. dr. Emily Sandford, Postdoctoral researcher, Leiden Observatory, Leiden
  11. Misha Velthuis, Lecturer, Amsterdam University College, Amsterdam
  12. Benedetta Lusi, Postdoctoral Researcher, Erasmus Medical Center, Rotterdam
  13. Teresa Heffernan, Professor, Saint Mary's University, Halifax
  14. Prof. Eerke Boiten, Head of School of Computer Science and Informatics, De Montfort University, Leicester
  15. Anja Sicking, Author, Sectievoorzitter Literaire auteurs Auteursbond, Amsterdam
  16. Dan Stowell, Associate Professor, Tilburg University, Tilburg
  17. Eva de Hullu, Assistant Professor, Open Universiteit, Zwolle
  18. Vlad Niculae, Assistant Professor, University of Amsterdam, Amsterdam
  19. Weston Renoud, Senior Software Developer, QPS BV, Utrecht
  20. Visvanath Ratnaweera, Lecturer (retired), EduNET.LK, Winterthur