AI Lund Lunch Seminar: 'Green teaming AI'
When: 8 April 2026, 12.00 to 13.00 CET
Where: Online - link provided upon registration
Speakers
- James White, postdoctoral researcher at the Department of Technology and Society, Lund University
- Jutta Haider, Professor of Information Studies at University of Borås
Spoken language: English
Abstract
This contribution presents ongoing research seeking to identify, examine and rethink how 'the environment' is configured into generative AI systems. We draw inspiration from the method of 'red teaming' AI to propose green teaming as a distinct approach, providing a first step towards mapping the diverse ways in which 'the environment' is constituted in GenAI, including how it is ignored.
So-called 'artificial intelligence' (AI) encompasses a diverse set of data-driven technologies for automation, prediction, and decision-making, some of which are becoming deeply integrated into society, culture, and professional practices, including environmental communication, environmental (social) science, policy, and management (White & Lidskog, 2022). In particular, the hype surrounding and proliferation of commercial generative AI applications have brought the environmental harms of these technologies into sharper focus, most notably their unsustainable resource use and energy demands (e.g. Bossert & Loh, 2025; Galaz, 2025). Yet the direct environmental impacts of the material infrastructure are not the only way in which the environment and generative AI interrelate. For example, the use of generative AI to fabricate scholarly work on environmental issues poses a problem for how evidence is established (Haider et al., 2024), as does its role in generating, amplifying, and disseminating climate obstruction content (Ekström & Haider, in press). There are also strategic ignorances regarding environmental and climate change knowledge embedded within AI models (Haider & Rödl, 2023), concerns about AI platforms' hyper-consumerist values and algorithmically facilitated emissions (Haider et al., 2025), and a lack of attention to environmental concerns in generative AI's discursive infrastructure (Ekström et al., 2025).
Generative and other AI systems are transformative technologies that not only represent but are also constitutive of human-environment relationships. However, inadequate industry disclosure of the underlying data, algorithms, and decision-making makes it difficult to understand how values concerning the environment are embedded and reshaped. Green teaming is a participatory approach designed to highlight environmental concerns, specifically emphasising the indirect and systemic environmental effects of generative AI. It is modelled on, and extends, the ideas of red teaming, which is typically used in technology companies to identify unintended, unsafe, and harmful outcomes of AI models. Recently, civil society and public sector organisations have begun to adopt red teaming in 'the public interest' (AI Risk and Vulnerability Alliance (ARVA) et al., 2025) or for 'social good' (UNESCO, 2025).
The presentation discusses foundational ideas and work in progress, inviting comments, suggestions, and new collaborations.
References
- AI Risk and Vulnerability Alliance (ARVA), Singh, R., Blili-Hamelin, B., Anderson, C., Tafesse, E., Vecchione, B., Duckles, B., & Metcalf, J. (2025). Red-Teaming in the Public Interest. Data & Society Research Institute. https://doi.org/10.69985/VVGP4368
- Bossert, L. N., & Loh, W. (2025). Why the carbon footprint of generative large language models alone will not help us assess their sustainability. Nature Machine Intelligence, 7(2), 164–165. https://doi.org/10.1038/s42256-025-00979-y
- Ekström, B., Engström, L., & Haider, J. (2025). Foundation models’ acceptable use policies disregard the environment and nature. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01134-3
- Ekström, B., & Haider, J. (in press). A methodology for analysing informational textures: Skipping stones and noticing the ripples. Journal of Documentation.
- Galaz, V. (2025). Dark Machines: How Artificial Intelligence, Digitalization and Automation is Changing our Living Planet. Routledge. https://doi.org/10.4324/9781003317814
- Haider, J., & Rödl, M. (2023). Google Search and the creation of ignorance: The case of the climate crisis. Big Data & Society, 10(1), 20539517231158997. https://doi.org/10.1177/20539517231158997
- Haider, J., Rödl, M., & White, J. (2025). Unsustainable artificial intelligence and algorithmically facilitated emissions: The case for emissions-reduction-by-design. Big Data & Society, 12(3), 20539517251365226. https://doi.org/10.1177/20539517251365226
- Haider, J., Söderström, K. R., Ekström, B., & Rödl, M. (2024). GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-156
- UNESCO, Chowdhury, R., Skeadas, T., & Amos, S. (2025). Red Teaming artificial intelligence for social good—The PLAYBOOK. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000394338
- White, J. M., & Lidskog, R. (2022). Ignorance and the regulation of artificial intelligence. Journal of Risk Research, 25(4), 488–500. https://doi.org/10.1080/13669877.2021.1957985
Registration
Participation is free of charge. Sign up at ai.lu.se/2026-04-08/registration and we will send you an access link to the Zoom platform.
About the event
Time: 2026-04-08, 12:00 to 13:00
Location: Online - link provided upon registration
Contact: ellinor [dot] blom_lussi [at] lth [dot] lu [dot] se