About

How do we govern AI’s environmental impacts responsibly? Sustainable AI Futures is an AHRC Bridging Responsible AI Divides (BRAID) project addressing this challenge. Specifically, we investigate the social life of environmental governance tools: how they work, how they are made, how they might be improved, their potential for unintended consequences, and their wider political, social, and ethical implications.

If you’d like to be kept up to date with the project’s progress and opportunities, you can drop your contact details here.

AI’s environmental impacts can be difficult to detect, let alone to measure and manage. Weighing the positive impacts against the adverse ones will require new regulations, frameworks, standards, certifications, eco-labels, sustainability indices, technical guidance, and more. Such governance tools are now emerging, but the perspectives of the arts and humanities have so far been absent from their development. We need to know much more about what really happens when AI environmental governance meets the complex, messy, and uncertain reality of human society and ecological systems. Might such tools sometimes even hinder rather than help? We need to think critically and creatively about the futures these tools presuppose, and to articulate alternative futures.

AI has a complex relationship with the environment. As global heating breaches 1.5°C, AI is being applied in areas including energy management, supply chain optimisation, food systems resilience, and disaster preparedness. Leading AI companies have made ambitious decarbonisation pledges, and are major purchasers of renewable energy and investors in climate tech. Against the backdrop of the Sixth Mass Extinction, AI is being used for biodiversity monitoring and nature restoration.

At the same time, AI is putting pressure on climate action, not least through increased demand for energy and water to run and cool data centres. Despite its 2020 commitment to become carbon negative within ten years, Microsoft’s carbon footprint was, as of 2024, expanding rather than contracting, driven by its AI investments. Leading AI companies have drawn criticism for hoarding renewable energy resources, and for the integrity of their carbon accounting, especially the role of offsetting. AI-related growth in data centres, networks, and devices is also driving demand for land, tech metals, and rare earth minerals, threatening biodiversity through habitat disruption and destruction. In the UK, data centres have been designated ‘critical infrastructure,’ meaning that lower environmental standards apply to data centre construction. AI is also being used in activities, like oil exploration, that have no place within a net-zero future. And where there are positive uses of AI for the climate, it’s important to understand what kind of AI is being deployed: sometimes it turns out to be something quite lightweight, which doesn’t require the huge data centre build-out it’s used to justify.

Addressing the challenge of AI environmental governance means ongoing assessment of the effectiveness of key governance tools. But the task extends further. We need to explore where such tools come from, the values they embed, the assumptions they reinforce or challenge, and their wider implications for the future of social and ecological justice. Sustainable AI Futures addresses this challenge by analysing AI environmental governance at three interconnected scales: policy and strategy, code and data, and material infrastructure. At each scale, we will develop a toolkit to address a gap in the governance of AI’s environmental impacts. By reflecting openly and transparently on the challenges and experiences of doing so, and by engaging widely with interdisciplinary expertise and affected communities, we aim to reveal the social life of AI environmental governance tools more broadly.

Underpinning all of this is a richly interdisciplinary, exploratory, and participatory ethos. We will cultivate an interdisciplinary, arts and humanities-led community of research and practice, building in insights from the longer history of AI imaginaries, including cultural influences from science fiction to tech journalism and marketing. We will intervene in these discourses and practices, using creative practice to unsettle received wisdom and to imagine alternative flourishing AI futures.