News

  • Artist-in-residence announcement

    We are thrilled to announce our first artist-in-residence, Yasmine Boudiaf. Yasmine is an Algerian creative technologist and researcher based in London. She is a research fellow at UAL’s Creative Computing Institute and a fellow of the Royal Society of Arts, and was previously a fellow at the Ada Lovelace Institute. She was recognised as one of ‘100 Brilliant Women in AI Ethics 2022’. Her projects interrogate the impact of new technologies on cultural life using anti-colonial approaches.

    [Photo: Yasmine, smiling, with leafy wallpaper in the background]

    Yasmine’s project will sit at the intersection of generative AI, intangible heritage, and decolonial practice. It responds to the urgent need to rethink AI infrastructures that rely on extractive data practices, high carbon footprints, and cultural appropriation. The residency will investigate technical pipelines, participatory design and techno-rituals.

  • Call for Proposals: Creative Practice about AI and the Environment

    We invite proposals for creative practice of all kinds.

    The research project Sustainable AI Futures (AHRC BRAID) is seeking creative practitioners to produce original artworks (in any medium) exploring AI and the environment. We expect to award three mini-residencies, each consisting of a £2,750 fee plus up to £1,000 of expenses (travel, materials, etc.).

    This is an opportunity to be part of a timely interdisciplinary project investigating AI and its complex impacts on our planet. Artists-in-residence will have the opportunity to engage with the project team’s expertise, and use it to inspire and inform their work.

    Artists-in-residence will undertake creative practice (which may be entirely new, or develop existing work in new directions), and will deliver some form of event. The event could be an artist talk as part of one of the project’s workshops, or it could be a stand-alone event (e.g. public arts workshop, performance).

    Timeline: We expect to confirm our decisions by February 2026, with mini-residencies to be completed by November 2026.

    Suggested angles: We are open to many different angles, including but not limited to:

    • Indigenous knowledge in relation to both AI and the climate / environment
    • decolonial approaches to AI and the climate / environment
    • AI and climate change against the background of MAGA-Silicon Valley convergence (and fall-out?)
    • reviving obsolete / ‘dead tech’
    • subversive and imaginative uses of mainstream AI tools ‘against the grain’
    • AI, climate justice, and slow violence
    • interventions around building alternative AI infrastructures and capacity
    • resisting AI / abolishing AI
    • creative uses of AI against anthropocentrism
    • AI, the environment and archives
    • AI, the environment and data surveillance
    • AI and solastalgia / eco-grief

    We would especially like to see work that is interactive / participatory.

    Please complete this form before 1 November 2025.

  • Recruiting: Research Associate in AI and the Environment

    We are seeking a Research Associate in AI and the Environment to join the AHRC BRAID project Sustainable AI Futures. This role offers the opportunity to conduct cutting-edge research, influence policy and practice, and collaborate with academic and industry partners to build more sustainable AI futures. (This is a full-time, 30-month fixed-term contract.) Application deadline: 13 August. Apply here.

    About the Role

    In this role you will explore the intersection of responsible AI and environmental sustainability. You will conduct cutting-edge research through literature reviews and expert interviews; co-author academic publications, toolkits, and policy papers; and track global AI governance standards (ISO, ITU, IEEE) to ensure our work remains relevant and impactful.

    You’ll also help build a thriving community of practice, contributing to workshops, symposia, and other high-profile activities that shape the future of AI and the environment.

    About You

    • Completed PhD in a relevant discipline (open to arts and humanities, social sciences, and STEM)
    • Research experience in at least one relevant domain, specifically: a) environmental sustainability and/or climate change; b) Artificial Intelligence (e.g. AI policy, AI ethics, responsible AI, critical AI studies); or c) policy and governance, particularly relating to science, technology, or the environment
    • Strong analytic and communication skills, with evidence of producing high-quality outputs (e.g. publications, toolkits, standards, policy papers, reports)
    • Excellent collaboration and interpersonal skills, which could be evidenced through activities such as conducting interviews or fieldwork, building relationships with stakeholders, helping to cultivate a community of practice, conducting campaigns or driving change, or in other ways
    • Independent and proactive, capable of working flexibly both individually and as part of a team
    • Experience engaging with legislation, policy, technical standards, or governance frameworks, including any work related to responsible AI (e.g. ISO, IEEE, OECD, BSI, or internal industry frameworks)
    • Experience of working with partners from outside academia (e.g. industry, NGOs, policymakers, cultural or creative organisations), and/or delivering impact outside academia
    • Research experience in more than one relevant domain (see point 2 under ‘essential’ criteria)

  • Expressions of Interest: AI and Sustainability Reporting

    AI is poised to transform sustainability reporting by introducing new capabilities for data collection, analysis, and verification—with the potential to support more transparent, accurate, and engaging disclosures. But adopting these tools means understanding not only what they can and can’t do, but also ensuring they themselves align with sustainable and responsible AI principles. At the same time, AI introduces new challenges for sustainability reporting—such as the difficulty of measuring the carbon footprint of AI systems themselves.

    This interactive workshop, hosted by Sustainable AI Futures (UKRI BRAID) and Digital Catapult, offers an opportunity to map the landscape, share ideas, make connections, and explore emerging issues at the intersection of AI and sustainability reporting.

    The workshop will take place on 15 October in London. To express your interest in attending, please give us a few details here.

  • The Politics of AI: Governance, Resistance, Alternatives

    We invite proposals for papers offering critical perspectives on AI and the environment, AI and society, and the political economy of AI. We welcome proposals from all disciplines.

    The symposium is part of the BRAID project Sustainable AI Futures, which is mobilising interdisciplinary perspectives on AI and the environment, including the social life of AI environmental governance tools. The symposium will take place on 18th September 2025 at Goldsmiths, University of London. Please submit a 300-word abstract via this form by Friday 11th July 2025. We aim to send notifications of acceptance by Friday 25th July 2025.

    The rapid expansion of AI and computational infrastructure raises critical questions about whether we are governing AI responsibly, and whether that is even possible at all. Contemporary governance regimes reduce social and environmental impacts to mere issues of quantifying harms and managing resources. Even if we track down an elusive number for AI’s carbon emissions or water usage, how can we reconcile that with its complex, messy and highly uncertain social impacts? What are AI’s sociopolitical effects, and how do we begin to notice, imagine, manage, or measure these effects?

    This symposium aims to bring together researchers working across multiple disciplines on questions of AI’s implications for sustainability, public interest technology, and economic justice. While research and public discourse are proliferating around the central role AI plays in governance and infrastructure across multiple political contexts, siloed disciplinary approaches have not been able to account for the complex global dimensions of AI politics and contestation across its value chain. This event invites researchers approaching these questions from different angles to propose ways in which we can come together to assess AI’s impacts in more systematic and comprehensive ways.

    As a response to the current wave of AI development and deployment, concepts like responsible AI, sustainable AI, and AI governance have proliferated to manage these impacts at the point of design and consumption. However, despite the best intentions, these practices often end up reinforcing the very logics they seek to question, due to a lack of comprehensive assessment of global AI supply chains. We invite exploration of the nuances of these different approaches, as well as of different national and regional contexts. As AI becomes more embedded in collective economic futures, how deeply are its core logics entangled with structural shifts – from green capitalism and the twin transition, to austerity, war, and accelerationism?

    If alternative visions of AI are possible, what do they look like and what questions do they raise? What could AI look like if designed and operationalised outside dominant commercial and geopolitical frameworks? What possibilities emerge when we centre justice, sustainability, democracy, and decoloniality in AI development? How might the answers be different in different places around the world?

    If AI should be resisted rather than governed, then where, how, by whom, and with what resources and strategies? What precedents and projects of organising resistance to AI exist, and what can we expect in the future? Where are the leverage points? If we reject the idea that AI is inevitable, what are the alternatives, and what new ethical, political, and epistemological questions do such alternatives raise?

    We invite scholars who centre issues of power, equity, (in)justice, governance and resistance in AI infrastructures in their research to submit a 300-word abstract for this symposium. If accepted, you will be expected to give a 20-minute presentation.

    We expect to invite some of those who present to contribute articles to a special issue or an edited collection. 

    Please submit your proposals via this form, and direct any queries to d.mcquillan@gold.ac.uk.
    If you’d like to be kept informed about future opportunities and events from the Sustainable AI Futures project, you can sign up here.

    The symposium will be in-person; depending on interest, we may accommodate online presentations or arrange an online pre- or post-event.