Event

BRAID x IDI Hybrid Seminar – Alex Taylor

Click here to book your tickets for this hybrid event.

Red Teaming and the Operationalising of Responsibility

This spring, I’ll be embarking on fieldwork investigating the outsourced labours and operational logics associated with red teaming. Currently in vogue and linked to responsible AI (RAI) programmes across the tech sector, red teaming is being touted as a way to identify weaknesses in language and multi-modal AI models through adversarial or provocative prompts. The reasoning here is that weaknesses identified through this prompting might help in fine-tuning or re-training AI models, mitigating issues such as systematically unsafe or toxic content.

Forming the basis for my BRAID fellowship, this fieldwork will take place across so-called ‘data enrichment’ centres in the Philippines (and possibly other sites in the Global South) and examine red teaming from two standpoints. First, it will interrogate the portrayal of red teaming as a sector-wide solution to the toxic tendencies of data-driven models, such as large language models (LLMs). Second, it will analyse red teaming as a case study of what I term the operationalising of responsibility in the tech sector. Across both dimensions, my focus will be on the global flows of capital and the forms and concentrations of labour being mobilised to “responsibilise” AI. I see implications here not just for a more responsible AI but also for vital questions about responsibility in late capitalism.

In preparation for this work, I want to use this talk to think with an audience about some of the assumptions behind and controversies surrounding red teaming. I’ll begin by elaborating on ways red teaming is being approached and put into practice in R&D. I’ll then set this technical work in a wider context of RAI in the sector to raise and invite questions about the adequacy of a ‘solution’ that continues to valorise technological innovation whilst failing to reward or indeed recognise the extractive conditions necessary for AI’s proliferation.

Bio


Alex Taylor is a sociologist by training, with longstanding commitments to critically investigating and intervening in the proliferation of technology and machine intelligence. His work has been shaped most heavily by a critical yet hopeful scholarship in feminist technoscience, including works from Ruha Benjamin, Simone Browne, Vinciane Despret, Donna Haraway, and Anna Lowenhaupt Tsing. He’s currently a Reader in Design Informatics at the University of Edinburgh and an AHRC BRAID Fellow, and co-runs the Critical Data Studies Cluster at the Edinburgh Futures Institute. He is also a Fellow of the RSA and holds visiting roles at the University of Sweden and City, University of London.


Running Order

16.00 – Welcome by Nayha Sethi and Talk by Alex Taylor
16.40 – Q&A
17.00 – End

In-person: Inspace, 1 Crichton St, Newington, Edinburgh EH8 9AB
Online: Zoom

Please note that seats at Inspace are limited, so in-person attendees should book tickets in advance. Those joining online should visit the online event page for the Zoom joining link and password.

For enquiries about accessibility, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
