This Node will develop a framework for the regulation of autonomous systems, enabling such systems to adapt effectively to the complexities of the environments in which they operate.

For autonomous systems to become trustworthy, we will need frameworks of informal and formal governance, co-designed with the systems themselves. Our Node will explore the design of frameworks for responsibility and accountability, along with tools that help the ecosystem of developers and regulators embed these values in systems.

Our Node brings together specialists in computer science and AI, legal scholars, AI ethicists, and experts in science and technology studies and design ethnography. Together we are developing a novel software engineering and governance methodology that includes:

  • New frameworks that bridge the gap between legal and ethical principles (including emerging questions around privacy, fairness, accountability, and transparency) and an autonomous systems design process that entails rapid iteration driven by emerging technologies (e.g., machine-learning-in-the-loop decision-making systems).
  • New tools for an ecosystem of regulators, developers, and trusted third parties that address not only functionality and correctness, but also how systems fail and how the associated evidence can be managed to facilitate better governance.
  • An evidence base drawn from full-cycle case studies of taking autonomous systems through regulatory processes, as experienced by our partners, to inform policy discussion of reflexive regulation practices.
