We are delighted to participate in the Responsible AI Forum, held in Munich on 6-8 December 2021, discussing a sustainable, inclusive and comprehensive framework for the use of AI that delivers global benefit.
This event brings together members of industry, civil society, government and academia to discuss the most relevant and pressing issues related to the responsible use of AI through shared stories, cutting-edge research and practical applications. It provides a platform to encourage exchange between research and practice through productive discussion and demonstration.
Zachary Goldberg, Associate Research Manager at Trilateral Research, will be presenting on "The meaning of responsibility in the context of the design and deployment of AI".
His presentation will assess the meaning of responsibility in the context of the design and deployment of AI as one that is distinct from the ethically good decision making associated with Responsible Research and Innovation (RRI).
It begins by assessing philosophical contributions to the nature of responsibility and identifies five faces of responsibility, including:
· being held responsible
· taking responsibility
Next, it shows that responsibility is not a status, but a set of dynamic social practices employed to communicate praise, blame and future expectations.
These philosophical insights build support for the Principle of Responsive Adjustment, which states that an entity is responsible when it is expected to adopt courses of action that will prevent repetitions of an untoward event it caused.
Zack’s assessment, applied to the context of the design and deployment of AI systems, provides a novel and clear understanding of who is responsible for AI systems, and which actionable steps correspond to this responsibility.
Ethical AI for the Public Good
At Trilateral Research, we understand that the choices to develop, sell and use technologies are intrinsically, though often implicitly, value-laden, and that these choices include prioritizing some values over others.
Our Sociotech approach scrutinizes the development and use of algorithms to understand how their functions may cause harm or indeed create good, and which values are at play in their development and use.
As a crucial part of this process, we draw upon the growing field of Explainable AI to ensure that data collection and ingestion, as well as algorithmic processing and outputs, are explainable, enhancing end-user comprehension and decision making.
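To make the idea of an explainable output concrete, the sketch below shows one common, minimal form of explanation: breaking a model's score into per-feature contributions so an end user can see why a prediction came out as it did. The model, feature names and weights are purely illustrative assumptions, not Trilateral Research's actual system.

```python
# Minimal illustrative sketch: additive per-feature explanation for a
# simple linear scoring model. All names and numbers are hypothetical.

def explain_prediction(weights: dict, features: dict) -> dict:
    """Return each feature's additive contribution to the model score."""
    return {name: weights[name] * features.get(name, 0.0) for name in weights}

# Hypothetical trained weights and one (already scaled) input record.
weights = {"age": 0.5, "income": -0.25, "tenure": 0.75}
features = {"age": 2.0, "income": 1.0, "tenure": 1.0}

contributions = explain_prediction(weights, features)
score = sum(contributions.values())

# Each entry in `contributions` tells the user how much that feature
# pushed the score up or down, rather than presenting the score alone.
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
print(f"score: {score:+.2f}")
```

For a linear model these contributions are exact; for non-linear models, tools such as SHAP or permutation importance play the analogous role of attributing an output back to its inputs.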
Read more about the conference and contact our team for more information about our work.