At Trilateral, we strive to put ethics into action by grounding it in our work on technology development, governance, and beyond.
Ethics presented as high-level principles and values can seem abstract and dissociated from the practical world. Though ethical norms such as fairness, dignity and autonomy provide general guidance, it remains challenging to prioritise values, make decisions and act in specific cases so as to promote social good and avoid unwanted outcomes.
Why is there a need to ground discussions around ethics?
Ethical principles and human rights help guide technology and policy development.
They help us define what kinds of interpersonal relations we want to cultivate, what a good quality of life should be and how best to help those around us. They also help us see when our actions might cause harm, and why.
Much of this has been formalised into law (such as under the European Convention on Human Rights), since human rights frameworks are a core way the law seeks to realise ethical norms in practice (including by creating legal obligations for States).
Ethics helps us consider how technologies are embedded in our everyday practices in ways that can radically affect human interaction. With this awareness, calls for ethics-by-design and ethics-in-design of new technologies and policies are now commonplace in academic literature, in private businesses such as Microsoft and Google, and in the European Commission’s innovation programmes, Horizon 2020 and Horizon Europe.
Despite this importance, there is a lack of clarity on how to translate ethics into action and design choices. Values and human rights laws do not provide clear decision-making procedures; because they tend towards high-level norms, they are often criticised as abstract.
How do we do this in practice?
At Trilateral, we seek to apply ethical theories in practice, interpreting legal and ethical obligations in substantive ways and avoiding the circumvention of responsibility that can occur when loopholes are found in abstract provisions.
We concretely consider how technology development might affect the individual. This means showing respect for, and understanding of, the unique contextual factors experienced by each of us.
The first step in this process is observation: where a technological tool or policy might be used, to what end, by whom, for whom, in which settings, and in combination with which other technologies.
Here, we apply a context-sensitive analysis by conducting, among other methods, social impact assessments, allowing ethical theory to directly shape design choices. Then, for any impact we identify, we work with designers, policymakers and stakeholders to propose recommendations or requirements that mitigate negative outcomes, promote beneficial ones, and protect individuals’ autonomy, dignity and privacy, with particular attention to their vulnerability. As part of this process, we have developed a series of methods to operationalise ethics across our research and technology-innovation projects.
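As a rough illustration, the observation step can be thought of as filling in a structured context record for each deployment. The sketch below is our own, not a Trilateral template; the field names and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentContext:
    """One observation record; the fields mirror the questions above.
    Illustrative only: not an actual Trilateral template."""
    tool: str
    purpose: str                 # to what end
    operators: list[str]         # by whom
    affected: list[str]          # for whom
    settings: list[str]          # in which settings
    combined_with: list[str] = field(default_factory=list)  # with which other technologies

# A hypothetical example record:
ctx = DeploymentContext(
    tool="risk-scoring dashboard",
    purpose="prioritise safeguarding case reviews",
    operators=["social workers"],
    affected=["children and families under review"],
    settings=["local-authority offices"],
    combined_with=["case-management database"],
)
print(ctx.tool, "->", ctx.purpose)
```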
Impact Assessments
We have developed a range of impact assessment methodologies to engage with stakeholders, designers, and decision-makers to collaboratively think through how different technologies and policies might impact fundamental rights as well as individual, social and cultural well-being.
These range from broader ethical impact assessments, which focus on a range of values, to proportionality assessments, which focus on the particular circumstances of an individual or community and respond to their needs.
Through this contextualising approach, we are able to define concrete actions – from design recommendations to expanded policy initiatives – to ensure those who use these tools or govern with these policies can enact ethical outcomes.
By focusing on needs and situations rather than on abstract ethical principles, we can emphasise some values over others in one set of circumstances and alter this ordering in another, when appropriate.
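To make this concrete, here is a minimal sketch of how context-dependent value orderings might be represented. The contexts, values and orderings are illustrative assumptions on our part, not Trilateral’s actual weighting scheme.

```python
# Illustrative only: the contexts and orderings below are assumptions,
# not Trilateral's actual weighting scheme.
VALUE_PRIORITIES = {
    # In a child-safeguarding tool, the child's privacy may come first.
    "child_safeguarding": ["privacy", "dignity", "autonomy", "efficiency"],
    # In an emergency-response tool, timeliness may outrank privacy.
    "emergency_response": ["dignity", "efficiency", "autonomy", "privacy"],
}
DEFAULT_ORDER = ["dignity", "autonomy", "privacy", "efficiency"]

def ordered_values(context: str) -> list[str]:
    """Return the value ordering appropriate to a given deployment context."""
    return VALUE_PRIORITIES.get(context, DEFAULT_ORDER)

print(ordered_values("child_safeguarding"))
# ['privacy', 'dignity', 'autonomy', 'efficiency']
```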
One measure integral to carrying out an ethical impact assessment in the early stages of technology development is the TRI Touchpoint Table (used, for example, in the ROXANNE and PREVISION projects). TRI researchers developed a procedure for working alongside technology developers to identify risks efficiently and accurately, evaluate their severity and likelihood of occurrence, and determine mitigation measures.
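The Touchpoint Table itself is not detailed here, but the general pattern it describes, a register of risks scored by severity and likelihood with a mitigation for each, can be sketched as follows. The field names, the 1-5 scales and the example risks are our assumptions, not TRI’s actual format.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a touchpoint-style risk register. Illustrative only:
    field names and 1-5 scales are assumptions, not TRI's actual format."""
    risk: str
    severity: int     # 1 (negligible) to 5 (critical)
    likelihood: int   # 1 (rare) to 5 (almost certain)
    mitigation: str

    @property
    def priority(self) -> int:
        # A common convention: priority as severity x likelihood.
        return self.severity * self.likelihood

register = [
    RiskEntry("Model misidentifies individuals in audio data",
              severity=5, likelihood=2,
              mitigation="Require human review before acting on an identification"),
    RiskEntry("Training data under-represents minority dialects",
              severity=4, likelihood=3,
              mitigation="Audit dataset composition; rebalance or document gaps"),
]

# Address the highest-priority risks first.
for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(f"[{entry.priority:>2}] {entry.risk} -> {entry.mitigation}")
```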
Working directly with policymakers and practitioners
At Trilateral, we believe it is vital that the outcomes of our work in applying ethical principles to technology design are shared for future scientific research. We often publish in peer-reviewed journals and give presentations at industry and academic conferences, but perhaps most importantly, we engage policymakers to influence current and future policies, laws and standards.
For instance, in the SIENNA project, Trilateral explores the ethics of Artificial Intelligence and has conducted an in-depth analysis of ethical issues related to AI. On the basis of this analysis, we have responded to a series of public consultations (e.g., a UNESCO consultation) and are engaging with policymakers to ensure ethical considerations are placed at the centre of this technology’s development. Engaging policymakers with context-sensitive ethical considerations in this way directly helps catalyse ethics into law.
Finally, we also hold live training sessions for practitioners to share domain expertise and ethical insight. These sessions are often essential for building an understanding of the context and the concrete details of a particular practice or new technology.
Technology development
In developing Trilateral’s own tool, STRIAD®, we have woven the process of assessing ethical impacts together with co-design activities.
STRIAD® merges data analytics and machine learning algorithms to help end-users efficiently and effectively gather and visualise information that is often scattered across disparate sources and agencies.
Two ongoing projects utilise the tool in the fields of human security and safeguarding children. In designing and developing STRIAD®, ethics researchers, technical developers and end-users collaborate on conducting an ethical impact assessment and on implementing its recommendations through technical functional requirements, algorithmic-transparency approaches, training materials for end-users on how (and how not) to work with outputs, and related suggestions for organisational protocols on managing digital ethics. This has also meant designing non-technical measures, such as training materials and standards of practice with the tools, to enhance end-users’ ethical reflection on the nuances of their situation.
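STRIAD®’s internals are not public, so the following is only a rough sketch of the general pattern described above: combining records from disparate sources into a unified view while keeping each record transparent about its provenance. The source names and fields are hypothetical.

```python
import pandas as pd

# Hypothetical sources and fields; STRIAD(R)'s internals are not public.
agency_a = pd.DataFrame({
    "case_id": [101, 102],
    "note": ["Referral received", "School contacted"],
})
agency_b = pd.DataFrame({
    "case_id": [101, 103],
    "note": ["Home visit logged", "Case opened"],
})

# Tag each record with its origin before combining, so the unified view
# stays transparent about where each piece of information came from.
agency_a["source"] = "agency_a"
agency_b["source"] = "agency_b"

unified = pd.concat([agency_a, agency_b], ignore_index=True).sort_values("case_id")
print(unified.to_string(index=False))
```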
Ethics as a way to build sustainability
Our aim at Trilateral is to ensure the sustainable design of technological tools and research methods in order to make a lasting societal impact. By pushing design and policy to address grounded ethics, we seek to earn the trust of stakeholders and communities, combining their needs and concerns with careful consideration of how engaging human rights and societal values can produce lasting benefits.
For more information, contact our team.