Ethical AI

At Trilateral Research, Ethical AI is AI built in line with, and designed to support, ethical values. We focus on transparency and accountability to develop a human-centred approach, tailored to each client’s needs, that effectively supports their decision-making.

In our products, we emphasise:

  • Human agency & oversight
  • Privacy & data governance
  • Explainability
  • Transparency
  • Traceability
  • Accountability

Our tools help clients manage the complex relationship between people, processes and technology, ensuring that AI is implemented ethically and sustainably.

To tailor our ethical solutions to each client, we adopt an interdisciplinary approach that focuses on the capabilities of technology and the way people will use it in the context of the problem to be solved.

Our large team of research scientists includes experts across a wide variety of domains, so we can help you build a comprehensive understanding of your societal problem and use data-driven decision-making to address it.

Our co-design methods elicit input from end users and stakeholders to maximise ethical, societal and technological opportunities while minimising risk, resulting in practical ethical AI solutions to the sociotechnical and ethical complexities we face every day. In addition, our ethics monitoring and ethical impact assessments provide internal documentation that demonstrates your commitment to the ethical use of AI and builds trust with your customers.

What does the ethics of technology mean?

Our teams include social scientists, data scientists, and experts in ethics, law and human rights who work across the divide between technical and social disciplines. We apply rigorous, cutting-edge research when developing and assessing new technologies to ensure they achieve sustainable innovation and measurable impact.

Our shared responsibility model

Ethical AI is a shared responsibility between Trilateral Research and our clients. Sharing responsibility alleviates the client’s burden as Trilateral creates AI tools that:

  • Establish transparency regarding data collection, data processing and scope of use
  • Identify and mitigate data bias
  • Provide clear lines of accountability
  • Protect the privacy of human subjects
  • Encourage end users’ understanding and self-assessment of the products’ output

The client assumes responsibility for the ethical use and management of our tools. Sharing responsibility gives clients the flexibility and control they need to integrate and operationalise the tool in a business-as-usual environment.

AI tools are neither intrinsically good nor bad, but they can be used in ways that promote or undermine ethical values and fundamental rights. Consequently, clients are responsible for:

  • Integrating the product into their organisation
  • Complying with all applicable laws and legislation
  • Investigating the potential for operational biases
  • Self-assessing the AI output (“sanity checks”)
  • Understanding both the nuances of, and changes to, the operational environment in which they are using the tool
  • Explaining to affected stakeholders how the tool informed decision-making

We encourage clients to talk to us about their responsibilities. We offer ethics consulting services, including ethics roadmaps, assessments, audits, deep dives, ethics awareness training and ethics-driven strategies.