At Trilateral Research, equality and inclusiveness are central concerns as we build our Ethical AI solutions to tackle complex societal problems, because we understand how important these themes are to achieving a more just and fair society. Our Ethical AI solutions are built to be fair and ethically sound in their development process as much as in their final use, and each step makes a difference.
The composition of the Trilateral team is the first key element in ensuring that flaws and biases are not injected into our final products. At Trilateral, we are proud of our diverse team (54% of our staff are female) and of the strong female leadership across our teams (57% of our senior managers are female). This element is often overlooked when companies develop AI, for reasons such as the pressure to hire quickly or the smaller pool of female candidates with STEM backgrounds, but it is a key part of the Ethical AI methodology, helping to avoid the perpetuation of inherent biases and implicit prejudices.
Sourcing data that reflects the real-world situation is equally important, as algorithms are only as effective as the data they ingest. To that end, our research team is trained to identify and select the data items required for the solution to function correctly. For example, in the context of our STRIAD:HONEYCOMB solution for tackling human trafficking, the research team conducts targeted research to collect open-source data on demographics relevant to this specific context and ensures that the data is representative of the target population segment.
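To make this concrete, a representativeness check can be as simple as comparing the demographic make-up of a collected dataset against reference proportions for the target population segment. The sketch below is illustrative only: the reference proportions, the `gender` attribute, and the tolerance threshold are hypothetical placeholders, not Trilateral's actual data or pipeline.

```python
from collections import Counter

# Hypothetical reference proportions for the target population segment
# (illustrative numbers, not Trilateral data).
REFERENCE = {"female": 0.52, "male": 0.48}

def representativeness_report(records, attribute="gender", tolerance=0.05):
    """Compare the demographic make-up of a collected dataset against
    reference proportions, flagging groups whose share deviates beyond
    the tolerance, a signal that more targeted collection is needed."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    report = {}
    for group, expected in REFERENCE.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "within_tolerance": abs(observed - expected) <= tolerance,
        }
    return report

if __name__ == "__main__":
    # A deliberately skewed sample: the check flags the under-represented group.
    sample = [{"gender": "female"}] * 300 + [{"gender": "male"}] * 700
    for group, stats in representativeness_report(sample).items():
        print(group, stats)
```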
The analytics component is, of course, essential, as it is the core of the artificial intelligence solution. Here, it is critical to assess the gender bias that may emerge from imperfect datasets or from the incorrect configuration of algorithms. For all our products, our data science team works in partnership with our ethics team to ensure that all metrics are correctly set up. For example, our CESIUM application for child safeguarding takes due account of the uneven exposure of girls and boys to sexual exploitation, ensuring that the end user is not prompted with decision-support scores that are intrinsically unbalanced towards a particular demographic.
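One common signal of the kind of imbalance described above is a large gap between per-group averages of the decision-support scores. The sketch below is a generic illustration, not CESIUM's actual method: the scores, group labels, and `score_balance_by_group` helper are hypothetical, and in practice any gap has to be interpreted against genuine differences in exposure rather than read as bias in itself.

```python
import statistics

def score_balance_by_group(scores, groups):
    """Group decision-support scores by a protected attribute and report
    per-group means plus the largest pairwise gap, a simple signal that
    scores may be skewed towards one demographic."""
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    means = {group: statistics.mean(values) for group, values in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

if __name__ == "__main__":
    # Illustrative scores only; real safeguarding data is far richer.
    scores = [0.71, 0.65, 0.80, 0.40, 0.38, 0.45]
    groups = ["girl", "girl", "girl", "boy", "boy", "boy"]
    means, gap = score_balance_by_group(scores, groups)
    print(means, f"largest gap: {gap:.2f}")
```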
How data is presented is equally critical in the Ethical AI process, and it is key to ensuring that no gender or other bias is transmitted to the end user. Our data visualisation techniques and user interfaces are designed to minimise the risk of misinterpretation and to provide the fullest possible picture of the relevant demographics. For example, our CESIUM application displays gender percentiles side by side so that the user has a full understanding of how exploitation risk is distributed. Similarly, filters are in place to narrow down the risk assessment.
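As a rough illustration of that side-by-side presentation, the sketch below uses matplotlib to plot hypothetical risk-score percentiles for girls and boys next to each other. The numbers are invented for the example and do not come from CESIUM.

```python
import matplotlib.pyplot as plt

# Invented percentile values for illustration, not CESIUM output.
percentiles = ["25th", "50th", "75th", "90th"]
girls = [0.22, 0.41, 0.63, 0.85]
boys = [0.18, 0.35, 0.55, 0.74]

positions = range(len(percentiles))
width = 0.38

fig, ax = plt.subplots()
# Plot the two groups side by side so neither distribution hides the other.
ax.bar([p - width / 2 for p in positions], girls, width, label="Girls")
ax.bar([p + width / 2 for p in positions], boys, width, label="Boys")
ax.set_xticks(list(positions))
ax.set_xticklabels(percentiles)
ax.set_ylabel("Risk score")
ax.set_title("Risk score percentiles by gender (illustrative)")
ax.legend()
plt.show()
```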
Finally, the Ethical AI methodology extends to periodic checks on the health of AI systems. All our solutions come with compulsory, periodic AI retraining steps, as well as end-user training on their use. This ensures that data is displayed and read in the most neutral and inclusive way. Our Ethical Innovation Team also offers this capability as a service to external clients in the employment field, since gender discrimination remains a significant contributor to inequality in recruiting and hiring practices, and AI tools often fail to improve this picture because they carry existing biases into the digital automation domain.
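A periodic health check of this kind can be reduced to a simple rule: retrain when a fixed interval has elapsed or when a monitored fairness gap drifts past its baseline. The sketch below is a minimal, hypothetical illustration; the `retraining_due` helper, the 90-day interval, and the gap threshold are assumptions made for the example, not Trilateral's actual schedule.

```python
from datetime import date, timedelta

def retraining_due(last_trained, interval_days=90,
                   baseline_gap=0.05, current_gap=None):
    """Flag a model for retraining when either a fixed interval has
    elapsed or a monitored fairness gap has drifted past its baseline."""
    overdue = date.today() - last_trained > timedelta(days=interval_days)
    drifted = current_gap is not None and current_gap > baseline_gap
    return overdue or drifted

if __name__ == "__main__":
    # A model last trained in January whose monitored score gap has grown to 0.08.
    print(retraining_due(date(2022, 1, 15), current_gap=0.08))
```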
At Trilateral, we strive for our Ethical AI tools to make a notable and practical contribution towards gender equality, and we are glad to see how our work makes a difference in people’s lives.