Get the latest insights from across Trilateral in our new monthly article, featuring developments from our innovation, research and project teams.
AI and policing – how can we ensure ethical and sustainable practices?
In October 2023, the EU-funded popAI project marked its conclusion after two years of working to ensure AI law enforcement tools respect citizens’ rights while achieving their intended goals.
Trilateral’s work in the project involved mapping the ecosystem of controversial AI policing cases, identifying and bringing diverse stakeholders together to discuss their concerns, needs, and recommendations. We considered a range of perspectives on AI and policing, using them to build targeted, ethical, and sustainable recommendations that satisfy law enforcement needs and avoid discrimination. By looking beyond innovators and technologists, the ecosystem approach enables a wider view of the impacts and opportunities of security AI, across domains such as crime prevention, crime investigation, migration and border control, the administration of justice, law enforcement cyber operations, and the training of law enforcement agents. As societies grapple with the potential and hazards of AI, this work is vital to defending citizens’ interests and safety.
Read our analysis of the controversial use of AI in policing here.
Cybersecurity – the importance of raising awareness and reporting cyberattacks
October was National Cybersecurity Awareness Month, an annual campaign dedicated to raising awareness of the importance of cybersecurity.
The EU-funded CYBERSPACE project endeavours to provide policymakers, Law Enforcement Agencies (LEAs) and the private sector with a more comprehensive understanding of cyberattacks and cybercrime in the EU.
Trilateral’s involvement in the project includes supporting the preparation of training and instructional materials for citizens that will be used throughout the project to raise awareness of cyber threats and how to report them. The materials will improve public awareness and encourage stakeholders to report cyberattacks to LEAs and Computer Emergency Response Teams (CERTs).
We are also collecting signatures and declarations of support on increasing the reporting of cybercrime from representatives of LEAs, industries, and associations across the EU at the local, regional, and national levels. By signing the Declaration, stakeholders are contributing significantly to the collective goal of amplifying and refining the reporting of cybercrime across the EU.
TRUST aWARE – a security and privacy dashboard to better identify online risks
The development of emerging technologies goes hand in hand with their potential misuse, highlighting the increasing importance being placed on ethics in the cybersecurity space.
Trilateral’s Tim Jacquemard of the Cybersecurity Research Cluster recently joined TRUST aWARE partners at EmpoderaLIVE 2023 in Malaga, the leading event on Digital Citizen Sovereignty and Civic Technology solutions in Europe.
Our work in the TRUST aWARE project, which aims to enhance the position of the EU’s digital market and increase people’s trust in software, includes developing a security and privacy dashboard to help users identify online risks along with educating them on positive and possible preventative measures.
During the event in Malaga, Tim introduced the advancements made in the design of the TRUST aWARE dashboard. Developed by the same technical teams who designed Trilateral’s very own Honeycomb dashboard, it empowers users to take control of their online presence by assessing device security and privacy settings.
Watch these videos for more insights on cybersecurity and how you can protect your privacy.
Improving public safety and security – Trilateral takes the stage at Security Research Event
Colleagues from Trilateral Research’s Law Enforcement and Community Safeguarding and Crisis and Security clusters demonstrated the power of their research in projects like TRANSCEND, DARLENE and popAI to an audience of police, technicians, and researchers at the European Commission’s Security Research Event last October. Understanding how this research fits stakeholders’ needs ensures that Trilateral’s ethical values are advanced within the AI research space. As well as highlighting synergies and positives, our researchers came away with a clearer understanding of the gaps we can address to improve public safety and security.
Trilateral’s team talked with interested participants about TRANSCEND’s work on mapping and understanding the landscape of impact assessment methodologies – i.e., understanding what methods are used to help us assess the impact technology has on different groups. This research is foundational for better enabling citizens to participate actively and creatively in the iterative development of security research and technology development.
Read more about our findings on impact assessment methodologies.
AI & healthcare – ethical, legal and social challenges
Dr Lem Ngongalah, Research Analyst with Trilateral’s Health cluster, recently published an article walking readers through the complex challenges associated with integrating AI into healthcare, drawing on our company-wide expertise in ethical and responsible AI.
She first identifies privacy and data protection, data usage, and biased AI as major ethical challenges, and offers some solutions: securing patient data, obtaining informed consent, and using diverse data when training AI. When it comes to legal considerations, the major challenges are data ownership (who has the right to patients’ data?), regulatory compliance (how do we ensure AI tools adhere to the long list of data, privacy, and medical research regulations?), and liability (who gets the blame when AI makes a mistake?). The article also flags social considerations, such as healthcare access, job impacts, trust in healthcare, and the importance of consulting stakeholders, and encourages the designers of AI healthcare tools to consider these dimensions from the early days of development.
The article was published by the EU-funded PREPARE project, which creates AI-powered tools to assist doctors working in physical rehabilitation by analysing complex data sets and considering patients’ personal and medical histories to model outcomes and identify appropriate interventions.
Read the full article here.
How can explainable AI support the detection of skin cancer?
Ilaria Bonavita, leading Trilateral’s Data Science, Research and Sociotech Innovation team, recently presented on ethical AI and explainable AI (XAI) at a workshop in Madrid. The workshop took place within the context of the iToBoS project, which aims to develop an AI-powered total body scanner for early detection of melanoma (skin cancer). Ilaria spoke about the importance of ethical AI and XAI, with a focus on understanding the different explainability requirements of all the actors involved in the development and use of the body scanner (i.e., medical professionals, technical developers and patients). She also spoke about the importance of ongoing engagement between researchers, developers and users to ensure that ethical and explainable AI principles are operationalised at each stage of the AI development process. Only by creating AI-powered healthcare that is both ethical and explainable can we achieve the most socially responsible and impactful results. In the context of iToBoS, this means creating a technology that is not only the best it can be at detecting melanoma, but also as user-friendly and easily interpretable as possible, translating to faster diagnosis, faster treatment, and, ultimately, lives saved.
Read more about the project here.
Carbon capture technologies – recommendations for responsible practices
In September 2023, our research analysts submitted their feedback to the European Commission’s call for evidence for the Industrial Carbon Management initiative on carbon capture, utilisation and storage. Our work in the TechEthos project uncovered gaps and grey areas in the legal and policy frameworks for international management of carbon dioxide removal and storage. With these technologies becoming increasingly vital to achieving the Paris Agreement targets, the Commission’s initiative to develop a comprehensive strategy is a welcome step towards responsible and effective use of carbon capture and storage technology. One of our key recommendations is establishing a clear definition of the term ‘carbon capture and storage’. Read the rest in our submission, available here.
Our findings from TechEthos’ exploration of the legal, ethical and policy implications of technology like climate engineering continue to be developed into recommendations for legal experts and policymakers.
You can read more of Trilateral’s recommendations here.
How can ethical AI support manufacturing?
In July 2023, Trilateral’s Ethics, Human Rights and Emerging Tech Cluster Lead Agata Gurzawska and Christopher Fischer attended the 10th ECCOMAS Thematic Conference on Smart Structures and Materials, where they presented on the ethical opportunities of AI in industrial manufacturing. Their contributions resulted in an article that investigates the importance of designing trustworthy AI tools for the manufacturing sector while emphasising the practical implementation of AI ethical principles. This work highlights the organisational and technical solutions available, as well as the importance of corporate social responsibility and responsible research and innovation.
The article was based on the work undertaken in the OPTIMAI project, which aims to create a new European industry ecosystem focused on the development of new solutions to optimise production, reduce defects and improve training. Within this context, Trilateral Research is working to ensure that the technologies are designed through a lens of inclusion, accessibility, equality, and sustainability.
Take a look at the article here.
Navigating the path to responsible and trustworthy AI
Trilateral researchers have recently collaborated with Innovate UK and Bridge AI to publish a report, ‘Core Principles and Opportunities for Responsible and Trustworthy AI’ (RTAI). In the report we provide an RTAI framework for organisations across the UK, with the aim of advancing the UK to the forefront of artificial intelligence.
With insights relevant for both the public and private sectors, our report provides a common frame of reference for core principles, key innovation priorities, commercial opportunities, and policy and standards development relating to RTAI. Ultimately, we provide concrete takeaways that UK organisations – including industry, policymakers and research funders – can use to confidently identify, implement and support RTAI solutions.
To achieve fully responsible and trustworthy AI, there is a need for innovations in responsible and trustworthy data, fundamentals for AI assurance, sustainable AI, and the development of sociotechnical AI professionals and research infrastructures.
For more insights, read the full report here.
If you’d like to find out more about the groundbreaking research and development we’re involved in, visit our website. If you’d like to find out more about how we could support your organisation with research and development, get in touch.