Recent innovations in machine learning have improved a wide variety of computer-aided tasks, but machine learning systems also present new challenges, new risks, and new avenues for attackers. Researchers should therefore consider how machine learning may shape our environment in ways that could be harmful.
Within the SHERPA project, Trilateral has been exploring the implications of attacks against Smart Information Systems and how they differ from attacks against traditional systems.
In their report, Security Issues, Dangers and Implications of Smart Information Systems, Trilateral and SHERPA project partners F-Secure and the University of Twente explore how flaws and biases might be introduced into machine learning models, how machine learning techniques might be used for offensive or malicious purposes in the future, how machine learning models can be attacked, and how such attacks can currently be mitigated.
The results of this study were presented by Andrew Patel (F-Secure) in a webinar on 11 March 2020, as part of the project's webinar series.

Security Issues, Dangers and Implications of Smart Information Systems
The report explores the implications of attacks against machine learning systems and how they differ from attacks against traditional systems, with a particular focus on:
- Bad Artificial Intelligence (AI): ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
- Malicious use of AI: how artificial intelligence and data analysis methodologies might be used for malicious purposes
- Adversarial attacks against AI: ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks)
Bad AI
The capabilities of machine learning systems are often difficult for the layperson to grasp, and some people naively equate machine intelligence with human intelligence. As a result, people sometimes attempt to solve problems that simply cannot (or should not) be solved with machine learning. Even knowledgeable practitioners can inadvertently build systems that exhibit social bias because of the training data used.
If a machine learning model is poorly designed, poorly trained, or used incorrectly, flaws can arise. Designing and training machine learning models is a complex process, and there are numerous ways in which flaws can be introduced. Common flaws fall into three categories (the second is illustrated in the sketch after this list):
- incorrect design decisions
- deficiencies in training data
- incorrect utilization choices
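As a simple illustration of the second category, the sketch below (invented for illustration; it is not code or data from the report) shows how a training-data deficiency, in this case severe class imbalance, can produce a model that looks accurate overall while systematically failing on the under-represented class:

```python
# Illustrative sketch only: a training-data deficiency (severe class imbalance)
# makes headline accuracy look good while the model fails on the minority class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 95% of the data comes from class 0; class 1 is barely represented.
n_majority, n_minority = 950, 50
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_majority, 2)),
    rng.normal(loc=1.5, scale=1.0, size=(n_minority, 2)),
])
y = np.concatenate([np.zeros(n_majority), np.ones(n_minority)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# Overall accuracy looks respectable, but recall on the minority class exposes the flaw.
print("accuracy:             ", round(accuracy_score(y_test, pred), 3))
print("minority-class recall:", round(recall_score(y_test, pred), 3))
```

The same pattern, respectable aggregate metrics masking poor performance on an under-represented group, is one of the ways social bias enters deployed systems.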
Malicious use of AI
The tools and resources needed to create sophisticated machine learning models have become readily available over the last few years. Powerful frameworks for creating neural networks are freely available, and easy to use. Public cloud services offer large amounts of computing resources at inexpensive rates. More and more public data is available, and cutting-edge techniques are freely shared.
Data analysis and machine learning methods are powerful tools that can be used for both benign and malicious purposes.
Organizations known to perpetrate malicious activity (cybercriminals, disinformation organizations, and nation-states) are technically capable of familiarizing themselves with these frameworks and techniques, and may already be using them.
Potential malicious uses of artificial intelligence include:
- intelligent automation
- analytics
- disinformation and fake news
- phishing and spam
- synthesis of audio, visual, and text content
- obfuscation
Adversarial attacks against AI
As more and more important decisions are made with the aid of machine-learning-powered systems, it will become crucial for us to be able to explain how those models make decisions, understand whether flaws or biases exist in those models, and determine whether and to what extent the outputs of those models can be affected by attacks.
As human involvement in decision processes continues to decline, it is only natural to assume that adversaries will become increasingly interested in learning how to fool machine learning models.
Indeed, this process is well underway. Search engine optimization attacks, which have been conducted for decades, are a prime example. The algorithms that drive social network recommendation systems have also been under attack for many years. On the cybersecurity front, adversaries are constantly developing new ways to fool spam filtering and anomaly detection systems.
As more systems adopt machine learning techniques, we should expect new, previously unseen attacks to surface.
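To make the idea of an adversarial attack concrete, the sketch below (an illustrative example, not code from the report) mounts a simple evasion attack in the style of the fast gradient sign method against a logistic-regression classifier: the attacker shifts an input in the direction that most increases the model's loss, flipping its prediction with only a small perturbation.

```python
# Illustrative evasion attack in the style of the fast gradient sign method (FGSM),
# applied to a simple logistic-regression model rather than a deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two well-separated classes; the classifier itself is perfectly reasonable.
X = np.vstack([rng.normal(-2, 1, size=(200, 2)), rng.normal(2, 1, size=(200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])
model = LogisticRegression().fit(X, y)

def fgsm_perturb(x, label, eps):
    """Shift x by eps in the sign of the gradient of the log-loss w.r.t. the input."""
    w, b = model.coef_[0], model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - label) * w                  # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([-1.0, -1.0])                    # clearly a class-0 input
x_adv = fgsm_perturb(x, label=0.0, eps=1.5)   # small, bounded perturbation

print("original prediction: ", int(model.predict(x.reshape(1, -1))[0]))
print("perturbed prediction:", int(model.predict(x_adv.reshape(1, -1))[0]))
```

Against deep networks the principle is the same; the gradient with respect to the input is simply obtained by backpropagation.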
Our research
SHERPA is an EU-funded project which analyses how AI and big data analytics impact ethics and human rights. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges to find desirable and sustainable solutions that can benefit both innovators and society.
To contribute to this debate, Trilateral Research is examining the ethical and human rights challenges of Smart Information Systems through case studies and future scenarios, as well as through a Delphi study that will involve more than 60 European experts in a two-step survey to explore regulatory options for the future.
For more information on our work in this research area, please contact our team.