Did you know that your favourite streaming platform uses an artificially intelligent algorithm to customize its thumbnail images of shows and movies based on your past viewing history?
Did you know that your identity can be verified by not only your fingerprints and your face, but also your iris, your voice, the pattern of veins on your palm, your knuckles, your body shape, and the way that you walk?
Did you know that the rights to build mobile phone towers on the moon have already been granted, or that the first space hotel—yes, a hotel built in outer space—is planned to open in 2027?
It’s obvious to anyone that we live in a society embedded with technologies—in our pockets, in our homes, in our TVs, in schools and universities, at the doctor’s office, in cars, planes and trains, in outer space—and they range from the mundane to the truly fascinating.
What’s not always as obvious is that these technologies are not value-neutral. When we choose to create and use a technology because it is, for example, more efficient, we have valued efficiency. When we choose to put new technologies at border crossing points to identify travellers more accurately, we have valued security and speed. Furthermore, when we choose to pursue a particular value, we often pursue it at the cost of another value.
On our streaming platform we might choose (or allow the choice of) entertainment over privacy; at the border, security over inclusivity; and in space, adventure and discovery over sustainability.
If technologies are found everywhere in society, and these technologies always include ethical values, then ethical values are found everywhere in society. Moreover, choosing to develop, sell and use particular technologies implicitly promotes some values at the cost of others.
When we want to think about ethical values and the ethical questions that arise with the use of new technologies, we might first be at a loss as to where to begin: How do I think about the ethics of new technologies? Which ethical values are relevant? Which values have priority? What does the ethics of technology even mean?
How do we begin answering these questions?
Zachary Goldberg addressed these questions in his TEDx Kassel talk on 9 October 2021; view the recording here.
At Trilateral we ask these questions because we recognise that ethical values are intrinsic to the development and use of new technologies like AI. Yet at the same time, we know that AI is not inherently ethically good or bad.
In developing technology at Trilateral, we take responsibility for ensuring that our AI tools are used for societal good, that we identify dataset bias to mitigate any corresponding discriminatory effects, that we communicate to our clients precisely how our AI tools work, and that our AI enhances rather than replaces human decision-making.
“At Trilateral, we take a discerning eye to the development of new AI solutions, prioritising privacy, data protection and data ethics as key values that are embedded within our development process. Zack is leading our cross-organisational efforts in the area of Explainable AI (XAI). His TEDx talk, providing insights into how to think about the ethics of technology, also reveals why thinking about the ethics of technology is essential for today’s society,” says Kush Wadhwa, CEO at Trilateral Research.
Trilateral Research adopts a human-centred, bespoke approach to developing ethical and explainable AI, with the objective of protecting and promoting EU and UK societal values, the fundamental rights of individuals, and societal wellbeing.
For more information please contact our team.