The field of AI ethics has gained significant relevance as a response to the range of individual and societal harms that can be caused by misused or poorly designed AI systems.
Developing artificial intelligence also means thinking about the implications of its applications for people's everyday lives, a growing concern that demands 'responsible AI' built on transparency.
To manage these impacts responsibly and steer the development of AI systems, ethics and safety must be a priority. This involves integrating the social implications of artificial intelligence into every design and development stage. Hence, a framework that supports, underpins and motivates the responsible design and use of AI is needed.
Several ethical values apply to artificial intelligence; at AI Shepherds, three of them are priorities when we develop our projects:
Like any technology, AI solutions can both empower and harm humans.
Our solutions must be mindful of their impact on humans and respect fundamental rights. We want to provide solutions that improve our lives and whose impact can be measured by our clients.
The intended purpose of an AI application – what the AI solution will deliver, to whom and for whom – must be clearly defined, and the AI used solely to achieve that goal.
Assessing the impact of AI solutions on people helps identify not only their benefits but also their risks, such as social impact or harm arising from inadequate or inappropriate use. Assessing a new technology's impact before it is implemented can also help identify unintended side effects and mitigate them.
As more AI-enhanced applications seep into our daily lives and extend their reach to larger populations around the world, we need to clearly understand the vulnerabilities they may have, based on the data used during their development.
AI needs to learn from historical data to be more accurate. However, data and statistics can reflect a biased or incorrect perspective and this can result in discrimination against certain population groups.
While automated decision systems can provide greater efficiency and consistency, they also open up the possibility of new forms of discrimination that may be harder to identify and address. It is therefore necessary to identify any unfair biases that could lead to inappropriate outcomes in the decision-making context, and to present possible correction scenarios in order to eliminate them.
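One way to make such unfair biases visible is to compare outcome rates across population groups. The following sketch computes demographic parity difference, a common fairness metric; the group labels, decisions and threshold for concern are all illustrative assumptions, not part of any AI Shepherds methodology.

```python
# Hypothetical sketch: demographic parity difference, the gap between
# the highest and lowest positive-outcome rates across groups.
# A large gap flags a potential unfair bias worth investigating.

def demographic_parity_difference(decisions, groups):
    """decisions: list of 0/1 outcomes produced by the system.
    groups: list of group labels, one per decision."""
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Illustrative data: group "a" receives a positive decision 3 times
# out of 4, group "b" only 1 time out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A result of 0.0 would mean both groups receive positive outcomes at the same rate; here the 0.5 gap would prompt a closer look at the data and the decision logic before deployment.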
When thinking about fairness in the design and implementation of AI systems, it is important to always keep in mind that technologies, however neutral they may appear, are designed and produced by human beings, who are subject to the limitations and biases of their environments.
Human errors can also play a role in unfairness as prejudice and misjudgement can create biases at any point in the project delivery process.
AI systems must ensure equal rights of access and treatment to all individuals. It is therefore important to adopt a ‘best practice’ approach to data processing.
In addition, AI design and development teams should be diverse in gender, ethnicity, discipline and sensitivity to ethical issues.
It is our duty to advise on implementing a process that analyses constraints, requirements and decisions clearly and transparently.
The main purpose of AI should always be to provide information that improves decision-making. This implies that it is possible to trust and rely on that information.
AI must be transparent, so both its capabilities and its purpose must be openly communicated. Any action taken must be fully explained and audited, i.e. it must be accountable.
As part of the process, each and every person interacting with the AI must be clearly aware of the purpose of each action or project, how it works, what its impact is, and what their rights are, including the right to an explanation.
The ‘right to explanation’ has many references in the legal literature, including the European General Data Protection Regulation (GDPR), whose Article 15 grants data subjects access to “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of automated processing, stressing that providing this explanation must not adversely affect the rights of others.
It is therefore necessary to provide sufficient training to each person involved in the procedure, with the necessary documentation and explanations. Our responsibility is to make sure that the developed systems are explainable in a way that is adapted to the different stakeholders involved, starting with our client, all the way to the end user.
To be accountable, artificial intelligence systems must be transparent and able to support an understandable, coherent narrative that helps track how each decision was made.
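Such a narrative has to be recorded somewhere. The sketch below shows one possible shape for an append-only decision log that ties each automated decision to its inputs and stated reasons; the class name, fields and example data are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch: a minimal, append-only decision log that records
# the inputs, output and human-readable reasons for each automated
# decision, so the narrative behind it can be reconstructed and audited.
import json
from datetime import datetime, timezone

class DecisionLog:
    def __init__(self):
        self._records = []

    def record(self, inputs, decision, reasons):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,      # the data the system saw
            "decision": decision,  # what it decided
            "reasons": reasons,    # justification in plain language
        }
        self._records.append(entry)
        return entry

    def export(self):
        # A serialized trail that auditors and stakeholders can inspect.
        return json.dumps(self._records, indent=2)

log = DecisionLog()
log.record({"income": 32000, "tenure_years": 4},
           decision="approved",
           reasons=["income above threshold", "stable tenure"])
print(log.export())
```

In practice such a trail would live in tamper-evident storage, but even this simple structure makes it possible to answer, after the fact, what was decided, on what data, and why.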
AI Shepherds’ work will always apply each of these ethical values as a priority. If AI is not mindful of its impact, is not transparent and is not designed with fairness in mind, it cannot develop in a way that contributes positively to society.
Companies, governments and institutions must be aware of the need for ethical principles governing each AI project, each way of working and its implementation. Already in 2017, the European Parliament adopted a report on robotics that included a code of ethical conduct, and the EU has since worked on ethics guidelines for the responsible use of artificial intelligence. Governments in many countries have worked, or are working, along the same lines.
The technology is not the problem; the problem is how we instruct it and what we use it for. The uses of AI contribute to the development of society as long as they are guided by clear ethical principles such as mindful impact, fairness and explainability.