Artificial intelligence has gradually begun solving tasks in various spheres, either working alongside a human specialist or replacing them altogether. When it comes to productivity and accuracy, algorithms often outperform humans. Despite this, they do not possess any moral qualities, ethics in particular. This is one of the reasons why the adoption of such technologies has been slow. Since the world is moving towards widespread use of various “smart” machines, we will need to teach them ethics.
How to “nurture” a machine
Innovations in artificial intelligence have been held back mainly by things like strict government oversight or the sheer cost of such projects. There is yet another barrier: public outcry. Although many people agree that technology makes life better, and often enough preserves it through medicine or safety, society is still concerned about ethical issues. Many innovations are perceived as tools of surveillance and control, and therefore some people don’t want them around.
Almost 40% of people admit that artificial intelligence alarms them. According to The Harvard Gazette, as AI penetrates deeper into different industries and influences decision-making, ethical issues will become more and more acute. Principles of privacy are being violated, and there is a growing risk of excessive control, bias, and discrimination in artificial intelligence decisions.
A study of voice services from Amazon, Apple, Google, IBM, and Microsoft found that their artificial intelligence was less accurate at recognizing the voices of Black Americans than those of speakers of other races. Likewise, image-analysis systems from Microsoft and IBM recognize white men better than women: the error rate for photos of women reached 35%. Both of these examples can be perceived as bias and discrimination by neural networks, even if it is not intentional.
In a world where the equality agenda dominates, such situations become a stain on a business’s reputation and sometimes lead to scandals. In 2015, Google had to publicly apologize after its image recognition neural network labeled a Black person as a “gorilla.”
So what exactly is ethical artificial intelligence?
Analysts from Deloitte believe that artificial intelligence must be transparent and responsible. Experts from Pew Research emphasize principles of accountability and fairness.
That means that facial recognition systems, for example, should identify people of different races and genders with the same precision.
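In practice, “the same precision” can be checked by comparing error rates across demographic groups. The sketch below is a minimal, hypothetical illustration of such a parity check; the group names, toy data, and 5% tolerance are assumptions for the example, not values from any real benchmark or vendor API.

```python
# Hypothetical sketch: does a recognition model's error rate differ
# too much between demographic groups? Data and threshold are
# illustrative assumptions, not real benchmark results.

def error_rate(predictions, labels):
    """Fraction of predictions that do not match the true labels."""
    wrong = sum(1 for p, t in zip(predictions, labels) if p != t)
    return wrong / len(labels)

def parity_gap(results_by_group):
    """Per-group error rates and the largest gap between any two groups."""
    rates = {g: error_rate(p, t) for g, (p, t) in results_by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

# Toy evaluation results: (model predictions, ground-truth identities)
results = {
    "group_a": ([1, 0, 1, 1], [1, 0, 1, 1]),  # all correct: 0% error
    "group_b": ([1, 0, 0, 1], [1, 1, 1, 1]),  # two wrong: 50% error
}

rates, gap = parity_gap(results)
print(rates)        # {'group_a': 0.0, 'group_b': 0.5}
print(gap <= 0.05)  # False: the gap far exceeds a 5% tolerance
```

A real audit would use large, representative test sets and more refined fairness metrics, but the underlying comparison is the same: no group’s error rate should drift far from the others.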
Companies that develop any technology based on artificial intelligence should document the capabilities and limitations of their technology. Furthermore, any innovation can only be realized under human control; the machine must not make decisions on its own. It must not violate Asimov’s first law of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
There are other criteria for ethical artificial intelligence. For example, all of its decisions should be understandable to humans and remain within its defined scope of work. In the government sector, following the letter of the law comes foremost. In business, the priority lies in achieving the highest efficiency from all processes.
Who is going to do this?
In a survey from Capgemini, more than 70% of respondents believe that developing AI ethics should be up to the state: government authorities and independent industry bodies, such as the IEEE (the Institute of Electrical and Electronics Engineers). They should work together to establish principles that will guide the proper use of artificial intelligence. In Europe, the authorities have already brought this issue to attention: the EU has developed a program on the ethical aspects of AI and robotics.
Ideally, all parties that come into contact with such technologies should work together: governments, developers, businesses, and consumers. Some have already applied these practices: for example, Microsoft has a special department for fairness, accountability, transparency, and ethics in AI matters. Employees compile a checklist of requirements for the development of products and services based on artificial intelligence. As such, clients are able to see that the company is striving to keep its neural networks working without any sort of bias.
To control the ethics of artificial intelligence, a technology ombudsman may be needed: an independent party (one person or a group) who will also monitor developments and make suggestions.
Almost 70% of respondents from a Pew Research Center survey — developers, businesspeople, politicians — doubt that the idea of ethical artificial intelligence will become natural for everyone within the next 10 years. However, the world is indeed gradually moving in that direction. And the sooner each participant in the technological world decides to contribute to the development of ethical artificial intelligence, the less of an issue the acceptance of innovation in general will be.