Algorithms also discriminate against human beings: we can prevent it

As human beings we should be aware that we carry many biases, and that they are reflected in the decisions we make every day. The problem extends to our creations, including Artificial Intelligence (AI): algorithms can reproduce prejudices as human as the tendency to discriminate by gender or race.

Algorithms are supposed to be neutral, objective mathematical procedures. So how can this happen?

Consider just one example. A 2018 study showed that algorithms can also be racist: according to its findings, systems trained on face databases composed mostly of white faces (between 80% and 85%) run a 34% risk of misclassifying people of color.
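
To see how such a skew arises, consider a minimal sketch in Python with synthetic data. The group labels, distributions and numbers here are illustrative assumptions, not figures from the 2018 study: a model trained on an 85/15 split learns the majority group's patterns and misclassifies the minority group more often.

```python
# Minimal sketch (synthetic data): a classifier trained on a group-imbalanced
# dataset fits the majority group's patterns and errs more on the minority.
# Group names, distributions and percentages are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features whose relationship to the label differs by group.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 1.5 * shift).astype(int)
    return X, y

# Training set: 85% majority group, 15% minority group (mimicking the
# 80-85% imbalance described above).
X_maj, y_maj = make_group(850, shift=0.0)
X_min, y_min = make_group(150, shift=1.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on balanced held-out samples from each group: the minority
# group's error rate comes out markedly higher.
for name, shift in [("majority", 0.0), ("minority", 1.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name} group error rate: {1 - model.score(X_test, y_test):.1%}")
```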

The problem becomes dramatic when it affects people’s lives. A well-known case is COMPAS, a system used in the United States to help decide whether detainees are granted parole. Although the algorithm does not take race as an input, its accuracy is skewed by historical differences between white and Black defendants in recorded crime data. The result, in the end, is an injustice toward the latter from a system that should be impartial.
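
The kind of disparity attributed to COMPAS can be surfaced with a simple audit after the fact. The sketch below uses hypothetical predictions and group labels to show that false positive rates can differ between groups even though the protected attribute is never an input to the model.

```python
# Made-up example of an after-the-fact fairness audit: compare the model's
# false positive rate (people wrongly flagged as high risk) across groups.
# The predictions and group labels below are hypothetical.
import numpy as np

def false_positive_rate(y_true, y_pred):
    # Among people who did NOT reoffend, how many were flagged high risk?
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = reoffended
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 1, 0])  # 1 = flagged high risk
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {fpr:.0%}")
```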

These considerations have opened a debate on how to program algorithms that respect fairness criteria without being discriminatory. The problem is not easy to solve. Programmers face not only technical issues but legal ones as well: during their work they must take into account legal definitions and the laws in force.

In Europe there is concern about the proper handling of sensitive data. Article 9 of the General Data Protection Regulation (GDPR) establishes that certain categories of personal data deserve special protection because processing them could entail significant risks to fundamental rights and freedoms. Data revealing political opinions, religious beliefs or trade union membership are considered sensitive, as are genetic and biometric data processed to uniquely identify a natural person, and data concerning a person’s health, sex life or sexual orientation.

How should we proceed when developing disease diagnostic systems that use sensitive data on a massive scale?

Undoubtedly, the security and privacy of the information will have to be considered from the design phase of the system. As the previous examples show, how data is processed throughout its life cycle has a direct impact on the accuracy of any algorithm based on machine learning or deep neural networks. For this reason, many of the ethical codes developed so far cover a very diverse range of topics.
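
By way of illustration, one common privacy-by-design measure is to pseudonymize direct identifiers before data enters the pipeline. The sketch below, with assumed field names and a placeholder key, uses a keyed hash so that the same person always maps to the same pseudonym without the raw identifier being stored.

```python
# Minimal privacy-by-design sketch: pseudonymize a direct identifier before
# it enters the data pipeline. A keyed hash (HMAC) maps the same patient to
# the same pseudonym without storing the raw ID. The key handling and field
# names are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"example-key"  # in production, load from a secrets manager

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "P-12345", "diagnosis_code": "E11.9"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```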

The Partnership on AI to Benefit People and Society identifies six areas of action: safety, transparency, labor, collaboration between humans and machines, social manipulation, and the common good.

Recipe for responsible AI

On May 22, 2019, the OECD adopted its AI Principles, which highlight the following points:

  • AI must benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems must be designed in a way that respects the rule of law, human rights, democratic values and diversity. They must also include appropriate safeguards – for example, allowing human intervention when necessary – to ensure a fair and just society.
  • There must be transparency and responsible disclosure around AI systems to ensure that people understand the results and can challenge them.
  • AI systems need to function robustly, safely and securely throughout their life cycle and potential risks need to be continually assessed and managed.
  • Organizations and individuals that develop, deploy or operate AI systems should be held accountable for their proper functioning in accordance with the aforementioned principles.

An ethical issue

As these principles make clear, talking about AI ethics goes beyond moral philosophy. The debate spans the social sciences, from public policy and economics to the governance mechanisms and institutions that ensure respect for human rights.

It is worth noting here the opinion survey that the Moral Machine initiative has carried out to assess the social acceptance of decision-making by autonomous vehicles.

The study recommends resisting the temptation to grant ourselves a moral superiority that would let us unilaterally define the foundations of an ethical AI. Building stable frameworks for reflection will require bringing together many areas of expertise, sustained dialogue, and work across disciplinary boundaries, so as to resolve doubts about our ability to develop AI for the well-being of humanity.
