Artificial intelligence and human rights

The application of AI carries dangers and must be carefully regulated

Artificial Intelligence (AI) is certainly one of the scientific fields in which the greatest progress has been made in recent years, and its growing popularity makes it increasingly visible in our daily lives.

AI aims to endow agents with intelligent behavior, such that the average user cannot tell whether a given result was produced by a machine or by a human.

The definition of intelligent behavior in a machine starts from rationality: objectives and performance measures are defined so that the agent can select the action that maximizes its performance.
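To make the notion of a performance measure concrete, here is a minimal sketch in Python; the actions and their scores are invented purely for illustration and do not come from any real system.

```python
# Minimal sketch of a rational agent: from the actions available to it,
# it selects the one that maximizes a (here entirely made-up)
# performance measure tied to its objective.

def performance(action: str) -> float:
    """Hypothetical performance measure: how well each action serves the objective."""
    scores = {"wait": 0.1, "recommend_product": 0.7, "flag_fraud": 0.9}
    return scores.get(action, 0.0)

def choose_action(available_actions: list[str]) -> str:
    """A rational agent picks the action with the highest performance score."""
    return max(available_actions, key=performance)

print(choose_action(["wait", "recommend_product", "flag_fraud"]))  # -> "flag_fraud"
```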

There are different approaches to implementing intelligent behavior in machines; however, the machine learning approach has attracted the greatest interest.

Learning is the process by which a system improves its performance through experience. The human brain, a network of billions of interconnected neurons, and its learning process served as the inspiration for the computational model of Artificial Neural Networks (ANNs).

Training a machine consists of adjusting the network so that it performs a given function, based on representative examples of an activity provided as sets of input/output pairs.

In ANNs, the input and output data are numeric. Deep learning falls within this paradigm and adds further abstraction, making it possible to work with examples such as sound, images and natural language.
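As a rough illustration of this training process, the sketch below (Python with NumPy, written for this article rather than taken from any system mentioned here) adjusts the weights of a tiny network from four numeric input/output pairs encoding the XOR function; the layer size, learning rate and number of iterations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Representative examples of the activity: sets of input/output pairs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small network: 2 inputs, one hidden layer of 8 units, 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0  # learning rate (arbitrary choice)

for _ in range(10_000):                     # training loop
    hidden = sigmoid(X @ W1 + b1)           # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)         # network output
    # Backpropagation: nudge the weights in the direction that reduces the error.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;      b1 -= lr * d_hid.sum(axis=0)

print(out.round(2).ravel())  # after training, should be close to [0, 1, 1, 0]
```

In deep learning, many such layers are stacked, which is what allows raw numeric encodings of sound, images or text to be turned into progressively more abstract representations.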

The challenge of AI lies in its application to the most varied contexts. We are currently witnessing its deployment in the labor market, where employees are being replaced by machines: for example, in telemarketing, the analysis of human resources profiles, the analysis of medical diagnoses, stock control, fraud detection in the financial system, weather forecasting, product recommendations to customers and many other areas.

Several AI experts (e.g. Kai-Fu Lee) claim that, within the next 15 years, 40% of the world's jobs could be performed by machines.

In 2016, Microsoft created Tay, an AI profile on the social network Twitter aimed at users aged 18 to 24, which interacted with other users. Tay soon began to post racist and sexist opinions, offending people and reproducing hate speech. It is important to ensure the diversity of training examples in order to minimize the risk of algorithmic bias.

The Ministry of Justice intends to use AI to speed up the processing of cases. When discussing the subject with legal experts, I heard several questions about its application to criminal law. Can legal proceedings be parameterized and serve as examples for training a machine? The criminal law expert says no. I say yes, although case law and the human factor cannot easily be learned by a machine.

The application of AI carries dangers and must be carefully regulated.

 

The author, Tiago Candeias, is the director of the degree program in Computer Engineering at ISMAT.