We are afraid of the artificial intelligence of the future, but we do not understand the artificial intelligence of the present, says Gina Neff, a professor and researcher at Oxford. She studies the impact of big data and machine learning on human society.

What is artificial intelligence? 

Many people feel that this marketing name hides some sci-fi future full of robots who talk to us, have their own will, and slowly plot a rebellion against people. But neural networks – a more accurate term for most of today’s artificial intelligence – are all around us.

Neural networks already decide which emails you see and which fall into spam. Machine learning helps social networks choose the content that interests you. Fewer people know that trained neural networks also decide who gets a bank loan and how large it will be, sort résumés for large companies, help the police detect fraud, and suggest to judges who to release from prison and who to leave there.
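
To make the email example concrete, here is a minimal sketch of the kind of text classifier that sits behind a spam filter. It uses the free scikit-learn library; the four messages and their labels are invented for illustration, not taken from any real system.

```python
# A minimal spam-filter sketch: learn word patterns from labeled emails.
# The training messages and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap loans, limited offer, act now",
    "Can you review my draft before Friday?",
]
labels = ["spam", "ham", "spam", "ham"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Free offer, click now"]))        # likely ['spam']
print(model.predict(["Agenda for Friday's meeting"]))  # likely ['ham']
```

Real spam filters train on millions of messages, but the mechanism – generalizing from labeled examples – is the same.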

What can AI do?

Artificial intelligence can seem objective and inevitable; in fact, it is the result of many human decisions. We should care who makes those decisions and what rules they must follow. At the British Embassy, I talked with the researcher from the Oxford Internet Institute about frequent misconceptions surrounding the subject hidden behind the abbreviation “AI”.

Artificial intelligence is not a robot. It is, above all, generalization from data.

What are the common metaphors that people use to describe artificial intelligence? Do these metaphors prevent us from understanding what machine learning is?

Most often, it is the misleading comparison with the human brain. Almost every stock photo meant to illustrate artificial intelligence or machine learning shows a human brain. But most machine learning systems do not try to imitate the human brain; they work completely differently. Indeed, we are far from understanding the human brain well enough to build a functional replica of it.

Another problem is the term “superintelligence”. Today’s artificial intelligence is usually trained very strictly for a particular task. Just because a trained network is successful at that task does not mean the same algorithm can be applied to another problem. The term “intelligence” is misleading, because by it we usually mean something completely different.

Third, there is the idea of artificial intelligence as a robot. Of course, there are robots equipped with AI, but the vast majority of machine learning is not tied to a physical machine. These are the algorithms that determine what you see on social networks; it is speech recognition; it is a mail client that recognizes important emails. These are the places where people encounter machine learning on a daily basis.

When we misrepresent artificial intelligence, there is a risk that we will not recognize the impact machine learning already has. We would recognize a robot at first glance. But that Facebook learns what to show us in order to affect our mood does not even occur to us.

People talk about how to prepare for the artificial intelligence of the future. We are not ready for the one that is already here. We are afraid that a robot will rebel and try to rule the world. We do not realize – or we overlook – how distorted the data from which today’s decision-making models learn can be.

We should look at where and how systems collect data, how they analyze it, and what decisions they make. This is extremely important for the future.

What metaphor would you suggest for AI instead?

I like Stuart Russell’s metaphor. Imagine a group of engineers who specialize in working with asphalt. When they look for a way to connect two places, they choose an asphalt road. If you ask them what would help your garden, they will advise you to cover it with asphalt – it will be easier to maintain. When they see a beach, they say it should be paved with asphalt. They are good with asphalt, but that does not mean asphalt is the answer to everything.

I like the asphalt metaphor because it presents AI as a matter of decisions and infrastructure. Right now we are deciding what the rules on those asphalt roads will be, who will drive on them, and where they will lead. And those decisions are often not made publicly.

Machine learning is dominated by large companies that collect your data.

Machine learning is now in a phase of enthusiastic development. Researchers and companies are looking for what they can use deep learning for, or what they can apply a generative adversarial network to. It is similar to the past, when people were discovering the potential of electricity.

So what’s the problem with AI?

There is certainly a lot of experimentation now. And it is worth noting that not everyone can take part in it. We need to ensure that many more people can participate in the testing and in the debate.

Doesn’t everyone have the opportunity to use free tools to create a neural network?

Yes, but practically no one – apart from the large companies – has the data to actually train their models. Getting to a truly functional tool based on machine learning can be challenging. Facebook, Amazon, or Google have huge amounts of data.
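
To illustrate how low the tooling barrier actually is, here is a minimal sketch of defining and training a small neural network with the free PyTorch library. The random tensors stand in for the real training data that, as Neff points out, mostly only the large companies hold.

```python
# Defining a small network with free tools takes a few lines; the hard
# part is real data, which these random tensors merely stand in for.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(10, 32),  # 10 input features -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 2),   # 2 output classes
)

X = torch.randn(64, 10)         # stand-in: 64 random samples
y = torch.randint(0, 2, (64,))  # stand-in: random labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```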

Machine learning today is very narrowly specialized. Still, researchers hope that in the future a model capable of solving one type of problem can be retrained relatively quickly to solve another. For now, everyone is competing mainly over how much data they can collect. With the data, they want to get ahead of the others. And so they collect massive amounts of personal data, including health data.
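
The retraining she mentions already exists in a limited form as transfer learning. A hedged sketch with the free torchvision library: take a network pretrained on one task (ImageNet classification, with downloadable weights) and retrain only its final layer for a new, smaller problem with three invented classes.

```python
# Transfer learning sketch: reuse a pretrained image model for a new task.
import torch
from torch import nn
from torchvision import models

# Load a network pretrained on ImageNet (weights download on first use).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so only the new head will learn.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer: 1000 ImageNet classes -> 3 hypothetical classes.
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new layer's parameters are updated during retraining.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

This works between closely related tasks; retraining across genuinely different types of problems remains the open hope she describes.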

We need better rules for accessing and working with data. Data often has to be cleaned and annotated, frequently by external companies, and that is a long and costly process.

Before everyone talked about machine learning and artificial intelligence, the favorite term was “big data”. In both cases, the term is often overused. How did artificial intelligence follow on from the era of big data?

In 2019, together with researchers in Silicon Valley, we looked for the main threats to artificial intelligence research and application. One of the areas we identified was the gap between expectations and reality. People often have wrong, exaggerated ideas about what artificial intelligence can do.

This can pose a number of problems. If someone thinks that “artificial intelligence” is a super-smart computer, they may tend to over-trust it in matters where that trust is unfounded. When we feel that artificial intelligence knows more about the world than humans do, we may give such a system the power to make decisions. We may overlook the mistakes it makes and why it makes them.

As educators and specialists, we have a responsibility to explain how machine learning works and what problems are associated with it. Only then can we show where automated decision-making systems make mistakes.

Intelligence is not neutral just because it is artificial

What are the most important sources of error in such systems?

Artificial intelligence is a tool that looks for similarities and regularities in a large volume of data. The first problem arises when the data is not a good representation of the real world. The second is that there may be an error in the algorithm itself. The third type of problem is the inability to determine why the system decided the way it did.
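
A toy illustration of the first problem, with every number invented for the sketch: a model trained on biased historical decisions faithfully reproduces the bias.

```python
# Toy illustration: a model trained on biased historical loan decisions
# reproduces the bias. All numbers here are invented for the sketch.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [income_in_thousands, group], where group is 0 or 1.
# In the historical data, group 1 applicants were denied at every income.
X = np.array([
    [80, 0], [60, 0], [40, 0], [30, 0],
    [80, 1], [60, 1], [40, 1], [30, 1],
])
y = np.array([1, 1, 1, 0,   # group 0: approved above roughly 35k
              0, 0, 0, 0])  # group 1: always denied in the past

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two applicants with identical income, different group membership:
print(model.predict([[70, 0], [70, 1]]))  # prints [1 0]
```

The algorithm works exactly as designed; the unfairness comes from the data it was given.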

As humans, we should always know what led to a decision. It is not enough to know that the computer said something; we need to know how it happened. We must not simply accept that the computer approved one mortgage and refused another applicant. We need to know why, and based on what data. In traditional systems this has sometimes been hard to trace, but in systems based on machine learning it is often practically impossible.

That should bother us. It makes auditing impossible, it makes oversight impossible, and it opens the door to various kinds of manipulation. And not only deliberate manipulation – there can also be unintentional mistakes.
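
For contrast, a hedged sketch of what a justifiable decision can look like when the model is kept simple: with logistic regression, each learned coefficient states how a feature pushed the decision, something a deep network does not offer directly. The applicants and features below are invented.

```python
# With an interpretable model, the "why" of a decision can be read off:
# each coefficient shows how a feature pushes toward approval or denial.
# The applicants and features below are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt", "years_employed"]
X = np.array([[50, 5, 10], [30, 20, 1], [60, 2, 8],
              [25, 15, 0], [45, 8, 6], [35, 25, 2]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = mortgage approved

model = LogisticRegression().fit(X, y)

for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")  # sign shows the direction of influence
```

Keeping models auditable in this sense is a design choice; explaining a trained deep network after the fact is far harder.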

Programmers who train neural networks sometimes say that they feel more like breeders than programmers. They feed the system data and then wait to see what it learns and how it performs compared to others. Why one thing works and another does not, they often cannot say.

Exactly. Neural networks are often trained for a particular task. They are successful, but nobody can really say why. With some types of neural networks it might not matter. But when we look for ways to allocate mortgages or to find the best candidates on the labor market, we should care that such networks are built fairly, without unfounded prejudice. Otherwise, deploying such systems will lead to a society that is even less fair than the current one.

Neural networks can give such prejudices the illusion of objectivity. We expect prejudice from people, but not from a computer system. Isn’t there a danger that people will hide their decisions behind the system? “The computer decided!” – and we no longer have to decide ourselves…

There are many types of decisions that we have to be able to justify under all circumstances. They are too important to entrust to some capricious black box. We must be able to explain exactly how we made them.

We have to find a way to do that. The EU has created the GDPR, which I think is going in the right direction.

Telling people “if you don’t like how social networks handle your data, don’t use those services” does not solve anything; that is not a real choice.

In your lecture, you say that we need to find a way to ensure that machine learning will help us and not harm us. Isn’t that a utopia? 

Actually, I think anything that has the potential to help also has the potential to do harm.

It will never be 100%, but it is a principle. Look at medicine. For two millennia, doctors have sworn: first, do no harm. (In the modern version: “I will abstain from anything that could harm the patient or burden him unnecessarily.”)

What would it look like if we started to push for something like that in technology? If we accepted that technology is meant to serve people and that it is not right to misuse people’s data against them? It would mean fundamentally transforming how we approach data and the data-based economy. And on that data, a new generation of machine learning would be built.

If we succeed in enforcing the principle that data should be collected with responsibility towards people, I think we can build a future where technology like machine learning benefits people and solves real problems.
