AI and Its Challenges

AI, like any machine, always has a failure rate above 0% and never reaches a perfect state.
Consider aircraft: what is an acceptable breakdown rate? It will always be above 0%, no matter how hard people work to bring it down. The same is true of AI.

That’s not to say AI doesn’t offer real value. But everyone needs to understand why machines make mistakes, which means understanding the three challenges that stand in the way of perfect AI.

* Implicit bias

Unconscious thoughts can cloud a person’s reasoning and actions. Implicit bias plays a large role in discrimination between people, and it also prevents scientists from using AI to reduce prejudice in society. AI learns from humans; if implicit bias seeps into that process, the AI will learn human bias.

As such an AI operates, it deepens that bias, even when its work is meant to serve the social good.

One notable example is the Allegheny Family Screening Tool, developed in the US to predict which children are at risk of abuse or neglect. Implementing it was fraught with problems, especially when the county’s Department of Human Services acknowledged the tool could be biased with respect to race and income.

Programs that detect child neglect often mistakenly equate living in poverty with indifference or child abuse. Steps have been taken to limit the risk of implicit bias as such problems are discovered.

Completely eliminating the problem, however, is much more difficult, because people cannot account for every unknown variable or all of the social context. What counts as “right” or “fair” behavior? If humans cannot identify, define, and answer such questions, they cannot teach machines to do the same. This is the main reason AI can’t be perfect.
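To see how the proxy problem described above arises, consider a minimal sketch in Python. The data and model here are entirely synthetic and hypothetical, not the Allegheny tool’s actual system: the point is only that a classifier trained on historically skewed labels ends up treating income itself as a risk signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two synthetic features: standardized household income and the
# genuine risk signal that a fair screener should act on.
income = rng.normal(0, 1, n)
true_risk = rng.normal(0, 1, n)

# Historically biased labels: past screeners flagged low-income
# families more often, independently of the true risk signal.
logit = 1.2 * true_risk - 0.8 * income
labels = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(np.column_stack([income, true_risk]), labels)
print("learned weights [income, true_risk]:", model.coef_[0])
# The clearly negative income weight shows the model reproducing the
# historical bias: low income alone now raises the predicted risk.
```

No amount of model tuning fixes this, because the bias lives in the labels themselves, which is exactly why eliminating it is so hard.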

* Data

Data is the fuel of AI. Computers learn the underlying value in data, such as the rules for making decisions rather than the decisions themselves, and they require huge amounts of data to detect patterns and correlations in it.
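A toy example makes the “rules, not decisions” distinction concrete. The sketch below, on made-up data unrelated to any system in this article, feeds a decision tree a handful of past decisions and prints the rule it extracts:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up past decisions: [temperature in Celsius], and whether
# people wore shorts that day (1 = yes, 0 = no).
X = [[12], [15], [18], [22], [26], [29], [31], [34]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=1).fit(X, y)

# The tree has not memorized the eight decisions; it has extracted a
# threshold rule it can apply to temperatures it has never seen.
print(export_text(tree, feature_names=["temperature"]))
```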

If the data is incomplete or contains errors, AI cannot learn well. Covid-19 provides a prime example: the US Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), the Covid Tracking Project (CTP), and Johns Hopkins University all released markedly different figures.

Such discrepancies make it difficult for AI to detect patterns in the data, let alone find hidden meaning in it. Worse still is data that is outright wrong or incomplete, like trying to teach an AI about healthcare while only providing data on women.
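As a rough illustration of the Covid-19 example, the sketch below uses invented numbers, not real CDC, WHO, CTP, or Johns Hopkins figures, to show what conflicting sources do to a model: the disagreement between them shows up as noise the fit cannot explain.

```python
import numpy as np

days = np.arange(30)
true_trend = 100 + 12 * days  # the real underlying daily growth

rng = np.random.default_rng(1)
# Three sources report the same statistic with different systematic
# biases (scale factors) plus ordinary measurement noise.
sources = [true_trend * bias + rng.normal(0, 40, days.size)
           for bias in (0.7, 1.0, 1.4)]

pooled_x = np.tile(days, 3)
pooled_y = np.concatenate(sources)

slope, intercept = np.polyfit(pooled_x, pooled_y, 1)
spread = np.std(pooled_y - (slope * pooled_x + intercept))
print(f"fitted slope {slope:.1f} (true 12.0), residual spread {spread:.0f}")
# The trend may still be roughly recoverable, but the residual spread
# is dominated by the sources disagreeing with one another, which the
# model can only treat as unexplained noise.
```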

There is also a challenge when developers provide too much information, which can be irrelevant, meaningless, or even distracting. IBM once had its Watson system read Urban Dictionary, and the tool could not distinguish when to use normal language and when to use slang or profanity. The problem became so serious that engineers had to delete Urban Dictionary from Watson’s memory.

An AI needs to learn about 100 million words to become fluent in a language, while a child needs only 15 million. The gap shows that developers have not yet worked out what kind of data actually matters, causing AI to dwell on redundant information, waste time, or even build the wrong data model.

* Expectations

Humans make mistakes all the time, yet they expect machines to be perfect. In healthcare, experts estimate the misdiagnosis rate can be as high as 20%, meaning that one in five patients in the US is misdiagnosed.

When told these numbers alongside a hypothetical AI-aided diagnosis with an error rate of about 0.001%, the majority of respondents still chose human doctors. Their reasoning was that the AI’s misdiagnosis rate was too high, even though it was far lower than the human rate. People expect AI to be perfect, and worse, they expect the people who train it to be perfect as well.

On March 23, 2016, Microsoft launched a Twitter bot called “Tay” (Thinking About You). The company trained the AI to speak and interact like a 19-year-old girl, then released it onto the platform. Microsoft had to shut Tay down after only 16 hours and some 96,000 tweets, because it had turned racist and sexist, even promoting fascism.

Some users deliberately fed Tay offensive language to wreak havoc, while Microsoft had not thought to train the bot against inappropriate behavior, leaving Tay with no background for recognizing that malicious intent could exist. Microsoft’s social experiment failed, becoming a demonstration of human society rather than of the limits of AI.

“These three challenges show that AI will never be perfect; it is not the panacea that many people expect. AI can do extraordinary things for humanity, like restoring mobility to people who have lost limbs, or improving our ability to produce food while using fewer resources. But people need to remember that AI is never perfect, just like us,” commented Forbes journalist Neil Sahota.
