AI Opinion

Are we ready to drop the “A” from AI (Artificial Intelligence)?

A quick story: a drowning boy was saved in time by a good Samaritan. The boy’s father, who was very rich, rushed to the spot and saw a man doing something to the boy. “What are you doing?”, he asked. “Artificial respiration, Sir.” The rich father got very angry. “What the heck? Do you know how rich I am? Give him real respiration!”

This article discusses some challenges in declaring AI to be just Intelligence (dropping the ‘A’) and qualifying normal intelligence as HI (with ‘H’ for Human).
 
In other words, are we ready to redefine Intelligence? Before answering the question, let me simplify the picture with some background.
If purists do not fight with me, I will make a bold statement: AI is a synonym for ‘prediction’. The machine (computer) predicts through AI.
 
Let me go one step further and define ‘prediction’ as ‘extrapolation’ (have I committed blasphemy?). It may be as simple as what you encounter in Economics as ‘marginal utility’ or as complex as predicting where a cyclone will land. How does the machine get the power to predict? Past data and learning. The machine learns. It learns constantly. It learns progressively. It may learn in a simple manner (not worrying about too many influencers) or in a complicated manner (worrying about several influencers); we call these Machine Learning and Deep Learning, respectively.
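To make ‘prediction as extrapolation’ concrete, here is a minimal sketch (the monthly sales figures are invented for illustration) in which the machine ‘learns’ a straight line from past data and extrapolates one step into the future:

```python
import numpy as np

# Past data: six months of sales (invented figures).
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([100.0, 112.0, 121.0, 135.0, 148.0, 160.0])

# "Learning": fit a straight line to the past observations.
slope, intercept = np.polyfit(months, sales, deg=1)

# "Prediction": extrapolate the line to month 7.
forecast = slope * 7 + intercept
print(f"Forecast for month 7: {forecast:.1f}")
```

A Deep Learning model replaces the straight line with a far more flexible function, but the core idea, learn from the past and extrapolate into the future, is unchanged.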
 
There is one final element leading to AI, and probably confused with it: Automation. Automation is a generic term; anything done through the machine can be called Automation. You can give it a Purchase Order and ask it to produce the invoice. Congratulations, you have automated the invoice process.
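As a toy illustration (all the field names below are invented), such automation is a fixed rule applied to an input, with no learning anywhere:

```python
# A toy purchase order; the field names are invented for illustration.
purchase_order = {
    "po_number": "PO-1001",
    "customer": "Acme Ltd",
    "items": [("widget", 10, 2.50), ("gadget", 4, 7.00)],  # (name, qty, unit price)
}

def make_invoice(po):
    """Plain automation: a deterministic mapping from input to output."""
    total = sum(qty * price for _, qty, price in po["items"])
    return {
        "invoice_for": po["po_number"],
        "customer": po["customer"],
        "amount_due": total,
    }

print(make_invoice(purchase_order))
```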
 
In another scenario, you can give it the marks in individual subjects and ask it to declare the result, after telling it what ‘result’ means. You can go one step further: you can give it an answer sheet and ask it to award the marks first. Here comes the challenge: how do you ‘teach’ it to evaluate? If the questions are multiple choice, it may be easy. If not, what do you do? As usual, you have two choices: either restrict the answers to the multiple choice type (‘just automated evaluation’) or scratch the surface of AI to interpret and evaluate text answers.
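A minimal sketch of the two choices (the answer keys and responses are invented): grading multiple choice answers is a straight lookup, while grading free text needs some notion of meaning, crudely approximated here by word overlap:

```python
# Multiple choice: pure automation, just compare against the key.
answer_key = {"Q1": "B", "Q2": "D"}
responses = {"Q1": "B", "Q2": "A"}
mcq_score = sum(responses[q] == correct for q, correct in answer_key.items())
print(f"MCQ score: {mcq_score}/{len(answer_key)}")  # 1/2

# Free text: word overlap stands in for real language understanding,
# which is where genuine AI would have to take over.
def overlap_score(model_answer, student_answer):
    expected = set(model_answer.lower().split())
    given = set(student_answer.lower().split())
    return len(expected & given) / len(expected)

score = overlap_score("water boils at 100 degrees celsius",
                      "at 100 celsius water starts boiling")
print(f"Text answer score: {score:.2f}")  # 0.67
```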
 
Extrapolation, a term from the past, became AI once more technology was at our disposal. As our actions become more and more sophisticated, our requirements become more and more demanding. It is a constant race. Maybe we are better off ignoring technology as an element to be worried about.
 
The first challenge, of course, relates to the ‘learning’ part. Do we have capable ‘teachers’ (or Domain Experts)? Do we have reliable ‘books’ (or past data) to be used in teaching?
  
The data used for learning could suffer from the following shortcomings:

Not exhaustive / extensive 

Any conclusion based on the available data might not be universally correct. The size of the sample may not be big enough.
 
If somebody tries to predict fraud in credit card transactions based on just one month’s data for a single card in a single town, chances are that not enough learning has been done, and the machine may be like a boy who attempts a three-digit sum after learning to count on his fingers.
 
The definition of a reasonable size is not universal. However, if we are confident that more and more data will be generated progressively, giving the machine a chance to keep learning, we can explore the option of discarding the initial predictions and relying on the machine’s ‘experience’ as time passes.
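One simple way to act on this (a sketch; the threshold of 1,000 examples is an arbitrary assumption) is to treat the machine’s early predictions as provisional until it has accumulated enough data:

```python
MIN_TRAINING_EXAMPLES = 1_000  # arbitrary threshold, chosen for illustration

def trusted_prediction(model_prediction, n_examples_seen):
    """Discard early predictions: fall back to human review
    until the machine has gathered enough 'experience'."""
    if n_examples_seen < MIN_TRAINING_EXAMPLES:
        return "refer to human reviewer"
    return model_prediction

print(trusted_prediction("fraud", n_examples_seen=150))     # refer to human reviewer
print(trusted_prediction("fraud", n_examples_seen=25_000))  # fraud
```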

Bias in the data  

The data collected and used for training might carry some inherent bias. For example, the data may relate to companies with predominantly male workers; using it to set the health insurance premium for a company with both male and female workers will prove wrong.
It is interesting to note that a recent Harvard Business Review article, ‘How Machine Learning Pushes Us to Define Fairness’, appreciates machine learning for pushing us to be fair.
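A first line of defence is simply to measure representation in the training data before trusting the model. A sketch (the ‘gender’ field and the 30% floor are assumptions for illustration):

```python
from collections import Counter

# Toy training records; in reality these would come from the actual dataset.
training_rows = [{"gender": "M"}] * 920 + [{"gender": "F"}] * 80

counts = Counter(row["gender"] for row in training_rows)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.30 else ""
    print(f"{group}: {share:.0%}{flag}")
```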

Spurious Correlations 

Another risk in building the data is considering incorrect influencers. For example, if AI has to predict the rainfall in a place, it may make sense to learn the effect of the trees there, but it makes no sense to force it to learn the effect of the number of stray dogs.
 
The only way to resolve such issues is to gain good knowledge of the factors. In other words, build good human intelligence first before going into AI.
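The danger is easy to demonstrate: with a small sample, an irrelevant feature can show a respectable correlation purely by chance, as in this sketch (every number is randomly generated, which is exactly the point):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Ten monthly observations: rainfall, and an unrelated stray dog count.
rainfall = rng.normal(loc=100, scale=20, size=10)
stray_dogs = rng.integers(low=30, high=60, size=10)

# With only ten points, chance alone can yield a non-trivial correlation.
r = np.corrcoef(rainfall, stray_dogs)[0, 1]
print(f"Correlation between rainfall and stray dogs: {r:.2f}")
# A naive learner would happily treat this as an "influencer";
# only human knowledge of the domain says it is meaningless.
```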

Randomness

Even when the AI applies its knowledge correctly, some randomness is likely to creep into the parameters of learning / testing, resulting in errors. Imagine an example like this: the students have been well trained in the concept of averages, but a question in the exam paper asks for the average of 5 numbers while listing six. Such errors can occur randomly. To simulate such conditions, we can associate a random error with some of the data used for learning / testing and see whether the machine still decides correctly.
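A sketch of that stress test (the toy data, the stand-in model and the 10% corruption rate are all assumptions): flip a random slice of the test labels and check how far the accuracy degrades:

```python
import random

random.seed(42)

# Toy test set: (transaction amount, true label), invented for illustration.
test_set = [(amt, "fraud" if amt > 900 else "ok") for amt in range(100, 1100, 50)]

def toy_model(amount):
    """Stand-in classifier: flags large amounts as fraud."""
    return "fraud" if amount > 900 else "ok"

def accuracy(data):
    return sum(toy_model(amt) == label for amt, label in data) / len(data)

# Corrupt roughly 10% of the labels at random to simulate stray errors.
noisy = [(amt, "ok" if lbl == "fraud" else "fraud")
         if random.random() < 0.10 else (amt, lbl)
         for amt, lbl in test_set]

print(f"Clean accuracy: {accuracy(test_set):.0%}")
print(f"Accuracy with random label noise: {accuracy(noisy):.0%}")
```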
Next, we can consider the shortcomings while the machine is crunching the numbers.

Infrastructure

Though we agreed to rule out the influence of technology, a more accurate analysis might require more data to be loaded in memory. This may or may not be feasible, given the available infrastructure.
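When the data will not fit in memory, one common workaround is to process it in chunks rather than all at once. A sketch using pandas (the file name and the ‘amount’ column are placeholders):

```python
import pandas as pd

# Stream a large CSV in 100,000-row chunks instead of loading it whole.
total, count = 0.0, 0
for chunk in pd.read_csv("transactions.csv", chunksize=100_000):
    total += chunk["amount"].sum()
    count += len(chunk)

print(f"Mean amount over {count} rows: {total / count:.2f}")
```

This trades speed for memory; the analysis still sees all the data, just not all at once.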
And finally, let us consider the shortcomings on the output side.

Gut Feeling

This aspect could really be the most important challenge in the growth of AI, and it assumes a lot of importance in health care. In spite of lab reports, a doctor has a ‘gut feeling’ or ‘hunch’ about the condition of a patient. Can we have that feature present in AI? What makes it more challenging is that such a gut feeling could be right or wrong in real life, but we want it to be only right, considering that a human life is involved.
Though the correct status of this aspect is not yet known, it is interesting to note that a Fortune article (‘Artificial Intuition Wants to Guide Business Decisions. Can It Improve on “Going With Your Gut”?’) interviewed a start-up working on what is known as ‘artificial intuition’.
Perhaps in a few years we will be able to define AI as the real intelligence, without the need for the prefix. Shall we wait for that?
author: S. Sundararajan
Sundararajan Seshadri has been in IT since the EDP days. He derives his mastery over computers from his association with different kinds of computer applications, scientific and commercial, and with different technologies as one gave way to another. He is comfortable both rolling up his sleeves and wearing the strategy cap. With academic ammunition from IIT and NIT, his work experience spans start-ups as well as established companies such as BHEL and TCS. In earlier days, he successfully applied mathematical modelling, probably a very crude form of AI, in predicting the behaviour of boilers. He is currently driving the offshore work for the US-based IT company Real Soft, Inc.