
The Third Wave of AI

Really Useful Machine Learning — RUMLSM

Abstract

Advances in AI represent a great technological opportunity, but also pose possible perils. The case outlined here involves Deep Learning black boxes and the risks they raise in environments that require compliance with legal rules and industry best practices. We examine a technological means of addressing the black-box problem for this case, referred to as “Really Useful Machine Learning” (RUMLSM). DARPA has identified such cases as belonging to the “Third Wave of AI.”

There has been a dramatic increase in media interest in Artificial Intelligence (AI), with regard to the promises and potential pitfalls of ongoing research, development, and deployment. Recent successes and failures are often presented as evidence for hope or hype, depending on the journalist’s bias. For example, the existential opportunities and threats of extreme AI goals (expressed in terms of Superintelligence/AGI and socio-economic impacts) are prominent in this media “frenzy.”

We need to learn some lessons about our public presentations in this domain, to avoid such displays of excessive exuberance. In that respect, it is useful to gain perspective through a critical review of this media coverage, and from that to draw lessons on being more modest in project naming and claims in the AI space. Perhaps we need to be more precise and offer realistic short-term goals: achieving simply really useful machine learning (RUMLSM), with specific smart components.

An example of this is a project in Machine Learning in the Industrial Robotics space, namely our RUMLSM project. It is a novel AI/Machine Learning system proposed to resolve some of the known issues in bottom-up Deep Learning with Neural Networks, issues recognized by DARPA as characteristic of the “Third Wave of AI.”

Neural Network based Deep Learning methods are currently highly popular. But these networks have a potential issue in some applications: they cannot easily explain their discovered logic. Because no human expert understands the learned behavior well enough to warrant its compliance with a behavioral specification, such systems demand “trust” or “faith” without a rational explanation. The learning is not based on a discursive, rule-based logic.

Neural Network based Machine Learning is, in this respect, similar to the intuitive learning of a human expert who has a “gut feel” acquired over years of experience; it is a “bottom-up” machine learning paradigm. “Really Useful Machine Learning” (RUMLSM) proposes a parallel, AI-based discursive process. It interprets and rationalizes the learning acquired by the underlying Deep Learning system, and conceptualizes that learning into a structure which can then support a dialogue/checking process. This interpreter is a “top-down” learning paradigm. The means of doing this is a process whereby an Investigator system interrogates a target black-box Deep Learning system. See Figure 1.

FIGURE 1: CONCEPT OF RUMLSM
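To make the interrogation idea concrete, here is a minimal sketch in Python of an Investigator probing a black box purely through its input/output behavior. The sampling strategy, input dimensions, and function names are illustrative assumptions, not the RUMLSM implementation itself.

```python
# Illustrative sketch only: an "Investigator" querying an opaque model.
# The black box, its input space, and the probe strategy are assumptions.
import numpy as np

def interrogate(black_box, n_probes=10_000, n_features=4, seed=0):
    """Query an opaque model with sampled inputs and record its answers."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_probes, n_features))  # probe inputs
    y = black_box(X)                                         # observed outputs
    # The resulting input/output trace is what a top-down interpreter
    # would reason over when building its rationalization.
    return X, y
```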

The concept of rationalization and explanation of Deep Learning systems has been discussed by DARPA (https://www.darpa.mil/attachments/AIFull.pdf). See Figure 2.

FIGURE 2: DARPA’S THIRD WAVE OF AI

As noted above, a well-known AI risk in applying Deep Learning Neural Networks is the black-box problem. When a Neural Network is trained on a large data set, the resulting connectivity arrangements are such that, for a given input, an output is produced in service of the system’s reinforced goal (typically prediction, decision, or identification). These bottom-up systems can learn well, especially in narrow domains, where they can surpass human performance, but they offer no means of validation or clear compliance with regulations. This renders the AI potentially untrustworthy, a serious deployment risk that may be deemed unacceptable. A human can usually provide a top-down rationalization of their behavior, responding to “why did you do that?” questions. A Deep Learning system cannot easily answer such queries: it is not rule-based and cannot easily trace its “reasoning.”

A key part of the proposed solution is an “Extractor,” which builds a rationalization of the Deep Learning system as a rule-based decision tree that can be validated against risk-analysis/compliance needs, i.e., that can answer questions related to risks.
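As one hedged illustration of what such an Extractor might do, the sketch below fits a surrogate decision tree to the input/output trace gathered above. This is the well-known “pedagogical” style of rule extraction, which treats the network purely as an oracle; the RUMLSM Extractor itself is not published here, so this code stands only for the general technique.

```python
# Illustrative sketch: approximate a black box with an auditable surrogate
# tree. This is generic pedagogical rule extraction, not the RUMLSM method.
from sklearn.tree import DecisionTreeClassifier

def extract_rules(X, y, max_depth=4):
    """Fit a shallow decision tree that mimics the black box's behavior."""
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X, y)                 # learn to reproduce the observed outputs
    fidelity = tree.score(X, y)    # how closely the tree matches the box
    return tree, fidelity
```

Keeping the tree shallow is a deliberate trade-off: a deeper tree mimics the network more faithfully, but a shallow one remains small enough for a human to audit against compliance requirements.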

The means by which expert heuristics are extracted from Deep Learning Neural Networks has been studied by other teams; however, the specific process we propose in RUMLSM is innovative.

Expert heuristic/rule extraction can be defined as “…given a trained neural network and the data on which it was trained, produce a description of the network’s hypothesis that is comprehensible yet closely approximates the network’s predictive behavior.” Such extraction algorithms help experts verify and cross-check Neural Network systems. Earlier this year, John Launchbury, director of DARPA’s Information Innovation Office, said, “There’s been a lot of hype and bluster about AI.” DARPA has published its view of AI as “Three Waves,” explaining what AI can do, what AI can’t do, and where AI is headed. We consider that the example outlined above falls into this “Third Wave of AI.”
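For completeness, a surrogate tree of this kind can be rendered as explicit if/then rules for an expert to cross-check, for example with scikit-learn’s export_text. Again, this is an illustrative sketch of the verification step, not the RUMLSM process itself.

```python
# Illustrative only: render the surrogate tree as explicit if/then rules so
# a domain expert can review them against a compliance specification.
from sklearn.tree import export_text

def describe(tree, feature_names):
    """Produce a human-readable rule listing for expert cross-checking."""
    return export_text(tree, feature_names=list(feature_names))

# Example usage (feature names are hypothetical):
# print(describe(tree, ["f0", "f1", "f2", "f3"]))
```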

The above was the basis of a paper presented at the 4th International Conference on Artificial Intelligence and Applications, organized by the Academy & Industry Research Collaboration Center (AIRCC), on 25 March 2017 in Geneva, Switzerland.

Our initial paper introducing this concept can be accessed here:

http://airccj.org/CSCP/vol7/csit76607.pdf



Martin Ciupa

About the Author:

Martin Ciupa, CEO, Remoscope.

Follow Martin on LinkedIn
