Towards collaborative xAI

February 27, 2020

(Image from Piqsels, licensed under Creative Commons Zero)

Göte Nyman, University of Helsinki, Finland
Ossi Kuittinen, SimAnalytics Finland, Ltd

Deep learning AI systems have, until now, been black boxes. Although they show amazing high-level artificial intelligence, it has become imperative to understand their detailed underlying logic, the computational elements in their critical decision-making, and their functional structure in order to know how and why they end up making specific decisions. The same need for transparency concerns the learning phase and the choice of the materials used for teaching them. The problem is not new: it was met and recognised already in the logical and symbolic AI developed by pioneers such as Newell and Simon in the 1950s and 1960s.

The reasons for this emerging need for explainable AI (xAI) are evident: the costs of failures and underperformance become intolerable when AI enters large-scale and sensitive domains like medicine, traffic, the weapons industry, and massive industrial settings. Explanatory knowledge can be critical from various perspectives: what limitations are introduced by choosing specific teaching materials, how the AI system generalises from them, what happens in exceptional situations, what kinds of failures are possible, how to organise communication within the AI system at large, whether there are ‘black holes’ in its learning, how and why its parameters and other performance-related variables should be adjusted for better, optimal or safe performance, and finally, how xAI should communicate with the humans and the human communities or teams working with it.

There has been an underlying assumption that the accuracy of computations in large AI systems prevents explainability, but this is now being challenged. Accurate, interpretable deep learning (DL) models are being developed and tested in which the model's reasoning is shaped, for example, by imitating the decision making of professionals in a specific classification task (see, e.g., the image recognition task in Chen et al., 2019). However, the problem is still open and new developments can be expected soon.

In the psychological and social sciences, there is a well-known tradition of modelling human decision making, and methods have been developed for that purpose. Think-aloud and verbal protocol analysis (Ericsson & Simon, 1993) and various advanced methods of applying them have been widely used. However, it is now clear that in the human case this is far from a simple problem, and it is often difficult, if not impossible, to identify the critical mental processes in individual or collective decision making and the reasons for such behaviours in general. Verbalisation of mental processes is difficult both for the test subjects and for those analysing the data. This becomes problematic when procedural or tacit knowledge is involved, when social interaction occurs, and when intuitive reasoning is used. It is likely that with the increasing complexity of DL and other systems like GANs (Generative Adversarial Networks), the problem of explainability will be no less challenging than it is in the case of human decision making. We can expect the human and AI research fields to feed each other.

Solving the xAI problem

In the medical domain, for example, Holzinger et al. (2019) look at the xAI problem from a peer-to-peer perspective, as it occurs between medical professionals. The idea is simply that xAI should be able to conduct professional negotiations, man to man, woman to woman. They present an excellent review of the black box problem as it concerns medical decision making in specific diagnostic situations (histopathology). Because of the hard diagnostic criteria and the risks involved in failures, the medical domain serves as an excellent case environment for building the theoretical and practical foundations for AI. Holzinger et al. emphasise the need for both explainability and causality in AI systems. Large-scale industrial settings bear similarities to the medical domain: there, the cost of a failing AI can become high, and complex process control and understanding are necessary. Both explainability and causality are needed.

The need for xAI is now widely shared as the complexity of AI environments increases. Costly false alarms, misses and erroneous behaviours can be difficult to predict and trace in current systems. Extensive follow-up, time series analysis, off-line testing and continuous improvement are necessary to build successful AI-based systems. On the other hand, AI implementation contexts vary considerably, and what can be tolerated in customer service or marketing can be a catastrophe in military, medical and industrial settings.

Gunning (2017) summarises the overall aims of xAI: a) to produce more explainable AI models while maintaining a high level of learning performance (e.g., prediction accuracy), and b) to make it possible for human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. To accomplish this, Gunning emphasises that an explainable AI model needs an “explainable interface”: interaction and means of explaining the behaviour of the AI/ML system to a human operator.
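One way to picture such an “explainable interface” – as a rough sketch only, not Gunning’s or DARPA’s design – is as a structured explanation object that accompanies each model decision and can be rendered into an operator-facing message. All field names below are our own illustrative assumptions.

```python
# Illustrative sketch of an "explainable interface" payload handed to a human
# operator; the fields are our own assumptions, not part of Gunning's programme.
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    prediction: str            # what the model decided
    confidence: float          # how certain it claims to be (0..1)
    key_factors: List[str]     # inputs that drove the decision
    counterfactual: str        # what would have changed the outcome
    suggested_action: str      # what the operator is expected to do next

def render_for_operator(e: Explanation) -> str:
    """Turn the structured explanation into a short operator-facing message."""
    return (f"Decision: {e.prediction} (confidence {e.confidence:.0%}). "
            f"Main factors: {', '.join(e.key_factors)}. "
            f"It would change if: {e.counterfactual}. "
            f"Suggested action: {e.suggested_action}.")
```

The point is not the particular fields but the separation between what the model decided and what the human is expected to do with that decision.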

The following are two example approaches to xAI:

The decision making situation can be replayed to discover which factors were used in each decision and which were used when the situation changed (Johnson, 1994). Humans performing the same tasks can be used as a reference, and correspondences between the human and the artificial decision making processes can be sought.

It is possible to imitate human reasoning, for example in observing (visual) objects such as cars or birds. To quote Chen et al. (2019): “The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, and others would explain to people on how to solve challenging image classification tasks.” Again, the human specialist is taken as the reference; a minimal sketch of this prototype-based idea follows below.
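To make the prototype idea concrete, here is a small numerical sketch in the spirit of Chen et al. (2019), but not their implementation: class evidence is accumulated from similarities between image-patch embeddings and learned, human-inspectable prototypes, so each prediction can be traced back to “this patch looked like that prototype”. Shapes and names are our own simplification.

```python
# A minimal numerical sketch of prototype-based reasoning in the spirit of
# Chen et al. (2019); shapes and names are our own simplification, not their code.
import numpy as np

def prototype_scores(patch_features, prototypes, class_weights):
    """patch_features: (n_patches, d) patch embeddings of one image
    prototypes:        (n_prototypes, d) learned, human-inspectable prototypes
    class_weights:     (n_classes, n_prototypes) links prototypes to classes
    Returns class scores plus, per prototype, the best-matching patch index --
    the basis of a "this patch looks like that prototype" explanation."""
    # Negative squared distance as the similarity measure (one common choice).
    dists = ((patch_features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    similarity = -dists                       # (n_patches, n_prototypes)
    best_patch = similarity.argmax(axis=0)    # which patch matched each prototype
    evidence = similarity.max(axis=0)         # strength of each prototype match
    scores = class_weights @ evidence         # (n_classes,) class evidence
    return scores, best_patch, evidence
```

Because every class score is a weighted sum of prototype matches, an explanation can point at the specific image patches and prototypes behind a decision – the property the quote above refers to.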

At the moment there are several initiatives under way to solve the black box problem in AI; an Explainable Machine Learning Challenge, involving Google, the Fair Isaac Corporation (FICO), and academics at Berkeley, Oxford, Imperial, UC Irvine, and MIT, was even arranged in 2018 (Rudin & Radin, 2019).

However, the field is still young, and the medical case by Holzinger et al. emphasises the need to separate explainability from causality. No doubt this will be a crucial aspect of introducing understanding into future AI in large-scale contexts. In real industrial settings the new challenge is to accomplish this without putting the AI-based processes at risk. Small steps can and must be taken now.

A human and collaborative approach to xAI

From the human side, the value of an explanation depends on its clarity and usefulness, on the way it facilitates the use of relevant mental models of the AI system, and on the way the system's decision making performance and overall workings become understandable and even predictable within the working community. In general, an xAI system must support trust in how it behaves in variable situations and in what it can, will and cannot do. The explanations offered to the users must carry information, knowledge and guidance from which the users learn something important or new, and which can be turned into relevant automatic, human-controlled or semi-automatic, well-defined actions.

These are important requirements of xAI, and the question arises of how to accomplish them in real implementations of AI, where a community and an AI implementation project work and interact. Is it only a matter of peer-to-peer professional negotiations?

Here we approach the black box problem from an industrial-scale collaboration perspective; that is, we consider the interaction between three parties – the human process operators and other site personnel, the design specialists, and the AI/ML system itself – interacting in the planning, implementation, use and tuning of industrial-scale AI/ML systems. This interactive “Triad” needs relevant information that is generated within the interacting entity: it does not originate from the AI/ML system or from the participating human resources alone, but instead emerges as an outcome of the triad's interaction and collaboration. This is a new and acute knowledge creation paradigm in AI/ML settings. Because of the complex nature of this knowledge creation, it is worth considering what “explainability” means in such collaborative contexts. We will not go into details here, but introduce some basic principles of collaborative xAI as we see them. We consider ‘explainability’ as collective knowledge acquired during planning, implementation and interaction with the system. From this perspective, it is not only a question of peer-to-peer negotiations but of collective knowledge building and sharing. Of course, peer-to-peer explainability can be an integral part of this.

Earlier (Nyman and Kuittinen, 2019) we described the “Triad of collaboration”, in which system designers, site engineers, operators and other personnel work together and interact with the AI/ML system to plan, implement, run and tune it. Looking at xAI from this perspective reveals several “stakeholders” who need AI explanations from xAI in their work. We can discern the following instances where AI explanations are received, understood and used (a small illustrative sketch follows the list):

  1. The technological performance of the AI system is monitored by the design and engineering professionals who know the architecture and theory of the system in every possible detail. For them, there is a need for high-level “peer-to-peer” negotiations (where one peer can be the AI/ML system), with some support from the specialists in the application field (the operators).
  2. The domain of the specialist operators is the process (manufacturing, maintenance, service, etc.) and all of its parts, and their task is to ensure that the AI/ML system's behaviour converges towards optimal performance in terms of product/service quality and quantity. Their knowledge domain is different from that of the design specialists. Hence, an “explanation” for them must carry relevant information about the domains of their responsibility and support them in initiating relevant actions and control.
  3. The management of AI/ML at large-scale sites needs information that includes elements different from those of typical automation and digitalisation programs. The black box problem, if not solved, introduces, for example, the following new management challenges: (1) how to gain sufficient knowledge for deciding on significant investments in a full implementation of the AI/ML system, and what kind of knowledge is critical for this; and (2) how to follow the development of the system's performance as it is being introduced and tested, and how to extend it. Both of these needs include explanatory knowledge that must be technologically solid, understandable, and as clearly as possible related to the operating domain of the site.
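As a purely hypothetical illustration of how one explanation record could serve these three knowledge domains, consider the sketch below; the roles, fields and values are invented for the example and do not come from any real system.

```python
# Hypothetical example: one explanation record projected into role-specific
# views for the three stakeholder groups listed above. All names are invented.
explanation_record = {
    "model_internals": {"top_feature_attributions": [0.42, 0.31, 0.27], "drift_score": 0.12},
    "process_context": {"affected_unit": "dryer 3", "expected_effect": "temperature overshoot"},
    "business_impact": {"downtime_risk": "low", "estimated_cost_class": "minor"},
}

ROLE_SLICES = {
    "designer":   ["model_internals"],                      # peer-to-peer technical detail
    "operator":   ["process_context", "model_internals"],   # process first, technicals as context
    "management": ["business_impact", "process_context"],   # investment and follow-up decisions
}

def view_for(role: str, record: dict) -> dict:
    """Return only the slice of the explanation relevant to the given role."""
    return {key: record[key] for key in ROLE_SLICES.get(role, [])}

# e.g. view_for("operator", explanation_record) keeps the process and model slices.
```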

 

Explainability in different domains

We can now return to explainability and how it depends on the domain of negotiations and knowledge needs.

Firstly, what do “explainability” and “understanding” mean for the people working with and managing the AI/ML system? The main aim of xAI is to inform designers, operators, and management so that they can take action, and to let them enjoy the positive feedback suggesting that the outcome of the AI/ML-guided process is leading to the expected performance. Both situations occur daily. However, many intervention actions that the operators must take require coordinated collaboration, often negotiations, and reporting of what to do, how and when. Any explanatory information available for this must be congruent with the ways of thinking about, observing and negotiating the process that is being controlled. Hence, from the start xAI must produce both AI-technical information and process-related information.

The question then arises: how should the available xAI knowledge be formulated so that it can be used efficiently by the professional operators? Clearly, it is necessary to facilitate collaborative knowledge creation that serves different knowledge needs, and the development of xAI must consider this from the beginning. A simplified peer-to-peer view is not enough to support effective organisational decision making and action.

We have earlier used the concept of ‘observational architecture’ to consider the critical information flows in AI-guided environments. When building an xAI system, it is necessary to consider this information or knowledge architecture and to take into account the perspectives of different knowledge domains and needs. There is no unique solution to this, since information needs vary depending on the site and its overall environment. In essence, this can be described as a demand to generate relevant actions from critical perceptions. Suitable models of collaboration, information flow, knowledge creation and action are needed.

The outcome of a mature, collaborative xAI is a knowledge community that shares relevant data obtained from xAI and represents it in different domains, in a way that serves the aims and purposes of the organisation or site using AI/ML in its operations. In this sense, the new xAI knowledge is dynamic: it emerges and evolves with the improving performance of the AI/ML system. This does not happen without proper management of the collaboration within an xAI environment.

Some references

Chen, C., Li, O., Tao, C., Barnett, A.J., Su, J. & Rudin, C. (2019). This looks like that: Deep learning for interpretable image recognition. Advances in Neural Information Processing Systems 32 (NeurIPS 2019).

Ericsson, K.A. & Simon, H.A. (1993). Protocol analysis: Verbal reports as data (rev. ed.). Cambridge, MA: MIT Press.

Gunning, D. (2017). Explainable Artificial Intelligence (XAI). DARPA/I2O, program update.

Holzinger, A., Langs, G., Denk, H., Zatloukal, K. & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4).

Johnson, W.L. (1994). Agents that learn to explain themselves. In Proceedings of AAAI-94.

Rudin, C. & Radin, J. (2019). Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). https://hdsr.mitpress.mit.edu/pub/f9kuryi8

 
