October 1, 2018
Göte Nyman, University of Helsinki,
Ossi Kuittinen, SimAnalytics Ltd
In traditional industrial and service environments, an ideal AI system is meant to reach the competence level of human operators and their teams, and then to outperform them as soon as it is technically, economically and organizationally possible. Inevitably, any organization adopting AI will meet its first critical moment when this happens for the first time – and it is occurring all the time. For firms, services and industrial settings this significant event is a wake-up call, not only to the whole organization but also to its AI developers.
Photo by ICAPlants – Own work, CC BY-SA 3.0, Wikimedia Commons.
The learning AI system becomes accurate, fast and excellent in securing production quality, offering failure diagnosis and predicting service outcomes. It is perhaps taking its first steps in predicting, managing and solving problems more complex than human operators and other professionals have been able to handle before. The criteria of these significant events involve several socio-technological variables, and it is not at all self-evident how to recognize them or how an organization should react to them. We should understand well these first moments when AI appears to outperform the operators. Here we explain why.
What happens when the AI system outperforms its teachers?
This special but very practical step has not been a popular discussion topic; the focus has been on potential future risks, job transformations and even disasters once AI becomes strong enough. However, it is a new window of opportunity for the organization and its people and should be recognized as such: if a firm or a production unit misinterprets what it means and how to respond to the new situation, it runs the risk of missing the further development of AI-based work and processes. It is not only a warning signal for the use of the technology but for human resource management as well. AI does not know where it leads the working community; it does not ask and it does not care.
The personnel responsible for the ‘old’ system and for implementing the AI can face uncertain times. This is not a new phenomenon in businesses and industries experiencing digital transformations; similar situations occur in banks and insurance companies, for example, when they switch to more modern technological solutions while still having to run the old systems. The same has occurred in major firms in Silicon Valley, where the personnel know that their value on the job market depends on their ability to work with the latest technology. Investing time and energy in old systems is risky.
Division of labor and team and management functions will all come under reconsideration. Soon the pay and reward systems must also be aligned to support the new AI-based situation at the site: traditional pay models may no longer work and can even be harmful if they reward the wrong activities and miss the critical ones. Before this renovation can be accomplished, it is necessary to identify the major factors contributing to system performance.
One could call this new phase a strange form of ‘organizational interregnum’: the organization is aware that its process control must be changed, but it is not yet clear what should be done or how to run the two systems in parallel. The change cannot be accomplished overnight, and it is necessary to follow the performance of the AI system keenly. It may well take more than a year, depending on the scale and nature of the AI implementation. During this time, the organization, its management and the personnel in general must build trust on two fronts: trust in the implemented AI technology and its use, and trust in the collaboration among the personnel, teams and management. Failure on either front leads to uncertainty, loss of motivation, conflicts and hindrances in decision making.
Paradoxical as it may seem, when AI reaches its first ambitious goal it will need the best human knowledge and competences and new ways of working together – new teams, for example – just as it needed them during the implementation, when it started to learn its new tasks. If handled well, this can become a moment of growth and inspiration for the personnel, a chance to build an effective work environment. Adoption of AI challenges people committed to creating a healthy and fair workplace to learn the new required skills and to secure good performance. It is the responsibility of management to provide the conditions for this.
Management’s task is to secure both social and situational awareness in the company and to facilitate seamless collaboration and mutual understanding. Facing the unpredictable future of the freshly deployed AI, the best strategy with personnel is to be proactive, especially in education, and to build a realistic picture of the expected development.
For a factory, firm or industrial plant, it is a matter of re-tuning what we call the “observational architecture”: finding out where the most important information sources of the site and its AI system are, what phenomena to follow, and how to report and act on them. The architecture must include the personnel, too. This is nothing new for management: healthy teamwork, collaboration, learning and communication are needed. However, their content and form are changing. Furthermore, there is an acute risk: if the AI system is misused, or if design faults in it go unnoticed, serious problems can follow, of a magnitude worse than ever before. The risks must be identified during the first implementation phase. This requires efficient participation of the personnel so that early warning signs are taken seriously.
How to communicate with an AI companion?
When AI performs better than the operators used to, what knowledge has it acquired, and how could we find out what the essence of this new machine competence is? It has learned “intelligent” input-output relationships that humans have traditionally mastered, but it has learned something else, too. “What might that be?” is a question that will be asked repeatedly when AI is adopted. Interestingly, this question is not far from the classical problem of behaviorism in psychology: should we focus on observable behavior only, or should we try to see what happens inside the “box”, what underlies its behavior and decision making? AI systems, and especially deep learning systems, have reached a satisfactory black-box performance, and their designers now try to progress from that, to understand what happens inside these systems. However, the fellow in the AI box remains unknown.
In his MIT Technology Review (April 2017) column, Will Knight opens his story “The Dark Secret at the Heart of AI” with this lede: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” As an example, he uses the self-driving car designed by NVIDIA, which used deep learning to learn by observing how humans drive. Knight then asks: “But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why.”
With self-driving cars, this “dark learning” can be a multi-layered problem, extending from simple pattern recognition to traffic safety and the ethics of decision making. However, we hear less often about similar problems from industrial plants, service providers or other organizations launching strong AI to run their processes.
We don’t know how best to talk to AI systems, nor how they should talk to us. We don’t know what new and valuable knowledge the best AI systems would have to tell us – if they had a means to do it and we could understand it. This is no longer a distant philosophical question but a very practical one, as we have seen in complex industrial settings. AI can be taught to follow instructions and complex rule sets, and even to have a reasonable-sounding conversation with us, but communicating with it becomes more difficult, especially in the case of deep learning systems running large-scale processes. The MIT story reminds us that this is about building trust in the AI system – but it is essentially human trust, similar to what we experience towards our cars – and tools are needed for it.
It is easy to see when an AI system outperforms human operators in speed, accuracy and the quality of output in a specific task. Standard methods and metrics cover these basic measures; interpreting the performance data may not be that simple: why was the AI system able to perform so well? Did it use the same information the human operators have always used, only more efficiently, or did it find its own ways of deriving new knowledge from what it had learned during its teaching/learning process? We should know whether it has learned potential, dormant behaviors – ones which have not yet occurred – that are harmful to the system. Curiously enough, many technologies show implicit trust in their users: their design assumes that the users do not act in certain (dangerous or stupid) ways even though they easily could.
First, second and third order learning phases of AI in industrial practice
When an AI system is taught by feeding it offline data, and later real-time data from a real process, it is learning to behave as expected and to produce the hoped-for beneficial input-output reactions. We call this the first order learning of AI: it does not yet outperform the ‘old’ practices. The operators can compare the system performance data, as it occurred under manual or semi-manual control, against the data obtained with the AI system running. Based on this comparison, the AI system can be tuned and given additional teaching material, the quality of the input data can be improved, and any needs for additional data sources can be identified.
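As a minimal sketch of this comparison step (the metric names and figures below are invented for illustration, not taken from any real site), the performance data recorded under manual control can be compared against the same metric recorded with the AI system running:

```python
from statistics import mean

def compare_runs(baseline, ai_run, higher_is_better=True):
    """Compare a performance metric from the human-controlled baseline
    against the same metric from the AI-controlled run.
    Returns the relative improvement (positive = the AI run is better)."""
    b, a = mean(baseline), mean(ai_run)
    delta = (a - b) if higher_is_better else (b - a)
    return delta / abs(b)

# Hypothetical hourly yield figures (units per hour) for the two regimes
manual_yield = [94, 96, 95, 93, 97]
ai_yield     = [99, 101, 100, 98, 102]

improvement = compare_runs(manual_yield, ai_yield)
print(f"relative improvement: {improvement:.1%}")  # → 5.3%
```

The same function applies to metrics where lower is better (defect rate, energy use) via the `higher_is_better` flag; in practice each site would track several such metrics side by side.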
This is a relatively complex, socio-technical development phase where the immediate aim is to guide the AI system so that it can match and eventually outperform humans. When it is implemented for the first time at the site, it is natural to follow and rely on the same performance metrics and other work practices that were used when the process was controlled by human operators.
Because of the huge data output capacity of the AI system, specific user interfaces (UIs) are needed, with the capacity to display its functions and performance and to allow safe control of them. New data representations and system controls must be used and tested. However, there is no standard defining what the output of an AI-controlled system should look like or what its core purpose should be. A running AI system can provide real-time and computed data from thousands of measurement points, which it must show to the operators, whose task is to interpret the flow of information and evaluate the system performance. But of course, humans cannot follow that much data. Effective representations, condensed, packed and informative, must be invented, just as in nuclear plants, for example. But what should such a UI be like, and how should it be used?
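One hedged sketch of such a condensed representation (the sensor groups and operating limits are invented for illustration): aggregate each group of raw measurement points into a few numbers and an alarm flag that an operator can scan at a glance, instead of watching thousands of individual readings:

```python
from statistics import mean, pstdev

def condense(readings, limits):
    """Condense raw per-sensor readings into a compact summary an
    operator can scan: mean, spread, and an out-of-limits flag per group."""
    summary = {}
    for group, values in readings.items():
        lo, hi = limits[group]
        summary[group] = {
            "mean": round(mean(values), 2),
            "spread": round(pstdev(values), 2),
            "alarm": any(v < lo or v > hi for v in values),
        }
    return summary

# Hypothetical sensor groups with plausible operating limits
readings = {
    "temperature": [71.2, 70.8, 72.5, 95.0],   # one spike above the limit
    "pressure":    [2.1, 2.0, 2.2, 2.1],
}
limits = {"temperature": (60.0, 90.0), "pressure": (1.5, 3.0)}

for group, s in condense(readings, limits).items():
    print(group, s)
```

A real control-room UI would of course layer trends, history and drill-down views on top of such summaries; the point is only that the condensation step is a design decision, not something the AI system provides by itself.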
It is our belief that the UIs of AI will evolve fast, and new concepts and forms will be continuously introduced to serve specific AI contexts. They will have a crucial, even competitive role in supporting human work and collaboration and in helping the personnel understand the new control environment. Investing time and resources in well-designed UIs can have a major impact on system performance. There are good grounds to take this very seriously, especially at large-scale industrial and production sites: when AI is introduced, it often touches a major part of the organization. During the first order learning phase the initial UIs are designed on site, but the need to improve them quickly becomes apparent.
The second order learning phase has a special nature, since it is the moment when the AI system, for the first time, outperforms human operators. This poses a new high-level challenge: how and why did the AI system reach such a good performance? What data and controls were most informative for it, and how should its performance now be described and represented? How should this new AI knowledge be presented to the operators? What kinds of representations are best for describing the important states and performance of the AI and the processes it controls? Can the system offer guidance to the site personnel on how to improve the process and the infrastructure? What ways should the operators be offered for controlling the AI-based processes? Is it at all reasonable to consider the interaction between the operators and the AI as a UI problem, or is some other approach needed?
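One standard way to probe the question “what data was most informative for it?” is permutation importance: shuffle one input at a time and measure how much the black-box score drops. A minimal sketch with a toy stand-in model (the model, data and scoring function are all invented for illustration):

```python
import random
from statistics import mean

def permutation_importance(predict, X, y, score, n_repeats=10, seed=0):
    """Shuffle one input column at a time and measure how much the
    model's score drops: columns whose shuffling hurts most are the
    ones the black box relies on."""
    rng = random.Random(seed)
    base = score(predict, X, y)
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[:] for row in X]
            perm = [row[col] for row in shuffled]
            rng.shuffle(perm)
            for row, v in zip(shuffled, perm):
                row[col] = v
            drops.append(base - score(predict, shuffled, y))
        importances.append(mean(drops))
    return importances

def neg_mse(predict, X, y):
    # Negative mean squared error, so that higher is better
    return -mean((predict(row) - target) ** 2 for row, target in zip(X, y))

# Toy "black box" that, unknown to the observer, uses only the first input
predict = lambda row: row[0]

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(50)]
y = [row[0] for row in X]   # the process output depends only on input 0

imp = permutation_importance(predict, X, y, neg_mse)
# imp[0] is clearly positive, imp[1] is ~0: input 0 carried the information
```

Probes like this do not open the black box, but they at least rank the measurement points by how much the system's performance depends on them, which is one concrete form the "new AI knowledge" can take for the operators.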
This is only the beginning of a major transformation in AI–human interaction, and it touches the participating designers, engineers, operators and AI specialists. We have earlier described a triad model of learning (Nyman & Kuittinen, 2018) in which the operators, the designers and those responsible for the engineering and implementation of the AI system work together, learning from each other and from the AI system. Now this situation is at hand again, but the context has profoundly changed: the AI system does what it was intended to do, outperforming the human operators. What does this mean for the triad community, what should be done next, and how should it prepare for its future?
This is the start of the third order learning phase, where the learning extends to the organization at large. It becomes a major organizational and management task to reorganize under the pressure of emerging new needs and possibilities. The reason for this is simple: an effective AI system introduces new constraints on the industrial plant or service provider, which must now re-organize its work, modify its organizational structure, and secure the new competences needed to build the processes around the running AI system. It can be a time to rethink the business model as well.
During this race for performance and quality it is often forgotten that the organization must be prepared to change to a better AI technology when it is useful and possible. Finally, with the fast advancement of AI, we must make sure that we do not end up becoming its victims but can instead co-work with AI to secure a good life and work with human purpose.