Weak signals or weak theory of observation?
March 19, 2011
“Signals can be weak to the eyes but strong to the observer”
In times of disaster we are repeatedly faced with the question: "Why didn't we see it coming?" The question is being asked again in the face of the Japanese earthquake, the second massive tsunami within ten years, and the large-scale nuclear catastrophe. Some might think that weak warning signals were neglected, and many have already used the psychology of the situation to organize demonstrations and arguments against nuclear power. The disaster also seems to evoke discussion on the nature of weak signals (e.g. http://www.wfs.org/content/third-lesson).
The MIT Sloan Management Review story "How to Make Sense of Weak Signals" (Paul J.H. Schoemaker and George S. Day, April 1, 2009) is another case where the concept of "weak signals" is used to offer an analysis framework for difficult problems like "Why … so many smart people miss[ed] the signs of the collapse of the subprime market?" At first sight the "weak signals" framework can appear to be a promising concept for understanding complex, difficult-to-see, and perhaps disastrous phenomena. It blames us for being insensitive to these weak warning signs. However, I will briefly explain here why I believe it is actually a weak, ill-formulated, and even harmful approach that can lead to a misleading framing of the problems at hand. In particular, the implications of such a framework and thought model can offer an illusory understanding of the underlying complex phenomena and lead to a state of theoretical helplessness.
Instead of the "weak signal metaphor" (WSM) we need a theory of observation: a theory that explains how complex, distributed, non-linear, and inter-related phenomena are actually observed and how they could be optimally observed. By observation I mean the reliable and systematic collecting, sensing, recording, gathering, and recovering of environmental (social, economic, political, or other) information that is not irrecoverably noisy. Such information can still be non-linear, interdependent, and complex in nature.
In the business strategy literature, a related concept was introduced by Yves Doz and Mikko Kosonen in their book "Fast Strategy", where they use the concept of "strategic sensitivity", which can be viewed as a capacity for strategic perception, a kind of open-mindedness to strategic issues in the environment or in the company itself. Their approach is to involve a wide spectrum of organizational processes, objects, and other entities in order to maintain a sensitive relationship with the environment.
Unfortunately, to the best of my knowledge, a generally accepted, observer-based theory of observation does not exist in perceptual psychology, decision-making, strategy analysis, or in theoretical physics either. Of course there are several theories available in these research fields, but there is a lack of theories that would relate the observer characteristics, the environment, and the nature of this relationship in a way that would explain why we have the kind of theories of the world that we now have. I believe that by building such a theoretical framework, the focus in trying to deal with complex and dynamic problems will shift to observational frameworks or architectures, with both a local/focused and a global/distributed perspective.
There is no doubt that in the financial crisis, for example, numerous easily observable phenomena and accurate observers existed. One could even take an inverse version of Doz & Kosonen's "strategic sensitivity" concept and ask, with respect to the financial crisis: which were the processes, customers, partners, commentators, ventures, industries, networks, and other entities that formed the architecture through which "observation" of the developing crisis took place (even though these perceptions were not used to avoid the crisis)? One could even argue, from this perspective, that the observational architecture was distorted, which led to the disastrous consequences.
An excellent description of the concrete events of the financial crisis was given at CISAC/Stanford on April 1, 2010 by Charles Perrow: "Markets, Information, and the Spreading of Risks: The Economic Meltdown and Organizational Theory". It was not about weak signals; they were loud and clear, but there was no theory of observation, and no description of the observation architecture, that would have helped to see them. Because the relevant observers were not included in the observational architecture, their impacts (which actually were huge) remained "weak" or non-existent to unprepared observers. Such cases can provide illusory evidence for the existence of "weak signals".
The problem is not a lack of theories of observation. The real problem is that they are implicit and fuzzy, they deal with non-problematic processes, they are biased by stakeholder values, and they often involve simple perception and measurement models. Typically, only after a disaster or a surprise does a need arise to make them explicit. But by then the time window for making valuable observations has already closed, and returning to business as usual means a return to point zero, perhaps with slight system-parameter adjustments, new monitoring tools like the stress tests of banks (which have already failed in Europe), and early-warning systems for tsunamis (which did not help in Japan). Sometimes even complete value blindness can take place in the middle of a catastrophe, as in Europe, where the German and French banks – which do not share their profits with the rest of us Europeans – still want us to cover the economic costs of their risky behavior.
What does the concept of a weak signal mean and imply?
The weak signal metaphor is often applied to wicked futuristic problems, but it is also used to analyze significant historical events like the origins of the recent financial crisis in the US. Futures researchers can use it in a loose statistical sense, for example by assuming that weak signals are "weak" simply because they are below the noise level of the system and are consequently difficult to observe or discriminate. It is believed that we have somehow missed the critical warning signs or other signals that would have informed us of the coming events and could have provided strategic advantage (cf. Ansoff 1975).
A potential solution is then to accumulate enough such signals – by asking knowledgeable people, as in the Delphi method, for example – so that it would somehow be possible to detect them, if only we have enough of them carrying the same information about an event or a development. Interestingly, in the Delphi method the knowledge sources are selected on the basis of their assumed potential and competence in the problem at hand, so even there a hidden theory of observation underlies the method. But that theory is not developed seriously or in depth. Overall, it is often implied that by improving the signal-to-noise ratio – by accumulating the talk of wise men, for example – the noise could be removed or prevented from masking the (relevant) weak signals.
What is wrong with the weak signals metaphor (WSM)?
As a simple example, in EEG studies of the human brain it is customary to record "weak signals" of the brain that result from the stimulation of our senses by simple sensory events. These subtle electric brain responses to any single sound or light are so noisy that they are not immediately visible or detectable in the raw recording data. But when the same stimulation is repeated a number of times, by summing up the time-locked electric brain responses to all these single events, a well-formed "average response" can be uncovered from the noisy single responses. One could say that a weak signal in the brain has been identified. The noise metaphor of WSM would thus imply that it is perhaps possible to apply a similar kind of summing or coordinated collection of signal information from numerous sources, which would help to uncover the weak signals hidden below the noise level.
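The averaging logic can be sketched numerically. This is a toy simulation with made-up amplitudes (a 2 µV "response" buried in noise ten times larger), not real EEG data, but it shows why repetition uncovers the hidden waveform:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 0.5, 250)                     # one 0.5 s epoch
signal = 2e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)   # a 2 µV "evoked response"
noise_sd = 20e-6                                   # background activity ~10x larger

# A single trial: the response is invisible in the raw data.
single = signal + rng.normal(0.0, noise_sd, t.size)

# Average of many time-locked trials: noise shrinks as 1/sqrt(n).
n = 400
trials = signal + rng.normal(0.0, noise_sd, (n, t.size))
average = trials.mean(axis=0)

snr_single = signal.max() / noise_sd
snr_avg = signal.max() / (noise_sd / np.sqrt(n))
print(f"single-trial SNR ~ {snr_single:.2f}, averaged SNR ~ {snr_avg:.1f}")
```

The crucial fine print, discussed below, is that this works only because every simulated trial contains the *same* underlying response.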
But there is a serious problem: how do we know that all the single, independent brain events evoked by the stimulation are similar in their essential characteristics? We don't, and at least to me, there is good reason to believe that they are not. What if these minuscule brain responses are all incomplete, distributed in some wicked way, or otherwise distorted in some unknown sense? Most signal analysis paradigms in brain research simply assume that they are not, and that they have similar or at least "friendly" statistical characteristics. This assumption is necessary in order to conduct the noise-removing summing process. In academic brain research exercises one can take the risk of assuming such things; in predicting an earthquake or a nuclear plant accident one cannot: the costs of failure are too large.
It is indeed more than just possible that some relevant information is lost in the summation (e.g. due to nonlinearities, varying internal decision contexts, complex interdependencies between situations and stimulation effects, differences between individuals, and many more). We simply don't know. Clearly, for the WSM, such simple statistical analogies just would not work.
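To make the worry concrete, here is a minimal sketch with hypothetical waveforms: even with zero noise, if the response latency merely jitters from trial to trial (so the trials are not "similar in their essential characteristics"), the average is a smeared, attenuated waveform that misrepresents every individual trial:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.5, 500)

def response(latency):
    """A clean, noise-free single-trial response peaking at `latency` seconds."""
    return np.exp(-((t - latency) / 0.01) ** 2)

# If every trial were identical, averaging would be harmless...
identical = np.mean([response(0.25) for _ in range(200)], axis=0)

# ...but if the latency jitters from trial to trial, the average is smeared:
jittered = np.mean(
    [response(0.25 + rng.normal(0.0, 0.03)) for _ in range(200)], axis=0
)

print(f"true peak amplitude:          {identical.max():.2f}")
print(f"peak of the jittered average: {jittered.max():.2f}")  # much lower, wider
```

The summation itself destroys the information about what any single response actually looked like, which is precisely the risk the WSM's noise-removal picture ignores.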
Another problem is that weak signals can occur in the context of complex economic, social, or other large-scale phenomena in such a way that the signals do not actually mix. Hence we cannot assume that they behave statistically "nicely", and what is even worse, they can originate from completely separate and qualitatively different sources (individual economists, politicians, organizations, statistics) that are totally incompatible for any simple analysis. But what is really interesting in such cases is that despite the incompatibility of the sources, each of these channels can be a high-quality observational unit, that is, one where accurate and valuable observations take place. The problem is: how could we know which observations are accurate and valuable? This is where we need to build a theory of observation.
One could further argue that WSM is still valid but that we should apply more intelligent statistical and mathematical analysis models in order to detect the underlying weak signals. Again, in research fields like brain science, visual system modeling, neural network analysis, robot vision, and signal analysis in general, there is a multitude of approaches for dealing with such signals. One of the most inspiring signal analysis paradigms, even as a metaphor for futures researchers, is Independent Component Analysis (ICA; cf. Hyvärinen, Karhunen & Oja (2001): Independent Component Analysis, Wiley). What if we do have some valuable knowledge of the properties of the signals we are interested in? The basic assumptions and the applicability of the ICA method are an excellent, educational metaphor for thinking about such situations, although the linearity constraints of ICA show how problematic natural, non-linear signals are and how resistant they are to observation. And I don't know how ICA might work with class/qualitative data. But ICA, even as a metaphor, has a number of lessons for the proponents of weak signal analysis.
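To give a flavor of what ICA assumes and delivers, here is a compact FastICA-style sketch on synthetic data (two made-up non-Gaussian sources, a fixed linear mixing matrix – exactly the linearity assumption criticized above). When those assumptions hold, the sources are recovered from the mixtures up to permutation and sign:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Two independent, non-Gaussian sources: ICA's central assumption.
s1 = np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1.5, n)  # bimodal
s2 = rng.laplace(size=n)                                         # heavy-tailed
S = np.vstack([s1, s2])
S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # "unknown" linear mixing
X = A @ S                       # the mixtures the observer actually records

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA iteration with a tanh contrast and symmetric decorrelation.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                  # re-orthogonalize: W <- (W W^T)^(-1/2) W

recovered = W @ Z               # matches S up to permutation and sign
```

Drop the linearity or independence of the sources and this machinery gives no such guarantee – which is the educational point for natural, non-linear signals.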
Elements of a general theory of observation for the study of complex and non-linear natural phenomena
Of course it is not possible to introduce a complete theory here, and I have not attempted it elsewhere either, but I will take up some of the essential (rather theoretical) elements that I believe should constitute such a set of theories. The aim of this scheme is to suggest some basic principles and the value of a theory of observation. It is worthwhile to develop this approach.
It may sound surprising, but to my knowledge, even in the theory of quantum mechanics there is no observer-centered theory available that deals with the properties of the human (animal, alien) observer and their consequences for the development of physical theories. So the problem is far from an easy one. As far as I can understand, a theory of the observer (human, alien, or another system) and of observation should be the cornerstone of any theory development in physics. Here are the basic elements of the theory of observation as I see them:
- An observer is any system that interacts with its environment.
- An observation can be defined as a relationship between an observer and her environment, but it is always defined by a (reference) system that is external (although interconnected) to the observer. In quantum physics one might say that there is an energy-related relationship between the observer and the environment. However, any observer-environment relationship can have many interpretations, so that there is no unique observation of a complex environment. When we want to obtain valuable information about a certain process in our (social, biological, economic) environment, we first need to identify the observers that interact with the environment we are interested in. Then we need to define the observations we are interested in and the perspectives from which they can be perceived. For example, it appears to me that our scientists do not know which observers and observations to look for in order to predict earthquakes. In the same sense, economists and politicians did not know the observers that were close to the critical financial system processes (afterwards these have been identified, at least partly). Failing to identify the relevant observers is the mistake of the first order, perhaps the most typical one in many cases. That is why hindsight is such a delicious activity.
- All observers possess limited-capacity “sensory” channels that interact with the environment. The state of a channel and the state of its environment are dependent on each other. It is possible and often even necessary to see the environment as the observer and the observer as the environment.
- Because of the limited-capacity channels, all collected information – all observations – involves an inverse problem: it is not possible to compute the state/space of the source on the basis of any single channel's activity, or even from a composition of them. In the financial crisis, for example, because of this property, no collection of observational information could have fully predicted the crisis. In a sense there was no way "to see it coming", although observations would have been available to help prepare for it. This was simply a result of the fact that the outcome state was not, and could not be, defined or determined from the same state/spaces where the observations took place.
- Because of the inverse problem, the theory of observation and the observer determines the means by which optimum approximate information can be collected from the source states and their state space.
- A theory of observation describes how the observer is connected with its environment and on the basis of that it can describe what type of knowledge this observer can provide and which factors constrain it.
- In order to study complex phenomena, in the same sense as the WSM aims to do, the theory of observation suggests means by which the limited capacity observer effect can be compensated.
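The inverse-problem element in the list above can be illustrated with a small linear sketch (hypothetical numbers throughout): an observer with fewer channels than there are source dimensions can never single out the true source state, because entire directions of the source space are invisible to its channels:

```python
import numpy as np

rng = np.random.default_rng(3)

# Four source dimensions, but an observer with only two
# limited-capacity channels: observation = C @ source.
C = rng.standard_normal((2, 4))           # hypothetical channel matrix
s_true = np.array([1.0, -2.0, 0.5, 3.0])  # the actual source state
y = C @ s_true                            # everything the observer records

# The inverse problem: infinitely many source states explain y exactly.
_, _, Vt = np.linalg.svd(C)
null_space = Vt[2:]                       # directions invisible to the channels
s_alt = s_true + 5.0 * null_space[0]      # a very different source state...
print(np.allclose(C @ s_alt, y))          # ...indistinguishable from s_true: True

# The best single answer is only an approximation (minimum-norm estimate),
# and it generally differs from the true state.
s_hat = np.linalg.pinv(C) @ y
```

No amount of processing of `y` alone can decide between `s_true` and `s_alt`; only adding channels (changing the observational architecture) shrinks the invisible subspace.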
The idea that any system that interacts with its environment can be treated as an observer, and vice versa, might sound weird. However, this is a very basic property of any biological organism, for example, and it is only a matter of definition what we call sensory processes.
Trying to be more practical
A theory does not help much if it offers few practical guidelines. So I take again the financial crisis as my amateur example, the one Charles Perrow also used in his talk (cf. above). As I am not a specialist in this field, I will offer a few questions to which a theory of observation in this context should be able to provide useful guidelines that could help us better see what is difficult to see, in history or in the future.
Following the theoretical scheme above, the following general questions need to be answered in this special case:
- What kind of interaction architecture underlies the observations? What are its basic elements and relationships?
- Who/what were the relevant observers? What was the nature of the interaction that they had with their relevant environment?
- How (by whom, by what) were these observers defined externally? Which instances have defined these observers?
- What information channels do these observers possess? How are they limited?
- What is the nature of the inverse problem in the case of these observers? What problems does it create in interpreting their observations?
- What is the theory by which optimal collection of information from these sources has been (can be) organized?
- How is the information from the observations integrated and represented?
- What is the decision model according to which the observation data is interpreted?
One is perhaps tempted to ask "So what?", and I'm sure many of these questions have been partly answered. But the motivation of this example is to suggest a way to deal better with what is often called a "weak signal" issue, and to point out that often it is not a matter of weak signals but rather of a weak concept and theory of observation and the observer. By taking this approach, I believe it is possible, in principle, to turn our observational focus onto strong observations and to collect this information in a meaningful way. Some years ago, even before the major tsunami in Thailand, we were thinking about a mobile network for human warnings about such disasters and their development.
Now, editing this on 11 July 2011, I just read of a promising example of an earthquake sensor system, the "Quake Catcher Network", built by a Stanford team in the Bay Area, where thousands of miniature sensors connected as a network will provide a new type of holistic and distributed perception/observation system for numerous uses in sensing, communicating, and analyzing earthquake-related information. They also invite people in the Bay Area to join the project at http://qcn.stanford.edu/index.php. These are inevitably significant first signs of an observation theory that is becoming ever more practical. More will follow; we can be sure of that.
Finally, someone might still suggest that we can use the term "weak signals" loosely, just as a metaphor – but why, then, use it at all?