The very basic research
March 30, 2011
Why not boost university-business collaboration by building an impressive research and development unit of a large company, with a known brand, in the middle of a university campus? Or by moving a university unit into the middle of an industry campus? This would guarantee material and immaterial exchange between these two worlds. We could be thinking of such revolutionary possibilities at our own home universities.
These inspiring ideas are not from my own university; they were presented by Professor Francesco Profumo, Rector of the Politecnico di Torino, the home base of Fiat. And by that suggestion he did not mean Fiat, but a well-known car brand from the USA! Professor Profumo was one of the most inspiring invited speakers at the recent EU meeting in Brussels discussing the relationship between universities, business life, and industry. The first of these ideas is already a flourishing reality in Torino.
I have just arrived from this conference, which carried the unholy title – from a puritan academic perspective – of “University-Business Forum (UBF)”. There, 400 participants – leaders, researchers, and practitioners from academia, industry, EU offices, and ministries – got together for the fourth time to meet the challenge of creating a happy marriage between two different universes: science and business.
One could see a diagnostic significance in the fact that I was the only representative from my own university, the University of Helsinki, and that I even paid for the trip from my own project money: the last time I asked, our officials had neither the money nor the interest to cover the costs of such trips. This was nothing new in the present university management mentality; the same happened when I tried to get funding for a trip to Stockholm, where I participated in a Swedish-Finnish collaboration meeting on building projects to study the future biosociety. There is a passivity about participating in this revolution, and some may even think of it as mere “efficiency” or a “private interest”. To me it appears as strategic blindness.
So, what more than these stimulating obstacles does an inquiring mind need to be even more motivated to face these inspiring future challenges? The world of science and knowledge creation changes fast, and we meet daily the demanding questions: what will the future forms of knowledge be, and where are the knowledge-creating spaces, forms, and processes? Where will the knowledge sources be, how are they accessed, how do we create new knowledge together, and how is this knowledge shared, valued, applied, protected, and multiplied? For a curious mind, it is impossible to observe this drama as a bystander and remain an academic vegetable.
Not so long ago, it was clear that universities should stick to their basic research. Some other people or organizations could, of course, become interested in the results – because of promising profit forecasts – and start commercializing the available basic research knowledge and findings. According to this scheme, the academic researcher must not be contaminated by R&D, the “dirty” business processes, or the money involved.
Indeed, the word “dirty” is not mere imagination in this context: it was clearly and painfully familiar to some of the company representatives who cited it at this recent UBF meeting. The European etymology of the attribute is a mystery to me, but it is clearly used to criticize researchers who work in close connection with companies. When this gloomy attribute is used, it carries between the lines the message that basic research money is somehow “clean”, by unknown and mysterious ways.
I have my own experience of this war of words in my department. While doing research on the impact of high-quality magazine paper on reader experience and quality perception, I learned that some colleagues called our research “toilet paper research”. The company M-real, for whom we worked, also had a known and successful brand in that sensitive product field, although we did not study its quality perception. There is a very short semantic distance between the words “toilet” and “dirty” in the minds of such malevolent people, and they know it.
An implicit belief in many of these arguments is that geniuses and masterminds live only in the basic research labs. This view has its historical roots – the medieval Catholic Church is a good example – and it is still typical to separate the history of science from the history of innovations or technology. You are not supposed to start your scientific career in a garage, yet a garage is a heroic place for an innovative technologist. But from the perspective of the creative brain there is no difference between these acts of inspiration, wisdom, and imagination.
According to the orthodox basic research scheme, basic research and applied research (some of my colleagues might even hesitate to call applied work research at all) live their own lives and comply with their own values. But ever more often it is not so simple.
Researchers are curious, creative, and learning creatures, and for many of them the search for novelty and new solutions is more important than the breed of “researchers” they belong to. This type of behavior will be amplified by the breakthrough of the new-generation Internet, open X, and the new, border-breaking collaboration possibilities. Magnificent human and social powers will be released, and the question “Is it basic research?” will have no meaning when the new question becomes “Why do you research that?” This has only started to happen, but the speed of development is accelerating, and one is tempted to see this as our future form of research education and culture.
Why do I worry? My takeaway from the UBF in Brussels was that there is a genuine European science neurosis that expresses itself as a fear that basic research will suffer and lose its significant position in the world of science. At the same time, little worry is spent on wondering what the real mechanism is by which EU money reaches research projects – through consultant guidance, massive offices, courses, open networks, and well-prepared, bureaucratically functional applications.
It is a fear of applied research money, industry interests, and entrepreneurs. It is a fear similar to the one entertained by people afraid of immigrants and unfamiliar cultures. In Finland it has been expressed by the Academy of Finland in a statement that our science is too strongly directed by applied research interests and money.
There are various opinions about Silicon Valley, its history, and its present nature, but it is an unavoidable example, especially to us Europeans, of a vital ecosystem where basic research, applied research, R&D, business, and marketing have succeeded in living side by side and benefiting from each other.
But Silicon Valley is not only a business and technology environment; it is most of all an environment for people to be inspired, involved, and accepted – with one condition: you have to have something fresh to offer or promise. There is a real demand for revolutionary knowledge, and people are deliciously aware of this. What happens when such new knowledge is offered is another story.
The Silicon Valley ecosystem is based on the elements that we could adopt in building our future environment for basic research and industry/business collaboration. But this task is not a matter of University-Business collaboration only: it is a question of creating a new form of knowledge life.
These future ecosystems for basic and applied research can be built on the following foundations:
- A firm economic and spiritual ground for basic research. Applied research can make profits fast, and its economic and human time constants are significantly shorter than those of basic research. Because of this, it is necessary to invent business models that guarantee a sustainable position for basic research.
- An economic environment with an ethically sustainable incentive code. This is crucial in integrating basic research and industry/business-oriented application work. The challenge of ethics does not concern applied research only.
- Experiments with new forms of ownership where material and immaterial capital values are in balance. Today this is not the case: anyone with the slightest material investment can expect significant profits, while a major immaterial investment (time, knowledge, experience, network) is treated haphazardly. This is an unsustainable situation and needs to be corrected if we aim at creating healthy ecosystems for basic and applied research.
- A social platform that encourages cultural mobility within the research community. Dominant paradigms become closed systems – methodologically, economically, and in their governance – that should be opened up by suitable incentive systems.
- Educating industry and business life about the potential, cultures, and development processes in these new environments.
- Helping the young generations of students to adopt the multi-dimensional value system that this unavoidable development requires.
Weak signals or weak theory of observation?
March 19, 2011
“Signals can be weak to the eyes but strong to the observer”
In times of disaster we are repeatedly faced with the question: “Why didn’t we see it coming?” The question is being asked again in facing the impact of the Japanese earthquake, the second massive tsunami within 10 years, and the large-scale nuclear catastrophe. Some might think that weak warning signals were neglected, and many have already used the psychology of the situation to organize demonstrations and arguments against nuclear power. The disaster also seems to evoke discussion on the nature of weak signals (e.g. http://www.wfs.org/content/third-lesson).
The MIT Sloan Management Review story “How to Make Sense of Weak Signals” (Paul J. H. Schoemaker and George S. Day, April 1, 2009) is another case where the concept of “weak signals” is used to offer an analysis framework for difficult problems like ”Why … so many smart people miss[ed] the signs of the collapse of the subprime market?” At first sight, the “weak signals” framework can appear a promising concept for understanding complex, difficult-to-see, and perhaps disastrous phenomena. It blames us for being insensitive to these weak warning signs. However, I will briefly explain why I believe it is actually a weak, ill-formulated, and even harmful approach that can lead to a misleading framing of the problems at hand. In particular, the implications of such a framework and thought model can offer an illusory understanding of the underlying complex phenomena and lead to a state of theoretical helplessness.
Instead of the “weak signal metaphor” (WSM), we need a theory of observation: a theory that explains how complex, distributed, non-linear, and inter-related phenomena are actually observed and how they could be optimally observed. By observation I mean the reliable and systematic collecting, sensing, recording, gathering, and recovering of environmental (social, economic, political, or other) information that is not irrecoverably noisy. Such information can still be non-linear, interdependent, and complex in nature.
In the business strategy literature, a related concept was introduced by Yves Doz and Mikko Kosonen in their book ”Fast Strategy”, where they use the concept of “strategic sensitivity”: an ability for strategic perception, a kind of open-mindedness to strategic issues in the environment or in the company itself. Their approach is to involve a wide spectrum of organizational processes, objects, and other entities in order to maintain a sensitive relationship with the environment.
Unfortunately, to the best of my knowledge, a generally accepted, observer-based theory of observation does not exist in perceptual psychology, decision-making, strategy analysis, or in theoretical physics either. Of course there are several theories available in these research fields, but there is a lack of theories that would relate the observer characteristics, the environment, and the nature of this relationship in a way that would explain why we have the kind of theories of the world that we now have. I believe that by building such a theoretical framework, the focus in dealing with complex and dynamic problems will shift to the observational frameworks or architectures, with both a local/focused and a global/distributed perspective.
There is no doubt that in the financial crisis, for example, numerous easily observable phenomena and accurate observers existed. One could even take an inverse version of Doz & Kosonen’s “strategic sensitivity” concept and ask, in relation to the financial crisis: which were the processes, customers, partners, commentators, ventures, industries, networks, and other entities that formed the architecture via which “observation” of the crisis development took place (even though these perceptions were not used to avoid the crisis)? One could even argue, from this perspective, that the observational architecture was distorted, which led to the disastrous consequences.
An excellent description of the concrete events related to the financial crisis was given at CISAC/Stanford on April 1, 2010 by Charles Perrow: “Markets, Information, and the Spreading of Risks: The Economic Meltdown and Organizational Theory”. It was not about weak signals; they were loud and clear, but there was no theory of observation or description of the observation architecture available that would have helped to see them. Because the relevant observers were not included in the observational architecture, their impacts (which actually were huge) remained “weak” or non-existent to unprepared observers. Such cases can provide illusory evidence for the existence of “weak signals”.
The problem is not a lack of theories of observation. The real problem is that they are implicit and fuzzy, they deal with non-problematic processes, they are biased by stakeholder values, and they often involve simple perception and measurement models; typically, only after a disaster or a surprise does a need arise to make them explicit. But by then the time window for making valuable observations has already closed, and returning to business as usual means a return to point zero – perhaps with slight system-parameter adjustments, new monitoring tools like the stress tests of banks (which have already failed in Europe), and early warning systems for tsunamis (which did not help in Japan). Sometimes even a complete value blindness can take place in the middle of a catastrophe, as in Europe, where the German and French banks – who do not share their profits with the rest of us Europeans – still want us to cover the economic costs of their risky behavior.
What does the concept of a weak signal mean and imply?
The weak signal metaphor is often applied to wicked futuristic problems, but it is also used to analyze significant historical events like the origins of the recent financial crisis in the US. Futures researchers can use it in a loose statistical sense, for example by assuming that weak signals are “weak” simply because they are below the noise level of the system and consequently difficult to observe or discriminate. It is believed that we have somehow missed the critical warning signs or other signals that would have informed us of the coming events and that could have provided a strategic advantage (cf. Ansoff 1975).
A potential solution is then to accumulate enough such signals – by asking knowledgeable people, as in the Delphi method, for example – so that it would somehow become possible to detect them, if only we have enough of them carrying the same information about an event or a development. Interestingly, in the Delphi method the knowledge sources are selected on the basis of their assumed potential and competence in the problem at hand, so even there a hidden theory of observation underlies the method. But that theory is not developed seriously or in depth. Overall, it is often implied that by improving the signal-to-noise ratio – by accumulating the talks of the wise men, for example – the noise could be removed or prevented from masking the (relevant) weak signals.
What is wrong with the weak signals metaphor (WSM)?
As a simple example, in EEG studies of the human brain it is customary to record “weak signals” of the brain that result from the stimulation of our senses by simple sensory events. These subtle electric brain responses to any single sound or light are so noisy that they are not immediately visible or detectable in the raw recording data. But when the same stimulation is repeated a number of times, a well-formed “average response” can be uncovered from the noisy single responses by summing up the time-locked electric brain responses to all these single events. One could say that a weak signal in the brain has been identified. The noise metaphor of WSM would thus imply that it is perhaps possible to apply a similar type of summing or coordinated collection of signal information from numerous sources to uncover the weak signals hidden below the noise level.
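To make the averaging idea concrete, here is a minimal numerical sketch (my illustration, not from any actual EEG study; the trial counts, amplitudes, and noise levels are assumptions) of how a time-locked response far below the single-trial noise level emerges in the trial average:

```python
# Time-locked averaging: the noise shrinks roughly as 1/sqrt(number of trials),
# so a response invisible in any single trial appears in the average.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 500, 400
t = np.linspace(0.0, 1.0, n_samples)        # one second per trial

# Assumed "true" evoked response: a small bump, far below the noise level.
signal = 0.3 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))

# Each trial = the identical time-locked signal + independent noise (sd = 1.0).
trials = signal + rng.normal(0.0, 1.0, size=(n_trials, n_samples))

average = trials.mean(axis=0)
print("single-trial peak:", round(trials[0].max(), 2))   # dominated by noise
print("average peak     :", round(average.max(), 2))     # roughly 0.3
```

Note the assumption doing all the work: every trial must contain the very same response at the very same moment.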
But there is a serious problem: how do we know that all the single, independent brain events evoked by the stimulation are similar in their essential characteristics? We don’t, and at least to me there is good reason to believe that they are not. What if these minuscule brain responses are all incomplete, distributed in some wicked way, or otherwise distorted in some unknown sense? Most signal analysis paradigms in brain research simply assume that they are not, and that they have similar or at least “friendly” statistical characteristics. This assumption is necessary in order to conduct the noise-removing summing process. In academic brain research exercises one can take the risk of assuming such things; in predicting an earthquake or a nuclear plant accident one cannot: the costs of failure are too large.
It is indeed more than just possible that some relevant information is lost in the summation (e.g. due to nonlinearities, varying internal decision contexts, complex interdependencies between situations and stimulation effects, between individuals, and many more). We simply don’t know. Clearly, for the WSM such simple statistical analogies just would not work; the sketch below illustrates one such failure.
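Continuing the toy sketch above, here is one concrete way the “identical responses” assumption can fail: if the single responses carry a random latency jitter (again an assumption of mine, with made-up numbers), plain averaging attenuates and smears the recovered response even though every trial contains a full-strength signal:

```python
# When the single responses are NOT identical (here: random latency jitter),
# the average no longer recovers the true response amplitude or shape.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 500, 400
t = np.linspace(0.0, 1.0, n_samples)

def response(latency):
    # Same bump as before, but its peak time now varies from trial to trial.
    return 0.3 * np.exp(-((t - latency) ** 2) / (2 * 0.03 ** 2))

latencies = rng.normal(0.3, 0.08, size=n_trials)         # assumed jitter
trials = np.stack([response(lat) for lat in latencies])
trials += rng.normal(0.0, 1.0, size=trials.shape)

average = trials.mean(axis=0)
print("true single-response peak  : 0.30")
print("peak of the jittered average:", round(average.max(), 2))  # clearly lower
```

The information is not gone – each trial still holds a complete response – but the summing model can no longer see it.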
Another problem is that weak signals can occur in the context of complex economic, social, or other large-scale phenomena in which the signals do not actually mix. Hence we cannot assume that they behave statistically “nicely”, and what is even worse, they can originate from completely separate and qualitatively different sources (individual economists, politicians, organizations, statistics) that are totally incompatible with any simple analysis. What is really interesting in such cases is that despite the incompatibility of the sources, each of these channels can be a high-quality observational unit, that is, one where accurate and valuable observations take place. The problem is: how could we know which observations are accurate and valuable? This is where we need to build a theory of observation.
One could further argue that the WSM is still valid but we should apply more intelligent statistical and mathematical analysis models in order to detect the underlying weak signals. Indeed, in research fields like the brain sciences, visual system modeling, neural network analysis, robot vision, and signal analysis in general, there is a multitude of approaches for dealing with such signals. One of the most inspiring signal analysis paradigms, even as a metaphor for futures researchers, is Independent Component Analysis (ICA; cf. Hyvärinen, Karhunen & Oja (2001): Independent Component Analysis, Wiley). What if we do have some valuable knowledge of the properties of the signals we are interested in? The basic assumptions and the applicability of the ICA method are an excellent, educational metaphor for thinking about such situations, although the linearity constraints of ICA show how problematic natural, non-linear signals are and how resistant they are to observation. And I don’t know how ICA might work with class/qualitative data. But ICA, even as a metaphor, has a number of lessons to tell the proponents of weak signal analysis.
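For readers unfamiliar with ICA, here is a minimal sketch of the idea using scikit-learn’s FastICA implementation (the toy sources and the mixing matrix are my assumptions; only the linear-mixture setting mentioned in the text is essential):

```python
# ICA in a nutshell: recover independent source signals from linear mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two statistically independent, non-Gaussian sources...
s1 = np.sign(np.sin(3 * t))            # square wave
s2 = np.sin(5 * t)                     # sinusoid
S = np.c_[s1, s2]

# ...observed only as linear mixtures (the key linearity assumption of ICA).
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])             # in practice the mixing is unknown
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)           # recovered sources (up to sign/scale/order)

# Correlation with the true sources shows the unmixing worked.
for i in range(2):
    corrs = [abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]) for j in range(2)]
    print(f"estimated component {i}: best match corr = {max(corrs):.2f}")
```

Note how much has to be granted for this to work: the mixing must be linear and the sources independent and non-Gaussian – exactly the kind of strong prior knowledge that complex social or economic “signals” rarely offer.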
Elements of a general theory of observation for the study of complex and non-linear natural phenomena
Of course it is not possible to introduce a complete theory here – and I have not tried elsewhere either – but I will take up some of the essential (rather theoretical) elements that I believe should constitute such a set of theories. The aim of this scheme is to suggest some basic principles and the value of a theory of observation. It is worthwhile to develop this approach.
It may sound surprising, but to my knowledge, even in the theory of quantum mechanics there is no observer-centered theory available that deals with the properties of the human (animal, alien) observer and their consequences for the development of physical theories. So the problem is far from an easy one. As far as I can understand, a theory of the observer (human, alien, or another system) and observation should be the cornerstone of any theory development in physics. Here are the basic elements of a theory of observation as I see them:
- An observer is any system that interacts with its environment.
- An observation can be defined as a relationship between an observer and her environment, but it is always defined by a (reference) system that is external (although interconnected) to the observer. In quantum physics you might say that there is an energy-related relationship between the observer and the environment. However, any observer-environment relationship can have many interpretations, so there is no unique observation of a complex environment. When we want to obtain valuable information about a certain process in our (social, biological, economic) environment, we first need to identify the observers that interact with the environment we are interested in. Then we need to define the observations we are interested in and the perspectives from which they can be perceived. For example, it appears to me that our scientists do not know which observers and observations to look for in order to predict earthquakes. In the same sense, economists and politicians did not know the observers that were close to the critical financial system processes (afterwards these have been identified, at least partly). Failing to identify the relevant observers is the mistake of the first order, perhaps the most typical one in many cases. That is why hindsight is such a delicious activity.
- All observers possess limited-capacity “sensory” channels that interact with the environment. The state of a channel and the state of its environment are dependent on each other. It is possible and often even necessary to see the environment as the observer and the observer as the environment.
- Because of the limited-capacity channels, all collected information – observations – involves an inverse problem: it is not possible to compute the state/space of the source on the basis of any single channel’s activity, or even from a composition of them (the numerical sketch after this list illustrates the point). In the financial crisis, for example, because of this property no collection of observations could have fully predicted the crisis. In a sense there was no way “to see it coming”, although observations would have been available to help prepare for it. This is simply a result of the fact that the outcome state was not, and could not be, defined or determined from the same state/spaces where the observations took place.
- Because of the inverse problem, the theory of the observer and of observation determines the means by which optimal approximate information can be collected from the source states and their state space.
- A theory of observation describes how the observer is connected with its environment and on the basis of that it can describe what type of knowledge this observer can provide and which factors constrain it.
- In order to study complex phenomena, in the same sense as the WSM aims to do, the theory of observation suggests means by which the limited-capacity observer effect can be compensated.
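As a small numerical sketch of the inverse problem named in the list above (my illustration, with an assumed toy observation matrix), two genuinely different source states can be indistinguishable through limited-capacity channels:

```python
# A 3-dimensional source state observed through only 2 channels:
# the observations alone cannot determine the source state.
import numpy as np

C = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])         # assumed channel (observation) matrix

state_a = np.array([1.0, 2.0, 0.0])
state_b = np.array([0.0, 1.0, 1.0])     # a different source state

print(C @ state_a)   # [1. 2.]
print(C @ state_b)   # [1. 2.]  -> identical observations

# No re-analysis of these two numbers can tell the states apart; extra
# assumptions (a theory of observation) must supply the missing structure.
```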
The idea that any system interacting with its environment can be treated as an observer, and vice versa, might sound weird. However, this is a very basic property of any biological organism, for example, and it is only a matter of definition what we call sensory processes.
Trying to be more practical
A theory does not help much if it has few practical guidelines to offer. So I take again, as my amateur example, the financial crisis that Charles Perrow also used in his talk (cf. above). As I am not a specialist in this field, I will offer a few questions for which a theory of observation in this context should be able to provide useful guidelines – guidelines that could help us better see what is difficult to see in the past or in the future.
Following the theoretical scheme above, the following general questions need to be answered in this special case:
- What kind of interaction architecture underlies the observations? What are its basic elements and relationships?
- Who/what were the relevant observers? What was the nature of the interaction that they had with their relevant environment?
- How (by whom, by what) were these observers defined externally? Which instances defined them?
- What information channels do these observers possess? How are they limited?
- What is the nature of the inverse problem in the case of these observers? What problems does it create in interpreting their observations?
- What is the theory by which optimal collection of information from these sources has been (can be) organized?
- How is the information from the observations integrated and represented?
- What is the decision model according to which the observation data is interpreted?
One is perhaps tempted to ask “So what?”, and I am sure many of these questions have been partly answered. But the motivation of this example is to suggest a way to deal better with what is often called a “weak signal” issue, and to point out that often it is not a matter of weak signals but rather of a weak concept and theory of observation and the observer. By taking this approach I believe that, in principle, it is possible to turn our observational focus to strong observations and to collect this information in a meaningful way. Some years ago, even before the major tsunami in Thailand, we were thinking about a mobile network for human warnings about similar disasters and their developments.
Now, editing this on July 11, 2011, I have just read of a promising example of an earthquake sensor system, the “Quake Catcher Network”, by a Stanford team, in which thousands of miniature sensors connected as a network will provide a new type of holistic and distributed perception/observation system for sensing, communicating, and analyzing earthquake-related information for numerous uses. They also invite people in the Bay Area to join the project at http://qcn.stanford.edu/index.php. These are inevitably significant first signs of an observation theory becoming ever more practical. More will follow, we can be sure of that.
Finally, someone might still suggest that we can use the term “weak signals” loosely, just as a metaphor. But why then use it at all?