Comments about the article in Nature: Globally networked risks and how to respond

What follows is a discussion of this article, published in Nature Vol 497, 2 May 2013.
In the last paragraph I give my own opinion.


The article mentioned in the header starts with the following text:
Today's strongly connected, global networks have produced highly interdependent systems that we do not understand and cannot control well.
When you study Figure 1 you can see what type of connected systems the author has in mind. They include air pollution, storms, fiscal issues, economics, information, weapons, migration, etc.
IMO considering such a system is not realistic. What you should do instead is make a list of all the systems that we can, in one way or another, control, and describe which external factors influence those systems.
For example we can describe and control atomic reactors and we can evaluate the influence of earthquakes.
These systems are vulnerable to failure at all scales, posing serious threats to society, even when external shocks are absent.
A meteorite can strike the earth at any place at any moment. Such a serious event threatens our society. The problem is that we have no tool to control meteorites. We have to live with such a threat.
As the complexity and interaction strengths in our networked world increase, man made systems can become unstable, creating uncontrollable situations even when decision-makers are well-skilled, have all data and technology at their disposal and do their best.
I would call this sentence fear-mongering: it creates anxiety and panic.
To make these systems manageable, a fundamental redesign is needed.
I do not think so.
The problem with many man-made systems is that they are politically driven. Political systems are not based on knowledge. In politically driven systems the interests of some are preferred above those of others; most often this is the largest group with the most political power.
Politically driven systems become more unstable the larger the gap between the different groups involved. To make those systems stable requires that the people in power, in governments, consider the interests of all humans.
A "Global System Science" might create the required knowledge and paradigm shift in thinking.
If you want to make systems more stable, general control theory is enough.

Main article

The article starts with the following sentence:
Globalization and technology revolutions are changing our planet
Technology revolutions are common. Globalization is a different animal.
Today we have a worldwide exchange of people, goods, money, information and ideas, which has produced many new opportunities, services and benefits for humanity
At the same time, however, the underlying networks have created pathways along which dangerous and damaging events can spread rapidly and globally.
These two sentences contradict each other. Is globalization now good or bad? From a scientific point of view you would expect a firm answer considering the state of the world as a whole.
This has increased system risks. The related societal costs are huge.
Again, what is the overall situation?
Our world is already facing some of the consequences: global problems such as fiscal and economic crises and global migration, coming along with social unrest, international and civil wars and global terrorism.
Is globalization now a benefit or a disaster?
In this perspective, I argue, that systematic failures and extreme events are consequences of highly interconnected systems and networked risks humans have created.
Failures are typical of mechanical and electrical systems and are in the realm of industry. Extreme events (earthquakes) are outside human control. Globalization is created by humans.
I also argue that initially beneficial trends such as globalization, increasing network densities, sparse use of resources, higher complexity and an acceleration of institutional decision processes ultimately push our anthropogenic systems towards systemic instability - a state in which things will inevitably get out of control sooner or later.
You have to be careful when you use the concepts 'in control' or 'out of control'. Related concepts are stable, unstable and targets. You can use those words to describe chemical and medical processes. In general those words cannot be used for financial, economic, political, biological and social systems.
Many disasters, etc. Even worse, they are often the consequences of a wrong understanding due to the counter-intuitive nature of the underlying system behaviour.
Such a sentence requires more detail. Intuition has nothing to do with science.
The FuturICT community (see FuturICT Participatory Computing for Our Complex World), which involves thousands of scientists worldwide, is now engaged in establishing a "Global Systems Science" in order to better understand our information society with its close co-evolution of information and communication technology (ICT) and society.
See: Global Systems Science Developing a GSS research program and also: Futurium
Global Systems Science (GSS) wants to make the theory of complex systems applicable to the solution of global-scale problems.
Is GSS going to solve the food problem, so that there is no hunger? I doubt this. Is GSS going to ensure that all Europeans have work? I doubt this. Is GSS going to ensure that I receive no more spam? I doubt this. Is GSS going to take care that everyone gets an iPad? I doubt this. I think that GSS is too optimistic. One of the actions GSS should take is to make the world less global: each region in the world should become more self-supporting. Another action GSS should take is to make each region more production-independent and less profit-dependent.

What we know

Cascade effects due to strong interactions

Our society is entering a new era - the era of a global information society, characterized by increasing interdependency, interconnectivity and complexity and a life in which the real world and digital world can no longer be separated (See Box 2)

Box 2

For example, supercomputers are now performing the majority of financial transactions.
The rules that these computers follow are developed by financial experts.
The "flash crash" of 6 May 2010 illustrates the unexpected systemic behavior that can result: within minutes, nearly $1 trillion in market value disappeared before the financial markets recovered again.
See: Flash crash of 6 May 2010
and FINDINGS REGARDING THE MARKET EVENTS OF MAY 6, 2010. The fact that these events happened implies that the system designers did not consider a worst-case scenario. These worst-case scenarios should include a total system load and throughput calculation. Those scenarios are relatively easy to simulate.
In financial dealing systems you have sellers (computers that supply shares), buyers (computers that demand shares) and a market. The market consists of computers which calculate the price based on supply and demand.
  • If supply per unit of time is equal to demand, the previous price is used.
  • If supply is higher, the price will drop.
  • If supply is lower, the price will rise.
However, there is more involved. It is always useful to discuss the issues involved in more detail.
Consider that a computer wants to sell 10000 shares and that the average number of shares traded per unit of time is 10. For simplicity, take a unit of time of 1 second.
In this case the supply is greater than the demand, which means the price will drop and 10 shares will be traded.
In the next second the same scenario applies. On the supply side we have 10000 - 10 shares; on the demand side 10 shares. The price will drop and 10 shares will be traded.
From the third until the sixtieth second the same will happen: on average 10 shares are traded and the price drops.
After 60 seconds the situation becomes different, because only now are the supply-side computers and the demand-side computers informed about the actual price. That means they will "see" that there is a drop in price, implying more supply than demand. (This drop in price will continue for at least the next 60 seconds.) In this scenario the price the computers act on is not the actual price but a price delayed by 60 seconds.
What happens next is one of two opposite scenarios (or a combination thereof). When this happens is an open question.
  • Because there is a heavy drop in price, computers on the demand side will start to buy shares of the same company at lower prices. The expectation is that the price will rise.
  • Because there is a heavy drop in price, computers on the supply side will start to sell shares of the same company at lower prices. The expectation is that the price will drop further.
Remember the decision makers are programs. The underlying rules are designed by humans.
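The 60-second delay scenario above can be put into a toy simulation. This is my own sketch, not the model from the Nature article: the trade rate, the 0.1% price drop per second of excess supply and the other numbers are invented for illustration.

```python
# Toy simulation of the delayed-price scenario described above.
# All parameters are invented for illustration only.

DELAY = 60        # seconds before other computers see the actual price
TRADE_RATE = 10   # average number of shares traded per second
TICK = 0.001      # relative price drop per second of excess supply

def simulate(sell_order=10_000, start_price=100.0, seconds=120):
    prices = [start_price]   # price history, one entry per second
    remaining = sell_order   # shares the selling computer still holds
    noticed_at = None        # second at which observers see the drop
    for t in range(1, seconds + 1):
        # other computers act on a price that is DELAY seconds old
        seen = prices[max(0, t - DELAY)]
        if seen < start_price and noticed_at is None:
            noticed_at = t
        if remaining > 0:
            # supply exceeds demand: the price drops one tick
            prices.append(prices[-1] * (1.0 - TICK))
            remaining -= TRADE_RATE
        else:
            prices.append(prices[-1])
    return prices, noticed_at
```

In this run the large sell order pushes the price down from the first second, but `noticed_at` is 61: the other computers only learn about the drop a full minute after it started. What they then do with that stale information is exactly the fork between the two opposite scenarios above (start buying, or start selling even more).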
Next is written in Box 2:
Such computer systems can be considered to be 'artificial social systems', as they learn from information about their environment, develop expectations about the future and decide, interact and communicate autonomously.
I do not agree. All the actions undertaken by the programs are designed by humans. The text again creates a certain type of feeling that computers are an invisible enemy, something spooky, which does not exist in reality.
To design these systems properly ensure a suitable response to human needs and avoid problems such as co-ordination failures, breakdowns of cooperation, conflict, (cyber)crime or (cyber) war, we need a better fundamental understanding of socially interactive systems.
In this sentence two different problems are discussed.
  • On one side you have applications, which should be described in detail, starting with objectives. Those objectives should be realistic. The systems should perform strictly as designed. System load calculations are part of this.
  • On the other side you have a network for communicating worldwide. This network can be used for all types of purposes. It is not the task of application developers to control these issues.

Systemic instabilities challenge our intuition

Crowd disasters

Crowd disasters constitute an eye-opening example of the eventual failure of control in a complex system. Even if nobody wants to harm anybody else, people may be fatally injured.
I do not like the language used. The issue is crowd control: safety, panic, what the cause is and how to prevent it. One of the main causes of panic is an accident, such as a fire. That means our main course of action to prevent panic is to prevent those accidents from happening.
In many cases, the instability is created not by foolish or malicious individual actions, but by the unavoidable amplification of small fluctuations above a critical density threshold.
The author should read about the Hillsborough disaster. IMO the cause can never be the police.

Knowledge Gaps

Not well behaved

In other words there is no standard solution for complex systems and the 'devil is in the detail'
When you design a system you need a lot of detail about what the system is supposed to do, how it does that, and how to operate the system. When you implement your system you should use as much standard hardware and software as possible.
Moreover, most existing theories do not provide practical advice on how to respond to actual global risks, crises and disasters and empirically based risk-mitigation strategies often remain qualitative.
This sentence again creates panic. If you build a boat and specify that it can only carry 100 people, but the captain allows 300 persons to board, then it is not your responsibility that it sinks.
If your plant manufactures matches, are you responsible for all their possible dangerous applications?

Some design and operation principles

Managing complexity using self-organization

When systems reach a certain size or level of complexity, algorithmic constraints often prohibit efficient top-down management by real-time optimization.
I have never heard of algorithmic constraints. I think this should mean something like mathematical constraints. In real-time systems the constraints are mostly(?) physical.
However "guided self-organization" is a promising alternative way of managing complex dynamical systems in a decentralized, bottom-up way.
My first impression is that if you want to design and control any system, you should do that in a centralized, top-down approach. You do not first design and build the road to a plant.
The underlying idea is to use, rather than fight, the system-immanent tendency of complex systems to self-organize and thereby create a stable, ordered state.
I do not have the slightest idea how the author wants to use this idea to design an atomic plant.
This sounds a bit like: Use an expert system.
By establishing proper 'rules of the game' within which the system components can self-organize, including mechanisms ensuring rule compliances etc
Again, how do you do that when building an atomic reactor or an airplane?
Similar self-control principles could be applied to logistics and production systems or even to administrative processes and governance.
The management of production plants wants to know in great detail how, for example, startup and shutdown are performed. To claim that you as a supplier will use self-control is not an easy concept to sell.

Coping with networked risks

To cope with hyper-risks, it is necessary to develop risk competence and to prepare and exercise contingency plans for all sorts of possible failures.
Specifically for the last part: be realistic.
In the Netherlands there are no earthquakes (?). In the Netherlands, in many places, you are below sea level. For both possible causes of disaster you do not have to take precautions.
In Japan both earthquakes and tsunamis are common. Please take precautions.
Note that a backup system should be operated and designed according to different principles in order to avoid failure of both systems for the same reasons.
My advice is: do not do that.
Of course, if both systems use exactly the same software and you divide by zero, both systems will stop. However, this is a system error which should have been caught during system testing.
The problem is that if you have two completely independently designed systems, you have to test both systems independently. What is worse, each system has to monitor the state of the other one. This monitoring concept can be very complex and can again generate its own problems. In short: don't do that in a power plant.

What is ahead

Despite all our knowledge much work is still ahead of us. For example the current financial crisis shows that much of our theoretical knowledge has not yet found its way into real-world policies as it should
The first step, if you want to be successful, is to obtain practical knowledge by learning and by trying. The main reason for the financial crisis is globalization.

Economic crisis

Two main pillars of mainstream economics are the equilibrium paradigm and the representative agent approach. According to the equilibrium paradigm, economies are viewed as systems that tend to evolve towards an equilibrium state.
Economic systems are always in equilibrium, or never. It is all in the definition.
Bubbles and crashes should not happen and hence would not require any precautions.
The major cause of the financial crisis is slow change. For example, the percentage of the mortgage obtained when buying a house: this went from 60% to 80% to 100% to 110% of the value of the house. When this happens, almost automatically the price of a house increases as well, while the underlying value does not increase. The same increase happens if subsidies are involved. A specific imbalance starts when the people involved become unemployed.
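The mortgage effect can be illustrated with a small calculation. This is my own arithmetic, not from the article: assume the bank finances a fraction f of the purchase price, the buyer pays the rest from savings, and an income-based ceiling bounds the total loan. The savings figure and the ceiling below are invented for illustration.

```python
# Small numeric illustration (my own, not from the article) of how
# raising the mortgage fraction lets the same buyer bid a higher
# price while the underlying value of the house does not change.

def max_price(savings, f, loan_ceiling):
    """Maximum price a buyer can pay at mortgage fraction f."""
    if f >= 1.0:
        # no down payment is needed any more: only the income-based
        # loan ceiling still limits the price
        return loan_ceiling / f
    # the down payment (1 - f) * price must come from savings,
    # and the loan f * price may not exceed the ceiling
    return min(savings / (1.0 - f), loan_ceiling / f)
```

With savings of 40,000 and a loan ceiling of 300,000, the maximum price rises from 100,000 at f = 60% to 200,000 at 80% and 300,000 at 100%: the same buyer can bid three times as much while the house itself has not changed.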
Representative agent models, which assume that companies act in the way a representative (average) individual would optimally decide, are more general and allow one to describe dynamical processes.
An example is required to clarify this.
In order to find the shortest path through a maze you can follow the following strategy: you start with 100 tokens. At each split, 50% go left and 50% go right. The path followed by the first token to come out of the maze is the shortest path.
The problem is that such a token (object or agent) approach cannot be used to control a chemical process.
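The token strategy can be put into code. Below is a minimal sketch with a tiny maze of my own invention; instead of an exact 50/50 split, each token makes a uniform random choice at every junction. Because all tokens advance one step per time unit, the strategy is in effect a randomized form of breadth-first search.

```python
import random

# A tiny maze as an adjacency list (the maze itself is my own
# invented example, not taken from the article).
MAZE = {
    'start': ['a', 'b'],
    'a': ['start', 'c'],
    'b': ['start', 'c', 'exit'],
    'c': ['a', 'b'],
    'exit': ['b'],
}

def first_token_out(n_tokens=100, max_steps=50, seed=1):
    """Release n_tokens at 'start'; every token takes one random
    step per time unit. The path of the first token to reach
    'exit' is shortest among all paths realized so far."""
    rng = random.Random(seed)
    paths = [['start'] for _ in range(n_tokens)]
    for _ in range(max_steps):
        for path in paths:                    # all tokens step once
            path.append(rng.choice(MAZE[path[-1]]))
        for path in paths:                    # did any token escape?
            if path[-1] == 'exit':
                return path
    return None
```

Here the shortest route is start → b → exit; with 100 tokens it is almost certain that some token walks exactly that route, so the first path returned has minimal length.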
It can even happen that representative agent models make predictions opposite to those of agent-based computer simulations assuming the very same interaction rules.
If the interaction rules are the same the outcome should be the same.

Paradigm shift ahead

Cascade effects cause a system to leave its previous (equilibrium) state, and there is also no representative dynamics, because different possible paths of events may look very different (See Figure 3).

Figure 3 - Illustration of probabilistic cascade effects in systems with networked risks

The orange and the blue paths show that the same cause can have different effects, depending on the random realization.
The blue and red paths show that different causes can have the same effect.
What does "random realization" mean? The problem with Figure 3 is that the example shown is completely abstract and lacks any reality.
Everyone knows that if you give someone an antibiotic when different diseases are involved, the results can be rather different.
It could also be that, when different diseases are at stake and you give different antibiotics, the results are the same.

Global Systems Science

For a long time humans have considered systemic failures to originate from 'outside the system', because it has been difficult to understand how they could come about otherwise. However, many disasters in anthropogenic systems result from a wrong way of thinking and consequently from inappropriate organization and systems design. For example, we often apply theories for well-behaved systems to systems that are not well-behaved.
For example?
A good overview of global dependencies between different kinds of networks is lacking as well
For example? Between Asia as a network versus Australia as a network?
The establishment of a Global Systems Science should fill these knowledge gaps, particularly regarding the role of human and social factors.
In much of what we call natural science there is no human or social factor. In fact, true science should be independent of the human point of view.
It is only in economics and finance that human and social factors are important. Both are not true sciences.


I have described how system components, even if their behavior is harmless and predictable, can create unpredictable and uncontrollable systemic risks when tightly coupled together.
In 1967 the TWA Group bought the Hilton Group. There is nothing wrong with this if you keep both companies clearly distinct and separated. Problems start if you combine different departments and make them one company.
The same problem also arises if a bank buys a foreign bank and tries to merge the two.
Hence, an improper design or management of our global anthropogenic system creates possibilities of catastrophic failures.
I think the word anthropogenic (human) should be removed. Also the word failure.
Today many necessary safety precautions to protect ourselves from human-made disasters are not taken owing to insufficient theoretical understanding and consequently wrong policy decisions.
This sentence is wrong.
Sometimes safety precautions are neglected, causing disasters, and insufficient understanding can cause wrong policy decisions.

Reflection part 1

My overall impression of the article is that the author tries to cover too many, almost opposite, subjects in one article.
In a certain way this is good, but I doubt the final practical value. As the author writes himself: it's all in the detail. That is what the article lacks: detail. A typical case is the set of examples in Table 1 (Drivers and examples of systemic instabilities). The examples are: revolutions, evolutionary ramps, floods, phantom traffic jams, financial crises, production, politics, conflict and financial derivatives.
Generally speaking, all these issues have nothing in common. Each is a study on its own, with its own issues and solutions.

Column 'Field/modelling approach' in Table 1 shows: Catastrophe theory, sensitivity analysis, agent based models, chaos theory, cybernetics, heuristics, operations research, genetic algorithms.
This column gives the impression that you can solve all problems by using mathematics. Many problems in our world are human-oriented and are partly based on culture, greed and religion. To solve those problems, which means to make people happier, requires education and tact. No mathematics is involved.

Reflection part 2

The title of the nature article is: Globally networked risks and how to respond.
When you only read this title, then, assuming that networks cause a risk, my first response is: reduce the use of networks, i.e. communicate less all over the world.
However, that is not what the author has in mind. The issue is globalization: the fact that the whole world becomes involved in solving many problems. For Europe this means that if you want to process meat, the animals come from countries A, B and C. The factory is in D. The work force comes from E and F, and the money comes from G, H and I. For G, H and I read Russia, Japan and the USA. For D read China.
If you compare this with the past, one obvious consequence of all this is that the physical production lines are much longer, i.e. a huge increase in transportation costs (via road, sea and air).
At the same time a lot of expertise disappears in many countries, partly because factories are closed in one country and opened in cheaper countries.
In the Netherlands the response is (partly): this (production) is not important. We maintain the knowledge of how to produce. This knowledge we can sell. That means we become a knowledge economy.
The problem is: this knowledge will also disappear. The primary reason is that all practical knowledge will disappear.
A related problem is that the urge to study those subjects, that means electronics, mechanics and chemistry, will disappear. The subjects that are not affected are: medicine, dentistry, mathematics and ICT.

My concern is that those consequences are not discussed in the article. Too much emphasis is on networks.


If you want to give a comment you can use the following form Comment form

Created: 25 May 2013

Back to my home page Index
Back to Nature comments Nature Index