At a closed meeting held in Boston in October 2009, the room was packed with high-flyers in foreign policy and finance: Henry Kissinger, Paul Volcker, Andy Haldane, and Joseph Stiglitz, among others, as well as representatives of sovereign wealth funds, pensions, and endowments worth more than a trillion dollars—a significant slice of the world’s wealth. The session opened with the following telling question: “Have the last couple of years shown that our traditional finance/risk models are irretrievably broken and that models and approaches from other fields (for example, ecology) may offer a better understanding of the interconnectedness and fragility of complex financial systems?”
Science is a creative human enterprise. Discoveries are made in the context of our creations: our models and hypotheses about how the world works. Big failures, however, can be a wake-up call about entrenched views, and nothing produces humility or gains attention faster than an event that blindsides so many so immediately.
Examples of catastrophic and systemic changes have been gathering in a variety of fields, typically in specialized contexts with little cross-connection. Only recently have we begun to look for generic patterns in the web of linked causes and effects that puts disparate events into a common framework—a framework that operates on a sufficiently high level to include geologic climate shifts, epileptic seizures, market and fishery crashes, and rapid shifts from healthy ecosystems to biological deserts.
This framework has two main themes: First, these are all complex systems of interconnected and interdependent parts. Second, they are nonlinear, non-equilibrium systems that can undergo rapid and drastic state changes.
Consider first the complex interconnections. Economics is not typically thought of as a global systems problem. Indeed, investment banks are famous for a brand of tunnel vision that focuses risk management at the individual firm level and ignores the difficult and costlier, albeit less frequent, systemic or financial-web problem. Monitoring the ecosystem-like network of firms with interlocking balance sheets is not in the risk manager’s job description. Even so, there is emerging agreement that ignoring the seemingly incomprehensible meshing of counterparty obligations and mutual interdependencies (an accountant’s nightmare, more recursive than Abbott and Costello’s “Who’s on first?”) prevented real pricing of risk premiums, which helped to propagate the current crisis.
A parallel situation exists in fisheries, where stocks are traditionally managed one species at a time. Alarm over collapsing fish stocks, however, is helping to create the current push for ecosystem-based ocean management. This is a step in the right direction, but the current ecosystem simulation models remain incapable of reproducing realistic population crashes. And the same is true of most climate simulation models: Though the geological record tells us that global temperatures can change very quickly, the models consistently underestimate that possibility. This is related to the next property, the nonlinear, non-equilibrium nature of systems.
Most engineered devices, consisting of mechanical springs, transistors, and the like, are built to be stable. That is, if stressed from rest, or equilibrium, they spring back. Many simple ecological models, physiological models, and even climate and economic models are built on the same assumption: a globally stable equilibrium. A related simplification is to see the world as consisting of separate parts that can be studied in a linear way, one piece at a time. These pieces can then be summed independently to make the whole. Researchers have developed a very large tool kit of analytical methods and statistics based on this linear idea, and it has proven invaluable for studying simple engineered devices. But many of the complex systems that interest us are not linear, and still we persist with these tools and models. It is a case of looking under the lamppost because the light is better, even though we know the lost keys are in the shadows. Linear systems produce nice stationary statistics—constant risk metrics, for example. Because a stationary process does not vary through time, one can subsample it to get an idea of what the larger universe of possibilities looks like. This characteristic of linear systems appeals to our normal heuristic thinking.
Nonlinear systems, however, are not so well behaved. They can appear stationary for a long while, then, without anything changing, exhibit jumps in variability—so-called “heteroscedasticity.” For example, if one looks at the range of economic variables over the past decade (daily market movements, GDP changes, and so on), one might guess that variability, and hence the universe of possibilities, is very modest. This was the modus operandi of normal risk management. By that logic, some of the large moves we saw in 2008, repeated over so many consecutive days, should have occurred less than once in the age of the universe.
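The point can be made concrete with a minimal toy simulation (my own sketch, not a model from the text): returns are Gaussian at every instant, but volatility occasionally jumps into a “storm” regime. Judged against the pooled statistics, large moves occur far more often than a single Gaussian fit would predict.

```python
import math
import random

random.seed(42)

# Regime-switching volatility: mostly calm, occasionally turbulent.
# Purely illustrative parameters, chosen for clarity.
def simulate_returns(n=100_000, calm_sigma=0.01, storm_sigma=0.05,
                     p_enter_storm=0.001, p_exit_storm=0.05):
    returns, in_storm = [], False
    for _ in range(n):
        if in_storm:
            if random.random() < p_exit_storm:
                in_storm = False
        elif random.random() < p_enter_storm:
            in_storm = True
        sigma = storm_sigma if in_storm else calm_sigma
        returns.append(random.gauss(0.0, sigma))
    return returns

returns = simulate_returns()
mean = sum(returns) / len(returns)
sd = math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

# Count moves beyond 4 standard deviations of the pooled series.
extreme = sum(1 for r in returns if abs(r - mean) > 4 * sd)
# A Gaussian with this pooled sd predicts only about six such moves
# per 100,000 draws; the regime-switching series produces many more.
gaussian_expected = len(returns) * math.erfc(4 / math.sqrt(2))
print(extreme, round(gaussian_expected, 1))
```

The series passes every naive test of modest variability most of the time; the risk is concentrated in the rare regime the pooled statistics average away.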
Our problem is that the scientific desire to simplify has taken over, something that Einstein warned against when he paraphrased Occam: “Everything should be made as simple as possible, but not simpler.” Thinking of natural and economic systems as essentially stable and decomposable into parts is a good initial hypothesis, but current observations and measurements do not support it—hence our continual surprise. Just as we like the idea of constancy, we are resistant to change. The 19th-century American humorist Josh Billings perhaps put it best: “It ain’t what we don’t know that gives us trouble, it’s what we know that just ain’t so.”
So how do we proceed? There are a number of ways to approach this tactically, including new data-intensive techniques that model each system uniquely but look for common characteristics. However, a more strategic approach is to study these systems at their most generic level, to identify universal principles that are independent of the specific details that distinguish each system. This is the domain of complexity theory.
Among these principles is the idea that there might be universal early warning signs for critical transitions, diagnostic signals that appear near unstable tipping points of rapid change. The recent argument for early warning signs is based on the following: 1) that both simple and more realistic, complex nonlinear models show these behaviors, and 2) that there is a growing weight of empirical evidence for these common precursors in varied systems.
A key phenomenon known for decades is so-called “critical slowing” as a threshold approaches. That is, a system’s dynamic response to external perturbations becomes more sluggish near tipping points. Mathematically, this property gives rise to increased inertia in the ups and downs of things like temperature or population numbers—we call this inertia “autocorrelation”—which in turn can result in larger swings, or more volatility. In some cases, it can even produce “flickering,” or rapid alternation from one stable state to another (picture a lake ricocheting back and forth between being clear and oxygenated versus algae-ridden and oxygen-starved). Another related early signaling behavior is an increase in “spatial resonance”: Pulses occurring in neighboring parts of the web become synchronized. Nearby brain cells fire in unison minutes to hours prior to an epileptic seizure, for example, and global financial markets pulse together. The autocorrelation that comes from critical slowing has been shown to be a particularly good indicator of certain geologic climate-change events, such as the greenhouse-icehouse transition that occurred 34 million years ago; the inertial effect of climate-system slowing built up gradually over millions of years, suddenly ending in a rapid shift that turned a fully lush, green planet into one with polar regions blanketed in ice.
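The autocorrelation signature of critical slowing can be sketched in a few lines (a toy linear-recovery model of my own, not the author’s): as the restoring force toward equilibrium weakens near a tipping point, each perturbation decays more slowly, so successive values become more correlated.

```python
import random

random.seed(0)

# Lag-1 autocorrelation: the "inertia" in a time series.
def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs)
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    return cov / var

# A noisy system relaxing toward equilibrium at a given recovery rate:
# x(t+1) = x(t) - recovery_rate * x(t) + noise.
def simulate(recovery_rate, n=2000, noise=0.1):
    x, xs = 0.0, []
    for _ in range(n):
        x += -recovery_rate * x + random.gauss(0.0, noise)
        xs.append(x)
    return xs

strong = lag1_autocorr(simulate(recovery_rate=0.5))   # far from a tipping point
weak = lag1_autocorr(simulate(recovery_rate=0.05))    # near a tipping point
print(round(strong, 2), round(weak, 2))
```

Nothing about the noise changes between the two runs; only the recovery rate weakens, and the rising autocorrelation is the measurable symptom.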
The global financial meltdown illustrates the phenomena of critical slowing and spatial resonance. Leading up to the crash, there was a marked increase in homogeneity among institutions, both in their revenue-generating strategies and in their risk-management strategies, thus increasing correlation among funds and across countries—an early warning. Indeed, the irony of risk management through diversification is that diversification became so extreme that it was lost: Everyone owning part of everything creates complete homogeneity. Reducing risk by increasing portfolio diversity makes sense for each individual institution, but if everyone does it, it creates huge group or system-wide risk. Mathematically, such homogeneity leads to increased connectivity in the financial system, and the number and strength of these linkages grow as homogeneity increases. Increased connectivity, in turn, destabilizes a generic complex system: Each institution becomes more affected by the balance sheets of neighboring institutions than by its own. The role of systemic risk monitoring, then, could simply be rapid detection and dissemination of potential imbalances, much as we allow frequent underbrush fires to burn in order to forestall catastrophic wildfires. Provided that these kinds of imbalances can be rapidly identified, maybe we will need no regulation beyond swift diffusion of information. Having frequent, small disruptions could even be the sign of a healthy, innovative financial system.
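A toy simulation (my own construction, purely illustrative) of the diversification paradox: giving every bank the same fully diversified portfolio lowers each bank’s individual failure rate, but whenever one bank fails, all fail on the same day.

```python
import random

random.seed(7)

# Bank losses share a common market factor plus idiosyncratic noise.
def run(diversified, n_banks=20, days=5000, threshold=-2.5):
    fail_days = 0       # days on which at least one bank fails
    co_fail = 0.0       # summed fraction of banks failing on those days
    bank_failures = 0   # total individual bank failures
    for _ in range(days):
        market = random.gauss(0.0, 1.0)
        assets = [market + random.gauss(0.0, 1.0) for _ in range(n_banks)]
        if diversified:
            # Everyone holds the same market-wide portfolio: idiosyncratic
            # risk is averaged away, but all balance sheets move in unison.
            port = sum(assets) / n_banks
            losses = [port] * n_banks
        else:
            # Each specialist bank holds one distinct asset.
            losses = assets
        failed = sum(1 for r in losses if r < threshold)
        bank_failures += failed
        if failed:
            fail_days += 1
            co_fail += failed / n_banks
    individual_rate = bank_failures / (days * n_banks)
    co_failure_fraction = co_fail / max(fail_days, 1)
    return individual_rate, co_failure_fraction

spec_rate, spec_cofail = run(diversified=False)
div_rate, div_cofail = run(diversified=True)
print(spec_rate, spec_cofail, div_rate, div_cofail)
```

Each diversified bank is individually safer, yet the homogeneous system has traded many small, isolated failures for rare, total ones.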
Further tactical lessons could be drawn from similarities in the structure of bank payment networks and cooperative, or “mutualistic,” networks in biology. These structures are thought to promote network growth and support more species. Consider the case of plants and their insect pollinators: Each group benefits the other, but there is competition within groups. If pollinators interact with promiscuous plants (generalists that benefit from many different insect species), the overall competition among insects and plants decreases and the system can grow very large.
Relationships of this kind are seen in financial systems too, where small specialist banks interact with large generalist banks. Interestingly, the same hierarchical structure that promotes biodiversity in plant-animal cooperative networks may increase the risk of large-scale systemic failures: Mutualism facilitates greater biodiversity, but it also creates the potential for many contingent species to go extinct, particularly if large, well-connected generalists—certain large banks, for instance—disappear. It becomes an argument for a “too central to fail” policy, in which the size of a company’s Facebook network matters more than the size of its balance sheet.
To be sure, bailing out a large financial institution raises questions of “moral hazard.” The more compelling reason for caution, however, is that this action could propagate another systemic collapse elsewhere in the network if there is too much focused subsidy and the benefit is not spread out (a relevant point for those who are dispensing TARP funds). Excessively favorable terms between two cooperating agents—say, the Fed and a large financial institution—can lead to the unintended collapse of other agents and to eventual duopoly.
Another good example is the interdependence of the online auction site eBay and e-payment system PayPal. PayPal was the dominant method of payment for eBay auctions when eBay bought it in 2002, strengthening cooperative links between the two companies. This duopolistic partnership contributed to the demise of competing payment systems, such as eBay’s Billpoint (phased out after the purchase of PayPal), Citibank’s c2it (closed in 2003), and Yahoo!’s PayDirect (closed in 2004).
As a final thought, as much as we would like to predict and manage catastrophic change, some will be inevitable. Instability is a fact of nature. And hard as it may now be to believe, displacements from climate change will one day dwarf our worries about the economy. As we become increasingly aware of the ways in which our actions are speeding us toward climate tipping points, perhaps our greatest asset will be our ability to anticipate these changes soon enough to avoid them or, at the very least, prepare for their consequences.
George Sugihara is a theoretical biologist and the McQuown Chair in Natural Science at the Scripps Institution of Oceanography.
Originally published December 20, 2010