Managing Dynamic Complexity

May 14th, 2018

For an interesting and unique way to understand Complexity Theory, read Caryl Johnson’s blog below.

Complexity is one of those words that everybody knows, but that many people do not know in quite the same way. At its most fundamental, complexity suggests something so complicated that it is difficult to understand. However, when scientists discuss the field of ‘complexity theory’ they are referring to something much more precise, though perhaps just as difficult for the layperson to grasp.

In our experience, as we consider the problems that seem to repel solution, three factors emerge that in many respects define dynamic complexity: 1) the system has many diverse actors, both cyber and human; 2) the goal is an optimal distribution of scarce resources in pursuit of a generally well-defined objective; and 3) the system has an extremely high-dimensional and generally non-linear decision space. In most cases, we attempt to model and manage such systems as a Markov Decision Process, or MDP, although in truth this characterization does not strictly apply. In an MDP, a system is understood to be in some state; an action is chosen, and the system transitions to another state (admittedly, this is a simplification). In such systems, the Markov condition holds: the outcome of a decision depends only on the current state, not on how that state was reached.
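The Markov condition can be made concrete in a few lines of code. The states, actions, and transition probabilities below are invented purely for illustration; the only point is that the next-state distribution is a function of the current state and action alone.

```python
import random

# Toy MDP: the states, actions, and probabilities here are illustrative
# inventions, not drawn from any real system.
# transitions[state][action] -> list of (next_state, probability) pairs
transitions = {
    "idle": {"work": [("busy", 0.9), ("idle", 0.1)],
             "rest": [("idle", 1.0)]},
    "busy": {"work": [("busy", 0.7), ("idle", 0.3)],
             "rest": [("idle", 0.8), ("busy", 0.2)]},
}

def step(state, action):
    """Sample the next state. The Markov condition: this distribution
    depends only on (state, action), never on how `state` was reached."""
    outcomes = transitions[state][action]
    states = [s for s, _ in outcomes]
    probs = [p for _, p in outcomes]
    return random.choices(states, weights=probs)[0]

# Two different histories that end in the same state face the identical
# next-state distribution -- that is the Markov property.
print(step("idle", "rest"))  # always "idle": that transition is certain
```

In a dynamically complex system, as the next paragraphs argue, each of these assumptions breaks down: there are many `step` calls happening at once, each actor sees only part of the state, and the outcome distribution itself may shift.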

Unfortunately, for dynamically complex systems, this definition does not strictly apply. Generally, we think of an MDP as having a single actor making a decision, while a dynamically complex system has many actors. For example, in even the simplest natural ecosystem, which is a prime example of dynamic complexity, the number of actors making independent decisions runs into the trillions. The problem is further complicated because each of these possibly trillions of actors has an incomplete awareness of the state, as when a frog does not know that an owl is about to select it as its next meal. In addition, even if each actor did have complete knowledge of the current state of the system, which is clearly impossible in a global sense, the outcome of any decision is entirely stochastic.

Let’s consider a swamp. The actors are the various and diverse creatures, big and small, that inhabit the environment. Each of these actors, including even the vegetation, whose evolutionary learning is manifest in its dissemination of seeds, is independent, yet at the same time linked into a complex web of interdependency. The resources in a swamp, such as light, space, and nutrients, are strictly limited, and dynamic stability is achieved through a kind of competitive interplay among the various actors. At the same time, each actor has many choices: a frog can choose to jump, sit still, eat a bug, or seek a mate; but regardless of which action is selected, the outcome is not necessarily what was intended. In fact, this is exactly our management problem: as situations increase in dynamic complexity, as in an expanding business, the best of intentions behind any decision may be defeated by the negative effect of unintended consequences. This is one reason why I have steadfastly avoided the trap of upper-level management positions – I still enjoy sleeping at night.

A lot of our experience and research has come from working with the US Department of Energy on a national grid architecture that decentralizes control. Rather than having all of the information flow back to a central control point, most of the decisions regarding energy distribution are made at the edge, that is, near or at the generation or load points. An example of centralized control is Demand-Side Management (DSM), where messages are sent from a central facility to start or suspend generation and load in response to instantaneous demand. With edge control, these decisions are made at the edge; in the research that we have been conducting, this is accomplished through pricing signals that edge devices use to make buy-or-not-buy decisions. As a specific example, an energy storage device can learn a strategy wherein it buys and stores energy when it is cheap, then sells it back when the price is high.
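A minimal sketch of such a price-responsive edge device, assuming a fixed buy-low/sell-high threshold strategy (the class name, thresholds, and prices are hypothetical illustrations, not taken from the DOE research, where such a strategy would be learned rather than hard-coded):

```python
# Hypothetical edge device: decides locally, from a price signal alone,
# whether to buy, sell, or hold energy. All numbers are invented.
class StorageDevice:
    def __init__(self, capacity_kwh, buy_below, sell_above):
        self.capacity = capacity_kwh
        self.charge = 0.0          # kWh currently stored
        self.buy_below = buy_below   # $/kWh: buy when price is at or below
        self.sell_above = sell_above # $/kWh: sell when price is at or above
        self.profit = 0.0

    def on_price_signal(self, price, block_kwh=1.0):
        """Local buy-or-not-buy decision; no central controller involved."""
        if price <= self.buy_below and self.charge + block_kwh <= self.capacity:
            self.charge += block_kwh
            self.profit -= price * block_kwh
            return "buy"
        if price >= self.sell_above and self.charge >= block_kwh:
            self.charge -= block_kwh
            self.profit += price * block_kwh
            return "sell"
        return "hold"

device = StorageDevice(capacity_kwh=10, buy_below=0.05, sell_above=0.20)
for price in [0.04, 0.03, 0.25, 0.12]:  # a made-up price trajectory
    device.on_price_signal(price)
print(round(device.profit, 2))  # bought twice cheap, sold once high: 0.18
```

Each device sees only the price, not the global grid state, which is precisely the partial-awareness condition described above for natural ecosystems.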

This is a clear example of dynamic complexity: there are many actors of many different types, power is a scarce, or at least limited, resource, and the decision space is extremely high-dimensional. The reason this serves as a first example, however, is that it is simpler than others, because the behavior of the devices is, barring failure, largely predictable. In control-theory terms, each device has a specific plant model: if a switch is closed, a motor will turn on. Yet even in this simple example, the question remains whether a large-scale deployment of such devices can achieve dynamic stability in the same way as our natural ecosystem example of a swamp.