Noisy Chaos

In case deterministic chaos isn’t enough for you, this post adds something extra: a little bit of randomness, or ‘noise’. Rather than making things more complicated, this actually makes them smoother. If you’ve read the What is Chaos? series, you know that finding periodic orbits is important for understanding chaos. The randomness allows you to determine how many periodic orbits you need in order to make predictions.


Prerequisites: This was originally designed as a lecture for undergraduate physics students, before I even considered writing a blog. The What is Chaos? series provides context, but this post was written first and so does not assume that you’ve read it.

Originally Written: December 2013.

Confidence Level: This has been published in a peer-reviewed journal. It is cutting-edge research, not scientific consensus.

Further Reading: https://arxiv.org/abs/1507.00462 .



Suppose you happen upon a deterministic system whose behavior is highly erratic. Although you can write down the equations of motion that the system should follow, its long-term behavior proves elusive. It continually returns to states similar to where it starts, although never exactly the same, and it has sensitive dependence on initial conditions. In short, your system is deterministically chaotic.

What do you do? Since you have the equations of motion, a first guess is that you should put them on a computer and tell the computer to solve them. This works as long as you are only interested in short times. For longer times, the sensitive dependence on initial conditions defeats you. Any error in your initial conditions, noise in the physical system, uncertainty about the appropriate equations of motion, or numerical imprecision is amplified until the uncertainty is as large as the system itself.
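To see this defeat concretely, here is a minimal Python sketch using the Lozi map from Figure 1 (the parameter values a = 1.7, b = 0.5 and the starting point are my own conventional choices, not anything specified in this post). Two trajectories that begin one part in a billion apart track each other for a while and then become completely unrelated:

```python
import math

def lozi(x, y, a=1.7, b=0.5):
    """One step of the Lozi map: (x, y) -> (1 - a*|x| + y, b*x)."""
    return 1.0 - a * abs(x) + y, b * x

# Two trajectories whose initial conditions differ by one part in a billion.
x1, y1 = 0.1, 0.1
x2, y2 = 0.1 + 1e-9, 0.1

for n in range(1, 61):
    x1, y1 = lozi(x1, y1)
    x2, y2 = lozi(x2, y2)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {math.hypot(x1 - x2, y1 - y2):.2e}")
# The separation grows roughly exponentially until it is as large as the
# attractor itself, after which the two trajectories are effectively unrelated.
```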

You need to use a different technique to approach the problem. The key to unraveling the dynamics lies in its recurrence: the system approximately repeats itself. This recurrence suggests that there are periodic solutions, but they are unstable, so you never see them in practice. If you look for these periodic solutions, you find that there are infinitely many of them, with arbitrarily long periods. The number of periodic solutions with a given period grows roughly exponentially with that period. The behavior of a periodic solution is known for all time – it simply repeats itself – so the periodic orbits are the key to understanding the long-time behavior of the system.
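As a rough illustration of that exponential counting, here is a small Python sketch (the logistic map f(x) = 4x(1 - x) is my stand-in example; the post does not commit to a particular system). Points of period n are fixed points of the n-th iterate, located here by counting sign changes of f^n(x) - x on a fine grid; the count roughly doubles with each additional unit of period:

```python
import numpy as np

def iterate_logistic(x, n):
    """Apply the logistic map f(x) = 4x(1 - x) to x, n times."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

x = np.linspace(1e-6, 1.0 - 1e-6, 2_000_000)  # fine grid, avoiding the trivial endpoints
for n in range(1, 9):
    g = iterate_logistic(x, n) - x
    crossings = np.count_nonzero(np.sign(g[:-1]) != np.sign(g[1:]))
    print(f"period {n}: about {crossings} periodic points (compare 2**{n} = {2**n})")
```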

Although these periodic orbits aren’t seen directly, they form the skeleton for the rest of the dynamical system. A typical trajectory, although not periodic itself, can be described in terms of these periodic orbits. The trajectory starts close to one periodic orbit. Since there is continuity in the equations of motion, it will follow the periodic orbit for a while. However, the neighborhood of the periodic orbit is unstable, so the trajectory eventually leaves it and finds itself in the neighborhood of another periodic orbit. The process repeats. Instead of calculating individual trajectories, we think of the behavior of a trajectory as the list of periodic orbits it follows and how long it follows each of them.

There is an ambiguity in this approach: there is no intrinsic notion of what it means for one trajectory to be close to another. In a deterministic system, the state space can be resolved to arbitrary precision, so all of the periodic orbits must be considered in order to get a complete description of the dynamics. Since there are infinitely many periodic orbits, and they are not trivial to calculate, this process would take infinite time.

To resolve this crisis, we return to the noise inherent to any physical system. When calculating individual trajectories, the noise was troublesome; it prevented us from following any trajectory for a long time. In this new framework, the noise becomes essential because it imposes a finite resolution on the state space. Trajectories separated by a short distance are indistinguishable because the noise can transfer a trajectory between them. The noise can be used to define what it means to be close to a periodic orbit.

Define the neighborhood of a periodic orbit using the local competition between the noise and the expansion or contraction of the dynamics. The deterministic dynamics will either make nearby trajectories contract onto or expand away from the periodic orbit. The noise smears out the trajectories. The neighborhood of a periodic orbit is a distribution for which the strength of the noise exactly balances the strength of the deterministic dynamics. This notion of a stationary distribution depends on whether the deterministic dynamics is contracting or expanding.
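To make this balance concrete, here is a minimal one-dimensional sketch (the notation is mine, not the post’s). Linearize the map about a point of the orbit, $x_{n+1} = \Lambda x_n + \xi_n$, where $\Lambda$ is the local expansion rate and $\xi_n$ is Gaussian noise of variance $2D$. For a contracting direction, demanding that a Gaussian neighborhood of variance $\sigma^2$ reproduce itself after one step gives

$$\sigma^2 = \Lambda^2 \sigma^2 + 2D \quad\Rightarrow\quad \sigma^2 = \frac{2D}{1 - \Lambda^2}, \qquad |\Lambda| < 1 .$$

For an expanding direction no forward-in-time stationary distribution exists; the same balance applied to the time-reversed (adjoint) evolution gives

$$\sigma^2 = \frac{2D}{\Lambda^2 - 1}, \qquad |\Lambda| > 1 .$$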

For almost all physical applications, the deterministic dynamics is contracting in some directions and expanding in others. The dynamics must be separated into the expanding and contracting directions, the appropriate notion of stationary distribution applied along each manifold, and the resulting distributions recombined to form the neighborhood in the full state space.
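In more than one dimension, a sketch of the same statement (again in my own notation) is a covariance recursion along the orbit: if $M_a$ is the Jacobian matrix of one step of the dynamics at orbit point $a$ and $\Delta_a$ is the covariance of the noise added during that step, the neighborhood covariances $Q_a$ satisfy

$$Q_{a+1} = M_a Q_a M_a^{\top} + \Delta_a ,$$

iterated forward in time along the contracting directions and backward in time (via the adjoint evolution) along the expanding ones, with the two pieces recombined at each point of the orbit.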

Shorter periodic orbits are more important, since the noise has had less time to disrupt the deterministic dynamics along them. This suggests a systematic way to develop these neighborhoods: start with the shortest periodic orbits and find their neighborhoods, then find the neighborhoods of orbits with increasingly long periods. The finest possible resolution of the system is reached when the neighborhoods are just starting to overlap. These neighborhoods cover the attractor, the set of all long-term behaviors of the system. We have created a finite partition of the attractor.

Figure 1: The Lozi map is closely related to the Hénon map. Its strange attractor is shown here. Each rhombus is a neighborhood of a periodic orbit included in this optimum partition.

Once we have these neighborhoods, we are able to think of the dynamics in different terms. Instead of thinking of individual trajectories, we can think of the dynamics in terms of the probability of transferring from one neighborhood to another. The result is a finite Markov graph.
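As a sketch of how such a graph might be estimated in practice (the function name and the toy itinerary below are my own inventions for illustration), one can record which neighborhood a long noisy trajectory occupies at each time step and count the transitions:

```python
import numpy as np

def transition_matrix(itinerary, n_states):
    """Count transitions i -> j in a symbolic itinerary and normalize each row."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(itinerary[:-1], itinerary[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)

# Toy itinerary over three hypothetical neighborhoods, labeled 0, 1, 2.
itinerary = [0, 1, 2, 1, 0, 1, 1, 2, 0, 1, 2, 2, 1, 0, 1, 2, 1, 1, 0, 1]
T = transition_matrix(itinerary, n_states=3)
print(T)  # T[i, j] = estimated probability of moving from neighborhood i to j
```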

All of the techniques developed for Markov graphs can now be applied to your dynamical system. In particular, there is a theorem which states that ergodic systems – systems for which all of the long-time behavior looks the same – have a unique global stationary distribution, which corresponds to the leading eigenvector of the transition probability matrix. The global stationary distribution describes the probability of finding your system at a certain point on the attractor after the system has been allowed to evolve for a long time.
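A minimal sketch of this step (continuing the toy notation from above; the 3×3 matrix is made up for illustration): with a row-stochastic transition matrix $T$, where $T_{ij}$ is the probability of moving from neighborhood $i$ to neighborhood $j$, the stationary distribution is the leading left eigenvector, i.e. the eigenvector of $T^{\top}$ with eigenvalue 1.

```python
import numpy as np

def stationary_distribution(T):
    """Leading left eigenvector of a row-stochastic matrix, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(T.T)
    # The Perron-Frobenius eigenvalue of a stochastic matrix is 1; pick the
    # eigenvector whose eigenvalue is closest to 1.
    k = np.argmin(np.abs(eigvals - 1.0))
    rho = np.real(eigvecs[:, k])
    return rho / rho.sum()

# Hand-made 3-neighborhood transition matrix (illustrative numbers only).
T = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.5, 0.2]])
rho = stationary_distribution(T)
print(rho)            # long-time probability of finding the system in each neighborhood
print(rho @ T - rho)  # ~0: the distribution is unchanged by one more step
```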

This global stationary distribution allows you to calculate long-time observables for your system. The long-time average of any variable you are trying to measure is the sum of its value on each neighborhood, weighted by the probability of being in that neighborhood according to the global stationary distribution.
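In symbols (my notation, continuing the sketch above): if $a_i$ is the value of the observable on neighborhood $i$ and $\rho_i$ is the weight of that neighborhood in the global stationary distribution, then

$$\langle a \rangle \approx \sum_i \rho_i \, a_i ,$$

which in the code sketch above is just `rho @ a_values` for a vector `a_values` of per-neighborhood observable values.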

Thoughts?