Project 2: Dynamical system invariants and information flow
In an open, dissipative system that exchanges energy and matter with its environment, the balance between driving forces and dissipation confines the dynamics to a subset of the state space known as an attractor. Attractors can be stable fixed points (e.g. a pendulum with friction converges to a resting position), limit cycles (e.g. predator-prey oscillations), or more complex subsets such as the strange attractors of chaotic systems (e.g. the famous “butterfly”-shaped attractor of the Lorenz system).
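To see an attractor emerge numerically, one can integrate the Lorenz equations and observe that the orbit remains confined to a bounded region of state space. The following is a minimal sketch, assuming the classic parameter values (sigma = 10, rho = 28, beta = 8/3) and a hand-rolled fixed-step Runge-Kutta integrator; it is an illustration, not part of the project's methodology.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One fixed-step fourth-order Runge-Kutta update."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate from an arbitrary initial condition; the orbit stays confined
# to a bounded region of state space (the butterfly-shaped attractor).
dt, n_steps = 0.01, 5000
trajectory = np.empty((n_steps, 3))
s = np.array([1.0, 1.0, 1.0])
for i in range(n_steps):
    s = rk4_step(lorenz, s, dt)
    trajectory[i] = s
```

Plotting the three columns of `trajectory` against each other reproduces the familiar butterfly shape.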
The attractor is itself an invariant subset of the dynamics: if we pick any point on the attractor and follow its evolution under the dynamics of the system, the resulting trajectory lies entirely on the attractor. This implies that to study the long-term behavior of the system one needs information about the attractor rather than about the full set of variables constituting the system.
For a dynamical system of multiple coupled components, a time series of any single component can be used to reconstruct the dynamics of the whole system by embedding the time series with lagged copies of itself. Attractor reconstruction using delay embedding has found many applications in nonlinear time series analysis, including the estimation of transfer entropy, an information-theoretic measure of predictive (Granger) causality interpreted as a directional information flow. In this project, we take a new approach to estimating transfer entropy.
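The delay embedding itself is a simple operation: stack lagged copies of the scalar series into vectors. A minimal sketch (the function name and parameters are illustrative, not from the source):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar time series x into delay vectors
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: embed a short series in 3 dimensions with lag 2.
vectors = delay_embed(np.arange(10.0), dim=3, tau=2)  # shape (6, 3)
```

Each row of the result is one reconstructed state; the embedding dimension and lag are tuning parameters that must be chosen for the system at hand.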
Roughly speaking, the frequency with which a typical trajectory visits a given region of the state space defines a density on the attractor; because this density is preserved by the dynamics, the visitation frequency defines an invariant measure. This notion of invariance is closely related to the concept of ergodicity. In an ergodic system, averages over the state space taken with respect to the invariant measure equal time averages taken along a typical trajectory. The ergodicity property allows us to interpret the invariant measure of a given region of the attractor as the probability for the system to be in any of the states in that region.
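The equality of time and space averages can be checked numerically on a standard textbook example (chosen here for illustration; it is not discussed in the text): the fully chaotic logistic map x → 4x(1 − x), whose invariant density is known in closed form, ρ(x) = 1/(π√(x(1 − x))), under which the mean of x is exactly 1/2.

```python
import numpy as np

def logistic_orbit(x0, n):
    """Iterate the fully chaotic logistic map x -> 4x(1 - x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

orbit = logistic_orbit(0.2, 200_000)
time_average = orbit.mean()   # time average of f(x) = x along one trajectory
space_average = 0.5           # integral of x * rho(x) dx over [0, 1]
```

For a long orbit the two averages agree closely, which is exactly the ergodic property described above.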
The usual method for estimating the probability distributions required for computing transfer entropy is based on the asymptotic behavior of nearest neighbours in state space. In a forthcoming paper, we implement an alternative method with greater accuracy for short and noisy time series.
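To make the quantity concrete, here is a crude plug-in (histogram) estimate of transfer entropy for discrete symbol sequences with one-step histories. This is an illustration only; it is neither the nearest-neighbour estimator mentioned above nor the method of the forthcoming paper, and all names are hypothetical.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of the transfer entropy T_{X -> Y}
    for discrete symbol sequences, with one-step histories:
    T = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_{t+1}, y_t, x_t)
    pair_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pair_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    single = Counter(y[:-1])                       # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pair_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_hist = pair_yy[(y1, y0)] / single[y0]  # p(y1 | y0)
        te += (c / n) * np.log2(p_cond_full / p_cond_hist)
    return te

# A toy directed coupling: y copies x with a one-step delay, so information
# flows from X to Y (about 1 bit per step) but not the other way.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]
te_xy = transfer_entropy(x, y)  # close to 1 bit
te_yx = transfer_entropy(y, x)  # close to 0
```

The asymmetry between the two estimates is what makes transfer entropy a directional measure; such histogram estimators degrade quickly for continuous, short, or noisy data, which is precisely the regime the alternative method targets.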