Linear Stochastic ODEs Lead to Gaussianity at Asymptotic Times, Even if the Noise Function Has Non-zero Higher-Order Cumulants
So a few friends of mine are working on stochastic ODEs and their connection to path integrals. After dorking out about this for a few moments, I'm able to make some "baby" statements about the problem. If you consider a sequence of random numbers:
$$\{\mathbf{X}_i\}_{i=1}^{n}$$
which is determined by the following difference equation:
$$d\mathbf{X}_i = \mathbf{X}_{i+1} - \mathbf{X}_i = a_i + \mathbf{W}_i$$
subject to the initial condition $\mathbf{X}_0 = 0$.
You can express the solution as a sum of two sums -- one deterministic and one
random.
$$\mathbf{X}_n = \sum_{i=0}^{n-1} a_i + \sum_{i=0}^{n-1} \mathbf{W}_i$$
Where I have boldfaced all random variables. For instance, $a_i$ is a deterministic sequence of real numbers; perhaps they are the same for all $i$. $\mathbf{W}_i$ is a noise variable, or some random forcing function. If the $a_i$ all take the same value $a$, we see that the solution after $n$ steps will be
$$\mathbf{X}_n = na + \sum_{i=0}^{n-1} \mathbf{W}_i$$
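As a quick sanity check, here is a minimal sketch in Python (the constant drift $a$, the step count, and the centered exponential noise are all arbitrary assumptions of mine) that iterates the difference equation and confirms it matches the closed form above:

```python
import numpy as np

rng = np.random.default_rng(0)

n, a = 1000, 0.1                               # steps and constant drift a_i = a
W = rng.exponential(scale=1.0, size=n) - 1.0   # zero-mean but skewed noise

# Iterate the difference equation X_{i+1} = X_i + a + W_i with X_0 = 0 ...
X = 0.0
for W_i in W:
    X += a + W_i

# ... and compare against the closed form X_n = n*a + sum_i W_i.
print(X, n * a + W.sum())   # agree up to floating-point roundoff
```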
Now, if $\mathbf{W}_i$ is drawn from some probability distribution at every single step $i$, we know that at asymptotic times, $n \to \infty$, and subject to certain conditions on the probability density of $\mathbf{W}_i$, our distribution on $\mathbf{X}$ will converge to a Gaussian. This is very cool, and it does not even require the $\mathbf{W}_i$ to be identically distributed -- though, as we will see, they must be independent. For the simplest case, we can say that if
$$\mathbf{W}_i \sim \mathcal{N}(0, \sigma^2) \quad \forall i$$
then,
$$\mathbf{X}_n \sim na + \mathcal{N}(0, n\sigma^2)$$
Where $\mathcal{N}(0, \sigma^2)$ stands for a normal distribution with zero mean and variance $\sigma^2$. Note that this is simply a consequence of the addition of cumulants under convolution -- which is what you do when you add independent random variables:
$$\mathbf{Z} = \mathbf{X} + \mathbf{Y}$$
$$\mathbf{X} \sim \mathcal{N}(c_1, c_2) \qquad \mathbf{Y} \sim \mathcal{N}(c_1', c_2')$$
$$\mathbf{Z} \sim \mathcal{N}(c_1 + c_1',\, c_2 + c_2')$$
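Here is a small numerical check of that additivity (a sketch assuming scipy is available; `scipy.stats.kstat` gives unbiased estimates of the first four cumulants, and the exponential/gamma inputs are arbitrary non-Gaussian choices of mine):

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(1)
N = 1_000_000

# Two independent, deliberately non-Gaussian random variables.
X = rng.exponential(2.0, size=N)
Y = rng.gamma(3.0, 1.0, size=N)
Z = X + Y

# For independent variables, every cumulant of the sum is the sum
# of the cumulants: c_k(Z) = c_k(X) + c_k(Y).
for k in (1, 2, 3, 4):
    print(k, kstat(Z, k), kstat(X, k) + kstat(Y, k))
```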
So our cumulants add, and the central limit theorem hinges upon this. Since the characteristic function -- the Fourier transform of the probability distribution -- is bounded above in modulus by one, convolving two distributions in real space corresponds to multiplying their characteristic functions in frequency space. This makes the characteristic function of our resulting variable $\mathbf{Z}$ -- which is very much like an average -- thinner and thinner and thinner... Meaning that you can truncate its cumulant generating function $\psi$ at order $k^2$, leading to a Gaussian.
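To see the thinning concretely, here is a sketch using centered exponential noise (an arbitrary choice of mine, picked because its characteristic function $\varphi(k) = e^{-ik}/(1-ik)$ is known in closed form): for the standardized sum of $n$ terms the log characteristic function is $n\,\psi(k/\sqrt{n})$, and everything beyond the $k^2$ term dies off as $n$ grows:

```python
import numpy as np

# Centered exponential noise W = Exp(1) - 1: mean 0, variance 1, third cumulant 2.
# Its cumulant generating function is psi(k) = log phi(k) = -ik - log(1 - ik).
def psi(k):
    return -1j * k - np.log(1 - 1j * k)

k = 1.0
for n in (1, 10, 100, 10_000):
    # log characteristic function of the standardized sum (W_1 + ... + W_n)/sqrt(n)
    psi_n = n * psi(k / np.sqrt(n))
    print(n, psi_n, -k**2 / 2)   # converges to the Gaussian value -k^2/2
```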
This means that any sum of random variables, even if they are not identically distributed -- although they must be independent in order to convolve -- and even if those variables have non-zero higher-order cumulants, like the third cumulant $c_3$ (skewness) or the fourth cumulant $c_4$ (kurtosis), will give you a Gaussian in the $n \to \infty$ limit. This is the central limit theorem, an analog of the law of large numbers.
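A quick Monte Carlo illustration of that washing-out (a sketch; the trial count and the Exp(1) − 1 noise are arbitrary assumptions of mine): the skewness and excess kurtosis of the standardized sum fall off like $n^{-1/2}$ and $n^{-1}$ respectively:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
trials = 20_000

for n in (1, 10, 100, 1000):
    # Many realizations of the standardized sum of n skewed noise terms.
    W = rng.exponential(1.0, size=(trials, n)) - 1.0
    S = W.sum(axis=1) / np.sqrt(n)
    # For Exp(1) - 1 noise: skewness ~ 2/sqrt(n), excess kurtosis ~ 6/n.
    print(n, skew(S), kurtosis(S))
```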
So why do we care in this stochastic ODE case? It means that under linear dynamics, at asymptotic times, we converge to a Gaussian distribution on $\mathbf{X}$, even if our noise function itself has very strange properties, like non-zero higher-order cumulants. This is very strange indeed, and it comes from the fact that the system is **linear**, i.e. we are **adding** random variables together.
Under non-linear evolution, it can be shown using perturbation theory that non-zero third- and higher-order moments are created, but showing this in the stochastic framework is a bit difficult...
It is easy to show, however, that an equation like
$$\mathcal{L}_0 \delta = \delta^2,$$
where $\mathcal{L}_0$ is some linear differential operator, can be treated by tagging the non-linear term with a small parameter $\lambda$ and expanding the solution as a power series:
$$\mathcal{L}_0 \delta = \lambda \delta^2 \qquad \delta = \sum_{i=0}^{\infty} \lambda^i \delta_i$$
So we have, at each order:
$$\lambda^0: \quad \mathcal{L}_0 \delta_0 = 0$$
which is our linear solution. Then, at leading order in $\lambda$, we have:
$$\lambda^1: \quad \mathcal{L}_0 \delta_1 = \delta_0^2$$
Now we find that if we take the connected third moment, or the third cumulant, we get a nonzero value:
$$\langle \delta^3 \rangle_c = \langle \delta_0^3 \rangle_c + 3\lambda \langle \delta_0^2 \delta_1 \rangle_c + \mathcal{O}(\lambda^2)$$
(the factor of 3 comes from expanding $(\delta_0 + \lambda \delta_1)^3$).
If $\delta_0$ is Gaussian distributed, as we found it would be for a generic driving function at asymptotic times -- or if we simply assume Gaussian initial conditions -- then we know that $\langle \delta_0^3 \rangle_c = 0$. The leading-order term, however, will not be zero: since $\delta_1 = \mathcal{L}_0^{-1} \delta_0^2$, it goes like $\sim \langle \delta_0^4 \rangle$, which under Wick's theorem/Gaussian statistics can be built out of second cumulants. So we see that non-linearity generates non-zero skewness and kurtosis, and other higher-order things, at late times.
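To check this concretely, here is a toy sketch in which I take $\mathcal{L}_0$ to be the identity operator (my own simplifying assumption), so that $\delta_1 = \delta_0^2$ exactly and the leading-order third cumulant $3\lambda \langle \delta_0^2 \delta_1 \rangle_c = 6\lambda\sigma^4$ can be verified by sampling:

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(3)
sigma, lam = 1.0, 0.1
d0 = rng.normal(0.0, sigma, size=2_000_000)

# Toy non-linear solution with L_0 = identity: delta = delta_0 + lambda * delta_0^2.
d = d0 + lam * d0**2

# A Gaussian delta_0 has zero third cumulant; the non-linearity generates one,
# equal to 6*lam*sigma**4 at leading order in lambda (by Wick contractions).
print(kstat(d0, 3))                       # ~ 0
print(kstat(d, 3), 6 * lam * sigma**4)    # ~ 0.6 at leading order
```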
The key to connecting this with stochastic ODEs lies in the fact that we are no longer adding random variables but multiplying them, and this is a very peculiar type of convolution, which in general does **not do a simple addition** of cumulants. I will have to look more into this.
Note: The lognormal distribution is the convergent distribution for a product of random variables, since the log of the product is the sum of the logs. So perhaps it could be shown that some non-linear stochastic ODEs go to a lognormal (which I believe is already a common concept on Wall Street, for estimating the dynamics of stock prices).
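As a last sketch supporting that guess (the positive, gamma-distributed growth factors are an arbitrary assumption of mine), multiplicative dynamics $\mathbf{X}_{i+1} = \mathbf{g}_i \mathbf{X}_i$ pushes $\log \mathbf{X}_n$ toward a Gaussian, i.e. $\mathbf{X}_n$ toward a lognormal:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)
trials, n = 50_000, 200

# Multiplicative dynamics X_{i+1} = g_i * X_i with positive, skewed factors g_i.
g = rng.gamma(shape=5.0, scale=0.2, size=(trials, n))   # mean 1, skewed
logX = np.log(g).sum(axis=1)                            # log X_n = sum_i log g_i (X_0 = 1)

# The log of the product is a sum of independent terms, so the CLT drives
# log X_n toward a Gaussian and hence X_n toward a lognormal.
print(skew(logX), kurtosis(logX))   # both ~ 0 for large n
```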