Problem Statement: Two stochastic processes Z1t and Z2t have autocovariance functions gamma1(h) and gamma2(h), respectively.
Calculate the autocovariance function for the process Z3t = Z1t + 2 Z2t and verify that it is a valid stationary process.
Solution:
Of course, we have to assume something about the joint
distribution of the processes Z1t and Z2t, so
that we can compute Cov(Z1s, Z2t) for all s and
t. As stated in lecture, just assume that the two processes
are independent in the strictly stationary case and
uncorrelated in the second order case,
i.e. that

    Cov(Z1s, Z2t) = 0
for all s and t. This is typical: when people
(``people'' meaning the statistically unsophisticated) don't know
the joint distribution, given a couple of marginal distributions
and needing a joint distribution,
they assume independence. But this is just an exercise from a
textbook and I assume the authors were just careless in not
stating the assumption that would make the problem tractable
at this stage. If we don't make some such assumption, then
the problem is not do-able in the sense that we can't compute
the autocovariance of the linear combination from the autocovariances
of the components. Furthermore, it is entirely possible to have
two processes, each of which is (second order) stationary,
while the vector-valued process is not (second order) stationary.
But life is easy with the uncorrelatedness assumption.
Definition
Two time series type stochastic processes
Z1t and Z2t
are called uncorrelated if

    Cov(Z1s, Z2t) = 0

for all s and t.
Proposition
If Z1t and Z2t are uncorrelated time series type
stochastic processes which are (second order) stationary
with autocovariances gamma1(h) and gamma2(h), respectively,
then for any constants a1 and a2,
Zt = a1 Z1t + a2 Z2t is a (second order)
stationary process and the autocovariance function is given by
    gamma(h) = a1^2 gamma1(h) + a2^2 gamma2(h)    (2)
Proof. Of course, we have two theorems to prove, one for the strictly stationary case and one for the second order stationary case (that's why ``second order'' is in parentheses, in case you didn't know). What we'll do here is quickly dispense with the second order stuff (which also takes care of the formula for the autocovariance function in the strictly stationary case) and handwave the stationary case.
OK, for the second order stuff we only have to verify that the mean is constant and the autocovariance only depends on the lag. The mean, as usual, is pretty easy:

    E[Zt] = a1 E[Z1t] + a2 E[Z2t] = a1 mu1 + a2 mu2

where I just made up the notations
mui = E[Zit], which don't
depend on t by assumption. Clearly
E[Zt] is a constant, independent of t.
Now the autocovariance, using properties
of the covariance operator already discussed
in class:

    Cov(Zs, Zt) = Cov(a1 Z1s + a2 Z2s, a1 Z1t + a2 Z2t)
                = a1^2 Cov(Z1s, Z1t) + a1 a2 Cov(Z1s, Z2t)
                  + a1 a2 Cov(Z2s, Z1t) + a2^2 Cov(Z2s, Z2t)
                = a1^2 gamma1(t - s) + a2^2 gamma2(t - s)

where the two cross terms vanish by the uncorrelatedness assumption. This depends on s and t only through the lag t - s, which verifies (2) and finishes the second order case.
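As a quick numerical sanity check (not part of the original solution), formula (2) can be verified by simulation. The constants a1 = 3, a2 = -2 and the variances below are made up for illustration, and the two processes are taken to be independent Gaussian white noise, so their autocovariances are known exactly:

```python
import numpy as np

# Hypothetical example: two independent white noise processes with
# variances 1.0 and 2.25, so gamma1(0) = 1.0, gamma2(0) = 2.25, and
# gamma1(h) = gamma2(h) = 0 for h != 0.
rng = np.random.default_rng(0)
n = 200_000
a1, a2 = 3.0, -2.0                 # arbitrary constants for illustration
z1 = rng.normal(0.0, 1.0, n)       # sd 1.0 -> variance 1.0
z2 = rng.normal(0.0, 1.5, n)       # sd 1.5 -> variance 2.25
z = a1 * z1 + a2 * z2

def sample_autocov(x, h):
    """Sample autocovariance at lag h (mean subtracted)."""
    xc = x - x.mean()
    return np.dot(xc[: len(xc) - h], xc[h:]) / len(xc)

# Formula (2) predicts gamma(0) = a1^2 * 1.0 + a2^2 * 2.25 = 18.0
# and gamma(h) = 0 for every h > 0.
print(sample_autocov(z, 0))   # close to 18.0
print(sample_autocov(z, 1))   # close to 0.0
```

With n this large the sample autocovariances should agree with the predictions to a couple of decimal places.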
Turning to the issue of strict stationarity, assume Z1t and Z2t are strictly stationary and independent processes. Consider the typical case where any finite number of random variables from the process will have a joint density (i.e., the process has a continuous distribution, as opposed to a discrete distribution, where we would have a joint probability mass function). Subscripts to the joint density will indicate what random variables it is the density of. In order to avoid too much notation, let t1, ..., tn be arbitrary time points and h an arbitrary shift. By independence, the joint density of (Z1,t1, ..., Z1,tn, Z2,t1, ..., Z2,tn) factors into the product of the joint density of the Z1 variables and the joint density of the Z2 variables. Strict stationarity of each component process means each factor is unchanged when every time index is shifted by h, so the product, and hence the distribution of the whole vector, is shift invariant. Since (Zt1, ..., Ztn) is a fixed function of that vector, its joint distribution is shift invariant too, which is exactly strict stationarity of the Zt process.
This completes the proof. Application to the original problem is trivial. Plugging a1 = 1 and a2 = 2 into formula (2):

    gamma3(h) = gamma1(h) + 4 gamma2(h)

so Z3t is a (second order) stationary process with this autocovariance function.
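The same kind of check works for the original problem, where a1 = 1 and a2 = 2. Since the textbook's gamma1 and gamma2 are not reproduced above, the sketch below substitutes two independent MA(1) processes, whose autocovariances are known in closed form, so formula (2) can be tested at a nonzero lag as well:

```python
import numpy as np

# Hypothetical stand-ins for the textbook processes: independent MA(1)
# processes with unit innovation variance, for which
#   gamma(0) = 1 + theta^2,  gamma(1) = theta,  gamma(h) = 0 for h >= 2.
rng = np.random.default_rng(1)
n = 200_000
theta1, theta2 = 0.5, -0.4

e1 = rng.normal(size=n + 1)
e2 = rng.normal(size=n + 1)
z1 = e1[1:] + theta1 * e1[:-1]
z2 = e2[1:] + theta2 * e2[:-1]
z3 = z1 + 2.0 * z2                 # the process from the problem

def sample_autocov(x, h):
    """Sample autocovariance at lag h (mean subtracted)."""
    xc = x - x.mean()
    return np.dot(xc[: len(xc) - h], xc[h:]) / len(xc)

gamma1 = {0: 1 + theta1**2, 1: theta1}
gamma2 = {0: 1 + theta2**2, 1: theta2}
for h in (0, 1):
    predicted = gamma1[h] + 4.0 * gamma2[h]   # formula (2), a1 = 1, a2 = 2
    print(h, sample_autocov(z3, h), predicted)
```

At lag 0 the prediction is 1.25 + 4(1.16) = 5.89 and at lag 1 it is 0.5 + 4(-0.4) = -1.1; the sample estimates land close to both.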