
Solution to Exercise 2.3.

Problem Statement: Two stochastic processes Z1t and Z2t have autocovariance functions $\gamma_1 (h)$ and $\gamma_2 (h)$, respectively.

Calculate the autocovariance function for the process Z3t = Z1t + 2 Z2t and verify that it is a valid stationary process.

Solution: Of course, we have to assume something about the joint distribution of the processes Z1t and Z2t, so that we can compute $ \mbox{Cov} [Z_{1t}, Z_{2s}]$ for all s and t. As stated in lecture, just assume that the two processes are independent in the strictly stationary case and uncorrelated in the second order case, i.e. that $ \mbox{Cov} [Z_{1t}, Z_{2s}]$ = 0 for all s and t. This is typical - if people don't know (``people'' meaning the statistically unsophisticated), given a couple of marginal distributions and needing a joint distribution, they assume independence. But this is just an exercise from a textbook, and I assume the authors were simply careless in not stating the assumption that would make the problem tractable at this stage. If we don't make some such assumption, then the problem is not doable, in the sense that we can't compute the autocovariance of the linear combination from the autocovariances of the components. Furthermore, it is entirely possible to have two processes which are themselves (second order) stationary while the vector valued process is not (second order) stationary.

But life is easy with the uncorrelatedness assumption.



Definition Two time series type stochastic processes $\langle Z_{1t} : t = \ldots , -2 , -1 , 0 , 1 , 2, \ldots \rangle$ and $\langle Z_{2t} : t = \ldots , -2 , -1 , 0 , 1 , 2, \ldots \rangle$ are called uncorrelated if $ \mbox{Cov} [Z_{1t}, Z_{2s}]$ = 0 for all s and t.



Proposition If Z1t and Z2t are uncorrelated time series type stochastic processes which are (second order) stationary with autocovariances $\gamma_1$ and $\gamma_2$, respectively, then for any constants a1 and a2, Zt = a1 Z1t + a2 Z2t is a (second order) stationary process and the autocovariance function is given by  
\begin{displaymath}
\gamma (h) \; = \; \mbox{Cov} (Z_t , Z_{t+h}) \; = \;
a_1^2 \gamma_1 (h) \, + \, a_2^2 \gamma_2 (h) .
\end{displaymath} (2)


Proof. Of course, we have two theorems to prove, one for the strictly stationary case and one for the second order stationary case (that's why ``second order'' is in parentheses, in case you didn't know). What we'll do here is quickly dispense with the second order stuff (which also takes care of the formula for the autocovariance function in the strictly stationary case) and hand-wave the strictly stationary case.

OK, for the second order stuff we only have to verify that the mean is constant and the autocovariance depends only on the lag. The mean, as usual, is pretty easy:

\begin{displaymath}
E [Z_t] \; = \; E [ a_1 Z_{1t} + a_2 Z_{2t} ] \; = \;
a_1 E [Z_{1t}] \, + \, a_2 E [Z_{2t}] \; = \;
a_1 \mu_1 \, + \, a_2 \mu_2 ,
\end{displaymath} (3)

where I just made up the notations $\mu_i$ = E[Zit], which don't depend on t by assumption. Clearly E[Zt] is a constant, independent of t. Now the autocovariance, using properties of the covariance operator already discussed in class:

\begin{displaymath}
\mbox{Cov} (Z_t , Z_{t+h}) \; = \;
\mbox{Cov} ( a_1 Z_{1t} + a_2 Z_{2t} , \; a_1 Z_{1,t+h} + a_2 Z_{2,t+h} )
\end{displaymath} (4)

\begin{displaymath}
= \; a_1^2 \, \mbox{Cov} (Z_{1t} , Z_{1,t+h})
\, + \, a_1 a_2 \, \mbox{Cov} (Z_{1t} , Z_{2,t+h})
\, + \, a_2 a_1 \, \mbox{Cov} (Z_{2t} , Z_{1,t+h})
\, + \, a_2^2 \, \mbox{Cov} (Z_{2t} , Z_{2,t+h})
\end{displaymath} (5)

\begin{displaymath}
= \; a_1^2 \gamma_1 (h) \, + \, a_1 a_2 \cdot 0 \, + \, a_2 a_1 \cdot 0
\, + \, a_2^2 \gamma_2 (h)
\; = \; a_1^2 \gamma_1 (h) \, + \, a_2^2 \gamma_2 (h) .
\end{displaymath} (6)

Here, the zeroes in (6) are from our uncorrelatedness assumption on the processes. Since the autocovariance function depends only on the lag h, this completes the proof of second order stationarity and verifies the formula for the autocovariance function.
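As a quick numerical sanity check (not part of the original solution), here is a small Python simulation of the formula just derived: two independent Gaussian white-noise series are combined as Zt = a1 Z1t + a2 Z2t, and the sample autocovariance at lag 0 is compared with a1^2 gamma1(0) + a2^2 gamma2(0). The constants and standard deviations are arbitrary illustrative choices.

```python
import random

random.seed(0)
n = 200_000
a1, a2 = 1.0, 2.0          # arbitrary constants in the linear combination
s1, s2 = 1.5, 0.7          # standard deviations of the two white-noise series

# Two independent (hence uncorrelated) second order stationary processes.
z1 = [random.gauss(0.0, s1) for _ in range(n)]
z2 = [random.gauss(0.0, s2) for _ in range(n)]
z = [a1 * x + a2 * y for x, y in zip(z1, z2)]

def autocov(x, h):
    """Sample autocovariance of the series x at lag h."""
    m = sum(x) / len(x)
    return sum((x[t] - m) * (x[t + h] - m) for t in range(len(x) - h)) / (len(x) - h)

# Proposition: gamma(0) = a1^2 gamma1(0) + a2^2 gamma2(0); for white noise
# gamma_i(0) = s_i^2 and gamma_i(h) = 0 for h > 0.
theory = a1**2 * s1**2 + a2**2 * s2**2
print(autocov(z, 0), theory)   # the two numbers should be close
```

With 200,000 observations the sample value agrees with the theoretical one to roughly two decimal places; the check is only Monte Carlo evidence, of course, not a proof.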

Turning to the issue of strict stationarity, assume Z1t and Z2t are strictly stationary and independent processes. Consider the typical case where any finite collection of random variables from the process has a joint density (i.e., the distribution is continuous, as opposed to a discrete distribution, where we would have a joint probability mass function). Subscripts to the joint density will indicate what random variables it is the density of. In order to avoid too much notation, let

\begin{displaymath}
Z_i [t,h] \; = \; (Z_{it},Z_{i,t+1}, \ldots , Z_{i,t+h} ),
\quad h \ge 0 , \quad i = 1,2.
\end{displaymath}

Also, let $z_i$ denote a vector of dimension h+1 which will be used as a variable. The joint density of (Z1 [t,h] , Z2 [t,h] ) is

\begin{displaymath}
f_{Z_1 [t,h] , Z_2 [t,h]} (z_1 , z_2) \; = \;
f_{Z_1 [t,h]} (z_1) \, f_{Z_2 [t,h]} (z_2)
\end{displaymath} (7)

\begin{displaymath}
= \; f_{Z_1 [0,h]} (z_1) \, f_{Z_2 [0,h]} (z_2)
\end{displaymath} (8)

\begin{displaymath}
= \; f_{Z_1 [s,h] , Z_2 [s,h]} (z_1 , z_2)
\quad \mbox{for any } s ,
\end{displaymath} (9)

i.e. the product of the marginals. Here, (7) and (9) follow from the independence assumption, and (8) from the stationarity assumptions on the individual processes. Thus, we see that the distribution of the vector valued process (Z1t,Z2t) is shift invariant, hence if g is any function of two variables, the distribution of g(Z1t,Z2t) will be shift invariant, i.e. the process will be stationary.

This completes the proof. Application to the original problem is trivial: take a1 = 1 and a2 = 2 in (2), so that

\begin{displaymath}
\gamma_3 (h) \; = \; \gamma_1 (h) \, + \, 4 \, \gamma_2 (h) ,
\end{displaymath}

and then plug in the given autocovariance functions.
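The plug-in step can also be illustrated numerically with the problem's coefficients a1 = 1 and a2 = 2. Since the specific autocovariance functions from the textbook are not reproduced here, the sketch below uses made-up stand-ins: two independent MA(1) processes, whose autocovariances are known in closed form (gamma(0) = (1 + theta^2) sigma^2, gamma(1) = theta sigma^2), and checks that the sample autocovariance of Z3t = Z1t + 2 Z2t matches gamma1(h) + 4 gamma2(h).

```python
import random

random.seed(1)
n = 200_000
th1, th2 = 0.6, -0.4   # MA(1) coefficients (arbitrary illustrative choices)

e1 = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
e2 = [random.gauss(0.0, 1.0) for _ in range(n + 1)]

# Independent MA(1) processes: X_t = e_t + theta * e_{t-1}, with unit-variance noise.
z1 = [e1[t] + th1 * e1[t - 1] for t in range(1, n + 1)]
z2 = [e2[t] + th2 * e2[t - 1] for t in range(1, n + 1)]
z3 = [x + 2.0 * y for x, y in zip(z1, z2)]   # Z3t = Z1t + 2 Z2t

def autocov(x, h):
    """Sample autocovariance of the series x at lag h."""
    m = sum(x) / len(x)
    return sum((x[t] - m) * (x[t + h] - m) for t in range(len(x) - h)) / (len(x) - h)

# Closed-form MA(1) autocovariances (unit noise variance):
g1 = {0: 1 + th1**2, 1: th1}
g2 = {0: 1 + th2**2, 1: th2}
for h in (0, 1):
    print(h, autocov(z3, h), g1[h] + 4 * g2[h])   # sample vs gamma1(h) + 4 gamma2(h)
```

The sample values at lags 0 and 1 track the theoretical gamma1(h) + 4 gamma2(h), consistent with formula (2); for any other pair of given autocovariance functions the same substitution applies.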


Dennis Cox
3/10/1999