Whether our goal in analyzing a particular time series is forecasting or understanding the underlying mechanism, we will be building a probability model for the data. The typical end result of probability modeling is a method for ``reducing'' the series to some kind of standard ``random noise''. The point is that once we have such ``noise'', we have extracted all the useful information. For forecasting, the utility of the ``reduction to random noise'' notion is that ``noise'' cannot be predicted except with a probability statement, usually in the form of a so-called prediction interval. We can then reverse the ``reduction to random noise'' procedure to obtain a prediction interval for the original series. When it comes to understanding the mechanism that generates the series, the notion is that ``noise'' is not understandable, so all the useful information lies in the mechanism whereby the process is reduced to noise.
So, two big issues are evident: what tools are available to develop the ``reduction to noise'', and how do we recognize ``noise'' when we see it?
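As a minimal sketch of the second issue, recognizing ``noise'', one standard diagnostic is a portmanteau statistic such as the Ljung–Box $Q$, which aggregates the sample autocorrelations of a series: for genuine ``noise'' all autocorrelations should be near zero, so a small $Q$ is consistent with noise while a large $Q$ signals structure left to model. The hand-rolled `ljung_box` function and the AR(1) example below are illustrative choices, not part of the text:

```python
import numpy as np

def ljung_box(x, max_lag=10):
    """Ljung-Box portmanteau statistic Q = n(n+2) * sum_k r_k^2 / (n-k).

    Under the null hypothesis that x is white noise, Q is approximately
    chi-squared with max_lag degrees of freedom; large Q suggests the
    series still carries predictable structure (i.e., it is not noise).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x * x)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom  # lag-k sample autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
white = rng.standard_normal(500)            # pure ``noise''

ar1 = np.empty(500)                         # strongly autocorrelated series
ar1[0] = white[0]
for t in range(1, 500):
    ar1[t] = 0.7 * ar1[t - 1] + white[t]

print(f"Q(white) = {ljung_box(white):.1f}")  # small: consistent with noise
print(f"Q(AR(1)) = {ljung_box(ar1):.1f}")    # large: structure remains
```

In practice one applies such a test not to the raw series but to the residuals left after the candidate ``reduction to noise''; a small $Q$ there is evidence that the reduction has succeeded.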
We shall see that there are three typical steps in the ``reduction-to-noise'' process: