Time Series Data as a Realization of a Stochastic Process

A stochastic process with state space \( \mathbb S \) is a collection of random variables \( \{X_t , t \in \mathbb T\} \) defined on the same probability space \( (\Omega, \mathcal F, P) \). The set \( \mathbb T \) is called its parameter set. If \( \mathbb T = \mathbb N = \{0, 1, 2, \dots\} \), the process is said to be a discrete-parameter process. If \( \mathbb T \) is not countable, the process is said to have a continuous parameter. In the latter case the usual examples are \( \mathbb T = \mathbb R_+ = [0, \infty) \) and \( \mathbb T = [a, b] \subset \mathbb R_+ \). The index \( t \) represents time, and one then thinks of \( X_t \) as the “state” or the “position” of the process at time \( t \). In most common examples the state space is \( \mathbb R \), and the process is then said to be real-valued. There will also be examples where \( \mathbb S \) is \( \mathbb Z \), the set of all integers, or a finite set.
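As a concrete illustration of a discrete-parameter, real-valued process (\( \mathbb T = \mathbb N \), \( \mathbb S = \mathbb R \)), the following sketch simulates a Gaussian random walk \( X_t = X_{t-1} + \varepsilon_t \). The process and the function name are chosen here purely for illustration; they are not taken from the text above.

```python
import random

def random_walk(n, seed=None):
    """Simulate X_0, X_1, ..., X_n of a Gaussian random walk:
    a discrete-parameter (T = N), real-valued (S = R) process."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]          # start the process at X_0 = 0
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)  # add an independent N(0, 1) increment
        path.append(x)
    return path

path = random_walk(10, seed=42)
print(len(path))  # 11 values: the states X_0 through X_10
```

Fixing the seed makes the simulated path reproducible, which is convenient when a single realization is to be studied repeatedly.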

A stochastic process describes the probability structure of a sequence of observations. An observed time series of successive observations \( \{x_t , t \in \mathbb T\} \) is a unique realization of the generating process: it is regarded as one sample realization, drawn from an infinite population of such samples, any of which could have been generated by the process. This conception of an observed time series can be visualized as in the following figure.
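The idea of one observed series being a single draw from an ensemble of possible realizations can be sketched in code. This is an illustrative construction only: the random-walk process and the seeds below are assumptions, not part of the text.

```python
import random

def realization(n, seed):
    """One sample path x_0, ..., x_n of a Gaussian random walk."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# A few draws from the (conceptually infinite) population of realizations;
# each list is one time series the same process could have generated.
ensemble = [realization(20, seed=s) for s in range(5)]
```

Each element of `ensemble` plays the role of the observed series \( \{x_t\} \); the observed data correspond to exactly one such row.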

A major objective of statistical investigation is to infer properties of the population from those of the sample. For example, to make a forecast is to infer the probability distribution of a future observation from the population, given a sample \( \{x_t , t \in \mathbb T\} \) of past values. To do this we need ways of describing stochastic processes and time series, and we also need classes of stochastic models that are capable of describing practically occurring situations.

It is known that the complete probabilistic structure of such a process is determined by the set of distributions of all finite collections of the  \( X \)’s. Fortunately, we will not have to deal explicitly with these multivariate distributions. Much of the information in these joint distributions can be described in terms of means, variances, and covariances. Consequently, we concentrate our efforts on these first and second moments. (If the joint distributions of the \( X \)’s are multivariate normal distributions, then the first and second moments completely determine all the joint distributions.)
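The first and second moments mentioned above can be estimated by averaging across many realizations of a process. The sketch below does this for Gaussian white noise, an assumed example process (its true moments are known, so the estimates can be checked); the function names are hypothetical.

```python
import random

def realization(n, seed):
    """One path of Gaussian white noise: true mean 0, variance 1, cov 0."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def ensemble_moments(paths, t, s):
    """Estimate E[X_t], Var(X_t) and Cov(X_t, X_s) by averaging
    over an ensemble of independent realizations."""
    m = len(paths)
    mean_t = sum(p[t] for p in paths) / m
    mean_s = sum(p[s] for p in paths) / m
    var_t = sum((p[t] - mean_t) ** 2 for p in paths) / m
    cov_ts = sum((p[t] - mean_t) * (p[s] - mean_s) for p in paths) / m
    return mean_t, var_t, cov_ts

paths = [realization(50, seed=s) for s in range(2000)]
mean, var, cov = ensemble_moments(paths, t=10, s=20)
# With 2000 realizations these estimates should be close to 0, 1, 0.
```

For Gaussian processes, as the parenthetical remark notes, such first and second moments determine all the joint distributions, which is what justifies concentrating on them.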

Last modified: Tuesday, 23 February 2016, 10:13 AM