Autocovariance Function

The autocovariance function is defined as the second moment product$$\gamma(s,t)=cov(x_s,x_t)=E[(x_s-\mu_s)(x_t-\mu_t)]$$ for all \(s\) and \(t\). Note that \(\gamma(s,t)=\gamma(t,s)\) for all time points \(s\) and \(t\). The autocovariance measures the linear dependence between two points on the same series observed at different times. Recall from classical statistics that if \(\gamma(s,t)=0\), then \(x_s\) and \(x_t\) are not linearly related, but there still may be some dependence structure between them. If, however, \(x_s\) and \(x_t\) are bivariate normal, \(\gamma(s,t)=0\) ensures their independence. It is clear that, for \(s=t\), the autocovariance reduces to the (assumed finite) variance,$$\gamma(t,t)=E[(x_t-\mu_t)^2]=var(x_t).$$
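To make the definition concrete, here is a minimal sketch (Python with NumPy; the series \(x_t = w_t + w_{t-1}\) is an illustrative choice, not one used in the text) that estimates \(\gamma(s,t)\) by averaging the product of centered values over many independent replications of the series:

```python
import numpy as np

rng = np.random.default_rng(42)

# Many independent replications of a short series x_t = w_t + w_{t-1},
# an illustrative construction with known covariances:
# var(x_t) = 2*sigma_w^2, cov at lag 1 = sigma_w^2, cov at lag >= 2 = 0.
n_reps, n_time = 200_000, 10
w = rng.normal(0.0, 1.0, size=(n_reps, n_time + 1))   # white noise, sigma_w = 1
x = w[:, 1:] + w[:, :-1]                              # columns index time t

def gamma_hat(x, s, t):
    """Monte Carlo estimate of gamma(s, t) = E[(x_s - mu_s)(x_t - mu_t)],
    averaging over the independent replications (rows)."""
    xs, xt = x[:, s], x[:, t]
    return np.mean((xs - xs.mean()) * (xt - xt.mean()))

print(round(gamma_hat(x, 4, 4), 2))  # ~2.0 = var(x_t)
print(round(gamma_hat(x, 4, 5), 2))  # ~1.0, one shared noise term at lag 1
print(round(gamma_hat(x, 4, 7), 2))  # ~0.0, no shared noise terms
```

The estimate averages over replications rather than over time because \(\gamma(s,t)\) is defined as an expectation over the ensemble of possible series, not as a time average along a single realization.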
Example: The white noise series \(w_t\) has \(E(w_t)=0\) and $$\gamma_w(s,t)=cov(w_s,w_t)=\begin{cases}\sigma_w^2, & s = t,\\ 0, & s \ne t.\end{cases}$$
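As a quick numerical check of this formula, the sketch below (assuming Gaussian white noise with an arbitrary \(\sigma_w = 1.5\)) computes the sample covariance matrix of the time points across many replications; the diagonal entries should be near \(\sigma_w^2\) and the off-diagonal entries near zero:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_w = 1.5
n_reps, n_time = 200_000, 6
w = rng.normal(0.0, sigma_w, size=(n_reps, n_time))   # Gaussian white noise

# Sample covariance across replications: rows are observations,
# columns are the time points s, t = 0, ..., n_time - 1.
gamma_w = np.cov(w, rowvar=False)

# Diagonal (s = t) should be near sigma_w^2 = 2.25; off-diagonal near 0.
print(np.round(gamma_w, 2))
```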
