Time series

(This post got published on The Statistics Forum yesterday.)

The short book review section of the International Statistical Review sent me Raquel Prado and Mike West’s book, Time Series (Modeling, Computation, and Inference), to review. The current post is not about this specific book, but rather about why I am unsatisfied with the textbooks in this area (and, correlatively, why I am always reluctant to teach a graduate course on the topic). Again, I stress that the following is not specifically about the book by Raquel Prado and Mike West!

With the notable exception of Brockwell and Davis’ Time Series: Theory and Methods, most time-series books seem to suffer (in my opinion) from the same difficulty, which sums up as being unable to provide the reader with a coherent and logical description of/introduction to the field. (This echoes a complaint made by Håvard Rue a few weeks ago in Zurich.) Instead, time-series books appear to haphazardly pile up notions and techniques, theory and methods, without paying much attention to the coherence of the presentation. That’s how I was introduced to the field (even though it was by a fantastic teacher!), and the feeling has not left me since. It may be due to the fact that the field stemmed partly from signal processing in engineering and partly from econometrics, but such presentations never achieve a Unitarian front on how to handle time series.

In particular, the opposition between the time domain and the frequency domain always escapes me. This is presumably due to my inability to see the relevance of the spectral approach, as harmonic regression simply appears (to me) as a special type of non-linear regression with sinusoidal regressors and with a well-defined likelihood that requires neither Fourier frequencies nor a periodogram (nor spectral density estimation).

Even within the time domain, I find the handling of stationarity by time-series books to be mostly cavalier. Why stationarity is important is never addressed, which leaves the reader with the hard choice between imposing stationarity and not imposing it. (My original feeling was to let the issue be decided by the data, but this is not possible!) Similarly, causality is often invoked as a reason to set constraints on MA coefficients, even though this amounts to a non-mathematical justification, namely preventing dependence on the future. I thus wonder if being a Unitarian (i.e. following a single logical process for analysing time-series data) is at all possible in the time-series world! E.g., in Bayesian Core, we processed AR, MA, and ARMA models from a single perspective, conditioning on the initial values of the series and imposing all the usual constraints on the roots of the lag polynomials, but this choice was far from perfectly justified…
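To make the harmonic-regression point concrete, here is a minimal sketch in Python (the frequency, amplitudes, and sample size are illustrative choices of mine, not taken from any book): the harmonic terms are fitted by plain non-linear least squares, equivalently by maximising a Gaussian likelihood, with no recourse to Fourier frequencies, the periodogram, or spectral density estimation.

```python
# Minimal sketch: harmonic regression treated as non-linear regression with
# sinusoidal regressors, fitted by direct least squares (equivalently, Gaussian
# maximum likelihood). All numerical values are illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
true_freq, true_a, true_b, sigma = 0.07, 1.5, -0.8, 1.0
y = (true_a * np.cos(2 * np.pi * true_freq * t)
     + true_b * np.sin(2 * np.pi * true_freq * t)
     + rng.normal(0.0, sigma, n))

def residuals(theta):
    # the Gaussian log-likelihood is maximised by minimising these squared residuals
    freq, a, b = theta
    return y - (a * np.cos(2 * np.pi * freq * t) + b * np.sin(2 * np.pi * freq * t))

# the least-squares surface is multimodal in the frequency, so a sensible
# starting value (or a coarse preliminary grid search) is needed
fit = least_squares(residuals, x0=[0.068, 1.0, 0.0])
print("estimated (frequency, cosine amp., sine amp.):", fit.x)
```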

7 Responses to “Time series”

  1. I was just wondering if you were taught these fantastic classes on time series at ENSAE, and if so, who was the teacher?

  2. Brockwell & Davis is also one of my favourite time series books. However, it focuses almost exclusively on the mechanics of ARIMA models. Spectral analysis, on the other hand, allows us to see what portion of the variance can be attributed to a particular frequency. Studying the series from a frequency perspective, we can also more easily discover the effect of applying particular filters to our data. Economists should, for example, know that the Hodrick-Prescott filter will create spurious cycles when used for detrending if the series of interest are integrated. In particular, applying the HP filter to a random walk (I(1)) generates detrended observations which have the characteristics of a business cycle for quarterly observations*! (A small simulation sketch of this effect appears after the reference below.)

    * See Harvey, A. C. and Jaeger, A. (1993), Detrending, stylized facts and the business cycle, Journal of Applied Econometrics, 8: 231–247, which I summarize on my blog.
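    Here is a small simulation sketch of that effect, assuming statsmodels’ hpfilter is available (sample size and seed are arbitrary): detrending a pure random walk with the HP filter at the usual quarterly smoothing value produces a sizeable "cycle" component even though the simulated series contains no cyclical signal at all.

    ```python
    # Sketch: HP-filter detrending of a pure random walk yields a spurious "cycle".
    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(1)
    random_walk = np.cumsum(rng.normal(size=200))   # I(1) series, no cycle by construction

    # lamb=1600 is the conventional smoothing value for quarterly data
    cycle, trend = hpfilter(random_walk, lamb=1600)
    print("std of the spurious 'cycle' component:", cycle.std())
    ```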

  3. Radford Neal Says:

    The difference between time domain and frequency domain for stationary time series analysis is simple. In the time domain, all the variables have the same variance, but they are probably autocorrelated. After applying a finite Fourier transform to your data, you get numbers that are uncorrelated (at least in the limit of a long series), but that probably have different variances. Deal with autocorrelation, or deal with varying variance – you get to choose which problem you have to tackle.

    • This is a very helpful reason for turning to the frequency domain, then. Would it make any sense to compare the information content of time-domain and frequency-domain representations of a dataset?

    • Radford Neal Says:

      The information content is the same, since the finite Fourier transform is invertible. Another way to look at it is that it’s really a lot like Principal Component Analysis, in that it produces uncorrelated variables from correlated variables by applying a linear transformation.
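      A small numerical illustration of the two points above, under an assumed Gaussian AR(1) model (all settings are arbitrary): across repeated simulations, the finite Fourier transform yields coefficients that are essentially uncorrelated across distinct frequencies but with very different variances, and the transform is exactly invertible, so no information is lost.

      ```python
      # Illustration (assumed AR(1) setting): Fourier coefficients of a stationary
      # series are nearly uncorrelated across frequencies but have unequal variances,
      # and the finite Fourier transform is invertible.
      import numpy as np

      rng = np.random.default_rng(2)
      n, reps, phi = 128, 5000, 0.8

      # simulate `reps` independent stationary AR(1) series of length n
      x = np.empty((reps, n))
      x[:, 0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2), size=reps)
      for t in range(1, n):
          x[:, t] = phi * x[:, t - 1] + rng.normal(size=reps)

      f = np.fft.rfft(x, axis=1)  # finite Fourier transform of each series

      # variances differ across frequencies (they track the AR(1) spectral density) ...
      print("variance at a low vs a high frequency:", f[:, 1].var(), f[:, -2].var())
      # ... while coefficients at distinct frequencies are essentially uncorrelated
      print("correlation across two frequencies:",
            np.corrcoef(f[:, 5].real, f[:, 20].real)[0, 1])

      # the transform is invertible, so the information content is unchanged
      print("max reconstruction error:", np.abs(np.fft.irfft(f, n=n, axis=1) - x).max())
      ```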

  4. Ben Bolker Says:

    For reasonably numerate biologists I always liked Peter Diggle’s “Time Series: A Biostatistical Introduction”, although it may be insufficiently rigorous for your tastes … (can be found on Amazon, I won’t try to include the URL here)
