SDM-CAR explained: spectral structure in spatial Bayesian modeling
Why this matters: when spatial structure is learned rather than fixed, uncertainty is better calibrated and prediction is more robust.
Why classical spatial models fail under misspecification
A standard CAR model encodes spatial dependence through a fixed neighborhood matrix: who is connected to whom, and with what strength. That is convenient, but it quietly assumes the chosen graph structure is correct.
In practice, that assumption is often wrong. Real spatial dependence can be multi-scale, anisotropic, or partially nonlocal. If the imposed structure is too rigid, the model can oversmooth sharp local effects, distort uncertainty, and produce biased estimates. The issue is not that CAR is unusable; it is that fixed structure turns model misspecification into a first-order error source.
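To make the "fixed structure" point concrete, here is a minimal sketch of a proper CAR prior on a one-dimensional chain graph. The adjacency matrix W, the graph, and the numbers are illustrative assumptions; the point is that the precision matrix Q = τ(D − ρW) is fully determined once W is chosen.

```python
# Minimal CAR sketch (assumed setup): a proper CAR prior on a chain graph.
import numpy as np

n = 6
W = np.zeros((n, n))
for i in range(n - 1):            # chain graph: each site neighbors the next
    W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))        # degree matrix

tau, rho = 1.0, 0.9               # precision scale and spatial dependence
Q = tau * (D - rho * W)           # CAR precision: fixed once W is fixed

# For |rho| < 1 this Q is positive definite, so it defines a valid
# Gaussian prior -- but its smoothing behavior is frozen by W.
assert np.all(np.linalg.eigvalsh(Q) > 0)
```

If W misses a dependence (say, a long-range link), no setting of τ or ρ can recover it: the two scalars only rescale the smoothing profile that W already dictates.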
Intuition: what the graph spectrum represents
Every spatial graph has a spectrum (the eigenvalues of a matrix such as its graph Laplacian), and those spectral components act like spatial frequencies. Low-frequency modes represent broad, smooth regional trends; high-frequency modes capture sharper local variation and boundaries.
Thinking spectrally separates two questions that are conflated in classical CAR: which graph was chosen and how much signal should live at each spatial scale. This perspective makes misspecification easier to diagnose and, more importantly, easier to model.
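The frequency interpretation can be checked directly. The sketch below (an illustrative chain graph, not any specific dataset) eigendecomposes the graph Laplacian and counts sign changes in its eigenvectors: eigenvectors with small eigenvalues are smooth across the graph, while those with large eigenvalues oscillate rapidly.

```python
# Illustration: Laplacian eigenvectors of a chain graph act like
# spatial frequencies (smooth modes first, oscillatory modes last).
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W            # graph Laplacian

evals, evecs = np.linalg.eigh(L)          # eigenvalues sorted ascending

def sign_changes(v):
    """Count sign flips along the chain -- a proxy for spatial frequency."""
    s = np.sign(v)
    s = s[s != 0]
    return int(np.sum(s[:-1] != s[1:]))

# The smoothest nontrivial mode changes sign once; the roughest mode
# alternates sign at every step of the chain.
print(sign_changes(evecs[:, 1]), sign_changes(evecs[:, -1]))  # -> 1 7
```

On this chain the analogy is exact: the eigenvectors are discrete cosine modes, and the eigenvalue orders them from broad regional trends to sharp local variation.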
The SDM-CAR idea
SDM-CAR (Spectral Density Modulated CAR) moves the prior into the graph spectral domain. Instead of hard-coding one smoothing profile from a fixed precision template, it learns a spectral density over eigenvalues. That density controls how strongly each spatial frequency is expressed.
Concretely, this keeps the computational and interpretive advantages of graph-based priors while letting the model adapt between global smoothness and local structure as the data demand. Under misspecification, this is more flexible than classical CAR, and it achieves that flexibility with one learned density over the spectrum rather than a heavier model at every stage.
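A sketch of the core mechanism follows. The functional form of the spectral density g(λ; θ) here is my assumption (a simple exponential decay), not the paper's parameterization; the point is that the prior covariance is built by reweighting each Laplacian eigenvalue with a learnable density instead of the weight a fixed CAR template would imply.

```python
# Sketch of spectral-density modulation (the form of g is an assumption):
# build the prior covariance as U diag(g(evals; theta)) U^T, where g is a
# learnable density over the Laplacian spectrum.
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
evals, U = np.linalg.eigh(L)

def spectral_cov(theta):
    # Hypothetical modulation: variance decays in the eigenvalue, with
    # theta controlling how fast rough (high-frequency) modes are damped.
    g = np.exp(-theta * evals)            # per-frequency prior variance
    return (U * g) @ U.T                  # Cov = U diag(g) U^T

smooth = spectral_cov(theta=2.0)          # strong high-frequency damping
rough = spectral_cov(theta=0.1)           # nearly flat spectrum
```

In a classical CAR, the role of g is frozen by the precision template; here θ is inferred, so the data decide how much signal lives at each spatial frequency.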
Inference: variational speed with MCMC validation
Inference uses collapsed variational methods for scalability, paired with targeted MCMC diagnostics to check posterior geometry and calibration. This is not "VI instead of MCMC"; it is a deliberate hybrid: VI for tractable fitting at realistic problem sizes, MCMC for robustness checks where approximation error matters.
That combination is central to the project: it signals both algorithmic practicality and Bayesian discipline.
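A toy illustration of why the cross-check matters (my construction, not the project's pipeline): for a Gaussian target, the optimal mean-field variational approximation has marginal variances 1/Q_ii, which ignore correlation. Comparing those against exact marginals (standing in for an MCMC reference run) exposes the calibration gap the diagnostics are meant to catch.

```python
# Toy check: mean-field VI underestimates marginal variance on a
# correlated Gaussian posterior; an exact/MCMC reference reveals this.
import numpy as np

rho = 0.8
Q = np.array([[1.0, -rho],
              [-rho, 1.0]]) / (1 - rho**2)  # posterior precision (correlated)

exact_var = np.diag(np.linalg.inv(Q))       # exact marginal variances

# Known closed form: the optimal factorized Gaussian approximation has
# variance 1 / Q_ii per coordinate, discarding the off-diagonal coupling.
vi_var = 1.0 / np.diag(Q)

# Both VI variances fall short of the exact marginals when correlation
# is strong -- the kind of miscalibration a reference run would flag.
assert np.all(vi_var < exact_var)
```

The stronger the posterior correlation, the larger the gap, which is precisely the regime where a spatially structured posterior makes targeted MCMC validation worth the cost.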
Why this matters in practice
If your spatial structure is slightly wrong, classical models can look stable while being systematically off. SDM-CAR is designed for that regime. Classical CAR implicitly fixes the spectral allocation through its chosen neighborhood structure; SDM-CAR learns it from data, so the model better tracks heterogeneous spatial effects and yields more credible uncertainty under structural ambiguity.
The practical contribution is not just better fit on one dataset, but a modeling strategy that stays reliable when the graph you start with is only an approximation.