Simulating a spatial Conditional Autoregressive Model (CAR) from a graph G(V,E)

This is the second installment on how to simulate a Conditional Autoregressive Model (CAR) using a given graph (aka lattice, network, etc.). For simulating a CAR model without a given graph see: [ HERE ].

Ingredients

  • An adjacency matrix (M) representation of a lattice. This represents a simple graph, i.e. \(\delta_{i,j} = 1\) if there exists an edge between \(i\) and \(j\).
  • [To Plot] A corresponding data frame (GeoPandas) to visualize polygons corresponding to the areas of the lattice.

Methods

Load the necessary data. In this example, the lattice L is the one shown in the following figure.

## Imports
import numpy as np
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

## Geometry file
geometryfile = "training_data_sample_puebla_p9_geometry.json"
## Read file
gg = gpd.read_file(geometryfile)
gg.set_index('indx',inplace=True)
plt.rcParams['figure.figsize'] = [10, 10]
gg.plot(edgecolor='black')
## Read the corresponding adjacency matrix
mat_filename = "training_data_sample_puebla_p9_abies_pinophyta_adjmat.npy"
M = np.load(file=mat_filename)

The order of the columns (and rows) of M corresponds to the ascending order of the indices of the polygons in the GeoPandas data frame, i.e. the element of gg with the lowest index corresponds to the first column (row) of M, and so on. If this order is not respected the spatial autocorrelation will be lost.
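
As a quick sanity check, here is a minimal sketch (not part of the original workflow; the variable name gg_sorted is only illustrative) to confirm that the sorted polygon index and the dimensions of M agree:

## Sanity check (sketch): the sorted polygon index should match the size of M
gg_sorted = gg.sort_index()
assert M.shape[0] == M.shape[1] == len(gg_sorted), "M and the lattice differ in size"
## Row/column i of M is assumed to describe polygon gg_sorted.iloc[i]
print(gg_sorted.index[:5])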

Building the precision matrix Q

The probability distribution of a CAR model is a multivariate normal distribution \(MVN(0,Q^{-1})\), so simulating a CAR model corresponds to simulating this process. The challenge resides, however, in the construction of the precision matrix \(Q\). We will follow the work of Besag, 1991, Rue and Held, 2005, and Morris et al., 2019. For any doubts, consult those references.

While preparing this tutorial I had problems simulating an intrinsic CAR model, obtaining a Q that is not positive definite. For this reason we will build Q as follows:

\(Q = \tau^2 D_{w}(I - \alpha B)\)

Where \(\tau^2\) models the general variance, \(D_{w}\) is the diagonal matrix whose entries are the number of neighbours of each node in the graph, and B is the matrix resulting from \(B = D_{w}^{-1} M\), where M is the corresponding adjacency matrix. Substituting B gives \(Q = \tau^2 (D_{w} - \alpha M)\), which is symmetric and, for \(\alpha < 1\), strictly diagonally dominant with a positive diagonal (in row \(i\), the diagonal entry \(\tau^2 d_i\) exceeds the off-diagonal sum \(\tau^2 \alpha d_i\)); this is what guarantees that Q will be positive definite. The parameter \(\alpha\) represents the degree of spatial autocorrelation and ranges in \((0, 1)\); I recommend testing different values of \(\alpha\). In addition, remove any "island" nodes (nodes without neighbours); failing to do so leaves zero rows in \(D_{w}\), which breaks both the inversion of \(D_{w}\) and the positive definiteness of Q. The code to achieve this is here:

def buildQ(M,D,tau_2=2.0,alpha=0.99999):
    ## Q = tau^2 * D_w * (I - alpha * B), with B = D_w^{-1} M
    I = np.identity(M.shape[0])
    D_w = np.diag(D)                                   ## diagonal matrix of neighbour counts
    D_inv = np.diag(np.reciprocal(D.astype(float)))    ## D_w^{-1}
    B = np.dot(D_inv,M)                                ## row-normalised adjacency
    Q = tau_2 * np.dot(D_w,(I - alpha * B))            ## equals tau^2 * (D_w - alpha * M)
    return(Q)

## Calculate number of neighbours
D = np.sum(M,axis=1)
## Remove islands (nodes without neighbours)
idx_islands = np.where(D == 0)[0]
ND = np.delete(D,idx_islands,axis=0)
NM = np.delete(M,idx_islands,axis=0)
NM = np.delete(NM,idx_islands,axis=1)

## Calculate Q
Q = buildQ(NM,ND,tau_2=2.0,alpha=0.995)
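
Since the positive definiteness of Q depends on \(\alpha\) and on having removed all islands, it can be worth checking it explicitly before continuing. A minimal sketch of such a check (not in the original post), relying on the fact that np.linalg.cholesky fails on matrices that are not positive definite:

## Positive-definiteness check (sketch): Cholesky succeeds only if Q is PD
try:
    np.linalg.cholesky(Q)
    print("Q is positive definite for the chosen alpha")
except np.linalg.LinAlgError:
    print("Q is NOT positive definite; try a smaller alpha or check for islands")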

 

Here we visualize the covariance matrix (\(Q^{-1}\)).
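
A minimal sketch (not from the original post) of how the covariance matrix could be computed and displayed; in principle one could also sample directly from \(MVN(0, Q^{-1})\) this way, although the Cholesky approach below is much faster:

## Covariance matrix (sketch): invert Q and display it as a heatmap
Sigma = np.linalg.inv(Q)
plt.imshow(Sigma)
plt.colorbar()
plt.title("Covariance matrix")
plt.show()
## A direct (but slower) simulation straight from the covariance matrix:
## x = np.random.multivariate_normal(np.zeros(Q.shape[0]), Sigma)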

 

As we have now made sure (by construction, or by trial and error 😛) that Q is positive definite, we can compute its Cholesky decomposition, which makes the simulation remarkably fast and efficient. Here we follow algorithm 2.4 from Rue and Held. The code is as follows:

## Simulation Algorithm 2.4 (Rue and Held, 2005)
def generateCARSimulation(Q,mu):
    L = np.linalg.cholesky(Q)                                 ## Q = L L^T
    z = np.random.normal(loc=0.0,scale=1.0,size=L.shape[0])   ## z ~ N(0, I)
    v = np.linalg.solve(L.transpose(),z)                      ## solve L^T v = z, so v ~ N(0, Q^{-1})
    return(mu + v)

mu = np.zeros(Q.shape[0])   ## zero-mean CAR process
v = generateCARSimulation(Q,mu)

To visualise the simulation, just join the random vector v with the GeoPandas dataframe gg, as follows.

## order index (cell_id) ascending, according to M.
ordered_g = gg.sort_index()
## Reset the index, this is necessary in order to join. 
ordered_g.reset_index(inplace=True)
## Create the dataframe
yy = pd.DataFrame(v,columns=['simulated'])
## Join it with the geospatial dataframe
simulation = ordered_g.join(yy)
## Plot it!
simulation.plot(column='simulated')

This is the visualisation.

Simulation of a CAR process with a given adjacency matrix and a lattice

Thanks for reading and remember to use your skills for the benefit of Mother Earth.

Hierarchical logistic regression

This is a rough translation of my original post, just to reach a bilingual audience. I actually used Google Translate, hehehe!

I am surprised by the power and flexibility of STAN for expressing complex statistical models. As you know, STAN is a declarative probabilistic programming language: statistical models are described with a simple syntax, in a way very similar to how we specify them on the board or on paper. STAN is based on Bayesian principles and uses several methods to sample from posterior distributions; among the best known are Hamiltonian Monte Carlo, the No-U-Turn Sampler (NUTS), and Metropolis-Hastings (I will not go into detail on this now; there are lots of references about it on the internet).

In this post we will see how to define a hierarchical logistic model. Suppose that for every binary observation \(y_i \in \{0,1\}\) there is an associated level \(j_i \in \{1,2,\ldots,J\}\) and a vector of covariates (predictors) \(x_i \in \mathbb{R}^K\). We will therefore assume that the observations are realizations of an independent Bernoulli process when conditioned on a hierarchical normal process. That is:

\([Y | S] \sim Bernoulli(S)\)

Now, \(S\) can be described as follows:

\(S_{i,j} = \beta^t_j x_i + \epsilon\)

This reads as follows: the variable S for observation i at level j is equal to \(\beta^t_j x_i + \epsilon\). That is, the vector \(\beta_j\) contains the coefficients associated with the response \(S_i\) at level j.

In STAN this is equivalent to:

hierarquical_log_model = u"""
data {
  int<lower=0> N; // num obs. 
  int<lower=0> J; // number of levels
  int<lower=1> K;                // num covariates
  matrix[N, K] x;                // design matrix
  int<lower=1,upper=J> level[N];  // type of level (spec)
  int<lower=0,upper=1> y[N];              // observations, in this case binary.
  
} 
parameters {
  real mu[K];
  vector[K] beta[J];
  real<lower=0,upper=100> sigma; // same sigma

}

model {
    for (k in 1:K) {
        mu[k] ~ normal(0,100);
        for (j in 1:J)
            beta[j,k] ~ normal(mu[k], sigma);
    }
    for (i in 1:N)
        y[i] ~ bernoulli_logit(x[i] * beta[level[i]]);
  
}"""

To run the model in Python, using PyStan:

import pystan

## X: design matrix (pandas DataFrame); data: DataFrame holding the level codes and binary observations
varying_intercept_model = pystan.StanModel(model_code=hierarquical_log_model)
multilevel_linear_data = {'N': X.shape[0],
                          'J': 2,
                          'K': X.shape[1],
                          'level': data.codes.values, # levels
                          'x' : X.values,
                          'y': data.obs.values}
%time fit = varying_intercept_model.sampling(data=multilevel_linear_data, iter=1000,chains=1)
sample = fit.extract(permuted=True)
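
Once the sampler finishes, the draws can be inspected directly from the dictionary returned by extract. A minimal sketch (not in the original post); for the model above, sample['beta'] is an array of shape (draws, J, K):

## Posterior summaries (sketch) for the level-specific coefficients
beta_mean = sample['beta'].mean(axis=0)   # posterior mean, one row per level
beta_sd = sample['beta'].std(axis=0)      # posterior standard deviation
print(beta_mean)
print(beta_sd)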

Resources:

Originally taken from: https://mc-stan.org/docs/2_18/stan-users-guide/basic-motivation.html

Very good tutorial: https://mc-stan.org/users/documentation/case-studies/radon.html