This is the second installment on how to simulate a Conditional Autoregressive (CAR) model using a given graph (aka lattice, network, etc.). For simulating a CAR model without a given graph see: [ HERE ].
Ingredients
- An adjacency matrix (M) representation of a lattice. This represents a simple graph, i.e. \(\delta_{i,j} = 1\) if there exists an edge between \(i\) and \(j\).
- [To Plot] A corresponding data frame (GeoPandas) to visualize polygons corresponding to the areas of the lattice.
Methods
Load the necessary data. In this example, the lattice L corresponds to the following figure.

import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

## Geometry file
geometryfile = "training_data_sample_puebla_p9_geometry.json"
## Read file
gg = gpd.read_file(geometryfile)
gg.set_index('indx', inplace=True)
plt.rcParams['figure.figsize'] = [10, 10]
gg.plot(edgecolor='black')
## Read the corresponding adjacency matrix
mat_filename = "training_data_sample_puebla_p9_abies_pinophyta_adjmat.npy"
M = np.load(file=mat_filename)
The order of the columns (rows) of M corresponds to the ascending order of the indices of the polygons in the GeoPandas data frame, i.e. the element of gg with the lowest index corresponds to the first column (row) of M, and so on. If this order is not respected the spatial autocorrelation will be lost.
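As a quick consistency check (my own addition, not part of the original post), we can confirm that M is square with one row per polygon and inspect the polygon indices in the order that matches M:

## Sanity check: M must be square with one row/column per polygon.
assert M.shape[0] == M.shape[1] == len(gg)
## Polygon ids listed in the order of the columns (rows) of M.
ordered_index = gg.sort_index().index
print(ordered_index[:5])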
Building the precision matrix Q
The probability distribution of a CAR model is a multivariate normal distribution \(MVN(0,Q^{-1})\), so simulating a CAR model corresponds to simulating this process. The challenge resides, however, in the construction of the precision matrix \(Q\). We will follow the work of: Besag, 1991, Rue and Held, 2005 and Morris et al., 2019. For any doubts, consult those references.
While preparing this tutorial I ran into problems simulating an intrinsic CAR model: its precision matrix Q is only positive semi-definite (singular), not strictly positive definite, which breaks a Cholesky-based simulation. For that reason we will build Q as follows:
\(Q = \tau^2 D_{w}(I - \alpha B)\)
Where \(\tau^2\) sets the overall scale of the precision, \(D_{w}\) is a diagonal matrix holding the number of neighbours of each node in the graph, and B is the row-normalised adjacency matrix \(B = D_{w}^{-1} M\), where M is the corresponding adjacency matrix. With this construction \(Q = \tau^2(D_{w} - \alpha M)\), which is symmetric and, being strictly diagonally dominant, positive definite whenever \(|\alpha| < 1\). The parameter \(\alpha\) represents the degree of spatial autocorrelation and ranges in \((0, 1)\); I recommend testing different values of \(\alpha\). In addition, remove any "islands" (nodes without neighbours) before building Q: their neighbour count is zero, so \(D_{w}\) cannot be inverted and positive definiteness is lost. A small worked example follows; the code used in this tutorial comes right after it.
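To make the construction concrete, here is a tiny worked example on a 3-node path graph (this toy graph is my own illustration, not part of the tutorial's data):

import numpy as np

## Toy illustration: a path graph 0 - 1 - 2.
M_toy = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
D_toy = np.diag(M_toy.sum(axis=1))            # neighbour counts on the diagonal
tau_2, alpha = 2.0, 0.9
B_toy = np.linalg.inv(D_toy) @ M_toy          # B = D_w^{-1} M, row-normalised adjacency
Q_toy = tau_2 * D_toy @ (np.identity(3) - alpha * B_toy)   # equals tau^2 (D_w - alpha M)
print(Q_toy)
## All eigenvalues are positive, so Q_toy is positive definite.
print(np.linalg.eigvalsh(Q_toy))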
def buildQ(M, D, tau_2=2.0, alpha=0.99999):
    ## D is the vector of neighbour counts; D_w is its diagonal matrix, D_inv its inverse.
    I = np.identity(M.shape[0])
    D_w = np.diag(D)
    D_inv = np.diag(np.reciprocal(D.astype(float)))
    ## B = D_w^{-1} M, the row-normalised adjacency matrix.
    B = np.dot(D_inv, M)
    ## Q = tau^2 * D_w (I - alpha * B) = tau^2 (D_w - alpha * M)
    Q = np.dot(tau_2 * D_w, I - alpha * B)
    return Q

## Calculate number of neighbours
D = np.sum(M, axis=1)
## Remove islands (nodes without neighbours)
idx_islands = np.where(D == 0)[0]
ND = np.delete(D, idx_islands, axis=0)
NM = np.delete(M, idx_islands, axis=0)
NM = np.delete(NM, idx_islands, axis=1)
## Calculate Q
Q = buildQ(NM, ND, tau_2=2.0, alpha=0.995)
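Before moving on it is worth verifying that the resulting Q really is symmetric positive definite. This quick check is my own addition and relies only on NumPy (np.linalg.cholesky raises LinAlgError when a matrix is not positive definite):

## Optional check: Q should be symmetric and positive definite.
assert np.allclose(Q, Q.T), "Q is not symmetric"
try:
    np.linalg.cholesky(Q)
    print("Q is positive definite")
except np.linalg.LinAlgError:
    print("Q is NOT positive definite: revisit alpha and the island removal step")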
Here we visualize the covariance matrix (\(Q^{-1}\)).
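The plot below can be produced with something along these lines (a sketch, assuming matplotlib has been imported as plt as in the loading step):

## Invert the precision matrix to obtain the covariance matrix and plot it.
Sigma = np.linalg.inv(Q)
plt.figure(figsize=(8, 8))
plt.imshow(Sigma)
plt.colorbar()
plt.title("Covariance matrix $Q^{-1}$")
plt.show()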

As we have now made sure (by construction, or trial and error 😛) that Q is positive definite, we can calculate its Cholesky decomposition, which makes the simulation remarkably fast and efficient. Here we follow algorithm 2.4 from Rue and Held. The code is as follows:
## Simulation: algorithm 2.4 of Rue and Held (sample v ~ N(mu, Q^{-1}))
def generateCARSimulation(Q, mu):
    ## Cholesky factor of the precision matrix: Q = L L^T
    L = np.linalg.cholesky(Q)
    ## Standard normal noise
    z = np.random.normal(loc=0.0, scale=1.0, size=L.shape[0])
    ## Solve L^T v = z, so that v ~ N(0, Q^{-1})
    v = np.linalg.solve(L.transpose(), z)
    return mu + v

mu = np.zeros(Q.shape[0])
v = generateCARSimulation(Q, mu)
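As an optional sanity check (my own addition), the empirical covariance of many independent simulations should approach \(Q^{-1}\); the number of simulations below is an arbitrary choice for illustration:

## Draw many simulations and compare their sample covariance with Q^{-1}.
n_sims = 5000
samples = np.stack([generateCARSimulation(Q, mu) for _ in range(n_sims)])
empirical_cov = np.cov(samples, rowvar=False)
print(np.max(np.abs(empirical_cov - np.linalg.inv(Q))))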
To visualise the simulation, just join the random vector v with the GeoPandas data frame gg, as follows.
## Order by index (cell_id) ascending, matching the rows of M.
ordered_g = gg.sort_index()
## Drop the island polygons removed when building Q, so v lines up row by row.
ordered_g = ordered_g.drop(ordered_g.index[idx_islands])
## Reset the index; this is necessary in order to join.
ordered_g.reset_index(inplace=True)
## Create the data frame with the simulated values
yy = pd.DataFrame(v, columns=['simulated'])
## Join it with the geospatial data frame
simulation = ordered_g.join(yy)
## Plot it!
simulation.plot(column='simulated')
This is the visualisation.

Thanks for reading and remember to use your skills for the benefit of Mother Earth.