Solving Inverse Problems with Spectral Dynamic Causal Modeling
Introduction
Neuroblox provides a comprehensive environment for simulations, as we have explored previously, but its functionality doesn't stop there. We will now pivot and turn our attention to a different kind of problem: inferring model parameters from time series, that is, solving inverse problems. The method of choice is one of the most widely used in imaging neuroscience, spectral Dynamic Causal Modeling (spDCM) [1,2]. In this tutorial we will introduce how to perform an spDCM analysis on simulated data. To do so we roughly reproduce the procedure of the SPM script DEM_demo_induced_fMRI.m in Neuroblox. This work was also presented in Hofmann et al. [2].
In this tutorial we will define a circuit of three linear neuronal mass models, all driven by an Ornstein-Uhlenbeck process. We will model fMRI data by a balloon model and BOLD signal on top. After simulation of this simple model we will use spectral Dynamic Causal Modeling to infer some of the model parameters from the simulation time series.
A brief outline of the procedure we will pursue:
- define the graph, add blocks -> sections A, B and C in the figure
- simulate the model -> instead we could also use actual data, section D in the figure
- compute the cross spectral density
- setup the DCM
- estimate parameters
- plot the results
using Neuroblox
using LinearAlgebra
using StochasticDiffEq
using DataFrames
using OrderedCollections
using CairoMakie
using ModelingToolkit
using Random
using StatsBase
Model simulation
Define the model
We will define a model of three regions. The first step is to define a graph. To this graph we will add three linear neuronal mass models, which constitute the (hidden) neuronal dynamics and form the three nodes of the graph. Next, we need inputs that stimulate the activity; we use simple Ornstein-Uhlenbeck blocks, one per region, to create stochastic inputs. Since we want to simulate fMRI signals, we also add a BalloonModel per region. Note that the Ornstein-Uhlenbeck block feeds into the linear neural mass, which in turn feeds into the BalloonModel Blox. This chain needs to be represented by the way we define the edges.
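For orientation, the Ornstein-Uhlenbeck input follows (assuming the standard parameterization with mean $\mu$, time constant $\tau$ and noise amplitude $\sigma$, consistent with the drift term that appears in the generated equations further below):

\[ \mathrm{d}x = \frac{\mu - x}{\tau}\,\mathrm{d}t + \sigma\,\mathrm{d}W_t \]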
Random.seed!(17) # set seed for reproducibility
nr = 3 # number of regions
g = MetaDiGraph()
regions = []; # list of neural mass blocks to then connect them to each other with an adjacency matrix `A_true`
Now we add the different blocks to each region and connect the blocks within each region. For convenience we use a for loop, since the same types of blocks repeat across regions, but you could also build the system the same way as shown in previous tutorials:
for i = 1:nr
region = LinearNeuralMass(;name=Symbol("r$(i)₊lm"))
push!(regions, region) # store neural mass model in list. We need this list below. If you haven't seen the Julia command `push!` before [see here](http://jlhub.com/julia/manual/en/function/push-exclamation).
# add Ornstein-Uhlenbeck block as noisy input to the current region
input = OUBlox(;name=Symbol("r$(i)₊ou"), σ=0.2, τ=2)
add_edge!(g, input => region, weight=1/16) # Note that 1/16 is taken from SPM12, this stabilizes the balloon model simulation. Alternatively the noise of the Ornstein-Uhlenbeck block or the weight of the edge connecting neuronal activity and balloon model could be reduced to guarantee numerical stability.
# simulate fMRI signal with BalloonModel which includes the BOLD signal on top of the balloon model dynamics
measurement = BalloonModel(;name=Symbol("r$(i)₊bm"))
add_edge!(g, region => measurement, weight=1.0)
end
Next we define the between-region connectivity matrix and connect regions; we use the same matrix as is defined in [3]
A_true = [[-0.5 -0.2 0]; [0.4 -0.5 -0.3]; [0 0.2 -0.5]]
3×3 Matrix{Float64}:
-0.5 -0.2 0.0
0.4 -0.5 -0.3
0.0 0.2 -0.5
Note that in SPM DCM connection matrices the columns denote the source (output) region and the rows denote the target (input) region of a connection. This is different from the usual Neuroblox definition of connection matrices. Thus we flip the indices in what follows:
for idx in CartesianIndices(A_true)
add_edge!(g, regions[idx[1]] => regions[idx[2]], weight=A_true[idx[2], idx[1]]) # Note the definition of columns as outputs and rows as inputs. For consistency with SPM we keep this notation.
end
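As a quick illustration of the index flip (a hypothetical sanity check, not part of the tutorial pipeline; A_flipped is just an illustrative name), transposing A_true gives the matrix in the opposite orientation (rows as sources, columns as targets), so the weight assigned above is simply the transposed entry:
A_flipped = permutedims(A_true)   # transpose: rows now index the source region, columns the target region
for idx in CartesianIndices(A_true)
    @assert A_true[idx[2], idx[1]] == A_flipped[idx[1], idx[2]]
end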
Finally, we compose the simulation model:
@named simmodel = system_from_graph(g)
\[ \begin{align} \frac{\mathrm{d} \mathtt{r1.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.ou.\mu} - \mathtt{r1.ou.x}\left( t \right)}{\mathtt{r1.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r1.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r1.ou\_r1.lm} \mathtt{r1.ou.x}\left( t \right) + \mathtt{w\_r2.lm\_r1.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r3.lm\_r1.lm} \mathtt{r3.lm.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) + \mathtt{w\_r1.lm\_r1.bm} \mathtt{r1.lm.x}\left( t \right) - 0.64 \mathtt{r1.bm.s}\left( t \right) e^{\mathtt{r1.bm.ln\kappa}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r1.bm.ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r1.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.ou.\mu} - \mathtt{r2.ou.x}\left( t \right)}{\mathtt{r2.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r2.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r2.lm\_r2.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r2.ou\_r2.lm} \mathtt{r2.ou.x}\left( t \right) + \mathtt{w\_r3.lm\_r2.lm} \mathtt{r3.lm.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{w\_r2.lm\_r2.bm} \mathtt{r2.lm.x}\left( t \right) - 0.64 \mathtt{r2.bm.s}\left( t \right) e^{\mathtt{r2.bm.ln\kappa}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r2.bm.ln\nu}\left( t \right)} e^{\mathtt{r2.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r2.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.ou.\mu} - \mathtt{r3.ou.x}\left( t \right)}{\mathtt{r3.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r3.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r2.lm\_r3.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r3.lm\_r3.lm} \mathtt{r3.lm.x}\left( t \right) + \mathtt{w\_r3.ou\_r3.lm} \mathtt{r3.ou.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + 
\mathtt{w\_r3.lm\_r3.bm} \mathtt{r3.lm.x}\left( t \right) - 0.64 e^{\mathtt{r3.bm.ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r3.bm.ln\nu}\left( t \right)} e^{\mathtt{r3.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r3.bm.ln\tau}}} \\ 0 &= - \mathtt{r1.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r1.bm.ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{r1.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + e^{\mathtt{r1.bm.ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r2.bm.bold}\left( t \right) + 4 \left( 3.7726 - 0.6 e^{\mathtt{r2.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} + \frac{ - 0.4 e^{\mathtt{r2.bm.ln\epsilon}} e^{\mathtt{r2.bm.lnq}\left( t \right)}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} + \left( -1 + e^{\mathtt{r2.bm.ln\epsilon}} \right) e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r3.bm.bold}\left( t \right) + 4 \left( 3.7726 - 0.6 e^{\mathtt{r3.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} + \frac{ - 0.4 e^{\mathtt{r3.bm.ln\epsilon}} e^{\mathtt{r3.bm.lnq}\left( t \right)}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{r3.bm.ln\epsilon}} \right) \right) \end{align} \]
Run the simulation and plot the results
tspan = (0, 1022)
dt = 2 # 2 seconds as measurement interval for fMRI
2
Set up the simulation of the model; time is in seconds.
prob = SDEProblem(simmodel, [], tspan)
sol = solve(prob, ImplicitRKMil(), saveat=dt);
We now want to extract all the variables in our model which carry the tag "measurement". For this purpose we can use the Neuroblox function get_idx_tagged_vars. The observable quantity in our model is the BOLD signal; the variable of the BalloonModel Blox that represents the BOLD signal is tagged with "measurement". Other tags that are defined include "input", which denotes variables representing a stimulus, such as an OUBlox.
idx_m = get_idx_tagged_vars(simmodel, "measurement") # get index of bold signal
3-element Vector{Int64}:
19
20
21
Plot the BOLD signal time series:
f = Figure()
ax = Axis(f[1, 1],
title = "fMRI time series",
xlabel = "Time [ms]",
ylabel = "BOLD",
)
lines!(ax, sol, idxs=idx_m);
f

We note that the initial spike is not meaningful but a result of the equilibration of the stochastic process, thus we remove it.
dfsol = DataFrame(sol);
Add measurement noise and rescale data
data = Matrix(dfsol[:, idx_m .+ 1]); # +1 due to the additional time column in the data frame.
Add measurement noise:
data += randn(size(data))/4;
Center and rescale the data (as done in SPM):
data .-= mean(data, dims=1);
data *= 1/std(data[:])/4;
dfsol = DataFrame(data, :auto);
Add the correct names to the columns of the data frame:
_, obsvars = get_eqidx_tagged_vars(simmodel, "measurement"); # get index of equation of bold state
rename!(dfsol, Symbol.(obsvars))
512×3 DataFrame (all columns are Float64; only the first and last rows shown):

| Row | r1₊bm₊bold(t) | r2₊bm₊bold(t) | r3₊bm₊bold(t) |
|-----|---------------|---------------|---------------|
| 1 | -0.261965 | -0.261045 | 0.391148 |
| 2 | 0.303896 | -0.136823 | -0.173335 |
| 3 | -0.0937618 | -0.0542405 | -0.165682 |
| 4 | 0.1322 | 0.0475346 | -0.0341104 |
| 5 | 0.364514 | -0.358245 | -0.0113573 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 512 | 0.0646512 | -0.311447 | 0.523201 |
Estimate and plot the cross-spectral densities
We compute the cross-spectral density by fitting a multivariate autoregressive (MAR) model of order p to the data and then computing the CSD analytically from the parameters of that model.
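For reference, this is roughly the relation being used (a sketch; normalization conventions, for instance the factor of the sampling interval $\Delta t$, differ between implementations). For a MAR model with coefficient matrices $A_k$, $k = 1, \dots, p$, and innovation covariance $\Sigma$, the cross-spectral density at frequency $f$ is

\[ S(f) = \Delta t \, H(f) \, \Sigma \, H(f)^{\dagger}, \qquad H(f) = \left( I - \sum_{k=1}^{p} A_k e^{-2\pi i f k \Delta t} \right)^{-1} . \]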
p = 8;
mar = mar_ml(data, p); # maximum likelihood estimation of the MAR coefficients and noise covariance matrix
ns = size(data, 1);
freq = range(min(128, ns*dt)^-1, max(8, 2*dt)^-1, 32);
csd = mar2csd(mar, freq, dt^-1);
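As a quick sanity check (an illustrative line, assuming mar2csd returns the spectra as a frequencies × regions × regions array, which is how csd is indexed in the plots below):
size(csd)   # expected: (32, 3, 3), i.e. 32 frequency bins for each of the 3×3 region pairs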
Now plot the real part of the cross-spectra. Most of the signal power is in the lower frequencies:
fig = Figure(size=(1200, 800))
grid = fig[1, 1] = GridLayout()
for i = 1:nr
for j = 1:nr
if i == 1 && j == 1
ax = Axis(grid[i, j], xlabel="Frequency [Hz]", ylabel="real value of CSD")
else
ax = Axis(grid[i, j])
end
lines!(ax, freq, real.(csd[:, i, j]))
end
end
Label(grid[1, 1:3, Top()], "Cross-spectral densities", valign = :bottom,
font = :bold,
fontsize = 32,
padding = (0, 0, 5, 0))
fig

These cross-spectral densities are the data to which we fit our model in spectral DCM in order to infer the connection strengths.
Model Inference
We will now assemble a new model that is used for fitting the previous simulations. This procedure is similar to before, with the difference that we will define global parameters and use tags such as [tunable=false/true] to define which parameters we want to estimate. Note that parameters are tunable by default.
g = MetaDiGraph()
regions = []; # list of neural mass blocks to then connect them to each other with an adjacency matrix `A`
Note that parameters are typically defined within a Blox and thus not immediately visible to the user. Since we want some parameters to be shared across several regions, we define them outside of the regions. For this purpose we use the ModelingToolkit macro @parameters, which defines symbolic parameters for models. Note that we can set the tunable flag right away, thereby defining whether a parameter will be included in the optimization procedure or kept fixed at its predefined value.
@parameters lnκ=0.0 [tunable=false] lnϵ=0.0 [tunable=false] lnτ=0.0 [tunable=false] # lnκ: decay parameter for hemodynamics; lnϵ: ratio of intra- to extra-vascular components, lnτ: transit time scale
@parameters C=1/16 [tunable=false] # note that C=1/16 is taken from SPM12 and stabilizes the balloon model simulation. See also comment above.
\[ \begin{equation} \left[ \begin{array}{c} C \\ \end{array} \right] \end{equation} \]
We now define a similar model as above for the simulation, but instead of using an actual stimulus Blox we here add an ExternalInput, which represents a simple linear external input that is not specified any further. We simply say that our model receives some input scaled by a proportionality factor $C$. This is mostly to make sure that our results are consistent with those produced by SPM.
for i = 1:nr
region = LinearNeuralMass(;name=Symbol("r$(i)₊lm"))
push!(regions, region)
input = ExternalInput(;name=Symbol("r$(i)₊ei"))
add_edge!(g, input => region, weight=C)
# we assume an fMRI signal and model it with a BalloonModel
measurement = BalloonModel(;name=Symbol("r$(i)₊bm"), lnτ=lnτ, lnκ=lnκ, lnϵ=lnϵ)
add_edge!(g, region => measurement, weight=1.0)
end
Here we define the prior expectation values of the effective connectivity matrix we wish to infer:
A_prior = 0.01*randn(nr, nr)
A_prior -= diagm(diag(A_prior)) # remove the diagonal
3×3 Matrix{Float64}:
0.0 0.0138076 -0.000870584
-0.0110173 0.0 -0.000945151
-0.0153045 -0.00676341 0.0
Two of these connections (linear indices 3 and 7 of the matrix, i.e. the connections between regions 1 and 3 in both directions) are absent in the ground truth, thus we set their prior means to zero; their tunable flag will be set to false further below:
A_prior[3] = 0.0
A_prior[7] = 0.0
0.0
Since we want to optimize these weights we turn them into symbolic parameters, add the symbolic weights to the edges, and connect the regions:
A = []
for (i, a) in enumerate(vec(A_prior))
symb = Symbol("A$(i)")
push!(A, only(@parameters $symb = a))
end
for (i, idx) in enumerate(CartesianIndices(A_prior))
if idx[1] == idx[2]
add_edge!(g, regions[idx[1]] => regions[idx[2]], weight=-exp(A[i])/2) # -exp(A[i])/2 is the treatment of diagonal elements in SPM12; it makes diagonal dominance (see the note on the Gershgorin circle theorem below this loop) more likely, though it is not guaranteed
else
add_edge!(g, regions[idx[2]] => regions[idx[1]], weight=A[i])
end
end
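To see why negative diagonal entries favor stability (as referenced in the comment above), recall the Gershgorin circle theorem: every eigenvalue $\lambda$ of a matrix $A$ lies in at least one disc centered on a diagonal entry,

\[ |\lambda - a_{ii}| \le \sum_{j \ne i} |a_{ij}| . \]

Making the diagonal entries negative, here $-e^{A_i}/2$, shifts these discs towards the left half-plane, which favors stable linear dynamics; stability is only guaranteed if the matrix is strictly diagonally dominant with negative diagonal entries.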
We avoid simplification of the model here in order to be able to exclude some parameters from fitting later:
@named fitmodel = system_from_graph(g, simplify=false)
\[ \begin{align} \mathtt{r1.lm.jcn}\left( t \right) &= \mathtt{A4} \mathtt{r2.lm.x}\left( t \right) + \mathtt{A7} \mathtt{r3.lm.x}\left( t \right) + C \mathtt{r1.ei.u}\left( t \right) - \frac{1}{2} e^{\mathtt{A1}} \mathtt{r1.lm.x}\left( t \right) \\ \mathtt{r1.bm.jcn}\left( t \right) &= \mathtt{w\_r1.lm\_r1.bm} \mathtt{r1.lm.x}\left( t \right) \\ \mathtt{r2.lm.jcn}\left( t \right) &= \mathtt{A2} \mathtt{r1.lm.x}\left( t \right) + \mathtt{A8} \mathtt{r3.lm.x}\left( t \right) + C \mathtt{r2.ei.u}\left( t \right) - \frac{1}{2} \mathtt{r2.lm.x}\left( t \right) e^{\mathtt{A5}} \\ \mathtt{r2.bm.jcn}\left( t \right) &= \mathtt{w\_r2.lm\_r2.bm} \mathtt{r2.lm.x}\left( t \right) \\ \mathtt{r3.lm.jcn}\left( t \right) &= \mathtt{A3} \mathtt{r1.lm.x}\left( t \right) + \mathtt{A6} \mathtt{r2.lm.x}\left( t \right) + C \mathtt{r3.ei.u}\left( t \right) - \frac{1}{2} e^{\mathtt{A9}} \mathtt{r3.lm.x}\left( t \right) \\ \mathtt{r3.bm.jcn}\left( t \right) &= \mathtt{w\_r3.lm\_r3.bm} \mathtt{r3.lm.x}\left( t \right) \\ \mathtt{r1.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.bm.jcn}\left( t \right) - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r1.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r1.bm.bold}\left( t \right) &= 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + e^{\mathtt{ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ \mathtt{r2.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r2.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{r2.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r2.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} 
\right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r2.bm.bold}\left( t \right) &= 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r2.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \\ \mathtt{r3.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r3.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + \mathtt{r3.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r3.bm.bold}\left( t \right) &= 4 \left( 3.7726 - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + \frac{ - 0.4 e^{\mathtt{r3.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \end{align} \]
With the function changetune we can provide a dictionary of parameters whose tunable flag should be changed, for instance set to false to exclude them from the optimization procedure. Here we fix the effective connections that are set to zero in the simulation as well as the self-connections:
untune = Dict(A[3] => false, A[7] => false, A[1] => false, A[5] => false, A[9] => false)
fitmodel = changetune(fitmodel, untune) # 3 and 7 are not present in the simulation model
fitmodel = structural_simplify(fitmodel) # and now simplify the equations
\[ \begin{align} \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.bm.jcn}\left( t \right) - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r1.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r2.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{r2.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r2.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r3.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + \mathtt{r3.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ 0 &= - \mathtt{r1.ei.u}\left( t \right) \\ 0 &= - \mathtt{r1.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + 
e^{\mathtt{ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r2.ei.u}\left( t \right) \\ 0 &= - \mathtt{r2.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r2.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \\ 0 &= - \mathtt{r3.ei.u}\left( t \right) \\ 0 &= - \mathtt{r3.bm.bold}\left( t \right) + 4 \left( 3.7726 - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + \frac{ - 0.4 e^{\mathtt{r3.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \end{align} \]
Setup spectral DCM
max_iter = 128; # maximum number of iterations
# attribute initial conditions to states
sts, _ = get_dynamic_states(fitmodel);
The following step is needed if the model's Jacobian would give degenerate eigenvalues when expanded around the fixed point 0 (which is the default expansion). We simply add small random values to avoid this degeneracy:
perturbedfp = Dict(sts .=> abs.(10^-10*rand(length(sts)))) # slight noise to avoid issues with Automatic Differentiation.
Dict{SymbolicUtils.BasicSymbolic{Real}, Float64} with 15 entries:
r3₊lm₊x(t) => 1.10897e-11
r3₊bm₊lnq(t) => 8.56768e-11
r2₊bm₊lnq(t) => 4.80233e-11
r1₊bm₊lnν(t) => 5.27927e-11
r2₊bm₊s(t) => 6.51909e-11
r3₊bm₊lnu(t) => 6.74234e-11
r2₊bm₊lnν(t) => 1.89427e-11
r1₊bm₊s(t) => 3.51678e-11
r3₊bm₊lnν(t) => 3.39346e-11
r1₊lm₊x(t) => 2.54735e-11
r1₊bm₊lnu(t) => 5.47787e-12
r3₊bm₊s(t) => 6.77173e-11
r2₊lm₊x(t) => 6.93896e-11
r2₊bm₊lnu(t) => 5.51802e-11
r1₊bm₊lnq(t) => 7.86157e-11
For convenience we can use the default prior function, which provides standardized prior values as given in SPM:
pmean, pcovariance, indices = defaultprior(fitmodel, nr)
priors = (μθ_pr = pmean,
Σθ_pr = pcovariance
);
Set up the hyperparameter priors as well:
hyperpriors = (Πλ_pr = 128.0*ones(1, 1), # prior metaparameter precision, needs to be a matrix
μλ_pr = [8.0] # prior metaparameter mean, needs to be a vector
);
To compute the cross spectral densities we need to provide the sampling interval of the time series, the frequency axis and the order of the multivariate autoregressive model:
csdsetup = (mar_order = p, freq = freq, dt = dt);
Prepare the DCM. This function will set up the computation of the Dynamic Causal Model. The last parameter specifies that we are using fMRI time series as opposed to LFPs.
(state, setup) = setup_sDCM(dfsol, fitmodel, perturbedfp, csdsetup, priors, hyperpriors, indices, pmean, "fMRI");
We are now ready to run the optimization procedure! That is, we loop over run_sDCM_iteration!, which will alter state after each optimization iteration. It essentially computes the Variational Laplace estimate of the expectation and variance of the tunable parameters.
for iter in 1:max_iter
state.iter = iter
run_sDCM_iteration!(state, setup)
print("iteration: ", iter, " - F:", state.F[end] - state.F[2], " - dF predicted:", state.dF[end], "\n")
if iter >= 4
criterion = state.dF[end-3:end] .< setup.tolerance
if all(criterion)
print("convergence\n")
break
end
end
end
iteration: 1 - F:0.0 - dF predicted:19.232273529957006
iteration: 2 - F:19.37482670788961 - dF predicted:30.707068879245398
iteration: 3 - F:50.655424914953414 - dF predicted:48.64621371287047
iteration: 4 - F:100.09875731039517 - dF predicted:66.50487538597852
iteration: 5 - F:167.57998501972884 - dF predicted:87.59541818790805
iteration: 6 - F:252.20574296268023 - dF predicted:94.60009401176076
iteration: 7 - F:338.6111215564051 - dF predicted:72.59336078511157
iteration: 8 - F:399.4638606608544 - dF predicted:40.15286895308655
iteration: 9 - F:431.506356704998 - dF predicted:21.890298580388997
iteration: 10 - F:449.65652739463724 - dF predicted:15.845258956523619
iteration: 11 - F:463.85377515143136 - dF predicted:14.2108070697794
iteration: 12 - F:476.50761404107527 - dF predicted:9.56836412325421
iteration: 13 - F:484.1507134033777 - dF predicted:3.3615485690947784
iteration: 14 - F:486.64570746791253 - dF predicted:0.8751741796037201
iteration: 15 - F:487.26506074517033 - dF predicted:0.16371205338067757
iteration: 16 - F:487.3729718833739 - dF predicted:0.019061509965934566
iteration: 17 - F:487.38682582015736 - dF predicted:0.003488952407441315
iteration: 18 - F:487.3886289097669 - dF predicted:0.0005617855863777341
iteration: 19 - F:487.3893358615782 - dF predicted:9.790373570204723e-5
convergence
Note that the output F is the free energy at each iteration step and dF is the predicted change of free energy at each step, which approximates the actual free energy change and is used as the stopping criterion by requiring that it stays below the tolerance level for four consecutive iterations.
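For reference, a generic form of this free energy (sign conventions vary across the literature; here it is maximized) is

\[ F = \mathbb{E}_{q(\theta)}\left[ \ln p(y \mid \theta) \right] - \mathrm{KL}\left[ q(\theta) \,\|\, p(\theta) \right], \]

that is, accuracy minus complexity, which is a lower bound on the log model evidence $\ln p(y)$.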
Plot Results
Free energy is the objective function of the optimization scheme of spectral DCM. Note that in the machine learning literature it is called the Evidence Lower Bound (ELBO). Plot the free energy evolution over optimization iterations to see how the algorithm converges towards a (potentially local) optimum:
freeenergy(state)

Plot the estimated posterior of the effective connectivity and compare it to the true parameter values. Bar heights are the posterior means and error bars are the standard deviations of the posterior.
ecbarplot(state, setup, A_true)

References
[1] Novelli, Leonardo, Karl Friston, and Adeel Razi. “Spectral Dynamic Causal Modeling: A Didactic Introduction and Its Relationship with Functional Connectivity.” Network Neuroscience 8, no. 1 (April 1, 2024): 178–202.
[2] Hofmann, David, Anthony G. Chesebro, Chris Rackauckas, Lilianne R. Mujica-Parodi, Karl J. Friston, Alan Edelman, and Helmut H. Strey. “Leveraging Julia’s Automated Differentiation and Symbolic Computation to Increase Spectral DCM Flexibility and Speed.” bioRxiv: The Preprint Server for Biology, 2023.
[3] Friston, Karl J., Joshua Kahan, Bharat Biswal, and Adeel Razi. “A DCM for Resting State fMRI.” NeuroImage 94 (July 2014): 396–407.
This page was generated using Literate.jl.