Solving Inverse Problems with Spectral Dynamic Causal Modeling

Introduction

Neuroblox provides a comprehensive environment for simulations, as we have explored previously, but its functionality doesn't stop there. We will now turn our attention to a different kind of problem: inferring model parameters from time series, that is, solving inverse problems. The method of choice is one of the most widely used in imaging neuroscience, spectral Dynamic Causal Modeling (spDCM)[1,2]. In this tutorial we will show how to perform a spDCM analysis on simulated data, roughly reproducing the procedure of the SPM script DEM_demo_induced_fMRI.m in Neuroblox. This work was also presented in Hofmann et al.[2]

In this tutorial we will define a circuit of three linear neuronal mass models, each driven by an Ornstein-Uhlenbeck process. We will model fMRI data with a balloon model and the BOLD signal observation on top. After simulating this simple model we will use spectral Dynamic Causal Modeling to infer some of the model parameters from the simulated time series.
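
For reference, the Ornstein-Uhlenbeck input used below has the standard form, written here with mean μ, time scale τ and noise amplitude σ as they appear in the OUBlox constructor (the drift term matches the model equations shown further below; the exact parametrization of the noise term inside Neuroblox is an assumption):

\[ \begin{equation} \mathrm{d}x_{t} = \frac{\mu - x_{t}}{\tau} \mathrm{d}t + \sigma \mathrm{d}W_{t} \end{equation} \]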

Workflow illustration

A brief outline of the procedure we will pursue:

  • define the graph and add blocks -> sections A, B and C in the figure
  • simulate the model (alternatively, actual data could be used) -> section D in the figure
  • compute the cross spectral density
  • setup the DCM
  • estimate parameters
  • plot the results
using Neuroblox
using LinearAlgebra
using StochasticDiffEq
using DataFrames
using OrderedCollections
using CairoMakie
using ModelingToolkit
using Random
using StatsBase

Model simulation

Define the model

We will define a model of 3 regions. The first step is to define a graph. To this graph we add three linear neuronal mass models, which constitute the (hidden) neuronal dynamics and form three nodes of the graph. Next we need some input that stimulates the activity; we use simple Ornstein-Uhlenbeck blocks to create stochastic inputs, one per region. Since we want to simulate fMRI signals, we also add a BalloonModel per region. Note that the Ornstein-Uhlenbeck block feeds into the linear neural mass, which in turn feeds into the BalloonModel blox; this is reflected in the way we define the edges.

Random.seed!(17)   # set seed for reproducibility

nr = 3             # number of regions
g = MetaDiGraph()
regions = [];      # list of neural mass blocks to then connect them to each other with an adjacency matrix `A_true`

Now we add the different blocks to each region and connect the blocks within each region. For convenience we use a for loop, since the same types of blocks repeat over regions, but you could also build the system the same way as shown in previous tutorials:

for i = 1:nr
    region = LinearNeuralMass(;name=Symbol("r$(i)₊lm"))
    push!(regions, region)          # store neural mass model in list. We need this list below. If you haven't seen the Julia command `push!` before [see here](http://jlhub.com/julia/manual/en/function/push-exclamation).

    # add Ornstein-Uhlenbeck block as noisy input to the current region
    input = OUBlox(;name=Symbol("r$(i)₊ou"), σ=0.2, τ=2)
    add_edge!(g, input => region, weight=1/16)   # Note that 1/16 is taken from SPM12, this stabilizes the balloon model simulation. Alternatively the noise of the Ornstein-Uhlenbeck block or the weight of the edge connecting neuronal activity and balloon model could be reduced to guarantee numerical stability.

    # simulate fMRI signal with BalloonModel which includes the BOLD signal on top of the balloon model dynamics
    measurement = BalloonModel(;name=Symbol("r$(i)₊bm"))
    add_edge!(g, region => measurement, weight=1.0)
end

Next we define the between-region connectivity matrix and connect the regions; we use the same matrix as defined in [3]:

A_true = [[-0.5 -0.2 0]; [0.4 -0.5 -0.3]; [0 0.2 -0.5]]
3×3 Matrix{Float64}:
 -0.5  -0.2   0.0
  0.4  -0.5  -0.3
  0.0   0.2  -0.5

Note that in SPM, DCM connection matrices use columns to denote outputs from and rows to denote inputs to a particular region. This is different from the usual Neuroblox definition of connection matrices, thus we flip the indices in what follows:

for idx in CartesianIndices(A_true)
    add_edge!(g, regions[idx[1]] => regions[idx[2]], weight=A_true[idx[2], idx[1]])   # Note the definition of columns as outputs and rows as inputs. For consistency with SPM we keep this notation.
end
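
To make this convention concrete, consider one off-diagonal entry (a small check for illustration, not part of the original script):

idx = CartesianIndex(1, 2)     # the loop above uses regions[idx[1]] as source and regions[idx[2]] as target
A_true[idx[2], idx[1]]         # = 0.4, i.e. the input to region 2 from region 1 under the SPM convention
# hence the loop adds the edge regions[1] => regions[2] with weight 0.4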

Finally, we compose the simulation model:

@named simmodel = system_from_graph(g)

\[ \begin{align} \frac{\mathrm{d} \mathtt{r1.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.ou.\mu} - \mathtt{r1.ou.x}\left( t \right)}{\mathtt{r1.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r1.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r1.ou\_r1.lm} \mathtt{r1.ou.x}\left( t \right) + \mathtt{w\_r2.lm\_r1.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r3.lm\_r1.lm} \mathtt{r3.lm.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) + \mathtt{w\_r1.lm\_r1.bm} \mathtt{r1.lm.x}\left( t \right) - 0.64 \mathtt{r1.bm.s}\left( t \right) e^{\mathtt{r1.bm.ln\kappa}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r1.bm.ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r1.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.ou.\mu} - \mathtt{r2.ou.x}\left( t \right)}{\mathtt{r2.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r2.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r2.lm\_r2.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r2.ou\_r2.lm} \mathtt{r2.ou.x}\left( t \right) + \mathtt{w\_r3.lm\_r2.lm} \mathtt{r3.lm.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{w\_r2.lm\_r2.bm} \mathtt{r2.lm.x}\left( t \right) - 0.64 \mathtt{r2.bm.s}\left( t \right) e^{\mathtt{r2.bm.ln\kappa}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r2.bm.ln\nu}\left( t \right)} e^{\mathtt{r2.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r2.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.ou.x}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.ou.\mu} - \mathtt{r3.ou.x}\left( t \right)}{\mathtt{r3.ou.\tau}} \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{w\_r1.lm\_r3.lm} \mathtt{r1.lm.x}\left( t \right) + \mathtt{w\_r2.lm\_r3.lm} \mathtt{r2.lm.x}\left( t \right) + \mathtt{w\_r3.lm\_r3.lm} \mathtt{r3.lm.x}\left( t \right) + \mathtt{w\_r3.ou\_r3.lm} \mathtt{r3.ou.x}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + 
\mathtt{w\_r3.lm\_r3.bm} \mathtt{r3.lm.x}\left( t \right) - 0.64 e^{\mathtt{r3.bm.ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{r3.bm.ln\nu}\left( t \right)} e^{\mathtt{r3.bm.ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{r3.bm.ln\tau}}} \\ 0 &= - \mathtt{r1.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r1.bm.ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{r1.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + e^{\mathtt{r1.bm.ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r2.bm.bold}\left( t \right) + 4 \left( 3.7726 - 0.6 e^{\mathtt{r2.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} + \frac{ - 0.4 e^{\mathtt{r2.bm.ln\epsilon}} e^{\mathtt{r2.bm.lnq}\left( t \right)}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} + \left( -1 + e^{\mathtt{r2.bm.ln\epsilon}} \right) e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r3.bm.bold}\left( t \right) + 4 \left( 3.7726 - 0.6 e^{\mathtt{r3.bm.ln\epsilon}} - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} + \frac{ - 0.4 e^{\mathtt{r3.bm.ln\epsilon}} e^{\mathtt{r3.bm.lnq}\left( t \right)}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{r3.bm.ln\epsilon}} \right) \right) \end{align} \]

Run the simulation and plot the results

tspan = (0, 1022)
dt = 2   # 2 seconds as measurement interval for fMRI
2

Set up the simulation of the model; time is in seconds:

prob = SDEProblem(simmodel, [], tspan)
sol = solve(prob, ImplicitRKMil(), saveat=dt);

We now want to extract all the variables in our model which carry the tag "measurement". For this purpose we can use the Neuroblox function get_idx_tagged_vars. The observable quantity in our model is the BOLD signal; the variable of the BalloonModel Blox that represents the BOLD signal carries the "measurement" tag. Another defined tag is "input", which denotes variables representing a stimulus, like for instance an OUBlox.

idx_m = get_idx_tagged_vars(simmodel, "measurement")    # get index of bold signal
3-element Vector{Int64}:
 19
 20
 21
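
The same function can be used with the other tags; for instance (an illustrative call, assuming the "input" tag is attached to the Ornstein-Uhlenbeck states as described above):

idx_u = get_idx_tagged_vars(simmodel, "input")   # indices of the stimulus (OU) variables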

Plot the BOLD signal time series:

f = Figure()
ax = Axis(f[1, 1],
    title = "fMRI time series",
    xlabel = "Time [ms]",
    ylabel = "BOLD",
)
lines!(ax, sol, idxs=idx_m);
f
(Figure: fMRI time series — BOLD signal of the three regions)

We note that the initial spike is not meaningful; it is a result of the equilibration of the stochastic process, so we disregard it. We store the solution in a DataFrame:

dfsol = DataFrame(sol);
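
If one wanted to discard the transient explicitly, it could be done along the following lines (the cutoff is an assumed value for illustration and not part of the original script; the untrimmed dfsol is used below):

burnin = 10                                 # number of initial samples to discard (assumed value)
dfsol_trimmed = dfsol[(burnin+1):end, :];   # copy of the data frame without the equilibration transient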

Add measurement noise and rescale data

data = Matrix(dfsol[:, idx_m .+ 1]);    # +1 due to the additional time-dimension in the data frame.

Add measurement noise:

data += randn(size(data))/4;

Center and rescale the data (as done in SPM):

data .-= mean(data, dims=1);
data *= 1/std(data[:])/4;
dfsol = DataFrame(data, :auto);

Add the correct names to the columns of the data frame:

_, obsvars = get_eqidx_tagged_vars(simmodel, "measurement");  # get index of equation of bold state
rename!(dfsol, Symbol.(obsvars))
512×3 DataFrame
 Row │ r1₊bm₊bold(t)  r2₊bm₊bold(t)  r3₊bm₊bold(t)
     │ Float64        Float64        Float64
─────┼─────────────────────────────────────────────
   1 │    -0.261965     -0.261045      0.391148
   2 │     0.303896     -0.136823     -0.173335
   3 │    -0.0937618    -0.0542405    -0.165682
   4 │     0.1322        0.0475346    -0.0341104
   5 │     0.364514     -0.358245     -0.0113573
   ⋮  │      ⋮              ⋮              ⋮
 512 │     0.0646512    -0.311447      0.523201
                                506 rows omitted

Estimate and plot the cross-spectral densities

We compute the cross-spectral density by fitting a multivariate autoregressive (MAR) model of order p and then computing the CSD analytically from the MAR parameters:
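
For orientation, this is the standard mapping from a MAR(p) model with coefficient matrices A_k and innovation covariance Σ to its cross-spectral density (the overall scaling by the sampling interval Δt is handled inside mar2csd and omitted here):

\[ \begin{align} x_t &= \sum_{k=1}^{p} A_k x_{t-k} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma) \\ S(f) &= H(f) \Sigma H(f)^{\dagger}, \qquad H(f) = \left( I - \sum_{k=1}^{p} A_k e^{-2\pi i f k \Delta t} \right)^{-1} \end{align} \]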

p = 8;
mar = mar_ml(data, p);   # maximum likelihood estimation of the MAR coefficients and noise covariance matrix
ns = size(data, 1);
freq = range(min(128, ns*dt)^-1, max(8, 2*dt)^-1, 32);
csd = mar2csd(mar, freq, dt^-1);
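
With ns = 512 samples and dt = 2 s, this frequency axis runs from 1/min(128, ns*dt) = 1/128 ≈ 0.0078 Hz up to 1/max(8, 2*dt) = 1/8 = 0.125 Hz in 32 steps, covering the slow fluctuations typically analysed in resting-state fMRI.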

Now plot the real part of the cross-spectra. Most of the signal power is in the lower frequencies:

fig = Figure(size=(1200, 800))
grid = fig[1, 1] = GridLayout()
for i = 1:nr
    for j = 1:nr
        if i == 1 && j == 1
            ax = Axis(grid[i, j], xlabel="Frequency [Hz]", ylabel="real value of CSD")
        else
            ax = Axis(grid[i, j])
        end
        lines!(ax, freq, real.(csd[:, i, j]))
    end
end
Label(grid[1, 1:3, Top()], "Cross-spectral densities", valign = :bottom,
    font = :bold,
    fontsize = 32,
    padding = (0, 0, 5, 0))
fig
(Figure: cross-spectral densities, real part, 3×3 grid over region pairs)

These cross-spectral densities are the data to which we fit our model in spectral DCM in order to infer the connection strengths.

Model Inference

We will now assemble a new model that is used for fitting the previous simulation. The procedure is similar to before, with the difference that we will define global parameters and use tags such as [tunable=false/true] to specify which parameters we want to estimate. Note that parameters are tunable by default.

g = MetaDiGraph()
regions = [];   # list of neural mass blocks to then connect them to each other with an adjacency matrix `A`

Note that parameters are typically defined within a Blox and are thus not immediately visible to the user. Since we want some parameters to be shared across several regions, we define them outside of the regions. For this purpose we use the ModelingToolkit macro @parameters, which defines symbolic parameters for models. Note that we can set the tunable flag right away, thereby specifying whether a parameter will be included in the optimization procedure or kept fixed at its predefined value.

@parameters lnκ=0.0 [tunable=false] lnϵ=0.0 [tunable=false] lnτ=0.0 [tunable=false]   # lnκ: decay parameter for hemodynamics; lnϵ: ratio of intra- to extra-vascular components, lnτ: transit time scale
@parameters C=1/16 [tunable=false]   # note that C=1/16 is taken from SPM12 and stabilizes the balloon model simulation. See also comment above.

\[ \begin{equation} \left[ \begin{array}{c} C \\ \end{array} \right] \end{equation} \]

We now define a model similar to the one used for the simulation, but instead of using an actual stimulus Blox we add an ExternalInput, which represents a simple linear external input that is not specified any further. We simply state that our model receives some input with a proportionality factor $C$. This is mostly to make sure that our results are consistent with those produced by SPM:

for i = 1:nr
    region = LinearNeuralMass(;name=Symbol("r$(i)₊lm"))
    push!(regions, region)
    input = ExternalInput(;name=Symbol("r$(i)₊ei"))
    add_edge!(g, input => region, weight=C)

    # we assume fMRI signals and model them with a BalloonModel
    measurement = BalloonModel(;name=Symbol("r$(i)₊bm"), lnτ=lnτ, lnκ=lnκ, lnϵ=lnϵ)
    add_edge!(g, region => measurement, weight=1.0)
end

Here we define the prior expectation values of the effective connectivity matrix we wish to infer:

A_prior = 0.01*randn(nr, nr)
A_prior -= diagm(diag(A_prior))    # remove the diagonal
3×3 Matrix{Float64}:
  0.0         0.0138076   -0.000870584
 -0.0110173   0.0         -0.000945151
 -0.0153045  -0.00676341   0.0

These two connections are not present in the ground truth, thus we set their prior means to zero; their tunable flags will be set to false further below:

A_prior[3] = 0.0
A_prior[7] = 0.0
0.0

Since we want to optimize these weights we turn them into symbolic parameters, then add the symbolic weights to the edges and connect the regions:

A = []
for (i, a) in enumerate(vec(A_prior))
    symb = Symbol("A$(i)")
    push!(A, only(@parameters $symb = a))
end

for (i, idx) in enumerate(CartesianIndices(A_prior))
    if idx[1] == idx[2]
        add_edge!(g, regions[idx[1]] => regions[idx[2]], weight=-exp(A[i])/2)  # -exp(A[i])/2: treatment of diagonal elements in SPM12 to make diagonal dominance (see Gershgorin theorem) more likely, though it is not guaranteed
    else
        add_edge!(g, regions[idx[2]] => regions[idx[1]], weight=A[i])
    end
end
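
Note that with the prior mean A_ii = 0, the diagonal weight evaluates to -exp(0)/2 = -0.5, which matches the self-connections of -0.5 used in A_true above.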

Avoid simplification of the model in order to be able to exclude some parameters from fitting:

@named fitmodel = system_from_graph(g, simplify=false)

\[ \begin{align} \mathtt{r1.lm.jcn}\left( t \right) &= \mathtt{A4} \mathtt{r2.lm.x}\left( t \right) + \mathtt{A7} \mathtt{r3.lm.x}\left( t \right) + C \mathtt{r1.ei.u}\left( t \right) - \frac{1}{2} e^{\mathtt{A1}} \mathtt{r1.lm.x}\left( t \right) \\ \mathtt{r1.bm.jcn}\left( t \right) &= \mathtt{w\_r1.lm\_r1.bm} \mathtt{r1.lm.x}\left( t \right) \\ \mathtt{r2.lm.jcn}\left( t \right) &= \mathtt{A2} \mathtt{r1.lm.x}\left( t \right) + \mathtt{A8} \mathtt{r3.lm.x}\left( t \right) + C \mathtt{r2.ei.u}\left( t \right) - \frac{1}{2} \mathtt{r2.lm.x}\left( t \right) e^{\mathtt{A5}} \\ \mathtt{r2.bm.jcn}\left( t \right) &= \mathtt{w\_r2.lm\_r2.bm} \mathtt{r2.lm.x}\left( t \right) \\ \mathtt{r3.lm.jcn}\left( t \right) &= \mathtt{A3} \mathtt{r1.lm.x}\left( t \right) + \mathtt{A6} \mathtt{r2.lm.x}\left( t \right) + C \mathtt{r3.ei.u}\left( t \right) - \frac{1}{2} e^{\mathtt{A9}} \mathtt{r3.lm.x}\left( t \right) \\ \mathtt{r3.bm.jcn}\left( t \right) &= \mathtt{w\_r3.lm\_r3.bm} \mathtt{r3.lm.x}\left( t \right) \\ \mathtt{r1.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.bm.jcn}\left( t \right) - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r1.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r1.bm.bold}\left( t \right) &= 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + e^{\mathtt{ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ \mathtt{r2.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r2.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{r2.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r2.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} 
\right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r2.bm.bold}\left( t \right) &= 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r2.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \\ \mathtt{r3.ei.u}\left( t \right) &= 0 \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r3.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + \mathtt{r3.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \mathtt{r3.bm.bold}\left( t \right) &= 4 \left( 3.7726 - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + \frac{ - 0.4 e^{\mathtt{r3.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \end{align} \]

With the function changetune we can provide a dictionary of parameters whose tunable flag should be changed, for instance set to false to exclude them from the optimization procedure. Here these are the effective connections that are set to zero in the simulation as well as the self-connections:

untune = Dict(A[3] => false, A[7] => false, A[1] => false, A[5] => false, A[9] => false)
fitmodel = changetune(fitmodel, untune)           # 3 and 7 are not present in the simulation model
fitmodel = structural_simplify(fitmodel)          # and now simplify the equations

\[ \begin{align} \frac{\mathrm{d} \mathtt{r1.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.s}\left( t \right)}{\mathrm{d}t} &= \mathtt{r1.bm.jcn}\left( t \right) - 0.32 \left( -1 + e^{\mathtt{r1.bm.lnu}\left( t \right)} \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r1.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r1.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r1.bm.s}\left( t \right)}{e^{\mathtt{r1.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r1.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r1.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 e^{\mathtt{r1.bm.lnu}\left( t \right)} \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r1.bm.lnu}\left( t \right)}}} \right)}{e^{\mathtt{r1.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r2.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r2.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r2.bm.lnu}\left( t \right)} \right) + \mathtt{r2.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r2.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r2.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r2.bm.s}\left( t \right)}{e^{\mathtt{r2.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r2.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r2.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r2.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r2.bm.lnu}\left( t \right)}}{e^{\mathtt{r2.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ \frac{\mathrm{d} \mathtt{r3.lm.x}\left( t \right)}{\mathrm{d}t} &= \mathtt{r3.lm.jcn}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.s}\left( t \right)}{\mathrm{d}t} &= - 0.32 \left( -1 + e^{\mathtt{r3.bm.lnu}\left( t \right)} \right) + \mathtt{r3.bm.jcn}\left( t \right) - 0.64 e^{\mathtt{ln\kappa}} \mathtt{r3.bm.s}\left( t \right) \\ \frac{\mathrm{d} \mathtt{r3.bm.lnu}\left( t \right)}{\mathrm{d}t} &= \frac{\mathtt{r3.bm.s}\left( t \right)}{e^{\mathtt{r3.bm.lnu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.ln\nu}\left( t \right)}{\mathrm{d}t} &= \frac{e^{\mathtt{r3.bm.lnu}\left( t \right)} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{3.125}}{2 e^{\mathtt{ln\tau}} e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} \\ \frac{\mathrm{d} \mathtt{r3.bm.lnq}\left( t \right)}{\mathrm{d}t} &= \frac{\frac{2.5 \left( 1 - 0.6^{\frac{1}{e^{\mathtt{r3.bm.lnu}\left( t \right)}}} \right) e^{\mathtt{r3.bm.lnu}\left( t \right)}}{e^{\mathtt{r3.bm.lnq}\left( t \right)}} - \left( e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \right)^{2.125}}{2 e^{\mathtt{ln\tau}}} \\ 0 &= - \mathtt{r1.ei.u}\left( t \right) \\ 0 &= - \mathtt{r1.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{ln\epsilon}} e^{\mathtt{r1.bm.lnq}\left( t \right)}}{e^{\mathtt{r1.bm.ln\nu}\left( t \right)}} - 0.6 e^{\mathtt{ln\epsilon}} - 2.7726 e^{\mathtt{r1.bm.lnq}\left( t \right)} + \left( -1 + 
e^{\mathtt{ln\epsilon}} \right) e^{\mathtt{r1.bm.ln\nu}\left( t \right)} \right) \\ 0 &= - \mathtt{r2.ei.u}\left( t \right) \\ 0 &= - \mathtt{r2.bm.bold}\left( t \right) + 4 \left( 3.7726 + \frac{ - 0.4 e^{\mathtt{r2.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r2.bm.ln\nu}\left( t \right)}} - 2.7726 e^{\mathtt{r2.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + e^{\mathtt{r2.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \\ 0 &= - \mathtt{r3.ei.u}\left( t \right) \\ 0 &= - \mathtt{r3.bm.bold}\left( t \right) + 4 \left( 3.7726 - 2.7726 e^{\mathtt{r3.bm.lnq}\left( t \right)} - 0.6 e^{\mathtt{ln\epsilon}} + \frac{ - 0.4 e^{\mathtt{r3.bm.lnq}\left( t \right)} e^{\mathtt{ln\epsilon}}}{e^{\mathtt{r3.bm.ln\nu}\left( t \right)}} + e^{\mathtt{r3.bm.ln\nu}\left( t \right)} \left( -1 + e^{\mathtt{ln\epsilon}} \right) \right) \end{align} \]

Setup spectral DCM

max_iter = 128;            # maximum number of iterations
# attribute initial conditions to states
sts, _ = get_dynamic_states(fitmodel);

The following step is needed if the model's Jacobian would give degenerate eigenvalues when expanded around the fixed point 0 (which is the default expansion). We simply add small random values to avoid this degeneracy:

perturbedfp = Dict(sts .=> abs.(10^-10*rand(length(sts))))     # slight noise to avoid issues with Automatic Differentiation.
Dict{SymbolicUtils.BasicSymbolic{Real}, Float64} with 15 entries:
  r3₊lm₊x(t)   => 1.10897e-11
  r3₊bm₊lnq(t) => 8.56768e-11
  r2₊bm₊lnq(t) => 4.80233e-11
  r1₊bm₊lnν(t) => 5.27927e-11
  r2₊bm₊s(t)   => 6.51909e-11
  r3₊bm₊lnu(t) => 6.74234e-11
  r2₊bm₊lnν(t) => 1.89427e-11
  r1₊bm₊s(t)   => 3.51678e-11
  r3₊bm₊lnν(t) => 3.39346e-11
  r1₊lm₊x(t)   => 2.54735e-11
  r1₊bm₊lnu(t) => 5.47787e-12
  r3₊bm₊s(t)   => 6.77173e-11
  r2₊lm₊x(t)   => 6.93896e-11
  r2₊bm₊lnu(t) => 5.51802e-11
  r1₊bm₊lnq(t) => 7.86157e-11

For convenience we can use the default prior function to obtain standardized prior values as given in SPM:

pmean, pcovariance, indices = defaultprior(fitmodel, nr)

priors = (μθ_pr = pmean,
          Σθ_pr = pcovariance
         );

Set up the hyperparameter priors as well:

hyperpriors = (Πλ_pr = 128.0*ones(1, 1),   # prior metaparameter precision, needs to be a matrix
               μλ_pr = [8.0]               # prior metaparameter mean, needs to be a vector
              );

To compute the cross spectral densities we need to provide the sampling interval of the time series, the frequency axis and the order of the multivariate autoregressive model:

csdsetup = (mar_order = p, freq = freq, dt = dt);

Prepare the DCM. This function sets up the computation of the Dynamic Causal Model. The last parameter specifies that we are using fMRI time series as opposed to LFPs.

(state, setup) = setup_sDCM(dfsol, fitmodel, perturbedfp, csdsetup, priors, hyperpriors, indices, pmean, "fMRI");

We are now ready to run the optimization procedure! :) That is, we loop over run_sDCM_iteration!, which alters state after each optimization iteration. It essentially computes the Variational Laplace estimate of the expectation and variance of the tunable parameters.

for iter in 1:max_iter
    state.iter = iter
    run_sDCM_iteration!(state, setup)
    print("iteration: ", iter, " - F:", state.F[end] - state.F[2], " - dF predicted:", state.dF[end], "\n")
    if iter >= 4
        criterion = state.dF[end-3:end] .< setup.tolerance
        if all(criterion)
            print("convergence\n")
            break
        end
    end
end
iteration: 1 - F:0.0 - dF predicted:19.232273529957006
iteration: 2 - F:19.37482670788961 - dF predicted:30.707068879245398
iteration: 3 - F:50.655424914953414 - dF predicted:48.64621371287047
iteration: 4 - F:100.09875731039517 - dF predicted:66.50487538597852
iteration: 5 - F:167.57998501972884 - dF predicted:87.59541818790805
iteration: 6 - F:252.20574296268023 - dF predicted:94.60009401176076
iteration: 7 - F:338.6111215564051 - dF predicted:72.59336078511157
iteration: 8 - F:399.4638606608544 - dF predicted:40.15286895308655
iteration: 9 - F:431.506356704998 - dF predicted:21.890298580388997
iteration: 10 - F:449.65652739463724 - dF predicted:15.845258956523619
iteration: 11 - F:463.85377515143136 - dF predicted:14.2108070697794
iteration: 12 - F:476.50761404107527 - dF predicted:9.56836412325421
iteration: 13 - F:484.1507134033777 - dF predicted:3.3615485690947784
iteration: 14 - F:486.64570746791253 - dF predicted:0.8751741796037201
iteration: 15 - F:487.26506074517033 - dF predicted:0.16371205338067757
iteration: 16 - F:487.3729718833739 - dF predicted:0.019061509965934566
iteration: 17 - F:487.38682582015736 - dF predicted:0.003488952407441315
iteration: 18 - F:487.3886289097669 - dF predicted:0.0005617855863777341
iteration: 19 - F:487.3893358615782 - dF predicted:9.790373570204723e-5
convergence

Note that the output F is the free energy at each iteration step and dF is the predicted change of free energy at each step, which approximates the actual free energy change and is used as the stopping criterion by requiring that it remains below the tolerance level for 4 consecutive iterations.

Plot Results

The free energy is the objective function of the optimization scheme of spectral DCM. Note that in the machine learning literature it is called the Evidence Lower Bound (ELBO). Plot the evolution of the free energy over optimization iterations to see how the algorithm converges towards a (potentially local) optimum:

freeenergy(state)
(Figure: free energy over optimization iterations)

Plot the estimated posterior of the effective connectivity and compare it to the true parameter values. Bar heights are the posterior means and error bars are the posterior standard deviations.

ecbarplot(state, setup, A_true)
(Figure: posterior effective connectivity estimates compared to the true values in A_true)

References

[1] Novelli, Leonardo, Karl Friston, and Adeel Razi. “Spectral Dynamic Causal Modeling: A Didactic Introduction and Its Relationship with Functional Connectivity.” Network Neuroscience 8, no. 1 (April 1, 2024): 178–202.
[2] Hofmann, David, Anthony G. Chesebro, Chris Rackauckas, Lilianne R. Mujica-Parodi, Karl J. Friston, Alan Edelman, and Helmut H. Strey. “Leveraging Julia’s Automated Differentiation and Symbolic Computation to Increase Spectral DCM Flexibility and Speed.” bioRxiv: The Preprint Server for Biology, 2023.
[3] Friston, Karl J., Joshua Kahan, Bharat Biswal, and Adeel Razi. “A DCM for Resting State fMRI.” NeuroImage 94 (July 2014): 396–407.


This page was generated using Literate.jl.