
smooth

Backward recursion of Bayesian nonlinear non-Gaussian state-space model

Since R2024a

Description

smooth provides approximate posterior means and covariances, conditioned on model parameters Θ and the full-sample response data, to summarize the smoothing distribution of the state of a Bayesian nonlinear non-Gaussian state-space model (bnlssm).

To compute the smoothed state distribution and likelihood, smooth uses the forward-backward smoothing method, in which it implements sequential Monte Carlo (SMC) to perform forward filtering, and then it resamples and reweights particles (weighted random samples) generated by SMC to perform backward smoothing.

[X,logL] = smooth(Mdl,Y,params) returns approximate posterior means and covariances of the smoothing state distribution, for each sampling time in the input response data, and the corresponding loglikelihood resulting from performing backward recursion of, or smoothing, the input Bayesian nonlinear state-space model. smooth evaluates the parameter map Mdl.ParamMap by using the input vector of parameter values.


[X,logL] = smooth(Mdl,Y,params,Name=Value) specifies additional options using one or more name-value arguments. For example, smooth(Mdl,Y,params,NumParticles=1e4,Resample="residual") specifies 1e4 particles for the SMC routine and to resample residuals.


[X,logL,Output] = smooth(___) additionally returns smoothing results by sampling period using any of the input-argument combinations in the previous syntaxes. Output contains the following quantities:

  • Approximate loglikelihood values associated with the input data, input parameters, and particles

  • Approximate posterior means of smoothed state distribution

  • Approximate posterior covariance of smoothed state distribution

  • Effective sample size

  • Flags indicating which data the software used to smooth

  • Flags indicating resampling


Examples


This example uses simulated data to compute means of the approximate posterior smoothed state distribution of the following Bayesian nonlinear state-space model. The state-space model contains two independent, stationary, autoregressive states, each with a model constant. The observations are a nonlinear function of the states with Gaussian noise. The prior distribution of the parameters is flat. Symbolically, the system of equations is

$$\begin{bmatrix} x_{t,1}\\ x_{t,2}\\ x_{t,3}\\ x_{t,4} \end{bmatrix} = \begin{bmatrix} \theta_1 & \theta_2 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & \theta_3 & \theta_4\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{t-1,1}\\ x_{t-1,2}\\ x_{t-1,3}\\ x_{t-1,4} \end{bmatrix} + \begin{bmatrix} \theta_5 & 0\\ 0 & 0\\ 0 & \theta_6\\ 0 & 0 \end{bmatrix} \begin{bmatrix} u_{t,1}\\ u_{t,3} \end{bmatrix}$$

$$y_t = \log\!\left(e^{x_{t,1}-\mu_1} + e^{x_{t,3}-\mu_3}\right) + \theta_7\varepsilon_t.$$

μ1 and μ3 are the unconditional means of the corresponding states. The initial distribution moments of each state are their unconditional mean and covariance.
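The initial moments used in the parameter mapping below follow from the stationary AR(1) identities μ = c/(1 − φ) and σ² = b²/(1 − φ²). As a quick, illustrative check (a Python sketch; the function name is an assumption, not part of the toolbox):

```python
# Stationary AR(1): x_t = c + phi*x_{t-1} + b*u_t, with u_t ~ N(0,1) and |phi| < 1
def ar1_unconditional_moments(c, phi, b):
    mean = c / (1 - phi)             # solves mu = c + phi*mu
    variance = b**2 / (1 - phi**2)   # solves s2 = phi^2*s2 + b^2
    return mean, variance

# First state of the example: c = theta2 = 1, phi = theta1 = 0.9, b = theta5 = 0.3
m, v = ar1_unconditional_moments(1, 0.9, 0.3)   # m is approximately 10
```

These are exactly the quantities that Mean0 and Cov0 return in the local function paramMap.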

Create a Bayesian nonlinear state-space model characterized by the system. The observation equation is in equation form, that is, the function composing the states is nonlinear and the innovation series εt is additive, linear, and Gaussian. The Local Functions section contains two functions required to specify the Bayesian nonlinear state-space model: the state-space model parameter mapping function and the prior distribution of the parameters. You can use the functions only within this script.

Mdl = bnlssm(@paramMap,@priorDistribution)
Mdl = 
  bnlssm with properties:

             ParamMap: @paramMap
    ParamDistribution: @priorDistribution
      ObservationForm: "equation"
           Multipoint: [1x0 string]

Mdl is a bnlssm model specifying the state-space model structure and prior distribution of the state-space model parameters. Because Mdl contains unknown values, it serves as a template for posterior analysis with observations.

Simulate a series of 100 observations from the following stationary 2-D VAR process.

$$\begin{aligned} x_{t,1} &= 1 + 0.9x_{t-1,1} + 0.3u_{t,1}\\ x_{t,3} &= -1 - 0.75x_{t-1,3} + 0.2u_{t,3}, \end{aligned}$$

where the disturbance series ut,j are standard Gaussian random variables.

rng(1,"twister")    % For reproducibility
T = 100;
thetatrue = [0.9; 1; -0.75; -1; 0.3; 0.2; 0.1];
MdlSim = varm(AR={diag(thetatrue([1 3]))},Covariance=diag(thetatrue(5:6).^2), ...
    Constant=thetatrue([2 4]));
XSim = simulate(MdlSim,T);

Compose simulated observations using the following equation.

$$y_t = \log\!\left(e^{x_{t,1}-\bar{x}_1} + e^{x_{t,3}-\bar{x}_3}\right) + 0.1\varepsilon_t,$$

where $\bar{x}_1$ and $\bar{x}_3$ are the sample means of the simulated states and the innovation series $\varepsilon_t$ is a standard Gaussian random variable.

ysim = log(sum(exp(XSim - mean(XSim)),2)) + thetatrue(7)*randn(T,1);

To compute means of the approximate posterior smoothed state distribution, the smooth function requires response data and a model with known state-space model parameters. Choose a random set with the following constraints:

  • θ1 and θ3 are within the unit circle. Use the U(−1,1) distribution to generate values.

  • θ2 and θ4 are real numbers. Use the N(0,3²) distribution to generate values.

  • θ5, θ6, and θ7 are positive real numbers. Use the χ₁² distribution to generate values.

theta13 = (-1+(1-(-1)).*rand(2,1));
theta24 = 3*randn(2,1);
theta567 = chi2rnd(1,3,1);
theta = [theta13(1); theta24(1); theta13(2); theta24(2); theta567];

Compute approximate smoothed state posterior means and corresponding loglikelihood by passing the Bayesian nonlinear model, simulated data, and parameter values to smooth.

[SmoothX,logL] = smooth(Mdl,ysim,theta);
size(SmoothX)
ans = 1×2

   100     4

logL
logL = 
-134.1053

SmoothX is a 100-by-4 matrix of approximate smoothed state posterior means, with rows corresponding to periods in the sample and columns corresponding to the state variables. The smooth function uses the forward-backward smoothing method (SMC and simulation smoothing by default) to obtain draws from the posterior smoothed state distribution. logL is the approximate loglikelihood function estimate evaluated at the data and parameter values.

Compare the loglikelihood logL and the loglikelihood computed using Θ from the data simulation.

[SmoothXSim,logLSim] = smooth(Mdl,ysim,thetatrue);
logLSim
logLSim = 
-0.2036

logLSim > logL, suggesting that the model evaluated at thetatrue has the better fit.

Plot the two sets of approximate smoothed state posterior means with the true state values.

figure
tiledlayout(2,1)
nexttile
plot([SmoothX(:,1) SmoothXSim(:,1) XSim(:,1)])
title("x_{t,1}")
legend("Smoothed State, random \theta","Smoothed State, true \theta","XSim")
nexttile
plot([SmoothX(:,3) SmoothXSim(:,3) XSim(:,2)])
title("x_{t,3}")
legend("Smoothed State, random \theta","Smoothed State, true \theta","XSim")

Figure contains two axes objects with titles x_{t,1} (top) and x_{t,3} (bottom). Each contains three line objects representing Smoothed State, random \theta; Smoothed State, true \theta; and XSim.

The approximate smoothed state posterior means computed using the true value of Θ are close to the simulated state paths, whereas the means computed using the random value of Θ are far from them.

Local Functions

These functions specify the state-space model parameter mappings, in equation form, and the log prior distribution of the parameters.

function [A,B,C,D,Mean0,Cov0,StateType] = paramMap(theta)
    A = @(x)blkdiag([theta(1) theta(2); 0 1],[theta(3) theta(4); 0 1])*x; 
    B = [theta(5) 0; 0 0; 0 theta(6); 0 0];
    C = @(x)log(exp(x(1)-theta(2)/(1-theta(1))) + ...
        exp(x(3)-theta(4)/(1-theta(3))));
    D = theta(7);
    Mean0 = [theta(2)/(1-theta(1)); 1; theta(4)/(1-theta(3)); 1];         
    Cov0 = diag([theta(5)^2/(1-theta(1)^2) 0 theta(6)^2/(1-theta(3)^2) 0]);          
    StateType = [0; 1; 0; 1];     % Stationary state and constant 1 processes
end

function logprior = priorDistribution(theta)
    paramconstraints = [(abs(theta([1 3])) >= 1); (theta(5:7) <= 0)];
    if(sum(paramconstraints))
        logprior = -Inf;
    else 
        logprior = 0;   % Prior density is proportional to 1 for all values
                        % in the parameter space.
    end
end

This example shows how to use smooth and Bayesian optimization to calibrate a Bayesian nonlinear state-space model. Consider this nonlinear state-space model.

$$\begin{aligned} x_t &= \theta_1 x_{t-1} + \theta_2 u_t\\ y_t &= e^{\theta_3 x_t} + \theta_4 + \theta_5\varepsilon_t, \end{aligned}$$

where θ has a flat prior and the series ut and εt are standard Gaussian random variables.

Simulate Series

Simulate a series of 100 observations from the following nonlinear state-space process.

$$\begin{aligned} x_t &= 0.5x_{t-1} + 0.6u_t\\ y_t &= e^{-x_t} + 2 + 0.75\varepsilon_t, \end{aligned}$$

where the series ut and εt are standard Gaussian random variables.

rng(500,"twister")    % For reproducibility
T = 100;
thetaDGP = [0.5; 0.6; -1; 2; 0.75];
numparams = numel(thetaDGP);

MdlSim = arima(AR=thetaDGP(1),Variance=thetaDGP(2).^2, ...
    Constant=0);
xsim = simulate(MdlSim,T);
ysim = exp(thetaDGP(3).*xsim) + thetaDGP(4) + thetaDGP(5)*randn(T,1);

Create Bayesian Nonlinear Model

Create a Bayesian nonlinear state-space model. The Local Functions section contains two functions required to specify the Bayesian nonlinear state-space model: the state-space model parameter mapping function and the prior distribution of the parameters. You can use the functions only within this script. Specify Multipoint=["A" "C"] because the state-transition and measurement-sensitivity parameter mappings in paraMap can evaluate multiple particles simultaneously.

Mdl = bnlssm(@paramMap,@priorDistribution,Multipoint=["A" "C"]);

Perform Random Exploration

One way to explore the parameter space for the point that maximizes the likelihood is by random exploration:

  1. Randomly generate a set of parameters.

  2. Compute the loglikelihood for that set derived from the approximate posterior smoothed state distribution. Generate 5000 particles for the SMC algorithm.

  3. Repeat steps 1 and 2 for 100 epochs.

  4. Choose the set of parameters that yields the largest loglikelihood.

Choose a random set using the following arbitrary recipe:

  • θ1 ~ U(−1,1).

  • θ2 and θ5 are χ₁².

  • θ3 and θ4 are N(0,2²).

numepochs = 100;
numparticles = 5000;
theta = NaN(numparams,numepochs);
logL = NaN(numepochs,1);

for j = 1:numepochs
    theta(:,j) = [(-1+(1-(-1)).*rand(1,1)); chi2rnd(1,1,1); ... 
        2*randn(2,1); chi2rnd(1,1,1);];
    [~,logL(j)] = smooth(Mdl,ysim,theta(:,j),NumParticles=numparticles);
end

Choose the set of parameters that maximizes the loglikelihood.

[bestlogL,idx] = max(logL);
bestTheta = theta(:,idx);
[bestTheta thetaDGP]
ans = 5×2

    0.1096    0.5000
    1.8110    0.6000
   -0.7610   -1.0000
    1.5364    2.0000
    0.9389    0.7500

The values that maximize the likelihood are fairly close to the values that generated the data.

Perform Bayesian Optimization

Bayesian optimization requires you to specify which variables require optimization and their support. To do this, use optimizableVariable to provide a name to the variable and its support. This example limits the support of each variable to a small interval; you must experiment with the support for your application.

thetaOp(5) = optimizableVariable("theta5",[0,2]);
thetaOp(1) = optimizableVariable("theta1",[0,0.9]);
thetaOp(2) = optimizableVariable("theta2",[0,2]);
thetaOp(3) = optimizableVariable("theta3",[-3,0]);
thetaOp(4) = optimizableVariable("theta4",[-3,3]);

thetaOp is a 1-by-5 vector of optimizableVariable objects specifying the name and support of each element of θ.

bayesopt accepts a function handle to the function that you want to minimize with respect to one argument θ, and the optimizable variable specifications. Therefore, provide the function handle neglogl, which is associated with the negative loglikelihood function smoothneglogl located in Local Functions. Specify the Expected Improvement Plus acquisition function and an exploration ratio of 1.

neglogl = @(var)smoothneglogl(var,Mdl,ysim,numparticles);

rng(1,"twister")
results = bayesopt(neglogl,thetaOp,AcquisitionFunctionName="expected-improvement-plus", ...
    ExplorationRatio=1);
|==================================================================================================================================================|
| Iter | Eval   | Objective   | Objective   | BestSoFar   | BestSoFar   |       theta1 |       theta2 |       theta3 |       theta4 |       theta5 |
|      | result |             | runtime     | (observed)  | (estim.)    |              |              |              |              |              |
|==================================================================================================================================================|
|    1 | Best   |      1127.7 |      0.8493 |      1127.7 |      1127.7 |       0.0761 |      0.51116 |     -0.78522 |       -2.272 |      0.44888 |
|    2 | Best   |      322.82 |     0.88013 |      322.82 |      369.77 |       0.5021 |       0.8957 |     -0.11662 |      -1.1518 |       1.8165 |
|    3 | Best   |       245.8 |     0.16333 |       245.8 |      245.83 |      0.67392 |      0.54178 |       -2.091 |     -0.82814 |      0.63451 |
|    4 | Accept |      379.26 |     0.17716 |       245.8 |      245.99 |      0.74179 |       1.8872 |      -1.9159 |      -2.5691 |        1.596 |
|    5 | Best   |      193.41 |     0.63844 |      193.41 |      193.61 |      0.60226 |        1.388 |      -2.8719 |       1.8649 |      0.99824 |
|    6 | Accept |      378.94 |     0.17881 |      193.41 |      193.34 |      0.61122 |       1.7241 |      -1.9549 |      -2.7603 |       1.5881 |
|    7 | Best   |      164.95 |      2.0665 |      164.95 |      164.99 |      0.89145 |       0.3939 |     -0.86732 |       2.9996 |         1.13 |
|    8 | Accept |      175.33 |     0.78984 |      164.95 |      165.14 |      0.77039 |       1.1099 |      -2.8749 |       2.9986 |       1.4799 |
|    9 | Accept |      188.55 |      1.0217 |      164.95 |      165.31 |        0.899 |       1.5766 |      -2.7935 |       1.9077 |       1.4035 |
|   10 | Accept |      176.16 |      1.8755 |      164.95 |      165.82 |      0.89675 |        1.074 |       -2.056 |       2.9966 |       1.7122 |
|   11 | Best   |      162.35 |      1.4387 |      162.35 |      162.26 |      0.70663 |     0.021792 |      -2.8874 |       2.4126 |       1.2572 |
|   12 | Accept |      167.12 |      2.0511 |      162.35 |      162.89 |      0.82297 |    0.0099343 |      -2.8661 |       2.5609 |       1.3679 |
|   13 | Accept |       37507 |      0.6273 |      162.35 |      131.64 |      0.72519 |     0.019759 |      -2.4419 |       2.7505 |     0.041092 |
|   14 | Accept |      1544.1 |      2.4395 |      162.35 |      171.69 |      0.61096 |      0.38246 |     -0.30051 |        -2.98 |      0.85496 |
|   15 | Accept |      196.22 |      1.1053 |      162.35 |      173.98 |      0.67033 |      0.88907 |      -2.6327 |       2.4757 |       1.9997 |
|   16 | Accept |       221.1 |     0.50966 |      162.35 |      174.33 |      0.68492 |      0.41576 |      -2.9961 |       0.1524 |      0.54823 |
|   17 | Accept |      331.59 |     0.40651 |      162.35 |      170.62 |      0.83362 |       1.8137 |      -2.9992 |     -0.77834 |       1.8933 |
|   18 | Accept |      170.22 |      1.0741 |      162.35 |         125 |      0.59515 |      0.42113 |      -2.9034 |       2.9794 |       1.0632 |
|   19 | Accept |      216.94 |      0.7908 |      162.35 |       126.1 |       0.3111 |      0.84077 |      -1.3483 |       2.9564 |       0.5773 |
|   20 | Accept |      177.18 |     0.32959 |      162.35 |      154.22 |       0.3466 |      0.42928 |      -2.9798 |       1.6888 |      0.25124 |
|==================================================================================================================================================|
| Iter | Eval   | Objective   | Objective   | BestSoFar   | BestSoFar   |       theta1 |       theta2 |       theta3 |       theta4 |       theta5 |
|      | result |             | runtime     | (observed)  | (estim.)    |              |              |              |              |              |
|==================================================================================================================================================|
|   21 | Accept |      185.07 |     0.48319 |      162.35 |       154.2 |      0.00115 |     0.093713 |      -1.8586 |       2.7626 |       1.9993 |
|   22 | Accept |       271.9 |     0.35265 |      162.35 |      153.64 |    0.0080476 |      0.37377 |      -2.9919 |     -0.27248 |       0.1696 |
|   23 | Accept |      369.97 |     0.15656 |      162.35 |      160.63 |      0.28982 |       1.2793 |      -2.9778 |       -1.244 |      0.38512 |
|   24 | Accept |       171.4 |     0.55156 |      162.35 |      160.02 |    0.0058968 |      0.41891 |     -0.51186 |       2.6739 |       1.4408 |
|   25 | Accept |      644.44 |      1.2839 |      162.35 |      162.18 |     0.079409 |      0.32809 |       -1.219 |       2.9808 |      0.30693 |
|   26 | Accept |      198.03 |     0.55734 |      162.35 |      162.75 |    0.0065223 |       1.0531 |       -2.964 |        2.899 |      0.96624 |
|   27 | Accept |      339.01 |     0.20376 |      162.35 |      161.93 |      0.15558 |      0.84882 |      -2.9909 |      -2.1498 |       1.7239 |
|   28 | Accept |      327.16 |     0.18582 |      162.35 |      161.54 |      0.23219 |      0.48289 |      -2.9966 |      -2.3966 |       1.2154 |
|   29 | Accept |      199.37 |     0.41817 |      162.35 |      161.18 |      0.29476 |      0.83904 |      -1.7824 |       1.7096 |       1.9978 |
|   30 | Accept |      363.59 |     0.16076 |      162.35 |      192.75 |      0.51879 |      0.97684 |      -2.9991 |      -2.1266 |      0.52726 |

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 36.0568 seconds
Total objective function evaluation time: 23.7671

Best observed feasible point:
    theta1      theta2     theta3     theta4    theta5
    _______    ________    _______    ______    ______

    0.70663    0.021792    -2.8874    2.4126    1.2572

Observed objective function value = 162.3519
Estimated objective function value = 164.8136
Function evaluation time = 1.4387

Best estimated feasible point (according to models):
    theta1      theta2     theta3     theta4    theta5
    _______    ________    _______    ______    ______

    0.00115    0.093713    -1.8586    2.7626    1.9993

Estimated objective function value = 192.7467
Estimated function evaluation time = 0.49262

Figure contains an axes object. The axes object with title Min objective vs. Number of function evaluations, xlabel Function evaluations, ylabel Min objective contains 2 objects of type line. These objects represent Min observed objective, Estimated min objective.

results is a BayesianOptimization object containing properties summarizing the results of Bayesian optimization.

Extract the value that minimizes the negative loglikelihood from results by using bestPoint.

restbl = [bestPoint(results); num2cell(thetaDGP')];
restbl.Properties.RowNames = ["calibrated" "true"]
restbl=2×5 table
                  theta1      theta2     theta3     theta4    theta5
                  _______    ________    _______    ______    ______

    calibrated    0.00115    0.093713    -1.8586    2.7626    1.9993
    true              0.5         0.6         -1         2      0.75

The results are fairly close to the values that generated the data.

Local Functions

These functions specify the state-space model parameter mappings, in equation form, and the log prior distribution of the parameters, and they compute the negative loglikelihood of the model.

function [A,B,C,D,Mean0,Cov0,StateType] = paramMap(theta)
    A = @(x)theta(1)*x; 
    B = theta(2);
    C = @(x)exp(theta(3).*x) + theta(4);
    D = theta(5);
    Mean0 = 0;         
    Cov0 = 1;          
    StateType = 0;     % Stationary state process
end

function logprior = priorDistribution(theta)
    paramconstraints = [abs(theta(1)) >= 1; theta([2 4]) <= 0];
    if(sum(paramconstraints))
        logprior = -Inf;
    else 
        logprior = 0;   % Prior density is proportional to 1 for all values
                        % in the parameter space.
    end
end

function neglogL = smoothneglogl(theta,mdl,resp,np)
    theta = table2array(theta);
    [~,logL] = smooth(mdl,resp,theta,NumParticles=np);
    neglogL = -logL;
end

smooth runs SMC to forward filter the state-space model, which includes resampling particles. To assess the quality of the sample, including whether any posterior smoothed state distribution is close to degenerate, you can monitor these algorithms by returning the third output of smooth.

Consider this nonlinear state-space model.

$$\begin{aligned} x_t &= \theta_1 x_{t-1} + \theta_2 u_t\\ y_t &= e^{\theta_3 x_t} + \theta_4 + \theta_5\varepsilon_t, \end{aligned}$$

where θ has a flat prior and the series ut and εt are standard Gaussian random variables.

Simulate a series of 100 observations from the following nonlinear state-space process.

$$\begin{aligned} x_t &= 0.5x_{t-1} + 0.6u_t\\ y_t &= e^{-x_t} + 2 + 0.75\varepsilon_t, \end{aligned}$$

where the series ut and εt are standard Gaussian random variables.

rng(500,"twister")    % For reproducibility
T = 100;
thetaDGP = [0.5; 0.6; -1; 2; 0.75];
numparams = numel(thetaDGP);

MdlSim = arima(AR=thetaDGP(1),Variance=thetaDGP(2)^2, ...
    Constant=0);
xsim = simulate(MdlSim,T);
y = exp(thetaDGP(3).*xsim) + thetaDGP(4) + thetaDGP(5)*randn(T,1);

Create a Bayesian nonlinear state-space model, and specify Multipoint=["A" "C"]. The Local Functions section contains the required functions specifying the Bayesian nonlinear state-space model structure and joint prior distribution.

Mdl = bnlssm(@paramMap,@priorDistribution,Multipoint=["A" "C"]);

Approximate the posterior smoothed state distribution of the state-space model. As in the Calibrate State-Space Model Parameters example, choose a random set of initial parameter values. Specify residual resampling for the SMC, and return the third output.

theta0 = [(-1+(1-(-1)).*rand(1,1)); chi2rnd(1,1,1); ... 
        2*randn(2,1); chi2rnd(1,1,1);];
[~,~,Output] = smooth(Mdl,y,theta0,Resample="residual");

Output is a 100-by-1 structure array, with one element per observation. Each element contains several fields, including:

  • SmoothedStatesCov — Approximate posterior smoothed state distribution covariance for the states at each sampling time

  • DataUsed — Whether smooth used an observation for posterior estimation

  • Resampled — Whether smooth resampled the particles associated with an observation

Plot the approximate posterior smoothed state covariances.

figure
plot([Output.SmoothedStatesCov])
title("Approx. Post. Smoothed State Covariances")

Figure contains an axes object. The axes object with title Approx. Post. Smoothed State Covariances contains an object of type line.

Any covariance close to 0 indicates a close-to-degenerate distribution. No covariances in the analysis are close to 0.

Determine whether smooth omitted any observations from posterior estimation.

anyObsOmitted = sum([Output.DataUsed]) ~= T
anyObsOmitted = logical
   0

anyObsOmitted = 0 indicates that smooth used all observations.

Determine whether smooth resampled any particles associated with observations.

whichResampled = find([Output.Resampled] == true)
whichResampled = 1×2

    37    56

smooth resampled particles associated with observations 37 and 56.

Local Functions

These functions specify the state-space model parameter mappings, in equation form, and the log prior distribution of the parameters.

function [A,B,C,D,Mean0,Cov0,StateType] = paramMap(theta)
    A = @(x)theta(1)*x; 
    B = theta(2);
    C = @(x)exp(theta(3).*x) + theta(4);
    D = theta(5);
    Mean0 = 0;         
    Cov0 = 1;          
    StateType = 0;     % Stationary state process
end

function logprior = priorDistribution(theta)
    paramconstraints = [abs(theta(1)) >= 1; theta([2 4]) <= 0];
    if(sum(paramconstraints))
        logprior = -Inf;
    else 
        logprior = 0;   % Prior density is proportional to 1 for all values
                        % in the parameter space.
    end
end

Input Arguments


Mdl — Bayesian nonlinear state-space model, specified as a bnlssm model object created by the bnlssm function.

The function handles of the properties Mdl.ParamDistribution and Mdl.ParamMap determine the prior and the data likelihood, respectively. smooth evaluates Mdl.ParamMap at the input params.

Y — Observed response data, specified as a numeric matrix or a cell vector of numeric vectors.

  • If Mdl is time invariant with respect to the observation equation, Y is a T-by-n matrix. Each row of the matrix corresponds to a period and each column corresponds to a particular observation in the model. T is the sample size and n is the number of observations per period. The last row of Y contains the latest observations.

  • If Mdl is time varying with respect to the observation equation, Y is a T-by-1 cell vector. Y{t} contains an nt-dimensional vector of observations for period t, where t = 1, ..., T. For linear observation models, the corresponding dimensions of the coefficient matrices C{t} and D{t}, outputs of Mdl.ParamMap, must be consistent with the vector in Y{t} for all periods. For nonlinear observation models, the dimensions of the inputs and outputs associated with the observations must be consistent. Regardless of model type, the last cell of Y contains the latest observations.

NaN elements indicate missing observations. For details on how smooth accommodates missing observations, see Algorithms.

Data Types: double | cell

params — State-space model parameters Θ at which to evaluate the parameter mapping Mdl.ParamMap, specified as a numparams-by-1 numeric vector. Elements of params must correspond to the elements of the first input arguments of Mdl.ParamMap and Mdl.ParamDistribution.

Data Types: double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: smooth(Mdl,Y,params,NumParticles=1e4,Resample="residual") specifies 1e4 particles for the SMC routine and to resample residuals.

Method — Forward-backward smoothing method, specified as a value in this table.

  • "marginal" — smooth approximates the posterior moments of the smoothing state distribution by following this procedure:

    1. Obtain particles, their weights, and approximate posterior filtering distributions by forward filtering the state-space model using SMC.

    2. For each t = 1,…,T, use the filtering results to obtain the posterior smoothing distribution moments by performing backward recursion of the posterior marginal distribution P(xt|y1,…,yT,Θ).

    You can tune the forward-filtering algorithm by setting the NumParticles, Resample, Cutoff, and SortParticles name-value arguments.

  • "simsmooth" — smooth approximates the mean and covariance of the posterior smoothed state distribution by generating sample paths using the simulation smoother. In addition to the arguments that tune the forward-filtering algorithm, you can tune the simulation smoother by setting the NumPaths and MaxIterations name-value arguments. For details, see simsmooth.

The "marginal" and "simsmooth" algorithms are computationally intensive; both have complexity O(NumParticles²T). Their speed and accuracy are application dependent. It is good practice to experiment with both algorithms and tune the associated parameters as needed.

Example: Method="marginal"

Data Types: char | string
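To make the "marginal" backward recursion concrete, the following is a language-agnostic Python sketch of the standard marginal particle-smoothing reweighting step (an illustration of the published algorithm under assumed names, not the toolbox implementation). The N-by-N transition-density evaluation at each time step is what produces the O(NumParticles²T) cost.

```python
import numpy as np

def _logsumexp(a, axis):
    """Numerically stable log(sum(exp(a))) along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis=axis)

def marginal_smoothing_weights(particles, weights, trans_logpdf):
    """Backward reweighting of forward-filtered particles, O(N^2) per time step.

    particles : (T, N) array of forward-filtered particles.
    weights   : (T, N) array of normalized filtering weights.
    trans_logpdf(x_next, x_prev) : log state-transition density, broadcastable.
    Returns a (T, N) array of smoothing weights approximating P(x_t | y_1..y_T).
    """
    T, N = particles.shape
    sw = np.empty((T, N))
    sw[-1] = weights[-1]                      # at t = T, smoothing = filtering
    for t in range(T - 2, -1, -1):
        # f[j, i] = log f(x_{t+1}^(j) | x_t^(i))
        f = trans_logpdf(particles[t + 1][:, None], particles[t][None, :])
        # denom[j] = log sum_k w_t^(k) f(x_{t+1}^(j) | x_t^(k))
        denom = _logsumexp(np.log(weights[t])[None, :] + f, axis=1)
        # w_{t|T}^(i) = w_t^(i) * sum_j w_{t+1|T}^(j) f(x_{t+1}^(j)|x_t^(i)) / denom_j
        back = _logsumexp(np.log(sw[t + 1])[:, None] + f - denom[:, None], axis=0)
        sw[t] = weights[t] * np.exp(back)
        sw[t] /= sw[t].sum()
    return sw
```

The posterior moments for each t then follow as weighted means and covariances of the time-t particles under the smoothing weights.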

NumParticles — Number of particles for SMC, specified as a positive integer.

Example: NumParticles=1e4

Data Types: double

NumPaths — Number of sample state paths to generate for the simulation smoother, specified as a positive integer. smooth ignores this argument when Method is "marginal".

Example: NumPaths=10000

Data Types: double

MaxIterations — Maximum number of rejection sampling iterations for posterior sampling using the simulation smoother, specified as a nonnegative integer. smooth conducts rejection sampling before it conducts the computationally intensive importance sampling algorithm.

smooth ignores this argument when Method is "marginal".

Example: MaxIterations=100

Data Types: double

Resample — SMC resampling method, specified as a value in this table.

  • "multinomial" — At time t, the set of previously generated particles (the parent set) follows a standard multinomial distribution, with probabilities proportional to their weights. smooth resamples an offspring set with replacement from the parent set [1].

  • "residual" — Residual sampling, a modified version of multinomial resampling that can produce an estimator with lower variance than multinomial resampling [6].

  • "systematic" — Systematic sampling, which produces an estimator with lower variance than multinomial resampling [4].

Resampling methods downsample insignificant particles to achieve a smaller estimator variance than if no resampling is performed and to avoid sampling from a degenerate proposal [4].

Example: Resample="residual"

Data Types: char | string
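The three schemes can be sketched in a few lines of Python (illustrative textbook versions, not the toolbox implementation). Each maps a normalized weight vector to an equally weighted set of offspring indices:

```python
import numpy as np

def multinomial_resample(w, rng):
    """Draw N offspring indices i.i.d. with P(index = i) = w[i]."""
    return rng.choice(len(w), size=len(w), p=w)

def residual_resample(w, rng):
    """Keep floor(N*w[i]) deterministic copies of particle i, then
    multinomially resample the remaining slots from the residual weights."""
    n = len(w)
    counts = np.floor(n * w).astype(int)      # deterministic offspring
    residual = n * w - counts                 # leftover (fractional) weight
    n_rest = n - counts.sum()
    idx = np.repeat(np.arange(n), counts)
    if n_rest > 0:
        residual = residual / residual.sum()
        idx = np.concatenate([idx, rng.choice(n, size=n_rest, p=residual)])
    return idx

def systematic_resample(w, rng):
    """One uniform draw, then an evenly spaced sweep through the weight CDF."""
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    cs = np.cumsum(w)
    cs[-1] = 1.0                              # guard against round-off
    return np.searchsorted(cs, positions)
```

Because residual and systematic resampling replace part of the randomness with deterministic structure, they typically yield lower-variance estimators than plain multinomial resampling, consistent with the table above.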

Cutoff — Effective sample size threshold, below which smooth resamples particles, specified as a nonnegative scalar. For more details, see [4], Ch. 12.3.3.

Tip

  • To resample during every period, set Cutoff=numparticles, where numparticles is the value of the NumParticles name-value argument.

  • To avoid resampling, set Cutoff=0.

Example: Cutoff=0.75*numparticles

Data Types: double
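For a normalized weight vector, the effective sample size is commonly computed as 1/Σᵢwᵢ². A minimal Python sketch of the resampling trigger (an assumption about the standard formula, not the toolbox internals):

```python
import numpy as np

def effective_sample_size(w):
    """ESS = 1 / sum(w_i^2): equals N for uniform weights, 1 for a degenerate set."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                 # normalize, in case raw weights are passed
    return 1.0 / np.sum(w * w)

def needs_resampling(w, cutoff):
    """Resample when the ESS of the current weights falls below the cutoff."""
    return effective_sample_size(w) < cutoff
```

Under this definition, Cutoff=0 never triggers resampling (the ESS is always at least 1), and Cutoff=numparticles triggers it in every period in which the weights are not exactly uniform, matching the tips above.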

SortParticles — Flag for sorting particles before resampling, specified as a value in this table.

  • true — smooth sorts the generated particles before resampling them.

  • false — smooth does not sort the generated particles.

When SortParticles=true, smooth uses Hilbert sorting during the SMC routine to sort the particles. This action can reduce Monte Carlo variation, which is useful when you compare loglikelihoods resulting from evaluating several params arguments that are close to each other [3]. However, the sorting routine requires more computational resources and can slow down computations, particularly in problems with a high-dimensional state variable.

Example: SortParticles=true

Data Types: logical

Output Arguments


X — Approximate smoothed state estimates E(xt|y1,…,yT), for t = 1,…,T, returned as a T-by-m numeric matrix or a T-by-1 cell vector of numeric vectors.

Each row corresponds to a time point in the sample. The last row contains the latest smoothed states.

For time-invariant models, smooth returns a matrix. Each column corresponds to a state variable xt.

If Mdl is time varying, X is a cell vector. Cell t contains a column vector of smoothed state estimates with length mt. Each column corresponds to a state variable.

logL — Approximate loglikelihood function value log p(y1,…,yT), returned as a numeric scalar.

The loglikelihood value has noise induced by SMC.

Missing observations do not contribute to the loglikelihood.

Output — Smoothing results by sampling time, returned as a structure array.

Output is a T-by-1 structure, where element t corresponds to the smoothing result for time t.

Each element of Output has these fields.

  • LogLikelihood — Scalar approximate loglikelihood value, an estimate of log p(yt|y1,…,yt−1)

  • SmoothedStates — mt-by-1 vector of approximate smoothed state estimates, an estimate of E(xt|y1,…,yT)

  • SmoothedStatesCov — mt-by-mt variance-covariance matrix of the smoothed states, an estimate of Var(xt|y1,…,yT)

  • EffectiveSampleSize — Effective sample size for importance sampling, a scalar in [0,NumParticles]

  • DataUsed — ht-by-1 flag indicating whether the software smooths using a particular observation. For example, if observation j at time t is a NaN, element j in DataUsed at time t is 0.

  • Resampled — Flag indicating whether smooth resampled particles

Tips

  • Smoothing has several advantages over filtering.

    • The smoothed state estimator is more accurate than the online filter state estimator because it is based on the full-sample data, rather than only observations up to the estimated sampling time.

    • A stable approximation to the gradient of the loglikelihood function, which is important for numerical optimization, is available from the smoothed state samples of the simulation smoother (finite differences of the approximated loglikelihood computed from the filter state estimates are numerically unstable).

    • You can use the simulation smoother to perform Bayesian estimation of the nonlinear state-space model via the Metropolis-within-Gibbs sampler.

  • Unless you set Cutoff=0, smooth resamples particles according to the specified resampling method Resample. Although resampling particles with high weights improves the results of the SMC, you should also allow the sampler to traverse the proposal distribution to obtain novel, high-weight particles. To do this, experiment with Cutoff.

  • Avoid an arbitrary choice of the initial state distribution. bnlssm functions generate the initial particles from the specified initial state distribution, which impacts the performance of the nonlinear filter. If the initial state specification is bad enough, importance weights concentrate on a small number of particles in the first SMC iteration, which might produce unreasonable filtering results. This vulnerability of the nonlinear model behavior contrasts with the stability of the Kalman filter for the linear model, in which the initial state distribution usually has little impact on the filter because the prior is washed out as it processes data.

Algorithms

smooth accommodates missing data by not updating filtered state estimates corresponding to missing observations. In other words, suppose there is a missing observation at period t. Then, the state forecast for period t based on the previous t – 1 observations and filtered state for period t are equivalent.
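A minimal Python sketch of this rule for a generic bootstrap-filter step (the names and structure are illustrative assumptions, not the toolbox internals): when the observation is missing, the reweighting is skipped, so the filtered particles and weights equal the propagated (forecast) ones.

```python
import numpy as np

def filter_step(particles, weights, y_t, propagate, obs_loglik, rng):
    """One particle-filter step that skips the update when y_t is missing."""
    particles = propagate(particles, rng)      # forecast: x_t | y_{1:t-1}
    if y_t is None or np.isnan(y_t):
        return particles, weights              # filtered == forecast
    logw = np.log(weights) + obs_loglik(y_t, particles)
    logw -= logw.max()                         # stabilize before exponentiating
    w = np.exp(logw)
    return particles, w / w.sum()
```

A missing y_t therefore leaves the weights untouched and contributes nothing to the loglikelihood, which is the behavior the paragraph above describes.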

Alternative Functionality

To monitor the forward-filtering stage or to conduct a Gibbs sampler, use simsmooth instead of smooth.

References

[1] Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. "Particle Markov Chain Monte Carlo Methods." Journal of the Royal Statistical Society Series B: Statistical Methodology 72 (June 2010): 269–342. https://doi.org/10.1111/j.1467-9868.2009.00736.x.

[2] Andrieu, Christophe, and Gareth O. Roberts. "The Pseudo-Marginal Approach for Efficient Monte Carlo Computations." Ann. Statist. 37 (April 2009): 697–725. https://dx.doi.org/10.1214/07-AOS574.

[3] Deligiannidis, George, Arnaud Doucet, and Michael Pitt. "The Correlated Pseudo-Marginal Method." Journal of the Royal Statistical Society, Series B: Statistical Methodology 80 (June 2018): 839–870. https://doi.org/10.1111/rssb.12280.

[4] Durbin, J, and Siem Jan Koopman. Time Series Analysis by State Space Methods. 2nd ed. Oxford: Oxford University Press, 2012.

[5] Fernández-Villaverde, Jesús, and Juan F. Rubio-Ramírez. "Estimating Macroeconomic Models: A Likelihood Approach." Review of Economic Studies 70 (October 2007): 1059–1087. https://doi.org/10.1111/j.1467-937X.2007.00437.x.

[6] Liu, Jun, and Rong Chen. "Sequential Monte Carlo Methods for Dynamic Systems." Journal of the American Statistical Association 93 (September 1998): 1032–1044. https://dx.doi.org/10.1080/01621459.1998.10473765.

Version History

Introduced in R2024a

See Also

Objects

Functions