LQG issue: state estimates always perfect

I'm working through an LQG controller for a simple 2nd-order spring/mass/damper system, using an LQR controller base.
See my attached .m file.
I'm using MathWorks' LQG page here, at the section titled "Constructing the Kalman State Estimator".
Also this link here, in the section titled "Observer Design", which more clearly shows the formulation for the [x;e] state vector.
For some reason, my LQG state estimates track the true states perfectly no matter what -- i.e. essentially zero estimation error -- even if L = [0, 0]. The latter should mean no state-estimate correction at all, since the actual measurements are never used to form a correction.
The state estimates also track with ~0 error (on the order of 1e-315) through reference step changes.
What might be going on?
1) Process: Make state-space sys --> LQR design (find K) --> LQG by finding L (and using K) --> simulate
2) The state vector is [x; e], with x = [pos, vel] and e = [error_pos_hat, error_vel_hat],
where eg "error_pos_hat" is the error associated with the position state estimate.
x_dot = (A - B*K_lqr)*x + B*K_lqr*e
e_dot = 0*x + (A - L*C)*e
So the augmented system is:
A_aug = [A-B*K, B*K; 0, A-L*C]
B_aug = [B; 0]
C_aug = [C, 0]
D_aug = 0
Are these state equations correct?
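As a sanity check on the block structure (a quick sketch in Python with made-up gains, not my actual .m file): the block upper-triangular form should mean the closed-loop eigenvalues are just eig(A-B*K) together with eig(A-L*C), i.e. the separation principle.

```python
import numpy as np

# Made-up 2nd-order spring/mass/damper and gains, purely illustrative
A = np.array([[0.0, 1.0], [-4.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[5.0, 3.5]])     # hypothetical state-feedback gain
L = np.array([[7.0], [20.0]])  # hypothetical estimator gain

# Augmented matrix for the [x; e] state vector, as written above
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

# Block upper-triangular => eigenvalues are eig(A-B*K) together with eig(A-L*C)
eig_cl = np.sort_complex(np.linalg.eigvals(Acl))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                          np.linalg.eigvals(A - L @ C)]))
print(np.allclose(eig_cl, eig_sep))  # True
```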
3) For some reason, the choice of L doesn't affect the system response at all -- even L = [0, 0], which should mean no state-estimate correction whatsoever.
Although the A matrix's estimation-error eigenvalues (those for error_pos_hat and error_vel_hat) do move back toward the open-loop plant poles as L --> 0, as expected.
The LQG state estimates always perfectly track the states, i.e. ~0 error. More exactly, it's about 1e-315 (yes, three hundred fifteen -- around the subnormal-float floor).
4) Also, the state errors are 0 starting at t = 0; they should take some time to converge even if estimation is very good.
But the state estimation does seem to be doing something (what, though, is unclear to me), even when L = 0. For example, if I change the initial states so that
x0 = [0.3, 0.1, 0.0, 0.0]
becomes
x0 = [0.3, 0.1, 0.1, -100]
i.e. the error states also start nonzero; the errors then do take some time to converge to 0.
But they converge to 0 even if L = [0, 0]. That seems unexpected; it's not really doing what I thought.
This plot is convergence with L = [0, 0] and nonzero initial error states:
And this is using system poles*10, with L found the "normal" way per the attached .m file, zoomed in on the X axis.
Obviously much faster convergence -- which is what I'd expect if the system were set up right, i.e. if the state estimation error weren't already 0 :)

2 Comments

Just wondering, why does a relatively low 5% overshoot give a relatively oscillatory damping ratio of 0.3183?
I don't remember the formula offhand. Please check.
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2)
zeta = 0.3183
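For reference, the standard second-order relation is zeta = -ln(OS)/sqrt(pi^2 + ln(OS)^2). The expression above has pi^2 multiplied by ln(OS)^2 under the square root instead of added, which collapses to zeta = 1/pi ≈ 0.3183 for any overshoot. A quick check (in Python, just for illustration):

```python
import math

OS = 0.05  # 5% overshoot, as a fraction

# As written above: sqrt(pi^2 * ln^2) = pi*|ln|, so the ln terms cancel
zeta_as_posted = abs(math.log(OS)) / math.sqrt(math.pi**2 * math.log(OS)**2)

# Standard underdamped 2nd-order relation: OS = exp(-zeta*pi/sqrt(1-zeta^2))
zeta_standard = -math.log(OS) / math.sqrt(math.pi**2 + math.log(OS)**2)

print(zeta_as_posted)  # 0.3183... = 1/pi, for ANY overshoot value
print(zeta_standard)   # ~0.690 for 5% overshoot
```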
Thanks @Sam Chak, I'll check :) It's not the main point (as you surmised), but thanks for pointing it out! It's just to get something meaningful for the example... will check on it.


 Accepted Answer

Hi John,
Disclaimer: I didn't actually look at your code.
I think all of the results you're seeing make sense. First, let's write the state equations for x and xhat (we could use x and e, but I think it's clearer to use xhat instead of e).
State equation for the plant:
xdot = A*x + B*u (1)
State equation for the estimator:
xhatdot = A*xhat + B*u + L*(y - yhat) (2)
= A*xhat + B*u + L*C*(x - xhat) (2a)
= L*C*x + (A - L*C)*xhat + B*u (2b)
Before going any further by closing the control loop, consider equations (1) and (2). If x and xhat have the same initial conditions, then we'd certainly expect x and xhat to be identical, because A and B are the same in both state equations and the plant and the estimator are both excited by exactly the same input u. So, with the same initial conditions, we'd expect x and xhat to be identical regardless of L. In reality it won't work out exactly like this, because the model in the estimator won't exactly match the true dynamics of the plant.
Suppose instead that the plant has a nonzero initial condition and the estimator has a zero initial condition. Because the system is LTI, the state response of the plant is the sum of the response due to the initial condition and the response due to the input. The state response of the estimator is the same as the plant's response due to the input, because the estimator IC is zero. If the plant is stable (and under any stabilizing closed-loop control), the plant's response due to the initial condition eventually decays to zero anyway; once that decay is small enough, the state response of the estimator will match the state response of the plant. That is, the estimation error eventually goes to zero as the IC response of the plant decays to zero, and this is true even if L = 0. However, if L = 0 the "time constant" of the estimation error is the same as that of the plant, which isn't useful if we want to use xhat for feedback. Hence, we use the nonzero L feedback in the estimator to make the estimation error due to mismatched initial conditions decay to zero faster than the plant dynamics, and in fact faster than the dynamics after applying the closed-loop control.
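That claim is easy to check numerically. A small sketch (Python here just for convenience; made-up plant numbers, not the gains from this thread) simulating the plant and estimator side by side with L = 0:

```python
import numpy as np

# Made-up stable 2nd-order plant; the argument doesn't depend on the numbers
A = np.array([[0.0, 1.0], [-4.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.zeros((2, 1))  # L = 0: no measurement correction at all

def final_estimation_error(x0, xhat0, T=60.0, dt=1e-3):
    """Forward-Euler sim of plant (1) and estimator (2) sharing the input u(t);
    returns max|x - xhat| at the final time."""
    x, xhat = x0.copy(), xhat0.copy()
    for k in range(int(T / dt)):
        u = np.sin(0.5 * k * dt)  # arbitrary input, applied to both
        xdot = A @ x + B * u
        xhatdot = A @ xhat + B * u + L @ (C @ (x - xhat))
        x, xhat = x + dt * xdot, xhat + dt * xhatdot
    return float(np.max(np.abs(x - xhat)))

# Identical ICs: identical equations, identical input => identical trajectories
err_same_ic = final_estimation_error(np.zeros((2, 1)), np.zeros((2, 1)))
print(err_same_ic)  # 0.0 exactly

# Mismatched ICs: with L = 0, edot = A*e, so the error still decays to zero,
# but only at the plant's own (slow) time constant
err_mismatch = final_estimation_error(np.array([[1.0], [0.0]]), np.zeros((2, 1)))
print(err_mismatch)  # tiny by t = 60, after the plant's IC response has decayed
```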
Closed loop control, assuming state feedback and servo tracking on x(1):
u = K*([1;0]*r - xhat)
Subbing the control into (1)
xdot = A*x + B*K*([1;0]*r - xhat) = A*x - B*K*xhat + B*K*[1;0]*r (3)
Subbing the control into (2b)
xhatdot = L*C*x + (A - L*C)*xhat + B*K*([1;0]*r - xhat)
= L*C*x + (A - L*C - B*K)*xhat + B*K*[1;0]*r (4)
Combining (3) and (4) into a single equation:
[xdot; xhatdot] = [A, -B*K; L*C, A-L*C-B*K]*[x; xhat] + [B*K*[1;0]; B*K*[1;0]]*r
Example
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
massSys = ss(A, B, C, D, ...
'StateName', {'theta', 'theta_dot'}, ...
'InputName', {'u'}, ...
'OutputName', {'theta'});
% control gains
ControlPoles = roots([1 1400 1e6]);
K = place(massSys.A,massSys.B,ControlPoles);
% estimator gains
EstimatorPoles = [-2000 -2500];
L = place(massSys.A.',massSys.C.',EstimatorPoles).';
% closed loop system
Acl = [A , -B*K; L*C , A-L*C-B*K];
Bcl = [B*K*[1;0] ; B*K*[1;0]];
Ccl = [1 0 0 0];
Dcl = 0;
clsys = ss(Acl,Bcl,Ccl,Dcl);
% step response
[y,t,x] = step(clsys);
figure
plot(t,y)
figure
plot(t,x(:,1:2)-x(:,3:4))
legend('pos estimation error','vel estimation error')
As expected, with the same ICs (zero), the true state and the estimated state are identical.
Comment: the plot above is not the same as what I get on my local installation using R2022a. Not sure what that's about.
Now check the response due to a non-zero IC on the plant and zero IC on estimator
[y,t,x] = initial(clsys,[1 0 0 0]);
figure
plot(t,x(:,1)-x(:,3)),grid,ylabel('pos estimation error')
figure
plot(t,x(:,2)-x(:,4)),grid,ylabel('vel estimation error')
These error responses should change depending on L. Try setting L = [0;0] and see if the step response and the initial-condition responses make sense.

13 Comments

Thanks @Paul.
1) Ah, that makes sense about L = 0 leading to convergence of the same time constant as the plant -- thanks.
And interesting that your observer poles were only ~3x the pole-placement poles; I thought the rule of thumb was ~10x.
2) I believe I have the equivalent state equations to yours (I know you didn't look at my code, so just mentioning), but in your output (and in MathWorks' examples), I think the velocity estimation error shouldn't be 1e-315, i.e. what looks like the subnormal-float floor. Yours is 1e-13, which makes more sense.
The setup you're using is the x = [x; x_hat] one; the one I'm using is the x = [x; e] one, but they should behave equivalently.
I also tried the formulation you used with x = [x; x_hat], and got the same 1e-315 error for both pos and vel.
Here's what I'm using (I haven't yet migrated notation to Angle, and am still saying Pos, but you get the idea :) )
A11 = massSys.A - massSys.B * K_lqr;
A12 = massSys.B * K_lqr;
A21 = zeros(size(massSys.A));
A22 = massSys.A - L * massSys.C;
A_clLqr_SO = [ A11, A12;
A21, A22 ]
eig(A_clLqr_SO)
B_SO = [ massSys.B; zeros(size(massSys.B)) ];
C_SO = [ massSys.C, zeros(size(massSys.C))];
sys_ClLqr_SO = ss(A_clLqr_SO, B_SO * K_lqr * [1;0], C_SO, 0);
gains = dcgain(sys_ClLqr_SO)
sys_ClLqr_SO.StateName = {'p', 'v', 'e_p hat', 'e_v hat'};
sys_ClLqr_SO.InputName = {'r(ref)'};
sys_ClLqr_SO.OutputName = {'theta'};
3) Out of curiosity, for what you see on your local machine with R2022a, is it related to what I'm seeing? (I know I'm running a different release, R2022b.)
I do see the 1e-13 vel error when running your code on my local machine.
1). That is the rule-of-thumb, or maybe 5x - 10x. I just picked estimator pole locations that I thought would work.
2) I converted my equations to [x;e] and got the same closed loop system as yours, using my K and L. However, I get the estimation error being identically zero for the step response. Recall that step() and lsim() both discretize the system. Maybe your system with its K and L suffers from a smidge of roundoff in the discretization process.
3) In my original version with [x; xhat] I was seeing velocity estimation error on the same order of magnitude of 1e-13 on my 2022a and Answers 2022b, but the shape of the curve was different. I suspect, but don't know for sure, that your 1e-315 issue is related to numerical error in the discretization process. If you're using lsim you can choose between zoh and foh methods. Consider forcing one, then the other, and then compare each to the unspecified, or default, case. And/or change your gains to mine and see if you get expected results.
John
John on 13 Mar 2023
Edited: John on 13 Mar 2023
Thanks @Paul. Interesting -- now I'm less sure I understand what's going on.
I tried your suggestions: using your L gains, and zoh/foh with lsim().
1) I used my gains in your code with step(). The result is below; the error is about the same magnitude as what you saw, so my gains should be okay. Largely a different sign, though, so the response is quite different.
Also, there's some unexplained difference between step() and lsim(), since using lsim() with my gains still gets 1e-315.
2) Using lsim() with my SS formulation and your gains, though, gets a larger (more realistic) error like the one you saw.
3) step() with my formulation, with either your gains or mine, gets 0 error (i.e. likely 1e-315). Note the scale of 1e-308.
4) I tried both zoh and foh for lsim(), with no difference
I don't understand what's going on with lsim() and step(), since the formulations should give the same results... and trying various gains (yours and mine) gives conflicting results...
Focusing just on the formulation that used the state vector [x;e]
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
massSys = ss(A, B, C, D, ...
'StateName', {'theta', 'theta_dot'}, ...
'InputName', {'u'}, ...
'OutputName', {'theta'});
ControlPoles = roots([1 1400 1e6]);
K = place(massSys.A,massSys.B,ControlPoles);
EstimatorPoles = [-2000 -2500];
L = place(massSys.A.',massSys.C.',EstimatorPoles).';
% Acl = [A , -B*K; L*C , A-L*C-B*K];
% Bcl = [B*K*[1;0] ; B*K*[1;0]];
Acl = [A-B*K , B*K; zeros(2) , A-L*C];
Bcl = [B*K*[1;0] ; zeros(2,1)];
Ccl = [1 0 0 0];
Dcl = 0;
clsys = ss(Acl,Bcl,Ccl,Dcl);
From the structure of clsys, the estimation error (e) should be zero in response to any input, as long as the initial conditions on e are zero.
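To see why it should stay exactly zero even after discretization (my reasoning, not something step() documents): the matrix exponential of a block upper-triangular matrix is block upper-triangular, so the ZOH-discretized system keeps e decoupled from both x and the input, and e[k+1] = Ad_ee*e[k] with e[0] = 0 gives zero forever. A quick check (Python sketch with hypothetical gains; only the block structure matters):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical gains on a 2nd-order plant; only the block structure matters
A = np.array([[0.0, 1.0], [-4.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[5.0, 3.5]])
L = np.array([[7.0], [20.0]])

# [x; e] closed loop: the e block has no input column and no coupling from x
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

# ZOH discretization: Ad = expm(Acl*Ts) is still block upper-triangular,
# so the discrete e recursion is driven by nothing but its own IC
Ts = 1e-4
Ad = expm(Acl * Ts)
print(np.max(np.abs(Ad[2:, :2])))  # ~0 (roundoff at most)
```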
3) step response
Using my gains
[y,t,x] = step(clsys);
% figure
% plot(t,y)
figure
plot(t,x(:,3:4))
legend('pos estimation error','vel estimation error')
The error plot is zero and the y-axis scale is +-1, as I would expect when plotting values that are identically zero for all time.
Verify that is, in fact, the case
max(abs(x(:,3:4)))
ans = 1×2
0 0
Get the same result using lsim to compute the step response
[y,t,x] = lsim(clsys,0*t+1,t);
max(abs(x(:,3:4)))
ans = 1×2
0 0
Repeat the exercise using John's gains
H = [ 2*0.7*1000, 1 ];
Q_lqr = H.'*H;
R_lqr = [ 0.01 ];
K_lqr = lqr(massSys, Q_lqr, R_lqr);
ecl = eig(massSys.A - massSys.B*K_lqr);
slowestPole = max(real(ecl));
ol_p1 = slowestPole * 10; % at min slowest pole x ~10
ol_p2 = slowestPole * 10.1;
L_lqr = place(massSys.A', massSys.C', [ol_p1 ol_p2])';
Acl = [A-B*K_lqr , B*K_lqr; zeros(2) , A-L_lqr*C];
Bcl = [B*K_lqr*[1;0] ; zeros(2,1)];
Ccl = [1 0 0 0];
Dcl = 0;
clsys = ss(Acl,Bcl,Ccl,Dcl);
[y,t,x] = step(clsys);
% figure
% plot(t,y)
% figure
% plot(t,x(:,3:4))
% legend('pos estimation error','vel estimation error')
max(abs(x(:,3:4)))
ans = 1×2
0 0
[y,t,x] = lsim(clsys,0*t+1,t);
max(abs(x(:,3:4)))
ans = 1×2
0 0
I'm not sure what's going on with your plot, but I'm seeing that, when the states are [x;e], the estimation error response to a step reference command input is zero using either your gains or mine. Have you checked the actual value of the estimation error, or just plotted it?
"I'm not sure what's going on with your plot, but I'm seeing that, when the states are [x;e], the estimation error response to a step reference command input is zero using either your gains or mine. Have you checked the actual value of the estimation error, or just plotted it?"
Oh, sorry @Paul, I set limits manually on the plot, just to show values < 1e-15, ie 0.
I could have just mentioned the max value of 0 as you did... It was indeed +/-1 before manual scaling
My point was to show the result of what you proposed: different combinations of your gains or mine, and lsim() or step(). They give different and seemingly inconsistent responses to what should be equivalent formulations. It's confusing what is occurring.
So we've established that the estimation error is zero for the reference command step response when using the [x;e] formulation and either set of gains.
I'll try lsim with the ref step sequence, using my implementation of [x;e], and your gains
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
massSys = ss(A, B, C, D, ...
'StateName', {'theta', 'theta_dot'}, ...
'InputName', {'u'}, ...
'OutputName', {'theta'});
H = [ 2*0.7*1000, 1 ];
Q_lqr = H.'*H;
R_lqr = [ 0.01 ];
K_lqr = lqr(massSys, Q_lqr, R_lqr);
ecl = eig(massSys.A - massSys.B*K_lqr);
slowestPole = max(real(ecl));
ol_p1 = slowestPole * 10; % at min slowest pole x ~10
ol_p2 = slowestPole * 10.1;
L_lqr = place(massSys.A', massSys.C', [ol_p1 ol_p2])';
Acl = [A-B*K_lqr , B*K_lqr; zeros(2) , A-L_lqr*C];
Bcl = [B*K_lqr*[1;0] ; zeros(2,1)];
Ccl = [1 0 0 0];
Dcl = 0;
clsys = ss(Acl,Bcl,Ccl,Dcl);
tf_s = 0.5;
ts_s = 1e-6;
t = (0 : ts_s : tf_s)';
refStartTime = 0.1;
distAngleTime = 2*tf_s/3;
distAngVelTime = 4*tf_s/5;
ref = zeros(numel(t), 1);
ref(t > refStartTime) = 1;
ref(t > 2*refStartTime) = 0;
ref(t > 3*refStartTime) = -1;
[y,t,x] = lsim(clsys,ref,t);
plot(t,y)
max(abs(x(:,3:4)))
ans = 1×2
0 0
The estimation error is zero. Is that not the same as your result?
@Paul That is what I see, and I do get 0 running your code. But the confusion seems to be in the formulations. From my prior post #1 and #2:
"1) ... Also, there's some unexplained difference between a step() and lsim(), since using lsim() with my gains still gets 1e-315.
2) Using lsim() with my SS formulation and your gains, though, gets a larger (more realistic) error that you saw."
I'll see if I can run some more tests on this, but the outcome is confusing.
Going with the [x; x_hat] formulation (the one you used initially, which seems to give reasonable error), I also have some doubts that the estimator is doing what it should. For example, after a disturbance input, the estimated states never converge, even with an LQGI (integral action) formulation.
( .m attached just in case)
Even plain LQG should converge on the states after a disturbance, since L*(y - y_hat) should yield a correction.
Below is LQG. Note the estimated state errors (right-hand axes, the red lines) and how they're constant. Similarly, x_n and x_n_hat never converge after the disturbance, even if the error was small before it.
So I tried LQGI. Below is LQGI. Note the estimated state errors (right-hand axes, red lines) and how they are constant, even though the errors are smaller than with LQG (I zoomed in vertically). Similarly, x_n and x_n_hat never converge after the disturbance, even if the error was small before it.
Perhaps the formulation is wrong, which is related to the unexpected lsim() error outputs?
Just because there is an error signal in the estimator doesn't mean the estimation error will go to zero. That's similar to a unity-feedback system with proportional control, where the steady-state error to a step input is nonzero if the plant doesn't have an integrator. Let's look at the LQ problem with the [x; xhat] formulation.
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
massSys = ss(A, B, C, D, ...
'StateName', {'theta', 'theta_dot'}, ...
'InputName', {'u'}, ...
'OutputName', {'theta'});
% control gains
ControlPoles = roots([1 1400 1e6]);
K = place(massSys.A,massSys.B,ControlPoles);
% estimator gains
EstimatorPoles = [-2000 -2500];
L = place(massSys.A.',massSys.C.',EstimatorPoles).';
% closed loop system
Acl = [A , -B*K; L*C , A-L*C-B*K];
Bcl = [B*K*[1;0] ; B*K*[1;0]];
Ccl = [1 0 0 0];
Dcl = 0;
Suppose we have a disturbance torque, d. Then the closed-loop system, with inputs [r; d], is
F = [B;zeros(2,1)]; % d does not directly feed to xhatdot
clsys = ss(Acl,[Bcl F],Ccl,0); % let z = [x; xhat] be the state vector
Because the system is stable, the state derivatives will be zero in response to constant external inputs. Consequently, the steady-state value of z must satisfy
clsys.A*zss + clsys.B*[rconstant;dconstant] = [0;0], or
zss = -inv(clsys.A)*clsys.B*[rconstant; dconstant]
Because we are only interested in the disturbance, let rconstant = 0 and dconstant = 1. Then
zss = -clsys.A\(clsys.B*[0;1])
zss = 4×1
0.2264
0
0.2064
-86.8000
In steady state the estimation error in response to constant disturbance is
ess = zss(1:2)-zss(3:4)
ess = 2×1
0.0200
86.8000
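As a cross-check on those numbers (my algebra, consistent with the zss computation above): subtracting the steady-state estimator equation from the plant equation cancels the B*K terms and leaves (A - L*C)*ess = -B*dconstant, so the estimation error can be computed without forming the closed loop at all. A Python sketch of that shortcut, with L worked out by matching the estimator characteristic polynomial (the same unique gain place() returns for this single-output plant):

```python
import numpy as np

# The thread's plant: J = 1e-5, wn = 2*pi*40, zeta = 1/pi (what the posted
# overshoot formula evaluates to)
wn = 2 * np.pi * 40
A = np.array([[0.0, 1.0], [-wn**2, -2 * wn / np.pi]])  # k/J = wn^2, b/J = 2*zeta*wn
B = np.array([[0.0], [1e5]])                           # 1/J
C = np.array([[1.0, 0.0]])

# Estimator gain for poles {-2000, -2500}, by matching coefficients of
# det(sI - (A - L*C)) = s^2 + (L1 + b/J)*s + (L1*b/J + wn^2 + L2)
L1 = (2000 + 2500) - 2 * wn / np.pi
L2 = 2000 * 2500 - L1 * 2 * wn / np.pi - wn**2
L = np.array([[L1], [L2]])

# Steady-state estimation error to a unit constant disturbance:
# (A - L*C) @ ess = -B  =>  ess = -(A - L*C) \ B
ess = -np.linalg.solve(A - L @ C, B)
print(ess.ravel())  # ~[0.02, 86.8], matching the zss-based numbers above
```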
Simulate the system for a step input disturbance
[y,t,x] = step(clsys(1,2));
figure;
plot(t,y),grid
yline(zss(1))
The steady state position is exactly as predicted.
figure
plot(t,x(:,1)-x(:,3)),grid
ylabel('position estimation error')
yline(ess(1))
ylim([0 .021])
The steady state position estimation error is exactly as predicted.
figure
plot(t,x(:,2)-x(:,4)),grid
ylabel('velocity estimation error')
yline(ess(2))
The steady state velocity estimation error is exactly as predicted.
The estimation error does converge; it just doesn't converge to zero, nor should it.
I'm sure the same analysis with the z = [x;e] formulation would yield the same results.
I didn't do the analysis for the LQI case, but one could go about it in the same way, using z = [x;xi;xhat], for a numerical answer to determine if the steady-state estimation error should be zero.
Also, if you're willing to assume that d is constant (or maybe even a sequence of steps, I'm not sure), there are techniques (well, at least one) to have the observer estimate the disturbance. That might improve the estimation error, and the estimated disturbance might be usable for feedback to cancel the actual disturbance (it's been a while; I'd have to work out the details).
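Sketch of that idea (a common construction, not worked out in detail here): augment the plant with a constant-disturbance state d (ddot = 0) entering through the input channel, and check that the augmented pair is observable, which is what licenses designing L on the augmented model. In Python, with the thread's plant numbers:

```python
import numpy as np

# The thread's plant (zeta = 1/pi, as the posted overshoot formula evaluates to)
wn = 2 * np.pi * 40
A = np.array([[0.0, 1.0], [-wn**2, -2 * wn / np.pi]])
B = np.array([[0.0], [1e5]])  # 1/J with J = 1e-5
C = np.array([[1.0, 0.0]])

# Augment with a constant-disturbance state: xaug = [x; d], ddot = 0,
# with d entering through the same channel as u
Aaug = np.block([[A, B], [np.zeros((1, 3))]])
Caug = np.hstack([C, np.zeros((1, 1))])

# If (Aaug, Caug) is observable, an observer on the augmented model can
# estimate d along with pos and vel (and dhat could be fed back to cancel d)
Obs = np.vstack([Caug @ np.linalg.matrix_power(Aaug, i) for i in range(3)])
print(np.linalg.matrix_rank(Obs))  # 3 => augmented system is observable
```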
John
John on 15 Mar 2023
Edited: John on 16 Mar 2023
@Paul Interesting. That seems a useful technique for checking expectations on convergence; thanks.
(1) "The steady state velocity estimation error is exactly as predicted. The estimation error does converge, it just doesn't converge to zero, which it shouldn't."
Agreed on position: I wouldn't expect position estimate to converge with a disturbance.
I'm surprised that vel_hat doesn't converge to 0, though, since the SS formulation knows (via the model dynamics) that if dist and ref are both constant then vel --> 0, and therefore vel_hat should --> 0.
(2) "I didn't do the anlaysis for the LQI case, but one could go about it in the same way, using z = [x;xi;xhat], for a numerical answer to determine if the steady state estimation error should be zero."
In the previous post I showed LQGI results (and included the .m file), i.e. I had checked z = [x; xi; xhat], and there was steady-state error:
"So, I tried LQGI. Below is LQGI. Note the estimated state errors (right-hand axes, and red lines), and how they are constant, even though the errors are smaller than LQG (i zoomed in vertically). Similarly x_n and x_n_hat never converges after the dist, even if error was small before the dist."
Here's the result for ref step:
zss =
1 (x1 pos)
2.3283e-11 (x2 vel)
6.8204e-07 (xi)
0.99998 (x1_hat pos_est)
-0.85487 (x2_hat vel_est)
ess_errors =
2.1374e-05
0.85487
and for dist step:
zss =
0 (x1 pos)
-4.5475e-14 (x2 vel)
-8.4556e-06 (xi)
1.6919e-05 (x1_hat pos_est)
0.67669 (x2_hat vel_est)
ess_errors =
-1.6919e-05
-0.67669
It's surprising to me that the velocity estimation error doesn't go to zero with LQGI, since that's not consistent with the dynamics of the model under steady-state dist and ref.
Perhaps LQGI also needs an xi estimated state, so z = [x; xi; x_hat; xi_hat] instead of the z = [x; xi; x_hat] I was using? I.e.:
x_hat_dot = A*x_hat + B*u + L(y-y_hat),
where x_hat_dot now becomes:
x_hat_dot = A*x_hat + B*u + L(y - y_hat) + Li*xi_hat
So Li*xi_hat is the new term to correct x_hat (both pos and vel) based on the output error; i.e. if stationary with a static difference (vel_hat converged, but not to 0), vel_hat could be nudged toward 0. Here, xi_hat = int(y - y_hat), compared with xi = int(ref - y) used for LQGI.
The problem is that the vel state will not actually be available in the system this will be used for -- hence estimating it. So in the real system I'll have no knowledge of the actual vel estimation error, and it seems I can only form xi_hat using y and y_hat. But perhaps that correction on position and position estimate also ends up affecting vel and vel estimate?
I'm not sure if that's a legitimate formulation, and more so -- I would have thought the vel estimation error would go to 0 with the standard LQGI formulation...
(3) "there are techniques (well, at least one) to have the observer estimate the disturbance, which might improve the estimation error and the estimated disturbance might be usable for feedback to cancel the actual disturbance"
Yes, I've implemented ADRC on some systems, but not at the same time as an observer. That still wouldn't resolve the underlying issue that vel_hat doesn't go to 0 in steady state when d = 0 and r = 0.
1). I guess I'm not as surprised in the LQ case that neither the position nor the velocity estimation error converges to zero in response to a step disturbance. I'd try to offer an intuitive explanation, but it would just come back to the math I showed for the computation of ess.
2). Perhaps we're getting different results for LQI with an observer. Here's what I get
Plant model
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
Control gains for augmented plant with states [x; xi]
ControlPoles = [-700+1j*700, -700-1j*700, -1000];
K = place([A zeros(2,1);-C 0],[B; -D],ControlPoles);
Estimator gains
EstimatorPoles = [-2000 -2500];
L = place(A.',C.',EstimatorPoles).';
Closed loop plant. The state vector is: z = [pos; vel; poshat; velhat; xi]. I made xi the last state because it made the math easier for me
Acl = [ ...
A , -B*K;
L*C , (A - L*C - B*K(1:2)), -B*K(3);
zeros(1,2) , -[1 0] , 0];
Bcl = [zeros(2,1); zeros(2,1); 1];
Ccl = [[1 0] zeros(1,2) 0];
Dcl = 0;
F = [B;zeros(2,1);0];
Closed loop system with state vector as defined above, and inputs [r ; d]
clsys = ss(Acl,[Bcl F],Ccl,0,'StateName',{'pos' 'vel' 'poshat' 'velhat' 'xi'},'InputName',{'r' 'd'});
Steady state values of the state vector to constant inputs.
format short e
zss = -clsys.A\clsys.B
zss = 5×2
1.0000e+00 2.0000e-02
2.9104e-12 0
1.0000e+00 0
9.7383e-13 -8.6800e+01
2.4286e-03 -2.9915e-04
Hmm, this is troubling. zss(2,:), zss(3,2), and zss(4,1) are not exactly the same values as I get on my local installation with 2022a, even though I copy/pasted the code here. Very strange ....
Anyway, note that the disturbance causes non-zero steady state pos, but zero steady state poshat.
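For anyone who wants to reproduce that zss disturbance column outside MATLAB, here's a NumPy sketch of the same closed loop. I've replaced place() with explicit characteristic-polynomial matching; for a single-input system the placed gain is unique, so the numbers should agree.

```python
import numpy as np

# Plant (zeta = 1/pi, as the posted overshoot formula evaluates to)
wn = 2 * np.pi * 40
bJ = 2 * wn / np.pi                 # b/J = 2*zeta*wn = 160
A = np.array([[0.0, 1.0], [-wn**2, -bJ]])
B = np.array([[0.0], [1e5]])        # 1/J with J = 1e-5
C = np.array([[1.0, 0.0]])

# Controller gains K = [K1 K2 K3] on the augmented plant [x; xi], found by
# matching det(sI - (Aaug - Baug*K)) = s^3 + 2400 s^2 + 2380000 s + 9.8e8,
# the polynomial of the poles {-700 +/- 700j, -1000}
K1 = (2380000 - wn**2) / 1e5
K2 = (2400 - bJ) / 1e5
K3 = -9.8e8 / 1e5
Kx, Ki = np.array([[K1, K2]]), np.array([[K3]])

# Estimator gain for poles {-2000, -2500}, by the same coefficient matching
L1 = 4500 - bJ
L2 = 2000 * 2500 - L1 * bJ - wn**2
L = np.array([[L1], [L2]])

# Closed loop with z = [pos; vel; poshat; velhat; xi] and xidot = r - poshat
Acl = np.block([
    [A,                -B @ Kx,                 -B @ Ki],
    [L @ C,            A - L @ C - B @ Kx,      -B @ Ki],
    [np.zeros((1, 2)), np.array([[-1.0, 0.0]]), np.zeros((1, 1))],
])
F = np.vstack([B, np.zeros((3, 1))])  # disturbance enters the plant only

# Steady state to a unit constant disturbance: Acl @ zss = -F
zss = -np.linalg.solve(Acl, F)
print(zss.ravel())  # pos ~ 0.02, poshat ~ 0, velhat ~ -86.8
```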
Step response due to reference command
t = 0:1e-6:.04;
[y,t,z] = step(clsys(1,1),t);
figure
plot(t,y,t,z(:,3)),ylabel('pos, poshat')
figure
plot(t,z(:,2),t,z(:,4)),ylabel('vel, velhat')
As expected pos goes to 1, poshat matches pos, vel goes to zero, and velhat matches vel.
Response to unit disturbance step followed by step back to zero
dist = 1*(t< .02);
[y,t,z] = lsim(clsys(1,2),dist,t);
figure
plot(t,y),ylabel('pos')
yline(zss(1,2))
Position response as expected, goes to expected steady state value then back to zero when the disturbance is removed.
figure;
plot(t,z(:,3)), ylabel('poshat')
yline(zss(3,2));
Position estimate as expected, goes to expected steady state value then back to zero when the disturbance is removed. Note that xidot = -poshat (when r = 0), so we must have poshat = 0 in steady state, even though pos is nonzero.
figure
plot(t,z(:,4)),ylabel('velhat')
yline(zss(4,2))
Velocity estimate as expected, goes to expected steady state value then back to zero when the disturbance is removed.
I guess it's this last plot that's troubling you, specifically that there is a steady-state velhat in response to the initial step, even though vel is zero?
AFAICT, all of the simulated results are consistent with expectations based on the math (unless I made a mistake in the formulation of clsys).
John
John on 16 Mar 2023
Edited: John on 16 Mar 2023
Interesting approach and that makes sense for the most part, thanks @Paul. But I'm not sure I follow why all the results are expected:
(1) "Response to unit disturbance step followed by step back to zero ... Position response as expected, goes to expected steady state value then back to zero when the disturbance is removed."
I see. What you showed does match your zss results.
But shouldn't it already be at a steady state of ~0 with a disturbance? Shouldn't the integral action pull pos back to 0, i.e. zss(1,2) = 0 as well?
Isn't the purpose of xi to pull what would normally be a nonzero steady state to 0, in the case that y and ref are mismatched in steady state? Similarly for a y and y_hat mismatch if pos_est doesn't match pos, pulling x_hat --> x.
If what you're saying is true, then that seems to imply that state estimation doesn't work well for regulation, but to my knowledge LQGI is used for regulation without any caveat about ref = 0 and steady state.
From my last post, I'm seeing pos and pos_est --> 0 with dist:
"For dist step:
zss =
0 (x1 pos)
-4.5475e-14 (x2 vel)
-8.4556e-06 (xi)
1.6919e-05 (x1_hat pos_est)
0.67669 (x2_hat vel_est)
"
(2) "Position estimate as expected, goes to expected steady state value then back to zero when the disturbance is removed. Note that xidot = -poshat (when r = 0), so we must have poshat = 0 in steady state, even though pos is nonzero."
Hmmm, I see what you mean. But oddly, per (1), my poshat does not go to 0 in steady state, and my zss doesn't show 0 either. It shows pos converging to 0 and pos_est settling at a nonzero (but close) value.
Also, this plot is from two of my posts ago (.m file also in that post), showing a nonzero pos_est with and without a disturbance, both with r = 0. This is the time interval 0.1 to 0.15.
See the 2nd subplot, red line (e1). Ref = 0 with dist, steady state at t ~ 0.12. Ref = 0 without dist, steady state at t ~ 0.15. e is small in both cases relative to the disturbance, i.e. poshat tracks pos relatively well even with a disturbance (but doesn't go to 0).
Vertical zoom on pos and pos_est, with more labels added. Pos --> 0 as expected via integral action, pulling pos_est with it:
(3) But overall: good point. These formulations (e.g. LQGI) don't include disturbances as inputs, so short of adding an ESO, the best the estimator can do is use the knowledge it has -- reference inputs and system dynamics -- which is incomplete.
So inherently any estimated state can only be as good as the total inputs into the observer, and it's expected that poshat won't track pos perfectly.
You showed the math bears that out, but I was trying to understand this poshat point intuitively...
(4) "I guess it's this last plot that's troubling you, specifically that there is a steady state velhat in response to the intial step, even though vel is zero?"
Yeah, exactly. In contrast to pos_est, it seems like there are enough inputs on the velocity side to know that velhat should be zero: this info can be constructed from already-known inputs, to determine whether a disturbance is at steady state or not. It wouldn't need a disturbance estimator. Summary: if the dist and ref are both constant, then it seems velhat could be bled off toward 0. Here's how it would know:
The system can know whether an external (non-plant) disturbance exists, because if velhat has converged to a nonzero value, and the known input ref is at steady state (derivative zero for longer than the closed-loop time constant), then a disturbance must be active.
Whether a disturbance is at steady state should also be knowable, since if pos - pos_est is constant for that duration, then the disturbance is also constant, due to the nature of a 2nd-order system.
Wouldn't that mean velhat can be bled off toward 0, and be more accurate? If the above reasoning is right (it might not be...), then something like this likely exists; I just don't know what to search for...
As you mentioned, an extended state observer, or disturbance rejection, are other schemes, but the above text description seems another strategy.
"AFAICT, all of the simulated results are consistent with expectations based on the math (unless I made a mistake in the formulation of clsys)."
I'll look through in more detail, but tentatively agreed (after you showed it :) )
As far as position goes, the integrator will drive the input to the integrator to zero for constant inputs. If the input to the integrator is r - poshat, as in my formulation, then in the steady-state response to a constant disturbance, poshat will be zero if r = 0. But as we saw, poshat = 0 doesn't mean that pos = 0.
OTOH, you said that your formulation drove pos = 0 and poshat nonzero in steady state. As far as I can see, the only way this can happen is if you feed back pos into the integrator instead of poshat. That's a reasonable approach if you have a pos sensor (but be aware that doing so introduces sensor errors into the system in a different way than when using sensed pos only as an input to the observer).
Here is my formulation using pos for feedback instead of poshat. Note that pos is used for feedback into the integrator and into the proportional feedback
xidot = r - x(1)
u = -K(1) * x(1) - K(2)*x2hat - K(3)*xi
overShootPec = 5;
zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);
wn_rPs = 2 * pi * 40;
%% OL system
J_kgm2 = 1e-5;
b = zeta * 2 * wn_rPs * J_kgm2;
k = wn_rPs^2 * J_kgm2;
A = [0 1; -k/J_kgm2 -b/J_kgm2];
B = [0; 1/J_kgm2];
C = [1 0];
D = 0;
ControlPoles = [-700+1j*700, -700-1j*700, -1000];
K = place([A zeros(2,1);-C 0],[B; -D],ControlPoles);
EstimatorPoles = [-2000 -2500];
L = place(A.',C.',EstimatorPoles).';
% feedback x(1) and xhat(2)
Acl = [...
[A zeros(2,3)] - B*[K(1) 0 0 K(2:3)] ;
[L*C , (A-L*C) , zeros(2,1)] - B*[K(1) 0 0 K(2:3)];
-[1 zeros(1,4)] ];
Bcl = [zeros(2,1); zeros(2,1); 1];
Ccl = [ [1 0], zeros(1,2), 0];
F = [B;zeros(2,1);0];
clsys = ss(Acl,[Bcl F],Ccl,0,'StateName',{'pos' 'vel' 'poshat' 'velhat' 'xi'});
format short e
zss = -clsys.A\clsys.B
zss = 5×2
1.0000e+00 3.3114e-17
0 -6.7447e-13
1.0000e+00 -2.0000e-02
1.9477e-12 -8.6800e+01
2.4286e-03 -3.0044e-04
Using pos for feedback instead of poshat, we now see that steady-state pos = 0 in response to the disturbance, but steady-state poshat is now nonzero. This is not surprising given that pos is now the input to xidot.
t = 0:1e-6:.04;
Step responses look nearly, if not exactly, identical to the step response when using xhat(1) for feedback.
[y,t,x] = step(clsys(1,1),t);
figure
plot(t,y,t,x(:,3)),ylabel('pos, poshat')
figure
plot(t,x(:,2),t,x(:,4)),ylabel('vel, velhat')
Response to disturbance
dist = 1*(t< .02);
[y,t,x] = lsim(clsys(1,2),dist,t);
figure
plot(t,y),ylabel('pos')
yline(zss(1,2))
Now pos = 0 in the steady-state response to the disturbance.
figure;
plot(t,x(:,3)), ylabel('poshat')
yline(zss(3,2));
But poshat is nonzero until the disturbance is removed.
figure
plot(t,x(:,4)),ylabel('velhat')
yline(zss(4,2))
The velhat plot looks about the same as above using xhat(1) for feedback.
As far as velocity goes, when I say "expected results" I mean expected based on the math that describes the specific implementation. Now I'm wondering, but am not sure, whether your expectations are statements of what should be feasible with some other design strategy than what we've been discussing in this thread. But I have been showing in this thread that the steady-state velocity estimate from lsim is fully consistent with expectations based on the math. If you want to estimate velocity better, maybe consider implementing an observer based on an augmented plant model to estimate the disturbance, or maybe even a reduced-order estimator would be helpful if you're willing to use sensed pos for feedback. I'm wondering if a reduced-order estimator would be better at driving the velocity error to zero, because it uses pos as an input.
John
John on 21 Mar 2023
Edited: John on 21 Mar 2023
Thanks @Paul. This was all very helpful, clarified a number of my misconceptions, and pointed me to many MATLAB features useful for intermediate debugging (many of which I didn't know existed). Thanks again.


More Answers (0)

Release: R2022b
Asked: 10 Mar 2023
Edited: 21 Mar 2023