Simulating the impulse response of a higher order filter.

I am trying to simulate an 8th order continuous-time filter in MATLAB. The filter transfer function is H(s) = (wch/(s + wch))^8, where wch = 2*pi*1e9 rad/s.
I am cascading 8 first-order filters by using the following code for the purpose:
clear all;
clc;
wch = 2*pi*1e9;
filter_1 = tf([wch],[1 wch]);
sys = filter_1^8;
bode(sys);
figure
impulse(sys)
Somehow, the Bode plot comes out fine, but the impulse response does not show anything.
If I simulate the same code with a second-order filter, I can get the impulse response to show up perfectly:
sys = filter_1^2
Obviously, there is something I am not doing correctly. It would be great if someone could point out the error. Thanks.

Accepted Answer

Naga on 25 Nov 2024 at 3:12
Hello Saqib,
By default, MATLAB might not simulate for a long enough period to capture the full impulse response of a high-order system. Specifying a custom time vector ensures that the simulation runs for an adequate duration. The 'linspace' function is used to create a time vector that defines the start and end times, as well as the number of points in the simulation.
clear all;
clc;
wch = 2*pi*1e9;
filter_1 = tf([wch], [1 wch]);
sys = filter_1^8;
bode(sys);
figure;
% Define a time vector for the impulse response simulation
t = linspace(0, 1e-8, 1000); % Adjust the time vector as needed
% Impulse response over the specified time vector
impulse(sys, t);
  2 Comments
saqib on 25 Nov 2024 at 3:19
Thank you both for your inputs. It seems that with a properly specified time vector, the impulse response is simulated properly.
I had one more question regarding this. It seems that for both the 2nd and 8th order filters, the time step is on the order of 0.1-1 ns. Why is it that the 2nd order filter can simulate properly without specifying a time vector, but not the 8th order filter?
Paul on 25 Nov 2024 at 3:31
Maybe you've uncovered a flaw in impulse. If the t vector is not specified, impulse is supposed to smartly define a t vector based on the system dynamics. But it seems like there is a problem as the system order increases:
wch = 2*pi*1e9;
filter_1 = tf([wch], [1 wch]);
for ii = 1:8
    sys = filter_1^ii;
    [y,t] = impulse(sys);
    dt(ii) = t(2);
end
format short e
dt
dt = 1×8
7.3294e-12 7.3294e-12 1.0000e-02 1.0000e-02 1.0000e-02 1.0000e-02 1.0000e-02 1.0000e-02
The dt that impulse() uses to discretize the model is way too large for a filter order greater than 2. The same issue occurs with the zpk or ss forms too.


More Answers (3)

Paul on 25 Nov 2024 at 3:08
Hi @saqib,
You're probably running into issues with numerical accuracy because the coefficients in the system are so large.
Generally speaking, zpk is preferred over tf.
And/or, consider rescaling the time variable, maybe from seconds to nanoseconds.
And/or, consider using the time argument to impulse to make sure it's using a small enough time step.
And/or, consider computing the impulse response of filter_1 and then getting the impulse response of the full system by repeated convolution (a rough sketch follows below).
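For example, a minimal sketch of that convolution idea (the step size dt and time grid below are my own choices, not from the original question):
% Impulse response of the first-order section on a fine grid, then convolve
% it with itself seven times to approximate the 8th-order response.
wch = 2*pi*1e9;
filter_1 = tf(wch, [1 wch]);
dt = 1e-12;                      % step chosen well below the time constant 1/wch
t  = 0:dt:1e-8;
h1 = impulse(filter_1, t);       % first-order impulse response
h  = h1;
for k = 2:8
    h = conv(h, h1)*dt;          % discrete approximation of continuous convolution
    h = h(1:numel(t));           % truncate back to the original time span
end
plot(t, h), grid on
xlabel('Time (s)'), ylabel('Amplitude')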
  2 Comments
Paul on 26 Nov 2024 at 22:38
Edited: Paul on 28 Nov 2024 at 20:10
Upon further inspection, it appears that the logic that impulse uses to select the step size, if not specified by the user, is flawed when the plant has only a repeated, real pole (I did not explore whether there is a problem if the plant has only a repeated, complex pair of poles).
For example, with a simple, second order system
p = 300;
h = zpk([],[-p, -p],1);
[y,t] = impulse(h);
syms s
figure
fplot(ilaplace(1/(s+p)^2),[0 0.1]); % exact answer
hold on
plot(t,y),grid
xlim([0,0.1])
legend('exact','impulse');
The logic can deal with two repeated poles, as long as there is at least one simple pole
h = zpk([],[-p, -p, -1],1);
[y,t] = impulse(h);
figure
fplot(ilaplace(1/(s+p)^2/(s+1)),[0 0.1])
hold on
plot(t,y),grid
legend('exact','impulse');
xlim([0 1])
In this case, impulse() is still smart enough to compute the time step based on the two fast, repeated poles
t(2)
ans = 3.0701e-04
I'm pretty sure that step (and possibly initial) will have the same problem in selecting the step size when the plant has only a repeated, real pole.
Edit: The problem occurs when the A matrix of the realization of the system has only repeated poles, whether they be real, complex, or a combination of both, and the right eigenvector associated with each pole is orthogonal to its corresponding left eigenvector (eigenvectors as computed by eig). There may be cases, as shown above, where the problem is not observed when it might be expected, because of a successful test against a numerical tolerance.
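One way to check that condition, as a rough sketch of my own using the same second-order example as above:
p = 300;
h = zpk([], [-p, -p], 1);
[V, D, W] = eig(ss(h).a);   % right (V) and left (W) eigenvectors of the A matrix
diag(W'*V).'                % values near zero flag defective (non-simple) modes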
Paul on 30 Nov 2024 at 3:14
Tech Support wouldn't go so far as to say there is a bug, but did say an enhancement request has been filed to address the issue. They also suggested another workaround is to specify the final time manually, which does work for the original problem as given.
wch = 2*pi*1e9;
filter_1 = tf([wch],[1 wch]);
sys = filter_1^8;
figure
impulse(sys,1e-8); % specify a final time



Andrew Ouellette on 28 Nov 2024 at 3:54
Hello,
You are encountering this behavior because your cascaded system is poorly conditioned numerically. Hopefully this is not surprising to you, as your transfer function has coefficients that vary in magnitude from the order of 10^0 to 10^78. In this case, I believe the best solution would be for you to select a different time unit for your system. Let's try with nanoseconds:
wch = 2*pi; %rad/nanosecond
filter_1 = tf([wch],[1 wch],TimeUnit="nanoseconds");
sys = filter_1^8;
f = figure();
t = tiledlayout(f,1,2);
bp = bodeplot(t,sys);
bp.Layout.Tile = 1;
ip = impulseplot(t,sys);
ip.Layout.Tile = 2;
As you can see, both the bode response and impulse response are computed correctly. If you wish to view the charts in seconds, you can then alter the chart units.
f2 = figure();
t2 = tiledlayout(f2,1,2);
bp2 = bodeplot(t2,sys);
bp2.Layout.Tile = 1;
ip2 = impulseplot(t2,sys);
ip2.Layout.Tile = 2;
bp2.FrequencyUnit = "rad/s";
ip2.TimeUnit = "seconds";
  1 Comment
Paul on 28 Nov 2024 at 15:44
Hi Andrew,
Your point is well taken as general guidance, i.e., transfer functions with large coefficients are to be avoided.
But, even after changing the time scale, tf isn't really all that good for systems with repeated poles.
Here is the system
wch = 2*pi; %rad/nanosecond
filter_1 = tf([wch],[1 wch],TimeUnit="nanoseconds");
sys = filter_1^8;
[y,t] = impulse(sys);
Convert to state space (as many functions do, including impulse) and check the eigenvalues.
sys = ss(sys);
eig(sys.a)
ans =
  -6.3652 + 0.0000i
  -6.3388 + 0.0591i
  -6.3388 - 0.0591i
  -6.2794 + 0.0787i
  -6.2794 - 0.0787i
  -6.2275 + 0.0515i
  -6.2275 - 0.0515i
  -6.2089 + 0.0000i
They should all be exactly -2*pi, but they are not.
Instead, use zpk and do the same thing.
wch = 2*pi; %rad/nanosecond
filter_1 = zpk(tf([wch],[1 wch],TimeUnit="nanoseconds"));
sys = filter_1^8;
sys = ss(sys);
eig(sys.a)
ans = 8×1
  -6.2832
  -6.2832
  -6.2832
  -6.2832
  -6.2832
  -6.2832
  -6.2832
  -6.2832
Now we get a much better realization.
More importantly, your example worked despite there being a flaw (I'm nearly certain of it) in how impulse (and others) determines the time step (actually, the flaw is in a function way down the calling chain from impulse). In short, that flaw in your example led to a "default" time step of 0.01
t(2)
ans = 0.0100
which is fine for pole locations with a time constant of
1/2/pi
ans = 0.1592
If you look at the example I showed in this answer, you'll see that the same flaw leads to the same time step of 0.01, even though the pole locations are much further out in the left half-plane (but not so far as to expect any numerical problems, IMO).
Let's modify your example and push the poles out further to the left, but not so far that we'd expect any problems:
wch = 2*pi*5; %rad/nanosecond
filter_1 = tf([wch],[1 wch],TimeUnit="nanoseconds");
sys = filter_1^8;
%sys = sys*tf(1,[1 5],TimeUnit="nanoseconds");
[y,t] = impulse(sys);
t(2)
ans = 0.0100
And, because of the same flaw we get the same time step, which is probably too large to accurately represent the dynamics of the plant.
Now add an additional slow pole for which a time step of 0.01 should be fine, i.e., we wouldn't expect this pole to change the time step, which should be driven by the fast poles
sys = sys*tf(1,[1 1],TimeUnit="nanoseconds");
[y,t] = impulse(sys);
t(2)
ans = 0.0029
That additional, slow, simple pole causes the code to go down the correct logic path, and now we get a much better time step for the system at hand.



Pascal Gahinet on 30 Nov 2024 at 13:46
Edited: Pascal Gahinet on 30 Nov 2024 at 13:47
Hello all,
Let's get to the bottom of this. The code for automatic step-size and final-time selection is written for general situations where you may have both slow and fast time scales. Just looking at the poles tells you little about which time scales are dominant, so we need to estimate the respective contribution of each mode to the response to figure out which time scale matters most. This is fairly easy when the poles are simple, much harder when some poles are repeated. For efficiency's sake, we ignore repeated poles on the grounds that they are fairly uncommon. Of course, this example is all about repeated poles, so the code is left guessing at what the appropriate Ts and Tf should be.
Arguably, we should do better when all poles have about the same natural frequency and the time scale is obvious. Point taken, and we will look into this. That being said, auto-ranging is a convenience that will never be bulletproof. When you see that IMPULSE picked an inadequate final time for your application, just override the default and all will be fine. Here, for example:
wch = 2*pi*1e9;
filter_1 = tf([wch],[1 wch]);
sys = filter_1^8;
impulse(sys,1e-8) % final time = 1e-8
Thanks for a spirited discussion and happy to take any suggestion!
  2 Comments
Paul on 30 Nov 2024 at 16:59
Hi Pascal,
Thanks for taking the time to respond. Always good for the community when a core developer participates in the discussion.
I'm afraid I have to disagree with both claims in this statement:
" ... we ignore repeated poles on the ground that they are fairly uncommon..."
Repeated poles that are all simple will not be ignored.
Example (1):
p = -100;
sys1 = ss(p*eye(3),eye(3),eye(3),0);
[y,t] = impulse(sys1);
t(2)
ans = 4.6052e-04
When I inspected the code (R2024a) I saw that the code actually wants to explicitly include all poles that are not simple as dominant modes that get into the logic for computing the sample time.
Here we have two fast, non-simple poles and one simple, stable, slow pole.
Example (2):
J = @(e,n) diag(e*ones(1,n)) + diag(ones(1,n-1),1); % Jordan block
sys2 = ss(blkdiag(J(p,2),-1),eye(3),eye(3),0);
Verify that sys2 has two modes that are not simple
[V,D,W] = eig(sys2.a);
format short e
diag(W'*V).' % values close to zero indicate corresponding modes are not simple.
ans = 1×3
2.2204e-14 2.2204e-14 1.0000e+00
[y,t] = impulse(sys2);
t(2)
ans =
9.2103e-04
Clearly, the fast, repeated poles are not ignored when determining the sample time.
If the system has only non-simple modes, even if they are not all the same, then we have an issue
Example (3):
sys3 = ss(blkdiag(J(p,2),J(1,2)),eye(4),eye(4),0);
[y,t] = impulse(sys3);
t(2)
ans =
1.0000e-02
In this case, the logic selects the sample time based on an assumption about the structure of the plant that is not warranted.
Let's go back to example (2), but put the simple pole at the origin.
Example (4):
sys4 = ss(blkdiag(J(p,2),0),eye(3),eye(3),0);
[y,t] = impulse(sys4);
t(2)
ans =
1.0000e-02
Again, we get the "default" sample time because of how the logic handles poles at the origin (combined with the other poles being non-simple), even if that's not really the best sample time (add a fourth pole at s = -1, and we'd be back to a sample time of approximately 1e-3; see the sketch below).
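A quick check of that parenthetical claim (my own sketch, reusing the J and p defined above):
% Append a simple, stable pole at s = -1 to Example (4); with a simple pole
% present, the step-size logic should again key off the fast repeated poles.
sys5 = ss(blkdiag(J(p,2), 0, -1), eye(4), eye(4), 0);
[y,t] = impulse(sys5);
t(2)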
In summary, we see that the logic wants to include repeated poles (simple or not) in the sample time computation, but (unintentionally?) does not if the plant has only non-simple, stable, repeated poles, w/ or w/o additional poles at the origin (and I think that logic should not be hard to fix, maybe I'm wrong). I didn't check what happens with poles in the open RHP or poles on the jw axis not at the origin.
I don't think repeated poles are uncommon. Consider a multi-input system that uses the same actuation for each input. The nominal model for such a system will have repeated poles.
Perhaps you meant that repeated, non-simple poles are uncommon. That may be the case in practice (I have no idea), but such systems are of interest in an academic setting.
Pascal Gahinet on 1 Dec 2024 at 11:41
Hi Paul
I stand corrected about my own code! I did not read past line 60 of ltipack.ssdata.timegrid and failed to see that (defective) repeated poles are added back on line 84, as you correctly stated. A bad case of jumping to the wrong conclusion; I should have stepped through the debugger first.
The real issue is the exit condition on line 75 which is triggered here because all mode contributions are set to zero for defective repeated poles. This is indeed a bug and the fix is simple: Change this condition to "if isempty(iDom)" after line 84, which was the original intent anyway. I verified this fix and it will be included in R2025a.
Finally, the difficulty is indeed with non-simple or defective repeated poles (Jordan blocks). Something like A = -eye(3) is not a problem because the eigenvalues -1,-1,-1 are perfectly conditioned (see CONDEIG).
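As a small illustration of that distinction (my own sketch, using condeig as suggested above):
A1 = -eye(3);                   % repeated eigenvalue -1, perfectly conditioned
A2 = [-1 1 0; 0 -1 1; 0 0 -1];  % 3x3 Jordan block: defective repeated eigenvalue
condeig(A1)                     % condition numbers equal to 1
condeig(A2)                     % large (possibly infinite) condition numbers flag the defective modes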
Thank you for your perseverance, for taking the time to share your insights, and for helping us improve the software!

