
explain sparse convolution algorithm

Robert Alvarez on 7 Jun 2017
Commented: David Goodmanson on 25 Jun 2017
There are (at least) two functions to convolve two sparse arrays on the File Exchange: sconv2 and the convn method of the n-D sparse array class. Both use a similar algorithm, which I have rewritten for 1D data:
function C = SparseConv1(A, B)
assert( isvector(A) );
A = A(:);
assert( isvector(B) );
B = B(:);
[idxA,~,avals] = find(A);
[idxB,~,bvals] = find(B);
C = avals(:)*bvals(:).';
idxArep = repmat(idxA(:),1,numel(idxB));
idxBrep = repmat(idxB(:).',numel(idxA),1);
% sparse adds together elements that have duplicate subscripts,
% but I don't understand how this gives the convolution.
C = sparse(idxArep(:) + idxBrep(:) - 1, ones(numel(idxArep),1), C(:));
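For comparison, here is a rough Python/NumPy sketch of the same algorithm (my own translation, not from either File Exchange submission). `np.bincount` plays the role of `sparse()`'s accumulation of values with duplicate indices; the `minlength` argument pads the trailing zeros so the result has the full convolution length:

```python
import numpy as np

def sparse_conv1(a, b):
    # Mirror of the MATLAB SparseConv1: take every product of a
    # nonzero of a with a nonzero of b, and accumulate each product
    # a[i]*b[k] into output slot i+k (0-based; i+k-1 in MATLAB's
    # 1-based indexing).
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    ia = np.flatnonzero(a)            # indices of nonzeros in a
    ib = np.flatnonzero(b)
    vals = np.outer(a[ia], b[ib])     # every product a[i]*b[k]
    idx = ia[:, None] + ib[None, :]   # 0-based output index i+k
    # bincount adds together values with duplicate indices, just as
    # sparse() does; minlength pads trailing zeros to the full
    # convolution length len(a)+len(b)-1.
    return np.bincount(idx.ravel(), weights=vals.ravel(),
                       minlength=len(a) + len(b) - 1)
```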
Question: Can someone derive the algorithm from the traditional sum of product of offset vectors formula?
Or provide a reference to the derivation?
edit: 6/11/17 I corrected the expressions for idxArep and idxBrep
edit: 6/23/17 I added another simplified version as an answer that shows the algorithm more clearly
  4 Comments
Robert Alvarez
Robert Alvarez on 12 Jun 2017
Edited: Robert Alvarez on 12 Jun 2017
I think one way to derive the algorithm is to observe that we can expand the A vector as a sum of basis vectors {u_k} of length numel(A), each with a '1' at location k and zeros elsewhere, multiplied by the scalars A(k):
A = Sum[k=1..numel(A)] A(k)*u_k
With a similar expansion for B.
The u_k vectors are orthonormal.
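A small Python/NumPy illustration of that argument (my own sketch): convolution distributes over the basis expansions of A and B, and conv(u_i, u_k) is itself a unit vector with its single '1' at position i+k-1, so each product A(i)*B(k) lands in output slot i+k-1:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([3.0, 5.0, 7.0, 9.0])

n = len(A) + len(B) - 1
acc = np.zeros(n)
for i, ai in enumerate(A):
    for k, bk in enumerate(B):
        # conv(u_i, u_k): a single 1 at position i+k (0-based,
        # i.e. i+k-1 in MATLAB's 1-based indexing)
        e = np.zeros(n)
        e[i + k] = 1.0
        acc += ai * bk * e
# acc now equals np.convolve(A, B)
```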


Answers (2)

David Goodmanson
David Goodmanson on 12 Jun 2017
Edited: David Goodmanson on 12 Jun 2017
Hi Robert,
Since the posted code is oriented toward sparse matrices, it uses 'find' to eliminate the calculation for zeros contained in A and B, of which there are probably a lot. The code below parallels the posted code, but for simplicity it takes all possible values of A and B and does not bother with find. This means that idxA = 1:length(A), and similarly for B. It includes the correction, thanks to Bruno Luong.
Convolution has the form
D(j) = Sum{i} A(i)*B(j+1-i).
This is equivalent to
D(j) = Sum{i,k} A(i)*B(k) --- with the condition k = j+1-i --> j = i+k-1.
(The 1 is there so that when i=1 and k=1, then j=1 and not 2.)
The algorithm needs every possible product of an element of A with an element of B, and the outer product M = A*B.' is a matrix of this form, where
M(i,k) = A(i)*B(k).
Each matrix index pair (i,k) corresponds to j = i+k-1, and the values of j look like
[1 2 3 ...
 2 3 4 ...
 3 4 5 ...]
You just need to construct a matrix like that in order to grab all the right entries of M for each j, and sum over those entries. At this point it's probably easiest to do an example.
>> A = [1 2 3].'; B = [3 5 7 9].';
>> M = A*B.'
M =
3 5 7 9
6 10 14 18
9 15 21 27
>> nA = length(A); nB = length(B);
>> indA = repmat((1:nA)',1,nB)
>> indB = repmat(1:nB,nA,1)
>> Z = indA+indB-1
indA =
1 1 1 1
2 2 2 2
3 3 3 3
indB =
1 2 3 4
1 2 3 4
1 2 3 4
Z =
1 2 3 4
2 3 4 5
3 4 5 6
Summing entries in M by eye, for constant j in Z, gives 3, 11, 26, etc., compared to
>> conv(A,B).'
ans = 3 11 26 38 39 27
The last command in the original code reads out the matrices (in this notation) M, indA and indB column by column (so everything still matches up), creates indices (j,1) for a sparse column vector and does the sum as you mentioned, where sparse matrix elements with the same indices are added.
The two repmat lines can be replaced more compactly by
[indA indB] = ndgrid(1:nA,1:nB)
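The whole worked example compresses nicely into a few lines of Python/NumPy (again a sketch of my own, not the posted MATLAB): build the outer product M and the 0-based index matrix Z, then sum the entries of M that share the same Z value. `np.add.outer` plays the role of the two repmat lines (or ndgrid), and `np.bincount` does the constant-j summation:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([3.0, 5.0, 7.0, 9.0])

M = np.outer(A, B)   # M[i,k] = A[i]*B[k]
# Z[i,k] = i + k (0-based version of i+k-1)
Z = np.add.outer(np.arange(len(A)), np.arange(len(B)))
# sum entries of M sharing the same output index j
C = np.bincount(Z.ravel(), weights=M.ravel())
# C == [3, 11, 26, 38, 39, 27], matching conv(A,B) above
```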
  8 Comments
Robert Alvarez
Robert Alvarez on 14 Jun 2017
I found a bug in SparseConv1. The following test case gives a result that differs from conv: with x = [0 1 2 3 4] and h = [5 6 0], SparseConv1(x,h) outputs [0 5 16 27 38 24], while conv(x,h) outputs [0 5 16 27 38 24 0]. Note the zero at the end. I need to understand the algorithm better to fix this bug.
David Goodmanson
David Goodmanson on 15 Jun 2017
Good catch. This is not exactly a bug, because you end up with a sparse matrix all of whose nonzero elements are correct. If you do some test cases with both leading and trailing zeros in x and h, then convert from sparse to full, you will see that the leading zeros come out correctly, the nonzero entries occur in the right locations, and only the trailing zeros are missing.
When 'full' converts from sparse it is able to put in the leading zeros before the first nonzero element because the answer has to begin with element 1. But 'full' has no way of knowing how long the output of conv is supposed to be, which is nA+nB-1. If you put that info into sparse,
C = sparse(idxArep(:) + idxBrep(:) - 1, ones(numel(idxArep),1), C(:), nA+nB-1, 1)
then full is able to come up with the right number of trailing zeros.
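The same effect is easy to see in a Python/NumPy rendering of the algorithm (my sketch; `np.bincount`'s `minlength` plays the role of passing the output size nA+nB-1 to sparse): with h = [5 6 0], the largest occupied index is below nA+nB-2, so an accumulator sized only by the data comes out one element short.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
h = np.array([5.0, 6.0, 0.0])
ix, ih = np.flatnonzero(x), np.flatnonzero(h)
vals = np.outer(x[ix], h[ih]).ravel()
idx = (ix[:, None] + ih[None, :]).ravel()   # 0-based output indices

# sized by the largest occupied index: trailing zero is lost
short = np.bincount(idx, weights=vals)
# sized by the full convolution length nA+nB-1: trailing zero kept
full = np.bincount(idx, weights=vals, minlength=len(x) + len(h) - 1)
# len(short) == 6, len(full) == 7
```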
I think the zeros are a distraction, and the best way to approach the sparse conv algorithm is to work it out from beginning to end without treating nonzero entries in h and x as special. In the not-special case you don't need to use 'find', idxA is simply 1:nA, and avals = A; similarly for B. Then come back later and see what happens with special treatment for zeros in A and B.
Sparse is being used in two ways, as a method to save space by not saving zeros explicitly, and for its feature of adding duplicate entries. The second one is the important one. As I mentioned in a previous comment, finding a way to do that using extra indices and without resorting to sparse is how to get to a real proof, and it's a good topic for discussion.



Robert Alvarez
Robert Alvarez on 23 Jun 2017
A simplified version that shows the algorithm more clearly
% Simplified code version 2
function C = Conv1x(A,B)
idxA = find(A);
idxB = find(B);
idxArep = repmat(idxA(:),1,numel(idxB));
idxBrep = repmat(idxB(:).',numel(idxA),1);
% indices into the output convolution
idxs4conv = idxArep(:) + idxBrep(:) - 1;
% compute the convolution from the indices and values
C = zeros(numel(A) + numel(B) - 1, 1);
idxs4conv_uniq = unique(idxs4conv);
for k = 1:numel(idxs4conv_uniq)
    idx_uniq = idxs4conv_uniq(k);
    idxs_repeated = find(idxs4conv == idx_uniq);
    for ki = 1:numel(idxs_repeated)
        kA = idxArep(idxs_repeated(ki));
        kB = idxBrep(idxs_repeated(ki));
        C(idx_uniq) = C(idx_uniq) + A(kA)*B(kB);
    end
end
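A Python/NumPy rendering of the same loop-based version (my own translation, assuming NumPy), which makes the accumulation explicit: for each distinct output index j, add in every product A(i)*B(k) whose indices satisfy i+k-1 = j (0-based: i+k = j):

```python
import numpy as np

def conv1x(a, b):
    # Loop-based translation of the simplified Conv1x above.
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    ia = np.flatnonzero(a)
    ib = np.flatnonzero(b)
    # all (i,k) pairs, paired up the same way the outer sum is raveled
    idx = (ia[:, None] + ib[None, :]).ravel()  # output index i+k
    pa = np.repeat(ia, len(ib))                # source index into a
    pb = np.tile(ib, len(ia))                  # source index into b
    c = np.zeros(len(a) + len(b) - 1)
    for j in np.unique(idx):                   # each distinct output slot
        for m in np.flatnonzero(idx == j):     # every pair mapping to it
            c[j] += a[pa[m]] * b[pb[m]]
    return c
```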
  1 Comment
David Goodmanson
David Goodmanson on 25 Jun 2017
Looks like you have scoped this out in its entirety. Good one! There's a lot to be said for working out the details; most people just plug into an algorithm like this one and are not motivated to do so.
Earlier in this thread I maintained that using 'find' and the like does not lead to a mathematically rigorous process. But having considered further I think that there is nothing wrong with that method.

