Fastest way to compute min and max values over an array subset

I'm trying to compute the minimum and maximum values over subsets of an array. My current approach is fairly simple but is noticeably slow for the size of data that I am working with.
e.g.
data = rand(1,100);
starts = [5 10 15 20];
stops = [9 14 19 24];
mins = zeros(1,4);
maxes = zeros(1,4);
for i = 1:4
    data_subset = data(starts(i):stops(i));
    mins(i) = min(data_subset);
    maxes(i) = max(data_subset);
end
Note that in reality data is an array with millions of elements, and I am grabbing a couple thousand subsets. The slow part seems to be creating data_subset. I tried writing a mex function to avoid the memory allocation, but the whole thing was a wash, largely due to the multi-threaded nature of min and max in Matlab. I tried calling min and max from mex, but this requires creating an mxArray, which, from what I can tell, requires actual data copying, not just pointer adjustment.
Any suggestions or thoughts on something I might have missed?
Thanks, Jim

4 Comments

How did you create the MEX function? Do the intervals have the same lengths as in your example? Then reshaping the vector to a matrix and one call to min and max is efficient. Do you have the parallel processing toolbox? Can NaNs appear in the data?
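A minimal sketch of that reshape idea (assuming equal-length chunks whose width W divides the data length evenly; an uneven tail would need separate handling):

```matlab
% Equal-length chunks of width W: reshape so each column is one chunk,
% then take min/max over each column in a single vectorized call.
data = rand(1, 100);
W = 5;                          % chunk length (assumed to divide numel(data))
M = reshape(data, W, []);       % 5-by-20; reshape itself copies no data
mins  = min(M, [], 1);          % 1-by-20 vector of per-chunk minima
maxes = max(M, [], 1);          % 1-by-20 vector of per-chunk maxima
```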
Hi Jan,
The intervals don't necessarily have the same lengths but they are generally close to the same lengths. I do have access to the parallel processing toolbox. NaNs could appear in the data, although I'm willing to have two processing methods based on whether or not NaNs are present.
Regarding the mex function, here's the important part:
for (mwSize iGroup = 0; iGroup < n_groups; iGroup++) {
    row_start = (mwSize) starts1[iGroup];
    row_end   = (mwSize) stops1[iGroup];  /* assuming a stops1 array analogous to starts1 */
    xp = data_offset + row_start - 1;
    n_values = row_end - row_start + 1;
    mxSetPr(x, xp);       /* point the reused mxArray at the subset... */
    mxSetM(x, n_values);  /* ...and adjust its length */
    assignment_offset++;
    mexCallMATLAB(2, lhs, 1, &x, "min");
    all_min[assignment_offset]   = *mxGetPr(lhs[0]);
    all_min_I[assignment_offset] = *mxGetPr(lhs[1]);
}
This works if xp == data_offset (a pointer to the first value of data), but it throws an error and crashes if xp != data_offset.
Jim
I'm going to publish a hand-coded MEX version on the FEX. What is the desired result if a NaN appears in a chunk? Should it be ignored, or should NaN be returned for the min and max values?
The mex is single threaded, but 10 times faster than the posted Matlab code for a chunk length of 1000, and 30 times faster for 10 elements per chunk. I assume calling it from a PARFOR loop will squeeze out even more speed.
Thanks Jan. In general I think following convention and ignoring it would be best.
i.e. min([1 NaN 2]) => 1, not NaN


 Accepted Answer

See FEX: ChunkMinMax. Automatic multi-threading is not trivial, because starting a thread for each interval can waste a lot of time when the intervals are tiny. You can try splitting the list of intervals across a PARFOR loop, but avoid splitting the job per interval, because that leads to cache-line collisions (false sharing). So better to let the 1st half and the 2nd half of the intervals be processed in different threads.
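A sketch of that splitting strategy with PARFOR (I'm assuming ChunkMinMax takes (data, starts, stops) and returns per-interval minima and maxima; check the FEX page for the actual signature):

```matlab
% Split the *list of intervals* (not the data) into a few contiguous blocks,
% one per worker, so each worker handles many intervals and writes only to
% its own output cell (avoiding per-interval task overhead and shared writes).
nWorkers = 2;
edges = round(linspace(0, numel(starts), nWorkers + 1));
minsC  = cell(1, nWorkers);
maxesC = cell(1, nWorkers);
parfor k = 1:nWorkers
    idx = edges(k)+1 : edges(k+1);
    [minsC{k}, maxesC{k}] = ChunkMinMax(data, starts(idx), stops(idx));  % assumed signature
end
mins  = [minsC{:}];
maxes = [maxesC{:}];
```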

2 Comments

Thanks Jan!
I had written something similar but it was not nearly as well polished and it was surprisingly slower. I see numerous ways to clean things up after looking at your code. I had thought for sure that the Matlab versions of min and max were multithreaded but now I'm not so sure ....
The performance benefit seems to vary significantly with the size of the chunks. If I increase the chunk size to 1e5 I get only about a 50% benefit, although at 1e6 the execution time relative to the Matlab version improves to about 15%. Note that both of these were run on a data size of 1e8. Reshape is actually faster in all cases, with the best case in my tests being about 65% execution time relative to the mex (~75% on average).
The tradeoffs I see are that with reshape, I have to be a bit clever about how I use it, but it is just Matlab code. The mex code handles uneven size chunks, but requires making sure I have the code compiled for all users and that I'm always working with doubles.
Thanks again! Jim
You are welcome! Min and Max in Matlab are multi-threaded when the data size exceeds a certain limit. For SUM, e.g., the limit is 89,000 elements.


More Answers (4)

I want to give credit to Jan's comment which inspired this answer. However, it assumes that the subsets are all relatively small (i.e., they can all be held in RAM simultaneously when NaN-padded to the length of the largest subset).
function [mins, maxes] = doProcess(data, starts, stops)
starts = starts(:).'; stops = stops(:).';   % ensure row vectors
len = stops - starts;
maxlen = max(len);
data(end+maxlen) = 0;                       % pad so out-of-range indices stay valid
idx = bsxfun(@plus, starts, (0:maxlen).');  % one column of indices per subset
nanmap = bsxfun(@gt, idx, stops);           % flag indices past each subset's stop
data_subsets = data(idx);
data_subsets(nanmap) = nan;                 % min/max ignore NaNs by default
mins = min(data_subsets);
maxes = max(data_subsets);

3 Comments

I had considered reshaping and was perhaps somewhat quick to dismiss it. I'll give it a try. I might need to do the reshaping in mex on a subset of the data and then just compute min and max on the last small chunk.
i.e., consider having 107 data points, which I want to divide into 10s for calculating the min and max values -> 1:10,11:20, etc
I could do: reshaped_data = reshape(data(1:100),10,10); Then calculate the min and max on 'reshaped_data', and add on min(data(101:107)) and max(data(101:107))
This, however, could require a pretty significant duplication of the data, depending on the array size. But if I throw some in-place mex work into it, then we might be set!
I'll provide updates after I've had time to give these approaches a try.
This however could, depending on the array size, require a pretty significant duplication of the data.
Not from the reshaping. Reshaping doesn't duplicate any data. Also, it doesn't sound like your data is that big if it only has "millions of elements". So why care about duplication anyway?
Are your subsets always intervals connected end-to-end? And are they always the same length but for the final interval (due to divisibility issues)? If so, just pad "data" with NaNs so that all intervals are of equal length. Then, reshape and be done with it!
Thanks Matt for your help.
I should clarify, the data duplication comes from data(1:100), not the reshaping. From what I can tell Matlab does not try to maintain, as they say in Python, views of the original data set. If it did then data(1:100) wouldn't necessarily need to lead to data duplication either.
Point taken regarding padding with NaNs and just being done with it!
Thanks, Jim


Matt J on 31 Dec 2014 (edited 31 Dec 2014)
This tool looks like it would allow you to make data_subset share memory from "data" without actually copying it.
That would probably reduce the overhead of creating data_subset that you mention.

2 Comments

This looks like exactly what I need! I'll give it a try and update with how it goes.
Matt,
I tried the code and it worked a couple of times, then crashed when I put it into a loop (perhaps due to some JIT change that wasn't present with line-by-line evaluation?). Anyway, I think this approach is probably best avoided. It would be nice if Matlab had some hidden way of bypassing the mxSetPr check, since then I think my mex code would work fine. But perhaps it would be too hard to guarantee that some later optimization wouldn't break things.


Matt J on 31 Dec 2014 (edited 31 Dec 2014)
Are all your subsets contiguous and of the same size, as in your example? If so, and if you have the Image Processing Toolbox, the imdilate(X) command will perform a local max filter over all fixed-size subsets of X (and imdilate(-X) gives a local min filter). You can then just grab the results for the particular subsets you want from the filter output.
If the size of the subsets varies (but not too much), you could try grouping together all subsets of a common size, and do as above in a loop over the different subset sizes.
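For the equal-size case, the imdilate approach might look like this (a sketch; the exact output index for each window depends on the structuring element's origin, so verify the offset for your W):

```matlab
% Local max/min filtering with a flat 1-by-W neighborhood.
% imdilate(data, ones(1,W)) returns, at each position, the max over a
% W-wide window centered there; negating data turns it into a min filter.
W = 5;                                    % common subset length (assumed)
localMax =  imdilate(data,  ones(1, W));
localMin = -imdilate(-data, ones(1, W));
% For a subset starting at s, the window max/min sits near index
% s + floor(W/2); check the offset for even W before relying on it.
```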
Depending on how many subsets you are going to use and the size of the data, I think it may be worthwhile to spend some resources sorting the data just once, and then using the sorted data to find the extrema of the subsets. The resulting code would look similar to this (not verified):
data = rand(1,100);
starts = [5 10 15 20];
stops = [9 14 19 24];
mins = zeros(1,4);
maxes = zeros(1,4);
[B, I] = sort(data); % This call would be lengthy for large datasets, but runs just once
rank = zeros(size(data));
rank(I) = 1:numel(data);    % rank(k) = position of data(k) in the sorted array B
for i = 1:4
    r = rank(starts(i):stops(i));   % ranks of the values in this subset
    % min/max now operate on integer ranks; B converts them back to values
    mins(i)  = B(min(r));           % smallest rank -> smallest value
    maxes(i) = B(max(r));           % largest rank  -> largest value
end
I haven't tried this yet (sorry), but even if the code needs some fixing, the key idea is to run through the data values just once, instead of calling min and max on the raw data for each subset.
