Can someone suggest how to make this code faster?
Can someone suggest a trick to make this code faster?
"all_DOY" = 1000*15000*23 ; %(i,j,k) such that k dimension is the day of the year whic is random, while (i,j) are my pixel location starting from (1,1) to (1000,15000). The values of day of the year (DOY) in the kth dimension rranges range from -1 to 366
"all_variable" = 1000*15000*23; This is my variable value for (i,j,k) for the (i,j) position and k dimension i.e., day of the year in the "all_DOY" matrix. The values in the kth dimension range from -3000 to 15000
all_variable_new = 1000*15000*366
I'm trying to update the "all_variable_new" such that, my k dimension is from 1:366 (days of the year). for each day of the year obtained from "all_DOY" matrix, the variable value is updated in the "all_variable_new" for that respective (i,J,k) from "all_variable"
all_variable_new = zeros(1000,15000,366,'int16');
all_variable = load('all_variable.mat');
all_DOY = load('all_DOY.mat');
tic
for ii = 1:size(all_DOY,1) % for each ii
    for jj = 1:size(all_DOY,2) % for each jj
        idx = squeeze(all_DOY(ii,jj,:)); % squeeze the kth dimension. idx is the DOY, which is also the position in all_variable_new
        variable = squeeze(all_variable(ii,jj,:)); % obtain the variable values
        variable(idx==-1) = []; % if -1, remove
        idx(idx==-1) = []; % if -1, remove
        Ind = idx;
        if isempty(Ind)
            all_variable_new(ii,jj,:) = NaN; % if there are no idx values, fill with NaN
        else
            for mm = Ind
                all_variable_new(ii,jj,mm) = variable; % for the DOY values obtained, write the variable values at those positions
            end
        end
        clear idx Ind mm
    end
end
toc
4 Comments
Luna
on 10 Mar 2020
What is all_DOY? And can you explain what exactly you want to achieve as a result, so that we can provide a solution without loops? It is hard to understand from your code with no comments.
I updated the code with comments. Let me know if you need more details.
Writing good comments is hard. Unfortunately, your comments are not that useful: they mostly paraphrase the code but don't explain its purpose, so we still have no idea what your code is doing. It's not helped by the poor choice of variable names. A good variable name describes the purpose of the variable; calling a variable "variable" tells you nothing about its purpose.
We need to know:
- What do the inputs represent?
- It looks like you have 3D variables, what does each dimension represent? and if within one dimension, the rows/columns/pages represent different things, what are these?
- Why is all_variable of class int16?
- What is the purpose of the code? What does it try to achieve?
That said, there are clearly some things that can be improved in your code. It's doubtful they'll have much of an impact, though:
- The clear idx Ind mm is completely pointless. The variables will be replaced anyway on the next iteration of the loop.
- The copy of idx into Ind is completely pointless. You may as well continue using idx and never bother creating Ind.
- The for mm = Ind is completely pointless. Ind is guaranteed to be a column vector, so the loop will only have one iteration with mm exactly equal to Ind, so the mm loop can be replaced altogether by:
all_variable_new(ii,jj,idx) = variable;
nlm
on 10 Mar 2020
- What do the inputs represent?
- It looks like you have 3D variables, what does each dimension represent? and if within one dimension, the rows/columns/pages represent different things, what are these?
"all_DOY" : is a 3D matrix of size 1000*15000*23 where (i,j) represent pixel position, while k represent day of the year (DOY). the values of DOY randomly varies in the range of -1 to 366.
My variable is the temperature data.
"all_variable": is a 3D matrix (i,j,k), where (i,j) represent pixel position, while k represent temperature. Temperature ranges from -3000 to 10000.
"all_variable_new": is a 3D matrix of size 1000*15000*366 and I want to make a 3D matrix (i,j,), where (i,j) represent pixel position, and store the temperature values sequentially according to the DOY values obatined from "all_DOY" .
- Why is all_variable of class int16?
to reduce the memory of the variable.
- What is the purpose of the code? What does it try to achieve?
I want to make a "all_variable_new", such that the temperature variable is arranged according to the DOY.
Accepted Answer
If I understood correctly:
pixels_dayofyear = load('all_doy.mat');
pixels_temperature = load('all_variable.mat');
assert(isequal(size(pixels_dayofyear), size(pixels_temperature)), 'Size of matrices doesn''t match!');
%sort each pixel (row, column) temperature (pages) according to the corresponding day of year.
%temperature for invalid days of year (-1) are all replaced by 0 at the end of the page.
pixels_dayofyear(pixels_dayofyear == -1) = NaN; %replace by a value that is sorted last. Could be Inf instead of NaN
[sorteddoy, order] = sort(pixels_dayofyear, 3); %sort the day of year of each pixel. Get the new order
[rows, cols, ~] = ndgrid(1:size(pixels_dayofyear, 1), 1:size(pixels_dayofyear, 2), 1:size(pixels_dayofyear, 3)); %create row and column indices replicated along the correct dimensions for sub2ind
newpixels_temperature = pixels_temperature(sub2ind(size(pixels_temperature), rows, cols, order)); %reorder temperature according to the sorting of the days
newpixels_temperature(isnan(sorteddoy)) = 0; %after sorting, the invalid values sit at the end of each page; mask with the sorted days, not the original ones
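On a made-up toy array (sizes and values chosen only for illustration, not from the question), the sort-based reordering behaves like this:

```matlab
% Toy 2x2x3 data: each pixel (i,j) has 3 observations, each with a day number
doy = cat(3, [3 -1; 2 1], [1 2; -1 3], [2 1; 1 2]); % -1 marks an invalid day
temp = reshape(1:12, 2, 2, 3);                       % made-up temperatures
doy(doy == -1) = NaN;                                % NaN sorts last along dim 3
[sorteddoy, order] = sort(doy, 3);                   % per-pixel chronological order
[r, c, ~] = ndgrid(1:2, 1:2, 1:3);                   % row/column indices for sub2ind
sortedtemp = temp(sub2ind(size(temp), r, c, order)); % reorder temperatures
sortedtemp(isnan(sorteddoy)) = 0;                    % zero the invalid trailing entries
% e.g. pixel (1,2): days [-1 2 1] -> temps [3 7 11] reordered to [11 7 0]
```

Masking with the sorted days (rather than the original pixels_dayofyear) matters because after the reorder the invalid entries have moved to the end of each page.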
26 Comments
I'm trying to make a newpixels_temperature matrix of size 1000*36000*366, where the values are updated from pixels_temperature according to the DOY from pixels_dayofyear. The sizes of pixels_temperature and pixels_dayofyear are 1000*36000*23, while newpixels_temperature is required at 1000*36000*366.
The code you suggested does not meet that requirement. Also, I get an error of "The subscript vectors must all be of the same size".
I have 10 other pixels_dayofyear and pixels_temperature mat files with different DOYs and temperature values in each file. The reason I want to make a 1000*36000*366 matrix is to eventually load all the temperature values corresponding to each DOY into one file, which is newpixels_temperature.
Each pixels_dayofyear mat file holds random DOY values globally (1000*36000), with only 23 days of data available.
Each pixels_temperature mat file holds one year of temperature values globally (1000*36000), with only 23 days of temperature data available.
Eventually my newpixels_temperature will be 1000*36000*366, where (23*10 = 230) pages will hold values while the rest will be NaN.
I can try explaining again if you have questions.
I've fixed my original code; I made a typo in the ndgrid call.
Anyway, it doesn't do what you want, as you said. What you want is even easier to produce. However, the memory requirements for a 1000 x 36000 x 366 matrix are substantial: you're going to need around 25 GB of memory to store it as int16 (or a whopping 98 GB as double). Considering that the matrix is going to be full of 0s, that's a big waste. There's probably a much more efficient way to achieve whatever it is you're trying to do which doesn't require wasting so much memory.
Anyway, to create your 1000 x 36000 x 366 matrix:
pixels_dayofyear = load('all_doy.mat');
pixels_temperature = load('all_variable.mat');
assert(isequal(size(pixels_dayofyear), size(pixels_temperature)), 'Size of matrices doesn''t match!');
[rows, cols, ~] = ndgrid(1:size(pixels_dayofyear, 1), 1:size(pixels_dayofyear, 2), 1:size(pixels_dayofyear, 3)); %create row and column indices replicated along the correct dimensions for accumarray
validvalues = pixels_dayofyear > 0; %will be used as a mask over pixels_dayofyear and pixels_temperature
pixels_temperaturebyday = accumarray([rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues)], int16(pixels_temperature(validvalues)), [size(pixels_temperature, 1), size(pixels_temperature, 2), 366], @mean);
Notes:
- You say the day of year goes from -1 to 366. 0 is an invalid index; your original code would fail for a 0. Here I ignore both 0 and -1.
- If the same day is repeated for a pixel, the above code will take the average of the temperatures for that day. Your original code would have assigned just the last temperature.
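A toy-sized sketch (made-up 2x2 grid, 3 observations per pixel, days 1 to 3; values are purely illustrative) of how the accumarray call averages duplicate days:

```matlab
doy = cat(3, [2 1; 1 3], [2 3; 2 1], [1 2; 3 -1]); % pixel (1,1) sees day 2 twice
temp = double(reshape(1:12, 2, 2, 3));             % made-up temperatures
[r, c, ~] = ndgrid(1:2, 1:2, 1:3);                 % row/column index of each value
valid = doy > 0;                                   % drop the invalid -1 entry
byday = accumarray([r(valid), c(valid), doy(valid)], temp(valid), [2, 2, 3], @mean);
% pixel (1,1): day 2 observed with temps 1 and 5 -> byday(1,1,2) is their mean, 3
```

Days that were never observed for a pixel are left at accumarray's default fill value of 0.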
nlm
on 11 Mar 2020
In spite of all the data being "int8", I get this error:
Error using accumarray
Requested 1000x36000x366 (98.2GB) array exceeds maximum array size preference. Creation of
arrays greater than this limit may take a long time and cause MATLAB to become unresponsive. See
array size limit or preference panel for more information.
Is there a way we can speed this up using a for loop?
for ii = 1:size(pixels_dayofyear, 1)
    for jj = 1:size(pixels_dayofyear, 2)
        idx = squeeze(pixels_dayofyear(ii,jj,:));
        temp = squeeze(pixels_temperature(ii,jj,:));
        temp(idx==-1) = [];
        idx(idx==-1) = [];
        if isempty(idx)
            pixels_temperaturebyday(ii,jj,:) = NaN;
        else
            pixels_temperaturebyday(ii,jj,idx) = temp;
        end
    end
end
Guillaume
on 11 Mar 2020
Oops yes,
pixels_temperaturebyday = accumarray([rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues)], pixels_temperature(validvalues), [size(pixels_temperature, 1), size(pixels_temperature, 2), 366], @(x) int16(mean(x)));
would fix it.
Alternatively you could do:
pixels_temperaturebyday = zeros(size(pixels_temperature, 1), size(pixels_temperature, 2), 366, 'int16');
pixels_temperaturebyday(sub2ind(size(pixels_temperaturebyday), rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues))) = pixels_temperature(validvalues);
This behaves exactly as your old code in that if the same day is repeated for a pixel only the last day is copied.
"Is there a way we can speed up using a for loop"
Yes; none of the solutions I've presented use loops, and they will all be significantly faster than loops.
This script doesn't work...
The code generates all zeros; it does not place the temperature data according to the day.
Walter Roberson
on 11 Mar 2020
Well, yes, a 1000x36000x366 array is going to occupy 12 1/4 gigabytes per byte of data representation (so 24 1/2 GB at 2 bytes per entry). There is no way to get around that if you want a non-sparse array that large. It doesn't matter whether the calculation is vectorized or done with a loop if the array is that large.
Generally speaking, though, as soon as you start to use big arrays, you tend to end up making temporary copies that could be as large as the original. A = A + 1 requires a temporary array the same size as the original array, for example. Carefully using loops is often the key to reducing the size of temporary arrays that are in use -- but using loops cannot do anything about the possibility that the basic array being created is larger than your available RAM.
Sometimes for some calculations, you can use tall() arrays to get around array size limits. This is only a realistic possibility if the calculation for any one location only uses comparatively few other locations, rather than the whole array. For example a matrix multiplication followed by summing along a dimension, can be decomposed into row or column operations instead of having to calculate the entire matrix multiplication first.
nlm
on 11 Mar 2020
What's the alternative? It's global temperature data at 1 km, which has an original grid size of 18000*36000 every 16 days. That sums up to 18000*36000*230 over 10 years of data, which I need to interpolate to 18000*36000*366. I already broke it down into 18 sub-datasets.
Walter Roberson
on 11 Mar 2020
When you interpolate at (J,K, 1:230) to (J,K,1:366) then do you use any information from rows J+/-1, columns K+/-1 ?
Temperature ranges from -3000 to 10000
Does that represent -30.00C to +100.00C ? Or is it -300.0C (i.e., around 0K) to +1000.0C ?
Should we understand those values to be integers? If so then int16 would be a viable storage representation.
Guillaume
on 11 Mar 2020
I've already explained that to store a matrix of your desired size you're going to need a huge amount of memory, and that most of that space is wasted since it's just lots of zeros.
As I said, there's probably a way to do whatever you want using much more efficient storage; however, you haven't really explained what the end goal is (nor the true starting point, really).
Walter Roberson
on 11 Mar 2020
I don't know that the data is going to be mostly zero (or missing): it could be from satellite readings.
Guillaume
on 12 Mar 2020
In this comment, Maheshwari stated that the input matrix has 23 pages, and that ultimately the output matrix is going to be filled with 10 of these, so 230 pages. That leaves 136 empty pages: nearly 40% of the matrix full of 0s, occupying around 9 GB of memory as int16.
If interpolation is required, a scatteredInterpolant could do the job without needing to create that huge matrix at all. But of course, storing the result of the interpolation might be a problem in itself.
As I keep saying, we need a better explanation of the overarching goal. Unless Maheshwari has access to a supercomputer, it's unlikely the original goal can be achieved the way it was originally planned.
@Walter and @Guillaume,
Thank you for taking time to think about this problem...
Walter to answer your question,
When you interpolate at (J,K, 1:230) to (J,K,1:366), do you use any information from rows J+/-1, columns K+/-1?
Temperature ranges from -3000 to 10000. Does that represent -30.00C to +100.00C? Or is it -300.0C (i.e., around 0K) to +1000.0C? Should we understand those values to be integers? If so, then int16 would be a viable storage representation.
I don't know that the data is going to be mostly zero (or missing): it could be from satellite readings.
No, it doesn't use any information from J+/-1 or K+/-1. These temperature values need to be scaled accordingly, which is the next step, and I want to hold off on that. They are integer values only, and yes, I have been trying to save them as int16. Yes, it is satellite data.
Guillaume to answer your question,
In this comment, Maheshwari stated that the input matrix has 23 pages, and that ultimately the output matrix is going to be filled with 10 of these, so 230 pages. That leaves 136 empty pages: nearly 40% of the matrix full of 0s, occupying around 9 GB of memory as int16.
If interpolation is required, a scatteredInterpolant could do the job without needing to create that huge matrix at all. But of course, storing the result of the interpolation might be a problem in itself.
As I keep saying, we need a better explanation of the overarching goal. Unless Maheshwari has access to a supercomputer, it's unlikely the original goal can be achieved the way it was originally planned.
Since this is satellite data, it is only available 23 times a year, and I'm trying to build a climatology dataset using 10 years of temperature data (each year has only 23 data points per pixel). Despite there being 23 data points, the observation days can differ; e.g., in 2010 pixel 1 might have been observed on the 76th day, in 2011 on the 81st day, and so on. My primary objective is:
1) Rearrange the year 2010 data (1000*36000*23) into (1000*36000*366): if (1,1,:) = [34, 66, 88, 99, ...] (length 23), then create new_temp(1,1,:) = [0, 0, 0, ..., 34, 0, 0, ..., 66, ...] (length 366), with each value placed at its DOY position.
2) Then read the data from 2011, which is (1,1,:) = [2, 10, 33, ...], and store it in the new_temp created in step 1 so that it fills the missing DOYs, i.e., new_temp(1,1,:) = [0, 2, 0, 0, 0, 0, 10, 0, ..., 33, 34, 0, ..., 66, ...], and so on for the other 8 years.
3) Then do a 10-day moving average on new_temp and interpolate in time to fill the missing DOYs.
I hope this clears things up to some extent at least... Yes, I have access to a supercomputer, but doesn't MATLAB's memory handling capacity remain the same?
Walter Roberson
on 12 Mar 2020
"Yes I have access to super computer, but isn't the memory handling capacity for matlab remains same ?"
MATLAB can handle variables that are up to 2^48-1 bytes. This is internal design decisions that are considered reasonable because the Intel x64 architecture is defined as only having at most 48 address lines.
There is, however, also a preference that you can set to restrict memory use further, and you presently have that preference set. Your current settings blocks your program from requesting an array that is larger than your RAM, because when you use arrays larger than your RAM, your operating system needs to swap to disk which is quite slow.
If you want you can go into your preferences and remove that block. That would permit you to use an array that was 280 gigabytes or whatever provided your operating system has access to a swap partition that large. This would remove the error message you are seeing and would permit your program to continue.
Just do not expect your program to finish executing this century. Your data is arranged in the worst arrangement for swapping to disk.
Supercomputers do not have more MATLAB capabilities than your laptop does. Supercomputers have more RAM, which reduces the amount of swapping to disk.
The general guideline is that swapping to disk is about 1000 times slower executing from RAM.
nlm
on 12 Mar 2020
OK, so apart from removing the block, how can I achieve my objective?
nlm
on 12 Mar 2020
@ Guillaume,
Both codes aren't working... any comments?
pixels_temperaturebyday = zeros(size(pixels_temperature, 1), size(pixels_temperature, 2), 366, 'int16');
pixels_temperaturebyday(sub2ind(size(pixels_temperaturebyday), rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues))) = pixels_temperature(validvalues);
or
pixels_temperaturebyday = accumarray([rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues)], pixels_temperature(validvalues), [size(pixels_temperature, 1), size(pixels_temperature, 2), 366], @(x) int16(mean(x)));
Guillaume
on 12 Mar 2020
If you have the memory, then the code I posted earlier (minus typos) will create the 1000x36000x366 array that you want:
pixels_dayofyear = load('all_doy.mat'); %a MxNx(23*8) matrix
pixels_temperature = load('all_variable.mat'); % should be same size as previous matrix
assert(isequal(size(pixels_dayofyear), size(pixels_temperature)), 'Size of matrices doesn''t match!');
pixels_temperaturebyday = zeros(size(pixels_temperature, 1), size(pixels_temperature, 2), 366, 'int16'); %requires ~25 GB of memory
%find destination of each element of pixels_temperature according to pixels_dayofyear
%destination is same row and column but page is defined by the corresponding element of pixels_dayofyear
%a value of -1 for the day is invalid. Corresponding temperature is not copied
[rows, cols, ~] = ndgrid(1:size(pixels_dayofyear, 1), 1:size(pixels_dayofyear, 2), 1:size(pixels_dayofyear, 3)); %create row and column indices of the pixels
validvalues = pixels_dayofyear > 0; %will be used as a mask over pixels_dayofyear and pixels_temperature
pixels_temperaturebyday(sub2ind(size(pixels_temperaturebyday), rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues))) = pixels_temperature(validvalues);
As noted, you'll need about 25 GB of RAM to store pixels_temperaturebyday as int16, or half that as int8; just replace int16 with int8 in the creation of pixels_temperaturebyday. As also noted, the matrix will be around 40% zeros (actually more like 50% if it's only 8 years). And, also as noted, if the same day is repeated for the same location, you'll end up with only the last temperature of the duplicates.
If you don't have memory, I would look into big data functions: tall arrays and mapreduce.
Walter Roberson
on 12 Mar 2020
The question I asked earlier about indexing ±1 was asking about the interpolation formula. Your answer implied that if you had information about a given pixel on day 86 then you would not use that information to deduce the temperature of an adjacent pixel on day 88, that the only information that you can use for any given pixel is the information from past and future times that exact pixel was sampled. Is that correct?
nlm
on 12 Mar 2020
Guillaume,
I understand there will be a lot of zeros. But I'm checking the code for DOYs where the temp data should exist; however, it shows zero. All the pixels_temperaturebyday data is zero.
Walter,
Yes, correct. +/-1, i.e., adjacent pixels have no impact on any other pixels; it is only time interpolation over 1:366.
Walter Roberson
on 12 Mar 2020
This does not make scientific sense to me. If you know the temperature yesterday one km over, and (additional criteria) you know that the landscape there is much like this one, why would you avoid using that information?
I can see the point of not using adjacent information between cliff and sea (for example) but in most cases, 1 km does not make a steep difference.
nlm
on 12 Mar 2020
Walter,
There are other criteria I should check before I do any spatial interpolation. And the satellite swath is very wide: 1200 km. First, I need to arrange the data by DOY. It is not the case that on day 1 there was data in pixel 1, and on day 2 there is no data in pixel 1 but data in pixel 2.
If my code doesn't produce the expected result, then there's something I don't understand about your input data. A simple test on smaller matrices:
%made up data for testing. Using 2x3x2 array
pixels_dayofyear = cat(3, [-1, 3, 4; 5, -1, 1], [2, -1, 2; 4, -1, 3])
pixels_temperature = reshape(1:12, 2, 3, 2)
%creating a 2x3x5 matrix
pixels_temperaturebyday = zeros(size(pixels_temperature, 1), size(pixels_temperature, 2), 5, 'int16');
[rows, cols, ~] = ndgrid(1:size(pixels_dayofyear, 1), 1:size(pixels_dayofyear, 2), 1:size(pixels_dayofyear, 3)); %create row and column indices of the pixels
validvalues = pixels_dayofyear > 0; %will be used as a mask over pixels_dayofyear and pixels_temperature
pixels_temperaturebyday(sub2ind(size(pixels_temperaturebyday), rows(validvalues), cols(validvalues), pixels_dayofyear(validvalues))) = pixels_temperature(validvalues)
produces the expected result, pixels_dayofyear(1, 1, 1) is -1 so the corresponding temperature is ignored. pixels_dayofyear(2, 1, 1) is 5 and the corresponding pixels_temperature(2, 1, 1) which is 2 ends up at day 5 in pixels_temperaturebyday(2, 1, 5), and so on.
Walter Roberson
on 12 Mar 2020
Ohhh kay...
Do you want to do linear interpolation, or copy the previous reading until it changes?
Guillaume,
And one more thing: if the same DOY exists, is there a possibility to store both values instead of taking a mean?
Walter,
A 10-day moving average, after which a linear interpolation is enough.
Guillaume
on 12 Mar 2020
"is there a possibility to store both values intead of assuming a mean" No, a matrix element can only have one value. What would be the point anyway? Note that the above doesn't average (but my accumarray solution posted previously can), you just get the last of the two.
"A 10 day moving average, after which a linear interpolation is enough" Wouldn't you do the interpolation beforehand? Otherwise, you're averaging with a lot of zeros.
Either way, I'm not sure how you're going to perform the interpolation efficiently. With much smaller datasets, you'd use a scatteredInterpolant (which would also avoid having to construct that initial matrix full of zeros), but this requires 3 matrices the same size as your final matrix to specify the query points.
However, since you said you're not going to use neighbouring points for the calculation, the way to eliminate the memory issue is to divide your satellite image into smaller areas. Split each image into 10 along both the rows and the columns and you need 100 times less memory.
nlm
on 13 Mar 2020
"No, a matrix element can only have one value. What would be the point anyway? Note that the above doesn't average (but my accumarray solution posted previously can), you just get the last of the two." While interpolating the data, it is required to have all the data even if it is same DOY, instead of mean.
"Wouldn't you do the interpolation beforehand? Otherwise, you're averaging with a lot of zeros". The zero's are turned into NaN, while interpolating.
"However, since you said you're not going to use neighbouring points for calculation the way to get eleminate the memory issue is to divide you satellite image into smaller areas. Split each image into 10 along the rows and columns and you need 100 times less memory." It is global data, has 18000 rows, dividing it into 10 rows, makes it hard to keep count of data blocks. I read 1000*36000 block from each hdf file per year (23 DOYs) and saved it as .mat file and then for 10 years, thus 10 mat files. Do you suggest I save .mat files differently?
"While interpolating the data, it is required to have all the data even if it is same DOY, instead of mean." I'm not saying it doesn't exist, but I'm not aware of an interpolation method that can cope with two different values at the same point.
"dividing it into 10 rows, makes it hard to keep count of data blocks" I don't see why it would be hard. It's certainly an easy way to get rid of memory issue. At the same time, it also makes it possible to parallelise the processing (but beware of memory usage!) since the processing of the blocks is completely independent of each other. On a supercomputer, I assume that would be a better use of the resources. So, yes, Instead of splitting by year, I'd split by satellite coordinates and process blocks of size MxNx(10*23) -> MxNx366 where M and N are such that the data fits comfortably in memory.
Addendum: Assuming that M and N are reasonable so that the below fits in memory, this is how I'd implement the transformation from the MxNx(10*23) to the MxNx366 matrix. This directly interpolates the MxNx(10*23) data without ever creating the matrix of 0s:
%inputs:
%pixels_dayofyear, a M x N x day matrix indicating the day of measurement of the respective location in pixels_temperature
%pixels_temperature, a M x N x day matrix indicating the temperature on the respective day in pixels_dayofyear
[M, N, days] = size(pixels_dayofyear);
[inrows, incols, ~] = ndgrid(1:M, 1:N, 1:days); %create matrices of locations for each temperature
isvalid = pixels_dayofyear > -1;
temperature_interpolant = scatteredInterpolant(inrows(isvalid), incols(isvalid), pixels_dayofyear(isvalid), double(pixels_temperature(isvalid))); %create interpolant using locations and days as coordinates
%the line below creates 3 double matrices of size M x N x 366. They will use a lot of memory for large M and N!
[qrows, qcols, qday] = ndgrid(1:M, 1:N, 1:366); %matrices of all query locations and days
interpolated_temperature = reshape(temperature_interpolant(qrows, qcols, qday), M, N, 366); %interpolate at all query points and reshape into 3D matrix
Beware that qrows, qcols and qday will each require M*N*366*8 bytes of memory. scatteredInterpolant only accepts doubles, so there's no way to save memory there by using a smaller type. However, these 3 arrays can be used unchanged for all the blocks (assuming they're all the same size).
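If the three M x N x 366 query matrices are themselves too large, the interpolant could instead be evaluated one day at a time. A sketch, reusing temperature_interpolant from the code above (variable names are assumptions):

```matlab
% Evaluate the interpolant page by page so only M x N query grids are needed.
interpolated_temperature = zeros(M, N, 366); % output is still large, but no 3D query grids
[qrows, qcols] = ndgrid(1:M, 1:N);           % only M x N each, reused for every day
for day = 1:366
    interpolated_temperature(:, :, day) = ...
        temperature_interpolant(qrows, qcols, day * ones(M, N));
end
```

Each evaluation then needs only three M x N double matrices at a time instead of three M x N x 366 ones, trading memory for 366 smaller interpolant calls.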