# Comparison between elements of a matrix of different data types

Stewart Tan on 30 Aug 2019
Commented: Guillaume on 4 Sep 2019
So I recently wrote a few lines of code to compare adjacent pairs of elements in a matrix whose values are integers:
test_mat = [99 100 54 32 14; 89 4 41 2 3; 87 64 32 19 20];
The matrix I currently have is 200,000x5. When I pass the matrix for comparison, it takes roughly 2 minutes to complete. However, I have another matrix which contains:
test_mat2 = [0.0482 0.0050 0.0516 0.0063 0.0058; 0.0847 0.0008 0.0071 0.0086 0.0502];
The one I'm using is also a 200,000x5 matrix containing data like test_mat2 above. I notice that this comparison takes much longer than the first matrix of integers. Is there any reasoning behind this? Is comparison more expensive with numbers with decimals?
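For reference, here is a minimal vectorized sketch of one way such a pairwise comparison could look; the original code was not posted, so the interpretation of "adjacent pairs" (each element versus its right-hand neighbour within a row) is an assumption:

```matlab
% Hypothetical sketch: compare each element with its right-hand neighbour.
% This is NOT the original poster's code, just one plausible interpretation.
test_mat = [99 100 54 32 14; 89 4 41 2 3; 87 64 32 19 20];

% Logical matrix: true where an element is greater than its right neighbour.
greater_than_next = test_mat(:, 1:end-1) > test_mat(:, 2:end);
```

A vectorized comparison like this runs the whole 200,000x5 matrix in one pass, regardless of element type.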
Guillaume on 4 Sep 2019
@Jan, indeed. However, there doesn't appear to be much difference in timing for allocating uint8 or double:
>> timeit(@() randi([0 255], 2e5, 5, 'uint8'))
ans =
0.012459
>> timeit(@() randi([0 255], 2e5, 5, 'double'))
ans =
0.01323
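The same timeit approach can be applied to the comparison itself rather than the allocation; a sketch (the pairwise-comparison expression is an assumption about the original code, and absolute timings will vary by machine):

```matlab
% Sketch: time an elementwise neighbour comparison on uint8 vs double data
% of the same size, to isolate the cost of the comparison operation.
a8 = randi([0 255], 2e5, 5, 'uint8');
ad = randi([0 255], 2e5, 5, 'double');

t_uint8  = timeit(@() a8(:, 1:end-1) > a8(:, 2:end));
t_double = timeit(@() ad(:, 1:end-1) > ad(:, 2:end));
```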

Nikhil Sonavane on 4 Sep 2019
The way floating-point numbers are stored in memory is very different from integers, so the algorithm used to compare floating-point numbers also differs from the one used for integers. I would suggest you read about floating-point representation to understand this better. Also, a floating-point number typically occupies more memory than a small integer type (a double is 8 bytes per element, while a uint8 is 1 byte). For more information, please refer to the documentation on integers and floating-point numbers.
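The memory difference is easy to see in MATLAB; a quick sketch for a 200,000x5 matrix:

```matlab
% uint8 uses 1 byte per element, double uses 8 bytes per element,
% so the same-sized matrix occupies 8x the memory as double.
a8 = zeros(2e5, 5, 'uint8');
ad = zeros(2e5, 5, 'double');
whos a8 ad   % a8: 1,000,000 bytes; ad: 8,000,000 bytes
```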
Guillaume on 4 Sep 2019
And of course, if the original data is a 64-bit integer type, then there's the same number of bytes to compare. I would still expect double comparison to be marginally slower due to the need to test for NaN. Plus, if I recall correctly, modern processors have different pipelines for floating-point and integer operations.
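The NaN point can be illustrated directly: ordered comparisons involving NaN always return false, which is a case integer comparison never has to handle.

```matlab
% All ordered comparisons with NaN are false (logical 0),
% so floating-point comparison must special-case NaN.
NaN > 1      % logical 0
NaN < 1      % logical 0
NaN == NaN   % logical 0
```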