If you are comparing the size of a .mat file that stores just the original signal to the size of a .mat file that stores just A1 together with D1, then:
A1 and D1 are both double-precision arrays, each with the same number of elements as the original data. If we assume that the original signal is double precision as well, then storing A1 and D1 (each the size of the original) should take about twice the space of the original.
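As a quick sanity check on that arithmetic, here is a Python sketch (MATLAB stores doubles the same way, 8 bytes per element; the signal length here is just an example):

```python
import struct

N = 1024                               # length of a hypothetical signal
signal = [0.0] * N                     # stand-in for the original double-precision signal
signal_bytes = struct.pack(f'{N}d', *signal)

# One double is 8 bytes, so the raw signal occupies 8*N bytes...
assert len(signal_bytes) == 8 * N

# ...and A1 plus D1, each the same length as the signal, occupy 16*N bytes together.
a1_plus_d1_bytes = 2 * len(signal_bytes)
assert a1_plus_d1_bytes == 2 * 8 * N
```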
When you save to .mat files, the data is generally compressed with an LZ-family (deflate) algorithm. The compression efficiency achieved on the original signal might be different from that achieved on the A1 and D1 matrices.
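To see why compression efficiency depends on the content of the arrays, here is a small Python illustration using zlib (the same deflate family used for compressed .mat files); the two arrays are made up purely for the demo:

```python
import random
import struct
import zlib

n = 10000

# Highly redundant data: every element identical, so the byte
# stream is one repeating 8-byte pattern.
constant = struct.pack(f'{n}d', *([1.0] * n))

# Data whose mantissa bits are essentially random compresses far less well.
random.seed(1)
varied = struct.pack(f'{n}d', *[random.random() for _ in range(n)])

c_const = zlib.compress(constant)
c_varied = zlib.compress(varied)

# Same raw size in, very different compressed sizes out.
print(len(c_const), len(c_varied))
```

Both inputs are 80000 bytes of doubles, but the redundant array shrinks to a tiny fraction of that while the varied one barely compresses at all.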
Remember that the A1 and D1 matrices produced by wrcoef are precise enough to reproduce the signal exactly (to within round-off). The reason that wavelet techniques can be used for compression is that you can choose to zero out small coefficients, or zero out coefficients in a pattern (for example, off-diagonals inside blocks), and then use the modified coefficients to reconstruct a signal close to the original. The modified arrays, with zeros in them, compress more efficiently; depending on exactly what you choose to zero and in what pattern, you might not even need to store the zeros in the file at all, relying on your knowledge of the pattern to read back just the other locations and fill in zeros where appropriate.
But that zeroing out of small components, and removing the zeros from the array in careful patterns, is not done automatically, so you cannot just dump the arrays and expect the resulting output file to be smaller... especially since you are comparing the sizes of the compressed arrays, not the sizes of the raw arrays.
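The effect of zeroing small coefficients on compressed size can be simulated directly. This Python sketch uses zlib as a stand-in for the .mat compressor; the "coefficients" are synthetic (mostly tiny values with a few large ones), not real wavelet output:

```python
import random
import struct
import zlib

random.seed(0)
n = 5000

# Synthetic stand-in for detail coefficients: mostly tiny values...
coeffs = [random.gauss(0.0, 0.01) for _ in range(n)]
# ...with an occasional large one.
for i in range(0, n, 100):
    coeffs[i] = random.gauss(0.0, 1.0)

# Hard-threshold: zero out everything below 0.05 in magnitude.
thresholded = [c if abs(c) > 0.05 else 0.0 for c in coeffs]

dense_bytes = struct.pack(f'{n}d', *coeffs)
sparse_bytes = struct.pack(f'{n}d', *thresholded)

c_dense = zlib.compress(dense_bytes)
c_sparse = zlib.compress(sparse_bytes)

# The thresholded array, full of zero bytes, compresses far better.
print(len(c_dense), len(c_sparse))
```

The raw arrays are the same size; only after thresholding does the LZ compressor find the long runs of zeros it needs to shrink the output substantially. That thresholding step is exactly what you would have to do yourself before saving.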