Why are the values of learnables in a quantized dlnetwork still stored as float32 (single precision)?
Even though dlquantizer quantizes the weights of the fully connected layer to int8 and its bias to int32, why does the quantized dlnetwork still show these learnables stored as float32 (single precision)?
Also, how can I find out whether dlquantizer is able to quantize a particular layer?
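For context, here is a minimal sketch of how one might check which layers and learnables were actually quantized, assuming the Deep Learning Toolbox Model Quantization Library and that the `quantizationDetails` function is available in your release (the network and data-store names are placeholders):

```matlab
% Sketch: quantize a network and inspect which learnables were quantized.
% net and calibrationData are placeholders for your dlnetwork and datastore.
quantObj = dlquantizer(net);               % create the quantizer object
calibrate(quantObj, calibrationData);      % collect dynamic ranges
qNet = quantize(quantObj);                 % produce the quantized dlnetwork

% quantizationDetails (assumed available) reports the quantized state,
% the names of quantized layers, and a table of quantized learnables.
qDetails = quantizationDetails(qNet);
qDetails.IsQuantized                       % logical flag for the network
qDetails.QuantizedLayerNames               % layers that were quantized
qDetails.QuantizedLearnables               % table of Layer/Parameter/Value
```

Inspecting `qDetails.QuantizedLearnables` shows the integer-valued representations of the learnables, while the learnable properties displayed on the dlnetwork itself may still appear in single precision.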