Linear Integer Output from a Neural Network

James Mathieson on 27 Feb 2015
Commented: Mithun Goutham on 9 Jun 2020
I am trying to construct a custom neural network for regression that gives its response in whole integers rather than in continuous real numbers. This is done because the target data is also in whole integers and fractional amounts are meaningless to the application.
At first blush, I thought the solution would be to create a copy of the purelin transfer function and its sub-functions, with the a = n term replaced by a = round(n). However, this seems to produce only 3 discrete steps. On further inspection, when the custom function runs, its inputs are all bounded to [-1, 1], so round(n) converts everything to -1, 0, or 1. Given that the purelin template is unbounded, I can only surmise that there is a separate function which scales the result back up to the target's value range after the transfer function.
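A minimal sketch of what seems to be happening, assuming the toolbox's default mapminmax pre/post-processing (which normalizes targets into [-1, 1] before training and rescales outputs afterwards); the target values here are made up for illustration:

```matlab
% Sketch: why round() inside the transfer function collapses to 3 steps.
t = [3 7 12 25 40];                        % hypothetical integer targets
[tn, ps] = mapminmax(t);                   % normalized into [-1, 1]
disp(round(tn))                            % rounding here yields only -1, 0, 1
disp(mapminmax('reverse', round(tn), ps))  % rescaled back: 3 coarse values
```

So a round() placed inside the transfer function operates on the normalized values, not on the original target scale, which matches the 3-step behavior described above.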
So, the question is, what function for the layer actually determines the final output? How can I achieve my aim of getting a network to output whole integers? As a note, just rounding the result after the fact is not an acceptable solution as the integer nature of the output needs to be considered when calculating the performance of the network during training.
Mithun Goutham on 9 Jun 2020
James Mathieson, I am in a similar fix, and was wondering if you were able to implement this within the neural network training. From what I understand, rounding within the network lets the RMSE account for the rounding when the weights and biases are updated, which is different from rounding only the final value. While this may cause the convergence to bounce up and down, I believe it should trend towards a lower RMSE.
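The distinction being drawn can be sketched as follows; this only compares the two error measures after training, it does not itself make training aware of the rounding. Here net, x, and t are hypothetical (a trained regression network and test inputs/targets):

```matlab
% Sketch: RMSE of raw outputs vs. outputs rounded after the fact.
y = net(x);                              % real-valued network output
rmseRaw     = sqrt(mean((t - y).^2));        % error used during training
rmseRounded = sqrt(mean((t - round(y)).^2)); % error the application sees
```

Training on rmseRaw while the application is judged on rmseRounded is exactly the mismatch the question raises; the two values can differ, though typically not by much when the outputs are far from the halfway points between integers.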

Greg Heath on 27 May 2020
The integer nature of the output DOES NOT HAVE TO BE CONSIDERED during training.
It is sufficient to round the real valued output.
Hope it helps.
Greg
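A minimal sketch of Greg's suggestion, assuming a standard fitting network from the toolbox; x and t are hypothetical training inputs and integer targets:

```matlab
% Sketch: train on the real-valued problem, round only at prediction time.
net = fitnet(10);            % small feedforward regression network
net = train(net, x, t);      % ordinary training against integer targets
yInt = round(net(x));        % integer predictions obtained by rounding
```

The point is that the network still minimizes MSE on real-valued outputs during training, and the integer constraint is applied only when the result is used.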