If I understand correctly, your neural network produces the same output regardless of the input from the test set, and you would like to know the probable cause and some ways to fix it.
This type of problem can happen when the activation function and cost function are not chosen appropriately, or when the data is not normalized. I recommend the sigmoid activation function paired with the cross-entropy cost function, since this combination avoids several common problems (for example, the learning slowdown the sigmoid suffers with a quadratic cost). Also normalize your data before splitting it into training, cross-validation, and test sets. Assuming your data set is linearly separable, try a 3-layer ANN: start with a small number of hidden units and increase it gradually if the error remains high. If the data is not linearly separable, try adding more hidden layers.
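As a minimal sketch of the recipe above (normalize first, then train a 3-layer ANN with sigmoid activations and cross-entropy cost), here is a NumPy example. The toy data, the hidden-layer width of 4, and the learning rate are my own assumptions for illustration, not values from your setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data, linearly separable by construction
# (an assumption for this sketch; substitute your own data set).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Normalize BEFORE splitting into training and test sets.
X = (X - X.mean(axis=0)) / X.std(axis=0)
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3-layer ANN: input -> small hidden layer -> output.
n_hidden = 4
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(2000):
    # Forward pass with sigmoid activations in both layers.
    h = sigmoid(X_train @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # With cross-entropy cost, the output-layer error is simply (p - y),
    # which avoids the vanishing-gradient slowdown of a quadratic cost.
    dz2 = (p - y_train) / len(X_train)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1, db1 = X_train.T @ dz1, dz1.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The outputs should now vary with the input rather than being constant.
p_test = sigmoid(sigmoid(X_test @ W1 + b1) @ W2 + b2)
acc = ((p_test > 0.5) == y_test).mean()
print("test accuracy:", acc)
```

If the error stays high, increase `n_hidden` gradually; a constant output usually shows up here as `p_test` collapsing to a single value, which the trained network should no longer do.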
Refer to the following documentation for more information.