Imported U-Net from ONNX to MATLAB Deep Learning Toolbox and it does not work

Hi,
I used importONNXNetwork to load a U-Net and its weights from an ONNX file, but it gives an all-zero output image. I debugged the MATLAB code and it works well until it reaches the transposed convolution layers: {1×1 nnet.internal.cnn.layer.TransposedConvolution2D}
Is there a bug in the toolbox in the transposed convolution layers? Or are there any precautions to take while saving the model in PyTorch?
I save the model and weights from PyTorch in ONNX format as follows:
import torch

# UNet and gpus_list are defined elsewhere in my script
input_names = ["x"]
output_names = ["y"]
dummy_input = torch.randn(1, 1, 1016, 1016, device='cuda')
UNetWts = torch.load("UNet.pth")
modelUNet = UNet()
modelUNet = torch.nn.DataParallel(modelUNet, device_ids=gpus_list)
modelUNet.load_state_dict(UNetWts, strict=False)
torch.onnx.export(modelUNet, dummy_input, "UNet.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
and load it in MATLAB as follows:
net = importONNXNetwork('UNet.onnx','OutputLayerType','regression');
y = activations(net,x,'node_38');
I take the output from node_38 because I want to do inference, not training. These are the network layers:
1 'x' Image Input 1016x1016x1 images
2 'node_1' Convolution 16 3x3x1 convolutions with stride [1 1] and padding [1 1 1 1]
3 'node_2' Leaky ReLU Leaky ReLU with scale 0.2
4 'node_3' Convolution 16 3x3x16 convolutions with stride [1 1] and padding [1 1 1 1]
5 'node_4' Leaky ReLU Leaky ReLU with scale 0.2
6 'node_5' Max Pooling 2x2 max pooling with stride [2 2] and padding [0 0 0 0]
7 'node_6' Convolution 32 3x3x16 convolutions with stride [1 1] and padding [1 1 1 1]
8 'node_7' Leaky ReLU Leaky ReLU with scale 0.2
9 'node_8' Convolution 32 3x3x32 convolutions with stride [1 1] and padding [1 1 1 1]
10 'node_9' Leaky ReLU Leaky ReLU with scale 0.2
11 'node_10' Max Pooling 2x2 max pooling with stride [2 2] and padding [0 0 0 0]
12 'node_11' Convolution 64 3x3x32 convolutions with stride [1 1] and padding [1 1 1 1]
13 'node_12' Leaky ReLU Leaky ReLU with scale 0.2
14 'node_13' Convolution 64 3x3x64 convolutions with stride [1 1] and padding [1 1 1 1]
15 'node_14' Leaky ReLU Leaky ReLU with scale 0.2
16 'node_15' Max Pooling 2x2 max pooling with stride [2 2] and padding [0 0 0 0]
17 'node_16' Convolution 128 3x3x64 convolutions with stride [1 1] and padding [1 1 1 1]
18 'node_17' Leaky ReLU Leaky ReLU with scale 0.2
19 'node_18' Convolution 128 3x3x128 convolutions with stride [1 1] and padding [1 1 1 1]
20 'node_19' Leaky ReLU Leaky ReLU with scale 0.2
21 'node_20' Transposed Convolution 64 2x2x128 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
22 'node_21' Depth concatenation Depth concatenation of 2 inputs
23 'node_22' Convolution 64 3x3x128 convolutions with stride [1 1] and padding [1 1 1 1]
24 'node_23' Leaky ReLU Leaky ReLU with scale 0.2
25 'node_24' Convolution 64 3x3x64 convolutions with stride [1 1] and padding [1 1 1 1]
26 'node_25' Leaky ReLU Leaky ReLU with scale 0.2
27 'node_26' Transposed Convolution 32 2x2x64 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
28 'node_27' Depth concatenation Depth concatenation of 2 inputs
29 'node_28' Convolution 32 3x3x64 convolutions with stride [1 1] and padding [1 1 1 1]
30 'node_29' Leaky ReLU Leaky ReLU with scale 0.2
31 'node_30' Convolution 32 3x3x32 convolutions with stride [1 1] and padding [1 1 1 1]
32 'node_31' Leaky ReLU Leaky ReLU with scale 0.2
33 'node_32' Transposed Convolution 16 2x2x32 transposed convolutions with stride [2 2] and cropping [0 0 0 0]
34 'node_33' Depth concatenation Depth concatenation of 2 inputs
35 'node_34' Convolution 16 3x3x32 convolutions with stride [1 1] and padding [1 1 1 1]
36 'node_35' Leaky ReLU Leaky ReLU with scale 0.2
37 'node_36' Convolution 16 3x3x16 convolutions with stride [1 1] and padding [1 1 1 1]
38 'node_37' Leaky ReLU Leaky ReLU with scale 0.2
39 'node_38' Convolution 1 1x1x16 convolutions with stride [1 1] and padding [0 0 0 0]
40 'RegressionLayer_node_38' Regression Output mean-squared-error
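As an aside (my own sketch, not from the toolbox output above): the layer list implies the input side length must be divisible by 2^3 = 8, since 3×3 convolutions with padding 1 preserve the spatial size, each 2×2 max pool halves it, and each stride-2 transposed convolution doubles it back. The following traces the sizes for the 1016×1016 input:

```python
# Sketch: trace the spatial size through the U-Net listed above.
# Padded 3x3 convs keep the size; each 2x2 max pool halves it;
# each stride-2 transposed conv doubles it on the way back up.
def unet_sizes(n, depth=3):
    """Return the spatial side lengths down the encoder and back up."""
    down = [n]
    for _ in range(depth):
        assert n % 2 == 0, "size must be divisible by 2 at every pool"
        n //= 2
        down.append(n)
    up = [n * 2 ** i for i in range(1, depth + 1)]
    return down + up

print(unet_sizes(1016))  # [1016, 508, 254, 127, 254, 508, 1016]
```

The sizes on the way up match the encoder sizes, which is why the depth-concatenation (skip-connection) layers line up.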
Thanks,
-Omar
1 Comment
Omar Elgendy on 22 Oct 2019
After debugging with an all-ones image, I get the same output in MATLAB as in PyTorch. But when I feed a real image, the output is not right: it looks like the output of an intermediate layer with low gain.


Accepted Answer

Omar Elgendy on 25 Oct 2019
I found the answer to my question: the model should not be wrapped in torch.nn.DataParallel before export. Otherwise, the imported ONNX model is wrong.
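For anyone hitting the same issue, a likely mechanism (my assumption, not verified in the thread): DataParallel stores parameters under a "module." key prefix, so a checkpoint saved from a wrapped model will not match a plain model's keys (and with strict=False the mismatched weights are silently skipped). One common workaround is to strip the prefix and load into the unwrapped model before calling torch.onnx.export. The key-renaming step can be sketched with plain dicts:

```python
# Sketch: remove DataParallel's "module." key prefix from a checkpoint
# so it loads into the plain (unwrapped) model before ONNX export.
def strip_dataparallel_prefix(state_dict):
    """Remove a leading 'module.' from every key, if present."""
    prefix = "module."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Placeholder values stand in for real weight tensors:
ckpt = {"module.node_1.weight": 1, "module.node_1.bias": 2}
print(strip_dataparallel_prefix(ckpt))
# {'node_1.weight': 1, 'node_1.bias': 2}
```

With the cleaned state_dict, load_state_dict can then be called with the default strict=True, which will raise an error instead of silently ignoring missing weights.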

More Answers (0)
