Custom Deep Learning Network for Xilinx FPGA target
I am researching building deep learning accelerators on Xilinx FPGAs using the Deep Learning HDL Toolbox. I have a custom CNN with an input layer size of [1 1024 2]. Using the dlhdl.ProcessorConfig class, I'm trying to optimize the processor configuration for this network with the optimizeConfigurationForNetwork helper.
The optimized processor configuration is generated successfully, but when I try to estimate the performance of the CNN I get the following error:
The Conv module in the processor configuration has an InputMemorySize of [150 150 7]. This is insufficient to deploy the 'AP1' Layer. Increase the InputMemorySize to [171 171 7] or more using hPC.setModuleProperty('conv', 'InputMemorySize', [171 171 7]), where hPC is the dlhdl.ProcessorConfig object.
I've tried increasing the InputMemorySize, but the error keeps being thrown with an even higher InputMemorySize requirement for the 2D Average Pooling layer.
I'd appreciate your recommendation on how I could fix this. Thank you.
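For reference, the workflow I'm running looks roughly like this (a sketch; the network variable name is illustrative):

```matlab
% Custom CNN with a [1 1024 2] input layer (placeholder name)
net = myCustomCNN;

% Start from a default processor configuration and optimize it for the network
hPC = dlhdl.ProcessorConfig;
hPC = optimizeConfigurationForNetwork(hPC, net);

% Estimating performance is the step that throws the InputMemorySize error
hPC.estimatePerformance(net);
```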
Paul Osinowo,
Graduate Student
University of Strathclyde, Glasgow.
Accepted Answer
Umar
on 8 Jul 2024
Moved: Stefanie Schwarz
on 19 Sep 2024
Hi Paul,
The error indicates that the Conv module's InputMemorySize is too small for the layers in your network: it must be increased to at least [171 171 7] before the 'AP1' layer can be deployed.
To address this issue, follow these steps to adjust the InputMemorySize for the Conv module using the dlhdl.ProcessorConfig object:
Set InputMemorySize for Conv Module: Use the setModuleProperty method of the ProcessorConfig object to set the InputMemorySize for the Conv module to [171 171 7]:
hPC.setModuleProperty('conv', 'InputMemorySize', [171 171 7]);
Verify Configuration: After setting the InputMemorySize, confirm that the update took effect by reading the property back from the ProcessorConfig object:
getModuleProperty(hPC, 'conv', 'InputMemorySize')
Re-Estimate Performance: Once you have adjusted the InputMemorySize, estimate the performance of the CNN again to check whether the error persists (where customCNN is your network object):
hPC.estimatePerformance(customCNN);
Iterative Adjustment: If the error persists with higher requirements for subsequent layers like the 2D Average Pooling layer, repeat the process of increasing the InputMemorySize for the respective modules until all layers can be accommodated.
By iteratively adjusting the InputMemorySize of the affected modules, you should be able to resolve the insufficient-memory errors and deploy all layers successfully. Validate the configuration and re-estimate performance after each adjustment to confirm compatibility with your network architecture.
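The steps above can be combined into a short script (a sketch; customCNN is a placeholder for your network object):

```matlab
% Raise the Conv module's input memory to the size the error message requests
hPC.setModuleProperty('conv', 'InputMemorySize', [171 171 7]);

% Read the property back to confirm the new value took effect
getModuleProperty(hPC, 'conv', 'InputMemorySize')

% Re-run the performance estimate; if another layer still needs more memory,
% raise InputMemorySize to the value reported in the new error and repeat
hPC.estimatePerformance(customCNN);
```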
If you encounter any further challenges or require additional assistance, feel free to provide more details for a more tailored solution. Good luck with optimizing your deep learning accelerator on Xilinx FPGA!
2 Comments
Umar
on 8 Jul 2024
Moved: Stefanie Schwarz
on 19 Sep 2024
Hi Paul,
Thank you for your feedback. I'm glad to hear that you found the information helpful. It's great that you were able to identify the issue with the output of the layer on top of the 2D Average Pooling Layer and make the necessary modifications to fix the error. If you have any further questions or need assistance with anything else, please don't hesitate to reach out.