Why should I choose MATLAB Deep Learning Toolbox over other open-source frameworks like Caffe, ONNX, PyTorch, Torch, etc.?

Hi, I see the name of the product has been changed from "Neural Network Toolbox" to "Deep Learning Toolbox". But I do not see many deep learning research papers implemented in MATLAB. Everyone uses PyTorch, TensorFlow, Caffe, etc. Even the popular online courses, as well as classroom courses at top places like Stanford, have stopped teaching in MATLAB.
I have been a big fan of MATLAB and other MathWorks products, and MathWorks' participation in ONNX appears interesting to me, but it seems I have no option left apart from moving to other tools. Please tell me why I should use MATLAB, which is paid, rather than the freely available popular tools like PyTorch, TensorFlow, Caffe, etc. I can easily get code for free there, along with a good community, documentation, everything; in fact, those frameworks are very convenient, e.g. tensors, which are easy to work with in Python but difficult in MATLAB, and there are many such examples.
I am pretty sure you would have an answer to this, since you have a lot of toolboxes revolving around deep learning, and the major customers of Model-Based Design tools, such as automotive and defense, must need deep learning.
  7 Comments
Ethem on 18 Sep 2023
This chapter of the documentation is all about how to create custom training loops, loss functions, differentiable functions, etc. Of course, as the flexibility increases, the amount of code you need to write increases. But you can implement any cutting-edge network with these methods and customize it for your research.
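For readers who haven't seen that part of the documentation, here is a minimal, hedged sketch of a custom training loop using dlnetwork, dlfeval, dlgradient, and adamupdate; the tiny network and random data are placeholders invented for this example, not a recommended model:

  % Minimal custom training loop sketch (toy network, random data).
  layers = [
      featureInputLayer(10)
      fullyConnectedLayer(16)
      reluLayer
      fullyConnectedLayer(1)];
  net = dlnetwork(layerGraph(layers));

  X = dlarray(rand(10,64,'single'),'CB');   % 64 random observations, 10 features
  T = dlarray(rand(1,64,'single'),'CB');    % random regression targets

  avgGrad = []; avgSqGrad = [];
  for iteration = 1:100
      [loss,gradients] = dlfeval(@modelLoss,net,X,T);
      [net,avgGrad,avgSqGrad] = adamupdate(net,gradients, ...
          avgGrad,avgSqGrad,iteration,1e-3);
  end

  function [loss,gradients] = modelLoss(net,X,T)
      Y = forward(net,X);                   % forward pass through the dlnetwork
      loss = mse(Y,T);                      % built-in half-mean-squared-error loss for dlarray
      gradients = dlgradient(loss,net.Learnables);
  end

This mirrors the pattern the documented examples follow: anything you can express with dlarray operations can be dropped into the modelLoss function.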
cui,xingxing on 19 Sep 2023
@Ethem Thanks for the reply. Looking at the last couple of years, the Deep Learning Toolbox has really strengthened quite a bit from what it was, and it is once again worthwhile for me to learn and study. Thanks again for the excellent development work.


Accepted Answer

David Willingham on 19 Aug 2020
Edited: David Willingham on 21 Sep 2020
Hi tharun,
Thanks for this question. I'd like to provide an update to Sebastian's answer, as a lot has changed in MATLAB since 2018. MATLAB is used by engineers and scientists to develop, automate, and integrate deep learning models into their domain-specific workflows. It helps them achieve this by providing:
  • An open framework that supports interoperability with Python and other open source deep learning frameworks.
  • Capabilities that extend beyond modeling to developing end-to-end applications.
  • Integration and Simulation of Deep Learning models into larger domain-specific systems.
  • Dedicated support from engineers at MathWorks, developers of MATLAB.
Further information:
The development efforts of MATLAB are aimed at addressing the entire system design workflow for building systems that rely on Deep Learning.
[Figure: Deep Learning System Design Workflow]
This workflow is being applied to developing domain-specific deep learning applications. For each of these domains, MATLAB provides specialized tools and functions for data preprocessing and preparation, training interfaces, evaluation tools, and reference examples.
Data Preparation:
Having the right data is critical to the success of developing a deep learning model, but preparing it can be a time-consuming process. MATLAB provides apps for automating domain-specific labeling (Signal Labeler, Image Labeler, Video Labeler, and Audio Labeler) and functions for preprocessing data, which aim to save development time.
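As a small, hedged illustration of the datastore-based preparation this refers to (the folder name is a placeholder), labeled images can be read and augmented on the fly like this:

  % Read images labeled by folder name; the path is a hypothetical example.
  imds = imageDatastore('myImageFolder', ...
      'IncludeSubfolders',true,'LabelSource','foldernames');

  % Resize on the fly and add light augmentation for training.
  augmenter = imageDataAugmenter('RandXReflection',true,'RandRotation',[-10 10]);
  augimds = augmentedImageDatastore([224 224],imds,'DataAugmentation',augmenter);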
Modeling:
Users can choose whether to use models developed in MATLAB, pretrained models such as GoogLeNet or ResNet-50, or models from the open-source frameworks TensorFlow, PyTorch, and ONNX through framework interoperability. Deep Learning Toolbox provides interactive apps that automate network design, training, and experiment management, allowing users to avoid steps that can be automated or eliminated.
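For example, loading a pretrained model or importing one from another framework takes only a few lines. This is a hedged sketch: the file names are hypothetical, and the pretrained networks and importers require their free support packages:

  % Pretrained network shipped as a support package.
  net = resnet50;
  analyzeNetwork(net)   % interactive inspection of the architecture

  % Import networks trained in open-source frameworks (file names are placeholders).
  onnxNet  = importONNXNetwork('model.onnx','OutputLayerType','classification');
  kerasNet = importKerasNetwork('model.h5');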
Simulation & Test:
Deep learning models created in MATLAB can be integrated into system-level designs developed in Simulink for testing and verification using simulation. System-level simulation models can be used to verify how deep learning models work with the overall design and to test conditions that might be difficult or expensive to reproduce in a physical system. One documented example shows how deep learning can be integrated with a controls model in Simulink; furthermore, AI models can be tested in 3D simulation environments with sensor models, as shown in another example.
Deployment:
These applications are being deployed to embedded and production systems through automatic code generation, which produces optimized native code for Intel and Arm CPUs, FPGAs and SoCs, and NVIDIA GPUs for deep networks, along with the pre-processing and post-processing steps, eliminating errors of transcription or interpretation.
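A hedged sketch of what that step can look like with GPU Coder; the entry-point function, MAT-file, and input size are made-up examples, not a prescribed workflow:

  % myPredict.m -- hypothetical entry-point function for code generation.
  function out = myPredict(in)
      persistent net
      if isempty(net)
          net = coder.loadDeepLearningNetwork('trainedNet.mat'); % placeholder file
      end
      out = predict(net,in);
  end

  % In a separate script: generate CUDA MEX code for the entry point
  % (requires GPU Coder and a supported NVIDIA GPU).
  cfg = coder.gpuConfig('mex');
  cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
  codegen -config cfg myPredict -args {ones(224,224,3,'single')}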
Examples in Industry and Academia:
MATLAB users in industry and academia have had success using MATLAB deep learning to solve challenging problems such as terrain recognition using hyperspectral data and converting brain signals to word phrases.
To summarize why engineers and scientists use MATLAB and MathWorks for deep learning:
  • MATLAB is focused on engineering and science workflows
  • MATLAB is a platform that covers the entire workflow where users can improve productivity by using interactive apps that expedite analysis and automatically generate reusable code
  • Models can be deployed anywhere, from embedded to cloud systems
  • MATLAB has interoperability with the open-source frameworks TensorFlow and PyTorch
  • Users have access to support from experienced MathWorks engineers in development, training & consulting.
If you have any questions regarding Deep Learning, please don't hesitate to contact me or any one of our Deep Learning experts at MathWorks via the "Have Questions? Talk to a deep learning expert." form on our Deep Learning solution page.
Regards,
Deep Learning Product Manager, MathWorks

More Answers (6)

Sebastian Castro on 10 Oct 2018
Edited: Sebastian Castro on 10 Oct 2018
This is a fun and tough question...
DISCLAIMER: Even though I currently work at MathWorks, I am answering with my personal opinion.
There are many factors contributing to the popularity of open-source frameworks you mention:
  • They are free, but more importantly, they are free for everyone else who wants to use your code
  • They have been doing deep learning for longer than MathWorks -- this is one reason why most research papers you see use these tools. The tools were developed earlier out of a research need, were heavily contributed to by the research community, and...
  • ...they are often backed by companies such as Google (TensorFlow), Amazon (MXNet), Facebook (Caffe2, PyTorch), etc., which also have research departments and publish their own papers
  • MATLAB developers obviously can't be doing everything the research community is doing in real-time. When a technology becomes established, commercial software such as MATLAB will consider implementing it and making it accessible to people with less expertise in the area. This "delay" between research and any commercial software is natural.
So... when should you use MathWorks' deep learning solution?
  • You are already comfortable with MATLAB and the functionality in Deep Learning Toolbox can solve your problem (this could often mean you're not necessarily pushing the boundaries of deep learning, but rather solving a problem with commonly available techniques that are already in these tools)
  • You are using other MathWorks tools that are more unique/established (such as Simulink, Stateflow, controls, signal processing, etc.) and want easy integration
  • You are working in an industry/on a product where open-source software may not meet certain certification/quality criteria, but perhaps working with commercial software is preferred. Think about the risks of cobbling together constantly evolving experimental software vs. working with software that has a release cycle for all toolboxes at once, and a dedicated quality engineering team
  • In open-source software, you can seek assistance from the online community and report issues on e.g. GitHub... and maybe you will get an answer. Paying for MathWorks technical support gives you easier and more consistent access to help, especially if you don't have the expertise or time to jump into a code base and debug or fix issues on your own. As a student or researcher, doing that yourself is generally more acceptable
  • You've already "paid" for MATLAB, whether it's through work, school, or another program... so all being equal, the argument of free vs. not free isn't there. This also holds for technical support.
Besides that, the MATLAB deep learning solution isn't isolated from the outside world:
  • You can import pretrained networks from ONNX, TensorFlow-Keras, and Caffe into MATLAB
  • You can use MATLAB Coder and/or GPU Coder to deploy standalone C/C++ code from your neural network
  • You can export networks trained in MATLAB to ONNX
  • Deep Learning Toolbox provides an inheritable "Layer" class you can use to define your own neural network layer if it doesn't exist in the toolbox (a minimal sketch follows this list). This may work for some of the more "researchy" tasks.
  • MATLAB can be helpful in auxiliary tasks outside the neural network construction and training itself... You can use tools like the Image and Video Labeler apps to help label your deep learning data, do data input/output, clean up data, perform statistical analysis, generate reports, etc. without even touching a neural network in MATLAB.
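To make the "Layer" point above concrete, here is a minimal, hedged sketch of a custom layer (a trivial scaling layer saved in its own scalingLayer.m file). In recent releases the backward pass can usually be derived automatically from predict; older releases also required a backward method:

  % scalingLayer.m -- toy custom layer that multiplies its input by a constant.
  classdef scalingLayer < nnet.layer.Layer
      properties
          Scale   % fixed, non-learnable scale factor
      end
      methods
          function layer = scalingLayer(scale,name)
              layer.Name  = name;
              layer.Scale = scale;
          end
          function Z = predict(layer,X)
              Z = layer.Scale .* X;   % forward pass
          end
      end
  end

It can then be dropped into a layer array like any built-in layer, for example between a convolution and a ReLU.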
IN SUMMARY:
  • Deep Learning is still a research area and any commercial software package will inevitably have a lag associated with it.
  • How researchy your application is may very well influence your decision to choose a software package.
  • When you buy MATLAB you're paying for more than the software itself.
I don't doubt that a lot of the things I said MATLAB can do can also be done with the massive list of open-source packages like NumPy/SciPy, OpenCV, etc. Ultimately, it's up to you to shop around and make a decision. You can always reach out to a real person at MathWorks with specific questions about the tools' ability to solve your problem... which is another one of the "side effects" of for-profit commercial software.
- Sebastian
  4 Comments
Minhtri Ho on 23 May 2019
Hi Sebastian,
Thank you for your answer! Your answer is fair and helpful to me.
I would like to ask some follow-up questions:
  • If we use the same training set and have the same deep learning model, is MATLAB faster than TensorFlow or vice versa when we train the network?
  • With the same training and testing sets, how does the performance of MATLAB compare with TensorFlow? For example, for a 2-class classification problem, which one produces a better ROC curve?
Thanks,
-Minhtri
Oisín Watkins on 16 Jan 2020
Hi Sebastian,
Thanks very much for your answer.
I come from a more electronics and computer engineering background, but I have upskilled and worked in AI for the best part of a year now. Based on all the reading and practical work I've done in this field I'd like to provide some feedback pertaining to this topic. If this is the wrong avenue for this sort of feedback do please redirect me to the correct avenue.
Of the languages I've worked in, MATLAB provides some of the best tools for almost any industrial setting, keeping code clean and legible with an efficacy not matched by Python. However, the deep learning tools developed in Python do surpass those developed in MATLAB. To the best of my understanding, it seems as though the developers haven't taken inspiration from or followed the lead of those already in this field. Key concepts of deep learning and the functionality of various layers and network topologies are completely misrepresented in your documentation. The main examples of this are:
  1. The conflation of training and validation data. The most rudimentary of deep learning guides will go to great lengths to make sure the reader knows that all datasets are to be completely exclusive of one another. No presentation or data point in the training set can be allowed to repeat in either the validation or testing sets. The same is true of all data sets, yet more than a few examples in the MATLAB documentation have the validation set being a subset of the training set (a minimal sketch of a proper split follows this list).
  2. The limitation of layers or network topologies to only certain tasks is a very big mistake from a design point of view. While the use of deep network designs in image processing has been both widely accepted and largely successful, limiting the use of densely connected layers or convolutional layers to image processing only is equivalent to handing someone a phone and saying all it can do is take pictures. The invention of the Deep Network Designer was inspired; however, limiting the functionality of such deep networks by using "ImageInputLayers" is both misleading and in poor judgement. Having an input layer which handles data normalisation or standardisation is a great idea, but it should be generic and not constrain the user to any data type, or even any data format. (As I write this I am aware that any real-valued data can be given to an ImageInputLayer, but the point still stands: give the most functionality to the user and do not limit the applications of deep learning to image processing only.)
  3. To further compound the previous issue, certain layer functionalities seem to have been misunderstood. The "Flatten" layer is a good example of this. Here you have it being a tool solely for vectorising sequence data, when in truth a Flatten layer can (and often should) be used with any data structure as a method of ensuring the correct tensor shape at the output layer. Here it seems MATLAB has taken a lot of trust away from the user, handling a lot of tensor processing in a way which is hidden and without regard for the user's application. While this can help with bringing new engineers into the field, it detracts from their experience and limits their potential.
  4. This is a minor complaint, but the meaning of "epoch" seems to be misunderstood. Here it's hard-set as a run through all training presentations. While that's true in many cases, the user should be allowed to define how many presentations per epoch. Oftentimes setting the number of presentations to be less than the total number available can prevent overfitting. A minor detail; it doesn't impede functionality enough to be a serious issue.
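On the first point, a minimal, hedged sketch of keeping the three sets disjoint with a datastore (the folder name is a placeholder):

  % Split one labeled image datastore into disjoint train/validation/test sets.
  imds = imageDatastore('myDataFolder', ...
      'IncludeSubfolders',true,'LabelSource','foldernames');
  [imdsTrain,imdsVal,imdsTest] = splitEachLabel(imds,0.7,0.15,'randomized');

  options = trainingOptions('adam', ...
      'ValidationData',imdsVal, ...   % validation is never drawn from the training set
      'MaxEpochs',10);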
My last issue is with the lack of any equivalent to a generator object in MATLAB. People have made versions of them using function calls, and in honesty they're brilliant implementations! However, the deep learning toolkit does not support them as a valid data source. Instead we're asked to use datastore objects. While in principle this works to the same effect, it is met with the same problems: a lack of control over the output tensors and a lack of customisability. For the last week I have been editing the deep learning toolkit's functions on my own machine to see if I could add this functionality, and I've made some considerable progress, but it's an uphill fight.
So that's my take on this toolkit: a flawed implementation of a great idea. All of the advantages listed here by MathWorks engineers still stand, and I wholeheartedly agree that of all processing tools MATLAB is best suited to handling deep learning and AI; however, this toolkit still needs work. As I said before, please do let me know if I'm out of place with these remarks and I will direct my comments to the proper channel.
Thanks for your time.
All the best,
Oisín.



cui,xingxing on 11 Jul 2020
Edited: cui,xingxing on 23 Aug 2020
A few months ago, out of personal interest, I implemented YOLOv3/YOLOv4 training with compatibility with the original framework. Overall, the MATLAB code implementation is still very concise, which is much more convenient than PyTorch and TensorFlow, but there is also a problem: the differentiation framework is not efficient enough. For example, when GIoU is used as a loss, computing the network loss is very slow and training cannot move forward. Therefore, I recommend that MathWorks improve the underlying performance while keeping the flexibility, to make it easier to build various algorithms on top.
I personally have more than 9 years of MATLAB experience, and I sincerely suggest that MathWorks make the following changes:
  1. Strengthen interoperability with other open-source frameworks. Although there are ONNX, Caffe, and TensorFlow importers, many of their operations are not supported, and import and export cannot be customized at all.
  2. The automatic differentiation mechanism that imitates PyTorch is very good, but the training efficiency is not as good as PyTorch's, and many MATLAB built-in functions do not support automatic differentiation.
  3. Custom network layers are not flexible enough; the characteristics of the inputs and outputs cannot be customized.
  4. Too many of the official Deep Learning Toolbox examples use procedural programming, which makes it hard to see the whole architecture, and they use a large number of cellfun calls, which I find hard to read. I recommend using high-dimensional array expressions as much as possible; in my open-source YOLOv3/YOLOv4 code, cellfun is used as little as possible. Also, dlnetwork is very inefficient when converting a layerGraph network.
Talk is cheap, show me the code!
And so on...
The above are influential applications of deep learning in various areas, but they are difficult to reproduce in MATLAB. Although R2019b supports an automatic differentiation mechanism (a minimal sketch of how it is invoked follows below), it is still not easy to implement algorithms in MATLAB: the differentiation mechanism is not very efficient, and many operators are not supported. I tried to implement the well-known YOLOv3/v4 algorithms with the latest R2020a, but the result is still not satisfactory.
In summary, my personal suggestions are as listed above, and I hope that future versions will improve considerably.
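For readers unfamiliar with the mechanism being discussed, this is roughly how dlarray-based automatic differentiation is invoked (a hedged, toy example, unrelated to YOLO):

  % Differentiate y = x^3 at x = 2 using dlarray tracing.
  x = dlarray(2.0);
  [y,dydx] = dlfeval(@cubeAndGrad,x);   % y = 8, dydx = 3*x^2 = 12

  function [y,dydx] = cubeAndGrad(x)
      y = x.^3;
      dydx = dlgradient(y,x);           % gradient traced through the dlarray
  end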

Mark Hanslip on 11 Jul 2019
Another reason to choose MATLAB over TensorFlow etc. might be that MATLAB uses resources more efficiently than Python-based deep learning tools. No one ever says that you 'need' the latest, most expensive NVIDIA GPU to run MATLAB; in fact, only a GPU with a compute capability of 3.0 is suggested, which is pretty old school. Also, running into memory errors in Python is common and so annoying that it really compromises the workflow, whereas I've never had such an issue in MATLAB.

Yury Petrov on 1 Nov 2021
Edited: Yury Petrov on 1 Nov 2021
I mostly use TensorFlow 2 for my network training, so I tried training a MATLAB network identical to the one I use in TensorFlow most often (V-Net applied to large 192x192x192 3D images). I used the same 8-GPU cluster for both TensorFlow and MATLAB training and used the same optimizer with the same options (Adam, lr = 0.0002, beta = 0.5). MATLAB R2020b took twice as long per epoch as TensorFlow 2. When I profile the training, I can see that it takes about 0.7 s to read 8 batch images from disk, 1.9 s to forward-pass them through the network, 2-2.5 s to average the gradients among my 8 workers, and 0.25 s to update the network weights. So the most time-consuming part was averaging gradients between workers. I guess this is because the 8 GPUs are training out of sync? Maybe in TensorFlow they are better synchronized, so that they all start and finish their portions of a batch together and there is no waiting for the last one to finish? Anyway, this is actually a big improvement from 3 years ago, when I tried an MNIST classifier network and it trained 6-8 times slower in MATLAB than in TensorFlow 2 (on a single GPU).
I liked the MATLAB implementation of deep learning more than TensorFlow's, mainly because it is more explicit and leaves less guesswork, especially with custom training loops and multi-GPU programming. It is a lot easier to debug for this reason. So, MATLAB, please bring your efficiency on par with TensorFlow and PyTorch, and I'll start using MATLAB for deep learning!
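For context, the built-in trainer enables multi-GPU or cluster training through trainingOptions; a hedged sketch (these values are illustrative, not the configuration benchmarked above):

  % Enable parallel training with the built-in trainer.
  options = trainingOptions('adam', ...
      'InitialLearnRate',2e-4, ...
      'MiniBatchSize',8, ...
      'MaxEpochs',50, ...
      'ExecutionEnvironment','multi-gpu');   % or 'parallel' for a cluster pool
  % net = trainNetwork(ds,layers,options);   % ds and layers defined elsewhere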

Jack Xiao on 7 Mar 2019
I really hope MATLAB supports a Generative Adversarial Network (GAN) framework soon!

Samuel Boudet on 2 Apr 2020
I ran a performance comparison with a small program in TensorFlow 2.1 vs. MATLAB on my laptop's GTX 980M.
The test was MNIST digit classification.
Here are the MATLAB and Python codes, so do not hesitate to try them on different hardware and check whether there is any difference between the two versions, since there are a lot of hidden default parameters that I may have missed.
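The attached scripts are not reproduced here; purely as a hedged illustration of the kind of MATLAB benchmark involved, a minimal digit-classification training script might look like this (layer sizes and options are guesses, not the benchmarked configuration):

  % Small digit classifier on the toolbox's bundled digit dataset.
  [XTrain,YTrain] = digitTrain4DArrayData;   % 28x28 grayscale digit images
  layers = [
      imageInputLayer([28 28 1])
      convolution2dLayer(3,16,'Padding','same')
      batchNormalizationLayer
      reluLayer
      maxPooling2dLayer(2,'Stride',2)
      fullyConnectedLayer(10)
      softmaxLayer
      classificationLayer];
  options = trainingOptions('adam','MaxEpochs',5,'MiniBatchSize',128, ...
      'ExecutionEnvironment','gpu','Verbose',false);   % needs a supported GPU
  net = trainNetwork(XTrain,YTrain,layers,options);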
Conclusion:
108s under Windows 10 + Matlab R2019a
103s under Ubuntu + Matlab R2020a
48s under Ubuntu + Docker + Tensorflow 2.1
My personal opinion
I have been learning deep learning for only a few months; I was a MATLAB user (and lover) and I am discovering Python. My feeling is that MATLAB will be easier and faster to develop in, and the time lost to the performance difference will matter less than the development time gained. However, I think that if I need to train a very complicated model, I will need to train on cloud servers, and I do not think that would be convenient in MATLAB. But I could be wrong.
  1 Comment
Sakib Mahmud on 26 Jun 2020
Yes, if you don't personally have a good GPU (which is very costly; we use research funds to buy those things), you can always use Google Colab or similar platforms for Python. But for MATLAB, your own computer is the only resort so far...

