Caffe is not exactly new; on the contrary, it is one of the oldest frameworks. In my opinion, Caffe has some very good qualities along with a few shortcomings. It started out not as a general-purpose framework but as one focused purely on computer vision, yet it generalizes surprisingly well. In our lab experiments, training the CaffeNet architecture took five times less time in Caffe than in Keras (with the Theano backend). Caffe's disadvantage is that it is not flexible enough: if you want to change anything substantial, you have to program in C++ and CUDA, although the Python and MATLAB interfaces are enough for minor changes. Caffe's documentation is also very poor. One of its biggest drawbacks is installation, which requires resolving a long list of dependencies... I have installed Caffe twice and it was really painful both times. But to be clear, Caffe is far from useless. It is the undisputed leader among the tools used to put computer vision systems into production; it is very robust and very fast. My suggestion: experiment and prototype with Keras, then migrate to Caffe for production.
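To give an idea of what that Python interface looks like, here is a minimal, hedged sketch of running inference through pycaffe. The file names deploy.prototxt and caffenet.caffemodel are placeholders for whatever model definition and weights you deploy, and the 'data'/'prob' blob names are those of the standard CaffeNet deploy file.

    import numpy as np
    import caffe  # pycaffe, built alongside Caffe itself

    caffe.set_mode_gpu()  # or caffe.set_mode_cpu() on a machine without CUDA

    # Placeholder file names: substitute your own deploy prototxt and trained weights.
    net = caffe.Net('deploy.prototxt', 'caffenet.caffemodel', caffe.TEST)

    # CaffeNet expects 227x227 3-channel inputs; feed a dummy batch of one image here.
    net.blobs['data'].reshape(1, 3, 227, 227)
    net.blobs['data'].data[...] = np.random.rand(1, 3, 227, 227)

    out = net.forward()
    print(out['prob'].argmax())  # index of the highest-scoring class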
6 DSSTNE
DSSTNE (pronounced like "Destiny") is a cool framework that is constantly overlooked. Why? Among other reasons, because it is not general-purpose and is not designed for the usual tasks. DSSTNE does only one thing, recommendation systems, but it does it to the extreme: it is designed neither for research nor for testing ideas, but for large-scale production. We have run some experiments at BEEVA, and so far my feeling is that it is a very fast tool that gives very good results (high mAP). To achieve this speed, DSSTNE runs on GPUs, which is also one of its drawbacks: unlike the other frameworks and libraries analyzed in this article, it does not let the user switch freely between CPU and GPU, something that can be useful for certain experiments but that DSSTNE simply does not allow. My other impression is that DSSTNE is not yet a mature project, and it is packaged too tightly (a "black box"). If you want a deeper understanding of how the framework works, the only option is to read its source code, and there are plenty of unfinished "TODO"s to get through along the way. There are also few online tutorials on this framework, and even fewer guides for developers who want to experiment with it. My opinion: wait another four months and see what the next release of DSSTNE brings. It is certainly an interesting project, but it still needs room to grow. I would also point out that this framework does not require programming skills; all of its operations are driven from the command line of its terminal.
The frameworks and libraries that follow are ones I know to be very popular but have not used myself yet, so I cannot give as much detail about them.
7 Torch
There are wars being fought in the world every day, and a good "warrior" (Guerrero in Spanish) must know which wars to fight and which to walk away from. Torch is a very well-known framework: Facebook's AI research framework is Torch, and DeepMind used Torch before being acquired by Google (after the acquisition, DeepMind switched to TensorFlow). Torch's programming language is Lua, and that is exactly the "war" I was referring to. At a time when the vast majority of deep learning programming is done in Python, a framework based on Lua is at a serious disadvantage. I have never used that language, and if I wanted to use Torch I would, without a doubt, have to learn Lua first. That is a perfectly reasonable process, but in my personal case I prefer frameworks implemented in Python, MATLAB or C++.
8 MXNet
MXNet is one of the frameworks that supports the most programming languages, including Python, R, C++ and Julia, among others. I think R users in particular will appreciate MXNet, because so far Python has dominated deep learning in an undisputed way (Python versus R, guess where I stand? :-p). To be honest, I had not paid much attention to MXNet before, but when Amazon AWS announced it had chosen MXNet as the deep learning library for its AMI, I had to take a look. Later I learned that Amazon had made MXNet its reference library for deep learning and was claiming enormous horizontal scalability. I sensed that something new was happening and that I needed to understand it in depth, which is why MXNet is on our BEEVA technology test list for 2017. I still have some doubts about its multi-GPU scalability and would like to see more details of such experiments, so for now I remain somewhat skeptical about MXNet.
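For reference, here is a minimal, hedged sketch of what a model looks like with MXNet's classic symbol/module Python API (pre-Gluon). The tiny network and the random data are placeholders just to make the sketch runnable; on a multi-GPU machine the context list could be swapped for [mx.gpu(0), mx.gpu(1)] to try out the horizontal scaling mentioned above.

    import numpy as np
    import mxnet as mx

    # A tiny multilayer perceptron defined with the symbolic API.
    data = mx.sym.Variable('data')
    fc1 = mx.sym.FullyConnected(data, num_hidden=64, name='fc1')
    act1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
    fc2 = mx.sym.FullyConnected(act1, num_hidden=10, name='fc2')
    mlp = mx.sym.SoftmaxOutput(fc2, name='softmax')

    # Dummy data: 1000 samples of 100 features, 10 classes.
    X = np.random.rand(1000, 100).astype('float32')
    y = np.random.randint(0, 10, size=1000)
    train_iter = mx.io.NDArrayIter(data=X, label=y, batch_size=50)

    # The device list is where multi-GPU scaling comes in:
    # replace [mx.cpu()] with [mx.gpu(0), mx.gpu(1)] to train on two GPUs.
    mod = mx.mod.Module(symbol=mlp, context=[mx.cpu()])
    mod.fit(train_iter, num_epoch=2,
            optimizer='sgd', optimizer_params={'learning_rate': 0.1})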
9 DL4J
I came across this library through its documentation. I was looking for restricted Boltzmann machines and autoencoders, and I found both documented in DL4J: the documents are clear, with the theory and with code examples. I have to say that DL4J's documentation is a work of art, and other libraries should learn from it when documenting their code. Skymind, the company behind DL4J, realized that while Python rules the deep learning community, the majority of programmers come from Java, so a solution had to be found. DL4J is JVM-compatible and works with Java, Clojure and Scala, and with the rise of Scala and its adoption by many promising start-ups, I will keep following this library closely. In addition, Skymind's Twitter account is very active, constantly posting the latest scientific papers, examples and tutorials; I recommend following it.
10 Cognitive Toolkit
The Cognitive Toolkit was previously known as CNTK and was recently renamed, probably to align with Microsoft Cognitive Services. In the published benchmarks it looks very powerful, supporting both vertical and horizontal scaling. So far, though, the Cognitive Toolkit does not seem very popular: I have not come across many blogs, online experiments or Kaggle comments about using this library.