Welcome: Hunan Intelligent Applications Technology Co., Ltd. - HNIAT.com


A Comparison of Ten Deep Learning Frameworks

Framework comparison, in turn:

1 TensorFlow

TensorFlow is the favorite framework of people who have heard about deep learning but haven't gone very deep into it, and here I want to clarify some facts. On TensorFlow's official website it is defined as "an open source software library for machine intelligence", but I think a better definition is this: TensorFlow is an open source software library for numerical computation using data flow graphs. Here, I do not place TensorFlow in the category of deep learning frameworks, but in the category of graph compilers, together with Theano.

After completing Udacity's Deep Learning course (https://www.udacity.com/course/deep-learning-ud730), I felt that TensorFlow was a very good framework, but very low-level. Using TensorFlow requires a lot of code, and you have to reinvent the wheel over and over again. And I'm not the only one who thinks so. Andrej Karpathy has remarked several times on Twitter that he hoped TensorFlow would standardize our code, but that it is low-level, so layers have diverged on top of it: Slim, PrettyTensor, Keras, TFLearn... He has also noted that OpenAI uses TensorFlow, but everyone there seems to prefer other frameworks, and some write custom code.

A few months ago I attended "Google Experts Summit: TensorFlow, Machine Learning for everyone, with Sergio Guadarrama". Sergio is one of the engineers who develop TensorFlow, but at the conference he did not show TensorFlow itself. Instead, he showed tf.contrib (https://www.tensorflow.org/tutorials/tflearn/), a higher-level library that works on top of TensorFlow. My point is that they have realized internally that if they want more people to use TensorFlow, they need to create layers on top of it at a higher level of abstraction that simplify its use.

TensorFlow supports Python and C++, allows distributing computation across CPUs and GPUs, and even supports horizontal scaling using gRPC.

Summary: TensorFlow is great, but you have to know what it is good for.
If you don't want to do everything by hand and reinvent the wheel, use a simpler library (my recommendation: Keras).
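The phrase "numerical computation using data flow graphs" can be made concrete with a toy sketch in plain Python. This is only an illustration of the idea, not TensorFlow's actual API: a graph of operation nodes is built first and evaluated afterwards, rather than computing each value eagerly.

```python
# Toy dataflow graph: each node records an operation and its input
# nodes; eval() walks the graph instead of running code eagerly.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError("unknown op: %s" % self.op)

# Build the graph for (2 + 3) * 4 first, then run it -- the same
# build-then-run split that TensorFlow's graph mode is based on.
a = Node("const", value=2.0)
b = Node("const", value=3.0)
c = Node("add", (a, b))
d = Node("mul", (c, Node("const", value=4.0)))
print(d.eval())  # 20.0
```

The build/run split is what lets a graph compiler optimize, parallelize, or distribute the computation before any number is crunched, which is also why the resulting code feels more verbose than eager Python.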

2 Theano

Theano is one of the oldest and most stable libraries. As far as I know, deep learning libraries began with either Caffe or Theano. Like TensorFlow, Theano is a relatively low-level library; as such, it is not so much a deep learning framework as a library for numerical optimization. It supports automatic computation of function gradients and has a Python interface that integrates with NumPy, which made it one of the most commonly used libraries in general deep learning from early on. Today Theano still works well, but because it does not support multiple GPUs or horizontal scaling, it is starting to be eclipsed by the TensorFlow boom, which targets the same niche.
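"Automatic function gradient calculation" is the key feature here. The sketch below is a minimal reverse-mode automatic differentiation in plain Python, illustrating the technique (not Theano's API, which instead builds symbolic expressions and calls theano.grad):

```python
# Minimal reverse-mode autodiff: each Var remembers how to push its
# gradient back to the variables it was computed from.
class Var:
    def __init__(self, value):
        self.value, self.grad = value, 0.0
        self._backward, self._parents = lambda: None, ()

    def __add__(self, other):
        out = Var(self.value + other.value)
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward, out._parents = _backward, (self, other)
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def _backward():
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out._backward, out._parents = _backward, (self, other)
        return out

def backward(out):
    # Topological order, then apply the chain rule in reverse.
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p in v._parents:
                visit(p)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):
        v._backward()

x = Var(3.0)
y = x * x + x      # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
backward(y)
print(x.grad)      # 7.0
```

Libraries like Theano do this over whole symbolic graphs, so the user writes only the forward expression and never derives a gradient by hand.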

3 Keras

"You have just found Keras."

That sentence is the first thing you see when you open the documentation. Keras has a clear syntax, excellent (albeit relatively new) documentation, and supports Python, a language I already know. It is very simple and easy to use; you can also intuitively understand its instructions, functions and the links between modules. Keras is a very high-level library that works on top of Theano or TensorFlow (configurable). In addition, Keras emphasizes minimalism: you only need a few lines of code to build a neural network. You can compare for yourself the code that Keras and TensorFlow need to implement the same functionality.
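To illustrate what that layer-stacking minimalism feels like, here is a toy Sequential-style API in plain Python. It mimics the style Keras popularized, but it is a sketch of the idea only, not the Keras API itself (real Keras code would use keras.models.Sequential and Dense layers):

```python
import random

random.seed(0)  # reproducible toy weights

class Dense:
    """A fully connected layer with random weights (no training here)."""
    def __init__(self, n_in, n_out):
        self.w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                  for _ in range(n_out)]
        self.b = [0.0] * n_out

    def __call__(self, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) + b
                for row, b in zip(self.w, self.b)]

def relu(x):
    return [max(0.0, v) for v in x]

class Sequential:
    """Chain callables: the output of each layer feeds the next."""
    def __init__(self, layers):
        self.layers = layers

    def predict(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Building a two-layer network really is just one line of stacking:
model = Sequential([Dense(4, 8), relu, Dense(8, 2)])
print(model.predict([1.0, 2.0, 3.0, 4.0]))  # two output values
```

The point of the design is that the user thinks in layers and never touches the plumbing between them, which is exactly what low-level TensorFlow code forces you to write out by hand.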

4 Lasagne

Lasagne is a library that works on top of Theano. Its mission is to abstract away some of the complex computation underlying deep learning algorithms and to offer a friendlier interface (also in Python). It is an old library, and for a long time it has been an extensible tool; but in my opinion it is not growing as fast as Keras. Their application domains are similar, but Keras has better and more complete documentation.

5 Caffe

Caffe is one of the oldest frameworks around. In my opinion, Caffe has some very good characteristics, but also some minor shortcomings. Initially it was not a general-purpose framework but focused only on computer vision, at which it is very good. In our lab's experiments, training the CaffeNet architecture took five times less time in Caffe than in Keras (using the Theano backend). Caffe's disadvantage is that it is not flexible enough: if you want to change anything, you need to program in C++ and CUDA, although you can use the Python or MATLAB interfaces for minor changes. Caffe's documentation is also very poor, and one of its biggest drawbacks is installation, which requires resolving a lot of dependencies... I have installed Caffe twice and it was genuinely painful. But to be clear, Caffe is not useless: it is the undisputed leader among tools put into production for computer vision systems; it is very robust and very fast. My suggestion: experiment and test with Keras, then migrate to Caffe for production.
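Part of why small changes are awkward in Caffe is that networks are defined declaratively in protobuf text files (prototxt) rather than in code: anything the config format does not express requires touching the C++/CUDA layer catalogue. A typical layer definition looks roughly like this (the names and parameter values here are illustrative):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"    # input blob
  top: "conv1"      # output blob
  convolution_param {
    num_output: 32  # number of filters
    kernel_size: 3
    stride: 1
  }
}
```

This declarative style is great for reproducible production deployments, and much less so for quick experimentation.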


6 DSSTNE

DSSTNE, pronounced like "Destiny", is a cool framework that is constantly overlooked. Why? Among other factors, because it is not general-purpose and is not designed for common tasks. The DSSTNE framework does only one thing, recommendation systems, but it does it to the extreme. It was designed neither for research nor for testing ideas; it was designed for mass production.

We have done some experiments at BEEVA, and so far my feeling is that it is a very fast tool that gets very good results (a high mean average precision, mAP). To achieve this speed, DSSTNE runs on GPU, which is also one of its drawbacks: unlike the other frameworks and libraries analyzed in this article, it does not let the user switch freely between CPU and GPU, which could be useful for some experiments, but DSSTNE does not allow it.

My other impression is that DSSTNE is not yet a mature project; it is packaged too tightly (a "black box"). If you want a deeper understanding of how the framework operates, you must (and can only) read its source code, and a lot of necessary settings ("TODO") remain incomplete. At the same time, there are not many online tutorials for this framework, and even fewer guides for developers who want to experiment with it. My opinion is to wait another four months and look at the next version of DSSTNE. It is certainly an interesting project, but it still needs room to grow. I would also point out that this framework does not require programming skills: it is operated through commands at its terminal.

Beyond these, there are many other frameworks and libraries that I know are very popular but that I have not used yet, so I cannot give more details about them.

7 Torch

There are wars being fought in the world every day, but a good "warrior" ("guerrero" in Spanish) must know which wars to fight and which to sit out. Torch is a well-known framework: the AI research framework of the giant Facebook is Torch, and DeepMind used Torch before it was acquired by Google (after the acquisition, DeepMind moved to TensorFlow). Torch's programming language is Lua, and that is exactly the "war" I mean. With the current trend of implementing the majority of deep learning work in Python, being based on the Lua programming language is a framework's greatest disadvantage. I have never used that language, and if I wanted to use Torch, I would undoubtedly have to learn Lua first. That is a reasonable process in itself, but in my personal case, I prefer frameworks implemented in Python, MATLAB or C++.

8 MXNet

MXNet is one of the frameworks that supports the most programming languages, including Python, R, C++ and Julia. I think developers who use R will particularly appreciate MXNet, because until now Python has dominated deep learning languages in an undisputed way (Python versus R: guess which side I am on? :-p).

To be honest, I had not paid much attention to MXNet before, but when Amazon AWS announced that it had chosen MXNet as the deep learning library for its AMIs, I had to take a look. Later I learned that Amazon had made MXNet its reference deep learning library and praised its enormous horizontal scalability. I sensed that something new was happening and that I had to understand it in depth. That is why MXNet is on our BEEVA technology test list for 2017. I have some doubts about its multi-GPU scalability and would like to see more details of such experiments, but for now I remain skeptical about MXNet.

9 DL4J

I came across this library because of its documentation. I was looking at restricted Boltzmann machines and autoencoders, and I found the documents for both in DL4J. The documentation is clear, with theory and code examples. I have to say that DL4J's documentation is a work of art, and other libraries should learn from it when documenting code. Skymind, the company behind DL4J, realized that although Python is the leader in the deep learning community, most programmers come from Java, so a solution had to be found. DL4J runs on the JVM and is usable from Java, Clojure and Scala. With the rise of Scala, it is also being used by many promising start-ups, so I will keep following this library closely. In addition, Skymind's Twitter account is very active, constantly publishing the latest scientific papers, examples and tutorials; I recommend following it.
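For readers unfamiliar with the autoencoder mentioned above: it maps an input to a smaller code and then reconstructs the input from that code. The tiny sketch below, in plain Python with hand-picked (not learned) weights, shows why compression can be lossless when the data has structure; the weights are chosen so that reconstruction is exact for points on the line x1 = 2*x0:

```python
# Encoder: project the 2-D input down to a single number.
# The weights satisfy 0.25 + 2 * 0.375 = 1.0, so points of the
# form (t, 2t) are encoded as exactly t.
def encode(x):
    return 0.25 * x[0] + 0.375 * x[1]

# Decoder: expand the 1-number code back to 2-D along (1, 2).
def decode(c):
    return (c * 1.0, c * 2.0)

x = (4.0, 8.0)          # a point on the line x1 = 2*x0
c = encode(x)           # c = 1.0 + 3.0 = 4.0
print(decode(c))        # (4.0, 8.0): perfect reconstruction
```

In a real autoencoder (as in the DL4J tutorials), the encoder and decoder are neural networks and these weights are learned by minimizing the reconstruction error instead of being set by hand.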

10 Cognitive Toolkit

The Cognitive Toolkit was previously known as CNTK, but it was recently renamed, probably to align with Microsoft Cognitive Services. In published benchmarks it looks like a very powerful tool, supporting both vertical and horizontal scaling. So far, the Cognitive Toolkit does not seem very popular: I have not seen many blogs, online experiments, or comments about using this library on Kaggle.

