Deep learning is a type of machine learning in which a model learns to
perform classification tasks directly from images, text, or sound. Deep
learning is usually implemented using a neural network architecture. The
term “deep” refers to the number of layers in the network—the more layers,
the deeper the network. Traditional neural networks contain only 2 or 3
layers, while deep networks can have hundreds.
Deep learning uses many hidden layers in an artificial neural network to train a model that learns representations directly from the data, without hand-engineered features. It is used across many disciplines to let computers learn from vast amounts of data, and recent advances in computer vision, object detection, and natural language processing can be attributed to the adoption of deep learning techniques.
Here is a book that should help: http://irishsecure.com/books/Deep_Learning_ebook.pdf
From Wikipedia, deep learning methods:
- use a cascade of many layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. The algorithms may be supervised or unsupervised and applications include pattern analysis (unsupervised) and classification (supervised).
- are based on the (unsupervised) learning of multiple levels of features or representations of the data. Higher-level features are derived from lower-level features to form a hierarchical representation.
- are part of the broader machine learning field of learning representations of data.
- learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
In a simple case, there might be two sets of neurons: one that receives an input signal and one that sends an output signal. When the input layer receives an input, it passes a modified version of it on to the next layer. In a deep network there are many layers between the input and the output (the layers are not literally made of neurons, but it can help to think of them that way), allowing the algorithm to apply multiple processing layers composed of multiple linear and non-linear transformations.
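That cascade of "linear transformation followed by a non-linearity" can be sketched in plain NumPy. This is only an illustration, not any particular model: the layer sizes, random weights, and choice of ReLU are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small cascade: input (4) -> hidden (8) -> hidden (8) -> output (3).
# Each layer is a linear transform (weights, bias) followed by a
# non-linearity; stacking many such layers is what makes a network "deep".
layer_sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each successive layer uses the previous layer's output as its input.
    for w, b in zip(weights, biases):
        x = np.maximum(x @ w + b, 0.0)  # linear step, then ReLU
    return x

out = forward(rng.standard_normal(4))
print(out.shape)  # (3,)
```

Training would then adjust `weights` and `biases` so the outputs match known labels, but the forward pass above is the essential structure the definitions describe.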
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
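In Caffe, a network is described declaratively in a protobuf text file (a `.prototxt`) rather than in code. As a rough illustration, a single fully connected layer might be declared like this (the `name`, `bottom`, and `top` values here are made-up placeholders):

```
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param {
    num_output: 10
  }
}
```

A full model file chains many such `layer` blocks, each consuming the `top` blobs produced by earlier layers, which mirrors the layer-by-layer cascade described above.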
Check out the demo at http://demo.caffe.berkeleyvision.org/classify_upload
You should now be ready for the lesson.
Got it? Leave a comment.