Deep Learning is fast establishing itself as the mainstream modern approach to pattern recognition for perceptual tasks. On some well-defined tasks it already outperforms humans, making fewer errors at everything from identifying items in a photo to finding tumours in MRI scans. When you talk to Siri, Google voice, Cortana, or Skype Translate, your speech is being interpreted by a Deep Neural Network.
Whether it's targeted at product recommendations, autonomous vehicles, defeating the world Go champion, or predicting financial markets, Deep Learning is being applied in social media, defence/intelligence, consumer electronics, medicine, energy, media & entertainment, finance, robotics, and beyond… so, where next?
Deep Learning is not a new idea, and it isn’t complicated. The simple idea is to train a very deep neural network (one with many layers) on huge amounts of training data. We know that a properly configured neural network can approximate any function (the universal approximation theorem). The problem then becomes how to properly configure that network. This is where Deep Learning comes in: instead of solving a problem in one or two big steps, as shallower neural networks do, a Deep Neural Network, with its many layers, solves the problem in lots of little steps, because smaller steps are easier to learn. This makes it possible to solve problems or perform tasks that have previously eluded us.
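The "many little steps" idea can be sketched in a few lines of code: a deep network is just a composition of simple layers, each applying a linear map followed by a non-linearity. The weights below are made up for illustration (in practice they would be learned from training data); this is a minimal sketch, not a real trained model.

```python
def relu(x):
    # Simple non-linearity: negative values become zero.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # One "small step": a weighted sum per neuron, then a non-linearity.
    out = []
    for w_row, b in zip(weights, biases):
        out.append(sum(w * x for w, x in zip(w_row, inputs)) + b)
    return relu(out)

def deep_network(x, layers):
    # A deep network is just many small steps composed in sequence:
    # each layer slightly transforms the previous layer's representation.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# A toy 3-layer network on a 2-dimensional input (weights chosen arbitrarily).
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),
    ([[0.2, 0.8], [-0.3, 0.4]], [0.05, 0.0]),
    ([[1.0, 1.0]], [0.0]),
]
print(deep_network([0.6, 0.4], layers))
```

Each layer on its own does very little; depth comes from stacking many of them, which is exactly what makes each individual step easy to learn.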
While Deep Learning is not a new idea, training typically requires a large training set and is so computationally expensive that only with the arrival of graphics cards as massively parallel processors have we finally been able to realise its potential. Training Deep Neural Networks is now almost exclusively done on graphics cards (GPUs); however, the resulting deep network can often be deployed as a relatively light load running on embedded hardware, or even a smartphone.
By far the most commonly used Deep Learning model is the Convolutional Neural Network, originally designed for computer vision problems and now widely applied to all kinds of perceptual problems (labelling complex data): for example, identifying a plant from a photo, an individual from a security camera, a tank from a satellite image, a tumour from an MRI scan, or words from speech. But those labels can just as easily be actions (e.g. steering a drone, or playing Go) or predictions (e.g. when to buy or sell).
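The core operation behind a Convolutional Neural Network is the convolution itself: a small filter slides over the input, responding wherever it finds the pattern it encodes. As an illustrative sketch (the filter here is hand-picked, whereas a real CNN learns its filters from data), the following detects a vertical edge in a tiny greyscale image:

```python
def convolve2d(image, kernel):
    """Slide a small kernel over the image ('valid' positions only)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Sum of element-wise products between kernel and image patch.
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 image with a sharp vertical edge between columns 1 and 2.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Hand-picked vertical-edge filter: fires where left and right columns differ.
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))
```

The output is strongest exactly where the edge sits; a CNN stacks many such filters, layer upon layer, so that early layers find edges and later layers find ever more abstract patterns, right up to "plant", "tank", or "tumour".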
Once trained, Deep Learning models are easily embedded into larger hybrid systems enabling new technologies. For example, any autonomous car must be able to reliably recognise road signs, pedestrians, cyclists, other cars, and so on.
Ultimately, Machine Learning and Deep Learning models find complex patterns and relationships in your data. How you use that is up to you. With Deep Learning, your initiative's future is what you choose for it.