What does the Future of Deep Learning Look Like?
Deep learning may be hot right now, but some variant of it, or something new altogether, may emerge later. Let me point out the reasons I feel deep learning may be getting old soon.
Slow Learner
Deep learning converges to an optimal solution slowly, although GPU acceleration can improve training speed dramatically. Convergence speed is governed by the learning rate, and adjusting the learning rate affects the reliability of the resulting deep neural net.
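The trade-off is easy to see on even a toy objective. Below is a minimal sketch (plain gradient descent on f(x) = x², with illustrative learning rates I chose for this example): a small rate converges slowly, a moderate one converges well, and a large one diverges.

```python
def gradient_descent(lr, steps=100):
    """Minimize f(x) = x^2 with plain gradient descent, starting at x = 10."""
    x = 10.0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x

slow = gradient_descent(lr=0.01)      # still far from the optimum at 0
good = gradient_descent(lr=0.1)       # very close to 0
diverged = gradient_descent(lr=1.1)   # overshoots every step and blows up
```

Real networks are non-convex and far higher-dimensional, but the same tension between speed and stability of the learning rate applies.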
Huge Training Data Requirement
Deep learning requires huge amounts of training data to achieve good performance: the large number of parameters to adjust demands a correspondingly large number of examples. This heavy data requirement makes deep learning unwieldy, and even with such huge training sets, deep neural nets still show error rates of about 10% on some tasks.
Overfitting
Deep learning tends to overfit easily. The dropout technique can mitigate this problem, but with consequences of its own, such as an increase in error rates in some settings.
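For readers unfamiliar with the technique, here is a minimal sketch of inverted dropout in NumPy (the commonly used formulation; the drop probability of 0.5 is just an illustrative choice): units are randomly zeroed during training and the survivors are rescaled so the expected activation is unchanged, while inference leaves activations untouched.

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: randomly zero units with probability p_drop during
    training and rescale the rest by 1/(1 - p_drop); do nothing at inference."""
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 8))
h_train = dropout(h, p_drop=0.5, training=True)   # surviving units become 2.0, the rest 0.0
h_eval = dropout(h, training=False)               # returned unchanged
```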
Poor Initial State
Deep neural nets have parameters that need to be initialized. The most common method is random initialization, which leaves the network in a very poor initial state. Compare this with a mammalian brain: a brain is born with some rigid instincts, such as basic survival behavior patterns.
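Why the initialization scale matters can be shown in a few lines. The sketch below (my own illustrative example, not from the original text) contrasts naive unit-variance random weights, which blow up activation magnitudes layer after layer, with Glorot/Xavier initialization, one standard remedy that chooses the scale to keep activation variance roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_init(fan_in, fan_out):
    # Unit-variance weights: each output sums fan_in terms, so its
    # standard deviation grows by roughly sqrt(fan_in) per layer.
    return rng.normal(0.0, 1.0, size=(fan_in, fan_out))

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier scaling keeps activation variance roughly constant
    # across layers, giving a much better starting point.
    scale = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, scale, size=(fan_in, fan_out))

x = rng.normal(size=(32, 256))                  # a batch of unit-variance inputs
naive_std = np.std(x @ naive_init(256, 256))    # ~16: activations blown up
xavier_std = np.std(x @ xavier_init(256, 256))  # ~1: variance preserved
```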
The human brain also shows a learning pattern that moves from instance-level recognition to category-level recognition as more data becomes available, and it prefers higher false-negative rates over higher false-positive rates when training data is scarce. Deep neural nets, by contrast, are at the mercy of their initialization method, and their initial state is highly unpredictable in terms of behavior.
Sensory Data Transformations
In visual object recognition tasks, for example, images undergo various geometric and photometric transformations that a recognition system needs to model in order to rectify new image observations. Deep learning as used today does not take such transformations into account explicitly; this is one of the reasons deep neural nets still suffer from relatively high error rates (relative to a human).
One face-recognition algorithm, for example, explicitly models a flat, distorted face as a 3D model and affine-warps the face to a canonical (frontal) form before feeding it to a deep neural net. Such normalization of sensory data with respect to known transformations can improve recognition accuracy dramatically.
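The core of such a normalization step is an inverse warp: for each pixel of the canonical output, sample the input image at the inverse-mapped location. Here is a minimal 2D sketch in NumPy (nearest-neighbor sampling, a hypothetical 15° rotation, and a random test image; real pipelines use interpolation and estimate the transform from detected landmarks).

```python
import numpy as np

def affine_warp(image, A, t):
    """Warp a grayscale image to a canonical frame: each output pixel (x, y)
    is sampled from the input at A^-1 ((x, y) - t), nearest neighbor."""
    h, w = image.shape
    out = np.zeros_like(image)
    A_inv = np.linalg.inv(A)
    for y in range(h):
        for x in range(w):
            src = A_inv @ (np.array([x, y], dtype=float) - t)
            sx, sy = int(round(src[0])), int(round(src[1]))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

# Hypothetical example: undo a small rotation before recognition.
theta = np.deg2rad(15)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
img = np.random.rand(32, 32)
canonical = affine_warp(img, A, t=np.zeros(2))
```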
Algorithms May Emerge
These are some of the major reasons why deep learning needs improvement, and algorithms that model the data better may well emerge in the future. Since sensory data, especially visual data, can undergo translation, scaling, rotation, photometric distortions, etc., simply applying a max-pooling or sum-pooling operation is not the best way to deal with these transformations.
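To make the limitation concrete, here is a minimal NumPy sketch of non-overlapping 2×2 max-pooling (my own toy example): pooling keeps only the local maximum in each window, which tolerates tiny shifts within a window but changes its output under shifts that cross window boundaries, let alone rotation or scaling.

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max-pooling over a 2D feature map."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]  # trim so dimensions divide evenly
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)
pooled = max_pool2d(fmap)                        # 2x2 map of local maxima
shifted = max_pool2d(np.roll(fmap, 1, axis=1))   # a one-pixel shift already changes the output
```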
Explicitly modeling such transformations and recovering the models from observed sensory data could do wonders for recognition accuracy. Such algorithms have yet to be discovered; they may turn out to be extensions of deep learning, or something completely new.
So What Does the Future Hold?
In my opinion, deep learning is getting older and dying a slow death. If this trend continues, deep learning may fade away with time. It can survive through good collaboration between artificial intelligence and other domains of machine learning. Beyond that, it needs to check all the boxes above to maintain its crucial presence in the future.
Last but not least, it will be interesting to see which prediction comes true. Whether we'll say:
"Long Live Deep Learning."
or
"Rest in Peace, Deep Learning."
What Do you Think?
What are your thoughts on deep learning and its future?
Drop your views in the comment section below. Let’s discuss that.
That’s all from this post. Keep visiting for more stuff on machine learning. Cheers! 💡