The most popular recent trend in machine learning is deep learning (DL). Traditional connectionist machine learning models connect nonlinear (neural) processing units in shallow hierarchies, or layers, to solve classification, regression, or dimensionality-reduction tasks. Shallow networks, however, can discover and use only a limited range of features (combinations of input variables), while deeper hierarchies are more difficult to train. Deep learning leverages advances in computational power and statistical learning theory to extend standard connectionist models with deeply stacked layers of processing units.
Deep stacks enable the model to identify complex data features that enhance its performance. Depth of architecture has been considered throughout much of the research history of machine learning; the main obstacle to exploiting deep architectures was limited computing power. Training a machine learning model means solving an optimization problem whose difficulty grows rapidly with the number of tunable parameters. Neural network models, both shallow and deep, typically rely on the backpropagation algorithm for parameter tuning. Because deep models have far more parameters than shallow ones, they can be prohibitively computationally intensive, and they often require larger data sets for training as well. The key factors that make DL models feasible are therefore the existence of large application-relevant data sets and massive computing power. Our previous discussion already highlighted the growth of the ecosystem of behaviour-related data; available computational power has also grown explosively.
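To make the parameter-tuning step concrete, the following is a minimal sketch of backpropagation for a shallow (one-hidden-layer) network on a toy XOR data set, a task no zero-hidden-layer (linear) model can solve. The network size, learning rate, and iteration count are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR. A linear model cannot separate these classes,
# which is why at least one hidden layer of nonlinear units is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tunable parameters of a shallow network with 8 hidden units (arbitrary choice).
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5          # learning rate (illustrative)
losses = []
for _ in range(3000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: gradients of the squared error w.r.t. each parameter.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent update: the "parameter tuning" step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print("predictions:", preds, "final loss:", losses[-1])
```

Every weight matrix here is a block of tunable parameters; a deeper network would simply stack more such blocks, multiplying the number of gradients to compute and store per update, which is the computational burden the text describes.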
The ultimate goal of AI is to develop human-like intelligence in machines, a dream pursued through learning algorithms that try to mimic how the human brain learns. Machine learning, a field that grew out of artificial intelligence, is of utmost importance because it enables machines to gain human-like capabilities without explicit programming. AI programs already do interesting things such as web search, photo tagging, and email spam filtering. Machine learning was thus developed as a new capability for computers, and today it touches many segments of industry and basic science, from autonomous robotics to computational biology.
The main purpose of machine learning is to develop algorithms that assist in the creation of intelligent machines, reducing the workload of programmers as the machine learns over time to improve its performance. Although much progress has been made in this field, glaring limitations remain in the data sets from which machines learn. These can be mitigated by constantly keeping the data sets up to date, since learning is a continuous process. Beyond this issue, a great number of publications on machine learning evaluate new algorithms only on a handful of isolated benchmark data sets. In spite of these shortcomings, machine learning has solved varied problems of global impact.
Machine learning has proven extremely useful in a variety of fields such as data mining, artificial intelligence, optical character recognition (OCR), statistics, computer vision, and mathematical optimization, and its importance continues to grow. Machine learning theories and algorithms are inspired by biological learning systems, whose performance depends on factors such as the amount of data available and the history and experience of learning, and they thus also help to explain human learning. A future challenge is to use machine learning concepts to develop automated emergency prescription for patients in critical condition, minimizing diagnostic error.