* Even though supervised learning is the most popular and frequently used family of algorithms among all the learning processes, all machine learning algorithms follow the same logic: they read small or large sets of data at a time, find meaningful patterns in the given dataset, identify the features that contribute most, and then determine whether any model usefully fits the data.
* The central idea of any learning process is that reality can be represented by a mathematical function which the algorithm does not know in advance but can approximate from the data, and then use to make important findings and predictions. This concept is the core idea behind all kinds of machine learning algorithms, as the sketch below illustrates.
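As a concrete illustration of this idea, here is a minimal Python sketch (an assumed example, not taken from the post itself): the "hidden" target function stands in for the unknown real-world process, the learner only ever sees noisy input-output samples, and yet it recovers a usable approximation purely from the data.

```python
# Minimal sketch: recover an unknown function from observed data alone.
import numpy as np

rng = np.random.default_rng(0)

def hidden_target(x):
    # The real-world relationship the algorithm must discover from data.
    return 3.0 * x + 2.0

x = rng.uniform(0, 10, size=200)                  # observed inputs
y = hidden_target(x) + rng.normal(0, 1, 200)      # observed outputs, with noise

# Fit a degree-1 polynomial: the learned stand-in for the hidden function.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned function: y ~ {slope:.2f} * x + {intercept:.2f}")

# The learned function can now predict outputs for inputs it has never seen.
print("prediction at x = 4:", slope * 4 + intercept)
```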
* As several readings confirm, experts on machine learning regard supervised learning, and classification in particular, as the most pivotal of all the learning types, and its inner workings provide explanations that can be extended to other machine learning approaches as well.
* The objective of a supervised learning classifier is to assign a class to an example after examining some of its characteristics. Such characteristics are called "features", and they can be either quantitative (numeric values) or qualitative (string labels), as shown in the sketch below.
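To make the distinction concrete, here is a small hypothetical sketch (the feature names are illustrative, not from the post) showing quantitative and qualitative features side by side, and how the qualitative ones are usually encoded as numbers before a classifier can use them:

```python
# Each example mixes quantitative features (numbers) and qualitative features (labels).
examples = [
    {"height_m": 12.5, "trunk_width_cm": 40, "leaf_shape": "oval",   "bark": "brown"},
    {"height_m": 0.8,  "trunk_width_cm": 5,  "leaf_shape": "needle", "bark": "grey"},
]

# One-hot encode the qualitative "leaf_shape" feature into numeric columns.
leaf_shapes = sorted({e["leaf_shape"] for e in examples})
for e in examples:
    encoded = [e["height_m"], e["trunk_width_cm"]] + [
        1 if e["leaf_shape"] == shape else 0 for shape in leaf_shapes
    ]
    print(encoded)
```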
* In order to assign classes correctly, a classifier must first closely examine a certain number of known examples (examples that already have a class assigned to them), each of which is described by the same kinds of features as the examples that do not yet have a class.
* This training phase involves the classifier observing many such examples, which helps the algorithm learn enough about the problem to answer with a class whenever it later sees an example without one; the sketch after this bullet shows the two steps in code.
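As a minimal sketch of the two phases described above (assuming scikit-learn is available; the data is made up for illustration), a classifier is first fit on labelled examples and then asked to label new, unseen ones:

```python
# Train on labelled examples, then predict classes for unlabelled ones.
from sklearn.tree import DecisionTreeClassifier

# Known examples: each row is [height_m, trunk_width_cm], with a class label.
X_train = [[12.5, 40], [9.0, 30], [0.8, 5], [1.2, 8]]
y_train = ["tree", "tree", "shrub", "shrub"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)            # training phase: observe labelled examples

# New examples described by the same kinds of features, but without a class.
X_new = [[11.0, 35], [0.9, 6]]
print(clf.predict(X_new))            # -> ['tree' 'shrub']
```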
* We can relate to what happens during training by imagining a child learning to distinguish trees from other objects. The child does not learn the attributes of a tree in a single encounter; rather, each time the child sees a tree it picks up associated attributes that resemble those of a tree. Gradually this becomes a process that repeats again and again, whenever the eyes perceive the scene and the brain processes it together with conscious recognition of the environment. So whenever the image of a tree comes to mind, the perception is kindled again and the child's mental picture of a tree is refined.
* So, whenever a similar tree bearing leaves, a green texture and brown bark comes to the child's mind, the child becomes attuned to that perception, which also helps in recognising other similar objects nearby. All of this helps the child form an idea of what a tree looks like by contrasting tree features with images of different objects, such as pieces of furniture that are made of wood but do not share the other characteristics of a tree.
* A machine learning classifier works in the same way. The algorithm builds its cognitive capability by creating a mathematical formulation that combines all the given features into a function capable of distinguishing one class from another; a simple example of such a formulation follows this bullet.
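One of the simplest forms such a formulation can take (shown here as an assumed illustration, not the specific model the post has in mind) is a weighted combination of the features whose sign decides the class:

```python
# Minimal sketch of a linear decision function: each feature is multiplied by a
# weight, the results are summed with a bias, and the sign of the total decides
# the class. The weights and bias here are hand-picked for illustration; during
# learning they would be found automatically from the data.
def decision_function(features, weights, bias):
    return sum(w * f for w, f in zip(weights, features)) + bias

weights = [0.4, 0.1]        # one weight per feature: [height_m, trunk_width_cm]
bias = -3.0

for features in ([12.5, 40], [0.8, 5]):
    score = decision_function(features, weights, bias)
    label = "tree" if score > 0 else "not a tree"
    print(features, "->", label)
```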
* The ability to express such a mathematical formulation is the representation capability of a classifier. From a mathematical perspective, one can describe this representation process using the concept of "mapping". Mapping takes place when one discovers the construction of a function by observing its outputs: the process is a retrospective one, in which the function is reconstructed from its outputs through proper consideration of the inputs that produced them. A successful mapping in machine learning is similar to a child internalising the idea of an object: the child develops the skill of learning from the environment and then uses the acquired knowledge to distinguish objects when the need arises. Having internalised these abstract rules derived from the facts of the world, the child will immediately recognise a tree whenever one appears.
* Such a representation (using abstract rules derived from real-world facts) is possible because the learning algorithm has many internal parameters, consisting of vectors and matrices of values. The dimension and type of these internal parameters delimit the kind of target functions an algorithm can learn. An optimisation engine inside the algorithm changes the parameters from their initial values during learning so that they come to represent the hidden target function. This paragraph is admittedly a bit dense; an explanatory diagram, or the small sketch below, makes these terms easier to digest.
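As a stand-in for such a diagram, here is a minimal sketch (assumed, not from the post) of an optimisation engine at work: a weight vector starts at arbitrary initial values and is nudged by gradient descent until it approximates the parameters of the hidden target function.

```python
# Minimal sketch of an "optimisation engine": gradient descent adjusts a weight
# vector from its initial values until it approximates the hidden target function.
import numpy as np

rng = np.random.default_rng(1)

true_weights = np.array([2.0, -1.0])          # the hidden target function's parameters
X = rng.normal(size=(100, 2))                 # 100 examples, 2 features each
y = X @ true_weights                          # outputs produced by the hidden function

weights = np.zeros(2)                         # internal parameters, initial values
learning_rate = 0.1

for step in range(200):
    predictions = X @ weights
    gradient = X.T @ (predictions - y) / len(y)   # gradient of mean squared error
    weights -= learning_rate * gradient           # the optimisation step

print("learned parameters:", np.round(weights, 3))   # close to [ 2. -1.]
```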
The construction of machine learning algorithms from mathematical and statistical formulations of hypotheses will be covered under a separate title in this series on the learning process of machine learning algorithms.