Wednesday, April 28, 2021

The Learning Process of Machine Learning Algorithms

*  During optimization, the machine learning algorithm searches through the possible combinations of parameter values in order to find the one that best maps the features to the classes during training

*  This process evaluates many candidate target functions from among those that the learning algorithm is capable of representing

*  The set of all potential functions that the learning algorithm can represent is called the hypothesis space

*  The resulting classifier, together with its fitted set of parameters, is called a hypothesis, which is the machine learning way of saying that the algorithm has set its parameters to approximate the target function and is now ready to work out correct classifications

*  The hypothesis space must contain all the parameter variants of all the machine learning algorithms one may want to try when mapping an unknown target function in a classification problem. In other words, it covers every configuration the algorithm could reach at any point during training, and the algorithm searches within this space for the best possible approach to the given condition or problem. Elaborating on this, a hypothesis space generally contains the target function itself or, more often, only an approximation of it that can differ from the true function. A small sketch of such a search over a tiny hypothesis space is shown below.
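As a rough illustration of the idea, the sketch below enumerates a tiny family of threshold classifiers (each threshold value is one hypothesis) and keeps the one that best maps a handful of made-up feature values to their classes. The data, the threshold grid and the simple accuracy criterion are all invented purely for illustration.

```python
# A minimal, hypothetical sketch: the "hypothesis space" here is the set of
# threshold classifiers h_t(x) = 1 if x >= t else 0, for a grid of t values.
# The learning process is a brute-force search for the t with the best accuracy.

# Made-up training data: feature values and their known classes.
features = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
classes  = [0,   0,   0,   1,   1,   1]

def hypothesis(t, x):
    """One candidate hypothesis: classify x as 1 when it reaches threshold t."""
    return 1 if x >= t else 0

def accuracy(t):
    """How well this hypothesis maps the training features to the classes."""
    hits = sum(hypothesis(t, x) == y for x, y in zip(features, classes))
    return hits / len(features)

# Search the (tiny) hypothesis space for the best parameter value.
candidate_thresholds = [i * 0.5 for i in range(1, 13)]   # 0.5, 1.0, ..., 6.0
best_t = max(candidate_thresholds, key=accuracy)
print("best threshold:", best_t, "accuracy:", accuracy(best_t))
```

Real algorithms search far richer parameter spaces with far smarter optimizers, but the structure, candidate hypotheses scored against known examples, is the same.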

*  An equivalent of this is a child who, in trying to figure out what a tree is, experiments with many different creative ideas by assembling their own knowledge and experiences. Parents play a major role in this learning phase, providing all kinds of relevant environmental inputs that make the child's learning faster and more effective. In machine learning, and in supervised learning in particular, one has to provide the right learning algorithm along with some non-learnable settings called hyper-parameters, then choose a set of examples to learn and adapt from, and finally select the features that accompany those examples. And just as a child left alone in the world cannot always learn to distinguish right from wrong (consider the case depicted in the book Lord of the Flies), a machine learning algorithm also needs multiple directions and interventions to keep the running and execution of a program on track.

 

*  So even after the learning process is complete, a machine learning classifier often cannot unequivocally map the examples to the target classification, because many false and erroneous mappings remain possible. On its way to effective learning, the algorithm may pick up wrong paths, or may simply have too few data points to discover the right function. In addition, conditions of noise (also a great factor in machine learning) affect the learning process.

 

*  In the real world as well, noise is the same kind of impediment to effective learning. Many extraneous factors and errors occur during the recording of data that distort the values and features to be read and understood. In a true sense, therefore, a good machine learning algorithm is one that can still pick out the signals that map back to the target function even when extraneous environmental noise is in play.

 

Last modified: 27 Apr 2021

Monday, April 26, 2021

Learning Process of Machine Learning Algorithms - a precursor article


* Even though supervised learning is the most popular and frequently used of all the learning approaches, every machine learning algorithm responds to the same logic: read small or large sets of data at a time, find meaningful patterns in that dataset, identify the features that contribute the most, and then work out whether any applicable model can be derived from the data


* The central idea of the learning process is that one can represent reality using a mathematical function that the algorithm does not know in advance but can infer from the data, and then use it to make important findings and predictions. This concept is the core idea behind every kind of machine learning algorithm


* As witnessed across several readings, experts on the subject of machine learning consistently treat supervised learning and classification as the most pivotal of the learning types, and the explanation of their inner functioning can be extended to other types of machine learning approaches as well


* The objective of a supervised learning classifier is to assign a class to an example after having examined some of the characteristics of that example. Such characteristics are called "features", and they can be either quantitative (numeric values) or qualitative (string labels).


* In order to assign classes correctly, a classifier must first closely examine a certain number of known examples (examples that already have a class assigned to them), each of which carries the same kinds of features as the examples that do not yet have a class


* The training phase involves the classifier observing many such examples, which helps the algorithm learn enough about the problem that it can provide an answer, in the form of a class, whenever it sees an example without one. A minimal sketch of this phase follows.
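Here is a hedged, minimal sketch of that training-then-predicting cycle, assuming scikit-learn is available. The tiny feature table, the label strings and the choice of a decision tree are all made up for illustration only.

```python
# A minimal sketch of the training phase, assuming scikit-learn is installed.
# The features and class labels below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Known examples: each row is a set of features, each label is its class.
X_train = [[5.0, 1], [4.5, 1], [0.3, 0], [0.4, 0]]   # e.g. [height_m, is_green]
y_train = ["tree", "tree", "ball", "ball"]

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)            # the classifier observes labelled examples

# A new example without a class: the trained classifier proposes one.
print(clf.predict([[4.8, 1]]))       # -> ['tree'] on this toy data
```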


* We can relate to what happens in training by imagining a child learning to distinguish trees from other objects. The child does not absorb all the attributes of a tree on the first encounter; rather, each time the child sees a tree, it also picks up more of the associated attributes that characterise one. Gradually this becomes a process that keeps repeating, again and again, whenever perception occurs through the eyes and the brain processes it with a conscious recognition of the environment. So whenever the image of a tree comes to mind, the perception is kindled again and the child's picture of a tree is refined.


* So whenever a similar tree, bearing leaves, a green texture and a brown trunk, comes to the child's mind, the child becomes mentally attuned to that perception, which also helps in recognising other similar objects round about. All of this helps a child form an idea of what a tree looks like by contrasting tree features with the images of different objects, such as pieces of furniture that are made of wood but do not share the other characteristics of a tree.


* A machine learning classifier works by the same process. The algorithm builds its cognitive capability by creating a mathematical formulation that combines all the given features in such a way that the resulting function can distinguish one class from another.

* Being able to express such a mathematical formulation is the representation capability of a classifier. From a mathematical perspective, one can describe the representation process in machine learning using the concept of "mapping". Mapping here means reconstructing a function by observing its outputs for given inputs, so the process is retrospective: one works back from the observed outputs to the function that connects them to the inputs. A successful mapping in machine learning is similar to a child internalising the idea of an object: the child develops the skill of learning from the environment and then uses the acquired knowledge to distinguish the relevant objects when the need arises. Having internalised things, the child understands the abstract rules derived from the facts of the world well enough that, on seeing a tree, the child recognises it immediately.


* Such a representation (using abstract rules derived from real-world facts) is possible because the learning algorithm has many internal parameters, consisting of vectors and matrices of values. The dimension and type of these internal parameters delimit the kinds of target functions that the algorithm can learn. During learning, an optimisation engine in the algorithm changes the parameters from their initial values in order to represent the target's hidden function. A small illustrative sketch of this parameter adjustment is given below.
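As a hedged illustration of an "optimisation engine" nudging internal parameters, the sketch below fits two parameters of a straight line to samples drawn from a hidden target function. The target y = 2x + 1, the learning rate and the loop counts are all arbitrary choices made only for this example.

```python
# A hypothetical sketch of an optimisation engine adjusting internal parameters.
# The hidden target function is y = 2*x + 1; the algorithm only sees samples
# of it and adjusts its own parameters (w, b) to represent it.
samples = [(x, 2 * x + 1) for x in range(10)]     # observed inputs/outputs

w, b = 0.0, 0.0                                   # initial internal parameters
learning_rate = 0.01

for _ in range(2000):
    for x, y in samples:
        error = (w * x + b) - y                   # how far the current guess is off
        w -= learning_rate * error * x            # nudge each parameter a little
        b -= learning_rate * error                # in the direction that reduces error

print(round(w, 2), round(b, 2))                   # ends up close to 2.0 and 1.0
```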


The construction of a machine learning algorithm on top of a mathematical formulation, with the employment of statistical hypothesis conjectures, will be covered under a separate title in this series on the learning process of machine learning algorithms.



The Various Categories of Machine Learning Algorithms and Their Interpretations


Machine learning comes in three different flavours depending on the algorithm and the objective it serves. One can divide machine learning algorithms into three main groups based on their purpose:

01)      Supervised Learning

02)      Unsupervised Learning

03)      Reinforcement Learning

In this article we will look at each of these learning techniques in greater detail.

==================================

01)      Supervised Learning

==================================

*  Supervised learning occurs when an algorithm learns from example data and the associated target responses, which consist of numeric values or string labels such as classes or tags, so that it can later predict the correct response when presented with new examples

*  The supervised learning approach is similar to human learning under the guidance and mentorship of a teacher . This guided teaching and learning of a student under the aegis of a teacher is the basis for Supervised Learning

*  In this process , a teacher provides good examples for the student to memorize and understand and then the student derives general rules from the specific examples

*  One can distinguish between regression problems, whose target is a numeric value, and classification problems, whose target is a qualitative variable indicating a class or a tag, as contrasted in the sketch below
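A minimal sketch of that distinction, assuming scikit-learn is available; the single feature and both sets of targets are made up for illustration.

```python
# A tiny, assumed-for-illustration contrast of the two supervised tasks.
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1], [2], [3], [4]]                 # one made-up feature

# Regression: the target is a numeric value.
y_numeric = [1.1, 1.9, 3.2, 3.9]
reg = LinearRegression().fit(X, y_numeric)
print(reg.predict([[5]]))                # a numeric prediction

# Classification: the target is a qualitative class label (a tag).
y_labels = ["small", "small", "large", "large"]
clf = LogisticRegression().fit(X, y_labels)
print(clf.predict([[5]]))                # a predicted label
```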

*  More on Supervised Learning Algorithms with examples would be discussed in later articles .

==================================

02)      Unsupervised Learning

==================================

*  Unsupervised learning occurs when an algorithm learns from plain examples without any associated response in a target variable, leaving it to the algorithm to determine the data patterns on its own

 *  This type of algorithm tends to restructure the data into something else , such as new features that may represent a class or a new series of uncorrelated values

*  Unsupervised learning is quite useful in providing humans with insights into the meaning of the data, since there are patterns still to be found in it, and its output can also serve as new, useful input to supervised machine learning algorithms

*  Unsupervised learning resembles the method humans use to figure out whether certain objects or events belong to the same class or not, by observing the degree of similarity between the given objects

*  Some of the recommendation systems one comes across on retail websites and applications are a form of marketing automation based on this type of learning

*  The marketing automation algorithm derives its suggestions from what one has done in the past

*  The recommendations are based on an estimation of which group of customers one resembles the most, and then inferring one's likely preferences from that group, as in the clustering sketch below
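Here is a hedged sketch of that grouping idea, assuming scikit-learn is available. The customer table (orders per month, average basket value), the choice of k-means and the number of clusters are all invented for illustration.

```python
# A hypothetical sketch: grouping customers by similarity with no labels at all.
# The algorithm (k-means here) has to discover the structure on its own.
from sklearn.cluster import KMeans

# Invented data: [orders_per_month, average_basket_value]
customers = [[1, 20], [2, 25], [1, 22],      # occasional, small baskets
             [9, 210], [8, 190], [10, 205]]  # frequent, big baskets

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)                 # e.g. [0 0 0 1 1 1]: two discovered groups

# A new customer is assigned to the group they resemble the most, which is
# the kind of grouping a recommender can then build its suggestions on.
print(kmeans.predict([[7, 180]]))
```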

==================================

03)      Reinforcement Learning

==================================

*  Reinforcement learning occurs when one presents the algorithm with examples that lack any form of labels, as in unsupervised learning.

*  However, one can accompany an example with positive or negative feedback according to the solution the algorithm proposes

*  Reinforcement learning is connected to applications in which the algorithm must make decisions (so the product is prescriptive, not just descriptive as in unsupervised learning), and those decisions bear consequences.

*  In the human world, reinforcement learning is essentially learning by the application of trial and error

*  In this type of learning, errors, both early and later ones, help the learner because the process is built on a penalty and reward system: factors such as cost, loss of time, regret or pain get attached to the outcomes the model produces, and the reinforcement learning algorithm adjusts itself accordingly. A tiny sketch of such a reward and penalty loop follows.
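A minimal, hypothetical reward-and-penalty loop (an epsilon-greedy two-armed bandit): the hidden payoff probabilities, the exploration rate and the step count are all arbitrary values chosen only to show the trial-and-error mechanism.

```python
# A tiny, hypothetical reward/penalty loop (an epsilon-greedy bandit):
# the agent tries actions, gets rewards or penalties, and gradually
# comes to prefer the action that has paid off the most.
import random

random.seed(0)
true_rewards = {"left": 0.2, "right": 0.8}   # hidden from the agent
estimates = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}

for step in range(1000):
    # Mostly exploit the best current estimate, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(estimates, key=estimates.get)

    # Reward of 1 with the hidden probability, otherwise a penalty of 0.
    reward = 1 if random.random() < true_rewards[action] else 0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # the estimate for "right" ends up noticeably higher
```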

*  One of the most interesting examples of reinforcement learning occurs when computers learn to play video games by themselves, scaling up through the various levels of the game purely by learning the mechanics and the procedure to get through each level on their own.

*  The application lets the algorithm know the outcome of each action it takes, so it learns which kind of action leads to which kind of result.

*  One can see a typical example of a reinforcement learning implementation in Google's DeepMind program, which plays old Atari video games in solo mode, at https://www.youtube.com/watch?v=VieYniJORnk

*  From the video, one can notice that the program is initially clumsy and unskilled but steadily improves with continued training until it becomes a champion at the task

 


 


 


 

Descending the Right Curve in Machine Learning - A relation to science fiction and science in practice

*  Machine learning may appear as a magic trick to any newcomer to the discipline, something to expect from any application of advanced scientific discovery, much in the spirit of Arthur C. Clarke, the futurist and author of popular science fiction stories like 2001: A Space Odyssey. In other words, machine learning combines so many layers of machinery and engineering that its sheer scale can seem incomprehensible, even while it lets a general user build models and predictions from the patterns identified in a particular dataset


*  This impression echoes Arthur C. Clarke's third law, which states that "any sufficiently advanced technology is indistinguishable from magic": to a common user, any sufficiently advanced technology looks like some form of magic, since in magic the trick is to carry off a spectacle without letting the viewer learn the underlying working principle

 

* Though it is widely believed that machine learning's underlying strength is some imperceptible mathematical, statistical and coding-based magic, it is not magic at all; one simply needs to understand the foundational concepts from scratch so that the more complex working mechanisms can be understood. Machine learning is, in the end, the application of mathematical formulations to data in order to learn from it

 

*  Assuming that the world itself can be represented by mathematical and statistical formulations, machine learning algorithms strive to learn such formulations by tracking them back from a limited number of observations.

 

*  Just as humans have the power of perception and distinction, and can recognise which object is a ball and which is a tree, machine learning algorithms can leverage the computational power of computers and the wide availability of data on all subjects and domains to learn how to solve a large number of important and useful problems


*  Though machine learning is a complex subject, humans devised it, and in its initial inception it started by mimicking the way we learn from the surrounding world. One can therefore express simple data problems and basic learning algorithms in terms of how a child perceives and understands the world, or frame a challenging learning problem with the analogy of descending from the top of a mountain by taking the right slope of descent.

 

*  With a somewhat better understanding of the capabilities of machine learning and how they can help in solving a problem, one can now start to learn the more complex facets of the technology in greater detail, with more examples of their proper usage.


Friday, April 23, 2021

Describing the use of Statistics in Machine Learning - A full detailed article on some of the most important concepts in Statistics

 

Describing the use of Statistics


* It's important to skim through some of the basic concepts related to probability and statistics. Along with that, we will also try to understand how these concepts help describe the information used by machine learning algorithms


* Some of the main concepts I shall try to cover in the articles leading to a stronger foothold in statistics are sampling, statistical distributions and descriptive measures, all of which rest in one way or another on the concepts of algebra and probability, being more elaborate manifestations of the underlying theorems of mathematics.

 

* The gist of learning these concepts is not only how to describe an event by counting its occurrences, but how to describe an event without counting, every time, how many times it occurs.

 

* If there is some imprecision in the recording instrument one uses, some error in the recording procedure, or simply some random nuisance that disturbs the process of recording a given measure, then even a simple measure such as weight will differ every time it is taken, oscillating slightly around the true value. Now suppose someone wanted to measure the weight of all the people in a city. Doing it in one go is practically impossible: one would have to build a gigantic weighing scale to mount the entire population onto its pans, the scale might well break under the load, and once the single measurement had been taken the machine would serve no further purpose, making the cost of building it for one task meaningless.

* So the purpose of the experiment might be achieved, but the cost of building the instrument would be so high that it would put a big dent in the city's finances and budget. On the other hand, if we recorded the entire city's weight by measuring each person one by one, the effort might take weeks or months, and the time consumed in managing the whole exercise hardly makes the idea worth adopting. And even if all the weights were successfully measured, there is every chance that some amount of error would creep in, making the whole process neither fruitful nor fault-proof.

 

* Having only partial information is not a completely negative condition, because one can use the resulting smaller matrices for more efficient and less cumbersome calculations. On top of that, sometimes one cannot even get a sample of what one wants to describe and learn, because the event's complexity is too high and it features too great a variety of elements. Twitter tweets are another example of a large body of data: the tweets can be treated as a sample and run through word processors, sentiment analyzers, business tools, and spam or abuse filters, depending on the text contained in each short message.

 

* It is therefore good practice in sampling to sample similar data, with associated characteristics and features, so that the sample forms a cohesive group that fits a proper sampling criterion. When sampling is done carefully, one can obtain a better global view of the data from its constituent samples.

 

* In statistics, a population refers to all the events and objects one wants to measure. Using random sampling, one picks events or objects according to a criterion that determines how the data is collected, assembled and synthesised; this data is then fed into machine learning algorithms, which apply their functions to determine patterns and behaviour. A short sketch of estimating a population measure from a random sample is given below.
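Tying this back to the city-weighing thought experiment above, here is a hedged sketch of the sampling idea using NumPy. The "population" of weights is synthetic (a normal distribution with arbitrary parameters) and the sample size of 500 is an arbitrary choice for illustration.

```python
# A hedged sketch of the sampling idea: instead of weighing every person in a
# city, weigh a random sample and use it to estimate the population mean.
# The "population" below is synthetic, generated only for illustration.
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=70, scale=12, size=1_000_000)   # weights in kg

sample = rng.choice(population, size=500, replace=False)    # random sampling
print("population mean:", round(population.mean(), 2))
print("sample estimate :", round(sample.mean(), 2))         # close, far cheaper
```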

 

* Along with such determination, a probabilistic model of the input data is built and used to predict similar patterns in any newly input data or datasets. Applying this concept, generating data from a population's subsamples and mapping the identified patterns onto new cases, is one of the chief objectives of machine learning, supported by its algorithms.

 

* "Random Sampling" -- It is not the only approach for any sort of sampling . One can also apply an approach of "stratified sampling" through which one can control some aspects of the random sample in order to avoid picking too many or too few events of a certain kind .After all , it is said that a random sample is a random sample , the manner it gets picked is irrespective of the manner in which all samples would criterion themselves for picking up a sample , and there is no absolute assurance of always replicating an exact distribution of a population .

 

* A distribution is a statistical formulation that describes how to observe an event or a measure by stating the probability of witnessing a certain value. Distributions are described by mathematical formulae and can be shown graphically using charts such as histograms or distribution plots. The information one puts into the data matrix has a distribution, and one may find that the distributions of different features are related to each other. A distribution naturally implies variation, and when dealing with numeric values it is important to figure out the centre of that variation, which corresponds to the statistical mean, calculated by summing all the values and dividing the sum by the number of values considered.

 

* Mean - This is a descriptive measure that tells users the value to expect most from within the dataset, since the mean generally hovers around where the bulk of the data sits. The mean is best suited to symmetrical, bell-shaped distributions, where the values above the mean are distributed in a shape similar to those below it. The normal, or Gaussian, distribution is shaped around the mean, and one finds it only when the data is not much skewed in either direction from the equally shaped sides of the curve. In real-world datasets one finds many skewed distributions with extreme values on one side, and those extremes influence the value of the mean considerably.

 

* Median - The median is the value in the middle after one orders all the observations from the smallest to the largest within the dataset. Because it is based on the value order rather than on the values themselves, the median is less affected by extremes and gives a more robust approximation of the centre of the data.

 

* Variance - The significance of the mean and median descriptors is that they describe a value within the distribution around which there is some form of variation, and machine learning algorithms care a great deal about that variation. Most people refer to this variation as "variance", and since variance is a squared quantity it also has a root equivalent, termed the "standard deviation". Machine learning takes variance into account in every single variable (univariate distributions) and across all the features together (multivariate distributions) to determine how such variation impacts the response obtained. A short computation of these measures follows.
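A short, illustrative computation of the descriptive measures above, using NumPy. The values (with one deliberately extreme outlier) are invented to show how the mean gets pulled by a skewed tail while the median stays put.

```python
# Mean, median, variance and standard deviation on a tiny, made-up sample.
import numpy as np

values = np.array([10, 11, 12, 12, 13, 14, 95])   # 95 is the extreme value

print("mean    :", values.mean())                 # dragged up by the outlier
print("median  :", np.median(values))             # stays in the middle: 12
print("variance:", values.var())                  # squared spread around the mean
print("std dev :", values.std())                  # root of the variance
```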

 

* Statistics matters in machine learning because it conveys the idea that features have a distribution. Distribution implies variation, and variation means quantification of information: the more variance present in the features, the more information can be matched to the response.

 

* One can use statistics to assess the quality of the feature matrix and then leverage statistical measures in order to draw a rule from the types of information to their purposes that they cater to .

 

Wednesday, April 21, 2021

An article on - Conditioning Chance and Probability by Bayes Theorem

Conditioning Chance & Probability by Bayes Theorem


* Probability is one of the key quantities that takes into account the conditions of time and space, but another measure goes hand in hand with it: conditional probability, which captures the chance of one particular event occurring given the occurrence of other events that may affect its possibility.

 

* When one estimates the probability of an event, one assigns it a value calculated over the set of possible events or situations. A belief of this kind is called an "a priori probability": the general probability of the event before anything else is known.

 

* For example, in the case of a coin toss, if the coin is fair then the a priori probability of a head is 50 percent. This means that before tossing the coin, one already knows the probability of a positive (desired) outcome and of a negative (undesired) outcome.

 

* Therefore, no matter how many times one has tossed the coin, when faced with a new toss the probability of heads is still 50 percent and the probability of tails is still 50 percent, as the small simulation below illustrates.
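A quick, hedged simulation of that a priori claim; the number of tosses and the random seed are arbitrary.

```python
# However many times the fair coin has been tossed before, each new toss
# still lands heads about 50% of the time.
import random

random.seed(1)
tosses = [random.choice(["heads", "tails"]) for _ in range(100_000)]
print(tosses.count("heads") / len(tosses))   # close to 0.5
```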

 

* But if the context changes, the a priori probability is no longer valid on its own, because something subtle has happened that changes the outcome; there are prerequisites and conditions that must be satisfied for the original experiment to hold. In such a case one expresses the belief as an "a posteriori probability": the a priori probability updated after something has happened that modifies the count or the outcome of the event.

 

* For instance, the a priori estimate that a random person is male or female is about 50 percent in almost all cases. But the assumption that any population has exactly this demography breaks down: as the referenced article notes, women generally tend to live longer than their male counterparts, so the demographic balance tilts towards the female gender in the older age brackets.

 

* Hence, taking these factors into account, one should not take the bare 50/50 gender split as the main parameter for describing a population, because the split is tilted across age brackets and an overall generalisation of this factor would be misleading.

 

* Taking this factor of gender into account, the a posteriori probability is different from the expected a priori one, which in this example would estimate somebody's gender on the belief that there are 50 percent males and 50 percent females in the population data.

 

* One writes conditional probability as P(y|x), which in a mathematical sense reads as the probability of event y given that event x has occurred. Because of the great relevance conditional probability has in the concepts and studies of machine learning, learning to read, write and comprehend this notation is of paramount importance to any newcomer or veteran in maths, statistics and machine learning. A small numeric sketch using Bayes' theorem is given below.
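As a hedged numeric sketch of how a conditional probability P(y|x) is computed, the snippet below applies Bayes' theorem, P(y|x) = P(x|y) P(y) / P(x). All the probability values are invented purely for illustration.

```python
# A worked Bayes' theorem example with made-up numbers.
p_y = 0.5            # prior (a priori) probability of event y
p_x_given_y = 0.8    # how likely x is when y holds
p_x_given_not_y = 0.3

# Total probability of observing x at all.
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

p_y_given_x = p_x_given_y * p_y / p_x
print(round(p_y_given_x, 3))   # the posterior: belief in y after seeing x
```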

 

* As mentioned above, because it depends on the occurrence of one or more prior conditions, conditional probability is of paramount importance for machine learning, which works with the statistical conditions under which an event occurs. If the a priori probability can change because of circumstances, then knowing the possible circumstances gives a big push to one's chances of correctly predicting an event by observing the underlying examples, which is exactly what machine learning generally intends to do.

 

* Generally, the probability of a random person being male or female is around 50 percent. But once one takes mortality and age into consideration, the demographic tilt favours females in the older brackets. Under such conditions, if one asks a machine learning algorithm to determine the gender of a person on the basis of prior attributes such as age bracket or length of hair, the algorithm can use the shifted probabilities to arrive at the solicited answer, as in the worked example below.
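A hypothetical worked example of that shift: conditioning on an older age bracket moves the 50/50 prior. All of the likelihood numbers below are invented purely for illustration.

```python
# Posterior gender probability after conditioning on age, with made-up numbers.
p_female = 0.5                     # a priori: roughly half the population
p_male = 0.5

# Assumed likelihoods of being in the 90+ age bracket for each gender.
p_90plus_given_female = 0.04
p_90plus_given_male = 0.02

p_90plus = (p_90plus_given_female * p_female +
            p_90plus_given_male * p_male)

# Posterior probability that a randomly met 90+ year old is female.
p_female_given_90plus = p_90plus_given_female * p_female / p_90plus
print(round(p_female_given_90plus, 2))   # about 0.67, no longer 0.5
```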