
Friday, April 30, 2021

Updating Machine Learning Algorithms by Mini-Batch and Batch Wise


*  Machine learning boils down to an optimization problem in which one looks for a global minimum of a given cost function

 

*  Working out the optimization using all the available data is an advantage, because it allows the algorithm to check, iteration by iteration, how much the cost function has been minimized with respect to all of the data

 

*  This is the main reason machine learning algorithms prefer to have all the available data at hand, accessible in the memory of the computer they run on or in the virtual memory of the GPU, backed by plenty of secondary storage

 

 

*  Learning techniques based on statistical algorithms use calculus and matrix algebra, and they need all the data in memory.

  

*     Simpler algorithms that search step by step for the next best solution, proceeding iteration by iteration through partial solutions (such as gradient descent), also gain an advantage from developing the hypothesis on all the data, because they can catch weaker signals on the spot and avoid being fooled by the noise in the data. This holds whether the learning is supervised or unsupervised: access to the full data helps the overall learning process in both the presence and the absence of noise.

  

*  When operating within the limits of the computer's memory, one is said to be working in core memory: all the operational computations take place in the computer's primary memory (RAM), with secondary memory acting only as backing storage. Because computation needs fast access first and foremost, the data for a running process is loaded into primary memory when the process is triggered and gets to work.

  

*     This mechanism of allocating memory to a process and executing the algorithm on everything at once is called a batch algorithm: just as machines in a factory process materials in batches, such algorithms learn from, and predict on, a whole batch of data at a time. The incoming data is generally represented in the form of a data matrix. A minimal sketch of a batch update is shown below.
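As a toy illustration of the batch approach, the following sketch runs gradient descent for linear regression while holding the entire data matrix in memory and using every row at each iteration. The data, shapes and learning rate are invented purely for the example.

import numpy as np

# Hypothetical in-memory data matrix: 1,000 examples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(5)            # parameters to learn
learning_rate = 0.1

for iteration in range(200):
    # Batch update: the gradient is computed over ALL rows of X at once.
    predictions = X @ w
    gradient = X.T @ (predictions - y) / len(y)
    w -= learning_rate * gradient

print("estimated weights:", np.round(w, 2))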

  

*  Sometimes, however, the data cannot fit into core memory because it is too big. Data derived from the web is a typical example of information that does not fit easily into memory: it may be homogeneous or heterogeneous in form, spread across formats such as XML, JSON, SQL, NoSQL and other big-data stores, and is therefore hard to boil down to a single format that fits in memory.

 

*  In addition, data derived from sensors, tracking devices, satellites and video monitoring is often problematic because of its size compared to a computer's RAM; it can, however, be stored easily on a hard disk, given the availability of cheap, large storage devices that easily hold terabytes of data

  

*  A few strategies can help when the data is too big to fit into the standard memory of a single computer. A first solution to try is to subsample the data into smaller samples.

  

*  Here, the data is reshaped, by selecting cases and sometimes features through statistical sampling, into a smaller, more manageable data matrix. Reducing the data in this way cannot always provide exactly the same results as analysing it in full, and working with less data can also produce less powerful models. However, if the subsampling is executed properly, the approach can generate reliable and good results. A successful subsampling must therefore apply statistical sampling correctly, by employing random or stratified sample drawings

 

*  We will now take a bird's-eye view of the sampling methods used when reshaping and reducing data (a short code sketch of both methods follows them):

 

1)     Random Sampling

 

*  In random sampling, one creates a sample by randomly choosing examples from any part of the data. The larger the sample, the more likely it is to resemble the original structure and variety of the data.

 

2)     Stratified Sampling

 

*  In stratified sampling, one controls the final distribution of the target variable, or of certain features deemed critical, so that the sample successfully replicates the characteristics of the complete data.

 

*  A classic example of stratified sampling is drawing a sample from a classroom made up of different proportions of males and females in order to estimate the average height of the class.

 

*  If the females of the class are, on average, shorter than the males, one should draw a sample that replicates the same male/female proportion in order to obtain a reliable estimate of the average height.

 

*  If by mistake one sampled only the males, one would overestimate the average height: the sub-sample is tilted towards the taller group, and the contribution of the shorter sub-sample (the females of the class) is left out, so the estimate is biased upwards.
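Below is a minimal sketch of both sampling approaches using pandas. The DataFrame, the gender and height columns, the 70/30 split and the 10% sampling fraction are hypothetical, chosen only to mirror the classroom example above.

import numpy as np
import pandas as pd

# Hypothetical classroom data: 70% males, 30% females, females shorter on average.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gender": ["M"] * 700 + ["F"] * 300,
    "height": np.concatenate([rng.normal(178, 7, 700), rng.normal(165, 6, 300)]),
})

# Random sampling: every row has the same chance of being picked.
random_sample = df.sample(frac=0.1, random_state=42)

# Stratified sampling: sample 10% within each gender, preserving the 70/30 proportion.
stratified_sample = df.groupby("gender").sample(frac=0.1, random_state=42)

print(df["height"].mean(), random_sample["height"].mean(), stratified_sample["height"].mean())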

 

================

Sampling Strategy

================

 

*  To avoid the problems that can arise with random and stratified sampling, one has to draw a sub-sample containing enough examples, guided by a clear idea of which characteristics of the data the sample must preserve; this requirement defines the sampling strategy used to represent the variety of the data.

 

*     Data with high dimensionality, characterised by many cases and many features, is more difficult to sub-sample, because a representative sample has to be much larger and may not even fit into core memory.

*     A proper sampling strategy should therefore be chosen before the sample is drawn, keeping the existing memory limitations in mind, so that the reduced data still represents the variety of the original.

  

=========

Network Parallelism

=========

 

*  Beyond sub-sampling, a second possible solution to fitting the data in memory is to leverage network parallelism, which splits the data across multiple computers connected over a network. Each computer handles its part of the data for the optimization; once every computer has finished its own computation and all the parallel results have been reduced back into a single solution, no single machine ever needs to hold the whole dataset in core memory.

 

*     To understand how this solution works, compare it to building a car piece by piece, from the blueprint of its framework to the complete body, along a line of assembly workers and robotic manufacturing hands. Apart from a faster assembly, one does not have to keep all the parts in the factory at the same time. In a very similar manner, one does not have to keep all the parts of the data on a single computer: a distributed architecture lets different computers work on the data in parallel, thereby overcoming some of the core memory limitations.

 

 

*  This approach is the basis of MapReduce technology and cluster-computing frameworks such as Apache Spark. As a quick recap of the underlying technology: MapReduce takes numerous data files, transforms them into intermediate key-value pairs (for example, indexing and counting words), sorts and groups those pairs by key, and then reduces and stores the grouped results. Clustered computers and parallel servers follow a similar structure, with master and worker (slave) nodes for data storage and data access; the exact workings of such storage systems are worth exploring further.

  

*     All these technologies focus on mapping a problem over multiple machines and then reducing their outputs into the desired solution. Large-scale machine learning computations are therefore not carried out by a single server: they are performed in a parallel, distributed manner across many nodes (much like work descending to child nodes and being assimilated back at a root node) before the final result of the learning process is returned to the user at the root node.
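The sketch below imitates this map-then-reduce pattern on a single machine with Python's multiprocessing module: the data is split into shards, each worker computes a partial sum and count (the "map"), and the partial results are combined into one mean (the "reduce"). The number of shards, the worker count and the statistic being computed are arbitrary choices for the illustration.

import numpy as np
from multiprocessing import Pool

def partial_stats(shard):
    # "Map" step: each worker summarises only its own shard of the data.
    return shard.sum(), len(shard)

if __name__ == "__main__":
    data = np.random.default_rng(2).normal(loc=5.0, size=1_000_000)
    shards = np.array_split(data, 8)          # split the data into 8 parts

    with Pool(processes=4) as pool:
        partials = pool.map(partial_stats, shards)

    # "Reduce" step: combine the partial results into the final answer.
    total, count = map(sum, zip(*partials))
    print("distributed mean:", total / count)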

  

*     However, not every machine learning algorithm can be split into separable processes, which limits the usability of this approach. More importantly, keeping a network of computers ready for this kind of data processing involves significant setup and maintenance costs in both time and money. Since computation and infrastructure on this scale are beyond the reach of individuals with limited funding and small applications, they are mainly hosted and offered by a slew of large organisations that have the capacity to implement, organise and run such big-scale infrastructure.

  

*  The third solution is to rely on out-of-core algorithms, which keep the data on the storage device and feed it into the computer's memory in chunks for processing. The feeding process is called streaming. Because the data chunks are smaller than core memory, the algorithm can handle them properly and use them to update the machine learning optimization; after each update, the system discards the chunk in favour of a new one, which the algorithm then uses to continue learning. This goes on repetitively until there are no more chunks left. Chunks can be small relative to core memory, in which case the process is called mini-batch learning, or they can consist of just a single example, which is called online learning.
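As a minimal sketch of this out-of-core, mini-batch style of learning, the snippet below streams a CSV file in chunks with pandas and updates a scikit-learn SGDRegressor with partial_fit after each chunk. The file name "big_dataset.csv", the "target" column and the chunk size are assumptions made for the example.

import pandas as pd
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)

# Stream the file in chunks that comfortably fit into core memory.
for chunk in pd.read_csv("big_dataset.csv", chunksize=10_000):
    X = chunk.drop(columns=["target"]).to_numpy()   # hypothetical feature columns
    y = chunk["target"].to_numpy()                  # hypothetical target column
    model.partial_fit(X, y)    # update the model, then the chunk is discarded

print(model.coef_)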

 

*  The previously described gradient descent, like other iterative algorithms, works fine with this approach; however, reaching the optimum takes longer because the gradient's path is more erratic and non-linear than with a batch approach. On the other hand, the algorithm can reach a solution using fewer computations than its in-memory versions.

  

*  When the parameter updates are based on mini-batches or single examples, gradient descent takes the name stochastic gradient descent; it will reach a proper optimization solution given the following prerequisites (a small code sketch follows the notes on the learning rate below):

 

1)    The examples are streamed in random order (hence "stochastic", recalling the idea of a random extraction)

 

2)    A proper learning rate is defined, either as a fixed value or as a flexible one that changes according to the number of observations seen or other criteria

  

*  The learning rate can make a great difference to the quality of the optimization: a high learning rate, although faster in optimization, can constrain the parameters to the effects of noisy or erroneous examples seen at the beginning of the stream.

 

*  A high learning rate also renders the algorithm insensitive to the later streamed observations, which is a problem when the algorithm is learning from sources that naturally evolve and mutate, such as data from the digital advertising sector, where new advertising campaigns keep changing the level of attention and response of the targeted individuals.
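Here is a minimal NumPy sketch of stochastic gradient descent for linear regression, highlighting the two prerequisites above: the examples are shuffled (random extraction) and a learning rate scales each single-example update. The data and the learning-rate value are invented for the illustration.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

w = np.zeros(3)
learning_rate = 0.01           # too high a value overreacts to early, noisy examples

for epoch in range(5):
    order = rng.permutation(len(y))          # prerequisite 1: random extraction
    for i in order:
        error = X[i] @ w - y[i]
        w -= learning_rate * error * X[i]    # prerequisite 2: learning-rate-scaled update

print("estimated weights:", np.round(w, 2))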

 

Last modified: 12:23


Wednesday, April 28, 2021

The Learning Process of M.L Algorithms

*  During optimization, the machine learning algorithm searches the possible variants of parameter combinations in order to find the one that best allows the correct mapping between the features and the classes during training

*  This process evaluates many potential candidate target functions from among those that the learning algorithm can guess

 *  The set of all the potential functions that the learning algorithm can figure out is called a Hypothesis Space

*  The resulting classifier, together with its set of parameters, is called a hypothesis, which is the machine learning way of saying that the algorithm has set its parameters to replicate the target function and is now ready to work out correct classifications

*  The hypothesis space must contain all the parameter variants of all the machine learning algorithms that one may want to try when mapping an unknown target function for a classification problem. In other words, it covers every configuration a learning algorithm could take on while it evaluates candidate solutions, and the algorithm searches this space to find the best possible approach for the given problem. The hypothesis finally selected is, in general, only an approximation of the target function and may differ from it appreciably. A toy sketch of such a search follows below.
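As a toy illustration of searching a hypothesis space, the sketch below considers a one-dimensional classifier whose only parameter is a threshold: each candidate threshold is one hypothesis, and the search keeps the one that classifies the training examples best. The data and the candidate grid are invented for the example.

import numpy as np

rng = np.random.default_rng(4)
# Hypothetical training data: one feature, two classes separated around 3.0.
x = np.concatenate([rng.normal(2.0, 0.7, 50), rng.normal(4.0, 0.7, 50)])
y = np.array([0] * 50 + [1] * 50)

# The hypothesis space: every candidate threshold defines one classifier.
candidate_thresholds = np.linspace(x.min(), x.max(), 200)

def accuracy(threshold):
    predictions = (x > threshold).astype(int)
    return (predictions == y).mean()

best = max(candidate_thresholds, key=accuracy)
print(f"best threshold: {best:.2f}, training accuracy: {accuracy(best):.2%}")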

*  The equivalent of this is a child who, trying to figure out what an image of a tree is, experiments with many different creative ideas by assembling their own knowledge and experiences. Parents play a major role in this learning phase, providing all kinds of relevant environmental inputs for the child's faster and more effective development. In machine learning, and in supervised learning in particular, one similarly has to provide the right learning algorithm, supply some non-learnable parameters called hyper-parameters, choose a set of examples to learn and adapt from, and select the features that accompany those examples. And just as a child cannot always learn to distinguish right from wrong when left alone in the world (consider the situation depicted in the book Lord of the Flies), a machine learning algorithm also needs multiple directions and interventions to keep its learning on track.

 

*     Even after the learning process is complete, a machine learning classifier often cannot unequivocally map the examples to the target classification, because many false and erroneous mappings are possible. The learning algorithm may pick up wrong paths on its way to an effective hypothesis, and there may be too few data points to discover the right function. In addition, noise, which is itself a significant factor in machine learning, also affects the learning process.

 

*  In the real world, noise is the same kind of impediment to effective learning. Many extraneous factors and errors also occur while the data is being recorded, distorting the values and features to be read and understood. A good machine learning algorithm should therefore distinguish the signals that map back to the target function even when extraneous environmental noise is still in play.

 

Last modified: 27 Apr 2021

Thursday, April 8, 2021

Writing MapReduce Programs - a descriptive guide, with an example of writing a sample MapReduce program, with reference to its architecture

     


             

Writing MapReduce Programs

 

* As per standard books, one should start a MapReduce program by writing pseudocode for the Map and Reduce functions.

* Pseudocode is not the actual code itself but a blueprint of the code that will later be written as the working, standardised implementation.

* The program code for both the Map and Reduce functions can be written in Java or other programming languages 

* In Java, a Map function is represented by the generic Mapper class (which acts over structured and unstructured data-type objects).

* The Map function has four type parameters (input key, input value, output key and output value).

* General handling of the Map function from the Mapper class in Java is beyond the scope of this article; I shall try to cover the usage of the Map function in a separate blog post with an appropriate example.

* The Mapper class provides an abstract map() method that receives the input key and input value and produces an output key and output value.

* For more complex problems involving Map() functions, it is advisable to use a higher-level abstraction than raw MapReduce, such as Pig, Hive or Spark.

 ( Detailed coverage on the above programming languages - Pig , Hive and Spark would be done in separate articles later )

* A Mapper function commonly performs input-format parsing, projection (selection of the relevant fields) and filtering (selection of the required records from the context table).

* The Reducer function typically combines (for example adds or averages) the values for each key after the Mapping procedure, which finally yields the output. A small sketch of this division of labour follows.
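Below is a minimal plain-Python sketch of that division of labour: the mapper parses each input line, projects the fields it needs and filters out unwanted records, while the reducer combines the values for each key by averaging them. The record format (date, region, amount) and the sample lines are hypothetical.

import csv
from collections import defaultdict

def mapper(line):
    # Input-format parsing, projection and filtering.
    date, region, amount = next(csv.reader([line]))
    if region:                           # filtering: drop records without a region
        yield region, float(amount)      # projection: keep only (region, amount)

def reducer(region, amounts):
    # Combine the values for one key, here by averaging.
    return region, sum(amounts) / len(amounts)

lines = ["2021-04-01,North,120.0", "2021-04-01,South,80.0", "2021-04-02,North,100.0"]
grouped = defaultdict(list)
for line in lines:
    for key, value in mapper(line):
        grouped[key].append(value)       # the shuffle step groups values by key

print([reducer(k, v) for k, v in grouped.items()])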

* The diagram below breaks down all the operations that happen within the MapReduce program flow.

 


* The following is the step-by-step logic for performing a word count of all the unique words in a text file.

1) The document under consideration is split into several segments, and the Map step is run on each segment of the data. The output is a set of key-value pairs; in this case, each key is a word in the document.

2) The big-data system gathers the (key, value) pair outputs from all the mappers and sorts the whole collection by key. The sorted list is then split into a few segments.

3) The task of the Reducer is to combine the counts in the sorted list supplied to it and produce a combined list of word counts.

 

==================================

Pseudocode for WordCount

==================================

map(String key, String value):
    // key: document name; value: document contents
    for each word w in value:
        EmitIntermediate(w, "1")

reduce(String key, Iterator values):
    // key: a word; values: a list of intermediate counts
    int result = 0
    for each v in values:
        result += ParseInt(v)
    Emit(AsString(result))

==================================
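For comparison, here is a small runnable Python version of the same pseudocode; the in-memory dictionary stands in for the framework's shuffle-and-sort phase, and the sample documents are invented for the example.

from collections import defaultdict

def map_fn(doc_name, contents):
    # Emit an intermediate (word, 1) pair for every word in the document.
    for word in contents.split():
        yield word, 1

def reduce_fn(word, counts):
    # Sum all the intermediate counts emitted for this word.
    return word, sum(counts)

documents = {"doc1": "the quick brown fox", "doc2": "the lazy dog"}

intermediate = defaultdict(list)
for name, text in documents.items():
    for word, count in map_fn(name, text):
        intermediate[word].append(count)      # shuffle: group values by key

word_counts = dict(reduce_fn(w, c) for w, c in sorted(intermediate.items()))
print(word_counts)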

 

MapReduce Programming - an introductory article on the concept of MapReduce programming

 


MapReduce Programming

* A data-processing problem can be transformed into a MapReduce model by means of MapReduce programming

* The very first step in the process is to visualize the processing plan for the problem as a step-by-step sequence of Map and Reduce operations

* When a MapReduce problem gets more complex, the underlying complexity can be handled in either of two ways, or a combination of both

1) Having a larger number of MapReduce jobs -- this eventually increases the load on the processors, which is then mitigated by distributing the jobs in parallel over the servers

2) Having more complex Map and Reduce jobs -- in this scenario the number of sorting jobs and processes may grow tremendously, adding to the complexity; complexity also grows when the program finds more and more keys and values for the same set of words, since mapping their frequencies to the matched keys becomes an ever larger task. Having more, but simpler, MapReduce jobs leads to more easily maintainable Map and Reduce programs; a small sketch of chaining two simple jobs follows.
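As a rough single-machine sketch of the "more but simpler jobs" option, the snippet below chains two tiny MapReduce-style passes: the first counts words, and the second keeps only the words whose count reaches a threshold. The documents and the threshold are invented for the example.

from collections import Counter

documents = ["big data needs big ideas", "simple jobs keep big systems simple"]

# Job 1: word count (map each document to (word, 1) pairs, reduce by summing).
counts = Counter(word for doc in documents for word in doc.split())

# Job 2: take the output of job 1 and keep only the frequent words (count >= 2).
frequent = {word: count for word, count in counts.items() if count >= 2}

print(frequent)   # {'big': 3, 'simple': 2}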

 

Wednesday, April 7, 2021

Sample MapReduce Application – WordCount ( analysis and interpretation with an example )

 


          Sample MapReduce Application – WordCount

 

* Suppose one wants to identify the unique words in a piece of text along with the frequency of occurrence of each word in that text.

 

* Suppose the text in a data file "file.txt" can be split into several segments of roughly equal length, with only minimal differences between them; then one can represent it in the following manner:

 

Segment 01 - "I stay at WonderVille in the city of Gods"

Segment 02 - "I am going to a picnic near our house"

Segment 03 - "Many of our friends are coming"

Segment 04 - "You are welcome to join us"

Segment 05 - "We will have fun"

 

* Each of the given segments can be processed in parallel, and the results from all the segments are aggregated to provide the result for the whole text shown above.




 

* From this it can be ascertained that there is one map task for each segment of data, where each Map process takes its input in a <key,value> pair format.

 

* Each Map process takes its input in a <key,value> pair format, where the first column is the key, which in this case is the sentence itself.

* The second column holds the value, which in this application is the frequency of the words found during counting. Each Map process within the application is executed by a different processor.


* This produces one intermediate file per map task, in <key2,value2> pair format.

 

* The sort process inherent in MapReduce will sort each of the intermediate files and produce a sorted list of key-value pairs.

 

* The "Reduce" function will read the sorted intermediate files and combine the results into one final result. A small sketch of these steps on the example segments follows.
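To make the flow concrete, here is a minimal Python sketch that runs the map, sort/shuffle and reduce steps over the five example segments above; the grouping with itertools stands in for the framework's sort phase.

from itertools import groupby
from operator import itemgetter

segments = [
    "I stay at WonderVille in the city of Gods",
    "I am going to a picnic near our house",
    "Many of our friends are coming",
    "You are welcome to join us",
    "We will have fun",
]

# Map: one map task per segment, emitting (word, 1) pairs.
intermediate = [(word, 1) for segment in segments for word in segment.split()]

# Sort/shuffle: order the intermediate pairs by key so equal words sit together.
intermediate.sort(key=itemgetter(0))

# Reduce: combine the counts for each word.
word_counts = {word: sum(count for _, count in group)
               for word, group in groupby(intermediate, key=itemgetter(0))}

print(word_counts["I"], word_counts["our"])   # 2 2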