====== Machine Learning Workflow System ======
  
This subject is proposed as part of the [[http://rockflows.i3s.unice.fr/|ROCKFlows]] project involving the following researchers: [[http://mireilleblayfornarino.i3s.unice.fr|Mireille Blay-Fornarino]], [[http://www.i3s.unice.fr/~mosser/start|Sébastien Mosser]] and [[http://www.i3s.unice.fr/~precioso/|Frédéric Precioso]].
===== Context =====
For many years, Machine Learning research has focused on designing new algorithms for solving similar kinds of problem instances (Kotthoff, 2016). However, researchers recognized long ago that no single algorithm will give the best performance across all problem instances; e.g., the No-Free-Lunch theorem (Wolpert, 1996) states that the best classifier will not be the same on every dataset. Consequently, the “winner-take-all” approach should not lead to neglecting algorithms that, while uncompetitive on average, may offer excellent performance on particular problem instances. In 1976, Rice characterized this as the “algorithm selection problem” (Rice, 1976).
A Machine Learning (ML) Workflow can be defined as a tuple (h, p, c), where h represents a hyper-parameter tuning strategy, p represents a set of preprocessing techniques applied to the dataset, and c is an ML algorithm used to learn a model from the processed data and then to predict over new data.
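The (h, p, c) tuple above can be sketched as a small data structure. This is only an illustration: the field names and example values are invented here and are not part of the ROCKFlows implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of an ML workflow as the tuple (h, p, c) defined above.
# All names and values are illustrative, not taken from ROCKFlows.
@dataclass(frozen=True)
class MLWorkflow:
    h: str     # hyper-parameter tuning strategy, e.g. "random_search"
    p: tuple   # ordered preprocessing techniques applied to the dataset
    c: str     # the ML algorithm that learns the model and predicts

# Example: tune an SVM with random search after standardization and PCA.
wf = MLWorkflow(h="random_search", p=("standardize", "pca"), c="svm")
```

Making the workflow immutable (`frozen=True`) lets candidate workflows be stored in sets or used as dictionary keys, which is convenient when enumerating and comparing many of them.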
The construction of a Machine Learning Workflow depends upon two main aspects:
  * The structural characteristics (size, quality, and nature) of the collected data
  * How the results will be used.
This task is highly complex because of the increasing number of available algorithms, the difficulty of choosing the correct preprocessing techniques together with the right algorithms, and the correct tuning of their parameters. To decide which algorithm to choose, data scientists often consider families of algorithms in which they are experts, and may leave aside algorithms that are more “exotic” to them but could perform better for the problem they are trying to solve.
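The combinatorial nature of this complexity can be made concrete with a back-of-the-envelope count. The component counts below are invented for illustration and do not come from the project:

```python
from math import perm

# Rough count of candidate ML workflows (h, p, c).
# All component counts below are invented for illustration.
n_tuning_strategies = 5    # h: grid search, random search, Bayesian, ...
n_preprocessors = 10       # available preprocessing techniques
max_chain_length = 3       # allow ordered chains of up to 3 preprocessors
n_algorithms = 40          # candidate learning algorithms

# Ordered preprocessing chains of length 0..3 without repetition:
# 1 + 10 + 90 + 720 = 821 possible chains.
n_chains = sum(perm(n_preprocessors, k) for k in range(max_chain_length + 1))

n_workflows = n_tuning_strategies * n_chains * n_algorithms
print(n_workflows)  # 164200 candidate workflows, before any tuning budget
```

Even with these modest (made-up) numbers, the space already holds over 160,000 candidate workflows, which is why exhaustive evaluation is not an option.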
  
The thesis must address the following challenges: relevance and quality of predictions, and scalability to manage the huge mass of ML workflows.
To meet these challenges, attention should be paid to the following aspects:
  * //Handling Variabilities:// Variability of compositions (e.g. identifying dominated workflows, managing requirements between WF components); Variability of performance metrics (e.g. dependencies among metrics); Variability of data sets (e.g. images, text) and consequently of meta-features; Variability of platforms; Variability of algorithms and preprocessing algorithms (i.e. characterization to distinguish and automate the compositions); Variability of hyper-parameter tuning strategies (i.e. dependency with workflows); etc.
  * //Architecture of portfolio// to automatically manage (1) experiment running, (2) collection of experiment results, (3) analysis of results, and (4) evolution of the algorithm base. It must support the management of execution errors, incremental analyses, and identification of the context of experiments.
  * //Handling Scalability of Portfolio:// Selecting discriminating data sets; Detecting “deprecated” algorithms and WFs from experiments and literature reviews; Dealing with information from the scientific literature without deteriorating the portfolio's computed knowledge.
  * //Ensuring global consistency// of the Portfolio and Software Product Line. Such a system is enriched by additions to the portfolio and by experiment feedback. As "knowledge" evolves (e.g., new data types, new metrics), the entire system needs to be updated. It is therefore necessary to find abstractions not only to manage these changes but also to optimize them (Bischl et al. 2016).
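One of the variability challenges above, identifying dominated workflows, can be sketched as a Pareto-dominance filter over performance metrics. The workflow names and scores below are invented for the example; the real portfolio would compare many more metrics.

```python
# Sketch of "identifying dominated workflows" as a Pareto-dominance filter.
# Scores are (accuracy, training time in seconds); all values are invented.
def dominates(a, b):
    """a dominates b: at least as accurate and as fast, strictly better once."""
    acc_a, time_a = a
    acc_b, time_b = b
    return acc_a >= acc_b and time_a <= time_b and (acc_a > acc_b or time_a < time_b)

def pareto_front(scores):
    """Keep only workflows not dominated by any other workflow."""
    return {name: s for name, s in scores.items()
            if not any(dominates(other, s)
                       for oname, other in scores.items() if oname != name)}

scores = {
    "svm+pca":  (0.91, 12.0),
    "tree":     (0.85, 1.5),
    "knn+norm": (0.85, 3.0),   # dominated by "tree": same accuracy, slower
}
print(sorted(pareto_front(scores)))  # ['svm+pca', 'tree']
```

Pruning dominated workflows this way shrinks the portfolio without discarding any workflow that could be the best choice under some trade-off between the metrics.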
We have two years of experience on this subject, which has enabled us to (i) eliminate some approaches (e.g. modeling knowledge as a system of constraints, because on our current base it generates more than 6 billion constraints), (ii) lay the foundations for a platform for collecting experiments and presenting them to the user (Camillieri et al., 2016) (see [[http://rockflows.i3s.unice.fr/]]), (iii) study ML workflows in order to predict workflows (Master internships of Luca Parisi, Miguel Fabian Romero Rondon and Melissa Sanabria Rosas), and (iv) address platform evolution by introducing deep learning workflows (see Melissa's Report).
  
The thesis must investigate the research around the selection of algorithms, considering the automatic composition of workflows and supporting dynamic evolutions. It is therefore a thesis in software engineering research, but one that addresses one of the most central current problems in machine learning.
students/phd_mlws.txt · Last modified: 2017/05/28 20:03 by blay