Over the past decades, the field of Machine Learning (ML) has experienced tremendous success. Companies have begun to rely on ML to provide continuous services to their clients. For instance, recommender systems and learning-to-rank models are widely used by Internet companies to serve their customers. An important feature of a reliable and sustainable ML-based service is that it goes beyond basic training requirements. From the onset, it involves data preparation (e.g., data ingestion, curation, and validation) and careful feature selection and engineering, and it may also rely on ensemble models to further boost performance.
Since ML services often remain online for long periods of time, issues such as automatic model re-training with incremental feedback, handling concept drift, and adapting to changing environments become critical. Moreover, since providing such services incurs costs, striking a balance among performance, computational resources, and ease of maintenance can be very challenging.
To advance research in this direction, we solicit articles on the following topics:
- Dealing with dynamic environments in ML, including handling concept drift
- Learning from incremental feedback
- Causal inference
- Dealing with noisy and missing data
- Dealing with sampling, measurement, and algorithmic bias in ML
- Resource-constrained machine learning
- Evaluation metrics for reliability and sustainability in ML
- Automated machine learning (AutoML)
- Lifelong learning
- Invariant learning