
How to learn classifier chains using positive-unlabelled multi-label data? | ML in PL 22

How to learn classifier chains using positive-unlabelled multi-label data? by Paweł Teisseyre (Institute of Computer Science, Polish Academy of Sciences and Faculty of Mathematics and Information Sciences, Warsaw University of Technology)

Multi-label learning deals with data examples that are associated with multiple class labels simultaneously. The problem has attracted significant attention in recent years, and dozens of algorithms have been proposed. However, in the traditional multi-label setting it is assumed that all relevant labels are assigned to a given instance. This assumption is not met in many real-world situations. In the positive-unlabelled multi-label setting, only some of the relevant labels are assigned. The presence of a label means that the instance is really associated with it, while the absence of a label does not imply that the label is not appropriate for the instance. For example, when predicting multiple diseases in one patient, some diseases may be undiagnosed, but this does not mean that the patient does not have them. Among the many existing multi-label methods, classifier chains have gained great popularity, mainly due to their simplicity and high predictive power. However, adapting classifier chains to the positive-unlabelled framework is not straightforward, because the true target variables are only partially observed and therefore cannot be used directly to train the models in the chain. The partial observability concerns not only the current target variable in the chain but also the feature space, which further increases the difficulty of the problem. We propose two modifications of classifier chains. In the first method, we scale the output probabilities of the consecutive classifiers in the chain. In the second method, we minimize a weighted empirical risk, with weights depending on the prior probabilities of the target variables. The predictive performance of the proposed methods is studied on real multi-label datasets under different positive-unlabelled settings.
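The abstract's first idea (scaling output probabilities in the chain) can be illustrated with a minimal sketch. This is not the authors' algorithm: it combines a standard classifier chain with the classic Elkan-Noto style correction, which divides the naive probability by an assumed known labelling propensity `c_k = P(S=1 | Y=1)`. The synthetic data, the propensity values, and the choice to feed the corrected probability to the next link are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic multi-label data with 2 labels; Y is the true label matrix,
# S the observed PU labels: S=1 implies Y=1, but S=0 is uninformative.
n = 2000
X = rng.normal(size=(n, 5))
y1 = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)
y2 = (X[:, 1] + y1 > 0.5).astype(int)
Y = np.column_stack([y1, y2])

# Assumed known labelling propensities c_k = P(S=1 | Y=1) per label.
c = np.array([0.6, 0.7])
S = Y * (rng.random((n, 2)) < c)

# Train the chain on the *observed* labels: model k sees X plus S_{<k}.
models, feats = [], X
for k in range(2):
    m = LogisticRegression().fit(feats, S[:, k])
    models.append(m)
    feats = np.column_stack([feats, S[:, k]])

def predict_proba_chain(X_new, models, c):
    """Scale each naive probability by 1/c_k and pass the corrected
    estimate on to the next classifier in the chain."""
    probs, feats = [], X_new
    for k, m in enumerate(models):
        p = np.clip(m.predict_proba(feats)[:, 1] / c[k], 0.0, 1.0)
        probs.append(p)
        feats = np.column_stack([feats, p])
    return np.column_stack(probs)

P = predict_proba_chain(X, models, c)
print(P.shape)  # one corrected probability per example and label
```

The key point the sketch shows is the difficulty the abstract mentions: the chain is trained on the partially observed `S`, so both the targets and the propagated features are biased, and the per-link correction is what attempts to compensate.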

The talk was delivered during the ML in PL Conference 2022 as part of the Contributed Talks. The conference was organized by ML in PL Association, a non-profit NGO.

ML in PL Association website: https://mlinpl.org/
ML in PL Conference 2022 website: https://conference2022.mlinpl.org/
ML In PL Conference 2023 website: https://conference2023.mlinpl.org/

---

The ML in PL Association was founded based on the experience of organizing the ML in PL Conference (formerly PL in ML). It is a non-profit organization devoted to fostering the machine learning community in Poland and Europe and to promoting a deep understanding of ML methods. Even though ML in PL is based in Poland, it seeks to provide opportunities for international cooperation.

Video information
Published: September 30, 2023, 21:00:40
Duration: 00:20:51