Research Area: Machine Learning
Multi-label learning deals with training examples that are each represented by a single instance while being associated with multiple class labels. Because the number of possible label sets grows exponentially, it is commonly assumed that label correlations should be well exploited to design an effective multi-label learning approach. On the other hand, class imbalance is an intrinsic property of multi-label data that significantly affects the generalization performance of the multi-label predictive model: for each class label, the number of training examples with positive labeling assignments is generally far smaller than the number with negative labeling assignments. To address the class-imbalance issue in multi-label learning, a simple yet effective class-imbalance-aware learning strategy called cross-coupling aggregation (Cocoa) is proposed in this article. Specifically, Cocoa leverages the exploitation of label correlations and the exploration of class imbalance simultaneously: for each class label, a number of multiclass imbalance learners are induced by randomly coupling that label with other labels, and their predictions on an unseen instance are aggregated to determine the corresponding labeling relevancy. Extensive experiments on 18 benchmark datasets clearly validate the effectiveness of Cocoa against state-of-the-art multi-label learning approaches, especially in terms of imbalance-specific evaluation metrics.
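The cross-coupling idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a tri-class decomposition per coupled label pair (both labels negative / only the coupled label positive / the target label positive) and uses a class-weighted decision tree from scikit-learn as a stand-in for a multiclass imbalance learner; all class names and parameters here are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in multiclass imbalance learner


def tri_class_labels(Y, j, k):
    """Assumed tri-class decomposition for the coupled label pair (j, k):
    class 0: y_j=0, y_k=0;  class 1: y_j=0, y_k=1;  class 2: y_j=1."""
    t = np.zeros(len(Y), dtype=int)
    t[(Y[:, j] == 0) & (Y[:, k] == 1)] = 1
    t[Y[:, j] == 1] = 2
    return t


def proba_of(model, X, cls):
    """Predicted probability of class `cls`, or zeros if it was unseen in training."""
    classes = list(model.classes_)
    if cls not in classes:
        return np.zeros(len(X))
    return model.predict_proba(X)[:, classes.index(cls)]


class CocoaSketch:
    """Illustrative cross-coupling aggregation: per label, one binary learner
    plus `n_couples` randomly coupled tri-class learners, with scores summed."""

    def __init__(self, n_couples=3, random_state=0):
        self.n_couples = n_couples
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, Y):
        self.q = Y.shape[1]
        self.models_ = []
        for j in range(self.q):
            binary = DecisionTreeClassifier(class_weight="balanced", max_depth=3)
            binary.fit(X, Y[:, j])
            # randomly couple label j with a subset of the other labels
            others = [k for k in range(self.q) if k != j]
            coupled = self.rng.choice(others, size=min(self.n_couples, len(others)),
                                      replace=False)
            tri = []
            for k in coupled:
                m = DecisionTreeClassifier(class_weight="balanced", max_depth=3)
                m.fit(X, tri_class_labels(Y, j, k))
                tri.append(m)
            self.models_.append((binary, tri))
        return self

    def predict_scores(self, X):
        """Aggregate relevancy per label: binary confidence for y_j=1 plus each
        coupled learner's confidence for its y_j=1 class (class 2)."""
        scores = np.zeros((len(X), self.q))
        for j, (binary, tri) in enumerate(self.models_):
            scores[:, j] = proba_of(binary, X, 1)
            for m in tri:
                scores[:, j] += proba_of(m, X, 2)
        return scores
```

Thresholding the aggregated scores (e.g., per-label, as the paper's relevancy determination suggests) then yields the predicted label set.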
Author(s) Name:  Min-Ling Zhang; Yu-Kun Li; Hao Yang; Xu-Ying Liu
Journal name: IEEE Transactions on Cybernetics (Early Access)
Publisher name:  IEEE
Volume Information: Page(s): 1-13
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9262911