A K-Nearest Neighbours Based Ensemble via Optimal Model Selection for Regression - 2020

Research Area:  Machine Learning

Abstract:

Ensemble methods based on k-NN models minimise the effect of outliers in a training dataset by searching for groups of the k closest data points to estimate the response of an unseen observation. However, traditional k-NN based ensemble methods use the arithmetic mean of the training points' responses for estimation, which has several weaknesses. Traditional k-NN based models are also adversely affected by the presence of non-informative features in the data. This paper suggests a novel ensemble procedure consisting of a class of base k-NN models, each constructed on a bootstrap sample drawn from the training dataset with a random subset of features. Within the k nearest neighbours determined by each k-NN model, a stepwise regression is fitted to predict the test point. The final estimate of the target observation is then obtained by averaging the estimates from all the models in the ensemble. The proposed method is compared with other state-of-the-art procedures on 16 benchmark datasets in terms of the coefficient of determination (R2), Pearson's product-moment correlation coefficient (r), mean square predicted error (MSPE), root mean squared error (RMSE) and mean absolute error (MAE) as performance metrics. Furthermore, boxplots of the results are also constructed. The suggested ensemble procedure outperforms the other procedures on almost all the datasets. The efficacy of the method has also been verified by comparing it with the other methods after adding non-informative features to the datasets considered. The results reveal that the proposed method is more robust to non-informative features in the data than the rest of the methods.
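The abstract describes the procedure only at a high level. The Python sketch below illustrates one plausible reading of it, not the authors' implementation: ordinary least squares stands in for the stepwise regression fitted on each neighbourhood, and the hyper-parameter names (n_models, k, n_sub_features) are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors


def knn_ensemble_predict(X_train, y_train, x_test,
                         n_models=100, k=10, n_sub_features=None,
                         random_state=0):
    """Sketch of the k-NN based ensemble described in the abstract."""
    rng = np.random.default_rng(random_state)
    n, p = X_train.shape
    if n_sub_features is None:
        # Assumed default: a random subset of roughly sqrt(p) features per model.
        n_sub_features = max(1, int(np.sqrt(p)))

    preds = []
    for _ in range(n_models):
        # Bootstrap sample of the training rows.
        rows = rng.integers(0, n, size=n)
        # Random subset of features for this base model.
        cols = rng.choice(p, size=n_sub_features, replace=False)
        Xb, yb = X_train[rows][:, cols], y_train[rows]

        # k nearest neighbours of the test point within this base sample.
        nn = NearestNeighbors(n_neighbors=k).fit(Xb)
        _, idx = nn.kneighbors(x_test[cols].reshape(1, -1))
        Xk, yk = Xb[idx[0]], yb[idx[0]]

        # Local regression on the neighbourhood (plain OLS here; the paper
        # fits a stepwise regression at this step).
        model = LinearRegression().fit(Xk, yk)
        preds.append(model.predict(x_test[cols].reshape(1, -1))[0])

    # Final estimate: average of the base-model predictions.
    return float(np.mean(preds))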

Keywords:  
Data models
Predictive models
Training data
Training
Computational modeling
Bagging
Buildings

Author(s) Name:  Amjad Ali; Muhammad Hamraz; Poom Kumam

Journal name:  IEEE Access

Conference name:  

Publisher name:  IEEE

DOI:  10.1109/ACCESS.2020.3010099

Volume Information:  Volume 8