ASIA university: Item 310904400/108610


    Please use this permanent URL to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/108610


    Title: Soft Estimation by Hierarchical Classification and Regression
    Authors: Shih-Wen Ke; Wei-Chao Lin (林維昭); Chih-Fong Tsai; Ya-Han Hu
    Contributors: Department of Computer Science and Information Engineering
    Date: 2017-04
    Uploaded: 2017-12-08 06:47:20 (UTC+0)
    Abstract: Classification and numeric estimation are the two most common types of predictive data mining. Classification predicts discrete output values, whereas estimation predicts continuous output values. Predictive data mining is usually performed with a single statistical or machine learning technique to construct one prediction model. Related studies have shown that the prediction performance of such a single flat model can be improved by introducing a hierarchical structure. Hierarchical estimation approaches, usually combinations of multiple estimation models, have been proposed for specific domain problems. However, the literature offers no generic hierarchical approach to estimation and no hybrid solution that combines classification and estimation techniques hierarchically. We therefore introduce a generic hierarchical architecture, namely hierarchical classification and regression (HCR), suitable for various estimation problems. In brief, the first level of HCR pre-processes a given training set by classifying it into k classes, yielding k subsets; three approaches are used to perform this task in this study: hard classification (HC), fuzzy c-means (FCM), and genetic algorithms (GA). The training data, with their associated class labels, are then used to train a support vector machine (SVM) classifier. At the second level of HCR, k regression (or estimation) models are trained on their corresponding subsets to produce the final prediction.
    Experiments on 8 different UCI datasets show that most hierarchical prediction models built with the HCR architecture significantly outperform three well-known single flat prediction models, namely linear regression (LR), multilayer perceptron (MLP) neural networks, and support vector regression (SVR), in terms of mean absolute percentage error (MAPE) and root mean squared error (RMSE). In addition, classifying the training set into 4 subsets with the GA-based data pre-processing approach proves to be the best setting (i.e., k=4), and the 4-class SVM+MLP model outperforms three baseline hierarchical regression models.
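The two-level HCR procedure described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the level-1 partitioner (HC/FCM/GA) and the SVM router are replaced by a simple k-means hard partition with nearest-centroid routing, and the level-2 SVR/MLP regressors are replaced by per-cluster linear regression. All names (`HCRSketch`, `fit_linear`, etc.) are illustrative and do not come from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Level 1 (simplified): hard-partition the training set into k subsets."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fit_linear(X, y):
    """Least-squares linear regression with an intercept term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

class HCRSketch:
    """Toy stand-in for HCR: cluster the training set, then fit one
    regressor per cluster; route test points to the nearest cluster."""
    def __init__(self, k=2):
        self.k = k

    def fit(self, X, y):
        self.centers, labels = kmeans(X, self.k)
        self.models = []
        for j in range(self.k):
            m = labels == j
            # Fall back to a global model if a cluster ends up empty.
            self.models.append(fit_linear(X[m], y[m]) if m.any() else fit_linear(X, y))
        return self

    def predict(self, X):
        # Level-1 routing (nearest centroid, in place of the paper's SVM),
        # then level-2 per-cluster regression.
        labels = np.argmin(((X[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        out = np.empty(len(X))
        for j in range(self.k):
            m = labels == j
            if m.any():
                out[m] = predict_linear(self.models[j], X[m])
        return out
```

On a piecewise-linear target such as y = |x|, a single flat linear model fits poorly while the k=2 hierarchical model fits each half almost exactly, which illustrates the kind of improvement over flat baselines that the abstract reports.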
    Relation: NEUROCOMPUTING
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Articles

    Files in this item:

    File         Size  Format  Views
    index.html   0Kb   HTML    417    View/Open


    All items in ASIAIR are protected by original copyright.

