    Please use this identifier to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/115015


    Title: Aleatory-aware deep uncertainty quantification for transfer learning
    Authors: Kabir, H M Dipu;Acharya, U. Rajendra
    Contributors: Department of Bioinformatics and Medical Engineering, College of Information and Electrical Engineering
    Keywords: Patient referral;Uncertainty;COVID;Aleatoric;Epistemic;Heteroscedastic
    Date: 2022-01-01
    Issue Date: 2023-03-28 01:58:35 (UTC+0)
    Publisher: Asia University
    Abstract: Without uncertainty quantification (UQ), users have no indication of how credible the outcomes of deep neural networks (DNNs) are. However, current deep UQ classification models capture mostly epistemic uncertainty. This paper therefore proposes an aleatory-aware deep UQ method for classification problems. First, we train DNNs through transfer learning and collect the numeric output posteriors for all training samples instead of the logical outputs. We then estimate the probability that a certain class occurs from the K nearest output posteriors of the same DNN on the training samples. We call this probability the opacity score, as the paper focuses on the detection of opacity in X-ray images. This score reflects the level of aleatoric uncertainty in the sample: when the NN is certain about the classification of a sample, the probability of one class is much higher than the probabilities of the others, whereas the class probabilities become close to each other for a highly uncertain classification outcome. To capture epistemic uncertainty, we train multiple DNNs with different random initializations, model selections, and augmentations to observe the effect of these training parameters on prediction and uncertainty. To reduce execution time, we first obtain features from the pre-trained NN and then feed these features to an ensemble of fully connected layers to obtain the distribution of the opacity score at test time. We also train several ResNet and DenseNet DNNs to observe the effect of model selection on prediction and uncertainty. The paper also demonstrates a patient referral framework based on the proposed uncertainty quantification. The scripts of the proposed method are available at the following link:
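    The abstract describes two computations: an opacity score estimated from the K nearest output posteriors of training samples (aleatoric uncertainty), and an ensemble of fully connected heads on pre-trained features (epistemic uncertainty). Below is a minimal sketch of that idea, not the authors' released scripts; the function names (opacity_score, ensemble_uncertainty), the choice K=25, and the use of scikit-learn's NearestNeighbors are illustrative assumptions.

```python
# Sketch only: opacity score from K nearest output posteriors, plus a simple
# ensemble-spread estimate of epistemic uncertainty. Names and K are assumed.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def opacity_score(test_posterior, train_posteriors, train_labels, n_classes, k=25):
    """Estimate per-class probability from the K nearest training posteriors."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_posteriors)
    _, idx = nn.kneighbors(test_posterior.reshape(1, -1))
    neighbor_labels = train_labels[idx[0]]
    # Fraction of neighbors in each class: a dominant class indicates a confident
    # sample, while near-uniform scores indicate high aleatoric uncertainty.
    return np.bincount(neighbor_labels, minlength=n_classes) / k

def ensemble_uncertainty(scores_per_model):
    """Mean and spread of opacity scores across an ensemble of models
    (e.g., fully connected heads trained on frozen pre-trained features)."""
    scores = np.stack(scores_per_model)  # shape: (n_models, n_classes)
    return scores.mean(axis=0), scores.std(axis=0)
```

    In this sketch, a wide standard deviation across ensemble members would flag a sample for the patient referral framework mentioned in the abstract.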
    Appears in Collections: [Department of Bioinformatics and Medical Engineering] Journal Articles

    Files in This Item:

    File: index.html  |  Size: 0Kb  |  Format: HTML


    All items in ASIAIR are protected by copyright, with all rights reserved.

