    Please use this identifier to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/115287


    Title: Deep Learning in Left and Right Footprint Image Detection Based on Plantar Pressure
    Authors: Ardhianto, Peter;Liau, Ben-Yi;Jan, Yih-Kuen;Tsai, Jen-Yung;Akhyar, Fityanul;Lin, Chih-Yang;Subiakto, Raden Bagus Reinaldy;Lung, Chi-Wen (龍希文)
    Contributors: Department of Creative Product Design, College of Creative Design (創意設計學院創意商品設計學系)
    Keywords: cerebral palsy;YOLO;object detection;foot progression angle;complex footprints
    Date: 2022-09-01
    Issue Date: 2023-03-29 01:27:29 (UTC+0)
    Publisher: Asia University (亞洲大學)
    Abstract: People with cerebral palsy (CP) suffer primarily from lower-limb impairments. These impairments contribute to the abnormal performance of functional activities and ambulation. Footprints, such as plantar pressure images, are usually used to assess functional performance in people with spastic CP. Detecting left and right feet based on footprints in people with CP is a challenge due to the abnormal foot progression angle and abnormal footprint patterns. Identifying left and right foot profiles in people with CP is essential to provide information on foot orthoses, walking problems, index gait patterns, and determination of the dominant limb. Deep learning with object detection can localize and classify objects more precisely despite the abnormal foot progression angle and complex footprints associated with spastic CP. This study proposes a new object detection model to automatically determine left and right footprints. The footprint images successfully represented the left and right feet with high accuracy in object detection. YOLOv4 detected the left and right feet from footprint images more accurately than the other object detection models, reaching over 99.00% across various performance metrics. Furthermore, detection of the right foot (the dominant leg in most people) was more accurate than that of the left foot (the non-dominant leg in most people) across the different object detection models.
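
    For readers who want a feel for how such a detector is typically run, the minimal sketch below (not the authors' code) loads a Darknet-format YOLOv4 model with OpenCV's DNN module and reports left/right footprint detections on a plantar pressure image. The configuration and weight file names, the 416x416 input size, the thresholds, and the two-class list are illustrative assumptions; the study's trained model and exact settings are described in the full paper, not in this record.

        import cv2

        # Hypothetical file names and class list; the record does not distribute the trained model.
        CFG = "yolov4-footprint.cfg"
        WEIGHTS = "yolov4-footprint.weights"
        CLASSES = ["left_foot", "right_foot"]  # assumed two-class left/right setup

        # Load the Darknet-format network and wrap it in OpenCV's detection helper.
        net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
        model = cv2.dnn_DetectionModel(net)
        model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

        # Run detection on a single plantar pressure footprint image (assumed file name).
        image = cv2.imread("plantar_pressure_footprint.png")
        class_ids, confidences, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

        # Report each detected foot with its class label, confidence, and bounding box.
        for cid, conf, box in zip(class_ids, confidences, boxes):
            print(f"{CLASSES[int(cid)]}: {float(conf):.2f} at {box}")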
    Appears in Collections: [Department of Creative Product Design (創意商品設計學系)] Journal Articles

