ARDHIANTO, PETER (2022) Deep Learning in Left and Right Footprint Image Detection Based on Plantar Pressure. Applied Sciences, 12 (17). Article 8885. ISSN 2076-3417
Text: applsci-12-08885.pdf (3MB)
Abstract
People with cerebral palsy (CP) suffer primarily from lower-limb impairments. These impairments contribute to abnormal performance of functional activities and ambulation. Footprint records, such as plantar pressure images, are commonly used to assess functional performance in people with spastic CP. Detecting the left and right feet from footprints in people with CP is challenging due to abnormal foot progression angles and abnormal footprint patterns. Identifying left and right foot profiles in people with CP is essential to provide information for foot orthoses, walking problems, gait pattern indices, and determination of the dominant limb. Deep-learning-based object detection can localize and classify objects more precisely despite the abnormal foot progression angles and complex footprint patterns associated with spastic CP. This study proposes a new object detection model to automatically determine left and right footprints. The footprint images successfully represented the left and right feet with high accuracy in object detection. YOLOv4 detected the left and right feet from footprint images more successfully than other object detection models, reaching over 99.00% across various performance metrics. Furthermore, detection of the right foot (most people's dominant leg) was more accurate than that of the left foot (most people's non-dominant leg) across the different object detection models.
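To make the detection pipeline concrete, below is a minimal inference sketch in the spirit of the abstract, assuming a YOLOv4 model trained on plantar pressure images and exported in Darknet format with two classes, "left" and "right". The file names (`footprint.cfg`, `footprint.weights`, `plantar_pressure.png`), class order, input size, and thresholds are illustrative assumptions, not details from the paper; only the general YOLO decoding and OpenCV DNN API are standard.

```python
# Sketch: left/right footprint detection with a trained YOLOv4 model.
# Assumes hypothetical Darknet files "footprint.cfg" / "footprint.weights"
# trained on two classes: "left" and "right".
import cv2
import numpy as np

CLASSES = ["left", "right"]  # hypothetical class order

net = cv2.dnn.readNetFromDarknet("footprint.cfg", "footprint.weights")
output_layers = net.getUnconnectedOutLayersNames()

image = cv2.imread("plantar_pressure.png")
h, w = image.shape[:2]

# YOLO expects a square, 0-1 normalized RGB blob; 416x416 is a common YOLOv4 input size.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(output_layers)

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:
        # det = [cx, cy, bw, bh, objectness, class scores...], coords relative to image size
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes.
indices = cv2.dnn.NMSBoxes(boxes, confidences, score_threshold=0.5, nms_threshold=0.4)
for i in np.array(indices).flatten():
    x, y, bw, bh = boxes[i]
    print(f"{CLASSES[class_ids[i]]}: conf={confidences[i]:.2f}, box=({x},{y},{bw},{bh})")
```

Each detection thus yields both a bounding box (localization) and a left/right label (classification), which is what allows the approach to cope with the rotated, atypical footprint shapes described in the abstract.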
Item Type: | Article |
---|---|
Subjects: | 000 Computer Science, Information and General Works > 004 Data processing & computer science |
Divisions: | Faculty of Architecture and Design |
Depositing User: | Mr. Peter Ardhianto, S.Sn., M.Sn., PhD. |
Date Deposited: | 06 Feb 2023 06:15 |
Last Modified: | 06 Feb 2023 06:15 |
URI: | http://repository.unika.ac.id/id/eprint/30790 |