
    A Review on Deep Learning for Automated Hair Damage Detection and Personalized Care Recommendations

File
    FOCSS 2026 32.pdf (494.3Kb)
    Date
    2026-01
    Author
    Upethra, NTPGB
    Vidanage, BVKI
    De S Sirisuriya, SCM
    Abstract
Hair damage detection has evolved with deep learning techniques that offer different approaches to automating hair health assessment, investigating profile-based hair damage identification through features such as texture analysis, shine detection, frizz patterns, split ends, porosity, and scalp biomarkers. Currently, assessment is performed manually by hair specialists using traditional methods such as visual inspection, or expensive clinical methods such as scanning electron microscopy to examine cuticle structure; these approaches are subjective, time-consuming, and limit consumer accessibility. This narrative review explores how deep learning can be applied to hair damage detection from smartphone images, evaluating the use of CNNs, Vision Transformers (ViTs), and multi-modal fusion to enable personalized care recommendations without clinical imaging. Existing research on hair image analysis is limited: most studies focus only on hair segmentation, color detection, or style classification rather than structural damage identification, and despite the growing capability of deep learning in visual analysis, there is a lack of automated tools capable of analyzing hair damage directly from smartphone images. This review proposes a framework that follows established ML pipeline stages: requirement analysis with domain experts, data collection from diverse non-clinical sources, preprocessing with CLAHE and U-Net segmentation, multi-label classification via CNN-ViT ensembles with SVM heads and focal loss, and fusion of an 18-dimensional user-context vector for habit-aware recommendations. By leveraging advanced computer vision, the framework aims to broaden consumer accessibility while addressing gaps in shaft damage analysis, ethnic dataset bias, and explainability absent from existing scalp-focused tools.
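The multi-label classification step described above pairs per-label predictions with focal loss, which down-weights well-classified examples so that rare or hard damage classes dominate training. A minimal NumPy sketch of binary focal loss for multi-label outputs is shown below; the function name, label set, and hyperparameter values are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def multilabel_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss averaged over labels (hypothetical sketch).

    logits  : raw per-label scores, e.g. one per damage type
              (frizz, split ends, dryness, ... -- assumed labels)
    targets : 0/1 array of the same shape
    gamma   : focusing parameter; larger values down-weight easy examples
    alpha   : class-balance weight for positive labels
    """
    p = 1.0 / (1.0 + np.exp(-logits))            # sigmoid per label
    p_t = np.where(targets == 1, p, 1.0 - p)     # prob. of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    eps = 1e-12                                   # guard against log(0)
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# Confident correct predictions contribute almost nothing; confident
# wrong ones dominate -- the property that helps imbalanced labels.
easy = multilabel_focal_loss(np.array([8.0]), np.array([1.0]))
hard = multilabel_focal_loss(np.array([-8.0]), np.array([1.0]))
```

The `(1 - p_t)^gamma` factor is the design choice of interest: with `gamma = 0` this reduces to weighted binary cross-entropy, while `gamma = 2` sharply suppresses the loss from easy examples.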
    URI
    https://ir.kdu.ac.lk/handle/345/9063
    Collections
    • FOC STUDENT SYMPOSIUM 2026 [52]

    Library copyright © 2017  General Sir John Kotelawala Defence University, Sri Lanka
    Contact Us | Send Feedback