
    An Image-Based Facial Emotion Detection Chatbot

    View/Open
    IJRC V 4 I (pages 40-47).pdf (447.2 KB)
    Date
    2025-01
    Author
    Harshani, WGL
    Gamini, DDA
    Abstract
    In the evolving domain of conversational AI, integrating visual recognition capabilities into chatbots represents a pivotal step toward achieving empathetic and context-aware interactions. This study introduces an emotion-aware chatbot system that uses facial emotion recognition (FER) to enhance emotional intelligence in human-AI communication. The primary problem addressed is the lack of conversational systems capable of interpreting non-verbal cues, such as facial expressions, to create meaningful and personalized interactions. The chatbot allows users to submit facial images, enabling the system to recognize and classify emotions in real time and dynamically generate responses tailored to the user's emotional state. The FER model was developed on the FER-2013 benchmark dataset, which categorizes expressions into seven predefined emotions: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. Because the baseline model achieved only moderate results, data augmentation techniques and hyperparameter tuning were applied to improve robustness. LangChain, an open-source framework for building conversational agents, was integrated to orchestrate the chatbot's dialogue flow, leveraging its modular architecture for dynamic and adaptive dialogue management across textual and visual inputs. Emotions recognized by the FER model were passed to LangChain to generate contextually relevant responses tailored to the user's emotional state, enabling seamless integration of visual input processing with language-based conversation and smooth transitions between emotion recognition and response generation. Unlike conventional chatbots, this system introduces a multimodal approach that bridges textual and visual emotional inputs. The research contributes a detailed framework for integrating FER into conversational agents, emphasizing its potential for building rapport, improving engagement, and creating empathetic dialogue. Future work will focus on improving the FER model's accuracy and generalizability through advanced architectures such as Vision Transformers and larger, more diverse datasets, and on exploring real-world use cases, including healthcare and customer service, to demonstrate the impact of emotion-aware AI on communication platforms.
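
    The abstract describes the pipeline only in prose. Below is a minimal sketch of how such a system might be wired together, assuming TensorFlow/Keras for the FER classifier and LangChain's LCEL interface (langchain-core plus langchain-openai) for response generation. The network architecture, prompt wording, chat model name, and helper functions are illustrative assumptions, not the authors' implementation.

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_openai import ChatOpenAI

        # Conventional FER-2013 label order (assumption; matches the
        # seven classes named in the abstract).
        EMOTIONS = ["Angry", "Disgust", "Fear", "Happy",
                    "Sad", "Surprise", "Neutral"]

        def build_fer_model() -> tf.keras.Model:
            # Small CNN over 48x48 grayscale faces, with the kind of
            # augmentation layers (flips, rotations) the abstract
            # mentions. Untrained here; in practice it would be
            # fitted on FER-2013.
            return tf.keras.Sequential([
                layers.Input(shape=(48, 48, 1)),
                layers.RandomFlip("horizontal"),
                layers.RandomRotation(0.1),
                layers.Conv2D(32, 3, activation="relu"),
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, activation="relu"),
                layers.MaxPooling2D(),
                layers.Flatten(),
                layers.Dropout(0.5),
                layers.Dense(len(EMOTIONS), activation="softmax"),
            ])

        def detect_emotion(model: tf.keras.Model, face: np.ndarray) -> str:
            # face: 48x48 grayscale array scaled to [0, 1].
            probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
            return EMOTIONS[int(np.argmax(probs))]

        # Emotion-conditioned response generation: the detected label
        # is injected into the prompt so the LLM can adapt its tone.
        prompt = ChatPromptTemplate.from_messages([
            ("system", "You are an empathetic assistant. The user appears "
                       "to feel {emotion}. Respond with that in mind."),
            ("human", "{message}"),
        ])
        chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # illustrative model

        def respond(model: tf.keras.Model, face: np.ndarray,
                    message: str) -> str:
            emotion = detect_emotion(model, face)
            return chain.invoke({"emotion": emotion,
                                 "message": message}).content

    Keeping the recognizer and the dialogue chain as separate components mirrors the modular design the abstract credits to LangChain: the FER model can be retrained or swapped out (for example, for a Vision Transformer, as the future-work section suggests) without touching the conversation logic.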
    URI
    https://ir.kdu.ac.lk/handle/345/8920
    Collections
    • Volume 04, Issue 01, 2025 [6]

    Library copyright © 2017  General Sir John Kotelawala Defence University, Sri Lanka