dc.contributor.author: Harshani, WGL
dc.contributor.author: Gamini, DDA
dc.date.accessioned: 2025-10-01T07:54:18Z
dc.date.available: 2025-10-01T07:54:18Z
dc.date.issued: 2025-01
dc.identifier.uri: https://ir.kdu.ac.lk/handle/345/8920
dc.description.abstract: In the evolving domain of conversational AI, integrating visual recognition capabilities into chatbots represents a pivotal step toward empathetic, context-aware interaction. This study introduces an emotion-aware chatbot system that uses facial emotion recognition (FER) to enhance emotional intelligence in human-AI communication. The problem addressed is the lack of conversational systems capable of interpreting non-verbal cues, such as facial emotions, to create meaningful and personalized interactions. The chatbot allows users to input facial images, recognizes and classifies emotions in real time, and dynamically generates responses tailored to the user's emotional state. The FER model was developed on the FER-2013 benchmark dataset, categorizing expressions into seven predefined emotions: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. Because the baseline model achieved only moderate results, data augmentation and hyperparameter tuning were applied to improve robustness. LangChain, an open-source framework for building conversational agents, was integrated to orchestrate the chatbot's dialogue flow, its modular architecture supporting dynamic, adaptive dialogue management across textual and visual inputs. Emotions recognized by the FER model were passed to LangChain to generate contextually relevant responses matched to the user's emotional state, enabling seamless integration of visual input processing with language-based conversation and smooth transitions between emotion recognition and response generation. Unlike conventional chatbots, this system takes a multimodal approach that bridges textual and visual emotional inputs. The research contributes a detailed framework for integrating FER into conversational agents, emphasizing its potential for building rapport, improving engagement, and creating empathetic dialogue. Future work will focus on improving the FER model's accuracy and generalizability through advanced architectures such as Vision Transformers and larger, more diverse datasets, and on real-world use cases, including healthcare and customer service, to demonstrate the impact of emotion-aware AI on communication platforms. [en_US]
dc.language.iso: en [en_US]
dc.subject: Facial Emotion Detection, NLP, Chatbot, FER-2013, Accuracy, LangChain [en_US]
dc.title: An Image-Based Facial Emotion Detection Chatbot [en_US]
dc.type: Journal article [en_US]
dc.identifier.faculty: FOC [en_US]
dc.identifier.journal: IJRC [en_US]
dc.identifier.issue: 01 [en_US]
dc.identifier.volume: 04 [en_US]
dc.identifier.pgnos: 40-47 [en_US]
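
The FER component described in the abstract (a classifier trained on FER-2013 with data augmentation) could be sketched as below. This is a minimal illustration under stated assumptions, not the authors' exact model: the layer sizes, the specific augmentation operations, and the fer2013/train directory layout are all assumptions, since the paper does not specify them.

    # Minimal FER sketch in Keras. Assumes FER-2013 is unpacked as 48x48
    # grayscale images under fer2013/train/<emotion>/ ; the CNN below is
    # illustrative, not the authors' published architecture.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

    # Data augmentation, as the abstract describes, to improve robustness;
    # these layers are active only during training.
    augment = tf.keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ])

    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        augment,
        layers.Rescaling(1.0 / 255),          # scale pixel values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(len(EMOTIONS), activation="softmax"),  # 7-way classifier
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Integer labels are inferred from the per-emotion subdirectory names.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fer2013/train", image_size=(48, 48), color_mode="grayscale",
        label_mode="int", batch_size=64)
    model.fit(train_ds, epochs=30)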
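The LangChain integration, in which the recognized emotion conditions the generated response, might look like the following sketch. The prompt wording, the ChatOpenAI model choice, and the predict_emotion hook are hypothetical placeholders; the abstract confirms only that FER output is routed through LangChain to produce emotion-adaptive replies.

    # Hedged sketch of emotion-conditioned response generation with LangChain.
    # Requires an OPENAI_API_KEY in the environment; any chat model LangChain
    # supports could be substituted.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    def predict_emotion(image_path: str) -> str:
        # Hypothetical hook into the FER model sketched above; stubbed here
        # so the example is self-contained.
        return "Sad"

    # The detected emotion label is injected into the system prompt so the
    # reply is tailored to the user's facial state.
    prompt = ChatPromptTemplate.from_messages([
        ("system",
         "You are an empathetic assistant. The user's facial expression was "
         "classified as '{emotion}'. Acknowledge their emotional state and "
         "respond supportively."),
        ("human", "{message}"),
    ])
    llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
    chain = prompt | llm                    # LCEL: prompt feeds the LLM

    def reply(image_path: str, message: str) -> str:
        emotion = predict_emotion(image_path)  # e.g. "Sad"
        return chain.invoke({"emotion": emotion, "message": message}).content

A call such as reply("user.png", "I failed my exam today") would then classify the face, fill the prompt with the resulting label, and return an emotion-aware response.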