dc.description.abstract | Music is the language of emotion. Music emotion recognition has gained considerable attention in academic and industrial communities because of its wide applicability in fields such as recommendation systems, automatic music composition, psychotherapy, and music visualization. With the rapid development of artificial intelligence, deep learning-based music emotion recognition is becoming increasingly popular. The main aims of this research are to examine and review three major topics: computer-based music emotion recognition, emotional semantic-driven music retrieval, and emotional music synthesis technology. Music emotion recognition with Artificial Intelligence (AI) requires collecting a variety of datasets and applying machine learning models such as Convolutional Neural Networks, Recurrent Neural Networks, Support Vector Machines, and Random Forests. These models use features such as rhythm,
pitch, and tempo to classify music by emotion. Emotion and music share a strong link that drives both artistic expression and therapeutic benefit. Affective computing, particularly music emotion analysis, influences recommendation systems, therapy personalization, entertainment, and cultural preservation. AI-based emotional analysis improves streaming experiences, personalizes therapy sessions, and informs marketing strategies, and it can readily handle the complexity introduced by emotional and cultural diversity. A system for recognizing emotions based on musical scales has yet to be developed.
Finally, this paper provides a thorough examination of music emotion recognition, reviews the AI algorithms applied to the major topics mentioned above, and concludes with possible directions for future research. | en_US |