    • KDU-Repository Home
    • ACADEMIC JOURNALS
    • International Journal of Research in Computing
    • Volume 03, Issue 01, 2024

    Systematic Review on AI in Gender Bias Detection and Mitigation in Education and Workplaces

    View/Open
    IJRCV4I2_154.pdf (618.0 KB)
    Date
    2024-07
    Author
    Deckker, D
    Sumanasekara, S
    Abstract
    Gender bias in artificial intelligence (AI) systems, particularly within education and workplace settings, poses serious ethical and operational concerns. These biases often stem from historically skewed datasets and flawed algorithmic logic, which can reinforce existing inequalities and systematically exclude underrepresented groups, especially women. This systematic review analyses peer-reviewed literature from 2010 to 2024, sourced from IEEE Xplore, Google Scholar, PubMed, and SpringerLink. Using targeted keywords such as AI gender bias, algorithmic fairness, and bias mitigation, the review assesses empirical and theoretical studies that examine the causes of gender bias, its manifestations in AI-driven decision-making systems, and proposed strategies for detection and mitigation. Findings reveal that biased training data, algorithm design flaws, and unacknowledged developer assumptions are the primary sources of gender discrimination in AI systems. In education, these systems affect grading accuracy and learning outcomes; in workplaces, they influence hiring, evaluations, and promotions. Mitigation approaches fall into three main categories: data-centric (e.g., data augmentation and data balancing), algorithm-centric (e.g., fairness-aware learning and adversarial training), and post-processing techniques (e.g., output calibration). However, each approach faces implementation challenges, including trade-offs between fairness and accuracy, lack of transparency, and the absence of intersectional bias detection. The review concludes that gender fairness in AI requires integrated strategies that combine technical solutions with ethical governance. Ethical AI deployment must be grounded in inclusive data practices, transparent protocols, and interdisciplinary collaboration. Policymakers and organizations must strengthen accountability frameworks, such as the EU AI Act and the U.S. AI Bill of Rights, to ensure that AI technologies support equitable outcomes in education and employment.
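    The abstract distinguishes data-centric mitigation (e.g., data balancing) from post-hoc fairness measurement. As a minimal illustrative sketch, not drawn from the paper itself, the snippet below shows one common form of each: a demographic-parity gap metric (the difference in selection rates between groups) and inverse-frequency reweighting so that each group contributes equal total weight to training. The function names and the `(group, selected)` record format are assumptions for illustration.

    ```python
    from collections import Counter

    def demographic_parity_gap(records):
        """records: list of (group, selected) pairs, selected in {0, 1}.
        Returns the gap between the highest and lowest per-group selection rate;
        0.0 means all groups are selected at the same rate."""
        totals, chosen = Counter(), Counter()
        for group, selected in records:
            totals[group] += 1
            chosen[group] += int(selected)
        rates = [chosen[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    def inverse_frequency_weights(records):
        """Data-centric mitigation sketch: weight each example by n / (k * n_g)
        so every group's total weight equals n / k, regardless of group size."""
        totals = Counter(group for group, _ in records)
        n, k = len(records), len(totals)
        return [n / (k * totals[group]) for group, _ in records]
    ```

    For example, with records `[("F", 1), ("M", 1), ("M", 0), ("M", 1)]`, the minority group "F" receives a larger per-example weight than "M", equalising the groups' aggregate influence; the parity gap then quantifies any remaining outcome disparity.
    
    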
    URI
    https://ir.kdu.ac.lk/handle/345/8905
    DOI
    10.64701/ijrc/345/8905
    Collections
    • Volume 03, Issue 01, 2024 [11]

    Library copyright © 2017 General Sir John Kotelawala Defence University, Sri Lanka
    Contact Us | Send Feedback
     

     
