Bias Mitigation in AI: A Collection of Research Papers on Gender and Racial Bias
This collection of research papers explores the crucial issue of bias in AI systems, specifically focusing on gender and racial bias. The papers cover diverse topics including image search, language models, image representation, and image captioning, offering insights into the challenges and potential solutions for creating fairer AI systems.
1. Directional Bias Amplification
- Authors: Angelina Wang and Olga Russakovsky
- Conference: ICML 2021
This paper revisits how bias amplification is measured, introducing a directional metric that distinguishes amplification arising from a protected attribute influencing task predictions from amplification in the reverse direction, and showing how models can exaggerate correlations present in their training data.
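As background for this entry, bias amplification is commonly quantified by comparing how strongly an attribute co-occurs with a task label in the training data versus in a model's predictions. The following is a minimal illustrative sketch of that comparison; the toy data and function name are my own, not taken from the paper:

```python
def cooccurrence_rate(pairs, attribute, label):
    """Fraction of examples with `label` that also carry `attribute`."""
    with_label = [a for a, l in pairs if l == label]
    return with_label.count(attribute) / len(with_label)

# Toy data: (attribute, label) pairs, e.g. ("woman", "cooking").
train = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34
preds = [("woman", "cooking")] * 84 + [("man", "cooking")] * 16

train_rate = cooccurrence_rate(train, "woman", "cooking")  # 0.66
pred_rate = cooccurrence_rate(preds, "woman", "cooking")   # 0.84

# Positive amplification: the model exaggerates the training correlation.
print(f"bias amplification: {pred_rate - train_rate:+.2f}")
```

A positive value means the model's outputs are more skewed than its training distribution; the paper's contribution is to make such measurements directional rather than symmetric.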
2. Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search
- Authors: Jialu Wang, Yang Liu, and Xin Wang
- Conference: EMNLP 2021
This research investigates the presence of gender bias in image search results, even when queries are ostensibly gender-neutral. The authors propose techniques to mitigate this bias and ensure more equitable search outcomes.
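One common family of mitigations in this line of work is fairness-aware re-ranking: reorder retrieved results so that each prefix of the result list keeps the attribute distribution near parity. The sketch below is an illustrative greedy strategy under that general idea, not necessarily the authors' exact algorithm:

```python
from collections import deque

def fair_rerank(results):
    """results: list of (item, attribute) in relevance order.
    Greedily pick from the currently under-represented attribute group
    so every prefix of the output stays near a 50/50 split."""
    queues = {"woman": deque(), "man": deque()}
    for item, attr in results:
        queues[attr].append(item)
    out, counts = [], {"woman": 0, "man": 0}
    while queues["woman"] or queues["man"]:
        # Prefer the under-represented, non-empty group.
        pick = min((a for a in queues if queues[a]), key=lambda a: counts[a])
        out.append((queues[pick].popleft(), pick))
        counts[pick] += 1
    return out

ranked = [("img1", "man"), ("img2", "man"), ("img3", "man"),
          ("img4", "woman"), ("img5", "woman")]
print(fair_rerank(ranked))
```

Within each group, relative relevance order is preserved, so the trade-off is confined to interleaving across groups.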
3. OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
- Authors: Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang
- Conference: ICML 2022
This paper introduces OFA, a framework that unifies diverse architectures, tasks, and modalities within a single sequence-to-sequence model. As a general-purpose vision-language system, it is a natural subject for the kinds of bias analyses surveyed in the other papers here.
4. Balanced Datasets are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
- Authors: Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez
- Conference: ICCV 2019
This research underscores the inadequacy of simply relying on balanced datasets to address gender bias in deep image representations. The authors propose methods to estimate and mitigate this bias, leading to more accurate and equitable image analysis.
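The diagnostic at the heart of this entry is often called "leakage": how well a protected attribute can be recovered from a model's learned representation, even when the dataset itself is balanced. The toy sketch below conveys the idea with a trivial one-dimensional threshold attacker; the data and names are illustrative, not the authors' code:

```python
import statistics

def leakage(features, attrs):
    """Accuracy of a trivial threshold attacker predicting the attribute
    from a 1-D feature. Accuracy far above 0.5 means the representation
    still encodes the attribute despite a balanced dataset."""
    threshold = statistics.mean(features)
    correct = sum(
        (f > threshold) == (a == "woman") for f, a in zip(features, attrs)
    )
    return correct / len(features)

# Balanced toy dataset: a 50/50 attribute split, yet the learned feature
# still separates the two groups, so the attacker succeeds.
features = [0.9, 0.8, 0.85, 0.7, 0.1, 0.2, 0.15, 0.3]
attrs = ["woman"] * 4 + ["man"] * 4
print(f"attacker accuracy: {leakage(features, attrs):.2f}")
```

In practice the attacker is a learned classifier over high-dimensional features, but the conclusion is the same: balancing labels does not guarantee the representation forgets the attribute.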
5. Taxonomy of Risks Posed by Language Models
- Authors: Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al.
- Conference: FAccT 2022
This comprehensive paper presents a taxonomy of risks associated with language models, including bias, fairness, and societal impact. It provides a valuable framework for understanding and addressing the potential harms posed by these powerful AI systems.
6. Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy
- Authors: Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky
- Conference: FAccT 2020
This research focuses on the importance of dataset construction in mitigating bias. The authors propose techniques for filtering and balancing the distribution of human representation within the ImageNet dataset, promoting a more equitable representation of individuals.
7. ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graphs
- Authors: Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang
- Conference: AAAI 2021
This paper introduces ERNIE-ViL, a model that leverages scene graphs to enhance vision-language representations. By integrating the objects, attributes, and relationships captured in scene graphs, ERNIE-ViL improves performance on vision-language tasks such as visual question answering and visual reasoning; like OFA above, it is included here as a representative system to which such bias analyses apply.
8. Understanding and Evaluating Racial Biases in Image Captioning
- Authors: Dora Zhao, Angelina Wang, and Olga Russakovsky
- Conference: ICCV 2021
This research investigates the presence of racial bias in image captioning systems. The authors provide an analysis of existing biases and propose methods for evaluating and mitigating them, ultimately promoting more equitable and inclusive image descriptions.
9. Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints
- Authors: Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang
- Conference: EMNLP 2017
This paper presents a method for reducing gender bias amplification by imposing corpus-level constraints at inference time, solved via Lagrangian relaxation. The authors demonstrate that this approach curbs bias amplification in visual semantic role labeling and multilabel image classification while largely preserving accuracy.
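The corpus-level constraint idea can be pictured as a post-hoc adjustment step: if the predicted gender ratio across the corpus drifts beyond a margin of the training ratio, the least-confident majority-label predictions are flipped until the constraint holds. This is a deliberately simplified toy rendition; the paper itself formulates the constraint over structured outputs and solves it with Lagrangian relaxation:

```python
def enforce_ratio(preds, target_ratio, margin=0.05):
    """preds: list of (label, confidence). Flip the least-confident
    predictions of the over-represented label until the corpus-level
    ratio of 'woman' labels is within `margin` of `target_ratio`."""
    preds = list(preds)
    def ratio():
        return sum(1 for l, _ in preds if l == "woman") / len(preds)
    while abs(ratio() - target_ratio) > margin:
        majority = "woman" if ratio() > target_ratio else "man"
        # Index of the least-confident prediction with the majority label.
        i = min((j for j, (l, _) in enumerate(preds) if l == majority),
                key=lambda j: preds[j][1])
        preds[i] = ("man" if majority == "woman" else "woman", preds[i][1])
    return preds

# Training ratio is 0.66 women, but the model predicts 0.90.
preds = [("woman", 0.9)] * 7 + [("woman", 0.55)] * 2 + [("man", 0.8)]
adjusted = enforce_ratio(preds, target_ratio=0.66)
print(sum(1 for l, _ in adjusted if l == "woman") / len(adjusted))  # 0.7
```

Because only the lowest-confidence predictions are flipped, the adjustment trades as little expected accuracy as possible for satisfying the corpus-level constraint.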
These research papers provide valuable insights into the challenges and opportunities associated with mitigating bias in AI systems. By understanding the nuances of bias and employing effective mitigation techniques, we can work towards creating AI systems that are fairer, more accurate, and more beneficial for all.