HOW ALGORITHMIC BIAS IN ARTIFICIAL INTELLIGENCE AFFECTS SOCIETY


Mamadiyev Asilbek

Abstract

Artificial intelligence (AI) systems can exhibit algorithmic bias that produces social, financial, and legal harms. Such bias most often arises from skewed or incomplete training data and surfaces in hiring, credit decisions, crime prediction, and other high-stakes domains, where it deepens social inequality, enables financial discrimination, and erodes public trust in AI. Mitigating the problem requires diversifying training data, auditing algorithms, increasing transparency, and strengthening legal oversight. Implemented together, these measures allow AI to be used fairly and safely, maximizing its benefits while minimizing adverse effects on society and ensuring equitable treatment for all individuals.
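As one concrete illustration of what "algorithm auditing" can mean in practice, the short Python sketch below compares selection rates across two demographic groups and computes a disparate impact ratio, a standard fairness check. The data, group labels, and function names are hypothetical and are not drawn from the article; this is a minimal sketch of the technique, not the author's method.

# Minimal illustrative sketch of one audit step: comparing positive-outcome
# rates across demographic groups (demographic parity / disparate impact).
# All predictions and group labels below are hypothetical.

from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the share of positive decisions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A ratio of 1.0 means parity; the common "four-fifths rule"
    flags ratios below 0.8 for further review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model outputs: 1 = recommended, 0 = rejected.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
print("Selection rate per group:", rates)
print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))

In this toy example, group A is selected 60% of the time and group B 40%, giving a ratio of about 0.67, which would fall below the four-fifths threshold and prompt closer examination of the model and its training data.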


