1. Fostering fairness in image classification through awareness of sensitive data
Ivona Colakovic, Sašo Karakatič, 2025, original scientific article
Abstract: Machine learning (ML) has demonstrated a remarkable ability to uncover hidden patterns in data. However, the presence of biases and discrimination originating from the data itself and, consequently, emerging in the ML outcomes remains a pressing concern. With the exponential growth of unstructured data, such as images, fairness has become increasingly critical, as neural network (NN) models may inadvertently learn and perpetuate societal and historical biases. To address this challenge, we propose a fairness-aware loss function that iteratively prioritizes the worst-performing sensitive group during NN training. This approach aims to balance treatment quality across sensitive groups, achieving fairer image classification outcomes while incurring only a slight compromise in overall performance. Our method, evaluated on the FairFace dataset, demonstrates significant improvements in fairness metrics while maintaining comparable overall quality. These trade-offs highlight that the minor decrease in overall quality is justified by the improvement in fairness of the models. (A minimal sketch of the worst-group reweighting idea follows this record.)
Keywords: fairness, image classification, machine learning, supervised learning, neural networks
Published in DKUM: 23.04.2025; Views: 0; Downloads: 4
Full text (2,01 MB)
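The record does not reproduce the loss itself, but its core idea, up-weighting the sensitive group with the currently worst loss at each training step, can be illustrated. The following is a minimal PyTorch sketch under stated assumptions: the function name `group_weighted_loss`, the per-batch grouping via `group_ids`, and the `boost` factor are all illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def group_weighted_loss(logits, targets, group_ids, num_groups, boost=2.0):
    """Cross-entropy that up-weights samples from the sensitive group
    with the worst mean loss in the current batch (illustrative sketch,
    not the published loss; `boost` is an assumed factor)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Mean loss per sensitive group; groups absent from the batch get 0.
    group_losses = torch.stack([
        per_sample[group_ids == g].mean() if (group_ids == g).any()
        else per_sample.new_zeros(())
        for g in range(num_groups)
    ])
    worst = torch.argmax(group_losses)   # worst-performing group this batch
    weights = torch.ones_like(per_sample)
    weights[group_ids == worst] = boost  # prioritize the worst group
    return (weights * per_sample).mean()
```

Because the worst group is re-identified on every batch, the emphasis shifts iteratively as group losses equalize, which matches the abstract's description of iterative prioritization.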
2. The UN Sustainable Development Goals and Provision of Security, Responses to Crime and Security Threats, and Fair Criminal Justice Systems
2024, scientific monograph
Abstract: The book comprises 14 peer-reviewed chapters based on research on crime and security threats in relation to the United Nations Sustainable Development Goals. It is a multidisciplinary work that combines different views of safety and security provision in local environments, at the national level, and in the international environment. The chapters include findings of a literature review, empirical research on crime and victimization of individuals, case studies, specific forms of crime, institutional and civil society responses to security threats, and legal, police, and policing perspectives on safety and security provision in modern society.
Keywords: sustainable development goals, United Nations, safety and security, crime, security threats, criminology, criminal justice, fairness
Published in DKUM: 08.07.2024; Views: 164; Downloads: 72
Full text (12,32 MB)
3. FairBoost: Boosting supervised learning for learning on multiple sensitive features
Ivona Colakovic, Sašo Karakatič, 2023, original scientific article
Abstract: The vast majority of machine learning research focuses on improving the correctness of outcomes (i.e., accuracy, error rate, and other metrics). However, the negative impact of machine learning outcomes can be substantial if the consequences marginalize certain groups of data, especially if certain groups of people are the ones being discriminated against. Recent papers therefore try to tackle the unfair treatment of certain groups of data (humans), but mostly focus on a single sensitive feature with binary values. In this paper, we propose FairBoost, a boosting ensemble that takes fairness as well as accuracy into consideration to mitigate unfairness in classification tasks during model training. The method aims to close the gap between proposed approaches and real-world applications, where there is often more than one sensitive feature containing multiple categories. The proposed approach checks for bias and corrects it at each iteration of building the boosted ensemble. FairBoost is tested in an experimental setting and compared to similar existing algorithms. The results on different datasets and settings show no significant changes in the overall quality of classification, while the fairness of the outcomes is vastly improved. (A sketch of a fairness-aware boosting loop follows this record.)
Keywords: fairness, boosting, machine learning, supervised learning
Published in DKUM: 11.06.2024; Views: 147; Downloads: 23
Full text (1,70 MB)
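To make the abstract's "checks the bias and corrects it through the iteration" concrete, here is a minimal AdaBoost-style sketch with an added fairness step. The function name `fairboost_fit`, the `fairness_lambda` factor, and the group-error test are assumptions for illustration, not the published update rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fairboost_fit(X, y, groups, n_rounds=10, fairness_lambda=0.5):
    """AdaBoost-style loop with an extra step that raises the weights of
    sensitive groups whose error exceeds the overall error (sketch only;
    `fairness_lambda` and the group test are assumed, not FairBoost's rule)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = (stump.predict(X) != y).astype(float)
        err = np.clip(np.dot(w, miss), 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Standard AdaBoost reweighting: up-weight misclassified instances.
        w *= np.exp(alpha * (2.0 * miss - 1.0))
        # Fairness step: further up-weight groups treated worse than average.
        for g in np.unique(groups):
            mask = groups == g
            if miss[mask].mean() > miss.mean():
                w[mask] *= 1.0 + fairness_lambda
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas
```

Multiple sensitive features with multiple categories, as the abstract emphasizes, can be handled in this scheme by encoding each combination of sensitive values as one label in `groups`.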
4. Adaptive boosting method for mitigating ethnicity and age group unfairness
Ivona Colakovic, Sašo Karakatič, 2024, original scientific article
Abstract: Machine learning algorithms make decisions in various fields, thus influencing people's lives. However, despite their good quality, they can be unfair to certain demographic groups, perpetuating socially induced biases. This paper therefore deals with a common unfairness problem, unequal quality of service, which appears in classification when age and ethnicity groups are considered. To tackle this issue, we propose an adaptive boosting algorithm that aims to mitigate the existing unfairness in data. The proposed method is based on the AdaBoost algorithm but incorporates fairness into the calculation of each instance's weight, with the goal of making the prediction as good as possible for all ages and ethnicities. The results show that the proposed method increases the fairness of age and ethnicity groups while maintaining good overall quality compared to traditional classification algorithms, achieving the best accuracy in almost every sensitive feature group. Based on an extensive analysis of the results, we found that, when it comes to ethnicity, White people are likely to be incorrectly classified as not being heroin users, whereas other groups are likely to be incorrectly classified as heroin users. (A sketch of how such a quality-of-service gap can be measured follows this record.)
Keywords: fairness, boosting, machine learning, classification
Published in DKUM: 24.05.2024; Views: 283; Downloads: 20
Full text (1,66 MB)
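One common way to quantify the "unequal quality of service" problem named in the abstract is the gap between the best- and worst-served groups' accuracy. The sketch below assumes that reading; the function name and the choice of accuracy as the quality measure are illustrative, not taken from the paper.

```python
import numpy as np

def quality_of_service_gap(y_true, y_pred, groups):
    """Per-group accuracy and the spread between the best- and
    worst-served sensitive groups (assumed metric, for illustration)."""
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())
```

A smaller gap with a roughly unchanged overall accuracy is the trade-off pattern the abstract reports for the proposed method.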
5. Improved Boosted Classification to Mitigate the Ethnicity and Age Group Unfairness
Ivona Colakovic, Sašo Karakatič, 2022, published scientific conference contribution
Abstract: This paper deals with the group fairness issue that arises when classifying data that contains socially induced biases for age and ethnicity. To tackle the unfair focus on certain age and ethnicity groups, we propose an adaptive boosting method that balances the fair treatment of all groups. The proposed approach builds upon the AdaBoost method but supplements it with a fairness factor between the sensitive groups. The results show that the proposed method gives more focus to the age and ethnicity groups that receive less focus under traditional classification techniques. The resulting classification model is thus more balanced, treating all of the sensitive groups more equally without sacrificing the overall quality of the classification.
Keywords: fairness, classification, boosting, machine learning
Published in DKUM: 02.08.2023; Views: 530; Downloads: 63
Full text (884,95 KB)