Search the digital library catalogue

Results 1 - 4 of 4

1.
The UN Sustainable Development Goals and Provision of Security, Responses to Crime and Security Threats, and Fair Criminal Justice Systems
2024, scientific monograph

Description: The book comprises 14 peer-reviewed chapters based on research on crime and security threats in relation to the United Nations Sustainable Development Goals. It is a multidisciplinary work that combines different views of safety and security provision in local environments, at the national level, and in the international environment. The chapters include findings of a literature review, empirical research on crime and victimization of individuals, case studies, specific forms of crime, institutional and civil society responses to security threats, as well as legal, police, and policing perspectives on safety and security provision in modern society.
Keywords: sustainable development goals, United Nations, safety and security, crime, security threats, criminology, criminal justice, fairness
Published in DKUM: 08.07.2024; Views: 164; Downloads: 37
.pdf Full text (12.32 MB)
The material contains additional files.

2.
FairBoost: Boosting supervised learning for learning on multiple sensitive features
Ivona Colakovic, Sašo Karakatič, 2023, original scientific article

Description: The vast majority of machine learning research focuses on improving the correctness of the outcomes (i.e., accuracy, error rate, and other metrics). However, the negative impact of machine learning outcomes can be substantial if the consequences marginalize certain groups of data, especially if certain groups of people are the ones being discriminated against. Thus, recent papers try to tackle the unfair treatment of certain groups of data (humans), but mostly focus on only one sensitive feature with binary values. In this paper, we propose FairBoost, an ensemble boosting method that takes fairness as well as accuracy into consideration to mitigate unfairness in classification tasks during the model training process. This method tries to close the gap between proposed approaches and real-world applications, where there is often more than one sensitive feature, each containing multiple categories. The proposed approach checks the bias and corrects it through the iterations of building the boosted ensemble. The proposed FairBoost is tested in an experimental setting and compared to similar existing algorithms. The results on different datasets and settings show no significant changes in the overall quality of classification, while the fairness of the outcomes is vastly improved.
Keywords: fairness, boosting, machine learning, supervised learning
Published in DKUM: 11.06.2024; Views: 147; Downloads: 14
.pdf Full text (1.70 MB)
The material contains additional files. An illustrative code sketch of the boosting approach described above follows this entry.
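
Since the abstract describes FairBoost only at a high level, the following is a minimal Python sketch of how a fairness correction over several multi-category sensitive features can be folded into an AdaBoost-style training loop. It is a sketch under assumptions: the function name fair_boost, the fairness_weight parameter, and the specific group-reweighting rule are hypothetical illustrations, not the published algorithm.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fair_boost(X, y, sensitive, n_rounds=50, fairness_weight=0.5):
    """Boosted ensemble with an illustrative per-group fairness correction.

    sensitive: (n_samples, n_sensitive_features) array of categorical
    sensitive attributes, e.g. one column for ethnicity and one for age group.
    Class labels are assumed to be -1 and +1.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                      # instance weights, kept normalized
    learners, alphas = [], []

    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y             # boolean mask of misclassified instances

        err = np.clip(np.sum(w[miss]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # standard AdaBoost step size

        # Standard AdaBoost reweighting: up-weight misclassified instances.
        w *= np.exp(alpha * miss)

        # Illustrative fairness correction (an assumption of this sketch): for every
        # sensitive column and every category in it, further up-weight groups whose
        # error rate in this round exceeds the overall error rate.
        overall_err = miss.mean()
        for col in range(sensitive.shape[1]):
            for group in np.unique(sensitive[:, col]):
                mask = sensitive[:, col] == group
                if mask.any() and miss[mask].mean() > overall_err:
                    w[mask] *= 1.0 + fairness_weight * (miss[mask].mean() - overall_err)

        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def fair_boost_predict(learners, alphas, X):
    """Weighted vote of the ensemble; assumes class labels -1 and +1."""
    return np.sign(sum(a * m.predict(X) for a, m in zip(alphas, learners)))

The point the sketch tries to mirror is that the bias check and correction happen inside every boosting round, so later weak learners are trained on weights already adjusted toward the groups that have been served worst so far.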

3.
Adaptive boosting method for mitigating ethnicity and age group unfairness
Ivona Colakovic, Sašo Karakatič, 2024, original scientific article

Description: Machine learning algorithms make decisions in various fields, thus influencing people’s lives. However, despite their good quality, they can be unfair to certain demographic groups, perpetuating socially induced biases. Therefore, this paper deals with a common unfairness problem, unequal quality of service, that appears in classification when age and ethnicity groups are used. To tackle this issue, we propose an adaptive boosting algorithm that aims to mitigate the existing unfairness in data. The proposed method is based on the AdaBoost algorithm but incorporates fairness in the calculation of the instance’s weight with the goal of making the prediction as good as possible for all ages and ethnicities. The results show that the proposed method increases the fairness of age and ethnicity groups while maintaining good overall quality compared to traditional classification algorithms. The proposed method achieves the best accuracy in almost every sensitive feature group. Based on the extensive analysis of the results, we found that when it comes to ethnicity, interestingly, White people are likely to be incorrectly classified as not being heroin users, whereas other groups are likely to be incorrectly classified as heroin users.
Keywords: fairness, boosting, machine learning, classification
Published in DKUM: 24.05.2024; Views: 283; Downloads: 16
.pdf Full text (1.66 MB)
The material contains additional files. A short sketch of how the unequal-quality-of-service gap can be measured follows this entry.
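
This entry frames unfairness as unequal quality of service across age and ethnicity groups. As a complement to the boosting sketch under entry 2, the snippet below shows one simple way such a gap can be quantified. The helper name quality_of_service_gap and the choice of per-group accuracy with a max-min gap are assumptions made for illustration, not necessarily the metric used in the paper.

import numpy as np

def quality_of_service_gap(y_true, y_pred, groups):
    """Per-group accuracy and the largest accuracy gap between groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    accs = list(per_group.values())
    return per_group, max(accs) - min(accs)

# Toy example: group "A" is classified perfectly, group "B" only half the time,
# so the quality-of-service gap is 0.5.
per_group, gap = quality_of_service_gap(
    y_true=[1, 0, 1, 0], y_pred=[1, 0, 1, 1], groups=["A", "A", "B", "B"])
print(per_group, gap)   # {'A': 1.0, 'B': 0.5} 0.5

A fairness-aware learner of the kind described in this entry would aim to keep such a gap small while preserving overall accuracy.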

4.
Improved Boosted Classification to Mitigate the Ethnicity and Age Group Unfairness
Ivona Colakovic, Sašo Karakatič, 2022, published scientific conference contribution

Description: This paper deals with the group fairness issue that arises when classifying data that contains socially induced biases for age and ethnicity. To tackle the unfair focus on certain age and ethnicity groups, we propose an adaptive boosting method that balances the fair treatment of all groups. The proposed approach builds upon the AdaBoost method but supplements it with a fairness factor between the sensitive groups. The results show that the proposed method focuses more on the age and ethnicity groups that are given less focus by traditional classification techniques. Thus, the resulting classification model is more balanced, treating all of the sensitive groups more equally without sacrificing the overall quality of the classification.
Keywords: fairness, classification, boosting, machine learning
Published in DKUM: 02.08.2023; Views: 530; Downloads: 52
.pdf Full text (884.95 KB)
The material contains additional files.

Search performed in 0.09 sec.