1. Recent applications of explainable AI (XAI): a systematic literature review
Mirka Saarela, Vili Podgorelec, 2024, review article
Keywords: explainable artificial intelligence, applications, interpretable machine learning, convolutional neural network, deep learning, post-hoc explanations, model-agnostic explanations
Published in DKUM: 31.01.2025
Full text (1.42 MB)
3. Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement
Bojan Žlahtič, Jernej Završnik, Helena Blažun Vošner, Peter Kokol, David Šuran, Tadej Završnik, 2023, original scientific article
Abstract: Over the past few decades, machine learning has emerged as a valuable tool in medicine, driven by the accumulation of vast amounts of medical data and the imperative to harness this data for the betterment of humanity. However, many of the prevailing machine learning algorithms in use today are black-box models, lacking transparency in their decision-making processes and often devoid of clear visualization capabilities. This lack of transparency impedes medical experts from effectively leveraging such models, given the high-stakes nature of their decisions. Consequently, explainable artificial intelligence (XAI) has arisen to address the demand for transparency in the decision-making mechanisms of black-box algorithms. Alternatively, employing white-box algorithms can empower medical experts by allowing them to contribute their knowledge to the decision-making process and obtain a clear and transparent output. This approach offers an opportunity to personalize machine learning models through an agile process. A novel white-box machine learning algorithm known as Data canyons was employed as a transparent and robust foundation for the proposed solution. By providing medical experts with a web framework through which their expertise is transferred to a machine learning model, and by enabling this process to be applied in an agile manner, a symbiotic relationship is fostered between the domains of medical expertise and machine learning. The flexibility to manipulate the output machine learning model and validate it visually, even without expertise in machine learning, establishes a crucial link between these two expert domains.
Keywords: XAI, explainable artificial intelligence, data canyons, machine learning, transparency, agile development, white-box model
Published in DKUM: 14.03.2024
Full text (5.28 MB)
4. Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
Saša Brdnik, Vili Podgorelec, Boštjan Šumak, 2023, original scientific article
Abstract: This study examined the impact of eight explainable AI (XAI) explanation techniques, covering established feature-relevance, certainty, and comparison explanations, on user trust and satisfaction in the context of XAI-enhanced learning analytics, comparing two groups of STEM college students based on their Bologna study level. Overall, students reported the highest trust in the local feature explanation presented as a bar graph. Master's students presented with global feature explanations also reported high trust in this form of explanation. The highest explanation satisfaction was measured for the local feature explanation technique in both the bachelor's and master's groups, with master's students additionally expressing high satisfaction with the global feature-importance explanation. A detailed overview shows that the two groups agreed on their favored explanation techniques when evaluating both trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than local feature-relevance explanation techniques. Trust and explanation satisfaction were measured with the Trust in Automation questionnaire and the Explanation Satisfaction Scale, and correlations between the itemized results were documented. Master's-level students self-reported a higher overall understanding of and satisfaction with the explanations and perceived them as less harmful.
Keywords: explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction
Published in DKUM: 12.02.2024
Full text (3.24 MB)
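For readers unfamiliar with the explanation form singled out in the abstract above (a local feature-relevance explanation rendered as a bar graph), the following is a minimal, self-contained sketch of that idea. It is not the paper's implementation: the dataset (scikit-learn's breast-cancer set), the model (a random forest), and the attribution method (a simple occlusion test that replaces one feature at a time with its training mean and records the change in predicted probability) are all illustrative assumptions.

```python
# Illustrative sketch only: a local feature-relevance explanation for one
# prediction, shown as a bar graph. Attribution is a basic occlusion test,
# not the technique used in the cited study.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
feature_names = load_breast_cancer().feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

instance = X_test[0]                    # the single prediction being explained
target = model.predict([instance])[0]   # labels are 0/1, so this also indexes proba columns
baseline = model.predict_proba([instance])[0, target]
means = X_train.mean(axis=0)

# Local relevance: drop in predicted probability when a feature is
# "occluded" by replacing it with its training mean.
relevance = np.empty(X.shape[1])
for j in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[j] = means[j]
    relevance[j] = baseline - model.predict_proba([perturbed])[0, target]

# Bar-graph presentation of the ten most relevant features.
top = np.argsort(np.abs(relevance))[::-1][:10]
plt.barh([feature_names[j] for j in top][::-1], relevance[top][::-1])
plt.xlabel("Change in predicted probability when feature is occluded")
plt.title(f"Local feature relevance for one instance (class {target})")
plt.tight_layout()
plt.show()
```

The point of the sketch is the presentation, not the attribution method: any per-feature relevance scores for a single prediction (e.g., from SHAP or LIME) could be plotted the same way.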