Search the digital library catalog

Results 1 - 4 of 4
1.
2.
An efficient iterative approach to explainable feature learning
Dino Vlahek, Domen Mongus, 2023, original scientific article

Keywords: data classification, explainable artificial intelligence, feature learning, knowledge discovery
Published in DKUM: 13.06.2024; Views: 129; Downloads: 17
Full text (PDF, 1.95 MB)

3.
Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement
Bojan Žlahtič, Jernej Završnik, Helena Blažun Vošner, Peter Kokol, David Šuran, Tadej Završnik, 2023, original scientific article

Abstract: Over the past few decades, machine learning has emerged as a valuable tool in the field of medicine, driven by the accumulation of vast amounts of medical data and the imperative to harness this data for the betterment of humanity. However, many of the prevailing machine learning algorithms in use today are black-box models, lacking transparency in their decision-making processes and often devoid of clear visualization capabilities. This lack of transparency impedes medical experts from effectively leveraging such models, given the high-stakes nature of their decisions. Consequently, explainable artificial intelligence (XAI) has arisen to address the demand for transparency in the decision-making mechanisms of black-box algorithms. Alternatively, employing white-box algorithms can empower medical experts by allowing them to contribute their knowledge to the decision-making process and obtain a clear and transparent output. This approach offers an opportunity to personalize machine learning models through an agile process. A novel white-box machine learning algorithm known as Data canyons was employed as a transparent and robust foundation for the proposed solution. By providing medical experts with a web framework through which their expertise is transferred to a machine learning model, and by enabling this process to be applied in an agile manner, a symbiotic relationship is fostered between the domains of medical expertise and machine learning. The flexibility to manipulate the output machine learning model and validate it visually, even without expertise in machine learning, establishes a crucial link between these two expert domains.
Keywords: XAI, explainable artificial intelligence, data canyons, machine learning, transparency, agile development, white-box model
Published in DKUM: 14.03.2024; Views: 299; Downloads: 36
Full text (PDF, 5.28 MB)

4.
Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
Saša Brdnik, Vili Podgorelec, Boštjan Šumak, 2023, original scientific article

Abstract: This study examined the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics, comparing two groups of STEM college students based on their Bologna study level and using several established feature relevance techniques along with certainty and comparison explanations. Overall, the students reported the highest trust in local feature explanations presented as a bar graph. Master's students presented with global feature explanations also reported high trust in that form of explanation. The highest measured explanation satisfaction was observed with the local feature explanation technique in both the bachelor's and master's groups, with master's students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students displayed consensus in their favored explanation techniques when evaluating trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than local feature relevance explanation techniques. The correlation between itemized results was documented, with trust and satisfaction measured using the Trust in Automation questionnaire and the Explanation Satisfaction Scale. Master's-level students self-reported a higher overall understanding of the explanations, higher overall satisfaction with the explanations, and perceived the explanations as less harmful.
Keywords: explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction
Published in DKUM: 12.02.2024; Views: 368; Downloads: 58
Full text (PDF, 3.24 MB)
