2.
An efficient iterative approach to explainable feature learning
Dino Vlahek, Domen Mongus, 2023, original scientific article

Keywords: data classification, explainable artificial intelligence, feature learning, knowledge discovery
Published in DKUM: 13.06.2024; Views: 129; Downloads: 17
.pdf Full text (1.95 MB)
The document contains additional files.

3.
Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement
Bojan Žlahtič, Jernej Završnik, Helena Blažun Vošner, Peter Kokol, David Šuran, Tadej Završnik, 2023, original scientific article

Abstract: Over the past few decades, machine learning has emerged as a valuable tool in the field of medicine, driven by the accumulation of vast amounts of medical data and the imperative to harness this data for the betterment of humanity. However, many of the prevailing machine learning algorithms in use today are black-box models, lacking transparency in their decision-making processes and often devoid of clear visualization capabilities. This lack of transparency impedes medical experts from effectively leveraging such models, given the high-stakes nature of their decisions. Consequently, the need has arisen for explainable artificial intelligence (XAI), which aims to address the demand for transparency in the decision-making mechanisms of black-box algorithms. Alternatively, employing white-box algorithms can empower medical experts by allowing them to contribute their knowledge to the decision-making process and obtain a clear and transparent output. This approach offers an opportunity to personalize machine learning models through an agile process. A novel white-box machine learning algorithm known as Data canyons was employed as a transparent and robust foundation for the proposed solution. By providing medical experts with a web framework through which their expertise is transferred to a machine learning model, and by enabling this process to be applied in an agile manner, a symbiotic relationship is fostered between the domains of medical expertise and machine learning. The flexibility to manipulate the output machine learning model and validate it visually, even without expertise in machine learning, establishes a crucial link between these two expert domains.
Keywords: XAI, explainable artificial intelligence, data canyons, machine learning, transparency, agile development, white-box model
Published in DKUM: 14.03.2024; Views: 299; Downloads: 32
.pdf Full text (5.28 MB)
The document contains additional files.

4.
Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
Saša Brdnik, Vili Podgorelec, Boštjan Šumak, 2023, original scientific article

Abstract: This study observed the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics, comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques as well as certainty and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master's students presented with global feature explanations also reported high trust in this form of explanation. The highest measured explanation satisfaction was observed with the local feature explanation technique in both the bachelor's and master's groups, with master's students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students displayed consensus in their favored explanation techniques when evaluating both trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than local feature relevance explanation techniques. Trust and satisfaction were measured with the Trust in Automation questionnaire and the Explanation Satisfaction Scale, and correlations between itemized results were documented. Master's-level students self-reported a higher overall understanding of the explanations, higher overall satisfaction with them, and perceived the explanations as less harmful.
Keywords: explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction
Published in DKUM: 12.02.2024; Views: 368; Downloads: 53
.pdf Full text (3.24 MB)
The document contains additional files.
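The bar-graph style local feature explanation that the study above found most trusted can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's implementation: it uses a toy linear scorer over invented learning-analytics features, takes each feature's local relevance as its contribution to the score (weight × value), and renders the result as a text bar graph.

```python
# Hypothetical sketch of a local feature-relevance explanation.
# The feature names, weights, and attribution rule (weight * value for a
# linear scorer) are illustrative assumptions, not the paper's method.

features = {"quiz_score": 0.8, "forum_posts": 0.3, "video_hours": 0.5}
weights = {"quiz_score": 2.0, "forum_posts": -0.5, "video_hours": 1.2}

# Local relevance for this one learner: each feature's contribution
# to the linear prediction.
relevance = {name: weights[name] * value for name, value in features.items()}

# Render as a text "bar graph", largest absolute contribution first.
for name, score in sorted(relevance.items(), key=lambda kv: -abs(kv[1])):
    bar = "#" * int(abs(score) * 10)
    sign = "+" if score >= 0 else "-"
    print(f"{name:12s} {sign}{abs(score):.2f} {bar}")
```

Because the attribution is additive, the bars sum to the model's score for this learner, which is what makes the explanation "local": it describes one prediction rather than the model as a whole.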
