Search the digital library catalog

Results 1-6 of 6
1.
Explaining 3D semantic segmentation through generative AI-based counterfactuals
Dzemail Rozajac, Niko Lukač, Stefan Schweng, Christoph Gollob, Arne Nothdurft, Karl Stampfer, Javier Del Ser, Andreas Holzinger, 2025, original scientific article

Abstract: Interpreting the predictions of deep learning models on 3D point cloud data is an important challenge for safety-critical domains such as autonomous driving, robotics and geospatial analysis. Existing counterfactual explainability methods often struggle with the sparsity and unordered nature of 3D point clouds. To address this, we introduce a generative framework for counterfactual explanations in 3D semantic segmentation models. Our approach leverages autoencoder-based latent representations, combined with UMAP embeddings and Delaunay triangulation, to construct a graph that enables geodesic path search between semantic classes. Candidate counterfactuals are generated by interpolating latent vectors along these paths and decoding into plausible point clouds, while semantic plausibility is guided by the predictions of a 3D semantic segmentation model. We evaluate the framework on ShapeNet objects, demonstrating that semantically related classes yield realistic counterfactuals with minimal geometric change, whereas unrelated classes expose sharp decision boundaries and reduced plausibility. Quantitative results confirm that the method balances defined interpretability metrics, producing counterfactuals that are both interpretable and geometrically consistent. Overall, our work demonstrates that generative counterfactuals in latent space provide a promising alternative to input-level perturbations.
Keywords: 3D point cloud, explainable artificial intelligence, counterfactual analysis, generative AI
Published in DKUM: 14.11.2025; Views: 0; Downloads: 6
.pdf Full text (27.14 MB)
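
The pipeline in the abstract above (latent representations, a 2D embedding triangulated with Delaunay, geodesic path search between classes, and interpolation of latent vectors along the path) can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: the embeddings and latent vectors are random stand-ins, and a trained autoencoder would be needed to decode the interpolated latents into point clouds.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

# Toy 2D embeddings standing in for UMAP projections of latent vectors.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 2))

# Build a graph from the Delaunay triangulation, edges weighted by distance.
tri = Delaunay(emb)
n = len(emb)
w = lil_matrix((n, n))
for simplex in tri.simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        d = np.linalg.norm(emb[a] - emb[b])
        w[a, b] = w[b, a] = d

# Geodesic (shortest) path between a source and a target sample,
# standing in for two samples from different semantic classes.
_, pred = shortest_path(w.tocsr(), directed=False, return_predecessors=True)
src, dst = 0, 49
path = [dst]
while path[-1] != src:
    path.append(pred[src, path[-1]])
path.reverse()

# Candidate counterfactuals: latent vectors interpolated along the path.
# In the actual method these would be decoded into point clouds and
# filtered by a 3D semantic segmentation model for plausibility.
latents = rng.normal(size=(50, 8))  # stand-in for autoencoder latents
candidates = [(latents[a] + latents[b]) / 2 for a, b in zip(path, path[1:])]
```

The key design point is that interpolation happens along the graph geodesic rather than along a straight line in latent space, which keeps intermediate candidates close to the data manifold.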

2.
Toward explainable time-series numerical association rule mining: a case study in smart agriculture
Iztok Fister, Sancho Salcedo-Sanz, Enrique Alexandre-Cortizo, Damijan Novak, Iztok Fister, Vili Podgorelec, Mario Gorenjak, 2025, original scientific article

Abstract: This paper defines time-series numerical association rule mining in smart-agriculture applications from an explainable-AI perspective. Two novel explainable methods are presented, along with a newly developed algorithm for time-series numerical association rule mining. Unlike previous approaches, such as fixed interval time-series numerical association, the proposed methods offer enhanced interpretability and an improved data science pipeline by incorporating explainability directly into the software library. The newly developed xNiaARMTS methods are then evaluated through a series of experiments, using real datasets produced from sensors in a smart-agriculture domain. The results obtained using explainable methods within numerical association rule mining in smart-agriculture applications are very positive.
Keywords: association rule mining, explainable artificial intelligence, XAI, numerical association rule mining, optimization algorithms
Published in DKUM: 27.08.2025; Views: 0; Downloads: 5
.pdf Full text (329.69 KB)
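
The xNiaARMTS library itself is not shown here, but the core idea of numerical association rule mining over sensor data can be sketched with the standard support and confidence metrics. Everything below (the records, attribute names, and interval bounds) is a made-up toy example of a rule such as "temperature in [20, 25] => humidity in [40, 60]":

```python
# Toy sensor records from a hypothetical smart-agriculture time window.
records = [
    {"temperature": 21.5, "humidity": 48},
    {"temperature": 24.0, "humidity": 55},
    {"temperature": 30.2, "humidity": 35},
    {"temperature": 22.7, "humidity": 70},
]

def in_interval(value, low, high):
    """Numerical attributes match a rule via interval membership."""
    return low <= value <= high

# Antecedent: temperature in [20, 25]; consequent: humidity in [40, 60].
ante = [in_interval(r["temperature"], 20, 25) for r in records]
cons = [in_interval(r["humidity"], 40, 60) for r in records]

both = sum(a and c for a, c in zip(ante, cons))
support = both / len(records)              # fraction matching the whole rule
confidence = both / sum(ante) if any(ante) else 0.0  # conditional match rate
```

On this toy window two of four records satisfy the full rule (support 0.5), and two of the three records matching the antecedent also match the consequent (confidence 2/3). The time-series variants discussed in the paper additionally attach time intervals to rules, which this sketch omits.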

3.
4.
An efficient iterative approach to explainable feature learning
Dino Vlahek, Domen Mongus, 2023, original scientific article

Keywords: data classification, explainable artificial intelligence, feature learning, knowledge discovery
Published in DKUM: 13.06.2024; Views: 129; Downloads: 30
.pdf Full text (1.95 MB)

5.
Agile Machine Learning Model Development Using Data Canyons in Medicine: A Step towards Explainable Artificial Intelligence and Flexible Expert-Based Model Improvement
Bojan Žlahtič, Jernej Završnik, Helena Blažun Vošner, Peter Kokol, David Šuran, Tadej Završnik, 2023, original scientific article

Abstract: Over the past few decades, machine learning has emerged as a valuable tool in the field of medicine, driven by the accumulation of vast amounts of medical data and the imperative to harness this data for the betterment of humanity. However, many of the prevailing machine learning algorithms in use today are characterized as black-box models, lacking transparency in their decision-making processes and are often devoid of clear visualization capabilities. The transparency of these machine learning models impedes medical experts from effectively leveraging them due to the high-stakes nature of their decisions. Consequently, the need for explainable artificial intelligence (XAI) that aims to address the demand for transparency in the decision-making mechanisms of black-box algorithms has arisen. Alternatively, employing white-box algorithms can empower medical experts by allowing them to contribute their knowledge to the decision-making process and obtain a clear and transparent output. This approach offers an opportunity to personalize machine learning models through an agile process. A novel white-box machine learning algorithm known as Data canyons was employed as a transparent and robust foundation for the proposed solution. By providing medical experts with a web framework where their expertise is transferred to a machine learning model and enabling the utilization of this process in an agile manner, a symbiotic relationship is fostered between the domains of medical expertise and machine learning. The flexibility to manipulate the output machine learning model and visually validate it, even without expertise in machine learning, establishes a crucial link between these two expert domains.
Keywords: XAI, explainable artificial intelligence, data canyons, machine learning, transparency, agile development, white-box model
Published in DKUM: 14.03.2024; Views: 299; Downloads: 41
.pdf Full text (5.28 MB)

6.
Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
Saša Brdnik, Vili Podgorelec, Boštjan Šumak, 2023, original scientific article

Abstract: This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master's students presented with global feature explanations also reported high trust in this form of explanation. The highest measured explanation satisfaction was observed with the local feature explanation technique in the group of bachelor's and master's students, with master's students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students displayed consensus in favored explanation techniques when evaluating trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than were local feature relevance explanation techniques. The correlation between itemized results was documented and measured with the Trust in Automation questionnaire and Explanation Satisfaction Scale questionnaire. Master's-level students self-reported an overall higher understanding of the explanations and higher overall satisfaction with explanations and perceived the explanations as less harmful.
Keywords: explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction
Published in DKUM: 12.02.2024; Views: 368; Downloads: 68
.pdf Full text (3.24 MB)
