Title:Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
Authors:Brdnik, Saša (Author)
Podgorelec, Vili (Author)
Šumak, Boštjan (Author)
Files:Brdnik-2023-Assessing_Perceived_Trust_and_Sati.pdf (3.24 MB)
MD5: BC82645F4AA7A1E80DF0AABDA635D227
 
URL https://www.mdpi.com/2079-9292/12/12/2594
 
Language:English
Work type:Article
Typology:1.01 - Original Scientific Article
Organization:FERI - Faculty of Electrical Engineering and Computer Science
Abstract:This study examined the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics, comparing two groups of STEM college students based on their Bologna study level. The techniques included established feature relevance techniques as well as certainty and comparison explanations. Overall, the students reported the highest trust in the local feature explanation presented as a bar graph. Master's students presented with global feature explanations also reported high trust in that form of explanation. The highest measured explanation satisfaction was observed for the local feature explanation technique in both the bachelor's and master's groups, with master's students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students agreed on their favored explanation techniques when evaluating trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than local feature relevance explanation techniques. Correlations between itemized results, measured with the Trust in Automation questionnaire and the Explanation Satisfaction Scale questionnaire, were documented. Master's-level students self-reported a higher overall understanding of the explanations, higher overall satisfaction with the explanations, and perceived the explanations as less harmful.
Keywords:explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction
Publication status:Published
Publication version:Version of Record
Submitted for review:08.05.2023
Article acceptance date:06.06.2023
Publication date:08.06.2023
Publisher:MDPI
Year of publishing:2023
Number of pages:pp. 1-23
Numbering:Vol. 12, No. 12, article no. 2594
PID:20.500.12556/DKUM-87040
UDC:004.8
ISSN on article:2079-9292
COBISS.SI-ID:155107331
DOI:10.3390/electronics12122594
Publication date in DKUM:12.02.2024
Views:368
Downloads:68
Categories:Misc.



Record is part of a journal

Title:Electronics
Shortened title:Electronics
Publisher:MDPI
ISSN:2079-9292
COBISS.SI-ID:523068953

Document is financed by a project

Funder:ARRS - Slovenian Research Agency
Project number:P2-0057
Name:Informacijski sistemi (Information Systems)

Licences

License:CC BY 4.0, Creative Commons Attribution 4.0 International
Link:http://creativecommons.org/licenses/by/4.0/
Description:This is the standard Creative Commons license that gives others maximum freedom to do what they want with the work as long as they credit the author.
Licensing start date:08.06.2023

Secondary language

Language:Slovenian
Keywords:umetna inteligenca, analitično učenje, zaupanje, zadovoljstvo

