| Title: | Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics |
|---|
| Authors: | Brdnik, Saša (Author), Podgorelec, Vili (Author), Šumak, Boštjan (Author) |
| Files: | Brdnik-2023-Assessing_Perceived_Trust_and_Sati.pdf (3.24 MB), MD5: BC82645F4AA7A1E80DF0AABDA635D227, https://www.mdpi.com/2079-9292/12/12/2594 |
|---|
| Language: | English |
|---|
| Work type: | Article |
|---|
| Typology: | 1.01 - Original Scientific Article |
|---|
| Organization: | FERI - Faculty of Electrical Engineering and Computer Science |
|---|
| Abstract: | This study observed the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics, comparing two groups of STEM college students based on their Bologna study level and using established feature relevance, certainty, and comparison explanations. Overall, the students reported the highest trust in the local feature explanation presented as a bar graph. Master's students presented with global feature explanations also reported high trust in that form of explanation. The highest measured explanation satisfaction was observed for the local feature explanation technique in both the bachelor's and master's groups, with master's students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students displayed consensus in their favored explanation techniques when evaluating trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than local feature relevance explanation techniques. Trust and satisfaction were measured with the Trust in Automation questionnaire and the Explanation Satisfaction Scale, and the correlations between itemized results were documented. Master's students self-reported a higher overall understanding of the explanations and higher overall satisfaction with them, and perceived the explanations as less harmful. |
|---|
| Keywords: | explainable artificial intelligence, learning analytics, XAI techniques, trust, explanation satisfaction |
|---|
| Publication status: | Published |
|---|
| Publication version: | Version of Record |
|---|
| Submitted for review: | 08.05.2023 |
|---|
| Article acceptance date: | 06.06.2023 |
|---|
| Publication date: | 08.06.2023 |
|---|
| Publisher: | MDPI |
|---|
| Year of publishing: | 2023 |
|---|
| Number of pages: | pp. 1-23 |
|---|
| Numbering: | Vol. 12, Iss. 12, Article 2594 |
|---|
| PID: | 20.500.12556/DKUM-87040  |
|---|
| UDC: | 004.8 |
|---|
| ISSN on article: | 2079-9292 |
|---|
| COBISS.SI-ID: | 155107331  |
|---|
| DOI: | 10.3390/electronics12122594  |
|---|
| Publication date in DKUM: | 12.02.2024 |
|---|
| Categories: | Misc. |
|---|
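The abstract above names a bar-graph local feature relevance explanation as the form students trusted most, but a catalog record naturally contains no code. The sketch below is purely illustrative and not taken from the article: it renders a local feature relevance explanation for a single prediction as a bar graph using SHAP values. The dataset, feature names, and model are hypothetical stand-ins for a learning analytics setting.

```python
# Illustrative sketch only: approximates one explanation form named in the
# abstract (local feature relevance shown as a bar graph) via SHAP values.
# Features, data, and model below are hypothetical stand-ins.
import numpy as np
import matplotlib.pyplot as plt
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["attendance", "quiz_avg", "forum_posts", "assignment_avg"]
X = rng.random((200, 4))                                   # 200 synthetic students
y = X @ np.array([0.4, 0.3, 0.1, 0.2]) + rng.normal(0, 0.05, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature contributions for an individual prediction,
# i.e., a local explanation for one student.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])[0]

# Bar graph of local feature relevance, the explanation form the abstract
# reports as most trusted.
order = np.argsort(np.abs(shap_values))
plt.barh(np.array(feature_names)[order], shap_values[order])
plt.xlabel("Contribution to predicted score")
plt.title("Local feature relevance for one student (illustrative)")
plt.tight_layout()
plt.show()
```

A global variant of the same idea, corresponding to the global feature importance explanation the abstract also mentions, would average the absolute SHAP values over all students before plotting.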