TY - JOUR
T1 - Integrating Explainable Artificial Intelligence in Extended Reality Environments
T2 - A Systematic Survey
AU - Maathuis, Clara
AU - Cidota, Marina Anca
AU - Datcu, Dragoș
AU - Marin, Letiția
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/1
Y1 - 2025/1
N2 - The integration of Artificial Intelligence (AI) within Extended Reality (XR) technologies has the potential to revolutionize user experiences by creating more immersive, interactive, and personalized environments. Nevertheless, the complexity and opacity of AI systems raise significant concerns regarding the transparency of the data handling, reasoning processes, and decision-making mechanisms inherent in these technologies. To address these challenges, the implementation of explainable AI (XAI) methods and techniques becomes imperative: they not only ensure compliance with prevailing ethical, social, and legal standards, norms, and principles, but also foster user trust and facilitate the broader adoption of AI solutions in XR applications. Despite growing interest from both the research and practitioner communities, there is a notable gap in the literature: no review of XAI methods specifically applied and tailored to XR systems. To this end, this research presents a systematic literature review that synthesizes current research on XAI approaches applied within the XR domain. Accordingly, it aims to identify prevailing trends, assess the effectiveness of various XAI techniques, and highlight potential avenues for future research. It thereby contributes to the foundational understanding necessary for developing transparent and trustworthy AI-based XR systems using XAI techniques, while enhancing the user experience and promoting responsible AI deployment.
KW - augmented reality
KW - explainable AI
KW - extended reality
KW - responsible AI
KW - trustworthy AI
KW - virtual reality
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=wos-integration-pure&SrcAuth=WosAPI&KeyUT=WOS:001404358800001&DestLinkType=FullRecord&DestApp=WOS_CPL
U2 - 10.3390/math13020290
DO - 10.3390/math13020290
M3 - Review article
AN - SCOPUS:85215821596
VL - 13
JO - Mathematics
JF - Mathematics
IS - 2
M1 - 290
ER -