An Explainable Artificial Intelligence Model for Enhancing Trust and Transparency in Autonomous Decision Systems

  • Zaheer Abbas, Sarhad University of Sciences and Technology, Peshawar
Keywords: Explainable Artificial Intelligence, Autonomous Decision Systems, Trust, Transparency, Structural Equation Modeling, Human–AI Interaction

Abstract

The rapid deployment of autonomous decision systems in healthcare, finance, transportation, and public governance has intensified concerns regarding algorithmic opacity, accountability, and user trust. While high-performance black-box models such as deep neural networks demonstrate superior predictive capability, their lack of interpretability undermines stakeholder confidence and regulatory compliance. This research develops and empirically validates an Explainable Artificial Intelligence (XAI) model designed to enhance trust and transparency in autonomous decision systems. The study integrates technical explainability mechanisms, including SHAP-based feature attribution and rule extraction, with cognitive trust theory and transparency perception constructs. A quantitative research design using Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to test relationships among explainability quality, perceived transparency, perceived fairness, cognitive trust, affective trust, and behavioral intention to adopt autonomous systems. Data were collected from 412 professionals interacting with AI-enabled decision platforms in the healthcare and financial technology sectors. Measurement model assessment confirmed reliability and convergent validity, with composite reliability values above 0.85 and AVE above 0.60. Structural model analysis indicated that explainability quality significantly predicts perceived transparency (β = 0.62, p < 0.001) and perceived fairness (β = 0.48, p < 0.001). Transparency and fairness jointly influence cognitive trust (β = 0.55 and β = 0.29, respectively), and cognitive trust strongly predicts adoption intention (β = 0.67). The findings confirm that explainable AI mechanisms enhance trust indirectly through transparency and fairness perceptions.
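Under standard path-tracing rules for structural models, the indirect effects implied by these coefficients are obtained by multiplying coefficients along each mediating path and summing across paths. A minimal sketch, assuming no unreported direct paths between explainability and the downstream constructs:

```python
# Standardized path coefficients reported in the structural model.
EXPL_TO_TRANSP = 0.62   # explainability quality -> perceived transparency
EXPL_TO_FAIR = 0.48     # explainability quality -> perceived fairness
TRANSP_TO_TRUST = 0.55  # perceived transparency -> cognitive trust
FAIR_TO_TRUST = 0.29    # perceived fairness -> cognitive trust
TRUST_TO_INTENT = 0.67  # cognitive trust -> adoption intention

# Indirect effect of explainability on cognitive trust: sum over the
# two mediating paths (via transparency and via fairness).
indirect_on_trust = (EXPL_TO_TRANSP * TRANSP_TO_TRUST
                     + EXPL_TO_FAIR * FAIR_TO_TRUST)

# Indirect effect of explainability on adoption intention, carried
# through cognitive trust.
indirect_on_intent = indirect_on_trust * TRUST_TO_INTENT

print(f"indirect effect on cognitive trust: {indirect_on_trust:.3f}")   # 0.480
print(f"indirect effect on adoption intent: {indirect_on_intent:.3f}")  # 0.322
```

The sizable products (0.48 on trust, 0.32 on intention) are consistent with the abstract's claim that explainability operates on trust indirectly through transparency and fairness perceptions.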
The study contributes a validated interdisciplinary framework bridging machine learning interpretability and trust theory, offering practical guidelines for responsible AI deployment and regulatory compliance.
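To illustrate the SHAP-based feature attribution component, the sketch below computes exact Shapley values for a small model by brute-force enumeration of feature coalitions, replacing absent features with a background baseline. This is an illustrative stand-in for a SHAP explainer, not the study's implementation; the linear scoring model, its weights, and the baseline are hypothetical.

```python
import itertools
import math

import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x) relative to a baseline.

    Features outside a coalition are replaced by their baseline values.
    Brute-force over all 2^n coalitions, so feasible only for small n.
    """
    n = len(x)

    def value(coalition):
        z = baseline.copy()
        idx = list(coalition)
        z[idx] = x[idx]
        return f(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Hypothetical linear scoring model; weights are illustrative only.
weights = np.array([0.8, -0.5, 0.3])
model = lambda z: float(weights @ z)

x = np.array([1.0, 2.0, 0.5])          # instance being explained
baseline = np.array([0.0, 1.0, 0.0])   # background values (e.g., feature means)
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(phi.sum() - (model(x) - model(baseline))) < 1e-9
```

For a linear model the attributions reduce to `weights * (x - baseline)`, which makes the brute-force result easy to verify; in practice the `shap` library approximates these values efficiently for black-box models.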

Published
2026-03-22