Trustworthy AI: Ensuring Explainability and Acceptance
In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI, with a dedicated focus on achieving both explainability and acceptance. It addresses the evolving dynamics of AI and emphasizes the essential role of human involvement in shaping its trajectory.
A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric," tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance-based acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of the metric are illustrated in critical domains such as medical diagnostics. Another significant contribution is a trust-based security framework for 5G social networks, which enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.
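As a rough illustration of the distance-based idea behind such a metric, the sketch below maps the distance between an expert's assessments and an AI system's explanation scores to an acceptance value. The function name, the Euclidean distance, the 1/(1 + d) mapping, and the threshold are all illustrative assumptions, not the dissertation's exact formulation.

```python
import math

def acceptance_value(expert_scores, ai_scores, threshold=0.5):
    """Hypothetical distance-based acceptance measure (illustrative only).

    Computes the Euclidean distance between an expert's feature
    assessments and the AI explanation's scores, then maps it to a
    value in (0, 1]: smaller distance -> higher acceptance. The
    mapping 1 / (1 + d) is an assumed, illustrative choice.
    """
    if len(expert_scores) != len(ai_scores):
        raise ValueError("score vectors must have equal length")
    d = math.sqrt(sum((e - a) ** 2 for e, a in zip(expert_scores, ai_scores)))
    value = 1.0 / (1.0 + d)
    return value, value >= threshold

# Example: expert weighting of three diagnostic features vs. the AI's
value, accepted = acceptance_value([0.9, 0.8, 0.7], [0.85, 0.75, 0.7])
```

Under this toy mapping, identical score vectors yield an acceptance value of 1, and acceptance decays smoothly as the expert's and the system's assessments diverge.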
The study also introduces an artificial conscience-control module model built on the concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making, and thereby fostering greater societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes by exploring quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems and pushing the boundaries of traditional algorithms.
In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.
NSF (Grant No. 1547411)
USDA NIFA (Award No. 2017-67003-26057)
- Doctor of Philosophy
- Computer Science
- Advisor/Supervisor/Committee Chair: Dr. Arjan Durresi
- Additional Committee Member 2: Dr. Mihran Tuceryan
- Additional Committee Member 3: Dr. Murat Dundar
- Additional Committee Member 4: Dr. Qin Hu
- Fairness, accountability, transparency, trust and ethics of computer systems
- Mixed initiative and human-in-the-loop
- Artificial life and complex adaptive systems
- Human-computer interaction
- Autonomous agents and multiagent systems
- Planning and decision making
- Artificial intelligence not elsewhere classified
- Collaborative and social computing
- Modelling and simulation