Purdue University Graduate School

Trustworthy and Causal Artificial Intelligence in Environmental Decision Making

Thesis posted on 2024-06-03, 18:32, authored by Suleyman Uslu

We present a framework for Trustworthy Artificial Intelligence (TAI) that dynamically assesses trust and scrutinizes past decision-making in order to model both individual and community behavior. The behavior model incorporates two proposed concepts, trust pressure and trust sensitivity, laying the foundation for predicting future decision-making in terms of community behavior, consensus level, and decision-making duration. Our framework involves the development and mathematical modeling of trust pressure and trust sensitivity, drawing on social validation theory in the context of environmental decision-making. To substantiate our approach, we conduct experiments encompassing (i) dynamic trust sensitivity, to reveal the impact of actors learning between decision-making rounds, (ii) multi-level trust measurements, to capture disruptive ratings, and (iii) different distributions of trust sensitivity, to emphasize the significance of individual progress as well as overall progress.
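The interplay of trust pressure and trust sensitivity described above can be sketched as a simple update rule. This is an illustrative assumption, not the dissertation's actual equations: here pressure is taken as the gap between an actor's proposal and the community consensus (social validation), and sensitivity as the fraction of that gap the actor closes in one round.

```python
# Hypothetical sketch of trust pressure and trust sensitivity; the names
# and the linear update rule are illustrative assumptions.

def trust_pressure(own_value: float, community_mean: float) -> float:
    """Pressure grows with the gap between an actor's proposal and the
    community consensus (social validation)."""
    return community_mean - own_value

def updated_value(own_value: float, community_mean: float,
                  sensitivity: float) -> float:
    """An actor moves toward consensus in proportion to its trust
    sensitivity (0 = ignores peers, 1 = adopts the consensus outright)."""
    return own_value + sensitivity * trust_pressure(own_value, community_mean)

# Example: actors with different sensitivities converge at different rates,
# which in turn affects consensus level and decision-making duration.
values = [0.2, 0.5, 0.9]
sensitivities = [0.1, 0.5, 0.8]
for _ in range(3):  # three discussion rounds
    mean = sum(values) / len(values)
    values = [updated_value(v, mean, s)
              for v, s in zip(values, sensitivities)]
```

Under this sketch, a community of highly sensitive actors reaches consensus in fewer rounds, which is the kind of behavior the duration experiments above examine.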

Additionally, we introduce two TAI metrics, trustworthy acceptance and trustworthy fairness, designed to evaluate the acceptance and the fairness of decisions proposed by AI or humans. The dynamic trust management within the framework allows these metrics to discern support for decisions among individuals with varying levels of trust. We propose both the metrics and their measurement methodology as contributions to the standardization of trustworthy AI.
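One way to read "discern support among individuals with varying levels of trust" is a trust-weighted vote. The following sketch is an assumption for illustration, not the dissertation's measurement methodology: each actor's accept/reject vote is weighted by that actor's current trust level.

```python
# Hypothetical trust-weighted acceptance measure; the weighting scheme
# is an illustrative assumption.

def trustworthy_acceptance(votes: list[bool], trusts: list[float]) -> float:
    """Fraction of trust-weighted support for a proposed decision:
    support from highly trusted actors counts more than support from
    actors whose trust has eroded."""
    total = sum(trusts)
    support = sum(t for vote, t in zip(votes, trusts) if vote)
    return support / total if total else 0.0

# Example: two of three actors accept, but the lone rejector is highly
# trusted, so weighted acceptance (0.55) is lower than the raw 2/3.
score = trustworthy_acceptance([True, True, False], [0.9, 0.2, 0.9])
```

A plain majority count would hide exactly the distinction this metric is meant to surface: whether a decision is backed by the trusted core of the community or only by its fringe.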

Furthermore, our trustability metric combines reliability, resilience, and trust to evaluate systems with multiple components. We present experiments showcasing the effects of different trust declines on the overall trustability of the system. Notably, we depict the trade-off between trustability and cost, which yields a net utility that facilitates decision-making in systems and cloud security. This represents a pivotal step toward an artificial control model involving multiple negotiating agents.
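The trustability-versus-cost trade-off can be sketched with two small functions. The product aggregation and the linear utility below are assumptions chosen for illustration, not the dissertation's actual formulas:

```python
# Illustrative trustability and net-utility sketch; the aggregation and
# utility forms are assumptions, not the dissertation's definitions.

def trustability(reliability: float, resilience: float, trust: float) -> float:
    """Combine three component scores (each in [0, 1]) into a single
    trustability score; any weak component drags the product down."""
    return reliability * resilience * trust

def net_utility(trustability_score: float, benefit: float, cost: float) -> float:
    """Benefit scaled by trustability, minus the cost of achieving it."""
    return benefit * trustability_score - cost

# Example: a decline in one component (e.g. trust after an incident)
# lowers net utility even when the other components are unchanged.
healthy = net_utility(trustability(0.9, 0.9, 0.9), benefit=10.0, cost=5.0)
degraded = net_utility(trustability(0.9, 0.9, 0.4), benefit=10.0, cost=5.0)
```

Comparing net utility across candidate configurations is what makes the metric actionable for systems and cloud security: spending more to raise trustability pays off only while the utility gain exceeds the added cost.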

Lastly, the dynamic management of trust and trustworthy acceptance, particularly under varying criteria, serves as a foundation for causal AI by providing inference methods. We outline a mechanism and present an experiment on human-driven causal inference in which participant discussions act as interventions, enabling counterfactual evaluations once actor and community behavior are modeled.

Funding

CICI: Secure Data Architecture: Collaborative Research: Assured Mission Delivery Network Framework for Secure Scientific Collaboration

Directorate for Computer & Information Science & Engineering


INFEWS/T2: COLLABORATIVE: IFEWCOORDNET - A SECURE DECISION SUPPORT SYSTEM FOR COORDINATION OF ADAPTATION PLANNING AMONG FEW ACTORS IN THE PACIFIC NORTHWEST

National Institute of Food and Agriculture


History

Degree Type

  • Doctor of Philosophy

Department

  • Computer Science

Campus location

  • Indianapolis

Advisor/Supervisor/Committee Chair

Arjan Durresi

Additional Committee Member 2

Mihran Tuceryan

Additional Committee Member 3

Murat Dundar

Additional Committee Member 4

Qin Hu