<p dir="ltr">AI technologies are increasingly embedded in various aspects of our daily lives, assisting us in making decisions ranging from financial investments to criminal justice. Recently, there is a growing recognition of the pivotal role of Explainable AI (XAI) in facilitating effective AI-assisted decision making. XAI, in essence, ensures that AI decision aids not only provide recommendations but do so in a manner that is interpretable, transparent, and dependable. </p><p dir="ltr">While many technical XAI methods have been developed, how to evaluate established AI explanation methods in AI-assisted decision making remains less explored. As AI explanations are ultimately used by human decision makers, this dissertation attempts to evaluate whether AI explanation methods enhance decision-making from a human-centered perspective---focusing on how people use and process AI explanations, and how these insights can inform the design of more effective, user-aligned explanation systems. </p><p dir="ltr">This dissertation begins with a human-subject experimental study that examines whether established conventional AI explanations are helpful in AI-assisted decision making, demonstrating how the effectiveness of existing AI explanations varies across decision-making tasks where people have varying levels of domain expertise in, and when applied to AI models of differing levels of complexity. </p><p dir="ltr">The subsequent experimental study adopts a dynamic viewpoint and explores how humans' usage of AI is shaped by changes in explanations due to AI model updates. The findings suggest the necessity of providing additional information when unexpected changes occur in explanations after the underlying AI model gets updated. </p><p dir="ltr">Given today's fast-evolving AI, a third study investigates how humans process the novel form of AI explanations introduced by the state-of-the-art AI models nowadays---Large Language Models (LLMs). The study underscores the importance of quantifying the quality of LLM explanations and of determining the optimal timing for presenting LLM explanations to users.</p><p dir="ltr">AI models and their explanations are ultimately communicated to users through the user interface. The design of this explanation interface largely shapes how effectively human users interpret and engage with the information presented by XAI systems. To further refine the design of user-centered explanation interfaces, it is essential to account for practical constraints in emerging real-world displays, such as ultra-small devices such where AI assistants today are often accessed. The fourth study address the need to make the most of the limited screen space while minimizing user's cognitive load by making the glanceability of explanation a key factor. The study specifically tackled the challenge of making LLM-generated explanations more glanceable, as the verbosity of natural language makes it particularly difficult to deliver succinct explanations on ultra-small interfaces.</p><p dir="ltr">In conclusion, studies presented in this dissertation empirically explores how to evaluate and design XAI systems from a user-centered perspective to support human decision making, by considering the contextual relevance in fast evolving AI models and diverse user interfaces. The dissertation aims to provide practical implications on making AI explanations collaborative, adaptable, and glanceable.</p>