Explainable AI

Definition

Explainable AI (XAI) (also called Transparent AI) refers to

new machine learning systems that [will] have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.[1]

Overview

"[E]xplainable AI has been an active area of research in recent years. As described by experts at DARPA, XAI research aims to create AI applications that can explain their actions and decisions to human users to improve trust and collaboration between humans and AI systems. Such explanations could help people identify and correct errors that AI systems make when generalizing from training data. This is of particular concern in high-stakes applications, such as classifying disease in medical images and classifying combatants and civilians in military surveillance images."[2]

"Federal agencies and the White House have been working to define and guide federal development and use of understandable and explainable AI systems. In August 2020, NIST released a draft publication for public comment on 'Four Principles of Explainable Artificial Intelligence' that presents principles, categories, and theories of XAI.58 In December 2020, Executive Order 13960 included, as a principle guiding the use of AI in federal government, that AI should be understandable, specifically that agencies shall "ensure that the operations and outcomes of their AI applications are sufficiently understandable by subject matter experts, users, and others."

References

  1. DARPA, Explainable Artificial Intelligence (XAI) (full-text).
  2. Alejandro Barredo Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI," 58 Information Fusion 82-115 (June 2020) (full-text).