Abstract: As AI systems, often implemented as black-box ML models, are increasingly deployed in high-stakes domains, a new research focus on trustworthy and responsible AI (TRAI) has emerged over the past several years and attracted interest from academia, industry, and government agencies. This tutorial covers recent advances in TRAI in three subareas: fairness, interpretability, and transparency. We discuss not only foundational and frontier research within each subarea, but also the interactions among them. Various real-life applications are covered, such as autonomous driving, medical diagnosis, and judicial systems. The tutorial also places special emphasis on tooling and processes that help ML research and production teams develop and deploy trustworthy and responsible systems.
This tutorial is of interest to a wide range of audiences, who will benefit from it in different ways. TRAI researchers will learn about the latest advances in the area, in particular work at the intersection of its subareas. Practitioners will learn effective methods for model debugging and auditing. Policy and decision makers will learn about the algorithmic realizations of abstract concepts (e.g., fairness and interpretability criteria), enabling them to engage in technical conversations. A general understanding of machine learning is recommended, but familiarity with TRAI topics is not necessary.