Trustworthy AI in the Era of Foundation Models

1 Arizona State University & NVIDIA  2 IBM Research
CVPR 2023 Tutorial


While machine learning (ML) models, especially foundation models, have achieved great success in many perception applications, every coin has two sides, and so does AI. Concerns have been raised about their potential security, robustness, privacy, and transparency issues when they are applied to real-world applications. Irresponsibly applying foundation models to mission-critical and human-centric domains such as healthcare, education, and law can lead to serious misuse, inequity, negative economic and environmental impacts, and legal and ethical concerns. For example, machine learning models are often regarded as “black boxes” and can produce unreliable, unpredictable, and unexplainable outcomes, especially under domain shifts or maliciously crafted attacks, challenging the reliability of safety-critical applications such as autonomous driving; Stable Diffusion may generate NSFW or privacy-violating content.

Unlike conventional tutorials that focus on either the positive or the negative impacts of AI, this tutorial aims to provide a holistic and complementary overview of trustworthiness issues, including the security, robustness, privacy, and societal implications of these models, so that researchers and developers can gain a fresh perspective on the induced impacts and responsibilities and learn about potential solutions. The tutorial is designed as a short lecture that helps researchers and students become aware of the misuse and potential risks of existing AI techniques and, more importantly, motivates them to rethink trustworthiness in their own research. Many case studies will be drawn from computer vision applications. The ultimate goal is to spark more discussion, effort, and action toward addressing the two sides of the same coin. The contents of this tutorial will provide sufficient background for participants to understand the motivation, research progress, known issues, and ongoing challenges in trustworthy perception systems, along with pointers to open-source libraries and surveys.

Tutorial Outline

The upside of the coin
  • Recent advances in foundation models
  • A brief introduction to deep learning and notable applications in computer vision
  • AI lifecycle and industrial use cases
The downside of the coin
  • Examples of misuse of AI
  • Examples of using AI for malicious purposes
  • AI ethics and the induced costs
  • Probing cross-attention

Robustness
  • A holistic view of the robustness problem in perception systems
  • Potential solutions from different perspectives, including training algorithms, architectures, and foundation models
Adversarial Environments
  • A holistic view of vulnerabilities in perception systems
  • Training-time and test-time vulnerabilities
  • Promising solutions with recent techniques, such as diffusion models
  • Repurposing security vulnerabilities for good
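To make the test-time vulnerabilities above concrete, here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), on a toy linear classifier. The model, weights, input, and step size below are illustrative assumptions for exposition, not material from the tutorial.

```python
# Toy FGSM (Fast Gradient Sign Method) sketch on a linear classifier.
# All weights, inputs, and eps are illustrative, not tutorial material.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, b, x, y):
    # Binary cross-entropy for a single example with label y in {0, 1}.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, b, x, y, eps):
    # For the logistic loss, the gradient w.r.t. the input x is (p - y) * w;
    # FGSM shifts each input coordinate by eps in the sign of that gradient.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])  # clean input, correctly classified as y = 1
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.3)
print(logistic_loss(w, b, x, y))       # loss on the clean input
print(logistic_loss(w, b, x_adv, y))   # strictly larger loss after the attack
```

The same one-step recipe, applied to a deep network's input gradient instead of a linear model's, is the standard starting point for the stronger iterative attacks and diffusion-based defenses discussed in this part of the tutorial.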
Privacy
  • A holistic view of the privacy problem in perception systems
  • Differential privacy at scale
  • Federated learning
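As a toy illustration of the differential privacy topic above, the sketch below releases a mean with the Gaussian mechanism: clip each record to bound its influence, then add noise calibrated to that bound. The clip bound and noise multiplier are illustrative assumptions, not values from the tutorial.

```python
# Toy Gaussian-mechanism sketch for a differentially private mean.
# The clip bound and noise multiplier are illustrative choices only.
import random

def dp_mean(values, clip=1.0, noise_multiplier=1.0, seed=0):
    # 1. Clip each contribution to [-clip, clip] to bound per-record influence.
    clipped = [max(-clip, min(clip, v)) for v in values]
    # 2. The mean of n clipped values changes by at most 2 * clip / n
    #    when one record is replaced (its sensitivity).
    n = len(clipped)
    sensitivity = 2.0 * clip / n
    # 3. Add Gaussian noise scaled to that sensitivity.
    rng = random.Random(seed)
    noise = rng.gauss(0.0, noise_multiplier * sensitivity)
    return sum(clipped) / n + noise

scores = [0.9, 0.7, -0.2, 5.0, 0.1]  # 5.0 is an outlier that clipping tames
print(dp_mean(scores))
```

Note how clipping protects outliers before any noise is added: with `noise_multiplier=0.0`, `dp_mean([5.0] * 5)` returns 1.0, since every record is first clipped to the bound. Production systems (e.g., DP-SGD) apply the same clip-then-noise pattern per gradient rather than per release.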
Other issues
Conclusion and Q&A