Developing the foundations of responsible universal models.

What is a universal model?

In recent years, a successful new paradigm for building AI systems has emerged: train one model on a huge amount of data and adapt it to many applications. We call such a model a universal model.

Why do we care?

Universal models (e.g., GPT-3) have demonstrated impressive capabilities, but they can fail unexpectedly, harbor biases, and remain poorly understood. Nonetheless, they are being deployed at scale.

Our Mission

Foundations of Responsible Universal Models (FORUM) is an interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to make fundamental advances in the study, development, and deployment of universal models.

We are an interdisciplinary group of faculty, students, post-docs, and researchers spanning 10+ departments who have a shared interest in studying and building responsible universal models.

FORUM has the following thrusts:

  • Research. We will conduct interdisciplinary research that lays the foundations for building universal models that are more efficient, robust, interpretable, multimodal, and ethically sound.
  • Artifacts. We will train and release universal models, along with the associated code and tools, and ensure that the full training pipeline is reproducible and scientifically rigorous.
  • Community. We will invite universities, companies, and non-profits to convene and work together to develop a set of professional norms for how to responsibly train and deploy universal models.