As part of a new consortium, researchers at the University of Notre Dame will help establish the advanced measurement techniques required to identify the risks associated with current AI systems and to develop new systems that are safer and more trustworthy.
The consortium – called the Artificial Intelligence Safety Institute Consortium (AISIC) – was formed by the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce that develops standards for emerging technologies.
The consortium was formed in response to a presidential executive order issued in October.
The consortium includes more than 200 member companies and organizations on the front lines of developing and using AI systems, as well as the civil society and academic teams building the foundational understanding of how AI can and will transform our society.
These entities represent the nation’s largest companies and its innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in the use of AI today. The consortium also includes state and local governments, as well as nonprofits, and it will work with organizations from like-minded nations that have a major role to play in setting interoperable and effective safety standards around the world. The full list of consortium participants is available on the NIST website.
KEY QUOTES:
“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
- White House Statement
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do. Through President Biden’s landmark executive order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
- U.S. Secretary of Commerce Gina Raimondo
“We are excited to join AISIC at a pivotal time for AI and for our society. We know that to manage AI risks, we first have to measure and understand them. It is a grand challenge that neither technologists nor government agencies can tackle alone. Through this new consortium, Notre Dame researchers will have a place at the table where they can live out Notre Dame’s mission to seek discoveries that yield benefits for the common good.”
- Jeffrey F. Rhoads, vice president for research and professor of aerospace and mechanical engineering
“A special focus for the consortium will be dual-use foundation models, the advanced AI systems used for a wide variety of purposes. Improving evaluation and measurement techniques will help researchers and practitioners gain a deeper understanding of AI capabilities endowed in a system, including risks and benefits. They will then be able to offer guidance for the industry leaders working to create AI that is safe, secure and trustworthy. It is a moment for human-machine teaming.”
- Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering, director of the Lucy Family Institute for Data and Society, and recently elected fellow of the Association for the Advancement of Artificial Intelligence