Keynote Talk in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
Elham Tabassi: Path to trustworthy and responsible AI
With AI already changing the way society addresses economic and national security challenges and opportunities, AI technologies must be developed and used in a trustworthy and responsible manner. Experts in industry, academia, and government are still assessing how best to measure and manage the risks and impacts of AI systems. At NIST, we believe that working together to develop and advance the science of AI safety will harness the power of AI to serve humanity. We have been developing, and will continue to develop, tests and to facilitate the development of standards: measurement science and standards that will allow industry, academia, and government to better map, measure, and manage AI risks. This work is a necessary precursor to any compliance or conformity assessment, whether voluntary or required. Delivering these needed measurements, standards, and other tools is a primary focus of NIST's portfolio of AI efforts. This talk provides an overview of the NIST Trustworthy and Responsible AI program.