
NIST Launches New Open Source Platform for AI Safety Assessment

The free, downloadable tool, called Dioptra, is designed to help artificial intelligence developers understand some of the unique data risks that come with AI models and help them “mitigate those risks while supporting innovation,” the NIST director said.

Nearly a year after the Biden Administration issued its executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the National Institute of Standards and Technology has made available a new open-source tool to help test the safety and security of AI and machine learning models.

WHY IT MATTERS
The new platform, called Dioptra, responds to the White House executive order, which directs NIST to take an active role in supporting algorithm testing.

“One of the weaknesses of an AI system is its core model,” the NIST researchers explained. “By exposing a model to large amounts of training data, it learns how to make decisions. But if an adversary poisons the training data with incorrect information—for example, by introducing data that might cause the model to misidentify a stop sign as a speed limit sign—the model can make incorrect, potentially disastrous decisions.”

The goal, according to NIST, is to help healthcare organizations and others better understand their AI software and assess how well it performs when faced with “a variety of hostile attacks.”

The open-source tool – which is free to download – could help healthcare providers, other businesses and government agencies evaluate and verify AI developers’ promises about the performance of their models.

As NIST explains: “Dioptra does this by allowing users to define what types of attacks will cause the model to perform less well and quantify the performance loss so users can understand how often and under what circumstances the system will fail.”
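The idea NIST describes – define an attack, apply it at varying strengths, and measure how accuracy degrades – can be illustrated with a toy experiment. The sketch below is not Dioptra's API; it is a minimal, self-contained illustration of the measurement pattern, using a nearest-centroid classifier on synthetic data and a simple evasion "attack" that nudges test inputs toward the wrong class.

```python
# Illustrative sketch only -- NOT Dioptra code. It demonstrates the kind of
# measurement Dioptra automates: pick an attack, sweep its strength, and
# quantify the resulting performance loss.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=500):
    # Toy dataset: two well-separated Gaussian classes in 2-D.
    X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
    y = np.repeat([0, 1], n)
    return X, y

def fit(X, y):
    # "Model": one centroid per class; predict by nearest centroid.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(cents, X):
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

X_tr, y_tr = make_data()
X_te, y_te = make_data()
cents = fit(X_tr, y_tr)

# "Attack": nudge each test point a distance eps toward the opposite
# class centroid, then record accuracy at each attack strength.
for eps in (0.0, 1.0, 2.0, 3.0, 4.0):
    direction = cents[1 - y_te] - X_te          # toward the wrong class
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    acc = (predict(cents, X_te + eps * direction) == y_te).mean()
    print(f"eps={eps:.1f}  accuracy={acc:.2f}")
```

Accuracy stays near 100% with no attack and collapses as the perturbation budget grows; the sweep shows exactly "under what circumstances the system will fail," which is the report Dioptra is designed to produce for real models and real attack libraries.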

THE BIGGER TREND
In addition to launching the Dioptra platform, NIST’s AI Safety Institute last week also released new draft guidance on Managing Misuse Risk for Dual-Use Foundation Models.

Such models – known as dual-use because they have the “potential to cause both benefit and harm” – can pose safety risks when misused or placed in the wrong hands. The proposed new guidance describes “seven key approaches to reducing the risk that models will be misused, along with recommendations on how to deploy them and how to be transparent about their deployment.”

NIST has also released three final AI safety documents, focusing on mitigating the risks of generative AI, reducing threats to data used to train AI systems, and a plan for global engagement on AI standards.

In addition to the executive order on AI, there have been recent efforts at the federal level to establish AI protections in health care and many other areas.

This includes reorganizing agencies within the Department of Health and Human Services to “focus policy and operations on technology, data, and AI.”

The White House has also issued new regulations on the use of AI in federal agencies, including the CDC and VA hospitals.

Meanwhile, NIST is also working on other AI and security initiatives, such as privacy-protecting guidance for AI-driven research and a recent major update to its landmark Cybersecurity Framework.

ON THE RECORD
“For all of its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see in traditional software,” NIST Director Laurie E. Locascio said in a statement. “These guidance documents and testing platforms will inform software creators of these unique risks and help them develop ways to mitigate them while still supporting innovation.”

“AI is the defining technology of our generation, so we are racing to keep up and help ensure the safe development and deployment of AI,” added US Secretary of Commerce Gina Raimondo. “This announcement demonstrates our commitment to providing AI developers, implementers, and users with the tools they need to safely harness the potential of AI while mitigating the associated risks. We have made great progress, but there is still much work ahead.”

Mike Miliard is executive editor of Healthcare IT News.
Email the author: [email protected]
Healthcare IT News is a publication of HIMSS.

The HIMSS Healthcare AI Forum is scheduled for September 5-6 in Boston. Learn more and register.
