Japan: AISI Highlights Differences in AI Risk Management Approaches by NIST and Japan

The AI Safety Institute (AISI) recently shared a comparison between two major guidelines for managing AI risks: the U.S. National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) and Japan’s AI Guidelines for Business (AI GfB).

Here are the key takeaways:

  • AI System Vulnerabilities:
    Both frameworks treat adversarial attacks as a significant risk, but NIST emphasizes "red teaming" (simulated attacks) as a proactive defense, while Japan’s AI GfB focuses on monitoring AI systems after deployment and suggests rewarding those who report issues.

  • Shutting Down AI Systems:
    When it comes to decisions about halting or shutting down AI systems, NIST encourages drawing on expertise from a range of fields, a point Japan’s guidelines do not highlight.

  • Pre-Trained AI Models:
    NIST stresses checking pre-trained models for privacy and bias risks and keeping track of risk controls when using third-party tools. Japan goes further, recommending additional security checks and measures to ensure the models are reliable.

  • Model Drift (Changes Over Time):
    Both frameworks recognize this risk, but NIST specifically advises regular monitoring to detect and address such changes.

For those interested, AISI’s press release is available (in Japanese) here.
