
NIST Cybersecurity Professional®
NCSP® AI 600-1 Foundation Certificate
Build Trustworthy, Responsible AI Systems with the NIST AI Risk Management Framework.

Course Description
Artificial Intelligence is transforming every sector, but with innovation comes new categories of risk. Organisations must ensure that AI systems are safe, secure, trustworthy, and aligned with ethical and regulatory expectations. The NIST AI Risk Management Framework (AI RMF) and its Generative AI Profile, NIST AI 600‑1, provide a comprehensive, outcomes‑based approach for identifying, assessing, managing, and monitoring AI‑related risks across the entire AI lifecycle.

The NCSP® AI 600‑1 Foundation Certificate is a two‑day, instructor‑led course that introduces participants to the structure, concepts, and practical application of the NIST AI RMF. The course explains how to build trustworthy AI systems, integrate AI risk management into organisational governance, and align AI programs with the NIST Cybersecurity Framework (CSF) 2.0.

Participants learn how to operationalise AI governance, evaluate AI risks, implement safeguards, and support responsible AI adoption across technical and non‑technical teams.
What You Will Learn
Participants gain foundational knowledge required to apply the NIST AI RMF across AI development, deployment, and oversight. You will learn:
- How the AI RMF aligns with the NIST CSF 2.0 and supports enterprise risk management.
- The structure and purpose of the AI RMF Core: Govern, Map, Measure, and Manage.
- How to identify and assess AI‑specific risks, including safety, security, fairness, privacy, and transparency.
- How to integrate responsible AI practices into model development, evaluation, deployment, and monitoring.
- Techniques for documenting AI risks, controls, and assurance activities.
- How to build organisational governance structures that support trustworthy AI.
Course Agenda

Day 1: AI RMF Foundations, Governance & Risk Understanding
Module 1: Introduction to NIST AI 600‑1 (AI RMF)
- Understanding the purpose, evolution, and strategic importance of the AI RMF in supporting trustworthy AI adoption.
Module 2: Structure of the AI RMF Core
- Exploring the four core functions (Govern, Map, Measure, and Manage) and their associated categories and outcomes.
Module 3: AI Governance & Organisational Readiness
- Establishing policies, roles, responsibilities, and governance structures that support responsible AI development and use.
Module 4: AI Risk Identification & Context Mapping
- Understanding AI system context, intended use, stakeholders, and risk factors across the AI lifecycle.

Day 2: Measurement, Controls, Monitoring & Continuous Improvement
Module 5: Measuring AI Risks & System Performance
- Applying metrics, evaluations, and testing methods to assess AI safety, robustness, fairness, and reliability.
Module 6: Managing AI Risks & Implementing Safeguards
- Integrating controls, mitigations, and assurance activities into AI development, deployment, and operational processes.
Module 7: AI Monitoring, Incident Response & Model Lifecycle Management
- Implementing ongoing monitoring, drift detection, incident handling, and model update processes.
Module 8: Continuous Improvement & Alignment with NIST CSF 2.0
- Establishing feedback loops, governance reviews, and enterprise‑wide alignment with CSF 2.0 outcomes.
Learning Outcomes

Participants will be able to:
- Explain how NIST AI 600‑1 supports the NIST Cybersecurity Framework 2.0 and responsible AI governance.
- Identify and describe the AI RMF Core functions and their associated outcomes.
- Assess AI risks across the lifecycle, including safety, security, fairness, privacy, and transparency concerns.
- Apply AI RMF practices to model development, evaluation, deployment, and monitoring.
- Develop documentation, metrics, and governance artifacts that support trustworthy AI.
- Translate AI RMF guidance into actionable practices that strengthen organisational AI resilience and accountability.
Who Should Attend?

This course is designed for professionals responsible for developing, deploying, governing, or overseeing AI systems, including:
- AI/ML Engineers & Data Scientists
- Cybersecurity & Risk Management Professionals
- AI Governance, Ethics & Compliance Teams
- Product Managers & Technical Leaders
- Systems Integrators & Technology Vendors
- Program & Project Managers supporting AI initiatives
- Legal, Audit, and Policy Personnel involved in AI oversight
Prerequisites

There are no formal prerequisites for this Foundation‑level course, though a basic understanding of cybersecurity, data science, or risk management is helpful.

Participants are provided with:
- NCSP® AI 600‑1 Foundation Certificate courseware, including links to further reading and resources.
- NCSP® AI 600‑1 Foundation Certificate of Completion.
- NCSP® AI 600‑1 Foundation Certificate digital badge.
Enrol Today

Develop the skills to apply the NIST AI Risk Management Framework and implement trustworthy, responsible, and risk‑aware AI governance practices.

Further Reading
NIST AI 600-1 - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
