
AI-RMF® LLC is a specialized consultancy founded by Bobby K. Jenkins, a career U.S. Department of Defense Computer Scientist, Systems Engineer, and Acquisition Program Manager. As the author of "AI-RMF: A Practical Guide for the NIST Artificial Intelligence Risk Management Framework," Bobby brings hands-on experience applying AI risk, governance, and security principles.
Bobby is the investigative voice behind "Security of AI," a channel dedicated to exploring the strategic, technical, and societal risks emerging from artificial intelligence. Drawing on decades of experience in defense technology, systems engineering, cybersecurity, and AI governance, Bobby examines how advanced technologies shape national security, global stability, and the future of digital infrastructure.
His work is grounded in DoDI 5000.02 acquisition management and the NIST AI Risk Management Framework. AI-RMF supports clients ranging from defense programs to small commercial enterprises, providing both technical and programmatic expertise across AI system acquisition, development, and security assurance. If you need assistance, book an appointment.
The AI Risk Management Framework was created by the National Institute of Standards and Technology (NIST), a U.S. federal agency that develops standards, guidelines, and tools to ensure the reliability and security of technology, including artificial intelligence (AI). NIST's mission spans a wide array of fields, from cybersecurity to the physical sciences and engineering, aiming to promote innovation and industrial competitiveness.
In the realm of artificial intelligence, NIST introduced the AI Risk Management Framework (AI-RMF) to guide organizations in managing Security-of-AI risks. The AI-RMF is designed to be a flexible and voluntary framework that helps stakeholders across various sectors understand, assess, and address the risks AI technologies can pose. This includes considerations for the ethical, technical, and societal implications of AI deployment. The framework emphasizes the importance of trustworthy AI, meaning AI systems that are responsible, equitable, traceable, reliable, and governable, while also being transparent, explainable, and accountable.




Security-of-AI involves a series of steps and strategies aimed at protecting AI systems from vulnerabilities, ensuring they operate reliably, and keeping them free from manipulation. Here are the main steps involved in securing AI systems:
1. Risk Assessment: Identify the threats, vulnerabilities, and potential impacts associated with an AI system before deployment and throughout its lifecycle.
2. Data Security: Protect training and inference data with access controls, encryption, and provenance tracking to prevent poisoning and leakage.
3. Model Security: Safeguard model artifacts and weights against theft, tampering, and unauthorized modification.
4. Adversarial AI Defense: Test and harden models against evasion, poisoning, and model-extraction attacks.
5. Ethical and Legal Compliance: Ensure AI systems meet applicable laws, regulations, and ethical standards.
6. AI Governance: Establish policies, roles, and accountability for how AI systems are developed, deployed, and monitored.
7. Incident Response: Plan for, detect, and respond to AI-specific security incidents, and capture lessons learned.
8. Research and Development: Stay current with emerging threats and defenses as the AI security landscape evolves.
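The first step above, risk assessment, is often operationalized as a risk register that tracks findings against the other categories. As an illustration only (the names `Risk`, `RiskRegister`, and the severity scale are hypothetical, not part of the AI-RMF itself), a minimal sketch of such a register might look like:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    """One entry in an AI system risk register (hypothetical schema)."""
    category: str          # e.g. "Data Security", "Model Security"
    description: str
    severity: Severity
    mitigated: bool = False

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_risks(self, min_severity: Severity = Severity.LOW) -> list:
        """Return unmitigated risks at or above a severity threshold."""
        return [r for r in self.risks
                if not r.mitigated and r.severity.value >= min_severity.value]

register = RiskRegister()
register.add(Risk("Data Security",
                  "Training data lacks provenance records", Severity.HIGH))
register.add(Risk("Adversarial AI Defense",
                  "No evasion testing performed", Severity.MEDIUM,
                  mitigated=True))
print(len(register.open_risks(Severity.MEDIUM)))  # 1 open risk remains
```

In practice a register like this would feed governance reviews (step 6) and incident response planning (step 7); the sketch only shows the filtering idea, not a production tool.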

Patuxent River, MD, USA
Bobby K. Jenkins | bobby.jenkins@ai-rmf.com | www.linkedin.com/in/bobby-jenkins-navair-492267239
Mon | By Appointment
Tue | By Appointment
Wed | By Appointment
Thu | By Appointment
Fri | By Appointment
Sat | Closed
Sun | Closed