Artificial Intelligence Management Consulting

AI-RMF® LLC is a specialized consultancy founded by Bobby K. Jenkins, a career U.S. Department of Defense Computer Scientist, Systems Engineer, and Acquisition Program Manager. As the author of AI-RMF: A Practical Guide for the NIST Artificial Intelligence Risk Management Framework, Bobby brings experience in applying AI risk, governance, and security principles across mission-critical environments.
Our work is grounded in DoDI 5000.02 acquisition management and the NIST AI Risk Management Framework. We support clients ranging from defense programs to small commercial enterprises, providing both technical and programmatic expertise across AI system acquisition, development, and security assurance. If you need assistance, book an appointment. We cater to small businesses.
1. DoD Program Acquisition Planning and Execution
2. AI-Red Team Test Planning and Execution
3. AI-Driven DoD Government Contract Business Development
4. AI-Governance via NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) is a U.S. federal agency that develops standards, guidelines, and tools to ensure the reliability and security of technology, including artificial intelligence (AI). NIST's mission spans a wide array of fields, from cybersecurity to the physical sciences and engineering, aiming to promote innovation and industrial competitiveness.
In the realm of artificial intelligence, NIST introduced the AI Risk Management Framework (AI-RMF) to guide organizations in managing the risks associated with AI systems. The AI-RMF is designed to be a flexible and voluntary framework that helps stakeholders across various sectors understand, assess, and address the risks AI technologies can pose. This includes considerations for the ethical, technical, and societal implications of AI deployment. The framework emphasizes the importance of trustworthy AI, which means AI systems that are responsible, equitable, traceable, reliable, and governable while also being transparent, explainable, and accountable.




Security-of-AI involves a series of steps and strategies aimed at protecting AI systems from vulnerabilities, ensuring they operate reliably, and are free from manipulation. Here are the main steps involved in securing AI systems.
1. Risk Assessment: Identify the AI system's assets, threat surface, and potential failure modes before and during deployment.
2. Data Security: Protect training and inference data against poisoning, leakage, and unauthorized access.
3. Model Security: Safeguard model weights and architectures against theft, tampering, and extraction.
4. Adversarial AI Defense: Harden models against adversarial inputs such as evasion and manipulation attacks.
5. Ethical and Legal Compliance: Ensure systems meet applicable privacy, fairness, and regulatory requirements.
6. AI Governance: Establish policies, roles, and accountability across the AI system lifecycle.
7. Incident Response: Prepare procedures to detect, contain, and recover from AI-specific security incidents.
8. Research and Development: Track emerging threats and defenses to keep protections current.
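The risk-assessment step above can be sketched in code. The following is a minimal, hypothetical illustration of an AI risk register in Python: the class names, fields, and 1-to-5 scoring scale are assumptions for the example, not part of the NIST AI-RMF or any DoD standard.

```python
from dataclasses import dataclass, field

# Hypothetical example: a simple register of AI-specific risks.
# Scoring scale (1..5 for likelihood and impact) is an assumption.

@dataclass
class AIRisk:
    category: str        # e.g. "Data Security", "Model Security"
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score for ranking.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def prioritized(self) -> list:
        # Highest combined score first, so mitigation effort
        # targets the most pressing risks.
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

register = RiskRegister()
register.add(AIRisk("Data Security", "Training-data poisoning", 3, 4))
register.add(AIRisk("Adversarial AI Defense", "Evasion attack on classifier", 2, 5))
register.add(AIRisk("Model Security", "Model weight exfiltration", 2, 3))

for r in register.prioritized():
    print(f"{r.score:2d}  {r.category}: {r.description}")
```

In practice a register like this would feed the governance and incident-response steps, giving stakeholders a shared, prioritized view of AI risk.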

Bobby K. Jenkins | bobby.jenkins@ai-rmf.com | www.linkedin.com/in/bobby-jenkins-navair-492267239
Mon | By Appointment
Tue | By Appointment
Wed | By Appointment
Thu | By Appointment
Fri | By Appointment
Sat | Closed
Sun | Closed