US Executive Order on AI: What Organizations Need to Know
In October 2023, President Biden signed Executive Order 14110, establishing the most comprehensive federal action on AI in US history. This landmark order sets new standards for AI safety, security, and trustworthiness across the federal government and beyond.
What is the Executive Order on AI?
Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” represents a whole-of-government approach to AI governance. It directs federal agencies to:
- Establish new safety and security standards
- Protect Americans’ privacy
- Advance equity and civil rights
- Support workers affected by AI
- Promote innovation and competition
- Advance American leadership globally
Key Requirements by Area
AI Safety and Security
The order establishes significant safety requirements for powerful AI systems:
Reporting Requirements
- Developers of dual-use foundation models must notify the government
- Companies training models above specified compute thresholds (the order's initial reporting threshold is 10^26 integer or floating-point operations) must report
- Results of red-team safety tests must be shared with the government
Safety Standards
- NIST standards for red-teaming AI systems
- Guidelines for secure AI development practices
- Standards for AI-generated content authentication (watermarking)
Protecting Privacy
The order addresses AI’s impact on personal privacy:
- Strengthen privacy-preserving techniques in AI
- Evaluate federal agencies’ data collection practices
- Develop guidelines for AI use in surveillance
- Support research on privacy-preserving AI
Advancing Equity and Civil Rights
To prevent AI-enabled discrimination:
- Guidance on algorithmic discrimination in hiring
- Best practices for AI in the justice system
- Standards for AI use in healthcare
- Protections for tenants from AI screening
Supporting Workers
Recognizing AI’s workforce impact:
- Study AI’s labor market effects
- Develop principles for employee well-being
- Support workers displaced by AI
- Guidelines for AI in workplace monitoring
Promoting Innovation
To maintain US competitiveness:
- Streamline visa processes for AI talent
- Expand AI research funding
- Promote small business AI adoption
- Open access to government AI resources
Agency-Specific Mandates
The EO assigns specific responsibilities to federal agencies:
| Agency | Key Responsibilities |
|---|---|
| Commerce/NIST | Safety standards, testing guidelines, watermarking |
| DHS | AI security in critical infrastructure |
| DOE | AI risks in nuclear/biological domains |
| HHS | AI safety in healthcare and drug development |
| DOL | Worker protections and labor impact |
| DOJ | Civil rights enforcement and AI in justice |
| FTC | Consumer protection and competition |
| OMB | Federal AI procurement and use |
Compliance Timeline
The EO established aggressive timelines:
| Deadline | Requirement |
|---|---|
| 90 days | Reporting requirements for dual-use models |
| 150 days | NIST AI safety guidelines |
| 180 days | Agency AI use case inventories |
| 240 days | Procurement guidelines |
| 365 days | Comprehensive safety standards |
Impact on Private Sector
While the EO primarily directs federal agencies, it significantly affects private companies:
Direct Requirements
- Reporting: Companies developing large AI models must report to the government
- Red-teaming: Safety testing results must be shared
- Watermarking: Content authentication for AI-generated media
Indirect Effects
- Federal contractors: Must comply with new AI procurement rules
- Healthcare: AI systems face new safety requirements
- Financial services: Enhanced algorithmic discrimination scrutiny
- Employment: New guidelines on AI in hiring
Relationship to Other Frameworks
The EO complements existing AI governance:
| Framework | Relationship |
|---|---|
| NIST AI RMF | The EO references and builds on the RMF |
| State Laws | Encourages a consistent federal approach alongside state laws |
| EU AI Act | Creates a basis for international alignment |
| Sector Rules | Adds to existing industry regulations |
Implementation Status
Key developments since the EO:
- NIST: Released initial safety guidelines and testing frameworks
- OMB: Published federal AI use guidance
- Commerce: Established AI Safety Institute
- DHS: Released critical infrastructure guidance
- Agencies: Completed AI use case inventories
Preparing for Compliance
Organizations should take these steps:
Assessment Phase
- Inventory AI systems: Document all AI applications
- Evaluate scope: Determine which EO provisions apply
- Gap analysis: Compare current practices to requirements
- Risk assessment: Identify high-priority compliance areas
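The first two assessment steps, inventorying AI systems and running a gap analysis against applicable provisions, can be sketched as a simple data model. This is an illustrative sketch only: the provision tags, field names, and `gap_analysis` helper below are hypothetical, not part of the EO or any official tooling.

```python
from dataclasses import dataclass, field

# Hypothetical tags for EO-related controls an organization might track.
PROVISIONS = {
    "reporting": "Dual-use foundation model reporting",
    "red_teaming": "Red-team safety testing",
    "watermarking": "AI-generated content authentication",
}

@dataclass
class AISystem:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    high_risk: bool = False
    controls_in_place: set = field(default_factory=set)

def gap_analysis(system: AISystem, required: set) -> set:
    """Return the required controls this system still lacks."""
    return required - system.controls_in_place

# Example inventory entry and its remaining compliance gaps.
chatbot = AISystem(
    name="support-chatbot",
    purpose="customer support",
    high_risk=True,
    controls_in_place={"watermarking"},
)
print(sorted(gap_analysis(chatbot, set(PROVISIONS))))
# → ['red_teaming', 'reporting']
```

Even a lightweight structure like this makes the later phases easier: the same records feed documentation, governance reviews, and periodic self-assessment.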
Implementation Phase
- Safety testing: Implement red-teaming for high-risk systems
- Documentation: Prepare required reports and disclosures
- Governance: Establish oversight structures
- Training: Educate staff on new requirements
Monitoring Phase
- Track guidance: Monitor agency rulemaking
- Update practices: Adapt to evolving requirements
- Engage stakeholders: Participate in public comment periods
- Audit compliance: Regular self-assessment
Enforcement Considerations
While the EO itself doesn’t create new legal penalties, enforcement flows through:
- Existing agency authorities (FTC, DOJ, etc.)
- Federal procurement requirements
- Sector-specific regulations
- Future legislation building on EO
How Metrica.uno Helps
Metrica.uno supports EO compliance by:
- Tracking relevant requirements for your AI systems
- Mapping systems to agency-specific mandates
- Generating documentation for federal reporting
- Assessing alignment with NIST guidelines
- Monitoring evolving implementation guidance
Start your assessment to understand your EO compliance posture.
Written by
Metrica.uno Team
Content Team
The Metrica.uno content team helps organizations navigate AI compliance with practical insights and guidance.