 
Two Global Marketplaces, Two AI Approaches
 
On July 23, 2025, President Donald Trump signed three new executive orders concerning artificial intelligence (AI).
The first order aims to accelerate federal permitting for data centers and other AI-related infrastructure, backed by loans and other forms of financial support and by steps such as having the Interior, Energy, and Defense departments identify and authorize land for construction. The second requires federal agencies to procure only AI models that are “truth-seeking” and ideologically neutral, avoiding “dogmas” like diversity, equity, and inclusion (DEI). The third directs the Commerce Department to establish a program to market U.S.-made AI products for export to allies.
None of these orders, however, amounts to federal regulation of AI. As the president outlined in his America’s AI Action Plan, his administration favors light-touch regulation to promote innovation and ensure U.S. dominance in the technology globally. This posture does not preclude states from passing their own AI regulations based on the needs of their constituents.
Across the Atlantic, the European Union (EU) has assembled a vastly different, unified AI regulatory framework. The EU AI Act is a single set of standards designed to reduce regulatory complexity and the cost of compliance. The Act sorts AI systems into four risk categories: minimal risk, limited risk, high risk, and unacceptable risk.
Since minimal-risk systems like AI-enabled video games pose little threat, their developers are encouraged only to follow a voluntary code of conduct, which covers choices such as whether to inform players that they are interacting with an AI-enabled chatbot. Developers and deployers of limited-risk systems like chatbots and AI-manipulated text, audio, and video are required to inform users that the content has been artificially generated.
The use of AI in medical devices, employment and recruiting, educational assessments, and critical infrastructure falls within the high-risk category. Among the restrictions:
- AI-enabled medical devices must undergo a comprehensive conformity assessment to ensure they meet technical, legal, and safety standards before being placed on the market.
- AI systems used in hiring and other employment decisions must be tested for bias to prevent discrimination against protected classes. Employers can be held liable for discriminatory outcomes, even if unintentional.
- The use of AI systems to infer the emotions of students in educational institutions is banned, except for medical or safety purposes.
Practices deemed to pose unacceptable risk are prohibited outright. These include real-time biometric identification of the public by law enforcement and AI systems developed explicitly to manipulate people’s behavior, for instance by exploiting a person’s mental state to encourage harmful actions.
While proponents extol the EU AI Act for creating a more transparent ecosystem, critics contend the framework is too rigid, stifles innovation, and puts European companies at a global competitive disadvantage. Complaints include high compliance costs and requirements, such as audits, that will extend the time it takes to bring products to market.
Trump’s lighter-touch, decentralized approach, meanwhile, is lauded for encouraging flexibility and experimentation but criticized for not adequately addressing the perceived harms of AI; it does little, for example, to ensure that algorithmic discrimination does not damage the prospects of marginalized people.