March 9: This week in AI federal policy
DC/ai Decoded: A weekly newsletter on developments in federal artificial intelligence and quantum policy
This week decoded
The Trump administration is expected to announce which state-level AI laws it deems “onerous” and will refer to the Justice Department’s AI Litigation Task Force, as directed under the President’s AI Executive Order. The Commerce Department has approved and sent to the White House for review draft regulations that would extend federal oversight to AI chip exports—despite reported opposition from the President himself.
Meanwhile, major technology firms signed a White House Ratepayer Protection Pledge, committing to fully fund power delivery infrastructure upgrades tied to their expanding data center operations.
On Capitol Hill, lawmakers are increasingly sounding the alarm over potential job losses stemming from rapid advances in AI.
Read more below
Congress
Hearings
Last week
On March 3, the Senate Commerce, Science, and Transportation Committee’s Science, Manufacturing, and Competitiveness Subcommittee held a hearing on “Less Help, More Help: AI That Improves Safety, Productivity, and Care.”
On March 4, the House Education and the Workforce Committee’s Subcommittee on Higher Education and Workforce Development held the fifth hearing in its series examining artificial intelligence, titled “Building an AI-Ready America: Strengthening Employer-Led Training.”
On March 5, the Senate Health, Education, Labor and Pensions Committee held a hearing on “Transforming Health Care with Data: Improving Patient Outcomes Through Next-Generation Care.”
Upcoming
On March 17, the House Financial Services Committee holds a hearing on “Updating America’s Financial Privacy Framework for the 21st Century.”
On March 26, the House Financial Services Digital Assets, Financial Technology, and Artificial Intelligence Subcommittee holds a hearing on “Innovation at the Speed of Markets: How Regulators Keep Pace with Technology.”
Legislation
The House Energy and Commerce Committee passed the KIDS Act to establish protections for children and teens online by mandating technology verification to prevent minors from accessing harmful content, requiring platforms to provide robust parental controls, banning direct and ephemeral messaging for users under 13, and restricting market research on minors. The bill includes the Safeguarding Adolescents from Exploitative BOTs (SAFE BOTs) Act covering AI chatbots. (Text)
Reps. Jay Obernolte (R-CA) and Jennifer McClellan (D-VA) introduced the AI-Ready Networks Act to require the National Telecommunications and Information Administration (NTIA) to issue a forward-looking report on the integration of artificial intelligence into commercial telecommunications networks across the country. (Text)
Rep. Bonnie Watson Coleman (D-NJ) introduced the Data Center Community Impact Act to authorize a federal study on the environmental, economic, and public health impacts of data centers, with a focus on communities of color and low-income communities. (Text)
Sens. Tim Sheehy (R-MT) and Lisa Blunt Rochester (D-DE) introduced the AI Fraud Accountability Act, which would create a new offense under the Communications Act prohibiting falsely posing as a real or imaginary individual through a highly realistic digital impersonation with intent to defraud a person of money or other things of value. The bill would also direct the FTC to identify the foreign countries most associated with digital impersonation fraud and to pursue international cooperation agreements to bolster enforcement against overseas actors. The House companion was introduced by Reps. Vern Buchanan (R-FL) and Darren Soto (D-FL). (Text)
Senate Commerce, Science and Transportation Committee Ranking Member Maria Cantwell (D-WA) and Sen. Jerry Moran (R-KS) introduced the NSF AI Education Act to expand scholarship and professional development opportunities to study artificial intelligence with support from the National Science Foundation. (Text)
Correspondence
Sen. Jim Banks (R-IN) sent a letter to Defense Secretary Pete Hegseth requesting the Pentagon’s Artificial Intelligence Futures Steering Committee identify China’s top AI influencers, examine China’s security practices, and investigate sabotage of frontier models. (Axios)
Sens. Ed Markey (D-MA), Richard Blumenthal (D-CT), Chris Van Hollen (D-MD), and Cory Booker (D-NJ) wrote to Ann Rendahl, President of the National Association of Regulatory Utility Commissioners (NARUC), urging the association’s members to protect residential and small business ratepayers from rate hikes stemming from the rapid artificial intelligence-fueled data center buildout. (Letter)
Sens. Mark Warner (D-VA), Josh Hawley (R-MO), Jim Banks (R-IN), Maggie Hassan (D-NH), John Hickenlooper (D-CO), Mark Kelly (D-AZ), Tim Kaine (D-VA), Mike Rounds (R-SD), and Todd Young (R-IN) sent letters to the Department of Labor, Bureau of Labor Statistics, and Census Bureau urging them to expand data collection and public reporting on the impact of artificial intelligence on the U.S. workforce. (Letter)
Sen. Josh Hawley (R-MO) sent a letter to Alphabet Chief Executive Officer Sundar Pichai, informing the company that the Senate Judiciary Subcommittee on Crime and Counterterrorism is opening an investigation into the role that Big Tech platforms play in the crisis of child trafficking and online exploitation. (Letter)
House Committee on Oversight and Government Reform Chair James Comer (R-KY) sent letters to Booking Holdings, Expedia Group Incorporated, Uber, Lyft, and Instacart requesting documents and information regarding their use of artificial intelligence to conduct surveillance pricing of consumers that artificially increases the prices of goods and services. (Press release)
Sen. Ron Wyden (D-OR) sent letters to Anthropic, Google, OpenAI, and X.AI requesting information about how the companies’ government customers, in the U.S. and overseas, can use their AI products under the terms of their contracts. (Letter)
Labor Caucus Co-Chairs Reps. Debbie Dingell (D-MI), Steven Horsford (D-NV), Donald Norcross (D-NJ), and Mark Pocan (D-WI) sent a letter to the chairs of the House Democratic Commission on Artificial Intelligence urging them to incorporate pro-worker policy and strong labor standards into AI regulation within the Innovation Economy strategy. (Letter)
Trump Administration
Commerce Department
The Commerce Department approved and sent to OMB for review draft regulations to give the government authority over AI chip exports abroad. The White House said the draft “does not reflect what President Trump has said on export controls nor does it reflect the direction of the Trump administration on encouraging export of the American AI stack.” OMB has until Thursday to complete interagency review. (Axios)
White House
The White House released its National Cyber Strategy, saying, “We will outcompete adversaries who sell ‘low cost’ AI and digital technologies that carry embedded censorship, surveillance, and ideological bias. We will partner closely with industry and academia, at the speed and scale commensurate with the threats we face, and in accordance with our values.” In addition to leveraging artificial intelligence, the strategy also includes promoting the adoption of post-quantum cryptography and secure quantum computing. (Report)
The White House also released the Ratepayer Protection Pledge, in which Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI agreed to build, bring, or buy new generation resources and cover the cost of all power delivery infrastructure upgrades required for their data centers. (Fact sheet)
Noteworthy Quotes and Events
ADMINISTRATION
Securities and Exchange Commission (SEC)
Chair Paul Atkins delivered remarks at the Financial Stability Oversight Council Artificial Intelligence Innovation Series Roundtable on Strategy and Governance Principles, saying “The SEC’s best historical regulatory approach has hewn to principles-based rules—rooted in materiality. This time-tested approach should inform how a public company today ought to disclose developments concerning AI, just as it guides disclosures about any other development. The standard is a familiar one: whether there is a substantial likelihood that a reasonable shareholder would consider the information important in making an investment decision. Prescriptive mandates are not the answer to every emerging technology. And disclosure ‘checklists’ are no substitute for materiality-based transparency that offers meaningful disclosure under established principles. If the advent of each new technology becomes a pretext for new line items, then disclosure swiftly loses its discipline. In the absence of a limiting principle, a morass of information can do more to obscure than to illuminate. Now, insisting on clarity in disclosure should not suggest an aversion to adoption. We actively encourage market participants to engage with our staff around innovative use cases.” (Remarks)
Commodity Futures Trading Commission (CFTC)
CFTC Chair Mike Selig said about former CFTC and SEC Chair Gary Gensler, “Someone like that can just run roughshod over entire industries, and that’s exactly what we saw — we saw with crypto, we saw it with prediction markets, we saw with AI, and many other industries where there was just this political motivation to target market participants.” He added, “The other problem with it is that it’s not transparent. I think that you’ve got to put the rules in clear Federal Register typeface so that everybody can see them and it’s fair.” (Washington Examiner)
On the CFTC Innovation Advisory Committee, Selig said, “For too long, governments have kind of been this top-down ivory tower world of we’re going to tell businesses what to do and what’s best for them. We’re not going to do that anymore. We’re going to work together with industry to understand what they’re doing in the markets, what they’re building as a business, and collaborate on what the best regulatory framework looks like.” (Washington Examiner)
CONGRESS
AI-Related Job Impacts
Sen. Josh Hawley (R-MO) said on AI-related job losses, “We haven’t done anything. The public will pretty soon demand it.” He added, “I hear from a lot of college graduates now, who are new entrants in the labor market, that they’re having a really tough time finding jobs. I’m really concerned about it. I’m concerned about it for blue collar workers too.” (Politico)
Sen. Thom Tillis (R-NC) said, “There’s an implication that you may need to slow down AI, because it may be disrupting jobs, but China and our international competitors aren’t. If you stop, China wins.” (Politico)
Sen. Mark Warner (D-VA) said, “I’m pro-AI. Over the long haul, it’ll bring enormous positives, but for the next five to seven years, the disruptions that can take place, and I think a lot of it is going to take place with recent college grads.” (Politico)
Sen. Mike Rounds (R-SD) said, “There will be a transition period. There always is. Every time you have a technology change you have an upheaval, but there’s also opportunity.” (Politico)
Rep. Suzanne Bonamici (D-OR) said, “I’m hearing anxiety about AI; part of it has to do with potential job loss,” adding, “If the job market changes and is different then we have to prepare people.” (Politico)
Sen. Mark Kelly (D-AZ) posted “AI is reshaping the workforce faster than we can measure it. I’m pushing for the federal government to modernize how we track AI’s impact on jobs so that we can better support workers and be prepared for what’s to come.”
Sen. Todd Young (R-IN) posted “We need to know how AI is impacting the U.S. workforce and jobs. Glad to join this effort to support American workers.”
AI and Energy
At a roundtable discussion with President Donald Trump and Energy Secretary Chris Wright on “lowering energy costs by committing major tech and AI companies to fund their own data center electricity needs instead of shifting costs onto consumers,” Sen. Jon Husted (R-OH) said, “In places like Ohio, this work matters. If America wants economic and national security dominance, we must lead in technology and AI to stay ahead of adversaries like China. But, this innovation requires energy and Ohio families are concerned about the impact on their own bills. I’m grateful to those who signed the Ratepayer Protection Pledge today and to the president for making this happen for working Ohio families. Requiring companies to fund their own power protects working people from higher bills. I was honored to join the president and his administration at the White House as we work to deliver stable, reliable and affordable energy for the future.” (Press release)
Sen. Ed Markey (D-MA) posted “Electricity rates are spiking as AI data centers pop up across the country. Households and small businesses should not be made to subsidize data centers while they already struggle to get by. Our nation’s public utility commissions set household rates--they can act now.”
Department of Defense vs. Anthropic
Sen. Thom Tillis (R-NC) said, “They’re telling Anthropic that they should compromise their code of conduct to facilitate whatever it is Hegseth or somebody wants.” (Politico)
House Committee on Science, Space, and Technology Ranking Member Zoe Lofgren (D-CA) said, “The Trump administration’s bullying tactics towards Anthropic are shocking and senseless. Anthropic is trying to do the right thing and put their own guardrails in place in the absence of legislation. Any freedom loving American can appreciate Anthropic’s attempts to prevent the DOD from using its AI model for mass surveillance of Americans. And it should go without saying that AI technology should not be making potentially lethal decisions without human involvement. I fear what America will become if the DOD is given this unrestricted power.” (Press release)
Sen. Kirsten Gillibrand (D-NY) said, “The Defense Department’s designation of Anthropic as a supply-chain risk is a dangerous misuse of a tool meant to address adversary-controlled technology. Instead, DOD has turned it against a leading American technology company. This reckless action is shortsighted, self-destructive, and a gift to our adversaries. This goes beyond simply not doing business with a company that won’t meet the administration’s terms. The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States. We cannot credibly compete with China on artificial intelligence while this administration simultaneously takes an axe to American innovation and sells advanced chips to Beijing. I urge the administration to reverse course. We should be nurturing and encouraging American innovation — not threatening it into submission.” (Press release)
Rep. Lloyd Doggett (D-TX) posted “Seeking modest protections to ensure their technology is used neither for mass surveillance of Americans nor deployment of weaponized drones without humans involved in the decision-making, AI company Anthropic was blacklisted by Hegseth and Trump as a ‘radical left’ company and national security threat. The Trump regime is abusing Anthropic for acting reasonably in accord with American values. This private company is showing far more respect for the American people and our national security than Trump and his incompetent cabinet of Fox News recruits.”
Miscellaneous
Rep. Darren Soto (D-FL) posted “As AI continues to evolve, it is critical for us to ensure that the technology isn’t being misused to create harm. Proud to introduce the bipartisan AI Accountability Act with Vern Buchanan to hold bad actors accountable.”
Rep. Summer Lee (D-PA) posted “ICE is using data from AI-based school surveillance to aid and abet their mass deportation efforts. If Microsoft is going to continue working with ICE, teachers are rightfully concerned about using Microsoft’s AI tools in the classroom while their students are being deported. But Microsoft refuses to answer to them.”
Sen. Ted Budd (R-NC) posted “AI’s potential in the healthcare industry presents unique opportunities to save and improve lives. We can accelerate breakthroughs, streamline research through automation, and reduce the costs and barriers that slow life-saving trials.”
Sen. Jim Banks (R-IN) posted “The U.S. needs to win the AI race against China. That’s why I sent a letter to SecWar asking the DoW to examine the state of advanced AI in China and how we can ensure American technology is always in the lead.”
Senate Committee on Armed Services hearing on the American small drone industrial base Q&A
Sen. Mark Kelly (D-AZ)
Sen. Mark Kelly (D-AZ): “So, General Marks, I want to ask you something I think that is going to define warfare, now for the rest of our lives, for generations, and that’s the role of artificial intelligence in what we’ve seen play out here over the last several days. But certainly, into the future, it’s going to be a new feature of combat operations in many different ways. But specifically, the LUCAS, the low-cost unmanned combat aircraft system, attack system, those drones deployed in Operation Epic Fury have documented autonomous anti-jamming and I believe also some swarming capability. So, my question is about what’s underneath all of that. Are AI systems being used to assist in targeting decisions during this operation?”
Major General Steven M. Marks: “So, Senator, thank you for the question. I am familiar with the LUCAS system. At this level, open hearing, I’m not able to go into great depth on what is inside of the LUCAS system, but I would be willing to get on your calendar, on the Committee’s calendar, and provide you a classified briefing.”
Kelly: “Okay, so my next question is kind of irrelevant there, because I was going to ask about who validated the systems, who safeguarded them, and what human oversight exists at the moment a drone selects or confirms a target. So, let’s do that in a closed session as well. But I also want to just state for the record here that companies like Anthropic and others in the AI industry have published their own safety frameworks of how advanced AI systems should be deployed. But Congress has not yet set any kind of clear statutory framework for how AI can be used in lethal military operations. There’s a DoD directive, directive 3000.09, which requires what is called, and I’m quoting from the directive, “appropriate levels of human judgment over the use of force.” But that language doesn’t necessarily mean a human is involved at the moment a target is selected or engaged. So, before we rapidly scale up production and field more of these systems that have AI incorporated into their capability, we need a clear answer on this. At the moment, a drone identifies and confirms a target, whether or not a human has to make the final decision to strike the target, or can a system execute the engagement autonomously once it’s been activated? These are questions we haven’t yet dealt with here in Congress, and we need to. So, General, I just want to get your thoughts on that, independent of what LUCAS or any other system can do.”
Marks: “Thank you, Senator, for the question. Any system, any capability that the department procures has to comply and be compliant with the Law of Armed Conflict. I would say that any commander that deploys these systems, just like any weapon system, it has to comply with the Law of Armed Conflict.”
Kelly: “I am not sure that the Law of Armed Conflict has dealt with this issue, so LOAC might not be exactly clear, and that’s why I think it’s up to us, Mister Chairman, that we take this issue of humans in the loop seriously and create the framework that DoD will apply to these systems with regards to their autonomous nature and the ability for a system to make a decision on targeting the enemy. Thank you.”
What I’m Reading This Week
Agents of Chaos, Natalie Shapira et al.
A Word to the Wise: Don’t Trust A.I. to File Your Taxes, Stuart A. Thompson, The New York Times.
Khan seeks to lure AI giant blacklisted by Trump to London, James Titcomb, The Telegraph.
What to watch from Trump’s national AI standard, Allison Mollenkamp, Roll Call.
The Week the AI Scare Turned Real and America Realized Maybe It Isn’t Ready for What’s Coming, Nick Lichtenberg, Fortune.
How Candidates Are Using Winks and Posts to Seek Crypto and A.I. Cash, Shane Goldmacher, The New York Times.
About Zero One Strategies
Zero One Strategies is a specialized government relations practice dedicated to navigating the complex landscape of U.S. federal policy in emerging technologies. As advancements in technology continue to outpace regulatory frameworks, Zero One Strategies aims to provide strategic guidance and bipartisan advocacy for innovators and businesses operating at the forefront of technological development.
The practice focuses on key areas such as artificial intelligence, digital assets, blockchain, decentralized technologies, cybersecurity, data, and digital infrastructure, as well as the multiple policy issues impacting these sectors, including tax and financial services.
Contact us at Stacey@ZeroOneStrategies.com