April 27: This week in AI federal policy
DC/ai Decoded: A weekly newsletter on federal artificial intelligence and quantum policy
This week decoded
New efforts have yet to bridge the bipartisan divide on comprehensive AI and data privacy legislation. House Republicans introduced two new data privacy bills, while a key Democrat withheld support from the leading Republican AI measure.
House lawmakers sound the alarm after DHS demonstrated jailbroken AI models in a classified briefing. Senators on both sides of the aisle pressed Kevin Warsh on whether AI will upend the labor market.
Meanwhile, White House Office of Science and Technology Policy Director Michael Kratsios issued a memorandum outlining Administration steps to block foreign actors from exploiting U.S. AI models.
Read more below
Congress
Hearings
Last week
On April 21, the House Armed Services Subcommittee on Cyber, Information Technology, and Innovation held a hearing on “Cyber Posture of the Department of Defense.”
On April 21, the House Science, Space and Technology Subcommittee on Research and Technology held a hearing on “Robots Made in America: Advancing U.S. Leadership in Manufacturing and Automation.”
On April 22, the Senate Judiciary Committee held a hearing on “Stealth Stealing: China’s Ongoing Theft of U.S. Innovation.”
This week
On April 28, the Senate Armed Services Committee holds a hearing on the “Posture of the U.S. Special Operations Command and U.S. Cyber Command in review of the Defense Authorization Request for FY2027 and the Future Years Defense Program.”
On April 29, the House Energy and Commerce Energy Subcommittee holds a hearing on “AI and the Grid: Meeting Growing Power Demand While Protecting Ratepayers.”
On April 29, the House Homeland Security Cybersecurity and Infrastructure Protection Subcommittee holds a hearing on “Data Centers, Telecommunications Networks, and Space-Based Systems: Modernizing DHS’s SRMA (Sector Risk Management Agency) Role for the Communications and IT Sectors.”
On April 29, the House Science, Space and Technology Committee holds a markup of H.R. 8462 (119), the “National Quantum Initiative Reauthorization Act.”
Upcoming
On May 13, the Senate Armed Services Cybersecurity Subcommittee will hold a closed briefing on cyber operations and readiness for the fourth quarter of FY2025 and the first quarter of FY2026.
On May 20, the House Financial Services Digital Assets, Financial Technology, and Artificial Intelligence Subcommittee holds a hearing on “Partnering for Innovation: How Bank-Fintech Collaborations Enhance Financial Infrastructure.”
Legislation
The House Foreign Affairs Committee passed a number of emerging tech-related bills, including the Stop Stealing our Chips Act, the Addressing Dangerous Vulnerabilities in Exports and Research to Strategic Adversaries, Regimes, and Industrial Entities of Security Concern (ADVERSARIES) Act, the Semiconductor Technology Resilience, Integrity, and Defense Enhancement Act, and the Full AI Stack Export Promotion Act. (Notice)
Sen. Joni Ernst (R-IA) introduced the Protecting American Taxpayers Act, a package of anti-fraud bills including the Preventing Deep Fake Scams Act sponsored by Sen. Jon Husted (R-OH). (Text)
Reps. Jay Obernolte (R-CA) and Sara Jacobs (D-CA) introduced the Economy of the Future Commission Act to establish a bipartisan, bicameral commission to study how AI is transforming the American economy and develop consensus-driven policy recommendations for Congress. The Senate version was introduced by Sens. Mark Warner (D-VA) and Mike Rounds (R-SD). (Text)
House Committee on Energy and Commerce Chair Brett Guthrie (R-KY) and Rep. John Joyce (R-PA) introduced the Securing and Establishing Consumer Uniform Rights and Enforcement over Data Act (SECURE Data Act) to establish a comprehensive national framework for consumer data privacy with federal preemption of state law. (Text)
House Committee on Financial Services Chair French Hill (R-AR) and Reps. Bill Huizenga (R-MI) and Bryan Steil (R-WI) introduced the Guidelines for Use, Access, and Responsible Disclosure of Financial Data Act (GUARD Financial Data Act) to modernize the Gramm-Leach-Bliley Act. (Text)
Reps. Valerie Foushee (D-NC) and Don Beyer (D-VA) and Del. James Moylan (R-GU) introduced the Protecting Consumers From Deceptive AI Act to require the National Institute of Standards and Technology to establish task forces that facilitate and inform the development of technical standards and guidelines for identifying content created by generative artificial intelligence, and to ensure that audio or visual content created or substantially modified by generative AI includes a disclosure acknowledging its AI origin. (Text)
Rep. LaMonica McIver (D-NJ) introduced the AI Data Center Site Selection Transparency Act to require developers of AI-focused data centers to disclose locations, electricity use, water consumption, cooling demands, and environmental impacts before the AI-focused data centers are developed. (Press release)
Reps. Brian Fitzpatrick (R-PA) and Debbie Dingell (D-MI) and Sens. Ed Markey (D-MA) and Ben Ray Luján (D-NM) introduced the Communications, Video, and Technology Accessibility (CVTA) Act to modernize the 21st Century Communications and Video Accessibility Act (CVAA) of 2010. (Press release)
Reps. Nick Begich (R-AK), Dan Crenshaw (R-TX), and Burgess Owens (R-UT) introduced the DATA Act to modernize federal regulations to allow manufacturers and other energy-intensive industries to develop fully isolated, off-grid power systems and ensure new industrial growth does not strain existing power grids or increase electricity costs. The Senate version was introduced by Sen. Tom Cotton (R-AR). (Press release)
Rep. Blake Moore (R-UT) introduced the AI Children’s Toy Safety Act to ban the manufacturing, importation, sale, or distribution of any children’s toy or childcare article that incorporates an artificial intelligence chatbot in the United States. (Press release)
Correspondence
Sen. Mark Kelly (D-AZ) and Rep. Brian Fitzpatrick (R-PA) sent a letter to President Trump urging the Administration to incorporate worker-centered principles into federal artificial intelligence policy, guidance, and procurement, such as those supported by the AFL-CIO. (Letter)
Sen. Ruben Gallego (D-AZ) and Rep. Greg Casar (D-TX) sent a letter to JetBlue Airways requesting information about its potential use of customer data and artificial intelligence to set prices for consumers. (Letter)
Publications and Events
On April 29, Sen. Bernie Sanders (I-VT) will hold a discussion on “the existential risks of AI and the need for international cooperation.”
Sens. Marsha Blackburn (R-TN) and Peter Welch (D-VT) held a roundtable with artists advocating for the NO FAKES Act and TRAIN Act. (Press release)
Trump Administration
White House
White House Office of Science and Technology Policy Director Michael Kratsios released a memorandum on Administration efforts to prevent foreign actors from training AI on U.S. models. (Memo)
Axios reported that several agency actions required by the President’s December executive order on “Ensuring a National Policy Framework for Artificial Intelligence” were not completed by their March 11 deadline, including Federal Trade Commission guidance on how consumer protection laws apply to AI models; a Commerce Department evaluation of “onerous” state AI laws and of rules tying broadband funding to state AI regulation; and Federal Communications Commission guidance on a national AI reporting and transparency standard, along with identification of conflicting state laws. (Axios)
National Institute of Standards and Technology (NIST)
The NIST National Cybersecurity Center of Excellence is developing an operational technology asset management project focused on zero trust with a concept paper expected this summer. (Inside Cybersecurity)
Department of Homeland Security
The DHS National Counterterrorism Innovation, Technology and Education Center and the House Homeland Security Committee hosted a closed-door briefing for all House lawmakers to allow them to interact with AI models whose built-in safety guardrails had been removed.
Sean Plankey, President Donald Trump’s nominee to lead the Cybersecurity and Infrastructure Security Agency (CISA), withdrew his name from consideration after awaiting Senate action since March 2025.
Noteworthy Quotes and Events
ADMINISTRATION
White House
President Donald Trump said, “We want to beat China at the industry. We’re leading with crypto, we’re leading with AI, and I really feel I have an obligation... as a President, I have to be able to make sure that all of our industries do well. Crypto’s a big industry.”
The White House posted, “America is leading the AI race and our foreign adversaries know it. The Trump Administration will not allow China to subvert American interests by stealing AI.”
White House adviser David Sacks posted “UPDATE: the DOJ has joined xAI’s lawsuit against Colorado on First Amendment grounds. AI models should not be required to alter truthful output to comply with DEI.”
Sacks also posted “Reverse discrimination does not undo historical injustices but rather perpetrates new ones against a different set of individuals. This foments social strife and undermines the ideal of America as a meritocracy. AI models should not be taught that this practice is justified.”
Sacks said in an interview, “We’re lucky that Trump’s the President when this AI revolution is happening. [If Kamala Harris were President], We’d have no data centers, and they’d be using AI to censor us, and they’d be promoting DEI values through AI. That was in the Biden executive order.”
CONGRESS
On reports that his comprehensive AI bill runs hundreds of pages, Rep. Jay Obernolte (R-CA) said, “If you’re going to do a comprehensive job at implementing a federal regulatory framework, it’s going to be detailed.” (Punchbowl)
Obernolte also said, “I’m 100% positive that we will have subcommittee hearings and markups.” (Punchbowl)
On support for Rep. Jay Obernolte’s comprehensive AI legislation, Rep. Sam Liccardo (D-CA) said, “I won’t be on the bill. If we’re going to preempt state regulation, we need to have clear conditions that ensure that there is a race to the top to safety. This bill is not going to reflect that approach, and so I’m stepping back.” (Politico)
About his committee’s hearing on data center power costs, House Energy and Commerce Chair Brett Guthrie (R-KY) said, “It’s clearly interstate commerce that we have to look at where the federal action should be. We want to make sure people don’t pay more for their personal electricity because a data center locates in their community.” (Politico)
On data center moratorium proposals, Sen. Tim Kaine (D-VA) said, “I think a moratorium would send the message to other nations, ‘Hey, the U.S. is giving up leadership in this space,’ and I don’t want to send that message.” (Politico)
On the Republican data privacy bills, House Energy and Commerce Ranking Member Frank Pallone (D-NJ) said, “We should be protecting the little guy with a bill that empowers consumers, not one that preempts consumer protections at the behest of Big Tech.” (Politico)
Sen. Jon Husted (R-OH) posted “Bad actors are using AI to prey on older Ohioans and steal their savings. I introduced the Preventing Deep Fake Scams Act to protect seniors and crack down on this fraud.”
Sen. Jim Banks (R-IN) posted “THE WHITE HOUSE: Announces Chinese foreign entities are running industrial-scale espionage campaigns to steal American AI. DEMOCRATS ON THE SAME DAY: Announce meetings with Chinese scientists to discuss AI. Make it make sense. These meetings should not be happening at all.”
Rep. Josh Gottheimer (D-NJ) posted “AI is advancing at lightning speed, with massive implications for jobs, national security, energy, healthcare, and everyday life — we have to be ready for that reality.”
Sen. Bernie Sanders (I-VT) posted, “The existential risk of artificial intelligence: Nearly every day, a new headline comes out about how AI is upending the world — from displacing workers, to negatively impacting our children’s emotional and cognitive well-being, to eviscerating our privacy, to threatening the integrity of our political institutions. And yet, as significant as these changes might seem, there is another AI development that could have an even more frightening impact.

Increasingly, AI scientists are concerned about the possibility that, if AI becomes smarter than human beings, we could lose control over this revolutionary technology and AI could turn against the human race with cataclysmic consequences. Yoshua Bengio, the most cited living scientist in the world, says ‘we’re playing with fire’ and ‘we still don’t know how to make sure [the machines] won’t turn against us.’ Geoffrey Hinton, the Nobel Prize-winning ‘godfather of AI,’ says there is a ‘10% to 20% chance [for AI] to wipe us out.’

They are not alone. In 2023, more than 1,000 leading AI experts, including Elon Musk, signed a letter warning that: ‘Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?’

What has happened since that letter was written? Has there been a pause on AI development? No. Has there been any international treaty to regulate AI? No. Has there been serious discussion in Congress about this existential threat? No.

The development of artificial intelligence may be the most consequential technological development in human history. We must make certain that AI benefits humanity, not hurts us.

On Wednesday, I’ll be hosting a discussion at the U.S. Capitol with leading AI scientists from the United States and China about the existential risks posed by AI and the need for international cooperation. I hope you will join us — in person or via livestream on my social media.”
Sanders also posted, “Uncontrolled AI poses a severe danger to all of humanity. On Wednesday, I’ll be hosting a discussion with leading AI scientists from the US and China about the need for international cooperation against this existential threat. This is an enormously important issue. Join us.”
Homeland Security Jailbroken AI Briefing
On asking an LLM how to kidnap a member of Congress, House Homeland Security Chair Andrew Garbarino (R-NY) said, “It spit out an answer in under three seconds. [It offered] ways to find them, where to look for them. You know, the best spots to do it.” (Politico)
Rep. Gabe Evans (R-CO) said, “What we saw in there with the jailbroken AI is what happens when you take those guardrails off of AI, and ask, ‘How do I make a nuclear bomb?’” (Politico)
Rep. Andy Ogles (R-TN) said, “What’s extraordinary about this presentation is how most of [the AI tools] are readily off-the-shelf and easy to access. That just increases the probability that the wrong person gets their hands on this.” (Politico)
Ogles posted “There is an AI arms race. Jailbroken AI used by our adversaries poses a monumental cyber threat. As Cyber Chairman, I am doing everything I can to stop it.”
Rep. August Pfluger (R-TX) said, “It’s really scary, because what AI is supposed to do is have some guardrails on certain things like, ‘How would you terrorize a school?’ ‘What type of weapons would you use?’” (Politico)
TAKE IT DOWN Act
On the TAKE IT DOWN Act, Senate Commerce Chair Ted Cruz (R-TX) said, “The Take It Down Act, which I authored with Senator Klobuchar, is instrumental in ensuring that predators who weaponize new technology to post exploitative filth will rightfully face criminal consequences. Because of our work to make it the law of the land, Big Tech can no longer turn a blind eye to the spread of this vile material. I am thrilled to see our legislation in action with this first conviction. Let it be known that individuals who commit these acts will be prosecuted to the fullest extent of the law.” (Press release)
Sen. Amy Klobuchar (D-MN) said, “Last year, we passed our major bipartisan legislation to take on the brutal sharing of nonconsensual sexual images, real or deepfake. Kids tragically commit suicide over the sharing of these images, and we knew we had to act. Now we have some good news: the first conviction under this law was secured of an Ohio man who victimized multiple people with his creation of pornographic deepfakes online. This conviction sends a clear message — perpetrators of these heinous crimes will be held accountable.” (Press release)
No Fakes Act
On his No Fakes Act to establish liability for the creation of unauthorized deepfakes of a person’s voice or likeness, Sen. Chris Coons (D-DE) said, “We have enough votes now to get it out of committee. I’m very optimistic.” (Punchbowl)
Sen. Thom Tillis (R-NC) added, “We’re getting a lot of support for it.” (Punchbowl)
Sen. Marsha Blackburn (R-TN) said, “Protecting creators is a critical part of the national AI framework President Trump has called on Congress to pass to establish overdue guardrails in the virtual space.” (Punchbowl)
Potential Democratic Divide on AI Legislative Approach
Rep. Alexandria Ocasio-Cortez (D-NY) said, “People need to feel more confident that their representatives are voting in the interests of their constituents, as opposed to the interests of their donors.” (Punchbowl)
Rep. Yvette Clarke (D-NY) said, “To lock yourself out of any opportunities to have really high-level conversations about how this technology is being developed… is foolhardy.” (Punchbowl)
Rep. Sam Liccardo (D-CA) said “As Democrats, it’s important for us to understand that AI is here to stay.” (Punchbowl)
On accepting AI-backed campaign donations, Rep. Zoe Lofgren (D-CA) said, “Every company has AI in it now.” (Punchbowl)
Sen. Bernie Sanders (I-VT) said, “Despite the inaction of Congress, despite the power of the tech lords, people are standing up and fighting back.” (Punchbowl)
Senate Banking, Housing and Urban Affairs Committee Hearing on the Nomination of Kevin Warsh to be a Member and Chairman Designate of the Federal Reserve Board of Governors Q&A
Chair Tim Scott (R-SC)
Chair Tim Scott (R-SC): “I’m sure my colleagues on either side will have a conversation with you and ask questions about AI. The AI future is going to have a massive impact on where we go as a nation. It’s got a massive impact on your dual mandate as it relates to full employment; our production may go up while our employment stays flat. So, this is a really important question that at some point we should delve into.”
Sen. Chris Van Hollen (D-MD)
Sen. Chris Van Hollen (D-MD): “Well, let me just say this was a pretty clear question about the framework in which these decisions are made. I have heard you talk about how AI may change that calculation. I will just say, and I think you know this, the Financial Times pointed out that economists reject Kevin Warsh’s claim that the AI boom will enable rate cuts. And I find it just implausible to suggest that by the end of this year, AI would produce such increases in productivity that it could result in a rate cut to below 1%, and you can’t tell me that wouldn’t very likely increase prices.”
Kevin Warsh: “So, Senator, can I say two things? Monetary policy, Senator, works with long and variable lags. Quite famously, if the Fed were to make a decision today about the conduct of policy, it’s likely to find its way to the real economy six, nine or 12 months later, so it’s difficult to judge policy today for an immediate result, and that would be my only concern about the framing of your question.”
Sen. Bernie Moreno (R-OH)
Sen. Bernie Moreno (R-OH): “Let’s shift over to AI. A lot has been said about that, and I worry that some of my colleagues don’t feel that this is quote, unquote real or happening as quick as it is. It is happening insanely fast. In fact, would you agree that it could lead to an employment shock, especially for entry level white collar jobs?”
Kevin Warsh: “Senator, I agree with you on the pace of the technology revolution. I’ve had to update my own priors versus six or 12 months ago, as I’ve seen the rate of improvement of the models. So now the way I describe it is this: I am more confident that there will be improved output than I am certain about when the effects of that will show up in the labor market. There is a fallacious economic tenet described as the lump of labor fallacy: we tend to think in economics that there’s only a fixed number of jobs, and we’ve got to fill them. But the labor force and the structure of the labor market change, and jobs will be created two or three years from now, some of which are unimaginable to us today. The lag between the improvement in output and the effect on the labor markets, that’s got to be central to the Fed’s thinking, given the pace of innovation in this cycle.”
Sen. John Neely Kennedy (R-LA)
Sen. John Neely Kennedy (R-LA): “…I’ve heard your argument the last few months that artificial intelligence has made us so productive, labor so productive, that companies don’t have to raise prices, therefore inflation isn’t a problem, therefore rates can be cut. Do you really believe that right now?”
Kevin Warsh: “That is not how I would characterize the story on AI.”
Kennedy: “Okay, but you’ve said what I just said, haven’t you?”
Warsh: “I have said that this is the most disruptive moment in modern economic history, in the US and the world. I’ve said that artificial intelligence, AI…”
Kennedy: “…Here’s my worry, that a lot of this stuff about artificial intelligence making us more productive is a bunch of hype by people who want to sell stock in an IPO. Okay, I’d be careful there.”
Sen. Lisa Blunt Rochester (D-DE)
Sen. Lisa Blunt Rochester (D-DE): “I want to jump to AI, because it’s probably one of the number one issues that I hear about in my state. I’m a former secretary of labor in Delaware. This is something that I asked Chair Powell when he was here as well, and a lot has happened in a year. You have characterized this as the most productivity-enhancing wave of our lifetimes, past, present and future. You have described it as structurally disinflationary, and said that central bankers must make a bet. I’m in the camp with Senator Kennedy. I’m concerned about us making a bet on something that we don’t know. You’ve said it, Chairman Powell said it: we don’t have the data to even understand yet. And so my first question is, what happens for policy if that surge doesn’t materialize as expected?”
Kevin Warsh: “Yeah, so Senator, I enjoyed our discussion. I think the essential elements of new policy for the Federal Reserve are to get access to better data and to dig deeper into the productivity possibilities that can come out of this new investment wave. Today, we call it artificial intelligence. Two years from now, we’re going to call it business capex, and three years from now, we’re going to call it just ordinary business. I think it has two important effects on the conduct of policy. I don’t claim to have perfect knowledge of how any of these are going to go, but I do have an intuition that the pace of change is accelerating.”
Blunt Rochester: “I was just going to ask how much of your view on interest rates depends on those productivity gains showing up quickly.”
Warsh: “Yeah. So, I think it has two elements. One is the increase in capital expenditures to build data centers and the rest; that will have an effect on demand, my guess is a few tenths of 1%. But on the supply side of the economy, the increase in the potential output of the economy could be considerably bigger. We don’t know that. We can’t bank on that, but considerable work needs to be done by the Federal Reserve in evaluating this productivity wave. As I said before, monetary policy works with long and variable lags, so you have to make some informed judgments. And unlike other people in office, if the judgments are wrong, you’ve got to call the flag on yourself and correct them.”
Blunt Rochester: “Well, I think there’s been a lot of conversation here about concerns that, in your record and your history, you have been hawkish on inflation rates and keeping them low. And now we’re looking at AI. What I don’t want to see is us using AI as an excuse for not making good policy. Too much depends on it. Too many families’ lives depend on it. And in our conversation, I also talked about the fact that I know Wall Street is going to be okay, but who we’re concerned about, as well, is Main Street.”
What I’m Reading This Week
IRS May Be Coming for Your Artificial Intelligence Chats, Tax Notes.
Anthropic’s ‘Mythos’ AI Can Hack Nearly Anything and We Aren’t Ready, Kemba Walden, Fortune.
He Warned About the Dangers of A.I. If Only His Father Had Listened, Teddy Rosenbluth, The New York Times.
Unauthorized Group Has Gained Access to Anthropic’s Exclusive Cyber Tool Mythos, TechCrunch.
Anthropic’s New A.I. Model Sets Off Global Alarms, Paul Mozur and Adam Satariano, The New York Times.
About Zero One Strategies
Zero One Strategies is a specialized government relations practice dedicated to navigating the complex landscape of U.S. federal policy in emerging technologies. As advancements in technology continue to outpace regulatory frameworks, Zero One Strategies aims to provide strategic guidance and bipartisan advocacy for innovators and businesses operating at the forefront of technological development.
The practice focuses on key areas such as artificial intelligence, digital assets, blockchain, decentralized technologies, cybersecurity, data, and digital infrastructure, as well as the multiple policy issues impacting these sectors, including tax and financial services.
Contact us at Stacey@ZeroOneStrategies.com