February 16: This week in AI federal policy
DC/ai Decoded: A weekly newsletter on developments in artificial intelligence and quantum federal policy
This week decoded
The Department of Labor released a new framework for AI literacy, defining five foundational content areas and seven guiding principles for effective delivery.
The partial government shutdown affecting the Department of Homeland Security has resulted in a temporary lapse in funding for CISA, forcing employees to either face furloughs or continue working without pay.
On Capitol Hill, last week’s Senate and House hearings with SEC Chair Paul Atkins on SEC oversight centered on the use of AI agents in financial services and the Commission’s anticipated innovation exemption.
Read more below
Congress
Hearings
Last week
On February 11, the House Education and the Workforce, Workforce Protections Subcommittee held a hearing on “Building an AI-Ready America: Safer Workplaces Through Smarter Technology.”
On February 11, the House Financial Services Committee held a hearing on “Oversight of the Securities and Exchange Commission.”
On February 12, the Senate Banking, Housing, and Urban Affairs Committee held a hearing on “Oversight of the U.S. Securities and Exchange Commission.”
Legislation
The Senate Commerce, Science, and Transportation Committee passed the National Programmable Cloud Lab Network (NPCLN) Act to create a national network of six remotely accessible programmable cloud laboratories (PCLs) for academic research. (Text)
Sens. Jerry Moran (R-KS) and Maria Cantwell (D-WA) introduced the Small Business Artificial Intelligence Training Act that would authorize the Department of Commerce to work with the Small Business Administration to create and distribute artificial intelligence training resources and tools to help small businesses leverage AI in their operations. (Text)
Sens. Adam Schiff (D-CA) and John Curtis (R-UT) introduced the Copyright Labeling and Ethical AI Reporting (CLEAR) Act to require companies to disclose their use of copyrighted work to train generative AI models, implementing ethical guidelines and protections to promote transparency. (Text)
Reps. Josh Gottheimer (D-NJ) and Michael Lawler (R-NY) introduced the AI Workforce Training Act to amend the Internal Revenue Code to establish a federal tax credit for businesses that invest in artificial intelligence training for their employees. (Text)
Sens. Raphael Warnock (D-GA) and Dick Durbin (D-IL) and Rep. Brad Schneider (D-IL) introduced the Investing In Tomorrow’s Workforce Act to create a grant program through the Department of Labor to support industry partnerships in developing training programs for workers who are, or are likely to become, dislocated because of advances in technology. The bill also increases funding for National Dislocated Worker Grants and amends the Workforce Innovation and Opportunity Act (WIOA) to ensure workers who are dislocated by automation are included in WIOA programs. (Press release)
Sens. Cory Booker (D-NJ), Mike Rounds (R-SD), and Martin Heinrich (D-NM) and Reps. Ted Lieu (D-CA) and Jay Obernolte (R-CA) introduced a bill to authorize the Director of the National Science Foundation to identify grand challenges and award competitive prizes for artificial intelligence research and development. (Text)
Correspondence
Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) sent a letter to the Chief Executive Officer of Miko, Inc., launching an investigation into why the company exposed sensitive data involving children through an unsecured, publicly accessible dataset. (Letter)
Sens. Adam Schiff (D-CA), Jon Ossoff (D-GA), Chris Van Hollen (D-MD), Richard Durbin (D-IL), John Hickenlooper (D-CO), and Reverend Raphael Warnock (D-GA) sent a letter to Department of Defense (DoD) Secretary Pete Hegseth requesting information about DoD plans to use the xAI chatbot Grok, despite reports that the chatbot has promoted antisemitic and other dangerous content. (Letter)
Reps. Sydney Kamlager-Dove (D-CA), Gregory Meeks (D-NY), Jim Himes (D-CT), Robert Garcia (D-CA), Adam Smith (D-WA), Grace Meng (D-NY), Lois Frankel (D-FL), Jimmy Gomez (D-CA), Brad Sherman (D-CA), George Latimer (D-NY), Johnny Olszewski (D-MD), Bill Keating (D-MA), Ami Bera (D-CA), Gabe Amo (D-RI), and Julie Johnson (D-TX) sent a letter to the Acting Inspector General for the U.S. Department of Commerce demanding an investigation into financial conflicts of interest surrounding President Trump’s decision to approve the export of 500,000 advanced AI chips to the United Arab Emirates in May 2025. (Letter)
House Select Committee on China Chairman John Moolenaar (R-MI) and House Foreign Affairs Committee Chairman Brian Mast (R-FL), House Foreign Affairs Ranking Member Gregory Meeks (D-NY), and Reps. Bill Huizenga (R-MI), Sydney Kamlager-Dove (D-CA), Greg Stanton (D-AZ), Michael Baumgartner (R-WA), and Johnny Olszewski (D-MD) sent a letter to Secretary of State Marco Rubio and Secretary of Commerce Howard Lutnick on the need for closer cooperation with partners and allies to restrict China’s access to advanced semiconductor manufacturing equipment. (Letter)
Publications, Meetings, and Events
House Science, Space and Technology Committee Chair Brian Babin (R-TX) published an op-ed in the Washington Times entitled, “America Must Win the Artificial Intelligence Race with China,” saying, “Our jurisdiction spans the Department of Energy’s National Laboratories, the National Science Foundation’s AI research and the National Institute of Standards and Technology’s leadership on voluntary standards and model evaluations. That places us at the center of America’s AI ecosystem. We are advancing practical legislation to expand access to federal computing resources, strengthen research and development, train the next-generation workforce and promote the consensus-based standards that have made U.S. technology the global benchmark.” (Op-ed)
Rep. Sam Liccardo (D-CA) published an op-ed in the Wall Street Journal entitled, “Congress Shouldn’t Stop AI Innovation,” saying, “No amount of regulation will stop AI development. Slowing innovation will ensure that China writes the rules for the next century. If the Democrats retake Congress in November, we must advance policies that acknowledge the automation-related job loss and rising utility bills Americans endured long before the arrival of ChatGPT. AI certainly could exacerbate those maladies, but we can better address them by taking advantage of the AI infrastructure build-out to spur overdue policy change.” (Op-ed)
Trump Administration
Department of Labor (DOL)
DOL’s Employment and Training Administration published a framework for AI literacy, outlining five foundational content areas and seven delivery principles. (Framework)
National Science Foundation (NSF)
The NSF, in coordination with partner agencies from Australia, India, and Japan, announced the first cohort of awards made under the Advancing Innovations for Empowering NextGen AGriculturE (AI-ENGAGE) initiative to support six international research projects that will provide artificial intelligence and critical emerging technologies to farmers across the United States and the Indo-Pacific region. (Press release)
National Institute of Standards and Technology (NIST)
NIST announced funding totaling $3.19 million to eight small businesses in seven states under the Small Business Innovation Research (SBIR) program to support research and development related to artificial intelligence, medical diagnostics, biotechnology, semiconductors, quantum and other key technologies. (Press release)
Cybersecurity and Infrastructure Security Agency (CISA)
Due to the partial government shutdown of the Department of Homeland Security, funding for CISA has temporarily lapsed and CISA employees are furloughed or working without pay. (Politico)
Government Publishing Office (GPO)
On March 4, the GPO will hold a virtual meeting of the Depository Library Council to discuss “Understanding AI bias: how it arises and how to respond.”
Noteworthy Quotes and Events
ADMINISTRATION
Cybersecurity and Infrastructure Security Agency (CISA)
In House testimony, CISA Acting Director Madhu Gottumukkala said, “A shutdown would degrade our capacity to provide timely and actionable guidance to help partners defend their networks.” (Politico)
Commodity Futures Trading Commission (CFTC)
Chair Mike Selig posted “Innovators are harnessing technologies such as artificial intelligence, blockchain, and cloud computing to modernize legacy financial systems and build entirely new ones. Under my leadership, the Commission will develop fit-for-purpose market structure regulations for this new frontier of finance. The Innovation Advisory Committee will play a critical role in advising the Commission on the commercial, economic, and practical considerations of emerging products, platforms, and business models in the financial markets so that it can develop clear rules of the road for the Golden Age of American Financial Markets.”
CONGRESS
Senate Banking SEC Oversight Hearing Q&A
Sen. Mike Rounds (R-SD)
Sen. Mike Rounds (R-SD): “I want to move back into an area that I think is really important, and that is the development of artificial intelligence and its use. Under the Biden Administration, we saw securities regulation used to achieve some political and social objectives, often at the expense of investors and small businesses. I want to thank you for your commitment to returning the SEC to its core mission. Chairman Atkins, the administration’s 2025 AI Action Plan encourages the development of regulatory sandboxes at independent agencies, including the SEC. This venue would allow SEC-regulated entities, such as broker-dealers and investment advisors, to test new AI tools under structured oversight. Now, I’ve introduced bipartisan legislation with Senators Heinrich, Kim, and Tillis that would do just that. Do you believe that our legislation would give the SEC the tools it needs to foster responsible AI innovation, and could it serve as a useful model as the SEC implements the AI Action Plan?”
SEC Chair Paul Atkins: “Well, thank you, Senator. I haven’t actually had the opportunity to review your language; happy to do that and discuss it. But I agree with the premise very much. I think it would be very useful, and I’ve been talking about an innovation exemption to begin that at the SEC, to allow entrepreneurs in a sandbox-like environment that’s cabined: you know, time limited, transparent, flexible, and focused on investor protection. So, all of those principles, I think, are important, to allow people to try different things in a particular environment and then prove their concept.”
Rounds: “It’s not going away, is it?”
Atkins: “No, no.”
Sen. Mark Warner (D-VA)
Sen. Mark Warner (D-VA): “We all know AI is going to transform everything. I think it’s clearly going to transform markets, banking, both at the banker and broker-dealer level, but also at the retail level. And one of the things I’m really concerned about is kind of agentic AI, where the AI tools can self-execute, do trades without any human input. And I wonder, if we don’t have some guardrails here, are people going to be able to say, ‘Well, my agentic AI agent did something that was clearly illegal or bad, but I didn’t, you know, that was that tool. It was not me as an individual or me as a firm.’ So, I’ve got a couple questions. First, do you think banks and broker-dealers have appropriate guardrails in place today to make sure that that agentic AI agent doesn’t commit malfeasance or something illegal?”
SEC Chair Paul Atkins: “Yeah, well, those are very good questions, and I share your concern. This is a new technology, obviously, and people are still experimenting with it. And so I can’t really say, you know, what’s going on with respect to individual broker-dealers or anything else, but I do agree that, you know, we can’t allow that.”
Warner: “I think there’d be a lot of bipartisan interest in helping on this. And then when we take it down to, like, the retail investor. I just want to make sure that you know that the same fair dealing and conflict of interest standards that exist for the retail investor would also apply to that agentic AI agent who is acting on behalf of the retail investor.”
Atkins: “Yes, so I think that’s important. And so whichever way this technology grows and changes, I think we have to be very attuned to those potential problems. So even in this last discussion here about proxies and whatnot, obviously, we’ve seen proxy advisory firms. We’ve seen banks now starting to rely on AI rather than, you know, hiring out, forming it out to third party advisors.”
Warner: “Yeah, and again, we saw, was it this week or last week or, you know, I think one of the new tools came out, and all the wealth management firms took a huge hit on the marketplace, so they thought AI was going to take over. I, I just think this is moving so quickly, it would be great if we could get a little bit ahead of it, and we’re not trying to chase it after the fact.”
Atkins: “Yeah, we have a special task force at the SEC, headed by a former enforcement attorney, Valerie Szczepanik, who’s looking at AI for tools for us, for example, at the SEC, with respect to enforcement, Corporation Finance reviews and things like that.”
House Financial Services on SEC Oversight Q&A
Rep. Bill Foster (D-IL)
Rep. Bill Foster (D-IL): “Thank you, Mr. Chair and our witness. I presume that you’ve been encouraged to use artificial intelligence in all of your workings from the White House. It was a general directive. And obviously industry is moving very fast that way. Now, in terms of the 1,000-page prop that you have there, how long do you think it would take AI to summarize that, as in one paragraph, one page, 10 pages, whatever level of detail you wanted? Maybe a few seconds. I don’t know. Probably neither of us is an expert on that, but I imagine it could be done in a few minutes. And so, in terms of a burden on investors, it seems like having a lot of information there is not a burden on investors in an AI world. And so I think that’s an important thing that should go into your thinking as to what the appropriate level of disclosure is. I believe that the investor is the best judge of materiality. And if we err on the side of having a lot, you know, probably 95% of what’s in that stack of paper is not material. But there could be a footnote in there that is absolutely crucial. No human is likely to pick it up. As you point out, no human is going to read that stack of paper, but an AI can, and find that footnote. And so it seems like, you know, the whole narrative of let’s get to less disclosure is going in the wrong direction for an AI world. You know, I imagine that that stack of paper is probably 90% generated by AI, or soon will be if it’s not. And so I think that it’s a much better position if we have, you know, the AIs, for IPOs and things like that, generate a lot of information, and then let the AIs of potential investors look at it in great detail, that you’ll end up with a more efficient market and fewer mistakes being made by investors due to lack of information. So, have you been thinking about that sort of thing when you puzzle through what’s the right level of information?”
Atkins: “Oh, absolutely. But again, when you think of the cost and the amount of work that goes into this sort of disclosure, with the lawyers and everything else, you know, what sort of bang for the buck are we getting?”
Foster: “But anyway, and lawyers are going to get crushed too with AI, you know. I mean, most companies have everything electronically. They just turn their AI loose on that and generate, you know, it is coming at us fast. Yeah, well, that’s a good point. Anyway, so I really encourage you to think about that, because I think that’s going to be a much safer market where there’s a lot of information, and each investor, or the investor’s personal AI advisor, will have access to that information, but summarized. You know, just in the case of climate, a lot of investors are going to think climate is material. Others will think it’s immaterial, and we’re in a better position if it is the potential investor talking to their AI advisor, saying, I do or I don’t care about climate, and then getting a summary that reflects their preferences.”
Miscellaneous
Sen. Elizabeth Warren (D-MA) issued a statement on comments by LTG Joshua Rudd, nominee to head U.S. Cyber Command and the National Security Agency, confirming that China is seeking to acquire advanced AI chips to accelerate its development of AI-enhanced weapons. Warren said, “A senior military officer nominated by President Trump is warning that China is aggressively seeking to acquire advanced AI chips to accelerate its development of AI-enhanced weapons. This Administration has failed to take these risks seriously. I will keep working with my colleagues to pass bipartisan legislation to protect U.S. economic and national security.” (Press release)
Rep. Sarah McBride (D-DE) said she plans to focus on “scalpel solutions, not sledgehammer solutions” for AI, adding, “I don’t think we’re going to figure this all out and get something major on the thorniest of issues passed before January of 2027. These issues… will inevitably bleed into a new Congress, whomever is in the majority.” (Punchbowl)
Rep. Sam Liccardo (D-CA) intends to focus on “a human in the loop if you’re deploying AI-enabled software to ensure that the outcomes of your lending decisions are not discriminatory and the treatment of the financial data is secure.” (Punchbowl)
Rep. April McClain Delaney (D-MD) said of AI and quantum, “We should have some standards. I do think that there should be some coordination, and I do think that we should have greater collaboration, including with the National Science Foundation.” She added, “I think NIST is the right place to do much of that framework because it has deep expertise in quantum and many aspects of AI. But I hesitate to limit it only to NIST, because other stakeholders, including universities and private companies, also need to be involved.” She also said, “I think we almost have to have a new deal on infrastructure, AI, data, and platform accountability that comes together, because what we have now does not meet the moment.” (MeriTalk)
Sen. John Cornyn (R-TX) posted “AI’s energy demands are fueling a nuclear comeback”
Sen. Chris Van Hollen (D-MD) posted “At this perilous time in our country, we need quality journalism more than ever. Using AI to fill a page and calling it “analysis” is the opposite of that. The Baltimore Sun Guild is right to call this out. It’s a disservice to the Sun’s readers and its reporters.”
Sen. Mark Warner (D-VA) posted “We’re not ready for the impact AI is going to have on our workforce. Every week in the Senate, I am working to make sure we’re building an economy that can withstand these massive upheavals.”
Sen. Jack Reed (D-RI) posted “RED ALERT: AI romance scams are up this time of year. Know how to spot red flags from V-Day romance scammers”
Sen. Marsha Blackburn (R-TN) posted “After we confronted Miko for exposing sensitive data involving children to the public, the company scrubbed this information and denied any wrongdoing. Miko needs to answer to the American people for violating kids’ privacy and putting children at risk.”
Sen. Bernie Sanders (I-VT) posted “Microsoft AI CEO Mustafa Suleyman says most white-collar work “will be fully automated by an AI within the next 12 to 18 months.” If that’s true, it’s an economic earthquake. We need a moratorium on new AI data centers to make sure AI works for workers, not just billionaires.”
Sanders also posted “Electricity, the automobile, radio, TV, computers, the internet—all changed the way we live. But they pale in comparison to the revolution AI will bring about. The question is: who benefits? It must work for working families, not just billionaires.”
Sen. Ed Markey (D-MA) posted “What’s described here isn’t isolated—it’s systemic. Embedded bias in AI has real-world consequences. My AI Civil Rights Act would put strict guardrails on algorithms to prevent bias and discrimination. Innovation without safeguards isn’t progress.”
Rep. Valerie Foushee (D-NC) posted “It was great to meet with Democrat members of the former bipartisan AI Task Force with my HouseDemocrats Commission on AI colleagues. Republicans are blocking all legislation to prevent AI job cuts and protect civil rights in the age of AI, but we remain ready to pass AI guardrails immediately to protect our communities.”
House Science Committee posted “Chairman RepBrianBabin lays out why America must lead — and win — the global AI race. AI will shape our economy, our security, and our future.”
Sen. Dick Durbin (D-IL) posted “The rapid development of AI and other automating technologies will profoundly transform the American workplace. I joined SenatorWarnock and RepSchneider to introduce the Investing in Tomorrow’s Workforce Act to equip American workers for the future and ensure no one is left behind in the age of automation.”
Rep. Ilhan Omar (D-MN) posted “AI is shaping the future whether we like it or not. Technology itself is not the problem. The problem is deploying powerful tools without guardrails, transparency, or accountability. AI should be a tool that works for people – not one that extracts value and burdens workers.”
Rep. Zach Nunn (R-IA) posted “Taiwan’s greatest threat? The Chinese Communist Party. Taiwan’s greatest ally? The United States of America. I’ve seen Beijing’s coercion tactics firsthand. Our U.S.-Taiwan Defense Innovation Partnership Act strengthens U.S. coordination on AI to counter the CCP’s malign influence in the Indo-Pacific.”
Rep. Mark DeSaulnier (D-CA) posted “The wealth gap between big corporations and the working people who make these companies’ success possible has been widening for decades and will only be exacerbated by the unchecked rise of AI. As a senior member EdWorkforceDems, I’m pushing my colleagues in Congress to act now to protect American workers by both implementing commonsense guardrails on AI and ensuring the ultrawealthy pay their fair share.”
Sen. Adam Schiff (D-CA) posted “As our digital landscape rapidly evolves, it is important that we protect the work of those [who] bring their talents to music, television and film. I’m working with SenJohnCurtis to pass the CLEAR Act, which will guarantee transparency in the development of AI models to ensure that digital creators are fairly compensated for their work.”
House Homeland Security Committee Chair Andrew Garbarino (R-NY) said, “CISA is defending our networks from relentless adversaries while preparing for midterm elections this fall. It is unacceptable that many of these frontline personnel could lose their paychecks for the second time in six months because of Washington’s dysfunction.” (Politico)
Sen. Chris Murphy (D-CT) delivered remarks at the Munich Security Conference on “Outsmarting Ourselves? Risks and Rewards of the AI Race,” saying, “Our primary victory, in the short run, is to be able to work together or separately on a regulatory regime that doesn’t collapse our economy as we lose jobs at a pace that creates political disintegration, and that we don’t collapse our cultures spiritually as we transition the human race away from the core basic meanings and purposes - conversation, friendship, creativity, problem solving - to machines too quickly… I wake up every day thinking that victory looks like preventing the worst, more so than capturing the best. But obviously you want that balance in the end, that’s victory to me.” (Press release)
On unregulated AI and the unchecked power of tech leaders, Murphy said, “What freaks people out in the United States is the idea that there are going to be only a handful of companies that are going to control these enormous data sets and these enormously powerful capabilities, and that one person could wake up one day and decide to make a tweak to the code, the algorithm, the way in which the LLM works, and your existence has fundamentally changed.” (Press release)
On the impact of political spending on AI regulation, Murphy said, “A mandate for an American leader to prioritize working internationally to regulate AI doesn’t come from nowhere. The industry right now is spending millions of dollars trying to suppress conversation in the United States at the state level, and at the federal level, around a regulatory framework, and so it becomes very difficult for there to be any president who’s going to prioritize bringing this conversation to China or to our allies, if there are hundreds of millions of dollars being spent on our politics in the United States by the AI companies and by the technology companies trying to destroy any enthusiasm or conversation about regulation, that’s the political reality.” (Press release)
On precedential regulation, Murphy said, “There is plenty of history in the way that the United States regulates emerging industries to show that we actually can prioritize protecting consumers, protecting human beings, rather than allocating who wins in a market or who loses in a market. You know, if you just started with some very easy steps, like requiring watermarks for AI generated content, keeping kids away from these chat bots that doesn’t give advantage to one company or another, it just protects human beings from the immediate abuses.” (Press release)
Murphy posted “The AI leaders came to the Munich Security Conference acting like they want international regulation. Yet they are spending hundreds of millions of dollars in American elections trying to stamp out any conversation about regulation.”
Murphy also posted “We need to ask ourselves an important question about our AI future. What happens to the human soul if we outsource basic functions - like friendship, creativity and problem solving - to machines? The answer shouldn’t just be left to industry. It needs to involve all of us.”
What I’m Reading This Week
IRS Adopts Generative Artificial Intelligence Policy, Benjamin Valdez, Tax Notes.
AI Raises the Stakes for National Security. Here’s How to Get it Right, Chris Lehane, FoxNews.
AI’s Bitter Rivalry Heads to Washington, France24.
AI Is Shaping Up to Be America’s Next Political Fault Line, Matthew Urwin, BuiltIn.
Beyond the Chips: A Better Strategy for AI Dominance, Sharon Squassoni, National Interest.
Why are Chinese AI models dominating open-source as Western labs step back?, Dashveenjit Kaur, AI News.
About Zero One Strategies
Zero One Strategies is a specialized government relations practice dedicated to navigating the complex landscape of U.S. federal policy in emerging technologies. As advancements in technology continue to outpace regulatory frameworks, Zero One Strategies aims to provide strategic guidance and bipartisan advocacy for innovators and businesses operating at the forefront of technological development.
The practice focuses on key areas such as artificial intelligence, digital assets, blockchain, decentralized technologies, cybersecurity, data, and digital infrastructure, as well as the multiple policy issues impacting these sectors, including tax and financial services.
Contact us at Stacey@ZeroOneStrategies.com