Legislative and regulatory

Uncharted territory: Artificial intelligence sparks fascination and concern throughout Washington
Key takeaways

Dozens of bills related to AI have been introduced in Congress in 2023, touching on a wide variety of policy areas like energy, health care, and homeland security.


The Biden administration has engaged in a full-court press from all corners of the Executive Branch in an attempt to get a handle on this fast-evolving landscape.


Modern AI systems have become the subject of an increasing amount of copyright litigation as companies and government officials try to keep up with the dynamic new technology.

Artificial intelligence (AI) has become the hot topic of the moment – not just in Washington, D.C., but across the globe. Although technology companies have been working to develop different kinds of AI for years, including many technologies that we’ve already been using, OpenAI’s launch of ChatGPT in November 2022 seemed to open the lid on a slate of new AI tools overnight. Soon after, Microsoft announced its plans to invest billions of dollars in OpenAI; Amazon and Google launched their own generative AI tools; TikTok introduced AI-generated profile pictures for users; Elon Musk announced his intention to create “TruthGPT”; and China’s Alibaba and Huawei released their own versions of AI chatbots, AliChat and HiBot.

The meteoric rise of AI, however, has also generated a healthy dose of caution and concern among tech leaders, researchers, governments, and experts in the field. In March of this year, more than 1,000 tech leaders and researchers penned an open letter warning about the “profound risks to society and humanity” posed by AI. Congress and the Biden administration are now taking measures to examine and mitigate these risks, as well as gain insight into how this technology can be used for the benefit of American society. Let’s take a look at what steps they’ve taken so far and where they expect to go. 


In June, Senate Majority Leader Chuck Schumer (D-NY) announced a series of three all-Senators briefings focusing on the current state of AI, where the technology is headed in the future, the national security implications it presents, and how it’s being used by our adversaries. Schumer also unveiled his SAFE Innovation Framework – a blueprint for a bipartisan policy response with five central policy objectives:

  1. Security: Safeguard our national security with AI, determine how adversaries use it, and ensure economic security for workers by mitigating and responding to job loss.
  2. Accountability: Support the deployment of responsible systems to address misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability.
  3. Foundations: Require that AI systems align with our democratic values, protect our elections, promote AI’s societal benefits while avoiding potential harms, and stop the Chinese government from writing the rules of the road on AI.
  4. Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content.
  5. Innovation: Support US-led innovation – including innovation in security, transparency and accountability – that focuses on unlocking the immense potential of AI and maintaining US leadership in the technology.

Schumer also announced that he has tasked committee chairs and other senators with developing bipartisan legislation, and recently previewed a series of nine AI Insight Forums coming this fall to discuss copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, transparency, and elections and democracy. He tapped Senators Martin Heinrich (D-NM), Mike Rounds (R-SD), and Todd Young (R-IN) as co-organizers.

Lawmakers in the House are also holding AI briefings. In July, the New Democrat Coalition and the Republican Governance Group held a joint meeting to hear from experts on the current state of AI technology. Additionally, House Speaker Kevin McCarthy (R-CA) recently tapped Rep. Jay Obernolte (R-CA), who holds an advanced degree in AI and has a background as a software developer, to lead a bipartisan task force on AI.

Dozens of AI-related bills touching on policy areas such as energy, health care, and homeland security have been introduced since the beginning of the year. Some of these include:

  • The No Section 230 Immunity for AI Act by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT).
  • The Block Nuclear Launch by Autonomous Artificial Intelligence Act by Reps. Ted Lieu (D-CA), Ken Buck (R-CO), and Don Beyer (D-VA), and Sens. Ed Markey (D-MA), Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Bernie Sanders (I-VT).
  • The CLOUD AI Act by Reps. Jeff Jackson (D-NC), Michael Lawler (R-NY), Jasmine Crockett (D-TX), and Rich McCormick (R-GA).
  • The AI LEAD Act by Sens. Gary Peters (D-MI) and John Cornyn (R-TX).

Hearings have also been held in committees of jurisdiction, including the House Science, Space, and Technology Committee; the House Oversight and Investigations Committee; and the Senate Homeland Security and Judiciary Committees.

Beyond risks to national security and civil rights, lawmakers have also expressed interest in preventing consumers from being victimized by AI scams. Senate Aging Committee Chair Bob Casey (D-PA) and Ranking Member Mike Braun (R-IN) sent a letter to the Biden administration earlier this year urging action to protect seniors from AI scams. Similarly, Senators Maggie Hassan (D-NH), Chuck Grassley (R-IA), Ron Wyden (D-OR), and James Lankford (R-OK) sent a letter to IRS Commissioner Danny Werfel expressing concerns about tax scams developed with AI technology.


In October 2022, the White House Office of Science and Technology Policy (OSTP) introduced the Blueprint for an AI Bill of Rights. It highlights five principles that should be used to guide the design, use, and deployment of AI to protect Americans’ civil rights:

1. Safe and effective systems

2. Algorithmic discrimination protections

3. Data privacy

4. Notice and explanation

5. Human alternatives, consideration, and fallback

Since then, the administration has engaged in a full-court press from all corners of the Executive Branch in an attempt to get a handle on this fast-evolving landscape.

January 2023
  • The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework. It was followed in March by the launch of the NIST Trustworthy and Responsible AI Resource Center and the announcement of a new public working group on AI.
February 2023
  • President Biden issued an executive order directing Federal agencies to “root out bias” in how new technologies, like AI, are designed and used.
May 2023
  • Vice President Harris hosted the CEOs of OpenAI, Anthropic, Alphabet, and Microsoft to emphasize the need to responsibly advance AI technology.
  • The White House announced an updated National AI R&D Strategic Plan outlining priorities for federal investments in AI research and development.
  • The White House issued a Request For Information seeking input on national priorities for mitigating AI risks.
  • The Department of Education issued a report on AI and the future of teaching and learning.
  • G7 leaders called for the development of international AI standards.
June 2023
  • The National Artificial Intelligence Advisory Committee delivered its first report to President Biden with recommendations for the US government to reap the benefits of AI technology while minimizing potential harms.
July 2023
  • The White House secured commitments from a handful of AI companies to increase the safety, trust, and security of AI systems by investing in cybersecurity, testing the security of AI systems before they are released to the public, developing watermarking systems to let users know when content is AI-generated, and prioritizing research on AI risks and biases.
  • The Federal Communications Commission and the National Science Foundation held a joint workshop on the “Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.”
  • The White House previewed an upcoming executive order on a “whole of government” approach to AI. White House officials also emphasized that they will continue to hold conversations with allies and partners across the globe, including at G7 meetings and the upcoming UK AI Safety Summit. 


Modern artificial intelligence systems, particularly those that rely on generative AI models to produce or reproduce certain types of content, have become the subject of an increasing amount of copyright litigation as companies and government officials try to keep up with the dynamic new technology.

Because of how the software uses existing content filtered through algorithms to generate its own renderings, there is inherent risk that the final product will be noticeably similar to other existing work that may be covered by copyright protections. Getty Images, a popular photo licensing company, recently filed suit against Stability AI over allegations that its Stable Diffusion software copied 12 million of Getty’s images without permission. Another class action lawsuit involving Microsoft, GitHub, and OpenAI alleges that AI software developed by each company fails to comply with licensing terms and unlawfully reproduces lines of code.

Another source of legal concern is AI’s ability to generate lifelike content, known as deepfakes: realistic videos or images of people or events that never occurred. The issue had the potential to be decided by the Supreme Court as part of proceedings relating to Section 230, a law that protects online platform providers from being held liable for content posted to their websites, but the court declined to take up the case during this session. Look for it to be back before the court soon.


The recent innovation in generative AI has sparked both fascination and concern across Washington, D.C., and foreign governments alike. And as competition with China remains at the forefront of federal government discussion, there is bipartisan agreement that the United States should be a leader both in developing new AI technologies and in shaping the global regulatory framework and best practices to mitigate risks. As the shiny new toy in Washington, AI is expected to lead major policy conversations in Congress and the Biden administration for months and, likely, years to come.

With contributions from KDCR Partners