Virginia’s AI Bill Screeches to a Halt

Cozen O’Connor’s Emerging Data Privacy Trends practice is pleased to announce the creation of its new sub-practice, Artificial Intelligence and Data Privacy. Among other tasks, this sub-practice will monitor and analyze legislation that regulates developers and deployers of artificial intelligence. Up first is Virginia, where Governor Youngkin recently vetoed pending AI regulation. For businesses creating and utilizing AI technologies, the veto of House Bill 2094 is a favorable development. It signals a preference for executive-led initiatives and voluntary standards over prescriptive laws that could impose burdensome compliance requirements. In Virginia, companies can expect a more flexible regulatory environment that encourages innovation while maintaining basic consumer protections—at least for now. While this result is favorable for many of our clients who rely on algorithmic systems in commercially sensitive contexts, the broader regulatory conversation is far from over. The key for businesses is to begin preparing now.

On the heels of Colorado’s first-of-its-kind AI consumer protection law, Virginia sought to become just the second state to pass comprehensive legislation regulating the intersection of AI and consumer protection. Long known for its strong technology sector and proximity to federal regulators, Virginia had the opportunity to set a national standard with House Bill 2094 (HB 2094) to establish a regulatory framework for “high-risk” AI systems and prohibit algorithmic discrimination in housing, employment, education, and credit. Governor Youngkin, however, had other ideas. Invoking Executive Order 30 (EO 30), the Governor vetoed HB 2094 on March 24, 2025, citing concerns over a “burdensome artificial intelligence regulatory framework” that would “stifl[e] the AI industry as it is taking off.” The veto has sparked significant discussion within the legal and tech communities, especially given Virginia’s position as a bellwether for state-level AI governance.

In the meantime, similar bills are pending in dozens of states. Staying on top of these developments is critical for businesses that create and utilize artificial intelligence. Colorado’s law will take effect in 2026, and while it provides no private right of action, Attorney General Weiser has been no stranger to deploying his resources to address privacy concerns. In preparing for these new regulatory frameworks, companies would be well served by partnering strategically with counsel to ensure compliance and limit their exposure to liability. Cozen welcomes these partnerships and stands ready to advise its clients on how to navigate what may seem like uncharted territory.

Executive Order 30 (2024): A Business-Favorable Status Quo

Governor Youngkin’s decision to veto HB 2094 must be understood in the context of EO 30, which he signed in January 2024. Titled “An Ethical Framework and Governance for the Use of Artificial Intelligence in the Commonwealth,” EO 30 became the Commonwealth’s guiding document on artificial intelligence policy. Rather than legislating obligations for private sector actors, EO 30 focuses almost exclusively on government agency use of AI and aims to “embrace innovation while safeguarding against the misuse of AI technologies.”

Notably, EO 30 creates the Governor’s Task Force on AI, charged with studying best practices, evaluating ethical risks, and proposing further recommendations by the end of 2025. The task force includes legal, technical, academic, and business stakeholders, with a mandate to advise the state on how AI can be used responsibly—especially in the public sector. The executive order further charges all executive branch agencies to adopt a risk management framework for AI usage and establishes education guidelines to prepare students for future jobs without sacrificing current learning opportunities.

For clients, EO 30 provides regulatory breathing room. Companies are encouraged—but not compelled—to align with ethical principles around AI use. As such, the Commonwealth’s current posture favors flexibility, innovation, and self-governance over statutory mandates. Clients leveraging AI in high-stakes contexts (e.g., employment screening, lending, banking, insurance) should consider voluntary alignment with EO 30 standards—but can proceed without fear of enforcement actions or litigation under state AI law.

House Bill 2094: What Could Have Been—and Why It Matters

HB 2094, also known as the High-Risk Artificial Intelligence Developer and Deployer Act, passed with bipartisan support in both chambers of the General Assembly. It would have represented a dramatic shift in how Virginia regulates the use of AI in consumer-facing decisions. Modeled in part after Colorado’s AI legislation and elements of the European Union’s AI Act, HB 2094 would have required entities deploying “high-risk” AI systems to implement rigorous impact assessments, establish data governance protocols, and disclose the use of AI to consumers in clear terms. If enacted, HB 2094 would have been one of the most legally consequential AI regulatory proposals in the United States. Despite its bipartisan support, the bill raised significant concerns for the business community. Clients would have faced considerable compliance costs, potential exposure to enforcement actions, and ongoing documentation requirements.

The bill targeted “high-risk” AI systems that materially affect a “consequential decision,” which the bill defined as the provision or denial of a consumer’s parole, probation, incarceration, education, employment, financial service, health care, housing, insurance, marital status, or legal service. Importantly, the bill would have imposed affirmative duties on covered entities to mitigate risks of algorithmic discrimination and maintain audit trails for accountability. The bill did not include a private right of action but would have charged the Attorney General with enforcing its requirements.

Governor Youngkin nevertheless vetoed HB 2094, citing concerns that its stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses. Noting that the bill “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia,” the Governor emphasized that there are already consumer protections in place. For clients, the demise of HB 2094 is a favorable development, as it avoids immediate, mandatory compliance obligations and preserves Virginia’s reputation as a business-friendly jurisdiction.

Predictions and How to Prepare

The landscape is shifting—and fast. The veto of HB 2094 does not signify the end of AI regulatory efforts in Virginia—it merely signals a pivot toward executive-led initiatives and a more iterative policy development process. EO 30’s task force is expected to deliver preliminary findings later in 2025, with an eye toward developing sector-specific guidance or voluntary standards. The gubernatorial election in November 2025 could play a decisive role in charting Virginia’s AI policy trajectory. Governor Youngkin is term-limited, and the field of candidates likely presents a stark contrast in AI governance philosophy.

Democratic frontrunner Abigail Spanberger has previously expressed support for regulating the AI industry. In contrast, Republican candidate Winsome Earle-Sears, currently the state Lieutenant Governor, would likely take a more cautious approach aligned with Governor Youngkin’s preference for executive-led, nonbinding guidelines and regulatory sandboxes over prescriptive laws. With Virginia set to elect its first female governor, the new governor’s policies will shape Virginia’s role as a leader in AI governance and influence broader national and global discussions on the subject. The next administration will also help determine whether future AI legislation includes carve-outs for low-risk use cases, exemptions for regulated industries (e.g., banking, insurance), or safe harbors for firms with strong internal controls.

Regardless of the election outcome, businesses should anticipate continued developments in AI governance and align themselves now with attorneys who stay on top of the changes in Virginia and throughout the country. Indeed, Colorado’s recent AI law, set to take effect in 2026, may spark a domino effect in other states, especially those with active legislatures and strong consumer advocacy communities. Here is how clients should prepare; we stand ready to provide further guidance on each point:

  • Use this time to prepare defensively: Even absent legislation, companies can face lawsuits alleging algorithmic discrimination or unfair practices. Plaintiffs’ attorneys and advocacy groups are watching closely—and may pursue creative claims under existing statutes (e.g., Title VII, FCRA, ECOA, consumer protection or privacy statutes). Plaintiffs will no doubt tee up negligence per se claims based on AI legislation even where there is no private right of action.
  • Conduct internal AI audits now, especially for models that affect employment, credit, lending, or housing decisions.
  • Consider implementing voluntary AI risk assessments modeled after NIST or EO 30 guidance to demonstrate good faith and reduce litigation risk.
  • Call us early if you anticipate or receive inquiries from regulators, plaintiffs’ firms, or civil rights organizations. Early engagement can prevent disputes from escalating—and position you for more favorable outcomes.

Needless to say, the key takeaway is this: You may not be regulated yet, but you are already exposed. The time to prepare—legally and operationally—is now. For more information, including how to conduct a risk assessment of your AI systems, please feel free to reach out.

Melissa Siebert is the Chair of, and Brian Browne and James Billings-Kang are members in, Cozen O’Connor’s Emerging Data Privacy Trends practice. Brian and James co-lead the Artificial Intelligence and Data Privacy sub-practice.
