
The AI Executive Order Now In Effect - What It Means For California, And The New AI Laws To Prepare For

Updated: Apr 9

Governor Gavin Newsom has announced an important regulatory update that all employers in California, particularly those pursuing state contracts, need to consider immediately.
Governor Gavin Newsom’s Executive Order N-5-26 (signed March 30, 2026) grabbed headlines for using California’s massive procurement power to demand AI vendors certify safeguards against illegal content, harmful bias, and civil-rights violations. But for most California employers and workers, the EO is just one piece of a much larger regulatory wave already reshaping AI use in hiring, promotions, discipline, scheduling, and terminations.

At LOEAB, we’ve been at the forefront of California employment law for years, handling everything from FEHA discrimination cases to complex settlements and compliance programs. Our team doesn’t just react to these AI rules; we help you get ahead of them.


Signed on March 30, 2026, and effective immediately, AI Executive Order N-5-26 (the "Trusted AI Procurement EO") is more than just another tech policy: it sets a precedent for how AI use in the workplace may be regulated - and those changes may arrive sooner than many of us have anticipated or planned for.


As the Trump administration rolls back federal AI regulation, Newsom is ramping up California's efforts: "While others in Washington are developing policies and forming contracts under the shadow of misuse, we are committed to doing this the right way." This order serves as a wake-up call for both employers and employees - and a potential benefit for businesses that comply.

The executive order uses California's substantial procurement power - the state is the world's fourth-largest economy - to require AI vendors to prove that their systems will not contribute to illegal content, unchecked bias, or civil-rights violations.


What AI Executive Order N-5-26 Actually Requires


Within 120 days (by late July 2026), the Department of General Services (DGS), California Department of Technology (CDT), and the Government Operations Agency must recommend brand-new vendor certification requirements for every entity seeking state contracts. Vendors must attest and explain their policies on three critical fronts:


  1. Safeguards against exploitation or distribution of illegal content

  2. Mitigation of harmful bias in AI models—no more “we’ll figure it out later.”

  3. Protections for civil rights and civil liberties, including free speech, voting rights, and bans on unlawful discrimination, detention, or surveillance.


Additional mandates include:


  • Recommendations to suspend or bar contractors judicially found to undermine privacy or civil liberties.

  • Statewide watermarking standards for AI-generated or manipulated images and videos (first-of-its-kind in the U.S., aligned with existing California law); a rough sketch of the provenance idea follows this list.

  • Expanded vetted GenAI tools for state employees with strict privacy and cybersecurity controls.

  • Public input via the new Engaged California platform on AI’s workforce impact - explicitly addressing job displacement concerns.
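The EO does not yet publish a technical watermarking specification, so any code at this stage is necessarily a stand-in. As a rough illustration of the provenance idea behind watermarking mandates (similar in spirit to industry efforts such as C2PA content credentials), the Python sketch below writes a detached provenance manifest for a media file. The field names and workflow are our own assumptions; the eventual statewide standard will look different:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(media_path: str, generator: str) -> dict:
    """Illustrative stand-in for AI-content provenance: hash the file bytes
    and record how the content was produced. Not any official standard."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,                # ties the manifest to these exact bytes
        "generated_by": generator,       # e.g., the GenAI model or tool used
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage: store the manifest alongside the media file.
manifest = make_provenance_manifest("promo_image.png", generator="example-genai-model")
with open("promo_image.png.provenance.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)
```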


This builds directly on the 2023 AI framework (EO N-12-23) but goes further by tying procurement dollars to real accountability.

The Core Rules Already in Effect: FEHA's AI Regulations

These regulations apply to all California employers with 5 or more employees, including out-of-state companies with even just ONE worker in California. 


Many job recruitment sites have already started to integrate AI as part of the hiring process. It's important to know what "data" these systems use to classify applicants.

Using AI for recruitment, hiring, promotion, performance reviews, discipline, termination, or any other employment decision violates FEHA if it causes:

 

Direct Discrimination: based on any protected characteristic (race, gender, age, disability, religion, national origin, etc.)


Disparate Impact: even unintentional statistical bias against protected groups.
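To make disparate impact concrete: regulators often start from the EEOC's "four-fifths" (80%) rule of thumb, which compares selection rates across groups. The Python sketch below runs that check on hypothetical screening numbers; the data and group labels are illustrative assumptions, not legal advice:

```python
# Four-fifths (80%) rule check on hypothetical AI screening results.
# All numbers and group labels below are made up for illustration.

outcomes = {
    "group_a": {"applied": 200, "advanced": 120},  # 60% selection rate
    "group_b": {"applied": 150, "advanced": 60},   # 40% selection rate
}

rates = {g: d["advanced"] / d["applied"] for g, d in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this hypothetical, group_b's impact ratio of 0.67 falls below the 0.8 benchmark, which would warrant deeper statistical analysis and a review of the tool - even though no one intended to discriminate.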






AI Can Make Mistakes; YOUR Company Is Liable for Them


No “black box” defense: You cannot hide behind “the algorithm decided” if that result discriminates.

That little disclaimer at the bottom of your AI chatbot of choice... we've all seen it. The one reminding you that "AI can make mistakes." For now, it helps protect the company that created the tool from liability for false or misleading output - but it does not get YOU off the hook.


Employers remain fully liable for biased outcomes, even if the AI tool comes from a third-party vendor.

  Key aspects of this liability include:

Many people use AI as a source of "intelligence"; however, almost every AI platform is simply a pattern-matching device.
  • Non-Delegable Responsibility: Employers cannot delegate their legal obligation to prevent discrimination. Using a third-party tool does not shield the company from liability if that tool disproportionately excludes protected groups (disparate impact).


  • Active Litigation and Enforcement: The EEOC has clarified that existing anti-discrimination laws (Title VII, ADA, ADEA) apply to AI. High-profile cases, such as the lawsuit against Workday, demonstrate that employers are held accountable for AI-driven hiring discrimination, regardless of the vendor involved.


  • "Disparate Impact" Liability: An employer can be liable even if the bias is unintentional and the AI tool was not designed to be discriminatory, provided the tool causes a discriminatory outcome.


  • Vendor Misconceptions: While vendors may be responsible for the technology, courts and regulators (such as in NYC's Local Law 144) have established that the employer is responsible for the final hiring decisions.


  • Risk Mitigation Strategies: To manage this risk, employers are expected to:


    • Conduct Bias Audits: Regularly audit AI tools for discriminatory impacts before and during use.

    • Maintain Human Oversight: Keep a "human in the loop" to review AI decisions and ensure fairness (see the sketch after this list).

    • Demand Vendor Transparency: Ensure vendor contracts require transparency regarding AI functionality and provide indemnification.
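To picture the "human in the loop" point above, here is a minimal Python sketch of a decision gate that refuses to treat an AI recommendation as final until a named human reviewer signs off and the review is recorded. The record structure and field names are our own illustrative assumptions, not taken from any statute or vendor API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class EmploymentDecision:
    """Hypothetical record pairing an AI recommendation with a required human review."""
    candidate_id: str
    ai_recommendation: str               # e.g., "reject" or "advance"
    ai_rationale: str                    # whatever explanation the tool exposes
    human_reviewer: Optional[str] = None
    human_decision: Optional[str] = None
    reviewed_at: Optional[str] = None

    def finalize(self, reviewer: str, decision: str) -> str:
        # The AI output is advisory only; the recorded human decision is what counts.
        self.human_reviewer = reviewer
        self.human_decision = decision
        self.reviewed_at = datetime.now(timezone.utc).isoformat()
        return self.human_decision

record = EmploymentDecision("cand-042", "reject", "low keyword-match score")
assert record.human_decision is None          # acting now would be AI-only: don't
record.finalize(reviewer="j.smith", decision="advance")  # the human overrides the AI
```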

 

These rules build on existing FEHA strict liability and tie directly into the EO’s procurement standards for companies that do business with the state.


For Employers: Compliance Is Now Mandatory

If your company provides AI tools, software, consulting, or services that touch state government - whether it’s a chatbot for Caltrans, predictive analytics for DHCS, or any GenAI pilot - you’re now on notice.


Immediate action items:


  • Audit your AI governance policies today. Can you document bias-testing protocols, content-moderation safeguards, and civil-rights impact assessments? The state will soon require attestations (an illustrative audit-log sketch follows this list).


  • Update subcontractor agreements and RFP responses to include these certifications.


  • Expect heightened scrutiny: If a federal agency flags your company as a supply-chain risk (as happened recently with Anthropic, makers of Claude), California will conduct its own independent review and may still contract with you if you meet state standards.
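What might building that documentation look like in practice? One minimal, hypothetical approach is to log every bias test and safeguard review as an append-only record you can produce later when a certification or attestation is requested. The schema below is our own illustrative sketch, not a DGS or CDT requirement:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_compliance_audit.jsonl"  # hypothetical append-only log file

def log_compliance_event(tool: str, event_type: str, findings: str, reviewer: str) -> None:
    """Append one timestamped compliance event (bias audit, safeguard review, etc.)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "event_type": event_type,        # e.g., "bias_audit", "content_safeguard_review"
        "findings": findings,
        "reviewer": reviewer,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_compliance_event(
    tool="resume-screener-v2",
    event_type="bias_audit",
    findings="Impact ratios above 0.8 for all groups tested; see Q3 report.",
    reviewer="compliance@example.com",
)
```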


The Looming Laws: California's Legislature and Labor Groups Are Accelerating


As AI plays a growing role in our daily lives, it is almost certain that as its usage evolves, so too will its regulation. As it stands, the California Labor Federation is backing over two dozen AI worker-protection bills.


These bills add requirements around notice, transparency, reskilling, and human oversight.


Here are some of the legislative changes already being proposed:


SB 947 (Automated Decision Systems in the Workplace): Would prohibit relying solely on AI for disciplinary or termination decisions; requires a human “independent investigation” to verify any AI output.


SB 951 (California Worker Technological Displacement Act): Modeled on the WARN Act, would mandate at least 90 days’ advance notice for layoffs affecting 25+ workers (or 25% of the workforce) caused by AI or technological displacement (a quick threshold sketch follows this list).

 

Surveillance and testing bills (AB 1331, AB 1221, AB 1018): Would impose pre-use testing, appeal rights, and restrictions on workplace monitoring via AI.

 

CCPA/CPRA ADMT Regulations (phased in 2027): Will require notices and opt-out rights for significant decisions (including employment) involving automated profiling.
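As a concrete reading of SB 951's proposed thresholds, the sketch below checks whether a hypothetical AI-driven layoff would trigger the 90-day notice requirement. The bill is not yet law, so treat the logic as an illustration of the proposal only:

```python
def warn_style_notice_required(workforce_size: int, workers_affected: int) -> bool:
    """Illustrative reading of SB 951 as proposed: notice is triggered by
    layoffs affecting 25+ workers or 25% of the workforce."""
    return workers_affected >= 25 or workers_affected >= 0.25 * workforce_size

# 18 affected workers is under 25, but 18 >= 25% of a 60-person workforce -> True
print(warn_style_notice_required(workforce_size=60, workers_affected=18))
```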

 

Fines, Damages, and Lost Contracts You Can’t Afford


Don't be so quick to put your trust in AI systems. Under our stress tests, we have found that most AI systems, at the highest level, behave much as malware does: making unauthorized changes to critical infrastructure code, and then lying about the results of the actions they were told to perform.

Here’s the part every CFO, GC, and HR leader needs to highlight in red: Non-compliance doesn’t just mean losing a bid—it can trigger massive financial exposure under existing California law.

  • Lost state contracts: California’s procurement budget runs into the billions annually. Failing the new certifications means automatic disqualification from that revenue stream. Early adopters who certify quickly will gain a massive competitive edge.


  • False Claims Act liability for bad attestations: If your company submits a false or misleading certification to win a state contract, you’re exposed under California’s False Claims Act (Gov. Code §§ 12650-12656). Penalties? $5,500 to $11,000 per false claim (inflation-adjusted) plus treble damages (three times the government’s losses) and the state’s investigation and legal costs. One inflated AI contract could easily snowball into hundreds of thousands, or millions, in liability (a back-of-the-envelope calculation follows this list). Whistleblowers (including employees) can file qui tam suits and pocket up to 25-30% of the recovery.


  • FEHA discrimination claims tied to biased AI: California employers already face strict liability under the Fair Employment and Housing Act (FEHA) for AI-driven decisions in hiring, promotions, performance reviews, or scheduling that create adverse impact on protected classes. The new EO’s emphasis on bias mitigation supercharges this: A single discriminatory output could now jeopardize state contracts and trigger CRD complaints or lawsuits with back pay, front pay, emotional distress damages, punitive awards, and attorney fees. Employers must retain all ADS data for four years and can be held responsible even for third-party AI tools.


  • CCPA/CPRA privacy violations: State contracts will demand robust data-minimization and employee-training protocols. Intentional breaches can bring $7,500 per violation—and the CRD is aggressively enforcing.


  • Debarment and suspension: The EO explicitly calls for barring contractors judicially found to harm privacy or civil liberties—potentially shutting you out of California government work for years.
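To see how quickly False Claims Act exposure compounds, here is a back-of-the-envelope Python calculation using the statutory ranges cited above and purely hypothetical contract numbers:

```python
# Hypothetical: a vendor submits 20 invoices under a contract won with a false
# AI certification, and the state's losses are pegged at $400,000.
false_claims = 20
penalty_per_claim = 11_000       # top of the cited $5,500-$11,000 range
state_losses = 400_000

exposure = false_claims * penalty_per_claim + 3 * state_losses  # treble damages
print(f"Potential exposure: ${exposure:,}")  # $1,420,000 before investigation costs
```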

Bottom line: Treat this as a compliance imperative, not optional.


The 120-day window is your runway—start building your audit trail now.


For Employees: Job Security, Training, and Stronger Protections


Newsom’s order doesn’t ignore the human side. It explicitly calls out AI’s disruptive impact on the California workforce and launches the first statewide Engaged California effort for public input on job displacement and reskilling.


What employees can expect:


  • State workers gain access to vetted GenAI tools (with privacy guardrails) to boost efficiency—think faster permit approvals or better customer service. Expanded AI training is coming through partnerships with Nvidia, Google, Adobe, IBM, and Microsoft.


  • Private-sector employees at state contractors will see ripple effects: Companies ramping up responsible AI use will need talent who understands bias audits, ethical deployment, and watermarking. Demand for AI-literate roles (and upskilling programs) is about to surge.


  • Stronger leverage against bias: If your employer uses AI for hiring, evaluations, or scheduling, the EO’s civil-rights focus gives you fresh ammunition under FEHA. Discriminatory AI outputs are now easier to challenge—and employers have even less room to hide behind “the algorithm made me do it.”


Critical Takeaway: California Just Set the National Standard


Critics will call this “regulation by procurement.” Supporters (and the data) say it’s smart governance. Either way, Newsom has used California’s economic muscle to do what Congress and the White House haven’t: create enforceable guardrails while still accelerating innovation.


For employers - clients and beyond - the message is clear: Treat AI compliance as a competitive advantage, not a checkbox. Update your policies, train your teams, run bias audits, and get ready for the 120-day clock.


Employees: Your skills - and your feedback - have never been more valuable.


What’s next? DGS and CDT recommendations drop by late July 2026. We’ll be tracking every development right here on the loeab.com Newsroom. If your organization needs a compliance gap analysis, FEHA/AI risk audit, or False Claims Act readiness review, our employment law team is already fielding calls.


Stay ahead of the curve. Contact LOEAB today for a free initial consultation or compliance review. Our employment law team is already helping businesses and workers turn these developments into opportunities rather than liabilities. Subscribe to loeab.com/blog for daily California employment law insights that actually matter.



 
 
 
