iToverDose/Technology · 8 MAY 2026 · 18:33

Judge strikes down $100M grant cuts after DOGE relied on ChatGPT for DEI decisions

A federal judge in New York ruled that the Department of Government Efficiency (DOGE) violated constitutional standards by using ChatGPT to disqualify over $100 million in grants based on DEI criteria. The landmark decision highlights the risks of AI in public policy and raises questions about accountability in automated decision-making.

The Verge · 2 min read

A federal judge has delivered a landmark ruling against the Department of Government Efficiency (DOGE), declaring its decision to cancel over $100 million in grants unconstitutional. Judge Colleen McMahon of the U.S. District Court for the Southern District of New York found that DOGE's process for eliminating grants relied improperly on ChatGPT to assess whether applications aligned with diversity, equity, and inclusion (DEI) standards.

The ruling stems from a 2025 lawsuit filed by several humanities organizations, which argued that DOGE’s grant cancellations targeted programs specifically because of their DEI commitments. In her 143-page decision, Judge McMahon emphasized that the agency’s approach violated constitutional protections by disqualifying grants based solely on the presence of protected characteristics. "It could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding," she wrote in the opinion.

How ChatGPT became a factor in a major policy dispute

DOGE’s decision to use ChatGPT for grant evaluations raised significant concerns among legal experts and advocacy groups. According to court documents, the agency instructed the AI model to review grant applications and flag those that included keywords associated with DEI initiatives. The process, which lacked transparency and human oversight, led to the cancellation of funding for programs under the National Endowment for the Humanities (NEH).

Critics argue that relying on AI for such high-stakes decisions introduces systemic bias and undermines the neutrality required in public funding processes. The lawsuit's plaintiffs contended that DOGE's methodology was arbitrary and discriminatory, as it effectively penalized organizations for prioritizing inclusion without considering the merit or impact of their work. Legal analysts note that the case could set a precedent for future challenges to AI-driven decision-making in government agencies.

Constitutional violations and the role of AI in governance

Judge McMahon’s ruling underscores broader constitutional concerns about how AI tools are deployed in governmental functions. The decision highlights that automated systems cannot replace human judgment when evaluating sensitive criteria like DEI, especially when such evaluations risk violating equal protection principles. The judge’s critique focused on the absence of safeguards to prevent biased outcomes, noting that DOGE’s process failed to account for the nuance of individual grant applications.

Legal scholars point out that this case reflects a growing tension between technological innovation and constitutional governance. As AI systems become more integrated into policy decisions, the need for accountability, transparency, and human oversight becomes critical. The ruling serves as a cautionary tale for agencies considering similar approaches, emphasizing that technology must serve public interests—not undermine them.

What’s next for DOGE and AI in public policy

DOGE has not yet indicated whether it plans to appeal the ruling. The decision requires the agency to revisit its grant cancellation process and ensure compliance with constitutional standards. Meanwhile, policymakers and technologists are increasingly scrutinizing the role of AI in governance, with calls for stricter regulations on its use in public decision-making.

For organizations receiving public funding, the ruling sends a clear message: automated systems must be carefully vetted to avoid constitutional pitfalls. As AI continues to evolve, its integration into governance will likely face heightened legal challenges, making transparency and accountability essential. The outcome of this case may influence how other agencies approach AI-driven decisions in the future.

AI summary

A New York judge found DOGE's use of ChatGPT to cancel $100 million in DEI grants both unlawful and unconstitutional. The use of AI in public funding has inflamed legal and ethical debate.
