
5,000 AI-built apps expose corporate secrets—here’s how to close the gap

New research has surfaced roughly 5,000 shadow AI applications leaking sensitive corporate and customer data online. Public-by-default settings and a lack of governance turn AI-generated tools into high-risk assets for enterprises.

VentureBeat · 3 min read

The rise of AI-powered "vibe coding" has quietly created a security blind spot for enterprises. In a matter of weeks, product managers and developers can spin up fully functional applications using tools like Lovable, Base44, or Replit, deploy them on platforms such as Netlify, and publish them to public URLs—often without realizing the data exposure that follows.

A recent investigation by Israeli cybersecurity firm RedAccess uncovered 380,000 publicly accessible assets tied to these tools, including applications, databases, and infrastructure. Roughly 5,000 of those assets, about 1.3%, contained sensitive corporate or customer information. CEO Dor Zvi explained that the team stumbled upon the issue while auditing shadow AI use for clients. The findings were independently verified by Axios and Wired, which confirmed several exposed systems, including a shipping company's vessel-tracking portal and a health firm's internal clinical-trial database.

The exposed data spanned industries and geographies. A British cabinet supplier left full customer service conversations unredacted on the open web. A Brazilian bank’s internal financial documents were accessible to anyone with the URL. Patient records from a children’s long-term care facility and hospital summaries were also found. Depending on jurisdiction, these breaches could trigger violations under regulations like HIPAA, UK GDPR, or Brazil’s LGPD.

Beyond corporate data, RedAccess identified phishing sites built on Lovable that impersonated major brands such as Bank of America, FedEx, Trader Joe’s, and McDonald’s. The company stated it had begun investigating and removing these fraudulent applications.

Default settings create unforeseen risks

Many vibe coding platforms set new projects to "public" by default, leaving applications discoverable via search engines. This oversight turns what was meant to be a productivity boost into a security liability. As Zvi noted, expecting non-technical users to manually adjust privacy controls is unrealistic: "I don’t think it’s feasible to educate the whole world around security. My mother might be vibe coding with Lovable, but I doubt she’s thinking about role-based access."
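Security teams don't have to wait for users to change those defaults; a quick probe shows whether a deployed app answers requests without any authentication at all. Below is a minimal Python sketch of that check. The URLs are hypothetical placeholders for your own inventory, and a 200 response is only a signal to review manually, not proof of exposure.

```python
# Minimal sketch: flag deployed apps that answer an unauthenticated request.
# The URLs below are hypothetical placeholders; substitute your own inventory.
import requests

CANDIDATE_URLS = [
    "https://example-internal-tool.netlify.app",   # hypothetical
    "https://example-dashboard.lovable.app",       # hypothetical
]

for url in CANDIDATE_URLS:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
        continue
    # A 200 with no auth challenge suggests the app is world-readable;
    # a 401/403 or a redirect to a login page suggests some gating exists.
    if resp.status_code == 200:
        print(f"{url}: responds without authentication -- review manually")
    else:
        print(f"{url}: HTTP {resp.status_code}")
```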

The problem extends beyond one platform

Shadow AI risks aren’t limited to a single tool or vendor. In October 2025, Escape.tech analyzed 5,600 publicly available vibe-coded applications and uncovered over 2,000 high-severity vulnerabilities. Their findings included 400 exposed secrets like API keys and access tokens, plus 175 instances of personal data leaks involving medical records and bank account numbers. Every vulnerability was live in production and discoverable within hours.
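The most common of those leaks, exposed secrets, can be caught with a lightweight pattern scan of the JavaScript bundles a vibe-coded app ships to the browser. The sketch below checks for a few well-known key formats (AWS access keys, Stripe live keys, generic api_key assignments) as an illustration; dedicated scanners such as trufflehog or gitleaks cover far more patterns, and the bundle URL shown is hypothetical.

```python
# Minimal sketch: grep a fetched JavaScript bundle for common secret formats.
import re
import requests

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Stripe live secret key": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
    "generic API key assignment": re.compile(
        r"api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]", re.I
    ),
}

def scan_bundle(url: str) -> None:
    """Fetch one asset and report anything that looks like a credential."""
    body = requests.get(url, timeout=10).text
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(body):
            # Print only a prefix so the script itself doesn't leak the secret.
            print(f"{url}: possible {label}: {match[:12]}...")

# Hypothetical asset URL; point this at your own app's built bundle.
# scan_bundle("https://example-app.netlify.app/assets/index.js")
```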

The implications are far-reaching. Gartner’s 2026 Predicts report warns that prompt-to-app development methods will increase software defects by 2,500% by 2028. These aren’t just syntax errors—they’re contextual blind spots where AI-generated code ignores system architecture or business rules, creating technical debt that drains innovation budgets.

Shadow AI amplifies breach costs and complexity

IBM’s 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to shadow AI, costing an average of $4.63 million per incident—$670,000 more than the overall breach average. The report highlighted critical gaps in governance: 97% of affected organizations lacked proper AI access controls, and 63% had no AI governance policies at all.

The stakes are particularly high for customer data. Shadow AI breaches exposed personally identifiable information in 65% of cases, compared to 53% across all breaches. These incidents also spread across multiple environments 62% of the time, complicating response efforts. VentureBeat’s research suggests that actively used shadow apps could double by mid-2026, while Cyberhaven data reveals 73.8% of workplace ChatGPT accounts in enterprises are unauthorized.

A five-step framework to mitigate AI-generated risks

Enterprises need a structured approach to identify and secure vibe-coded applications. The table below outlines a practical audit framework CISOs can use to triage risk across five domains; a sketch of the discovery step follows the table.

| Domain | Current State (Most Orgs) | Target State | First Action |
|---|---|---|---|
| Discovery | No visibility into vibe-coded apps | Automated scanning of vibe coding platforms | Run DNS + certificate transparency scans for Lovable, Replit, Base44, and Netlify |
| Authentication | Platform defaults (public by default) | SSO/SAML integration required before deployment | Block unauthenticated apps from accessing internal data |
| Code Scanning | Zero coverage for citizen-built apps | Mandatory SAST/DAST before production | Extend AppSec pipeline to cover vibe-coded deployments |
| Data Loss Prevention | Limited monitoring of outbound data | Real-time DLP policies for AI-generated apps | Deploy DLP tools to monitor data exfiltration |
| Governance | Reactive policy enforcement | Proactive AI risk assessments | Integrate AI-specific controls into SDLC |
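For the Discovery row, certificate transparency logs are a workable free data source: every HTTPS certificate issued for a platform subdomain is logged publicly. The sketch below queries crt.sh's JSON endpoint and filters hostnames for a company identifier. The platform domain suffixes and the "acme" hint are assumptions to adapt to your environment, and crt.sh throttles very broad wildcard queries, so a production scan would paginate or use a commercial CT feed.

```python
# Minimal sketch of the Discovery first action: enumerate hostnames on vibe
# coding platforms via certificate transparency (crt.sh), then keep the ones
# matching a company naming hint. Domains and the hint are assumptions.
import requests

PLATFORM_DOMAINS = ["lovable.app", "replit.app", "netlify.app"]  # verify suffixes
COMPANY_HINT = "acme"  # hypothetical: brand names, project codes, team slugs

def ct_hostnames(domain: str) -> set[str]:
    """Return hostnames that appear in CT logs under the given domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    # Each entry's name_value may hold several newline-separated names.
    return {
        name
        for entry in resp.json()
        for name in entry["name_value"].splitlines()
    }

for domain in PLATFORM_DOMAINS:
    for host in sorted(h for h in ct_hostnames(domain) if COMPANY_HINT in h):
        print(f"possible shadow app: {host}")
```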

Implementing these measures won’t just reduce exposure—it will future-proof organizations against the next wave of AI-driven security threats. The key is treating vibe-coded applications with the same rigor as traditional software, before they become tomorrow’s breach headlines.

AI summary

AI-powered vibe coding applications are exposing companies to serious security risks. More than 5,000 applications sit publicly accessible along with their sensitive data.
