Breaking News · April 21, 2026 · 12 min read

Lovable Hacked: API Flaw Exposes Thousands of Projects on the Lovable AI App Builder

A security researcher posting as @weezerOSINT on X showed that a Lovable API flaw let any free Lovable account read source code, AI chat histories and database credentials belonging to other users. Lovable denies data was breached. The follow-up apology concedes a February 2026 backend change accidentally turned chat access back on for public projects, and that the HackerOne bug report sat as a duplicate submission for 48 days.

  • 5: API calls from a free account
  • 48 days: HackerOne report sat unpatched
  • 1.1M: views on the X disclosure
  • $6.6B: Lovable valuation (Dec 2025)

Executive Summary

  • On April 20, 2026, @weezerOSINT on X demonstrated a Lovable API flaw that gave unauthorized users access to data belonging to other users across thousands of projects, reaching source code, AI chat histories, and database credentials with five API calls from a free account.
  • The vulnerability is a textbook case of Broken Object Level Authorization (BOLA), the top-ranked issue on OWASP's API Security Top 10. Every project created before November 2025 was in scope.
  • The bug was disclosed via HackerOne on March 3, 2026, 48 days before the public disclosure. Lovable's HackerOne triagers marked it as a duplicate and left it open.
  • Lovable denies data was breached, calling the situation “unclear documentation.” The follow-up apology admits a February 2026 backend regression opened up access to chats on public projects.
  • If you ship on a cloud AI coding tool, rotate every API key and password you have pasted into a chat. Elephas keeps the chat on your Mac, so a regression on someone else's server can't leak your sensitive data.

This is the story of how an AI app builder valued at $6.6 billion, where Nvidia, Microsoft, Uber and Spotify employees hold personal accounts, let a free-tier account walk out with other developers' projects, chat histories and Supabase credentials. The company's first move was to call it a documentation problem.

It's also a story about where your chat history lives, and why the cybersecurity risk has moved from your app's frontend to the conversations you have with your tools.

Security Researcher @weezerOSINT on X Exposes the Lovable API Flaw

@weezerOSINT on X disclosing the Lovable API flaw with an embedded HackerOne report card

On April 20, 2026, an OSINT analyst posting as @weezerOSINT on X published a short security disclosure. He opened a fresh Lovable account the same day. He fired five API calls. From a free account, he was able to access another user's full project, reading the project's source tree, chat history, and Supabase credentials.

Lovable has a mass data breach affecting every project created before november 2025. I made a lovable account today and was able to access another users source code, database credentials, AI chat histories, and customer data are all readable by any free account.
@weezerOSINT, April 20, 2026

The researcher said he was not hacking. He was using the platform Lovable the way it was built, with five API calls that happened to accept another account holder's project ID without complaint. The post drew 1.1 million views within hours. Lovable replied at 10:20 pm.

The thread included an embedded HackerOne report card. Title: “Broken Object Level Authorization on Lovable API leads to unauthorized access to user data and project source code.” Submitted March 3, 2026, 7:52 pm UTC. Status: Duplicate.

What the Lovable API Flaw Exposed

Diagram: a single broken Lovable API endpoint leaking source code, chat histories, Supabase credentials, and personal data to any free account

Every piece of sensitive project data the platform stored, from the source tree and chat history to Supabase credentials and personal data, was reachable through the same broken endpoint.

To prove the flaw was not theoretical, the researcher chose an admin panel built on Lovable by the Danish nonprofit Connected Women in AI. The project had been edited ten days earlier. The developer had 3,703 edits this year. It was not an abandoned sandbox.

He also ran a controlled test. A project created in April 2026 returned 403 Forbidden when queried through the API. An older project on the same account, actively edited ten days before disclosure, returned 200 OK with the full source tree. Same API. The fix had been applied to new projects only, leaving older projects unpatched.

Lovable patched this for new projects. They never patched it for existing ones.
@weezerOSINT

The BOLA Vulnerability, Explained

BOLA flow diagram: free-tier user sends GET /project/X, API checks authentication (passes) but skips the object-ownership check (fails), returning 200 OK with another user's source tree and chat

Broken Object Level Authorization, BOLA for short, sits at the top of the OWASP API Security Top 10. The pattern is simple. An API path checks whether the caller has a valid session. It does not check whether the caller is allowed to read the specific object requested. Any attacker who can guess object IDs walks straight through.

Lovable's setup matched the pattern. Authenticate any free-tier user. Send a request for project X. Get project X, regardless of who owns it. The researcher called it “five API calls,” which is the exact price of this class of flaw when it ships in production.
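The missing check can be made concrete. The sketch below is illustrative, not Lovable's actual backend code; all names and data are hypothetical. It contrasts a handler that only verifies the session token (the broken BOLA pattern) with one that also compares the requested project's owner to the caller.

```python
# Hypothetical in-memory stand-ins for a session store and a project store.
PROJECTS = {
    "proj-123": {"owner": "alice", "source": "...", "chat": "..."},
}
SESSIONS = {"token-free-tier": "mallory"}  # any valid free account

def get_project_broken(token: str, project_id: str):
    """Authenticates the caller but never checks object ownership (BOLA)."""
    if token not in SESSIONS:
        return 401, None
    project = PROJECTS.get(project_id)
    if project is None:
        return 404, None
    return 200, project  # anyone with a session reads anyone's project

def get_project_fixed(token: str, project_id: str):
    """Adds the object-level check: the caller must own the project."""
    caller = SESSIONS.get(token)
    if caller is None:
        return 401, None
    project = PROJECTS.get(project_id)
    if project is None:
        return 404, None
    if project["owner"] != caller:
        return 403, None  # authenticated, but not authorized for this object
    return 200, project
```

With a valid free-tier session, the broken handler hands over alice's project with a 200; the fixed one returns 403, the same split the researcher observed between Lovable's old and new projects.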

BOLA is not the only thing to check when auditing a platform like this. The same codebase could just as easily ship with cross-site scripting (XSS) holes, missing CSP headers, or misconfigured session cookies. For Lovable this week, BOLA is the one that leaked chat and source code. And BOLA is the one the HackerOne bug report named.

The HackerOne Bug Report That Sat for 48 Days

Timeline showing 48-day gap between the HackerOne report filed March 3, 2026 and the public X disclosure on April 20, 2026

Lovable runs a public vulnerability disclosure program via HackerOne, which functions like a bug bounty intake channel. Responsible disclosure worked, then it didn't. The researcher filed the vulnerability on March 3, 2026, 7:52 pm UTC. It was marked as a duplicate. It was closed without escalation. The fix never came.

48 days became the headline number. A live regression sat in production because the HackerOne disclosure program had already decided the behavior was intended. Lovable later admitted this in the apology.

A duplicate submission is normally a sign the program is healthy. It means other researchers have already spotted the same bug. For this one, duplicate meant closed, and closed meant nobody looked at it again for almost seven weeks. The only escalation that worked was X.

Lovable Denies Data Breach, and Explains Itself

Lovable denies data breach, first X statement from @Lovable on April 20, 2026

Lovable's first public reply came at 10:20 pm. The company said it had been aware of concerns about the visibility of chat messages and code on projects with public visibility settings, and about chat visibility more broadly. It pushed back on the framing.

To be clear: We did not suffer a data breach. Our documentation of what “public” implies was unclear, and that's a failure on us.
Lovable, @Lovable on X, April 20, 2026

The statement drew a line between chat and code. For public projects, both used to be visible. Seeing chat messages was “now no longer possible,” the company said. Code on Lovable projects set to public was different: visibility there was intentional, after years of experimenting with different ways of surfacing build history. For enterprise customers, it added, new projects had been unable to set visibility to public since May 25, 2025.

The enterprise carve-out was true. It didn't cover the personal accounts held by Nvidia, Microsoft, Uber and Spotify employees that the researcher had flagged. It didn't cover every project created before November 2025 either. The “unclear documentation” defense held for about a day.

The Apology and the February Backend Regression

Lovable's apology post explaining the February 2026 backend regression that reopened public project chats

A longer follow-up post walked back the first one. Lovable conceded that pointing to unclear documentation alone was not enough, then gave a timeline. In the early days, the platform's “public” toggle exposed both chat and code, GitHub-style. In May 2025, free-tier users got the ability to make projects private. In December 2025, the platform switched to private by default across all tiers, and the company patched the API so public-project chats could no longer be read.

Then came the admission that matters. In February, the company wrote, while unifying permissions in the backend, it had accidentally opened back up access to chats on public projects.

That is not a documentation problem. That's a live security regression, introduced by a refactor, that reopened user data the company had already promised to close. The public disclosure on X was what forced Lovable's team to look at the report and revert the change. Until then, the platform was unpatched for the class of projects most users cared about.

The apology ended with a sentence harder to walk back than the denial: “We understand that pointing to documentation issues alone was not enough here.”

Who Was Affected: Thousands of Projects Before November 2025

At a glance

  • Scope: every project created before November 2025.
  • Accounts named: Nvidia, Microsoft, Uber, Spotify employees.
  • Data exposed: source tree, chat history, Supabase credentials.
  • Proof target: Connected Women in AI admin panel.

The researcher framed the scope widely. Projects created before November 2025 were in play, numbering in the thousands by his own framing. Nvidia, Microsoft, Uber and Spotify employees were named as account holders whose personal projects fell inside that window. A proper API design would have limited users to accessing only their own projects; the old Lovable routes did not. Lovable has not published a count.

What the researcher did read, across the accounts he sampled, gives a sense of the liability. Error logs. Schema discussions. Business logic. Credentials pasted during debugging. When the AI generated SQL migrations on request, the full schema sat inside the chat.

The demonstration target, Connected Women in AI, the Danish nonprofit, was picked because it was not a private company with commercial secrets. The researcher wanted proof that the flaw worked. The point landed anyway.

This Is Not the Lovable AI App Builder's First Security Story

May 2025, the original Lovable security story, by the numbers:

  • 1,645 Lovable apps scanned
  • 170+ databases wide open (no RLS)
  • 13,000 users hit in one breach
  • 11 months before this week's incident

Eleven months before this week, in May 2025, security researchers scanned 1,645 apps built on Lovable. More than 170 shipped with their Supabase database wide open because Row Level Security was never switched on. One breach inside that sample hit 13,000 end users. Home addresses, financial records, API keys and payment data. It was not an isolated headline either; see the Anthropic source code leak from earlier this month for the broader pattern.

That was a different vulnerability class, a configuration issue on user-built apps rather than a flaw in Lovable's own API. The pattern repeats anyway. Vibe coding tools ship apps whose authors do not fully understand their own attack surface. This week's incident joins a running list of security vulnerabilities on platforms built for people who do not write code for a living. Security issues are not edge cases on these tools; they are the default. Cybersecurity grows slower than the tools that create new attack surface.

What To Do If You Shipped an App on Lovable

If you have shipped anything on Lovable, treat this week as a free comprehensive security audit of your stack. Here is the checklist, step by step.

  1. Rotate every secret you ever pasted into a Lovable chat. API keys, passwords, Supabase service_role tokens, Stripe secrets. Assume each one has been read by at least one stranger.
  2. Review your project visibility settings. Lovable only switched to private-by-default in December 2025; every project created before that switch sits in the affected range. Verify today.
  3. Audit your Supabase database for RLS policies. Open the Supabase dashboard, confirm Row Level Security is on for every table that stores user data, and check that policies don't just read USING(true), which is the same as having no policy at all.
  4. Harden the frontend. Ship a CSP that locks down script sources. Escape user input to block XSS. Set HttpOnly, Secure and SameSite flags on every session cookie.
  5. Audit exported URLs and API endpoints. AI-generated frontends happily embed staging links, debug paths and tokens that were only meant for development. Treat every link in your shipped bundle as public.
  6. Delete chat history where you can. Even on tools that take privacy seriously, your chat history is a liability. It carries business logic, sensitive data, and whatever credentials you pasted during a debugging session. For the broader checklist, the Elephas guide on keeping data safe covers the rest.
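The USING(true) check in step 3 can be partly automated. A minimal sketch, assuming you can query Postgres (Supabase exposes the standard pg_policies catalog view via `SELECT tablename, policyname, qual FROM pg_policies;`): flag any policy whose qualifier allows every row. The sample rows below are hypothetical; in practice you would feed in the real query results.

```python
from typing import Optional

def is_noop_policy(qual: Optional[str]) -> bool:
    """True when a policy's USING clause lets every row through."""
    if qual is None:
        return True  # no qualifier at all
    return qual.strip().strip("()").lower() == "true"

# Hypothetical rows in the shape pg_policies returns.
sample_rows = [
    {"tablename": "customers", "policyname": "allow_all",
     "qual": "true"},
    {"tablename": "orders", "policyname": "own_rows",
     "qual": "(auth.uid() = user_id)"},
]

# Tables whose policy is equivalent to having no RLS at all.
flagged = [r["tablename"] for r in sample_rows if is_noop_policy(r["qual"])]
```

Any table that lands in `flagged` deserves the same scrutiny as a table with RLS switched off entirely.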

Your Chat History Is the Real Liability

Anatomy of an AI chat session showing how developers paste Supabase keys, schemas, stack traces and customer records into prompts that the provider stores on its own servers

The Lovable API flaw is the contained version of this story, because it is about one tool. The structural problem is larger. Every cloud-hosted AI coding assistant stores your chat on its servers by default. When a developer pastes a stack trace, an error log, or a schema full of personal data into that chat, the provider keeps a copy. It is one server change away from exposure, one vulnerability-triage miss away from a data leak. The same concern applies to pasting confidential documents into any cloud chatbot.

People tell the AI what they want to build. They paste error logs. They discuss their business logic. They share credentials. Lovable stores all of it and exposes all of it.
@weezerOSINT

The Danish nonprofit's developer did nothing wrong. They used the tool the way the tool wanted to be used. The failure sat in the design assumption that storing every conversation on shared infrastructure was fine, because nothing would ever break the authorization layer. Something broke the authorization layer.

What lives in your AI chat history

  • Error logs with stack traces
  • Database schemas and SQL migrations
  • API keys, passwords, tokens
  • Staging and production URLs
  • Client names and customer tables
  • Rough drafts of product decisions

A Safer Pattern: Elephas Keeps the Chat on Your Mac

Side-by-side comparison: cloud-hosted AI sends your prompt and context to a provider that stores them, versus Elephas local-first architecture where the LLM runs on your Mac and the chat never leaves the device

For work where leaking a chat archive is unacceptable (legal drafts, patient notes, client contracts, source with credentials), picking a better-hosted AI tool still leaves the same problem. The answer is a local AI tool that does not host your chat in the first place.

Elephas is a privacy-friendly AI knowledge assistant for macOS. Every conversation, every document, and every knowledge base stays on your Mac. Elephas doesn't keep your chat history on a third-party server, which is the exact failure class at the center of the Lovable story. There is no “public project” toggle to accidentally flip. There is no shared API an attacker can iterate through. Your chat isn't sitting alongside other users' data on someone else's server.

Built-In Local LLM Models

Elephas ships with built-in local LLM models: no Ollama, no external install, no dependency on a cloud provider staying online. For sensitive work, the model runs on your machine and the prompt never leaves it. A regression inside someone else's server cannot leak a conversation that never went there. The full walkthrough is in the guide to running AI offline.

Smart Redaction for Cloud Models

When you want the horsepower of GPT-5.4 or Claude Opus 4.7 on a sensitive document, Elephas Smart Redaction handles it. Sensitive data is automatically detected and redacted before anything reaches a cloud AI model, your content is never used to train AI models, and nothing passes through a third-party reviewer's screen.
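The idea behind pre-send redaction can be sketched in a few lines. This is an illustrative toy, not Elephas's actual implementation, and the secret patterns are examples only: scrub anything that looks like a credential before the prompt leaves the machine.

```python
import re

# Illustrative patterns for common secret shapes; a real redactor
# would use a much larger, maintained set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                # OpenAI-style API keys
    re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),           # JWT-shaped tokens
    re.compile(r"postgres(?:ql)?://\S+"),               # DB connection strings
]

def redact(prompt: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

The design choice that matters is where this runs: on your machine, before the network call, so the cloud model only ever sees the placeholder.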

Before you type into any AI tool, ask where your chat lives.

Try Elephas Free →


Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Explore AI Privacy & Security

Claude Mythos Preview: First AI to Complete a 32-Step Autonomous Cyber Attack (AISI 2026)

The UK AI Security Institute evaluated Claude Mythos Preview and found the first AI model to autonomously complete a 32-step corporate network attack. Full analysis and defender guidance.

12 min read

Anthropic Leaked Their Source Code Twice in One Week

512,000+ lines of Claude Code leaked via npm. Days earlier, 3,000 internal files were publicly accessible. Unreleased features, security risks, and what it means for AI privacy.

14 min read

Anthropic Sues Pentagon, 1.5 Million Quit ChatGPT: The AI Trust Crisis Reshaping the Industry

Anthropic filed two lawsuits against the Pentagon over its supply chain risk designation. Meanwhile, 1.5 million users quit ChatGPT in the QuitGPT boycott. Here's what it means for AI privacy.

14 min read

OpenAI Just Published Its Biggest Policy Document Ever. Here's What It Says and What It Doesn't.

OpenAI released a 13-page policy paper proposing 20 ideas to reshape America around AI, from a Public Wealth Fund to containment playbooks for self-replicating AI. The gaps tell a different story.

12 min read