AI Privacy Incident
January 20, 2026

Chat & Ask AI Firebase misconfiguration exposes 300 million user messages

Vendor: Codeway
Product: Chat & Ask AI
Severity: high
Status: confirmed-resolved
Users affected: approximately 25 million users; approximately 300 million messages

Summary

On January 20, 2026, independent security researcher Harry identified a Firebase misconfiguration in Chat & Ask AI, a multi-model AI chat application developed by Turkish firm Codeway with more than 50 million installs across Google Play and the Apple App Store. The app routes user conversations to ChatGPT, Google Gemini, and Claude. The misconfiguration left Firebase Security Rules set to public, exposing approximately 300 million messages from 25 million user accounts to unauthenticated read, write, and delete access. Codeway resolved the issue across all of its applications within hours of Harry's report.

What happened

  • Using Firehound, an automated Firebase-scanning tool of his own design, Harry identified Codeway's Firebase project as publicly accessible without authentication.
  • The project's Security Rules allowed any party to read, modify, or delete the stored data without credentials.
  • Exposed records included users' complete chat histories, the AI models used in each session, and application settings. Malwarebytes reported that some conversations involved sensitive personal topics.
  • Additional Codeway applications shared the same Firebase project and were exposed through the same misconfiguration.
  • Harry reported the issue to Codeway on January 20, 2026. The company resolved the misconfiguration across all affected applications within hours.
  • Malwarebytes published Harry's findings on February 9, 2026.
  • Harry's broader Firehound scan of 200 popular iOS applications found that 103 had the same Firebase misconfiguration, collectively exposing tens of millions of files.
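Firehound's internals have not been published, but the core check it performs is straightforward: attempt an unauthenticated read against a project's database endpoint and see whether data comes back. The sketch below illustrates that general technique against the Firebase Realtime Database REST API, whose `/.json` root endpoint returns data when rules allow public reads; the `probe` helper and `project_id` parameter are illustrative, not Firehound's actual implementation.

```python
import urllib.request
import urllib.error

def classify_rtdb_response(status: int, body: str) -> str:
    """Interpret an unauthenticated read against a Realtime Database URL."""
    if status == 200:
        return "open"    # Security Rules permit public reads
    if status in (401, 403):
        return "locked"  # Security Rules deny unauthenticated access
    return "unknown"     # e.g. 404: no database at this host

def probe(project_id: str) -> str:
    """Hypothetical helper: fetch https://<project>.firebaseio.com/.json
    with no credentials and classify the result. A real scanner would also
    handle regional hosts, rate limits, and Firestore projects."""
    url = f"https://{project_id}.firebaseio.com/.json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_rtdb_response(resp.status, resp.read().decode())
    except urllib.error.HTTPError as e:
        return classify_rtdb_response(e.code, "")
```

An "open" result on a production database is exactly the condition Harry reported: anyone on the internet can read the stored records without credentials.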

Timeline

  • 2026-01-20 -- Harry identifies the Firebase misconfiguration and reports to Codeway.
  • 2026-01-20 -- Codeway remediates the issue within hours across all affected applications.
  • 2026-02-09 -- Malwarebytes publishes Harry's research findings.

What remains unclear

  • The duration of the misconfiguration before January 20, 2026, has not been disclosed.
  • Codeway has not confirmed whether any third party accessed the data before remediation.
  • Codeway has not issued a public statement on the incident.

Broader context

AI chat applications that route conversations through multiple underlying model APIs and log results in a shared cloud database create a single point of exposure for the combined conversation history of all model integrations. Firebase's default Security Rules require an active configuration step to restrict database access, and the gap between deployment and that step has been the source of similar exposures across the mobile application ecosystem.
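As a minimal sketch of what that configuration step looks like, Firestore Security Rules that scope each user's documents to the authenticated owner might read as follows. The `users/{userId}` collection layout is an assumption for illustration, not Codeway's actual schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Assumed layout: each user's chat history lives under users/{userId}/...
    match /users/{userId}/{document=**} {
      // Only the authenticated owner may read or write their own records
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

Rules like these deny the unauthenticated read, write, and delete access at the heart of this incident; a project left in test mode, or with a blanket `allow read, write: if true;`, grants it to anyone.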

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.
