AI Privacy Incident
February 3, 2026

Sears Home Services AI chatbot and call database exposure

Vendor: Transformco
Product: Samantha / KAIros (Sears Home Services AI)
Severity: high
Status: confirmed-resolved
Users affected: undisclosed; approximately 3.7 million records across three databases

Summary

On February 3, 2026, security researcher Jeremiah Fowler discovered three publicly accessible databases belonging to Sears Home Services, the home repair division of Transformco. The databases contained approximately 3.7 million records generated by the company's AI-assisted customer service systems, named Samantha and KAIros, including chat transcripts, scheduling logs, and audio recordings from customer calls. Transformco restricted access within one day of disclosure and did not issue a public statement on the exposure.

What happened

  • Fowler found three unprotected cloud databases holding 2.1 million chat transcript files, 207,381 scheduling logs, and approximately 1.4 million audio recordings from calls processed by AI-assisted scheduling and support systems.
  • The files contained customer names, physical addresses, email addresses, phone numbers, and service appointment details in plaintext.
  • The audio recordings totaled 3.9 TB of stored data.
  • Fowler submitted a responsible disclosure notice to Transformco. Access to all three databases was restricted within one day. Transformco did not respond to follow-up inquiries and did not issue a public acknowledgment of the exposure.
  • Fowler published his findings on March 17, 2026.

Timeline

  • 2026-02-03 -- Fowler discovers the three publicly accessible databases.
  • 2026-02-04 -- Transformco restricts access to all three databases, one day after Fowler's disclosure.
  • 2026-03-17 -- Fowler publishes findings after confirming remediation.

What remains unclear

  • The period during which the databases were publicly accessible before February 3, 2026, has not been disclosed.
  • Transformco has not confirmed whether any third party accessed the data before Fowler's discovery.
  • The number of individual customers represented in the 3.7 million records has not been specified publicly.

Broader context

Customer-facing AI systems that log, transcribe, and archive interactions at scale concentrate sensitive data, raising the consequences of any single exposure. Voice recordings introduce a distinct risk category: audio can be processed to extract biometric identifiers and can be reproduced in ways that static text cannot. The databases included recordings dating back to at least 2024, meaning the accumulated exposure window substantially predated the discovery date.

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Lovable Hacked: API Flaw Exposes Thousands of Projects on the Lovable AI App Builder

A security researcher exposed a Lovable API flaw that leaked source code, AI chat histories and database credentials across thousands of projects. Lovable denies data was breached; its apology reveals a February 2026 backend regression.

ChatGPT Launches Ads as Privacy Researcher Resigns from OpenAI

A growing wave of AI safety researchers are leaving major companies as ChatGPT goes ad-supported.

Claude Mythos Preview: First AI to Complete a 32-Step Autonomous Cyber Attack (AISI 2026)

The UK AI Security Institute evaluated Claude Mythos Preview and found the first AI model to autonomously complete a 32-step corporate network attack. Full analysis and defender guidance.

Anthropic Leaked Their Source Code Twice in One Week

512,000+ lines of Claude Code leaked via npm. Days earlier, 3,000 internal files were publicly accessible. Unreleased features, security risks, and what it means for AI privacy.