Scale AI exposed sensitive client data in public Google Docs
NewsBytes June 25, 2025 06:39 PM


Scale AI, a leading data labeling company, has come under fire for major security lapses.

The company is accused of exposing sensitive information about its clients, including Meta, Google, and xAI, in public Google Docs.

Business Insider discovered that several AI training documents marked "confidential" were publicly accessible.

The revelation comes just as clients like Google, OpenAI, and xAI have paused work with Scale after Meta's $14.8 billion investment in the firm.


How Google and xAI's information was leaked
Project exposure


Business Insider found thousands of pages of project documents across 85 Google Docs linked to Scale AI's work with Big Tech clients.

These included sensitive details like how Google used ChatGPT to improve its chatbot, Bard.

For Elon Musk's xAI, public documents revealed details of "Project Xylophone," which aimed to improve the AI's conversational skills.

Contractors working with the company said that public Google Docs are routinely used to share internal files across its workforce of at least 240,000 contractors.


Scale AI says it is conducting an investigation
Investigation underway


In light of these revelations, Scale AI has said that it takes data security seriously and is investigating the matter.

A spokesperson for the company said, "We are conducting a thorough investigation and have disabled any user's ability to publicly share documents from Scale-managed systems."

The spokesperson also emphasized their commitment to robust technical and policy safeguards to protect confidential information.


Documents contained sensitive info about thousands of contractors
Personal data leak


The public Google Docs also contained sensitive information about thousands of Scale AI's contractors. This included their personal email addresses and notes flagging some of them as suspected of "cheating."

Some documents could not only be viewed but also edited by anyone with the right URL.

Cybersecurity experts have warned that such practices could make both Scale AI and its clients vulnerable to various cyberattacks, including impersonation or malware insertion.
