Meta reportedly tracking employee mouse and keyboard activity to train AI

Meta has reportedly started rolling out software that records employee mouse movements, clicks, and typing patterns on work devices to help train its AI systems. The tool works only within approved workplace apps and may also capture limited screen context to understand user actions, reports Reuters. The goal is to build AI agents that can handle everyday computer tasks more like humans. The data is expected to be used strictly for improving AI capabilities and not for employee performance tracking.

Instead of relying only on traditional datasets such as text, images, and videos, the company appears to be shifting toward capturing real-world human-computer interactions. This includes how employees navigate menus, switch between applications, use shortcuts, fix mistakes, and complete multi-step workflows – small but important behaviours that are difficult to replicate using static training data alone.

Internally, the system is designed to operate in a controlled environment. It is limited to company-approved tools and does not extend to unrestricted monitoring across all personal and external activity on a device. According to the report, the captured data is expected to include patterns of interaction rather than raw sensitive content, with filtering mechanisms intended to exclude confidential and personal information. However, the exact scope and effectiveness of these safeguards remain unclear.
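To illustrate the distinction the report draws between interaction patterns and raw content, here is a minimal, purely hypothetical sketch of what pattern-level capture with content filtering might look like. The app names, event structure, and redaction rule below are assumptions for illustration only, not Meta's actual implementation:

```python
from dataclasses import dataclass, asdict
import time

# Hypothetical allow-list: capture happens only inside approved tools.
APPROVED_APPS = {"spreadsheet", "email_client", "code_editor"}

@dataclass
class InteractionEvent:
    timestamp: float
    app: str          # which approved app the event occurred in
    event_type: str   # e.g. "click", "keypress", "shortcut", "menu_open"
    detail: str       # pattern-level detail, never raw typed text

def capture_event(app: str, event_type: str, raw_detail: str):
    # Activity outside company-approved tools is dropped entirely.
    if app not in APPROVED_APPS:
        return None
    # Redact keypress content: record that typing occurred and its
    # coarse shape (character count), never the characters themselves.
    if event_type == "keypress":
        detail = f"typed_{len(raw_detail)}_chars"
    else:
        detail = raw_detail
    return InteractionEvent(time.time(), app, event_type, detail)

# A keystroke in an approved app is logged as a pattern, not content:
ev = capture_event("email_client", "keypress", "quarterly numbers")
print(asdict(ev)["detail"])   # -> typed_17_chars

# Activity in an unapproved app produces no record at all:
print(capture_event("personal_browser", "click", "link"))  # -> None
```

Whether Meta's real filtering works anything like this is exactly the open question the report flags: the sketch shows why the effectiveness of such safeguards depends entirely on where the redaction boundary is drawn.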

This initiative appears linked to Meta’s broader goal of creating ‘AI agents’ that can independently perform tasks on computers. While current AI models, particularly large language models, are highly capable at generating text, writing code, and analyzing information, they often struggle with basic operational tasks such as navigating software interfaces, filling out forms, or executing sequential actions across multiple programs. These limitations have slowed the transition from AI as a passive assistant to AI as an active digital worker. By learning directly from human behaviour, the social media behemoth aims to close this gap: real interaction data provides insight into decision-making, error handling, and adaptability – elements essential for building systems that can function reliably in real-world environments.

Such an approach also raises serious concerns, however. One major issue is whether employees are indirectly helping build systems that could eventually automate parts of their own jobs. Another concerns consent and compensation: workplace behaviour data is being reused to develop commercial AI systems, raising questions about how much employees are aware of the collection and whether they are fairly recognized and compensated for contributing to AI training in this way.

AI training data-related controversy is not new for the Mark Zuckerberg-led firm. Last year, the company announced plans to make its AI systems more personalised by leveraging user activity and interaction data across its core platforms, including WhatsApp, Messenger, and Facebook, further raising concerns about how personal data is used to train and refine AI models.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →