OpenAI is strengthening ChatGPT’s security with the launch of Lockdown Mode and a new Elevated Risk labelling system, amid growing concerns about how AI tools handle sensitive data and interact with the web and external apps. The update is designed to guard against emerging threats like prompt injection attacks, where hidden instructions try to trick AI systems into revealing confidential information and taking unsafe actions. With AI assistants now able to browse websites, connect to third-party services, and automate tasks, the AI giant says these new protections have become necessary.
Lockdown Mode is designed as a high-security operating environment for ChatGPT deployments where confidentiality and data integrity are essential. When enabled, it restricts the system’s ability to interact freely with external networks and services. Live web access is limited, preventing real-time retrieval from potentially compromised sources, while certain external tool integrations and automated actions may be disabled if they cannot meet strict safety guarantees. By narrowing how data flows into and out of the system, the mode reduces opportunities for data exfiltration and manipulation.
The feature is aimed at users and organizations operating under elevated cyber-risk. This includes corporate leadership handling sensitive communications, legal and finance teams working with confidential records, healthcare providers managing protected patient data, educators overseeing student information, journalists in sensitive environments, and government or security personnel. In enterprise deployments, administrators can enforce approved integrations, apply role-based access policies, and monitor activity through audit logs and compliance controls.
Notably, Lockdown Mode builds on a broader set of security mechanisms already embedded in ChatGPT enterprise environments. These include sandboxed execution environments, monitoring systems designed to detect anomalous behaviour, protections against unauthorized data transfer, and structured access controls that limit who can view or manipulate sensitive information. Initially, Lockdown Mode is being deployed in enterprise, education, and regulated-sector environments where data protection requirements are strict. And wider availability may follow as the feature evolves and demand grows among smaller organizations and individual users handling sensitive material.
Along with the new security mode, the Sam Altman-led firm also introduced Elevated Risk labels, a transparency feature designed to inform users when enabling certain capabilities could introduce additional security exposure. The labels appear when users grant the AI network access, allow it to interact with live web content, or connect third-party apps and external data sources. Each label explains what functionality is being activated, the potential risks involved, and scenarios where caution is recommended.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →