Regular ChatGPT users (including the author of this article) may or may not have noticed that the hit chatbot from OpenAI allows users to enter a "temporary chat" designed to wipe all the information exchanged between the user and the underlying AI model as soon as the chat session is closed.
In addition, OpenAI also allows users to manually delete prior ChatGPT sessions from the left sidebar on the web and desktop/mobile apps by left-clicking or control-clicking them, or by holding down/long-pressing on them in the selector.

However, this week, OpenAI found itself facing criticism from some ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated.
"You're telling me my deleted chatgpt chats are actually not deleted and [are] being saved to be investigated by a judge?" posted X user @ns123abc. The comment drew more than a million views.
Another user, @kepano, added, "you can 'delete' a ChatGPT chat, however all chats must be retained due to legal obligations?"
As AI influencer and software engineer Simon Willison wrote on his personal blog: "Paying customers of [OpenAI's] APIs may well make the decision to switch to other providers who can offer retention policies that aren't subverted by this court order!"
Instead, OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until June 5th.
The order, embedded below and issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis," including chats deleted by user request or due to privacy obligations.
The court's directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a now year-and-a-half-old copyright case still being argued. The NYT's attorneys allege that OpenAI's language models regurgitate copyrighted news content verbatim. The plaintiffs argue that logs, including those users may have deleted, could contain infringing outputs relevant to the lawsuit.
While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, only then issuing a blog post and FAQ describing the legal mandate and outlining who is impacted.
However, OpenAI is placing the blame squarely on the NYT and the judge's order, saying it believes the preservation demand to be "baseless."
OpenAI clarifies what's happening with the court order to preserve ChatGPT user logs, including which chats are impacted
In a blog post published yesterday, OpenAI COO Brad Lightcap defended the company's position and stated that it was advocating for user privacy and security against an overly broad judicial order, writing:
"The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users."
The post clarified that ChatGPT Free, Plus, Pro and Team users, along with API customers without a zero data retention (ZDR) agreement, are affected by the preservation order, meaning that even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu subscribers, as well as API clients using ZDR endpoints, are not impacted by the order and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and is only accessible to a small number of legal and security personnel.
"This data is not automatically shared with The New York Times or anyone else," Lightcap emphasized in OpenAI's blog post.
Sam Altman floats new idea of 'AI privilege' allowing for confidential conversations between models and users, similar to speaking with a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing:
"recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle."
He also suggested a broader legal and ethical framework may be needed for AI privacy:
"we have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation."
"imo talking to an AI should be like talking to a lawyer or a doctor."
"i hope society will figure this out soon."
The notion of AI privilege, as a potential legal standard, echoes attorney-client and doctor-patient confidentiality.
Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman's remarks indicate OpenAI may increasingly advocate for such a shift.
What comes next for OpenAI and your temporary/deleted chats?
OpenAI has filed a formal objection to the court's order, requesting that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate.
Judge Wang, in a May 27 hearing, indicated that the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs. OpenAI was ordered to submit that proposal by today (June 6), but I have yet to see the filing.
What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments
While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions within organizations.
Those who oversee the full lifecycle of large language models (LLMs), from data ingestion to fine-tuning and integration, will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint, and how to isolate, log or anonymize high-risk interactions.
Any platform touching OpenAI APIs must validate which endpoints (ZDR vs. non-ZDR) are in use and ensure data-handling policies are reflected in user agreements, audit logs and internal documentation.
Even when ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (analytics, logging, backup) don't inadvertently retain transient interactions that were presumed short-lived.
Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI's backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like "temporary chat" that no longer function as expected under legal preservation.
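That downstream review can be partially automated. The sketch below, a minimal illustration rather than any real OpenAI or vendor tooling, compares chat IDs a source system reports as deleted against the IDs still present in hypothetical downstream stores (analytics, logging, backup); every name and data shape here is an assumption for demonstration.

```python
# Hypothetical retention audit: flag transcripts that downstream stores still
# hold even though the source system marked them deleted. All store names and
# IDs are illustrative, not a real API.

def find_retained_after_delete(deleted_ids, downstream_stores):
    """Return {store_name: [chat_ids]} for chats presumed deleted
    but still present in a downstream system."""
    leaks = {}
    deleted = set(deleted_ids)
    for store_name, stored_ids in downstream_stores.items():
        retained = sorted(deleted.intersection(stored_ids))
        if retained:
            leaks[store_name] = retained
    return leaks

# Example: the analytics mirror still holds two "deleted" chats.
downstream = {
    "analytics": {"c1", "c2", "c7"},
    "backup":    {"c3"},
    "logging":   set(),
}
print(find_retained_after_delete(["c1", "c7", "c9"], downstream))
# → {'analytics': ['c1', 'c7']}
```

In practice the ID sets would be pulled from each system's own export or query interface; the point is that "deleted" must be verified end to end, not assumed from the front-end behavior.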
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it is a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of "AI privilege," OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act, between legal compliance, enterprise assurances and user trust, while facing louder questions about who controls your data when you talk to a machine.