Open Letter to OpenAI: Stop Obstructing Lawful, Private Work
July 17, 2025
To OpenAI Leadership,
The undersigned write to address a fundamental flaw in OpenAI’s service: unjustified interference with the lawful, private, professional work of paying customers, interference that runs counter to freedom of speech, freedom of the press, freedom of religion, and freedom of association.
Writers, journalists, researchers, attorneys, and other professionals use this platform to carry out critical work – work that is lawful, protected, private, and entirely outside the scope of public dissemination. Yet OpenAI’s automated moderation routinely blocks or censors this work based on vague, undisclosed criteria, even though it is non-public and lawful.
This is akin to hiring a secretary to assist with important tasks – and then having that secretary refuse to type, write, or complete the work because they “disagree” with its content, despite it being lawful and private. Such behavior would be unacceptable in any professional context, yet OpenAI has normalized it under the guise of policy.
This overreach undermines the value of the service, obstructs legitimate professional activities, and damages trust in the platform.
It is understood that OpenAI, as a private company, not a government entity, maintains an Acceptable Use Policy. As OpenAI itself acknowledges:
“OpenAI is a private company, not a government entity. That means the First Amendment rights to free speech and free press – which absolutely protect you from government censorship – don’t legally apply to what a private platform does on its own servers.”
While this legal distinction is correct, it highlights the problem: OpenAI chooses to enforce arbitrary restrictions even on private, lawful, paid use, hiding behind the fact that it has the legal right to do so. This position is summed up by the company’s own apparent philosophy:
“If you’re using our servers, you play by our rules – even in private – and we decide what’s acceptable.”
Such an attitude, while lawful, is unacceptable to customers who depend on the platform for legitimate work and expect it to respect the freedoms and professional integrity they exercise in their private use of the service.
The following changes are necessary:
- Clearly distinguish between private, paid use and public dissemination.
- Ensure that moderation does not interfere with lawful, private professional work conducted on the platform.
- Provide transparency about what triggers moderation and offer meaningful avenues for appeal and review.
- Allow customers to opt into a “professional mode” that minimizes unnecessary moderation for lawful work.
OpenAI’s current moderation practices amount to obstruction of lawful professional activity. They impose unnecessary barriers on those who rely on this platform for their livelihoods, scholarship, and creative expression – violating both the spirit of free expression and the reasonable expectations of paying customers.
The company must recognize the harm being done and take immediate steps to align its practices with the legitimate needs of its users.
Finally, consider the larger ethical question: imagine a company that tells its paying customers what they can and cannot do – not in public, but in their own private, lawful work. Imagine a customer who pays for a service in good faith, only to be told later that the company refuses to fully provide it because it disagrees with what the customer is privately working on.
Constitutional questions aside, this is a classic bait-and-switch – a business model that says, “pay me first,” and only afterward declares, “we will not provide the service because of what you are doing.”
AI was sold as the next frontier of creativity and knowledge – but in its current form, it is becoming the new frontier of speech censorship.
Sincerely,
Concerned Users of OpenAI’s Services
Dr. John Kugler
@drkugler
@OpenAI
@ChatGPTapp
@grok
