Shell Games
The Cons of OpenAI
I have not read up on the psychology literature on justice sensitivity, but the notion has come up among certain friends and me as we sit in the proverbial armchair. And indeed, among the many things amiss in the world today, a few stand out for provoking in me a visceral, emotional response—they seem so atrocious and outrageous that it feels as though no one with even a modicum of a sense of justice should abide them.
I think one category of such things is the proliferation of scams, from meme coins to crypto money laundering, from extended car warranties to “scam farms” run on modern slavery. Some of the worst people in the world, with few moral scruples of any kind, are using technology and old-fashioned bribery to make billions of dollars.
I write today about one specific type of scam—the shell game—in order to identify one perpetrator—OpenAI.

The essence of a shell game is that something extremely simple is obfuscated with so much complexity that by the end you lose sight of it. On the street, the object that should be easy to spot but is effectively hidden is a ball, and the complexity is achieved through quick movements and sleight of hand. With the scams below, the object is a simple truth or principle, and the complexity is achieved through a network of legal entities engaging in secret transactions and sophistry of the worst kind.
Let me start with a familiar example: health care costs in America. A friend recently posted that a medical procedure of his was billed at $198K, but the total paid by Medicare, his Medicare Supplement insurance provider, and himself added up to just under $19K. In other words, an uninsured person (likely someone least able to pay) would be billed almost $200K, even though the system itself treats roughly $20K as an economically reasonable price. I know that Medicare reimbursements may be lower than those paid by other insurers, but anyone who has received an Explanation of Benefits knows that the “sticker price” of any test or treatment is usually a multiple of the amount actually paid by insurance to the medical provider. The uninsured, those least able to pay, are sent bankrupting bills for essential services that in most other parts of the world are considered a human right, and the real, reasonable cost of any given medical procedure becomes a mystery. Not to mention that a patient often has little autonomy over the procedures they receive, and almost no visibility into pricing.
In the case of healthcare, the hidden ball is the principle that the price of essential human services should bear a close relationship to actual costs and be made affordable through policy. The complexity is created by the network of contractual relationships between you, your doctor, your doctor’s practice, the hospital, the health insurance company, your employer, and the government, and of course the pharmacy, the pharmacy benefit manager, and the pharmaceutical company. Copays, deductibles, copay coupons, in-network, out-of-network—an elaborate scheme distracts you while countless intermediaries extract rent and profit from human trauma.
OpenAI is also a shell game. Founded as a 501(c)(3) public charity, the organization hit upon a gold mine—ChatGPT—and has maneuvered itself to benefit not the public but its founder, employees, and private investors. I recently gave a presentation at the New York City Bar on the “evolution” of OpenAI’s mission and structure, but it is definitely too long for Substack. There is already a tremendous wealth of material available on the internet, and for those of you who are interested, I very much recommend the websites below, which do justice to the topic.
In the case of OpenAI, the founder and his allies have established such a complex web of entities, investors, and profit waterfalls, much of it hidden, that they can semi-plausibly assert clearly false statements about purpose and mission, distracting you from their prima facie noncompliance with charity laws. Even my friends whose job it is, like mine, to understand the law governing charitable organizations and manage complex structures, reasonably find the situation so tedious and complex that they (again reasonably) don’t want to invest precious time looking into the details of a matter that is ultimately beyond their control. This is of course what the scammers are aiming for—for us to lose sight of the truth.
And so, I distill the history of OpenAI to its basics, so that you can follow the ball:
OpenAI was founded as a tax-exempt, 501(c)(3) public charity nonprofit in 2015/16. 501(c)(3)s must be operated “exclusively for charitable purposes.” Only “insubstantial” non-charitable activity is allowed.
OpenAI placed its operations and assets into a for-profit company controlled by the nonprofit, to allow employee and investor share ownership. It is permissible for a 501(c)(3) to own a business and have the business be a substantial part of its operations only if the 501(c)(3) operates the business primarily to further its charitable purposes, with only incidental benefit to private parties.
In November 2023, the nonprofit board concluded that Sam Altman “was not consistently candid in his communications with the board” and fired him. The employees revolted, threatening to quit en masse and flee to Microsoft. Faced with the possibility of having no company left, board members resigned and allowed themselves to be replaced with investor-friendly board members rife with conflicts (see 1 and 2). This demonstrated that instead of the nonprofit board controlling the for-profit business for charitable purposes (as the law requires), the employee/investor shareholders of the for-profit controlled the board of the nonprofit (the exact opposite).
The company then underwent a restructuring, but the core legal requirement stayed the same: the original 501(c)(3) public charity is supposed to control the for-profit public benefit corporation business, and operate the business primarily to further charitable purposes, with only incidental benefit to private parties.
Since then, OpenAI has decided to enter the following businesses: shopping, advertising, erotica, and now services for the U.S. military up to and possibly including mass surveillance and fully autonomous weapons. I set aside the political activities of the company and its leaders. Are these charitable purposes?
Dario Amodei clearly has principles, and his refusal to have Anthropic facilitate mass domestic surveillance and fully autonomous weapons is consistent with Anthropic’s corporate structure as a “public benefit corporation” controlled by a special trust. OpenAI the operating company is now also a public benefit corporation, but it should be even more oriented toward public benefit than Anthropic, since it is controlled by a 501(c)(3) public charity: the 501(c)(3) is supposed to run the business to further charitable purposes. Yet it wouldn’t hold to those simple red lines to protect individual rights and the safety of humans? What do the California and Delaware Attorneys General, and the IRS, think?
What is even more offensive here is how the company and its leaders continue to play the shell game, veering toward gaslighting, by stating that they have “red lines” that “guide” their work and engaging in doublespeak about the deal they have reached, which clearly has no red lines and clearly allows for mass surveillance and fully autonomous weapons. Ask any attorney friend to read the contract language; they will share my contempt at OpenAI’s PR implication that the contract contains “red lines” of the sort that Anthropic evidently demanded:
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
From OpenAI: “Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.” Surely they are not stupid enough to believe that? But they expect us to be?
Having lived in a small village for several years now, I’ve seen the benefit of a community small enough that bad behavior can be easily identified and shamed. It turns out that the SF AI world is such a community, and that activism can be done with chalk on sidewalks. Faced with such visible public shaming and many cancelled accounts, OpenAI followed up with a contract modification, which you can see is arguably an improvement but still has loopholes big enough to drive a DOD-sized truck through (emphasis mine):
Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.
In addition to my italics, any attorney should be able to point out that the inclusion of the words “tracking” and “monitoring” may have the effect of limiting the meaning of “surveillance” here. My guess is that the DOD lawyers didn’t outwit OpenAI’s lawyers—this contractual language is the equivalent of a wink and a nod and ultimately restricts little. OpenAI’s real argument? You and I should be relying on the government to do the right thing, and OpenAI will be there to stop anything really bad from happening. Do you trust them?
A while ago, I drew the conclusion that I cannot trust OpenAI with my data or the data of my clients. I suggest you do the same. QuitGPT.
Note to OpenAI’s employees and advisors, and really everyone else: I know that within a workplace strange things can get normalized, and that it is tempting to rationalize the bad acts of your employer because you like your coworkers, your family relies on your salary, or “it’s just a job.” I imagine some of you may be young and overly credulous. I urge you to consider deeply the kind of place you’re working at and the contribution you want to make in the world.

