505 employees will put money over ethics.


An odd error for the company, indeed. • 505 HTTP Version Not Supported

Just one vote missing till the • 506 Variant Also Negotiates

Guess they are stuck now. :D
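(For anyone checking the joke: both codes are real entries in the HTTP spec, and Python's standard library happens to carry their reason phrases, so here is a trivial sanity check, nothing more.)

```python
from http import HTTPStatus

# The two status codes referenced in the joke, as defined in the
# standard library's HTTPStatus enum.
for code in (505, 506):
    status = HTTPStatus(code)
    print(code, status.phrase)
# 505 HTTP Version Not Supported
# 506 Variant Also Negotiates
```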


Or they made enough and got better / same offer to be able to risk it at MS.


Ain’t that simply a curtain drama for practical acquisition of OpenAI by Microsoft, circumventing potential legal issues?

This started months ago.

Smacks,

Article tomorrow: “OpenAI starts massive layoffs!”


I feel like this is Satya’s wet dream. He woke up on Friday like normal and went to bed on Sunday owning what, 85% of OpenAI’s top people? Acquisitions aren’t usually that easy.

It seems obvious Sam would want to grow his company to infinity. That’s what VC people do. The board expecting otherwise is strange in hindsight. Now they can oversee the slow, measured adoption of a much smaller business while the rest of the team shoots for the stars.

Anyways, RIP y’all. Skynet launches next year.

M0oP0o,

Wow, number really does go up!


The biopic on this whole thing is going to be hilarious. The rumor is that the board didn’t like how fast the CEO was moving with AI and was afraid of the consequences of possible AGI (which I don’t think these new LLMs are even close to), but that doesn’t sound like how modern boards of directors behave, so I don’t trust it.

It’s just baffling how this golden goose was halfway strangled in the nest.


They are a non-profit board set up precisely to exercise caution over rapid AI development.


Or this is essentially a hostile takeover by Microsoft. OpenAI is a non-profit with non-shareholders as its board. They don’t have a profit motive to develop AI quickly and without safety measures. But the tech they’ve developed has quickly become the hottest product on the planet.

Microsoft was clearly prepared to take on all the employees the second this happened.


Microsoft is huge. They’re always prepared to take on a few hundred new employees.


These will come at a premium. Not only are they high-demand jobs, but they’ll absolutely be sued by OpenAI if they hire away half the staff of a company with which they had a business relationship. Those legal fees alone will be 8 figures even if they win.

b3nsn0w,

they spent 10 figures on openai already. 8 figures for the whole openai team is pennies


I’m positive that lawyers will get super involved and a lot will depend on the various contracts which we don’t have any visibility into. But from an ethical standpoint, the openai board shat in the bathwater and can’t really complain if people get out and move over to a cleaner pool.


Maybe they are not doing it to move to cleaner water; maybe they were promised more fish by a certain fisherman conglomerate if they do. But I could be wrong.

BattleGrown,

How in the world did OpenAI not sign a non-compete with MS? How can MS hire OpenAI employees so blatantly?? What the actual fuck


Non-competes are illegal in California. Which they should be.


California is just ahead of the game, as it is in a lot of different ways. Non-competes are, and I’m paraphrasing a lawyer friend here since I’m not one, functionally dead in the water. They’re generally honored because no one wants to hash it out in court for months when they could be relaxing or transitioning to the new job anyway. A surgeon I knew left a clinic to start his own, and told his clients to just contact him in six months, not because he cared about the non-compete he had signed, but because it was going to take him about that long to set up the new clinic and hire staff.


I don’t think NCAs are valid in California.


Non-competes are illegal in California and should be illegal everywhere else too.

HiddenLayer5, (edited)

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.


Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.


One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.


That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

Gonna leave this here.


I’m shocked and I shouldn’t be… Poor people

HiddenLayer5, (edited)

The last quote danced around it, but if the implication is that they were seeking out and collecting CSAM, which is a sex crime to access, possess, and distribute, why the fuck are the boards of both companies not in prison and on the sex offender list?!

I mean, I know why, but


I’m sure there’s some loophole there, maybe between countries’ laws. And if there isn’t, Hey! We’ll make one!


Isn’t CSAM classed as images and videos which depict child sexual abuse? Last time I checked written descriptions alone did not count, unless they were being forced to look at AI generated image prompts of such acts?


That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

This is the quote in question. They’re talking about images


They could be working with the governments of relevant countries to develop filters and detection systems.


I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. You really think companies like that have any other options than to deal with those things?

HiddenLayer5, (edited)

If absolutely nothing else and even assuming for the sake of the argument that work of this nature is completely justified, they still have to answer for the fact that they severely underpaid foreign workers in clickfarms to do this and traumatize themselves on their behalf presumably so no one in the West had to.

Personally, my opinion is very strongly that if you can’t develop a technology without committing such serious ethical breaches, for example seeking out and accumulating CSAM, then it’s either too early to develop that technology or it’s not worth developing at all. One may counter this with something like “well it’s basically inevitable that unscrupulous people will harm others to develop technology” but I would also argue that while that is true, the inevitability of something doesn’t make the act itself any less unethical.

As a bit of context: the reason why even accessing and possessing CSAM is illegal almost everywhere in the world is that the generally accepted philosophy around this kind of material is that every time someone views it for any reason, it victimizes that child all over again. This is also very consistent with the opinions of actual CSAM survivors, so I don’t feel it’s something that the rest of us can really question. I obviously cannot speak on their behalf in any way, but my guess would be that the vast majority of CSAM victims do not want photos and videos of the most terrifying and traumatic moments of their lives being used in this way, especially not by a for-profit company so they can develop a product with the goal of making themselves richer.


Consider the impact on human psychology. Not everyone has the guts to read or even look through these. And even those who appear to have them still get scarred inside.

Maybe there is no alternative for now, but don’t do that to people on such a low paycheck. Consider the background of these people, who may work on these tasks not even to live, but to survive. I would have preferred to wait 10 years than to hand these horrifying tasks to those people.

I’m sure there are lots of people who are in jail for creating/sharing or even making a profit off of this content. They could do that work? But then again, even though it bothers me less than giving it to people who have no choice in how to live their lives, that is still an idea I find ethically very questionable.


Very much yes, police authorities have CSAM databases. If what you want to do with it really is above board and sensible, they’ll let you access that stuff.

I don’t doubt that anything OpenAI could do with that stuff would be above board, but sensible is another question: any model that can detect something can be used to train a model which can generate it. As such, those models are kept under lock and key just like their training sets, held by the (social) media platforms which have a use for these things and the resources to run them, under the watchful eye of the authorities. Think faceboogle. OpenAI could, in principle, try to get into the business of selling companies at that scale models it can, and has, trained itself, but I don’t really see that making sense from the business POV, either.


IIRC there are a few legitimate and legal reasons to seek CSAM, such as journalism, and definitely developing methods to prevent its spread.


No, you’re right, you should be. We don’t want to normalize this shit, it should continue to shock and offend.

These are the dark sides of modern technology. The kids working cobalt mines. The workers being paid pennies to categorize data so bad that it is traumatic to even read it. I can’t imagine how the people who have to look at pictures can do it.

I feel like I could handle some dark text here or there, but if I had to do it for 40-50 hours a week? Hundreds of passages every day. That would warp me pretty quickly.


So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.



They could have just given 4chan a $1 bounty per piece and they would have gleefully delivered until Lambo.


They are probably the ones writing those pieces of literature


In some countries 2 bucks an hour puts you above the median

FlyingSquid,

“Above the median” should not be the standard for having to spend all day reading about racism and rape.


What about spending all day being abused by people in a call center?

I mean sure we’d all like to make enough money to live a full life with any job but that’s sadly not a reality and the point you’re missing is that economies don’t work the same as the US in every country.

I live in Argentina. I make 25k a year as a software developer, and I’m in the top 1% of highest earners in the country.

FlyingSquid,

What about it? It’s nowhere near the same as spending all day reading graphic rape and racist screeds, let alone looking at CSAM, which is what they’re paying them to do now. Did you miss the part where they are psychologically damaged from this work and the counseling they have been offered is insufficient? Call centers don’t usually result in that sort of thing.

Also, maybe you shouldn’t expect and defend wages that low for being in the top 1%?

AutistoMephisto, (edited)

They’re in the top 1% for Argentina, not globally. I mean, it would be nice if every worker made US wages. It’s kinda fucked though that even the lowest paid workers in America can live like kings in the Philippines. I make $42k/yr as an electrical assembler at a plant that manufactures environmental test chambers. If I take my PTO and go to almost any other country, especially Argentina, I can live like royalty for a week.


I strongly disagree. I have read and seen a lot of messed up things on the internet, I much, much, prefer it to the couple weeks I spent helping out a friend at a part-time service job. (And I was doing it with good friends in a casual environment.)

FlyingSquid,

You’re welcome to strongly disagree that this:

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

Is not worth high pay, but I would say psychologically damaging your employees and then not even giving them the counseling tools to help them is absolutely worth high pay. You should not have to endure things like that for an ‘above the median’ wage in a country where ‘the median’ is still being very poor. I see this as not much better than defending other corporations making poor people in Africa work in mines for a decent wage relative to others in their country but not giving them safety equipment. And they still die poor.


I obviously prefer people aren’t in poverty at all. But I have far more sympathy for the miner risking their lives than someone reading something disgusting/disturbing on the internet, it is not anywhere near close.

FlyingSquid,

You don’t understand how massive psychological damage can be as bad as seriously endangering someone’s physical health?

Just because a graphic description of a dog being raped while a child watches doesn’t bother you doesn’t mean it won’t bother anyone else. In fact, I would wager that it would be pretty disturbing for most people to read that, let alone read that sort of thing for hours every day.

And then there are the ones who are just as low-paid but have to look at images instead. Again, you may not be bothered by CSAM, but I would wager that most people would find looking at that all the time very hard to deal with and it could easily result in PTSD.


Getting crushed in a mine collapse harms everyone. As unfashionable as it is, the vast majority of people, that I know at least, have experienced far more traumatic things than you could ever get from third person observation.

I hate gore, I hate seeing people dying, I hate hearing about those sorts of things. They seriously upset me, but to compare that discomfort to anything like someone working (maybe enslaved) in a mine essentially anywhere in Africa is ridiculous. Risking, on a daily basis, painful death, painful suffering then death, likely slow death from dust inhalation, severe maiming, etc.

If you really believed reading it were that dangerous, it is evil of you to even summarize it as you did and risk serious harm to others.

FlyingSquid,

PTSD leads to suicides. Very often. And even without suicide, people with poor mental health often live very short lives due to stress.

Also, please do not misrepresent what I said. I talked about not giving them safety equipment, not them dying in a mine collapse. Both involve not giving the workers protection they need for low pay and could easily lead to very poor health and short lives in exchange for being somewhat less poor than their neighbors but still poor. The miners are not given physical safety equipment and the workers for OpenAI are not given the mental safety equipment.


I think you trust too much in modern psychology if you think that this job would lead to significant suicides that non-chemical therapy would prevent. Much more effective would just be pre-screening, or informing applicants of the duties (which may have been done).

FlyingSquid,

Did you not read what was pasted?

All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

They are not being given the psychological tools they need. That’s a big part of the problem. Again, it is no different than not being given safety equipment.

HiddenLayer5,

All while getting all high and mighty about how AI is poised to rid humanity of the need to make humans do degrading jobs, mind you.


What? And here I am doing it for free…


That’s actually about 3x what the average Kenyan makes, sadly.


This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP and rape-related shit, IIRC. Terrible stuff.


Hold on, why exactly do they need people to label this shit?


How else will the AI be able to recognize that such text is “bad”?


This is actually extremely critical work, if the results are going to be used by AIs that are going to be widely deployed. This essentially determines the “moral compass” of the AI.

Imagine if some big corporation did the labeling and such, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably be used to replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord starts to move from a beyond-crazy idea to a plausible one (years and years in the future, not now, obviously).
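The loop those comments describe (humans label text, a model learns to flag it) can be sketched with a toy Naive Bayes classifier. Everything below, the labels, the snippets, and the method itself, is an invented stand-in for illustration, not OpenAI’s actual pipeline:

```python
import math
from collections import Counter

# Hypothetical human-labeled snippets (benign stand-ins for the real,
# far darker annotation work described in the article).
labeled_data = [
    ("friendly greeting and small talk", "safe"),
    ("recipe for a simple soup", "safe"),
    ("violent threat against a person", "unsafe"),
    ("graphic violent description", "unsafe"),
]

def train(data):
    """Count words per label and label frequencies (Naive Bayes fit)."""
    word_counts = {label: Counter() for _, label in data}
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest add-one-smoothed log posterior."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(labeled_data)
print(classify("violent threat in a message", word_counts, label_counts))
# unsafe
```

Scaling a scheme like this up to production quality is exactly why so many human-labeled examples are needed, and why the labelers end up reading the material the article describes.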

ColdFenix,

Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it’s the only job available and otherwise they and their family might starve. (Source.)

This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?


True. Though while it’s horrible for those people, they might be doing more important work than they or we even realize. I also kind of trust the moral judgement of the oppressed more than that of the oppressor (since they are the ones who do the work). Though I’m definitely not condoning the exploitation of those people.

It’s quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft, when they do their own labeling, are going to give very positive guidance on whether it’s OK for people to suffer if it leads to more money for investors.


This whole situation happened so fast and it confuses me


Later: All 195 employees of OpenAI in support of board of directors.




They did the monster math


It was a graveyard graph


Title gore.


What do you expect, it was written by AI.


Microsoft will embrace (extend and then extinguish) them all with OpenArms.


OpenEEE 😅


Seeing as Sam and Greg now work for Microsoft, I’d say this is late.


Microsoft was also the biggest early investor in OpenAI, anyone that wants to leave that company has a guaranteed job at Microsoft, bet on it.


Might not be able to hire them due to non-compete clauses, though, if they exist.


I think Pres. Biden killed that this year. Like, they’re all unenforceable. Confirm that, but I remember something about that.


They are illegal in California.


Late only because of how swiftly Sam and Greg agreed to work for Microsoft. This was sent on the first day back at work after the firing, assuming OpenAI doesn’t work its full staff over the weekend. Furthermore, contacting 700 people and getting a response back takes a little time too.

Let’s be honest, Microsoft will probably be happy for Sam and Greg to return, since OpenAI is almost a Microsoft company and it causes the least disruption. Alternatively, OpenAI could go to 💩 and Microsoft could lose their Edge (😉) over competitors in this space.


Microsoft would never acquire an innovative company just to ruin it.


I didn’t say that


Oh that, like all that went before it, will be ruined accidentally. MS is second only to IBM in the rate and effectiveness of companies consumed and shat out. But not maliciously; just, feeding MS for another year or two isn’t compatible with the continued life of the food item.


Dunno, Google is up there too


You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

You are God damned right that shutting everything down is one of the roles of a non-profit Board focused on AI safety.


It’s supposed to be a nonprofit benefiting humanity, not a payday for owners or workers. The board isn’t making money off of it.

Giving microsoft control is a bad idea. (duh?)

Giving a single person control is a bad idea, per sam altman.

TurtleJoe,

They have a for-profit arm in addition to the non-profit.


More like a for-profit arm ruled by a non-profit head.



    DragonTypeWyvern,

    This only makes sense if the goal is to transfer all the profitable technologies to the for-profit arm and then give it to Microsoft and call it ethics.


    It makes sense if you can still objectively look at it without the capitalist profit-above-all mindset.


    Profit plus minimum standards of life lol

    slaacaa, (edited)

    My take on what happened (we are now at step 8):

    1. Sam wants to push for more & quicker profit with MS and VC backing, but board resists, constant conflicts
    2. Sam aligns with MS; they hatch a plan to gut OpenAI for its know-how, ppl, and tech, leaving the non-profit part bleeding out in the gutter
    3. Sam & MS set a trap: Sam crosses some red lines, maybe taking commercial decisions without board approval. Potentially there was also some whispering in key ears (e.g, Ilya) by seemingly helpful advisors/VCs to push & pull at the same time on both sides
    4. Board has enough after Sam doesn’t back down, fires him & other co-founder guy
    5. MS and VCs go full attack to discredit board. After some info gathering, they realize they have been utterly fucked
    6. Some chaos, quick decision of appointing/replacing ppl, trying to manage the fire, even talking to Sam (btw this might have been a fallback option for MS, that the board reinstates him with more control and guardrails, weakening the power of the non-profit)
    7. Sam joins MS, masks are off
    8. Employees on the sinking ship revolt, even Ilya realizes he was manipulated/fucked
    9. OpenAI dead, key ppl join MS, tech and rest of the company bought for scraps. Non-profit part dead. Capitalist victory

    Source: subjective interpretation/deduction based on the available info and my experience working as a management consultant for 10 years (dealing with lot of exec politics, though nothing this serious)


    You might very well be correct. The thing people need to remember is that just because something involves a conspiracy doesn’t mean it’s false; it’s the number of people required to be involved that typically makes a conspiracy implausible. I think it is very much within human nature, especially that of programmers, who have traditionally been better treated and paid than most other workers, to side with the profit motive against actual altruism. It’s the tech bro thing to do. I’m going to wait and see what happens and not take any sides, even though I’m typically always for supporting the workers.

    ikidd,

    This is precisely the take I’ve been coming to on this. It fits all the fuckery going on. You can rest assured there is nothing in writing that can back this up, but one day there will be an unrelated lawsuit where it all comes out.


    You’re wrong on point #1. This isn’t being done per Sam Altman for commercial purposes. It’s being done per Microsoft in an attempt to remove the OpenAI board completely. Facebook recently shut down its AI Ethics division.

    All of this is happening in conjunction with each other. Large corporations are trying to privatize AI and using key personnel in the industry to make it seem like a good thing. This wasn’t just Sam Altman. Whoever drafted the letter demanding the board steps down is working with Microsoft to do this.

    More than likely, that group went around spreading doomsday to the other employees in an attempt to scare them into fleeing the company.

    Sam Altman is just a pawn.

    people_are_cute,

    Facebook recently shut down its AI Ethics division.

    Meta is the only player that’s releasing its models to the public. Ironically, it is the one being the most ethical in the AI space right now.

    “AI Ethics” teams in Silicon Valley are nothing but rent-seeking doomer cults that leech off the effort of others and hold back progress with bullshit gatekeeping. There was not a single positive contribution Facebook’s AI “ethics” team ever made.


    This is exactly my thoughts on it too, unfortunately.
