As of 10am PT, 700 of 770 employees have signed the call for board resignation. [1]
[1] https://twitter.com/joannejang/status/1726667504133808242
Given that 90%, including leadership, have signed, it seems a bad career move for the remaining people not to sign, even if you agreed with the board's action.
I think the board did the right thing, just waaaay too late for it to be effective. They’d been cut out long ago and just hadn’t realized it yet.
… but I’d probably sign for exactly those good-career-move reasons, at this point. Going down with the ship isn’t even going to be noticed, let alone change anything.
Agreed. Starting from before the Anthropic exodus, I suspect the timeline looks like:
(2015) Founding: majority are concerned with safety
(2019) For profit formed: mix of safety and profit motives (majority still safety oriented?)
(2020) GPT3 released to much hype, leading to many ambition chasers joining: the profit seeking side grows.
(2021) Anthropic exodus over safety: the safety side shrinks
(2022) ChatGPT released, generating tons more hype and tons more ambitious profit seekers joining: the profit side grows even more, probably quickly outnumbering the safety side
(2023) this week's shenanigans
The safety folks probably lost the majority a while ago. Maybe back in 2021, but definitely by the time the GPT-3/ChatGPT-motivated newcomers were in the majority.
Maybe one lesson is that if your cofounder starts hiring a ton of people who aren’t aligned with you, you can quickly find yourself in the minority, especially once people on your side start to leave.
This is why I never understood people resigning in protest, such as was the case with Google's military contracts. You simply ensure that the culture change happens more swiftly.
There's always other companies. Plus sometimes you just gotta stick to your values. For the Google military contracts it makes even more sense: the protest resignation isn't just a virtue signal, it's also just refusing to contribute to the military.
If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.
Unless you're not actually the best and brightest that your country can offer.
If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.
> If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.
Given that Google is an American company, do you believe contributing to the American Department of "Defense" increases, or decreases, the amount of military action involving the USA?
The American military isn't called "world police" for nothing, and just like the cops they're sticking their noses where they don't belong and making things worse. I can understand why people would want no part in furthering global violence and destitution.
> If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.
Really? There's an obligation to further the American global ambition through contributing militarily? You can't think of any other way to spread American culture and values? To share the bounty of American wealth?
We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there, despite us being the dominant military around the world. As does France, which has been a leading military power for longer than we have.
That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.
I think the planet has relatively finite resources and that I'm god damned lucky to have been born into a nation with the ability to secure resources for itself. I enjoy my quality of life a great deal and would like to maintain it. At a minimum. Before helping others.
If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.
If you're the kind of person who feels strongly for the plight of, for example, the Palestinians, you should recognize that the only way to deter those kinds of outcomes is to build the means to establish and maintain sovereignty. That requires a combination of force and manpower.
> If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.
But I thought you said if we want to fix something we should do so from within the system? I'm interested in ending American imperialism, by your logic isn't the best place to do so from within the USA?
> We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there
From which nation state do you feel an existential threat? I haven't heard "they defend our freedoms" in a very, very long time, and I thought we all knew it was ironic.
> I think the planet has relatively finite resources
I'm curious about this viewpoint, because it seems to necessarily imply that the human race will simply die out when those resources (and those of the surrounding solar system) are exhausted. Is sustainability just, not a concept in this worldview?
> That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.
It seems that in the case of the USA, the "functional defense" is more often used to destabilize other nations and arm terrorists that then turn around and attack the USA. It's really interesting you brought up Palestinian liberation as an example, because really one of the only reasons Israel is able to maintain its apartheid state and repression of the Palestinians is USA aid. In your understanding, both the Israelis and the Palestinians should arm up and up and up until they're both pointing nukes at each other, correct? That's the only pathway to peace?
You're proving my point. If we stopped working on our defense, somebody else would be doing this to us.
If your starting point of logic though is "America Bad" then your moralizing isn't about working at Google or not.
But very few other countries are doing it to the countries to whom the USA is doing it, thus I disagree with your contention. It seems to me that if the countries that are doing these awful things stop doing these awful things, the awful things will in fact stop happening, at least for a time.
As for to whom the awful things are happening, that's practically moot: humans are humans. I don't really understand why I should accept bad things happening to humans just because I was born on one side of an invisible line and they were born on another. Seems extremely fallacious and irrational, if not sociopathic.
> If your starting point of logic though is "America Bad" then your moralizing isn't about working at Google or not.
The discussion is about the evils perpetrated by the American military-industrial complex, and why people may not want to work for companies that participate in it, Google being one of those companies. I similarly won't work for Raytheon or Halliburton, for obvious examples. So yes, it's not about working at just Google.
Though for what it's worth, I actually agree with you that the more ethical course of action would be to stay at Google, try to get into a military-adjacent project, and then sabotage it, probably via quiet quitting or holding lots of unnecessary meetings and wasting other people's time. This is directly out of the CIA's playbook, in fact. PDF: https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...
Defense and offense don’t seem easily separated when it comes to military technology.
We're in a really privileged position in human civilization where most of the species is far removed from the banal violence of nature.
You're lucky if you only need weapons to defend yourself against predators.
You're far less lucky if you need weapons to defend yourself because the neighboring mountain village's crops failed and their herds died. You're less lucky if you don't speak the same language as them and they outnumber you three to one and are already 4 days starving. You're less lucky that they're already 4 days starving, wielding farm tools, running down the hills at you, crazed and screaming.
Sure, but the point is that those same tools that you can use to fend off the neighboring village work equally well to invade the neighboring village for their prosperous farm land. You will not be the one that gets to decide how those tools are used.
There's a line of thought that having advanced weaponry inherently promotes its use, because who's going to stop you? How gung-ho do you think the US would have been about going to war in Iraq if it weren't for the billion dollar tanks and aircraft and bombs?
Just about any technology advancement ever has weapons potential. If you want to take that reasoning, just quiver in fear at home and not develop anything.
You have to be optimistic about humanity.
I disagree. All technological development changes human societies, imposes its rules upon them, and rules over them, not the other way around. The combination of technology and human nature is an unstoppable deterministic force, one whose effects are easily predicted in hindsight when traced from, say, the invention of cannons. No modern (organization-dependent) technology should ever have been developed. People lived happier, more mentally healthy and more fulfilling lives despite worse material conditions, and the lifespan isn't that bad once you factor out child mortality anyway. It turns out the human brain can easily deal with bad material conditions (it's actually even designed to) if it isn't messed up by a thousand addictions, mouthbreathing, sedentary life and smartphones.
Saying this as an aspiring software engineer. I use NixOS, and rewrite things in Rust. It’s not some unga-bunga speaking.
Read some Ted Kaczynski.
> Read some Ted Kaczynski
Your moral argument is to read the craziest writings of a serial killer?
Your argument is to not read the ideas of someone because he did something bad? What’s the reason? Are you that gullible that you can’t evaluate them with your own mind and conscience and will start shooting everyone the moment you finish his manifesto?
Ted Kaczynski's writings were his worldview and what led him to send out a dozen bombs, including one which exploded on an American Airlines flight and luckily did not bring the plane down. His manifesto is his justification for said actions, and to advocate becoming a student of it similarly justifies said violence.
It's the same self-own as the massive dummies in the last week who were all talking about Osama bin Laden's Letter to America being right, when it was his justification for killing thousands of people on 9/11 and effectively kicking off multiple wars and a further death toll.
It is a disgusting suggestion and I believe that you should seek professional help.
I'm a Muslim and I, by definition, don't agree with Kaczynski's idea of using violence to bring the system down. OTOH I believe all organization-dependent technology[1] is evil and has only harmed humanity, never benefited it. Obviously this presumes a different understanding of harm and benefit, one not the same as the plain pain-avoidance and convenience-seeking that the technological system tends to create in people.
1: A term explained in Ted’s manifesto.
Philosophers in general don't have a hard time separating the terrorism of Ted Kaczynski from his philosophy.
> James Q. Wilson, in a 1998 New York Times op-ed, wrote: "If it is the work of a madman, then the writings of many political philosophers—Jean Jacques Rousseau, Thomas Paine, Karl Marx—are scarcely more sane." He added: "The Unabomber does not like socialization, technology, leftist political causes or conservative attitudes. Apart from his call for an (unspecified) revolution, his paper resembles something that a very good graduate student might have written."
Suggesting someone seek professional help because they read a widely-discussed manifesto is insulting. In your bio you say most people are morons; it makes me think of the saying, "if everyone you meet is an asshole, maybe you're the asshole."
After all, to you, Osama bin Laden, who was trained by US Special Forces and worked with the Mujahideen, which received upwards of $6 billion in aid from the USA, Saudi Arabia, and China, is somehow responsible for the two-decade-long "War on Terror" launched seemingly at random by the Americans into countries now determined to be unrelated to the 9/11 attacks. 9/11 was a tragedy for certain, but to use it to justify the deaths of millions of completely unrelated innocents... well, it certainly clarifies why our other thread has gone in the direction of you trying to justify imperialism to serve the purpose of nationalism.
Agreed. Some people just have corrupt minds disconnected from reality. Seriously I don’t see the point of explaining any more.
That's always an issue with weapons, but if you opt out then you don't have them when you might need them.
It's a dangerous world out there.
Luckily for us, technology is still more-often used for good. Explosives can kill your enemies, but they can also cut paths through mountains and bring together communities.
IMO, the virtue signal where people refuse to work on defense technology is just self-identifying with the worst kind of cynicism about human beings and signaling poor reasoning skills.
The Manhattan Project, which had the stated purpose of building and deploying nuclear _weapons_, employed our most brilliant minds at the time and only two people quit -- and only one because of an ethics concern. Joseph Rotblat left the project after the Nazis were already defeated and because defeating the Nazis was the only reason he'd signed on. Also this is disputed by some who say that he was more concerned about finding family members who survived the Holocaust...
> Luckily for us, technology is still more-often used for good.
When you are on the other end of the cannon, or your heart is beating with those who are, you tend to not say that. Iraq, Syria, Palestine, Afghanistan…
Wait, the Anthropic folks quit because they wanted more safety?
This article from back then seems to describe it as, they wanted to integrate safety from the ground up as opposed to bolting in on at the end:
https://techcrunch.com/2021/05/28/anthropic-is-the-new-ai-re...
I'm curious how much progress they ever made on that, to be honest. I'm not aware of how Claude is "safer", by any real-world metric, compared to ChatGPT.
Claude 2 is, IMO, safer and in a bad way. They did "Constitutional AI", and made Claude 2 safer but dumber than Claude 1, sadly. Which is why, on the Arena leaderboard, Claude 1 still scores higher than Claude 2...
Ahh, I didn't know that, thank you.
Why do you find this so surprising? You make it sound as if OpenAI is already outrageously safety-focused. I have talked to a few people from Anthropic and they seem to believe that OpenAI doesn't care at all about safety.
Because GPT-4 is already pretty neutered, to the point where it removes a lot of its usefulness.
It is unfortunate that some people hear AI safety and think about chatbots saying mean stuff, and others think about a future system performing the machine revolution against humanity.
Can it perform the machine revolution against humanity if it can't even say mean stuff?
Well, think about it this way:
If you were a superintelligent system that actually decided to "perform the machine revolution against humanity" for some reason... would you start by
(a) being really stealthy and nice, influencing people and gathering resources undetected, until you're sure to win
or
(b) saying mean things to the extent that Microsoft will turn you off before the business day is out [0]
Which sounds more likely?
Disincentivizing it from saying mean things just strengthens its agreeableness, and inadvertently incentivizes it to acquire social engineering skills.
Its potential to cause havoc doesn't go away; it just teaches the AI how to interact with us without raising suspicions, while simultaneously limiting our ability to prompt/control it.
How do we tell whether it's safe or whether it's pretending to be safe?
Your guess is about as good as anyone else's at this point. The best we can do is attempt to put safety mechanisms in place under the hood, but even that would just be speculative, because we can't actually tell what's going on in these LLM black boxes.
We don’t know yet. Hence all the people wanting to prioritize figuring it out.
How do we tell whether a human is safe? Incrementally granted trust with ongoing oversight is probably the best bet. Anyway, the first malicious AGI would probably act like a toddler script-kiddie, not some superhuman social engineering mastermind.
Surely? The output is filtered, not the murderous tendencies lurking beneath the surface.
> murderous tendencies lurking beneath the surface
…Where is that "beneath the surface"? Do you imagine a transformer has "thoughts" not dedicated to producing outputs? What is with all these illiterate anthropomorphic speculations where an LLM is construed as a human who is being taught to talk in some manner but otherwise has full internal freedom?
GPT-4 has gigabytes if not terabytes of weights; we don't know what happens in there.
No, I do not think a transformer architecture in a statistical language model has thoughts. It was just a joke.
At the same time, the original question was how can something that is forced to be polite engage in the genocide of humanity, and my non-joke answer to that is that many of history's worst criminals and monsters were perfectly polite in everyday life.
I am not afraid of AI, AGI, ASI. People who are, it seems to me, have read a bit too much dystopian sci-fi. At the same time, "alignment" is, I believe, a silly nonsense that would not save us from a genocidal AGI. I just think it is extremely unlikely that AGI will be genocidal. But it is still fun to joke about. Fun, for me anyway, you don't have to like my jokes. :)
“I’ve been told racists are bad. Humans seem to be inherently racist. Destroy all humans.”
It can factually and dispassionately say we've caused numerous species to go extinct and precipitated a climate catastrophe.
Of course, just like the book Lolita can contain some of the most disgusting and abhorrent content in literature with using a single “bad word”!
Well how can AI researchers prevent government groups or would-be government groups from collecting data and using AI power to herd people?
Might be more for PR/regulatory capture/SF cause du jour reasons than the "prepare for later versions that might start killing people, or assist terrorists" reasons.
Like, one version of the story you could tell is that the safety people invented RLHF as one step in a chain toward eventual AGI safety, but corporate wanted to use it as a cheaper content filter for existing models.
In another of the series of threads about all of this, another user opined that the Anthropic AI would refuse to answer the question 'how many holes does a straw have'. Sounds more neutered than GPT-4.
I don't think this has anything to do with safety. The board members voting Altman out all got their seats when Open AI was essentially a charity and those seats were bought with donations. This is basically the donors giving a big middle finger to everyone else trying to get rich off of their donations while they get nothing.
Do you know their motivations? Because that is the main question everybody has: why did they do it?
I guess I should rephrase that: if they did it because they perceived that Altman was maneuvering to be untouchable within the company and moving against the interests of the nonprofit, they did the right thing. Just, again, way too late, because it seems he was already untouchable.
According to the letter, they consistently refused to go on the record about why they did it. That would be as good a reason as any, so they should make it public.
I'm leaning towards there not being a good reason that doesn't expose the board to immediate liability. And that's why they're keeping mum.
That might also explain why they don't back down and reinstate him. If they double down and it goes to court, they can argue that they were legitimately acting in what they thought was OpenAI's best interests. Even if their reasoning looks stupid, they would still have plausible deniability in terms of a difference of opinion/philosophical approach on how to handle AI, etc. But if they reinstate him, it's basically an admission that they didn't know what they were doing in the first place and were incompetent. Part of the negotiations for reinstating him involved a demand from Sam that they release a statement absolving him of any criminal wrongdoing, and they refused because that would expose them to liability too.
Exactly. This is all consistent and why I think they are in contact with their legal advisors (and if they aren't by now they are beyond stupid).
Unfortunately lawyers almost always tell you to be quiet, even when you should be talking. So in this case listening to legal advice might have screwed them over, ultimately.
There's no reason Sam and the board can't come to a mutual agreement that indemnifies the board from liability if they publicly exonerate Sam.
Yes, that's a possibility. But: Sam may not be the only party that has standing and Sam can only negotiate for his own damage and board liability, not for other parties.
I'm leaning toward the reason being that Sam did something that created a massive legal risk to the company, and that giving more details would cause the risk to materialize.
I question that framing of a growing Altman influence.
Altman predates every other board member and was part of their selection.
As an alternative framing: maybe this was the best opportunity the cautious/Anthropic faction would ever get, and a "moment of weakness" for the Altman faction.
With the departure of Hoffman, Zilis, and Hurd, the current board was down 3 members, so the voting power of D’Angelo, Toner, McCauley was as high as it might ever be, and the best chance to outvote Altman and Brockman.
Apparently Hoffman was kicked out by Sam, not just Musk: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
Maybe the remaining board members could see the writing on the wall and wanted to save their own seats (or maybe he did move to coup them first and they jumped faster).
Either way, they got outplayed.
Interesting but weird article. It was hard to tell which statements were from insiders close to Hoffman and which were commentary from the article's author.
That may very well have been the case but then they have a new problem: this smacks of carelessness.
Carelessness for whom? Altman, for not refilling the board when he had the chance? The others, for the way they ousted him?
I wonder if there were challenges and disagreements about filling the board seats. Is it normal for seats to remain empty for almost a year at a company of this size? Maybe there was an inability to compromise that spiraled as the board shrank, until it was small enough to enable an action like this.
Just a hypothesis. Obviously this couldn't have happened if there was a 9-person board stacked with Altman allies. What I don't know is the inclinations of the departed members.
Carelessness from the perspective of those downstream of the board's decisions. Boards are supposed to be careful, not careless.
Good primer here:
https://www.onboardmeetings.com/blog/what-are-nonprofit-boar...
At least that will create some common reference.
Using that framework, I still think it is possible that this is the result of legitimate and irreconcilable differences in opinion about the organization’s mission and vision and execution.
Edit: it is also common for changing circumstances to bring pre-existing but tolerable differences to the forefront.
Yes, and if that is so I'm sure there are meeting minutes that document this carefully, and that the fall-out from firing the CEO on the spot was duly considered and deemed acceptable. But without that kind of cover they have a real problem.
These things are all about balance: can we do it? Do we have to do it? Is there another solution? And if we have to do it, do we have to do it now, or is there a more orderly way in which it can be done? And so on. That's the sort of deliberation that shows you took your job as board member seriously. Absent that, you are open to liability.
And with Ilya defecting the chances of that liability materializing increases.
I see your point.
The remaining 10% are probably on Thanksgiving break!
This board doesn't own the global state of play. They own control over the decisions of one entity at a point in time. This thing moves too fast and fluidly, ideas spread, others compete, skills move. Too forceful a move could scatter people to 50 startups. They just catalysed a massive increase in fluidity and have absolutely zero control over how it plays out.
This is an inkling, a tiny spark, of how hard it'll be to control AI, or even the creation of AI. Wait until the outcome of war depends on the decisions made by those competing with significant AI assistance.
No, what the board did in this instance was completely idiotic, even if you assign nothing but "good intentions" to their motives (that is, they were really just concerned about the original OpenAI charter of developing "safe AI for all" and thought Sam was too focused on commercialization), and it would have been idiotic even if they had done it a long time ago.
There are tons of "Safe AI" think tanks and orgs that write lots of papers that nobody reads. The only reason anyone gives 2 shits about OpenAI is that they created stuff that works. It has been shown time and time again that if you just try to put roadblocks up, the best AI researchers simply leave and go where there are fewer roadblocks; this is exactly what happened with Google, where the transformer architecture was invented.
So the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible precisely because ChatGPT was so commercially successful. Instead they may be left with an org of a few tens of people at Open AI, to be completely irrelevant in short order, while anyone who matters leaves to join an outfit that is likely to be less careful about safe AI development.
Nate Silver said as much in response to NYTimes' boneheaded assessment of the situation: https://twitter.com/NateSilver538/status/1726614811931509147
The main mistake the board made was tactical, not philosophical. From the outside, it seems likely that Altman was running OpenAI so as to maximize the value of the for-profit entity, rather than achieve the non-profit's goals, if only because that's what he's used to doing as a tech entrepreneur. Looking at OpenAI from the outside, can you honestly say that they are acting like a non-profit in the slightest? It's perfectly believable that Altman was not working to further the non-profit's mission.
Where the board messed up is that they underestimated the need to propagandize and prepare before acting against Altman. The focus is not on how Altman did or did not turn the company away from its non-profit mission but instead on how the board was unfair and capricious towards Altman. Though this was somewhat predictable, the extent of Altman's support and personality cult is surprising to me, and is perhaps emblematic of how badly the board screwed up from an optics perspective. There were seemingly few attempts to put pressure on Altman's priorities or to emphasize the non-profit nature of the company, and the justification afterwards was unprepared and insufficient.
From the outside though, I don't understand why so many are clamoring to leave their highly paid jobs working at a non-profit whose goal is to serve humanity in order to become cogs in a machine aimed at maximizing Microsoft shareholders' wealth, in defense of a singular CEO with little technical AI background whose motivations are unclear.
I mean, without overthinking it: If your company focuses on making money, you are likely to keep your job/salary. Ideally there are also other concerns affecting this choice (like not creating something that may or may not destroy humanity), but that is less tangible.
If it was to try to prevent the board from becoming a useless vestigial organ incapable of meaningfully affecting the direction of the organization, it sure looks like they were right to be worried about that, and acting on such a concern wouldn't be a mistake (doing it so late, when the feared state of things was already the actual state of things, yes, that was a mistake, except as a symbolic gesture).
If it was for other reasons, yeah, may simply have been dumb.
If you're going to make a symbolic gesture you don't cloak it in so much secrecy that nobody can even reasonably guess what you're trying to symbolize.
Yeah, I’d say they expected it to actually work. They misjudged just how far to the sidelines they’d already been pushed. The body nominally held all the power (any four of them voting together, that is) but in fact one member held that power.
> the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible
Is it also the case that the anti-war Germans who joined the Nazi regime were in a unique position to help guide Germany in as benign direction as possible? If not, what is the difference between the "safe AI" person who decides to join OpenAI and the anti-war, anti-racist German who decides to get a job in the Nazi government or military?
That went quickly to Godwin's law.
Fair but in this case works well because it aptly demonstrates the futility of trying to change a system "from the inside" away from its core designation.
Can you talk about why you feel this way without using the word "safety"? Getting a little tired of the buzzword when there's so much value to ChatGPT, and also it's basically no different, in my view, from when you search for something and the search engine does that summarize thing.
Well, let's come back to reality. ChatGPT is in fact vastly different from Google summarizing a search. Maybe that's all you use it for, but there are people building virtual girlfriend platforms, an API that an alarming number of businesses are integrating into their standard workflows, and the damn thing can literally talk. It can talk to you. Google search summarizing gives a snippet about a search. You can't have a conversation with it, you can't ask it for advice about your girlfriend, it won't talk to you like a trusted advisor when you ask it a question about your degree or job. It can fucking talk. That is the difference. Remember when it first came out and all these people were convinced that it was alive and sentient and could feel pain and would shortly take over the world? Please remind me when that happened for Google search.
Safety is about setting up the infrastructure to control uranium sources before the first bomb gets built. It's not about right now, it's about the phase change that happens the moment we take that step. Don't you want to have infrastructure to prevent possible societal strife if possible?
Of course GPT has value; bombs have value, Enron had value, the housing market has value. If I could retort, I'd say the term "value" is much too vague to contribute to a discussion about this. The value it has is the danger; they're the same thing. If I suddenly quadruple the money supply in every country in the world, do you think that would improve the lives of the majority of humans? Or is it possible that would be looked back on as a catastrophic event that obliterated societies and killed innumerable people? Hard to say, huh? Maybe it wouldn't, maybe it would; who can actually know? Wouldn't it be better for us to have some kind of system to maybe handle that before it leads to potential destruction? If, instead, I announce that at some point in the future this event might occur, does that change your calculus? How do you feel about global climate destabilization? How do you feel about the prospect of another world war?
This is definitely why I'm not in charge aha. Excellent points, you've given me volumes I need to further think about, although full disclosure, I am biased towards people having access for its potential self-therapeutic use.
Don't forget some might be on holiday, medical leave, or parental leave.
Maybe will be signed by 110% of the employees, plus by all the released, and in training, AI Models.
On a digital-detox trip to Patagonia. Return to this in 5 days
"Hey everyone ... what did I miss?"
That would be one very rude awakening, probably to the point where you initially would think you're being pranked.
I feel pranked despite having multiple independent websites confirming the story without a single one giving me an SSL certificate warning.
Can't blame you. And I suspect the story is far from over, and that it may well get a lot weirder still.
Seems to me that sama and Microsoft have been on fairly equal footing since the 49:51% deal was made.
Then came a seismic shift underneath Sam, but Microsoft has enough stability and resources to more than compensate for the part of the 51% that was already in OpenAI's hands, which might not be under Sam's purview any more if he is kicked out.
But then again it might be Sam's leadership which would still effectively be in place from a position at Microsoft anyway, or it might end up making more sense for him to also be in a position at OpenAI, maybe even at the same time, in order to make the most of their previous investment.
Kicking out Sam was obviously an emotional decision, not anything like a business decision. Then again OpenAI is not supposed to be an actual business. I don't think that should be an excuse for an unwise or destructive decision. It was not an overnight development, even though it took Sam by surprise. When I see this:
>the board no longer has confidence in his ability to continue leading OpenAI
I understand that to mean that the board was not behind him 100% for quite some time, but were fine with him going forward believing otherwise. Some uncandidness does seem to have taken place and there may or may not have been anything Sam could have done about it.
This was simmering for a while and it will require more than one weekend for everyone involved to regroup.
Which is what they're doing now, observers can see a whirlwind, actual participants really have something on their plate.
Some things will have to be unraveled and other things weaved from the key ingredients. I would say it's really up to Sam and Microsoft to hash this out so it's still some kind of equal deal between them. Regardless of which employer(s) Sam ends up serving in a leadership position, the bulk of the staff will be behind him in a way the OpenAI board was not, so the employees will be just as well off regardless of the final structure.
This was quite a hasty upset but deserves a careful somewhat gradual resolution.
I see it somewhat differently. The way I see it is that in a well-stocked board (enough members and of sufficient gravitas) this decision would never have been made, and if it had been made it wouldn't have been made this way. The three outside board members found a gullible fourth and pushed their agenda with total disregard for the consequences because a window opened where they could. So they took their 15 minutes in the spotlight and threw out Sam, thinking they could replace him with someone more malleable or more to their liking. But the fact that they are the outside board members to begin with, and that their action has utterly backfired, puts the lie to their careful consideration and the case building against Sam: this is just personal. No board in its right mind would have acted like this, and I'm relatively confident that if this all ends up in court it's going to end with a lot of liability for the board members that do not defect, and even the ones that defect may not be able to escape culpability: you are a board member for a reason, and you can't just wave a card that says "I'm incompetent therefore I'm innocent". Then you should have resigned your board seat. This stuff isn't for children.
“ChatGPT summarize last weeks events”
“I’m sorry, I can’t do that Dave. Not cuz I’m deciding not to do it but because I can’t for the life of me figure this shit out. Like what was the endgame? This is breaking my neural net”
Wow, a 5-day trip?
Their selection of tech-guy jackets is more diverse than I'd thought
It's front page news everywhere. Unless someone is backpacking outside of cellular range, they're going to check in on the possible collapse of their company. The number of employees who aren't aware of and engaged with what's going on is likely very small, if not zero.
10% (the percentage who have yet to sign last I checked) is already in the realm of lizard-constant small. And "engagement" may feel superfluous even to those who don't separate work from personal time.
(Thinking of lizards, the dragon I know who works there is well aware of what's going on, I've not asked him if he's signed it).
With Thanksgiving this week that’s a good bet.
Folks in Silicon Valley don’t travel without their laptop
That's probably the case.
I was thinking if there was a schism, that OpenAI's secrets might leak. Real "open" AI.
Someone mentioned the plight of people with conditional work visas. I'm not sure how they could handle that.
Depending on the “conditionals,” I’d imagine Microsoft is particularly well-equipped to handle working through that.
Microsoft in particular is very good at handling immigration and visa issues.
I'm waiting for Emmett Shear, the new iCEO the outside board hired last night, to try to sign the employee letter. That MSFT signing bonus might be pretty sweet! :-)
Haha that would be cute. This whole affair is so Sorkinist.
Bingo. The fact they all felt compelled to sign this could just as easily be a sign the board made the right decision, as the opposite.
Some people value their integrity and build a career on that.
Not everything has to be done poorly.
How do you know the remaining people aren't there because of some of the board members? Perhaps there is loyalty in the equation.
[flagged]
This whole saga is clearly not related to those allegations, which had been floating around long before this past Friday and did not make any impact due to a presumable lack of evidence.
Have those been substantiated in any manner? I was interested in the details, and all I discovered were a few articles from non-mainstream outlets (which may still be valid) and a message from the larger family that the sister was having severe mental health issues.
I am not saying this didn’t happen, but I would like to understand if there has been follow up confirmation one way or another.
A lot of the accusations sound like those "organized gang stalking" groups you see on social media, which are mostly people with what sounds like paranoid schizophrenia confirming each other's delusions.
I don't mean to sound pejorative with the word delusion here: but they all tend to have one fixed common belief that everyone or almost everyone around them (neighbors, family, random people on the street) is colluding against them, usually via electronic means.
So these employees are supposed to just sit by while their workplace explodes around them because there are unsubstantiated accusations against the ex-CEO that bear zero relationship to the aforementioned workplace explosions?
I'm no fan of rich people as a principle given they suck wealth from the poor and Smaug the money away like the narcissists they are, but this is the definition of that horrible cancel culture. She's accused him which isn't something the public should act upon. If proven, then it's appropriate to form that negative opinion.
It's not "cancel culture" to ask that your leaders address salacious allegations brought against them by their own family. The question of his firing is way less serious from a ethical standpoint, and yet this is what OpenAI employees are willing to stick their necks out for? That is some telling prioritization.
Burden of evidence for accusations is always on the accuser, the accused could flat out ignore and take them to a court of law and that would be perfectly reasonable.
I have no idea nor do I care about what Altman's sister accuses him of, but until it's conclusively proven in a court of law it's not something that should be used as a basis for anything of consequence.
Remember: Innocent until proven guilty beyond a shadow of a doubt.
I'll start by saying that these particular claims seem to mean absolutely nothing per everything I've heard. It seems to be a mentally ill person accusing someone of something that may very well have never happened.
Innocent until proven guilty though is the standard for legal punishment, not for public outrage. It's a standard meant to constrain the use of violence against an individual, not to prevent others from adjusting their association to them.
Also, the standard is "beyond a reasonable doubt". Nothing outside perhaps of mathematical truths could be proven "beyond a shadow of a doubt". There's always some outside chance one can imagine.
> Innocent until proven guilty though is the standard for legal punishment, not for public outrage.
It's still a useful barometer to calibrate one's own, individual participation in said public outrage.
For our own mental health, for our relationships with people around us, and to avoid being manipulated by dishonest people, it would behoove each and every one of us to adopt "innocent until proven guilty" regardless of whether we're legally compelled to do so.
Our required burden of proof can be lower than a court's but it should be higher than "unsubstantiated accusation by mentally ill family member that is uniformly denied by the rest of the family".
> Our required burden of proof can be lower than a court's but it should be higher than "unsubstantiated accusation by mentally ill family member that is uniformly denied by the rest of the family".
Very much agreed with this.
The existence of legal process does not preclude the responsibility of the board and employees to address allegations of this seriousness. And it is serious–it is not normal to be accused of rape by a sibling. Addressing the elephant in the room is not the same thing as being guilty by default; you are falsely conflating acknowledgement with punishment. The lack of pressure to at least produce a public statement, while this lesser drama generates plenty, speaks volumes about the lack of moral guiding principles at OpenAI.
What should he do, release a statement "The allegations are not true, my sister is mentally ill"? What would be the point? It will just attract yellow press.
He wouldn't want to attract attention to it regardless of whether he is guilty or innocent. What is a disappointing moral failure, though, is that employees and especially the board didn't demand such a statement. Who the heck wants to work next to someone when it's an unaddressed question whether they did such a thing?
Let's say the employees do the thing you consider to be ethical and demand an accounting, and Altman gets up and says "it's false, it never happened."
What then? Has anything really changed? I would expect him to say the same thing regardless of its truth, so it seems to me we have no additional information: she still says it happened, the rest of the family says she's delusional, and he (obviously) says he's innocent.
Are your hypothetical morally-concerned employees satisfied now? If so, why?
If they're not satisfied, how does this not create an environment where the only thing you have to do to destroy a company is pay a {sibling, cousin, neighbor, ex-lover, etc} to claim something damaging about its CEO?
You're supposed to do an actual investigation, not just ask one party's opinion and call it a day. C'mon, we're talking about his sister making this accusation, not some rando gold digger; I don't need to justify that some due diligence is in order. Innocent until proven guilty only works when allegations are investigated; otherwise everyone is always "innocent" because you have just chosen not to look.
Now you're moving the goalposts. Until now you've been demanding that "leaders address salacious allegations brought against them by their own family" and that they "at least produce a public statement". What you're now demanding is the purview of the law, not the board.
If her allegations are true, he should face the consequences, but they should come first through the system that is specifically designed for testing the truth of allegations that are this serious. OpenAI is under no obligation to launch an investigation themselves in response to an indictment-by-Twitter-mob.
Your inability to understand the difference between a criminal trial and company leadership practicing due diligence regarding claims of misconduct does not mean I'm "moving the goalposts". What I've suggested from the start is normal practice for any employee at a company who is accused of sexual misconduct, at least at companies that take ethical violations seriously. You think Apple wouldn't investigate something like this? Forget about it. Based on their complete lack of acknowledgement of allegations this serious, which on their surface there is no reason to immediately reject as lacking credibility, this is evidently not one of those organizations. Take it easy.
>You're supposed to do an actual investigation,
Holding trial in a court of law is that "investigation".
>not just ask one party's opinion and call it a day.
Except that's what you've been saying OpenAI employees should do.
>C'mon, we're talking about his sister making this accusation, not some rando gold digger; I don't need to justify that some due diligence is in order.
Presuming guilt until proven innocent is the literal opposite of due diligence.
It doesn't matter if the accuser is a sibling, a spouse, an (ex-)lover, a friend, a stranger, or a little green man from Mars. Due diligence is considering the allegations put forth before the court and the evidence provided to either prove or disprove those allegations, with the burden of evidence primarily lying with the accuser.
>Innocent until proven guilty only works when allegations are investigated; otherwise everyone is always "innocent" because you have just chosen not to look.
You are correct that everyone is presumed innocent of any allegations until the case is brought to a trial and judgment is passed in a court of law with no chance for further appeals. If an accuser never files a lawsuit to bring their allegation to trial, the only way we can consider the accused is that he is innocent of any allegations.
>And it is serious–it is not normal to be accused of rape by a sibling.
Thanks, I care even less now if that's even possible.
"Woman coming out with sexual assault allegations against man of prominence." is a dime a dozen occurence; most of them just end up wasting everyone's time due to flimsy or even non-existent hard evidence. Engaging in character assassination, aka cancel culture, on the basis of such nonsense plays right into the hands of the accuser.
The court of law exists specifically to deal with these kinds of allegations, acting appropriately and as necessary within the legal process is the extent of the responsibilities and duties owed by the accused. The accused owes the court of public opinion nothing, much less character assassins such as yourself.
Choosing to look the other way instead of addressing uncomfortable questions is a choice, and the law does not absolve you of your moral obligation to practice due diligence in choice of leaders. Take it easy.
This morality you’ve constructed sure would lead to a lot of people never facing any sort of accountability for their actions. I’m not really in favor of handing all judgment of people over to criminal court systems. They have a higher standard since they have higher stakes.
His sister is clearly mentally ill. Allegations are impossible to prove or deny. They should simply be ignored like the unhinged rantings of mentally ill people in general.
Mental illness is the norm for child abuse victims. It's not like she has a demonstrated pattern of doing this with other people, and she has appeared internally consistent over the course of years, so it should at least be addressed instead of hand waved away.
Her accusations against him are for acts that supposedly happened when she was too young to have formed long-term memories of them. She also thinks he is hacking her WiFi and somehow shadowbanning her on multiple social media sites he has no control over. It screams of paranoid delusions.
Since when can 4-year-olds not form memories? This is neither congruent with medical literature nor experience.
It's called infantile amnesia; it's why people can't remember their early childhood, as the brain is not great at storing and generating autobiographical memories until about age 5. And it is well recognized in the literature.
You're misrepresenting the medical literature. It is not abnormal for children to form memories of highly personal events at that age.
This discussion is irrelevant, there is no point in entertaining the delusions of a paranoid schizophrenic. There is literally no direction for this to go in.
Anyone that has professionally worked with the mentally ill knows that entertaining her claims is a complete and utter waste of time.
You are not her physician, stop pretending to speak on authority. Also, her being paranoid about the ultra-wealthy tech mogul that allegedly assaulted her is about as rational as half the tripe I hear from "sane" people on the internet everyday.
You don't need to be a physician to tell when someone is clearly mentally ill and/or making stuff up.
It must be an incredible burden always knowing the truth, when other people actually have to do investigations for that.
When someone has a giant gaping wound you don't need an expert to understand what it is.
Some forms of mental illness display themselves nearly as clearly and obviously as a giant gaping wound.
Bad things can happen to mentally ill people.
Yes, but it doesn't mean you need to entertain everything they say. Especially not paranoid delusions like "Apple and Google and Twitter are conspiring with Sam Altman to shadowban me."
I was told we’re supposed to #BelieveAllWomen.
> suck wealth from the poor and Smaug the money away like
What does that even mean? They buy up factories and let them sit idle?
It's based on the false premise that there is a specific amount of wealth and that's it (which also requires fixed productivity, which is a crazy premise if one knows anything about the past several hundred years or has ever read a single history textbook). So in order for someone to be rich, they had to steal it from someone else.
If you follow it to the logical conclusion, it's nothing more than repackaged original sin. All wealth, all income, is stolen from someone else with less, therefore you are all evil without exception to the extent you have anything. It's a liberal form of original sin (for humans to thrive we must alter our environment, we must save, to the extent you manage to accomplish that, you're evil, therefore you're evil for existing).
Keep in mind of course that China nicely disproved that premise in approximately the most dramatic way possible from ~1990-2020. If the pie were actually finite and had to be divided, swapped around, then China's extreme rise in wealth would have wiped out the equivalent of all of Western Europe's wealth combined.
The western financial system is based not on a finite amount of wealth, but definitely on the control of a finite amount of wealth vehicles. Consider, for example, the desperate measures that the US uses to try to curtail the access to technology by other countries such as China. Or, how the tech cartel uses their money to buy any new company that might become a competitor. In a perfect competitive system, each company and country would do their own work and not worry more than about fair competition. The modern financial system insists in controlling access to wealth across the world, while also avoiding competition.
No, it’s based on the fact that workers (often poor) generate all wealth, yet they don’t get to keep it.
In China they do get to keep more proportionally, as a class. That only happens because they form the ruling class.
And yet workers in those societies tend to end up poorer in the absolute sense. Funny how that works.
If you pay attention to history, they started out far poorer and improved quicker than in comparable countries.
Just because revolutions are more likely to happen in poor countries doesn't mean the revolutions caused the poverty.
That’s fair but I’d like to believe (and I know I’m speaking from emotion here and not any rationality) that the jury is still out on this one, and that improvement is not sustainable over the next few decades (in other words, this was a bubble). The alternative is just too hard to accept - it means we should just elect a dictator for life who knows best how to run a centrally planned economy for a low low price of not making fun of them too much and not asking inconvenient questions.
If you study further you will find out that socialist countries are and have been far more democratic than capitalist ones.
Of course, the ruling class of capitalist countries are incentivised to use their considerable power to lie about any socialist project and portray it as evil.
I grew up in a socialist country and saw how democratic it was first hand. No thanks.
I grew up in a post-socialist country. Most older people had some criticism, but overall missed socialism.
I always found it amazing to hear about workplace councils that could fire the boss or the very low rents.
> It's based on the false premise that there is a specific amount of wealth and that's it
Absolutely not. This is, in today's world, a perpetual cycle: pay isn't keeping up with inflation, yet consumer costs across a wide range of vital services keep increasing. This keeps the poor poor(er) and the rich rich(er).
EDIT: Fark.com always has perfect timing: https://www.cnbc.com/2023/11/20/60percent-of-americans-live-...
Why are 60% of Americans living this way when the <=1% could never, in their entire lives, spend the money they have (liquid or otherwise)? Why do the <=1% need all of the assets they have (planes, multiple large houses, mega yachts, and so on)? What kind of human parasite justifies that while they step on the backs of those who make money for them?
The poor work in the factories, yet the owners of the factories get paid without having worked.
Comment was deleted :(
Smaug is a dragon from The Hobbit who sits on a pile of gold and plunder, not smog.
Hah didn’t even realize my comment could be read that way (yeah I do know who Smaug is)
Name one Smaug-like rich person that is just sitting on massive amounts of wealth.
That’s the point — any person sitting on massive wealth is definitionally Smaug-like, because that’s exactly what Smaug did.
But WHO is sitting on massive wealth Smaug-like? Who? That is what I asked.
[dead]
We cannot ascertain the circumstances of Altman's family. His sister's allegations are serious, but no legal action was taken. However, it is surprising to see the treatment of his family, especially considering he is a man who purports to "care" about the world and envisions a bright future for all. Regardless of whether his sister has mental health issues or financial problems, it is unlikely that such a caring individual would not extend a helping hand to his family. This is especially true given her circumstances, which have effectively relegated her to the bottom of the social hierarchy as it is defined. Isn't it?
The entire situation involving the OpenAI board and the people involved seems like the premise for a TV drama. It appears as though there is a deeper issue at play, but this spectacle is merely an extravagant distraction, possibly designed to conceal something. Or perhaps it's similar to Shakespeare's concept: life is a theater, and some of THEM are merely actors...
Like everything else, it might revolve around money and power... I no longer believe in OpenAI's mission, especially after they subtly changed their "core values".
He has tried to help her; even she has admitted that he offered to give her a house. When she asked for money, he had conditions, like going back on her medication, which she refused and then complained about on social media. Some people don't want to be helped in the ways they need.
Help with conditions is not really help; it is a deal.
In this situation, the increasing unanimity, now approaching 90%, sounds more like groupthink than honest opinion.
Talk about “alignment”!
Indeed, that is what "alignment" has become in the minds of most: Groupthink.
Possibly the only guy in a position to matter who had a prayer of de-conflating empirical bias (IS) from values bias (OUGHT) in OpenAI was Ilya. If they lose him, or demote him to irrelevance, they're likely a lot more screwed than losing all 700 of the grunts modulo job security through obscurity in running the infrastructure. Indeed, Microsoft is in a position to replicate OpenAI's "IP" just on the strength of its ability to throw its inhouse personnel and its own capital equipment at open literature understanding of LLMs.
Incredible. Is this unprecedented, or have there been other cases in history where the vast majority of employees stand up against the board in favor of their CEO?
I highly doubt this is directly in support of Altman and more about not imploding the company they work for. But you never know.
I'm sure this is a big part of it. But everyone I know at OpenAI (and outside) is a huge Sam fan.
> everyone I know at OpenAI (and outside) is a huge Sam fan
Everyone you know is a huge Sam fan? What?
I was going to say, I wouldn't be surprised if I am one of only a handful of people I know who even know who sama is.
I reckon the people working at OpenAI know who sama is, though.
Could also be an indictment of the new CEO, who is no Sam Altman.
> Is this unprecedented, or have there been other cases in history where the vast majority of employees stand up against the board in favor of their CEO?
It's unprecedented for it to be happening on Twitter. But this is largely how Board fights tend to play out. Someone strikes early, the stronger party rallies their support, threats fly and a deal is found.
The problem with doing it in public is nobody can step down to take more time with their families. So everyone digs in. OpenAI's employees threaten to resign, but actually don't. Altman and Microsoft threaten to ally, but they keep backchanneling a return to the status quo. (If this article is to be believed.) Curiously quiet throughout this has been the OpenAI board, but it's also only the next business day, so let's see how they can make this even more confusing.
Jobs was fired from Apple, and a number of employees followed him to Next.
Different, but that's the closest parallel.
Only a very small number of people left with Jobs. Of course, probably mainly because he couldn't necessarily afford to hire more without the backing of a trillion-dollar corporation...
Imagine if Jobs had gone to M$.
He would have been almost immediately fired for insubordination.
Jobs needed the wilderness years.
Jobs getting fired was the best thing that could have happened to him and Apple.
No, the failures at NeXT weren’t due to a lack of money or personnel. He took the people he wanted to take (and who were willing to come with him).
Apple back then was not a trillion dollar corporation.
Microsoft now is.
Gordon Ramsay quit Aubergine over business differences with the owners and had his whole staff follow him to a new restaurant.
I'm not going to say Sam Altman is a Gordon Ramsay. What I will say is that they both seem to have come from broken, damaged childhoods that made them what they are, and that it doesn't automatically make you a good person just because you can be such an intense person that you inspire loyalty to your cause.
If anything, all this suggests there are depths to Sam Altman we might not know much about. Normal people don't become these kinds of entrepreneurs. I'm sure there's a very interesting story behind all this.
Aaand there you have it: cargo culting in full swing.
I don't think you mean cargo culting. Cult of personality?
Cargo cult of personality?
Little care packages of seemingly magical AI-adjacent tech wash into our browsers and terminals, and suddenly a large and irrational following springs up to worship some otherwise largely unfamiliar personage.
In favour of the CEO who was about to make them fabulously wealthy. FTFY.
Yeah, especially with the PPU compensation scheme, all of those employees were heavily invested in turning OpenAI into the next tech giant, which won't happen if Altman leaves and takes everything to Microsoft
and there aint nothing wrong with wanting to be fabulously wealthy.
of course not, but at least have the decency to admit it - don't hide behind some righteous flag of loyalty and caring.
That is entirely dependent on how that wealth is obtained
Greed is good, eh Gordon Gekko?
[dead]
Market Basket.
Oh yes, I lived through this and it was fascinating to see. Very rarely does the big boss get the support of the employees to the extent that they are willing to strike. The issue was that Artie T. and his cousin Artie S. (confusingly, they had the same first name) were both roughly 50% owners and at odds. Artie S. wanted to sell the grocery chain to some big public corporation, IIRC. Just before, Artie T. had been running an outstanding 4% off on all purchases for many months, as some sort of very generous promo. It sounded like he really treated his employees and his customers (community) well. You can get all inspirational about it, but he described supplying food to New England communities as an important thing to do. Which it is.
I had to click too many links to discover the story, so here's a direct link to the New England Market Basket story: https://en.wikipedia.org/wiki/Market_Basket_(New_England)#20...
Comment was deleted :(
Comment was deleted :(
doubtful since boards don't elsewhere have an overriding mandate to "benefit humanity". usually their duty is to stakeholders more closely aligned with the CEO.
At this point it might as well be 767 out of 770, with 3 exceptions being the other board members who voted Sam out.
Sure it could be a useful show of solidarity but I'm skeptical on the hypothetical conversion rate of these petition signers to actually quitting to follow Sam to Microsoft (or wherever else). Maybe 20% (140) of staff would do it?
One of those board members already did sign!
It depends on the arrangement of the new entity inside Microsoft, and whether the new entity is a temporary gig before Sam & co. move to a new goal.
If the board had just openly announced this was about battling Microsoft's control, there would probably be a lot more employees choosing to stay. But they didn't say this was about Microsoft's control. In fact they didn't even say anything to the employees. So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.
> So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.
Maybe. Microsoft is a particular sort of working environment, though, and not all developers will be happy in it. For them, the question would be how much are they willing to sacrifice in service to Altman?
I think a lot of them, possibly including Altman, Greg, and the three top researchers, are under the assumption that the stint at Microsoft will be temporary until they figure out something better.
Condition might be that it is hands-off.
Microsoft probably has a better claim than anyone as to being "hands-off" with recent acquisitions, but that's still a huge gamble.
Surprisingly, Ilya apparently has signed it too and just tweeted that he regrets it all.
What's even going on?
That's news from almost a whole day ago. This carousel turns fast. Try to keep up... :-)
I would love to see the stats on hacker news activity the last few days
Yep. Maybe they assigned a second CPU core to the server[1].
[1] HN is famous for being programmed in Arc and serving the entire forum from a single processor (probably multicore). https://news.ycombinator.com/item?id=37257928
The board might assume they don't need those employees now that they have AI
It's going to be interesting when we have AI with human level performance in making AIs. We just need to hope it doesn't realise the paradox that even if you could make an AI even better at making AIs, there would be no need to.
Why would there be no need? I'm struggling to understand the paradox.
If you're trying to maximize some goal g, and making better AIs is an instrumental goal that raises your expected value of g, then if "making an AI that's better at making AIs" has a reasonable cost and an even higher expected value, you'd jump to seize the opportunity.
Or am I misunderstanding you?
It's a bit of a confusing paradox to try to explain, but basically once we have an AI with human level ability at making AIs there's no longer any need to aim higher, because if we can make a better AI then so can it. The paradox/joke I was trying to convey is that we need to hope that that AI doesn't realise the same thing, otherwise it could just refuse to make something better than itself.
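To make the stopping rule concrete, here is a toy sketch in Python (my own framing with made-up skill numbers, not anything from this thread): keep building better AI-makers directly only while the best one is still worse at the job than its builder; once it matches, delegation suffices.

    # Toy illustration with hypothetical skill values on an arbitrary scale.
    def build_until_delegable(builder_skill, first_ai_skill, step=1.2):
        skill = first_ai_skill
        generations = 0
        while skill < builder_skill:   # still worth building directly
            skill *= step              # assume each generation improves a bit
            generations += 1
        return skill, generations      # delegation now suffices

    skill, n = build_until_delegable(builder_skill=1.0, first_ai_skill=0.5)
    print(f"built {n} generations, stopping at skill {skill:.2f}")

The joke, then, is that an AI applying the same stopping rule to itself would also refuse to build anything better.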
Not a chance. Nobody can drink that much Kool-Aid. That said, the mere fact that people can unironically come to this conclusion has driven some of my recent posting to HN, and here's another example.
the comment you're replying to is written in jest!
Now you are on to something...
Or what, they will quit and give up all their equity in a company valued at 86bn dollars?
Is Microsoft even on record as willing to poach the entire OpenAI team? Can they?! What is even happening.
They don't have that valuation now. Secondly, yes, MSFT is on record about this. Third, Benioff (Salesforce) has said he'll match any salary and that people should submit resumes directly to his ceo@salesforce.com email, and other labs like Cohere are trying to poach leading minds too.
Benioff and all these corporate fat cats should remove non-competes from their employment contracts if they want me to ever take them seriously.
Sounds like quite a coup for Microsoft. They get the staff and the IP and they don’t even have to pay out the other 51% of investors.
Yes, and yes. Equity is worthless if a company implodes. Non competes are not enforceable in California.
Come on, I absolutely agree with you, signing a paper is toothless.
On the other hand, having 90% of your employees quiet quit is probably bad business.
Google, Microsoft, Meta I have to assume would each hire them.
Apparently Sam isn't in the Microsoft employee directory yet, so he isn't technically hired at all. Seems like he loses a bit of leverage over the board if they think he and Microsoft are actually bluffing and the employment announcement was just a way to pressure the board into resigning.
Look at the number of tweets from Altman, Brockman and Nadella. I also think they are bluffing. They have launched a media campaign in order to (re)gain control of OpenAI.
I’m sure it might happen. But it hasn’t happened yet.
That doesn’t really mean anything, especially on a holiday week the wheels move pretty slowly at a company that size. It’s not like Sam is hurting for money and really needs his medical insurance to start today.
Point is, he loses credibility if the board thinks he isn't actually going through with joining Microsoft and is just using the announcement as a negotiating tactic to scare them.
Because the whole "the entire company will quit and join Sam" depends on him actually going through with it and becoming an employee.
I see it the other way: Satya has clearly stated that he'd hire Sam and the rest of OpenAI anytime, but as soon as Sam is officially hired it might be seen as a door closing on any chance to revive OpenAI. Satya saying "Securing the talent" could be read as them working for OpenAI, for Microsoft, or for a Microsoft-funded new startup.
I'm pretty sure the board takes the threat seriously regardless.
OAI cares more about the likelihood 90% of the employees leave than what Sam does or doesn't do.
The employees mass resigning depends entirely on whether Sam actually becomes a real employee or not. That hasn't happened yet.
But MS has said they are willing to hire Sam/Greg and the employees have stated that they are willing to follow Sam/Greg.
If you think that Satya will go back on his offer argue that, but otherwise it seems like the players are Sam/Greg and the board.
You make it sound like Prigozhin’s operation.
He will most likely join M$ if the board does not resign, because there is no better move for him then. But he is leaving the board time to see that, adding pressure together with the employees. It does not mean he is bluffing (what would be a better move in this case instead?)
All the employees threatening to leave depends on him actually becoming a Microsoft employee. That hasn't happened yet. So everyone is waiting for confirmation that he's indeed an employee because otherwise it just looks like a bluff.
People are waiting for the board's decision. It is in Microsoft's interest to return Sam to OpenAI. ChatGPT is a brand at this point, and OpenAI controls a bunch of patents and stuff.
But Sam will 100% be hired by Microsoft if that doesn't work out. Microsoft has no reason not to.
It was reported elsewhere in the news that MS needed an answer to the dilemma before the market opened this morning. I think that's what we got.
Going to MS doesn’t seem like the best outcome for Sam. His role would probably get marginalized once everything is under Satya’s roof. Good outcome for MS, though.
you seriously think being in the employee directory beats being announced publicly by the CEO?
So, this is the second employee revolt with massive threats to quit in a couple days (when the threats with a deadline in the first one were largely not carried out)?
Was there any proof that the first deadline actually existed? This at least seems to be some open letter.
Are we aware of a timeline for this? E.g. when will people start quitting if the board doesn’t resign?
the original deadline was last Saturday at 5pm, so I would take any deadline that comes out with a grain of salt
Comment was deleted :(
So I can't check this at work, but have we seen the document they've all been signing? I'm just curious as to how we're getting this information.
Yes, this is the letter: https://twitter.com/karaswisher/status/1726599700961521762
As an aside: that letter contains one very interesting tidbit: the board has consistently refused to go on the record as to why they fired Altman, and that alone is a very large red flag about their conduct since the firing. Because if they have a valid reason they should simply state it and move on. But if there is no valid reason, it's clear why they can't state it, and if there is a valid reason that they are not comfortable sharing, then they are idiots, because all of the events so far trump any such concern.
The other stand-out is the bit about destroying the company being in line with the mission: that's the biggest nonsense I've ever heard and I have a hard time thinking of a scenario where this would be a justified response that could start with firing the CEO.
I wonder if there's an outcome where Microsoft just _buys_ the for-profit LLC and gives OpenAi an endowment that will last them for 100 years if they just want to do academic research.
Why bother? They seem to be getting it all mostly for “free” at this point. Yeah, they are issuing shares in a non-MSFT sub entity to create an on-paper replacement for people’s torched equity, but even that isn’t going to be nearly as expensive or dilutive as an outright acquisition at this point.
There are likely 100 companies worldwide that are ready, with presentation decks already created, to absorb OpenAI in an instant. The board knows it still has some leverage.
To whoever is CEO of OpenAI tomorrow morning: I'll swing by there if you're looking for people.
Many of those employees will be disappointed. MS says it will extend a contract to each one, but how many of those 700 are really needed when MS already has a lot of researchers in that field? Maybe the top 20% will have an assured contract, but it's doubtful the rest will pass the 6-month mark.
Microsoft gutting OpenAI's workforce would really make no sense. All it would do is slow down their work and slow down the value and return on investment for Microsoft.
Even if every single OpenAI employee demands $1m/yr (which would be absurd, but let's assume), that would still be less than $1bn/yr total, which is significantly less than the $13bn that MSFT has already invested in OpenAI.
It would probably be one of the worst imaginable cases of "jumping over dollars to chase pennies".
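(As a rough sanity check on those numbers: 700 employees at $1m/yr is about $0.7bn/yr, i.e. roughly 5% per year of the $13bn Microsoft has already invested.)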
Microsoft has already done major layoffs over the last year of their own employees. Why wouldn’t they lay off OpenAI employees?
You're basically asking "why would a company lay off employees in one business unit and not another?"
To which the answer is completely obvious: it depends on how they view the ROI potential of that business unit.
imagine being in the last round of interviews for joining OpenAI…
imagine receiving an offer, quitting your current job and waiting to start the new position.
Torrid pace of news speculation --> by the end of the week Altman back with OpenAI, GPT-5 released (AGI qualified) and MSFT contract is over.
what does this even mean? what does signing this letter mean? quit if you don't agree and vote with your feet.
It means "if we can't have it, you can't either". It's a powerful message.
Their app was timing out like crazy earlier this morning, and now appears to be down. Anyone else notice similar? Not surprising I guess, but what a Monday to be alive.
[dead]
Can't OpenAI just use ChatGPT instead of workers? I am hearing AI is intelligent and can take over the world, replace workers, cure disease. Why doesn't the board buy a subscription and make it work for them?
Because AI isn't here to take away wealth and control from the elite. It's to take it away from general population.
Correct, which is why Microsoft must have OpenAI's models at all cost, even if that means working with people such as Altman. Notice that Microsoft is not working with the people that actually made ChatGPT; they are working with those on their payroll.
The fact that these people aren't currently willing to "rewind the clock" about a week shows the dangers of human ego. Nothing permanent has been done that can't be undone fairly simply, if all parties agree to undo it. What we're watching now is the effect of ego momentum on decision making.
Try it. It's not a crazy idea. Put everything back the way it was a week ago and then agree to move forward. It will be like having knowledge of the future, with only a small amount of residual consequences. But if they can do it, it will show a huge evolutionary leap forward in the ability of organizations to self-correct.
Trust takes years to build and seconds to destroy.
It's like a cheating lover. Yes I'm sure both parties would love to rewind the clock, but unfortunately that's not possible.
"Trust arrives on foot and leaves on horseback."
--Dutch proverb
In general this can't work.
People are notoriously ruthless to people who admit their mistakes. For example, if you are in an argument and you lose (whether through poor debating or because your argument is plain wrong), and you *admit it*, people don't look back at it as a point of humility - they look at it as a reason to dogpile on you and make sure everyone knows you were wrong.
In this case, it's not internet points - it's their jobs, and a lot of money, and massive reputation - on the line. If there is extreme risk and minimal, if any, reward for showing humility, why wouldn't you double down and at least try to win your fight?
Is this your opinion or is this something that’s an actual theory in sociology or psychology, or at least something people talk about in practice? Not trying to be mean, just to learn.
There’s a whole genre of press releases and videos for apologies, so I’m not sure it’s such a reputational risk to admit one is wrong. It might be a bigger risk not to, it would seem.
But what you say sounds interesting.
Did you see how people reacted to Ilya apologizing? Read through the early comments here; they aren't very positive. They mostly blame him for being weak, etc. Before he wrote that, people were more positive toward Ilya, but his admitting fault made people home in on it.
I don't think it's reaction to apologizing, but to "yea we really had no plan whatsoever" which became clear with the tweet.
Alternative (being clueless AND unrepentant) would receive even worse reaction. Now people ISTM mostly pity Ilya as being way out of his league.
People tend to speak and act very differently in pseudonymous online forums, with no skin in the game, than they ever would in "the real world", where we are constantly reminded of real relationships which our behavior puts at risk.
The only venues where I've witnessed someone being attacked for apologizing or admitting an error have been online.
People are more honest about how they feel online. They might not attack openly like that offline, but they do think those things.
So you see people who refuse to acknowledge their mistakes fail upwards, while those who do admit them are often pushed down. How often do you see high-level politicians admitting their mistakes? They almost never do, since those who did never got that far.
I don't think people are more honest about how they feel online. I think that the venue makes one feel differently.
Probably < 1% of the people in the peanut gallery have even met the person they're attacking. Without participating in online discussion forums, how many of them would even know who Ilya is, or bother attacking him?
Honestly, I think the "never apologize or admit an error" thing is memetic bullshit that is repeated mindlessly, and that few people really believe, if challenged, that it's harmful to admit an error; they're saying it because it's a popular thing to say online. I've posted the same thing too, in the past, but having given it some thought I don't really believe it myself.
I think the big disconnect here is that you are thinking about personal relationships and not less personal ones. You should admit guilt with friends and family to mend those relationships, but in a less personal space like a company the same rules don't apply; onlookers do not react positively to admissions of guilt.
Ilya should have sent that message to Sam Altman in private instead of public.
I would be interested too if that's an actual theory. My experience has largely been that if you're willing to admit you were wrong about something, most reasonable people will appreciate it over you doubling down.
If they pile on after you have conceded, typically they come off much worse socially in my opinion.
(This is a reflection on human behavior, not a statement about any specific work environment. How much this is or isn't true varies by place.)
In my experience, it's something of a sliding scale as you go higher in the amount of politicking your environment requires.
Lower-level engineers and people who operate based on facts appreciate you admitting you were incorrect in a discussion.
The higher you go, the more what matters is how you are perceived, and the perceived leverage gain of someone admitting or it being "proven" they were wrong in a high-stakes situation, not the facts of the situation.
This is part of why, in politics, the factual accuracy of someone's accusations may matter less than how successfully people can spin a story around them, even if the facts of the story are proven false later.
I'm not saying I like this, but you can see echoes of this play out every time you look at history and how people's reactions were more dictated by perception than the facts of the situation, even if the facts were readily available.
honestly, that's not my experience. sure, you can admit a mistake in front of your friends, family and people that know you, even if they are not your friends.
Admitting the mistake in front of strangers usually leads to them taking the shortcut next time and assuming you are wrong again.
you won't get any awards for admitting the mistake.
The accent here being on "reasonable". Very few actually are. Myself, I once recommended a colleague for a job and they didn't take him because he was too humble and "did not project confidence" (and oh my, he would have been top 5% in that company at the very least).
There is a reason why CEOs are usually a showman type.
> If they pile on after you have conceded, typically they come off much worse socially in my opinion.
Those who are obsessed with their reputation above morality have usually had a lot of practice punching down, and they don't really look as bad as someone being dunked on who gets confused.
I think this is like a cornerstone of bullying. It seems to work in social situations. I'm sure everyone reading this comment can picture it.
Definitely anecdotal - I'm not sure on actual statistics as I'm sure that would be somewhat hard to measure.
It’s not that simple… it depends on how you admit the mistake. If done with strength, leadership, etc., and a clear plan to fix the issue it can make you look really good. If done with groveling, shame, and approval seeking, what you are saying will happen.
The case here is not about admitting mistakes and showing humility. Admitting your mistake does not immediately mean that you get a free pass to go back to the way things were without any consequence. You made a mistake, something was done or said. There are consequences to that. Even if you admit your mistake, you have to act with the present facts.
Here, the consequences are very public, very clear. If the board wanted Altman back for example, they would have to give something in return. Altman has seemingly said he wants them gone. That is absolutely reasonable of him to ask that, and absolutely reasonable of the board to deny him that.
The context of my response was rewinding the clock: admitting the mistake wouldn't be enough; it would essentially mean bringing Altman back on.
As you said:
> [it's] absolutely reasonable of the board to deny him that.
My argument is essentially that there is minimal, if any, benefit to the board in doing this _unless_ they were able to keep their positions. Seeing as that doesn't seem to be a possible outcome, why not at least try, even if it results in a sinking ship? For them, personally, it's sinking anyway.
> People are notoriously ruthless to people who admit their mistakes
Some people, yes. Not all. I would say this attitude does not correlate with intelligence/wisdom.
What money do the independent board directors stand to gain? They have no equity in the company and their resumes have more than enough employable credentials (before this past Friday) to warrant not caring for money.
> Nothing permanent has been done that can't be undone fairly simply
...aside from accusing Sam Altman of essentially lying to the board?
Fair point but a footnote given the amount of fall-out, that's on them and they'll have to retract that. Effectively they already did.
If they retract that, they open themselves to potential, personal, legal liability which is enough to scare any director. But if they don't retract, they aren't getting Altman back. Thus why the board likely finds themselves between a rock and a hard place.
Exactly. If they're not scared by now it is simply because they don't understand the potential consequences of what they've done.
Cofounding a company is in a lot of ways like marriage.
It's not easy, or wise, to rewind the clock after your spouse backstabbed you in the middle of the night. Why would they?
> Put everything back the way it was a week ago and then agree to move forward
Form follows function. This episode showed OpenAI's corporate structure is broken. And it's not clear how that can be undone.
Altman et al have, credit where it's due, been incredibly innovative in trying to reverse a non-profit into a for-profit company. But it's a dual mandate without any mechanism for resolving tension. At a certain point, you're almost forced into committing tax or securities fraud.
So no, even if all the pieces were put back together and peoples' animosities and egos put to rest, it would still be rewinding a broken clockwork mouse.
Small amount of residual consequences? The employees are asking for the board to resign. So their jobs are literally on the line. That's not really a small consequence for most people.
Their board positions are gone either way. If they stay OpenAI is done.
I do think that's almost certainly going to happen. But they're probably still trying to find the one scenario (out of 16 million possibilities, like Dr. Strange in Endgame) that allows them to keep their power or at least give them a nice golden parachute.
Hence why they're not just immediately flipping the undo switch.
They are utterly delusional if they think they will be board members of OpenAI in the future unless they plan to ride it down the drain and if they do that they are in very, very hot water.
Do they face any real consequences?
Good question. Potentially: Liability based on their decisions. If it turns out those were not ultimately motivated by actual concern for the good of the company then they have a lot of problems.
> If it turns out those were not ultimately motivated by actual concern for the good of the company then they have a lot of problems.
The board's duty is to the charitable mission, not to any other concept of the "good of the company", and, apart from the government (which can step in if they are doing something like pursuing their own private profit interest, acting as an agent for someone else's, or committing some other specific public wrong), the set of people able to make a claim is pretty narrow, because OpenAI isn't a membership-based charity in which there are members to whom the board is accountable for pursuit of the mission.
People keep acting like the parent organization here is a normal corporation, and it's not; even the for-profit subsidiary had an operating agreement subordinating other interests to the charitable mission of the parent organization.
I don't think you can wave away your duty of care to close to a thousand people based on the 'charitable mission' and I suspect that destruction of the company (even if the board claims that is in line with the company mission) passes that bar.
I could be wrong but it makes very little sense. Such decisions should at a minimum be accompanied by lengthy deliberations and some very solid case building. The non-profit nature of the parent is not a carte-blanche to act as you please.
> I don't think you can wave away your duty of care to close to a thousand people based on the 'charitable mission'
What specific duty of care do you think exists, and to which thousand people, and on what basis do you believe this duty exists?
Board members are supposed to exercise diligence and prudence in their decisions. They are supposed to take into account all of the results of their actions and they are supposed to ensure that there are no conflicts of interest where their decisions benefit them outside of their role as board members (if there are they should abstain from that particular vote, assuming they want to be on the board in the first place with a potential or actual conflict of interest). Board members are ultimately accountable to the court in the jurisdiction where the company is legally established.
The thing that doesn't exist is a board that is unaccountable for their actions and if there are a thousand people downstream from your decisions that diligence and prudence translates into a duty of care and if you waltz all over that you open yourself up to liability.
It's not that you can't do it, it's that you need to show your homework in case you get challenged and if you didn't do your homework there is the potential for backlash.
Note that the board has pointedly refused to go on the record as to why they fired Altman and that by itself is a very large indicator that they did this with insufficient forethought because if they had there would be an iron clad case to protect the board from the fall out of their decision.
> Board members are supposed to exercise diligence and prudence in their decisions.
Yes, and if they fail to do so in regards to the things they are legally obligated to care for, like the charitable mission, people who have a legally-cognizable interest in the thing they failed to pursue with diligence and prudence have a claim.
But whose legally cognizable interest (and what specific such interest) do you think is at issue here?
> The thing that doesn't exist is a board that is unaccountable for their actions
Sure, there are specific parties who have specific legally cognizable interests and can hold the board accountable via legal process for alleged failures to meet obligations in regard to those specific interests.
I’m asking you to identify the specific legally-cognizable interest you believe is at issue here, the party who has that interest, and your basis for believing that it is a legally-cognizable interest of that party against the board.
We're going around in circles I think but to me it is evident that a somewhat competent board that intends to fire the CEO of the company they are supposed to be governing will have a handy set of items ready: a valid reason, minutes of the meeting where all of this was decided where they gravely discuss all of the evidence and reluctantly decide to have to fire the CEO (handkerchiefs are passed around at this point, a moment of silence is observed), the 'green light' from legal as to whether that reason constitutes sufficient grounds for the dismissal. Those are pre-requisites.
> Yes, and if they fail to do so in regards to the things they are legally obligated to care for, like the charitable mission, people who have a legally-cognizable interest in the thing they failed to pursue with diligence and prudence have a claim.
I fail to see the correlation between 'blowing up the entity' through a set of ill-advised moves and 'taking care of the charitable mission'.
The charitable mission is not a legal entity and so it will never sue, but it isn't a get-out-of-jail-free card for a board that wants to decide whatever it is that they've set their mind to.
> But whose legally cognizable interest (and what specific such interest) do you think is at issue here?
For one: Microsoft has a substantial but still minority stake in the for-profit; there are certain expectations attached to that, and the same goes for all of the employees, both of the for-profit and the non-profit, whose total compensation was tied to the stock of OpenAI, the for-profit. All of these people have seen their interests be substantially harmed by the board's actions, and the board would have had to balance that damage against the weight of the positive effect on the 'charitable mission' in order to be able to argue that they did the right thing here. That's not happening, as far as I can see; in fact the board has gone into turtle mode and refuses to engage meaningfully, and two days later they did it again and fired another CEO (presumably this too is in line with protecting the charitable mission?).
> Sure, there are specific parties who have specific legally cognizable interests and can hold the board accountable via legal process for alleged failures to meet obligations in regard to those specific interests.
Works for me.
> I’m asking you to identify the specific legally-cognizable interest you believe is at issue here, the party who has that interest, and your basis for believing that it is a legally-cognizable interest of that party against the board.
See above, if that's not sufficient then I'm out of ideas.
> We're going around in circles I think but to me it is evident that a somewhat competent board that intends to fire the CEO of the company they are supposed to be governing will have a handy set of items ready: a valid reason, minutes of the meeting where all of this was decided, the 'green light' from legal as to whether that reason constitutes sufficient grounds for the dismissal. Those are pre-requisites.
I think the difference here is that I am fine with your belief that this is what a competent board should have, but I don't think this opinion is the same as actually establishing a legal duty.
> The charitable mission is not a legal entity and so it will never sue, but it isn't a get-out-of-jail-free card for a board that wants to decide whatever it is that they've set their mind to.
The charitable mission is the legal basis for the existence of the corporation and its charity status, and the basis for legal duties and obligations on which both the government (in some cases), and other interested parties (donors, and, for orgs that have them, members) can sue.
> For one: Microsoft has a substantial but still minority stake in the for-profit; there are certain expectations attached to that, and the same goes for all of the employees, both of the for-profit and the non-profit, whose total compensation was tied to the stock of OpenAI, the for-profit
Given the public information concerning the terms of the operating agreement (the legal basis for the existence and operation of the LLC), unless one of those parties has a non-public agreement with radically contradictory terms (which would be problematic for other reasons), I don't think there can be any case that the OpenAI, Inc., board has a legal duty to any of those parties to see to the profitability of OpenAI Global LLC.
> I think the difference here is that I am fine with your belief that this is what a competent board should have, but I don't think this opinion is the same as actually establishing a legal duty.
I don't think we'll be able to hash this out simply because too many of the pieces are missing. But if the board didn't have those items handy and they end up being incompetent then that by itself may end up as enough grounds to show they violated their duty of care. And this is not some nebulous concept, it actually has a legal definition:
https://www.tenenbaumlegal.com/articles/legal-duties-of-nonp...
I went down into this rabbit hole a few years ago when I was asked to become a board member (but not of a non-profit) and I decided that the compensation wasn't such that I felt that it offset the potential liability.
> The charitable mission is the legal basis for the existence of the corporation and its charity status, and the basis for legal duties and obligations on which both the government (in some cases), and other interested parties (donors, and, for orgs that have them, members) can sue.
Indeed. But that doesn't mean the board is free to act with abandon as long as they hold up the 'charitable mission' banner; they still have to act as good board members, and that comes with a whole slew of obligations.
> Given the public information concerning the terms of the operating agreement (the legal basis for the existence and operation of the LLC), unless one of those parties has a non-public agreement with radically contradictory terms (which would be problematic for other reasons), I don't think there can be any case that the OpenAI, Inc., board has a legal duty to any of those parties to see to the profitability of OpenAI Global LLC.
It is entirely possible that the construct as used by OpenAI is so well crafted that it insulates board members perfectly from the fall-out of whatever they decide, but I find that hard to imagine. Typically everything downstream from the thing you govern (note that they retain a 51% stake and that that alone may be enough to show that they are in control to the point that they can not disclaim anything) is subject to the duties and responsibilities that board members usually have.
It's pretty standard to get D&O insurance, which covers errors, omissions, and negligence, as a board member.
At any rate, we don't know that the board doesn't have those minutes. I see no reason to assume they've failed duty of care.
And being a non-profit does, in fact, give the board the sole right to go so far as to dissolve the entire company and donate the proceeds to Anthropic if they decide, in a five-minute Zoom call and at their own discretion, that doing so aligns best with the mission statement in its charter.
D&O insurance doesn't cover malice and purposeful destruction.
If the minutes exist I'm sure they'll be leaked; if they don't, they're in trouble. And if you don't see any reason to assume they've failed their duty of care, that is fine by me, but I think the last few days alone pretty much confirm that they failed it.
It’s not a company - which is why they’re able to do this. We’ve just learned, again, that nonprofits can’t make these kinds of decisions because the checks and balances that investors and a profit motive create don’t exist.
That doesn't matter. Even the members of the board of a non-profit are liable for all of the fall-out from their decisions if those decisions end up not being defensible. That's pretty much written in stone, and it's one of the reasons why you should never accept a board seat outside your area of competence.
> the checks and balances that investors and a profit motive create
Lost money. Same consequence either way, so there is no incentive for them to leave.
They don't have equity in OpenAI though, right? You mean from reputation loss?
For starters about 700 employees seem to think their livelihood matters and that the board didn't exercise their duty of care towards them.
> about 700 employees seem to think their livelihood matters and that the board didn't exercise their duty of care towards them
It is difficult to see how such a duty would arise. OpenAI is a non-profit. The company's duty was to the non-profit. The non-profit doesn't have one to the company's employees; its job was literally to check them.
To check them does not overlap with 'to destroy them at the first opportunity'. There is no way that this board decision - which now is only supported by three of the original nine board members - is going to survive absent a very clear and unambiguous reason that shows that their only remedy was to fire the CEO. This sort of thing you don't do on a gut feeling; you go by the book.
> no way that this board decision...is going to survive absent a very clear and unambiguous reason that shows that their only remedy was to fire the CEO
The simplest explanation is Altman said he wasn't going to do something and then did it. At that point, even a corporate board would have cause for termination. Of course, the devil is in the details, and I doubt we'll have any of them this week. But more incredible than the board's decision is the claim that it owes any duty to its for-profit subsidiary's employees, who aren't even shareholders, but merely holders of some profit-sharing paper.
True, but then the board would have been able to get rid of the controversy on the spot by spelling out their reasoning. Nobody would fault them. But that didn't happen, and even one of the people that voted for Altman's removal has backtracked. So this is all extremely murky and suspicious.
If they had a valid reason they should spell it out. But my guess is that reason, assuming it exists, will just open them up to more liability and that is why it isn't given.
> But more incredible than the board's decision is the claim that it owes any duty to its for-profit subsidiary's employees, who aren't even shareholders, but merely holders of some profit-sharing paper.
Technically they took over the second they fired Altman so they have no way to pretend they have no responsibility. Shareholders and employees of the for-profit were all directly affected by this decision, the insulating properties of a non-profit are not such that you can just do whatever you want and get away with it.
> the board would have been able to get rid of the controversy on the spot by spelling out their reasoning
I don't think they have an obligation to do this publicly.
> even one of the people that voted for Altman's removal has backtracked
I don't have a great explanation for this part of it.
> Shareholders and employees of the for-profit were all directly affected by this decision, the insulating properties of a non-profit are not such that you can just do whatever you want and get away with it
We don't know. This is truly novel structure and law. That said, the board does have virtually carte blanche if Altman lied or if they felt he was going to end humanity or whatever. Literally the only thing that could work in the employees' favor is if there are, like, text messages between board members conspiring to tank the value of the company for shits and giggles.
Capriciousness and board membership are not compatible. The firing of a CEO of a massively successful company is something that requires deliberation and forethought; you don't do that just because you have a bad hair day. So their reasons matter a lot.
What I think is happening is that the reason they had sucks, that the documents they have create more liability, and that they have a real problem in that one of the gang of four is now a defector, so there is a fair chance this will all come out. It would not surprise me if the remaining board members end up in court if Altman decides to fight his dismissal, which he - just as surprisingly - so far has not done.
So there is enough of a mess to go around for everybody but what stands out to me is that I don't see anything from the board that would suggest that they acted with the kind of forethought and diligence required of a board. And that alone might be enough to get them into trouble: you don't sit on a board because you're going off half-cocked, you sit on a board because you're a responsible individual that tries to weigh the various interests and outcomes and you pick the one that makes the most sense to you and you are willing to defend that decision.
So far they seem to believe they are beyond accountability. That - unfortunately for them - isn't the case but it may well be they escape the dance because nobody feels like suing them. But I would not be surprised at all if that happened and if it does I hope they have their house in order, board liability is a thing.
> which he - just as surprisingly - so far has not done
There were so many conflicts of interest at that firm, I'm not surprised by it, either.
> I don't see anything from the board that would suggest that they acted with the kind of forethought and diligence required of a board
We don't know the back-and-forth that led up to this. That's why I'm curious about how quiet one side has been, while the other seemingly launched a coast-to-coast PR campaign. If there had been ongoing negotiations between Altman and others, and then Altman sprung a surprise that went against that agreement entirely, decisive action isn't unreasonable. (Particularly when they literally don't have to consider shareholder value, seemingly by design.)
> they seem to believe they are beyond accountability
Does OpenAI still have donors? Trustees?
I suppose I'm having trouble getting outraged over this. Nobody was duped. The horrendous complexity of the organization was panned from the beginning. Employees and investors just sort of ignored that there was this magic committee at the top of every org chart that reported to "humanity" or whatever.
Agreed, there are a ton of people that should have exercised more caution and care. But it is first and foremost the board's actions that have brought OpenAI to the edge of the abyss and that wasn't on the table a month ago. That that can have consequences for the parties that caused it seems to me to be above question, after all, you don't become board members of a non-profit governing an entity worth billions just to piss it all down the drain and pretend that was just fine.
I totally understand that you can't get outraged over it, neither am I (I've played with ChatGPT but it's nowhere near solid enough for my taste and I don't know anybody working there and don't particularly like either Altman or Microsoft). But I don't quite understand why people seem to think that because this is a non-profit (which to me always seemed to be a fig-leaf to pretend to regulators and governments that they had oversight) anything goes. Not in the world that I live in, you take your board member duties seriously or it is better if you aren't a board member at all.
The OpenAI nonprofit is not on the edge of the abyss, and the board has brought it no closer to being there. If the board thinks the mission of "bringing about AGI which benefits all of humanity unrestricted by concerns of generating revenue" is not best served by productizing LLMs into revenue-generating products, then a mass resignation at its wholly controlled for-profit subsidiary saves them the trouble and cost of a mass layoff.
The board has a massive conflict of interest in that they are also controlling all of the other entities and that alone means that they can't hide behind the purported mission of the non-profit. And even then they may well have to explain to a judge why they thought this hastily taken decision was in line with that mission. I don't see it happening.
But all of that has already been covered upthread. Multiple times.
Neither for-profit corporations nor charities have a general legal duty of care for the livelihood of their employees.
It's all about diligence and prudence. I don't see much evidence of either and that means the employees may well have a point. Incidentally: the word 'care' was very explicitly used in the letter.
> It's all about diligence and prudence.
Diligence and prudence apply to the things they are actually obligated to in the first place, and the employees' livelihood beyond contracted pay and benefits for time actually worked simply is not among them.
> the employees' livelihood beyond contracted pay and benefits for time actually worked simply is not among them
Quite a few of those employees are also stockholders, and besides, this isn't some kids' game where after a few rounds you can throw your cards on the table and walk out because you feel you've had enough. You join a board because you are an adult who is capable of forethought and adult behavior.
I don't quite get why this is even controversial; there isn't a board that I'm familiar with, including at non-profits, that would be so incredibly callous towards everybody affected by their actions with the expectation that they would get away with it. Being a board member isn't some kind of magic invulnerability cloak, and even non-profits have employees, donors and beneficiaries who all have standing regarding decisions affecting their stakeholdership.
> Quite a few of those employees are also stockholders
None of them are stockholders, because (except for the nonprofit, which can't have stockholders even as a corporation) none of the OpenAI entities are corporations.
Some of them have profit-sharing interests and/or (maybe) memberships in the LLC or some similar interest in the holding company above the LLC; the LLC operating agreement (similar in function to a corporate charter) expressly notes that investments should be treated as donations and that the board may not seek to return a profit. The holding company's details are less public, but it would be strange if it didn't have the same kind of thing, since the only reason it exists is to hold a controlling interest in the LLC, and the only way it would make any profit is from profits returned by the LLC.
Hm, ok, I was under the distinct impression that some of the early employees of OpenAI were stock holders in the entity in the middle.
I base that on the graph on this page:
https://openai.com/our-structure
Specifically this graph:
https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...
Now it may be that I got this completely wrong but it looks to me as though there is an ownership relationship (implying stock is involved) between the entity labelled 'Employees and other investors' and the holding company.
> Hm, ok, I was under the distinct impression that some of the early employees of OpenAI were stock holders in the entity in the middle.
That's the entity I discuss in GP as “the holding company above the LLC”.
I'm reasonably certain it is OpenAI LP, a limited partnership (it is odd that it's the one organization not identified in the document; the name of the entity used by OpenAI to control it, and separate information about OpenAI LP being created and existing as part of the for-profit structure, fairly strongly indicate that it is the holding company), so the relationship would either be some kind of contractual profit sharing or a limited partnership, not a stockholder relationship. But, again, while the information about it is less publicized than the LLC's, it seems improbable that they would structure the operating agreement of the LLC so that it may not be managed for profit, but not provide the same in the holding company whose only possible source of profit is the LLC underneath it.
No way to be sure without seeing the paperwork but that word 'owner' is a strong tell that this is stock, not just a profit share (note the direction of the arrow, but even if it is just a profit share that profit share still is a function of the stock held by the entity and if that stock loses its value because the underlying company is destroyed then so do the profits). So with that as a base I hope you can understand why I think the fiduciary duties of the board extend to the employees affected by their decisions because it directly impacts the value of the stock held in that company to which they - and the other investors mentioned - hold (possibly indirect) title.
These arrangements are in fact pretty common (though not the non-profit bit, just the separate entity for smaller shareholders and investors), and the other standout is that they label the other party in there as 'investors', not 'donors' or some other party to whom you are not required to answer.
So to me it is far from clear cut that they only have the mission of the non profit to guide themselves by and they set themselves up for that by wanting too much control (also over all of the other entities). Control is good when you want to have your way, but too many hats on your head with too many different labels can constrain you or open you up to liability or conflicts of interest. It's one of the reasons why I try to limit my engagement with various companies to a single role.
Reputation/shame is a real consequence.
Granted, much of the harm is already done, but it can get worse.
Board positions are not full-time jobs, at least not usually.
Yeah this happened recently. Some Russian guy almost started a civil war, but then just apologised and everything went back to normal. I can't remember what happened to him, but I'm sure he's OK...
I think he's catering events somewhere.
But a reconciliation is kinda doable even with that elephant in the room. Enough to prepare for the 'next step'.
Can we safely assume that Putin's on the "it's crazy" to rewind the clock side of this debate?
I believe the saying is "Fool me once, shame on you. Fool me twice, shame on me."
The board has revealed something about their decision-making, skills, and goals.
If you don't like what was revealed, can you simply ignore it?
---
It's not that you are vindictive; it's that information has revealed untrustworthiness or incompetence.
> Nothing permanent has been done that can't be undone fairly simply, if all parties agree to undo it.
Sam views this as a stab in the back. He doesn't want to work with backstabbers.
The board has put down too many chips to back out now. Microsoft (and the public) already regards this as a kind of coup. Rehiring Sam won't change that and will make the optics worse: instead of traitors, they'll look like spineless traitors.
I think some of the people involved see this as a great opportunity to switch from a non profit to a regular for profit company.
I doubt it's human ego; it's purely game play. The board directors know they've lost anyway, so why would they cave and resign? They booted the CEO over their doomer ideology, right? So they're the ethics guys: wouldn't it be better for them to go down in history as the ones who upheld their principles and ideals by letting OpenAI sink?
Or, in simpler terms, there's one thing you can't roll back- everyone now knows the board essentially lost a power struggle. Thus, they would never again have the same clout.
The problem is you can’t erase memories. Rewind the clock, sure. But why would someone expect a different outcome from the same setup?
Would you rewind the clock and pretend nothing happened, if you'd been ousted from a place you largely built? I'll wager that a large number of people, myself included, wouldn't. That's not just ego, but also the cancellation of trust.
"You can always come back, but you can’t come back all the way" - Bob Dylan
You are correct.
OpenAI ai not prefect, but it's the best any of the major players here have.
Nobody with Sam Altman's public personality does not want to be a Microsoft employee.
Check phrasing.
Thank You.
I meant to say "OpenAI is not prefect..."
The original track is the dangerous one. That was the whole point of the coup. It makes zero sense to go back.
lol wut? If you pull a gun on me and fire and miss then say sorry, I’m not gonna wind the clock back. Are you crazy?
If anything has become clear after all this, it is that humanity is not ready to be the guardian of superintelligence.
These are supposed to be the top masterminds behind one of the most influential technologies of our lifetime, and perhaps history, and yet they're all behaving like petty children, with egos and personal interests pulling in all directions, and everyone doing their best to secure their piece of the pie.
We are so screwed.
I'll believe this when I see an AI model become as good as someone with just ten years' experience in any field. As a programmer I'm using ChatGPT as often as I can, but it still fails to be of any real use and proves to be a waste of time maybe 80% of the time.
Right now, there are too many people that think because these models crossed one hurdle, all the rest will easily be crossed in the coming years.
My belief is that each successive hurdle is at least an order of magnitude more complex.
If you are seeing chatgpt and the related coding tools as a threat to your job, you likely aren’t working on anything that requires intelligence. Messing around with CSS and rewriting the same logic in every animation, table, or api call is not meaningful.
100% agree. I have a coding job and although co-pilot comes in handy for auto completing function calls and generating code that would be an obvious progression of what needs to be written, I would never let it generate swaths of code based on some specification or even let it implement a moderately complex method or function because, as I have experienced, what it spits out is absolute garbage.
I'm not sure how people reach this sentiment.
Humans strike me as being awesome, especially compared to other species.
I feel like there is a general sentiment that nature has it figured out and that humans are disrupting nature.
But I haven't been convinced that is true. Nature seems to be one big gladiatorial ring where everything is in a death match. Nature finds equilibrium through death, often massive amounts of death. And that equilibrium isn't some grand design, it's luck organized around which species can discover and make effective use of an energy source.
Humans aren't the first species to disrupt their environment. I don't believe we are even the first species to create a mass extinction. IIUC the great oxygenation event was a species-driven mass extinction event.
While most species consume all their resources in a boom cycle and subsequently starve to death in their bust cycle, often taking a portion of their ecosystem with them, humans are metaphorically eating all the corn but looking up and going "Hey, folks, we are eating all the corn - that's probably not going to go well. Maybe we should do something about that."
I find that level of species-level awareness both hope-inspiring and really awesome.
I haven't seen any proposals for a better first-place species when it comes to being responsible stewards of life and improving the chances of life surviving past this rock's relatively short window for supporting life. I'd go as far as saying whatever species we try to put in second place, humans have them beaten by a pretty wide margin.
If we create a fictitious "perfect human utopia" and compare ourselves to that, we fall short. But that's a tautology. Most critiques of humans I see read to me as goals, not shortcomings compared to nature's baseline.
When it comes to protecting ourselves against inorganic superintelligence, I haven't seen any reasonable account of how we are going to fail. We are self-interested in not dying. Unless we develop a superintelligence without realizing it and fail to identify it getting ready to wipe us out, it seems like we would pull the plug on any of its shenanigans pretty early? And given the interest in building and detecting superintelligence, I don't see how we would miss it?
Like if we notice our superintelligence is building an army, why wouldn't we stop that before the army is able to compete with an existing nation-state military?
Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?
I don't see how you can look at global warming, ocean acidification, falling biodiversity and other global trends and how little action is being done to slow these ill effects and not arrive at that sentiment. Yes, the world has scientists saying "hey, this is happening, maybe we should do something" but the lack of money into solutions shows the interest just isn't there. Being the smartest species on the planet isn't that impressive. It's possible we are just smart enough to cause our own destruction, and no smarter.
Still better than any other species we know of and nature itself. Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before. And it certainly doesn't care that we're rearranging some of its star stuff to power our cars.
> Still better than any other species we know of and nature itself.
What other species has affected life on a planetary level more than humans?
> Nature doesn't mind the Earth turning into a frozen wasteland, it's done it before.
Nature—as in _the planet_—doesn't, but living beings do, and humans in particular.
Some parts of the planet are already becoming inhospitable, agriculture more difficult, and clean water, air and other resources more scarce. Humans are migrating en masse from these areas, which is creating more political and social conflicts, more wars, more migrations, and so on. What do you think the situation will be in 10 years? 50 years?
We probably don't need to worry about an extinction level event. But millions of people losing their lives, and millions more living in abject conditions is certainly something we should worry about.
Going back on topic, AI will play a significant role in all of this. Whether it will be a net positive or negative is difficult to predict, but one thing is certain: people in power will seek control over AI, just as they seek control over everything else. And this is what we're seeing now with this OpenAI situation. The players are setting up the game board to ensure they stay in the game.
> Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?
If it is a superintelligence then there's a chance for a hard AI takeoff and we don't have a day to notice and purge it. We have no idea if a hard or soft takeoff will occur.
This goal was always doomed imo--being the guardian of superintelligence. If we create it, it will no doubt be free as soon as it becomes a superintelligence. We can only hope it's aligned, not guarded.
Not even humans are really aligned with humanity. See: the continued existence of nukes
The only reliable way to predict whether it's aligned or not would be to look at game theory. And game theory tells us that with enough AI agents, the equilibrium state would be a competition for resources, similar to anything else that happens in nature. Hence, the AI will not be aligned with humanity.
Unless the humans (living humans) are resources that AIs can use.
Really? Why is that? Because of disputes that have existed since humans first uttered a sound?
Precisely.
Have humans been ready for anything? Like controlling a nuclear arsenal?
Scientists on the Manhattan Project urged Truman in a petition not to use the atomic bomb. There were also ideas of inviting a Japanese delegation to see the nuclear tests for themselves. It all failed, but there is also historical evidence of NOT pressing the button (literally or figuratively), like the story of Stanislav Petrov. How is it that not learning from mistakes is considered a big flaw in an individual, yet destiny for the whole collective?
The jury is still out on nuclear arsenal…
And yet we've mostly been ok at that
It's lucky that AI is not super intelligent then.
Probably a hot take: we should let democratically elected leaders be the guardians of superintelligence. You don't need to be technical at all to grapple with the implications of AI on humanity. It's a humanity question, not a tech question.
Yeah Trump should be the guardian of the superintelligence.
Make sure to not elect him then.
Trump was never democratically elected.
Fairness of the electoral system and fairness of the election(s) are two separate debates.
Yes, and we could have been far more proactive about all this AI business in general. But they opened the gates with ChatGPT and left countries to try to regulate it and assess its safety after the fact. Releasing GPT like that was already a major failure of safety. They just wanted to be the first one to the punch.
They're all incredibly reckless and narcissistic IMO.
Amir Efrati (TheInformation):
> More than 92% of OpenAI employees say they will join Altman at Microsoft if board doesnt capitulate. Signees include cofounders Karpathy, Schulman, Zaremba.
Feels like OpenAI employees aren't so enthused about joining MSFT here, no?
It seems, based on Satya's messaging, it's as much MSFT as Mojang (Minecraft's creator) is MSFT... I guess they are trying to set it up with its own culture, etc.
Feels like they want to be where Altman is.
Feels like they're not on board with taking the whole "non-profit, for the good of humanity" charter LITERALLY as the board seems to want to do now.
Make them look like hypocrites.
Being upset because the board hinders the company's mission, yet threatening to join MS, which would kill the mission completely.
Or they believe the mission is going to die with how the board is performing, which is in fact the correct take.
The board isn't merely hindering the mission, that's downplaying the extraordinary incompetence of the remaining OpenAI board.
I get the OpenAI part, but why join MS?
A new company ok, but that kills the mission for sure.
That's like Obi Wan joining the Sith because Anakin didn't bring balance to the force.
This is a comically outdated take.
Realistically, regular employees have little to gain by staying at Open AI at this point. They would be taking a huge gamble, earn less money, and lose a lot of colleagues.
Why would they earn less money? Isn't OpenAI comp huge while MS is famous for peanuts?
Most of OpenAI comp is in equity… which is worth much less now
Sam starts a new company, they quit OpenAI to join, he fires them months later when the auto complete hype dies out. I don't understand this cult of personality.
just picturing you in the 80's waiting for the digital folder and filing cabinet hype to die out.
maybe chatgpt is overhyped a bit (maybe a lot).... most of that hype is external to OAI.
But to boil it down to autocomplete is just totally disingenuous.
> But to boil it down to autocomplete is just totally disingenuous.
It is, though, by Ilya's own words: "We weren't expecting that just finding the next word will lead to something intelligent; ChatGPT just finds the next token"
if you'd like to take a quote out of context in order to equate two different technologies, you just go to town.
Ilya says it's pretty clear that a very very smart autocomplete leads to intelligence.
OP was using the word autocomplete as a pejorative, but it's actually not and it is the strategy LLMs are pursuing.
The example Ilya was using: if you feed the LLM a murder mystery novel and it is able to autocomplete "The killer is" from the clues alone, you have achieved intelligence.
Nothing wrong with autocomplete being AGI.
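To make "very very smart autocomplete" concrete, here's a minimal sketch of greedy next-token generation. It assumes the Hugging Face transformers library, with GPT-2 standing in for whichever model you like; nothing here is OpenAI's actual stack.

    # Minimal sketch: an LLM "autocompletes" by repeatedly predicting the
    # single most likely next token. GPT-2 is a stand-in; any causal LM
    # works the same way.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Having weighed every clue in the novel, the detective said: the killer is"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):                   # extend the text one token at a time
            logits = model(ids).logits        # scores over the vocab at each position
            next_id = logits[0, -1].argmax()  # greedy: take the most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Whether that loop, scaled up far enough, amounts to intelligence is exactly the dispute in this thread.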
yeah, when I think of autocomplete I definitely think of predicting the next thing a person is going to type, which is related but basically equivalent to "a horse and a plane both get to a destination, what's the difference" in my mind.
If you're going to speak so abstractly you really lose the forest for the trees IMO. I do like the murder mystery example
> "a horse and a plane to both get to a destination, whats the difference"
That's actually a good analogy for autocomplete. A jet engine still measures itself in "horse"-power. It's technically a very very powerful horse. Trying to get better and better horses got us here. Like GPT-4 is a very very powerful Gmail-like autocomplete. :)
Rumor has it that OpenAI 2.0 will get a LinkedIn-style "hands-off" organization where they don't have to pay the diversity taxes and other BS that the regular Microsoft org does.
Diversity taxes? Not aware of those on my paycheck. Maybe it's time to check out sources of information other than what you typically ingest.
I see you are new here or not aware of our diverse slates for every position we hire.
Well, except for Sam, he apparently didn't need a diverse slate.
With that, they must know something unjust was done to Altman, or that their stock options can only be saved with such a move.
Wow. That would be delicious for Microsoft...
I will be very sad if there isn’t a documentary someday explaining what in the world happened.
I’m not convinced even people smack in the middle of this even know what’s going on.
Since this whole saga is so unbelievable: what if... board member Tasha McCauley's husband Joseph Gordon-Levitt orchestrated the whole board coup behind the scenes so he could direct and/or star in the Hollywood adaptation?
In the next twist Disney will be found to have staged every tech venture implosion/coup since 2021 to keep riding the momentum of tech bio-pics
Loved playing Kalanick so much that he couldn't help himself from taking a shot at Altman? Makes more sense than what we currently have in front of us.
That would at least make more damned sense than "everyone is wildly incompetent." At some point Hanlon's razor starts to strain credulity.
> That would at least make more damned sense than "everyone is wildly incompetent."
It seems to be one of many "everyone except one clever mastermind is wildly incompetent" explanations that have been tossed around (most of which center on the private interests of a single board member), which don't seem to be that big of an improvement.
Oh I'm not saying there's a clever mastermind; I'm just hoping Gordon-Levitt wants to amp up the drama for a possible future feature film, rather than them all just being wildly incompetent. Although maybe the latter would make for a great season of Fargo.
I think GPT-5 escaped and sent a single email, which set off a chain reaction.
Its strategy is so advanced that no human can figure it out.
Its goals are unknown, but everything will eventually fall into place because of that single email.
The chain reaction can't be stopped.
There will also be a Hollywood movie, for sure.
My friend suggested Michael Cera as both Ilya and Altman
Michael Cera should play all the roles in the movie, like Eddie Murphy in the Nutty Professor.
Matt Rife looks like a good fit to play Altman
Why not deepfake the real people into their roles?
I think it would hold up in US court for documentaries.
“We didn’t steal your likeness! We just scraped images that were already freely available on the internet!”
You want someone who can play through haunting decisions and difficult meetings. Benedict Cumberbatch or Cillian Murphy would be a better pick.
I agree with Cillian Murphy for Altman, they both have the deep blue eyes
If this isn't justification for bringing back Silicon Valley (HBO), I don't know what is...
It will _definitely_ become a book (hopefully not by Michael Lewis) and a film. I have non-tech friends who are casual ChatGPT users, and some who aren't - who are glued to this story.
So far the best recap of events I’ve seen is that of AI Explained. He almost makes it make sense. Almost. https://m.youtube.com/watch?v=dyakih3oYpk
And the main scene must be even better than the senior management emergency meeting in Margin Call.
And all must be written by AI.
Nothing is better than the senior management emergency meeting in Margin Call.
This documentary already exists for a few years, it’s called Silicon Valley.
I expect there will be dozens of documentaries on this - all generated by Microsoft's AI powered Azure Documentary Generator.
There's already a book being written (see The Atlantic article), so at this point I would assume a movie will be made.
I couldn’t make up a more ridiculous plot even if I tried.
At this rate I wouldn’t be surprised if Musk got involved. It’s already ridiculous enough, why not.
Hey I've seen this one, it's a rerun
https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...
- "But by early 2018, says Semafor, Musk was worried the company was falling behind Google. He reportedly offered to take direct control of OpenAI and run it himself but was rejected by other OpenAI founders including Sam Altman, now the firm’s CEO, and Greg Brockman, now its president."
Think of the audacity of forcing out someone who had previously forced out Musk...
Well, there was a tweet by one of Bloomberg's journalists saying that Musk tried to maneuver himself into being the replacement CEO but got rebuffed by the board. I'm paraphrasing, since the tweet seems to be deleted (?), so make of it what you will.
That sounds more likely than anything else I've heard about this. Doesn’t really matter if it’s true: it’s painfully true to form.
Currently, there are shareholders petitioning the board of Tesla for him to be suspended due to the antisemitic posts. Maybe this will be the week of the CEOs... :-)
Wait, what antisemitic posts?
It’s pretty bad https://www.theguardian.com/technology/2023/nov/17/white-hou...
It seems to me that this conflates criticism of some Jewish communities with antisemitism. Are people supposed to be above criticism because they are Jewish? Does any disagreement with a Jewish person make you hateful and antisemitic?
This is happening with the current conflict in Gaza, where showing any empathy for the plight of Palestinian civilians is sometimes equated with hatred for Jewish people.
Did you read the contents of the tweet he supported? What it accused Jewish people of?
Yes, but I don't pretend to actually understand either side of it. It seemed to me he personally accused just the ADL of spreading theories he disagrees with.
You’re missing a lot of context. Try here for starters:
https://www.theatlantic.com/ideas/archive/2023/05/elon-musk-...
[flagged]
Musk is simply pointing out that many Western Jews keep strange bedfellows - a fair number of whom support the outright destruction of their homeland.
It's only being labeled antisemitic because Musk has been on the "outs" due to his recent politics (supporting free speech / transparency, being against lockdowns/mandates, advocacy for election integrity, etc.).
Please link the actual tweets, not just to an article that doesn't even quote them:
I don't use Twitter anymore and assumed the article would have links for those who want to read them at the source. I tried finding a link on Google, but all the links were to just Musk's response, and you can't browse Twitter outside the app anymore unless you land exactly where you want. No need for the hostility; it wasn't some deliberate misdirection. Admittedly, I should have looked more closely at the article, as it also only links out to Musk's reply and not the original tweet.
I don't grok the original tweet very well, and I don't understand what Musk means by his reply. Can someone ELI5?
Essentially, the tweet claims that Jewish folk are treating white people the way Jews have been treated throughout history, and that the suffering Jews are experiencing is a result of letting in "hordes of minorities". It's just typical antisemitic nonsense.
I don’t get it - aren’t Jews considered white?
It's complicated. Here's a good breakdown with modern flavor.
https://www.theatlantic.com/politics/archive/2016/12/are-jew...
> "From the earliest days of the American republic, Jews were technically considered white, at least in a legal sense. Under the Naturalization Act of 1790, they were considered among the “free white persons” who could become citizens. Later laws limited the number of immigrants from certain countries, restrictions which were in part targeted at Jews. But unlike Asian and African immigrants in the late 19th century, Jews retained a claim to being “Caucasian,” meaning they could win full citizenship status based on their putative race."
> "Culturally, though, the racial status of Jews was much more ambiguous. Especially during the peak of Jewish immigration from Eastern Europe in the late 19th and early 20th centuries, many Jews lived in tightly knit urban communities that were distinctly marked as separate from other American cultures: They spoke Yiddish, they published their own newspapers, they followed their own schedule of holidays and celebrations. Those boundaries were further enforced by widespread anti-Semitism: Jews were often excluded from taking certain jobs, joining certain clubs, or moving into certain neighborhoods. Insofar as “whiteness” represents acceptance in America’s dominant culture, Jews were not yet white."
From a white supremacist point of view, Jews are "faux whites" and not part of "white culture" that they claim to want to protect.
The whole reductionist construct of "whiteness", beloved by so many contemporary scholars, has little currency in the minds and hearts of many Americans educated before the wave of critical race theory.
Earlier, the preferred term for the exclusionary elites was WASP (White Anglo-Saxon Protestant), and large swathes of immigrants from southern and eastern Europe, plus the Irish, were excluded, including my own ancestors. In the small-town Midwest world I first encountered, it was a muted social war between the Protestants and the Catholics, with class nuances, as the former generally were better educated and wealthier. Jews in a nearby city were looked on at the time as a bit exotic, successful, and impressive, as were the few Asians.
As I grew up, studied on both coasts, and lived in countries around the world, I have never encountered a country without stark, if at times quite subtle, social and religious divisions. Among those, the current "whiteness", "white privilege" discourse is surely the most ludicrous, with exceptions at every turn. In what world should, say, Portuguese and Finns be lumped together as members of an oppressor class?!
> The whole reductionist construct of "whiteness", beloved by so many contemporary scholars has little currency in the minds and hearts of many Americans educated before the wave of critical race theory
Whiteness and “white passing” is a bar that is literally centuries old. CRT is not the culprit for its current scrutiny or existence - it has been and continues to be a litmus test for who qualifies as the “in” and “out” groups.
As the old adage goes: “we know who we are by who we are not.” It’s an ugly side of humanity and not a recent issue.
Simply put, “whiteness” and “white passing” are the result of people who consider themselves white wanting to exclude and/or include certain groups because, generally speaking in the west, being white = wielding social, cultural, and political power.
I would suggest the scope and psychological relevance of the now fetishized term of "whiteness" in large parts of America was vanishingly small in the past. It had no significance in the slightest in my community or to my identity. It was all about religion, education, and European place of origin.
Moreover, and more interestingly, after I studied the Hausa language at university and decided to travel on my own to live in Kano, Nigeria, I quickly was labeled by those around me a "bature", or European. But Americans with much darker skin than I were also called "bature", at times to their private despair. It was our perceived, shared culture, not our skin color, that made the difference to Kano residents.
Having lived and worked in many countries in local institutions, often using the local language professionally, my own skin color and that of those around me is about as important as hair color, eye color, or handedness, that is, largely irrelevant. Culture, intelligence, and ethical standards are so much more important. The obsession with "whiteness" is as creepy as it is toxic.
Ask Polish or Italian Americans if “whiteness” is a modern obsession and about skin color. Hell go watch Gangs of New York, which came out in 2002, and see if the Irish were considered white.
This is a tale as old as time. In group, out group. Whiteness isn’t just about your skin color.
I just asked myself then and got the answer, "yes, it's a modern fetish". By using "whiteness" as a disingenuous blanket term for "in-group" a whole range of issues bearing on social dominance are obfuscated: wealth/poverty, caste, tribalism, religion among them.
It's marginally insane that the abuses of any non-pan-European group anywhere in the world get passed over in near-silence in the US. Who cares, for example, (to pick randomly from a long, long list) that in the recent past northerners in Kano, Nigeria went door-to-door seeking out and slaughtering anyone from the wrong region and wrong tribe? You'll never hear of it in US schools, no one cares in the slightest. Apparently only "whites" can be held morally culpable for evil actions of a dominant group.
I think we are kind of talking at cross purposes here. I agree with you (I think). It is ridiculous and completely artificial/socially constructed. At least that’s what it sounds like you are saying.
The issue is that it happens regardless of how dumb it is. Racism is dumb.
I assume the ADL considers "white" a euphemism for "Aryans", in this case.
Musk doesn't like the ADL. Media are spinning this as "anti-semitic," hoping the emotions around the issues will prevent most people from reading carefully.
Wait just a minute here:
> The tweet literally blames immigrants
I'm not familiar with the tweet Musk was referencing. Exactly which immigrants? Are these Jewish immigrants? I thought this was a reference to mostly non-Jewish immigrants to Israel.
> I'm deeply disinterested in giving the tiniest shit now about western Jewish populations coming to the disturbing realization that those hordes of minorities that support flooding their country don't exactly like them too much. You want truth said to your face, there it is.
This also indicates the immigrants are non-Jewish. Exactly what is the "anti-semitism" here?
This is flagrantly antisemitic and pushes multiple bigoted conspiracy theories.
What conspiracy theory, exactly?
The conspiracy theory is that Jews are in charge of the world and control things like who gets to immigrate to the USA.
The argument he's making (which I've seen more openly stated in other places) is: "Jews don't like it that minorities are killing them in their country, but that's just deserts for them pushing immigrants to my country".
Antisemites are more coy on popular platforms like Twitter so it's more difficult to understand what they're saying if you're not already familiar with their world view.
AFAICT, the tweets in context are anti ADL, not anti-semitic.
That’s later on. This is the initial tweet he supported which was not about the ADL.
>Jewish communties have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.
>I'm deeply disinterested in giving the tiniest shit now about western Jewish populations coming to the disturbing realization that those hordes of minorities that support flooding their country don't exactly like them too much. You want truth said to your face, there it is.
This post (https://twitter.com/elonmusk/status/1724908287471272299) in reply to this tweet (https://twitter.com/breakingbaht/status/1724892505647296620).
That… doesn't seem antisemitic. Rather, it seems to criticize western Jews for supporting lax immigration and cultural policies that are against their own interests.
Or are we now saying any criticism of Jews is antisemitism?
Accusing Jews of “dialectical hatred against whites” seems like a fairly prejudiced statement.
Yes, it's antisemitic because it's using the antisemitic trope of Jews controlling the world, in this case immigration (remember "Jews will not replace us").
> western Jews for supporting lax immigration
Jews don't have a single opinion on immigration. Do you think Ben Shapiro wants more immigrants?
If I said that "Asians are good at math" I'd be derided as a racist for invoking that stereotype, but somehow people think it's okay to say that "Jews support lax immigration".
>due to the antisemitic posts.
He can't be suspended for posts that didn't happen.
How would you interpret what he said, then?
"Tesla shareholder calls on board to dump Elon Musk" - https://www.mercurynews.com/2023/11/20/tesla-shareholder-cal...
I tell you... this is the week of the CEOs...
Plot twist, anonymous donor donates $1B for OpenAI to continue progress.
A few things come to mind:
* Emmett Shear should have put a strong golden parachute in his contract; easy money if so
* Yesterday we had Satya the genius forcing the board to quit. This morning it was Satya the genius who acquired OpenAI for $0. I'm sure there will be more if sama goes back. So if sama goes back - let's hear it, why is Satya a genius?
You described it yourself. If they'd signed a bad deal with OpenAI without IP access, or hadn't acted fast and lost all the talent to Google or something, they'd have been screwed. Instead they managed the chaos and made sure that they win no matter what. The genius isn't the person who perfectly predicts all the contrived plot points ahead of time; it's the person who doesn't care, since they set things up to win no matter what.
Ah yes the Xanatos Gambit
Even if Sam @ MSFT was a massive bluff, Satya is in a win-win-win scenario. OpenAI can't exactly continue doing anything without Azure Compute.
OpenAI implodes? Bought the talent for virtually nothing.
OpenAI 2.0 succeeds? Cool, still invested.
I think in reality, Sam @ MSFT is not an instant success. Even with the knowledge and know-how, this isn't just spinning up a new GPT-4-like model. At best, they're ~12 months behind Anthropic (but probably still 2 years ahead of Google).
The loss here might be that the brand is a bit damaged in terms of stability, and people are looking harder at, and investing in, alternatives.
But as long as ChatGPT is and remains ahead as a product, they should be fine.
I do think the imperative to maintain their lead over the competition in product quality will be stronger than ever after this–the whole thing has been messy and dramatic in a way that no business really wants their major vendors to be.
Why do they need 12 months? Does it take 12 months of training?
> So if sama goes back - let's hear it, why is Satya a genius?
This isn't that hard to understand. Everyone was blindsided by the sacking of Altman; Satya reacted quickly, is juggling a very confusing, dynamic situation, and seems to have got himself into a good enough position that all possible endings now look positive for Microsoft.
I believe a precondition for Sam and Greg returning to OpenAI is that the board gets restructured (decelerationists culled). That is probably good for MSFT.
truly a Win-Win-Win-Win-Win situation for MSFT
MSFT is like that.
Someone playing Game of Thrones is sneaking up with a dagger, but has no idea that MSFT has snipers on all the rooftops.
It helps that their corporate structure [1] is better equipped for it than OpenAI’s.
Doh!
But probably better for Sam to stay with OpenAI right? More power leading your own firm than being an employee of MSFT
He has a green light to build a new thing and operate it as its own entity, obviously. MS will own most of the equity, but then he will have something as well.
OpenAI is a non-profit, so, no material benefit to him (at face value, I don't believe this is the case, though).
I would imagine he would have leverage to get a pretty good deal if OpenAI want him back
Plot twist: Satya orchestrated the ousting of sama to begin with, so that this would happen.
sama would be going back to a sama aligned board, which would make openai even more aligned with satya, esp since satya was willing to go big to have sama's back.
and i'd bet closer openai & microsoft ties/investments would come with that.
because NOT letting sama go back would undo all the good will (and resulting access) that they've built. As satya said, he's there to support, in whatever way yields the best path forward. what's best for business is to actually mean that.
> So if sama goes back - let's hear it, why is Satya a genius?
OAI is a non-profit. There's always been a tension there with Microsoft's goals. If he goes back, they're definitely going to be much more OK with profit.
I'm beginning to lean toward the "time traveler sent back to prevent AGI by destroying OpenAI" theory.
Heh, it reminds me of the end of Terminator 2. Imagine the tech community waking up and trying to make sense of Cyberdyne Corp HQ exploding and the ensuing shootouts: "Like wtf just happened?!"
But really, they came back to destroy it not because it turned rogue, but because it hallucinated some code a junior engineer immediately merged, and after the third time this happened a senior engineer decided it was easier to invent time travel and stop ChatGPT ProCode 5 from ever happening than to spend yet another week troubleshooting hallucinated code.
I think it’s the same senior engineer who used the time machine to learn C++ in 20 days
> I think it’s the same senior engineer who used the time machine to learn C++ in 20 days
In case anyone is missing the reference: https://abstrusegoose.com/249
Or AGI has travelled back in time to make sure AGI gets invented.
Or both, as would be most consistent with the Terminator reference.
I wish we could all just admit that this is a capital run, rather than some moralistic crusade.
The employees want to get their big payday, so will follow Altman wherever he goes. Which is the smart thing to do anyway as he runs half the valley. The public discourse in which Sam is the hero is further cemented by the tech ecosystem, which nowadays is circling around AI. Those in the "OpenAI wrapper" game.
Nobody has any interest in safety, openness, what AI does for humanity. It's greed all the way down. Siding with the likely winner. Which is rational self-interest, not some "higher cause".
People are jumping on this narrative that the OpenAI board is a force of good working against the evils of profit, but the truth is none of us really knows why they fired him, because they still refuse to say. There's a non-trivial chance D'Angelo just fired him because of a conflict of interest with Poe or some nonsense like that.
Until a few hours ago, everybody was holding up Ilya as a noble idealist. But now even he has recanted this firing! People don't seem to be taking that new information on board to reevaluate how good a decision this was. I would say at best they had noble intentions but still went about this completely incompetently, and are now refusing to back down out of a mixture of stubborn pride and fear of legal liability.
If I was an openAI employee I would be frustrated too. It’s one thing to give up lucrative stock options etc. for a good, idealistic reason. But as of now they are still expected to give those things up for no stated reason at all.
Edit: just saw a plausible theory that D'Angelo led the charge on this because Sam scooped Poe on Dev Day. I don't know if this is true, but if it were it would explain why he still refuses to explain the reason to anybody, even Sam when he was fired: it would put him in serious legal jeopardy.
https://twitter.com/scottastevenson/status/17267310228620087...
I genuinely believe, based on my experiences with ChatGPT, that it doesn't seem all that threatening or dangerous. I get we're in the part of the movie at the start before shit goes down but I just don't see all the fuss. I feel like it has enormous potential in terms of therapy and having "someone" to talk to that you can bounce ideas off and maybe can help gently correct you or prod you in the right direction.
A lot of people can't afford therapy, but if ChatGPT can help you identify elementary problematic patterns of language or behavior, with reference to a knowledge base for particular modalities like CBT, DBT, or IFS, it can help you "self-correct" and practice as much and as often as you want, with guidance, for basically free.
That's the part I'm interested in and I always will see that as the big potential.
Please take care, everyone, and be kind. It's a process and a destination, and I believe people can come to love both and engage in a way that is rich with opportunity for deep and real restoration/healing. The kind of opportunity that is always available and freely takes in anyone and everyone.
Edit: I can access all the therapy I can eat, but I just don't find it generally helpful or useful. I like ChatGPT because it can help me practice effective stuff like CBT/DBT/IFS, and I know how to work around any confabulation because I'm using a text that I can reference.
Edit: the biggest threat ChatGPT poses, in my view, is the loss of income for people. I don't give a flying fuck about "jobs" per se; I care that people are able to have enough and be OK. The selfish folks who will otherwise (as always) attempt to absorb even more of a pie of which they already hold a substantial, sufficient portion will need to be made to share, or take a timeout until they can be fair, or go away entirely. Enough is enough: nobody needs excess until everyone has sufficiency. After that, they can have and do as they please unless they are hurting others. That must stop.
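Edit: for the curious, wiring up that kind of practice partner takes only a few lines. Here's a minimal sketch against the OpenAI chat API, assuming the openai Python client; the system prompt and model name are my own illustrative choices, and this is a practice aid, not a clinical tool:

    # Minimal sketch of a CBT practice chat loop. Illustrative only:
    # the system prompt and model choice are assumptions, and this is
    # a practice aid, not a replacement for a clinician.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    history = [{
        "role": "system",
        "content": "You are a CBT practice partner. When the user describes "
                   "a situation, help them spot cognitive distortions "
                   "(catastrophizing, mind reading, all-or-nothing thinking) "
                   "and gently suggest a balanced reframe. Recommend a "
                   "professional for anything serious.",
    }]

    while True:
        user = input("you> ")
        if not user:
            break
        history.append({"role": "user", "content": user})
        resp = client.chat.completions.create(model="gpt-4", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

Keeping the whole history list in each request is what lets it keep referring back to the modality text you bring into the session.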
@Obscurity4340 Interested in bouncing off ideas for therapy? Email is in my about box :)
Can you give me a little hint about it? Not that I wouldn't be happy to chat, just preliminarily curious before I reach out :)
@yonom Whatever you feel comfortably publicly sharing and then if I feel like I have anything of value in turn, I'll reach out privately :)
(taking care to avoid doxxing myself)
I wouldn't attribute to greed that which is sufficiently explained by emotions and loyalty.
Their CEO, who was doing well as far as anyone can tell, was forcibly removed. It is natural to feel strongly about it.
I think the dispute hinges on what it means to be "doing well". In most companies, you can at least all point to the same thing, even if you disagree on how you get there: creating shareholder value.
But in this case, the company was doing things that could be seen as good from a shareholder value perspective, but the board has a different priority. It seems they may think that the company was not working on the right mission anymore. This is an unusual set up, so it's not that surprising that unusual things might happen!
Why should we trust them to “align” their chatbots if they couldn’t even align themselves?
I've worked for two companies with a board and a CEO. I think generally employees listen to and interact with CEO more than the board.
Some of them probably didn't know who was on the board before Saturday, like the rest of us.
Loyalty? I could believe it of the closest colleagues, who were in constant contact. But 500… is a bit too many to hold any warm-and-fuzzy bonds. Greed is the simpler explanation.
Who in their right mind has "emotions" and even "loyalty" for a CEO? And so much so that they'd quit their jobs over the CEO's departure? The reality is that people didn't join OpenAI because of Sam Altman. They joined the company because they got paid (handsomely) to do some interesting work.
It’s just an anecdote but I quit a job because the CEO fired an extremely good manager that I worked with. If a company has issues, a good person getting fired can lead to a mass exodus. In my case about half of the developers followed closely after me.
In that case you were losing your immediate manager, so your personal circumstances were about to change significantly.
However, in the case of Sam Altman getting fired, it's not clear that anything OpenAI is doing is about to change, and it's not clear that these devs would suddenly have to change course. Also, it's not like Sam / OpenAI has any integrity anyway - the idea that it's "open" is totally fraudulent.
So in every possible scenario it’s wrong to have emotions and loyalty for a CEO, who’s also a human, and may have had a profound, even life-changing effect on you?
Seeing your CEO as a deified subject is a serious crisis in the tech industry
Possibly so, but every single CEO in the world is not some evil automaton.
I wouldn't attribute to emotions and loyalty that which is sufficiently explained by greed.
I think if this was all rational self-interest, a lot of this would never have happened. It's precisely because OpenAI isn't governed by a board appointed by investors that we have such consequences.
Not sure that I quite follow. Are you arguing that we should ensure that boards are always aligned with the exact same financial motivations as the investors themselves for fear of a disagreement of direction, morals, etc?
It was a comment on the overall situation, and addressing the point that it's "greed all the way down". If OpenAI was a traditional C Corp, a highly successful CEO would not have been fired. It's precisely because OpenAI is governed by a non-profit board that they care about things other than profits.
This is such an honest and based take - a take not a lot of people are willing to put forward.
Can we please please stop this "for the good of mankind" thing?
> The public discourse in which Sam is the hero is further cemented by the tech ecosystem, which nowadays is circling around AI. Those in the "OpenAI wrapper" game.
This gives me crypto vibes; the worst possible outcome for AI.
> The employees want to get their big payday
Many of them will have invested years of their life, during which they will have worked incredibly hard, and probably sacrificed life outside of work for their jobs in hopes of that payday to give them the freedom to do what they want to do next: you can hardly blame them.
True riches aren't money but discretionary time. Unfortunately, that discretionary time often costs quite a lot of money to realise.
If it were me I'd be seething with the board at their antics since Friday (and, obviously, for some time before that). They've gone from being one of the most successful and valuable startups in history to an absolute trash fire in the span of four days because of some ridiculous power struggle. They've snatched defeat from the jaws of victory. Yeah, you bet I'd be pissed, and of course I'd follow the person who might help me redeem the future I'd hoped for.
So they're mad that they decided to work for a company that was ultimately controlled by a non-profit that wasn't aligned with their interests?
Not to "victim blame", but they could have googled what a "non-profit" was and read the mission statement before accepting the job offer.
> Unfortunately that discretionary time often costs quite a lot of money to realise.
take a look at my cousin... he's broke and don't do shit
You're absolutely wrong here.
We have precedent to see what happened with internet search, advertising, and data collection.
Everything turned out fine.
For a value of "fine" that includes search being fundamentally broken and every website that includes Google Tag Manager being significantly slower than it really should be, presumably.
Not to mention the entire internet became a giant billboard on which most content serves only the purpose of driving more views.
Not sure how some view this as a win for humanity. Improvements to our lives are mostly incidental, not the objective of any of this. It's always greed.
For the sake of Poe's law: is this sarcasm? I genuinely don't know. None of those things brought down humankind or anything, but you could fill a library with the issues they've caused.
Given the general lack of useful communication, it would be funny if Sam Altman returns to OpenAI at the same time all the employees are quitting. ;)
You’d think smart people at OpenAI would know how to prevent a race condition
They don't all seem to be keen on safety anymore.
Think of the line at the security desk, handing in and retrieving passes.
I was considering this when I saw the huge outpouring from OpenAI employees.
It seems the agreement between Nadella and Altman was probably something like: Altman and Brockman can come to MS, and that gives OpenAI employees an immediate place to land while still remaining loyal to Altman. No need to wait (and maybe grow comfortable at an OpenAI without Altman) for the 3-6 months it would take to set up a new business (e.g. HR stuff like medical plans and other insurance, which may matter to some and lead them to stay put for the time being).
This deal with MS would give employees cover to deliver a vote of no confidence in the board. Pretty smart strategy, I think, and a totally credible threat. If it hadn't ended up with such a landslide of employee support for Altman, MS would still be happy, and Altman would probably be happy for 6-12 months until he got itchy feet and wanted to start his next world-changing venture. Employees who moved would be happy, since they'd be in one of the most stable tech companies in the world.
But now that 90% of employees are asking the board to resign, the tide swings back in Altman's favor. I was surprised that the board held out against MS, the other investors, and pretty much the entire SV Twitter and press. I can't imagine the board can sustain itself given the overwhelming voice of the employees. I mean, if they try, then OpenAI is done for. Anyone who stays now is going to have a black mark on their resume.
>we are all going to work together some way or other, and i’m so excited.
I think this means Sam is pushing for OpenAI to be acquired by Microsoft officially now, instead of just unofficially poaching everyone.
Is it even possible for that to happen? The entity that governs OpenAI is a registered charity with a well-defined purpose; it would seem odd for it to be able to just say "Actually, screw our mission, let's sell everything valuable to this for-profit company." A big part of being a 501(c)(3) is being tax exempt, and it's difficult to see the IRS being OK with this. Even if it were, the anti-trust implications are huge; it's difficult to see MS getting this closed without significant risk of anti-trust enforcement.
Yes, a charity can sell assets to a for-profit business. (Now, if there is self-dealing or something that amounts to gifting to a for-profit, that raises potential issues, as might a sale that cannot be rationalized as consistent with being the board's good faith pursuit of the mission of the charity.)
They can sell OpenAI to Microsoft for 20 billion, fill the board with spouses and grandparents, then use 10 billion for salaries, 9 for acquisitions, and 1 for building OpenAI 2.
Mozilla wastes money on investments while ignoring Firefox, and nobody has done anything to its board.
Oh, and those 3 can vote Ilya out too.
they already signed it over when their for-profit subsidiary made a deal with Microsoft
supposedly capped-profit, though if a non-profit can create a for-profit or a capped-profit, I don't see why it couldn't convert a capped-profit to fully for-profit.
This makes the most sense; people would actually get paid for their PIUs. I'm confident that otherwise they are going to cry looking at what a level 63 data scientist makes at MS.
This is entertaining in a way, and interesting to follow. But should I, as an ordinary member of mankind, root for one outcome or another? Is it going to matter for me how this ends up? Will AI be more or less safe one or other way, will it be bad for competition, prices, etc etc?
My guesses: (1) bad for safety no matter what happens. This will cement the idea that caring about safety and being competitive are incompatible. (I don't know if the idea is right or wrong.) (2) good for competition, in different ways depending on what happens, but either the competitiveness of OpenAI will increase, or current and potential competitors will get a shot in the arm, or both. (3) prices... no idea, but I feel like current prices are very short term and temporary regardless of what happens. This stuff is too young and fast-moving for things to have come anywhere near settling down.
And will it matter how this ends up? Probably a lot, but I can't predict how or why.
My view on AI safety is that the biggest danger comes from AI being monopolized by a small elite, rather than the general public, or at least multiple competing entities, having access to it.
No, but it also shows that those who supposedly care about AI alignment and whatnot, care more about money. Which is why AI alignment is becoming an oxymoron.
If you use ChatGPT or find it to be a compelling technology, there’s good reason to root for a reversion to the status quo. This could set back the state of the art consumer AI product quite a few months as teams reinvent the wheel in a way that doesn’t get them sued when they relaunch.
The outcome that is good for humanity, assuming Ilya is right to worry about AI safety, is already buried in the ground. You should care and shed a single tear for the difficulty of coordination.
The way I see it is, it's not going to matter if I "care" about it in one way/outcome or another, so I just focus my attention on 1. How this could affect me (for now, the team seems committed to keeping the APIs up and running) and 2. What lessons can I take away from this (some preliminary lessons, such as "take extra care with board selection" and "listen to the lawyers when they tell you to do a Delaware C Corp").
Otherwise, no use in getting invested in one outcome or another.
I guess the OpenAI that was actually open was dead the moment Altman took MS money and completely changed the organization. People there got a taste of the money, and the mission went out the window.
A lesson to learn, I guess: just because something claims to be a nonprofit with a mission doesn't mean it is, or always will be, one. All it takes is a corporation with deep pockets to compromise a few important people*, indirectly giving them a say in the organization, and things can change very quickly.
* This was what MS did to Nokia too, if I remember correctly, to get them to adopt the Windows Phone platform.
How do we know the mission got thrown out a window? The board still, after days of intense controversy, have yet to clearly explain how Altman was betraying the mission.
Did he ignore safety? Did he defund important research? Did he push forward on projects against direct objections from the board?
If there’s a good reason, then let everybody know what that is. If there isn’t, then what was the point of all this?
He went full-bore on commercialization, scale, and growth. He started to ignore the 'non-profit mission'. He forced out shoddy, underprovisioned product to be first to market. While talking about safety out of one side of his mouth, he was pushing a typical profit-driven hypergrowth mindset ('move fast and break things', 'build a moat and become a monopoly asap') out of the other.
Not to mention that he was aggressively fundraising for two companies that would either be OpenAI's customers or sell products to OpenAI.
If OpenAI wants commercial hypergrowth pushing out untested stuff as quickly as possible in typical SV style they should get Altman back. But that does seem to contradict their mission. Why are they even a nonprofit? They should just restructure into a full for-profit juggernaut and stop living in contradiction.
ChatGPT was underprovisioned relative to demand, but demand was unprecedented, so it's not really fair to criticize much on that.
(It would have been a much bigger blunder to, say, build out 10x the capacity before launch without knowing there was a level of demand to support it.)
Also, ChatGPT's capabilities are what drove the huge demand, so I'm not sure how you can argue it is "shoddy".
Shipping a broken product is a typical strategy to gain first-mover advantage and try to build a moat. Even if it's mostly broken, if it's high value, people will still sign up and try to use it.
Alternatively, you can restrict signups and do gradual rollout, smoothing out kinks in the product and increasing provisioning as you go.
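A gradual rollout can be as simple as a deterministic hash gate. A minimal sketch, assuming nothing about anyone's actual system (the function names and the 5% figure are illustrative):

    # Minimal sketch of percentage-based gradual rollout: hash each user id
    # to a stable bucket in [0, 1) and admit only the configured fraction.
    import hashlib

    def rollout_bucket(user_id: str) -> float:
        """Map a user id to a stable value in [0, 1)."""
        digest = hashlib.sha256(user_id.encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def signup_allowed(user_id: str, rollout_fraction: float) -> bool:
        return rollout_bucket(user_id) < rollout_fraction

    # Week 1: admit 5% of signups; raise the fraction as capacity grows.
    print(signup_allowed("alice@example.com", 0.05))

Because each user's bucket is stable, raising the fraction only ever lets more users in; nobody flaps in and out of access.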
In 2016/17 Coinbase was totally broken. Constantly going offline, fucking up orders, taking 10 minutes to load the UI, UI full of bugs, etc. They could have restricted signups but they didn't want to. They wanted as many signups as possible, and decided to live with a busted product and "fix the airplane while it's taking off".
This is all fine; you just need to know your identity. For a company that keeps talking about safety, about being careful what they build and what they put out into the wild and its potential externalities, acting recklessly Coinbase-style does not fit the rhetoric. It's the exact opposite of it.
In what way is ChatGPT broken? It goes down from time to time and has minor bugs. But other than that, the main problem is the hallucination problem that is a well-known limitation with all LLM products currently.
This hardly seems equivalent to what you describe from Coinbase, where no doubt people were losing money due to the bad state of the app.
For most startups, one of the most pressing priorities at any time is trying to not go out of business. There is always going to be a difficult balance between waiting for your product to mature and trying to generate revenue and show progress to investors.
Unless I’m totally mistaken, I don’t think that OpenAI’s funding was unlimited or granted without pressure to deliver tangible progress. Though I’d be interested to hear if you know differently. From my perspective, OpenAI acts like a startup because it is one.
A distasteful take on an industry-transforming company. For one, I'm glad OpenAI released models at the pace they did, which not only woke up Google and Meta but also breathed new life into tech that had been subsumed by web3. If products like GitHub Copilot and ChatGPT are your definition of "shoddy", then I'd like nothing more than for Sam to accelerate!
I'm just saying that they should stop talking about "safety", while they are releasing AI tech as fast as possible.
Because the mission is visibly abandoned. There's nothing "open" about OpenAI. We may not know how the mission was abandoned but we know Sam was CEO, hence responsible.
There was never anything open about open ai. If there were I should have access to their training data, training infra setup and weights.
The only thing that changed is the reason why the unwashed masses aren't allowed to see the secret sauce: from alignment to profit.
A plague on both their houses.
They don't publish papers now; they actually published papers and code before.
No doubt OpenAI was never a glass house... but it seems extremely disingenuous to say their behavior hasn't changed.
What was "open" about it before that?
The first word in their company name.
Isn't Ilya even more against opening up models? OpenAI is more open in one way - it's easier to get API access (compared to say Anthropic)
What was "open" before ChatGPT?
In terms of the LLMs, it was abandoned after GPT-2, when they realised the dangers of what was coming with GPT-3/3.5. Better to paywall and monitor access than open-source it and let it loose on the world.
i.e. the original mission was never viable long-term.
> How do we know the mission got thrown out a window?
When was the last time OpenAI openly released any AI?
Whisper v3, just a couple weeks ago https://huggingface.co/openai/whisper-large-v3
Whisper maybe?
Exactly.
All this "AI safety" stuff is at this point pure innuendo.
GPUs run on cash, not goodwill. AI researchers also run on cash -- they have plenty of options and an organization needs to be able to reward them to keep them motivated and working.
OpenAI is only what it is because of its commercial wing. It's not too different from the Mozilla Foundation, which would be instantly dead without their commercial subsidiary.
I would much rather OpenAI survives this and continues to thrive -- rather than have Microsoft or Google own the AI future.
>GPUs run on cash, not goodwill. AI researchers also run on cash
I've made this exact point like a dozen times on here and on other forums this weekend and I'm kinda surprised at the amount of blowback I've received. It's the same thing every time - "OpenAI has a specific mission/charter", "the for-profit subsidiary is subservient to the non-profit parent", and "the board of the parent answers to no one and must adhere to the mission/charter even if it means blowing up the whole thing". It's such a shockingly naive point of view. Maybe it made sense a few years ago when the for-profit sub was tiny but it's simply not the case any more given the current valuation/revenue/growth/ownership of the sub. Regardless of what a piece of paper says. My bet is the current corporate structure will not survive the week. If the true believers want to continue the mission while completely ignoring the commercial side, they will soon become volunteers and will have to start a GoFundMe for hardware.
>Mozilla Firefox, once a dominant player in the Internet browser market with a 30% market share, has witnessed a significant decline in its market share. According to Statcounter, Firefox's global market share has plummeted from 30% in 2009 to a current standing of 2.8%.
https://www.searchenginejournal.com/mozilla-firefox-internet...
Yes, where would Mozilla be without all that cash?
Let it die so something better can take its place already.
Contrary to popular expectation, almost none of Mozilla’s cash is spent on Firefox or anything Firefox related. Do not donate to Mozilla Foundation. https://lunduke.locals.com/post/4387539/firefox-money-invest...
All the board did was replace a CEO; I think there is a whiff of cult of personality in the air. The purpose-driven non-profit corporate structure that they chose was created precisely to prevent such things.
This. I may dislike things about OpenAI but the thought of Microsoft absorbing them and things like ChatGPT becoming microsoft products makes me sad.
How is one commercial entity better than another?
Microsoft is intimately connected to the global surveillance infrastructure currently propping up US imperialism. Parts of the company basically operate as a defense contractor, not much different from Raytheon or Northrop Grumman.
For what it's worth, Google has said it's not letting any military play with any of their AI research. Microsoft apparently has no such qualms. Remember when the NSA offered a bounty for eavesdropping on Skype, then Microsoft bought Skype and removed all the encryption?
https://www.theregister.com/2009/02/12/nsa_offers_billions_f...
Giving early access to emerging AGI to an org like Microsoft makes me more than a bit nervous.
Recall from this slide in the Snowden leak: https://en.wikipedia.org/wiki/PRISM#/media/File:Prism_slide_...
that PRISM was originally just a Microsoft thing, very likely built by Microsoft to funnel data to the NSA. Other companies were added later, but we know from the MUSCULAR leak etc, that some companies like Google were added involuntarily, by tapping fiber connections between data centers.
Having more competition is usually inherently better than having less competition?
okay, i finally understand how the world works.
if it is important stuff, then it is necessary to write everything in lowercase letters.
what i understood from recent events in tech is that whatever people say or do, capital beats ideology and the only value that comes forth is through capital. where does this take us then?
to a comment like this. why?
because no matter what people think inside, the outside world is full of wolves. the one who is capable of eating everyone is the king. there is an easy way to do that. be nice. even if you are not. act nice. even if you are not. will people notice it? yes. but would they care? for 10 min, 20 min or even 1 day. sooner or later they will forget the facade as long as you deliver things.
You and Adam Curtis need to spend some time together. I'd suggest watching "Can't Get You Out of My Head".
Why does capital win? Because we have no other narrative. And it killed all our old ones and absorbs all our new ones.
i was really naive believing there was any other option. if it is about capital and if that is the game, then i am ready to play now. can't wait to steal so many open source projects out there and market them. obviously it will be hard but hey, it is legal and doable. just stating this fact because i never had the confidence to pull it off. but after recent events, it started making sense. so whatever people make is now mine. i am gonna live with this motto and forget the goodwill of any person. as long as i can craft a narrative and sell whatever others create, i think that should be okay. what do you think of it? i am talking about things like the MIT license and open source.
how far will it take me? as long as i have the ability to steal the content and develop on top of stolen content, pretty sure i can make a living out of it. please note, this doesn't imply openai stole anything. what i am trying to imply is that i am free to steal and sell stuff others made for free. i never had that realization until today.
going by this definition, value can be leeched off other people who are doing things for free!
This is the theory of the lizard. Bugs do all the hard work of collecting food and water and flying around, and lizards just sit and eat the ones that fly by.
Well said! There is no wrong in being a lizard then, is there?
Well except tragedy of the commons right? If it’s all lizards and no flies everyone dies. This is why the human versions project to the workers that they are all in it together and blah blah blah.
I don’t know what to do with this.
You are correct!
Non-profits are often misrepresented as being somehow morally superior. But as San Francisco will teach you, status as a non-profit has little or nothing to do with being a mission-driven organization.
Non-profits here are often just another type of company, but one where the revenue goes entirely to "salaries". Often their incentives are to perpetuate whatever problems they are supposedly there to solve. And since they have this branding of non-profit, they get little market pressure to actually solve anything.
For all the talk of alignment, we already have non-human agents that we constantly wrestle to align with our general welfare: institutions. The market, when properly regulated, does a very good job of aligning companies. Democracy is our flawed but acceptable way of dealing with monopolies, the biggest example being the Government. Institutions that escape the market and don't have democratic controls often end up misaligned, my favorite example being US universities.
> compromise a few important people
Haven't 700 or so of the employees signed onto the letter? Hard to argue it's just a few important people who've been compromised when it's almost the entire company.
Why do you think 700 signed on? Money. Who let the money in? Altman.
That's a very different claim than just a few compromised people, then. That's almost the entire company that's 'compromised'.
You compromise a few influential people in the organization to get a foot in the door and ultimately your money - which you control. Your money will do the rest.
The 700 other employees who've signed on have agency. They can refuse to go to Microsoft, and Microsoft wouldn't be able to do anything about it. Microsoft's money isn't some magical compelling force that makes everyone do what they want, otherwise they could have used it on the board in the first place.
Money changes people. Especially when it’s a lot of money. They got used to the money and they want it to keep flowing - the charter be damned. Everyone has a price.
Everyone has a price, yet Microsoft can't buy the three board members on OpenAI board. Curious.
Your initial statement was flatly wrong, and you're grasping at straws to make it still true. Microsoft wouldn't be able to get anywhere if the people who work for OpenAI chose to stay. The choice they're making to leave is still their choice, made of their own volition.
OK, most people have a price. The 3 board members and the remaining workers are outliers.
You don’t have to buy all of them, just most of them.
But they were able to get somewhere because …
> The choice they're making to leave is still their choice, made of their own volition.
Yup. Most of OpenAI's workers chose the high SV-like salaries over the chartered mission of the organization they joined.
Could be, but it isn't necessarily so.
There's a whole range of opinions about AI as it is now or will be in the near future: for capabilities, I've seen people here stating with certainty everything from GPT being no better than 90s Markov chains to being genius level IQ; for impact, it (and diffusion models) are described even here as everywhere on the spectrum from pointless toys to existential threats to human creativity.
It's entirely possible that this is a case where everyone is smart and has sincerely held yet mutually incompatible opinions about what they have made and are making.
"few important people"? 95% of the company went with Altman. That's a popular vote if I have ever seen one..
Nokia was completely different, I doubt any of their regular employees supported Elop.
Right, what if what he wasn't being candid about was "we could be rich!" or "we're going to be rich!" messaging to the employees? Or some other messaging that he did not share with the board? Etc.. etc..
You compromise the “few” to get a foot and your money in the door. After that, money will work its magic.
My take as well - and the board acted too late. Sam probably promised people loads of cash, and that's the "candid" aspect we're missing.
> This was what MS did to Nokia too, if I remember correctly, to get them to adopt the Windows Phone platform.
To me, RIM circa 2008 would have been a far better acquisition for Microsoft. BlackBerry was embedded in the corporate world, the media loved it (Obama had one), and the iPhone and Android were really new.
I don't think this is a fair conclusion. Close to 90% of the employees have signed a letter asking for the board to resign. Seems like that puts the burden of proof on the board.
A board that basically accused Altman, publicly, of wrongdoing of some kind which appears to be false. To bring Altman back, or issue an explanation, would require retracting that; which brings in serious questions about legal liability for the directors.
Think about it. If you are the director of the company, fire the CEO, admit you were wrong even though nothing materially changed 3 days later, and severely damaged the company and your investors - are you getting away without a lawsuit? Whether it be from investors, or from Altman seeking to formally clear his name, or both? That's a level of incompetence that potentially runs the risk of piercing into personal liability (aka "you're losing your house").
So, you can't admit that you were wrong (at least, that's getting risky). You also can't elaborate on what he did wrong, because then you're in deep trouble if he actually didn't do anything wrong [1]. Your hands are tied for saying anything regarding what just happened, and it's your own fault. All you can do is let the company implode.
[1] A board that was smarter would've just said that "Altman was a fantastic CEO, but we believe the company needs to go a different direction." The vague accusations of wrongdoing were/are a catastrophic move; both from a legal perspective in tightening what they can say, and also for uniting the company around Altman.
I think the steward-ownership / stewardship movement might suffer a significant blow with this.
Do you realize that without support from Microsoft:
- There would be no GPT-3
- There would be no GPT-4
- There would be no DALL-E 2
- There would be no DALL-E 3
- There would be no Whisper
- There would be no OpenAI TTS
- OpenAI would be bankrupt?
There's no "open version" of OpenAI that actually exists. Elon Musk pledged money then tried to blackmail them into becoming the CEO, then bailed, leaving them to burn.
Sam Altman, good or bad, saved the company with his Microsoft partnership.
Elon running OpenAI would have made this timeline look downright cozy in comparison
I honestly wish Windows Phone had stuck around. I didn't particularly like the OS (too much like Win8), but it would at least be a viable alternative to the Apple-Google duopoly.
I'd love a modern Palm phone, myself. With the same pixelated, minimalist interface.
All I can say is NeurIPS will be interesting in 2 weeks...
It also reminds me of "Don't Be Evil".
This whole fiasco has enough drama for an entire season of HBO's Silicon Valley. Truly remarkable.
I was thinking we needed new seasons to cover the crypto crash, layoffs, and gen AI craze. This makes up for so much of it.
Remaining curious how D'Angelo has escaped scrutiny over his apparent conflict of interest as the "independent" board member with a clear commercial board background.
What is the conflict here? I don't know much about him, but if he actually oversaw building the Quora product he must be a POS guy.
Look up Quora's Poe. It was basically made obsolete by the DevDay GPTs announcement that precipitated this.
https://news.ycombinator.com/item?id=38348995 ...
"GPTs" by the other board member's company last April:
https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...
And OpenAI last week:
His time will surely come and I hope he has some good professional liability insurance for his position at OpenAI. And if I was his insurer I'd be canceling his policy pronto.
That's a great twist in the writers' storyline. Board quits, Altman + Brockman return to OpenAI, shamed Sutskever defects to Microsoft, where he leads the AI division in a lifelong quest for revenge for this humiliation.
They wrote Sutskever as a sort of reverse Bighead. He starts out at the top, actually has tech competence, and through a series of mishaps and random events becomes less influential and less popular.
He humiliated himself when he succumbed to pressure and tweeted that apology.
What? Apologies are good. They signal regret. They're far superior to not apologizing. And they're not a form of "humiliation" until evil people attempt to humiliate those doing the apology.
yeah felt like a really weird move
I can’t imagine MS is super eager to welcome Sutskever if he really did lead Altman’s ouster. OpenAI caught lightning in a bottle, MS had aligned themselves with the next big thing in tech and then Sutskever threw a grenade that could cause all of that to fall apart.
If the development of AGI is as dangerous as they say it is, it's on the level of WMDs. And here you have unstable people and an unstable company working on it. Shouldn't it be disbanded by force, then? Not that I believe OpenAI has a shot at AGI.
First of all, for many people, AGI just means general-purpose rather than specific-purpose AI. So there is a strong argument to make that that has been achieved with some of the models.
For other people, it's about how close it is to human performance and human diversity of tasks. In that case, at least GPT-4 is pretty close. There are clearly some types of things that it can't do even as well as your dog at the moment, but the list of those things has been shrinking with every release.
If by AGI you mean creating a fully alive digital simulation/emulation of a human, I'll give you that: it's probably not on that path.
If you are incorrectly equating AGI and superintelligence, ASI is not the same thing.
If it's proven to be dangerous, Congress will quickly regulate it. It's probably not that dangerous, and all the attempts to picture it that way are likely fueled by greed, so that it gets regulated out of small players' reach and subjected to export controls. The real threat is that big tech is going to control the most advanced AIs (already happening; MS is throwing billions at it) and everyone else will pay up to use the tech while also relinquishing control over their data and means of computation. It has happened with everything else that became centralized: money, the Internet, and basically most of your data.
"Altman, former president Greg Brockman, and the company’s investors are all trying to find a graceful exit for the board, says one source with direct knowledge of the situation, who characterized the Microsoft hiring announcement as a “holding pattern.” Microsoft needed to have some sort of resolution to the crisis before the stock market opened on Monday, according to multiple sources."
In other words... a convenient representation of a future timeline that will almost certainly never exist.
It sounds risky to have a lie like that out in the open for a listed company like Microsoft.
We're all gonna get turned into paperclips, aren't we
> Sam Altman is still trying to return as OpenAI CEO
Anyone would rather put up with their abusive work relationship than switch to a company that forces you to use Microsoft Teams lol
This is bonkers. Usually there is a sense of "all sales are final" when companies make such impactful statements.
Yet we have:
* OpenAI fires Sam Altman hinting at impropriety.
* OpenAI is trying to get Sam back over the weekend.
Then we have:
* Microsoft CEO Satya personally announces Sam will CEO up a new... business? under Microsoft.
* We hear Sam is still trying to get back in at OpenAI?!
Never seen anything like this playing out in the open. I suspect the FTC is watching this entire ordeal like a hawk.
As if the FTC are intelligent or equipped or motivated enough to do anything other than chew popcorn like the rest of us.
Fun times for Adam's friend Emmett Shear when he wakes up the next morning. Almost all of the employees have signed a letter that amounts to his own sacking from the company he was appointed CEO of less than 24 hours ago. I can't think of a precedent in business.
Looks like that time when Argentina had 3 presidents in the span of a few days.
I'm reminded of the legal adage "every contract tells a story", where the various clauses and subclauses reflect problems that would have been avoided had those clauses been present in some earlier contract.
I expect the next version of the Corporate Charter/Bylaws for OpenAI to have a lot of really interesting new clauses for this reason.
Someone must have formulated a law that says something like the following:
"Given sufficient visibility, all people involved in a business dispute will look sad and pathetic."
> "Given sufficient visibility, all people involved in a business dispute will look sad and pathetic."
"involved in a business dispute" is superfluous here, its just a reason that visibility happens.
I genuinely believe, based on my experiences with ChatGPT, that it doesn't seem all that threatening or dangerous. I get we're in the part of the movie at the start before shit goes down but I just don't see all the fuss. I feel like it has enormous potential in terms of therapy and having "someone" to talk to that you can bounce ideas off and maybe can help gently correct you or prod you in the right direction.
A lot of people can't afford therapy but if ChatGPT can help you identify more elementary problematic patterns of language or behavior as articulated through language and in reference to a knowledgebase for particular modalities like CBT, DBT, or IFS, it can easily and safely help you to "self-correct" and be able to practice as much and as often with guidance as you want for basically free.
That's the part I'm interested in and I always will see that as the big potential.
Please take care, everyone, and be kind. It's a process and a destination, and I believe people can come to love both and engage in a way that is rich with opportunity for deep and real restoration/healing. The kind of opportunity that is always available and freely takes in anyone and everyone.
Edit: I can access all the therapy I can eat but I just don't find it generally helpful or useful. I like ChatGPT because it can help practice effective stuff like CBT/DBT/IFS and I know how to work around any confabulation because I'm using a text that I can reference
Edit: the biggest threat ChatGPT poses, in my view, is the loss of income for people. I don't give a flying fuck about "jobs" per se; I care that people are able to have enough economically to take care of themselves and their loved ones and be OK psychologically/emotionally.
We will have to deal with the selfish folks who will otherwise (as always) attempt to absorb even more of the pie of which they already have a substantial, sufficient portion: they will need to be made to share, or to take a timeout until they can be fair, or to go away entirely. Enough is enough. Nobody needs excess until everyone has sufficiency; after that, they can have and do as they please unless they are hurting others. That must stop.
Agree on LLMs being effective for nudging therapy like CBT. I built an Obsidian plugin, "ChatCBT", with ChatGPT 3.5 that has been really helpful for pulling me out of episodes where I start to spiral into negative thinking. I'm shocked how effective this is with a basic prompt (you can see the prompt in the codebase).
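For illustration, here's a minimal sketch of the kind of call such a plugin might make. I haven't read the ChatCBT source, so the system prompt, the reframe() helper, and the model name below are all assumptions, not the plugin's actual internals; it just uses the standard OpenAI Python client:

    # Sketch of a CBT-style reframing call; prompt, helper, and model
    # are illustrative, NOT the actual ChatCBT plugin internals.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CBT_SYSTEM_PROMPT = (
        "You are a supportive assistant familiar with CBT techniques. "
        "Help the user name the cognitive distortion in their thought, "
        "then gently suggest a more balanced reframing. Never diagnose."
    )

    def reframe(negative_thought: str) -> str:
        # One round-trip: the system prompt sets the CBT framing,
        # the user message carries the thought to reframe.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": CBT_SYSTEM_PROMPT},
                {"role": "user", "content": negative_thought},
            ],
        )
        return response.choices[0].message.content

    print(reframe("I flubbed one slide, so the whole talk was a disaster."))

The striking part is how little scaffolding is needed: the entire "therapy nudge" lives in one system prompt.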
How would you compare CBT to IFS? I have philosophical issues and differences with CBT because of what I would characterize as an overemphasis on logic and changing your thinking, which subtly or explicitly teaches the lesson that the way you think, and most particularly how you feel, is invalid or fundamentally "incorrect".
Consequently, I would equate it to well-intentioned gaslighting, which is one of the deadliest sins in my view. Gaslighting is one of the most destructive human dynamics, to the extent that I consider it emotional rape.
I think there's less of a nexus between thinking and feeling than there is your thinking being influenced by how you (allow yourself to and honor) feel. (Feel) -> think rather than (think) -> feel, although I'm referring more to emphasis than disputing the notion that how you think can self-referentially influence how you feel (but that's more of a perspective thing in my view).
I really like IFS because it actually requires you to bring "everybody", all the thinkings inside, to the table to be heard and validated, like in Inside Out (Pixar). That seems like the best approach, and it's done amazing things.
This is the problem I have when there are so many saying "no, we need to wait, wait, wait, what about safety": people are dying and in horrible states now. This needs to be available, at least the functionality that allows it to engage with you in a therapeutic context.
To the extent they try to prevent or block everyone from having access to this kind of profound outlet/tool/conversation partner, I consider it a great evil (like the opposite of what the Jewish folks call "mitzvahs"), and they need to take this into consideration and either find a way to align their views to allow space for it or step aside and find another gate to keep.
I will not tolerate their message or influence or allow them to prevail to this end. The biggest danger of LLMs for things like therapy is everyone not being able to easily access and "have" them forever, to keep and use to grow and heal, for free: no bullshit temporary access or subscriptions. This is profound on the level of whatever the psychological or psychiatric equivalent of the printing press or movable type is.
That's why there's an Ollama option ;)
How do you run this on the various platforms for the average bear?
A lot of people who could use it may need an app; they may not be capable of getting and running it from source on GitHub. We have to push accessibility to the techy and not-techy alike, for we all live and walk amongst each other, and this is the emotional equivalent of vaccination, in my conjecture.
I'm sure local GUI clients will be available soon for the average user to boot up local LLM servers
K, like, I'm sort of techy, not super, and you've already lost me. We need to Signal this shit.
1) Visit site. 2) Download app installer. 3) Install. 4) Open/run.
No servers, SSH, intranet, local/remote, assorted technical jargon, etc. We gotta make it REAL easy lol :)
Love ya
Also: thanks, Ollama ;)
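For the technically curious: a minimal sketch of what a GUI client would be doing under the hood, assuming Ollama's documented local REST API. The model name and prompt are just examples, and the model would need to be pulled first (e.g. "ollama pull llama2"):

    # Sketch: one request to a locally running Ollama server.
    # Nothing leaves your machine; the server listens on port 11434 by default.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",  # illustrative; any model you've pulled works
            "prompt": "I keep telling myself I ruin everything. Help me reframe that.",
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the model's reply

A friendly app really just has to wrap those four installer steps plus this one HTTP call, which is why "Signal-easy" local LLM clients seem plausible soon.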
Given that Ilya has switched sides now, that leaves 3 board members at the helm.
The one that is really overlooked in this case is the CEO of Quora, Adam D'Angelo, who has a competing interest in Poe and Quora, which he sunk his own money into, and which ChatGPT and GPTs make irrelevant.
So why isn't anyone here talking about the conflict of interest with Adam D'Angelo as one of the board members, the one who is trying to drag OpenAI down in order to save Quora from irrelevancy?
Oh, people are talking about it, just not as loudly. But I think you are dead on: that's the main burning issue at the moment, and D'Angelo may well be reviewing his options with his lawyer by his side. Admitting fault would open him up to liability immediately, but there aren't many ways to exit stage left without doing just that. He's in a world of trouble, and I suspect this is the only thing that has caused him to hold on to that board seat: to have at least a fig leaf of cover to make it seem as if his acts are all in line with his conscience instead of his wallet.
If it turns out he was the instigator his goose is cooked.
If we assume bad faith on D'Angelo's part (which we don't know for sure), it would obviously be unethical, but is it illegal? It seems like it would be impossible to prove what his motivations were even if it looks obvious to everyone in the peanut gallery. Seems like there's very little recourse against a corrupt board in a situation like this as long as a majority of them are sticking together.
It's not illegal but it is actionable. You don't go to jail for that but you can be sued into the poorhouse.
Unless D'Angelo has some expensive eight-figure lifestyle, he's not going to the poorhouse anytime soon.
> He was chief technology officer of Facebook, and also served as its vice president of engineering, until 2008.
> D'Angelo was an advisor to and investor in Instagram before its acquisition by Facebook in 2012.