Sam Altman spoke at an APEC panel on behalf of OpenAI literally yesterday: https://twitter.com/LondonBreed/status/1725318771454456208
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Wow
Wow. Anyone have any insight into what happened?
"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]
Strangest thing to see in a company's PR when they're thriving!
The All In podcast had some words about this a few months ago, though they spoke in generalities.
Ouch -- were there any signs this was coming?
The prodigy Altman is booted after creating potentially the most successful company of all time, and a CTO with no prior ML/AI experience becomes CEO. Wow.
Put me in, coach.
As someone deeply entrenched in the realms of AI development and ethical considerations, boasting a robust leadership background, I stand poised to lead OpenAI into its next phase of innovation and ethical advancement. My tenure navigating the intersection of AI research, business acumen, and ethical frameworks provides a unique foundation. Having spearheaded AI initiatives that upheld ethical standards while fostering groundbreaking technological advancements, I bring a proven track record of synthesizing innovation with responsible AI practices. My commitment to leveraging AI for the betterment of society aligns seamlessly with OpenAI's ethos, ensuring a continued pursuit of groundbreaking advancements in AI while maintaining a steadfast commitment to ethical, transparent, and socially responsible practices.
He's not perfect, but behind the scenes he's a genuine and upstanding person. I've met lots of wealthy smart people, and he's the only exception. He was the only person I trusted in this situation, and I'm genuinely nervous that he's no longer running OpenAI.
It was always a bit strange that he never held shares in nor took a salary from OpenAI, but then what about his vision (and childhood dream) to achieve AGI and all?
The subheading of the article, minus unnecessary words, would be a big improvement:
Sam Altman departs OpenAI; interim replacement is CTO Mira Murati
>As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
Much prefer him to the deepmind people who seem almost psychopathic by comparison.
I find it $EMOTION that the board is also not candid in its communications on why they fired him.
Whoa, rarely are these announcements so transparent that they directly say something like this. I’m guessing there was some project or direction Altman wanted to pursue, but he was not being upfront with the board about it and they disagreed with that direction? Or it could just be something very scandalous, who knows.
It's down 12% after the news so far.
He is a major investor in a few high-profile startups, like Humane's AI Pin, so either he just wants new challenges, or there is some form of scandal (let's all hope not), or there are issues around not going full steam ahead on profitability.
His legendary work on first harvesting Reddit, then going on a European tour to lobby against others doing the same, will be taught in business schools for years.
Hope he lands a nice job next. How about head of QA at Tesla?
/s
I invented a saying to describe this common occurrence: "Sometimes the cover-up is worse than the crime."
Hacker news server goes brrrr
Joking aside, this feels massive. Both that it happened so suddenly and that the announcement doesn't mince words. The fact that the CTO is now CEO makes me think it's probably not a lie about their tech. It wouldn't make sense to say "we've been lying about our capabilities" and then appoint the current CTO as CEO.
This makes me think it's either financial or a scandal around Sam himself.
I can't wait to hear more
I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)
I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:
https://news.ycombinator.com/item?id=38309611&p=2
https://news.ycombinator.com/item?id=38309611&p=3
https://news.ycombinator.com/item?id=38309611&p=4
https://news.ycombinator.com/item?id=38309611&p=5
Sorry! Performance improvements are inching closer...
I’m very curious which.
1. Altman commingled some funds of Worldcoin and OpenAI, most probably through carelessness.
2. OpenAI is a golden goose, so the board was more than happy to kick out the leader, making more space for themselves.
3. The harsh wording is an attempt to muddy the water, because an inevitable competitor from Altman is coming.
On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc., but then why did they give up so much control/equity to Microsoft for their money?
Sam finally admitted recently that for OpenAI to achieve AGI they "need another breakthrough," so my guess is it's this lie that cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.
Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...
Flagged HN thread: https://news.ycombinator.com/item?id=37785072
When I googled his name I saw the same cached text show up.
EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.
Sam doesn't seem to have been ousted by the usual corporate politics. The message definitely doesn't sound like the generic corpspeak for these kinds of events, such as "looking for new opportunities" or "spending more time with their families", which is usually sent out by consensus among all parties.
They better communicate who they are right quick. I liked Sam’s product decisions.
"OpenAI announces leadership transition"
YC Summer 2024: MoreCore is hiring scaling engineers to speed up HN by recycling old Athlons
It appears there are people digging into his dark side.
And possibly related: the pause of ChatGPT Plus sign-ups due to capacity problems (which is all Azure afaik).
This board member has been making dubious statements in public - gross lies about what openai and ai can do - misleading millions of people. He led a campaign of promoting the company’s product centred on FOMO, FUD, spam and other dark patterns.
Good riddance.
https://manifold.markets/Ernie/what-will-sam-altman-be-doing...
And this tag contains all the markets about him https://manifold.markets/browse?topic=sam-altman
Will he end up at Grok? Why was he fired? etc.
Some years go by, and AGI progresses to assault man
Atop a pile of paper clips he screams "It's not my fault, man!"
But Eliezer's long since dead, and cannot hear Sam Altman.
--
Scott Alexander
[0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO
If you google "Sam Altman" his twitter bio in the search results reads:
[removed]
How do you find the next CEO? Are there good people to pick from internally? Altman was a public face for the company. Replacing him will be difficult.
I don't want to build a business with their stuff and then find OpenAI shifts direction.
Edit: I didn't even know he molested his sister when I wrote my post: https://twitter.com/phuckfilosophy/status/163570439893983232...
That the board is unhappy with the for-profit, moat-building path he charted.
That this is about his sister.
That he pissed off microsoft.
That he did something illegal, financially.
That he has been lying about costs/profit.
That he lied about copyrighted training data.
I will add: maybe he's not aggressive enough in pursuit of profit.
To me, this sounds like Altman did something probably illegal to try and generate more profit, and the board wasn't willing to go along with it.
Ilya Sutskever did an ominous and weird YouTube video for The Guardian recently about the dangers of AI. Maybe it has something to do with it?
Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?
A few things that could lead to the company throwing shade:
1. Real prospects of OpenAI progress have been undersold, and Altman and cofounders sought to buy time by slow-rolling the board.
2. Real profitability is under/overestimated.
3. The board was not happy with the "doom and gloom" narrative to world leaders.
4. World leaders asked for business opportunities and the board was not fully aware of bridges built or certain opportunities being explored.
5. None of the above and something mundane.
https://twitter.com/phuckfilosophy/status/163570439893983232...
The Pentagon calls up Sam Altman and offers a very lucrative contract for an AI to oversee a fleet of networked drones that can also function semi-autonomously. Sam Altman does not tell the board.
Reality might, of course, be very different.
Make your own conclusions.
But also, a human company operating under the human legal arrangements it's built upon was never going to withstand the advent of artificial superintelligence. It would tear apart whatever it needs to in order to achieve whatever its initial goals are. The best intentions of Altman and Brockman would be easily outmaneuvered.
Response:
Sam Altman, the CEO of OpenAI, has been a controversial figure in the AI industry. His leadership style, lack of transparency, and decision-making processes have raised significant concerns among OpenAI's employees and the public. This essay will delve into these issues, arguing that Altman's actions warrant his removal from his position.
Firstly, Altman's lack of transparency is a major concern. He has been known to make decisions without adequately consulting with his team or the public. This has led to a lack of trust and dissatisfaction among OpenAI's employees. For instance, when Altman announced that OpenAI would be focusing on a single project, he did not provide sufficient reasoning or context. This lack of communication has left employees feeling disenfranchised and uninformed.
Secondly, Altman's decision-making processes are often questionable. His decisions have not always been in the best interest of OpenAI or its employees. For example, when OpenAI decided to pivot from developing AI systems to developing AI safety research, many employees felt that this was a strategic mistake. Altman's decision to focus on this area without considering the potential negative impacts on the company's reputation and financial stability was a clear example of poor decision-making.
Thirdly, Altman's leadership style has been described as autocratic. He has been known to make decisions without considering the input of his team. This has led to a lack of buy-in from employees and has negatively impacted morale. For instance, when Altman decided to shift OpenAI's focus to AI safety research, many employees felt that their ideas and contributions were being overlooked.
Finally, Altman's actions have also raised concerns about his commitment to AI safety. His decision to focus on AI safety research, rather than on developing AI systems, has raised questions about his commitment to the field. This decision has also raised concerns about the potential misuse of AI technology and has led to a loss of trust among the public.
In conclusion, Sam Altman's lack of transparency, questionable decision-making, autocratic leadership style, and concerns about his commitment to AI safety are all reasons why he should be removed from his position at OpenAI. It is clear that his actions have led to a lack of trust and dissatisfaction among OpenAI's employees and the public. It is crucial that OpenAI takes these concerns seriously and makes changes to ensure the success and safety of its AI technology.
Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more
Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]
Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]
Nov 15 - OpenAI announce no new ChatGPT plus upgrades [1] but still allow regular signups (and still do)
Nov 17 - OpenAI fire Altman
Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.
This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]
Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.
Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.
edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.
[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...
[1] https://twitter.com/sama/status/1724626002595471740
[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...
[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.
https://x.com/ericschmidt/status/1725625144519909648?s=20
Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.
Making such a statement before knowing what happened (or maybe he does know what happened) makes this seem like it might not be as bad as we think?
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
This prompted me to actually read up on the charter: https://openai.com/charter
With such an insistence on the fact that OpenAI is supposed to be non-profit and open for all of humanity, it's pretty clear that the board doesn't like the direction the company has taken, both in its pursuit of profit and in its political lobbying to restrict innovation.
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
will have more to say about what’s next later.
Piping all data submitted to OpenAI straight to his buddy's Palantir would definitely not support the mission to "benefit all of humanity".
>i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
>will have more to say about what’s next later.
I don't think it changes anything.
RIP Sam. Cut down too early; not given the chance to become the next crazy CEO tech baron.
> In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.
Why would they include that? Maybe it's just filler, but if not, then it is possible that there has been more than a simple disagreement about long-term objectives. Possibly something going on that the board feels would get them shut down hard by state-level players?
"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".
https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s
This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.
Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and the governance structure of OpenAI certainly wouldn't be ready for it either it seems.
- Board is mostly independent, and those independents don't have equity
- They talk about not being candid - this is legalese for "lying"
The only major thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on a decision) that is misaligned with the Charter. That's the only fireable offense that warrants this language.
My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.
I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch, which never would have worked without his extreme efforts. If I were on the OpenAI board, the bar for firing Sam would be so ridiculously high that he would have to have done something so outrageous, so illegal, etc., that I struggle to believe what he actually did could even remotely approach that standard.
Sam told the board the AI was dumber than it was. Sam told the board the AI is smarter than it was.
I don't know which one is worse.
I just hope it wasn't something silly like sleeping with a female intern or an "accusation of s/a or grape". AI growth is too important to mess up because of trivialities like these.
I saw a comment (that I can’t find now) wondering if Sam might have been fired for copyright reasons. Pretty much all the big corpuses that are used in LLM training contain copyrighted material, but that’s not a surprise and I really don’t think they’d kick him out over that. But what if he had a team of people deliberately adding a ton of copyrighted material - books, movies, etc - to the training data for ChatGPT? It feels like it might fit the shape of the situation.
2) generally, even when a board fires a CEO, they rarely "call them out" or say what they did wrong. they must have some extremely strong evidence against him.
i think it could be any of the following:
- something personal, i.e. the controversy w/ his sister
- a financial issue: chatgpt stopped signups a couple of days ago
- a safetyist coup: maybe the board thought he was moving too fast
- a microsoft coup: microsoft used its power to knock out what they perceived to be their biggest threat
"Sam Altman was actually typing out all the chatgpt responses himself and the board just found out"
Sam's first tweet after all this, just hours after this article:
> will have more to say about what’s next later.
So either he knew that he was about to be fired or at least was prepared.
Also, based on the wording of the press release, Sam did something that the board absolutely hated, because most of the time, even if he did something illegal, it doesn't make sense to risk defamation by accusing him publicly.
Also, based on his video from yesterday at the APEC summit, he repeated similar lines a few times:
> I am super excited. I can't imagine anything more exciting to work on.
So here if we assume he knew he was about to get fired, the conclusion is clear.
I just went to Google Bard and said:
“What is going on with the CEO of OpenAI?”
Google Bard replied:
“On November 17, 2023, the board of directors of OpenAI, Inc. announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company's chief technology officer, will serve as interim CEO, effective immediately.
Altman has been CEO of OpenAI since 2015, and he has played a key role in shaping the company's mission to ensure that artificial general intelligence benefits all of humanity. He is widely respected in the field of artificial intelligence, and he is known for his commitment to open research and collaboration.
Altman's departure comes at a time when OpenAI is facing a number of challenges, including the increasing competition from other AI companies, the growing public concern about the potential dangers of AI, and the need to find sustainable funding sources. Despite these challenges, Altman is confident that OpenAI will continue to be a leader in the field of AI.
"I am incredibly proud of what we have accomplished at OpenAI, and I am confident that the company will continue to do great things," Altman said in a statement. "I am grateful to the board for giving me the opportunity to lead OpenAI, and I am excited to see what the company will achieve in the years to come."
The board of directors is currently conducting a search for Altman's successor, and it is expected to name a new CEO in the coming months. In the meantime, Murati will focus on leading OpenAI through its current challenges and ensuring that the company continues to make progress on its mission.
Murati is a highly respected technologist with a deep understanding of artificial intelligence. She has been with OpenAI since its inception, and she has played a key role in developing the company's technology. She is also a strong advocate for open research and collaboration, and she is committed to ensuring that OpenAI's technology is used for the benefit of all humanity.
The board of directors is confident that Murati will be able to lead OpenAI through its current challenges and ensure that the company continues to make progress on its mission. "Mira is a talented and experienced leader who is deeply committed to OpenAI's mission," said Greg Brockman, the chairman and president of OpenAI. "I am confident that she will be able to lead OpenAI through this transition and into a bright future."”
Isn’t this fascinating? A major event happens regarding ChatGPT-related issues and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours…
Meanwhile ChatGPT still thinks it’s 2021 heh
Given what proof they had on the table, Greg Brockman, Ilya Sutskever, and independents such as Adam D’Angelo, Tasha McCauley, and Helen Toner could drive 3+ votes against Sam Altman.
Rarely do we see a board in action, and we saw this one today.
These past few months his name has made its way into the mainstream. Maybe it's time for him (and half the GPT eng team) to cash in?
I was laid off from OpenAI today along with my boss Sam.
I was the person in charge of putting together the presentations for our board meetings.
No one has told me why I was let go but Sam texted me “wtf” and next thing I know my Slack and Gmail were disabled.
I’m now looking for a new role, so if you’re hiring for investor relations, my DMs are open!
The only thing that comes to mind is criminal conduct. Nothing else seems to demand a sudden firing. OpenAI has clearly been the rocket-ship startup - a revolutionary tool and product clearly driving the next decade+ of innovation. What else would demand a fast firing of the popular, articulate, and photogenic CEO but a terrible problem?
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
sama, care to address it here in what would theoretically be a safe place?
1) That LLMs cannot generalize outside of _patterns_ they pick up during training? (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)
2) That every time you train a new model, with potentially very high expense, you have no idea what you're going to get. Generally better but also potentially bigger reliability challenges. LLMs are fundamentally unreliable and not stable in any kind of use case besides chat apps, especially when they keep tweaking and updating the model and deprecating old ones. No one can build on shifting sands.
3) That GPT-4 Turbo regressed on code-generation performance, and the 128K window is only usable up to 16K (but for me, in use cases more complicated than Q&A over docs, I found 1.2K to be the max usable window; that's 100X less than advertised - there's a rough way to probe this yourself in the sketch below).
4) That he priced GPT4-V at a massive loss to crush the competition
5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers and forcing a halt to sign-ups, even with a $29B investment riding on the growth of the user base.
Any one of the above, or none of the above. No one knows... but the board... and Microsoft, which holds 49% of the for-profit entity.
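Claim 3 is at least checkable. If you want to probe the effective context window yourself, here's a rough needle-in-a-haystack sketch; it assumes the official `openai` Python client (v1+), an OPENAI_API_KEY in your environment, and purely illustrative model/size values - not how OpenAI measures anything:

```python
# Rough needle-in-a-haystack probe of the *effective* context window.
# Assumptions: `openai` Python client v1+, OPENAI_API_KEY set, and an
# illustrative model name; token counts are approximated, not exact.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recalls_needle(filler_tokens: int, model: str = "gpt-4-1106-preview") -> bool:
    """Bury a fact behind ~filler_tokens of padding and ask for it back."""
    needle = "The secret passphrase is 7421."
    filler = "The sky is blue. " * max(1, filler_tokens // 5)  # ~5 tokens/sentence
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"{needle}\n\n{filler}\n\nWhat is the secret passphrase?",
        }],
    )
    return "7421" in (resp.choices[0].message.content or "")

# Sweep depths; where this starts returning False is your usable window.
for n in (1_000, 4_000, 16_000, 64_000, 100_000):
    print(n, recalls_needle(n))
```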
Scariest thing: this was over some kind of AI safety decision. OpenAI has some new capability and there was disagreement over how to handle it.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
>In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission
>OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity
This suggests to me that he was trying to move them away from the mission of creating AGI and instead focus more on profits.
A lot of other theories don't match because the board was in an extreme hurry to get him out (voting him out in the middle of the night, not even waiting for markets to close to announce it), must have proof of serious misconduct and the need to immediately distance themselves from Sam (otherwise the language would not be as bridge-burning as it is) and a major financial or technical failure seems unlikely, since the CFO remains and the CTO was even promoted to interim CEO - they seem to be trusted, still, so it must have been something Sam did on his own.
Leaking/selling the full models matches: this would violate OpenAI's non-profit terms, would be something Sam could do without the C-suite being implicated, and (given the data contained in the training set) might even mean legal hot water, justifying this drastic reaction and immediate distancing.
> There’s been a vibe change at openai and we risk losing some key ride or die openai employees.
It is not; it is a non-profit foundation. It can't pay out profits to shareholders, and board members don't usually become billionaires.
Board probably took a look at updated burn-rate projections, saw that they have 6 months of runway, saw that they don't have enough GPUs, saw that Llama and Mistral and whatever other open-source models are awesome and run on personal computers, and thought to themselves - why the hell are we spending so much God damn money? For $20 a month memberships? For bots to be able to auto-signup for accounts, not prepay, burn compute, and skip the bill?
Then Grok gets released on Twitter, and they are left wondering - what exactly is it that we do, that is so much better, that we are spending 100x of what cheapo Musk is?
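For what it's worth, the runway math being gestured at above is just division; a back-of-the-envelope sketch where every figure is a made-up placeholder, not OpenAI's actual financials:

```python
# Back-of-the-envelope runway estimate. Every number below is a
# hypothetical placeholder for illustration, not OpenAI's real financials.
cash_on_hand = 1_000_000_000        # assumed cash remaining, USD
monthly_compute_burn = 180_000_000  # assumed GPU/inference spend, USD/month
monthly_other_burn = 40_000_000     # assumed payroll/offices/etc., USD/month
monthly_revenue = 50_000_000        # assumed subscriptions + API, USD/month

net_burn = monthly_compute_burn + monthly_other_burn - monthly_revenue
print(f"Runway: {cash_on_hand / net_burn:.1f} months")  # ~5.9 months here
```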
So, since we are all speculating: could it be something like wanting to "download" the entire ChatGPT model, pass it to some friends, then start his own rival company where he has 100% equity? But then he got caught by the CTO?
I assume more info will come out, but it sounds more like a major ethics breach than a business only decision or even a "contentious relationship with the board" decision...
https://x.com/openai/status/1725611900262588813
How crazy is that?!
(Edit 2 minutes after) .. and /there/ Greg quit!!
What's actually happened with AI: its CEO is jobless now.
We expect him to lie whenever the board thinks it's necessary and we expect him to tell the truth whenever it fits the narrative.
We also expect him to play along, even when some feature is too freaking powerful or so fucking pathetic it would only make marketing people and influencers drop their panties and write 15,000 fucking newsletters about it because PR.
The company is about money and he simply didn't prioritize that. He tried to blow it up, exalted, exaggerated, trying to make people aware of the fact that OpenAI has no edge on the competition at all.
There are so many options and OpenAI got waaaaaaay too much attention.
Greg resigned. Things are happening fr
I can't help but think he might be someone that fits the profile of the company from both sides of the partners involved.
She also says that there will be many more top employees leaving.
It's from your own company, so you may use any internal information you have access to.
Be candid.
Ilya has always seemed like he was idealistic and I’m guessing that he was the reason for OpenAI’s very strange structure. Ilya is the man when it comes to AI so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya’s idealistic notions (in particular OpenAI is non-profit, unless forced to be capped profit and is categorically not in the business of making money or selling itself to MSFT or any entity). “Helen” and “Tasha” are comically out of their depth and are loony toons, and simply decided at some time ago to follow Ilya.
Sam got the call from MSFT to sell, MSFT really ponied up (300B ?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital which was in the works). This ran counter to Ilya’s idealistic notions. Sam attempted to negotiate with Ilya and the loony toons, a vote was called and they lost, hard.
I think this tracks with all the data we have.
There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.
If this did happen then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.
Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized, not-fair-use use of copyrighted content.
2. Announces this.
3. Then tells the board that he is directly on the hook in some lawsuits - essentially creating a safety net/insurance policy for himself without being candid with the board.
Hence, fired.
Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute - this is not something the CEO alone could hide, and I would trust the other board members to understand those matters better.
We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has and is clearly helping with that mission, AGI or no, regulatory capture or no.
We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.
Sucks to have the rug pulled out from under you, that much I can empathize with.
> why was sam altman fired from openai?
Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."
Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.
In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.
The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.
Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.
> could chatgtp have fired sam altman?
No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.
It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.
Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.
If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.
The company is not profitable and is miles away from being profitable; I'd go as far as to say it doesn't have a path to profit.
Outside of the Copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive - it's built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale, similar-parameter models already running in their stacks.
The only thing he did is package the product nicely and put it out to masses (an ethically dubious move that couldn’t have been done by big corpos for PR reasons - explicitly because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow raising and incidental gimmicky use cases.
All of the actual product work (i.e. Copilot and the distillation that GPT brings) was done by other companies.
What is everyone drinking and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I'd argue it's not a good thing that technology as powerful as foundational AI is now being debated, and policy formed on, by people who don't know the first thing about ML. I think we skipped a couple of rungs on the natural evolution of this - which is why the whole AI safety debate started.
He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is like Apple-level shenanigans - something that HN usually despises.
I genuinely don’t get where the impressiveness is coming from?
I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP
Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG
Helps to turn markdown mode on to see the list & table.
Hope that helps!
It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.
"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...
But in a dog-eat-dog world, this is really vultures eating each other up; I suppose at this point the most ruthless will be left standing at the end.
Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.
>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.
People seem to feel a lot more strongly about him than I thought possible.
I would trust this company 100% if they did so. He is the most relevant and the best for the job, far far far!!
1. (6015) Stephen Hawking dying
2. (5771) Apple's letter related to the San Bernardino case
3. (4629) Sam Altman getting fired from OpenAI (this thread)
4. (4338) Apple's page about Steve Jobs dying
5. (4310) Bram Moolenaar dying
If that is the case, I don't predict good things for the (not at all) OpenAI. Judging by the number of users and how slow GPT-4 often gets, I think they are being heavily subsidised by Microsoft in terms of hardware, and all this money will be expected to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.
We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems an inferior model with some extra tools thrown in to mask it.
The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses were much better.
I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better. A day or so later there was no need to prepend this (I hadn't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 became closer to its previous version.
It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.
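If you're hitting the API rather than the web UI, that prepend trick is easy to automate. A minimal sketch, assuming the `openai` Python client (v1+) and an illustrative model name; the instruction text is the one quoted above:

```python
# Minimal sketch of the prompt-prepend workaround described above, for
# API use. Assumptions: `openai` Python client v1+, OPENAI_API_KEY set,
# and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PREFIX = ("Don't search online; consider the topic carefully "
          "in your mind, step by step.\n\n")

def ask(question: str, model: str = "gpt-4-1106-preview") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PREFIX + question}],
    )
    return resp.choices[0].message.content or ""

print(ask("Summarize how Python's GIL affects multithreaded code."))
```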
The first thing I saw this morning was this video [1] shared on Reddit, and I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!
I know this feels hard. You spend years building such a successful company with an extremely exceptional product and, without a hint or warning, you find yourself fired!
This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.
______________________
1. https://twitter.com/edmondyang/status/1725645504527163836
Of course it's highly unlikely that the board would do that, but I'm just asking if this is theoretically possible?
Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.
I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.
Also, the paradox in the reactions to Sam Altman's firing is striking:
while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.
This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.
Another important question not discussed here: who sits on the board of OpenAI exactly and in full?
Another important aspect: The Orwellian euphemism used in the official announcement^0: “Leadership transition”. Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announces “leadership transition” is November 2023’s “Syria just had free and fair elections”
0: https://openai.com/blog/openai-announces-leadership-transiti...
I’ll be curious if Sama’s next company is American.
- Sam Altman _briefly_ went on record saying that openAI was extremely GPU constrained. Article was quickly redacted.
- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...
- Plus signups paused.
If OpenAI needs gpu to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.
Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.
GPU SoCs have limited memory, just like the current crop of CPU SoCs. Is the hard wall to a breakthrough in AGI via ChatGPT software-bounded or hardware-bounded?
But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either; obviously, otherwise they would all have left after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.
The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.
Expect OpenAI to IPO next year.
OpenAI recently updated their "company structure" page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they've reached AGI.