Inside OpenAI's Turbulent Year

Video Summary
  • In November 2024, Suchir Balaji, a former OpenAI researcher, was found dead under controversial circumstances.
  • Balaji had previously expressed ethical concerns about OpenAI's practices, claiming violations of copyright.
  • Despite OpenAI's significant financial challenges, including a projected loss of $5 billion, they recently raised $6.6 billion in funding.
  • The competition in AI is intensifying, leading to concerns about the sustainability of OpenAI's rapidly growing model.
  • OpenAI's latest model, o3, shows promising advancements but also highlights ongoing internal struggles within the company.

Hi, welcome to another episode of Cold Fusion.

In November of 2024, a 26-year-old OpenAI researcher who worked on ChatGPT was found dead in his San Francisco apartment. This was just eight days after his name appeared in a lawsuit against his former employer.

His name was Suchir Balaji, and he had publicly voiced ethical concerns about OpenAI's products. He alleged that the company was engaging in unethical behavior by using protected content without proper authorization, potentially violating copyright law. Balaji stated that these practices would lead to negative consequences for the wider Internet ecosystem.

Officials have ruled his death a suicide. But that didn't stop the Internet from speculating. This heartbreaking news is just the latest in a series of controversies for OpenAI. It really hasn't been a good year for them in that regard.

To paint a picture, top executives resigned; there's increased competition, the latest products have produced a lukewarm yawn from customers, and now the tragic recent passing of a former employee turned whistleblower. It should be enough for any company to go through, but it's not over. Now even Elon Musk and Zuckerberg are teaming up to fight against OpenAI.

Now, in hearing all of this, you might think OpenAI was done for. But a recent development in their latest AI model has stunned mathematics experts and naysayers alike. So, what's the bottom line? Has OpenAI built itself too fast, too soon, to the point of potentially crumbling? Or are they already too big to fail?

You are watching Cold Fusion TV. In just a few short years, OpenAI went from a bold experiment to one of the most powerful players in artificial intelligence. ChatGPT's place in the modern zeitgeist can't be overstated. It was the fastest-growing platform in history, reaching one million users in just five days.

In fact, even Balaji's tweets before his passing mentioned that his takes were not necessarily a critique of ChatGPT or OpenAI per se. According to recent data, ChatGPT now has more than 300 million weekly users. But rapid success often comes with growing pains, and OpenAI is feeling them hard.

Let's talk about the elephants in the room. There's something going on with OpenAI staff and internal teams. Remember late last year when Sam Altman was ousted by the board? It felt like a Silicon Valley soap opera. The chaos then led to a dramatic employee revolt, forcing Altman's return.

And now, in 2024, the company has seen some high-profile exits, starting with Ilya Sutskever, a co-founder and AI legend. His departure from the company earlier this year raised some eyebrows for sure. When Mira Murati, the company's visionary CTO, also left, it raised a few questions.

In fact, the departures were soon followed by strikes that reflected deeper frustrations within the company. The strikes weren't just about paychecks; they revealed a culture under strain. Employees cited concerns over leadership, transparency, and the future direction of the company. Issues like this within a company are much harder to resolve.

The staff and company culture are the DNA. All of this going on behind the scenes is in stark contrast to the unified front OpenAI used to project. OpenAI's financial story is as dramatic as its rise. According to a few sources, it costs $700,000 per day to keep ChatGPT chugging along.

OpenAI has reportedly invested close to $7 billion on training ChatGPT and another $1.5 billion on staffing. Despite generating an impressive $3.7 billion in revenue, the company expects a $5 billion loss this year. And that's not even the worst projection. Other analysts predict that annual losses could balloon to $14 billion by 2026.

However, they recently raised $6.6 billion in funding, the largest venture capital round in history. But the way that they're burning through cash may not be sustainable in the midterm, especially as AI competition grows. As other options become available, users may switch, cutting into OpenAI's revenue.

Generative AI today is not the shiny new toy it was two years ago. We're now in the age of AI fatigue, where artificial intelligence is stale and no one really cares anymore. This is in part thanks to competitors like Meta or even small startups catching up. OpenAI's newest model, Sora, was supposed to be a game changer, but it's fallen short of expectations.

Sora is a text-to-video generator, and it had all the promise in the world. Demos showcased stunning imagery never seen before from an AI system, but there was a long delay between the announcement and release. In that time, competition from Chinese competitors took some of the wind out of Sora's sails.

Then, in a protest move by artists who had early access, access to the Sora API was leaked before release. When the time finally came for Sora's public release, the results weren't as good as first imagined. For example, tech reviewer Marques Brownlee, who had early access, acknowledged its creative potential but flagged issues like physics glitches, unnatural movement, and objects disappearing mid-frame.

He also raised an eyebrow over Sora's training data when he asked the system to make a video of a tech reviewer. The output featured pretty much exactly the same plant prop that he had used in his own videos. OpenAI says that Sora uses publicly available videos but hasn't confirmed whether YouTube was included, leading Marques Brownlee to ask, "Were my videos used in the training data? Are my videos in that source material? Is this exact plant part of the source material? Is it just a coincidence? I don't know."

Meanwhile, YouTube themselves are investigating but have yet to provide answers. To make matters worse for Sora, Google unveiled their own video generator, Veo 2. For many, its results have blown Sora out of the water.

And that's not to mention the slew of other video generators coming out. So it's clear to see the shine on OpenAI slowly fading. Overall, though, the novelty of generative AI is wearing off, and the fight to stay relevant is fiercer than ever.

In 2015, the company was founded as a non-profit, with the goal of building safe and beneficial artificial intelligence for the benefit of humanity. But now, in 2024, the plan is to restructure its core business into a for-profit benefit corporation that is no longer controlled by its non-profit board.

According to sources speaking with Reuters, the OpenAI nonprofit will continue to exist and own a minority stake in the for-profit company. Sam Altman will receive equity for the first time in the for-profit company, which could be worth $150 billion after restructuring as it also tries to remove the cap on returns for investors.

Elon Musk, who was part of OpenAI's founding team, recently asked the courts to block OpenAI from converting to a for-profit. But the most surprising part is that all of this led to the unlikeliest joining of forces to stop OpenAI from becoming a for-profit company. Meta and Elon Musk are teaming up to stop the move.

It was only yesterday that the CEOs were working to fight each other in the ring. Oh, how they've grown up. But if this isn't the sign that there's a new common enemy in the tech world, then I don't know what is.

OpenAI, however, has defended the shift. Not only that, but they've publicly released a timeline for alleged proof that Elon wanted OpenAI to be a for-profit company from the start. They also claim that the immense costs of AI research demand significant funding. It's a controversial strategy, and while it's helped OpenAI scale quickly, it's alienated some of its original supporters.

In December of 2024, when OpenAI revealed their latest model, o3, it broke through the skepticism that AI had hit a bottleneck. Its main feature is the ability to think for longer when answering questions that require sequential, step-by-step reasoning. This is what's known as a reasoning model, meaning it stops, thinks, and then fact-checks itself. It can solve math problems that would take a PhD mathematician hours or even days.

It's the step-function improvement the tech industry was waiting for. Released just three months after their last model, o1, this new version has improved remarkably. It's better at coding, maths, and even science.

Let's take a closer look. Its Codeforces rating implies that it exceeds 99% of human programmers. Its score of 2727 is equivalent to ranking around 175th among human programmers in global coding competitions. On PhD-level scientific questions, the GPQA benchmark, it scores 87.7%; PhD students generally score around 70%. On the hardest test, FrontierMath, it scores 25.2%, while all other AI models did not exceed 2%.

Maths genius Terence Tao said that this test could, quote, stump AI for years, but that seems to be holding a bit less true now. These kinds of questions are the ones that top mathematicians could work on for hours or even days for just a single problem.

Most experts wouldn't be able to get the majority of these questions right. In the ARC-AGI test, o3 scored 87.5%, whereas o1 scored 25% and GPT-4o scored just 5%. You might be thinking this isn't that impressive; it's just finding examples within its training dataset and leeching off that. Well, that's not true in this case.

The questions aren't in the training data. In order to do well, it must reason and come up with new solutions, a fact which many AI skeptics didn't think could happen. But there are two caveats here. One, somehow o3 still trips up on simple logic questions that a five-year-old could probably solve. And two, whether it's DALL·E, ChatGPT, voice mode, or Sora, what OpenAI hypes up and promises seems to arrive much milder and less capable upon release.

In addition to this, competition from the likes of Google isn't going to stand still. The death of Suchir Balaji has intensified the scrutiny of OpenAI's internal dynamics and ethical standards. While it's unclear whether his whistleblowing directly influenced the company's current struggles, his passing casts a significant shadow over OpenAI's commitment to transparency and ethical AI development.

As we move forward, it's clear that the many moving pieces for OpenAI, coupled with a run of seemingly negative outcomes, have led many to believe that the company is heading towards a crisis of some sort. But on the flip side, the lawsuits, controversies, and poor financial report card haven't entirely stopped the company.

In fact, The Information reports that investors are hanging on for the ride despite everything. There's a potential future funding round which could value the company at $150 billion, and after all, they still have the top brand recognition in the AI space.

Investors still love them despite a tough year. So, at this point, could anything sway OpenAI's trajectory?

Hate to sound cliché, but like many things, we're just going to have to wait and see how everything plays out.

So, what are your thoughts on this? Feel free to comment below.

These days everyone's talking about fake news. Getting informed is more complicated than ever before, but who's doing something about it?

Well, today's sponsor, Ground News is doing just that. Ground News is a website and app developed by a former NASA engineer. She wanted to give readers an easy, data-driven, objective way to read the news, and as time goes on, this approach is proving more useful.

Using the story of OpenAI and Microsoft being sued by Elon Musk as an example, their bias distribution chart shows me the political leaning of those outlets, and I can even get a summary of how the issue is being framed.

In this example, the left focuses on Musk's issue with OpenAI straying from its initial non-profit vision. There's also a focus on the emotional stakes that Musk has in OpenAI's future. Centrist outlets tend to stick to the straight facts in terms of legal and corporate actions.

The right, on the other hand, discusses Musk's intent to ensure that OpenAI upholds its founding mission. They also mention Musk's legal and moral standing. Scrolling down, I can compare every single article on this topic with convenient tags showing me context about the source, like how factual it is and who owns it.

Ground News' Blindspot feed is also great. This shows you stories that are underreported by one side of the political spectrum. For example, if you were on the right, you may have missed the story on a study claiming that airborne microplastics could cause lung and colon cancer.

Ground News is a fantastic tool for getting international views, sifting through misinformation, and identifying media bias. They provide all the tools you need to be a critical thinker.

For the holiday season, Ground News is offering 50% off. To get started, go to ground.news/ColdFusion.

Thank you to Ground News for supporting the channel.

Anyway, that's about it from me. Thanks for watching. My name is Dagogo, and you've been watching ColdFusion. If you're new to ColdFusion, you can subscribe if you like. Or not, you don't have to.

But anyway, that's all from me. My name is Dagogo, and you've been watching ColdFusion.

And I'll catch you again soon for the next episode. Cheers, guys. Have a good one.

Cold Fusion. It's new thinking.