NVIDIA’s New LLM Puts Question Marks Over OpenAI’s Just-Secured $157 Billion Valuation
This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.
NVIDIA appears determined to corner every segment of the emerging AI-led paradigm shift in its quest to become the go-to provider of full-stack AI solutions. As a tangible demonstration of this strategy, look no further than the GPU manufacturer’s recent release of a dedicated, open-source Large Language Model (LLM), dubbed NVLM-D 1.0. Given the model’s near-parity with comparable proprietary offerings such as GPT-4o, investors must now ask: is OpenAI’s new $157 billion valuation justified?
OpenAI’s Cash-Rich Coffers
OpenAI has announced that it raised a whopping $6.6 billion in a new funding round that included the likes of SoftBank, Microsoft, and even NVIDIA. Critically, the new funding bestows a $157 billion valuation on the AI-focused enterprise.
However, in a move that is almost guaranteed to attract antitrust scrutiny, OpenAI has now bound its investors in an exclusive funding relationship that prevents them from plowing cash into rivals such as Anthropic and Elon Musk’s xAI.
OpenAI raised $6.6 billion at a $157 billion valuation, nearly doubling its previous $86 billion value, with ChatGPT now reaching 250 million weekly users (up from 200 million previously reported), and reportedly plans to let employees sell some of their shares https://t.co/C27Bif4C9v
— Tibor Blaho (@btibor91) October 2, 2024
OpenAI’s weekly active users have now jumped to 250 million, and the firm expects its revenue to triple in 2025 to a whopping $11.6 billion. This means the firm is currently valued at 13.53x its projected 2025 revenue, which is not exactly a bargain.
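For readers who want to verify that multiple, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited above (the variable names are illustrative, not OpenAI's own reporting):

<pre><code>
# Forward revenue multiple: post-money valuation divided by projected revenue.
valuation_usd_bn = 157.0       # OpenAI's reported post-money valuation ($157B)
revenue_2025_usd_bn = 11.6     # OpenAI's projected 2025 revenue ($11.6B)

forward_multiple = valuation_usd_bn / revenue_2025_usd_bn
print(f"Forward revenue multiple: {forward_multiple:.2f}x")  # -> 13.53x
</code></pre>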
NVIDIA’s NVLM-D 1.0 LLM
Even as OpenAI is attracting significant funding, its competition is growing by leaps and bounds. We reported last week that NVIDIA was gearing up to release a new LLM that leverages its Blackwell architecture’s 50x generational uplift in inference capability.
Nvidia releases NVLM-1.0-D-72B
frontier-class multimodal LLM with decoder-only architecture
SOTA results on vision-language and text-only tasks pic.twitter.com/n1iS4Ud9j2
— AK (@_akhaliq) October 1, 2024
Well, NVIDIA officially unveiled its NVLM-D 1.0 LLM yesterday. The 72-billion-parameter model offers near-parity in performance not only with Meta’s open-source, 405-billion-parameter Llama 3-V model, but also with proprietary, black-box models such as OpenAI’s GPT-4o.
I’m not sure if people realize how noteworthy the NVIDIA LLM (NVLM) news from yesterday is. (NVIDIA drops a new 70 billion parameter LLM).
If you look at the NVIDIA moat, this is just one more layer in the “AI full stack.” offering.
Add to the hardware, software, frameworks,… pic.twitter.com/BKHwjHZkeK
— Daniel Newman (@danielnewmanUV) October 2, 2024
So, the question emerges: if NVIDIA’s 72-billion-parameter model can effectively compete with much larger and more complex models such as Llama 3-V and GPT-4o, is OpenAI’s stratospheric valuation justified?
This question becomes all the more pressing when one considers NVIDIA’s already-established, mammoth user base and vibrant developer ecosystem, which all but guarantee the model’s adoption.