Kevin Scott, Microsoft's Chief Technology Officer, recently shared encouraging news for anyone waiting on Nvidia's AI GPUs. Speaking at the Code Conference in Dana Point, California, Scott highlighted the improving availability of these critical components for artificial intelligence and high-performance computing workloads.
Scott acknowledged the challenges faced just a few months ago, stating, “Demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce.” However, he reassured the audience that the situation is steadily improving. “It’s still tight, but it’s getting better every week, and we’ve got more good news ahead of us than bad on that front, which is great,” he added.
This comes as a relief for many, especially in light of the unprecedented demand for Nvidia's GPUs. Major tech companies, including Microsoft, have built generative AI into their offerings, leaning heavily on Nvidia's compute GPUs to train large language models and run inference. The surge in demand led to a notable supply crunch, causing delays and backlogs.
Microsoft's reliance on these chips contributed significantly to Nvidia's remarkable financial performance in 2023, a year in which Nvidia's stock price climbed roughly 190% and its revenue surged, underscoring how central AI-driven technologies have become to the industry.
Scott, who oversees GPU allocation at Microsoft, noted that the job had been daunting in recent quarters because of the scarcity of these components. He expressed optimism about the future, however, saying that the availability of compute GPUs is steadily improving. That shift matters as AI workloads continue to grow and demand ever more computational power.
Nvidia, for its part, has committed to addressing the supply challenge. During a recent earnings call, the company said it intends to increase supply through the coming year, and Colette Kress, Nvidia's Chief Financial Officer, reiterated the company's commitment to meeting the escalating demand.
Meanwhile, Microsoft's Azure platform, a major player in cloud computing, provides the cloud infrastructure for OpenAI, the maker of ChatGPT. That partnership underscores the symbiotic relationship between the two companies and their shared reliance on cutting-edge hardware for AI.
While rumors have circulated about Microsoft developing its own custom silicon for AI workloads, Scott refrained from confirming or denying the speculation. He did, however, emphasize Microsoft's substantial investment in silicon over the years, noting that Nvidia has remained the preferred choice for Microsoft's AI needs thanks to its performance and reliability.
“I am not confirming anything, but I will say that we have got a pretty substantial silicon investment that we have had for years,” Scott disclosed. “And the thing that we will do is we will make sure that we are making the best choices for how we build these systems, using whatever options we have available. And the best option that’s been available during the last handful of years has been Nvidia.”
Scott's remarks at the Code Conference offer a glimmer of hope for those seeking Nvidia's AI GPUs. The improving availability of these crucial components not only signals progress but also underscores their pivotal role in shaping the future of artificial intelligence and high-performance computing. With Nvidia's commitment to increasing supply and Microsoft's continued pursuit of cutting-edge technology, the industry appears poised for further advances in AI.