Nvidia's fourth-quarter earnings release on Wednesday was widely watched in the US tech community. Joshua Brustein writes today about the company that seems to hold the weight of the whole US AI industry on its shoulders. Plus: Central banks try their hand at social media, and matcha's supply chain is no match for its going viral. If this email was forwarded to you, click here to sign up.

It's been just over a month since the Chinese chatbot DeepSeek sent panic through the US artificial intelligence industry, and this week Wall Street looked to Nvidia Corp. for permission to declare that Silicon Valley's AI boom is still on.

Nvidia did its best, delivering a quarterly report Wednesday that showed it did a bit better than expected on fourth-quarter revenue and profit, with sales in the current quarter predicted to come in slightly above the average analyst forecast. Yet this was interpreted as a downer, and the stock slipped when markets opened.

Silicon Valley has operated for years under the assumption that advances in AI are directly tied to increases in computing power. DeepSeek showed, perhaps, that a guy who ran a random Chinese hedge fund could build a product on the cheap that competes with top US offerings. If that was the case, maybe devoting hundreds of billions of dollars to building AI data centers wasn't the best idea?

The most logical way to take the temperature on those concerns is to watch Nvidia. Because the $3 trillion chipmaker is by far the leading producer of the equipment needed to run AI data centers, its quarterly earnings have become a bellwether for the entire AI project. The DeepSeek freakout in late January caused Nvidia's market value to drop almost $600 billion in a single day. Although shares have partially recovered, there was nervousness about what the company would reveal Wednesday.

Huang delivers a keynote address at the Consumer Electronics Show in Las Vegas in January. Photographer: Patrick T. Fallon/AFP/Getty Images

There was never going to be a big connection between Nvidia's actual results and DeepSeek's product, given how recently it was released. But Nvidia Chief Executive Officer Jensen Huang clearly came prepared to soothe fears that DeepSeek might point to changes in the basic economics of AI. In short, he wasn't buying the negative assessments.

Huang praised the model powering the Chinese chatbot, just as he did after its release. But, he argued, its breakthrough in using less computing power in the initial step known as pretraining doesn't change the need for more power later in the process. In any case, Huang added, most of the computing costs actually come from inference, the stage where a model considers each subsequent prompt entered by users.

Nvidia is betting that AI will soon take on huge new applications such as warehouse automation and humanoid robots. "This is just the beginning," Huang said, adding that future models might require millions of times more computing power than today's do.

If those predictions were to come true, they'd bring along some daunting problems: even today's AI systems are straining existing power capacity. But weak demand for Nvidia's chips wouldn't be one of them.

Huang has been making such claims for some time, and it's fair to question his optimism that AI is on the verge of solving problems like robotics. But his attitude toward DeepSeek fits a pattern among US tech leaders, who have praised the technology and then interpreted its emergence not as a threat but as another sign that AI is on the march.

For now, at least, signs point to the AI infrastructure binge accelerating. DeepSeek's prospects as a Chinese consumer app unseating US incumbents, meanwhile, seem to be dimming: As of Thursday morning it had dropped to the 35th-ranked iPhone app in the US.