• Nvidia’s gaming past and mastering of the GPU made it well-positioned for the AI boom.
  • Its next market to corner is advanced robotics, which could pave the way for humanoids.
  • Technical hurdles could be a reality check to Jensen Huang’s robotics future.

Wearing his signature black leather jacket, Jensen Huang stretched out both arms, gesturing at the humanoid robots flanking him, and the audience applauded. “About my size,” he joked from the stage at Computex 2024 in Taipei, Taiwan, in June.

“Robotics is here. Physical AI is here. This is not science fiction,” he said. The robots, though, were flat, rendered on a massive screen. What actually rolled onto the stage were wheeled machines resembling delivery robots.

Robots are a big part of Huang’s vision of the future, which is shared by other tech luminaries, including Elon Musk. In addition to the Computex display, humanoid robots have come up on Nvidia’s last two earnings calls.

Most analysts agree that Nvidia’s fate is all but sealed for the next few years. Demand for graphics processing units has propelled it to a $3 trillion market capitalization, at least on some days. But the semiconductor industry is cruel. Investment in data centers, which account for 87% of Nvidia’s revenue, comes in booms and busts. Nvidia needs another big market.

At Computex, Huang said there would be two “high-volume” robotic products in the future. The first is self-driving cars, and the second is likely to be humanoid robots. Thanks to machine learning, the technologies are converging.

Both machines require human-like perception of fast-changing surroundings and instantaneous reactions with little room for error. They also both require immense amounts of what Huang sells: AI computing power. But robotics is a tiny portion of Nvidia’s revenue today. And growing it isn’t just a matter of time.

If Nvidia’s place in the tech stratosphere is to be permanent, Huang needs the market for robotics to be big. Though the story of Nvidia’s last few years has been one of incredible engineering, foresight, and timing, the challenge to make robots real may be even tougher.

How can Nvidia bring on the robots?

AI presents a massive unlock for robotics. But scaling the field means making the engineering and building more accessible.

“Robotic AI is the most complicated because a large language model is software, but robots are a mechanical engineering problem, a software problem, and a physics problem. It’s much more complicated,” said Raul Martynek, CEO of data center landlord Databank.

Most of the people working on robotics are experts with doctoral degrees in robotics because they have to be. The same was true of language-based AI 10 years ago. Now that foundation models and computing to support them are widely available, it doesn’t take a doctorate to build AI applications.

Layers of software and vast language and image libraries are intended to keep users sticky and to lower the barrier to entry so that almost anyone can build with AI.

Nvidia’s robotics stack needs to do the same, but since using AI in physical spaces is harder, making it work for laymen is also harder.

The Nvidia robotics stack takes some navigating. It’s a sea of platforms, libraries, and names.

Omniverse is a simulation platform. It offers a virtual world that developers can customize and use to test simulations of robots. “Isaac” is what Nvidia calls a “gym” built on top of Omniverse. It’s how you put your robot into an environment and practice tasks.
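The “gym” pattern Nvidia describes is the standard reinforcement-learning loop: an agent repeatedly observes a simulated environment, acts, and collects a reward. Here is a minimal sketch of that loop in plain Python, with a toy “raise the gripper to a target height” task standing in for a real physics simulation. None of these class or function names are Nvidia’s actual API; they are purely illustrative.

```python
import random

class LiftGripperEnv:
    """Toy stand-in for a simulated robot task (not the real Isaac API)."""

    def __init__(self, target_height=1.0):
        self.target_height = target_height
        self.reset()

    def reset(self):
        """Start a new episode with the gripper at the bottom."""
        self.height = 0.0
        self.steps = 0
        return self.height

    def step(self, action):
        """Apply an action (how far to raise the gripper this tick)."""
        self.height += action
        self.steps += 1
        # Reward is higher the closer we are to the target height.
        reward = -abs(self.target_height - self.height)
        done = self.steps >= 50 or abs(self.target_height - self.height) < 0.01
        return self.height, reward, done

env = LiftGripperEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    # A real trainer would use a learned policy here, not random actions.
    action = random.uniform(0.0, 0.1)
    obs, reward, done = env.step(action)
    total_reward += reward
```

In a platform like Isaac, the environment is a full physics simulation rather than a one-number toy, but the observe-act-reward structure of training is the same.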

“Jetson Thor” is Nvidia’s chip for powering robots. Project Groot, which the company refers to as a “moonshot” initiative, is a foundation model for humanoid robots. In July, the company launched a synthetic data generation service and “Osmo,” a software layer that ties it all together.

Huang often touts that humanoids are easier to build because the world is already made for humans.

“The easiest robot to adapt in the world are humanoid robots because we built the world for us,” he said at Computex. “There’s more data to train these robots because we have the same physique.”

Gathering data about how we move still takes time, effort, and money. Tesla, for example, is paying people $48 per hour to perform tasks in a special suit to train its humanoid, Optimus.

“That’s been the biggest problem in robotics — how much data is needed to give those foundational models an understanding of the world and adjust for it,” said Sophia Velastegui, an AI expert who’s worked for Apple, Google, and Microsoft.

But analysts see the potential. Research firm William Blair’s analysts recently wrote, “Nvidia’s capabilities in robotics and digital twins (with Omniverse) have the potential to scale into massive businesses themselves.” The analysts expect Nvidia’s automotive business to grow 20% annually through 2027.

Nvidia has announced that BMW uses Isaac and Omniverse to train factory robots. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, and Teradyne Robotics use Nvidia’s stack to build robot arms, humanoids, and other robots.

But three robotics experts told Business Insider that so far, Nvidia has failed to lower the barrier to entry for would-be robot builders as it has in language- and image-based AI. Competitors are moving in to try to open up the ideal stack for robotics before Nvidia can dominate that market too.

“We recognize that developing AI that can interact with the physical world is extremely challenging. That’s why we developed an entire platform to help companies train and deploy robots,” an Nvidia spokesperson told Business Insider via email.

In July, the company launched a humanoid robot developer program. After submitting a successful application, developers can access all of these tools.

Nvidia can’t do it alone

Ashish Kapoor is acutely aware of how much progress the field has yet to make. For 17 years, he was a leader in Microsoft’s robotics research department, where he helped develop AirSim, a computer-vision simulation platform launched in 2017 and sunsetted last year.

When the platform shut down, Kapoor left to build his own. Last year, he founded Scaled Foundations and launched Grid, a robot-development platform designed for aspiring robot builders.

No one company can solve the tough problems of robotics alone, Kapoor said.

“The way I’ve seen it happen in AI, the actual solution came from the community when they worked on something together. That’s when the magic started to happen, and this needs to happen in robotics right now,” Kapoor said.

It feels like every player aiming for humanoid robots is in it for themselves, Kapoor said. But there’s a robotics-startup graveyard for a reason: the robots get into real-world scenarios and are simply not good enough, and customers give up on them before they can improve.

“The running joke is that every robot has a team of 10 people trying to run it,” Kapoor said.

Grid offers a free tier and a managed service with more hands-on help. Scaled Foundations is building its own foundation model for robotics but also encourages users to develop their own.

Some elements of Nvidia’s robotics stack are open source. And Huang often touts that Nvidia is working with every robotics and AI company on the planet, but some developers fear the juggernaut will protect its own success first, and support the ecosystem second.

“They’re doing the Apple effect. To me, they’re trying to lock you in as much as they can into their ecosystem,” said Jonathan Stephens, chief developer advocate at computer vision firm EveryPoint.

An Nvidia spokesperson told BI that this perception is inaccurate. The company “collaborates with the majority of the leading players in the robotics and humanoid developer ecosystem,” to help them deploy robots faster. “Our success comes from the ecosystem,” they said.

Scaled Foundations and Nvidia aren’t the only ones working on a foundation model for robotics. Skild AI raised $300 million in July to build its version.

What makes a humanoid?

Simulators are an essential stop on the path to humanoid robots, but they don’t necessarily lead to human-like perception.

When describing a robotic arm at Computex, Huang said that Nvidia supplies “the computer, the acceleration layers, and the pre-trained AI models” needed to put an AI robot into an AI factory. The goal of using robotic arms in factories at scale has been around for decades. Robotic arms have been building cars since 1961. But Huang was talking about an AI robot — an intelligent robot.

The arms that build cars are largely unintelligent. They are programmed to perform repetitive tasks and often “see” with sensors instead of cameras.

An AI-enabled robotic arm would be able to handle varied tasks: picking up diverse items and putting them down in diverse places without breaking them, perhaps while on the move. It would need to perceive objects and guardrails and then move in a coherent order. But a humanoid robot is a world away from even the most useful non-humanoid, and some roboticists doubt that it’s the right target to aim for.
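The gap between the two kinds of arm can be sketched in a few lines: a classic programmed arm replays the same fixed waypoints on every cycle, while a perception-driven arm plans its path from what it currently sees. The functions and coordinates below are purely illustrative, not any vendor’s API.

```python
# A classic programmed arm: replays hard-coded waypoints, blind to its surroundings.
FIXED_WAYPOINTS = [(0.0, 0.0), (0.5, 0.2), (0.5, 0.8)]

def run_programmed_arm():
    """Returns the same motion plan on every cycle, regardless of the scene."""
    return list(FIXED_WAYPOINTS)

def run_perceiving_arm(object_xy, place_xy):
    """Plans from a perceived object position; object_xy stands in for
    camera-based perception, which a real system would compute from images."""
    home = (0.0, 0.0)
    return [home, object_xy, place_xy]  # the path depends on what the arm sees

# The programmed arm's plan never changes; the perceiving arm's plan
# changes whenever the object moves.
plan_a = run_perceiving_arm((0.3, 0.4), (0.9, 0.1))
plan_b = run_perceiving_arm((0.6, 0.2), (0.9, 0.1))
```

The hard part, of course, is everything hidden inside `object_xy`: turning camera frames into reliable object positions in cluttered, changing scenes is exactly where the AI comes in.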

“I’m very skeptical. The cost to make a humanoid robot and to make it versatile is going to be higher than if you make a robot that doesn’t look like a human and can only do a single task but does the task well and faster,” said a former Nvidia robotics expert with more than 15 years in the field, who asked to remain anonymous.

But Huang is all in.

“I think Jensen has an obsession with robots because, ultimately, what he’s trying to do is create the future,” Martynek said.

Gaming and graphics aren’t Nvidia’s future anymore; autonomous cars and robotics are. The company told BI it expects everything to be autonomous in the future, starting with robotic arms and vehicles and eventually extending to buildings and even cities.

“I was at Apple when we developed iPad inspired by Star Trek and other future worlds in movies,” Velastegui said. Robotics taps into our imagination, she explained.


