NVIDIA takes center stage at CES, unveiling updated Omniverse capabilities, the new Cosmos world foundation models, and more.

Human workers, robots, and agent-driven systems operating together in a digital twin of a production site. | Source: Accenture, KION Group.

NVIDIA CEO Jensen Huang made a series of keynote announcements at the 2025 Consumer Electronics Show (CES) in Las Vegas. The company unveiled the Mega Omniverse Blueprint for building digital twins of industrial robot fleets, integrated generative physical AI with Omniverse, introduced the Cosmos World Foundation Model platform, and rolled out updates to its Isaac platform.

NVIDIA is doubling down on its investments in these technologies, with a particular focus on generative AI advances that expand what robots can do. Alongside its new products, the California-based company announced that automakers such as Mercedes-Benz, Toyota, and Volvo are equipping their consumer and commercial vehicle fleets with NVIDIA's accelerated computing and AI.

NVIDIA also highlighted safety and cybersecurity milestones for its DRIVE Hyperion platform. The platform has earned certifications from TÜV SÜD and TÜV Rheinland, two authorities known for rigorous automotive-grade safety and cybersecurity assessments.

The company's “end-to-end” system comprises the DRIVE AGX system-on-a-chip (SoC) and reference board design, the NVIDIA DRIVE OS automotive operating system, a comprehensive sensor suite, and an integrated safety and Level 2+ driving software stack.

NVIDIA updates Omniverse

NVIDIA introduced Mega, an Omniverse Blueprint for building and testing digital twins of industrial robot fleets at scale.

Mega provides enterprises with a reference architecture built on NVIDIA accelerated computing, AI, and NVIDIA Omniverse technologies, the company said. It lets companies design and simulate digital twins of complex systems, such as the AI-powered "brains" that control robots or process large volumes of video data, making it easier to develop and test applications like AI agents and advanced automation.

NVIDIA said the blueprint is designed to handle enormous complexity and scale. The company claims it can bring software-defined capabilities to physical facilities, enabling continuous development, testing, optimization, and deployment.

By pairing Mega-driven digital twins with a central system that coordinates robot actions and sensor data, companies can deploy robots to optimize routes and tasks and improve operational efficiency, according to NVIDIA.

The blueprint is built around Omniverse application programming interfaces (APIs) that connect data among the various intelligent machines in a facility and allow large-scale, high-fidelity sensor simulations to be rendered concurrently. Within the digital twin, robots can then be tested across countless scenarios using AI and an NVIDIA-powered software-in-the-loop pipeline.
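To make the software-in-the-loop idea concrete, here is a minimal Python sketch of the pattern: the same control logic that would run on a physical robot is exercised against a simulated facility that supplies synthetic sensor data. The class names (SimulatedWarehouse, RobotBrain) are invented for illustration and are not part of Mega or the Omniverse APIs.

```python
# Hypothetical software-in-the-loop (SIL) test loop: the control code that
# would run on a real robot is exercised against a simulated facility.
# None of these classes are NVIDIA APIs; they only illustrate the pattern.
from dataclasses import dataclass
import random


@dataclass
class SensorFrame:
    position: tuple
    obstacle_ahead: bool


class SimulatedWarehouse:
    """Stand-in for a digital twin that feeds synthetic sensor data."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.robot_pos = (0.0, 0.0)

    def sense(self) -> SensorFrame:
        # Randomly report an obstacle to exercise the avoidance branch.
        return SensorFrame(self.robot_pos, self.rng.random() < 0.2)

    def apply(self, command: str) -> None:
        x, y = self.robot_pos
        if command == "forward":
            self.robot_pos = (x + 1.0, y)
        elif command == "turn":
            self.robot_pos = (x, y + 1.0)


class RobotBrain:
    """The control software under test, unchanged between sim and hardware."""

    def decide(self, frame: SensorFrame) -> str:
        return "turn" if frame.obstacle_ahead else "forward"


def run_scenario(steps: int = 100) -> tuple:
    world, brain = SimulatedWarehouse(), RobotBrain()
    for _ in range(steps):
        world.apply(brain.decide(world.sense()))
    return world.robot_pos


if __name__ == "__main__":
    # Many randomized scenarios can be run against the simulated facility
    # before the same RobotBrain is deployed to physical robots.
    print(run_scenario())
```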

NVIDIA also expanded its generative AI offerings with new models and blueprints that deepen Omniverse's integration into physical AI applications such as robotics, autonomous vehicles, and AI-assisted systems. The company says these additions accelerate every stage of building 3D environments for physical AI simulation.

“Physical AI is poised to transform the $50 trillion manufacturing and logistics sectors, ushering in a new era of efficiency and innovation.” According to Huang, everything that moves will eventually be robotic and infused with AI. “NVIDIA’s Omniverse digital twin platform, paired with its Cosmos physical AI capabilities, establishes a strong foundation for digitizing the world’s physical industries.”

Cosmos world foundation models aim to accelerate physical AI development

Companies including 1X, Agile Robots, Agility, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Uber, Virtual Incision, Waabi, and XPeng are among the first to adopt Cosmos. | Source: NVIDIA

As part of its Omniverse updates, NVIDIA also unveiled the Cosmos World Foundation Model platform.

The company says developing physical AI models is costly because it requires vast amounts of real-world data and testing. Cosmos World Foundation Models (WFMs) give developers a straightforward way to generate massive amounts of photorealistic, physics-based synthetic data to train and evaluate their existing models. Developers can also build custom world models by fine-tuning Cosmos WFMs for specific requirements.

NVIDIA notes that Cosmos' suite of open models lets developers customize WFMs with datasets suited to their target applications, such as video recordings of autonomous vehicle trips or robots navigating a warehouse.

The company says it built the world foundation models specifically for physical AI research and development. The WFMs can generate physics-based videos from a combination of inputs, including text, images, video, and data from robot sensors or motion capture systems.
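To illustrate the idea of a world model conditioned on mixed inputs, the toy sketch below turns a text prompt and optional context frames into a short synthetic clip. The interface and class names are hypothetical stand-ins, not Cosmos' actual API; the real models and fine-tuning tooling are distributed through the NVIDIA API catalog and NGC.

```python
# Illustrative only: a toy stand-in for a world foundation model that turns
# conditioning inputs (text prompt, prior frames, optional sensor data) into
# new video frames. The interface is invented and is not NVIDIA's Cosmos API.
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class Conditioning:
    prompt: str                      # e.g. "forklift crossing a warehouse aisle"
    context_frames: list = field(default_factory=list)
    sensor_log: Optional[dict] = None  # optional robot sensor / trajectory data


class ToyWorldModel:
    """Generates a short synthetic clip; a real WFM would be a learned model."""

    def __init__(self, height: int = 64, width: int = 64):
        self.shape = (height, width, 3)

    def generate(self, cond: Conditioning, num_frames: int = 8) -> list:
        rng = np.random.default_rng(0)
        frame = cond.context_frames[-1] if cond.context_frames else rng.random(self.shape)
        frames = []
        for _ in range(num_frames):
            # A real model would roll appearance and physics forward from the
            # conditioning; here we just perturb the previous frame.
            frame = np.clip(frame + 0.01 * rng.standard_normal(self.shape), 0.0, 1.0)
            frames.append(frame.copy())
        return frames


if __name__ == "__main__":
    clip = ToyWorldModel().generate(Conditioning(prompt="robot arm picks a box"))
    print(f"generated {len(clip)} frames of shape {clip[0].shape}")
```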

“The ChatGPT moment for robotics is coming.” Many developers working on robots and autonomous vehicles lack the expertise and resources to train their own models, according to Huang. “We created Cosmos to democratize physical AI and put general robotics in reach of every developer.”

NVIDIA said it plans to release the Cosmos models under an open model license to accelerate work in the robotics, AI, and autonomous vehicle communities. Developers can preview the first models on the NVIDIA API catalog, or access the full library of pre-trained models and fine-tuning frameworks through the NVIDIA NGC registry.

NVIDIA also updates Isaac

NVIDIA Isaac is a platform of accelerated libraries, application frameworks, and AI models that speeds up the development and deployment of AI-powered robots. It comprises four components: Isaac Sim, Isaac Lab, Isaac Manipulator, and Isaac Perceptor.

Isaac Sim is a reference application built on NVIDIA Omniverse that lets users design, simulate, and test AI-powered robots in photorealistic, physics-based virtual environments. Isaac Sim 4.5 will deliver several key enhancements, including:

  • A reference application template
  • Unified Robot Description Format (URDF) import and setup improvements that streamline bringing robots from various manufacturers into ROS-based applications (a minimal URDF example follows this list)
  • Improved physics simulation and modeling
  • New joint visualization tool
  • Simulation accuracy and statistics
  • NVIDIA Cosmos world foundation model support
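For readers unfamiliar with URDF, the snippet below parses a tiny, made-up robot description using only Python's standard library and lists its links and joints, the kind of structure a URDF importer has to read. It does not use Isaac Sim's importer itself.

```python
# Parse a minimal URDF (Unified Robot Description Format) document with the
# standard library and list its links and joints. The robot description is
# invented for illustration; importers like Isaac Sim's read the same format.
import xml.etree.ElementTree as ET

URDF_TEXT = """
<robot name="two_link_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
"""

robot = ET.fromstring(URDF_TEXT)
print("robot:", robot.attrib["name"])
print("links:", [link.attrib["name"] for link in robot.findall("link")])
for joint in robot.findall("joint"):
    parent = joint.find("parent").attrib["link"]
    child = joint.find("child").attrib["link"]
    print(f"joint {joint.attrib['name']} ({joint.attrib['type']}): {parent} -> {child}")
```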

Isaac Lab is a unified, open-source framework for robot learning and policy training. Built on NVIDIA Isaac Sim, it helps developers and researchers create intelligent, adaptable robots with robust, perception-enabled, simulation-trained policies. The latest version of Isaac Lab brings significant performance and usability upgrades, including tiled rendering and numerous quality-of-life improvements.

Isaac Manipulator, built on ROS 2, is a collection of NVIDIA-accelerated libraries, pre-trained AI models, and reference workflows. The new updates add reference workflows for pick-and-place and object following, letting users quickly get started on these foundational industrial robot arm tasks.

Finally, Isaac Perceptor is a set of libraries, AI models, and reference workflows built on ROS 2 for developing autonomous mobile robots (AMRs). It enables AMRs to perceive, localize, and operate effectively in unstructured environments such as warehouses and factories.

The latest Isaac Perceptor updates improve AMRs' environmental awareness and performance in fast-changing, dynamic settings. They add an end-to-end visual simultaneous localization and mapping (SLAM) reference workflow, new nvblox capabilities that use multiple cameras for 3D scene reconstruction with people detection and handling of dynamic scene elements, and improved 3D scene reconstruction running on multiple RGB-D cameras.
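As a rough illustration of why multi-camera fusion helps, the toy sketch below merges depth points observed from two simulated camera poses into one 2D occupancy grid, so cells seen by both cameras accumulate higher confidence. It is a conceptual example only and does not use the nvblox or Isaac Perceptor APIs.

```python
# Toy illustration of multi-camera depth fusion into a single 2D occupancy
# grid, a flattened version of the 3D reconstruction problem nvblox solves.
# Camera poses and data are made up; this is not an NVIDIA API.
import numpy as np

GRID = np.zeros((50, 50), dtype=np.int32)  # counts of "occupied" observations
CELL_SIZE = 0.1                            # meters per grid cell


def integrate(points_xy: np.ndarray, camera_pose_xy: np.ndarray) -> None:
    """Mark grid cells hit by depth points observed from one camera."""
    world = points_xy + camera_pose_xy        # naive pose transform (translation only)
    cells = np.floor(world / CELL_SIZE).astype(int)
    in_bounds = (cells >= 0).all(axis=1) & (cells < GRID.shape[0]).all(axis=1)
    cells = cells[in_bounds]
    np.add.at(GRID, (cells[:, 0], cells[:, 1]), 1)


# Two cameras observing the same obstacle from different positions.
obstacle = np.array([[2.0, 2.5], [2.1, 2.5], [2.2, 2.5]])
integrate(obstacle - np.array([0.0, 0.0]), camera_pose_xy=np.array([0.0, 0.0]))
integrate(obstacle - np.array([4.0, 0.0]), camera_pose_xy=np.array([4.0, 0.0]))

# Cells seen as occupied by both cameras are higher-confidence obstacles.
print("max observations in one cell:", GRID.max())
```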

