Autonomous vehicle manufacturers can use generative AI to extract more value from the vast amounts of data collected during road operations.
These advancements give automotive manufacturers a scalable, cost-efficient way to augment existing datasets and address the challenges posed by rare edge cases in the development of autonomous driving technologies.
Helm.ai has introduced GenSim-2, a new model that enables automotive manufacturers to create highly realistic, customizable video data tailored to specific requirements, accelerating the development of robust autonomous driving systems. Built with Helm.ai's proprietary Deep Teaching methodology and deep neural networks, GenSim-2 expands on the capabilities of its predecessor, GenSim-1.
Headquartered in Redwood City, California, the company partners globally on projects that drive innovation and growth.
Helm.ai offers a range of generative AI-powered products.
Using GenSim-2, development teams can adjust weather and illumination settings to mimic conditions such as rain, fog, snow, glare, and specific times of day (daytime, nighttime) within video content. Helm.ai says the model supports both augmenting real-world video footage with AI-generated modifications and generating entirely synthetic video scenes.
The simulator allows for extensive customization, enabling users to modify the visual appearance of various elements, such as road surfaces, vehicles, pedestrians, buildings, vegetation, and other roadside features like guardrails. Transformations can be applied consistently across multiple camera views to enhance realism and ensure consistency throughout the dataset.
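As a purely illustrative sketch (not Helm.ai's actual interface), the snippet below shows one way such scene-conditioning parameters, weather, time of day, and per-object appearance edits applied across several cameras, could be organized in code. All class names and fields are hypothetical.

```python
# Hypothetical illustration only -- not Helm.ai's actual API.
# Sketches how scene-conditioning parameters for generative video
# augmentation might be expressed as a structured request.
from dataclasses import dataclass, field


@dataclass
class SceneConditions:
    """Weather and illumination settings applied to a source clip."""
    weather: str = "clear"        # e.g. "rain", "fog", "snow"
    glare: bool = False
    time_of_day: str = "daytime"  # or "nighttime"


@dataclass
class AppearanceEdits:
    """Per-class appearance overrides, keyed by object category."""
    overrides: dict = field(default_factory=dict)  # e.g. {"road_surface": "wet asphalt"}


@dataclass
class AugmentationRequest:
    """A single augmentation job over real footage or a fully synthetic scene."""
    source_clip: str              # path to real-world footage
    cameras: list                 # camera views that must stay mutually consistent
    conditions: SceneConditions = field(default_factory=SceneConditions)
    appearance: AppearanceEdits = field(default_factory=AppearanceEdits)


# Example: turn a clear daytime multi-camera clip into a rainy night scene.
request = AugmentationRequest(
    source_clip="drive_0142.mp4",
    cameras=["front", "left", "right"],
    conditions=SceneConditions(weather="rain", time_of_day="nighttime"),
    appearance=AppearanceEdits(overrides={"road_surface": "wet asphalt"}),
)
print(request)
```

The point of the structure is simply that the same conditions and appearance edits are declared once and applied to every listed camera view, which is what keeps a multi-camera dataset consistent.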
“The flexibility to manipulate video data at this unprecedented level of control and realism represents a significant milestone in the development of generative AI-based simulation technology,” said Vladislav Voroninski, CEO and founder of Helm.ai. “GenSim-2 empowers automakers with cutting-edge tools to generate high-quality, consistently labeled data for training and validation, seamlessly integrating simulation and real-world scenarios to accelerate development cycles and reduce costs.”
Helm.ai says GenSim-2 addresses industry challenges by offering a practical alternative to time- and resource-intensive traditional data collection. The ability to create and adapt scenario-specific video content supports the development and validation of autonomous driving software across diverse geographic locations, while also addressing rare and challenging edge cases.
In October, Helm.ai released VidGen-2, a generative AI-based tool for autonomous driving development. VidGen-2 produces photorealistic video simulations with sophisticated 3D scene rendering capabilities.
The newer model offers double the resolution of its predecessor, VidGen-1, improved realism at 30 frames per second, and multi-camera support with twice the resolution per camera.
Helm.ai also provides a generative AI foundation model that simulates the entire autonomous vehicle stack, enabling end-to-end testing and validation of self-driving systems. The company says it develops predictive models that simulate realistic driving scenes and driver behavior across multiple sensor modalities and viewpoints.
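To make the end-to-end testing idea concrete, here is a minimal, purely illustrative sketch of a closed-loop rollout between a stand-in world model and a driving stack under test. None of the function names, data formats, or numbers come from Helm.ai; they only show the general shape of simulating sensor observations and feeding them back through the software being validated.

```python
# Hypothetical sketch only -- not Helm.ai's actual foundation model or API.
# Closed-loop simulation: a world model predicts the next multi-sensor
# observations given the vehicle's controls, and the driving stack under
# test consumes those observations to produce the next controls.
import random


def world_model_step(state, controls):
    """Stand-in world model: advances a toy scene and returns multi-sensor observations."""
    state["t"] += 1
    state["speed"] = max(0.0, state["speed"] + controls["accel"])
    return {
        "camera_front": f"frame_{state['t']:05d}",      # placeholder for a generated video frame
        "lidar": [random.random() for _ in range(8)],   # placeholder point-cloud features
        "speed": state["speed"],
    }


def driving_stack(observation):
    """Stand-in for the self-driving software being tested end to end."""
    accel = 0.5 if observation["speed"] < 15.0 else 0.0
    return {"accel": accel, "steer": 0.0}


# Alternate between the simulated world and the stack under test for a short rollout.
state = {"t": 0, "speed": 0.0}
controls = {"accel": 0.0, "steer": 0.0}
for _ in range(30):
    obs = world_model_step(state, controls)
    controls = driving_stack(obs)
print("final simulated speed:", round(state["speed"], 1))
```

In a real setup the stand-in world model would be replaced by a learned generative model producing realistic camera, lidar, and other sensor streams, which is what allows the full stack to be exercised without collecting new road data.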