Helm.ai unveils perception simulation for ADAS

REDWOOD CITY, Calif.–(BUSINESS WIRE)– Helm.ai, a company that develops advanced AI software for driver assistance systems, announced the launch of neural network-based, high-fidelity virtual scenario generation models for perception simulation. The new technology enhances the company’s suite of AI software solutions for developing high-end ADAS (Levels 2 and 3) and Level 4 autonomous driving systems.

The company developed its generative simulation models by training neural networks on large-scale image datasets. The models can generate highly realistic images of virtual driving environments while varying parameters such as illumination and weather conditions, time of day, geographical location, highway versus urban scenarios, road geometry, and road markings. Each generated image carries accurate labels for the surrounding agents, obstacles, and other aspects of the driving environment, including pedestrians, vehicles, lane markings, and traffic cones, giving a complete description of the simulated scene. The resulting labeled synthetic image data can be used for large-scale training and validation, and is especially useful for resolving rare corner cases.
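The announcement does not describe a programming interface, but a minimal Python sketch can make the key property of this data concrete: because the scenes are generated, their labels are known by construction, with no manual annotation step. All names below (SceneParams, generate_frame, and so on) are hypothetical stand-ins for illustration, not Helm.ai's actual API.

```python
from dataclasses import dataclass, field
from random import Random
from typing import List, Tuple

# Hypothetical scene parameters mirroring the variations described above
# (illumination/weather, time of day, locale, road geometry). These names
# are illustrative assumptions; Helm.ai has not published an interface.
@dataclass
class SceneParams:
    weather: str = "clear"           # e.g. "clear", "rain", "fog", "snow"
    time_of_day: str = "noon"        # e.g. "dawn", "noon", "dusk", "night"
    locale: str = "highway"          # e.g. "highway", "urban"
    road_geometry: str = "straight"  # e.g. "straight", "curve", "intersection"

@dataclass
class LabeledObject:
    cls: str                         # "pedestrian", "vehicle", "traffic_cone", ...
    bbox: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels

@dataclass
class SyntheticFrame:
    params: SceneParams
    objects: List[LabeledObject] = field(default_factory=list)

def generate_frame(params: SceneParams, seed: int) -> SyntheticFrame:
    """Stand-in for the generative model: the generator decides what is in
    the scene, so ground-truth labels come for free with every frame."""
    rng = Random(seed)
    x = rng.randint(0, 1000)
    return SyntheticFrame(params, [
        LabeledObject("vehicle", (x, 300, x + 180, 460)),
        LabeledObject("traffic_cone", (x + 220, 420, x + 250, 480)),
    ])

# Sweep parameter combinations that are rare in real-world logs
# (e.g. fog at dusk on a curve) to target corner cases directly.
corner_cases = [
    SceneParams(weather="fog", time_of_day="dusk", road_geometry="curve"),
    SceneParams(weather="rain", time_of_day="night", locale="urban"),
]
dataset = [generate_frame(p, seed=i) for i, p in enumerate(corner_cases)]
for frame in dataset:
    print(frame.params.weather, [obj.cls for obj in frame.objects])
```

The design point the sketch captures is that varying the scene parameters sweeps the long tail of conditions directly, rather than waiting for those conditions to appear in collected road data.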

Users can provide text- or image-based prompts to generate high-fidelity driving scenes that replicate real-world encounters, or to create entirely synthetic environments. These AI-based simulation capabilities enable scalable training and validation of robust perception software for autonomous systems. Simulation is crucial in the development and validation process for ADAS and autonomous driving systems, particularly for rarely occurring corner cases such as difficult lighting conditions, complicated road geometries, encounters with unusual obstacles (for example, animals or flipped-over vehicles), and specific object configurations such as a bicyclist partially occluded by a vehicle.
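As a concrete illustration of the prompt-driven workflow, the corner cases named above could be expressed as one-line requests. Helm.ai has not published such an interface, so the request shape and field names below are assumptions made for this example only.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical request object for prompt-based scene generation; the article
# describes text- and image-based prompting but no public API, so every name
# here is an illustrative assumption rather than Helm.ai's actual interface.
@dataclass
class GenerationRequest:
    prompt: str                            # free-form text description of the scene
    reference_image: Optional[str] = None  # optional image prompt (file path)
    num_frames: int = 1

def corner_case_requests() -> List[GenerationRequest]:
    """Express the corner cases named in the article as text prompts."""
    return [
        GenerationRequest(
            "bicyclist partially occluded by a parked vehicle, urban street, dusk"),
        GenerationRequest(
            "flipped-over vehicle blocking the right lane, highway, heavy rain"),
        GenerationRequest(
            "animal crossing a rural road at night, low-beam headlights",
            num_frames=4),
    ]

for req in corner_case_requests():
    print(f"{req.num_frames} frame(s): {req.prompt!r}")
```

Treating each rare scenario as a short prompt, rather than a hand-built 3D asset, is what would make this kind of targeting scale across many corner cases.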

Neural network-based simulation developed by Helm.ai offers significant advantages over traditional physics-based simulators, particularly in scalability and extensibility. Physics-based simulators are limited by the complexity of accurately modeling physical interactions and realistic appearances. Generative AI-based simulation instead learns directly from real image data, allowing highly realistic appearance modeling and rapid asset generation from simple prompts, and it scales to accommodate diverse driving scenarios and operational design domains (ODDs). Helm.ai's generative simulation models can be extended to construct any object class or environmental condition, enabling a wide variety of driving environments tailored to the specific development and validation requirements of automakers.

"Generative simulation provides a highly scalable and unified approach to the development and validation of robust high-end ADAS and L4 autonomous driving systems," said Helm.ai's CEO and founder, Vladislav Voroninski.

"Our models, trained on extensive real-world datasets, accurately capture the complexities of driving environments. This milestone in generative simulation is important for developing and validating production-grade autonomous driving software, particularly for addressing the critical tail end of rare corner cases. We're excited to pave the way for AI-based simulation for autonomous driving."

Helm.ai closed a $55 million Series C funding round in August 2023. The round was led by Freeman Group and included investments from venture capital firms ACVC Partners and Amplo, alongside strategic investments from Honda Motor, Goodyear Ventures and Sungwoo Hitech. With this financing, Helm.ai's total funding raised to date has reached $102 million.