
Algolux’s updated embedded perception software for ADAS and autonomous vehicles

Algolux announced the next generation of its Eos Embedded Perception Software at the Embedded Vision Summit. This new release delivers best-in-class accuracy and scalability benefits for teams developing vision-based Advanced Driver-Assistance Systems (ADAS), Autonomous Vehicle (AV), Smart City, and Transportation applications.

Algolux developed a re-optimized end-to-end architecture and a novel AI deep learning framework. Together, these reduce model training cost and time by orders of magnitude and remove sensor and processor lock-in, something not possible with today’s learning methods.

Roadblocks to autonomy and industry growth

The automated driving and fleet management markets are expected to collectively reach over $145B by 2025 (1)(2), and they rely on accurate perception technologies to achieve that growth. But current “good enough” vision systems and misleading autonomy claims have led to vision failures, unexpected disengagements, and crashes. Recent AAA studies show that ADAS features such as pedestrian detection are unreliable, especially at night or in bad weather, when most fatalities occur, and that some systems designed to help drivers actually do more to interfere with them.

Groundbreaking AI framework provides scalability and reduced design costs

Algolux’s novel AI deep learning framework significantly improves robustness and saves hundreds of thousands of dollars in training data capture, curation, and annotation costs per project, while quickly enabling support for new camera configurations.

The framework uniquely enables end-to-end learning of computer vision systems, effectively turning them into “computationally evolving” vision systems through computational co-design of imaging and perception, an industry first. The approach allows customers to easily adapt their existing datasets to new system requirements, enabling reuse and reducing effort and cost compared with existing training methodologies. The sensing and processing pipeline is included in the domain adaptation, so typical edge cases are addressed in the camera design ahead of the downstream computer vision network.
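As a rough illustration of the dataset-reuse idea, the sketch below re-renders existing annotated images through a toy parametric camera model so that their labels carry over to a new sensor configuration. The function name, parameters, and values are hypothetical assumptions chosen for illustration; they do not describe Algolux’s actual framework.

```python
# Hypothetical sketch: re-render an existing annotated dataset through a toy
# parametric camera model so the same labels can be reused for a new camera
# configuration. All parameter names and values are illustrative assumptions.
import torch

def rerender_for_new_camera(images: torch.Tensor,
                            gain: float = 0.7,
                            gamma: float = 1.8,
                            noise_std: float = 0.02) -> torch.Tensor:
    """images: (N, 3, H, W) tensor in [0, 1] from the original dataset."""
    out = torch.clamp(images * gain, 0.0, 1.0) ** gamma   # exposure + tone curve
    out = out + noise_std * torch.randn_like(out)          # target-sensor noise
    return torch.clamp(out, 0.0, 1.0)

# The original annotations (boxes, lane labels, etc.) remain valid because the
# scene geometry is unchanged; only the image formation of the new sensor is
# simulated, so no new capture or annotation campaign is needed.
existing_images = torch.rand(8, 3, 128, 128)               # stand-in dataset
adapted_images = rerender_for_new_camera(existing_images)
```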

Expanded portfolio delivers industry-leading robustness across all perception tasks

Eos delivers a full set of highly robust perception components addressing individual NCAP requirements, L2+ ADAS, and higher levels of autonomy, from highway autopilot and autonomous valet (self-parking) to L4 autonomous vehicles, as well as Smart City applications such as video security and fleet management. Key vision features include object and vulnerable road user (VRU) detection and tracking, free space and lane detection, traffic light state and sign recognition, obstructed sensor detection, reflection removal, multi-sensor fusion, and more.

The Eos end-to-end architecture combines image formation with vision tasks to address the inherent robustness limitations of today’s camera-based vision systems. Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather, as benchmarked by leading OEM and Tier 1 customers against state-of-the-art public and commercial alternatives. In addition, Eos has been optimized for efficient real-time performance across common target processors, providing customers with their choice of compute platform.
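To make the joint-training pattern concrete, here is a minimal, self-contained sketch of the general idea: a small differentiable image-processing stage and a downstream vision network are optimized together, so a single task loss backpropagates through both. The modules, shapes, and hyperparameters are placeholder assumptions and do not represent the Eos architecture itself.

```python
# Minimal sketch of jointly training image processing and a vision task.
# This is a generic illustration of the pattern, not Algolux's Eos pipeline;
# the modules, shapes, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

class ToyISP(nn.Module):
    """Differentiable stand-in for the image-processing stage."""
    def __init__(self):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(1))   # learnable exposure gain
        self.gamma = nn.Parameter(torch.ones(1))  # learnable tone-curve exponent

    def forward(self, raw):
        return torch.clamp(self.gain * raw, 1e-6, 1.0) ** self.gamma

class ToyClassifier(nn.Module):
    """Placeholder downstream vision network (e.g., VRU vs. background)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

isp, task = ToyISP(), ToyClassifier()
optimizer = torch.optim.Adam(list(isp.parameters()) + list(task.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One end-to-end step: the task loss updates both the vision network and the
# image-processing parameters, which is the essence of co-designing the two.
raw_batch = torch.rand(4, 3, 64, 64)       # stand-in for raw sensor frames
labels = torch.randint(0, 2, (4,))         # stand-in annotations
loss = criterion(task(isp(raw_batch)), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real system the image-processing stage would model the full sensing pipeline (optics, demosaicing, denoising, tone mapping), but the training pattern, one loss flowing back through both stages, is the same.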
