It is a moment drivers dread: a pedestrian suddenly steps into their path, leaving only a fleeting instant to react and avert catastrophe. Many modern vehicles feature digital camera systems that can detect such hazards and either alert the driver to take corrective action or intervene with automatic emergency braking. But these systems are not yet fast or reliable enough, and they will need substantial improvement before they can serve in autonomous vehicles, where no human is available to intervene.
Researchers at the University of Zurich’s Department of Informatics, led by Daniel Gehrig and Davide Scaramuzza, have combined a novel bio-inspired camera with artificial intelligence to create a system that detects obstacles around a vehicle much faster than current methods while requiring less computational power. Their study has recently been published.
Most current cameras are frame-based, capturing static images at regular intervals. Those used for driver assistance in vehicles typically capture 30 to 50 frames per second, and a trained artificial neural network identifies objects in the images – pedestrians, bicycles, and other vehicles – to enable safe navigation. But if something happens during the 20 to 30 milliseconds between two consecutive frames, the camera may see it a fraction too late. One could solve the problem by increasing the frame rate, says Daniel Gehrig, first author of the study, but that means more data to process in real time and more computational power.
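To make the stakes of that inter-frame gap concrete, here is a back-of-the-envelope sketch (our own illustration, not from the study) of how far a vehicle travels during the blind interval of a frame-based camera:

```python
# Illustrative arithmetic: the blind interval between frames and the
# distance a car covers in that time. Speeds and frame rates are the
# figures quoted in the article; the calculation itself is ours.

def blind_interval_ms(fps: float) -> float:
    """Time between consecutive frames, in milliseconds."""
    return 1000.0 / fps

def distance_travelled_m(speed_kmh: float, interval_ms: float) -> float:
    """Distance covered at a given speed during the interval, in metres."""
    return (speed_kmh / 3.6) * (interval_ms / 1000.0)

for fps in (30, 50):
    gap = blind_interval_ms(fps)
    d = distance_travelled_m(100, gap)  # assume 100 km/h highway speed
    print(f"{fps} fps -> {gap:.1f} ms gap, {d:.2f} m travelled at 100 km/h")
```

At 30 frames per second the camera is blind for roughly 33 ms at a stretch, during which a car at highway speed moves nearly a metre – enough for a pedestrian to appear seemingly out of nowhere.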
Event cameras are a recent innovation based on a different principle. Instead of capturing frames at a constant rate, their pixels independently register tiny, rapid changes in brightness and report them the moment they occur. “They have no blind spot between frames, which allows them to detect obstacles more quickly,” says Davide Scaramuzza, head of the Robotics and Perception Group. They are also called neuromorphic cameras because they mimic how human eyes perceive images. But they have shortcomings of their own: they can miss slow-moving objects, and their data are difficult to convert into the format used to train the AI algorithm.
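The event-camera principle can be sketched in a few lines of code. This is a hypothetical illustration of the idea, not the sensor’s actual interface: each pixel fires an “event” only when its brightness changes by more than a contrast threshold, instead of contributing to a full frame at fixed intervals.

```python
# Sketch of the event-camera idea (hypothetical code, our assumption of
# a minimal model): compare two brightness snapshots pixel by pixel and
# emit (x, y, polarity) events only where the change exceeds a threshold.

def events_between(prev, curr, threshold=0.15):
    """Return sparse (x, y, polarity) events for pixels whose
    brightness changed by more than `threshold`."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_prev, row_curr)):
            if abs(c - p) > threshold:
                events.append((x, y, +1 if c > p else -1))
    return events

# A static scene produces no data at all; a moving edge produces a
# sparse event stream instead of a full image.
static = [[0.5, 0.5], [0.5, 0.5]]
moved  = [[0.5, 0.9], [0.1, 0.5]]
print(events_between(static, static))  # no motion -> []
print(events_between(static, moved))   # [(1, 0, 1), (0, 1, -1)]
```

The sketch also hints at the drawbacks the article mentions: an object moving slowly enough that no pixel crosses the threshold generates no events, and the output is a stream of sparse tuples rather than the dense images a conventional neural network is trained on.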
Gehrig and Scaramuzza’s hybrid system combines the best of both approaches. It uses a standard camera that captures 20 frames per second – a relatively low frame rate compared to those currently used for such applications – whose images are analyzed by a convolutional neural network trained to recognize cars and pedestrians. The event camera’s data are fed to a different kind of AI system, an asynchronous graph neural network, which is particularly suited to analyzing three-dimensional data that change over time. Detections from the event camera are used to anticipate and complement those of the standard camera, boosting its effective speed. “The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would, while requiring the same bandwidth as a standard 50-frame-per-second camera,” says Daniel Gehrig.
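The division of labour in the hybrid scheme can be sketched as follows. This is a heavily simplified illustration under our own assumptions – the class, names, and update logic are hypothetical, not the authors’ implementation: slow but accurate detections from the frame-based network are carried forward, and the event stream updates object positions asynchronously in the gaps between frames.

```python
# Simplified sketch of the hybrid idea (hypothetical design, not the
# authors' code): frame detections anchor the state every 50 ms, and
# event-derived motion nudges it in between, standing in for the
# asynchronous graph neural network.

FRAME_PERIOD_MS = 50.0  # a 20 fps standard camera

class HybridTracker:
    def __init__(self):
        self.objects = {}  # object id -> (x, y) position estimate

    def on_frame(self, cnn_detections):
        """Every frame period: replace estimates with fresh, accurate
        detections from the frame-based convolutional network."""
        self.objects = dict(cnn_detections)

    def on_events(self, obj_id, dx, dy):
        """Between frames: update an estimate from event-derived motion,
        so obstacles are tracked with no blind interval."""
        if obj_id in self.objects:
            x, y = self.objects[obj_id]
            self.objects[obj_id] = (x + dx, y + dy)

tracker = HybridTracker()
tracker.on_frame({"pedestrian": (10.0, 5.0)})  # frame at t = 0 ms
tracker.on_events("pedestrian", 0.5, 0.0)      # events at t ~ 10 ms
tracker.on_events("pedestrian", 1.0, 0.0)      # events at t ~ 25 ms
print(tracker.objects["pedestrian"])           # (11.5, 5.0)
```

The payoff mirrors the quoted numbers: the expensive dense analysis runs only 20 times per second, while the cheap, sparse event updates fill in the intervals, giving frame-rate-equivalent responsiveness far beyond what the frame camera alone provides.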
The team evaluated their system against the best cameras and algorithms currently on the automotive market and found that it delivers detections 100 times faster while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the processing power needed to analyze the images, all without compromising accuracy. Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two consecutive frames of a standard camera, providing added safety for drivers and pedestrians alike – a difference that matters most at high speeds.
The researchers believe the method can be made even more effective in the future by combining such cameras with LiDAR sensors, like those used in self-driving cars. “Hybrid approaches such as these may prove crucial for enabling autonomous driving, ensuring safety without excessive data generation and computational demands.”