Pilot Systems has a long track record in Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). The company has helped universities test ADAS functions and integrate AD algorithms based on dSPACE's rapid prototyping platform MicroAutoBox. In a recent Small Business Innovation Research (SBIR) project, Pilot engineers analyzed how autonomous vehicles could navigate roundabouts. Roundabouts are a challenging environment because of their varying characteristics and construction details as well as mixed-traffic scenarios. The project focused on a simplified scenario and covered the perception, planning, and drive control of an autonomous car.
Safety is a key design goal of ADAS and self-driving systems. The developers of an ADAS function need to make sure that they have considered and analyzed all aspects of the problem and can provide measurable evidence that their function will be safe. Upcoming standards and regulations do not prescribe any particular method; they do, however, require that all aspects be addressed in a systematic way.
Advanced Driver Assistance Systems expose a large attack surface, and cybersecurity weaknesses could have devastating consequences. Automotive cybersecurity is not only a key requirement but a crucial prerequisite for safe ADAS and future self-driving cars. There will be no safety without cybersecurity.
Ensuring the cybersecurity of ADAS and AD is a challenging task, as any wireless interface is a potential attack vector. ADAS and especially self-driving systems are built from a mix of complex hardware and software, often integrating Vehicle-to-Everything (V2X) communication and back-end connectivity. Automotive OEMs and suppliers have created internal organizations to ensure the cybersecurity of their products, and recent regulations require the establishment of a cybersecurity management system to ensure the security of a product across its entire life cycle.
Pilot Systems conducted a study on ADAS and AD cybersecurity focusing on vulnerabilities in sensors and sensor fusion algorithms, as several white-hat attacks targeting the perception layer have been documented and published.
ADAS and self-driving cars deploy a wide range of sensors, such as ultrasonic, radar, camera, and LiDAR, and combine the sensor measurements to create an accurate representation of the vehicle's environment. Camera sensors are particularly widespread, often offering a 360-degree view of the vehicle's surroundings.
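To make the fusion idea concrete, here is a minimal sketch that combines two independent, noisy distance estimates of the same object by inverse-variance weighting, the core principle behind Kalman-filter-style fusion. The sensor names and noise values are assumed for illustration; this is a sketch of the concept, not a production design.

```python
# Minimal sensor fusion sketch (illustrative): two independent, noisy
# distance estimates of the same object are combined by inverse-variance
# weighting, so the more precise sensor dominates the fused result.

def fuse_estimates(z_radar: float, var_radar: float,
                   z_camera: float, var_camera: float) -> tuple[float, float]:
    """Fuse two scalar measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance;
    the fused variance is always smaller than either input variance.
    """
    w_radar = 1.0 / var_radar
    w_camera = 1.0 / var_camera
    fused = (w_radar * z_radar + w_camera * z_camera) / (w_radar + w_camera)
    fused_var = 1.0 / (w_radar + w_camera)
    return fused, fused_var

# Example with assumed values: radar is precise in range, camera less so.
distance, variance = fuse_estimates(z_radar=42.3, var_radar=0.04,
                                    z_camera=41.1, var_camera=1.0)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

Because the more confident sensor dominates the fused estimate, corrupting a single sensor's reading, or its reported confidence, can skew the entire environment model, which is precisely why the perception layer is an attractive attack target.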
However, camera sensors and many algorithms for image analysis, sensor fusion, and perception, especially those based on neural networks and deep learning, are vulnerable to manipulation and cyber-attacks.
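A canonical illustration of this fragility is the Fast Gradient Sign Method (FGSM), which crafts a barely visible image perturbation that can flip a classifier's output. The PyTorch sketch below shows the core of the technique; the model and the epsilon value are placeholders, and this is not the specific attack used in the research described next.

```python
# FGSM sketch: perturb each pixel a tiny amount in the direction that
# increases the classification loss. The perturbation is nearly invisible
# to a human yet can change the model's prediction.

import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```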
This was recently demonstrated by researchers from the University of Tübingen and the Max Planck Institute for Intelligent Systems. They showed that even subtle alterations, such as small paintings on a wall, noise sources, or stickers, could confuse the AI-based image analysis of autonomous cars. These modifications of the environment disrupted the optical flow analysis, making objects appear to move in the wrong direction. What is even more disturbing is that the changes could be so subtle that humans would not even notice them.
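The effect can be explored in miniature. The hypothetical snippet below pastes a small high-contrast patch into one video frame and measures how the estimated flow field changes; OpenCV's classical Farneback estimator stands in for the neural flow networks the researchers actually attacked, and the frame file names are placeholders.

```python
# Toy experiment: how does a small "sticker" distort an optical-flow field?
# cv2.calcOpticalFlowFarneback is a classical estimator used here only as
# a simple stand-in for neural flow networks; frame0/frame1 are placeholders.

import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

flow_clean = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)

# Paste a small high-contrast patch into the second frame only.
patched = curr.copy()
patched[100:140, 100:140] = 255

flow_attacked = cv2.calcOpticalFlowFarneback(prev, patched, None,
                                             0.5, 3, 15, 3, 5, 1.2, 0)

# Per-pixel magnitude of the change in the flow field. In the published
# attacks on neural flow networks, the corruption spread far beyond the
# patch itself; even here the local field is visibly disrupted.
delta = np.linalg.norm(flow_attacked - flow_clean, axis=2)
outside = np.ones_like(delta, dtype=bool)
outside[100:140, 100:140] = False
print(f"max flow change: {delta.max():.2f} px, "
      f"pixels noticeably affected outside the patch: {(delta[outside] > 0.5).sum()}")
```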
Other researchers demonstrated the vulnerability of camera-based ADAS in an adversarial attack where speed limit signs and road markings were manipulated. A few strokes were enough to make a 35-mph limit sign appear to show 85 mph. Some ADAS architectures were fooled and misinterpreted the speed information to be much higher than allowed; in a real-world scenario this could have caused dangerous speed adjustments. A human driver would most probably have ignored the false sign, interpreting it as an optical illusion, a prank, a mistake by road construction workers, or an intentional, malicious modification. This judgement would be based on meta information about the traffic scenario, taking into account the possibility of manipulated traffic signs. The perception mechanisms in ADAS, however, rely on the speed information on the sign being correct.
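The kind of meta-level plausibility reasoning a human applies can, at least in part, be made explicit in software. The sketch below cross-checks a camera-read speed limit against independent map context before accepting it; the names, thresholds, and map interface are illustrative assumptions, not a real ADAS API.

```python
# Hypothetical plausibility check: a camera-read speed limit is only
# accepted if it is consistent with independent context (map data, road
# class). All names and thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class MapInfo:
    legal_limit_mph: int   # speed limit from a map / navigation database
    road_class: str        # e.g. "residential", "highway"

def plausible_speed_limit(camera_mph: int, map_info: MapInfo,
                          tolerance_mph: int = 10) -> bool:
    """Reject camera readings that contradict independent context."""
    if abs(camera_mph - map_info.legal_limit_mph) > tolerance_mph:
        return False  # e.g. a "35" sign read as "85"
    if map_info.road_class == "residential" and camera_mph > 40:
        return False  # implausible for the road type, whatever the sign says
    return True

# The manipulated sign from the published attack would be filtered out:
print(plausible_speed_limit(85, MapInfo(legal_limit_mph=35,
                                        road_class="residential")))  # False
```

Such consistency checks do not solve the underlying perception problem, but they narrow the window in which a single manipulated sign can influence vehicle behavior.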
The affected manufacturers claim that the problem is solved in the current generation of cameras, but the hack reveals a much deeper structural problem of ADAS/AD and perception algorithms. There will always be scenarios that require a judgement call and a context switch between different levels of perception and interpretation. While humans are experts at dealing with such situations, AI-based systems struggle or fail entirely.
Areas of Expertise: Pilot Systems' HIL experts, cybersecurity specialists, and safety engineers look forward to speaking with you. Based on our experience with ADAS and AD and the recent study, we can provide valuable suggestions for hardening the AI-based sensing layer as part of a comprehensive safety analysis. A broad range of support is available, from requirements engineering of security functions, risk and threat analysis, and vulnerability assessments to code audits. We also help with security architecture for ADAS and AD functions and work on new ways to integrate HIL and security testing (security testing in the loop).
For more information, please contact us.