The missing piece for reliable autonomous driving systems: Considerations on AI safety
Imagine a world where you can sit back, relax, and let your car do the driving for you. Sounds amazing, right? Well, that future is approaching fast, with autonomous vehicles that promise to fundamentally change our mobility. But before we can fully embrace this technology, manufacturers must ensure that it is safe for everyone on the road. For now, that means closing gaps in current safety standards and introducing AI safety.
For autonomous dynamic driving tasks, vehicles must be able to correctly sense their environment, predict the behavior of other road users, and plan and execute safe actions in response. In this context, AI-based systems are required to mimic the cognitive abilities of a human driver and enable autonomous system behavior. Such systems are developed in a data-driven, iterative AI lifecycle.
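To make this sense-predict-plan loop concrete, here is a minimal, hypothetical sketch in Python. The object representation, the constant-velocity prediction, and the safety margin are illustrative assumptions for the example, not a production design:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrackedObject:
    object_id: int
    position: Tuple[float, float]  # (x, y) in metres, ego-relative
    velocity: Tuple[float, float]  # (vx, vy) in m/s

def sense(raw_frame: dict) -> List[TrackedObject]:
    """Perception: turn raw sensor data into tracked objects (stubbed)."""
    return [TrackedObject(oid, obj["pos"], obj["vel"])
            for oid, obj in enumerate(raw_frame.get("objects", []))]

def predict(objects: List[TrackedObject], horizon_s: float = 2.0) -> Dict[int, Tuple[float, float]]:
    """Prediction: constant-velocity extrapolation as a placeholder model."""
    return {o.object_id: (o.position[0] + o.velocity[0] * horizon_s,
                          o.position[1] + o.velocity[1] * horizon_s)
            for o in objects}

def plan(predictions: Dict[int, Tuple[float, float]], safety_margin_m: float = 5.0) -> str:
    """Planning: brake if any predicted position intrudes into the safety margin."""
    for x, y in predictions.values():
        if abs(x) < safety_margin_m and abs(y) < safety_margin_m:
            return "BRAKE"
    return "KEEP_LANE"

# A single object 20 m ahead, closing at 10 m/s: predicted to reach the ego
# vehicle within the 2 s horizon, so the planner requests braking.
frame = {"objects": [{"pos": (20.0, 0.0), "vel": (-10.0, 0.0)}]}
print(plan(predict(sense(frame))))  # -> BRAKE
```

In a real vehicle each of these stubs is a learned component, which is exactly why the lifecycle of the data behind them matters.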
During the AI lifecycle, the system learns its behavior and decision logic from training data, which makes success dependent on numerous factors attributable to the data used, such as its quality, representativeness, and coverage of rare situations.
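As a small illustration of one such data factor, the following sketch flags operating conditions that are under-represented in a training set. The condition labels and the 5% threshold are assumptions made for the example:

```python
from collections import Counter

def coverage_report(samples, required_conditions, min_share=0.05):
    """Flag operating conditions that are under-represented in the training data.

    A model trained on such data may behave unpredictably in exactly those
    conditions, which is why data audits belong in the AI lifecycle.
    """
    counts = Counter(s["condition"] for s in samples)
    total = len(samples)
    return {cond: counts[cond] / total
            for cond in required_conditions
            if counts[cond] / total < min_share}

training_set = [{"condition": "day_clear"}] * 940 + [{"condition": "night_rain"}] * 60
print(coverage_report(training_set, ["day_clear", "night_rain", "fog"]))
# -> {'fog': 0.0}: no fog samples at all, a data gap a safety case must address
```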
In contrast to the iterative AI lifecycle, development according to the V-model (a “waterfall”-style process) has become the de facto standard in the automotive industry. To avoid malfunctions, for example, the process of the well-established functional safety standard ISO 26262 starts with the definition of basic functional requirements, which are then refined into concrete technical safety requirements. Once the requirements have been incorporated into development, verification and validation follow. While this approach has yielded safe systems in the past, it is inconsistent with the AI lifecycle. Moreover, ISO 26262's exclusive focus on malfunctions is insufficient for the non-deterministic behavior of AI systems, since in highly automated systems safety-critical situations can occur even without a malfunction.
To address these further risks, the application of ISO 21448 becomes mandatory for automated driving. ISO 21448 is a further safety standard that considers the performance limitations of installed components and non-deterministic systems, and thus takes into account the risks of insufficiently trained AI systems. While ISO 26262 takes a system perspective, ISO 21448 also examines the interaction of components with their environment, which is particularly important for handling the complexity of automated and autonomous driving. In an iterative process, each defined functionality is evaluated for potential residual risks, and through functional modifications the overall residual risk of the automated system is reduced to a minimum. Subsequently, by considering the known and unknown traffic scenarios the vehicle might encounter, a verification and validation strategy is defined that ultimately demonstrates the safety of the intended functionality.
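This evaluate-modify-re-evaluate loop can be sketched in a few lines. The scenario catalogue, the hazard flags, and the idea that a "functional modification" bumps a version number are purely illustrative stand-ins, not how a real SOTIF process is implemented:

```python
def evaluate(scenarios, function_version):
    """Run the driving function against a scenario catalogue (stubbed):
    return the scenarios in which it still behaves hazardously."""
    return [s for s in scenarios if s["hazard"] and function_version < s["fixed_in"]]

def sotif_iteration(scenarios, version=1, max_iterations=5):
    """Iterate in the spirit of ISO 21448: evaluate, modify the function,
    re-evaluate, until the residual risk (here: the count of hazardous
    known scenarios) is acceptable."""
    for _ in range(max_iterations):
        residual = evaluate(scenarios, version)
        if not residual:
            return version
        version += 1  # stand-in for a functional modification
    raise RuntimeError("residual risk not reduced to an acceptable level")

catalogue = [
    {"name": "cut-in at 80 km/h", "hazard": True, "fixed_in": 2},
    {"name": "pedestrian at dusk", "hazard": True, "fixed_in": 3},
    {"name": "highway follow",     "hazard": False, "fixed_in": 1},
]
print(sotif_iteration(catalogue))  # -> 3: two modification cycles were needed
```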
Hence, ISO 21448 is a necessary complement to ISO 26262; however, it still insufficiently addresses the risks arising from the data-driven AI lifecycle. An expansion of the safety rationale with additional safety artifacts along the AI lifecycle is therefore required, and for these AI-specific artifacts no best practices exist yet. Examples of such safety artifacts are provided by the Technical Report ISO/TR 4804. In contrast to the standards mentioned above, this Technical Report is purely informative and non-normative in nature: it is up to the companies developing autonomous and automated driving systems to use it as guidance for their own frameworks and concepts for the safety of AI-based systems.
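One way such lifecycle artifacts could be captured is sketched below, assuming a simple append-only registry with content hashes so an auditor can later verify that the artifact behind a safety argument is the one that was actually produced. The stage names and payload fields are hypothetical:

```python
import datetime
import hashlib
import json

def record_artifact(stage, payload, registry):
    """Append a tamper-evident safety artifact for one AI-lifecycle stage."""
    blob = json.dumps(payload, sort_keys=True).encode()
    registry.append({
        "stage": stage,
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": hashlib.sha256(blob).hexdigest(),  # content hash for audits
        "payload": payload,
    })

registry = []
record_artifact("data_collection", {"dataset": "urban_v3", "samples": 1_200_000}, registry)
record_artifact("training", {"model": "detector_v7", "seed": 42}, registry)
record_artifact("evaluation", {"scenario_pass_rate": 0.998}, registry)
print([entry["stage"] for entry in registry])
# -> ['data_collection', 'training', 'evaluation']
```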
In addition to the safety aspects outlined above, cybersecurity must also be taken into account. ISO 21434 is a recent security standard in this context that lays down requirements for mitigating risks from external manipulation, such as hackers taking remote control of the steering or braking system. The standard is therefore highly relevant, especially for connected autonomous vehicles that have no human driver as a fallback.
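To illustrate the kind of mitigation such a standard calls for, here is a minimal sketch of authenticating an actuation command with an HMAC plus a monotonic counter for replay protection. The key handling and frame layout are simplified assumptions for the example, not an automotive protocol:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key-provisioned-at-manufacture"  # placeholder key material

def sign_command(command: bytes, counter: int) -> bytes:
    """Attach an HMAC over the command and a monotonic counter."""
    msg = counter.to_bytes(4, "big") + command
    return msg + hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def verify_command(frame: bytes, last_counter: int):
    """Reject frames with a bad tag or a stale counter before actuation."""
    msg, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed: frame rejected")
    counter = int.from_bytes(msg[:4], "big")
    if counter <= last_counter:
        raise ValueError("replayed frame rejected")
    return msg[4:], counter

frame = sign_command(b"BRAKE:0.4", counter=7)
command, counter = verify_command(frame, last_counter=6)
print(command, counter)  # -> b'BRAKE:0.4' 7
```

A forged or replayed frame fails verification before it ever reaches the actuator, which is the kind of risk mitigation ISO 21434 requires manufacturers to argue for systematically.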
Conclusion: A holistic safety and security approach is needed to comprehensively safeguard AI systems in automated vehicles. Since current automotive safety standards inadequately address the risks of the data-driven AI lifecycle, an extended safety rationale is required. Manufacturers must align the four areas of functional safety, safety of the intended functionality, AI safety, and cybersecurity, and define tailored development processes. In particular, the inclusion of additional, AI-specific safety artifacts is important in order to develop trustworthy AI systems that pave the way for a reliable autonomous future.