Making the Case for Safety of Machine Learning Applied to Automated Driving

Simon Burton

Chief Expert, Safety, Reliability and Availability, Bosch

Machine learning technologies such as neural networks show great potential for enabling automated driving functions in an open-world context. However, these technologies can only be released for series production if they can be demonstrated to be sufficiently safe. As a result, convincing arguments need to be made for the safety of automated driving systems based on such technologies. This talk examines the various forms in which machine learning can be applied to automated driving and the resulting functional safety challenges. A systems engineering approach is proposed for deriving a precise definition of the performance requirements of the function to be implemented, on which the safety case can be based. A systematic approach to structuring the safety case is introduced, and a number of open research questions are presented, including a discussion of how to relate machine learning-specific performance metrics to system-level safety requirements.

About Simon Burton

Dr. Simon Burton currently holds the role of Chief Expert within the Robert Bosch GmbH Central Research division, where he coordinates research strategy in the area of safety, security, reliability and availability of software-intensive systems. He graduated in Computer Science from the University of York, where he also earned his PhD on the topic of the verification and validation of safety-critical systems. Dr. Burton has a background in a number of safety-critical industries. He has spent the last 15 years focusing mainly on the automotive domain, working in research and development projects within a major OEM as well as leading consulting, engineering service and product organizations that support OEMs and their supply chains with solutions for process improvement, embedded software, safety and security.

Sponsored by