Safety assurance of machine learning and autonomous systems
I have been reading some papers and reports about the challenges of assuring the safety of machine learning systems and autonomous systems.
Safety assurance typically makes the case that a system is acceptably safe to operate in a particular context. The demonstration that this holds is based on an argument and corresponding evidence that all relevant hazards have been identified, that the risks associated with these hazards have been adequately controlled, that the overall residual risk is acceptable, and that the evidence backing these claims is sufficiently trustworthy. There is an underlying assumption that (a) the system has a predictable behaviour, and (b) the operating context is reasonably well defined and stable.
Machine learning approaches have a number of well-known issues that affect the quality of the resulting systems. Among these are the problem of ensuring that the training data set is sufficiently representative of the real-world data, and that the system is able to generalise from the training data to unseen data (i.e. avoiding overfitting to the training data).
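The overfitting problem can be illustrated with a toy example (the data and classifier below are purely hypothetical, chosen for simplicity): a 1-nearest-neighbour classifier memorises its training data perfectly, yet typically performs worse on held-out data, so the gap between training and held-out accuracy is one simple indicator of how well a model generalises.

```python
import random

random.seed(0)

def make_point(label):
    # Two noisy, overlapping clusters around (0,0) and (1,1).
    centre = (0.0, 0.0) if label == 0 else (1.0, 1.0)
    return ((centre[0] + random.gauss(0, 0.6),
             centre[1] + random.gauss(0, 0.6)), label)

data = [make_point(i % 2) for i in range(200)]
train, test = data[:150], data[150:]

def predict(x, training_set):
    # Classify x with the label of its closest training point.
    nearest = min(training_set,
                  key=lambda p: (p[0][0] - x[0])**2 + (p[0][1] - x[1])**2)
    return nearest[1]

def accuracy(dataset):
    return sum(predict(x, train) == y for x, y in dataset) / len(dataset)

train_acc = accuracy(train)  # 1.0 by construction: each point is its own nearest neighbour
test_acc = accuracy(test)    # lower, because the classifier has memorised noise
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

A safety argument would need to show, with evidence, that this generalisation gap is small enough for the intended operating context.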
In terms of safety assurance of machine learning and autonomous systems, there are challenges that result from the adaptive (and therefore to a certain extent unpredictable) behaviour of such systems, and from the quality issues above. There is also a challenge to the assumption that the context is reasonably well understood, because autonomous systems might be deployed specifically in changing and dynamic contexts. A safety argument for machine learning / autonomous systems needs to be able to demonstrate that these factors do not compromise safety.
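One research direction for the changing-context problem (not proposed in the papers above, mentioned here only as an illustration) is runtime monitoring: checking at operation time whether inputs still resemble what was seen during development. A minimal sketch, using an entirely made-up one-dimensional sensor feature and a simple three-sigma threshold:

```python
import statistics

# Hypothetical feature values observed during development/training.
train_feature = [20.0, 21.5, 19.8, 22.1, 20.7, 21.0, 19.5, 20.3]
mean = statistics.mean(train_feature)
std = statistics.stdev(train_feature)

def in_assumed_context(x, k=3.0):
    # Flag runtime values more than k standard deviations from the
    # training mean as falling outside the assumed operating context.
    return abs(x - mean) <= k * std

print(in_assumed_context(20.5))  # close to the training data
print(in_assumed_context(35.0))  # far outside it
```

Real monitors would of course operate on high-dimensional inputs with far more sophisticated distribution-shift tests; the point is only that the context assumption becomes something to check continuously rather than once at design time.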
At present there appear to be no established solutions, but there is a lot of research activity in this area, for example:
Faria, J. M.: Machine Learning Safety: An Overview. Safety-Critical Systems Symposium, 2018.
Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for machine learning in highly automated driving. SAFECOMP 2017.