Most of the executives quoted in the linked piece focus on AI determinism, but I think that focus is misplaced.
Requiring determinism of an AI does a few things:
1) fragilizes the system (in the Taleb-ian sense) by imposing rigidity of actions
2) limits actions to logic that makes sense to humans
3) rules out behavior we're actually fine with elsewhere (human pilots are non-deterministic!).
"There will come a day when machine learning actually gets to the right fidelity and is more deterministic."
(Raytheon Chief Technology Officer Mark Russell)
I think we should worry less about the black-box nature of AIs (humans are black boxes too), and instead spend time thinking about how to sufficiently test the box. We need test cases with these qualities (see the sketch after this list):
1) the inputs bound reality: test inputs span the full envelope of conditions the system will actually encounter
2) the plant, or the simulation environment, accurately implements relevant dynamics
3) the outputs are ranges, not specific values
4) all of this data is easily ported and used across many platforms, existing and future.
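To make those qualities concrete, here is a minimal property-style test harness in Python. Everything in it is illustrative: the `controller` and `plant` objects, their `act` and `reset` methods, and the specific bounds are hypothetical placeholders, not any real autopilot API. The point is the shape of the test: sample inputs from a bounded envelope (quality 1), run them through a simulated plant (quality 2), and assert that outputs fall within ranges rather than match exact values (quality 3).

```python
import random

# Hypothetical bounds on the input envelope (quality 1): each input is
# sampled from a range that brackets what the system will see in service.
INPUT_BOUNDS = {
    "airspeed_mps": (60.0, 260.0),
    "altitude_m": (0.0, 13000.0),
    "crosswind_mps": (-25.0, 25.0),
}

# Acceptable output envelope (quality 3): we assert on ranges, not exact
# values, so a non-deterministic controller can still pass.
OUTPUT_BOUNDS = {
    "pitch_cmd_deg": (-15.0, 15.0),
    "bank_cmd_deg": (-30.0, 30.0),
}

def sample_inputs(rng):
    """Draw one test case from the bounded input envelope."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in INPUT_BOUNDS.items()}

def run_case(controller, plant, inputs):
    """Drive the black-box controller through the simulated plant
    (quality 2) and return its commanded outputs.

    `controller` and `plant` are hypothetical stand-ins for the system
    under test and the simulation environment."""
    state = plant.reset(inputs)
    return controller.act(state)

def check_outputs(outputs):
    """Pass/fail on ranges rather than point values."""
    for k, (lo, hi) in OUTPUT_BOUNDS.items():
        assert lo <= outputs[k] <= hi, f"{k}={outputs[k]} outside [{lo}, {hi}]"

def test_envelope(controller, plant, n_cases=10_000, seed=0):
    """Sweep the input envelope with many sampled cases."""
    rng = random.Random(seed)
    for _ in range(n_cases):
        check_outputs(run_case(controller, plant, sample_inputs(rng)))
```

Keeping INPUT_BOUNDS and OUTPUT_BOUNDS as plain data (serialized to JSON, say) rather than baked into test logic is what makes quality 4 tractable: the same envelopes can be replayed against any platform, existing or future.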
This will still be difficult, but it attacks the actual problem. We don't actually care whether a system is deterministic; that's just a currently useful heuristic. What we want is confidence that these systems are safe.