The dynamics of belief: continuously monitoring and visualising complex systems

08/11/2022
by Edwin J. Beggs, et al.

The rise of AI in human contexts places new demands on systems to be transparent and explainable. We examine some anthropomorphic ideas and principles relevant to such accountability in order to develop a theoretical framework for thinking about digital systems in complex human contexts and the problem of explaining their behaviour. Structurally, complex systems are made of modular and hierarchical components, which we model abstractly using a new notion of modes and mode transitions. A mode is an independent component of the system with its own objectives, monitoring data, and algorithms. The behaviour of a mode, including its transitions to other modes, is determined by belief functions that interpret the mode's monitoring data in the light of its objectives and algorithms. We show how these belief functions can help explain system behaviour by visualising their evaluation in higher-dimensional geometric spaces. These ideas are formalised by abstract and concrete simplicial complexes.
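The mode/belief-function architecture described in the abstract can be sketched in code. This is a minimal illustration, not the authors' formalism: the class and field names, the `[0, 1]` belief range, the threshold rule for triggering transitions, and the vehicle example are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# A belief function interprets a mode's monitoring data (here a dict of
# named sensor readings) and returns a degree of belief in [0, 1] that
# some proposition about the system's state holds. (Illustrative signature.)
BeliefFunction = Callable[[Dict[str, float]], float]


@dataclass
class Mode:
    """An independent component with its own objectives, data, and algorithms."""
    name: str
    objectives: List[str]
    # proposition -> belief function evaluating it on monitoring data
    beliefs: Dict[str, BeliefFunction]
    # proposition -> name of the mode to transition to if it is believed
    transitions: Dict[str, str] = field(default_factory=dict)

    def evaluate(self, data: Dict[str, float]) -> Dict[str, float]:
        """Evaluate every belief function on the current monitoring data."""
        return {prop: f(data) for prop, f in self.beliefs.items()}

    def next_mode(self, data: Dict[str, float],
                  threshold: float = 0.5) -> Optional[str]:
        """Return the target of the first transition whose triggering
        proposition is believed above the threshold; None means stay."""
        scores = self.evaluate(data)
        for prop, target in self.transitions.items():
            if scores.get(prop, 0.0) > threshold:
                return target
        return None


# A hypothetical mode for a vehicle-like system: belief in an obstacle
# grows as the measured distance shrinks (all names and numbers invented).
cruise = Mode(
    name="cruise",
    objectives=["maintain speed"],
    beliefs={"obstacle_ahead":
             lambda d: min(1.0, 50.0 / max(d["distance_m"], 1.0))},
    transitions={"obstacle_ahead": "braking"},
)

print(cruise.next_mode({"distance_m": 20.0}))   # belief 1.0 > 0.5 -> "braking"
print(cruise.next_mode({"distance_m": 500.0}))  # belief 0.1 <= 0.5 -> None
```

Evaluating all belief functions of a mode at once yields a point in a higher-dimensional space (one coordinate per proposition), which is the kind of data the paper proposes to visualise via simplicial complexes.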
