Only recently has attention been paid to automating the generation of comprehensible qualitative explanations of probabilistic reasoning. Elsaesser [4] provides some empirical evidence on the efficacy of explanations of simple Bayesian inference, with one variable and one observation. Sember and Zukerman [10] describe a scheme for generating micro explanations, that is, explanations of the local propagation of evidence among two or three variables in a belief net. In this paper, we present an approach to generating macro explanations, intended to explain probabilistic reasoning over much larger networks.
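For concreteness, the following sketch (our illustration, not drawn from either cited paper) shows the kind of single-variable, single-observation Bayesian inference that such explanations address: a binary hypothesis updated by one piece of evidence via Bayes' rule. The prior and likelihoods are hypothetical.

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H | e) for a binary hypothesis H and one observation e."""
        joint_h = prior * p_e_given_h
        joint_not_h = (1.0 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # Example: P(Disease) = 0.01, P(test+ | Disease) = 0.95, P(test+ | no Disease) = 0.05
    p = posterior(0.01, 0.95, 0.05)
    print(f"P(Disease | test+) = {p:.3f}")  # approximately 0.161

An explanation of this inference must convey, qualitatively, why a positive test raises belief in the disease yet still leaves it improbable, given its low prior.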
It is useful to distinguish explanation as communication of static knowledge, such as the knowledge represented in a Bayesian belief network, from explanation of dynamic reasoning, that is, of how beliefs are updated in the light of new evidence. We believe that the development of effective explanations is likely to be greatly helped by a deeper psychological understanding of human reasoning under uncertainty, so we began our research with empirical studies of the cognitive processes involved in plausible reasoning. As we shall describe, this work has led us to develop a novel approach to explaining reasoning based on quasi-deterministic scenarios. We wish to avoid dogmatism about which kinds of scheme will be most effective, and instead explore a variety of approaches, including qualitative and numerical, graphical and linguistic representations. We illustrate several of these with fragments of explanations generated by our prototype explanation system.