Here is an interesting word cloud built from the dissertation by Hei Chan of UCLA on "Sensitivity Analysis of Probabilistic Graphical Models".
The following is a summary of the thesis.
Probabilistic belief systems are used in artificial intelligence to model uncertainty. A popular framework for realizing probabilistic belief systems is to use graphical models, such as Bayesian networks and Markov networks. The topic of sensitivity analysis is broadly concerned with the relationships between local beliefs, such as network parameters, and global beliefs, such as the values of probabilistic queries. Sensitivity analysis is crucial to probabilistic belief systems because we often need to revise our state of belief to incorporate new probabilistic information in the form of local belief changes. This work focuses on sensitivity analysis of probabilistic graphical models by addressing central research problems such as the assessment of global belief changes due to local belief changes, the identification of local belief changes that induce certain global belief changes, and the quantification of belief changes in general.

Our results can be divided into the following parts. First, we develop procedures and complexity results for tuning Bayesian or Markov network parameters (single or multiple) to ensure certain query constraints. Second, we provide network-independent bounds on changes in query values due to arbitrary changes in Bayesian or Markov network parameters. Third, we propose a new distance measure for quantifying probabilistic belief changes, and use it to provide guarantees on global belief changes in Bayesian or Markov networks. Fourth, we provide algorithms and complexity results on the sensitivity of decisions induced by Bayesian networks. Finally, we discuss the philosophical topic of belief revision. Many of our results have been implemented in a program called SamIam (Sensitivity Analysis, Modeling, Inference and More), a graphical Bayesian network tool developed by the UCLA Automated Reasoning Group.
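To give a flavor of the first and third results, here is a minimal sketch on a hypothetical two-node network A → B (all numbers are illustrative, not from the thesis). It uses the known fact that a query probability is a linear function of any single CPT parameter, so tuning a parameter to meet a query constraint reduces to solving a linear inequality; it then computes the Chan–Darwiche distance between the joint distributions before and after the change, which bounds how much the odds of any conditional query can shift.

```python
import math

# Hypothetical network A -> B with made-up numbers: P(B=1 | A=a).
P_B_GIVEN_A = {1: 0.9, 0: 0.2}

def p_b1(theta):
    """Query value P(B=1) as a function of the parameter theta = P(A=1)."""
    return theta * P_B_GIVEN_A[1] + (1 - theta) * P_B_GIVEN_A[0]

# P(B=1) is linear in theta: P(B=1) = c1 * theta + c0,
# so a query constraint P(B=1) >= target becomes a linear inequality.
c0 = p_b1(0.0)
c1 = p_b1(1.0) - c0
target = 0.55
theta_new = (target - c0) / c1      # smallest theta meeting the constraint

def joint(theta):
    """Joint distribution over worlds (a, b) for a given theta = P(A=1)."""
    pr = {}
    for a in (0, 1):
        pa = theta if a == 1 else 1 - theta
        for b in (0, 1):
            pb = P_B_GIVEN_A[a] if b == 1 else 1 - P_B_GIVEN_A[a]
            pr[(a, b)] = pa * pb
    return pr

def cd_distance(p, q):
    """Chan-Darwiche distance: ln max_w q(w)/p(w) - ln min_w q(w)/p(w)."""
    ratios = [q[w] / p[w] for w in p]
    return math.log(max(ratios)) - math.log(min(ratios))

theta_old = 0.3
d = cd_distance(joint(theta_old), joint(theta_new))
# Guarantee: for any conditional query, the ratio of new to old odds
# lies in the interval [exp(-d), exp(d)].
```

With these illustrative numbers the constraint solves to theta_new = 0.5, and the distance d bounds the odds change of every query at once, independent of the network's structure, which is the sense in which the thesis's bounds are "network-independent".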