Decision Analysis course – Lecture 1 – Key topics

This post is for students attending the 2015-2016 edition of my Decision Analysis course at University of Namur.


– Course basics and evaluation;
– Key questions and topics in decision analysis.



Mandatory readings for next lecture

Keeney, Ralph L. “Decision analysis: an overview.” Operations research 30.5 (1982): 803-838.

If you want to know more

Howard, Ronald A. “Decision analysis: practice and promise.” Management science 34.6 (1988): 679-695.

Schoemaker, Paul JH. “The expected utility model: Its variants, purposes, evidence and limitations.” Journal of economic literature (1982): 529-563.

Figueira, José, Salvatore Greco, and Matthias Ehrgott. Multiple criteria decision analysis: state of the art surveys. Vol. 78. Springer Science & Business Media, 2005.

Problem to solve for next week


HBS Cases


Analytic Graphs for Root Cause Analysis: a healthcare case

This text illustrates one way of using Analytic Graphs for Root Cause Analysis, with an example from healthcare.

What is Root Cause Analysis?

Root Cause Analysis is a method for identifying causes of errors, or more generally, of variations in performance.

It was originally developed in psychology and systems engineering.

It is used in such diverse domains as healthcare, manufacturing, and information technology.

There is no conclusive empirical evidence to confirm that doing Root Cause Analysis in fact reduces risk and improves safety. Nevertheless, it remains widely used.[1,2]

Why perform Root Cause Analysis?

The following passage gives an example of when and why Root Cause Analysis is used in healthcare.

“Preventable mistakes are common in medicine. For example, at 1 hospital, a patient received patient-controlled analgesia (PCA), a combination of local anesthetic and narcotic. The medication was intended to be infused into the epidural space. Instead, a nurse inadvertently connected the tubing to an intravenous catheter, delivering potentially lethal anesthetic into the patient’s bloodstream. What followed were the nurse’s anguish and guilt and, almost as inevitably, the hospital’s root cause analysis (RCA). In the last decade, this process has become the main way medicine investigates mistakes and tries to prevent future mistakes.”[1]

Why use Analytic Graphs for Root Cause Analysis?

In healthcare, experts estimate that one Root Cause Analysis requires between 20 and 90 person-hours to complete.

It requires communication between different people, representation of information about causes and links between causes, explanations for links and causes, and discussion to reach agreement. The following passage illustrates this.

“A root cause analysis should be performed as soon as possible after the error or variance occurs. Otherwise, important details may be missed. All of the personnel involved in the error must be involved in the analysis. Without all parties present, the discussion may lead to fictionalization or speculation that will dilute the facts.” [2]

Participants in Root Cause Analysis can make Analytic Graphs to record causes and causal links, to use the resulting graphs as documentation, to query these graphs to get answers to questions they have while doing analysis, and so on.


The following passage summarises a safety failure, which led to the application of Root Cause Analysis.

“A laboratory aide was cleaning one of the gross dissection rooms where the residents work. This aide was a relatively new employee who had transferred to the department just a few days prior to the event. When she was cleaning the sink in the dissection room, she accidentally ran her thumb along the length of a dissecting knife—an injury that required 10 to 15 stitches. Since there had been other less serious accidents in this room and several previous attempts to address the safety issues had not been effective, the department completed a root cause analysis.” [2]


First graph: Events and causality

The first graph represents some of the information accumulated in the case.

All links, shown as circles marked “L”, are labelled “cause” to indicate that each is an instance of the relationship that designates causation. The relationship is binary, irreflexive, and transitive, and therefore asymmetric; that is, the cause relation is a strict partial order.

Each node is shown as a square, is marked “N” and has two labels:

– One label is “Event” to indicate that the node is an event, that is, an instance of an event class.

– The other label is text describing the event instance.
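The structure of such a graph can be sketched in plain Python. The node identifiers and event descriptions below are illustrative, loosely based on the laboratory case, and not the actual graph from the example:

```python
# Sketch of the first graph, using plain Python structures.
# Node identifiers and event texts are hypothetical, for illustration only.

# Each node has at least one label; here: the "Event" class label plus a description.
nodes = {
    "n1": ["Event", "Aide transferred to the department days before the incident"],
    "n2": ["Event", "Dissecting knife left in the sink"],
    "n3": ["Event", "Aide ran her thumb along the knife while cleaning"],
    "n4": ["Event", "Thumb injury requiring 10 to 15 stitches"],
}

# Each link has exactly one label; all links here are labelled "cause".
links = [
    ("n1", "n3", "cause"),
    ("n2", "n3", "cause"),
    ("n3", "n4", "cause"),
]

def causes_of(node_id):
    """Return the direct causes of an event node."""
    return [src for (src, dst, label) in links if dst == node_id and label == "cause"]

print(causes_of("n3"))  # ['n1', 'n2']
```

Representing the graph this way already supports simple queries of the kind mentioned earlier, such as asking for the direct causes of any event.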


Second graph: Classification of events

The second graph below was made by adding information to the first graph. The additional information is about the source events in the graph, or the events which are the root causes. Each root cause event now has an additional label, used to indicate the category of that event, following the Eindhoven Classification model of system failure [3]. Each label is an abbreviation, as follows:

– OP: Events due to quality and availability of processes, protocols;

– OC: Events due to organizational culture, that is, collective promotion or suppression of specific behaviors;

– HSS: Failures in applying or performing fine motor skills;

– TEX: Technical failures which are beyond control and responsibility of the relevant organization;

– OEX: Organizational failures which are beyond control and responsibility of the relevant organization.
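As a sketch, the abbreviations above can be kept in a lookup table, and root-cause nodes tagged with them. The node identifiers and category assignments below are hypothetical, not taken from the actual second graph:

```python
# The Eindhoven category abbreviations listed above, as a lookup table.
EINDHOVEN = {
    "OP": "Quality and availability of processes, protocols",
    "OC": "Organizational culture",
    "HSS": "Failures in applying or performing fine motor skills",
    "TEX": "Technical failures beyond the organization's control and responsibility",
    "OEX": "Organizational failures beyond the organization's control and responsibility",
}

# Hypothetical root-cause nodes (nodes with no incoming "cause" links),
# each given an additional category label.
root_cause_labels = {"n1": "OC", "n2": "HSS"}

for node, code in root_cause_labels.items():
    print(node, code, "-", EINDHOVEN[code])
```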



[1] Wu, Albert W., Angela KM Lipshutz, and Peter J. Pronovost. “Effectiveness and efficiency of root cause analysis in medicine.” JAMA 299.6 (2008): 685-687.

[2] Williams, Patricia M. “Techniques for root cause analysis.” Proceedings (Baylor University. Medical Center) 14.2 (2001): 154.

[3] Battles, James B., et al. “The attributes of medical event-reporting systems.” Arch Pathol Lab Med 122.3 (1998): 132-8.

Analytic Graphs

An Analytic Graph is a directed labelled multigraph made and used for problem solving.

Analytic Graphs are used:
– to represent information about a problem and its solutions;
– to incrementally and iteratively design a problem and its solutions;
– to answer questions about problems and solutions that they represent.

Formally, an Analytic Graph G is the tuple G = ( N, L, A ) such that:
– N is a non-empty set of nodes;
– L is a possibly empty multiset (bag) of directed edges, that is, of ordered pairs of nodes, where (a, b) is an edge directed from node a to node b;
– A is a labelling function, which assigns to every node a non-empty set of labels, and to every edge exactly one label;
– There can be different types of labels.

Labels are often defined in order to enable computations over an Analytic Graph. For example, labels can be defined to enable the application of optimisation algorithms, in order to find an optimal part of a graph, which may correspond to the best solution to the problem that the overall graph describes.
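A minimal sketch of this definition in Python. The class and method names are illustrative, and the occurrence counter is one possible way to keep apart the parallel edges of a multiset:

```python
from collections import Counter

class AnalyticGraph:
    """Sketch of G = (N, L, A): a non-empty set of nodes, a multiset of
    directed edges, and a labelling that gives each node at least one
    label and each edge exactly one label."""

    def __init__(self):
        self.nodes = set()       # N
        self.edges = Counter()   # L: multiset of (a, b) pairs
        self.node_labels = {}    # part of A: node -> non-empty list of labels
        self.edge_labels = {}    # part of A: (a, b, occurrence) -> single label

    def add_node(self, n, labels):
        if not labels:
            raise ValueError("every node must have at least one label")
        self.nodes.add(n)
        self.node_labels[n] = list(labels)

    def add_edge(self, a, b, label):
        occurrence = self.edges[(a, b)]   # distinguishes parallel edges
        self.edges[(a, b)] += 1
        self.edge_labels[(a, b, occurrence)] = label

g = AnalyticGraph()
g.add_node("x", ["Event", "Knife left in sink"])
g.add_node("y", ["Event", "Thumb injury"])
g.add_edge("x", "y", "cause")
print(g.edge_labels[("x", "y", 0)])  # cause
```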

Analytic Graphs grew out of my research on the design of formal languages for problem solving in requirements engineering and system design. The book “The Design of Requirements Modeling Languages” (Springer, 2015) gives examples of Analytic Graphs applied to problem solving in system design.

Analytic Graphs are related to:
– multidimensional networks, in that if only links can have labels, then an Analytic Graph becomes a multidimensional network;
– multilayer networks, in that if the set of labels is partitioned, each subgraph of G which carries only the labels from one block of the partition is treated as a layer, and all interconnections between layers are identity relations on nodes.

The Design of Requirements Modeling Languages book

My book on how to make formalisms for problem solving in requirements engineering will be out soon at Springer. The book page is up.

What is a Requirements Problem?

You have a Requirements Problem (RP) to solve, if (i) you have information about unclear, abstract, incomplete, potentially conflicting expectations of various stakeholders and about the environment in which these expectations should be met, (ii) you know that there is presently no solution which meets these expectations, and (iii) you need to define and document a set of clear, concrete, sufficiently complete, coherent requirements, which are approved by the stakeholders as appropriately conveying their expectations, and will guide the engineering, development, release, maintenance, and improvement of the solution which will in fact meet stakeholders’ expectations.

In simpler terms, you have an RP to solve whenever you are asked by someone else to solve a problem for them, you want to solve it, and it is not clear to you or them what exactly the problem is, and how best to solve it. Situations in which RPs occur are part of everyday life; complicated variants occur often in the workplace, especially for engineering and management professionals, although medical, legal, investment, and many other professions face them as well.

When is a formal language slow?

A formal language is slow if it has few or no tools which were designed specifically for solving the problem at hand. Perhaps you could use that language to solve that problem, but it would take you more time to do so than if the language already had some additional tools in it, even if these tools are simply defined from other components of that language.

So a language being slow is specific to problems or, if you prefer to generalise, to problem classes.

For example, classical first-order logic is expressive, but slow if you want to use it to describe, say, temporal constraints on a system. This is because it is generic, while it is fairly well known what temporal constraints look like, and why they are defined in the first place. The ontology of these constraints is known (see the modalities in linear temporal logic, for instance), and first-order logic can be used to talk about temporal constraints. Linear temporal first-order logic is, then, faster than generic first-order logic when you want to specify temporal constraints.
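To make the contrast concrete, here is the constraint “every request is eventually followed by a response”, written once with the purpose-built modalities of linear temporal logic, and once encoded in first-order logic over explicit time points. The predicate names are illustrative:

```latex
% Linear temporal logic: compact, purpose-built modalities
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{response})

% First-order logic: the same constraint, with explicit time variables
\forall t \,\big(\mathit{request}(t) \rightarrow
    \exists t' \,(t' \geq t \wedge \mathit{response}(t'))\big)
```

Both say the same thing, but the first-order encoding forces you to manage quantifiers and an ordering over time points yourself, which is the extra work that makes the generic language slower for this problem class.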

You can easily find an expressive formal language: first- or higher-order logics, for example. But it can be hard to see how to actually use that language to solve concrete problems, that is, instances of a problem class. In such cases, you have an expressive, but slow formal language, and perhaps this is not a great position to be in.

What you need in such cases is human expertise which is applicable to the problem class, since this is what lets you understand the problem to solve. And this is what allows you to solve the problem. If you are obliged to use a slow language, this simply reflects the fact that you are facing a problem for which a fast formal language is absent.

What is a task language?

A task language is a formal language in which all allowed expressions are predefined, and the set of allowed expressions is restricted to only those which have been observed to be relevant to performing the task.

Suppose that you want to have a task language which helps you and your colleagues agree on a meeting time and date. The task language could have only three allowed expressions: “I suggest [time and date]”, “I accept”, and “I reject”.

If you obliged everyone to use only this language when scheduling meetings, then everyone would only ever use one of these three expressions when scheduling a meeting. It also means that no one could explain why they cannot make a certain time and date.

This could be good if you do not care about those reasons, since no one would have to explain why they prefer, or reject, a date and time.

But it could be restrictive if you want, for example, to allow meeting participants to communicate the meeting agenda to others, and have other participants influence that agenda.
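The three-expression task language above can be sketched as a simple recogniser. The patterns below are one possible way to encode it, not a prescribed implementation:

```python
import re

# The three allowed expressions of the hypothetical meeting-scheduling
# task language, as patterns. Anything else is not a well-formed utterance.
PATTERNS = [
    re.compile(r"^I suggest .+$"),   # "I suggest [time and date]"
    re.compile(r"^I accept$"),
    re.compile(r"^I reject$"),
]

def is_well_formed(utterance):
    """An utterance is allowed only if it matches one of the predefined forms."""
    return any(p.match(utterance) for p in PATTERNS)

print(is_well_formed("I suggest Tuesday 10:00"))       # True
print(is_well_formed("I reject"))                      # True
print(is_well_formed("I reject, because I am abroad")) # False: reasons are not expressible
```

The last line illustrates the restriction discussed above: the language has no expression for giving reasons, so reasons simply cannot be communicated in it.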

Why and How to Make Requirements Modelling Languages? Tutorial announcement

I am holding the tutorial “Why and How to Make Requirements Modelling Languages?” at the 33rd edition of the International Conference on Conceptual Modeling, in Atlanta, GA, on Tuesday October 28th, 2014.

Go to this page for the abstract and tutorial material.

The Requirements Problem for Adaptive Systems in ACM TMIS

Alex Borgida, Neil Ernst, John Mylopoulos, and I have a new paper out:

Jureta, Ivan J., et al. “The Requirements Problem for Adaptive Systems.” ACM Transactions on Management Information Systems (TMIS) 5.3 (2014): 17.


Requirements Engineering (RE) focuses on eliciting, modeling, and analyzing the requirements and environment of a system-to-be in order to design its specification. The design of the specification, known as the Requirements Problem (RP), is a complex problem-solving task because it involves, for each new system, the discovery and exploration of, and decision making in a new problem space. A system is adaptive if it can detect deviations between its runtime behavior and its requirements, specifically situations where its behavior violates one or more of its requirements. Given such a deviation, an Adaptive System uses feedback mechanisms to analyze these changes and decide, with or without human intervention, how to adjust its behavior as a result. We are interested in defining the Requirements Problem for Adaptive Systems (RPAS). In our case, we are looking for a configurable specification such that whenever requirements fail to be fulfilled, the system can go through a series of adaptations that change its configuration and eventually restore fulfilment of the requirements. From a theoretical perspective, this article formally shows the fundamental differences between standard RE (notably Zave and Jackson [1997]) and RE for Adaptive Systems (see the seminal work by Fickas and Feather [1995], to Letier and van Lamsweerde [2004], and up to Whittle et al. [2010]). The main contribution of this article is to introduce the RPAS as a new RP class that is specific to Adaptive Systems. We relate the RPAS to RE research on the relaxation of requirements, the evaluation of their partial satisfaction, and the monitoring and control of requirements, all topics of particular interest in research on adaptive systems [de Lemos et al. 2013]. From an engineering perspective, we define a proto-framework for solving RPAS, which illustrates features needed in future frameworks for adaptive software systems.


How is paraconsistent reasoning related to decision-making?

The short answer is that to define a paraconsistent formalism, you have to define decision-making rules. The rules must say what can be concluded from an inconsistent set of formulas. Simply put, your proof theory reflects the decision-making rules that you like, and which you built into the language when you made it.

Even if you are not making the paraconsistent formalism yourself, but, say, are working with a specialist, then you need to define the decision-making rules first, informally but clearly. They should ideally be specific to the problem domain, or problem class, that you want to solve with that formalism.

These are simple observations, and have a crucial implication: there is unlikely to be one best paraconsistent formalism, because there are no universal criteria for which conclusions are valid, when you have an inconsistent set of formulas.

So another important implication is that if you say “I am using paraconsistent formalism X, which someone else made”, then you need to make it clear how the decision-making rules in that formalism are good for what you want to use it for. Otherwise, you are picking one among many, and it remains unclear why that one is more relevant than another.

So when making a new, or picking an existing, paraconsistent formalism, you need to make it clear why that formalism draws, or better, prefers some conclusions rather than others, when it is given an inconsistent set of formulas.
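One family of such decision-making rules can be sketched for propositional literals: accept a conclusion only if it belongs to every maximal consistent subset of the given formulas. This is only one possible rule among many, chosen here purely for illustration:

```python
from itertools import combinations

def consistent(literals):
    """A set of literals is consistent if it contains no pair p, ~p."""
    return not any(("~" + l) in literals for l in literals if not l.startswith("~"))

def maximal_consistent_subsets(formulas):
    """All consistent subsets that are not contained in a larger consistent subset."""
    formulas = set(formulas)
    subsets = []
    for r in range(len(formulas), 0, -1):   # largest subsets first
        for combo in combinations(sorted(formulas), r):
            s = set(combo)
            if consistent(s) and not any(s < m for m in subsets):
                subsets.append(s)
    return subsets

def cautious_conclusions(formulas):
    """One possible decision-making rule: accept a literal only if it
    appears in every maximal consistent subset."""
    mcs = maximal_consistent_subsets(formulas)
    return set.intersection(*mcs) if mcs else set()

print(sorted(cautious_conclusions({"p", "~p", "q"})))  # ['q']
```

From the inconsistent set {p, ~p, q}, this rule keeps q but neither p nor ~p; a different rule, reflecting different preferences, would keep different conclusions, which is exactly why there is unlikely to be one best paraconsistent formalism.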

What is the minimal number of formulas in a formal logic X, which would be needed to learn the proof theory of X?

Suppose that you have a machine that can produce any number of formulas. You assume that they are all formulas of the same formal logic, called X. How many formulas, and which formulas, would you need, in order to determine all the rules of the proof theory of X?

How do my PhD students and I collaboratively write research papers?

In 2014, my research group includes four PhD students, me and another professor.

We apply the following process when collaborating on a research paper:

1) In a Google Drive document, PhD student writes a short and rough motivation and if feasible, something that looks like a research question.

2) PhD student shares the document with me.

3) I add comments and edits via Google Drive.

4) PhD student and I meet in person, or hold a conference call, to discuss the rough motivation and research question.

5) PhD student and I identify relevant existing research, and prioritise it.

6) PhD student revises the Google Drive document, by clarifying the motivation, research question, and related work.

7) PhD student and I meet in person, or hold a conference call, to agree on the research methodology.

8) PhD student and I define the research hypotheses, the required tools for collecting data, or otherwise, as required by the research methodology. These are revised usually in several iterations, until I approve that the student can start applying the tools according to the research methodology.

9) PhD student and I (if I can do something more or different than the student can) collect data, do simulations, and so on, whichever is needed. If we need outside experts to help us, we find them and coordinate with them. We clean data up, and decide if it is worthy of analysis, and of what kind of analysis.

10) PhD student shares dataset on Google Drive, along with any analyses of the data, described in a Google Drive document. PhD student and I separately or together analyse data, decide which results to present, and how to present them.

11) PhD student and I decide on the key ideas and results to present in the research publication.

12) PhD student writes first incomplete draft as a Google Drive document. PhD student and I add comments and edit the document until it is approved as ready for submission. In parallel, we decide on publication venue (specific workshop, conference, journal, book chapter).

13) PhD student transfers the content of the Google Drive draft publication to LaTeX format, if this is required by the publication venue. The LaTeX files and the resulting PDF are shared in a folder on Google Drive.

14) I approve the PDF version, and PhD student submits it.

The rest depends on the replies from the reviewers at the publication venue.

The process can vary somewhat, depending on the data to collect, if there is data to collect, problems with data, problems with the hypotheses, or the research question, and so on.

How to design a modelling language ontology from empirical data?

I don’t know how yet. I’ve been thinking about this for about two years now.

The only idea that stuck so far is to work in five steps.

Firstly, identify recurring terms in the domain, used to describe problems and solutions.

Secondly, somehow estimate their relative importance to people who are experts in that domain, in identifying and solving problems.

Thirdly, define the concepts for the ontology.

Fourthly, define the relations, which are the harder part.

Finally, define rules for concept and relation use.

Corentin Burnay and I are going in this direction, in the work on requirements elicitation, but we still have a long way to go. We are, roughly speaking, at the second step.

What is missing in formal models of argument?

Formal models of argument, such as Dung’s argumentation framework, usually do not answer the following interesting questions:
– How to detect groupthink in arguments?
– How to check if arguments are specific enough to the context and topic, so as to sanction the use of generic arguments?
– How to detect the availability bias in arguments?
– How to evaluate the relevance of an argument?
– How to evaluate the relevance of the attack of an argument on another one?
– Which extensions to prefer, and why?
– How to detect manipulation of arguments and of the attack relation, in favour of one extension or a subset of extensions?

There are many others, but there is relatively little work that I know of, on the questions above.

How to write emails?

I follow the rules below when writing emails. Many people who work with me do the same. I highly recommend them.

They are inspired by similar rules which Nikola Tosic, Andrea Toniolo, and I designed and use at JTT Partners, to coordinate remote teams efficiently.

If you are a student, apply these rules, and I will reply faster to your email.


– No bcc (blind carbon copy).

– One topic per email.

– Email subject should clearly state the topic.

– One thought per paragraph.

– Short paragraphs.

– One empty line between every two paragraphs.

– One verb per sentence, if feasible. In general: minimise the number of verbs in a single sentence.

– No passive voice.

– No “we”. Say who.

– If you want to ask me something, then include one or more clear questions, which end with the question mark “?”.

– If you want me to do something, then say what, and suggest a deadline.

– If you want to meet me, then propose at least two meeting slots.

– If you need my approval for something, then the word “approval” has to appear in the question.

– Titles and other formalities do not matter to me. I will treat you with the same formalities (or absence thereof) that you treat me.