How to evaluate the quality of a process? By process, I mean any sequence of more or less complicated activities or tasks, which you (and others) are trying to do, to achieve some goals you agreed on. It can be something called a business process, a decision process, a problem-solving process, and so on – what I’m writing below is widely applicable.
In processes which involve uncertain outcomes (that is, when you do the process, you are not 100% sure what you will get from it), the problem is that you can do the process well and get bad outcomes, just as you can do it well and get good outcomes. And you see the point – there are four cases: done well with good outcomes, done well with bad outcomes, done badly with good outcomes, and done badly with bad outcomes.
Because, when a process is uncertain, you cannot know its exact outcomes in advance, there are two different kinds of quality to evaluate:
Procedural quality is the evaluation of how you did the process; this usually involves you having some idea – before actually doing the process – of how it should be done. You then do the process, and the difference between how you should have done it, and how you actually did it, is your procedural quality. In requirements engineering, for example, you could have a method which tells you how you should elicit requirements, and then you will do it, and the differences tell you something about procedural quality.
Outcome quality is the evaluation of what you got from the process, after you did it and were able to observe at least some of its outcomes. Outcome quality is going to depend on the difference between what you expected as outcomes, and what you actually got.
This difference between procedural and outcome quality makes it important to understand – over time, as these processes are done and repeated – the factors which influence procedural quality, the factors which influence outcome quality, and which of these you can influence, how, and at what cost. In some cases, you can pay more and get higher confidence in outcome quality; in others, that cost will be too high. Procedural quality is different in this respect, since you can influence to a considerable extent – through training, process improvements, and so on – how something is being done, and thereby influence procedural quality.
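The four cases above can be made concrete with a small simulation. This is only an illustrative sketch: the probabilities (0.8 for a well-done process yielding a good outcome, 0.3 for a badly-done one) are assumptions I made up, not claims from the text.

```python
import random

random.seed(42)

def run_process(done_well: bool) -> bool:
    """Simulate one run of an uncertain process.

    Illustrative assumption: doing the process well raises the
    probability of a good outcome from 0.3 to 0.8 -- it never
    guarantees it.
    """
    p_good = 0.8 if done_well else 0.3
    return random.random() < p_good

# Tally the four cases: (procedural quality, outcome quality).
cases = {("well", "good"): 0, ("well", "bad"): 0,
         ("badly", "good"): 0, ("badly", "bad"): 0}

for _ in range(10_000):
    done_well = random.random() < 0.5
    good_outcome = run_process(done_well)
    key = ("well" if done_well else "badly",
           "good" if good_outcome else "bad")
    cases[key] += 1

for (procedure, outcome), count in sorted(cases.items()):
    print(f"done {procedure}, {outcome} outcome: {count}")
```

All four cases occur: high procedural quality shifts the distribution of outcomes, but every combination of procedure and outcome still shows up.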
What remains critical is not to confuse procedural and outcome quality, especially when evaluating people tasked with doing uncertain processes.
The anecdotal answer would be “as soon as it is made”. But while it is funny and tragic, it is also not correct.
Inevitably, the requirements that someone has from a system today will change at some future time.
Requirements change can be due to that person’s learning from use: they use the system, new requirements arise, and old ones lose relevance. Take a trivial example: your requirements from an email client were different the first time you wrote an email than they are today, after you have written hundreds, or thousands, of them.
Besides learning, the context in which you use the system may have changed. You may no longer need to use it in the same way. Its use may no longer be as important to you. There may be alternative systems which offer different features, and which may lead you to have new requirements; and so on.
But even if requirements change, the requirements model or requirements specification can remain relevant for a long time, throughout the system’s lifecycle; this can be from months to decades.
One reason for a long lifetime of a requirements model is that it reflects the expectations and intentions of those who provided the requirements in the first place, and can thus be used to explain why the system is as it is.
A second reason, beyond explanation, is that a requirements model is a way to preserve and transfer knowledge about the system. It is often equally useful to know why something was made as it is, besides knowing how it was made and how it works.
A third reason is that it is part of the system’s documentation, as long as the requirements in the model are still being satisfied by the system.
A model, or representation, of requirements is a record of data which a group of people agreed to call requirements.
The definition bundles several important ideas:
The definition does not say what a requirement is. This is better done separately, since it is a hard problem itself.
What qualifies as a requirements model is decided by a group of people. It is a convention. It may be a convention local only to a small group of people, or a wider one, such as an industry standard.
The model or representation must be a record, something physical or digital, which can be moved around, so that it can be accessed by different people in the same way. So, if you are now thinking about some things which you call requirements, and I have no way to access them except to ask you, then they are not a representation/model of requirements.
A model can look in any way which is accepted by those who consider it a model in the first place: text, graphics, multimedia – this does not matter at all in the definition above.
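The parts of the definition can be sketched as a tiny data structure. This is a minimal, hypothetical Python sketch; all field names and values are my own illustrations, not part of the definition, which deliberately leaves format open.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementsRecord:
    """A record of data which a group of people agreed to call requirements.

    Field names are hypothetical; the definition prescribes no format --
    the content could equally be text, graphics, or multimedia.
    """
    content: tuple        # the recorded data items
    agreed_by: frozenset  # the group whose convention this is

    def accessible_to(self, person: str) -> bool:
        # A record is accessible to anyone in the same way;
        # thoughts in someone's head are not.
        return True

record = RequirementsRecord(
    content=("The system shall send a receipt after each payment",),
    agreed_by=frozenset({"analyst", "customer", "developer"}),
)
print(record.accessible_to("auditor"))  # True: it is a shared record
```

The two fields mirror the two ideas in the definition: there is a record, and there is a group whose agreement makes it a requirements model.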
Preferences are one of the few central concepts in mainstream economic approaches to decision-making and problem-solving. Interestingly enough, they were – to the best of my knowledge – absent from requirements engineering, at least up to the proposal Mylopoulos, Faulkner, and I made in 2008.
Let’s recall what is usually meant by “preference”:
“Preferences are evaluations: they concern matters of value, typically in relation to practical reasoning, i.e. questions about what should be done. This distinguishes preferences from concepts that concern matters of fact. Furthermore, preferences are subjective in that the evaluation is typically attributed to an agent – where this agent might be either an individual or a collective. This distinguishes them from statements to the effect that “X is better than Y” in an objective sense. The logic of preference has often also been used to represent such objective evaluations (e.g. Broome 1991b), but the substantial notion of preference includes this subjective element. Finally, preferences are comparative in that they express the evaluation of an item X relative to another item Y. This distinguishes them from monadic concepts like “good”, “is desired”, etc. which only evaluate one item. Most philosophers take the evaluated items to be propositions. In contrast to this, economists commonly conceive of items as bundles of goods.” 
There are two relationships between requirements and preferences:
If requirements are descriptions of desirable future conditions, and if we have such requirements which describe alternative future conditions, then we can also have preference relations over requirements, to indicate which is more desirable than another. Note that I am not mentioning here the properties which such preference relations should or must have – this does not matter for the question I asked in the title here.
A single requirement itself subsumes a preference. If I require that some condition X is satisfied by the future system, then any other condition, which is an alternative to X and which I may be aware of, is less desirable to me than X. It is in this sense that requirements are about future preferences, as I wrote in the text “Why are requirements so interesting?”.
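A preference relation over alternative requirements can be sketched as a set of ordered pairs. This is an illustrative sketch only: the alternatives are made up, and for the sake of the example I treat the relation as transitive and acyclic, properties the text above deliberately leaves open.

```python
# Pairs (x, y) mean "x is more desirable than y".
# The alternatives are hypothetical illustrations.
preferences = {
    ("electric car", "petrol car"),
    ("petrol car", "diesel car"),
}

def prefers(x: str, y: str, relation: set) -> bool:
    """True if x is preferred to y, directly or by transitivity.

    Assumes the relation is acyclic (no preference cycles).
    """
    if (x, y) in relation:
        return True
    return any((x, z) in relation and prefers(z, y, relation)
               for z in {b for (_, b) in relation})

# Requiring "electric car" subsumes a preference over its known alternatives:
print(prefers("electric car", "petrol car", preferences))   # True
print(prefers("electric car", "diesel car", preferences))   # True, transitively
print(prefers("diesel car", "electric car", preferences))   # False
```

The first relationship from the text is the `preferences` set itself; the second is the fact that stating the requirement "electric car" already implies the first pair.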
References and notes
Jureta, Ivan, John Mylopoulos, and Stephane Faulkner. “Revisiting the core ontology and problem in requirements engineering.” 2008 16th IEEE International Requirements Engineering Conference. IEEE, 2008. https://arxiv.org/pdf/0811.4364
To analyze is to break apart, reorganize, and describe in a different way, so that you can have a different look and take a different perspective; we do it to draw new conclusions, to see and say something we did not before.
Regardless of how exactly you do requirements analysis, that analysis is going to be destructive. Why is this the case? Isn’t the aim to do constructive analysis?
If I ask you to analyze requirements, I would want the outcome to be constructive, to move me closer to the goal of specifying the “right” requirements for the system-to-be.
Your analysis will always have one of three outcomes:
You found something that supports the claims I made thus far. Let’s call this a positive outcome.
You found something which undermines or contradicts my claims. This will be called the negative outcome.
No outcome at all, meaning you found nothing that either supports or counters what I did. It was a useless analysis, in other words.
Both the positive and negative outcomes are constructive for a simple reason: they both give me new information relative to what I previously had – for if I had all the information already, then you would necessarily get the third and useless outcome.
But they are also destructive. They destroy the information which was there before the analysis. Specifically, they replace the information you had before, with something new, even if the differences are minimal. If there are no differences, then you got that third outcome above: a useless analysis.
In other words, any constructive analysis of requirements, one which moves you forward, is also going to destroy what you and others thought was right. There are no perfect requirements; any useful analysis will destroy that conception.
Another way to put it is that if it was not destructive, then the analysis was useless.
If you wondered what requirements are, here is what Zave and Jackson say in a classic paper:
“From this perspective, all statements made in the course of requirements engineering are statements about the environment. The primary distinction necessary for requirements engineering is captured by two grammatical moods. Statements in the “indicative” mood describe the environment as it is in the absence of the machine or regardless of the actions of the machine; these statements are often called ‘assumptions’ or ‘domain knowledge.’ Statements in the ‘optative’ mood describe the environment as we would like it to be and as we hope it will be when the machine is connected to the environment. Optative statements are commonly called ‘requirements.’ The ability to describe the environment in the optative mood makes it unnecessary to describe the machine.”
This is a quote from a broader discussion, in which their emphasis is on requirements being about the environment of the system-to-be, not the system-to-be itself.
It is important to realize that requirements are about conditions which may or may not hold in the future. We can have requirements which are satisfied today, such as “I want my car to use electricity, rather than petrol”, in case your current car indeed uses electricity rather than petrol. But the whole point of a requirement is that it needs to be, or to remain, satisfied in the future.
Why can we not have a requirement that is only about the present?
This is because of why we do requirements engineering in the first place: we do it in order to design new systems, or to make changes to existing ones. If this were not the case, then any statement about the present would merely be a description of the present or past, since its future status as satisfied or not, true or false, would not matter.
Since requirements are about the future, they must be predictions. This is important to keep in mind, because it also means that requirements are not knowledge. It does not matter if the requirement is about a condition which is true today, and if we can know that this condition indeed holds. If requirements are only interesting in light of changes that we want to make to present conditions, then even those requirements which are about presently true statements must be predictions in order to count as requirements: they are predictions of conditions which should be satisfied after the change we want to make – be it with a new system, or changes to existing ones.
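The point that a requirement constrains future states, even when it happens to hold today, can be sketched as a condition checked against different states. The states and the condition here are hypothetical illustrations of mine, using the car example from above.

```python
# Sketch: a requirement is a prediction -- a condition to be checked
# against future states, even if it already holds today.

def uses_electricity(car: dict) -> bool:
    """The requirement, expressed as a condition on a state."""
    return car["fuel"] == "electricity"

present_state = {"fuel": "electricity"}  # the condition already holds today
future_state = {"fuel": "petrol"}        # a possible state after some change

requirement = uses_electricity

print(requirement(present_state))  # True: satisfied now...
print(requirement(future_state))   # False: ...but as a requirement, it is
                                   # a prediction about states after the change
```

Knowing that the condition holds in the present state does not settle the requirement; only its status in future states does.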
References and notes
Zave, Pamela, and Michael Jackson. “Four dark corners of requirements engineering.” ACM transactions on Software Engineering and Methodology (TOSEM) 6.1 (1997): 1-30.
“Decision analysis” can be used to mean any activity that tries to find reasons for a decision, but in research, it usually denotes a specific approach, proposed for analyzing decisions before making them.
Drawing on utility theory in particular, “decision analysis” was introduced, as far as I know, by Ron Howard at the end of the 1960s, with the aim of approaching decision-making in a rigorous way, while reusing ideas from expected utility theory in economics. Here is how he argued for the need for decision analysis, and how he defined it.
“A decision is an irrevocable allocation of resources, irrevocable in the sense that it is impossible or extremely costly to change back to the situation that existed before making the decision. […] a decision is not a mental commitment to follow a course of action but rather the actual pursuit of that course of action. […]
A good decision is a logical decision – one based on the uncertainties, values and preferences of the decision maker. A good outcome is one that is profitable or otherwise highly valued. In short, a good outcome is one we wish would happen. […] We may be disappointed to find that a good decision has produced a bad outcome or dismayed to learn that someone who has made what we consider to be a bad decision has enjoyed a good outcome. Yet, pending the invention of the true clairvoyant, we find no better alternative in the pursuit of good outcomes than to make good decisions.”
The aim with decision analysis, in other words, is to find ways of making “good decisions”, and hope that good outcomes will follow. This is sometimes called “procedural quality” of decision-making, which evaluates the process we took to make a decision, while “outcome quality” is concerned with the evaluation of outcomes. What Howard is saying is simply that we want procedures that aim for high procedural quality, since we do not see another way to influence outcome quality.
“Decision analysis is a logical procedure for the balancing of the factors that influence a decision. The procedure incorporates uncertainties, values, and preferences in a basic structure that models the decision. […] The essence of the procedure is the construction of a […] model of the decision in a form suitable for computation and manipulation; the realization of this model is often a set of computer programs.”
The motivation for decision analysis looks a lot like the one that led to research in requirements engineering: we want rigorous approaches to finding, representing, analyzing, and specifying requirements. If we don’t have them, we will specify wrong and unclear requirements; systems will be built to satisfy these deficient requirements; they will fail to deliver what stakeholders expect of them; and eventually, their design, engineering, and operation will be, to a considerable extent, waste. To fail requirements engineering is to fail to relate expectations from a system to the purpose for which it is being designed.
So how are decision analysis and requirements engineering related?
When you do requirements engineering for a system, you make many decisions: how to elicit requirements, how to model them, how to analyze them, which conclusions to draw from the analysis, what to do about these conclusions, how to verify requirements consistency, how to validate requirements with stakeholders to make sure you got them right, among others. And these decisions are only about how you do requirements engineering; there are many more to make about the actual content, the information you work with during requirements engineering: given some input from stakeholders, do you make it into a requirement or a constraint of the environment in which the system will run, do you refine it, do you look for more information to relate to it, and so on.
One way to see the connection between decision analysis (DA) and requirements engineering (RE) is that you could approach each decision during requirements engineering in the way that DA tells you to.
While this may be possible, does it make sense to do it?
The decision analysis process is as follows:
Given that we often have hundreds of requirements to deal with, which came from even more information collected through elicitation, given the frequent refinements, clarifications, and changes we have to make to requirements, and given the many other choices to make during RE, it is hard to take seriously the idea that decision analysis ought to be applied each time a decision needs to be made during RE.
If this is not the relationship, then what is?
We could, instead, see the requirements engineering process as a special case of decision analysis: the decision to make is which system to design, that is, the purpose of the system-to-be.
But there are problems with this as well. Decision analysis makes a number of assumptions, namely:
The best option is the option which gives the highest expected utility.
It is possible to find or make more than one option, at a sufficient level of detail that the options can be compared over criteria.
It is possible to operationalize goals into criteria.
Preference and importance can capture emotions, moods, expectations.
It is possible to produce a total order of preference, on each criterion, over consequences of options.
It is possible to produce probability estimates on each criterion, for consequences of options.
It is possible to separate the search for and design of the best option from the selection of criteria.
If your way of doing RE satisfies these assumptions, then perhaps your requirements engineering process looks like a variant of decision analysis. But these are demanding assumptions, and there is no research that would lead us to conclude that such a process generates the best results.
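The first assumption above, the expected utility rule, can be sketched in a few lines. The options, probabilities, and utilities below are made-up illustrations; the point is only the mechanics of the rule, which weight each consequence's utility by its probability.

```python
# Hypothetical options, each with (probability, utility) pairs
# over its possible consequences. All numbers are made up.
options = {
    "build in-house": [(0.6, 100), (0.4, -50)],
    "buy off-the-shelf": [(0.9, 40), (0.1, 0)],
}

def expected_utility(consequences):
    """Probability-weighted sum of utilities over consequences."""
    return sum(p * u for p, u in consequences)

# The rule: the best option is the one with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))

for name, consequences in options.items():
    print(name, expected_utility(consequences))
print("best:", best)
```

Note how much the assumptions demand even in this toy case: every consequence must be identified, assigned a probability, and valued on a single utility scale before the comparison can run at all.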
References and notes
Howard, Ronald A. “The foundations of decision analysis.” IEEE transactions on systems science and cybernetics 4.3 (1968): 211-219.
Keeney, Ralph L. “Decision analysis: an overview.” Operations research 30.5 (1982): 803-838.
Keeney, Ralph L., and Howard Raiffa. Decisions with multiple objectives: preferences and value trade-offs. Cambridge university press, 1993.
Working on requirements means working on other people’s predictions of their own future preferences, of others’ future preferences, and of the future situations in which these preferences will be realized.
When someone says something as simple as “the system should do this, instead of that”, which looks like a requirement, this is only the tip of an iceberg.
Working with requirements means, in other words, working with a lot of data which is unstructured, unstable, and is about unobservable phenomena (other people’s intentions, beliefs, knowledge, desires, expectations, etc.). It means working in a setting where problems are unclear, solutions need to be made from scratch, and there is no definite and general notion of what an optimal solution looks like.
How people think about requirements, how they express them, how to design solutions to these requirements, how they evaluate whether their requirements are satisfied – these are only some of the many relevant questions.
Developing such an understanding also means that you will better understand how people solve unclear problems which have no readily available solution, and how they make decisions as they do design and engineering.
Working on these topics thoroughly easily takes you to interesting places, such as philosophy of decisions, human psychology, communication, sociology, and economics, as well as software engineering, product and service design and management.
Having worked on this for almost 15 years, I am convinced that anyone who wants to understand how people work, could work, and should work together can benefit from learning about requirements engineering, that is, how to elicit, specify, analyze, evaluate, verify, validate, and negotiate requirements.