
Generic vs Specific Management Frameworks

A management framework identifies an opportunity or problem in the coordination of people within a single organization, or across organizations, and proposes (i) how to think about that opportunity or problem, (ii) how to identify its instances, and (iii) what to do – how to change the organization – in order to address it.

It is useful to distinguish generic from specific management frameworks; a generic one is not specific to a single organization (or a group of identified organizations), while a specific one is.

Here are two well-known examples of generic ones:

Kaplan & Norton’s Balanced Scorecard is concerned with the problem of broadening the range of decision criteria across an organization, and specifically, it tries to reduce the focus on financial criteria. It argues that optimizing for financial criteria disregards others which affect the ability of the business to generate value sustainably, such as the perspective (or expectations) of customers, the importance of improving internal processes, and the ability to innovate. It suggests identifying organization-specific criteria over these four perspectives, and measuring expected and actual outcomes against those criteria.

Porter’s firm-level value chain framework is concerned with the problem of understanding why and how the business generates its margin, or what inside the business determines that margin (at least part of it, in case external factors influence it most). It suggests thinking of all internal activities as split into supporting ones and primary ones, the latter having the strongest influence on the ability to generate, ideally, above-average margins (within, roughly speaking, an industry; or better, among competitors). It recommends focusing on improving primary activities.

When a management framework is generic, it comes with claims that it applies across types of organizations, industries, markets, and many other parameters or dimensions which highlight differences between organizations. Both the Balanced Scorecard and the Value Chain are generic, as are many others – the resource-based view, the five forces, and so on.

What if you want to make them work in a specific organization?

If these frameworks are generic, this raises the question of how they differ from a specific management framework – one which is used within a single organization or, if it crosses over into others, is still specific to that network.

A specific management framework needs to satisfy harder requirements than a generic one.

The specific framework must be proven relevant in the context of the organization which uses it.

Relevance here means that people who use the framework make different decisions than they would in the absence of the framework, and that they actually perceive and report that the framework makes the difference – these reports may well be vague, since they inevitably involve what-if speculation.

It needs to be clear who should apply it, how they should do so, what resources they will use, what outcomes they are expected to produce, how to measure progress in using the framework, and how to improve one’s use of the framework and/or the framework itself.

From personal experience, a local framework can be complicated, since it has many parts which all need to be carefully designed.

The specific framework should be connected to the organization – in terms of roles, responsibilities, authority, lines of reporting, and so on. Its design should be explained so that people can be trained in it. Its effects must be measurable, so that we can show which kind of value it generates, and how much of it; otherwise, it will not be used or adopted.

What goes into a local management framework? How to design and roll it out? More on these in future posts.

Posted by Ivan Jureta

The Value of Disagreement over New Ideas

Loads or Shipments? Truckload or LTL?

“Why is it a problem to have stops? Stops are common. We should be able to add them to a live load.” He was insisting.

This made no sense to me.

“You mean a shipment, right? The load becomes a shipment once matched.” I waited for his confirmation. It wasn’t happening.

This got me thinking about what it would mean to add stops to loads too. And what was this about live loads? We’d have to change how the matching algorithms work – algorithms which took us months to research, design, redesign, and align everyone on. My next meeting with the design, engineering, and quality teams will not go well. I can’t keep revising the short-term roadmap, or nothing will get done.

“That’s what I meant. Was it truckload only? We did say LTL too?” He was one of the founders, and an important investor.

“No. We said truckload, and we agreed at the time that this was full truckload only. LTL is a different business altogether. You know it. You built a business in FTL before, and I don’t think you did LTL. Different customer needs, suppliers, service, technology. We’d have to do new research. Do you want to wait for another year? Differentiators are different. Everything is different.” I was Head of Product at the time, which meant that I was responsible for aligning everyone on what the product is, what it could be, and getting everyone to agree what it should be. In this venture, the initial ideas came from investors, what one might call a “product vision”. It was also on me to make sure the product satisfies everyone, from customers to engineers who make, release, maintain, and improve it.

“Can we stick to truckload only for now? We know it’s a big opportunity, we’re early, and it’s complicated enough.” I hoped this would stop him, or at least postpone this.

He was silent. I continued. “So, there’s no such thing as ‘live load’. I know someone may be calling freight that’s moving a ‘live load’, but we aren’t. Remember, the load is what the customer asks us to move, and it stays a load until it’s matched to a carrier; at that point, it becomes a shipment. Loads and shipments are described in a different way, the information about the load is only some of the information we then need to have and keep a record of, about a shipment.”

It might have been the tenth or 20th time we had essentially the same issue; I lost count. It wasn’t specific to the two of us. We had been working together for a while. There were no bad intentions. It was happening frequently in our other teams. It was in conversations, brainstorming, planning. What we used to communicate didn’t make much difference — emails, chats, remote or live meetings. It was faster to resolve in live meetings, but that lasted only until the end of the meeting, or at most until the next one.

Unpacking Disagreement

Sometimes we used the same names for different new ideas, and at other times we used different names for similar new ideas. We disagreed frequently. Even when we might have agreed quickly, we couldn’t. If the same words stand for different ideas, and if these ideas are new (and therefore do not come with an established definition), you are never sure whether you agree or not.

Disagreements we had over “load”, “shipment”, “truckload” and “LTL” were an insignificantly small sample of confrontations we had over four years when I was involved in the logistics venture. Innovation there was never-ending. As our customers changed their minds about what worked best for them, as we acquired new customers with new expectations, conventions, constraints, practices, we kept coming up with new ideas internally for how to change our organization, products, services, systems, in response.

In such an environment, it is more useful to develop an appreciation for disagreement, than to prefer stability. This is not only to accept it as a frequent phenomenon, but also to learn to analyze it, so you can then better decide how to address it.

Part of the problem with “load” and “shipment” was that we used the same words for different new ideas. The cure for polysemy would be obvious if we could simply pick one of a few available definitions for “load” and “shipment”: review the available ones and agree on one for each word.

Disagreement over new concepts is more subtle, of course, for three reasons.

Firstly, it is naïve to expect to reach agreement easily — disagreement is not simply over which definition we will pick among a few; it is over the scope of the system, product, or service to design, build, run, manage, and improve.

There are significant ramifications of adopting one or other definition of a new concept; the definition affects where we want to go, how we will get there, and what resources we will need.

If we defined “load” as being anything fitting in a dry van, this would not remove the possibility of shipping smaller loads for different customers on the same truck (the same trailer), and would lead us to LTL.

Secondly, disagreement over “load” may not be local to the definition of “load”. What we agree for “load” may lead us to have to change our definition of “shipment”, “customer”, and others.

New concepts depend on each other, in that the meaning of one will be tied to the meaning of others. If definitions ought to represent some of that meaning, then changing the definition of one new concept will affect definitions of other concepts which mention it. If the definition of X mentions Y, then changing the definition of Y may require us to change the definition of X.
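This dependency structure can be made concrete. As a sketch (the terms, definitions, and dependency edges below are illustrative, not the ones we actually used), treat definitions as a directed graph where a concept depends on every concept its definition mentions; a change to one concept then flags every definition that transitively mentions it:

```python
# Illustrative sketch: find definitions affected by changing one concept.
# `mentions[X]` holds the concepts that the definition of X mentions.
definitions = {
    "load": "freight a customer asks us to move, before it is matched to a carrier",
    "shipment": "a load matched to a carrier, plus carrier and tracking information",
    "customer": "a party that tenders freight to us",
}
mentions = {
    "load": {"customer"},
    "shipment": {"load"},
    "customer": set(),
}

def affected_by(changed):
    """All concepts whose definitions transitively mention `changed`."""
    affected, frontier = set(), {changed}
    while frontier:
        frontier = {c for c, deps in mentions.items()
                    if deps & frontier and c not in affected}
        affected |= frontier
    return affected

# Changing "load" forces a review of "shipment"; changing
# "customer" cascades to "load", and from there to "shipment".
print(sorted(affected_by("customer")))  # -> ['load', 'shipment']
```

Even a toy version like this makes the third point above visible: agreement on one definition is never local, because the change propagates along the edges.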

Thirdly, we were creating new ideas, and the first version of a new idea is rarely the best. It wasn’t that we disagreed over general-purpose or even established specialized definitions of “load” and “shipment” — we used these words in new ways, specific to inventions we were coming up with, within the local context of the innovation process we were involved in. Even if he had a specialized, industry-standard definition in mind for “load”, it didn’t matter, since I was looking for an idea of load which was new, and which fitted our aims and our constraints and the innovation we wanted to get to.

The problem that the novelty of an idea introduces is that the disagreement we have now is not going to be the only disagreement we will have: the new idea will go through many changes, which will be motivated by various disagreements over time.

Disagreement over new ideas is a problem that intensifies over time and with more people. The more successful the venture became, the more pronounced this problem became, and the more it cost to solve. If communication leads to disagreement over the meaning of words, how can you tell that the teams are in sync? How could you possibly assess and manage the risk of planning one thing, then being delivered another?

Disagreement about who meant what, while working on new ideas, may seem a straightforward issue to solve. Let’s get together and talk it through. But you first need to detect disagreement, then spend time solving it. You might detect it late, after damage is done. Handling it means more communication, not less. Could you have avoided this?

When you know that there is a risk for this kind of disagreement to occur, how do you detect it? Moreover, how do you detect it early, when it involves fewer people, before more is invested, and may only have affected inconsequential decisions? How can you make detection and correction part of a routine, instead of just hoping it will all go well?

Is Disagreement an Anomaly?

There’s “new” in the term New Concept Networks. The focus is on new concepts, those which are invented to fit specific purposes when we design and build new products, services, systems.

Disagreement about new concepts is quite different from disagreement about established concepts.

When we disagree on established concepts, there is a reference that we can look up, to settle our differences and reach a common understanding. This could be a dictionary, an encyclopedia, a terminology accepted in a domain — something that we can both accept, along with others, as an authoritative source.

However, when we disagree on new concepts, there is no authoritative source — no third party, and no passive source (a book, database, knowledge base, or otherwise) — which we can both go to. Instead, we have to create and define the new concept.

This is exactly what was done in the logistics venture, where we had a new and our own “load” and “shipment” concepts, among many others.

The same happened in other businesses I was involved in during the last decade: I was in teams tasked with inventing, creating, testing, delivering, and running new products, services, and systems which targeted specific opportunities and problems in various industries. We kept coming up with new concepts, and had to make specific definitions for them — partly so that we could agree internally on what to do with and about them, and partly because we had to be clear about how our innovation differed from what was already available.

Disagreement over established concepts and disagreement over new concepts are two different kinds of anomalies. The former signals the need to point everyone to the authoritative reference, which provides the agreed-upon concept. The latter raises a different question: Is disagreement a signal that the concept in question should change? And if so, how do we change it so as to avoid disagreement later?

The key point is that disagreement over established concepts signals an anomaly, something to detect and correct without changing the concept, while disagreement over new concepts is part of their formation, that is, is a step in the creation of such concepts, and in their maturing up to the moment when they become accepted by, and thereby established in a community. At that point, there is an authoritative source, an accepted definition, and disagreement is an anomaly.


New Concept Networks – A counterintuitive tool for faster innovation

Innovation stands for various actions we take to create something new and useful.

To prove novelty, we have to explain how the outcome of all that effort – the invention – relates to, and specifically differs from, all that’s already available: so-called prior art.

To prove usefulness, we have to produce evidence that it is being used by our target audience.

To show both novelty and usefulness, we have to define the invention. Its definition, as long as it precisely, accurately, and clearly identifies its properties, will help us identify comparable ideas, artifacts, products, services, and from there let us build an explanation of novelty. The invention’s definition is crucial to generating evidence for (and against) usefulness: to build, deliver, and see if and how it is used, we must define it.

How and when do you make a definition of an invention? A patent specification, an integral part of a patent application, is an example of an exhaustive definition of the invention. However, a patent specification is made after the ideas around the invention are stable, when the inventors are ready to submit a patent application. That moment is only the end of an innovation process, during which inventors came up with new ideas, researched prior art, prototyped (parts of) the invention to try it out with a sample of their target audience, collected feedback, changed their ideas, and performed many such iterations over and over, to build confidence that the invention will in fact become an innovation, once it goes to market.

Here is a simple observation: during innovation, inventors have to describe new ideas in order to communicate about them, and they have to do this well before these ideas are stable enough to justify the effort of producing their exhaustive specifications, or detailed and structured definitions. These descriptions are necessary for coordination – how else can we agree on what to prototype, make, deliver, and get feedback on?

If innovators have to produce descriptions of their new ideas throughout their innovation process, because they have to communicate and coordinate with others about them, and if we eventually want to have an exhaustive definition, or specification of the invention when ideas on it are mature enough, then we should consider the following question.

What if we wanted to have precise, accurate, clear, documented definitions of the invention during the innovation process, from its earliest moments, and not only at its end?

This question motivated my efforts when working with inventors over the past ten years, and eventually led to the tool called New Concept Network. Any New Concept Network is made of

  • new terms used to describe and explain the invention,
  • their definitions,
  • relationships between definitions of new terms, and
  • relationships between definitions of new terms and definitions of “old” terms, that is, those which have not been newly defined or redefined in descriptions and explanations of the invention; “old” terms carry over their definition from ordinary language or, if they are technical terms of a specific discipline, the technical definition they have there.
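To make the four ingredients concrete, here is a minimal sketch of how such a network could be recorded; the schema and the example terms are my illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    term: str
    definition: str
    is_new: bool                                # newly defined or redefined for the invention?
    mentions: set = field(default_factory=set)  # terms used in the definition

# A tiny network: one new term defined using one old term.
network = {
    "load": Concept("load", "freight a customer asks us to move, before matching",
                    is_new=True, mentions={"customer"}),
    "customer": Concept("customer", "ordinary business sense of the word",
                        is_new=False),
}

# Relationships between definitions of new terms (ingredient iii),
# and from new terms to old terms (ingredient iv).
new_to_new = {(t, m) for t, c in network.items() if c.is_new
              for m in c.mentions if network[m].is_new}
new_to_old = {(t, m) for t, c in network.items() if c.is_new
              for m in c.mentions if not network[m].is_new}
print(new_to_old)  # -> {('load', 'customer')}
```

Even this much structure already answers two questions a plain glossary cannot: which terms are new, and which old terms their definitions lean on.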

Making a New Concept Network during innovation forces everyone involved to be precise, accurate, and clear about new ideas, and about how these new ideas relate to established ones, even if the new ideas may be changed or thrown away soon after they are defined.

With the question above, and New Concept Networks, I wanted to understand if producing precise, accurate and clear definitions throughout innovation impedes innovation, or if it can be done in a way which is helpful.

It is non-controversial to say that we want to innovate faster rather than slower. We want to go rapidly from early new ideas to more mature ones, since the faster we go to market, the earlier we will see the invention in all its glory, or see it fail. But the first new ideas are rarely the same as the last new ideas in an innovation process: an innovation process will rarely stabilize the earliest new ideas; instead, there will be disagreement about the new ideas, learning about what works and what doesn’t, and the ideas will be confronted with the behavior and expectations of a sample of the target audience.

Innovation can involve many iterations, during which new ideas give way to newer ones; that is, the invention itself will be changing. If change is the constant of innovation, then why invest additional effort into producing precise, accurate, and clear definitions of ideas which we know will change, and can change very quickly?

Why not go through the chaos of innovation with low quality descriptions, and wait for there to be enough confidence to be bothered with precision, accuracy, and clarity of the invention’s definition?

I argue that we should invest effort to produce precise, accurate, and clear definitions of new ideas during innovation, even if we reject them immediately after producing such definitions. In other words, I argue that innovation processes should embrace the paradox of wanting to be precise, accurate, and clear about unstable ideas.

The reason to embrace the paradox turns out to be simple. During innovation, new ideas change through confrontation: innovators confront each other on how to change the invention to improve it, they confront the realities of the environment in which the invention is expected to be used, they confront expectations and existing behaviors of their target audience, and so on. In absence of confrontation, why change the earliest new ideas? Why have them in the first place?

If confrontation is central to progress through new ideas in an innovation process, and if we want faster innovation, then we should generate confrontations more rapidly. This is where definition comes in: if one is imprecise, vague, or ambiguous about one’s new ideas, then it is harder to find what to confront them on. Instead, if one is precise, accurate, and clear, then it is easier for others to identify what they disagree with. In other words, being precise, accurate, and clear about new ideas in innovation is an open invitation for disagreement, one which is easier for others to accept and act on.

Over the past ten years, I have been leading and participating in innovation processes in companies in the USA, UK, Denmark, Belgium, and Israel, where we invented new software products and services, and eventually helped build new organizations around them. We dedicated substantial effort to making precise, accurate, and clear definitions of new ideas from the very start of each innovation process, when new ideas were changing daily.

These definitions were related, as each used terms from others. Definitions and their relationships formed what I call a “New Concept Network” in this book; as we will see, this is neither a terminology nor an ontology, but can be a precursor to either.

We recorded, documented, designed, and improved a New Concept Network in each innovation process. They were available to everyone involved: inventors, investors, lawyers, product designers, product managers, software architects, software engineers, and non-technical staff. They were relevant to all topics, from corporate strategy and finance, marketing and sales, production, business operations, and research and development, to delivery and maintenance. Benefits went beyond facilitated communication and teamwork for local and remote team members. The New Concept Network became a core asset for preserving, analyzing, improving, and documenting intellectual property, spanning business documentation, requirements and software specifications, and marketing and sales material, as well as serving the legal professionals who assisted with the assessment and protection of intellectual property.


What to look for in a process specification?

If you work with requirements, you have to work with descriptions of processes or workflows which satisfy requirements. What makes a relevant workflow specification or model?

Let’s start with the basics:

“The workflow concept has evolved from the notion of process in manufacturing and the office. Such processes have existed since industrialization and are products of a search to increase efficiency by concentrating on the routine aspects of work activities. They typically separate work activities into well-defined tasks, roles, rules, and procedures which regulate most of the work in manufacturing and the office. Initially, processes were carried out entirely by humans who manipulated physical objects. With the introduction of information technology, processes in the workplace are partially or totally automated by information systems, i.e., computer programs performing tasks and enforcing rules which were previously implemented by humans.” [1]

A workflow is a description of what people and machines do, with a focus on showing separate units of work, usually called tasks, activities, actions, or such, and specifically the sequence and synchronization across these units (what’s first, second, third, what waits for what else to be done, what needs to be done in parallel, and so on). It is safe to say that a business process is a synonym of workflow [2]. 

Although there are many ways to describe workflows [3], i.e., workflow modeling or specification languages, knowing them is necessary, but far from sufficient to make relevant workflows for a requirement or a goal, within and across organizations, industries, and markets.

So, what should you want to see or include in a workflow specification?

Think about this in terms of the kinds of questions you want a workflow specification to answer. I group these questions into “layers” of a workflow.

Each workflow has one or more of these layers:

  • Communication layer, describing:
    • Who communicates with whom?
    • What are they communicating about?
    • What is the purpose of the communication? For example, joint work, negotiation, exchange of paperwork, etc.
    • Does that communication have a pattern, which is repeated over and over? 
  • Incentives layer, describing:
    • Who gets what benefits from whom?
    • How important are these benefits for them, relative to their other benefits?
    • Who has which costs (loses)? 
    • How important are these costs for them, relative to their other costs?
  • Financial layer, describing financial flows between the parties involved;
  • Regulatory layer, describing steps done because of regulatory rules or guidelines;
  • Technology layer, describing which, how, and why IT tools are used in each step;
  • Base layer, describing steps which do not belong to other layers.

Each workflow can be over several or all layers, and is made of:

  • Steps, which can be, for example:
    • Tasks to complete;
    • Goals to achieve;
    • Approvals to obtain;
    • Reports to make;
    • Tests to perform;
    • Etc.
  • Relationships between steps (for a deeper discussion, see [4]), such as:
    • Sequence (next step, previous step);
    • Parallel steps;
    • Alternative steps;
    • Cycle;
    • Etc.

Each step’s description or definition should, ideally, answer the following questions:

  • Who (which position in the team or organization) is responsible for doing that step?
  • What should be done or achieved in that step? How should the step be done?
  • When should that step be done (which conditions need to be satisfied)?
  • What are the inputs (documents, approvals, etc.) needed to start a step?
  • What are the outputs of the step?
  • Which criteria and measures are used to evaluate how well the step was done?
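A workflow specification that answers these questions can be sketched as a simple data model; the layer names, relationship kinds, and example steps below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    layer: str                  # e.g. "base", "communication", "regulatory", ...
    responsible: str            # who (which position) is responsible for the step
    what: str                   # what should be done or achieved, and how
    when: str                   # conditions to satisfy before starting the step
    inputs: list = field(default_factory=list)   # documents, approvals, ...
    outputs: list = field(default_factory=list)
    measures: list = field(default_factory=list) # criteria to evaluate the step

@dataclass
class Workflow:
    steps: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (kind, from_step, to_step),
                                                   # kind in {"sequence", "parallel",
                                                   #          "alternative", "cycle"}

wf = Workflow()
wf.steps["quote"] = Step("quote", "base", "sales representative",
                         "produce a price quote for the load",
                         "load details received",
                         inputs=["load details"], outputs=["quote document"],
                         measures=["time to quote"])
wf.steps["approve"] = Step("approve", "regulatory", "manager",
                           "approve the quote", "quote document exists",
                           inputs=["quote document"], outputs=["approval"])
wf.relations.append(("sequence", "quote", "approve"))
```

A record like this makes gaps visible: a step with an empty measures list, for example, has no answer to the last question above.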

References

  1. Georgakopoulos, Dimitrios, Mark Hornick, and Amit Sheth. “An overview of workflow management: From process modeling to workflow automation infrastructure.” Distributed and Parallel Databases 3.2 (1995): 119-153.
  2. Decker, Gero, et al. “Transforming BPMN diagrams into YAWL nets.” International Conference on Business Process Management. Springer, Berlin, Heidelberg, 2008.
  3. Van Der Aalst, Wil MP, and Arthur HM Ter Hofstede. “YAWL: yet another workflow language.” Information Systems 30.4 (2005): 245-275.

Why can the quality of requirements not be fully controlled at design time?

Quality of a service (such as, e.g., a flight) correlates negatively with the gap between what you expect from the service and what you experience through that service [1]. The greater the gap, the lower the quality. You expected more than you got. The smaller the gap, the closer the experience to expectations, and the greater the quality. Saying that you got more and better than expected is to say that the gap was negative – you expected less than you experienced.
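The gap logic from [1] can be written down directly; the 0–10 scores in the example are an assumed illustration, not part of the cited model:

```python
def quality_gap(expected, experienced):
    """Positive gap: you expected more than you got (low quality).
    Negative gap: you got more than you expected (high quality)."""
    return expected - experienced

# Expected a 9/10 flight, experienced a 6/10 one: positive gap, low quality.
assert quality_gap(9, 6) == 3
# Experience exceeds expectations: negative gap, high quality.
assert quality_gap(6, 9) == -3
```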

This translates to the quality of software. If you think of software as being something that delivers services, or to make it simple here, one service, then software quality correlates with the gap between what you expected that service to be, and what you experienced when software delivered the service. Your perception of quality of a flight booking software, correlates negatively with the gap between what you expected it to do for you, and what it ended up doing. As above, it could be low quality (some positive gap) or high quality (some negative gap), or somewhere in-between (gap close to zero, from either side).

How does this translate to the quality of requirements? The service requirements should do for you, is that they should guide the engineering of the system-to-be in some specific way you are interested in. You give a requirement in order to have, eventually, the system do something you expect. So if it does not, then requirements, however good they may have seemed at the time, ended up giving you the experience which is different from the expectations you had. It follows that the quality of requirements is only partly determined by such properties as their precision, clarity, absence of vagueness, ambiguity, how they have been specified, validated, and so on – namely, by the many things which could be controlled when requirements are being elicited, analyzed, specified, verified, and validated. In other words, you cannot fully control the quality of requirements at design-time.

Simply put, it is because the evaluation of quality of the system-to-be will happen at run-time, that there is always going to be uncertainty about the quality of its design-time requirements.

References

  1. Parasuraman, Anantharanthan, Valarie A. Zeithaml, and Leonard L. Berry. “A conceptual model of service quality and its implications for future research.” Journal of Marketing 49.4 (1985): 41-50.

What is the difference between a requirements model and requirements specification?

There is no difference. Both are representations of requirements, including other information that may be useful to understand requirements.

This view is not generally accepted. Some of my colleagues in requirements engineering research see the following difference. If the representation is with diagrams, where text is in natural language, then it is a requirements model. If the representation is in a mathematical logic, then it is a requirements specification.

They are right that it is not the same to represent requirements with diagrams and as formulas of mathematical logic. Syntax is different, rules for interpretation and computation are different, and so, your reading will be different.

I see no benefit in basing the difference on the properties of the language used for representation. Although diagrams can be unrelated to any mathematical logic, there are diagrams which can be fully rewritten in one. The Techne language is one such example [1]. When the language is such, the diagram/logic difference falls apart, and the model and the specification are the same thing.

References

  1. Jureta, Ivan J., et al. “Techne: Towards a new generation of requirements modeling languages with goals, preferences, and inconsistency handling.” 2010 18th IEEE International Requirements Engineering Conference. IEEE, 2010.

What are the procedural quality and outcome quality of a process?

How to evaluate the quality of a process? By process, I mean any sequence of more or less complicated activities or tasks, which you (and others) are trying to do, to achieve some goals you agreed on. It can be something called a business process, a decision process, a problem-solving process, and so on – what I’m writing below is widely applicable.

In processes which involve uncertain outcomes (when you do the process, you are not 100% sure what you’ll get from it), the problem is that you can do the process well and get bad outcomes, or do it well and get good outcomes. And you see the point – there are four cases: well done with bad outcomes, well done with good outcomes, badly done with bad outcomes, and badly done with good outcomes.

Because, with an uncertain process, you cannot know the exact outcomes, there are two different kinds of quality to evaluate:

  • Procedural quality is the evaluation of how you did the process; this usually involves you having some idea – before actually doing the process – of how it should be done. You then do the process, and the difference between how you should have done it, and how you actually did it, is your procedural quality. In requirements engineering, for example, you could have a method which tells you how you should elicit requirements, and then you will do it, and the differences tell you something about procedural quality.
  • Outcome quality is the evaluation of what you got from the process, after you did it and was able to observe at least some of its outcomes. Outcome quality is going to depend on the difference between what you expected as outcomes, and what you actually got.
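Keeping the two evaluations separate can be as simple as computing two different numbers; the functions and the requirements engineering example below are illustrative only:

```python
def procedural_quality(planned_steps, actual_steps):
    """Fraction of planned steps that were actually carried out as planned."""
    if not planned_steps:
        return 1.0
    done = sum(1 for step in planned_steps if step in actual_steps)
    return done / len(planned_steps)

def outcome_quality(expected_outcome, actual_outcome):
    """Negative of the expectation gap: higher is better."""
    return actual_outcome - expected_outcome

# "Well done, bad outcome": the whole elicitation procedure was followed,
# yet the outcome still missed the target.
planned = ["elicit", "analyze", "specify", "validate"]
actual = ["elicit", "analyze", "specify", "validate"]
assert procedural_quality(planned, actual) == 1.0
assert outcome_quality(expected_outcome=10, actual_outcome=7) < 0
```

The point of separating the two numbers is exactly the four-case matrix above: neither number can be inferred from the other.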

This difference between procedural and outcome quality makes it important to understand – over time, as these processes are done and repeated – the factors which influence procedural quality and the factors which influence outcome quality, and to see which you can influence, how, and at what cost. In some cases, you can pay more and get higher confidence in outcome quality, while in others this cost will be too high. Procedural quality is slightly different, since you can influence to a considerable extent – through training, process improvements, and so on – how something is being done, and thereby influence procedural quality.

What remains critical is not to confuse procedural and outcome quality, especially when evaluating people tasked with doing uncertain processes.

Posted by Ivan Jureta

When does a requirements model or specification expire?

The anecdotal answer would be “as soon as it is made”. But while it is both funny and tragic, it is also not correct.

The requirements that someone has from a system today will almost inevitably change at some future time.

Requirements change can be due to that person’s learning from use: they use the system, new requirements arise, and old ones lose relevance. Take a trivial example: your requirements from an email client were different the first time you wrote an email than they are today, after you have written hundreds or thousands of them.

Besides learning, the context in which you use the system may have changed. You may no longer need to use it in the same way. Its use may no longer be as important to you. There may be alternative systems which offer different features, and which may lead you to have new requirements; and so on.

But even if requirements changed, the requirements model or requirements specification can remain relevant for a long time, throughout the system’s lifecycle; this can be from months to decades.

One reason for a long lifetime of a requirements model is that it reflects the expectations and intentions of those who provided the requirements in the first place, and can thus be used to explain why the system is as it is.

A second reason, beyond explanation, is that a requirements model is a way to preserve and transfer knowledge about the system. It is often as useful to know why something was made as it is, as it is to know how it was made and how it works.

A third reason is that it is part of the system’s documentation, as long as the requirements in the model are still being satisfied by the system.

Posted by Ivan Jureta

What is a model of requirements?

A model, or representation, of requirements is a record of data which a group of people agreed to call requirements.

The definition bundles several important ideas:

  1. The definition does not say what a requirement is. This is better done separately, since it is a hard problem itself.
  2. What qualifies as a requirements model is decided by a group of people. It is a convention. It may be a convention local only to a small group of people, or a wider one, such as an industry standard.
  3. The model or representation must be a record, something physical or digital, which can be moved around, so that it can be accessed by different people in the same way. So, if you are now thinking about something you are calling requirements, and I have no way to access those thoughts except to ask you, then that is not a representation/model of requirements.
  4. A model can take any form accepted by those who consider it a model in the first place: text, graphics, multimedia – the form does not matter at all in the definition above.
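The four ideas above can be bundled into a small sketch. Everything here is hypothetical (the class, field names, and example content are mine, not a standard): the record is some persistent content in some form, and its status as a requirements model is a convention of a named group.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementsModel:
    """A record of data which a group of people agreed to call requirements."""
    content: bytes          # the record itself: text, diagrams, multimedia, etc.
    media_type: str         # any form the group accepts, e.g. "text/plain"
    agreed_by: frozenset    # the group whose convention makes this "requirements"

    def counts_as_model_for(self, person: str) -> bool:
        # The content is a requirements model only by the group's convention;
        # for anyone outside that group, it is just a record of data.
        return person in self.agreed_by

model = RequirementsModel(
    content=b"The system shall send a receipt after each payment.",
    media_type="text/plain",
    agreed_by=frozenset({"analyst", "client", "developer"}),
)
print(model.counts_as_model_for("client"))  # True
```

Note that nothing in the sketch says what a requirement is; deciding that is, as point 1 says, a separate and harder problem.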
Posted by Ivan Jureta

What is the relationship between preferences and requirements?

Preferences are one of the few central concepts in mainstream economic approaches to decision-making and problem-solving. Interestingly enough, they were – to the best of my knowledge – absent from requirements engineering, at least up to the proposal Mylopoulos, Faulkner, and I made in 2008 [1].

Let’s recall what is usually meant by “preference”:

“Preferences are evaluations: they concern matters of value, typically in relation to practical reasoning, i.e. questions about what should be done. This distinguishes preferences from concepts that concern matters of fact. Furthermore, preferences are subjective in that the evaluation is typically attributed to an agent – where this agent might be either an individual or a collective. This distinguishes them from statements to the effect that “X is better than Y” in an objective sense. The logic of preference has often also been used to represent such objective evaluations (e.g. Broome 1991b), but the substantial notion of preference includes this subjective element. Finally, preferences are comparative in that they express the evaluation of an item X relative to another item Y. This distinguishes them from monadic concepts like “good”, “is desired”, etc. which only evaluate one item. Most philosophers take the evaluated items to be propositions. In contrast to this, economists commonly conceive of items as bundles of goods.” [2]

There are two relationships between requirements and preferences:

  1. If requirements are descriptions of desirable future conditions, and if we have such requirements which describe alternative future conditions, then we can also have preference relations over requirements, to indicate which is more desirable than another. Note that I am not mentioning here the properties such preference relations should or must have – this does not matter for the question asked in the title.
  2. A single requirement itself subsumes a preference. If I require that some condition X is satisfied by the future system, then any other condition, which is an alternative to X and which I may be aware of, is less desirable to me than X. It is in this sense that requirements are about future preferences, as I wrote in the text “Why are requirements so interesting?”.
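Both relationships can be sketched in a few lines. This is illustrative only (names and the example requirements are hypothetical): a preference is recorded as a comparative, agent-relative triple, and requiring X subsumes preferring X to each known alternative.

```python
# Preferences as comparative, subjective relations:
# (agent, X, Y) reads "agent prefers X to Y".
prefers = set()

def require(agent, chosen, known_alternatives):
    """Requiring `chosen` subsumes preferring it to every known alternative."""
    for alternative in known_alternatives:
        prefers.add((agent, chosen, alternative))

# Requiring "reply within 1s" over known alternatives yields one
# preference triple per alternative.
require("client", "reply within 1s", ["reply within 5s", "reply within 10s"])

print(("client", "reply within 1s", "reply within 5s") in prefers)  # True
```

The triple form captures the two features from the quoted definition: preferences are comparative (they relate two items) and subjective (they are attributed to an agent).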
References and notes
  1. Jureta, Ivan, John Mylopoulos, and Stephane Faulkner. “Revisiting the core ontology and problem in requirements engineering.” 2008 16th IEEE International Requirements Engineering Conference. IEEE, 2008. https://arxiv.org/pdf/0811.4364
  2. Hansson, Sven Ove and Grüne-Yanoff, Till, “Preferences”, The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/sum2018/entries/preferences/
Posted by Ivan Jureta