Archive for the ‘Functional Requirements’ Category

Event Oriented Analysis & Object Oriented Design

April 15, 2016

As it’s safe to assume that a primary objective of process analysis is to align business concerns (by nature specific and changing) with enterprise architectures (meant to be shared and stable), events could provide a good starting point.

Event, concerns, processing (F. Handoko)

Business Analysis & Application Design

Encouraged by the convincing track record of object oriented approaches to systems architectures and software design, analysts have tried to apply the same principles to business requirements. While that approach can be credited with significant achievements, success usually depends on some prior alignment of business domains with their system counterparts, in particular on the possibility to uniformly and consistently identify and define business entities as objects, independently of operating processes.

Alternatively, when business entities cannot be readily identified upfront as system objects, analysis may start with organization, entitled agents, and activities, and proceed with the definition of business flows and associated entities.

So, and whatever the approach, the question is how to ensure that the applications under consideration are designed in accordance with architecture capabilities.

System Architecture & Software Design

Words are worth the difference they make: as long as systems were not much more than an assortment of software modules, architecture and design could be understood as one and the same. But nowadays a distinction may be overdue between, on one hand the design of software components run within a single system’s address space and time-frame and, on the other hand, architectures of systems set across different spaces and time-frames.

Architecture vs Design: words are worth the difference they make

Object oriented solutions (e.g Domain Driven Design) are arguably the option of choice for the former, but services oriented approaches may be a better fit for the latter. Not by chance, events provide a sound conceptual hinge between the two approaches.

Event Oriented Analysis vs Object Oriented Design

Object oriented principles can be streamlined around three core topics: (a) information hiding and coupling between structures and methods; (b) inheritance between types; (c) communication through interfaces and polymorphism.

OO principles can be streamlined around three topics: encapsulation (a), inheritance (b), and communication through interfaces (c).

Of these, encapsulation and inheritance are specific to software design, but communication mechanisms are also at the core of services oriented architectures. Considering messages as the logical system counterparts of business events, event-oriented analysis should help to align business processes with systems capabilities.
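
To fix ideas, here is a minimal TypeScript sketch of the three topics, with all names hypothetical: encapsulation hides the structure behind methods, inheritance specializes a type, and the interface supports polymorphic communication.

    // (c) Communication through interfaces: callers see only the contract.
    interface EventHandler {
      handle(eventId: string): void;
    }

    // (a) Encapsulation: structure and methods are coupled, internals hidden.
    class Inventory implements EventHandler {
      private stock = new Map<string, number>(); // hidden structure

      handle(eventId: string): void {
        this.stock.set(eventId, (this.stock.get(eventId) ?? 0) + 1);
      }
    }

    // (b) Inheritance between types: a subtype specializes the behavior.
    class AuditedInventory extends Inventory {
      override handle(eventId: string): void {
        console.log(`auditing ${eventId}`);
        super.handle(eventId);
      }
    }

    // Polymorphism: the same request is served by different implementations.
    const handlers: EventHandler[] = [new Inventory(), new AuditedInventory()];
    for (const h of handlers) h.handle("delivery");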

From a business processes perspective, events signal changes in the states of activities, objects, or expectations. Given that supporting systems are meant to deal with those changes, the analysis of business requirements could proceed from the corresponding events:

  • Business events are defined with regard to time-frames (a) and sources to be authenticated and authorized (b).
  • Triggering changes must be described by messages with regard to their functional (c) and operational (d) scope.
  • Business logic (e) and entities (f) are often shared across applications and therefore better defined independently.
  • Internal changes (same space and time-frame) are hidden.
  • Triggered (external) changes are defined with regard to time-frames (h), processes (d), and devices (g).
A simplified blueprint of Event Oriented Process Analysis

As it happens, those facets can be aligned with OO design ones, with (c) and (d) for communication, (e) and (f) for encapsulation. On a broader perspective they also fit with the growing focus on event-driven applications and service oriented architectures.
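
As an illustration, the facets can be sketched as plain TypeScript types; the names are hypothetical and the shapes merely paraphrase the list above, with (c) and (d) carried by messages and (e) and (f) factored out behind an interface.

    // (a), (b): business events defined by time-frame and source.
    interface BusinessEvent {
      timeFrame: string;        // (a) e.g "instant", "business day"
      source: string;           // (b) to be authenticated and authorized
    }

    // (c), (d): triggering changes described by messages.
    interface TriggeringMessage {
      event: BusinessEvent;
      functionalScope: string;  // (c) what is requested
      operationalScope: string; // (d) how it is to be carried out
    }

    // (e), (f): logic and entities shared across applications, hence
    // defined independently and hidden behind an interface.
    interface SharedDomain {
      entities: Record<string, object>;        // (f)
      apply(message: TriggeringMessage): void; // (e)
    }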

From Event Oriented Process Analysis to Service Oriented Architectures

By moving business logic to the background, event-driven analysis fosters polymorphism at enterprise level with corresponding benefits:

  • With regard to business processes, events come with functional and operational requirements set independently of the business logic that will be carried out: trigger (what has changed), role (who is requesting), and message communication semantics (when the system is supposed to deal with the event).
  • With regard to system capabilities, messages can be used to align business (aka external) events with system (aka internal) ones independently of the business entities and logic (what is to be done and how).
  • With regard to architecture and design, that approach upholds OO principles by dealing separately with polymorphic requests (interfaces) and business logic (methods).

Those benefits appear clearly when capabilities are realized by services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).

Environment (bold) vs Services (italic)

It should be kept in mind that services are part of functional architectures and, as a consequence, cannot be directly addressed by users or devices.

Events & Action Semantics

With events set as modeling anchors, use cases may provide the modeling glue between processes and functional capabilities:

  • Triggering events (a) map changes in business environments (aka external events) to changes in systems objects (aka internal events).
  • Actors (b) map roles in organization to system users.
  • Messages (c) map the semantics of business processes to the semantics of applications (e) and domains (f).
Use cases (orange) provide a comprehensive and consistent mapping from processes (green) to services (blue).

On that basis, the main objective of event-oriented analysis would be to distinguish between communication and business semantics, the former dealing with interactions, the latter with business logic.

Further Reading

IoT & Real Time Activities

March 2, 2016

The world is the totality of facts, not of things.

Ludwig Wittgenstein

As the so-called internet of things (IoT) seems to bring together people, systems and devices, the meaning of real-time activities may have to be reconsidered.

Fact and Broadcast (Lucy Nicholson)

Things, Facts, Events

To begin with, as illustrated by marketed solutions like SIGFOX, the IoT can be described as a fast and stripped-down communication layer carrying not so much things as facts and associated raw (i.e non symbolic) events. That seems to cut across traditional understandings, because the IoT is dedicated to non symbolic devices yet may include symbolic systems, and fast communication may or may not mean real-time. So, when applications’ network requirements are considered, the focus should be on the way events are meant to be registered and processed.

Business Environments Cannot be Frozen

Given that time-frames are set according to primary events, real-time activities can be defined as exclusive ongoing events: their start initiates a proprietary time-frame perceived from the outside as being without duration, i.e as if nothing could happen until their completion, with activities targeting the same domain presumed frozen.

Contrary to operational timing constraints (left), real-time ones (right) are set against the specific (i.e event driven) time-frames of the targeted domain.

That principle can be understood as a generalization of the ACID (Atomicity, Consistency, Isolation, Durability) scheme used to guarantee that database transactions are processed reliably. Along that understanding, a real-time business transaction would require that, whatever its actual duration, no change from other transactions be accepted on its domain representation until the business transaction is completed and its associated outcomes duly committed. Yet the hitch is that, contrary to systems transactions, there is no way to freeze actual business ones, which will continue to be carried out notwithstanding suspended registrations.

Accesses can be fully synchronized within DB systems (single clock), suspended within functional architectures, consolidated within environment.

In that case the problem is not so much one of locks on DB as one of dynamic alignment of managed representations with the changing state of affairs in their actual counterpart.

Yoking Systems & Environments

As the quip often attributed to Einstein goes, “the only reason for time is so that everything doesn’t happen at once”. Along that reasoning, coupling constraints for systems can be analyzed with regard to the way events are notified and registered:

  • Input flows: what happens between changes in environment (aka facts) and their recording by applications (a).
  • Processing: could the application be executed fully based on locally available information, or be contingent on some information managed by systems at domain level (b).
  • Output flows: what happens between actions triggered by applications and the corresponding changes in the environment (c).
How to analyze the coupling between environment and system.

It’s important to keep in mind that real-time activities are not defined in absolute time units: they can be measured in microseconds as well as in aeons, and carried out by light sensors as well as by snails.

A Simple Decision Routine

Deciding on real-time requirements can therefore follow a straightforward routine:

  • Should changes in relevant external objects, processes, or expectations, be instantly detected at system’s boundaries? (a)
  • Could the interpretation and processing of associated events be carried out locally, or be contingent on information shared at domain level? (b)
  • Should subsequent actions on relevant external objects, processes, or expectations be carried out instantly? (c)
Coupling with the environment must be synchronous and footprint local or locked.

Positive answers to the three questions entail real-time requirements, as will also be the case if access to shared information is necessary.
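
Written as code, the routine boils down to a predicate over the three answers; the field names, and the reading of (b) as a boolean, are assumptions made for the sketch.

    // Answers to the three questions of the decision routine.
    interface CouplingProfile {
      instantDetection: boolean;  // (a) changes detected at system boundaries
      sharedInformation: boolean; // (b) processing contingent on domain-level data
      instantAction: boolean;     // (c) subsequent actions carried out instantly
    }

    // Real-time requirements follow from positive answers to (a) and (c),
    // and also whenever access to shared information (b) is necessary.
    function requiresRealTime(p: CouplingProfile): boolean {
      return (p.instantDetection && p.instantAction) || p.sharedInformation;
    }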

What about IoT?

Strictly speaking, the internet of things is characterized by networked connections between non symbolic things. As it entails asynchronous communication and some symbolic mediation in between, one may assume that the IoT cannot support real-time activities. That assumption can be checked with some business cases given as examples.

Further Readings

External Links

Relating to Functions

February 10, 2016

Preamble

Functions map inputs to outputs as expected from a performing agent, whatever its nature (concrete or abstract) or means (physical, logical, or magical).
They are not to be confused with objectives (which don’t necessarily specify performing agents or detail inputs) nor with activities (which purport to describe concrete execution paths).

Functions map inputs to outputs as expected from performing agents (Cheyenne view of Medicine)

Those distinctions are critical when the aim is to align business processes requirements with systems architecture capabilities.

Functions & Processes

Functions are complete (contrary to objectives) and abstract (contrary to activities) descriptions of what organizations (represented by actors), system architectures (represented by services), or objects (through operations) can do. As such they are akin to interfaces or types, and cannot be instantiated on their own. Processes, on the contrary, describe how activities are executed, i.e instantiated (#).

Business processes describe sets of execution instances (#). Functions describe what can be expected from enterprise or functional architectures. Business logic describes how the flows are to be processed.

That understanding provides for a modular approach to business processes:

  • Business processes can be defined with regard to business functions independently of the way they are supported.
  • Business rules can be managed independently of the way they are applied, by people or systems.
  • Business logic can be factored out in functions (business or systems) or set within specific processes.

Yet that would not be possible without some modeling across enterprise architecture layers.

Functions & Models

Functions are meant to facilitate reuse across enterprise architectures, which entails descriptions that are clearly and easily accessible: context, modus operandi, expected outcome. Whatever the modeling method(s) in use, it’s safe to assume that different stakeholders across enterprise architectures will pursue different objectives, to be defined with different concepts. If they are to communicate they will need some explicit and unambiguous semantics for the links between processes, functions, and activities:

  • Functional flows are used between processes and functions (a) or actors (d), or between actors and functions (e).
  • Composition or aggregates are used to specify where the business logic is to be employed, by functions (b) or by processes (c).
  • Documentation references (f) are used between unspecified actors and business logic, in case it would be performed by people.
Semantics of connectors: functional flows (a,d,e), aggregates (b) and composition (c), and documentation (f).

Finally, the semantics of connectors used between functions will have to be consistent with the one used to connect them to processes and activities.

Combining Functions

Considering that functions are neatly set within the systems modeling realm, one would assume that inheritance and structure connectors can be used to detail and combine them. Yet, since functions cannot be instantiated, some paring down can be applied to their semantics:

  • Traditional structure connectors are set with regard to identification: bound to structure for composition, set independently otherwise. Since functions have no instances, that criterion is irrelevant, and the same reasoning goes for composition.
  • Likewise, since functions have no states to be considered, inheritance of functions can be represented by aggregates.
Functions can be combined at will using only aggregates

As far as functions are concerned, structures as well as inheritance connectors can be fully and soundly replaced by aggregate ones, which could significantly improve the mapping of business processes, activities, and supporting functions.
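
A minimal sketch of that paring down, with functions reduced to named descriptions that are never instantiated and combined through aggregates only (hypothetical names):

    // Functions as pure descriptions: no instances, no state, no composition.
    interface BusinessFunction {
      name: string;
      aggregates: BusinessFunction[]; // the only connector needed
    }

    const checkCredit: BusinessFunction = { name: "check credit", aggregates: [] };
    const invoice: BusinessFunction = { name: "invoice", aggregates: [] };

    // Combining functions at will using only aggregates:
    const orderManagement: BusinessFunction = {
      name: "order management",
      aggregates: [checkCredit, invoice],
    };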

Further Readings

Enterprise Systems & the OS Kernel Paradigm

September 1, 2015

Preamble

Given the ubiquity of information and communication technologies on one hand, the falling apart of technical fences between systems, enterprises, and business environments on the other hand, applying the operating system (OS) paradigm to enterprise architectures seems a logical move.

Users and access to services (Queuing at a Post Office in French West Indies)

Borrowing the blueprint of computer operating systems, enterprise operating systems (EOS) would be organized around a kernel managing shared resources (people, hardware and software) and providing services to business, engineering or operational processes.

Gerrymandering & Layers

When IT was neatly fenced behind computer screens, managers could keep a clear view of organization, roles, and responsibilities. But with physical hedges replaced by clouded walls, the risk is that IT may appear as the primary constituent of enterprise architecture. Given the lack of formal arguments against what may be a misguided understanding, enterprise architects have to rely on pragmatic answers. Yet they could prop up their arguments by upending the very principles of IT operating systems and restoring the right governance footprint.

To begin with, turfs must be reclaimed, and that can be done if the whole of assets and services are layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

Problems and solutions must be set along architecture layers

EA must separate and federate concerns along architecture layers

Then, reclaiming must also include governance, and for that purpose EOS are to rely on a comprehensive and consistent understanding of assets, people and mechanisms across layers:

  • Physical assets, including hardware.
  • Non physical assets, including software.
  • Agents (identified people with organizational responsibilities) and roles.
  • Events (changes in the state of objects, processes, or expectations) and activities.

Mimicking traditional OS, that could be achieved with a small and compact conceptual kernel of formal concepts bearing out the definitions of primitives and services for the whole of enterprise processes.

EOS’s Kernel: 12 concepts

A wealth of definitions may be the main barrier to enterprise architecture as a discipline because such profusion necessarily comes with overlaps, ambiguities, and inconsistencies. Hence the benefit of relying on a small set of concepts covering the whole of enterprise systems:

  • Six for individuals, actual (objects, events, processes) or symbolic (surrogate objects, activities, roles).
  • One for actual (locations) or symbolic (package) containers.
  • One for the partitioning of behaviors (branch) or surrogates (power type).
  • Four for actual (channels and synchronization) and symbolic (references and flows) connectors.
Governance calls for comprehensive and consistent semantics

Considering that nowadays business entities (enterprise), services (systems), and software components (technology) share the same distributed world, these concepts have to keep some semantic consistency across layers whatever their lexical avatars. To mention two critical examples, actors (aka roles) and events must be consistently understood by business and system analysts.

Those concepts are used to describe enterprise systems building blocks which can be combined with a small set of well known syntactic operators:

  • Two types of connectors depending on target: instances (associations) or types (inheritance).
  • Three types of connections: nondescript, aggregation, and composition.
Syntactic operators are meant to be applied independently of targets semantics

Again, Occam’s razor should be the rule: just like semantics are consistently defined across architecture layers, the same syntactic operators are to be uniformly applied to artifacts independently of their semantics.
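
A hedged sketch of that kernel vocabulary, with concepts and operators written as closed TypeScript types; the identifiers are illustrative, not normative.

    // The 12 kernel concepts (containers and partitions each count as one
    // concept with actual and symbolic variants).
    type KernelConcept =
      | "object" | "event" | "process"      // actual individuals
      | "surrogate" | "activity" | "role"   // symbolic individuals
      | "location" | "package"              // containers (actual / symbolic)
      | "branch" | "powerType"              // partitions
      | "channel" | "synchronization"       // actual connectors
      | "reference" | "flow";               // symbolic connectors

    // Syntactic operators, applied uniformly whatever the target semantics.
    type ConnectorTarget = "instances" | "types"; // associations vs inheritance
    type ConnectionKind = "nondescript" | "aggregation" | "composition";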

Kernel’s Functions

Continuing with the kernel analogy, based on a comprehensive and consistent description of resources, the traditional OS functions can be reinterpreted with regard to architecture capabilities implemented across layers:

  • What: memory of business objects and operations (enterprise), data base logical entities (systems), data base physical records (platforms).
  • Who: roles (enterprise), interfaces (systems), and devices (platforms).
  • When: business events (enterprise), logical events (systems), and transaction managers (platforms).
  • Where: sites (enterprise), logical processing units (systems), network architecture (platforms).
  • How: business processes (enterprise), applications (systems), and CPU (platforms).
Traceability of Capabilities across architecture layers

That fits with the raison d’être of a kernel which is to combine core functions in order to support the services called by processes.

Services

Still milking the OS analogy, a primary goal of an enterprise kernel is to support a seamless integration of services:

  1. Business driven: the definition of services must be directly and unambiguously associated to business ends and means across enterprise layers.
  2. Traceability: they must ensure the transparency of the tie-ups between organization and processes on one hand, information systems on the other hand.
  3. Plasticity: they must facilitate the alignment of changes in business objectives, organization and supporting systems.

A reasoned way to achieve these objectives is to classify services with regard to the purpose of calling processes:

  • Business processes deal with the transactions between the enterprise and its environment.
  • Engineering processes deal with the development of enterprise resources independently of their use.
  • Operational processes deal with the management of enterprise resources when directly or indirectly used by business processes.
Enterprise Operating System: Layers & Services

That classification can then be crossed with architecture levels:

  • At enterprise level services are bound to assets to be used by business, engineering, or operational processes.
  • At systems level services are bound to functions supporting business, engineering, or operational processes.
  • At platform level services are bound to resources used by business, engineering, or operational processes.

As services will usually rely on different functions across layers, the complexity will be dealt with by kernel primitives and masked behind interfaces.

Services called by processes can combine different functions directly (basic lines) or across layers (dashed lines).

Finally, that organization of services along architecture layers may be aligned with governance levels: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources.

Further Reading

Business Processes & Use Cases

March 19, 2015

Summary

As can be understood from their theoretical basis (Pi-Calculus, Petri Nets, or State Machines), processes are meant to describe the concurrent execution of activities. Assuming that large enterprises have to keep a level of indirection between operations and business logic, it ensues that activities and business logic should be defined independently of the way they are executed by business processes.

Communication Semantics vs Contexts & Contents (G. Winogrand)

For that purpose two basic modeling approaches are available: BPM (Business Process Modeling) takes the business perspective, UML (Unified Modeling Language) takes the engineering one. Yet each falls short with regard to a pivotal conceptual distinction: BPM lumps together process execution and business logic, and UML makes no difference between business and software process execution. One way out of the difficulty would be to single out communications between agents (humans or systems), and to specify interactions independently of their contents and channels.

Business Process: Communication + Business Logic

That could be achieved if communication semantics were defined independently of domain-specific languages (for information contents) and technical architecture (for communication channels). As it happens, and not by chance, the outcome would neatly coincide with use cases.

Linguistics & Computers Parlance

Business and functional requirements (see Requirements taxonomy) can be expressed with formal or natural languages. Requirements expressed with formal languages, domain-specific or generic, can be directly translated into executable specifications. But when natural languages are used to describe what business expects from systems, requirements often call for some elicitation.

When that challenge is put into a linguistic perspective, two schools of thought can be considered, computational or functional.

The former approach is epitomized by Chomsky’s Generative Grammar and its claim that all languages, natural or otherwise, share an innate universal grammar (UG) supporting syntactic processing independently of meanings. Nowadays, and notwithstanding its initial favor, that “computer friendly” paradigm hasn’t gained much traction in general linguistics.

Alternatively, the functionalist school of thought considers linguistics as a general cognitive capability deprived of any autonomy. Along that reasoning there is no way to separate domain-specific semantics from linguistic constructs, which means that requirements complexities, linguistic or business specific, have to be dealt with as a lump, with or without the help of knowledgeable machines.

In between, a third approach has emerged that considers language as a functional system uniquely dedicated to communication and the mapping of meanings to symbolic representations. On that basis it should be possible to separate the communication apparatus (functional semantics) from the complexities of business (knowledge representation).

Processes Execution & Action Languages

Assuming the primary objective of business processes is to manage the concurrent execution of activities, their modeling should be driven by events and their consequences for interactions between business and systems. Unfortunately, that approach is not explicitly supported by BPM or UML.

Contrary to the “simplex” mapping of business information into corresponding data models (e.g using Relational Theory), models of business and systems processes (e.g Petri Nets or State Machines) have to be set in a “duplex” configuration as they are meant to operate simultaneously. Neither BPM nor UML is well equipped to deal with the task:

  • The BPM perspective is governed by business logic independently of interactions with systems.
  • Executable UML approaches are centered on the execution of software processes, and their extensions to action semantics deal essentially with class instances, feature values, and object states.

Such shortcomings are of no serious consequence for stand-alone applications, i.e when what happens at architecture level can be ignored; but overlooking the distinction between the respective semantics of business and software process execution may critically hamper the usefulness, and even the validity, of models purporting to represent distributed interactions between users and systems. Communication semantics may help to deal with the difficulty by providing relevant stereotypes and patterns.

Business Process Models

While business process models can (and should) also be used to feed software engineering processes, their primary purpose is to define how concurrent business operations are to be executed. As far as systems engineering is concerned, that will tally with three basic steps:

  1. Dock process interactions (aka sessions) to their business context: references to agents, business objects and time-frames.
  2. Specify interactions: references to context, roles, events, messages, and time related constraints.
  3. Specify information: structures, features, and rules.
Communication with systems: dock to context (1), interactions (2), information (3).
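
Those three steps lend themselves to a bare-bones sketch, with hypothetical structures standing in for whatever the modeling tool would provide:

    // (1) dock the session to its business context.
    interface SessionContext {
      agents: string[];
      businessObjects: string[];
      timeFrame: string;
    }

    // (2) specify interactions; (3) specify information.
    interface Session {
      context: SessionContext;
      interactions: { role: string; event: string; message: string }[];
      information: { structure: string; features: string[]; rules: string[] }[];
    }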

Although modeling languages and tools usually support those tasks, the distinctions remain implicit, leaving users with the choice of semantics. In the absence of explicit guidelines confusion may ensue, e.g between business rules and their use by applications (BPM), or between business and system events (UML). Hence the benefits of introducing functional primitives dedicated to the description of interactions.

Such functional action semantics can be illustrated by the well known CRUD primitives for the creation, reading, updating and deletion of objects, a similar approach being also applied to the design of domain specific patterns or primitives supported by functional frameworks. While thick on the ground, most of the corresponding communication frameworks deal with specific domains or technical protocols without factoring out what pertains to communication semantics independently of information contents or technical channels.

Communication semantics should be independent of business specific contents and systems architectures.

But that distinction could be especially productive when applied to business processes as it would be possible to fully separate the semantics of communications between agents and supporting systems on one hand, and the business logic used to process business flows on the other hand.
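
As a sketch of that factoring, CRUD-like primitives can be typed so that only the communication semantics are interpreted while business contents remain opaque; the request shape is an assumption.

    // Communication semantics carried by the primitive; contents stay opaque.
    type Primitive = "create" | "read" | "update" | "delete";

    interface ServiceRequest<T> {
      primitive: Primitive; // interpreted by the communication layer
      objectId: string;     // reference to the targeted surrogate
      payload?: T;          // business content, never inspected here
    }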

Communication Semantics

Languages have developed to serve two different purposes: first as a means to communicate (a capability shared between humans and many primates), and then as a means to represent and process information (a capability specific to humans). Taking a leaf from functional approaches to linguistics, it may be possible to characterize messages with regard to action semantics, more precisely the associated activity and the attendant changes:

  • No change: messages relating to passive (objects) or active (performed activity) states.
  • Change: messages relating to achievement (no activity) or accomplishment (attendant on performed activity).
Dynamics of communication context

It must be noted that such semantics can also be extended to the state of expectations.

Additionally, messages would have to be characterized by:

  • Time-frame: single or different.
  • Address space: single or different.
  • Mode: information, request for information, or request for action.
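
Putting the action semantics and those characteristics together, a message envelope could be sketched as follows; the enumerations paraphrase the text and are not a normative taxonomy.

    type ActionSemantics =
      | "state"           // no change, passive (objects)
      | "activity"        // no change, active (performed activity)
      | "achievement"     // change without activity
      | "accomplishment"; // change attendant on performed activity

    interface MessageEnvelope {
      semantics: ActionSemantics;
      timeFrame: "single" | "different";
      addressSpace: "single" | "different";
      mode: "information" | "requestForInformation" | "requestForAction";
      content: unknown; // business specific, outside communication semantics
    }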

Organizing business processes along those principles would enable the alignment of BPMs with their UML counterparts.

Use Cases as Bridges between BPM & UML

Use cases can be seen as the default entry point for UML modeling, and as such they should provide a bridge from business process models. That can be achieved if use cases are understood as a combination of interactions and activities, the former obtained from communications as defined above, the latter from business logic.

Use Cases are best understood as a combination of interactions and business logic
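
Reduced to the bone, that understanding can be sketched with two hypothetical structures, keeping interactions (communication semantics) apart from activities (business logic):

    // Interactions carry communication semantics only.
    interface Interaction {
      trigger: string; // business event mapped to a system one
      actor: string;   // organizational role mapped to a system user
      message: string; // envelope as sketched in the previous section
    }

    // A use case combines interactions with references to business logic.
    interface UseCase {
      interactions: Interaction[];
      activities: string[]; // independently defined business logic
    }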

One step further, the distinction between communication semantics and business contents could be used to align models respectively with systems and software architectures.

Last but not least, the consolidation of models contents at architecture level would be congruent with modeling languages, BPM and UML.

Further Reading

External Links

Use Cases Shouldn’t Know About Classes

January 5, 2015

Summary

Use cases are meant to describe how users interact with systems; classes are meant to describe software components, including those executing use cases. It ensues that classes are introduced with the realization of use cases, but are not supposed to appear as such in their definition.

Users are not supposed to know about surrogates

The Case for Use Cases

Use cases (UCs) are the brainchild of Ivar Jacobson and are often considered the main innovation introduced by UML. Their success, which largely exceeds UML’s own footprint, can be explained by their focus and simplicity:

  • Focus: UCs are meant to describe what happens between users and systems. As such they are neatly bounded with regard to their purpose (UCs are the detailed parts of business processes supported by systems) and realization (UCs are implemented by software applications).
  • Simplicity: while UCs may also include formal (e.g pre- and post-conditions) and graphical (e.g activity diagrams) specifications, they can be fully defined and neatly circumscribed using stick actors (for the roles played by users or any other system) and ellipses (for system behaviors).
Use Cases & UML diagrams

As it often happens to successful innovations, use cases have been widely interpreted and extended; nonetheless, the original concepts introduced by Ivar Jacobson remain basically unaltered.

The Point of Use Cases

Whereas focus and simplicity are clearly helpful, the primary success factor is that UCs have a point, namely they provide a conceptual bridge between business and system perspectives. That appears clearly when UCs are compared to main alternatives like users’ stories or functional requirements:

  • Users’ stories are set from a business perspective and lack explicit constructs for the parts supported by systems. As a consequence they may struggle to identify and describe business functions meant to be shared across business processes.
  • Conversely, functional requirements are set from a system perspective and have no built-in constructs linking business contexts and concerns to their system counterparts. As a consequence they may fall short if business requirements cannot be set upfront or are meant to change with business opportunities.

Along that understanding, nothing should be done to UCs that could compromise their mediating role between business value and system capabilities, the former driven by changes in business environment and enterprise ability to seize opportunities, the latter by the continuity of operations and the effective use of technical or informational assets.

Business Objects vs Software Components

Users’ requirements are driven by concrete, partial, and specific business expectations, and it’s up to architects to weld those diverse and changing views into the consistent and stable functional abstractions that will be implemented by software components.

Users’ requirements are driven by concrete, partial, changing and specific concerns, but supported by stable and fully designed software abstractions.

Given that double discrepancy of objectives and time-scales, business analysts should not try to align their requirements with software designs, and system analysts should not try to second-guess their business counterparts with regard to future business objects. As a consequence, respective outcomes would be best achieved through a clear separation of concerns:

  • Use cases deal with the business value of applications, mapping views on business objects to aspects of classes.
  • Functional architectures deal with assets, in particular the continuous and consistent representation of business objects by software components as described by classes.
How to get best value from assets

As it happens, that double classification with regard to scope and purpose should also be used to choose a development model: agile when scope and purpose can be united, phased approach otherwise.

Further Reading

 

Ergonomy, Fingertips Errors & Automated Testing

February 10, 2014

Objective

When interacting with systems, users do things they aren’t supposed to do and walk along irrelevant, even unthinkable, paths that can put test designers at a loss. This apparent chink between users’ conscious selves and their fingertips can be explained by the way humans assess situations and make decisions. Curtailing it is the aim of ergonomics.

Anatomy of Errors: from brain to fingers (Rembrandt)

Taking a leaf from A. Tversky and D. Kahneman (the latter awarded the 2002 Nobel Prize in Economics), decision-making relies on two cognitive mechanisms:

  1. The first one “operates automatically and quickly, with little or no effort and no sense of voluntary control”. It’s put in use when actual situations must be assessed and decisions taken rapidly if not instantly.
  2. The second one “allocates attention to the effortful mental activities that demand it, including complex computations”. It’s put in use when situations can be assessed with regard to past experience in order to support informed decisions making.

That distinction can be directly applied to users’ behaviors interacting with systems:

  1. Intuitive behavior: decisions are taken on the basis of the visual context and options as presented by users interfaces before taking into account underlying business contents and logic.
  2. Rational behavior: decisions are taken on the basis of business contents and logic disregarding supporting systems interfaces.

Set in context, that distinction can be put in parallel (but not confused) with the one between domain and functional requirements, the former dealing rationally with business objects and logic, the latter putting the former to use through interactions with supporting systems.

Functional requirements describe the part played by supporting systems

Assuming that business logic should not be contingent on supporting systems interfaces, the best option would be to test its implementation independently of users interactions; moreover, tests targeting intuitive behaviors (i.e not directly based on domain specific contents) could then be generated automatically.

Looking for Errors

Given that testing is meant to find flaws in deliverables, tests are certainly more effective when designers know what they are looking for.

For that purpose phased approaches rely on sequences of differentiated tests dealing successively with programming (unit tests), functional requirements (integration tests), and business requirements (acceptance tests). The unfortunate downside of those policies is that the most wide-ranging flaws are the last to be looked for, with the risk of being found after cascading and costly consequences for functionalities and programs.

Phased and Iterative approaches to tests

Conversely, agile approaches follow iterative policies, with each development cycle combining the definition, programming, and testing of software products. When properly implemented, those policies significantly improve the early detection and correction of errors, whatever their origin. Yet, since there is no explicit management of intermediate outcomes, it’s difficult to differentiate the tests according to the kind of errors to look for, e.g faulty implementation of business rules or flawed user interfaces.

Architecture driven approaches may provide an answer, with requirements unambiguously sorted out depending on their architectural footprint: business contents or system functionalities. As a corollary, tests could also be designed along the same lines, targeting business rationale or human behavior.

Errors in Mirrors

Acceptance tests being performed with regard to requirements, they should be designed along requirements taxonomy, respectively for business logic, users’ interactions, quality of services, and components implementation. Being aligned on requirements, those tests can be neatly defined with regard to closed sets of specifications, functional or otherwise.

Functional tests have to expect the unexpected

But that’s not the case for users’ interactions, because people’s behaviors are not fully predictable; hence, while tests can be systematically designed with regard to the set of users’ actions framed by business and functional requirements, there is no way to comprehensively and unambiguously check for each and every possible behavioral contingency. That will make for three levels of functional tests:

  1. Implementation of business logic: tests should be designed directly from business requirements, independently of interactions with users.
  2. Implementation of scenarii: while interactions are defined in reference to business logic, their validation should focus on the presentation of contents and dialog control.
  3. Users exceptions: in addition to inputs validity, already checked with business logic, and users’ actions, supposedly secured by interaction scenarii, it is necessary to check that unexpected behaviors have been properly considered.
How to check that unexpected behaviors have been properly considered?

In other words, functional tests will have to look simultaneously for errors in software (defined with regard to a finite set of requirements), and for users’ mistakes (set in an open range of behaviors). It is as if test designers were to mirror users’ errors in order to look for software ones. So, assuming that errors in business logic and interactions have been considered, what should still be checked, and how?

Fingertips Errors

When faced with choices, users bank on mental maps combining graphical and business layers, with the implicit assumption that maps’ contexts and concerns are kept up to date. Those maps combine three communication mechanisms:

  • Languages, natural or specific, use syntax and semantics to define business contents, logic, and operations.
  • Icons use similarity for the visual representation of business operations or functional primitives (e.g create, delete, etc).
  • Signals use proximity to draw users’ attention to predefined events (e.g sounds for operations completion or incoming emails).

While language-based interactions are supposedly fully covered by business and functional tests, icons and signals make room for “fingertips” reactions which cannot be directly framed within business logic or functional scenarii, and therefore cannot be comprehensively checked for erroneous behaviors.

Icons and signal based communication can trigger unexpected behaviors.

Yet, if instinctive reactions preclude rational considerations, decisions may be swayed by analogies and associations before being informed by the relevant business contents. To prevent that risk, test scenarii built on business logic and functional interactions should be extended in order to take into account the intuitive aspects of users’ behaviors.

Mental Maps & Automated Tests

As noted above, mental maps are built on three layers, one deep (language semantics) and two shallow (icons and signals). While the shallow layers are supposed to reference the deep one, icons and signals may induce instinctive behaviors independently of the referenced business logic. Those behaviors can be triggered by two kinds of mechanisms:

  • Analogy: users will look for similarities and familiar configurations.
  • Proximity: users will look for continuity with regard to scope and operations.

Clearly, lapses in such behaviors will normally escape tests designed for business and functional requirements; yet, by being driven by self-contained mechanisms, intuitive behaviors can be checked independently of references to business contents. And that may open the door to automated tests generation.

With regard to similarities, tests should look for possible confusion between:

  • Objects with common representation but specific features (inheritance).
  • Operations with shared semantics but different scope (polymorphism).
  • Sequences with shared operations but different timing.

With regard to proximity, tests should look for possible confusion between:

  • Objects and their parts, or between their parts (structural proximity).
  • Operations usually associated into the same activity (functional proximity).
  • Operations usually executed successively (chronological proximity).

Scripts for such tests could be generated through pattern-matching and run by wizard applications.
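
What such pattern-matching could look like can be sketched from a hypothetical inventory of interface elements, with similarity and proximity reduced to two crude predicates; each flagged pair would seed a generated test script.

    interface UiElement {
      id: string;
      icon: string;     // visual representation
      scope: string;    // objects targeted by the operation
      activity: string; // activity the operation belongs to
    }

    // Similarity: same icon but different scope (possible polymorphism slip).
    // Proximity: same activity (possible functional-proximity slip).
    function confusionCandidates(elements: UiElement[]): [UiElement, UiElement][] {
      const pairs: [UiElement, UiElement][] = [];
      for (let i = 0; i < elements.length; i++) {
        for (let j = i + 1; j < elements.length; j++) {
          const a = elements[i], b = elements[j];
          const similarity = a.icon === b.icon && a.scope !== b.scope;
          const proximity = a.activity === b.activity;
          if (similarity || proximity) pairs.push([a, b]);
        }
      }
      return pairs;
    }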

Further Reading

External Links

From Processes to Services

November 24, 2013

Objective

Even in the thick of perplexing debates, enterprise architects often agree on the meaning of processes and services, the former set from a business perspective, the latter from a system one. Considering the rarity of such a consensus, it could be used to rally the different approaches around a common understanding of some of EA’s objectives.

Service meets Process

A Governing Dilemma

Systems have long been of three different species that communicated but didn’t interbreed: information systems calmly processed business records, industrial ones tensely controlled physical devices, and embedded ones lived their whole lives stowed away within devices. Lately, and contrary to the natural law of evolution, those three species have started to merge into a versatile and powerful new breed keen to colonize the whole of the enterprise ecosystem.

When faced with those pervading systems, enterprises usually waver between two policies, containment or integration: the former struggles to weld and confine all systems within technology boundaries, the latter tries to fragment them and share out the pieces among whichever business units are ready to take charge.

While each approach may provide acceptable compromises in some contexts, both suffer critical flaws:

  • Centralized solutions constrict business opportunities and innovation by putting all concerns under a single unwieldy lid of technical constraints.
  • Federated solutions rely on integration mechanisms whose increasing size and complexity put the whole of systems integrity and adaptability on the line.

Service oriented architectures may provide a way out of this dilemma by introducing a functional bridge between enterprise governance and systems architectures.

Separation of Concerns

Since governance is meant to be driven by concerns, one should first consider the respective rationales behind business processes and system functionalities, the former driven by contexts and opportunities, and the latter by functional requirements and platforms implementation.

While business processes usually involve various degrees of collaboration between enterprises, their primary objective is to fulfill each one’s very specific agenda, namely to beat the others and be the first to take advantage of market opportunities. That puts systems at the crossing of a dual perspective: from a business point of view they are designed to provide a competitive edge, but from an engineering standpoint they aim at standards and open infrastructures. Clearly, there is no reason to assume that those perspectives should coincide, one being driven by changes in competitive environments, the other by the continuity and interoperability of systems platforms. That’s where Service Oriented Architectures should help: by introducing a level of indirection between business processes and systems functionalities, services naturally allow for the mapping of requirements to architecture capabilities.

Services provide a level of indirection between business and system concerns.

Along that reasoning (and the corresponding requirements taxonomy), the design of services would be assessed in terms of optimization under constraints: given enterprise organization and objectives (business requirements), the problem is to maximize the business value of supporting systems (functional requirements) within the limits set by implementation platforms (non functional requirements).

Services & Capabilities

Architectures and processes are orthogonal descriptions, respectively of enterprise assets and activities. Looking for the footprint of supporting systems, the first step is to consider how business processes should refer to architecture capabilities:

  • From a business perspective, i.e disregarding supporting systems and platforms, processes can be defined in terms of symbolic objects, business logic, and the roles of agents, devices, and systems.
  • The functional perspective looks at the role of supporting systems; as such, it is governed by business objectives and subject to technical constraints.
  • From a technical perspective, i.e disregarding the symbolic contents of interactions between systems and contexts, operational processes are characterized by the nature of interfaces (human, devices, or other systems), locations (centralized or distributed), and synchronization constraints.

Service oriented architectures typify the functional perspective by factoring out the symbolic contents of system functionalities, introducing services as symbolic hinges between enterprise and system architectures. And when defined in terms of customers, messages, contract, policy, and endpoints, services can be directly mapped to architectures capabilities.

Services are a perfect match for capabilities

Moreover, with services defined in terms of architecture capabilities, the divide between business and operational requirements can be drawn explicitly:

  • Actual (external) entities and their symbolic counterpart: services only deal with symbolic objects (messages).
  • Actual entities and their roles: services know nothing about physical agents, only about symbolic customers.
  • Business logic and processes execution: contracts deal with the processing of symbolic flows, policies deal with operational concerns.
  • External events and system time: service transactions are ACID, i.e from the customer’s standpoint they appear to be timeless.

Those distinctions are used to factor out the common backbone of enterprise and system architectures, and as such they play a pivotal role in their alignment.

Anchoring Business Requirements to Supporting Systems

Business processes are meant to meet enterprise objectives given contexts and resources. But if the alignment of enterprise and system architectures is to be shielded from changes in business opportunities and platforms implementation, system functionalities will have to support a wide range of shifting business goals while securing the continuity and consistency of shared resources and communication mechanisms. In order to conciliate business changes with system continuity, business processes must be anchored to objects and activities whose identity and semantics are set at enterprise level independently of the part played by supporting systems:

  • Persistent units (aka business objects): structured information uniquely associated to identified individuals in business context. Life cycle and integrity of symbolic representations must be managed independently of business processes execution.
  • Functional and execution units: structured activity triggered by an event identified in business context, and whose execution is bound to a set of business objects. State of symbolic representations must be managed in isolation for the duration of process execution.
Services can be defined according to persistency and functional units (#)

The coupling between business units (persistent or transient) identified at business level and their system counterpart can be secured through services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).

It must be noted that while services specifications for customers, messages, contracts, and policy are identified at business level and completed at functional level, that’s not the case for endpoints since services locations are set at architecture level independently of business requirements.
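
Summing up, such a service specification could be typed as below, with endpoints kept optional since they are set at architecture level; this is a sketch, not a SoaML definition.

    interface ServiceSpecification {
      customers: string[];  // business processes calling the service
      messages: string[];   // symbolic counterparts of business objects
      contract: string;     // business logic processing the symbolic flows
      policy: string;       // business operations and execution concerns
      endpoints?: string[]; // set at architecture level, not by business
    }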

Filling out Functional Requirements

Functional requirements are set in two dimensions, symbolic and operational; the former deals with the contents exchanged between business processes and supporting systems with regard to objects, activities and events, or actors; the latter deals with the actual circumstances of the exchanges: locations, interfaces, execution constraints, etc.

Given that services are by nature shared and symbolic, they can only be defined between systems. As a corollary, when functionalities are slated as services, a clear distinction should be maintained between the symbolic contents exchanged between business processes and supporting systems, and the operational circumstances of actual interactions with actors.

Interactions: symbolic and local (a), non symbolic and local (b), symbolic and shared (c).

Depending on the preferred approach for requirements capture, symbolic contents can be specified at system boundaries (e.g use cases), or at business level (e.g users’ stories). Regardless, both approaches can be used to flesh out the symbolic descriptions of functional and persistency units.

From a business process standpoint, users (actors in UML parlance) should not be seen as agents but as the roles agents play in enterprise organization, possibly with constraints regarding the quality of service at entry points. That distinction between agents and roles is critical if functional architectures are to dissociate changes in business processes on one hand, platform implementation on the other hand.

Along that understanding actors triggering use cases (aka primary actors) can represent the performance of human agents as well as devices or systems. Yet, as far as symbolic flows are concerned, only human agents and systems are relevant (devices have no symbolic capabilities of their own). On the receiving end of use cases (aka secondary actors), only systems are to be considered for supporting services.

Mapping Processes to Services (through Use Cases)

Hence, when requirements are expressed through use cases, and assuming they are to be realized (fully or partially) through services:

  • Persistency and functional units identified by business processes would be mapped to messages and contracts.
  • Business processes would fit service policy.
  • Use case containers (aka systems) would be registered as service customers.

Alternatively, when requirements are set from users’ stories instead of use cases, persistency and functional units have to be elicited through stories, traced back to business processes, and consolidated into features. Those features will be mapped into system functionalities possibly, but not necessarily, implemented as services.

Mapping Processes to Services (through Users’ Stories)

Hence, while the mapping of business objects and logic respectively to messages and contracts will be similar with use cases and users’ stories, paths will differ for customers and policies:

  • Given that use cases deal explicitly with interactions at system boundaries, they represent a primary source of requirements for services’ customers and policy. Yet, as services are not supposed to be directly affected by interactions at systems boundaries, those elements would have to be consolidated across use cases.
  • Users’ stories for their part are told from a business process perspective that may take boundaries and actors into account but is not focused on them. Depending on the standpoint, it should be possible to define customer and policy requirements for services independently of the contingencies of local interactions.

In both cases, it would be necessary to factor out the non symbolic (aka non functional) part of requirements.

Non Functional Requirements: Quality of Service and System Boundaries

Non functional requirements are meant to set apart the constraints on systems’ resources and performances that could be dealt with independently of business contents. While some may target specific business applications, and others encompass a broader range, the aim is to separate business from architecture concerns and allocate the responsibilities (specification, development, and acceptance) accordingly.

Assuming an architecture of services aligned on capabilities, the first step would be to sort operational constraints:

  • Customers: constraints on usability, customization, response time, availability, …
  • Messages: constraints on scale, confidentiality, compliance with regulations, …
  • Contracts: constraints on scale, confidentiality, …
  • Policy: availability, reliability, maintenance, …
  • Endpoints: costs, maintenance, security, interoperability, …
Non functional constraints may cut across services and capabilities

Since constraints may cut across services and capabilities, non functional requirements are not a given but the result of explicit decisions about:

  • Architecture level: should the constraint be dealt with locally (interfaces), at functional level (services), or at technical level (resources)?
  • Services: when set at functional level, should the constraint be dealt with by business services (e.g domain or activity), or by architecture ones (e.g authorization or orchestration)?

The alignment of services with architecture capabilities will greatly enhance the traceability and rationality of those decisions.
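
As a minimal sketch (the constraint examples and names are illustrative only), those decisions could be recorded explicitly, each constraint being allocated a target and an architecture level:

```python
from dataclasses import dataclass
from enum import Enum

class Target(Enum):
    """Service element the constraint bears on."""
    CUSTOMERS = "customers"
    MESSAGES = "messages"
    CONTRACTS = "contracts"
    POLICY = "policy"
    ENDPOINTS = "endpoints"

class Level(Enum):
    """Architecture level at which the constraint is dealt with."""
    LOCAL = "interfaces"
    FUNCTIONAL = "services"
    TECHNICAL = "resources"

@dataclass
class Constraint:
    description: str
    target: Target
    level: Level  # an explicit decision, not a given

decisions = [
    Constraint("response time under 2 seconds", Target.CUSTOMERS, Level.LOCAL),
    Constraint("payload confidentiality", Target.MESSAGES, Level.FUNCTIONAL),
    Constraint("interoperability", Target.ENDPOINTS, Level.TECHNICAL),
]

# Sorting decisions by level keeps responsibilities traceable.
for level in Level:
    print(level.value, [c.description for c in decisions if c.level is level])
```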

A Simple Example

This example is based on the Purchase Order case published with the OMG specifications: http://www.omg.org/spec/SoaML/1.0/Beta2/PDF/

A simple purchase order process analyzed in terms of service customers, messages and entities (#), contracts, and policy (aka choreography)

Further Reading

Modeling Languages: Differences Matter

November 2, 2013

“Since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.”

Jonathan Swift, Gulliver’s Travels

Objective

Modeling languages are meant to support the description of contexts and concerns from specific standpoints. And because those different perspectives have to be mapped against shared references, languages must also provide constructs for ironing out variants and factoring out constants.

Ironing out differences in features and behaviors (Erwitt Elliott)

Yet, while most languages generally agree on basic definitions of objects and behaviors, many distinctions are ignored or subject to controversial understanding; such shortcomings may be critical where architecture capabilities are concerned:

  • Actual entities and their symbolic counterpart.
  • Actual entities and their roles.
  • Business logic and business operations.
  • External events and system time.

Making Differences

As those distinctions set the backbone of functional architectures, languages should be assessed according to their ability to express them unambiguously with the smallest possible set of constructs.

Business objects vs Symbolic Surrogates

As far as symbolic systems are concerned, the primary purpose of models is to distinguish between targets and representations. Hence the first assessment yardstick: modeling languages must support a clear and unambiguous distinction between objects and behaviors of concern on the one hand, and their symbolic system surrogates on the other.

A primer on representation: objects of concerns and surrogates

Yet, if objects and behaviors are to be consistently and continuously managed, modeling languages must also provide common constructs mapping identities and structures of actual entities to their symbolic counterpart.
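
A bare-bones sketch (the identifiers and the registry are hypothetical) of such a mapping, keeping the identities of actual entities and their symbolic surrogates aligned:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BusinessEntity:
    """Actual object of concern, identified within its business context."""
    business_id: str  # e.g. a vehicle's VIN or a customer number

@dataclass
class Surrogate:
    """Symbolic counterpart managed by the system."""
    system_id: int
    represents: BusinessEntity  # mapping back to the actual entity
    state: dict = field(default_factory=dict)

registry = {}

def surrogate_for(entity):
    """Guarantee one and only one surrogate per actual entity, so that
    objects can be consistently and continuously managed."""
    if entity.business_id not in registry:
        registry[entity.business_id] = Surrogate(len(registry) + 1, entity)
    return registry[entity.business_id]

car = BusinessEntity("VIN-123")
assert surrogate_for(car) is surrogate_for(car)  # same entity, same surrogate
```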

Agents vs Roles

Given that systems are built to support business processes, modeling languages should enable a clear distinction between physical entities able to interact with systems on the one hand, and roles as defined by enterprise organization on the other.

Agents and Roles

In that case the mapping of actual entities to system representations is not about identities but about functionalities: a level of indirection has to be introduced between enterprise organization and system technology, because the same roles can be played, simultaneously or successively, by people, devices, or other systems.
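
That level of indirection can be sketched with an interface standing for the role and interchangeable agents playing it (the role and agents below are made up for the example):

```python
from abc import ABC, abstractmethod

class Customer(ABC):
    """Role defined by enterprise organization, independent of technology."""
    @abstractmethod
    def submit_order(self, order): ...

class Person(Customer):
    """Human agent playing the role at a counter or terminal."""
    def submit_order(self, order):
        print("order entered by clerk:", order)

class PartnerSystem(Customer):
    """Another system playing the same role through an API."""
    def submit_order(self, order):
        print("order received from partner:", order)

def take_order(customer, order):
    """Organization-level logic, agnostic of who or what plays the role."""
    customer.submit_order(order)

take_order(Person(), {"item": "widget"})
take_order(PartnerSystem(), {"item": "widget"})
```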

Business logic vs Business operations

Just as actual objects are not to be confused with their symbolic descriptions, modeling languages must make a clear distinction between business logic and process execution: the former defines how to process symbolic objects and flows, the latter deals with the coupling between process execution and changes in actual contexts. That distinction is of particular importance if business and organizational decisions are to be made independently of supporting systems technology.

Business logic and operational contingencies

Language constructs must also support the consolidation of functional and operational units: the former are defined by integrity constraints on symbolic flows and roles’ authorizations on objects and operations; the latter take into account synchronization constraints between the state of actual contexts and the state of their symbolic counterparts. For that purpose languages must support the distinction between external and internal events.
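
A toy illustration (the approval rule and the stock gateway are invented) of keeping business logic as pure functions over symbolic flows, with operational coupling confined to process execution:

```python
def approve_order(order, credit_limit):
    """Business logic: a pure decision over symbolic objects and flows."""
    return order["amount"] <= credit_limit

class StockGateway:
    """Stand-in for the actual context (warehouse, devices, ...)."""
    def reserve(self, item):
        print("reserved in actual stock:", item)

class OrderProcess:
    """Process execution: couples the symbolic decision with changes in
    the actual context, under synchronization constraints."""
    def __init__(self, stock):
        self.stock = stock

    def execute(self, order, credit_limit):
        if not approve_order(order, credit_limit):  # functional unit
            return "rejected"
        self.stock.reserve(order["item"])           # operational unit
        return "approved"

print(OrderProcess(StockGateway()).execute({"item": "widget", "amount": 50}, 100))
```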

External vs Internal Events

Paraphrasing Albert Einstein, time is what happens between events. As a corollary, external time is what happens between context events, and internal time is what happens between system ones. While the two can coincide for single systems and locations, no such assumption should be made for distributed systems set across different locations. In that case modeling languages should support a clear distinction between external events signalling changes set in actual locations, and internal events signalling changes affecting system surrogates.

Time scales are set by events and concerns

Along that perspective, synchronization is to be achieved through the consolidation of time scales. For single locations that can be done using system clocks; across distributed locations, consolidation will entail dedicated time frames and mechanisms set in reference to some initial external event.
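
A minimal sketch, assuming a single shared reference event: external occurrences keep their source stamps, and consolidation adds the system's own stamp so that both time scales can be reconciled:

```python
import time
from dataclasses import dataclass

@dataclass
class ExternalEvent:
    """Change set in an actual location, stamped on its own time scale."""
    name: str
    source_time: float

@dataclass
class InternalEvent:
    """Change affecting system surrogates, stamped by the system clock."""
    name: str
    source_time: float  # kept for consolidation across locations
    system_time: float

REFERENCE = time.time()  # initial external event used as common time origin

def consolidate(event):
    """Map an external occurrence onto the system's time frame without
    losing its external stamp."""
    return InternalEvent(event.name, event.source_time, time.time())

e = consolidate(ExternalEvent("shipment left warehouse", REFERENCE))
print(e.system_time - e.source_time)  # lag between context and surrogates
```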

Conclusion: How to share differences across perspectives

Somewhat paradoxically, multiple modeling languages erase differences by paring down shared descriptions to some uniform lump that can be understood by all. Conversely, agreeing on a set of distinctions that should be supported by every language could provide an antidote to the Babel syndrome.

That approach can be especially effective for the alignment of enterprise and systems architectures as the four distinctions listed above are equally meaningful in both perspectives.

Further reading

Tests in Driving Seats

April 24, 2013

Objective

Contrary to its manufacturing cousin, a long-time devotee of preventive policies, software engineering is still ambivalent regarding the benefits of integrating quality management with development itself. That should certainly raise some questions, as one would expect the quality of symbolic artifacts to be much easier to manage than that of their physical counterparts, if for no other reason than that the former has only to check symbolic outcomes against symbolic specifications while the latter must also overcome the contingencies of non symbolic artifacts.

Walking Quality Hall (E. Erwitt)

Thanks to agile approaches, lessons from manufacturing are progressively being learned, with lean and just-in-time principles making tentative inroads into software engineering. Taking advantage of the homogeneity of symbolic development flows, agile methods have forsaken phased processes in favor of iterative ones, making a priority of continuous, value-driven deliveries to business users. Instead of predefined sequences of dedicated tasks, products are developed through iterations regrouping definition, building, and acceptance into the same cycles. That puts differentiated documentation and models in the back seats, and may also introduce a new paradigm by putting tests in the driving ones.

From Phased to Iterative Tests Management

Traditional (aka phased) processes follow a corrective strategy: tests are performed according to a Last In First Out (LIFO) scheme, for components (unit tests), system (integration), and business (acceptance). As a consequence, faults in functional architecture risk being identified only after components are completed, and flaws in organization and business processes may not emerge before the integration of system functionalities. In other words, the faults with the most wide-ranging consequences may be the last to be detected.

Phased and Iterative approaches to tests

Iterative approaches follow a preemptive strategy: the sooner artifacts are tested, the better. The downside is that, without differentiated and phased objectives, there is a question mark over the kind of specifications against which software products are to be tested; likewise, the question is how results are to be managed across iteration cycles, especially if changing requirements are to be taken into account.

Looking for answers, one should first consider how requirements taxonomy can support tests management.

Requirements Taxonomy and Tests Management

Whatever the methods or forms (users’ stories, use cases, functional specifications, etc.), requirements are meant to describe what is expected from systems, and as such they have two main purposes: (1) to serve as a reference for architects and engineers in software design, and (2) to serve as a reference for tests and acceptance.

With regard to those purposes, phased development models have been providing clearly defined steps (e.g requirements, analysis, design, implementation) and corresponding responsibilities. But when iterative cycles are applied to progressively refined requirements, those “facilities” are no longer available. Nonetheless, since tests and acceptance are still to be performed, a requirements taxonomy may replace phased steps as a testing framework.

Taxonomies being built on purpose, one supporting iterative tests should consider two criteria, one driven by targeted contents, the other by modus operandi:

With regard to contents, requirements must be classified depending on who’s to decide: business and functional requirements are driven by users’ value and directly contribute to business experience; non functional requirements are driven by technical considerations. Overlapping concerns are usually regrouped as quality of service.

Requirements with regard to Acceptance.

That requirements taxonomy can be directly used to build its testing counterpart. As developed by D. Leffingwell (see selected readings), tests should also be classified with regard to their modus operandi, the distinction being between those that can be performed continuously along development iterations and those that are only relevant once products are set within their technical or business contexts. As it happens, those requirements and tests classifications are congruent:

  • Units and component tests (Q1) cover technical requirements and can be performed on development artifacts independently of their functionalities.
  • Functional tests (Q2) deal with system functionalities as expressed by users (e.g with stories or use cases), independently of operational or technical considerations.
  • System acceptance tests (Q3) verify that those functionalities, when performed at enterprise level, effectively support business processes.
  • System qualities tests (Q4) verify that those functionalities, when performed at enterprise level, are supported by architecture capabilities.

Tests Matrix for target and MO (adapted from D. Leffingwell).

Besides the specific use of each criterion in deciding who is to handle tests, and when, combining criteria brings additional answers regarding automation: product acceptance should be performed manually at business level, and preferably by tools at system level; tests performed along development iterations can be fully automated for units and components (white-box), but only partially for functionalities (black-box).

That tests classification can be used to distinguish between phased and iterative tests: the organization of tests targeting products and systems from business (Q3) or technology (Q4) perspectives is clearly not supposed to be affected by development models, phased or iterative, even if resources used during development may be reused. That’s not the case for the organization of the tests targeting functionalities (Q2) or components (Q1).
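
The two criteria can be read as a two-by-two matrix; the sketch below merely restates the four quadrants as data (labels paraphrase the matrix above):

```python
# Rows: what the tests target; columns: when they can be performed.
QUADRANTS = {
    ("technology", "along iterations"): "Q1: unit and component tests",
    ("business",   "along iterations"): "Q2: functional tests",
    ("business",   "in context"):       "Q3: system acceptance tests",
    ("technology", "in context"):       "Q4: system qualities tests",
}

def quadrant(target, modus_operandi):
    return QUADRANTS[(target, modus_operandi)]

print(quadrant("business", "along iterations"))  # Q2: functional tests
```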

Iterative Tests

Contrary to tests aiming at products and systems (Q3 and Q4), those performed on development artifacts cannot be set on fixed and well-defined specifications: being managed within iteration cycles they must deal with moving targets.

Unit and component tests (Q1) are white-box operations meant to verify the implementation of functionalities; as a consequence:

  • They can be performed iteratively on software increments.
  • They must take into account technical requirements.
  • They must be aligned on the implementation of tested functionalities.

Iterative (aka development) tests for technical (Q1) and functional (Q2) requirements.

Hence, if unit and component tests are to be performed iteratively, (1) they must be set against features, and (2) functional tests must be properly documented and available for reuse.

Functional tests (Q2) are black-box operations meant to validate system behavior with regard to users’ expectations; as a consequence:

  • They can be performed iteratively on software increments.
  • They don’t have to take into account technical requirements.
  • They must be aligned on business requirements (e.g users’ stories or use cases).

Assuming (see previous post) a set of stories (a,b,c,d) identified by alternative paths built from features (f1…5), functional tests (Q2) are to be defined and performed for each story, and then reused to test the implementation of the associated features (Q1).

Functional tests are set along stories, units and components tests are set along features.
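
Using the figure's notations, a small sketch (the path contents are invented for the example): stories are identified as paths over features, and each story's functional test chains the unit tests set at its crosses:

```python
# Stories (a,b,c,d) identified as solution paths over features (f1..f5).
stories = {
    "a": ["f1", "f2"],
    "b": ["f1", "f3", "f5"],
    "c": ["f2", "f4"],
    "d": ["f3", "f4", "f5"],
}

# One unit test per (story, feature) cross.
unit_tests = {(s, f): f"test_{s}_{f}" for s, path in stories.items() for f in path}

def functional_test(story):
    """Q2 test for a story, reusing the unit tests of the features (Q1)
    along its path."""
    return [unit_tests[(story, f)] for f in stories[story]]

print(functional_test("b"))  # ['test_b_f1', 'test_b_f3', 'test_b_f5']
```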

At that point two questions must be answered:

  • Given that stories can be changed, expanded, or refined along development iterations, how is the association between requirements and functional tests to be managed?
  • Given that backlogs can be rearranged along development cycles according to changing priorities, how are tests to be updated, traceability managed, and regression prevented?

With model-driven approaches no longer available, one should consider a mirror alternative, namely test-driven development.

Tests Driven Development

Test driven development can be seen as a mirror image of model driven development, a somewhat logical consequence considering the limited role of models in agile approaches.

The core of agile principles is to put the definition, building and acceptance of software products under shared ownership, direct collaboration, and collective responsibility:

  • Shared ownership: a project team groups users and developers and its first objective is to consolidate their respective concerns.
  • Direct collaboration: decisions are taken by team members, without any organizational mediation or external interference.
  • Collective responsibility: decisions about stories, priorities and refinements are negotiated between team members from both sides of the business/system (or users/developers) divide.

Assuming those principles are effectively put to work, there seems to be little room for organized and persistent documentation, as users’ stories are meant to be developed, and products released, in continuity, and changes introduced as new stories.

With such lean and just-in-time processes, documentation, if any, is by nature transient, falling short as a support for test plans and results, even when problems and corrections are formulated as stories and managed through backlogs. In such circumstances, without specifications or models available as development handrails, could that role be taken on by tests?

Given the ephemeral nature of users’ stories, functional tests should take the lead.

To begin with, users’ stories have to be reconsidered. The distinction between functional tests on one hand, unit and component tests on the other hand, reflects the divide between business and technical concerns. While those concerns may be mixed in users’ stories, they are progressively set apart along iteration cycles. It means that users’ stories are, by nature, transitory, and as a consequence cannot be used to support tests management.

The case for features is different. While they cannot be fully defined up-front, features are not transient: being shared by different stories and bound to system functionalities, they are supposed to provide some continuity. Likewise, notwithstanding their changing contents, users’ stories should be soundly identified by solution paths across the problem space.

Paths and Features can be identified consistently along iteration cycles.

That can provide a stable framework supporting the management of development tests:

  • Unit tests are specified from crosses between solution paths (described by stories or scenarios) and features.
  • Functional tests are defined by solution paths and built from unit tests associated to the corresponding features.
  • Component tests are defined by features and built by the consolidation of unit tests defined for each targeted feature according to technical constraints.

The margins of that matrix support the continuous and consistent identification of functional and component tests, whose contents can be extended or updated through changes made to unit tests.
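
Continuing with hypothetical names, the margins can be computed directly from the matrix of crosses: row margins yield functional tests, column margins component tests, and a change to one unit test propagates to both:

```python
# Unit tests at the crosses of solution paths (rows) and features (columns).
crosses = {
    ("a", "f1"): "u_a_f1", ("a", "f2"): "u_a_f2",
    ("b", "f1"): "u_b_f1", ("b", "f3"): "u_b_f3",
}

def functional_test(path):
    """Row margin: the unit tests along one solution path."""
    return [t for (p, _), t in crosses.items() if p == path]

def component_test(feature):
    """Column margin: unit tests consolidated across all paths
    traversing the feature."""
    return [t for (_, f), t in crosses.items() if f == feature]

crosses[("a", "f1")] = "u_a_f1_v2"   # updating one unit test...
print(functional_test("a"))          # ...updates the story's functional test
print(component_test("f1"))          # ...and the feature's component test
```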

One step further, tests can even be used to drive iteration cycles: once features and solution paths are soundly identified, there is no need to swell backlogs with detailed stories whose shelf life will be limited. Instead, development processes would get leaner if extensions and refinements could be directly expressed as unit tests.
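
For instance (the discount rule is entirely made up), a refinement could enter the backlog directly as unit tests rather than as a detailed story:

```python
def apply_discount(total, loyalty_years):
    """Hypothetical refinement of a checkout feature: loyal customers
    get five percent off."""
    rate = 0.05 if loyalty_years >= 2 else 0.0
    return round(total * (1 - rate), 2)

# The refinement is expressed and kept as unit tests, not as a story.
def test_loyal_customers_get_five_percent_off():
    assert apply_discount(100.0, loyalty_years=3) == 95.0

def test_new_customers_pay_full_price():
    assert apply_discount(100.0, loyalty_years=0) == 100.0

test_loyal_customers_get_five_percent_off()
test_new_customers_pay_full_price()
```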

System Quality and Acceptance Tests

Contrary to development tests, which are applied iteratively to programs, system tests are applied to released products and must take into account requirements that cannot be directly or uniquely attached to users’ stories, either because they cannot be expressed from a business perspective or because they are shared concerns best described as features. Tests for those requirements will be consolidated with development ones into system quality and acceptance tests:

  • System Quality Tests deal with performances and resources from the system management perspective. As such they will combine component and functional tests in operational configurations without taking into account their business contents.
  • System Acceptance Tests deal with the quality of service from the business process perspective. As such they will perform functional tests in operational configurations taking into account business contents and users’ experience.

Development Tests are to be consolidated into Product and System Acceptance Tests.

Requirements set too early and quality checks performed too late are at the root of phased processes’ predicaments, and that can be fixed with a two-pronged policy: a preemptive policy based upon a requirements taxonomy organizing problem spaces according to concerns (business value, system functionalities, components designs, platforms configuration); a corrective policy driven by the exploration of solution paths, with developments and releases driven by quality concerns.

Further Reading

External Links