Archive for the ‘Metrics’ Category

Beans Must Be Counted, One Way and the Other

May 2, 2017

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case with the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutinize the resources they employ and the outcomes they produce, pushing for the assessment of internal activities against some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting aside creative delusions, the assessment of software development is confronted with rational as well as practical obstacles.

To begin with rationality: unlike traditional products, software has no market pricing mechanism that could match development costs with customers' value. As a consequence, business stakeholders and systems engineers prefer to play it safe and keep their respective assessments on opposite banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users' points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the dilution of IT systems into business environments is making that conundrum irrelevant: the fusing of business processes with supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics uses prices for two key purposes, one external, the other internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint across both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance; a tally of software beans should not be optional.

Further Reading


Feasibility & Capabilities

May 2, 2015

Synopsis

As far as systems engineering is concerned, the aim of a feasibility study is to verify that a business solution can be supported by a system architecture (requirements feasibility) subject to some agreed technical and budgetary constraints (engineering feasibility).


Project Rain Check (A. Magnaldo)

Where to Begin

A feasibility study is based on the implicit assumption of slack architecture capabilities. But since capabilities are set with regard to several dimensions, architecture boundaries cannot be taken for granted, and decisions may even entail some arbitrage between business requirements and engineering constraints.

Using the well-known distinction between roles (who), activities (how), locations (where), control (when), and contents (what), feasibility should be considered for supporting functionalities (between business processes and systems) and implementation (between functionalities and platforms):


Feasibility with regard to Systems and Platforms

Depending on priorities, feasibility could be considered from three perspectives:

  • Focusing on system functionalities (e.g with use cases) implies that system boundaries are already identified and that the business logic will be defined along with users’ interfaces.
  • Starting with business requirements puts business domains and logic in the driving seat, making room for variations in system functionalities and boundaries.
  • Operational requirements (physical environments, events, and processes execution) put the emphasis on a mix of business processes and quality of service, thus making software functionalities a dependent variable.

In any case a distinction should be made between requirements and engineering feasibility, the former set with regard to architecture capabilities, the latter with regard to development resources and budget constraints.

Requirements Feasibility & Architecture Capabilities

Functional capabilities are defined at system boundaries, and if all feasibility options are to be properly explored, architecture capabilities must be understood as a trade-off between five intrinsic factors, e.g:

  • Security (entry points) and confidentiality (domains).
  • Compliance with regulatory constraints (domains) and flexibility (activities).
  • Reliability (processes) and interoperability (locations).

Feasible options must be set within the Capabilities Pentagon

Feasible options could then be figured out as points set within the capabilities pentagon. Given metrics on functional requirements, their feasibility under the non-functional constraints could be assessed with regard to cross capabilities. And since the same five core capabilities can be consistently defined across enterprise, system, and platform layers, requirements feasibility could be assessed without prejudging architecture decisions.
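
As an illustration, consider a minimal sketch in Prolog, the language used for the logical examples further down this page; the capability/2 and requires/3 facts, their names, and the 1..5 scale are assumptions made up for the example. A requirement is deemed feasible if none of the levels it requires exceeds what the architecture offers:

    % Hypothetical capability levels (scale 1..5) offered by a target architecture.
    capability(security, 4).
    capability(confidentiality, 3).
    capability(compliance, 5).
    capability(reliability, 4).
    capability(interoperability, 2).

    % Hypothetical levels required by individual requirements.
    requires(req1, security, 3).
    requires(req1, interoperability, 2).
    requires(req2, interoperability, 4).

    % A requirement is feasible if every level it requires fits within capabilities.
    feasible(Req) :-
        forall(requires(Req, Cap, Level),
               (capability(Cap, Max), Level =< Max)).

With those figures feasible(req1) succeeds, whereas feasible(req2) fails on interoperability, flagging the option as falling outside the pentagon.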

Business Requirements & Architecture Capabilities

One step further, the feasibility of business and operational objectives (the "Why" of the Zachman framework) can be more easily assessed if set on the outer ring and mapped to architecture capabilities.


Business Requirements and Architecture Capabilities

Engineering Feasibility & ROI

Finally, the feasibility of business and functional requirements under the constraints set by non-functional requirements has to be translated into ROI terms, and for that purpose the business value has to be compared to the cost of engineering the solution given the resources (people and tools), technical requirements, and budgetary constraints.
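
As a back-of-envelope sketch (the roi/4 predicate and all figures are hypothetical), the comparison boils down to setting business value against the sum of engineering outlays and operational costs:

    % Hypothetical ROI sketch: business value against engineering plus operational costs.
    roi(Value, Engineering, Operations, ROI) :-
        Cost is Engineering + Operations,
        Cost > 0,
        ROI is (Value - Cost) / Cost.

For instance, roi(500, 200, 50, R) yields R = 1.0, i.e the value doubles the outlay; alternative solutions can then be ranked on that single figure.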

Architecture Capabilities for Processes, with deontic (basic) and alethic (dashed) dependencies.

ROI assessment mapping business value against functionalities, engineering outlays, and operational costs.

That's where the transparency and traceability of capabilities across layers may be especially useful, when alternatives and priorities are to be considered by mixing functionalities, engineering outlays, and operational costs.

Further Reading

Models Truth and Disproof

January 21, 2015

“If you cannot find the truth right where you are, where else do you expect to find it?”

Dōgen Zenji

Summary

Software engineering models can be regrouped into two categories depending on their target: analysis models represent business context and concerns, design ones represent system components. Whatever the terminologies, all models are to be verified with regard to their intrinsic qualities, and validated with regard to their domain of discourse, respectively business objects and activities (analysis models), or software artifacts (design models).


Internal & External Consistency (Chris Engman)

Checking for internal consistency is arguably straightforward, as proofs can be built with regard to the syntax and semantics of modeling (or programming) languages. Things are more complicated for external consistency, because hypothetical proofs would have to rely on what is known of the business domains, whose knowledge is by nature partial and specific, if not hypothetical. Nonetheless, even if general proofs are out of reach, the truth of models can still be disproved by counterexamples found among the instances under consideration.

Domains of Discourse: Business vs Engineering

With regard to systems engineering, domains of discourse cover artifacts which are, by “construct”, fully defined. Conversely, with regard to business context and objectives, domains of discourse have to deal with instances whose definitions are a “work in progress”.


Domains of Discourse: Business vs Engineering

That can be illustrated by analysis models, which are meant to translate requirements into system functionalities, as opposed to design ones, which specify the corresponding software artifacts. Since software artifacts are supposed to be built from designs, checking the consistency of the mapping is arguably a straightforward undertaking. That's not the case when the consistency of analysis models has to be checked against objects and activities identified by business domains of discourse, possibly with partial, ambiguous, or conflicting descriptions. For that purpose some logic may help.

Flat Models & Logic

Business requirements describe objects, events, and activities, and the purpose of modeling is to identify those instances and regroup them into subsets built according to their features and relationships.

Building descriptions for targeted instances of business objects & activities

How to organize instances of business objects & activities into subsets

As long as models make no use of abstractions ("flat" models), instances can be organized using basic set operators and epistemic (i.e relating to the degree of validation) constraints with regard to existence (m/d), uniqueness (x/o), and change (f/m):


Notation for epistemic constraints

Using the EU-Rent Car example:

  • Rental cars are exclusively and definitively partitioned according to models (mxf).
  • Models are exclusively partitioned according to rental group (mxm), and exclusively and definitively according to body style (mxf).
  • Rental cars are partitioned by derivation (/) according to group and body style.

Flat model using basic set operators for exclusive (cross) and final (grey) partitions

Such models are deemed to be consistent if all instances are consistently taken into account.

Flat Models & External Consistency

Assuming that a model's backbone can be expressed logically, its consistency can be formally verified using a logical language, e.g Prolog.

To begin with, candidate subsets are obtained by combing requirements for core modeling artifacts expressed as predicates (21 for descriptions of actual objects, 121 for descriptions of actual locations, 20 for descriptions of symbolic ones, 22 for descriptions of symbolic partitions), e.g:

  • type(20, manufacturer).
  • type(21, rentalCar).
  • type(22,  carModel).
  • type(22, rentalGroup).
  • type(22,  bodyStyle).
  • type(121, depot).

Partitions and functional connectors (220 for symbolic reference, 222 for partitions, 221 for actual connection), e.g:

  • connect(222, rentalCar,carModel, mxf).
  • connect(222, carModel, rentalGroup, mxm).
  • connect(222, carModel, bodyStyle,mxf).
  • connect(220, manufacturer_, carModel, manufacturer, mof).
  • connect(121, location, rentalCar, depot, mxt).

Finally, features and structures (320 for properties, 340 for operations), e.g:

  • feature(340, move_to, depot).
  • feature(320, address).
  • feature(320, location).
  • member(manufacturer,address,mom).
  • member(rentalCar,location,mxm).
  • member(rentalCar,move_to,mxm).

Those candidate descriptions are to be assessed and improved by applying them to sets of identified occurrences taken from requirements. The objective is to map each instance to a description: instance(name, term()), e.g:

  • instance(sedan,carModel(f1(F1),f2(F2))).
  • instance(coupe,carModel(f1(F1),f2(F2))).
  • instance(ford, manufacturer(f6(F6),f7(F7))).
  • instance(focus, rentalCar(f6(F6),f7(F7))).
  • instance_(manufacturer_,focus,ford).

Using a logical interpreter, validation can then be carried out iteratively by looking for counterexamples that could disprove the truth of the representations (a sketch follows the list):

  • All instances are taken into account: there is no instance N without instance(N,Structure).
  • Logical consistency: there is no instance N with conflicting partitioning (native and derived).
  • Completeness: there is no instance type(X,N,T(f1,f2,..)) with undefined feature fi.
  • Functional consistency: there is no instance of relation R (native and derived) without a consistent type relation(X, R, Origin, Destination, Epistemic).
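
As a minimal sketch of the first two checks (the occurrence/1, partition/2, subset_of/2, and member_of/2 predicates are assumed here for illustration, they are not part of the notation above), disproof amounts to queries that succeed only on counterexamples:

    % Counterexample: an occurrence identified in requirements without a description.
    unaccounted(N) :-
        occurrence(N),
        \+ instance(N, _).

    % Counterexample: an instance assigned to two subsets of an exclusive (x) partition.
    conflicting(N, P) :-
        partition(P, x),
        subset_of(S1, P), subset_of(S2, P), S1 \= S2,
        member_of(N, S1), member_of(N, S2).

    % The model stands as long as no counterexample can be exhibited.
    consistent :-
        \+ unaccounted(_),
        \+ conflicting(_, _).

The point of the scheme is its falsifiability: validation never proves a model true, it only fails to disprove it on the instances at hand.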

It must be noted that the approach is not limited to objects and is meant to encompass the whole scope of requirements: actual objects, symbolic representations, business logic, and processes execution.

Multilevel Models: From Partitions to Sub-types

Flat models fall short when specific features are to be added to the elements of partition subsets, and in that case sub-types must be introduced. Yet, and contrary to partitions, sub-types come with abstractions: set within a flat model (i.e without sub-types), Car model fully describes all instances, but when the sub-types sedan, coupe, and convertible are introduced, the Car model base type is nothing more than a partial (hence abstract) description.


From partition to sub-types: subset descriptions are supplemented with specific features.

Whereas that difference may seem academic, it has direct and practical consequences when validation is considered because consistency must then be checked for every level, concrete or abstract.

LSP & External Consistency

As it happens, the corresponding problem has been tackled by Barbara Liskov for software design: the so-called Liskov substitution principle (LSP) states that if S is a sub-type of T, then instances of T may be replaced with instances of S without altering any of the desirable properties of the program.

Translated to analysis models, the principle would state that, given a set of instances, a model must keep its consistency independently of the level of abstraction considered. As a corollary, and assuming a model abides by the substitution principle, it would be possible to generalize the external consistency of a detailed level to the whole model whatever the level of abstraction. Hence the importance of compliance with the substitution principle when introducing sub-types in analysis models.


All instances must be accounted for whatever the level of abstraction

Applying the Substitution Principle to Analysis Models

Abstraction is arguably the essence of requirements modeling as its purpose is to bring specific and changing concerns under a common, consistent, and lasting conceptual roof. Yet, the two associated operations of specialization and generalization often receive very little scrutiny despite the fact that most of the related pitfalls can be avoided if the introduction of sub-types (i.e levels of abstraction) is explicitly justified by partitions. And that can be achieved by the substitution principle.

First of all, and as far as requirements analysis is concerned, sub-types should only be introduced for specific features, properties or operations. Then, epistemic constraints can be used to tally the number of specialized instances with the number of generalized ones, and check for the possibility of functional discrepancies:

  • Discretionary (or conditional or non exhaustive) partitions (d__) may bring about more instances for the base type (nb >= ∑nbi).
  • Overlapping (or duplicate or non isolated) partitions (_o_) may bring about less instances for the base type (nb <= ∑nbi).
  • Assuming specific features, mutable (or reversible) partitions (__m) mean that features may differ between levels; otherwise (same features) sub-types are not necessary.

Epistemic constraints on partitions can be used to enforce the LSP

Using a Prolog-like language, the only modification will concern the syntax of predicates, with structures replaced by lists of features:

  • type(20, manufacturer,[f6,f7]).
  • type(21, rentalCar,[f5]).
  • type(22,  carModel,[f1,f2]).
  • type(22, rentalGroup,[f9]).
  • type(22,  bodyStyle,[f8]).
    • type(20, bodyStyle:sedan, [f11,f12]).
    • type(20, bodyStyle:coupe, [f13]).
    • type(20, bodyStyle:convertible, [f14]).
  • type(121, depot,[f10]).

The logical interpreter could then be used to map the sub-types to partitions and check for substitution, as sketched below.
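
For instance, a sketch of the tally (the counting predicates are assumptions; the instance/2 facts are those introduced above, with sub-type instances expected in the same form): for a mandatory and exclusive partition (mx_), the count of base type instances must equal the sum over its sub-types, whereas discretionary (d__) partitions allow more and overlapping (_o_) ones allow less.

    % Count the instances whose description is built on a given type.
    count(Type, N) :-
        aggregate_all(count,
                      (instance(_, S), functor(S, Type, _)),
                      N).

    % Substitution holds for a mandatory exclusive partition when no instance
    % is lost or duplicated between the base type and its sub-types.
    substitutable(Base, SubTypes) :-
        count(Base, Nb),
        aggregate_all(sum(Ni),
                      (member(Sub, SubTypes), count(Sub, Ni)),
                      Sum),
        Nb =:= Sum.

A query like substitutable(carModel, [sedan, coupe, convertible]) would then disprove substitution whenever instances get lost or duplicated across levels of abstraction.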

Further Reading


Tests in Driving Seats

April 24, 2013

Objective

Contrary to its manufacturing cousin, a long-time devotee of preventive policies, software engineering is still ambivalent regarding the benefits of integrating quality management with development itself. That should raise some questions, as one would expect the quality of symbolic artifacts to be much easier to manage than that of their physical counterparts, if for no other reason than the former only has to check symbolic outcomes against symbolic specifications, while the latter must also overcome the contingencies of non-symbolic artifacts.


Walking Quality Hall (E. Erwitt)

Thanks to agile approaches, lessons from manufacturing are progressively being learned, with lean and just-in-time principles making tentative inroads into software engineering. Taking advantage of the homogeneity of symbolic development flows, agile methods have forsaken phased processes in favor of iterative ones, making a priority of continuous and value-driven deliveries to business users. Instead of predefined sequences of dedicated tasks, products are developed through iterations regrouping definition, building, and acceptance into the same cycles. That pushes differentiated documentation and models into the back seats, and may also introduce a new paradigm by putting tests in the driving ones.

From Phased to Iterative Tests Management

Traditional (aka phased) processes follow a corrective strategy: tests are performed according to a Last In First Out (LIFO) framework, for components (unit tests), system (integration), and business (acceptance). As a consequence, faults in the functional architecture risk being identified after component completion, and flaws in organization and business processes may not emerge before the integration of system functionalities. In other words, the faults with the most wide-ranging consequences may be the last to be detected.


Phased and Iterative approaches to tests

Iterative approaches follow a preemptive strategy: the sooner artifacts are tested, the better. The downside is that without differentiated and phased objectives, there is a question mark over the kind of specifications against which software products are to be tested; likewise, the question is how results are to be managed across iteration cycles, especially if changing requirements are to be taken into account.

Looking for answers, one should first consider how requirements taxonomy can support tests management.

Requirements Taxonomy and Tests Management

Whatever the methods or forms (users' stories, use cases, functional specifications, etc), requirements are meant to describe what is expected from systems, and as such they have two main purposes: (1) to serve as a reference for architects and engineers in software design and (2) to serve as a reference for tests and acceptance.

With regard to those purposes, phased development models have been providing clearly defined steps (e.g requirements, analysis, design, implementation) and corresponding responsibilities. But when iterative cycles are applied to progressively refined requirements, those “facilities” are no longer available. Nonetheless, since tests and acceptance are still to be performed, a requirements taxonomy may replace phased steps as a testing framework.

Taxonomies being built on purpose, one supporting iterative tests should consider two criteria, one driven by targeted contents, the other by modus operandi:

With regard to contents, requirements must be classified depending on who's to decide: business and functional requirements are driven by users' value and directly contribute to business experience; non-functional requirements are driven by technical considerations. Overlapping concerns are usually regrouped as quality of service.


Requirements with regard to Acceptance.

That requirements taxonomy can be directly used to build its testing counterpart. As developed by D. Leffingwell (see selected readings), tests should also be classified with regard to their modus operandi, the distinction being between those that can be performed continuously along development iterations and those that are only relevant once products are set within their technical or business contexts. As it happens, those requirements and tests classifications are congruent:

  • Units and component tests (Q1) cover technical requirements and can be performed on development artifacts independently of their functionalities.
  • Functional tests (Q2) deal with system functionalities as expressed by users (e.g with stories or use cases), independently of operational or technical considerations.
  • System acceptance tests (Q3) verify that those functionalities, when performed at enterprise level, effectively support business processes.
  • System qualities tests (Q4) verify that those functionalities, when performed at enterprise level, are supported by architecture capabilities.

Tests Matrix for target and MO (adapted from D. Leffingwell).

Besides the specific use of each criterion in deciding who's to handle tests, and when, combining criteria brings additional answers regarding automation: product acceptance should be performed manually at business level, and preferably by tools at system level; tests performed along development iterations can be fully automated for units and components (white-box), but only partially for functionalities (black-box).

That tests classification can be used to distinguish between phased and iterative tests: the organization of tests targeting products and systems from business (Q3) or technology (Q4) perspectives is clearly not supposed to be affected by development models, phased or iterative, even if resources used during development may be reused. That’s not the case for the organization of the tests targeting functionalities (Q2) or components (Q1).

Iterative Tests

Contrary to tests aiming at products and systems (Q3 and Q4), those performed on development artifacts cannot be set on fixed and well-defined specifications: being managed within iteration cycles they must deal with moving targets.

Unit and components tests (Q1) are white-box operations meant to verify the implementation of functionalities; as a consequence:

  • They can be performed iteratively on software increments.
  • They must take into account technical requirements.
  • They must be aligned on the implementation of tested functionalities.

Iterative (aka development) tests for technical (Q1) and functional (Q2) requirements.

Hence, if unit and component tests are to be performed iteratively, (1) they must be set against features, and (2) functional tests must be properly documented and available for reuse.

Functional tests (Q2) are black-box operations meant to validate system behavior with regard to users’ expectations; as a consequence:

  • They can be performed iteratively on software increments.
  • They don’t have to take into account technical requirements.
  • They must be aligned on business requirements (e.g users’ stories or use cases).

Assuming (see previous post) a set of stories (a,b,c,d) identified by alternative paths built from features (f1…5), functional tests (Q2) are to be defined and performed for each story, and then reused to test the implementation of associated features (Q1).


Functional tests are set along stories, units and components tests are set along features.

At that point two questions must be answered:

  • Given that stories can be changed, expanded or refined along development iterations, how to manage the association between requirements and functional tests.
  • Given that backlogs can be rearranged along development cycles according to changing priorities, how to update tests, manage traceability, and prevent regression.

With model-driven approaches no longer available, one should consider a mirror alternative, namely test-driven development.

Test Driven Development

Test driven development can be seen as a mirror image of model driven development, a somewhat logical consequence considering the limited role of models in agile approaches.

The core of agile principles is to put the definition, building and acceptance of software products under shared ownership, direct collaboration, and collective responsibility:

  • Shared ownership: a project team groups users and developers and its first objective is to consolidate their respective concerns.
  • Direct collaboration: decisions are taken by team members, without any organizational mediation or external interference.
  • Collective responsibility: decisions about stories, priorities and refinements are negotiated between team members from both sides of the business/system (or users/developers) divide.

Assuming those principles are effectively put to work, there seems to be little room for organized and persistent documentation, as users’ stories are meant to be developed, and products released, in continuity, and changes introduced as new stories.

With such lean and just-in-time processes, documentation, if any, is by nature transient, falling short as a support for test plans and results, even when problems and corrections are formulated as stories and managed through backlogs. In such circumstances, without specifications or models available as development handrails, could that be achieved by tests?


Given the ephemeral nature of users’ stories, functional tests should take the lead.

To begin with, users’ stories have to be reconsidered. The distinction between functional tests on one hand, unit and component tests on the other hand, reflects the divide between business and technical concerns. While those concerns may be mixed in users’ stories, they are progressively set apart along iteration cycles. It means that users’ stories are, by nature, transitory, and as a consequence cannot be used to support tests management.

The case for features is different. While they cannot be fully defined up-front, features are not transient: being shared by different stories and bound to system functionalities, they are supposed to provide some continuity. Likewise, notwithstanding their changing contents, users' stories should be soundly identified by solution paths across the problem space.


Paths and Features can be identified consistently along iteration cycles.

That can provide a stable framework supporting the management of development tests (a sketch follows the list):

  • Unit tests are specified from crosses between solution paths (described by stories or scenarios) and features.
  • Functional tests are defined by solution paths and built from unit tests associated to the corresponding features.
  • Component tests are defined by features and built by the consolidation of unit tests defined for each targeted feature according to technical constraints.
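
A minimal sketch of that framework (the path compositions below are hypothetical, the text only names stories a,b,c,d and features f1…5):

    % Hypothetical composition of solution paths (stories) from features.
    path(a, [f1, f2]).
    path(b, [f1, f3, f4]).
    path(c, [f2, f4]).
    path(d, [f3, f5]).

    % Unit tests are specified from crosses between paths and features.
    unit_test(Path, Feature) :-
        path(Path, Features),
        member(Feature, Features).

    % Functional tests are defined by a path and built from its unit tests.
    functional_test(Path, Units) :-
        path(Path, _),
        findall(unit_test(Path, F), unit_test(Path, F), Units).

    % Component tests are defined by a feature and consolidate its unit tests.
    component_test(Feature, Units) :-
        findall(unit_test(P, Feature), unit_test(P, Feature), Units).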

Those margins (stories on one side, features on the other) support the continuous and consistent identification of functional and component tests, whose contents can be extended or updated through changes made to unit tests.

One step further, tests can even be used to drive iteration cycles: once features and solution paths are soundly identified, there is no need to swell backlogs with detailed stories whose shelf life will be limited. Instead, development processes would get leaner if extensions and refinements could be directly expressed as unit tests.

System Quality and Acceptance Tests

Contrary to development tests, which are applied iteratively to programs, system tests are applied to released products and must take into account requirements that cannot be directly or uniquely attached to users' stories, either because they cannot be expressed from a business perspective or because they are shared concerns and best described as features. Tests for those requirements will be consolidated with development ones into system quality and acceptance tests:

  • System Quality Tests deal with performances and resources from the system management perspective. As such they will combine component and functional tests in operational configurations without taking into account their business contents.
  • System Acceptance Tests deal with the quality of service from the business process perspective. As such they will perform functional tests in operational configurations taking into account business contents and users' experience.

Development Tests are to be consolidated into Product and System Acceptance Tests.

Requirements set too early and quality checks performed too late are at the root of phased processes' predicaments, and that can be fixed with a two-pronged policy: a preemptive policy based upon a requirements taxonomy organizing problem spaces according to concerns (business value, system functionalities, component designs, platform configurations); a corrective policy driven by the exploration of solution paths, with developments and releases driven by quality concerns.

Further Reading

External Links

 

The Economics of Reuse

May 15, 2012

Objectives

Reusing artifacts means using them in contexts that are different from their native ones. That may come by design, when specifications anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.


Economics of Reuse (Sergio Silva)

Planned or opportunistic, reuse brings benefits in terms of costs, quality, and continuity:

  • Cost benefits are most easily achieved for component engineering but may also be obtained upstream with model reuse and patterns.
  • Quality benefits are first and foremost rooted at model level, especially when component implementation is supported by automated tools.
  • Continuity benefits are to be found both along the business (semantics and business rules) and engineering (functional architecture and platform implementations) perspectives.

Reuse policies may also bring positive externalities by inducing a comprehensive approach to software design.

Yet, those policies will usually entail costs and may also bear negative externalities:

  • Artifacts designed for reuse are usually more costly to develop, even if part of the additional costs should be ascribed to quality management.
  • Excessive enforcement policies may significantly hamper projects' ability to meet business needs in time.
  • Managing reusable assets usually induces overheads.

In order to assess those policies, economics of reuse must be set across business, engineering or architecture perspectives:

  • Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
  • Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
  • Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.

That can be achieved by managing development assets along the model driven architecture (MDA) layers: CIMs for business and enterprise architecture, PIMs for systems functional architecture, and PSMs for systems technical architecture.

Contexts & Concerns

Whatever their inception, reused artifacts are meant to be used independently of their native context and concerns: opportunistic reuse will map a specific purpose to another one, planned reuse will map a shared concern to a specific purpose. As a corollary, reuse policies must be supported by some traceability mechanism linking concerns and purposes across contexts and architectures.

From the enterprise perspective, the problem is to reuse the knowledge of business domains and processes. For that purpose different mechanisms can be considered:

  • The simplest solution is to reuse generic components, with the business knowledge directly transferred through parameters.
  • Similarly, business rules can be separately edited and managed in business contexts before being executed by system components (a sketch follows the list).
  • One step further, business semantics and rules can be fenced off with domains and applied to different objects and applications.
  • Finally, models of business objects and processes can be capitalized and managed as reusable assets.
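
As an illustration of the second mechanism, a minimal sketch (the rate thresholds, figures, and predicate names are made up for the example): rules are maintained as data on the business side and executed by a generic component that knows nothing of their content.

    % Business rules edited and managed on the business side (all figures hypothetical).
    rate(gold,   Total, 0.10) :- Total >= 1000.
    rate(silver, Total, 0.05) :- Total >= 500, Total < 1000.
    rate(basic,  _,     0.00).

    % Generic component: applies whichever rule matches, with no business logic inside.
    discount(Total, Rate) :-
        once(rate(_, Total, Rate)).

Updating the thresholds is then a business decision with no impact on the executing component.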

Reuse Policies with Model Driven Architecture

Once business knowledge is duly capitalized as functional assets, those assets can be reused along the engineering perspective:

  • System functionalities: functional patterns are (re)used to map functional requirements to functional architecture, and services are (re)used to support business processes.
  • Software architecture: Object and aspect oriented designs, using inheritance and polymorphism.
  • Software implementations: Component-based development and information hiding.

Along the architecture perspective, information hiding is generalized to systems, and reuse is masked by the definition of services.

It must be noted that, contrary to misleading similarities, refactoring is the opposite of reuse: instead of building from well understood and safe artifacts, it tries to extract some reusable chunks from opaque and unsafe components.

Knowledge Reuse

Enterprise and business knowledge may affect the full scope of system functionalities: boundaries (e.g users' authorizations), controls (e.g accounting rules), and persistency (e.g consistency constraints). Whereas there isn't much to argue about the benefits of reusing enterprise and business knowledge, costs may significantly diverge depending on the way the corresponding assets are managed:

  • Domain specific knowledge is rooted at requirements level. That's typically achieved when use cases are introduced to describe how systems are meant to support business processes. With different use cases targeting different aspects of the same business objects and processes, overlaps must be identified and factored out in order to be reused across processes.
  • Business knowledge may also be global, i.e shared at enterprise architecture level, defining objectives, assets and organization associated with the continuity of corporate identity and business capabilities within a regulatory and market environment.

Use cases' access to shared (a,b) and specific (c,d) knowledge for objects and application domains

In any case, the challenge is to map business knowledge to system models, more precisely to embed reused descriptions of business objects and processes into the corresponding development artifacts. At architecture level the mapping should target object or process identities; at domain level the focus will be on aspects and views.

As epitomized by service oriented architectures, business architectures can be mapped to system ones through delegation, either directly (business processes calling on services), or indirectly (collaboration between services). That will establish a clear distinction between shared (aka global) and domain specific knowledge, and consequently between respective economics of reuse.

  • Given that shared knowledge must be reused across domains and applications, there can be no argument about benefits. That will be achieved by a messaging model built on a need-to-know basis. And since such a model is an intrinsic feature of the functional architecture, it incurs no additional overheads.
  • Specific knowledge for its part is managed at domain level and therefore masked behind service interfaces. Whatever reuse occurs there remains local and an intrinsic part of domain design.

Knowledge Reuse through services (a), boundaries (b), controls (c), and entities (d).

Things are not so clear when business knowledge is not managed by services but distributed across domains, mixing specific and global knowledge. Managing reusable assets would be easy were the distinction between business and functional requirements to coincide with the one between shared and specific knowledge; unfortunately that’s seldom the case, and requirements, functional or business, will have to be sorted out at architecture or design level.

Whereas Service Oriented Architectures (SOA) put functionalities in the driving seat, Enterprise Application Integration (EAI) gives the lead to applications for which it provides adapters. As maintenance of integration adapters is a very poor substitute for knowledge management, reuse is mostly limited to legacy applications.

Maintaining adapters across layers of applications induces significant overheads

At design level knowledge is woven into canonical data models (entities), functional architectures (controls), and user interface frameworks (boundaries).

Mixed concerns: business requirements can be specific, functional ones can be global.

On one hand, tracing reuse to requirements may be problematic as they are by nature concrete and unstructured, hence not the best support for generalization or the factoring out of shared features. Assuming business analysts can nonetheless separate reusable wheat from specific chaff, knowledge management at this level will require a dedicated framework supporting comprehensive and differentiated traceability. Additional overheads will have to be taken into account and compared to potential benefits.

On the other hand, canonical data models and functional architectures are meant to provide unified views of shared objects and semantic domains. Yet, canonical models are by nature unwieldy as they set in stone the structures, features, and connections of business objects, without clear mechanisms to combine shared and specific knowledge. As a corollary, their use may reduce flexibility, and their management usually induces significant overheads.

Artifacts Reuse

With enterprise and business knowledge capitalized as development assets, the engineering case for reuse may appear indisputable, but business cases are often much more controversial due to large overheads and fleeting returns. Taking cues from Barry Boehm ("Managing Software Productivity and Reuse"), here are some of the main pitfalls of artifact reuse:

  • Repository delusion: knowledge being by nature contextual, its reuse is driven by circumstances and purposes; as a consequence the availability of large repositories of development assets will probably be ignored without clear pointers rooted in contexts and concerns.
  • Confusion between components (or structures) and functionalities (or interfaces): under the influence of the object oriented paradigm, the distinction between objects and aspects is all too often forgotten. That’s unfortunate as this difference is congruent with the one between business objects on one hand, business operations on the other hand.
  • Over generalization: reuse is usually achieved by factoring out useful aspects or factoring off useless ones. In both cases the temptation is to repeat the operation until nothing could be added to the scope. Such a "flight to abstraction" will inevitably overshoot the proper level of reuse and beget models void of any anchor to business relevancy.
  • Scalability: while reuse is about separation of concerns and complexity management, those two criteria don’t have to pull in the same direction. When they don’t, variants will be dispersed across artifacts and their processing will suffer a combinatorial explosion if the system has to be scaled up.
  • Obsolescence: shelf lives of development assets can be defined by business or technical relevancy, or both. Whether the spans coincide or are managed independently, they should be explicitly taken into account before any reuse.

Those obstacles can be managed provided that models sort out reuse concerns, for instance with differentiated inheritance:

Sorting out reuse concerns with differentiated inheritance.

Economics of Reuse and Sustainable Development

Sustainable system development is the ability to meet present business requirements while enhancing the system's capability to support future ones. Clearly reuse is not the only factor of sustainability, with architectures, returns, and risk management being pivotal. But the economics of reuse encompass most of the other factors.

Architectures are clearly first to be considered, as epitomized by MDA:

  • Reuse of development assets rooted in enterprise architecture is not an option: system functionalities are meant to support business processes as they are (a).
  • At the other end of the development process, reuse of software designs and components across technical architectures should bring benefits in quality and costs (c).
  • In between, reuse of system functionalities is necessary to guarantee the robustness and continuity of functional architectures; it should also leverage the benefits of reuse of enterprise and development assets (b).

Reuse of models is at the core of the MDA framework

Regarding returns, reuse through generic components, rules engines, or semantic domains can be directly supported by development tools, bypassing explicit models of functional architectures. That makes their costs/benefits analysis both simpler and well circumscribed. That is not the case for system functionalities which stand at the hub of perspectives. As a consequence, costs/benefits should be analyzed as a whole:

  • Regarding business assets, a clear distinction must be maintained between specific and shared knowledge, reuse being considered for the latter only.
  • Regarding the reuse of business assets as functional ones, services clearly offer the best returns. Otherwise costs/benefits are to be assessed, from reuse of vocabulary and semantic domains (straightforward, limited overheads), to canonical models and enterprise application integration (contingent, significant overheads).
  • Economics of reuse will ultimately be set by traceability mechanisms linking enterprise and business knowledge on one hand, component designs on the other hand. Even for services (c), if to a lesser degree, the business case for reuse will be decided by leveraged benefits and non-cumulative costs. Hence the importance of maintaining the distinction between identified structures and associated aspects from business (a) and functional (b) requirements, to components interfaces (d) and structures (e).

Maintaining the distinction between structures and aspects from business (a) and functional (b) requirements, to components interfaces (d) and structures (e).

Finally, reuse may also play a significant role in risk management, especially when risks are managed according to their source:

  • Changes in business contexts usually occur along two frequency waves: short, for market opportunities, and long, for an organization's continuity. Associated risks could be better managed if the corresponding knowledge were managed accordingly.
  • System architectures are meant to evolve in sync with organization continuity; were the technological environment or corporate structures subject to unexpected changes, reusable functional assets would be of great help.
  • Given that enterprise IT can no longer be self-contained and operate in isolation, reusable designs may provide buffers to technological risks and help exploiting unexpected business opportunities.

Further Reading

Requirements Metrics Matter

April 9, 2012

Objectives

Contrary to some unfortunate misconceptions, measurements are as much about collaboration and trust as they are about rewards or sanctions. By shedding light on objectives and pitfalls they prevent prejudiced assumptions and defensive behaviors; by setting explicit quality and acceptance criteria they help to align respective expectations and commitments.


How to put requirements into equations (Lawrence Weiner)

Since projects begin with requirements, decisions about targeted functionalities and resource commitments are necessarily based upon estimations made at inception. Yet at such an early stage very little is known about the size and complexity of the components to be developed. Nonetheless, quality planning all along the engineering process is to be seriously undermined if not grounded in requirements as expressed by stakeholders and users.

Business vs Functional Complexity

Contracts without transparency are worthless, and the first objective is therefore to track down complexity across enterprise architecture and business processes. For that purpose one should distinguish between business and application domains, business processes, and use cases, which describe how system functionalities support business processes.

Processes are defined on domains, system functionalities are set by use cases

Based upon intrinsic metrics computed at domain level, functional metrics are introduced to measure the system functionalities supporting business processes and compare alternative solutions. Finally, requirements metrics should also provide a yardstick to assess development models.

Business Requirements Metrics

The first step is to assess the intrinsic size and complexity of business domains and processes independently of system functionalities, and that can be done according to symbolic representations:

  • The footprint includes artifacts for symbolic objects and activities, and partitions (object classifications or activity variants). Symbolic objects and activities are qualified as primary or secondary depending on their identification. The reliability of the footprint is weighted by the explicit qualification of artifacts as primary or secondary.
  • Symbolic representations for objects and activities are associated with features (attributes or operations) defined within semantic domains.

Apart from absolute measurements, a number of basic ratios can be computed for anchors (primary objects and activities) and associated partitions.

A robust appraisal of the completeness and maturity of requirements can be derived from:

  • Percentage of artifacts with undefined identification mechanism.
  • Percentage of partitions with undefined characteristics (exclusivity, changeability).

The organization and structure of requirements can be estimated by:

  • Average number of artifacts and partitions by domain (a).
  • Total number of secondary objects and activities relative to primary ones.
  • Average and maximum depth of secondary identification.
  • Total number of primary activities relative to primary objects.
  • Total number of features (attributes and operations) relative to number of artifacts.
  • Ratio of local features (defined at artifact level) relative to shared (defined at domain level) ones.

Finally, intrinsic complexity can be objectively assessed using partitions (a sketch follows the list):

  • Total number of activity variants relative to object classifications.
  • Total number of exclusive partitions relative to primary artifacts, respectively for objects and activities.
  • Percentage of activity variants combined with object classifications.
  • Average and maximum depth of cross partitions.
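
A minimal sketch of how such ratios could be computed from an artifact inventory (the artifact/3 facts and their shape are assumptions made up for the example):

    % Hypothetical inventory: artifact(Kind, Name, Qualification).
    artifact(object,   rentalCar, primary).
    artifact(object,   carModel,  secondary).
    artifact(activity, rentCar,   primary).
    artifact(activity, billing,   undefined).

    % Percentage of artifacts with undefined identification mechanism.
    undefined_ratio(R) :-
        aggregate_all(count, artifact(_, _, undefined), U),
        aggregate_all(count, artifact(_, _, _), T),
        T > 0,
        R is 100 * U / T.

    % Total number of secondary objects and activities relative to primary ones.
    secondary_ratio(R) :-
        aggregate_all(count, artifact(_, _, secondary), S),
        aggregate_all(count, artifact(_, _, primary), P),
        P > 0,
        R is S / P.

The same inventory, extended with partition facts, would support the complexity ratios listed above.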

It must be noted that whereas those ratios do not depend on any modeling method, they can nonetheless be used to assess requirements or refactor them according to specific methods, patterns, or practices.

Functional Requirements Metrics

Functional metrics target the support that systems are meant to bring to business processes:

  • The footprint of supporting functionalities is marked out by roles to be supported, active physical objects to be interfaced, events to be managed, and processes to be executed. As for symbolic representations, corresponding artifacts are to be qualified as primary or secondary depending on their identification, with accuracy and reliability of metrics weighted by the completeness of qualifications.
  • Functional artifacts (objects, processes, events, and roles) are associated with anchors and features (attributes or operations) defined by business requirements.

From business to functional requirements metrics

Given that use cases are meant to focus on interactions between systems and contexts, they should provide the best basis for functional metrics:

  • Interactions with users, identified by primary roles and weighted by activities and flows (a).
  • Access to business (aka persistent) objects, weighted by complexity and features (b).
  • Control of execution, weighted by variants and couplings (c).
  • Processing of objects, weighted by variants and features (d).
  • Processing of actual (analog) events, weighted by features (e).
  • Processing of physical objects, weighted by features (f).

Additional adjustments are to be considered for distributed locations and synchronous execution.

Use case metrics can also provide a basis for quality checks:

  • Average and maximum number of root use cases relative to business and application domains.
  • Average and maximum number of root use cases relative to primary activities.
  • Average and maximum number of secondary use cases relative to root ones.
  • Average number of primary (i.e identified) and secondary roles relative to root use cases.
  • Average number of primary (aka external) and secondary (issued by use cases) events relative to root use cases.
  • Average and maximum number of primary objects relative to use cases.
  • The number of variants supported by a use case relative to the total associated with its footprint, respectively for persistent and transient objects.

Requirements Metrics, Contracts, and Acceptance Criteria

As noted above, requirements metrics are a means to a dual end, namely to set a price on developments and define the corresponding acceptance criteria. Moreover, as far as business is concerned, change is the rule of the game. Hence, if many projects can be set on detailed and stable requirements, projects with overlapping concerns and outlying horizons will usually entail changing contexts and priorities. While the former can be contracted out for fixed prices, the latter should clearly allow some room for manoeuvre, and requirements metrics can help to manage that room.

  1. Assuming that projects are initiated from clearly identified business domains or processes, requirements capture should be set against a preliminary wire-frame referencing new or existing pivotal elements of business (1) and system (2) contexts. Those wire-frames (aka anchors, aka backbones) will be used both for circumscribing envelopes and managing changes.
  2. Envelopes should be defined along architecture layers: enterprise for business objectives (1), functional for supporting systems (2), and technical for deployment platforms (3). That will set the outer limits of budgets for given architectural contexts.
  3. Moves within rooms (4) are governed by functional requirements and anchored to pivotal business objects or processes (wire-frame nodes). Their estimations should take into account analogies with previous developments.
  4. Acceptance criteria could then be defined both for invariants (changes are not to overstep architectural constraints) and increments (added functionalities are properly implemented).

Pricing the loops

Requirements Metrics and Agile

While agile development models are meant to be driven by business value, project assessment essentially relies on informed guesses about users' stories. That leaves two questions unanswered:

  • Given that the business value of a story is often defined at process level, how to align it with its local assessment by the project team.
  • If problem spaces and solution paths are to be explored iteratively, how to reassess dynamically the stories in backlog.

Both questions are easier to answer if requirements are qualified with regard to their nature and footprint.

Informed decision-making about what to consider, and when, clearly depends on the nature of the stakes. Hence the importance of a reasoned requirements taxonomy setting apart business requirements, system functionalities, quality of service, and technical constraints on platform implementations.

Dynamic assessment and ranking of backlog elements require some hierarchy and modularity:

  • Architectural options, functional or technical, must be weighted by supported business features and quality of services.
  • Dynamic ranking of users’ stories implies that their value can be mapped to features metrics at the corresponding level of granularity.

That can be achieved with functional features assessed along architecture layers, as sketched below.
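
A minimal ranking sketch (the stories, features, and figures are hypothetical): story value is set at process level, costs are derived from feature metrics, and the backlog is reordered by value density whenever either side changes.

    % Hypothetical backlog data: value from the process level, cost from feature metrics.
    story_value(s1, 8).
    story_value(s2, 5).
    story_features(s1, [f1, f2]).
    story_features(s2, [f2, f3]).
    feature_cost(f1, 3).
    feature_cost(f2, 2).
    feature_cost(f3, 5).

    % A story's cost is consolidated from the features it relies on.
    story_cost(Story, Cost) :-
        story_features(Story, Features),
        aggregate_all(sum(C), (member(F, Features), feature_cost(F, C)), Cost).

    % Rank stories by value density, i.e value per unit of feature cost.
    ranked(Stories) :-
        findall(Density-Story,
                ( story_value(Story, Value),
                  story_cost(Story, Cost),
                  Cost > 0,
                  Density is Value / Cost ),
                Pairs),
        sort(0, @>=, Pairs, Sorted),
        findall(Story, member(_-Story, Sorted), Stories).

Here ranked(S) yields [s1, s2] (densities 1.6 vs about 0.71), and any change to a story's features or value reorders the backlog mechanically.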


Requirements should be assessed with regard to their nature and their footprint in functional architecture.

Requirements Metrics and Quality Planning

Finally, requirements metrics may also be used to design quality checks for downstream models. With metrics for the intrinsic characteristics of business domains on one hand, system functionalities on the other hand, subsequent models can be assessed for consistency, and alternative solutions compared.

Provided unified OO modeling concepts are used, metrics can be computed on analysis and design models and set against their counterparts in the original requirements. Examples of standard metrics for UML models include:

  • The number of root packages is to be compared to the symbolic containers for business and application domains, and the number of sub-packages to primary (#) objects or activities.
  • The number of classes in models and packages is to be set against the number of symbolic descriptions of objects and activities.
  • The number of actors is to be set against the number of roles, with primary (#) roles standing for identified agents.
  • The number of root use cases is supposed to coincide with primary (#) activities and roles.
  • The number of structural inheritance hierarchies should be compatible with the number of frozen and exclusive partitions respectively for objects and activities.

Further Reading

External Links

 

Projects As Non-Zero Sum Games

April 5, 2012

Objective

Contracts are as much about collaboration and trust as they are about protection. While projects are not necessarily governed by contracts, all entail some compact between participants. From in-house developments with shared ownership on one hand, to fixed price outsourcing on the other hand, success will depend on collaboration and a community of purpose.

The whole is worth more than its parts (Ariel Schlesinger)

Hence, a first objective should be to prevent prejudiced assumptions and defensive behaviors. Then, taking a cue from Game Theory, compacts should be designed to enhance collaboration between parties by establishing a non-zero sum playground where one’s benefits don’t have to come from others’ losses.
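
For illustration, a toy payoff table (all figures hypothetical) shows what non-zero sum means here: the joint outcome varies with the strategies chosen, so cooperation can enlarge the pie rather than merely split it.

    % Hypothetical payoffs: payoff(CustomerMove, ProviderMove, CustomerGain, ProviderGain).
    payoff(cooperate, cooperate, 3, 3).
    payoff(cooperate, defect,    0, 4).
    payoff(defect,    cooperate, 4, 0).
    payoff(defect,    defect,    1, 1).

    % Joint outcome: in a zero-sum game this sum would be constant; here it is not.
    surplus(Customer, Provider, Surplus) :-
        payoff(Customer, Provider, X, Y),
        Surplus is X + Y.

Mutual cooperation yields a surplus of 6 against 2 for mutual defection; the point of the compact is to make the cooperative cell the rational choice for both sides.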

Players and Stakes

Basic participants of engineering projects can be put in two groups, possibly with subdivisions: business units expressing requirements, and engineering units meant to provide solutions.

Who’s Who

The former (users, customers, or stakeholders) specify the needs, pay for the outcome, and expect to benefit from its use. Their success is measured by return on investment (ROI) and depends on cost, quality, and timely delivery.

The latter design the solution, develop the components, and integrate them into target environments. Narrowly defined, their success is measured by costs.

Project participants should collaborate on features and timing.

Hence, while stakes may clearly conflict on prices, there is room for collaboration on features, quality and timing, and that may bring benefits to both customers and providers.

Communication and Collaboration: Modeling Languages

Understanding is a prerequisite to collaboration, and except for in-house developments, there is no reason to expect immediate (aka mediation free) understanding between business and engineering units. Some common language is therefore required if heterogeneous concerns are to be matched.

Mapping functional and engineering concerns.

While in-house developments under shared ownership can proceed with domain specific languages, collaboration between different organizational units along time can only be supported by modeling languages with non specific semantics.

Assessments and Strategies: Requirements Metrics

If participants are to act rationally based on symmetric information, they must agree on some measurements for size and complexity.

Given that collaboration is not meant to offset interests that are by nature different or even conflicting, participants' strategies must be based on shared and reliable information regarding their respective stakes. Moreover, if timing is to matter, projects' metrics will have to be reassessed periodically, and strategies redefined accordingly.

Matching the respective deadlines and strategies of customers and providers

Despite clear shortcomings about targets and accuracy, requirements metrics are nonetheless a necessary component of any collaborative framework, with or without explicit contracts.

First of all, some measurements are required to set schedules and plan resources. Even faulty, those estimators will serve their purpose if agreed upon by all participants, with possible biases redressed by consent, and lack of accuracy reduced or circumscribed progressively.

Then, as engineering projects often come with intrinsic uncertainties, they may go off-trail. Provided with timely and reliable indicators, participants should be able to anticipate hurdles, reassess strategies, and amend agreements.

Finally, metrics should make room for arbitrage and differentiated strategies; given trustworthy information and clear acceptance criteria, participants would be in a position to weigh their respective stakes against risks, rearrange their priorities, and plan their endgames appropriately.

Prices and Acceptance: Contracts

While trustworthy information is a necessary condition for cooperation, it's not a sufficient one, as players may be tempted to cheat or defect if they think they can get away with it. Leaving out ethical considerations and invasive regulations, the best policy should be to set rewards and sanctions so as to make a convincing case for the benefits of cooperation.

That can be achieved if participants can opt for differentiated and non-conflicting strategies set according to model layers.

Starting with business processes (aka computation independent models) and expected return on investment, customers consider system functionalities (a) and ask providers for feasibility (b) and schedules (c).

Given a functional architecture (aka platform independent models), participants may define strategies: customers associate features with date and expected benefits, providers plan resources and deliveries for products (aka platform specific models and code).

Commitments, by customers on prices and providers on dates, are made at two levels: functional architectures (d) and features (e).

Business requirements (a), functional architecture (b), developments (c), architectural commitments (d), incremental developments (e).

As for any iterative process, changes must be explicitly circumscribed by invariants, e.g the functional architecture of supporting systems. Yet, within those constraints, there may be room for adjustments insofar as providers' costs and customers' returns are contingent on schedules.

Processes: Time, Risks, Responsibilities and Milestones

As Einstein put it, "The only reason for time is so that everything doesn't happen at once". That seems a perfect fit for project planning, as illustrated by two archetypal project configurations:

  • At one extreme, in-house projects with shared ownership may operate within their own time warps (aka time-boxes) because decisions by customers and providers can be taken jointly and simultaneously, independently of external factors.
  • At the other extreme, outsourced projects are set along sequenced milestones and governed by different organizational units, each with its own time scale. With tasks performed independently and decisions made separately, time becomes a factor and processes are introduced to consolidate time-scales and manage risks and decisions alongside.

More generally, engineering processes are introduced when decisions are to be made along time by different organizational entities. Assuming that time is governed by changes and the decisions to be made thereof, the sequencing of tasks should be defined with regard to the nature of events and decision-making:

Technical: engineering processes come with their own intrinsic constraints regarding the sequencing of operations. When those operations can be performed independently of contexts the whole sequence can be seen as a single step wrapped into a time fold. Otherwise they must be associated with distinct process steps.

Contractual: while uncertainties about contexts may affect both expectations and commitment, decisions have to be made notwithstanding. But decisions carry risks and should therefore be factored out, with rationale and responsibilities stated explicitly. While some responsibilities may be shared, decisions about contexts should remain the sole responsibility of the participants directly in charge and appear as milestones set in contracts.

Managed: whatever the number of model layers and automated transformations, engineering processes set about and wind up with human activities. Given that resources and skills are by nature limited, participants have to make the most of their use. But once milestones are set and technical constraints identified, each participant should be able to optimize its own planning.

That makes for three basic project configurations:

  • Requirements analysis is about business requirements and feasibility. Its main objective is to identify technical and contractual milestones.
  • Use case developments are self-contained projects under shared ownership, bounded by contractual milestones. They come with defined prices and schedules whose mix can be managed by consent.
  • System functionalities are set by cross-cutting objectives and their development is therefore governed by contractual milestones and schedules.

Architectures and Project Configurations

Managing engineering projects along these principles would greatly improve their integration with business needs on one hand, enterprise architecture on the other hand.

Further Reading

External Links