Archive for the ‘Project Management’ Category

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.

Figuring Concepts (Ai Weiwei)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to resolve the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting aside frictions between specific methods, the benefits of agile principles and practices are now well-recognized, contingent on the limits of agile scope. In short, agile development is at its best when requirements capture and analysis can be woven together with development and tests. The question remains of what happens when requirements are to be dealt with separately.

iStar’s answer shares with agile a focus on collaboration and doesn’t take sides between business (e.g users’ stories) and systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges; e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g appraise damages; qualifications (soft goals), e.g fair appraisal; and resources, e.g premium payment.
Actors & Dependencies

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope:

  • Goal (strong) dependency: assertions about actual state of affairs: object, activity, or expectations.
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
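
As a minimal sketch, the car-repair example could be rendered with plain data structures; the class and field names below are illustrative assumptions, not part of the iStar notation:

from dataclasses import dataclass
from enum import Enum

class DependencyType(Enum):
    GOAL = "goal"            # assertions about actual states of affairs
    SOFT_GOAL = "soft goal"  # assertions about expected outcomes
    TASK = "task"            # constraints on the execution of activities
    RESOURCE = "resource"    # conditions on the availability of inputs

@dataclass
class Dependency:
    depender: str  # the actor relying on the dependency
    dependee: str  # the actor the dependency rests upon
    dependum: str  # what the dependency is about
    kind: DependencyType

# The car-repair interaction: a primary actor, secondary actors, and their dependencies
car_repair = [
    Dependency("car owner", "body shop", "car repaired", DependencyType.GOAL),
    Dependency("car owner", "appraiser", "fair appraisal", DependencyType.SOFT_GOAL),
    Dependency("body shop", "appraiser", "appraise damages", DependencyType.TASK),
    Dependency("insurance company", "car owner", "premium payment", DependencyType.RESOURCE),
]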

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, strong dependencies being associated with deontic rules, soft ones with alethic rules. That avenue, however, remains unexplored.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML Use Cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.

iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate into the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly in the definition of primary and secondary roles (aka actors), intents (business), and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if its logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor is the nature of capabilities, e.g strategic vs operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared commitment to conceptual modeling; as a corollary, these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems’ functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links

Business Problems shouldn’t sleep with IT Solutions

October 8, 2016

Preamble

The often mentioned distinction between problem and solution levels may make sense from an analyst’s particular point of view, whether business or system. But blending problems and solutions regardless of their nature becomes a serious oversimplification for enterprise architects, considering that one of their prime responsibilities is to keep business problems apart from IT solutions.

Functional problem with technical solution (Mircea Cantor)

That issue is relevant from an engineering as well as a business perspective.

Engineering View: Problem Levels & Architecture Layers

As long as computers are used to solve problems, the only concern is to find the best solution, and the only architecture of concern is the software’s.

But enterprise architects have to deal with systems, not computers, namely how to best serve business objectives with corporate resources, across business units and along business cycles. For that purpose resources (financial, human, technical) and their use are to be layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

From an engineering perspective, the intended congruence between problems levels and architecture layers can be illustrated with the OMG’s model driven architecture (MDA) framework:

  • Computation independent models (CIMs) deal with business processes solutions, to be translated into functional problems for supporting systems.
  • Platform independent models (PIMs) deal with functional solutions, to be translated into technical problems for supporting platforms.
  • Platform specific models (PSMs) deal with technical solutions, to be implemented as code.
MDA layers can be mapped to a clear hierarchy of problems and solutions
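
As a toy illustration of that hierarchy, each layer can be coded as a function turning the upper layer’s solution into the lower layer’s problem; names and model contents are made up for the example:

# Each MDA layer solves the problem handed down by the layer above.
def cim_to_pim(business_process: dict) -> dict:
    """Business process solution -> functional problem for supporting systems."""
    return {"functionalities": [f"support:{a}" for a in business_process["activities"]]}

def pim_to_psm(functional_model: dict, platform: str) -> dict:
    """Functional solution -> technical problem for supporting platforms."""
    return {"components": [f"{platform}:{f}" for f in functional_model["functionalities"]]}

def psm_to_code(technical_model: dict) -> list:
    """Technical solution -> code artifacts."""
    return [f"class {c.replace(':', '_').capitalize()}" for c in technical_model["components"]]

cim = {"activities": ["quote", "underwrite", "claim"]}   # enterprise level
pim = cim_to_pim(cim)                                    # systems level
psm = pim_to_psm(pim, "jee")                             # platforms level
artifacts = psm_to_code(psm)                             # implementation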

Along that understanding, architectures can be seen as solutions, and the primary responsibility of enterprise architects is to see that problem/solution pairs remain in their respective swim-lanes.

Business View: Business Value & Enterprise Assets

Whereas the engineering perspective may appear technical or specific to a model based approach, the same issue is all the more significant when expressed with regard to business concerns and corporate governance. In that case the critical distinction is between business value and assets:

  • Business value: Problems are set by business opportunities, and solutions by processes and applications. The critical factor is reactivity and time-to-market.
  • Assets: Problems are set by business objectives and strategy, and solutions are to be supported by organization and systems capabilities. The critical factor is reuse and ROI.
Decision-making must distinguish between business opportunities and enterprise governance

If opportunities are to be seized and operations managed on the fly, yet still tally with strategic decisions, the respective problems and solutions should be kept apart. Juggling with their dynamic alignment is at the core of enterprise architects’ job description.

Enterprise Architects & Governance

Engineering and business perspectives are not to be seen as the terms of an alternative to be picked by enterprise architects. As a matter of fact they must be crossed and governance policies selected depending on the point of view:

  • Looking at EA from an engineering perspective, the business viewpoint will focus on systems governance and assets management, as epitomized by model based systems engineering schemes.
  • Looking at EA from a business perspective, the engineering viewpoint will focus on lean and just-in-time solutions, as epitomized by agile development models.

Insofar as the governance of large and complex corporate entities, supposedly EA’s primary target, must deal with tactical, operational, and strategic concerns, the nexus between business and engineering perspectives is where enterprise architects are to stand.


Projects Have to Liaise

May 25, 2016

Preamble

Liaison between projects is all too often preempted by methodological issues. So when some communication is needed, the alternatives should not be limited to no models at all or a medley of ambiguous ones.

Avoiding Parochial Behaviors, Turf Wars, and Autarky (Juan Munoz)

Archetypal Development Models

Software engineering processes can be regrouped into two categories: phased ones are segmented with regard to responsibilities, tasks, and the nature of artifacts; agile ones are iterative, with a single team sharing responsibilities for the definition, building, and acceptance of final outcomes.

Compared to phased projects, agile ones start right away with software and carry on without making use of intermediate development artifacts.

With regard to modeling languages, both approaches may encounter parochial pitfalls: the tasked teams of phased projects may go through turf wars and misunderstandings; agile teams may tend toward autarkic biases and objections to models.

Stories Must Be Set in Context

Agile projects start right away with writing users’ stories into software, making no use of intermediate development artifacts other than code. With business analysts and software engineers working side by side, the semantics of business objects and activities is meant to be directly, if progressively, inscribed into software artifacts.

Nonetheless, users’ stories, being by nature specific, are to be set against the broader context of enterprise business objectives. Teams may therefore have to communicate with outside entities and, considering that source code is seldom the preferred language of other units, agile teams may have to resort to means they would otherwise disparage. They would be in a better position if stories could be docked to open concepts to be used to contrive messages in line with projects’ needs and creeds.

Developments Must be Shared

Paraphrasing Einstein, the only reason for phased processes is so that everything doesn’t happen at once. In that case intermediate artifacts are to be introduced between tasks. But then, suppliers and customers, often from different backgrounds and with different concerns, have to agree about the semantics of the development flows. Moreover, the accuracy and consistency of agreed upon definitions must stand the test of time whatever the changes on each side.

As illustrated by model based software engineering processes, that objective is essentially attainable for programming languages; otherwise, blurred footprints and ambivalent semantics require cumbersome maintenance of transformation rules as well as regressive updates of previous versions of the artifacts. In that case open concepts may help to prevent the corruption of a core of sound specifications by surrounding ambiguous ones.

Open Concepts: Yet Another Conceptual Framework ?

As many will have noticed, there is no lack of frameworks, conceptual or otherwise. So what could be the point of yet another one?

The answer, as it should be, is to be found in its impact on the use, or reuse, of artifacts by projects and teams, whatever their preferred development model. And that’s why the open source paradigm applied to open concepts is critical:

  • The difference between generalization and specialization is fully taken into account so that the semantics of sub-types defined by different projects cannot be modified.
  • The concept of Individuals is used to guarantee that business objects and activities are consistently identified across projects.
  • The semantics of sub-types are consistently, but not necessarily uniformly, defined across projects.
  • There is no overlapping of semantics even when subsets of individuals overlap.

Those are very strong constraints which, combined with the already limited footprint of open concepts, will result in a very compact set of concepts. And that is what makes the difference: a small set of concepts, built from well-known principles, with clear properties and benefits, to be shared by projects independently of their modeling languages and development methods.
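
A minimal sketch of how such constraints might be enforced, assuming a shared registry whose name and methods are hypothetical:

class OpenConceptRegistry:
    """Toy registry enforcing the constraints listed above."""
    def __init__(self):
        self.concepts = {}     # concept name -> (semantics, base concept)
        self.individuals = {}  # individual -> concept identifying it

    def define(self, concept, semantics, base=None):
        # semantics, once published, cannot be modified by other projects
        if concept in self.concepts:
            raise ValueError(f"semantics of {concept} cannot be modified")
        if base is not None and base not in self.concepts:
            raise ValueError(f"unknown base concept: {base}")
        self.concepts[concept] = (semantics, base)

    def identify(self, individual, concept):
        # individuals are consistently identified across projects
        owner = self.individuals.setdefault(individual, concept)
        if owner != concept:
            raise ValueError(f"{individual} is already identified as a {owner}")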

Further Readings

Models Transformation & Agile Development

April 5, 2016

Models transformation is generally recognized as the basic mechanism of model based systems engineering (MBSE). Yet the actual scope of transformations is largely limited to design-to-code, and their sequential bias puts MBSE at odds with agile development approaches. Could a revisited understanding help to resolve this apparent discrepancy?

Weaving Life by Crossing Patterns (Sand Painting Navajo Rug)

Transformation Issues

The traditional transformation paradigm involves ordered sequences of models, each obtained by applying rules to its immediate predecessor(s). That organizational scheme has three critical consequences, for applicability, economics of reuse, and development processes.

  • Applicability: the effectiveness of transformations is conditioned by (a) an executable language for the description of targets, and (b) a closed and compact set of unambiguous patterns. Those conditions can only be satisfied for the downstream part of the development process.
  • Reuse: given the sequencing constraints, models are to be managed and reused along tree-like structures with duplicates introduced at branching points.
  • Development processes: sequenced models bring forth phased options and leave out agile solutions.

Assuming those issues are not conclusive, they may be overcome by revisiting the nature of transformations.

Transformation vs Inheritance & Composition

Most of the proposed taxonomies (see references below) put the focus on languages and mechanisms (e.g rules) of sequential transformation without paying enough attention to the nature and the semantics of models contents. Even when abstraction levels are taken into account, the respective contents of each level remain undefined. As it happens, that issue may be the key to a better understanding of models transformation.

To begin with, rule-based transformation has to be compared to inheritance and composition:

  • Structural inheritance can be used to refine models so as to take into account business scenarios previously ignored; e.g special conditions for good customers.
  • Functional inheritance can be used to introduce new capabilities; e.g new authentication procedures.
  • Functional composition can be used to apply capabilities across different scenarios; e.g customized authentication procedures.
  • Rules-based solutions can be used for any kind of transformation.
A broader understanding of models transformation should include inheritance and composition

That taxonomy implies a clear distinction between operations executed within the same level of abstraction and those targeting artifacts defined at different levels: contrary to rules-based transformations, inheritance and composition can only be applied to artifacts sharing common semantics.
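
Those distinctions can be sketched with plain object-oriented constructs; the business example below is made up:

class CustomerScenario:
    """Base scenario: the artifacts below share its semantics."""
    def accept(self, customer) -> bool:
        return customer["balance"] >= 0

class GoodCustomerScenario(CustomerScenario):
    """Structural inheritance: refine the model for a scenario previously ignored."""
    def accept(self, customer) -> bool:
        return super().accept(customer) or customer["loyalty_years"] > 5

class TwoFactorAuth:
    """Functional inheritance: introduce a new capability."""
    def authenticate(self, credentials) -> bool:
        return bool(credentials.get("password")) and bool(credentials.get("token"))

class SecureGoodCustomerScenario(GoodCustomerScenario, TwoFactorAuth):
    """Functional composition: apply the capability across different scenarios."""
    pass

Rules-based transformations, by contrast, would rewrite such artifacts from the outside, and could therefore cross abstraction levels.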

Heterogeneous Models

While that would clearly prevent their use for models organized along abstraction levels, semantic pitfalls could be mastered for models built from artifacts from different abstraction levels.

Releasing models from (still to be defined) abstraction levels would bring two critical benefits:

  • Whatever the terminology (abstract, conceptual, functional, etc.), abstraction semantics are much easier to define for artifacts than for models.
  • That would remove a chunk of restrictions on the design of transformation processes.
Heterogeneous models are not bound to abstraction layers.

In that case transformation rules could be turned into combination ones and sequential transformation turned into cross-breeding.

Mendel, Models, Mongrels

Taking a cue from Gregor Mendel’s use of cross-fertilization, the aim of a revisited transformation paradigm would be three-fold:

  1. To refine the granularity of reuse, from models to artifacts.
  2. To substitute combination for sequential transformation whenever possible.
  3. To substitute graphs for trees, with models organized along two basic layers, final (aka mongrels) or reusable (aka blueprints).
Models combination (top) replaces transformation phases (bottom) by a distinction between blueprints (full line) and mongrels (dashed line).

As far as MBSE is concerned, the genetics metaphor helps to clarify the nature of abstraction. Conceptually, it introduces a distinction between artifacts and models:

  • With regard to artifacts, abstraction layers are defined by scope: enterprise, systems, platforms.
  • With regard to models, abstraction layers are defined by capabilities: reusable (stable traits), or final (recessive traits).

That taxonomy is corroborated by its functional counterpart: artifacts transformation is carried out with inheritance and composition; models transformation relies on combination.

More important, that understanding goes a long way toward solving the issues regarding scope, reuse, and development processes.

Scope: Weaving Analysis & Design Traits

Definitions and taxonomies should always be assessed with regard to their applicability. On that account there isn’t much to be said for abstraction layers applied to models: they don’t fit because too many traits can be defined across different layers, e.g business rules, authentication, encryption, etc.

That difficulty can be neatly and consistently removed by models built from artifacts defined at different levels.

Models Reuse: Blueprints vs Mongrels

Reuse is all too often seen as a contentious objective with inconclusive ROI. On one hand it requires significant overheads to manage the resources; on the other hand the outcomes can introduce regressive traits. The distinction between sound reusable models and final ones significantly reduces both the costs of the former and the risks of the latter.

Processes Organization: MBSE & Agile

Model based systems engineering and the agile development model are arguably two of the most conclusive approaches to software engineering. Unfortunately they are often seen as difficult bedfellows, principally (but not uniquely) because the former insists on the importance of models with some bias toward phased processes, while the latter is all for iterative processes with models mentioned as an afterthought, if at all. Yet both approaches could be made complementary, provided models could be processed iteratively. And that could be achieved if sequenced transformations of homogeneous models were replaced by the combination of heterogeneous ones.

Iterative development mixing new business requirements with existing functionalities (a) and business rules (b).

Within such a framework an agile team could, e.g, iteratively develop new business requirements, taking into account existing functionalities (a) and business rules (b), and generate code (c).

Further Readings

External Links


Agile Business Analysis: From Wonders to Logic

March 7, 2016

Time and again new recruits will ask about the role of business analysts. Considering that such a question is seldom heard from software engineers, are BAs more curious about their job, or are they standing on more tentative ground? If that’s the case, agility would help them flip-flop between business quicksands and systems’ hard rocks.

How to make sense of business wonders (Hieronymus Bosch)

Holding the fort vs scouting outskirts

Systems architects and software engineers may have to meet esoteric business requirements, but their responsibility is first and foremost to guarantee the functional and economic sustainability of systems. On that account they are given licence to build solid walls and secure gateways, and to enforce their own languages and rules upon well vetted parties.

Business analysts don’t get such a free hand: while being straitened by software engineers’ constructs and constraints, their primary undertaking is to explore business wilds, reconnoitre competitors, trace new tracks, and learn the dialects of any nicknamed natives ready to trade.

No wonder the qualms of new business analysts.

Great businesses make their own rules

The best rules in business are the ones still unbeknownst, as success is most often brought by disruptive initiatives taking advantage of previously undiscovered opportunities. It ensues that at its core, BAs’ job description is to relentlessly look across the frontier for still uncharted businesses, and bring them back to the digitized world of shipshape business domains and processes.

For that purpose BAs will have to juggle with the fuzzy idiosyncrasies of new business openings until they can be aligned with the functionalities of “legacy” systems.

BA’s Agility

While usually presented as a software engineering hallmark, agility may be equally useful for business analysts as they have to balance two crossing perspectives:

  • Analysis: sorting detailed activities into business processes.
  • Synthesis: factoring out business functions and mapping them to systems capabilities.

That could be a challenging achievement if carried out sequentially: crossing back and forth between changing scope and steady capabilities could generate unsettling alternatives and unbounded complexity.

The agile development model is meant to tackle the difficulties through iterations and collaboration without being too specific about the kind of agility required from business analysts and software engineers.

Yet the apparent symmetry between the parties may be misleading: whereas software engineers don’t have (and shouldn’t even try) to second guess business analysts, business analysts shouldn’t forget that at the end of the day business expectations, however exotic or esoteric, will have to feed very conformist logical beasts.

Further Readings

Quality Circles

November 11, 2015

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering that would mean code, users’ experience, and operations, each with its own specific stakeholders and criteria.

A bird’s-eye view on quality circles (Jonathan Monk)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side, agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need for a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.
Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for process improvements remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.
Assessment at each level may be conditioned by lower levels

In any case (i.e with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g how to assess insurance premiums or compute missile trajectory.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.
Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed as to eliminate some uncertainties (e.g reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g an unexpected regulatory change or a delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defaults or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g resources consumption, late delivery) against the cost of preventive (e.g more systematic checks on consistency, additional staff) or corrective (e.g tests or maintenance) measures.
Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between regulatory compliance, risks management, and quality assurance.

Further Readings

Agile Delivery & Dynamic Programming

September 7, 2015

Synopsis

Business driven development is arguably the cornerstone of the agile development model. On one side it means business value set by users and stakeholders, on the other side it entails continuous and just-in-time delivery; what happens in-between is set by backlogs (for development) and product increments (for delivery).

Continuous Delivery

A sanguine understanding of continuous releases may assume that planning is no longer relevant, and that deployment can be carried out “on-the-fly”. But that would assume that stakeholders and product owners are ready to put aside roadmaps, overlook milestones, and more generally forget that time is money. That would go against a basic objective of agile, namely that developments must be driven by business needs, and products delivered just-in-time for best value. Given the well established track record of dynamic programming for manufacturing processes, could the technique be usefully applied to agile engineering processes?

Delivery, Deployment, & Continuity

Continuity doesn’t mean synchronization: business, engineering, and operations are governed by different concerns set along different time-frames. Some buffering is needed, materialized by the distinction between releases (engineering concerns) and deployment (business and operational concerns).

A time for every purpose: Epics & business roadmap (a), associated backlogs of users’ stories (b), released features (c), architectural capabilities (d), deployed components (e).

Such distinctions introduce both overlapping (with business time-frame) and discontinuity (between development and deployment):

  • Product roadmaps are set in business time-frames and determine development and deployment time-frames.
  • Development time is set by product roadmaps and runs clockwise from project inception to software releases (a>b>c).
  • Deployment time is also set relative to product roadmaps but it runs counterclockwise, from product deployment as planned by business back to released software components (c<e).

Development and deployment runs can be compared to crews tunneling through a mountain from both sides; where and when they meet leaves room for adjustments. Yet more is at stake in the meeting between development completion and deployment inception. Apart from time, adjustments may also bear on formats and contents; and given the specificity of development and deployment purposes, their adjustment may also be seen as the morphing of continuous software releases (project perspective) into discrete increments (product perspective).

Dynamic Programming

Dynamic programming (aka multistage programming) is a problem solving method that combines two principles:

  • Divide & conquer is a general purpose strategy that deals with the intrinsic complexity of problems by breaking them down into collections of simpler sub-problems to be solved separately depending on sequencing constraints.
  • Recursion deals with the lack of complete or accurate information upfront, solving the problem in stages rather than as one entity. Each stage is optimized separately on the basis of the current state reflecting decisions taken at previous stages, the optimality principle guaranteeing the optimality of the final outcome.

That incremental and iterative approach clearly befits the tenets of the agile development model.
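
A textbook sketch of both principles, assuming a staged cost graph standing in for work units (figures made up):

from functools import lru_cache

# Feasible moves between states, with their costs
graph = {
    "start": {"a": 3, "b": 2},
    "b": {"a": 1, "done": 6},
    "a": {"done": 4},
    "done": {},
}

@lru_cache(maxsize=None)
def best_cost(state: str) -> int:
    """Optimum cost to completion from the current state: each stage is
    optimized separately (recursion), sub-problems are solved once (memoization)."""
    if state == "done":
        return 0
    return min(cost + best_cost(nxt) for nxt, cost in graph[state].items())

assert best_cost("start") == 7  # e.g start > b > a > done: 2 + 1 + 4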

Twofold Planning

As noted above, and whatever the technique, agile processes entwine two phases, one for development, the other for deployment, each with its own planning:

  • The development planning, epitomized by backlog management, deals with the definition of work units and their sequencing.
  • The deployment planning deals with the merging of software releases into products and their incremental deployment.
Project work units are sequenced (backlog), product increments are merged, and both are dynamically adjusted around their nexus.

That makes for multiple dynamics, first for updated backlogs, then for updated deployment targets, and finally for possible feedback through their nexus. That can only be achieved with dynamic programming.

Backlog: Multistage sequencing

Backlogs are used to manage work units targeting self-contained requirements items; they can be represented by graphs with nodes standing for work units and arcs weighted by constraints.

Basically, the problem is to optimize the development of a given set of items given users’ priorities, technical constraints, and resources availability. When all the information is available upfront, optimum solutions can be obtained with simple Shortest Path algorithms. Yet, given the iterative and exploratory nature of agile processes, backlogs are meant to be updated as the project advances, taking advantage of improved knowledge:

  • Users may introduce new items, remove some, or change their priorities due to a better understanding of the requirements space.
  • Engineers may also introduce new items (e.g for technical debt) or reconsider technical difficulties and dependencies due to a better understanding of the solutions space.
Dynamic reordering of backlogs: looking forward for the optimum path to completion.

Dynamic programming is introduced in order to support step-wise decisions optimizing the whole process:

  • Backlog states (t1 & t2) are defined by the remaining work units, rankings, and feasibility constraints.
  • Each stage redefines the optimum path to completion taking into account the current state and updated information. Recursive computation being based on the summary information etched in states, it ensues that all future decisions can be selected optimally without recourse to information regarding previously made decisions.

Given a set of feasible paths (as defined by technical dependencies and time), the aim at each stage is to select the optimum path for the remaining units based on current state. Optimization functions will typically consider users’ value, learning curve and associated risks, and resources availability and costs.

As illustrated below, nodes can represent grouped items, e.g when several projects have to share resources or releases are to be regrouped.
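
A sketch of that stage-wise reordering, with a plain greedy stand-in for the optimization function (rankings, dependencies, and items are all made up):

def optimum_path(remaining: set, deps: dict, rank: dict) -> list:
    """At each stage, pick among feasible units (dependencies met) the one with
    the best rank; real settings would score users' value, risks, and costs."""
    done, path = set(), []
    while remaining:
        feasible = [u for u in remaining if deps.get(u, set()) <= done]
        if not feasible:
            raise ValueError("cyclic or unmet dependencies")
        unit = max(feasible, key=rank.get)
        path.append(unit)
        done.add(unit)
        remaining.discard(unit)
    return path

# State at t1: backlog, technical dependencies, and current rankings
backlog = {"login", "catalog", "checkout", "reviews"}
deps = {"checkout": {"login", "catalog"}, "reviews": {"catalog"}}
rank = {"login": 5, "catalog": 4, "checkout": 9, "reviews": 2}
plan_t1 = optimum_path(set(backlog), deps, rank)

# State at t2: users reprioritize, engineers add a technical-debt item;
# the optimum path is recomputed from the current state alone.
backlog.add("refactoring")
rank.update({"reviews": 8, "refactoring": 6})
plan_t2 = optimum_path(set(backlog), deps, rank)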

Deployment: Dynamic merging

Given a set of released software components, the aim of deployment planning is to decide which increment to add to the deployed product. Assuming that technical concerns have already been dealt with by releases, the objective at each stage is to select the items maximizing the ROI of the deployed product. It must be noted that, contrary to the development algorithm, which looks forward for the optimization, the deployment algorithm selects the optimum path by looking backward at the ROI of deployed products.

Dynamic reordering of deployments: looking backward for the optimum path to completion.
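
The deployment counterpart can be sketched the same way, the selection looking backward at the ROI of the deployed product (the ROI function below is a made-up example):

from itertools import combinations

def best_increment(released: set, deployed: set, roi) -> set:
    """Select the increment of released components maximizing the ROI of the
    resulting deployed product (brute force, for illustration only)."""
    candidates = released - deployed
    best, best_value = set(), roi(deployed)
    for size in range(1, len(candidates) + 1):
        for subset in map(set, combinations(candidates, size)):
            if roi(deployed | subset) > best_value:
                best, best_value = subset, roi(deployed | subset)
    return best

def roi(product: set) -> int:
    value = 2 * len(product)
    if {"payment", "checkout"} <= product:
        value += 5  # components valuable only when deployed together
    return value

increment = best_increment({"payment", "checkout", "reviews"}, {"catalog"}, roi)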

But the backward impact of deployment optimization can go further and affect backlogs.

Feedback

Shared ownership and continuous delivery are two main pillars of the agile development model, the former giving the development team full authority and responsibility, the latter ensuring that users keep a firm hand on the helm. Yet, as already noted, development, deployment, and business are governed by different time-frames, which could induce some frictions, e.g if business units were forced to synchronize product deployments with software releases. While severe disruptions can be avoided if releases and deployments are managed separately, development teams cannot be completely sheltered from changes in business or operational priorities. That is where the dynamic reassessment of optimum paths is to help: assuming a change in deployment planning (nq instead of op), the new priorities are fed back into development (aka backlog) rankings, and the optimum path is updated.

Change in deployment priorities (nq instead of op) can be fed back to backlog planning (f before a).

It must be noted that such feedback only affects ranks and leaves contents unchanged.

Conclusion: Business driven, Just-in-time delivery, & Lean Inventories

Dynamic programming appears as a primary factor with regard to three core tenets of the agile development model:

  • Business driven development doesn’t mean that developments are pushed by requirements but that they are pulled by deployment.
  • Just-in-time delivery can only be achieved with the help of a buffer between development and deployment. This buffer should not be confused with an inventory as it has nothing to do with product quantities.
  • On the contrary, this buffer, combined with dynamic programming, plays a critical role in the cutback of intermediate documents and models (aka development inventories).

Further Readings

External Links

Feasibility & Capabilities

May 2, 2015

Synopsis

As far as systems engineering is concerned, the aim of a feasibility study is to verify that a business solution can be supported by a system architecture (requirements feasibility) subject to some agreed technical and budgetary constraints (engineering feasibility).

Project Rain Check (A. Magnaldo)

Where to Begin

A feasibility study is based on the implicit assumption of slack architecture capabilities. But since capabilities are set with regard to several dimensions, architecture boundaries cannot be taken for granted and decisions may even entail some arbitrage between business requirements and engineering constraints.

Using the well-known distinction between roles (who), activities (how), locations (where), control (when), and contents (what), feasibility should be considered for supporting functionalities (between business processes and systems) and implementation (between functionalities and platforms):

Feasibility with regard to Systems and Platforms

Depending on priorities, feasibility could be considered from three perspectives:

  • Focusing on system functionalities (e.g with use cases) implies that system boundaries are already identified and that the business logic will be defined along with users’ interfaces.
  • Starting with business requirements puts business domains and logic in the driving seat, making room for variations in system functionalities and boundaries.
  • Operational requirements (physical environments, events, and processes execution) put the emphasis on a mix of business processes and quality of service, thus making software functionalities a dependent variable.

In any case a distinction should be made between requirements and engineering feasibility, the former set with regard to architecture capabilities, the latter with regard to development resources and budget constraints.

Requirements Feasibility & Architecture Capabilities

Functional capabilities are defined at system boundaries, and if all feasibility options are to be properly explored, architecture capabilities must be understood as a trade-off between five intrinsic factors, e.g:

  • Security (entry points) and confidentiality (domains).
  • Compliance with regulatory constraints (domains) and flexibility (activities).
  • Reliability (processes) and interoperability (locations).
Feasible options must be set within the Capabilities Pentagon

Feasible options could then be figured out as points set within the capabilities pentagon. Given metrics on functional requirements, their feasibility under the non-functional constraints could be assessed with regard to crossed capabilities. And since the same five core capabilities can be consistently defined across enterprise, systems, and platforms layers, requirements feasibility could be assessed without prejudging architecture decisions.
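
For instance, with (hypothetical) metrics on the five axes, a requirement would be deemed feasible if it stays within the capability envelope on every axis:

AXES = ["roles", "activities", "locations", "control", "contents"]

# Capability envelope of the systems layer, and a requirement's demands (made up)
capability = {"roles": 7, "activities": 5, "locations": 8, "control": 6, "contents": 7}
requirement = {"roles": 6, "activities": 5, "locations": 4, "control": 7, "contents": 5}

# Feasible only if the requirement stays within the pentagon on every axis
shortfalls = {axis: requirement[axis] - capability[axis]
              for axis in AXES if requirement[axis] > capability[axis]}
print("feasible" if not shortfalls else f"arbitrage needed on: {shortfalls}")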

Business Requirements & Architecture Capabilities

One step further, the feasibility of business and operational objectives (the “Why” of the Zachman framework) can be more easily assessed if set on the outer range and mapped to architecture capabilities.

Business Requirements and Architecture Capabilities

Engineering Feasibility & ROI

Finally, the feasibility of business and functional requirements under the constraints set by non functional requirements has to be translated in terms of ROI, and for that purpose the business value has to be compared to the cost of engineering the solution given the resources (people and tools), technical requirements, and budgetary constraints.

ROI assessment mapping business value against functionalities, engineering outlays, and operational costs.

That is where the transparency and traceability of capabilities across layers may be especially useful when alternatives and priorities are to be considered, mixing functionalities, engineering outlays, and operational costs.

Further Reading

Business Processes & Use Cases

March 19, 2015

Summary

As can be understood from their theoretical basis (Pi-Calculus, Petri Nets, or State Machines), processes are meant to describe the concurrent execution of activities. Assuming that large enterprises have to keep a level of indirection between operations and business logic, it ensues that activities and business logic should be defined independently of the way they are executed by business processes.

Communication Semantics vs Contexts & Contents (G. Winogrand)

For that purpose two basic modeling approaches are available: BPM (Business Process Modeling) takes the business perspective, UML (Unified Modeling Language) takes the engineering one. Yet each falls short with regard to a pivotal conceptual distinction: BPM lumps together process execution and business logic, and UML makes no difference between business and software process execution. One way out of the difficulty would be to single out communications between agents (humans or systems), and specify interactions independently of their contents and channels.

Business Process: Communication + Business Logic

That could be achieved if communication semantics were defined independently of domain-specific languages (for information contents) and technical architecture (for communication channels). As it happens, and not by chance, the outcome would neatly coincide with use cases.

Linguistics & Computers Parlance

Business and functional requirements (see Requirements taxonomy) can be expressed with formal or natural languages. Requirements expressed with formal languages, domain-specific or generic, can be directly translated into some executable specifications. But when natural languages are used to describe what business expects from systems, requirements often require some elicitation.

When that challenge is put into a linguistic perspective, two schools of thought can be considered, computational or functional.

The former approach is epitomized by Chomsky’s Generative Grammar and its claim that all languages, natural or otherwise, share an innate universal grammar (UG) supporting syntactic processing independently of meanings. Nowadays, and notwithstanding its initial favor, that “computer friendly” paradigm hasn’t gained much traction in general linguistics.

Alternatively, the functionalist school of thought considers linguistics as a general cognitive capability deprived of any autonomy. Along that reasoning there is no way to separate domain-specific semantics from linguistic constructs, which means that requirements complexities, linguistic or business specific, have to be dealt with as a lump, with or without the help of knowledgeable machines.

In between, a third approach has emerged that considers language as a functional system uniquely dedicated to communication and the mapping of meanings to symbolic representations. On that basis it should be possible to separate the communication apparatus (functional semantics) from the complexities of business (knowledge representation).

Processes Execution & Action Languages

Assuming the primary objective of business processes is to manage the concurrent execution of activities, their modeling should be driven by events and their consequences for interactions between business and systems. Unfortunately, that approach is not explicitly supported by BPM or UML.

Contrary to the “simplex” mapping of business information into corresponding data models (e.g using Relational Theory), models of business and systems processes (e.g Petri Nets or State Machines) have to be set in a “duplex” configuration as they are meant to operate simultaneously. Neither BPM nor UML is well equipped to deal with the task:

  • The BPM perspective is governed by business logic independently of interactions with systems.
  • Executable UML approaches are centered on software process execution, and their extensions to action semantics deal essentially with class instances, feature values, and object states.

Such shortcomings are of no serious consequence for stand-alone applications, i.e when what happens at the architecture level can be ignored; but overlooking the distinction between the respective semantics of business and software process execution may critically hamper the usefulness, and even the validity, of models purporting to represent distributed interactions between users and systems. Communication semantics may help to deal with the difficulty by providing relevant stereotypes and patterns.

Business Process Models

While business process models can (and should) also be used to feed software engineering processes, their primary purpose is to define how concurrent business operations are to be executed. As far as systems engineering is concerned, that will tally with three basic steps:

  1. Dock process interactions (aka sessions) to their business context: references to agents, business objects and time-frames.
  2. Specify interactions: references to context, roles, events, messages, and time related constraints.
  3. Specify information: structures, features, and rules.
Communication with systems: dock to context (1), interactions (2), information (3).

Although modeling languages and tools usually support those tasks, the distinctions remain implicit, leaving users with the choice of semantics. In the absence of explicit guidelines confusion may ensue, e.g between business rules and their use by applications (BPM), or between business and system events (UML). Hence the benefits of introducing functional primitives dedicated to the description of interactions.

Such functional action semantics can be illustrated by the well known CRUD primitives for the creation, reading, updating and deletion of objects, a similar approach being also applied to the design of domain specific patterns or primitives supported by functional frameworks. While thick on the ground, most of the corresponding communication frameworks deal with specific domains or technical protocols without factoring out what pertains to communication semantics independently of information contents or technical channels.

Communication semantics should be independent of business specific contents and systems architectures.

But that distinction could be especially productive when applied to business processes as it would be possible to fully separate the semantics of communications between agents and supporting systems on one hand, and the business logic used to process business flows on the other hand.

Communication Semantics

Languages have developed to serve two different purposes: first as a means to communicate (a capability shared between humans and many primates), and then as a means to represent and process information (a capability specific to humans). Taking a leaf from functional approaches to linguistics, it may be possible to characterize messages with regard to action semantics, more precisely the associated activity and attendant changes:

  • No change: messages relating to passive (objects) or active  (performed activity) states.
  • Change: messages relating to achievement (no activity) or accomplishment (attendant on performed activity).
Dynamics of communication context

It must be noted that such semantics can also be extended to the state of expectations.

Additionally, messages would have to be characterized by:

  • Time-frame: single or different.
  • Address space: single or different.
  • Mode: information, request for information, or request for action.

Organizing business processes along those principles would enable the alignment of BPMs with their UML counterparts.
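
Those characteristics can be sketched as a message type independent of contents and channels; all names below are illustrative assumptions:

from dataclasses import dataclass
from enum import Enum

class ActionSemantics(Enum):
    PASSIVE_STATE = "object state"        # no change, passive
    ACTIVE_STATE = "performed activity"   # no change, active
    ACHIEVEMENT = "achievement"           # change, no activity
    ACCOMPLISHMENT = "accomplishment"     # change, attendant on activity

class Mode(Enum):
    INFORMATION = "information"
    REQUEST_INFORMATION = "request for information"
    REQUEST_ACTION = "request for action"

@dataclass
class Message:
    semantics: ActionSemantics
    mode: Mode
    single_time_frame: bool     # single or different time-frames
    single_address_space: bool  # single or different address spaces
    contents: object            # business specific, deliberately opaque

# e.g a request to appraise damages, across time-frames and address spaces
request = Message(ActionSemantics.ACCOMPLISHMENT, Mode.REQUEST_ACTION,
                  False, False, {"activity": "appraise damages"})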

Use Cases as Bridges between BPM & UML

Use cases can be seen as the default entry point for UML modeling, and as such they should provide a bridge from business process models. That can be achieved if use cases are understood as a combination of interactions and activities, the former obtained from communications as defined above, the latter from business logic.

Use Cases are best understood as a combination of interactions and business logic

One step further, the distinction between communication semantics and business contents could be used to align models with systems and software architectures respectively.

Last but not least, the consolidation of models contents at architecture level would be congruent with modeling languages, BPM and UML.

Further Reading

External Links

Models Truth and Disproof

January 21, 2015

“If you cannot find the truth right where you are, where else do you expect to find it?”

Dōgen Zenji

Summary

Software engineering models can be regrouped in two categories depending on their target: analysis models represent business context and concerns, design ones represent systems components. Whatever the terminologies, all models are to be verified with regard to their intrinsic qualities, and validated with regard to their domain of discourse, respectively business objects and activities (analysis models), or software artifacts (design models).

Internal & External Consistency (Chris Engman)

Checking for internal consistency is arguably straightforward as proofs can be built with regard to the syntax and semantics of modeling (or programming) languages. Things are more complicated for external consistency because hypothetical proofs would have to rely on what is known of the business domains, whose knowledge is by nature partial and specific, if not hypothetical. Nonetheless, even if general proofs are out of reach, the truth of models can still be disproved by counter examples found among the instances under consideration.

Domains of Discourse: Business vs Engineering

With regard to systems engineering, domains of discourse cover artifacts which are, by “construct”, fully defined. Conversely, with regard to business context and objectives, domains of discourse have to deal with instances whose definitions are a “work in progress”.

Domains of Discourse: Business vs Engineering

That can be illustrated by analysis models, which are meant to translate requirements into system functionalities, as opposed to design ones, which specify the corresponding software artifacts. Since software artifacts are supposed to be built from designs, checking the consistency of the mapping is arguably a straightforward undertaking. That’s not the case when the consistency of analysis models has to be checked against objects and activities identified by business domains of discourse, possibly with partial, ambiguous, or conflicting descriptions. For that purpose some logic may help.

Flat Models & Logic

Business requirements describe objects, events, and activities, and the purpose of modeling is to identify those instances and regroup them into subsets built according to their features and relationships.

How to organize instances of business objects & activities into subsets

As far as models make no use of abstractions (“flat” models), instances can be organized using basic set operators and epistemic (i.e relating to the degree of validation) constraints with regard to existence (m/d), uniqueness (x/o), and change (f/m):

Notation for epistemic constraints

Using the EU-Rent Car example:

  • Rental cars are exclusively and definitively partitioned according to models (mxf).
  • Models are exclusively partitioned according to rental group (mxm), and exclusively and definitively according to body style (mxf).
  • Rental cars are partitioned by derivation (/) according to group and body style.
Flat model using basic set operators for exclusive (cross) and final (grey) partitions (2)

Such models are deemed to be consistent if all instances are consistently taken into account.

Flat Models External Consistency

Assuming that models backbone can be expressed logically, their consistency can be formally verified using a logical language, e.g Prolog.

To begin with, candidate subsets are obtained by combing requirements for core modeling artifacts expressed as predicates (21 for descriptions of actual objects, 121 for descriptions of actual locations, 20 for descriptions of symbolic ones, 22 for descriptions of symbolic partitions), e.g:

  • type(20, manufacturer).
  • type(21, rentalCar).
  • type(22, carModel).
  • type(22, rentalGroup).
  • type(22, bodyStyle).
  • type(121, depot).

Partitions and functional connectors (220 for symbolic reference, 222 for partitions, 221 for actual connection), e.g:

  • connect(222, rentalCar, carModel, mxf).
  • connect(222, carModel, rentalGroup, mxm).
  • connect(222, carModel, bodyStyle, mxf).
  • connect(220, manufacturer_, carModel, manufacturer, mof).
  • connect(121, location, rentalCar, depot, mxt).

Finally, features and structures (320 for properties, 340 for operations), e.g:

  • feature(340, move_to, depot).
  • feature(320, address).
  • feature(320, location).
  • member(manufacturer, address, mom).
  • member(rentalCar, location, mxm).
  • member(rentalCar, move_to, mxm).

Those candidate descriptions are to be assessed and improved by applying them to sets of identified occurrences taken from requirements. The objective being to map each instance to a description: instance(name, term()), e.g:

  • instance(sedan,carModel(f1(F1),f2(F2))).
  • instance(coupe,carModel(f1(F1),f2(F2))).
  • instance(ford, manufacturer(f6(F6),f7(F7))).
  • instance(focus, rentalCar(f6(F6),f7(F7))).
  • instance_(manufacturer_,focus,ford).

Using a logical interpreter, validation can then be carried out iteratively by looking for counter examples that could disprove the truth of the representations:

  • All instances are taken into account: there is no instance N without instance(N,Structure).
  • Logical consistency: there is no instance N with conflicting partitioning (native and derived).
  • Completeness: there is no instance type(X,N,T(f1,f2,..)) with undefined feature fi.
  • Functional consistency: there is no instance of relation R (native and derived) without a consistent type relation(X, R, Origin, Destination, Epistemic).

It must be noted that the approach is not limited to objects and is meant to encompass the whole scope of requirements: actual objects, symbolic representations, business logic, and processes execution.
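
Absent a Prolog interpreter, the same counter-example queries can be mirrored in any language; a Python sketch of the first two checks, with a deliberately unaccounted occurrence added for illustration:

# Facts mirrored from the Prolog-like notation above
types = {"manufacturer", "rentalCar", "carModel", "rentalGroup", "bodyStyle", "depot"}
instances = {"sedan": "carModel", "coupe": "carModel",
             "ford": "manufacturer", "focus": "rentalCar"}
occurrences = ["sedan", "coupe", "ford", "focus", "fiesta"]  # taken from requirements

# Counter-example searches: any hit disproves the truth of the representations
unaccounted = [n for n in occurrences if n not in instances]     # -> ['fiesta']
ill_typed = [n for n, t in instances.items() if t not in types]  # -> []
assert not ill_typed
print("counter examples:", unaccounted)  # the model must be revised accordingly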

Multilevel Models: From Partitions to Sub-types

Flat models fall short when specific features are to be added to the elements of partitions subsets, and in that case sub-types must be introduced. Yet, and contrary to partitions, sub-types come with abstractions: set within a flat model (i.e without sub-types), Car model fully describes all instances, but when sub-types sedan, coupe, and convertible are introduced, the Car model base type is nothing more than a partial (hence abstract) description.

From partition to sub-types: subsets descriptions are supplemented with specific features.

Whereas that difference may seem academic, it has direct and practical consequences when validation is considered because consistency must then be checked for every level, concrete or abstract.

LSP & External Consistency

As it happens, the corresponding problem has been tackled by Barbara Liskov for software design: the so-called Liskov substitution principle (LSP) states that if S is a sub-type of T, then instances of T may be replaced with instances of S without altering any of the desirable properties of the program.

Translated to analysis models, the principle would state that, given a set of instances, a model must keep its consistency independently of the level of abstraction considered. As a corollary, and assuming a model abides by the substitution principle, it would be possible to generalize the external consistency of a detailed level to the whole model whatever the level of abstraction. Hence the importance of compliance with the substitution principle when introducing sub-types in analysis models.

All instances must be accounted for whatever the level of abstraction

Applying the Substitution Principle to Analysis Models

Abstraction is arguably the essence of requirements modeling as its purpose is to bring specific and changing concerns under a common, consistent, and lasting conceptual roof. Yet, the two associated operations of specialization and generalization often receive very little scrutiny despite the fact that most of the related pitfalls can be avoided if the introduction of sub-types (i.e levels of abstraction) is explicitly justified by partitions. And that can be achieved by the substitution principle.

First of all, and as far as requirements analysis is concerned, sub-types should only be introduced for specific features, properties, or operations. Then, epistemic constraints can be used to tally the number of specialized instances with the number of generalized ones, and check for the possibility of functional discrepancies:

  • Discretionary (or conditional or non exhaustive) partitions (d__) may bring about more instances for the base type (nb >= ∑nbi).
  • Overlapping (or duplicate or non isolated) partitions (_o_) may bring about less instances for the base type (nb <= ∑nbi).
  • Assuming specific features, mutable (or reversible) partitions (__m) mean that features may differ between levels; otherwise (same features) sub-types are not necessary.
Epistemic constraints on partitions can be used to enforce the LSP

Using a Prolog-like language, the only modification will concern the syntax of predicates, with structures replaced by lists of features:

  • type(20, manufacturer, [f6,f7]).
  • type(21, rentalCar, [f5]).
  • type(22, carModel, [f1,f2]).
  • type(22, rentalGroup, [f9]).
  • type(22, bodyStyle, [f8]).
    • type(20, bodyStyle:sedan, [f11,f12]).
    • type(20, bodyStyle:coupe, [f13]).
    • type(20, bodyStyle:convertible, [f14]).
  • type(121, depot, [f10]).

The logical interpreter could then be used to map the sub-types to partitions and check for substitution.
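
Tallying the instance counts could itself be sketched as follows, the epistemic codes being read as above and the counts made up:

def tally(nb: int, sub_counts: list, constraint: str) -> bool:
    """Check base-type instances (nb) against sub-type counts, following the
    epistemic codes: d = discretionary, o = overlapping, else exact tally."""
    total = sum(sub_counts)
    if constraint[0] == "d":   # discretionary: base may have more instances
        return nb >= total
    if constraint[1] == "o":   # overlapping: base may have fewer instances
        return nb <= total
    return nb == total         # mandatory & exclusive: counts must tally

# bodyStyle partition of carModel is mandatory, exclusive, final (mxf)
assert tally(12, [5, 4, 3], "mxf")
assert not tally(12, [5, 4, 2], "mxf")  # a counter example to be investigated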

Further Readings