Archive for the ‘Project Management’ Category

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where deep learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements are satisfied as well as users’ expectations for service, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences between scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need for some “manual” teaching. That hurdle may be overcome by the deep learning ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
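
To give a more concrete idea, here is a minimal sketch of that mining step; rarity scoring over recorded event sequences stands in for an actual deep learning model, and the log contents are invented for the purpose:

  from collections import Counter

  # Hypothetical log extract: one recorded event per entry.
  log = ["login", "browse", "add_to_cart", "checkout", "login", "browse",
         "browse", "checkout", "timeout", "login", "add_to_cart", "checkout"]

  def rare_sequences(events, n=2, top=3):
      """Rank n-grams of events by rarity; the rarest sequences point to
      candidate critical test cases. A trained sequence model would take
      the place of this frequency count in a deep learning setting."""
      grams = [tuple(events[i:i + n]) for i in range(len(events) - n + 1)]
      counts = Counter(grams)
      return sorted(counts, key=counts.get)[:top]

  print(rare_sequences(log))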

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed to the policy module in charge of defining test cases, as sketched below.
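
A toy sketch of that loop, with hypothetical categories and a fixed relevancy function standing in for the assessment against business objectives:

  import random

  # Hypothetical test-case categories mined from the business environment,
  # each with a weight maintained by the policy module.
  weights = {"new_tariff": 1.0, "peak_load": 1.0, "regulatory_change": 1.0}

  def relevancy(category):
      """Stand-in for assessing an observation against business objectives;
      a learned reward model would take its place."""
      return {"new_tariff": 0.9, "peak_load": 0.4, "regulatory_change": 0.7}[category]

  def explore_and_update(rounds=200, rate=0.1):
      for _ in range(rounds):
          categories, ws = zip(*weights.items())
          picked = random.choices(categories, weights=ws)[0]  # exploration
          # feedback: reinforce the policy with the assessed relevancy
          weights[picked] *= 1 + rate * (relevancy(picked) - 0.5)

  explore_and_update()
  print(max(weights, key=weights.get))  # category to favor for test cases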

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best-designed guardrails. On one hand, and due to their very nature, such hazards are not easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning, combining random simulation with automated reinforcement, is precisely where deep learning stands apart.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users’ interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, doing so opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn stays with UCs as defined by Jacobson, refining them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.
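
For illustration, a minimal sketch of differentiated use cases with abstraction connectors; “Handle Claim” comes from Cockburn’s example, while the concrete use case and field names are hypothetical:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class UseCase:
      name: str
      level: str                          # "white", "blue", or "indigo"
      abstract: bool                      # white-level UCs cannot be instantiated
      parent: Optional["UseCase"] = None  # abstraction connector

  handle_claim = UseCase("Handle Claim", "white", True)
  update_claim = UseCase("Update Claim", "blue", False, parent=handle_claim)

  # Concrete UCs anchor interactions; their abstract parents carry
  # the broader business objectives.
  assert update_claim.parent.abstract and not update_claim.abstract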

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow a seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users’ concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant because mapping business rules with applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels,” BUCs are to remain nondescript practices; as for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.

Broader enterprise and business considerations can then be documented depending on scope.

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.


iStar modeling puts the focus on communication (N. Rockwell)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to resolve the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting aside frictions between specific methods, the benefits of agile principles and practices are now well recognized, within the limits of agile’s scope. In short, agile development is at its best when requirements capture and analysis can be woven together with development and tests. The question remains of what happens when requirements are to be dealt with separately.

iStar’s answer shares with agile a focus on collaboration and doesn’t take sides for business (e.g users’ stories) or systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges: e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g appraise damages; qualifications (soft goals), e.g fair appraisal; and resources, e.g premium payment.

Actors & Dependencies

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope (see the sketch below):

  • Goal (strong) dependency: assertions about the actual state of affairs (object, activity, or expectations).
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
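
These categories lend themselves to a straightforward representation; the sketch below encodes the car-repair example from above, with the depender/dependee reading being an assumption:

  from dataclasses import dataclass
  from enum import Enum

  class Kind(Enum):
      GOAL = "goal"            # strong: actual state of affairs
      SOFT_GOAL = "soft goal"  # expected outcomes
      TASK = "task"            # execution constraints
      RESOURCE = "resource"    # availability of inputs

  @dataclass
  class Dependency:
      depender: str   # actor relying on the dependency
      dependee: str   # actor expected to fulfill it
      subject: str
      kind: Kind

  model = [
      Dependency("car owner", "body shop", "car repaired", Kind.GOAL),
      Dependency("body shop", "appraiser", "appraise damages", Kind.TASK),
      Dependency("car owner", "appraiser", "fair appraisal", Kind.SOFT_GOAL),
      Dependency("insurance company", "car owner", "premium payment", Kind.RESOURCE),
  ]
  print(len(model), "dependencies")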

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, with strong dependencies associated with deontic rules and soft dependencies with alethic ones.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML Use Cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.


iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly for the definition of primary and secondary roles (aka actors), intents (business) and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if the logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor is the nature of capabilities, e.g strategic and operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared intent on conceptual modeling; as a corollary these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links

Business Problems shouldn’t sleep with IT Solutions

October 8, 2016

Preamble

The often mentioned distinction between problem and solution levels may make sense from an analyst’s particular point of view, whether business or system. But blending problems and solutions independently of their nature becomes a serious oversimplification for enterprise architects, considering that one of their prime responsibilities is to keep business problems apart from IT solutions.


Functional problem with technical solution (Mircea Cantor)

That issue is relevant from an engineering as well as a business perspective.

Engineering View: Problem Levels & Architecture Layers

As long as computers are used to solve problems the only concern is to find the best solution, and the only architecture of concern is software’s.

But enterprise architects have to deal with systems, not computers, namely how to best serve business objectives with corporate resources, across business units and along business cycles. For that purpose resources (financial, human, technical) and their use are to be layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

From an engineering perspective, the intended congruence between problems levels and architecture layers can be illustrated with the OMG’s model driven architecture (MDA) framework:

  • Computation independent models (CIMs) deal with business process solutions, to be translated into functional problems for supporting systems.
  • Platform independent models (PIMs) deal with functional solutions, to be translated into technical problems for supporting platforms.
  • Platform specific models (PSMs) deal with technical solutions, to be implemented as code.

MDA layers can be mapped to a clear hierarchy of problems and solutions

Along that understanding, architectures can be seen as solutions, and the primary responsibility of enterprise architects is to see that problem/solution pairs remain in their respective swim-lanes.
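
A toy rendering of that hierarchy, with models reduced to labeled strings and placeholder transformation rules standing in for actual MDA tooling:

  from dataclasses import dataclass

  @dataclass
  class Model:
      layer: str    # "CIM", "PIM", or "PSM"
      content: str

  def cim_to_pim(cim):
      # business process solution -> functional problem for systems
      return Model("PIM", f"functions supporting: {cim.content}")

  def pim_to_psm(pim):
      # functional solution -> technical problem for platforms
      return Model("PSM", f"components implementing: {pim.content}")

  cim = Model("CIM", "claims handling process")
  print(pim_to_psm(cim_to_pim(cim)))   # the PSM is then implemented as code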

Business View: Business Value & Enterprise Assets

Whereas the engineering perspective may appear technical or specific to a model based approach, the same issue is all the more significant when expressed with regard to business concerns and corporate governance. In that case the critical distinction is between business value and assets:

  • Business value: Problems are set by business opportunities, and solutions by processes and applications. The critical factor is reactivity and time-to-market.
  • Assets: Problems are set by business objectives and strategy, and solutions are to be supported by organization and systems capabilities. The critical factor is reuse and ROI.

Decision-making must distinguish between business opportunities and enterprise governance

If opportunities are to be seized and operations managed on the fly, yet still tally with strategic decisions, the respective problems and solutions should be kept apart. Juggling with their dynamic alignment is at the core of enterprise architects’ job description.

Enterprise Architects & Governance

Engineering and business perspectives are not to be seen as the terms of an alternative to be picked by enterprise architects. As a matter of fact they must be crossed and governance policies selected depending on the point of view:

  • Looking at EA from an engineering perspective, the business one will focus on systems governance and assets management, as epitomized by model based systems engineering schemes.
  • Looking at EA from a business perspective, the engineering one will focus on lean and just-in-time solutions, as epitomized by agile development models.

Since the governance of large and complex corporate entities, supposedly EA’s primary target, must deal with tactical, operational, and strategic concerns, the nexus between business and engineering perspectives is where enterprise architects are to stand.

Projects Have to Liaise

May 25, 2016

Preamble

Liaison between projects is all too often preempted by methodological issues. So when some communication is needed, the alternatives should not be limited to either no models at all or a medley of ambiguous ones.


Avoiding Parochial Behaviors, Turf Wars, and Autarky (Juan Munoz)

Archetypal Development Models

Software engineering processes can be grouped into two categories: phased ones are segmented with regard to responsibilities, tasks, and the nature of artifacts; agile ones are iterative, with a single team sharing responsibility for the definition, building, and acceptance of final outcomes.


Compared to phased projects, agile ones start right away with software and carry on without making use of intermediate development artifacts.

With regard to modeling languages, both approaches may encounter parochial pitfalls: tasked teams of phased projects may go through turf wars and misunderstandings, while agile teams could tend toward autarkic biases and objections to models.

Stories Must Be Set in Context

Agile projects start right away with writing users’ stories into software, making no use of intermediate development artifacts other than code. With business analysts and software engineers working side by side, the semantics of business objects and activities is meant to be directly, if progressively, inscribed into software artifacts.

Nonetheless, users’ stories, being by nature specific, are to be set against the broader context of enterprise business objectives. Teams may therefore have to communicate with outside entities and, considering that source code is seldom the preferred language of other units, agile teams may have to resort to means they would otherwise disparage. They would be in a better position if stories could be docked to open concepts to be used to contrive messages in line with projects’ needs and creeds.

Developments Must be Shared

Paraphrasing Einstein, the only reason for phased processes is so that everything doesn’t happen at once. In that case intermediate artifacts are to be introduced between tasks. But then, suppliers and customers, often from different backgrounds and with different concerns, have to agree about the semantics of the development flows. Moreover, the accuracy and consistency of agreed upon definitions must stand the test of time whatever the changes on each side.

As illustrated by model based software engineering processes, that objective is essentially attainable for programming languages; otherwise blurred footprints and ambivalent semantics require cumbersome maintenance of transformation rules as well as regressive updates of previous versions of the artifacts. In that case open concepts may help to prevent the corruption of a core of sound specifications by surrounding ambiguous ones.

Open Concepts: Yet Another Conceptual Framework ?

As many will have noticed, there is no lack of frameworks, conceptual or otherwise. So what could be the point of yet another one?

The answer, as it should be, is to be found in its impact on the use, or reuse, of artifacts by projects and teams, whatever their preferred development model. And that’s why the open source paradigm applied to open concepts is critical:

  • The difference between generalization and specialization is fully taken into account so that the semantics of sub-types defined by different projects cannot be modified.
  • The concept of Individuals is used to guarantee that business objects and activities are consistently identified across projects.
  • The semantics of sub-types are consistently, but not necessarily uniformly, defined across projects.
  • There is no overlapping of semantics even when subsets of individuals overlap.

Those are very strong constraints which, combined with the already limited footprint of open concepts, will result in a very compact set of concepts. And that is to make the difference: a small set of concepts, built from well-known principles, with clear properties and benefits, to be shared by projects independently of their modeling languages and development methods.
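
As a sketch of what those constraints could look like in practice (the concept names are hypothetical, and Python’s frozen dataclasses merely approximate the intended semantics):

  from dataclasses import dataclass, field
  import uuid

  @dataclass(frozen=True)
  class Individual:
      """Open concept: consistent identification of business objects and
      activities across projects; frozen, so sub-types inherit the
      identity semantics unchanged."""
      uid: str = field(default_factory=lambda: str(uuid.uuid4()))

  @dataclass(frozen=True)
  class Customer(Individual):   # project A's specialization
      name: str = ""

  @dataclass(frozen=True)
  class Claimant(Individual):   # project B's specialization
      policy: str = ""

  # Sub-types are defined consistently but not uniformly, and their
  # semantics don't overlap even if the underlying individuals do.
  print(Customer(name="Smith").uid != Claimant(policy="P-1").uid)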

Further Readings

Models Transformation & Agile Development

April 5, 2016

Models transformation is generally recognized as the basic mechanism of model based systems engineering (MBSE). Yet, the actual scope of transformations is somewhat limited to design-to-code, and its sequential bias puts MBSE at odds with agile development approaches. Could a revisited understanding help to figure out this apparent discrepancy?


Weaving Life by Crossing Patterns (Sand Painting Navajo Rug)

Transformation Issues

The traditional transformation paradigm involves ordered sequences of models obtained by applying rules to their immediate predecessor(s). That organizational scheme has three critical consequences, for applicability, economics of reuse, and development processes.

  • Applicability: the effectiveness of transformations is conditioned by (a) an executable language for the description of targets, and (b) a closed and compact set of unambiguous patterns. Those conditions can only be satisfied for the downstream part of the development process.
  • Reuse: given the sequencing constraints, models are to be managed and reused along tree-like structures with duplicates introduced at branching points.
  • Development processes: sequenced models bring forth phased options and leave out agile solutions.

Assuming those issues are not conclusive, they may be overcome by revisiting the nature of transformations.

Transformation vs Inheritance & Composition

Most of the proposed taxonomies (see references below) put the focus on languages and mechanisms (e.g rules) of sequential transformation without paying enough attention to the nature and the semantics of models contents. Even when abstraction levels are taken into account, the respective contents of each level remain undefined. As it happens, that issue may be the key to a better understanding of models transformation.

To begin with, rule-based transformation has to be compared to inheritance and composition:

  • Structural inheritance can be used to refine models as to take into account business scenarii previously ignored; e.g special conditions for good customers.
  • Functional inheritance can be used to introduce new capabilities; e.g new authentication procedures.
  • Functional composition can be used to apply capabilities across different scenarii; e.g customized authentication procedures.
  • Rules-based solutions can be used by any kind of transformation.

A broader understanding of models transformation should include inheritance and composition

That taxonomy implies a clear distinction between operations executed within the same level of abstraction and those targeting artifacts defined at different levels: contrary to rules-based transformations, inheritance and composition can only be applied to artifacts sharing common semantics.
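
The distinction can be illustrated with a minimal sketch; the pricing and logging examples are invented, standing in for the scenarios and capabilities mentioned above:

  class Pricing:                          # existing artifact
      def price(self, customer):
          return 100.0

  class GoodCustomerPricing(Pricing):     # structural inheritance:
      def price(self, customer):          # a scenario previously ignored
          base = super().price(customer)
          return base * 0.9 if customer == "good" else base

  class AuthenticatingPricing(Pricing):   # functional inheritance:
      def authenticate(self, user):       # a new capability
          return user is not None

  def with_logging(operation):            # functional composition: the same
      def wrapped(*args):                 # capability applied across scenarios
          result = operation(*args)
          print(operation.__name__, "->", result)
          return result
      return wrapped

  priced = with_logging(GoodCustomerPricing().price)
  priced("good")   # all three share the semantics of the Pricing artifact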

Heterogeneous Models

While that would clearly prevent their use for models organized along abstraction levels, semantic pitfalls could be mastered for models built from artifacts from different abstraction levels.

Releasing models from (still to be defined) abstraction levels would bring two critical benefits:

  • Whatever the terminology (abstract, conceptual, functional, etc.), abstraction semantics are much easier to define for artifacts than for models.
  • That would remove a chunk of restrictions on the design of transformation processes.

Releasing models from abstraction layers.

In that case transformation rules could be turned into combination ones and sequential transformation turned into cross-breeding.

Mendel, Models, Mongrels

Taking a cue from Gregor Mendel’s use of cross-fertilization, the aim of a revisited transformation paradigm would be three-fold:

  1. To refine the granularity of reuse, from models to artifacts.
  2. To substitute combination for sequential transformation whenever possible.
  3. To substitute graphs for trees, with models organized along two basic layers, final (aka mongrels) or reusable (aka blueprints).

Models combination (top) replaces transformation phases (bottom) by a distinction between blueprints (full line) and mongrels (dashed line).

As far as MBSE is concerned, the genetics metaphor helps to clarify the nature of abstraction. Conceptually, it introduces a distinction between artifacts and models:

  • With regard to artifacts, abstraction layers are defined by scope: enterprise, systems, platforms.
  • With regard to models, abstraction layers are defined by capabilities: reusable (stable traits), or final (recessive traits).

That taxonomy is corroborated by its functional counterpart: artifacts transformation is carried out with inheritance and composition, while models transformation relies on combination.

More importantly, that understanding goes a long way toward solving the issues regarding scope, reuse, and development processes.

Scope: Weaving Analysis & Design Traits

Definitions and taxonomies should always be assessed with regard to their applicability. On that account there isn’t much to say for abstraction layers applied to models: they don’t fit because too many traits can be defined across different layers, e.g business rules, authentication, encryption, etc.

That difficulty can be neatly and consistently removed by models built from artifacts defined at different levels.

Models Reuse: Blueprints vs Mongrels

Reuse is all too often seen as a contentious objective with inconclusive ROI. On one hand it requires significant overheads to manage the resources; on the other hand the outcomes can introduce regressive traits. The distinction between sound reusable models and final ones significantly reduces both the costs of the former and the risks of the latter.

Processes Organization: MBSE & Agile

Model based systems engineering and the agile development model are arguably two of the most conclusive approaches to software engineering. Unfortunately they are often seen as difficult bedfellows, principally (but not uniquely) because the former insists on the importance of models with some bias toward phased processes, while the latter is all for iterative processes with models mentioned as an afterthought, if at all. Yet, both approaches could be made complementary on condition that models could be processed iteratively. And that could be achieved if sequenced transformations of homogeneous models would be replaced by the combination of heterogeneous ones.


Iterative development mixing new business requirements with existing functionalities (a) and business rules (b).

Within such a framework an agile team could, e.g, iteratively develop new business requirements, taking into account existing functionalities (a) and business rules (b), and generate code (c).

Further readings

External Links

Agile Business Analysis: From Wonders to Logic

March 7, 2016

Time and again new recruits will ask about the role of business analysts. Considering that such a question is seldom heard from software engineers, are BAs more curious about their job, or are they standing on more tentative grounds? If that’s the case, agility would help them flip-flop between business quicksands and systems’ hard rocks.


How to make sense of business wonders (Hieronymus Bosch)

Holding the fort vs scouting outskirts

Systems architects and software engineers may have to meet esoteric business requirements, but their responsibility is first and foremost to guarantee the functional and economic sustainability of systems. On that account they are given licence to build solid walls and secure gateways, and to enforce their own languages and rules upon well vetted parties.

Business analysts don’t get such a free hand: while being straitened by software engineers’ constructs and constraints, their primary undertaking is to explore business wilds, reconnoitre competitors, trace new tracks, and learn the dialects of any nicknamed natives ready to trade.

No wonder the qualms of new business analysts.

Great businesses make their own rules

The best rules in business are the ones still unbeknownst, as success is most often brought by disruptive initiatives taking advantage of previously undiscovered opportunities. It ensues that at its core, BAs’ job description is to relentlessly look across the frontier for still uncharted businesses, and bring them back to the digitized world of shipshape business domains and processes.

For that purpose BAs will have to juggle with the fuzzy idiosyncrasies of new business openings until they can be aligned with the functionalities of “legacy” systems.

BA’s Agility

While usually presented as a software engineering hallmark, agility may be equally useful for business analysts as they have to balance two crossing perspectives:

  • Analysis: sorting detailed activities into business processes.
  • Synthesis: factoring out business functions and mapping them to systems capabilities.

That could be a challenging achievement if carried out sequentially: crossing back and forth between changing scope and steady capabilities could generate unsettling alternatives and unbounded complexity.

The agile development model is meant to tackle the difficulties through iterations and collaboration without being too specific about the kind of agility required from business analysts and software engineers.

Yet the apparent symmetry between the parties may be misleading: whereas software engineers don’t have (and shouldn’t even try) to second guess business analysts, business analysts shouldn’t forget that at the end of the day business expectations, however exotic or esoteric, will have to feed very conformist logical beasts.

Further Readings

Quality Circles

November 11, 2015

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering it would mean code, users’ experience, and operations, each with its own specific stakeholders and criteria.


A bird’s-eye view on quality circles (Jonathan Monk)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need for a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for processes improvement remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.

Assessment at each level may be conditioned by lower levels

In any case (i.e with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g how to assess insurance premiums or compute missile trajectory.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.

Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed as to eliminate some uncertainties (e.g reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g unexpected regulatory change or delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defaults or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g resources consumption, late delivery) against the cost of preventive (e.g more systematic checks on consistency, additional staff) or corrective (e.g tests or maintenance) measures.

Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between regulatory compliance, risks management, and quality assurance.
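
For the managed-risks policy, the balancing act comes down to comparing expected costs; a bare-bones sketch with invented figures:

  def expected_cost(price, likelihood, measure_cost=0.0):
      """Cost of a default priced and weighted by likelihood, plus the
      cost of any preventive or corrective measure."""
      return price * likelihood + measure_cost

  # A delay priced at 50k with 10% likelihood, vs a preventive measure
  # (more systematic consistency checks) costing 3k that cuts it to 2%.
  monitor_only = expected_cost(50_000, 0.10)
  prevention   = expected_cost(50_000, 0.02, measure_cost=3_000)
  print("monitor only:", monitor_only, "| prevention:", prevention)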

Further Readings

Agile Delivery & Dynamic Programming

September 7, 2015

Synopsis

Business driven development is arguably the cornerstone of the agile development model. On one side it means business value set by users and stakeholders, on the other side it entails continuous and just-in-time delivery; what happens in-between is set by backlogs (for development) and product increments (for delivery).


Continuous Delivery

A sanguine understanding of continuous releases may assume that planning is no longer relevant, and that deployment can be carried out “on-the-fly”. But that would assume that stakeholders and product owners are ready to put aside roadmaps, overlook milestones, and more generally forget that time is money. That would go against a basic objective of agile, namely that developments must be driven by business needs, and products delivered just-in-time for best value. Given the well-established track record of dynamic programming for manufacturing processes, could the technique be usefully applied to agile engineering processes?

Delivery, Deployment, & Continuity

Continuity doesn’t mean synchronization: business, engineering, and operations are governed by different concerns set along different time-frames. Some buffering is needed, materialized by the distinction between releases (engineering concerns) and deployment (business and operational concerns).


A time for every purpose: Epics & business roadmap (a), associated backlogs of users’ stories (b), released features (c), architectural capabilities (d), deployed components (e).

Such distinctions introduce both overlapping (with business time-frame) and discontinuity (between development and deployment):

  • Product roadmaps are set in business time-frames and determine development and deployment time-frames.
  • Development time is set by product roadmaps and runs clockwise from project inception to software releases (a>b>c).
  • Deployment time is also set relative to product roadmaps but it runs counterclockwise, from product deployment as planned by business back to released software components (c<e).

Development and deployment runs can be compared to crews tunneling through a mountain from both sides; where and when they meet leaves room for adjustments. Yet more is at stake in the meeting between development completion and deployment inception. Apart from time, adjustments may also bear on formats and contents; and given the specificity of development and deployment purposes, their adjustment may also be seen as the morphing of continuous software releases (project perspective) into discrete increments (product perspective).

Dynamic Programming

Dynamic programming (aka multistage programming) is a problem solving method that combines two principles:

  • Divide & conquer is a general purpose strategy that deals with the intrinsic complexity of problems by breaking them down into collections of simpler sub-problems to be solved separately depending on sequencing constraints.
  • Recursion deals with the lack of complete or accurate information upfront, solving the problem in stages rather than as one entity. Each stage is optimized separately on the basis of the current state reflecting decisions taken at previous stages, the optimality principle guaranteeing the optimality of the final outcome.

That incremental and iterative approach clearly befits the tenets of the agile development model.

Twofold Planning

As noted above, and whatever the technique, agile processes entwine two phases, one for development, the other for deployment, each with its own planning:

  • The development planning, epitomized by backlog management, deals with the definition of work units and their sequencing.
  • The deployment planning deals with the merging of software releases into products and their incremental deployment.

Project work units are sequenced (backlog), Product increments are merged, and both are dynamically adjusted around their nexus.

That makes for multiple dynamics, first for updated backlogs, then for updated deployment targets, and finally for possible feedback through their nexus. That can only be achieved with dynamic programming.

Backlog: Multistage sequencing

Backlogs are used to manage work units targeting self-contained requirements items; they can be represented by graphs with nodes standing for work units and arcs weighted by constraints.

Basically, the problem is to optimize the development of a given set of items given users’ priorities, technical constraints, and resources availability. When all the information is available upfront, optimum solutions can be obtained with simple Shortest Path algorithms. Yet, given the iterative and exploratory nature of agile processes, backlogs are meant to be updated as the project advances, taking advantage of improved knowledge:

  • Users may introduce new items, remove existing ones, or change their priorities due to a better understanding of the requirements space.
  • Engineers may also introduce new items (e.g for technical debt) or reconsider technical difficulties and dependencies due to a better understanding of the solutions space.

Dynamic reordering of backlogs: looking forward for the optimum path to completion.

Dynamic programming is introduced in order to support step-wise decisions optimizing the whole process:

  • Backlog states (t1 & t2) are defined by the remaining work units, rankings, and feasibility constraints.
  • Each stage redefines the optimum path to completion taking into account the current state and updated information. Since recursive computation is based on the summary information etched in states, all future decisions can be selected optimally without recourse to information regarding previously made decisions.

Given a set of feasible paths (as defined by technical dependencies and time), the aim at each stage is to select the optimum path for the remaining units based on current state. Optimization functions will typically consider users’ value, learning curve and associated risks, and resources availability and costs.
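
A bare-bones sketch of the mechanism, with invented work units and weights; plain shortest-path computation stands in for a fully-fledged dynamic programming scheme:

  import heapq

  def optimum_path(graph, start, goal):
      """Plain shortest-path (Dijkstra): each stage only needs the current
      state, per the optimality principle of dynamic programming."""
      queue, seen = [(0, start, [start])], set()
      while queue:
          cost, node, path = heapq.heappop(queue)
          if node == goal:
              return cost, path
          if node in seen:
              continue
          seen.add(node)
          for nxt, weight in graph.get(node, {}).items():
              heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))

  # Work units a..c, with arc weights standing for rankings and constraints.
  backlog = {"start": {"a": 2, "b": 5}, "a": {"c": 2}, "b": {"c": 1}, "c": {"done": 1}}
  print(optimum_path(backlog, "start", "done"))   # via a

  backlog["start"]["b"] = 1   # users change priorities: re-optimize
  print(optimum_path(backlog, "start", "done"))   # now via b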

As illustrated below, nodes can represent grouped items, e.g when several projects have to share resources or releases are to be regrouped.

Deployment: Dynamic merging

Given a set of released software components, the aim of deployment planning is to decide which increment to add to the deployed product. Assuming that technical concerns have already been dealt with by releases, the objective at each stage is to select the items maximizing the ROI of the deployed product. It must be noted that, contrary to the development algorithm looking forward for the optimization, the deployment algorithm selects the optimum path by looking backward at the ROI of deployed products.


Dynamic reordering of deployments: looking backward for the optimum path to completion.
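
A sketch of that backward-looking selection, with invented ROI and size figures; brute-force enumeration stands in for the knapsack-style recursion that realistic portfolios would call for:

  from itertools import combinations

  released = {"search": (8, 3), "billing": (11, 4), "alerts": (4, 2)}  # (roi, size)
  CAPACITY = 6   # deployment capacity for the increment

  def best_increment(components, capacity):
      """Select the increment maximizing the ROI of the deployed product."""
      best, best_roi = (), 0
      for r in range(1, len(components) + 1):
          for combo in combinations(components, r):
              roi = sum(components[c][0] for c in combo)
              size = sum(components[c][1] for c in combo)
              if size <= capacity and roi > best_roi:
                  best, best_roi = combo, roi
      return best, best_roi

  print(best_increment(released, CAPACITY))   # (('billing', 'alerts'), 15)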

But the backward impact of deployment optimization can go further and affect backlogs.

Feedback

Shared ownership and continuous delivery are two main pillars of the agile development model, the former giving the development team full authority and responsibility, the latter ensuring that users keep a firm hand on the helm. Yet, as already noted, development, deployment, and business are governed by different time-frames, which could induce some frictions, e.g if business units were forced to synchronize product deployments with software releases. While severe disruptions can be avoided if releases and deployments are managed separately, development teams cannot be completely sheltered from changes in business or operational priorities. That is where the dynamic reassessment of optimum paths is to help: assuming a change in deployment planning (nq instead of op), the new priorities are fed back into development (aka backlog) rankings, and the optimum path is updated.


Change in deployment priorities (nq instead of op) can be fed back to backlog planning (f before a).

It must be noted that such feedback only affects ranks and leaves contents unchanged.

Conclusion: Business driven, Just-in-time delivery, & Lean Inventories

Dynamic programming appears as a primary factor with regard to three core tenets of the agile development model:

  • Business driven development doesn’t mean that developments are pushed by requirements but that they are pulled by deployment.
  • Just-in-time delivery can only be achieved with the help of a buffer between development and deployment. This buffer should not be confused with an inventory as it has nothing to do with product quantities.
  • On the contrary, this buffer, combined with dynamic programming, plays a critical role in the cutback of intermediate documents and models (aka development inventories).

Further Readings

External Links

Feasibility & Capabilities

May 2, 2015

Synopsis

As far as systems engineering is concerned, the aim of a feasibility study is to verify that a business solution can be supported by a system architecture (requirements feasibility) subject to some agreed technical and budgetary constraints (engineering feasibility).


Project Rain Check  (A. Magnaldo)

Where to Begin

A feasibility study is based on the implicit assumption of slack architecture capabilities. But since capabilities are set with regard to several dimensions, architecture boundaries cannot be taken for granted and decisions may even entail some arbitrage between business requirements and engineering constraints.

Using the well-known distinction between roles (who), activities (how), locations (where), control (when), and contents (what), feasibility should be considered for supporting functionalities (between business processes and systems) and implementation (between functionalities and platforms):


Feasibility with regard to Systems and Platforms

Depending on priorities, feasibility could be considered from three perspectives:

  • Focusing on system functionalities (e.g with use cases) implies that system boundaries are already identified and that the business logic will be defined along with users’ interfaces.
  • Starting with business requirements puts business domains and logic in the driving seat, making room for variations in system functionalities and boundaries.
  • Operational requirements (physical environments, events, and processes execution) put the emphasis on a mix of business processes and quality of service, thus making software functionalities a dependent variable.

In any case a distinction should be made between requirements and engineering feasibility, the former set with regard to architecture capabilities, the latter with regard to development resources and budget constraints.

Requirements Feasibility & Architecture Capabilities

Functional capabilities are defined at system boundaries and, if all feasibility options are to be properly explored, architecture capabilities must be understood as a trade-off between the five intrinsic factors, e.g:

  • Security (entry points) and confidentiality (domains).
  • Compliance with regulatory constraints (domains) and flexibility (activities).
  • Reliability (processes) and interoperability (locations).

Feasible options must be set within the Capabilities Pentagon

Feasible options could then be figured out by points set within the capabilities pentagon. Given metrics on functional requirements, their feasibility under the non-functional constraints could be assessed with regard to cross capabilities. And since the same five core capabilities can be consistently defined across enterprise, systems, and platforms layers, requirements feasibility could be assessed without prejudging architecture decisions.
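
As a toy illustration, assuming hypothetical metrics on a 0-10 scale for each of the five factors, feasibility amounts to checking that the requirements profile fits within the pentagon on every axis:

  capabilities = {"roles": 7, "activities": 6, "locations": 5, "control": 9, "contents": 8}
  required     = {"roles": 6, "activities": 6, "locations": 4, "control": 8, "contents": 7}

  def feasible(required, capabilities):
      """A requirements profile is feasible when it fits within the
      capabilities pentagon on every axis."""
      gaps = {axis: required[axis] - capabilities[axis]
              for axis in capabilities if required[axis] > capabilities[axis]}
      return (not gaps), gaps

  ok, gaps = feasible(required, capabilities)
  print("feasible" if ok else f"capability gaps: {gaps}")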

Business Requirements & Architecture Capabilities

One step further, the feasibility of business and operational objectives (the “Why” of the Zachman framework) can be more easily assessed if set on the outer range and mapped to architecture capabilities.


Business Requirements and Architecture Capabilities

Engineering Feasibility & ROI

Finally, the feasibility of business and functional requirements under the constraints set by non functional requirements has to be translated in terms of ROI, and for that purpose the business value has to be compared to the cost of engineering the solution given the resources (people and tools), technical requirements, and budgetary constraints.


ROI assessment mapping business value against functionalities, engineering outlays, and operational costs.

That’s where the transparency and traceability of capabilities across layers may be especially useful when alternatives and priorities are to be considered, mixing functionalities, engineering outlays, and operational costs.

Further Reading