Archive for the ‘Project Management’ Category

EA’s Merry-go-round

June 14, 2017

Preamble

All too often EA is planned as a big bang project to be carried out step by step until completion. That understanding is misguided as it confuses EA with IT systems and implies that enterprises could change their architectures as if they were apparel.

EA is a never-ending endeavor (Robert Doisneau)

But enterprise architecture is part and parcel of enterprises, a combination of culture, organization, and systems; whatever the changes, they must keep the continuity, integrity, and consistency of the whole.

Work Units

Compared to usual projects, architectural ones are not meant to address business needs but architecture capabilities. Taking a leaf from the Zachman Framework, those capabilities can be organized around five pillars supporting enterprise, systems, and platform architectures:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

These capabilities are used by business, engineering, and operational processes.

Enterprise architecture capabilities

Defining architectural projects and work units bottom-up from capabilities will bring clear benefits.

Contrary to top-down (aka activity-based) ones, bottom-up schemes don’t rely on one-size-fits-all procedures; as a consequence work units can be directly defined by capabilities and therefore mapped to engineering workshops:

Iterative development of architecture capabilities across workshops

Moreover, dependency constraints can be directly defined as declarative assertions attached to capabilities and managed dynamically instead of having to be hard-wired into phased processes.
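As a rough sketch (the names and structure below are illustrative assumptions, not a prescribed scheme), such declarative assertions could be attached to capability-based work units and checked on the fly:

  from dataclasses import dataclass, field

  @dataclass
  class WorkUnit:
      """A capability-based work unit with declarative dependencies."""
      name: str
      capability: str                              # who, what, how, when, where
      requires: set = field(default_factory=set)   # capabilities it depends on

  def ready(unit, delivered):
      # A unit is ready when its declared dependencies are satisfied;
      # nothing is hard-wired into a phased schedule.
      return unit.requires <= delivered

  backlog = [
      WorkUnit("system users", "who"),
      WorkUnit("business objects", "what"),
      WorkUnit("business logic", "how", requires={"who", "what"}),
  ]
  delivered = {"who", "what"}
  for u in backlog:
      print(u.name, "ready" if ready(u, delivered) else "waiting")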

That approach is meant to ensure two agile conditions critical for the development of architectural features:

  • Shared ownership: lest the whole enterprise be paralyzed by decision-making procedures, work units must be carried out under the sole responsibility of project teams.
  • Continuous delivery: architecture-driven developments are by nature transverse, but the delivery of building blocks cannot be put on hold pending the agreement of all parties concerned; instead it should be decoupled from integration.

Enterprise architecture projects could then be organized as a merry-go-round of capability-based work units to be added, developed, and delivered according to needs and time-frames.

Work units & Time Frames

Enterprise architecture is about governance more than engineering. As such it has to ensure continuity and consistency between business objectives and strategies on one side, engineering resources and projects on the other side.

Assuming that capability-based work units will do the job for internal dependencies (application contents and engineering), the problem is to deal with external ones (business objectives and enterprise organization) without introducing phased processes. Beyond differences in monikers, such dependencies can generally be classified into three reasoned categories:

  • Operational: whatever can be observed and acted upon within a given envelope of assets and capabilities.
  • Tactical: whatever can be observed and acted upon by adjusting assets, resources and organization without altering the business plans and anticipations.
  • Strategic: decisions regarding assets, resources and organization contingent on anticipations regarding business environments.

Architectural projects could then be managed as a dynamic backlog of self-contained work units continuously added (a) or delivered (b).

EA projects: a merry-go-round of work units.
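A minimal sketch of that merry-go-round, assuming a plain queue and the three time-frame tags above (class and method names are mine):

  from collections import deque

  class MerryGoRound:
      """A dynamic backlog of self-contained work units."""
      def __init__(self):
          self.backlog = deque()

      def add(self, name, time_frame):            # (a) continuously added
          assert time_frame in ("operational", "tactical", "strategic")
          self.backlog.append((name, time_frame))

      def deliver(self):                          # (b) continuously delivered
          return self.backlog.popleft() if self.backlog else None

  ride = MerryGoRound()
  ride.add("new entry points", "operational")
  ride.add("locations overhaul", "strategic")
  print(ride.deliver())   # ('new entry points', 'operational')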

The role of enterprise architects will then be to manage the deployment of updated architecture capabilities according to their respective time-frames.

Further Reading

Beans must be Counted, one way And the other

May 2, 2017

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality: unlike traditional products, software development has no market pricing mechanism that could match its costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play it safe and keep their respective assessments on opposite banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.
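For the reasoned option, the arithmetic itself is simple; here is a bare-bones tally of unadjusted function points using the standard IFPUG low-complexity weights (the counts are invented):

  # EI: external inputs, EO: external outputs, EQ: external inquiries,
  # ILF: internal logical files, EIF: external interface files.
  WEIGHTS = {"EI": 3, "EO": 4, "EQ": 3, "ILF": 7, "EIF": 5}

  def unadjusted_fp(counts):
      return sum(WEIGHTS[kind] * n for kind, n in counts.items())

  # e.g. a small application:
  print(unadjusted_fp({"EI": 4, "EO": 2, "EQ": 3, "ILF": 2, "EIF": 1}))  # 48

The plights lie less in that arithmetic than in the counting rules feeding it.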

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics use prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance; a tally of software beans should not be optional.

Further Reading

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements, as well as users’ expectations for service, are satisfied, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences in scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need of some “manual” teaching. That hurdle may be overcome by deep learning’s ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
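As a toy stand-in for that mining (using a generic anomaly detector rather than a deep model, with invented session features), flagging critical cases from logs might look like:

  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Each log session reduced to simple features: event count,
  # distinct operations, error count, duration (seconds).
  sessions = np.array([
      [12, 5, 0, 30],
      [11, 5, 0, 28],
      [13, 6, 0, 33],
      [45, 2, 7, 300],   # unusual: long, repetitive, error-prone
  ])

  # Flag outlier sessions as candidate critical test cases.
  detector = IsolationForest(contamination=0.25, random_state=0)
  labels = detector.fit_predict(sessions)   # -1 marks outliers
  critical = sessions[labels == -1]
  print(critical)   # the unusual session, to seed a test case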

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed to the policy module in charge of defining test cases.

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning, combining random simulation with automated reinforcement, is the specificity of deep learning.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users interfaces (for non shared interactions). Then, since test cases are also run across swim-lanes, it opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.
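Those swim-lanes could be materialized in test suites, e.g with pytest markers; the lane names follow the post, and the tested functions are trivial stand-ins:

  import pytest   # markers would be registered in pytest.ini

  def premium(age, claims):          # stand-in business rule
      return 100 + 10 * claims

  @pytest.mark.business              # business logic, independently of systems
  def test_premium_rule():
      assert premium(age=25, claims=2) == 120

  @pytest.mark.functional            # supporting systems functionalities
  def test_claim_registration():
      registry = {}
      registry[17] = "registered"    # stand-in for a shared application service
      assert registry[17] == "registered"

  @pytest.mark.ui                    # users interfaces, non shared interactions
  def test_form_feedback():
      assert "check" in "please check the claim id"

  # Each lane can be run, and reassessed, on its own, e.g: pytest -m business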

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading


Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, Cockburn stays with UCs as defined by Jacobson but refines them to deal specifically with generalization, scaling, and extension, whereas the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant because mapping business rules with applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels”, BUCs are to remain nondescript practices; as for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.
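A minimal sketch of those abstraction levels, with plain abstract classes standing in for UC abstraction constructs (the claim details are invented):

  from abc import ABC, abstractmethod

  class HandleClaim(ABC):
      """Abstract UC: no actor, no trigger, no execution (cf. 19-21)."""
      @abstractmethod
      def assess(self, claim): ...

  class HandleClaimWithCustomer(HandleClaim):
      """Concrete UC: actual interaction with Customer."""
      def assess(self, claim):
          return "fast-track" if claim["amount"] < 1000 else "appraisal"

  print(HandleClaimWithCustomer().assess({"amount": 500}))   # fast-track
  # HandleClaim() itself cannot be instantiated, like white-level UCs.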

Broader enterprise and business considerations can then be documented depending on scope.

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.


iStar modeling puts the focus on communication (N. Rockwell)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to breach the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting apart frictions between specific methods, the benefits of agile principles and practices are now well-recognized, contingent on the limits of agile scope. Summarily, agile development is at its best when requirements capture and analysis can be weaved with development and tests. The question remains of what happens when requirements are to be dealt with separately.

iStar’s answer shares with agile a focus on collaboration and doesn’t take sides with business (e.g users’ stories) or systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges: e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g appraise damages; qualifications (soft goals), e.g fair appraisal; and resources, e.g premium payment.

Actors & Dependencies

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope:

  • Goal (strong) dependency: assertions about actual state of affairs: object, activity, or expectations.
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
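Pulling together the actors and the four dependency categories, a minimal sketch of an iStar dependency model (Python stand-ins, reusing the car-repair example above):

  from dataclasses import dataclass

  @dataclass
  class Dependency:
      depender: str    # actor relying on the dependum
      dependee: str    # actor committed to deliver it
      kind: str        # "goal", "soft-goal", "task", or "resource"
      dependum: str

  model = [
      Dependency("car owner", "body shop", "goal", "car repaired"),
      Dependency("car owner", "appraiser", "soft-goal", "fair appraisal"),
      Dependency("body shop", "appraiser", "task", "appraise damages"),
      Dependency("insurance company", "car owner", "resource", "premium payment"),
  ]

  strong = [d for d in model if d.kind == "goal"]
  print(len(strong), "strong dependency")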

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, strong dependencies being associated with deontic rules, soft ones with alethic rules.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML Use Cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.


iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly for the definition of primary and secondary roles (aka actors), intents (business) and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if the logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor the nature of capabilities, e.g strategic and operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared intent on conceptual modeling; as a corollary these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links

Business Problems shouldn’t sleep with IT Solutions

October 8, 2016

Preamble

The often mentioned distinction between problem and solution levels may make sense from an analyst’s particular point of view, whether business or system. But blending problems and solutions independently of their nature becomes a serious over-simplification for enterprise architects, considering that one of their prime responsibilities is to keep business problems apart from IT solutions.


Functional problem with technical solution (Mircea Cantor)

That issue is relevant from an engineering as well as a business perspective.

Engineering View: Problem Levels & Architecture Layers

As long as computers are used to solve problems the only concern is to find the best solution, and the only architecture of concern is software’s.

But enterprise architects have to deal with systems, not computers, namely how to best serve business objectives with corporate resources, across business units and along business cycles. For that purpose resources (financial, human, technical) and their use are to be layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).

From an engineering perspective, the intended congruence between problems levels and architecture layers can be illustrated with the OMG’s model driven architecture (MDA) framework:

  • Computation independent models (CIMs) deal with business process solutions, to be translated into functional problems for supporting systems.
  • Platform independent models (PIMs) deal with functional solutions, to be translated into technical problems for supporting platforms.
  • Platform specific models (PSMs) deal with technical solutions, to be implemented as code.

MDA layers can be mapped to a clear hierarchy of problems and solutions
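A minimal sketch of that hierarchy, walking an invented claims-handling rule down the three layers (with Python standing in for the platform):

  # CIM (business process level): "claims above a threshold require appraisal",
  # a business statement, with no supporting system implied.

  from abc import ABC, abstractmethod

  # PIM (functional level): a platform-independent contract for the rule.
  class ClaimHandling(ABC):
      @abstractmethod
      def needs_appraisal(self, amount: float) -> bool: ...

  # PSM (technical level): a platform-specific solution, implemented as code.
  class ClaimHandlingImpl(ClaimHandling):
      THRESHOLD = 1000.0
      def needs_appraisal(self, amount: float) -> bool:
          return amount >= self.THRESHOLD

  print(ClaimHandlingImpl().needs_appraisal(1500.0))   # True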

Along that understanding, architectures can be seen as solutions, and the primary responsibility of enterprise architects is to see that problem/solution pairs remain in their respective swim-lanes.

Business View: Business Value & Enterprise Assets

Whereas the engineering perspective may appear technical or specific to a model based approach, the same issue is all the more significant when expressed with regard to business concerns and corporate governance. In that case the critical distinction is between business value and assets:

  • Business value: Problems are set by business opportunities, and solutions by processes and applications. The critical factor is reactivity and time-to-market.
  • Assets: Problems are set by business objectives and strategy, and solutions are to be supported by organization and systems capabilities. The critical factor is reuse and ROI.

Decision-making must distinguish between business opportunities and enterprise governance

If opportunities are to be seized and operations managed on the fly, yet tally with strategic decisions, the respective problems and solutions should be kept apart. Juggling with their dynamic alignment is at the core of enterprise architects’ job description.

Enterprise Architects & Governance

Engineering and business perspectives are not to be seen as the terms of an alternative to be picked by enterprise architects. As a matter of fact they must be crossed and governance policies selected depending on the point of view:

  • Looking at EA from an engineering perspective, the business one will focus on systems governance and assets management as epitomized by model based systems engineering schemes.
  • Looking at EA from a business perspective, the engineering one will focus on lean and just-in-time solutions, as epitomized by agile development models.

Since the governance of large and complex corporate entities, supposedly EA’s primary target, must deal with tactical, operational, and strategic concerns, the nexus between business and engineering perspectives is where enterprise architects are to stand.


Projects Have to Liaise

May 25, 2016

Preamble

Liaison between projects is all too often preempted by methodological issues. So when some communication is needed, alternatives should not be limited to no models at all or a medley of ambiguous ones.


Avoiding Parochial Behaviors, Turf Wars, and Autarky (Juan Munoz)

Archetypal Development Models

Software engineering processes can be grouped into two categories: phased ones are segmented with regard to responsibilities, tasks, and the nature of artifacts; agile ones are iterative, with a single team sharing responsibility for the definition, building, and acceptance of final outcomes.


Compared to phased projects, agile ones start right away with software and carry on without making use of intermediate development artifacts.

With regard to modeling languages, both approaches may encounter parochial pitfalls: tasked teams of phased projects may go through turf wars and misunderstandings, agile teams could tend to autarkic biases and objections to models.

Stories Must Be Set in Context

Agile projects start right away with writing users’ stories into software, making no use of intermediate development artifacts other than code. With business analysts and software engineers working side by side, the semantics of business objects and activities is meant to be directly, if progressively, inscribed into software artifacts.

Nonetheless, users’ stories, being by nature specific, are to be set against the broader context of enterprise business objectives. Teams may therefore have to communicate with outside entities and, considering that source code is seldom the preferred language of other units, agile teams may have to resort to means they would otherwise disparage. They would be in a better position if stories could be docked to open concepts to be used to contrive messages in line with projects’ needs and creeds.

Developments Must be Shared

Paraphrasing Einstein, the only reason for phased processes is so that everything doesn’t happen at once. In that case intermediate artifacts are to be introduced between tasks. But then, suppliers and customers, often from different backgrounds and with different concerns, have to agree about the semantics of the development flows. Moreover, the accuracy and consistency of agreed upon definitions must stand the test of time whatever the changes on each side.

As illustrated by model based software engineering processes, that objective is essentially attainable for programming languages; otherwise blurred footprints and ambivalent semantics require cumbersome maintenance of transformation rules as well as regressive updates of previous versions of the artifacts. In that case open concepts may help to prevent the corruption of a core of sound specifications by surrounding ambiguous ones.

Open Concepts: Yet Another Conceptual Framework ?

As many will have noticed, there is no lack of frameworks, conceptual or otherwise. So what could be the point of yet another one?

The answer, as it should be, is to be found in its impact on the use, or reuse, of artifacts by projects and teams, whatever their preferred development model. And that’s why the open source paradigm applied to open concepts is critical:

  • The difference between generalization and specialization is fully taken into account so that the semantics of sub-types defined by different projects cannot be modified.
  • The concept of Individuals is used to guarantee that business objects and activities are consistently identified across projects.
  • The semantics of sub-types are consistently, but not necessarily uniformly, defined across projects.
  • There is no overlapping of semantics even when subsets of individuals overlap.

Those are very strong constraints which, combined with the already limited footprint of open concepts, will result in a very compact set of concepts. And that is to make the difference: a small set of concepts, built from well-known principles, with clear properties and benefits, to be shared by projects independently of their modeling languages and development methods.
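A minimal sketch of the identification and non-modification constraints, assuming Python stand-ins for open concepts:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Individual:
      """Open concept: consistent identification across projects."""
      uid: str                        # shared identity, never redefined

  class BusinessObject(Individual):   # a project-defined sub-type may add
      pass                            # semantics but cannot alter the core

  a = Individual(uid="customer:42")
  try:
      a.uid = "other"                 # modifying shared semantics fails fast
  except Exception as e:
      print(type(e).__name__)         # FrozenInstanceError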

Further Readings

Models Transformation & Agile Development

April 5, 2016

Models transformation is generally recognized as the basic mechanism of model based systems engineering (MBSE). Yet, the actual scope of transformations is somewhat limited to design-to-code, and their sequential bias puts MBSE at odds with agile development approaches. Could a revisited understanding help to figure out this apparent discrepancy?


Weaving Life by Crossing Patterns (Sand Painting Navajo Rug)

Transformation Issues

The traditional transformation paradigm involves ordered sequences of models obtained by applying rules to their immediate predecessor(s). That organizational scheme has three critical consequences, for applicability, economics of reuse, and development processes.

  • Applicability: the effectiveness of transformations is conditioned by (a) an executable language for the description of targets, and (b) a closed and compact set of unambiguous patterns. Those conditions can only be satisfied for the downstream part of the development process.
  • Reuse: given the sequencing constraints, models are to be managed and reused along tree-like structures with duplicates introduced at branching points.
  • Development processes: sequenced models bring forth phased options and leave out agile solutions.

Assuming those issues are not conclusive, they may be overcome by revisiting the nature of transformations.

Transformation vs Inheritance & Composition

Most of the proposed taxonomies (see references below) put the focus on languages and mechanisms (e.g rules) of sequential transformation without paying enough attention to the nature and the semantics of models contents. Even when abstraction levels are taken into account, the respective contents of each level remain undefined. As it happens, that issue may be the key to a better understanding of models transformation.

To begin with, rule-based transformation has to be compared to inheritance and composition:

  • Structural inheritance can be used to refine models as to take into account business scenarii previously ignored; e.g special conditions for good customers.
  • Functional inheritance can be used to introduce new capabilities; e.g new authentication procedures.
  • Functional composition can be used to apply capabilities across different scenarii; e.g customized authentication procedures.
  • Rules-based solutions can be used by any kind of transformation.
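Loosely illustrating that taxonomy in code, with the post’s own examples as invented stand-ins:

  class Authentication:                          # existing capability
      def check(self, user):
          return user in {"alice", "bob"}

  class TwoFactorAuthentication(Authentication): # functional inheritance:
      def check(self, user):                     # a new capability introduced
          return super().check(user) and self.second_factor(user)
      def second_factor(self, user):
          return True                            # stand-in

  class Order:                                   # existing structure
      def discount(self):
          return 0.0

  class GoodCustomerOrder(Order):                # structural inheritance: a
      def discount(self):                        # scenario previously ignored
          return 0.05

  def checkout(order, auth, user):               # functional composition:
      # the same capability applied across different scenarii
      return auth.check(user) and order.discount() >= 0

  print(checkout(GoodCustomerOrder(), TwoFactorAuthentication(), "alice"))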

A broader understanding of models transformation should include inheritance and composition

That taxonomy implies a clear distinction between operations executed within the same level of abstraction and those targeting artifacts defined at different levels: contrary to rules-based transformations, inheritance and composition can only be applied to artifacts sharing common semantics.

Heterogeneous Models

While that would clearly prevent their use for models organized along abstraction levels, semantic pitfalls could be mastered for models built from artifacts from different abstraction levels.

Releasing models from (still to be defined) abstraction levels would bring two critical benefits:

  • Whatever the terminology (abstract, conceptual, functional, etc.), abstraction semantics are much easier to define for artifacts than for models.
  • That would remove a chunk of restrictions on the design of transformation processes.

Releasing models from abstraction layers.

In that case transformation rules could be turned into combination ones and sequential transformation turned into cross-breeding.

Mendel, Models, Mongrels

Taking a cue from Gregor Mendel’s use of cross-fertilization, the aim of a revisited transformation paradigm would be three-fold:

  1. To refine the granularity of reuse, from models to artifacts.
  2. To substitute combination for sequential transformation whenever possible.
  3. To substitute graphs for trees, with models organized along two basic layers, final (aka mongrels) or reusable (aka blueprints).
vvv

Models combination (top) replaces transformation phases (bottom) by a distinction between blueprints (full line) and mongrels (dashed line).
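A toy sketch of combination as cross-breeding, with models reduced to sets of artifacts tagged by scope (names and tags invented):

  # Reusable blueprints: models whose artifacts carry stable traits.
  blueprints = {
      "payments": {("enterprise", "payment process"),
                   ("systems", "payment service")},
      "security": {("systems", "authentication"),
                   ("platforms", "TLS endpoint")},
  }

  def combine(*names):
      """Cross-breed blueprints into a final model (a mongrel)."""
      return set().union(*(blueprints[n] for n in names))

  mongrel = combine("payments", "security")
  print(sorted(mongrel))   # artifacts drawn from different abstraction layers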

As far as MBSE is concerned, the genetics metaphor helps to clarify the nature of abstraction. Conceptually, it introduces a distinction between artifacts and models:

  • With regard to artifacts, abstraction layers are defined by scope: enterprise, systems, platforms.
  • With regard to models, abstraction layers are defined by capabilities: reusable (stable traits), or final (recessive traits).

That taxonomy is corroborated by its functional counterpart: artifacts transformation is carried out with inheritance and composition, models transformation relies on combination.

More important, that understanding goes a long way toward solving the issues regarding scope, reuse, and development processes.

Scope: Weaving Analysis & Design Traits

Definitions and taxonomies should always be assessed with regard to their applicability. On that account there isn’t much to say for abstraction layers applied to models: they don’t fit because too many traits can be defined across different layers, e.g business rules, authentication, encryption, etc.

That difficulty can be neatly and consistently removed by models built from artifacts defined at different levels.

Models Reuse: Blueprints vs Mongrels

Reuse is all too often seen as a contentious objective with inconclusive ROI. On one hand it requires significant overhead to manage the resources; on the other hand the outcomes can introduce regressive traits. The distinction between sound reusable models and final ones significantly reduces both the costs of the former and the risks of the latter.

Processes Organization: MBSE & Agile

Model based systems engineering and the agile development model are arguably two of the most conclusive approaches to software engineering. Unfortunately they are often seen as difficult bedfellows, principally (but not uniquely) because the former insists on the importance of models with some bias toward phased processes, while the latter is all for iterative processes with models mentioned as an afterthought, if at all. Yet, both approaches could be made complementary on condition that models could be processed iteratively. And that could be achieved if sequenced transformations of homogeneous models were replaced by the combination of heterogeneous ones.


Iterative development mixing new business requirements with existing functionalities (a) and business rules (b).

Within such a framework an agile team could, e.g, iteratively develop new business requirements, taking into account existing functionalities (a) and business rules (b), and generate code (c).

Further readings

External Links


Agile Business Analysis: From Wonders to Logic

March 7, 2016

Time and again new recruits will ask about the role of business analysts. Considering that such a question is seldom heard from software engineers, are BAs more curious about their job, or are they standing on more tentative grounds? If that’s the case, agility would help them flip-flop between business quicksands and systems hard rocks.


How to make sense of business wonders (Hieronymus Bosch)

Holding the fort vs scouting outskirts

Systems architects and software engineers may have to meet esoteric business requirements, but their responsibility is first and foremost to guarantee the functional and economic sustainability of systems. On that account they are given licence to build solid walls and secure gateways, and to enforce their own languages and rules upon well vetted parties.

Business analysts don’t get such a free hand: while being straitened by software engineers’ constructs and constraints, their primary undertaking is to explore business wilds, reconnoitre competitors, trace new tracks, and learn the dialects of any nicknamed natives ready to trade.

No wonder the qualms of new business analysts.

Great businesses make their own rules

The best rules in business are the ones still unbeknownst, as success is most often brought by disruptive initiatives taking advantage of previously undiscovered opportunities. It ensues that at its core, BAs’ job description is to relentlessly look across the frontier for still uncharted businesses, and bring them back to the digitized world of shipshape business domains and processes.

For that purpose BAs will have to juggle with the fuzzy idiosyncrasies of new business openings until they can be aligned with the functionalities of “legacy” systems.

BA’s Agility

While usually presented as a software engineering hallmark, agility may be equally useful for business analysts as they have to balance two crossing perspectives:

  • Analysis: sorting detailed activities into business processes.
  • Synthesis: factoring out business functions and mapping them to systems capabilities.

That could be a challenging achievement if carried out sequentially: crossing back and forth between changing scope and steady capabilities could generate unsettling alternatives and unbounded complexity.

The agile development model is meant to tackle the difficulties through iterations and collaboration without being too specific about the kind of agility required from business analysts and software engineers.

Yet the apparent symmetry between the parties may be misleading: whereas software engineers don’t have (and shouldn’t even try) to second guess business analysts, business analysts shouldn’t forget that at the end of the day business expectations, however exotic or esoteric, will have to feed very conformist logical beasts.

Further Readings

Quality Circles

November 11, 2015

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering it would mean code, users’ experience, and operations, each with its own specific stakeholders and criteria.


A bird’s-eye view on quality circles (Jonathan Monk)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side, agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need of a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for process improvements remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.

Assessment at each level may be conditioned by lower levels

In any case (i.e with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g how to assess insurance premiums or compute missile trajectory.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.
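A bare-bones sketch of such traceability links (artifact names invented); each link ties an artifact to what it is checked against, so assessments can be conditioned and traced:

  trace = [
      ("business process: claims", "business objective: loss ratio"),
      ("use case: handle claim", "business process: claims"),
      ("component: claims service", "use case: handle claim"),
      ("code: ClaimsService", "component: claims service"),
  ]

  def upstream(artifact):
      """Walk the links from an artifact back to what it is traced to."""
      targets = [t for s, t in trace if s == artifact]
      return targets + [u for t in targets for u in upstream(t)]

  print(upstream("code: ClaimsService"))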

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.

Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed so as to eliminate some uncertainties (e.g reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g unexpected regulatory change or delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defaults or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g resources consumption, late delivery) against the cost of preventive (e.g more systematic checks on consistency, additional staff) or corrective (e.g tests or maintenance) measures.
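For managed risks, the balancing act boils down to comparing expected costs with the cost of measures; a minimal sketch with invented figures:

  def expected_cost(probability, impact):
      # a default or delay priced and weighted by likelihood
      return probability * impact

  late_delivery = expected_cost(probability=0.2, impact=100_000)   # 20000.0
  extra_checks = 15_000       # preventive measure (more systematic checks)

  if extra_checks < late_delivery:
      print("prevention pays off by", late_delivery - extra_checks)  # 5000.0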

Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between regulatory compliance, risk management, and quality assurance.

Further Readings