Archive for the ‘Project Management’ Category

Squaring Software To Value Chains

July 4, 2018

Preamble

Digital environments and the ubiquity of software in business processes introduce a new perspective on value chains and the assessment of supporting applications.


Business & Supporting Processes (Michel Blazy)

At the same time, as software designs cannot be detached from architecture capabilities, the central question remains how to allocate costs and benefits between primary and support activities.

Value Chains & Activities

The concept of value chain introduced by Porter in 1985 is meant to encompass the set of activities contributing to the delivery of a valuable product or service for the market.


Porter’s generic model

Building on Porter’s generic model, various value chains have been refined according to business-specific categories for primary and support activities.

Whatever their merits, these approaches are essentially static and fall short when the objective is to trace changes induced by business developments; and that flaw may become critical with the generalization of digital business environments:

  • Given the role and ubiquity of software components (not to mention smart ones), predefined categories are of little use for impact analysis.
  • When changes in value chains are considered, the shift of corporate governance towards enterprise architecture puts the focus on assets’ contribution, reducing the relevance of activities.

Hence the need to take into account changes, software development, and enterprise architecture capabilities.

Value Added & Software Development

While the growing interest in value chains in software engineering is tied to agile approaches and business-driven developments, the issue can be put in the broader perspective of project planning.

With regard to assessment, stakeholders start with business opportunities and look at supporting systems from a black-box perspective; in return, software providers are to analyze requirements from a white-box perspective, and estimate the corresponding development effort and delivery time.

Assuming transparency and good faith, both parties are meant to eventually align expectations and commitments with regard to features, prices, and delivery.

With regard to policies, stakeholders put the focus on returns on investment (ROI), obtained from total cost of ownership, quality of service, and timely delivery. Providers for their part try to minimize development costs while taking into account effective use of resources and opportunity costs. As it happens, those objectives may be played as non-zero-sum games:

  • Business stakeholders estimate the actualized (discounted) returns (a) to be expected from the functionalities under consideration (b).
  • Providers consider the solutions (b’) and estimate actualized costs (a’).
  • Stakeholders and providers agree on functionalities, prices and deliveries (c).

Developing the value chain

Assuming that business and engineering environments are set within different time-frames, there should be room for non-zero-sum games winding up in win-win adjustments on features, delivery, and prices.
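
As a toy illustration of that game (all cash flows and discount rates below are made up), a win-win zone exists whenever the stakeholders’ actualized returns exceed the providers’ actualized costs, leaving the price to be negotiated in between:

  # Toy illustration of the stakeholders/providers non-zero-sum game.
  # All figures (cash flows, discount rates) are hypothetical.

  def present_value(cash_flows, rate):
      """Discount yearly cash flows to their present (actualized) value."""
      return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

  # (a) Stakeholders estimate actualized returns from the functionalities (b).
  expected_returns = present_value([120, 150, 150], rate=0.10)

  # (a') Providers estimate actualized costs of the solutions (b').
  estimated_costs = present_value([90, 60, 30], rate=0.08)

  # (c) Any price in between leaves both parties better off: a win-win zone.
  if estimated_costs < expected_returns:
      print(f"win-win zone: price between {estimated_costs:.0f} and {expected_returns:.0f}")
  else:
      print("no agreement possible: costs exceed expected returns")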

Continuous vs Phased Alignments

Notwithstanding the constraints of strategic planning, business processes are by nature opportunistic, and their ability to be adjusted to circumstances is becoming all the more critical with the generalization of digital business environments.

Broadly speaking, the squaring of supporting applications to business value can be done continuously or by phases:

  • Phased alignments start with some written agreements with regard to features, delivery, and prices before proceeding with development phases.
  • Continuous alignment relies on direct collaboration and iterative development to shape applications according to business needs.

Phased vs Continuous Adjustments

Beyond sectarian controversies, each approach has its use:

  • Continuous schemes are clearly better at harnessing value chains, provided that project teams are given full project ownership, with decision-making freed of external dependencies or delivery constraints.
  • Phased schemes are necessary when value chains cannot be uniquely sourced because they take root in different organizational units, or when deliveries are contingent on technical constraints.

In any case, it’s not a black-and-white alternative, as the granularity of work units and projects can be aligned with differentiated expectations and commitments.

Work Units & Architecture Capabilities

While continuous and phased approaches are often opposed under the guises of Agile vs Waterfall, that understanding is misguided as it extends the former to a motley of self-appointed agile schemes and reduces the latter to an ill-famed archetype.

Instead, a reasoned selection of development models should be contingent on the problems at hand, and that can be best achieved by defining work units bottom-up with regard to the capabilities targeted by requirements:


Work units should be defined with regard to the capabilities targeted by requirements

Development patterns could then be defined with regard to architecture layers (organization and business, systems functionalities, platforms implementations) and capabilities footprint:

  • Phased: work units are aligned with architecture capabilities, e.g. business objects (a), business logic (b), business processes (c), user interfaces (d).
  • Iterative: work units are set across capabilities and defined dynamically according to development problems.

A common development framework for phased and iterative projects

That would provide a development framework supporting the assessment of iterative as well as phased projects, paving the way for comprehensive and integrated impact analysis.

Value Chains & Architecture Capabilities

As far as software engineering is concerned, the issue is less the value chain itself than its change, namely how value is to be added along the chain.

Given the generalization of digitized and networked business environments, that can be best achieved by harnessing value chains to enterprise architecture capabilities, as can be demonstrated using the Caminao layout of the Zachman framework.

To summarize, the transition to this layout is carried out in two steps:

  1. Conceptually, Zachman’s original “Why” column is translated into a line running across column capabilities.
  2. Graphically, the five remaining columns are replaced by embedded pentagons, one for each architecture layer, with the new “Why” line set as an outer layer linking business value to architecture capabilities:

Mapping value chains to enterprise architectures capabilities

That apparently humdrum transformation entails a significant shift in focus and practicality:

  • The focus is put on organizational and business objectives, masking the ones associated with systems and platforms layers.
  • It makes room for differentiated granularity in the analysis of value, some items being anchored to specific capabilities, others involving cross dependencies.

Value chains can then be charted from business processes to supporting architectures, with software applications in between.

Impact Analysis

As pointed out above, the crumbling of traditional fences and the integration of enterprise architectures into digital environments undermine the traditional distinction between primary and support activities.

To be sure, business drive is more than ever the defining factor for primary activities, and computing more than ever the archetype of supporting ones. But in between, the once clear-cut distinctions are being blurred by a maze of digital exchanges.

In order to avoid a spaghetti heap of undistinguished connections, value chains are to be “colored” according to the nature of links:

  • Between architecture capabilities: business and organization (enterprise), systems functionalities, or platforms and technologies.
  • Between architecture layers: engineering processes.

Tracing value chains across architecture layers

When set within that framework, value chains could be navigated in both directions:

  • For the assessment of applications developed iteratively: business value could be compared to development costs and architecture assets’ depreciation.
  • For the assessment of features (functional or non-functional) to be shared across business applications: value chains will provide a principled basis for standard accounting schemes.

Combined with model-based systems engineering, that could significantly enhance the integration of enterprise architecture into corporate governance.

Model Driven Architecture & Value Chains

Model driven architecture (MDA) can be seen as the main (only?) documented example of model-based systems engineering. Its taxonomy organizes models within three layers:

  • Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
  • Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
  • Platform specific models (PSMs) describe systems components depending on platforms and technologies.

Engineering processes can then be phased along architecture layers (a), or carried out iteratively for each application (b).

When set across activities, value chains could be engraved in CIMs and refined with PIMs and PSMs (a). Otherwise, i.e. with business value neatly rooted in single business units, value chains could remain implicit along software development (b).
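
A minimal sketch of that traceability (class names and the sample value-chain item are hypothetical): a value-chain item engraved in a CIM element keeps its anchor as the element is refined into PIM and PSM ones:

  # Minimal sketch of MDA traceability: value-chain items are anchored in CIMs
  # and refined into PIMs and PSMs. Names and sample content are hypothetical.
  from dataclasses import dataclass, field

  @dataclass
  class ModelElement:
      name: str
      layer: str                              # "CIM", "PIM", or "PSM"
      refines: "ModelElement | None" = None   # link to the element it refines
      value_items: list[str] = field(default_factory=list)

  # (a) A value-chain item engraved in a CIM, then refined across layers.
  cim = ModelElement("RentalProcess", "CIM", value_items=["faster check-out"])
  pim = ModelElement("RentalService", "PIM", refines=cim)
  psm = ModelElement("RentalServiceImpl", "PSM", refines=pim)

  def value_chain(element):
      """Walk refinement links back to the CIM carrying the business value."""
      while element.refines is not None:
          element = element.refines
      return element.value_items

  print(value_chain(psm))   # ['faster check-out']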

Further Reading


Focus: Individuals in Models

May 2, 2018

Preamble

Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.


Identity Matters (Terracotta Soldiers of Emperor Qin Shi Huang)

Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g. if modeling concerns mix business analysis with software engineering, meta-models are introduced as jacks-of-all-trades to deal with mixed semantics.

But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.

Partitions & Abstraction

Since models are meant to classify instances with regard to concerns, mixing concerns is to entail mixed classifications, horizontally across domains (e.g business and accounting), or vertically along engineering cycles (e.g business and engineering).

That can be achieved with power-types, meta-classes, or ontologies.

Delegation & Power-types

Given that categories (or classes, or types) represent sets of instances (given, designed, or simulated), they may by themselves be regarded as symbolic instances used to manage features commonly valued by their own members.


Power-types are simultaneously categories and instances.

This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g. business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.
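
The pattern can be sketched in a few lines (the car-rental example is hypothetical): the power-type is at once a category partitioning cars and a set of instances in its own right, managing the features shared by its members:

  # Sketch of a power-type: CarModel is simultaneously a category (it
  # partitions cars) and a set of instances managing features commonly
  # valued by its members. The car-rental example is hypothetical.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class CarModel:            # power-type: its instances are categories of cars
      name: str
      daily_rate: float      # feature valued by all member cars

  @dataclass
  class Car:                 # actual objects, identified individually
      vin: str
      model: CarModel        # membership in the partition

  clio = CarModel("Clio", daily_rate=40.0)
  car1 = Car(vin="VF1-001", model=clio)
  car2 = Car(vin="VF1-002", model=clio)

  # The shared feature is managed once, at the power-type level:
  assert car1.model.daily_rate == car2.model.daily_rate == 40.0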

Abstraction & Sub-classes

Sub-classes often appear as a way to overcome the difficulty, as illustrated by the Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.


While the model fulfills the substitution principle for external consistency in that case, sub-types would fall short if engineering concerns were taken into account, because the set of individual car models could then differ depending on the perspective.

Meta-classes & Stereotypes

The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former, which is by nature a free pass to abstraction; less so for the latter, which is supposed to go the other way, toward the specialization of meta-classes according to specific profiles. It follows that stereotypes should never be used on their own or be extended by another stereotype.

As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs, e.g.:

  • Abstract stereotype (a).
  • Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
  • Weak inheritance (aka aspect) between stereotypes (c, f).
  • Meta-constraint used to map Vehicle to methodology stereotype (d).
  • Domain specific connector to stereotype (e).


Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:

  • Individual concepts (car, horse, boat, …) have no clear mooring.
  • Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
  • There is no strong inheritance tying individual models and rental cars to an identification mechanism.

Assuming that detachment is not an option, the basis of models must be reset.

Ontological Stereotypes

Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.


As it happens, sorting things out renders the whole paraphernalia of meta-classes and stereotypes unnecessary, because the semantics of inheritance and associations is set by the nature of individuals.

With individuals solidly rooted in targeted domains, models can then fully serve their purposes.

What’s the Point

It’s worth remembering that models have to be built for a purpose, which cannot be achieved without a clear understanding of context and targets. Here are some examples:

Quality arguably comes first (a minimal check is sketched after the list):

  • External consistency: to be checked by mapping individuals in analysis categories to big data.
  • Internal consistency: to be checked by mapping individuals in design classes to observed or simulated run-time components.
  • Acceptance: automated test generation.
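
As a minimal sketch of the external-consistency check named above (category definitions and data records are hypothetical), individuals observed in data are mapped to analysis categories, and mismatches are reported:

  # Minimal sketch of an external-consistency check: individuals observed in
  # data are mapped to analysis categories; unmatched ones are reported.
  # Category definitions and data records are hypothetical.

  categories = {
      "Car":    {"required": {"vin", "model"}},
      "Rental": {"required": {"car", "customer", "period"}},
  }

  records = [
      {"kind": "Car", "vin": "VF1-001", "model": "Clio"},
      {"kind": "Car", "vin": "VF1-002"},                  # missing 'model'
      {"kind": "Boat", "hull": "B-7"},                    # unknown category
  ]

  for rec in records:
      cat = categories.get(rec["kind"])
      if cat is None:
          print(f"unmatched individual: {rec}")
      elif not cat["required"] <= rec.keys():
          print(f"incomplete individual: {rec}")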

Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.

Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.

Business intelligence looks the other way, as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.

Finally, maturity assessment and optimization of enterprise processes fully depend on the reliability of their basis.

Further Reading

Collaborative Systems Engineering: From Models to Ontologies

April 9, 2018

Given the digitization of enterprise environments, engineering processes have to be entwined with business ones while kept in sync with enterprise architectures. That calls for new threads of collaboration taking into account the integration of business and engineering processes as well as the extension to business environments.


Collaboration can be personal and direct, or collective and mediated (Wang Qingsong)

Whereas models are meant to support communication, traditional approaches are already straining when used beyond software generation, that is collaboration between humans and CASE tools. Ontologies, which can be seen as a higher form of models, could enable a qualitative leap for systems collaborative engineering at enterprise level.

Systems Engineering: Contexts & Concerns

To begin with contents, collaborations should be defined along three axes:

  1. Requirements: business objectives, enterprise organization, and processes, with regard to systems functionalities.
  2. Feasibility: business requirements with regard to architectures capabilities.
  3. Architectures: supporting functionalities with regard to architecture capabilities.

Engineering Collaborations at Enterprise Level

Since these axes are usually governed by different organizational structures and set along different time-frames, collaborations must be supported by documentation, especially models.

Shared Models

In order to support collaborations across organizational units and time-frames, models have to bring together perspectives which are by nature orthogonal:

  • Contexts, concerns, and languages: business vs engineering.
  • Time-frames and life-cycle: business opportunities vs architecture stability.

Harnessing MBSE to EA

That could be done if engineering models were harnessed to enterprise ones for contexts and concerns, which is to be achieved through the integration of processes.

Processes Integration

As already noted, the integration of business and engineering processes is becoming a key success factor.

For that purpose collaborations would have to take into account the different time-frames governing changes in business processes (driven by business value) and engineering ones (governed by assets life-cycles):

  • Business requirements engineering is synchronic: changes must be kept in line with architectures capabilities (full line).
  • Software engineering is diachronic: developments can be carried out along their own time-frame (dashed line).

Synchronic (full) vs diachronic (dashed) processes.

Application-driven projects usually focus on users’ value and just-in-time delivery; that can be best achieved with personal collaboration within teams. Architecture-driven projects usually affect assets and non-functional features, and therefore call for collaboration between organizational units.

Collaboration: Direct or Mediated

Collaboration can be achieved directly or through some mediation, the former being a default option for applications, the latter a necessary one for architectures.


Both can be defined according to basic cognitive and organizational mechanisms and supported by a mix of physical and virtual spaces to be dynamically redefined depending on activities, projects, locations, and organization.

Direct collaborations are carried out between individuals with or without documentation:

  • Immediate and personal: direct collaboration between 5 and 15 participants with shared objectives and responsibilities. That would correspond to agile project teams (a).
  • Delayed and personal: direct collaboration across teams with shared knowledge but different objectives and responsibilities. That would tally with social network circles (c).

Collaborations

Mediated collaborations are carried out between organizational units through unspecified individual members, hence the need for documentation, models or otherwise:

  • Direct code generation from platform- or domain-specific models (b).
  • Model transformation across architecture layers and business domains (d).

Depending on scope and mediation, three basic types of collaboration can be defined for applications, architecture, and business intelligence projects.


Projects & Collaborations

As it happens, collaboration archetypes can be associated with these profiles.

Collaboration Mechanisms

The agile development model (under various guises) is the option of choice whenever shared ownership and continuous delivery are possible. Application projects can thus be carried out autonomously, with collaborations circumscribed to team members and relying on the backlog mechanism.

The OODA (Observation, Orientation, Decision, Action) loop (and its avatars) can epitomize projects combining operations, data analytics, and decision-making.


Collaboration archetypes

Projects set across enterprise architectures cannot be carried out without taking into account phasing constraints. While ill-fated Waterfall methods have demonstrated the pitfalls of procedural solutions, phasing constraints can be dealt with through a roundabout mechanism combining iterative and declarative schemes.

Engineering vs Business Driven Collaborations

With collaborative engineering upgraded at enterprise level, the main challenge is to iron out frictions between application and architecture projects and ensure the continuity, consistency, and effectiveness of enterprise activities. That can be achieved with roundabouts used as a collaboration mechanism between projects, whatever their nature (a sketch follows the figure below):

  • Shared models are managed at roundabout level.
  • Phasing dependencies are set in terms of assertions on shared models.
  • Depending on constraints, projects are carried out directly (1, 3) or enter roundabouts (2), with exits conditioned by the availability of models.

Engineering driven collaboration: roundabout and backlogs
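
A minimal sketch of the mechanism (project names, models, and conditions are hypothetical): projects enter the roundabout when their phasing assertions on shared models don’t hold, and exit as soon as the required models become available:

  # Minimal sketch of the roundabout mechanism: phasing dependencies are
  # assertions on shared models; projects wait in the roundabout until the
  # models they depend on are published. All names are hypothetical.

  shared_models = set()            # models available at roundabout level
  roundabout = {}                  # project -> set of required models

  def submit(project, requires):
      """Run the project directly, or park it until its assertions hold."""
      missing = requires - shared_models
      if missing:
          roundabout[project] = missing
          print(f"{project} enters roundabout, waiting for {missing}")
      else:
          print(f"{project} carried out directly")

  def publish(model):
      """Publishing a shared model releases the projects waiting on it."""
      shared_models.add(model)
      for project in [p for p, deps in roundabout.items() if model in deps]:
          roundabout[project] -= {model}
          if not roundabout[project]:
              del roundabout[project]
              print(f"{project} exits roundabout")

  submit("app-1", requires=set())               # (1) direct
  submit("app-2", requires={"customer-model"})  # (2) waits in the roundabout
  publish("customer-model")                     # releases app-2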

Moreover, with engineering embedded in business processes, collaborations must also bring together operational analytics, decision-making, and business intelligence. Here again, shared models are to play a critical role:

  • Enterprise descriptive and prescriptive models for information maps and objectives.
  • Environment predictive models for data and business understanding.

Business driven collaboration: operations and business intelligence

Whereas both engineering- and business-driven collaborations depend on sharing information and knowledge, the latter have to deal with open and heterogeneous semantics. As a consequence, collaborations must be supported by shared representations and proficient communication languages.

Ontologies & Representations

Ontologies are best understood as models’ backbones, to be fleshed out or detailed according to context and objectives, e.g.:

  • Thesaurus, with a focus on terms and documents.
  • Systems modeling, with a focus on integration, e.g. the Zachman Framework.
  • Classifications, with a focus on range, e.g. the Dewey Decimal System.
  • Meta-models, with a focus on model-based engineering, e.g. model transformation.
  • Conceptual models, with a focus on understanding, e.g. legislation.
  • Knowledge management, with a focus on reasoning, e.g. the semantic web.

As such they can provide the pillars supporting the representation of the whole range of enterprise concerns:


Taking a leaf from Zachman’s matrix, ontologies can also be used to differentiate concerns with regard to architecture layers: enterprise, systems, platforms.

Last but not least, ontologies can be profiled with regard to the nature of external contexts, e.g.:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Cross profiles: capabilities, enterprise architectures, and contexts.

Ontologies & Communication

If collaborations have to cover engineering as well as business descriptions, communication channels and interfaces will have to combine the homogeneous and well-defined syntax and semantics of the former with the heterogeneous and ambiguous ones of the latter.

With ontologies represented as RDF (Resource Description Framework) graphs, the first step would be to sort out truth-preserving syntax (applied independently of domains) from domain specific semantics.


RDF graphs (top) support formal (bottom left) and domain specific (bottom right) semantics.

On that basis it would be possible to separate representation syntax from contents semantics, and to design communication channels and interfaces accordingly.
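
As a minimal sketch of that separation (using the rdflib library; the namespace and triples are hypothetical examples), the same graph carries truth-preserving RDFS structure and domain-specific contents, which can be processed separately:

  # Minimal sketch of separating truth-preserving syntax from domain-specific
  # semantics in an RDF graph, using the rdflib library. The namespace and
  # triples are hypothetical examples.
  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import RDF, RDFS

  DOM = Namespace("http://example.org/rental#")   # domain-specific vocabulary
  g = Graph()

  # Truth-preserving structure: typing and subsumption, independent of domains.
  g.add((DOM.Car, RDFS.subClassOf, DOM.Vehicle))
  g.add((DOM.clio, RDF.type, DOM.Car))

  # Domain-specific contents: semantics only meaningful to the business.
  g.add((DOM.clio, DOM.dailyRate, Literal(40.0)))

  # The syntax can be processed generically (naive subsumption inference)...
  inferred = [(s, RDF.type, parent)
              for s, _, cls in g.triples((None, RDF.type, None))
              for _, _, parent in g.triples((cls, RDFS.subClassOf, None))]
  for triple in inferred:
      g.add(triple)
  assert (DOM.clio, RDF.type, DOM.Vehicle) in g

  # ...while contents semantics stay confined to the domain namespace (DOM).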

That would greatly facilitate collaborations across externally defined ontologies as well as their mapping to enterprise architecture models.

Conclusion

To summarize, the benefits of ontological frames for collaborative engineering can be articulated around four points:

  1. A clear-cut distinction between representation semantics and truth-preserving syntax.
  2. A common functional architecture for all users interfaces, humans or otherwise.
  3. Modular functionalities for specific semantics on one hand, generic truth-preserving and cognitive operations on the other hand.
  4. Profiled ontologies according to concerns and contexts.

Clear-cut distinction (1), unified interfaces architecture (2), functional alignment (3), crossed profiles (4).

A critical fifth benefit could be added with regard to business intelligence: combined with deep learning capabilities, ontologies would extend the scope of collaboration to explicit as well as implicit knowledge, the former already framed by languages, the latter still open to interpretation and discovery.

Further Reading

 

Focus: Requirements Reuse

February 22, 2018

Preamble

Requirements are what feeds engineering processes. As such they come in a wide range of forms, and nothing should be assumed upfront about forms or semantics.

What is to be reused: Sketches or Models? (John Devlin)

Answering the question of reuse therefore depends on what is to be reused, and for what purpose.

Documentation vs Reuse

Until some analysis can be carried out, requirements are best seen as documents; whether such documents are to be ephemeral or managed would be decided depending on method (agile or phased), contents (business, supporting systems, implementation, or quality of service), or purpose (e.g. governance, regulations, etc.).

What is to be reused.

Setting apart external conditions, requirements documentation could be justified by:

  • Traceability of decision-making linking initial requests with actual implementation.
  • Acceptance.
  • Maintenance of deliverables during their life-cycle.

Depending on development approaches, documentation could be limited to archives (agile development models) or managed as intermediate products (phased development models). In the latter case reuse would entail some formatting of requirements.

The Cases for Requirements Reuse

Assuming that requirements have been properly formatted, e.g. as analysis models (with technical ones managed internally at system level), reuse could be justified by changes in business, functional, or quality of service requirements:

  • Business processes are meant to change with opportunities. With requirements available as analysis models, changes would be more easily managed (a) if they could be fine-grained. Business rules are a clear example, but that could also be the case for new features added to business objects.
  • Functional requirements may change even without changes in business ones, e.g. if new channels and users are introduced addressing existing business functions. In that case reusable business requirements (b) would dispense with a repeat of business analysis.
  • Finally, quality of service could be affected by operational changes like localization, number of users, volumes, or frequency. Adjusting architecture capabilities would be much easier with functional (c) and business (d) requirements properly documented as analysis models.

Cases for Reuse

Along that perspective, requirements reuse appears to revolve around two pivots, documents and analysis models. Ontologies could be used to bind them.

Requirements & Ontologies

Reusing artifacts means using them in contexts or for purposes different from the native ones. That may come by design, when specifications anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on. In any case, reuse policies have to overcome a twofold difficulty:

  • Visibility: business and functional analysts must be made aware of potential reuse without having to spend too much time on research.
  • Overheads: ensuring transparency, traceability, and consistency checks on requirements (documents or analysis models) cannot be achieved without costs.

Ontologies could help to achieve greater visibility with acceptable overheads by framing requirements with regard to nature (documents or models) and context:

With regard to nature, the critical distinction is between document management and model-based engineering systems. When framed as ontologies, the former is to be implemented as a thesaurus targeting terms and documents, the latter as ontologies targeting categories specific to organizations and business domains (the thesaurus side is sketched below).

Documents, models, and capabilities should be managed separately
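
A minimal sketch of that thesaurus side (terms and document identifiers are hypothetical): requirements documents are indexed by controlled terms, so that analysts can spot candidates for reuse without spending too much time on research:

  # Minimal sketch of a thesaurus-based index for requirements visibility:
  # documents are indexed by controlled terms so that candidates for reuse
  # can be found cheaply. Terms and documents are hypothetical.
  from collections import defaultdict

  index = defaultdict(set)         # term -> requirement documents

  def register(doc, terms):
      """Index a requirements document under controlled thesaurus terms."""
      for term in terms:
          index[term].add(doc)

  def candidates(terms):
      """Documents sharing at least one term: cheap-to-check reuse leads."""
      found = set()
      for term in terms:
          found |= index[term]
      return found

  register("REQ-017", {"rental", "pricing", "invoice"})
  register("REQ-042", {"rental", "fleet"})

  print(candidates({"pricing", "fleet"}))   # both documents (order may vary)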

With regard to context, the objective should be to manage reusable requirements depending on the kind of jurisdiction and the stability of categories, e.g.:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Combining contexts of reuse with architecture layers (enterprise, systems, platforms) and capabilities (Who, What, How, Where, When).

Combined with artificial intelligence, ontology archetypes could crucially extend the benefits of requirements reuse, notably through the impact of deep learning for visibility.

From a broader perspective, requirements should be seen as a source of knowledge, and their reuse managed accordingly.

Further Reading

Focus: Enterprise Architect Booklet

October 16, 2017

Objective

Given the diversity of business and organizational contexts, and with EA still a fledgling discipline, spelling out a job description for enterprise architects can be challenging.


Aligning business, organization, and systems perspectives (Hans Vredeman de Vries)

So, rather than looking for comprehensive definitions of roles and responsibilities, one should begin by circumscribing the key topics of the trade, namely:

  1. Concepts: eight exclusive and unambiguous definitions provide the conceptual building blocks.
  2. Models: how the concepts are used to analyze business requirements and design systems architectures and software artifacts.
  3. Processes: how to organize business and engineering processes.
  4. Architectures: how to align systems capabilities with business objectives.
  5. Governance: assessment and decision-making.

The objective is to define the core issues that enterprise architects need to address.

Concepts

To begin with, the primary concern of enterprise architects should be to align organization, processes, and systems with enterprise business objectives and environment. For that purpose architects are to consider two categories of models:

  • Analysis models describe business environments and objectives.
  • Design models prescribe how systems architectures and components are to be developed.

Enterprise architects must focus on individuals (objects and processes) consistently identified (#) across business and system realms.

That distinction is not arbitrary but based on formal logic: analysis models are extensional as they classify actual instances of business objects and activities; in contrast, design models are intensional as they define the features and behaviors of required system artifacts.

The distinction is also organizational: as far as enterprise architecture is concerned, the focus is to remain on objects and activities whose identity (#) and semantics are to be continuously and consistently maintained across business (actual instances) and system (symbolic representations) realms:

Relevant categories at architecture level can be neatly and unambiguously defined.

  • Actual containers represent address spaces or time-frames; symbolic ones represent authorities governing symbolic representations. Systems are actual realizations of symbolic containers managing symbolic artifacts.
  • Actual objects (passive or active) have physical identities; symbolic objects have social identities; messages are symbolic objects identified within communications. Power-types (²) are used to partition objects.
  • Roles (aka actors) are parts played by active entities (people, devices, or other systems) in activities (BPM), or, if it’s the case, when interacting with systems (UML’s actors). Not to be confused with agents, which are meant to be identified independently of their behavior.
  • Events are changes in the state of business objects, processes, or expectations.
  • Activities are symbolic descriptions of operations and flows (data and control) independently of supporting systems; execution states (aka modes) are operational descriptions of activities with regard to processes’ control and execution. Power-types (²) are used to partition execution paths.

Since the objective is to identify objects and behaviors at architecture level, variants, abstractions, or implementations are to be overlooked. It also ensues that the blueprints obtained remain general enough to be uniformly, consistently, and unambiguously translated into most modeling languages.
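
As a minimal sketch of that identification principle (the rental example is hypothetical), actual business instances and their symbolic system counterparts share the same identity (#), so that consistency can be maintained across realms:

  # Minimal sketch of continuous identification (#) across realms: actual
  # business instances and their symbolic system surrogates share one
  # identity. The rental example is hypothetical.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class ActualCar:        # business realm: physical identity
      uid: str            # the shared identity (#)
      plate: str

  @dataclass(frozen=True)
  class CarRecord:        # system realm: symbolic representation
      uid: str            # same identity (#), managed by the system
      state: str

  fleet = [ActualCar("C-01", "AB-123-CD")]
  records = {"C-01": CarRecord("C-01", state="rented")}

  # Consistency across realms hinges on the shared identity:
  for car in fleet:
      assert car.uid in records, f"no symbolic counterpart for {car.uid}"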

Languages & Models

Enterprise architects may have to deal with a range of models depending on scope (business vs system) or level (enterprise and system vs domains and applications):

  • Business process modeling languages are used to associate business domains and enterprise organization.
  • Domain specific languages do the same between business domains and software components, bypassing enterprise organization and systems architecture.
  • Generic modeling languages like UML are supposed to cover the whole range of targets.
  • Languages like Archimate focus on the association between enterprise organization and systems functionalities.
  • Contrary to modeling languages, programming ones are meant to translate functionalities into software end-products. Some, like WSDL (Web Service Definition Language), can be used to map EA into service-oriented architectures (SOA).

Scope of Modeling Languages

While architects clearly don’t have to know the language specifics, they must understand their scope and purposes.

Processes

Whatever the languages, methods, or models, the primary objective is that architectures support business processes whenever and wherever needed. Except for standalone applications (for which architects are marginally involved), the way systems architectures support business processes is best understood with regard to layers:

  • Processes are solutions to business problems.
  • Processes (aka business solutions) induce problems for systems, to be solved by functional architecture.
  • Implementations of functional architectures induce problems for platforms, to be solved by technical architectures.

Enterprise architects should focus on the alignment of business problems and supporting systems functionalities

As already noted, enterprise architects are to focus on enterprise and system layers: how business processes are supported by systems functionalities and, more generally, how architecture capabilities are to be aligned with enterprise objectives.

Nonetheless, business processes don’t operate in a vacuum and may depend on engineering and operational processes, with regard to development for the former, deployment for the latter.


Enterprise architects should take a holistic view of business, engineering, and operational processes.

Given the crumbling of traditional fences between environments and IT systems under combined markets and technological waves, the integration of business, engineering, and operational processes is to become a necessary condition for market analysis and reactivity to changes in business environment.

Hence the benefits of combining bottom-up and top-down perspectives, the former focused on business and engineering processes, the latter on business models and organization.

Crossing processes and architecture perspectives

Enterprise architects could then focus on the mapping of business functions to services, the alignment of quality of services with architecture capabilities, and the flows of information across the organization.

Architecture

Blueprints being architects’ tool of choice, enterprise architects use them to chart how enterprise objectives are to be supported by systems capabilities; for that purpose:

  • On one hand they have to define the concepts used for the organization, business domains, and business processes.
  • On the other hand they have to specify, monitor, assess, and improve the capabilities of supporting systems.

In between they have to define the functionalities that will consolidate specific and possibly ephemeral business needs into shared and stable functions best aligned with systems capabilities.


The role of functional architectures is to map conceptual models to systems capabilities

As already noted, enterprise architects don’t have to look under the hood at the implementation of functions; what they must do is ensure continuous and comprehensive transparency between existing as well as planned business objectives and systems capabilities.

Assessment

One way or the other, governance implies assessment, and for enterprise architects that means setting apart architectural assets and business applications:

  • Whatever their nature (enterprise organization or systems capabilities), the life-cycle of assets encompasses multiple production cycles, with returns to be assessed across business units. On that account enterprise architects are to focus on the assessment of the functional architecture supporting business objectives.
  • By contrast, the assessment of business applications can be directly tied to a business value within a specific domain, value which may change with cycles. Depending on induced changes for assets, adjustments are to be carried out through users’ stories (standalone, local impact) or use cases (shared business functions, architecture impact).

Enterprise architects deal with assets, business analysts with processes.

The difficulty of assessing returns for architectural assets is compounded by cross dependencies between business, engineering, and operational processes; and these dependencies may have a decisive impact for operational decision-making.

Business Intelligence & Decision-making

Embedding IT systems in business processes is to be decisive if business intelligence (BI) is to accommodate the ubiquity of digitized business processes and the integration of enterprises with their environments. That is to require a seamless integration of data analytics and decision-making:

Data analytics (sometimes known as data mining) is best understood as a refining activity whose purpose is to process raw data into meaningful information:

  • Data understanding gives form and semantics to raw material.
  • Business understanding charts business contexts and concerns in terms of objects and processes descriptions.
  • Modeling consolidates data and business understanding into descriptive, predictive, or operational models.
  • Evaluation assesses and improves accuracy and effectiveness with regard to objectives and decision-making.

Decision-making processes in open and digitized environments are best described with the well-established OODA (Observation, Orientation, Decision, Action) loop, a skeleton of which is sketched below:

  1. Observation: understanding of changes in business environments (aka territories).
  2. Orientation: assessment of the reliability and shelf-life of pertaining information (aka maps) with regard to current positions and operations.
  3. Decision: weighting of options with regard to enterprise capabilities and broader objectives.
  4. Action: carrying out of decisions within the relevant time-frame.

Seamless integration of data analytics and decision-making.
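
A skeleton of the loop itself (the four step functions are placeholders, to be filled with actual analytics and decision policies):

  # Skeleton of the OODA loop; the four step functions are placeholders to
  # be filled with actual analytics and decision policies.
  import time

  def observe():            # 1. changes in business environments (territories)
      return {"signal": "demand-shift"}

  def orient(observation):  # 2. assess reliability and shelf-life of maps
      return {"option": "reallocate-fleet", "confidence": 0.8}

  def decide(assessment):   # 3. weight options against capabilities/objectives
      return assessment["option"] if assessment["confidence"] > 0.5 else None

  def act(decision):        # 4. carry out within the relevant time-frame
      print(f"executing: {decision}")

  for _ in range(3):        # the loop runs continuously in practice
      decision = decide(orient(observe()))
      if decision:
          act(decision)
      time.sleep(0.1)       # pacing stands in for the decision time-frame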

Along that perspective data analytics and decision-making can be seen as the front office of business intelligence, and knowledge management as its back office.

More generally that scheme epitomizes the main challenge of enterprise architects, namely the continuous and dynamic alignment of enterprise organization and systems to market environment, business processes, and decision-making.

Further Reading

EA’s Merry-go-round

June 14, 2017

Preamble

All too often Enterprise Architecture (EA) is planned as a big bang project to be carried out step by step until completion. That understanding is misguided as it confuses EA with IT systems and implies that enterprises could change their architectures as if they were apparel.

EA is a never-ending endeavor (Robert Doisneau)

But enterprise architectures are part and parcel of enterprises, a combination of culture, organization, and systems; whatever the changes, they must keep the continuity, integrity, and consistency of the whole.

Capabilities

Compared to usual projects, architectural ones are not meant to address specific business needs but architecture capabilities that may or may not be specific to business functions. Taking a leaf from the Zachman Framework, those capabilities can be organized around five pillars supporting enterprise, systems, and platform architectures:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

These capabilities are set across architecture layers and support business, engineering, and operational processes.

Enterprise architecture capabilities

Enterprise architects are to continuously assess and improve these capabilities with regard to current weaknesses (organizational bottlenecks, technical debt) or future developments (new business, M&A, new technologies).

Work Units

Given the increased dependencies between business, engineering, and operations, organizing EA workflows around work units defined bottom-up from capabilities provides clear benefits with regard to EA versatility and plasticity.

Contrary to top-down (aka activity-based) ones, bottom-up schemes don’t rely on one-size-fits-all procedures; as a consequence work units can be directly defined by capabilities and therefore mapped to engineering workshops:

Iterative development of architecture capabilities across workshops

Moreover, dependency constraints can be directly defined as declarative assertions attached to capabilities and managed dynamically instead of having to be hard-wired into phased processes, as sketched below.
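
A minimal sketch of that scheme (capability names and constraints are hypothetical): each work unit declares the capabilities it depends on, and units are picked up dynamically as their assertions come to hold:

  # Minimal sketch of declarative dependency constraints attached to
  # capabilities, evaluated dynamically rather than hard-wired into phases.
  # Capability names and constraints are hypothetical.

  delivered = set()   # capabilities delivered so far

  work_units = {
      "business-objects": set(),                        # no prerequisites
      "business-logic":   {"business-objects"},
      "user-interfaces":  {"business-objects", "business-logic"},
  }

  def ready(unit):
      """A work unit is ready when its declarative assertions hold."""
      return work_units[unit] <= delivered

  # Units are picked up as their constraints are satisfied, in any order:
  pending = set(work_units)
  while pending:
      unit = next(u for u in pending if ready(u))
      print(f"developing {unit}")
      delivered.add(unit)
      pending.remove(unit)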

That approach is to ensure two agile conditions critical for the development of architectural features:

  • Shared ownership: lest the whole enterprise be paralyzed by decision-making procedures, work units must be carried out under the sole responsibility of project teams.
  • Continuous delivery: architecture-driven developments are by nature transverse, but the delivery of building blocks cannot be put on hold pending the decision of all parties concerned; instead it should be decoupled from integration.

Enterprise architecture projects could then be organized as a merry-go-round of capabilities-based work units to be set up, developed, and delivered according to needs and time-frames.

Time Frames

Enterprise architecture is about governance more than engineering. As such it has to ensure continuity and consistency between business objectives and strategies on one side, engineering resources and projects on the other side.

Assuming that capability-based work units will do the job for internal dependencies (application contents and engineering), the problem is to deal with external ones (business objectives and enterprise organization) without introducing phased processes. Beyond differences in monikers, such dependencies can generally be classified along three reasoned categories:

  • Operational: whatever can be observed and acted upon within a given envelope of assets and capabilities.
  • Tactical: whatever can be observed and acted upon by adjusting assets, resources and organization without altering the business plans and anticipations.
  • Strategic: decisions regarding assets, resources and organization contingent on anticipations regarding business environments.

The role of enterprise architects will then be to manage the deployment of updated architecture capabilities according to their respective time-frames.

Portfolio Management

As noted before, EA workflows by nature can seldom be carried out in isolation as they are meant to deal with functional features across business domains. Instead, a portfolio of architecture (as opposed to development) work units should be managed according to their time-frame, the nature of their objective, and the kind of models to be used:

EA portfolio

  • Strategic features affect the concepts defining business objectives and processes. The corresponding business objects and processes are primarily defined with descriptive models; changes will have cascading effects for engineering and operations.
  • Tactical features affect the definition of artifacts, logical or physical. The corresponding engineering processes are primarily defined with prescriptive models; changes are to affect operational features but not the strategic ones.
  • Operational features affect the deployment of resources, logical or physical. The corresponding processes are primarily defined with predictive models derived from descriptive ones; changes are not meant to affect strategic or tactical features.

Architectural projects could then be managed as a dynamic backlog of self-contained work units continuously added (a) or delivered (b).

EA projects: a merry-go-round of work units.

That would bring together agile development processes and enterprise architecture.

Further Reading

Beans must be Counted, one way And the other

May 2, 2017

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality: unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play it safe and keep their respective assessments on opposite banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g. users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.
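
For instance, reasoned schemes like IFPUG function points boil down to a weighted count adjusted by general system characteristics; a toy computation (all counts and ratings below are made up, and average complexity weights are used throughout):

  # Toy illustration of IFPUG-style function point counting; all counts and
  # ratings are made up, and average complexity weights are used throughout.

  weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
  counts  = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}

  ufp = sum(counts[k] * weights[k] for k in weights)      # unadjusted FPs

  # 14 general system characteristics, each rated 0..5 (hypothetical ratings):
  gsc = [3, 2, 4, 3, 1, 0, 2, 3, 3, 2, 1, 2, 4, 3]
  vaf = 0.65 + 0.01 * sum(gsc)                            # value adjustment factor

  print(f"UFP={ufp}, VAF={vaf:.2f}, adjusted FPs={ufp * vaf:.0f}")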

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics use prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of software footprint over both sides of the crumbling fences between enterprise systems and business environments makes software economics a necessary component of enterprises governance, so a tally of software beans should not be an option.

Further Reading

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements are satisfied as well as users’ expectations for service, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences between scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need of some “manual” teaching. That hurdle may be overcome by the deep learning ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
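
As a minimal illustration of that log-mining step (plain frequency counting stands in here for deep learning, and the log events are hypothetical), rare event transitions are flagged as candidate critical test cases:

  # Minimal illustration of mining log files for critical test cases: rare
  # event transitions are flagged as candidates. Frequency counting stands
  # in here for deep learning, and the log events are hypothetical.
  from collections import Counter

  log = ["login", "search", "book", "pay", "logout",
         "login", "search", "search", "logout",
         "login", "pay", "cancel", "logout"]

  bigrams = Counter(zip(log, log[1:]))     # event transitions observed in logs

  # Transitions seen only once are unusual circumstances worth scripting:
  for pair, n in bigrams.items():
      if n == 1:
          print("candidate test case:", " -> ".join(pair))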

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed into the policy module in charge of defining test cases.

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning combining random simulation with automated reinforcement is what makes the specificity of deep learning.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users interfaces (for non shared interactions). Then, since test cases are also run across swim-lanes, it opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

 

Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn keeps UCs as defined by Jacobson, refining them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g. services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.
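
For illustration, Cockburn's levels and their instantiation rule can be encoded directly from the descriptions above; a minimal sketch, with names and the concrete/abstract rule as assumptions drawn from the list:

    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):
        CLOUD = "white: objective shared across business processes"
        KITE = "white: process-specific objective"
        SEA = "blue: user interaction"
        FISH = "indigo: supporting function"
        CLAM = "indigo: system operation"

    @dataclass
    class UseCase:
        name: str
        level: Level

        @property
        def concrete(self):
            # White-level UCs are abstract and cannot be instantiated;
            # blue and indigo ones can.
            return self.level not in (Level.CLOUD, Level.KITE)

    print(UseCase("Handle Claim", Level.CLOUD).concrete)   # False
    print(UseCase("Register Claim", Level.SEA).concrete)   # True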

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow a seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users’ concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant, because mapping business rules with applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? But without a conceptual definition of “high levels”, BUCs are to remain nondescript practices. As for the “lower levels” of business objectives, users’ stories already offer a better-defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUCs appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.

Broader enterprise and business considerations can then be documented depending on scope.
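
The abstraction constructs themselves lend themselves to a minimal sketch, assuming hypothetical attributes for actor, trigger, and execution in line with Cockburn's example above:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UseCase:
        name: str
        actor: Optional[str] = None     # abstract UCs have no actor (19)
        trigger: Optional[str] = None   # ... no trigger (20)
        executable: bool = False        # ... no execution (21)

    # The same name documents different levels of abstraction:
    enterprise_level = UseCase("Handle Claim")                 # business rules only
    process_level = UseCase("Handle Claim", actor="Customer")  # interactions, no claim details
    concrete = UseCase("Register Claim", actor="Customer",
                       trigger="claim submitted", executable=True)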

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.

Requirements inception

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to resolve the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting aside frictions between specific methods, the benefits of agile principles and practices are now well recognized, contingent on the limits of the agile scope. Summarily, agile development is at its best when requirements capture and analysis can be woven with development and tests. The question remains of what happens when requirements are to be dealt with separately.

iStar’s answer shares with agile a focus on collaboration and doesn’t take sides between business (e.g. users’ stories) and systems (e.g. use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal; e.g. a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges, e.g. body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g. appraise damages; qualifications (soft goals), e.g. fair appraisal; and resources, e.g. premium payment.

Actors & Dependencies
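
As an illustration, the three items can be encoded as follows; a minimal sketch with hypothetical class names, reusing the car-repair example:

    from dataclasses import dataclass, field

    @dataclass
    class Actor:
        name: str

    @dataclass
    class Interaction:
        primary: Actor                  # triggers the interaction to meet a goal
        goal: str
        secondary: list = field(default_factory=list)

    repair = Interaction(
        primary=Actor("Car owner"),
        goal="car repaired",
        secondary=[Actor("Body shop"), Actor("Appraiser"),
                   Actor("Insurance company")],
    )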

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope:

  • Goal (strong) dependency: assertions about the actual state of affairs (object, activity, or expectations).
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
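
A minimal sketch of the four dependency kinds, reusing the car-repair example; the depender/dependee pairs are illustrative assumptions, not part of the iStar specification:

    from dataclasses import dataclass
    from enum import Enum

    class DependencyKind(Enum):
        GOAL = "assertion about the actual state of affairs"
        SOFT_GOAL = "assertion about expected outcomes"
        TASK = "constraint on the execution of activities"
        RESOURCE = "condition on the availability of inputs"

    @dataclass
    class Dependency:
        depender: str
        dependee: str
        kind: DependencyKind
        subject: str

    dependencies = [
        Dependency("Car owner", "Body shop", DependencyKind.GOAL, "car repaired"),
        Dependency("Car owner", "Insurance company", DependencyKind.SOFT_GOAL, "fair appraisal"),
        Dependency("Insurance company", "Appraiser", DependencyKind.TASK, "appraise damages"),
        Dependency("Insurance company", "Car owner", DependencyKind.RESOURCE, "premium payment"),
    ]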

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, with strong dependencies associated with deontic rules and soft dependencies with alethic ones.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML Use Cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.


iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.
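
Such a mapping could be sketched as a simple lookup; the correspondences shown are illustrative assumptions, not Caminao's actual definitions:

    # Illustrative lookup only; the actual correspondence is defined by the
    # Caminao paradigm, not by this sketch.
    ISTAR_TO_CAMINAO = {
        "actor": "actor",
        "task": "activity",
        "resource": "object",
        "goal (strong)": "actual (event or state)",
        "soft goal": "symbolic (expectation)",
    }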

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly for the definition of primary and secondary roles (aka actors), intents (business) and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if its logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor the nature of capabilities, e.g strategic and operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared intent on conceptual modeling; as a corollary, these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links