Archive for the ‘Enterprise governance’ Category

2018: Clones vs Octopuses

December 4, 2017

In the footsteps of robots replacing workmen, deep learning bots look to boot out knowledge workers overwhelmed by muddy data.

Cloning Knowledge (Tadeusz Cantor, from “The Dead Class”)

Faced with that prospect, should humans try to learn deeper and faster than clones, or should they learn from octopuses and their smart hands?

Machine Learning & The Economics of Clones

As illustrated by scan-reading AI machines, the spread of learning AI technology into every nook and cranny introduces something like an exponential multiplier: compared to the power-loom of the Industrial Revolution, which substituted machines for workers, deep learning substitutes replicators for machines; and contrary to power looms, there is no physical limit on the number of smart clones that can be deployed. So, however fast and deep humans can learn, clones are much too prolific: it’s a no-win situation. To get out of that conundrum humans have to lay their hands on a competitive edge, e.g some kind of knowledge that cannot be cloned.

Knowledge & Competition

To appraise humans’ learning sway over machines, one can borrow Spinoza’s categories of knowledge with regard to sources:

  1. Senses (views, sounds, smells, touches) or beliefs (as nurtured by the supposed common “sense”). Artificial sensors can compete with human ones, and smart machines are much better if prejudiced beliefs are put into the equation.
  2. Reasoning, i.e the mental processing of symbolic representations. As demonstrated by AlphaGo, machines are bound to rapidly extend their competitive edge.
  3. Philosophy, which is in essence meant to bring together perceptions, intuitions, and symbolic representations. That’s where human intelligence could beat its artificial cousin, which is clueless when purposes are needed.

That assessment is borne out by evolution: the absolute dominance established by humans over other animal species comes from their use of knowledge, which can be summarized as:

  1. Use of symbolic representations.
  2. Ability to formulate and exchange representations of contexts, concerns, and policies.
  3. Ability to agree on stakes and cooperate on policies.

On that basis, the third dimension, i.e the use of symbolic knowledge to cooperate on non-zero-sum endeavors, can be used to draw the demarcation line between human and artificial intelligence:

  • Paths and paces of pursuits are part and parcel of the knowledge itself; the fact that both are mostly obviated by search engines gives humans some edge.
  • Operational knowledge is best understood as information put to use, and must include concerns and decision-making. But smart bots’ ubiquity and capabilities often sap information traceability and decisions transparency, which makes room for humans to prevail.

So humans can find a clear competitive edge in this knowledge dimension because it relies on a combination of experience and thinking and is therefore hard to clone. Organizations should make sure that’s where smart systems step back and humans take over.

Organization & Innovation

Innovation being at the root of competitive edge, understanding the role played by smart systems is a key success factor; that role is to be defined by the organization.

As epitomized by Henry Ford, industrial-era thinking associated innovation with top-down management and the specialization of execution:

  • At execution level manual tasks were to be fragmented and specialized.
  • At management level analysis and decision-making were to be centralized and abstracted.

That organizational paradigm puts a double restraint on innovation:

  • On the execution side the fragmentation of manual tasks prevents workers from effectively assessing and improving their performance.
  • On the management side knowledge is kept in conceptual boxes and bereft of feedback from actual use.

That partition between smart brains and dumb hands may have worked well enough for manufacturing processes limited to material flows and subject to circumscribed and predictable technological changes. It didn’t last.

First, as such hierarchies necessarily grow with processes’ complexity, overheads and rigidity force repeated pruning. Then, flat hierarchies are of limited use when information flows are to be combined with material ones, so enterprises have had to adopt matrix organizations. Finally, with the seamless integration of digital and material flows, perpetuating the traditional line between management and execution is bound to hamstring innovation:

  • Smart tools may be able to perform a wide range of physical tasks without human supervision, but the core of innovation as well as its front lines are where humans and machines collaborate in processing a mix of material and information flows, both learning from the experience.
  • Hierarchies and centralized decision-making are cut off from their feeders when set in networked business environments colonized by smart bots on both sides of corporate boundaries.

Not surprisingly, these innovation trends seem to tally with the social dimension of knowledge.

Learning from the Octopus

The AI revolution has already broken all historical records of footprint (everything is affected) and speed (a matter of years). Given the length of human education cycles, appraising the consequences comes with some urgency, beginning with the disposal of some entrenched beliefs.

At individual level the new paradigm could be compared to the nervous system of octopuses: each arm gets its brain and neurons, and so its own touch of knowledge and taste of decision-making.

On a broader (i.e enterprise) perspective, knowledge should be supported by two organizational layers, one direct and innovation-driven between trusted co-workers, the other networked and knowledge-driven between remote workers, trusted or otherwise.

Further Reading

External Links


Focus: Business Analyst Booklet

November 6, 2017

Objective

Business analysts stand between unbounded and moving business landscapes on one hand, distinctive and steady enterprise organization and culture on the other hand.

How to align enterprise resources and business opportunities (Patrick Zachmann)

Assuming that BAs’ primary concern is to keep ahead of the competition, framing business undertakings into universal guidelines could be counterproductive. By contrast, harnessing together versatile business processes and reliable systems architectures will clearly enhance business agility; hence the benefits of lining up enterprise architects’ and business analysts’ conceptual toolboxes:

  1. Concepts: eight exclusive and unambiguous definitions provide the conceptual building blocks.
  2. Models: how the concepts are used to consolidate business requirements and convey them to enterprise architects and software engineers.
  3. Processes: how to harness organization and business objectives and align applications with business value.
  4. Architectures: how to contrive along time the continuity and consistency of business concepts and objectives, and their congruence with systems capabilities.
  5. Governance: assessment of business value and risks.

On that basis, the objective here is not to detail BAs’ tasks or methods but to focus on core issues to be addressed by business analysts.

Concepts

Whereas systems architecture is not their primary concern, business analysts should nonetheless share the same modeling paradigm:

  • Analysis models for business environments and objectives.
  • Design models for the architecture of systems and the specification of components.

Business objects and processes must be consistently identified (#) across business and system realms.

It is worth recalling that the distinction between descriptive (aka analysis) and prescriptive (aka design) models is not arbitrary but based on logic principles: the former are extensional as they classify actual instances of business objects and activities; in contrast, the latter are intensional as they define the features and behaviors of required system artifacts.

The distinction also brings organizational benefits as it tallies with BAs’ responsibility regarding the consistency and continuity of identities and semantics of actual objects and processes (business extensions) and their symbolic counterparts (system intensions):

Relevant categories at architecture level can be neatly and unambiguously defined.

  • Actual containers represent address spaces or time frames; symbolic ones represent authorities governing symbolic representations. Systems are actual realizations of symbolic containers managing symbolic artifacts.
  • Actual objects (passive or active) have physical identities; symbolic objects have social identities; messages are symbolic objects identified within communications. Power-types (²) are used to partition objects.
  • Roles (aka actors) are parts played by active entities (people, devices, or other systems) in activities (BPM), or, as the case may be, when interacting with systems (UML’s actors). They are not to be confused with agents, which are meant to be identified independently of their behavior.
  • Events are changes in the state of business objects, processes, or expectations.
  • Activities are symbolic descriptions of operations and flows (data and control) independently of supporting systems; execution states (aka modes) are operational descriptions of activities with regard to processes’ control and execution. Power-types (²) are used to partition execution paths.

While business analysts should only be tasked with the continuous and consistent mapping of business individuals to their system surrogates, and not with their implementations, that cannot be achieved without a full and unambiguous specification of the variants and abstractions for the business objects and processes to be represented.
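For illustration only, the categories above can be given a minimal sketch (in Python; the class names and fields are hypothetical, not part of any standard), with a single identity (#) keeping actual business instances and their system surrogates aligned:

```python
from dataclasses import dataclass
from enum import Enum

class Nature(Enum):
    ACTUAL = "actual"      # business realm: physical or social instances
    SYMBOLIC = "symbolic"  # system realm: surrogates managed by systems

@dataclass
class Individual:
    uid: str          # the shared identity (#) across business and system realms
    nature: Nature

@dataclass
class Container(Individual):
    scope: str        # address space, time frame, or governing authority

@dataclass
class BusinessObject(Individual):
    active: bool      # passive or active; identity is physical (actual) or social (symbolic)

@dataclass
class Role(Individual):
    entity: str       # person, device, or system playing the part

@dataclass
class Event(Individual):
    changed: str      # state of object, process, or expectation being changed

@dataclass
class Activity(Individual):
    flows: list[str]  # data and control flows, independently of supporting systems

# example: an actual customer and its system surrogate share the same identity
actual = BusinessObject(uid="customer#42", nature=Nature.ACTUAL, active=True)
surrogate = BusinessObject(uid="customer#42", nature=Nature.SYMBOLIC, active=True)
assert actual.uid == surrogate.uid
```

The point of the sketch is the shared uid: whatever the implementation, business analysts are accountable for that identity being maintained continuously and consistently on both sides.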

Languages & Models

Being in charge of requirements, business analysts can be seen as the gate-keepers of the whole engineering process. To begin with, and depending on the nature of domains, BAs can capture requirements using formal (e.g for scientific domains), specific, or natural languages. Then, requirements analysis can be carried out:

  • Iteratively in unison with development and in collaboration with software engineers (agile approach). In that case models are not necessary as requirements are expressed in natural language (users’ stories), possibly combined with domain specific languages (DSLs) for development.
  • As phased undertakings carried out independently, using a dedicated modeling language (e.g BPMN).
  • As phased undertakings carried out jointly with system analysts using a general purpose modeling language (e.g UML).

Three ways to deal with requirements analysis: business oriented and phased (BPMN), system oriented and phased (use cases), or business driven and iterative (users’ stories).

These schemes are therefore best understood as tools whose uses may overlap or be combined:

  • BPMN and UML activity diagrams have much in common.
  • Class diagrams can complement BPMN for business objects, and state diagrams for processes’ control.
  • Use cases can be seen as describing the part of users’ stories to be supported by systems.

How BAs employ them will depend on business processes and projects’ objectives.

Business & Development Processes

The responsibility of BAs is about business processes, the choice of development model being left to project managers; hence the need for business analysts to be familiar with basic options:

  • Agile: business analysts collaborate with software engineers in project teams and share responsibilities from requirements to delivery.
  • Phased: roles and responsibilities are defined specifically with regard to development tasks.

With agile schemes BAs share roles and responsibilities all along; with phased ones, roles and responsibilities are defined with regard to tasks.

Agile or phased, the contribution of business analysts can be defined around three core issues, corresponding to three typical modus operandi:

  • Concepts associated with business objects and activities that are to be represented. Assuming that conceptual models are meant to be stable and shared across processes, they should be under the responsibility of business analysts independently of applications.
  • Actors (users, devices, or systems) and activities. Insofar as the impact on organization and system functional features can be localized (users interfaces) or circumscribed (business rules), business analysts can collaborate and share responsibility with software engineers all along an iterative process. Otherwise (changes in organization or business functions) business analysts will have to consolidate their work with enterprise architects.
  • Processes execution. Often labelled as non-functional capabilities, they essentially deal with the different aspects of users’ experience and the synchronization of changes in business environments and supporting systems. For that purpose business analysts will have to check requirements against systems capabilities.

Business analysts core concerns and MO: conceptual model, activities, and processes.

While these issues are often interwoven, sorting them out can help to match development models with projects objectives and scope: agile for projects facing business users, phased for the ones dealing with architectures; that will also help to characterize the role of BAs depending on focus: business processes (BPM, use cases, users’ stories), functional architecture (services, conceptual models), or quality of services.

Business Analysis & Systems Architectures

When considering business opportunities, business analysts have to define requirements’ footprint with regard to system capabilities:

  • Confined: applications can be developed in collaboration with software engineers from users’ stories to code, without modeling. Assuming agile conditions about shared ownership and continuous delivery are met, that would be the default option.
  • Distributed: some modeling is needed for communication and consolidation purposes. But business processes modeling languages like BPMN make no distinction between processes’ details and the shared features of supporting systems. That puts a challenging toll on business analysts (complexity, ambiguity) with limited benefits (no easy mapping to system functions).

A primary concern for business analysts should therefore be to frame projects accordingly: self-contained and business driven on one hand, shared and architecture driven on the other hand, with use cases set in between if and when necessary. For that purpose shared concerns will have to be clearly identified; taking BPMN for example:


Separation of concerns: architecture backbone and processes’ details

  • Containers for physical (locations) and logical (organizations and domains) objects have no explicit BPMN equivalents.
  • Active objects have no explicit BPMN equivalent.
  • Swimlanes and pools tally with roles (aka actors).
  • Data stores tally with entities (persistent representations of business objects).
  • Tasks, transactions, and sub-processes can be translated into activities’ descriptions and processes’ execution.
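To make the separation of concerns tangible, the mapping above can be written down as a simple lookup; the sketch below (Python, with illustrative labels) merely records which backbone concerns BPMN covers explicitly and which it leaves out:

```python
# Hypothetical mapping of architecture backbone concerns to BPMN equivalents;
# None marks concerns with no explicit BPMN counterpart.
BACKBONE_TO_BPMN = {
    "physical container (location)": None,
    "logical container (organization, domain)": None,
    "active object": None,
    "role (aka actor)": ["swimlane", "pool"],
    "entity (persistent business object)": ["data store"],
    "activity / process execution": ["task", "transaction", "sub-process"],
}

def uncovered_concerns() -> list[str]:
    """List backbone concerns that a BPMN model cannot express explicitly."""
    return [concern for concern, bpmn in BACKBONE_TO_BPMN.items() if not bpmn]

print(uncovered_concerns())
```

Running uncovered_concerns() simply returns the first three entries, i.e. the architecture concerns that have to be documented outside BPMN.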

Given backbones shared with enterprise architects, the next step is to flesh them out with specific details. Depending on methods and tools, that can be done using a domain specific language (DSL) with direct implementation, or through a generic subset of BPMN that could be unambiguously mapped to design constructs, for instance:

  • Anchors (#): instances (objects or activities) directly and consistently identified across business and system realms.
  • Collections (*): sets of individuals with shared features.
  • Features: attributes or operations without identity of their own.
  • Structures (diamond): composition (black) for individual components (objects or activities) whose life-cycle is bound to their owner, i.e they have no identity of their own; aggregation (white) for components identified independently but used in the context of their owner.
  • Connectors: associate individuals; their semantics is set by context: communication channel, reference, data or control flow, transition. They can bear identification (#).
  • Power-types (²): define subsets of individual objects or activities. Depending on context and modeling language, power-types correspond to classifications, extension points, gateways, branches and joins, etc.
  • Inheritance (triangle): contrary to structure and functional connectors that deal with instances, inheritance connectors are used to describe relationships between descriptors. Strong inheritance (black) is the counterpart of composition (inheritance of structural features), and weak inheritance (white) the counterpart of aggregation (inheritance of functional features).

Separation of concerns: architecture backbone and anchors details

Using the same set of well accepted and unambiguous logical constructs for both objects and behaviors can greatly enhance the consistency of analysis models as well as their traceability to designs.

Business Analysis & Knowledge Architecture

As noted above, while business analysts may have to consolidate functional requirements or check the feasibility of non functional ones with enterprise architects, they should take responsibility for conceptual models, and more generally for enterprise knowledge architecture. Taking a leaf from Davis, Shrobe, and Szolovits, that will cover:

  1. Surrogates: descriptions of the symbolic counterparts of actual objects, events, and relationships.
  2. Ontological commitments: statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: model of what the things can do or can be done with.
  4. Medium for efficient computation: knowledge understandable by computers.
  5. Medium for human expression: communication between specific domain experts on one hand, generic knowledge managers on the other hand.

Setting apart users’ interfaces (point 5), two typical approaches can be considered:

  • Domain Driven Design (DDD), which deals with domains representation and computation from a system perspective (point 4).
  • Ontologies, which put the focus on knowledge oriented languages independently of computation (points 1-3).

Besides their one-way orientation, both fall short of business analysts’ needs, the former being too technical, the latter too open-ended. Instead, a conceptual framework should combine bounded domains with a compact and unambiguous knowledge oriented language.

As it happens, mapping the symbolic footprint of business domains and knowledge into systems may be dictated by the generalization of networked environments and digital business flows. Along that reasoning, BAs will have to deal with knowledge from domains as well as processes perspectives.

With regard to domains, a distinction should be maintained between institutional (external, statutory), business specific (external, agreed), and enterprise specific (internal).


A conceptual approach to domain layers: institutional, business specific (e.g HR management) and enterprise specific (e.g supply, sales).

With regard to processes, knowledge must be understood as the dynamic and multi-faceted outcome of data analytics, production systems, and decision-making. Taking a (revised) leaf from Zachman’s framework, business and operational objectives would be reset so as to cross architecture layers instead of being aligned with them. Using a pentagonal representation of enterprise architecture, Zachman’s sixth column (“Why”) would be rounded as an outer range.

Knowledge: timely and multi-faceted information put to use

Such tightened integration of business processes and IT systems can be decisive in getting a competitive edge, as illustrated by the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets:

  • Observation: operational processes must provide accurate and up-to-date analysis of business contexts as well as feedback.
  • Orientation: transparency of functional architecture is to support business positioning and the adjustment of business objectives.
  • Decision: versatility and plasticity of applications are to facilitate changes of tactical options.
  • Action: integration of business, engineering, and operational processes are to ensure just-in-time business moves.

BAs must consider the benefits of systems integration for decision-making.
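As a rough illustration of how the loop could be wired to supporting systems (a sketch in Python; all function names and the sample event are hypothetical), the four steps simply chain over a stream of business events:

```python
from typing import Callable, Iterable

def ooda_loop(events: Iterable[dict],
              observe: Callable[[dict], dict],
              orient: Callable[[dict], dict],
              decide: Callable[[dict], str],
              act: Callable[[str], None]) -> None:
    """Run the OODA cycle over a stream of business events.

    observe: consolidate raw operational data and feedback
    orient:  position the observation against business objectives
    decide:  pick a tactical option
    act:     trigger the corresponding business move
    """
    for event in events:
        observation = observe(event)    # accurate, up-to-date view of the context
        position = orient(observation)  # adjust objectives against functional architecture
        option = decide(position)       # versatility and plasticity of applications
        act(option)                     # just-in-time business move

# usage sketch with trivial stand-ins
ooda_loop(
    events=[{"signal": "demand spike"}],
    observe=lambda e: {"context": e["signal"]},
    orient=lambda o: {"objective": "protect margin", **o},
    decide=lambda p: f"reprice ({p['context']})",
    act=print,
)
```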

On a broader perspective the integration of data analytics, production systems, and knowledge management is becoming a key success factor for governance.

Governance: Metrics, Quality, & Risks

As gate-keepers, business analysts have to rank projects with regard to business value, risks, and return on investment. Assuming that business value is set independently of supporting systems, projects’ assessment and ranking should be set according to the nature of problems:

  • Intrinsic business size and complexity: requirements can be estimated from individuals (objects and activities), features, relationships, and partitions.
  • Supporting systems functionalities: intrinsic business metrics are to be combined with what is expected from supporting systems: processes and transactions, triggering events, users and devices interfaces, etc.
  • Business and functional measurements can then be weighted by non-functional (aka Quality of Service) requirements.

Assessment should be aligned with problems: business, supporting systems, operations.
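A back-of-the-envelope sketch of such an assessment is given below (Python; the categories echo the list above, but the weights are arbitrary assumptions, not a standard metric such as function points):

```python
def project_size(business: dict, functional: dict, qos_factor: float = 1.0) -> float:
    """Illustrative sizing: weighted counts of business and functional items,
    adjusted by a non-functional (quality of service) factor."""
    business_weights = {"objects": 3, "activities": 4, "features": 1,
                        "relationships": 2, "partitions": 2}
    functional_weights = {"transactions": 5, "events": 3, "interfaces": 4}

    business_size = sum(business.get(k, 0) * w for k, w in business_weights.items())
    functional_size = sum(functional.get(k, 0) * w for k, w in functional_weights.items())
    return (business_size + functional_size) * qos_factor

# usage sketch
size = project_size(
    business={"objects": 12, "activities": 8, "features": 40, "relationships": 15, "partitions": 5},
    functional={"transactions": 6, "events": 10, "interfaces": 4},
    qos_factor=1.2,  # e.g. stringent availability or response-time requirements
)
print(round(size, 1))
```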

If returns on investment (ROI) and risks are to be assessed consistently and decision-making carried out accordingly, value, costs, quality, and hazards have to be set within the same framework, in particular for quality and risks management:

  • Business environment: risks are external and quality is to check for timely and relevant analysis models.
  • Engineering: risks are internal and quality is to focus on processes maturity.
  • Technologies: risks are external and quality is to address versatility, plasticity, and effectiveness of solutions.

To conclude, whereas business risks remain the primary concern of business analysts, the fusion of business and systems processes means that they can no longer ignore engineering pitfalls and the importance of quality for risks management.

Further Reading

Focus: Enterprise Architect Booklet

October 16, 2017

Objective

Given the diversity of business and organizational contexts, and EA still being a fledgling discipline, spelling out a job description for enterprise architects can be challenging.


Aligning business, organization, and systems perspectives (Hans Vredeman de Vries)

So, rather than looking for comprehensive definitions of roles and responsibilities, one should begin by circumscribing the key topics of the trade, namely:

  1. Concepts: eight exclusive and unambiguous definitions provide the conceptual building blocks.
  2. Models: how the concepts are used to analyze business requirements and design systems architectures and software artifacts.
  3. Processes: how to organize business and engineering processes.
  4. Architectures: how to align systems capabilities with business objectives.
  5. Governance: assessment and decision-making.

The objective here is to define the core issues that need to be addressed by enterprise architects.

Concepts

To begin with, the primary concern of enterprise architects should be to align organization, processes, and systems with enterprise business objectives and environment. For that purpose architects are to consider two categories of models:

  • Analysis models describe business environments and objectives.
  • Design models prescribe how systems architectures and components are to be developed.

Enterprise architects must focus on individuals (objects and processes) consistently identified (#) across business and system realms.

That distinction is not arbitrary but based on formal logic: analysis models are extensional as they classify actual instances of business objects and activities; in contrast, design models are intensional as they define the features and behaviors of required system artifacts.

The distinction is also organizational: as far as enterprise architecture is concerned, the focus is to remain on objects and activities whose identity (#) and semantics are to be continuously and consistently maintained across business (actual instances) and system (symbolic representations) realms:

Relevant categories at architecture level can be neatly and unambiguously defined.

  • Actual containers represent address spaces or time frames; symbolic ones represent authorities governing symbolic representations. Systems are actual realizations of symbolic containers managing symbolic artifacts.
  • Actual objects (passive or active) have physical identities; symbolic objects have social identities; messages are symbolic objects identified within communications. Power-types (²) are used to partition objects.
  • Roles (aka actors) are parts played by active entities (people, devices, or other systems) in activities (BPM), or, as the case may be, when interacting with systems (UML’s actors). They are not to be confused with agents, which are meant to be identified independently of their behavior.
  • Events are changes in the state of business objects, processes, or expectations.
  • Activities are symbolic descriptions of operations and flows (data and control) independently of supporting systems; execution states (aka modes) are operational descriptions of activities with regard to processes’ control and execution. Power-types (²) are used to partition execution paths.

Since the objective is to identify objects and behaviors at architecture level, variants, abstractions, and implementations can be set aside. It also ensues that the blueprints obtained remain general enough to be uniformly, consistently, and unambiguously translated into most modeling languages.

Languages & Models

Enterprise architects may have to deal with a range of models depending on scope (business vs system) or level (enterprise and system vs domains and applications):

  • Business process modeling languages are used to associate business domains and enterprise organization.
  • Domain specific languages do the same between business domains and software components, bypassing enterprise organization and systems architecture.
  • Generic modeling languages like UML are supposed to cover the whole range of targets.
  • Languages like Archimate focus on the association between enterprise organization and systems functionalities.
  • Contrary to modeling languages, programming ones are meant to translate functionalities into software end-products. Some, like WSDL (Web Services Description Language), can be used to map EA into service-oriented architectures (SOA).

Scope of Modeling Languages

While architects clearly don’t have to know the language specifics, they must understand their scope and purposes.

Processes

Whatever the languages, methods, or models, the primary objective is that architectures support business processes whenever and wherever needed. Except for standalone applications (for which architects are marginally involved), the way systems architectures support business processes is best understood with regard to layers:

  • Processes are solutions to business problems.
  • Processes (aka business solutions) induce problems for systems, to be solved by functional architecture.
  • Implementations of functional architectures induce problems for platforms, to be solved by technical architectures.

Enterprise architects should focus on the alignment of business problems and supporting systems functionalities

As already noted, enterprise architects are to focus on enterprise and system layers: how business processes are supported by systems functionalities and, more generally, how architecture capabilities are to be aligned with enterprise objectives.

Nonetheless, business processes don’t operate in a vacuum and may depend on engineering and operational processes, with regard to development for the former, deployment for the latter.

Enterprise architects should take a holistic view of business, engineering, and operational processes.

Given the crumbling of traditional fences between environments and IT systems under combined markets and technological waves, the integration of business, engineering, and operational processes is to become a necessary condition for market analysis and reactivity to changes in business environment.

Architecture

Blueprints being architects’ tool of choice, enterprise architects use them to chart how enterprise objectives are to be supported by systems capabilities; for that purpose:

  • On one hand they have to define the concepts used for the organization, business domains, and business processes.
  • On the other hand they have to specify, monitor, assess, and improve the capabilities of supporting systems.

In between they have to define the functionalities that will consolidate specific and possibly ephemeral business needs into shared and stable functions best aligned with systems capabilities.


The role of functional architectures is to map conceptual models to systems capabilities

As already noted, enterprise architects don’t have to look under the hood at the implementation of functions; what they must do is to ensure continuous and comprehensive transparency between existing as well as planned business objectives and systems capabilities.

Assessment

One way or the other, governance implies assessment, and for enterprise architects that means setting apart architectural assets and business applications:

  • Whatever their nature (enterprise organization or systems capabilities), the life-cycle of assets encompasses multiple production cycles, with returns to be assessed across business units. On that account enterprise architects are to focus on the assessment of the functional architecture supporting business objectives.
  • By contrast, the assessment of business applications can be directly tied to a business value within a specific domain, value which may change with cycles. Depending on induced changes for assets, adjustments are to be carried out through users’ stories (standalone, local impact) or use cases (shared business functions, architecture impact).

Enterprise architects deal with assets, business analysts with processes.

The difficulty of assessing returns for architectural assets is compounded by cross dependencies between business, engineering, and operational processes; and these dependencies may have a decisive impact for operational decision-making.

Operational Decision-making

The weaving of enterprise systems within networked business environments calls for a tightened integration of business processes and IT systems, bringing new challenges for enterprise architects. Stakes can be illustrated with the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets:

  • Observation: operational processes must provide accurate and up-to-date analysis of business contexts as well as feedback.
  • Orientation: transparency of functional architecture is to support business positioning and the adjustment of business objectives.
  • Decision: versatility and plasticity of applications are to facilitate changes of tactical options.
  • Action: integration of business, engineering, and operational processes are to ensure just-in-time business moves.

The integration of maps and territories can greatly enhance strategic and operational decision-making.

That scheme epitomizes the main challenge of enterprise architects, namely the continuous and dynamic alignment of enterprise organization and systems to market environment, business processes, and decision-making.

Further Reading

EA’s Merry-go-round

June 14, 2017

Preamble

All too often EA is planned as a big bang project to be carried out step by step until completion. That understanding is misguided as it confuses EA with IT systems and implies that enterprises could change their architectures as if they were apparel.

EA is a never-ending endeavor (Robert Doisneau)

But enterprise architecture is part and parcel of enterprises, a combination of culture, organization, and systems; whatever the changes, they must keep the continuity, integrity, and consistency of the whole.

Capabilities

Compared to usual projects, architectural ones are not meant to address specific business needs but architecture capabilities that may or may not be specific to business functions. Taking a leaf from the Zachman Framework, those capabilities can be organized around five pillars supporting enterprise, systems, and platform architectures:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

These capabilities are set across architecture layers and support business, engineering, and operational processes.

Enterprise architecture capabilities

Enterprise architects are to continuously assess and improve these capabilities with regard to current weaknesses (organizational bottlenecks, technical debt) or future developments (new business, M&A, new technologies).
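For illustration, the five pillars and three layers can be transcribed into a small grid (Python; the entries are shorthand for the list above), which makes it easy to query what a capability means at each layer when weaknesses or developments are assessed:

```python
# Capability grid: pillars (rows) across architecture layers (columns),
# transcribed from the list above.
CAPABILITIES = {
    "Who":   {"enterprise": "roles", "system": "users", "platform": "entry points"},
    "What":  {"enterprise": "business objects", "system": "symbolic representations", "platform": "objects implementation"},
    "How":   {"enterprise": "business logic", "system": "applications", "platform": "software components"},
    "When":  {"enterprise": "processes synchronization", "system": "communication architecture", "platform": "communication mechanisms"},
    "Where": {"enterprise": "business sites", "system": "systems locations", "platform": "platform resources"},
}

def capability(pillar: str, layer: str) -> str:
    """Look up what a pillar means at a given architecture layer."""
    return CAPABILITIES[pillar][layer]

print(capability("When", "system"))  # -> communication architecture
```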

Work Units

Given the increased dependencies between business, engineering, and operations, defining EA workflows in terms of work units built bottom-up from capabilities is to provide clear benefits with regard to EA versatility and plasticity.

Contrary to top-down (aka activity-based) ones, bottom-up schemes don’t rely on one-size-fits-all procedures; as a consequence work units can be directly defined by capabilities and therefore mapped to engineering workshops:

Iterative development of architecture capabilities across workshops

Moreover, dependency constraints can be directly defined as declarative assertions attached to capabilities and managed dynamically instead of having to be hard-wired into phased processes.
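What “declarative assertions attached to capabilities” could look like is sketched below (Python; the capability names and the scheduling loop are hypothetical): each work unit carries its own dependencies and is taken up as soon as they are delivered, without any phased procedure being hard-wired in:

```python
from dataclasses import dataclass, field

@dataclass
class WorkUnit:
    capability: str                                    # e.g. "What @ system"
    requires: set[str] = field(default_factory=set)    # declarative dependency assertions
    delivered: bool = False

def ready(unit: WorkUnit, delivered: set[str]) -> bool:
    """A work unit can be set up as soon as its declared dependencies are delivered."""
    return not unit.delivered and unit.requires <= delivered

# backlog managed dynamically: no one-size-fits-all phased process
backlog = [
    WorkUnit("business objects @ enterprise"),
    WorkUnit("symbolic representations @ system", requires={"business objects @ enterprise"}),
    WorkUnit("objects implementation @ platform", requires={"symbolic representations @ system"}),
]

delivered: set[str] = set()
while any(not u.delivered for u in backlog):
    for unit in backlog:
        if ready(unit, delivered):
            unit.delivered = True            # developed and delivered by its own team
            delivered.add(unit.capability)   # integration decoupled from delivery
```

Dependencies being data rather than process steps, they can be added or dropped dynamically as the portfolio evolves.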

That approach is to ensure two agile conditions critical for the development of architectural features:

  • Shared ownership: lest the whole enterprise be paralyzed by decision-making procedures, work units must be carried out under the sole responsibility of project teams.
  • Continuous delivery: architecture driven developments are by nature transverse, but the delivery of building blocks cannot be put off until all parties concerned have agreed; instead it should be decoupled from integration.

Enterprise architecture projects could then be organized as a merry-go-round of capabilities-based work units to be set up, developed, and delivered according to needs and time-frames.

Time Frames

Enterprise architecture is about governance more than engineering. As such it has to ensure continuity and consistency between business objectives and strategies on one side, engineering resources and projects on the other side.

Assuming that capability-based work units will do the job for internal dependencies (application contents and engineering), the problem is to deal with external ones (business objectives and enterprise organization) without introducing phased processes. Beyond differences in monikers, such dependencies can generally be classified along three reasoned categories:

  • Operational: whatever can be observed and acted upon within a given envelope of assets and capabilities.
  • Tactical: whatever can be observed and acted upon by adjusting assets, resources and organization without altering the business plans and anticipations.
  • Strategic: decisions regarding assets, resources and organization contingent on anticipations regarding business environments.

The role of enterprise architects will then be to manage the deployment of updated architecture capabilities according to their respective time-frames.

Portfolio Management

As noted before, EA workflows by nature can seldom be carried out in isolation as they are meant to deal with functional features across business domains. Instead, a portfolio of architecture (as opposed to development) work units should be managed according to their time-frame, the nature of their objective, and the kind of models to be used:

EA portfolio

  • Strategic features affect the concepts defining business objectives and processes. The corresponding business objects and processes are primarily defined with descriptive models; changes will have cascading effects for engineering and operations.
  • Tactical features affect the definition of artifacts, logical or physical. The corresponding engineering processes are primarily defined with prescriptive models; changes are to affect operational features but not the strategic ones.
  • Operational features affect the deployment of resources, logical or physical. The corresponding processes are primarily defined with predictive models derived from descriptive ones; changes are not meant to affect strategic or tactical features.

Architectural projects could then be managed as a dynamic backlog of self-contained work units continuously added (a) or delivered (b).

EA projects: a merry-go-round of work units.

That would bring together agile development processes and enterprise architecture.

Further Reading

Beans must be Counted, one way And the other

May 2, 2017

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition impels enterprises to scrutinize the resources they employ and the outcomes they produce, pushing for the assessment of internal activities based on agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality, and unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play safe and keep their respective assessments on the opposed banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics uses prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments makes software economics a necessary component of enterprise governance, so a tally of software beans should not be an option.

Further Reading

Squaring EA Governance

April 18, 2017

Preamble

Enterprise governance has to face combined changes in the way business times and spaces are to be taken into account. On one hand social networks put well-thought-out market segments and well planned campaigns at the mercy of consumers’ weekly whims. On the other hand traditional fences between environments and IT systems are crumbling under combined markets and technological waves.

Squaring Governance in Space and Time (Jasenka Tucan-Vaillant)

So, despite (or because of) the exponential ability of intelligent systems to learn from circumstances, enterprise governance cannot cope with such dynamic complexities without a reliable compass set with regard to key primary factors: time-frames of concerns; control of processes; administration of artifacts.

Concerns & Time-frames

Confronted with massive and continuous waves of stochastic data flows, the priority is to position external events and decision-making with regard to business and assets time-frames:

  • Business value is to be driven by market opportunities which cannot be coerced into predefined fixed time-frames.
  • Assets management is governed by continuity and consistency constraints on enterprise identity, objectives, and investments along time.

Governance Square and its four corners

Enterprises, once understood as standalone entities, must now be redefined as living organisms in continuous adaptation to their environment. Governance schemes must therefore be broadened to business environments and layered so as to take into account the duality of time-frames: operational for business value, strategic for assets.

Control of processes and administration of artifacts can then be defined accordingly.

Time & Control: Processes

Architectures being by nature shared and persistent, their layers are meant to reflect different time-frames, from operational cycles to long-term assets:

  • At enterprise level the role of architectures is to integrate shared assets and align various objectives set along different time-frames. At this level it’s safe to assume some cross dependencies between processes, which would call for phased governance.
  • By contrast, business units are meant to be defined as self-governing entities pursuing specific objectives within their own time-frame. From a competitive perspective, market opportunities and competitors’ moves are best assumed unpredictable, and processes best governed by circumstances.

Enterprise Processes have to align business and engineering objectives

Processes can then be defined vertically (business or systems) as well as horizontally (enterprise architecture or application development), and governance set accordingly:

  • At enterprise level processes are phased: stakeholders and architects plan and manage the development and deployment of assets (organization and systems).
  • At business units level processes are lean and just-in-time: business analysts and software engineers design and develop applications supporting users’ needs as defined by users’ stories or use cases.

Models are then to be introduced to describe shared assets (organization and systems) across the enterprise. They may also support business analysis and software engineering.

Spaces & Administration: Models and Artifacts

Whatever the targets and terminologies, architecture is best defined as a relationship between concrete territories (processes and systems) and abstract maps (blueprints or models).

Carrying on with the four corners of governance square:

  • Business analysts are to set users’ narratives (concrete) in line with the business plots (blueprints) set by stakeholders.
  • Software engineers design applications (concrete) in line with systems’ functional architectures (blueprints).

Enterprise Architecture uses maps to manage territories

As for the overlapping of business and development time-frames, the direct mapping between concrete business and system corners (e.g through agile development) is to facilitate the governance of integrated actual and numeric flows across business and systems.

Conclusion: A Compass for Enterprise Architects

Behind turf perimeters and job descriptions, roles and responsibilities involved in enterprise architecture can be summarized by four drives:

  • Business stakeholders (top left): adjust organization as to maximize the versatility and plasticity of architectures.
  • Business analysts (bottom left): define business processes with regard to broader objectives and engineering efficiency.
  • Software engineers (bottom right): maximize the value for users and the quality of applications.
  • Systems architects (top right): dynamically align systems with regard to business models and engineering processes.

Orientation should come before job descriptions

Whereas roles and responsibilities will generally differ depending on enterprise environment, business, and culture, such a compass would ensure that the governance of enterprise architectures hinges on reliable pillars and is driven by clear principles.

Further Reading

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users’ behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements as well as users’ expectations for service are satisfied, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take these differences in scope and nature into account, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need for some “manual” teaching. That hurdle may be overcome by the deep learning ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
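A toy version of that mining step could look like the sketch below (Python; the log format and rarity threshold are assumptions): sliding windows of events are counted across recorded logs, and the rare sequences are flagged as candidate critical test cases:

```python
from collections import Counter

def event_windows(events: list[str], size: int = 3):
    """Yield sliding windows of consecutive events from a recorded log."""
    for i in range(len(events) - size + 1):
        yield tuple(events[i:i + size])

def candidate_test_cases(log: list[str], rarity: int = 2) -> list[tuple[str, ...]]:
    """Sequences seen at most `rarity` times are flagged as candidate critical cases."""
    counts = Counter(event_windows(log))
    return [seq for seq, n in counts.items() if n <= rarity]

# usage sketch on a hypothetical log
log = ["login", "browse", "add_to_cart", "pay", "login", "browse", "pay", "refund"]
print(candidate_test_cases(log))
```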

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed to the policy module in charge of defining test cases.

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning combining random simulation with automated reinforcement is what makes the specificity of deep learning.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users’ interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, that opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.
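Assuming a pytest-based suite (a sketch, not a prescribed setup; the markers and test names are hypothetical), the swim-lanes could be kept apart with markers so that each category of test cases can be run and reassessed on its own:

```python
import pytest

@pytest.mark.business_logic      # swim-lane: business rules, independent of systems
def test_claim_amount_is_capped():
    assert min(12_000, 10_000) == 10_000  # stand-in for an actual business rule

@pytest.mark.system_function     # swim-lane: supporting systems functionalities (shared)
def test_claim_service_persists_claim():
    ...  # would exercise the shared claim-handling function

@pytest.mark.user_interface      # swim-lane: users' interfaces (non-shared interactions)
def test_claim_form_rejects_empty_fields():
    ...  # would exercise the interaction, not the business rule
```

The markers would be registered in pytest.ini and each lane run separately (e.g. pytest -m business_logic), which is what keeps cross-lane feedback traceable.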

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

 

Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn stays with UCs as defined by Jacobson but refines them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow a seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant because mapping business rules to applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels”, BUCs are to remain nondescript practices. As for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined, e.g:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.

Broader enterprise and business considerations can then be documented depending on scope.
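
As a rough illustration of that point, a “business case” can be captured as an abstract use case from which concrete interactions are derived; the sketch below (Python, with hypothetical specializations) mirrors the “Handle Claim” example:

```python
from abc import ABC, abstractmethod

class HandleClaim(ABC):
    """Abstract business case: no actor, no trigger, no execution."""
    @abstractmethod
    def execute(self, claim_id: str) -> str: ...

class SubmitClaimOnline(HandleClaim):
    """Concrete use case: direct interaction with the Customer."""
    def execute(self, claim_id: str) -> str:
        return f"Customer submits claim {claim_id} through the portal"

class RecordClaimByAgent(HandleClaim):
    """Concrete use case: interaction mediated by an agent."""
    def execute(self, claim_id: str) -> str:
        return f"Agent records claim {claim_id} on behalf of the customer"

# the business objective is stated once and remains stable, while concrete
# use cases can change with channels, organization, or technology
for uc in (SubmitClaimOnline(), RecordClaimByAgent()):
    print(uc.execute("C-2017-042"))
```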

Further Reading

External Links

New Year: 2016 is the One to Learn

December 15, 2016

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.


Deep Mind Learning (J.Bosh)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.
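
A toy sketch (Python, illustrative figures only) of that contrast: an explicit, hand-written rule on one side, a one-neuron learner inferring a similar boundary from raw labelled samples on the other:

```python
# explicit knowledge: a hand-written symbolic rule
def is_large_claim(amount: float) -> bool:
    return amount > 0.5   # amounts normalized to [0, 1]

# implicit knowledge: a tiny perceptron inferring a similar boundary
# from raw labelled observations instead of a stated rule
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

samples = [0.1, 0.2, 0.8, 0.9, 1.0]    # observed claim amounts (normalized)
labels  = [0, 0, 1, 1, 1]              # observed outcomes, no rule stated
w, b = train_perceptron(samples, labels)
print([1 if w * x + b > 0 else 0 for x in samples])  # matches the labels
```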

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior differs critically depending on whether the interaction is with living species (humans or animals) or with machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

The astonishing performances of Google’s AlphaGo have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.
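
A schematic sketch (Python, purely illustrative data and function names) of that two-way flow, with mined regularities promoted to explicit concepts which in turn narrow the next mining pass:

```python
from collections import Counter

def mine(observations, focus=None):
    """Bottom-up pass: count recurring features, optionally narrowed by a symbolic focus."""
    counts = Counter()
    for record in observations:
        if focus is None or focus in record:
            counts.update(record)
    return counts

def promote(counts, threshold=3):
    """Top-down pass: turn strong regularities into explicit candidate concepts."""
    return [item for item, n in counts.items() if n >= threshold]

observations = [
    {"late_payment", "claim"}, {"late_payment", "churn"},
    {"claim", "churn"}, {"late_payment", "churn"}, {"late_payment"},
]
concepts = promote(mine(observations))          # e.g ['late_payment', 'churn']
narrowed = mine(observations, focus=concepts[0]) if concepts else {}
print(concepts, dict(narrowed))
```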

From consumers behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.

Further Reading

External Links

iStar and the Requirements Conundrum

December 12, 2016

Synopsis

Whenever software engineering problems are looked at, the blame is generally put on requirements, with each side of the business/system divide holding the other responsible.


iStar modeling puts the focus on communication (N. Rockwell)

The iStar approach tries to tackle the problem with a conceptual language focused on interactions between business processes and supporting systems.

Dilemma

Conceptual approaches to requirements try to overcome the dilemma between phased and agile development schemes: the former takes for granted that requirements can be fully and definitively set upfront; the latter takes a more pragmatic path and tries to reconcile business and system analysts through direct and continuous collaboration.

Setting apart frictions between specific methods, the benefits of agile principles and practices are now well recognized, contingent on the limits of agile scope. In short, agile development is at its best when requirements capture and analysis can be woven with development and tests. The question remains of what happens when requirements are to be dealt with separately.

iStar’s answer shares with agile a focus on collaboration, and it doesn’t take sides between business (e.g users’ stories) and systems (e.g use cases). Instead, the iStar modeling language is meant to support a conceptual description of interactions between business processes and supporting systems in terms of actors’ goals and commitments, and the associated dependencies.

Actors & Goals

The defining aspect of the iStar modeling approach is to replace one-sided perspectives (business or system) by a systemic one focused on the interactions between agents. The interactive part of a requirement will therefore comprise three basic items:

  • A primary actor triggers an interaction in order to meet some goal, e.g a car owner wants his car repaired.
  • Secondary actors may be involved during the ensuing exchanges, e.g body shop, appraiser, insurance company.
  • Functions to be performed: actual tasks, e.g appraise damages; qualifications (soft goals), e.g fair appraisal; and resources, e.g premium payment.

Actors & Dependencies

Dependencies Semantics

The factual description of interactions is both detailed and enriched by elements set within a broader scope (see the sketch after the list):

  • Goal (strong) dependency: assertions about actual states of affairs (object, activity, or expectations).
  • Soft-goal dependency: assertions about expected outcomes.
  • Task dependency: organizational, functional, or technical constraints pertaining to the execution of activities.
  • Resource dependency: constraints or conditions on the availability of inputs, actual or symbolic.
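
As a minimal sketch (Python; element and actor names are assumptions taken from the car-repair example, not iStar syntax), the four kinds of dependency can be captured as typed links between a depender and a dependee:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    GOAL = "goal"            # strong: actual state of affairs to be reached
    SOFT_GOAL = "softgoal"   # expected quality of the outcome
    TASK = "task"            # constrained way of carrying out an activity
    RESOURCE = "resource"    # input to be made available

@dataclass
class Dependency:
    depender: str    # actor relying on another
    dependee: str    # actor being relied upon
    kind: Kind
    label: str

model = [
    Dependency("Car Owner", "Body Shop", Kind.GOAL, "car repaired"),
    Dependency("Body Shop", "Appraiser", Kind.TASK, "appraise damages"),
    Dependency("Car Owner", "Appraiser", Kind.SOFT_GOAL, "fair appraisal"),
    Dependency("Insurance Co", "Car Owner", Kind.RESOURCE, "premium payment"),
]
for d in model:
    print(f"{d.depender} -[{d.kind.value}]-> {d.dependee}: {d.label}")
```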

It would be tempting to generalize the strong/soft distinction to dependencies so as to make use of modal logic, strong dependencies being associated with deontic rules, soft dependencies with alethic ones. That would give the distinction a sound logical basis, even if iStar leaves it unexploited.

iStar & Caminao

Since iStar modeling categories are directly aligned with UML Use Cases, they can easily be mapped to core Caminao stereotypes for actors, objects, events, and activities.


iStar with Caminao Stereotypes

Interestingly, the iStar strong/soft distinction could translate to the actual/symbolic one which constitutes the conceptual backbone of the Caminao paradigm.
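
Such a mapping could be rendered as a simple lookup; the Caminao stereotype names below are assumptions for illustration only, not the published vocabulary:

```python
# hypothetical correspondence between iStar element kinds and Caminao-like stereotypes
ISTAR_TO_CAMINAO = {
    "actor": "agent (actual)",
    "goal": "expected state of symbolic objects",
    "softgoal": "expectation on outcomes (symbolic)",
    "task": "activity",
    "resource": "object, actual or symbolic",
}

def translate(istar_kind: str) -> str:
    return ISTAR_TO_CAMINAO.get(istar_kind, "unmapped")

print(translate("softgoal"))
```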

Assessment

From the business perspective, iStar must be credited with two critical tenets:

  • The focus on interactions between agents is essential for business and system analysts to collaborate. Such benefits appear clearly for the definition of primary and secondary roles (aka actors), intents (business) and capabilities (supporting environments).
  • The distinction between strong and soft goals, even if the logical basis remains unexploited.

Yet, the system perspective lacks a functional dimension, e.g:

  • Architecture levels (enterprise and organization, systems and functionalities, platforms and technologies) are not taken into consideration, nor is the nature of capabilities, e.g strategic vs operational.
  • The strong/soft dependencies distinction is not explicitly associated with systems capabilities.

On the whole these pros and cons reflect iStar’s declared focus on conceptual modeling; as a corollary these flaws also mark the limits of conceptual modeling when it is detached from the symbolic description of supporting systems’ functionalities.

Nonetheless, as illustrated by the research quoted below, iStar remains a sound basis for the specification of interactions between users and systems, either as use cases or users’ stories.

Further Reading

External Links