Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one with the best fit to business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (see Topics Guide) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts kept open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.


Transcription & Deep Learning

September 17, 2017

Humans looking for reassurance against the encroachment of artificial brains should try YouTube subtitles: whatever Google’s track record in natural language processing, the way its automated scribe writes down what is said in the movies is essentially useless.

A blank sheet of paper was copied on a Xerox machine.
This copy was used to make a second copy.
The second to make a third one, and so on…
Each copy as it came out of the machine was re-used to make the next.
This was continued for one hundred times, producing a book of one hundred pages. (Ian Burn)

Experience directly points to the probable cause of failure: the usefulness of real-time transcription is not a linear function of accuracy, because every slip can be fatal, with no backup or second chance. It’s like walking a line: for all practical purposes a single misunderstanding can break the thread of understanding, without a chance of retrieve or reprieve.
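A back-of-the-envelope sketch (figures are illustrative only) makes the non-linearity plain: if any slip can break the thread, the odds of an intact transcript decay exponentially with length, so even a high per-word accuracy yields a mostly useless whole.

    # Illustrative sketch: probability that a transcript keeps its thread
    # intact, assuming independent per-word accuracy (hypothetical figures).
    def intact_probability(accuracy: float, words: int) -> float:
        return accuracy ** words

    for accuracy in (0.95, 0.98, 0.99):
        # a five-minute monologue runs to roughly 600 words
        print(f"{accuracy:.0%} per word -> {intact_probability(accuracy, 600):.2%} intact")

Even at 99% accuracy per word, fewer than one transcript in a hundred would survive 600 words unscathed.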

Contrary to Turing machines, listeners have no finite states; and contrary to the sequences of symbols on tapes, tales are told by weaving together semantic threads. It follows that stories are works in progress: readers can pause to review and consolidate meanings, but listeners have no choice other than to bet on whatever comes to their mind, hoping that the fabric of the story will carry them through.

So, whereas automated scribes can deep-learn from written texts and recorded conversations, there is no way to do the same from what listeners understand. That’s the beauty of storytelling: words may be written, but meanings are renewed each time the words are heard.

Further Reading

Why Virtual Reality (VR) is Late

July 25, 2017

Preamble

Whereas virtual reality (VR) has been expected to be the next breakthrough for IT human interfaces, the future seems to be late.

Detached Reality (N.Ghesquiere, G.Coddington)

Together with the cost of ownership, a primary cause mentioned for the lukewarm embrace is the nausea associated with the technology. Insofar as the nausea is provoked by a delay in perceptions, the consensus is that both obstacles should be overcome by continuous advances in computing power. But that optimistic assessment rests on the assumption that the nausea effect decreases uniformly.

Virtual vs Augmented

The recent extension of a traditional roller-coaster at SeaWorld Orlando illustrates the difference between virtual and augmented reality. Despite being marketed as virtual reality, the combination of an actual physical experience (roller-coaster) with virtual perceptions (3D video) clearly belongs to the augmented breed, and its success may shed some new light on the nausea effect.

Consciousness Cannot Wait

Awareness is what anchors living organisms to their environment. So, lest confusion be introduced between individuals’ experience and their biological clock, perceptions have to be immediate; and since that confusion is not cognitive but physical, it causes nausea. True to form, engineers’ initial answer has been to cut down elapsed time through additional computing power; that indeed brought a decline in the nausea effect, as well as an increase in the cost of ownership. Unfortunately, benefits and costs don’t tally: however small the remaining latency, nausea effects remain disproportionate.

Aesop’s Lesson

The way virtual and augmented reality deal with latency may help to understand the limitations of a minimizing strategy:

  • With virtual reality, latency occurs between users’ voluntary actions (e.g. moving their heads) and the responses generated by devices (e.g. headsets).
  • With augmented reality, latency occurs between actual perceptions and software-generated responses.

That’s basically the situation of Aesop’s “The Tortoise and the Hare” fable: in the physical realm the hare (aka computer) is either behind or ahead of the tortoise (the user), which means that some latency (positive or negative) is unavoidable.

That lesson applies to virtual reality because both terms are set in actuality, which means that nausea can be minimized but not wholly eliminated. But that’s not the case for augmented reality because the second term is a floating variable that can be logically adjusted.

The SeaWorld roller-coaster takes full advantage of this point by directly tying augmented stimuli to actual ones: augmented reality scripts are aligned with roller-coaster episodes and their execution synchronized through special sensors. Whatever the remaining latency, it is of a different nature: instead of having to synchronize their (conscious) actions with the environment’s feedback, users only have to consolidate external stimuli, a more mundane task which doesn’t involve consciousness.

Further Reading

External Links

The Agility of Words

July 9, 2017

Preamble

Oral cultures come with implicit codes for the repetition of words and sentences, making room for some literary hide-and-seek between the storyteller and his audience.

The Agility of Words (B. Flanagan)

Could such narrative schemes be employed for users’ stories, to open out the dialog between users (the storytellers) and business analysts (the listeners)?

Open Storytelling

To begin with fiction, authors are meant to tell stories for readers ready to believe them at least while they are reading.

For young readers yet unable to suspend their disbelief, laser-disc games of the last century already gave post-toddlers a free hand to play with narratives.

But when the same scheme has been tried with grown-ups it has fizzled out: what would be the point of buying a story if you have to make it up yourself? The answer of agile business analysts is that users’ stories may be more pliable than budgets.

Tell Once, Tell Twice, Think Again

That’s what has just happened to “Hamlet on the Holodeck: The Future of Narrative in Cyberspace”, first published by Janet H. Murray twenty years ago with qualified ado, and now making a new debut, unedited yet clever as ever. That suggests both an observation and an interrogation.

For one, and notwithstanding readers’ consideration, a good story, fiction or otherwise, remains a good story, and may be better appreciated in different circumstances. Then, considering the weighty mutation of circumstances since the book’s first appearance, the interrogation is about probable cause: is the origin of the rebirth to be looked for in technological advances, in the minds of that specific (non-fiction) book’s readers, or in the readiness of (fiction) book readers to collaborate in story building?

Alternates in Narrative

As probable cause for new narrative ways, technology obviously comes first due to its means to change the relationship between readers and stories: breakthroughs in artificial intelligence, deep-learning, and computational linguistics have opened paths barely conceivable twenty years ago.

As a collateral effect of the technological revolution, opportunity may explain the renewed interest of Janet Murray’s likely readers: issues that were hardly broached before the initial publishing are now routinely mooted in the literati cognosphere.

Finally, on a broader social perspective, changes may have altered the motivation of fiction aficionados, bringing new relevancy to Janet Murray’s intuitions: as farcically illustrated by the uncritical audiences for alternative facts, the perception of reality may have been transformed by the utter sway of social networks.

Back to a literary perspective, evidence seems to point to the status of stories with regard to reality:

  • When embedded in games, stories don’t pretend to anything. On that ground changes are driven by players’ decisions regarding events or characters’ options that only affect the narratives of a plot defined upfront.
  • When set as fictions, stories, however preposterous, are meant to stand on their own ground. The meanings given to events and options are constitutive of the plot, and readers’ decisions are driven by their understanding of facts and behaviors.

So, Google’s AlphaGo may have overturned the grounds for the first category, but stories are not games and the only variants that count are the ones affecting understanding. More so for stories that use fictional realities to tell what should be.

Heed & Lead in Users’ Stories

Users’ stories are the agile answer to the challenge of elusive requirements. Definitely a cornerstone of the agile approach to software engineering, users’ stories are meant to deal with the instability of requirements, in contours as well as detours.

With regard to contours, users’ stories explore the space of requirements through successive iterations rooted into clearly identified users’ needs. Whereas the backbone (the plot) is set by stakeholders (the authors), the scope doesn’t have to be revealed upfront but can be progressively discovered through interactions between users (the storytellers) and analysts (the listeners).

But detours are where alternates in narratives may really prove themselves, by helping to adjust users’ needs (the narratives) to business objectives (the plot). As a consequence, changes suggested by analysts should not be limited to users’ options and ergonomics but may also concern the meaning of facts and behaviors. Along that reasoning, users’ stories would use the agility of words to align the meanings of new business applications with the ones set by business functionalities already supported by systems.

Further Reading

External Links

Unified Architecture Framework Profile (UAFP): Lost in Translation ?

July 2, 2017

Synopsis

The intent of Unified Architecture Framework Profile (UAFP) is to “provide a Domain Meta-model usable by non UML/SysML tool vendors who may wish to implement the UAF within their own tool and metalanguage.”

Detached Architecture (Víctor Enrich)

But a meta-model trying to federate (instead of bypassing) the languages of tools providers has to climb up the abstraction scale above any domain of concerns, in that case systems architectures. Without direct consideration of the domain, the missing semantic content has to be reintroduced through stereotypes.

Problems with that scheme appear at two critical junctures:

  • Between languages and meta-models, and the way semantics are introduced.
  • Between environments and systems, and the way abstractions are defined.

Caminao’s modeling paradigm is used to illustrate the alternative strategy, namely the direct stereotyping of systems architectures semantics.

Languages vs Stereotypes

Meta-Models are models of models: just like artifacts of the latter represent sets of instances from targeted domains, artifacts of the former represent sets of symbolic artifacts from the latter. So while set higher on the abstraction scale, meta-models still reflect the domain of concerns.

Meta-models take a higher view of domains; meta-languages don’t.

Things are more complex for languages, because linguistic constructs (syntax and semantics) and pragmatics are meant to be defined independently of domains of discourse. Taking a simple example from the model above, it contains two kinds of relationships:

  • Linguistic constructs: represents, between actual items and their symbolic counterparts; and inherits, between symbolic descriptions.
  • Domain specific: played by, operates, and supervises.

While meta-models can take into account both categories, that’s not the case for languages, which only consider linguistic constructs and mechanisms. Stereotypes often appear as a painless way to span the semantic fault between what meta-models have to do and what languages are used to do; but that is misguided, because mixing domain-specific semantics with language constructs can only breed confusion.
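A minimal sketch (Python, with the relationship names from the example above) of the distinction at stake: language constructs are fixed and domain-agnostic, while domain-specific relationships carry the semantics of what is being modeled.

    # Sketch: keeping language constructs apart from domain semantics.
    LANGUAGE_CONSTRUCTS = {"represents", "inherits"}                # fixed by the language
    DOMAIN_RELATIONSHIPS = {"played by", "operates", "supervises"}  # set by the domain

    def classify(relationship: str) -> str:
        if relationship in LANGUAGE_CONSTRUCTS:
            return "linguistic construct: language level"
        if relationship in DOMAIN_RELATIONSHIPS:
            return "domain specific: meta-model level"
        return "undefined: to be given explicit semantics before use"

    print(classify("inherits"))   # language level
    print(classify("operates"))   # meta-model level

Mixing the two sets within a single stereotype mechanism is precisely what breeds the confusion described above.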

Stereotypes & Semantics

If profiles and stereotypes are meant to refine semantics along domain specifics, trying to conciliate UML/SysML languages and non UML/SysML models puts UAFP in a lopsided position by looking the other way, i.e. towards a one-fits-all meta-language instead of systems architecture semantics. Its way out of this conundrum is to combine stereotypes with UML constraints, as can be illustrated with PropertySet:

UAFP for PropertySet (italics are for abstract)

Behind the mixing of meta-modeling levels (class, classifier, meta-class, stereotype, meta-constraint) and the jumble of joint modeling concerns (property, measurement, condition), the PropertySet description suggests the overlapping of two different kinds of semantics, one looking at objects and behaviors identified in environments (e.g. asset, capability, resource), the other focused on systems components (property, condition, measurement). But using stereotypes indifferently for both kinds of semantics has consequences.

Stereotypes, while being the basic UML extension mechanism, come without much formalism and can be applied extensively. As a corollary, their semantics must be clearly defined in line with the context of their use, in particular for meta-languages topping different contexts.

PropertySet, for example, is defined as an abstract element equivalent to a data type, simple or structured, a straightforward semantics that can be applied consistently across contexts, domains, or languages.

That’s not the case for ActualPropertySet, which is defined as an InstanceSpecification for a “set or collection of actual properties”. But properties defined for domains (as opposed to languages) have no instances of their own: they can only occur as concrete states of objects, behaviors, or expectations, or as abstract ranges in conditions or constraints. And semantic ambiguities are compounded when inheritance is indifferently applied between a motley of stereotypes.

Properties epitomize the problems brought about by confusing language and domain stereotypes and point to a solution.

To begin with syntax, stereotypes are redundant because properties can be described with well-known language constructs.

As for semantics, stereotyped properties should meet clearly defined purposes; as far as systems architectures are concerned, that would be the mapping to architecture capabilities (a sketch follows the list below):

Property must be stereotyped with regard to induced architecture capabilities.

  • Properties that can be directly and immediately processed, symbolic (literal) or not (binary objects).
  • Properties whose processing depends on external resources, symbolic (reference) or not (numeric values).
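A minimal sketch (hypothetical names) of the resulting taxonomy, crossing the two distinctions above:

    from dataclasses import dataclass

    # Sketch: property stereotypes mapped to architecture capabilities.
    # Two distinctions are crossed: symbolic or not, and whether processing
    # is self-contained or depends on an external resource.
    @dataclass(frozen=True)
    class PropertyStereotype:
        name: str
        symbolic: bool         # literal/reference vs binary object/numeric value
        self_contained: bool   # directly and immediately processed, or not

    STEREOTYPES = (
        PropertyStereotype("literal", symbolic=True, self_contained=True),
        PropertyStereotype("binaryObject", symbolic=False, self_contained=True),
        PropertyStereotype("reference", symbolic=True, self_contained=False),
        PropertyStereotype("numericValue", symbolic=False, self_contained=False),
    )

    def induced_capability(p: PropertyStereotype) -> str:
        return "local processing" if p.self_contained else "external resource"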

Such stereotypes could be safely used at language level due to the homogeneity of property semantics. That’s not the case for objects and behaviors.

Languages Abstractions & Symbolic Representations

The confusion between language and domain semantics mirrors the one between enterprise and systems, as can be illustrated by UAFP’s understanding of abstraction.

In the context of programming languages, isAbstract applies to descriptions that are not meant to be instantiated: for UAFP, “PhysicalResource” isAbstract because it cannot occur except as “NaturalResource” or “ResourceArtifact”, neither of which isAbstract.

“isAbstract” has no bearing on horses and carts, only on the meaning of the class PhysicalResource.

Despite appearances, it must be remembered that such semantics have nothing to do with the nature of resources, only with what can be said about them. In any case the distinction is irrelevant as long as the only semantics considered are confined to specification languages, which is the purpose of the UAFP.
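The programming-language meaning of isAbstract can be restated in a few lines of Python (a sketch of the semantics only, not of UAFP artifacts): the abstract class constrains descriptions, not resources.

    from abc import ABC, abstractmethod

    # Sketch: "isAbstract" in the specification sense. PhysicalResource
    # organizes descriptions; only its concrete specializations can occur.
    class PhysicalResource(ABC):
        @abstractmethod
        def description(self) -> str: ...

    class NaturalResource(PhysicalResource):
        def description(self) -> str:
            return "occurs in the environment, e.g. a horse"

    class ResourceArtifact(PhysicalResource):
        def description(self) -> str:
            return "is manufactured, e.g. a cart"

    horse = NaturalResource()    # fine: concrete description
    # PhysicalResource()         # TypeError: abstract, nothing to instantiate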

As that’s not true for enterprise architects, confusion is bound to arise when the modeling paradigm is extended to include environments and their association with systems. Then, not only are two kinds of instances (and therefore abstractions) to be described, but the relationship between external and internal instances is to determine systems architecture capabilities. Extending the simple example above:

  • Overlooking the distinction between active and passive physical resources prevents a clear and reliable mapping to architecture technical capabilities.
  • Organizational resource lumps together collective (organization), individual and physical (person), individual and organizational (role), and symbolic (responsibility) resources. But these distinctions have direct consequences for architecture functional capabilities.

Abstraction & Symbolic representation

Hence the importance of the distinction between domain and language semantics, the former for the capabilities of the systems under consideration, the latter for the capabilities of the specification languages.

Systems Never Walk Alone

Profiles are supposed to be handy, reliable, and effective guides for the management of specific domains, in that case the modeling of enterprise architectures. As it happens, the UAF profile seems to set out the other way, forsaking architects’ concerns for tools providers’ ones; that can be seen as a lose-lose venture because:

  • There isn’t much for enterprise architects along that path.
  • Tools interoperability would be better served by a parser focused on languages semantics independently of domain specifics.

Hopefully, new thinking about architecture frameworks (e.g. DoDAF) tends to restyle them as EA profiles, which may help to reinstate basic requirements:

  • Explicit modeling of environment, enterprise, and systems.
  • Clear distinction between domain (enterprise and systems architecture) and languages.
  • Unambiguous stereotypes with clear purposes.

A simple profile for enterprise architecture

On a broader perspective such a profile would help with the alignment of purposes (enterprise architects vs tools providers), scope (enterprise vs systems), and languages (modeling vs programming).

Further Reading


External Links

EA’s Merry-go-round

June 14, 2017

Preamble

All too often EA is planned as a big bang project to be carried out step by step until completion. That understanding is misguided as it confuses EA with IT systems and implies that enterprises could change their architectures as if they were apparel.

EA is a never-ending endeavor (Robert Doisneau)

But enterprise architecture is part and parcel of enterprises, a combination of culture, organization, and systems; whatever the changes, they must keep the continuity, integrity, and consistency of the whole.

Capabilities

Compared to usual projects, architectural ones are not meant to address specific business needs but architecture capabilities that may or may not be specific to business functions. Taking a leaf from the Zachman Framework, those capabilities can be organized around five pillars supporting enterprise, systems, and platform architectures:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

These capabilities are set across architecture layers and support business, engineering, and operational processes.

Enterprise architecture capabilities

Enterprise architects are to continuously assess and improve these capabilities with regard to current weaknesses (organizational bottlenecks, technical debt) or future developments (new business, M&A, new technologies).
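The five pillars and three layers above amount to a simple grid; a minimal sketch (entries taken from the list) of the lookup structure:

    # Sketch: five pillars set across three architecture layers.
    LAYERS = ("enterprise", "systems", "platform")
    CAPABILITIES = {
        "Who":   ("enterprise roles", "system users", "platform entry points"),
        "What":  ("business objects", "symbolic representations", "objects implementation"),
        "How":   ("business logic", "system applications", "software components"),
        "When":  ("processes synchronization", "communication architecture", "communication mechanisms"),
        "Where": ("business sites", "systems locations", "platform resources"),
    }

    def capability(pillar: str, layer: str) -> str:
        return CAPABILITIES[pillar][LAYERS.index(layer)]

    print(capability("What", "systems"))   # -> symbolic representations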

Work Units

Given the increased dependencies between business, engineering, and operations, defining EA workflows in terms of work units built bottom-up from capabilities provides clear benefits with regard to EA versatility and plasticity.

Contrary to top-down (aka activity based) ones, bottom-up schemes don’t rely on one-fits-all procedures; as a consequence work units can be directly defined by capabilities and therefore mapped to engineering workshops:

Iterative development of architecture capabilities across workshops

Moreover, dependency constraints can be directly defined as declarative assertions attached to capabilities and managed dynamically instead of having to be hard-wired into phased processes.
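A minimal sketch (hypothetical API) of such declarative assertions: each constraint states what must already be delivered, and scheduling is decided dynamically rather than frozen into phases.

    from typing import Callable

    # Sketch: dependency constraints as declarative assertions attached
    # to capabilities, checked when work units are scheduled.
    delivered: set[str] = set()

    CONSTRAINTS: dict[str, Callable[[set[str]], bool]] = {
        "system users": lambda done: "enterprise roles" in done,
        "platform entry points": lambda done: "system users" in done,
    }

    def can_schedule(capability: str) -> bool:
        check = CONSTRAINTS.get(capability)
        return check is None or check(delivered)

    delivered.add("enterprise roles")
    print(can_schedule("system users"))          # True: its assertion holds
    print(can_schedule("platform entry points")) # False: still waiting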

That approach is to ensure two agile conditions critical for the development of architectural features:

  • Shared ownership: lest the whole enterprise be paralyzed by decision-making procedures, work units must be carried out under the sole responsibility of project teams.
  • Continuous delivery: architecture-driven developments are by nature transverse, but the delivery of building blocks cannot be put off pending the agreement of all parties concerned; instead it should be decoupled from integration.

Enterprise architecture projects could then be organized as a merry-go-round of capabilities-based work units to be set up, developed, and delivered according to needs and time-frames.

Time Frames

Enterprise architecture is about governance more than engineering. As such it has to ensure continuity and consistency between business objectives and strategies on one side, engineering resources and projects on the other side.

Assuming that capability-based work units will do the job for internal dependencies (application contents and engineering), the problem is to deal with external ones (business objectives and enterprise organization) without introducing phased processes. Beyond differences in monikers, such dependencies can generally be classified along three reasoned categories:

  • Operational: whatever can be observed and acted upon within a given envelope of assets and capabilities.
  • Tactical: whatever can be observed and acted upon by adjusting assets, resources and organization without altering the business plans and anticipations.
  • Strategic: decisions regarding assets, resources and organization contingent on anticipations regarding business environments.

The role of enterprise architects will then be to manage the deployment of updated architecture capabilities according to their respective time-frames.

Portfolio Management

As noted before, EA workflows by nature can seldom be carried out in isolation as they are meant to deal with functional features across business domains. Instead, a portfolio of architecture (as opposed to development) work units should be managed according to their time-frame, the nature of their objective, and the kind of models to be used:

EA portfolio

  • Strategic features affect the concepts defining business objectives and processes. The corresponding business objects and processes are primarily defined with descriptive models; changes will have cascading effects for engineering and operations.
  • Tactical features affect the definition of artifacts, logical or physical. The corresponding engineering processes are primarily defined with prescriptive models; changes are to affect operational features but not the strategic ones.
  • Operational features affect the deployment of resources, logical or physical. The corresponding processes are primarily defined with predictive models derived from descriptive ones; changes are not meant to affect strategic or tactical features.

Architectural projects could then be managed as a dynamic backlog of self-contained work units continuously added (a) or delivered (b).

EA projects: a merry-go-round of work units.
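A sketch (hypothetical structure) of such a dynamic backlog, with self-contained work units tagged by time-frame and added (a) or delivered (b) independently of one another:

    from collections import deque
    from dataclasses import dataclass

    # Sketch: a merry-go-round of capability-based work units.
    @dataclass
    class WorkUnit:
        capability: str
        time_frame: str   # "strategic", "tactical", or "operational"

    backlog: deque[WorkUnit] = deque()

    def add(unit: WorkUnit) -> None:      # (a) continuously added
        backlog.append(unit)

    def deliver() -> WorkUnit:            # (b) delivered when ready
        return backlog.popleft()

    add(WorkUnit("business objects", "strategic"))
    add(WorkUnit("software components", "operational"))
    print(deliver().capability)           # -> business objects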

That would bring together agile development processes and enterprise architecture.

Further Reading

Views, Models, & Architectures

May 27, 2017

Preamble

Views can take different meanings, from windows opening on specific data contexts (e.g. DB relational theory), to assortments of diagrams dedicated to particular concerns (e.g. UML).


Deconstructing the Universe along Contexts and Concerns (Depero Fortunato)

Models, for their part, have also been understood as views, on DB contents as well as on systems’ architecture and components, the difference being the focus put on engineering. Due to their association with phased processes, models have been relegated to the back-burner by agile approaches; yet they may resurface in terms of granularity with model-based engineering frameworks.

Views & Architectures

As far as systems engineering is concerned, understandings of views usually refer to Philippe Kruchten’s “4+1” View Model of Software Architecture:

  • Logical view: design of software artifacts.
  • Process view: captures the concurrency and synchronization aspects.
  • Physical view: describes the mapping(s) of software artifacts onto hardware.
  • Development view: describes the static organization of software artifacts in development environments.

A fifth is added for use cases describing the interactions between systems and business environments.

Whereas these views have been originally defined with regard to UML diagrams, they may stand on their own meanings and merits, and be assessed or amended as such.

Apart from labeling differences, there isn’t much to argue about use cases (for requirements), process (for operations), and physical (for deployment) views; each can be directly associated to well identified parts of systems engineering that are to be carried out independently of organizations, architectures or methods.

Logical and development views raise more questions because they imply a distinction between design and implementation. That implicit assumption induces two kinds of limitations:

  • They introduce a strong bias toward phased approaches, in contrast to agile development models that combine requirements, development and acceptance into iterations.
  • They classify development processes with regard to predefined activities, overlooking a more critical taxonomy based on objectives, architectures and life-cycles: user-driven and short-term (applications) vs data-based and long-term (business functions).

These flaws can be corrected if logical and development views are redefined respectively as functional and application views, the former targeting business objects and functions, the latter business logic and users’ interfaces.

Architecture based views

That makes views congruent with architecture levels and consequently with engineering workshops. More importantly, since workshops make possible the alignment of products with work units, they are a much better fit for model-based engineering and a shift from a procedural to a declarative paradigm.

Model-based Systems Engineering & Granularity

At least in theory, model-based systems engineering (MBSE) should free developers from one-fits-all procedural schemes and support iterative as well as declarative approaches. In practice that would require matching tasks with outcomes, which could be done if responsibilities for the former can be aligned with the granularity of the models describing the latter.

With coarse-grained phased schemes like MDA’s CIM/PIM/PSM (a), dependencies between tasks would have to be managed with regard to a significantly finer artifacts’ granularity.

Managing changes at architecture (a) or application (b) level.

For agile schemes, assuming conditions on shared ownership and continuous deliveries are met, projects would put locks on “models” at both ends (users’ stories and deliveries) of development cycles (b), with backlogs items defining engineering granularity.

The backlog mechanism can be used to manage customized granularity and hierarchical dependencies across model layers.

Along that reasoning it would be possible to unify the management of changes in engineered artifacts at the appropriate level of granularity: static and explicit using milestones (phased), dynamic and implicit using backlogs (agile).
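A sketch (hypothetical API) of that unified management: the same repository accepts changes scoped either by explicit milestones (phased) or by backlog items (agile).

    from dataclasses import dataclass, field

    # Sketch: one repository of engineered artifacts, two granularities
    # of change management (milestones vs backlog items).
    @dataclass
    class Artifact:
        name: str
        locked_by: str | None = None   # milestone or backlog item

    @dataclass
    class Repository:
        artifacts: dict[str, Artifact] = field(default_factory=dict)

        def lock(self, name: str, owner: str) -> None:
            self.artifacts[name].locked_by = owner

        def change(self, name: str, owner: str) -> bool:
            # a change goes through only within the milestone or backlog
            # item holding the lock; dependencies stay declarative
            return self.artifacts[name].locked_by == owner

    repo = Repository({"users story #42": Artifact("users story #42")})
    repo.lock("users story #42", "sprint-7")
    print(repo.change("users story #42", "sprint-7"))     # True
    print(repo.change("users story #42", "milestone-2"))  # False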


Fine-grained model-based frameworks could support phased as well as agile development solutions.

Such a declarative repository would greatly enhance exchanges and integration across projects and help to align heterogeneous processes independently of the methodologies used.

Further Reading

External Links

Beans must be Counted, one way And the other

May 2, 2017

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality, and unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play safe and keep their respective assessments on the opposed banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g. users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics uses prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance, so a tally of software beans should not be an option.

Further Reading

Squaring EA Governance

April 18, 2017

Preamble

Enterprise governance has to face combined changes in the way business times and spaces are to be taken into account. On one hand social networks put well-thought-out market segments and well planned campaigns at the mercy of consumers’ weekly whims. On the other hand traditional fences between environments and IT systems are crumbling under combined markets and technological waves.

Squaring Governance in Space and Time (Jasenka Tucan-Vaillant)

So, despite (or because of) the exponential ability of intelligent systems to learn from circumstances, enterprise governance is not to cope with such dynamic complexities without a reliable compass set with regard to key primary factors: time-frames of concerns; control of processes; administration of artifacts.

Concerns & Time-frames

Confronted with massive and continuous waves of stochastic data flows, the priority is to position external events and decision-making with regard to business and assets time-frames:

  • Business value is to be driven by market opportunities which cannot be coerced into predefined fixed time-frames.
  • Assets management is governed by continuity and consistency constraints on enterprise identity, objectives, and investments along time.

Governance Square and its four corners

Enterprises, once understood as standalone entities, must now be redefined as living organisms in continuous adaptation to their environment. Governance schemes must therefore be broadened to business environments and layered so as to take into account the duality of time-frames: operational for business value, strategic for assets.

Control of processes and administration of artifacts can then be defined accordingly.

Time & Control: Processes

Architectures being by nature shared and persistent, their layers are meant to reflect different time-frames, from operational cycles to long-term assets:

  • At enterprise level the role of architectures is to integrate shared assets and align various objectives set along different time-frames. At this level it’s safe to assume some cross dependencies between processes, which would call for phased governance.
  • By contrast, business units are meant to be defined as self-governing entities pursuing specific objectives within their own time-frame. From a competitive perspective, market opportunities and competitors’ moves are best assumed unpredictable, and processes best governed by circumstances.

Enterprise Processes have to align business and engineering objectives

Processes can then be defined vertically (business or systems) as well as horizontally (enterprise architecture or application development), and governance set accordingly:

  • At enterprise level processes are phased: stakeholders and architects plan and manage the development and deployment of assets (organization and systems).
  • At business unit level processes are lean and just-in-time: business analysts and software engineers design and develop applications supporting users’ needs as defined by users’ stories or use cases.

Models are then to be introduced to describe shared assets (organization and systems) across the enterprise. They may also support business analysis and software engineering.

Spaces & Administration: Models and Artifacts

Whatever the targets and terminologies, architecture is best defined as a relationship between concrete territories (processes and systems) and abstract maps (blueprints or models).

Carrying on with the four corners of governance square:

  • Business analysts are to set users’ narratives (concrete) in line with the business plots (blueprints) set by stakeholders.
  • Software engineers are to design applications (concrete) in line with systems functional architectures (blueprints).

Enterprise Architecture uses maps to manage territories

As for the overlapping of business and development time-frames, the direct mapping between the concrete business and system corners (e.g. through agile development) is to facilitate the governance of integrated actual and numeric flows across business and systems.

Conclusion: A Compass for Enterprise Architects

Behind turf perimeters and job descriptions, the roles and responsibilities involved in enterprise architecture can be summarized by four drives:

  • Business stakeholders (top left): adjust organization as to maximize the versatility and plasticity of architectures.
  • Business analysts (bottom left): define business processes with regard to broader objectives and engineering efficiency.
  • Software engineers (bottom right): maximize the value for users and the quality of applications.
  • Systems architects (top right): dynamically align systems with regard to business models and engineering processes.

Orientation should come before job descriptions

Whereas roles and responsibilities will generally differ depending on enterprise environment, business, and culture, such a compass would ensure that the governance of enterprise architectures hinges on reliable pillars and is driven by clear principles.

Further Reading

Deep Blind Testing

March 21, 2017

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need to set things apart depending on what can be known and how, and to build the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements, as well as users’ expectations for service, are satisfied, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences in scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter, together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by the dependency on explicit knowledge, and consequently by the need of some “manual” teaching. That hurdle may be overcome by deep learning’s ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
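As a deliberately naive stand-in for deep learning, the following sketch (hypothetical log format, plain frequency counting) shows the kind of mining intended: rare event sequences in recorded logs single out the critical circumstances worth turning into test cases.

    from collections import Counter

    # Sketch: flag rare event sequences in recorded logs as candidate
    # critical test cases. One event name per entry (hypothetical format).
    def critical_sequences(events: list[str], window: int = 3, threshold: int = 2):
        counts = Counter(
            tuple(events[i:i + window])
            for i in range(len(events) - window + 1)
        )
        # sequences seen fewer than `threshold` times are candidates
        return [seq for seq, n in counts.items() if n < threshold]

    log = ["login", "query", "logout", "login", "query", "timeout", "retry"]
    for seq in critical_sequences(log):
        print(" -> ".join(seq))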

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives and improved, before feeding the policy module in charge of defining test cases, as sketched below.
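A sketch of that loop (plain tag matching standing in for a learned model; all names hypothetical): observations mined from the environment are scored against business objectives before feeding the policy module.

    # Sketch: exploration loop feeding the test-case policy module.
    def relevancy(observation: dict, objectives: set[str]) -> float:
        hits = sum(1 for tag in observation["tags"] if tag in objectives)
        return hits / max(len(observation["tags"]), 1)

    def explore(observations: list[dict], objectives: set[str], policy: list[dict]) -> list[dict]:
        for obs in observations:
            if relevancy(obs, objectives) > 0.5:
                policy.append({"test_case": obs["event"], "source": "exploration"})
        return policy

    observations = [
        {"event": "bulk refund", "tags": {"payment", "refund"}},
        {"event": "holiday spike", "tags": {"traffic"}},
    ]
    print(explore(observations, {"payment", "refund"}, []))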

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning combining random simulation with automated reinforcement is what makes the specificity of deep learning.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users’ interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, the door is open to feedback, e.g. unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

 

Focus: Business Cases for Use Cases

February 27, 2017

Preamble

As originally defined by Ivar Jacobson, use cases (UCs) are focused on the interactions between users and systems. The question is how to associate UC requirements, by nature local, concrete, and changing, with broader business objectives set along different time-frames.


Cases, Kites, and Clouds (Sigmar Polke)

Backing Use Cases

On the system side UCs can be neatly traced through the other UML diagrams for classes, activities, sequence, and states. The task is more challenging on the business side due to the diversity of concerns to be defined with other languages like Business Process Modeling Notation (BPMN).

Use cases at the hub of UML diagrams

Use Cases contexts

Broadly speaking, tracing use cases to their business environments has been undertaken with two approaches:

  • Differentiated use cases, as epitomized by Alistair Cockburn’s seminal book (Readings).
  • Business use cases, to be introduced beside standard (often renamed as “system”) use cases.

As it appears, whereas Cockburn stays with UCs as defined by Jacobson but refines them to deal specifically with generalization, scaling, and extension, the second approach introduces a somewhat ill-defined concept without setting apart the different concerns.

Differentiated Use Cases

Being neatly defined by purposes (aka goals), Cockburn’s levels provide a good starting point:

  • Users: sea level (blue).
  • Summary: sky, cloud and kite (white).
  • Functions: underwater, fish and clam (indigo).

As such they can be associated with specific concerns:

Cockburn’s differentiated use cases

  • Blue level UCs are concrete; that’s where interactions are identified with regard to actual agents, place, and time.
  • White level UCs are abstract and cannot be instantiated; cloud ones are shared across business processes, kite ones are specific.
  • Indigo level UCs are concrete but not necessarily the primary source of instantiation; fish ones may or may not be associated with business functions supported by systems (grey), e.g. services; clam ones are supposed to be directly implemented by system operations.

As illustrated by the example below, use cases set at enterprise or business unit level can also be concrete:

Example with actors for users and legacy systems (bold arrows for primary interactions)

UC abstraction connectors can then be used to define higher business objectives.

Business “Use” Cases

Compared to Cockburn’s efficient (no new concept) and clear (qualitative distinctions) scheme, the business use case alternative adds to the complexity with a fuzzy new concept based on quantitative distinctions like abstraction levels (lower for use cases, higher for business use cases) or granularity (respectively fine- and coarse-grained).

At first sight, using scales instead of concepts may allow a seamless modeling with the same notations and tools; but arguing for unified modeling goes against the introduction of a new concept. More critically, that seamless approach seems to overlook the semantic gap between business and system modeling languages. Instead of three-lane blacktops set along differentiated use cases, the alignment of business and system concerns is meant to be achieved through a medley of stereotypes, templates, and profiles supporting the transformation of BPMN models into UML ones.

But as far as business use cases are concerned, transformation schemes would come with serious drawbacks because the objective would not be to generate use cases from their business parent but to dynamically maintain and align business and users concerns. That brings back the question of the purpose of business use cases:

  • Are BUCs targeting business logic? That would be redundant, because mapping business rules to applications can already be achieved through UML or BPMN diagrams.
  • Are BUCs targeting business objectives? Without a conceptual definition of “high levels”, BUCs are to remain nondescript practices. As for the “lower levels” of business objectives, users’ stories already offer a better defined and accepted solution.

If that makes the concept of BUC irrelevant as well as confusing, the underlying issue of anchoring UCs to broader business objectives still remains.

Conclusion: Business Case for Use Cases

With the purposes clearly identified, the debate about BUC appears as a diversion: the key issue is to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases. So, instead of blurring the semantics of interactions by adding a business qualifier to the concept of use case, “business cases” would be better documented with the standard UC constructs for abstraction. Taking Cockburn’s example:

Abstract use cases: no actor (19), no trigger (20), no execution (21)

Different levels of abstraction can be combined (see the sketch after these examples), e.g.:

  • Business rules at enterprise level: “Handle Claim” (19) is focused on claims independently of actual use cases.
  • Interactions at process level: “Handle Claim” (21) is focused on interactions with Customer independently of claims’ details.
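A loose sketch (illustrative classes) of what the abstraction construct provides: abstract use cases carry business rules or interaction patterns, while only concrete ones bind actors, triggers, and executions.

    from abc import ABC, abstractmethod

    # Sketch: an abstract use case has no actor, trigger, or execution
    # of its own; only concrete use cases can be instantiated.
    class HandleClaim(ABC):
        @abstractmethod
        def execute(self, claim_id: str) -> None: ...

    class HandleClaimOnline(HandleClaim):
        def execute(self, claim_id: str) -> None:
            print(f"Customer submits claim {claim_id} through the portal")

    HandleClaimOnline().execute("C-123")   # concrete: runs
    # HandleClaim()                        # abstract: cannot be instantiated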

Broader enterprise and business considerations can then be documented depending on scope.

Further Reading

External Links