Modeling Symbolic Representations

March 16, 2010

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (Map of Posts) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

AlphaGo: From Intuitive Learning to Holistic Knowledge

February 1, 2016

Brawn & Brain

The recent success of Google’s AlphaGo against Europe’s top player at the game of Go is widely recognized as a major breakthrough for Artificial Intelligence (AI), both because of the undertaking (Go is exponentially more complex than Chess) and the timing (it occurred much sooner than expected). As it happened, the leap can be credited as much to brawn as to brain, the former with a massive increase in computing power, the latter with an innovative combination of established algorithms.

Brawny Contest around Aesthetic Game (Kunisada)

That breakthrough and the way it has been achieved may seem to support two opposite perspectives about the future of AI: either the current conceptual framework is the best option, with brawny machines becoming brainier and, sooner or later, leaping over the qualitative gap with their human makers; or it’s a quantitative delusion that could drive brawnier machines and helpless humans down into that very same hole.

Could AlphaGo and its DeepMind makers point to a holistic bypass around that dilemma ?

Taxonomy of Sources

Taking a leaf from Spinoza, one could begin by considering the categories of knowledge with regard to sources:

  1. The first category is achieved through our senses (views, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). This category is by nature prone to circumstances and prejudices.
  2. The second is built through reasoning, i.e. the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee of congruence with actual reality.
  3. The third is attained through philosophy which is by essence meant to bring together perceptions, intuitions, and symbolic representations.

Whereas there can’t be much controversy about the first two, the third category leaves room for a wide range of philosophical tenets, from religion to science, collective ideologies, or spiritual transcendence. With today’s knowledge spread across smart devices and driven by the wisdom of crowds, philosophy seems to look more at big data than at big brother.

Despite (or because of) its focus on the second category, AlphaGo’s architectural feat may still carry some lessons for the whole endeavor.

Taxonomy of Representations

As already noted, the effectiveness of AI’s supporting paradigms has been bolstered by the exponential increase in available data and the processing power to deal with it. Not surprisingly, those paradigms are associated with two basic forms of representations aligned with the source of knowledge, implicit for senses, and explicit for reasoning (a toy sketch follows the list):

  • Designs based on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as knowledge governing behaviors.
  • Designs based on neuronal networks are characterized by implicit information processing: data is “compiled” into neuronal connections whose weights (embodying knowledge) are tuned iteratively on the basis of behavioral feedback.
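
The duality can be illustrated by a toy sketch (all names and thresholds are hypothetical): the same decision is first taken by interpreting an explicit rule, then by a single weight tuned from behavioral feedback.

```python
# Toy contrast between explicit (symbolic) and implicit (neuronal) processing.
# All names and thresholds are hypothetical.

def explicit_decision(temperature_c: float) -> str:
    """Symbolic design: data is interpreted against a readable, traceable rule."""
    return "open_valve" if temperature_c > 80.0 else "keep_closed"

class ImplicitDecision:
    """Neuronal-style design: data is compiled into a weight tuned by feedback."""
    def __init__(self) -> None:
        self.weight, self.bias = 0.0, 0.0

    def act(self, temperature_c: float) -> str:
        return "open_valve" if self.weight * temperature_c + self.bias > 0 else "keep_closed"

    def learn(self, temperature_c: float, reward: float, rate: float = 0.01) -> None:
        # The "knowledge" ends up embodied in the weight, not in a readable rule.
        self.weight += rate * reward * temperature_c
        self.bias += rate * reward

print(explicit_decision(85.0))              # traceable: the rule says why
implicit = ImplicitDecision()
implicit.learn(85.0, reward=1.0)            # tuned: only the feedback says whether
print(implicit.act(85.0), implicit.weight)
```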

Since that duality mirrors human cognitive capabilities, brainy machines built on those designs are meant to combine rationality with effectiveness:

  • Symbolic representations support the transparency of ends and the traceability of means, allowing for hierarchies of purposes, actual or social.
  • Neuronal networks, helped by their learning kernels operating directly on data, speed up the realization of concrete purposes based on the supporting knowledge implicitly embodied as weighted connections.

The potential of such approaches has been illustrated by internet-based language processing: pragmatic associations “observed” on billions of discourses are progressively complementing and even superseding syntactic and semantic rules in web-based parsers.

On that point too AlphaGo’s ambitions are focused, since it only deals with non-symbolic inputs, namely a collection of Go moves (about 30 million in total) from expert players. But that limit can be turned into a benefit as it brings homogeneity and transparency, and therefore a more effective combination of algorithms: brawny ones for actual moves and intuitive knowledge from the best players, brainy ones for putative moves, planning, and policies.

Teaching them how to work together is arguably a key factor of the breakthrough.

Taxonomy of Learning

As should be expected from intelligent machines, their impressive track record fully depends on their learning capabilities. Whereas those capabilities are typically applied separately to implicit (or non-symbolic) and explicit (or symbolic) contents, bringing them under the control of the same cognitive engine, as human brains routinely do, has long been recognized as a primary objective for AI.

Practically, that has been achieved with neuronal networks by combining supervised and unsupervised learning: human experts help systems sort the wheat from the chaff, and the systems then improve their expertise through millions of games of self-play.

Yet, the achievements of leading AI players have marked out the limits of these solutions, namely the qualitative gap between playing as well as the best human players and beating them. While the former outcome can be achieved through likelihood-based decision-making, the latter requires the development of original schemes, and that brings quantitative and qualitative obstacles:

  • Contrary to actual moves, possible ones have no limit, hence the exponential growth of search trees.
  • Original schemes are to be devised with regard to values and policies.

Overcoming both challenges with a single scheme may be seen as the critical achievement of DeepMind engineers.

Mastering the Breadth & Depth of Search Trees

Using neuronal networks for the evaluation of actual states as well as the sampling of policies comes with exponential increases in the breadth and depth of search trees. Whereas Monte Carlo Tree Search (MCTS) algorithms are meant to deal with the problem, the limited capacity to scale up processing power nonetheless led to shallow trees, until DeepMind engineers succeeded in unlocking the depth barrier by applying MCTS to layered value and policy networks.

AlphaGo’s seamless use of layered networks (aka deep convolutional neuronal networks) for intuitive learning, reinforcement, values, and policies was made possible by the homogeneity of Go’s playground and rules (no differentiated moves and search traps as in the game of Chess).
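
A minimal sketch of that combination, with placeholder policy and value functions standing in for the layered networks; the toy game and all names are hypothetical, the point being the way priors and value estimates steer the selection of branches.

```python
import math
import random

# Minimal MCTS sketch steered by policy priors and value estimates (PUCT-like
# selection). The toy game and the uniform/random stand-ins for the policy and
# value networks are placeholders, not AlphaGo's actual components.

def legal_moves(state):
    return [] if len(state) == 5 else [0, 1, 2]      # toy game: five picks, then stop

def policy_priors(state, moves):
    return {m: 1.0 / len(moves) for m in moves}      # stand-in for the policy network

def value_estimate(state):
    return random.uniform(-1.0, 1.0)                 # stand-in for the value network

class Node:
    def __init__(self, state, prior=1.0):
        self.state, self.prior = state, prior
        self.children, self.visits, self.value_sum = {}, 0, 0.0

    def puct(self, parent_visits, c=1.5):
        q = self.value_sum / self.visits if self.visits else 0.0
        return q + c * self.prior * math.sqrt(parent_visits) / (1 + self.visits)

def search(root, simulations=200):
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                          # selection: follow PUCT scores
            node = max(node.children.values(), key=lambda n: n.puct(node.visits))
            path.append(node)
        moves = legal_moves(node.state)
        if moves:                                     # expansion: seed children with priors
            for m, p in policy_priors(node.state, moves).items():
                node.children[m] = Node(node.state + (m,), prior=p)
        v = value_estimate(node.state)                # evaluation + backup along the path
        for n in path:
            n.visits += 1
            n.value_sum += v
    return max(root.children, key=lambda m: root.children[m].visits)

print("preferred first move:", search(Node(tuple())))
```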

From Intuition to Knowledge

Humans are the only species to combine intuitive (implicit) and symbolic (explicit) knowledge, with the dual capacity to transform the former into the latter and, in reverse, to improve the former with the latter’s feedback.

Applied to machine learning, that would require some continuity between supervised and unsupervised learning, which would be achieved with neuronal networks being used for symbolic representations as well as for raw data:

  • From explicit to implicit: symbolic descriptions built for specific contexts and purposes would be engineered into neuronal networks to be tried and improved by running them on data from targeted environments.
  • From implicit to explicit: once designs tested and reinforced through millions of runs in relevant targets, it would be possible to re-engineer the results into improved symbolic descriptions.

Whereas unsupervised learning of deep symbolic knowledge remains beyond the reach of intelligent machines, significant results can be achieved for “flat” semantic playgrounds, i.e. if the same semantics can be used to evaluate states and policies across networks (a schematic sketch follows the list):

  1. Supervised learning of the intuitive part of the game as observed in millions of moves by human experts.
  2. Unsupervised reinforcement learning from games of self-play.
  3. Planning and decision-making using Monte Carlo Tree Search (MCTS) methods to build, assess, and refine its own strategies.
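
A schematic sketch of the three steps above; every function is a placeholder (the point is the ordering and the hand-offs, not the learning itself).

```python
# Schematic three-step scheme; all functions and names are placeholders.

def supervised_step(policy, expert_moves):
    """1. Fit the policy to moves observed from human experts."""
    for state, move in expert_moves:
        policy[state] = move                          # stand-in for gradient fitting
    return policy

def self_play_step(policy, games=3):
    """2. Reinforce the policy through games of self-play."""
    for _ in range(games):
        # stand-in: a real engine would play a full game and nudge the policy
        # towards the moves of the winning side (policy-gradient reinforcement)
        pass
    return policy

def play_step(policy, state):
    """3. Plan and decide at play time, e.g. with MCTS seeded by the policy."""
    return policy.get(state, "pass")                  # stand-in for a tree search

policy = supervised_step({}, [("empty board", "corner")])
policy = self_play_step(policy)
print(play_step(policy, "empty board"))               # -> corner
```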

Such deep and seamless integration would not be possible without the holistic nature of the game of Go.

Aesthetics Assessment & Holistic Knowledge

The specificity of the game of Go is twofold, complexity on the quantitative side, simplicity on the qualitative side, the former being the price of the latter.

As compared to Chess, Go’s actual positions and prospective moves can only be assessed on the whole of the board, using a criterion that is best defined as aesthetic as it cannot be reduced to any metrics or handcrafted expert rules. Players will not make moves after a detailed analysis of local positions and assessment of alternative scenarii, but will follow their intuitive perception of the board.

As a consequence, the behavior of AlphaGo can be neatly and fully bound with the second level of knowledge defined above:

  • As a game player it can be detached from actual reality concerns.
  • As a Go player it doesn’t have to tackle any semantic complexity.

Given a harness of adequate computing power, the primary challenge for DeepMind engineers was to teach AlphaGo to transform its aesthetic intuitions into holistic knowledge without having to define their substance.

Further Readings

External Links

Operational Intelligence & Decision Making

January 18, 2016

Preamble

According to a leading tools provider, operational intelligence (OI) is the ability to “discover and analyze relationships between business events and corresponding IT events”.

Minding streams of big data things (Gilles Barbier)

From a marketing perspective, the moniker suggests some kind of cross-breeding between operational research, artificial intelligence, and real-time analytics. Yet, behind vendor dressing, problems and policies remain the ones traditionally dealt with by decision-making and knowledge management, and as far as marketing is concerned, pitches will hardly affect the assessment of field professionals.

Nevertheless, functional pitches may have a deeper influence if they try to outline the aims of operational intelligence to the people directly involved, affecting the way problems are understood and dealt with. That may be the case if business and system events are seen as being on a par: overlooking the directed dependency between actual events and their system counterparts can critically hamper systems’ very decision-making capabilities.

Facts, Data, & Information

The new connected world of human brains and smart things has scaled down space and time by orders of magnitude, up to the point that events seem to come out as soon as they happen, wherever that may be. Facts and updates, which once arrived as discrete and manageable batches of information, are now bursting continuously and massively as seamless streams of data that have to be processed on the fly into information lest they be cannibalized by ambient noise. That new configuration blurs the distinction between operational data (pushed, shallow, transient) and underlying information (pulled, deep, persistent), making it unworkable, if not meaningless altogether.

Taking inventory decisions as an example, traditional schemes rely on periodic readings of actual inventories and sales crossed with market foresight. Now, with on-line sales and the internet of things, real-time data can be used to build on-the-fly indicators whose biases and inaccuracies can be dynamically readjusted on the basis of information built on hindsight.

At any given time (t), decision-makers are presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).
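
A minimal sketch of that scheme (hypothetical names and figures): on-the-fly indicators are recorded as initial estimates and later revised with hindsight, so that the view at time t combines actual observations (a), initial estimates (b), and revised estimates (c).

```python
from dataclasses import dataclass, field

# Hypothetical rolling indicator: raw observations (a), biased on-the-fly
# estimates (b), and estimates revised later on consolidated information (c).

@dataclass
class RollingIndicator:
    actuals: dict = field(default_factory=dict)    # (a) raw observations per period
    initial: dict = field(default_factory=dict)    # (b) estimates made on the fly
    revised: dict = field(default_factory=dict)    # (c) estimates revised with hindsight

    def observe(self, period: int, raw_sales: float, bias: float = 0.9) -> None:
        self.actuals[period] = raw_sales
        self.initial[period] = raw_sales * bias    # known to be biased and inaccurate

    def revise(self, period: int, consolidated_sales: float) -> None:
        self.revised[period] = consolidated_sales  # built on consolidated information

    def view_at(self, t: int) -> dict:
        return {
            "actual": self.actuals.get(t),
            "initial_estimates": {p: v for p, v in self.initial.items() if p < t},
            "revised_estimates": {p: v for p, v in self.revised.items() if p < t},
        }

indicator = RollingIndicator()
for period, sales in [(1, 120.0), (2, 140.0), (3, 150.0)]:
    indicator.observe(period, sales)
indicator.revise(1, 118.0)                         # period 1 settled with hindsight
print(indicator.view_at(3))
```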

Set within this framework, the debate about big data can be misleading as it puts the focus on the quantity of data feeding the processes, overlooking the process itself and the distinction between data, information, and knowledge.

Information, Knowledge, & Decision-making

Generally speaking, the distinction between data and information can be set with reference to time and context, data being instant and standalone, and information associated with a shelf life and a domain. With regard to decision-making, it would mean that data can be directly used within the context of current activities and circumstances; e.g. whereas on-line sales data may (or may not) be directly used (i.e. despite inaccuracies and biases) to allocate inventories across depots, it has to be “mined” into consolidated information before being used in the broader perspective of inventory planning.

Compared to the transition between data and information, which is carried out by adding time and context, the one between information and knowledge is best understood in terms of decision-making.

Information is obtained by anchoring data to time-frames and contexts, knowledge is acquired by putting information to use.

Decisions are best defined as commitments made against unknown circumstances: somebody, somewhere, or sometime. First, it ensues that decision-making calls for specific and timed information that has to be maintained up to date until decisions are taken. Then, taking decisions introduces some irreversible change in the state of affairs or expectations, potentially making all relevant information obsolete. So it may be argued that decisions are what transform information into knowledge.
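
A toy sketch of that argument (hypothetical names): information is data anchored to a time-frame and a context; a decision puts it to use and, by changing the state of affairs, renders it obsolete.

```python
from dataclasses import dataclass

# Hypothetical sketch: a decision consumes timed, contextual information and
# marks it obsolete, which is where information turns into knowledge.

@dataclass
class Information:
    content: str
    context: str
    valid_until: int           # shelf life, in arbitrary time units
    obsolete: bool = False

def decide(info: Information, now: int) -> str:
    if info.obsolete or now > info.valid_until:
        raise ValueError("stale information: refresh it before committing")
    commitment = f"commit on '{info.content}' within context '{info.context}'"
    info.obsolete = True       # the commitment changes expectations, so the
                               # information it relied upon no longer holds
    return commitment          # knowledge: information put to use

stock = Information("depot A holds 40 units", context="inventory", valid_until=10)
print(decide(stock, now=7))
```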

Operational Intelligence: Objectives & Tools

Assuming decisions mark the nexus between information and knowledge, operational intelligence could be defined as the ability to put information to use, that ability being supported by the analysis of the relationships between business events and corresponding IT events.

Far from being academic, that distinction is essentially pragmatic as it marks the boundary between OI objectives and tools capabilities:

  • The aim of OI is to make sense (and profit) from the dynamic relationship between business (aka external) events on one hand, business objectives and enterprise capabilities on the other hand.
  • The role of supporting tools is to define and manage IT (aka internal) events used to reflect external ones and analyze them.

Whereas business events (red) represent change in the state of affairs, IT events (blue) only represent changes in associated information.

Since IT events are artifacts built on purpose, there isn’t much to discover or analyze about them; not to mention the fact that confusing business events with their IT shadows is bound to undermine the whole decision-making process. So what is at stake for OI is how to design IT events so as to trail the relevant business events in a timely and accurate way.
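
A sketch of that directed dependency (hypothetical names): business events represent changes in the state of affairs, while IT events are designed artifacts that merely record the corresponding changes in information, with a lag to be kept under control.

```python
from dataclasses import dataclass

# Hypothetical sketch of the directed dependency between business and IT events.

@dataclass(frozen=True)
class BusinessEvent:           # external: something actually happened
    description: str
    occurred_at: float         # business time

@dataclass(frozen=True)
class ITEvent:                 # internal: the system recorded information about it
    source: BusinessEvent
    recorded_at: float         # system time

    @property
    def latency(self) -> float:
        # what OI has to design for: timely and accurate trailing
        return self.recorded_at - self.source.occurred_at

sale = BusinessEvent("sale of item #42", occurred_at=100.0)
log_entry = ITEvent(sale, recorded_at=100.4)
print(f"trailing latency: {log_entry.latency:.1f}s")
```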

Operational Intelligence & Actual Knowledge

As already noted, operational intelligence (OI) is about decision-making, which entails changing the state of objects, processes, or expectations. Compared to knowledge management (KM) which may or may not be time-related, OI is inherently bound to the actual state of affairs: on one hand it relies on specific and timed information, on the other hand it renders that information obsolete when it triggers decisions.

At the risk of oversimplification, operational intelligence can first be understood as a combination of traditional disciplines:

  • Data mining filters facts and events, captures data, and analyzes it into information.
  • Knowledge management charts information with regard to business objectives and enterprise capabilities.
  • Decision-making manages time-stamps and plans commitments subject to accuracy and likelihood.

But the specificity of operational intelligence is to be found in the way these functions are intertwined and cross-fed by operational concerns.

To begin with, data mining can be dynamically adjusted depending on what is needed for decision-making, and when. As a corollary, with data thus prepared in advance, some decisions can be taken directly, bypassing the mediation (and delays) of information processing. From a cognitive point of view that would be the equivalent of non-symbolic (aka implicit) knowledge processed by neuronal networks.

Parceling out OI objectives

Decision-making and differentiated knowledge management

Conversely, information processing could benefit from operational feedback so that knowledge management would be driven by business value, and the supporting information weighted by timing and shelf-life considerations. Whereas part of it could be done through implicit connections, it would be more comprehensively and explicitly achieved through symbolic representations.

Operational Intelligence: Signals vs Symbols

Assuming that intelligence is the ability to figure out situations and solve problems, one may conclude that it is inherently operational. Along the same reasoning, if knowledge is information put to use, it may be implicit as well as explicit.

Nonetheless, the merit of operational intelligence is to bring symbolic and non-symbolic knowledge under a single functional roof: the former explicit, using the mediation of semantic constructs, weighting information and supporting managed decisions; the latter implicit, using direct associations between actual objects or phenomena, and supporting automated decisions.

Further Readings

People should not be Confused with their Personas

December 19, 2015

Confronted with the ubiquity of IT systems and the blurring of traditional fences, enterprises grapple with the management of accesses and authorizations. Hence the importance of a clear distinction between agents, organizational units, and systems users.

Confusing Mimicry: People Impersonating Personas (E. Erwitt)

What is at stake is best understood by looking at the modeling of users’ access, collective agents, and interoperability.

Users’ Access

Roles (or actors in UML parlance) are meant to provide a twofold description of system users in order to combine two perspectives: organization and business process on one hand, system and applications on the other hand.

That can only be achieved by maintaining a clear distinction between actual agents, able to physically interact with systems, and roles, which are symbolic positions defined by and relative to organizations. Since mapping people and organizations to system users is the primary purpose of access rights management, lumping both sides under a single concept would definitely preclude the modeling of typical access scripts:

  • Anonymous: no authentication or authorization.
  • Registered user (role): user name and password are matched to user record.
  • Identified person: authentication of external identity.
  • Registered person: identification of a user with established external identity.

Security: actors vs actual and symbolic counterparts

Given that authentication and authorization procedures depend on matching actual agents with their system symbolic representations (authentication) and roles (authorization), ignoring those distinctions would sap the whole security architecture.
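
A minimal sketch of those distinctions (all names hypothetical): agents are actual persons or devices, users are their symbolic counterparts managed by the system, and roles are organizational positions; authentication matches agents with users, authorization matches users with roles.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: agents (actual), users (symbolic counterparts), roles
# (organizational positions); authentication vs authorization.

@dataclass(frozen=True)
class Agent:                          # actual: able to physically interact
    external_id: str                  # identity established outside the system

@dataclass
class User:                           # symbolic representation managed by the system
    name: str
    password: str
    agent: Optional[Agent] = None     # None for registered but unidentified users
    roles: set = field(default_factory=set)

def authenticate(user: User, password: str, agent: Optional[Agent] = None) -> bool:
    if user.password != password:
        return False
    return agent is None or user.agent == agent     # identified person if agent matches

def authorize(user: User, role: str) -> bool:
    return role in user.roles                       # roles, not identities, grant access

alice = Agent("passport-FR-123")
account = User("alice", "s3cret", agent=alice, roles={"approver"})
print(authenticate(account, "s3cret", alice), authorize(account, "approver"))
```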

Collective Agents

Confusing agents and roles may also prevent a proper management of collective authorizations.

At enterprise level parties can be identified physically as individuals or nominally as groups. But from a system perspective interactions can only be carried out by actors with physical identities, whatever their nature: users, systems, or devices.

Parties and actors are set along orthogonal perspectives, identities for the former, roles for the latter.

Managing accesses therefore requires an additional level of complexity, namely the relationship between collective and individual rights:

  • Parties can be intrinsically individual, collective, or contingent on circumstances (a).
  • As far as collective entities are concerned, access rights can be directly allocated on the basis of group membership or delegated to named individuals (b).
  • Rights may depend on their origin and compatibility (c).
  • Roles allocation may be conditioned by both entitlements and specific rights on operations and objects (d).

Powertypes (2) are introduced to manage categories of roles, operations, and objects.

That will not be possible without modeling separately entities identified by organizations (collectively or individually) and their personas while interacting with systems.
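
A sketch of that relationship (hypothetical names): rights allocated to a group can be exercised on behalf of membership or delegated to named individuals, which is only practicable if parties are modeled apart from the actors interacting with systems.

```python
# Hypothetical sketch of collective authorizations: rights held via group
# membership versus rights delegated to named individuals (case b above).

group_rights = {"purchasing-dept": {"approve_order"}}
memberships = {"bob": {"purchasing-dept"}}        # rights on behalf of membership
delegations = {"carol": {"approve_order"}}        # rights delegated to an individual

def allowed(person: str, right: str) -> bool:
    via_group = any(right in group_rights.get(group, set())
                    for group in memberships.get(person, set()))
    via_delegation = right in delegations.get(person, set())
    return via_group or via_delegation

for person in ("bob", "carol", "dave"):
    print(person, allowed(person, "approve_order"))
```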

Interoperability

From smartphones to dumb appliances, things are unceasingly moving around networks and swapping places with people. Given the number, diversity, and turnover of interacting parties, systems are in no position to keep tabs on what is happening to agents behind the roles. Interoperability is therefore fully subordinate to the reliability and versatility of actors’ functional capabilities with regard to agents (organization) and applications (systems):

  • Agents identified externally are classified with regard to communication capability: users (natural language, digital, analog), systems (digital), and devices (analog).
  • Applications are classified with regard to their communication requirements (services, user interfaces, RT interfaces, …).
  • Actors are used to map agents to applications.

Actors can be used to characterize communication mechanisms between actual agents and applications.

That formal distinction between agents and actors becomes critical when access rights are to be checked for peer-to-peer transactions carried out across multiple participants.

Postscript

Besides its benefits, the validity of this perspective is borne out by its congruence with enterprise architecture layers (business, system functionalities, platform technologies) and model driven engineering (e.g. computation independent, platform independent, and platform specific models).

Further Reading

External Links

New Year: Regrets or Expectations ?

December 8, 2015

Missed or Expected ? (Zineb Sedira)

Do Calendars Still Matter ?

Whereas financial results are established on an annual basis, events nowadays unfold within a seamless space/time dimension:

  • Social networks put long-planned strategies at the mercy of consumers’ weekly whims, often unconcerned with borders or product intents.
  • Mining of big data may curtail innovative head-starts to a matter of weeks, if not days.
  • On-line (not to mention high-frequency) trading puts market sanctions at investors’ fingertips, and enterprises’ stocks within predators’ claws.

So why should enterprises bother with yearly schedules ?

Evolutionary Arms Race Doesn’t Wait for Saint Sylvester

Business is governed by the same rule as nature, namely the survival of the fittest, and as with biological ecosystems, enterprises’ individual fitness depends on their relationships with others. Hence, as suggested by Lewis Carroll’s Red Queen, the survival of enterprises in their evolutionary arms race doesn’t depend on their absolute speed set against some time-frame, but on the relative one set with regard to competitors. In that case Saint Sylvester could be ignored. Or could he ? Because seasons do play their part in biological races.

From Race to Game

As it happens, business races are becoming more complicated as extensive and ubiquitous information and communication systems redefine the traditional predator-prey casting. With time and space cut down to symbolic dimensions, collaboration and competition can no longer be safely allocated to time-spans and market segments, which entails that the roles of predator and prey can be upended on the spur of the moment and the turn of a switch.

Introducing this kind of option transforms tracks into playgrounds and races into games, because there is no point in running without knowing who to run against and who to run with. Adding to the challenge, these playgrounds are moving ones, and so are the rules of the game: one week a non-zero-sum game, the next a winner-takes-all. That is when calendars make a comeback.

Games come with Boards and Time Scales

Contrary to races, games have comprehensive and detailed rules, or regulations in business parlance. As soon as enterprises start to play with roles they have to meet conditions, follow procedures, and take into account specific constraints; all defined with regard to institutional spaces and calendar time, e.g.:

  • Customers’ behavior is often seasonal.
  • Regulatory bodies rule within geographical borders and their decisions stand for calendar periods.
  • M&A have to align with stock market schedules.

So, assuming that institutional time-frames cannot be avoided when strategies are set, Saint Sylvester may provide a practical marker for yearly moves.

When to Board a Shuttle

Enterprises have therefore to set their decision-making with regard to the accelerating pulse of business events on one hand, institutional time-boxes on the other hand. That corresponds to a typical shuttle scheme, with taking a decision being associated with boarding a shuttle.

Along that understanding, and assuming that taking a decision means closing alternative options, boarding a shuttle excludes some of the next one(s), and missing it may be beneficial as well as detrimental. That’s the reasoning behind the Last Responsible Moment principle which, when put to use for periodic decision-making, suggests that there can be as much risk in boarding a shuttle as in missing it.

Further Readings

Use Cases are Agile Tools

November 26, 2015

Preamble

Use cases are often associated with the whole of UML diagrams, and consequently with cumbersome procedures, and excessive overheads. But like cats, use cases can be focused, agile and versatile.

Use Case: agile, simple-minded, with a side view (P. Picasso)

Simple minded, Robust, Easy going

As initially defined by Ivar Jacobson, the aim of use cases is to describe what happens between users and systems. Strictly speaking, it means a primary actor (possibly seconded) and a triggering event (possibly qualified), which may (e.g. transaction) or may not (e.g. batch) be followed by some exchange of messages. That header may (will) not be the full story, but it provides a clear, simple and robust basis:

  • Clear and simple: while conditions can be added and interactions detailed, they are not necessary parts of the core definition and can be left aside without impairing it.
  • Robust: the validity of the core definition is not contingent on further conditions and refinements. It ensues that simple and solid use cases can be defined and endorsed very soon, independently of their further extension.

As a side benefit, use cases come with a smooth learning curve that enables analysts to become rapidly skilled without necessarily being experts.

Open minded and Versatile

Contrary to a somewhat short-sighted perspective, use cases are not limited to users because actors (aka roles) are meant to hide the actual agents involved: people, devices, or other systems. As a consequence, the scope of UCs is not limited to dialog with users but may also include batch (as one-step interactions) and real-time transactions.

Modular and Inter-operable

Given their simplicity and clarity of purpose, use cases can be easily processed by a wide array of modeling tools on both sides of the business/engineering divide, e.g. BPM and UML. That brings significant benefits for modularity.

At functional level use cases can be used to factor out homogeneous modules to be developed by different tools according to their nature. As an example, shared business functions may have to be set apart from business-specific scenarii.

Use cases can be easily combined with a wide range of modeling tools

At technical level, interoperability problems brought about by updates synchronization are to be significantly reduced simply because modules’ core specifications (set by use cases) can be managed independently.

Iterative

Given their modularity, use cases can be easily tailored to the iterative paradigm. Once a context is set by a business process or user story, development iterations can be defined with regard to the following (a sketch follows the list):

  • Invariants (use case): primary actor and triggering event.
  • Iterations (scenarii): alternative execution paths identified by sequences of extension points representing the choices of actors or variants within the limits set by initial conditions.
  • Backlog units: activities associated to segments of execution paths delimited by extension points.
  • Exit condition: validation of execution path.

Execution paths & Development cycles
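
A sketch of that decomposition (hypothetical names): the use case keeps its invariant header, scenarii are sequences of extension points, and backlog units are the segments delimited by those points.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: invariants, scenarii, and backlog units of a use case.

@dataclass
class UseCase:
    primary_actor: str                              # invariant
    triggering_event: str                           # invariant
    scenarii: List[List[str]] = field(default_factory=list)

    def backlog_units(self, scenario: List[str]) -> List[str]:
        # one unit of work per segment between consecutive extension points
        points = ["start"] + scenario + ["exit"]
        return [f"{a} -> {b}" for a, b in zip(points, points[1:])]

withdraw = UseCase("customer", "card inserted",
                   scenarii=[["pin checked", "amount accepted"],
                             ["pin checked", "amount rejected"]])
for path in withdraw.scenarii:
    print(withdraw.backlog_units(path))
```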

Scalable

Last but not least, use cases provide a sound and pragmatic transition between domain specific stories and architectural features. Taking a leaf from the Scaled Agile Framework, business functions can be factored out as functional features represented by shared use cases whose actors stand for systems and not final users.

Features are to be supported by system functionalities that may or may not be implemented as services.

Depending on targeted architectures, those features may or may not be implemented by services.

Further Reading

External Links

Quality Circles

November 11, 2015

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering that would mean code, users’ experience, and operations, each with its own specific stakeholders and criteria.

A bird’s-eye view on quality circles (Jonathan Monk)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side, agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need for a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e. factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for process improvement remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.

Assessment at each level may be conditioned by lower levels

In any case (i.e. with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g. how to assess insurance premiums or compute missile trajectories.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.

Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g. reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks, both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed so as to eliminate some uncertainties (e.g. reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g. an unexpected regulatory change or a delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defaults or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g. resources consumption, late delivery) against the cost of preventive (e.g. more systematic consistency checks, additional staff) or corrective (e.g. tests or maintenance) measures.

Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between risks management and quality assurance.
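
As a toy illustration of the managed-risks case above (all figures hypothetical), the likelihood-weighted cost of a default can be balanced against the cost of a preventive measure.

```python
# Hypothetical figures: balance the likelihood-weighted cost of a default
# against the cost of a preventive measure.

def expected_loss(probability: float, cost_of_default: float) -> float:
    return probability * cost_of_default

risk = {"probability": 0.15, "cost_of_default": 200_000}   # e.g. late delivery
preventive_cost = 25_000                                    # e.g. additional checks

loss = expected_loss(**risk)
decision = "prevent" if preventive_cost < loss else "monitor and accept"
print(f"expected loss: {loss:,.0f}, decision: {decision}")
```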

Further Readings

Where to Begin with EA

November 4, 2015

Whereas enterprise architecture (EA) is a broadly recognized practical concern, there isn’t much of a consensus about it as a discipline. Hence the interest of figuring it out from practice.

Separate actual and symbolic realms (Marc Riboud)

Be Specific

Compared to the abundance of advice about EA management, there is a dearth of specifics about what is to be managed, apart from the Zachman framework. So the best approach is to begin with actual practice and try to characterize the specifics of what is actually done.

Separate Structures from Processes

Architecture is about shared assets whose life cycle is not limited to specific activities. Hence the need to set apart processes, which have to change depending on business environments and opportunities, from structures (e.g. organization and systems), whose life cycle is meant to be set along a corporate time frame.

Separate Symbolic from Actual

To be of any use EA has to rely on some consensus regarding what is to be managed, and by whom. That can only be achieved if some distinction is kept between symbolic descriptions (the equivalent of blueprints) of information and processes on one hand, and actual objects (e.g. legacy) or activities on the other hand.

Add Time Frames

At the end of the day success will be decided by the fruitful combination of enterprise assets (financial, physical, logical, human) and business context and objectives. Defining their respective life cycles and planning the necessary changes could be seen as the primary responsibility of enterprise architects.

Add Responsibilities

Last and least, allocating responsibilities is probably better carried out on a case by case basis depending on each organization and corporate culture.

That’s All

Those few principles may seem unassuming but they provide a sound basis that takes full advantage of what staff and management know about their enterprise resources and practices. And never forget that continuity is a critical factor of EA success.

Further Reading

Data Mining & Requirements Analysis

October 24, 2015

Preamble

Data mining explores business opportunities and competitive advantage, requirements analysis describes supporting applications. Both use models: the former’s are predictive and ephemeral, the latter’s descriptive (or prescriptive) and perennial.

Data mining: sorting business wheat from world chaff (Andreas Gursky)

Understanding how they are related could significantly improve processes maturity.

Data vs Requirements Analysis

Nowadays the success of a wide range of enterprises critically depends on two achievements:

  1. Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up to date information.
  2. Putting that information into effective use through their business processes and supporting systems.

Models from data analysis (left) and systems requirements (right)

Those challenges are converging: under the pressure of market forces and technological advances, most traditional fences between business channels and IT systems are crumbling, putting the focus on the functional integration between data mining and production systems. How the latter are expected to feed the former has been the bread and butter of good corporate governance for some time, but there has been less interest in the opposite flow, namely how data analysis could “inform” business requirements.

From Data to Information

Facts are not given but must be captured through a symbolic description of actual observations. That entails some observer set on the task, using a mix of conceptual and technical apparatus. Data mining and requirements analysis are practical realizations of that process:

  • Data mining relies on analytic tools to extract revealing information that could be used to chart opportunities along business models.
  • Requirements analysis relies on business processes and users’ practice to extract symbolic descriptions that will be used to build models of supporting applications.

If both walk the path from data to information, their objectives are different: the former’s is to improve business decisions by making sense of actual observations; the latter’s is to build system surrogates from the symbolic descriptions of actual business objects and activities.

Anchors & Structures: Plasticity of Business Entities

Perhaps paradoxically, business agility calls for terra firma, because nimble trades must be rooted in corporate identity and business continuity. As a consequence, the first step of requirements analysis should be to associate individual business objects or activities with stable and consistent identification mechanisms, and to group them with regard to those mechanisms:

  • External entities with natural (person) or designed identity (car).
  • Symbolic entities for roles (customer) or commitments (maintenance contract).
  • Actual activities (promotion campaign) and events (sale) or business logic (promotion).

Anchors

Conversely, as the aim of data analysis is to explore every business angle, individual observations are supposed to be moved across groups; yet, since the units identified by data analysis will have to be aligned with the ones described by requirements analysis, moves must also keep track of identities. That dilemma between continuity of identified structures on one side, plasticity of functional aspects on the other side, can be illustrated by banks which, in response to marketing requirements, had to shift from account (internal identification) to customer (external identification) based systems.

It’s easier to market insurance from customer centered systems (right) than from account centered ones (left)

That challenge can be overcome by linking the identification of symbolic entities to external anchors.
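
A sketch of that shift (hypothetical names): accounts are symbolic entities whose identification is linked to an external anchor, the customer, so that customer-centered views can be built without breaking the continuity of account identities.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: symbolic entities (accounts) anchored to an external
# identity (the customer), enabling a customer-centered view on top of
# account-centered records.

@dataclass(frozen=True)
class Customer:                 # external anchor: identity set outside the system
    national_id: str

@dataclass
class Account:                  # symbolic entity anchored to the customer
    number: str
    owner: Customer
    balance: float = 0.0

def accounts_of(customer: Customer, book: List[Account]) -> List[Account]:
    return [a for a in book if a.owner == customer]

jane = Customer("ID-98765")
book = [Account("ACC-1", jane, 500.0), Account("ACC-2", jane, 120.0)]
print([a.number for a in accounts_of(jane, book)])
```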

Profiles & Features: Versatility of Business Opportunities

As noted above, requirements and data analysis are set on the same road but driven by different forces: the former tries to group individuals with regard to identification mechanisms before fleshing them out with relevant features; the latter tries to group individuals with given identities according to features and opportunity profiles. Yet, what could appear as collision courses may become a meeting of minds if both courses are charted with regard to variants analysis.

From the requirements perspective the primary concern is to distinguish between structural and functional variants:

  • Structural variants are bound to identities, i.e. set up-front for the respective life cycles of individual business objects or transactions. As a consequence they cannot be changed without undermining business continuity. Moreover, being part and parcel of descriptors (e.g. types and use cases), their change will affect engineering processes.
  • Functional variants may vary during the respective life cycles of individual business objects or transactions. As a consequence they can be changed without undermining business continuity, and changes in descriptors (e.g. partitions and scenarii) can be managed without affecting engineering processes.

From the data mining perspective the objective is to improve the benefits of information systems for decision-making processes:

  • Static: how to classify individuals so as to reduce the uncertainty of predictions.
  • Dynamic: how to classify business options so as to reduce the uncertainty of decisions.

Since those objectives are set for individuals, constraints on continuity and consistency can be dealt with independently of the description of symbolic surrogates.

Identified individuals with profiles for customers (a), their behaviors (b), and promotional gestures (c)

It ensues that perspectives can be adjusted by factoring out the constraints of continuity and consistency for business objects (e.g. cars), agents (e.g. customers), and processes (e.g. repairs). Profiles for agents (a), behaviors (b), and business options (c) could then be freely explored and tailored with regard to changes in business environment and objectives.

Applying Data Analysis to Requirements

Not surprisingly, data analysis techniques can be used to adjust perspectives. For that purpose a sample of individuals (business objects and operations) representing the population targeted by requirements would have to be submitted to basic mining routines (a sketch of the similarity routine follows the list). Borrowing a catalog from F. Provost & T. Fawcett:

  1. Classification: estimates the probability for each individual (object or operation) to belong to a set of classes; can be used to assess the closeness of the variants (respectively power-types or execution paths) identified by requirements analysis.
  2. Regression: reverse of classification; estimates how much of individuals’ feature values can be explained by the proposed classifications.
  3. Similarity: a shallow version of classification; can be used to assess the distance between variants and consolidate the proposed classifications.
  4. Clustering: a deep version of classification; can be used to distinguish between shallow and natural classifications.
  5. Co-occurrence: deals with behavioral variants; can be used to distinguish between functional and structural classifications.
  6. Profiling: reverse of co-occurrence; can be used to consolidate functional and structural classifications.
  7. Link prediction: can be used to define relationships.
  8. Data reduction: eliminates redundant individuals; can be used to consolidate requirements and refine test scenarii.
  9. Causal modeling: brings together business logic (events and rules) and users’ decisions; should provide the backbone of test scenarii.
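
A sketch of the similarity routine applied to requirements (hypothetical variants and features): variants identified by analysts are encoded as feature vectors, and their pairwise distance is used to decide whether they deserve separate descriptors or should be consolidated.

```python
from math import sqrt

# Hypothetical variants encoded as feature vectors; small distances suggest
# that two variants could be consolidated into a single descriptor.

variants = {                     # 1 = feature present in the variant
    "standard_repair": {"appointment": 1, "loan_car": 0, "warranty_check": 1},
    "premium_repair":  {"appointment": 1, "loan_car": 1, "warranty_check": 1},
    "recall_campaign": {"appointment": 0, "loan_car": 0, "warranty_check": 1},
}

def distance(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    return sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

names = list(variants)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        d = distance(variants[x], variants[y])
        verdict = "consolidate" if d <= 1.0 else "keep apart"
        print(f"{x} vs {y}: distance {d:.2f} -> {verdict}")
```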

Besides the direct benefits for requirements, such procedures may help to bridge the gap between data and requirements analysis and significantly improve processes’ capability and maturity levels.

Business Objectives & Enterprise Architecture Capabilities

Data mining being first and foremost about competitive edge, it relies on a timely and effective coupling between enterprise capabilities and business opportunities. But the dilemma between continuity and plasticity described above for business objects and processes reappears at enterprise level: how to reconcile architecture, by nature perennial, with the agility needed to make the best of changing and competitive environments ?

As an architectural big bang is arguably a last-resort option, answers to that question must be progressive and local: if changes are to be swift and pertinent they must be both circumscribed and leveraged to the relevant parts of the architecture. Taking an (amended) leaf from the Zachman framework, its sixth column (“Why”) could be reset as a line for business and operational objectives that would cross the original five columns instead of the architecture layers. Using a pentagonal representation of enterprise architecture, that line would be set on the outer range.

Enterprise Architecture and the loci of change

It should be noted that setting objectives on a line crossing the columns of capabilities, instead of a column crossing the lines of layers, means that objectives are set at enterprise level and their cascading impact traced and managed through layers.

Symbolic Systems vs World

Nowadays the life of enterprises fully depends on the ability of their systems to deal with their environment by making sense of data and supporting production systems. As long as environments were a hotchpotch of actual and symbolic artifacts the pros and cons of integration could be balanced. But the generalization of digital facts and transactions has upended the balance: there is no more room or time for latency and enterprises must unify the symbolic representation of their business models, organization, and computer systems.

Selected Readings

EA Frameworks: Non Negotiable Features

October 15, 2015

Frameworks are meant to abet the design and governance of enterprises’ organization and systems, not to add any methodological layer of complexity. If that entry level is to be attained, preconditions are to be checked for comprehensiveness, modularity, clarity of principle, and consistency.

Some features are not negotiable (André Kertesz)

Meeting these requirements will in turn greatly facilitate declarative and iterative approaches to enterprise architecture.

Comprehensiveness

The primary objective of an enterprise framework is to bring under a common management roof different contexts and concerns (business, technical, organizational), and synchronize their respective time-frames. That can only be achieved through an all-inclusive and unified conceptual perspective.

Suggested check: Variants for core concepts like agents or events must be clearly defined at enterprise and system levels; e.g. people (agents with identity and organizational status), roles (organization and access to systems), and bots (software agents without identity or organizational status).

Modularity

On one hand enterprise frameworks must deal with strategic issues without being sidetracked by enterprises’ idiosyncrasies. On the other hand swift and specific adaptations to changing environments should not be hampered by cumbersome procedures or a steep learning curve. That can only be achieved by lean and versatile frameworks built from a clear and compact set of architecture artifacts, readily extended, specialized, or implemented through the enactment of dedicated processes.

Suggested check: How a framework is to further the development of a new business, facilitate the merging of organizations, or support the transition to a new architecture (e.g. SOA).

Clarity of Principle

Comprehensiveness and modularity are pointless without a principled backbone supporting incremental changes and a smooth learning curve. For that purpose a clear separation should be maintained between the semantics of the core patterns used to describe architectures and the processes to be carried out for their evolution.

Suggested check: The meaning of primary terms (event, role, activity, etc.) should be uniquely and unambiguously defined based on the core framework principles, independently of the processes using them.

Consistency

EA frameworks should be more compass than textbook, drawing clear lines of action before providing details of implementation. Lest architects be lost in compilations of ambiguous or overlapping definitions and rules, core meanings must remain unaffected when put to use across the framework.

Suggested check: Carry out a comprehensive search for a sample of primary terms (e.g. event, role, activity), list and compare the different definitions (if any), and verify that they can be boiled down to a unique and unambiguous one.

EA & Model Based Systems Engineering

These basic requirements get their full meaning when set in the broader context of EA evolution. Contrary to their IT component, enterprise architectures cannot be reduced to planned designs but grow from a mix of organization, culture, business environments, technology constraints, and strategic planning. As EA evolution is by nature incremental, supporting frameworks should provide for iterative development based on declarative knowledge of their organizational or technical constituents. That could be achieved by combining EA with model based systems engineering.

Further Readings

How to Begin with Requirements

October 5, 2015

Despite being a recurring topic of discussion, a comprehensive and formalized approach to requirements should not hold back newcomers: given that requirements are the necessary genesis of any project, nothing can be assumed about their emergence; as a consequence they are better dealt with as they come.

The Genesis of Meanings (Quiché Maya’s view of creation)

For that purpose, and whatever the preferred approach, requirements analysis can be neatly described as an iterative process built from three basic increments: identified individuals, associated features, and classifications.

Individuals

When entering unknown territory, the first thing to do is to set apart recognizable items. Regarding requirements, those would be individuals whose identities have to be managed. Such individuals would then be further divided between:

  • Objects (persistent identities) and activities (transient identities).
  • Activities (managed duration) and events (no managed duration).
  • Agents (physical identities) and roles (social identities).
  • Actual (physical identities) and symbolic (social identities) objects.

It must be noted that, except for new standalone applications, most individuals may already have been defined by existing functional architectures.

Features

Contrary to individuals, features have no identity of their own. As a corollary, they can only be considered as possible attachments to individuals. On that basis, analysts have to answer three basic questions:

  • Can a feature be shared or transferred (functional), or is it bound to the same individual (structural) ?
  • Does it reflect a state of affairs (attribute) or represent a capability (operation) ?
  • Is there some overlap with already known features ?

The last question brings about the core of requirements analysis, i.e how to deal with variants, consolidate applications, and manage changes.

Classifications

Categories and rules are the dual facets of requirements purpose, namely how to classify objects and operations and deal with variants.

  • Categories take a declarative approach that relies on static classifications to describe variants. Starting with partitions, analysts will have to distinguish between structural variants (associated with specific features) and functional ones (associated with the state of the same set of features).
  • Rules take a procedural approach with the equivalent of dynamic classifications to deal with variants. As for categories, analysts will have to decide between the equivalent of structural variants (rules to be enforced for the whole of execution paths) and functional ones (rules to be evaluated along execution paths).

Whereas each option may have its benefits depending on the nature of variants, the primary factor when selecting a scheme should be its consistency, with regard to existing applications on one hand, and between the descriptions of new objects and activities on the other hand.
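
As a toy illustration of the two schemes applied to the same variant (hypothetical example), a declarative category is set up-front while a procedural rule is evaluated against the current state.

```python
# Hypothetical example: the same discount variant handled declaratively
# (a category set up-front) and procedurally (a rule checked along the path).

class Customer:
    pass

class PremiumCustomer(Customer):      # declarative: structural variant, set up-front
    pass

def discount_by_category(customer: Customer) -> float:
    return 0.10 if isinstance(customer, PremiumCustomer) else 0.0

def discount_by_rule(total_purchases: float) -> float:
    return 0.10 if total_purchases > 10_000 else 0.0   # procedural: functional variant

print(discount_by_category(PremiumCustomer()), discount_by_rule(12_500.0))
```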

Further Readings

