Archive for the ‘Knowledge Management’ Category

Agile Collaboration & Social Creativity

February 22, 2016

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.


Open-plan offices, collaboration, and knowledge workers’ creativity

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use, exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not, they may burn themselves out with hollow considerations, blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking a cue from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles that individuals of a living species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …
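
As a minimal sketch (in Python, with purely illustrative names and no reference to any actual tool), those kinds of tasks and their dependencies could be represented as follows:

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import List

    class TaskKind(Enum):
        DATA_INFORMATION = auto()   # symbolic descriptions of contexts, concerns, and means
        OBJECTIVES = auto()         # opportunities and the ways to realize them
        TASKS = auto()              # rights and responsibilities across collaboration frames
        FLOWS = auto()              # monitoring, publication, consolidation of decisions

    @dataclass
    class WorkflowTask:
        kind: TaskKind
        description: str
        automatable: bool = False   # see the automation groups discussed below
        depends_on: List["WorkflowTask"] = field(default_factory=list)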

Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” by “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
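
A minimal sketch of such a cycle, in Python and with illustrative names only (the callables stand for whatever processing and decision-making an organization actually uses), could read:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class StateOfAffairs:
        descriptions: Dict[str, dict] = field(default_factory=dict)  # symbolic descriptions (cycle invariants)
        facts: List[dict] = field(default_factory=list)              # raw data captured from contexts

    def knowledge_cycle(state: StateOfAffairs,
                        process: Callable[[StateOfAffairs], dict],
                        decide: Callable[[dict], List[dict]]) -> StateOfAffairs:
        """One iteration: process data into information, explore options, exit on decisions."""
        information = process(state)        # iteration content: data -> information, changes monitored
        decisions = decide(information)     # exit condition: commitments changing the state of affairs
        for d in decisions:
            state.descriptions.update(d.get("adjusted_descriptions", {}))  # adjust invariants if needed
        return state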

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in the driving seat (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility to insert knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on the fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is the means to an end, for the latter it’s the end in itself, with collaboration as the means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms: backlog and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short and more intelligent collaboration schemes have to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers’ personal and social circles.
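
By way of illustration only (states, thresholds, and rules are assumptions, not prescriptions), the mediation of solicitations between the two tiers could be sketched as follows:

    from dataclasses import dataclass
    from enum import Enum

    class StateOfMind(Enum):
        DEEP_WORK = 1      # thinking stream to be safeguarded
        OPEN = 2           # accepts solicitations from trusted circles
        AVAILABLE = 3      # accepts everything

    @dataclass
    class Solicitation:
        topic: str
        urgency: int               # 0 (none) to 3 (emergency)
        from_trusted_circle: bool

    def mediate(worker_state: StateOfMind, s: Solicitation) -> str:
        """Filter and time solicitations issued by the social tier according to the worker's state of mind."""
        if s.urgency >= 3:
            return "deliver now"
        if worker_state is StateOfMind.DEEP_WORK:
            return "defer"
        if worker_state is StateOfMind.OPEN and not s.from_trusted_circle:
            return "queue for review"
        return "deliver now"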

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used indifferently by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organisation.

Further Readings

External Links


Data Mining & Requirements Analysis

October 24, 2015

Preamble

Data mining explores business opportunities and competitive advantage; requirements analysis considers supporting applications. Both use models: the former’s are predictive and ephemeral, the latter’s descriptive (or prescriptive) and perennial.


Data mining: sorting business wheat from world chaff (Andreas Gursky)

As the generalization of digitized environments calls for more integration of business and software engineering processes, understanding the relationship between data mining and requirements analysis could significantly improve process maturity and agility.

Data vs Requirements Analysis

Nowadays the success of a wide range of enterprises critically depends on two achievements:

  1. Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up-to-date information. That can be achieved through analysis models meant to describe business expectations with regard to supporting systems.
  2. Putting that information into effective use through business processes and supporting systems. That is done by systems architecture and design models meant to prescribe how to build software artifacts.

From data analysis to systems requirements and software design

Those challenges are converging: under the pressure of market forces and technological advances, most of the traditional fences between business channels and IT systems are crumbling, putting the focus on the functional integration between data mining and production systems. That’s where predictive models can help, by anchoring descriptive models to moving markets and by cross-feeding analysis and operations. How that can be achieved has been the bread and butter of good corporate governance for some time, but there has been less interest in the third branch, namely how data analysis (predictive models) could “inform” business requirements (descriptive models).

From Data to Information

Facts are not given but must be captured through a symbolic description of actual observations. That entails some observer set on the task, using a mix of conceptual and technical apparatus. Data mining and requirements analysis are practical realizations of that process:

  • Data mining relies on analytic tools to extract revealing information that could be used to chart opportunities along business models.
  • Requirements analysis relies on business processes and users’ practice to extract symbolic descriptions that will be used to build models of supporting applications.

If both walk the path from data to information, their objectives are different: the former’s is to improve business decisions by making sense of actual observations; the latter’s is to build system surrogates from the symbolic descriptions of actual business objects and activities.

Anchors & Structures: Plasticity of Business Entities

Perhaps paradoxically, business agility calls for terra firma because nimble trades must be rooted in corporate identity and business continuity. As a consequence, the first step of requirements analysis should be to associate individual business objects or activities with stable and consistent identification mechanisms, and to group them with regard to those mechanisms:

  • External entities with natural (person) or designed identity (car).
  • Symbolic entities for roles (customer) or commitments (maintenance contract).
  • Actual activities (promotion campaign) and events (sale) or business logic (promotion).

Anchors

Conversely, as the aim of data analysis is to explore every business angle, individual observations are supposed to be moved across groups; yet, since the units identified by data analysis will have to be aligned with the ones described by requirements analysis, moves must also keep track of identities. That dilemma between continuity of identified structures on one side, plasticity of functional aspects on the other side, can be illustrated by banks which, in response to marketing requirements, had to shift from account (internal identification) to customer (external identification) based systems.


It’s easier to market insurance from customer centered systems (right) than from account centered ones (left)

That challenge can be overcome by linking the identification of symbolic entities to external anchors.
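
A bare-bones illustration of that anchoring principle (names and attributes are assumptions made up for the example):

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class ExternalAnchor:
        """Identification rooted outside the system, e.g a person's national id or a car's VIN."""
        scheme: str    # e.g "VIN", "national-id"
        value: str

    @dataclass
    class SymbolicEntity:
        """System surrogate whose identity is tied to an external anchor, not to its functional grouping."""
        anchor: ExternalAnchor
        role: str                                      # functional aspect, free to change
        features: dict = field(default_factory=dict)   # plastic attributes explored by data analysis

    # The same anchored individual can be moved across groups without losing its identity:
    alice = SymbolicEntity(ExternalAnchor("national-id", "FR-123"), role="prospect")
    alice.role = "customer"    # marketing reclassification; the anchor, hence the identity, is unchanged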

Profiles & Features: Versatility of Business Opportunities

As noted above, requirements and data analysis are set on the same road but driven by different forces: the former tries to group individuals with regard to identification mechanisms before fleshing them out with relevant features; the latter tries to group individuals with given identities according to features and opportunity profiles. Yet, what could appear as collision courses may become a meeting of minds if both courses are charted with regard to variants analysis.

From the requirements perspective the primary concern is to distinguish between structural and functional variants:

  • Structural variants are bound to identities, i.e set up-front for the respective life-cycle of individual business objects or transactions. As a consequence they cannot be changed without undermining business continuity. Moreover, being part and parcel of descriptors (e.g  types and use cases) their change will affect engineering processes.
  • Functional variants may vary during the respective life-cycle of individual business objects or transactions. As a consequence they can be changed without undermining business continuity, and changes in descriptors (e.g partitions and scenarii) can be managed without affecting engineering processes.
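
The distinction can be illustrated with a toy Python example (the business attributes are, of course, assumptions): structural variants sit in an immutable part set at creation, functional variants in a mutable one.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CarStructure:
        """Structural variants: bound to identity, set up-front for the whole life-cycle."""
        vin: str
        model: str

    @dataclass
    class CarUsage:
        """Functional variants: may change during the life-cycle without breaking continuity."""
        structure: CarStructure
        owner: str = ""
        usage_profile: str = "private"    # e.g "private", "rental", "fleet"

    car = CarUsage(CarStructure(vin="VF1-0001", model="sedan"))
    car.usage_profile = "rental"          # allowed: functional change
    # car.structure.model = "coupe"       # would raise FrozenInstanceError: structural change forbidden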

From the data mining perspective the objective is to improve the benefits of information systems for decision-making processes:

  • Static: how to classify individuals so as to reduce the uncertainty of predictions.
  • Dynamic: how to classify business options so as to reduce the uncertainty of decisions.

Since those objectives are set for individuals, constraints on continuity and consistency can be dealt with independently of the description of symbolic surrogates.


Identified individuals with profiles for customers (a), their behaviors (b), and promotional gestures (c)

It ensues that perspectives can be adjusted by factoring out the constraints of continuity and consistency for business objects (e.g cars), agents (e.g customer) and processes (e.g repair). Profiles for agents (a), behaviors (b), and business options (c) could then be freely explored and tailored with regard to changes in business environment and objectives.

Applying Data Analysis to Requirements

Not surprisingly data analysis techniques can be used to adjust perspectives. For that purpose a sample of individuals (business objects and operations) representing the population targeted by requirements would have to be submitted to basic mining routines. Borrowing a catalog from F. Provost & T. Fawcett:

  1. Classification: estimates the probability for each individual (objects or operations) to belong to a set of classes; can be used to assess the closeness of the variants (respectively power-types or execution paths) identified by requirements analysis.
  2. Regression: reverse classification; estimates how much of individual features valuations can be explained by the proposed classifications.
  3. Similarity: a shallow version of classification; can be used to assess the distance between variants and consolidate the proposed classifications.
  4. Clustering: a deep version of classification; can be used to distinguish between shallow and natural classifications.
  5. Co-occurrence: deals with behavioral variants; can be used to distinguish between functional and structural classifications.
  6. Profiling: reverse of co-occurrence; can be used to consolidate functional and structural classifications.
  7. Links prediction: can be used to define relationships.
  8. Data reduction: eliminates redundant individuals; can be used to consolidate requirements and refine test scenarii.
  9. Causal modeling: brings together business logic (events and rules) and users’ decisions; should provide the backbone of test scenarii.
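
As a minimal sketch of how such routines could be run on a requirements sample (assuming scikit-learn is available and the sampled objects have already been encoded as numeric features; all figures are made up):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # Each row stands for a sampled business object or operation, each column for an encoded feature.
    sample = np.array([
        [0, 1, 0.2], [0, 1, 0.3], [1, 0, 0.9],
        [1, 0, 0.8], [0, 1, 0.1], [1, 1, 0.5],
    ])

    n_variants = 2   # number of variants (power-types or execution paths) proposed by requirements analysis
    labels = KMeans(n_clusters=n_variants, n_init=10, random_state=0).fit_predict(sample)

    # A high silhouette score suggests the proposed variants match natural groupings in the data;
    # a low one calls for revisiting the proposed partitions.
    print(labels, silhouette_score(sample, labels))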

Besides the direct benefits for requirements, such procedures may help to bridge the span between data and requirements analysis and significantly improve processes’ capability and maturity level.

Business Objectives & Enterprise Architecture Capabilities

Data mining being first and foremost about competitive edge, it relies on a timely and effective coupling between enterprise capabilities and business opportunities. But the dilemma between continuity and plasticity described above for business objects and processes reappears at enterprise level: how to reconcile architecture, by nature perennial, with the agility needed to make the best of changing and competitive environments ?

As an architectural big bang is arguably a last resort option, answers to that question must be progressive and local: if changes are to be swift and pertinent they must be both circumscribed and leveraged to the relevant parts of the architecture. Taking an (amended) leaf from the Zachman framework, its sixth column (“Why”) could be reset as a line for business and operational objectives that would cross the original five columns instead of the architecture layers. Using a pentagonal representation of enterprise architecture, that line would be set as circling the outer range.

Enterprise Architecture and the loci of change

It is worth noting that setting objectives on a line crossing the columns of capabilities, instead of a column crossing the lines of layers, means that objectives are set at enterprise level and their cascading impact traced and managed through layers.

Conceptual Models & Business Contexts

But even that updated framework doesn’t take into account the fundamental changes in business environments. Once secure behind organizational and technical fences, enterprises must now navigate through open digitized business environments and markets. For business processes it means a seamless integration with supporting applications; for corporate governance it means keeping track of heterogeneous and changing business contexts and concerns while assessing the capability of organizations and systems to cope, adjust, and improve.

As long as environments were a hotchpotch of actual and symbolic artifacts the pros and cons of integration could be balanced. But the generalization of digital flows and transactions has upended the balance: there is no more room or time for latency and enterprises must bring all symbolic representations (business, organization, and systems) under a common conceptual roof:


Conceptual models as bridges between business processes and systems.

A canonical approach would be to introduce a conceptual indexing scheme, open to extensions but with its footprint defined by business processes and systems functionalities. That would ensure a better integration of processes and supporting engineering but would do nothing for the modeling gap between enterprise architecture and external contexts. That gap could be bridged with ontologies.

Conceptual Loop: Ontologies & Business Intelligence

As far as data mining is concerned, three kinds of operations are to be considered:

  • Data understanding gives form and semantics to raw material.
  • Business understanding charts business contexts and concerns in terms of objects and processes descriptions.
  • Modeling consolidates data and business understanding into descriptive, predictive, and operational models.

The aim of data mining is to refine raw data into meaningful information

While traditional approaches fall short when tasked with iterative modeling of unstructured data, ontologies may fare better because their explicit aim is only to describe what could exist in a domain of discourse:

  1. They are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. They are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. They are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

With regard to models, only the second point puts ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes. It ensues that ontologies can be understood as conceptual (aka canonical) models, used as such for business analysis and extended with purposes for systems analysis and design.

In addition to the integration with enterprise architectures, ontologies’ benefits for business intelligence would be twofold.

On one side, and whatever their use, ontologies could be aligned with the nature of contexts and their impact on business and enterprise governance e.g:

  • Institutional: mandatory semantics sanctioned by regulatory authority, steady, changes subject to established procedures.
  • Professional: agreed upon semantics between parties, steady, changes subject to established procedures.
  • Corporate: enterprise defined semantics, changes subject to internal decision-making.
  • Social: pragmatic semantics, no authority, volatile, continuous and informal changes.
  • Personal: customary semantics defined by named individuals.

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

On the other side ontologies could be defined according to the nature of targeted items, namely terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:

  • Thesaurus: ontologies covering terms and concepts.
  • Document Management: ontologies covering documents with regard to topics.
  • Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
  • Engineering: ontologies pertaining to the symbolic representation of products and services.

Ontologies: Purposes & Targets

On a broader perspective, ontologies could be influential in spanning the gap between explicit and implicit knowledge. In a few years’ time practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, ontologies would be decisive in framing the understanding, assessment, and improvement of the processes.

Operational Loop: Business Intelligence & Decision-making

Once carried out separately and periodically, decision-making is now to be conducted iteratively at operational, tactical, and strategic levels; while each level is to be set along its own time-frames, all are to rely on data mining, with cycles following the same pattern:

  1. Observation: understanding of changes in business opportunities.
  2. Orientation: assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations.
  3. Decision: weighting of options with regard to enterprise capabilities and broader objectives.
  4. Action: carrying out of decisions within the relevant time-frame.
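
A minimal rendering of that cycle (function names are placeholders for whatever analytics and enactment mechanisms are actually in place):

    from typing import Callable, Iterable

    def decision_cycle(events: Iterable[dict],
                       observe: Callable[[dict], dict],
                       orient: Callable[[dict], dict],
                       decide: Callable[[dict], dict],
                       act: Callable[[dict], None]) -> None:
        """Observation -> Orientation -> Decision -> Action, iterated along the relevant time-frame."""
        for event in events:
            change = observe(event)              # understanding of changes in business opportunities
            assessment = orient(change)          # reliability and shelf-life of the information
            if assessment.get("reliable", False):
                act(decide(assessment))          # weight the options, then carry out the decision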

Given the new business playground, decision-making processes have to weave together material and digitized flows, actual contexts (aka territories) and symbolic descriptions (maps), and overlapping time-frames (operational, tactical, strategic). That operational loop could then be coupled with the broader one of business intelligence:


Integration of operational analytics, business intelligence, and decision-making.

Selected Readings

Detour from Turing Game

February 20, 2015

Summary

Considering Alan Turing’s question, “Can machines think ?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine ?


Alan Turing at 4

What happens when people interact ?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.


Conveying intentions

Assuming a difference in the way this is carried on by people and machines, could that difference be observed at message level ?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g alarm calls directly and immediately bound to an imminent menace) can be carried out independently of knowledge representation (e.g information related to a danger not directly observable); in other words, linguistic capabilities for communication and for symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who): can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): contents exchanged (object, information, request, …) are not contingent on participants’ type.
  • Life-span of message (When): life-cycle (instant, limited, unlimited, …) is not contingent on participants’ type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on participants’ type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants’ type.

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what the things can do or can be done with.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one KR prerequisite is to improve the communication between specific domain experts on one hand, and generic knowledge managers on the other hand.

On that basis knowledge representation capabilities cannot be used to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, and given scenarii scripted adequately, it would be possible to build decision models able to provide unambiguous answers.
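
A toy sketch of such a decision model (the cues and the threshold are illustrative assumptions, not an actual test):

    # Scripted probes mapped to the cues that would betray a machine participant.
    PROBE_CUES = {
        "Who do you know?":         "previous participants only",
        "What do you know?":        "only what it has been told, directly or through learning",
        "When did/will you know?":  "own clock or transactional events only",
        "Where did/will you know?": "locations from past or planned communications only",
        "How do you know?":         "full trace of the learning process",
    }

    def assess(answers: dict) -> str:
        """Count how many answers match machine-specific cues; call it unambiguous if most of them do."""
        hits = sum(1 for probe, cue in PROBE_CUES.items() if cue in answers.get(probe, ""))
        return "machine" if hits >= 3 else "human or undecided"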

Reference Readings

A. M. Turing, “Computing Machinery and Intelligence”

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”

Further Reading

 

 

AI & Embedded Insanity

February 6, 2015

Summary

Bill Gates recently expressed his concerns about AI’s threats, but shouldn’t we fear insanity, artificial or otherwise ?


Human vs Artificial Insanity: chicken or egg ? (Peter Sellers as Dr. Strangelove)

Some clues to answers may be found in the relationship between purposes, designs, and behaviors of intelligent devices.

Intelligent Devices

Intelligence is generally understood as the ability to figure out situations and solve problems, with its artificial avatar turning up when such ability is exercised by devices.

Devices being human artifacts, it’s safe to assume that their design can be fully accounted for, and their purposes wholly exhibited and assessed. As a corollary, debates about AI’s threats should distinguish between harmful purposes (a moral issue) on one hand, and faulty designs, flawed outcomes, and devious behaviors (all engineering issues) on the other hand. Whereas concerns for the former could arguably be left to philosophers, engineers should clearly take full responsibility for the latter.

Human, Artificial, & Insane Behaviors

Interestingly, the “human” adjective takes different meanings depending on its association to agents (human as opposed to artificial) or behaviors (human as opposed to barbaric). Hence the question: assuming that intelligent devices are supposed to mimic human behaviors, what would characterize devices’ “inhuman” behaviors ?


How to characterize devices’ inhuman behaviors ?

From an engineering perspective, i.e moral issues being set aside, a tentative answer would point to some flawed reasoning, commonly described as insanity.

Purposes, Reason, Outcomes & Behaviors

As intelligence is usually associated with reason, flaws in the design of reasoning capabilities are where to look for the primary factor of hazardous devices’ behaviors.

To begin with, the designs of intelligent devices neatly mirror human cognitive activity by combining both symbolic (processing of symbolic representations), and non symbolic (processing of neuronal connections) capabilities. How those capabilities are put to use is therefore to characterize the mapping of purposes to behaviors:

  • Designs relying on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as the knowledge governing behaviors.
  • Designs based on neuronal networks are characterized by implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback.

Symbolic (north) vs non symbolic (south) intelligence

That distinction is to guide the analysis of potential threats:

  • Designs based on symbolic representations can support both the transparency of ends and the traceability of means. Moreover, such designs allow for the definition of broader purposes, actual or social.
  • Neuronal networks make the relationships between means and ends more opaque because their learning kernels operate directly on data, with the supporting knowledge implicitly embodied as weighted connections. They make for more concrete and focused purposes, for which symbolic transparency and traceability have less of a bearing.
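
The difference in traceability can be illustrated by contrasting an explicit rule with a weight vector tuned from behavioral feedback (a toy sketch; numbers and thresholds are arbitrary):

    # Symbolic design: the knowledge governing behavior is an explicit, inspectable rule.
    def brake_rule(distance_m: float, speed_ms: float) -> bool:
        return distance_m / max(speed_ms, 0.1) < 2.0    # brake if time-to-obstacle is under 2 seconds

    # Non symbolic design: knowledge is embodied in weights tuned by feedback; only outcomes are visible.
    weights = [0.0, 0.0]

    def brake_learned(distance_m: float, speed_ms: float) -> bool:
        return weights[0] * distance_m + weights[1] * speed_ms > 0.5

    def feedback(distance_m: float, speed_ms: float, should_brake: bool, lr: float = 0.01) -> None:
        """Perceptron-style update: the weights shift with experience, the rationale stays implicit."""
        predicted = brake_learned(distance_m, speed_ms)
        error = int(should_brake) - int(predicted)
        weights[0] += lr * error * distance_m
        weights[1] += lr * error * speed_ms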

Risks, Knowledge, & Decision Making

As noted above, an engineering perspective should focus on the risks of flawed designs begetting discrepancies between purposes and outcomes. Schematically, two types of outcome are to be considered: symbolic ones are meant to feed human decision-making, and behavioral ones directly govern devices’ behaviors. For AI systems combining both symbolic and non symbolic capabilities, risks can arise from:

  • Unsupervised decision making: device behaviors directly governed by implicit knowledge (a).
  • Embedded decision making: flawed decisions based on symbolic knowledge built from implicit knowledge (b).
  • Distributed decision making: muddled decisions based on symbolic knowledge built by combining different domains of discourse (c).

Unsupervised (a), embedded (b), and distributed (c) decision making.

Whereas risks bred by unsupervised decision making can be handled with conventional engineering solutions, that’s not the case for embedded or distributed decision making supported by intelligent devices. And both risks may be increased respectively by the so-called internet of things and semantic web.

Internet of Things, Semantic Web, & Embedded Insanity

On one hand the so-called “internet second revolution” can be summarized as the end of privileged netizenship: while the classic internet limited its residency to computer systems duly identified by regulatory bodies, the new one makes room for every kind of device. As a consequence, many intelligent devices (e.g cell phones) have come out as fully fledged systems.

On the other hand the so-called “semantic web” can be seen as the symbolic counterpart of the internet of things, providing a comprehensive and consistent whole of meanings for targeted factual realities. Yet, given that the symbolic world is not flat but built on piled mazes of meanings, their charting is to be contingent on projections with dissonant semantics. Moreover, as meanings are not supposed to be set in stone, semantic configurations have to be continuously adjusted.

That double trend clearly increases the risks of flawed outcomes and erratic behaviors:

  • Holed up sources of implicit knowledge are bound to increase the hazards of unsupervised behaviors or propagate unreliable information.
  • Misalignment of semantics domains may blur the purposes of intelligent apparatus and introduce biases in knowledge processing.

But such threats are less intrinsic to AI than caused by the way it is used: insanity is more likely to spring from the ill-designed integration of intelligent and reasonable systems into the social fabric than from insane ones.

Further Readings

Events & Decision-making

September 9, 2014

Objective

Between the Internet-of-Things and ubiquitous social networks, enterprises’ environments are turning into unified open spaces, transforming the divide between operational and decision-making systems into a pitfall for corporate governance. That jeopardy can be better understood when one considers how the processing of events affects decision-making.


Making sense of events (J.W. Waterhouse)

Events & Information Processing

Enterprises’ success critically depends on their ability to track, understand, and exploit changes in their environment; hence the importance of a fast, accurate, and purpose-driven reading of events.

That is to be achieved by picking the relevant facts to be tracked, capturing the associated data, processing the data into meaningful information, and finally putting that information into use as knowledge.

From Facts to Knowledge and Back

From Facts to Knowledge and Back

Those tasks have to be carried out iteratively, dealing with both external and internal events:

  • External events are triggered by changes in the state of actual objects, activities, and expectations.
  • Internal events are triggered by the ensuing changes in the symbolic representations of objects and processes as managed by systems.

With events set at the root of the decision-making process, they will also define the time frames.

Events & Decisions Time Frames

As a working hypothesis, decision-making can be defined as real-time knowledge management:

  • To begin with, a real-time scale is created by new facts (t1) registered through the capture of events and associated data (t2).
  • A symbolic intermezzo is then introduced during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4);
  • The real-time scale completes with decision enactment and corresponding change in facts (t5).

Time scale of decision making (real time events are in red, symbolic ones in blue)
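
That real-time scale lends itself to a simple data structure checking the ordering of events (field names follow the figure; this is a sketch, not a prescribed schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class DecisionTimeline:
        t1: datetime   # new fact occurs (real time)
        t2: datetime   # event and associated data captured
        t3: datetime   # information updated after analysis (symbolic)
        t4: datetime   # knowledge extracted, decision taken (symbolic)
        t5: datetime   # decision enacted, facts changed (real time)

        def is_consistent(self) -> bool:
            """Events must follow the real-time / symbolic ordering of the cycle."""
            return self.t1 <= self.t2 <= self.t3 <= self.t4 <= self.t5

        def latency(self) -> float:
            """Overall delay, in seconds, between the new fact and its enacted consequences."""
            return (self.t5 - self.t1).total_seconds()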

The next step is to bring together events and knowledge.

Events & Changes in Knowns & Unknowns

As Donald Rumsfeld once suggested, decision-making is all about the distinction between things we know that we know, things that we know we don’t know, and things we don’t know we don’t know. And that classification can be mapped to the nature of events and the processing of associated data:

  • Known knowns (KK) are traced through changes in already defined features of identified objects, activities or expectations. Corresponding external events are expected and the associated data can be immediately translated into information.
  • Known unknowns (KU) are traced through changes in still undefined features of identified objects, activities or expectations. Corresponding external events are unexpected and the associated data cannot be directly translated into information.
  • Unknown unknowns (UU) are traced through changes in still undefined objects, activities or expectations. Since the corresponding symbolic representations are still to be defined, both external and internal events are unexpected.

Knowledge lifespan is governed by external events

Given that decisions are by nature set in time-frames, they should be mapped to changes in environments, or more precisely to the information carried by the events taken into consideration.

Knowledge & Decision Making

Events bisect time-scales between before and after, past and future; as a corollary, the associated information (or lack thereof) about changes can be neatly allocated to the known and unknown of current and prospective states of affairs.

Changes in the current states of affairs are carried by external events:

  • Known knowns (KK): when events are about already defined features of objects, activities or expectations, the associated data can be immediately used to update the states of their symbolic representation.
  • Known unknowns (KU): when events entail new features of already defined objects, activities or expectations, the associated data must be analyzed in order to adjust existing symbolic representations.
  • Unknown unknowns (UU): when events entail new objects, activities or expectations, the associated data must be analyzed in order to build new symbolic representations.

As changes in current states of affairs are shadowed by changes in their symbolic representation, they generate internal events which in turn may trigger changes in prospective states of affairs:

  • Known knowns (KK): updating the states of well-defined objects, activities or expectations may change the course of action but should not affect the set of possibilities.
  • Known unknowns (KU): changes in the set of features used to describe objects, activities or expectations may affect the set of tactical options, i.e ones that can be set within individual production life-cycles.
  • Unknown unknowns (UU): introducing new types of objects, activities or expectations is bound to affect the set of strategic options, i.e ones that encompass multiple production life-cycles.

Interestingly, those levels of knowledge appear to be congruent with usual horizons in decision-making: operational, tactical, and strategic:


The scope of decision-making is set by knowledge level

  • Operational: full information on actual states allows for immediate appraisal of prospective states.
  • Tactical: partially defined actual states allow for periodic appraisal of prospective states in synch with production cycles.
  • Strategic: undefined actual states don’t allow for periodic appraisal of prospective states in synch with production cycles; their definition may also be affected through feedback.
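
That congruence can be written down as a simple mapping (a sketch following the discussion above; the labels are the document's, the structure an assumption):

    from enum import Enum

    class Knowledge(Enum):
        KK = "known knowns"        # defined features of identified objects, activities, expectations
        KU = "known unknowns"      # still undefined features of identified items
        UU = "unknown unknowns"    # items whose symbolic representations are still to be defined

    # Knowledge levels mapped to decision-making horizons.
    HORIZON = {
        Knowledge.KK: "operational",   # immediate appraisal of prospective states
        Knowledge.KU: "tactical",      # periodic appraisal, in synch with production cycles
        Knowledge.UU: "strategic",     # appraisal across multiple cycles, definitions may change
    }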

Given that those levels of appraisal are based on conjectural information (internal events) built from fragmentary or fuzzy data (external events), they have to be weighted by risks.

Weighting the Risks

Perfect information would guarantee a risk-free future and would render decision-making pointless. As a corollary, decisions based on unreliable information entail risks that must be traced back accordingly:

  • Operational: full and reliable information allows for risk-free decisions.
  • Tactical: when bounded by well-defined contexts with known likelihoods, partial or uncertain information allows for weighted costs/benefits analysis.
  • Strategic: set against undefined contexts or unknown likelihoods decision-making cannot fully rely on weighted costs/benefits analysis and must encompass policy commitments, possibly with some transfer of risks, e.g through insurance contracts.

That provides some kind of built-in traceability between the nature and likelihood of events, the reliability of information, and the risks associated to decisions.

Decision Timing

Considering decision-making as real-time knowledge management driven by external (aka actual) events and governed by internal (aka symbolic) ones, how would that help to define decisions time frames ?

To begin with, such time frames would ensure that:

  • All the relevant data is captured as soon as possible (t1→t2).
  • All available data is analyzed as soon as possible (t2→t3).
  • Once a decision has been made, nothing can change during the interval between commitment and action (respectively t4 and t5).

Given those constraints, the focus of timing is to be on the interval between change in prospective states (t3) and decision (t4): once all information regarding prospective states is available, how long to wait before committing to a decision ?

Assuming that decisions are to be taken at the “last responsible moment”, i.e not before further delay would change the possible options, that interval will depend on the nature of decisions:

  • Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within production life-cycles.
  • Tactical decisions can only be enacted at the start of production cycles, using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be taken from one cycle to the next. Otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment.
  • Strategic decisions are meant to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
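
For the tactical case, the last responsible moment could be computed along these lines (a sketch under the assumption that cycle start, analysis, and enactment durations are known):

    from datetime import datetime, timedelta

    def last_responsible_moment(next_cycle_start: datetime,
                                enactment_time: timedelta,
                                analysis_time: timedelta = timedelta(0)) -> datetime:
        """Latest moment to commit a tactical decision: start of the next production cycle,
        minus the time needed to analyze consolidated inputs and enact the decision."""
        return next_cycle_start - enactment_time - analysis_time

    # Example: cycle starts at 08:00, enactment takes 4 hours, analysis 2 hours.
    deadline = last_responsible_moment(datetime(2014, 9, 15, 8, 0),
                                       enactment_time=timedelta(hours=4),
                                       analysis_time=timedelta(hours=2))
    # deadline == datetime(2014, 9, 15, 2, 0): commit no later than 02:00 that day.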

The scope of decision-making should be aligned with architecture layers

Not surprisingly, when the scope of decision-making is set by knowledge level, it appears to coincide with architecture layers: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources. While that clearly calls for more verification and refinements, such congruence puts events processing, knowledge management, and decision-making within a common perspective.

Further Reading

Semantic Web: from Things to Memes

August 10, 2014

The new soup is the soup of human culture. We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme…

Richard Dawkins

The genetics of words

The word meme is the brainchild of Richard Dawkins in his book The Selfish Gene, published in 1976, well before the Web and its semantic soup. The emergence of the ill-named “internet-of-things” has brought about a new perspective on Dawkins’ intuition: given the clear divide between actual worlds and their symbolic (aka web) counterparts, why not chart human culture with internet semantics ?


Symbolic Dissonance: Flowering Knives (Adel Abdessemed).

With interconnected digits pervading every nook and cranny of material and social environments, the internet may be seen as a way to a comprehensive and consistent alignment of language constructs with targeted realities: a name for everything, everything with its name. For that purpose it would suffice to use the web to allocate meanings and don things with symbolic clothes. Yet, as the world is not flat, the charting of meanings will be contingent on projections with dissonant semantics. Conversely, as meanings are not supposed to be set in stone, semantic configurations can be adjusted continuously.

Internet searches: words at work

Semantic searches (as opposed to form or pattern based ones) rely on textual inputs (key words or phrases) aiming at a specific reality or at information about it:

  • Searches targeting reality are meant to return sets of instances (objects or phenomena) meeting users’ needs (locations, people, events, …).
  • Searches targeting information are meant to return documents meeting users’ interest for specific topics (geography, roles, markets, …).

Looking for information or instances.

Interestingly, the distinction between searches targeting reality and information is congruent with the rhetorical one between metonymy and metaphor, the former best suited for things, the latter for meanings.

Rhetoric: Metonymy & Metaphor

As noted above, searches can be guided by references to identified objects, by the form of digital objects (sound, visuals, or otherwise), or by associations between symbolic representations. Considering that finding referenced objects is basically a technical problem, and that pattern matching is a discipline of its own, the focus is to be put on the third case, namely searches driven by words. From that standpoint searching the web becomes a problem of rhetoric, namely: how to use language to get rapidly and effectively the most accurate outcome to a query. And for that purpose rhetoric provides two basic contraptions: metonymy and metaphor.

Both metonymy and metaphor are linguistic constructs used to substitute a word (or a phrase) by another without altering its meaning. When applied to searches, they are best understood in terms of extensions and intensions, extensions standing for the actual set of objects and behaviors, and intensions for the set of features that characterize these instances.

Metonymy uses contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each term can be used in place of the others.


Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, the others being ignored. Hence, in contrast to metonymy, metaphor is based on intensions.


Metaphors use analogy (dashed line) to map terms whose intensions (dotted line) share a selected subset of features

As it happens, and not by chance, those rhetorical constructs can be mapped to categories of searches:

  • Metonymy will be used to navigate across instances of things and phenomena following structural, functional, or temporal associations.
  • Metaphors will be used to navigate across terms and concepts according to similarities, ontologies, and abstractions.
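
A toy sketch of those two navigation modes, with extensions as sets of instances and intensions as sets of features (all data is made up for the example):

    # Extensions: instances, plus a contiguity relation between them; intensions: features of terms.
    extensions = {
        "US President": {"Obama"},
        "White House":  {"White House building"},
    }
    contiguous = {("Obama", "White House building")}    # physical / functional proximity
    intensions = {
        "chess": {"two sides", "strategy", "conflict"},
        "war":   {"two sides", "strategy", "conflict", "casualties"},
    }

    def metonymic_match(a: str, b: str) -> bool:
        """Terms can substitute for each other when some of their instances are contiguous."""
        return any((x, y) in contiguous or (y, x) in contiguous
                   for x in extensions.get(a, set()) for y in extensions.get(b, set()))

    def metaphoric_match(a: str, b: str, threshold: int = 2) -> bool:
        """Terms are mapped when their intensions share enough features, the rest being ignored."""
        return len(intensions.get(a, set()) & intensions.get(b, set())) >= threshold

    print(metonymic_match("US President", "White House"))   # True: navigation across instances
    print(metaphoric_match("chess", "war"))                  # True: navigation across meanings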

As a corollary, searches can be seen as scaffolds supporting the building of meanings.

Selected metaphors are used to extract occurrences to be refined using metonymies.

The building of meanings, back and forth between metonymies and metaphors

Memes & their making

Today general purpose search engines combine brains and brawn to match queries to references, the former taking advantage of language parsers and statistical inferences, the latter running heuristics on gigantic repositories of data and searches. Given the exponential growth of accumulated data and the benefits of hindsight, such engines can continuously improve the relevancy of their answers; moreover, their advances are not limited to accuracy and speed but also embrace a better understanding of queries. And that brings about a qualitative change: accruing improved understandings to knowledge bases provides search engines with learning capabilities.

Assuming that such learning is autonomous, self-sustained, and encompasses concepts and categories, the Web could be seen as a semantic incubator for the development of meanings. That would vindicate Dawkins’ intuition comparing the semantic evolution of memes to the Darwinian evolution of living organisms.

Further Reading

External Links

Governance, Regulations & Risks

July 16, 2014

Governance & Environment

Confronted with spreading and sundry regulations on one hand, the blurring of enterprise boundaries on the other hand, corporate governance has to adapt information architectures to new requirements with regard to regulations and risks. Interestingly, those requirements seem to be driven by two different knowledge policies: what should be known with regard to compliance, and what should be looked for with regard to risk management.


Governance (Zhigang-tang)

 Compliance: The Need to Know

Enterprises are meant to conform to rules, some set at corporate level, others set by external entities. If one may assume that enterprise agents are mostly aware of the former, that’s not necessarily the case for the latter, which means that information and understanding are prerequisites for regulatory compliance:

  • Information: the relevant regulations must be identified, collected, and their changes monitored.
  • Understanding: the meanings of regulations must be analyzed and the consequences of compliance assessed.

With regard to information processing capabilities, it must be noted that, since regulations generally come as well structured information with formal meanings, the need for data processing will be limited, if at all.

With regard to governance, given the pervasive sources of external regulations and their potentially crippling consequences, the challenge will be to circumscribe the relevant sources and manage their consequences with regard to business logic and organization.


Regulatory Compliance vs Risks Management

 

Risks: The Will to Know

Assuming that the primary objective of risk management is to deal with the consequences (positive or negative) of unexpected events, its information priorities can be seen as the opposite of the ones of regulatory compliance:

  • Information: instead of dealing with well-defined information from trustworthy sources, risk management must process raw data from ill-defined or unreliable origins.
  • Understanding: instead of mapping information to existing organization and business logic, risk management will also have to explore possible associations with still potentially unidentified purposes or activities.

In terms of governance, risks management can therefore be seen as the mirror image of regulatory compliance: the former relies on processing data into information and expanding the scope of possible consequences, the latter on translating information into knowledge and reducing the scope of possible consequences.


With regard to regulations governance is about reduction, with regard to risks it’s about expansion

Not surprisingly, that understanding coincides with the traditional view of governance as a decision-making process balancing focus and anticipation.

Decision-making: Framing Risks and Regulations

As noted above, regulatory compliance and risk management rely on different knowledge policies, the former restrictive, the latter inclusive. That distinction also coincides with the type of factors involved and the type of decision-making:

  • Regulations are deontic constraints, i.e ones whose assessment is not subject to enterprises decision-making. Compliance policies will therefore try to circumscribe the footprint of regulations on business activities.
  • Risks are alethic constraints, i.e ones whose assessment is subject to enterprise decision-making. Risks management policies will therefore try to prepare for every contingency.

Yet, when set in a governance perspective, that picture can be misleading because regulations are not always mandatory, and even mandatory ones may leave room for compliance adjustments. And when regulations are elective, compliance is less driven by sanctions or penalties than by the assessment of business or technical alternatives.

Regulations & Risks : decision patterns

Decision patterns: Options vs Arbitrage

Conversely, risks do not necessarily arise from unidentified events and upshots but can also come from well-defined outcomes with unknown likelihood. Managing the latter will not be very different from dealing with elective regulations, except that decisions will be about weighted opportunity costs instead of business alternatives. Similarly, managing risks from unidentified events and upshots can be compared to compliance with mandatory regulations, with insurance policies instead of compliance costs.

What to Decide: Shifting Sands

As regulations can be elective, risks can be interpretative: with business environments relocated to virtual realms, decision-making may easily turn into crisis management based on conjectural and ephemeral web-driven semantics. In that case the ensuing overlaps between regulations and risks can only be managed if data analysis and operational intelligence are seamlessly integrated with production systems.

When to Decide: Last Responsible Moment

Finally, with the scope of regulations and the weighting of risks duly assessed, one has to consider the time-frames of decisions about compliance and commitments.

Regarding elective regulations and defined risks, the time-frame of decisions is set at enterprise level in so far as options can be directly linked to business strategies and policies. That’s not the case for compliance to mandatory regulations or commitments exposed to undefined risks since both are subject to external contingencies.

Whatever the source of the time-frame, the question is when to decide, and the answer is at the “last responsible moment”, i.e. not before delaying any further would start to eliminate possible options:

  • Whether elective or mandatory, the “last responsible moment” for compliance decisions is static because the parameters are known.
  • Whether defined or not, the “last responsible moment” for commitments exposed to risks is dynamic because the parameters have to be reassessed periodically or continuously.

Compliance and risk taking: last responsible moments to decide
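
As a tentative illustration of that static/dynamic distinction, the “last responsible moment” can be computed once for compliance but has to be recomputed for risk-exposed commitments (the names, dates, and option model below are assumptions, not part of the original):

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Iterable

@dataclass
class Option:
    name: str
    open_until: date  # date after which the option is foreclosed

def last_responsible_moment(options: Iterable[Option]) -> date:
    """Latest date at which deciding still leaves every option open."""
    return min(o.open_until for o in options)

# Compliance (static): parameters are known, the moment is computed once.
compliance_lrm = last_responsible_moment([
    Option("comply in-house", date(2016, 6, 30)),
    Option("outsource compliance", date(2016, 4, 15)),
])

# Commitments exposed to risks (dynamic): the option set is reassessed, so
# the moment has to be recomputed periodically or continuously.
def current_lrm(reassess_options: Callable[[], Iterable[Option]]) -> date:
    return last_responsible_moment(reassess_options())
```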

One step ahead along that path of reasoning, the ultimate challenge of regulatory compliance and risk management would be to use the former to steady the latter.

Further Readings

EA: Entropy Antidote

June 24, 2014

Cybernetics & Governance

When seen through cybernetics glasses, enterprises are social entities whose sustainability and capabilities hang on their ability to track changes in their environment and exploit opportunities before their competitors. As a corollary, corporate governance is to be contingent on fast, accurate and purpose-driven reading of environments on the one hand, and effective use of assets on the other.


Entropy grows from confusion (László Moholy-Nagy)

And that will depend on enterprises’ capacity to capture data, process it into information, and translate information into knowledge supporting decision-making. Since that capacity is itself determined by architectures, a changing and competitive environment will require continuous adaptation of enterprises’ organization. That’s when disorder and confusion may increase: unless a robust and flexible organization can absorb and consolidate changes, variety will progressively clog the systems with specific information associated with local adjustments.

Governance & Information

Whatever its type, effective corporate governance depends on timely and accurate information about the actual state of assets and environments. Hence the need to assess such capabilities independently of the type of governance structure that has to be supported, and of any specific business context.


Effective governance is contingent on the distance between actual state of assets and environment on one hand, relevant information on the other hand.

That puts the focus on the processing of information flows supporting the governance of interactions between enterprises and their environment:

  • How to identify the relevant facts and monitor them as accurately and promptly as required.
  • How to process external data from environment into information, and to consolidate the outcome with information related to enterprise objectives and internal states.
  • How to put the consolidated information to use as knowledge supporting decision-making.
  • How to monitor processes execution and deal with relevant feedback data.

What is behind enterprises’ ability to track changes in their environment and exploit opportunities.

Enterprises being complex social constructs, those tasks can only be carried out through structured organization and communication mechanisms supporting the processing of information flows.

Architectures & Changes

Assuming that enterprise governance relies on accurate and timely information with regard to internal states and external environments, the first step would be to distinguish between the descriptions of actual contexts on the one hand and symbolic representations on the other.


Enterprise architectures can be described along two dimensions: nature (actual or symbolic), and target (objects or activities).

Even for that simplified architecture, assessing variety and information processing capabilities in absolute terms would clearly be a challenge. But assessing variations should be both easier and more directly useful.

Change being by nature relative to time, the first thing is to classify changes with regard to time-frames:

  • Operational changes occur, and can be dealt with, within the time-frame of processes execution.
  • Structural changes affect contexts and assets and cannot be dealt with at process level as they outlast the time-frame of processes execution.

On that basis the next step will be to examine the tie-ups between actual changes and symbolic representations:

  • From actual to symbolic: how changes in environments are taken into account; how processes execution and state of assets are monitored.
  • From symbolic to actual: how changes in business models and processes design are implemented.

What moves first: actual contexts and processes or enterprise abstractions

The effects of  those changes on overall governance capability will depend on their source (internal or external) and modality (planned or not).

Changes & Information Processing

As far as enterprise governance is considered, changes can be classified with regard to their source and modality.

With regard to source:

  • Changes within the enterprise are directly meaningful (data>information), purpose-driven (information>knowledge), and supposedly manageable.
  • Changes in the environment are not under control; they may need interpretation (data<?>information), and their consequences or use are to be explored (information<?>knowledge).

With regard to modality:

  • Data associated with planned changes are directly meaningful (data>information) whatever their source (internal or external); internal changes can also be directly associated with purpose (information>knowledge);
  • Data associated with unplanned internal changes can be directly interpreted (data>information) but their consequences have to be analyzed (information<?>knowledge); data associated with unplanned external changes must be interpreted (data<?>information).

Changes can be classified with regard to their source (enterprise or environment) and modality (planned or not).
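
A minimal sketch of that taxonomy, assuming nothing beyond the two lists above (names and flag semantics are illustrative):

```python
from enum import Enum
from typing import NamedTuple

class Source(Enum):
    ENTERPRISE = "enterprise"
    ENVIRONMENT = "environment"

class Modality(Enum):
    PLANNED = "planned"
    UNPLANNED = "unplanned"

class Processing(NamedTuple):
    interpretation_needed: bool  # data <?> information
    exploration_needed: bool     # information <?> knowledge

def processing_for(source: Source, modality: Modality) -> Processing:
    """Processing requirements implied by the source/modality taxonomy."""
    if modality is Modality.PLANNED:
        # Planned changes are directly meaningful; only internal ones are
        # directly associated with purpose.
        return Processing(False, source is Source.ENVIRONMENT)
    if source is Source.ENTERPRISE:
        # Unplanned internal changes: meaningful data, consequences to analyze.
        return Processing(False, True)
    # Unplanned external changes: data must be interpreted before use.
    return Processing(True, True)
```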

Assuming with Stafford Beer that viable systems must continuously adapt their capabilities to their environment, this taxonomy has direct consequences for their governance:

  • Changes occurring within planned configurations are meant to be dealt with directly (when stemming from within the enterprise) or through enterprise adjustments (when set in its environment).
  • That assumption cannot be made for changes occurring outside planned configurations because the associated data will have to be interpreted and consequences identified prior to any decision.

Enterprise governance will therefore depend on the way those changes are taken into account, and in particular on the capability of enterprise architectures to process the flows of associated data into information, and to use it to deal with variety.

EA & Models

Originally defined by thermodynamics as a measure of heat dissipation, the concept of entropy has been taken over by cybernetics as a measure of the (supposedly negative) variation in the value of information supporting corporate governance.

As noted above, the key challenge is to manage the relevancy and timely interpretation and use of the data, in particular when new data cannot be mapped into a predefined semantic frame, as may happen with unplanned changes in contexts. How that can be achieved will depend on the processing of data and its consolidation into information as carried out at enterprise level or by business and technical units.

Given that data is captured at the periphery of systems, one may assume that the monitoring of operations performed by business and technical units is not to be significantly affected by architectures. The same assumption can be made for market research, meant to be carried out at enterprise level.

Architecture Layers and Information Processing

Under that working assumption, the focus is to be put on the capability of enterprise architectures to “read” environments (from data to information) as well as to “update” themselves (putting information to use as knowledge).

With regard to “reading” capabilities the primary factor will be traceability:

  • At technical level traceability between components and applications is required if changes in business operations are to be mapped to IT architecture.
  • At organizational level, the critical factor for governance will be the ability to adapt the functionalities of supporting systems to changes in business processes. And that will be significantly enhanced if both can be mapped to shared functional concepts.

Once the “readings” of external changes are properly interpreted with regard to internal assets and objectives, enterprise governance will have to decide if changes can be dealt with by the current architecture or if it has to be modified. Assuming that change management is an intrinsic part of enterprise governance, “updating” capabilities will rely on a continuous, comprehensive and consistent management of information, best achieved through models, as epitomized by the Model Driven Architecture (MDA) framework.


Models as bridges between data and knowledge

Based on requirements capture and analysis, respective business, functional, and technical information is consolidated into models:

  • At technical level, platform specific models (PSMs) provide for applications and components traceability. They support maintenance and configuration management, and can be combined with design patterns to build modular software architectures from reusable components.
  • At organizational level, platform independent models (PIMs) are used to align business processes with systems functionalities. Combined with functional patterns, the objective is to use service-oriented architectures as a level of indirection between organization and information technology.
  • At enterprise level, computation independent models (CIMs) are meant to bring together corporate tangible and intangible assets. That’s where corporate culture will drive architectural changes from systems legacy, environment challenges, and planned designs.
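
The traceability argument can be illustrated by a bare-bones sketch of the three model layers and the links between them (a simplification for illustration, not a normative MDA metamodel; the example process and component names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PSM:                 # platform specific model element
    component: str

@dataclass
class PIM:                 # platform independent model element
    functionality: str
    realized_by: List[PSM] = field(default_factory=list)

@dataclass
class CIM:                 # computation independent model element
    business_process: str
    supported_by: List[PIM] = field(default_factory=list)

def impacted_components(change: CIM) -> List[str]:
    """Walk traceability links to list components affected by a business change."""
    return [psm.component
            for pim in change.supported_by
            for psm in pim.realized_by]

# Hypothetical example: tracing a change in an order-to-cash process.
order_to_cash = CIM("order to cash",
                    [PIM("billing", [PSM("invoice-engine"), PSM("tax-calculator")])])
print(impacted_components(order_to_cash))  # ['invoice-engine', 'tax-calculator']
```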

EA & Entropy

Faced with continuous changes in their business environment and competition, enterprises have to navigate between rocks of rigidity and whirlpools of variety, the former standing for policies that try to carry on with existing architectures, the latter for policies that add variants to business objects, processes, or channels as fast as they appear. Meeting environment challenges while warding off growing complexity will depend on the plasticity and versatility of architectures, more precisely on their ability to “digest” the variety of data and transform it into corporate knowledge. Along that perspective enterprise architecture can be seen as a natural antidote to entropy, like a corporate cousin of Maxwell’s demon, standing at enterprise gates and allowing changes in a way that would decrease internal complexity relative to the external one.

Further Readings

The Finger & the Moon: Fiddling with Definitions

June 5, 2014

Objective

Given the glut of redundant, overlapping, circular, or conflicting definitions, it may help to remember that “define” literally means putting limits upon. Definitions and their targets are two different things, the former being language constructs (intensions), the latter sets of instances (extensions). As a Chinese patriarch once said, the finger is not to be confused with the moon.


Fiddling with words: to look at the moon, it is necessary to gaze beyond the finger. (Thich Nhat Hanh)

In order to gauge and cut down the distance between words and world, definitions can be assessed and improved at functional and semantic levels.

What’s In & What’s Out

At a minimum, a definition must support clear answers as to whether any occurrence is to be included in or excluded from the defined set. Meeting that straightforward condition will steer clear of self-sustained semantic wanderings.

Functional Assessment

Since definitions can be seen as a special case of non-exhaustive classifications, they can be assessed through a straightforward two-step routine:

  1. Effectiveness: applying the candidate definition to targeted instances must provide clear and unambiguous answers (or mutually exclusive subsets).
  2. Usefulness: the resulting answers (or subsets) must directly support well-defined purposes.

Such a routine meets Occam’s razor’s parsimony principle by producing outcomes that are consistent (“internal” truth, i.e. no ambiguity), sufficient (they meet their purpose), and simple (mutually exclusive classifications are simpler than overlapping ones).
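
Read operationally, the routine amounts to applying a candidate definition (a predicate) to instances and checking both steps; the sketch below is one possible rendering, with placeholder predicates standing in for actual definitions and purposes:

```python
from typing import Callable, Iterable, List, Optional, Tuple, TypeVar

T = TypeVar("T")

def assess(definition: Callable[[T], Optional[bool]],
           instances: Iterable[T],
           useful: Callable[[List[T], List[T]], bool]) -> Tuple[bool, bool]:
    """Two-step functional assessment of a candidate definition.
    Step 1 (effectiveness): every instance gets an unambiguous True/False.
    Step 2 (usefulness): the resulting subsets support the stated purpose."""
    included: List[T] = []
    excluded: List[T] = []
    undecided: List[T] = []
    for x in instances:
        verdict = definition(x)
        if verdict is None:
            undecided.append(x)
        elif verdict:
            included.append(x)
        else:
            excluded.append(x)
    effective = not undecided
    return effective, effective and useful(included, excluded)

# Hypothetical example: "a requirement is functional if it constrains behaviour",
# useful if responsibilities for acceptance can be allocated to every subset.
verdicts = assess(lambda r: r.get("constrains_behaviour"),
                  [{"constrains_behaviour": True, "owner": "QA"}],
                  lambda ins, outs: all("owner" in r for r in ins + outs))
print(verdicts)  # (True, True)
```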

Functional assessment should also take feedback into account as instances can be refined and purposes reconsidered with the effect of improving initially disappointing outcomes. For instance, a good requirements taxonomy is supposed to be used to allocate responsibilities with regard to acceptance, and carrying out the classification may be accompanied by an improvement of requirements capture.

Once functionally checked, candidate definitions can be assessed for semantics, and adjusted so as to maximize the scope and consistency of their footprint. While different routines can be used, all rely on tweaking words with neighboring meanings.

Purposes & Capabilities

On a broader perspective, definitions can be ranked with regard to purposes and capabilities:

  1. Lexicon: flat and non-specific list of words.
  2. Thesaurus: cross- and domain-specific semantics of words.
  3. Ontology: cross- and domain-specific semantics of concepts with epistemic qualification of whatever is considered.
  4. Models: cross- and domain-specific semantics of concepts with epistemic qualification of whatever is considered and rules to be applied to the processing of representations (descriptive, predictive, or prescriptive).

That principled approach can be used to clarify the scope and reliability of competing standards. It could also be extended to the design of business oriented ontologies.

Ontologies & Business

Ontologies are all too often seen as abstract contraptions best reserved for arcane issues. But, as noted above, ontologies are meant to be built on purpose, to flesh out thesauruses with actual contexts and concerns, and put to use.

That could help enterprises confronted with the crumbling of traditional fences, changing business environments, and waves of digitized flows with confusing semantics.

To manage these challenges, enterprise governance needs knowledge architectures bringing together heterogeneous and changing business contexts as well as homogeneous and stable models of organization, systems, and platforms; that cannot be achieved without open and modular ontologies.

But for that to be achieved, means, e.g. conceptual graphs or semantic networks, should not be confused with ends, i.e. the purpose of ontologies. Whereas implementation issues are not to be ignored, the priority should be to characterize ontologies with regard to the social basis of contexts (institutional, social, professional, corporate, personal), and the epistemic nature of targeted instances (concepts, documents, actual occurrences, or symbolic representations).
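
One way to keep that characterization independent of implementation choices is to describe each ontology (or module) by the social basis of its context and the epistemic nature of its targets; the sketch below merely transcribes the two lists above into data structures, and the example module is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import FrozenSet

class SocialBasis(Enum):
    INSTITUTIONAL = "institutional"
    SOCIAL = "social"
    PROFESSIONAL = "professional"
    CORPORATE = "corporate"
    PERSONAL = "personal"

class EpistemicNature(Enum):
    CONCEPT = "concept"
    DOCUMENT = "document"
    ACTUAL_OCCURRENCE = "actual occurrence"
    SYMBOLIC_REPRESENTATION = "symbolic representation"

@dataclass(frozen=True)
class OntologyModule:
    """Characterized by purpose before any representation (conceptual graphs,
    semantic networks, ...) is chosen."""
    name: str
    basis: SocialBasis
    targets: FrozenSet[EpistemicNature]

regulatory = OntologyModule(
    "regulatory", SocialBasis.INSTITUTIONAL,
    frozenset({EpistemicNature.DOCUMENT, EpistemicNature.SYMBOLIC_REPRESENTATION}))
```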

Further Readings

 

EA Documentation: Taking Words for Systems

May 18, 2014

In so many words

Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises and architectures?


Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures, and their design should be aligned with concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performance. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.

Documents’ purposes and users
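
To make the mapping explicit, each document could be indexed by a single function and an architecture layer, keeping functionalities separate as argued above; the sketch below is illustrative only, with hypothetical layer names and example entries:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Function(Enum):
    GOVERNANCE = "instrument of governance"
    EXCHANGE = "medium of exchange"
    STORAGE = "means of storage"

class Layer(Enum):
    ENVIRONMENT = "corporate environment"
    ORGANIZATION = "organization"
    FUNCTIONALITIES = "systems functionalities"
    SYSTEMS = "systems deployment & maintenance"

@dataclass(frozen=True)
class Document:
    title: str
    function: Function  # a sound policy keeps one function per document
    layer: Layer

portfolio = [
    Document("supplier contract", Function.GOVERNANCE, Layer.ENVIRONMENT),
    Document("process/functionality alignment note", Function.EXCHANGE, Layer.ORGANIZATION),
    Document("release & configuration record", Function.STORAGE, Layer.SYSTEMS),
]

def by_function(docs: List[Document], function: Function) -> List[Document]:
    """Select the documents supporting a given function, e.g. for an audit."""
    return [d for d in docs if d.function is function]
```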

The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).

Models are used to describe actual or symbolic objects and behaviors

That could be achieved with MBE/MDA approaches.

Further readings

 

