Archive for the ‘Risk Management’ Category

AI & Embedded Insanity

February 6, 2015

Summary

Bill Gates recently expressed his concerns about AI’s threats, but shouldn’t we fear insanity, artificial or otherwise?


Human vs Artificial Insanity: chicken or egg? (Peter Sellers as Dr. Strangelove)

Some clues to answers may be found in the relationship between purposes, designs, and behaviors of intelligent devices.

Intelligent Devices

Intelligence is generally understood as the ability to figure out situations and solve problems, with its artificial avatar turning up when such ability is exercised by devices.

Devices being human artifacts, it’s safe to assume that their design can be fully accounted for, and their purposes wholly exhibited and assessed. As a corollary, debates about AI’s threats should distinguish between harmful purposes (a moral issue) on one hand, and faulty designs, flawed outcomes, and devious behaviors (all engineering issues) on the other hand. Whereas concerns for the former could arguably be left to philosophers, engineers should clearly take full responsibility for the latter.

Human, Artificial, & Insane Behaviors

Interestingly, the “human” adjective takes different meanings depending on its association with agents (human as opposed to artificial) or behaviors (human as opposed to barbaric). Hence the question: assuming that intelligent devices are supposed to mimic human behaviors, what would characterize devices’ “inhuman” behaviors?


How to characterize devices’ inhuman behaviors?

From an engineering perspective, i.e. with moral issues set aside, a tentative answer would point to some flawed reasoning, commonly described as insanity.

Purposes, Reason, Outcomes & Behaviors

As intelligence is usually associated with reason, flaws in the design of reasoning capabilities are where to look for the primary factor of hazardous device behaviors.

To begin with, the designs of intelligent devices neatly mirror human cognitive activity by combining both symbolic (processing of symbolic representations) and non-symbolic (processing of neuronal connections) capabilities. How those capabilities are put to use therefore characterizes the mapping of purposes to behaviors:

  • Designs relying on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as the knowledge governing behaviors (a sketch of this case follows the figure below).
  • Designs based on neuronal networks are characterized by implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback.
Symbolic (north) vs non symbolic (south) intelligence
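As a minimal illustration of the symbolic design, consider a Prolog-style sketch in which data is explicitly interpreted into information and then put to use as behavior-governing knowledge; the monitoring scenario and all predicate names (data/1, information/1, behavior/1) are hypothetical:

  • information(overheating(D)) :- data(temperature(D, T)), T > 100.
  • behavior(shutdown(D)) :- information(overheating(D)).

Every step of the mapping from data to behavior can be read and traced back to explicit rules; a neuronal design would bury the same mapping in weighted connections tuned by feedback.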

That distinction is to guide the analysis of potential threats:

  • Designs based on symbolic representations can support both the transparency of ends and the traceability of means. Moreover, such designs allow for the definition of broader purposes, actual or social.
  • Neuronal networks make the relationships between means and ends more opaque because their learning kernels operate directly on data, with the supporting knowledge implicitly embodied as weighted connections. They make for more concrete and focused purposes, for which symbolic transparency and traceability have less of a bearing.

Risks, Knowledge,  & Decision Making

As noted above, an engineering perspective should focus on the risks of flawed designs begetting discrepancies between purposes and outcomes. Schematically, two types of outcome are to be considered: symbolic ones are meant to feed human decision-making, and behavioral ones directly govern device behaviors. For AI systems combining both symbolic and non-symbolic capabilities, risks can arise from:

  • Unsupervised decision making: device behaviors directly governed by implicit knowledge (a).
  • Embedded decision making: flawed decisions based on symbolic knowledge built from implicit knowledge (b).
  • Distributed decision making: muddled decisions based on symbolic knowledge built by combining different domains of discourse (c).
Unsupervised (a), embedded (b), and distributed (c) decision making.

Whereas risks bred by unsupervised decision making can be handled with conventional engineering solutions, that’s not the case for embedded or distributed decision making supported by intelligent devices. And both risks may be increased, respectively, by the so-called internet of things and the semantic web.

Internet of Things, Semantic Web, & Embedded Insanity

On one hand, the so-called “internet second revolution” can be summarized as the end of privileged netizenship: while the classic internet limited its residency to computer systems duly identified by regulatory bodies, the new one makes room for every kind of device. As a consequence, many intelligent devices (e.g. cell phones) have come out as fully fledged systems.

On the other hand, the so-called “semantic web” can be seen as the symbolic counterpart of the internet of things, providing a whole set of comprehensive and consistent meanings for targeted factual realities. Yet, given that the symbolic world is not flat but built on piled mazes of meanings, its charting is bound to rely on projections with dissonant semantics. Moreover, as meanings are not supposed to be set in stone, semantic configurations have to be continuously adjusted.

That double trend clearly increases the risks of flawed outcomes and erratic behaviors:

  • Holed-up sources of implicit knowledge are bound to increase the hazards of unsupervised behaviors or to propagate unreliable information.
  • Misalignment of semantic domains may blur the purposes of intelligent apparatus and introduce biases in knowledge processing.

But such threats are less intrinsic to AI than brought about by the way it is used: insanity is more likely to spring from the ill-designed integration of intelligent and otherwise reasonable systems into the social fabric than from insane systems themselves.

Further Readings

Models Truth and Disproof

January 21, 2015

“If you cannot find the truth right where you are, where else do you expect to find it?”

Dōgen Zenji

Summary

Software engineering models can be grouped into two categories depending on their target: analysis models represent business contexts and concerns, design ones represent system components. Whatever the terminologies, all models are to be verified with regard to their intrinsic qualities, and validated with regard to their domain of discourse, respectively business objects and activities (analysis models), or software artifacts (design models).

Internal & External Consistency (Chris Engman)

Checking for internal consistency is arguably straightforward, as proofs can be built with regard to the syntax and semantics of modeling (or programming) languages. Things are more complicated for external consistency, because hypothetical proofs would have to rely on what is known of the business domains, a knowledge by nature partial and specific, if not hypothetical. Nonetheless, even if general proofs are out of reach, the truth of models can still be disproved by counterexamples found among the instances under consideration.

Domains of Discourse: Business vs Engineering

With regard to systems engineering, domains of discourse cover artifacts which are, by “construct”, fully defined. Conversely, with regard to business context and objectives, domains of discourse have to deal with instances whose definitions are a “work in progress”.

Domains of Discourse: Business vs Engineering

That can be illustrated by analysis models, which are meant to translate requirements into system functionalities, as opposed to design ones, which specify the corresponding software artifacts. Since software artifacts are supposed to be built from designs, checking the consistency of the mapping is arguably a straightforward undertaking. That’s not the case when the consistency of analysis models has to be checked against objects and activities identified by business’ domains of discourse, possibly with partial, ambiguous, or conflicting descriptions. For that purpose some logic may help.

Flat Models & Logic

Business requirements describe objects, events, and activities, and the purpose of modeling is to identify those instances and regroup them into subsets built according to their features and relationships.

How to organize instances of business objects & activities into subsets

As far as models make no use of abstractions (“flat” models), instances can be organized using basic set operators and epistemic constraints (i.e. relating to the degree of validation) with regard to existence (m/d), uniqueness (x/o), and change (f/m):

Notation for epistemic constraints
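Spelled out, the three positions of the notation read as follows (reconstructed from the constraints used in the examples below):

  • Existence: m (mandatory) / d (discretionary).
  • Uniqueness: x (exclusive) / o (overlapping).
  • Change: f (final) / m (mutable).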

Using the EU-Rent Car example:

  • Rental cars are exclusively and definitively partitioned according to models (mxf).
  • Models are exclusively partitioned according to rental groups (mxm), and exclusively and definitively according to body styles (mxf).
  • Rental cars are partitioned by derivation (/) according to group and body style.
Flat model using basic set operators for exclusive (cross) and final (grey) partitions (2)

Such models are deemed to be consistent if all instances are consistently taken into account.

Flat Models External Consistency

Assuming that models’ backbones can be expressed logically, their consistency can be formally verified using a logical language, e.g. Prolog.

To begin with, candidate subsets are obtained by combing requirements for core modeling artifacts expressed as predicates (21 for descriptions of actual objects, 121 for descriptions of actual locations, 20 for descriptions of symbolic ones, 22 for descriptions of symbolic partitions), e.g.:

  • type(20, manufacturer).
  • type(21, rentalCar).
  • type(22, carModel).
  • type(22, rentalGroup).
  • type(22, bodyStyle).
  • type(121, depot).

Partitions and functional connectors (220 for symbolic reference, 222 for partitions, 221 for actual connection), e.g.:

  • connect(222, rentalCar, carModel, mxf).
  • connect(222, carModel, rentalGroup, mxm).
  • connect(222, carModel, bodyStyle, mxf).
  • connect(220, manufacturer_, carModel, manufacturer, mof).
  • connect(221, location, rentalCar, depot, mxt).

Finally, features and structures (320 for properties, 340 for operations), e.g.:

  • feature(340, move_to, depot).
  • feature(320, address).
  • feature(320, location).
  • member(manufacturer, address, mom).
  • member(rentalCar, location, mxm).
  • member(rentalCar, move_to, mxm).

Those candidate descriptions are to be assessed and improved by applying them to the sets of identified occurrences taken from requirements, the objective being to map each instance to a description, instance(name, term()), e.g.:

  • instance(sedan, carModel(f1(F1), f2(F2))).
  • instance(coupe, carModel(f1(F1), f2(F2))).
  • instance(ford, manufacturer(f6(F6), f7(F7))).
  • instance(focus, rentalCar(f6(F6), f7(F7))).
  • instance_(manufacturer_, focus, ford).

Using a logical interpreter, validation can then be carried out iteratively by looking for counterexamples that could disprove the truth of the representations (a sketch follows the list):

  • All instances are taken into account: there is no occurrence N without a matching instance(N, Structure).
  • Logical consistency: there is no instance N with conflicting partitioning (native and derived).
  • Completeness: there is no instance type(X, N, T(f1, f2, ..)) with an undefined feature fi.
  • Functional consistency: there is no instance of a relation R (native or derived) without a consistent type relation(X, R, Origin, Destination, Epistemic).
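As a minimal sketch of how such counterexamples might be hunted with a Prolog interpreter, assuming a hypothetical occurrence/1 predicate listing the occurrences identified in requirements on top of the instance/2 facts above:

  • unaccounted(N) :- occurrence(N), \+ instance(N, _).
  • conflicting(N) :- instance(N, S1), instance(N, S2), S1 \= S2.

Empty answer sets support, without proving, the external consistency of the model; any solution is a counterexample that disproves it.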

It must be noted that the approach is not limited to objects but is meant to encompass the whole scope of requirements: actual objects, symbolic representations, business logic, and process execution.

Multilevel Models: From Partitions to Sub-types

Flat models fall short when specific features are to be added to the elements of partition subsets, in which case sub-types must be introduced. Yet, contrary to partitions, sub-types come with abstraction: set within a flat model (i.e. without sub-types), Car model fully describes all instances; but when the sub-types sedan, coupe, and convertible are introduced, the Car model base type is nothing more than a partial (hence abstract) description.

From partitions to sub-types: subset descriptions are supplemented with specific features.

Whereas that difference may seem academic, it has direct and practical consequences for validation, because consistency must then be checked at every level, concrete or abstract.

LSP & External Consistency

As it happens, the corresponding problem has been tackled by Barbara Liskov for software design: the so-called Liskov substitution principle (LSP) states that if S is a sub-type of T, then instances of T may be replaced with instances of S without altering any of the desirable properties of the program.

Translated to analysis models, the principle would state that, given a set of instances, a model must keep its consistency independently of the level of abstraction considered. As a corollary, and assuming a model abides by the substitution principle, it would be possible to generalize the external consistency of a detailed level to the whole model whatever the level of abstraction. Hence the importance of compliance with the substitution principle when introducing sub-types in analysis models.

All instances must be accounted for whatever the level of abstraction

Applying the Substitution Principle to Analysis Models

Abstraction is arguably the essence of requirements modeling, as its purpose is to bring specific and changing concerns under a common, consistent, and lasting conceptual roof. Yet the two associated operations of specialization and generalization often receive very little scrutiny, despite the fact that most of the related pitfalls can be avoided if the introduction of sub-types (i.e. levels of abstraction) is explicitly justified by partitions. And that can be achieved with the substitution principle.

First of all, and as far as requirements analysis is concerned, sub-types should only be introduced for specific features, properties, or operations. Then, epistemic constraints can be used to tally the number of specialized instances with the number of generalized ones, and to check for possible functional discrepancies:

  • Discretionary (or conditional, or non-exhaustive) partitions (d__) may bring about more instances for the base type (nb ≥ ∑nbi).
  • Overlapping (or duplicate, or non-isolated) partitions (_o_) may bring about fewer instances for the base type (nb ≤ ∑nbi).
  • Assuming specific features, mutable (or reversible) partitions (__m) mean that features may differ between levels; otherwise (same features) sub-types are not necessary.
Epistemic constraints on partitions can be used to enforce the LSP

Using a Prolog-like language, the only modification concerns the syntax of predicates, with structures replaced by lists of features:

  • type(20, manufacturer, [f6, f7]).
  • type(21, rentalCar, [f5]).
  • type(22, carModel, [f1, f2]).
  • type(22, rentalGroup, [f9]).
  • type(22, bodyStyle, [f8]).
    • type(20, bodyStyle:sedan, [f11, f12]).
    • type(20, bodyStyle:coupe, [f13]).
    • type(20, bodyStyle:convertible, [f14]).
  • type(121, depot, [f10]).

The logical interpreter could then be used to map the sub-types to partitions and check for substitution.
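A minimal sketch of those checks, assuming hypothetical instance_of/2 facts mapping occurrences to types and sub/2 facts mapping sub-types to their base types (the tallies rely on SWI-Prolog’s aggregate_all/3):

  • count(T, N) :- aggregate_all(count, instance_of(_, T), N).
  • lsp_gap(B) :- count(B, Nb), aggregate_all(sum(Ni), (sub(S, B), count(S, Ni)), Sum), Nb =\= Sum.
  • redundant(S) :- type(_, _:S, []).

For a mandatory and exclusive partition (mx_), lsp_gap/1 should have no solution, since any gap between nb and ∑nbi points to instances lost or duplicated by substitution; redundant/1 flags sub-types that add no specific feature and are therefore unnecessary.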

Further Readings

Events & Decision-making

September 9, 2014

Objective

Between the Internet of Things and ubiquitous social networks, enterprises’ environments are turning into unified open spaces, transforming the divide between operational and decision-making systems into a pitfall for corporate governance. That jeopardy can be better understood when one considers how the processing of events affects decision-making.

Making sense of events (J.W. Waterhouse)

Events & Information Processing

Enterprises’ success critically depends on their ability to track, understand, and exploit changes in their environment; hence the importance of a fast, accurate, and purpose-driven reading of events.

That is to be achieved by picking the relevant facts to be tracked, capturing the associated data, processing the data into meaningful information, and finally putting that information to use as knowledge.

From Facts to Knowledge and Back

Those tasks have to be carried out iteratively, dealing with both external and internal events:

  • External events are triggered by changes in the state of actual objects, activities, and expectations.
  • Internal events are triggered by the ensuing changes in the symbolic representations of objects and processes as managed by systems.

With events set at the root of the decision-making process, they will also define its time frames.

Events & Decisions Time Frames

As a working hypothesis, decision-making can be defined as real-time knowledge management:

  • To begin with, a real-time scale is created by new facts (t1) registered through the capture of events and associated data (t2).
  • A symbolic intermezzo is then introduced, during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4).
  • The real-time scale completes with decision enactment and the corresponding change in facts (t5).
Time scale of decision making (real time events are in red, symbolic ones in blue)
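As a hypothetical illustration of that time scale: a machine breaks down (t1) and the failure is logged by a sensor (t2); the log is analyzed and the maintenance schedule updated (t3); a repair is decided upon (t4); a crew is dispatched and the machine restored (t5).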

The next step is to bring together events and knowledge.

Events & Changes in Knowns & Unknowns

As Donald Rumsfeld once suggested, decision-making is all about the distinction between things we know that we know, things that we know we don’t know, and things we don’t know we don’t know. And that classification can be mapped to the nature of events and the processing of associated data:

  • Known knowns (KK) are traced through changes in already defined features of identified objects, activities or expectations. Corresponding external events are expected and the associated data can be immediately translated into information.
  • Known unknowns (KU) are traced through changes in still undefined features of identified objects, activities or expectations. Corresponding external events are unexpected and the associated data cannot be directly translated into information.
  • Unknown unknowns (UU) are traced through changes in still undefined objects, activities or expectations. Since the corresponding symbolic representations are still to be defined, both external and internal events are unexpected.
Knowledge lifespan is governed by external events

Given that decisions are by nature set in time frames, they should be mapped to changes in environments, or more precisely to the information carried by the events taken into consideration.

Knowledge & Decision Making

Events bisect time-scales between before and after, past and future; as a corollary, the associated information (or lack thereof) about changes can be neatly allocated to the known and unknown of current and prospective states of affairs.

Changes in the current states of affairs are carried out by external events:

  • Known knowns (KK): when events are about already defined features of objects, activities or expectations, the associated data can be immediately used to update the states of their symbolic representation.
  • Known unknowns (KU): when events entail new features of already defined objects, activities or expectations, the associated data must be analyzed in order to adjust existing symbolic representations.
  • Unknown unknowns (UU): when events entail new objects, activities or expectations, the associated data must be analyzed in order to build new symbolic representations.

As changes in current states of affairs are shadowed by changes in their symbolic representation, they generate internal events which in turn may trigger changes in prospective states of affairs:

  • Known knowns (KK): updating the states of well-defined objects, activities or expectations may change the course of action but should not affect the set of possibilities.
  • Known unknowns (KU): changes in the set of features used to describe objects, activities or expectations may affect the set of tactical options, i.e. ones that can be set within individual production life-cycles.
  • Unknown unknowns (UU): introducing new types of objects, activities or expectations is bound to affect the set of strategic options, i.e. ones that encompass multiple production life-cycles.

Interestingly, those levels of knowledge appear to be congruent with the usual horizons of decision-making: operational, tactical, and strategic:

The scope of decision-making is set by knowledge level

  • Operational: full information on actual states allows for immediate appraisal of prospective states.
  • Tactical: partially defined actual states allow for periodic appraisal of prospective states in synch with production cycles.
  • Strategic: undefined actual states don’t allow for periodic appraisal of prospective states in synch with production cycles; their definition may also be affected through feedback.

Given that those levels of appraisal are based on conjectural information (internal events) built from fragmentary or fuzzy data (external events), they have to be weighted by risks.

Weighting the Risks

Perfect information would guarantee a risk-free future and render decision-making pointless. As a corollary, decisions based on unreliable information entail risks that must be traced back accordingly:

  • Operational: full and reliable information allows for risk-free decisions.
  • Tactical: when bounded by well-defined contexts with known likelihoods, partial or uncertain information allows for weighted costs/benefits analysis (illustrated after this list).
  • Strategic: set against undefined contexts or unknown likelihoods, decision-making cannot fully rely on weighted costs/benefits analysis and must encompass policy commitments, possibly with some transfer of risks, e.g. through insurance contracts.
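As a hypothetical illustration of such weighting at the tactical level: an option with an 80% likelihood of a 100 gain and a 20% likelihood of a 150 loss carries an expected value of 0.8×100 − 0.2×150 = 50; the figure is only meaningful insofar as the context is well defined and the likelihoods known.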

That provides some kind of built-in traceability between the nature and likelihood of events, the reliability of information, and the risks associated with decisions.

Decision Timing

Considering decision-making as real-time knowledge management driven by external (aka actual) events and governed by internal (aka symbolic) ones, how would that help to define decisions’ time frames?

To begin with, such time frames would ensure that:

  • All the relevant data is captured as soon as possible (t1→t2).
  • All available data is analyzed as soon as possible (t2→t3).
  • Once a decision has been made, nothing can change during the interval between commitment and action (respectively t4 and t5).

Given those constraints, the focus of timing is to be on the interval between change in prospective states (t3) and decision (t4): once all information regarding prospective states is available, how long should one wait before committing to a decision?

Assuming that decisions are to be taken at the “last responsible moment”, i.e. the moment beyond which not taking sides would reduce the possible options, that interval will depend on the nature of the decisions:

  • Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within production life-cycles.
  • Tactical decisions can only be enacted at the start of production cycles, using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be made from one cycle to the next; otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment (illustrated after this list).
  • Strategic decisions are meant to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
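As a hypothetical illustration of the tactical case: with weekly production cycles starting on Monday 00:00 and an enactment lag of two days, the last responsible moment for commitment falls on the preceding Saturday 00:00; any later commitment would forfeit the coming cycle.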
The scope of decision-making should be aligned with architecture layers

Not surprisingly, when the scope of decision-making is set by knowledge level, it appears to coincide with architecture layers: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources. While that clearly calls for more verification and refinement, such congruence puts events processing, knowledge management, and decision-making within a common perspective.

Further Reading

Governance, Regulations & Risks

July 16, 2014

Governance & Environment

Confronted with spreading and sundry regulations on one hand, and the blurring of enterprise boundaries on the other hand, corporate governance has to adapt information architectures to new requirements with regard to regulations and risks. Interestingly, those requirements seem to be driven by two different knowledge policies: what should be known with regard to compliance, and what should be looked for with regard to risk management.

Governance (Zhigang-tang)

Compliance: The Need to Know

Enterprises are meant to conform to rules, some set at corporate level, others set by external entities. If one may assume that enterprise agents are mostly aware of the former, that’s not necessarily the case for the latter, which means that information and understanding are prerequisites for regulatory compliance:

  • Information: the relevant regulations must be identified, collected, and their changes monitored.
  • Understanding: the meanings of regulations must be analyzed and the consequences of compliance assessed.

With regard to information processing capabilities, it must be noted that, since regulations generally come as well-structured information with formal meanings, the need for data processing will be limited, if present at all.

With regard to governance, given the pervasive sources of external regulations and their potentially crippling consequences, the challenge will be to circumscribe the relevant sources and manage their consequences with regard to business logic and organization.

Regulatory Compliance vs Risks Management

 

Risks: The Will to Know

Assuming that the primary objective of risk management is to deal with the consequences (positive or negative) of unexpected events, its information priorities can be seen as the opposite of those of regulatory compliance:

  • Information: instead of dealing with well-defined information from trustworthy sources, risk management must process raw data from ill-defined or unreliable origins.
  • Understanding: instead of mapping information to existing organization and business logic, risk management will also have to explore possible associations with still unidentified purposes or activities.

In terms of governance, risk management can therefore be seen as the symmetric counterpart of regulatory compliance: the former relies on processing data into information and expanding the scope of possible consequences, the latter on translating information into knowledge and reducing the scope of possible consequences.

With regard to regulations governance is about reduction, with regard to risks it’s about expansion

Not surprisingly, that understanding coincides with the traditional view of governance as a decision-making process balancing focus and anticipation.

Decision-making: Framing Risks and Regulations

As noted above, regulatory compliance and risk management rely on different knowledge policies, the former restrictive, the latter inclusive. That distinction also coincides with the type of factors involved and the type of decision-making:

  • Regulations are deontic constraints, i.e. ones whose assessment is not subject to enterprise decision-making. Compliance policies will therefore try to circumscribe the footprint of regulations on business activities.
  • Risks are alethic constraints, i.e. ones whose assessment is subject to enterprise decision-making. Risk management policies will therefore try to prepare for every contingency.

Yet, when set in a governance perspective, that picture can be misleading, because regulations are not always mandatory, and even mandatory ones may leave room for compliance adjustments. And when regulations are elective, compliance is less driven by sanctions or penalties than by the assessment of business or technical alternatives.

Decision patterns: Options vs Arbitrage

Conversely, risks do not necessarily arise from unidentified events and upshots; they can also come from well-defined outcomes with unknown likelihoods. Managing the latter will not be very different from dealing with elective regulations, except that decisions will be about weighted opportunity costs instead of business alternatives. Similarly, managing risks from unidentified events and upshots can be compared to compliance with mandatory regulations, with insurance policies instead of compliance costs.

What to Decide: Shifting Sands

As regulations can be elective, risks can be interpretative: with business environments relocated to virtual realms, decision-making may easily turn into crisis management based on conjectural and ephemeral web-driven semantics. In that case the ensuing overlaps between regulations and risks can only be managed if data analysis and operational intelligence are seamlessly integrated with production systems.

When to Decide: Last Responsible Moment

Finally, with regulations’ scope and weighted risks duly assessed, one has to consider the time frames of decisions about compliance and commitments.

Regarding elective regulations and defined risks, the time frame of decisions is set at enterprise level in so far as options can be directly linked to business strategies and policies. That’s not the case for compliance with mandatory regulations or for commitments exposed to undefined risks, since both are subject to external contingencies.

Whatever the source of the time frame, the question is when to decide, and the answer is at the “last responsible moment”, i.e. the moment beyond which not taking sides would reduce the possible options:

  • Whether elective or mandatory, the “last responsible moment” for compliance decisions is static because the parameters are known.
  • Whether defined or not, the “last responsible moment” for commitments exposed to risks is dynamic because the parameters are to be reassessed periodically or continuously.
Compliance and risk taking: last responsible moments to decide

One step ahead along that path of reasoning, the ultimate challenge of regulatory compliance and risk management would be to use the former to steady the latter.

Further Readings