Archive for the ‘Augmented reality’ Category

EA: Legacy & Latency

June 7, 2018

 “For things to remain the same, everything must change”

Lampedusa, “The Leopard”

Preamble

Whatever the understanding of the discipline, most EA schemes implicitly assume that enterprise architectures, like their physical cousins, can be built from blueprints. But they cannot be, because enterprises have no “Pause” and “Reset” buttons: business cannot be put on stand-by; it must be carried on while work is in progress.


Dealing with EA’s Legacy: WIP or RIP ? (Hans Vredeman de Vries)

Systems & Enterprises

Systems are variously defined as:

  • “A regularly interacting or interdependent group of items forming a unified whole” (Merriam-Webster).
  • “A set of connected things or devices that operate together” (Cambridge Dictionary).
  • “A way of working, organizing, or doing something which follows a fixed plan or set of rules” (Collins Dictionary).
  • “A collection of components organized to accomplish a specific function or set of functions” (TOGAF, from ISO/IEC 42010:2007).

While differing in focus, most understandings mention items and rules, purpose, and the ability to interact; none explicitly mention social structures or interactions with humans. That suggests where the line should be drawn between systems and enterprises, and consequently between corresponding architectures.

Architectures & Changes

Enterprises are live social entities made of corporate culture, organization, and supporting systems; their ultimate purpose is to maintain their identity and integrity while interacting with environments. As a corollary, changes cannot be carried out as if architectures were just apparel, but must ensure the continuity and consistency of enterprises’ structures and behaviors.

That cannot be achieved by off-soil schemes made of blueprints and step-by-step processes detached from actual organization, systems, and processes. Instead, enterprise architectures must be grown bottom up from actual legacies whatever their nature: technical, functional, organizational, business, or cultural.

EA’s Legacy

Insofar as enterprise architectures are concerned, legacies are usually taken into account through one of three implicit assumptions:

No legacy assumptions ignore the issue, as if the case of start-ups could be generalized. These assumptions are logically flawed: enterprises without a legacy are like embryos growing their own inherent architecture, in which case there would be no need for architects.

En bloc legacy assumptions take for granted that architectures as a whole could be replaced through some Big Bang operation without significant impact on business activities. These assumptions are empirically deceptive because, even limited to software architectures, Big Bang solutions cannot cope with the functional and generational heterogeneity of software components that characterizes large organizations. Not to mention that enterprise architectures are much more than software and IT.

Piecemeal legacy assumptions can be seen as the default, based on the belief that architectures can be refactored or modernized step by step. While that assumption may be empirically valid, it may also miss the point: assuming that all legacies can be dealt with piecemeal rubs out the distinction drawn above between systems and enterprises.

So, the question remains of what is to be changed, and how?

EA as a Work In Progress

As with leopard’s spots and identity, the first step would be to set apart what is to change (architectures) from what is to carry on (enterprise).

Maps and territories do provide an overview of spots’ arrangement, but they are static views of architectures, whereas enterprises are dynamic entities that rely on architectures to interact with their environment. So, for maps and territories to serve that purpose they should enable continuous updates and adjustments without impairing enterprises’ awareness and ability to compete.

That shift from system architecture to enterprise behavior implies that:

  • The scope of changes cannot be fully defined up-front, if only because the whole enterprise, including its organization and business model, could possibly be of concern.
  • Fixed schedules are to be avoided, lest every unit, business or otherwise, be shackled into a web of hopeless reciprocal commitments.
  • Different stakeholders may come as interested parties, some more equal than others, possibly with overlapped prerogatives.

So, instead of procedural and phased approaches supposed to start from blank pages, EA ventures must be carried out iteratively with the planning, monitoring, assessment, and adjustment of changes across enterprises’ businesses, organizations, and systems. That can be represented as an extension of the OODA (Observation, Orientation, Decision, Action) loop:

  • Actual observations from operations (a).
  • Data analysis with regard to architectures as currently documented (b).
  • Changes in business processes (c).
  • Changes in architectures (d).
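The extended loop above can be sketched as a continuous cycle. The following is a minimal illustration, not an actual EA toolkit: every function, name, and data shape is a hypothetical stand-in for the (a) to (d) steps described in the text.

```python
# Illustrative sketch of the extended OODA loop (a -> b -> c -> d).
# All functions and data shapes are hypothetical placeholders.

def observe(operations):
    """(a) Collect actual observations from operations."""
    return {"events": operations}

def analyze(observations, architecture):
    """(b) Analyze observations against architectures as currently documented."""
    return {"gaps": [e for e in observations["events"]
                     if e not in architecture["covered"]]}

def revise_processes(analysis, processes):
    """(c) Adjust business processes according to the analysis."""
    return processes + analysis["gaps"]

def revise_architecture(processes, architecture):
    """(d) Align architectures with the revised processes."""
    architecture["covered"] = sorted(set(architecture["covered"]) | set(processes))
    return architecture

def ea_loop(operations, processes, architecture, iterations=3):
    """Run the a -> b -> c -> d cycle iteratively, never from a blank page."""
    for _ in range(iterations):
        obs = observe(operations)                                     # a
        analysis = analyze(obs, architecture)                         # b
        processes = revise_processes(analysis, processes)             # c
        architecture = revise_architecture(processes, architecture)   # d
    return processes, architecture
```

The point of the sketch is the control structure, not the stubbed-out steps: planning, monitoring, assessment, and adjustment happen inside a loop that starts from the existing legacy (the initial `architecture`) rather than from a blueprint.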

EA decision-making as an extension of the OODA loop

Moreover, owing to the generalization of digital flows between enterprises and their environment, decision-making processes, which used to be set along separate time-frames (operational, tactical, strategic, …), must now be woven together along a common time-scale encompassing internal (symbolic) as well as external (actual) events.

It follows that EA processes must not only be continuous, but must also deal with latency constraints.

Changes & Latency

Architectures are by nature shared across organizational units (enterprise level) and business processes (system level). As a corollary, architecture changes are bound to introduce mismatches and frictions across business-specific applications. Hence the need to sort out the factors affecting the alignment of maps and territories:

  • Elapsed time between changes in territories and maps updates (a>b) depends on data analytics and operational architecture.
  • Elapsed time between changes in maps and revised objectives (b>c) depends on business analysis and organization.
  • Elapsed time between changes in objectives and their implementation (c>d) depends on engineering processes and systems architecture.
  • Elapsed time between changes in systems and changes in territories (d>a) depends on applications deployment and technical architectures.
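The four lags can be combined into a simple latency model: the total time for maps and territories to realign is the sum of the elapsed times around the a>b>c>d>a cycle, and the largest lag is the first candidate for improvement. A minimal sketch, in which the day figures are invented purely for illustration:

```python
# Hypothetical latency model for the EA alignment cycle.
# Keys follow the text; the durations are invented for illustration.

LAGS_IN_DAYS = {
    "a>b": 7,    # territory change -> maps update (data analytics, operational architecture)
    "b>c": 30,   # maps update -> revised objectives (business analysis, organization)
    "c>d": 90,   # objectives -> implementation (engineering, systems architecture)
    "d>a": 14,   # systems change -> territory change (deployment, technical architecture)
}

def loop_latency(lags):
    """Total elapsed time for one full alignment cycle of maps and territories."""
    return sum(lags.values())

def bottleneck(lags):
    """The lag that dominates the cycle, i.e. the first candidate for improvement."""
    return max(lags, key=lags.get)
```

With these invented figures the full cycle takes 141 days and the engineering lag (c>d) dominates, which is consistent with the observation below that engineering delays impede process deployment.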

Latency constraints can then be associated with systems engineering tasks and workshops.


EA changes & Latency

On that basis it’s possible to define four critical lags:

  • Operational: data analytics can be impeded by delayed, partial, or inaccurate feedback from processes.
  • Mapping: business analysis can be impeded by delays or discrepancies in data analytics.
  • Engineering: development of applications can be impeded by delays or discrepancies in business analysis.
  • Processes: deployment of business processes can be impeded by delays in the delivery of supporting applications.

These lags condition the whole of EA undertakings, because legacy structures, mechanisms, and organizations are to be continuously morphed into architectures without introducing misrepresentations that would shackle activities and lead decision-making astray.

EA Latency & Augmented Reality

Insofar as architectural changes are concerned, discrepancies and frictions are rooted in latency, i.e. the elapsed time between actual changes in territories and the updating of the relevant maps.

As noted above, these lags have to be weighted according to time-frames, from operational days to strategic years, so that the different agents can be presented with the relevant and up-to-date views befitting each context and concern.


EA views must be set according to contexts and concerns, with relevant lags weighted appropriately.

That could be achieved if enterprise architectures were presented through augmented reality technologies.

Compared to virtual reality (VR), which overlooks the whole issue of reality and operates only on similes and avatars, augmented reality (AR) brings together virtual and physical realms, operating on apparatuses that weave actual substrates, observations, and interventions with made-up descriptive, predictive, or prescriptive layers.

On that basis, users would be presented with actual territories (EA legacy) augmented with maps and prospective territories.


Augmented EA: Actual territory (left), Map (center), Prospective territory (right)

Composition and dynamics of maps and territories (actual and prospective) could be set and edited appropriately, subject to latency constraints.

Further Reading

 


Self-driving Cars & Turing’s Imitation Game

May 27, 2018

Self-driving vehicles should behave like human drivers; here is how to teach them.


Preamble

The eventuality of sharing roads with self-driven vehicles raises critical technical, social, and ethical issues. Yet, a dual perspective (us against them) may overlook the question of drivers’ communication (and therefore behavior) because:

  • Contrary to smart cars, human drivers don’t use algorithms.
  • Contrary to humans, smart cars are by nature unethical.

If roads are to become safer when shared between human and self-driven vehicles, enhancing their collaboration should be a primary concern.

Driving Is A Social Behavior

The safety of roads has more to do with social behaviors than with human driving skills, as it depends on the human ability (a) to comply with clearly defined rules and (b) to communicate if and when rules fail to deal with urgent and exceptional circumstances. Given that self-driving vehicles will have no difficulty with rules compliance, the challenge is their ability to communicate with other drivers, especially human ones.

What Humans Expect From Other Drivers

Social behavior of human drivers is basically governed by clarity of intent and self-preservation:

  1. Clarity of intent: every driver expects from all protagonists a basic knowledge of the rules, and the intent to follow the relevant ones depending on circumstances.
  2. Self-preservation: every driver implicitly assumes that all protagonists will try to preserve their physical integrity.

As it happens, these assumptions and expectations may be questioned by self-driving cars:

  1. Human drivers wouldn’t expect other drivers to be too smart with their interpretation of the rules.
  2. Machines have no particular concern for their physical integrity.

Mixing human and self-driven cars may consequently induce misunderstandings that could affect the reliability of communications, and so the safety of the roads.

Why Self-driving Cars Have To Behave Like Human Drivers

As mentioned above, driving is a social behavior whose safety depends on communication. But contrary to symbolic and explicit driving regulations, communication between drivers is implicit by necessity; and when it is needed, it is needed in urgency, precisely because rules are falling short of circumstances: communication has to be instant.

So, since there is no time for interpretation or reasoning about rules, or for the assessment of protagonists’ abilities, communication between drivers must be implicit and immediate. That can only be achieved if all drivers behave like human ones.

Turing’s Imitation Game Revisited

Alan Turing designed his Imitation Game as a way to distinguish between human and artificial intelligence. For that purpose a judge was to interact via computer screen and keyboard with two anonymous “agents”, one human and one artificial, and to decide which was what.

Extending the principle to drivers’ behaviors, cars would be put on the roads of a controlled environment, some driven by humans, others self-driven. Behaviors in routine and exceptional circumstances would be recorded and analyzed for drivers and protagonists.

Control environments should also be run, one for human-only drivers, and one with drivers unaware of the presence of self-driving vehicles.

Drivers’ behaviors would then be assessed according to the nature of protagonists:


  • H / H: Should be the reference model for all driving behaviors.
  • H / M: Human drivers should make no difference when encountering self-driving vehicles.
  • M / H: Self-driving vehicles encountering human drivers should behave like good human drivers.
  • Ma / Mx: Self-driving vehicles encountering self-driving protagonists and recognising them as such could change their driving behavior, provided no human protagonists are involved.
  • Ma / Ma: Self-driving vehicles encountering self-driving protagonists and recognising them as family related could activate collaboration mechanisms, provided no other protagonists are involved.
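The assessment matrix above can be read as a lookup table mapping (driver, protagonist) pairs to an expected behavior. A minimal sketch, with invented type and behavior labels, and a conservative fallback to human imitation whenever humans are or may be involved:

```python
# Hypothetical encoding of the driver/protagonist assessment matrix.
# H = human; M = machine (self-driven); Ma / Mx = same-family / other-family machines.
# All labels are invented for illustration.

BEHAVIOR_POLICY = {
    ("H", "H"): "reference",       # baseline model for all driving behaviors
    ("H", "M"): "reference",       # humans should make no difference
    ("M", "H"): "imitate_human",   # behave like a good human driver
    ("Ma", "Mx"): "machine_mode",  # machine-specific behavior, no humans involved
    ("Ma", "Ma"): "collaborate",   # family-related vehicles may coordinate
}

def expected_behavior(driver, protagonist, humans_present=False):
    """Look up the matrix; fall back to human imitation when humans are involved."""
    if humans_present:
        return "reference" if driver == "H" else "imitate_human"
    return BEHAVIOR_POLICY.get((driver, protagonist), "imitate_human")
```

The `humans_present` guard encodes the proviso attached to the last two rows: machine-specific behaviors are only admissible when no human protagonists are involved.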

Such a scheme could provide the basis of a driving licence equivalent for self-driving vehicles.

Self-driving Vehicles & Self-improving Safety

If self-driving vehicles have to behave like humans and emulate their immediate reactions, they may prove exceptionally good at it because imitation is what machines do best.

When fed with data about human drivers’ behaviors, deep-learning algorithms can extract implicit knowledge and use it to mimic human behaviors; and with massive enough data inputs, such algorithms can be honed to statistically near-perfect similitude.

That could set the basis of a feedback loop:

  1. A limited number of self-driving vehicles (properly fed with data) are set to learn from communicating with human drivers.
  2. As self-driving vehicles become better at the imitation game their number can be progressively increased.
  3. Human behaviors improve, influenced by the growing number of self-driving vehicles, which adjust their behavior in return.
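The loop above amounts to what is commonly called behavioral cloning: fit a policy to recorded human (situation, action) pairs, then keep refreshing it as new recordings accumulate. A toy nearest-neighbour sketch, in which the situation features, action names, and distance metric are all invented for illustration:

```python
# Toy behavioral-cloning sketch: mimic the recorded human action whose
# situation is closest to the current one. All data is invented.

def nearest_action(situation, recordings):
    """recordings: list of (situation_vector, action) pairs from human drivers."""
    def distance(s1, s2):
        return sum((a - b) ** 2 for a, b in zip(s1, s2))
    _, action = min(recordings, key=lambda r: distance(r[0], situation))
    return action

# Recorded human behaviors: (speed_gap, distance_to_obstacle) -> action
human_data = [
    ((0.0, 50.0), "cruise"),
    ((5.0, 10.0), "brake"),
    ((-3.0, 40.0), "accelerate"),
]

def feedback_step(new_recordings, dataset):
    """Step 3 of the loop: fresh human behaviors feed back into the dataset."""
    return dataset + new_recordings
```

A production system would of course use learned models rather than nearest neighbours; the sketch only shows the shape of the feedback: observed human behavior in, imitated behavior out, dataset continuously refreshed.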

That would create a virtuous feedback loop for road safety.

Further Reading

 

Why Virtual Reality (VR) is Late

July 25, 2017

Preamble

Whereas virtual reality (VR) has long been expected to be the next breakthrough in IT human interfaces, that future seems to be late.

Detached Reality (N.Ghesquiere, G.Coddington)

Together with the cost of ownership, a primary cause mentioned for the lukewarm embrace is the nausea associated with the technology. Insofar as the nausea is provoked by a delay in perceptions, the consensus is that both obstacles should be overcome by continuous advances in computing power. But that optimistic assessment rests on the assumption that the nausea effect will decrease uniformly.

Virtual vs Augmented

The recent extension of a traditional roller-coaster at SeaWorld Orlando illustrates the difference between virtual and augmented reality. Despite being marketed as virtual reality, the combination of actual physical experience (roller-coaster) with virtual perceptions (3D video) clearly belongs to the augmented breed, and its success may put some new light on the nausea effect.

Consciousness Cannot Wait

Awareness is what anchors living organisms to their environment. So, lest a confusion be introduced between individuals’ experience and their biological clock, perceptions have to be immediate; and since that confusion is not cognitive but physical, it causes nausea. True to form, engineers’ initial answer has been to cut down elapsed time through additional computing power; that indeed brought a decline in the nausea effect, along with an increase in the cost of ownership. Unfortunately, benefits and costs don’t tally: however small the remaining latency, nausea effects are disproportionate.

Aesop’s Lesson

The way virtual and augmented reality deal with latency may help to understand the limitations of a minimizing strategy:

  • With virtual reality, latency occurs between users’ voluntary actions (e.g. moving their heads) and device-generated responses (e.g. from a headset).
  • With augmented reality, latency occurs between actual perceptions and software-generated responses.

That’s basically the situation of Aesop’s “The Tortoise and the Hare” fable: in the physical realm the hare (aka computer) is either behind or ahead of the tortoise (the user), which means that some latency (positive or negative) is unavoidable.

That lesson applies to virtual reality because both terms are set in actuality, which means that nausea can be minimized but not wholly eliminated. But that’s not the case for augmented reality because the second term is a floating variable that can be logically adjusted.

The SeaWorld roller-coaster takes full advantage of this point by directly tying up augmented stimuli to actual ones: augmented reality scripts are aligned with roller-coaster episodes and their execution synchronized through special sensors. Whatever the remaining latency, it is to be of a different nature: instead of having to synchronize their (conscious) actions with the environment feedback, users only have to consolidate external stimuli, a more mundane task which doesn’t involve consciousness.
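The sensor-driven synchronization described above can be sketched as event-driven script triggering: instead of chasing a clock, the virtual layer fires when a physical sensor reports a ride episode, so users consolidate stimuli rather than synchronize their own actions. A minimal sketch, with invented episode and script names (not SeaWorld's actual implementation):

```python
# Event-driven sync of AR scripts to physical ride episodes.
# Episode and script names are invented for illustration.

AR_SCRIPTS = {
    "lift_hill": "show_sea_floor_ascent",
    "first_drop": "play_dive_sequence",
    "loop": "spawn_circling_creatures",
}

def on_sensor_event(episode, played):
    """Trigger the AR script tied to the detected ride episode, if any."""
    script = AR_SCRIPTS.get(episode)
    if script:
        played.append(script)
    return played

def run_ride(sensor_events):
    """Replay a ride: each sensor event drives the augmented layer directly."""
    played = []
    for episode in sensor_events:
        on_sensor_event(episode, played)
    return played
```

The design choice is the point: because the augmented layer is keyed to physical events rather than to elapsed time, residual latency shifts the script slightly but never lets it drift out of step with the ride.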

Further Reading

External Links

Alternative Facts & Augmented Reality

February 5, 2017

Preamble

Coming alongside the White House’s creative use of facts, Snap’s upcoming IPO is to bring another perspective on reality, with its star product Snapchat integrating augmented reality (AR) with media.


Layers of Reality (Marcel Duchamp)

Whatever the purpose, the “alternative facts” favored by the White House communication team may bring to the fore two related issues of present-day relevance: virtual and augmented reality on one hand, the actuality of George Orwell’s Newspeak on the other.

Facts and Fiction

To begin with, facts are not given but observed, and that can only be achieved through a mix of conceptual and technical apparatus, the former to design fact-finding vessels, the latter to fill them with actual observations. Based on that understanding, alternatives are less about the facts themselves than about the apparatuses used to collect them, which may be trustworthy, faulty, or deceitful. Setting flaws aside, trust is also what distinguishes augmented and virtual reality:

  • Augmented reality (AR) technologies operate on apparatuses that combine observation and analysis before adding layers of information.
  • Virtual reality (VR) technologies simply overlook the whole issue of reality and observation, and are only concerned with the design of trompe-l’œils.

The contrast between facts (AR) and fiction (VR) may account for the respective applications and commercial advances: whereas augmented reality is making rapid inroads in business applications, its virtual cousin is still testing the water in games. More significantly perhaps, the comparison points to a somewhat unexpected difference in the role of language: necessary for the establishment of facts, accessory for the creation of fictions.

Speaking of Alternative Facts

As illustrated (pun intended) by virtual reality, fiction can do without words, which is not the case for facts. As a matter of fact (intended again), even facts can be fictional, as epitomized by Orwell’s Newspeak, the language used by the totalitarian state in his 1949 novel Nineteen Eighty-Four. Figuratively speaking, that language may be likened to a linguistic counterpart of virtual reality, as its purpose is to bypass the issue of trustworthy discourse about reality by introducing narratives wholly detached from actual observations. And that’s when fiction catches up with reality: not much stretch of imagination is needed to recognize a similar scheme in current White House comments.

Language Matters

As far as humans are concerned, reality comes with semantic and social dimensions that can only be carried out through language. In other words, truth is all about the use of language with regard to purpose: communication, information, or knowledge. Taking Trump’s inauguration crowd as an example:


Data come from observations, Information is Data put in form, Knowledge is Information put to use.

  • Communication: language is used to exchange observations associated to immediate circumstances (the place and the occasion).
  • Information: language is used to map observations to mental representations and operations (estimates for the size of the audience).
  • Knowledge: language is used to associate information with purposes through categories and concepts detached from the original circumstances (comparison of audiences for similar events, and political conclusions).
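The three levels can be illustrated with the crowd example as a small pipeline, each stage adding purpose to the previous one. All figures below are invented for illustration:

```python
# Data -> Information -> Knowledge, using the crowd-size example.
# Every number here is invented for illustration.

def data_from_observation(counted, fraction_viewed):
    """Data: raw tallies from the viewed portions of the audience."""
    return {"counted": counted, "fraction_viewed": fraction_viewed}

def information_from_data(data):
    """Information: data put in form, here an estimate for the whole audience."""
    return round(data["counted"] / data["fraction_viewed"])

def knowledge_from_information(estimate, past_events):
    """Knowledge: information put to use, here a comparison with similar events."""
    rank = sum(1 for size in past_events if estimate > size)
    return {"estimate": estimate, "larger_than": rank, "out_of": len(past_events)}
```

The point of the decomposition is that each stage depends on language-borne categories the previous one lacks: a tally means nothing without the notion of "fraction viewed", and an estimate means nothing without comparable past events.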

Augmented reality devices on that occasion could be used to tally people in viewed portions of the audience (data), figure out estimates for the whole audience (information), or decide on the best itineraries back home (knowledge). By contrast, virtual reality (aka “alternative facts”) could only be used at the communication level, to deceive the public.

Further Reading