Engineering is about the making of artifacts, software engineering about the making of symbolic ones. Hence the three dimensions of the discipline:
- Concepts: what is to be represented, i.e. targets (agents, objects, locations, events, activities) and the status of their representations.
- Artifacts: how relevant concepts are to be represented through the engineering process (architectures, requirements, models, and code).
- Processes: how to plan and manage engineering activities (milestones, projects, metrics, quality, …).
The aim of the Caminao project is to provide a comprehensive framework dedicated to systems engineering, from enterprise architecture to software design and development. On a broader perspective, the objective is to build a community of interest around a set of unambiguous and coherent concepts and principles that could be accepted independently of specific methods.
The Caminao approach is based upon a very simple postulate: systems are designed to manage symbolic representations (aka surrogates) of actual (physical or notional) objects, events and behaviors. That single position provides the leverage for a paradigmatic shift all along the software engineering process.
Since disproving convictions is typically easier than establishing alternative ones, a kind of negative theology may help to deal with some fallacies hampering progress in system engineering. While some are no more than misunderstandings that can be corrected with unambiguous definitions, others are deeply rooted in misconceptions, sometimes entrenched behind walled dogmas.
For instance: facts are given (no, they are built from observations); models are about truth (no, they are built on purpose); models and code are equivalent (under specific conditions, but in these cases processes are pointless).
If facts are not given, where to start when representations (aka models) are to be built?
As Plato would have said, wherever we look, all we can see are mental representations, latent or overt. One step further, these representations can be developed into symbolic placeholders ready for what concerns or purposes will bring to our mind.
When systems are designed to manage such placeholders, anchors must be identified for the symbolic representations of objects, events, and behaviors, relevant aspects regrouped accordingly, and irrelevant features or behaviors dropped altogether. That’s called modelling.
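As a minimal sketch of that modelling step (all names here are hypothetical illustrations, not Caminao terminology), a surrogate anchors a business object through an identity, keeps the relevant aspects, and drops the rest:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Anchor:
    """Identity shared by a business object and its symbolic surrogate."""
    kind: str   # e.g. "customer", "delivery"
    uid: str    # identification within the business context

@dataclass
class Surrogate:
    """Symbolic representation managed by the system."""
    anchor: Anchor
    features: dict = field(default_factory=dict)  # relevant aspects only

# Modelling: regroup what the system must know, drop irrelevant features.
observation = {"name": "Acme Ltd", "rating": "AA", "ceo_hobby": "golf"}
relevant = {"name", "rating"}
customer = Surrogate(
    anchor=Anchor(kind="customer", uid="C-123"),
    features={k: v for k, v in observation.items() if k in relevant},
)
```

The point of the sketch is the filtering step: the surrogate stands for its business counterpart through the anchor, while its features are deliberately limited to what the system's purpose requires.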
Systems’ purpose is to manage symbolic representations. As a corollary, one will expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.
As long as information was just data files and systems just computers, the respective roles of enterprise and IT architectures could be overlooked; but that has become a hazardous neglect with distributed systems pervading every corner of enterprises, monitoring every motion, and sending axons and dendrites all around to colonize environments.
Yet, the overlapping of enterprise and systems footprints shouldn’t generate confusion. When the divide between business and technology concerns was clearly set, casual governance was of no consequence; now that architecture capabilities are defined at different levels and turfs are more and more entwined, dedicated policies are required lest decisions be blurred and entropy escalate as a consequence.
Systems are designed to manage symbolic representations. Those representations (aka surrogates) are objects on their own and the objective of system modelling is to design useful and effective symbolic objects that will stand for their business counterparts.
If they are to be processed, those symbolic representations must be physically implemented. The specific aim of architecture driven modelling is to identify the constraints to be supported whatever the technology used to store the representations, be it clay tablets, codices, optical discs, or holograms.
Models are first and foremost communication media; as far as system engineering is concerned, they are meant to support understanding between participants, from requirements to deployment. And because participants are collective entities from various professional backgrounds, models must be written down using agreed upon languages.
Those languages are made of well-formed constructs (syntax) associated with agreed meanings (semantics). While correct syntax can be easily checked, that’s not the case for semantics as interpretations can easily flourish depending on methods, businesses, or engineering concerns.
As the world turns digital, the divides between social lives, corporate businesses, and physical realities are progressively dissolved. That calls for some unified modeling framework that would supplement OMG’s unified modeling language (UML).
That could be achieved through the consolidation of the concepts used to describe agents with symbolic processing capabilities independently of the modeling perspective.
Models are shadows of reality, with their form and contents set by contexts and concerns. They can be characterized by their capabilities and purpose.
Regarding capabilities, the distinction is between extensional (aka denotative) and intensional (aka connotative) languages, the former used to describe sets of actual instances, the latter used for the design of artifacts.
Regarding purposes, models fall into two groups: descriptive models deal with problems at hand (e.g. requirements capture), prescriptive models with solutions (e.g. architectures). Differentiated purposes are best illustrated by OMG’s Model Driven Architecture (MDA).
Descriptive as well as prescriptive models occupy the driving seat when communication between organizational units with independent decision-making has to be anchored to milestones. When shared ownership can be achieved, agile use of models puts them in the back seat.
Software Engineering is about the making of symbolic artifacts used as surrogates for actual business objects, events, and behaviors. Surprisingly, that critical distinction is generally overlooked by modeling languages, as if it was a matter of preference. This primary flaw introduces a confusion that encroaches upon models all along the engineering process.
Architectures are about continuity in space and time as their capabilities are meant to support activities which, by nature, must be adaptable to changing concerns and objectives:
- Enterprise architecture deals with assets and organization supporting corporate identity and business capabilities within regulatory and market environments.
- Functional architecture deals with the systems functionalities supporting enterprise architecture.
- Technical architecture deals with the feasibility, efficiency and economics of systems operations.
Requirements are not manna from heaven: they do not come to the world as models but have to be “captured”.
Then, given that the functionalities expected from systems are not served in a vacuum but within architectures combining human agents, physical equipment, and symbolic systems, a requirements taxonomy should make a clear distinction between (1) business objectives, (2) supporting systems functionalities, (3) how those functionalities are implemented, and (4) how they are operated.
Whatever the methodology, the objective of requirements analysis is to consolidate new requirements and existing architecture capabilities. Regarding systems functionalities, the first thing to do is to decide what should be reused and what will have to be developed. For that purpose functional requirements must be ascribed to archetypal symbolic constructs, e.g.: domain or use case, actual or symbolic, partition or subtype, object or feature, aggregate or composite, inheritance or delegation, static or dynamic rule.
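The dichotomies listed above can be made mechanical. In this illustrative sketch (the axis names are assumptions of mine, not part of any standard), each functional requirement is ascribed one archetype per dichotomy, and anything outside the taxonomy is rejected:

```python
# Each dichotomy from the text, with its two archetypal options.
# Axis names ("scope", "nature", ...) are illustrative labels only.
DICHOTOMIES = {
    "scope":       ("domain", "use case"),
    "nature":      ("actual", "symbolic"),
    "structure":   ("partition", "subtype"),
    "granularity": ("object", "feature"),
    "composition": ("aggregate", "composite"),
    "reuse":       ("inheritance", "delegation"),
    "rule":        ("static", "dynamic"),
}

def classify(choices: dict) -> dict:
    """Check that each choice picks one archetype of a known dichotomy."""
    for axis, choice in choices.items():
        options = DICHOTOMIES.get(axis)
        if options is None or choice not in options:
            raise ValueError(f"{choice!r} is not an archetype for {axis!r}")
    return choices

# A hypothetical functional requirement ascribed before reuse decisions.
invoice_req = classify({"scope": "use case", "nature": "symbolic", "rule": "dynamic"})
```

Once requirements are tagged this way, the reuse-versus-develop decision can be made per archetype rather than per wording.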
Modeling methods, with or without UML, have some difficulty agreeing on scope (e.g. requirements, analysis, and design) or concepts (e.g. objects, aspects, and domains). Hence the benefits to be expected from a comprehensive and consistent approach based upon two basic distinctions:
- Business vs System: assuming that systems are designed to manage symbolic representations of business objects and processes, models should keep the distinction between business and system objects descriptions.
- Identity vs behavior: while business objects and their system counterparts must be identified uniformly, that’s not the case for the symbolic representation of aspects, which can be specified independently.
That two-pronged approach will bridge the gap between analysis and design models, bringing about a unified perspective for concepts (objects and aspects) as well as scope (business objects and system counterparts).
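A minimal sketch of the two distinctions (names are hypothetical): identity is set uniformly and shared between a business object and its system counterpart, while aspects belong to the surrogate alone and can be specified and changed independently:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessId:
    """Uniform identity shared by a business object and its system counterpart."""
    uid: str

@dataclass
class BusinessObject:
    identity: BusinessId       # set by the business context
    description: str

@dataclass
class SystemSurrogate:
    identity: BusinessId       # same identity, no system-specific one
    aspects: dict              # aspects specified independently

book = BusinessObject(BusinessId("ISBN-0134757599"), "a printed book")
record = SystemSurrogate(book.identity, aspects={"title": "Refactoring", "on_loan": False})

# Identity bridges analysis and design models...
same = record.identity == book.identity
# ...while aspects evolve without touching identification.
record.aspects["on_loan"] = True
```

The design choice worth noting is the frozen identity type: it cannot drift apart between the business description and the system description, which is exactly the uniformity the text requires.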
The adoption 15 years ago of UML (Unified Modeling Language) as a modeling standard could have spurred a new wave of innovation in software engineering. Whatever the reasons, initial successes have not been followed through, and UML utilization remains very limited, both in breadth (projects developed) and depth (features effectively used).
Taking a cue from Ivar Jacobson (“The road ahead for UML”), some modularity should be introduced in order to facilitate the use of UML in different contexts, organizational, methodological, or operational. Three main overlapping objectives should be taken into consideration:
- Complexity levels: language features should be regrouped into clearly defined subsets corresponding to levels of description. That would open the way to leaner and fitter models.
- Model layers: language constructs should be re-organized along MDA layers if models are to map stakeholder concerns regarding business requirements, system functionalities, and system design.
- Specificity: principled guidelines are needed for stereotypes and profiles in order to keep the distinction between specific contents and “unified” ones, the former set with limited scope and visibility, the latter meant to be used across model layers and/or organizational units.
As it happens, a subset of constructs centered on functional architecture may well meet those three objectives, as it would separate supporting structures (“charpentes” in French) from features whose specifications have no consequences on system architectures.
Artifacts can be organized along layers depending on their semantics. That is the rationale behind the OMG’s MDA (Model Driven Architecture):
- Computation Independent Models (CIMs) describe business objects and activities independently of supporting systems.
- Platform Independent Models (PIMs) describe how business processes are supported by systems seen as functional black boxes, i.e. disregarding the constraints associated with candidate technologies.
- Platform Specific Models (PSMs) describe system components as implemented by specific technologies.
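The three layers can be illustrated on a single business concept. In this sketch the layer names come from MDA, but the contents (a "purchase order" and its operations) are purely illustrative assumptions:

```python
# One business concept ("purchase order") described at each MDA layer.
cim = {  # Computation Independent Model: business view, no system assumed
    "concept": "purchase order",
    "activity": "a customer orders goods and is invoiced",
}
pim = {  # Platform Independent Model: system as a functional black box
    "concept": "purchase order",
    "operations": ["create_order", "validate_order", "issue_invoice"],
}
psm = {  # Platform Specific Model: bound to a concrete technology
    "concept": "purchase order",
    "implementation": {"language": "Java", "persistence": "JPA entity"},
}

def refines(upper: dict, lower: dict) -> bool:
    """Layers are linked by the concept they describe, not by their contents."""
    return upper["concept"] == lower["concept"]
```

The `refines` check is deliberately shallow: what the layers share is the business concept, while every other key is specific to its layer's semantics.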
Broadly speaking, software engineering can be defined as the processing of symbolic artifacts by enrichment, extension, interpretation, translation, transformation, generation, etc. That may be performed at two levels, one targeting language constructs, the other dealing with model contents. While artifact semantics can be managed uniformly at language level, that’s not the case when model contents are processed across contexts, namely from business to system realms.
Retrieving legacy code has something to do with archaeology, as both try to retrieve undocumented artifacts and understand their initial context and purpose. The fact that legacy code is still alive and kicking may help to chart its structures and behaviors, but it may also obscure the rationale of initial designs.
Hence the importance of traceability, and the benefits of a knowledge-based approach to modernization organized along architecture layers (enterprise, systems, platforms) and processes (business, engineering, supporting services).
The main purpose of engineering processes is to balance stakeholder objectives rooted in different shearing layers: business needs, system functionalities, and applications deployment. In a perfect world there would be one stakeholder, one architecture, and one time-scale. Unfortunately, goals are often set by different organizational units, based upon different concerns and rationales, and subject to changes along different time-frames.
The rationale governing the design of development processes may be compared to the one governing cooking processes. While there is an infinity of ingredients, most belong to four categories: water, proteins, carbohydrates, and fats. The ways they react to physical processing and can be combined are set by chemical laws. It is therefore possible to characterize cooking recipes and identify basic states and processing sequences independently of their gourmet flavors. And that is what model driven engineering is about: reasoned processing of models depending on the fundamental properties of their contents.
Given intrinsic constraints on model contents, governance of development processes has to take into account organizational contexts and external dependencies. If shared ownership can be established, collaboration along agile principles is the solution of choice; otherwise more procedural solutions will be necessary, including milestones and phased work units.
Models are representations and as such they are necessarily set in perspective and marked out by concerns.
With regard to perspective, models will encompass whole contexts (symbolic, mechanical, and human components), information systems (functional components), and software components (platform implementation).
With regard to concerns, models will take into account responsibilities (enterprise architecture), functionalities (functional architecture), and operations (technical architecture).
Congruence may be a sensible aim, but perspectives and concerns are not necessarily aligned, as responsibilities or functionalities may cross perspectives (e.g. support units), and perspectives may mix concerns (e.g. legacies and migrations). That conundrum may be resolved by a clear distinction between descriptive and prescriptive models, the former dealing with the problem at hand, the latter with the corresponding solutions, respectively for business, system functionalities, and system implementation.
Assuming that everything and everybody outside a project will hold its breath between inception and completion is arguably a risky bet; more probably the path from requirements to deployment is bound to be affected by changes in business or technical contexts, not to speak of what may happen within the project itself. Such hazards are compounded by scale, as large undertakings entail the collaboration of several teams over significant periods of time. Hence the need for milestones, where expectations are matched to commitments.
All too often when the agile project model is discussed, the debate turns into a religious war with waterfall as the villain. But asking which project model will bring salvation will only bring attrition, because the question is not which is the best but when it is the best.
Basically, Agile’s progressive exploration of problem spaces and solution paths remedies two critical flaws of traditional Waterfall approaches, namely:
- Fixed requirements set upfront: since there is an inverse relationship between the level of details and the reliability and stability of requirements, staking the whole project on requirements fully defined at such an early time is arguably a very hazardous policy.
- Quality as an afterthought: given that finding defects is not very gratifying when undertaken in isolation, delegating the task will offer few guarantees if not associated with rewards commensurate to findings; moreover, quality as a detached concern may easily turn into a collateral damage when set along mounting costs and scheduling constraints. Alternatively, quality checks may change into a more positive endeavor when conducted as an intrinsic part of development.
On that basis, Agile should be the solution of choice when shared ownership can be achieved and external dependencies limited. When organizational or technical divides cannot be avoided Agile with models can provide an effective compromise.
There is no such thing as “statistical facts”. Statistics are made on purpose, usually to support conjectural arguments or to counter questionable ones. Hence, considering statistics per se is like counting fingers when the hand points at the moon.
As far as software engineering is concerned, three main rationales are to be considered: predictive (project planning), preventive (risk management), or corrective (process assessment). While respective estimators may overlap, they may also interfere, one often cited example being code size estimators used to assess productivity.
Functional size measurement is the cornerstone of software economics, from portfolio management to project planning, benchmarking, or ROI assessment. Given that software has no physical features, relevant metrics can only be derived from the functional value of the software under consideration, in other words its functional requirements.
Revisiting Function Points, the objective is to estimate the functionalities supported by a system independently of the technology and tools used to implement it. Those functionalities can only be set in their business and operational contexts, and must be assessed accordingly. Yet, functional metrics should not be confused with business value as the same application may be valued differently when deployed in different business contexts.
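As a concrete sketch of technology-independent functional sizing, here is an unadjusted function point count in the IFPUG style. The weight table is the standard IFPUG one; the counts for the example application are hypothetical:

```python
# Standard IFPUG weights for unadjusted function points.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts: dict) -> int:
    """Sum weighted counts; counts maps (type, complexity) -> number found."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

# Hypothetical counts for a small ordering application.
fp = unadjusted_fp({
    ("EI", "average"): 4,   # 4 * 4  = 16
    ("EO", "low"):     3,   # 3 * 4  = 12
    ("ILF", "high"):   2,   # 2 * 15 = 30
})
# fp == 58, whatever the implementation technology
```

Nothing in the count refers to the platform, which is precisely the point: the same 58 points hold whether the application is built in COBOL or deployed in the cloud, while its business value remains a separate question.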
Quality is a quantity: it’s the probability that something will go amiss. As for any prediction about future events, quality, even set within a strictly demarcated context, could never be fully accounted for. That’s the reason why a sound quality management must clearly distinguish between, on one hand, outcomes whose contingency falls, to some degree, under project capability and responsibility and, on the other hand, risks whose origin can’t be rooted within project responsibilities.
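Treating quality as a quantity can be made literal. Assuming (illustratively) independent defect sources, the probability that something goes amiss is one minus the product of the survival probabilities, and the split between project-owned contingencies and external risks falls out naturally:

```python
import math

def p_something_amiss(failure_probs):
    """Probability that at least one independent defect occurs:
    1 - product of per-source survival probabilities."""
    return 1 - math.prod(1 - p for p in failure_probs)

# Hypothetical per-source defect probabilities.
within_project = [0.02, 0.05]   # contingencies under project capability
external_risks = [0.10]         # risks outside project responsibility

p_project = p_something_amiss(within_project)                   # ~0.069
p_total = p_something_amiss(within_project + external_risks)    # ~0.162
```

The independence assumption is of course a simplification; the sketch only shows why quality "could never be fully accounted for": even driving the project-owned terms to zero leaves the external factor standing.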
Quality management must rest on two pillars: traceability and testability.
Traceability is a prerequisite for QM, whatever the nature of contingency. Testability means that every requirement, specification or product should be open to confirmation, verification or validation.
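Both pillars lend themselves to a mechanical check. In this sketch (requirement and test identifiers are hypothetical), testability demands that every requirement be covered by at least one verification item, and traceability that every verification item point back to requirements that still exist:

```python
# Hypothetical requirements and the verification items that confirm them.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
verifications = {
    "TEST-A": {"REQ-1"},
    "TEST-B": {"REQ-2", "REQ-1"},
}

def uncovered(reqs: set, checks: dict) -> set:
    """Requirements open to confirmation but not yet confirmed by any check."""
    covered = set().union(*checks.values()) if checks else set()
    return reqs - covered

def dangling(reqs: set, checks: dict) -> set:
    """Checks pointing at requirements that no longer exist (broken traces)."""
    return {t for t, targets in checks.items() if not targets <= reqs}

# REQ-3 has no test: a testability gap to be flagged before deployment.
```

Run routinely, the two functions turn traceability and testability from process slogans into gates that a build can pass or fail.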
Reusing artifacts means using them in contexts different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.
Reuse deals with shared assets and mechanisms and is best achieved when managed according to business, engineering, or architecture perspectives:
- Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
- Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
- Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.
The objective of the Capability Maturity Model Integration (CMMI) is to assess and improve organizational performance.
With regard to software development processes, the relevance of assessment fully depends on (1) objective and unbiased indicators for product size and project performance and (2) a transparent mapping between organizational alternatives and process outcomes. Unless those conditions are satisfied, capability and maturity assessments are bound to remain self-referencing.