Blue degree

Principles

Design and Implementation Don't Overlap

Why?
Planning documents that have nothing in common with the implementation do more harm than good. Therefore, do not give up on planning, but minimize the chance of inconsistency.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

One of the fundamental problems of software development is implementations that no longer show any signs of prior planning. Design diagrams hang on the wall that have little to do with the reality of the code. The reason for this is a violation of the fundamental DRY principle: design and implementation are repetitions of the same thing, namely the structure of a software system. Since implementation follows design and accounts for the lion's share of the work, both quickly get out of step if structural changes made during implementation are not fed back into the design. Otherwise, design diagrams are soon worth nothing once implementation has begun.

How can the situation be improved? Should design perhaps be dispensed with if the "structural truth" ultimately lies in implementation? No, certainly not. Design is a must. Without planning, there is no objective. But design and implementation must comply with the DRY principle. That is why design and implementation should overlap as little as possible. Their interface should be thin. If this is the case, they are no longer repetitions, but describe different things. This means: design/architecture does not care about implementation and implementation does not care about architecture.

And where does this dividing line run? Along the so-called components (see Practices below). Architects are not concerned with the internal structure of components. To them, components are black boxes whose class structure is not relevant to the architecture. Conversely, the architecture is irrelevant to a component implementer. What they have to implement follows from the contracts their component imports and exports. They do not need to know any larger context.

The task of architecture is therefore to break software down into components, define their dependencies and describe their services in contracts. These structures are then maintained solely by architects. The task of implementation, in turn, is to realize the components defined by the architecture. How it does so is not relevant to the architecture: the components' internal structure is invisible to it.
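This division can be sketched in code. The following is a minimal sketch (in Python rather than a .NET assembly, and with hypothetical names such as `TariffCalculator` and `FlatRateTariff`): the contract is the only artifact the architecture defines, while the implementing class remains a black box to clients.

```python
from abc import ABC, abstractmethod

# The contract: the only thing the architecture defines and clients see.
# All names here are hypothetical, for illustration only.
class TariffCalculator(ABC):
    """Contract exported by a (hypothetical) tariff component."""
    @abstractmethod
    def price(self, minutes: int) -> float: ...

# The implementation lives inside the component; its internal structure
# (helper classes, private fields) is invisible to the architecture.
class FlatRateTariff(TariffCalculator):
    def __init__(self, base: float):
        self._base = base  # internal detail, irrelevant outside

    def price(self, minutes: int) -> float:
        return self._base

# A client component imports only the contract, never FlatRateTariff.
def monthly_bill(calc: TariffCalculator, minutes: int) -> float:
    return calc.price(minutes)
```

Because the client depends only on `TariffCalculator`, the tariff component can be restructured internally without touching the client.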

Implementation Reflects Design

Why?
Implementation that can deviate from the planning at will leads directly to unmaintainability. Implementation therefore requires a physical framework defined by the planning.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

Architecture and implementation should not overlap so that they do not violate the DRY principle. This avoids inconsistencies that can arise if something is changed on one side without updating this change on the other side.

Nevertheless, the architecture makes statements about the implementation. Not its details, but its basic form. Architecture defines the structural elements and their relationships within a code system. Implementation therefore does not exist independently of architecture, even in the absence of overlaps, but within it, so to speak.

However, this should also be expressed in the implementation. This makes it easier to understand and ensures that the implementation actually follows the architecture. The structural elements defined by the architecture at different levels of abstraction should therefore not be "stirred together" in a large "code pot" (e.g. a large Visual Studio solution). It is much better, also in terms of high productivity and easy testability, to manifest the logical structures of the architecture as physically as possible.

  1. The structures planned by the architecture at various levels of abstraction should be reflected as far as possible in the code organization. On the one hand, this means that the architecture primarily uses physical code units as structural elements. On the other hand, these structural elements should also be clearly visible in the source code or in the code organization in the repository.
  2. When working on the implementation of structural elements and especially within components, it should be impossible to make architectural changes "on the fly". Anyone working in or on a structural element, i.e. a part, must not be able to change the surrounding structure, i.e. the whole, ad hoc. Only if this is guaranteed will the entropy of a software not grow uncontrollably. This is important because the main goal of architecture is to minimize entropy and thus the complexity of software.

Planning is a must. Implementation must not torpedo planning. (Even if findings during implementation may of course have an impact on planning). Planning and implementation must therefore be decoupled. And where this is not possible, planning should work with the means of implementation and implementation should physically reflect planning.
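One hedged way to make planning physically binding is an automated "architecture test" that fails whenever code inside a component imports something outside its declared dependencies. The sketch below is an illustration only; the component and module names (`billing`, `contracts.tariff`, `tariff.internal`) are hypothetical.

```python
import ast

# Hypothetical dependency rules defined by the architecture:
# the billing component may import only the tariff contract.
ALLOWED = {
    "billing": {"contracts.tariff"},
}

def imported_modules(source: str) -> set[str]:
    """Collect the module names a piece of source code imports."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found |= {alias.name for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

def violations(component: str, source: str) -> set[str]:
    """Imports that bypass the architecture's declared dependencies."""
    return imported_modules(source) - ALLOWED.get(component, set())

# billing sneaks past the contract and grabs tariff internals:
bad = "from contracts.tariff import TariffCalculator\nimport tariff.internal"
assert violations("billing", bad) == {"tariff.internal"}
```

Run as part of continuous integration, such a check prevents ad-hoc changes to the surrounding structure from inside a component.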

You Ain't Gonna Need It (YAGNI)

Why?
Things that nobody needs have no value. So don't waste time on them.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Single Developer

The YAGNI principle (You Ain't Gonna Need It) is one of the simplest in software development - and yet, after the DRY principle, probably the most frequently violated. That is why YAGNI stands not only at the beginning of the red degree but also here, towards the end of the path through the value system.

The YAGNI principle is due to the special relationship between requirements accuracy and product materiality in software development. Requirements are notoriously imprecise or changeable and the product in which they are to be implemented is immaterial. Compared to mechanical engineering or building construction, the material is therefore infinitely flexible and can, in principle, be adapted to virtually any requirement with comparatively little effort. High volatility or imprecision therefore meets high flexibility. This seems ideal at first.

However, practice shows that it is precisely in this relationship that the seeds of failure of many projects lie. In the short term, projects try to do the right thing by doing the obvious:

  • Imprecise requirements are often compensated for by products that attempt to make up for the imprecision. The immateriality of software is used to implement it so broadly and flexibly that even unknown or vague requirements are fulfilled in anticipation.
  • Constantly changing requirements are updated in the product as quickly as possible because this is possible thanks to its immateriality.

In the long term, however, such behavior is counterproductive:

  • Anticipatory obedience leads to breadth and flexibility that are not really needed. It realizes features that are not used.
  • Rapid modifications to software due to changing requirements lead to quality erosion in the code. Although software is immaterial and flexible, not every software structure is evolvable or even comprehensible.

Unclear and changing requirement situations against the background of the high fundamental flexibility of software quickly lead to unnecessary effort and brittle code. A large number of projects that have exceeded their budget limits and an even larger number of projects that have become unmaintainable after just a few years are eloquent testimony to this.

CCDs, as professional software developers, see it as their duty to counter such developments on a daily basis. Given the undeniable nature of software - it is and remains immaterial - the remedy lies in how requirements are handled. This is the origin of the YAGNI principle.

The YAGNI principle is like a sharp knife: if you use it, you cut a problem into small cubes of what is immediately necessary. According to the YAGNI principle, only what is unquestionably and immediately useful is implemented. Everything else... well, that comes later. In this respect, YAGNI goes hand in hand with the "decide as late as possible" rule of Lean Software Development.

The YAGNI principle is relevant at all levels of software development and in all phases. Whenever you ask yourself "Should I really go to this effort?" or "Do we really need this?" - even if only timidly and quietly in the back of your mind - this is a use case for the YAGNI principle. It says: when in doubt, decide against the effort.

It sounds easy, but it is difficult. Hence the frequent violations. There are many forces that argue against the decision not to make an effort. "Oh, it's not that much effort" or "If we don't look ahead now, we won't be able to do anything else in the future" are just two obvious justifications for effort, even when there are doubts about its benefits. This applies to architectural decisions (e.g. should we already start with a distributed architecture, even if the current load does not yet require it?) as well as to local decisions (e.g. should the algorithm be optimized now, even if it is not currently causing any performance problems?).

The customer only pays for direct benefits. What the customer cannot clearly specify today is of no use to them. Trying to anticipate it in the implementation therefore invests effort without generating benefits. When the customer later knows exactly what they want, then - and not earlier! - it is time to comply with their wishes. Wherever a project tries to anticipate this will, it risks being contradicted tomorrow by the reality of the customer's actual wishes. A feature - functional or non-functional - that is implemented today without a clear requirement may no longer interest the customer tomorrow. Or it may no longer be as important to them as another feature.

This means for software development:

  • Only implement clear requirements.
  • The customer prioritizes his clear requirements.
  • Implement the clear requirements in the order of their prioritization.
  • Set up the development process and code structure on a large and small scale in such a way that there is no fear of realizing changing and new requirements.
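As a small illustration of the first point, consider a hypothetical requirement "net price plus 19% VAT". A YAGNI-conforming implementation sticks to exactly that and defers all speculative generality:

```python
# Hypothetical requirement: "gross price is net price plus 19% VAT".
# YAGNI version: implement exactly the clear requirement, nothing more.
def gross_price(net: float) -> float:
    return round(net * 1.19, 2)

# Deliberately absent: a currency abstraction, a configurable tax table,
# a pluggable rounding strategy. Each of those is added only when a
# customer clearly asks for it - not in anticipation.
assert gross_price(100.0) == 119.0
```

The speculative alternative (tax-rate registries, strategy interfaces, plugin hooks "for later") costs effort today for benefits nobody has requested.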

As professional developers, CCDs communicate this approach clearly to the customer. This makes them:

  • willing to serve, because they never have to refuse a customer's clear request
  • responsible, because they spend the budget only on clearly formulated benefits
  • protective of the code, because they shield it from being bloated with ultimately unnecessary features

YAGNI is therefore not only a principle that every developer should follow, but also a principle for projects and teams, i.e. at organizational level. YAGNI should always be taken into account, just like DRY. If in doubt, postpone the decision if possible. Otherwise, decide against the effort. This relaxes and streamlines and leads to success more quickly.

Practices

Design before implementation

Why?
A solution must be designed before implementation. Otherwise, there will be no consistent reflection on the solution.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

The task of a developer is to translate requirements into code. To do this, it is necessary to develop a solution for the requirements. Thought must be put into it. But how can this be done in a good way if developers jump straight into coding?

In trivial cases, it may be possible to write code directly. Even then, however, the solution is still being thought about - just unconsciously, and above all during implementation. The developer thinks a little, codes, thinks, codes, and so on. What is missing is a consistent thinking-through of the solution, separate from the implementation.

At the latest, if a group of developers wants to work together as a team, the design must take place separately from the implementation. Otherwise, a fluid division of labor is not possible.

The design enables the team or an individual developer to think about important principles even before coding. For example, methods or classes with multiple responsibilities are not created in the first place, because the Single Responsibility Principle (SRP) can already be considered at the design level. This saves the team the refactoring effort that arises when coding "on the fly".

See also https://flow-design.info.

Continuous Delivery (CD)

Why?
As a clean code developer, I want to be sure that a setup installs the product correctly. If I only find this out at the customer's site, it's too late.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

In the green degree, we set up the continuous integration process for build and test. The continuous integration process ensures that errors are detected quickly during the build and test phase. If, for example, a change to the code means that another component can no longer be compiled, the continuous integration process points out the error shortly after the change has been committed. However, if in the end a setup program is produced that cannot be installed due to errors, we have still not achieved our goal: functioning software that can be installed at our customers' sites.

Consequently, we also need to automate the setup and deployment phases so that they can be carried out at the touch of a button. This is the only way we can be sure that we are producing installable software. And automation ensures that nobody forgets an important step that has to be carried out "on foot". This means that everyone in the team can produce and install the current version of the product ready for installation at any time.
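Such a push-button pipeline can be sketched in a few lines. This is only an illustration of the fail-fast principle, not a real build system; the commands are placeholders (here invocations of the Python interpreter itself):

```python
import subprocess
import sys

# Hypothetical stand-ins for the real build, test, and setup commands.
PIPELINE = [
    [sys.executable, "-c", "print('compile')"],
    [sys.executable, "-c", "print('run tests')"],
    [sys.executable, "-c", "print('build setup')"],
]

def run_pipeline(steps=PIPELINE) -> bool:
    """Run every phase in order; stop at the first failure.

    Because every step is scripted, no manual step can be forgotten,
    and anyone on the team can produce an installable build.
    """
    for cmd in steps:
        if subprocess.run(cmd).returncode != 0:
            print(f"pipeline failed at: {cmd}", file=sys.stderr)
            return False
    return True
```

In practice the steps would invoke the compiler, the test runner, and the installer build, and the whole script would run on every commit.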

See also under Tools.

The most important book in this context is probably Accelerate.

Iterative development

Why?
To paraphrase von Clausewitz: no design, no implementation survives contact with the customer. Software development is therefore well advised to be able to correct its course.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

Of course, software development always progresses from planning to implementation to testing by the customer. However, it is wrong to assume that a project can get by with a planning phase, an implementation phase and a customer test phase. This only works - if at all - in trivial scenarios where all requirements are known in the planning phase. In real projects, however, each phase provides insights for previous phases. The customer test always has consequences for planning and implementation.

However, such findings can only have an impact on a project if the approach is not linear. If there is no way back from a later phase to an earlier phase, feedback is useless.

In order to incorporate feedback into a software product, the development process must contain loops. The loop from the customer test phase back to the planning phase is always necessary. This means that software development can only take place iteratively, i.e. in several iterations over the customer's catalog of requirements. Anyone who tries to deliver "all at once" (big bang) acts contrary to this realization. Instead, the software development process should be planned in such a way that it "bites through" the requirements in small bites. Each of these bites should take no longer than 2-4 weeks from planning to customer testing. Only then is feedback from the customer frequent enough to avoid getting lost in implementation for too long.

Software development is therefore a learning process. In the course of this process, the project team learns something about the customer's requirements. It listens, plans, implements and delivers a software version that reflects its understanding of what it has heard. Then the team listens again, plans further/again according to the latest findings, etc. and so on, always in a circle. Iteration after iteration. Sometimes something from a previous iteration is refined, sometimes something new is added.

But not only the development of software is a learning process. Learning should also take place at an organizational level. The team should not only learn something about the customer, but also about itself. This is why there should always be "stopping points" at which the team reflects on its approach. The findings from such retrospectives then flow into the next iteration of organizational development. Here, the blue level follows on from the red level, which includes daily personal reflection.

Of course, every iteration must also have an end. And in order to know whether you are finished, it must be clearly defined in advance what is to be achieved in the iteration. The achievability of goals can only ever be estimated, and reflection helps to gradually improve the estimates so that they are sufficiently accurate for planning. But when is the previously defined goal achieved? When is it "done"? Our primary goal is to deliver functional software to our customers. Consequently, this goal can only be achieved when we have produced ready-to-deliver software. In particular, this means that the software has been tested and can be installed via a setup. We ensure this continuously through continuous integration. Under no circumstances should we declare a goal achieved shortly before the end of an iteration even though not all tests have passed.

See also under Tools.

Incremental Development

Why?
Only working in increments enables the product owner to provide feedback.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

An increment represents a vertical section through the various aspects of a software system. An increment is therefore a piece of executable software. The increment can be made available to a product owner on a test machine in order to obtain feedback.

Regular feedback at short intervals, at the end of each iteration, is the definition of agility.

If, on the other hand, the approach is horizontal instead of vertical, modules are created that cannot be executed independently. A product owner cannot provide feedback on such modules. This means that a truly agile approach is not possible.

Component Orientation

Why?
Software needs black box components that can be developed and tested in parallel. This promotes changeability, productivity and correctness.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Team

The principles of the CCD value system have so far mainly referred to smaller sections of code. What should be in one method, what should be spread over several? Which methods should a class publish? Where should a client object for a service object come from? So far, it has been about principles for software development on a small scale.

But doesn't the CCD value system have anything to say about larger structures, about software development in the large? What about the software architecture? This is exactly where the principle of component orientation comes in. So far we have used the word "component" in a rather lax and colloquial sense. From now on, however, the term component will describe something very specific that we consider fundamental to evolvable software.

As long as we only think of software as being made up of classes with methods, we are trying to describe computers at transistor level, so to speak. Ultimately, however, this does not work because we suffocate in the wealth of detail. Even grouping the classes into layers doesn't help much. Instead, we need a means of describing larger software structures. But not only that: the means of description should also be a means of implementation - just like classes - so that the model, the plan, the description is reflected in the code.

Although operating system processes are such architectural means, they are ultimately too large. As long as the EXE of an application process consists of several hundred or thousand classes, we gain nothing.

However, the principle of component orientation can help. It states that an application process initially consists of components, not classes. Classes are then only the building blocks of the components. And what is a component? There are a number of definitions of components; two criteria, however, appear indispensable:

  • Components are binary functional units. (A class, on the other hand, is a functional unit at source code level).
  • The services of a component are described by a separate (!) contract. (The service description of a class, on the other hand, lies within the class itself: it is the sum of its method signatures.)

When designing software, after defining the processes, a CCD first looks for the components that the processes should consist of, asking: which "service blocks" make up the application? The CCD regards these blocks as black boxes with respect to their internal class structure: assemblies with a well-defined service but an unknown structure.

A client component C therefore knows nothing about the class structure of its service component S. C only knows the contract of S, which is independent of S's implementation. In this respect, contracts are for components what interfaces are for classes. It is no coincidence that contracts consist to a large extent or even completely of interfaces.

Components are therefore elements of both planning and implementation. To emphasize this, components are implemented physically independently of each other; a tried and tested means of doing this are component workbenches, i.e. separate Visual Studio solutions for each component implementation. This not only promotes concentration on one task, because you only see the code of one component while working on it in the IDE. It also promotes consistent unit testing using mockups, as the source code of other components is not visible. In addition, such code organization increases productivity because components can be implemented in parallel thanks to their separate contracts. And finally, physical isolation counteracts the creeping increase in entropy in the code. This is because where links between components can only be established via a contract, the coupling is loose and controlled.

Component orientation therefore includes not only binary, larger code units with separate contracts, but also the development of the contracts before implementation (Contract-first design). As soon as the contracts that a component imports and exports have been defined, work on the component can begin independently of all others.
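A minimal sketch of contract-first work with a mockup might look as follows (hypothetical names; Python's `unittest.mock` stands in for a mock framework). The client component can be implemented and tested although no real implementation of the contract exists yet:

```python
from abc import ABC, abstractmethod
from unittest.mock import Mock

# Hypothetical contract, defined before any implementation exists.
class CustomerRepository(ABC):
    @abstractmethod
    def find_name(self, customer_id: int) -> str: ...

# Client component: implemented and tested in parallel, against the
# contract only - the real repository component need not exist yet.
class GreetingService:
    def __init__(self, repo: CustomerRepository):
        self._repo = repo

    def greet(self, customer_id: int) -> str:
        return f"Hello, {self._repo.find_name(customer_id)}!"

# Isolated unit test with a mockup standing in for the service component.
repo = Mock(spec=CustomerRepository)
repo.find_name.return_value = "Ada"
assert GreetingService(repo).greet(42) == "Hello, Ada!"
```

Because the client only ever sees the contract, both components can be developed in parallel and the client's unit tests stay isolated.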

See also under Tools.

For the term component, see also this blog post about the module hierarchy.

Test First

Why?
The customer is king and determines the form of a service. Service implementations are therefore only a perfect fit if they are driven by a client.
Changeability  
Correctness  
Production efficiency  
Continuous improvement  
Single Developer

If component orientation calls for component contracts to be defined independently of their implementation, the question arises as to how this should be done. Through round-table discussions? That is certainly one way. A better way, however, is not to spend a long time drafting contracts on a whiteboard, but to pour them into code immediately. Component contracts - or more generally: every code interface - ultimately serve as an API for other code. It is therefore only logical and effective to specify interfaces from the point of view of that client code.

This is the concern of Test first. Test first is based on the idea that functional units (methods, classes, etc.) are characterized by client-service relationships. These relationships revolve around the interface between client and service. And this interface should be determined by the client. As the customer of the service, the client is king. The service should serve him and the interface of the service should therefore be geared towards him.

For this reason, the interfaces of a software's code units are defined from the outside in. On the outside, at the user interface, stands the ultimate client: the user. The user defines the visual/haptic interface of the UI code units. These in turn are the clients of underlying code layers, which are in turn clients of deeper layers, and so on. The services and interfaces of the deepest code layers can therefore only be determined once those of the layers above have been determined.

This contradicts the frequent approach of bottom-up definition of code units. Projects often start by defining and implementing a data access layer. This is understandable, because such fundamental functionality is apparently the prerequisite for everything else. But this approach is problematic, as many failed projects show:

  • Anyone who specifies and implements from the bottom up, from the inside out, only offers the customer value at a very late stage. This is at the very least frustrating, if not counterproductive.
  • If you proceed bottom-up in the specification, you specify without knowing the exact requirements of the ultimate client, the user. What is specified therefore runs the risk of being too general and thus unwieldy in the end - or of simply not being used (a violation of the YAGNI principle, see above and in the red degree).
  • If you implement from the bottom up, you run the risk of not really decoupling. Because if deeper layers are required to implement higher layers, then no truly isolated unit tests with dummies are likely to be used and no inversion of control either.

However, clean code developers avoid these problems. They specify interfaces not only before the implementations (contract-first, see component orientation above), but also from the outside in and very practically through coding. With the means of automated testing, it is very easy to define interfaces in small steps in the form of tests.

Test first thus adds a semantic side to syntactic contracts (e.g. interfaces). In the absence of other formal methods for specifying semantics, tests are the only way to formalize requirements. If you want to assign a component to a developer for implementation, it is therefore a good idea to not only specify its "interface" (API) syntactically, but also the desired behavior in the form of tests.

This has many advantages:

  • The form of an interface is directly client-driven and therefore maximally relevant. YAGNI has no chance.
  • The tests are not just tests, but also specification documentation. Users of an interface and implementers can study them in equal measure. Separate documentation is largely superfluous. This satisfies the DRY principle.
  • The specifications are not just passive texts, but executable code. Once an implementation is available, it can be checked against these tests. Specification and testing are therefore not time-consuming successive phases. This increases productivity. Quality assurance is thus already upstream of implementation.
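A tiny illustration of the test-first order (hypothetical example): the test is written first, as the client-driven specification, and only then is the implementation written to satisfy it.

```python
import re

# Step 1: the specification, written as an executable test BEFORE the
# implementation exists. It records exactly what the client needs.
def test_slugify():
    assert slugify("Clean Code Developer") == "clean-code-developer"
    assert slugify("  YAGNI!  ") == "yagni"

# Step 2: the implementation - just enough to satisfy the spec,
# no speculative options or flags (YAGNI).
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # the spec doubles as regression test and documentation
```

The test fixes the interface (name, parameter, return value) and its semantics before a single line of the implementation is written.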

See also under Tools.

Continue with the white degree
