Principles
Open Closed Principle (OCP)
Because the risk of destabilizing a previously error-free system with new features should be kept as low as possible.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
The Open Closed Principle (OCP) states that a class must be open to extension, but closed to modification. It is another of the SOLID principles. The following code example illustrates the problem that arises if the principle is not followed:
```csharp
public decimal Price()
{
    const decimal RegularCustomerDiscount = 0.95m;
    switch (customerType)
    {
        case CustomerType.OneTimeCustomer:
            return quantity * unitPrice;
        case CustomerType.RegularCustomer:
            return quantity * unitPrice * RegularCustomerDiscount;
        default:
            throw new ArgumentOutOfRangeException();
    }
}
```
The problem with this form of implementation is that the class must be modified whenever another type of price calculation is required. The danger is that errors creep in during this modification and the existing functions no longer work properly. Even if automated unit and integration tests are available, there is a risk of introducing new bugs, because 100% test coverage cannot be achieved. What is needed is a way to make the class extensible without having to modify the class itself. This can be achieved, for example, with the help of the Strategy Pattern:
```csharp
public interface IPriceCalculator
{
    decimal Price(int quantity, decimal unitPrice);
}

// Inside the class whose price is being calculated:
private IPriceCalculator priceCalculator;

public decimal Price()
{
    return priceCalculator.Price(quantity, unitPrice);
}

public class OneTimeCustomer : IPriceCalculator
{
    public decimal Price(int quantity, decimal unitPrice)
    {
        return quantity * unitPrice;
    }
}

public class RegularCustomer : IPriceCalculator
{
    private const decimal RegularCustomerDiscount = 0.95m;

    public decimal Price(int quantity, decimal unitPrice)
    {
        return quantity * unitPrice * RegularCustomerDiscount;
    }
}
```
The actual calculation of the price is delegated to other classes via an interface. This makes it possible to add new implementations of the interface at any time. The class is therefore open to extension, but at the same time closed to modification. Existing code can, for example, be redesigned with the refactoring Replace Conditional with Strategy so that the Open Closed Principle is adhered to.
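The same structure can be sketched in a few lines of Python (all names here are invented for illustration): once the price calculation sits behind a strategy interface, a new kind of calculation, such as a hypothetical bulk discount, is added as a new class while the existing order class stays untouched.

```python
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    """Strategy interface: each kind of price calculation is one implementation."""
    @abstractmethod
    def price(self, quantity: int, unit_price: float) -> float: ...

class OneTimeCustomer(PriceCalculator):
    def price(self, quantity, unit_price):
        return quantity * unit_price

class RegularCustomer(PriceCalculator):
    DISCOUNT = 0.95
    def price(self, quantity, unit_price):
        return quantity * unit_price * self.DISCOUNT

class Order:
    """Closed to modification: pricing varies only through the injected strategy."""
    def __init__(self, quantity, unit_price, calculator: PriceCalculator):
        self.quantity = quantity
        self.unit_price = unit_price
        self.calculator = calculator

    def price(self):
        return self.calculator.price(self.quantity, self.unit_price)

# Extension without modification: a new strategy is added, Order stays untouched.
class BulkCustomer(PriceCalculator):
    def price(self, quantity, unit_price):
        return quantity * unit_price * (0.9 if quantity >= 100 else 1.0)
```

`Order(100, 2.0, BulkCustomer()).price()` then applies the new discount without a single change to `Order`.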
Sources
Source | Author | Brief description |
---|---|---|
| Robert C. Martin | Article on the Open Closed Principle, published in 1996 in The C++ Report |
Tell, don't ask
High cohesion and loose coupling are virtues. Publicly exposed state details of a class contradict them.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
To put it somewhat provocatively, classes should not have property getters. These tempt the user of a class to make decisions based on values provided by an object. Instead of telling the object what it should do, it is asked questions in order to then make external observations about the internal state of the object.
One of the core principles of object-oriented programming is Information Hiding (see also the yellow degree). No class should expose details that reveal how it is implemented internally. If a class requires internal state for its work, this is typically stored in an internal field. If this value is also visible to the outside, users are tempted to base their own decisions on this actually internal state of the object. This quickly degrades the class to pure data storage. An implementation in which an object is told what to do is always preferable, because then the user no longer has to care how the class accomplishes the task internally.
As a result of the Tell-don't-ask principle, objects with behavior are created instead of "dumb" data-storage objects. The interaction between the objects is loosely coupled, as the objects do not have to make assumptions about their collaborators. But that's not all! If objects do not publish their state, they retain decision-making authority. This increases the cohesion of the decision-making code, because it is pooled in one place.
A typical code example is shown below. Instead of first asking the logger whether trace messages are activated (Ask), the logging library should be instructed directly to output the trace message (Tell). The library then decides internally whether the message is logged or not.
```csharp
if (_logger.Trace())
{
    _logger.TraceMsg("... a message ...");
}
```
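The Tell variant might look like the following Python sketch (`Logger` and its members are invented for illustration): the caller always tells the logger to trace, and the enabled check happens inside the object.

```python
class Logger:
    """Tell, don't ask: the caller hands over the message unconditionally,
    and the logger decides internally whether tracing is enabled."""
    def __init__(self, trace_enabled=False):
        self._trace_enabled = trace_enabled
        self.messages = []          # stand-in for the real log output

    def trace(self, message):
        if self._trace_enabled:     # the decision lives inside the object
            self.messages.append(message)

logger = Logger(trace_enabled=False)
logger.trace("... a message ...")   # Tell: no prior question about the state
```

The caller no longer depends on the logger's internal state; switching tracing on or off is entirely the logger's business.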
Law of Demeter (LoD)
Dependencies of objects across several links of a service chain lead to undesirably tight coupling.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
The Law of Demeter is about limiting the interaction between objects to a healthy level. It can be simplified to "Don't talk to strangers". According to the Law of Demeter, a method should only use the following other methods:
- Methods of its own class
- Methods of its parameters
- Methods of associated classes
- Methods of self-created objects
However, it should be noted that pure data-holding classes also make sense from time to time. The Law of Demeter does not have to be applied to these. For example, it may make sense to distribute configuration data hierarchically across several classes, so that in the end the following access to a value could result:
int margin = config.Pages.Margins.Left;
If the Law of Demeter were applied strictly here, only access to config.Pages would be permitted.
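One way to comply with the Law of Demeter in such a case is to let each object forward the question to its direct collaborator, so that callers only ever talk to their immediate neighbour. A Python sketch with invented names:

```python
class Margins:
    def __init__(self, left):
        self.left = left

class Pages:
    def __init__(self, margins):
        self._margins = margins

    def left_margin(self):
        # Pages talks only to its own collaborator, Margins.
        return self._margins.left

class Configuration:
    def __init__(self, pages):
        self._pages = pages

    def left_page_margin(self):
        # Callers stop at Configuration instead of reaching through the chain.
        return self._pages.left_margin()

config = Configuration(Pages(Margins(left=20)))
margin = config.left_page_margin()   # instead of config.Pages.Margins.Left
```

The price is a delegation method per level, which is why the text's caveat holds: for pure data-holding classes such strict application is often not worth it.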
Practices
Continuous Integration (CI)
Automation and centralization of software production make you more productive and reduce the risk of errors during delivery.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Team.
The integration of software components is often postponed and carried out "by hand" in a time-consuming and error-prone manner. However, the software should actually be fully executable at all times. Continuous integration is a process that ensures that the entire code is compiled and tested after changes have been submitted.
The continuous integration process is particularly important for teams, as it ensures that after changes are submitted, the entire code is compiled and tested, not just the part that a developer has just worked on. The automated tests should be carried out by every developer before they submit changes to the central version control system. Continuous integration does not change this. To ensure that the tests are actually executed and errors are detected at an early stage, they always run on the Continuous Integration Server. This does not release the developer from executing the tests before the commit, as faulty code that has been checked into version control hinders the entire team, possibly even other teams. The continuous integration process ensures that errors are detected as early as possible across all teams.
For the continuous integration process, numerous software tools are available. In addition to the continuous build and test, which takes place immediately when changes are committed to version control, continuous integration can also be used to automate longer-running processes, such as database tests. These are then only executed at night, for example. At the green degree, only the build and test process is considered; the continuous setup and deployment of the software only follows later in the blue degree.
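What such a build-and-test process looks like depends on the tool. As a rough illustration only, here is a minimal pipeline sketch in the syntax of one modern CI service (GitHub Actions), assuming a .NET project; the project layout and tool choice are assumptions, not part of the original text:

```yaml
# Hypothetical pipeline: compile and test the entire code base on every commit.
name: continuous-integration
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build everything
        run: dotnet build
      - name: Run all automated tests
        run: dotnet test
```

The essential point is independent of the tool: every commit triggers a build and test run of the whole code base, not just of the part a developer happened to work on.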
Martin Fowler has written a very good article on this topic, which can be read at http://www.martinfowler.com/articles/continuousIntegration.html
See also under Tools.
Static Code Analysis
Trust is good, control is better - and the more automated it is, the easier it is.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
How is the quality of a code unit, e.g. a class or component, actually defined? Is it enough that it functionally fulfills the customer's requirements? Is it enough that it is fast enough and scalable enough? Automatic tests and ultimately tests by the customer provide information on this. Without such conformity to requirements, software naturally has no relevant quality. If it is of no use to the customer, there is no need to ask any further questions.
On the other hand, contrary to widespread belief, conformity to requirements is not enough. High quality is not just about functionality and performance. In addition to the functional and non-functional requirements, there is a mostly unspoken, hidden requirement: customers always want software not only to meet their requirements today, but also tomorrow and the day after tomorrow. Customers want investment protection through changeability.
For customers, this requirement is usually implicit. They believe that it goes without saying that an intangible product such as software can be adapted to new requirements almost infinitely and at the touch of a button. Even managers who do not come from software development often believe this. And even software developers themselves!
However, the misunderstanding about software could hardly be greater. Changeability is neither a matter of course in the sense of a goal that every software developer pursues anyway, nor does it come about by itself. Rather, changeability is hard work and must be constantly weighed up against other values.
If conformity with the other requirements can be determined by (automated) tests, what about changeability? Can the quality of code with regard to its evolvability also be measured automatically? In part. Not all aspects that make software evolvable can be tested automatically. Whether software is kept open for extensions through an add-in concept, for example, cannot be recognized automatically.
Nevertheless, there are metrics whose values can be "calculated" for software, and tools help with this. Such tools should therefore be used in every software project.
- For legacy code, the tools can determine the status quo and thus define a baseline against which the further development of the code (for the better) can be compared.
- For new code that was planned with changeability in mind, such static code analysis shows whether it fulfills the ideal of planning.
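One of the simplest such metrics is cyclomatic complexity: the number of decision points in a piece of code plus one. The following Python sketch is only a toy illustration of how such a number can be computed (a real analysis tool parses the syntax tree instead of scanning text and would not be fooled by comments or string literals):

```python
import re

# Very rough cyclomatic-complexity estimate: count branching keywords.
DECISION_KEYWORDS = r"\b(if|elif|for|while|case|catch|and|or)\b"

def rough_cyclomatic_complexity(source: str) -> int:
    # complexity = number of decision points + 1
    return len(re.findall(DECISION_KEYWORDS, source)) + 1

code = """
def price(quantity, unit_price, regular):
    if regular and quantity > 0:
        return quantity * unit_price * 0.95
    return quantity * unit_price
"""
print(rough_cyclomatic_complexity(code))  # one 'if' + one 'and' -> 3
```

Run against the whole code base at every build, even such a crude number yields the baseline and trend described above.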
Clean code developers are not satisfied with merely testing code automatically. They also always keep an eye on its changeability, because they know that customers are just as interested in it, regardless of whether they have explicitly said so or not.
See also under Tools.
Inversion of Control Container
Only things that are not hard-wired can be reconfigured more easily.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
The CCD already got to know the Dependency Inversion Principle in the yellow degree. There, the dependencies were still resolved "by hand". The next logical step is to automate the resolution of dependencies. Two methods are available for this purpose:
- Locator
- Container
Both use a so-called Inversion of Control container (IoC container). Before the container can be used, the classes must be registered with it. The container can then deliver instances of the registered classes. With the locator, this is done explicitly. This has the advantage that the dependencies do not all have to be listed in the constructor of the class; for cross-cutting tasks such as logging, this is a common procedure. As a rule, however, the dependencies are listed as parameters of the constructor. This has the advantage that all dependencies are visible. The container is thus able to resolve the dependencies implicitly by recursively instantiating all required objects via the container.
IoC containers become important as soon as the number of classes grows. If Separation of Concerns is applied, many small classes with manageable tasks are created, and assembling instances of these classes becomes correspondingly more complex. This is precisely where the IoC container comes in, helping to instantiate and connect the many small objects.
Another advantage of IoC containers is that the life cycle of an object can be determined by configuration. If there is only to be a single instance of an object at runtime (singleton), the container can be instructed to always deliver one and the same instance. Other life cycles, such as one instance per session, are also supported.
To avoid becoming dependent on a specific IoC container when using a locator, the Microsoft Common Service Locator (see Tools) can be used. This offers a standardized interface to the common IoC containers.
To understand the mechanics behind an IoC container, it is useful to implement the functionality yourself. The aim is not to implement a complete container, but only the basic functions.
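Such a self-made container fits into a few dozen lines. The following Python sketch (all names invented for illustration) shows the two basic functions, registering and resolving, including recursive constructor injection and a simple singleton life cycle:

```python
import inspect

class SimpleContainer:
    """Minimal IoC container sketch: register() maps an abstraction to an
    implementation; resolve() builds instances and recursively satisfies
    constructor parameters via their type annotations."""

    def __init__(self):
        self._registrations = {}
        self._singletons = {}
        self._singleton_flags = {}

    def register(self, abstraction, implementation, singleton=False):
        self._registrations[abstraction] = implementation
        self._singleton_flags[abstraction] = singleton

    def resolve(self, abstraction):
        if abstraction in self._singletons:      # life cycle: reuse singletons
            return self._singletons[abstraction]
        implementation = self._registrations.get(abstraction, abstraction)
        # Recursively resolve every annotated constructor parameter.
        parameters = inspect.signature(implementation.__init__).parameters
        arguments = [self.resolve(p.annotation)
                     for name, p in parameters.items()
                     if name != "self" and p.annotation is not inspect.Parameter.empty]
        instance = implementation(*arguments)
        if self._singleton_flags.get(abstraction):
            self._singletons[abstraction] = instance
        return instance

# Invented sample classes: OrderService lists its dependency in the constructor.
class ILogger:
    def log(self, message): raise NotImplementedError

class ConsoleLogger(ILogger):
    def log(self, message): print(message)

class OrderService:
    def __init__(self, logger: ILogger):
        self.logger = logger
```

After `register(ILogger, ConsoleLogger, singleton=True)`, resolving `OrderService` yields an instance whose logger the container created and injected, and every further resolution of `ILogger` returns the same instance. Real containers add configuration, further life cycles and much more, but the mechanics are the same.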
See also under Tools.
Share Experience
Those who pass on their knowledge not only help others, but also themselves.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Single Developer.
Professional work naturally requires up-to-date knowledge. Of course, this does not mean that anyone can or should know everything about software development, even if it is only on the .NET platform. Up-to-date knowledge refers to one's own areas of specialization, whatever they may be. The practice of regularly gathering information via various media is therefore part of other degrees.
For several reasons, however, such information gathering should only be one of two sides of the "learning" coin. The other is the passing on of information, the transfer of knowledge. In our view, true professionalism involves not only "research", but also "teaching". Because only with "teaching" does true reflection and penetration of a subject take place.
Applying what you have heard or read is one thing, and of course you notice gaps in your understanding while doing so. However, the exploration of a subject is naturally limited by its intended use: if you only research a technology or concept as far as you currently need it, you don't necessarily delve deeply into it.
It is completely different when learning is done with passing on in mind. Those who learn not only for themselves, but also for others, learn more deeply. This becomes clear as soon as you try to communicate what you have (supposedly) learned: questions quickly arise that you have never asked yourself, because others always have completely different perspectives.
That is why we believe that only those who repeatedly expose themselves to teaching, passing on and imparting knowledge are truly solid learners. Only those who not only apply what they have learned, but also formulate it in their own words for an audience, realize in the process how deep their knowledge really is. Because if the question marks start to pile up among the "students", then something is not quite right.
Of course, a real audience is best for this. Every CCD should therefore look for regular opportunities to pass on their knowledge orally (e.g. at events with colleagues or user group meetings). They are sure to receive immediate feedback. Alternatively, or as a supplement, written statements of competence are also useful. A blog can be set up in 5 minutes and specialist journals are constantly looking for new authors. Feedback may not come back as directly here, but the textual formulation of knowledge is still a very good exercise.
Clean code developers from the green degree upwards therefore not only learn "passively" by absorbing information, but also "actively" by passing on their knowledge in presentations or texts. This may be unfamiliar, but so was Continuous Integration at some point. In any case, active knowledge transfer is a good exercise for deepening one's own skills, according to the motto: "Do good and talk about it" ;-)
It goes without saying that "teaching" also has a benefit for the listeners/readers. But benefits for others are not as motivating as benefits for yourself. That's why we emphasize the benefits of knowledge transfer for the clean code developer.
Error Measurement
Only those who know how many errors occur can change their approach to reduce the error rate.
Values: Changeability, Correctness, Production efficiency, Continuous improvement. Applicability: Team.
Mistakes happen during software development. They happen in all phases: misunderstood or unclearly formulated requirements lead to errors, as do faulty implementations. In the end, an error is anything that results in the customer receiving software that does not meet their requirements. Iterative procedures and reflection are two building blocks that serve to improve the process. However, in order to recognize whether an improvement is actually occurring, there must be a metric that can be used to measure development for the better.
Errors can be measured by counting them or by measuring the time they consume. The focus is not on precision, as long as the measurement method provides comparable data; the trend over several iterations should become apparent. Furthermore, it is not a question of clarifying who is responsible for an error. In the end, it does not matter who caused the error, as long as the team learns from it and improves its process.
Which errors should be measured? Not the errors that occur during development; these cannot be avoided and ideally lead to the delivery of an error-free product at the end of an iteration. Rather, it is about the errors that are reported back after an iteration by the customer or their representative (e.g. product owner or support). These are errors that hinder the implementation of new requirements. The errors to be measured are therefore those that occur when you believe they should no longer exist ;-) The point in the process at which a team curses because yet another error interferes with other work must be determined individually by each team.
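Because only the trend matters, even a very simple count per iteration is enough. A minimal Python sketch with invented figures:

```python
# Invented defect counts reported back by the customer after each iteration.
reported_defects = [9, 7, 7, 4]

def improving(counts):
    """True if the number of reported errors trends downwards over the iterations."""
    # Compare each iteration with its predecessor; negative deltas are improvements.
    deltas = [later - earlier for earlier, later in zip(counts, counts[1:])]
    return sum(deltas) < 0

print(improving(reported_defects))  # from 9 down to 4 reported errors -> True
```

The absolute numbers are irrelevant; what counts is that the same crude measurement, repeated every iteration, shows whether the process changes are working.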
It continues with the blue degree.