Grady Booch, the author of Object-Oriented Analysis and Design with Applications, said that "clean code reads like well-written prose." Unfortunately, I haven't had the opportunity to read the book, but it's a great quote to start with.
Clean Code is Not a Choice
My experience spans more than ten years of software development in both large and small organizations. Over this period, I had the opportunity to examine code from various vantage points: as a developer, a manager, and a customer. This allowed me not only to see how code standards function but also to understand their influence on products, people, and processes. I hope this article is both interesting and informative for developers as well as business people.
Let's start by talking about why clean code matters and who needs it. We create software, the antithesis of hardware: the whole point of software is to be easy to modify.
It's well known that typing speed has little bearing on development speed, because reading code takes considerably more time than writing it. That's why it would be strange to ask about symbols per second when interviewing a developer.
However, one of my favorite interview questions is: "Which code is preferable: one that works but isn't understandable, or one that doesn't work but is simple to understand?" Many software engineers choose the working code. Still, bugs can be fixed, while incomprehensible code cannot be modified. It's no longer software.
What exactly is cleanliness? It's a measure of clarity. Developers can't troubleshoot or build new features in code they don't understand. Clean code is not about bugs, quality, or system performance; it's the property that makes system modification possible.
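To make "clarity" concrete, here is a minimal sketch. Both functions below are hypothetical and behave identically; the names, constants, and the trial-period scenario are my own illustration, not from the original text. Only one of them reads like well-written prose.

```javascript
// Hard to understand: what do d, x, and 86400000 mean?
function f(d, x) {
  return (x - d) / 86400000 > 30;
}

// Clean: the intent is obvious without any comments.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const TRIAL_PERIOD_DAYS = 30;

function isTrialExpired(signupDate, now) {
  const daysSinceSignup = (now - signupDate) / MS_PER_DAY;
  return daysSinceSignup > TRIAL_PERIOD_DAYS;
}
```

Both versions compile and produce the same answers; only the second one can be safely modified by someone who wasn't in the room when it was written.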
So, who needs a well-written program? First of all, whoever will make the modifications. In most cases, that's the developers who wrote it. So programmers assist their future selves by writing clean code; as a result, they spend less time adding new features or fixing bugs. The customer, in turn, pays for the time and effort invested.
Maintaining a mess is expensive, even more costly than maintaining clean code. So developers should help both their future selves and the future business in this respect.
Sometimes, under the strain of business demands, we get the impression that we can do something "quick and dirty" and clean up later, since we truly need it right now. However, this approach slows down the current task, not the next one. Contrary to what many people believe, the future arrives much sooner than we expect: not just the next day, but at code review, at testing, and when we hear "it's really fantastic, but let's try something else."
Accept the "No" option when it's correct
Time is a valuable currency. For a business, time is measured in billable development hours and in time-to-market losses. Taken together, this can amount to a significant sum of money, which is why customers will always want everything yesterday, and not as previously agreed, because the market situation has changed. That's normal. According to Adizes' Corporate Lifecycles, businesses that don't behave this way are already at least in the Fall stage.
Developers must set realistic expectations for clients; those expectations allow us to make judgments. We should not demand that developers adjust their estimates just because we need features faster. An estimate is not a bargaining position; it's data for making decisions. When we pressure developers on their estimates, it's not a trade-off between code quality and finished features; we lose both. Worse yet, we end up with false expectations and make poor judgments based on incorrect data.
"The only way to go fast is to go well," says Robert C. Martin, one of the most renowned software engineers. Developing is similar to cooking: we won't get scrambled eggs faster by turning up the heat; we'll get burned eggs. The same holds in software. Developers say "no" because they have to set appropriate expectations.
But what if we have a deadline, and developers inform us they cannot deliver the required features by then? The correct question is: "What can you complete before the deadline?" They will most likely suggest delivering some crucial components, building a simpler version of the requested system, or even creating a stop-gap solution. That's fantastic news! The business can then decide based on realistic expectations and data.
How can businesspeople anticipate a code disaster? Are there any automatic clean code metrics? Of course: static code analysis tools like SonarQube provide a wide range of metrics, including cognitive complexity. Naturally, excellent results don't guarantee clean code, but bad ones show with certainty that something went wrong.
The first group of metrics covers bugs, vulnerabilities, and code smells. Vulnerabilities and security hotspots are better investigated right away, though hopefully most of them will turn out to be false positives. Bugs and code smells are a different matter: we rarely need to repair them all immediately; the ideal approach is to concentrate on the new code. The objective follows the Boy Scout rule: "Always leave the campground cleaner than you found it." It helps you refactor the code step by step, paying the most attention to the parts that are frequently updated.
The standard rule of thumb is that the number of problems should not increase.
Simplifying cognitive complexity
The cognitive complexity metric in SonarQube measures how difficult it is to comprehend the code.
The most essential thing is to apply the rule so that a code smell is raised as soon as a piece of code becomes difficult to understand. Change the default setting of the "Cognitive complexity of functions should not be too high" rule in the Quality Profiles section. The standard threshold of 15 misses almost everything; you can go as low as 5.
The same approach should be applied to the rule "Cyclomatic complexity of functions should not be too high." This metric also indicates how many unit tests you'll need to cover all branches of the code.
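A short sketch of what lowering these thresholds pushes you toward. The discount-rule scenario and both function names are hypothetical, invented for illustration; the point is that the two versions compute the same result, but the flattened one scores far lower on cognitive complexity.

```javascript
// Nested conditions pile up cognitive complexity: every level of
// nesting adds to the score, and the reader must track all of them.
function discountBefore(customer) {
  let discount = 0;
  if (customer) {
    if (customer.isActive) {
      if (customer.years > 5) {
        discount = 20;
      } else {
        if (customer.years > 1) {
          discount = 10;
        }
      }
    }
  }
  return discount;
}

// Guard clauses and early returns flatten the logic: each line can be
// read in isolation, so an analyzer like SonarQube scores it lower.
function discountAfter(customer) {
  if (!customer || !customer.isActive) return 0;
  if (customer.years > 5) return 20;
  if (customer.years > 1) return 10;
  return 0;
}
```

With a threshold of 5, the first version would be flagged and the second would pass, which is exactly the nudge the rule is supposed to give.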
Should we do unit testing? Isn't it expensive?
Tests are crucial not only for code validation but also for readability. If you compare the graphs of cognitive complexity and cyclomatic complexity, you'll see that they are very similar. That's because a function that accumulates if/else/case statements, side effects, extra arguments, and return statements is hard both to read and to cover with tests.
Here's a simple guideline: the code needs refactoring if you can't create a test quickly. The most common issues with unit testing are actually caused by poor coding.
Issues arise when a developer writes source code first and only then attempts to cover it with tests: the code was not built with testing in mind. You spend time on manual testing, then on covering the code with tests, which is now hard to do, and then again on refactoring, because it wasn't feasible to write the tests from the start. What is the most common outcome? That's right: "Finishing this task right now is more essential than writing those tests." And the bad code goes to production.
How can you avoid wasting time on this? It's quite easy: check your code with tests, not by hand. How many console.log checks per minute can a developer perform? And how many thousands of assertions can a unit test suite run in the same time? A positive side effect is that automated tests don't misread the output, which cuts down on bug-fixing time.
There is a well-known experiment in which a developer repeats the same small task multiple times, with and without TDD. The TDD solutions consistently felt slower to the developer, yet they were measurably faster.
How does this method affect real projects? Scientific studies offer various opinions, but only one examines the relationship between TDD and maintainability, which, in my view, demonstrates how disconnected academic work is from industry. Despite a decline in initial productivity, the maintainability study revealed a striking reduction in the average time per change request, from 80+ hours to 60 hours. A decrease in total cyclomatic complexity from 6-7k to 4.5k might explain it: code designed around tests is more straightforward and easier to read and modify.
It should be mentioned that the developers in the study had no prior experience with TDD, which might explain their poor performance in the first stage. Moreover, in today's world of Agile development, driven by user feedback rather than preconceived ideas, projects have become a stream of change requests. As a result, we may infer greater productivity in the long run.
Of course, only a few teams can use TDD exclusively, but this is an excellent example of how testing done right speeds up development rather than slowing it down. Even if you just start by replacing manual testing with unit tests, you'll be pleased with the outcome.
What is the ideal percentage of test coverage? In a perfect world, 100%, but in practice it depends on the project. It should certainly not be zero, and all tests should pass. Coverage isn't a strict measure of quality but rather an indicator to watch. The rule of thumb is that new code should be better covered than old code. When test coverage decreases over time, there's a problem.
Keep the code as DRY as possible
The DRY principle, or "Don't Repeat Yourself," is one of the essential rules of programming. Duplications are one of the crucial metrics revealed by SonarQube.
Duplicated code is a serious problem. Consider the scenario where a developer must make the same change in numerous places instead of one. The time required to develop the feature multiplies by the number of locations where it must be applied. And it doesn't stop there: it's really easy to overlook some of those places, so duplication adds not only time but also potential bugs.
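A minimal sketch of the fix. The user-registration scenario, the regex, and every function name here are hypothetical, invented for illustration; the pattern is simply extracting the duplicated fragment so a future change lands in one place.

```javascript
// Before: the same validation is duplicated in two handlers,
// so a fix to the email check must be applied twice (and one
// of the two places is easy to miss).
function registerUser(email) {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) throw new Error('Invalid email');
  return { email }; // ...create the account
}
function inviteUser(email) {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) throw new Error('Invalid email');
  return { email }; // ...send the invitation
}

// After: one place to change, one place to fix.
function assertValidEmail(email) {
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) throw new Error('Invalid email');
}
function registerUserDry(email) {
  assertValidEmail(email);
  return { email };
}
```

SonarQube's duplication metric flags exactly this kind of copy-pasted fragment.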
Duplication may occasionally be necessary. This happens most often when identical pieces of code have different reasons to change. For example, two methods compute billable and non-billable employee hours. Today both types of hours happen to be measured in the same way, but the numbers are not connected, and the calculation logic has different reasons to change. This is known as "false duplication."
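The billable/non-billable example can be sketched in code. The function names and the rounding scenario in the comment are my own illustration of the idea, not from the original text.

```javascript
// Today both calculations happen to look identical...
function billableHours(entries) {
  return entries.reduce((sum, e) => sum + e.hours, 0);
}
function nonBillableHours(entries) {
  return entries.reduce((sum, e) => sum + e.hours, 0);
}
// ...but merging them into one shared function would couple two
// business rules that change for different reasons. If, say, billable
// hours later have to be rounded up to the nearest quarter-hour,
// only billableHours should change; a shared helper would force the
// change on both, or force the helper to grow a flag.
```

Keeping the two functions separate here is deliberate: the duplication is in the text of the code, not in the business rule behind it.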
Since "false duplication" is a rather unusual pattern, we should aim to eliminate as much duplicate code as possible. The number may never reach zero, but it should keep improving. Depending on the project, an alarm threshold between 1% and 5% is reasonable.
Clean code is not a choice; clean code should be the standard. It's incredibly costly for a business to maintain a code mess. So, it's important to create clean coding practices by writing unit tests, refactoring existing code, and keeping it as DRY as possible.
Simple rules for business people:
Trust the developers and their estimates.
Use static code analysis tools.
The number of bugs and code smells should not grow over time.
Lower the thresholds for cognitive and cyclomatic complexity of functions.
New code should be covered by tests more thoroughly than the rest of the codebase.
Code duplications should be eliminated.
Please do not hesitate to contact us if you want to learn more about the code standards or need assistance in implementing them into your project.