It may be tempting to think that you can lower the quality of your software to get it done faster, but this will only cost you more time and money in the long run. Clean code is not a choice; it's essential for producing high-quality software. In this blog post, we'll go over what clean coding is and how following these practices can benefit your development process.
Managing Software Quality. Clean Code Is Not a Choice.
The author of Object-Oriented Analysis and Design with Applications, Grady Booch, says, "Clean code reads like well-written prose." I haven't had the chance to read the book itself, but it's a great quote to start with.
Clean code reads like well-written prose. - Grady Booch
My experience includes more than ten years of software development and spans both large and small organizations. Over that period, I had the opportunity to examine code from various vantage points: as a developer, a manager, and a customer. This allowed me to see how code standards function and to understand their influence on products, people, and processes. I hope this article is both interesting and informative for developers as well as business people.
Let's start by talking about why clean code matters and who needs it. We create software, the antithesis of hardware: software is meant to be easy to change.
It's well known that typing speed has little bearing on software development speed, because developers spend far more time reading code than writing it. That's why it would be strange to ask about symbols per second when interviewing a developer.
However, one of my favorite interview questions is: "Which code is preferable: code that works but isn't understandable, or code that doesn't work but is easy to understand?" Many software engineers pick the working code. Yet bugs can be fixed, while incomprehensible code cannot be modified. It's no longer software.
What exactly is code cleanliness?
Code cleanliness is a measure of clarity. Developers can't troubleshoot or build new features in code they don't understand. Clean code is not about bugs, quality, or system performance; it's the property that makes system modifications possible.
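As a quick illustration of clarity as a property, here is a hypothetical snippet (the names, the discount rule, and the 25% rate are all invented for this example): the same behavior, written two ways.

```typescript
// Unclear: what do d, x, and the magic number mean?
// A reader must reverse-engineer the intent.
function calc(d: number, x: boolean): number {
  return x ? d * 0.75 : d;
}

// Clean: identical behavior, but now it reads like prose.
const LOYALTY_DISCOUNT_RATE = 0.25;

function finalPrice(basePrice: number, isLoyalCustomer: boolean): number {
  return isLoyalCustomer
    ? basePrice * (1 - LOYALTY_DISCOUNT_RATE)
    : basePrice;
}
```

Neither version has a bug, and both perform identically; only the second one can be safely modified by someone who didn't write it.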
So, who needs a well-written program? First of all, whoever will make the modifications, and in most cases that's the same developers who wrote it. By writing clean code, programmers assist their future selves: they'll spend less time adding new features and fixing bugs. The customer, in turn, pays for that time and effort.
Maintaining a mess is expensive, even more expensive than maintaining clean code. In this respect, developers should assist both their future selves and the future business.
Sometimes, under the pressure of business demands, it seems we can do something "quick and dirty" and clean up later, since we truly need it right now. In reality, this approach slows down not the next task but the current one. Contrary to what many people believe, the future arrives much sooner than we expect: not just the next day, but at code review, at testing, and at the moment of "it's really fantastic, but let's try something else."
Accept the "No" option when it's correct.
Time is a valuable currency. For a business, time is measured in billable development hours and time-to-market disadvantages, which can add up to a significant sum of money. Customers will always want everything yesterday, and not as previously planned, because the market situation has changed. That's normal. According to Adizes' Corporate Lifecycles, businesses that don't want this are at least in the Fall stage.
Developers must set realistic expectations for clients, which allows us to make sound judgments. We should not demand that developers adjust their estimates and forecasts because we need features faster. An estimate is not a bargaining chip; it's data for making decisions. When we push back on developers' estimates, it's not a contest between code quality and finished features: we'll lose both. Worse yet, we'll have false expectations and make poor decisions based on incorrect data.
"The only way to go fast, is to go well," says Robert C. Martin, one of the most renowned software engineers. Developing is similar to cooking: turning up the heat won't get us scrambled eggs faster, it will get us burned eggs. The same holds in software. Developers say "no" because setting appropriate expectations is their job.
But what if we have a deadline and developers inform us they will not be able to provide the required features before then? "What can we have completed before the deadline?" is the correct question. They will most likely suggest delivering some crucial components, developing a less complex version of the requested system, or even creating a stop-gap solution. That's fantastic news! The business will make a decision based on the appropriate expectations and data.
How can businesspeople anticipate a coding disaster? Are there any automatic clean-code metrics? Of course: static code analysis tools like SonarQube provide various metrics, including cognitive complexity. Naturally, excellent results don't guarantee clean code, but bad ones show with certainty that something has gone wrong.
The first group of metrics covers bugs, vulnerabilities, and code smells. Vulnerabilities and security hotspots are best addressed immediately (though with luck, most will turn out to be false positives). Bugs and code smells are a different matter: rather than rushing to repair them all at once, the ideal approach is to concentrate on new code. The objective follows the Boy Scout rule: always leave the code cleaner than you found it. This lets you refactor step by step, paying the most attention to the parts of the code that are updated frequently.
The standard rule of thumb is that the number of problems should not increase.
Simplifying cognitive complexity
The cognitive complexity metric in SonarQube measures how difficult it is to comprehend the code.
The most essential thing is to tune the rule so that a code smell is raised as soon as a piece of code becomes difficult to understand. Change the default threshold of the "Cognitive complexity of functions should not be too high" rule in the Quality Profiles section: the default of 15 misses almost everything, and you can lower it to as little as 5.
The same approach applies to the rule "Cyclomatic complexity of functions should not be too high." This measure indicates how many unit tests you'll need to cover all branches of the code.
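To make the two metrics concrete, here is a hypothetical example (the shipping rules and prices are invented): the nested version forces the reader to track several branches at once, which drives cognitive complexity up, while guard clauses express the same four-branch logic as flat, independent rules.

```typescript
// Higher cognitive complexity: nested conditionals make the reader
// hold the outer branch in mind while reading the inner ones.
function shippingCostNested(weightKg: number, express: boolean): number {
  let cost: number;
  if (express) {
    if (weightKg > 10) {
      cost = 40;
    } else {
      cost = 25;
    }
  } else {
    if (weightKg > 10) {
      cost = 15;
    } else {
      cost = 8;
    }
  }
  return cost;
}

// Lower cognitive complexity, same behavior: each guard clause
// reads as a standalone rule. Cyclomatic complexity (four paths,
// hence four unit tests) stays the same, but the code is flatter.
function shippingCost(weightKg: number, express: boolean): number {
  if (express && weightKg > 10) return 40;
  if (express) return 25;
  if (weightKg > 10) return 15;
  return 8;
}
```

Both versions would need the same four test cases, which is exactly what the cyclomatic complexity score predicts.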
Should we do unit testing? Isn't it expensive?
Tests are crucial not only for validating code but also for improving readability. If you compare a project's Cognitive Complexity and Cyclomatic Complexity graphs, you'll see they are very similar. That's because a function that is a pile of if/else/case statements, side effects, extra arguments, and return statements is hard both to read and to test.
Here's a simple guideline: if you can't write a test quickly, the code needs refactoring. The most common issues with unit testing are actually caused by poor code.
Issues arise when a developer writes the source code first and only then attempts to cover it with tests. Such code was not built with testing in mind. You spend time on manual testing, then on covering the code with tests (which is hard, because it wasn't designed for them), and then again on refactoring, because writing the tests was not feasible from the start. The most common outcome? That's right: "Finishing this task right now is more important than writing those tests." And the bad code goes to production.
How can you avoid wasting time on this? It's quite easy: check your code with tests, not by hand. How many console.log checks per minute can a developer run? And how many thousands of assertions can an automated suite run in the same time? A positive side effect is that automated tests don't make mistakes, which cuts down on bug-fixing time.
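A minimal sketch of that replacement, using Node's built-in assert module (the `parseDurationMinutes` function and its format are hypothetical, invented for this example):

```typescript
import { strictEqual, throws } from "node:assert";

// Hypothetical function under test: parses "1h30m"-style strings
// into a total number of minutes.
function parseDurationMinutes(input: string): number {
  const match = /^(?:(\d+)h)?(?:(\d+)m)?$/.exec(input);
  if (!match || (match[1] === undefined && match[2] === undefined)) {
    throw new Error(`Invalid duration: ${input}`);
  }
  const hours = Number(match[1] ?? 0);
  const minutes = Number(match[2] ?? 0);
  return hours * 60 + minutes;
}

// The same checks a developer would otherwise re-run by hand with
// console.log, captured once as assertions that run on every change:
strictEqual(parseDurationMinutes("1h30m"), 90);
strictEqual(parseDurationMinutes("45m"), 45);
strictEqual(parseDurationMinutes("2h"), 120);
throws(() => parseDurationMinutes("abc"));
```

Once written, these checks cost nothing to re-run, while the equivalent manual console.log session has to be repeated after every edit.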
A well-known experiment has a developer repeat the same small task several times in a row, with and without TDD. The TDD runs consistently felt longer to the developer, but they were measurably faster.
How does this method affect real projects? Scientific studies offer various opinions, but only one examines the relationship between TDD and maintainability, which, in my view, shows how disconnected academic work is from industry. Despite a decline in initial productivity, the maintainability study revealed a striking reduction in the average time per change request, from 80+ hours to about 60. A drop in cyclomatic complexity, from 6-7k to 4.5k, might explain it: code designed around tests is more straightforward and easier to read and modify.
It should be mentioned that the developers in the study had no prior experience with TDD, which might explain their poor performance in the first stage. However, in today's world of Agile development, which is driven by user input rather than preconceived ideas, projects have become a change request stream. As a result, we may infer greater productivity in the long run.
Of course, only a few individuals can use TDD exclusively, but this is an excellent example of how testing in the right way quickens rather than slows down development. Even if you just start by replacing manual testing with unit tests, you'll be pleased with the outcome.
What is the ideal percentage of test coverage? In a perfect world, 100%, but in practice, it is determined by the project. It should certainly not be zero, and all tests should pass. Coverage isn't a strict measurement of quality but rather an indicator to look for. The rule of thumb is that new code should be better covered than old code. When test coverage decreases over time, there's a problem.
Keep the code as DRY as possible.
The DRY principle, or "Don't Repeat Yourself," is one of the essential rules of programming. Duplications are one of the crucial metrics revealed by SonarQube.
Duplicated code is a serious problem. Consider a scenario where a developer must make a change in numerous places instead of one: the time required to develop the feature is multiplied by the number of locations it must be applied to. And it doesn't stop there. It's easy to overlook some of those places, which adds both time and potential bugs.
Duplication may occasionally be necessary. This happens most often when identical pieces of code have different reasons to change. For example, two methods compute billable and non-billable employee hours. Today both types of hours may be calculated the same way, but the numbers are not connected, and their calculation logic has different reasons to change. This is known as "false duplication."
Since "false duplication" is a highly unusual pattern, we should aim to eliminate as much duplicate code as possible. It may not be zero, but it should strive for improvement. An alarm threshold of 1% to 5% may apply to each project.
Managing software quality is vital to the success of any business. Here's how to ensure your code is clean and your business runs smoothly: unit tests, refactoring of existing code, and the DRY principle should all be standard practice. A messy code base is expensive for a business to maintain; cleaning up coding practices saves money in the long run.
Simple rules for business people:
Trust the developers and their estimates.
Use static code analysis tools.
The number of bugs and code smells should not grow over time.
Reduce the threshold for cognitive and cyclomatic complexities of functions.
Tests should cover new code more thoroughly than the rest of the codebase.
Code duplications should be eliminated.
Please do not hesitate to contact us if you want to learn more about the code standards or need assistance in implementing them into your project.