I Did an Experiment Enforcing Minimum Code Quality, and Here Are the Results
There are tools like SonarQube that watch and enforce minimum quality standards, but are they actually helping?
A few articles ago, I talked about the only real utility of code coverage. It was not quality, and yet code coverage is one of the metrics that companies tend to enforce on their employees. For example, if the code coverage is under an arbitrary number, let's say 87%, the pull request is automatically rejected and the developer has to fix it. The claim is that, by establishing this minimum for the metric, code quality improves. And of course, there is no single metric; there are several of them.
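To make the mechanism concrete, here is a minimal sketch of such a quality gate, assuming a CI step that reads the line-coverage rate from a Cobertura-style `coverage.xml` report (as produced by tools like coverage.py's `coverage xml`) and fails the build below the 87% threshold from the example. The script name, report format, and threshold are illustrative, not taken from any specific product:

```python
# coverage_gate.py -- a minimal sketch of a CI quality gate (illustrative only).
# Assumes a Cobertura-style coverage.xml whose root element carries a
# "line-rate" attribute, as emitted by tools like coverage.py's `coverage xml`.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.87  # the arbitrary minimum from the example: 87%

def main(report_path: str) -> int:
    root = ET.parse(report_path).getroot()
    line_rate = float(root.get("line-rate", 0.0))
    print(f"Line coverage: {line_rate:.1%} (minimum required: {THRESHOLD:.0%})")
    # A non-zero exit code makes the CI job fail, which in turn
    # blocks the pull request from being merged.
    return 0 if line_rate >= THRESHOLD else 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "coverage.xml"))
```

In a real pipeline, a step like this would run after the test suite; the non-zero exit code is what turns the metric into a hard gate rather than a mere report.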
One year ago, while I was reviewing code delivered by undergraduates, I realized that the code quality in a laboratory session was very low. As part of the session statement, I explained the basics of clean code, such as extracting auxiliary functions and choosing descriptive names. Although I asked them to follow those good practices, and I warned that not following them would affect their grades, almost every submission ended up as big functions full of branches. Several students even lost control of their code and created infinite loops; some private tests were able to hang their programs.