How can you measure software quality?


Organizational features

published in: 2004#2, page 67

Summary: An old physicists' adage holds: what we do not measure, we cannot understand. To assess reliability, availability and robustness, the practical testing of application programs likewise requires metrics.

Data quality requirements are becoming ever more important in operational practice. Current practical examples include preparing data to determine the probability of default in the credit business (Basel II) and assessing the risk of trading transactions using the historical simulation method (MaH).

However, the necessary data quality is achieved only if the application programs involved offer the required reliability, availability and robustness. To identify these properties and to gauge the likelihood of errors, measurement using software metrics is recommended.

In teaching, two approaches are emphasized: first, the software metrics according to Halstead (measurement of textual complexity), used to estimate the effort of program inspections and of program comprehension during maintenance; second, those according to McCabe (measurement of structural complexity). Despite their origins in the 1970s, these software metrics have not lost their usefulness. On the contrary: their advantages lie in making application programs comparable in their quality characteristics and in creating pressure to provide transparency through sufficient IT documentation.
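The two classic metric families mentioned above follow well-known formulas. A minimal Python sketch (the operator/operand and graph counts are hypothetical inputs, not taken from the article):

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead's textual-complexity metrics from distinct operators (n1),
    distinct operands (n2), total operators (N1) and total operands (N2)."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # program "size" in bits
    difficulty = (n1 / 2) * (N2 / n2)         # error-proneness estimate
    effort = difficulty * volume              # estimated comprehension effort
    return volume, difficulty, effort

def mccabe(edges, nodes, components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P
    for a program's control-flow graph."""
    return edges - nodes + 2 * components
```

For example, a control-flow graph with 9 edges and 8 nodes yields `mccabe(9, 8) == 3`, i.e. three linearly independent paths to cover in testing.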

The IT documentation can in principle take the form of a data flow plan, program flowchart, structure diagram, pseudocode, program organization plan, tree diagram, control flow diagram or decision table. Repository systems (e.g. Rochade) are also increasingly integrated into the software development process. In testing practice, good experience was gained with the development tool Easycode V7 for Cobol according to McCabe. In addition, an in-house test tool from international project work was used. This tool, available for Cobol and Assembler, documents a "scoring" with the characteristics described below.

Complexity metrics are selected relational measures of program complexity on a scale from 0 to 1; the lower the value, the lower the complexity:

0.0–0.2  "no" complexity
0.2–0.4  low complexity
0.4–0.6  moderate complexity
0.6–0.8  high complexity
0.8–1.0  very high complexity

One feature used in calculating complexity is data access, determined by the following function:

Data access = 1 - (number of files + number of databases) / (number of data inputs + number of data outputs + number of database accesses)
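The data-access function and the banding scale above can be sketched in a few lines of Python (the function and band names are illustrative; the counts in the example are invented, not from the article):

```python
def data_access_complexity(files, databases, inputs, outputs, db_accesses):
    """Data access = 1 - (files + databases) /
    (data inputs + data outputs + database accesses), per the article."""
    return 1 - (files + databases) / (inputs + outputs + db_accesses)

def complexity_band(score):
    """Map a 0..1 complexity score to the article's five bands."""
    bands = ['"no" complexity', "low complexity", "moderate complexity",
             "high complexity", "very high complexity"]
    return bands[min(int(score * 5), 4)]
```

For instance, a program with 2 files, 1 database, 5 inputs, 5 outputs and 5 database accesses scores 1 - 3/15 = 0.8, which falls into the "very high complexity" band.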

Quality metrics are selected relational measures of program quality on a scale from 0 to 1; the higher the value, the better the quality:

1.0–0.8  top quality
0.8–0.6  good quality
0.6–0.4  adequate quality
0.4–0.2  poor quality
0.2–0.0  "no" quality

One feature used in calculating quality is readability, determined by the following function:

Readability = 1 - (number of comment lines x 4) / (number of total source code lines)
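The readability function, taken verbatim from the article, and the quality bands above can be sketched as follows (function and band names are illustrative; the line counts in the example are invented):

```python
def readability(comment_lines, total_lines):
    """Readability = 1 - (comment lines x 4) / total source code lines,
    as defined in the article."""
    return 1 - (comment_lines * 4) / total_lines

def quality_band(score):
    """Map a 0..1 quality score to the article's five bands."""
    bands = ['"no" quality', "poor quality", "adequate quality",
             "good quality", "top quality"]
    return bands[min(int(score * 5), 4)]
```

For instance, a program with 25 comment lines out of 1000 total source lines scores 1 - 100/1000 = 0.9, placing it in the "top quality" band.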

The results are essentially determined by the initial state of the application program under test. This means that overly frequent changes to a source program have a negative impact on the software metric results.

Practical experience has shown that it is worthwhile to actively include the "theoretical" topic of software metrics in the testing of application programs. (Bernd Wojtyna)

© SecuMedia-Verlags-GmbH, 55205 Ingelheim (DE)