McCabe's complexity metric is utterly fscked
The complexity metric used in the article is: the complexity of a function is 1 (for the function), plus:

- 1 for every `if`, `and`, and `or`,
- 1 for every `while` or `repeat`,
- 1 for every `for`, and
- 1 for every case of a `case` statement.
This does not measure complexity -- it just measures how big the syntax tree is. A proper measure of complexity should reflect something like "how hard is it to prove the program implements some specification". For procedural code, the appropriate place to look is at the Floyd-Hoare proof rules, and to track how hard the proof is. (In this style, you establish a precondition and postcondition for each statement in the program, and try to prove that the postcondition follows from the precondition and the semantics of the statement.)
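In Hoare-logic terms (these are the standard rules, not something from the article), the contrast is visible in the shape of the rules themselves:

```latex
% Conditional: prove each branch separately -- pure case analysis.
\frac{\{P \land B\}\; S_1 \;\{Q\} \qquad \{P \land \lnot B\}\; S_2 \;\{Q\}}
     {\{P\}\; \mathtt{if}\ B\ \mathtt{then}\ S_1\ \mathtt{else}\ S_2 \;\{Q\}}

% While: you must first invent an invariant I -- nothing in the
% program syntax hands it to you.
\frac{\{I \land B\}\; S \;\{I\}}
     {\{I\}\; \mathtt{while}\ B\ \mathtt{do}\ S \;\{I \land \lnot B\}}
```

The conditional rule mentions only formulas already in play; the while rule is parameterized by an invariant $I$ that has to be discovered before the proof can even start.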
For `if`, `and`, `or`, and `case`, proving that the preconditions imply the postconditions is just a case analysis over the branches. For a `for` loop, you have to do a proof by induction over the integers the loop variable ranges over. For a `while` or `repeat` loop, you need to first figure out what the loop invariants are, and then do the induction. So establishing the correctness of a piece of code -- ie, figuring out what it does -- is a lot harder for a `while` loop than an `if` statement. Penalizing a `case` more heavily than a `while` loop is just nuts.
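To make the invariant-discovery point concrete, here is a hypothetical example of mine (not from the article): even for a trivial summation loop, the invariant you need for the induction is a fact about the program state that appears nowhere in the loop's syntax.

```python
def sum_to(n: int) -> int:
    """Sum the integers 0..n with a while loop."""
    s, i = 0, 0
    while i <= n:
        # Loop invariant: s == sum(0..i-1) == i*(i-1)//2.
        # This formula must be *discovered* -- no syntax-tree
        # count will surface it.
        assert s == i * (i - 1) // 2
        s += i
        i += 1
    return s

print(sum_to(10))  # 55
```

The equivalent case analysis for an `if` over a few branches needs no such invention: each branch is checked directly against the postcondition.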