I admit it – I’m a recovering C++ programmer. I’ve suffered a bit of a relapse related to a recent project, and may very well be dragged back deeper on an upcoming one. (Point of history – I attended one of Bjarne Stroustrup’s early lectures on “C with Classes” when I worked at Bell Labs. Yes, I’m that old. Sigh. At the time, I was a heavy-duty C programmer, and thought what he was presenting was cute, but impractical. It subsequently became my primary language for almost a decade. Just goes to show you…)

In any event, what I’m about to discuss is shown in C++, because I came across it in one of my old books that I was re-reading. It should be intelligible to anybody who uses a language even remotely similar, however, because this isn’t a language-specific point – it’s a point about being too fancy.

The section of the book in question was lamenting the fact that (supposedly) programmers don’t use logical operators enough. It started off by being irritated at seeing people write this:

bool r = false;
if( a < b )
    r = true;

instead of this

bool r = a<b;

OK, I can go along with that. The latter is quite obvious and the former doesn’t add any value. (I deduct style points for lack of bracing and spacing, but that’s not the issue.)

But then our intrepid author goes on as follows:

Do you have to count to eight when presented with the following?

int ctr = 0;
for( int i = 0; i < 8; ++i )
    if( options & 1<<(8+i) )
        if( ctr++ ) {
            cerr << "Too many options selected";
            break; 
        }

Instead of this?

typedef unsigned short Bits;
inline Bits repeated( Bits b, Bits m )
    { return b & m & (b & m)-1; }
 // . . .
if( repeated( options, 0XFF00 ) )
    cerr << "Too many options selected";

What ever happened to Boolean logic?

(I presume the “count to eight” comment was intended in the “count to 10 before responding” sense, not a reference to the “8” in the code.)

Now, I bet that most of you can look at that first block of code and, in 30 seconds or less, figure out what it’s doing. Time’s up? It’s detecting whether or not there is more than one bit set in a portion of the variable named options.

How many of you could have figured that out about the second block of code in five minutes or less? And how many of you, even if you were told what the intent was, could have assured yourselves that the second block of code behaves correctly?

Code like this is WAY too “cute” unless you are in desperate need of performance. And if you’re that desperate for performance, there are other, clearer ways of doing this quickly. Say, a quick 256-entry lookup table of the number of bits set in a byte, followed by

if (BitsSetInByte[(options >> 8) & 0xFF] > 1) ...
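
For what it’s worth, building such a table is a few lines of obvious code. Here’s a minimal sketch of how it might look (the BitsSetInByte name and the init function are my own, not anything from the book):

unsigned char BitsSetInByte[256];

// Fill in the count of set bits for every possible byte value.
// Call once at startup.
void InitBitsSetInByte()
{
    for( int value = 0; value < 256; ++value )
    {
        int count = 0;
        for( int bit = 0; bit < 8; ++bit )
            if( value & (1 << bit) )
                ++count;
        BitsSetInByte[value] = static_cast<unsigned char>(count);
    }
}

Nobody is going to stare at that for five minutes wondering what it does.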

In the vast, vast majority of cases, code should be written to be clear. Getting tricky to make it fast should only be done after you have positively verified that a particular block of code is, indeed, a bottleneck. And if you’re going to be tricky, you darned well better include a BIG comment block describing the algorithm. Somebody is going to come along and wonder what the heck that bit of code is doing. (And it may be you – it amazes me every now and then that I can go back to code I myself wrote and completely forget how it works.)

Just for fun, however, let’s analyze how that algorithm works.

The first point is that parentheses might have made it a good deal clearer what was being attempted, as well as helping us not to have to remember operator precedence tables.

inline Bits repeated( Bits b, Bits m )
    { return (b & m) & ((b & m) - 1); }

Now you can clearly see that what’s effectively being done is

(x) & (x-1)

It might have been clearer still to write

inline Bits repeated( Bits bits, Bits mask )
{
    Bits bitsToTest = bits & mask;
    return bitsToTest & (bitsToTest - 1);
}

This same author, in another section of the book, railed at people who name their variables poorly. Pot, Kettle. Kettle, Pot. This rewrite also emphasizes that we’d prefer not to “and” together b and m twice. Oh, I know, a good compiler might very well optimize it that way anyway, but do you know that that’s the case?

I also see no value whatsoever in using the typedef to replace unsigned short with Bits, and possibly some harm, as it makes it appear that a class may be involved. But we’ll pass on that particular item.

As to whether it works or not, there are four cases to consider:

  1. x = 0. In this case, because the left-hand side is zero, the expression will return zero. Presumably we don’t care about the underflow in the calculation of (x - 1).
  2. x = 1. In this case, the right-hand side will be zero, so again the expression will return zero.
  3. x > 1 and has exactly one bit set. Call that bit number N. In this case, x - 1 will have all the bits below bit N set, but bit N will be clear. You may have to think about it a bit, but it works out that way. In this case, x and x - 1 will have no bits in common, and the result of the expression will be zero.
  4. x > 1 and there are two or more bits set. In this case, x - 1 will do exactly the same thing as the previous case from the least significant set bit down, but will leave the bits above that point unaltered. Thus, the higher-order set bits will be set in both halves of the expression, and will be preserved as a result of the “and.” If x = 0x18, x - 1 will be 0x17 and the result of the expression will be 0x10. So in this case, and this case only, the expression is non-zero.

And the more astute among you may have noticed that we probably should consider the case when the most-significant bit is set, as the shorts in this code will get promoted to ints during the evaluation of the x - 1 part, and thus we have to factor in sign extension. (It turns out that the algorithm does still work, but…)
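
If you’d rather not take my word for the case analysis, a quick throwaway test exercises all four cases plus the most-significant-bit one. (This little harness is mine, not the book’s; it uses the parenthesized version of repeated from above.)

#include <cassert>

typedef unsigned short Bits;
inline Bits repeated( Bits b, Bits m )
    { return (b & m) & ((b & m) - 1); }

int main()
{
    const Bits mask = 0xFF00;
    assert( repeated( 0x0000, mask ) == 0 );   // case 1: no bits set
    assert( repeated( 0x0100, mask ) == 0 );   // cases 2/3: exactly one bit set
    assert( repeated( 0x1800, mask ) != 0 );   // case 4: two bits set (the 0x18 example)
    assert( repeated( 0x8000, mask ) == 0 );   // most-significant bit set, but only one bit
    assert( repeated( 0x8100, mask ) != 0 );   // most-significant bit plus one more
    assert( repeated( 0x00FF, mask ) == 0 );   // bits outside the mask don't count
    return 0;
}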

Does this strike you as way, WAY too much analysis to make sure a simple task is correct?

Yes, it’s faster, but it sure as heck isn’t self-documenting, and recommending that people obfuscate their code this way just doesn’t cut it with me.

There, I feel better.

For those of you who are curious, this was taken from Gotcha #7 on page 15 of C++ Gotchas: Avoiding Common Problems in Coding and Design by Stephen C. Dewhurst. It’s actually a pretty good book all in all, although, having been written in 2002, it’s now a bit dated in terms of language features. It’s just that, in this one particular case, it seems that Mr. Dewhurst is trading a potentially small problem (performance) for a potentially large one (lack of maintainability).

That being said, and to his credit, after 4-5 more pages of similarly tricky code, he concludes the section with:

I’ve received strong, negative, and sometimes abusive reactions to my use of every one of the constructs above.

I’m not surprised…

I do agree this is not necessarily the kind of code you’d want to unleash on a novice maintainer, but such constructs — suitably encapsulated and with accompanying documentation — do have an occasional place in highly tuned or highly specialized code. Familiarity with the esoterica of the base language can be useful.

OK, I’ll buy that, and apologize to the author at least slightly, although it might have been better to state that up front. I just hope I don’t get assigned to a project that depends on the use of “esoterica” like this for success…

Fast but Obtuse is Usually Bad originally appeared on http://www.silverbaytech.com/blog/.