12/27/2008

12-26-08 - In Defense of Less Typing

In the C++ rant we got a bit into the discussion of "that just reduces typing". It's become common wisdom these days that "anything that just reduces typing is not of significant importance in programming". This is sort of a reaction to the bad old days when developers put too much emphasis on reducing the time it took to make a first draft of the code. Now people like to repeat platitudes like "the first draft is only 10% of dev time; debuggability and readability and maintainability are what really matter".

Yes, yes, that is all true, but I think people miss the forest for the trees here in some cases.

Every character you type is a chance for a bug or a mistake. For one thing there are typos, but there are also just brain farts. The less you type when writing a piece of code, the more likely it is to be correct.

(I should emphasize that reducing code duplication is very good for many reasons I don't go into much detail on in this rant, and those are the main reasons to merge duplicate code; I'm talking about cases where the code is not exactly duplicated but is similar, or where you have a choice between a very simple API that requires a lot of typing by the client and a more complex API that is very simple for the client to use.)

A lot of good programmers now are adopting the idea of exposing simple minimal C-style APIs that leave usage up to the client. There are a lot of things to like about that (for example, Sean's stb.h style thing for simple functionality is in fact wonderfully easy to integrate and steal snippets from), but there are also bad things about it. I think good programmers overestimate their ability to write simple usage code without mistakes. For example, you might think that you don't need a class to encapsulate a 32-bit Color; you can easily just use a 4-byte array or pass around a dword and do the shifts by hand - but I have seen bug after bug from small mistakes in that kind of code, because if you write the same thing over and over, or copy-paste and try to change a bunch of code by hand, there is some small chance of a mistake each time you do it.
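To make that concrete, here's the kind of tiny wrapper I mean - just a sketch with made-up names, nothing definitive - the point is that the shifts and masks get written exactly once :

#include <stdint.h>

// illustrative sketch : a tiny Color wrapper so the packing code lives in one place
struct Color32
{
    uint32_t bits; // 0xAARRGGBB

    static Color32 FromARGB(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
    {
        Color32 c;
        c.bits = (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
        return c;
    }

    uint8_t A() const { return uint8_t(bits >> 24); }
    uint8_t R() const { return uint8_t(bits >> 16); }
    uint8_t G() const { return uint8_t(bits >>  8); }
    uint8_t B() const { return uint8_t(bits      ); }
};

Every call site that builds a color by hand is another chance to get a shift wrong; with the wrapper, that mistake can only happen in one place.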

It's funny to me that good programmers in game dev are going in two directions at the same time. One direction is to make code very easy to follow by simple visual inspection. Many major people in game dev are pretty high on this these days. The basic idea is to use C-style imperative programming, make the action of each line obvious to simple visual inspection, reduce segmentation of code into function calls, and prefer simple long linear code blocks (rather than factored-out functions, conditionals, objects, etc). There are certainly merits to that.

The other direction people are going is custom metaprogramming and local language redefinition. Many of these people want coding languages where you can locally redefine the rules of the language and do custom markup for things like network mirroring, thread fiber spawning, local-global state memory variable differentiation, etc. This kind of stuff would make code completely impossible to understand by simple visual inspection without intimate understanding of how all the extra meta-language rules work. These ideas also have a lot of merit, because writing micro-threaded massively parallel code in plain C-style C++ is really difficult and ugly, and having custom features would make it much better - but the two directions are totally conflicting.

While I'm ranting about opposite directions, let me also rant about the idea that something is a good idea for "library design" but not within your app (or vice versa). IMO Coding = Library Design. Most of the complex code that I write is in "libraries" for myself. Libraries are just chunks of functionality that you want to expose through a compact interface. Well, that's what you should be doing all the time. Coding is just writing a bunch of libraries; the "app" is just tying the libraries together.

So, for example, the points in Casey's excellent talk about good library design (things like exposing multiple levels of interface from very simple to nitty-gritty, and not forcing a usage pattern on the client) are just good ways to write code *always*.
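To make "multiple levels of interface" concrete, here's a hypothetical sketch of the shape (these are made-up declarations, not Casey's actual API) :

#include <stddef.h>

struct Image;   // decoded pixels (details omitted in this sketch)
struct Texture; // GPU resource (details omitted in this sketch)

// nitty-gritty level : the individual steps, for clients that need control over each one
bool ReadWholeFile(const char * fileName, void ** bytes, size_t * size);
bool DecodeImage(const void * bytes, size_t size, Image * img);
bool CreateTexture(const Image & img, Texture * tex);

// simple level : one call that does the whole common case,
// implemented entirely in terms of the three calls above
bool LoadTextureFromFile(const char * fileName, Texture * tex);

The client who just wants a texture types one line; the client who needs to stream or decode in place can drop down a level.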

I don't trust the me of one year ago, nor do I trust the me of one year in the future. I need to write APIs for myself that make me write the right code. Part of that is all the things I've often written about before (self-validating APIs, APIs that are impossible to use wrong), but part of it is just plain less typing. If the API makes me (the client) write a whole bunch of code to do the simple things I often want to do, that makes it far more likely I will do it wrong.
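One standard flavor of "impossible to use wrong" is plain old scoping. A minimal sketch (illustrative names, not any particular library) - a file handle the client can't forget to close :

#include <stdio.h>

class ScopedFile
{
public:
    explicit ScopedFile(const char * name, const char * mode) : m_fp(fopen(name, mode)) { }
    ~ScopedFile() { if ( m_fp ) fclose(m_fp); }

    bool IsOpen() const { return m_fp != NULL; }
    FILE * Get() const { return m_fp; }

private:
    FILE * m_fp;

    // non-copyable : copying would double-close
    ScopedFile(const ScopedFile &);
    ScopedFile & operator=(const ScopedFile &);
};

The client code is shorter *and* it's the right code - the fclose happens on every exit path whether I remember it or not.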

Also I believe the illusion of choice is a horrible thing. If there's really only one or two reasonable ways to use a system, then just expose that to me. Don't give me what looks like a bunch of flexible components when they only really work right if you use them in one specific way.
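A sketch of what I mean (hypothetical names) : if the pieces only really work when called in one exact order, don't expose the pieces, expose the order :

struct Stream;     // hypothetical
struct Compressor; // hypothetical

// what looks flexible, but in practice only works if you do exactly this :
//   Compressor * c = NewCompressor(); SetLevel(c, 9); AttachOutput(c, stream); Start(c);
// versus just exposing the one reasonable way to use the thing :
Compressor * StartCompressor(Stream * output, int level);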


Addendum : okay I'm bored of this topic and I'm sure you are too, but I feel like I started it so I should wrap it up a bit more.

Paul Graham has this thing "Succinctness is Power" that's sort of similar to this rant. As usual he writes it well, but I think he's a little bit wrong. The issue that I believe is important, and what I'm trying to get at here, is :

Reducing the number of choices that the programmer has to make in order to write correct code.

Part of that is reducing typing - but the crucial thing is reducing typing when there is an actual choice in the alternatives. That is, if it's something you *have* to type, that's not bad. For example, a very verbose language like COBOL is not inherently bad due to its verbosity (COBOL is horrible for other reasons).

Making code that works correctly with minimal typing (and that produces compile errors if you use it wrong) is the goal. So part of what I'm getting at here is using short common words that are easy for the programmer to get right, using highly constrained classes instead of very general ones, etc.
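For example (an illustrative sketch, not any particular library) : a millisecond count and a byte count are both "just ints", but giving them their own little types turns mixing them up into a compile error :

struct Milliseconds { explicit Milliseconds(int v) : value(v) { } int value; };
struct ByteCount    { explicit ByteCount(int v) : value(v) { } int value; };

void Sleep(Milliseconds duration); // Sleep(ByteCount(4096)) will not compile
void Reserve(ByteCount bytes);     // Reserve(Milliseconds(100)) will not compile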

Part of the credo goes like this :


remove the option to do things that you never want to do

make it hard to do things that you rarely want to do

make it easy to do the right thing

As an example - iterators are cool even when they save almost no work. Say for example you have something like a doubly linked list class. Many of the simple C guys would say "hey you can just write code to traverse the linked list", and you write client code like :

// assuming a circular list with a sentinel head node :
for(Node * n = list.head->next; n != list.head; n = n->next)
{
    ...
}

That's easy, right? You're a good programmer, you can just write that loop. No. You can't - that's what I'm trying to say with this rant. I mean, yes, you can, but you had a 1% chance of introducing a bug because you had to write code.

Writing code is bad.

Instead you could make your Node look like an iterator, and give your list the standard begin() and end() entry points. Now instead of writing code you can just use for_each, or even just copy-paste or spit out a loop that you've done a million times :

for(Node::iterator it = list.begin(); it != list.end(); ++it)
{
    ...
}
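Making that compile is only a few lines - a minimal sketch, assuming the same circular sentinel-node list as above (member names are illustrative) :

struct Node
{
    Node * next;
    Node * prev;
    int    data;

    // an iterator type nested in Node, so client loops can say Node::iterator
    struct iterator
    {
        Node * n;
        explicit iterator(Node * in) : n(in) { }
        iterator & operator++() { n = n->next; return *this; }
        bool operator!=(const iterator & rhs) const { return n != rhs.n; }
        Node & operator*() const { return *n; }
    };
};

struct List
{
    Node * head; // sentinel ; head->next is the first real node

    Node::iterator begin() const { return Node::iterator(head->next); }
    Node::iterator end()   const { return Node::iterator(head); }
};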

That loop is safer because it's standard. On a similar note, using a constrained object like an iterator is safer than using an int, because every time you use an int people get tempted to do evil things. How many bugs have I seen because people try to get clever with their loop iteration? Maybe they count backwards for efficiency and use an unsigned type by mistake. Or they pull the ++i out of the for() and then forget to do it because of a continue. Or they use the "i" outside of the for loop and bugs get introduced.
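To spell out the unsigned one, since I've seen it so many times - the trap, and one safe idiom (a sketch with a made-up function name) :

#include <stdio.h>
#include <vector>

static void PrintBackwards(const std::vector<int> & v)
{
    // the trap : "for(unsigned i = v.size()-1; i >= 0; --i)" never terminates,
    // because an unsigned i is always >= 0 (and it underflows on an empty vector).
    // one safe idiom for a reverse index loop :
    for(size_t i = v.size(); i-- > 0; )
    {
        printf("%d\n", v[i]);
    }
}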

Lots of people are anti-OOP these days. I love OOP; no, not deep inheritance trees and lots of data inheritance and whatnot, but the basic idea of coding in terms of objects that impose constraints and conditions on their use. The best way for me to program is to build components and helpers which make expressing the program easy.
