SmoothSpan Blog

Is Programming Like Music or Engineering, and Must it Be Unintuitive?

Two different blog posts hit me today with one of those unconscious knee-jerk desires to disagree.  First was Joel Spolsky wishing that undergraduate computer science programs would quit spending so much time on formalisms and become more like a Juilliard music program.  Second was Raganwald contending that if a language is really going to push the envelope of “better”, it must by necessity be even less intuitive.  Taken individually, I probably could have swallowed these pills without comment, but wouldn’t you know it: both wound up back to back in my feed reader.  Somehow they tickled the same mysterious place in my subconscious, but I had to actually write this diatribe to understand how.

Let’s start with Raganwald.  On the face of it, the logic is pretty sound.  It’s not unlike one of my favorite quotations, which goes:

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself.  Therefore all progress depends on the unreasonable man. 
George Bernard Shaw

The essential thesis that Reg works from is that all programming languages are equally powerful, because they’re Turing complete.  As such, he wants to talk about what makes one language better than another.  He concludes at one point that since all languages are equally powerful, and since “better” is subjective, better languages must be ones that make you a better programmer, which in and of itself is subjective.  Also, making you a better programmer by definition means moving you out of your comfort zone.  Because of all that, “better” languages have to be unintuitive.  “QED”, as we used to say.

Right at this point I’ve had too many almost-rights go by to be able to sit still.  First, “better” does not have to be subjective.  We could choose to quantify it in some way.  I know that as a profession, programmers have a tremendous love-hate relationship with various metrics, to the point where we largely throw up our hands and dismiss them all as useless.  Sorry, but I am not in that camp.  I think there are many useless metrics, but they are not all useless, any more than every evaluation of a programming language is a hopelessly subjective personal opinion.  I am even happy to accept fairly simple-minded quantifications, such as being able to write similar things in fewer lines of code.  And please, don’t waste our time with silly contrived examples where fewer lines of code result in backwards progress.  In general, and with appropriate subjective oversight to weed out the silly off-point examples, I am convinced that fewer lines of code is a good thing.  Here is a harder one: how about fewer, clearer lines of code?  That’s going to be harder to quantify as a metric, and I can accept that.
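As a concrete (and admittedly toy) illustration of what I mean by fewer, clearer lines, here is the same computation written two ways in Python.  The names and the task are my own contrived example, meant only to show the direction of the metric, not to prove the point:

```python
# Imperative style: more lines, more moving parts for the reader to track.
def sum_even_squares_loop(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative style: fewer lines, and arguably clearer intent.
def sum_even_squares(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_loop([1, 2, 3, 4]))  # 4 + 16 = 20
print(sum_even_squares([1, 2, 3, 4]))       # 20
```

Both versions compute the same thing; the second is shorter *and* states its intent more directly, which is the combination I am arguing we can treat as measurably better.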

Now here is where I guess Reg and I part company: is a language that needs fewer, clearer lines of code necessarily less intuitive?  Reg seems to say “yes”: that language will be less intuitive.  I say that by the very construction (the lines are clearer, after all) the answer must be “no.”  Is the only resolution that it is impossible to create a language using fewer, clearer lines of code?  Gosh, I hope not, because that seems to say we can’t build better languages, unless you want to quarrel with “fewer” or “clearer” not being better.  That’s my “QED”, but let’s tease things apart further, because it gets interesting.

If nothing else, this quantification business reflects a lack of good formalisms for a lot of squishy things like clarity of code.  There is another phenomenon at work with this word “formalism” too (you can substitute “model” if it helps make what I’m saying clearer).  There may be no single formalism, or even a small set, that is best for all problems solved by all programming languages.  Rather, we may have different fairly independent domains with happy little language ecosystems that work best in one or a few, but that may be terrible in others.  I am surprised and amused at the apparently fractal nature of this formalism-by-domain concept.  Whenever we think we’re close to finding the universal formalism with which to create the best possible language, we build something like Lisp.  After it’s built, and we use it for a while, what we discover is that such universal formalisms are really good at creating domain-specific languages, but not so good at many other things.  So in a sense, they’re lousy tools for any particular problem, but they make it easy to create a special tool that gives the best possible solution to a sub-domain.  See what I mean by fractal?  There’s something very cool about that fractal idea that just feels right.  If you agree, it probably means we’re communicating.
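To make that fractal point a bit more concrete, here is a toy sketch (in Python standing in for Lisp, and entirely my own invented example) of how a general-purpose language is cheap at hosting a tiny special-purpose one.  The host language is not specialized for arithmetic-with-bindings, but a dozen lines give us a little language that is:

```python
# A toy s-expression evaluator: a tiny embedded language with numbers,
# variables, "let" bindings, and two arithmetic operators.
def evaluate(expr, env):
    if isinstance(expr, str):          # variable reference
        return env[expr]
    if not isinstance(expr, list):     # literal number
        return expr
    op, *args = expr
    if op == "let":                    # ["let", name, value, body]
        name, value, body = args
        return evaluate(body, {**env, name: evaluate(value, env)})
    vals = [evaluate(a, env) for a in args]
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[op](*vals)

# ["let", "x", 3, ["*", "x", ["+", "x", 1]]]  is  3 * (3 + 1)
print(evaluate(["let", "x", 3, ["*", "x", ["+", "x", 1]]], {}))  # 12
```

The evaluator itself would be a lousy way to do graphics or string processing, but extending it toward some narrow sub-domain is easy, which is exactly the “good at making special tools” behavior I mean.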

Is it any surprise that it takes many formalisms to make the disparate problems we face soluble with fewer clearer lines of code?  I’m not surprised.  I’ve fiddled with just a few domains and languages in my career, and there are many differences.  Here are some I’ve played with: 

–  Database problems favor inherently parallel, set-theoretic languages, including the rather homely but oh-so-practical SQL.

–  Bootstrapping everything from nothing can be done very easily with Forth.

–  Certain kinds of graphics cry out for the purity of mathematical languages with an ability to manipulate matrices and deal with collections of graphical entities.  Some like Fortran for math, but I’ve always liked APL and spreadsheets.  I never played with Mathematica enough, but it is likely another alternative.  Tying something like APL and spreadsheets together with graphics in a suitably expressive way would probably make for a wonderful graphics language.  I’ve certainly seen interesting graphics come out of Mathematica, so maybe it’s been done.

–  User interface is a surprisingly parallel problem space dealing with events.  Most frameworks make this painful.  I’ll bet there is an opportunity for a great language in this niche.  I like Adobe’s Flex a lot, but it is far from perfect.  Constraint-oriented programming (ThingLab!) is closer, so I am very hopeful about Adobe Thermo raising the bar further.

–  Low-level systems programming of memory allocators, process schedulers, and the like loves languages from the C family.  The less successful branch that included things like Modula-2 and Eiffel is not bad either.  There are likely more recent developments I’m not familiar with.

–  Application domain-specific programming likes Ruby.

–  String processing is another world of fascinating special-purpose formalisms such as regular expressions and weird parser generator languages.

–  Creating new languages and pure computer science concepts seems to benefit hugely from Lisp. 

All of the above are sharply affected by the choice of accompanying frameworks and libraries, so they have to be considered as a whole.
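To pick just one bullet from the list above, the string-processing formalisms show how much a special-purpose notation buys you.  A small sketch in Python, with an invented log line: one regular expression replaces what would otherwise be a hand-written, character-by-character scanner:

```python
import re

# A made-up log line, parsed declaratively with the regular-expression
# formalism rather than an imperative scanner.
log_line = "2024-01-15 12:03:44 ERROR disk quota exceeded"
match = re.match(r"(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (\w+) (.*)", log_line)
date, time, level, message = match.groups()
print(level, "-", message)  # ERROR - disk quota exceeded
```

The regex is unintuitive to someone who has never seen the formalism, and transparent to someone who has, which is the whole argument in miniature.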

This is why I’m so convinced that polyglot systems are the way to go.  Whether the polyglot family has some underlying “assembly language” like Lisp or even Ruby is almost immaterial.  The virtual machine can also be the underlying unifying theme, so long as the various polyglot languages can communicate with one another. 

A last comment on this way of thinking is that I feel Design Patterns are simply ways of imposing or adding those formalisms a particular language doesn’t directly support or doesn’t support well.  There’s been a lot of back and forth on this in the blogosphere, and it seems controversial.  Some say that a perfect language would need no design patterns.  I don’t think so.  I hope I’ve pointed out adequately that the domains require different languages so that there are no “perfect” languages.  From my perspective, you could take any design pattern and come up with a language construct that builds the pattern into the language.  Whether it makes sense or not is another issue.
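Python’s generators are a handy existing illustration of a pattern absorbed into a language: below is the classic Iterator design pattern written out by hand, next to the construct the language grew to replace it.  The class and function names are my own toy example:

```python
# The Iterator pattern spelled out manually, as you would in a language
# with no built-in support for it.
class CountdownIterator:
    def __init__(self, start):
        self.current = start

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

    def __iter__(self):
        return self

# The same idea once the language absorbs the pattern: a generator.
def countdown(start):
    while start > 0:
        yield start
        start -= 1

print(list(CountdownIterator(3)))  # [3, 2, 1]
print(list(countdown(3)))          # [3, 2, 1]
```

The pattern did not disappear; it became a language construct, which is exactly the move I am describing.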

Languages can become too baroque and obtuse when they are burdened with too many formalisms.  This is particularly true if the formalisms are at war with one another.  Too many ways to do the same thing may not be helpful.  I’d better stop, because I’m rapidly heading back to my polyglot programming soapbox.  Give me a set of very concise and relatively small languages for well-defined problem domains.  Perhaps I’ll take one “general purpose” (Jack of all trades, master of none?) language to fill in the gaps.

Which brings me to Joel Spolsky’s post.  He wants to forget a lot of the formal training and immerse undergrads in large programming projects coordinated by talented teachers.  Joel is responding to a lamentation that schools are too quick to press kids into Java and don’t teach enough of the old formalisms (now you see why I’ve used that peculiar word so much!).  His answer is to create what he views as a “Juilliard” style curriculum where the students build relatively complex software in teams under the watchful eye of talented teachers and without so much of the formalisms.  In this model, students would sign up to build a real piece of software of some kind, perhaps a game or social network.

I had one of those formal computer science educations.  I only have the Bachelor’s degree, but I completed all of the work for a PhD as well.  I just felt it would be more fun to write a business plan for my first startup than a thesis (something which maybe Joel and I agree on given his post).   We covered the formalisms in spades.  There was also an opportunity to take on some large software projects, but that was perhaps 1/3 of the curriculum. 

You are probably familiar with the observation many have made that the top 5% of programmers are perhaps 20 times more productive than the average programmer.  I know this to be true, and have seen it many times.  What does this have to do with the discussion?  Simply this: the amount of experience does not seem to matter in determining whether you’re in that 5%.  It does not seem to be possible to teach someone who is not a 5% performer how to get there.  I won’t try to prove that here; take my word for it, or at least ask yourself what it means if it is true.

More relevant are my observations from watching that 5% group.  What did they do better?  I used to say the thing that really set them apart was a facility with factoring of all kinds.  Those were the words I used, but factoring has so many meanings these days that I feel I should clarify.  The top performers were really good at flipping problems around and viewing them from many angles.  The top players could do it from extreme “meta” positions that were layers of abstraction away from the core problem.  These people lived and breathed isomorphisms, whether or not they had any idea what the word meant.  I’ve described the application of isomorphisms to creativity in an earlier post on “The Medici Effect”.  It’s worth a reread.

By contrast, the average performers tended to view everything according to a very limited set of models or formalisms they grasped.  They would try to hammer every problem into one of those few models.  In the worst cases, the models they were able to understand and employ were limited simply to basic structured programming constructs.  With difficulty they could get control flow.  They were not especially good at breaking things into functions or procedures, let alone modules.  Object oriented programming was often a bridge too far. 

We had several classes where this distinction became obvious.  One was a survey of programming languages that included Lisp, APL, SNOBOL, Prolog, and a couple of others I’ve forgotten.  We learned each new language and had to complete a model program that demonstrated we had really grasped the new models and formalisms being offered.  Another was a course wherein we built up an entire language using the fundamental Turing constructs.  The language created was our own, and everybody’s was different, but we had to build it up from the basics of Turing.  This course was done in Lisp.  It was a 300-level undergraduate course, but it was a definite weed-out course that showed who had it and who didn’t.  The last one was the hardcore algorithms course.  Once again, to understand a full range of interesting algorithms required applying many different formalisms.
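For readers who never took such a course, here is a small taste of what building arithmetic up “from the basics” feels like: Church numerals, where numbers are nothing but functions.  This sketch is in Python rather than Lisp, and it is my own illustration, not the actual coursework:

```python
# Church numerals: a number n is "apply a function f, n times".
zero = lambda f: lambda x: x                            # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))         # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of "+1".
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Everybody’s language in that course looked different, but the experience was the same: once you see that even numbers can be rebuilt from pure function application, a lot of other formalisms stop looking arbitrary.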

You’ve probably guessed my problem with Joel’s approach.  It’s probably fine if you are seeking to create a vocational school for factory programmers.  But the formalisms are the real meat.  It isn’t the lines of code.  The programmers who got the formalisms could grind out more code than any half dozen of the others combined.  They did so almost automatically and unconsciously.  I suspect Juilliard doesn’t work quite as Joel envisions either.  Do students there stretch their thinking about music’s “formalisms”, or do they spend most of their time perfecting a single arrangement and composition?

People, go out and learn some new languages.  Learn some new design patterns.  Dig out those formalisms.  It is surprising how intuitive many of them will be.  The more you pick up, the more intuitive future ones will be.  Divorce yourself from a particular language.  That’s what’s holding you up and making things unintuitive.  The best news is that because you don’t have to write reams of code a la Joel, you can learn faster and in your spare time.  The emphasis is on variety more so than volume.

QED
