My first computer was a Texas Instruments TI-99/4A. I got it just before going to college. I had been interested in computers since junior high, but this was the first time I could afford one. The primary programming options were Extended BASIC and assembly. Writing interesting programs required assembly (BASIC was too slow), so I learned TMS9900 assembly language and went on to learn a few others, including PDP-11 and 8080.
If you have programmed in any assembly language you know how tedious it is. There are so many details to worry about that have nothing to do with the problem you are trying to solve: primitive instructions, addressing modes, managing registers, and maintaining the call stack, to name a few. Because the instructions are so simple, you write volumes of code to accomplish very little. Often you were dealing directly with hardware, so on top of the many details of the processor's instruction set you had to understand the idiosyncrasies of each device's I/O registers.
Once you got some small useful thing working, like writing a character to the screen, you turned it into a reusable library routine so that you didn't have to write that code again. Libraries helped, but the big leap in productivity came with higher-level languages. Frederick Brooks wrote in "No Silver Bullet: Essence and Accidents of Software Engineering":
“Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming.”
Structured statements replaced compares and jumps. Expressions took the place of long runs of instructions. The vast (well, not so vast by today's standards) stretches of memory locations were replaced by data structures. The compiler took care of pushing values onto the call stack and popping them on return.
The details were interesting to a point and there was satisfaction in having the level of knowledge needed to program in assembly language. Making the switch to a language like C meant giving up some control but it was well worth it because of the productivity gain.
The switch from assembly to high-level languages didn’t happen instantaneously. Trust had to be won. Early compilers had bugs. People thought they could do better optimization than the compiler, and at first they could, but compilers got better and better. Compilers could also do something that assembly programs couldn’t — they could compile your program so it could run on different types of CPUs.
The details can be interesting and sometimes frustrating. Do I really need to know that IE makes the href attribute an absolute URL when inserting an anchor into the DOM while Firefox does not? (This usually doesn't matter, since if you use the href property you always get the absolute URL, but it bit me once.)
So what's wrong? Like I said, some of these frameworks abstract away HTML so you can think in terms of higher-level "components". The problem is that they are still just libraries (for the purposes of this article, frameworks are just really big libraries). I believe that larger productivity gains are possible from languages than from libraries. The languages may be domain-specific (for creating web applications), but that's OK. They could be compiled or interpreted or some combination of both.
One very important thing that happened with the move from assembly to high-level languages is that a relatively small group of people was able to focus on implementing compilers that did great optimization, following the best practices for each CPU type, while a much larger group of people focused on building their applications.
Imagine if best practices like using a hidden token to protect against CSRF or using the POST-Redirect-GET pattern were built into the language.
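To make the CSRF half of that concrete, here is a minimal sketch, in Python, of the hidden-token pattern as you have to build it by hand today (POST-Redirect-GET is omitted). All the names here are mine, not from any particular framework; the point is that this is exactly the kind of boilerplate a language could emit for every form automatically:

```python
import hmac
import secrets

# What a language runtime could do behind the scenes: issue a random
# token with the session, embed it in each form as a hidden field, and
# reject any POST whose token does not match.

def issue_token(session: dict) -> str:
    """Generate a per-session CSRF token and remember it server-side."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def render_form(session: dict, action: str) -> str:
    """Embed the token in a hidden field -- the step everyone forgets."""
    return (f'<form method="POST" action="{action}">'
            f'<input type="hidden" name="csrf_token" value="{issue_token(session)}">'
            f'</form>')

def is_valid_post(session: dict, submitted_token: str) -> bool:
    """Compare in constant time against the stored token."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)

session = {}
form_html = render_form(session, "/transfer")
assert is_valid_post(session, session["csrf_token"])
assert not is_valid_post(session, "forged-token")
```

Nothing here is hard, which is precisely the problem: it is easy to write, easy to forget, and scattered across every form in the application instead of living in one place.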
Imagine if cross-cutting concerns and decisions that usually have to be made up front were compiler switches or runtime options. Examples:
- What level of functionality do old browsers get?
- Is the back end going to be PHP or J2EE?
- Should session state be kept on the client or server?
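As a purely hypothetical sketch (every name below is invented for illustration), those three decisions might look like nothing more than an options record handed to a compiler driver, rather than architecture baked into the application code:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical build options for an imagined web-language compiler.
# Each field corresponds to one of the decisions listed above.

class Backend(Enum):
    PHP = "php"
    J2EE = "j2ee"

class SessionStore(Enum):
    CLIENT = "client"
    SERVER = "server"

@dataclass
class BuildOptions:
    legacy_browser_support: bool = False     # what do old browsers get?
    backend: Backend = Backend.PHP           # PHP or J2EE target
    session_store: SessionStore = SessionStore.SERVER

def describe(opts: BuildOptions) -> str:
    """What a compiler driver might report for a given configuration."""
    return (f"target={opts.backend.value} "
            f"session={opts.session_store.value} "
            f"legacy={'on' if opts.legacy_browser_support else 'off'}")

print(describe(BuildOptions(backend=Backend.J2EE,
                            session_store=SessionStore.CLIENT)))
```

Changing the back end or moving session state would then be a rebuild, not a rewrite.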
There are huge opportunities for performance optimization: on the client, over the wire, and on the server. The optimizations can improve over time, independently of the applications. Imagine if CSS, JavaScript, and images were automatically combined, minified, and compressed so that overall response time was minimized.
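Here is a toy version of that pipeline, in Python, just to show the shape of it. The crude comment-stripping "minifier" is mine and only an illustration; real minifiers parse the code:

```python
import gzip

# Combine several JS sources into one file, crudely "minify" by
# dropping blank lines, comment-only lines, and indentation, then
# gzip the result for transfer.

def combine(sources: list[str]) -> str:
    return "\n".join(sources)

def crude_minify(js: str) -> str:
    kept = []
    for line in js.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            kept.append(stripped)
    return "\n".join(kept)

def compress(text: str) -> bytes:
    return gzip.compress(text.encode("utf-8"))

sources = [
    "// widget library\nfunction hello() {\n    return 'hi';\n}\n",
    "// app code\nvar msg = hello();\n",
]
combined = combine(sources)
minified = crude_minify(combined)
payload = compress(minified)
print(len(combined), len(minified), len(payload))
```

The interesting part is not this code but who runs it: done by the language implementation, the strategy (how to bundle, when to compress) can keep improving without anyone touching the applications.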
Yes the abstractions will leak. It will take time for the language implementations to get good enough and for programmers to trust them. I would like to see this happen.
Some may wonder whether I actually read "No Silver Bullet" or just pulled a quote from it. After all, it says that languages are not a silver bullet. I'm not arguing that they are. I'm saying that they can do better than frameworks. There is enough accidental complexity in building web apps today that a domain-specific language could be a big help.
So why hasn't it happened yet? Perhaps it's just that the right framework hasn't come along yet. That's the thinking that has given us at least 57 frameworks in Java alone. I think some believe that tools are the answer. It used to be that you could charge money for a compiler. Not so anymore. But that can't be the problem, since most of the frameworks are also free.
Perhaps it is already happening. I have not dug into Links but it sounds just like what I’m talking about.
I plan to write more on this topic in the future.