Back in Uncle Bob's early years as a programmer, programming was reserved for the extremely patient.
You wrote your code on a coding sheet. You gave it to a typist who created punch cards for you, in the spare time they had between important data-input jobs. If you were lucky, you would get your punch cards within twenty-four hours. You studied them carefully, because the people who made them couldn't code and therefore couldn't catch any typos or mistakes. What's worse, they could introduce a few typos themselves.
Once you had your punch cards, you gave them to a computer operator. That person was also busy feeding the computer important data for payroll or some other number-crunching job.
Somewhere in the next twenty-four hours (or more) your code would be compiled, and you would get a printout complaining about a missing semicolon.
Your edit-to-output time was on the order of days. Code reuse was non-existent, as operating systems and libraries weren't a thing back then. As Uncle Bob pointed out, the way to cope with those delays was to work on several programs at once.
This was a necessity because computers were expensive. No attention was paid to things like programmers' context switching or the limits of their working memory. A computer's time was more expensive than a programmer's time. Your feedback loop was not even considered a factor, despite being measured in days.
Things have changed since then.
When developers got access to a terminal and could run programs themselves, the priorities shifted. Suddenly we started to think of compilation and printouts as part of the job, not as something other people experienced.
Computer Operators became System Administrators. You still had to wait for builds, and running programs remotely on limited resources could put you in an hours-long queue, but the improvement was obvious. You could use a computer to write software.
Besides the obvious benefit of being able to type the program yourself, you got your results sooner. Not involving other people shortened the feedback loop: just you, the compiler, and a job queue.
You still had to fix your missing semicolon, but nobody else would be frustrated about it.
We also noticed at this point that writing the same stuff over and over again is inefficient. Libraries, operating systems, and frameworks started to appear.
Then computers moved from a separate room into the one you were in, and sometimes you were allowed to use them directly.
First you had to share them, but within a few years you could have your own computer on your desk. In the meantime, another kind of programming language started to appear. Interpreted languages were slow and inefficient, but they gave you results without a costly compilation step. The distinction between compiled and interpreted languages has since blurred, but the workflow between those two ways of running code is very different.
The feedback loop was shorter again.
Some would surely point out that compilation could catch many mistakes before runtime, and they are right. Yet automatic compilation and restart-less class reloading, which emulate the interpreted-language flow, are a viable product category with thousands of users.
We no longer had to spend several minutes getting back into the program's context, because we could type and get results immediately.
I think the main reasons Perl was so popular were how quickly you could hack something useful together and how short a feedback loop it created for command-line applications.
Enter the web
In the nineties, when the World Wide Web was in its infancy, people realised they could use HTTP with HTML not only to serve content but also to interact with a user the way an application would.
Compiled languages were efficient on servers, but rebuilding took developers out of their focus. Interpreted languages gained an advantage for this very reason. You save your file, switch to a browser, and refresh.
While better-designed languages like Ruby or Python could do it too, PHP took the biggest slice of the cake because of its share-nothing architecture. You don't have to restart a server to refresh the routing. Refresh the page and off you go.
And things are changing, as always.
I love using REPLs for their instant feedback and exploratory prototyping. PHP sucks at this because you cannot redefine a class or a method, so you have to restart your REPL (and recreate your environment); in Ruby it's wonderful.
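To make that concrete, here is a minimal sketch of what makes a Ruby REPL session so fluid: classes can be reopened and methods redefined on the fly, and even existing objects pick up the new behaviour (the `Greeter` class below is a made-up example, not from any library):

```ruby
# Define a class, then reopen it to change behaviour without restarting.
class Greeter
  def hello
    "hi"
  end
end

g = Greeter.new
puts g.hello          # prints "hi"

# Reopen the same class; method lookup is dynamic, so the existing
# instance `g` sees the new definition immediately.
class Greeter
  def hello
    "hello again"
  end
end

puts g.hello          # prints "hello again"
```

In PHP, by contrast, a second `class Greeter` declaration is a fatal error, which is why its REPL sessions need restarting.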
REPLs are not a new idea, but there is at least one new player in the web-browser world: live reload. Get a nice big display (or two), put your code on one side and the results on the other.
Did it influence the design of our code? I'm quite sure of it. For the better? That's yet to be seen.
The upside is that you spend more time with working output, so you can gain more insight into your code. On the other hand, if you get your result sooner, you might be tempted not to refactor it into something readable to another human being. Anyone working with academic code can agree. ;)
A long way ahead
There are still places where the feedback loop is longer than this. While we have a REPL on the back end and live reload on the front end, we still have to deal with redeployments of native mobile apps. Hardware projects also require that special kind of discipline for long feedback loops.
But the trend is very clear: the sooner you see your results, the more you keep in your code (and the less in your head).