Thu Oct 11 09:42:30 PST Principle of Least Responsibility: A lot of the code I write these days is for running experiments (or analyzing data from experiments). And I'm always trying to automate my testing more, in order to keep the system from being idle when there are more cases to test. So, given some experimental program E, and a set of inputs to test {1,2,3}, it's tempting to give E the power (through the magic of computer programming) to take in a list of tasks to work on -- so you can tell it "do {1,2,3}" instead of telling it "do 1", then telling it "do 2", and then "do 3".
But there's a problem here, especially if your tests take a long time to complete. If your code crashes (or a test fails) in the middle of "do {1,2,3}", it may not be obvious where in the test sequence the code failed. Or, even if it is obvious, it may not be easy to resume the remaining tasks from where things crashed.
Instead, it's more robust to write a simple wrapper W that, when given the inputs {1,2,3}, picks them off one at a time, calling E with the individual tasks. It may take a little extra work to do things this way, but it will pay dividends in the long run.
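For concreteness, here's a minimal sketch of that kind of wrapper in Python (the program name "./experiment" and the command-line task names are just stand-ins for E and {1,2,3}, not anything real):

    #!/usr/bin/env python3
    # w.py -- a minimal wrapper W: run the experiment program once per task.
    # "./experiment" and the task arguments are placeholders for E and {1,2,3}.
    import subprocess
    import sys

    EXPERIMENT = "./experiment"   # the single-task program E

    def main():
        tasks = sys.argv[1:]      # e.g.: ./w.py 1 2 3
        for task in tasks:
            # Each invocation of E is responsible for exactly one task.
            result = subprocess.run([EXPERIMENT, task])
            if result.returncode != 0:
                print(f"task {task} failed (exit {result.returncode})", file=sys.stderr)

    if __name__ == "__main__":
        main()

The point is just that the loop over tasks lives in W, so E never has to know anything beyond the one task it was handed.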
For example, another benefit is intelligent management of repetitions of experiments. Suppose you have tasks {1,2,3}, but you want to run each of them 3 times. A naive approach will run {1,1,1,2,2,2,3,3,3}, but then you have potential ordering bias. And if (say) all three runs of test 1 succeed, but the second run of test 2 fails, what happens to your results?
Using a wrapper W, you can generate the list of experiments, randomize them, and write them to a file. W can track the success or failure of each test, keeping your experiments running but also separating the wheat from the chaff. This approach can also give you a good way to track progress through a set of experiments.
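That generation step might look something like this (again just a sketch; the filename "tasklog" and the repetition count are examples, not anything canonical):

    #!/usr/bin/env python3
    # make_tasklog.py -- build the full run list, shuffle it, and write the task log.
    import random

    tasks = ["1", "2", "3"]   # the experimental inputs
    reps = 3                  # how many times to run each one

    # Expand to one entry per run, then shuffle to avoid ordering bias.
    runs = [t for t in tasks for _ in range(reps)]
    random.shuffle(runs)

    with open("tasklog", "w") as f:
        for run in runs:
            f.write(run + "\n")
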
My "task log" usually looks like a shuffled file with one line for each task, or a directory of files with each file representing one task; this also allows you to modify the task set on the fly -- adding tests to the end of the task log. W doesn't remove a task until it's been completed successfully, but it moves on to subsequent tasks so it doesn't get stuck in a pathological case. (You could have it put failed tests in a separate list.) Anyway, all this allows for better recovery and progress tracking.
Anyway, I know there are lots of names for this general principle of abstraction or compartmentalization, but I'm calling it the "Principle of Least Responsibility" -- with the first working definition being "give a piece of code the least amount of responsibility that is practical". I say "practical", not "possible", because taking it to the extreme could be burdensome.
This is pretty similar to the UNIX philosophy of "Do one thing and do it well." In my case, it specifically means "don't make the code hold the state of a sequence of experiments because all it needs to be responsible for is the single experiment it is currently performing!"
In the moment, it may seem like a great idea to let your code do all the heavy lifting for you, but a little abstraction/functional decomposition can save you a lot of grief in the long run.