This is a look at the process used to write Rheolism, a one line game written in BBC BASIC. The techniques extracted will hopefully be of interest to more than just those who write one line BASIC programs.
I don't intend the classification of techniques and principles here to be complete or exhaustive. Maybe it will be interesting to someone doing a more serious survey.
The strategy we used to write Rheolism was mostly an incremental one. The initial cost of construction was small. Most of the work involved understanding the effects of coding things in different ways.
Our target was very tight, so we had to put all our effort into making the program as short as possible. We coded it as tightly as we could and then worked out how to improve on it.
No amount of careful design work can make up for these experiments. Our initial coding was completed in a matter of hours, then incrementally refined as a result of the research done over the next two months. The process involved following different possible design choices to understand their effects.
We started by writing a simple program that worked and then tried to make it shorter. This seems easier than the opposite approach: starting with a program that fits on one line and trying to make it do what we want.
Having a current best version helped by providing a cut-off: a way of determining that an alternative is definitely, or at least probably, worse.
We would like to be able to do a top down design, making the major design decisions that have the largest and least predictable effects first. In practice, many significant changes were made throughout the project, altering the overall structure of the code and the format of the brick data. The final version of the code does not resemble the first version in any way.
There were many dead ends and failed experiments, but we learnt from every change we made. Most of the experiments didn't make it into the final version. However, they gave us an understanding of what might be possible, and how to go about it.
As we refined the code, we soon found we could not just make local changes. We needed to adjust more than one thing at once to get a significant reduction in size. Although many small changes are often possible, the biggest improvements are found at a more global level.
A sequence of changes often had an overall goal in mind. This might be to eliminate some variable or to share some pieces of code.
It is not possible to try every possible change, and it is not profitable to try too many. We had to spend our time wisely. (Can any of the time spent writing a one line game be said to be spent wisely?)
Overall the process was satisfying and entertaining. Satisfaction came from making new discoveries which shortened the code. The hacks made were sometimes elegant, sometimes horrifying and often amusing.
Here I identify some of the tactics we used. I'm using the word tactics to mean short-term ways of approaching a problem. The division between strategy, tactics and techniques isn't always clear.
Many tactics use the general ideas of searching, with the set of all possible programs as the domain of our search. Most of the standard ideas about searching carry over, but I only mention a few here.
A measure can help in evaluating a design decision. A measure provides an estimate of how compact the resulting code is. This estimate can be compared against those for the alternative decisions made on other branches of the search.
A search can be guided by using heuristic measures. These can identify "soft" areas: those that are easiest to change to move the program in the right direction. An identified soft area can then be the subject of marking.
There are many ways of looking at the search space. In search framing we pick an aspect (a frame) we wish to vary and look at what choices we have within it. This focuses us on considering variations within a particular frame.
As an example, consider a simple assignment. In one frame we may think about varying the data format used to store values in a variable. In another frame we may think about how to write the assignment so that it is short. In yet another frame, we may wish to change the code structure so the assignment is unnecessary.
Marking is the name for what we do when we have decided to select a target, in this case a piece of code, and try out various techniques to improve it.
Hammering is the other way round from marking. We take our technique (our hammer) and go round applying it (hitting things with it) to see what effect it has. Hopefully we find somewhere where the technique helps improve things. This is often useful when we have just learnt or thought of a new technique.
Here we start with constraints given to us by the problem, or select a subproblem, adding in extra assumptions and constraints. We then make deductions about the form of the solution. And if, as is said, necessity is the mother of invention, we come up with something that meets our needs.
Adding optimistic constraints can be the more powerful technique. If we ask, "Can we do this in two bytes?", the search becomes easier and, provided a solution exists, the result better than if our target were 20 bytes.
Often rearrangements of the code are a big part of applying the various techniques. An incremental reorganization of code while preserving its meaning is often the best way to do this. A modern term for this is refactoring.
Simplicity is something to aim for. A simple program will tend to be quite short. This works against many of the other techniques which tend to increase complexity. However, the simplest solution is often the best, and if not, is certainly a good place to start. A simpler, more basic way of doing something is also likely to lead to more code sharing.
With no other force to arbitrate in our decision, consistency is a good thing to aim for. This gives us the best chance to see the similarities and differences in our code. It also leads on more directly to some form of sharing.
The simplest and most obvious technique is to make sure we don't repeat code that we can more economically share. Often we have to make substantial rearrangements to share code.
One way to share code is to use subroutines or functions. Code from different branches of a conditional may also be shared, sometimes after imitation. Code may also be shared by putting repeated code inside a loop; again, imitation may be needed.
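As a hypothetical sketch (not code from Rheolism), consider drawing the four cells of a 2 by 2 block. Written out longhand, the cursor positioning is repeated four times; a loop shares it, deriving both offsets from the loop counter. Here VDU 31,x,y moves the text cursor and 42 is the character code of "*":

    X%=5:Y%=3
    REM longhand: four near-identical statements
    VDU 31,X%,Y%,42
    VDU 31,X%+1,Y%,42
    VDU 31,X%,Y%+1,42
    VDU 31,X%+1,Y%+1,42
    REM the same code shared inside a loop
    FOR I%=0 TO 3:VDU 31,X%+I% MOD 2,Y%+I% DIV 2,42:NEXT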
We can define local flexibility as the ability to make an arbitrary choice in the way something is coded. This could be an ordering of statements or the encoding of data. Often we can take this flexibility and make a choice that is convenient for some other part of the code. Whenever we turn such a choice into a useful coincidence, we call this hardening.
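A hypothetical sketch of hardening in BBC BASIC: the values used to mark empty and full cells in a playfield array are a free choice, so pick 32 and 42, the character codes for a space and for "*". The arbitrary encoding now coincides with what the screen wants, and a cell can be printed without any translation:

    DIM C%(9,19)
    FOR X%=0 TO 9:FOR Y%=0 TO 19:C%(X%,Y%)=32:NEXT:NEXT
    C%(4,0)=42
    REM a cell's stored value is also its screen character
    FOR X%=0 TO 9:FOR Y%=0 TO 19:VDU 31,X%,Y%,C%(X%,Y%):NEXT:NEXT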
Where pieces of code fit together, there is a chance that various conditions will be met without having to add extra code. For example, we sometimes avoid initializing variables and instead rely on the values they are left holding by an earlier part of the code. These invisible contracts between the various parts of the code maintain invariants that work together in a most fortunate way, reducing the total amount of code we need.
I think of the example of not initializing variables as being a bit like the convention of always putting sharp knives the same way round in a knife drawer. You can reach in and pick up a knife without having to check which end the handle is. You either get a minor saving or a bloody mess.
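A small sketch of such a contract (hypothetical, in BBC BASIC): a FOR loop leaves its control variable one step past its limit, so later code that happens to need that value can reuse the variable instead of paying for an assignment:

    X%=0:Y%=0
    FOR I%=0 TO 3:VDU 31,X%+I% MOD 2,Y%+I% DIV 2,42:NEXT
    REM the loop exits with I%=4; any later code needing the value 4
    REM can rely on I% rather than spending bytes setting it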
A piece of code can have a side effect as well as achieving its main purpose. We can use this in three ways. If the code with the side effect is shorter than the code without, then we can save either by making the side effect harmless (harmless side effect), or by adding extra code to undo the side effect (neutralized side effect). The third way is to arrange for the side effect to be useful in some way (bonus side effect).
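A plausible BBC BASIC illustration (not necessarily how Rheolism does it): INKEY with a positive argument waits up to that many centiseconds for a keypress and returns the key code, or -1 if none arrives. One call can therefore pace the game and scan the keyboard at the same time:

    K%=INKEY(9)
    REM the delay sets the game speed; reading the key into K% at
    REM the same time is a bonus side effect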
Making two similar pieces of code identical is the technique of imitation. The two pieces are designed to perform different tasks but, by changing the code, perhaps by adding conditionals, we can make them identical yet still correct in all cases. This can then be used for sharing. The most obvious case is taking the code from the two branches of a conditional and making the same code apply to each case. This is particularly useful in our one line BASIC program, where we cannot easily have two different branches of an IF on one line.
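A hypothetical sketch: two branches that differ only in how much they add to the score can be made identical by folding the difference into arithmetic, using the fact that a true condition in BBC BASIC has the value -1:

    REM before: H% holds TRUE (-1) or FALSE (0)
    IF H% THEN S%=S%+10:PRINT S% ELSE S%=S%+1:PRINT S%
    REM after imitation the branches collapse into one statement
    S%=S%+1-9*H%:PRINT S%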
The technique of merging is where two occurrences of something are merged into one. Merging is like sharing, except it is the framework and not the contents that is being shared. For example, two conditionals can be merged when their conditions are the same. Maybe two different variables can become one, if their contents can be combined. Often other changes are made to introduce the possibility of merging.
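A hypothetical sketch of merging two variables: rather than separate X% and Y% coordinates, keep the piece position as a single cell index into a playfield of assumed width W%. One variable now carries both, and every move is a single addition:

    X%=4:Y%=0
    W%=10          :REM hypothetical playfield width
    P%=Y%*W%+X%    :REM X% and Y% merged into the single index P%
    P%=P%-1        :REM move left
    P%=P%+1        :REM move right
    P%=P%+W%       :REM move down a row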
A similar technique to merging is extension. The end effect can look the same, but the process is different. In extension, you ask yourself, "While I'm doing this, is there anything else that can be done at the same time?" Can I put some more code in this condition or loop? Can I store more state in this variable?
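A hypothetical sketch of extension: the main loop already has an UNTIL, so the quit test is put there as well rather than being given its own IF. INKEY with a negative argument tests a single key directly; -17 is assumed here to be the Q key:

    D%=0
    REPEAT
      REM ... one step of the game, setting D% on game over ...
    UNTIL D% OR INKEY(-17)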
Matching is the technique of implementing a complex behaviour using components whose behaviour, when used in a particular way, happens to match the one we want.
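One sketch of matching in this setting: POINT reads back the colour of a screen pixel, so the screen itself can stand in for the playfield data structure. The scale factors below are assumptions for MODE 1, where a character cell is 32 by 32 graphics units, and are not taken from Rheolism:

    MODE 1:X%=5:Y%=3
    REM a cell is empty if its top-left pixel reads back as colour 0
    IF POINT(32*X%,1023-32*Y%)=0 THEN PRINT "cell free"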
In the search framing technique, we focus on a particular aspect (frame) of the code. This section lists a few of these frames.
The basic algorithms we use are deeply tied up with the data structures, and together they determine the overall program size. Remember that the fastest algorithms are rarely the shortest.
The operations we perform on the data can be very different in size, depending on the data structures used. Since each piece of data will tend to be involved in many interactions, the effects are large and widespread.
Data formats are very important and tend to be tied up with the operations we want to do on the data structures. The particular values affect the compactness of every point in the program where they are referenced. We can often apply hardening to the flexibility we have in data format.
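A hypothetical brick format, for illustration only and not the format Rheolism settled on: each shape is a 16-bit bitmap of a 4 by 4 box, one bit per cell, so a shape is a single constant and drawing it is one short loop:

    X%=5:Y%=3
    S%=&660 :REM assumed bitmap for the square piece
    FOR I%=0 TO 15
      IF S% AND 2^I% THEN VDU 31,X%+I% MOD 4,Y%+I% DIV 4,42
    NEXT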
We have a choice of which components we build our program from. We need to consider how complex the components are to use. We can also look at the amount of variety we have in expression when using them. Finding components whose behaviour mimics a behaviour we wish to implement is the matching technique described above.
The most important gains here are to be had when we structure the code so that code is shared. Often substantial reorganization of the code can be made to gain some benefit, either as a direct result of restructuring, or to reveal other opportunities for improvement.
Even when sharing and merging are not applied, there are simple differences in length between the possible code structures.
Often there are many ways to express a particular idea in code. These are often minor things, like the ordering of statements or operands. It is useful to combine the focus on expression with some extra flexibility, such as in the data format. A condition in particular can often be expressed in a number of ways, as it is only important that the result is zero in the right cases.
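For instance (a generic BBC BASIC example, not a line from Rheolism), IF only cares whether its expression is zero, so a comparison can often be replaced by bare arithmetic:

    X%=7
    IF X%<>3 THEN PRINT "not three"
    REM the same condition, slightly shorter: X%-3 is non-zero
    REM exactly when X% is not 3
    IF X%-3 THEN PRINT "not three"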
We have a lot of choice in the exact behaviour of our program. We don't have to end up with an exactly equivalent program to the one we started with. We just need something that meets our requirements. In particular we have a lot of choice about exact screen output, how the keys are scanned, etc. We can also compromise on correctness. It's only a game.
The aims and rules that we set ourselves can be bent and broken. We had flexibility in stating our intention and the rules in the first place. They are our rules and we can change them.
Measures are a useful tool when trying to understand the effects various changes can have on the code. A quantitative approach can work well in areas where our intuition is weak, and is more objective, providing more predictable results. Measures can help us estimate the effect an improvement will have, as well as guide us to the parts of the code where there is most room for improvement.
When writing Rheolism, we used a lot of intuition to tell us where to look, and estimated by guesswork. However, understanding the things we took into account can help improve the guesswork and provide hints at how more formal measures could be made.
The most obvious thing to look at is how much functionality is achieved per bit of implementation. To measure this we have to have some way of measuring functionality; we can estimate what proportion of the code is needed to do what. If we spend 40% of the code on 10% of the functionality, we may well want to improve that code. If code is shared, how we account for it becomes a problem when trying to make this notion more precise.
Another thing to look at is the number of different uses a piece of code has. The more uses it has, the more useful it is. We can perhaps look at the average number of uses over the code.
Another indication of the compactness of the program is how tightly it fits together. Imagine changing a piece of code to alter the data format, or to order things differently. What is the cost of the knock-on changes? This is a measure of how good a choice is: how much bigger will the program get if we make the choice differently? To make this assessment accurately, we would have to optimize over all other choices, which is hardly ever going to be possible. However, we can often estimate the effect of a choice to a great extent. We can get a relative answer about a saving more easily than an absolute answer.
An indication that there is room for improvement is looseness. Loose code has arbitrary choices in it that are not dependent on other parts of the program. We can measure the number of free choices made in the code (the degrees of freedom): the number of different equivalent ways the code can be written.
David Moore
$Id: method.html,v 1.18 2001/07/17 08:13:14 dsm Exp dsm $