Wednesday, 25 September 2013

101 uses for an old smartphone, #1

I'm about to install a freezer in the barn, which is half a mile from my home. How am I going to know if the power fails or the internal temperature rises?

Various companies sell ultra-low-power temperature sensors which communicate via Bluetooth, for example the TI SensorTag (which actually does far more, but it's temperature I'm interested in just now).

It is said to run for a year on one button cell.

I have an old Android phone - in fact, an original Google G1 - that is sitting doing nothing. An Android app can detect whether the device is connected to AC power by registering a receiver for the intent ACTION_BATTERY_CHANGED.

Texas Instruments offer sample source code for an app which can read data from the bluetooth sensor (although needless to say their Linux installer doesn't bloody well work!)

So, here is the specification for an app:

  • It runs as a service, from startup;
  • It checks for the presence of the bluetooth sensor, and if it doesn't detect it, sends a message;
  • It registers a receiver for power state events, and if the mains power goes off, it sends a message;
  • It periodically (e.g. every ten minutes, but configurable) reads temperature from the sensor. If it can't communicate with the sensor or the temperature is out of the configured range, it sends a message;
  • When it sends a message, it does so by SMS, email or Twitter (configurably), to a configured number/address.
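The periodic check in that spec is simple enough to sketch. Here it is in Python, purely for illustration - the real app would be an Android service in Java, and every name here (read_temperature, send_message, the thresholds) is hypothetical:

```python
# Illustrative sketch of one polling cycle from the spec above. The real
# thing would be an Android service; all names and thresholds here are
# made up for the example.

def check(read_temperature, send_message, low=-25.0, high=-15.0):
    """One polling cycle: read the sensor and alert if anything is wrong."""
    temperature = read_temperature()
    if temperature is None:
        send_message("Cannot communicate with the sensor!")
    elif not (low <= temperature <= high):
        send_message("Temperature %.1f C is out of range!" % temperature)

alerts = []
check(lambda: -20.0, alerts.append)   # all well: no message
check(lambda: -5.0, alerts.append)    # freezer warming up: alert sent
check(lambda: None, alerts.append)    # sensor unreachable: alert sent
print(alerts)
```

The power-failure half of the spec would hang off the ACTION_BATTERY_CHANGED receiver rather than a polling loop.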


All you do then is drop the sensor in the freezer, plug the phone's charger into a wall socket on the same circuit as the freezer, ensure the phone has a valid SIM with enough credit to send texts, and leave the phone switched on.

Sorted!

NOTE: I haven't yet written this app. I may never write this app. But then again, I may...

Friday, 20 September 2013

Editing and clojure revisited: this time, with structure!

Yesterday I blogged on editing Clojure, and commented that

"So after all that I'm going to talk about how wonderful it is to be able to do structure editing in Clojure, aren't I? No, sadly, I'm not, because (as yet) you can't."

Well, you can now. About tea-time yesterday, fooling around with Clojure, I worked out how the 'source' macro works - it retrieves the source of the function or macro whose name is its argument, by inspecting the metadata tagged to the symbol. So, you can retrieve function definitions. They're retrieved as strings from the file, but the read-string function parses s-expressions from strings, so that isn't a fatal problem. And having found that, a terminal based structure editor was not far away.
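The crucial enabling step is that read-string turns the retrieved source text back into structure. To illustrate the idea (in Python rather than Clojure - this toy parser is emphatically not Clojure's reader, and handles only the bare bones):

```python
# A toy s-expression reader, illustrating the read-string idea: once the
# source text is parsed into nested lists, 'editing' is ordinary
# manipulation of structure, not of text.

def read_string(s):
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()
    def parse(pos):
        if tokens[pos] == '(':
            expr, pos = [], pos + 1
            while tokens[pos] != ')':
                sub, pos = parse(pos)
                expr.append(sub)
            return expr, pos + 1
        return tokens[pos], pos + 1
    expr, _ = parse(0)
    return expr

source = "(defn double [x] (* 2 x))"
expr = read_string(source)
expr[3][1] = '3'     # structure editing: replace the 2 in the body
print(expr)          # ['defn', 'double', '[x]', ['*', '3', 'x']]
```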

OK, it is, as yet, crude. It needs a lot of polishing. And, a terminal oriented structure editor is not what I really want - I want a display editor which pops up in a window (and, related to that, a display inspector, too). That window is almost certainly a web page. However, writing a terminal oriented structure editor proves that it can be done.

There are significant problems ahead. You cannot, in theory, rebind a Clojure symbol, so you cannot in theory compile a new definition for an existing function symbol. But the package system does this, so it is possible. I need to find out how. Indeed, the whole package system would have to be modified significantly to support in-core editing. Just writing an editor is only scratching the surface.

But the surface is scratched. There is now a structure editor for Clojure. You can get it here.

Thursday, 19 September 2013

On editing, and Clojure

Back in the days when the world was young and we had proper software development tools, you'd write your top level function and evaluate it. And, of course, it would break, because you hadn't yet written your lower level functions. So you'd get a break inspector window pop up on the screen, and in that you could do one of several things.

Firstly and most obviously, you could supply a value as the return value of the function you hadn't yet written, and continue the computation. Secondly, you could inspect the stack, unwind a few levels of computation, and choose a different point to continue from, supplying a new return value there. But thirdly...

Thirdly, you could write your new lower level function right there on the stack, and continue the computation through your new function. And you could write it in a structure editor.

What is a structure editor?

OK, Lisp is a homoiconic language. Which is to say, a Lisp program is just a Lisp data structure. And the canonical form of the program is not a stream of bytes in a file, it's the structure in memory. Now, OK, one can reconstruct the structure in memory by writing it out to a file as a stream of bytes and then reading that stream of bytes back in; and consequently, one could (as one must for more primitive languages) edit that stream of bytes in a text editor in order to modify the program, and then read in the modified stream of bytes to create a modified structure in memory, and consequently a modified program.

That is, of course, how the LMI and the Symbolics boys worked, how Richard Stallman worked. But that's a lot of hassle. It's much simpler to pop the data structure in memory up in a window (automatically formatted, since it's just data and not a stream of bytes so the indenting can't be wrong) and edit it right there. And the joy of that, of course, is just as you can't get white space wrong, you also can't get parenthesis nesting wrong, since all you can do is insert a new well formed symbolic expression, delete an existing well formed symbolic expression, or replace an existing well formed symbolic expression with a new well formed symbolic expression.

People don't understand Lisp syntax, and that's because they think of it as text. It isn't text, it's structure; and the structure is beautifully simple and perfectly regular. It consists of well formed symbolic expressions nested in other well formed symbolic expressions, and nothing else. There are no keywords. There are no operators. There are no control blocks. Nothing is magic, because everything is magic. The Lisp programmer is perfectly at liberty to define T as NIL, or NIL as something else. The world will probably stop working if he does, and not in a good way - but he can, because not even T and NIL are magic.

So after all that I'm going to talk about how wonderful it is to be able to do structure editing in Clojure, aren't I? No, sadly, I'm not, because (as yet) you can't. All you can do is the old-fashioned primitive thing of editing a text file, a stream of bytes, and then reloading it. The world of modern software tools is still years behind where we were thirty years ago.

But some of the modern editors for Clojure are nevertheless pointing the way forward.

Slime

There is an editor which these days is Generally Not Used Except by Middle Aged Computer Scientists; back in the day when eight megabytes was a lot of memory, we used to call it Eight Megabytes And Constant Swapping. Its original author called it 'editor macros', but since, in those days, on most computer file systems, file names were constrained to be short, it was more generally abbreviated to Emacs.

Emacs, of course, is written in Lisp and for Lisp. And consequently, there has long been a Superior Lisp Interaction Mode for Emacs, better known as Slime. Slime establishes a communication between the Emacs editing buffer and a separate Lisp process, and sends chosen symbolic expressions from the buffer to the Lisp process for evaluation. And the beauty of that, of course, is that it works not only with Emacs Lisp but, in principle, with any Lisp, from Portable Standard Lisp through Symbolics Lisp and Franz Lisp to the execrable Common Lisp (now) to Clojure.

The problem with this is that Emacs, although enormously powerful and in its way elegant, is now long in the tooth. It always was user hostile to an extreme degree, and has a very steep learning curve. Thirty years ago when it was designed, modern user interface design hadn't evolved. Now, not only are all the Emacs key commands incompatible with modern conventions, you can't even copy and paste simply between an Emacs buffer and any other window. Don't get me wrong, Emacs was my text editor of choice for fifteen years from 1988 until the early 2000s, and it still is my editor of choice for XML and SGML, since it parses DTDs properly and does helpful auto-completion on that basis (to be fair Oxygen XML does this equally well, but it's expensive), but otherwise in truth it's a pig to work with these days, and Slime, while elegant, doesn't marry up to modern tools.

Clojure Mode

Separately from Slime there's now a 'clojure mode' for Emacs, which (allegedly) works similarly to but better than Slime. The truth is I've no idea whether this is true, because it's built for the Emacs 24 package system, and while I spent an hour this morning trying to back-port it to the Emacs 23 which is supported by Debian, I eventually gave up. Life is too short. It would have to be utterly amazing to be better than the more modern Clojure editing tools.

Counterclockwise

Counterclockwise is a Clojure mode for my current editor of choice, Eclipse. Not only does it understand Leiningen projects, do syntax highlighting and intelligent auto-complete, structure analysis, linking, all the things one expects of a modern IDE, it also has an interactive Clojure process in which, just as with Slime, you can evaluate either the currently highlighted symbolic expression in a file, or the whole file. This is simple, powerful, easy to use, and, of course, integrates with all the other good Eclipse tools like Mylyn, and version control systems.

Counterclockwise is a joy to use. In fact, one might almost describe it as the perfect development environment for Clojure, were it not for one other competitor...

Light Table

Light Table is something new; something which, although it is still a text editor, has something of the power of the structure editors I still yearn for. It's live. Which means, you don't just send things from the buffer you're typing into to a separate Clojure process to evaluate them; they are continually evaluated as you type. It's remarkable, revolutionary, and, I think, potentially points to a future for programming editors.

Unfortunately, in its present state, it's alpha code, and it's fragile. It works well for simple exploratory code, and even adding a connection to a Leiningen project - something which Light Table allegedly supports, and must do if it is to be used for serious work - works well, providing the project depends only on Clojure. But if, as most projects do, the project depends on other libraries, it frequently results in
Exception in thread "main" java.lang.RuntimeException: java.lang.NoSuchMethodError: clojure.lang.RT.mapUniqueKeys([Ljava/lang/Object;)Lclojure/lang/IPersistentMap;
 at clojure.lang.Util.runtimeException(Util.java:165)
 at clojure.lang.Compiler.eval(Compiler.java:6476)

That's really not helpful. Googling for it suggests that the problem is probably that Light Table is (on my machine) using Clojure 1.3, when it needs at least Clojure 1.5.1; however, while I have both versions on my machine, the path is set up to prefer Clojure 1.5.1, and the Leiningen project.clj file I'm using also specifies Clojure 1.5.1, so if it's using the wrong version I don't understand why it is.

In short, I'm really excited by Light Table, but I'm afraid it isn't yet ready for prime time.

Conclusion

Of course, there is not (yet) a structure editor for Clojure. It wouldn't at all be impossible to build one, and in fact that is the sort of tool-building project which is likely to pay off well in productivity in the long run; but it's also potentially tricky and I think I need to become more generally fluent in the language first. For the meantime, Counterclockwise is a tool which will serve me well.

Sunday, 15 September 2013

A staggered cantilever house

 I'm back to worrying again at the structural design of a house to fit into the natural hollow in the north-east corner of my croft, a house to fit organically into the landscape. It isn't that I've fallen out of love with the Winter Palace, I haven't. I like it very much. So long as I remain single, and remain fit enough to climb its stairs, it suits me very well. But it is my ambition some day to cease to be single, and within the next twenty years the stairs will probably become beyond me. So in the long term another house is necessary. And the view from this house is very restricted; from the hollow in the north east corner I could see out to the Isle of Man.

So let's go over the options for that hollow. The first option I designed was the design I called Sousterran: four tessellated concrete domes, supported by beautifully sculptural flying buttresses. The merits of that design remain that its irregular, sculptural shape would fit very well into the landscape, that it is iconic and would be beautiful; and, in so much as the design is modular, extending it would actually be easy. The demerit is that it uses a lot of concrete, a lot of embodied energy. An eco-house it is not. A further technical problem is that if the waterproofing of the back wall were to fail, it would be extremely expensive and difficult to fix.

So the next design was the design I called Singlespace: a mostly-timber conical roof over a single large, circular room, later (in the Longeaves variant) with some sheltered external storage and the possibility at a later stage of adding an earth closet attached to the building, which one could use without going out of doors. Again, the design is elegant. But for me the major demerit is that the regular cone, even if turf-roofed, would look significantly unnatural. It would draw the eye in the landscape, immediately announcing something artificial. Also, if the back wall were not to suffer the same waterproofing issues as with Sousterran, there would have to be a significant walkway round the back, which firstly wastes space and secondly interrupts the continuity of the walking surface between the natural hillside and the roof. There would be, in effect, a chasm to be stepped over, or fallen into.

It was in thinking about that gap that I came up with the new design. The issue about the gap is that, if the earth of the hillside presses directly against the back wall of the house, that wall has to be very efficiently waterproofed - 'tanked' - to prevent damp. That's complex in itself, but if it fails, a great deal of earth has to be very carefully removed to expose the wall for repair, which would be expensive. Also, the wall needs to be inherently strong enough to resist the thrust of the hillside - not a problem with the hexagonal concrete cells of Sousterran, but much more problematic with a lighter weight wooden or straw structure.

The thing is, soil affected by heavy rainfall will not stand in a vertical cliff without substantial, expensive support. There is a natural angle of repose - the angle of a natural talus slope - at which it will safely lie. The use of modern fabrics, well pegged, or a dry-stane retaining wall, can steepen that slope a bit, at cost, but fundamentally the gap between the wall and the slope will be much wider at the top than at the bottom...

If the wall is vertical.

If the wall is vertical? Of course a wall is vertical.

Yes, but what if it isn't?

If it isn't, if it lies back parallel to the slope, you still get an air gap behind the house, eliminating the penetrating damp problem. And you can still - if rather less comfortably - get in to do any necessary maintenance. But you save that wasted space, you get more volume inside, and you close up the chasm between the hillside and the roof. Good!

Of course, you could have an angled back wall with the conical Singlespace roof, using a yurt-like structure. But in thinking about how you support an angled-out back wall, I thought of a W-shaped cantilever truss. However, if you have a triangular truss supporting a principal rafter from a point which is fairly central under that rafter, there should be no significant thrust on the roof tree. So the trusses supporting principal rafters on the back and front sheds of the roof don't need to line up with one another. So I came up with a sketch in which the trusses are staggered, and that's actually rather interesting in terms of the interior space. The space is continuous, and you can walk through it, but the trusses naturally break the space up into smaller spaces with a degree of visual privacy.

Of course, the staggered trusses could be built symmetrically to a straight rooftree supporting rectangular plane roof sheds. But actually they need not. A curved roof tree supporting sheds with simple, not compound, curvature would fit much more naturally into the landscape. I think this could be reasonably simple and easy to build - but of course the roof tree and purlins would need to be laminated, which is a little more complex than simply straight beams.

This design is at a fairly early stage. It has three times the space of the Winter Palace upstairs, but only about twice the space downstairs (and the need for a staircase limits the amount of usable room upstairs). Obviously, the basic design could be significantly bigger, either by increasing the span of the trusses or by adding an extra bay (or two bays) to the right hand end. There are some details which are not yet worked out. I've shown storage space (for e.g. firewood, or sheddy things) at the left-hand end of the building, but I have not yet thought through the right hand end.

I also haven't decided what to do about a staircase. In order to get two reasonably private bedrooms upstairs, the staircase needs to go up the middle of the building. It could go up the sloping back wall, at the left hand end of the sink unit in these drawings; or it could go across the middle of the space, dividing off the kitchen; although that would be a bit wasteful.

Sunday, 8 September 2013

A farewell to pigs

Tonight I have bagged up 12Kg of sausages, 9Kg of chops, 4Kg of spare ribs. I have salted one 7Kg ham, and I have another one waiting in the cool box. In refrigerators and freezers up in the void there is a veritable mountain of pork...

But I get ahead of myself. This week is the first time we've had pigs commercially slaughtered. Previously, we've slaughtered pigs here on the farm, but if you do that you firstly need cool weather, and secondly you can't sell the meat, or even give it away. Two pigs, each of them substantially bigger than me, are far more than I can eat; and processing them would have needed me to call on a lot of support from friends.

So after a lot of swithering I decided to get them slaughtered commercially. I organised for them to go to Lockerbie slaughterhouse, and organised for them to be delivered from there to my favourite butcher, Henderson's in Castle Douglas. Again, if you slaughter commercially, you have to have them butchered in a commercial standard, health approved butchery, or you can't sell meat. Henderson's, apart from being my butcher of choice, also quoted a very favourable price - £50 per pig.

Legally, the pigs had to be ear tagged to go to slaughter. We didn't have the requisite tags, but the Tarff Valley farmer's co-op was able to have them made up for us at twenty-four hours notice, which I thought was pretty good. There was some debate about what trailer to use. Finn has an animal trailer, but it's seen better days. After looking it over carefully, James, who was going to be driving (my car doesn't have a tow hitch), decided to borrow a trailer from a friend of his.

Finn decided to send one of his pigs with mine, so on Wednesday evening we loaded his pig into the borrowed trailer, and I towed it over the hill to my croft. I dismantled the electric fence in the gloaming, and then had a long fight with the ScotEID website to register the transfer of the pigs (in these food-safety conscious days, every animal movement on or off the farm has to be recorded), went to bed tired, and slept remarkably well. Thursday morning I was up at six, getting the pig trailer into my yard, seeing that Finn's pig was fed and watered, and organising barriers to help direct the pigs into the trailer. At seven, Finn and James came over to help load.

We were a little concerned that my pigs might fight with Finn's. They are from the same litter but hadn't seen one another for six months. So we divided the trailer into two with a hurdle. I then filled a bucket of pig food and went to call my pigs. They came willingly enough, but were amazingly reluctant to cross the line where the electric fence had been. However, I coaxed first one, then the other, over the line and round into the trailer. In the trailer they greeted their brother with no sign of hostility, and munched their breakfast contentedly. We closed the ramp and headed off to the slaughterhouse.

Coming round Dumfries bypass we were pulled over by police, doing a routine vehicle check. James was doubly glad he had brought the better trailer! But the police, after ten minutes, let us proceed with no problems, and shortly we arrived at Lockerbie. My phone was playing up, and wouldn't get Google Maps to direct us to the slaughterhouse, but I knew roughly where it was, I thought. Eventually after getting lost twice and having to ask for directions (also twice), we found we'd driven straight past it three times - it's a remarkably small, anonymous building, unsignposted. But the people there were friendly and efficient, and handled the pigs calmly and with remarkable gentleness. It was good to see such obvious concern for their welfare. And so I said goodbye to my pigs.

After slaughter the pigs were transported by Border Meats to Henderson's, on Friday. I hadn't discussed with Henderson's in advance how long they would take to turn them round and was expecting them sometime next week, but when I dropped in on them on Thursday lunchtime they said they would cut them up on Sunday morning and have them ready for half past twelve. They would have done me 'cured' and sliced bacon, but obviously you can't do a traditional cure in that time, so in fact the bacon would simply have been injected and then sliced, as much modern bacon is. Instead I asked them just to give me the bacon in flitches which I shall cure myself.

The pigs weighed around 90Kg each going to slaughter; Henderson's reported 70Kg each deadweight. By coincidence, that's what I weigh.

My friend Jude and I duly collected the meat at lunchtime; it's an extraordinary amount. I had ordered a freezer in good time, but the one I'd ordered was out of stock and couldn't be delivered. So I've ordered another, but it hasn't yet arrived. Fortunately Finn had a large second hand fridge-freezer in the void. I've filled that, and the remainder has gone into James and Vicky's freezer...

Except for the hams. I have two boned out hams, each 7Kg. I brought them down to the winter palace in the cool box. There are various web pages about how to make air-dried ham; I read several and then decided to experiment. I wanted to do a honey cure, so I mixed up


  • A kilogram of salt
  • Half a jar of honey
  • A dessert spoon of crushed peppercorns
  • A dessert spoon of mustard seeds
  • A dessert spoon of saltpetre
  • A couple of teaspoons of cloves

This made a very sticky paste, and I thought it looked fine - until I tried to apply it to the ham. It didn't adhere well, and it's important to get salt into every cut surface of the meat. So I mixed up a mixture of

  • 1.5Kg salt
  • 0.5Kg demerara sugar
  • 1 dessert spoon of saltpetre

and I've rubbed that carefully into all the surfaces of the meat. I've then stuffed my original honey paste mostly into the bone cavity, but also sort of spread it on the outer surfaces.

I've taken a plastic container which once held 25Kg of mineral lick for the cows, drilled some holes in the bottom, lined it with kitchen paper and poured in a couple of centimetres of salt. Into this container I've placed my ham. Tomorrow I'll cut boards to loosely fit into the top, and pile about 15Kg of stones onto them. The ham will stay in that, under the house, for a month, and then be hung up in a wire mesh cage in the roof of the woodshed through the winter. If all goes well, I'll then have air-dried ham. If all goes badly, of course, I'll just have a lump of rotten meat, but we'll see.

I'm not yet sure whether I will dry-cure and air-dry my other ham. I'd like to, but I'm aware I'd be taking a substantial risk. It would be much less risky to brine-cure the other and then freeze it. I'll think on that overnight.

But I ended the day with a celebratory meal of sausages, and they were good!

Friday, 30 August 2013

Freezer sizing

As I prepare to send the pigs to slaughter, one important issue is how big a freezer to buy.

My estimate of my pigs' live weight, using a well known estimating technique, is about 90 Kg each. Boned-out dead weight is likely to be about 66% of that - say 60 Kg each, or 120 Kg for the two. Of course I shan't keep all that, I'll sell some and give some away; and I shan't freeze all I keep. But given that I won't know how much I'll sell until I sell it, I need to reckon on being able to store 100 Kg of frozen meat.

Flesh just about floats in water; it's within 10% of the same density. So a kilogram of meat is about a litre of meat, give or take not very much. But the packing density will not be perfect; there will inevitably be gaps. Let's say, between fudge factors and packing density, I'll need to have space for at least 120 litres.

This year.

Aye, there's the rub. Steerpike - my one steer calf - comes ready for slaughter in the winter of 2014-2015, which is to say just over a year's time. As a 'non-short' Dexter, he's likely by then to have a live weight of at least 250Kg and a boned-out weight of at least 150Kg. Assuming the freezer is empty, I'll need at least 150 litres for him.

So I think I'm looking for a 200 litre freezer.
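For what it's worth, the arithmetic above reduces to a few lines (figures are this post's own estimates; meat taken as roughly a kilogram per litre):

```python
# Sanity check of the freezer sizing above. All figures are estimates
# from the post; the 20% fudge factor covers imperfect packing.

kept_meat_kg = 100                # what I reckon on storing this year
packing_fudge = 1.2               # gaps between irregular packages
litres_this_year = kept_meat_kg * packing_fudge   # 120 litres

steerpike_kg = 150                # boned-out Dexter steer, next winter
litres_next_year = steerpike_kg * 1.0             # at least 150 litres

needed = max(litres_this_year, litres_next_year)
print(needed)                     # 150.0 - so 200 litres gives headroom
```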

Sunday, 25 August 2013

Reference counting, and the garbage collection of equal sized objects

Yes, I'm still banging on about ideas provoked by the Wing, White and Singer paper, as presented at Mostly Functional. Brief summary: on Friday I posted an essay on the possible use of cons-space as an optimisation for the JVM memory allocator, for use specifically with functional languages (I'm thinking mainly of Clojure, but it has implications for other things such as Armed Bear Common Lisp, and possibly also Scala). Yesterday, I wrote about maintaining a separate heap for immutable objects, as another optimisation. Today, I'm going to write about reference counting, and where it fits in the mix.

The HotSpot JVM uses a tunable generational garbage collector, as detailed here. Generational garbage collectors are a development of the classic mark-and-sweep garbage collector that was used in early LISP implementations from about 1962.

Mark and Sweep

A mark and sweep garbage collector is triggered when allocatable memory is almost exhausted. When it is triggered, execution of the user program is halted. The garbage collector then

Mark phase:

iterate over every object in the heap, clearing the 'mark bit' in the header of each;
set the mark bit in the 'root' object;
repeat
    for each marked object in the heap,
        for each pointer in the object,
            set the mark bit in the header of the pointed-to object;
        end for each
    end for each
until no further objects are marked

Sweep phase:

iterate over the objects in the heap again, as follows:
    for each object,
        if there is 'free space' (i.e., space left by objects which were not marked in the preceding phase) 'below' the object in the heap, then
            copy the object as low in free space as is possible;
            iterate through every object in the whole heap, fixing up every pointer which pointed to the object in its old location to point to its new location;
        end if
    end for each

Finally, the user program is restarted. Needless to say, all this is expensive in time, and leads to noticeable pauses in the running program.
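The algorithm above can be sketched as runnable code; here is a toy Python model (not the JVM's collector - compaction and pointer fix-up are described in the comments but omitted for brevity):

```python
# Toy mark-and-sweep model: objects are dicts carrying a mark bit and a
# list of 'pointers' (indices of other objects in the heap list).

def mark(heap, root):
    """Mark phase: clear every mark bit, then repeatedly propagate marks
    from marked objects until no further objects are marked."""
    for obj in heap:
        obj['marked'] = False
    heap[root]['marked'] = True
    changed = True
    while changed:
        changed = False
        for obj in heap:
            if obj['marked']:
                for p in obj['pointers']:
                    if not heap[p]['marked']:
                        heap[p]['marked'] = True
                        changed = True

def sweep(heap):
    """Sweep phase, simplified: report which objects are garbage. A real
    collector would also compact survivors downwards and fix up every
    pointer to each moved object."""
    return [i for i, obj in enumerate(heap) if not obj['marked']]

heap = [
    {'pointers': [1]},   # 0: the root, points to 1
    {'pointers': [2]},   # 1: points to 2
    {'pointers': []},    # 2: a leaf
    {'pointers': [3]},   # 3: unreachable (points only to itself)
]
mark(heap, root=0)
print(sweep(heap))       # [3]
```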

Generational

The 'generational' garbage collector optimises this by observing that in most programs, the majority of objects are short lived, and that therefore it is younger objects which should most aggressively be garbage collected. The heap is divided into (at least two) segments, an 'old generation' segment and a 'young generation' segment. A generation counter is added to the header of each object, initialised to zero when the object is instantiated.

Then, when allocatable memory is almost exhausted, normally only the 'young generation' segment is marked and swept. Each time an object survives garbage collection in the 'young generation' segment, its generation counter is incremented, and when it hits the generation-max value, it is copied from the 'young generation' segment into the 'old generation' segment. However, obviously, when any object is moved, either by the sweep operation in the 'young generation' space or by promotion into the 'old generation' space, the entire heap needs to be checked for pointers which need to be updated.

Finally, if, when doing a promotion, it is found that 'old generation' space is almost exhausted, a full mark and sweep operation is performed on old generation space.

Although this sounds (and is) considerably more complicated than the naive mark and sweep algorithm, the 'new generation' garbage collection operations tend to be relatively quick, leading to less frequent noticeable pauses.

Mark and sweep interacts horribly with paged virtual memory systems, but it has to be said that generational isn't a whole lot better here, since there will still be a need repeatedly to access every page on which any part of the heap, no matter how rarely used, is stored.
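The promotion machinery described above can be modelled simply. A toy Python sketch (GENERATION_MAX and the reachability set are illustrative, and the real collector of course also compacts and fixes up pointers):

```python
# Toy generational model: each object carries a generation counter;
# surviving a young-generation collection increments it, and at
# GENERATION_MAX the object is promoted into old space.

GENERATION_MAX = 3

def collect_young(young, old, reachable):
    """Collect the young generation: discard unreachable objects, age the
    survivors, and promote any that hit GENERATION_MAX into old space."""
    survivors = []
    for obj in young:
        if obj['id'] in reachable:
            obj['generation'] += 1
            if obj['generation'] >= GENERATION_MAX:
                old.append(obj)          # promotion
            else:
                survivors.append(obj)
    return survivors

young = [{'id': n, 'generation': 0} for n in range(4)]
old = []
for _ in range(GENERATION_MAX):          # objects 0 and 1 stay reachable
    young = collect_young(young, old, reachable={0, 1})
print([o['id'] for o in old])            # [0, 1] - promoted; 2 and 3 collected
```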

An aside: the look-aside table

One possible means of avoiding the need to iterate over every pointer in the heap each time an object is moved is to have an indirection table, or look-aside table. This is simply an array of pointers, one pointer for every possible object in the heap. 'Pointers' within user-program objects are then simply indices into the slot in the indirection table where the actual pointer is stored. When an object is moved during GC, only the indirection table needs to be updated.

That sounds very appealing and efficient; it clearly saves a very great deal of time. Unfortunately it's inefficient and clumsy in its use of memory. The indirection table, once allocated, cannot easily be extended, and once entries in the table are exhausted, no more objects can be created even if there is still plenty of free memory left in the heap. Finally, every single object access requires an additional memory lookup. All these things together mean that the look-aside table isn't as much of a win as it seems, and is not often used.
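The indirection idea can be modelled in a few lines of Python (a toy, with a dict standing in for addressable memory):

```python
# Toy look-aside table: user objects hold handles (indices into the
# table), so relocating an object during GC means updating one table
# entry rather than scanning the whole heap for pointers.

heap = {}            # address -> object contents
table = []           # handle -> address

def new_object(address, contents):
    heap[address] = contents
    table.append(address)
    return len(table) - 1        # the handle other objects will store

def deref(handle):
    """Every access pays the extra lookup through the table."""
    return heap[table[handle]]

def move_object(handle, new_address):
    """Relocate during GC: only the table entry changes."""
    old_address = table[handle]
    heap[new_address] = heap.pop(old_address)
    table[handle] = new_address

h = new_object(100, 'a cons cell')
move_object(h, 0)                # compact it to the bottom of the heap
print(deref(h))                  # still 'a cons cell', via the same handle
```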

Reference counting

In a reference counting garbage collector, every object header contains a reference counter, initially set to one (since when the object is created, something must point to it). Whenever a new pointer is created to an object, the reference counter on the object is incremented. Whenever something that points to the object is removed from the system, the reference counter on the object is decremented. When the reference counter is decremented to zero, the object is removed from the system, and the reference counters of any objects it pointed to are in their turn decremented.

Let's walk a little over what that means. Typically, reference counting is used in systems which have uniform-sized objects. At system initialisation time, memory is essentially an array of 'empty' objects. Each of these objects is initialised with a header which marks it as being an empty object, with a reference count value of zero. A register or global variable, the 'free list pointer', points to the first of these objects; each object in turn points to the next object.

In use, when a memory object is allocated, it is popped off the front of the free list; when a memory object is deallocated, it is pushed back on the front of the free list. Because all the objects are equal sized, any object can be initialised in the space left by any other, so there's never any need to move things. And if memory allocated to the user program becomes exhausted, provided the operating system can allocate more memory it can be initialised and added to the end of the free list at any point - it does not have to be contiguous with the existing memory allocation.

So, lots and lots of win. There's never any significant pause for garbage collection. Nothing ever has to be moved, so there's no problem with fixing up pointers. Why doesn't every system do it this way?

Because, sadly, in a normal useful user program there's a need for unequal sized objects. Yes, strings can be represented as lists of characters; raster images can be represented as lists of lists of bits. But this is hopelessly inefficient. So it's much better to maintain a heap for variable sized data. Of course, the pages of equal sized objects - 'cons space' - can themselves float in the heap, so there's no ideological problem with having variable sized data in a reference counting system.

Typically, for each 'heap space object', you'll have a proxy pointer object in cons space. Other things which reference the heap space object will actually hold a pointer to its proxy, and the proxy will have the usual header fields of a cons-space object including the reference counter. The type field in its header will indicate that it is a proxy for the heap space object, and a pointer in its body will point to the actual heap space object.
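A proxy cell, as described, might look something like this in sketch form (field names are mine, for illustration):

```python
# Sketch of a cons-space proxy for a variable-sized heap object: other
# objects reference the fixed-size proxy, which carries the usual header
# fields (type, reference count) plus a pointer to the actual heap data.

class Proxy:
    TYPE_HEAP_PROXY = "heap-proxy"

    def __init__(self, heap_object):
        self.type = Proxy.TYPE_HEAP_PROXY  # header: marks this as a proxy
        self.refcount = 1                  # header: counted like any cons cell
        self.body = heap_object            # 'pointer' into heap space

string_in_heap = "a variable-sized string living in heap space"
p = Proxy(string_in_heap)      # everything else points at p, not the string
assert p.type == "heap-proxy" and p.body is string_in_heap
```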

Problems

There are still a few problems with this solution, most of which affect long-running programs. The first is, no matter how many bits you allocate to the reference counter, there is a maximum value it can store. What happens when an object has more references to it than its reference counter can store? Well, obviously, it can't be incremented further, because it would wrap around and you'd end up with a mess. But, equally, it can't be decremented - because you don't know how many times to decrement. So once an object has reached the maximum reference value, it can never be removed from the system by the normal operation of the reference counting memory manager.

More subtly, a circular data structure can never be removed from the system even if nothing outside the circle references it any longer, since each element holds a pointer to the next and so none can ever be decremented to zero. This, however, can't happen in a pure functional language with immutable data objects, since circular data structures are then impossible to create.

Finally, while the user program will not have to be paused for mark-and-sweep, occasionally the deletion of an object which serves as the root of a deep tree of objects will cause a cascade of further deletions, which may also cause a noticeable pause.

Heap space, of course, will fragment, as heap space of any variable-size-object system always does. But heap space can be compacted using a very infrequent mark-and-sweep, and, in any case, this problem isn't special to reference counting systems.

Conclusion

In summary, especially for systems with very large numbers of equal-sized objects such as is typical of programs written in LISP-like languages, reference counting garbage collectors have always struck me as having many benefits. Adding reference counting to any Java Virtual Machine would be decidedly non-trivial, however, and, in particular, using proxy objects in cons-space to point to heap space objects might (I don't yet know) break compatibility with existing compiled Java programs. Also, while a reference counting system may have fewer noticeable pauses, its overall efficiency is not necessarily better than a generational system. It's more 'worth a try' than 'a certain win'.

Saturday, 24 August 2013

The immutable pool: more on optimising memory management for functional languages

Further to yesterday's note on optimising the Java Runtime Environment's memory allocator for the code generated by functional language compilers, I've been reading up on the memory allocator in the OpenJDK Java platform implementation.

First, a note about nomenclature. To my mind the 'Java Virtual Machine' is simply a processor which processes instruction codes - as it were, something in the same category as an ARM 6 or an Intel 80486, except implemented in software. To me it's clear that memory management is not 'part of' that device, it's a low level library expected to be available as part of the runtime environment. However, the writers of the HotSpot VM documentation don't see it that way. To them, the memory allocator is part of the virtual machine, not part of the runtime environment, and as I'm playing with their ball I shall try in what follows to stick to their preferred nomenclature.

The HotSpot memory allocator operates a per-thread generational garbage collector, which depends on
 ... the second part of the weak generational hypothesis: that there are few references from old objects to young objects.
The generational garbage collector is in fact much more complex precisely because there are some references from an older object to younger objects. The collector uses two pools of memory, a 'young generation' pool and an 'old generation' (or 'tenured') pool (actually it's considerably more subtle than that, but that's enough detail for the present discussion). Churn and froth are expected in the young generation pool, so it is garbage collected regularly. An object which survives more than a short while in the young generation pool is promoted or scavenged into the old generation pool by copying it; and when this is done, it is necessary to scan through the whole old generation pool (as well as the young generation pool) to fix up any pointers which pointed to its old location in the young generation pool so that they now point to its new location in the old generation pool. That scan is inevitably costly: it must visit every single pointer, and compare its value to the value to be changed.

So far so good. But, how come there are any pointers in the old generation pool which point to newly created objects in the young generation pool? There are, because objects in Java are mutable. We can do the equivalent of RPLACA and RPLACD operations - we can destructively overwrite pointers - and in fact the Java imperative object oriented paradigm encourages us to do so.

You can write classes of immutable objects in Java: if you declared each instance variable of a class (and each instance variable of every class from which it inherits) to be final, and each of those variables held only primitives or further immutable objects, then every object of that class would be immutable. I've never seen it done. But in pure functional languages, all data items are immutable. You cannot overwrite pointers.

Of course, for many purposes holding state information is pretty vital, or at least it's hard for us, educated as we have been in imperative languages, not to see it as such. So Clojure, for example, while holding to the mantra that data is immutable, has a special reserved area of Software Transactional Memory in which mutable data may be stored, and also handles things like I/O by invoking Java classes expected to be present in the environment which do use mutable data. Nevertheless, programs compiled with the Clojure compiler can be expected to generate a very high proportion of objects which are immutable.

So the question arises, at what density of immutable objects does the idea of having an additional pool for 'old generation immutable' objects become a win? Remember that an older, immutable object cannot hold a pointer to a younger object (whether mutable or not), because the younger object did not exist when the older object was created. So immutable 'old generation' objects do not need to be scanned and 'fixed up' when a surviving 'young generation' object is scavenged into the old generation pool. Given that the scan operation is expensive, it does not seem to me that the density of immutable objects would need to be very high before this was a win.

For clarity, the algorithm for the revised 'scavenger' part of the garbage collector could be as follows:

for each object in the 'young generation' pool
    if it has survived long enough to be worthy of scavenging (no change to that part of the algorithm) then
        if mutable flag set then
            copy into the 'old generation mutable' pool (i.e. no change to what happens now)
        else
            copy into the 'old generation immutable' pool
        end if
        scan remaining objects in the 'young generation' pool and fix up pointers (since objects in the young generation pool, even if immutable, may reference the object just promoted - again, no change to what happens now)
        scan all objects in the 'old generation mutable' pool and fix up pointers
    end if
end loop

There's no need to scan anything in the 'old generation immutable' pool, because it cannot hold pointers to the promoted object. For the same reason there's no need to scan the 'old generation immutable' pool during the mark phase of a generational mark-and-sweep operation, which necessarily happens before each group of salvage operations.
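The revised scavenger can be sketched in executable form. This is a toy model of the idea, not of HotSpot: the object layout, the age threshold and the 'copy' are all stand-ins:

```python
# Sketch of the revised scavenger: promoted immutables go to a pool that is
# never scanned for fix-ups, because an immutable old object cannot point
# at a younger one.

class Obj:
    def __init__(self, mutable, refs=()):
        self.mutable = mutable
        self.refs = list(refs)
        self.age = 0

def fix_up(pool, old, new):
    for o in pool:                 # the expensive scan: every pointer visited
        o.refs = [new if r is old else r for r in o.refs]

def scavenge(young, old_mutable, old_immutable, threshold=2):
    for obj in list(young):
        if obj.age >= threshold:   # survived long enough to be promoted
            young.remove(obj)
            promoted = Obj(obj.mutable, obj.refs)   # the copy into an old pool
            (old_mutable if obj.mutable else old_immutable).append(promoted)
            fix_up(young, obj, promoted)
            fix_up(old_mutable, obj, promoted)
            # no fix_up(old_immutable, ...): nothing there can reference obj

young, old_m, old_i = [], [], []
a = Obj(mutable=False); a.age = 3      # an immutable survivor
b = Obj(mutable=True, refs=[a])        # a young object pointing at it
young.extend([a, b])
scavenge(young, old_m, old_i)
assert a not in young and len(old_i) == 1
assert b.refs[0] is old_i[0]           # young pool was fixed up
```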

So the tradeoff is one 'check flag and branch' operation for each object promoted, against scanning the whole old generation space, including potentially many immutable objects, on every promotion. I'd guess that at 10% immutable objects you'd have a performance win, and that higher immutable object densities would yield an even more important win.

The fly in the ointment is that you need a flag bit on the header of every object. The OpenJDK object header currently contains a (by implication 32 bit) 'mark word', which is used precisely for flags used by the memory allocation system, but I haven't yet found where those flags are documented or whether any are free to be used.

Finally, this optimisation - while both potentially bigger and easier to implement than the cons-space optimisation I suggested yesterday - is independent of it. Both optimisations together potentially offer a still bigger win.

Friday, 23 August 2013

Functional languages, memory management, and modern language runtimes

At the Mostly Functional workshop at the Turing Festival in Edinburgh yesterday, I attended a very interesting presentation on the performance of compilers of functional languages for the JVM by Wing Hang Li and Jeremy Singer. The whole talk was interesting, but some of the graphs were electrifying.

What Wing had done was analyse the size of the code units emitted by the different compilers (Scala, Clojure, Jython and JRuby). This showed that the code emitted by these compilers was systematically different - very different - from code emitted by the Java compiler. Typically, they generated smaller code units ('methods', in the terminology of the presentation) - this is especially true of Scala - and made much more use of stack. But of more interest to me was this:
"We examine the distribution of object sizes, weighted by their dynamic allocation frequency. The size of java.lang.Object is 16 bytes for the JVM we use. The boxplots in Figure 6 show that the object size for most non-Java JVM languages is dominated by only one or two sizes. This can be seen from the median object size in the unfiltered JRuby and Jython boxplots and the filtered Clojure and Scala boxplots. However, the median object size for Java varies between 24 to 48 bytes. By comparing the unfiltered and filtered boxplots, we see that Clojure and Scala use smaller objects more frequently than Java."
This is kind of what one would expect. Functional languages should (hopefully) encourage programmers to use smaller units of code than imperative languages; and, because functional programming paradigms make much more use of recursion than typical imperative paradigms, you'd expect to see more use of stack. But more significantly, a great deal of the memory allocation is likely to be small fixed size objects (CONS cells, and other things like e.g. ints and doubles which will fit into the memory footprint of a CONS cell), and that furthermore these small objects are likely to be allocated and deallocated much more frequently than larger objects. And this takes me right back into ideas about the design of LISP runtimes that I was interested in twenty five years ago.

Given this pattern of a rapid churn of small fixed-size objects a naive heap allocator will tend to fragment the heap, as small but still-live objects become sparsely scattered through heap space, ultimately requiring a full mark-and-sweep before larger objects can be allocated. Now I'm certain that the Java heap allocator is anything but naive, but it's unlikely to be optimised for large numbers of rapidly allocated and deallocated equal sized objects.

Some earlier LISPs divided memory into 'cons space' and 'heap space'. 'Cons space' was essentially a set of pages themselves allocated within heap space, each of which contained an array of cons cells. When a cons-space page was allocated, each of its cells would be linked together onto the free list. When a cons cell was allocated, it would be popped off the freelist and the freelist head pointer updated from its CDR; when a cons cell was deallocated, it was simply pushed back onto the freelist and the freelist head pointer updated to point to it. When cons space was exhausted, a new page was allocated from the heap. This strategy works with both mark-and-sweep and reference-counting strategies, although I'm most familiar with it in the reference-counting context.

This makes the allocation of objects of size equal to or smaller than a cons cell extremely cheap and fast, and avoids heap fragmentation. A cons cell comprises two words the width of the address bus, plus a header containing e.g. type flags, GC flag and reference count; a number of other data objects such as, e.g., Integers, Doubles and other boxed primitives, easily fit within this footprint.

Wing and Singer's enquiry, as I understand it, is whether special tuning of the JIT could improve performance of the Java Virtual Machine for non-Java languages. Brief summary, the 'JIT' (Just in time) compiler is an element of the Java Virtual Machine implementation for a particular concrete processor, which translates fragments of JVM object code into optimised object code for the concrete processor. The JIT is part of the Java Runtime Environment (JRE), not of the Java compiler, because this detail tuning of the object code happens at runtime for specific patterns in the code being executed on the specific concrete processor. Because the code fragments emitted by the different functional-language compilers are systematically different from those emitted by the Java compiler, there may be merit in this special tuning.

But, since the Java Runtime Environment comprises not just the JVM but also other components including, critically, the memory management subsystem, it occurs to me that given this very different memory usage pattern, a custom memory manager - implementing specifically a separate cons-space allocated as pages in the heap, using reference counts and a free-list - might well be an even bigger win. Furthermore, unlike the JIT tuning suggested by Wing and Singer, the memory manager tuning would be portable between different concrete processor families.

Wing and Singer's proposed change to the JIT would not prevent the JRE running any arbitrary JVM program, nor should any arbitrary JVM program run less efficiently than on a 'vanilla flavour' JRE. Neither (I hope) would my proposed change to the memory manager. Of course, the adapted JRE would be somewhat larger; you could describe this as code bloat. Of course, if all you want to run on your JVM is Java code, this adapted JRE would be of no benefit.

All this applies, of course, not only to the Java Runtime Environment. It almost certainly applies equally to the Common Language Runtime used in the .Net environment, to the runtime environment of the Erlang virtual machine (targeted by for example Joxa), and probably others.

However, it would not be trivial to retrofit this. The Clojure cons cell is a POJO ('plain old Java object'), allocated and deallocated by standard Java mechanisms. Joxa on the Erlang VM is similar, and I should be surprised if ClojureCLR on the .Net Common Language Runtime is much different. As any arbitrary object may hold pointers to cons cells, attempting to do special memory management for cons cells would require considerable rewriting of the memory management subsystems, at the runtime environment level. I say again, I do not believe the memory manager in the JRE is by any means naive. It is very highly tuned code written by very able engineers and tested widely across the world over a number of years.

Even supposing - as Wing, White and Singer's paper does suggest - that the Java Runtime Environment is not optimally tuned to the code patterns and object patterns of functional languages, it doesn't necessarily follow that changing it would improve things. But, the OpenJDK is just that - open. It would be possible to clone it and experiment. It would be possible to produce a variant JRE, either with specific tuning for specific things as I've described, or designed to allow modular selection and replacement of JIT and memory manager components at run time, and, if benchmarking proved the variant implementation more efficient at executing functional language programs while no less efficient at executing Java language programs, it might be worth contributing the changes back.

Friday, 19 July 2013

Ultra-low impact housing and public policy


Politicians are once again talking openly about how to reduce the cost of housing. I know how to do this, and have a genuinely modest proposal.

This house, as I've described earlier, is almost entirely bio-degradable. Without maintenance, it would fairly rapidly collapse into a pile of rotted timber and straw on the forest floor, marked only by the stove, the bath, the water pipes and the glass; and, as these things are inherently valuable and recyclable, I would imagine someone else will rob them out long before the house gets to that state. With reasonable maintenance, I believe the house could last - and be comfortable and habitable - in the long term. Sixty years at least, perhaps twice that; as long as a conventional modern house is designed to.

This house would not pass building warrant, and there are some good reasons for that. It has no foundations; it is (intentionally) very close to trees; it has some fire safety deficits. Notably, it would be impossible to get a fire engine to it in the event of a fire, but also there is no fire separation between the walls and the roof structure, and I have not yet fitted the fire ladder which I intend to fit from the rear window. And it seems to me that it would be very hard to draw up building regulations which this house would pass which would not allow very unsatisfactory buildings also to pass.

But those things don't matter to me: I built this house myself to my needs, and I'm comfortable with it. I've chosen the risks and assessed them; I'm happy that it is safe enough for me.

Obviously, if you allow people to build houses without building warrant and then let them to tenants, you will get slums, and grossly unfit and unhealthy housing. And that suggests a compromise.

Suppose you were to legislate that a person did not need building warrant or full planning consent for a low-impact building which they had erected themselves, and in which they lived themselves; but planning consent and building warrant would be required before that building could be sold or let? There would be a couple of problems with a scheme as simple as that. Such a house would have to be built more than a safe margin from the edge of its plot, so that if it caught fire, fire would not spread to other properties. I imagine three metres (i.e. a 6 metre gap between adjacent buildings) should be enough, but there are people better qualified than I to make that judgement. So this sort of development could not provide very high density housing. And while an earth closet has very low environmental impact, for public health reasons, in urban areas, there would have to be some regulations about the disposition of shit, and sewerage.

But if people are responsible for their own houses, they will build houses to meet their needs; and if the houses they build don't meet their needs, that is their problem. And in any case, if a low impact home does prove unsatisfactory, it is cheap enough that the owner would be able to pull it down and start again.

Campaigners for rural amenity will also claim there's a need for planning consent - they don't want to see people building houses in pretty places. But farmers can, within broad limits, erect huge ugly sheds without much in the way of planning regulation, and it seems to me that homes for people are more urgently required than sheds for tractors. So I would like to see no more onerous planning regulation - simple notification - for self build dwellings than for tractor sheds.

There is, of course, another issue: people will fear that urban margins and rural areas will become littered with the wrecks of abandoned self-builds. And that's where 'ultra-low impact' comes in. If these self builds are built mainly of bio-degradable materials - timber, straw, wool - or of materials sourced in the local environment - fieldstone, clay, sand - then the buildings, if abandoned, will not litter the landscape long. They will be transient.

So that's part one of my modest suggestion: building warrant and full planning permission is required to sell or let a low impact dwelling, but not to build one. Now onto part two.

Local government could prepare sites with sewer, drinking water, electricity and telecoms connections, and could lease these on sixty year lets to people. The people could build their own houses on the plots. When the lessee chose to move on then, if the house passed building warrant, the council would buy it at a pre-agreed but fairly nominal price covering the basic cost of construction, and let it or sell it as social housing; if the house did not pass building warrant, the council would demolish it at their cost and relet the site to another self-builder.

Obviously, commercial companies would come into this market with kits or house designs. It seems to me reasonable that there should be some sort of building warrant type-approval for kits and designs - if built correctly as specified, they should pass building warrant. You should not be able to sell a kit which definitely would not pass building warrant.

The cost of building a house is mainly down to two things: planning consent (and therefore scarcity, and windfall profits for landowners), and labour. If you take those elements out of the equation, housing becomes very cheap indeed: the basic materials out of which perfectly good houses can be constructed need not be expensive. As I've argued before, the total cost of building a house can be less than the deposit on a commercially built one. Encouraging and facilitating self build of low impact housing would solve Britain's housing stress problems at a stroke.

Wednesday, 17 July 2013

Modelling the change from rural to urban

This essay is about software, not the real world! If you're interested in my thoughts on real world rural policy issues, check the Rural Policy category on the right.

In the real world - in Northern Britain particularly, but I think this holds for many other places - there are three essential layouts of rural communities:

  • Non-nucleated settlements, where dwellings are scattered at least tens of metres apart over quite a wide area; highland crofting settlements are typically of this form.
  • Nucleated settlements, where dwellings are grouped closely around a central feature such as a village green or a pond; villages of this form are typically older villages, especially in areas of Anglian settlement. Rhonehouse is a good example locally.
  • Linear settlements, which are a special case of nucleated settlements, where dwellings line the sides of an (often broad) street. These settlements typically are medieval in origin and reflect the runrig agricultural pattern - each house had inbye land stretching back from the street. Moffat, Lochmaben and Thornhill are local examples.

Stamfordham, Northumberland: a typical nucleated settlement centred around a village green. Field boundaries to the north give clear evidence of runrig agriculture, while the south side shows the growth of closes and alleys.
In nucleated settlements generally, as the settlement grows, alleys and lanes form stretching out from the original grouping. Edinburgh's old town is an example of a linear settlement which grew in this way. Older, unplanned rural settlements do not have a regular street plan - that's a feature of quite advanced urban societies.

In creating a mechanism for settling a convincing, naturalistic virtual world, these patterns need to be borne in mind. This is complicated by a number of issues which are driven by technology (and perhaps by my lack of skill and imagination); I want to avoid as far as possible technological issues introducing visible artefacts into the virtual world.

Currently I'm working on a grid of 100 metre square cells. This isn't ideal at all, but it's simple and at the stage I'm at in developing algorithms that simplicity helps. But naturally, things people build are not always square and certainly not always aligned to a north/south grid.

Dunvegan, Skye, a non-nucleated settlement. Dwellings are set on their individual crofts, scattered across the landscape.
From the point of view of settling farmers, a 100 metre square grid - one hectare to a cell - is adequate; I'm working on the principle that my sort of roughly bronze-age to late medieval farmers can manage between four and six hectares per household. Farmers will naturally cluster together in areas with better soil fertility and shallower gradients, and that clustering 'naturally' models the non-nucleated settlement. Within any non-urban holding, the dwelling house should gravitate to the edge of the holding nearest other dwelling houses - a gravitational algorithm is precisely right to do this. This will provide the beginnings of clustering.
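One plausible reading of that gravitational placement can be sketched as follows - the grid coordinates and the 'nearest to the centroid of the neighbours' rule are my own illustrative assumptions:

```python
# Sketch of 'gravitational' dwelling placement: within a holding, the
# dwelling drifts to the cell nearest the dwellings already placed on
# neighbouring holdings.

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def place_dwelling(holding_cells, neighbour_dwellings):
    """Pick the cell of the holding closest to the neighbours' centroid."""
    cx, cy = centroid(neighbour_dwellings)
    return min(holding_cells,
               key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)

holding = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a four-cell holding
neighbours = [(3, 3), (4, 2)]                # dwellings on nearby holdings
assert place_dwelling(holding, neighbours) == (1, 1)  # edge nearest the others
```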

As you'll see in my note on Populating a game world, when a non-nucleated settlement of farmers reaches a certain size, it begins to attract craftsmen. Among the first of these is an innkeeper; to support an innkeeper requires ten other actors dwelling within walking distance (say a ten cell - one kilometre - radius). When an innkeeper settles, he reserves a set of four to six cells, close to sufficient potential customers, and ideally adjacent to a cell already designated as road.

One of these cells becomes the inn yard, where the inn and its outbuildings are sited. One adjoining the inn yard becomes 'urban open space', which can't be built on - it's rendered as a village green until the village reaches a certain size, and thereafter as a paved square. If possible the 'urban open space' cell will be placed to border both the inn yard cell and a cell designated as 'road'. Other cells from the farm are designated as 'urban'.

Similarly, when an aristocrat settles, he will reserve one cell for his castle, one for a 'market place', and four more as urban; a market place will subclass an urban open space and so will prefer to be located adjacent to an existing road. The castle will normally be on a hilltop, except where there is a river crossing, which would be a preferred site.

There's probably some algorithm I could find which would lay out wee twisty streets and arterial roads in a naturalistic fashion... If cells which are designated 'road' adjoining 'urban' cells get redesignated as 'arterial', that's probably a good first step. Arterial cells have dwelling plots lined either side of a broad street, and one lane off either side. An arterial cell alongside an 'urban open space' may have no dwellings on the side towards the 'urban open space', giving in effect a larger urban open space. Land use types 'urban open space', 'arterial' and possibly others will subclass 'urban', or share a common interface.

Kirkinner, Wigtownshire: a linear settlement.
A settlement with mainly arterial cells should give a fairly good model of a linear settlement.

As a tweak, it's possible that urban cells adjacent to water cells could become dock cells.

A number of pre-planned (that is, designed) ground plans will be created for 'urban' cells, such that they will tessellate together to give an impression of irregularity. Urban cells have up to twelve dwellings per cell. Dwellings in urban (and arterial and 'town wall' cells, see below) are erected in preplanned locations within the ground plan - generally, facing the most significant street which abuts the plot.

Although urban cells have plots for twelve dwellings, they won't automatically be occupied. Rather, a new craftsman setting up shop will occupy a free plot in an existing urban cell, and will only create a new urban cell if there are no free plots available. In selecting a plot, the available plot with most occupied neighbours will be chosen.
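That plot-selection rule is easy to sketch; I've assumed an eight-cell neighbourhood, which the text doesn't specify:

```python
# Sketch of plot selection: a new craftsman takes the free plot with the
# most occupied neighbours, and only triggers a new urban cell when no
# free plot exists.

def occupied_neighbours(plot, occupied):
    (x, y) = plot
    return sum((x + dx, y + dy) in occupied
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))

def choose_plot(free_plots, occupied):
    if not free_plots:
        return None                # caller must lay out a new urban cell
    return max(free_plots, key=lambda p: occupied_neighbours(p, occupied))

occupied = {(0, 0), (0, 1), (1, 0)}
free_plots = [(1, 1), (3, 3)]
assert choose_plot(free_plots, occupied) == (1, 1)   # three occupied neighbours
assert choose_plot([], occupied) is None
```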

Where more than a critical number - say 15 - of urban cells are clustered together, the outermost will become 'town wall' cells. An outermost arterial cell will become a 'town gate' cell. Again, preplanned ground plans for 'town wall' cells will be created, with wall models. These will be designed to fit together in ways which are not obviously square and grid-aligned.

There is a slight problem with this which is that to get to town wall status would take at least 180 households, which is quite a lot for my game economy; this implies walled towns will be relatively uncommon, which is fair enough. However if, as a slight fudge, journeymen and soldiers each have their own dwelling instead of living in their employers' dwelling, the number of households will grow faster.

Buildings in urban cells will just be genetic buildings like anywhere else, except that the fact of being in an urban cell should give an emphasis to building upward. Arterial cells will promote taller buildings even more strongly than other urban cells.


Saturday, 6 July 2013

Populating a game world

(You might want to read this essay in conjunction with my older essay, Settling a game world, which covers similar ground but which this hopefully advances on)

For an economy to work people have to be able to move between occupations to fill economic niches. In steady state, non player character (NPC) males become adult as 'vagrants', and then move through the state transitions described in this document. The pattern for females is different.

Basic occupations

The following are 'unskilled' occupations which form the base of the occupation system. Generally a male character at maturity becomes a 'Vagrant' and wanders through the world until he encounters a condition which allows him to advance up the occupation graph. If an occupation wholly fails, the character can revert to being a 'Vagrant' and start again.


| Occupation | Dwelling | Condition | New trade | Notes |
|---|---|---|---|---|
| Vagrant | None | land available and animals available | Herdsman | |
| Vagrant | None | arable land available | Farmer | See crops |
| Vagrant | None | has weapons | Outlaw | |
| Herdsman | None | insufficient food | Vagrant | |
| Farmer | Farm | insufficient food | Vagrant | |
| Outlaw | None | loses weapons | Vagrant | |
| Vagrant | None | craftsman willing to take on apprentice | Apprentice | |
| Herdsman | None | arable land available | Farmer | |
| Outlaw | None | battle hardened | OutlawLeader | |
| Apprentice | (craftsman's) | qualified | Journeyman | |
| Journeyman | None | unserviced customers available | Craftsman | See crafts |
| Craftsman | See crafts | too few customers | Journeyman | |
| Journeyman | None | arable land available | Farmer | |
| Vagrant | None | lord with vacancies available | Soldier | See military |
| OutlawLeader | None | unprotected farms available | Laird | See nobility |
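The table above is naturally data-driven, so a sketch may help. This is my own minimal illustration, not code from the game: the predicate fields and the character/world keys (`land_available`, `has_weapons` and so on) are hypothetical, and only the first few rows are shown.

```python
# A data-driven sketch of the occupation transition table.
# Each row is (current occupation, condition predicate, new trade);
# the predicates here are hypothetical stand-ins for world-state tests.

TRANSITIONS = [
    ("Vagrant",  lambda c, w: w["land_available"] and w["animals_available"], "Herdsman"),
    ("Vagrant",  lambda c, w: w["arable_land_available"],                     "Farmer"),
    ("Vagrant",  lambda c, w: c["has_weapons"],                               "Outlaw"),
    ("Herdsman", lambda c, w: c["food"] <= 0,                                 "Vagrant"),
    # ... the remaining rows of the table follow the same shape
]

def step_occupation(character, world):
    """Apply the first matching transition; return the (possibly new) occupation."""
    for occupation, condition, new_trade in TRANSITIONS:
        if character["occupation"] == occupation and condition(character, world):
            character["occupation"] = new_trade
            break
    return character["occupation"]
```

Keeping the graph as data rather than code makes it easy to add rows (for instance, the separate female occupation graph mentioned below) without touching the engine.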

Gender dimorphism

In the paragraph above I said 'a male character'. It may seem unfair to create a game world in which the sexual inequality of the real world is carried over, and for that reason it seems sensible that female children should have the same opportunities as male children. But games work on conflicts and injustices, and so it seems reasonable to me to have a completely different occupation graph for women. I haven't yet drawn that up.

Wandering

Vagrants wander in a fairly random way. While vagrants are wandering they are assumed to live off the land and require no resources. Solitary outlaws similarly wander until they find a leader, although they will avoid the areas protected by nobles. Herdsmen also wander but only over unenclosed pasture. They visit markets, if available, periodically; otherwise, they live off their herds. Journeymen wander from market to market, but are assumed to trade skills with farmers along the way.
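As a rough illustration of the wandering rule, here is a one-step random walk on a grid. This is my own sketch, not the game's movement code; the `passable` predicate is a hypothetical hook which would differ by occupation, as described above.

```python
import random

# Hypothetical sketch of wandering: one step per turn to a random
# passable neighbouring cell. A herdsman's `passable` test would admit
# only unenclosed pasture; a solitary outlaw's would exclude areas
# protected by nobles.

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))]

def wander(cell, passable):
    """Move to a random passable neighbour, or stay put if boxed in."""
    options = [n for n in neighbours(cell) if passable(n)]
    return random.choice(options) if options else cell
```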

Crafts

Crafts are occupations which require acquired skills. In the initial seeding of the game world there are probably 'pioneers', who are special vagrants who, on encountering the conditions for a particular craft to thrive, instantly become masters of that craft.

| Craft | Dwelling | Supplies | Perishable? | Customer types | Needs market? | Customers, solo (min–max) | Customers per journeyman (min–max) | Customers per apprentice (min–max) | Supplier | Suppliers | Recruits |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Smith | Forge | Metal items | no | Farmer, Soldier | No | 6–10 | 4–6 | 1–3 | Miner | 1 | Vagrant |
| Baker | Bakery | Bread | yes | All NPCs | No | 20–30 | 12–18 | 6–10 | Miller | 1 | Vagrant |
| Miller | Mill | Flour, meal | no | Baker, Innkeeper | No | 2–3 | 1–2 | 1–1 | Farmer | 6 | Vagrant |
| Weaver | Weaver's house | Cloth | no | All NPCs | Yes | 6–10 | 4–6 | 1–3 | Herdsman | 2 | Vagrant |
| Innkeeper | Inn | Food, hospitality | yes | Merchant, Soldier, Farmer, Lord | No | 10–20 | 5–10 | 2–4 | Farmer, Herdsman | 2 | Vagrant |
| Miner | Mine | Ores | no | Smith | Yes | 2–3 | 1–2 | 1–1 | Farmer | 1 | Vagrant |
| Butcher | Butchery | Meat | yes | All NPCs | No | 10–20 | 4–8 | 2–4 | Farmer, Herdsman | 2 | Vagrant |
| Merchant | Townhouse | Transport, logistics | n/a | Craftsmen, nobility | Yes | 10–20 | 4–8 | 2–4 | n/a | n/a | Vagrant |
| Banker | Bank | Financial services | yes | Merchant | Yes | 10–20 | 4–8 | 2–4 | n/a | n/a | Merchant |
| Scholar | Academy | Knowledge | n/a | Ariston, Tyrranos, General, Banker | No | 1–4 | 1–2 | 0.25–0.5 | n/a | n/a | Vagrant |
| Priest | Temple | Religion | n/a | All NPCs | No | 50–100 | | | | | Scholar |
| Chancellor | Chancellory | Administration | n/a | Ariston, Tyrranos | No | 1–1 | 0–0 | 0–0 | | | Scholar |
| Lawyer | Townhouse | Legal services | n/a | Ariston, Merchant, Banker | No | 4–6 | 2–3 | 1–2 | | | Scholar |
| Magus | Townhouse | Magic | n/a | Tyrranos, General | No | 3–4 | 1–2 | 0.25–0.5 | | | Scholar |

A craftsman starts as an apprentice to a master of his chosen craft. Most crafts recruit from vagrants; a character must be a journeyman merchant before becoming an apprentice banker, while the various intellectual crafts recruit from journeyman scholars.

It's assumed that a journeyman scholar, presented with the opportunity, would prefer to become an apprentice magus than a master scholar.

A journeyman settles and becomes a master when he finds a location with at least the solo minimum number of customers of the appropriate type who are not serviced by another master craftsman of the same craft; he also (obviously) needs to find enough free land to set up his dwelling. The radius within which his serviced customers must live may be a fixed 10 km, or may vary by craft. If there are unserviced customers within his service radius, the master craftsman may take on apprentices and journeymen to service the additional customers up to a fixed limit – perhaps a maximum of four of each, perhaps variable by craft. If the number of customers falls, the master craftsman will first dismiss journeymen, and only in desperate circumstances dismiss apprentices. Every apprentice becomes a journeyman after three years' service.
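The settling test lends itself to a short sketch. This is my own illustration of the rule, assuming a flat 10 km radius and straight-line distance; the data shapes (`customer_types`, `serviced`, `pos`) are hypothetical.

```python
import math

# Hypothetical sketch of the settling test: a journeyman becomes a
# master when at least `solo_min` unserviced customers of the craft's
# customer types live within the service radius (fixed here at 10 km).

SERVICE_RADIUS_KM = 10.0  # assumed fixed; the essay allows it to vary by craft

def distance_km(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def can_settle(journeyman_pos, craft, npcs, solo_min):
    """True if enough unserviced customers of the right type are in range."""
    in_range = [n for n in npcs
                if n["occupation"] in craft["customer_types"]
                and not n.get("serviced")
                and distance_km(journeyman_pos, n["pos"]) <= SERVICE_RADIUS_KM]
    return len(in_range) >= solo_min
```

The same count of unserviced customers in range, compared against the per-journeyman and per-apprentice figures in the crafts table, would drive hiring and dismissal.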

The list of crafts given here is illustrative, not necessarily exhaustive.

Aristocracy

As in the real world, aristocracy is essentially a protection racket, and all nobles are originally outlaw leaders who found an area with rich pickings and settled down.

| Rank | Follower rank | Client type | Clients protected (min–max) | Trade in market (min–max) | Followers per client (min–max) |
|---|---|---|---|---|---|
| Bonnet Laird | Private | Farmer | 6–20 | 0–100 | 0.25–0.5 |
| Ariston | Captain | Bonnet Laird | 10–30 | 25–1000 | 0.5–1 |
| Tyrranos | General | Ariston | 10–unlimited | 250–unlimited | 0.1–0.5 |

Every noble establishes a market and, if he employs a chancellor, taxes trade in it. Crafts which 'need a market' can only be established in the vicinity of a market, irrespective of whether there are sufficient customers elsewhere. All non-perishable goods are traded through the markets, and merchants will transfer surpluses between markets if they can make a profit from it.
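That surplus-transfer rule is classic arbitrage, and can be sketched as follows. This is my own illustration, assuming a flat per-unit transport cost and a simple market data shape; none of the names here come from the game.

```python
# Hypothetical arbitrage sketch: a merchant moves surplus stock from the
# market where a good is cheapest to the market where it is dearest, but
# only if the price margin beats an assumed flat transport cost.

TRANSPORT_COST = 2  # assumed cost per unit moved between markets

def profitable_transfer(markets, good):
    """Return (source, destination) market names for a profitable move, else None."""
    sellers = [(m["price"][good], name) for name, m in markets.items()
               if good in m["price"] and m["surplus"].get(good, 0) > 0]
    buyers = [(m["price"][good], name) for name, m in markets.items()
              if good in m["price"]]
    if not sellers or not buyers:
        return None
    buy_price, source = min(sellers)
    sell_price, dest = max(buyers)
    if dest != source and sell_price - buy_price > TRANSPORT_COST:
        return (source, dest)
    return None
```

Repeatedly applying transfers like this is what would, over time, even out prices between markets.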

My world has essentially three ranks of nobility. The title of the lowest rank will probably change to something vaguely italianate. An aristocrat advances to the next rank when either the requisite number of clients become available in the locality to support the next rank, or the trade in his market becomes sufficient to support the next rank.

Obviously, when a province has eleven unprotected bonnet lairds, under the rules given above any of them may become the ariston; essentially it will be whichever of them moves first after the condition becomes true. If the number of available clients drops below the minimum and the market trade also drops below the minimum, the noble sinks to a lower rank – in the case of the bonnet laird, to outlaw leader.
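The either/or promotion test is easy to state in code. This sketch is mine, with the thresholds copied from the nobility table above; the function and structure are illustrative only (a bonnet laird is created from an outlaw leader rather than promoted into, so only the upper two ranks appear).

```python
# Hypothetical promotion check: a noble advances when EITHER enough
# clients of the required type are available locally OR trade in his
# market reaches the threshold for the next rank.

RANK_REQUIREMENTS = {
    # next rank: minima drawn from the nobility table
    "Ariston":  {"min_clients": 10, "min_trade": 25},
    "Tyrranos": {"min_clients": 10, "min_trade": 250},
}

def can_advance(next_rank, available_clients, market_trade):
    """Either threshold from the table suffices for promotion."""
    req = RANK_REQUIREMENTS[next_rank]
    return (available_clients >= req["min_clients"]
            or market_trade >= req["min_trade"])
```

Demotion is the mirror image: the noble sinks only when both figures fall below their minima.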

Military

The aristocracy is supported by the military. An outlaw becomes a soldier when his leader becomes a noble. Otherwise, vagrants are recruited as soldiers by bonnet lairds or sergeants who have vacancies. Captains are recruited similarly by aristons or generals, and generals are recruited by tyrranos. If the conditions for employment no longer exist, a soldier is allowed a period of unemployment while he lives off savings and finds another employer, but if no employer is found he will eventually become an outlaw (or, if an officer, an outlaw leader). A private is employed by his sergeant or bonnet laird, a sergeant by his captain, a captain by his ariston or general, a general by his tyrranos.

| Rank | Follower rank | Followers (min–max) | Condition | New rank |
|---|---|---|---|---|
| Private | None | 0–0 | Battle hardened, unled privates | Sergeant |
| Sergeant | Private | 5–15 | More battle hardened, unled sergeants | Captain |
| Captain | Sergeant | 5–15 | More battle hardened, unled captains | General |
| General | Captain | 5–unlimited | | |


Soldiers have no loyalty to their employer's employer.
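The unemployment rule described above can be sketched like this. The grace period length is my own assumption (the essay doesn't give one), and the data shape is illustrative.

```python
# Hypothetical sketch of the unemployment rule: an out-of-work soldier
# lives off savings for a grace period; if no employer is found in time
# he becomes an Outlaw, or an OutlawLeader if he held officer rank.

GRACE_TURNS = 30  # assumed length of the grace period

def update_unemployed(soldier):
    """Tick one turn of unemployment; return the soldier's occupation."""
    soldier.setdefault("occupation", "Soldier")
    soldier["unemployed_turns"] = soldier.get("unemployed_turns", 0) + 1
    if soldier["unemployed_turns"] > GRACE_TURNS:
        officer = soldier["rank"] in ("Sergeant", "Captain", "General")
        soldier["occupation"] = "OutlawLeader" if officer else "Outlaw"
    return soldier["occupation"]
```

Finding a new employer before the grace period expires would simply reset `unemployed_turns` to zero.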

Creative Commons Licence
The fool on the hill by Simon Brooke is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License