Kohath wrote: You raise good points. Thanks for the reply, findinglisp. It seems that there are (at least) two types/categories of persistence:
One is like the internet, or the automobile example, which seem reminiscent of cellular bodies, ant colonies (nature seems to have a lot of these), and distributed human organisations (terrorist cell groups?). All the parts seem to have very similar behaviour, and you can get emergent behaviour. Let's call it the distributed group.
The other 'type' would be well-designed systems with special (almost unique) interfaces that allow them to replace a given component with another, like a jet being refueled (and repaired) in the air. Let's call this the designed group. That said, nature and economics seem to push towards the first group, made of many cheap, replaceable components. Actually, ant colonies are an example of this too: when the ants need a new queen they must grow one, and as far as I know they only have one queen at a time; if they have two, one will take some workers and start a new colony.
Thinking about Lisp: as a whole, it seems big and firmly on the way to the second, 'designed', category, with specially designed custom parts. However, if you look inside, you see a dynamic world of functions, classes, bindings, and methods, all able to replace and be replaced at any time. To me this seems more organic, and at least partially in the 'distributed' category. Perhaps this is why a Lisp machine would appeal to me so much: the monolithic core is part of the platform, not the operation, so all you see is the cool dynamic internals.
Then again, maybe it's not about "two types of persistence", but more of a spectrum, because the second category starts to look more like the distributed group the more bits there are (I suppose that the parts lose their individuality in the crowd).
It actually isn't about persistence so much as the time at which a binding is made to a resource. I think Alan Kay would describe this as late binding vs. early binding. If things are late-bound, they are easier to upgrade on the fly, because the question of which "object" (it may not be an object in the OOP/CLOS sense; it can be a function, or an "object" in the Lisp type sense) will service my request is determined right before you actually issue the request. This means you have an opportunity to alter the binding in real time as you upgrade. For instance, the function that gets invoked is the current value of the FOO symbol, not the function it referenced 100 milliseconds ago.
If things are statically bound, it's much more difficult. You do very few dynamic lookups, and as a result the system is much more tightly coupled. This provides far fewer opportunities to redirect requests to a new object or server.
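The contrast is easy to sketch. Here's a minimal illustration in Python (all names here are made up for the example; in Lisp the same indirection happens through the symbol's function binding): calling through a name resolves it at call time, while capturing the function object first freezes the binding.

```python
def foo():
    return "v1"

late = lambda: foo()   # late binding: the name 'foo' is looked up on every call
early = foo            # early binding: the function object itself is captured

def foo():             # "upgrade" foo in the running program
    return "v2"

print(late())   # v2 -- the late-bound caller sees the upgrade
print(early())  # v1 -- the early-bound caller still runs the old code
```

The early-bound caller is stuck with whatever `foo` meant at capture time, which is exactly why statically bound systems need a restart to pick up new code.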
For instance, imagine the Internet didn't have DNS and everything worked off hard-coded IP addresses. You'd see a lot more "outages", because a service would always be bound to a static address. To a degree this still happens, because DNS responses are cached with large timeout values (TTLs of hours or days). All major sites (Google, Facebook, etc.) therefore also use load balancers to redirect traffic at a fine granularity, rapidly detecting server outages and routing traffic around the failures. A load balancer is simply a proxy object designed to turn a pseudo-static binding (DNS is pseudo-static compared with the millisecond-by-millisecond traffic management a load balancer performs) into a late binding. All this helps you ride out the failure or restart of an individual server.
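A toy sketch of that proxy idea (names and addresses are invented for illustration; a real balancer does health checks, not a dictionary lookup): the hard-coded client bakes in an address, while the proxied client re-resolves the service name on every request, so a failover is picked up immediately.

```python
# Stands in for DNS / the load balancer's backend table.
registry = {"api": "10.0.0.1"}

def hardcoded_client():
    addr = "10.0.0.1"        # static binding: baked in at "compile time"
    return addr

def proxied_client():
    return registry["api"]   # late binding: resolved per request

registry["api"] = "10.0.0.2" # backend fails over to a new server

print(hardcoded_client())    # 10.0.0.1 -- still pointing at the dead server
print(proxied_client())      # 10.0.0.2 -- traffic follows the failover
```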
So, how this relates to programming languages: with C you're effectively statically bound all the time, and thus it's very difficult to upgrade a running process on the fly. Yes, you can do it with loadable libraries by unloading and then reloading them, but it's a lot of work and depends on operating system support. Most of the time you take the easy way out and simply restart the process with the new code. In contrast, late-bound languages like Lisp and Smalltalk make this relatively easy, at a very fine granularity. This is why it's customary to start up a long-running Lisp or Smalltalk image and then update it on the fly, saving a core file when you're "done" so that you can recover the dynamic running state at a later time.
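Here's the "update a running image" idea in miniature (again a Python stand-in with invented names; in Lisp you'd just re-evaluate a DEFUN at the REPL while the image runs): a loop that dispatches through a name picks up a redefinition between iterations, with no restart.

```python
def handle(n):                  # version 1 of the handler
    return n * 2

def run(steps):
    out = []
    for n in range(steps):
        out.append(handle(n))   # the name is looked up on every call
        if n == 1:              # "hot upgrade" while the loop is running
            globals()["handle"] = lambda m: m * 10
    return out

history = run(4)
print(history)  # [0, 2, 20, 30] -- early items ran v1, later items v2
```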
Systems that are designed for high availability (e.g. telephone switches) often use redundant hardware to help with upgrades. For instance, it's a requirement that a telephone switch can be upgraded while calls continue to be processed. Most current designs do this using two call-processing hardware modules (which also provide redundancy in the case of hardware failure). They simply boot the new code on the backup module, and then simulate a "failover" to the new code. The primary is then upgraded while the original backup module handles call processing. You can then either switch back to the primary, or swap the roles until the next upgrade.

Even with redundant hardware, switchover between the two modules is often tricky. It's hard to capture all the transient state, so typically there is some small hiccup that affects some portion of the system. In the telephone example, all calls established at the time of switchover would be preserved, but any call that had just started to be set up and had not yet been established would be failed; we simply expect the user to redial in the unlikely case that they happen to place a call at the exact moment we're performing an upgrade switchover. The main thing is that the possibly thousands of existing calls through the switch remain intact, and those users never notice the disruption.
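The upgrade dance above can be modelled in a few lines (a toy, with invented names; real switches are vastly more involved): boot new code on the standby, fail over to it, then upgrade the old active. Stable (established-call) state transfers; transient (call-setup) state is lost, which is exactly the small hiccup described.

```python
class Module:
    def __init__(self, version):
        self.version = version
        self.established = set()  # stable state: calls in progress
        self.in_setup = set()     # transient state: calls being set up

def failover(active, standby):
    # Established calls are mirrored to the new active module;
    # transient setup state cannot be captured and is dropped.
    standby.established = set(active.established)
    active.established.clear()
    active.in_setup.clear()
    return standby, active        # the roles swap

a = Module("v1"); b = Module("v1")
a.established = {"call-1", "call-2"}
a.in_setup = {"call-3"}           # a caller has just dialed

b.version = "v2"                  # boot the new code on the standby
a, b = failover(a, b)             # simulated failover to the new code
b.version = "v2"                  # upgrade the old active in the background

print(a.version, sorted(a.established))  # v2 ['call-1', 'call-2']
```

`call-3` never makes it across: that's the caller we expect to redial, while the established calls ride through the upgrade untouched.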