Living Computation (aka Indefinitely Scalable Computing) Primer

(incomplete, unedited, and mostly incoherent)

Background

Over the past decade, David H. Ackley and his students have been working on an idea: use artificial life as a computational medium. This has a few repercussions:

  1. The design of the "physics" of this medium allows it to be scaled indefinitely: there is no global synchronization, and there are no global pointers or identifiers. Thus we can turn this computer on before we're finished building it, and the program need never stop even if large amounts of the hardware fail.
  2. By designing for a noisy, spread-out world, the programs you create end up resembling living things more than conventional computer programs. This poses a huge engineering challenge, but the payoff is worth it, and we will learn new things about computation along the way.

By being indefinitely scalable we lose three things essential for "normal" programming: global constant-time addressing (like pointers), global synchronization, and guaranteed correctness. The first two are fairly obvious - signals can only travel at the speed of light, so as other parts of the computer get arbitrarily far away, latency increases until the computer can no longer function. Guaranteed correctness, however, is a bit weirder. Every time you tell your computer to add 2+2, you will get 4. This is because computers absolutely rely on correctness - a great deal of a processor's engineering and energy budget goes toward guaranteeing correct results. Now consider the same computation running on a million computers, or a billion. At some point, one will make an uncorrectable error and say 2+2=5, and all your assumptions are thrown out the window. This is why Robust-First computing throws correctness out the window from the start.
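
To put some rough numbers on that, here is a back-of-the-envelope calculation in Python; the per-operation failure rate is an illustrative assumption, not a measured figure.

p_error = 1e-18          # chance a single operation silently goes wrong (assumed)
machines = 1e9           # a billion computers
ops_per_second = 1e9     # each doing a billion operations per second

expected_errors_per_second = p_error * machines * ops_per_second
print(expected_errors_per_second)    # 1.0 -- roughly one silent "2+2=5" every second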

Now with just this, there are so many directions you could run. Is this like a bunch of servers networked together, where I write my programs as usual but duplicate them and pass data around in some complex way? Or is it like trying to build a computer in the Game of Life? While both of these are possible interpretations (with the caveat that your network would need to be modified so computers can only talk to physically adjacent ones, and the Game of Life would need to be asynchronous to remove the need for a global clock), neither of these is where we want to go. The big-server style of ISC (Indefinitely Scalable Computing) is already practiced somewhat out in the world, albeit not in a way which meets any of the criteria of the previous paragraph. And people have done computation of all sorts in the Game of Life, but they rely on correctness and global synchronization. Miss one update or flip one bit and the entire creation falls apart. GoL is just too difficult to scrape computation out of; there's no room for robustness.

So how should we compute if we want to be indefinitely scalable and robust-first? Introducing the Movable Feast Machine. The MFM is made up of "tiles", which could be implemented on standard computers of any kind networked into a grid, where information can be sent between adjacent tiles. Each tile holds a grid of "atoms", each of which stores its own state. An update function takes an atom and its neighbors within some radius (3 or 4 is typical) and returns new states for the atom and its neighbors, which are written back. This update function is not synchronous: each tile runs its update function at a random location as fast as it can, and the atoms along the edge of a tile are synchronized between tiles. For the programmer, the exact implementation details shouldn't matter. From a software perspective, the system provides uniformly random updates on a huge grid of atoms, and all the programmer needs to worry about is writing that local update function.
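
To make that concrete, here is a rough single-tile sketch in Python. It ignores inter-tile edge synchronization entirely, and the names and the exact update signature are mine, not the MFM's API.

import random

W, H, R = 100, 100, 4    # tile size and event-window radius (illustrative)

def neighborhood(x, y):
  """Sites within Manhattan distance R of (x, y), wrapping at the tile edges."""
  return [((x + dx) % W, (y + dy) % H)
          for dx in range(-R, R + 1)
          for dy in range(-R, R + 1)
          if 0 < abs(dx) + abs(dy) <= R]

def run(grid, update, events):
  """Fire `events` updates, each centered on a uniformly random site."""
  for _ in range(events):
    x, y = random.randrange(W), random.randrange(H)
    update(grid, (x, y), neighborhood(x, y))    # the update mutates grid in place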

The update function has access to the entire neighborhood, so it can swap atoms around and change anything it wants within that tiny universe. 80 or so bits is a typical amount of state for one atom, making the design space absolutely huge compared to a normal cellular automaton. Updates must also terminate - an update that fails to halt will stall its tile, so precautions need to be taken, either by using a language that is less than Turing-complete or by putting a timer on each run of the function.
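
As a sketch of the timer option, one could give every event a step budget and abandon the update when it runs out; the budget size and the give-up policy here are my own choices.

class StepBudget:
  """A crude watchdog: updates call tick() inside their loops and get cut off."""
  def __init__(self, limit=10_000):    # the limit is an illustrative number
    self.remaining = limit

  def tick(self):
    self.remaining -= 1
    if self.remaining < 0:
      raise TimeoutError("update exceeded its step budget")

def run_event(update, cell, neighbors):
  try:
    update(cell, neighbors, StepBudget())    # the update is expected to call tick()
  except TimeoutError:
    pass    # give up on this event, leave the site as it was, and move on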

The final design choice we need to settle before entering the world of robust-first computing is that some portion of an atom's state is called its "type" while the rest is its "data". The update function is broken up into one smaller function for each type, so there is an update function for type 0, one for type 1, and so on. This is important because we will be defining a few special atom types to serve as the basis of our computer.
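
A sketch of what that split can look like: each atom carries a type field and a data field (packed into the ~80 bits in a real implementation), and the top-level update just dispatches on the type. The registry pattern and the names here are mine, not the MFM's.

from dataclasses import dataclass

@dataclass
class Atom:
  type: int = 0    # which kind of atom this is; 0 is the default type
  data: int = 0    # everything else (the remaining bits of the ~80)

HANDLERS = {}      # type code -> the update function for that type

def handles(type_code):
  def register(fn):
    HANDLERS[type_code] = fn
    return fn
  return register

def update(cell, neighbors):
  handler = HANDLERS.get(cell.type)
  if handler is None:              # unknown type, or corrupted state
    cell.type, cell.data = 0, 0    # reset the cell to the default type
  else:
    handler(cell, neighbors)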

Hello infinite world

Let's go ahead and work through a small example. We'll have two types: BLANK and RES. BLANK is just the default type, like the vacuum of space. RES will just float around like an atom of hydrogen. All the code is rough pseudocode, but later we'll look at a functioning (and fast, and extensible) C implementation.

def update(cell, neighbors):
  case BLANK:
    pass
  case RES:
    swap(cell, random(neighbors))
If we run this on some fixed-size grid, say 100 by 100, with mostly BLANK cells and a few RES, then as updates run, occasionally one will hit a RES and it will hop around. Speed things up and we'll see all the RES bobbing around like a gas in Brownian motion. Let's add another type, WALL. I know, none of this seems very computery; we'll get to that, I promise. We also add a condition that if the type is unrecognized or an error occurs, then we zero out the cell and its data.

def update(cell, neighbors):
  case BLANK:
    pass
  case RES:
    (chance 1/6) swap(cell, random(neighbors, radius = 1, shape = square))
  case WALL:
    pass
  else or error:
    cell.type = BLANK
    memset(cell.data, 0)

Now if we draw a wall with some res in it, and make the event rate high enough that we get an average of 6 events per site per second (AER = 6), we can see the res bouncing around in the box.

Res bouncing around inside a wall
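
A sketch of setting that scene up, assuming for simplicity that the grid is just a dict mapping (x, y) to a type code (it could be driven by the run/neighborhood sketch from earlier); the box size, the res count, and the type codes are all illustrative.

import random

BLANK, RES, WALL = 0, 1, 2
W, H = 100, 100

grid = {(x, y): BLANK for x in range(W) for y in range(H)}

for i in range(30, 70):            # a hollow 40 x 40 box of WALL
  for j in (30, 69):
    grid[(i, j)] = WALL
    grid[(j, i)] = WALL

for _ in range(50):                # sprinkle some res inside the box
  grid[(random.randrange(31, 69), random.randrange(31, 69))] = RES

# AER = 6 means an average of 6 events per site per second, so one second
# of simulated time is 6 * W * H randomly placed events.
events_per_simulated_second = 6 * W * H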

Since each of these bits of code might have an error, or maybe a bit got flipped by a cosmic ray since we last looked at it, we need to handle this by resetting the cell back to BLANK. This is the "hide and heal" paradigm coming into play: errors happen and we should shove them under the rug and hope we were robust enough.

In fact, being able to deal with unexpected errors is at the heart of robustness. We can force our designs to be robust by introducing noise. One way is to simulate those cosmic rays on our computer by occasionally flipping random bits. Another is to introduce software elements which scramble things around. A very elegant way of doing this is with an atom that Ackley calls the "dynamic regulator" or DREG. A DREG, left to its own devices, will fill space sparsely with RES, occasionally reproduce, and convert bits of anything else it runs into into RES.


  ...
  case DREG:
    neigh = random(neighbors, radius = 3, shape = diamond)
    if neigh is BLANK:
      (chance 1/10) neigh = RES         // make new res
      (chance 9/10) swap(neigh, cell)   // move self around in empty space
    else if neigh is RES:
      (chance 1/500) neigh = copy(cell) // reproduce, consume a res
      (chance 499/500) neigh = BLANK    // destroy a res
    else if neigh is DREG:
      (chance 1/10) neigh = BLANK       // destroy a dreg
    else:
      (chance 1/2) neigh = RES          // replace anything else with res
  ...

If you leave a single DREG in an enclosed space, it will eventually reach an equilibrium with about 0.1% DREG and 10% RES. Here this is shown at an AER of 600. (RES is dark green and DREG is red)

a DREG going to equilibrium by filling space with dreg and res
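
One way to check that claim in your own simulation (a sketch; the helper is mine) is to track the fraction of sites holding each type as the run progresses.

from collections import Counter

def densities(grid):
  """Fraction of sites holding each type, e.g. roughly {DREG: 0.001, RES: 0.1, ...}."""
  counts = Counter(grid.values())
  return {t: n / len(grid) for t, n in counts.items()}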

The DREG's purpose can be seen here, tearing down a non-equilibrium structure and converting it to RES.

The word entropy being destroyed by dreg

Under the Dreg Paradigm, other atoms must follow specific rules:

  1. Conservation of mass: to create an atom (as opposed to moving or changing one), a res must be simultaneously destroyed (see the sketch after this list).
  2. Conservation of mass Part 2: an atom cannot be destroyed, only converted to res. (this one may be broken if you like)
  3. Indestructibility of Dreg: No atom besides dreg can do anything with a dreg except move it.
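
Here's a minimal sketch of building under rule 1: scan the event window for a res and overwrite it with the new atom, so nothing is created from nothing. The helper name and the scan order are my assumptions.

RES = 1    # type code from the earlier sketch

def build_from_res(grid, neighbors, new_type):
  """Convert one nearby res into new_type; return True if we found one to spend."""
  for site in neighbors:
    if grid.get(site) == RES:
      grid[site] = new_type    # the res is consumed, so total mass is conserved
      return True
  return False                 # no raw material nearby; try again on a later event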

Now I will show two structures which follow this paradigm: box and robust wall.

Box tries to draw a box and robust wall tries to maintain whatever shape it was drawn in.

Here a single atom of box is placed, then the rest grows by capturing res and using it to build. Dreg damages the box several times, but it easily recovers.

A box building itself from res

When part of the box is erased, the rest recovers.

A box being damaged and repairing itself

Box is cool, but limited. All the information needed to rebuild it is in a single atom, so it acts like a polyp or something, which grows back from a single cell as long as resources are available. But it can't be used to make anything except box, and making new shapes requires reprogramming. So robust wall was created to provide a way to make arbitrary connected shapes that self-repair using res. Each atom of robust wall checks its eight adjacent sites and records whether each of them holds another robust wall. Then, if one of those robust wall atoms is ever anything else, it tries to find a res to use to rebuild it. This way, the wall maintains its shape as long as only a few atoms are destroyed at a time.
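
A sketch of that rule, assuming each robust wall atom's data holds an 8-bit mask recording which of its eight adjacent sites held wall the last time it looked. The mask encoding and the repair policy are my guesses at the details, not the original implementation.

BLANK, RES, ROBUST_WALL = 0, 1, 5    # illustrative type codes

ADJACENT = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

def update_robust_wall(grid, masks, site, neighbors):
  """site: this atom's (x, y); masks: per-site 8-bit memory; neighbors: the event window."""
  x, y = site
  for bit, (dx, dy) in enumerate(ADJACENT):
    adj = (x + dx, y + dy)
    if grid.get(adj) == ROBUST_WALL:
      masks[site] = masks.get(site, 0) | (1 << bit)    # remember: that site held wall
    elif masks.get(site, 0) & (1 << bit):              # it held wall and now it doesn't
      for other in neighbors:                          # spend a nearby res to rebuild it
        if grid.get(other) == RES:
          grid[other] = BLANK
          grid[adj] = ROBUST_WALL
          break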

Here a square of wall (left) and robust wall (right) are shown in a dynamically regulated environment. The wall is quickly torn apart, but the robust wall persists and even bounces back after losing most of its mass.

A wall and a robust wall in a dynamically regulated environment

Here are some of the papers:

Building a survivable protocell for a corrosive digital environment (2019)

Artificial life programming in the robust-first attractor (2015)

Indefinite Scalability for Living Computation (2016)

Bespoke physics for living technology (2013)