Application Patterns for the Outernet

James Uther
2016-03-29

I've been meandering through the Long Earth series by Terry Pratchett (may Death be as kind to him as he was to Death) and Stephen Baxter (not met Death yet). It's a classic alternate universe setup, where one (contemporary) day the multiple worlds theory becomes reality and people find they can 'step' between alternate universes. Earth is no longer just Earth, but 'Datum' Earth, and there are (arbitrarily) East and West Earths 1..∞ forming a probability tree of everything that could have happened. Addresses suddenly get another dimension, so there's Datum London, and London West 65536 (which may in fact be glaciated, or something).

There are further plot-enhancing nuances that I won't go into here, but the main limitations of stepping are that only a sentient being can step, taking with them whatever they can carry; that each step moves you exactly one Earth East or West and takes a small but non-zero time; and that no signal passes between Earths, so information travels only when somebody physically carries it across.

That last limitation is my bone to gnaw on today. I want to have a look at how some applications might be implemented on the Outernet. I'm kind of imagining a client arriving at LShift saying 'I've discovered how to step between alternate universes. Can you design an internet thing for me?' A normal day, then.

So what sort of network is (might be) the Outernet? Well, as each universe can support an internet, it's certainly some set of occasionally connected networks forming a broader internet. It would probably grow something like this:

Pioneers head off, and might take the equivalent of a home network with them, but of course there's no actual data connection between Earths, so they rely on passing travellers to hand on the latest viral videos and carry any mail (on USB sticks or whatever). So we have a sort of random-walk store-and-forward system. Routing would definitely be via the Galapagos protocol, so any sort of reliability would be unlikely.
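
To make that dock-side exchange concrete, here's a toy sketch of the store-and-forward swap, assuming every message bundle has a unique id. The class and field names are invented purely for illustration, not anything from the books:

```python
# A toy sketch of the dock exchange: when a travelling carrier docks, each
# side hands over whatever bundles the other hasn't seen yet (epidemic,
# store-and-forward style). Names are illustrative only.

class BundleStore:
    def __init__(self, name):
        self.name = name
        self.bundles = {}          # bundle_id -> payload

    def missing_from(self, other_ids):
        """Bundle ids we hold that the other side doesn't."""
        return set(self.bundles) - set(other_ids)

    def receive(self, bundles):
        self.bundles.update(bundles)


def dock_exchange(carrier: BundleStore, local_net: BundleStore):
    """Swap summaries, then swap the bundles each side is missing."""
    to_local = carrier.missing_from(local_net.bundles)
    to_carrier = local_net.missing_from(carrier.bundles)
    local_net.receive({i: carrier.bundles[i] for i in to_local})
    carrier.receive({i: local_net.bundles[i] for i in to_carrier})


# Example: a traveller carries west-bound gossip to a pioneer homestead.
carrier = BundleStore("traveller-USB-stick")
carrier.bundles = {"cat-video-001": b"...", "mail-to-west-12": b"..."}
homestead = BundleStore("West 12 home network")
homestead.bundles = {"mail-to-datum-7": b"..."}
dock_exchange(carrier, homestead)
assert "cat-video-001" in homestead.bundles
assert "mail-to-datum-7" in carrier.bundles
```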

As an Earth develops it might get more of a local network, and we can assume it's going to be some sort of IP thing. But routing between Earths again falls to old technology. We might see the revival of FidoNet or UUCP mail.
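
For flavour, an inter-Earth mail route might read like an old UUCP bang path with the Earth coordinate folded into each hop. The notation and host names below are entirely made up:

```python
# Illustrative only: an inter-Earth mail route written as a UUCP-style bang
# path, with each hop tagged by its Earth coordinate. Hosts are invented.
hops = [
    ("datum", "london-gw"),
    ("west-65536", "glacier-relay"),
    ("west-1400013", "valhalla-gw"),
]
bang_path = "!".join(f"{earth}.{host}" for earth, host in hops) + "!alice"
print(bang_path)
# datum.london-gw!west-65536.glacier-relay!west-1400013.valhalla-gw!alice
```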

Applications beyond Usenet tended to rely on a session with reasonable latency, so you would need to design them from the start around consistency being 'extremely eventual'. For example, your distributed data store is not going to be consistent to any reasonable extent, so what useful things can we do with a random subset of the data?
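
One way to live with 'extremely eventual' is to stick to state that merges cleanly in any order, however late it arrives. Here's a minimal sketch using a grow-only set (a simple CRDT); the choice of structure and the example entries are mine, purely for illustration:

```python
# A toy grow-only set (G-Set): replicas on different Earths accept writes
# independently and can merge in any order, any number of times, and still
# converge. Handy when 'consistent' means 'eventually, whenever the next
# carrier happens to pass through'.

class GSet:
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, item):
        self.items.add(item)

    def merge(self, other: "GSet"):
        """Merging is set union: commutative, associative, idempotent."""
        self.items |= other.items


datum = GSet({"survey: Datum London"})
valhalla = GSet({"survey: West 1,400,013"})

# Each side keeps writing while disconnected...
datum.add("survey: West 5")
valhalla.add("survey: West 1,399,998")

# ...and whenever a carrier finally shows up, merging in either order
# (or repeatedly) gives the same result.
datum.merge(valhalla)
valhalla.merge(datum)
assert datum.items == valhalla.items
```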

Latency is obviously the sticking point here, so how far can we get that latency down? (We'll assume bandwidth is going to be OK, because you can step a truckload of SSDs between Earths.) We can step a single step in 0.02 seconds (transit time t). We will assume for the sake of argument that an SSD carrier dock/sync/write/read cycle with specialised hardware is about 10 seconds (call that time c). The carrier must be sentient so it can step, but that's not important to this calculation. So the simplest mechanism would be a carrier that contains a bunch of SSD drives, and automated docking points at the local internets. The carrier steps along the Earths, exchanging data when it finds a dock. So from Valhalla (West 1,400,013) to Datum, it would be about \(t \times 1.4 \times 10^6 / 3600 \approx 7.8\) hours with no stops. That's actually better than I thought (and better than the feel you get from the books). If we assume that the carrier stops at sufficiently populated Earths (say 10,000 of them, a little under 1%) then it becomes \(7.8 + c \times 10{,}000 / 3600 \approx 35.6\) hours. This would seem to rule out any session-based protocols between arbitrary Earths.
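
Spelling that arithmetic out (the constants are just the guesses above):

```python
# Back-of-envelope one-way delivery time from Valhalla to Datum, using the
# guesses above: t = 0.02 s per step, c = 10 s per dock/sync/write/read cycle.
t = 0.02            # seconds per step
c = 10.0            # seconds per dock/sync/write/read cycle
steps = 1_400_013   # West 1,400,013 down to Datum
stops = 10_000      # docks at 'sufficiently populated' Earths

transit_hours = t * steps / 3600
total_hours = transit_hours + c * stops / 3600
print(f"non-stop: {transit_hours:.1f} h, with stops: {total_hours:.1f} h")
# non-stop: 7.8 h, with stops: 35.6 h
```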

However, locally we could get this down. So let's say that Valhalla spreads its footprint over a few adjacent Earths, and really exists between about West 1,400,000 and West 1,400,025. Across that distance our round-trip time (the Long Ping) would be \(2 \times (t \times 25) + c = 1 + 10 = 11\) seconds. We could even assume that less data needs to be synced in a cycle, so c could reduce to pretty much whatever the mechanical docking time of the carrier is (0.5 seconds?). That'd give a round trip of 1.5 seconds (load in \(E_n\), step, unload in \(E_{n+25}\) and route to the local host for the ping, load the response, return to \(E_n\), unload and route to the originator). Which is just about within the realm of current session-based protocols, although papers from the early 90s about reducing message exchanges would become relevant again. Bear in mind, though, that we would need a bunch of carriers running some sort of schedule to ensure that the latency distribution is not too dependent on a packet's arrival time at the distribution node. Also, would some sort of elevator scheduling algorithm help when we consider that packets need to travel to arbitrary Earths within the local area, and not just the full 25-step span?
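
On that last question, here's a rough sketch of what elevator (SCAN) scheduling might look like for a carrier sweeping the local span; the function, the queue contents and the footprint coordinates are all assumptions for illustration:

```python
# A sketch of elevator (SCAN) scheduling for a stepping carrier: service
# pending packets in coordinate order in the current direction of travel,
# then sweep back, rather than darting to arbitrary Earths on demand.
# The 25-Earth 'Valhalla footprint' and the queue contents are made up.

def scan_order(pending, position, direction):
    """Return the order in which to visit Earths holding pending packets.

    pending:   iterable of Earth coordinates with traffic waiting
    position:  carrier's current Earth coordinate
    direction: +1 (stepping West) or -1 (stepping East)
    """
    ahead = sorted(e for e in pending if (e - position) * direction >= 0)
    behind = sorted((e for e in pending if (e - position) * direction < 0),
                    reverse=True)
    if direction < 0:
        ahead, behind = ahead[::-1], behind[::-1]
    return ahead + behind


# Carrier is at West 1,400,010 heading West; packets wait at these Earths.
queue = [1_400_003, 1_400_024, 1_400_012, 1_400_000, 1_400_019]
print(scan_order(queue, position=1_400_010, direction=+1))
# [1400012, 1400019, 1400024, 1400003, 1400000]
```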

So that's a broad-brush look at the Outernet. Locally (on an Earth) it's the internet, possibly using VHF or microsats instead of intercontinental fibre. Between near Earths, interactive protocols are possible using fast-cycling stepping data carriers. Across the Long Earth, we fairly rapidly lose any reasonable latency and must resort to good old store-and-forward message sending. This of course only looks at a small slice of the infinite Long Earth. Addressing an infinite sequential space might be harder.

Addendum: I had a look at some things that might be similar, like the Interplanetary Internet. They are interesting in their own right, but I'm not convinced the transport problems are identical. RFC1149 may also be a useful analogy.

(Originally here)