A low-bandwidth "post-internet"

So I’ve been doing some thinking recently about starting to build up alternative arrangements to the traditional Internet (not that they can’t overlay on top of it for now, but I think there’s a good argument to be made to start creating other forms of linkage and practice as well). Important aspects would be:

  • Low bandwidth requirements,
  • Efficient communications mechanisms,
  • Infrequent and intermittent connectivity, and
  • Efficient information organization and retrieval.

Existing ideas such as Gopher, Usenet, and listservs (along with email-based search) are applicable here - obviously there are improvements that could be made.

One thing I’ve been doing additional thinking on is how to improve the idea of “email”, since a mailbox-based messaging platform is central to a lot of these “intermittent-connection” type networks - a storage queue for messages is critical. That said, how do we prevent an immediate spike in spam? Along these lines, I’ve thought of an idea which I wanted to float as well - consider it one of many important aspects to building up a low-bandwidth, workable post-internet: consent-based communication.

Here’s how it works in theory:

  1. Your inbox is effectively “keyed” (like a locking post office box) - except that in the physical case, the post office holds the key, so anything they’re willing to carry can go in your box. In this case, you can generate effectively infinite keys - and can revoke them at will, too.
  2. You cannot be communicated with if the sender does not have a key that works in your box.
  3. Since the post-internet is largely request-based and connection-oriented rather than large-scale widespread comms, you would not necessarily be giving out “an email address” - instead you would give out a unique key which grants access to contact you. It could, obviously, include your email address as a component.
  4. The communication flow is as follows (a code sketch follows this list): A. You give a key to someone. B. They send you a communication. C. During the initial conversation between their client and your box, their client hands your box the key, which is immediately cancelled and used to authorize the generation of a private key pairing used for communication only between the two of you. D. The key is no longer useful to anyone else, and the pairing from that key is recorded with the signature of the sender, so you know to whom the key was ultimately assigned, as compared to whom you gave it. E. That sender can now send you messages until you revoke the key - at which point, if they try to send you any further messages, they are notified that it was revoked. You can also revoke silently, in which case your box will simply route their messages to /dev/null.
  5. So there are a few big questions and a lot of little details to deal with. The first big question is: how do random people contact me with actually useful information? I think a layer of vetting is useful here: if someone or some organization has a key to your box, they’re allowed to request that you grant a key for the as-yet-unconnected third party.
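To make the flow in step 4 concrete, here’s a minimal sketch in Python. All the names (Mailbox, grant, redeem, deliver, revoke) are mine and purely illustrative, and the “pairing” token is a stand-in for a real cryptographic key exchange:

```python
import secrets

class Mailbox:
    """Toy model of a consent-keyed inbox (illustrative only)."""

    def __init__(self):
        self.unredeemed = set()   # one-shot keys handed out, not yet used
        self.pairings = {}        # pairing id -> sender's signature/identity
        self.revoked = {}         # pairing id -> "notify" or "blackhole"

    def grant(self):
        """Generate a one-shot consent key to hand to a new contact."""
        key = secrets.token_urlsafe(16)
        self.unredeemed.add(key)
        return key

    def redeem(self, key, sender_signature):
        """First contact: cancel the one-shot key, mint a private pairing."""
        if key not in self.unredeemed:
            raise PermissionError("no consent")
        self.unredeemed.discard(key)         # key is now useless to anyone else
        pairing = secrets.token_urlsafe(16)  # stand-in for a real key exchange
        self.pairings[pairing] = sender_signature   # record who redeemed it
        return pairing

    def deliver(self, pairing, message):
        """Accept, bounce, or blackhole a message based on consent state."""
        if pairing in self.revoked:
            return "revoked" if self.revoked[pairing] == "notify" else None
        if pairing not in self.pairings:
            raise PermissionError("no consent")
        return ("accepted", message)

    def revoke(self, pairing, silent=False):
        """Withdraw consent, noisily or silently (/dev/null style)."""
        self.revoked[pairing] = "blackhole" if silent else "notify"
```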

So let’s say that you’re a researcher at an organization (company, uni, whatever) and you publish a paper in a journal, or you write an article and it gets shared on a listserv. Someone else reads it in the journal or on the listserv. If they want to respond to you, they can contact you via either the listserv (which would make the request on its behalf, through the grant you gave it to email you) or the journal (ostensibly the same). If you’re members of a common organization (e.g. the IEEE), a similar mechanism could apply. For highly vetted orgs that don’t have a horrid advertising policy (NOT the IEEE, lol) you could optionally tell your client to always grant requests automatically, so you don’t need to deal with manual approvals.

The pitfalls here are the typical nefarious spammer types: a spammer joins a listserv and then uses that to phish the members. Because the listserv ideally requires some sort of human-like performance to join, and because the consent is 1:1 between the spammer and the listserv, it is easy to identify the source of spam (the listserv itself will have to proxy the requests to other members, so it is a first line of defence and can act through throttling, logging, or other statistical analysis). Reporting and banning can then follow on a manual or automated basis by the end-user clients as well.

Same thing for organizations: say the IEEE has decided (as they do) to start spamming you with crap ads. They risk you revoking or blackholing their consent - which is serious business now, since you’d have to intentionally go grant it again, so it serves as a greater deterrent - and you know exactly through whom it came, too, so you can hold them more directly accountable.

There’s also the issue of getting spammed with requests for consent, but again, since you know the vetting channel, it is easier to throttle and gatekeep these and to inform the party that they’re being used for a harassment campaign. Last but not least, any individual can proxy a request for any other, so Sally, who has consent to communicate with Lisa, can introduce Joe to Lisa via a proxy communication request (embedded in a cover email, for instance), which Lisa can accept or not.
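A sketch of that last proxied-introduction step, reusing the hypothetical Mailbox from the earlier sketch (again, the function name and the accept callback are purely illustrative):

```python
def request_introduction(lisas_box, sallys_pairing, joe_identity, accept):
    """Sally proxies Joe's request through her existing consent with Lisa."""
    if sallys_pairing not in lisas_box.pairings:
        raise PermissionError("Sally has no consent to proxy through")
    # Lisa's client knows exactly who vetted the request (Sally), so the
    # vetting channel stays accountable if it gets abused.
    if accept(joe_identity):      # manual approval or an always-grant policy
        return lisas_box.grant()  # a fresh one-shot key for Joe to redeem
    return None
```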

Implied in all this is a layer of public/private cryptography. I don’t see this as particularly troublesome: there don’t have to be single master keys for each mailbox, and key rotation and exchange are pretty well sorted now, although as always the devil is in the details. Given that the ideal situation is for clients to be offline most of the time, messages will remain fully encrypted on the holding servers (except for the delivery layer itself, which is encrypted separately by the consent key framework). And since most people already communicate within intentionally managed groups, there’s no reason this system of consent should prevent important communications of any kind. The only complication would be handing out your email address in a low-tech way to someone - and here you could either generate a little card with tear-off, precomputed consent keys or use other low-tech means for creating acceptable keys. Perhaps you could even define a personal coding scheme so that you could mnemonically generate keys at will. This part is likely best left to the user to customize.
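For the tear-off-card idea, something as simple as word-based keys might do - short enough to print, read aloud, or copy by hand. A toy sketch, with an illustrative 12-word list (a real version would want a large standard list; 2048 words gives 11 bits per word, so four words is roughly 44 bits):

```python
import secrets

# Toy word list - illustrative only; use a large standard list in practice.
WORDS = ["anchor", "basalt", "cedar", "dune", "ember", "fjord",
         "garnet", "heron", "iris", "juniper", "kelp", "lantern"]

def card_key(n_words=4):
    """A printable, speakable one-shot consent key."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(card_key())  # e.g. "heron-dune-kelp-iris"
```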

While I’m open to people shooting holes in this (I know there are challenges to solve), I’m as interested in positive expansion of ideas here as I am in nitpicking a theory to death. We all know nothing is perfect and every communication system has its own challenges, so if you’d like to toss your hat in the ring and weigh in on the feasibility of this idea, I’d appreciate it being in the spirit of construction - as in, if you have a criticism, a concern, or see a glaring hole, submit it with at least one idea to fix it and continue towards the common goal of consent-based, low-bandwidth, intermittent-friendly communication modes. And don’t forget that this component (offline-stored messaging) is only one of many important aspects to the whole framework - so let’s make sure we cover all the bases.

Basically reinventing FidoNet’s store-and-forward type system, overlaid with modern encryption? Or something like Freenet but optimized for intermittency? Or even just a packet radio BBS? (e.g. the SCCo ARES/RACES Network: Packet BBS Service Description)

Packet radio and listservs were the inspiration, with a little more wrapping to reduce spam, yes! The idea being that anything that utilizes this should be able to run over these extant technologies, with minimal changes to the tech.

Edit: I have used packet radio and the offshore SSB variants of radiomail extensively in the past. I am definitely a fan of these technologies, although given that radio bandwidth is a commons, it doesn’t scale well without a system of coordination. So I’m a huge fan of radio guilds such as the ARRL, ARES, etc. who coordinate the services and those allowed to access them.

So basically a modernized, encrypted BBS (which would be illegal over the air, though). Or FidoNet-style store and forward.

It would be illegal over the air on amateur frequencies, yes. On licensed spectrum there are no such restrictions, provided you hold the license. FidoNet-style store-and-forward over traditional means - modem/analogue phone, digital networks, even USB keys via mail drop, who knows - all of this is within the gamut, definitely.

How well would it work to have not encryption, but some kind of rolling-code authentication? Sticking a message ID on a packet isn’t encryption and isn’t illegal, so even though the message might be in the clear, we can at least verify that a message claiming to be from Joe is, in fact, from Joe, and hasn’t been tampered with when it arrives at your inbox.

I wouldn’t consider authentication to be remotely as important as the other considerations - namely consent, privacy, intermittency-friendliness, and low bandwidth, among others such as good tools for archival and search (offline as well as online). So personally, this is not really a big feature in isolation, IMO.

Some listservs are public, some are not. The ones I’d be interested in, personally, would likely be somewhat closed if not fully closed. Privacy is paramount in these cases - even the somewhat-privacy of regular email is vastly better than the not-at-all-private nature of broadcast packet radio, for instance, which is much more akin to a community phone line. Once you have consent structures in place, you get authentication of the sender, at least to a degree, baked in “for free” as it were.

BTW, for the purposes of indexing and mailbox maintenance, packet radio sort of has this already, in a very basic way, through the message ID, as you point out. If the ID were cryptographically verifiable, it would in a sense be considered encrypted data and therefore illegal, so by necessity, I think, it cannot contain more than is trivially decoded, as is the case today.

Edit: to further clarify, the proscription against encryption applies even if the key is fully public, so having Joe encrypt a signature with his private key, or encode an ID with a private variable as authentication, is still technically in violation of the law.

In which case, what can be done with ISM-band devices? I just tried searching around for a direct ‘yay or nay’ from the FCC on encryption in the ISM bands and couldn’t find a direct answer, but there are several ISM devices for sale that tout their encryption capabilities on the sales page, so I presume it’s legal.

So then I guess it’s a matter of whether the available power and data rates are up to the job. Any thoughts on what kind of data rates are needed? Are messages purely text-based? 1200 baud would be good for short messages, though probably a bit of a bummer for a long-form email-type message.

I’m sure impressed with the ranges I’m getting from the UHF LoRa radio modules I’ve been playing with. I haven’t done anything with LoRaWAN yet, but could it be a viable transport layer, especially since it can interface with the internet when it needs to?

2.4 GHz ISM radios could open up some possibilities with data rates, or even modified wifi equipment, but that’s so line-of-sight it would be hard to get much more than a neighborhood of coverage at a time.

Man, I can’t help but think a revised APRS/AX.25 BBS would be the best way to go, with the most existing infrastructure, but the private-message requirement knocks out that option, of course. I’m thinking the biggest hurdle with ISM radios is how far communications can reach at the very limited power levels allowed.

Some good thoughts here. Let me sort them out a bit:

  1. Data rates, etc.

Well, I had about that or a bit less using radiomail over SSB… the trick is that the remote end lets you quickly view only the header data, so you can preview the messages (you can also apply filtering if you have an idea of what you specifically want to get). So first you briefly connect and fetch your headers, then disconnect (so someone else can use the band). Offline, you mark which messages you want the full text of, and reconnect to download just those. Text compresses pretty heavily, so it goes quite quickly, and many important graphics types (charts, weather reports, etc.) can be heavily compressed as well, usually with specialized compression for their particular data characteristics. So I would say 1200 baud is perfectly capable of serving a limited quantity of users from a given frequency band - the SailMail organization is a good example, and they probably have a pretty good idea of how many users per node and per frequency can be packed in… I would expect something in the 10^3 magnitude as an upper bound, possibly even 10^2. The slower it is, and the longer range the frequencies carry, the fewer people per geographic area you can serve, since there’s more area you’re effectively denying service to while the band is in use.
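Some back-of-envelope numbers for that header-first workflow at 1200 baud (the message counts and sizes here are illustrative assumptions, not measurements):

```python
# 1200 baud with 8N1 framing moves roughly 120 bytes per second.
BYTES_PER_SEC = 1200 // 10

headers = 50 * 80            # 50 waiting messages, ~80 bytes of header each
body = 4000                  # one long-form message, ~4 KB of plain text
compressed = body * 0.4      # plain text typically compresses 2-3x

print(f"all headers: {headers / BYTES_PER_SEC:.0f} s")     # ~33 s
print(f"one message: {compressed / BYTES_PER_SEC:.0f} s")  # ~13 s
```

So a quick connect can preview an entire mailbox in well under a minute, and you only pay full price for the messages you actually want.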

Again, this is assuming radio. I don’t view radio as a particularly great tool for point to point communications… which brings me to my next topic:

  2. Radio for public interest and general purpose communications, other methods for more private things.

So, bingo. The trick is that we don’t try to send private data through these channels. That’s OK, though: there’s still a wealth of information that can be totally public. Think local message boards, service and civic announcements, and even regional news. News and the like, of course, are easily broadcast using voice, so there is only limited utility in transmitting imagery there, and we’ve still got analogue TV and other tools for imagery when necessary. But things like weather GRIBs, or scientific data from local sensors or universities, can be sent totally in the clear and might be useful to put on a radio server for download by interested parties on request from time to time - or to make more accessible to remote destinations (for instance, a remote library could download public domain books on demand via this mechanism, although of course, given that the body of public domain works is more or less fixed, they could also just have been sent on a 64GB USB key or what have you…). Microwaves, line-of-sight methods as you refer to, and other wireless techniques could be used for more private access - probably in a cyber-café sort of setting. One location in town (the radio guild hall, for instance) might have a high-speed link to other towns on which private traffic is permitted, thus forming a sort of mesh network. You’d go in, pay a time-based access fee, fetch your messages, go home, write all your responses, and transmit them on your next fetch-and-send visit.

That USB key gave me a bit of a think too, though:

  3. Low bandwidth enables easier bulk-downloading and offline use.

One of the other advantages of a low-bandwidth focus in information presentation is the possibility for entire websites to be archived in a single go and downloaded over lower-bandwidth media… so let’s say you’ve got a subscription to N forums (via RSS or listserv or whatever). You can request a download in the low-demand hours; at some point in the night it gets scheduled and comes in automatically, and you then use a local connection (sneakernet, wifi, phone and modem) to pick up your download from your local radio guild or microwave aggregator or whatever you have connecting you to the wider world. If semiconductor tech becomes less globally accessible and/or even dies out, magnetic media might make a comeback (it’s easier to regionally manufacture, for one thing) and lower-density storage is also likely to be a necessity. So being able to archive a lot more data locally, sort it, search it, and put it to good use is enabled by lower-bandwidth data formatting (a return to pure HTML and other textual markup, less or no JS, lower resolution and less frequent use of images, etc).
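A sketch of what the overnight bulk-fetch might look like on the aggregator’s side: subscriptions queue up during the day, then get fetched and bundled into a single archive in the low-demand window, ready to be picked up over sneakernet or a local link. The URLs and filenames are illustrative only:

```python
import tarfile
import urllib.request

# Hypothetical subscription queue, accumulated during the day.
QUEUE = [
    "https://example.org/forum-one/rss",
    "https://example.org/forum-two/rss",
]

def nightly_bundle(queue, out="bundle.tar.gz"):
    """Fetch each subscription once, off-peak, and pack a single archive."""
    with tarfile.open(out, "w:gz") as tar:
        for i, url in enumerate(queue):
            path = f"feed-{i}.xml"
            urllib.request.urlretrieve(url, path)  # one fetch per feed
            tar.add(path)
    return out
```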

I can add a lot more, but hopefully this starts to take a broader look at some of these ideas.

One other thing to note is that this really should be “need driven”… so much of what’s on the Internet is there for entertainment and idle chatter. If we strip that away, assume those uses are better served by local features, and remove them from the duties of the internet and communications technology in general, precious little of what the Internet is used for remains high bandwidth. Sure, video as a tutorial mechanism is nice, but it’s not essential that it be distributed over networks - ordering physical copies (VHS, anyone? USB keys?) is still entirely doable (never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway). So if we really replace what we’ve glommed onto the Internet with what really needs to be there, we get down to a much smaller set of things to be concerned with.

In short, instead of asking “what can we do with this” we should ask “how do we solve X in specific?”

Signing and filtering based on signatures is legal, afaik. Bandwidth-wasteful, but valid. There’s no privacy there, though.
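A sketch of sign-and-filter, assuming the PyNaCl library: the message rides in the clear with a detached signature, so anyone can read it, but the receiver can verify authorship and integrity. (Whether a given regulator accepts this as “not encryption” is exactly the open question above.)

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

joe = SigningKey.generate()
joes_public = joe.verify_key    # published in a "phone book" service

signed = joe.sign(b"de Joe: net check-in moved to 1900 local")
# Over the air you'd transmit signed.message (plaintext) + signed.signature.

try:
    joes_public.verify(signed.message, signed.signature)
    print("really from Joe, and untampered")
except BadSignatureError:
    print("forged or mangled - filter it out")
```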

The ultimate arbiter is the FCC nastygram, though. And no amount of neckbeard wankery would change that.

Your biggest issue with VHF+ bands is range, which means more repeaters and the like. So any system like that would be range-limited by line of sight at best.

You can always use the 900 MHz unlicensed bands (if you’re still talking about FCC compliance) if you want a fully secure system.

Interesting. I thought the prohibition on encryption included any byproducts of it, but I’m happy to hear that may not be the case. You’re right about the neckbeardy FCCisms and their equivalents throughout the world, but… low-power, extremely wideband spread-spectrum stuff is quite fascinating these days too.

Most of that is just repurposing existing wifi mesh stuff and calling it something new, e.g. http://www.broadband-hamnet.org/ .
I also mean that the FCC is the arbiter; the rest of us just talk about what we think it means and what doesn’t get us in trouble.

Interesting, I didn’t know about them, but that’s not what I was talking about with the UWB SS type stuff. Anyway, these techs are still expensive, though I’ve heard of some successful results with software-defined radios in that space on the amateur side of things. But… we’re veering heavily towards needing a dedicated radio topic, methinks!

I’d like to bring this topic back around to the data organization, structure, requesting/retrieving, storage mechanisms, suitable protocols, and other logistics of identifying the utility needed from a lower-bandwidth, lower-power replacement for what the Internet currently provides. This need look very little like the current Internet - treat it more as an exercise in triaging what is really necessary, and what tools and techniques we already have that serve it admirably. Then we can figure out what’s not covered well and identify areas to put effort into moving forward, whether in conserving and adapting existing techniques or in innovating as necessary to solve anticipatable future problems.

That said a part of me just also wants to find out which admirable qualities of the 1980’s/1990’s early Internet ideas that got scrapped are worth trying to bring back in some form, too.

This isn’t directly related to forwarding on a global network, but one of the concepts I’ve wanted to mess with in a post-internet, low-bandwidth sort of way is solar-powered data nodes that contain “useful stuff.” They could certainly participate in this sort of network, depending on range.

I’d been playing with the project name “Wikenberg” - a combination of Wikipedia and Project Gutenberg. Basically, a solar-powered Raspberry Pi with a small amount of power buffer (possibly ultracaps, if I can justify the buck/boost complexity there). If the sun is shining, it runs. If the sun isn’t shining, it shuts down. It would offer access to read-only versions of Wikipedia and Project Gutenberg, and perhaps a local message board/file storage repo, though I’d want to be careful there for filesystem corruption reasons (see “solar powered”). If a Pi Zero could run the services at acceptable performance, that might be worth looking at as well.
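The filesystem-corruption worry suggests the node should watch its supply and halt cleanly before the sun gives out. A sketch of that watchdog, where read_bus_voltage() is a stand-in for whatever ADC or charge-controller interface the build actually has:

```python
import os
import time

CUTOFF_VOLTS = 4.9   # illustrative threshold for a 5 V Pi supply

def read_bus_voltage():
    # Stub: replace with a real ADC or charge-controller query.
    return 5.1

while True:
    if read_bus_voltage() < CUTOFF_VOLTS:
        os.system("sync && shutdown -h now")  # flush writes, halt cleanly
        break
    time.sleep(30)
```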

One could use these as nodes in a store-and-forward network as well, if they had intermittent internet access.

As for the 1980s/1990s, well, IRC still holds up very well!

Caches are definitely an interesting idea, though, really, the problem is the degradation of the electronics themselves, and people’s ability to read the cache.

I view it as more of a transitional technology - there will still be an endless supply of used laptops, cell phones, etc, even if the batteries aren’t much good.

One other thing I’ve considered, in the land of “well, if you go simple…” - buck-based solar charging stations for USB devices. If you put a 100W panel up, even in lower light you can usually get 5-10W out for charging devices, and you need no batteries or anything else. You could even go with basic voltage regulators that dump the surplus as heat, though a buck converter would be an awful lot more efficient. Put these around parks and such, and you’ve got a long-lived set of charging stations (at least assuming nobody runs off with them, which is far from a given).
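Rough numbers behind the buck-versus-linear choice (all figures are illustrative assumptions):

```python
panel_rated = 100    # watts in full sun
low_light = 0.08     # heavy overcast can cut output to ~5-10%
buck_eff = 0.90      # a decent buck converter
linear_eff = 5 / 18  # a linear reg from an ~18 V panel burns off the rest

available = panel_rated * low_light                # ~8 W in poor light
print(f"buck:   {available * buck_eff:.1f} W")     # ~7 W -> ~1.4 A at 5 V
print(f"linear: {available * linear_eff:.1f} W")   # ~2 W, the rest is heat
```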

Another thing I’ve been thinking through is that we really need to get back (especially in a low bandwidth future) to the concept of documents - not these Javascript abominations that run in their own OS and can do anything/everything, given a supercomputer to run their nested abstractions.

Most of that which we do with the internet can be served by three classes of technology, IMO:

  • Static documents. Web pages. Images. Video, maybe. But something that is an available resource for general consumption.
  • Latency-tolerant communications. Email, more or less - I send a message to another person or group of people and it eventually gets there, and they eventually read it.
  • Low latency interactive communication. AIM. IRC. 50 bajillion layers on top of IRC. I will offer, from past experience, that IRC can be hosted on a toaster oven without any problems. I ran a college IRC network with a few dozen users, peak, on a pair of Mac SE/30s - 16MHz CPUs and 12MB of RAM, no problems. I will contrast this with Matrix, which is currently beating the crap out of a VM with a gig of RAM, heavily using swap in the process… though that’s mostly Python being a bloated pig.

I like the concept of latency-tolerant interactive communication (Hangouts, Matrix, etc), but I don’t think that’s a critical set of functionality, and it adds a lot of additional requirements (especially when you start having to handle multiple endpoints per person - that gets complex, fast).

Absolutely. By the way, in Cuba there’s this thing called “El Paquete Semanal” (the weekly package), which is basically just a bigass zip (1TB or thereabouts), assembled weekly, of popular show rips, various manuals, how-tos, pop entertainment of various forms, etc.

Radio (as we’ve discussed here extensively in both digital, voice, and hybrid formats) and other forms of long-distance digital communications (everything from lasers - which are not actually terribly hard to make - to microwaves - again, 1970’s tech and not terribly hard to manufacture now that we know how) will be around for a looooong time in some form or another, as long as we don’t destroy all the books and mathematics ever. Signal lamps, morse code over telegraph wires, etc. all have their places too.

BUT - since we’re on the topic of “internet” here, what is important (and fundamental) is the idea of routed packets - that’s sort of the ‘net’ part at least. So for me, what I’m most interested in are things precisely along the lines you’ve just drawn up here: docs, emails, chat, and the related meta-services: indexes/catalogues and “phone books”. You might toss in the idea of “message boards” too, those will never go away in some form or another. Not sure if those are their own category or not.

One of the reasons I think this will be important: libraries, and books as a whole, will not likely be tremendously viable in a low-energy society. Books are expensive to produce, heavy to transport, and require care and storage in climate-controlled locations. They will not necessarily remain ubiquitous, especially - and this is key - in areas where the Internet as we know it is also not present.

Here are some issues that are happening TODAY that make this point relevant:

  • Rural Internet still struggles to be “broadband” everywhere, and where it can be had at all it can be quite expensive and still unreliable.
  • Cell phones are one of the major methods people consume content today, but they’re also unreliable in even minor catastrophes and localized crises.
  • The US is not interested in asserting global hegemony as much, and that is likely to continue, thus the always-connected Internet of today is likely to either fracture or continue to degrade for the rest of the world at a faster rate than domestically, at least initially.

So I think we can summarize the main “fruits” of the “Internet” as follows:

  1. A fault-tolerant, distributed mechanism for transportation of information, albeit with unevenly distributed bandwidth and widely variable latency, which can operate across almost any numeric-capable medium.
  2. A suite of technologies built atop that information transmission mechanism to provide for the location and retrieval of documents (including meta-documents), asynchronous messages, and pseudo-real-time messages.

Viable competing technologies include:

  1. Books/Newspapers/Magazines/Other print media
  2. The local tavern’s message board (or church, or school, or city hall, or library, etc.)
  3. Radio - Analogue and Digital, where not packetized
  4. Large format digital storage (SD cards, hard disks, floppies, USB keys, repurposed audiocassette, DAT, etc.)

Advantages a low bandwidth internet (lowercase, indefinite-article “an internet” for short) likely has over these competitors:

  1. Has a moderate tech requirement (a computer is still required to take advantage of digital storage too, and an internet only requires an MCU, nominally).
  2. Can repurpose a wide variety of communications technologies as needed (books, etc. require presses, paper, ink and transport)
  3. Does not take up a lot of physical space, nor significant energy to operate (though manufacturing any computing device is not low-cost, but this assumes a mid-range application, not a far-future one)
  4. Requires only a modest standard of education and resources to operate (radio in particular requires quite a lot of understanding to set up and operate a full-power 1500 watt ham station safely and efficiently, along with land and space for antennae, a shed, the power to operate it, etc.)
  5. Does not require any serious physical transport once the communications mechanisms have been put in place, yet is easily mobile if necessary

Notable disadvantages:

  1. Not always mobile in operation
  2. Requires some form of electricity
  3. Dependent on collective effort to make useful
  4. Dependent in some ways on good-faith behaviour (the more secure the network needs to be, the more difficult and complex it is to operate and to reason about and the more advanced knowledge and training one needs to protect and defend it)
  5. Not efficient for audio in particular - that’s better served by audiocassette and radio

There are more, these are just some short lists of very obvious entries.

In short, for an internet to be useful, it must provide important information to people in a way that, compared to delivering the same information via a competing mechanism, has a high enough value for the effort to be worthwhile. I posit that this is actually a fairly high bar for the bulk of human-kind’s information needs.

A deconstruction of why radio and books have failed is very simple: books are comparatively expensive, and radio is comparatively limited (in channels/stations/bandwidth) and also expensive (in transmitting stations, personnel and operations, power consumed, etc.). This is in comparison to today’s largely unlimited Internet “experience”, which has been radically subsidized - by military expenditures on the core technology, for the most part, and by later commercial expenditures as the balance of utility swung hard in its favour.

An internet will have similar limitations to these older media: less subsidization and fewer commercial interests (due to a necessary reduction in advertising, as a main point, though I fully expect orders to be placeable over an internet just as they would be by phone call or any other long-distance mechanism). However, because those other methods will likely regain utility in the balance, I don’t expect companies to pile onto an internet with as much enthusiasm as before.

Throughout this piece I have seemingly been talking about two different things: an internet which effectively runs “on top of” the current one (that is to say, a movement by content producers to use less bandwidth in providing their content and, simultaneously, a shift by users to orient their communications towards technologies that are inherently lower bandwidth); and a set of internets which are likely fully disconnected from, or never were a part of, the Internet to begin with, yet use fundamentally the same components (namely TCP/IP). I don’t see these as mutually exclusive - in fact I expect that, much as the Internet emerged from interconnecting more and more formerly independent networks (ARPANet among them), it will go quietly into that dark night by the same process in reverse. Already various darknets exist hither and yon, some of them utilizing the Internet’s backbones for their transport, some of them emphatically not. To me, the transition never really started or ended - the original Internet was, by necessity, low bandwidth, and those technologies persist on it today, despite being all but abandoned, like old railway spurs and ancient sidings.

Another point: while low-bandwidth internets are interesting to consider from a consumer’s point of view, there is another use case where they might be able to be a little less low-bandwidth, but where the efficiency of bandwidth and of the technologies used can still make a significant difference in the total cost (and thus the utility/value ratio) of having an internet at all: education. School-area networks might have an excellent use case in things like library access to articles (books should, in my opinion, properly be read on paper wherever possible, but articles can be printed on demand or read and quoted from a screen without too much issue, since they are shorter and there are significantly more of them than there are books worth having in a library). And some subjects (especially mid-level mathematics, control systems, physics, and the like) benefit from a realtime computing environment which can be networked to devices, laboratory sensors, oscilloscopes, etc. Providing affordable, easy-to-administer tools for education - in the context of being disconnected from the public Internet, and therefore without as much concern for security - could revolutionize the value proposition and push the utility of computing much further into the future than it would go with the hazards of globally connected networking as it stands today. It would also make cheaper hardware “fast enough” again, which would be a boon for third-world regions like rural West Virginia.

So, what technologies do we, here, find fascinating and wish to champion, for retro-tastic purposes, plain fun and hobbying, or to begin to provide real value to those left behind by today’s increasingly bandwidth- and attention-demanding Internet?

I’d like to see more IRC, and as I’ve said elsewhere, bringing a federated form of it to life would be wonderful. You (@Syonyk) have pointed out Synapse as a potentially good foundation for this - making thinner servers and clients that speak that protocol might be a good place to start.

Same with web publishing platforms. Right now, things like Jekyll and other static-site-generating blog and content management tools have reached a fairly solid level of user-friendliness. It would be fun to put together a “new MySpace” which relies on such tooling to provide self-publishing features to users - with an utter ban on Javascript!