This feed contains some of my blog entries that link to software code that I've developed.

Exede Surfbeam 2

My new satellite internet connection is from Exede, connecting to the ViaSat 1 bird in geosync orbit. A few technical details that I've observed follow.

antagonistic by design

The "Surfbeam 2 wifi modem" is a closed proprietary system. That is important because it's part of Exede's bandwidth management system. The Surfbeam tracks data use and sends it periodically to Exede. When a user has gone over their monthly priority data, Exede then throttles the bandwidth in various ways -- this throttling seems to be implemented, at least partially on the Surfbeam itself. (Perhaps by setting QoS flags?)

So, if a user could hack their Surfbeam, they could probably bypass the bandwidth caps, or at least some of them. Perhaps Exede would notice eventually. Of course, doing so would surely violate the Exede TOS. If you're renting the modem, like I am, hacking a device you don't own might also subject you to criminal penalties. Needless to say, I don't plan to hack the Surfbeam. But it's been hacked before.

So, this is a device that lives in people's homes and is antagonistic to them by design.

weird data throttling

The way the Surfbeam periodically reports data use back to Exede and gets its throttling configured sometimes has odd effects. For example, the Surfbeam can be in a throttled state left over from the previous billing month. When a new billing month begins, it can remain throttled for some time (up to multiple hours) until it sends an update to Exede and they un-throttle it. Data downloaded during that window might still be counted as priority data even though the connection was throttled. I've seen some good indications of that happening, but am not sure yet.

But, I've decided that the throttling doesn't matter for me. Why? ViaSat 1 has many spot beams, and the working-class beam I'm in (most of it is in eastern Kentucky) does not seem to get a lot of use between 7 am and 4:30 pm weekdays. Even when throttled, I often get 300 kb/s - 1 mb/s speeds during the day, which is not a lot worse than the ~2.5 mb/s peak when unthrottled. And that's the time when I want to use broadband -- when the sun is shining and I'm at home at work/play. I'm probably going to switch to a cheaper plan with less priority data, because the priority data is not buying me much. This is a big change from the old FAP (fair access policy), which rendered the satellite no faster than dialup.

a whole network in there

Looking at the ports open on the Surfbeam, some very strange things turned up. First, there are not one, not two, but three separate IPs used by the device, and there are at least two and perhaps three distinct computers involved. There are a lot of flickering LEDs inside the box; a whole network in there.

The first address is the satellite controller. It's a Linux box, fingerprinted as kernel 3.10 or so (so presumably full of security holes), running thttpd/2.25b (which doesn't seem to have any known holes). It seems to have ssh and snmp, but with some port filtering that prevents access. (Note that the exploit video linked above confirms that snmp is running.) Some machine parsable data is exposed on its web server (see the SurfStat program).

The second address is the wifi router. It has a dns server, an icslap proxy, and nmap thinks it's Linux 3.x with Synology DiskStation Manager (probably the latter is a false positive?). It has its own separate web server for configuration, which is not thttpd. I'm fairly sure this is a separate processor from the satellite controller.

The third address responds to ICMP, but has no open ports at all. However, it seems to have filtered ssh, telnet, msrpc, microsoft-ds, and port 8090 (probably http), so perhaps it's running all that stuff. This one is definitely a separate processor, located in the satellite dish's TRIA (transmit receive integrated assembly); verified by disconnecting the dish's coax cable and being unable to ping it.

lack of source code for GPLed software

Exede did not provide me with anything about the GPL-licensed source code on the Surfbeam 2. I'm not sure if they're legally required to in my case, since they're renting it to me.

But, they do let you buy the device too, so it's interesting that nothing mentions the licenses of its software and where to get the source code. I'll try to buy it and ask for the source code eventually.

power use

The Surfbeam pulls as much power as two beefy laptops, but it is beaming signals thousands of miles into space, so I give it a bit of a pass. I want to find a way to power it directly from DC, but have not had any success with that so far, so am wasting more power running it on an inverter.

Its power supply is rated at 30v and 2.5 amps. Interestingly, I've seen a photo of another Surfbeam power supply that only pulls 1.5 amps. This and a different case with fewer (external) LEDs make me think perhaps the installer stuck me with an old and less efficient version. Maybe they integrated some of those processors into a single board in a new version and perhaps made it waste less power as heat. Mine gets pretty warm.

I'm currently only able to run it approximately one hour per hour of sun collected by the house. Still on dialup the rest of the time. Being able to get on broadband when dialup is being painful makes staying on dialup the rest of the time perfectly ok. Of course it helps that with tools like git-annex, I'm very well tuned to popping up onto broadband briefly and making it count.

starting debug-me and a new devblog

I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.)

I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on.

multi-terminal applications

While learning about and configuring weechat this evening, I noticed a lot of complexity and unsatisfying tradeoffs related to its UI, its mouse support, and its built-in window system. Got to wondering what I'd do differently, if I wrote my own IRC client, to avoid those problems.

The first thing I realized is, it is not a good idea to think about writing your own IRC client. Danger, Will Robinson..

So, let's generalize. This blog post is not about designing an IRC client, but about exploring simpler ways that something like an IRC client might handle its UI, and perhaps designing something general-purpose that could be used by someone else to build an IRC client, or be mashed up with an existing IRC client.

What any modern IRC client needs to do is display various channels to the user. Maybe more than one channel should be visible at a time in some kind of window, but often the user will have lots of available channels and only want to see a few of them at a time. So there needs to be an interface for picking which channel(s) to display, and if multiple windows are shown, for arranging the windows. Often that interface also indicates when there is activity on a channel. The most recent messages from the channel are displayed. There should be a way to scroll back to see messages that have already scrolled by. There needs to be an interface for sending a message to a channel. Finally, a list of users in the channel is often desired.

Modern IRC clients implement their own UI for channel display, windowing, channel selection, activity notification, scrollback, message entry, and user list. Even the IRC clients that run in a terminal include all of that. But how much of that do they need to implement, really?

Suppose the user has a tabbed window manager that can display virtual terminals. The terminals can set their title, and can indicate when there is activity in the terminal. Then an IRC client could just open a bunch of terminals, one per channel. Let the window manager handle channel selection, windowing (naturally), and activity notification.

For scrollback, the IRC client can use the terminal's own scrollback buffer, so the terminal's regular scrollback interface can be used. This is slightly tricky; the client can't use the terminal's alternate display, and it has to handle the UI for the message entry line at the bottom.

That's all the UI an IRC client needs (except for the user list), and most of that is already implemented in the window manager and virtual terminal. So that's an elegant way to make an IRC client without building much new UI at all.

But, unfortunately, most of us don't use tabbed window managers (or tabbed terminals). Such an IRC client, in a non-tabbed window manager, would be a many-windowed mess. Even in a tabbed window manager, it might be annoying to have so many windows for one program.

So we need fewer windows. Let's have one channel list window, and one channel display window. There could also be a user list window. And there could be a way to open additional, dedicated display windows for channels, but that's optional. All of these windows can be separate virtual terminals.

A potential problem: when changing the displayed channel, the client needs to output a significant number of messages for that channel, so that the scrollback buffer gets populated. With a large number of lines, that can be too slow to feel snappy. In some tests, scrolling 10 thousand lines was noticeably slow, but scrolling 1 thousand lines happens fast enough not to be noticeable.

(Terminals should really be faster at scrolling than this, but they're still writing scrollback to unlinked temp files.. sigh!)

An IRC client that uses multiple cooperating virtual terminals needs a way to start up a new virtual terminal displaying the current channel. It could run something like this:

x-terminal-emulator -e the-irc-client --display-current-channel

That would communicate with the main process via a unix socket to find out what to display.
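
Here's roughly what that could look like on the spawned process's side, as a minimal sketch. The socket path and the one-line "what-channel?" protocol are made up for illustration; a real client would need a richer protocol.

    -- Sketch: the newly spawned terminal process asks the main IRC client
    -- process, over a unix socket, which channel it should display.
    -- The socket path and protocol here are invented for illustration.
    import Network.Socket
    import Network.Socket.ByteString (recv, sendAll)
    import qualified Data.ByteString.Char8 as B

    main :: IO ()
    main = do
        sock <- socket AF_UNIX Stream defaultProtocol
        connect sock (SockAddrUnix "/run/user/1000/the-irc-client.sock")
        -- ask the main process what this terminal should display
        sendAll sock (B.pack "what-channel?\n")
        chan <- recv sock 4096
        B.putStrLn (B.pack "displaying channel: " <> chan)
        close sock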

Or, more generally:

x-terminal-emulator -e connect-pty /dev/pts/9

connect-pty would simply connect a pty device to the terminal, relaying IO between them. The calling program would allocate the pty and do IO to it. This may be too general to be optimal though. For one thing, I think that most curses libraries have a singleton terminal baked into them, so it might be hard to have a single process drive curses displays on multiple ptys. And, it might be inefficient to feed that 1 thousand lines of scrollback through the pty and copy it to the terminal.
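
Still, a minimal connect-pty would only be a few lines. Here's a sketch, assuming the pty device can simply be opened read/write; raw terminal mode, window size propagation, and error handling are all omitted.

    -- Sketch of connect-pty: relay bytes between the controlling terminal
    -- (stdin/stdout) and a pty device given on the command line.
    import Control.Concurrent (forkIO)
    import Control.Monad (forever)
    import System.Environment (getArgs)
    import System.IO
    import qualified Data.ByteString as B

    main :: IO ()
    main = do
        [ptypath] <- getArgs
        pty <- openFile ptypath ReadWriteMode
        mapM_ (\h -> hSetBuffering h NoBuffering) [stdin, stdout, pty]
        hSetEcho stdin False  -- let the program on the pty do its own echoing
        -- terminal -> pty
        _ <- forkIO $ forever $ B.hGetSome stdin 4096 >>= B.hPut pty
        -- pty -> terminal
        forever $ B.hGetSome pty 4096 >>= B.hPut stdout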

Less general than that, but still general enough to not involve writing an IRC client, would be a library that handled the IRC-client-like channel display, with messages scrolling up the terminal (and into the scrollback buffer), an input area at the bottom, and perhaps a few other fixed regions for status bars, etc.

Oh, I already implemented that! In concurrent-output, over a year ago: a tiling region manager for the console
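
To give a flavor of how that could be used for the channel display sketched above, here's a toy example with concurrent-output's region API (written from memory of the API, so treat the exact names as approximate): messages scroll up into the scrollback buffer, while a region pinned at the bottom stands in for the input line.

    -- Sketch: scrolling messages above a fixed input region, using the
    -- tiling region manager from the concurrent-output library.
    import Control.Concurrent (threadDelay)
    import Control.Monad (forM_)
    import System.Console.Concurrent (outputConcurrent)
    import System.Console.Regions

    main :: IO ()
    main = displayConsoleRegions $ do
        inputline <- openConsoleRegion Linear
        setConsoleRegion inputline "> type your message here"
        forM_ [1 :: Int .. 20] $ \n -> do
            outputConcurrent ("<joey> message number " ++ show n ++ "\n")
            threadDelay 200000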

I wonder what other terminal applications could be simplified/improved by using multiple terminals? One that comes to mind is mutt, which has a folder list, a folder view, and an email view, that all are shoehorned, with some complexity, into a single terminal.

Linux.Conf.Au 2017 presentation on Propellor

On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania. Abstract

Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again.


Update: My presentation on keysafe has also been accepted for the Security MiniConf at LCA, January 17th.

keysafe with local shares

If your gpg key is too valuable for you to feel comfortable with backing it up to the cloud using keysafe, here's an alternative that might appeal more.

Keysafe can now back up some shares of the key to local media, and other shares to the cloud. You can arrange things so that the key can't be restored without access to some of the local media and some of the cloud servers, as well as your password.

For example, I have 3 USB sticks, and there are 3 keysafe servers. So let's make 6 shares total of my gpg secret key and require any 4 of them to restore it.

I plug in all 3 USB sticks and look at mount to get the paths to them. Then, I run keysafe to back up the key, spread among all 6 locations.

keysafe --backup --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdc1 \
    --add-storage-directory /media/sdd1 \
    --add-storage-directory /media/sde1

Once it's done, I can remove the USB sticks, and distribute them to secure places.

To restore, I need at least one of the USB sticks. (If some of the servers are down, more USB sticks will be needed.) Again I tell keysafe the paths where USB stick(s) are mounted.

keysafe --restore --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdb1

Using keysafe this way, physical access to the USB sticks is the first level of defense, and hopefully you'll know if that's breached. The keysafe password is the second level of defense, and cracking that will take a lot of work, leaving plenty of time to revoke your key, etc, if it comes to that.

I feel this is better than the methods I've been using before to back up my most important gpg keys. With paperkey, physical access to the printout immediately exposes the key. With Shamir Secret Sharing and manual distribution of shares, the only second line of defense is the much easier to crack gpg passphrase. Using OpenPGP smartcards is still a more secure option, but you'd need 3 smartcards to reach the same level of redundancy, and it's easier to get your hands on 3 USB sticks than 3 smartcards.

There's another benefit to using keysafe this way. It means that sometimes, the data stored on the keysafe servers is not sufficient to crack a key. There's no way to tell, so an attacker risks doing a lot of futile work.

If you're not using an OpenPGP smartcard, I encourage you to back up your gpg key with keysafe as described above.

Two of the three necessary keysafe servers are now in operation, and I hope to have a full complement of servers soon.

(This was sponsored by Thomas Hochstein on Patreon.)

keysafe beta release

After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.

PoW bucket bloom: throttling anonymous clients with proof of work, token buckets, and bloom filters

An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files?

The last thing I want is a git-annex keysafe special remote. ;-)

I've done a mash-up of three technologies to solve this, that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.

Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.
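
The hashing keysafe actually uses here is argon2; purely to show the shape of the check, here's a sketch using SHA-256 from cryptonite, counting leading zero bits of hash(salt ++ candidate).

    -- Sketch: check that hash(salt ++ candidate) starts with enough zero bits.
    -- Keysafe really uses argon2 for this; SHA-256 is only for illustration.
    import Crypto.Hash (SHA256 (..), hashWith)
    import Data.Bits (testBit)
    import qualified Data.ByteArray as BA
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as C

    leadingZeroBits :: B.ByteString -> Int
    leadingZeroBits = go . B.unpack
      where
        go [] = 0
        go (w:ws)
            | w == 0 = 8 + go ws
            | otherwise = length (takeWhile not [testBit w i | i <- [7,6..0]])

    validProofOfWork :: Int -> B.ByteString -> B.ByteString -> Bool
    validProofOfWork difficulty salt candidate =
        leadingZeroBits digest >= difficulty
      where
        digest = BA.convert (hashWith SHA256 (salt <> candidate))

    main :: IO ()
    main = print (validProofOfWork 12 (C.pack "request-id-salt") (C.pack "1234"))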

The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed.

Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At the worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.

If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker that is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.
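
Here's a much-simplified sketch of that bucket ladder. The difficulties and sizes are invented, and refill over time and the full-bucket delay are left out; it just shows how a request's proof of work selects which bucket it can drain.

    -- Sketch: a ladder of token buckets, each guarded by a higher proof of
    -- work difficulty. A request drains a token from the first bucket whose
    -- difficulty its PoW satisfies and that still has tokens.
    data Bucket = Bucket
        { requiredZeroBits :: Int  -- PoW difficulty needed to use this bucket
        , tokens           :: Int  -- tokens currently available
        }

    data Decision
        = Accept [Bucket]     -- updated ladder after draining a token
        | NeedMoreWork Int    -- client must redo its PoW at this difficulty

    -- powBits is the difficulty of the proof of work the client supplied
    -- (0 when it supplied none).
    admit :: Int -> [Bucket] -> Decision
    admit powBits = go []
      where
        -- ladder exhausted: the real server delays until the last bucket
        -- refills, then accepts, since the client has done plenty of work
        go seen [] = Accept (reverse seen)
        go seen (b:bs)
            | powBits >= requiredZeroBits b && tokens b > 0 =
                Accept (reverse seen ++ (b { tokens = tokens b - 1 } : bs))
            | powBits >= requiredZeroBits b =
                go (b:seen) bs  -- bucket drained; try the next, harder one
            | otherwise = NeedMoreWork (requiredZeroBits b)

    ladder :: [Bucket]
    ladder = [Bucket 0 100, Bucket 8 50, Bucket 16 20, Bucket 24 5]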

So far so simple really, but this has a big problem: What prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, and flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.

Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.
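
A hand-rolled sketch of that double check follows. A real server would use a proper bloom filter implementation and cryptographically random request IDs; this one just uses Data.Hashable with a few salts to pick bit positions.

    -- Sketch: two bloom filters guard proof-of-work request IDs.
    -- 'known' answers "did we hand out this request ID?", and
    -- 'used' answers "has a proof of work for it already been spent?".
    import Data.Bits ((.&.))
    import Data.Hashable (hashWithSalt)
    import qualified Data.Vector.Unboxed as V

    data Bloom = Bloom (V.Vector Bool)

    filterBits :: Int
    filterBits = 2 ^ (20 :: Int)  -- power of two; far smaller than keysafe's ~32 mb filters

    emptyBloom :: Bloom
    emptyBloom = Bloom (V.replicate filterBits False)

    positions :: String -> [Int]
    positions reqid = [ hashWithSalt salt reqid .&. (filterBits - 1) | salt <- [1..4] ]

    insert :: String -> Bloom -> Bloom
    insert reqid (Bloom v) = Bloom (v V.// [ (i, True) | i <- positions reqid ])

    member :: String -> Bloom -> Bool
    member reqid (Bloom v) = all (v V.!) (positions reqid)

    -- Accept a proof of work only when its request ID is one we generated
    -- and has not been seen before; mark it used on acceptance.
    checkRequestId :: String -> Bloom -> Bloom -> Maybe (Bloom, Bloom)
    checkRequestId reqid known used
        | not (member reqid known) = Nothing  -- not an ID we handed out
        | member reqid used        = Nothing  -- already spent (or rare false positive)
        | otherwise                = Just (known, insert reqid used)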

Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because a false positive only lets some made-up request ID be used. A false positive in the second bloom filter will cause the server to reject the client's proof of work. But the server can just request more work, or send a new request ID, and the client will follow along.

The other gotcha with bloom filters is that filling them up too far sets too many bits, and so false positive rates go up. To deal with this, keysafe just keeps count of how many request IDs it has generated, and once it gets to be too many to fit in a bloom filter, it makes a new, empty bloom filter and starts storing request IDs in it. The old bloom filter is still checked too, providing a grace period for old request IDs to be used. Using bloom filters that occupy around 32 mb of RAM, this rotation only has to be done every million requests or so.
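
Sketching that rotation bookkeeping, reusing the toy Bloom type from the previous sketch (the field names here are invented): once the live filter has seen its fill of request IDs, it becomes the grace-period filter and a fresh one takes its place.

    -- Sketch: rotate bloom filters before they fill up and their false
    -- positive rate climbs. The previous filter is kept around so recently
    -- issued request IDs keep working during a grace period.
    data RequestIdFilters = RequestIdFilters
        { liveFilter  :: Bloom  -- currently being filled
        , oldFilter   :: Bloom  -- previous filter, grace period only
        , idsIssued   :: Int    -- request IDs stored in liveFilter so far
        , idsPerCycle :: Int    -- rotate after this many (on the order of a million)
        }

    recordIssuedId :: String -> RequestIdFilters -> RequestIdFilters
    recordIssuedId reqid fs
        | idsIssued fs >= idsPerCycle fs = recordIssuedId reqid rotated
        | otherwise = fs { liveFilter = insert reqid (liveFilter fs)
                         , idsIssued = idsIssued fs + 1
                         }
      where
        rotated = fs { liveFilter = emptyBloom
                     , oldFilter = liveFilter fs
                     , idsIssued = 0
                     }

    knownId :: String -> RequestIdFilters -> Bool
    knownId reqid fs = member reqid (liveFilter fs) || member reqid (oldFilter fs)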

But, that rotation opens up another DOS! An attacker could cause lots of request IDs to be generated, and so force the server to rotate its bloom filters too quickly, which would prevent any requests from being accepted. To solve this DOS, just use one more token bucket, to limit the rate that request IDs can be generated, so that the time it would take an attacker to force a bloom filter rotation is long enough that any client will have plenty of time to complete its proof of work.

This sounds complicated, and probably it is, but the implementation only took 333 lines of code. About the same number of lines that it took to implement the entire keysafe HTTP client and server using the amazing servant library.

There are a number of knobs that may need to be tuned to dial it in, including the size of the token buckets, their refill rate, the size of the bloom filters, and the number of argon2 iterations in the proof of work. Servers may eventually need to adjust those on the fly, so that if someone decides it's worth burning large quantities of CPU to abuse keysafe for general data storage, the server throttles down to a rate that will take a very long time to fill up its disk.

This protects against DOS attacks that fill up the keysafe server storage. It does not prevent a determined attacker, who has lots of CPU to burn, from flooding so many requests that legitimate clients are forced to do an expensive proof of work and then time out waiting for the server. But that's an expensive attack to keep running, and the proof of work can be adjusted to make it increasingly expensive.

keysafe alpha release

Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties with modern hardware. Other than that, I'm fairly happy with how it's turned out.

Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.

(That estimate is for the password "makesad spindle stick".)
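
As a toy version of that idea (the names and numbers below are invented, not keysafe's actual model): track the brute-force work as CPU-seconds and price it against an hourly spot rate.

    -- Toy sketch of estimating attack cost from brute-force CPU time and an
    -- hourly spot-instance price. Prices and guess counts here are invented.
    newtype CpuSeconds = CpuSeconds Double
    newtype DollarsPerCpuHour = DollarsPerCpuHour Double

    bruteForceTime :: Double -> CpuSeconds -> CpuSeconds
    bruteForceTime expectedGuesses (CpuSeconds perGuess) =
        CpuSeconds (expectedGuesses * perGuess)

    attackCost :: DollarsPerCpuHour -> CpuSeconds -> Double
    attackCost (DollarsPerCpuHour rate) (CpuSeconds secs) = rate * secs / 3600

    main :: IO ()
    main = do
        -- e.g. 2^40 candidate passwords at 4 CPU-seconds of hashing per guess,
        -- priced at $0.03 per CPU hour of spot capacity
        let t = bruteForceTime (2 ** 40) (CpuSeconds 4)
        putStrLn $ "estimated attack cost: $" ++ show (attackCost (DollarsPerCpuHour 0.03) t)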

If you'd like to be an early adopter, install it like this:

sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
stack install keysafe

Run ~/.local/bin/keysafe --backup --store-local to back up a gpg key to ~/.keysafe/objects/local/

I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top of the line laptop or server class machine that's less than a year old, send me a benchmark:

~/.local/bin/keysafe --benchmark | mail -s benchmark

Bonus announcement: I've also written a quick Haskell interface to the C version of the zxcvbn password strength estimation library.

PS: Past 50% of my goal on Patreon!


Have you ever thought about using a gpg key to encrypt something, but didn't due to worries that you'd eventually lose the secret key? Or maybe you did use a gpg key to encrypt something and lost the key. There are nice tools like paperkey to back up gpg keys, but they require things like printers, and a secure place to store the backups.

I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc. I know this is a missing piece in the git-annex assistant, which makes it easy to generate a gpg key to encrypt your data, but can't help you back up the secret key.

So, I'm thinking about storing secret keys in the cloud. Which seems scary to me, since when I was a Debian Developer, my gpg key could have been used to compromise millions of systems. But this is not about developers, it's about users, and so trading off some security for some ease of use may be appropriate. Especially since the alternative is no security. I know that some folks back up their gpg keys in the cloud using Dropbox.. We can do better.

I've thought up a design for this, called keysafe. The synopsis of how it works is:

The secret key is split into three shards, and each is uploaded to a server run by a different entity. Any two of the shards are sufficient to recover the original key. So any one server can go down and you can still recover the key.
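
For intuition about why any two shards are enough, here's a toy 2-of-3 Shamir split over a prime field. Keysafe's real share format, encoding, and padding are different; this just shows the idea that two points determine the line, while one point says nothing about where it crosses zero.

    -- Toy 2-of-3 Shamir secret sharing over a prime field.
    import System.Random (randomRIO)

    p :: Integer
    p = 2 ^ (127 :: Int) - 1   -- a Mersenne prime, larger than our toy secrets

    -- f(x) = secret + a*x (mod p); shares are points on this line
    split :: Integer -> IO [(Integer, Integer)]
    split secret = do
        a <- randomRIO (0, p - 1)
        return [ (x, (secret + a * x) `mod` p) | x <- [1, 2, 3] ]

    -- Lagrange interpolation at x=0 using any two shares
    recover :: (Integer, Integer) -> (Integer, Integer) -> Integer
    recover (x1, y1) (x2, y2) =
        (y1 * x2 * invMod (x2 - x1) + y2 * x1 * invMod (x1 - x2)) `mod` p
      where
        invMod a = powMod (a `mod` p) (p - 2)  -- inverse via Fermat's little theorem
        powMod _ 0 = 1
        powMod b e
            | even e = (powMod b (e `div` 2) ^ (2 :: Int)) `mod` p
            | otherwise = (b * powMod b (e - 1)) `mod` p

    main :: IO ()
    main = do
        shares <- split 42
        let (s1 : s2 : _) = shares
        print (recover s1 s2)   -- prints 42, from just two of the three shares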

A password is used to encrypt the key. For the servers to access your key, two of them need to collude together, and they then have to brute force the password. The design of keysafe makes brute forcing extra difficult by making it hard to know which shards belong to you.

Indeed the more people that use keysafe, the harder it becomes to brute-force anyone's key!

I could really use some additional reviews and feedback on the design by experts.

This project is being sponsored by Purism and by my Patreon supporters. By the way, I'm 15% of the way to my Patreon goal after one day!


I've been funded for two years by the DataLad project to work on git-annex. This has been a super excellent gig; they provided funding and feedback on ways git-annex could be improved, and I had a large amount of flexibility to decide what to work on in git-annex. Also plenty of spare time to work on new projects like propellor, concurrent-output, and scroll. It was an awesome way to spend the last two years of my twenty years of free software.

That funding is running out. I'd like to continue this great streak of working on the free software projects that are important to me. I'd normally dip into my savings at this point and keep on going until some other source of funding turned up. But, my savings are about to be obliterated, since I'm buying the place where I've had so much success working distraction-free.

So, I've started a Patreon page to fund my ongoing work. Please check it out and contribute if you want to.

Some details about projects I want to work on this fall: