This feed contains some of my blog entries that link to software code that I've developed.
On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania. (Abstract)
Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again.
Update: My presentation on keysafe has also been accepted for the Security MiniConf at LCA, January 17th.
If your gpg key is too valuable for you to feel comfortable with backing it up to the cloud using keysafe, here's an alternative that might appeal more.
Keysafe can now back up some shares of the key to local media, and other shares to the cloud. You can arrange things so that the key can't be restored without access to some of the local media and some of the cloud servers, as well as your password.
For example, I have 3 USB sticks, and there are 3 keysafe servers. So let's make 6 shares total of my gpg secret key and require any 4 of them to restore it.
I plug in all 3 USB sticks and run mount to get the paths to them.
Then, run keysafe to back up the key, spread among all 6 locations.
    keysafe --backup --totalshares 6 --neededshares 4 \
        --add-storage-directory /media/sdc1 \
        --add-storage-directory /media/sdd1 \
        --add-storage-directory /media/sde1
Once it's done, I can remove the USB sticks, and distribute them to secure places.
To restore, I need at least one of the USB sticks. (If some of the servers are down, more USB sticks will be needed.) Again I tell keysafe the paths where USB stick(s) are mounted.
    keysafe --restore --totalshares 6 --neededshares 4 \
        --add-storage-directory /media/sdb1
Using keysafe this way, physical access to the USB sticks is the first level of defense, and hopefully you'll know if that's breached. The keysafe password is the second level of defense, and cracking that will take a lot of work, leaving plenty of time to revoke your key, etc, if it comes to that.
I feel this is better than the methods I've been using before to back up my most important gpg keys. With paperkey, physical access to the printout immediately exposes the key. With Shamir Secret Sharing and manual distribution of shares, the only second line of defense is the much easier-to-crack gpg passphrase. Using OpenPGP smartcards is still a more secure option, but you'd need 3 smartcards to reach the same level of redundancy, and it's easier to get your hands on 3 USB sticks than 3 smartcards.
There's another benefit to using keysafe this way. It means that sometimes, the data stored on the keysafe servers is not sufficient to crack a key. There's no way to tell, so an attacker risks doing a lot of futile work.
If you're not using an OpenPGP smartcard, I encourage you to back up your gpg key with keysafe as described above.
Two of the three necessary keysafe servers are now in operation, and I hope to have a full complement of servers soon.
(This was sponsored by Thomas Hochstein on Patreon.)
After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.
With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.
At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:
- There could be some bug that prevents keysafe from restoring a backup.
- Keysafe's design has not been completely reviewed for security.
- None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.
Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.
If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.
An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files?
The last thing I want is a git-annex keysafe special remote. ;-)
I've done a mash-up of three technologies to solve this, that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.
Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.
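Here's a minimal sketch of that check in Haskell. For brevity it uses SHA256, while keysafe's proof of work is actually built on argon2, and the difficulty and salt contents below are made up for illustration:

    import Crypto.Hash (hashWith, SHA256 (..))
    import Data.Bits (testBit)
    import Data.Word (Word8)
    import qualified Data.ByteArray as BA
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as B8

    -- Number of leading zero bits in a string of bytes.
    leadingZeroBits :: [Word8] -> Int
    leadingZeroBits [] = 0
    leadingZeroBits (w:ws)
        | w == 0 = 8 + leadingZeroBits ws
        | otherwise = length (takeWhile (not . testBit w) [7,6..0])

    -- A proof of work is valid when hashing it with the salt
    -- yields at least `difficulty` leading zero bits.
    validPoW :: Int -> B.ByteString -> B.ByteString -> Bool
    validPoW difficulty salt pow =
        leadingZeroBits (BA.unpack (hashWith SHA256 (salt <> pow))) >= difficulty

    -- Brute-force search for a valid proof of work.
    findPoW :: Int -> B.ByteString -> B.ByteString
    findPoW difficulty salt =
        head [ w | n <- [0 :: Integer ..]
                 , let w = B8.pack (show n)
                 , validPoW difficulty salt w ]

    main :: IO ()
    main = do
        let salt = B8.pack "request-id:object-id" -- hypothetical salt format
            pow = findPoW 16 salt
        print (pow, validPoW 16 salt pow)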
The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed.
Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At the worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.
If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker that is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.
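To make that escalation concrete, here's a toy token bucket in Haskell. This is not keysafe's actual implementation; the fields and units are placeholders:

    -- Toy token bucket; time is in seconds.
    data TokenBucket = TokenBucket
        { tokens :: Double      -- current token count
        , capacity :: Double    -- maximum tokens the bucket holds
        , refillRate :: Double  -- tokens added per second
        , lastUpdate :: Double  -- when the bucket was last refilled
        }

    refill :: Double -> TokenBucket -> TokenBucket
    refill now b = b
        { tokens = min (capacity b)
            (tokens b + (now - lastUpdate b) * refillRate b)
        , lastUpdate = now
        }

    -- Take one token if available; Nothing means the request is
    -- rejected, and the server demands enough proof of work to
    -- access the next bucket in the list.
    tryTake :: Double -> TokenBucket -> Maybe TokenBucket
    tryTake now b0
        | tokens b >= 1 = Just b { tokens = tokens b - 1 }
        | otherwise = Nothing
      where
        b = refill now b0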
So far so simple really, but this has a big problem: What prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, and flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.
Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.
Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because a false positive only lets some made-up request ID be used. A false positive in the second bloom filter will cause the server to reject the client's proof of work. But the server can just request more work, or send a new request ID, and the client will follow along.
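Here's a toy illustration of those two checks in Haskell. Keysafe uses a real bloom filter implementation; the Integer bitmask, size, and hash count below are arbitrary stand-ins:

    import Data.Bits (setBit, testBit)
    import Data.Hashable (hashWithSalt)

    -- Toy bloom filter: an Integer bitmask with k salted hashes.
    data Bloom = Bloom { bits :: Integer, size :: Int, hashes :: Int }

    emptyBloom :: Bloom
    emptyBloom = Bloom 0 (2 ^ 20) 4

    indices :: Bloom -> String -> [Int]
    indices b v = [ hashWithSalt k v `mod` size b | k <- [1 .. hashes b] ]

    insert :: String -> Bloom -> Bloom
    insert v b = b { bits = foldl setBit (bits b) (indices b v) }

    member :: String -> Bloom -> Bool
    member v b = all (testBit (bits b)) (indices b v)

    -- A request ID is acceptable if the server issued it (first
    -- filter) and it has not already been spent (second filter).
    checkRequestId :: Bloom -> Bloom -> String -> Bool
    checkRequestId issued spent rid =
        member rid issued && not (member rid spent)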
The other gotcha with bloom filters is that filling them up too far sets too many bits, and so false positive rates go up. To deal with this, keysafe just keeps count of how many request IDs it has generated, and once there are too many to fit in a bloom filter, it makes a new, empty bloom filter and starts storing request IDs in it. The old bloom filter is still checked too, providing a grace period for old request IDs to be used. Using bloom filters that occupy around 32 mb of RAM, this rotation only has to be done every million requests or so.
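Continuing the toy sketch above, rotation could look like this, with the million-ID threshold from the text and everything else illustrative:

    -- Once the current filter has absorbed maxIds request IDs,
    -- retire it to `old` and start fresh; membership checks
    -- consult both, giving old request IDs a grace period.
    data RotatingBloom = RotatingBloom
        { cur :: Bloom, old :: Bloom, count :: Int }

    maxIds :: Int
    maxIds = 1000000

    noteRequestId :: String -> RotatingBloom -> RotatingBloom
    noteRequestId rid rb
        | count rb >= maxIds = noteRequestId rid
            rb { cur = emptyBloom, old = cur rb, count = 0 }
        | otherwise =
            rb { cur = insert rid (cur rb), count = count rb + 1 }

    memberEither :: String -> RotatingBloom -> Bool
    memberEither rid rb = member rid (cur rb) || member rid (old rb)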
But, that rotation opens up another DOS! An attacker could cause lots of request IDs to be generated, and so force the server to rotate its bloom filters too quickly, which would prevent any requests from being accepted. To solve this DOS, just use one more token bucket, to limit the rate that request IDs can be generated, so that the time it would take an attacker to force a bloom filter rotation is long enough that any client will have plenty of time to complete its proof of work.
This sounds complicated, and probably it is, but the implementation only took 333 lines of code. About the same number of lines that it took to implement the entire keysafe HTTP client and server using the amazing servant library.
There are a number of knobs that may need to be tuned to dial it in, including the size of the token buckets, their refill rate, the size of the bloom filters, and the number of argon2 iterations in the proof of work. Servers may eventually need to adjust those on the fly, so that if someone decides it's worth burning large quantities of CPU to abuse keysafe for general data storage, the server throttles down to a rate that will take a very long time to fill up its disk.
This protects against DOS attacks that fill up the keysafe server storage. It does not prevent a determined attacker, who has lots of CPU to burn, from flooding so many requests that legitimate clients are forced to do an expensive proof of work and then time out waiting for the server. But that's an expensive attack to keep running, and the proof of work can be adjusted to make it increasingly expensive.
Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties for modern hardware. Other than that, I'm fairly happy with how it's turned out.
Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.
(Above is for the password "makesad spindle stick")
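For a back-of-envelope version of that estimate, where every number below is a hypothetical placeholder rather than keysafe's actual model or a real spot price:

    -- Hypothetical figures, for illustration only.
    guesses, secondsPerGuess, dollarsPerCpuHour, attackCost :: Double
    guesses = 2 ** 30            -- a password with ~30 bits of entropy
    secondsPerGuess = 10         -- argon2 tuned to ~10 CPU-seconds/try
    dollarsPerCpuHour = 0.01     -- made-up spot price per CPU-hour
    attackCost = guesses * secondsPerGuess / 3600 * dollarsPerCpuHour

    main :: IO ()
    main = putStrLn ("estimated attack cost: $" ++ show attackCost)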
If you'd like to be an early adopter, install it like this:
    sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
    stack install keysafe
Then run ~/.local/bin/keysafe --backup --store-local to back up a gpg key.
I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top-of-the-line laptop or server-class machine that's less than a year old, send me a benchmark:
    ~/.local/bin/keysafe --benchmark | mail email@example.com -s benchmark
Bonus announcement: http://hackage.haskell.org/package/zxcvbn-c/ is my quick Haskell interface to the C version of the zxcvbn password strength estimation library.
PS: Past 50% of my goal on Patreon!
Have you ever thought about using a gpg key to encrypt something, but didn't due to worries that you'd eventually lose the secret key? Or maybe you did use a gpg key to encrypt something and lost the key. There are nice tools like paperkey to back up gpg keys, but they require things like printers, and a secure place to store the backups.
I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc. I know this is a missing piece in the git-annex assistant, which makes it easy to generate a gpg key to encrypt your data, but can't help you back up the secret key.
So, I'm thinking about storing secret keys in the cloud. Which seems scary to me, since when I was a Debian Developer, my gpg key could have been used to compromise millions of systems. But this is not about developers, it's about users, and so trading off some security for some ease of use may be appropriate. Especially since the alternative is no security. I know that some folks back up their gpg keys in the cloud using Dropbox. We can do better.
I've thought up a design for this, called keysafe. The synopsis of how it works is:
The secret key is split into three shards, and each is uploaded to a server run by a different entity. Any two of the shards are sufficient to recover the original key. So any one server can go down and you can still recover the key.
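Here's a toy Haskell illustration of 2-of-3 secret sharing over a prime field, showing why any two shares suffice. Keysafe uses a real secret sharing implementation; this just shows the idea:

    p :: Integer
    p = 2 ^ 127 - 1  -- a Mersenne prime larger than the secret

    -- Degree-1 polynomial f(x) = secret + r*x; shares are f(1..3).
    -- In practice r must be cryptographically random.
    split :: Integer -> Integer -> [(Integer, Integer)]
    split secret r = [ (x, (secret + r * x) `mod` p) | x <- [1, 2, 3] ]

    -- Lagrange interpolation at x=0 recovers the secret from any
    -- two distinct shares.
    recover :: (Integer, Integer) -> (Integer, Integer) -> Integer
    recover (x1, y1) (x2, y2) =
        (y1 * x2 * inv (x2 - x1) + y2 * x1 * inv (x1 - x2)) `mod` p
      where
        inv a = modPow (a `mod` p) (p - 2) p  -- inverse via Fermat

    modPow :: Integer -> Integer -> Integer -> Integer
    modPow b e m
        | e == 0 = 1
        | even e = (h * h) `mod` m
        | otherwise = (b * h * h) `mod` m
      where
        h = modPow b (e `div` 2) m

    main :: IO ()
    main = do
        let [s1, s2, s3] = split 42 123456789
        print (recover s1 s2, recover s1 s3, recover s2 s3)  -- all 42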
A password is used to encrypt the key. For the servers to access your key, two of them need to collude together, and they then have to brute force the password. The design of keysafe makes brute forcing extra difficult by making it hard to know which shards belong to you.
Indeed the more people that use keysafe, the harder it becomes to brute-force anyone's key!
I could really use some additional reviews and feedback on the design by experts.
I've been funded for two years by the DataLad project to work on git-annex. This has been a super excellent gig; they provided funding and feedback on ways git-annex could be improved, and I had a large amount of flexibility to decide what to work on in git-annex. Also plenty of spare time to work on new projects like propellor, concurrent-output, and scroll. It was an awesome way to spend the last two years of my twenty years of free software.
That funding is running out. I'd like to continue this great streak of working on the free software projects that are important to me. I'd normally dip into my savings at this point and keep on going until some other source of funding turned up. But, my savings are about to be obliterated, since I'm buying the place where I've had so much success working distraction-free.
So, I've started a Patreon page to fund my ongoing work. Please check it out and contribute if you want to.
Some details about projects I want to work on this fall:
PocketCHIP is the pocket-sized Linux terminal I always used to want. Which is to say, it runs (nearly) stock Debian, X, etc, it has a physical keyboard, and the hardware and software are (nearly) non-proprietary and very hackable. Best of all, it's fun and it encourages playful learning.
It's also clunky and flawed and constructed out of cheap components. This keeps it from being something I'd actually carry around in my pocket and use regularly. The smart thing they've done though is embrace these limitations, targeting it at the hobbyist, and not trying to compete with smart phones. The PocketCHIP is its own little device in its own little niche.
Unless you're into hardware hacking and want to hook wires up to the GPIO pins, the best hardware feature is the complete keyboard, with even Escape and Control and arrow keys. You can ssh around and run vi on it, run your favorite REPL (I use ghci) to do quick programming, etc. The keyboard is small and a little strange, but you get used to it quickly; your QWERTY muscle memory is transferable to it. I had fun installing nethack on it and handing it to my sister who had never played nethack before, to watch her learn to play.
The screen resolution is 480x272, which is pretty tiny. And, it's a cheap resistive touchscreen, with a bezel around it. This makes it very hard to use scroll bars and icons near the edge of the screen. The customized interface that ships with it avoids these problems, and so I've been using that for now. When I have time, I plan to put a fullscreen window manager on it, and write a pdmenu menu configuration for it, so everything can be driven using the keyboard.
I also have not installed Debian from scratch on it yet. This would be tricky because it uses a somewhat patched kernel (to support the display and wifi). The shipped distribution is sadly not entirely free software. There are some nonfree drivers and firmwares. And, they included a non-free gaming environment on it (a very nice one for part of the audience, that allows editing the games, but non-free nevertheless). They did do a good job of packaging up all the custom software they include on it, although they don't seem to have published source packages for everything.
(They might be infringing my GPL copyright of flash-kernel by distributing a modified version without source. I say "might" because flash-kernel is a pile of shell scripts, so you could probably extract the (probably trivial) modifications. Still.. Also, they seem to have patched network-manager in some way and I wasn't able to find the corresponding source.)
The battery life is around 5 hours. Unfortunately the "sleep" mode only turns off the backlight and maybe wifi, and leaves the rest of the system running. This, and the slightly awkward form factor (too big to really comfortably fit in a pocket), limit the use of PocketCHIP quite a bit. Perhaps the sleeping will get sorted out, and perhaps I'll delete the GPIO breakout board from the top of mine to make it more pocket-sized.
Propellor is my second big Haskell program. I recently described the motivation for it like this, in a proposal for a Linux.Conf.Au talk:
The configuration of Linux hosts has become increasingly declarative, managed by tools like puppet and ansible, or by the composition of containers. But if a server is a collection of declarative properties, how do you make sure that changes to that configuration make sense? You can test them, but eventually it's 3 AM and you have an emergency fix that needs to go live immediately.
Data types to the rescue! While data types are usually used to prevent eg, combining an Int and a Bool, they can be used at a much more abstract level, for example to prevent combining a property that needs a Debian system with a property that needs a Red Hat system.
Propellor leverages Haskell's type system to prove the consistency of the properties it will apply to a host.
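Here's a toy version of that idea. Propellor's actual types are richer (they track sets of supported OSes, among other things), but this shows the compile-time check:

    {-# LANGUAGE DataKinds, KindSignatures #-}

    -- Toy sketch: a Property is tagged with the OS it targets,
    -- and combining properties for different OSes won't compile.
    data OS = Debian | RedHat

    newtype Property (os :: OS) = Property (IO ())

    aptInstalled :: String -> Property 'Debian
    aptInstalled pkg = Property (putStrLn ("apt-get install " ++ pkg))

    yumInstalled :: String -> Property 'RedHat
    yumInstalled pkg = Property (putStrLn ("yum install " ++ pkg))

    -- Both properties must target the same OS.
    combine :: Property os -> Property os -> Property os
    combine (Property a) (Property b) = Property (a >> b)

    ok :: Property 'Debian
    ok = aptInstalled "ssh" `combine` aptInstalled "vim"

    -- Rejected by the type checker:
    -- bad = aptInstalled "ssh" `combine` yumInstalled "vim"

    main :: IO ()
    main = let Property run = ok in run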
The real origin story though, is that I wanted to finally start using configuration management, but the tools for it all seemed very complicated and built on shaky foundations (like piles of yaml), and it seemed it would be easier to write my own than deal with that. Meanwhile, I had Haskell burning a hole in my pocket, ready to be used in a second large project after git-annex.
Propellor has averaged around 2.5 contributions per month from users since it got started, with increasing numbers recently. That's despite having many fewer users than git-annex, which remember gets perhaps 1 patch per month.
Of course, I've "cheated" by making sure that propellor's users know Haskell, or are willing to learn some. And, propellor is very compositional; adding a new property to it is not likely to be complicated by any of the existing code. So it's easy to extend, if you're able to use it.
At this point propellor has a small community of regular contributors, and I spend some pleasant weekend afternoons reviewing and merging their work.
Much of my best work on propellor has involved keeping the behavior of the program the same while making its types better, to prevent mistakes. Propellor's core data types have evolved much more than in any program I worked on before. That's exciting!
concurrent-output is a more meaty Haskell library than the ones I've covered so far. Its interface is simple, but there's a lot of complexity under the hood. Things like optimised console updates, ANSI escape sequence parsing, and transparent paging of buffers to disk.
It developed out of needing to display multiple progress bars on the console in git-annex, and also turned out to be useful in propellor. And since it solves a general problem, other Haskell programs are moving toward using it, like shake and stack.
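For a flavor of the interface, here's a minimal sketch that displays two independently updating status lines using the regions API (simplified; the async library provides the two threads):

    import Control.Concurrent (threadDelay)
    import Control.Concurrent.Async (concurrently_)
    import Control.Monad (forM_)
    import System.Console.Regions

    main :: IO ()
    main = displayConsoleRegions $
        concurrently_ (worker "download A") (worker "download B")
      where
        -- Each worker gets its own console region and repeatedly
        -- rewrites it; concurrent-output keeps the display sane.
        worker name = withConsoleRegion Linear $ \r ->
            forM_ [0, 10 .. 100 :: Int] $ \pct -> do
                setConsoleRegion r (name ++ ": " ++ show pct ++ "%")
                threadDelay 200000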