databranches: using git as a database

I've just released git-annex version 3, which stops cluttering the filesystem with .git-annex directories. Instead it stores its data in a git-annex branch, which it manages entirely transparently to the user. It is essentially now using git as a distributed NOSQL database. Let's call it a databranch.

This is not an unheard-of thing to do with git. The git notes feature built into recent git does something similar, using a dynamically balanced tree in a hidden branch to store notes. My own pristine-tar injects data into a git branch. (Thanks to Alexander Wirt for showing me how to do that when I was a git newbie.) Some distributed bug trackers store their data in git in various ways.

What I think takes git-annex beyond these is that it not only injects data into git, but does so in a way that's efficient for large quantities of changing data, and it automates merging remote changes into its databranch. This is novel enough to be worth writing up how I did it, especially the latter, which tends to be a weak spot in things that use git this way.

Indeed, it's important to approach your design for using git as a database from the perspective of automated merging. Get the merging right and the rest will follow. I've chosen to use the simplest possible merge, the union merge: When merging parent trees A and B, the result will have all files that are in either A or B, and files present in both will have their lines merged (and possibly reordered or uniqed).
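
To make that concrete, here is a minimal sketch of a union merge in Python (illustrative only; git-annex itself is written in Haskell and does this via git plumbing):

    def union_merge(a_lines, b_lines):
        # Keep every line that appears in either version of the file.
        # Order is not preserved and duplicates are collapsed, much
        # like running the merged result through sort | uniq.
        return sorted(set(a_lines) | set(b_lines))

    def union_merge_trees(tree_a, tree_b):
        # Merge two trees, each a mapping of path -> list of lines.
        # Files present in only one tree are carried over unchanged;
        # files present in both get their lines union merged.
        merged = {}
        for path in set(tree_a) | set(tree_b):
            if path in tree_a and path in tree_b:
                merged[path] = union_merge(tree_a[path], tree_b[path])
            else:
                merged[path] = tree_a.get(path, tree_b.get(path))
        return merged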

The main thing git-annex stores in its databranch is a bunch of presence logs. Each log file corresponds to one item, and has lines with this form:

    timestamp [0|1] id

This records whether the item was present at the specified id at a given time. It can be easily union merged, since only the newest timestamp for an id is relevant. Older lines can be compacted away whenever the log is updated. Generalizing this technique for other kinds of data is probably an interesting problem. :)
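
For illustration, here is a rough sketch of that compaction step (not git-annex's actual code): after a union merge the log may contain several lines per id, and only the newest one needs to be kept.

    def compact_presence_log(lines):
        # Each line has the form: timestamp [0|1] id
        # Keep only the newest entry per id.
        newest = {}
        for line in lines:
            timestamp, present, ident = line.split(None, 2)
            if ident not in newest or float(timestamp) > float(newest[ident][0]):
                newest[ident] = (timestamp, present)
        return ["%s %s %s" % (t, p, i) for i, (t, p) in sorted(newest.items())]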

While git can union merge changes into the currently checked out branch, when using git as a database you want to merge into your internal-use databranch instead, and maintaining a checkout of that branch is inefficient. So git-annex includes a general purpose git-union-merge command that can union merge changes into a git branch, efficiently, without needing the branch to be checked out.

Another problem is how to trigger the merge when git pulls changes from remotes. There is no suitable git hook (post-merge won't do, because the checked out branch may not change at all). git-annex works around this problem by automatically merging */git-annex into git-annex each time it is run. I hope that git might eventually get such capabilities built into it, to better support this type of thing.
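
As a sketch of that workaround, something along these lines finds the remote databranches to merge on each run (merge_into here merely stands in for git-annex's git-union-merge; its real interface may differ):

    import subprocess

    def remote_databranches(name="git-annex"):
        # List remote-tracking branches named */git-annex, i.e. the
        # databranches that remotes have pushed.
        out = subprocess.check_output(
            ["git", "for-each-ref", "--format=%(refname:short)", "refs/remotes"])
        return [ref for ref in out.decode().splitlines()
                if ref.endswith("/" + name)]

    def auto_merge(merge_into):
        # Union merge every remote databranch into the local one.
        # merge_into is a placeholder for the actual union merge step.
        for ref in remote_databranches():
            merge_into("git-annex", ref)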

So that's the data. Now, how to efficiently inject it into your databranch? And how to efficiently retrieve it?

The second question is easier to answer, although it took me a while to find the right way, which is two orders of magnitude faster than the wrong way, and fairly close in speed to reading data files directly from the filesystem. The right choice is to use git-cat-file --batch, starting it up the first time data is requested and leaving it running for further queries. This would be straightforward, except that git-cat-file --batch is a little difficult when a file is requested that does not exist: to detect that, you also have to examine its stderr for error messages, which takes some careful parsing but is straightforward enough. Perhaps git-cat-file --batch could be improved to print something machine parseable to stdout when it cannot find a file.
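
Here is a rough sketch of that kind of long-running query process in Python (illustrative only; git-annex is Haskell, and this simplified version assumes a git that reports a missing file with a "missing" line on stdout, glossing over the stderr handling described above):

    import subprocess

    class DataBranchReader:
        # Read files from the databranch through a single long-running
        # git cat-file --batch process, instead of one git call per query.
        def __init__(self, branch="git-annex"):
            self.branch = branch
            self.proc = subprocess.Popen(
                ["git", "cat-file", "--batch"],
                stdin=subprocess.PIPE, stdout=subprocess.PIPE)

        def get(self, path):
            self.proc.stdin.write(("%s:%s\n" % (self.branch, path)).encode())
            self.proc.stdin.flush()
            header = self.proc.stdout.readline().decode().rstrip("\n")
            if header.endswith(" missing"):
                return None                      # file not in the branch
            sha, objtype, size = header.split()
            content = self.proc.stdout.read(int(size))
            self.proc.stdout.read(1)             # trailing newline after content
            return content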

Efficiently injecting changes into the databranch was another place where my first attempt was an order of magnitude slower than my final code. The key trick is to maintain a separate index file for the branch. (Set GIT_INDEX_FILE to make git use it.) Then changes can be fed into git by using git hash-object, and those hashes recorded into the branch's index file with git update-index --index-info. Finally, just commit the separate index file and update the branch's ref.
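
A sketch of that sequence, using git plumbing from Python (the branch and index file names are just examples, and it assumes the branch already exists):

    import os, subprocess

    def commit_to_databranch(changes, branch="git-annex",
                             index_file=".git/index.git-annex"):
        # changes maps path -> bytes to store in the branch.
        # The separate index file keeps this entirely independent of
        # whatever branch is checked out in the working tree.
        env = dict(os.environ, GIT_INDEX_FILE=index_file)

        # Store each blob, and stage it in the branch's own index.
        info = []
        for path, content in changes.items():
            sha = subprocess.check_output(
                ["git", "hash-object", "-w", "--stdin"],
                input=content).decode().strip()
            info.append("100644 %s\t%s" % (sha, path))
        subprocess.run(["git", "update-index", "--index-info"],
                       input=("\n".join(info) + "\n").encode(),
                       env=env, check=True)

        # Write a tree from that index, commit it on top of the branch,
        # and advance the branch's ref, all without a checkout.
        tree = subprocess.check_output(
            ["git", "write-tree"], env=env).decode().strip()
        parent = subprocess.check_output(
            ["git", "rev-parse", branch]).decode().strip()
        commit = subprocess.check_output(
            ["git", "commit-tree", tree, "-p", parent],
            input=b"update databranch\n").decode().strip()
        subprocess.run(["git", "update-ref", "refs/heads/" + branch, commit],
                       check=True)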

That works ok, but the sad truth is that git's index files don't scale well as the number of files in the tree grows. Once you have a hundred thousand or so files, updating an index file becomes slow, since for every update, git has to rewrite the entire file. I hope that git will be improved to scale better, perhaps by some git wizard who understands index files (does anyone except Junio and Linus?) arranging for them to be modified in-place.

In the meantime, I use a workaround: Each change that will be committed to the databranch is first recorded into a journal file, and when git-annex shuts down, it runs git hash-object just once, passing it all the journal files, and feeds the resulting hashes into a single call to git update-index. Of course, my database code has to make sure to check the journal when retrieving data. And of course, it has to deal with possibly being interrupted in the middle of updating the journal, or before it can commit it, and so forth. If gory details interest you, the complete code for using a git branch as a database, with journaling, is here.
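
Here is a rough sketch of that journaling workaround (the journal directory and the crude path flattening are purely illustrative, and it omits the interruption handling mentioned above):

    import os, subprocess

    JOURNAL_DIR = ".git/annex-journal"   # hypothetical location

    def journal_change(path, content):
        # Record a pending databranch change as an ordinary file;
        # flattening the path is a stand-in for real escaping.
        os.makedirs(JOURNAL_DIR, exist_ok=True)
        with open(os.path.join(JOURNAL_DIR, path.replace("/", "_")), "wb") as f:
            f.write(content)

    def flush_journal(index_env):
        # At shutdown: one git hash-object call for all journal files,
        # then one git update-index call to stage the results into the
        # databranch's separate index (index_env sets GIT_INDEX_FILE).
        files = sorted(os.listdir(JOURNAL_DIR))
        if not files:
            return
        paths = "".join(os.path.join(JOURNAL_DIR, f) + "\n" for f in files)
        shas = subprocess.check_output(
            ["git", "hash-object", "-w", "--stdin-paths"],
            input=paths.encode()).decode().split()
        info = "".join("100644 %s\t%s\n" % (sha, f.replace("_", "/"))
                       for sha, f in zip(shas, files))
        subprocess.run(["git", "update-index", "--index-info"],
                       input=info.encode(), env=index_env, check=True)
        for f in files:
            os.unlink(os.path.join(JOURNAL_DIR, f))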

After all that, git-annex turned out to be nearly as fast as it was back when it simply read files from the filesystem, and actually faster in some cases. And without the clutter of the .git-annex/ directory, git use is faster overall, commits are uncluttered, and there's no difficulty with branching. Using a git branch as a database is not always the right choice, and git's plumbing could be improved to better support it, but it is an interesting technique.

thoughts on the last shuttle launch

I watched the final shuttle launch this morning; for several years I've tried to catch shuttle events, knowing it would soon be over, but before that the shuttle had faded into the background for me as it did for so many of us. It was an impractical rocket to nowhere that we mostly only paid attention to when it blew up.

Fourteen years ago today, Debian was flying in space aboard Columbia. According to the press release, it ran on an SSD on an embedded 486 in the lab module, and controlled plant watering, telemetry, and video. At the time, I had just become a Debian developer, and was very impressed to be part of a project that was involved in that. It was early days for Debian, and near the midpoint of the shuttle's thirty years.

Now it seems likely that Debian, or its derivatives (or at least Free Software) will easily outlast that thirty year run, but I do wonder to what extent our work will fade into the background (and what interesting ways it will find to explode) over that time span and beyond. We'd say we have better methods than the centralized, committee-driven, top-down, PR-conscious NASA... impressive though it can be at its best. We have dreams just as noble as the ones behind the space program, but also goals that are more adaptable, equally at home flying in space, or embedded in some pocket-lint-laden artifact of a perhaps more contemporary inward turn.

from the convoy

As I type this, it's just past midnight. I'm in the back of a BMW somewhere in east Germany, and the Debian UK convoy is doing 110 mph on the autobahn, twenty hours into a twenty-five hour first leg of our trip to Banja Luka.

This all started out so sanely, with a 3 am departure to catch the 6 am ferry at Dover, followed by a couple of hours' leisurely breakfast onboard. The first hint that yes, this is a road trip in which things will go wrong was a minor bumper denting of one of the convoy's cars by a stray landrover during the ferry trip, but it didn't really faze us. On to Cologne, for a very nice lunch and to pick up another person.

But we didn't anticipate how brutal the next leg to Graz would be. Nor did we count on apparently half of Germany and the Netherlands getting out their campers and heading east this Friday. We spent multiple hours in stop-and-go, and many more in constant traffic. Finally it opened out, so we can follow the night speed limit. And while we started out horsing around on the radio, we've developed some real comms discipline by now to keep the convoy together.

People also seem to be keeping amazingly well rested while not driving. To add to my sleep debt, I flew in the day before, but I actually feel caught up now. Still, I've not been driving at all, due to a mislaid license and a general inability to safely drive a right-hand-drive stick shift at 100 mph at night. Our 7 drivers are doing an amazing job.

Update: Arrived safely in Graz at 4 am. Austria tantalized with 30+ miles of tunnels through the Alps, but I've not seen an alp yet.


PS: you'll never appreciate a stinky, free bathroom until you're in a country where all the antiseptic bathrooms cost money and hordes of vacationers are doing the logical thing next to service stations.

arrival at DebConf

The trip down from Graz to Banja Luka was much easier than the day before. After a while you just get used to being sat in a car for ages. Plenty of nice scenery to enjoy through Slovenia. Eventually our cars' GPSes began to fail, showing us driving through fields, and we were stuck for 1.5 hours in a traffic jam when the four-lane highway seemed to end. Got around that with some guesswork, and on into Croatia by back roads.

The Bosnian border was an interesting experience; all the guards could say in English was "green card! green card!" -- which from an American POV is an unsettling thing to be asked for at a border, especially if they've already taken your passport away -- but at least we were not detained overnight.

While the drivers were away getting the car insurance settled, it descended toward farce as we had to hand-roll the cars forward to let trucks get into the country. (Or we thought we had to... one was rolled with the keys in it, as it turned out.)

Arrived at the hotel in Banja Luka in the middle of a wedding, which was amazingly loud (I could still hear it from the 5th floor at 2 am). There's also a casino at the hotel, so the first impression was garish and loud! ... But now that it's a rainy Sunday, it seems much nicer here.
