Sometimes it makes sense to ship a program to linux users in ready-to-run form that will work no matter what distribution they are using. This is hard.

Often a commercial linux game will bundle up a few of the more problematic libraries, and ship a dynamic executable that still depends on other system libraries. These days they're building and shipping entire Debian derivatives instead, to avoid needing to deal with that.

There have been a few efforts to provide so-called one-click install package systems that, AFAIK, have not been widely used. I don't know if they generally solved the problem.

More modern approaches seem to be things like docker, which move the application bundle into a containerized environment. I have not looked at these, but so far they do not seem to have spread widely enough to be a practical choice if you want to provide something that will work for a majority of linux users.

So, I'm surprised that I seem to have managed to solve this problem using nothing more than some ugly shell scripts.

My standalone tarballs of git-annex now seem fairly good at running on a very wide variety of systems.

For example, I unpacked the tarball into the Debian-Installer initramfs and git-annex could run there. I can delete all of /usr and it keeps working! All it needs is a basic sh, which even busybox provides.

Looks likely that the new armel standalone tarball of git-annex will soon be working on embedded systems as odd as the Synology NAS, and it's already been verified to work on Raspbian. (I'm curious if it would work on Android, but that might be a stretch.)

Currently these tarballs are built for a specific architecture, but there's no particular reason a single one couldn't combine binaries built for each supported architecture.
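A multi-architecture tarball could pick the right set of binaries at run time with a small dispatch script. This is only a sketch of the idea; the subdirectory names below are assumptions, not what the real tarballs use:

```shell
#!/bin/sh
# Map the machine architecture to a per-architecture subdirectory
# inside a hypothetical combined tarball.
arch_dir () {
	case "$1" in
		x86_64)  echo x86_64-linux ;;
		i?86)    echo i386-linux ;;
		armv*)   echo arm-linux ;;
		aarch64) echo arm64-linux ;;
		*)       echo "unsupported architecture: $1" >&2; return 1 ;;
	esac
}

dir=$(arch_dir "$(uname -m)" 2>/dev/null || echo unknown)
# The wrapper scripts would then run binaries out of "$dir",
# e.g. exec "$dir/bin/git-annex" "$@"
echo "$dir"
```

Each per-architecture directory would carry its own linker and glibc, so the wrappers need only be pointed at the right one.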

technical details

The main trick is to ship a copy of the dynamic linker, as well as all the glibc libraries and associated files, and of course every other library and file the application needs.

Shipping the linker lets a shell script wrapper be made around each binary; the wrapper runs the bundled linker and passes it the library directories to search. This way the binary can be run, bypassing the system's own dynamic linker (which might not like it) and using the included glibc.

For example a shell script that runs the git binary from the bundle:

exec "$GIT_ANNEX_LINKER" --library-path "$GIT_ANNEX_LD_LIBRARY_PATH" "$GIT_ANNEX_SHIMMED/git/git" "$@"

I have to set quite a lot of environment variables, to avoid using any files from the system and instead use ones from my tarball. One important one is GCONV_PATH. Note that LD_LIBRARY_PATH does not have to be set, and this is nice because it allows running a few programs from the host system, such as its web browser.
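To make the environment setup concrete, here is a sketch of the kind of variables a wrapper has to establish. The paths and the `GIT_ANNEX_BASE` variable are assumptions for illustration; the real scripts compute everything relative to the tarball's unpacked location:

```shell
#!/bin/sh
# Illustration of the environment a wrapper sets up.
# All paths are hypothetical; only GCONV_PATH and the
# GIT_ANNEX_* names come from the post itself.
base="${GIT_ANNEX_BASE:-/opt/git-annex.linux}"

# glibc loads its charset-conversion modules from GCONV_PATH;
# pointing it at the bundled copies avoids touching the system's /usr.
GCONV_PATH="$base/lib/gconv"
export GCONV_PATH

# The bundled dynamic linker, and the directories it should search.
GIT_ANNEX_LINKER="$base/lib/ld-linux-x86-64.so.2"
GIT_ANNEX_LD_LIBRARY_PATH="$base/lib:$base/usr/lib"
GIT_ANNEX_SHIMMED="$base/shimmed"

# Note what is NOT here: LD_LIBRARY_PATH is deliberately left unset,
# so host programs (such as a web browser) still run normally.
```

With those set, the `exec "$GIT_ANNEX_LINKER" --library-path ...` line shown above does the rest.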

worse is better

Of course I'll take a proper distribution package anytime over this.

Still, it seems to work quite well, in all the horrible cases that require it.

broken link
The link for "standalone tarballs of git-annex" does not work.
Comment by Regis
comment 2

We've been through this so many times that it's painful that people STILL buy this.

I've seen over 10 such solutions come and go in the mere 5 years I've been following the open-source community. Some even had the backing of major software projects, e.g. Inkscape making their binaries available in such form. Not even a single one took off, ever.

I could go on and on about how and why this is broken, but I've done this like 10 times already - each time somebody wanted to embrace such a thing - and I don't want to do this any more. I'll give a hint though: this is exactly what Windows forces developers to do. And I'd be hard-pressed to name a platform with an even more borked software management than Windows.

And with Linux you have another problem many people don't realize - integration. As long as it's done well people don't know it's even there. Admittedly, it doesn't matter all that much for fullscreen games, but for anything else it's crucial. With the traditional repository model, integration is handled by the package maintainer, who actually knows the distro and how to make things work well with it. If you try to shift that burden to app developers, bad things tend to happen, because the app developers have no idea about distribution internals and conventions, and because they're usually not interested in that stuff. Oh, and if you try to shift that burden to developers AND make them target several distributions at once, that's a recipe for disaster.

Comment by Shnatsel

@joey: Interesting, reminds me of Autopackage.

@Shnatsel "And with Linux you have another problem many people don't realize - integration."

Yeah, integration (in the sense of "centralization") is a problem in current linux distros; see Molnar's analysis: distros should not try to own all the apps and overburden the system. Separation between apps and the core system (e.g. Android) by bundling the apps is the proven solution. This has recently been called a "half-rolling release".

Comment by mandrit