Here's a simple program written using this monad. See if you can guess what it might do:
import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
    add 31
    forever constants $ do
        add 1
        output
If you feed the generated brainfuck code into a brainfuck interpreter (I'm using
hsbrainfuck for my testing), you'll find that it loops forever and prints out
each character, starting with space (32), in ASCIIbetical order.
The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).
newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)
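To make this concrete, here's a sketch of how the rest of the monad's core might look, restating the newtype from above. This is my reconstruction, not necessarily the library's exact code: the Functor and Applicative instances are the usual boilerplate, and `opcode`, `next`, and `prev` are plausible primitives for emitting code while tracking the data pointer.

```haskell
import Control.Monad (ap, liftM)

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))
type DataPointer = Integer

runBF :: BrainFuck a -> DataPointer -> ([Char], DataPointer, a)
runBF (BrainFuck f) = f

instance Functor BrainFuck where
    fmap = liftM

instance Applicative BrainFuck where
    pure x = BrainFuck $ \loc -> ([], loc, x)
    (<*>) = ap

instance Monad BrainFuck where
    -- Sequencing concatenates the emitted brainfuck code and threads
    -- the data pointer position through.
    BrainFuck f >>= g = BrainFuck $ \loc ->
        let (code, loc', a) = f loc
            (code', loc'', b) = runBF (g a) loc'
        in (code ++ code', loc'', b)

-- Emit a raw brainfuck opcode that doesn't move the data pointer.
opcode :: Char -> BrainFuck ()
opcode c = BrainFuck $ \loc -> ([c], loc, ())

-- Move the data pointer, tracking its new position.
next, prev :: BrainFuck ()
next = BrainFuck $ \loc -> (">", loc + 1, ())
prev = BrainFuck $ \loc -> ("<", loc - 1, ())
```

Running `runBF (next >> next >> opcode '+' >> prev) 0` yields the string `">>+<"` with the data pointer left at address 1.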
Having the data pointer address available allows writing some useful utility
functions, like this one, which uses the next (brainfuck opcode >) and
prev (brainfuck opcode <) primitives.
-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
    a <- addr
    if a > n
        then prev >> setAddr n
        else if a < n
            then next >> setAddr n
            else return ()
Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to build anything useful with it.
-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
    open
    a
    close
To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)
I wrote a little
brainfuckConstants that runs a BrainFuck
program with these constants set up at the beginning.
It just generates the brainfuck code for a series of ASCII art fishes.
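A sketch of how such code can be generated: storing each value n in cell n and then stepping right gives a run of +s per cell, which looks vaguely fish-like. This is my own reconstruction of the idea, not necessarily the library's exact output.

```haskell
-- Generate brainfuck that leaves the value n in cell n, for n in 0..8.
-- Each cell gets n '+'s, then '>' moves on; the pointer ends at cell 9.
constantsCode :: String
constantsCode = concatMap fish [0 .. 8]
  where
    fish n = replicate n '+' ++ ">"
```

Evaluating `constantsCode` gives `">+>++>+++>++++>..."`, a school of 9 brainfuck fishes.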
With the fishes^Wconstants in place, it's possible to write a more useful
loop. Notice how the data pointer location is saved at the beginning, and
restored inside the loop body. This ensures that the provided BrainFuck
action doesn't stomp on our constants.
-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a
I haven't bothered to make sure that the constants are really constant,
but that could be done. It would just need a Control.Monad.BrainFuck.Safe
module, that uses a different monad, in which next and prev
don't do anything when the data pointer is pointing at a constant.
Or, perhaps this could be statically checked at the type level, with
type level naturals. It's Haskell, we can make it safer if we want to. ;)
So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.
Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.
The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. Only downside is it's a looong time between new episodes.
The Haskell Cast: Panel discussion with a guest; there is a lot of expertise among them and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead.
Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.
Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fiction. They seem to get all the award-winning short stories. I sometimes fall asleep to these, which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"
Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.
Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.
Love and Radio: This American Life squared and on acid.
Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P
Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.
Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.
PS: A nice podcatcher, for the technically inclined, is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
You have a machine someplace, probably in The Cloud, and it has Linux installed, but not to your liking. You want to do a clean reinstall, maybe switching the distribution, or getting rid of the cruft. But this requires running an installer, and it's too difficult to run d-i on remote machines.
Wouldn't it be nice if you could point a program at that machine and have it do a reinstall, on the fly, while the machine was running?
This is what I've now taught propellor to do! Here's a working configuration which will make propellor convert a system running Fedora (or probably many other Linux distros) to Debian:
testvm :: Host
testvm = host "testvm.kitenet.net"
    & os (System (Debian Unstable) "amd64")
    & OS.cleanInstallOnce (OS.Confirmed "testvm.kitenet.net")
        `onChange` propertyList "fixing up after clean install"
            [ User.shadowConfig True
            , OS.preserveRootSshAuthorized
            , OS.preserveResolvConf
            , Apt.update
            , Grub.boots "/dev/sda"
                `requires` Grub.installed Grub.PC
            ]
    & Hostname.sane
    & Hostname.searchDomain
    & Apt.installed ["linux-image-amd64"]
    & Apt.installed ["ssh"]
    & User.hasSomePassword "root"
And here's a video of it in action.
It was surprisingly easy to build this. Propellor already knew how to create a chroot, so from there it basically just has to move files around until the chroot takes over from the old OS.
After the cleanInstallOnce property does its thing, propellor is running inside a freshly debootstrapped Debian system. Then we just need a few more Properties to get from there to a bootable, usable system: install grub and the kernel, turn on shadow passwords, preserve a few config files from the old OS, etc.
It's really astounding to me how much easier this was to build than it was to build d-i. It took years to get d-i to the point of being able to install a working system. It took me a few part-time days to add this capability to propellor (it's 200 lines of code), and I've probably spent less than 30 days total developing propellor in its entirety.
So, what gives? Why is this so much easier? There are a lot of reasons:
Technology is so much better now. I can spin up cloud VMs for testing in seconds; I use VirtualBox to restore a system from a snapshot. So testing is much much easier. The first work on d-i was done by booting real machines, and for a while I was booting them using floppies.
Propellor doesn't have a user interface. The best part of d-i is preseeding, but that was mostly an accident; when I started developing d-i the first thing I wrote was main-menu (which is invisible 99.9% of the time), and we had to develop cdebconf, and tons of other UI. Probably 90% of d-i work involves the UI. Jettisoning the UI entirely thus speeds up development enormously. And propellor's configuration file blows d-i preseeding out of the water in expressiveness and flexibility.
Propellor has a much more principled design and implementation. Separating things into Properties, which are composable and reusable gives enormous leverage. Strong type checking and a powerful programming language make it much easier to develop than d-i's mess of shell scripts calling underpowered busybox commands etc. Properties often Just Work the first time they're tested.
No separate runtime. d-i runs in its own environment, which is really a little custom linux distribution. Developing linux distributions is hard. Propellor drops into a live system and runs there. So I don't need to worry about booting up the system, getting it on the network, etc etc. This probably removes another order of magnitude of complexity from propellor as compared with d-i.
This seems like the opposite of the Second System effect to me. So perhaps d-i was the second system all along?
I don't know if I'm going to take this all the way to "propellor is d-i 2.0". But in theory, all that's needed now is:
- Teaching propellor how to build a bootable image, containing a live Debian system and propellor. (Yes, this would mean reimplementing debian-live, but I estimate 100 lines of code to do it in propellor; most of the Properties needed already exist.) That image would then be booted up and perform the installation.
- Some kind of UI that generates the propellor config file.
- Adding Properties to partition the disk.
cleanInstallOnce and associated Properties will be included in
propellor's upcoming 1.1.0 release, and are available in git now.
Oh BTW, you could parameterize a few Properties by OS, and Propellor could be used to install not just Debian or Ubuntu, but whatever Linux distribution you want. Patches welcomed...
In a recent blog post, I mentioned how lucky I feel to keep finding ways to work on free software. In the past couple years, I've had a successful Kickstarter, and followed that up with a second crowdfunding campaign, and now a grant is funding my work. A lot to be thankful for.
A one-off crowdfunding campaign to fund free software development is wonderful, if you can pull it off. It can start a new project, or kick an existing one into a higher gear. But in many ways, free software development is a poor match for kickstarter-type crowdfunding. Especially when it comes to ongoing development, which it's really hard to do a crowdfunding pitch for. That's why I was excited to find Snowdrift.coop, which has a unique approach.
Imagine going to a web page for a free software project that you care about, and seeing this button:
That's a lot stronger incentive than some paypal donation button or flattr link! The details of how it works are explained on their intro page, or see the ever-insightful and thoughtful Mike Linksvayer's blog post about it.
When I found out about this, I immediately sent them a one-off donation. Later, I got to meet one of the developers face to face in Portland. I've also done a small amount of work on the Snowdrift platform, which is itself free software. (My haskell code will actually render that button above!)
Free software is important, and its funding should be based, not on how lucky or good we are at kickstarter pitches, but on its quality and how useful it is to everyone. Snowdrift is the most interesting thing I've seen in this space, and I really hope they succeed. If you agree, they're running their own crowdfunding campaign right now.
Propellor has supported docker containers for a "long" time, and it works great. This week I've worked on adding more container support.
docker containers (revisited)
The syntax for docker containers has changed slightly. Here's how it looks now:
example :: Host
example = host "example.com"
    & Docker.docked webserverContainer

webserverContainer :: Docker.Container
webserverContainer = Docker.container "webserver" "joeyh/debian-stable"
    & os (System (Debian (Stable "wheezy")) "amd64")
    & Docker.publish "80:80"
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"
That makes example.com have a web server in a docker container, as you'd expect, and when propellor is used to deploy the DNS server it'll automatically make www.example.com point to the host (or hosts!) where this container is docked.
I use docker a lot, but I have drunk little of the Docker Kool-Aid. I'm not keen on using random blobs created by random third parties using either unreproducible methods, or the weirdly underpowered dockerfiles. (As for vast complicated collections of containers that each run one program and talk to one another etc ... I'll wait and see.)
That's why propellor runs inside the docker container and deploys whatever configuration I tell it to, in a way that's both reproducible later and lets me use the full power of Haskell.
Which turns out to be useful when moving on from docker containers to something else...
systemd-nspawn containers

Propellor now supports containers using systemd-nspawn. It looks a lot like the docker example.
example :: Host
example = host "example.com"
    & Systemd.persistentJournal
    & Systemd.nspawned webserverContainer

webserverContainer :: Systemd.Container
webserverContainer = Systemd.container "webserver" chroot
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"
  where
    chroot = Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase
Notice how I specified the Debian Unstable chroot that forms the basis of this container. Propellor sets up the container by running debootstrap, boots it up using systemd-nspawn, and then runs inside the container to provision it.
Unlike docker containers, systemd-nspawn containers use systemd as their
init, and it all integrates rather beautifully. You can see the container
in systemctl status, including the services running inside it,
use journalctl to examine its logs, etc.
But no, systemd is the devil, and docker is too trendy...
chroots

Propellor now also supports deploying good old chroots. It looks a lot like the other containers. Rather than repeat myself a third time, and because we don't really run webservers inside chroots much, here's a slightly different example.
example :: Host
example = host "mylaptop"
    & Chroot.provisioned (buildDepChroot "git-annex")

buildDepChroot :: Apt.Package -> Chroot.Chroot
buildDepChroot pkg = Chroot.debootstrapped system Debootstrap.BuildD dir
    & Apt.buildDep pkg
  where
    dir = "/srv/chroot/builddep/" ++ pkg
    system = System (Debian Unstable) "amd64"
Again this uses debootstrap to build the chroot, and then it runs propellor inside the chroot to provision it (btw without bothering to install propellor there, thanks to the magic of bind mounts and completely linux distribution-independent packaging).
In fact, the systemd-nspawn container code reuses the chroot code, and so turns out to be really rather simple. 132 lines for the chroot support, and 167 lines for the systemd support (which goes somewhat beyond the nspawn containers shown above).
Which leads to the hardest part of all this...
Making a propellor property for debootstrap should be easy. And it was, for Debian systems. However, I have crazy plans that involve running propellor on non-Debian systems, to debootstrap something, and installing debootstrap on an arbitrary linux system is ... too hard.
In the end, I needed 253 lines of code to do it, which is barely an order of magnitude less code than the size of debootstrap itself. I won't go into the ugly details, but this could be made a lot easier if debootstrap catered more to being used outside of Debian.
Docker and systemd-nspawn have different strengths and weaknesses, and there are sure to be more container systems to come. I'm pleased that Propellor can add support for a new container system in a few hundred lines of code, and that it abstracts away all the unimportant differences between these systems.
Seems likely that systemd-nspawn containers can be nested to any depth. So, here's a new kind of fork bomb!
infinitelyNestedContainer :: Systemd.Container
infinitelyNestedContainer = Systemd.container "evil-systemd"
    (Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase)
    & Systemd.nspawned infinitelyNestedContainer
Strongly typed purely functional container deployment can only protect us against a certain subset of all badly thought out systems. ;)
I left Debian. I don't really have a lot to say about why, but I do want to clear one thing up right away. It's not about systemd.
As far as systemd goes, I agree with my friend John Goerzen:
I promise you – 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years.
And with Jonathan Corbet:
However things turn out, if it becomes clear that there is a better solution than systemd available, we will be able to move to it.
I have no problem with trying out a piece of Free Software, that might have abrasive authors, all kinds of technical warts, a debatable design, scope creep etc. None of that stopped me from giving Linux a try in 1995, and I'm glad I jumped in with both feet.
It's important to be unafraid to make a decision, try it out, and if it doesn't work, be unafraid to iterate, rethink, or throw a bad choice out. That's how progress happens. Free Software empowers us to do this.
Debian used to be a lot better at that than it is now. This seems to have less to do with the size of the project, and more to do with the project having aged, ossified, and become comfortable with increasing layers of complexity around how it makes decisions. To the point that I no longer feel I can understand the decision-making process at all ... or at least, that I'd rather be spending those scarce brain cycles on understanding something equally hard but more useful, like category theory.
It's been a long time since Debian was my main focus; I feel much more useful when I'm working in a small nimble project, making fast and loose decisions and iterating on them. Recent events brought it to a head, but this is not a new feeling. I've been less and less involved in Debian since 2007, when I dropped maintaining any packages I wasn't the upstream author of, and took a year of mostly ignoring the larger project.
Now I've made the shift from being a Debian developer to being an upstream author of stuff in Debian (and other distros). It seems best to make a clean break rather than hang around and risk being sucked back in.
My mailbox has been amazing over the past week by the way. I've heard from so many friends, and it's been very sad but also beautiful.
Free software has been my career for a long time -- nothing else since 1999 -- and it continues to be a happy surprise each time I find a way to continue that streak.
The latest is that I'm being funded for a couple of years to work part-time on git-annex. The funding comes from the DataLad project, which was recently awarded a grant by the National Science Foundation. DataLad folks (at Dartmouth College and at Magdeburg University in Germany) are working on providing easy access to scientific data (particularly neuroimaging). So git-annex will actually be used for science!
I'm being funded for around 30 hours of work each month, to do general work on the git-annex core (not on the webapp or assistant). That includes bugfixes and some improvements that are wanted for DataLad, but are all themselves generally useful. (see issue list)
This is enough to get by on, at least in my current living situation. It would be great if I could find some funding for my other work time -- but it's also wonderful to have the flexibility to spend time on whatever other interesting projects I might want to.
I've taught my laptop to wake up at 7:30 in the morning. When it does, it will run whatever's in my ~/bin/goodmorning script. Then, if the lid is still closed, it will go back to sleep again.
So, it's a programmable alarm clock that doesn't need the laptop to be left turned on to work.
But it doesn't have to make noise and wake me up (I rarely want to be woken up by an alarm; the sun coming in the window is a much nicer method). It can handle other tasks like downloading my email, before I wake up. When I'm at home and on dialup, this tends to take an hour in the morning, so it's nice to let it happen before I get up.
This took some time to figure out, but it's surprisingly simple. Besides ~/bin/goodmorning, which can be any program/script, I needed just two files to configure systemd to do this.
goodmorning.timer:

[Unit]
Description=good morning

[Timer]
Unit=goodmorning.service
OnCalendar=*-*-* 7:30
WakeSystem=true
Persistent=false

[Install]
WantedBy=multi-user.target
goodmorning.service:

[Unit]
Description=good morning
RefuseManualStart=true
RefuseManualStop=true
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/bin/systemd-inhibit --what=handle-lid-switch --why=goodmorning /bin/su joey -c "/usr/bin/timeout 45m /home/joey/bin/goodmorning"
After installing those files, run (as root):
systemctl enable goodmorning.timer; systemctl start goodmorning.timer
Then, you'll also need to set
LidSwitchIgnoreInhibited=no -- this overrides the default, which
is not to let systemd-inhibit block sleep on lid close.
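Assuming the stock systemd layout, that setting lives in /etc/systemd/logind.conf (the path is my assumption; adjust for your distribution), in the [Login] section:

```ini
[Login]
LidSwitchIgnoreInhibited=no
```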
almost too easy
I don't think this would be anywhere near as easy to do without systemd, logind, etc. Especially the handling of waking the system at the right time, and the behavior around lid sleep inhibiting.
The WakeSystem=true relies on some hardware support for waking from sleep; my laptop supported it with no trouble but I don't know how broadly available that is.
Also, notice the
ConditionACPower=true, which I added once I realized I
don't want the job to run if I forgot to leave the laptop plugged in
overnight. Technically, it will still wake up when on battery power, but
then it should go right back to sleep.
Quite a lot of nice pieces of systemd all working together here!
If using xfce, xfce4-power-manager takes over handling of lid close from systemd, and currently prevents the system from going back to sleep if the lid is still closed when goodmorning finishes. Happily, there is an easy workaround; this configures xfce to not override the lid switch behavior:
xfconf-query -c xfce4-power-manager -n -p /xfce4-power-manager/logind-handle-lid-switch -t bool -s true
Other desktop environments may have similar issues.
why not a per-user unit?
It would perhaps be better to use the per-user systemd, not the system wide one. Then I could change the time the alarm runs without using root.
What's prevented me from doing this is that systemd-inhibit uses policykit, and policykit prevents it from being used in this situation. It's a lot easier to run it as root and use su, than it is to reconfigure policykit.
I think I've been writing, in my spare time over the past couple of months, the second system to replace d-i, and never noticed.

I'm as surprised as you are, but consider this design:
- Installation system consists of debian live + haskell + propellor + web browser.

- Entire installation UI consists of a web-based (and entirely pictographic and prompt based, so does not need to be translated) selection of the installation target.

- Installation target can be local disk, remote system via ssh (wiping out crufty hacked-up pre-installed debian), local VM, live ISO, etc. Really, no other questions. Not even user name/password! The installed system will only allow login via the same method that was used to install it. So a locally installed system will accept console/X login with no password and then a forced password change. Or a system installed via ssh will only allow login using the same ssh key that was used to install it.

- The entire installation process consists of a disk format, followed by debootstrap, followed by running propellor in the target system. This also means that the installed system includes a propellor config file which now describes the properties of the system as installed (so can be edited to tweak the installation, or reused as starting point for next installation).

- Users who want to configure installation in any way write down properties of the system using a simple propellor config file. I suppose some people still use more than one partition, or gnome, or some such customization, so they'd use:
main :: IO ()
main = Installer.main
    & Installer.partition First "/boot" Ext3 (MiB 256)
    & Installer.partition Next "/" Ext4 (GiB 5)
    & Installer.partition Next "/home" Ext4 FreeSpace
    & Installer.grubBoots "hd0"
    & os (System (Debian Stable) "amd64")
    & Apt.stdSourcesList
    & Apt.installed ["task-gnome-desktop"]
- The installation system is itself built using propellor. A free feature given the above design, so basically all it will take to build an installation iso is this code:
main :: IO ()
main = Installer.main
    & Installer.target CdImage "installer.iso"
    & os (System (Debian Stable) "amd64")
    & Apt.stdSourcesList
    & Apt.installed ["task-xfce-desktop", "ghc", "propellor"]
    & User.autoLogin "root"
    & User.loginStarts "propellor --installer"
- Propellor has a nice display of what it's doing so there is no freaking progress bar.
Well, now I know where propellor might end up if I felt like spending a month and adding a few thousand lines of code to it.
Today I did something interesting with the Debian packaging for propellor, which seems like it could be a useful technique for other Debian packages as well.
Propellor is configured by a directory, which is maintained as a local git
repository. In propellor's case, it's
~/.propellor/. This contains a lot
of haskell files, in fact the entire source code of propellor! That's
really unusual, but I think this can be generalized to any package whose
configuration is maintained in its own git repository on the user's
system. From now on, I'll refer to this as the config repo.
The config repo is set up the first time a user runs propellor. But, until now, I didn't provide an easy way to update the config repo when the propellor package was updated. Nothing would break, but the old version would be used until the user updated it themselves somehow (probably by pulling from a git repository over the network, bypassing apt's signature validation).
So, what I wanted was a way to update the config repo, merging in any
changes from the new version of the Debian package, while preserving the
user's local modifications. Ideally, the user could just run
git merge upstream/master, with the upstream repo included in the Debian package.
But, that can't work! The Debian package can't reasonably include the full git repository of propellor with all its history. So, any git repository included in the Debian binary package would need to be a synthetic one, that only contains probably one commit that is not connected to anything else. Which means that if the config repo was cloned from that repo in version 1.0, then when version 1.1 came around, git would see no common parent when merging 1.1 into the config repo, and the merge would fail horribly.
To solve this, let's assume that the config repo's master branch has
a parent commit that can be identified, somehow, as coming from a past
version of the Debian package. It doesn't matter which version, although
the last one merged with will be best. (The easy way to do this is to set
refs/heads/upstream/master to point to it when creating the config repo.)
Once we have that parent commit, we have three things:
- The current content of the config repo.
- The content from some old version of the Debian package.
- The new content of the Debian package.
Now git can be used to merge #3 onto #2, with -Xtheirs, so the result is a git commit with parents of #3 and #2, and content of #3. (This can be done using a temporary clone of the config repo to avoid touching its contents.)
Such a git commit can be merged into the config repo, without any conflicts other than those the user might have caused with their own edits.
So, propellor will tell the user when updates are available, and they can
git merge upstream/master to get them. The resulting history
looks like this:
*   Merge remote-tracking branch 'upstream/master'
|\
| * merging upstream version
| |\
| | * upstream version
* | user change
|/
* upstream version
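The whole dance can be sketched end to end with throwaway repositories. The repo names and file contents here are illustrative, not propellor's actual layout:

```shell
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
tmp=$(mktemp -d); cd "$tmp"

# The user's config repo, as created from the old package version.
$G init -q configrepo
cd configrepo
main=$($G symbolic-ref --short HEAD)
echo "version 1" > config.hs
$G add config.hs
$G commit -q -m "old upstream version"
$G branch upstream/master             # marks the last-merged upstream commit
echo "user tweak" > user.hs           # the user's local modification
$G add user.hs
$G commit -q -m "user change"

# The new package ships a synthetic, unconnected repo with the new content.
$G init -q ../newpkg
(cd ../newpkg && echo "version 2" > config.hs &&
 $G add config.hs && $G commit -q -m "new upstream version")

# Merge the new content (#3) onto the old upstream commit (#2), with the
# new content winning; the histories share no commit, hence the extra flag.
$G checkout -q upstream/master
$G fetch -q ../newpkg HEAD
$G merge -q --allow-unrelated-histories -Xtheirs \
    -m "merging upstream version" FETCH_HEAD

# Now the user merges as usual; their change survives, upstream files update.
$G checkout -q "$main"
$G merge -q -m "Merge branch 'upstream/master'" upstream/master
```

Afterwards config.hs holds the new packaged content while user.hs keeps the user's edit, and the history has the shape shown above.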
So, generalizing this, if a package has a lot of config files, and creates a git repository containing them when the user uses it (or automatically when it's installed), this method can be used to provide an easily mergable branch that tracks the files as distributed with the package.
It would perhaps not be hard to get from here to a full git-backed version of ucf. Note that the Debian binary package doesn't have to ship a git repository; it can just as easily ship the current version of the config files somewhere in /usr, and check them into a new empty repository as part of the generation of the upstream/master branch.