This is my discussion blog. The way it works is that when any pages in this wiki have a discussion page created for them, the discussion pages show up below. Also, any comments on my blog posts also show up here.

Zephyr copilot blog
Hi there - I am the public relations manager for the Zephyr Project. I discovered your zephyr copilot blog and was wondering if you would like us to re-run it on the Zephyr website. We often re-run blogs and videos from the dev community. You can read other blogs here: If this is approved, please send me an email at with your approval and a link to the blog and video. Thanks!
Comment by maemalynn
comment 5

I have a copy somewhere but I'm sure the TOS has changed in the meantime.

It's Github's problem that they have a TOS that you automatically "accept" before you can see it. I am not going to try to help them fix that problem, beyond pointing it out.

Comment by joey
404 Link

The Link to "See also: PDF of Github TOS that can be read without being forced to first accept Github's TOS" is giving a 404 error.

Any copy of the document somewhere?

Comment by zoobab
Now there's an alternative:

A couple of free software projects have been developed since: Inventaire uses WikiData and other sources to create an inventory of books. It's AGPL software, and developers are currently working on ActivityPub federation. Although the software is already 7 years old, it remains quite experimental, with a lot of attention to detail and a slow development pace.

While Inventaire is geared toward publishers, another free software project is more inclined to serve readers: bookwyrm. Bookwyrm also supports ActivityPub. The two projects are very complementary, and I hope they will keep working in that complementary fashion, one oriented toward professionals, the other toward readers and social features. They both have fantastic and welcoming communities. Inventaire has good funding so far, but bookwyrm could use some help.

Comment by how
comment 1

Forgot to mention that extensions can be enabled in the cabal file. Since Copilot operates on a per-file basis, that will prevent it from realizing that an extension is needed by the code it absorbed.
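For instance, a hypothetical cabal library stanza might enable an extension for every module in the component (the module and extension names here are illustrative), so nothing in any individual source file hints that the extension is on:

```cabal
library
  exposed-modules:    MyProject.Example
  build-depends:      base
  -- Applies to all modules in this component; a per-file tool
  -- reading only MyProject/Example.hs never sees this line.
  default-extensions: OverloadedStrings
```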

Comment by joey
Speeding up process discovery

The shell will have started the processes close together in time, so the pids are probably nearby: look at the previous pid, then the next pid, and fan outward.

The shell will have put all the processes of the pipeline into a single process group, so this can be sped up a bit more by calling getpgid() on a process before examining its fds.
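A rough Python sketch of that search order and pre-filter, assuming a Linux-style /proc; the function names are mine, not from any real implementation:

```python
import os

def candidate_pids(start_pid, radius=5):
    """Yield pids near start_pid, fanning outward:
    start, start-1, start+1, start-2, start+2, ..."""
    yield start_pid
    for delta in range(1, radius + 1):
        yield start_pid - delta
        yield start_pid + delta

def same_pipeline(pid, other_pid):
    """Cheap pre-filter: only processes in the same process group as
    `pid` can belong to the same shell pipeline, so compare getpgid()
    results before doing the more expensive /proc/<pid>/fd scan."""
    try:
        return os.getpgid(other_pid) == os.getpgid(pid)
    except (ProcessLookupError, PermissionError):
        return False
```

A caller would walk `candidate_pids(os.getpid())`, skip anything failing `same_pipeline`, and only then inspect `/proc/<pid>/fd`.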

Comment by
typed pipes in every shell

Wow, this is really clever!

I think the /proc dancing is a strong argument to implement "typed" as a shell command (in the shell itself), because otherwise performance will probably drop in shell scripts with a lot of simple calls.

Comment by pat_h
Finding the latest article...
% telnet 119
200 Leafnode NNTP Daemon, version 1.11.11 running at (my fqdn)
500 NEWNEWS is meaningless for this server

Would have been pretty useful though, for clearing your last challenge :-)

Comment by julien
comment 1
Since some people were confused by this, it's (currently) fiction.
Comment by joey
possible alternative

I don't think there is no need to back up content from Github anymore. On the contrary: developers are aware that Github can change its policies at any time, and if that happens their content might be gone, so they do make backups.

Now that you have announced github-backup as withdrawn, I can recommend another tool that seems to aim at the same target as yours: python-github-backup. It seems to do the job of backing up the metadata quite nicely.

I use it in conjunction with a more basic approach of cloning/pulling each repository itself like this:

function doBackup {
    URL=$1
    REPO=$(echo $URL | sed -e 's#^.*/##g' -e 's#.git$##')

    if [ -d $TARGET_REPOS/$REPO ]; then
        cd $TARGET_REPOS/$REPO
        git pull --all >/dev/null
    else
        echo "cloning $REPO"
        cd $TARGET_REPOS
        git clone $URL >/dev/null
    fi
}

curl -s '' \
    | grep -Eo '"git_url": "[^"]+"' \
    | sed -e 's#"git_url": ##' -e 's#"##g' \
    | while read url; do doBackup $url; done

Hope this helps!

Mathis Dirksen-Thedens

Comment by joeyh-blog
Cool idea
Thanks for showing me a new way to annoy my co-workers. People could probably just create a new clean git repo without any history, but it's still a cool idea!
Comment by chris
it might be a variation of TOCTOU


I think what you found might be a variation of TOCTOU: time of check, time of use.

For example (a slightly different case, but the same underlying idea):
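As a generic illustration of the check-then-use race (my own sketch; the function names are hypothetical, not from the post):

```python
import os

def read_if_small(path, limit=1024):
    """Racy: the check and the use refer to the path, not the file."""
    if os.path.getsize(path) > limit:        # time of check
        raise ValueError("too big")
    # <-- window: another process can swap the file in right here
    with open(path, "rb") as f:              # time of use
        return f.read()

def read_if_small_safe(path, limit=1024):
    """Race-free: open first, then check the very fd that will be read."""
    with open(path, "rb") as f:
        if os.fstat(f.fileno()).st_size > limit:
            raise ValueError("too big")
        return f.read()
```

The fix pattern is always the same: do the check on the handle you will actually use, not on a name that can be re-bound underneath you.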

Comment by cwk
comment 3

@alex safe-exceptions and unliftio use uninterruptibleMask in their async-safe bracket. That is OK if the cleanup action is fast, but it does risk the program not responding to ctrl-c if the cleanup takes a while for whatever reason.

As well as SIGINT, there's also the possibility that an async exception is thrown for some truly exceptional circumstance, like a segfault. Most code would do well to exit immediately on such an exception, not mask it.

I wonder if there's a way to make an uninterruptibleMask that masks only a specific async exception, eg the AsyncCancelled exception. Probably this would need ghc support, if it's possible at all.

Comment by joey
comment 2
As alex said, safe-exceptions may help. There was a series of blog posts a few years ago about async exceptions: (I think that is the last post)
Comment by gueux+joeyh
comment 3


Sectioning using "article" is helpful, as it provides semantics about the web page layout, but articles are not considered to be a navigational landmark, so not all screen readers support navigating by "article" sections. From ARIA: article role

"Articles are not considered a navigational landmark, but many assistive technologies that support landmarks also support a means to navigate among articles. ..."

"header" elements are turned into navigational landmarks when they are descendants of the "body" element and this type of landmark is the "banner". They are the converse of a "footer" element which transform into "content info" landmark when it is directly child of "body". As navigational landmarks they are just meant for the whole page and not for individual sections of the page.

Section 4.1 of the WAI-ARIA Authoring Practices 1.1 describes which HTML semantic region elements get turned into ARIA landmarks.

Finally, headings are what most screen readers go to first when navigating a web page. So for those reasons I'd recommend making the article titles headings. If you don't want to literally make them an "h1"-"h6" because it would mess with your CSS, you can instead set ARIA attributes, which change how the page is understood by assistive tech without affecting the visual rendering: <a role="heading" aria-level="2">Lemons</a>

More info on heading role

Comment by samuel.kacer