This is my discussion blog. The way it works is that when any page in this wiki has a discussion page created for it, that discussion page shows up below. Comments on my blog posts also show up here.
I have a copy somewhere but I'm sure the TOS has changed in the meantime.
It's Github's problem that they have a TOS that you automatically "accept" before you can see it. I am not going to try to help them fix that problem, beyond pointing it out.
The link to "See also: PDF of Github TOS that can be read without being forced to first accept Github's TOS" is giving a 404 error.
Is there a copy of the document somewhere?
A couple of free software projects have been developed since:
Inventaire.io uses Wikidata and other sources to create an inventory of books. It's AGPL software. Developers are currently working on ActivityPub federation. Although the software is already 7 years old, it remains quite experimental, with a lot of attention to detail and a slow development pace.
While Inventaire is geared toward publishers, another free software project is more inclined to serve readers: bookwyrm. Bookwyrm also supports ActivityPub. The two projects complement each other well, and I hope they keep working that way, one toward professionals, the other toward readers and social features. They both have fantastic and welcoming communities.
Inventaire.io has good funding so far but bookwyrm could get some help.
Forgot to mention that extensions can be enabled in the cabal file. Since Copilot operates on a per-file basis, that will prevent it from realizing that an extension is needed by the code it absorbed.
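For reference, a minimal sketch of what that looks like in a .cabal file (the stanza contents and the extension names here are just examples):

-- extensions listed here apply to every module in the stanza,
-- with no LANGUAGE pragma appearing in the source files
library
  default-extensions:
    OverloadedStrings
    LambdaCase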
The shell will have started the processes close together in time, so the pids are probably nearby. So look at the previous pid, and the next pid, and fan outward.
The shell will have put all the processes of the pipeline into a single process group, so this can be sped up a bit more by calling getpgid() on a process before examining its fds.
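A rough sketch of that search in shell (illustrative only; a real implementation would keep fanning outward until it has found the whole pipeline, rather than use a fixed window):

# the shell puts the whole pipeline in one process group,
# so filter nearby pids by comparing their pgid with ours
mypgid=$(ps -o pgid= -p $$ | tr -d ' ')
for offset in -1 1 -2 2 -3 3; do
    pid=$(( $$ + offset ))
    pgid=$(ps -o pgid= -p "$pid" 2>/dev/null | tr -d ' ')
    if [ "$pgid" = "$mypgid" ]; then
        ls -l /proc/"$pid"/fd    # inspect the sibling's open fds
    fi
done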
Hacker news thread with some prior art.
Wow, this is really clever!
I think the /proc dancing is a strong argument for implementing "typed" as a builtin in the shell itself, because otherwise performance will probably drop in shell scripts with a lot of simple calls.
% telnet nntp.olduse.net 119
200 Leafnode NNTP Daemon, version 1.11.11 running at kitenet.net (my fqdn: kite.kitenet.net)
NEWNEWS
500 NEWNEWS is meaningless for this server
Would have been pretty useful though, for clearing your last challenge :-)
Many thanks for having run that news server! Greatly appreciated. Take care of yourself, and enjoy your future projects :) Julien ÉLIE
Fiction no more:
https://www.theguardian.com/us-news/2021/may/12/ohio-coronavirus-vaccine-lottery-1-million
I don't think backing up content from Github has become unnecessary - on the contrary, developers are aware that Github can change its policies at any time, and that their content might then be gone, so they do make backups.
Now that you have announced github-backup as withdrawn, I can recommend another tool which seems to aim at the same goal as yours: python-github-backup. It seems to do the job of backing up the metadata quite nicely.
I use it in conjunction with a more basic approach of cloning/pulling each repository itself like this:
# TARGET_REPOS must be set to the directory holding the backups
function doBackup {
    URL=$1
    REPO=$(echo "$URL" | sed -e 's#^.*/##' -e 's#\.git$##')
    if [ -d "$TARGET_REPOS/$REPO" ]; then
        cd "$TARGET_REPOS/$REPO" || return
        git pull --all >/dev/null
    else
        echo "cloning $REPO"
        cd "$TARGET_REPOS" || return
        git clone "$URL" >/dev/null
    fi
}
# list the clone URLs of all owned repositories via the Github API
# (note: the API caps per_page at 100, so paginate if you have more repos)
curl -s 'https://api.github.com/users/mathisdt/repos?type=owner&per_page=500' \
| grep -Eo '"git_url": "[^"]+"' \
| sed -e 's#"git_url": ##' -e 's#"##g' \
| while read -r url; do doBackup "$url"; done
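Note that TARGET_REPOS has to point at an existing directory before running this, for example:

TARGET_REPOS=$HOME/github-backup    # example path, adjust to taste
mkdir -p "$TARGET_REPOS"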
Hope this helps!
Mathis Dirksen-Thedens
Hi,
I think what you found might be a variation of TOCTOU - time of check, time of use.
For example (slightly different, but the same underlying idea):
https://duo.com/decipher/docker-bug-allows-root-access-to-host-file-system
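A minimal sketch of that pattern in shell (the path is made up):

# time of check: the test passes because the path does not exist yet
if [ ! -e /tmp/shared ]; then
    # the race: another process can create /tmp/shared right here,
    # e.g. as a symlink pointing at a sensitive file
    echo data > /tmp/shared    # time of use: acts on the stale check
fi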
@alex safe-exceptions and unliftio use uninterruptibleMask in their async-safe bracket. That is ok if the cleanup action is fast, but it does risk the program not responding to ctrl-c if the cleanup takes a while for whatever reason.
As well as SIGINT, there's also the possibility that an async exception is thrown for some truly exceptional circumstance, like a segfault. Most code would do well to exit immediately on such an exception, not mask it.
I wonder if there's a way to make an uninterruptibleMask that masks only a specific async exception, e.g. the AsyncCancelled exception. Probably this would need ghc support, if it's possible at all.
@Joey
Sectioning using "article" is helpful, as it provides semantics about the web page layout, but articles are not considered a navigational landmark, so not all screen readers support navigating by "article" sections. From ARIA: article role:
"Articles are not considered a navigational landmark, but many assistive technologies that support landmarks also support a means to navigate among articles. ..."
"header" elements are turned into navigational landmarks when they are descendants of the "body" element and this type of landmark is the "banner". They are the converse of a "footer" element which transform into "content info" landmark when it is directly child of "body". As navigational landmarks they are just meant for the whole page and not for individual sections of the page.
In Section 4.1 of WAI-ARIA Authoring Practices 1.1 is described which HTML semantic region elements get turned into aria landmarks.
Finally, headings are what most screen readers go to first for navigation of a web page. So for those reasons I'd recommend making the article titles headings. If you don't want to literally make them a "h1-6" because it would mess with your CSS then you can just set aria attributes which will change how the page is understood by assisstive tech but not affect any visual rendering of the page. a role="heading" aria-level="2"Lemons/a (using _ instead of angular brackets)