This is my discussion blog. The way it works is that when any pages in this wiki have a discussion page created for them, the discussion pages show up below. Also, any comments on my blog posts also show up here.

ethics and open source

Google Inc may be opposed to the AGPL, but other interested parties may well happily take it.

I think you are misusing an open source license to achieve an ethical purpose.

If you want to be 100% sure that your open source work is not used for “evil” you could add to the license the line "The Software shall be used for Good, not Evil" (see JSLint).

But then who checks what is "evil"?

You would probably need to set up a bureaucracy with "certificates" issued by independent third parties to demonstrate that the purpose is "not evil" (as you need if you want to sell "organic" food).

Comment by paolo.greppi
DFSG#5 and #6

This sounds like a very clear-cut violation of DFSG#5 and #6. You're discriminating against using the software for military purposes.

Well, this is the same Google that discriminates against anyone looking to purchase a gun, or anything unrelated that just happens to have "gun" as part of a word in its name (yet denying cakes for gay weddings is a legal argument they use), but two wrongs don't make a right.

There's also a question whether AGPL is a free software license at all. I believe it's not: fails FSF Freedom 0 (networked light switch; IMAP server) and the Dissident Test (a dissident hiding steganographic messages on a blogging platform with thousands of unrelated users; only fellow dissidents receive a module to encrypt/decrypt the messages).

Comment by kilobyte
Popularity of Nix*

I believe the popularity of NixPkgs has multiplied over the past few years; e.g. look at "contributors per month" (60 -> 250 in the past four years). That shows rapid growth in the number of people who "only" send a few changes (per month), which is IMHO a plausible indication of being a regular active user.

I actually think that too-rapid growth would be detrimental, as quite a few things need to change in the organization of such a project to handle the growth (e.g. just the manpower for issues and PRs), and such changes tend to work better when given sufficient time.

BTW, NixPkgs also strives to have only one version per package, except for cases with a good-enough reason to do otherwise. It really helps maintenance, debugging, etc. So it's a somewhat strange situation: technically Nix makes multiple versions easier, but NixPkgs tries to avoid using that :-)

Comment by vcunat
Versions

Debian still only packages one version of anything

One of the killer features of Debian is that it does not just package one version of everything. C libraries are usually packaged so that the package name contains the soname version, so that multiple versions are coinstallable and dependencies just work. When things break (and they do), it's usually easy to keep an old version around for years while the rest of the system is updated regularly. This usually applies only to libs, not their -dev packages, but sometimes there are multiple versions of those as well, and even of non-libraries, e.g. python3.5 & python3.6. Obviously this increases maintenance costs, but from an end user's (my) point of view, it's absolutely worth it. (Thanks to everyone involved!)
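As a concrete sketch of the convention described above (libfoo is a made-up library), the soname ends up in the binary package name, so apt can satisfy old and new dependencies at the same time:

```text
libfoo.so.1   ->  binary package "libfoo1"
libfoo.so.2   ->  binary package "libfoo2"   (coinstallable with libfoo1)
libfoo-dev        usually only one version at a time
python3.5 / python3.6                        (non-library example)
```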

Now for Haskell this is a bit more difficult because of code inlining. To make things coinstallable, I'd suggest reversing the current package naming practice: instead of having "libghc-mtl-dev" as the name and "libghc-mtl-dev-2.2.1-93d32" as Provides, do it the other way round. Perhaps even "libghc8.0-mtl-dev-...". Admittedly this would poison the package namespace and slow apt down considerably. Also, unless it's all automated, the manual labor needed to maintain multiple versions of everything would be unbearable.
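A sketch of the proposed reversal, using the mtl example from the text (the control-file fields are simplified to just the two that swap):

```text
# Current practice:
Package:  libghc-mtl-dev
Provides: libghc-mtl-dev-2.2.1-93d32

# Proposed: put the versioned string in the name, so versions coinstall:
Package:  libghc-mtl-dev-2.2.1-93d32
Provides: libghc-mtl-dev
```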

I guess if there was an apt repository that contained such packages and I could ask it to install from a specific version of Stackage LTS, I'd use it instead of stack immediately. I'm quite annoyed by stack's ignorance of disk space. As if these young people never ever had to uninstall anything.

Comment by tomi
comment 4

@josh, the problem is that these tunings are not always safe to enable, causing audio problems or screen problems or whatever, and information about which laptops have hardware that breaks with them is currently hard to collect.

But yes, if the information were collected as I propose, it could be used in the kernel to whitelist the tunings on good hardware.

Comment by joey
Better defaults?
Rather than making powertop auto-apply these settings, could we fix the default settings so that they match what powertop would set?
Comment by josh
Config File

Of course we could invent a new config file format and we could write a new GUI for it.

I think it would be wiser to use (and improve) solutions that already implement config file mechanisms, have GUIs/WebUIs, and have ways to upload/share configs.

More about that in my talk tomorrow at FOSDEM: https://fosdem.org/2018/schedule/event/elektra/

Comment by markus
comment 1
Every so often I remember about powertop and that I haven't automated it yet. I'd be willing to help.
Comment by db48x
Update blog to tell about debuerreotype

Hello!

Please update this blog post. Many sites seem to link to it. It did raise a valid concern at the time it was written, but things have changed since then.

Nowadays anybody can independently build the same Docker images as those published under the so-called official Docker Debian account, thanks to the new build system using debuerreotype: https://github.com/debuerreotype/debuerreotype

Also, as a side note, if you want to have Debian images that are slightly more optimized for Docker in an opinionated way, you might want to check out https://github.com/jgoerzen/docker-debian-base

Comment by otto
kinda answered

Had a chat with Sesse about ASLR.

ASLR operates on a page basis, and with 4k pages that's why the lower 12 bits are zero. When a program is mapped into memory, the mapping is necessarily page-aligned.

It does seem that it would be possible for binaries to have their code offset by some fraction of a page, but it would have overhead: somewhere between the overhead of copying the whole binary's contents into memory and the overhead of (non-dynamic) linking. And no executable pages could be shared between processes if that were done.
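A minimal Python sketch of the page-alignment point: any fresh anonymous mapping starts on a page boundary, so its low bits are zero regardless of where ASLR placed it.

```python
import ctypes
import mmap

page = mmap.PAGESIZE  # typically 4096

# A fresh anonymous mapping comes straight from the kernel, which hands
# out memory in whole pages, so its start address is page-aligned:
# the low 12 bits (for 4 KiB pages) are always zero, ASLR or not.
m = mmap.mmap(-1, page)
addr = ctypes.addressof(ctypes.c_char.from_buffer(m))
print(addr % page)  # → 0
```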

Comment by joey
comment 2

@larry that is the approach being used in the browsers, but it seems very hard to prevent timing information from being available to native code.

Comment by joey
Spectre mitigation
My possibly naive perspective is that Spectre is one of a large class of weaknesses based on incomplete virtualization/abstraction of the CPU. As others have pointed out, modern hardware emulates a simple abstract machine in terms of execution order and memory models, where the reality is actually quite different. The key item that is not abstracted is time. All side channels I know of (including the ones on which the Spectre work depends) require access to high-resolution non-virtual time. Take that away, reserve those high-res timers for privileged-mode only, and I think you'll find problems like this just disappear.
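A toy sketch of the point above (the nanosecond costs are made-up numbers): a cache hit and a cache miss differ by a few hundred nanoseconds, which a millisecond-granularity clock simply cannot resolve.

```python
GRANULARITY_NS = 1_000_000  # a deliberately coarse 1 ms timer

def coarse(t_ns):
    """Quantize a nanosecond timestamp to the coarse timer's granularity."""
    return t_ns // GRANULARITY_NS * GRANULARITY_NS

hit_ns, miss_ns = 100, 300  # hypothetical cache-hit vs. cache-miss cost

# A fine-grained timer distinguishes the two events...
assert hit_ns != miss_ns
# ...but once quantized, both produce identical readings, so a single
# timing measurement leaks nothing about the cache state.
assert coarse(hit_ns) == coarse(miss_ns)
```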
Comment by larry
comment 5

I've recently got a 2017 Lenovo Yoga 11 inch, which is actually a pretty sweet little netbook, fanless and light and a lot smaller than older 11 inch screen computers.

Comment by joey
development machine

Sorry, this comment is off-topic for this post, but I just found your interview at https://usesthis.com/interviews/joey.hess/. I was wondering what your current development setup is? Are you still using a little netbook?

I currently use a recent MacBook Air to develop Haskell, but it can be fairly slow. I've recently set up an Ubuntu-based server on GCE, which is quite nice to use (preemptible, i.e. cheap), and their network is heaps faster than my network at home (ADSL2+). I'm unable to use local SSDs with a preemptible GCE instance, and the performance of persistent SSDs isn't really as fast as I'd like. Therefore I'm considering buying a workstation (tower/server) to put on my local network, with the latest Intel i9, lots of RAM (32/64G), and fast SSD(s), probably running Linux or FreeBSD with ZFS, but I'm wondering if it might be overkill.

I've gotten used to remoting in (ssh) to my GCE server, so remoting in to a local workstation would present few problems, workflow-wise. I tend to use tmux and spacemacs (with intero). I like Atom and its haskell-ide plugin, but luckily I switched to Spacemacs a few months ago. I still occasionally use Atom-Beta on my local MacBook and haven't tried X11 forwarding yet to see if that workflow would still be usable.

I've got a bad back (from too much time crouched over a keyboard) and it's nice to be able to use a laptop (or network) as my primary interface to my workstation (or server-in-the-cloud), so that I can mix up my work environment: stand-up desk with large monitor, sit-down desk (aka dining room table), or couch/sofa.

Appreciate any advice!

Comment by steven
beyond compile error

In an attempt to reproduce the generation of custom ARM images I did get compile errors.

The errors said what to do, e.g. change hasPassword into User.hasPassword.

This gives me a clean compile with propellor version 5.1.0:

lime :: Host
lime = host "lime.example.net" $ props
    & osDebian Unstable ARMHF
    & Machine.olimex_A10_OLinuXino_LIME
    & hasPartition (partition EXT4 `mountedAt` "/" `setSize` MegaBytes 8192)
    & User.hasPassword (User "root")
    & Ssh.installed
    & Ssh.permitRootLogin (Ssh.RootLogin True)

Caveat: not tested on actual hardware

Comment by stappers
comment 2
I'm ok with either method for now; if it changes I'll let people know.
Comment by joey