Feel free to edit this page to suggest tools to add, or make any other comments --Joey

Sponge segmentation fault

Sponge gets terminated with SIGSEGV if it tries to append to an unwritable file:

(Make sure you don't already have a file named "a".)

sudo touch a && sudo chown 000 a && echo 1 | sponge -a a
sudo rm -f a

Arch Linux, moreutils 0.63-1

tool suggestion: "re" (Reach Recent Folders)

The re command aims to save numerous cd operations for reaching recently-used folders. This command orders folders by date of last modification, and enables specifying a substring of the name of the folder targeted. The re command saves me a lot of time, every single day.

It is currently implemented as a shell script. https://github.com/charguer/re

tool suggestion: "even" and "wol"

I've written these two tiny tools, and figure they're useful to others:

even was originally written to filter unicode files for use with grep/rg: https://gist.github.com/lionello/9166502

wol is a portable wake-on-lan tool: https://gist.github.com/lionello/6481448

tool suggestion: "fork after grep"

I've written this tool for tzap (tuning DVB-T cards). It runs a process as a child, waits for a string to appear on its stdout/stderr, then daemonizes it.
This is useful if a program takes a while to initialize, then prints a message on stdout, but does not daemonize itself.

I've put it up on GitHub here.
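The core idea can be sketched in shell (a rough approximation only: `start_after` is a made-up name, the pattern match is a plain grep, and the real tool daemonizes the child properly rather than just backgrounding it):

```shell
# start_after PATTERN CMD...: run CMD in the background, block until
# PATTERN shows up in its combined output, then print the child's pid.
start_after() {
    pattern=$1; shift
    log=$(mktemp) || return 1
    "$@" > "$log" 2>&1 &
    pid=$!
    until grep -q "$pattern" "$log"; do
        # give up if the child died before printing the pattern
        kill -0 "$pid" 2>/dev/null || { rm -f "$log"; return 1; }
        sleep 0.1
    done
    rm -f "$log"
    echo "$pid"
}
```

Usage would look like `pid=$(start_after "tuned" some-slow-daemon)`; the caller then knows initialization is done.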


discarding certain non-zero exit codes

I'm running "chronic fetchmail" in my crontab and I'd like to discard exit code 1 ("no new mail"). The method proposed in fetchmail's man page is to run "chronic sh -c 'fetchmail || [ $? -eq 1 ]'", but I'd prefer something like "chronic -i 1 fetchmail", because this avoids the separate shell and allows discarding several exit codes (think of rsync's various possibly unimportant errors). I'll try to come up with a patch if this is desired.

-- deep42thought
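Until such a flag exists, the behaviour can be approximated with a small wrapper function (a sketch; `ignore` is a hypothetical name, taking a comma-separated code list):

```shell
# ignore CODES CMD...: run CMD and remap the listed exit codes to 0,
# so e.g. `chronic ignore 1 fetchmail` would discard the "no mail" status
ignore() {
    codes=$1; shift    # comma-separated list, e.g. "1,24"
    rc=0
    "$@" || rc=$?
    case ",$codes," in
        *",$rc,"*) return 0 ;;   # a code we were told to discard
    esac
    return $rc
}
```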

triggering with zero exit code

I've a use case where I want chronic to work in exactly the opposite fashion: to throw away stdout/stderr if and only if the exit code is non-zero. I could obviously do this with a wrapper script that inverts the exit code, but that (a) means I only know whether the command I ran had a zero/non-zero exit code, not what it was, and (b) means there's an unnecessary layer between chronic and the command.

I've a patch that achieves exactly this; what's the best way to send it in for you to look at?

-- Adam

triggering by non-empty stderr output

chronic is currently triggered by the program returning a nonzero exit code. However, I quite often encounter scripts that don't return an error exit code, but still print errors on stderr.

I'd like chronic to be able to be set up in a way that also triggers output on non-empty stderr.

I suggest a parameter -e for this ("e" stands for error):


chronic bash -c 'ls /nonexistent_file; echo hello; exit 0' #no output
chronic -e bash -c 'ls /nonexistent_file; echo hello; exit 0' #output

You may say that I could simply do

bash -c 'ls /nonexistent_file; echo hello; exit 0' > /dev/null

to achieve the same, but that's not the case, as it throws stdout away. I'd like to see nothing if there's no stderr output and the exit code is zero; if there's a nonzero exit code or stderr output, then I want to see both stdout and stderr.

I am not sure, but this probably should not trigger if stderr contains only whitespace characters; it should require at least one printable character.

Also, maybe you could just make this the default and use -e to disable it and get the legacy behavior; it's up to you.

I would really appreciate this feature. I try to write my scripts to return proper exit codes, but I often use third-party scripts without proper exit codes, and I can't or don't want to change them. Such scripts usually print lots of stdout when everything is OK, and the only way to tell that something went wrong is when they print something to stderr. That's my use case.

-- Tomas Mudrunka

I'd probably be willing to merge a patch adding -e, but making it the default would break existing uses of chronic.

I don't like the filtering out whitespace idea.

I'm somewhat dubious about the idea that scripts so badly written that they don't exit nonzero on error are somehow practicing good hygiene in their output to stderr. --Joey

In fact, such scripts do not usually write to stderr themselves. But they may call some (well-written) binaries that write to stderr or exit nonzero, while the script itself exits zero after such a failure, because it didn't expect the binary to fail and doesn't handle it properly. At least I think this is the case. --Tomas Mudrunka
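In the meantime, the proposed -e behaviour can be approximated with a wrapper (a sketch; `chronic_e` is a made-up name, and unlike a real implementation it buffers the streams to temp files):

```shell
# chronic_e CMD...: stay silent unless CMD exits nonzero OR wrote
# anything to stderr; in that case replay both streams
chronic_e() {
    out=$(mktemp) err=$(mktemp)
    rc=0
    "$@" > "$out" 2> "$err" || rc=$?
    if [ "$rc" -ne 0 ] || [ -s "$err" ]; then
        cat "$out"
        cat "$err" >&2
    fi
    rm -f "$out" "$err"
    return "$rc"
}
```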

verbose mode for distinguishing output streams and the return code

Another feature that might be interesting in chronic would be distinguishing between stdout and stderr in the output. I suggest -v, for verbose:

chronic bash -c 'echo foo 1>&2; echo -n bar; echo baz 1>&2; exit 23' #outputs:

chronic -v bash -c 'echo foo 1>&2; echo -n bar; echo baz 1>&2; exit 23' #outputs:
E: foo
O: bar
E: baz
R: 23

Also note that it adds a newline to the output when the stream changes in the middle of a line, just to make sure that "E: " or "O: " is at the beginning of a line. E: identifies stderr output, O: identifies standard output, and R: identifies the return code.

This should also work in combination with the proposed -e, like chronic -ve something...

I think this would be useful as hell for debugging cron scripts; I currently have trouble with exactly that.

-- Tomas Mudrunka

I think this would be a reasonable default behavior. Patches accepted. --Joey

I just realized that this would be useful as a separate wrapper tool that chronic could then use. --Tomas Mudrunka
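As a standalone sketch of such a wrapper (`vrun` is a hypothetical name; note that it buffers the two streams separately, so the interleaving that the proposal preserves is lost here and the streams come out grouped):

```shell
# vrun CMD...: replay CMD's stdout prefixed "O: ", stderr prefixed "E: ",
# then print "R: <exit code>" and propagate the exit code
vrun() {
    out=$(mktemp) err=$(mktemp)
    rc=0
    "$@" > "$out" 2> "$err" || rc=$?
    sed 's/^/O: /' "$out"
    sed 's/^/E: /' "$err"
    echo "R: $rc"
    rm -f "$out" "$err"
    return "$rc"
}
```

A real chronic -v would have to read both pipes as data arrives to keep the E:/O: lines in their original order.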

poptail and peeif

I just finished two utilities that might be general purpose enough for the collection. They're on github here: https://github.com/ohspite/evenmoreutils

Here's the summary:

peeif - Pipe STDIN to any number of commands, but only if the exit status of the previous command was zero. Behavior can also be inverted to "peeuntil".

poptail - Print and remove the last lines (or bytes) from a file. This is done without reading the whole file and without copying. Can be used with parallel to batch process the lines of a file.

--Don (PS-thanks for your work on moreutils and your other projects)

more exposure for errno

After having moreutils installed for a couple of months, I just stumbled upon errno, which I find incredibly useful! How about giving the errno program some more love and mentioning it on the project landing page among the others, so people know it exists?


Thanks, done --Joey


You are sick of doing ls | wc -l

Why? Because it is slow on large directories!

Instead, consider this:


calvin@ecoli:~/big_folder> time ls file2v1dir/ | wc -l

real    0m7.798s
user    0m7.317s
sys     0m0.700s

calvin@ecoli:~/big_folder> time ~/bin/dc file2v1dir/

real    0m0.138s
user    0m0.057s
sys     0m0.081s

A different name might be preferable as dc is the arbitrary precision, reverse-polish calculator that comes with probably all Linux distros. -- miriam-e
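Much of ls's cost here is sorting (and, with coloured output, stat()ing) every entry. With GNU find you can get an unsorted entry count without either; `count_entries` is a made-up name for this sketch (BSD find lacks -printf, there `find ... | wc -l` would do, at the price of one byte per name):

```shell
# count directory entries without sorting or stat()ing them (GNU find):
# print one 'x' per entry and count bytes
count_entries() {
    find "$1" -mindepth 1 -maxdepth 1 -printf x | wc -c
}
```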


  • You work often in a console
  • You have sometimes files which you want to view/edit with your preferred (registered) desktop tool
  • You don't want to keep in mind / know the name e.g. nautilus, evince, libreoffice, ...
  • You don't want to type the cmd more than once e.g. libreoffice file_a; libreoffice file_b
  • You don't want to type long names for the cmd

Here is o:

# This is free software.  You may redistribute copies of it under the terms of
# the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
# There is NO WARRANTY, to the extent permitted by law.

test $# -eq 0 && set -- . # open current folder if no parameter

# pick an opener for the current desktop
case "$XDG_CURRENT_DESKTOP" in
  KDE)         cmd="kfmclient exec";;
  GNOME|Unity) cmd="gnome-open";;
  XFCE)        cmd="exo-open";;
  *)           cmd="xdg-open";;
esac

which `echo ${cmd/ */}` >/dev/null || cmd=""

test "$cmd" = "" && which xdg-open   >/dev/null && cmd="xdg-open"
test "$cmd" = "" && which kfmclient  >/dev/null && cmd="kfmclient exec"
test "$cmd" = "" && which gnome-open >/dev/null && cmd="gnome-open"
test "$cmd" = "" && which exo-open   >/dev/null && cmd="exo-open"

for file in "$@"; do
  $cmd "$file"
done

It would be nice if someone could test it with a desktop != Unity. Feedback is welcome.


In all my years of administration, I often had the problem that something went wrong due to missing permissions. The solution was always the same:

ls -ld /
ls -ld /x
ls -ld /x/y -> /y/z
ls -ld /y
ls -ld /y/z
  • If a symlink is involved, I had to check the other paths too.
  • Sometimes I use md5sum to check whether the file differs from a second server.

As a result, I developed a tool which shows the user their rights on the whole tree using ANSI colors, plus the md5sum if it is a file <= 100 MB.

$ myperms /usr/share/recovery-mode
# filetype yourperms uid-/gid-/sticky-bit mtime path [md5sum]
dr-x --- 20120511-0030 /
dr-x --- 20120425-1804 /usr
dr-x --- 20120514-2245 /usr/share
lrwx --- 20120510-2321 /usr/share/recovery-mode
>>> /lib/recovery-mode (absolute)
  dr-x --- 20120511-0029 /lib
  dr-x --- 20120425-1807 /lib/recovery-mode
<<< /lib/recovery-mode

$ myperms /usr/bin/X
# filetype yourperms uid-/gid-/sticky-bit mtime path [md5sum]
dr-x --- 20120511-0030 /
dr-x --- 20120425-1804 /usr
dr-x --- 20120609-2032 /usr/bin
-r-x ug- 20120322-1859 /usr/bin/X a7ac83a4031da58dab3a88e9dd247f51

It needs ruby. Tested & developed under Ubuntu 12.04 with Ruby 1.8.

License GPLv3.

I would be glad if you embedded it into moreutils. Please send a note if you are interested.
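The core of the idea, minus the colours and checksums, can be sketched in a few lines of shell (a toy version: it breaks on paths containing whitespace and does not follow symlinks the way myperms does):

```shell
# list_ancestors PATH: run `ls -ld` on every component of PATH, root first
list_ancestors() {
    p=$1 parts=""
    while [ -n "$p" ] && [ "$p" != "/" ] && [ "$p" != "." ]; do
        parts="$p $parts"
        p=$(dirname "$p")
    done
    # deliberately unquoted: relies on word splitting of $parts
    ls -ld / $parts
}
```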


How about translating moreutils to C? Especially the utils for background work, like ts; perl is too fat for that. http://mellonta.narod.ru/f/ts.c


I have some code I wrote a decade ago which lists the 'extensions' in a folder. It's a little like ls -l, but defaults to recursive, and is colorized. For every extension type (defined as everything after the last dot, unless it starts with a dot) it lists the count and the total size. If there's only one file of that type, the output is a little different. I'd have to find the code and compile it, but it would be nice to release this into the wild.


I'd like to suggest a shell script I wrote long ago to manipulate dates and for which I still don't know any available replacement.

It computes the number of days between two dates and can check whether a date exists or not.

$ datediff 2009-09-27 2009-10-27
$ datediff 2009-09-27 2008-09-27
$ datediff today
$ datediff today 0
$ datediff test 2009-02-31
$ datediff test 2009-02-28
$ datediff today -30

How about "ddiff" (and more nice tools) in dateutils at http://www.fresse.org/dateutils/ -- miriam_e
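With GNU date the two-date form can also be sketched in a couple of lines (comparing UTC midnights so DST cannot skew the count; the name datediff is just reused for the sketch, which does none of the existence checking the original script offers):

```shell
# datediff DATE1 DATE2: days between the two dates,
# negative if DATE2 is earlier than DATE1 (GNU date only)
datediff() {
    a=$(date -u -d "$1" +%s) || return 1
    b=$(date -u -d "$2" +%s) || return 1
    echo $(( (b - a) / 86400 ))
}
```

Note that GNU date happily normalizes dates like 2009-02-31 to 2009-03-03 instead of rejecting them, so the "test" mode needs more than this.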


I hope this is the right place to make propositions. vidir is by far the most powerful mass-renaming tool I am aware of, but unfortunately it is limited to terminal use. Sometimes I want to do mass renaming from a graphical file browser; at the moment this is possible by opening the current folder in a terminal and typing vidir. Many file browsers (e.g. nemo, nautilus, thunar) allow you to specify a command for mass-renaming purposes, which brings me to the following two limitations:

  • In most of these file-browsers the file names are given as URIs which vidir doesn't seem to understand

  • My EDITOR variable is, of course, set to vim; therefore it doesn't work from a graphical context. It would be nice if vidir understood a command-line flag like --editor, where one could set an editor (e.g. gvim) for the current session only

-- Mani

A suggestion regarding vidir: I would find it quite useful if vidir could work with version control system commands like:

svn mv
svn rm


git mv
git rm

Another improvement would be to add checkboxes to control which files should be managed by the VCS and which shouldn't.

-- David Riebenbauer

What I want to do with vidir is make it able to run arbitrary configured commands based on filename transformations. So that it can be configured to run git add, or svn rm or whatever. Also so that if you remove .gz from a filename, it's decompressed; adding .gz compresses it, etc. I have not figured out a configuration language that is flexible enough to handle all these cases though. --Joey

What about a Makefile-style stem-based configuration file, like the following?

%: %.gz
    gunzip $< -c > $@

%.gz: %
    gzip $< -c > $@

Plus additional special rules like .ADD .RM .WHATEVER: to specify the commands (although I'm not particularly interested in this, I only use git and tig is an excellent command-line utility to manage the repository). --G. Bilotta

How about adding copying ability to vidir? Items are recognized by the leading number in a line, right? A second instance of the same number should imply copying.

I tried using vidir like that today as I thought it had this feature already, but it doesn’t. Something like %s/\(.*\)one\(.*\)/&\r\1two\2 (in vim) should copy every file containing “one” and replace it with “two” in the copy. That’s not too hard to add as a feature, or is it? -- K. Stephan

vidir is great, but when editing and deciding to abandon the session, deleting the entire buffer should abort (like git commit messages) instead of deleting all the files. -- N.J. Thomas

You can undo all your changes (keep 'u' pressed until it goes back to the original state: it won't backtrack further). But I agree that a different way to do this should be offered. --G. Bilotta

One thing vidir needs is a -i option (“interactive”) to ask confirmation before action: it would first present a summary of the request (“the following commands will be issued: remove blah, remove blah, rename blahblah etc; proceed? yes/no/all”), aborting on ‘n’o, doing everything on ‘a’ll, or ask again before each action (yes/no/all the rest). --G. Bilotta

Another possible improvement would be to automatically create needed directories. For example, renaming dir/file to dir2/file will also create "dir2" to avoid a failing mv. This may be an option or a change to the default behavior, I'm not sure what's better. -- M. Poletti

Sometimes I feel vidir misses chmod capabilities. I would imagine this as a switch (--chmod):

$ vidir --chmod zshrc

1 0644 zshrc

/end example

Literals would be even better. -- poyo983


Throwing out ideas for a vidir implementation to support non-interactive editors like sed.
An actual 'seddir' (what the name implies) wouldn't be too useful; the best option, in my opinion, would be an 'eddir' that supports both kinds of editors.

Firstly, a '--' option, which passes the rest of the arguments to the editor, would be nice. That would be enough to simulate a 'seddir' of some kind: VISUAL=sed vidir foo -- -i 's/foo/bar/'
Note the sad fact that we need to rely on the GNU extension '-i'. An option (say -s) to use stdin/stdout instead of creating a temp file, would be handy.
Also, talking about 'visual' is quite misleading here. Maybe an option (say -e) to manually specify the program to run, and fall back to the variables?

Currently printf 's/foo/bar/\nw\nq\n' | VISUAL=ed vidir works, but there's no way to combine that with vidir - (file list on stdin).
An option (say -f) to specify a file that will be passed to the editor as stdin would be handy.

'-f [script-file]' and '-s' are mutually exclusive.
'-f' is default, uses a temp file, passes contents of the file (script) specified in its optional argument as stdin to the editor. Can be repeated to supply more files on stdin.
'-s' uses stdin/stdout with the editor, instead of a temp file. (Not related to the stdin/stdout of eddir itself.)
'-e editor' specifies the program to run as the 'editor'. Defaults to ${VISUAL:-${EDITOR:-vi}} on '-f' and sed on '-s'.
Other arguments are file names as usual. (Also, IMO there's no need for '-' to enable the stdin filelist, just always use it if it's nonempty.)
'--' ends processing of arguments, and passes the rest to the editor.

What still cannot be done is to have a script (like for ed) on stdin. But how often do you have a script coming from a pipe anyway? Scripts are kept in files, so use '-f script'.
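For the simplest cases, a 'seddir' needs no editor plumbing at all; here is a throwaway sketch (hypothetical, with no collision or directory handling, so don't point it at anything precious):

```shell
# seddir EXPR [DIR]: rename every entry in DIR (default .) by running
# its path through `sed EXPR`; entries whose names don't change are skipped
seddir() {
    expr=$1 dir=${2:-.}
    for f in "$dir"/*; do
        new=$(printf '%s\n' "$f" | sed "$expr")
        [ "$f" = "$new" ] || mv -- "$f" "$new"
    done
}
```

Note the expression sees the whole path, not just the basename, which an 'eddir' worth merging would want to fix.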

And here's some humour for you: Let's make a sedpe !!!


Like vidir, to edit where symbolic links point at.


New tool url2file as found at http://specs.dachary.org/url2file/ : the idea is to be able to do wc -l $(url2file http://foo.com/)

See the dog utility from the package by that name.


I'm suggesting a command that would run the following command if and only if the standard input is not empty. I often want this in crontabs, as in:

find . -name core | ifne mail -s "Core files found" root

This is a good idea, and included now.


A command that pages the stdout of a subcommand only if stdout is a tty, similar to the way git treats its stdout. For example:

autopage ls -l

works like "ls -l | pager" but

autopage ls -l >mifilelist

works correctly and writes the output of "ls -l" in "myfilelist". This command would be useful for aliases, so you could add git's autopaging to other commands like this:

alias ls="autopage ls"


Might the generic unix tool that's missing here really be a command-line version of "isatty"? I like to think generic; running a pager if isatty is a specific case that can be done with a small shell script. --Joey

The generic unix tool is "test -t 1", so this probably shouldn't go in moreutils. Still, I like git's autopaging capability and I miss it in commands like grep or ls. On the other hand, I don't think every command should check whether its stdout is a tty and run a pager if its output is larger than a screen. I think this should be done in a more general way, like globbing. --Vicho

You can add this to your .bashrc:

autopage () { "$@" | more ; }

isatty () { local -i fd=${1:-1} ; eval "tty 0>&${fd}" ; }
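more already degrades to cat when its output is not a terminal, but the pipe still swallows the command's exit status. A variant that only inserts the pager when stdout really is a tty:

```shell
# page only on a terminal; otherwise run the command untouched,
# preserving its exit status
autopage() {
    if [ -t 1 ]; then
        "$@" | "${PAGER:-less}"
    else
        "$@"
    fi
}
```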

moreutils tricks

Look at this poor man's hex editor, made with moreutils (and xxd)

xxd $file | vipe | xxd -r | sponge $file

For me 'vipe' is the killer app in moreutils, I hardly use the other (but I think it just takes getting used to the new repertoire). --ulrik.

That's sweet. I never thought of using vipe for that, which is of course, the point. Amusingly, for me, vipe is one of the rarer used tools. :-) --Joey

I'm sure you guys heard of mmv, and maybe zmv and others. Well, vidir seriously makes them look pathetic. Although it requires user interaction (and see "seddir" above, regarding that). --TaylanUB

URI to local path converter

This problem pops up in shell scripting for Nautilus and some other GNOME applications. I'm proposing a tool to convert between file:/// URIs and local paths, with the proper encoding conversions included, of course.


$ tourl "/tmp/my dir/idx.html"
$ tolocal "file://localhost/home/ben/Documents/report%202008.pdf"
/home/ben/Documents/report 2008.pdf

Just a suggestion, easily solved using most high-level desktop APIs. Is there already a shell tool? --ulrik

I imagine there's a perl one-liner that can do it. It doesn't seem generic enough for moreutils. --Joey
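Something along those lines, shelling out to Python rather than Perl for the percent-escaping (a sketch; the names tourl/tolocal are taken from the example above):

```shell
# tourl PATH: local path -> file:// URI (the path need not exist)
tourl() {
    python3 -c 'import sys, pathlib; print(pathlib.Path(sys.argv[1]).absolute().as_uri())' "$1"
}

# tolocal URI: file:// URI -> decoded local path
tolocal() {
    python3 -c 'import sys, urllib.parse as u; print(u.unquote(u.urlparse(sys.argv[1]).path))' "$1"
}
```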


I have a suggestion that may be a bit trivial, but it would be highly useful and make life a bit easier for the hacker: body. It belongs in a group with head and tail (hence the name). It outputs either one line from a file or a range of lines. Sure, you could say

head -n 42 file | tail -n 1

but isn't

body file:42

much more elegant? Note how you can just copy and paste locations straight from your compiler's error messages. Specifying ranges as

body file:40-44

will also save you some math that would slow down your workflow -- Jann

[Two weeks later:] This command can be emulated with sed:

sed -n 42p file
sed -n 40,44p file
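The file:line syntax itself is then only a small wrapper around that sed call (a sketch; it assumes the file name contains no colon):

```shell
# body FILE:LINE or FILE:FIRST-LAST: print one line or a range of lines
body() {
    case $1 in
        *:*-*) sed -n "$(echo "${1#*:}" | tr - ,)p" "${1%%:*}" ;;
        *:*)   sed -n "${1#*:}p" "${1%%:*}" ;;
        *)     echo "usage: body FILE:LINE[-LINE]" >&2; return 1 ;;
    esac
}
```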


about cattail

It can be done with "tail -fc+1 FOO".

This is missing the key part, which is exiting when the file stops being written to. --Joey

"null_tee" for sponge

I wrote a system that forks a lot of processes, redirecting stdout to YYYYMMDD-progname-stdout.txt and stderr to YYYYMMDD-progname-stderr.txt for each process. I also wrote a little utility I called 'null_tee' that would copy stdin to a file if there was data, else it would not create the file at all. With a spawn command of 'progname 2>&1 1>YYYYMMDD-progname-stdout.txt | null_tee -o YYYYMMDD-progname-stderr.txt', I would have all the stdout.txt files but only the stderr.txt files of programs that had actually sent stuff to stderr. In production mode, I redirected stdout to /dev/null so the only files that appeared were from unusual errors. It worked out quite well in practice.

It would be great if sponge had this same feature as an option: only create the file if something really came in from stdin. --KevinL

Use "foobar | ifne sponge file"? --TaylanUB


According to ckester's comment, should we remove parallel from moreutils? The main website of moreutils says: "I'm always interested to add more to the collection, as long as they're suitably general-purpose, and don't DUPLICATE other well-known tools." -- hong

Need to resolve the name conflict with GNU parallel (http://www.gnu.org/software/parallel/). As far as I can tell, these are two distinct implementations addressing the same general problem space. --ckester

PATCH: make -i and -n not mutually exclusive in parallel

I wrote some patches for parallel (https://github.com/ghuls/moreutils-parallel/commits/master), so now -i and -n are not mutually exclusive. Also parallel -h displays all options with a small description. The manual page has some additional examples too.

parallel -j 3 -i mv {}.JPG {}.jpeg -- photo1 photo2 photo3

  This  runs three mv commands at the same time.  In each mv command, the
  {} strings will be replaced by the current argument.  For the first  mv
  command, the executed command line will be: mv photo1.JPG photo1.jpeg

parallel -j 2 -n 2 -i mv {}.JPG {}.jpeg -- photo1 photo2 photo3 photo4

  This runs two mv commands at the same time.  In each mv command, the {}
  strings will be replaced by different arguments.  For the first mv com‐
  mand, the executed command line will be: mv photo1.JPG photo2.jpeg

-- Gert Hulselmans

"sponge" being careful with symlinks

I wonder whether "sponge" will preserve symlinks and replace the content of a symlinked file with the new content (as expected from "editing in place" or from the usual shell redirections to symlinks), as asked in a question at that Q&A site.

If yes, then it's one more advantage of "sponge" over "perl -i" (which is said not to follow symlinks according to its documentation) or "sed -i".


(Not sure whether I should add suggestions at the top or bottom of this page.) We already have fold in standard distributions, which wraps lines to a specified length, but we don't have the reverse, to unwrap or unfold lines. I find this especially annoying when reading ebooks which have been pre-folded, as Project Gutenberg texts are, for instance. If I try to read them on a narrower screen (e.g. my Palm computer) it really screws up the text, so I put together this deceptively simple-looking one-liner:

sed ':a; /^$/!N; /\n$/!s/\n/\ /; ta' "$1" >"$1.unfolded"

It really needs some extra functionality though, for instance it should leave collections of short lines alone because they're probably verse (but how short?), and indented text currently gets big spaces in the middle of text when unfolded. Of course to make this more general-purpose the redirection to the '.unfolded' file should probably be removed.

This sed script is quite tricky and took me a lot of fiddling to come up with it. Here is a quick description:

Start with the label (:a) to mark the beginning of a loop. The semicolon separates sed commands. The first thing in the loop is a condition (/^$/!) testing if the current line is NOT blank then use the N command to append the next line to the current one. Next we have a condition (/\n$/!) that lets the rest of the loop work only if a newline is NOT at the end of line. If the appended line is empty the newline on the current line will be at the end of the line in the buffer, but if the appended line has text on it the newline will be wedged between text and won't be at the end, in which case the newline gets substituted with a space. And the 'ta' closes the loop.
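For the basic case (reflowing paragraphs separated by blank lines), awk's paragraph mode is a less cryptic alternative; the verse and indentation heuristics would still have to be bolted on:

```shell
# unfold [FILE...]: join the lines of each blank-line-separated
# paragraph into one line; reads stdin when no files are given
unfold() {
    awk 'BEGIN { RS = ""; ORS = "\n\n" } { gsub(/\n/, " "); print }' "$@"
}
```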


I don't think this is needed. Simply do this:

file_sig="`shasum $file`"
[ "`shasum $file`" = "$file_sig" ] && do_sth_else


What ifdata really needs is IPv6 support! It has code to print IPv6 addresses, but no switch to request them explicitly.



Yet another suggestion.

Occasionally I'm interested in the ancestry of a process. For this purpose I have a little script in my ~/bin/. The name pretty much tells all about it. You give it a pid and it prints the pid of the parent, grandparent, and so on, up to init.

I have uploaded the code I use to github: https://gist.github.com/4235657

A typical example of how I use it: pidof-ancestors $$ | xargs ps u

It's not portable (Linux specific) at the moment. However, it could be fixed easily by using ps -o ppid= $pid instead of reading from /proc/$pid/stat.

What do you think?

Bye, -- Bence Romsics
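The portable variant described above fits in a few lines (a sketch using only `ps -o ppid=`):

```shell
# pid_ancestors PID: print PID, its parent, grandparent, ... up to init
pid_ancestors() {
    pid=$1
    while [ "$pid" -gt 0 ] 2>/dev/null; do
        echo "$pid"
        if [ "$pid" -eq 1 ]; then break; fi
        pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    done
}
```

Usage as above: `pid_ancestors $$ | xargs ps u`.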


I do a lot of scripts that interact with the user, and over the years have developed tools around tput to manage the screen. The piece that's lacking is an enhanced read command. The bash read -e creates some screen problems in that it will allow the cursor to backspace and left arrow out of the field, and jump to the left margin. This will also disturb coloring on the screen.

So, the first thing I'd like is a mode that allows the width (columns, min=1, max=console cols) and height (lines, min=1, max=console rows) to be defined, and to constrain the cursor and all screen handling to that area. Perhaps with ncurses? It should use the current colors, but I suppose having those as options wouldn't hurt either. It should also be able to either limit the input to rows*cols chars, or provide scrolling when input data exceeds the defined area.

With the cursor and screen under control, the other thing I'd like to see is input masks. For example, when inputting a US-style telephone number, I'd like to define --mask="(###) ###-####". Then eread would display the mask, sans # markers, and accept only numbers in their places. An option to store (i.e., output to stdout or to a defined file) either the fully masked result (i.e., (123) 456-7890) or the input only (i.e., 1234567890) would be useful. Other masks may be the basic key/word list, date/time, filespec, etc.

There are many apps with such schemes, so the masking options should be relatively easy to borrow. Implementation, of course, is the problem, although I suspect the core code for the input masking probably exists in free/open space. The screen and keyboard handling might also. As I'm not a C coder, it is beyond my skills to pull it all together.

If we're dreaming -- auto-accept on length, key, or timeout, and other common bells and whistles, would be good things, too.

As a rough example:

# position cursor to 12,5
# display prompt
# set input color
myNumber="$(eread --auto-skip --mask="{us-phone}" --store-input --default="1234567890")"
# reposition to fetch notes
myNotes="$(eread --accept="^W,F10" --mask="{printable}" --default="$myNotes" --rows=5 --cols=50 --scroll)"

I would expect the cursor to be pushed/popped back to its starting position, and if possible the console state (colors, primarily) retained. If not, easy enough to do in script.

Performance isn't such a concern with this command. It's not like it's going into a loop and doing a few thousand iterations. It can be big and fat. So long as it loads within the span of a keypress or three, that should be sufficient.

I know dialog et al could be used; however, what I don't like about dialog is that it is a dialog manager that expects to put boxes all over my console. I also have not found the magic to convince dialog to create a 1-row by x-col input. It's also not a masking editor, for the most part. And provides a bunch of features I'm not usually interested in. What I want is an enhanced read that doesn't add to or change my display, except as I tell it to, that can also wrangle input into a mask and other constraints.
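The mask-to-output mapping at least is trivial shell; here is a POSIX sketch of just that piece (`mask_apply` is a made-up name; the hard part, constrained interactive editing, is exactly what would need ncurses):

```shell
# mask_apply MASK INPUT: substitute each '#' in MASK with the next
# character of INPUT, e.g. mask_apply '(###) ###-####' 1234567890
mask_apply() {
    mask=$1 input=$2 out=""
    while [ -n "$mask" ]; do
        c=${mask%"${mask#?}"}        # first character of mask
        mask=${mask#?}
        if [ "$c" = "#" ]; then
            out="$out${input%"${input#?}"}"   # next character of input
            input=${input#?}
        else
            out="$out$c"
        fi
    done
    printf '%s\n' "$out"
}
```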




It would be nice to have an option to add a blank line or other string as an indicator that time has elapsed. Seeing blank lines is easier on the eyes than parsing numbers when your goal is just to see clusters of debug info that happened around the same time. e.g.

testprogram | ts -b 0.1

would break into a different cluster whenever the time from one line to the next exceeds 0.1 second, and there could be another option to set the break string to something other than "\n", which is the default.
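Until ts grows such an option, the clustering can be approximated downstream of ts '%.s' (fractional epoch seconds) with awk; `tsgap` is a made-up name for the sketch:

```shell
# tsgap [SECONDS]: print a blank line whenever the gap between the
# timestamps of successive lines (first field, epoch seconds as from
# `ts %.s`) exceeds the threshold; default 1 second
tsgap() {
    awk -v gap="${1:-1}" 'NR > 1 && $1 - prev > gap + 0 { print "" }
                          { prev = $1; print }'
}
```

Usage per the example above: `testprogram | ts '%.s' | tsgap 0.1`.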

It would be nice to show both stdout and stderr, and distinguish them. (Otherwise one needs 2>&1 to combine stdout and stderr.) But that could alternatively be done with a colorizer, which would be a good separate tool to have.

I'd like to have an environment variable TS_FORMAT to avoid typing a custom format every time I use ts. The default is very inconvenient to read for a non-native English speaker; I personally almost always use '%H:%M:%S'.

It would be helpful for performance testing to have an option to include the current instantaneous CPU load along with the time. A specific example is for output from make. Graphing the load average over time would help identify where a long and complex build is failing to parallelize.
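A rough way to get that today is to prefix lines with the load average yourself (Linux-only, reads /proc/loadavg; `loadts` is a made-up name, and a real ts option could fold the value into the timestamp):

```shell
# loadts: prefix each input line with the current 1-minute load average
loadts() {
    while IFS= read -r line; do
        read -r load1 rest < /proc/loadavg
        printf '%s %s\n' "$load1" "$line"
    done
}
```

E.g. `make -j8 2>&1 | loadts | ts` gives both a timestamp and the load per line.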

Suppress sponge warnings for stdout?

I have the following two scripts, foo.py:


#!/usr/bin/env python

import subprocess

subprocess.check_call(['./bar.sh'])  # runs bar.sh below

and bar.sh:

#!/bin/sh
cat /usr/share/dict/words | sponge | less

Running foo.py and then immediately exiting 'less' by pressing 'q' causes sponge to print an error:

error writing buffer to output file: Broken pipe

Curiously, running bar.sh directly doesn't have this problem. I'm not sure what Python is doing to the execution environment to cause this, but would you consider changing write_buff_out to suppress its warning spew if it's writing to stdout?



  • Please can the manpage warn about the non-preservation of order of the collected stdout from child processes?
    On first inspection with pee md5sum sha256sum vs pee sha256sum md5sum, on a fast/idle machine, it looks like the stdouts are collected and then written. But more complex examples show that the stdouts may be interleaved even when there have been no newlines yet, so pee 'md5sum > foo.md5' 'sha256sum > foo.sha256' is a much better solution. Thanks -- m
    ps. no bytes were harmed in our discovery of this behaviour, because we've seen interleaved data too many times before! :-]

Vipe without stdin

Hi Joey, thanks for your work on moreutils. I use vipe all the time. As someone else said, this is the killer app :) Sometimes I would like to use vipe without receiving any input first, so that I can set up some text for piping to another program. Kind of like here-docs or echo, except you get to edit the text with vim first. A simple example could look something like this:

vipe | wc -c

More often, I might want to do something like this:

vipe | datamash sum 1

Now, vipe can sort of do this, by using either a here-string (vipe <<< "" | wc -c) or by pressing Ctrl-D as soon as vipe launches. I wondered if we could just skip the Ctrl-D step and have vipe detect when no stdin has been given to it, and instead jump straight into an empty vim document? - TR

Uhmm, "vipe </dev/null | datamash sum 1"? -DG

Vipe with arguments for editor commands involving whitespaces

I sometimes want to use vipe while passing arguments to my editor (let's say vim +startinsert). That's easy enough; I just need to run it with the EDITOR variable locally set to that. But if the arguments contain whitespace (for instance vim -c 'norm o'), this trick doesn't work anymore, because the EDITOR variable is split on whitespace. For that purpose, it would be nice to check, as a last case after EDITOR and VISUAL, whether any arguments have been passed to vipe and, if that is the case, to set @editor to @ARGV. We could then do vipe vim -c 'norm o' for those cases that require more flexibility. --Vej Kse


Quoting rum soaked space hobo:

It is a fun bit of trivia that pre-Bourne shell, as in Ken Thompson's rough hack at a shell, had no flow-control statements built in. It had some basic one-line ternary logic operators I think, but it was extremely limited. So if you wanted to branch somewhere else in your script, you had to shell out to /bin/goto.

Yes, that's right. You forked a subprocess which would run goto and the parent process would resume from the new location.

How, you ask? Well, the subprocess would inherit all file descriptors, including the fd for the script itself. All it needed to do was seek() that; and when it exited, the parent would drop the needle where the subprocess had left it and play the chosen tune.

This could be done without shell support, by the goto command parsing the shebang and execing it on /dev/fd/n seeked appropriately. (Local variables would be a small problem.) --Joey

Feature suggestion: vidir ---> non-interactive use via injection of vim macros and other keystrokes

Let vidir read the file list into the vim buffer in ls -rt (sorted) order. Currently it just creates the file list buffer in arbitrary order.

Case 1 - If we set EDITOR="vi", then

vidir << EOF
:%s/string/replace/g
EOF

!!! IT WORKS !!! --> non-interactively

Case 2 - I have files whose renaming requires a regex that works only on a sorted list of files. vidir does not consider the sorted order of files, and requires an additional ls -rt | vidir - (which works, but is interactive and cannot be used for batch processing) to edit filenames in sorted order.

But the above doesn't allow for keystroke injection as in Case 1, e.g.

ls -rt | vidir - << EOF
:%s/string/replace/g
EOF

!!! DOES NOT WORK !!! ---> as the input conflicts with the heredoc

ls -rt | vim - -c ':%s/string/replace/g^[:wx'

!!! The above works !!! ----> but how can we achieve it in vidir?

I tried setting EDITOR="vim - -c 'command'", but that does not work.

Current workaround:

printf 'regex' >> /tmp/script.vim

export EDITOR="vim -s /tmp/script.vim"

ls -rt | vidir -

export EDITOR="vi"

rm /tmp/script.vim

I know moreutils is not being actively maintained anymore, but please just add this single feature to vidir!