These are answers to frequently asked questions on channel #bash on the [[http://www.freenode.net/|freenode]] IRC network. These answers are contributed by the regular members of the channel (originally heiner, and then others including greycat and r00t), and by users like you. If you find something inaccurate or simply misspelled, please feel free to correct it!
["BASH"] is a BourneShell compatible shell, which adds many new features to its ancestor. Most of them are available in the KornShell, too. If a question is not strictly shell specific, but rather related to Unix, it may be in the UnixFaq. [[BASH]] is a BourneShell compatible shell, which adds many new features to its ancestor. Most of them are available in the KornShell, too.  The answers given in this FAQ may be slanted toward Bash, or they may be slanted toward the lowest common denominator Bourne shell, depending on who wrote the answer. In most cases, an effort is made to provide both a portable (Bourne) and an efficient (Bash, where appropriate) answer. If a question is not strictly shell specific, but rather related to Unix, it may be in the UnixFaq.
This FAQ assumes a certain level of familiarity with basic shell script syntax. If you're completely new to Bash or to the Bourne family of shells, you may wish to start with the (incomplete) BashGuide.

If you can't find the answer you're looking for here, try BashPitfalls.
Chet Ramey's official [[http://tiswww.case.edu/php/chet/bash/FAQ|Bash FAQ]] contains many technical questions not covered here.

<<TableOfContents>>
[[Anchor(faq1)]]
== How can I read a file line-by-line? ==
{{{
    while read line
    do
        echo "$line"
    done < "$file"
}}}
If you want to operate on individual fields within each line, you may supply additional variables to {{{read}}}:

{{{
    # Input file has 3 columns separated by white space.
    while read first_name last_name phone; do
      ...
    done < "$file"
}}}

If the field delimiters are not whitespace, you can set {{{IFS}}} (input field separator):

{{{
    while IFS=: read user pass uid gid gecos home shell; do
      ...
    done < /etc/passwd
}}}

Also, please note that you do ''not'' necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

{{{
    while read first_name last_name junk; do
      ...
    done <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'
    # Inside the loop, first_name will contain "Bob", and
    # last_name will contain "Smith". The variable "junk" holds
    # everything else.
}}}

The {{{read}}} command modifies each line read; by default it strips all leading and trailing whitespace characters (blanks and tab characters, or whatever is currently in {{{IFS}}}). If that is not desired, the {{{IFS}}} variable has to be cleared:

{{{
    OIFS=$IFS; IFS=
    while read line
    do
        echo "$line"
    done < "$file"
    IFS=$OIFS
}}}

As a feature, the {{{read}}} command concatenates lines that end with a backslash '\' character to one single line. To disable this feature, KornShell and ["BASH"] have {{{read -r}}}:

{{{
    OIFS=$IFS; IFS=
    while read -r line
    do
        echo "$line"
    done < "$file"
    IFS=$OIFS
}}}

Note that reading a file line by line this way is ''very slow'' for large files. Consider using e.g. ["AWK"] instead if you run into performance problems.

One may also read from a command instead of a regular file:

{{{
    some command | while read line; do
       other commands
    done
}}}

That may cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used outside the loop; in that case, see [#faq24 FAQ 24], or use process substitution like:

{{{
    while read line; do
        other commands
    done < <(some command)
}}}

Sometimes it's useful to read a file into an array, one array element per line. You can do that with the following example:

{{{
    O=$IFS IFS=$'\n' arr=($(< myfile)) IFS=$O
}}}

This temporarily changes the Input Field Separator to a newline, so that word splitting of the unquoted {{{$(< myfile)}}} expansion breaks the file's contents only at newlines; each line becomes one element of the array {{{arr}}}. Then it sets {{{IFS}}} back to what it was before.

This same trick works on a stream of data as well as a file:

{{{
    O=$IFS IFS=$'\n' arr=($(find . -type f)) IFS=$O
}}}

Of course, this will blow up in your face if the files contain newlines; see [#faq20 FAQ 20] for hints on dealing with such files.

[[Anchor(faq2)]]
== How can I store the return value of a command in a variable? ==
Well, that depends on exactly what you mean by that question. Some people want to store the command's ''output'' (either stdout, or stdout + stderr); and others want to store the command's ''exit status'' (0 to 255, with 0 typically meaning "success").

If you want to capture the output:

{{{
    var=$(command) # stdout only; stderr remains uncaptured
    var=$(command 2>&1) # both stdout and stderr will be captured
}}}

If you want the exit status:

{{{
    command
    var=$?
}}}

If you want both:

{{{
    var1=$(command)
    var2=$? # the assignment to var1 has no effect on command's exit status, which is still in $?
}}}

If you don't ''actually'' want the exit status, but simply want to take an action upon success or failure:

{{{
    if command
    then
        echo "it succeeded"
    else
        echo "it failed"
    fi
}}}

[[Anchor(faq3)]]
== How can I insert a blank character after each character? ==
{{{
    sed 's/./& /g'
}}}

Example:

{{{
    $ echo "testing" | sed 's/./& /g'
    t e s t i n g
}}}

[[Anchor(faq4)]]
== How can I check whether a directory is empty or not? ==
We can test for the exit status of ls:

{{{
    if ls "$directory"/file.txt; then
         echo "file.txt found!"
    else
         echo "file.txt not found."
    fi
}}}


The following idea relies on the directory's hard link count (on traditional Unix filesystems, an empty directory has exactly two links: its own name and "."; every subdirectory adds one more). Note that this only detects the absence of ''subdirectories''; a directory containing only regular files still has a link count of 2:

{{{
    find "$dir" -maxdepth 0 -links 2 \
     -exec echo "empty directory: {}" \;
}}}

Conversely, to find a non-empty directory:

{{{
    find "$dir" -maxdepth 0 -links +2 \
     -exec echo "directory is non-empty" \;
}}}

Most modern systems have an "ls -A" which explicitly omits "." and ".." from the directory listing:

{{{
    if [ -n "$(ls -A somedir)" ]
    then
        echo directory is non-empty
    fi
}}}

This can be shortened to:

{{{
    if [ "$(ls -A somedir)" ]
    then
        echo directory is non-empty
    fi
}}}

Another way, using Bash features, involves setting the special shell option {{{nullglob}}}, which changes the behavior of globbing: a pattern that matches nothing expands to nothing instead of to itself. Some people prefer to avoid this approach, because it's so drastically different and could severely alter the behavior of scripts.

Nevertheless, if you're willing to use this approach, it does greatly simplify this particular task:

{{{
    shopt -s nullglob
    if [[ -z $(echo *) ]]; then
        echo directory is empty
    fi
}}}

It also simplifies various other operations:

{{{
    shopt -s nullglob
    for i in *.zip; do
        blah blah "$i" # No need to check $i is a file.
    done
}}}

Without the {{{shopt}}}, that would have to be:

{{{
    for i in *.zip; do
        [[ -f $i ]] || continue # If no .zip files, i becomes *.zip
        blah blah "$i"
    done
}}}

(You may want to use the latter anyway, if there's a possibility that the glob may match directories in addition to files.)

[[Anchor(faq5)]]
== How can I convert all upper-case file names to lower case? ==
{{{
# tolower - convert file names to lower case

for file in *
do
    [ -f "$file" ] || continue # ignore non-existing names
    newname=$(echo "$file" | tr '[A-Z]' '[a-z]') # lower-case version of file name
    [ "$file" = "$newname" ] && continue # nothing to do
    [ -f "$newname" ] && continue # do not overwrite existing files
    mv "$file" "$newname"
done
}}}

Purists will insist on using
{{{
tr '[[:upper:]]' '[[:lower:]]'
}}}
in the above code, in case of non-ASCII (e.g. accented) letters in locales which have them.

This technique can also be used to replace all unwanted characters in a file name e.g. with '_' (underscore). The script is the same as above, only the "newname=..." line has changed.

{{{
# renamefiles - rename files whose name contain unusual characters
for file in *
do
    [ -f "$file" ] || continue # ignore non-existing names
    newname=$(echo "$file" | sed 's/[^a-zA-Z0-9_.]/_/g')
    [ "$file" = "$newname" ] && continue # nothing to do
    [ -f "$newname" ] && continue # do not overwrite existing files
    mv "$file" "$newname"
done
}}}

The character class in {{{[]}}} contains all allowed characters; modify it as needed.

If you have the utility "mmv" on your machine, you could simply do

{{{
mmv "*" "#l1"
}}}


[[Anchor(faq6)]]
== How can I use a logical AND in a shell pattern (glob)? ==
That can be achieved through the !() extglob operator. You'll need {{{extglob}}} set. It can be checked with:
{{{
$ shopt extglob
}}}

and set with:
{{{
$ shopt -s extglob
}}}

To warm up, we'll move all files starting with foo AND not ending with .d to directory foo_thursday.d:
{{{
$ mv foo!(*.d) foo_thursday.d
}}}

For the general case:

Delete all files containing Pink_Floyd AND not containing The_Final_Cut:

{{{
$ rm !(!(*Pink_Floyd*)|*The_Final_Cut*)
}}}
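
The same trick generalizes to a plain AND of two patterns via De Morgan's law (A AND B is the same as NOT (NOT A OR NOT B)). A small sketch, listing all files whose names contain both "foo" and "bar" (the two strings are just placeholders):

{{{
$ shopt -s extglob
$ ls !(!(*foo*)|!(*bar*))
}}}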

By the way: this kind of pattern can be used with KornShell and KornShell93, too. There it doesn't have to be enabled; extended patterns are available by default.

[[Anchor(faq7)]]
== Is there a function to return the length of a string? ==
The fastest way, not requiring external programs (but usable only with ["BASH"] and KornShell):
{{{
${#varname}
}}}

or

{{{
expr "$varname" : '.*'
}}}

({{{expr}}} prints the number of characters matching the pattern {{{.*}}}, which is the length of the string)

or

{{{
expr length "$varname"
}}}

(for a BSD/GNU version of {{{expr}}}. Do not use this, because it is not ["POSIX"]).

[[Anchor(faq8)]]
== How can I recursively search all files for a string? ==
On most recent systems (GNU/Linux/BSD), you would use {{{grep -r pattern .}}} to search all files from the current directory (.) downward.

You can use {{{find}}} if your {{{grep}}} lacks -r:
{{{
    find . -type f -exec grep -l "$search" '{}' \;
}}}

The {} characters will be replaced with the current file name.

This command is slower than it needs to be, because {{{find}}} will call {{{grep}}} with only one file name at a time, resulting in many {{{grep}}} invocations (one per file). Since {{{grep}}} accepts multiple file names on the command line, {{{find}}} can be instructed to call it with several file names at once:
{{{
    find . -type f -exec grep -l "$search" '{}' \+
}}}

The trailing '+' character instructs {{{find}}} to call {{{grep}}} with as many file names as possible, saving processes and resulting in faster execution. This example works for POSIX {{{find}}}, e.g. with Solaris.

GNU find uses a helper program called {{{xargs}}} for the same purpose:
{{{
    find . -type f -print0 | xargs -0 grep -l "$search"
}}}

The {{{-print0}}} / {{{-0}}} options ensure that any file name can be processed, even ones containing blanks, TAB characters, or new-lines.

90% of the time, all you need is:

Have grep recurse and print the lines (GNU grep):
{{{
    grep -r "$search" .
}}}

Have grep recurse and print only the names (GNU grep):
{{{
    grep -r -l "$search" .
}}}

The {{{find}}} command can be used to run arbitrary commands on every file in a directory (including sub-directories). Replace {{{grep}}} with the command of your choice. The curly braces {} will be replaced with the current file name in the case above.

(Note that they must be escaped in some shells, but not in ["BASH"].)

[[Anchor(faq9)]]
== My command line produces no output: tail -f logfile | grep 'ssh' ==
Most standard Unix commands buffer their output when used non-interactively. This means that they don't write each character (or even each line) as soon as it is ready, but collect a larger amount (e.g. 4 kilobytes) before printing it. In the case above, the {{{tail}}} command buffers its output, and therefore {{{grep}}} only gets its input in e.g. 4K blocks.

Unfortunately there's no easy general solution to this, because the behaviour of the standard programs would need to be changed. (But see the bottom of this section before taking "no easy solution" to heart.)

Some programs provide special command line options for this purpose, e.g.

||grep (e.g. GNU version 2.5.1)||{{{--line-buffered}}}||
||sed (e.g. GNU version 4.0.6)||{{{-u,--unbuffered}}}||
||awk (some GNU versions)||{{{-W interactive, or use the fflush() function}}}||
||tcpdump, tethereal||{{{-l}}}||
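
For awk, the {{{fflush()}}} route from the table looks like this; a small sketch, assuming an awk that provides {{{fflush()}}} (gawk, mawk and recent nawk do) and a hypothetical output file name. The explicit flush matters when awk's output goes to a file or another pipe rather than a terminal:

{{{
    tail -f logfile | awk '/ssh/ { print; fflush() }' >> ssh-lines.log
}}}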

The {{{expect}}} package (http://expect.nist.gov/) has an {{{unbuffer}}} example program, which can help here. It disables buffering for the output of a program.

Example usage:

{{{
    unbuffer tail -f logfile | grep 'ssh'
}}}

There is another option when you have more control over the creation of the log file. If you would like to {{{grep}}} the real-time log of a text interface program which does buffered session logging by default (or you were using {{{script}}} to make a session log), then try this instead:

{{{
   $ program | tee -a program.log

   In another window:
   $ tail -f program.log | grep whatever
}}}

Apparently this works because {{{tee}}} produces unbuffered output. This has only been tested on GNU {{{tee}}}, YMMV.

A solution to this is to use the 'less' command in follow mode. This is simple to do!
{{{
   $ less program.log
}}}
Then enter your search pattern ({{{/}}} starts a search in less, as in vi):
{{{
   /ssh
}}}

Next, put less into follow mode by pressing Shift+F.

That's all there is to it!

[[Anchor(faq10)]]
== How can I recreate a directory structure, without the files? ==
With the {{{cpio}}} program:
{{{
    cd "$srcdir"
    find . -type d -print | cpio -pdumv "$dstdir"
}}}

or with GNU-{{{tar}}}, and less obscure syntax:

{{{
    cd "$srcdir"
    find . -type d -print | tar c --files-from - --no-recursion | tar x --directory "$dstdir"
}}}

This creates a list of directory names with find, non-recursively adds just the directories to an archive, and pipes it to a second tar instance to extract it at the target location.
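
If {{{rsync}}} is available, a filter that copies only directories achieves the same effect. A small sketch using rsync's filter rules:

{{{
    rsync -a -f"+ */" -f"- *" "$srcdir"/ "$dstdir"/
}}}

The first rule includes every directory, the second excludes everything else.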

[[Anchor(faq11)]]
== How can I print the n'th line of a file? ==
The dirty (but not quick) way would be {{{sed -n ${n}p "$file"}}} but this reads the whole input file, even if you only wanted the third line.

The following {{{sed}}} command line reads a file printing nothing (-n). At line $n the command "p" is run, printing it, with a "q" afterwards: quit the program.

{{{
    sed -n "$n{p;q;}" "$file"
}}}

[[Anchor(faq12)]]
== A program (e.g. a file manager) lets me define an external command that an argument will be appended to - but I need that argument somewhere in the middle... ==
{{{
    sh -c 'echo "$1"' -- hello
}}}
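
Here the appended argument becomes {{{$1}}} inside the {{{-c}}} script, and the {{{--}}} fills the {{{$0}}} slot, so you can place {{{"$1"}}} wherever you need it. For instance, with a hypothetical viewer that wants the file name in the middle of its options (the command name and options are only illustrative):

{{{
    # Configure the external command as:
    #     sh -c 'myviewer --input "$1" --fullscreen' --
    # The file manager appends the file name, which the inner script sees as $1:
    sh -c 'myviewer --input "$1" --fullscreen' -- /path/to/some/file
}}}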

[[Anchor(faq13)]]
== How can I concatenate two variables? ==
There is no concatenation operator for strings (either literal or variable dereferences) in the shell. The strings are just written one after the other:

{{{
    var=$var1$var2
}}}

If the right-hand side contains whitespace characters, it needs to be quoted:

{{{
    var="$var1 - $var2"
}}}

Braces can be used to disambiguate the right-hand side:

{{{
    var=${var1}xyzzy
    # without braces, var1xyzzy would be interpreted as a variable name
    # Another equivalent way would be:
    var="$var1"xyzzy
}}}

CommandSubstitution can be used as well. The following line creates a log file name {{{logname}}} containing the current date, resulting in names like e.g. {{{log.2004-07-26}}}:

{{{
    logname="log.$(date +%Y-%m-%d)"
}}}

Appending data to the end of a string doesn't require any black magic, either.

{{{
    string="$string more data here"
}}}

Bash 3.1 has a new += operator that you may see from time to time:

{{{
    string+=" more data here" # EXTREMELY non-portable!
}}}

It's generally best to use the portable syntax.

[[Anchor(faq14)]]
== How can I redirect the output of multiple commands at once? ==
Redirecting the standard output of a single command is as easy as
{{{
    date > file
}}}

To redirect standard error:
{{{
    date 2> file
}}}

To redirect both:
{{{
    date > file 2>&1
}}}

In a loop or other larger code structure:
{{{
    for i in $list; do
        echo "Now processing $i"
        # more stuff here...
    done > file 2>&1
}}}

However, this can become tedious if the output of many programs should be redirected. If all output of a script should go into a file (e.g. a log file), the {{{exec}}} command can be used:

{{{
    # redirect both standard output and standard error to "log.txt"
    exec > log.txt 2>&1
    # all output including stderr now goes into "log.txt"
}}}

Otherwise command grouping helps:

{{{
    {
        date
        # some other command
        echo done
    } > messages.log 2>&1
}}}

In this example, the output of all commands within the curly braces is redirected to the file {{{messages.log}}}.

[[Anchor(faq15)]]
== How can I run a command on all files with the extension .gz? ==
Often a command already accepts several files as arguments, e.g.

{{{
    zcat *.gz
}}}

(On some systems, you would use {{{gzcat}}} instead of {{{zcat}}}. If neither is available, or if you don't care to play guessing games, just use {{{gzip -dc}}} instead.) If an explicit loop is desired, or if your command does not accept multiple filename arguments in one invocation, the {{{for}}} loop can be used:

{{{
    for file in *.gz
    do
        echo "$file"
        # do something with "$file"
    done
}}}

To do it recursively, you should use a loop, plus the find command:

{{{
    while read file; do
        echo "$file"
        # do something with "$file"
    done < <(find . -name '*.gz' -print)
}}}

For more hints in this direction, see [#faq20 FAQ #20], below. To see why the find command comes after the loop instead of before it, see [#faq24 FAQ #24].

[[Anchor(faq16)]]
== How can I remove a file name extension from a string, e.g. file.tar to file? ==
The easiest (and fastest) way is to use the following:

{{{
    $ name="file.tar"
    $ echo "${name%.tar}"
    file
}}}

The {{{${var%pattern}}}} syntax removes the pattern from the end of the variable. {{{${var#pattern}}}} would remove pattern from the start of the string. This could be used to rename all files from "*.doc" to "*.txt":

{{{
    for file in *.doc
    do
        mv "$file" "${file%.doc}".txt
    done
}}}

There's more to ParameterSubstitution, e.g. {{{${var%%pattern}, ${var##pattern}, ${var//old/new}}}}.

Note that this extended form of ParameterSubstitution works with ["BASH"], KornShell, KornShell93, but not with the older BourneShell. If the code needs to be portable to that shell as well, {{{sed}}} could be used to remove the filename extension part:

{{{
    for file in *.doc
    do
        base=`echo "$file" | sed 's/\.[^.]*$//'` # remove everything starting with last '.'
        mv "$file" "$base".txt
    done
}}}

Finally, some GNU/Linux/BSD systems offer a {{{rename}}} command. There are multiple different {{{rename}}} commands out there with contradictory syntaxes. Consult your man pages to see which one you have (if any).
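
The two most common variants look roughly like this; which one (if either) you have depends on your system, so treat these as illustrations and check your man page:

{{{
    # util-linux rename: replace the first occurrence of .doc in each name
    rename .doc .txt *.doc

    # perl-based rename (sometimes installed as prename): use a regex
    rename 's/\.doc$/.txt/' *.doc
}}}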

[[Anchor(faq17)]]
== How can I group expressions, e.g. (A AND B) OR C? ==
The TestCommand {{{[}}} uses parentheses () for expression grouping. Given that "AND" is "-a", and "OR" is "-o", the following expression

{{{
    (0<n AND n<=10) OR n=-1
}}}

can be written as follows:

{{{
    if [ \( $n -gt 0 -a $n -le 10 \) -o $n -eq -1 ]
    then
        echo "0 < $n <= 10, or $n=-1"
    else
        echo "invalid number: $n"
    fi
}}}

Note that the parentheses have to be quoted: \(, '(' or "(".

["BASH"] and KornShell have different, more powerful comparison commands with slightly different (easier) quoting:
 * ArithmeticExpression for arithmetic expressions, and
 * NewTestCommand for string (and file) expressions.

Examples:
{{{
    if (( (n>0 && n<10) || n == -1 ))
    then echo "0 < $n < 10, or n==-1"
    fi
}}}

or
{{{
    if [[ ( -f $localconfig && -f $globalconfig ) || -n $noconfig ]]
    then echo "configuration ok (or not used)"
    fi
}}}

Note that the distinction between numeric and string comparisons is strict. Consider the following example:
{{{
    n=3
    if [[ $n > 0 && $n < 10 ]]
    then echo "$n is between 0 and 10"
    else echo "ERROR: invalid number: $n"
    fi
}}}

The output will be "ERROR: ....", because in a ''string comparison'' "3" is bigger than "10": strings are compared character by character, and since "3" already comes after "1", the next character "0" is never even considered. Changing the square brackets to double parentheses {{{((}}} makes the example perform a numeric comparison and work as expected.

[[Anchor(faq18)]]
== How can I use numbers with leading zeros in a loop, e.g. 01, 02? ==
As always, there are different ways to solve the problem, each with its own advantages and disadvantages.

If there are not many numbers, BraceExpansion can be used:
{{{
    for i in 0{1,2,3,4,5,6,7,8,9} 10
    do
        echo $i
    done
}}}

Output:
{{{
01
02
03
[...]
10
}}}

This gets tedious for large sequences, but there are other ways, too. If the command {{{seq}}} is available, you can use it as follows:
{{{
    seq -w 1 10
}}}

or, for arbitrary numbers of leading zeros (here: 3):

{{{
    seq -f "%03g" 1 10
}}}

If you have the {{{printf}}} command (which is a Bash builtin, and is also POSIX standard), it can be used to format a number, too:

{{{
    for ((i=1; i<=10; i++))
    do
        printf "%02d " "$i"
    done
}}}

The KornShell and KornShell93 have the {{{typeset}}} command to specify the number of leading zeros:

{{{
    $ typeset -Z3 i=4
    $ echo $i
    004
}}}

Finally, the following example works with any BourneShell derived shell to zero-pad each line to three bytes:

{{{
i=0
while test $i -le 10
do
    echo "00$i"
    i=`expr $i + 1`
done |
    sed 's/.*\(...\)$/\1/g'
}}}

In this example, the number of '.' inside the parentheses in the {{{sed}}} statement determines how many total bytes from the {{{echo}}} command (at the end of each line) will be kept and printed.

One more addendum: in Bash 3, you can use:
{{{
printf "%03d \n" {1..300}
}}}

Which is slightly easier in some cases.

Also, you can combine the {{{printf}}} command with xargs and wget to fetch numbered files. Note that brace expansion happens before parameter expansion, so it does not work with variables; the range has to be written out literally:

{{{
printf "%03d\n" {1..100} | xargs -i% wget "$LOCATION/%"
}}}

Sometimes a good solution.

[[Anchor(faq19)]]
== How can I split a file into line ranges, e.g. lines 1-10, 11-20, 21-30? ==
Some Unix systems provide the {{{split}}} utility for this purpose:

{{{
    split --lines 10 --numeric-suffixes input.txt output-
}}}

For more flexibility you can use {{{sed}}}. The {{{sed}}} command can print e.g. the line number range 1-10:
{{{
    sed -n '1,10p'
}}}

This stops {{{sed}}} from printing each line ({{{-n}}}). Instead it only processes the lines in the range 1-10 ("1,10"), and prints them ("p"). {{{sed}}} still reads the input until the end, although we are only interested in lines 1 through 10. We can speed this up by making {{{sed}}} terminate immediately after printing line 10:

{{{
    sed -n -e '1,10p' -e '10q'
}}}

Now the command will quit after reading line 10 ("10q"). The {{{-e}}} arguments indicate a script (instead of a file name). The same can be written a little shorter:

{{{
    sed -n '1,10p;10q'
}}}

We can now use this to print an arbitrary range of a file (specified by line number):

{{{
file=/etc/passwd
range=10
firstline=1
maxlines=$(wc -l < "$file") # count number of lines
while ((firstline <= maxlines))
do
    ((lastline = firstline + range - 1))
    sed -n -e "$firstline,${lastline}p" -e "${lastline}q" "$file"
    ((firstline = lastline + 1))
done
}}}

This example uses ["BASH"] and KornShell ArithmeticExpressions, which older [wiki:Self:BourneShell Bourne shells] do not have. In that case the following example should be used instead:

{{{
file=/etc/passwd
range=10
firstline=1
maxlines=`wc -l < "$file"` # count number of lines
while [ $firstline -le $maxlines ]
do
    lastline=`expr $firstline + $range - 1`
    sed -n -e "$firstline,${lastline}p" -e "${lastline}q" "$file"
    firstline=`expr $lastline + 1`
done
}}}
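
As an alternative to driving {{{sed}}} in a loop, awk can split the file into chunks in a single pass. A small sketch (chunks of 10 lines, written to hypothetical files output-0, output-1, ...):

{{{
    awk 'NR % 10 == 1 { if (out != "") close(out); out = "output-" int(NR / 10) }
         { print > out }' /etc/passwd
}}}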

[[Anchor(faq20)]]
== How can I find and deal with file names containing newlines, spaces or both? ==
The preferred method is still to use

{{{
    find ... -exec command {} \;
}}}

or, if you need to handle filenames ''en masse'':

{{{
    find ... -print0 | xargs -0 command
}}}

for GNU {{{find}}}/{{{xargs}}}, or (POSIX {{{find}}}):

{{{
    find ... -exec command {} +
}}}

Use that unless you really can't.

Another way to deal with files with spaces in their names is to use the shell's filename expansion (["globbing"]). This has the disadvantage of not working recursively (except with zsh's extensions), but if you just need to process all the files in a single directory, it works fantastically well.

This example changes all the *.mp3 files in the current directory to use underscores in their names instead of spaces. (But it will not work in the original BourneShell.)

{{{
for file in *.mp3; do
    mv "$file" "${file// /_}"
done
}}}

You could do the same thing for all files (regardless of extension) by using

{{{
for file in *\ *; do
}}}

instead of *.mp3.

Another way to handle filenames recursively involves using the {{{-print0}}} option of {{{find}}} (a GNU/BSD extension), together with bash's {{{-d}}} option for read:

{{{
unset a i
while IFS= read -r -d $'\0' file; do
  a[i++]="$file" # or however you want to process each file
done < <(find /tmp -type f -print0)
}}}

The preceding example reads all the files under /tmp (recursively) into an array, even if they have newlines or other whitespace in their names, by forcing {{{read}}} to use the NUL byte (\0) as its word delimiter. Since NUL is not a valid byte in Unix filenames, this is the safest approach besides using {{{find -exec}}}.



[[Anchor(faq21)]]
== How can I replace a string with another string in all files? ==
{{{sed}}} is a good command to replace strings, e.g.

{{{
    sed 's/olddomain\.com/newdomain\.com/g' input > output
}}}

To replace a string in all files of the current directory:

{{{
    for i in *; do
        sed 's/old/new/g' "$i" > atempfile && mv atempfile "$i"
    done
}}}

GNU sed 4.x (but no other version of sed) has a special {{{-i}}} flag which makes the temp file unnecessary:

{{{
   for i in *; do
      sed -i 's/old/new/g' "$i"
   done
}}}

Those of you who have perl 5 can accomplish the same thing using this code:

{{{
    perl -pi -e 's/old/new/g' *
}}}

Recursively:

{{{
    find . -type f -print0 | xargs -0 perl -pi -e 's/old/new/g'
}}}

To replace, for example, all "unsigned" with "unsigned long", as long as it is not already followed by "int", "short", "long" or "char":

{{{
    perl -i.bak -pne 's/\bunsigned\b(?!\s+(int|short|long|char))/unsigned long/g' $(find . -type f)
}}}


Finally, here's a script that some people may find useful:

{{{
    :
    # chtext - change text in several files

    # neither string may contain '|' unquoted
    old='olddomain\.com'
    new='newdomain\.com'

    # if no files were specified on the command line, use all files:
    [ $# -lt 1 ] && set -- *

    for file
    do
        [ -f "$file" ] || continue # do not process e.g. directories
        [ -r "$file" ] || continue # cannot read file - ignore it
        # Replace string, write output to temporary file. Terminate script in case of errors
        sed "s|$old|$new|g" "$file" > "$file"-new || exit
        # If the file has changed, overwrite original file. Otherwise remove copy
        if cmp "$file" "$file"-new >/dev/null 2>&1
        then rm "$file"-new # file has not changed
        else mv "$file"-new "$file" # file has changed: overwrite original file
        fi
    done
}}}

If the code above is put into a script file (e.g. {{{chtext}}}), the resulting script can be used to change a text e.g. in all HTML files of the current and all subdirectories:

{{{
    find . -type f -name '*.html' -exec chtext {} \;
}}}

Many optimizations are possible:
 * use another {{{sed}}} separator character than '|', e.g. ^A (ASCII 1)
 * some implementations of {{{sed}}} (e.g. GNU sed) have an "-i" option that can change a file in-place; no temporary file is necessary in that case
 * the {{{find}}} command above could use {{{xargs}}}, or the built-in equivalent of POSIX find ({{{-exec chtext {} +}}}), to avoid starting one {{{chtext}}} process per file

Note: {{{set -- *}}} in the code above is safe with respect to files whose names contain spaces. The expansion of * by {{{set}}} is the same as the expansion done by {{{for}}}, and filenames will be preserved properly as individual parameters, and not broken into words on whitespace.

A more sophisticated example of {{{chtext}}} is here: http://www.shelldorado.com/scripts/cmds/chtext

[[Anchor(faq22)]]
== How can I calculate with floating point numbers instead of just integers? ==
["BASH"] does not have built-in floating point arithmetic:

{{{
    $ echo $((10/3))
    3
}}}

For better precision, an external program must be used, e.g. {{{bc}}}, {{{awk}}} or {{{dc}}}:

{{{
    $ echo "scale=3; 10/3" | bc
    3.333
}}}

The "scale=3" command notifies {{{bc}}} that three digits of precision after the decimal point are required.

{{{awk}}} can be used for calculations, too:

{{{
    $ awk 'BEGIN {printf "%.3f\n", 10 / 3}' /dev/null
    3.333
}}}

There is a subtle but important difference between the {{{bc}}} and the {{{awk}}} solution here: {{{bc}}} reads commands and expressions ''from standard input''. {{{awk}}} on the other hand evaluates the expression as ''part of the program''. Expressions on standard input are ''not'' evaluated, i.e. {{{echo 10/3 | awk '{print $0}'}}} will print {{{10/3}}} instead of the evaluated result of the expression.

This explains why the example uses {{{/dev/null}}} as an input file for {{{awk}}}: the program evaluates the {{{BEGIN}}} action, evaluating the expression and printing the result. Afterwards the work is already done: it reads its standard input, gets an end-of-file indication, and terminates. If no file had been specified, {{{awk}}} would wait for data on standard input.

Newer versions of KornShell93 have built-in floating point arithmetic, together with mathematical functions like {{{sin()}}} or {{{cos()}}} .

[[Anchor(faq23)]]
== How do I append a string to the contents of a variable? ==
The shell doesn't have a string concatenation operator like Java ("+") or Perl ("."). The following example shows how to append the string ".2004-08-15" to the contents of the shell variable {{{filename}}}:

{{{
    filename="$filename.2004-08-15"
}}}

If the variable name and the string to append could be confused, the variable name can be enclosed in braces, e.g.

{{{
    filename="${filename}old"
}}}

instead of {{{filename=$filenameold}}}

[[Anchor(faq24)]]
== I set variables in a loop. Why do they suddenly disappear after the loop terminates? ==

The following command always prints "total number of lines: 0", although the variable {{{linecnt}}} has a larger value in the {{{while}}} loop:

{{{
    linecnt=0
    cat /etc/passwd | while read line
    do
        linecnt=`expr $linecnt + 1`
    done
    echo "total number of lines: $linecnt"
}}}

The reason for this surprising behaviour is that a {{{while/for/until}}} loop runs in a subshell when its input or output is redirected from a pipeline. For the {{{while}}} loop above, a new subshell with its own copy of the variable {{{linecnt}}} is created (initial value, taken from the parent shell: "0"). This copy then is used for counting. When the {{{while}}} loop is finished, the subshell copy is discarded, and the original variable {{{linecnt}}} of the parent (whose value has not changed) is used in the {{{echo}}} command.

It's hard to tell when a shell will create a new process for a loop:
 * BourneShell creates it when the input or output is redirected, either by using a pipeline or by a redirection operator ('<', '>').
 * ["BASH"] creates a new process only if the loop is part of a pipeline
 * KornShell creates it only if the loop is part of a pipeline, but ''not'' if the loop is the last part of it.

To solve this, either use a method that works without a subshell (shown below), or make sure you do all processing inside that subshell (a bit of a kludge, but easier to work with):

{{{
    linecnt=0
    cat /etc/passwd |
    (
        while read line ; do
                linecnt="$((linecnt+1))"
        done
        echo "total number of lines: $linecnt"
    )
}}}

To avoid the subshell completely (not easily possible if the other part of the pipe is a command!), use redirection, which does not have this problem at least for ["BASH"] and KornShell (but still for BourneShell):

{{{
    linecnt=0
    while read line ; do
        linecnt="$((linecnt+1))"
    done < /etc/passwd
    echo "total number of lines: $linecnt"
}}}

For ["BASH"], when the first part of the pipe is a command, you can use "process substitution". The command used here is a simple "echo -e $'a\nb\nc'" as a substitute for a command with a multiline output:

{{{
    while read LINE; do
        echo "-> $LINE"
    done < <(echo -e $'a\nb\nc')
}}}

A portable and common work-around is to redirect the input of the {{{read}}} command using {{{exec}}}:

{{{
    linecnt=0
    exec < /etc/passwd # redirect standard input from the file /etc/passwd
    while read line # "read" gets its input from the file /etc/passwd
    do
        linecnt=`expr $linecnt + 1`
    done
    echo "total number of lines: $linecnt"
}}}

This works as expected, and prints a line count for the file /etc/passwd. But the input is redirected from that file permanently. What if we need to read the original standard input sometime later again? In that case we have to save a copy of the original standard input file descriptor, which we later can restore:

{{{
    exec 3<&0 # save original standard input file descriptor "0" as FD "3"
    exec 0</etc/passwd # redirect standard input from the file /etc/passwd

    linecnt=0
    while read line # "read" gets its input from the file /etc/passwd
    do
        linecnt=`expr $linecnt + 1`
    done

    exec 0<&3 # restore saved standard input (fd 0) from file descriptor "3"
    exec 3<&- # close the no longer needed file descriptor "3"

    echo "total number of lines: $linecnt"
}}}

Subsequent {{{exec}}} commands can be combined into one line, which is interpreted left-to-right:

{{{
    exec 3<&0
    exec 0</etc/passwd
    _...read redirected standard input..._
    exec 0<&3
    exec 3<&-
}}}

is equivalent to

{{{
    exec 3<&0 0</etc/passwd
    _...read redirected standard input..._
    exec 0<&3 3<&-
}}}

[[Anchor(faq25)]]
== How can I access positional parameters after $9? ==
Use {{{${10}}}} instead of {{{$10}}}. This works for ["BASH"] and KornShell, but not for older BourneShell implementations. Another way to access arbitrary positional parameters after $9 is to use {{{for}}}, e.g. to get the last parameter:

{{{
    for last
    do
        : # nothing
    done

    echo "last argument is: $last"
}}}

To get an argument by number, we can use a counter:

{{{
    n=12 # This is the number of the argument we are interested in
    i=1
    for arg
    do
        if [ $i -eq $n ]
        then
            argn=$arg
            break
        fi
        i=`expr $i + 1`
    done
    echo "argument number $n is: $argn"
}}}

This has the advantage of not "consuming" the arguments. If this is no problem, the {{{shift}}} command discards the first positional arguments:

{{{
    shift 11
    echo "the 12th argument is: $1"
}}}

Although direct access to any positional argument is possible this way, it's hardly needed. The common way is to use {{{getopts(3)}}} to process command line options (e.g. "-l", or "-o filename"), and then use either {{{for}}} or {{{while}}} to process all arguments in turn. An explanation of how to process command line arguments is available here: http://www.shelldorado.com/goodcoding/cmdargs.html

[[Anchor(faq26)]]
== How can I randomize (shuffle) the order of lines in a file? ==
{{{
    randomize(){
        while read l ; do echo "0$RANDOM $l" ; done |
        sort -n |
        cut -d" " -f2-
    }
}}}

Note: the leading 0 is to make sure it doesn't break if the shell doesn't support $RANDOM ($RANDOM is provided by ["BASH"], KornShell and KornShell93, but not by the BourneShell, and POSIX does not require it).

The same idea (printing random numbers in front of a line, and sorting the lines on that column) using other programs:
{{{
    awk '
        BEGIN { srand() }
        { print rand() "\t" $0 }
    ' |
    sort -n | # Sort numerically on first (random number) column
    cut -f2- # Remove sorting column
}}}

This is faster than the previous solution, but will not work for very old AWK implementations (try "nawk" or "gawk" if available).

A related question we frequently see is, "How can I print a random line from a file?" The problem here is that you need to know in advance how many lines the file contains. Lacking that knowledge, you have to read the entire file through once just to count them -- or, you have to suck the entire file into memory. Let's explore both of these approaches.

{{{
   n=$(wc -l < "$file") # Count number of lines.
   r=$((RANDOM % n + 1)) # Random number from 1..n.
   sed -n "$r{p;q;}" "$file" # Print the r'th line.
}}}

(These examples use the answer from [#faq11 FAQ 11] to print the n'th line.) The first one's pretty straightforward -- we use {{{wc}}} to count the lines, choose a random number, and then use {{{sed}}} to print the line. If we already happened to know how many lines were in the file, we could skip the {{{wc}}} command, and this would be a very efficient approach.

The next example sucks the entire file into memory. This approach saves time reopening the file, but obviously uses more memory.

{{{
   oIFS=$IFS IFS=$'\n' lines=($(<"$file")) IFS=$oIFS
   n=${#lines[@]}
   r=$((RANDOM % n))
   echo "${lines[r]}"
}}}

Note that we don't add 1 to the random number in this example, because the array of lines is indexed counting from 0.

Also, some people want to choose a random file from a directory (for a signature on an e-mail, or to chose a random song to play, or a random image to display, etc.). A similar technique can be used:

{{{
    files=(*.ogg) # Or *.gif, or *
    n=${#files[@]} # For aesthetics
    xmms "${files[RANDOM % n]}" # Choose a random element
}}}

[[Anchor(faq27)]]
== How can two processes communicate using named pipes (fifos)? ==
NamedPipes, also known as FIFOs ("First In, First Out"), are well suited for inter-process communication. The advantage over using files as a means of communication is that processes are synchronized by pipes: a process writing to a pipe blocks if there is no reader, and a process reading from a pipe blocks if there is no writer.

Here is a small example of a server process communicating with a client process. The server sends commands to the client, and the client acknowledges each command:

'''Server'''
{{{
#! /bin/sh
# server - communication example

# Create a FIFO. Some systems don't have a "mkfifo" command, but use
# "mknod pipe p" instead

mkfifo pipe

while sleep 1
do
    echo "server: sending GO to client"

    # The following command will cause this process to block (wait)
    # until another process reads from the pipe
    echo GO > pipe

    # A client read the string! Now wait for its answer. The "read"
    # command again will block until the client wrote something
    read answer < pipe

    # The client answered!
    echo "server: got answer: $answer"
done
}}}

'''Client'''
{{{
#! /bin/sh
# client

# We cannot start working until the server has created the pipe...
until [ -p pipe ]
do
    sleep 1; # wait for server to create pipe
done

# Now communicate...

while sleep 1
do
    echo "client: waiting for data"

    # Wait until the server sends us one line of data:
    read data < pipe

    # Received one line!
    echo "client: read <$data>, answering"

    # Now acknowledge that we got the data. This command
    # again will block until the server read it.
    echo ACK > pipe
done
}}}

Write both examples to files {{{server}}} and {{{client}}} respectively, and start them concurrently to see it working:

{{{
    $ chmod +x server client
    $ server & client &
    server: sending GO to client
    client: waiting for data
    client: read <GO>, answering
    server: got answer: ACK
    server: sending GO to client
    client: waiting for data
    client: read <GO>, answering
    server: got answer: ACK
    server: sending GO to client
    client: waiting for data
    [...]
}}}

[[Anchor(faq28)]]
== How do I determine the location of my script? I want to read some config files from the same place. ==
This is a complex question because there's no single right answer to it. Even worse: it's not possible to find the location reliably in 100% of all cases. All ways of finding a script's location depend on the name of the script, as seen in the predefined variable {{{$0}}}. But providing the script name in {{{$0}}} is only a (very common) convention, not a requirement.

The suspect answer is "in some shells, $0 is always an absolute path, even if you invoke the script using a relative path, or no path at all". That's not the case in ["BASH"]. But this isn't reliable across shells; some of them return the actual command typed in by the user instead of the fully qualified path. In those cases, if all you want is the fully qualified version of $0, you can use something like this (["POSIX"], non-Bourne):

{{{
  [[ $0 = /* ]] && echo $0 || echo $PWD/$0
}}}

Or the BourneShell version:

{{{
  case $0 in /*) echo $0;; *) echo `pwd`/$0;; esac
}}}

However, this approach has some major drawbacks. The most important is, that the script name (as seen in {{{$0}}}) may not be relative to the current working directory, but relative to a directory from the program search path {{{$PATH}}} (this is often seen with KornShell).

Another drawback is that there is really no guarantee that your script is still in the same place it was when it first started executing. Suppose your script is loaded from a temporary file which is then unlinked immediately... your script might not even exist on disk any more! The script could also have been moved to a different location while it was executing. Or (and this is most likely by far...) there might be multiple links to the script from multiple locations, one of them being a simple symlink from a common {{{PATH}}} directory like {{{/usr/local/bin}}}, which is how it's being invoked. Your script might be in {{{/opt/foobar/bin/script}}} but the naive approach of reading {{{$0}}} won't tell you that.

(For a more general discussion of the Unix file system and how symbolic links affect your ability to know where you are at any given moment, see [http://www.cs.bell-labs.com/sys/doc/lexnames.html this Plan 9 paper].)

So if the name in {{{$0}}} is a relative one, i.e. does not start with '/', we can still try to search the script like the shell would have done: in all directories from {{{$PATH}}}.

The following script shows how this could be done:

{{{
    myname=$0
    if [ -s "$myname" ] && [ -x "$myname" ]
    then # $myname is already a valid file name
        mypath=$myname
    else
        case "$myname" in
        /*) exit 1;; # absolute path - do not search PATH
        *)
            # Search all directories from the PATH variable. Take
            # care to interpret leading and trailing ":" as meaning
            # the current directory; the same is true for "::" within
            # the PATH.

            for dir in `echo "$PATH" | sed 's/^:/.:/g;s/::/:.:/g;s/:$/:./;s/:/ /g'`
            do
                [ -f "$dir/$myname" ] || continue # no file
                [ -x "$dir/$myname" ] || continue # not executable
                mypath=$dir/$myname
                break # only return first matching file
            done
            ;;
        esac
    fi

    if [ -f "$mypath" ]
    then
        : # echo >&2 "DEBUG: mypath=<$mypath>"
    else
        echo >&2 "cannot find full path name: $myname"
        exit 1
    fi

    echo >&2 "path of this script: $mypath"
}}}

Note that {{{$mypath}}} is not necessarily an absolute path name. It still can contain relative parts like {{{../bin/myscript}}}.
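
If all you need is the directory containing the script, and you are willing to accept the caveats above (no symlink resolution, and $0 must actually point at the script), a short and common sketch is:

{{{
    mydir=$(cd -- "$(dirname -- "$0")" && pwd)
    echo "this script appears to live in: $mydir"
}}}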

Generally storing data files in the same directory as their scripts is a bad practice. The Unix file system layout assumes that files in one place (e.g. /bin) are executable programs, while files in another place (e.g. /etc) are data files. (Let's ignore legacy Unix systems with programs in /etc for the moment, shall we....)

It really makes the most sense to keep your script's configuration in a single, static location such as {{{$SCRIPTROOT/etc/foobar.conf}}}. If you need to define multiple configuration files, then you can have a directory (say, {{{/var/lib/foobar}}} or {{{/usr/local/lib/foobar}}}), and read that directory's location from a variable in {{{/etc/foobar.conf}}}. If you don't even want that much to be hard-coded, you could pass the location of {{{foobar.conf}}} as a parameter to the script. If you need the script to assume certain default in the absence of {{{/etc/foobar.conf}}}, you can put defaults in the script itself, and/or fall back to something like {{{$HOME/.foobar.conf}}} if {{{/etc/foobar.conf}}} is missing. (This depends on what your script does. In some cases, it may make more sense to abort gracefully.)

[[Anchor(faq29)]]
== How can I display value of a symbolic link on standard output? ==
The external command {{{readlink}}} can be used to display the value of a symbolic link.

{{{
$ readlink /bin/sh
bash
}}}

You can also use GNU find's %l directive, which is especially useful if you need to resolve links in batches:

{{{
$ find /bin/ -type l -printf '%p points to %l\n'
/bin/sh points to bash
/bin/bunzip2 points to bzip2
...
}}}

If your system lacks {{{readlink}}}, you can use a function like this one:
{{{
readlink() {
    local path=$1 ll

    if [ -L "$path" ]; then
        ll="$(LC_ALL=C ls -l "$path" 2> /dev/null)" &&
        echo "${ll/* -> }"
    else
        return 1
    fi
}
}}}

[[Anchor(faq30)]]
== How can I rename all my *.foo files to *.bar? ==
Some GNU/Linux distributions have a rename command, which you can use for this purpose; however, the syntax differs from one distribution to the next, so it's not a portable answer.

You can do it in POSIX shells like this:

{{{
for f in *.foo; do mv "$f" "${f%.foo}.bar"; done
}}}

This invokes the external command {{{mv}}} once for each file, so it may not be as efficient as some of the {{{rename}}} implementations.

If you want to do it recursively, then it becomes much more challenging. This example works (in ["BASH"]) as long as no files have newlines in their names:

{{{
find . -name '*.foo' -print | while IFS=$'\n' read -r f; do
  mv "$f" "${f%.foo}.bar"
done
}}}
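
If file names may contain newlines (or any other unusual characters), a {{{find -exec}}} based variant avoids reading names line by line altogether; a sketch using POSIX find and sh:

{{{
find . -name '*.foo' -exec sh -c 'for f; do mv "$f" "${f%.foo}.bar"; done' _ {} +
}}}

The {{{_}}} fills the inner shell's {{{$0}}} slot, so the file names found become its positional parameters.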

Another common form of this question is "How do I rename all my MP3 files so that they have underscores instead of spaces?" You can use this:

{{{
for f in *\ *.mp3; do mv "$f" "${f// /_}"; done
}}}

[[Anchor(faq31)]]
== What is the difference between the old and new test commands ([ and [[)? ==
{{{[}}} ("test" command) and {{{[[}}} ("new test" command) are both used to evaluate expressions. Some examples:

{{{
    if [ -z "$variable" ]
    then
        echo "variable is empty!"
    fi

    if [ ! -f "$filename" ]
    then
        echo "not a valid, existing file name: $filename"
    fi
}}}

and

{{{
    if [[ ! -e $file ]]
    then
        echo "directory entry does not exist: $file"
    fi

    if [[ $file0 -nt $file1 ]]
    then
        echo "file $file0 is newer than $file1"
    fi
}}}

To cut a long story short: {{{[}}} implements the old, portable syntax of the command. Although all modern shells have built-in implementations, there usually still is an external executable of that name, e.g. {{{/bin/[}}}. {{{[[}}} is a new, improved version of it, and it is a keyword, not a program. This has beneficial effects on the ease of use, see below. {{{[[}}} is understood by KornShell, ["BASH"] (e.g. 2.03), KornShell93, and the ["POSIX"] shell, but not by the older BourneShell.

Although {{{[}}} and {{{[[}}} have much in common, and share many expression operators like "-f", "-s", "-n", "-z", there are some notable differences. Here is a comparison list:

||'''Feature'''||'''new test''' {{{[[}}}||'''old test''' {{{[}}}||'''Example'''||
||<rowspan="4">string comparison||>||(not available)||-||
||<||(not available)||-||
||== (or =)||=||-||
||!=||!=||-||
||<rowspan="2">expression grouping||&&||-a||{{{[[ -n $var && -f $var ]] && echo "$var is a file"}}}||
||{{{||}}}||-o||-||
||Pattern matching||=||(not available)||{{{[[ $name = a* ]] || echo "name does not start with an 'a': $name"}}}||
||In-process regular expression matching||=~||(not available)||{{{[[ $(date) =~ '^Fri ... 13 ' ]] && echo "It's Friday the 13th!"}}}||

Special primitives that {{{[[}}} is defined to have, but {{{[}}} may be lacking (depending on the implementation):

||'''Description'''||'''Primitive'''||'''Example'''||
||entry (file or directory) exists||-e||{{{[[ -e $config ]] && echo "config file exists: $config"}}}||
||file is newer/older than other file||-nt / -ot||{{{[[ $file0 -nt $file1 ]] && echo "$file0 is newer than $file1"}}}||
||two files are the same||-ef||{{{[[ $input -ef $output ]] && { echo "will not overwrite input file: $input"; exit 1; } }}}||
||negation||!||-||

But there are more subtle differences.
 * No field splitting will be done for {{{[[}}} (and therefore many arguments need not be quoted)

 {{{
 file="file name"
 [[ -f $file ]] && echo "$file is a file"}}}

 will work even though $file is not quoted and contains whitespace. With {{{[}}} the variable needs to be quoted:

 {{{
 file="file name"
 [ -f "$file" ] && echo "$file is a file"}}}

 This makes {{{[[}}} easier to use and less error prone.

 * No file name generation will be done for {{{[[}}}. Therefore the following line tries to match the contents of the variable $path with the pattern {{{/*}}}

 {{{
 [[ $path = /* ]] && echo "\$path starts with a forward slash /: $path"}}}

 The next command most likely will result in an error, because {{{/*}}} is subject to file name generation:

 {{{
 [ $path = /* ] && echo "this does not work"}}}

 {{{[[}}} is strictly used for strings and files. If you want to compare numbers, use an ArithmeticExpression ((''expression'')), e.g.

 {{{
 i=0
 while ((i<10))
 do
    echo $i
    ((i=$i+1))
 done}}}

When should the new test command {{{[[}}} be used, and when the old one {{{[}}}? If portability to the BourneShell is a concern, the old syntax should be used. If on the other hand the script requires ["BASH"] or KornShell, the new syntax could be preferable.

[[Anchor(faq32)]]
== How can I redirect the output of 'time' to a variable or file? ==
The reason that 'time' needs special care for redirecting its output is that {{{time}}} in Bash is a ''keyword'', not an ordinary command: it times an entire pipeline and writes its report to the shell's own standard error, outside any redirections applied to the commands being timed. So you have to wrap the whole construct in something whose output ''can'' be redirected.

 * File Redirection
{{{
     bash -c "time ls" > /path/to/foo 2>&1
     ( time ls ) > /path/to/foo 2>&1
     { time ls; } > /path/to/foo 2>&1
}}}

 * Variable Redirection
{{{
     foo=$( bash -c "time ls" 2>&1 )
     foo=$( ( time ls ) 2>&1 )
     foo=$( { time ls; } 2>&1 )
}}}

Note: Using 'bash -c' and ( ) creates a subshell, using { } does not. Do with that as you wish.

[[Anchor(faq33)]]
== How can I find a process ID for a process given its name? ==
Usually a process is referred to using its process ID (PID), and the {{{ps}}} command can display the information for any process given its process ID, e.g.

{{{
    $ echo $$ # my process id
    21796
    $ ps -p 21796
    PID TTY TIME CMD
    21796 pts/5 00:00:00 ksh
}}}

But frequently the process ID for a process is not known, but only its name. Some operating systems, e.g. Solaris, BSD, and some versions of Linux have a dedicated command to search a process given its name, called {{{pgrep}}}:

{{{
    $ pgrep init
    1
}}}

Often there is an even more specialized program available to not just find the process ID of a process given its name, but also to send a signal to it:

{{{
    $ pkill myprocess
}}}

Some systems also provide {{{pidof}}}. It differs from {{{pgrep}}} in that multiple output process IDs are only space separated, not newline separated.

{{{
    $ pidof cron
    5392
}}}

If these programs are not available, a user can search the output of the ps(1) command using {{{grep}}}.

The major problem when grepping the ps output is that grep ''may'' match its own ps entry (try: ps aux | grep init). To make matters worse, this does not happen every time; the technical name for this is a "race condition". To avoid this, there are several ways:

 * Using grep -v at the end
{{{
     ps aux | grep name | grep -v grep
}}}

will throw away all lines containing "grep" from the output. Disadvantage: you always get the exit status of the final grep -v, so you can't e.g. check whether a specific process exists.

 * Using grep -v in the middle
{{{
     ps aux | grep -v grep | grep name
}}}

This does exactly the same thing, except that the exit status of "grep name" is accessible and tells you whether "name" appears in the ps output or not. It still has the disadvantage of starting an extra process (grep -v).

 * Using [] in grep
{{{
     ps aux | grep [n]ame
}}}

This spawns only the needed grep-process. The trick is to use the {{{[]}}}-character class (regular expressions). To put only one character in a character group normally makes no sense at all, because a {{{[c]}}} will always be a "c". In this case, it's the same. {{{grep [n]ame}}} searches for "name". But as grep's own process list entry is what you executed ("grep [n]ame") and not "grep name", it will not match itself.

===BEGIN greycat rant===

Most of the time when someone asks a question like this, it's because they want to manage a long-running daemon using primitive shell scripting techniques. Common variants are "How can I get the PID of my foobard process.... so I can start one if it's not already running" or "How can I get the PID of my foobard process... because I want to prevent the foobard script from running if foobard is already active." Both of these questions will lead to seriously flawed production systems.

If what you really want is to restart your daemon whenever it dies, just do this:

{{{
#!/bin/sh
while true; do
   mydaemon --in-the-foreground
done
}}}

where --in-the-foreground is whatever switch, if any, you must give to the daemon to PREVENT IT from automatically backgrounding itself. (Often, -d does this and has the additional benefit of running the daemon with increased verbosity.) Self-daemonizing programs may or may not be the target of a future greycat rant....

If that's too simplistic, look into [http://cr.yp.to/daemontools.html daemontools] or [http://smarden.org/runit/ runit], which are programs for managing services.

If what you really want is to prevent multiple instances of your program from running, then the only sure way to do that is by using a lock. For details on doing this, see ProcessManagement or [#faq45 FAQ 45].

===END greycat rant===

[[Anchor(faq34)]]
== Can I do a spinner in Bash? ==
Sure.
{{{
    i=1
    sp="/-\|"
    echo -n ' '
    while true
    do
        echo -en "\b${sp:i++%${#sp}:1}"
    done
}}}

You can also use \r instead of \b. You can use pretty much any character sequence you want as well. If you want it to slow down, put a {{{sleep}}} command inside the loop.
A similar technique can be used to build progress bars.
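For instance, here is a minimal progress-bar sketch along the same lines (the step count and the {{{sleep}}} are only stand-ins for real work):

{{{
    # progress bar sketch; 50 steps and the sleep are made up for illustration
    total=50
    for ((step=0; step<=total; step++)); do
        printf '\r['
        for ((i=0; i<step; i++));     do printf '#'; done
        for ((i=step; i<total; i++)); do printf ' '; done
        printf '] %3d%%' $((step * 100 / total))
        sleep 0.1   # stand-in for real work; fractional sleep needs GNU or BSD sleep
    done
    echo
}}}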

[[Anchor(faq35)]]
== How can I handle command-line arguments to my script easily? ==
Well, that depends a great deal on what you want to do with them. Here's a general template that might help for the simple cases:

{{{
    while [[ $1 == -* ]]; do
        case "$1" in
          -h|--help) show_help; exit 0;;
          -v) verbose=1; shift;;
          -f) output_file=$2; shift 2;;
          *) echo "unknown option: $1" >&2; exit 1;; # without this, an unknown option would loop forever
        esac
    done
    # Now all of the remaining arguments are the filenames which followed
    # the optional switches. You can process those with "for i" or "$@".
}}}

For more complex/generalized cases, or if you want things like "-xvf" to be handled as three separate flags, you can use getopts or getopt. (Heiner, that's your cue....)
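For instance, here is a rough {{{getopts}}} sketch; it handles bundled short options like {{{-vf file}}}, but not long options such as {{{--help}}}, and {{{show_help}}} is assumed to exist as in the template above:

{{{
    # getopts sketch; the option letters and variable names are only examples
    verbose=0 output_file=
    while getopts 'hvf:' opt; do
        case $opt in
            h) show_help; exit 0;;
            v) verbose=1;;
            f) output_file=$OPTARG;;
            *) echo "usage: $0 [-hv] [-f file] [args...]" >&2; exit 1;;
        esac
    done
    shift $((OPTIND - 1))   # "$@" now holds only the non-option arguments
}}}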

[[Anchor(faq36)]]
== How can I get all lines that are in both of two files (set intersection), or in only one of two files (set subtraction)? ==

Use the comm(1) command.

{{{
  # intersection of file1 and file2
  comm -12 <(sort file1) <(sort file2)
  # subtraction of file1 from file2
  comm -13 <(sort file1) <(sort file2)
}}}

Read the comm(1) manpage for details.

If for some reason you lack the core comm(1) program, you can use these other methods:

An amazingly simple and fast implementation uses grep: it took just 20 seconds to match a 30k-line file against a 400k-line file for me.

Note that it probably only works with GNU grep, and that the file specified with -f will be loaded into RAM, so it doesn't scale for very large files.

It has grep read one of the sets as a pattern list from a file (-f), interpret the patterns as plain strings rather than regexps (-F), and match only whole lines (-x).

{{{
  # intersection of file1 and file2
  grep -xF -f file1 file2
  # subtraction of file1 from file2
  grep -vxF -f file1 file2
}}}

An implementation using sort and uniq:

{{{
  # intersection of file1 and file2
  sort file1 file2 | uniq -d   # assuming neither file1 nor file2 has repeated lines
  # file1-file2 (Subtraction)
  sort file1 file2 file2 | uniq -u
  # same way for file2 - file1, change last file2 to file1
  sort file1 file2 file1 | uniq -u
}}}

Another implementation of subtraction:
{{{
  cat file1 file1 file2 | sort | uniq -c |
  awk '{ if ($1 == 2) { $1 = ""; print; } }'
}}}

This may introduce an extra space at the start of the line; if that's a problem, just strip it away.

Also, this approach assumes that neither file1 nor file2 has any duplicates in it.

Finally, it sorts the output for you. If that's a problem, then you'll have to abandon this approach altogether. Perhaps you could use awk's associative arrays (or perl's hashes or tcl's arrays) instead.
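For example, here is a sketch using awk's associative arrays; it keeps file2's original order and needs no sorting, but it holds all of file1 in memory:

{{{
  # intersection of file1 and file2
  awk 'NR == FNR { seen[$0] = 1; next } $0 in seen' file1 file2
  # subtraction of file1 from file2
  awk 'NR == FNR { seen[$0] = 1; next } !($0 in seen)' file1 file2
}}}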

[[Anchor(faq37)]]
== How can I print text in various colors? ==
''Do not'' hard-code ANSI color escape sequences in your program! The {{{tput}}} command lets you interact with the terminal database in a sane way.

{{{
  tput setaf 1; echo this is red
  tput setaf 2; echo this is green
  tput setaf 0; echo now we are back in black
}}}

{{{tput}}} reads the terminfo database which contains all the escape codes necessary for interacting with your terminal, as defined by the {{{$TERM}}} variable. For more details, see the {{{terminfo(5)}}} man page.

If you don't know in advance what your user's terminal's default text color is, you can use {{{tput sgr0}}} to reset the colors to their default settings. This also removes boldface ({{{tput bold}}}), etc.
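For example, combining attributes and then resetting them:

{{{
  tput bold; tput setaf 3; echo this is bold yellow
  tput sgr0; echo and this is back to the terminal defaults
}}}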

[[Anchor(faq38)]]
== How do Unix file permissions work? ==
See ["Permissions"].

[[Anchor(faq39)]]
== What are all the dot-files that bash reads? ==
See DotFiles.

[[Anchor(faq40)]]
== How do I use dialog to get input from the user? ==

{{{
  foo=$(dialog --inputbox "text goes here" 8 40 2>&1 >/dev/tty)
  echo "The user typed '$foo'"
}}}

The redirection here is a bit tricky.

 1. The {{{foo=$(command)}}} is set up first, so the standard output of the command is being captured by bash.

 1. Inside the command, the {{{2>&1}}} causes standard error to be sent to where standard out is going -- in other words, stderr will now be captured.

 1. {{{>/dev/tty}}} sends standard output to the terminal, so the dialog box will be seen by the user. Standard error will still be captured, however.

Another common {{{dialog(1)}}}-related question is how to dynamically generate a dialog command that has items which must be quoted (either because they're empty strings, or because they contain internal white space). One ''can'' use {{{eval}}} for that purpose, but the cleanest way to achieve this goal is to use an array.

{{{
  unset m; i=0
  words=(apple banana cherry "dog droppings")
  for w in "${words[@]}"; do
    m[i++]=$w; m[i++]=""
  done
  dialog --menu "Which one?" 12 70 9 "${m[@]}"
}}}

In the previous example, the loop that populates the '''m''' array is a simple for loop, but it could just as well have been a while loop reading from a pipeline, a file, etc.

Recall that the construction {{{"${m[@]}"}}} expands to the entire contents of an array, but with each element implicitly quoted. It's analogous to the {{{"$@"}}} construct for handling positional parameters. For more details, see [#faq50 FAQ50] below.

Here's another example, using filenames:

{{{
    files=(*.mp3) # These may contain spaces, apostrophes, etc.
    cmd=(dialog --menu "Select one:" 22 76 16); n=6
    i=0
    for f in "${files[@]}"; do
        cmd[n++]=$((i++)); cmd[n++]="$f"
    done
    choice=$("${cmd[@]}" 2>&1 >/dev/tty)
}}}

The user's choice will be stored in the {{{choice}}} variable, as an integer, which can in turn be used as an index into the {{{files}}} array.
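For example, to act on the selection afterwards:

{{{
    if [[ $choice ]]; then
        echo "You picked: ${files[choice]}"
    fi
}}}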

A separate but useful function of dialog is to track the progress of a process that produces output. Below is an example that uses dialog to track processes writing to a log file. In the dialog window, there is a tailbox where output is displayed, and a msgbox with a clickable OK button. Clicking OK ends the dialog (and with it the background tail); when the script exits, the trap removes the tempfile.

{{{
  #You cannot tail a nonexistent file, so always ensure it exists first!
  rm -f dialog-tail.log; echo Initialize log >> dialog-tail.log
  date >> dialog-tail.log
  tempfile=`tempfile 2>/dev/null` || tempfile=/tmp/test$$
  trap "rm -f $tempfile" 0 1 2 5 15
  dialog --title "TAIL BOXES" \
        --begin 10 10 --tailboxbg dialog-tail.log 8 58 \
        --and-widget \
        --begin 3 10 --msgbox "Press OK " 5 30 \
        2>$tempfile &
  mypid=$!;
  for i in 1 2 3; do echo $i >> dialog-tail.log; sleep 1; done
  echo Done. >> dialog-tail.log
  wait $mypid;

}}}

[[Anchor(faq41)]]
== How do I determine whether a variable contains a substring? ==

{{{
  if [[ $foo = *bar* ]]
}}}

The above works in virtually all versions of Bash. Bash version 3 also allows regular expressions:

{{{
  if [[ $foo =~ ab*c ]] # bash 3, matches abbbbcde, or ac, etc.
}}}

If you are programming in the BourneShell instead of Bash, there is a more portable (but less pretty) syntax:

{{{
  case "$foo" in
    *bar*) .... ;;
  esac
}}}

This should allow you to match variables against globbing-style patterns. If you need a portable way to match variables against regular expressions, use {{{grep}}} or {{{egrep}}}.

{{{
  if echo "$foo" | egrep some-regex >/dev/null; then ...
}}}

[[Anchor(faq42)]]
== How can I find out if a process is still running? ==
The {{{kill}}} command is used to send signals to a running process. As a convenience, the signal "0", which does not actually send anything, can be used to find out whether a process is still running:

 {{{
 myprog & # Start program in the background
 daemonpid=$! # ...and save its process id

 while sleep 60
 do
     if kill -0 $daemonpid # Is the process still alive?
     then
         echo >&2 "OK - process is still running"
     else
         echo >&2 "ERROR - process $daemonpid is no longer running!"
         break
     fi
 done}}}

This is one of those questions that usually masks a much deeper issue. It's rare that someone wants to know whether a process is still running simply to display a red or green light to an operator. More often, there's some ulterior motive, such as the desire to ensure that some daemon which is known to crash frequently is still running, or to ensure mutually exclusive access to a resource, etc. For much better discussion of these issues, see ProcessManagement or [#faq33 FAQ #33].

[[Anchor(faq43)]]
== How can I use array variables? ==

BASH and KornShell already have one-dimensional arrays indexed by a numerical expression, e.g.

 {{{
 host[0]="micky"
 host[1]="minnie"
 host[2]="goofy"
 i=0
 while (($i < ${#host[@]} ))
 do
     echo "host number $i is ${host[i++]}"
 done}}}

The awkward expression {{{ ${#host[@]} }}} returns the number of elements in the array {{{host}}}.

It's possible to assign multiple values to an array at once, but the syntax differs from BASH to KornShell:

 {{{
 # BASH
 array=(one two three four)
 # KornShell
 set -A array -- one two three four}}}

[[Anchor(faq44)]]
== How can I use associative arrays or variable variables? ==

Sometimes it's convenient to have associative arrays, arrays indexed by a string. Perl calls them "hashes". KornShell93 already supports this kind of array:

 {{{
 # KornShell93 script - does not work with BASH
 typeset -A homedir # Declare KornShell93 associative array
 homedir[jim]=/home/jim
 homedir[silvia]=/home/silvia
 homedir[alex]=/home/alex
 
 for user in ${!homedir[@]} # Enumerate all indices (user names)
 do
     echo "Home directory of user $user is ${homedir[$user]}"
 done}}}

BASH (including version 3.x) does not (yet) support them. However, we could simulate this kind of array by dynamically creating variables like in the following example:

 {{{
 for user in jim silvia alex
 do
     eval homedir_$user=/home/$user
 done}}}

This creates the variables

 {{{
 homedir_jim=/home/jim
 homedir_silvia=/home/silvia
 homedir_alex=/home/alex}}}

with the corresponding content. Note the use of the {{{eval}}} command, which interprets a command line not just one time like the shell usually does, but '''twice'''. In the first step, the shell uses the input {{{homedir_$user=/home/$user}}} to create a new line {{{homedir_jim=/home/jim}}}. In the second step, caused by {{{eval}}}, this variable assignment is executed, actually creating the variable.

Print the variables using

 {{{
 for user in jim silvia alex
 do
     varname=homedir_$user # e.g. "homedir_jim"
     eval varcontent='$'$varname # e.g. "/home/jim"
     echo "home directory of $user is $varcontent"
 done}}}

The {{{eval}}} line needs some explanation. In a first step the command substitution is run:

 {{{
 eval varcontent='$'$varname}}}

becomes

 {{{
 eval varcontent=$homedir_jim}}}

In a second step the {{{eval}}} re-evaluates the line, and converts this to

 {{{
 varcontent=/home/jim}}}

Before starting to use dynamically created variables, think again of a simpler approach. If it still seems to be the best thing to do, have a look at the following disadvantages:

 1. it's hard to read and to maintain
 1. the variable names must match the regular expression ^[a-zA-Z_][a-zA-Z_0-9]* , i.e. a variable name cannot contain arbitrary characters but only letters, digits, and underscores. In the example above we could not have processed the home directory of a user named {{{hong-hu}}}, because a dash '-' cannot be part of a valid variable name.
 1. Quoting is hard to get right. If a content string (not a variable name) can contain whitespace characters, it's hard to quote it correctly so that it is preserved.

Here is the summary. "{{{var}}}" is a constant prefix, "{{{$index}}}" contains the index string, and "{{{$content}}}" is the string to store. Note that quoting is absolutely essential here. A missing backslash \ or a wrong type of quote (e.g. apostrophes '...' instead of quotation marks "...") can (and probably will) cause the examples to fail:

 * Set variables

  {{{
  eval "var$index=\"$content\"" # index must only contain characters from [a-zA-Z0-9_]}}}

 * Print variable content

  {{{
  eval "echo \"var$index=\$$varname\""}}}

 * Check if a variable is empty

  {{{
  if eval "[ -z "\$var$index\" ]"
  then echo "variable is empty: $var$index"
  fi}}}

You've seen the examples. Now maybe you can go a step back and consider using AWK associative arrays, or a multi-line environment variable instead of dynamically created variables.
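For illustration, here is the same user-to-home-directory lookup sketched with an awk associative array instead of dynamically created shell variables (the data is the example data from above):

{{{
 awk 'BEGIN {
     homedir["jim"]    = "/home/jim"
     homedir["silvia"] = "/home/silvia"
     homedir["alex"]   = "/home/alex"
     for (user in homedir)
         print "Home directory of user " user " is " homedir[user]
 }'
}}}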

[[Anchor(faq45)]]
== How can I ensure that only one instance of a script is running at a time (mutual exclusion)? ==

We need some means of '''mutual exclusion'''. One easy way is to use a "lock": any number of processes can try to acquire the lock simultaneously, but only one of them will succeed.

How can we implement this using shell scripts? Some people suggest creating a lock file, and checking for its presence:

 {{{
 # locking example -- WRONG

 lockfile=/tmp/myscript.lock
 if [ -f "$lockfile" ]
 then # lock is already held
     echo >&2 "cannot acquire lock, giving up: $lockfile"
     exit 0
 else # nobody owns the lock
     > "$lockfile" # create the file
     #...continue script
 fi}}}

This example '''does not work''', because there is a time window between checking and creating the file. Assume two processes are running the code at the same time. Both check if the lockfile exists, and both get the result that it does not exist. Now both processes assume they have acquired the lock -- a disaster waiting to happen. We need an atomic check-and-create operation, and fortunately there is one: {{{mkdir}}}, the command to create a directory:

 {{{
 # locking example -- CORRECT

 lockdir=/tmp/myscript.lock
 if mkdir "$lockdir"
 then # directory did not exist, but was created successfully
     echo >&2 "successfully acquired lock: $lockdir"
     # continue script
 else
     echo >&2 "cannot acquire lock, giving up on $lockdir"
     exit 0
 fi}}}

The advantage over using a lock file is that, even when two processes call {{{mkdir}}} at the same time, at most one of them can succeed. This atomicity of check-and-create is ensured at the operating system kernel level.

Note that we cannot use "mkdir -p" to automatically create missing path components: "mkdir -p" does not return an error if the directory exists already, but that's the feature we rely upon to ensure mutual exclusion.

Now let's spice up this example by automatically removing the lock when the script finishes:

 {{{
 lockdir=/tmp/myscript.lock
 if mkdir "$lockdir"
 then
     echo >&2 "successfully acquired lock"
 
     # Remove lockdir when the script finishes, or when it receives a signal
     trap 'rm -rf "$lockdir"' 0 # remove directory when script finishes
     trap "exit 2" 1 2 3 15 # terminate script when receiving signal
 
     # Optionally create temporary files in this directory, because
     # they will be removed automatically:
     tmpfile=$lockdir/filelist
 
 else
     echo >&2 "cannot acquire lock, giving up on $lockdir"
     exit 0
 fi}}}

This example provides reliable mutual exclusion. There is still the disadvantage that a ''stale'' lock directory could remain if the script is terminated by a signal that is not caught (or by signal 9, SIGKILL), but it's a good step towards reliable mutual exclusion. An example that remedies this (contributed by Charles Duffy) follows:

 ''Are we sure this code's correct? There seems to be a discrepancy between the names LOCK_DEFAULT_NAME and DEFAULT_NAME; and it checks for processes in what looks to be a race condition; and it uses the Linux-specific /proc file system and the GNU-specific egrep -o to do so.... I don't trust it. It looks overly complex and fragile. And quite non-portable. -- GreyCat''

 {{{
 LOCK_DEFAULT_NAME=$0
 LOCK_HOSTNAME="$(hostname -f)"
 
 ## function to take the lock if free; will fail otherwise
 function grab-lock {
   local PROGRAMNAME="${1:-$DEFAULT_NAME}"
   local PID=${2:-$$}
   (
     umask 000;
     mkdir -p "/tmp/${PROGRAMNAME}-lock"
     mkdir "/tmp/${PROGRAMNAME}-lock/held" || return 1
     mkdir "/tmp/${PROGRAMNAME}-lock/held/${LOCK_HOSTNAME}--pid-${PID}" && return 0 || return 1
   ) 2>/dev/null
   return $?
 }
 
 ## function to nicely let go of the lock
 function release-lock {
   local PROGRAMNAME="${1:-$DEFAULT_NAME}"
   local PID=${2:-$$}
   (
     rmdir "/tmp/${PROGRAMNAME}-lock/held/${LOCK_HOSTNAME}--pid-${PID}" || true
     rmdir "/tmp/${PROGRAMNAME}-lock/held" && return 0 || return 1
   ) 2>/dev/null
   return $?
 }
 
 ## function to force anyone else off of the lock
 function break-lock {
   local PROGRAMNAME="${1:-$DEFAULT_NAME}"
   (
     [ -d "/tmp/${PROGRAMNAME}-lock/held" ] || return 0
     for DIR in "/tmp/${PROGRAMNAME}-lock/held/${LOCK_HOSTNAME}--pid-"* ; do
       OTHERPID="$(echo $DIR | egrep -o '[0-9]+$')"
       [ -d /proc/${OTHERPID} ] || rmdir $DIR
     done
     rmdir /tmp/${PROGRAMNAME}-lock/held && return 0 || return 1
   ) 2>/dev/null
   return $?
 }
 
 ## function to take the lock nicely, freeing it first if needed
 function get-lock {
   break-lock "$@" && grab-lock "$@"
 }
 }}}

Instead of using {{{mkdir}}} we could also have used {{{ln -s}}}, the program that creates a symbolic link; like {{{mkdir}}}, it fails if the name already exists, so the check-and-create is atomic.
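A minimal sketch of that variant (the symlink's target is arbitrary and never dereferenced; here it merely records the PID of the lock holder):

{{{
 lockfile=/tmp/myscript.lock
 if ln -s "pid=$$" "$lockfile" 2>/dev/null
 then
     echo >&2 "successfully acquired lock: $lockfile"
     trap 'rm -f "$lockfile"' 0   # remove the lock when the script finishes
     # continue script
 else
     echo >&2 "cannot acquire lock, giving up on $lockfile"
     exit 0
 fi
}}}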

For more discussion on these issues, see ProcessManagement.

[[Anchor(faq46)]]
== I want to check to see whether a word is in a list (or an element is a member of a set). ==

Let's suppose you have your "list" stored as a big string of words, with spaces in between them. (That's the most common case when people are asking this one.) What you actually want to do is determine whether the string " foo " (note the spaces around it) appears in the list. But since your list may not have leading/trailing spaces, you have to add them as well. So, here's the most portable way to do it:

  {{{
  if echo " $list " | grep " foo " >/dev/null; then ....}}}

GNU grep seems to have a special {{{-w}}} extension which lets you avoid the spaces:

  {{{
  if echo "$list" | GNUgrep -q -w "foo"; then ....}}}

Finally, if you want to use Bash builtins, you can do it thus:

  {{{
  if [[ " $list " = *\ foo\ * ]]; then ....}}}

This is basically the same as the original grep -- we surround both the list and the word (foo) with spaces, and then do a simple text matching.

[[Anchor(faq47)]]
== How can I redirect stderr to a pipe? ==
                                                                                
A pipe can only carry stdout of a program. To pipe stderr through it, you
need to redirect stderr to the same destination as stdout. Optionally
you can close stdout or redirect it to /dev/null to only get stderr. Some
sample code:

{{{
# - 'myprog' is an example of a program that writes to both stdout and
# stderr
# - after the pipe I will just use 'cat'; of course you can put there
# whatever you want

# version 1: redirect stderr towards the pipe while stdout survives (both come
# mixed)
myprog 2>&1 | cat
                                                                                
# version 2: redirect stderr towards the pipe without getting stdout (it's
# redirected to /dev/null)
myprog 2>&1 >/dev/null | cat
#Note that '>/dev/null' comes after '2>&1', otherwise the stderr will also be directed to /dev/null
                                                                                
# version 3: redirect stderr towards the pipe while the "original" stdout gets
# closed
myprog 2>&1 >&- | cat
}}}

One may also pipe stderr only but keep stdout intact (without ''a priori'' knowledge of where the script's output is going). This is a bit trickier.

This has an obvious application with e.g. dialog, which draws (using ncurses) windows on the screen via stdout, and returns its output on stderr. This can be inconvenient, because it may force the use of a temporary file, which we would like to avoid. (Although this is not necessary -- see [#faq40 FAQ #40] for more examples of using dialog specifically!)

On [http://www.tldp.org/LDP/abs/html/io-redirection.html TLDP], I found the following trick:
{{{
# Redirecting only stderr to a pipe.

exec 3>&1 # Save current "value" of stdout.
ls -l /dev/fd/ 2>&1 >&3 3>&- | grep bad 3>&- # Close fd 3 for 'grep' and 'ls'.
# ^^^^ ^^^^
exec 3>&- # Now close it for the remainder of the script.

# Thanks, S.C.
}}}

The output of the ls command shows where each file descriptor points.

The same can be done without exec:
{{{
{ ls -l /dev/fd/ 2>&1 1>&3 3>&- | grep bad 3>&-; } 3>&1
}}}

To show it as a dialog one-liner:
{{{
exec 3>&1
dialog --menu Title 0 0 0 FirstItem FirstDescription 2>&1 >&3 3>&- | sed 's/First/Only/'
exec 3>&-
}}}

This will have the dialog window working properly, yet it will be the output of dialog (returned to stderr) being altered by the sed. Cheers.

A similar effect can be achieved with process substitution:
{{{
perl -e 'print "stdout\n"; warn "stderr\n"' 2> >(tr a-z A-Z)
}}}
This will pipe standard error through the tr command.

[[Anchor(faq48)]]
== Why should I never use eval? ==

"eval" is a common misspelling of "evil". The section dealing with spaces in file names used to include the following
quote "helpful tool (which is probably not as safe as the \0 technique)", end quote.

{{{
    Syntax : nasty_find_all [path] [command] <maxdepth>
}}}

{{{
    #This code is evil and must never be used
    export IFS=" "
    [ -z "$3" ] && set -- "$1" "$2" 1
    FILES=`find "$1" -maxdepth "$3" -type f -printf "\"%p\" "`
    #warning, evilness
    eval FILES=($FILES)
    for ((I=0; I < ${#FILES[@]}; I++))
    do
        eval "$2 \"${FILES[I]}\""
    done
    unset IFS
}}}

This script is supposed to recursively search for files with newlines and/or spaces in them, arguing that {{{find -print0 | xargs -0}}} was unsuitable for some purposes such as multiple commands. It was followed by an instructional description on all the lines involved, which we'll skip.

To its defense, it works:
{{{
$ ls -lR
.:
total 8
drwxr-xr-x 2 vidar users 4096 Nov 12 21:51 dir with spaces
-rwxr-xr-x 1 vidar users 248 Nov 12 21:50 nasty_find_all

./dir with spaces:
total 0
-rw-r--r-- 1 vidar users 0 Nov 12 21:51 file?with newlines
$ ./nasty_find_all . echo 3
./nasty_find_all
./dir with spaces/file
with newlines
$
}}}

But consider this:
{{{
$ touch "\"); ls -l $'\x2F'; #"
}}}

You just created a file called {{{ "); ls -l $'\x2F'; #}}}

Now FILES will contain {{{ ""); ls -l $'\x2F'; #}}}. When we do {{{eval FILES=($FILES)}}}, it becomes
{{{
FILES=(""); ls -l $'\x2F'; #"
}}}

Which becomes the two statements {{{ FILES=(""); }}} and {{{ ls -l / }}}. Congratulations, you just allowed execution of arbitrary commands.

{{{
$ touch "\"); ls -l $'\x2F'; #"
$ ./nasty_find_all . echo 3
total 1052
-rw-r--r-- 1 root root 1018530 Apr 6 2005 System.map
drwxr-xr-x 2 root root 4096 Oct 26 22:05 bin
drwxr-xr-x 3 root root 4096 Oct 26 22:05 boot
drwxr-xr-x 17 root root 29500 Nov 12 20:52 dev
drwxr-xr-x 68 root root 4096 Nov 12 20:54 etc
drwxr-xr-x 9 root root 4096 Oct 5 11:37 home
drwxr-xr-x 10 root root 4096 Oct 26 22:05 lib
drwxr-xr-x 2 root root 4096 Nov 4 00:14 lost+found
drwxr-xr-x 6 root root 4096 Nov 4 18:22 mnt
drwxr-xr-x 11 root root 4096 Oct 26 22:05 opt
dr-xr-xr-x 82 root root 0 Nov 4 00:41 proc
drwx------ 26 root root 4096 Oct 26 22:05 root
drwxr-xr-x 2 root root 4096 Nov 4 00:34 sbin
drwxr-xr-x 9 root root 0 Nov 4 00:41 sys
drwxrwxrwt 8 root root 4096 Nov 12 21:55 tmp
drwxr-xr-x 15 root root 4096 Oct 26 22:05 usr
drwxr-xr-x 13 root root 4096 Oct 26 22:05 var
./nasty_find_all
./dir with spaces/file
with newlines
./
$
}}}

It doesn't take much imagination to replace {{{ ls -l }}} with {{{ rm -rf }}} or worse.

One might think these circumstances are obscure, but one should not be tricked by this. All it takes is one malicious user; or, perhaps more likely, a benign user who left the terminal unlocked when going to the bathroom, who wrote a funny PHP uploading script that doesn't sanity-check file names, who made the same mistake as oneself in allowing arbitrary code execution (so that instead of being limited to the www user, an attacker can use {{{nasty_find_all}}} to traverse chroot jails and/or gain additional privileges), or who uses an IRC or IM client that's too liberal in the filenames it accepts for file transfers or conversation logs, etc.

[[Anchor(faq49)]]
== How can I view periodic updates/appends to a file? (ex: growing log file) ==
{{{tail -f}}} will show you the growing log file. On some systems (e.g. OpenBSD), this will automatically track a rotated log file to the new file with the same name (which is usually what you want). To get the equivalent functionality on GNU systems, use {{{tail --follow=name}}} instead.

This is helpful if you need to view only the updates to the file after your last view.
{{{
# Start by setting n=1
   tail -n $n testfile; n="+$(( $(wc -l < testfile) + 1 ))"
}}}

Every invocation of this gives the update to the file from where we stopped last. If you know the line number from where you want to start, set n to that.

[[Anchor(faq50)]]
== I'm trying to construct a command dynamically, but I can't figure out how to deal with quoted multi-word arguments. ==

Some people attempt to do things like this:
{{{
    # Non-working example
    args="-s 'The subject' $address"
    mail $args < $body
}}}

This fails because of word-splitting. When {{{$args}}} is evaluated, it becomes four words: {{{'The}}} is the second word, and {{{subject'}}} is the third word.

What's needed is a way to maintain each word as a separate item, even if that word contains multiple spaces. Quotes won't do it, but an array will.

{{{
    # Working example
    args=(-s "The subject" "$address")
    mail "${args[@]}" < $body
}}}

Usually, this question arises when someone is trying to use {{{dialog}}} to construct a menu on the fly. For an example of how to do this properly, see [#faq40 FAQ #40] above.

[[Anchor(faq51)]]
== I want history-search just like in tcsh. How can I bind it to the up and down keys? ==

Just add the following to /etc/inputrc or your ~/.inputrc
{{{
"\e[A":history-search-backward
"\e[B":history-search-forward
}}}

[[Anchor(faq52)]]
== How do I convert a file in DOS format to UNIX format. ( Remove CRLF line terminators ) ==

All of these are from the sed one-liners page:
{{{
sed 's/.$//' dosfile # assumes that all lines end with CR/LF
sed 's/^M$//' dosfile # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//' dosfile
}}}

Some distributions have a ''dos2unix'' command which can do this. In vim, you can use '':set fileformat=unix''.
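Another common approach is {{{tr}}}, which simply deletes every carriage return (not just those at line ends):

{{{
tr -d '\r' < dosfile > unixfile
}}}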

[[Anchor(faq53)]]
== I have a fancy prompt with colors, and now bash doesn't seem to know how wide my terminal is. Lines wrap around incorrectly. ==

You must put {{{\[}}} and {{{\]}}} around any non-printing escape sequences in your prompt. Thus:

{{{
BLUE=$(tput setaf 4)
PURPLE=$(tput setaf 5)
BLACK=$(tput setaf 0)
PS1='\[$BLUE\]\h:\[$PURPLE\]\w\[$BLACK\]\$ '
}}}

Without the {{{\[ \]}}}, bash will think the bytes which constitute the escape sequences for the color codes will actually take up space on the screen, so bash won't be able to know where the cursor actually is.

[[Anchor(faq54)]]
== How can I tell whether a variable contains a valid number? ==

First, you have to define what you mean by "number". The most common case seems to be that, when people ask this, they actually mean "a non-negative integer, with no leading + sign".

{{{
if [[ $foo = *[^0-9]* ]]; then
   echo "'$foo' has a non-digit somewhere in it"
else
   echo "'$foo' is strictly numeric"
fi
}}}

This can be done in Korn and legacy Bourne shells as well, using {{{case}}}:

{{{
case "$foo" in
    *[!0-9]*) echo "'$foo' has a non-digit somewhere in it" ;;
    *) echo "'$foo' is strictly numeric" ;;
esac
}}}

If what you actually mean is "a valid floating-point number" or something else more complex, then you might prefer to use a regular expression. Bash version 3 and above have regular expression support in the [[ command:

{{{
if [[ $foo =~ ^[-+]?[0-9]+\(\.[0-9]+\)?$ ]]; then
    echo "'$foo' looks rather like a number"
else
    echo "'$foo' doesn't look particularly numeric to me"
fi
}}}

If you don't have bash version 3, then you would use {{{egrep}}}:

{{{
if echo "$foo" | egrep '^[-+]?[0-9]+(\.[0-9]+)?$' >/dev/null; then
    echo "'$foo' might be a number"
else
    echo "'$foo' might not be a number"
fi
}}}

Note that the parentheses in the {{{egrep}}} regular expression don't require backslashes in front of them, whereas the ones in the bash3 command do.

[[Anchor(faq55)]]
== Tell me all about 2>&1 -- what's the difference between 2>&1 >foo and >foo 2>&1, and when do I use which? ==

Bash processes all redirections from left to right, in order. And the order is significant. Moving them around within a command may change the results of that command.

For newbies who've somehow managed to miss the previous hundred or so examples, here's what you want:

{{{
foo >file 2>&1 # Sends both stdout and stderr to file.
}}}

Now for the rest of you, here's a simple demonstration of what's happening:

{{{
foo() {
  echo "This is stdout"
  echo "This is stderr" 1>&2
}
foo >/dev/null 2>&1 # produces no output
foo 2>&1 >/dev/null # writes "This is stderr" on the screen
}}}

Why do the results differ? In the first case, {{{>/dev/null}}} is performed first, and therefore the standard output of the command is sent to {{{/dev/null}}}. Then, the {{{2>&1}}} is performed, which causes standard error to be sent to the same place that standard output is ''already'' going. So both of them are discarded.

In the second example, {{{2>&1}}} is performed first. This means standard error is sent to wherever standard output happens to be going -- in this case, the user's terminal. Then, standard output is sent to {{{/dev/null}}} and is therefore discarded. So when we run {{{foo}}} the second time, we see only its standard error, not its standard output.

There are times when we really do want {{{2>&1}}} to appear first -- for one example of this, see [#faq40 FAQ 40].

There are other times when we may use {{{2>&1}}} without any other redirections. Consider:

{{{
find ... 2>&1 | grep "some error"
}}}

In this example, we want to search {{{find}}}'s standard error (as well as its standard output) for the string "some error". The {{{2>&1}}} in the piped command forces standard error to go into the pipe along with standard output. (When pipes and redirections are mixed in this way, remember: the pipe is done ''first'', before any redirections. So {{{find}}}'s standard output is already set to point to the pipe before we process the {{{2>&1}}} redirection.)

If we wanted to read ''only'' standard error in the pipe, and discard standard output, we could do it like this:

{{{
find ... 2>&1 >/dev/null | grep "some error"
}}}

The redirections in that example are processed thus:

 1. First, the pipe is created. {{{find}}}'s output is sent to it.
 1. Next, {{{2>&1}}} causes {{{find}}}'s standard error to go to the pipe as well.
 1. Finally, {{{>/dev/null}}} causes {{{find}}}'s standard output to be discarded, leaving only stderr going into the pipe.

A related question is [#faq47 FAQ #47], which discusses how to send stderr to a pipeline.

[[Anchor(faq56)]]
== How can I untar or unzip multiple tarballs at once? ==

As the {{{tar}}} command was originally designed to read from and write to tape devices (tar - Tape ARchiver), you can specify only filenames to put inside an archive or to extract out of an archive (e.g. {{{tar x myfileonthe.tape}}}). There is an option to tell {{{tar}}} that the archive is not on some tape, but in a file: {{{-f}}}. This option takes exactly one argument: the filename of the file containing the archive. All other (following) filenames are taken to be archive members:
{{{
    tar -x -f backup.tar myfile.txt
    # OR (more common syntax IMHO)
    tar xf backup.tar myfile.txt
}}}

Now here's a common mistake -- imagine a directory containing the following archive-files you want to extract all at once:
{{{
    $ ls
    backup1.tar backup2.tar backup3.tar
}}}

Maybe you think of {{{tar xf *.tar}}}. Let's see:
{{{
    $ tar xf *.tar
    tar: backup2.tar: Not found in archive
    tar: backup3.tar: Not found in archive
    tar: Error exit delayed from previous errors
}}}

What happened? The shell replaced your *.tar by the matching filenames. You really wrote:
{{{
    tar xf backup1.tar backup2.tar backup3.tar
}}}

And as we saw earlier, it means: "extract the files backup2.tar and backup3.tar from the archive backup1.tar", which will of course only succeed when there are such filenames stored in the archive.

The solution is relatively easy: extract the contents of all archives '''one at a time'''. As we use a UNIX shell and we are lazy, we do that with a loop:
{{{
    for tarname in *.tar; do
      tar xf "$tarname"
    done
}}}

What happens? The for-loop will iterate through all filenames matching {{{*.tar}}} and call {{{tar xf}}} for each of them. That way you extract all archives one-by-one and you even do it automagically.

The second common archive type these days is ZIP. The command to extract contents from a ZIP file is {{{unzip}}} (who would have guessed that!). The problem here is the very same: {{{unzip}}} accepts only one argument naming the ZIP file. So, you solve it the very same way:
{{{
    for zipfile in *.zip; do
      unzip "$zipfile"
    done
}}}

Not enough? Ok. There's another option with {{{unzip}}}: it can take shell-like patterns to specify the ZIP file names. To avoid interpretation of those patterns by the shell, you need to quote them. {{{unzip}}} itself, and '''not''' the shell, will interpret {{{*.zip}}} in this case:
{{{
    unzip "*.zip"
    # OR, to make more clear what we do:
    unzip \*.zip
}}}

(This feature of {{{unzip}}} derives mainly from its origins as an MS-DOS program. MS-DOS's command interpreter does not perform glob expansions, so every MS-DOS program must be able to expand wildcards into a list of filenames. This feature was left in the Unix version, and as we just demonstrated, it can occasionally be useful.)

[[Anchor(faq57)]]
== How can I group entries in a file by a common prefix? ==
as in, convert:
{{{
    foo: entry1
    bar: entry2
    foo: entry3
    baz: entry4
}}}
to
{{{
    foo: entry1 entry3
    bar: entry2
    baz: entry4
}}}

There are two simple general methods for this:
        a. sort the file, and then iterate over it, collecting entries until the prefix changes, and then print the collected entries with the previous prefix
        b. iterate over the file, collecting entries for each prefix in an array indexed by the prefix

A basic implementation of (a) in bash:
{{{
old=xxx ; stuff=
# "xxx" is a sentinel; it must not occur as a prefix in the file
(sort file ; echo xxx) | while read -r prefix line ; do
        if [[ $prefix = "$old" ]] ; then
                stuff="$stuff $line"
        else
                [[ $old != xxx ]] && echo "$old $stuff"
                old="$prefix"
                stuff="$line"
        fi
done
}}}

And a basic implementation of (b) in awk:
{{{
    {
        a[$1] = a[$1] " " $2
    }
    END{
        for (x in a) print x, a[x]
    }
}}}
usage:
{{{
    awk '{a[$1] = a[$1] " " $2}END{for (x in a) print x, a[x]}' file
}}}

[[Anchor(faq58)]]
== Can bash handle binary data? ==
The answer is: basically, no.
While bash won't have as many problems with it as older shells, it still can't process arbitrary binary data; more specifically, shell variables are not 100% binary clean, so you can't store binary files in them.
One instance where this would sometimes be handy is storing small temporary bitmaps while working with netpbm; there I resorted to adding an extra pnmnoraw to the pipe, creating (larger) ASCII files that bash has no problem storing.

If you are feeling adventurous, consider this experiment:
{{{
    # bindec.bash, attempt to decode binary data to ascii decimals
    IFS=
    while read -n1 x ;do
        case "$x" in
            '') echo empty ;;
            # insert the 256 lines generated by the following oneliner here:
            # for x in $(seq 0 255) ;do echo " $'\\$(printf %o $x)') echo $x;;" ;done
        esac
    done
}}}
And then pipe binary data into it, maybe like so:
{{{
    for x in $(seq 0 255) ;do echo -ne "\\$(printf %o $x)" ;done | bash bindec.bash | nl | less
}}}
This suggests that the 0 (NUL) character is skipped entirely (we can't even create it with the input generation above), which is enough to conveniently corrupt most binary files we try to process.

(Note that this refers to storing data in variables; moving data between programs using pipes is always binary clean.)

[[Anchor(faq59)]]
== How can I remove the last character of a line? ==
Using bash and ksh extended parameter substitution:

{{{
    var=${var%?}
}}}

Remember that ${var%foo} removes foo from the end, and ${var#foo} removes foo from the beginning, of {{{var}}}. As a mnemonic, # appears to the left of % on the keyboard (US keyboards, at least).

More portable, but slower:

{{{
    var=`expr "$var" : '\(.*\).'`
}}}

or (using {{{sed}}}):

{{{
    var=`echo "$var" | sed 's/.$//'`
}}}

[[Anchor(faq60)]]
== I'm trying to write a script that will change directory (or set a variable), but after the script finishes, I'm back where I started (or my variable isn't set)! ==

Consider this:

{{{
   #!/bin/sh
   cd /tmp
}}}

If one executes this simple script, what happens? Bash forks, and the parent waits. The child executes the script, including the {{{chdir(2)}}} system call, and then exits. The parent, which was waiting for the child, harvests the child's exit status (presumably 0 for success), and then bash carries on with the next command.

Since the {{{chdir}}} was done by a child process, it has no effect on the parent.

Moreover, there is '''no conceivable way''' you can ''ever'' have a child process affect ''any'' part of the parent's environment, which includes its variables as well as its current working directory.

So, how does one go about it? You can still have the {{{cd}}} command in an external file, but you can't ''run it'' as a script. Instead, you must {{{source}}} it (or "dot it in", using the {{{.}}} command, which is a synonym for {{{source}}}).

{{{
   echo 'cd /tmp' > $HOME/mycd
   source $HOME/mycd
   pwd # Now, we're in /tmp
}}}

The same thing applies to setting variables. {{{source}}} the file that contains the commands; don't try to run it.
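For example:

{{{
   echo 'var="some value"' > "$HOME/myvars"
   source "$HOME/myvars"
   echo "$var"   # the variable is visible in the current shell
}}}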

[[Anchor(faq61)]]
== Is there a list of which features were added to specific releases of Bash? ==

  * [http://cnswww.cns.cwru.edu/~chet/bash/NEWS NEWS]: a file tersely listing the notable changes between the current and previous versions
  * [http://cnswww.cns.cwru.edu/~chet/bash/CHANGES CHANGES]: a complete bash change history
  * [http://cnswww.cns.cwru.edu/~chet/bash/COMPAT COMPAT]: compatibility issues between bash3 and previous versions

Here's a ''partial'' list of the changes, in a more compact format:

||'''Feature'''||'''Added in version'''||
||x+=string||3.1-alpha1||
||{x..y}||3.0-alpha||
||${!array[@]}||3.0-alpha||
||[[ =~||3.0-alpha||
||<<<||2.05b-alpha1||
||i++||2.04-devel||
||for ((;;))||2.04-devel||
||/dev/fd/N, /dev/tcp/host/port, etc.||2.04-devel||
||a=(*.txt) file expansion||2.03-alpha||
||extglob||2.02-alpha1||
||[[||2.02-alpha1||
||builtin printf||2.02-alpha1||
||$(< filename)||2.02-alpha1||
||** (exponentiation)||2.02-alpha1||
||\xNNN||2.02-alpha1||
||(( ))||2.0-beta2||

[[Anchor(faq62)]]
== How do I create a temporary file in a secure manner? ==
Good question. To be filled in later. (Interim hints: {{{tempfile}}} is not portable. {{{mktemp}}} exists more widely, but it may require a {{{-c}}} switch to create the file in advance; or it may create the file by default and barf if {{{-c}}} is supplied. There does not appear to be any single command that simply ''works'' everywhere, without testing various arguments.)
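As an interim sketch, assuming a {{{mktemp}}} that creates the file when given a template (the OpenBSD-derived and GNU versions behave this way):

{{{
   tmpfile=$(mktemp /tmp/myscript.XXXXXXXXXX) || exit 1   # prints the created file's name
   trap 'rm -f "$tmpfile"' 0                              # clean up when the script exits
   echo "working data" > "$tmpfile"
}}}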

[[Anchor(faq63)]]
== My ssh client hangs when I try to run a remote background job! ==
The following will not do what you expect:
{{{
   ssh me@remotehost 'sleep 120 &'
   # Client hangs for 120 seconds
}}}

This is a "feature" of [http://www.openssh.org/ OpenSSH]. The client will not close the connection as long as the remote end's terminal still is still in use -- and in the case of {{{sleep 120 &}}}, stdout and stderr are still connected to the terminal.

The immediate answer to your question -- "How do I get the client to disconnect so I can get my shell back?" -- is to kill the ssh client. You can do this with the {{{kill}}} or {{{pkill}}} commands, of course; or by sending the INT signal (usually Ctrl-C) for a non-interactive ssh session (as above); or by pressing '''<Enter><~><.>''' (Enter, Tilde, Period) in the client's terminal window for an interactive remote shell.

The long-term workaround for this is to ensure that all the file descriptors are redirected to a log file (or {{{/dev/null}}}) on the remote side:

{{{
   ssh me@remotehost 'sleep 120 >/dev/null 2>&1 &'
   # Client should return immediately
}}}

This also applies to restarting daemons on some legacy Unix systems.

{{{
   ssh root@hp-ux-box # Interactive shell
   ... # Discover that the problem is stale NFS handles
   /sbin/init.d/nfs.client stop # autofs is managed by this script and
   /sbin/init.d/nfs.client start # killing it on HP-UX is OK (unlike Linux)
   exit
   # Client hangs -- use Enter ~ . to kill it.
}}}

The legacy Unix {{{/sbin/init.d/nfs.client}}} script runs daemons in the background but leaves their stdout and stderr attached to the terminal (and they don't fully self-daemonize). The solution is either to fix the Unix vendor's broken init script, or to kill the ssh client process after this happens. The author of this article uses the latter approach.

[[Anchor(faq64)]]
== Why is it so hard to get an answer to the question that I asked in #bash ? ==

  * #bash aphorism #1 "The questioner's first description of the problem/question will be misleading."
  * corollary 1.1 "The questioner's second description of the problem/question will also be misleading"
  * corollary 1.2 "The questioner is never precise." For example, they will say "print the file" when they mean printing the file's name, rather than printing the file itself.
  * #bash aphorism #2, "The questioner will keep changing their original question until it drives the helpers in the channel insane."
  * #bash aphorism #3, "The data is never formatted in the way that makes it easiest to manipulate :-)"
  * #bash aphorism #4, "30 to 40 percent of the conversations in #bash will be about aphorisms #1 and #2"

[[Anchor(faq65)]]
== Is there a "PAUSE" command in bash like there is in MSDOS batch scripts? To prompt the user to press any key to continue? ==

No, but you can use these:
{{{
echo press enter to continue; read
}}}
{{{
echo press any key to continue; read -n 1
}}}

[[Anchor(faq66)]]
== I want to check if [[ $var == foo || $var == bar || $var = more ]] without repeating $var n times. ==

{{{
   case $var in
      foo|bar|more) ... ;;
   esac
}}}

[[Anchor(faq67)]]
== How can I trim leading/trailing white space from one of my variables? ==
There are a few ways to do this -- none of them elegant.

First, the most portable way would be to use sed:

{{{
   x=$(echo "$x" | sed -e 's/^ *//' -e 's/ *$//')
   # Note: this only removes spaces. For tabs too:
   x=$(echo "$x" | sed -e $'s/^[ \t]*//' -e $'s/[ \t]*$//')
   # Or possibly, with some systems:
   x=$(echo "$x" | sed -e 's/^[[:space:]]\+//' -e 's/[[:space:]]\+$//')
}}}

One can achieve the goal using builtins, although at the moment I'm not sure which shells support the following syntax:

{{{
   # Remove leading whitespace:
   while [[ $x = [$' \t\n']* ]]; do x=${x#[$' \t\n']}; done
   # And now trailing:
   while [[ $x = *[$' \t\n'] ]]; do x=${x%[$' \t\n']}; done
}}}

Of course, the preceding example is pretty slow, because it removes one character at a time, in a loop (although it's good enough in practice for most purposes). If you want something a bit fancier, there's a bash-only solution using extglob:

{{{
   shopt -s extglob
   x=${x##*([$' \t\n'])}; x=${x%%*([$' \t\n'])}
   shopt -u extglob
}}}

There are many, many other ways to do this. These are not necessarily the most efficient, but they're known to work.

[[Anchor(faq68)]]
== How do I run a command, and have it abort (timeout) after N seconds? ==

There are two C programs that can do this: [http://pilcrow.madison.wi.us/ doalarm], and [http://www.porcupine.org/forensics/tct.html timeout]. (Compiling them is beyond the scope of this document; suffice to say, it'll be trivial on GNU/Linux systems, easy on most BSDs, and painful on anything else....)

If you don't have or don't want one of the above two programs, you can use a perl one-liner to set an ALRM and then exec the program you want to run under a time limit. In any case, you must understand what your program does with SIGALRM.

{{{
function doalarm () { perl -e 'alarm shift; exec @ARGV' "$@" ; }

doalarm ${NUMBER_OF_SECONDS_BEFORE_ALRMING} program arg arg ...
}}}

If you can't or won't install one of these programs (which ''really'' should have been included with the basic core Unix utilities 30 years ago!), then the best you can do is an ugly hack like:

{{{
   command & pid=$!; { sleep 10 && kill $pid; } &
}}}

This will, as you will soon discover, produce quite a mess regardless of whether the timeout condition kicked in or not. Cleaning it up is not something worth my time -- just use {{{doalarm}}} or {{{timeout}}} instead. Really.

[[Anchor(faq69)]]
== I want to automate an ssh (or scp, or sftp) connection, but I don't know how to send the password.... ==

'''STOP!'''

First of all, if you actually were to embed your password in a script somewhere, it would be visible to the entire world (or at least, anyone who can read files on your system). This would defeat the entire purpose of having a password on your remote account.

If you understand this and still want to continue, then the next thing you need to do is read and understand the man page for {{{ssh-keygen(1)}}}. This will tell you how to generate a public/private key pair (in either RSA or DSA format), and how to use these keys to authenticate to the remote system without sending a password at all.

Since many of you are too lazy to read man pages, and instead prefer to ask us in #bash to read them for you, I'll even give a brief summary of the procedure here:

{{{
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub me@remote:
ssh me@remote 'cat id_rsa.pub >> .ssh/authorized_keys'
ssh me@remote date # should not prompt for passWORD,
                       # but your key may have a passPHRASE
}}}

If your key has a passphrase on it, and you want to avoid typing it every time, look into {{{ssh-agent(1)}}}. It's beyond the scope of this document, though.

If you're being prompted for a password even with the public key inserted into the remote {{{authorized_keys}}} file, chances are you have a permissions problem on the remote system. Check '''every single directory''' in the full path leading up to the {{{authorized_keys}}} file and make sure they do '''not''' have world- or group-write privileges. ''E.g.'', if your home directory is {{{/home/fred}}} and {{{/home}}} has group "staff" write privileges, {{{sshd}}} will refuse to honor your key.

If that's not it, then make sure you didn't spell it ''authorised_keys''. SSH uses the US spelling, ''authorized_keys''.

If you ''really'' want to use a password instead of public keys, first have your head examined. Then, if you ''still'' want to use a password, use {{{expect(1)}}}. And don't ask us for help with it.

[[Anchor(faq70)]]
== How do I convert Unix (epoch) timestamps to human-readable values? ==

The only sane way to handle time values within a program is to convert them into a linear scale. You can't store "January 17, 2005 at 5:37 PM" in a variable and expect to do anything with it. Therefore, any competent program is going to use time stamps with semantics such as "the number of seconds since point X". These are called ''epoch'' timestamps. If the epoch is January 1, 1970 at midnight UTC, then it's also called a "Unix timestamp", because this is how Unix stores all times (such as file modification times).

Standard Unix, unfortunately, has ''no'' tools to work with Unix timestamps. (Ironic, eh?) GNU date, and later BSD date, has a {{{%s}}} extension to generate output in Unix timestamp format:

{{{
    date +%s # Prints the current time in Unix format, e.g. 1164128484
}}}

This is commonly used in scripts when one requires the ''interval'' between two events:

{{{
   start=$(date +%s)
   ...
   end=$(date +%s)
   echo "Operation took $((end - start)) seconds."
}}}

Reading the SOURCECODE of GNU date's date parser reveals that it accepts Unix timestamps prefixed with '@', so:
{{{
   $ date -d "@1164128484"
   # Prints "Tue Nov 21 18:01:24 CET 2006" in the central European time zone
}}}

Another method that was suggested before is to trick GNU date using:
{{{
   date -d "1970-01-01 UTC + 1164128484 seconds"
   # Prints "Tue Nov 21 12:01:24 EST 2006" in the US/Eastern time zone.
}}}

If you don't have GNU date available, an external language such as Perl can be used:

{{{
   perl -le "print scalar localtime 1164128484"
   # Prints "Tue Nov 21 12:01:24 2006"
}}}

I used double quotes in these examples so that the time constant could be replaced with a variable reference. See the documentation for {{{date(1)}}} and Perl for details on changing the output format.
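For example, with GNU date the output format can be given directly:

{{{
   date -d "@1164128484" +"%Y-%m-%d %H:%M:%S"
   # Prints "2006-11-21 18:01:24" in the central European time zone
}}}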

Newer versions of Tcl (such as 8.5) have very good support of date and clock functions.
See the tclsh man page for usage details.
For example:

{{{
   echo 'puts [clock format [clock scan "today"]]' | tclsh
   # Prints today's date (the format can be adjusted with parameters to "clock format").
   
   echo 'puts [clock format [clock scan "fortnight"]]' | tclsh
   # Prints the date two weeks from now.
   
   echo 'puts [clock format [clock scan "5 years + 6 months ago"]]' | tclsh
   # Five and a half years ago, compensating for leap days and daylight savings time.
}}}

[[Anchor(faq71)]]
== How do I convert ASCII character to its decimal value and back? ==

This task is quite easy using the {{{printf}}} builtin. You can write two simple functions as shown below (or use the plain {{{printf}}} constructions alone).

{{{
   # chr() - converts decimal value to its ASCII character representation
   # ord() - converts ASCII character to its decimal value
 
   chr() {
     printf \\$(printf "%03o" "$1")
   }
 
   ord() {
     printf "%d" "'$1"
   }
 
   # examples:
 
   chr $(ord A) # -> A
   ord $(chr 65) # -> 65
}}}

The {{{ord}}} function above is quite tricky. It can be re-written in two (or even more) other ways (use the one that best suits your coding style or your actual needs).

{{{
   ord() {
     printf "%d" \"$1\"
   }
}}}

Or, rather:

{{{
   ord() {
     printf "%d" "'$1'"
   }
}}}

All of the above three {{{ord}}} functions should work properly.
<<Include(BashFAQ/001, , editlink)>>
<<Include(BashFAQ/002, , editlink)>>
<<Include(BashFAQ/003, , editlink)>>
<<Include(BashFAQ/004, , editlink)>>
<<Include(BashFAQ/005, , editlink)>>
<<Include(BashFAQ/006, , editlink)>>
<<Include(BashFAQ/007, , editlink)>>
<<Include(BashFAQ/008, , editlink)>>
<<Include(BashFAQ/009, , editlink)>>
<<Include(BashFAQ/010, , editlink)>>
<<Include(BashFAQ/011, , editlink)>>
<<Include(BashFAQ/012, , editlink)>>
<<Include(BashFAQ/013, , editlink)>>
<<Include(BashFAQ/014, , editlink)>>
<<Include(BashFAQ/015, , editlink)>>
<<Include(BashFAQ/016, , editlink)>>
<<Include(BashFAQ/017, , editlink)>>
<<Include(BashFAQ/018, , editlink)>>
<<Include(BashFAQ/019, , editlink)>>
<<Include(BashFAQ/020, , editlink)>>
<<Include(BashFAQ/021, , editlink)>>
<<Include(BashFAQ/022, , editlink)>>
<<Include(BashFAQ/023, , editlink)>>
<<Include(BashFAQ/024, , editlink)>>
<<Include(BashFAQ/025, , editlink)>>
<<Include(BashFAQ/026, , editlink)>>
<<Include(BashFAQ/027, , editlink)>>
<<Include(BashFAQ/028, , editlink)>>
<<Include(BashFAQ/029, , editlink)>>
<<Include(BashFAQ/030, , editlink)>>
<<Include(BashFAQ/031, , editlink)>>
<<Include(BashFAQ/032, , editlink)>>
<<Include(BashFAQ/033, , editlink)>>
<<Include(BashFAQ/034, , editlink)>>
<<Include(BashFAQ/035, , editlink)>>
<<Include(BashFAQ/036, , editlink)>>
<<Include(BashFAQ/037, , editlink)>>
<<Include(BashFAQ/038, , editlink)>>
<<Include(BashFAQ/039, , editlink)>>
<<Include(BashFAQ/040, , editlink)>>
<<Include(BashFAQ/041, , editlink)>>
<<Include(BashFAQ/042, , editlink)>>
<<Include(BashFAQ/043, , editlink)>>
<<Include(BashFAQ/044, , editlink)>>
<<Include(BashFAQ/045, , editlink)>>
<<Include(BashFAQ/046, , editlink)>>
<<Include(BashFAQ/047, , editlink)>>
<<Include(BashFAQ/048, , editlink)>>
<<Include(BashFAQ/049, , editlink)>>
<<Include(BashFAQ/050, , editlink)>>
<<Include(BashFAQ/051, , editlink)>>
<<Include(BashFAQ/052, , editlink)>>
<<Include(BashFAQ/053, , editlink)>>
<<Include(BashFAQ/054, , editlink)>>
<<Include(BashFAQ/055, , editlink)>>
<<Include(BashFAQ/056, , editlink)>>
<<Include(BashFAQ/057, , editlink)>>
<<Include(BashFAQ/058, , editlink)>>
<<Include(BashFAQ/059, , editlink)>>
<<Include(BashFAQ/060, , editlink)>>
<<Include(BashFAQ/061, , editlink)>>
<<Include(BashFAQ/062, , editlink)>>
<<Include(BashFAQ/063, , editlink)>>
<<Include(BashFAQ/064, , editlink)>>
<<Include(BashFAQ/065, , editlink)>>
<<Include(BashFAQ/066, , editlink)>>
<<Include(BashFAQ/067, , editlink)>>
<<Include(BashFAQ/068, , editlink)>>
<<Include(BashFAQ/069, , editlink)>>
<<Include(BashFAQ/070, , editlink)>>
<<Include(BashFAQ/071, , editlink)>>
<<Include(BashFAQ/072, , editlink)>>
<<Include(BashFAQ/073, , editlink)>>
<<Include(BashFAQ/074, , editlink)>>
<<Include(BashFAQ/075, , editlink)>>
<<Include(BashFAQ/076, , editlink)>>
<<Include(BashFAQ/077, , editlink)>>
<<Include(BashFAQ/078, , editlink)>>
<<Include(BashFAQ/079, , editlink)>>
<<Include(BashFAQ/080, , editlink)>>
<<Include(BashFAQ/081, , editlink)>>
<<Include(BashFAQ/082, , editlink)>>
<<Include(BashFAQ/083, , editlink)>>
<<Include(BashFAQ/084, , editlink)>>
<<Include(BashFAQ/085, , editlink)>>
<<Include(BashFAQ/086, , editlink)>>
<<Include(BashFAQ/087, , editlink)>>
<<Include(BashFAQ/088, , editlink)>>
<<Include(BashFAQ/089, , editlink)>>
<<Include(BashFAQ/090, , editlink)>>
<<Include(BashFAQ/091, , editlink)>>
<<Include(BashFAQ/092, , editlink)>>
<<Include(BashFAQ/093, , editlink)>>
<<Include(BashFAQ/094, , editlink)>>
<<Include(BashFAQ/095, , editlink)>>
<<Include(BashFAQ/096, , editlink)>>
<<Include(BashFAQ/097, , editlink)>>
<<Include(BashFAQ/098, , editlink)>>
<<Include(BashFAQ/099, , editlink)>>
<<Include(BashFAQ/100, , editlink)>>

BASH Frequently Asked Questions

Note: The FAQ was split into individual pages for easier editing. Just click the 'Edit' link at the bottom of each entry, and please don't add new ones to this page; create a new page with the entry number instead.
Thank you.

These are answers to frequently asked questions on channel #bash on the freenode IRC network. These answers are contributed by the regular members of the channel (originally heiner, and then others including greycat and r00t), and by users like you. If you find something inaccurate or simply misspelled, please feel free to correct it!

All the information here is presented without any warranty or guarantee of accuracy. Use it at your own risk. When in doubt, please consult the man pages or the GNU info pages as the authoritative references.

BASH is a BourneShell compatible shell, which adds many new features to its ancestor. Most of them are available in the KornShell, too. The answers given in this FAQ may be slanted toward Bash, or they may be slanted toward the lowest common denominator Bourne shell, depending on who wrote the answer. In most cases, an effort is made to provide both a portable (Bourne) and an efficient (Bash, where appropriate) answer. If a question is not strictly shell specific, but rather related to Unix, it may be in the UnixFaq.

This FAQ assumes a certain level of familiarity with basic shell script syntax. If you're completely new to Bash or to the Bourne family of shells, you may wish to start with the (incomplete) BashGuide.

If you can't find the answer you're looking for here, try BashPitfalls. If you want to help, you can add new questions with answers here, or try to answer one of the BashOpenQuestions.

Chet Ramey's official Bash FAQ contains many technical questions not covered here.

Contents

  1. How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
    1. Field splitting, whitespace trimming, and other input processing
    2. Input source selection
    3. My text files are broken! They lack their final newlines!
    4. How to keep other commands from "eating" the input
  2. How can I store the return value and/or output of a command in a variable?
  3. How can I sort or compare files based on some metadata attribute (most recently modified, size, etc)?
  4. How can I check whether a directory is empty or not? How do I check for any *.mpg files, or count how many there are?
  5. How can I use array variables?
    1. Intro
    2. Loading values into an array
      1. Loading lines from a file or stream
        1. Handling newlines (or lack thereof) at the end of a file
        2. Other methods
        3. Don't read lines with for!
      2. Reading NUL-delimited streams
      3. Appending to an existing array
    3. Retrieving values from an array
      1. Retrieving with modifications
    4. Using @ as a pseudo-array
  6. See Also
  7. How can I use variable variables (indirect variables, pointers, references) or associative arrays?
    1. Associative Arrays
      1. Associative array hacks in older shells
    2. Indirection
      1. Think before using indirection
      2. Evaluating indirect/reference variables
      3. Assigning indirect/reference variables
        1. eval
    3. See Also
  8. Is there a function to return the length of a string?
  9. How can I recursively search all files for a string?
  10. What is buffering? Or, why does my command line produce no output: tail -f logfile | grep 'foo bar' | awk ...
      1. Eliminate unnecessary commands
      2. Your command may already support unbuffered output
      3. Disabling buffering in a C application
      4. unbuffer
      5. stdbuf
      6. less
      7. coproc
      8. Further reading
  11. How can I recreate a directory hierarchy structure, without the files?
  12. How can I print the n'th line of a file?
    1. See Also
  13. How do I invoke a shell command from a non-shell application?
    1. Calling shell functions
  14. How can I concatenate two variables? How do I append a string to a variable?
  15. How can I redirect the output of multiple commands at once?
  16. How can I run a command on all files with the extension .gz?
  17. How can I use a logical AND/OR/NOT in a shell pattern (glob)?
  18. How can I group expressions in an if statement, e.g. if (A AND B) OR C?
  19. How can I use numbers with leading zeros in a loop, e.g. 01, 02?
    1. Brace expansion
    2. Formatting with printf
    3. Ksh formatted brace expansion
    4. External programs
  20. How can I split a file into line ranges, e.g. lines 1-10, 11-20, 21-30?
  21. How can I find and safely handle file names containing newlines, spaces or both?
  22. How can I replace a string with another string in a variable, a stream, a file, or in all the files in a directory?
    1. Files
      1. Just Tell Me What To Do
      2. Using a file editor
      3. Using a temporary file
      4. Using nonstandard tools
    2. Variables
    3. Streams
  23. How can I calculate with floating point numbers instead of just integers?
  24. I want to launch an interactive shell that has special aliases and functions, not the ones in the user's ~/.bashrc.
    1. Variant question: "I have a script that sets up an environment, and I want to give the user control at the end of it."
  25. I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?
    1. Workarounds
  26. How can I access positional parameters after $9?
  27. How can I randomize (shuffle) the order of lines in a file? Or select a random line from a file, or select a random file from a directory?
    1. Shuffling an array
    2. Selecting a random line/file
      1. With counting lines first
      2. Without counting lines first
    3. Known bugs
    4. Using external random data sources
      1. Awk as a source of seeded pseudorandom numbers
  28. How can two unrelated processes communicate?
    1. A file
    2. A directory as a lock
    3. Signals
    4. Named Pipes
  29. How do I determine the location of my script? I want to read some config files from the same place.
    1. I need to access my data/config files
    2. I need to access files bundled with my script
      1. Using BASH_SOURCE
      2. Using PWD
      3. Using a configuration/wrapper
    3. Why $0 is NOT an option
  30. How can I display the target of a symbolic link?
  31. How can I rename all my *.foo files to *.bar, or convert spaces to underscores, or convert upper-case file names to lower case?
    1. Recursively
    2. Upper- and lower-case
    3. Nonstandard tools
  32. What is the difference between test, [ and [[ ?
    1. Theory
  33. How can I redirect the output of 'time' to a variable or file?
  34. How can I find a process ID for a process given its name?
    1. greycat rant: daemon management
  35. Can I do a spinner in Bash?
  36. How can I handle command-line options and arguments in my script easily?
    1. Overview
    2. Manual loop
    3. getopts
  37. How can I get all lines that are: in both of two files (set intersection) or in only one of two files (set subtraction).
  38. How can I print text in various colors?
    1. Discussion
  39. How do Unix file permissions work?
  40. What are all the dot-files that bash reads?
  41. How do I use dialog to get input from the user?
  42. How do I determine whether a variable contains a substring?
  43. How can I find out if a process is still running?
  44. Why does my crontab job fail? 0 0 * * * some command > /var/log/mylog.`date +%Y%m%d`
  45. How do I create a progress bar? How do I see a progress indicator when copying/moving files?
    1. When copying/moving files
  46. How can I ensure that only one instance of a script is running at a time (mutual exclusion, locking)?
    1. Discussion
      1. Alternative Solution
      2. Removal of locking mechanism
      3. flock file descriptor uniqueness
  47. I want to check to see whether a word is in a list (or an element is a member of a set).
    1. Associative arrays
    2. Indexed arrays
    3. enum (ksh93)
  48. How can I redirect stderr to a pipe?
  49. Eval command and security issues
    1. Examples of bad use of eval
    2. The problem with bash's name references
    3. Examples of good use of eval
    4. The problem with declare
    5. Robust eval usage
  50. How can I view periodic updates/appends to a file? (ex: growing log file)
  51. I'm trying to put a command in a variable, but the complex cases always fail!
    1. Things that do not work
    2. I'm trying to save a command so I can run it later without having to repeat it each time
    3. I only want to pass options if the runtime data needs them
    4. I want to generalize a task, in case the low-level tool changes later
    5. I'm constructing a command based on information that is only known at run time
    6. I want a log of my script's actions
  52. I want history-search just like in tcsh. How can I bind it to the up and down keys?
  53. How do I convert a file from DOS format to UNIX format (remove CRs from CR-LF line terminators)?
    1. Testing for line terminator type
    2. Converting files
  54. I have a fancy prompt with colors, and now bash doesn't seem to know how wide my terminal is. Lines wrap around incorrectly.
    1. Escape the colors with \[ \]
    2. Escape the colors with \001 \002 (dynamic prompt or read -p)
  55. How can I tell whether a variable contains a valid number?
    1. Hand parsing
    2. Using the parsing done by [ and printf (or "using eq")
  56. Tell me all about 2>&1 -- what's the difference between 2>&1 >foo and >foo 2>&1, and when do I use which?
    1. If you're still confused...
    2. See Also
  57. How can I untar (or unzip) multiple tarballs at once?
  58. How can I group entries (in a file by common prefixes)?
  59. Can bash handle binary data?
  60. I saw this command somewhere: :(){ :|:& } (fork bomb). How does it work?
    1. Why ":(){ :|:& };:" is a bad way to define a fork bomb
  61. I'm trying to write a script that will change directory (or set a variable), but after the script finishes, I'm back where I started (or my variable isn't set)!
  62. Is there a list of which features were added to specific releases (versions) of Bash?
    1. Changes in the upcoming bash-5.3 release
    2. Notable changes in released bash versions
    3. List of bash releases and other notable events
  63. How do I create a temporary file in a secure manner?
    1. Use your $HOME
    2. Make a temporary directory
    3. Use platform-specific tools
    4. Using m4
    5. Other approaches
  64. My ssh client hangs when I try to logout after running a remote background job!
  65. Why is it so hard to get an answer to the question that I asked in #bash?
  66. Is there a "PAUSE" command in bash like there is in MSDOS batch scripts? To prompt the user to press any key to continue?
  67. I want to check if [[ $var == foo || $var == bar || $var == more ]] without repeating $var n times.
  68. How can I trim leading/trailing white space from one of my variables?
  69. How do I run a command, and have it abort (timeout) after N seconds?
  70. I want to automate an ssh (or scp, or sftp) connection, but I don't know how to send the password....
    1. Limiting access to process information
  71. How do I convert Unix (epoch) times to human-readable values?
  72. How do I convert an ASCII character to its decimal (or hexadecimal) value and back? How do I do URL encoding or URL decoding?
    1. URL encoding and URL decoding
    2. More complete examples (with UTF-8 support)
      1. Note about Ext Ascii and UTF-8 encoding
  73. How can I ensure my environment is configured for cron, batch, and at jobs?
  74. How can I use parameter expansion? How can I get substrings? How can I get a file without its extension, or get just a file's extension? What are some good ways to do basename and dirname?
    1. Examples of Filename Manipulation
    2. Bash 4
    3. Parameter Expansion on Arrays
    4. Portability
  75. How do I get the effects of those nifty Bash Parameter Expansions in older shells?
  76. How do I use 'find'? I can't understand the man page at all!
  77. How do I get the sum of all the numbers in a column?
    1. BASH Alternatives
  78. How do I log history or "secure" bash against history removal?
  79. I want to set a user's password using the Unix passwd command, but how do I script that? It doesn't read standard input!
    1. Construct your own hashed password and write it to some file
    2. Fool the computer into thinking you are a human
    3. Find some magic system-specific tool
    4. Don't rely on /dev/tty for security
  80. How can I grep for lines containing foo AND bar, foo OR bar? Or for files containing foo AND bar, possibly on separate lines? Or files containing foo but NOT bar?
    1. foo AND bar on the same line
    2. foo OR bar on the same line
    3. foo AND bar in the same file, not necessarily on the same line
    4. foo but NOT bar in the same file, possibly on different lines
  81. How can I make an alias that takes an argument?
  82. How can I determine whether a command exists anywhere in my PATH?
  83. Why is $(...) preferred over `...` (backticks)?
    1. Important differences
    2. Other advantages
    3. See also:
  84. How do I determine whether a variable is already defined? Or a function?
    1. Setting a default value
    2. Testing whether a function has been defined
  85. How do I return a string (or large number, or negative number) from a function? "return" only lets me give a number from 0 to 255.
    1. Capturing standard output
    2. Global variables
    3. Writing to a file
    4. Dynamically scoped variables
  86. How to write several times to a fifo without having to reopen it?
    1. The problem
    2. Grouping the commands
    3. Opening a file descriptor
    4. Using tail
    5. Using a guarding process
  87. How to ignore aliases, functions, or builtins when running a command?
    1. Bypass aliases
    2. Prioritize calling a builtin or external command
    3. Prioritize calling only a builtin
    4. Call an external utility by PATH resolution, bypassing builtins and/or functions
    5. Call a specific external utility
    6. See also
  88. How can I get a file's permissions (or other metadata) without parsing ls -l output?
  89. How can I avoid losing any history lines?
    1. Using extended attributes
    2. Prevent mangled history with atomic writes and lock files
    3. Compressing History Files
    4. Archiving History Files
    5. Archiving by month
    6. Saving history into a database
  90. I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!
  91. How do I prepend a text to a file (the opposite of >>)?
  92. I'm trying to get the number of columns or lines of my terminal but the variables COLUMNS / LINES are always empty.
  93. How do I write a CGI script that accepts parameters?
    1. Associative Arrays
    2. Older Bash Shells
    3. The Wrong Way
  94. How can I set the contents of my terminal's title bar?
  95. I want to get an alert when my disk is full (parsing df output).
  96. I'm getting "Argument list too long". How can I process a large list in chunks?
  97. ssh eats my word boundaries! I can't do ssh remotehost make CFLAGS="-g -O"!
    1. Manual requoting
    2. Passing data on stdin instead of the command line
    3. Automatic requoting of each parameter
  98. How do I determine whether a symlink is dangling (broken)?
  99. How to add localization support to your bash scripts
    1. First, some variables you must understand
    2. Marking strings as translatable
    3. Generating and/or merging PO files
    4. Translate the strings
    5. Install MO files
    6. Test!
    7. References
  100. How can I get the newest (or oldest) file from a directory?
  101. How do I do string manipulations in bash?
    1. Parameter expansion syntax
    2. Length of a string
    3. Checking for substrings
    4. Substituting part of a string
    5. Removing part of a string
    6. Extracting parts of strings
    7. Splitting a string into fields
    8. Joining fields together
    9. Upper/lower case conversion
    10. Default or alternate values
    11. See Also

1. How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?

Don't try to use "for". Use a while loop and the read command. Here is the basic template; there are many variations to discuss:

   1 while IFS= read -r line; do
   2   printf '%s\n' "$line"
   3 done < "$file"

line is a variable name, chosen by you. You can use any valid shell variable name(s) there; see field splitting below.

< "$file" redirects the loop's input from a file whose name is stored in a variable; see source selection below.

If you want to read lines from a file into an array, see FAQ 5.

1.1. Field splitting, whitespace trimming, and other input processing

The -r option to read prevents backslash interpretation (usually used as a backslash newline pair, to continue over multiple lines or to escape the delimiters). Without this option, any unescaped backslashes in the input will be discarded. You should almost always use the -r option with read.

The most common exception to this rule is when -e is used, which uses Readline to obtain the line from an interactive shell. In that case, tab completion will add backslashes to escape spaces and such, and you do not want them to be literally included in the variable. This would never be used when reading anything line-by-line, though, and -r should always be used when doing so.

By default, read modifies each line read, by removing all leading and trailing whitespace characters (spaces and tabs, if present in IFS). If that is not desired, the IFS variable may be cleared, as in the example above. If you want the trimming, leave IFS alone:

   1 # Leading/trailing whitespace trimming.
   2 while read -r line; do
   3   printf '%s\n' "$line"
   4 done < "$file"

If you want to operate on individual fields within each line, you may supply additional variables to read:

   1 # Input file has 3 columns separated by white space (space or tab characters only).
   2 while read -r first_name last_name phone; do
   3   # Only print the last name (second column)
   4   printf '%s\n' "$last_name"
   5 done < "$file"

If the field delimiters are not whitespace, you can set IFS (internal field separator):

   1 # Extract the username and its shell from /etc/passwd:
   2 while IFS=: read -r user pass uid gid gecos home shell; do
   3   printf '%s: %s\n' "$user" "$shell"
   4 done < /etc/passwd

For tab-delimited files, use IFS=$'\t', though beware that multiple tab characters in the input will be considered as one delimiter (and the Ksh93/Zsh IFS=$'\t\t' workaround won't work in Bash).
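
For instance, a minimal sketch for a three-column, TAB-separated file (the column names are purely illustrative):

    # Tab-delimited input.  Note that runs of consecutive tabs collapse into a
    # single delimiter here, so empty fields cannot be preserved this way.
    while IFS=$'\t' read -r user host comment; do
      printf '%s logs in to %s\n' "$user" "$host"
    done < "$file"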

You do not necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

   1 # Bash
   2 read -r first last junk <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'
   3 
   4 # first will contain "Bob", and last will contain "Smith".
   5 # junk holds everything else.

Some people use the throwaway variable _ as a "junk variable" to ignore fields. It (or indeed any variable) can also be used more than once in a single read command, if we don't care what goes into it:

   1 # Bash
   2 read -r _ _ first middle last _ <<< "$record"
   3 
   4 # We skip the first two fields, then read the next three.
   5 # Remember, the final _ can absorb any number of fields.
   6 # It doesn't need to be repeated there.

Note that this usage of _ is only guaranteed to work in Bash. Many other shells use _ for other purposes that will at best cause this to not have the desired effect, and can break the script entirely. It is better to choose a unique variable that isn't used elsewhere in the script, even though _ is a common Bash convention.

If avoiding comments starting with # is desired, you can simply skip them inside the loop:

   1 # Bash
   2 while read -r line; do
   3   [[ $line = \#* ]] && continue
   4   printf '%s\n' "$line"
   5 done < "$file"

Above read removes leading and trailing spaces or tabs (assuming IFS hasn't been modified from its default value) so we just need to look for a # at the start of the (now trimmed) line. To preserve the spacing:

   1 # Bash
   2 while IFS= read -r line; do
   3   [[ $line = *([[:blank:]])\#* ]] && continue
   4   printf '%s\n' "$line"
   5 done < "$file"

In older versions of Bash, you'd need shopt -s extglob for the *(...) extended glob operator to be available. In newer versions they are always available for the =/==/!= pattern matching operator of the [[ ... ]] construct.

1.2. Input source selection

The redirection < "$file" tells the while loop to read from the file whose name is in the variable file. If you would prefer to use a literal pathname instead of a variable, you may do that as well. If your input source is the script's standard input, then you don't need any redirection at all.

If your input source is the contents of a variable/parameter, bash can iterate over its lines using a here string:

   1 while IFS= read -r line; do
   2   printf '%s\n' "$line"
   3 done <<< "$var"

The same can be done in any Bourne-type shell by using a "here document" (although read -r is POSIX, not Bourne):

   1 while IFS= read -r line; do
   2   printf '%s\n' "$line"
   3 done <<EOF
   4 $var
   5 EOF

One may also read from a command instead of a regular file:

   1 some command | while IFS= read -r line; do
   2   printf '%s\n' "$line"
   3 done

This method is especially useful for processing the output of find with a block of commands:

   1 find . -type f -print0 | while IFS= read -r -d '' file; do
   2     dir=${file%/*} base=${file##*/}
   3     mv "$file" "$dir/${base// /_}"
   4 done

This reads one filename at a time from the find command and renames the file, replacing spaces with underscores in its base name.

Note the usage of -print0 in the find command, which uses NUL bytes as filename delimiters; and -d '' in the read command to instruct it to read all text into the file variable until it finds a NUL byte. By default, find and read delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour will split up those filenames at the newlines and cause the loop body to fail. Additionally it is necessary to set IFS to an empty string, because otherwise read would still strip leading and trailing whitespace (with the default value of IFS). See FAQ #20 for more details.

Using a pipe to send find's output into a while loop places the loop in a SubShell, which means any state changes you make (changing variables, cd, opening and closing files, etc.) will be lost when the loop finishes. To avoid that, you may use a ProcessSubstitution:

   1 linecount=0
   2 
   3 while IFS= read -r line; do
   4   linecount=$((linecount + 1))
   5 done < <(some command)
   6 
   7 printf 'total lines: %d\n' "$linecount"

See FAQ 24 for more discussion.

1.3. My text files are broken! They lack their final newlines!

If there are some characters after the last line in the file (or to put it differently, if the last line is not terminated by a newline character), then read will read it but return false, leaving the broken partial line in the read variable(s). You can process this after the loop:

   1 # Emulate cat
   2 while IFS= read -r line; do
   3   printf '%s\n' "$line"
   4 done < "$file"
   5 [[ -n $line ]] && printf %s "$line"

Or:

   1 # This does not work:
   2 printf 'line 1\ntruncated line 2' | while read -r line; do
   3   echo $line
   4 done
   5 
   6 # This does not work either:
   7 printf 'line 1\ntruncated line 2' | while IFS= read -r line; do
   8   echo "$line"
   9 done
  10 [[ $line ]] && echo -n "$line"
  11 
  12 # This works:
  13 printf 'line 1\ntruncated line 2' | {
  14   while IFS= read -r line; do
  15     echo "$line"
  16   done
  17   [[ $line ]] && echo "$line"
  18 }

The first example, beyond missing the after-loop test, is also missing quotes. See Quotes or Arguments for an explanation why. The Arguments page is an especially important read.

For a discussion of why the second example above does not work as expected, see FAQ #24.

Alternatively, you can simply add a logical OR to the while test:

   1 while IFS= read -r line || [[ -n $line ]]; do
   2   printf '%s\n' "$line"
   3 done < "$file"
   4 
   5 printf 'line 1\ntruncated line 2' | while IFS= read -r line || [[ -n $line ]]; do
   6   echo "$line"
   7 done

1.4. How to keep other commands from "eating" the input

Some commands greedily eat up all available data on standard input. The examples above do not take precautions against such programs. For example,

   1 while IFS= read -r line; do
   2   cat > ignoredfile
   3   printf '%s\n' "$line"
   4 done < "$file"

will only print the contents of the first line, with the remaining contents going to "ignoredfile", as cat slurps up all available input.

One workaround is to use a numeric FileDescriptor rather than standard input:

   1 # Bash
   2 while IFS= read -r -u 9 line; do
   3   cat > ignoredfile
   4   printf '%s\n' "$line"
   5 done 9< "$file"
   6 
   7 # Note that read -u is not portable to every shell.
   8 # Use a redirect to ensure it works in any POSIX compliant shell:
   9 while IFS= read -r line <&9; do
  10   cat > ignoredfile
  11   printf '%s\n' "$line"
  12 done 9< "$file"

Or:

   1 exec 9< "$file"
   2 while IFS= read -r line <&9; do
   3   cat > ignoredfile
   4   printf '%s\n' "$line"
   5 done
   6 exec 9<&-

With this arrangement, cat no longer eats the loop's input; instead, at each iteration it reads from the script's standard input (waiting for the user to type something, if that is a terminal) and writes what it gets to the file ignoredfile.

You might need this, for example, with mencoder which will accept user input if there is any, but will continue silently if there isn't. Other commands that act this way include ssh and ffmpeg. Additional workarounds for this are discussed in FAQ #89.


CategoryShell CategoryBashguide

2. How can I store the return value and/or output of a command in a variable?

Well, that depends on whether you want to store the command's output (either stdout, or stdout + stderr) or its exit status (0 to 255, with 0 typically meaning "success").

If you want to capture the output, you use command substitution:

   1 output=$(command)      # stdout only; stderr remains uncaptured
   2 output=$(command 2>&1) # both stdout and stderr will be captured

If you want the exit status, you use the special parameter $? after running the command:

   1 command
   2 status=$?

If you want both:

   1 output=$(command)
   2 status=$?

The assignment to output has no effect on command's exit status, which is still in $?.

If you don't actually want to store the exit status, but simply want to take an action upon success or failure, just use if:

   1 if command; then
   2     printf "it succeeded\n"
   3 else
   4     printf "it failed\n"
   5 fi

Or if you want to capture stdout as well as taking action on success/failure, without explicitly storing or checking $?:

   1 if output=$(command); then
   2     printf "it succeeded\n"
   3     ...

If you don't understand the difference between standard output and standard error, here is a brief demonstration. A sane command writes the output that you request to standard output (stdout) and only writes errors to standard error (stderr). Like so:

$ dig +short A mywiki.wooledge.org
199.231.184.176
$ ip=$(dig +short A mywiki.wooledge.org)
$ echo "{$ip}"
{199.231.184.176}

$ ls no-such-file
ls: cannot access 'no-such-file': No such file or directory
$ output=$(ls no-such-file)
ls: cannot access 'no-such-file': No such file or directory
$ echo "{$output}"
{}

In the example above, dig wrote output to stdout, which was captured in the ip variable. ls encountered an error, so it did not write anything to stdout. It wrote to stderr, which was not captured (because we didn't use 2>&1). The error message appeared directly on the terminal instead.

Some commands are not well-written, however, and may write information to the wrong place. You must keep an eye out for such commands, and work around them when necessary. For example:

$ vers=$(python --version)
Python 2.7.13
$ echo "{$vers}"
{}

Even though we specifically asked for the version number, python wrote it to stderr. Thus, it appeared on the terminal, and was not captured in the vers variable. You'd need to use 2>&1 here.
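
In such a case, redirecting stderr into the command substitution does the trick (same python as above):

$ vers=$(python --version 2>&1)
$ echo "{$vers}"
{Python 2.7.13}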

What if you want the exit status of one command from a pipeline? If you want the last command's status, no problem -- it's in $? just like before. If you want some other command's status, use the PIPESTATUS array (BASH only. In the case of Zsh, it's lower-cased pipestatus). Say you want the exit status of grep in the following:

   1 grep foo somelogfile | head -n5
   2 status=${PIPESTATUS[0]}

Bash 3.0 added a pipefail option as well, which can be used if you simply want to take action upon failure of the grep:

   1 set -o pipefail
   2 if ! grep foo somelogfile | head -n5; then
   3     printf "uh oh\n"
   4 fi

Now, some trickier stuff. Let's say you want only the stderr, but not stdout. Well, then first you have to decide where you do want stdout to go:

   1 output=$(command 2>&1 >/dev/null)  # Save stderr, discard stdout.
   2 output=$(command 2>&1 >/dev/tty)   # Save stderr, send stdout to the terminal.
   3 output=$(command 3>&2 2>&1 1>&3-)  # Save stderr, send stdout to script's stderr.

Since the last example may seem a bit confusing, here is the explanation. First, keep in mind that 1>&3- is equivalent to 1>&3 3>&-. So it will be easier to analyse the following sequence: $(... 3>&2 2>&1 1>&3 3>&-)

||'''Redirection''' ||'''fd 0 (stdin)''' ||'''fd 1 (stdout)''' ||'''fd 2 (stderr)''' ||'''fd 3''' ||'''Description''' ||
||initial ||/dev/tty ||/dev/tty ||/dev/tty || ||Let's assume this is run in a terminal, so stdin, stdout and stderr are all initially connected to the terminal (tty). ||
||$(...) ||/dev/tty ||pipe ||/dev/tty || ||First, the command substitution is set up. Command's stdout (FileDescriptor 1) gets captured (by using a pipe internally). Command's stderr (FD 2) still points to its regular place (the script's stderr). ||
||3>&2 ||/dev/tty ||pipe ||/dev/tty ||/dev/tty ||Next, FD 3 should point to what FD 2 points to at this very moment, meaning FD 3 will point to the script's stderr ("save stderr in FD 3"). ||
||2>&1 ||/dev/tty ||pipe ||pipe ||/dev/tty ||Next, FD 2 should point to what FD 1 currently points to, meaning FD 2 will point to stdout. Right now, both FD 2 and FD 1 would be captured. ||
||1>&3 ||/dev/tty ||/dev/tty ||pipe ||/dev/tty ||Next, FD 1 should point to what FD 3 currently points to, meaning FD 1 will point to the script's stderr. FD 1 is no longer captured. We have "swapped" FD 1 and FD 2. ||
||3>&- ||/dev/tty ||/dev/tty ||pipe || ||Finally, we close FD 3 as it is no longer necessary. ||

A little note: operation n>&m- is sometimes called moving FD m to FD n.

This way what the script writes to FD 2 (normally stderr) will be written to stdout because of the second redirection. What the script writes to FD 1 (normally stdout) will be written to stderr because of the first and third redirections. Stdout and stderr got replaced. Done.

It's possible, although considerably harder, to let stdout "fall through" to wherever it would've gone if there hadn't been any redirection. This involves "saving" the current value of stdout, so that it can be used inside the command substitution:

   1 exec 3>&1                    # Save the place that stdout (1) points to.
   2 output=$(command 2>&1 1>&3)  # Run command.  stderr is captured.
   3 exec 3>&-                    # Close FD #3.
   4 
   5 # Or this alternative, which captures stderr, letting stdout through:
   6 { output=$(command 2>&1 1>&3-) ;} 3>&1

In the last example above, note that 1>&3- duplicates FD 3 and stores a copy in FD 1, and then closes FD 3. It could also be written 1>&3 3>&-.

What you cannot do is capture stdout in one variable, and stderr in another, using only FD redirections. You must use a temporary file (or a named pipe) to achieve that one.
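
Here is a rough sketch of the temporary-file approach (mktemp is assumed to be available; error handling is kept minimal):

    errfile=$(mktemp) || exit 1        # temporary file to hold stderr
    output=$(command 2>"$errfile")     # stdout is captured; stderr goes to the file
    status=$?
    errors=$(<"$errfile")              # read the collected stderr back in
    rm -f "$errfile"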

Well, you can use a horrible hack like:

   1 cmd() { curl -s -v http://www.google.fr; }
   2 
   3 result=$(
   4     { stdout=$(cmd) ; } 2>&1
   5     printf "this line is the separator\n"
   6     printf "%s\n" "$stdout"
   7 )
   8 var_out=${result#*this line is the separator$'\n'}
   9 var_err=${result%$'\n'this line is the separator*}

Obviously, this is not robust, because either the standard output or the standard error of the command could contain whatever separator string you employ.

If you also want the exit code of your cmd, here is a modification of the above hack (this variant also handles the case where cmd writes nothing to stdout):

   1 cmd() { curl -s -v http://www.google.fr; }
   2 
   3 result=$(
   4     { stdout=$(cmd); returncode=$?; } 2>&1
   5     printf "this is the separator"
   6     printf "%s\n" "$stdout"
   7     exit "$returncode"
   8 )
   9 returncode=$?
  10 
  11 var_out=${result#*this is the separator}
  12 var_err=${result%this is the separator*}

Note: the original question read, "How can I store the return value of a command in a variable?" This was, verbatim, an actual question asked in #bash, ambiguity and all.


CategoryShell

3. How can I sort or compare files based on some metadata attribute (most recently modified, size, etc)?

The tempting solution is to use ls to output sorted filenames and operate on the results using e.g. awk. As usual, the ls approach cannot be made robust and should never be used in scripts due in part to the possibility of arbitrary characters (including newlines) present in filenames. Therefore, we need some other way to compare file metadata.

The most common requirements are to get the most or least recently modified, or largest or smallest files in a directory. Bash and all ksh variants can compare modification times (mtime) using the -nt and -ot operators of the conditional expression compound command:

   1 unset -v latest
   2 for file in "$dir"/*; do
   3   [[ $file -nt $latest ]] && latest=$file
   4 done

Or to find the oldest:

   1 unset -v oldest
   2 for file in "$dir"/*; do
   3   [[ -z $oldest || $file -ot $oldest ]] && oldest=$file
   4 done

Keep in mind that mtime on directories is that of the most recently added, removed, or renamed file in that directory. Also note that -nt and -ot are not specified by POSIX test, but many shells such as dash include them anyway. No bourne-like shell has analogous operators for comparing by atime or ctime, so one would need external utilities for that; however, it's nearly impossible to either produce output which can be safely parsed, or handle said output in a shell without using nonstandard features on both ends.

If the sorting criteria are different from "oldest or newest file by mtime", then GNU find and GNU sort may be used together to produce a sorted list of filenames + timestamps, delimited by NUL characters. This will operate recursively by default. GNU find's -maxdepth operator can limit the search depth to 1 directory if needed. Here are a few possibilities, which can be modified as necessary to use atime or ctime, or to sort in reverse order:

   1 # GNU find + GNU sort (To the precision possible on the given OS, but returns only one result)
   2 IFS= read -r -d '' latest \
   3   < <(find "$dir" -type f -printf '%T@ %p\0' | sort -znr)
   4 latest=${latest#* }   # remove timestamp + space

   1 # GNU find (To the nearest 1s, using "find -printf" format (%Ts).)
   2 while IFS= read -rd '' time; do
   3   IFS= read -rd '' 'latest[time]'
   4 done < <(find "$dir" -type f -printf '%Ts\0%p\0')
   5 latest=${latest[-1]}

One disadvantage to these approaches is that the entire list is sorted, whereas simply iterating through the list to find the minimum or maximum timestamp (assuming we want just one file) would be faster. However, depending on the size of the job, the algorithmic disadvantage of sorting may be negligible in comparison to the overhead of using a shell.

   1 # GNU find
   2 unset -v latest time
   3 while IFS= read -r -d '' line; do
   4   t=${line%% *} t=${t%.*}   # truncate fractional seconds
   5   ((t > time)) && { latest=${line#* } time=$t; }
   6 done < <(find "$dir" -type f -printf '%T@ %p\0')

Similar usage patterns work well on many kinds of filesystem meta-data. For example, one can get the largest file in each subdirectory, recursively (see the sketch below). This is a common pattern for performing a calculation on a collection of files in each directory.
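
A minimal sketch of that idea, assuming GNU find and Bash 4 associative arrays (the variable names are illustrative):

    # For each directory, remember the largest file seen so far.
    declare -A biggest size
    while IFS= read -r -d '' entry; do
      s=${entry%% *} f=${entry#* } d=${f%/*}
      if (( s > ${size[$d]:--1} )); then
        size[$d]=$s
        biggest[$d]=$f
      fi
    done < <(find "$dir" -type f -printf '%s %p\0')

    for d in "${!biggest[@]}"; do
      printf '%s: %s (%s bytes)\n' "$d" "${biggest[$d]}" "${size[$d]}"
    done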

Readers who are asking this question in order to rotate their log files may wish to look into logrotate(1) instead, if their operating system provides it.


CategoryShell

4. How can I check whether a directory is empty or not? How do I check for any *.mpg files, or count how many there are?

In Bash, you can count files safely and easily with the nullglob and dotglob options (which change the behaviour of globbing), and an array:

   1 # Bash
   2 shopt -s nullglob dotglob
   3 files=(*)
   4 (( ${#files[*]} )) || echo directory is empty
   5 shopt -u nullglob dotglob

See ArithmeticExpression for explanations of arithmetic commands.

Of course, you can use any glob you like instead of *. E.g. *.mpg or /my/music/*.mpg works fine.

Bear in mind that you need read permission on the directory, or it will always appear empty.

Some people dislike nullglob because having unmatched globs vanish altogether confuses programs like ls. Mistyping ls *.zip as ls *.zpi may cause every file to be displayed (for such cases consider setting failglob). Setting nullglob in a SubShell avoids accidentally changing its setting in the rest of the shell, at the price of an extra fork(). If you'd like to avoid having to set and unset shell options, you can pour it all into a SubShell:

   1 # Bash
   2 if (shopt -s nullglob dotglob; f=(*); ((! ${#f[@]}))); then
   3     echo "The current directory is empty."
   4 fi

The other disadvantage of this approach (besides the extra fork()) is that the array is lost when the subshell exits. If you planned to use those filenames later, then they have to be retrieved all over again.

Both of these examples expand a glob and store the resulting filenames into an array, and then check whether the number of elements in the array is 0. If you actually want to see how many files there are, just print the array's size instead of checking whether it's 0:

   1 # Bash
   2 shopt -s nullglob dotglob
   3 files=(*)
   4 echo "The current directory contains ${#files[@]} things."

You can also avoid the nullglob if you're OK with putting a non-existing filename in the array should no files match (instead of an empty array):

   1 # Bash
   2 shopt -s dotglob
   3 files=(*)
   4 if [[ -e ${files[0]} || -L ${files[0]} ]]; then
   5     echo "The current directory is not empty.  It contains:"
   6     printf '%s\n' "${files[@]}"
   7 fi

Without nullglob, if there are no files in the directory, the glob will be added as the only element in the array. Since * is a valid filename, we can't simply check whether the array contains a literal *. So instead, we check whether the thing in the array exists as a file. The -L test is required because -e fails if the first file is a dangling symlink.

If you don't care how many matching files there are and don't want to store the results in an array, you can use bash's compgen command. Unfortunately, due to a bug, you need to use a hack to make it recognize dotglob:

   1 # Bash
   2 if (shopt -s dotglob; : *; compgen -G '*' >/dev/null); then
   3     echo "The current directory is not empty."
   4 else
   5     echo "The current directory is empty."
   6 fi

Or you can use an extended glob:

   1 # Bash
   2 # The subshell may be avoided by enabling extglob for the whole script.
   3 # Doing so should be safe.
   4 if (shopt -s extglob; compgen -G '@(*|.[!.]*|..?*)' >/dev/null); then
   5     echo "The current directory is not empty."
   6 else
   7     echo "The current directory is empty."
   8 fi

You may also use failglob:

   1 # Bash
   2 if ( shopt -s dotglob failglob; : ./* ) 2>/dev/null; then
   3     echo "The current directory is not empty."
   4 else
   5     echo "The current directory is empty."
   6 fi

But if you use failglob, note that the subshell is required. The following code does not work: failglob raises a shell error that causes bash to stop running the current command (including the if command, any outer compound command, and the entire function that ran this code, if it is part of a function). As a result, this only works in the true case; the else branch will never run:

   1 # BROKEN!
   2 shopt -s dotglob failglob
   3 if { : ./* ;} 2> /dev/null; then
   4     echo "The current directory is not empty."
   5 else
   6     echo "The current directory is empty."
   7 fi

If you really want to avoid using the subshell and want to set failglob globally, you can either "catch" the shell error using command eval, or you can write a function that expands the glob indirectly:

   1 shopt -s dotglob failglob
   2 if command eval ': ./*' 2> /dev/null; then
   3     echo "The current directory is not empty."
   4 else
   5     echo "The current directory is empty."
   6 fi
   7 # or
   8 shopt -s dotglob failglob
   9 any_match () { local IFS=; { : $@ ;} 2> /dev/null ;}
  10 if any_match './*'; then
  11     echo "The current directory is not empty."
  12 else
  13     echo "The current directory is empty."
  14 fi

If your script needs to run with various non-Bash shell implementations, you can try using an external program like python, perl, or find; or you can try one of these. Note the "magic 3 globs", as POSIX does not have the dotglob option.

   1 # POSIX
   2 # Clobbers the positional parameters, so make sure you don't need them.
   3 set -- * .[!.]* ..?*
   4 for f in "$@"; do
   5   if test -e "$f" || test -L "$f"; then
   6     echo "directory is non-empty"
   7     break
   8   fi
   9 done

At this stage, the positional parameters have been loaded with the contents of the directory, and can be used for processing.

If you just want to count files:

   1 # POSIX
   2 n=0
   3 for f in * .[!.]* ..?*; do
   4   if test -e "$f" || test -L "$f"; then n=$((n+1)); fi
   5 done
   6 printf "There are %d files.\n" "$n"

In the Bourne shell, it's even worse, because there is no test -e or test -L:

   1 # Bourne
   2 # (Of course, the system must have printf(1).)
   3 if test "`printf '%s %s %s' .* *`" = '. .. *' && test ! -f '*'
   4 then
   5     echo "directory is empty"
   6 fi

Of course, that fails if * exists as something other than a plain file (such as a directory or FIFO). The absence of a -e test really hurts.

Here is another solution using find:

   1 # POSIX
   2 # Print a single `.' for each file and count the number of characters printed.
   3 # This one will recurse.  If that is not desired, see below.
   4 n=$(find . -type f -exec printf %.0s. {} + | wc -m)
   5 printf "There are %d files.\n" "$n"

If you want it not to recurse, then you need to tell find not to recurse into directories. This gets really tricky and ugly. GNU find has a -maxdepth option to do it. With standard POSIX find, you're stuck with -prune. This is left as an exercise for the reader.
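
With GNU find, a non-recursive count might look like this (a sketch; it counts hidden files too, and -type f keeps . itself out of the count):

    # GNU find only: -maxdepth is not specified by POSIX.
    n=$(find . -maxdepth 1 -type f -exec printf %.0s. {} + | wc -m)
    printf "There are %d files.\n" "$n"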

Never try to parse ls output. Even ls -A solutions can break (e.g. on HP-UX, if you are root, ls -A does the exact opposite of what it does if you're not root -- and no, I can't make up something that incredibly stupid).

In fact, one may wish to avoid the direct question altogether. Usually people want to know whether a directory is empty because they want to do something involving the files therein, etc. Look to the larger question. For example, one of these find-based examples may be an appropriate solution:

   1 # Bourne / POSIX
   2 find "$somedir" -type f -exec echo Found unexpected file {} \;
   3 find "$somedir" -prune -empty -exec printf '%s is empty.\n' {} \;  # GNU/BSD
   4 find "$somedir" -type d -empty -exec cp /my/configfile {} \;   # GNU/BSD

Most commonly, all that's really needed is something like this:

   1 # Bourne / POSIX
   2 for f in ./*.mpg; do
   3     test -f "$f" || continue
   4     mympgviewer "$f"
   5 done

In other words, the person asking the question may have thought an explicit empty-directory test was needed to avoid an error message like mympgviewer: ./*.mpg: No such file or directory when in fact no such test is required.

Support for a nullglob-like feature is inconsistent. In ksh93 it can be done on a per-pattern basis by prefixing with ~(N):

   1 # ksh93
   2 for f in ~(N)*; do
   3     ....
   4 done


CategoryShell

5. How can I use array variables?

This answer assumes you have a basic understanding of what arrays are. If you're new to this kind of programming, you may wish to start with the guide's explanation. This page is more thorough. See links at the bottom for more resources.

5.1. Intro

One-dimensional integer-indexed arrays are implemented by Bash, Zsh, and most KornShell varieties including AT&T ksh88 or later, mksh, and pdksh. Arrays are not specified by POSIX and not available in legacy or minimalist shells such as BourneShell and Dash. The POSIX-compatible shells that do feature arrays mostly agree on their basic principles, but there are some significant differences in the details. Advanced users of multiple shells should be sure to research the specifics. Ksh93, Zsh, and Bash 4.0 additionally have Associative Arrays (see also FAQ 6). This article focuses on indexed arrays as they are the most common type.

Basic syntax summary (for bash, math indexed arrays):

||a=(word1 word2 "$word3" ...) ||Initialize an array from a word list, indexed starting with 0 unless otherwise specified. ||
||a=(*.png *.jpg) ||Initialize an array with filenames. ||
||a[i]=word ||Set one element to word, evaluating the value of i in a math context to determine the index. ||
||a[i+1]=word ||Set one element, demonstrating that the index is also a math context. ||
||a[i]+=suffix ||Append suffix to the previous value of a[i] (bash 3.1). ||
||a+=(word ...)  # append<<BR>>a+=([3]=word3 word4 [i]+=word_i_suffix)  # modify (ormaaj example) ||Modify an existing array without unsetting it, indexed starting at one greater than the highest indexed element unless otherwise specified (bash 3.1). ||
||unset 'a[i]' ||Unset one element. Note the mandatory quotes (a[i] is a valid glob). ||
||"${a[i]}" ||Reference one element. ||
||"$(( a[i] + 5 ))" ||Reference one element, in a math context. ||
||"${a[@]}" ||Expand all elements as a list of words. ||
||"${!a[@]}" ||Expand all indices as a list of words (bash 3.0). ||
||"${a[*]}" ||Expand all elements as a single word, with the first char of IFS as separator. ||
||"${#a[@]}" ||Number of elements (size, length). ||
||"${a[@]:start:len}" ||Expand a range of elements as a list of words, cf. string range. ||
||"${a[@]#trimstart}" "${a[@]%trimend}" "${a[@]//search/repl}" etc. ||Expand all elements as a list of words, with modifications applied to each element separately. ||
||declare -p a ||Show/dump the array, in a bash-reusable form. ||
||mapfile -t a < stream ||Initialize an array from a stream (bash 4.0). ||
||readarray -t a < stream ||Same as mapfile. ||
||"$a" ||Same as "${a[0]}". Does NOT expand to the entire array. This usage is considered confusing at best, but is usually a bug. ||

Here is a typical usage pattern featuring an array named host:

   1 # Bash
   2 
   3 # Assign the values "mickey", "minnie", and "goofy" to sequential indexes starting with zero.
   4 host=(mickey minnie goofy)
   5 
   6 # Iterate over the indexes of "host".
   7 for idx in "${!host[@]}"; do
   8     printf 'Host number %d is %s\n' "$idx" "${host[idx]}"
   9 done

"${!host[@]}" expands to the indices of of the host array, each as a separate word.

Indexed arrays are sparse, and elements may be inserted and deleted out of sequence.

   1 # Bash/ksh
   2 
   3 # Simple assignment syntax.
   4 arr[0]=0
   5 arr[2]=2
   6 arr[1]=1
   7 arr[42]='what was the question?'
   8 
   9 # Unset the second element of "arr"
  10 unset -v 'arr[2]'
  11 
  12 # Concatenate the values, to a single argument separated by spaces, and echo the result.
  13 echo "${arr[*]}"
  14 # outputs: "0 1 what was the question?"

It is good practice to write your code in such a way that it can handle sparse arrays, even if you think you can guarantee that there will never be any "holes". Only treat arrays as "lists" if you're certain there are no holes and the savings in complexity are significant enough to justify it.

5.2. Loading values into an array

Assigning one element at a time is simple, and portable:

   1 # Bash/ksh
   2 arr[0]=0
   3 arr[42]='the answer'

It's possible to assign multiple values to an array at once, but the syntax differs across shells. Bash supports only the arrName=(args...) syntax. ksh88 supports only the set -A arrName -- args... syntax. ksh93, mksh, and zsh support both. There are subtle differences in both methods between all of these shells if you look closely.

   1 # Bash, ksh93, mksh, zsh
   2 array=(zero one two three four)

   1 # ksh88/93, mksh, zsh
   2 set -A array -- zero one two three four

When initializing in this way, the first index will be 0 unless a different index is specified.

With compound assignment, the space between the parentheses is evaluated in the same way as the arguments to a command, including pathname expansion and WordSplitting. Any type of expansion or substitution may be used. All the usual quoting rules apply within.

   1 # Bash/ksh93
   2 oggs=(*.ogg)

With ksh88-style assignment using set, the arguments are just ordinary arguments to a command.

   1 # Korn
   2 set -A oggs -- *.ogg

   1 # Bash (brace expansion requires 3.0 or higher)
   2 homeDirs=(~{,root}) # brace expansion occurs in a different order in ksh, so this is bash-only.
   3 letters=({a..z})    # Not all shells with sequence-expansion can use letters.

   1 # Korn
   2 set -A args -- "$@"

5.2.1. Loading lines from a file or stream

In bash 4, the mapfile command (also known as readarray) accomplishes this:

   1 # Bash 4
   2 mapfile -t lines <myfile
   3 
   4 # or
   5 mapfile -t lines < <(some command)

See ProcessSubstitution and FAQ #24 for more details on the <(...) syntax.

mapfile handles blank lines by inserting them as empty array elements, and (with -t) also silently appends a missing final newline if the input stream lacks one. These can be problematic when reading data in other ways (see the next section). mapfile in bash 4.0 through 4.3 does have one serious drawback: it can only handle newlines as line terminators. Bash 4.4 adds the -d option to supply a different line delimiter.

When mapfile isn't available, we have to work very hard to try to duplicate it. There are a great number of ways to almost get it right, but many of them fail in subtle ways.

The following examples will duplicate most of mapfile's basic functionality in older shells. You can skip all of these alternative examples if you have bash 4.

   1 # Alternative: Bash 3.1, Ksh93, mksh
   2 unset -v lines
   3 while IFS= read -r; do
   4     lines+=("$REPLY")
   5 done <file
   6 [[ $REPLY ]] && lines+=("$REPLY")

The += operator, when used together with parentheses, appends the element to one greater than the current highest numbered index in the array.

   1 # Alternative: ksh88
   2 # Ksh88 doesn't support pre/post increment/decrement. mksh and others do.
   3 i=0
   4 unset -v lines
   5 while IFS= read -r; do
   6     lines[i+=1,$i]=$REPLY     # Mimics lines[i++]=$REPLY
   7 done <file
   8 [[ $REPLY ]] && lines[i]=$REPLY

The square brackets create a math context. The result of the expression is the index used for assignment.

5.2.1.1. Handling newlines (or lack thereof) at the end of a file

read returns false when it reads the last line of a file. This presents a problem: if the file contains a trailing newline, then read will be false when reading/assigning that final line, otherwise, it will be false when reading/assigning the last line of data. Without a special check for these cases, no matter what logic is used, you will always end up either with an extra blank element in the resulting array, or a missing final element.

To be clear - text files should contain a newline as the last character in the file. Newlines are added to the ends of files by most text editors, and also by Here documents and Here strings. Most of the time, this is only an issue when reading output from pipes or process substitutions, or from "broken" text files created with broken or misconfigured tools. Let's look at some examples.

This approach reads the elements one by one, using a loop.

   1 # Doesn't work correctly!
   2 unset -v arr i
   3 while IFS= read -r 'arr[i++]'; do
   4     :
   5 done < <(printf '%s\n' {a..d})

Unfortunately, if the file or input stream contains a trailing newline, a blank element is added at the end of the array, because the read -r arr[i++] is executed one extra time after the last line containing text before returning false.

   1 # Still doesn't work correctly!
   2 unset -v arr i
   3 while read -r; do
   4     arr[i++]=$REPLY
   5 done < <(printf %s {a..c}$'\n' d)

The square brackets create a math context. Inside them, i++ works as a C programmer would expect (in all but ksh88).

This approach fails in the reverse case - it correctly handles blank lines and inputs terminated with a newline, but fails to record the last line of input if the file or stream is missing its final newline. So we need to handle that case specially:

   1 # Alternative: Bash, ksh93, mksh
   2 unset -v arr i
   3 while IFS= read -r; do
   4     arr[i++]=$REPLY
   5 done <file
   6 [[ $REPLY ]] && arr[i++]=$REPLY # Append unterminated data line, if there was one.

This is very close to the "final solution" we gave earlier -- handling both blank lines inside the file, and an unterminated final line. The null IFS is used to prevent read from stripping possible whitespace from the beginning and end of lines, in the event you wish to preserve them.

Another workaround is to remove the empty element after the loop:

   1 # Alternative: Bash
   2 unset -v arr i
   3 while IFS= read -r 'arr[i++]'; do
   4     :
   5 done <file
   6 
   7 # Remove trailing empty element, if any.
   8 [[ ${arr[i-1]} ]] || unset -v 'arr[--i]'

Whether you prefer to read too many and then have to remove one, or read too few and then have to add one, is a personal choice.

NOTE: it is necessary to quote the 'arr[i++]' passed to read, so that the square brackets aren't interpreted as globs. This is also true for other non-keyword builtins that take a subscripted variable name, such as let and unset.

5.2.1.2. Other methods

Sometimes stripping blank lines actually is desirable, or you may know that the input will always be newline delimited, such as input generated internally by your script. It is possible in some shells to use the -d flag to set read's line delimiter to null, then abuse the -a or -A (depending on the shell) flag normally used for reading the fields of a line into an array for reading lines. Effectively, the entire input is treated as a single line, and the fields are newline-delimited.

   1 # Bash 4
   2 IFS=$'\n' read -rd '' -a lines <file

   1 # mksh, zsh
   2 IFS=$'\n' read -rd '' -A lines <file

5.2.1.3. Don't read lines with for!

Never read lines using for..in loops! Relying on IFS WordSplitting causes issues if you have repeated whitespace delimiters, because they will be consolidated. It is not possible to preserve blank lines by having them stored as empty array elements this way. Even worse, special globbing characters in the data will be expanded unless you go out of your way to disable globbing and then re-enable it. Just never use this approach - it is problematic, the workarounds are all ugly, and not all of the problems are solvable.

5.2.2. Reading NUL-delimited streams

If you are trying to deal with records that might have embedded newlines, you will be using an alternative delimiter such as the NUL character ( \0 ) to separate the records. In bash 4.4, you can simply use mapfile -t -d '':

   1 # Bash 4.4
   2 mapfile -t -d '' files < <(find . -name '*.ugly' -print0)

Otherwise, you'll need to use the -d argument to read inside a loop:

   1 # Bash
   2 while read -rd ''; do
   3     arr[i++]=$REPLY
   4 done < <(find . -name '*.ugly' -print0)
   5 
   6 # or (bash 3.1 and up)
   7 while read -rd ''; do
   8     arr+=("$REPLY")
   9 done < <(find . -name '*.ugly' -print0)

read -d '' tells Bash to keep reading until a NUL byte instead of until a newline. This isn't certain to work in all shells with a -d feature.

If you choose to give a variable name to read instead of using REPLY then also be sure to set IFS= for the read command, to avoid trimming leading/trailing IFS whitespace.
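
For example, a variant of the loop above that reads into a named variable might look like this:

    # Bash; IFS= prevents trimming of leading/trailing whitespace in the names.
    while IFS= read -rd '' file; do
        arr+=("$file")
    done < <(find . -name '*.ugly' -print0)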

5.2.3. Appending to an existing array

As previously mentioned, arrays are sparse - that is, numerically adjacent indexes are not guaranteed to be occupied by a value. This confuses what it means to "append" to an existing array. There are several approaches.

If you've been keeping track of the highest-numbered index with a variable (for example, as a side-effect of populating an array in a loop), and can guarantee it's correct, you can just use it and continue to ensure it remains in-sync.

   1 # Bash/ksh93
   2 arr[++i]="new item"

If you don't want to keep an index variable, but happen to know that your array is not sparse, then you can use the number of elements to calculate the offset (not recommended):

   1 # Bash/ksh
   2 # This will FAIL if the array has holes (is sparse).
   3 arr[${#arr[@]}]="new item"

If you don't know whether your array is sparse or not, but don't mind re-indexing the entire array (very inefficient), then you can use:

   1 # Bash
   2 arr=("${arr[@]}" "new item")
   3 
   4 # Ksh
   5 set -A arr -- "${arr[@]}" "new item"

If you're in bash 3.1 or higher, then you can use the += operator:

   1 # Bash 3.1, ksh93, mksh, zsh
   2 arr+=(item 'another item')

NOTE: the parentheses are required, just as when assigning to an array. Otherwise you will end up appending to ${arr[0]} which $arr is a synonym for. If your shell supports this type of appending, it is the preferred method.

For examples of using arrays to hold complex shell commands, see FAQ #50 and FAQ #40.

5.3. Retrieving values from an array

${#arr[@]} or ${#arr[*]} expand to the number of elements in an array:

   1 # Bash
   2 shopt -s nullglob
   3 oggs=(*.ogg)
   4 echo "There are ${#oggs[@]} Ogg files."

Single elements are retrieved by index:

   1 echo "${foo[0]} - ${bar[j+1]}"

The square brackets are a math context. Within an arithmetic context, variables, including arrays, can be referenced by name. For example, in the expansion:

   1 ${arr[x[3+arr[2]]]}

arr's index will be the value from the array x whose index is 3 plus the value of arr[2].
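
A small worked example (the values are chosen arbitrarily):

# Bash
x=(0 0 0 0 0 5)        # x[5] is 5
arr=(a b 2 c d e)      # arr[2] is 2
echo "${arr[x[3+arr[2]]]}"
# arr[2] is 2, so the x index is 3+2=5; x[5] is 5; arr[5] is "e" -- prints e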

Using array elements en masse is one of the key features of shell arrays. In exactly the same way that "$@" is expanded for positional parameters, "${arr[@]}" is expanded to a list of words, one array element per word. For example,

   1 # Korn/Bash
   2 for x in "${arr[@]}"; do
   3   echo "next element is '$x'"
   4 done

This works even if the elements contain whitespace. You always end up with the same number of words as you have array elements.

If one simply wants to dump the full array, one element per line, this is the simplest approach:

   1 # Bash/ksh
   2 printf "%s\n" "${arr[@]}"

For slightly more complex array-dumping, "${arr[*]}" will cause the elements to be concatenated together, with the first character of IFS (or a space if IFS isn't set) between them. As it happens, "$*" is expanded the same way for positional parameters.

   1 # Bash
   2 arr=(x y z)
   3 IFS=/; echo "${arr[*]}"; unset -v IFS
   4 # prints x/y/z

Unfortunately, you can't put multiple characters in between array elements using that syntax. You would have to do something like this instead:

   1 # Bash/ksh
   2 arr=(x y z)
   3 tmp=$(printf "%s<=>" "${arr[@]}")
   4 echo "${tmp%<=>}"    # Remove the extra <=> from the end.
   5 # prints x<=>y<=>z

Or using array slicing, described in the next section.

   1 # Bash/ksh
   2 typeset -a a=([0]=x [5]=y [10]=z)
   3 printf '%s<=>' "${a[@]::${#a[@]}-1}"
   4 printf '%s\n' "${a[@]:(-1)}"

This also shows how sparse arrays can be assigned multiple elements at once. Note that the arr=([key]=value ...) notation differs between shells. In ksh93, this syntax gives you an associative array by default unless you specify otherwise, and it requires that every value be given an explicit index; in bash, omitted indexes simply continue counting up from the previous index. This example was written in a way that's compatible between the two.
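
For example, in bash, omitted indexes continue from the last one given:

# Bash
a=([3]=x y z)
echo "${!a[@]}"    # prints: 3 4 5
echo "${a[5]}"     # prints: z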

BASH 3.0 added the ability to retrieve the list of index values in an array:

   1 # Bash 3.0 or higher
   2 arr=(0 1 2 3) arr[42]='what was the question?'
   3 unset -v 'arr[2]'
   4 echo "${!arr[@]}"
   5 # prints 0 1 3 42

Retrieving the indices is extremely important for certain kinds of tasks, such as maintaining parallel arrays with the same indices (a cheap way to mimic having an array of structs in a language with no struct):

   1 # Bash 3.0 or higher
   2 unset -v file title artist i
   3 for f in ./*.mp3; do
   4   file[i]=$f
   5   title[i]=$(mp3info -p %t "$f")
   6   artist[i++]=$(mp3info -p %a "$f")
   7 done
   8 
   9 # Later, iterate over every song.
  10 # This works even if the arrays are sparse, just so long as they all have
  11 # the SAME holes.
  12 for i in "${!file[@]}"; do
  13   echo "${file[i]} is ${title[i]} by ${artist[i]}"
  14 done

5.3.1. Retrieving with modifications

Bash's Parameter Expansions may be performed on array elements en masse:

   1 # Bash
   2 arr=(abc def ghi jkl)
   3 echo "${arr[@]#?}"          # prints bc ef hi kl
   4 echo "${arr[@]/[aeiou]/}"   # prints bc df gh jkl

Parameter Expansion can also be used to extract sub-lists of elements from an array. Some people call this slicing:

   1 # Bash
   2 echo "${arr[@]:1:3}"        # three elements starting at #1 (second element)
   3 echo "${arr[@]:(-2)}"       # last two elements

The same goes for positional parameters

   1 set -- foo bar baz
   2 echo "${@:(-1)}"            # last positional parameter baz
   3 echo "${@:(-2):1}"          # second-to-last positional parameter bar

5.4. Using @ as a pseudo-array

As we see above, the @ array (the array of positional parameters) can be used almost like a regularly named array. This is the only array available for use in POSIX or Bourne shells. It has certain limitations: you cannot individually set or unset single elements, and it cannot be sparse. Nevertheless, it still makes certain POSIX shell tasks possible that would otherwise require external tools:

   1 # POSIX
   2 set -- *.mp3
   3 if [ -e "$1" ] || [ -L "$1" ]; then
   4   echo "there are $# MP3 files"
   5 else
   6   echo "there are 0 MP3 files"
   7 fi

   1 # POSIX
   2 ...
   3 # Add an option to our dynamically generated list of options
   4 set -- "$@" -f "$somefile"
   5 ...
   6 foocommand "$@"

(Compare to FAQ #50's dynamically generated commands using named arrays.)

6. See Also


CategoryShell

7. How can I use variable variables (indirect variables, pointers, references) or associative arrays?

This is a complex page, because it's a complex topic. It's been divided into roughly three parts: associative arrays, evaluating indirect variables, and assigning indirect variables. There are discussions of programming issues and concepts scattered throughout.

7.1. Associative Arrays

We introduce associative arrays first, because we observe that inexperienced programmers often conjure solutions to problems that would most typically utilize associative arrays by attempting to dynamically generate variables in a Hungarian notation scheme (in order to coerce the symbol table hashing function into resolving user-defined associative mappings).

An associative array is an unordered collection of key-value pairs. A value may be retrieved by supplying its corresponding key. Since strings are the only datatype most shells understand, associative arrays map strings to strings, unlike indexed arrays, which map integers to strings. Associative arrays exist in AWK as "associative arrays", in Perl as "hashes", in Tcl as "arrays", in Python and C# as "dictionaries", in Java as a "Map", and in the C++11 STL as std::unordered_map.

   1 # Bash 4 / ksh93
   2 
   3 typeset -A homedir    # Declare associative array
   4 homedir=(             # Compound assignment
   5     [jim]=/home/jim
   6     [silvia]=/home/silvia
   7     [alex]=/home/alex
   8 )
   9 
  10 homedir[ormaaj]=/home/ormaaj # Ordinary assignment adds another single element
  11 
  12 for user in "${!homedir[@]}"; do   # Enumerate all indices (user names)
  13     printf 'Home directory of user %q is: %q\n' "$user" "${homedir[$user]}"
  14 done

Prior to Bash 4 or if you can't use ksh93, your options are limited. Either move to another interpreter (awk, perl, python, ruby, tcl, ...) or re-evaluate your problem to simplify it. There are certain tasks for which associative arrays are a powerful and completely appropriate tool. There are others for which they are overkill, or simply unsuitable.

Suppose we have several subservient hosts with slightly different configuration, and that we want to ssh to each one and run slightly different commands. One way we could set it up would be to hard-code a bunch of ssh commands in per-hostname functions in a single script and just run them in series or in parallel. (Don't reject this out of hand! Simple is good.) Another way would be to store each group of commands as an element of an associative array keyed by the hostname:

   1 declare -A commands
   2 commands=(
   3   [host1]="mvn clean install && cd webapp && mvn jetty:run"
   4   [host2]="..."
   5 )
   6 
   7 for host in "${!commands[@]}"; do
   8     ssh -- "$host" "${commands[$host]}"
   9 done

This is the kind of approach we'd expect in a high-level language, where we can store hierarchical information in advanced data structures. The difficulty here is that we really want each element of the associative array to be a list or another array of command strings. But the shell simply doesn't permit that kind of data structure.

So, often it pays to step back and think in terms of shells rather than other programming languages. Aren't we just running a script on a remote host? Then why don't we just store the configuration sets as scripts? Then it's simple:

   1 # A series of conf files named for the hosts we need to run our commands on:
   2 for conf in /etc/myapp/*; do
   3     host=${conf##*/}
   4     ssh -- "$host" bash < "$conf"
   5 done
   6 
   7 # /etc/myapp/hostname is just a script:
   8 mvn clean install &&
   9 cd ./webapp &&
  10 mvn jetty:run

Now we've removed the need for associative arrays, and also the need to maintain a bunch of extremely horrible quoting issues. It is also easy to parallelize using GNU Parallel:

   1 parallel ssh -- {/} bash "<" {} ::: /etc/myapp/*

7.1.1. Associative array hacks in older shells

Before you think of using eval to mimic associative arrays in an older shell (probably by creating a set of variable names like homedir_alex), try to think of a simpler or completely different approach that you could use instead. If this hack still seems to be the best thing to do, consider the following disadvantages:

  1. It's really hard to read, to keep track of, and to maintain.
  2. The variable names must be a single line and match the RegularExpression ^[a-zA-Z_][a-zA-Z_0-9]*$ -- i.e., a variable name cannot contain arbitrary characters but only letters, digits, and underscores. We cannot have a variable's name contain Unix usernames, for instance -- consider a user named hong-hu. A dash '-' cannot be part of a variable name, so the entire attempt to make a variable named homedir_hong-hu is doomed from the start.

  3. Quoting is hard to get right. If a content string (not a variable name) can contain whitespace characters and quotes, it's hard to quote it right to preserve it through both shell parsings. And that's just for constants, known at the time you write the program. (Bash's printf %q helps, but nothing analogous is available in POSIX shells.)

  4. If the program handles unsanitized user input, it can be VERY dangerous!

Read BashGuide/Arrays or BashFAQ/005 for a more in-depth description and examples of how to use arrays in Bash.

If you need an associative array but your shell doesn't support them, please consider using AWK instead.

7.2. Indirection

7.2.1. Think before using indirection

Putting variable names or any other bash syntax inside parameters is frequently done incorrectly and in inappropriate situations to solve problems that have better solutions. It violates the separation between code and data, and as such puts you on a slippery slope toward bugs and security issues. Indirection can make your code less transparent and harder to follow.

Normally, in bash scripting, you won't need indirect references at all. Generally, people look at this for a solution when they don't understand or know about Bash Arrays or haven't fully considered other Bash features such as functions.

7.2.2. Evaluating indirect/reference variables

BASH allows for expanding parameters indirectly -- that is, one variable may contain the name of another variable. Name reference variables are the preferred method for performing variable indirection. Older versions of Bash could also use a ! prefix operator in parameter expansions for variable indirection. Namerefs should be used unless portability to older bash versions is required. No other shell uses ${!variable} for indirection and there are problems relating to use of that syntax for this purpose. It is also less flexible.

   1 # Bash
   2 realvariable=contents
   3 ref=realvariable
   4 printf '%s\n' "${!ref}"   # prints the contents of the real variable

KornShell (ksh93) has a completely different, more powerful syntax -- the nameref command (also known as typeset -n):

   1 # ksh93 / mksh / Bash 4.3
   2 realvariable=contents
   3 typeset -n ref=realvariable
   4 printf '%s\n' "${!ref} = $ref"      # prints the name and contents of the real variable

Zsh allows you to access a parameter indirectly with the parameter expansion flag P:

   1 # zsh
   2 realvariable=contents
   3 ref=realvariable
   4 echo ${(P)ref}   # prints the contents of the real variable

Unfortunately, for shells other than Bash, ksh93, and zsh there is no syntax for evaluating a referenced variable. You would have to use eval, which means you would have to undergo extreme measures to sanitize your data to avoid catastrophe.

It's difficult to imagine a practical use for this that wouldn't be just as easily performed by using an associative array. But people ask it all the time (it is genuinely a frequently asked question).

ksh93's nameref allows us to work with references to arrays, as well as regular scalar variables. For example,

   1 # ksh93 / mksh / bash
   2 function myfunc {
   3   typeset -n ref=$1
   4   printf 'Array %s has %d elements.\n' "${!ref}" "${#ref[@]}"
   5 }
   6 
   7 realarray=(...)
   8 myfunc realarray

zsh's ability to nest parameter expansions allow for referencing arrays too:

   1 # zsh
   2 myfunc() {
   3  local ref=$1
   4  echo "array $1 has ${#${(@P)ref}} elements"
   5 }
   6 realarray=(...)
   7 myfunc realarray

We are not aware of any trick that can duplicate that functionality in POSIX or Bourne shells without eval, which can be difficult to do securely. Older versions of Bash can almost do it -- some indirect array tricks work, and others do not, and we do not know whether the syntax involved will remain stable in future releases. So, consider this a use at your own risk hack.

   1 # Bash -- trick #1.  Works in bash 2 and up, and ksh93v+ (when invoked as bash)
   2 realarray=(...) ref=realarray; index=2
   3 tmp=${ref}[index]
   4 echo "${!tmp}"            # gives array element [2]

   1 # Bash -- trick #2.  Seems to work in bash 3 and up.
   2 # Can't be combined with special expansions until 4.3. e.g. "${!tmp##*/}"
   3 # Does NOT work in bash 2.05b -- Expands to one word instead of three in bash 2.
   4 tmp=${ref}[@]
   5 printf '<%s> ' "${!tmp}"; echo    # Iterate whole array as one word per element.

It is not possible to retrieve array indices directly using the Bash ${!var} indirect expansion.

7.2.3. Assigning indirect/reference variables

Sometimes you'd like to "point" from one variable to another, for purposes of writing information to a dynamically configurable place. Typically this happens when you're trying to write a "reusable" function or library, and you want it to put its output in a variable of the caller's choice instead of the function's choice. (Various traits of Bash make safe reusability of Bash functions difficult at best, so this is something that should not happen often.)

Assigning a value "through" a reference (I'm going to use "ref" from now on) is more widely possible, but the means of doing so are usually extremely shell-specific. All shells with the sole exception of AT&T ksh93 lack real reference variables or pointers. Indirection can only be achieved by indirectly evaluating variable names. IOW, you can never have a real unambiguous reference to an object in memory, the best you can do is use the name of a variable to try simulating the effect. Therefore, you must control the value of the ref and ensure side-effects such as globbing, user-input, and conflicting local parameters can't affect parameter names. Names must either be deterministic or validated in a way that makes certain guarantees. If an end user can populate the ref variable with arbitrary strings, the result can be unexpected code injection. We'll show an example of this at the end.

In ksh93, we can use nameref again:

   1 # ksh93/mksh/Bash 4.3
   2 typeset -n ref=realvariable
   3 ref=contents
   4 # realvariable now contains the string "contents"

In zsh, using parameter expansions ::= and expansion flags P:

   1 # zsh
   2 ref=realvariable
   3 : ${(P)ref::=contents}
   4 # redefines realvariable unconditionally to the string "contents"

In Bash, if you only want to assign a single line to the variable, you can use read and Bash's here string syntax:

   1 # Bash/ksh93/mksh/zsh
   2 ref=realvariable
   3 IFS= read -r -- "$ref" <<<"contents"
   4 # realvariable now contains the string "contents"

If you need to assign multiline values, keep reading.

A similar trick works for Bash array variables too:

   1 # Bash
   2 aref=realarray
   3 IFS=' ' read -d '' -ra "$aref" <<<'words go into array elements'
   4 
   5 # ksh93/mksh/zsh
   6 aref=realarray
   7 IFS=' ' read -d '' -rA "$aref" <<<'words go into array elements'

IFS is used to delimit words, so you may or may not need to set that. Also note that the read command will return failure because there is no terminating NUL byte for the -d '' to catch. Be prepared to ignore that failure.

Another trick is to use Bash's printf -v (only available in recent versions):

   1 # Bash 3.1 or higher ONLY. Array assignments require 4.2 or higher.
   2 ref=realvariable
   3 printf -v "$ref" %s "contents"

You can use all of printf's formatting capabilities. This trick also permits any string content, including embedded and trailing newlines.

Yet another trick is Korn shell's typeset or Bash's declare. The details of typeset vary greatly between shells, but can be used in compatible ways in limited situations. Both of them cause a variable to become locally scoped to a function, if used inside a function; but if used outside all functions, they can operate on global variables.

   1 # Bash/ksh (any)/zsh
   2 typeset -- "${ref}=contents"
   3 
   4 # Bash
   5 declare -- "${ref}=contents"

Bash 4.2 adds declare -g which assigns variables to the global scope from any context.
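
A minimal sketch of how declare -g changes things inside a function (Bash 4.2 or higher; the variable names are arbitrary):

# Bash 4.2 or higher
ref=realvariable
f() {
    declare -g -- "${ref}=contents"    # assigns in the global scope, not as a local
}
f
echo "$realvariable"    # prints: contents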

There is very little advantage to typeset or declare over eval for scalar assignments, but there are many drawbacks. In particular, typeset cannot be prevented from affecting the scope of the assigned variable (inside a function, the variable becomes local). Like eval, this trick does preserve the exact contents, provided they are correctly escaped.

You must still be careful about what is on the left-hand side of the assignment. Inside square brackets, expansions are still performed; thus, with a tainted ref, declare or printf -v can be just as dangerous as eval:

   1 # Bash:
   2 ref='x[$(touch evilfile)0]'
   3 ls -l evilfile   # No such file or directory
   4 declare "${ref}=value"
   5 ls -l evilfile   # It exists now!
   6 rm evilfile # Now it's gone.
   7 printf -v "$ref" %s "value"
   8 ls -l evilfile   # It came back!

This problem also exists with typeset in mksh and pdksh, but apparently not ksh93. This is why the value of ref must be under your control at all times.

If you aren't using Bash or Korn shell, you can do assignments to referenced variables using HereDocument syntax:

   1 # Bourne
   2 ref=realvariable
   3 IFS= read -r -- "$ref" <<'EOF'
   4 contents
   5 EOF

(Alas, read without -d means we're back to only getting at most one line of content. This is the most portable trick, but it's limited to single-line content.)

Remember that when using a here document, if the sentinel word (EOF in our example) is unquoted, then parameter expansions will be performed inside the body. If the sentinel is quoted, then parameter expansions are not performed. Use whichever is more convenient for your task.
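
A small illustration of the difference (the variable names here are arbitrary):

# Bourne
ref=realvariable
name=World

# Unquoted sentinel: $name is expanded inside the body.
IFS= read -r -- "$ref" <<EOF
Hello, $name
EOF
# realvariable now contains "Hello, World"

# Quoted sentinel: the body is taken literally.
IFS= read -r -- "$ref" <<'EOF'
Hello, $name
EOF
# realvariable now contains "Hello, $name"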

7.2.3.1. eval

   1 # Bourne
   2 ref=myVar
   3 eval "${ref}=\$value"

This expands to the statement that is executed:

   1 myVar=$value

The right-hand side is not parsed by the shell, so there is no danger of unwanted side effects. The drawback, here, is that every single shell metacharacter on the right hand side of the = must be quoted/escaped carefully. In the example shown here, there was only one. In a more complex situation, there could be dozens.

This is very often done incorrectly. Permutations like these are seen frequently all over the web even from experienced users that ought to know better:

   1 eval ${ref}=\"$value\" # WRONG!
   2 eval "$ref='$value'"   # WRONG!
   3 eval "${ref}=\$value"  # Correct (curly braced PE used for clarity)
   4 eval "$ref"'=$value'   # Correct (equivalent)

The good news is that if you can sanitize the right hand side correctly, this trick is fully portable, has no variable scope issues, and allows all content including newlines. The bad news is that if you fail to sanitize the right hand side correctly, you have a massive security hole. Use eval if you know what you're doing and are very careful.

The following code demonstrates how to correctly pass a scalar variable name into a function by reference for the purpose of "returning" a value:

   1 # POSIX
   2 
   3 f() {
   4     # Code goes here that eventually sets the variable "x".
   5     # x=foo
   6 
   7     # Check that the referenced variable name is not empty.  Validating
   8     # it as a valid identifier is left as an exercise for the reader.
   9     if [ -z "$1" ]; then
  10         return 1
  11     fi
  12 
  13     eval "${1}=\$x"
  14 }
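
A usage sketch (the variable name result is arbitrary; f sets x internally and writes it into whatever name it is given):

# POSIX (usage sketch)
f result
printf 'f returned: %s\n' "$result"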

7.3. See Also


CategoryShell

8. Is there a function to return the length of a string?

The fastest way, not requiring external programs (but not usable in Bourne shells):

   1 # POSIX
   2 "${#varname}"

(note that with bash 3 and above, that's the number of characters, not bytes, which is a significant difference in multi-byte locales. The behaviour of other shells in that regard varies).

or for Bourne shells:

   1 # Bourne
   2 expr "x$varname" : '.*' - 1

(expr prints the number of characters or bytes matching the pattern .*, which is the length of the string (in bytes for GNU expr). The x is necessary to avoid problems with $varname values that are expr operators)

or:

   1 # Bourne, with GNU expr(1)
   2 expr length "x$varname" - 1

(BSD/GNU expr only)

This second version is not specified in POSIX, so is not portable across all platforms.

One may also use awk:

   1 # Bourne with POSIX awk
   2 awk  'BEGIN {print length(ARGV[1])}' "$varname"

(There, whether the length is expressed in bytes or characters depends on the implementation: for instance, it's characters for GNU awk, but bytes for mawk.)


Similar needs:

   1 # Korn/Bash
   2 "${#arrayname[@]}"

Expands to the number of elements in an array.

   1 # Korn/Bash
   2 "${#arrayname[i]}"

Expands to the length of the array's element i.


CategoryShell

9. How can I recursively search all files for a string?

If you are on a typical GNU or BSD system, all you need is one of these:

   1 # Recurse and print matching lines (GNU grep):
   2 grep -r -- "$search" .
   3 
   4 # Recurse and print only the filenames (GNU grep):
   5 grep -r -l -- "$search" .

If your grep lacks a -r option, you can use find to do the recursion:

   1 # Portable but slow
   2 find . -type f -exec grep -l -- "$search" {} \;

This command is slower than it needs to be, because find will call grep with only one file name, resulting in many grep invocations (one per file). Since grep accepts multiple file names on the command line, find can be instructed to call it with several file names at once:

   1 # Fast, but requires a recent find
   2 find . -type f -exec grep -l -- "$search" {} +

The trailing '+' character instructs find to call grep with as many file names as possible, saving processes and resulting in faster execution. This example works for POSIX-2008 find, which most current operating systems have, but which may not be available on legacy systems.

Traditional Unix has a helper program called xargs for the same purpose:

   1 # DO NOT USE THIS
   2 find . -type f | xargs grep -l -- "$search"

However, if your filenames contain spaces, quotes or other metacharacters, this will fail catastrophically. BSD/GNU xargs has a -print0 option:

   1 find . -type f -print0 | xargs -0 grep -l -- "$search"

The -print0 / -0 options ensure that any file name can be processed, even one containing blanks, TAB characters, or newlines.


CategoryShell

10. What is buffering? Or, why does my command line produce no output: tail -f logfile | grep 'foo bar' | awk ...

Most standard Unix commands buffer their output when used non-interactively. This means that they don't write each character (or even each line) immediately, but instead collect a larger number of characters (often 4 kilobytes) before printing anything at all. In the case above, the grep command buffers its output, and therefore awk only gets its input in large chunks.

Buffering greatly increases the efficiency of I/O operations, and it's usually done in a way that doesn't visibly affect the user. A simple tail -f from an interactive terminal session works just fine, but when a command is part of a complicated pipeline, the command might not recognize that the final output is needed in (near) real time. Fortunately, there are several techniques available for controlling I/O buffering behavior.

The most important thing to understand about buffering is that it's the writer who's doing it, not the reader.

10.0.1. Eliminate unnecessary commands

In the question, we have the pipeline tail -f logfile | grep 'foo bar' | awk ... (with the actual AWK command being unspecified). There is no problem if we simply run tail -f logfile, because tail -f never buffers its output. Nor is there a problem if we run tail -f logfile | grep 'foo bar' interactively, because grep does not buffer its output if its standard output is a terminal. However, if the output of grep is being piped into something else (such as an AWK command), it starts buffering to improve efficiency.

In this particular example, the grep is actually redundant. We can remove it, and have AWK perform the filtering in addition to whatever else it's doing:

   1 tail -f logfile | awk '/foo bar/ ...'

In other cases, this sort of consolidation may not be possible. But you should always look for the simplest solution first.

10.0.2. Your command may already support unbuffered output

Some programs provide special command line options specifically for this sort of problem:

  • awk (GNU awk, nawk, busybox awk, mawk): use the fflush() function (it will be defined in POSIX Issue 8)

  • awk (mawk): -W interactive

  • find (GNU): use -printf with the \c escape

  • grep (e.g. GNU version 2.5.1): --line-buffered

  • jq: --unbuffered

  • python: -u

  • sed (e.g. GNU version 4.0.6): -u, --unbuffered

  • tcpdump, tethereal: -l

Each command that writes to a pipe would have to be told to disable buffering, in order for the entire pipeline to run in (near) real time. The last command in the pipeline, if it's writing to a terminal, will not typically need any special consideration.
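
For example, the original pipeline could be rewritten so that each stage flushes its output promptly. This is only a sketch, assuming GNU grep and an awk that provides fflush() (the awk body here is just a placeholder):

# Assumes GNU grep and an awk with fflush() (gawk, mawk, nawk, busybox awk)
tail -f logfile | grep --line-buffered 'foo bar' | awk '{ print "matched:", $0; fflush() }'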

10.0.3. Disabling buffering in a C application

If the buffering application is written in C, and is either your own or one whose source you can modify, you can disable the buffering with:

   1 setvbuf(stdout, NULL, _IONBF, 0);

Often, you can simply add this at the top of the main() function, but if the application closes and reopens stdout, or explicitly calls setvbuf() later, you may need to exercise more discretion.

10.0.4. unbuffer

The expect package has an unbuffer program which effectively tricks other programs into always behaving as if they were being used interactively (which may often disable buffering). Here's a simple example:

   1 tail -f logfile | unbuffer grep 'foo bar' | awk ...

expect and unbuffer are not standard POSIX tools, but they may already be installed on your system.

10.0.5. stdbuf

Recent versions of GNU coreutils (from 7.5 onwards) come with a nice utility called stdbuf, which can be used among other things to "unbuffer" the standard output of a command. Here's the basic usage for our example:

   1 tail -f logfile | stdbuf -oL grep 'foo bar' | awk ...

In the above code, -oL makes stdout line buffered; you can even use -o0 to entirely disable buffering. The man and info pages have all the details.

stdbuf is not a standard POSIX tool, but it may already be installed on your system (if you're using a recent GNU/Linux distribution, it will probably be present).

10.0.6. less

If you simply wanted to highlight the search term, rather than filter out non-matching lines, you can use the less program instead of a filtered tail -f:

   1 less program.log

  • Inside less, start a search with the '/' command (similar to searching in vi). Or start less with the -p pattern option.

  • This should highlight any instances of the search term.
  • Now put less into "follow" mode, which by default is bound to shift+f.

  • You should get an unfiltered tail of the specified file, with the search term highlighted.

"Follow" mode is stopped with an interrupt, which is probably control+c on your system. The '/' command accepts regular expressions, so you could do things like highlight the entire line on which a term appears. For details, consult man less.

10.0.7. coproc

If you're using ksh or Bash 4.0+, whatever you're really trying to do with tail -f might benefit from using coproc and fflush() to create a coprocess. Note well that coproc does not itself address buffering issues (in fact it's prone to buffering problems -- hence the reference to fflush). coproc is only mentioned here because whenever someone is trying to continuously monitor and react to a still-growing file (or pipe), they might be trying to do something which would benefit from coprocesses.
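
As a very rough sketch of the idea (Bash 4 or higher only; the awk body is just a placeholder, and fflush() is what keeps the coprocess from buffering its replies):

# Bash 4+ -- minimal coprocess sketch
coproc AWKPROC { awk '{ print "seen:", $0; fflush() }'; }
printf '%s\n' hello >&"${AWKPROC[1]}"     # write a line to the coprocess
IFS= read -r reply <&"${AWKPROC[0]}"      # read its response
printf '%s\n' "$reply"                    # prints: seen: hello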

10.0.8. Further reading


CategoryShell

11. How can I recreate a directory hierarchy structure, without the files?

With the cpio program:

   1 cd -- "$srcdir" &&
   2 find . -type d -print | cpio -dumpv -- "$dstdir"

or with the pax program:

   1 cd -- "$srcdir" &&
   2 find . -type d -print | pax -rwdv -- "$dstdir"

or with GNU tar, and more verbose syntax:

   1 cd -- "$srcdir" &&
   2 find . -type d -print | tar c --files-from - --no-recursion |
   3   tar x --directory "$dstdir"

This creates a list of directory names with find, non-recursively adds just the directories to an archive, and pipes it to a second tar instance to extract it at the target location. As you can see, tar is the least suited to this task, but people just adore it, so it has to be included here to appease the tar fanboy crowd. (Note: you can't even do this at all with a typical Unix tar. Also note: there is no such thing as "standard tar", as both tar and cpio were intentionally omitted from POSIX in favor of pax.)

All the solutions above will fail if directory names contain newline characters. On many modern BSD/GNU systems, at least, they can be trivially modified to cope with that, by using find -print0 together with one of pax -0, cpio -0 or tar --null (check your system documentation to see which of these commands you have, and which extensions are available). If you really don't have access to those options, you can at least use ! -path $'*\n*' -type d -print, or better -name $'*\n*' -prune -o -type d -print (instead of -type d -print), to skip directories whose paths contain newline characters; make sure find is run in the C/POSIX locale so that it also excludes paths containing newline characters or byte sequences that do not form valid characters in the user's locale.
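
For instance, the cpio variant can be adapted like this (a sketch assuming GNU find and GNU cpio, which support -print0 and -0/--null respectively):

cd -- "$srcdir" &&
find . -type d -print0 | cpio -0 -dumpv -- "$dstdir"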

with find

   1 export dstdir
   2 mkdir -p -- "$dstdir" &&
   3 cd -- "$srcdir" &&
   4 find . -type d -exec sh -c \
   5   'cd -- "$dstdir" && mkdir -- "$@"' sh {} +

or with bash 4's globstar

   1 shopt -s globstar nullglob &&
   2 cd -- "$srcdir" && dirs=(**/) && (( ${#dirs[@]} )) &&
   3 cd -- "$dstdir" && mkdir -- "${dirs[@]}"

(though beware that will also copy symlinks to directories as directories; older versions of bash would also traverse symlinks when crawling the directory tree).

or with zsh's recursive globbing and glob qualifiers:

   1 export srcdir dstdir
   2 zsh -ec '
   3 cd -- "$srcdir"
   4 dirs=(**/*(/D))
   5 cd -- "$dstdir"
   6 mkdir -- $dirs'

If you want to create stub files instead of full-sized files, the following is likely to be the simplest solution. The find command recreates the regular files using "dummy" files (empty files with the same timestamps):

   1 cd -- "$srcdir" &&
   2 DSTDIR=$dstdir find . -type f -exec sh -c \
   3   'for i do touch -r "$i" -- "$DSTDIR/$i"; done' sh {} +

If your find can't handle -exec + then you can use \; instead of + at the end of the command. See UsingFind for explanations.


CategoryShell

12. How can I print the n'th line of a file?

One dirty (but not quick) way is:

   1 sed -n "${n}p" "$file"

But this reads the entire file even if only the third line is desired, which can be avoided by using the q command to quit on line $n, and deleting all other lines with the d command:

   1 sed "${n}q;d" "$file"

Another method is to grab lines starting at n, then get the first line of that.

   1 tail -n "+$n" "$file" | head -n 1

Another approach, using AWK:

   1 awk -v n="$n" 'NR==n{print;exit}' "$file"

If more than one line is needed, it's easy to adapt any of the previous methods:

   1 x=3 y=4
   2 sed -n "$x,${y}p;${y}q;" "$file"                # Print lines $x to $y; quit after $y.
   3 head -n "$y" "$file" | tail -n "$((y - x + 1))"   # Same
   4 head -n "$y" "$file" | tail -n "+$x"            # If your tail supports it
   5 awk -v x="$x" -v y="$y" 'NR>=x{print} NR==y{exit}' "$file"        # Same

Or a counter with a simple read loop:

   1 # Bash/ksh
   2 m=0
   3 while ((m++ < n-1)) && read -r _; do   # discard the first n-1 lines
   4     :
   5 done
   6 
   7 head -n 1                              # print what is now the first line: line n

To read into a variable, it is preferable to use read or mapfile rather than an external utility. More than one line can be read into the given array variable or the default array MAPFILE by adjusting the argument to mapfile's -n option:

   1 # Bash4
   2 mapfile -ts "$((n - 1))" -n 1 x <"$file"
   3 printf '%s\n' "$x"

12.1. See Also


CategoryShell

13. How do I invoke a shell command from a non-shell application?

You can use the shell's -c option to run the shell with the sole purpose of executing a short bit of script:

   1 sh -c 'echo "Hi!  This is a short script."'

This is usually pretty useless without a means of passing data to it. The best way to pass bits of data to your shell is to pass them as positional arguments:

   1 sh -c 'echo "Hi! This short script was run with the arguments: $@"' -- "foo" "bar"

Notice the -- before the actual positional parameters. The first argument you pass to the shell process (that isn't the argument to the -c option) will be placed in $0. Positional parameters start at $1, so we put a little placeholder in $0. This can be anything you like; in the example, we use the generic --.

This technique is used often in shell scripting, when trying to have a non-shell CLI utility execute some bash code, such as with find(1):

   1 find /foo -name '*.bar' -exec bash -c 'mv "$1" "${1%.bar}.jpg"' -- {} \;

Here, we ask find to run the bash command for every *.bar file it finds, passing it to the bash process as the first positional parameter. The bash process runs the mv command after doing some Parameter Expansion on the first positional parameter in order to rename our file's extension from bar to jpg.

Alternatively, if your non-shell application allows you to set environment variables, you can do that, and then read them using normal variables of the same name.
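
For example, many programs that cannot pass arguments can still set environment variables for the command they run (the variable name MYFILE here is just an illustration):

MYFILE=/some/path sh -c 'printf "Working on %s\n" "$MYFILE"'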

Similarly, suppose a program (e.g. a file manager) lets you define an external command that an argument will be appended to, but you need that argument somewhere in the middle. In that case:

#!/bin/sh
sh -c 'command foo "$1" bar' -- "$@"

13.1. Calling shell functions

Only a shell can call a shell function. So constructs like this won't work:

   1 # This won't work!
   2 find . -type f -exec my_bash_function {} +

If your shell function is defined in a file, you can invoke a shell which sources that file, and then calls the function:

   1 find . -type f -exec bash -c 'source /my/bash/function; my_bash_function "$@"' _ {} +

(See UsingFind for explanations.)

Bash also permits function definitions to be exported through the environment. So, if your function is defined within your current shell, you can export it to make it available to the new shell which find invokes:

   1 # Bash
   2 export -f my_bash_function
   3 find . -type f -exec bash -c 'my_bash_function "$@"' _ {} +

This works ONLY in bash, and isn't without problems. The maximum length of the function code cannot exceed the max size of an environment variable, which is platform-specific. Functions from the environment can be a security risk as well because bash simply scans environment variables for values that fit the form of a shell function, and has no way of knowing who exported the function, or whether a value that happens to look like a function really is one. It is generally a better idea to retrieve the function definition and put it directly into the code. This technique is also more portable.

   1 # Bash/ksh/zsh
   2 
   3 function someFunc {
   4     :
   5 }
   6 
   7 bash -s <<EOF
   8 $(typeset -f someFunc)
   9 
  10 someFunc
  11 EOF

Note that ksh93 uses a completely different approach for sourcing functions. See the manpage for the FPATH variable. Bash doesn't use FPATH, but the source tree includes 3 different examples of how to emulate it: http://git.savannah.gnu.org/cgit/bash.git/tree/examples/functions/autoload.v3

This technique is also necessary when calling the shell function on a remote system (e.g. over ssh). Sourcing a file containing the function definition on the remote system will work, if such a file is available. If no such file is available, the only viable approach is to ask the current shell to spit out the function definition, and feed that to the remote shell over the ssh channel:

   1 {
   2     declare -f my_bash_function
   3     echo "my_bash_function foo 'bar bar'"
   4 } | ssh -T user@host bash

Care must be taken when writing a script to send through ssh. Ssh works like eval, with the same concerns; see that FAQ for details.


CategoryShell

14. How can I concatenate two variables? How do I append a string to a variable?

There is no (explicit) concatenation operator for strings (either literal or variable dereferences) in the shell; you just write them adjacent to each other:

   1 var=$var1$var2

If the right-hand side contains whitespace characters, it needs to be quoted:

   1 var="$var1 - $var2"

If you're appending a string that doesn't "look like" part of a variable name, you just smoosh it all together:

   1 var=$var1/.-

Otherwise, braces or quotes may be used to disambiguate the right-hand side:

   1 var=${var1}xyzzy
   2 # Without braces, var1xyzzy would be interpreted as a variable name
   3 
   4 var="$var1"xyzzy
   5 # Alternative syntax

CommandSubstitution can be used as well. The following line creates a log file name logname containing the current date, resulting in names like e.g. log.2004-07-26:

   1 logname="log.$(date +%Y-%m-%d)"

There's no difference when the variable name is reused, either. A variable's value (the string it holds) may be reassigned at will:

   1 string="$string more data here"

Concatenating arrays is also possible:

   1 var=( "${arr1[@]}" "${arr2[@]}" )

Bash 3.1 has a new += operator that you may see from time to time:

   1 string+=" more data here"     # EXTREMELY non-portable!

It's generally best to use the portable syntax.


CategoryShell

15. How can I redirect the output of multiple commands at once?

Redirecting the standard output of a single command is as easy as:

   1 date > file

To redirect standard error:

   1 date 2> file

To redirect both:

   1 date > file 2>&1

or, a fancier way:

   1 # Bash only.  Equivalent to date > file 2>&1 but non-portable.
   2 date &> file

Redirecting an entire loop:

   1 for i in "${list[@]}"; do
   2     echo "Now processing $i"
   3     # more stuff here...
   4 done > file 2>&1

However, this can become tedious if the output of many programs should be redirected. If all output of a script should go into a file (e.g. a log file), the exec command can be used:

   1 # redirect both standard output and standard error to "log.txt"
   2 exec > log.txt 2>&1
   3 # all output including stderr now goes into "log.txt"

(See FAQ 106 for more complex script logging techniques.)

Otherwise, command grouping helps:

   1 {
   2     date
   3     # some other commands
   4     echo done
   5 } > messages.log 2>&1

In this example, the output of all commands within the curly braces is redirected to the file messages.log.

More discussion

In-depth: Illustrated Tutorial


CategoryShell

16. How can I run a command on all files with the extension .gz?

Often a command already accepts several files as arguments, e.g.

   1 zcat -- *.gz

On some systems, you would use gzcat instead of zcat. If neither is available, or if you don't care to play guessing games, just use gzip -dc instead.

The -- prevents a filename beginning with a hyphen from causing unexpected results.

If an explicit loop is desired, or if your command does not accept multiple filename arguments in one invocation, the for loop can be used:

   1 # Bourne
   2 for file in ./*.gz
   3 do
   4     echo "$file"
   5     # do something with "$file"
   6 done

To do it recursively, use find:

   1 # Bourne
   2 find . -name '*.gz' -type f -exec do-something {} \;

If you need to process the files inside your shell for some reason, then read the find results in a loop:

   1 # Bash
   2 while IFS= read -r file; do
   3     echo "Now processing $file"
   4     # do something fancy with "$file"
   5 done < <(find . -name '*.gz' -print)

This example uses ProcessSubstitution (see also FAQ #24), although a pipe may also be suitable in many cases. However, it does not correctly handle filenames that contain newlines. To handle arbitrary filenames, see FAQ #20.


CategoryShell

17. How can I use a logical AND/OR/NOT in a shell pattern (glob)?

"Globs" are simple patterns that can be used to match filenames or strings. They're generally not very powerful. If you need more power, there are a few options available.

If you want to operate on all the files that match glob A or glob B, just put them both on the same command line:

   1 rm -- *.bak *.old

If you want to use a logical OR in just part of a glob (larger than a single character -- for which square-bracketed character classes suffice), in Bash, you can use BraceExpansion:

   1 rm -- *.{bak,old}

If you need something still more general/powerful, in KornShell or BASH you can use extended globs. In Bash, you'll need the extglob option to be set. It can be checked with:

   1 shopt extglob

and set with:

   1 shopt -s extglob

To warm up, we'll move all files starting with foo AND not ending with .d to directory foo_thursday.d:

   1 mv foo!(*.d) foo_thursday.d

A more complex example -- delete all files containing Pink_Floyd AND not containing The_Final_Cut:

   1 rm !(!(*Pink_Floyd*)|*The_Final_Cut*)

By the way: these kinds of patterns can be used with the KornShell, too. They don't have to be enabled there; they are the default patterns.


CategoryShell

18. How can I group expressions in an if statement, e.g. if (A AND B) OR C?

The portable (POSIX or Bourne) way is to use multiple test (or [) commands:

   1 # Bourne
   2 if commandA && commandB || commandC; then
   3 ...
   4 
   5 # or with test(1) calls:
   6 if [ testA ] && [ testB ] || [ testC ]; then
   7 ...

When they are shell operators between commands (as opposed to the [[...]] operators), && and || have equal precedence, so processing is left to right.

If we need explicit grouping, then we can use curly braces:

   1 # Bourne
   2 if commandA && { commandB || commandC; }; then
   3 ...

What we should not do is try to use the -a or -o operators of the test command, because the results are undefined.

BASH, zsh and the KornShell have different, more powerful comparison commands with slightly different (easier) quoting:

Examples:

   1 # Bash/ksh/zsh
   2 if (( (n>0 && n<10) || n == -1 ))
   3 then echo "0 < $n < 10, or n==-1"
   4 fi

or

   1 # Bash/ksh/zsh
   2 if [[ ( -f $localconfig && -f $globalconfig ) || -n $noconfig ]]
   3 then echo "configuration ok (or not used)"
   4 fi

Note that contrary to the && and || shell operators, the && operator in ((...)) and [[...]] has precedence over the || operator (same goes for ['s -a over -o), so for instance:

   1 [ a = a ] || [ b = c ] && [ c = d ]

is false because it's like:

   1 { [ a = a ] || [ b = c ]; } && [ c = d ]

(left to right association, no precedence), while

   1 [[ a = a || b = c && c = d ]]

is true because it's like:

   1 [[ a = a || ( b = c && c = d ) ]]

(&& has precedence over ||).

Note that the distinction between numeric and string comparisons is strict. Consider the following example:

   1 n=3
   2 if [[ $n > 0 && $n < 10 ]]
   3 then echo "$n is between 0 and 10"
   4 else echo "ERROR: invalid number: $n"
   5 fi

The output will be "ERROR: ....", because in a string comparison "3" is bigger than "10": the strings are compared character by character, "3" sorts after "1", and the following "0" is never even considered. Changing the square brackets to double parentheses (( )) makes the example work as expected.
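
The corrected version, using an arithmetic comparison:

# Bash/ksh/zsh
n=3
if (( n > 0 && n < 10 ))
then echo "$n is between 0 and 10"
else echo "ERROR: invalid number: $n"
fi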


CategoryShell

19. How can I use numbers with leading zeros in a loop, e.g. 01, 02?

As always, there are many different ways to solve the problem, each with its own advantages and disadvantages. The most important considerations are which shell you're using, whether the start/end numbers are constants, and how many times the loop is going to iterate.

19.1. Brace expansion

If you're in bash/zsh/ksh, and if the start and end numbers are constants, and if there aren't too many of them, you can use BraceExpansion. Bash version 4 allows zero-padding and ranges in its brace expansion:

   1 # Bash 4 / zsh
   2 for i in {01..10}; do
   3     ...

In Bash 3, you can use ranges inside brace expansion (but not zero-padding). Thus, the same thing can be accomplished more concisely like this:

   1 # Bash 3
   2 for i in 0{1..9} 10
   3 do
   4     ...

Another bash 3 example, for output of 0000 to 0034:

   1 # Bash 3
   2 for i in {000{0..9},00{10..34}}
   3 do
   4     echo "$i"
   5 done
   6 
   7 # using the outer brace instead of just adding them one next to the other
   8 # allows to use the expansion, for instance, like this:
   9 wget 'http://foo.com/adir/thepages'{000{0..9},00{10..34}}'.html'

In ksh and in older bash versions, where the leading zeroes are not supported directly by brace expansion, you might still be able to approximate it:

   1 # Bash / ksh / zsh
   2 for i in 0{1,2,3,4,5,6,7,8,9} 10
   3 do
   4     ...

19.2. Formatting with printf

The most important drawback with BraceExpansion is that the whole list of numbers is generated and held in memory all at once. If there are only a few thousand numbers, that may not be so bad, but if you're looping millions of times, you would need a lot of memory to hold the fully expanded list of numbers.

The printf command (which is a Bash builtin, and is also POSIX standard), can be used to format a number, including zero-padding. The bash builtin can also assign the formatted result to a shell variable (in recent versions), without forking a SubShell.

If all you want to do is print the sequence of numbers, and you're in bash/ksh/zsh, and the sequence is fairly small, you can use the implicit looping feature of printf together with a brace expansion:

   1 # Bash 3
   2 printf '%03d\n' {1..300}

If you're in bash 3.1 or higher, you can use a C-style for loop together with printf -v to format the numbers into a variable:

   1 # Bash 3.1 / ksh93 / zsh
   2 for ((i = 1; i <= 10; i++)); do
   3     printf -v ii %02d "$i"
   4     echo "$ii"
   5 done

Brace expansion requires constant starting and ending values. If you don't know in advance what the start and end values are, you can cheat:

   1 # Bash 3
   2 # start and end are variables containing integers
   3 eval "printf '%03d\n' {$start..$end}"

The eval is required in Bash because brace expansions occur before parameter expansions.

The traditional Csh implementation, which all other applicable shells follow, inserts the brace expansion pass somewhere between the processing of other expansions and pathname expansion; thus, parameter expansion has already been performed by the time words are scanned for brace expansion. There are various pros and cons to Bash's implementation; this is probably the most frequently cited drawback. Given how messy that eval solution is, please give serious thought to using a for or while loop with shell arithmetic instead.
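
For example, a sketch of the arithmetic-loop alternative (start and end are variables containing integers, as above):

# Bash/ksh93/zsh
for (( i = start; i <= end; i++ )); do
    printf '%03d\n' "$i"
done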

19.3. Ksh formatted brace expansion

The ksh93 method for specifying field width for sequence expansion is to add a (limited) printf format string to the syntax, which is used to format each expanded word. This is somewhat more powerful, but unfortunately incompatible with bash, and ksh does not understand Bash's field padding scheme:

   1 #ksh93
   2 echo {0..10..2%02d}

ksh93 also has a variable attribute that specifies a field width to pad with leading zeros whenever the variable is referenced. The concept is similar to other attributes supported by Bash, such as case modification. Note that ksh never interprets octal literals.

   1 # ksh93 / mksh / zsh
   2 $ typeset -Z3 i=4
   3 $ echo $i
   4 004

19.4. External programs

If the command seq(1) is available (it's part of GNU sh-utils/coreutils), you can use it as follows:

   1 seq -w 1 10

or, for arbitrary numbers of leading zeros (here: 3):

   1 seq -f "%03g" 1 10

Combining printf with seq(1), you can do things like this:

   1 # POSIX shell, GNU utilities
   2 printf '%03d\n' $(seq 300)

(That may be helpful if you are not using Bash, but you have seq(1), and your version of seq(1) lacks printf-style format specifiers. That's a pretty odd set of restrictions, but I suppose it's theoretically possible. Since seq is a nonstandard external tool, it's good to keep your options open.)

Be warned however that using seq might be considered bad style; it's even mentioned in Don't Ever Do These.

Some BSD-derived systems have jot(1) instead of seq(1). In accordance with the glorious tradition of Unix, it has a completely incompatible syntax:

   1 # POSIX shell, OpenBSD et al.
   2 printf "%02d\n" $(jot 10 1)
   3 
   4 # Bourne shell, OpenBSD (at least)
   5 jot -w %02d 10 1

Finally, the following example works with any BourneShell derived shell (which also has expr and sed) to zero-pad each line to three bytes:

   1 # Bourne
   2 i=0
   3 while test $i -le 10
   4 do
   5     echo "00$i"
   6     i=`expr $i + 1`
   7 done |
   8     sed 's/.*\(...\)$/\1/g'

In this example, the number of '.' inside the parentheses in the sed command determines how many total bytes from the echo command (at the end of each line) will be kept and printed.

But if you're going to rely on an external Unix command, you might as well just do the whole thing in awk in the first place:

   1 # Bourne
   2 # count variable contains an integer
   3 awk -v count="$count" 'BEGIN {for (i=1;i<=count;i++) {printf("%03d\n",i)} }'
   4 
   5 # Bourne, with Solaris's decrepit and useless awk:
   6 awk "BEGIN {for (i=1;i<=$count;i++) {printf(\"%03d\\n\",i)} }"


Now, since the number one reason this question is asked is for downloading images in bulk, you can use the examples above with xargs(1) and wget(1) to fetch files:

   1 almost any example above | xargs -i% wget $LOCATION/%

The xargs -i% will read a line of input at a time, and replace the % at the end of the command with the input.

Or, a simpler example using a for loop:

   1 # Bash 3
   2 for i in {1..100}; do
   3    wget "$prefix$(printf %03d $i).jpg"
   4    sleep 5
   5 done

Or, avoiding the subshells (requires bash 3.1):

   1 # Bash 3.1
   2 for i in {1..100}; do
   3    printf -v n %03d $i
   4    wget "$prefix$n.jpg"
   5    sleep 5
   6 done


CategoryShell

20. How can I split a file into line ranges, e.g. lines 1-10, 11-20, 21-30?

POSIX specifies the split utility, which can be used for this purpose:

   1 split -l 10 input.txt

For more flexibility you can use sed. The sed command can print e.g. the line number range 1-10:

   1 sed 10q         # Print lines 1-10 and then quit.
   2 sed '1,5d; 10q' # Print just lines 6-10 by filtering the first 5 then quitting after 10.

The d command prevents the matched lines from being printed (so the first command prints lines 1-10, and the second prints only lines 6-10). This could alternatively have been done by passing sed the -n option and printing the wanted lines with the p command rather than deleting the unwanted ones with d. It makes no difference which you choose.
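
For example, the second command above can equally be written with -n and p:

sed -n '6,10p; 10q'   # Print just lines 6-10, then quit.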

We can now use this to print an arbitrary range of a file (specified by line number):

   1 # POSIX shell
   2 file=/etc/passwd
   3 range=10
   4 cur=1
   5 last=$(awk 'END { print NR }' < "$file") # count number of lines
   6 chunk=1
   7 while [ "$cur" -lt "$last" ]
   8 do
   9     endofchunk=$((cur + range - 1))
  10     sed -n -e "$cur,${endofchunk}p" -e "${endofchunk}q" "$file" > "chunk.$(printf %04d "$chunk")"
  11     chunk=$((chunk + 1))
  12     cur=$((cur + range))
  13 done

The previous example uses POSIX arithmetic, which older Bourne shells do not have. In that case the following example should be used instead:

   1 # legacy Bourne shell; assume no printf either
   2 file=/etc/passwd
   3 range=10
   4 cur=1
   5 last=`awk 'END { print NR }' < "$file"` # count number of lines
   6 chunk=1
   7 while test "$cur" -lt "$last"
   8 do
   9     endofchunk=`expr $cur + $range - 1`
  10     sed -n -e "$cur,${endofchunk}p" -e "${endofchunk}q" "$file" > "chunk.$chunk"
  11     chunk=`expr "$chunk" + 1`
  12     cur=`expr "$cur" + "$range"`
  13 done

Awk can also be used to produce a more or less equivalent result:

   1 awk -v range=10 '{print > FILENAME "." (int((NR -1)/ range)+1)}' file


CategoryShell

21. How can I find and safely handle file names containing newlines, spaces or both?

First and foremost, to understand why you're having trouble, read Arguments to get a grasp on how the shell understands the statements you give it. It is vital that you grasp this matter well if you're going to be doing anything with the shell.

The preferred method to deal with arbitrary filenames is still to use find(1):

find ... -exec command {} \;

or, if you need to handle filenames en masse:

find ... -exec command {} +

xargs is rarely ever more useful than the above, but if you really insist, remember to use -0 (-0 is not in the POSIX standard, but is implemented by GNU and BSD systems):

# Requires GNU/BSD find and xargs
find ... -print0 | xargs -r0 command

# Never use xargs without -0 or similar extensions!

Use one of these unless you really can't.

Another way to deal with files with spaces in their names is to use the shell's filename expansion (globbing). This has the disadvantage of not working recursively (except with zsh's extensions or bash 4's globstar), and it normally does not include hidden files (filenames beginning with "."). But if you just need to process all the files in a single directory, and omitting hidden files is okay, it works fantastically well.

For example, this code renames all the *.mp3 files in the current directory to use underscores in their names instead of spaces (this uses the bash/ksh extension allowing "/" in parameter expansion):

# Bash/ksh
for file in ./*\ *.mp3; do
  if [ -e "$file" ] || [ -L "$file" ]; then  # Make sure it isn't an empty match
    mv "$file" "${file// /_}"
  fi
done

You can omit the "if..." and "fi" lines if you're certain that at least one path will match the glob. The problem is that if the glob doesn't match, instead of looping 0 times (as you might expect), the loop will execute once with the unexpanded pattern (which is usually not what you want). You can also use the bash extension "shopt -s nullglob" to make empty globs expand to nothing, and then again you can omit the if and fi.
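
With nullglob, the same rename loop can drop the existence test (a sketch; remember nullglob is a bash extension, and you may want to unset it again afterwards):

# Bash
shopt -s nullglob
for file in ./*\ *.mp3; do
  mv "$file" "${file// /_}"
done
shopt -u nullglob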

For more examples of renaming files, see FAQ #30.

Remember, you need to quote all your Parameter Expansions using double quotes. If you don't, the expansion will undergo WordSplitting and filename generation (see also argument splitting and BashPitfalls). Also, always prefix globs with "/" or "./"; otherwise, if there's a file with "-" as the first character, the expansions might be misinterpreted as options.

Another way to handle filenames recursively involves using the -print0 option of find (a GNU extension now found in most other implementations and soon POSIX), together with bash's -d extended option for read:

# Bash
unset a i
while IFS= read -r -d $'\0' file; do
  a[i++]="$file"        # or however you want to process each file
done < <(find /tmp -type f -print0)

The preceding example reads all the files under /tmp (recursively) into an array, even if they have newlines or other whitespace in their names, by forcing read to use the NUL byte (\0) as its line delimiter. Since NUL is not a valid byte in Unix filenames, this is the safest approach besides using find -exec. IFS= is required to avoid trimming leading/trailing whitespace, and -r is needed to avoid backslash processing. In fact, $'\0' is actually the empty string (bash doesn't support passing NUL bytes to commands even built-in ones) so we could also write it like this:

# Bash
unset a i
while IFS= read -r -d '' file; do
  a[i++]="$file"
done < <(find /tmp -type f -print0)

So, why doesn't this work?

# DOES NOT WORK
unset a i
find /tmp -type f -print0 | while IFS= read -r -d '' file; do
  a[i++]="$file"
done

Because of the pipeline, the entire while loop is executed in a SubShell and therefore the array assignments will be lost after the loop terminates. (For more details about this, see FAQ #24.)

For a longer discussion about handling filenames in shell, see Filenames and Pathnames in Shell: How to do it Correctly.


CategoryShell

22. How can I replace a string with another string in a variable, a stream, a file, or in all the files in a directory?

There are a number of techniques for this. Which one to use depends on many factors, the biggest of which is what we're editing. This page also contains contradictory advice from multiple authors. This is a deeply ugly topic, and there are no universally right answers (but plenty of universally wrong ones).

22.1. Files

Before you start, be warned that editing files in place is a really bad idea. The preferred way to modify a file is to create a new file within the same file system, write the modified content into it, and then mv it to the original name. This is the only way to prevent data loss in the event of a crash while writing. However, using a temp file and mv means that you break hardlinks to the file (unavoidably), that you convert a symlink into a regular file, and that you may need to take extra steps to transfer the ownership and permissions (and possibly other metadata) of the original file to the new file. Some people prefer to roll the dice and accept the tiny possibility of data loss versus the greater possibility of hardlink loss and the inconvenience of chown/chmod (and potentially setfattr, setfacl, chattr...).

The other major problem you're going to face is that all of the standard Unix tools for editing files expect some kind of regular expression as the search pattern. If you're passing input you did not create as the search pattern, it may contain syntax that breaks the program's parser, which can lead to failures, or CodeInjection exploits.

22.1.1. Just Tell Me What To Do

If your search string or your replacement string comes from an external source (environment variable, argument, file, user input) and is therefore not under your control, then this is your best choice:

in="$search" out="$replace" perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' ./*

That will operate on all of the files in the current directory. If you want to operate on a full hierarchy (recursively), then:

in="$search" out="$replace" find . -type f -exec \
  perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' -- {} +

You may of course supply additional options to find to restrict which files are replaced; see UsingFind for more information.

The critical reader may note that these commands use perl which is not a standard tool. That's because none of the standard tools can do this task safely.

If you're stuck using standard tools due to a restricted execution environment, then you'll have to weigh the options below and choose the one that will do the least amount of damage to your files.

22.1.2. Using a file editor

The only standard tools that actually edit a file are ed and ex (vi is the visual mode for ex).

ed is the standard UNIX command-based editor. ex is another standard command-line editor. Here are some commonly used ways of replacing the string olddomain.com with the string newdomain.com in a file named file. All of these commands do the same thing, with varying degrees of portability and efficiency:

## Ex
ex -sc '%s/olddomain\.com/newdomain.com/g|x' file

## Ed
# Bash
ed -s file <<< $'g/olddomain\\.com/s//newdomain.com/g\nw\nq'

# Bourne (with printf)
printf '%s\n' 'g/olddomain\.com/s//newdomain.com/g' w q | ed -s file

printf 'g/olddomain\\.com/s//newdomain.com/g\nw\nq' | ed -s file

# Bourne (without printf)
ed -s file <<!
g/olddomain\\.com/s//newdomain.com/g
w
q
!

To replace a string in all files of the current directory, just wrap one of the above in a loop:

for file in ./*; do
    [[ -f $file ]] && ed -s "$file" <<< $'g/old/s//new/g\nw\nq'
done

To do this recursively, the easy way would be to enable globstar in bash 4 (shopt -s globstar; it's a good idea to put this in your ~/.bashrc) and use:

# Bash 4+ (shopt -s globstar)
for file in ./**; do
    [[ -f $file ]] && ed -s "$file" <<< $'g/old/s//new/g\nw\nq'
done

If you don't have bash 4, you can use find. Unfortunately, it's a bit tedious to feed ed stdin for each file hit:

find . -type f -exec sh -c 'for f do ed -s "$f" <<!
g/old/s//new/g
w
q
!
done' sh {} +

Since ex takes its commands from the command-line, it's less painful to invoke from find:

find . -type f -exec ex -sc '%s/old/new/g|x' {} \;

Beware though: if your ex is provided by vim, it may get stuck on files that don't contain a match for old. In that case, you'd add the e option to ignore those files. When vim is your ex, you can also use argdo and find's {} + to minimize the number of ex processes to run:

# Bash 4+ (shopt -s globstar)
ex -sc 'argdo %s/old/new/ge|x' ./**

# Bourne
find . -type f -exec ex -sc 'argdo %s/old/new/ge|x' {} +

You can also ask for confirmation for every replacement from A to B. You will need to type y or n every time. Please note that the A is used twice in the command. This approach is good when wrong replacements may happen (working with a natural language, for example) and the data set is small enough.

find . -type f -name '*.txt' -exec grep -q 'A' {} \; -exec vim -c '%s/A/B/gc' -c 'wq' {} \;

22.1.3. Using a temporary file

If shell variables are used as the search and/or replace strings, ed is not suitable. Nor is sed, or any tool that uses regular expressions. Consider using the awk code at the bottom of this FAQ with redirections, and mv.

gsub_literal "$search" "$rep" < "$file" > tmp && mv -- tmp "$file"

# Using GNU tools to preserve ownership/group/permissions
gsub_literal "$search" "$rep" < "$file" > tmp &&
  chown --reference="$file" tmp &&
  chmod --reference="$file" tmp &&
  mv -- tmp "$file"

22.1.4. Using nonstandard tools

sed is a Stream EDitor, not a file editor. Nevertheless, people everywhere tend to abuse it for trying to edit files. It doesn't edit files. GNU sed (and some BSD seds) have a -i option that makes a copy and replaces the original file with the copy. An expensive operation, but if you enjoy unportable code, I/O overhead and bad side effects (such as destroying symlinks), and CodeInjection exploits, this would be an option:

sed -i    's/old/new/g' ./*  # GNU, OpenBSD
sed -i '' 's/old/new/g' ./*  # FreeBSD

Those of you who have perl 5 can accomplish the same thing using this code:

perl -pi -e 's/old/new/g' ./*

Recursively using find:

find . -type f -exec perl -pi -e 's/old/new/g' -- {} \;   # if your find doesn't have + yet
find . -type f -exec perl -pi -e 's/old/new/g' -- {} +    # if it does

If you want to delete lines instead of making substitutions:

# Deletes any line containing the perl regex foo
perl -ni -e 'print unless /foo/' ./*

To replace, for example, every "unsigned" with "unsigned long", unless it is already followed by int, short, long or char:

find . -type f -exec perl -i.bak -pne \
    's/\bunsigned\b(?!\s+(int|short|long|char))/unsigned long/g' -- {} \;

All of the examples above use regular expressions, which means they have the same issue as the sed code earlier; trying to embed shell variables in them is a terrible idea, and treating an arbitrary value as a literal string is painful at best.

If the inputs are not under your direct control, you can pass them as variables into both search and replace strings with no unquoting or potential for conflict with sigil characters:

in="$search" out="$replace" perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' ./*

Or, wrapped in a useful shell function:

# Bash
# usage: replace FROM TO [file ...]
replace() {
  in=$1 out=$2 perl -p ${3+'-i'} -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' -- "${@:3}"
}

This wrapper passes perl's -i option if there are any filenames, so that they are "edited in-place" (or at least as far as perl does such a thing -- see the perl documentation for details).
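
Hypothetical usage of that wrapper might look like this:

# Edit the named files "in place"
replace 'old text' 'new text' ./*.txt

# With no filenames, filter stdin to stdout instead
some_command | replace 'old text' 'new text'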

22.2. Variables

If you want to replace content within a variable, this can (and should) be done very simply with Bash's parameter expansion:

# Bash
var='some string'
var=${var//some/another}

However, if the replacement string is in a variable, one must be cautious. There are inconsistent behaviors across different versions of bash.

# Bash
var='some string'
search=some; rep=another

# Assignments work consistently.  Note the quotes.
var=${var//"$search"/"$rep"}

# Expansions outside of assignments are not consistent.
echo "${var//"$search"/"$rep"}"     # Works in bash 4.3 and later.
echo "${var//"$search"/$rep}"       # Works in bash 5.1 and earlier.

The quotes around "$search" prevent the contents of the variable from being treated as a shell pattern (also called a glob). Of course, if pattern matching is intended, do not include the quotes.
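
To illustrate the difference (the replacement here is a plain literal, so only the quoting of the search string matters):

# Bash
var='ok?'; search='?'
quoted=${var//"$search"/!}      # ok! -- the "?" is matched literally
unquoted=${var//$search/!}      # !!! -- the "?" is a pattern matching any single character
echo "$quoted $unquoted"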

In Bash 4.2 and earlier, if you quote $rep in ${var//"$search"/"$rep"} the quotes will be inserted literally.

In Bash 5.2, you must either quote $rep in ${var//"$search"/"$rep"} or disable patsub_replacement (shopt -u patsub_replacement), because otherwise any & characters in $rep will be replaced with the text matched by $search.
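
A small illustration of that Bash 5.2 behaviour (the variable names here are purely for demonstration):

# Bash 5.2, patsub_replacement enabled (the default)
var='price'; search='price'; rep='cost (&)'
echo "${var//"$search"/$rep}"       # cost (price) -- the & expands to the matched text
echo "${var//"$search"/"$rep"}"     # cost (&)     -- quoting $rep keeps the & literal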

The only way to get a consistent and correct result across all versions of bash is to use a temporary variable:

# Bash
tmp=${var//"$search"/"$rep"}
echo "$tmp"

For compatibility with Bash 4.2 and earlier, make sure you do not put quotes around the assignment's right hand side.

# In bash 4.2, this fails.  You get literal quotes in the result.
tmp="${var//"$search"/"$rep"}"

Replacements within a variable are even harder in POSIX sh:

# POSIX function

# usage: string_rep SEARCH REPL STRING
# replaces all instances of SEARCH with REPL in STRING
string_rep() {
  # initialize vars
  in=$3
  unset -v out

  # SEARCH must not be empty
  case $1 in '') return; esac

  while
    # break loop if SEARCH is no longer in "$in"
    case "$in" in
      *"$1"*) ;;
      *) break;;
    esac
  do
    # append everything in "$in", up to the first instance of SEARCH, and REP, to "$out"
    out=$out${in%%"$1"*}$2
    # remove everything up to and including the first instance of SEARCH from "$in"
    in=${in#*"$1"}
  done

  # append whatever is left in "$in" after the last instance of SEARCH to out, and print
  printf '%s%s\n' "$out" "$in"
}

var=$(string_rep "$search" "$rep" "$var")

# Note: POSIX does not have a way to localize variables. Most shells (even dash and
# busybox), however, do. Feel free to localize the variables if your shell supports
# it. Even if it does not, if you call the function with var=$(string_rep ...), the
# function will be run in a subshell and any assignments it makes will not persist.

22.3. Streams

If you wish to modify a stream, and if your search and replace strings are known in advance, then use the stream editor:

some_command | sed 's/foo/bar/g'

sed uses regular expressions. In our example, foo and bar are literal strings. If they were variables (e.g. user input), they would have to be rigorously escaped in order to prevent errors. This is very impractical, and attempting to do so will make your code extremely prone to bugs. Embedding shell variables in sed commands is never a good idea -- it is a prime source of CodeInjection bugs.

You could also do it in Bash itself, by combining a parameter expansion with FAQ #1:

search=foo rep=bar

while IFS= read -r line; do
  printf '%s\n' "${line//"$search"/"$rep"}"
done < <(some_command)

# or

some_command | while IFS= read -r line; do
  printf '%s\n' "${line//"$search"/"$rep"}"
done

If you want to do more processing than just a simple search/replace, this may be the best option. Note that the last example runs the loop in a SubShell. See FAQ #24 for more information on that.

You may notice, however, that the bash loop above is very slow for large data sets. So how do we find something faster, that can replace literal strings? Well, you could use awk. The following function replaces all instances of STR with REP, reading from stdin and writing to stdout.

# usage: gsub_literal STR REP
# replaces all instances of STR with REP. reads from stdin and writes to stdout.
gsub_literal() {
  # STR cannot be empty
  [[ $1 ]] || return

  str=$1 rep=$2 awk '
    # get the length of the search string
    BEGIN {
      str = ENVIRON["str"]
      rep = ENVIRON["rep"]
      len = length(str);
    }

    {
      # empty the output string
      out = "";

      # continue looping while the search string is in the line
      while (i = index($0, str)) {
        # append everything up to the search string, and the replacement string
        out = out substr($0, 1, i-1) rep;

        # remove everything up to and including the first instance of the
        # search string from the line
        $0 = substr($0, i + len);
      }

      # append whatever is left
      out = out $0;

      print out;
    }
  '
}

some_command | gsub_literal "$search" "$rep"


# condensed as a one-liner:
some_command | s=$search r=$rep awk 'BEGIN {s=ENVIRON["s"]; r=ENVIRON["r"]; l=length(s)} {o=""; while (i=index($0, s)) {o=o substr($0,1,i-1) r; $0=substr($0,i+l)} print o $0}'


CategoryShell

23. How can I calculate with floating point numbers instead of just integers?

BASH's builtin arithmetic uses integers only:

$ printf '%s\n' "$((10 / 3))"
3

For most operations involving non-integer numbers, an external program must be used, e.g. bc, AWK or dc:

$ printf 'scale=3; 10/3\n' | bc
3.333

The "scale=3" command notifies bc that three digits of precision after the decimal point are required.

Same example with dc (reverse polish calculator, lighter than bc):

$ printf '3 k 10 3 / p\n' | dc
3.333

k sets the precision to 3, and p prints the value of the top of the stack with a newline. The stack is not altered, though.

If you are trying to compare non-integer numbers (less-than or greater-than), and you have GNU bc, you can do this:

# Bash and GNU bc
if (( $(bc <<<'1.4 < 2.5') )); then
  printf '1.4 is less than 2.5.\n'
fi

However, x < y is not supported by all versions of bc:

# HP-UX 10.20.
imadev:~$ bc <<<'1 < 2'
syntax error on line 1,

If you want to be portable, you need something more subtle:

# POSIX
case $(printf '%s - %s\n' 1.4 2.5 | bc) in
  -*) printf '1.4 is less than 2.5\n' ;;
esac

This example subtracts 2.5 from 1.4, and checks the sign of the result. If it is negative, the first number is less than the second. We aren't actually treating bc's output as a number; we're treating it as a string, and only looking at the first character.

Legacy (Bourne) version:

# Bourne
case "`echo "1.4 - 2.5" | bc`" in
  -*) echo "1.4 is less than 2.5";;
esac

AWK can be used for calculations, too:

$ awk 'BEGIN {printf "%.3f\n", 10 / 3}'
3.333

There is a subtle but important difference between the bc and the awk solution here: bc reads commands and expressions from standard input. awk, on the other hand, evaluates the expression as part of the program text. Expressions on standard input are not evaluated, i.e. echo 10/3 | awk '{print $0}' will print 10/3 instead of the evaluated result of the expression.

ksh93, zsh and yash have support for non-integers in shell arithmetic. zsh (in the zsh/mathfunc module) and ksh93 additionally support some C99 math.h functions such as sin() or cos(), as well as user-defined math functions callable using C syntax. So many of these calculations can be done natively in ksh or zsh:

# ksh93/zsh/yash
$ LC_NUMERIC=C; printf '%s\n' "$((3.00000000000/7))"
0.428571428571428571

(ksh93 and yash are sensitive to locale. In ksh93, a dotted decimal literal will fail in locales where the decimal separator character is not a dot, such as German, Spanish or French locales. In yash, the locale's decimal radix is only honoured in the result of arithmetic expansions.)

Comparing two non-integer numbers for equality is potentially an unwise thing to do. Similar calculations that are mathematically equivalent and which you would expect to give the same result on paper may give ever-so-slightly-different non-integer numeric results due to rounding/truncation and other issues. If you wish to determine whether two non-integer numbers are "the same", you may either:

  • Round them both to a desired level of precision, and then compare the rounded results for equality; or
  • Subtract one from the other and compare the absolute value of the difference against an epsilon value of your choice.

  • Be sure to output adequate precision to fully express the actual value. Ideally, use hex float literals, which are supported by Bash.

 $ ksh93 -c 'LC_NUMERIC=C printf "%-20s %f %.20f %a\n" "error accumulation:" .1+.1+.1+.1+.1+.1+.1+.1+.1+.1{,,} constant: 1.0{,,}'
error accumulation:  1.000000 1.00000000000000000011 0x1.0000000000000002000000000000p+0
constant:            1.000000 1.00000000000000000000 0x1.0000000000000000000000000000p+0

One of the very few things that Bash actually can do with non-integer numbers is round them, using printf:

# Bash 3.1
# See if a and b are close to each other.
# Round each one to two decimal places and compare results as strings.
a=3.002 b=2.998
printf -v a1 %.2f "$a"
printf -v b1 %.2f "$b"
if [[ $a1 = "$b1" ]]; then
    printf 'a and b are roughly the same\n'
fi

Many problems that look like non-integer arithmetic can in fact be solved using integers only, and thus do not require these tools -- e.g., problems dealing with rational numbers. For example, to check whether two numbers x and y are in a ratio of 4:3 or 16:9 you may use something along these lines:

# Bash
# Variables x and y are integers
if (( (x * 9 - y * 16) == 0 )) ; then
   printf '16:9.\n'
elif (( (x * 3 - y * 4) == 0 )) ; then
   printf '4:3.\n'
else
   printf 'Neither 16:9 nor 4:3.\n'
fi

A more elaborate test could tell if the ratio is closest to 4:3 or 16:9 without using non-integer arithmetic. Note that this very simple example, which apparently involves non-integer numbers and division, is solved with integers and no division. If possible, it's usually more efficient to convert your problem to integer arithmetic than to use non-integer arithmetic.


CategoryShell

24. I want to launch an interactive shell that has special aliases and functions, not the ones in the user's ~/.bashrc.

When starting bash in non-POSIX mode, specify a different start-up file with --rcfile:

bash --rcfile /my/custom/bashrc

Or:

bash --rcfile <(printf %s 'my; commands; here')

Or:

 ~ $ bash --rcfile /dev/fd/9 -i 9<<<'cowsay moo'
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
+bash-4.3$ exit
exit

For POSIX-compatible shells, use the ENV environment variable:

  ~ $ ( { ENV=/dev/fd/9 exec -a sh bash -i; } 9<<<'echo yo' )
yo
+sh-4.3$ exit
exit

Unfortunately, ENV only works in bash and zsh when they are executed in their respective POSIX modes. Confusingly, Bash also has BASH_ENV, which only works in non-POSIX mode, and only in non-interactive shells.

24.1. Variant question: ''I have a script that sets up an environment, and I want to give the user control at the end of it.''

Put exec bash at the end of it to launch an interactive shell. This shell will inherit the environment variables and open FDs but none of the shell's internal state such as functions or aliases, since the shell process is being replaced by a new instance. Of course, you must also make sure that your script runs in a terminal -- otherwise, you must create one, for example, by using exec xterm -e bash.
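
As a rough sketch (the paths and variable names below are purely illustrative):

#!/usr/bin/env bash
# Hypothetical environment-setup script
export MYAPP_HOME=/opt/myapp            # example values only
export PATH=$MYAPP_HOME/bin:$PATH
cd "$MYAPP_HOME" || exit
exec bash      # hand control to an interactive shell that inherits the exported environment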


CategoryShell

25. I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?

In most shells, each command of a pipeline is executed in a separate SubShell. Non-working example:

# Works only in ksh88/ksh93, or zsh or bash 4.2 with lastpipe enabled
# In other shells, this will print 0
linecount=0

printf '%s\n' foo bar |
while IFS= read -r line
do
    linecount=$((linecount + 1))
done

echo "total number of lines: $linecount"

The reason for this potentially surprising behaviour, as described above, is that each SubShell introduces a new variable context and environment. The while loop above is executed in a new subshell with its own copy of the variable linecount created with the initial value of '0' taken from the parent shell. This copy then is used for counting. When the while loop is finished, the subshell copy is discarded, and the original variable linecount of the parent (whose value hasn't changed) is used in the echo command.

Different shells exhibit different behaviors in this situation:

  • BourneShell creates a subshell when the input or output of anything but a simple command (loops, case, etc.) is redirected, either by using a pipeline or by a redirection operator ('<', '>').

  • BASH, Yash and PDKsh-derived shells create a new process only if the loop is part of a pipeline.

  • KornShell and Zsh create it only if the loop is part of a pipeline, but not if the loop is the last part of it. The read example above actually works in ksh88, ksh93 and zsh! (but not mksh or other PDKsh-derived shells)

  • POSIX specifies the bash behaviour, but as an extension allows any or all of the parts of the pipeline to run without a subshell (thus permitting the KornShell behaviour, as well).

More broken stuff:

# Bash 4
# The problem also occurs without a loop
printf '%s\n' foo bar | mapfile -t line
printf 'total number of lines: %s\n' "${#line[@]}" # prints 0

f() {
    if [[ -t 0 ]]; then
        echo "$1"
    else
        read -r var
    fi
}

f 'hello' | f
echo "$var" # prints nothing

Again, in both cases the pipeline causes read or some containing command to run in a subshell, so its effect is never witnessed in the parent process.

It should be stressed that this issue isn't specific to loops. It's a general property of all pipes, though the while/read loop might be considered the canonical example that crops up over and over when people read the help or manpage description of the read builtin and notice that it accepts data on stdin. They might recall that data redirected into a compound command is available throughout that command, but not understand why all the fancy process substitutions and redirects they run across in places like FAQ #1 are necessary. Naturally they proceed to put their funstuff directly into a pipeline, and confusion ensues.

25.1. Workarounds

  • If the input is a file, a simple redirect will suffice:
    # POSIX
    while IFS= read -r line; do linecount=$((linecount + 1)); done < file
    echo "$linecount"

    Unfortunately, this doesn't work with a Bourne shell; see sh(1) from the Heirloom Bourne Shell for a workaround.

  • Use command grouping and do everything in the subshell:

    # POSIX
    linecount=0
    
    cat /etc/passwd |
    {
        while IFS= read -r line
        do
            linecount=$((linecount + 1))
        done
    
        echo "total number of lines: $linecount"
    }
    This doesn't really change the subshell situation, but if nothing from the subshell is needed in the rest of your code then destroying the local environment after you're through with it could be just what you want anyway.
  • Use ProcessSubstitution (Bash/Zsh/Ksh93 only):

    # Bash/Ksh93/Zsh
    while IFS= read -r line
    do
        ((linecount++))
    done < <(grep PATH /etc/profile)
    
    echo "total number of lines: $linecount"
    This is essentially identical to the first workaround above. We still redirect a file, only this time the file happens to be a named pipe temporarily created by our process substitution to transport the output of grep.
  • Use a named pipe:

    # POSIX
    mkfifo mypipe
    grep PATH /etc/profile > mypipe &
    
    while IFS= read -r line
    do
        linecount=$((linecount + 1))
    done < mypipe
    
    echo "total number of lines: $linecount"
  • Use a coprocess (ksh, even pdksh, oksh, mksh..):

    # ksh
    grep PATH /etc/profile |&
    
    while IFS= read -r -p line
    do
        linecount=$((linecount + 1))
    done
    
    echo "total number of lines: $linecount"
    # bash>4
    coproc grep PATH /etc/profile
    
    while IFS= read -r line
    do
        linecount=$((linecount + 1))
    done <&"${COPROC[0]}"
    
    echo "total number of lines: $linecount"
  • Use a HereString (Bash/Zsh/Ksh93 only; the example uses the Bash-specific read -a, while Ksh93 and Zsh use read -A instead):

    # Options:
    # -r Backslash does not act as an escape character for the word separators or line delimiter.
    # -a The words are assigned to sequential indices of the array "words"
    
    read -ra words <<< 'hi ho hum'
    printf 'total number of words: %d\n' "${#words[@]}"

    The <<< operator is available in Bash (2.05b and later), Zsh (where it was first introduced inspired from a similar operator in the Unix port of the rc shell), Ksh93 and Yash.

  • With a POSIX shell, or for longer multi-line data, you can use a here document instead:
    # POSIX
    linecount=0
    while IFS= read -r line; do
        linecount=$((linecount+1))
    done <<EOF
    hi
    ho
    hum
    EOF
    
    printf 'total number of lines: %d\n' "$linecount"
  • Use lastpipe (Bash 4.2)
    # Bash 4.2
    # +m: Disable monitor mode (job control) in an interactive shell since it is
    # on by default there and it needs to be disabled for lastpipe to work.
    set +m
    shopt -s lastpipe
    
    x=0
    printf '%s\n' hi{,,,,,} | while IFS= read -r 'lines[x++]'; do :; done
    printf 'total number of lines: %d\n' "${#lines[@]}"
    Bash 4.2 introduces the aforementioned ksh-like behavior to Bash. The one caveat is that job control must not be enabled, thereby limiting its usefulness in an interactive shell.

For more related examples of how to read input and break it into words, see FAQ #1.


CategoryShell

26. How can I access positional parameters after $9?

Use ${10} instead of $10. This works for BASH and KornShell, but not for older BourneShell implementations. Another way to access arbitrary positional parameters after $9 is to use for, e.g. to get the last parameter:

    # Bourne
    for last
    do
        : # nothing
    done

    echo "last argument is: $last"

To get an argument by number, we can use a counter:

    # Bourne
    n=12        # This is the number of the argument we are interested in
    i=1
    for arg
    do
        if test $i -eq $n
        then
            argn=$arg
            break
        fi
        i=`expr $i + 1`
    done
    echo "argument number $n is: $argn"

This has the advantage of not "consuming" the arguments. If consuming them is not a problem, the shift command discards the leading positional parameters:

    shift 11
    echo "the 12th argument is: $1"

and can be put into a helpful function:

    # Bourne
    getarg() { # $1 is argno
        shift "$1" && echo "$1"
    }
    arg12="`getarg 12 "$@"`"

In addition, bash and ksh93 treat the set of positional parameters as an array, and you may use parameter expansion syntax to address those elements in a variety of ways:

    # Bash, ksh93
    for x in "${@:(-2)}"    # iterate over the last 2 parameters
    for y in "${@:2}"       # iterate over all parameters starting at $2
                            # which may be useful if we don't want to shift

Although direct access to any positional argument is possible this way, it's seldom needed. The common alternative is to use getopts to process options (e.g. "-l", or "-o filename"), and then use either for or while to process all the remaining arguments in turn. An explanation of how to process command line arguments is available in FAQ #35, and another is found at http://www.shelldorado.com/goodcoding/cmdargs.html
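
As a minimal sketch of that pattern (see FAQ #35 for the full treatment):

# POSIX sh/Bash: parse -l and -o FILE, then handle the remaining arguments
long=0 outfile=
while getopts lo: opt; do
    case $opt in
        l) long=1 ;;
        o) outfile=$OPTARG ;;
        *) echo "usage: $0 [-l] [-o file] [args...]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
for arg do
    printf 'processing: %s\n' "$arg"
done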


CategoryShell

27. How can I randomize (shuffle) the order of lines in a file? Or select a random line from a file, or select a random file from a directory?

To randomize the lines of a file, here is one approach. This one involves generating a random number, which is prefixed to each line; then sorting the resulting lines, and removing the numbers.

# Bash/Ksh
randomize() {
    while IFS='' read -r l ; do printf '%d\t%s\n' "$RANDOM" "$l"; done |
    sort -n |
    cut -f2-
}

RANDOM is supported by BASH and KornShell, but is not defined by POSIX.

Here's the same idea (printing random numbers in front of a line, and sorting the lines on that column) using other programs:

# Bourne
awk '
    BEGIN { srand() }
    { print rand() "\t" $0 }
' |
sort -n |    # Sort numerically on first (random number) column
cut -f2-     # Remove sorting column

This is (possibly) faster than the previous solution, but will not work for very old AWK implementations (try nawk, or gawk, or /usr/xpg4/bin/awk if available). (Note that AWK uses the epoch time as a seed for srand(), which may or may not be random enough for you.)

Other non-portable utilities that can shuffle/randomize a file:

  • GNU shuf (in recent enough GNU coreutils)

  • GNU sort -R (coreutils 6.9)

For more details, please see their manuals.

27.1. Shuffling an array

A generalized version of this question might be, How can I shuffle the elements of an array? If we don't want to use the rather clumsy approach of sorting lines, this is actually more complex than it appears. A naive approach would give us badly biased results. A more complex (and correct) algorithm looks like this:

# Uses a global array variable.  Must be compact (not a sparse array).
# Bash syntax.
shuffle() {
   local i tmp size max rand

   size=${#array[@]}
   for ((i=size-1; i>0; i--)); do
      # RANDOM % (i+1) is biased because of the limited range of $RANDOM
      # Compensate by using a range which is a multiple of the rand modulus.

      max=$(( 32768 / (i+1) * (i+1) ))
      while (( (rand=RANDOM) >= max )); do :; done
      rand=$(( rand % (i+1) ))
      tmp=${array[i]} array[i]=${array[rand]} array[rand]=$tmp
   done
}

This function shuffles the elements of an array in-place using the Knuth-Fisher-Yates shuffle algorithm.
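
For example (remember that it operates on the global variable named array):

# Bash
array=(one two three four five six)
shuffle
printf '%s\n' "${array[@]}"     # the same six elements, in a random order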

If we just want the unbiased random number picking function, we can split that out separately:

# Returns random number from 0 to ($1-1) in global var 'r'.
# Bash syntax.
rand() {
    local max=$((32768 / $1 * $1))
    while (( (r=RANDOM) >= max )); do :; done
    r=$(( r % $1 ))
}

This rand function is better than using $((RANDOM % n)). For simplicity, many of the remaining examples on this page may use the modulus approach. In all such cases, switching to the use of the rand function will give better results; this improvement is left as an exercise for the reader.

27.2. Selecting a random line/file

Another question we frequently see is, How can I print a random line from a file?

There are two main approaches to this:

  • Count the number of lines n, select a random integer r from 1 to n, and print line r.

  • Read line by line, selecting lines with a varying probability as we go along.

27.2.1. With counting lines first

The simpler approach is to count lines first.

# Bash
n=$(wc -l <"$file")         # Count number of lines.
r=$((RANDOM % n + 1))       # Random number from 1..n (see warnings above!)
sed -n "$r{p;q;}" "$file"   # Print the r'th line.

# POSIX with (new) AWK
awk -v n="$(wc -l <"$file")" \
  'BEGIN{srand();l=int((rand()*n)+1)} NR==l{print;exit}' "$file"

(See FAQ 11 for more info about printing the r'th line.)

The next example sucks the entire file into memory. This approach saves time rereading the file, but obviously uses more memory. (Arguably: on systems with sufficient memory and an effective disk cache, you've read the file into memory by the earlier methods, unless there's insufficient memory to do so, in which case you shouldn't, QED.)

# Bash
unset lines n
while IFS= read -r 'lines[n++]'; do :; done < "$file"   # See FAQ 5
r=$((RANDOM % n))   # See warnings above!
echo "${lines[r]}"

Note that we don't add 1 to the random number in this example, because the array of lines is indexed counting from 0.

Also, some people want to choose a random file from a directory (for a signature on an e-mail, or to choose a random song to play, or a random image to display, etc.). A similar technique can be used:

# Bash
files=(*.ogg)                  # Or *.gif, or *
n=${#files[@]}                 # For readability
xmms -- "${files[RANDOM % n]}" # Choose a random element

27.2.2. Without counting lines first

If you happen to have GNU shuf you can use that, but it is not portable.

# example, 5 random lines from file
shuf -n 5 file

Without shuf, we have to write some code ourselves. If we want n random lines we need to:

  1. accept the first n lines
  2. accept each further line with probability n/nl where nl is the number of lines read so far
  3. if we accepted the line in step 2, replace a random one of the n lines we already have

# WARNING: srand() without an argument seeds using the current time accurate to the second.
# If run more than once in a single second on the clock you will get the same output.
# Find a better way to seed this.

n=$1
shift

awk -v n="$n" '
BEGIN            { srand()                           }
NR     <= n      { lines[NR - 1         ] = $0; next }
rand() <  n / NR { lines[int(rand() * n)] = $0       }
END              { for (k in lines) print lines[k]   }
' "$@"

Bash and POSIX sh solutions forthcoming.
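
In the meantime, here is a rough Bash sketch of the same idea for the common single-line case (it uses the slightly biased RANDOM modulus discussed earlier; substitute the rand function above for better results):

# Bash (sketch): keep the i'th line with probability 1/i
n=0 keep=
while IFS= read -r line; do
    n=$((n + 1))
    if (( RANDOM % n == 0 )); then
        keep=$line
    fi
done < "$file"
printf '%s\n' "$keep"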

27.3. Known bugs

  • http://lists.gnu.org/archive/html/bug-bash/2010-01/msg00042.html points out a surprising pitfall concerning the use of RANDOM without a leading $ in certain mathematical contexts. (Upshot: you should prefer n=$((...math...)); ((array[n]++)) over ((array[...math...]++)) in almost every case.)

    • Behavior described appears reversed in current versions of mksh, ksh93, Bash, and Zsh. Still something to keep in mind for legacy. -ormaaj

27.4. Using external random data sources

Some people feel the shell's builtin RANDOM parameter is not sufficiently random for their applications. Typically this will be an interface to the C library's rand(3) function, although the Bash manual does not specify the implementation details. Some people feel their application requires cryptographically stronger random data, which would have to be supplied by some external source.

Before we explore this, we should point out that often people want to do this as a first step in writing some sort of random password generator. If that is your goal, you should at least consider using a password generator that has already been written, such as pwgen.

Now, if we're considering the use of external random data sources in a Bash script, we face several issues:

  • The data source will probably not be portable. Thus, the script will only be usable in special environments.
  • If we simply grab a byte (or a single group of bytes large enough to span the desired range) from the data source and do a modulus on it, we will run into the bias issue described earlier on this page. There is absolutely no point in using an expensive external data source if we're just going to bias the results with sloppy code! To work around that, we may need to grab bytes (or groups of bytes) repeatedly, until we get one that can be used without bias.
  • Bash can't handle raw bytes very well, so each time we grab a byte (or group) we need to do something to it to turn it into a number that Bash can read. This may be an expensive operation. So, it may be more efficient to grab several bytes (or groups), and do the conversion to readable numbers, all at once.

  • Depending on the data source, these random bytes may be precious, so grabbing a lot of them all at once and discarding ones we don't use might be more expensive (by whatever metric we're using to measure such costs) than grabbing one byte at a time, even counting the conversion step. This is something you'll have to decide for yourself, taking into account the needs of your application, and the nature of your data source.

At this point you should be seriously rethinking your decision to do this in Bash. Other languages already have features that take care of all these issues for you, and you may be much better off writing your application in one of those languages instead.

You're still here? OK. Let's suppose we're going to use the /dev/urandom device (found on most Linux and BSD systems) as an external random data source in Bash. This is a character device which produces raw bytes of "pretty random" data. First, we'll note that the script will only work on systems where this is present. In fact, you should add an explicit check for this device somewhere early in the script, and abort if it's not found.

Now, how can we turn these bytes of data into numbers for Bash? If we attempt to read a byte into a variable, a NUL byte would give us a variable which appears to be empty. However, since no other input gives us that result, this may be acceptable -- an empty variable means we tried to read a NUL. We can work with this. The good news is we won't have to fork an od(1) or any other external program to read bytes. Then, since we're reading one byte at a time, this also means we don't have to write any prefetching or buffering code to save forks.

One other gotcha, however: reading bytes only works in the C locale. If we try this in en_US.utf8 we get an empty variable for every byte from 128 to 255, which is clearly no good.

So, let's put this all together and see what we've got:

#!/usr/bin/env bash
# Requires Bash 3.1 or higher, and an OS with /dev/urandom (Linux, BSD, etc.)

export LANG=C
if [[ ! -e /dev/urandom ]]; then
    echo "No /dev/urandom on this system" >&2
    exit 1
fi

# Return an unbiased random number from 0 to ($1 - 1) in variable 'r'.
rand() {
    if (($1 > 256)); then
        echo "Argument larger than 256 currently unsupported" >&2
        r=-1
        return 1
    fi

    local max=$((256 / $1 * $1))
    while IFS= read -r -n1 -d '' r < /dev/urandom
          printf -v r %d "'$r"
          ((r >= max))
    do
        :
    done
    r=$((r % $1))
}

This uses a trick from FAQ 71 for converting bytes to numbers. When the variable populated by read is empty (because of a NUL byte), we get 0, which is just what we want.
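
(In short: when the argument to a numeric printf conversion begins with a single or double quote, the numeric value of the following character is used.)

# POSIX printf
printf '%d\n' "'A"     # prints 65, the character code of A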

Extending this to handle ranges larger than 0..255 is left as an exercise for the reader.

27.4.1. Awk as a source of seeded pseudorandom numbers

Sometimes we don't actually want truly random numbers. In some applications, we want a reproducible stream of pseudorandom numbers. We achieve this by using a pseudorandom number generator (PRNG) and "seeding" it with a known value. The PRNG then produces the same stream of output numbers each time, for that seed.

Bash's RANDOM works this way (assigning to it seeds the PRNG that bash uses internally), but for this example, we're going to use awk instead. Awk's rand() function returns a floating point value, so we don't run into the biasing issue that we have with bash's RANDOM. Also, awk is much faster than bash, so really it's just the better choice.
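
For comparison, seeding bash's own generator looks like this; the exact numbers differ between bash versions, but are reproducible for a given version and seed:

# Bash
RANDOM=31337
echo "$RANDOM $RANDOM $RANDOM"   # the same three numbers on every run with the same bash build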

For this example, we'll set up the awk as a background process using ProcessSubstitution. We will read from it by maintaining an open FileDescriptor connected to awk's standard output.

# Bash

list=(ox zebra cat buffalo giraffe salamander)
n=${#list[@]}

exec 3< <(
    awk -v seed=31337 -v n="$n" \
    'BEGIN {srand(seed); while (1) {print int(n*rand())}}'
)

# Print a "random" list element every second, using the background awk stream
# as a seeded PRNG.
while true; do
    read -r r <&3
    printf %s\\n "${list[r]}"
    sleep 1
done

Each time the program is run, the same results will be printed. Changing the seed value will change the results.

If you don't want the same results every time, you can change srand(seed); to srand(); in the awk program. Awk will then seed its PRNG using the current epoch timestamp.


CategoryShell

28. How can two unrelated processes communicate?

Two unrelated processes cannot use the arguments, the environment or stdin/stdout to communicate; some form of inter-process communication (IPC) is required.

28.1. A file

Process A writes to a file, and Process B reads it. This method is not synchronized, and is therefore not safe if B can read the file while A is writing to it. A lock directory or a signal can probably help.

28.2. A directory as a lock

mkdir can be used to test for the existence of a dir and create it in one atomic operation; it thus can be used as a lock, although not a very efficient one.

Script A:

    until mkdir /tmp/dir; do    # wait until we can create the lock directory
      sleep 1
    done
    echo foo > file             # write to the file; this section is critical
    rmdir /tmp/dir              # remove the lock

Script B:

    until mkdir /tmp/dir; do    # wait until we can create the lock directory
      sleep 1
    done
    read var < file             # read from the file; this section is critical
    echo "$var"                 # Script A cannot write to the file while we hold the lock
    rmdir /tmp/dir              # remove the lock

See FAQ #45 and mutex for more examples using a lock directory.

28.3. Signals

Signals are probably the simplest form of IPC:

ScriptA:

    trap 'flag=go' USR1   # set up the signal handler for the USR1 signal

    # echo $$ > /tmp/ScriptA.pid   # if we want to save the pid in a file

    flag=""
    while [[ $flag != go ]]; do   # wait for the green light from Script B
      sleep 1
    done
    echo "we received the signal"

You must find or know the pid of the other script to send it a signal using kill:

     # kill all processes whose command line matches ScriptA
     pkill -USR1 -f ScriptA

     # if ScriptA saved its pid in a file
     kill -USR1 $(</tmp/ScriptA.pid)

     # if ScriptA is a child of this shell:
     ScriptA & pid=$!
     kill -USR1 $pid

The first two methods are not bulletproof and will cause trouble if you run more than one instance of ScriptA.

28.4. Named Pipes

Named pipes are a much richer form of IPC. They are described on their own page: NamedPipes.
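
As a tiny taste (see NamedPipes for the details and caveats), two otherwise unrelated shells can talk through a FIFO like this:

# Shell A (reader) -- blocks until a writer opens the pipe
mkfifo /tmp/mypipe
while IFS= read -r msg; do
    printf 'A received: %s\n' "$msg"
done < /tmp/mypipe

# Shell B (writer), in another terminal
printf '%s\n' 'hello from B' > /tmp/mypipe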


CategoryShell

29. How do I determine the location of my script? I want to read some config files from the same place.

There are two prime reasons why this issue comes up: either you want to externalize data or configuration of your script and need a way to find these external resources, or your script is intended to act upon a bundle of some sort (eg. a build script), and needs to find the resources to act upon.

It is important to realize that in the general case, this problem has no solution. Any approach you might have heard of, and any approach that will be detailed below, has flaws and will only work in specific cases. First and foremost, try to avoid the problem entirely by not depending on the location of your script!

Before we dive into solutions, let's clear up some misunderstandings. It is important to understand that:

  • Your script does not actually have a location! Wherever the bytes end up coming from, there is no "one canonical path" for it. Never.

  • $0 is NOT the answer to your problem. If you think it is, you can either stop reading and write more bugs, or you can accept this and read on.

29.1. I need to access my data/config files

Very often, people want to make their scripts configurable. The separation principle teaches us that it's a good idea to keep configuration and code separate. The problem then ends up being: how does my script know where to find the user's configuration file for it?

Too often, people believe the configuration of a script should reside in the same directory where they put their script. This is the root of the problem.

A UNIX paradigm exists to solve this problem for you: configuration artifacts of your scripts should exist in either the user's HOME directory or /etc. That gives your script an absolute path to look for the file, solving your problem instantly: you no longer depend on the "location" of your script:

if [[ -e ~/.myscript.conf ]]; then
    source ~/.myscript.conf
elif [[ -e /etc/myscript.conf ]]; then
    source /etc/myscript.conf
fi

The same holds true for other types of data files. Logs should be written to /var/log or the user's home directory. Support files should be installed to an absolute path in the file system or be made available alongside the configuration in /etc or the user's home directory.

29.2. I need to access files bundled with my script

Sometimes scripts are part of a "bundle" and perform certain actions within or upon it. This is often true for applications unpacked or contained within a bundle directory. The user may unpack or install the bundle anywhere; ideally, the bundle's scripts should work whether that's somewhere in a home dir, or /var/tmp, or /usr/local. The files are transient, and have no fixed or predictable location.

When a script needs to act upon other files it's bundled with, independently of its absolute location, we have two options: either we rely on PWD or we rely on BASH_SOURCE. Both approaches have certain issues; here's what you need to know.

29.2.1. Using BASH_SOURCE

The BASH_SOURCE internal bash variable is actually an array of pathnames. If you expand it as a simple string, e.g. "$BASH_SOURCE", you'll get the first element, which is the pathname of the currently executing function or script. Using the BASH_SOURCE method, you access files within your bundle like this:

# cd into the bundle and use relative paths
if [[ $BASH_SOURCE = */* ]]; then
    cd -- "${BASH_SOURCE%/*}/" || exit
fi
read somevar < etc/somefile

# Use the dirname directly, without changing directories
if [[ $BASH_SOURCE = */* ]]; then
    bundledir=${BASH_SOURCE%/*}/
else
    bundledir=./
fi
read somevar < "${bundledir}etc/somefile"

Please note that when using BASH_SOURCE, the following caveats apply:

  • $BASH_SOURCE expands empty when bash does not know where the executing code comes from. Usually, this means the code is coming from standard input (e.g. ssh host 'somecode', or from an interactive session).

  • $BASH_SOURCE does not follow symlinks (when you run z from /x/y, you get /x/y/z, even if that is a symlink to /p/q/r). Often, this is the desired effect. Sometimes, though, it's not. Imagine your package links its start-up script into /usr/local/bin. Now that script's BASH_SOURCE will lead you into /usr/local and not into the package.

If you're not writing a bash script, the BASH_SOURCE variable is unavailable to you. There is a common convention, however, for passing the location of the script as the process name when it is started. Most shells do this, but not all shells do so reliably, and not all of them attempt to resolve a relative path to an absolute path. Relying on this behaviour is dangerous and fragile, but can be done by looking at $0 (see below). Again, consider all your options before doing this: you are likely creating more problems than you are solving.

29.2.2. Using PWD

Another option is to rely on PWD, the current working directory. In this case, you can assume the user has first cd'ed into your bundle and make all your pathnames relative. Using the PWD method, you access files within your bundle like this:

read somevar < etc/somefile                 # Using pathname relative to PWD
read somevar < "${PWD%/}/etc/somefile"      # Expand PWD if you want an absolute pathname

bundledir=$PWD                              # Store PWD if you expect to cd in your script.
cd /somewhere/else
read somevar < "${bundledir%/}/etc/somefile"

To reduce fragility, you could even test whether, for example, the relative path to the script name is correct, to make sure the user has indeed cd'ed into the bundle:

if [[ ! -e bin/myscript ]]; then
    echo >&2 "Please cd into the bundle before running this script."
    exit 1
fi

You can also try some heuristics, just in case the user is sitting one directory above the bundle:

if [[ ! -e bin/myscript ]]; then
    if [[ -d mybundle-1.2.5 ]]; then
        cd mybundle-1.2.5 || {
            echo >&2 "Bundle directory exists but I can't cd there."
            exit 1
        }
    else
        echo >&2 "Please cd into the bundle before running this script."
        exit 1
    fi
fi

If you ever do need an absolute path, you can always get one by prefixing the relative path with $PWD: echo "Saved to: $PWD/result.csv"

The only difficulty here is that you're forcing your user to change into your bundle's directory before your script can function. Regardless, this may well be your best option!

29.2.3. Using a configuration/wrapper

If neither the BASH_SOURCE nor the PWD option sounds interesting, you might want to consider going the route of configuration files instead (see the previous section). In this case, you require that your user set the location of your bundle in a configuration file, and have him put that configuration file in a location you can easily find. For example:

[[ -e ~/.myscript.conf ]] || {
    echo >&2 "First configure the product in ~/.myscript.conf"
    exit 1
}

# ~/.myscript.conf defines something like bundleDir=/x/y
source ~/.myscript.conf

[[ $bundleDir ]] || {
    echo >&2 "Please define bundleDir='/some/path' in ~/.myscript.conf"
    exit 1
}

cd "$bundleDir" || {
    printf >&2 'Could not cd to <%s>\n' "$bundleDir"
    exit 1
}

# Now you can use the PWD method: use relative paths.

A variant of this option is to use a wrapper that configures your bundle's location. Instead of calling your bundled script, you install a wrapper for your script in the standard system PATH, which changes directory into the bundle and calls the real script from there, which can then safely use the PWD method from above:

#!/usr/bin/env bash
cd /path/to/where/bundle/was/installed || exit
exec "bin/realscript"

29.3. Why $0 is NOT an option

Common ways of finding a script's location depend on the name of the script, as seen in the predefined variable $0. Unfortunately, providing the script name via $0 is only a (common) convention, not a requirement. In fact, $0 is not at all the location of your script, it's the name of your process as determined by your parent. It can be anything.
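
A quick demonstration of just how arbitrary $0 can be:

# Bash
bash -c 'echo "$0"' /any/string/the/parent/chooses
# prints: /any/string/the/parent/chooses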

A popular but suspect answer is "in some shells, $0 is always an absolute path, even if you invoke the script using a relative path, or no path at all". But this isn't reliable across shells; some of them (including BASH) return the actual command typed in by the user instead of the fully qualified path. And this is just the tip of the iceberg!

Consider that your script may not actually be on a locally accessible disk at all. Consider this:

ssh remotehost bash < ./myscript

The shell running on remotehost is getting its commands from a pipe. There's no script anywhere on any disk that bash can see.

Moreover, even if your script is stored on a local disk and executed, it could move. Someone could mv the script to another location in between the time you type the command and the time your script checks $0. Or someone could have unlinked the script during that same time window, so that it doesn't actually have a link within a file system any more.

(That may sound fanciful, but it's actually very common. Consider a script installed in /opt/foobar/bin, which is running at the time someone upgrades foobar to a new version. They may delete the entire /opt/foobar/ hierarchy, or they may move the /opt/foobar/bin/foobar script to a temporary name before putting a new version in place. For these reasons, even approaches like "use lsof to find the file which the shell is using as standard input" will still fail.)

Even in the cases where the script is in a fixed location on a local disk, the $0 approach still has some major drawbacks. The most important is that the script name (as seen in $0) may not be relative to the current working directory, but relative to a directory from the program search path $PATH (this is often seen with KornShell). Or (and this is the most likely problem by far...) there might be multiple links to the script from multiple locations, one of them being a simple symlink from a common PATH directory like /usr/local/bin, which is how it's being invoked. Your script might be in /opt/foobar/bin/script but the naive approach of reading $0 won't tell you that -- it may say /usr/local/bin/script instead.

Some people will try to work around the symlink issue with readlink -f "$0". Again, this may work in some cases, but it's not bulletproof. Nothing that reads $0 will ever be bulletproof, because $0 itself is unreliable. Furthermore, readlink is nonstandard, and won't be available on all platforms.

For a more general discussion of the Unix file system and how symbolic links affect your ability to know where you are at any given moment, see this Plan 9 paper.


CategoryShell

30. How can I display the target of a symbolic link?

The nonstandard external command readlink(1) can be used to display the target of a symbolic link:

$ readlink /bin/sh
bash

If you don't have readlink, you can use Perl:

perl -le 'print readlink "/bin/sh"'

You can also use GNU find's -printf %l directive, which is especially useful if you need to resolve links in batches:

$ find /bin/ -type l -printf '%p points to %l\n'
/bin/sh points to bash
/bin/bunzip2 points to bzip2
...

If your system lacks both readlink and Perl, you can use a function like this one:

# Bash
readlink() {
    local path=$1 ll

    if [ -L "$path" ]; then
        ll=$(LC_ALL=C ls -ld -- "$path" 2>/dev/null) &&
        printf '%s\n' "${ll#* -> }"
    else
        return 1
    fi
}

However, this can fail if a symbolic link contains " -> " in its name.


CategoryShell

31. How can I rename all my *.foo files to *.bar, or convert spaces to underscores, or convert upper-case file names to lower case?

There are a bunch of different ways to do this, depending on which nonstandard tools you have available. Even with just standard POSIX tools, you can still perform most of the simple cases. We'll show the portable tool examples first.

You can do most non-recursive mass renames with a loop and some Parameter Expansions, like this:

# POSIX
# Rename all *.foo to *.bar
for f in *.foo; do mv -- "$f" "${f%.foo}.bar"; done

To check what the command would do without actually doing it, you can add an echo before the mv. This applies to almost(?) every example on this page, so we won't mention it again.

# POSIX
# This removes the extension .zip from all the files.
for file in ./*.zip; do mv "$file" "${file%.zip}"; done

The "--" and "./*" are to protect from problematic filenames that begin with "-". You only need one or the other, not both, so pick your favorite.

Here are some similar examples, using Bash-specific parameter expansions:

# Bash
# Replace all spaces with underscores
for f in *\ *; do mv -- "$f" "${f// /_}"; done

For more techniques on dealing with files with inconvenient characters in their names, see FAQ #20.

# Bash
# Replace "foo" with "bar", even if it's not the extension
for file in ./*foo*; do mv "$file" "${file//foo/bar}"; done

All the above examples invoke the external command mv(1) once for each file, so they may not be as efficient as some of the nonstandard implementations.

31.1. Recursively

If you want to rename files recursively, then it becomes much more challenging. This example renames *.foo to *.bar:

# Bash
# Also requires GNU or BSD find(1)
# Recursively change all *.foo files to *.bar

find . -type f -name '*.foo' -print0 | while IFS= read -r -d '' f; do
  mv -- "$f" "${f%.foo}.bar"
done

This example uses Bash 4's globstar instead of GNU find:

# Bash 4
# Replace "foo" with "bar" in all files recursively.
# "foo" must NOT appear in a directory name!

shopt -s globstar
for file in /path/to/**/*foo*; do
    mv -- "$file" "${file//foo/bar}"
done

The trickiest part of recursive renames is ensuring that you do not change the directory component of a pathname, because something like this is doomed to failure:

mv "./FOO/BAR/FILE.TXT" "./foo/bar/file.txt"

Therefore, any recursive renaming command should only change the filename component of each pathname, like this:

mv "./FOO/BAR/FILE.TXT" "./FOO/BAR/file.txt"

If you need to rename the directories as well, those should be done separately. Furthermore, recursive directory renaming should either be done depth-first (changing only the last component of the directory name in each instance), or in several passes. Depth-first works better in the general case.

Here's an example script that uses depth-first recursion (changes spaces in names to underscores, but you just need to change the ren() function to do anything you want) to rename both files and directories. Again, it's easy to modify to make it act only on files or only on directories, or to act only on files with a certain extension, to avoid or force overwriting files, etc.:

# Bash
ren() {
  local newname
  newname=${1// /_}
  [[ $1 != "$newname" ]] && mv -- "$1" "$newname"
}

traverse() {
  local file
  cd -- "$1" || exit
  for file in *; do
    [[ -d $file ]] && traverse "$file"
    ren "$file"
  done
  cd .. || exit
}

# main program
shopt -s nullglob dotglob
traverse /path/to/startdir

Here is another way to recursively rename all directories and files with spaces in their names, UsingFind:

find . -depth -name "* *" -exec bash -c 'dir=${1%/*} base=${1##*/}; mv "$1" "$dir/${base// /_}"' _ {} \;

or, if your version of find accepts it, this is more efficient as it runs one bash for many files instead of one bash per file:

find . -depth -name "* *" -exec bash -c 'for f; do dir=${f%/*} base=${f##*/}; mv "$f" "$dir/${base// /_}"; done' _ {} +

31.2. Upper- and lower-case

To convert filenames to lower-case with only standard tools, you need something that can take a mixed-case filename as input and give back the lowercase version as output. In Bash 4 and higher, there is a parameter expansion that can do it:

# Bash 4
for f in *[[:upper:]]*; do mv -- "$f" "${f,,}"; done

Otherwise, tr(1) may be helpful:

# tolower - convert file names to lower case
# POSIX
for file do
    [ -f "$file" ] || continue                # ignore non-existing names
    newname=$(printf %s "$file" | tr '[:upper:]' '[:lower:]')     # lower case
    [ "$file" = "$newname" ] && continue      # nothing to do
    [ -f "$newname" ] && continue             # don't overwrite existing files
    mv -- "$file" "$newname"
done

This example will not handle filenames that end with newlines, because the CommandSubstitution will eat them. The workaround for that is to append a character in the command substitution, and remove it afterward. Thus:

newname=$(printf %sx "$file" | tr '[:upper:]' '[:lower:]')
newname=${newname%x}

We use the fancy range notation, because tr can behave very strangely when using the A-Z range in some locales:

imadev:~$ echo Hello | tr A-Z a-z
hÉMMÓ

To make sure you aren't caught by surprise when using tr with ranges, either use the fancy range notations, or set your locale to C.

imadev:~$ echo Hello | LC_ALL=C tr A-Z a-z
hello
imadev:~$ echo Hello | tr '[:upper:]' '[:lower:]'
hello
# Either way is fine here.

Note that GNU tr doesn't support multi-byte characters (like non-ASCII UTF-8 ones). So on GNU systems, you may prefer:

# GNU
sed 's/.*/\L&/g'
# POSIX
awk '{print tolower($0)}'

This technique can also be used to replace all unwanted characters in a file name, e.g. with '_' (underscore). The script is the same as above, with only the "newname=..." line changed.

# renamefiles - rename files whose name contain unusual characters
# POSIX
for file do
    [ -f "$file" ] || continue            # ignore non-regular files, etc.
    newname=$(printf '%s\n' "$file" | sed 's/[^[:alnum:]_.]/_/g' | paste -sd _ -)
    [ "$file" = "$newname" ] && continue  # nothing to do
    [ -f "$newname" ] && continue         # do not overwrite existing files
    mv -- "$file" "$newname"
done

The character class in [] contains all the characters we want to keep (after the ^); modify it as needed. The [:alnum:] range stands for all the letters and digits of the current locale. Note however that it will not replace bytes that don't form valid characters (like characters encoded in the wrong character set).

Here's an example that does the same thing, but this time using Parameter Expansion instead of sed:

# renamefiles (more efficient, less portable version)
# Bash/Ksh/Zsh
for file do
   [[ -f $file ]] || continue
   newname=${file//[![:alnum:]_.]/_}
   [[ $file = "$newname" ]] && continue
   [[ -e $newname ]] && continue
   [[ -L $newname ]] && continue
   mv -- "$file" "$newname"
done

It should be noted that all these examples contain a race condition -- an existing file could be overwritten if it is created in between the [ -e "$newname" ... and mv "$file" ... commands. Solving this issue is beyond the scope of this page, however adding the -i and (GNU specific) -T option to mv can reduce its impact.
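For example (GNU mv only; this narrows the window but does not eliminate it):

# GNU mv
# -T: never treat an existing "$newname" as a directory to move into
# -i: prompt before overwriting an existing file
mv -iT -- "$file" "$newname"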

One final note about changing the case of filenames: when using GNU mv, on many file systems, attempting to rename a file to its lowercase or uppercase equivalent will fail. (This applies to Cygwin on DOS/Windows systems using FAT or NTFS file systems; to GNU mv on Mac OS X systems using HFS+ in case-insensitive mode; as well as to Linux systems which have mounted Windows/Mac file systems, and possibly many other setups.) GNU mv checks both the source and destination names before attempting a rename, and due to the file system's case-insensitive mapping, it thinks that the destination "already exists":

mv README Readme    # fails with GNU mv on FAT file systems, etc.

The workaround for this is to rename the file twice: first to a temporary name which is completely different from the original name, then to the desired name.

mv README tempfilename &&
mv tempfilename Readme

31.3. Nonstandard tools

To convert filenames to lower case, if you have the utility mmv(1) on your machine, you could simply do:

# convert all filenames to lowercase
mmv "*" "#l1"

Some GNU/Linux distributions have a rename(1) command; however, the syntax differs from one distribution to the next. Debian uses the perl rename script (formerly included with Perl; now it is not), which it installs as prename(1) and rename(1). Red Hat uses a totally different rename(1) command.

The prename script is extremely flexible. For example, it can be used to change files to lower-case:

# convert all filenames to lowercase
prename '$_=lc($_)' ./*

Alternatively, you can also use:

# convert all filenames to lowercase
prename 'y/A-Z/a-z/' ./*

For prename to use Unicode instead of ASCII for files encoded in UTF-8:

# convert all filenames to lowercase using Unicode rules
PERL_UNICODE=SA rename '$_=lc' ./*

To assume the current locale charset for filenames:

rename 'BEGIN{use Encode::Locale qw(decode_argv);decode_argv} $_=lc'

(note that it still doesn't use the locale's rules for case conversion. For instance, in a Turkish locale, I would be converted to i, not ı).

Or recursively:

# convert all filenames to lowercase, recursively (assumes a find
# implementation with support for the non-standard -execdir predicate)
#
# Note: this will not change directory names. That's because -execdir
# cd's to the parent directory before running the command. That means
# however that (despite the +), one prename command is executed for
# each file to rename.
find . -type f -name '*[[:upper:]]*' -execdir prename '$_=lc($_)' {} +

A more efficient and portable approach:

find . -type f -name '*[[:upper:]]*' -exec prename 's{[^/]*$}{lc($&)}e' {} +

Or to replace all underscores with spaces:

prename 's/_/ /g' ./*_*

To rename files interactively using $EDITOR (from moreutils):

vidir

Or recursively:

find . -type f | vidir -

(Note: vidir cannot handle filenames that contain newline characters.)


CategoryShell

32. What is the difference between test, [ and [[ ?

The open square bracket [ command (aka test command) and the [[ ... ]] test construct are used to evaluate expressions. [[ ... ]] works only in the Korn shell (where it originates), Bash, Zsh, and recent versions of Yash and busybox sh (if enabled at compilation time, and still very limited there especially in the hush-based variant), and is more powerful; [ and test are POSIX utilities (generally builtin). POSIX doesn't specify the [[ ... ]] construct (which has a specific syntax with significant variations between implementations) though allows shells to treat [[ as a keyword. Here are some examples:

#POSIX
[ "$variable" ] || echo 'variable is unset or empty!' >&2
[ -f "$filename" ] || printf 'File does not exist or is not a regular file: %s\n' "$filename" >&2

if [[ ! -e $file ]]; then
    echo "File doesn't exist or is in an inaccessible directory or is a symlink to a file that doesn't exist." >&2
fi

if [[ $file0 -nt $file1 ]]; then
    printf '%s\n' "file $file0 is newer than $file1 (or $file1 is not accessible)"
    # (behaviour varies between shells if $file1 is not accessible)
fi

To cut a long story short: test implements the old, portable syntax of the command. In almost all shells (the oldest Bourne shells are the exception), [ is a synonym for test (but requires a final argument of ]). Although all modern shells have built-in implementations of [, there usually still is an external executable of that name, e.g. /bin/[. POSIX defines a mandatory feature set for [, but almost every shell offers extensions to it. So, if you want portable code, you should be careful not to use any of those extensions.
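You can see both with bash's type builtin; the output might look something like the following (the exact path varies from system to system):

$ type -a [
[ is a shell builtin
[ is /bin/[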

[[ is a new, improved version of it, and it is a keyword rather than a program. This makes it easier to use, as shown below.

Although [ and [[ have much in common and share many expression operators like "-f", "-s", "-n", and "-z", there are some notable differences. Here is a comparison list:

Feature                      new test [[      old test [        Example
-------                      -----------      ----------        -------
string comparison            >                \> (*)            [[ a > b ]] || echo "a does not come after b"
                             <                \< (*)            [[ az < za ]] && echo "az comes before za"
                             = (or ==)        =                 [[ a = a ]] && echo "a equals a"
                             !=               !=                [[ a != b ]] && echo "a is not equal to b"
integer comparison           -gt              -gt               [[ 5 -gt 10 ]] || echo "5 is not bigger than 10"
                             -lt              -lt               [[ 8 -lt 9 ]] && echo "8 is less than 9"
                             -ge              -ge               [[ 3 -ge 3 ]] && echo "3 is greater than or equal to 3"
                             -le              -le               [[ 3 -le 8 ]] && echo "3 is less than or equal to 8"
                             -eq              -eq               [[ 5 -eq 05 ]] && echo "5 equals 05"
                             -ne              -ne               [[ 6 -ne 20 ]] && echo "6 is not equal to 20"
conditional evaluation       &&               -a (**)           [[ -n $var && -f $var ]] && echo "$var is a file"
                             ||               -o (**)           [[ -b $var || -c $var ]] && echo "$var is a device"
expression grouping          (...)            \( ... \) (**)    [[ $var = img* && ($var = *.png || $var = *.jpg) ]] &&
                                                                    echo "$var starts with img and ends with .jpg or .png"
Pattern matching             = (or ==)        (not available)   [[ $name = a* ]] || echo "name does not start with an 'a': $name"
RegularExpression matching   =~               (not available)   [[ $(date) =~ ^Fri\ ...\ 13 ]] && echo "It's Friday the 13th!"

(*) This is an extension to the POSIX standard; some shells may have it, others may not.

(**) The -a and -o operators, and ( ... ) grouping, are defined by POSIX but only for strictly limited cases, and are marked as deprecated. Use of these operators is discouraged; you should use multiple [ commands instead:

  • if [ "$a" = a ] && [ "$b" = b ]; then ...

  • if [ "$a" = a ] || { [ "$b" = b ] && [ "$c" = c ];}; then ...

Special primitives that [[ is defined to have, but [ may be lacking (depending on the implementation):

Description                            Primitive   Example
-----------                            ---------   -------
entry (file or directory) exists       -e          [[ -e $config ]] && echo "config file exists: $config"
file is newer/older than other file    -nt / -ot   [[ $file0 -nt $file1 ]] && echo "$file0 is newer than $file1"
two files are the same                 -ef         [[ $input -ef $output ]] && { echo "will not overwrite input file: $input"; exit 1; }
negation                               !           [[ ! -u $file ]] && echo "$file is not a setuid file"

But there are more subtle differences.

  • No WordSplitting or glob expansion will be done for [[ (and therefore many arguments need not be quoted):

     file="file name"
     [[ -f $file ]] && echo "$file is a regular file"

    will work even though $file is not quoted and contains whitespace. With [ the variable needs to be quoted:

     file="file name"
     [ -f "$file" ] && echo "$file is a regular file"

    This makes [[ easier to use and less error-prone.

  • Parentheses in [[ do not need to be escaped:

     [[ -f $file1 && ( -d $dir1 || -d $dir2 ) ]]
     [ -f "$file1" -a \( -d "$dir1" -o -d "$dir2" \) ]
  • As of bash 4.1, string comparisons using < or > respect the current locale when done in [[, but not in [ or test. In fact, [ and test have never used locale collating order even though past man pages said they did. Bash versions prior to 4.1 do not use locale collating order for [[ either.

As a rule of thumb, [[ is used for strings and files. If you want to compare numbers, use an ArithmeticExpression, e.g.

# Bash
i=0
while (( i < 10 )); do ...

When should the new test command [[ be used, and when the old one [? If portability/conformance to POSIX or the BourneShell is a concern, the old syntax should be used. If on the other hand the script requires BASH, Zsh, or KornShell, the new syntax is usually more flexible, but not necessarily backwards compatible.

For reasons explained in the theory section below, any problem with an operator used with [[ is an unhandleable parse-time error that will cause bash to terminate, even if the command is never evaluated.

# Example of improper [[ usage.
# Notice that isSet is never even called.

 $ bash-3.2 <<\EOF
if ((BASH_VERSINFO[0] > 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] >= 2))); then
  isSet() { [[ -v $1 ]]; }
else
  isSet() { [[ ${1+_} ]]; }
fi
EOF
bash-3.2: line 2: conditional binary operator expected
bash-3.2: line 2: syntax error near `$1'
bash-3.2: line 2: `  isSet() { [[ -v $1 ]]; }'

If backwards-compatibility were desired then [ -v should have been used instead. The only other alternatives would be to use an alias to conditionally expand during the function definition, or eval to defer parsing until the command is actually reached at runtime.
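Here is a sketch of the eval alternative mentioned above: the [[ -v test is kept inside a string, so older versions of bash never have to parse the operator they don't understand.

# Bash
if ((BASH_VERSINFO[0] > 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] >= 2))); then
    eval 'isSet() { [[ -v $1 ]]; }'    # parsing deferred until eval runs
else
    isSet() { [[ ${1+_} ]]; }
fi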

See the Tests and Conditionals chapter in the BashGuide.

32.1. Theory

The theory behind all of this is that [ is a simple command, whereas [[ is a compound command. [ receives its arguments as any other command would, but most compound commands introduce a special parsing context which is performed before any other processing. Typically this step looks for special reserved words or control operators specific to each compound command which split it into parts or affect control-flow. The Bash test expression's logical and/or operators can short-circuit because they are special in this way (as are e.g. ;;, elif, and else). Contrast with ArithmeticExpression, where all expansions are performed left-to-right in the usual way, with the resulting string being subject to interpretation as arithmetic.

  • The arithmetic compound command has no special operators. It has only one evaluation context - a single arithmetic expression. Arithmetic expressions have operators too, some of which affect control flow during the arithmetic evaluation step (which happens last).
     # Bash
     (( 1 + 1 == 2 ? 1 : $(echo "This doesn't do what you think..." >&2; echo 1) ))
  • Test expressions on the other hand do have "operators" as part of their syntax, which lie on the other end of the spectrum (evaluated first).
     # Bash
     [[ '1 + 1' -eq 2 && $(echo "...but this probably does what you expect." >&2) ]]
  • Old-style tests have no way of controlling evaluation because their arguments aren't special.
     # Bash
     [ $((1 + 1)) -eq 2 -o $(echo 'No short-circuit' >&2) ]
  • Different error handling is made possible by searching for special compound command tokens before performing expansions. [[ can detect the presence of expansions that don't result in a word yet still throw an error if none are specified. Ordinary commands can't.

     # Bash
     ( set -- $(echo 'Unquoted null expansions do not result in "null" parameters.' >&2); echo $# )
     [[ -z $(:) ]] && echo "-z was supplied an arg and evaluated empty."
     [ -z ] && echo "-z wasn't supplied an arg, and no errors are reported. There's no possible way Bash could enforce specifying an argument here."
     [[ -z ]] # This will cause an error that ordinary commands can't detect.
  • For the very same reason, because ['s operators are just "arguments", unlike [[, you can specify operators as parameters to an ordinary test command. This might be seen as a limitation of [[, but the downsides almost always outweigh the benefits.

     # ksh93
    
     args=(0 -gt 1)
    
     (( $(print '0 > 1') )) # Valid command, Exit status is 1 as expected.
     [ "${args[@]}" ]       # Also exit 1.
     [[ ${args[@]} ]]       # Valid command, but is misleading. Exit status 0. set -x reveals the resulting command is [[ -n '0 -gt 1' ]]
  • Do keep in mind which operators belong to which shell constructs. Order of expansions can cause surprising results especially when mixing and nesting different evaluation contexts!
     # ksh93
     typeset -i x=0
    
     ( print "$(( ++x, ${ x+=1; print $x >&2;}1, x ))"      ) # Prints 1, 2
     ( print "$(( $((++x)), ${ x+=1; print $x >&2;}1, x ))" ) # Prints 2, 2 - because expansions are performed first.


CategoryShell

33. How can I redirect the output of 'time' to a variable or file?

Bash's time keyword uses special trickery, so that you can do things like

time find ... | xargs ...

and get the execution time of the entire pipeline, rather than just the simple command at the start of the pipe. (This is different from the behavior of the external command time(1), for obvious reasons.)

Because of this, people who want to redirect time's output often encounter difficulty figuring out where all the file descriptors are going. It's not as hard as most people think, though -- the trick is to call time in a SubShell or block, and then capture stderr of the subshell or block (which will contain time's results). If you need to redirect the actual command's stdout or stderr, you do that inside the subshell/block. For example:

  • File redirection:
    bash -c "time ls" 2>time.output      # Explicit, but inefficient.
    ( time ls ) 2>time.output            # Slightly more efficient.
    { time ls; } 2>time.output           # Most efficient.
    
    # The general case:
    { time some command >stdout 2>stderr; } 2>time.output
  • CommandSubstitution:

    foo=$( bash -c "time ls" 2>&1 )       # Captures *everything*.
    foo=$( { time ls; } 2>&1 )            # More efficient version.
    
    # Keep stdout unmolested.
    # The shell's original FD 1 is saved in FD 3, which is inherited by the subshell.
    # Inside the innermost block, we send the time command's stdout to FD 3.
    exec 3>&1
    foo=$( { time bar 1>&3; } 2>&1 )      # Captures stderr and time.
    exec 3>&-
    
    # Keep both stdout and stderr unmolested.
    exec 3>&1 4>&2
    foo=$( { time bar 1>&3 2>&4; } 2>&1 )  # Captures time only.
    exec 3>&- 4>&-
    
    # same thing without exec
    { foo=$( { time bar 1>&3- 2>&4-; } 2>&1 ); } 3>&1 4>&2

    See FileDescriptor for full explanations of the redirection juggling.

  • Pipe:
    # Make time only output elapsed time in seconds
    TIMEFORMAT=%R
    # Keep stdout and stderr unmolested
    exec 3>&1 4>&2
    { time foo 1>&3 2>&4; } 2>&1 | awk '{
        printf "The task took %d hours, %d minutes and %.3f seconds\n",
               $1/3600, $1%3600/60, $1%60
    }'
    exec 3>&- 4>&-

A similar construct can be used to capture "core dump" messages, which are actually printed by the shell that launched a program, not by the program that just dumped core:

./coredump >log 2>&1           # Fails to capture the message
{ ./coredump; } >log 2>&1      # Captures the message

The same applies to job control messages:

$ { sleep 1 & } >log 2>&1
$ cat log
[1] 10316
[1]+  Done                    sleep 1

Of course you may opt to redirect to /dev/null instead of a file.
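For example:

{ ./coredump; } >/dev/null 2>&1      # discards the output, including the core dump message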


CategoryShell CategoryExampleCode

34. How can I find a process ID for a process given its name?

Usually a process is referred to using its process ID (PID), and the ps(1) command can display the information for any process given its process ID, e.g.

    $ echo $$         # my process id
    21796
    $ ps -p 21796
      PID TTY          TIME CMD
    21796 pts/5    00:00:00 ksh

But frequently the process ID for a process is not known, but only its name. Some operating systems, e.g. Solaris, BSD, and some versions of Linux have a dedicated command to search a process given its name, called pgrep(1):

    $ pgrep init
    1

Often there is an even more specialized program available to not just find the process ID of a process given its name, but also to send a signal to it:

    $ pkill myprocess

Some systems also provide pidof(1). It differs from pgrep in that multiple output process IDs are only space separated, not newline separated.

    $ pidof cron
    5392

If these programs are not available, a user can search the output of the ps command using grep.

The major problem when grepping the ps output is that grep may match its own ps entry (try: ps aux | grep init). To make matters worse, this does not happen every time; the technical name for this is a RaceCondition. To avoid this, there are several ways:

  • Using grep -v at the end
         ps aux | grep name | grep -v grep
    will throw away all lines containing "grep" from the output. Disadvantage: You always have the exit state of the grep -v, so you can't e.g. check if a specific process exists.
  • Using grep -v in the middle
         ps aux | grep -v grep | grep name
    This does exactly the same, except that the exit state of "grep name" is accessible and a representation for "name is a process in ps" or "name is not a process in ps". It still has the disadvantage of starting a new process (grep -v).
  • Using [] in grep
         ps aux | grep [n]ame

    This spawns only the grep process that is actually needed. The trick is to use a []-character class (a regular expression). Putting only one character in a character class normally makes no sense, because [c] always matches "c"; the same is true here, and grep [n]ame still searches for "name". But since grep's own process list entry contains what you typed ("grep [n]ame") rather than "grep name", grep will not match itself.

34.1. greycat rant: daemon management

All the stuff above is OK if you're at an interactive shell prompt, but it should not be used in a script. It's too unreliable.

Most of the time when someone asks a question like this, it's because they want to manage a long-running daemon using primitive shell scripting techniques. Common variants are "How can I get the PID of my foobard process.... so I can start one if it's not already running" or "How can I get the PID of my foobard process... because I want to prevent the foobard script from running if foobard is already active." Both of these questions will lead to seriously flawed production systems.

If what you really want is to restart your daemon whenever it dies, just do this:

while true; do
   mydaemon --in-the-foreground
done

where --in-the-foreground is whatever switch, if any, you must give to the daemon to PREVENT IT from automatically backgrounding itself. (Often, -d does this and has the additional benefit of running the daemon with increased verbosity.) Self-daemonizing programs may or may not be the target of a future greycat rant....

If that's too simplistic, look into daemontools or runit, which are programs for managing services.

If what you really want is to prevent multiple instances of your program from running, then the only sure way to do that is by using a lock. For details on doing this, see ProcessManagement or FAQ 45.

ProcessManagement also covers topics like "I want to divide my batch job into 5 'threads' and run them all in parallel." Please read it.


CategoryShell

35. Can I do a spinner in Bash?

Sure!

# Bash, with GNU sleep
spin() {
  local i=0
  local sp='/-\|'
  local n=${#sp}
  printf ' '
  while sleep 0.1; do
    printf '\b%s' "${sp:i++%n:1}"
  done
}

Each time the loop iterates, it displays the next character in the sp string, wrapping around as it reaches the end. (i is the position of the current character to display and ${#sp} is the length of the sp string).

The \b string is replaced by a 'backspace' character. Alternatively, you could play with \r to go back to the beginning of the line.

To slow it down, the sleep command is included inside the loop (after the printf).
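For example, a sketch of the same loop body using \r instead of \b:

# Bash
while sleep 0.1; do
    printf '\r%s' "${sp:i++%n:1}"    # redraw from the start of the line
done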

A POSIX equivalent would be:

# POSIX sh
spin() {
  sp='/-\|'
  printf ' '
  while sleep 1; do
    printf '\b%.1s' "$sp"
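    # rotate sp: move its first character to the end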
    sp=${sp#?}${sp%???}
  done
}

One way to use these spinners in a script is to run them as background processes, and kill them when you're done. For example,

# POSIX sh
spin & spinpid=$!
# long-running commands here
kill "$spinpid"

If you already have a loop which does a lot of work, you can write a function that "advances" the spinner one step at a time, and call it at the beginning of each iteration:

# Bash
sp='/-\|'
sc=0
sn=${#sp}
spin() {
    printf '\b%s' "${sp:sc++%sn:1}"
}
endspin() {
    printf '\r%s\n' "$*"
}

until work_done; do
   spin
   some_work ...
done
endspin

A similar technique can be used to build progress bars.


CategoryShell

36. How can I handle command-line options and arguments in my script easily?

Well, that depends a great deal on what you want to do with them. There are two standard approaches, each with its strengths and weaknesses.

36.1. Overview

A Unix command generally has an argument syntax like this:

tar -x -f archive.tar -v -- file1 file2 file3

Please note the conventions and the ordering here, because they are important. They actually matter. This command has some arguments (file1, file2, file3), and some options (-x -f archive.tar -v), as well as the traditional end of options indicator "--".

The options appear before the non-option arguments. They do not appear afterward. They do not appear at just any old random place in the command.

Some options (-x, -v) are standalones. They are either present, or not. Some options (-f) take an argument of their own.

In all cases, option processing involves writing a loop. Ideally, this loop will make one pass over the argument list, examining each argument in turn, and setting appropriate shell variables so that the script remembers which options are in effect. Ultimately, it will discard all of the options, so that the argument list is left holding only the non-option arguments (file1 file2 file3). The rest of the script, then, can simply begin processing those, referring as needed to the variables that were set up by the option processor.

The option processor recognizes the end of options when it finds a -- argument, or when it finds an argument that doesn't start with a hyphen. (The option argument archive.tar does not signal the end of options, because it is processed along with the -f option.)

There are two basic approaches to writing an option processing loop: either write the loop yourself from scratch (we'll call this a "manual loop"), or use the shell's getopts command to assist with option splitting. We'll cover both of these cases.

Do not use getopt(1). Do not even discuss getopt on this page. Go to ComplexOptionParsing to learn more about it.

36.2. Manual loop

Manually parsing options is the most flexible approach. It is the best way, really, because it allows you to do anything you like: you can handle both single-letter and long options, with or without option arguments. That's why we're showing it first.

If you want to handle GNU-style --long-options or Tcl-style -longopts, a manual loop is your only choice. getopts does not support these.

In this example, notice how both --file FILE and --file=FILE are handled.

#!/bin/sh
# POSIX

die() {
    printf '%s\n' "$1" >&2
    exit 1
}

# Initialize all the option variables.
# This ensures we are not contaminated by variables from the environment.
file=
verbose=0

while :; do
    case $1 in
        -h|-\?|--help)
            show_help    # Display a usage synopsis.
            exit
            ;;
        -f|--file)       # Takes an option argument; ensure it has been specified.
            if [ "$2" ]; then
                file=$2
                shift
            else
                die 'ERROR: "--file" requires a non-empty option argument.'
            fi
            ;;
        --file=?*)
            file=${1#*=} # Delete everything up to "=" and assign the remainder.
            ;;
        --file=)         # Handle the case of an empty --file=
            die 'ERROR: "--file" requires a non-empty option argument.'
            ;;
        -v|--verbose)
            verbose=$((verbose + 1))  # Each -v adds 1 to verbosity.
            ;;
        --)              # End of all options.
            shift
            break
            ;;
        -?*)
            printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
            ;;
        *)               # Default case: No more options, so break out of the loop.
            break
    esac

    shift
done

# if --file was provided, open it for writing, else duplicate stdout
if [ "$file" ]; then
    exec 3> "$file"
else
    exec 3>&1
fi

# Rest of the program here.
# If there are input files (for example) that follow the options, they
# will remain in the "$@" positional parameters.

This parser does not handle single-letter options concatenated together (like -xvf being understood as -x -v -f). This could be added with effort, but it's left as an exercise for the reader. In practice, it's exceptionally rare for shell scripts that handle long options to handle single-letter option splitting as well. It's simply not worth the effort.

For the most part, shell scripts that you write will not need to worry about single-letter option splitting, because you are the only person using them. Fancy option processing is only desirable if you are releasing the program for general use, and that is almost never going to be the case in real life. Single-letter option combining also precludes the use of Tcl-style long arguments (-foo), which some commands like find(1), gcc(1) and star(1) use.

36.3. getopts

The main benefit of getopts is to allow single-letter option splitting (-xvf handled as -x -v -f). The trade-off for this is that you cannot use long arguments of any kind (GNU-style --foo or Tcl-style -foo), or options with an optional argument (like mysql's -p[password] option).

getopts is suitable for simple scripts. The more complex your option parsing needs are, the less likely it is that you'll be able to make use of getopts.

Here is a getopts example:

#!/bin/sh

# Usage info
show_help() {
cat << EOF
Usage: ${0##*/} [-hv] [-f OUTFILE] [FILE]...
Do stuff with FILE and write the result to standard output. With no FILE
or when FILE is -, read standard input.

    -h          display this help and exit
    -f OUTFILE  write the result to OUTFILE instead of standard output.
    -v          verbose mode. Can be used multiple times for increased
                verbosity.
EOF
}

# Initialize our own variables:
output_file=""
verbose=0

OPTIND=1
# Resetting OPTIND is necessary if getopts was used previously in the script.
# It is a good idea to make OPTIND local if you process options in a function.

while getopts hvf: opt; do
    case $opt in
        h)
            show_help
            exit 0
            ;;
        v)  verbose=$((verbose+1))
            ;;
        f)  output_file=$OPTARG
            ;;
        *)
            show_help >&2
            exit 1
            ;;
    esac
done
shift "$((OPTIND-1))"   # Discard the options and sentinel --

# Everything that's left in "$@" is a non-option.  In our case, a FILE to process.
printf 'verbose=<%d>\noutput_file=<%s>\nLeftovers:\n' "$verbose" "$output_file"
printf '<%s>\n' "$@"

# End of file

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts.

The advantages of getopts over a manual loop:

  1. It can handle things like -xvf filename in the expected Unix way, automatically.

  2. It makes sure options are parsed like any standard command (lowest common denominator), avoiding surprises.
  3. With some implementations, the error messages will be localised in the language of the user.

The disadvantages of getopts:

  1. (Except for ksh93) it can only handle short options (-h, not --help).

  2. It cannot handle options with optional arguments like mysql's -p[password].

  3. It doesn't exist in the Bourne shell.
  4. It only allows options to be parsed in the "standard way" (lowest common denominator).

  5. Options are coded in at least 2, probably 3 places -- in the call to getopts, in the case statement that processes them, and in the help/usage message that documents them.

For other, more complicated ways of option parsing, see ComplexOptionParsing.


CategoryShell

37. How can I get all lines that are: in both of two files (set intersection) or in only one of two files (set subtraction).

Use the comm(1) command:

# Bash
# Intersection of file1 and file2
# (i.e., only the lines that appear in both files)
comm -12 <(sort file1) <(sort file2)

# Subtraction of file1 from file2
# (i.e., only the lines unique to file2)
comm -13 <(sort file1) <(sort file2)

Read the comm man page for details. Those are process substitutions you see up there.

If for some reason you lack the core comm program, or seek alternatives, you can use these other methods. The grep (#1) or awk (#4) methods are faster than the above comm + sort (multiple calls to sort + pipes slow it down), but #1 and #4 don't scale as well to very large files since one of the data files is loaded into memory.

  1. An amazingly simple and fast implementation that took just 20 seconds to match a 30k line file against a 400k line file for me.
      # intersection of file1 and file2
      grep -xF -f file1 file2
    
      # subtraction of file1 from file2
      grep -vxF -f file1 file2
    • It has grep read one of the sets as a pattern list from a file (-f), and interpret the patterns as plain strings not regexps (-F), matching only whole lines (-x).
    • Note that the file specified with -f will be loaded into memory, so it doesn't scale for very large files.
    • It should work with any POSIX grep; on older systems you may need to use fgrep rather than grep -F.

  2. An implementation using sort and uniq:
      # intersection of file1 and file2
      sort file1 file2 | uniq -d    # assumes neither file1 nor file2 contains duplicate lines
    
      # file1-file2 (Subtraction)
      sort file1 file2 file2 | uniq -u
    
      # same way for file2 - file1, change last file2 to file1
      sort file1 file2 file1 | uniq -u
  3. Another implementation of subtraction:
      sort file1 file1 file2 | uniq -c |
      awk '{ if ($1 == 2) { $1 = ""; print; } }'
    • This may introduce an extra space at the start of the line; if that's a problem, just strip it away.
    • Also, this approach assumes that neither file1 nor file2 has any duplicates in it.
    • Finally, it sorts the output for you. If that's a problem, then you'll have to abandon this approach altogether. Perhaps you could use awk's associative arrays (or perl's hashes or tcl's arrays) instead.
  4. These are subtraction and intersection with awk, regardless of whether the input files are sorted or contain duplicates:
      # prints lines only in file1 but not in file2. Reverse the arguments to get the other way round
      awk 'NR==FNR{a[$0];next} !($0 in a)' file2 file1
    
      # prints lines that are in both files; order of arguments is not important
      awk 'NR==FNR{a[$0];next} $0 in a' file1 file2

    For an explanation of how these work, see http://awk.freeshell.org/ComparingTwoFiles.

If the lines of your files contain extra rubbish data, and you only want to compare part of each line from file 1 vs. part of each line from file 2, see FAQ 116.

See also: http://www.pixelbeat.org/cmdline.html#sets


CategoryShell

38. How can I print text in various colors?

Do not hard-code ANSI color escape sequences in your program! The tput command lets you interact with the terminal database in a sane way:

  # Bourne
  tput setaf 1; echo this is red
  tput setaf 2; echo this is green
  tput bold; echo "boldface (and still green)"
  tput sgr0; echo back to normal

Cygwin users: you need to install the ncurses package to get tput (see: Where did "tput" go in 1.7?)

tput reads the terminfo database which contains all the escape codes necessary for interacting with your terminal, as defined by the $TERM variable. For more details, see the terminfo(5) man page.

tput sgr0 resets the colors to their default settings. This also turns off boldface (tput bold), underline, etc.

If you want fancy colors in your prompt, consider using something manageable:

  # Bash
  red=$(tput setaf 1)
  green=$(tput setaf 2)
  blue=$(tput setaf 4)
  reset=$(tput sgr0)
  PS1='\[$red\]\u\[$reset\]@\[$green\]\h\[$reset\]:\[$blue\]\w\[$reset\]\$ '

Note that we do not hard-code ANSI color escape sequences. Instead, we store the output of the tput command into variables, which are then used when $PS1 is expanded. Storing the values means we don't have to fork a tput process multiple times every time the prompt is displayed; tput is only invoked 4 times during shell startup. The \[ and \] symbols allow bash to understand which parts of the prompt cause no cursor movement; without them, lines will wrap incorrectly.

As an exception, if you are attempting to use tput with the PROMPT_COMMAND shell variable, remember that, unlike most other variables, "the value of each set element [of the PROMPT_COMMAND array] is executed as a command prior to issuing each primary prompt", so each array element must be parsable as a command.

pr_cmd_0='printf "%b" "${reset}"'
PROMPT_COMMAND=( [0]="${pr_cmd_0}" )

And here is a function to pick colors in a 256 color terminal

# Bash
colors256() {
        local c i j

        printf "Colors 0 to 15 for the standard 16 colors\n"
        for ((c = 0; c < 16; c++)); do
                printf "|%s%3d%s" "$(tput setaf "$c")" "$c" "$(tput sgr0)"
        done
        printf "|\n\n"

        printf "Colors 16 to 231 for 256 colors\n"
        for ((i = j = 0; c < 232; c++, i++)); do
                printf "|"
                ((i > 5 && (i = 0, ++j))) && printf " |"
                ((j > 5 && (j = 0, 1)))   && printf "\b \n|"
                printf "%s%3d%s" "$(tput setaf "$c")" "$c" "$(tput sgr0)"
        done
        printf "|\n\n"

        printf "Greyscale 232 to 255 for 256 colors\n"
        for ((; c < 256; c++)); do
                printf "|%s%3d%s" "$(tput setaf "$c")" "$c" "$(tput sgr0)"
        done
        printf "|\n"
}

See also https://web.archive.org/web/20230405000510/https://wiki.bash-hackers.org/scripting/terminalcodes for an overview.

The following is a more extensive range of terminal sequence variables. Pick the ones you want:

# Variables for terminal requests.
[[ -t 2 ]] && { 
    alt=$(      tput smcup  || tput ti      ) # Start alt display
    ealt=$(     tput rmcup  || tput te      ) # End   alt display
    hide=$(     tput civis  || tput vi      ) # Hide cursor
    show=$(     tput cnorm  || tput ve      ) # Show cursor
    save=$(     tput sc                     ) # Save cursor
    load=$(     tput rc                     ) # Load cursor
    bold=$(     tput bold   || tput md      ) # Start bold
    stout=$(    tput smso   || tput so      ) # Start stand-out
    estout=$(   tput rmso   || tput se      ) # End stand-out
    under=$(    tput smul   || tput us      ) # Start underline
    eunder=$(   tput rmul   || tput ue      ) # End   underline
    reset=$(    tput sgr0   || tput me      ) # Reset cursor
    blink=$(    tput blink  || tput mb      ) # Start blinking
    italic=$(   tput sitm   || tput ZH      ) # Start italic
    eitalic=$(  tput ritm   || tput ZR      ) # End   italic
[[ $TERM != *-m ]] && { 
    red=$(      tput setaf 1|| tput AF 1    )
    green=$(    tput setaf 2|| tput AF 2    )
    yellow=$(   tput setaf 3|| tput AF 3    )
    blue=$(     tput setaf 4|| tput AF 4    )
    magenta=$(  tput setaf 5|| tput AF 5    )
    cyan=$(     tput setaf 6|| tput AF 6    )
}
    white=$(    tput setaf 7|| tput AF 7    )
    default=$(  tput op                     )                                                                                                                                                                   
    eed=$(      tput ed     || tput cd      )   # Erase to end of display
    eel=$(      tput el     || tput ce      )   # Erase to end of line
    ebl=$(      tput el1    || tput cb      )   # Erase to beginning of line
    ewl=$eel$ebl                                # Erase whole line
    draw=$(     tput -S <<< '   enacs
                                smacs
                                acsc
                                rmacs' || { \
                tput eA; tput as;
                tput ac; tput ae;         } )   # Drawing characters
    back=$'\b'
} 2>/dev/null ||:

The above leaves the variables unset when stderr isn't connected to a terminal and leaves the color variables unset for monochrome terminals. The alternative tput executions allow the code to keep working on systems where tput takes old termcap names instead of ANSI capnames. It also uses 2>/dev/null ||: to silence potential errors and keep them from aborting the script. That allows this code to be used in a range of edge cases such as scripts that use set -e and terminals or OS's that don't support certain sequences (the code is borrowed from https://github.com/lhunath/scripts/blob/master/bashlib/bashlib ).

38.1. Discussion

This will be contentious, but I'm going to disagree and recommend you use hard-coded ANSI escape sequences because terminfo databases in the real world are too often broken.

tput setaf literally means "Set ANSI foreground", so it shouldn't produce anything different from a hard-coded ANSI escape sequence, except that the hard-coded sequence will still work with a broken terminfo database: your colors will look correct in a VT with terminal type linux-16color, or with any other terminal type, as long as the terminal really is capable of 16 ANSI colors.

So do consider setting those variables to hard-coded ANSI sequences such as:

  # Bash
  white=$'\e[0;37m'
  • You assume the entire world of terminals that you will ever use always conforms to one single set of escape sequences. This is a very poor assumption. Maybe I'm showing my age, but in my first job after college, in 1993-1994, I worked with a wide variety of physical terminals (IBM 3151, Wyse 30, NCR something or other, etc.) all in the same work place. They all had different key mappings, different escape sequences, the works. If I were to hard-code a terminal escape sequence as you propose it would only work on ONE of those terminals, and then if I had to login from someone else's office, or from a server console, I'd be screwed. So, for personal use, if this makes you happy, I can't stop you. But the notion of writing a script that uses hard-coded escape sequences and then DISTRIBUTING that for other people should be discarded immediately. - GreyCat

    • I said it would be contentious, but there is an alternative view. A large number of people today will use Linux on their servers and their desktops and their profiles follow them around. The terminfo for linux-16color is broken. By doing it the "right" way, they will find their colors do not work correctly in a virtual terminal on one of the console tty's. Doing it the "wrong" way will result only in light red becoming bold if they use the real xterm or a close derivative. If terminfo can't get it right for something as common as linux-16color, it's hard to recommend relying on it. People should be aware that it doesn't work correctly, try it yourself, go through the first 16 colours on a Linux VT with linux-16color. I know ANSI only specified names not hues but setaf 7 is obviously not supposed to result in black text seeing as it is named white. I'd place money on a lot more people using Linux for their servers than any other UNIX based OS and if they are using another UNIX-based or true UNIX they are probably aware of the nuances. A Linux newbie would be very surprised to find after following the "right way" her colors did not work properly on a VT. Of course the correct thing to do is to fix terminfo, but that isn't in my power, although I have reported the bug for linux-16color in particular, how many other bugs are there in it? The only completely accurate thing to do is to hard-code the sequences for all the terminals you will encounter yourself, which is what terminfo is supposed to avoid the necessity of doing. However, it is buggy in at least this case (and a very common case), so relying on it to do it properly is also suspect. I will add here I have much respect for Greycat, and he is a very knowledgeable expert in many areas of IT; I fully admit I do not have the same depth of knowledge as he does, but will YOU ever be working on a Wyse 30? To be completely clear, I'm suggesting that you should consider hard-coded colors for your own profile and uses; if you are intending to write a completely portable script for others to use on foreign systems then you should rely on terminfo/termcap even if it is buggy.

  • I've never heard of linux-16colors before. It's not an installed terminfo entry in Debian, at least not by default. If your vendor is shipping broken terminfo databases, file a bug report. Meanwhile, find a system where the entry you need is not broken, and copy it to your broken system(s) -- or write it yourself. That's what the rest of the world has always done. It's where the terminfo entries came from in the first place. Someone had to write them all.
  • http://wooledge.org/~greg/linux-console-colors.png

  • -- GreyCat

I'm also "old" and have worked with various terminals. I owned a WYSE 50 at one point. That was long ago; it's 2023 now. I'm against the recommendation to use tput. The tput command as such is described by POSIX, but not any specific actions that it performs. POSIX requires only the sub-commands clear, init and reset. Everything else is system-specific, nonstandard cruft. Meanwhile, ANSI/ECMA escape sequence are a standard which goes back to something like 1976. It's 2023 today; it's okay to specify that your script requires a terminal or terminal emulator that conforms to a standard we have had for at least fifty years. At least in production. If we are tinkering with old terminals for fun, that is a retrocomputing activity. There is nothing wrong with that, but FAQ answers should proably be be geared more toward production than tinkering around with old gear. -- KazKylheku


CategoryShell

39. How do Unix file permissions work?

See Permissions.


CategoryShell

40. What are all the dot-files that bash reads?

See DotFiles.


CategoryShell

41. How do I use dialog to get input from the user?

Here is an example:

# POSIX
foo=$(dialog --inputbox "text goes here" 8 40 2>&1 >/dev/tty)
printf "The user typed '%s'\n" "$foo"

The redirection here is a bit tricky.

  1. The foo=$(command) is set up first, so the standard output of the command is being captured by bash.

  2. Inside the command, the 2>&1 causes standard error to be sent to where standard out is going -- in other words, stderr will now be captured.

  3. >/dev/tty sends standard output to the terminal, so the dialog box will be seen by the user. Standard error will still be captured, however.

Another common dialog(1)-related question is how to dynamically generate a dialog command that has items which must be quoted (either because they're empty strings, or because they contain internal white space). One can use eval for that purpose, but the cleanest way to achieve this goal is to use an array.

# Bash
unset -v m; i=0
words=(apple banana cherry "dog droppings")
for w in "${words[@]}"; do
    m[i++]=$w; m[i++]=""
done
dialog --menu "Which one?" 12 70 9 "${m[@]}"

In this example, the loop that populates the m array could instead have been reading from a pipeline, a file, etc.

Recall that the construction "${m[@]}" expands to the entire contents of an array, but with each element implicitly quoted. It's analogous to the "$@" construct for handling positional parameters. For more details, see FAQ #50.
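A quick illustration of the difference (the array contents here are made up):

# Bash
args=("dog droppings" "" "apple")
printf '<%s>\n' "${args[@]}"    # three arguments: <dog droppings>, <>, <apple>
printf '<%s>\n' ${args[@]}      # unquoted: word-split into <dog>, <droppings>, <apple>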

Newer versions of bash have a slightly prettier syntax for appending elements to an array:

# Bash 3.1 and up
...
for w in "${words[@]}"; do
    m+=("$w" "")
done
...

Here's another example, using filenames:

# Bash
files=(*.mp3)       # These may contain spaces, apostrophes, etc.
cmd=(dialog --menu "Select one:" 22 76 16)
i=0 n=${#cmd[*]}
for i in "${!files[@]}"; do
    cmd[n++]=$i; cmd[n++]=${files[i]}
done
choice=$("${cmd[@]}" 2>&1 >/dev/tty)
printf "Here's the file you chose:\n"
ls -ld -- "${files[choice]}"

A separate but useful function of dialog is to track the progress of a process that produces output. Below is an example that uses dialog to follow a process writing to a log file. In the dialog window, a tailbox displays the log output as it arrives, alongside a msgbox with a clickable OK button; the wait at the end lets the background writer finish before the script exits.

# POSIX
# you cannot tail a nonexistent file, so always ensure it pre-exists!
> dialog-tail.log
{
    for i in 1 2 3; do
        printf '%d\n' "$i"
        sleep 1
    done

    printf 'Done\n'
} > dialog-tail.log &

dialog --title "TAIL BOXES" \
       --begin 10 10 --tailboxbg dialog-tail.log 8 58 \
       --and-widget \
       --begin 3 10 --msgbox "Press OK " 5 30

wait

For an example of creating a progress bar using dialog --gauge, see FAQ #44.


CategoryShell

42. How do I determine whether a variable contains a substring?

In BASH:

  # Bash
  if [[ $foo = *bar* ]]

The above works in virtually all versions of Bash. Bash version 3 (and up) also allows regular expressions:

  # Bash
  my_re='ab*c'
  if [[ $foo =~ $my_re ]]   # bash 3, matches abbbbcde, or ac, etc.

For more hints on string manipulations in Bash, see FAQ #100.

If you are programming in the POSIX sh syntax or for the BourneShell instead of Bash, there is a more portable (but less pretty) syntax:

  # Bourne
  case $foo in
    *bar*) .... ;;
  esac

case allows you to match variables against globbing-style patterns (including extended globs, if your shell offers them). If you need a portable way to match variables against regular expressions, use expr.

  # Bourne/POSIX
  if expr "x$foo" : 'x.*bar' >/dev/null; then ...


CategoryShell

43. How can I find out if a process is still running?

The kill command is used to send signals to a running process. As a convenience function, the signal "0", which does not exist, can be used to find out if a process is still running:

# Bourne
myprog &          # Start program in the background
daemonpid=$!      # ...and save its process id

while sleep 60
do
    if kill -0 $daemonpid       # Is the process still alive?
    then
        echo >&2 "OK - process is still running"
    else
        echo >&2 "ERROR - process $daemonpid is no longer running!"
        break
    fi
done

NOTE: Anything you do that relies on PIDs to identify a process is inherently flawed. If a process dies, the meaning of its PID is UNDEFINED. Another process started afterward may take the same PID as the dead process. That would make the previous example think that the process is still alive (its PID exists!) even though it is dead and gone. It is for this reason that nobody other than the parent of a process should try to manage the process. Read ProcessManagement.

This is one of those questions that usually masks a much deeper issue. It's rare that someone wants to know whether a process is still running simply to display a red or green light to an operator.

More often, there's some ulterior motive, such as the desire to ensure that some daemon which is known to crash frequently is still running. If this is the case, the best course of action is to fix the program or its configuration so that it stops crashing. If you can't do that, then just restart it when it dies:

# POSIX
while true
do
  myprog && break
  sleep 1
done

This piece of code will restart myprog if it terminates with an exit code other than 0 (indicating something went wrong). If the exit code is 0 (successfully shut down) the loop ends. (If your process is crashing but also returning exit status 0, then adjust the code accordingly.) Note that myprog must run in the foreground. If it automatically "daemonizes" itself, you are screwed.

For a much better discussion of these issues, see ProcessManagement or FAQ #33.


CategoryShell

44. Why does my crontab job fail? 0 0 * * * some command > /var/log/mylog.`date +%Y%m%d`

In many versions of crontab, the percent sign (%) is treated specially, and therefore must be escaped with backslashes:

0 0 * * * some_user some_command >"/var/log/mylog.$(date '+\%Y\%m\%d')"

See your system's manual (crontab(5) or crontab(1)) for details. Note: on systems which split the crontab manual into two parts, you may have to type man 5 crontab or man -s 5 crontab to read the part you need.


CategoryShell

45. How do I create a progress bar? How do I see a progress indicator when copying/moving files?

The easiest way to add a progress bar to your own script is to use dialog --gauge. Here is an example, which relies heavily on BASH features:

# Bash
# Process all of the *.zip files in the current directory.
files=(*.zip)
dialog --gauge "Working..." 20 75 < <(
   n=${#files[@]} i=0
   for f in "${files[@]}"; do
      # process "$f" in some way (for testing, "sleep 1")
      echo $((100*(++i)/n))
   done
)

Here's an explanation of what it's doing:

  • An array named files is populated with all the files we want to process.

  • dialog is invoked, and its input is redirected from a ProcessSubstitution. (A pipe could also be used here; we'd simply have to reverse the dialog command and the loop.)

  • The processing loop iterates over the array.
  • Every time a file is processed, it increments a counter (i), and writes the percent complete to stdout.

A similar example, but using lines of a file as the input:

# Bash 4
mapfile -t lines < "$inputfile"
n=${#lines[@]}
i=0
for line in "${lines[@]}"; do
    echo "$((100*(++i)/n))"
    # process the line (use "sleep 1" or similar to test)
done | dialog --gauge "Working..." 20 75

The key concept here is that we have to know how many lines there are in order to calculate the percent complete. Thus, the entire input must be read once, just to count the items, before we can start processing. By saving the input to an array in memory, we can avoid problems if the input happens to be non-reusable (e.g. a pipe instead of a file).

For more examples of using dialog, see FAQ #40.

A simple progress bar can also be programmed without dialog. There are lots of different approaches, depending on what kind of presentation you're looking for.

One traditional approach is the spinner which shows a whirling line segment to indicate "busy". This is not really a "progress meter" since there is no information presented about how close the program is to completion.

The next step up is presenting a numeric value without scrolling the screen. Using a carriage return to move the cursor to the beginning of the line (on a graphical terminal, not a teletype...), and not writing a newline until the very end:

i=0
while ((i < 100)); do
  printf "\r%3d%% complete" $i
  ((i += RANDOM%5+2))
  # Of course, in real life, we'd be getting i from somewhere meaningful.
  sleep 1
done
echo

Of note here is the %3d in the printf format specifier. It's important to use a fixed-width field for displaying the numbers, especially if the numbers may count downward (first displaying 10 and then 9). Of course we're counting upwards here, but that may not always be the case in general. If a fixed-width field is not desired, then printing a bunch of spaces at the end may help remove any clutter from previous lines.
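For example, a few trailing spaces after the text will overwrite any leftovers from a longer previous line:

printf '\r%3d%% complete          ' "$i"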

If an actual "bar" is desired, rather than a number, then one may be drawn using ASCII characters:

bar="=================================================="
barlength=${#bar}
i=0
while ((i < 100)); do
  # Number of bar segments to draw.
  n=$((i*barlength / 100))
  printf "\r[%-${barlength}s]" "${bar:0:n}"
  ((i += RANDOM%5+2))
  # Of course, in real life, we'd be getting i from somewhere meaningful.
  sleep 1
done
echo

Naturally one may choose a bar of a different length, or one composed of a different set of characters; for example, you can have a colored progress bar:

files=(*)
width=${COLUMNS-$(tput cols)}
rev=$(tput rev)

n=${#files[*]}
i=0
printf "$(tput setab 0)%${width}s\r"
for f in "${files[@]}"; do
   # process "$f" in some way (for testing, "sleep 1")
   printf "$rev%$((width*++i/n))s\r" " "
done
tput sgr0
echo

Here's an example using the same interface as dialog --gauge but implementing the progress bar ourselves:

prog() {
    local max=$((${COLUMNS:-$(tput cols)} - 2)) in n i
    while read -r in; do
        n=$((max*in/100))
        printf '\r['
        for ((i=0; i<n; i++)); do printf =; done
        for ((; i<max; i++)); do printf ' '; done
        printf ']'
    done
}

mapfile -t lines    # read stdin as input
n=${#lines[@]}
i=0
for line in "${lines[@]}"; do
    echo "$((100*(++i)/n))"
    # process the line (use "sleep 1" or similar to test)
done | prog

45.1. When copying/moving files

You can't get a progress indicator with cp(1), but you can either:

  • build one yourself with tools such as pv or clpbar;

  • use some other tool, e.g. vcp.

You may want to use pv(1) since it's packaged for many systems. In that case, it's convenient if you create a function or script to wrap it.

For example:

pv "$1" > "$2/${1##*/}"

This lacks error checking and support for moving files.
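
A slightly fuller wrapper might look like this (a sketch only; the name pvcp is made up, and it still copies a single regular file at a time):

# Bash
# Usage: pvcp sourcefile targetdir
pvcp() {
    if [ "$#" -ne 2 ] || [ ! -f "$1" ] || [ ! -d "$2" ]; then
        printf >&2 'usage: pvcp sourcefile targetdir\n'
        return 1
    fi
    pv "$1" > "$2/${1##*/}"
}

# For a "move", remove the source only if the copy succeeded:
# pvcp bigfile /mnt/backup && rm -- bigfile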

You can also use rsync:

rsync -avx --progress --stats "$1" "$2"

Please note that the "total" number of files can change each time rsync enters a directory and finds more or fewer files than it expected, but it still provides more information than cp does. Rsync's progress output is most useful for big transfers of many small files.


CategoryShell

46. How can I ensure that only one instance of a script is running at a time (mutual exclusion, locking)?

We need some means of mutual exclusion. One way is to use a "lock": any number of processes can try to acquire the lock simultaneously, but only one of them will succeed.

How can we implement this using shell scripts? Some people suggest creating a lock file, and checking for its presence:

   1 # locking example -- WRONG
   2 lockfile=/tmp/myscript.lock
   3 if [ -f "$lockfile" ]
   4 then                      # lock is already held
   5     printf >&2 'cannot acquire lock, giving up: %s\n' "$lockfile"
   6     exit 0
   7 else                      # nobody owns the lock
   8     > "$lockfile"         # create the file
   9     #...continue script
  10 fi

This example does not work, because there is a RaceCondition: a time window between checking and creating the file, during which other programs may act. Assume two processes are running this code at the same time. Both check if the lockfile exists, and both get the result that it does not exist. Now both processes assume they have acquired the lock -- a disaster waiting to happen. We need an atomic check-and-create operation, and fortunately there is one: mkdir, the command to create a directory:

   1 # locking example -- CORRECT
   2 # Bourne
   3 lockdir=/tmp/myscript.lock
   4 if mkdir -- "$lockdir"
   5 then    # directory did not exist, but was created successfully
   6     printf >&2 'successfully acquired lock: %s\n' "$lockdir"
   7     # continue script
   8 else
   9     printf >&2 'cannot acquire lock, giving up on %s\n' "$lockdir"
  10     exit 0
  11 fi

Here, even when two processes call mkdir at the same time, only one process can succeed at most. This atomicity of check-and-create is ensured at the operating system kernel level.

Instead of using mkdir we could also have used the program to create a symbolic link, ln -s. A third possibility is to have the program delete a preexisting lock file with rm. The lock is released by recreating the file on exit.
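
For example, a symlink-based lock might be sketched like this (the symlink target is arbitrary; storing the PID there is just a hint for humans):

# Bourne
lockfile=/tmp/myscript.lock
if ln -s "pid=$$" "$lockfile" 2>/dev/null
then
    trap 'rm -f -- "$lockfile"' 0    # release the lock when the script exits
    # ... continue script
else
    printf >&2 'cannot acquire lock, giving up on %s\n' "$lockfile"
    exit 0
fi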

Note that we cannot use mkdir -p to automatically create missing path components: mkdir -p does not return an error if the directory exists already, but that's the feature we rely upon to ensure mutual exclusion.

Now let's spice up this example by automatically removing the lock when the script finishes:

   1 # POSIX (maybe Bourne?)
   2 lockdir=/tmp/myscript.lock
   3 if mkdir -- "$lockdir"
   4 then
   5     printf >&2 'successfully acquired lock\n'
   6 
   7     # Remove lockdir when the script finishes, or when it receives a signal
   8     trap 'rm -rf -- "$lockdir"' 0    # remove directory when script finishes
   9 
  10     # Optionally create temporary files in this directory, because
  11     # they will be removed automatically:
  12     tmpfile=$lockdir/filelist
  13 
  14 else
  15     printf >&2 'cannot acquire lock, giving up on %s\n' "$lockdir"
  16     exit 0
  17 fi

This example is much better. There is still the problem that a stale lock could remain when the script is terminated with a signal not caught (or signal 9, SIGKILL), or could be created by a user (either accidentally or maliciously), but it's a good step towards reliable mutual exclusion. Charles Duffy has contributed an example that may remedy the "stale lock" problem.
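
One common (though not bulletproof) remedy is to record the owner's PID inside the lock directory and let a new instance reclaim the lock if that process has disappeared. A sketch, which assumes a /proc filesystem and still leaves a small race between the liveness check and the removal:

# Bash -- sketch only, not a complete solution to the stale-lock problem
lockdir=/tmp/myscript.lock
while ! mkdir -- "$lockdir" 2>/dev/null; do
    otherpid=$(cat "$lockdir/pid" 2>/dev/null)
    if [[ $otherpid && ! -d /proc/$otherpid ]]; then
        rm -rf -- "$lockdir"     # previous owner is gone; reclaim and retry
    else
        printf >&2 'lock held by PID %s, giving up\n' "${otherpid:-unknown}"
        exit 0
    fi
done
echo "$$" > "$lockdir/pid"
trap 'rm -rf -- "$lockdir"' 0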

If you're using a GNU/Linux distribution, you can also get the benefit of using flock(1), which ties a FileDescriptor to a lock file. There are multiple ways to use it; one possibility to solve the multiple instance problem is:

   1 # Bash -- in POSIX, FDs >= 3 may not be inherited; doesn't work in ksh93
   2 exec 9>/path/to/lock/file
   3 if ! flock -n 9; then
   4     printf 'another instance is running\n';
   5     exit 1
   6 fi
   7 # this now runs under the lock until 9 is closed (it will be closed automatically when the script ends)

flock can also be used to protect only a part of your script, see the man page for more information.
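
For instance, a critical section can be protected by running it in a subshell with its own descriptor (a sketch; the lock file path is arbitrary):

# Bash + util-linux flock
(
    flock -n 9 || { printf >&2 'lock busy, skipping critical section\n'; exit 1; }
    # ... commands that must not run concurrently ...
) 9>/var/lock/myscript.lock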

46.1. Discussion

46.1.1. Alternative Solution

I believe using if (set -C; : >$lockfile); then ... is equally safe if not safer. The Bash source uses open(filename, flags|O_EXCL, mode); which should be atomic on almost all platforms (with the exception of some versions of NFS, where mkdir may not be atomic either). I haven't traced the path of the flags variable, which must contain O_CREAT, nor have I looked at any other shells. I wouldn't suggest using this until someone else can back up my claims. --Andy753421

  • Using set -C does not work with ksh88. Ksh88 does not use O_EXCL, when you set noclobber (-C). --jrw32982

    Are you sure mkdir has problems with being atomic on NFS? I thought that affected only open, but I'm not really sure. -- BeJonas 2008-07-24 01:22:59

46.1.2. Removal of locking mechanism

Shouldn't the example code blocks above include a rm "$lockfile" or rmdir "$lockdir" directly after the #...continue script line? - AnthonyGeoghegan

  • The lock can't be safely removed while the script is still doing its work -- that would allow another instance to run. The longer example includes a trap that removes the lock when the script exits.

46.1.3. flock file descriptor uniqueness

The example uses file descriptor 9 with flock, i.e.

exec 9>/path/to/lock/file

if ! flock -n 9 ...

Note that file descriptors are unique per-process. FDs 0, 1, and 2 are used for stdin, stdout, and stderr, so picking a generally high value is sufficient. (source: http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/fdescript.htm )

However, what if this file descriptor is already in use by a completely different process? Are we then locking on the file descriptor and not the lock file? How can we ensure we use something that is not already being used?
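
A descriptor number is only meaningful within your own process, so another process using FD 9 is not a conflict; the realistic concern is a collision inside your own script. One way to sidestep that (in Bash 4.1 or later) is to let the shell pick a free descriptor with the {varname}> form:

# Bash 4.1+
exec {lockfd}>/path/to/lock/file   # bash allocates an unused FD and stores its number in $lockfd
if ! flock -n "$lockfd"; then
    printf >&2 'another instance is running\n'
    exit 1
fi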

For more discussion on these issues, see ProcessManagement.


CategoryShell

47. I want to check to see whether a word is in a list (or an element is a member of a set).

If your real question is How do I check whether one of my parameters was -v? then see FAQ #35 instead. Otherwise, read on…

47.1. Associative arrays

All we need to do is create an entry for each item and look it up by index. In this example, we test whether the user input x is a member of the set a:

# Bash etc.

function get_input {
        [[ -t 0 ]] || return
        printf 'hm? '
        IFS= read -r${BASH_VERSION+\e} -- "$1"
}

set -- Bigfoot UFOs Republicans
typeset -A a
for x; do
        a+=([$x]=)
done

get_input x

if [[ -v a[$x] ]]; then
        printf '%s exists!\n' "$x"
else
        printf $'%s doesn\'t exist.\\n' "$x"
fi

47.2. Indexed arrays

Without associative arrays, we can search an indexed array for a string by looping over each element:

# Bash

typeset -a haystack
for x in "${haystack[@]}"; do
        [[ $x == "$needle" ]] && printf 'Found %q!\n' "$needle"
done
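
Wrapped up as a reusable function, a sketch might look like this (the name in_list is made up for the example):

# Bash
# Usage: in_list needle element [element ...]
in_list() {
    local needle=$1 x
    shift
    for x; do
        [[ $x == "$needle" ]] && return 0
    done
    return 1
}

if in_list "$needle" "${haystack[@]}"; then
    printf 'Found %s!\n' "$needle"
fi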

47.3. enum (ksh93)

In ksh93t or later, one may create enum types/variables/constants using the enum builtin. These work similarly to C enums (and the equivalent feature of other languages). These may be used to restrict which values may be assigned to a variable so as to avoid the need for an expensive test each time an array variable is set or referenced. Like types created using typeset -T, the result of an enum command is a new declaration command that can be used to instantiate objects of that type.

# ksh93

 $ enum colors=(red green blue)
 $ colors foo=green
 $ foo=yellow
ksh: foo:  invalid value yellow

typeset -a can also be used in combination with an enum type to allow enum constants as subscripts.

# ksh93

 $ typeset -a '[colors]' bar
 $ bar[blue]=test1
 $ typeset -p bar
typeset -a '[colors]' bar=([blue]=test1)
 $ bar[orange]=test
ksh: colors:  invalid value orange


CategoryShell

48. How can I redirect stderr to a pipe?

A pipe can only carry standard output (stdout) of a program. To pipe standard error (stderr) through it, you need to redirect stderr to the same destination as stdout. Optionally you can close stdout or redirect it to /dev/null to only get stderr. Some sample code:

# Bourne
# Assume 'myprog' is a program that writes to both stdout and stderr.

# version 1: redirect stderr to the pipe while stdout survives (both come
# mixed)
myprog 2>&1 | grep ...

# version 2: redirect stderr to the pipe without getting stdout (it's
# redirected to /dev/null)
myprog 2>&1 >/dev/null | grep ...

# same idea, this time storing stdout in a file
myprog 2>&1 >file | grep ...

Another simple example of redirecting stdout and stderr:

# Bourne
{ command | stdout_reader; } 2>&1 | stderr_reader

For further explanation of how redirections and pipes interact, see FAQ #55.

This has an obvious application with programs like dialog, which draws (using ncurses) windows onto the screen (stdout), and returns results on stderr. One way to deal with this would be to redirect stderr to a temporary file. But this is not necessary -- see FAQ #40 for examples of using dialog specifically!

In the examples above (as well as FAQ #40), we either discarded stdout altogether, or sent it to a known device (/dev/tty for the user's terminal). One may also pipe stderr only but keep stdout intact (without a priori knowledge of where the script's output is going). This is a bit trickier.

# Bourne
# Redirect stderr to a pipe, keeping stdout unaffected.

exec 3>&1                       # Save current "value" of stdout.
myprog 2>&1 >&3 | grep ...      # Send stdout to FD 3.
exec 3>&-                       # Now close it for the remainder of the script.

# Thanks to http://www.tldp.org/LDP/abs/html/io-redirection.html

The same can be done without exec:

# POSIX
$ myfunc () { echo "I'm stdout"; echo "I'm stderr" >&2; }
$ { myfunc 2>&1 1>&3 3>&- | cat  > stderr.file 3>&-; } 3>&1
I'm stdout
$ cat stderr.file
I'm stderr

FD 3 is closed (3>&-) so that the commands do not inherit it. Note that bash lets you duplicate and close in a single redirection: 1>&3-. You can see the difference on Linux by trying the following:

# Bash
{ bash <<< 'lsof -a -p $$ -d1,2,3'   ;} 3>&1
{ bash <<< 'lsof -a -p $$ -d1,2,3' 3>&-  ;} 3>&1

To show a dialog one-liner:

# Bourne
exec 3>&1
dialog --menu Title 0 0 0 FirstItem FirstDescription 2>&1 >&3 | sed 's/First/Only/'
exec 3>&-

This will have the dialog window working properly, yet it is the output of dialog (written to stderr) that gets altered by sed.

A similar effect can be achieved with ProcessSubstitution:

# Bash
perl -e 'print "stdout\n"; warn "stderr\n"' 2> >(tr '[:lower:]' '[:upper:]')

This will pipe standard error through the tr command.

See this redirection tutorial (with an example that redirects stdout to one pipe and stderr to another pipe).


CategoryShell

49. Eval command and security issues

The eval command is extremely powerful and extremely easy to abuse.

It causes your code to be parsed twice instead of once; this means that, for example, if your code has variable references in it, the shell's parser will evaluate the contents of that variable. If the variable contains a shell command, the shell might run that command, whether you wanted it to or not. This can lead to unexpected results, especially when variables can be read from untrusted sources (like users or user-created files).

49.1. Examples of bad use of eval

"eval" is a common misspelling of "evil".

One of the most common reasons people try to use eval is because they want to pass the name of a variable to a function. Consider:

# This code is evil and should never be used!
fifth() {
    _fifth_array=$1
    eval echo "\"The fifth element is \${$_fifth_array[4]}\""    # DANGER!
}
a=(zero one two three four five)
fifth a

This breaks if the user is allowed to pass arbitrary arguments to the function:

$ fifth 'x}"; date; #'
The fifth element is
Thu Mar 27 16:13:47 EDT 2014

We've just allowed arbitrary code execution. Bash 4.3 introduced name references to try to solve this problem, but unfortunately they don't solve it! We'll discuss those in depth later.

Now let's consider a more complicated example. The section of this FAQ dealing with spaces in file names used to include the following "helpful tool (which is probably not as safe as the \0 technique)".

Syntax : nasty_find_all <path> <command> [maxdepth]

# This code is evil and must never be used!
export IFS=" "
[ -z "$3" ] && set -- "$1" "$2" 1
FILES=`find "$1" -maxdepth "$3" -type f -printf "\"%p\" "`
# warning, BAD code
eval FILES=($FILES)
for ((I=0; I < ${#FILES[@]}; I++))
do
    eval "$2 \"${FILES[I]}\""
done
unset IFS

This script was supposed to recursively search for files and run a user-specified command on them, even if they had newlines and/or spaces in their names. The author thought that find -print0 | xargs -0 was unsuitable for some purposes such as multiple commands. It was followed by an instructional description of all the lines involved, which we'll skip.

To its defense, it worked:

$ ls -lR
.:
total 8
drwxr-xr-x  2 vidar users 4096 Nov 12 21:51 dir with spaces
-rwxr-xr-x  1 vidar users  248 Nov 12 21:50 nasty_find_all

./dir with spaces:
total 0
-rw-r--r--  1 vidar users 0 Nov 12 21:51 file?with newlines
$ ./nasty_find_all . echo 3
./nasty_find_all
./dir with spaces/file
with newlines
$

But consider this:

$ touch "\"); ls -l $'\x2F'; #"

You just created a file called  "); ls -l $'\x2F'; #

Now FILES will contain  ""); ls -l $'\x2F'; #. When we do eval FILES=($FILES), it becomes

FILES=(""); ls -l $'\x2F'; #"

Which becomes the two statements  FILES=("");  and  ls -l / . Congratulations, you just allowed execution of arbitrary commands.

$ touch "\"); ls -l $'\x2F'; #"
$ ./nasty_find_all . echo 3
total 1052
-rw-r--r--   1 root root 1018530 Apr  6  2005 System.map
drwxr-xr-x   2 root root    4096 Oct 26 22:05 bin
drwxr-xr-x   3 root root    4096 Oct 26 22:05 boot
drwxr-xr-x  17 root root   29500 Nov 12 20:52 dev
drwxr-xr-x  68 root root    4096 Nov 12 20:54 etc
drwxr-xr-x   9 root root    4096 Oct  5 11:37 home
drwxr-xr-x  10 root root    4096 Oct 26 22:05 lib
drwxr-xr-x   2 root root    4096 Nov  4 00:14 lost+found
drwxr-xr-x   6 root root    4096 Nov  4 18:22 mnt
drwxr-xr-x  11 root root    4096 Oct 26 22:05 opt
dr-xr-xr-x  82 root root       0 Nov  4 00:41 proc
drwx------  26 root root    4096 Oct 26 22:05 root
drwxr-xr-x   2 root root    4096 Nov  4 00:34 sbin
drwxr-xr-x   9 root root       0 Nov  4 00:41 sys
drwxrwxrwt   8 root root    4096 Nov 12 21:55 tmp
drwxr-xr-x  15 root root    4096 Oct 26 22:05 usr
drwxr-xr-x  13 root root    4096 Oct 26 22:05 var
./nasty_find_all
./dir with spaces/file
with newlines
./
$

It doesn't take much imagination to replace  ls -l  with  rm -rf  or worse.

One might think these circumstances are obscure, but one should not be tricked by this. All it takes is one malicious user, or perhaps more likely, a benign user who left the terminal unlocked when going to the bathroom, or wrote a funny PHP uploading script that doesn't sanity check file names, or who made the same mistake as oneself in allowing arbitrary code execution (now instead of being limited to the www-user, an attacker can use nasty_find_all to traverse chroot jails and/or gain additional privileges), or uses an IRC or IM client that's too liberal in the filenames it accepts for file transfers or conversation logs, etc.

49.2. The problem with bash's name references

Bash 4.3 introduced declare -n ("name references") to mimic Korn shell's nameref feature, which permits variables to hold references to other variables (see FAQ 006 to see these in action). Unfortunately, the implementation used in Bash has some issues.

First, Bash's declare -n doesn't actually avoid the name collision issue:

$ foo() { declare -n v=$1; }
$ bar() { declare -n v=$1; foo v; }
$ bar v
bash: warning: v: circular name reference

In other words, there is no safe name we can give to the name reference. If the caller's variable happens to have the same name, we're screwed. Well, not completely screwed, but you need to use a trick one cannot help but think should not be necessary. You can avoid the circularity by using declare only if the names do not clash (if they do clash, then v, here, is simply a direct reference to caller's v):

$ foo() { if [[ $1 != v ]]; then declare -n v=$1; fi; echo $v; }
$ bar() { if [[ $1 != v ]]; then declare -n v=$1; fi; foo v; }
$ v="xzy"
$ bar v
xyz

Second, Bash's name reference implementation still allows arbitrary code execution:

$ foo() { declare -n var=$1; echo "$var"; }
$ foo 'x[i=$(date)]'
bash: i=Thu Mar 27 16:34:09 EDT 2014: syntax error in expression (error token is "Mar 27 16:34:09 EDT 2014")

It's not an elegant example, but you can clearly see that the date command was actually executed. This is not at all what one wants.

Now, despite these shortcomings, the declare -n feature is a step in the right direction. But you must be careful to select a name that the caller won't use (which means you need some control over the caller, if only to say "don't use variables that begin with _my_pkg" -- or unless you use the conditional workaround), and you must reject unsafe inputs.

49.3. Examples of good use of eval

The most common correct use of eval is reading variables from the output of a program which is specifically designed to be used this way. For example,

# On older systems, one must run this after resizing a window:
eval "`resize`"

# Less primitive: get a passphrase for an SSH private key.
# This is typically executed from a .xsession or .profile type of file.
# The variables produced by ssh-agent will be exported to all the processes in
# the user's session, so that an eventual ssh will inherit them.
eval "`ssh-agent -s`"

eval has other uses especially when creating variables out of the blue (indirect variable references). Here is an example of one way to parse command line options that do not take parameters:

# POSIX
#
# Create option variables dynamically. Try call:
#
#    sh -x example.sh --verbose --test --debug

for i; do
    case $i in
       --test|--verbose|--debug)
            shift                   # Remove option from command line
            name=${i#--}            # Delete option prefix
            eval "$name=\$name"     # make *new* variable
            ;;
    esac
done

echo "verbose: $verbose"
echo "test: $test"
echo "debug: $debug"

So, why is this version acceptable? It's acceptable because we have restricted the eval command so that it will only be executed when the input is one of a finite set of known values. Therefore, it can't ever be abused by the user to cause arbitrary command execution -- any input with funny stuff in it wouldn't match one of the three predetermined possible inputs.

Note that this is still frowned upon: it is a slippery slope, and later maintenance can easily turn this code into something dangerous. E.g., you want to add a feature that allows a bunch of different --test-xyz options to be passed. You change --test to --test-*, without going through the trouble of checking the implementation of the rest of the script. You test your use case and it all works. Unfortunately, you've just introduced arbitrary command execution:

$ ./foo --test-'; ls -l /etc/passwd;x='
-rw-r--r-- 1 root root 943 2007-03-28 12:03 /etc/passwd

Once again: by permitting the eval command to be used on unfiltered user input, we've permitted arbitrary command execution.

AVOID PASSING DATA TO EVAL AT ALL COSTS, even if your code seems to handle all the edge cases today.

If you have thought really hard and asked #bash for an alternative way but there isn't any, skip ahead to "Robust eval usage".

49.4. The problem with declare

Could this not be done better with declare?

for i in "$@"
do
    case $i in
        --test|--verbose|--debug)
            shift                   # Remove option from command line
            name=${i#--}            # Delete option prefix
            declare $name=Yes       # set default value
            ;;
        --test=*|--verbose=*|--debug=*)
            shift
            name=${i#--}
            value=${name#*=}        # value is whatever's after first word and =
            name=${name%%=*}        # restrict name to first word only (even if there's another = in the value)
            declare $name="$value"  # make *new* variable
            ;;
    esac
done

Note that --name (to take the default value) and --name=value are the two accepted formats.

declare does work better for some inputs:

griffon:~$ name='foo=x;date;x'
griffon:~$ declare $name=Yes
griffon:~$ echo $foo
x;date;x=Yes

But it can still cause arbitrary code execution with array variables:

attoparsec:~$ echo $BASH_VERSION
4.2.24(1)-release
attoparsec:~$ danger='( $(printf "%s!\n" DANGER >&2) )'
attoparsec:~$ declare safe=${danger}
attoparsec:~$ declare -a unsafe
attoparsec:~$ declare unsafe=${danger}
DANGER!

49.5. Robust eval usage

Almost always (at least 99% or more of the time in Bash, but also in more minimal shells), the correct way to use eval is to produce abstractions hidden behind functions used in library code. This allows the function to:

  • present a well-defined interface to the function's caller that specifies which inputs must be strictly controlled by the programmer, and which may be unpredictable, such as side-effects influenced by user input. It's important to document which options and arguments are unsafe if left uncontrolled.
  • perform input validation on certain kinds of inputs where it's feasible to do so, such as integers -- where it's easy to bail out and return an error status which can be handled by the function caller.
  • create abstractions that hide ugly implementation details involving eval.

Generally, eval is correct when at least all of the following are satisfied:

  • All possible arguments to eval are guaranteed not to produce harmful side-effects or result in execution of arbitrary code under any circumstance. The inputs are statically coded, free from interaction with uncontrolled dynamic code, and/or validated thoroughly. This is why functions are important, because YOU don't necessarily have to make that guarantee yourself. So long as your function documents what inputs can be dangerous, you can delegate that task to the function's caller.

  • The eval usage presents a clean interface to the user or programmer.

  • The eval makes possible what would otherwise require far larger, slower, more complex, dangerous, ugly, and less useful code.

If for some reason you still need to dynamically build bash code and evaluate it, make certain you take these precautions:

  1. Always quote the eval expression: eval 'a=b'

  2. Always single-quote code and expand your data into it using printf's %q: eval "$(printf 'myvar=%q' "$value")"

  3. Do NOT use dynamic variable names. Even with careful %q usage, this can be exploited.

Why take heed? Here's how your scripts can be exploited if they fail to take the above advice:

  • If you don't single-quote your code, you run the risk of expanding data into it that isn't %q'ed. Which means free executable reign for that data:

  • name='Bob; echo I am arbitrary code'; eval "user=$name"
  • Even if you %q input data before treating it as a variable name, illegal variable names in assignments cause bash to search PATH for a command:

  • echo 'echo I am arbitrary code' > /usr/local/bin/a[1]=b; chmod +x /usr/local/bin/a[1]=b; var='a[1]' value=b; eval "$(printf '%q=%q' "$var" "$value")"
  • For a list of ways to reference or to populate variables indirectly without using eval, please see BashFAQ/006.

  • For a list of ways to reference or to populate variables indirectly with eval, please see BashFAQ/006#eval.

  • More examples


CategoryShell

50. How can I view periodic updates/appends to a file? (ex: growing log file)

tail -f will show you the growing log file. On some systems (e.g. OpenBSD), this will automatically track a rotated log file to the new file with the same name (which is usually what you want). To get the equivalent functionality on GNU systems, use tail -F instead.

This is helpful if you need to view only the updates to the file after your last view.

# Start by setting n=1
   tail -n $n testfile; n="+$(( $(wc -l < testfile) + 1 ))"

Every invocation of this gives the update to the file from where we stopped last. If you know the line number from where you want to start, set n to that.
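
Wrapped into a small function it might look like this (a sketch; it keeps its place in a shell variable, so it only works within one shell session):

# Bash
n=1          # or n=+1 to see the whole file on the first call
show_new() {
    tail -n "$n" "$1"
    n="+$(( $(wc -l < "$1") + 1 ))"
}

# call it repeatedly:
# show_new testfile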


CategoryShell

51. I'm trying to put a command in a variable, but the complex cases always fail!

Variables hold data. Functions hold code. Don't put code inside variables! There are many situations in which people try to shove commands, or command arguments, into variables and then run them. Each case needs to be handled separately.

For the simple case in bash, you can use an array to store arguments to pass to a command (similar to constructing a command using args only known at runtime, below):

args=(-s "$subject" --flag "arg with spaces")
mail "${args[@]}"

51.1. Things that do not work

Some people attempt to do things like this:

# Example of BROKEN code, DON'T USE THIS.
args=$address1
if [[ $subject ]]; then
    args+=" -s $subject"
fi
mail $args < "$body"

Adding quotes won't help, either:

# Example of BROKEN code, DON'T USE THIS.
args="$address1 $address2"
if [[ $subject ]]; then args+=" -s '$subject'"; fi
mail $args < "$body"

This fails because of WordSplitting and because the single quotes inside the variable are literal, not syntactical. If $subject contains internal whitespace, it will be split at those points. The mail command will receive -s as one argument, then the first word of the subject (with a literal ' in front of it) as the next argument, and so on.

Read Arguments to get a better understanding of how the shell figures out what the arguments in your statement are.

Here's another thing that won't work:

# BROKEN code.  Do not use!
redirs=">/dev/null 2>&1"
if ((debug)); then redirs=; fi
some command $redirs

Here's yet another thing that won't work:

# BROKEN code.  Do not use!
runcmd() { if ((debug)); then echo "$@"; fi; "$@"; }

The runcmd function can only handle simple commands. It can't handle redirections, pipelines, for/while loops, if statements, etc.

Now let's look at how we can perform some of these tasks.

51.2. I'm trying to save a command so I can run it later without having to repeat it each time

Just use a function:

pingMe() {
    ping -q -c1 -- "$HOSTNAME"
}

[...]
if pingMe; then ..

51.3. I only want to pass options if the runtime data needs them

You can use the ${var:+..} parameter expansion for this:

ping -q ${count:+"-c$count"} -- "$HOSTNAME"

Now the -c option (with its "$count" argument) is only added to the command when $count is not empty. Notice the quoting: No quotes around ${var:+...} but quotes on expansions INSIDE!

This would also work well for our mail example:

addresses=("$address1" "$address2")
mail ${subject:+"-s$subject"} -- "${addresses[@]}" < body

If you want to pass an option to a program only if a variable is set, you cannot simply use ${var+"-o$var"} since, if $var is set but empty, the following argument will be interpreted as the argument for the -o option instead of $var. You need to pass the option and the variable as two separate arguments in that case, e.g. using the following technique.

IFS= read -r ${delimiter+-d "$delimiter"} variable

This technique can also be used to deal with programs that do not support -oargument or --option=argument style options, and require you to pass options and their arguments as separate arguments (-o argument, --option argument).

find . ${filter:+-name "$filter"} -type f

query=bash
curl -F action=fullsearch -F fullsearch=text \
    --form-string value="$query" \
    ${context+--form-string context="$context"} \
    https://mywiki.wooledge.org

There is one important caveat with this construct: the unquoted pieces of the expansion (e.g. '-d ' in the read example) are subject to word-splitting using IFS. Everything should work as you expect so long as IFS is set to its default value (or something sane). If IFS has been changed, see the gory details here.

51.4. I want to generalize a task, in case the low-level tool changes later

Again, variables hold data; functions hold code.

In the mail example, we've got hard-coded dependence on the syntax of the Unix mail command. The version in the previous section is an improvement over the original broken code, but what if the internal company mail system changes? Having several calls to mail scattered throughout the script complicates matters in this situation.

What you probably should be doing, paying very close attention at how to quote your expansions, is this:

# Bash 3.1 / ksh93

# Send an email to someone.
# Reads the body of the mail from standard input.
#
# sendto subject address [address ...]
#
sendto() {
    # Used to be standard mail, but the HR department
    # said we have to use this crazy proprietary tool instead....
    # mailx -s "$@"

    local subject=$1
    shift
    local addr addrs=()
    for addr; do addrs+=(--recipient="$addr"); done
    MailTool --subject="$subject" "${addrs[@]}"
}

sendto "The Subject" "$address" <"$bodyfile"

The original implementation uses mailx(1), a standard Unix command. Later, this is commented out and replaced by something called MailTool, which was made up on the spot for this example. But it should serve to illustrate the concept: the function's invocation is unchanged, even though the back-end tool changes.

Note: the sendto function above could also be implemented with the ${var/} parameter expansion:

# Bash 3.1 / ksh93
sendto() {
    local subject=$1
    shift
    MailTool --subject="$subject" "${@/#/--recipient=}"
}

51.5. I'm constructing a command based on information that is only known at run time

The root of the issue described above is that you need a way to maintain each argument as a separate word, even if that argument contains spaces. Quotes won't do it, but an array will. (We saw a bit of this in the previous section, where we constructed the addrs array on the fly.)

If you need to create a command dynamically, put each argument in a separate element of an array. A shell with arrays (like Bash) makes this much easier.

# Bash 3.1 / ksh93
args=("my arguments" "$go" here)
if ((foo)); then args+=(--foo); fi    # and so on
somecommand "${args[@]}"

POSIX sh has no arrays, so the closest you can come is to build up a list of elements in the positional parameters. Here's a POSIX sh version of the sendto function from the previous section:

# POSIX sh
# Usage: sendto subject address [address ...]
sendto() {
    subject=$1
    shift
    first=1
    for addr; do
        if [ "$first" = 1 ]; then set --; first=0; fi
        set -- "$@" --recipient="$addr"
    done
    if [ "$first" = 1 ]; then
        echo "usage: sendto subject address [address ...]"
        return 1
    fi
    MailTool --subject="$subject" "$@"
}

Note that we overwrite the positional parameters inside a loop that is iterating over the previous set of positional parameters (because we can't make a second array, not even to hold a copy of the original parameters). This appears to work in at least 3 different /bin/sh implementations (tested in Debian's dash, HP-UX's sh and OpenBSD's sh).

Another example of this is using dialog to construct a menu on the fly. The dialog command can't be hard-coded, because its parameters are supplied based on data only available at run time (e.g. the number of menu entries). For an example of how to do this properly, see FAQ #40.

It's worth noting that you cannot put anything other than a list of arguments into an array variable when using the "${array[@]}" technique to evaluate a command. Pipelines, redirection, assignments, and any other shell keywords or syntax will not be evaluated correctly.

In bash, the only ways to generate, manipulate, or store code more complex than a simple command at runtime involve storing the code's plain text in a variable, file, stream, or function, and then using eval or sh to evaluate the stored code. Directly manipulating raw code strings is among the least robust of metaprogramming techniques and most common sources of bugs and security issues. That's because predicting all possible ways code might come together to form a valid construct and restricting it to never operate outside of what's expected requires great care and detailed knowledge of language quirks. Bash lacks all the usual kinds of abstractions that allow doing this safely. Excessive use can also obfuscate your code.

51.6. I want a log of my script's actions

Another reason people attempt to stuff commands into variables is because they want their script to print each command before it runs it. If that's all you want, then simply use the set -x command, or invoke your script with #!/bin/bash -x or bash -x ./myscript.

if ((DEBUG)); then set -x; fi
mysql -u me -p somedbname < file
...

Note that you can turn it off and back on inside the script with set +x and set -x.

Some people get into trouble because they want to have their script print their commands including redirections. set -x shows the command without redirections. People try to work around this by doing things like:

# Non-working example
command="mysql -u me -p somedbname < file"
((DEBUG)) && echo "$command"
"$command"

(This is so common that I include it here explicitly.)

Once again, this does not work. You can't make it work. Even the array trick won't work here.

One way to log the whole command, without resorting to the use of eval or sh (don't do that!), is the DEBUG trap. A practical code example:

trap 'printf %s\\n "$BASH_COMMAND" >&2' DEBUG

Assuming you're logging to standard error.

Note that redirect representation by BASH_COMMAND may still be affected by this bug.

If you STILL think you need to write out every command you're about to run before you run it, AND that you must include all redirections, AND you can't use a DEBUG trap, then just do this:

# Working example
echo "mysql -u me -p somedbname < file"
mysql -u me -p somedbname < file

Don't use a variable at all. Just copy and paste the command, wrap an extra layer of quotes around it (can be tricky -- that's why we do not recommend trying to use eval here), and stick an echo in front of it.

However, consider that echoing your commands verbatim is really ugly. Why are you doing this? Are you debugging the script? If so, how is the output of set -x insufficient? All you have to do is find the bug and fix it. Surely you won't leave this debugging code in place once the bug has been fixed.

If you intend to create a log of your script's actions, every time it is run, for accountability or other reasons, then that log should be human-readable. In that case, don't just echo your commands (especially if you have to bend over backwards to do so)! Write out meaningful (possibly even date-stamped) lines describing what you're doing.

echo "Populating database table"
mysql -u me -p somedbname < file


CategoryShell

52. I want history-search just like in tcsh. How can I bind it to the up and down keys?

Just add the following to /etc/inputrc or your ~/.inputrc:

"\e[A":history-search-backward
"\e[B":history-search-forward

Then restart bash (either by logging out and back in, or by running exec bash).

Readline (the part of bash that handles terminal input) doesn't understand key names such as "up arrow". Instead, you must manually discern the escape sequence that the key sends on your particular terminal (usually by pressing Ctrl-V and then the key in question), and insert it into the .inputrc as shown above. \e denotes the Escape character in readline. The Ctrl-V trick shows Escape as ^[. You must recognize that the leading ^[ is an Escape character, and make the substitution yourself.
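
If you just want to try the bindings in your current shell session (or prefer to keep them in ~/.bashrc), you can also set them with the bind builtin, using the same syntax:

# Bash
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'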

53. How do I convert a file from DOS format to UNIX format (remove CRs from CR-LF line terminators)?

Carriage return (CR) characters are used in line ending markers on some systems. There are three different kinds of line endings in common use:

  • Unix systems use line feed (LF) characters only.
  • MS-DOS and Windows systems use CR-LF pairs.
  • Old Macintosh systems use CRs only.

If you're running a script on a Unix system, the line endings need to be Unix ones (LFs only), or you will have problems.

53.1. Testing for line terminator type

A simple check is to look at the output of sed -n l:

sed -n l yourscript

This should output the script in one of these formats:

LF (Unix):

    command$
    $
    another command$

CR-LF (DOS/Windows):

    command\r$
    \r$
    another command\r$

CR (Old Mac OS):

    command\r\ranother command\r$

Another method is to guess at the file type using the file utility, if available:

file yourscript

On GNU/Linux, the output of file tells you whether the ASCII text has some CR. On other operating systems, the output is unpredictable, except that it should contain the word "text" somewhere if the input "kind of looks like a text file of some sort, maybe".

imadev:~$ printf 'DOS\r\nline endings\r\n' > foo
imadev:~$ file foo
foo:            commands text
arc3:~$ file foo
foo: ASCII text, with CRLF line terminators

In a script, it's more difficult to say what the most reliable method should be. Anything you do is going to be a heuristic. In theory, a non-corrupt file created by a non-broken UNIX utility should only contain LFs, and one created by a DOS utility should not contain any LFs that are not preceded by a CR.

   1 # Bash / Ksh / Zsh
   2 
   3 if grep -qv $'\r$' File; then
   4     echo 'File contains at least one newline not preceded by a CR'
   5 else
   6     echo 'File contains only CRLFs (or is empty)'
   7 fi

53.2. Converting files

ex is a good standard way to convert CRLF to LF, and probably one of the few reasonable methods for doing it in-place from a script:

# works with vim's ex but not vi's ex
ex -sc $'%s/\r$//e|x' file

# works with vi's ex but not vim's ex
ex -sc $'%s/\r$//|x' file

# Using ed.
ed -s file <<< $'%s/\r$//g\nwq'

Of course, more powerful dynamic languages can do this with relative ease.

perl -pi -e 's/\r\n/\n/' filename

Some systems have special conversion tools available to do this automatically. dos2unix, recode, and fromdos are some examples.

It can also be done manually with an editor like nano:

nano -w yourscript

Type Ctrl-O and before confirming, type Alt-D (DOS) or Alt-M (Mac) to change the format.

Or in Vim, use :set fileformat=unix and save with :w. Ensure the value of fenc is correct (probably "utf-8").

To simply strip all CRs from some input stream, you can use tr -d '\r' <infile >outfile. Of course, you must ensure that infile and outfile are not the same file.

54. I have a fancy prompt with colors, and now bash doesn't seem to know how wide my terminal is. Lines wrap around incorrectly.

54.1. Escape the colors with \[ \]

You must put \[ and \] around any non-printing escape sequences in your prompt. Thus:

   1 fancy_prompt() {
   2   local blue=$(tput setaf 4)
   3   local purple=$(tput setaf 5)
   4   local reset=$(tput sgr0)
   5   PS1="\\[$blue\\]\\h:\\[$purple\\]\\w\\[$reset\\]\\\$ "
   6 }

Without the \[ \], bash will think the bytes which constitute the escape sequences for the color codes actually take up space on the screen, so bash won't be able to tell where the cursor really is.

54.2. Escape the colors with \001 \002 (dynamic prompt or read -p)

The \[ \] are only special when you assign PS1; if you print them from inside a function that runs when the prompt is displayed, they don't work. In this case you need to use the bytes \001 and \002:

   1 # this function runs when the prompt is displayed
   2 active_prompt () {
   3   local blue=$(tput setaf 4)
   4   local reset=$(tput sgr0)
   5   printf '\001%s\002%s\001%s\002' "$blue" "$PWD" "$reset"
   6 }
   7 
   8 PS1='$(active_prompt)\$ '

If you want to use colors in the "read -p" prompt, the wrapping problem also occurs and you cannot use \[ \], you must also use \001 \002 instead:

   1 blue=$(tput setaf 4)
   2 reset=$(tput sgr0)
   3 IFS= read -rp $'\001'"$blue"$'\002''what is your favorite color?'$'\001'"$reset"$'\002' answer

If you still have problems, e.g. when going through your command history with the Up/Down arrows, make sure you have the checkwinsize option set:

   1 shopt -s checkwinsize

Refer to the Wikipedia article for ANSI escape codes.

More generally, you should avoid writing terminal escape sequences directly in your prompt, because they are not necessarily portable across all the terminals you will use, now or in the future. Use tput to generate the correct sequences for your terminal (it will look things up in your terminfo or termcap database).

Since tput is an external command, you want to run it as few times as possible, which is why we suggest storing its results in variables, and using those to construct your prompt (rather than putting $(tput ...) in PS1 directly, which would execute tput every time the prompt is displayed). The code that constructs a prompt this way is much easier to read than the prompt itself, and it should work across a wide variety of terminals. (Some terminals may not have the features you are trying to use, such as colors, so the results will never be 100% portable in the complex cases. But you can get close.)


  • Personal note: I still prefer this answer:

       1 BLUE=$(tput setaf 4)
       2 PURPLE=$(tput setaf 5)
       3 RESET=$(tput sgr0)
       4 PS1='\[$BLUE\]\h:\[$PURPLE\]\w\[$RESET\]\$ '
    

    I understand that people like to avoid polluting the variable namespace; hence the function and the local part, which in turn forces the use of double quotes, which in turn forces the need to double up some but not all backslashes (and to know which ones -- oy!). I find that unnecessarily complicated. Granted, there's a tiny risk of collision if someone overrides BLUE or whatever, but on the other hand, the double-quote solution also carries the risk that a terminal will have backslashes in its escape sequences. And since the contents of the escape sequences are being parsed in the double-quote solution, but not in the single-quote solution, such a terminal could mess things up. Example of the difference:

       1  imadev:~$ FOO='\w'; PS1='$FOO\$ '
       2  \w$ FOO='\w'; PS1="$FOO\\$ "
       3  ~$ 
    

    Suppose our terminal uses \w in an escape sequence. A \w inside a variable that's referenced in a single-quoted PS1 is only expanded out to a literal \w when the prompt is printed, which is what we want. But in the double-quoted version, the \w is placed directly into the PS1 variable, and gets evaluated by bash when the prompt is printed. Now, I don't actually know of any terminals that use this notation -- it's entirely a theoretical objection. But then again, so is the objection to the use of variables like BLUE. And some people might actually want to echo "$BLUE" in their shells anyway. So, I'm not going to say the single-quote answer is better, but I'd like to see it retained here as an alternative. -- GreyCat

    • Fair enough. I initially just intended to change the BLACK= to a RESET= (since not everyone uses white on black), but then I thought it would be better if the prompt did not depend on variables being available. I obviously was not aware about the possibility of such terminal escape sequences, so I think mentioning the single-quote version first would be a better idea and also mention what happens if those vars change.

      I guess one could also make the variables readonly to prevent accidentally changing them and mess up the prompt, though that'll probably have other drawbacks..? -- ~~~

55. How can I tell whether a variable contains a valid number?

First, you have to define what you mean by "number". The most common case when people ask this seems to be "a non-negative integer, with no leading + sign". Or in other words, a string of all digits. Other times, people want to validate a floating-point input, with optional sign and optional decimal point.

55.1. Hand parsing

If you're validating a simple "string of digits", you can do it with a glob:

   1 # Bash / Ksh
   2 if [[ -n $foo && $foo != *[!0123456789]* ]]; then
   3     printf '"%s" is strictly numeric\n' "$foo"
   4 else
   5     printf '"%s" has a non-digit somewhere in it or is empty\n' "$foo"
   6 fi >&2

Avoid [0-9] or [[:digit:]] which in some locales and some systems can match characters other than 0123456789.

The same thing can be done in POSIX shells as well, using case:

   1 # POSIX
   2 case $var in
   3     '')
   4         printf 'var is empty\n';;
   5     *[!0123456789]*)
   6         printf '%s has a non-digit somewhere in it\n' "$var";;
   7     *)
   8         printf '%s is strictly numeric\n' "$var";;
   9 esac >&2

Of course, if all you care about is valid vs. invalid, you can combine cases:

   1 # POSIX
   2 case $var in
   3     '' | *[!0123456789]*)
   4         printf '%s\n' "$0: $var: invalid digit" >&2; exit 1;;
   5 esac

If you need to allow a leading negative sign, or if want a valid floating-point number or something else more complex, then there are a few possible ways. Standard globs aren't expressive enough to do this, but you can trim off any sign and then compare:

   1 # POSIX
   2 case ${var#[-+]} in   # notice ${var#prefix} substitution to trim sign
   3     '')
   4         printf 'var is empty\n';;
   5     .)
   6         printf 'var is just a dot\n';;
   7     *.*.*)
   8         printf '"%s" has more than one decimal point in it\n' "$var";;
   9     *[!0123456789.]*)
  10         printf '"%s" has a non-digit somewhere in it\n' "$var";;
  11     *)
  12         printf '"%s" looks like a valid float\n' "$var";;
  13 esac >&2

Or in Bash, we can use extended globs:

   1 # Bash -- extended globs must be enabled explicitly in versions prior to 4.1.
   2 # Check whether the variable is all digits.
   3 shopt -s extglob
   4 [[ $var = +([0123456789]) ]]

A more complex case:

   1 # Bash / ksh
   2 shopt -s extglob # not necessary in ksh and bash 4.1 or newer
   3 
   4 if [[ $foo = @(*[0123456789]*|!([+-]|)) && $foo = ?([+-])*([0123456789])?(.*([0123456789])) ]]; then
   5   echo 'foo is a floating-point number'
   6 fi

Alternatively, case..esac may be used in shells with extended pattern matching. The leading test of $foo is to ensure that it contains at least one digit, isn't empty, and contains more than just + or - by itself.

If your definition of "a valid number" is even more complex, or if you need a solution that works in legacy Bourne shells, you might prefer to use an external tool's regular expression syntax. Here is a portable version (explained in detail here), using awk (not egrep which is line-based so would be tricked by variables that contain newline characters):

   1 # Bourne
   2 
   3 if awk -- 'BEGIN {exit !(ARGV[1] ~ /^[-+]?([0123456789]+\.?|[0123456789]*\.[0123456789]+)$/)}' "$foo"; then
   4     printf '"%s" is a number\n' "$foo"
   5 else
   6     printf '"%s" is not a number\n' "$foo"
   7 fi

Bash version 3 and above have regular expression support in the [[...]] construct.

   1 # Bash
   2 # The regexp must be stored in a var and expanded for backward compatibility with versions < 3.2
   3 
   4 regexp='^[-+]?[0123456789]*(\.[0123456789]*)?$'
   5 if [[ $foo = *[0123456789]* && $foo =~ $regexp ]]; then
   6     printf '"%s" looks rather like a number\n' "$foo"
   7 else
   8     printf '"%s" doesn'\''t look particularly numeric to me.\n' "$foo"
   9 fi

55.2. Using the parsing done by [ and printf (or "using eq")

   1 # fails with ksh
   2 if [ "$foo" -eq "$foo" ] 2>/dev/null; then
   3     printf '"%s" is an integer\n' "$foo"
   4 fi

[ parses the variable and interprets it as a decimal integer because of the -eq. If the parsing succeeds, the test is trivially true; if it fails, [ prints an error message (which 2>/dev/null hides) and sets a status different from 0. However, this method fails if the shell is ksh, because ksh evaluates the variable as an arithmetic expression (and that would constitute an arbitrary command injection vulnerability).

Be careful: the following printf trick is not supported by all shells, the list of supported float representations varies with the shell, and in ksh or zsh it carries the same command injection vulnerability:

   1 if printf %f "$foo" >/dev/null 2>&1; then
   2     printf '"%s" is a float\n' "$foo"
   3 fi

Moreover, the test itself is broken: for the arguments of the a, A, e, E, f, F, g, or G conversions, POSIX specifies that if the leading character is a single-quote or double-quote, the value shall be the numeric value in the underlying codeset of the character following the single-quote or double-quote. Hence this fails when foo expands to a string with a leading single-quote or double-quote: the command above will happily validate the string as a float. It also returns 0 when foo expands to a number with a leading 0x, which is a valid number in a shell script but may not work elsewhere.

You can use %d to parse an integer. Take care that the parsing might be (is supposed to be?) locale-dependent.
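
A sketch of that approach (the leading single-quote/double-quote caveat described above applies to %d just as it does to %f, and the ksh/zsh warning applies too):

# Bash -- same caveats as the %f example above
if printf '%d' "$foo" >/dev/null 2>&1; then
    printf '"%s" parses as an integer\n' "$foo"
fi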

56. Tell me all about 2>&1 -- what's the difference between 2>&1 >foo and >foo 2>&1, and when do I use which?

Bash processes all redirections from left to right, in order. And the order is significant. Moving them around within a command may change the results of that command.

If all you want is to send both standard output and standard error to the same file, use this:

# Bourne
foo >file 2>&1          # Sends both stdout and stderr to file.

Here's a simple demonstration of what's happening:

# POSIX
foo() {
  echo "This is stdout"
  echo "This is stderr" 1>&2
}
foo >/dev/null 2>&1             # produces no output
foo 2>&1 >/dev/null             # writes "This is stderr" on the screen

Why do the results differ? In the first case, >/dev/null is performed first, and therefore the standard output of the command is sent to /dev/null. Then, the 2>&1 is performed, which causes standard error to be sent to the same place that standard output is already going. So both of them are discarded.

In the second example, 2>&1 is performed first. This means standard error is sent to wherever standard output happens to be going -- in this case, the user's terminal. Then, standard output is sent to /dev/null and is therefore discarded. So when we run foo the second time, we see only its standard error, not its standard output.

The redirection chapter in the guide explains why we use a duplicate file descriptor rather than opening /dev/null twice. In the specific case of /dev/null it doesn't actually matter because all writes are discarded, but when we write to a log file, it matters very much indeed.
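
To make the log-file case concrete, here is a hedged comparison of duplicating the descriptor versus opening the file twice:

# Bourne
foo >log 2>&1     # one open file description, shared offset:
                  # stdout and stderr interleave without clobbering each other
foo >log 2>log    # "log" is opened twice, with two independent offsets:
                  # one stream can silently overwrite what the other wrote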

There are times when we really do want 2>&1 to appear first -- for one example of this, see FAQ #40.

There are other times when we may use 2>&1 without any other redirections. Consider:

# Bourne
find ... 2>&1 | grep "some error"

In this example, we want to search find's standard error (as well as its standard output) for the string "some error". The 2>&1 in the piped command forces standard error to go into the pipe along with standard output. (When pipes and redirections are mixed in this way, remember: the pipe is done first, before any redirections. So find's standard output is already set to point to the pipe before we process the 2>&1 redirection.)

If we wanted to read only standard error in the pipe, and discard standard output, we could do it like this:

# Bourne
find ... 2>&1 >/dev/null | grep "some error"

The redirections in that example are processed thus:

  1. First, the pipe is created. find's output is sent to it.

  2. Next, 2>&1 causes find's standard error to go to the pipe as well.

  3. Finally, >/dev/null causes find's standard output to be discarded, leaving only stderr going into the pipe.

A related question is FAQ #47, which discusses how to send stderr to a pipeline.

See Making sense of the copy descriptor operator for a more graphical explanation.

56.1. If you're still confused...

If you're still confused at this point, it's probably because you started out with a misconception about how FDs work, and you haven't been able to drop that misconception yet. Don't worry -- it's an extremely common misconception, and you're not alone. Let me try to explain....

Many people think that 2>&1 somehow "unites" or "ties together" or "marries" the two FDs, so that any change to one of them becomes a change to the other. This is not the case. And this is where the confusion comes from, for many people.

2>&1 only changes FD2 to point to "that which FD1 points to at the moment"; it does not actually make FD2 point to FD1 itself. Note that "2" and "1" have different meanings due to the way they are used: "2", which occurs before ">&" means the actual FD2, but "1", which occurs after ">&", means "that which FD1 currently points to", rather than FD1 itself. (If reversed, as in "1>&2", then 1 means FD1 itself, and 2 means "that which FD2 currently points to".)

Analogies may help. One analogy is to think of FDs as being like C pointers.

   int some_actual_integer;
   int *fd1, *fd2;

   fd1 = &some_actual_integer;  /* Analogous to 1>file */
   fd2 = fd1;                   /* Analogous to 2>&1 */
   fd1 = NULL;                  /* Analogous to 1>&- */

   /* At this point, fd2 is still pointing to the actual memory location.
      The fact that fd1 and fd2 both *used to* point to the same place is
      not relevant.  We can close or repoint one of them, without affecting
      the other. */

Another analogy is to think of them like hardlinks in a file system.

    touch some_real_file
    ln some_real_file fd1       # Make fd1 a link to our file
    ln fd1 fd2                  # Make fd2 another link to our file
    rm fd1                      # Remove the fd1 link, but fd2 is not
                                # affected

    # At this point we still have a file with two links: "some_real_file"
    # and "fd2".

Or like symbolic links -- but we have to be careful with this analogy.

    touch some_real_file
    ln -s some_real_file fd1    # Make fd1 a SYMlink to our file
    ln -s "$(readlink fd1)" fd2 # Make fd2 symlink to the same thing that
                                # fd1 is a symlink to.
    rm fd1                      # Remove fd1, but fd2 is untouched.

    # Requires the nonstandard "readlink" program.
    # Result is:

    lrwxrwxrwx 1 wooledg wooledg 14 Mar 25 09:19 fd2 -> some_real_file
    -rw-r--r-- 1 wooledg wooledg  0 Mar 25 09:19 some_real_file

    # If we had attempted to use "ln -s fd1 fd2" in this analogy, we would have
    # FAILED badly.  This isn't how FDs work; rather, it's how some people
    # THINK they work.  And it's wrong.

Other analogies include thinking of FDs as hoses. Think of files as barrels full of water (or empty, or half full). You can put a hose in a barrel in order to dump more water into it. You can put two hoses into the same barrel, and they can both dump water into the same barrel. You can then remove one of those hoses, and that doesn't cause the other hose to go away. It's still there.

56.2. See Also

57. How can I untar (or unzip) multiple tarballs at once?

As the tar command was originally designed to read from and write to tape devices (tar - Tape ARchiver), you can specify only filenames to put inside an archive (write to tape) or to extract out of an archive (read from tape).

There is an option to tell tar that the archive is not on some tape, but in a file: -f. This option takes exactly one argument: the filename of the file containing the archive. All other (following) filenames are taken to be archive members:

    tar -x -f backup.tar myfile.txt
    # OR (more common syntax IMHO)
    tar xf backup.tar myfile.txt

Now here's a common mistake -- imagine a directory containing the following archive-files you want to extract all at once:

    $ ls
    backup1.tar backup2.tar backup3.tar

Maybe you think of tar xf *.tar. Let's see:

    $ tar xf *.tar
    tar: backup2.tar: Not found in archive
    tar: backup3.tar: Not found in archive
    tar: Error exit delayed from previous errors

What happened? The shell replaced your *.tar by the matching filenames. You really wrote:

    tar xf backup1.tar backup2.tar backup3.tar

And as we saw earlier, it means: "extract the files backup2.tar and backup3.tar from the archive backup1.tar", which will of course only succeed when there are such filenames stored in the archive.

The solution is relatively easy: extract the contents of all archives one at a time. As we use a UNIX shell and we are lazy, we do that with a loop:

    for tarname in ./*.tar; do
      tar xf "$tarname"
    done

What happens? The for-loop will iterate through all filenames matching *.tar and call tar xf for each of them. That way you extract all archives one-by-one and you even do it automagically.
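
If you'd rather keep each archive's contents separate, a small variation of the same loop (a sketch; tar's -C option is supported by GNU and BSD tar, though it is not strictly universal) extracts every archive into its own directory:

    for tarname in ./*.tar; do
      dir=${tarname%.tar}                        # e.g. ./backup1
      mkdir -p -- "$dir" && tar xf "$tarname" -C "$dir"
    done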

The second most common archive type these days is ZIP. The command to extract contents from a ZIP file is unzip (who would have guessed that!). The problem here is the very same: unzip accepts only one argument naming the ZIP file; any further non-option arguments are treated as member names. So, you solve it the very same way:

    for zipfile in ./*.zip; do
      unzip "$zipfile"
    done

Not enough? Ok. There's another option with unzip: it can take shell-like patterns to specify the ZIP-file names. And to avoid interpretation of those patterns by the shell, you need to quote them. unzip itself and not the shell will interpret *.zip in this case:

    unzip "*.zip"
    # OR, to make more clear what we do:
    unzip \*.zip

(This feature of unzip derives mainly from its origins as an MS-DOS program. MS-DOS's command interpreter does not perform glob expansions, so every MS-DOS program must be able to expand wildcards into a list of filenames. This feature was left in the Unix version, and as we just demonstrated, it can occasionally be useful.)

58. How can I group entries (in a file) by common prefixes?

As in, one wants to convert:

    foo: entry1
    bar: entry2
    foo: entry3
    baz: entry4

to

    foo: entry1 entry3
    bar: entry2
    baz: entry4

There are two simple general methods for this:

  1. sort the file, and then iterate over it, collecting entries until the prefix changes, and then print the collected entries with the previous prefix
  2. iterate over the file, collect entries for each prefix in an array indexed by the prefix

A basic implementation of method 1 in bash:

old=xxx ; stuff=
(sort file ; echo xxx) | while read -r prefix line ; do
        if [[ $prefix = "$old" ]] ; then
                stuff="$stuff $line"
        else
                # Skip the initial sentinel; print each completed group.
                [[ $old != xxx ]] && echo "$old $stuff"
                old=$prefix
                stuff=$line
        fi
done

And a basic implementation of method 2 in awk, using awk's multi-dimensional array subscripts:

    {
      a[$1,++b[$1]] = $2;
    }

    END {
      for (i in b) {
        printf("%s", i);
        for (j=1; j<=b[i]; j++) {
          printf(" %s", a[i,j]);
        }
        print "";
      }
    }

Written out as a shell command:

    awk '{a[$1,++b[$1]]=$2} END {for (i in b) {printf("%s", i); for (j=1; j<=b[i]; j++) printf(" %s", a[i,j]); print ""}}' file
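
In bash 4 or later, method 2 can also be written natively with an associative array (a sketch; as with the awk version, the order in which the groups are printed is unspecified):

    # bash 4+
    declare -A group
    while read -r prefix line; do
        group[$prefix]+=" $line"
    done < file

    for prefix in "${!group[@]}"; do
        printf '%s%s\n' "$prefix" "${group[$prefix]}"
    done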

59. Can bash handle binary data?

The answer is, basically, no....

While bash won't have as many problems with it as older shells, it still can't process arbitrary binary data, and more specifically, shell variables are not 100% binary clean, so you can't store binary files in them.

You can store uuencoded ASCII data within a variable such as

    var=$(uuencode /bin/ls ls)
    cd /somewhere/else
    uudecode <<<"$var"  # don't forget the quotes!
  • Note: there is a huge difference between GNU and Unix uuencode/uudecode. With Unix uudecode, you cannot specify the output file; it always uses the filename encoded in the ASCII data. I've fixed the previous example so that it works on Unix systems. If you make further changes, please don't use GNUisms. Thanks. --GreyCat

One instance where this would occasionally be handy is storing small temporary bitmaps while working with netpbm... here I resorted to adding an extra pnmnoraw to the pipe, creating (larger) ASCII files that bash has no problem storing.

If you are feeling adventurous, consider this experiment:

    # bindec.bash, attempt to decode binary data to ascii decimals
    IFS=
    while read -n1 x ;do
        case "$x" in
            '') echo empty ;;
            # insert the 256 lines generated by the following oneliner here:
            # for x in $(seq 0 255) ;do echo "        $'\\$(printf %o $x)') echo $x;;" ;done
        esac
    done

and then pipe binary data into it, maybe like so:

    for x in $(seq 0 255) ;do echo -ne "\\$(printf %o $x)" ;done | bash bindec.bash | nl | less

You'll notice that the NUL (0) character is skipped entirely: we can't even generate it as input here, and bash couldn't store it in a variable anyway. That alone is enough to corrupt most binary files we might try to process this way.

  • Yes, Bash is written in C, and uses C semantics for handling strings -- including the NUL byte as string terminator -- in its variables. You cannot store NUL in a Bash variable sanely. It simply was never intended to be used for this. - GreyCat

Note that this refers to storing them in variables... moving data between programs using pipes is always binary clean. Temporary files are also safe, as long as appropriate precautions are taken when creating them.

To cat a binary file using only bash builtins, when no external command is available (this trick once saved the day when /lib/libgcc_s.so.1 was renamed):

# simulate cat with just bash builtins, binary safe
IFS=
while read -d '' -r -n1 x ; do
    case "$x" in
        '') printf "\x00";;
        *) printf "%s" "$x";;
    esac
done
  • I'd rather just use cat. Also, is the -n1 really needed? -GreyCat

    • without -n1 you have to be careful to deal with the data after the last \0, something like [[ $x ]] && printf "%s" "$x" after the loop. I haven't tested this to know if it works or if it is enough. Also I don't know what happens if you read a big file without any \0 --pgas

60. I saw this command somewhere: :(){ :|:& } (fork bomb). How does it work?

This is a potentially dangerous command. Don't run it! The "trigger" is omitted from the question above, leaving only the part that sets up the function.

A fork bomb is a simple form of denial of service (DoS) named after the Unix fork(2) system call. It is a program which rapidly consumes resources by repeatedly forking copies of itself, whose children do the same, recursively. On many systems without proper resource limits, this may leave you in an irrecoverably unresponsive state.

This particular definition of a Bash fork bomb is for some reason so well-known that it's sometimes just known as "the forkbomb".

Here is the code in the most common popularized format:

:(){ :|:& };:

And again, following good conventions for readability:

#!/usr/bin/env bash
:() { 
    : | : &
}

:

This defines a function named :. The body of the function sets up a pipeline, which in Bash consists of two subshells, the stdout of the first connected to the stdin of the second by a pipe. The function (parent shell of the pipeline) backgrounds the pipeline, the function returns, and the shell terminates, leaving behind the background job. The end result is two new processes that each call : to repeat the process.

: is actually an illegal function name in most contexts (as described below). In the version below, bomb is used instead of :, which is both portable and more readable.

bomb() {
    bomb | bomb &
}

bomb

Theoretically, anybody that has shell access to your computer can use such a technique to consume all the resources to which he/she has access. A chroot(2) won't help here. If the user's resources are unlimited then in a matter of seconds all the resources of your system (processes, virtual memory, open files, etc.) will be used, and it will probably deadlock itself. Any attempt made by the kernel to free more resources will just allow more instances of the function to be created.

As a result, the only way to protect yourself from such abuse is by limiting the maximum allowed use of resources for your users. Such resources are governed by the setrlimit(2) system call. The interface to this functionality in Bash and KornShell is the ulimit command. Your operating system may also have special configuration files to help manage these resources (for example, /etc/security/limits.conf in Debian, or /etc/login.conf in OpenBSD). Consult your documentation for details.
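
For example, a minimal sketch of limiting yourself from within the shell before experimenting (the value 100 is arbitrary; pick something sensible for your system):

    # bash/ksh: cap the number of processes this user may create.
    ulimit -u 100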

60.1. Why :(){ :|:& };: is a bad way to define a fork bomb

This popular definition works because of an unusual combination of details that (of the shells I have to test with) only occur in Bash's non-POSIX mode, and Zsh (all emulation modes).

  1. The shell must allow defining function names beyond those required by a POSIX "Name". This immediately rules out ksh93, Bash POSIX mode, Dash, Posh (segfault on definition, Posh is an old pdksh fork that's no longer maintained), and Busybox sh.

  2. The shell must either:
    • Incorrectly resolve functions that overload special builtins ahead of the builtin itself. See Command search and execution. mksh fails (correctly) at this step, simply executing the : builtin -- it is impossible to call the function even if you've successfully defined it. Bash non-POSIX mode and Zsh (even POSIX emulation) meet this criterion.

    • Provide a means of disabling the builtin. Bash and ksh93 both offer one (enable -d and builtin -d, respectively). Bash's attempt fails (correctly, according to its documentation). Ksh93's succeeds (incorrectly, according to its documentation).
          $ bash -c 'enable -d :; type -p :'
          bash: line 0: enable: :: not dynamically loaded
          $ ksh -c 'builtin -d :; whence -v :'
          ksh: whence: :: not found
      Probably a bug:
        -d    Deletes each of the specified built-ins. **Special built-ins cannot be deleted**.
      In any event, it's irrelevant because ksh93 already failed at step 1. Now you just have an inaccessible builtin.

So in a nutshell, this forkbomb isn't very interesting. It's basically the canonical definition that's been trivially obfuscated by giving it a strange name that breaks almost everywhere. Ironic given the supposed original author's claim that one may

type in :(){ :|:& };: on any UNIX terminal
  • -- sure, I guess you can type it.

61. I'm trying to write a script that will change directory (or set a variable), but after the script finishes, I'm back where I started (or my variable isn't set)!

Consider this:

   #!/bin/sh
   cd /tmp

If one executes this simple script, what happens? Bash forks, resulting in a parent (the interactive shell in which you typed the command) and a child (a new shell that reads and executes the script). The child runs, while the parent waits for it to finish. The child reads and executes the script, changes its current directory to /tmp, and then exits. The parent, which was waiting for the child, harvests the child's exit status (presumably 0 for success), and then carries on with the next command. Nowhere in this process has the parent's current working directory changed -- only the child's.

A child process can never affect any part of the parent's environment, which includes its variables, its current working directory, its open files, its resource limits, etc.

So, how does one go about changing the current working directory of the parent? You can still have the cd command in an external file, but you can't run it as a script. That would cause the forking explained earlier. Instead, you must source it with . (or the Bash-only synonym, source). Sourcing basically means you execute the commands in a file using the current shell, not in a forked shell (child shell):

   echo 'cd /tmp' > "$HOME/mycd"  # Create a file that contains the 'cd /tmp' command.
   . "$HOME/mycd"                 # Source that file, executing the 'cd /tmp' command in the current shell.
   pwd                            # Now, we're in /tmp

The same thing applies to setting variables. . ("dot in") the file that contains the commands; don't try to run it.
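
For example, a sketch that parallels the cd example above:

    echo 'var="hello from the sourced file"' > "$HOME/myvar"
    . "$HOME/myvar"   # Source it; the assignment runs in the current shell.
    echo "$var"       # Prints: hello from the sourced file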

If the command you execute is a function, not a script, it will be executed in the current shell. Therefore, it's possible to define a function to do what we tried to do with an external file in the examples above, without needing to "dot in" or "source" anything. Define the following function and then call it simply by typing mycd:

   mycd() { cd /tmp; }

Put it in ~/.bashrc or similar if you want the function to be available automatically in every new shell you open.

62. Is there a list of which features were added to specific releases (versions) of Bash?

Here are some links to official Bash documentation:

  • NEWS: a file tersely listing the notable changes between the current and previous versions

  • CHANGES: a "complete" bash change history (back to 2.0 only)

  • COMPAT: compatibility issues between bash3 and previous versions

A more extensive list than the one below can be found at https://web.archive.org/web/20230401195427/https://wiki.bash-hackers.org/scripting/bashchanges
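
When a script depends on one of the features listed below, it can check the running interpreter at startup. Here is a minimal sketch using the BASH_VERSINFO array:

    # Require bash 4.0 or newer (associative arrays, mapfile, etc.).
    if ((BASH_VERSINFO[0] < 4)); then
        echo "This script requires bash 4.0 or newer" >&2
        exit 1
    fi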

62.1. Changes in the upcoming bash-5.3 release

||Feature ||Copied from / Inspired by ||
||${ CMDS;} ||ksh93 ||
||${|CMDS;} ||mksh ||
||GLOBSORT (variable) ||zsh's o/O glob qualifiers ||
||local BASH_REMATCH || ||
||compgen -V ||native ||
||read -E || ||
||array_expand_once (shopt) || ||
||printf %#Q %#q (${var@Q} quoting) ||zsh's q, qq, qqq, qqqq, q+, q- parameter expansion flags ||
||printf %l.*s %lc (character aware %s %c) ||zsh's default for %s, ksh93's %Ls/%Lc ||

62.2. Notable changes in released bash versions

||Feature ||Added in version ||Copied from / Inspired by ||
||READLINE_ARGUMENT (variable) ||5.2 (2022) ||zsh's NUMERIC variable ||
||varredir_close (shopt) ||5.2 (2022) || ||
||printf %Q ||5.2 (2022) || ||
||noexpand_translations (shopt) ||5.2 (2022) || ||
||${var@k} ||5.2 (2022) ||zsh's ${(kv)var} ||
||${var/$pat/&}, patsub_replacement (shopt) ||5.2 (2022) ||ksh93's \0, zsh's $MATCH ||
||globskipdots (shopt) ||5.2 (2022) || ||
||BASH_REMATCH is no longer readonly ||5.1 (2020) || ||
||PROMPT_COMMAND may be an array ||5.1 (2020) ||zsh's precmd_functions array ||
||SRANDOM (variable) ||5.1 (2020) || ||
||wait -p varname ||5.1 (2020) || ||
||declare -I ||5.1 (2020) ||NetBSD sh (default behaviour in ash) ||
||${var@U}, ${var@u}, ${var@L}, ${var@K} ||5.1 (2020) ||zsh's U, L, C, kv parameter expansion flags, zsh/tcsh's :u, :l modifiers ||
||BASH_ARGV0 (variable) ||5.0 (2019) || ||
||EPOCHSECONDS, EPOCHREALTIME (variables) ||5.0 (2019) ||zsh (2003, 2011) ||
||wait -f ||5.0 (2019) || ||
||history -d allows negative offsets ||5.0 (2019) || ||
||assoc_expand_once (shopt) ||5.0 (2019) || ||
||localvar_inherit (shopt) ||5.0 (2019) || ||
||--pretty-print (invocation option) ||5.0 (2019) ||native ||
||assoc=(key1 value1 key2 value2) assoc+=(key value) ||5.0 (2019) || ||
||PS0 (variable) ||4.4 (2016) ||native ||
||loadable builtin deployment infrastructure ||4.4 (2016) ||ksh93 (1993) ||
||mapfile/readarray -d ||4.4 (2016) ||native ||
||--help for builtins ||4.4 (2016) ||ksh93 (2001, possibly earlier) ||
||${var@a}, ${var@A}, ${var@E}, ${var@P}, ${var@Q} ||4.4 (2016) ||mksh (2012) for the syntax, zsh (1990s) for the feature ||
||local - ||4.4 (2016) ||Almquist shell (1989) ||
||$! and wait for process substitutions ||4.4 (2016) ||native ||
||wait -n ||4.3 (2014) ||native ||
||test -R ||4.3 (2014) ||ksh93 (1993) ||
||test -v 'array[element]' (bug fix) ||4.3 (2014) ||ksh93 (1993) ||
||declare/typeset -n and associated changes to ${!ref} and for..in ||4.3 (2014) ||ksh93 (1993) ||
||array[-idx] (in assignments, read, unset, etc) ||4.3 (2014) ||zsh (1990s) ||
||printf %(fmt)T uses -1 as default argument instead of 0 ||4.3 (2011) ||ksh93 (1999) ||
||quotes in the replacement part of ${var/pat/"$rep"} are no longer literal ||4.3 (2011) || ||
||\uXXXX and \UXXXXXXXX ||4.2 (2011) ||zsh (2001) ||
||declare -g ||4.2 (2011) ||zsh (1990s) ||
||test -v ||4.2 (2011) ||ksh93 (2009) ||
||printf %(fmt)T ||4.2 (2011) ||ksh93 (1999) ||
||${array[-idx]} and ${var:start:-len} ||4.2 (2011) ||zsh (1990s) and native ||
||lastpipe (shopt) ||4.2 (2011) ||ksh (1980s) default behaviour there ||
||read -N ||4.1 (2010) ||ksh93 (2003) ||
||{var}> or {var}< etc. (FD variable assignment) ||4.1 (2010) ||developed jointly with ksh93 and zsh ||
||syslog history (compile option) ||4.1 (2010) ||native ||
||complete -D (allowing dynamically loaded completions) ||4.1 (2010) || ||
||BASH_XTRACEFD (variable) ||4.1 (2010) ||native ||
||${@:offset[:length]} includes $0 ||4.0 (2009) ||ksh ||
||;& and ;;& fall-throughs for case ||4.0 (2009) ||ksh93 (1993) ||
||associative arrays ||4.0 (2009) ||ksh93 (1993) ||
||&>> and |& ||4.0 (2009) ||native and csh (1970s) ||
||command_not_found_handle (function) ||4.0 (2009) ||native ||
||compopt (builtin) ||4.0 (2009) || ||
||coproc (keyword) ||4.0 (2009) ||ksh (1980s), zsh (1990) for the coproc keyword ||
||globstar (shopt) ||4.0 (2009) ||zsh (1992), ksh93 (2005) for the name of the option ||
||mapfile/readarray (builtin) ||4.0 (2009) ||native ||
||${var,} ${var,,} ${var^} ${var^^} ||4.0 (2009) ||native ||
||{009..012} (leading zeros in brace expansions) ||4.0 (2009) ||zsh (1995) ||
||{x..y..incr} ||4.0 (2009) ||ksh93 (2005) ||
||read -t 0 (test input availability) ||4.0 (2009) || ||
||read -t 0.5 ||4.0 (2009) ||zsh (2003) ||
||read -i ||4.0 (2009) ||zsh (vared) (1990s) ||
||x+=string array+=(string) ||3.1 (2005) ||ksh93 (2000) ||
||printf -v var ||3.1 (2005) ||native ||
||nocasematch (shopt) ||3.1 (2005) ||native ||
||{x..y} ||3.0 (2004) ||zsh (1995) ||
||${!array[@]} ||3.0 (2004) ||ksh93 (1993) ||
||[[ =~ ||3.0 (2004) ||native ||
||BASH_REMATCH ||3.0 (2004) ||native ||
||RETURN (trap) ||3.0 (2004) ||native ||
||pipefail (option) ||3.0 (2004) ||ksh93 ||
||failglob (shopt) ||3.0 (2004) ||native ||
||printf %q produces $'...' ||2.05b (2002) ||ksh93 (1993) ||
||[n]>&word- and [n]<&word- ||2.05b (2002) ||ksh93 ||
||<<< ||2.05b (2002) ||zsh (1991) ||
||printf %n ||2.05a (2001) ||ksh93 ||
||i++ ||2.04 (2000) ||ksh93 (1993) ||
||for ((;;)) ||2.04 (2000) ||ksh93 (1993) ||
||/dev/fd/N, /dev/tcp/host/port, etc. ||2.04 (2000) ||ksh93 (1993) ||
||read -t, -n, -d and -s ||2.04 (2000) ||ksh93 (1993) ||
||a=(*.txt) file expansion ||2.03 (1999) ||ksh93 (1993) ||
||extglob (shopt) ||2.02 (1998) ||ksh (1980s) ||
||[[ ||2.02 (1998) ||ksh (1980s) ||
||printf (builtin) ||2.02 (1998) ||ksh (1980s) ||
||$(< filename) ||2.02 (1998) ||ksh (1980s) ||
||** (exponentiation) ||2.02 (1998) ||zsh (1994) ||
||\xXX ||2.02 (1998) ||zsh (1994 or earlier) ||
||(( )) ||2.0 (1996) ||ksh (1980s) ||
||arrays ||2.0 (1996) ||csh (1979), zsh (1991) for array=(assignment) syntax ||
||$'...' (new quoting syntax) ||2.0 (1996) ||ksh93 (1993) ||

62.3. List of bash releases and other notable events

||Release ||Date ||
||bash-5.2 ||2022-09-26 ||
||bash-5.1 ||2020-12-07 ||
||bash-5.0 ||2019-01-07 ||
||bash-4.4 ||2016-09-16 ||
||Shellshock patches are released for bash-2.05b through bash-4.3 ||2014-09-24 to 2014-10-05 ||
||bash-4.3 ||2014-02-27 ||
||bash-4.2 ||2011-02-14 ||
||bash-4.1 ||2010-01-02 ||
||bash-4.0 ||2009-02-23 ||
||bash-3.2 ||2006-10-12 ||
||bash-3.1 ||2005-12-09 ||
||bash-3.0 ||2004-07-27 ||
||bash-2.05b ||2002-07-17 ||
||bash-2.05a ||2001-11-15 ||
||bash-2.05 ||2001-04-09 ||
||bash-2.04 ||http://ftp.gnu.org/gnu/bash puts it around 2000-03-21 ||
||bash-2.03 ||1999-02-19 ||
||bash-2.02 ||1998-04-20 ||
||bash-2.01 ||1997-06-06 ||
||bash-2.0 ||1996-12-23 ||
||bash-1.14 ||1994-06-02 ||
||bash-1.13 ||1993-09-03 ||
||Looks like Chet Ramey takes over as maintainer between 1.12 and 1.13 || ||
||bash-1.12 ||1992-01-26 ||
||bash-1.11 ||1992-01-10 ||
||bash-1.10 ||First mention found at 1991-10-07, so some time before that ||
||bash-1.09 ||First mention found at 1991-06-02 ||
||bash-1.08 ||1991-05-22 ||
||bash-1.07 ||likely 1991-02-01 ||
||bash-1.06 ||This is a tough one. There was apparently a "bootleg" version of 1.06, so this requires more digging ||
||bash-1.05 ||1990-03-03 ||
||bash-1.04 ||First mention found at 1989-11-07 ||
||bash-1.03 ||1989-09-02 ||
||bash-1.02 ||First mention found at 1989-07-09 ||
||bash-1.01 ||1989-06-23 ||
||bash-1.00 ||Hard to search for ... ||
||bash-0.99 ||First beta release 1989-06-07 ||

63. How do I create a temporary file in a secure manner?

There does not appear to be any single command that simply works everywhere. tempfile is not portable. mktemp exists more widely (but still not ubiquitously), but it may require a -c switch to create the file in advance; or it may create the file by default and barf if -c is supplied. Some systems don't have either command (Solaris, POSIX). POSIX systems are supposed to have m4 which has the ability to create a temporary file, but some systems may not install m4 by default, or their implementation of m4 may be missing this feature.

The traditional answer has usually been something like this:

   1 # Do not use!  Race condition!
   2 tempfile=/tmp/myname.$$
   3 trap 'rm -f -- "$tempfile"; exit 1' 1 2 3 15
   4 rm -f -- "$tempfile"
   5 touch -- "$tempfile"

The problem with this is: if the file already exists (for example, as a symlink to /etc/passwd), then the script may write things in places they should not be written. Even if you remove the file immediately before using it, you still have a RaceCondition: someone could re-create a malicious symlink in the interval between your shell commands.

63.1. Use your $HOME

The best portable answer is to put your temporary files in your home directory (or some other private directory, e.g. $XDG_RUNTIME_DIR) where nobody else has write access. Then at least you don't have to worry about malicious users. Simplistic PID-based schemes (or hostname + PID for shared file systems) should be enough to prevent conflicts with your own scripts.
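
A minimal sketch of that approach (it assumes $HOME is not world-writable; the PID-based name only guards against collisions with your own scripts, not against hostile users):

    tempfile=$HOME/.myscript.$$
    trap 'rm -f -- "$tempfile"' EXIT
    : > "$tempfile"      # create (or truncate) the file
    ...                  # use "$tempfile"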

If you're implementing a daemon which runs under a user account with no home directory, why not simply make a private directory for your daemon at the same time you're installing the code?

Unfortunately, people don't seem to like that answer. They demand that their temporary files should be in /tmp or /var/tmp. For those people, there is no clean answer, so they must choose a hack they can live with.

63.2. Make a temporary directory

If you can't use $HOME, the next best answer is to create a private directory to hold your temp file(s), instead of creating the files directly inside a world-writable sandbox like /tmp or /var/tmp. The mkdir command is atomic, and only reports success if it actually created the directory. So long as we do not use the -p option, we can be assured that it actually created a brand new directory, rather than following a symlink to danger.

Here is one example of this approach:

   1 # Bash
   2 
   3 i=0 tempdir=
   4 trap '[[ $tempdir ]] && rm -rf -- "$tempdir"' EXIT
   5 
   6 while ((++i <= 10)); do
   7   tempdir=${TMPDIR:-/tmp}//$RANDOM-$$
   8   mkdir -m 700 -- "$tempdir" 2>/dev/null && break
   9 done
  10 
  11 if ((i > 10)); then
  12   printf 'Could not create temporary directory\n' >&2
  13   exit 1
  14 fi

Instead of RANDOM, awk can be used to generate a random number in a POSIX compatible way:

   1 # POSIX
   2 
   3 i=0 tempdir=
   4 cleanup() {
   5   [ "$tempdir" ] && rm -rf -- "$tempdir"
   6   if [ "$1" != EXIT ]; then
   7     trap - "$1"         # reset trap, and
   8     kill "-$1" "$$"     # resend signal to self
   9   fi
  10 }
  11 for sig in EXIT HUP INT TERM; do
  12   trap "cleanup $sig" "$sig"
  13 done
  14 
  15 while [ "$i" -lt 10 ]; do
  16   tempdir=${TMPDIR:-/tmp}//$(awk 'BEGIN { srand (); print rand() }')-$$
  17   mkdir -m 700 -- "$tempdir" 2>/dev/null && break
  18   sleep 1
  19   i=$((i+1))
  20 done
  21 
  22 if [ "$i" -ge 10 ]; then
  23   printf 'Could not create temporary directory\n' >&2
  24   exit 1
  25 fi

Note, however, that srand() seeds the random number generator using seconds since the epoch, which is fairly easy for an adversary to predict, making a denial of service attack feasible. (Historical awk implementations that predate POSIX may not even use the time of day for srand(), so don't count on this if you're on an ancient system.)

Some systems have a 14-character filename limit, so avoid the temptation to string $RANDOM together more than twice. You're relying on the atomicity of mkdir for your security, not the obscurity of your random name. If someone fills up /tmp with hundreds of thousands of random-number files to thwart you, you've got bigger issues.

On some systems (like Linux):

  • You have the mktemp command available, and you can use its -d option so that it creates temporary directories accessible only to you, with random characters in their names, making it almost impossible for an attacker to guess the directory name beforehand.

  • You can create filenames longer than 14 characters in /tmp.

   1 # Bash on Linux
   2 
   3 unset -v tempdir
   4 trap '[ "$tempdir" ] && rm -rf -- "$tempdir"' EXIT
   5 tempdir=$(mktemp -d -- "${TMPDIR:-/tmp}//XXXXXXXXXXXXXXXXXXXXXXXXXXXXX") ||
   6   { printf 'ERROR creating a temporary file\n' >&2; exit 1; }

And then you can create your particular files inside the temporary directory.

63.3. Use platform-specific tools

If you're writing a script for only a specific set of systems, and if those systems have a mktemp or tempfile tool which works correctly, you can use that. Make sure that your tool actually creates the temporary file, and does not simply give you an unused name. The options will vary from system to system, so you will have to write your script with that in mind. Since some platforms have none of these tools, this is not a portable solution, but it's often good enough, especially if your script is only targeting a specific operating system.

Here's an example using Linux's mktemp:

   1 # Bash on Linux
   2 
   3 unset -v tmpfile
   4 trap '[[ $tmpfile ]] && rm -f -- "$tmpfile"' EXIT
   5 tmpfile=$(mktemp)

63.4. Using m4

The m4 approach is theoretically POSIX-standard, but not in practice -- it may not work on MacOS, as its version of m4 is too old (as of July, 2021). Nevertheless, here's an example. Note that mkstemp requires a template, including a path prefix, or else it creates the temporary file in the current directory.

   1 # Bash
   2 
   3 die() { printf >&2 '%s\n' "$*"; exit 1; }
   4 
   5 unset -v tmpfile
   6 trap '[[ $tmpfile ]] && rm -f -- "$tmpfile"' EXIT
   7 tmpfile=$(m4 - <<< 'mkstemp(/tmp/foo-XXXXXX)') ||
   8   die "couldn't create temporary file"

Or, alternatively:

   1 # Bash
   2 
   3 die() { printf >&2 '%s\n' "$*"; exit 1; }
   4 
   5 unset -v tmpfile
   6 trap '[[ $tmpfile ]] && rm -f -- "$tmpfile"' EXIT
   7 
   8 : "${TMPDIR:=/tmp}"
   9 tmpfile=$TMPDIR//$(HOME=$TMPDIR cd && m4 - <<< 'mkstemp(foo-XXXXXX)') ||
  10   die "couldn't create temporary file"

63.5. Other approaches

Another not-quite-serious suggestion is to include C code in the script that implements a mktemp(1) command based on the mktemp(3) library function, compile it, and use that in the script. But this has a couple of problems:

  • The useless Solaris systems where we would need this probably don't have a C compiler either.
  • Chicken and egg problem: we need a temporary file name to hold the compiler's output.

64. My ssh client hangs when I try to logout after running a remote background job!

The following will not do what you expect:

   ssh me@remotehost 'sleep 120 &'
   # Client hangs for 120 seconds

This is a "feature" of OpenSSH. The client will not close the connection as long as the remote end's terminal is still in use -- and in the case of sleep 120 &, stdout and stderr are still connected to the terminal.

The simplest solution is to tell OpenSSH to disconnect as soon as authentication is complete:

    ssh -f me@remotehost 'sleep 120'

The immediate answer to your question -- "How do I get the client to disconnect so I can get my shell back?" -- is to kill the ssh client. You can do this with the kill or pkill commands, of course; or by sending the INT signal (usually Ctrl-C) for a non-interactive ssh session (as above); or by pressing <Enter><~><.> (Enter, Tilde, Period) in the client's terminal window for an interactive remote shell.

The problem is that the stdout and stderr file descriptors are still connected to the terminal, preventing the exit of ssh. So the long-term workaround for this is to ensure that all these file descriptors are either closed or redirected to a log file (or /dev/null) on the remote side.

   ssh me@remotehost 'sleep 120 >/dev/null 2>&1 &'
   # Client should return immediately

This also applies to restarting daemons on some legacy Unix systems.

   ssh root@hp-ux-box   # Interactive shell
   ...                  # Discover that the problem is stale NFS handles
   /sbin/init.d/nfs.client stop   # autofs is managed by this script and
   /sbin/init.d/nfs.client start  # killing it on HP-UX is OK (unlike Linux)
   exit
   # Client hangs -- use Enter ~ . to kill it.

Please note that allowing root to log in over SSH is a very bad security practice. If you must do this sort of thing, instead create a single script that runs all the commands you want, with no command line options, and then configure the sudoers file to grant a single user the right to run that script with no password required. This ensures that you know exactly which commands need to be run regularly, and that if the regular account is compromised, the damage that can be done is limited and well understood.

The legacy Unix /sbin/init.d/nfs.client script runs daemons in the background but leaves their stdout and stderr attached to the terminal (and they don't fully self-daemonize). The solution is either to fix the Unix vendor's broken init script, or to kill the ssh client process after this happens. The author of this article uses the latter approach.

65. Why is it so hard to get an answer to the question that I asked in #bash?

Maybe nobody knows the answer (or the people who know the answer are busy). Maybe you haven't given enough detail about the problem, or you haven't presented the problem clearly. Maybe the question you asked is answered in this FAQ, or in BashPitfalls, or in the BashGuide.

This is a big one: don't just post a URL and say "here is my script, fix it!" Only post code as a last resort, if you have a small piece of code that you don't understand. Instead, you should state what you're trying to do.

Shell scripting is largely a collection of hacks and tricks that do not generalize very well. The optimal answer to one problem may be quite different from the optimal answer to a similar-looking problem, so it's extremely important that you tell us the exact problem you want to solve.

Moreover, if you've attempted to solve a problem yourself, there's a really high probability that you've gone about it using a technique that doesn't work (or, at least, doesn't work for that particular problem). Any code you already have is probably going to be thrown away. Posting your non-working code as a substitute for a description of the problem you want to solve is usually a waste of time, and is nearly always irritating.

See NetEtiquette for more general suggestions. Try to avoid the infamous XyProblem.


The aphorisms (bash aphorisms, "bashphorisms") given here are intended to be humorous, but with a touch of realism underlying them. Several have been suggested over time, and this list is evolving.

  1. The questioner's first description of the problem/question will be misleading.
    • Corollary 1.1: The questioner's second description of the problem/question will also be misleading.

    • Corollary 1.2: The questioner's third description of the problem will clarify two previous misdescribed elements of the problem, but will add two new irrelevant issues that will be even more difficult to unravel from the actual problem.

  2. The questioner will keep changing the original question until it drives the helpers in the channel insane.
  3. Offtopicness will continue until someone asks a bash question that falls under bashphorisms 1 and/or 2, and greycat gets pissed off.
  4. The questioner will not read and apply the answers he is given but will instead continue to practice b1 and b2.
  5. The ignorant will continually mis-educate the other noobies.
  6. When given a choice of solutions, the newbie will always choose the wrong one.
  7. The newbie will always find a reason to say, "It doesn't work."
  8. If you don't know to whom the bashphorism's referring, it's you.
  9. All examples given by the questioner will be broken, misleading, wrong, incomplete, and/or not representative of the actual question.
    • 9.5: Especially when the example is 'ls'.

  10. The data is never formatted in the way that makes it easiest to manipulate.
  11. If your script uses cut, head or sed to operate on strings, rewrite it.
  12. All logic is deniable; however, some logic will plonk you if you deny it.

  13. Everyone ignores greycat when he is right. When he is wrong, it is !b1.
  14. The newbie doesn't actually know what he's asking. If he did, he wouldn't need to ask.
  15. The more advanced you are, the more likely you are to be overcomplicating it.
  16. The more of a beginner you are, the more likely you are to be overcomplicating it.
  17. A newbie comes to #bash to get his script confirmed. He leaves disappointed.
  18. The newbie will not accept the answer you give, no matter how right it is.
  19. The newbie is a bloody loon.
  20. The newbie will always have some excuse for doing it wrong.
  21. When the newbie's question is ambiguous, the proper interpretation will be whichever one makes the problem the hardest to solve.

  22. The newcomer will abuse the bot's factoid triggers for their own entertainment until someone gets annoyed enough to ask them to message it privately instead.
  23. Everyone is a newcomer.
  24. The newcomer will address greybot as if it were human.
  25. The newbie won't accept any answer that uses practical or standard tools.
  26. The newbie will not TELL you about this restriction until you have wasted half an hour.
  27. The newbie will lie.
  28. When the full horror of the newbie's true goal is revealed, the newbie will try to restate the goal to trick you into answering. Newbies are stupid.
  29. The fad of the month (as of June 2018) is Docker. It's always Docker. Why are they doing it THAT WAY? Because Docker.
  30. They won't show you the homework assignment. That would make it too easy.
  31. Your teacher is a fucking idiot.
  32. The more horrifyingly wrong a proposed solution is, the more likely it will be used.
  33. The newbie cannot explain what he is doing, or why. He will show you incomprehensible, nonworking code instead. What? You can't read his mind?!
  34. The person who is somehow responsible for 10000 machines knows jack shit about system administration.
  35. They won't show you their code, when it's a single command that is failing, even when you ask them to. But they'll dump an unsolicited 600 line script on a pastebin and expect you to read it all.
  36. Those who do not understand sysvinit are doomed to reinvent it. Poorly. (Those who DO understand it know to run like hell.)
  37. If something is a really bad idea, GNU will develop a nonstandard, nonportable tool to do it, not understanding that impossible things were impossible for a good reason.
  38. And then some of the BSDs will follow the GNU like sheep.
  39. If the noob is asking how to generate a random number, it's because the noob is writing a password generator. Because the noob is an idiot.
  40. The noob would rather waste several hours trying to dodge and weave through 4+ layers of quoting hell than spend 3 minutes putting the code in a file.
  41. The noob will spend 2 hours NOT answering "What are you trying to do?" instead of 3 minutes answering it.
  42. It takes 15 seconds to answer the question. It takes 2 hours to figure out what the question is.
  43. A "quick" or "simple" question will be neither.
  44. You think you've figured out what the newbie is trying to do? Nope. Sorry.

66. Is there a "PAUSE" command in bash like there is in MSDOS batch scripts? To prompt the user to press any key to continue?

Use the following to wait until the user presses enter:

# Bash
read -p "Press [enter] to continue..."

# Bourne
echo "Press [enter] to continue..."
read junk

Or use the following to wait until the user presses any key to continue:

# Bash
read -rsn 1 -p "Press any key to continue..."

Sometimes you need to wait until the user presses any key to continue, but you are already using "standard input" because (for example) you are feeding your script through a pipe. How do you tell read to read from the keyboard instead? Unix flexibility is helpful here: just redirect read's input from the terminal by adding < /dev/tty:

# Bash
read -rsn 1 -p "Press any key to continue..." < /dev/tty

If you want to put a timeout on that, use the -t option to read:

# Bash
printf 'WARNING: You are about to do something stupid.\n'
printf 'Press a key within 5 seconds to cancel.'
if ! read -rsn 1 -t 5
then something_stupid
fi

If you just want to pause for a while, regardless of the user's input, use sleep:

echo "The script is tired.  Please wait a minute."
sleep 60

If you want a fancy countdown on your timed read:

# Bash
# This function won't handle multi-digit counts.
countdown() {
  local i 
  printf %s "$1"
  sleep 1
  for ((i=$1-1; i>=1; i--)); do
    printf '\b%d' "$i"
    sleep 1
  done
}

printf 'Warning!!\n'
printf 'Five seconds to cancel: '
countdown 5 & pid=$!
if ! read -s -n 1 -t 5; then
  printf '\nboom\n'
else
  kill "$pid"; printf '\nphew\n'
fi

(If you test that code in an interactive shell, you'll get "chatter" from the job control system when the child process is created, and when it's killed. But in a script, there won't be any such noise.)

67. I want to check if [[ $var == foo || $var == bar || $var == more ]] without repeating $var n times.

The portable solution uses case:

   # Bourne
   case $var in
      foo|bar|more) ... ;;
   esac

In Bash and ksh, Extended globs can also do this within a [[ command:

   # bash/ksh
   if [[ $var == @(foo|bar|more) ]]; then
      ...
   fi

Extended globs are turned on by default inside the [[ command in bash 4.1 and newer. If you need to target an older version of bash, you will need to turn them on in your script (shopt -s extglob outside of all functions or compound commands).

Alternatively, you may loop over a list of patterns, checking each individually.

# bash/ksh93

[[ -v BASH_VERSION ]] && shopt -s extglob

# usage: any string pattern [ pattern ... ]
function any {
    [[ -n $1 ]] || return
    typeset pat match=$1
    shift

    for pat; do
        [[ $match == $pat ]] && return
    done

    return 1
}

var='foo bar'
if any "$var" '@(bar|baz)' foo\* blarg; then
    echo 'var matched at least one of the patterns!'
fi

For logical conjunction (return true if $var matches all patterns), ksh93 can use the & pattern delimiter.

    # ksh93 only
    [[ $var == @(foo&bar&more) ]] && ...

For shells that support only the ksh88 subset (extglob patterns), you may DeMorganify the logic using the negation sub-pattern operator.

    # bash/ksh88/etc...
    [[ $var == !(!(foo)|!(bar)|!(more)) ]] && ...

But this is quite unclear and not much shorter than just writing out separate expressions for each pattern.



68. How can I trim leading/trailing white space from one of my variables?

There are a few ways to do this. Some involve special tricks that only work with whitespace. Others are more general, and can be used to strip leading zeroes, etc.

For simple variables, you can trim spaces (or, by adjusting the bracket expression, other characters) using this trick:

# POSIX
junk=${var%%[! ]*}   # remove all but leading spaces
var=${var#"$junk"}   # remove leading spaces from original string

junk=${var##*[! ]}   # remove all but trailing spaces
var=${var%"$junk"}   # remove trailing spaces from original string

Bash can do the same thing, but without the need for a throw-away variable, by using extglob's more advanced pattern matching:

   # Bash
   shopt -s extglob
   var=${var##*( )}   # trim the left
   var=${var%%*( )}   # trim the right

Here's one that only works for whitespace. It relies on the fact that read strips all leading and trailing whitespace (tab or space character) when IFS isn't set:

   # POSIX, but fails if the variable contains newlines
   read -r var << EOF
   $var
   EOF

Bash can do something similar with a "here string":

   # Bash
   read  -rd '' x <<< "$x"

Using an empty string as the delimiter means read uses NUL as its delimiter; since a bash variable can never contain NUL (remember: bash only does C-string variables), read consumes the whole string. This is entirely safe for any text, including newlines (which will also be stripped from the beginning and end of the variable with the default value of IFS).

Here's a solution using extglob together with parameter expansion:

   # Bash
   shopt -s extglob
   x=${x##+([[:space:]])} x=${x%%+([[:space:]])}

(where [[:space:]] includes space, tab and all other horizontal and vertical spacing characters, the list of which varies with the current locale).

This also works in KornShell, without needing the explicit extglob setting:

   # ksh
   x=${x##+([[:space:]])} x=${x%%+([[:space:]])}

This solution isn't restricted to whitespace like the first few were. You can remove leading zeroes as well:

   # Bash
   shopt -s extglob
   x=${x##+(0)}

Another way to remove leading zeroes from a number in bash is to treat it as a decimal integer, in a math context:

   # Bash
   x=$((10#$x))
   # However, this fails if x contains anything other than decimal digits.

If you need to remove leading zeroes in a POSIX shell, you can use a loop:

   # POSIX
   while true; do
     case "$var" in
       0*) var=${var#0};;
       *)  break;;
     esac
   done

Or this trick (covered in more detail in FAQ #100):

   # POSIX
   zeroes=${var%%[!0]*}
   var=${var#"$zeroes"}

There are many, many other ways to do this, using sed for instance:

   # POSIX, suppress the trailing and leading whitespace on every line
   x=$(printf '%s\n' "$x" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')

Solutions based on external programs like sed are better suited to trimming large files, rather than shell variables.

69. How do I run a command, and have it abort (timeout) after N seconds?

FIRST check whether the command you're running can be told to timeout directly. The methods described here are "hacky" workarounds to force a command to terminate after a certain time has elapsed. Configuring your command properly is always preferable to the alternatives below.

If the command has no native support for stopping after a specified time, then you're forced to use an external wrapper. There are a few available now:

  • Recent GNU coreutils (since at least 2012) has one implementation called timeout.

  • An older stand-alone timeout implementation exists within TCT. This version of timeout may be offered as a package in some Linux distributions.

  • Busybox has a third implementation of timeout. (Note: the coreutils and busybox implementations have incompatible arguments.)

  • A similar program named doalarm also exists.

Beware: by default, some implementations of timeout issue a SIGKILL (kill -9), which is roughly the same as pulling out the power cord, leaving no chance for the program to commit its work, often resulting in corruption of its data. You should use a signal that allows the program to shut itself down cleanly instead (i.e. SIGTERM). See ProcessManagement for more information on SIGKILL.
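
For example, with the GNU coreutils implementation (a sketch; the command and the durations are placeholders):

    # Send SIGTERM after 30 seconds; send SIGKILL 5 seconds later
    # only if the command ignored the SIGTERM.
    timeout --signal=TERM --kill-after=5 30 some_slow_command arg1 arg2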

Also be aware that some of these wrappers "exec" your program after setting up an alarm, which makes them wonderful to use in wrapper scripts, while others launch your program as a child and then hang around (because they want to send a second signal if the first one is ignored, or whatever). Be sure to read your tool's documentation.

If you don't have or don't want one of the above programs, you can use a perl one-liner to set an ALRM and then exec the program you want to run under a time limit. In any case, you must understand what your program does with SIGALRM; programs with periodic updates usually use ALRM for that purpose and update rather than dying when they receive that signal.

doalarm() { perl -e 'alarm shift; exec @ARGV' -- "$@"; }

doalarm ${NUMBER_OF_SECONDS_BEFORE_ALRMING} program arg arg ...

If you don't even have perl, then the best you can do is an ugly hack like:

command & pid=$!
{ sleep 10; kill "$pid"; } &

This will, as you will soon discover, produce quite a mess regardless of whether the timeout condition kicked in or not, if it's run in an interactive shell. Cleaning it up is not something worth my time. Also, it can't be used with any command that requires a foreground terminal, like top.

It is possible to do something similar, but to keep command in the foreground:

sh -c '(sleep 10; kill "$$") & exec command'

kill $$ would kill the shell, except that exec causes the command to take over the shell's PID. It is necessary to use sh -c so that the calling shell isn't replaced; in bash 4, it is possible to use a subshell instead:

( cmdpid=$BASHPID; (sleep 10; kill "$cmdpid") & exec command )

The shell-script "timeout" (not to be confused with any of the commands named timeout) uses the second approach above. It has the advantage of working immediately (no need for compiling a program), but has problems e.g. with programs reading standard input.

But... just use one of the timeout or doalarm commands instead. Really.

70. I want to automate an ssh (or scp, or sftp) connection, but I don't know how to send the password....

When dealing with authentication in a shell script, please bear in mind the following points:

  1. Do not pass secrets (passwords, etc.) as arguments to external commands. The arguments of a command are generally visible in ps(1) output. (See Linux-specific notes below, but not everyone is using a Linux system to which they have administrative access.)

  2. Do not pass secrets as environment variables. The initial environment of a process is generally visible in ps(1) (see below).

  3. Read the documentation of the thing you're trying to authenticate against. Find out the various ways it can accept authentication secrets/tokens and choose the most appropriate. This may mean using SSH public key authentication as described below, or connecting to a database via a Unix-domain socket instead of a TCP connection to localhost, etc.

  4. If something requires a password, let it prompt the user for the password by itself. Just run it in the foreground inside a terminal so that it can launch a dialog with the end user if required. Do not try to "help" it by storing the password in your script and then trying to figure out how to circumvent its security in order to pass the password to it. Because:

    • Storing a password in a shell variable may cause the password to be written to disk via swap/paging. Shells do not provide a way to mark a variable as "never swap me out".

    • Most shells, including bash, use temporary files "behind the scenes" as part of here document (<<) and here string (<<<) operations. Never put secret data in a dynamically generated here string or here document.

  5. If you absolutely must store a password somewhere on disk, don't store it inside the shell script. Shell scripts must have read permissions in order to be used. Store the password in a separate file that doesn't have universal read permission, and let the appropriate process read that file. The appropriate process may be your script in rare cases, but more often it'll be whatever program is actually going to use that password.

If all you want is for the user to be prompted for a password by ssh, simply make sure your script is executed in a terminal and that your ssh command is executed in the foreground ("normally"). Either ssh or the program specified in the SSH_ASKPASS environment variable will prompt the user for a password if the remote server requires one for authentication.

If you want to bypass SSH password authentication entirely, then you should use public key authentication instead. Read and understand the man page for ssh-keygen(1), or see SshKeys for a brief overview. This will tell you how to generate a public/private key pair, and how to use these keys to authenticate to the remote system without sending a password at all.

Here is a brief summary of the key generation procedure:

test -f ~/.ssh/id_rsa || ssh-keygen -t rsa
ssh-copy-id me@remote
ssh me@remote hostname # should not prompt for a passWORD,
                       # but your key may have a passPHRASE

If your key has a passphrase on it, and you want to avoid typing it every time, look into ssh-agent(1). It's beyond the scope of this document, though. If your script has to run unattended, then you may need to remove the passphrase from the key. This reduces your security, because then anyone who grabs the key can log in to the remote server as you (it's equivalent to putting a password in a file). However, sometimes this is deemed an acceptable risk.

If you're being prompted for a password even with the public key inserted into the remote authorized_keys file, chances are you have a permissions problem on the remote system. See SshKeys for a discussion of such problems.

If that's not it, then make sure you didn't spell it authorised_keys. SSH uses the US spelling, authorized_keys.

If you really want to store a password in a variable and then pass it to SSH, instead of using public keys, first have your head examined. Then, if you still want to use a password, use expect(1) (or the less classic but maybe more bash friendly empty(1)). But don't ask us for help with it.

expect also applies to the telnet or FTP variations of this question. However, anyone who's still running telnetd without a damned good reason needs to be fired and replaced.

70.1. Limiting access to process information

Information about processes is often visible to every user on the system via the ps(1) command or via pseudo-files in the /proc file system (which is where ps reads its information on Linux and certain other operating systems). Specifically, the arguments of a process are generally visible to all users on all Unix-based systems, and the initial environment of a process is generally visible to all users on traditional BSD-based systems (including older Linux systems).

The implementation details are specific to each operating system. For Linux, consult proc(5) (here's one version from Debian 9, circa 2016).

If you're the administrator of a system, you may be able to change security settings in order to prevent password leaks by the worst sorts of programs that accept passwords as command-line arguments (cough mysql).

However, most people writing a shell script do not have the luxury of assuming that a system has been hardened beyond the default settings. We must write for the most common cases. This means following the standard advice given at the top of this page whenever possible.

71. How do I convert Unix (epoch) times to human-readable values?

The only sane way to handle time values within a program is to convert them into a linear scale. You can't store "January 17, 2005 at 5:37 PM" in a variable and expect to do anything with it....

Therefore, any competent program is going to use time stamps with semantics such as "the number of seconds since point X". These are called epoch timestamps. If the epoch is January 1, 1970 at midnight UTC, then it's also called a "Unix timestamp", because this is how Unix stores all times (such as file modification times).

The closest thing Standard Unix has to a tool for dealing with Unix timestamps is, ironically, date itself. GNU date, and later BSD date, has a %s extension to generate output in Unix timestamp format:

# GNU/BSD date
date +%s       # Prints the current time in Unix format, e.g. 1164128484
date -u +%s    # Same value: %s always counts seconds since the Epoch (UTC), so -u makes no difference here.

This is commonly used in scripts when one requires the interval between two events:

# POSIX shell, with GNU/BSD date
start=$(date -u +%s)
...
end=$(date -u +%s)
echo "Operation took $(($end - $start)) seconds."

Now, to convert those Unix timestamps back into human-readable values, we need to use date in a special way. GNU date can perform simple addition and subtraction:

# GNU date
date -u -d "1970-01-01" +"%s seconds"      # Prints "0 seconds"
date -u -d "1970-01-01" +"%D %T"           # Prints "01/01/70 00:00:00"

date -u -d "1970-01-01 14415 sec" +"%D %T"
date -u -d @14415 +"%D %T"   # Alternative notation
            # Prints "01/01/70 04:00:15", or 4 hours and 15 seconds ahead.
date -u -d "1970-01-01 14415 sec - 3605 sec" +"%D %T"
            # Prints "01/01/70 03:00:10", that is, 4 hours 15 seconds ahead
            # and then 1 hour 5 seconds back.

So, we can do it in just one command (provided the start and end variables hold values in seconds):

# Assuming start is "1418347200" and end is "1418350815" (for example):
date -u -d "1970-01-01 $end sec - $start sec" +"%T"

# Prints the time difference in the usual (human readable) time format:
01:00:15
# the output format could be easily adjusted as needed.

Note that this works only for differences of less than 24 hours; bigger time spans require a little extra arithmetic, as sketched below.
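
A sketch of that extra arithmetic in plain shell, assuming start and end hold epoch seconds:

    # POSIX arithmetic; no external commands needed.
    diff=$(( end - start ))
    printf '%d days, %02d:%02d:%02d\n' \
        "$(( diff / 86400 ))" \
        "$(( diff % 86400 / 3600 ))" \
        "$(( diff % 3600 / 60 ))" \
        "$(( diff % 60 ))"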

With a relatively modern version of bash (4.2 and later), printf has the %(fmt)T option. It can be used both to get the current time in Unix format and to convert from Unix format back to a human-readable time. "fmt" takes anything that is valid for strftime(3); note that %s itself is a BSD/GNU extension to strftime, so the first example below is not portable:

# store the current epoch time in "$start" (BSD/GNU only)
printf -v start '%(%s)T' -1

# print the saved epoch time in a human readable format (portable)
printf '%(%Y-%m-%d %H:%M:%S)T\n' "$start"

If you don't have GNU date or a modern version of bash available, you can use Perl:

perl -le "print scalar localtime 1164128484"
# Prints "Tue Nov 21 12:01:24 2006"

I used double quotes in these examples so that the time constant could be replaced with a variable reference. See the documentation for date(1) and Perl for details on changing the output format.

Newer versions of Tcl (8.5 and higher) have very good support of date and clock functions. For example:

echo 'puts [clock format [clock scan "today"]]' | tclsh
# Prints today's date (the format can be adjusted with parameters to "clock format").
   
echo 'puts [clock format [clock scan "fortnight"]]' | tclsh
# Prints the date two weeks from now.
   
echo 'puts [clock format [clock scan "5 years + 6 months ago"]]' | tclsh
# Five and a half years ago, compensating for leap days and daylight savings time.

A convenient way of calculating seconds elapsed since 'YYYY MM DD HH MM SS' is to use GNU awk:

echo "2008 02 27 18 50 23" | awk '{print systime() - mktime($0)}'
# Use systime() to return the current time in epoch format
# Use mktime() on the input string to return the epoch time of the input string
# These are both GNU awk extensions; mawk may also work.

To make this more human readable, GNU awk's strftime() may be used. The format string is similar to that of GNU date.

echo "YYYY MM DD HH MM SS" | gawk '{print strftime("%M Minutes, %S Seconds",systime() - mktime($0))}'
# The gawk-specific strftime() function converts the difference into a human readable format

72. How do I convert an ASCII character to its decimal (or hexadecimal) value and back? How do I do URL encoding or URL decoding?

If you have a known octal or hexadecimal value (at script-writing time), you can just use printf:

   1 # POSIX
   2 printf '\047\n'
   3 
   4 # bash/ksh/zsh and a few other printf implementations also support:
   5 printf '\x27\n'

In locales where the character encoding is a superset of ASCII, this prints the literal ' character (47 is the octal ASCII value of the apostrophe character) and a newline. The hexadecimal version can also be used with a few printf implementations including the bash builtin, but is not standard/POSIX.

printf in bash 4.2 and higher, and in ksh93, supports Unicode code points as well:

   1 # bash 4.2, ksh93
   2 printf '\u0027\n'

Another approach: bash's $'...' quoting can be used to expand to the desired characters, either in a variable assignment, or directly as a command argument:

   1 ExpandedString=$'\x27\047\u0027\U00000027\n'
   2 printf %s\\n "$ExpandedString"
   3 
   4 # or, more simply
   5 printf %s\\n $'\x27\047\u0027\U00000027\n'

If you need to convert characters (or numeric ASCII values) that are not known in advance (i.e., in variables), you can use something a little more complicated. Note: These functions only work for single-byte character encodings.

   1 # POSIX
   2 # chr() - converts decimal value to its ASCII character representation
   3 # ord() - converts ASCII character to its decimal value
   4 
   5 chr() {
   6   [ "$1" -lt 256 ] || return 1
   7   printf "\\$(printf %o "$1")"
   8 }

An even better approach avoids the subshell entirely by passing the value through a variable instead of capturing command output; it is faster for the same reason:

   1 chr () {
   2   local val
   3   [ "$1" -lt 256 ] || return 1
   4   printf -v val %o "$1"; printf "\\$val"
   5   # That one requires bash 3.1 or above.
   6 }

   1 ord() {
   2   # POSIX
   3   LC_CTYPE=C printf %d "'$1"
   4 }
   5 
   6 # hex() - converts ASCII character to a hexadecimal value
   7 # unhex() - converts a hexadecimal value to an ASCII character
   8 
   9 hex() {
  10    LC_CTYPE=C printf %x "'$1"
  11 }
  12 
  13 unhex() {
  14    printf "\\x$1"
  15 }
  16 
  17 # examples:
  18 
  19 chr "$(ord A)"    # -> A
  20 ord "$(chr 65)"   # -> 65

The ord function above is quite tricky.

  • Tricky? Rather, it's using a feature that I can't find documented anywhere -- putting a single quote in front of a character. Neat effect, but how on earth did you find out about it? Source diving? -- GreyCat

    • It's specified by The Single Unix Specification: "If the leading character is a single-quote or double-quote, the value shall be the numeric value in the underlying codeset of the character following the single-quote or double-quote." (see the printf() specification for more) -- mjf

72.1. URL encoding and URL decoding

Note that URL encoding is defined only at the byte (octet) level. A URL-encoding of a multibyte (e.g. UTF-8) character is done by simply encoding each byte individually, then concatenating everything.

Also note that the urldecode function below performs no error checking; getting it to yield a sensible error message when you feed it malformed input is left as an exercise for the reader.

   1 urlencode() {
   2     # urlencode <string>
   3     local LC_ALL=C c i n
   4     for (( i = 0, n = ${#1}; i < n; i++ )); do
   5         c=${1:i:1}
   6         case $c in
   7             [[:alnum:].~_-]) printf %s "$c" ;;
   8             *) printf %%%02X "'$c"  ;;
   9         esac
  10     done
  11 }

   1 urldecode() {
   2     # urldecode <string>
   3     local s
   4     s=${1//\\/\\\\}
   5     s=${s//+/ }
   6     printf %b "${s//'%'/\\x}"
   7 }

   1 # Alternative urlencode, prints all at once (requires bash 3.1)
   2 urlencode() {
   3     # urlencode <string>
   4     local LC_ALL=C c i n=${#1}
   5     local out= tmp
   6     for (( i=0; i < n; i++ )); do
   7         c=${1:i:1}
   8         case $c in
   9             [[:alnum:].~_-]) printf -v tmp %s "$c" ;;
  10             *) printf -v tmp %%%02X "'$c"  ;;
  11         esac
  12         out+=$tmp
  13     done
  14     printf %s "$out"
  15 }
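
A quick check of the functions above (the sample string is arbitrary; in a UTF-8 locale the two bytes of the multibyte character are simply percent-encoded one by one, as described earlier):

urlencode 'foo bar/café'              # -> foo%20bar%2Fcaf%C3%A9
urldecode 'foo%20bar%2Fcaf%C3%A9'     # -> foo bar/café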

72.2. More complete examples (with UTF-8 support)

The command-line utility nkf can decode URLs:

   1 echo 'https://ja.wikipedia.org/wiki/%E9%87%8E%E8%89%AF%E7%8C%AB' | nkf --url-input

72.2.1. Note about Ext Ascii and UTF-8 encoding

  • The following example was never peer-reviewed. Everyone is terrified of it. Proceed at your own risk.

    • for values 0x00 - 0x7f Identical
    • for values 0x80 - 0xff conflict between UTF-8 & ExtAscii

    • for values 0x100 - 0xffff Only UTF-8 UTF-16 UTF-32
    • for values 0x10000 - 0x7FFFFFFF Only UTF-8 UTF-32

      value        EAscii   UTF-8                        UTF-16         UTF-32
      ----------   ------   --------------------------   ------------   -----------
      0x20         "\x20"   "\x20"                       \u0020         \U00000020
      0x7f         "\x7f"   "\x7f"                       \u007f         \U0000007f
      0x80         "\x80"   "\xc2\x80"                   \u0080         \U00000080
      0xff         "\xff"   "\xc3\xbf"                   \u00ff         \U000000ff
      0x100        N/A      "\xc4\x80"                   \u0100         \U00000100
      0x1000       N/A      "\xe1\x80\x80"               \u1000         \U00001000
      0xffff       N/A      "\xef\xbf\xbf"               \uffff         \U0000ffff
      0x10000      N/A      "\xf0\x90\x80\x80"           \ud800\udc00   \U00010000
      0xfffff      N/A      "\xf3\xbf\xbf\xbf"           \udbbf\udfff   \U000fffff
      0x10000000   N/A      "\xfc\x90\x80\x80\x80\x80"   N/A            \U10000000
      0x7fffffff   N/A      "\xfd\xbf\xbf\xbf\xbf\xbf"   N/A            \U7fffffff
      0x80000000   N/A      N/A                          N/A            N/A
      0xffffffff   N/A      N/A                          N/A            N/A

   1   ###########################################################################
   2   ## ord family
   3   ###########################################################################
   4   # ord        <Return Variable Name> <Char to convert> [Optional Format String]
   5   # ord_hex    <Return Variable Name> <Char to convert>
   6   # ord_oct    <Return Variable Name> <Char to convert>
   7   # ord_utf8   <Return Variable Name> <Char to convert> [Optional Format String]
   8   # ord_eascii <Return Variable Name> <Char to convert> [Optional Format String]
   9   # ord_echo                      <Char to convert> [Optional Format String]
  10   # ord_hex_echo                  <Char to convert>
  11   # ord_oct_echo                  <Char to convert>
  12   # ord_utf8_echo                 <Char to convert> [Optional Format String]
  13   # ord_eascii_echo               <Char to convert> [Optional Format String]
  14   #
  15   # Description:
  16   # converts character using native encoding to its decimal value and stores
  17   # it in the Variable specified
  18   #
  19   #       ord
  20   #       ord_hex         output in hex
  21   #       ord_oct         output in octal
  22   #       ord_utf8        forces UTF8 decoding
  23   #       ord_eascii      forces eascii decoding
  24   #       ord_echo        prints to stdout
  25   function ord {
  26           printf -v "${1?Missing Dest Variable}" "${3:-%d}" "'${2?Missing Char}"
  27   }
  28   function ord_oct {
  29           ord "${@:1:2}" "0%c"
  30   }
  31   function ord_hex {
  32           ord "${@:1:2}" "0x%x"
  33   }
  34   function ord_utf8 {
  35           LC_CTYPE=C.UTF-8 ord "${@}"
  36   }
  37   function ord_eascii {
  38           LC_CTYPE=C ord "${@}"
  39   }
  40   function ord_echo {
  41           printf "${2:-%d}" "'${1?Missing Char}"
  42   }
  43   function ord_oct_echo {
  44           ord_echo "${1}" "0%o"
  45   }
  46   function ord_hex_echo {
  47           ord_echo "${1}" "0x%x"
  48   }
  49   function ord_utf8_echo {
  50           LC_CTYPE=C.UTF-8 ord_echo "${@}"
  51   }
  52   function ord_eascii_echo {
  53           LC_CTYPE=C ord_echo "${@}"
  54   }
  55 
  56   ###########################################################################
  57   ## chr family
  58   ###########################################################################
  59   # chr_utf8   <Return Variable Name> <Integer to convert>
  60   # chr_eascii <Return Variable Name> <Integer to convert>
  61   # chr        <Return Variable Name> <Integer to convert>
  62   # chr_oct    <Return Variable Name> <Octal number to convert>
  63   # chr_hex    <Return Variable Name> <Hex number to convert>
  64   # chr_utf8_echo                  <Integer to convert>
  65   # chr_eascii_echo                <Integer to convert>
  66   # chr_echo                       <Integer to convert>
  67   # chr_oct_echo                   <Octal number to convert>
  68   # chr_hex_echo                   <Hex number to convert>
  69   #
  70   # Description:
  71   # converts decimal value to character representation and stores
  72   # it in the Variable specified
  73   #
  74   #       chr                     Tries to guess output format
  75   #       chr_utf8                forces UTF8 encoding
  76   #       chr_eascii              forces eascii encoding
  77   #       chr_echo                prints to stdout
  78   #
  79   function chr_utf8_m {
  80     local val
  81     #
  82     # bash only supports \u \U since 4.2
  83     #
  84 
  85     # here is an example how to encode
  86     # manually
  87     # this will work since Bash 3.1 as it uses -v.
  88     #
  89     if [[ ${2:?Missing Ordinal Value} -le 0x7f ]]; then
  90       printf -v val "\\%03o" "${2}"
  91     elif [[ ${2} -le 0x7ff        ]]; then
  92       printf -v val "\\%03o" \
  93         $((  (${2}>> 6)      |0xc0 )) \
  94         $(( ( ${2}     &0x3f)|0x80 ))
  95     elif [[ ${2} -le 0xffff       ]]; then
  96       printf -v val "\\%03o" \
  97         $(( ( ${2}>>12)      |0xe0 )) \
  98         $(( ((${2}>> 6)&0x3f)|0x80 )) \
  99         $(( ( ${2}     &0x3f)|0x80 ))
 100     elif [[ ${2} -le 0x1fffff     ]]; then
 101       printf -v val "\\%03o"  \
 102         $(( ( ${2}>>18)      |0xf0 )) \
 103         $(( ((${2}>>12)&0x3f)|0x80 )) \
 104         $(( ((${2}>> 6)&0x3f)|0x80 )) \
 105         $(( ( ${2}     &0x3f)|0x80 ))
 106     elif [[ ${2} -le 0x3ffffff    ]]; then
 107       printf -v val "\\%03o"  \
 108         $(( ( ${2}>>24)      |0xf8 )) \
 109         $(( ((${2}>>18)&0x3f)|0x80 )) \
 110         $(( ((${2}>>12)&0x3f)|0x80 )) \
 111         $(( ((${2}>> 6)&0x3f)|0x80 )) \
 112         $(( ( ${2}     &0x3f)|0x80 ))
 113     elif [[ ${2} -le 0x7fffffff ]]; then
 114       printf -v val "\\%03o"  \
 115         $(( ( ${2}>>30)      |0xfc )) \
 116         $(( ((${2}>>24)&0x3f)|0x80 )) \
 117         $(( ((${2}>>18)&0x3f)|0x80 )) \
 118         $(( ((${2}>>12)&0x3f)|0x80 )) \
 119         $(( ((${2}>> 6)&0x3f)|0x80 )) \
 120         $(( ( ${2}     &0x3f)|0x80 ))
 121     else
 122       printf -v "${1:?Missing Dest Variable}" ""
 123       return 1
 124     fi
 125     printf -v "${1:?Missing Dest Variable}" "${val}"
 126   }
 127   function chr_utf8 {
 128           local val
 129           [[ ${2?Missing Ordinal Value} -lt 0x80000000 ]] || return 1
 130 
 131           if [[ ${2} -lt 0x100 && ${2} -ge 0x80 ]]; then
 132 
 133                   # bash 4.2 incorrectly encodes
 134                   # \U000000ff as \xff so encode manually
 135                   printf -v val "\\%03o\%03o" $(( (${2}>>6)|0xc0 )) $(( (${2}&0x3f)|0x80 ))
 136           else
 137                   printf -v val '\\U%08x' "${2}"
 138           fi
 139           printf -v ${1?Missing Dest Variable} ${val}
 140   }
 141   function chr_eascii {
 142           local val
 143           # Make sure value less than 0x100
 144           # otherwise we end up with
 145           # \xVVNNNNN
 146           # where \xVV = char && NNNNN is a number string
 147           # so chr "0x44321" => "D321"
 148           [[ ${2?Missing Ordinal Value} -lt 0x100 ]] || return 1
 149           printf -v val '\\x%02x' "${2}"
 150           printf -v ${1?Missing Dest Variable} ${val}
 151   }
 152   function chr {
 153           if [ "${LC_CTYPE:-${LC_ALL:-}}" = "C" ]; then
 154                   chr_eascii "${@}"
 155           else
 156                   chr_utf8 "${@}"
 157           fi
 158   }
 159   function chr_dec {
 160           # strip leading 0s otherwise
 161           # interpreted as Octal
 162           chr "${1}" "${2#${2%%[!0]*}}"
 163   }
 164   function chr_oct {
 165           chr "${1}" "0${2}"
 166   }
 167   function chr_hex {
 168           chr "${1}" "0x${2#0x}"
 169   }
 170   function chr_utf8_echo {
 171           local val
 172           [[ ${1?Missing Ordinal Value} -lt 0x80000000 ]] || return 1
 173 
 174           if [[ ${1} -lt 0x100 && ${1} -ge 0x80 ]]; then
 175 
 176                   # bash 4.2 incorrectly encodes
 177                   # \U000000ff as \xff so encode manually
 178                   printf -v val '\\%03o\\%03o' $(( (${1}>>6)|0xc0 )) $(( (${1}&0x3f)|0x80 ))
 179           else
 180                   printf -v val '\\U%08x' "${1}"
 181           fi
 182           printf "${val}"
 183   }
 184   function chr_eascii_echo {
 185           local val
 186           # Make sure value less than 0x100
 187           # otherwise we end up with
 188           # \xVVNNNNN
 189           # where \xVV = char && NNNNN is a number string
 190           # so chr "0x44321" => "D321"
 191           [[ ${1?Missing Ordinal Value} -lt 0x100 ]] || return 1
 192           printf -v val '\\x%x' "${1}"
 193           printf "${val}"
 194   }
 195   function chr_echo {
 196           if [ "${LC_CTYPE:-${LC_ALL:-}}" = "C" ]; then
 197                   chr_eascii_echo "${@}"
 198           else
 199                   chr_utf8_echo "${@}"
 200           fi
 201   }
 202   function chr_dec_echo {
 203           # strip leading 0s otherwise
 204           # interpreted as Octal
 205           chr_echo "${1#${1%%[!0]*}}"
 206   }
 207   function chr_oct_echo {
 208           chr_echo "0${1}"
 209   }
 210   function chr_hex_echo {
 211           chr_echo "0x${1#0x}"
 212   }
 213 
 214   #
 215   # Simple Validation code
 216   #
 217   function test_echo_func {
 218     local Outcome _result
 219     _result="$( "${1}" "${2}" )"
 220     [ "${_result}" = "${3}" ] && Outcome="Pass" || Outcome="Fail"
 221     printf "# %-20s %-6s => "           "${1}" "${2}"
 222     printf "[ "%16q" = "%-16q"%-5s ] "  "${_result}" "${3}" "(${3//[[:cntrl:]]/_})"
 223     printf "%s\n"                       "${Outcome}"
 224 
 225 
 226   }
 227   function test_value_func {
 228     local Outcome _result
 229     "${1}" _result "${2}"
 230     [ "${_result}" = "${3}" ] && Outcome="Pass" || Outcome="Fail"
 231     printf "# %-20s %-6s => "           "${1}" "${2}"
 232     printf "[ "%16q" = "%-16q"%-5s ] "  "${_result}" "${3}" "(${3//[[:cntrl:]]/_})"
 233     printf "%s\n"                       "${Outcome}"
 234   }
 235   test_echo_func  chr_echo "$(ord_echo  "A")"  "A"
 236   test_echo_func  ord_echo "$(chr_echo "65")"  "65"
 237   test_echo_func  chr_echo "$(ord_echo  "ö")"  "ö"
 238   test_value_func chr      "$(ord_echo  "A")"  "A"
 239   test_value_func ord      "$(chr_echo "65")"  "65"
 240   test_value_func chr      "$(ord_echo  "ö")"  "ö"
 241   # chr_echo             65     => [                A = A               (A)   ] Pass
 242   # ord_echo             A      => [               65 = 65              (65)  ] Pass
 243   # chr_echo             246    => [      $'\303\266' = $'\303\266'     (ö)  ] Pass
 244   # chr                  65     => [                A = A               (A)   ] Pass
 245   # ord                  A      => [               65 = 65              (65)  ] Pass
 246   # chr                  246    => [      $'\303\266' = $'\303\266'     (ö)  ] Pass
 247   #
 248 
 249 
 250   test_echo_func  chr_echo     "65"     A
 251   test_echo_func  chr_echo     "065"    5
 252   test_echo_func  chr_dec_echo "065"    A
 253   test_echo_func  chr_oct_echo "65"     5
 254   test_echo_func  chr_hex_echo "65"     e
 255   test_value_func chr          "65"     A
 256   test_value_func chr          "065"    5
 257   test_value_func chr_dec      "065"    A
 258   test_value_func chr_oct      "65"     5
 259   test_value_func chr_hex      "65"     e
 260   # chr_echo             65     => [                A = A               (A)   ] Pass
 261   # chr_echo             065    => [                5 = 5               (5)   ] Pass
 262   # chr_dec_echo         065    => [                A = A               (A)   ] Pass
 263   # chr_oct_echo         65     => [                5 = 5               (5)   ] Pass
 264   # chr_hex_echo         65     => [                e = e               (e)   ] Pass
 265   # chr                  65     => [                A = A               (A)   ] Pass
 266   # chr                  065    => [                5 = 5               (5)   ] Pass
 267   # chr_dec              065    => [                A = A               (A)   ] Pass
 268   # chr_oct              65     => [                5 = 5               (5)   ] Pass
 269   # chr_hex              65     => [                e = e               (e)   ] Pass
 270 
 271   #test_value_func chr          0xff   $'\xff'
 272   test_value_func chr_eascii   0xff   $'\xff'
 273   test_value_func chr_utf8     0xff   $'\uff'      # Note this fails because bash encodes it incorrectly
 274   test_value_func chr_utf8     0xff   $'\303\277'
 275   test_value_func chr_utf8     0x100  $'\u100'
 276   test_value_func chr_utf8     0x1000 $'\u1000'
 277   test_value_func chr_utf8     0xffff $'\uffff'
 278   # chr_eascii           0xff   => [          $'\377' = $'\377'         (�)   ] Pass
 279   # chr_utf8             0xff   => [      $'\303\277' = $'\377'         (�)   ] Fail
 280   # chr_utf8             0xff   => [      $'\303\277' = $'\303\277'     (ÿ)  ] Pass
 281   # chr_utf8             0x100  => [      $'\304\200' = $'\304\200'     (Ā)  ] Pass
 282   # chr_utf8             0x1000 => [  $'\341\200\200' = $'\341\200\200' (က) ] Pass
 283   # chr_utf8             0xffff => [  $'\357\277\277' = $'\357\277\277' (���) ] Pass
 284   test_value_func ord_utf8     "A"           65
 285   test_value_func ord_utf8     "ä"          228
 286   test_value_func ord_utf8     $'\303\277'  255
 287   test_value_func ord_utf8     $'\u100'     256
 288 
 289 
 290 
 291   #########################################################
 292   # to help debug problems try this
 293   #########################################################
 294   printf "%q\n" $'\xff'                  # => $'\377'
 295   printf "%q\n" $'\uffff'                # => $'\357\277\277'
 296   printf "%q\n" "$(chr_utf8_echo 0x100)" # => $'\304\200'
 297   #
 298   # This can help a lot when it comes to diagnosing problems
 299   # with read and or xterm program output
 300   # I use it a lot in error case to create a human readable error message
 301   # i.e.
 302   echo "Enter type to test, Enter to continue"
 303   while read -srN1 ; do
 304     ord asciiValue "${REPLY}"
 305     case "${asciiValue}" in
 306       10) echo "Goodbye" ; break ;;
 307       20|21|22) echo "Yay expected input" ;;
 308       *) printf ':( Unexpected Input 0x%02x %q "%s"\n' "${asciiValue}" "${REPLY}" "${REPLY//[[:cntrl:]]}" ;;
 309     esac
 310   done
 311 
 312   #########################################################
 313   # More exotic approach 1
 314   #########################################################
 315   # I used to use this before I figured out the LC_CTYPE=C approach
 316   # printf "EAsciiLookup=%q" "$(for (( x=0x0; x<0x100 ; x++)); do printf '%b' $(printf '\\x%02x' "$x"); done)"
 317   EAsciiLookup=$'\001\002\003\004\005\006\a\b\t\n\v\f\r\016\017\020\021\022\023'
 318   EAsciiLookup+=$'\024\025\026\027\030\031\032\E\034\035\036\037 !"#$%&\'()*+,-'
 319   EAsciiLookup+=$'./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghi'
 320   EAsciiLookup+=$'jklmnopqrstuvwxyz{|}~\177\200\201\202\203\204\205\206\207\210'
 321   EAsciiLookup+=$'\211\212\213\214\215\216\217\220\221\222\223\224\225\226\227'
 322   EAsciiLookup+=$'\230\231\232\233\234\235\236\237\240\241\242\243\244\245\246'
 323   EAsciiLookup+=$'\247\250\251\252\253\254\255\256\257\260\261\262\263\264\265'
 324   EAsciiLookup+=$'\266\267\270\271\272\273\274\275\276\277\300\301\302\303\304'
 325   EAsciiLookup+=$'\305\306\307\310\311\312\313\314\315\316\317\320\321\322\323'
 326   EAsciiLookup+=$'\324\325\326\327\330\331\332\333\334\335\336\337\340\341\342'
 327   EAsciiLookup+=$'\343\344\345\346\347\350\351\352\353\354\355\356\357\360\361'
 328   EAsciiLookup+=$'\362\363\364\365\366\367\370\371\372\373\374\375\376\377'
 329   function ord_eascii2 {
 330     local idx="${EAsciiLookup%%${2:0:1}*}"
 331     eval ${1}'=$(( ${#idx} +1 ))'
 332   }
 333 
 334   #########################################################
 335   # More exotic approach 2
 336   #########################################################
 337   #printf "EAsciiLookup2=(\n    %s\n)" "$(for (( x=0x1; x<0x100 ; x++)); do printf '%-18s'  "$(printf '[_%q]="0x%02x"' "$(printf "%b" "$(printf '\\x%02x' "$x")")" $x )" ; [ "$(($x%6))" != "0" ] || echo -en "\n    " ; done)"
 338   typeset -A EAsciiLookup2
 339   EAsciiLookup2=(
 340     [_$'\001']="0x01" [_$'\002']="0x02" [_$'\003']="0x03" [_$'\004']="0x04"
 341     [_$'\005']="0x05" [_$'\006']="0x06" [_$'\a']="0x07"   [_$'\b']="0x08"
 342     [_$'\t']="0x09"   [_$'\n']="0x0a"   [_$'\v']="0x0b"   [_$'\f']="0x0c"
 343     [_$'\r']="0x0d"   [_$'\016']="0x0e" [_$'\017']="0x0f" [_$'\020']="0x10"
 344     [_$'\021']="0x11" [_$'\022']="0x12" [_$'\023']="0x13" [_$'\024']="0x14"
 345     [_$'\025']="0x15" [_$'\026']="0x16" [_$'\027']="0x17" [_$'\030']="0x18"
 346     [_$'\031']="0x19" [_$'\032']="0x1a" [_$'\E']="0x1b"   [_$'\034']="0x1c"
 347     [_$'\035']="0x1d" [_$'\036']="0x1e" [_$'\037']="0x1f" [_\ ]="0x20"
 348     [_\!]="0x21"      [_\"]="0x22"      [_\#]="0x23"      [_\$]="0x24"
 349     [_%]="0x25"       [_\&]="0x26"      [_\']="0x27"      [_\(]="0x28"
 350     [_\)]="0x29"      [_\*]="0x2a"      [_+]="0x2b"       [_\,]="0x2c"
 351     [_-]="0x2d"       [_.]="0x2e"       [_/]="0x2f"       [_0]="0x30"
 352     [_1]="0x31"       [_2]="0x32"       [_3]="0x33"       [_4]="0x34"
 353     [_5]="0x35"       [_6]="0x36"       [_7]="0x37"       [_8]="0x38"
 354     [_9]="0x39"       [_:]="0x3a"       [_\;]="0x3b"      [_\<]="0x3c"
 355     [_=]="0x3d"       [_\>]="0x3e"      [_\?]="0x3f"      [_@]="0x40"
 356     [_A]="0x41"       [_B]="0x42"       [_C]="0x43"       [_D]="0x44"
 357     [_E]="0x45"       [_F]="0x46"       [_G]="0x47"       [_H]="0x48"
 358     [_I]="0x49"       [_J]="0x4a"       [_K]="0x4b"       [_L]="0x4c"
 359     [_M]="0x4d"       [_N]="0x4e"       [_O]="0x4f"       [_P]="0x50"
 360     [_Q]="0x51"       [_R]="0x52"       [_S]="0x53"       [_T]="0x54"
 361     [_U]="0x55"       [_V]="0x56"       [_W]="0x57"       [_X]="0x58"
 362     [_Y]="0x59"       [_Z]="0x5a"       [_\[]="0x5b"      #[_\\]="0x5c"
 363     #[_\]]="0x5d"
 364                       [_\^]="0x5e"      [__]="0x5f"       [_\`]="0x60"
 365     [_a]="0x61"       [_b]="0x62"       [_c]="0x63"       [_d]="0x64"
 366     [_e]="0x65"       [_f]="0x66"       [_g]="0x67"       [_h]="0x68"
 367     [_i]="0x69"       [_j]="0x6a"       [_k]="0x6b"       [_l]="0x6c"
 368     [_m]="0x6d"       [_n]="0x6e"       [_o]="0x6f"       [_p]="0x70"
 369     [_q]="0x71"       [_r]="0x72"       [_s]="0x73"       [_t]="0x74"
 370     [_u]="0x75"       [_v]="0x76"       [_w]="0x77"       [_x]="0x78"
 371     [_y]="0x79"       [_z]="0x7a"       [_\{]="0x7b"      [_\|]="0x7c"
 372     [_\}]="0x7d"      [_~]="0x7e"       [_$'\177']="0x7f" [_$'\200']="0x80"
 373     [_$'\201']="0x81" [_$'\202']="0x82" [_$'\203']="0x83" [_$'\204']="0x84"
 374     [_$'\205']="0x85" [_$'\206']="0x86" [_$'\207']="0x87" [_$'\210']="0x88"
 375     [_$'\211']="0x89" [_$'\212']="0x8a" [_$'\213']="0x8b" [_$'\214']="0x8c"
 376     [_$'\215']="0x8d" [_$'\216']="0x8e" [_$'\217']="0x8f" [_$'\220']="0x90"
 377     [_$'\221']="0x91" [_$'\222']="0x92" [_$'\223']="0x93" [_$'\224']="0x94"
 378     [_$'\225']="0x95" [_$'\226']="0x96" [_$'\227']="0x97" [_$'\230']="0x98"
 379     [_$'\231']="0x99" [_$'\232']="0x9a" [_$'\233']="0x9b" [_$'\234']="0x9c"
 380     [_$'\235']="0x9d" [_$'\236']="0x9e" [_$'\237']="0x9f" [_$'\240']="0xa0"
 381     [_$'\241']="0xa1" [_$'\242']="0xa2" [_$'\243']="0xa3" [_$'\244']="0xa4"
 382     [_$'\245']="0xa5" [_$'\246']="0xa6" [_$'\247']="0xa7" [_$'\250']="0xa8"
 383     [_$'\251']="0xa9" [_$'\252']="0xaa" [_$'\253']="0xab" [_$'\254']="0xac"
 384     [_$'\255']="0xad" [_$'\256']="0xae" [_$'\257']="0xaf" [_$'\260']="0xb0"
 385     [_$'\261']="0xb1" [_$'\262']="0xb2" [_$'\263']="0xb3" [_$'\264']="0xb4"
 386     [_$'\265']="0xb5" [_$'\266']="0xb6" [_$'\267']="0xb7" [_$'\270']="0xb8"
 387     [_$'\271']="0xb9" [_$'\272']="0xba" [_$'\273']="0xbb" [_$'\274']="0xbc"
 388     [_$'\275']="0xbd" [_$'\276']="0xbe" [_$'\277']="0xbf" [_$'\300']="0xc0"
 389     [_$'\301']="0xc1" [_$'\302']="0xc2" [_$'\303']="0xc3" [_$'\304']="0xc4"
 390     [_$'\305']="0xc5" [_$'\306']="0xc6" [_$'\307']="0xc7" [_$'\310']="0xc8"
 391     [_$'\311']="0xc9" [_$'\312']="0xca" [_$'\313']="0xcb" [_$'\314']="0xcc"
 392     [_$'\315']="0xcd" [_$'\316']="0xce" [_$'\317']="0xcf" [_$'\320']="0xd0"
 393     [_$'\321']="0xd1" [_$'\322']="0xd2" [_$'\323']="0xd3" [_$'\324']="0xd4"
 394     [_$'\325']="0xd5" [_$'\326']="0xd6" [_$'\327']="0xd7" [_$'\330']="0xd8"
 395     [_$'\331']="0xd9" [_$'\332']="0xda" [_$'\333']="0xdb" [_$'\334']="0xdc"
 396     [_$'\335']="0xdd" [_$'\336']="0xde" [_$'\337']="0xdf" [_$'\340']="0xe0"
 397     [_$'\341']="0xe1" [_$'\342']="0xe2" [_$'\343']="0xe3" [_$'\344']="0xe4"
 398     [_$'\345']="0xe5" [_$'\346']="0xe6" [_$'\347']="0xe7" [_$'\350']="0xe8"
 399     [_$'\351']="0xe9" [_$'\352']="0xea" [_$'\353']="0xeb" [_$'\354']="0xec"
 400     [_$'\355']="0xed" [_$'\356']="0xee" [_$'\357']="0xef" [_$'\360']="0xf0"
 401     [_$'\361']="0xf1" [_$'\362']="0xf2" [_$'\363']="0xf3" [_$'\364']="0xf4"
 402     [_$'\365']="0xf5" [_$'\366']="0xf6" [_$'\367']="0xf7" [_$'\370']="0xf8"
 403     [_$'\371']="0xf9" [_$'\372']="0xfa" [_$'\373']="0xfb" [_$'\374']="0xfc"
 404     [_$'\375']="0xfd" [_$'\376']="0xfe" [_$'\377']="0xff"
 405   )
 406   function ord_eascii3 {
 407         local -i val="${EAsciiLookup2["_${2:0:1}"]-}"
 408         if [ "${val}" -eq 0 ]; then
 409                 case "${2:0:1}" in
 410                         ])  val=0x5d ;;
 411                         \\) val=0x5c ;;
 412                 esac
 413         fi
 414         eval "${1}"'="${val}"'
 415   }
 416   # for fun check out the following
 417   time for (( i=0 ; i <1000; i++ )); do ord TmpVar 'a'; done
 418   #  real 0m0.065s
 419   #  user 0m0.048s
 420   #  sys  0m0.000s
 421 
 422   time for (( i=0 ; i <1000; i++ )); do ord_eascii TmpVar 'a'; done
 423   #  real 0m0.239s
 424   #  user 0m0.188s
 425   #  sys  0m0.000s
 426 
 427   time for (( i=0 ; i <1000; i++ )); do ord_utf8 TmpVar 'a'; done
 428   #  real       0m0.225s
 429   #  user       0m0.180s
 430   #  sys        0m0.000s
 431 
 432   time for (( i=0 ; i <1000; i++ )); do ord_eascii2 TmpVar 'a'; done
 433   #  real 0m1.507s
 434   #  user 0m1.056s
 435   #  sys  0m0.012s
 436 
 437   time for (( i=0 ; i <1000; i++ )); do ord_eascii3 TmpVar 'a'; done
 438   #  real 0m0.147s
 439   #  user 0m0.120s
 440   #  sys  0m0.000s
 441 
 442   time for (( i=0 ; i <1000; i++ )); do ord_echo 'a' >/dev/null ; done
 443   #  real       0m0.065s
 444   #  user       0m0.044s
 445   #  sys        0m0.016s
 446 
 447   time for (( i=0 ; i <1000; i++ )); do ord_eascii_echo 'a' >/dev/null ; done
 448   #  real       0m0.089s
 449   #  user       0m0.068s
 450   #  sys        0m0.008s
 451 
 452   time for (( i=0 ; i <1000; i++ )); do ord_utf8_echo 'a' >/dev/null ; done
 453   #  real       0m0.226s
 454   #  user       0m0.172s
 455   #  sys        0m0.012s

73. How can I ensure my environment is configured for cron, batch, and at jobs?

If a shell or other script calling shell commands runs fine interactively but fails due to environment configurations (say: a complex $PATH) when run noninteractively, you'll need to force your environment to be properly configured.

You can write a shell wrapper around your script which configures your environment. You may also want to have a "testenv" script (bash or other scripting language) which tests what shell and environment are present when running under different configurations.

In cron, you can invoke Bash (or the Bourne shell) with the '-c' option, source your init script, then invoke your command, eg:

  * * * * *  /bin/bash -c ". myconfig.bashrc; myscript"

Another approach would be to have myscript dot in the configuration file itself, if it's a rather static configuration. (Or, conditionally dot it in, if you find a certain variable to be missing from your environment... the possibilities are numerous.)
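
For instance, a minimal sketch of the "conditionally dot it in" idea (MY_APP_HOME is a made-up variable that myconfig.bashrc is assumed to set; use whatever marker fits your configuration):

# near the top of myscript: if the environment looks unconfigured, source the config
if [ -z "$MY_APP_HOME" ]; then
    . "$HOME/myconfig.bashrc"
fi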

The at and batch utilities copy the current environment (except for the variables TERM, DISPLAY and _) as part of the job metadata, and should recreate it when the job is executed. If this isn't the case you'll want to test the environment and/or explicitly initialize it similarly to cron above.

74. How can I use parameter expansion? How can I get substrings? How can I get a file without its extension, or get just a file's extension? What are some good ways to do basename and dirname?

Parameter expansion is an important subject; this page gives only a concise overview of it.

Parameter Expansion substitutes a variable or special parameter for its value. It is the primary way of dereferencing (referring to) variables in Bourne-like shells such as Bash. Parameter expansion can also perform various operations on the value at the same time for convenience. Remember to quote your expansions.

The first set of capabilities involves removing a substring, from either the beginning or the end of a parameter. Here's an example using parameter expansion with something akin to a hostname (dot-separated components):

parameter     result
-----------   ------------------------------
$name         polish.ostrich.racing.champion
${name#*.}           ostrich.racing.champion
${name##*.}                         champion
${name%%.*}   polish
${name%.*}    polish.ostrich.racing

And here's an example of the parameter expansions for a typical filename:

parameter     result
-----------   --------------------------------------------------------
$file         /usr/share/java-1.4.2-sun/demo/applets/Clock/Clock.class
${file#*/}     usr/share/java-1.4.2-sun/demo/applets/Clock/Clock.class
${file##*/}                                                Clock.class
${file%%/*}
${file%/*}    /usr/share/java-1.4.2-sun/demo/applets/Clock

US keyboard users may find it helpful to observe that, on the keyboard, the "#" is to the left of the "%" symbol. Mnemonically, "#" operates on the left side of a parameter, and "%" operates on the right. The glob after the "%" or "%%" or "#" or "##" specifies what pattern to remove from the parameter expansion. Another mnemonic is that in an English sentence "#" usually comes before a number (e.g., "The #1 Bash reference site"), while "%" usually comes after a number (e.g., "Now 5% discounted"), so they operate on those sides.

You cannot nest parameter expansions. If you need to perform two expansion steps, use a variable to hold the result of the first expansion:

# foo holds: key="some value"
bar=${foo#*=\"} bar=${bar%\"*}
# now bar holds: some value

Here are a few more examples (but please see the real documentation for a list of all the features!). I include these mostly so people won't break the wiki again, trying to add new questions that answer this stuff.

${string:2:1}   # The third character of string (0, 1, 2 = third)
${string:1}     # The string starting from the second character
                # Note: this is equivalent to ${string#?}
${string%?}     # The string with its last character removed.
${string: -1}   # The last character of string
${string:(-1)}  # The last character of string, alternate syntax
                # Note: string:-1 means something entirely different; see below.

${file%.mp3}    # The filename without the .mp3 extension
                # Very useful in loops of the form: for file in *.mp3; do ...
${file%.*}      # The filename without its last extension
${file%%.*}     # The filename without all of its extensions
${file##*.}     # The extension only, assuming there is one. If not, will expand to: $file

74.1. Examples of Filename Manipulation

Here is one POSIX-compliant way to take a full pathname and pull it apart: extract the directory component, the filename, just the extension, the filename without the extension (the "stub"), and any numeric portion occurring at the end of the stub (ignoring any digits in the middle of the filename); perform arithmetic on that number (in this case, incrementing it by one); and reassemble the entire filename, adding a prefix and replacing the number in the filename with the new one.

FullPath=/path/to/name4afile-009.ext     # result:   #   /path/to/name4afile-009.ext
Filename=${FullPath##*/}                             #   name4afile-009.ext
PathPref=${FullPath%"$Filename"}                     #   /path/to/
FileStub=${Filename%.*}                              #   name4afile-009
FileExt=${Filename#"$FileStub"}                      #   .ext
FnumPossLeading0s=${FileStub##*[![:digit:]]}         #   009
FnumOnlyLeading0s=${FnumPossLeading0s%%[!0]*}        #   00
FileNumber=${FnumPossLeading0s#"$FnumOnlyLeading0s"} #   9
NextNumber=$(( FileNumber + 1 ))                     #   10
NextNumberWithLeading0s=$(printf "%0${#FnumPossLeading0s}d" "$NextNumber")
                                                     #   010
FileStubNoNum=${FileStub%"$FnumPossLeading0s"}       #   name4afile-
NewFullPath=${PathPref}New_${FileStubNoNum}${NextNumberWithLeading0s}${FileExt}
                        # Final result is:           #   /path/to/New_name4afile-010.ext

Note that trying to get the directory component with PathPref="${FullPath%/*}" does not return an empty string when $FullPath is "SomeFilename.ext" or some other pathname without a slash; it returns the whole pathname unchanged. Similarly, trying to get the file extension with FileExt="${Filename#*.}" does not return an empty string when $Filename has no dot (and thus no extension). That is why the example above subtracts one already-extracted piece from another instead.

Also note that it is necessary to get rid of leading zeroes for $FileNumber in order to perform arithmetic on it, or else the number is interpreted as octal. Alternatively, one can add a 10# prefix to force base 10. In the example above, trying to calculate $(( FnumPossLeading0s + 1 )) directly results in an error, since "009" is not a valid octal number. If the file had been numbered "00777" instead, there would have been no error, but $(( FnumPossLeading0s + 1 )) would silently yield 512 (octal 777 is decimal 511), which is probably not what was intended. See ArithmeticExpression.
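
For example, forcing base 10 on the value from the example above:

NextNumber=$(( 10#$FnumPossLeading0s + 1 ))      # 10#009 is 9, so NextNumber is 10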

Quoting is not needed in variable assignment, since WordSplitting does not occur. On the other hand, variables referenced inside a parameter expansion need to be quoted (for example, quote $Filename in PathPref=${FullPath%"$Filename"}), or else any * or ? or other such characters within the filename would incorrectly become part of the parameter expansion pattern (for example, if an asterisk is the first character in the filename -- try FullPath="dir/*filename").

74.2. Bash 4

Bash 4 introduced some additional parameter expansions: toupper (^) and tolower (,).

# string='hello, World!'
parameter     result
-----------   --------------------------------------------------------
${string^}    Hello, World! # First character to uppercase
${string^^}   HELLO, WORLD! # All characters to uppercase
${string,}    hello, World! # First character to lowercase
${string,,}   hello, world! # All characters to lowercase
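
These expansions also accept an optional pattern after the operator; only the characters matching the pattern are converted. A quick sketch:

$ string='hello, World!'
$ echo "${string^^[a-f]}"
hEllo, WorlD!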

Bash 4.4 introduced another set of expansions, which it calls Parameter transformation:

${string@Q}     # Quote to be reused as input, like printf %q
${string@E}     # Expand C-style backslash combos, similar to printf %b
${string@P}     # Prompt string expansion, using the rules for PS1
${string@A}     # Assignment statement that would recreate the variable, similar to declare -p output
${string@a}     # The variable's attribute flags (the letters declare would show)

In action:

$ string=$'nice "day" isn\'t it?' ; echo "${string@Q}"
'nice "day" isn'\''t it?'

$ string='hello\tworld' ; echo "${string@E}"
hello   world

$ string='\h:\w\$ ' ; echo "${string@P}"
wooledg:~$ 

$ string=hello ; echo "${string@A}"
string='hello'
$ a=(an array); echo "${a[@]@A}"
declare -a a=([0]="an" [1]="array")
$ declare -ri i=3 ; echo "${i@A}"
declare -ir i='3'

$ echo "${string@a}"

$ echo "${a@a}" "${i@a}"
a ir

74.3. Parameter Expansion on Arrays

BASH arrays are remarkably flexible, because they are well integrated with the other shell expansions. Any parameter expansion that can be carried out on a scalar or individual array element can equally apply to an entire array or the set of positional parameters such that all members are expanded at once, possibly with an additional operation mapped across each element. This is done by expanding parameters of the form @, *, arrayname[@] and arrayname[*]. It is critical that these special expansions be quoted properly - almost always that means double-quoting (e.g. "$@" or "${cmd[@]}") - so that the members are treated literally as individual words, regardless of their content. For example, arr=("${list[@]}" foo) correctly handles all elements in the list array.

First the expansions:

$ a=(alpha beta gamma)  # assign to our base array via compound assignment
$ echo "${a[@]#a}"      # chop 'a' from the beginning of every member
lpha beta gamma
$ echo "${a[@]%a}"      # from the end
alph bet gamm
$ echo "${a[@]//a/f}"   # substitution
flphf betf gfmmf

The following expansions (substitute at beginning or end) are very useful for the next part:

$ echo "${a[@]/#a/f}"   # replace a with f at the start
flpha beta gamma
$ echo "${a[@]/%a/f}"   # at end
alphf betf gammf

We use these to prefix or suffix every member of the list:

$ echo "${a[@]/#/a}"    # prepend a to every member
aalpha abeta agamma     #    (thanks to floyd-n-milan for this)
$ echo "${a[@]/%/a}"    # append a to end
alphaa betaa gammaa

This works by substituting an empty string at beginning or end with the value we wish to append.

So finally, a quick example of how you might use this in a script, say to add a user-defined prefix to every target:

$ PFX=inc_
$ a=("${a[@]/#/$PFX}")
$ echo "${a[@]}"
inc_alpha inc_beta inc_gamma

This is very useful, as you might imagine, since it saves looping over every member of the array.

The special parameter @ can also be used as an array for purposes of parameter expansions:

${@:(-2):1}             # the second-to-last parameter
${@: -2:1}              # alternative syntax

You can't write ${@:-2:1} (with no space before the minus sign), however, because that would be parsed as the ${parameter:-word} syntax described in the next section.

74.4. Portability

The original Bourne shell (7th edition Unix) only supports a very limited set of parameter expansion options:

${var-word}             # if var is defined, use var; otherwise, "word"
${var+word}             # if var is defined, use "word"; otherwise, nothing
${var=word}             # if var is defined, use var; otherwise, use "word" AND...
                        #   also assign "word" to var
${var?error}            # if var is defined, use var; otherwise print "error" and exit

These are the only completely portable expansions available.

POSIX shells (as well as KornShell and BASH) offer those, plus a slight variant:

${var:-word}             # if var is defined AND NOT EMPTY, use var; otherwise, "word"
similarly for ${var:+word} etc.
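
A typical use of the default-value forms (the variable name and fallback are just an example; the colon-less ${TMPDIR=/tmp} spelling is the one that works even in the original Bourne shell):

: "${TMPDIR:=/tmp}"      # assign a default if TMPDIR is unset or empty
echo "temporary files go in $TMPDIR"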

POSIX, Korn (all versions) and Bash all support the ${var#word}, ${var%word}, ${var##word} and ${var%%word} expansions.

ksh88 does not support ${var/replace/with} or ${var//replace/all}, but ksh93 and Bash do.

ksh88 does not support fancy expansion with arrays (e.g., ${a[@]%.gif}) but ksh93 and Bash do.

ksh88 does not support the arr=(...) style of compound assignment. Either use set -A arrayname -- elem1 elem2 ..., or assign each element individually with arr[0]=val1 arr[1]=val2 ...

75. How do I get the effects of those nifty Bash Parameter Expansions in older shells?

Most of the extended forms of parameter expansion do not work with the older BourneShell. If your code needs to be portable to that shell as well, sed and expr can often be used.

For example, to remove the filename extension part:

   1 for file in ./*.doc
   2 do
   3     base=`echo "$file" | sed 's/\.[^.]*$//'`    # remove everything starting with last '.'
   4     mv "$file" "$base".txt
   5 done

Another example, this time to remove the last character of a variable:

   1 var=`expr "$var" : '\(.*\).'`

or (using sed):

    var=`echo "$var" | sed 's/.$//'`
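
The prefix-stripping expansions can be approximated the same way. For example, a sketch of ${file##*/} (everything after the last slash, i.e. the basename) for an old Bourne shell:

    base=`echo "$file" | sed 's|.*/||'`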

76. How do I use 'find'? I can't understand the man page at all!

See UsingFind.

77. How do I get the sum of all the numbers in a column?

This and all similar questions are best answered with an AWK one-liner.

awk '{sum += $1} END {print sum}' myfile

A small bit of effort can adapt this to most similar tasks (finding the average, skipping lines with the wrong number of fields, etc.).
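
For instance, a sketch of the average of column 1, guarding against division by zero on an empty file:

awk '{sum += $1; n++} END {if (n) print sum / n}' myfile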

For more examples of using awk, see handy one-liners for awk.

77.1. BASH Alternatives

# One number per line.
sum=0; while read -r line; do (( sum += line )); done < "myfile"; echo "$sum"

# Add numbers in field 3.
sum=0; while read -r -a fields; do (( sum += ${fields[2]} )); done < "myfile"; echo "$sum"

# Do the same for a file where the rows are not lines but separated by semicolons, and fields are comma delimited.
sum=0; while IFS=, read -rd ';' -a fields; do (( sum += ${fields[2]} )); done < "myfile"; echo "$sum"

# Note that for the above, the file needs to end with a ';' after the last row.  If it doesn't, you can replace ''< "myfile"'' by ''<<< "$(<myfile);"'' to add the semicolon so ''read'' can see the last row.

78. How do I log history or "secure" bash against history removal?

If you're a shell user who wants to record your own activities, see FAQ #88 instead. If you're a system administrator who wants to find out what a user executed when they have unset or /dev/null'ed their shell history, there are several problems with this.

The first issue is:

  • kill -9 $$

This innocuous looking command does what you would presume it to: it kills the current shell off. However, the .bash_history is ONLY written to disk when bash is allowed to exit cleanly. As such, sending SIGKILL to bash will prevent logging to .bash_history.

Users may also set variables that disable shell history, or simply make their .bash_history a symlink to /dev/null. All of these will defeat any attempt to spy on them through their .bash_history file. The simplest method is to do

  • unset HISTFILE

and the history won't be written even if the user exits the shell cleanly.

The second issue is permissions. The bash shell is executed as a user. This means that the user can read or write all content produced by or handled by the shell. Any location you want bash to log to MUST be writable by the user running bash. However, this means that the user you're trying to spy on can simply erase the information from the log.

The third issue is location. Assume that you pursue a chroot jail for your bash users. This is a fantastic idea, and a good step towards securing your server. However, placing your users in a chroot jail adversely affects your ability to log their actions. Once jailed, a user can only write to locations within that specific jail. This makes finding user-writable extraneous logs a simple matter, and lets the attacker find your hidden logs much more easily than would otherwise be the case.

Where does this leave you? Unfortunately, nowhere good, and definitely not what you wanted to know. If you want to record all of the commands issued to bash by a user, the first requirement is to modify bash so that it actually records them, in real time, as they are executed -- not when the user logs off. The second requirement is to log them in such a way that the user cannot go back and erase the logs (which means, not just appending to a file).

This is still not reliable, though, because end users may simply upload their own shell and run that instead of your hacked bash. Or they may use one of the other shells already on your system, instead of your hacked bash.

Bash 4.1 has a compile-time configuration option to enable logging all commands through syslog(3). (Note that this only helps if users actually use that shell, as discussed above.)

For those who absolutely must have some sort of logging functionality in older versions of bash, you can use the patch located at http://wooledge.org/~greg/bash_logging.txt (patch submitted by _sho_ -- use at your own risk. The results of a code-review with improvements are here: http://phpfi.com/220302 -- Heiner. Unfortunately, that URL seems to have expired now.). Note that this patch does not use syslog. It relies on the user not noticing the log file.

For a more serious approach to the problem of tracking what your users are doing, consider BSD process accounting (kernel-based) instead of focusing on shells.

79. I want to set a user's password using the Unix passwd command, but how do I script that? It doesn't read standard input!

OK, first of all, I know there are going to be some people reading this, right now, who don't even understand the question. Here, this does not work:

{ echo oldpass; echo newpass; echo newpass; } | passwd
# This DOES NOT WORK!

Nothing you can do in bash can possibly work. The traditional passwd(1) does not read from standard input. This is intentional. It is for your protection. Passwords were never intended to be put into programs, or generated by programs. They were intended to be entered only by the fingers of an actual human being, with a functional brain, and never, ever written down anywhere. So before you continue, consider the possibility that the authors of passwd(1) were on to something, and you probably shouldn't be trying to script passwd(1) input.

Nonetheless, we get hordes of users asking how they can circumvent 35 years of Unix security. And we get people contributing their favorite security-removing "solutions" to this page. If you still think this is what you want, read on.

79.1. Construct your own hashed password and write it to some file

The first approach involves constructing your own hashed password (DES, MD5, Blowfish, or whatever your OS uses) using nonstandard tools such as http://wooledge.org/~greg/crypt/ or Debian/Ubuntu's mkpasswd package. You would then write that hashed password, along with additional fields, in a line in your system's local password-hash file (which may be /etc/passwd, or /etc/shadow, or /etc/master.passwd, or /etc/security/passwd, or ...). This requires that you read the relevant man pages on your system, find out where the password hash goes, what formatting the file requires, and then construct code that writes it out in that format.

A minor variant of this involves using a system-specific tool to write the line for you, given the hashed password that you constructed. For example, on Debian/Ubuntu, we've been told that useradd -m joe -s /bin/bash -p "$(mkpasswd "$password")" might work.
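
For instance, on GNU/Linux systems that provide OpenSSL and the shadow-utils chpasswd(8), something along these lines is sometimes used (a sketch only; "joe" and the $password variable are placeholders, and it must be run as root):

echo "joe:$(openssl passwd -1 "$password")" | chpasswd -e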

79.2. Fool the computer into thinking you are a human

The second approach is to use Expect or its python equivalent. I think Expect even has this exact problem as one of its canonical examples.

79.3. Find some magic system-specific tool

Finally, system-specific tools designed to do this may already exist on your platform. We've already mentioned useradd. Some GNU/Linux systems also have a newusers(8) command specifically designed for this, or a chpasswd(8) tool which can be coerced into doing these sorts of things. Or they may have a --stdin flag on their passwd command. Also try commands such as apropos users or man -k account to see what else might exist. Be creative.

See also FAQ #69 -- I want to automate an ssh (or scp, or sftp) connection.


79.4. Don't rely on /dev/tty for security

As an aside, the reverse of this FAQ is also a problem. It's trivial, at least under Linux, to wrap any program in a way that forces the controlling terminal to be an abstraction that's connected to any kind of I/O you like. This means it's very difficult to securely guarantee that a user with local access is actually giving your program input directly from a keyboard. Often people do this by reading from /dev/tty. This, just like the way the passwd program works, is only a small step to discourage bad security practices like storing passwords in plain text files. The following runs Bash, which reads a program on FD 3, which unwittingly gets its input through a pipe (which could just as easily be a file), using just one function from the Python standard library.

 ~ $ { echo 'o hi there' | python -c 'import pty; pty.spawn(["bash", "/dev/fd/3"])'; } <<"EOF" 3<&0- <&2 # <&2 prevents disconnecting echo's stdin. No real effect.
{
    stty -echo
    read -p 'Password: ' passw
    printf '\npassword is: %s\n' "$passw"
    stty echo
} </dev/tty
EOF

o hi there
Password:
password is: o hi there


# version without using Python

#{ echo 'o hi there' | script -c "bash /dev/fd/3" /dev/null; } <<"EOF" 3<&- 3<&0- <&2  # Linux
{ echo 'o hi there' | script -q /dev/null bash /dev/fd/3; } <<"EOF" 3<&- 3<&0- <&2     # FreeBSD; Mac OS X
{
    stty -echo
    read -p 'Password: ' passw
    printf '\npassword is: %s\n' "$passw"
    stty echo
} </dev/tty
EOF

Additionally, reading from /dev/tty is just plain annoying because it breaks the way users expect their redirections to work. Just don't do it. Better is to use [[ -t 0 ]] to test for a tty and handle the condition accordingly. Even this can be annoying when a sysadmin is expecting certain behavior that changes depending on I/O. If you must use either of these tricks, document it, and provide an option to disable any I/O conditional behavior.

80. How can I grep for lines containing foo AND bar, foo OR bar? Or for files containing foo AND bar, possibly on separate lines? Or files containing foo but NOT bar?

This is really four different questions, so we'll break this answer into parts.

80.1. foo AND bar on the same line

The easiest way to match lines that contain both foo AND bar is to use two grep commands:

   1 grep foo | grep bar
   2 grep foo -- "$myfile" | grep bar   # for those who need the hand-holding

It can also be done with one grep, although (as you can probably guess) this doesn't really scale well to more than two patterns:

   1 grep -E 'foo.*bar|bar.*foo'

If you prefer, you can achieve this in one sed or awk statement:

   1 sed '/foo/!d; /bar/!d'
   2 awk '/foo/ && /bar/'

If you need to scale the awk solution to an arbitrary number of patterns, you can write a function like this:

   1 # POSIX
   2 multimatch() { # usage: multimatch pattern...
   3   awk '
   4     BEGIN {
   5       for ( i = 1; i < ARGC; i++ )
   6         a[i] = ARGV[i]
   7       ARGC = 1
   8     }
   9     {
  10       for (i in a)
  11         if ($0 !~ a[i])
  12           next
  13       print
  14     }' "$@"
  15 }
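
It can then be called like this (a sketch; it reads standard input):

multimatch foo bar baz < "$myfile"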

80.2. foo OR bar on the same line

There are lots of ways to match lines containing foo OR bar. grep can be given multiple patterns with -e:

   1 grep -e 'foo' -e 'bar'

Or you can separate the patterns with newlines:

   1 grep 'foo
   2 bar'

Or you can construct one pattern with grep -E:

   1 grep -E 'foo|bar'

(You can't use the | union operator with plain grep. | is only available in Extended Regular Expressions.)

It can also be done with sed, awk, etc.

   1 sed -n -e '/foo/{ p; d; }' -e '/bar/{ p; d; }'
   2 awk '/foo|bar/'

The awk approach has the advantage of letting you use awk's other features on the matched lines, such as extracting only certain fields.

To match lines that do not contain "foo" AND do not contain "bar":

   1 grep -E -v 'foo|bar'

Or using sed, or awk:

   1 sed -e '/foo/d' -e '/bar/d'
   2 awk '!/foo|bar/'

80.3. foo AND bar in the same file, not necessarily on the same line

If you want to match files (rather than lines) that contain both "foo" and "bar", there are several possible approaches. The simplest (although not necessarily the most efficient) is to read the file twice:

if grep -q foo "$myfile" && grep -q bar "$myfile"; then
  printf 'Found both\n'
fi

The double grep -q solution has the advantage of stopping each read whenever it finds a match; so if you have a huge file, but the matched words are both near the top, it will only read the first part of the file. Unfortunately, if the matches are near the bottom (worst case: very last line of the file), you may read the whole file two times.

Another approach is to read the file once, keeping track of what you've seen as you go along. In awk:

if awk '/foo/{a=1} /bar/{b=1} a&&b{exit} END{if(a&&b){exit 0};exit 1}' "$myfile"; then
  printf 'Found both\n'
fi

It reads the file one time, stopping when both patterns have been matched. No matter what happens, the END block is then executed, and the exit status is set accordingly.

If you want to do additional checking of the file's contents, this awk solution can be adapted quite easily.

A perl one-liner that scales to any number of patterns, while also reading each input file only once:

perl -e '@pat=("foo","bar"); local $/; L: for $f (@ARGV){open(FH,"<",$f); $a=<FH>; for(@pat){next L unless $a =~ $_} print "$f\n"}'

80.4. foo but NOT bar in the same file, possibly on different lines

This is a variant of the previous case. The advantage here is that if we find "bar", we can stop reading. Here's an awk solution:

awk '/foo/{good=1} /bar/{good=0;exit} END{exit !good}'
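
As with the previous examples, its exit status can drive an if:

if awk '/foo/{good=1} /bar/{good=0;exit} END{exit !good}' "$myfile"; then
  printf 'Found foo but not bar\n'
fi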

81. How can I make an alias that takes an argument?

You can't. Aliases in bash are extremely rudimentary, and not really suitable to any serious purpose. The bash man page even says so explicitly:

  • There is no mechanism for using arguments in the replacement text. If arguments are needed, a shell function should be used (see FUNCTIONS below).

Use a function instead. For example,

settitle() {
  case "$TERM" in *xterm*|*rxvt*)
    printf '\e]2;%s\a' "$1"
  esac
}

Oh, by the way: aliases are not expanded in scripts. They're only enabled in interactive shells, and that's simply because users would cry too loudly if they were removed altogether. If you're writing a script, always use a function instead.

82. How can I determine whether a command exists anywhere in my PATH?

POSIX specifies a shell builtin called command which can be used for this purpose:

# POSIX
if command -v qwerty >/dev/null; then
  echo qwerty exists
else
  echo qwerty does not exist
fi

In BASH, there are a couple more builtins that may also be used: hash and type. Here's an example using hash:

# Bash
if hash qwerty 2>/dev/null; then
  echo qwerty exists
else
  echo qwerty does not exist
fi

Or, if you prefer type:

# Bash
# type -P forces a PATH search, skipping builtins and so on
if type -P qwerty >/dev/null; then
  echo qwerty exists
else
  echo qwerty does not exist
fi

KornShell and zsh have whence instead:

# ksh/zsh
if whence -p qwerty >/dev/null; then
  echo qwerty exists
else
  echo qwerty does not exist
fi

The command builtin also returns true for shell builtins (unlike type -P). If you absolutely must check only PATH, the only POSIX way is to iterate over it:

# POSIX
IsInPath ()
(
  [ "$#" = 1 ] && [ "$1" ] || return 2
  set -f; IFS=:
  for dir in $PATH$IFS; do
    [ -x "${dir:-.}/$1" ] && return
  done
  return 1
)

if IsInPath qwerty; then
  echo qwerty exists
else
  echo qwerty does not exist
fi

Note that the function defined above uses parentheses around the body rather than the normal curly braces. This makes the body run in a subshell, and is the reason we don't need to undo set -f or IFS.

The iterative approach is also used in configure scripts. Here's a simplified version of such a test:

# Bourne
save_IFS=$IFS
IFS=:
found=no
for dir in $PATH; do
  if test -x "$dir/qwerty"; then
    echo "qwerty is installed (in $dir)"
    found=yes
    break
  fi
done
IFS=$save_IFS
if test "$found" = no; then
  echo "qwerty is not installed"
fi

Real configure scripts are generally much more complicated than this, since they may deal with systems where $PATH is not delimited by colons; or systems where executable programs may have optional extensions like .EXE; or $PATH variables that have the current working directory included in them as an empty string; etc. If you're interested in such things, I suggest reading an actual GNU autoconf-generated configure script. They're far too large and complicated to include in this FAQ.

The command which (which is often a csh script, although sometimes a compiled binary) is not reliable for this purpose. which may not set a useful exit code, and it may not even write errors to stderr. Therefore, in order to have a prayer of successfully using it, one must parse its output (wherever that output may be written).

# Bourne.  Last resort -- using which(1)
tmpval=`LC_ALL=C which qwerty 2>&1`
if test "$?" -ne 0; then
  # FOR NOW, we'll assume that if this machine's which(1) sets a nonzero
  # exit status, that it actually failed.  I've yet to see any case where
  # which(1) sets an erroneous failure -- just erroneous "successes".
  echo "qwerty is not installed.  Please install it."

else
    # which returned 0, but that doesn't mean it succeeded.  Look for known error strings.
    case "$tmpval" in
      *no\ *\ in\ *|*not\ found*|'')
        echo "qwerty is not installed.  Please install it."
        ;;
      *)
        echo "Congratulations -- it seems you have qwerty (in $tmpval)."
        ;;
    esac
fi

Note that which(1)'s output when a command is not found is not consistent across platforms. On HP-UX 10.20, for example, it prints no qwerty in /path /path /path ...; on OpenBSD 4.1, it prints qwerty: Command not found.; on Debian (3.1 through 5.0 at least) and SuSE, it prints nothing at all; on Red Hat 5.2, it prints which: no qwerty in (/path:/path:...); on Red Hat 6.2, it writes the same message, but on standard error instead of standard output; and on Gentoo, it writes something on stderr.

We strongly recommend not using which. Use one of the builtins or the iterative approaches instead.

83. Why is $(...) preferred over `...` (backticks)?

`...` is the legacy syntax for command substitution, required by only the very oldest of non-POSIX-compatible Bourne shells. There are several reasons to always prefer the $(...) syntax:

83.1. Important differences

  • Backslashes (\) inside backticks are handled in a non-obvious manner:
       1 $ echo "`echo \\a`" "$(echo \\a)"
       2 a \a
       3 $ echo "`echo \\\\a`" "$(echo \\\\a)"
       4 \a \\a
       5 # Note that this is true for *single quotes* too!
       6 $ foo=`echo '\\'`; bar=$(echo '\\'); echo "foo is $foo, bar is $bar"
       7 foo is \, bar is \\
    
  • Nested quoting inside $() is far more convenient.

       1 echo "x is $(sed ... <<<"$y")"
    

    In this example, the quotes around $y are treated as a pair, because they are inside $(). This is confusing at first glance, because most C programmers would expect the quote before x and the quote before $y to be treated as a pair; but that isn't correct in shells. On the other hand,

       1 echo "x is `sed ... <<<\"$y\"`"
    
    requires backslashes around the internal quotes in order to be portable. Bourne and Korn shells require these backslashes, while Bash and dash don't.
  • It makes nesting command substitutions easier. Compare:
       1 x=$(grep -F "$(dirname "$path")" file)
       2 x=`grep -F "\`dirname \"$path\"\`" file`
    

    It just gets uglier and uglier after two levels. $() forces an entirely new context for quoting, so that everything within the command substitution is protected and can be treated as though it were on its own, with no special concern over quoting and escaping.

83.2. Other advantages

  • The function of $(...) as being an expansion is visually clear. The syntax of a $-prefixed token is consistent with all other expansions that are parsed from within double-quotes, at the same time, from left-to-right. Backticks are the only exception. This improves human and machine readability, and consistent syntax makes the language more intuitive for readers.

  • Per the above, people are (hopefully) accustomed to seeing double-quoted expansions and substitutions with the usual "$..." syntax. Quoting command substitutions is almost always the correct thing to do, yet the great majority of `...` specimens we find in the wild are left unquoted, perhaps because those who still use the legacy syntax are less experienced, or they don't associate it with the other expansions due to the different syntax. In addition, the ` character is easily camouflaged when adjacent to " making it even more difficult to read, especially with small or unusual fonts.

  • The backtick is also easily confused with a single quote.


84. How do I determine whether a variable is already defined? Or a function?

There are several ways to test these things, depending on the exact requirements. Most of the time, the desired test is whether a variable has a non-empty value. In this case, we may simply use:

# POSIX
if test "$var"; then
  echo "The variable has a non-empty value."
fi

If this fails for you because you use set -u, please see FAQ 112.

If we wish to distinguish between an empty variable and an unset variable, then we may use the + parameter expansion:

# POSIX
if test "${var+defined}"; then
  echo "The variable is defined."
fi

The magic here is the +, not the word defined. We can use any non-empty word after the + sign. I prefer defined because it indicates what kind of test is being performed.
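
To see the difference between the two tests, here is a small illustrative snippet (the variable name is arbitrary):

# POSIX
unset var
[ "${var+defined}" ] && echo defined || echo undefined    # undefined
var=""
[ "${var+defined}" ] && echo defined || echo undefined    # defined
[ "$var" ] && echo non-empty || echo empty                # empty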

Some people prefer the -v option that was added in bash 4.2:

# Bash 4.2 and up
# Bash 4.3 if you want to test an array element
if test -v var; then
  echo "The variable is defined."
fi

There's really no benefit to this over the portable test, though.

84.1. Setting a default value

If what we really want is to set a variable to a default value unless it already has a value, then we may skip the test, and use the = parameter expansion:

# POSIX
: "${var=default}"

See FAQ 73 for details.

84.2. Testing whether a function has been defined

For determining whether a function with a given name is already defined, there are several answers, all of which require Bash (or at least, non-Bourne) commands. Testing that a function is defined should rarely be necessary. Just define the function as you want it to be defined instead of worrying about what might or might not have been inherited from who-knows-where.

declare -F f >/dev/null       # Bash only - declare outputs "f" and returns 0 if defined, returns non-zero otherwise.
typeset -f f >/dev/null       # Bash/Ksh - typeset outputs the entire function and returns 0 if defined, returns non-zero otherwise.
[[ $(type -t f) = function ]] # Bash-only - "type" outputs "function" if defined. In ksh (and mksh), the "type" alias for "whence -v" differs.

# Bash/Ksh. Workaround for the above, but the first two are preferable.
isFunction() [[ $(type ${BASH_VERSION:+'-t'} "$1") == ${KSH_VERSION:+"$1 is a "}function ]]; isFunction f

85. How do I return a string (or large number, or negative number) from a function? "return" only lets me give a number from 0 to 255.

Functions in Bash (as well as all the other Bourne-family shells) work like commands: that is, they only "return" an exit status, which is an integer from 0 to 255 inclusive. This is intended to be used only for signaling errors, not for returning the results of computations, or other data.

If you need to send back arbitrary data from a function to its caller, there are several different methods by which this can be achieved.

85.1. Capturing standard output

You may have your function write the data to stdout, and then have the caller capture stdout.

   1 foo() {
   2    echo "this is my data"
   3 }
   4 
   5 x=$(foo)
   6 printf 'foo returned "%s"\n' "$x"

One drawback of this method is that the function is executed in a SubShell, which means that any variable assignments, etc. performed in the function will not take effect in the caller's environment (and incurs a speed penalty as well, due to a fork()). This may or may not be a problem, depending on the needs of your program and your function. Another drawback is that everything printed by the function foo is captured and put into the variable instead. This leads to problems if foo also writes things that are not intended to be a returned value. To isolate user prompts and/or error messages from "returned" data, redirect them to stderr which will not be captured by the caller.

   1 foo() {
   2    echo "running foo()..."  >&2        # send user prompts and error messages to stderr
   3    echo "this is my data"              # variable will be assigned this value below
   4 }
   5 
   6 x=$(foo)                               # prints:  running foo()...
   7 printf 'foo returned "%s"\n' "$x"      # prints:  foo returned "this is my data"

85.2. Global variables

You may assign data to global variables, and then refer to those variables in the caller.

   1 foo() {
   2    return="this is my data"
   3 }
   4 
   5 foo
   6 printf 'foo returned "%s"\n' "$return"

The advantage of this method (compared to capturing stdout) is that your function is not executed in a SubShell, which means the function call is much faster. It also means side effects (like other variable assignments and FileDescriptor changes) will affect the rest of the script.

The drawback of this method is that if the function is executed in a subshell, then the assignment to a global variable inside the function will not be seen by the caller. This means you would not be able to use the function in a pipeline, for example.
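
For example (purely illustrative, reusing the foo above), calling the function on either side of a pipeline puts it in a subshell, so the assignment never reaches the caller:

# Bash
foo | cat                                # foo runs in a subshell here
printf 'foo returned "%s"\n' "$return"   # prints an empty string (assuming $return
                                         # was not already set in this shell)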

85.3. Writing to a file

Your function may write its data to a file, from which the caller can read it.

   1 foo() {
   2    echo "this is my data" > "$1"
   3 }
   4 
   5 # This is NOT solid code for handling temp files!
   6 tmpfile=$(mktemp)   # GNU/Linux
   7 foo "$tmpfile"
   8 printf 'foo returned "%s"\n' "$(<"$tmpfile")"
   9 rm "$tmpfile"
  10 # If this were a real program, there would have been error checking, and a trap.

The drawbacks of this method should be obvious: you need to manage a temporary file, which is always inconvenient; there must be a writable directory somewhere, and sufficient space to hold the data therein; etc. On the positive side, it will work regardless of whether your function is executed in a SubShell.

For more information about handling temporary files within a shell script, see FAQ 62. For traps, see SignalTrap.
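
A slightly more careful sketch of the same idea (still simplified, and it assumes a mktemp that works without arguments, as on GNU/Linux):

# Bash
foo() {
   echo "this is my data" > "$1"
}

tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT             # clean up the temp file even on early exit
foo "$tmpfile"
printf 'foo returned "%s"\n' "$(<"$tmpfile")"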

85.4. Dynamically scoped variables

Instead of using global variables, you can use variables whose scope is restricted to the caller and the called function.

   1 rand() {
   2    local max=$((32768 / $1 * $1))
   3    while (( (r=$RANDOM) >= max )); do :; done
   4    r=$(( r % $1 ))
   5 }
   6 
   7 foo() {
   8    local r
   9    rand 6
  10    echo "You rolled $((r+1))!"
  11 }
  12 
  13 foo
  14 # Here at the global scope, 'r' is not visible.

This has the same advantages and disadvantages as using global variables, plus the additional advantage that the global variable namespace isn't "polluted" by the function return variable.

However, this technique doesn't work with recursive functions.

   1 # This example won't work.
   2 fact() {
   3    local r      # to hold the return value of things we call
   4    if (($1 == 1)); then
   5       r=1       # to send data back to the caller
   6    else
   7       fact $(($1 - 1))       # call ourself recursively
   8       r=$((r * $1))          # send data back to the caller
   9    fi
  10 }

There is a variable name collision -- the example above tries to use r for two conflicting purposes at the same time. For recursive functions, stick with the global variable technique.

   1 # This example works.  It's not the best way to compute a factorial, but
   2 # it's a simple example of a recursive function.
   3 fact() {
   4    if (($1 <= 1)); then
   5       r=1
   6    else
   7       fact "$(($1 - 1))"
   8       ((r *= $1))
   9    fi
  10 }
  11 
  12 fact 11
  13 echo "$r"


CategoryShell

86. How to write several times to a fifo without having to reopen it?

In the general case, you'll open a new FileDescriptor (FD) pointing to the fifo, and write through that. For simple cases, it may be possible to skip that step.

86.1. The problem

The most basic use of NamedPipes is:

mkfifo myfifo
cat <myfifo &
echo a >myfifo

This works, but cat dies after reading one line. (In fact, what happens is when the named pipe is closed by all the writers, this signals an end of file condition for the reader. So cat, the reader, terminates because it saw the end of its input.)

What if we want to write several times to the pipe without having to restart the reader?

86.2. Grouping the commands

We have to arrange for all our data to be sent without opening and closing the pipe multiple times.

If the commands are consecutive, they can be grouped:

cat <myfifo &
{ echo a; echo b; echo c; } >myfifo

86.3. Opening a file descriptor

It is basically the same idea as above, but using exec to have greater flexibility:

cat <myfifo &

# assigning fd 3 to the pipe
exec 3>myfifo

# writing to fd 3 instead of reopening the pipe
echo a >&3
echo b >&3
echo c >&3

# closing the fd
exec 3>&-

Closing the FD causes the pipe's reader to receive the end of file indication.

This works well as long as all the writers are children of the same shell.

86.4. Using tail

The use of tail -f instead of cat can be an option, as tail will keep reading even if the pipe is closed:

tail -f myfifo &

echo a >myfifo
# Doesn't die
echo b >myfifo
echo c >myfifo

The problem here is that the process tail doesn't die, even if the named pipe is deleted. In a script this is not a problem as you can kill tail on exit.

If your reader is a program that only reads from a file, you can still use tail with the help of process substitution:

myprogram <(tail -f myfifo) &
# Doesn't die
echo b >myfifo
echo c >myfifo

Here, tail will be closed when myprogram exits.

86.5. Using a guarding process

The reader of the pipe won't receive an EOF until all of the writers' file descriptors have been closed. You can exploit this by keeping a writing file descriptor open in a process that does nothing.

Therefore, an elegant solution is to create a "guarding process", and to use a second pipe to control the guarding process:

mkfifo myfifo
mkfifo guard

# keep the fifo opened using a fake writer
>myfifo <guard &  #note the order is important! 

# if you do <guard first, it will be blocked
# and >myfifo will not be opened until guard is opened

# start the reader
cat myfifo

Now you can use writers in other unrelated processes, and the pipe will not be closed, because the guarding process keeps it open.

echo something >myfifo
# reader does not die

When you are finished and want to close the pipe, you just need to open and close the helper pipe to unblock the guarding process:

>guard 

An alternative is to use a process doing nothing, killing it in the end:

mkfifo myfifo

while :;do sleep 10000 & wait;done >myfifo &
pid=$!

cat myfifo

kill "$pid"

87. How to ignore aliases, functions, or builtins when running a command?

Functions, builtins, external utilities, and aliases can all be defined with the same name at once. It's sometimes necessary to specify which of these the shell should resolve, bypassing the others.

87.1. Bypass aliases

Resolve commands normally ignoring aliases:

\name

\unalias name
name

Clear all aliases:

\unalias -a

In bash, alias expansion is disabled by default in non-interactive shells (scripts), unless the shell is in POSIX mode or the expand_aliases option has been set.

87.2. Prioritize calling a builtin or external command

Bypass aliases and functions:

\command name

If PATH is unknown / unreliable:

   1 \command -p -- name "${args[@]}"

The remainder of this FAQ assumes alias expansion has been disabled or otherwise mitigated.

87.3. Prioritize calling only a builtin

   1 # Strictly bash-only. Not recommended
   2 
   3 function my_builtin {
   4     builtin my_builtin "$@"
   5 }

87.4. Call an external utility by PATH resolution, bypassing builtins and/or functions

   1 "$(type -P name)" "${args[@]}"

87.5. Call a specific external utility

Specify the full or relative path name containing at least one forward slash.
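
For example (the paths here are purely illustrative):

/bin/echo "this is the external echo, not the builtin"
./scripts/mytool --flag        # a relative path also bypasses functions and builtins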


88. How can I get a file's permissions (or other metadata) without parsing ls -l output?

There are several potential ways, most of which are system-specific. They also depend on precisely why you want the information; in most cases, there will be some other way to accomplish your real goal. You don't want to parse ls's output if there's any possible way to avoid doing so.

Many of the cases where you might ask about permissions -- such as I want to find any files with the setuid bit set -- can be handled with the find(1) command.

For some questions, such as I want to make sure this file has 0644 permissions, you don't actually need to check what the permissions are. You can just use chmod 0644 myfile and set them directly. And if you DO actually need to check what the permissions are instead of forcing them, then you can use find's -perm.
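
For example (a sketch; the exact -perm semantics can vary slightly between find implementations):

# files under /usr/local with the setuid bit set
find /usr/local -type f -perm -4000

# print myfile's name only if it has exactly 0644 permissions
find myfile -prune -perm 0644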

If you want to see whether you can read, write or execute a file, there are test -r, -x and -w.

If you want to see whether a file is zero bytes in size or not, you don't need to read the file's size into a variable. You can just use test -s instead.

If you want to copy the modification time from one file to another, you can use touch -r. The chown command on some GNU/Linux systems has a --reference option that works the same way, letting you copy the owner and group from one file to another.

If your needs aren't met by any of those, and you really feel you must extract the metadata from a file into a variable, then we can look at a few alternatives:

  • On GNU/Linux systems, *BSD and possibly others, there is a command called stat(1). On older GNU/Linux systems, this command takes no options -- just a filename -- and you will have to parse its output.

     $ stat /
       File: "/"
       Size: 1024         Filetype: Directory
       Mode: (0755/drwxr-xr-x)         Uid: (    0/    root)  Gid: (    0/    root)
     Device:  8,0   Inode: 2         Links: 25   
     Access: Wed Oct 17 14:58:02 2007(00000.00:00:01)
     Modify: Wed Feb 28 15:42:14 2007(00230.22:15:49)
     Change: Wed Feb 28 15:42:14 2007(00230.22:15:49)

    In this case, one could extract the 0755 from the Mode: line, using awk or similar commands.

  • On newer GNU/Linux systems:
     $ stat -c %a /
     755
    That's obviously a lot easier to parse. With *BSDs (NetBSD, OpenBSD, FreeBSD and their derivatives like Apple OS/X), the syntax is different and you need to extract the permissions from the mode:
     mode=$(stat -f %p -- "$filename")
     perm=$(printf %o "$((mode & 07777))")
  • On systems with perl 5, you can use:
     perl -e 'printf "%o\n", 07777 & (stat $ARGV[0])[2]' "$filename"

    This returns the same octal string that the stat -c %a example does, but is far more portable. (And slower.)

  • GNU find has a -printf switch that can print out any metadata of a file:

     find "$filename" -prune -printf '%m\n'

    That predates GNU stat by over a decade and can give metadata for several files in a directory as well. Beware, though, that for a file named -print, (, ! or anything else that looks like a find predicate, you need to pass the name as ./-print, ./( and so on -- i.e. as a relative or absolute path that find cannot mistake for one of its own arguments.

  • If your bash is compiled with loadable builtin support, you can build the finfo builtin (type make in the examples/loadables/ subdirectory of your bash source tree), enable it, and then use:

     $ finfo -o .bashrc
     644

    Beware that the finfo.c distributed with bash up through 4.0 contains at least one bug (in the -s option), so the code has clearly not been tested much. Most precompiled bash packages do not include compiled examples, so this may be a difficult alternative for most users.

89. How can I avoid losing any history lines?

This method is designed to allow you to store a complete log of all commands executed by a friendly user; it is not meant for secure auditing of commands - see securing bash against history removal.

By default, Bash updates its history only on exit, and it overwrites the existing history with the new version. This prevents you from keeping a complete history log, for two reasons:

  • If a user is logged in multiple times, the overwrite will ensure that only the last shell to exit will save its history.
  • If your shell terminates abnormally - for example because of network problems, firewall changes or because it was killed - no history will be written.

To solve the first problem, we set the shell option histappend which causes all new history lines to be appended, and ensures that multiple logins do not overwrite each other's history.

To prevent history lines being lost if Bash terminates abnormally, we need to ensure that they are written after each command. We can use the shell builtin history -a to cause an immediate write of all new history lines, and we can automate this execution by adding it to the PROMPT_COMMAND variable. This variable contains a command to be executed before any new prompt is shown, and is therefore run after every interactive command is executed.

Note that there are two side effects of running 'history -a' after every command:

  • A new login will be able to immediately scroll back through the history of existing logins. So if you wish to run the same command in two sessions, run the command and then initiate the second login and you will be able to retrieve the command immediately.
  • More negatively, the history commands of simultaneous interactive shells (for a given user) will be intertwined. Therefore the history is not a guaranteed sequential list of commands as they were executed in a single shell. You may find this confusing if you review the history file as a whole, looking for sections encapsulating particular tasks rather than searching for individual commands. It's probably only an issue if you have multiple people using a single account simultaneously, which is not ideal in any case.

To set all this, use the following in your own ~/.bashrc file:

   1 HISTFILESIZE=400000000
   2 HISTSIZE=10000
   3 PROMPT_COMMAND="history -a"
   4 
   5 shopt -s histappend

In the above we have also increased the maximum number of lines of history that will be stored in memory, and removed any limit for the history file itself. The default for these is 500 lines, which will cause you to start to lose lines fairly quickly if you are an active user. By setting HISTFILESIZE to a large value we ensure a file big enough so that it is infinite in practice - and by setting $HISTSIZE, we limit the number of these lines to be retained in memory.

Unfortunately, bash will read in the full history file before truncating its memory copy to the length of $HISTSIZE - therefore if your history file grows very large, bash's startup time can grow annoyingly high. Even worse, loading a large history file then truncating it via $HISTSIZE results in bloated resource usage; bash ends up using much more memory than if the history file contained only $HISTSIZE lines. Therefore if you expect your history file to grow very large, for example above 20,000 lines, you should archive it periodically. See Archiving History Files below.

PROMPT_COMMAND may already be used in your setup, for example containing control codes to update an XTerm's display bar with your current prompt. If yours is already in use, you can append to it with: PROMPT_COMMAND="${PROMPT_COMMAND:-:} ; history -a"

You may also want to set the variables HISTIGNORE and HISTCONTROL to control what is saved, for example to remove duplicate lines - though doing so prevents you from seeing how many times a given command was run by a user, and precisely when (if HISTTIMEFORMAT is also set).

Note that because PROMPT_COMMAND executes just before a new prompt is printed, you may still lose the last command line if your shell terminates during the execution of this line. As an example, consider: this_cmd_is_never_written_to_history ; kill -9 $$

89.1. Using extended attributes

Even after you've convinced bash to record history without truncating the history file, it's still very easy to lose. If you ever start a shell in interactive mode without the shell sourcing your .bashrc for any reason (e.g. via the --rcfile option) bash will default to indiscriminately truncating the history, which could mean losing everything unless you archive and backup the file as detailed in the next sections.

Under Linux and some other OSes, this can be prevented by setting the append-only extended attribute on the history file. Subsequently, open(2) calls without the O_APPEND flag will fail and the file cannot be deleted, moved, truncated, or otherwise modified other than to append data to the end (even by the root user) until the append-only bit is unset. Usually only a root user can set or unset this attribute.

# Linux example on btrfs - setting the append-only flag with chattr(1)

ormaaj-laptop # chattr +a .bash_history
ormaaj-laptop # lsattr -a .bash_history
-----a---------- .bash_history
ormaaj-laptop # rm .bash_history
rm: cannot remove '.bash_history': Operation not permitted
ormaaj-laptop # >.bash_history
bash: .bash_history: Operation not permitted

The exact method and which attributes are supported varies by OS and file system. See this wikipedia article for details. Under (at least) some BSD-like systems and OS X the analogous command is chflags. The append-only feature is split between "user" and "system" versions presumably so non-root users can use it on their own files. Linux appears to have no equivalent way for a non-root user to set/unset append-only.

89.2. Prevent mangled history with atomic writes and lock files

TODO...

89.3. Compressing History Files

The result of the above is a history file with a great many duplicate commands. Appending history causes your history file to grow by all the shell's loaded history each time.

More importantly, the main thing we care about with regard to history is being able to find previously executed commands. The following script will remove all commands from the history file that are already in there, while keeping the order of the commands intact in such a way that commands you most recently executed will remain at the bottom of the file (i.e. it keeps the last occurrence of a command, not the first).

   1 awk 'NR==FNR && !/^#/{lines[$0]=FNR;next} lines[$0]==FNR' "$HISTFILE" "$HISTFILE" > "$HISTFILE.compressed" &&
   2 mv "$HISTFILE.compressed" "$HISTFILE"

After a few months, this compressed my history file from 761474 lines to 2349. Note that this does not preserve the timestamps if you have HISTTIMEFORMAT set.

89.4. Archiving History Files

Once you have enabled these methods, you should find that your bash history becomes much more valuable, allowing you to recall any command you have executed at any time. As such, you should ensure your history file(s) are included in your regular backups.

You may also want to enable regular archiving of your history file, to prevent the full history from being loaded into memory by each new bash shell. With a history file size of 10,000 entries, bash uses approximately 5.5MB of memory on Solaris 10, with no appreciable start-up delay (with $HOME on a local disk, I assume? -- GreyCat). With a history size of 100,000 entries this has grown to 10MB with a noticeable 3-5 second delay on startup. Periodic archiving is advisable to remove the oldest log lines and to avoid wasting resources, particularly if RAM is at a premium. (My largest ~/.bash_history is at 7500 entries after 1.5 months.)

This is best done via a tool that can archive just part of the file. A simple script to do this would be:

   1 #!/bin/bash
   2 umask 077
   3 max_lines=10000
   4 
   5 linecount=$(awk 'END { print NR }' ~/.bash_history)
   6 
   7 if ((linecount > max_lines)); then
   8     prune_lines=$((linecount - max_lines))
   9     head -n "$prune_lines" ~/.bash_history >> ~/.bash_history.archive &&
  10     sed -e "1,${prune_lines}d"  ~/.bash_history > ~/".bash_history.tmp$$" &&
  11     mv ~/".bash_history.tmp$$" ~/.bash_history
  12 fi

This script removes enough lines from the top of the history file to truncate its size to max_lines lines, appending the rest to ~/.bash_history.archive. This mimics the pruning functionality of HISTFILESIZE, but archives the remainder rather than deleting it - ensuring you can always query your past history by grepping ~/.bash_history*.

Such a script could be run nightly or weekly from your personal crontab to enable periodic archiving. Note that the script does not handle multiple users and will archive the history of only the current user - extending it to run for all system users (as root) is left as an exercise for the reader.

89.5. Archiving by month

   1 # https://github.com/kaihendry/dotfiles
   2 mkdir -p ~/bash_history
   3 shopt -s histappend
   4 HISTCONTROL=ignoredups
   5 PROMPT_COMMAND=$'history -a; history -n;\n'$PROMPT_COMMAND
   6 
   7 # If your bash is older than 4.3, set these to a large number instead
   8 # else your history files will be empty
   9 HISTFILESIZE=-1 HISTSIZE=-1
  10 
  11 HISTFILE=~/bash_history/$(date +%Y-%m)
  12 
  13 h() {
  14     grep "$@" ~/bash_history/*
  15 }

https://youtu.be/DJ_HdmfA72E

  • What happens when the date changes to a new month while your shell is still running? Then HISTFILE is pointing to the wrong place.

89.6. Saving history into a database

TODO...


90. I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!

When reading a file line by line, if a command inside the loop also reads stdin, it can exhaust the input file. For example:

   1 # Non-working example
   2 while IFS= read -r file; do
   3   ffmpeg -i "$file" -c:v libx264 -c:a aac "${file%.avi}".mkv
   4 done < <(find . -name '*.avi')

   1 # Non-working example
   2 while read host; do
   3   ssh "$host" some command
   4 done < hostslist

What's happening here? Let's take the first example. read reads a line from standard input (FD 0), puts it in the file parameter, and then ffmpeg is executed. Like any program you execute from BASH, ffmpeg inherits standard input. However, ffmpeg additionally uses standard input to detect quit commands indicated by user input of q, thus sucking up all the input from the find command and starving the loop.

Use the -nostdin global option in ffmpeg to disable interaction on standard input:

   1 while IFS= read -r file; do
   2   ffmpeg -nostdin -i "$file" -c:v libx264 -c:a aac "${file%.avi}".mkv
   3 done < <(find . -name '*.avi')

Alternatively you could use redirection at the end of the ffmpeg line: </dev/null. The ssh example can be fixed the same way, or with the -n switch (at least with OpenSSH).
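
For instance, the ssh loop above can be fixed either way (OpenSSH's -n option redirects ssh's stdin from /dev/null):

while read -r host; do
  ssh -n "$host" some command          # or:  ssh "$host" some command </dev/null
done < hostslist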

Sometimes with large loops it might be difficult to work out what's reading from stdin, or a program might change its behaviour when you add </dev/null to it. In this case you can make read use a different FileDescriptor that a random program is less likely to read from:

   1 while IFS= read -r line <&3; do
   2   ...
   3 done 3< file

In bash, the read builtin can also be told to read directly from an fd (-u fd) without redirection, and since bash 4.1, an available fd can be assigned ({var}<file) instead of hard coding a file descriptor.

   1 # bash 4.1+
   2 while IFS= read -r -u "$fd" line; do
   3   ...
   4 done {fd}< file
   5 exec {fd}<&-

91. How do I prepend a text to a file (the opposite of >>)?

You cannot do it with bash redirections alone; the opposite of >> does not exist....

To insert content at the beginning of a file, you can use an editor, for example ex:

ex file << EOF
0a
header line 1
header line 2
.
w
EOF

or ed:

printf '%s\n' 0a "line 1" "line 2" . w | ed -s file

ex will also add a newline character to the end of the file if it's missing.

Or you can rewrite the file, using things like:

{ echo line; cat file ;} >tmpfile && mv tmpfile file
echo line | cat - file > tmpfile && mv tmpfile file

Some people insist on using the sed hammer to pound in all the screws:

sed "1iTEXTTOPREPEND" filename > tmp &&
mv tmp filename

There are lots of other solutions as well.

92. I'm trying to get the number of columns or lines of my terminal but the variables COLUMNS / LINES are always empty.

COLUMNS and LINES are set by BASH in interactive mode; they are not available by default in a script. On most systems, you can try to query the terminal yourself:

unsup() { echo "Your system doesn't support retrieving $1 with tput.  Giving up." >&2; exit 1; }
COLUMNS=$(tput cols) || unsup cols
LINES=$(tput lines) || unsup lines

Bash automatically updates the COLUMNS and LINES variables when an interactive shell is resized. If you're setting the variables in a script and you want them to be updated when the terminal is resized, i.e. upon receipt of a SIGWINCH, you can set a trap yourself:

trap 'COLUMNS=$(tput cols) LINES=$(tput lines)' WINCH

You can also set the shell as interactive in the script's shebang:

#!/bin/bash -i
echo $COLUMNS

This has some drawbacks, however:

  • Though not the best practice, it's not too uncommon for scripts to test for the -i option to determine whether a shell is interactive, and then abort or misbehave. There is no completely foolproof way to test for this, so some scripts may break as a result.

  • Running with -i sources .bashrc, and sets various options such as job-control which may have unintended side-effects.

Though you can technically set -i in the middle of a script, it has no effect on the setting of COLUMNS and LINES. -i must be set when Bash is first invoked.

Normally Bash updates COLUMNS and LINES when your terminal sends a SIGWINCH signal, indicating a change of size. Some terminals may not do this, so if your variables aren't being updated even when running an interactive shell, try using shopt -s checkwinsize. This will make Bash query the terminal after every command, so only use it if it's really necessary.

tput, of course, requires a terminal. According to POSIX, if stdout is not a tty, the results are unspecified, and stdin is unused, though some implementations may try using it anyway. On OpenBSD and Gentoo, and Debian Linux (and apparently at least some other Linuxes), at least one of stdout or stderr must be a tty, or else tput just returns some default values:

linux$ tput -S <<<$'cols\nlines' 2>&1 | cat
80
24

openbsd$ tput cols lines 2>&1 | cat
80
24

93. How do I write a CGI script that accepts parameters?

There are always circumstances beyond our control that drive us to do things that we would never choose to do on our own. This FAQ entry describes one of those situations.

A CGI program can be invoked with parameters, sent by the web browser (user agent). There are (at least) two ways to invoke a CGI program: the "GET" method and the "POST" method. In the "GET" method, parameters are provided to the CGI program in an environment variable called QUERY_STRING. The parameters take the form of KEY=VALUE definitions (e.g. user=george), with some characters encoded in hexadecimal, spaces encoded as plus signs, all joined together with ampersands. In the "POST" method, the parameters are provided on standard input instead.

Now of course we know you would never write a CGI script in Bash. So for the purposes of this entry we will assume that terrorists have kidnapped your spouse and children and will torture, maim, kill, "or worse" them if you do not comply with their demands to write such a script.

(The "or worse" situation would clearly be something like being forced to use Microsoft based software.)

So, given a QUERY_STRING variable, we would like to extract the keys (variables) and their values, so that we can use them in the script.

93.1. Associative Arrays

The best approach is to place the key/value pairs into an associative array. Associative arrays are available in ksh93 and in bash 4.0, but not in POSIX or Bourne shells. They are designed to hold key/value pairs where the keys can be arbitrary strings, so they seem appropriate for this job.

# Bash 4+

# Read in the cgi input string
if [[ $QUERY_STRING ]]; then
  query=$QUERY_STRING
else
  read -r query
fi

# Set up an associative array to hold the query parameters.
declare -A params

# Iterate through the key=value+%41%42%43 elements.
# Separate key and value, and perform decoding on the value.
while IFS='=' read -r -d '&' key value; do

    # Decoding steps: 
    # 1. turn \ to \\. Step 4 will change them back to \
    # 2. plus signs become spaces.
    # 3. percent signs become \x.
    # 4. run it through printf %b which will expand the \-escapes

    value=${value//\\/\\\\}
    value=${value//+/ }
    value=${value//'%'/\\x}
    printf -v 'params[$key]' %b "$value"
done <<< "$query&"

# Now we can use the parameters from the associative array named params.
# If we need a list of the keys, it's "${!params[@]}".

The printf -v varname option is available in every version of bash that supports associative arrays, so we may use it here. It's much more efficient than calling a SubShell. We've also avoided the potential problems with echo -e if the value happens to be something like -n.
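
Once params has been filled in, using the values is straightforward. For instance, assuming the query contained hypothetical name and city keys:

# Bash 4+
printf 'Hello, %s from %s\n' "${params[name]}" "${params[city]}"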

Technically, the CGI specification allows multiple instances of the same key in a single query. For example, group=managers&member=Alice&member=Charlie is a perfectly legitimate query string. None of the approaches on this page handle this case (at least not in what we'd probably consider the "correct" way). Fortunately, it's not often that you'd write a CGI like this; and in any case, you're not being forced to use bash for this task. The quick, easy and dangerous way to process the QUERY_STRING is to convert the &s to ;s and then use the eval command to run those assignments. However, the use of eval is STRONGLY DISCOURAGED. That is to say we always avoid using eval if there is any way around it.

93.2. Older Bash Shells

If you don't have associative arrays, don't just leap to eval. A better approach is to extract each variable/value pair, and assign them to shell variables, one by one, without executing them. This requires an indirect variable assignment, which means using some shell-specific trickery. We'll write this using Bash syntax; converting to ksh or Bourne shell is left as an exercise.

# Bash 3.1 +

# Read in the cgi input string
if [[ $QUERY_STRING ]]; then
  query=$QUERY_STRING
else
  read -r query
fi

# Variable names in bash are limited to ASCII alphanumerics and underscores
sanitize() {
    local LC_ALL=C  # to only consider ASCII letters
    printf %s "${1//[![:alnum:]_]/_}"
}

# query contains something like name=Fred+Flintstone&city=Bedrock
# Treat this as a list of key=value expressions joined with &.
# Iterate through the list and perform each assignment.

while IFS='=' read -r -d '&' var value; do
    # To be sure the resulting variable name is valid, add "get_" 
    # in front, and replace any invalid characters with underscores.
    # 1foo-bar => get_1foo_bar
    var=$(sanitize "get_$var")

    value=${value//\\/\\\\}
    value=${value//+/ }
    value=${value//'%'/\\x}
    printf -v "$var" %b "$value"
done <<< "$query&"

# Now you can do whatever you wanted to do with "get_name".
# If we need a list of the keys, it's "${!get_@}".

While this might be a little less clear, it avoids this huge security problem that eval has: executing any arbitrary command the user might care to enter into the web form. Clearly this is an improvement.

93.3. The Wrong Way

# DO NOT DO THIS!
#
# Read in the cgi input string
if [ "$QUERY_STRING" ]; then
  query=$QUERY_STRING
else
  read query
fi

# Convert some of the encoded strings and things like "&" (left as an exercise for the reader)

# Run eval on the string
eval "$query"

# Sit back and discover that the user has put "/bin/rm -rf /" in one of the web form fields,
# which even if not root will do damage to some part of the file system.
# Another dangerous string would be a fork bomb.

The only reason this example is still on this page is because whenever we delete bad examples, someone rewrites them. So, this is your bad example, and your multiple layers of warnings not to use it.

94. How can I set the contents of my terminal's title bar?

If you have a terminal that understands xterm-compatible escape sequences, and you just want to set the title one time, you can use a function like this:

settitle() { printf '\e]2;%s\a' "$*"; }
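
For example (the title text is just illustrative):

settitle "$USER@$HOSTNAME: $PWD"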

If you want to set the title bar to the currently-running command line every time you type a command, then this solution approximates it:

trap 'printf "\e]2;%s\a" "$(HISTTIMEFORMAT= history 1)" >/dev/tty' DEBUG

However, it leaves the command history number in place, and it doesn't trigger on explicit subshells like (cd foo && make).

Or to use just the name and arguments of the current simple command:

trap 'printf "\e]2;%s\a" "$BASH_COMMAND" >/dev/tty' DEBUG

For POSIX-compliant shells which don't recognize '\e' as a character sequence to be interpreted as Escape, '\033' may be substituted instead.

95. I want to get an alert when my disk is full (parsing df output).

Sadly, parsing the output of df really is the most reliable way to determine how full a disk is, on most operating systems. However, please note that this is a "least bad" answer, not a "best" answer. Parsing any command-line reporting tool's output in a program is never pretty. The purpose of this FAQ is to try to describe all the problems this approach is known to encounter, and work around them.

The first, biggest problem with df is that it doesn't work the same way on all operating systems. Unix is divided largely into two families -- System V and BSD. On BSD-like systems (including Linux, in this case), df gives a human-readable report:

  •  ~$ df
     Filesystem           1K-blocks      Used Available Use% Mounted on
     /dev/sda2              8230432   3894324   3918020  50% /
     tmpfs                   253952         8    253944   1% /lib/init/rw
     udev                     10240        44     10196   1% /dev
     tmpfs                   253952         0    253952   0% /dev/shm

However, on System-V-like systems, the output is completely different:

  •  $ df
     /net/appl/clin   (svr1:/dsk/2/clin/pa1.1-hpux10HP-UXB.10.20):  1301728 blocks            -1 i-nodes
     /net/appl/tool-share (svr2:/dsk/4/dsk3/tool/share): 51100992 blocks       4340921 i-nodes
     /net/appl/netscape (svr2:/dsk/4/dsk3/netscape/pa1.1-hpux10HP-UXB.10.20): 51100992 blocks       4340921 i-nodes
     /net/appl/gcc-3.3 (svr2:/dsk/4/dsk3/gcc-3.3/pa1.1-hpux10HP-UXB.10.20): 51100992 blocks       4340921 i-nodes
     /net/appl/gcc-3.2 (svr2:/dsk/4/dsk3/gcc-3.2/pa1.1-hpux10HP-UXB.10.20): 51100992 blocks       4340921 i-nodes
     /net/appl/tool   (svr2:/dsk/4/dsk3/tool/pa1.1-hpux10HP-UXB.10.20): 51100992 blocks       4340921 i-nodes
     /net/home/wooledg    (/home/wooledg       ):   658340 blocks     87407 i-nodes
     /net/home            (auto.home           ):        0 blocks         0 i-nodes
     /net/hosts           (-hosts              ):        0 blocks         0 i-nodes
     /net/appl            (auto.appl           ):        0 blocks         0 i-nodes
     /net/vol             (auto.vol            ):        0 blocks         0 i-nodes
     /nfs                 (-hosts              ):        0 blocks         0 i-nodes
     /home                (/dev/vg00/lvol5     ):   658340 blocks     87407 i-nodes
     /opt                 (/dev/vg00/lvol6     ):   623196 blocks     83075 i-nodes
     /tmp                 (/dev/vg00/lvol4     ):    86636 blocks     11404 i-nodes
     /usr/local           (/dev/vg00/lvol9     ):   328290 blocks     41392 i-nodes
     /usr                 (/dev/vg00/lvol7     ):   601750 blocks     80228 i-nodes
     /var                 (/dev/vg00/lvol8     ):   110696 blocks     14447 i-nodes
     /stand               (/dev/vg00/lvol1     ):   110554 blocks     13420 i-nodes
     /                    (/dev/vg00/lvol3     ):   190990 blocks     25456 i-nodes

So, your first obstacle will be recognizing that you may need to use a different command depending on which OS you're on (e.g. bdf on HP-UX); and that there may be some OSes where it's simply not possible to do this with a shell script at all.

For the rest of this article, we'll assume that you've got a system with a BSD-like df command.

The next problem is that the output format of df is not consistent across platforms. Some platforms use 6 columns of output. Some use 7. Some platforms (like Linux) use 1-kilobyte blocks by default when reporting the actual space used or available; others, like OpenBSD or IRIX, use 512-byte blocks by default, and need a -k switch to use kilobytes.

Worse, often a line of output will be split into multiple lines on the screen. For example (Linux):

  •  Filesystem           1K-blocks      Used Available Use% Mounted on
     ...
     svr2:/dsk/4/dsk3/tool/i686Linux2.4.27-4-686
                           35194552   7856256  25550496  24% /net/appl/tool

If the device name is sufficiently long (very common with network-mounted file systems), df may split the output onto two lines in an attempt to preserve the columns for human readability. Or it may not... see, for example, OpenBSD 4.3:

  •  ~$ df
     Filesystem  512-blocks      Used     Avail Capacity  Mounted on
     /dev/wd0a       253278    166702     73914    69%    /
     /dev/wd0d      8121774   6904178    811508    89%    /usr
     /dev/wd0e      8121774   6077068   1638618    79%    /var
     /dev/wd0f       507230        12    481858     0%    /tmp
     /dev/wd0g      8121774   5653600   2062086    73%    /home
     /dev/wd0h    125253320 116469168   2521486    98%    /export
    
     ~$ sudo mount 192.168.2.5:/var/cache/apt/archives /mnt
     ~$ df
     Filesystem                          512-blocks      Used     Avail Capacity  Mounted on
     /dev/wd0a                               253278    166702     73914    69%    /
     /dev/wd0d                              8121774   6904178    811508    89%    /usr
     /dev/wd0e                              8121774   6077806   1637880    79%    /var
     /dev/wd0f                               507230        12    481858     0%    /tmp
     /dev/wd0g                              8121774   5653600   2062086    73%    /home
     /dev/wd0h                            125253320 116469168   2521486    98%    /export
     192.168.2.5:/var/cache/apt/archives    1960616   1638464    222560    88%    /mnt

Most versions of df give you a -P switch which is intended to standardize the output... sort of. Older versions of OpenBSD still split lines of output even when -P is supplied, but Linux will generally force the output for each file system onto a single line.

Therefore, if you want to write something robust, you can't assume the output for a given file system will be on a single line. We'll get back to that later.

You can't assume the columns line up vertically, either:

  •  ~$ df -P
     Filesystem         1024-blocks      Used Available Capacity Mounted on
     /dev/hda1               180639     93143     77859      55% /
     tmpfs                   318572         4    318568       1% /dev/shm
     /dev/hda5                90297      4131     81349       5% /tmp
     /dev/hda2              5763648    699476   4771388      13% /usr
     /dev/hda3              1829190    334184   1397412      20% /var
     /dev/sdc1            2147341696 349228656 1798113040      17% /data3
     /dev/sde1            2147341696 2147312400     29296     100% /data4
     /dev/sdf1            1264642176 1264614164     28012     100% /data5
     /dev/sdd1            1267823104 1009684668 258138436      80% /hfo
     /dev/sda1            2147341696 2147311888     29808     100% /data1
     /dev/sdg1            1953520032 624438272 1329081760      32% /mnt
     /dev/sdb1            1267823104 657866300 609956804      52% /data2
     imadev:/home/wooledg   3686400   3336736    329184      92% /net/home/wooledg
     svr2:/dsk/4/dsk3/tool/i686Linux2.4.27-4-686  35194552   7856256  25550496      24% /net/appl/tool
     svr2:/dsk/4/dsk3/tool/share  35194552   7856256  25550496      24% /net/appl/tool-share

So, what can you actually do?

  • Use the -P switch. Even if it doesn't make everything 100% consistent, it generally doesn't hurt. According to the source code of df.c in Linux coreutils, the -P switch does ensure that the output will be on a single line (but that's only for Linux).

  • Set your locale to C. You don't need non-English column headers complicating the picture.

  • Consider using "stat --file-system --format=", if it's available. If portability is not an issue in your case, check the man page of the "stat" command. On many systems you'll be able to print the blocksize, total number of blocks on the disk, and the number of free blocks; all in a user-specified format.

  • Explicitly select a file system. Don't use df -P | grep /dev/hda2 if you want the results for a specific file system. Give df a directory name or a device name as an argument so you only get that file system's output in the first place.

    •   ~$  df -P /
        Filesystem         1024-blocks      Used Available Capacity Mounted on
        /dev/sda2              8230432   3894360   3917984      50% /
  • Count words of output without respecting newlines. This is the workaround for lines being split unpredictably. For example, using a Bash array df_arr:

    •   ~$ read -d '' -ra df_arr < <(LC_ALL=C df -P /); echo "${df_arr[11]}"
        50%

    As you can see, we simply slurped the entire output into a single array and then took the 12th word (array indices count from 0). We don't care whether the output got split or not, because that doesn't change the number of words.

Removing the % sign, comparing the number to a specified threshold, scheduling an automatic way to run the script, etc. are left as exercises for you.

Alternatively, first discard the header line, then read the remaining data fields into an array:

  • { read -r; read -rd '' -a disk_usage; } < <(LC_ALL=C df -Pk "$dir"; printf \\0); echo "${disk_usage[5]}"
     39%

The GNU version of df lets you filter the output more precisely. The --output flag selects which columns are shown; for example, --output=avail displays only the available disk space for the designated file system. E.g.:

df --output=source,target,avail,pcent / 

This will display the source, target, available space and percentage of the root partition.
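
Putting it together, here is one minimal sketch of such an alert (GNU df only; the file system and the 90% threshold are just examples):

# Bash + GNU df
threshold=90
pcent=$(df --output=pcent / | tail -n 1)   # e.g. " 50%"
pcent=${pcent//[![:digit:]]/}              # keep only the digits
if [ "$pcent" -ge "$threshold" ]; then
  printf 'Warning: / is %s%% full\n' "$pcent" >&2
fi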

96. I'm getting "Argument list too long". How can I process a large list in chunks?

First, let's review some background material. When a process wants to run another process, it fork()s a child, and the child calls one of the exec* family of system calls (e.g. execve()), giving the name or path of the new process's program file; the name of the new process; the list of arguments for the new process; and, in some cases, a set of environment variables. Thus:

  • /* C */
    execlp("ls", "ls", "-l", "dir1", "dir2", (char *) NULL);

There is (generally) no limit to the number of arguments that can be passed this way, but on most systems, there is a limit to the total size of the list. For more details, see http://www.in-ulm.de/~mascheck/various/argmax/ .

If you try to pass too many filenames (for instance) in a single program invocation, you'll get something like:

  • $ grep foo /usr/include/sys/*.h
    bash: /usr/bin/grep: Arg list too long

There are various tricks you could use to work around this in an ad hoc manner (change directory to /usr/include/sys first, and use grep foo *.h to shorten the length of each filename...), but what if you need something absolutely robust?

Some people like to use xargs here, but it has some serious issues. It treats whitespace and quote characters in its input as word delimiters, making it incapable of handling filenames properly. (See UsingFind for a discussion of this.)

That said, the GNU version of xargs has a -0 option that lets us feed NUL-terminated arguments to it, and when reading in this mode, it doesn't fall over and explode when it sees whitespace or quote characters. So, we could feed it a list thus:

  • # Requires GNU xargs
    printf '%s\0' /usr/include/sys/*.h |
    xargs -0 grep foo /dev/null

Or, if recursion is acceptable (or desirable), you may use find directly:

  • find /usr/include/sys -name '*.h' -exec grep foo /dev/null {} +

If recursion is unacceptable but you have GNU find, you can use this non-portable alternative:

  • # Requires GNU find
    find /usr/include/sys -name '*.h' -maxdepth 1 -exec grep foo /dev/null {} +

(Recall that grep will only print filenames if it receives more than one filename to process. Thus, we pass it /dev/null as a filename, to ensure that it always has at least two filenames, even if the -exec only passes it one name.)

The most general alternative is to use a Bash array and a loop to process the array in chunks:

  • # Bash
    files=(/usr/include/*.h /usr/include/sys/*.h)
    for ((i=0; i<${#files[*]}; i+=100)); do
       grep foo "${files[@]:i:100}" /dev/null
    done

Here, we've chosen to process 100 elements at a time; this is arbitrary, of course, and you could set it higher or lower depending on the anticipated size of each element vs. the target system's getconf ARG_MAX value. If you want to get fancy, you could do arithmetic using ARG_MAX and the size of the largest element, but you still have to introduce "fudge factors" for the size of the environment, etc. It's easier just to choose a conservative value and hope for the best.
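
For reference, the limit mentioned above can be queried directly, and you can get a rough idea of how much of it your environment already consumes:

getconf ARG_MAX    # combined size limit (in bytes) for arguments plus environment
env | wc -c        # approximate size of the current environment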

97. ssh eats my word boundaries! I can't do ssh remotehost make CFLAGS="-g -O"!

ssh emulates the behavior of the Unix remote shell command (rsh or remsh), including this bug. There are a few ways to work around it, depending on exactly what you need.

First, here is a full illustration of the problem:

~$ ~/bin/args make CFLAGS="-g -O"
2 args: <make> <CFLAGS=-g -O>
~$ ssh localhost ~/bin/args make CFLAGS="-g -O"
Password: 
3 args: <make> <CFLAGS=-g> <-O>

What's happening is the command and its arguments are being smashed together into a string on the client side, then shoved through the ssh connection to the server side, where that string is handed to your shell as an argument for re-parsing. This is not what we want.

97.1. Manual requoting

The simplest workaround is to mash everything together into a single argument, and manually add quotes in just the right places, until we get it to work.

~$ ssh localhost '~/bin/args make CFLAGS="-g -O"'
Password: 
2 args: <make> <CFLAGS=-g -O>

The shell on the remote host will re-parse the argument, break it into words, and then execute it.

The first problem with this approach is that it's tedious. If we already have both kinds of quotes, and lots of shell substitutions that need to be performed, then we may end up needing to rearrange quite a lot, add backslashes to protect the right things, and so on. The second problem is that it doesn't work very well if our exact command isn't known in advance -- e.g., if we're writing a WrapperScript.

97.2. Passing data on stdin instead of the command line

Another workaround is to pass the command(s) as standard input to the remote shell, rather than as an argument. This won't work in all cases; it means the command being executed on the remote system can't use stdin for any other purpose, since we're tying up stdin to send our commands. But in the cases where it can be used, it works quite well:

# POSIX
# Stdin will not be available for use by the remote program
ssh remotehost sh <<EOF
make CFLAGS="-g -O"
EOF

97.3. Automatic requoting of each parameter

Let's now consider a more realistic problem: we want to write a wrapper script that invokes make on a remote host, with the arguments provided by the user being passed along intact. This is a lot harder than it would appear at first, because we can't just mash everything together into one word -- the script's caller might use really complex arguments, and quotes, and pathnames with spaces and shell metacharacters, that all need to be preserved carefully. Fortunately for us, bash provides a way to protect such things safely: printf %q. Together with an array and a loop, we can write a wrapper:

# Bash 2.05b and up
# Your account's shell on the remote host MUST BE BASH, not sh
unset a i
for arg; do
  a[i++]=$(printf %q "$arg")
done
exec ssh remotehost make "${a[@]}"

# Bash 3.1 and up
# Your account's shell on the remote host MUST BE BASH, not sh
unset a
for arg; do
  printf -v temp %q "$arg"
  a+=("$temp")
done
exec ssh remotehost make "${a[@]}"

# Bash 4.1 and up
# Your account's shell on the remote host MUST BE BASH, not sh
unset a i
for arg; do
  printf -v 'a[i++]' %q "$arg"
done
exec ssh remotehost make "${a[@]}"

# Bash 4.4 and up
# Your account's shell on the remote host MUST BE BASH, not sh
exec ssh remotehost make "${@@Q}"

See FAQ 73 for a brief description of the bash 4.4 parameter transformation operators (@Q and so on).

If we also need to change directory on the remote host before running make, we can add that as well:

# Bash 3.1 and up
# Your account's shell on the remote host MUST BE BASH, not sh
args=()
for arg; do
  printf -v temp %q "$arg"
  args+=("$temp")
done
printf -v dir %q "$PWD"
exec ssh remotehost cd "$dir" "&&" make "${args[@]}"

The drawback of this approach is that it only works if the remote shell is Bash. Bash's printf %q produces output that other shells may not be able to parse (such as $'\n' for newlines).

For other Bourne family shells, the closest approximation is to replace all single quotes in the data with the four characters '\'' and then enclose the data in single quotes. For example,

# POSIX
first=1
for arg in "$@"; do
  test "$first" = 1 && set --
  first=0
  set -- "$@" "'$(printf %s "$arg" | sed "s/'/'\\\\''/g")'"
done
exec ssh remotehost make "$@"

POSIX sh has no arrays, so we have to use the positional parameters as both input and output. It also can't perform simple character replacements in parameter expansion, so we have to fork() a sed process for every single argument. Of course, if the client is bash, and only the remote server is using sh, then the client script could be written using bash's parameter expansions to generate the same "sh-encoded" arguments. Not shown.

For more discussion of this issue, see avoiding code injection.


CategorySsh

98. How can I tell whether a symlink is dangling (broken)?

The documentation on this is fuzzy, but it turns out you can do this with shell builtins:

# Bash
if [[ -L $name && ! -e $name ]]
then echo "$name is a dangling symlink"
fi

The Bash man page tells you that "-L" returns "True if file exists and is a symbolic link", and "-e" returns "True if file exists". What might not be clear is that "-L" considers "file" to be the link itself. To "-e", however, "file" is the target of the symlink (whatever the link is pointing to). That's why you need both tests to see if a symlink is dangling; "-L" checks the link itself, and "-e" checks whatever the link is pointing to.

POSIX has these same tests, with similar semantics, so if for some reason you can't use the (preferred) [[ command, the same test can be done using the older [ command:

# POSIX
if [ -L "$name" ] && [ ! -e "$name" ]
then echo "$name is a dangling symlink"
fi

99. How to add localization support to your bash scripts

Looking for examples of how to add simple localization to your bash scripts, and how to do testing? This is probably what you need....

/!\ There is a potential security hole in this bash feature. Its use is discouraged.

99.1. First, some variables you must understand

Before we can even begin, we have to understand all the locale environment variables. This is fundamental, and extremely under-documented in the places where people actually look for documentation (man pages, etc.). Some of these variables may not apply to your system, because there seem to be various competing standards and extensions....

On recent GNU systems, the variables are used in this order:

  1. If LANGUAGE is set, use that, unless LANG is set to C, in which case LANGUAGE is ignored. Also, some programs simply don't use LANGUAGE at all.

  2. Otherwise, if LC_ALL is set, use that.
  3. Otherwise, if the specific LC_* variable that covers this usage is set, use that. (For example, LC_MESSAGES covers error messages.)
  4. Otherwise, use LANG.

That means you first have to check your current environment to see which of these, if any, are already set. If they are set, and you don't know about them, they may interfere with your testing, leaving you befuddled.

$ env | egrep 'LC|LANG'
LANG=en_US.UTF-8
LANGUAGE=en_US:en_GB:en

Here's an example from a Debian system. In this case, the LANGUAGE variable is set, which means any testing we do that involves changing LANG is likely to fail, unless we also change LANGUAGE. Now here's another example from another Debian system:

$ env | egrep 'LC|LANG'
LANG=en_US.utf8

In that case, changing LANG would actually work. A user on that system, writing a document on how to perform localization testing, might create instructions that would fail to work for the user on the first system....

So, go ahead and play around with your own system and see what works and what doesn't. You may not have a LANGUAGE variable at all (especially if you are not on GNU/Linux), so setting it may do nothing for you. You may need to use locale -a to see what locale settings are available. You may need to specify a character set in the LANG variable (e.g. es_ES.utf8 instead of es_ES). You may have to "generate locales" on your operating system (a process which is beyond the scope of this page, but which on Debian consists of running dpkg-reconfigure locales and answering questions) in order to make them work.

Try to get to the point where you can produce error messages in at least two languages:

$ wc -q
wc: invalid option -- 'q'
Try `wc --help' for more information.
$ LANGUAGE=es_ES wc -q
wc: opción inválida -- q
Pruebe `wc --help' para más información.

Once you can do that reliably, you can begin the actual work of producing a bash script with localisation.

99.2. Marking strings as translatable

This is the simplest part, at least to understand. Any string in $"..." is translated using the system's native language support (NLS) facilities. Find all the constant strings in your program that you want to translate, and mark them accordingly. Don't mark strings that contain variables or other substitutions. For example,

#!/bin/bash
echo $"Hello, world"

(As you can see, we're starting with very simple material here.)

Bash (at least up through 4.0) performs locale expansion before other substitutions. Thus, in a case like this:

echo $"The answer is $answer"

The literal string $answer will become part of the marked string. The translation should also contain $answer, and bash will perform the variable substitution on the translated string. The order in which bash does these substitutions introduces a potential security hole which we will not cover here just yet. (A patch has been submitted, but it's still too early....)

If the variables are not yet defined at the point where the $"..." string appears (and would therefore expand to empty strings), we can instead write:

printf $"The answer is %s" "$answer"

99.3. Generating and/or merging PO files

Next, generate what are called "PO files" from your program. These contain the strings we've marked, together with their translations (which we'll fill in later).

We start by creating a *.pot file, which is a template.

bash --dump-po-strings hello > hello.pot

This produces output which looks like:

#: hello:5
msgid "Hello, world"
msgstr ""

The name of your file (without the .pot extension) is called the domain of your translatable text. A domain in this context is similar to a package name. For example, the GNU coreutils package contains lots of little programs, but they're all distributed together; and so it makes sense for all their translations to be together as well. In our example, we're using a domain of hello. In a larger example containing lots of programs in a suite, we'd probably use the name of the whole suite.

This template will be copied once for each language we want to support. Let's suppose we wanted to support Spanish and French translations of our program. We'll be creating two PO files (one for each translation), so let's make two subdirectories, and copy the template into each one:

mkdir es fr
cp hello.pot es/hello.po
cp hello.pot fr/hello.po

This is what we do the first time through. If there were already some partially- or even fully-translated PO files in place, we wouldn't want to overwrite them. Instead, we would merge the new translatable material into the old PO file. We use a special tool for that called msgmerge. Let's suppose we add some more code (and translatable strings) to our program:

vi hello
bash --dump-po-strings hello > hello.pot
msgmerge --update es/hello.po hello.pot
msgmerge --update fr/hello.po hello.pot

The original author of this page created some notes which I am leaving intact here. Maybe they'll be helpful...?

# step 5: try to merge existing po with new updates
# remove duplicated strings by hand or with sed or something else
# awk '/^msgid/&&!seen[$0]++;!/^msgid/' lang/nl.pot > lang/nl.pot.new
msgmerge lang/nl.po lang/nl.pot

# step 5.1: try to merge existing po with new updates
cp --verbose lang/pct-scanner-script-nl.po lang/pct-scanner-script-nl.po.old
awk '/^msgid/&&!seen[$0]++;!/^msgid/' lang/pct-scanner-script-nl.pot > lang/pct-scanner-script-nl.pot.new
msgmerge lang/pct-scanner-script-nl.po.old lang/pct-scanner-script-nl.pot.new > lang/pct-scanner-script-nl.po

# step 5.2: try to merge existing po with new updates
touch lang/pct-scanner-script-process-nl.po lang/pct-scanner-script-process-nl.po.old
awk '/^msgid/&&!seen[$0]++;!/^msgid/' lang/pct-scanner-script-process-nl.pot > lang/pct-scanner-script-process-nl.pot.new
msgmerge lang/pct-scanner-script-process-nl.po.old lang/pct-scanner-script-process-nl.pot.new > lang/pct-scanner-script-process-nl.po

99.4. Translate the strings

This is a step which is 100% human labor. Edit each language's PO file and fill in the blanks.

#: hello:5
msgid "Hello, world"
msgstr "Hola el mundo"

#: hello:6
msgid "How are you?"
msgstr ""

99.5. Install MO files

Your operating system, if it has gotten you this far, probably already has some localized programs, with translation catalogs installed in some location such as /usr/share/locale (or elsewhere). If you want your translations to be installed there as well, you'll have to have superuser privileges, and you'll have to manage your translation domain (namespace) in such a way as to avoid collision with any OS packages.

If you're going to use the standard system location for your translations, then you only need to worry about making one change to your program: setting the TEXTDOMAIN variable.

#!/bin/bash
TEXTDOMAIN=hello

echo $"Hello, world"
echo $"How are you?"

This tells bash and the system libraries which MO file to use, from the standard location. If you're going to use a nonstandard location, then you have to set that as well, in a variable called TEXTDOMAINDIR:

#!/bin/bash
TEXTDOMAINDIR=/usr/local/share/locale
TEXTDOMAIN=hello

echo $"Hello, world"
echo $"How are you?"

Use one of these two depending on your needs.

Now, an MO file is essentially a compiled PO file. A program called msgfmt is responsible for this compilation. We just have to tell it where the PO file is, and where to write the MO file.

msgfmt -o /usr/share/locale/es/LC_MESSAGES/hello.mo es/hello.po
msgfmt -o /usr/share/locale/fr/LC_MESSAGES/hello.mo fr/hello.po

or

mkdir -p /usr/local/share/locale/{es,fr}/LC_MESSAGES
msgfmt -o /usr/local/share/locale/es/LC_MESSAGES/hello.mo es/hello.po
msgfmt -o /usr/local/share/locale/fr/LC_MESSAGES/hello.mo fr/hello.po

(If we had more than two translations to support, we might choose to mimic the structure of /usr/share/locale in order to facilitate mass-copying of MO files from the local directory to the operating system's repository. This is left as an exercise.)

99.6. Test!

Remember what we said earlier about setting locale environment variables... the examples here may or may not work for your system.

The gettext program can be used to retrieve individual translations from the catalog:

$ LANGUAGE=es_ES gettext -d hello -s "Hello, world"
Hola el mundo

Any untranslated strings will be left alone:

$ LANGUAGE=es_ES gettext -d hello -s "How are you?"
How are you?

And, finally, there is no substitute for actually running the program itself:

wooledg@wooledg:~$ LANGUAGE=es_ES ./hello
Hola el mundo
How are you?

As you can see, there's still some more translation to be done for our example. Back to work....


100. How can I get the newest (or oldest) file from a directory?

This page should be merged with BashFAQ/003

The intuitive answer of ls -t | head -1 is wrong, because parsing the output of ls is unsafe; instead, you should create a loop and compare the timestamps:

# Bash
files=(*) newest=${files[0]}
for f in "${files[@]}"; do
  if [[ $f -nt $newest ]]; then
    newest=$f
  fi
done

Then you'll have the newest file (according to modification time) in $newest. To get the oldest, simply change -nt to -ot (see help test for a list of operators), and of course change the names of the variables to avoid confusion.
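
For instance, the oldest-file variant might look like this:

# Bash
files=(*) oldest=${files[0]}
for f in "${files[@]}"; do
  if [[ $f -ot $oldest ]]; then
    oldest=$f
  fi
done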

Bash has no means of comparing file timestamps other than mtime, so if you wanted to get (for example) the most-recently-accessed file (newest by atime), you would have to get some help from the external command stat(1) (if you have it) or the loadable builtin finfo (if you can load builtins).

Here's an example using stat from GNU coreutils 6.10 (sadly, even across Linux systems, the syntax of stat is not consistent) to get the most-recently-accessed file. (In this version, %X is the last access time.)

# Bash, GNU coreutils
newest= newest_t=0
for f in *; do
  t=$(stat --format=%X -- "$f")   # atime
  if ((t > newest_t)); then
    newest_t=$t
    newest=$f
  fi
done

This also has the disadvantage of spawning an external command for every file in the directory, so it should be done this way only if necessary. To get the oldest file using this technique, you'd either have to initialize oldest_t with the largest possible timestamp (a tricky proposition, especially as we approach the year 2038), or with the timestamp of the first file in the directory, as we did in the first example.

Here is another solution that also spawns an external command, but is POSIX:

# posix
unset newest
for f in ./*; do
  # set the newest during the first iteration
  newest=${newest-$f} 
  # -prune avoids descending into directories; the exit status of find is useless here, so we check its output instead
  if [ "$(find "$f" -prune -newer "$newest")" ]; then 
    newest=$f
  fi
done

Example: how to remove all but the most recent directory. (Note, the modification time on a directory is the time of the most recent operation which changes that directory -- meaning the last file creation, file deletion, or file rename.)

 $ cat clean-old 
 dirs=(enginecrap/*/) newest=${dirs[0]}
 for d in "${dirs[@]}"
  do if [[ $d -nt $newest ]]
     then newest=$d
     fi
  done

 for z in "${dirs[@]}"
  do if [[ "$z" != "$newest" ]]
     then rm -rf "$z"
     fi
  done
 $ for x in 20101022 20101023 200101025 20101107 20101109; do mkdir enginecrap/"$x";done
 $ ls enginecrap/
 200101025       20101022        20101023        20101107        20101109
 $ ./clean-old 
 $ ls enginecrap/
 20101109


CategoryShell

101. How do I do string manipulations in bash?

Bash can do string operations. LOTS of string operations. This is an introduction to bash string manipulations and related techniques. It overlaps with the Parameter Expansion question, but the information here is presented in a more beginner-friendly manner (we hope).

101.1. Parameter expansion syntax

A parameter in bash is a term that covers both variables (storage places with names, that you can read and write by using their name) and special parameters (things you can only read from, not write to). For example, if we have a variable named fruit we can assign the value apple to it by writing:

fruit=apple

And we can read that value back by using a parameter expansion:

$fruit

Note, however, that $fruit is an expression -- a noun, not a verb -- and so normally we need to put it in some sort of command. Also, the results of an unquoted parameter expansion will be split into multiple words and expanded into filenames, which we generally don't want. So, we should always quote our parameter expansions unless we're dealing with a special case.

So, to see the value of a parameter (such as a variable):

printf '%s\n' "$fruit"

# not using echo which can't be used with arbitrary data

Or, we can use these expansions as part of a larger expression:

printf '%s\n' "I like to eat $fruit"

If we want to put an s on the end of our variable's content, we run into a dilemma:

printf '%s\n' "I like to eat $fruits"

This command tries to expand a variable named fruits, rather than a variable named fruit. We need to tell the shell that we have a variable name followed by a bunch of other letters that are not part of the variable name. We can do that like this:

printf '%s\n' "I like to eat ${fruit}s"

And while we're inside the curly braces, we also have the opportunity to manipulate the variable's content in various exciting and occasionally even useful ways, which we're about to describe.

It should be pointed out that in Bash, contrary to Zsh, these tricks only work on parameter expansions. You can't operate on a constant string (or a command substitution, etc.) using them, because the syntax requires a parameter name inside the curly braces. (You can, of course, stick your constant string or command substitution into a temporary variable and then use that.)

101.2. Length of a string

This one's easy, so we'll get it out of the way first.

printf '%s\n' "The string <$var> is ${#var} characters long."

Note that since bash 3.0, the length is measured in characters rather than bytes, which makes a significant difference in multi-byte locales. If you need the number of bytes, set LC_ALL=C (for instance in a subshell) before expanding ${#var}.
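
For example (a quick sketch; the subshell keeps the LC_ALL change from leaking into the rest of the script):

$ var='héllo'
$ printf '%s\n' "${#var}"                  # characters, in a UTF-8 locale
5
$ ( LC_ALL=C; printf '%s\n' "${#var}" )    # bytes
6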

101.3. Checking for substrings

This overlaps FAQ #41 but we'll repeat it here. To check for a (known, static) substring and act upon its presence or absence, just use the standard case construct:

case $var in
  (*substring*) printf '%s\n' "<$var> contains <substring>";;
  (*) printf '%s\n' "<$var> does not contain <substring>"
esac

In Bash, you can also use the Korn-style [[...]] construct:

if [[ $var = *substring* ]]; then
  printf '%s\n' "<$var> contains <substring>"
else
  printf '%s\n' "<$var> does not contain <substring>"
fi

If the substring you want to look for is in a variable, and you want to prevent it from being treated as a glob pattern, you can quote that part:

case $var in (*"$substring"*) ...

It also applies for the = (aka ==) and != operators of the [[...]] construct:

if [[ $var = *"$substring"* ]]; then
# substring will be treated as a literal string, even if it contains glob chars

If you want it to be treated as a glob pattern, remove the quotes:

if [[ $var = *$substring* ]]; then
# substring will be treated as a glob pattern

There is also a RegularExpression capability, involving the =~ operator. For compatibility with all versions of Bash from 3.0 up and other shells, be sure to put the regular expression into a variable -- don't put it directly into the [[ command. And don't quote it, either -- or else it may be treated as a literal string.

my_re='^fo+.*bar'
if [[ $var =~ $my_re ]]; then
# my_re will be treated as an Extended Regular Expression (ERE)

Beware that on many systems, regular expressions choke on strings that are not valid text in the user's locale, while bash glob patterns can somewhat deal with them, so in cases where either = or =~ can be used, = may be preferable.

101.4. Substituting part of a string

A common need is to replace some part of a string with something else. (Let's call the old and new parts "words" for now.) If we know what the old word is, and what the new word should be, but not necessarily where in the string it appears, then we can do this:

$ var="She favors the bold.  That's cold."
$ printf '%s\n' "${var/old/new}"
She favors the bnew.  That's cold.

That replaces just the first occurrence of the word old. If we want to replace all occurrences of the word, we double up the first slash:

$ var="She favors the bold.  That's cold."
$ printf '%s\n' "${var//old/new}"
She favors the bnew.  That's cnew.

We may not know the exact word we want to replace. If we can express the kind of word we're looking for with a glob pattern, then we're still in good shape:

$ var="She favors the bold.  That's cold."
$ printf '%s\n' "${var//b??d/mold}"
She favors the mold.  That's cold.

We can also anchor the word we're looking for to either the start or end of the string (but not both). In other words, we can tell bash that it should only perform the substitution if it finds the word at the start, or at the end, of the string, rather than somewhere in the middle.

$ var="She favors the bold.  That's cold."
$ printf '%s\n' "${var/#bold/mold}"
She favors the bold.  That's cold.
$ printf '%s\n' "${var/#She/He}"
He favors the bold.  That's cold.
$ printf '%s\n' "${var/%cold/awful}"
She favors the bold.  That's cold.
$ printf '%s\n' "${var/%cold?/awful}"
She favors the bold.  That's awful

Note that nothing happened in the first command, because bold did not appear at the beginning of the string; nor in the third command, because cold did not appear at the end of the string. The # anchors the pattern (plain word or glob) to the beginning, and the % anchors it to the end. In the fourth command, the pattern cold? matches "cold." (including the period) at the end of the string.

101.5. Removing part of a string

We can use the ${var/old/} or ${var//old/} syntax (or even ${var/old}, ${var//old}) to replace a word with nothing if we want. That's one way to remove part of a string. But there are some other ways that come in handy more often than you might guess.

The first involves removing something from the beginning of a string. Again, the part we're going to remove might be a constant string that we know in advance, or it might be something we have to describe with a glob pattern.

$ var="/usr/local/bin/tcpserver"
$ printf '%s\n' "${var##*/}"
tcpserver

The ## means "remove the largest possible matching string from the beginning of the variable's contents". The */ is the pattern that we want to match -- any number of characters ending with a (literal) forward slash. The result is essentially the same as the basename command, with one notable exception: If the string ends with a slash (or several), basename would return the name of the last path element, while the above would return an empty string. Use with caution.
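
To illustrate that caveat:

$ var=/usr/local/bin/
$ printf '<%s>\n' "${var##*/}"
<>
$ basename "$var"
bin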

If we only use one # then we remove the shortest possible matching string. This is less commonly needed, so we'll skip the example for now and give a really cool one later.

As you might have guessed, we can also remove a string from the end of our variable's contents. For example, to mimic the dirname command, we remove everything starting at the last slash:

$ var="/usr/local/bin/tcpserver"
$ printf '%s\n' "${var%/*}"
/usr/local/bin

The % means "remove the shortest possible match from the end of the variable's contents", and /* is a glob that begins with a literal slash character, followed by any number of characters. Since we require the shortest match, bash isn't allowed to match /bin/tcpserver or anything else that contains multiple slashes. It has to remove /tcpserver only.

Here again, there is a notable difference with dirname in that for instance with var=file, dirname would return . while ${var%/*} would expand to file. And in var=dir/, dirname also returns . while ${var%/*} expands to dir.
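
For instance:

$ var=file
$ dirname "$var"
.
$ printf '%s\n' "${var%/*}"
file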

Likewise, %% means "remove the longest possible match from the end of the variable's contents".

Those operators, unlike the ${var/pattern/replacement} operator from ksh93, are standard, so they can also be used in sh scripts.

Now let's try something harder: what if we wanted a sort of double basename -- the last two parts of a pathname, instead of just the last part?

$ var=/home/someuser/projects/q/quark
$ tmp=${var%/*/*}
$ printf '%s\n' "${var#"$tmp/"}"
q/quark

This is a bit trickier. Here's how it works:

  • Look for the shortest possible string matching /*/* at the end of the pathname. In this case, it would match /q/quark.

  • Remove that from the end of the original string. The result of this is the thing we don't want. We store this in tmp.

  • Remove the thing we don't want (plus an extra /) from the original variable.

  • We're left with the last two parts of the pathname.

It's also worth pointing out that, as we just demonstrated, the pattern to be removed (after # or % or ## or %%) doesn't have to be a constant -- it can be another substitution. This isn't the most common case in real life, but it's sometimes handy.

101.6. Extracting parts of strings

We can combine the # and % operations to produce some interesting results, too. For example, we might know that our variable contains something in square brackets, somewhere, with an unknown amount of "garbage" on both sides. We can use this to extract the part we want:

$ var='garbage in [42] garbage out'
$ tmp=${var##*[}
$ printf '%s\n' "${tmp%%]*}"
42

Note that we used a temporary variable to hold the results of one parameter expansion, and then fed that result to the second one. We can't do two parameter expansions to the same variable at once (the syntax simply doesn't permit it).

If the delimiter is the same both times (for instance, double quotes) then we need to be a bit more careful and use only one # or %:

$ var='garbage in "42" garbage out'
$ tmp=${var#*\"}
$ printf '%s\n' "${tmp%\"*}"
42

Sometimes, however, we don't have useful delimiters. If we know that the good part resides in a certain set of columns, we can extract it that way. We can use range notation to extract a substring by specifying starting position and length:

var='CONFIG  .SYS'
left=${var:0:8}
right=${var:(-3)}

Here, the input is an MS-DOS "8.3" filename, space-padded to its full length. If for some reason we need to separate into its two parts, we have several possible ways to go about it. We could split the name into fields at the dot (we'll show that approach later). Or we could use ${var##*.} to get the "extension" (the part after the last dot) and ${var%.*} to get the left-hand part. Or we could count the characters, as we showed here.

In the ${var:0:8} example, the 0 is the starting position (0 is the first character) and 8 is the length of the piece we want in characters. If we omit the length, or if the length is greater than the rest of the string, then we get the rest of the string as result. In the ${var:(-3)} example, we omitted the length. We specified a starting position of -3 (negative three), which means three from the end. We have to use parentheses or a space between the : and the negative number to avoid a syntactic inconvenience (we'll discuss that later). We could also have used ${var:8} to get the rest of the string starting at character offset 8 (which is the ninth character) in this case, since we know the length is constant; but in many cases, we might not know the length in advance, and specifying a negative starting position lets us avoid some unnecessary work.

Character-counting is an even stronger technique when there is no delimiter at all between the pieces we want:

var='CONFIG  SYS'
left=${var:0:8}
right=${var:8}

We can't use ${var#*.} or similar techniques here!

That operator is also from ksh93 and not standard sh.

101.7. Splitting a string into fields

Sometimes your input might naturally consist of various fields with some sort of delimiter between them. In these cases, a natural approach to handling the input is to divide it into its component fields, so that each one can be handled on its own.

If the delimiter is a single character (or one character of a set -- so long as it's never more than one) then bash offers several viable approaches.

The first approach, which works in the special case where the variable never contains newline characters and doesn't end with the delimiter, is to read the input directly into an array:

var=192.168.1.3
IFS=. read -r -a octets <<< "$var"

We're no longer in the realm of parameter expansion here at all. We've combined several features at once:

  • The IFS variable tells the read command what field delimiters to use. In this case, we only want to use the dot. If we had specified more than one character, then it would have meant any one of those characters would qualify as a delimiter.

  • The notation var=value command means we set the variable only for the duration of this single command. The IFS variable goes back to whatever it was before, once read is finished.

  • read puts its results into an array named octets.

  • <<< "$var" means we use the contents of var as standard input to the read command (fed via a temporary file in older versions of bash and via a pipe in newer versions for short strings only).

After this command, the result is an array named octets whose first element (element 0) is 192, and whose second element (element 1) is 168, and so on. If we want a fixed set of variables instead of an array, we can do that as well:

IFS=, read lastname firstname rest <<< "$name"

We can also "skip" fields we don't want by assigning them to a variable we don't care about such as x or junk; or to _ which is overwritten by each command:

while IFS=: read user x uid gid x home shell; do
 ...
done < /etc/passwd

(for portability, it's best to avoid _ as it's a read-only variable in some shells)

Another approach to the same sort of problem involves the intentional use of WordSplitting to retrieve fields one at a time. This is more cumbersome than the array approach we just saw, but it does have several advantages:

  • It works in sh as well as bash.

  • It works even if the string ends in a delimiter
  • It works even if the string contains newline characters.

var=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:
found=no
set -o noglob
IFS=:
for dir in $var''
do
  if test -x "${dir:+$dir/}foo"; then found=yes; fi
done
set +o noglob; unset IFS

This example is similar to one on FAQ 81. Bash offers better ways to determine whether a command exists in your PATH, but this illustrates the concept quite clearly. Points of note:

  • set -o noglob (or set -f) disables glob expansion. You should always disable globs when using unquoted parameter expansions, unless you specifically want to allow globs in the parameter's contents to be expanded.

  • We use set +o noglob (or set +f) and unset IFS at the end of the code to return the shell to a default state. However, this is not necessarily the state the shell was in when the code started. Returning the shell to its previous (possibly non-default) state is more trouble than it's worth in most cases, so we won't discuss it in depth here.

  • Again, IFS contains a list of field delimiters. We want to split our parameter at each colon. We add a '' at the end so that an empty trailing element is not discarded. That also means that an empty $var is treated as containing one empty element (which is how the $PATH variable works: an empty $PATH means searching only in the current working directory).

If your field delimiter is a multi-character string, then unfortunately bash does not offer any simple ways to deal with that. Your best bet is to handle the task in awk instead.

$ cat inputfile
apple::0.75::21
banana::0.50::43
cherry::0.15::107
date::0.30::20
$ awk -F '::' '{print $1 " qty " $3 " @" $2 " = " $2*$3; total+=$2*$3} END {print "Total: " total}' inputfile
apple qty 21 @0.75 = 15.75
banana qty 43 @0.50 = 21.5
cherry qty 107 @0.15 = 16.05
date qty 20 @0.30 = 6
Total: 59.3

awk's -F allows us to specify a field delimiter as an extended regular expression. awk also allows floating point arithmetic, associative arrays, and a wide variety of other features that many shells lack.

101.8. Joining fields together

The simplest way to concatenate values is to use them together, with nothing in between:

printf '%s\n' "$foo$bar"

If we have an array instead of a fixed set of variables, then we can print the array with a single character (or nothing) between fields using IFS:

$ array=(1 2 3)
$ (IFS=/; printf '%s\n' "${array[*]}")
1/2/3

Notable points here:

  • We can't use IFS=/ printf '%s\n' ... because the shell expands "${array[*]}" before the temporary IFS=/ assignment takes effect for that command.

  • Therefore, we have to set IFS first, in a separate command. This would make the assignment persist for the rest of the shell. Since we don't want that, and because we aren't assigning to any variables that we need to keep, we use an explicit SubShell (using parentheses) to set up an environment where the change to IFS is not persistent. Another option would be to use a function in which we declare IFS as local with local IFS (a sketch of that is shown after this list).

  • If IFS is not set, we get a space between elements. If it's set to the empty string, there is nothing between elements.

  • The delimiter is not printed after the final element.
  • If we wanted more than one character between fields, we would have to use a different approach; see below.
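
As a sketch of the "local IFS" option mentioned in the list above (the function name is just an illustration):

# Bash
join_by_char() {
  local IFS=$1     # the single-character delimiter
  shift
  printf '%s\n' "$*"
}

array=(1 2 3)
join_by_char / "${array[@]}"    # prints 1/2/3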

A more general approach to "joining" an array involves iterating through the fields, either explicitly (using a for loop) or implicitly (using printf). We'll start with a for loop. This example joins the elements of an array with :: between elements, producing the joined string on stdout:

array=(1 2 3)
first=1
for element in "${array[@]}"; do
  if ((! first)); then printf "::"; fi
  printf "%s" "$element"
  first=0
done
echo

This example uses the implicit looping of printf to print all the script's arguments, with angle brackets around each one:

#!/bin/sh
printf "$# args:"
[ "$#" -eq 0 ] || printf " <%s>" "$@"
echo

The case where $# is 0 has to be treated specially, because printf still processes the format string once even when it is given no arguments.
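
To see why, note what printf does with a format but no arguments:

$ printf " <%s>\n"
 <>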

A named array can also be used in place of @ (e.g. "${array[@]}" expands to all the elements of array).

If we wanted to join the strings into another variable, instead of dumping them out, then we have a few choices:

  • A string can be built up a piece at a time using var="$var$newthing" (portable) or var+=$newthing (bash 3.1). For example,

    output=$1; shift
    while (($#)); do output+="::$1"; shift; done
  • If the joining can be done with a single printf command, it can be assigned to a variable using printf -v var FORMAT FIELDS... (bash 3.1). For example,

    printf -v output "%s::" "$@"
    output=${output%::}    # Strip extraneous delimiter from end of string.
  • If the joining requires multiple commands, and a piecemeal string build-up isn't desirable, CommandSubstitution can be used to assign a function's output: var=$(myjoinfunction). It can also be used with a chunk of commands:

    var=$(
      command
      command
    )
  • The disadvantage of command substitution is that it discards all trailing newlines. See the CommandSubstitution page for a workaround.

101.9. Upper/lower case conversion

In bash 4, we have some new parameter expansion features:

  • ${var^} capitalizes the first letter of var

  • ${var^[aeiou]} capitalizes the first letter of var if it is a vowel

  • ${var^^} capitalizes all the letters in var

  • ${var,} lower-cases the first letter of var

  • ${var,[abc]} lower-cases the first letter of var if it is a, b or c

  • ${var,,} lower-cases all the letters in var

These are more efficient alternatives to invoking tr.
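
A couple of quick examples:

# Bash 4
var='hello world'
printf '%s\n' "${var^}"     # Hello world
printf '%s\n' "${var^^}"    # HELLO WORLD
var='BASH'
printf '%s\n' "${var,,}"    # bash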

101.10. Default or alternate values

The oldest parameter expansion features of all (every Bourne-family shell has the basic form of these) involve the use or assignment of default values when a parameter is not set. These are fairly straightforward:

"${EDITOR-vi}" "$filename"

If the EDITOR variable isn't set, use vi instead. There's a variant of this:

"${EDITOR:-vi}" "$filename"

This one uses vi if the EDITOR variable is unset or empty. You may use a : in front of any of the operators in this section to treat empty variables the same as unset variables.

Previously, we mentioned a syntactic infelicity that required parentheses or whitespace to work around:

var='a bunch of junk089'
value=${var:(-3)}

If we were to use ${var:-3} here, it would be interpreted as "use 3 as the default if var is unset or empty", because that syntax has been in use since long before bash existed. Hence the need for the workaround.

We can also assign a default value to a variable if it's not already set:

: "${PATH=/usr/bin:/bin}"
: "${PATH:=/usr/bin:/bin}"

In the first one, if PATH is set, nothing happens. If it's not set, then it is assigned the value /usr/bin:/bin. In the second one, the assignment also happens if PATH is set to an empty value. Since ${...} is an expression and not a command, it has to be used in a command. Traditionally, the : command (which does nothing, and is a builtin command even in the most ancient shells) is used for this purpose.

Finally, we have this expression:

${var+foo}

This one means use foo if the variable is set; otherwise, use nothing. It's an extremely primitive conditional check, and it has three main uses:

  • The expression ${1+"$@"} is used to work around broken behavior of "$@" in old or buggy shells when writing a WrapperScript.

  • A test such as if test "${var+defined}" can be used to determine whether a variable is set.

  • One may conditionally pass optional arguments like: cmd ${opt_x+-x "$opt_x"} ...

It's almost never used outside of those three contexts.
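
For instance, the second use (testing whether a variable is set, even if it's set to an empty string) looks like this:

# POSIX
unset var
if test "${var+defined}"; then printf '%s\n' 'var is set'; else printf '%s\n' 'var is unset'; fi    # prints: var is unset
var=''
if test "${var+defined}"; then printf '%s\n' 'var is set'; else printf '%s\n' 'var is unset'; fi    # prints: var is set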

Quick glance table:

${var-word}      Expands to the contents of var if var is set; otherwise, to word.
${var:-word}     Expands to the contents of var if var is set and not empty; otherwise, to word.
${var+word}      Expands to word if var is set; otherwise, to nothing.
${var:+word}     Expands to word if var is set and not empty; otherwise, to nothing.
${var=word}      Assigns word to var if var is unset; then expands to the contents of var.
${var:=word}     Assigns word to var if var is unset or empty; then expands to the contents of var.
${var?word}      Expands to the contents of var if var is set; otherwise, writes word to stderr and exits the shell.
${var:?word}     Expands to the contents of var if var is set and not empty; otherwise, writes word to stderr and exits the shell.

Nobody ever uses ${var?word} or ${var:?word}. Please pretend they don't exist, just like you pretend set -e and set -u don't exist.

101.11. See Also

Parameter expansion (terse version, with handy tables).


CategoryShell
