Process Substitution
Process substitution is a very useful BASH extension. It is similar to awk's "command" | getline and is especially important for bypassing subshells caused by pipelines.
Process substitution comes in two forms: <(some command) and >(some command). Each form either causes a FIFO to be created under /tmp or /var/tmp, or uses a named file descriptor (/dev/fd/*), depending on the operating system. The substitution syntax is replaced by the name of the FIFO or FD, and the command inside it is run in the background. The substitution is performed at the same time as parameter expansion and command substitution.
One of the most common uses of this feature is to avoid the creation of temporary files, e.g. when using diff(1):
diff <(sort list1) <(sort list2)
This is (roughly) equivalent to:
mkfifo /var/tmp/fifo1
mkfifo /var/tmp/fifo2
sort list1 >/var/tmp/fifo1 &
sort list2 >/var/tmp/fifo2 &
diff /var/tmp/fifo1 /var/tmp/fifo2
rm /var/tmp/fifo1 /var/tmp/fifo2
Note that the diff command actually receives two filename arguments.
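To see what the substitution actually expands to, you can simply print it instead of opening it (a quick sketch; the exact path depends on your operating system):

```shell
# Print the generated filename rather than reading from it.  On Linux this
# is typically /dev/fd/63; on systems without /dev/fd it is a FIFO under
# /tmp or /var/tmp.
echo <(true)

# Because the result is an ordinary filename, any command that reads a
# file works with it, not just diff:
wc -l < <(printf 'a\nb\nc\n')   # counts the 3 lines produced inside
```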
Another common use is avoiding the loss of variables inside a loop that is part of a pipeline. For example, this will fail:
# This example will fail, unless run in ksh88/ksh93
i=0
sort list1 | while read line; do
  i=$(($i + 1))
  ...
done
echo "$i lines processed" # Always prints 0
But this works:
# Working example, using bash syntax.
i=0
while read line; do
  ((i++))
  ...
done < <(sort list1)
echo "$i lines processed"
The difference between <(...) and >(...) is merely which way the redirections are done. With <(...) one is expected to read from the substitution, and the command is set up to use it as stdout. With >(...) one is expected to write to the substitution, and the command inside is set up to use it as stdin.
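A minimal sketch of both directions (the strings and commands here are only illustrations):

```shell
# <(...) : we read from the substitution; the inner command's stdout
# is connected to the generated file.
cat <(echo "produced inside the substitution")

# >(...) : we write to the substitution; the inner command's stdin
# is connected to the generated file.  Here tr receives what echo
# writes and upper-cases it.
echo "written into the substitution" > >(tr a-z A-Z)
```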
>(...) is used less frequently; the most common situation is in conjunction with tee(1).
exec > >(tee logfile)

# Rest of script goes here
# Stdout of everything is logged, and also falls through to real stdout.
# Beware of buffering issues, especially if you also have stderr (e.g.
# prompts for user input may appear before the previous line of stdout).
>(...) is handy when redirecting the output to multiple files, based on some criteria.
# For example:
some_command | tee >(grep A > A.out) >(grep B > B.out) >(grep C > C.out) > /dev/null
See Bash FAQ #106 for more discussion of that usage.
Here's a more complicated example:
hasFile='Note: the (top-|highly )?secret plans are backed up at:(.*)'
criticalFile=
while IFS= read -r line; do
    [[ $line ]] || continue
    case $line in
        '!!! '*)
            errMsg "${line#'!!! '}"
            ;;
        *important*)
            echo "$line"
            ;;
        *)
            if [[ $line =~ $hasFile ]]; then
                criticalFile=${BASH_REMATCH[2]}
                warn "File at $criticalFile"
            else
                spin
            fi
            ;;
    esac
done < <(someCommand "${options[@]}" "${param[@]}" 2>&1 | tee "$logfile")
[[ $criticalFile ]] || abort 'File not found'
Piping the command into a while loop would mean any variables set in the loop would be lost. Note that the command inside the process substitution can be a pipeline; in fact, you can write a whole script on that side. Be aware that the substituted command runs in a subshell, and that it will continue to run when your script exits (unless you manage your child processes).
In the above example the regex could as easily be done with a case:
'Note: the '*'secret plans are backed up at:'*)
    criticalFile=${line#*'secret plans are backed up at:'}
Process substitution where the external command is an awk program is particularly powerful and flexible.
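For instance, awk can read several substitutions as separate "files", using FNR to tell them apart. A small sketch (the two listings here are just placeholder data):

```shell
# Print lines that appear in the second listing but not the first,
# without creating any temporary files.  While NR == FNR, awk is still
# reading the first substitution and records each line; afterwards it
# prints only unrecorded lines from the second.
awk 'NR == FNR { seen[$0] = 1; next } !($0 in seen)' \
    <(printf 'a\nb\n') \
    <(printf 'a\nb\nc\n')
# prints: c
```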
Portability
Bash, Zsh, and AT&T ksh{88,93} (but not pdksh/mksh) support process substitution. Process substitution isn't specified by POSIX. You may use NamedPipes to accomplish the same things. Coprocesses can also do everything process substitutions can, and are slightly more portable (though the syntax for using them is not).
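As a rough illustration, here is a minimal bash coprocess (the 'coproc' keyword, bash 4+; ksh and zsh use different syntax). Unlike a process substitution, a coprocess gives you both its stdin and its stdout, so you can have a two-way exchange with it:

```shell
# Start a background loop whose stdin and stdout are connected to the
# current shell through the UPPER[1] (write) and UPPER[0] (read) fds.
# The loop upper-cases each line it receives (${l^^} is bash 4 syntax).
coproc UPPER { while IFS= read -r l; do printf '%s\n' "${l^^}"; done; }

echo "hello" >&"${UPPER[1]}"        # write a line to the coprocess
IFS= read -r reply <&"${UPPER[0]}"  # read its answer back
echo "$reply"                       # HELLO
```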
Pitfalls
It is not possible to obtain the exit code of a process substitution command from the shell that created the process substitution:
commandA <(commandB; [commandB's exit code is available here from $?])
[commandB's exit code cannot be obtained from here. $? holds commandA's exit code]
If you need the exit code in the main script, you'll need to rewrite your code and get commandB out of the process substitution. Depending on your actual problem these options may be open to you:
# If commandA can read the data from stdin:
commandB | commandA
# You can now get the exit code of commandB from PIPESTATUS.

commandB > >(commandA)
# You can now get the exit code of commandB from $? (or by putting this in an if).

# If commandA cannot read it from stdin, but requires a file argument:
commandB > >(commandA <(cat))
# Again, commandB's exit code is available from $?.

# You can also keep commandB's output in memory. When you do this, you can get
# commandB's exit code from $? or put the assignment in an if.
b=$(commandB); commandA <<< "$b"              # commandA reads commandB's output from stdin
b=$(commandB); commandA <(printf '%s\n' "$b") # commandA gets commandB's output as a file argument