I'm trying to put a command in a variable, but the complex cases always fail!
Variables hold data. Functions hold code. Don't put code inside variables! There are many situations in which people try to shove commands, or command arguments, into variables and then run them. Each case needs to be handled separately.
Contents
- I'm trying to put a command in a variable, but the complex cases always fail!
  - Things that do not work
  - I'm trying to save a command so I can run it later without having to repeat it each time
  - I only want to pass options if the runtime data needs them
  - I want to generalize a task, in case the low-level tool changes later
  - I'm constructing a command based on information that is only known at run time
  - I want a log of my script's actions
1. Things that do not work
Some people attempt to do things like this:
{{{
# Example of BROKEN code, DON'T USE THIS.
args=$address1
if [[ $subject ]]; then
    args+=" -s $subject"
fi
mail $args < "$body"
}}}
Adding quotes won't help, either:
{{{
# Example of BROKEN code, DON'T USE THIS.
args="$address1 $address2"
if [[ $subject ]]; then args+=" -s '$subject'"; fi
mail $args < "$body"
}}}
This fails because of WordSplitting and because the single quotes inside the variable are literal, not syntactical. If $subject contains internal whitespace, it will be split at those points. The mail command will receive -s as one argument, then the first word of the subject (with a literal ' in front of it) as the next argument, and so on.
Read Arguments to get a better understanding of how the shell figures out what the arguments in your statement are.
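To see the splitting for yourself, here is a minimal sketch (the address, subject and the showargs helper are made up for illustration) that prints each argument the command actually receives:
{{{
# Print each argument on its own line, wrapped in <>.
showargs() { for a; do printf '<%s>\n' "$a"; done; }

subject='The subject'
args="someone@example.com -s '$subject'"
showargs $args
# Output:
# <someone@example.com>
# <-s>
# <'The>
# <subject'>
}}}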
Here's another thing that won't work:
{{{
# BROKEN code. Do not use!
redirs=">/dev/null 2>&1"
if ((debug)); then redirs=; fi
some command $redirs
}}}
Here's yet another thing that won't work:
{{{
# BROKEN code. Do not use!
runcmd() { if ((debug)); then echo "$@"; fi; "$@"; }
}}}
The runcmd function can only handle simple commands. It can't handle redirections, pipelines, for/while loops, if statements, etc.
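A quick sketch of where this breaks down (the commands are hypothetical):
{{{
runcmd ls /tmp                  # OK: a simple command with arguments
runcmd ls /tmp > listing.txt    # the redirection applies to runcmd itself,
                                # so the debug echo ends up in listing.txt too
runcmd "ls /tmp | wc -l"        # tries to run a command literally named "ls /tmp | wc -l"
}}}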
Now let's look at how we can perform some of these tasks.
2. I'm trying to save a command so I can run it later without having to repeat it each time
Just use a function:
{{{
pingMe() {
    ping -q -c1 "$HOSTNAME"
}

[...]

if pingMe; then ..
}}}
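If the saved command needs to vary, pass the varying parts in as arguments instead of baking them into the function (a small sketch; pingHost is a made-up name):
{{{
pingHost() {
    ping -q -c1 "$1"
}

if pingHost "$HOSTNAME"; then
    echo "host is reachable"
fi
}}}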
3. I only want to pass options if the runtime data needs them
You can use the ${var:+..} parameter expansion for this:
{{{
# using eval
eval ping -q "${count:+'-c' \"\$count\"}" '"$HOSTNAME"'

# or using field splitting
oIFS=$IFS
IFS=' '
ping -q ${count:+'-c' "$count"} "$HOSTNAME"
IFS=$oIFS; unset -v oIFS
}}}
In either case, the -c option (with its "$count" argument) is only added to the command when $count is not empty. Notice the quoting: in the first case, everything is explicitly quoted to produce the desired command for eval. In the second case, there are no quotes around ${var:+...}, but there are quotes around every non-IFS-containing string inside it, and it is those quotes that terminate words. The details of how this works get very nasty in complex situations (gory details here).
Long story short: field splitting should be avoided at all costs, and eval is the clear winner when expansions yield shell syntax that must be split by the parser into words before expansions are performed. That's because, unlike the Bourne shell, modern shells don't use IFS when parsing commands, only when processing unquoted expansion output.
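As a rough sketch of what eval ends up re-reading in each case (the value of count is hypothetical):
{{{
count=3
eval ping -q "${count:+'-c' \"\$count\"}" '"$HOSTNAME"'
# eval re-reads:  ping -q '-c' "$count" "$HOSTNAME"   ->  ping -q -c 3 <hostname>

count=
eval ping -q "${count:+'-c' \"\$count\"}" '"$HOSTNAME"'
# eval re-reads:  ping -q "$HOSTNAME"                 ->  ping -q <hostname>
}}}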
This would also work for our mail example:
{{{
addresses=("$address1" "$address2")
eval mail "${subject:+'-s' \"\$subject\"}" '"${addresses[@]}"' <body
}}}
4. I want to generalize a task, in case the low-level tool changes later
Again, variables hold data; functions hold code.
In the mail example, we've got hard-coded dependence on the syntax of the Unix mail command. The version in the previous section is an improvement over the original broken code, but what if the internal company mail system changes? Having several calls to mail scattered throughout the script complicates matters in this situation.
What you probably should be doing, paying very close attention to how you quote your expansions, is this:
{{{
# Bash 3.1
# Send an email to someone.
# Reads the body of the mail from standard input.
#
# sendto subject address [address ...]
#
sendto() {
    # Used to be standard mail, but the fucking HR department
    # said we have to use this crazy proprietary shit....
    # mailx -s "$@"

    local subject=$1
    shift
    local addr addrs=()
    for addr; do addrs+=(--recipient="$addr"); done
    MailTool --subject="$subject" "${addrs[@]}"
}

sendto "The Subject" "$address" <"$bodyfile"
}}}
The original implementation uses mailx(1), a standard Unix command. Later, this is commented out and replaced by something called MailTool, which was made up on the spot for this example. But it should serve to illustrate the concept: the function's invocation is unchanged, even though the back-end tool changes.
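One payoff is that the call sites never need to change, no matter which mailer the function wraps (the addresses here are hypothetical):
{{{
sendto "Nightly backup report" "admin@example.com" "oncall@example.com" <"$bodyfile"
}}}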
5. I'm constructing a command based on information that is only known at run time
The root of the issue described above is that you need a way to maintain each argument as a separate word, even if that argument contains spaces. Quotes won't do it, but an array will. (We saw a bit of this in the previous section, where we constructed the addrs array on the fly.)
If you need to create a command dynamically, put each argument in a separate element of an array. A shell with arrays (like Bash) makes this much easier. POSIX sh has no arrays, so the closest you can come is to build up a list of elements in the positional parameters. Here's a POSIX sh version of the sendto function from the previous section:
{{{
# POSIX sh
# Usage: sendto subject address [address ...]
sendto() {
    subject=$1
    shift
    first=1
    for addr; do
        if [ "$first" = 1 ]; then set --; first=0; fi
        set -- "$@" --recipient="$addr"
    done
    if [ "$first" = 1 ]; then
        echo "usage: sendto subject address [address ...]"
        return 1
    fi
    MailTool --subject="$subject" "$@"
}
}}}
Note that we overwrite the positional parameters inside a loop that is iterating over the previous set of positional parameters (because we can't make a second array, not even to hold a copy of the original parameters). This appears to work in at least 3 different /bin/sh implementations (tested in Debian's dash, HP-UX's sh and OpenBSD's sh).
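Here is the same rebuild-in-place trick in isolation (with made-up data). It leans on the behavior described above: the for loop's word list is expanded once, up front, so replacing the positional parameters inside the loop doesn't disturb the iteration.
{{{
set -- "one word" "two words" last
first=1
for item; do
    if [ "$first" = 1 ]; then set --; first=0; fi
    set -- "$@" "[$item]"
done
printf '<%s>\n' "$@"
# <[one word]>
# <[two words]>
# <[last]>
}}}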
Another example of this is using dialog to construct a menu on the fly. The dialog command can't be hard-coded, because its parameters are supplied based on data only available at run time (e.g. the number of menu entries). For an example of how to do this properly, see FAQ #40.
It's worth noting that you cannot put anything other than a list of arguments into an array variable when using the "${array[@]}" technique to evaluate a command. Pipelines, redirection, assignments, and any other shell keywords or syntax will not be evaluated correctly.
In bash, the only ways to generate, manipulate, or store code more complex than a simple command at runtime involve storing the code's plain text in a variable, file, stream, or function, and then using eval or sh to evaluate the stored code. Directly manipulating raw code strings is among the least robust metaprogramming techniques and one of the most common sources of bugs and security issues. That's because predicting all the ways code might come together to form a valid construct, and restricting it so that it never operates outside of what's expected, requires great care and detailed knowledge of language quirks. Bash lacks all the usual kinds of abstractions that allow doing this safely. Excessive use can also obfuscate your code.
6. I want a log of my script's actions
Another reason people attempt to stuff commands into variables is because they want their script to print each command before it runs it. If that's all you want, then simply use the set -x command, or invoke your script with #!/bin/bash -x or bash -x ./myscript.
{{{
if ((DEBUG)); then set -x; fi
mysql -u me -p somedbname < file
...
}}}
Note that you can turn it off and back on inside the script with set +x and set -x.
Some people get into trouble because they want to have their script print their commands including redirections. set -x shows the command without redirections. People try to work around this by doing things like:
{{{
# Non-working example
command="mysql -u me -p somedbname < file"

((DEBUG)) && echo "$command"
"$command"
}}}
(This is so common that I include it here explicitly.)
Once again, this does not work. You can't make it work. Even the array trick won't work here.
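To see why even an array can't capture the redirection (a sketch): once the < comes from an expansion, it is just another argument, not a redirection operator.
{{{
cmd=(mysql -u me -p somedbname '<' file)
"${cmd[@]}"
# Runs: mysql -u me -p somedbname "<" file
# mysql receives "<" and "file" as ordinary arguments; nothing is redirected.
}}}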
One way to log the whole command, without resorting to the use of eval or sh (don't do that!), is the DEBUG trap. A practical code example:
{{{
trap 'printf %s\\n "$BASH_COMMAND" >&2' DEBUG
}}}
Assuming you're logging to standard error.
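If standard error isn't where you want the log to go, you can point the trap at a dedicated file descriptor instead; a sketch, with a made-up log path:
{{{
exec 3>>/var/tmp/myscript.log
trap 'printf "%s\n" "$BASH_COMMAND" >&3' DEBUG
}}}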
Note that redirect representation by BASH_COMMAND may still be affected by this bug.
If you STILL think you need to write out every command you're about to run before you run it, AND that you must include all redirections, AND you can't use a DEBUG trap, then just do this:
{{{
# Working example
echo "mysql -u me -p somedbname < file"
mysql -u me -p somedbname < file
}}}
Don't use a variable at all. Just copy and paste the command, wrap an extra layer of quotes around it (can be tricky -- that's why we do not recommend trying to use eval here), and stick an echo in front of it.
However, consider that echoing your commands verbatim is really ugly. Why are you doing this? Are you debugging the script? If so, how is the output of set -x insufficient? All you have to do is find the bug and fix it. Surely you won't leave this debugging code in place once the bug has been fixed.
If you intend to create a log of your script's actions, every time it is run, for accountability or other reasons, then that log should be human-readable. In that case, don't just echo your commands (especially if you have to bend over backwards to do so)! Write out meaningful (possibly even date-stamped) lines describing what you're doing.
echo "Populating database table" mysql -u me -p somedbname < file