I'm trying to put a command in a variable, but the complex cases always fail!
Some people attempt to do things like this:
# Non-working example
args="-s 'The subject' $address"
mail $args < $body
This fails because of WordSplitting and because the single quotes inside the variable are literal characters, not syntactic quoting. When $args is expanded, it becomes four words: 'The is the second word, and subject' is the third.
Read Arguments to get a better understanding of how the shell figures out what the arguments in your statement are.
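To see the splitting for yourself, you can expand $args unquoted with printf, which prints each resulting word on its own line (this is purely a diagnostic; the value of address is made up for the demonstration):

address=bob
args="-s 'The subject' $address"
printf '<%s>\n' $args

# Output:
# <-s>
# <'The>
# <subject'>
# <bob>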
So, how do we do this? That all depends on what this is!
There are at least three situations in which people try to shove commands, or command arguments, into variables and then run them. Each case needs to be handled separately.
1. I'm trying to save a command so I can run it later without having to repeat it each time
If you want to put a command in a container for later use, use a function. Variables hold data, functions hold code.
pingMe() {
    ping -q -c1 "$HOSTNAME"
}

[...]

if pingMe; then ..
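A function also accepts arguments just like any other command, so nothing is lost by generalizing it. A minimal sketch (the function name and message are made up for this example):

pingHost() {
    ping -q -c1 "$1"
}

if pingHost "$HOSTNAME"; then
    echo "host is reachable"
fi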
2. I'm constructing a command based on information that is only known at run time
The root of the issue described above is that you need a way to maintain each argument as a separate word, even if that argument contains spaces. Quotes won't do it, but an array will.
Suppose your script wants to send email. You might have places where you want to include a subject, and others where you don't. The part of your script that sends the mail might check a variable named subject to determine whether you need to supply additional arguments to the mail command. A naive programmer may come up with something like this:
# Don't do this.
args=$recipient
if [[ $subject ]]; then
    args+=" -s $subject"
fi
mail $args < $bodyfilename
As we have seen, this approach fails when the subject contains whitespace. It simply is not robust enough.
As such, if you really need to create a command dynamically, put each argument in a separate element of an array, like so:
# Working example, bash 3.1 or higher
args=("$recipient")
if [[ $subject ]]; then
    args+=(-s "$subject")
fi
mail "${args[@]}" < "$bodyfilename"
(See FAQ #5 for more details on array syntax.)
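If you want to convince yourself that each element really stays one word, expand the array with printf before handing it to the real command (the recipient and subject here are hypothetical):

args=(bob@example.com)
subject='Monthly report'
if [[ $subject ]]; then
    args+=(-s "$subject")
fi
printf '<%s>\n' "${args[@]}"

# Output:
# <bob@example.com>
# <-s>
# <Monthly report>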
Often, this question arises when someone is trying to use dialog to construct a menu on the fly. The dialog command can't be hard-coded, because its parameters are supplied based on data only available at run time (e.g. the number of menu entries). For an example of how to do this properly, see FAQ #40.
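The same pattern works when the number of arguments is only known at run time: append to the array in a loop, then expand it once. The following is only a rough sketch of the idea, not the code from FAQ #40 (the menu text and file glob are invented):

# Build a dialog menu from whatever files happen to exist
menu_args=(--menu 'Pick a file' 20 60 10)
i=0
for f in *.txt; do
    menu_args+=("$((++i))" "$f")    # each entry needs a tag and an item
done
dialog "${menu_args[@]}"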
3. I want to generalize a task, in case the low-level tool changes later
You generally do NOT want to put command names or command options in variables. Variables should contain the data you are trying to pass to the command, like usernames, hostnames, ports, text, etc. They should NOT contain options that are specific to one particular command or tool. Those things belong in functions.
In the mail example, we've got hard-coded dependence on the syntax of the Unix mail command -- and in particular, versions of the mail command that permit the subject to be specified after the recipient, which may not always be the case. Someone maintaining the script may decide to fix the syntax so that the recipient appears last, which is the most correct form; or they may replace mail altogether due to internal company mail system changes, etc. Having several calls to mail scattered throughout the script complicates matters in this situation.
What you should probably be doing is this:
# POSIX

# Send an email to someone.
# Reads the body of the mail from standard input.
#
# sendto address [subject]
#
sendto() {
    # unset -v IFS
    # mail ${2:+-s "$2"} "$1"
    MailTool ${2:+--subject="$2"} --recipient="$1"
}

sendto "$address" "The Subject" <"$bodyfile"
Here, the parameter expansion checks if $2 (the optional subject) has expanded to anything. If it has, the expansion adds the -s "$2" to the mail command. If it hasn't, the expansion doesn't add the -s option at all.
The original implementation uses mail(1), a standard Unix command. Later, this is commented out and replaced by something called MailTool, which was made up on the spot for this example. But it should serve to illustrate the concept: the function's invocation is unchanged, even though the back-end tool changes. Also note that the mail(1) example above does rely upon WordSplitting to separate the option argument from the quoted inner parameter expansion. This is a notable exception in which word splitting is acceptable and desirable. It is safe because the statically-coded option doesn't contain any glob characters, and the parameter expansion is quoted to prevent subsequent globbing. You must ensure that IFS is set to a sane value in order to get the expected results.
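If relying on word splitting makes you uncomfortable, a bash version of the same function can collect its arguments in an array instead, removing the IFS concern entirely (this is a sketch of an alternative, not the POSIX example above):

# bash: same interface, no reliance on word splitting
sendto() {
    local args=()
    [[ $2 ]] && args+=(-s "$2")
    mail "${args[@]}" "$1"
}

sendto "$address" "The Subject" <"$bodyfile"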
4. I want a log of my script's actions
Another reason people attempt to stuff commands into variables is because they want their script to print each command before it runs it. If that's all you want, then simply use the set -x command, or invoke your script with #!/bin/bash -x or bash -x ./myscript. Note that you can turn it off and back on inside the script with set +x and set -x.
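For example, to trace only one part of a script:

set -x                              # start printing commands as they run
mysql -u me -p somedbname < file
set +x                              # stop tracing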
It's worth noting that you cannot put a pipeline command into an array variable and then execute it using the "${array[@]}" technique. The only way to store a pipeline in a variable would be to add (carefully!) a layer of quotes if necessary, store it in a string variable, and then use eval or sh to run the variable. This is not recommended, for security reasons. The same thing applies to commands involving redirection, if or while statements, and so on.
Some people get into trouble because they want to have their script print their commands including redirections before it runs them. set -x shows the command without redirections. People try to work around this by doing things like:
# Non-working example
command="mysql -u me -p somedbname < file"
((DEBUG)) && echo "$command"
"$command"
(This is so common that I include it explicitly, even though it's repeating what I already wrote.)
Once again, this does not work. Not even using an array works here. The only thing that would work is rigorously escaping the command to be sure no metacharacters will cause serious security problems, and then using eval or sh to re-read the command. Please don't do that! One way to log the whole command, without resorting to eval or sh, is the DEBUG trap. A practical code example:
trap 'printf %s\\n "$BASH_COMMAND" >&2' DEBUG
This assumes you're logging to standard error.
Note that the representation of redirections in BASH_COMMAND is still affected by this bug. It appears to be partially fixed in git, but not completely. Don't count on it being correct.
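As a slightly fuller sketch, you can point the trap's output at a dedicated log file instead of standard error (the file descriptor and log file name are arbitrary choices):

#!/bin/bash
exec 3>>myscript.log                # open a log file on fd 3
trap 'printf %s\\n "$BASH_COMMAND" >&3' DEBUG

# each command is logged before it runs; as noted above, its
# redirections may not be reproduced faithfully
mysql -u me -p somedbname < file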
If you STILL think you need to write out every command you're about to run before you run it, AND that you must include all redirections, then just do this:
# Working example
echo "mysql -u me -p somedbname < file"
mysql -u me -p somedbname < file
Don't use a variable at all. Just copy and paste the command, wrap an extra layer of quotes around it (sometimes tricky), and stick an echo in front of it.
My personal recommendation would be just to use set -x and not worry about it.