How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Don't try to use "for". Use a while loop and the read command:
The -r option to read prevents backslash interpretation (usually used as a backslash newline pair, to continue over multiple lines or to escape the delimiter). Without this option, any unescaped backslashes in the input will be discarded. You should almost always use the -r option with read.
The most common exception to this rule is when -e is used, which uses Readline to obtain the line from an interactive shell. In that case, tab completion will add backslashes to escape spaces and such, and you do not want them to be literally included in the variable. This would never be used when reading anything line-by-line, though, and -r should always be used when doing so.
In the scenario above IFS= prevents trimming of leading and trailing whitespace. Remove it if you want this effect.
line is a variable name, chosen by you. You can use any valid shell variable name there.
The redirection < "$file" tells the while loop to read from the file whose name is in the variable file. If you would prefer to use a literal pathname instead of a variable, you may do that as well. If your input source is the script's standard input, then you don't need any redirection at all.
If your input source is the contents of a variable/parameter, BASH can iterate over its lines using a "here string":
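For example (assuming the text is in the variable var):

while IFS= read -r line; do
  printf '%s\n' "$line"
done <<< "$var"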
The same can be done in any Bourne-type shell by using a "here document" (although read -r is POSIX, not Bourne):
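A sketch of the here-document form:

while IFS= read -r line; do
  printf '%s\n' "$line"
done <<EOF
$var
EOF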
If avoiding comments starting with # is desired, you can simply skip them inside the loop:
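For instance, a sketch that skips any line whose first character is #:

while IFS= read -r line; do
  [[ $line = '#'* ]] && continue
  printf '%s\n' "$line"
done < "$file"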
If you want to operate on individual fields within each line, you may supply additional variables to read:
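For example (first, second and third are arbitrary variable names):

# Input file has 3 columns separated by white space (space or tab characters only).
while read -r first second third; do
  printf 'third column: %s\n' "$third"
done < "$file"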
If the field delimiters are not whitespace, you can set IFS (internal field separator):
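For example, for colon-delimited data in the style of /etc/passwd (a sketch; the field names are illustrative):

while IFS=: read -r user pass uid gid gecos home shell; do
  printf '%s logs in with %s\n' "$user" "$shell"
done < /etc/passwd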
For tab-delimited files, use IFS=$'\t'.
You do not necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,
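given a read with only two variables (first and rest are arbitrary names):

read -r first rest <<< 'one two three four'
printf '<%s> <%s>\n' "$first" "$rest"    # prints: <one> <two three four>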
Some people use the throwaway variable _ as a "junk variable" to ignore fields. It (or indeed any variable) can also be used more than once in a single read command, if we don't care what goes into it:
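A sketch that keeps only the third field of each line:

while read -r _ _ third _; do
  printf '%s\n' "$third"
done < "$file"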
Note that this usage of _ is only guaranteed to work in Bash. Many other shells use _ for other purposes that will at best cause this to not have the desired effect, and can break the script entirely. It is better to choose a unique variable that isn't used elsewhere in the script, even though _ is a common Bash convention.
The read command modifies each line read; by default it removes all leading and trailing whitespace characters (spaces and tabs, or any whitespace characters present in IFS). If that is not desired, the IFS variable has to be cleared:
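For example, to get each line exactly as it appears in the file:

# Exact lines, no trimming.
while IFS= read -r line; do
  printf '%s\n' "$line"
done < "$file"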
One may also read from a command instead of a regular file:
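For example (some_command is a placeholder for whatever produces the lines):

some_command | while IFS= read -r line; do
  printf '%s\n' "$line"
done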
This method is especially useful for processing the output of find with a block of commands:
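A sketch of that pattern (the find criteria are only an example, and it assumes the directory part of each pathname contains no spaces):

find . -type f -name '* *' -print0 | while IFS= read -r -d '' file; do
  mv -- "$file" "${file// /_}"
done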
This reads one filename at a time from the find command and renames the file, replacing spaces with underscores.
Note the usage of -print0 in the find command, which uses NUL bytes as filename delimiters; and -d '' in the read command to instruct it to read all text into the file variable until it finds a NUL byte. By default, find and read delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour will split up those filenames at the newlines and cause the loop body to fail. Additionally it is necessary to set IFS to an empty string, because otherwise read would still strip leading and trailing whitespace. See FAQ #20 for more details.
Using a pipe to send find's output into a while loop places the loop in a SubShell and may therefore cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used after the loop; in that case, see FAQ 24, or use a ProcessSubstitution like:
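A sketch of the same rename loop using a ProcessSubstitution (Bash):

while IFS= read -r -d '' file; do
  mv -- "$file" "${file// /_}"
done < <(find . -type f -name '* *' -print0)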
If you want to read lines from a file into an array, see FAQ 5.
My text files are broken! They lack their final newlines!
If there are some characters after the last line in the file (or to put it differently, if the last line is not terminated by a newline character), then read will read it but return false, leaving the broken partial line in the read variable(s). You can process this after the loop:
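For example, a loop that emulates cat and then checks for an unterminated final line (a sketch):

while IFS= read -r line; do
  printf '%s\n' "$line"
done < "$file"
[[ -n $line ]] && printf '%s' "$line"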
Or:
# This does not work:
printf 'line 1\ntruncated line 2' | while read -r line; do echo $line; done

# This does not work either:
printf 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done; [[ $line ]] && echo -n "$line"

# This works:
printf 'line 1\ntruncated line 2' | { while read -r line; do echo "$line"; done; [[ $line ]] && echo "$line"; }
The first example, beyond missing the after-loop test, is also missing quotes. See Quotes or Arguments for an explanation why. The Arguments page is an especially important read.
For a discussion of why the second example above does not work as expected, see FAQ #24.
Alternatively, you can simply add a logical OR to the while test:
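For example (a sketch):

while IFS= read -r line || [[ -n $line ]]; do
  printf '%s\n' "$line"
done < "$file"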
How to keep other commands from "eating" the input
Some commands greedily eat up all available data on standard input. The examples above do not take precautions against such programs. For example, a loop whose body runs cat, as in the sketch below, will only print the contents of the first line, with the remaining contents going to "ignoredfile", as cat slurps up all available input.
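A minimal illustration (ignoredfile is just a scratch file name):

while read -r line; do
  cat > ignoredfile
  printf '%s\n' "$line"
done < "$file"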
One workaround is to use a numeric FileDescriptor rather than standard input:
# Bash
while IFS= read -r -u 9 line; do
  cat > ignoredfile
  printf '%s\n' "$line"
done 9< "$file"

# Note that read -u is not portable to every shell. Use a redirect to ensure it works in any POSIX compliant shell:
while IFS= read -r line <&9; do
  cat > ignoredfile
  printf '%s\n' "$line"
done 9< "$file"
Or:
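One portable variant opens the descriptor with exec instead (a sketch):

exec 9< "$file"
while IFS= read -r line <&9; do
  cat > ignoredfile
  printf '%s\n' "$line"
done
exec 9<&-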
This example will wait for the user to type something into the file ignoredfile at each iteration instead of eating up the loop input.
You might need this, for example, with mencoder which will accept user input if there is any, but will continue silently if there isn't. Other commands that act this way include ssh and ffmpeg. Additional workarounds for this are discussed in FAQ #89.