How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?

Use a while loop and the read command:

    while read -r line
    do
        echo "$line"
    done < "$file"

The -r option to read prevents backslash interpretation (a backslash-newline pair is usually used to continue a single logical line over multiple physical lines). Without this option, any backslashes in the input will be discarded. You should always use the -r option with read.
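
As a minimal illustration of the difference (the input string foo\bar is just a made-up example):

    $ printf 'foo\\bar\n' | { read line; echo "$line"; }     # without -r: backslash removed
    foobar
    $ printf 'foo\\bar\n' | { read -r line; echo "$line"; }  # with -r: backslash preserved
    foo\bar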

line is a variable name, chosen by you. You can use any valid shell variable name there.

If your input source is the script's standard input, then you don't need any redirection at all.
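
A minimal sketch, assuming the loop is the whole body of a script (the script name myscript below is made up):

    # inside myscript: no redirection -- the data arrives on standard input
    while read -r line; do
        echo "$line"
    done
    # the caller supplies the input, e.g.  ./myscript < "$file"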

If your input source is the contents of a variable/parameter, BASH can iterate over its lines using a "here string":

    while read -r line; do
        echo "$line"
    done <<< "$var"

The same can be done in any Bourne-type shell by using a "here document":

    while read -r line; do
        echo "$line"
    done <<EOF
$var
EOF

If you want to operate on individual fields within each line, you may supply additional variables to read:

    # Input file has 3 columns separated by white space.
    while read -r first_name last_name phone; do
      ...
    done < "$file"

If the field delimiters are not whitespace, you can set IFS (input field separator):

    while IFS=: read -r user pass uid gid gecos home shell; do
      ...
    done < /etc/passwd

For TAB-delimited files, use IFS=$'\t'.
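
For example, a sketch for a file with three TAB-separated columns (the variable names here are invented):

    while IFS=$'\t' read -r name size date; do
      ...
    done < "$file"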

Also, please note that you do not necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

    while read -r first_name last_name junk; do
      ...
    done <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'

    # Inside the loop, first_name will contain "Bob", and
    # last_name will contain "Smith".  The variable "junk" holds
    # everything else.

Some people use the throwaway variable _ as a "junk variable" to ignore fields. It (or indeed any variable) can also be used more than once in a single read command, if we don't care what goes into it:

    read -r _ _ first middle last _ <<< "$record"

    # Remember, the final _ can absorb any number of fields.
    # It doesn't need to be repeated there.

The read command modifies each line read, e.g. by default it removes all leading and trailing whitespace characters (spaces, tab characters -- any whitespace characters present in IFS). If that is not desired, the IFS variable has to be cleared:

    while IFS= read -r line
    do
        echo "$line"
    done < "$file"

One may also read from a command instead of a regular file:

    some command | while read -r line; do
       other commands
    done

This method is especially useful for processing the output of find with a block of commands:

    find . -print0 | while IFS= read -r -d $'\0' file; do
        mv "$file" "${file// /_}"
    done

This reads one filename at a time from the find command and renames the file, replacing spaces with underscores.

Note the usage of -print0 in the find command, which uses NUL bytes as filename delimiters, and -d $'\0' in the read command, which instructs it to read all text into the file variable until it finds a NUL byte. By default, find and read delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour would split such filenames at the newlines and cause the loop body to fail. Additionally, it is necessary to clear IFS (the IFS= prefix on read), because otherwise read would strip leading and trailing whitespace from the filenames. See FAQ #20 for more details.

Using a pipe to send find's output into a while loop places the loop in a SubShell and may therefore cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used outside the loop; in that case, see FAQ 24, or use ProcessSubstitution like:

    while read -r line; do
        other commands
    done < <(some command)

If you want to read a file into an array, see FAQ 5.
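
In bash 4.0 or later, a minimal sketch of that uses the mapfile builtin (also called readarray); the array name lines is arbitrary:

    # -t strips the trailing newline from each stored line
    mapfile -t lines < "$file"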

If the input source lacks a trailing newline (such as /proc/$$/cmdline on Linux), the final partial line will not be processed by a while read ... loop: read returns a failure status, which terminates the loop before the body runs for that line. It does, however, store the contents of the partial line in the variable, so you can test whether there was such an unterminated line by checking whether the variable is non-empty after the loop:

    # This does not work:
    printf 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done

    # This does not work either:
    printf 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done; [[ $line ]] && echo -n "$line"

    # This works:
    printf 'line 1\ntruncated line 2' | (while read -r line; do echo "$line"; done; [[ $line ]] && echo "$line")

For a discussion of why the second example above does not work as expected, see FAQ #24.
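
Another common workaround, not used in the examples above but a standard bash idiom, is to keep the loop body running as long as read either succeeds or has left a non-empty partial line in the variable:

    printf 'line 1\ntruncated line 2' | while read -r line || [[ -n $line ]]; do
        echo "$line"
    done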

Why you don't use "for" for this

Using for instead of while read ... is generally less efficient and suffers a number of unexpected side-effects.

    $ cat afile
    ef gh
    *

    $ while read -r i ; do echo "$i" ; done < afile
    ef gh
    *

    $ for i in $(<afile) ; do echo "$i" ; done
    ef
    gh
    afile
    # the glob was expanded, and it looped per word.

    # workaround:
    $ oIFS=$IFS IFS=$'\n'; set -f; for i in $(<afile); do echo "$i"; done; IFS=$oIFS; set +f
    ef gh
    *

Notice that the syntax required to get this "right" is more verbose. All in all, (ab)using for this way is more dangerous and less intuitive (you don't get what you would expect from a normal for!), and it is not any more useful.

Also, as discussed in FAQ #5, the use of IFS=$'\n' (or any other "whitespace" in IFS) causes the shell to consolidate all consecutive instances of the whitespace delimiter into one. In other words, it skips over blank lines. Did you see the blank line in the input file, and how it was missing in the output? It's easier to spot if it's not at the end:

    ~$ cat foo
    line one

    line three
    ~$ IFS=$'\n'; set -f; for line in $(<foo); do echo "$line"; done; unset IFS; set +f
    line one
    line three
    ~$ 

All that setting and unsetting, and we still couldn't even mimic a simple cat. If preservation of blank lines is important, just go back to using while read.


CategoryShell
