
How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?

Use a while loop and the read command:

    while read -r line
    do
        echo "$line"
    done < "$file"

The -r option to read prevents backslash interpretation (most commonly encountered as a backslash-newline pair used to continue a line). Without this option, any backslashes in the input will be discarded. You should always use the -r option with read.
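To see what -r actually changes, compare the two on input whose first line ends with a backslash (the sample data here is made up for the demonstration):

```shell
# Input: "foo \" followed by "bar" on the next line.
# Without -r, read treats the backslash-newline pair as a line
# continuation and joins the two physical lines into one.
printf 'foo \\\nbar\n' | while read line; do
    echo "got: $line"
done
# prints: got: foo bar

# With -r, the backslash is kept literally and each physical
# line is read separately.
printf 'foo \\\nbar\n' | while read -r line; do
    echo "got: $line"
done
# prints: got: foo \
#         got: bar
```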

BASH can also iterate over the lines in a variable using a "here string":

    while read -r line; do
        echo "$line"
    done <<< "$var"

If your data source is the script's standard input, then you don't need any redirection at all.
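For example, a filter script can process its own standard input directly; this hypothetical numbering filter just prefixes each line with a counter:

```shell
#!/bin/bash
# Reads the script's standard input line by line -- no redirection needed.
n=0
while IFS= read -r line; do
    n=$((n + 1))
    printf '%d: %s\n' "$n" "$line"
done
```

Feeding it `printf 'a\nb\n'` on standard input prints `1: a` and `2: b`.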

The same can be done in any Bourne-type shell by using a here document:

    while read -r line; do
        echo "$line"
    done <<EOF
$var
EOF

If you want to operate on individual fields within each line, you may supply additional variables to read:

    # Input file has 3 columns separated by white space.
    while read -r first_name last_name phone; do
      ...
    done < "$file"

Note that you can still get the whole line, unmodified, in $REPLY -- but only when you supply no variable names to read at all.
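A minimal sketch (note that Bash only sets REPLY when read is given no variable names):

```shell
# With no variable names, read -r stores the whole line in REPLY,
# unmodified -- leading and trailing whitespace are preserved.
printf '  hello  world  \n' | while read -r; do
    echo "[$REPLY]"
done
# prints: [  hello  world  ]
```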

If the field delimiters are not whitespace, you can set IFS (input field separator):

    while IFS=: read -r user pass uid gid gecos home shell; do
      ...
    done < /etc/passwd

For TAB delimited files, use IFS=$'\t'.
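For instance, parsing a made-up tab-separated inventory; with IFS=$'\t', fields may contain spaces:

```shell
# Fields are separated by single TAB characters; spaces inside a
# field are preserved because IFS contains only a tab.
printf 'red apple\t3\ngreen pear\t7\n' |
while IFS=$'\t' read -r name count; do
    echo "$count x $name"
done
# prints: 3 x red apple
#         7 x green pear
```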

Also, please note that you do not necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

    while read -r first_name last_name junk; do
      ...
    done <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'
    # Inside the loop, first_name will contain "Bob", and
    # last_name will contain "Smith".  The variable "junk" holds
    # everything else.

The read command modifies each line read; by default it removes leading and trailing whitespace characters (blanks and tabs -- any leading or trailing characters present in IFS). If that is not desired, the IFS variable must be cleared:

    while IFS= read -r line
    do
        echo "$line"
    done < "$file"

Note that reading a file line by line this way is very slow for large files. Consider using e.g. AWK instead if performance becomes a problem.

One may also read from a command instead of a regular file:

    some command | while read -r line; do
       other commands
    done

This method is especially useful for processing the output of find with a block of commands:

    find . -print0 | while IFS= read -r -d $'\0' file; do
        mv "$file" "${file// /_}"
    done

This command reads one filename at a time from the find command and renames the file so that its spaces are replaced by underscores.

Note the usage of -print0 in the find command, which uses NUL bytes as filename delimiters, and -d $'\0' in the read command, which tells it to read everything into the file variable up to the next NUL byte. By default, find and read delimit their input with newlines; however, since filenames can themselves contain newlines, this default behaviour would split such filenames apart and cause the loop body to fail. Additionally, it is necessary to clear IFS (with IFS=), because otherwise read would strip leading and trailing whitespace from each filename. See FAQ #20 for more details.

Using a pipe to send find's output into a while loop places the loop in a SubShell and may therefore cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used outside the loop; in that case, see FAQ 24, or use ProcessSubstitution like:

    while read -r line; do
        other commands
    done < <(some command)

Sometimes it's useful to read a file into an array, one array element per line. You can do that with the following example:

    IFS=$'\n' read -r -d $'\0' -a myarray < myfile

The -d $'\0' option tells read not to stop at the first newline, but to continue reading until the end of the file (or until it sees a NUL byte, which shouldn't appear in text files). Setting the Internal Field Separator to a newline (for the read command only) makes each line a separate field, and the -a option populates the array myarray with those fields as its elements.

This same trick works on a stream of data as well as a file:

    IFS=$'\n' read -r -d $'\0' -a myarray < <(find . -type f)

Of course, this will blow up in your face if the filenames contain newlines; see FAQ 20 for hints on dealing with such filenames.

Since bash treats sequences of IFS whitespace as a single delimiter, any empty lines in the input (that is, two or more consecutive newline characters) will be lost. So, for example:

    $ cat myfile
    line1

    line2
    line3
    $ IFS=$'\n' read -r -d $'\0' -a myarray < myfile
    $ declare -p myarray
    declare -a myarray='([0]="line1" [1]="line2" [2]="line3")'

You can deal with this issue by reading the file into an array using a loop:

    i=0
    while IFS= read -r 'arr[i++]'; do :; done < "$file"
    # or  <<< "$var"  to iterate over a variable
    # (the quotes prevent arr[i++] from being treated as a glob)

On the other hand, if the file lacks a trailing newline (such as /proc/$$/cmdline on Linux), the final line will not be printed by a while read ... loop: when read hits end-of-file it returns failure, which terminates the loop before that line's echo runs. It does, however, store the contents of the partial line in the variable, so you can test whether there was such an unterminated line by checking whether the variable is non-empty after the loop:

    # This does not work:
    echo -en 'line 1\ntruncated line 2' | while read -r line; do echo $line; done

    # This does not work either:
    echo -en 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done; [[ $line ]] && echo -n "$line"

    # This works:
    echo -en 'line 1\ntruncated line 2' | (while read -r line; do echo "$line"; done; [[ $line ]] && echo "$line")

For a discussion of why the second example above does not work as expected, see FAQ #24.

Using for instead of while read

Using for instead of while read ... is generally less efficient and suffers from a number of unexpected side-effects.

    $ cat afile
    ef gh
    *

    $ while read i ; do echo "$i" ; done < afile
    ef gh
    *

    $ for i in $(<afile) ; do echo "$i" ; done
    ef
    gh
    afile
    # the glob was expanded, and it looped per word.

    # workaround:
    $ oIFS=$IFS IFS=$'\n'; set -f; for i in $(<afile); do echo "$i"; done; IFS=$oIFS; set +f
    ef gh
    *

Notice that the syntax to get this right is more verbose. All in all, (ab)using for this way is more dangerous and less intuitive (you don't get what you expect out of a normal for!), and no more useful.

Also, as discussed in FAQ #5, the use of IFS=$'\n' (or any other "whitespace" in IFS) causes the shell to consolidate all consecutive instances of the whitespace delimiter into one. In other words, it skips over blank lines. Thus,

    ~$ cat foo
    line one

    line three
    ~$ IFS=$'\n'; set -f; for line in $(<foo); do echo "$line"; done; unset IFS; set +f
    line one
    line three
    ~$ 

If preservation of blank lines is important, just go back to using while read.
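To confirm, the same input fed through a while read loop keeps the blank line that the for loop above dropped:

```shell
# The blank line survives; read assigns an empty string to line.
printf 'line one\n\nline three\n' | while IFS= read -r line; do
    echo "[$line]"
done
# prints: [line one]
#         []
#         [line three]
```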


CategoryShell

BashFAQ/001 (last edited 2023-06-28 01:53:29 by larryv)