How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?

Don't try to use "for". Use a while loop and the read command:

    while read -r line
    do
        echo "$line"
    done < "$file"

The -r option to read prevents backslash interpretation: without it, a backslash-newline pair in the input is treated as a line continuation, and other backslashes act as escape characters and are removed from the input. You should always use the -r option with read.
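
A quick way to see what -r changes (a minimal sketch; the sample string is arbitrary):

    read -r line <<< 'C:\temp\new'; printf '%s\n' "$line"    # C:\temp\new
    read    line <<< 'C:\temp\new'; printf '%s\n' "$line"    # C:tempnew  -- the backslashes were eaten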

line is a variable name, chosen by you. You can use any valid shell variable name there.

The redirection < "$file" tells the while loop to read from the file whose name is in the variable file. If you would prefer to use a literal pathname instead of a variable, you may do that as well. If your input source is the script's standard input, then you don't need any redirection at all.
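
For instance (a small sketch; /etc/hosts is just a stand-in for any readable file):

    # Literal pathname instead of a variable:
    while read -r line; do
        echo "$line"
    done < /etc/hosts

    # No redirection at all -- the loop reads the script's own standard input:
    while read -r line; do
        echo "$line"
    done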

If your input source is the contents of a variable/parameter, BASH can iterate over its lines using a "here string":

    while read -r line; do
        echo "$line"
    done <<< "$var"

The same can be done in any Bourne-type shell by using a "here document" (although read -r is POSIX, not Bourne):

    while read -r line; do
        echo "$line"
    done <<EOF
$var
EOF

If you want to ignore comment lines (lines starting with #), you can simply skip them inside the loop:

    # Bash
    while read -r line
    do
        [[ $line = \#* ]] && continue
        echo "$line"
    done < "$file"

If you want to operate on individual fields within each line, you may supply additional variables to read:

    # Input file has 3 columns separated by white space.
    while read -r first_name last_name phone; do
      ...
    done < "$file"

If the field delimiters are not whitespace, you can set IFS (internal field separator):

    while IFS=: read -r user pass uid gid gecos home shell; do
      ...
    done < /etc/passwd

For tab-delimited files, use IFS=$'\t'.
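
For example, a minimal sketch for a hypothetical two-column file with a name and an email address separated by a tab:

    while IFS=$'\t' read -r name email; do
        printf '%s <%s>\n' "$name" "$email"
    done < "$file"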

You do not necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

    read -r first last junk <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'

    # first will contain "Bob", and last will contain "Smith".
    # junk holds everything else.

Some people use the throwaway variable _ as a "junk variable" to ignore fields. It (or indeed any variable) can also be used more than once in a single read command, if we don't care what goes into it:

    read -r _ _ first middle last _ <<< "$record"

    # We skip the first two fields, then read the next three.
    # Remember, the final _ can absorb any number of fields.
    # It doesn't need to be repeated there.

The read command modifies each line read; by default it removes all leading and trailing whitespace characters (spaces and tabs, or any whitespace characters present in IFS). If that is not desired, the IFS variable has to be cleared:

    # Exact lines, no trimming
    while IFS= read -r line
    do
        printf '%s\n' "$line"
    done < "$file"
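
A quick demonstration of the difference (the brackets show whether the leading spaces survive):

    read -r      line <<< '   indented'; printf '[%s]\n' "$line"    # [indented]
    IFS= read -r line <<< '   indented'; printf '[%s]\n' "$line"    # [   indented]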

One may also read from a command instead of a regular file:

    some command | while read -r line; do
       other commands
    done

This method is especially useful for processing the output of find with a block of commands:

    find . -type f -print0 | while IFS= read -r -d '' file; do
        mv "$file" "${file// /_}"
    done

This reads one filename at a time from the find command and renames the file, replacing spaces with underscores.

Note the usage of -print0 in the find command, which uses NUL bytes as filename delimiters; and -d '' in the read command to instruct it to read all text into the file variable until it finds a NUL byte. By default, find and read delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour will split up those filenames at the newlines and cause the loop body to fail. Additionally it is necessary to set IFS to an empty string, because otherwise read would still strip leading and trailing whitespace. See FAQ #20 for more details.

Using a pipe to send find's output into a while loop places the loop in a SubShell and may therefore cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used after the loop; in that case, see FAQ 24, or use a ProcessSubstitution like:

    while read -r line; do
        other commands
    done < <(some command)

If you want to read lines from a file into an array, see FAQ 5.
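
As a quick preview of what FAQ 5 covers: in bash 4 and later, the mapfile builtin (also spelled readarray) reads a file into an array in one step, one line per element:

    mapfile -t lines < "$file"    # -t strips the trailing newline from each line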

My text files are broken! They lack their final newlines!

If there are some characters after the last line in the file (or to put it differently, if the last line is not terminated by a newline character), then read will read it but return false, leaving the broken partial line in the read variable(s). You can process this after the loop:

    # Emulate cat
    while IFS= read -r line
    do
        printf '%s\n' "$line"
    done < "$file"
    [ -n "$line" ] && printf %s "$line"

Or:

    # This does not work:
    printf 'line 1\ntruncated line 2' | while read -r line; do echo $line; done

    # This does not work either:
    printf 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done; [[ $line ]] && echo -n "$line"

    # This works:
    printf 'line 1\ntruncated line 2' | (while read -r line; do echo "$line"; done; [[ $line ]] && echo "$line")

For a discussion of why the second example above does not work as expected, see FAQ #24.

How to keep other commands from "eating" the input

Some commands greedily eat up all available data on standard input. The examples above do not take precautions against such programs. For example,

    while read -r line
    do
        cat > ignoredfile
        echo "$line"
    done < "$file"

will only print the contents of the first line, with the remaining contents going to "ignoredfile", as cat slurps up all available input.

One workaround is to use a numeric FileDescriptor rather than standard input:

    # Bash
    while read -r -u9 line
    do
        cat > ignoredfile
        echo "$line"
    done 9< "$file"

Or:

    # Bourne
    exec 9< "$file"
    while read line <&9
    do
      ...
    done
    exec 9<&-

With either workaround, cat now reads from the script's ordinary standard input (for example the terminal, waiting for the user to type something, which it writes to ignoredfile) at each iteration, instead of eating up the loop's input.

You might need this, for example, with mencoder, which will accept user input if there is any but will continue silently if there isn't. Other commands that act this way include ssh and ffmpeg. Additional workarounds for this are discussed in FAQ #89.
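
Another common workaround (a sketch; hostfile is a hypothetical file listing one hostname per line) is to point the greedy command's standard input somewhere else entirely. With ssh, the -n option does exactly that by redirecting its stdin from /dev/null:

    while read -r host; do
        ssh -n "$host" uptime    # -n keeps ssh from consuming the loop's input
    done < "$hostfile"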


