Differences between revisions 26 and 73 (spanning 47 versions)
Revision 26 as of 2009-06-11 17:33:20
Size: 6556
Editor: localhost
Comment: typo
Revision 73 as of 2024-11-06 05:31:01
Size: 10772
Comment:
Deletions are marked like this. Additions are marked like this.
Line 2: Line 2:
== How can I read a file (data stream, variable) line-by-line? ==
Use a `while` loop and the `read` command:

{{{
    while read -r line
    do
        echo "$line"
    done < "$file"
}}}

The `-r` option to read prevents backslash interpretation (usually used as a backslash newline pair, to continue over multiple lines). Without this option, any backslashes in the input will be discarded. You should always use the `-r` option with read.

[[BASH]] can also iterate over the lines in a variable using a "here string":

{{{
    while read -r line; do
        echo "$line"
    done <<< "$var"
}}}

If your data source is the script's standard input, then you don't need any redirection at all.
== How can I read a file (data stream, variable) line-by-line (and/or field-by-field)? ==
[[DontReadLinesWithFor|Don't try to use "for"]]. Use a `while` loop and the `read` command. Here is the basic template; there are many variations to discuss:

{{{#!highlight bash
while IFS= read -r line; do
  printf '%s\n' "$line"
done < "$file"
}}}

`line` is a variable name, chosen by you. You can use any valid shell variable name(s) there; see [[#trimming|field splitting]] below.

`< "$file"` redirects the loop's input from a file whose name is stored in a variable; see [[#source|source selection]] below.

If you want to read lines from a file into an [[BashFAQ/005|array]], see [[BashFAQ/005|FAQ 5]].

<<TableOfContents>>

<<Anchor(trimming)>>
=== Field splitting, whitespace trimming, and other input processing ===

The `-r` option to `read` prevents backslash interpretation (usually used as a backslash newline pair, to continue over multiple lines or to escape the delimiters). Without this option, any unescaped backslashes in the input will be discarded. You should almost always use the `-r` option with `read`.

The most common exception to this rule is when `-e` is used, which uses Readline to obtain the line from an interactive shell. In that case, tab completion will add backslashes to escape spaces and such, and you do not want them to be literally included in the variable. This would never be used when reading anything line-by-line, though, and `-r` should always be used when doing so.
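
For example, here is a minimal sketch of such an interactive prompt (the prompt text and the variable name `fname` are made up for illustration):

{{{#!highlight bash
# Interactive use only: -e hands the line to Readline, so tab completion
# may insert backslash escapes (e.g. "My\ File") that we want interpreted
# rather than stored literally, hence no -r here.
read -e -p 'Enter a filename: ' fname
printf 'You typed: %s\n' "$fname"
}}}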

By default, `read` modifies each line read, by [[BashFAQ/067|removing all leading and trailing whitespace]] characters (spaces and tabs, if present in [[IFS]]). If that is not desired, the `IFS` variable may be cleared, as in the example above. If you want the trimming, leave `IFS` alone:

{{{#!highlight bash
# Leading/trailing whitespace trimming.
while read -r line; do
  printf '%s\n' "$line"
done < "$file"
}}}
Line 26: Line 37:
{{{
    # Input file has 3 columns separated by white space.
    while read -r first_name last_name phone; do
      ...
    done < "$file"
}}}

If the field delimiters are not whitespace, you can set {{{IFS}}} (input field separator):

{{{
    while IFS=: read -r user pass uid gid gecos home shell; do
      ...
    done < /etc/passwd
}}}

For TAB delimited files, use IFS=$'\t'.

Also, please note that you do ''not'' necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

{{{
    while read -r first_name last_name junk; do
      ...
    done <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'
    # Inside the loop, first_name will contain "Bob", and
    # last_name will contain "Smith". The variable "junk" holds
    # everything else.
}}}

The {{{read}}} command modifies each line read, e.g. by default it removes all leading whitespace characters (blanks, tab characters, ... -- basically any leading characters present in IFS). If that is not desired, the {{{IFS}}} variable has to be cleared:

{{{
    while IFS= read -r line
    do
        echo "$line"
    done < "$file"
}}}

'''Note that reading a file line by line this way is ''very slow'' for large files. Consider using e.g. [[AWK]] instead if you get performance problems.'''
{{{#!highlight bash
# Input file has 3 columns separated by white space (space or tab characters only).
while read -r first_name last_name phone; do
  # Only print the last name (second column)
  printf '%s\n' "$last_name"
done < "$file"
}}}

If the field delimiters are not whitespace, you can set [[IFS|IFS (internal field separator)]]:

{{{#!highlight bash
# Extract the username and its shell from /etc/passwd:
while IFS=: read -r user pass uid gid gecos home shell; do
  printf '%s: %s\n' "$user" "$shell"
done < /etc/passwd
}}}
For tab-delimited files, use [[Quotes|IFS=$'\t']], though beware that multiple consecutive tab characters in the input will be treated as '''one''' delimiter (and the Ksh93/Zsh `IFS=$'\t\t'` workaround won't work in Bash).
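
For instance, a minimal sketch of reading a three-column, tab-delimited file (the field names and the `data.tsv` filename are only for illustration):

{{{#!highlight bash
# Because tab is IFS whitespace, runs of consecutive tabs collapse into a
# single delimiter, so empty fields in the middle of a line will be lost.
while IFS=$'\t' read -r id name email; do
  printf '%s <%s>\n' "$name" "$email"
done < data.tsv
}}}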

You do ''not'' necessarily need to know how many fields each line of input contains. If you supply more variables than there are fields, the extra variables will be empty. If you supply fewer, the last variable gets "all the rest" of the fields after the preceding ones are satisfied. For example,

{{{#!highlight bash
# Bash
read -r first last junk <<< 'Bob Smith 123 Main Street Elk Grove Iowa 123-555-6789'

# first will contain "Bob", and last will contain "Smith".
# junk holds everything else.
}}}

Some people use the throwaway variable `_` as a "junk variable" to ignore fields. It (or indeed any variable) can also be used more than once in a single `read` command, if we don't care what goes into it:

{{{#!highlight bash
# Bash
read -r _ _ first middle last _ <<< "$record"

# We skip the first two fields, then read the next three.
# Remember, the final _ can absorb any number of fields.
# It doesn't need to be repeated there.
}}}

Note that this usage of `_` is only guaranteed to work in Bash. Many other shells use `_` for other purposes that will at best cause this to not have the desired effect, and can break the script entirely. It is better to choose a unique variable that isn't used elsewhere in the script, even though `_` is a common Bash convention.
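For example, a sketch of the same field-skipping with an ordinary named variable instead of `_` (the name `junk` is arbitrary):

{{{#!highlight bash
# Bash (because of the here string); the repeated throwaway variable is
# simply overwritten by each skipped field and absorbs the rest at the end.
read -r junk junk first middle last junk <<< "$record"
}}}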

If you want to skip comments starting with `#`, you can simply test for them inside the loop:
{{{#!highlight bash
# Bash
while read -r line; do
  [[ $line = \#* ]] && continue
  printf '%s\n' "$line"
done < "$file"
}}}

Above, `read` removes leading and trailing spaces or tabs (assuming `IFS` hasn't been modified from its default value), so we just need to look for a `#` at the start of the (now trimmed) line. To preserve the spacing:

{{{#!highlight bash
# Bash
while IFS= read -r line; do
  [[ $line = *([[:blank:]])\#* ]] && continue
  printf '%s\n' "$line"
done < "$file"
}}}

In older versions of Bash, you'd need `shopt -s extglob` for the `*(...)` extended ksh glob operator to be available. In newer versions, extended globs are always available to the `=`/`==`/`!=` pattern-matching operators of the `[[ ... ]]` construct.
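
For those older versions, a sketch of the same loop with the option enabled first:

{{{#!highlight bash
# Bash
shopt -s extglob   # enables *(...), +(...), etc. in older Bash versions
while IFS= read -r line; do
  [[ $line = *([[:blank:]])\#* ]] && continue
  printf '%s\n' "$line"
done < "$file"
}}}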

<<Anchor(source)>>
=== Input source selection ===

The [[BashGuide/InputAndOutput#Redirection|redirection]] `< "$file"` tells the `while` loop to read from the file whose name is in the variable `file`. If you would prefer to use a literal pathname instead of a variable, you may do that as well. If your input source is the script's standard input, then you don't need any redirection at all.
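
A couple of minimal sketches of those two cases (`/etc/hosts` is just an arbitrary readable file used for illustration):

{{{#!highlight bash
# Redirecting from a literal pathname:
while IFS= read -r line; do
  printf '%s\n' "$line"
done < /etc/hosts

# Reading the script's own standard input (no redirection needed):
while IFS= read -r line; do
  printf '%s\n' "$line"
done
}}}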

If your input source is the contents of a variable/parameter, bash can iterate over its lines using a [[BashGuide/InputAndOutput#Heredocs_And_Herestrings|here string]]:

{{{#!highlight bash
while IFS= read -r line; do
  printf '%s\n' "$line"
done <<< "$var"
}}}

The same can be done in any Bourne-type shell by using a "here document" (although `read -r` is POSIX, not Bourne):

{{{#!highlight bash
while IFS= read -r line; do
  printf '%s\n' "$line"
done <<EOF
$var
EOF
}}}
Line 68: Line 124:
{{{
    some command | while read -r line; do
       other commands
    done
}}}

This method is especially useful for processing the output of `find` with a block of commands:

{{{
    find . -print0 | while read -r -d $'\0' file; do
        mv "$file" "${file// /_}"
    done
}}}

This command reads one filename at a time from the find command and renames the file so that its spaces are replaced by underscores.

Note the usage of {{{-print0}}} in the find command, which uses NUL bytes as filename delimiters, and {{{-d $'\0'}}} in the read command to instruct it to read all text into the file variable until it finds a NUL byte. By default, `find` and `read` delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour will split up those filenames with newlines and cause the loop body to fail. See [[BashFAQ/020|FAQ #20]] for more details.

Using a pipe to send find's output into a while loop places the loop in a SubShell and may therefore cause problems later on if the commands inside the body of the loop attempt to set variables which need to be used outside the loop; in that case, see [[BashFAQ/024|FAQ 24]], or use ProcessSubstitution like:

{{{
    while read -r line; do
        other commands
    done < <(some command)
}}}

Sometimes it's useful to read a file into an [[BashFAQ/005|array]], one array element per line. You can do that with the following example:

{{{
    oIFS=$IFS IFS=$'\n' arr=($(< myfile)) IFS=$oIFS
    # Warning: breaks if lines contain "*" or similar
}}}

This temporarily changes the Input Field Separator to a newline, so that each line will be considered one field by read. Then it populates the array {{{arr}}} with the fields. Then it sets the {{{IFS}}} back to what it was before.

This same trick works on a stream of data as well as a file:

{{{
    oIFS=$IFS IFS=$'\n' arr=($(find . -type f)) IFS=$oIFS
    # Same warning as the previous example
}}}

Of course, this will blow up in your face if the filenames contain newlines; see [[BashFAQ/020|FAQ 20]] for hints on dealing with such filenames.

Both of these array-stuffing examples fail if the shell encounters a [[glob]] that matches files in the current directory as one of the input lines. Glob expansion can be disabled with `set -f` and then re-enabled afterward with `set +f` if needed. For more details on arrays, see [[BashFAQ/005|FAQ 5]]. Moreover, since bash will treat sequences of IFS whitespace as a single delimiter, if the input has empty lines (meaning that groups of two or more consecutive \n characters appear in the file), they will be lost. So, for example:

{{{
    $ cat myfile
    line1
    
    line2
    line3
    $ oIFS=$IFS IFS=$'\n' arr=($(< myfile)) IFS=$oIFS
    $ declare -p arr
    declare -a arr='([0]="line1" [1]="line2" [2]="line3")'
}}}

In the end, the safest way to read a file into an array is still to use a loop:

{{{
    i=0
    while IFS= read -r arr[i++]; do :;done < "$file"
    # or <<< "$var" to iterate over a variable
}}}

On the other hand, if the file lacks a trailing newline (such as {{{/proc/$$/cmdline}}} on Linux), the final line will not be printed by a {{{while read ...}}} loop, because {{{read}}} returns a failure that aborts the loop before that line is printed. It does, however, store the contents of the partial line in the variable, so you can test whether there was such an unterminated line by checking whether the variable is non-empty at the end of the loop:

{{{
    # This does not work:
    echo -en 'line 1\ntruncated line 2' | while read -r line; do echo $line; done

    # This does not work either:
    echo -en 'line 1\ntruncated line 2' | while read -r line; do echo "$line"; done; [[ $line ]] && echo -n "$line"

    # This works:
    echo -en 'line 1\ntruncated line 2' | (while read -r line; do echo "$line"; done; [[ $line ]] && echo "$line")
}}}
{{{#!highlight bash
some command | while IFS= read -r line; do
  printf '%s\n' "$line"
done
}}}

This method is especially useful for processing the output of [[UsingFind|find]] with a block of commands:

{{{#!highlight bash
find . -type f -print0 | while IFS= read -r -d '' file; do
    dir=${file%/*} base=${file##*/}
    mv "$file" "$dir/${base// /_}"
done
}}}

This reads one filename at a time from the `find` command and [[BashFAQ/030|renames the file]], replacing spaces with underscores in its base name.

Note the usage of `-print0` in the `find` command, which uses NUL bytes as filename delimiters; and {{{-d ''}}} in the `read` command to instruct it to read all text into the `file` variable until it finds a NUL byte. By default, `find` and `read` delimit their input with newlines; however, since filenames can potentially contain newlines themselves, this default behaviour will split up those filenames at the newlines and cause the loop body to fail. Additionally it is necessary to set `IFS` to an empty string, because otherwise `read` would still strip leading and trailing whitespace (with the default value of `IFS`). See [[BashFAQ/020|FAQ #20]] for more details.

Using a pipe to send `find`'s output into a `while` loop places the loop in a SubShell, which means any state changes you make (changing variables, `cd`, opening and closing [[FileDescriptor|files]], etc.) will be lost when the loop finishes. To avoid that, you may use a ProcessSubstitution:

{{{#!highlight bash
linecount=0

while IFS= read -r line; do
  linecount=$((linecount + 1))
done < <(some command)

printf 'total lines: %d\n' "$linecount"
}}}

See [[BashFAQ/024|FAQ 24]] for more discussion.

=== My text files are broken! They lack their final newlines! ===

If there are some characters after the last line in the file (or to put it differently, if the last line is not terminated by a newline character), then `read` will read it but return false, leaving the broken partial line in the `read` variable(s). You can process this after the loop:

{{{#!highlight bash
# Emulate cat
while IFS= read -r line; do
  printf '%s\n' "$line"
done < "$file"
[[ -n $line ]] && printf %s "$line"
}}}

Or:

{{{#!highlight bash
# This does not work:
printf 'line 1\ntruncated line 2' | while read -r line; do
  echo $line
done

# This does not work either:
printf 'line 1\ntruncated line 2' | while IFS= read -r line; do
  echo "$line"
done
[[ $line ]] && echo -n "$line"

# This works:
printf 'line 1\ntruncated line 2' | {
  while IFS= read -r line; do
    echo "$line"
  done
  [[ $line ]] && echo "$line"
}
}}}
The first example, beyond missing the after-loop test, is also missing quotes. See [[Quotes|Quotes]] or [[Arguments|Arguments]] for an explanation why. The [[Arguments|Arguments]] page is an especially important read.
Line 147: Line 194:

Alternatively, you can simply add a logical OR to the while test:
{{{#!highlight bash
while IFS= read -r line || [[ -n $line ]]; do
  printf '%s\n' "$line"
done < "$file"

printf 'line 1\ntruncated line 2' | while IFS= read -r line || [[ -n $line ]]; do
  echo "$line"
done
}}}

=== How to keep other commands from "eating" the input ===
Some commands greedily eat up all available data on standard input. The examples above do not take precautions against such programs. For example,
{{{#!highlight bash
while IFS= read -r line; do
  cat > ignoredfile
  printf '%s\n' "$line"
done < "$file"
}}}
will only print the contents of the first line, with the remaining contents going to "ignoredfile", as `cat` slurps up all available input.

One workaround is to use a numeric FileDescriptor rather than standard input:
{{{#!highlight bash
# Bash
while IFS= read -r -u 9 line; do
  cat > ignoredfile
  printf '%s\n' "$line"
done 9< "$file"

# Note that read -u is not portable to every shell.
# Use a redirect to ensure it works in any POSIX compliant shell:
while IFS= read -r line <&9; do
  cat > ignoredfile
  printf '%s\n' "$line"
done 9< "$file"
}}}

Or:

{{{#!highlight bash
exec 9< "$file"
while IFS= read -r line <&9; do
  cat > ignoredfile
  printf '%s\n' "$line"
done
exec 9<&-
}}}

With these workarounds, `cat` waits at each iteration for input on the script's original standard input and writes it to {{{ignoredfile}}}, instead of eating up the loop's input.

You might need this, for example, with `mencoder` which will accept user input if there is any, but will continue silently if there isn't. Other commands that act this way include `ssh` and `ffmpeg`. Additional workarounds for this are discussed in [[BashFAQ/089|FAQ #89]].
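
Another common approach is to point the greedy command's standard input somewhere harmless; for `ssh` the `-n` option does this for you. A sketch (the host list variable and `some_greedy_command` are hypothetical):

{{{#!highlight bash
while IFS= read -r host; do
  ssh -n "$host" uptime            # -n makes ssh read from /dev/null
  some_greedy_command < /dev/null  # generic form for other commands
done < "$hostfile"
}}}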

----
CategoryShell CategoryBashguide
