Differences between revisions 28 and 40 (spanning 12 versions)
Revision 28 as of 2013-03-12 07:38:28
Size: 1793
Editor: ormaaj
Comment: Quotes
Revision 40 as of 2025-04-20 12:55:53
Size: 2101
Comment:
How can I print the n'th line of a file?

One dirty (but not quick) way is:

    sed -n "${n}p" < "$file"

But this reads the entire file even when only line $n is desired, which can be avoided by using the q command to quit at line $n, and deleting all other lines with the d command:

    sed "${n}q;d" < "$file"

Or

    sed "$n!d;q" < "$file"

This form appears to be faster in all of the GNU, busybox, and ast-open sed implementations.
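As a quick sanity check, the q;d method can be exercised on a throwaway file (the file contents below are purely illustrative):

```shell
# Illustrative check: extract line 3 from a small temporary file.
file=$(mktemp)
printf '%s\n' alpha beta gamma delta > "$file"

n=3
line=$(sed "${n}q;d" < "$file")
printf '%s\n' "$line"    # prints: gamma

rm -f "$file"
```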

Another method is to grab the lines starting at $n, then take the first line of that.

    <"$file" tail -n "+$n" | head -n 1

As that uses more specialized tools, it is generally significantly faster.

Another approach, using AWK:

    awk -v n="$n" 'NR==n{print;exit}' < "$file"
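A point in favour of passing $n via awk's -v option, as above, is that the shell variable reaches awk as data rather than being spliced into the program text. A minimal, self-contained illustration:

```shell
# The line number is passed as an awk variable, not interpolated into the script.
n=2
line=$(printf '%s\n' red green blue | awk -v n="$n" 'NR==n{print;exit}')
printf '%s\n' "$line"    # prints: green
```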

If more than one line is needed, it's easy to adapt any of the previous methods:

    x=3 y=4
    sed "$x,$y!d;${y}q" < "$file"                          # Print lines $x to $y; quit after $y.
    tail -n "+$x" < "$file" | head -n "$(( y - x + 1 ))"   # Same, generally faster
    awk -v x="$x" -v y="$y" 'NR>=x; NR==y{exit}' < "$file" # Same, generally slower
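If a range is needed in several places, the sed variant can be wrapped in a small function; the name print_range below is illustrative, not a standard utility:

```shell
# Hypothetical helper: print lines $1 through $2 of standard input.
print_range() {
    sed "$1,$2!d;${2}q"
}

file=$(mktemp)
printf '%s\n' a b c d e > "$file"
range=$(print_range 2 4 < "$file")    # lines 2 to 4: b, c, d
printf '%s\n' "$range"
rm -f "$file"
```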

Or a counter with a simple read loop, though that's going to be orders of magnitude slower for any input with more than a few hundred lines.

    # Bash/ksh/zsh
    {
        m=0
        while ((m++ < x - 1)) && read -r _; do
            :
        done

        head -n "$((y - x + 1))"
    } < "$file"

To read into a variable, it is preferable to use read or mapfile (aka readarray) rather than an external utility. More than one line can be read into the given array variable, or into the default array MAPFILE, by adjusting the argument to mapfile's -n option:

    # Bash4
    {
      mapfile -s "$((x - 1))" -n "$((y - x + 1))" lines
      printf %s "${lines[@]}"
    } < "$file"
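A self-contained sketch of the same mapfile call (assuming bash 4 or later; the input lines and the x/y values are illustrative, and -t is added to strip the trailing newlines from the array elements):

```shell
# Read lines $x through $y of the here-document into the array "lines".
x=2 y=3
mapfile -t -s "$((x - 1))" -n "$((y - x + 1))" lines <<'EOF'
first
second
third
fourth
EOF
printf '%s\n' "${lines[@]}"    # prints: second, then third
```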

CategoryShell

BashFAQ/011 (last edited 2025-04-20 12:55:53 by StephaneChazelas)