How can I sort or compare files based on some metadata attribute (most recently modified, size, etc)?

The tempting solution is to parse the sorted output of ls with a tool such as awk. As usual, the ls approach cannot be made robust and should never be used in scripts, in part because filenames may contain arbitrary characters (including newlines). Therefore, we need some other way to compare file metadata.

The most common requirements are to get the most or least recently modified, or largest or smallest files in a directory. Bash and all ksh variants can compare modification times (mtime) using the -nt and -ot operators of the conditional expression compound command:

   unset -v latest
   for file in "$dir"/*; do
     [[ $file -nt $latest ]] && latest=$file
   done

Or to find the oldest:

   unset -v oldest
   for file in "$dir"/*; do
     [[ -z $oldest || $file -ot $oldest ]] && oldest=$file
   done
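Both loops assume the glob matched something: with bash's default settings, an empty directory leaves the pattern "$dir"/* unexpanded, and the literal string is then tested. The following sketch guards against this with nullglob, wrapped in a hypothetical helper function (the function name is this example's own choice, not from the page; the subshell body keeps the shopt change local):

```shell
# Hypothetical helper: print the most recently modified entry in a
# directory, or nothing (status 1) if the directory is empty.
# The ( ... ) body runs in a subshell, so nullglob stays local.
newest_in() (
  shopt -s nullglob
  latest=
  for file in "$1"/*; do
    [[ $file -nt $latest ]] && latest=$file
  done
  [[ -n $latest ]] && printf '%s\n' "$latest"
)
```

For example, `newest_in /var/log` prints the newest entry in /var/log, and `newest_in "$emptydir"` prints nothing and returns a nonzero status.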

Keep in mind that the mtime of a directory is updated whenever a file within it is added, removed, or renamed. Also note that -nt and -ot are not specified by POSIX test, but many shells, such as dash, include them anyway. No Bourne-like shell has analogous operators for comparing by atime or ctime, so one would need external utilities for that; however, it's nearly impossible either to produce output which can be safely parsed or to handle that output in the shell without using nonstandard features on both ends.

If the sorting criteria are different from "oldest or newest file by mtime", then GNU find and GNU sort may be used together to produce a sorted list of filenames + timestamps, delimited by NUL characters. This will operate recursively by default. GNU find's -maxdepth operator can limit the search depth to 1 directory if needed. Here are a few possibilities, which can be modified as necessary to use atime or ctime, or to sort in reverse order:

   # GNU find + GNU sort (To the precision possible on the given OS, but returns only one result)
   IFS= read -r -d '' latest \
     < <(find "$dir" -type f -printf '%T@ %p\0' | sort -znr)
   latest=${latest#* }   # remove timestamp + space

   # GNU find (To the nearest 1s, using "find -printf" format (%Ts).)
   while IFS= read -rd '' time; do
     IFS= read -rd '' 'latest[time]'
   done < <(find "$dir" -type f -printf '%Ts\0%p\0')
   latest=${latest[-1]}

One disadvantage to these approaches is that the entire list is sorted, whereas simply iterating through the list to find the minimum or maximum timestamp (assuming we want just one file) would be faster. However, depending on the size of the job, the algorithmic disadvantage of sorting may be negligible in comparison to the overhead of using a shell.

   # GNU find
   unset -v latest time
   while IFS= read -r -d '' line; do
     t=${line%% *} t=${t%.*}   # truncate fractional seconds
     ((t > time)) && { latest=${line#* } time=$t; }
   done < <(find "$dir" -type f -printf '%T@ %p\0')
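The same single-pass idea finds the oldest file by flipping the comparison. Here is a sketch, wrapped in a hypothetical oldest_in helper (GNU find assumed; the empty-time check makes the first record win regardless of its timestamp, which also avoids mishandling pre-epoch mtimes):

```shell
# Hypothetical helper: print the oldest regular file (by mtime) under "$1",
# recursively. GNU find's %T@ prints seconds.fraction; the fraction is
# truncated so an integer comparison can be used.
oldest_in() {
  local line t time= oldest=
  while IFS= read -r -d '' line; do
    t=${line%% *} t=${t%.*}       # timestamp field, fractional part dropped
    if [[ -z $time ]] || (( t < time )); then
      oldest=${line#* } time=$t   # keep path and its timestamp
    fi
  done < <(find "$1" -type f -printf '%T@ %p\0')
  [[ -n $oldest ]] && printf '%s\n' "$oldest"
}
```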

Similar usage patterns work well on many other kinds of filesystem metadata. A common task is to perform a calculation over the collection of files in each directory of a tree, such as finding the largest file in each subdirectory recursively.
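As an illustration, here is a sketch of that pattern (GNU find assumed; the helper name and the three-field %s/%h/%p record format are this example's own choices): the size, containing directory, and path of every file are streamed NUL-delimited, and two associative arrays track the running maximum per directory.

```shell
# Hypothetical helper: fill the associative arrays "maxsize" (dir -> bytes)
# and "biggest" (dir -> path) with the largest file in each directory,
# recursively. %s = size, %h = leading directory, %p = full path.
declare -A maxsize biggest
largest_per_dir() {
  local s d f
  maxsize=() biggest=()
  while IFS= read -r -d '' s &&
        IFS= read -r -d '' d &&
        IFS= read -r -d '' f; do
    # ${maxsize[$d]:--1} defaults to -1 so the first file in a directory
    # always wins the comparison.
    if (( s > ${maxsize[$d]:--1} )); then
      maxsize[$d]=$s biggest[$d]=$f
    fi
  done < <(find "$1" -type f -printf '%s\0%h\0%p\0')
}
```

After running `largest_per_dir "$dir"`, the results can be printed with, e.g., `for d in "${!biggest[@]}"; do printf '%s: %s (%s bytes)\n' "$d" "${biggest[$d]}" "${maxsize[$d]}"; done`.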

Readers who are asking this question in order to rotate their log files may wish to look into logrotate(1) instead, if their operating system provides it.


BashFAQ/003 (last edited 2024-05-02 19:47:41 by emanuele6)