How can I get all lines that are in both of two files (set intersection), or in only one of two files (set subtraction)?
Use the comm(1) command.
{{{
# Bash
# intersection of file1 and file2
comm -12 <(sort file1) <(sort file2)
# subtraction of file1 from file2
comm -13 <(sort file1) <(sort file2)
}}}
Read the comm manpage for details.
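In short, comm compares two sorted files and prints three columns: lines only in the first file, lines only in the second, and lines common to both. The -12 flags suppress the first two columns, leaving the intersection; -13 leaves only the lines unique to the second file. A quick sketch with two made-up files:
{{{
# Bash; file1 and file2 here are hypothetical examples
printf '%s\n' apple banana cherry > file1
printf '%s\n' banana cherry date  > file2
comm -12 <(sort file1) <(sort file2)   # banana, cherry (in both)
comm -13 <(sort file1) <(sort file2)   # date (only in file2)
}}}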
If for some reason you lack the core comm program, you can use these other methods:
- An amazingly simple and fast implementation that took just 20 seconds to match a 30k line file against a 400k line file for me.
{{{
# intersection of file1 and file2
grep -xF -f file1 file2
# subtraction of file1 from file2
grep -vxF -f file1 file2
}}}
- It has grep read one of the sets as a pattern list from a file (-f), interpret the patterns as plain strings rather than regexps (-F), and match only whole lines (-x).
- Note that the file specified with -f will be loaded into memory, so it doesn't scale for very large files.
- It should work with any POSIX grep; on older systems you may need to use fgrep rather than grep -F.
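- For example, with the same made-up files as above:
{{{
# Bash; hypothetical example files
printf '%s\n' apple banana cherry > file1
printf '%s\n' banana cherry date  > file2
grep -xF -f file1 file2    # banana, cherry (intersection)
grep -vxF -f file1 file2   # date (file2 minus file1)
}}}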
- An implementation using sort and uniq:
{{{
# intersection of file1 and file2
# (assuming neither file has repeated lines; dedupe with sort -u first if they might)
sort file1 file2 | uniq -d
# file1 - file2 (subtraction): every file2 line occurs at least twice,
# so uniq -u keeps only the lines unique to file1
sort file1 file2 file2 | uniq -u
# file2 - file1 works the same way; change the last file2 to file1
sort file1 file2 file1 | uniq -u
}}}
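- Using the same hypothetical files, these one-liners give:
{{{
# Bash; hypothetical example files
printf '%s\n' apple banana cherry > file1
printf '%s\n' banana cherry date  > file2
sort file1 file2 | uniq -d         # banana, cherry (intersection)
sort file1 file2 file2 | uniq -u   # apple (file1 - file2)
}}}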
- Another implementation of subtraction:
{{{
# lines occurring exactly twice come from the doubled file1 and are absent from file2
sort file1 file1 file2 | uniq -c |
awk '{ if ($1 == 2) { $1 = ""; print; } }'
}}}
- This may introduce an extra space at the start of the line; if that's a problem, just strip it away.
- Also, this approach assumes that neither file1 nor file2 has any duplicates in it.
- Finally, it sorts the output for you. If that's a problem, then you'll have to abandon this approach altogether. Perhaps you could use awk's associative arrays (or perl's hashes or tcl's arrays) instead, as in the next solution.
- These are subtraction and intersection with awk, regardless of whether the input files are sorted or contain duplicates:
{{{
# prints lines that are only in file1, not in file2.
# Reverse the arguments to get the other way round.
awk 'NR==FNR{a[$0];next} !($0 in a)' file2 file1
# prints lines that are in both files; order of arguments is not important
awk 'NR==FNR{a[$0];next} $0 in a' file1 file2
}}}
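- The NR==FNR trick works because FNR resets to 1 for each input file while NR keeps counting, so the condition is true only while awk reads its first file argument; those lines are stored as keys of the array a, and membership in a decides what to print from the second file. A more verbose but equivalent sketch of the intersection one-liner:
{{{
# Bash/awk; equivalent to the intersection one-liner above
awk '
    NR == FNR {     # true only while reading the first file
        a[$0]       # remember each line as an array key
        next        # skip the test below for first-file lines
    }
    $0 in a         # second file: print lines seen in the first
' file1 file2
}}}
See also: http://www.pixelbeat.org/cmdline.html#sets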