NAME
uniq-files - Report duplicate or unique files, optionally performing an action on them
VERSION
This document describes version 0.142 of uniq-files (from Perl distribution App-UniqFiles), released on 2024-06-26.
SYNOPSIS
uniq-files [(--action=str)+|--actions-json=json] [--algorithm=str] [(--authoritative-dir=str)+|--authoritative-dirs-json=json|(-O=str)+] [--debug|--log-level=level|--quiet|--trace|--verbose] [--detail|-l] [--digest-args=s|--digest-args-json=json|-A=s] [--exclude-empty-files|-Z|--include-empty-files] [(--exclude-file-pattern=str)+|--exclude-file-patterns-json=json|(-X=str)+] [--format=name|--json] [--group-by-digest|--no-group-by-digest|--nogroup-by-digest] [(--include-file-pattern=str)+|--include-file-patterns-json=json|(-I=str)+] [--max-size=filesize] [--min-size=filesize] [--(no)naked-res] [--page-result[=program]|--view-result[=program]] [--recurse|-R|--no-recurse|--norecurse] [--report-duplicate=int] [--report-unique|-D|-a|-d|-u|--no-report-unique|--noreport-unique] [--show-count|-c|--count|--no-show-count|--noshow-count] [--show-digest] [--show-size] -- <files> ...
See examples in the "EXAMPLES" section.
DESCRIPTION
Given a list of filenames, this utility will check each file's content (and/or size, and/or just its name) to decide whether the file is a duplicate of another.
There is a certain amount of flexibility in how duplicates are determined:
- when comparing content, various hashing algorithms are supported;
- when comparing size, a certain tolerance percentage is allowed;
- when comparing filenames, munging can first be done.
There is also flexibility in what to do with duplicate files:
- just print the unique/duplicate files (and let other utilities down the pipe deal with them);
- move duplicates to some location;
- open the files first and prompt for an action;
- let Perl code process the files.
The interface is loosely based on the uniq Unix command-line program.
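For readers who know uniq, the flag parallels look roughly like this (a sketch; words.txt and dir/ are hypothetical names):
% sort words.txt | uniq -u     # print only lines that occur once
% uniq-files -u dir/*          # print only files whose content occurs once
% sort words.txt | uniq -c     # prefix each line with its occurrence count
% uniq-files -c dir/*          # show each file content's occurrence count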
OPTIONS
* marks required options.
Main options
- --algorithm=s
-
What algorithm is used to compute the digest of the content.
The default is to use md5. Some supported algorithms include crc32, sha1, sha256, as well as Digest to use a Perl Digest module, which supports many other algorithms, e.g. SHA-1, BLAKE2b.
If set to '', 'none', or 'size', the digest will be set to the file size. This means uniqueness will be determined solely from file size. This can be quicker, but will generate false positives: two files of the same size will be deemed duplicates even though their content may differ.
If set to 'name', only name comparison will be performed. This can of course potentially generate lots of false positives, but in some cases you might want to compare filenames for uniqueness.
See the example invocations at the end of this option group.
- --authoritative-dir=s@, -O
-
The directory (or directories) where authoritative/"original" copies are found.
Can be specified multiple times.
- --authoritative-dirs-json=s
-
The directory (or directories) where authoritative/"original" copies are found (JSON-encoded).
See --authoritative-dir.
- --detail, -l
-
Show details (equivalent to specifying --show-digest, --show-size, --show-count).
- --digest-args-json=s, -A
-
See --digest-args.
- --digest-args=s
-
Some Digest algorithms require arguments; you can pass them here.
- --exclude-empty-files, -Z
-
(No description)
- --exclude-file-pattern=s@, -X
-
Filename (including path) regex patterns to exclude.
Can be specified multiple times.
- --exclude-file-patterns-json=s
-
Filename (including path) regex patterns to exclude (JSON-encoded).
See --exclude-file-pattern.
- --group-by-digest
-
Sort files by their digest (or size, if not computing digests), separating each group of files with a different digest.
- --include-file-pattern=s@, -I
-
Filename (including path) regex patterns to include.
Can be specified multiple times.
- --include-file-patterns-json=s
-
Filename (including path) regex patterns to include (JSON-encoded).
See --include-file-pattern.
- --max-size=s
-
Maximum file size to consider.
- --min-size=s
-
Minimum file size to consider.
- --no-report-unique
-
(No description)
- --report-duplicate=s
-
Whether to return duplicate items.
Default value:
2
Valid values:
[0,1,2,3]
Can be set to either 0, 1, 2, or 3.
If set to 0, duplicate items will not be returned.
If set to 1 (the default for dupe-files), will return all of the duplicate files. For example: file1 contains text 'a', file2 'b', file3 'a'. Then file1 and file3 will be returned.
If set to 2 (the default for uniq-files), will only return the first of the duplicate items. Continuing the previous example, only file1 will be returned, because file2 is unique and file3 contains 'a' (already represented by file1). If one or more --authoritative-dir (-O) options are specified, files under these directories will be preferred.
If set to 3, will return all but the first of the duplicate items. Continuing the previous example: file3 will be returned. This is useful if you want to keep only one copy of the duplicate content. You can use the output of this routine to mv or rm. As in the previous case, if one or more --authoritative-dir (-O) options are specified, files under these directories will not be listed if possible.
See the example invocations at the end of this option group.
- --show-count, --count, -c
-
Whether to return each file content's number of occurrences.
1 means the file content is only encountered once (unique), 2 means there is one duplicate, and so on.
- --show-digest
-
Show the digest value (or the size, if not computing digest) for each file.
Note that this routine does not compute digests for files which have unique sizes, so those will show up as empty.
- --show-size
-
Show the size for each file.
- -a
-
Alias for --report-unique --report-duplicate=1 (report all files).
See --no-report-unique.
- -D
-
Alias for --noreport-unique --report-duplicate=3.
See --no-report-unique.
- -d
-
Alias for --noreport-unique --report-duplicate=1.
See --no-report-unique.
- -u
-
Alias for --report-unique --report-duplicate=0.
See --no-report-unique.
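The invocations below are sketches of the --algorithm settings described earlier in this group (the names photos/ and backup/ are hypothetical):
% uniq-files --algorithm sha256 *                  # compare content via SHA-256 digests
% uniq-files --algorithm size --show-size *        # compare by file size only; fast but may false-positive
% uniq-files -R --algorithm name photos/ backup/   # compare by filename only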
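These sketches contrast the --report-duplicate modes, assuming file1 and file3 hold identical content while file2 differs (the file names are hypothetical):
% uniq-files file1 file2 file3      # default (--report-unique --report-duplicate=2): file1, file2
% uniq-files -d file1 file2 file3   # all duplicate copies: file1, file3
% uniq-files -D file1 file2 file3   # all but the first copy: file3 (feed this to mv or rm)
% uniq-files -u file1 file2 file3   # unique content only: file2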
Input options
- --action=s@
-
What action(s) to perform.
Default value:
["report"]
More than one action can be performed; the kinds of actions available are described in the DESCRIPTION section.
Can be specified multiple times.
- --actions-json=s
-
What action(s) to perform (JSON-encoded).
See --action.
- --files-json=s
-
See --files.
Can also be specified as the 1st command-line argument and onwards.
- --files=s@*
-
(No description)
Can also be specified as the 1st command-line argument and onwards.
Can be specified multiple times.
- --recurse, -R
-
If set to true, will recurse into subdirectories.
Logging options
- --debug
-
Shortcut for --log-level=debug.
- --log-level=s
-
Set log level.
By default, these log levels are available (in order of increasing importance, from least to most): trace, debug, info, warn/warning, error, fatal. By default, the level is usually set to warn, which means that log statements with level info and less important levels will not be shown. To increase verbosity, choose info, debug, or trace.
For more details on log levels and logging, as well as how new logging levels can be defined or existing ones modified, see Log::ger.
- --quiet
-
Shortcut for --log-level=error.
- --trace
-
Shortcut for --log-level=trace.
- --verbose
-
Shortcut for --log-level=info.
Output options
- --format=s
-
Choose output format, e.g. json, text.
Default value:
undef
Output can be displayed in multiple formats, and a suitable default format is chosen depending on the application and/or whether the output destination is an interactive terminal (i.e. whether the output is piped). This option specifically chooses an output format.
- --json
-
Set output format to json.
- --naked-res
-
When outputting as JSON, strip the result envelope.
Default value:
0
By default, when outputting as JSON, the full enveloped result is returned, e.g.:
[200,"OK",[1,2,3],{"func.extra"=>4}]
The reason is so you can get the status (1st element), status message (2nd element), as well as result metadata/extra result (4th element) instead of just the result (3rd element). However, sometimes you want just the result, e.g. when you want to pipe the result for more post-processing. In this case you can use --naked-res so you just get:
[1,2,3]
See the piping sketch at the end of this option group.
- --page-result
-
Filter output through a pager.
This option will pipe the output to a specified pager program. If the pager program is not specified, a suitable default, e.g. less, is chosen.
- --view-result
-
View output using a viewer.
This option will first save the output to a temporary file, then open a viewer program to view the temporary file. If a viewer program is not specified, a suitable default, e.g. the browser, is chosen.
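As a sketch of the post-processing workflow mentioned under --naked-res (assuming the jq utility is available; the glob is hypothetical):
% uniq-files --json *                       # enveloped: [200,"OK",[...],...]
% uniq-files --json --naked-res * | jq .    # bare result, ready for post-processing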
Other options
COMPLETION
This script has shell tab completion capability with support for several shells.
bash
To activate bash completion for this script, put:
complete -C uniq-files uniq-files
in your bash startup (e.g. ~/.bashrc). Your next shell session will then recognize tab completion for the command. Or, you can also directly execute the line above in your shell to activate immediately.
It is recommended, however, that you install modules using cpanm-shcompgen which can activate shell completion for scripts immediately.
tcsh
To activate tcsh completion for this script, put:
complete uniq-files 'p/*/`uniq-files`/'
in your tcsh startup (e.g. ~/.tcshrc). Your next shell session will then recognize tab completion for the command. Or, you can also directly execute the line above in your shell to activate immediately.
It is also recommended to install shcompgen (see above).
other shells
For fish and zsh, install shcompgen as described above.
EXAMPLES
List all files which do not have duplicate contents
% uniq-files *
List all files (recursively, and in detail) which have duplicate contents (all duplicate copies), exclude some files
% uniq-files -R -l -d -X '\.git/' --min-size 10k .
Move all duplicate files (except one copy) in this directory (and subdirectories) to .dupes/
% uniq-files -D -R * | while IFS= read -r f; do mv "$f" .dupes/; done
List number of occurrences of contents for duplicate files
% uniq-files -c *
List number of occurrences of contents for all files
% uniq-files -a -c *
List all files, along with their number of content occurrences and content digest, using the BLAKE2b digest algorithm, and group the files according to their digest
% uniq-files -a -c --show-digest --group-by-digest -A BLAKE2,blake2b *
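List duplicate copies that can be removed, preferring to keep the copies under an authoritative directory (a sketch: ~/originals and ~/downloads are hypothetical paths, and the authoritative directory is also passed as an argument on the assumption that only the listed paths are scanned)
% uniq-files -R -D -O ~/originals ~/originals ~/downloads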
HOMEPAGE
Please visit the project's homepage at https://metacpan.org/release/App-UniqFiles.
SOURCE
Source repository is at https://github.com/perlancar/perl-App-UniqFiles.
AUTHOR
perlancar <perlancar@cpan.org>
CONTRIBUTING
To contribute, you can send patches by email/via RT, or send pull requests on GitHub.
Most of the time, you don't need to build the distribution yourself. You can simply modify the code, then test via:
% prove -l
If you want to build the distribution (e.g. to try to install it locally on your system), you can install Dist::Zilla, Dist::Zilla::PluginBundle::Author::PERLANCAR, Pod::Weaver::PluginBundle::Author::PERLANCAR, and sometimes one or two other Dist::Zilla- and/or Pod::Weaver plugins. Any additional steps required beyond that are considered a bug and can be reported to me.
COPYRIGHT AND LICENSE
This software is copyright (c) 2024, 2023, 2022, 2020, 2019, 2017, 2015, 2014, 2012, 2011 by perlancar <perlancar@cpan.org>.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
BUGS
Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=App-UniqFiles
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.