NAME
App::UniqFiles - Report duplicate or unique files, optionally perform action on them
VERSION
This document describes version 0.142 of App::UniqFiles (from Perl distribution App-UniqFiles), released on 2024-06-26.
SYNOPSIS
# See uniq-files script
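Beyond the script, the functions can be called directly from Perl. Below is a minimal, hedged sketch (the file list and result handling are illustrative; the payload is assumed to be a list of file names when no detail options are set):

    use App::UniqFiles qw(uniq_files);

    # Check all files in the current directory for uniqueness/duplication.
    my $res = uniq_files(files => [glob "*"]);
    if ($res->[0] == 200) {
        print "$_\n" for @{ $res->[2] };   # one file name per line
    } else {
        die "uniq_files failed: $res->[0] $res->[1]";
    }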
NOTES
FUNCTIONS
dupe_files
Usage:
dupe_files(%args) -> [$status_code, $reason, $payload, \%result_meta]
Report duplicate or unique files, optionally perform action on them.
This is a thin wrapper for uniq_files. It defaults report_unique to 0 and report_duplicate to 1.
This function is not exported.
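Since dupe_files is not exported, call it fully qualified. A minimal sketch (the file list is illustrative):

    use App::UniqFiles ();

    # Equivalent to uniq_files() with report_unique => 0 and report_duplicate => 1.
    my $res = App::UniqFiles::dupe_files(files => [glob "*"]);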
Arguments ('*' denotes required arguments):
actions => array[str] (default: ["report"])
What action(s) to perform.
The following actions are available; more than one action can be specified.
algorithm => str
What algorithm is used to compute the digest of the content.
The default is to use md5. Some algorithms supported include crc32, sha1, sha256, as well as Digest to use Perl Digest, which supports a lot of other algorithms, e.g. SHA-1, BLAKE2b.
If set to '', 'none', or 'size', then the digest will be set to the file size. This means uniqueness will be determined solely from file size. This can be quicker, but will generate false positives: two files of the same size will be deemed duplicates even though their content may be different.
If set to 'name' then only name comparison will be performed. This of course can potentially generate lots of false positives, but in some cases you might want to compare filenames for uniqueness.
authoritative_dirs => array[str]
Denote the directory (or directories) where authoritative/"original" copies are found.
detail => true
Show details (a.k.a. --show-digest, --show-size, --show-count).
digest_args => array
Some Digest algorithms require arguments, you can pass them here.
exclude_empty_files => bool
(No description)
exclude_file_patterns => array[str]
Filename (including path) regex patterns to exclude.
files* => array[str]
(No description)
group_by_digest => bool
Sort files by their digest (or size, if not computing digest), separating files with different digests.
include_file_patterns => array[str]
Filename (including path) regex patterns to include.
max_size => filesize
Maximum file size to consider.
min_size => filesize
Minimum file size to consider.
recurse => bool
If set to true, will recurse into subdirectories.
report_duplicate => int (default: 1)
Whether to return duplicate items.
Can be set to either 0, 1, 2, or 3.
If set to 0, duplicate items will not be returned.
If set to 1 (the default for dupe-files), will return all the duplicate files. For example: file1 contains text 'a', file2 'b', file3 'a'. Then file1 and file3 will be returned.
If set to 2 (the default for uniq-files), will only return the first of the duplicate items. Continuing from the previous example, only file1 will be returned, because file2 is unique and file3 contains 'a' (already represented by file1). If one or more --authoritative-dir (-O) options are specified, files under these directories will be preferred.
If set to 3, will return all but the first of the duplicate items. Continuing from the previous example: file3 will be returned. This is useful if you want to keep only one copy of the duplicate content. You can use the output of this routine to mv or rm. Similar to the previous case, if one or more --authoritative-dir (-O) options are specified, then files under these directories will not be listed if possible.
report_unique => bool (default: 0)
Whether to return unique items.
show_count => bool (default: 0)
Whether to return each file content's number of occurrences.
1 means the file content is only encountered once (unique), 2 means there is one duplicate, and so on.
show_digest => true
Show the digest value (or the size, if not computing digest) for each file.
Note that this routine does not compute digest for files which have unique sizes, so they will show up as empty.
show_size => true
Show the size for each file.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
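A hedged sketch of unpacking the envelope; the payload is assumed to be a list of duplicate file names (this function's default reporting mode), and the file list is illustrative:

    use App::UniqFiles ();

    my @files = glob "*";   # the files to check (illustrative)
    my ($status, $reason, $payload, $meta) =
        @{ App::UniqFiles::dupe_files(files => \@files) };
    if ($status == 200) {
        print "duplicate: $_\n" for @$payload;
    } else {
        warn "error $status: $reason";
    }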
uniq_files
Usage:
uniq_files(%args) -> [$status_code, $reason, $payload, \%result_meta]
Report duplicate or unique files, optionally perform action on them.
Given a list of filenames, will check each file's content (and/or size, and/or only name) to decide whether the file is a duplicate of another.
There is a certain amount of flexibility in how duplicates are determined:
- when comparing content, various hashing algorithms are supported;
- when comparing size, a certain tolerance % is allowed;
- when comparing filenames, munging can first be done.
There is flexibility in what to do with duplicate files:
- just print the unique/duplicate files (and let other utilities down the pipe deal with them);
- move duplicates to some location;
- open the files first and prompt for action;
- let Perl code process the files (see the sketch after this description).
Interface is loosely based on the uniq Unix command-line program.
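As a sketch of the "let Perl code process the files" option: with report_unique => 0 and report_duplicate => 3, the payload should contain only the redundant copies, which your own code can then remove. This assumes the payload is a plain list of file names:

    use App::UniqFiles qw(uniq_files);

    my $res = uniq_files(
        files            => [glob "*"],
        report_unique    => 0,
        report_duplicate => 3,   # all but the first of each duplicate set
    );
    die "error $res->[0]: $res->[1]" unless $res->[0] == 200;
    for my $file (@{ $res->[2] }) {
        unlink $file or warn "can't remove $file: $!";
    }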
This function is not exported by default, but exportable.
Arguments ('*' denotes required arguments):
actions => array[str] (default: ["report"])
What action(s) to perform.
The following actions are available; more than one action can be specified.
algorithm => str
What algorithm is used to compute the digest of the content.
The default is to use md5. Some algorithms supported include crc32, sha1, sha256, as well as Digest to use Perl Digest, which supports a lot of other algorithms, e.g. SHA-1, BLAKE2b.
If set to '', 'none', or 'size', then the digest will be set to the file size. This means uniqueness will be determined solely from file size. This can be quicker, but will generate false positives: two files of the same size will be deemed duplicates even though their content may be different.
If set to 'name' then only name comparison will be performed. This of course can potentially generate lots of false positives, but in some cases you might want to compare filenames for uniqueness.
authoritative_dirs => array[str]
Denote the directory (or directories) where authoritative/"original" copies are found.
detail => true
Show details (a.k.a. --show-digest, --show-size, --show-count).
digest_args => array
Some Digest algorithms require arguments, you can pass them here.
exclude_empty_files => bool
(No description)
exclude_file_patterns => array[str]
Filename (including path) regex patterns to exclude.
files* => array[str]
(No description)
group_by_digest => bool
Sort files by their digest (or size, if not computing digest), separating files with different digests.
include_file_patterns => array[str]
Filename (including path) regex patterns to include.
max_size => filesize
Maximum file size to consider.
min_size => filesize
Minimum file size to consider.
recurse => bool
If set to true, will recurse into subdirectories.
report_duplicate => int (default: 2)
Whether to return duplicate items.
Can be set to either 0, 1, 2, or 3.
If set to 0, duplicate items will not be returned.
If set to 1 (the default for dupe-files), will return all the duplicate files. For example: file1 contains text 'a', file2 'b', file3 'a'. Then file1 and file3 will be returned.
If set to 2 (the default for uniq-files), will only return the first of the duplicate items. Continuing from the previous example, only file1 will be returned, because file2 is unique and file3 contains 'a' (already represented by file1). If one or more --authoritative-dir (-O) options are specified, files under these directories will be preferred.
If set to 3, will return all but the first of the duplicate items. Continuing from the previous example: file3 will be returned. This is useful if you want to keep only one copy of the duplicate content. You can use the output of this routine to mv or rm. Similar to the previous case, if one or more --authoritative-dir (-O) options are specified, then files under these directories will not be listed if possible.
report_unique => bool (default: 1)
Whether to return unique items.
show_count => bool (default: 0)
Whether to return each file content's number of occurrences.
1 means the file content is only encountered once (unique), 2 means there is one duplicate, and so on.
show_digest => true
Show the digest value (or the size, if not computing digest) for each file.
Note that this routine does not compute digest for files which have unique sizes, so they will show up as empty.
show_size => true
Show the size for each file.
Returns an enveloped result (an array).
First element ($status_code) is an integer containing HTTP-like status code (200 means OK, 4xx caller error, 5xx function error). Second element ($reason) is a string containing error message, or something like "OK" if status is 200. Third element ($payload) is the actual result, but usually not present when enveloped result is an error response ($status_code is not 2xx). Fourth element (%result_meta) is called result metadata and is optional, a hash that contains extra information, much like how HTTP response headers provide additional metadata.
Return value: (any)
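A sketch combining some of the selection options above; the values are illustrative, and it is assumed that directories listed in files are traversed when recurse is set:

    use App::UniqFiles qw(uniq_files);

    my $res = uniq_files(
        files                 => ["."],
        recurse               => 1,
        algorithm             => "sha256",    # instead of the default md5
        min_size              => 1024,        # ignore files smaller than 1024 bytes
        exclude_file_patterns => ['\.bak$'],  # skip backup files
    );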
HOMEPAGE
Please visit the project's homepage at https://metacpan.org/release/App-UniqFiles.
SOURCE
Source repository is at https://github.com/perlancar/perl-App-UniqFiles.
SEE ALSO
find-duplicate-filenames from App::FindUtils
move-duplicate-files-to from App::DuplicateFilesUtils, which is basically a shortcut for uniq-files -D -R . | while read f; do mv "$f" SOMEDIR/; done.
AUTHOR
perlancar <perlancar@cpan.org>
CONTRIBUTOR
Steven Haryanto <stevenharyanto@gmail.com>
CONTRIBUTING
To contribute, you can send patches by email/via RT, or send pull requests on GitHub.
Most of the time, you don't need to build the distribution yourself. You can simply modify the code, then test via:
% prove -l
If you want to build the distribution (e.g. to try to install it locally on your system), you can install Dist::Zilla, Dist::Zilla::PluginBundle::Author::PERLANCAR, Pod::Weaver::PluginBundle::Author::PERLANCAR, and sometimes one or two other Dist::Zilla- and/or Pod::Weaver plugins. Any additional steps required beyond that are considered a bug and can be reported to me.
COPYRIGHT AND LICENSE
This software is copyright (c) 2024, 2023, 2022, 2020, 2019, 2017, 2015, 2014, 2012, 2011 by perlancar <perlancar@cpan.org>.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
BUGS
Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=App-UniqFiles
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.