NAME
mysql-fill-csv-columns-from-query - Fill CSV columns with data from a query
VERSION
This document describes version 0.022 of mysql-fill-csv-columns-from-query (from Perl distribution App-MysqlUtils), released on 2022-10-19.
SYNOPSIS
mysql-fill-csv-columns-from-query --help (or -h, -?)
mysql-fill-csv-columns-from-query --version (or -v)
mysql-fill-csv-columns-from-query [(--config-path=path)+|--no-config|-C] [--config-profile=profile|-P] [--count|-c|--no-count|--nocount] [--debug|--log-level=level|--quiet|--trace|--verbose] [--dry-run|-n] [--escape-char=str] [--format=name|--json] [--header|--input-header|--no-header|--noheader] [--host=str] [--(no)naked-res] [--no-env] [--page-result[=program]|--view-result[=program]] [--password=str] [--port=int] [--quote-char=str] [--sep-char=str] [--tsv|--input-tsv|--no-tsv|--notsv] [--username=str] -- <database> <filename> <query>
DESCRIPTION
This utility is handy if you have a partially filled table (in CSV format, which you can convert from a spreadsheet, Google Sheet, etc.), where some unique key is already specified in the table (e.g. customer_id) and you want to fill in other columns (e.g. customer_name, customer_email, last_order_date) from a query:
% mysql-fill-csv-columns-from-query DBNAME TABLE.csv 'SELECT c.NAME customer_name, c.email customer_email, (SELECT date FROM tblorders WHERE client_id=$customer_id ORDER BY date DESC LIMIT 1) last_order_time FROM tblclients WHERE id=$customer_id'
The $NAME in the query will be replaced by the actual CSV column value. SELECT field names must correspond to the CSV column names. For each CSV row, a new query will be executed and the first result row is used.
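For illustration only (the rows and values below are made up), an input CSV with just the key column filled might look like:
customer_id,customer_name,customer_email,last_order_time
101,,,
102,,,
After running the utility with a query like the one above, the empty columns are populated from the first result row returned for each customer_id, e.g.:
customer_id,customer_name,customer_email,last_order_time
101,Alice,alice@example.com,2022-09-01
102,Budi,budi@example.com,2022-08-15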
OPTIONS
* marks required options.
Main options
- --count, -c
-
Instead of returning the CSV rows, just return the count of rows that get filled.
- --database=s*
-
(No description)
Can also be specified as the 1st command-line argument.
- --filename=s*, -f
-
Input CSV file.
Can also be specified as the 2nd command-line argument.
- --query=s*
-
(No description)
Can also be specified as the 3rd command-line argument.
Configuration options
- --config-path=s
-
Set path to configuration file.
Can be specified multiple times to instruct the application to read from multiple configuration files (and merge them).
- --config-profile=s, -P
-
Set configuration profile to use.
A single configuration file can contain profiles, i.e. alternative sets of values that can be selected. For example:
[profile=dev]
username=foo
pass=beaver
[profile=production]
username=bar
pass=honey
When you specify --config-profile=dev, username will be set to foo and password to beaver. When you specify --config-profile=production, username will be set to bar and password to honey.
- --no-config, -C
-
Do not use any configuration file.
If you specify --no-config, the application will not read any configuration file.
Connection options
- --host=s
-
Default value:
"localhost"
- --password=s
-
Will try to get default from ~/.my.cnf.
- --port=s
-
Default value:
3306
- --username=s
-
Will try to get default from ~/.my.cnf.
Environment options
- --no-env
-
Do not read environment for default options.
If you specify --no-env, the application will not read any environment variable.
Input options
- --escape-char=s
-
Specify the character used to escape values in fields of the input CSV; will be passed to Text::CSV_XS.
Defaults to \\ (backslash). Overrides the --tsv option.
- --no-header, --input-header
-
By default (--header), the first row of the CSV will be assumed to contain field names (and the second row contains the first data row). When you declare that the CSV does not have a header row (--no-header), the first row of the CSV is assumed to contain the first data row. Fields will be named field1, field2, and so on.
- --quote-char=s
-
Specify the field quote character in the input CSV; will be passed to Text::CSV_XS.
Defaults to " (double quote). Overrides the --tsv option.
- --sep-char=s
-
Specify the field separator character in the input CSV; will be passed to Text::CSV_XS.
Defaults to , (comma). Overrides the --tsv option.
- --tsv, --input-tsv
-
Inform that the input file is in TSV (tab-separated) format instead of CSV.
Overridden by the --sep-char, --quote-char, and --escape-char options. If one of those options is specified, then --tsv will be ignored.
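As an illustration (the database name, file name, table, and column names below are placeholders), filling a headerless TSV file would look something like this; with --no-header the columns are referred to as field1, field2, and so on, both in the $NAME substitution and in the SELECT aliases:
% mysql-fill-csv-columns-from-query --tsv --no-header DBNAME TABLE.tsv 'SELECT c.name field2, c.email field3 FROM tblclients c WHERE c.id=$field1'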
Logging options
- --debug
-
Shortcut for --log-level=debug.
- --log-level=s
-
Set log level.
By default, these log levels are available (in order of increasing level of importance, from least important to most):
trace, debug, info, warn/warning, error, fatal. By default, the level is usually set to warn, which means that log statements with level info and less important levels will not be shown. To increase verbosity, choose info, debug, or trace.
For more details on log levels and logging, as well as how new logging levels can be defined or existing ones modified, see Log::ger.
- --quiet
-
Shortcut for --log-level=error.
- --trace
-
Shortcut for --log-level=trace.
- --verbose
-
Shortcut for --log-level=info.
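For example (the database name, file name, and query are placeholders), to get more verbose logging while troubleshooting you might run:
% mysql-fill-csv-columns-from-query --debug DBNAME TABLE.csv QUERY
which is equivalent to passing --log-level=debug.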
Output options
- --format=s
-
Choose output format, e.g. json, text.
Default value:
undef
Output can be displayed in multiple formats, and a suitable default format is chosen depending on the application and/or whether the output destination is an interactive terminal (i.e. whether the output is piped). This option specifically chooses an output format.
- --json
-
Set output format to json.
- --naked-res
-
When outputting as JSON, strip the result envelope.
Default value:
0
By default, when outputting as JSON, the full enveloped result is returned, e.g.:
[200,"OK",[1,2,3],{"func.extra"=>4}]
The reason is so you can get the status (1st element), status message (2nd element) as well as result metadata/extra result (4th element) instead of just the result (3rd element). However, sometimes you want just the result, e.g. when you want to pipe the result for more post-processing. In this case you can use --naked-res so you just get:
[1,2,3]
- --page-result
-
Filter output through a pager.
This option will pipe the output to a specified pager program. If a pager program is not specified, a suitable default, e.g. less, is chosen.
- --view-result
-
View output using a viewer.
This option will first save the output to a temporary file, then open a viewer program to view the temporary file. If a viewer program is not specified, a suitable default, e.g. the browser, is chosen.
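As a sketch (DBNAME, TABLE.csv, and QUERY are placeholders, and jq is just one possible post-processor), these options can be combined like so:
% mysql-fill-csv-columns-from-query --format=json --naked-res DBNAME TABLE.csv QUERY | jq .
% mysql-fill-csv-columns-from-query --page-result=less DBNAME TABLE.csv QUERY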
Other options
- --dry-run, -n
-
Run in simulation mode (also via DRY_RUN=1).
- --help, -h, -?
-
Display help message and exit.
- --version, -v
-
Display program's version and exit.
COMPLETION
This script has shell tab completion capability with support for several shells.
bash
To activate bash completion for this script, put:
complete -C mysql-fill-csv-columns-from-query mysql-fill-csv-columns-from-query
in your bash startup (e.g. ~/.bashrc). Your next shell session will then recognize tab completion for the command. Or, you can also directly execute the line above in your shell to activate immediately.
It is recommended, however, that you install modules using cpanm-shcompgen which can activate shell completion for scripts immediately.
tcsh
To activate tcsh completion for this script, put:
complete mysql-fill-csv-columns-from-query 'p/*/`mysql-fill-csv-columns-from-query`/'
in your tcsh startup (e.g. ~/.tcshrc). Your next shell session will then recognize tab completion for the command. Or, you can also directly execute the line above in your shell to activate immediately.
It is also recommended to install shcompgen (see above).
other shells
For fish and zsh, install shcompgen as described above.
CONFIGURATION FILE
This script can read configuration files. Configuration files are in the format of IOD, which is basically INI with some extra features.
By default, the following names are searched for configuration files (this can be changed using --config-path): /home/u1/.config/mysqlutils.conf, /home/u1/mysqlutils.conf, or /etc/mysqlutils.conf.
All found files will be read and merged.
To disable searching for configuration files, pass --no-config.
You can put multiple profiles in a single file by using section names like [profile=SOMENAME] or [SOMESECTION profile=SOMENAME]. Those sections will only be read if you specify the matching --config-profile SOMENAME.
You can also put configuration for multiple programs inside a single file, and use the filter program=NAME in section names, e.g. [program=NAME ...] or [SOMESECTION program=NAME]. The section will then only be used when the reading program matches.
You can also filter a section by environment variable using the filter env=CONDITION in section names. For example, if you only want a section to be read if a certain environment variable is true: [env=SOMEVAR ...] or [SOMESECTION env=SOMEVAR ...]. If you only want a section to be read when the value of an environment variable equals some string: [env=HOSTNAME=blink ...] or [SOMESECTION env=HOSTNAME=blink ...]. If you only want a section to be read when the value of an environment variable does not equal some string: [env=HOSTNAME!=blink ...] or [SOMESECTION env=HOSTNAME!=blink ...]. If you only want a section to be read when the value of an environment variable includes some string: [env=HOSTNAME*=server ...] or [SOMESECTION env=HOSTNAME*=server ...]. If you only want a section to be read when the value of an environment variable does not include some string: [env=HOSTNAME!*=server ...] or [SOMESECTION env=HOSTNAME!*=server ...]. Note that currently, due to simplistic parsing, there must not be any whitespace in the value being compared because it marks the beginning of a new section filter or section name.
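For illustration, a hypothetical mysqlutils.conf using these filters might contain (the hostname and values below are made up):
[program=mysql-fill-csv-columns-from-query]
host=localhost
[env=HOSTNAME=db1]
port=3307
The first section is only read by this program; the second only when the HOSTNAME environment variable equals db1.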
To load and configure plugins, you can use either the -plugins parameter (e.g. -plugins=DumpArgs or -plugins=DumpArgs@before_validate_args), or use [plugin=NAME ...] sections, for example:
[plugin=DumpArgs]
-event=before_validate_args
-prio=99
[plugin=Foo]
-event=after_validate_args
arg1=val1
arg2=val2
which is equivalent to setting -plugins=-DumpArgs@before_validate_args@99,-Foo@after_validate_args,arg1,val1,arg2,val2.
List of available configuration parameters:
count (see --count)
database (see --database)
escape_char (see --escape-char)
filename (see --filename)
format (see --format)
header (see --no-header)
host (see --host)
log_level (see --log-level)
naked_res (see --naked-res)
password (see --password)
port (see --port)
query (see --query)
quote_char (see --quote-char)
sep_char (see --sep-char)
tsv (see --tsv)
username (see --username)
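A minimal configuration file setting a few of these parameters might look like this (all values are illustrative):
host=localhost
port=3306
username=myuser
password=mysecret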
ENVIRONMENT
MYSQL_FILL_CSV_COLUMNS_FROM_QUERY_OPT
String. Specify additional command-line options.
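For example (the options chosen here are arbitrary; this assumes a Bourne-style shell), to always treat input files as headerless TSV you could set:
% export MYSQL_FILL_CSV_COLUMNS_FROM_QUERY_OPT="--tsv --no-header"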
FILES
/home/u1/.config/mysqlutils.conf
/home/u1/mysqlutils.conf
/etc/mysqlutils.conf
HOMEPAGE
Please visit the project's homepage at https://metacpan.org/release/App-MysqlUtils.
SOURCE
Source repository is at https://github.com/perlancar/perl-App-MysqlUtils.
AUTHOR
perlancar <perlancar@cpan.org>
CONTRIBUTING
To contribute, you can send patches by email/via RT, or send pull requests on GitHub.
Most of the time, you don't need to build the distribution yourself. You can simply modify the code, then test via:
% prove -l
If you want to build the distribution (e.g. to try to install it locally on your system), you can install Dist::Zilla, Dist::Zilla::PluginBundle::Author::PERLANCAR, Pod::Weaver::PluginBundle::Author::PERLANCAR, and sometimes one or two other Dist::Zilla- and/or Pod::Weaver plugins. Any additional steps required beyond that are considered a bug and can be reported to me.
COPYRIGHT AND LICENSE
This software is copyright (c) 2022, 2020, 2019, 2018, 2017, 2016 by perlancar <perlancar@cpan.org>.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
BUGS
Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=App-MysqlUtils
When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.