NAME
PEQUEL - Pequel User Guide
OVERVIEW -- WHAT IS PEQUEL?
Pequel is a comprehensive ETL (Extract-Transform-Load) system for processing raw (ASCII) data files. It features a simple, user-friendly, event-driven scripting interface that transparently generates, builds and executes highly efficient data-processing programs. By using the Pequel scripting language, the user can create and maintain complex ETL data transformation processes quickly, easily, and accurately.
The Pequel scripting language is aimed at non-programmer users and is simple to learn and use. It is event driven -- the user need only fill in the details for each event as required. It can also be used to effectively simplify what would otherwise be a complex SQL statement.
The Pequel scripting language allows embedded Perl expressions, thus giving access to regular expressions, built-in functions, and all Perl operators.
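For example, a single filter line can combine a Perl regular expression with a Perl built-in function. An illustrative fragment, using field names from the tutorial below:
filter
LOCATION =~ /^10/ && length(PRODUCT) >= 8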
Pequel is installed as a Perl module.
Pequel generates highly efficient Perl and C code. The emphasis in the generated code is performance -- to process maximum records in minimum time. The generated code can be dumped into a program file, modified and executed independently of Pequel.
The Pequel script is self-documenting via pequeldoc. Pequel will automatically generate the Pequel Script Programmer's Reference Manual in pdf format. This manual contains detailed and summarised information about the script, and includes cross-reference information. It can also include an optional listing of the generated program.
The following guide describes the use of the Pequel scripting language in detail.
Pequel can be used to process data in a number of different ways, including the following:
- Selecting records (filtering)
-
Use Perl expressions to select records. The full power of Perl regular expressions and Perl built-in functions is also available.
- Grouping and Statistics
-
Records with similar characteristics can be grouped together. Calculate statistics, such as max, min, mean, sum, and count, on grouped record sets. Grouping can also be done on unsorted input data using the hash option.
- Calculations
-
Perform calculations on input fields to generate new (derived) fields, using Perl expressions. Calculations can be performed on both numeric fields (mathematical) and string fields (such as concatenation, substr, etc).
- Cleaning Data
-
Use Pequel with Perl regular expressions to reject bad records. Rejected records will be saved in a reject file.
- Analysing Data Quality
-
Data can be analysed for quality, and a summary analysis report generated which will reflect the overall quality of the data.
- Statistics
-
Generate summary statistical information.
- Converting Data
-
Perform any kind of data conversion, including converting from one data type to another, reformatting, case changes, splitting a field into two or more fields, combining two or more fields into one, converting date fields from one date format to another, padding, etc.
- Tables and Cross References
-
Load and use tables to look up / cross-reference values by key.
- Database Connectivity
-
Direct access to database tables (Oracle, Sqlite, etc.). New in v2. Pequel will generate low-level database API code. Currently supported databases are Oracle (via OCI) and Sqlite.
- Merge and n-Way Join
-
Similarly sorted data source files can be merged. This is similar to a join, but with no limit on the number of source files that can be joined (merged) simultaneously. New in v2.
- Extract Data from Database Table(s)
-
TBD version 2.5
Data can be extracted directly from database tables, and from a mix of database types (Oracle, Sqlite, Mysql, Sybase, etc), into tables and into the input-section.
- Load Data into Database Table(s)
-
TBD version 2.5
The output data can be directly batch-loaded into a database table.
- Input Binary Data Files
-
TBD version 3.0
Access to binary data files via the input-section and tables.
USAGE
- pequel scriptfile.pql < file_in > file_out
-
Execute pequel with scriptfile.pql script to process file_in data file, resulting in file_out.
- pequel -c scriptfile.pql
-
Check the syntax of the pequel script scriptfile.pql.
- pequel -viewcode scriptfile.pql
-
Generate and display the code for the pequel script scriptfile.pql.
- pequel -dumpcode scriptfile.pql
-
Generate the code for the pequel script scriptfile.pql and save the generated code in the file scriptfile.pql.2.code.
- pequel -v
-
Display version information for Pequel.
- pequel -usage
-
Display Pequel usage command summary.
- pequel -pequeldoc pdf -detail scriptfile.pql
-
Generate the Script Reference document in pdf format for the Pequel script scriptfile.pql. The document will include a section showing the generated code (-detail).
TUTORIAL
Create Pequel Script
Use your preferred text editor to create a pequel script scriptname.pql. Syntax highlighting is available for vim with the pequel.vim syntax file (in vim/syntax).
All that is required is to fill in, at least, the output section, or specify the transfer option. The transfer option will have the effect of copying all input field values to the output. This is effectively a straight-through process -- the resulting output is identical to the input.
options
transfer
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION
output section
Check The Pequel Script
Do a syntax check on the script by using the Pequel -c option. This should return the message scriptname.pql Syntax OK.
pequel -c scriptname.pql
scriptname.pql Syntax OK
Run The Pequel Script
If the syntax check is OK, run the script -- the sample.data file in the examples directory can be used:
pequel scriptname.pql < inputdata > outputdata
Select A Subset Of Records
We next do something useful to transform the input data. Create a filter to output a subset of records, consisting of records which have LOCATION starting with 10. The filter example uses the Perl regular expression =~ /^10/ to match the LOCATION field content. This is specified in the filter section. Check and run the updated script as instructed above:
options
transfer
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION
filter
LOCATION =~ /^10/
Create New Derived Fields
Create additional, derived fields based on the other input fields. In our example, two new fields are added: COST_VALUE and SALES_VALUE. Derived fields must be specified in the input section after the last input field. The derived field name is followed by the => operator and a calculation expression. Derived fields will also be output when the transfer option is specified.
options
transfer
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
output section
Select Which Fields To Output
In the above examples, the output record has the same (field) format as the input record, plus the additional derived fields. In the following example we select which fields to output, and their order, on the output record. To do this we need to remove the transfer option and create the output section. The output fields PRODUCT, LOCATION, DESCRIPTION, QUANTITY, COST_VALUE, and SALES_VALUE are specified to create a new output format. In this example, all the output field names have the same name as the input fields.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
output section
string PRODUCT PRODUCT,
string LOCATION LOCATION,
string DESCRIPTION DESCRIPTION,
numeric QUANTITY QUANTITY,
decimal COST_VALUE COST_VALUE,
decimal SALES_VALUE SALES_VALUE
Group Records For Analysis
Records with similar characteristics can be grouped together, and aggregations can then be performed on the grouped records' data. The following example groups the records by LOCATION, and sums the COST_VALUE and SALES_VALUE fields within each group. Grouping is activated by creating a group by section. Input data must also be sorted on the grouping field(s). If the data is not pre-sorted then this needs to be done in the script by creating a sort by section. Alternatively, by specifying the hash option, the input data need not be sorted.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
sort by
LOCATION
group by
LOCATION
output section
string LOCATION LOCATION,
string PRODUCT PRODUCT,
string DESCRIPTION DESCRIPTION,
numeric QUANTITY QUANTITY,
decimal COST_VALUE sum COST_VALUE,
decimal SALES_VALUE sum SALES_VALUE
Select A Subset Of Grouped Records
A subset of groups can be selected by creating a having section. The having section is similar to the filter section, but is instead applied to the aggregated group of records. In this example we will output only records for locations which have a total SALES_VALUE of 1000 or more. Note that SALES_VALUE in the having section refers to the output field (sum SALES_VALUE) and not the input field with the same name (SALES_PRICE * QUANTITY). The having section gives preference to output fields when interpreting field names.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
sort by
LOCATION
group by
LOCATION
output section
string LOCATION LOCATION,
string PRODUCT PRODUCT,
string DESCRIPTION DESCRIPTION,
numeric QUANTITY QUANTITY,
decimal COST_VALUE sum COST_VALUE,
decimal SALES_VALUE sum SALES_VALUE
having
SALES_VALUE >= 1000
Aggregation Based On Conditions
Output fields can be aggregated conditionally. That is, the aggregation will only occur for records, within the group, that evaluate the condition to true. This is done by adding a where clause to the aggregate function. In this example we create three new output fields: SALES_VALUE_RETAIL, SALES_VALUE_WSALE and SALES_VALUE_OTHER. These fields will contain the sales value for records within the group which have sales code equal to 'R', 'W', and other codes, respectively.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
sort by
LOCATION
group by
LOCATION
output section
string LOCATION LOCATION,
string PRODUCT PRODUCT,
string DESCRIPTION DESCRIPTION,
numeric QUANTITY QUANTITY,
decimal COST_VALUE sum COST_VALUE,
decimal SALES_VALUE sum SALES_VALUE,
decimal SALES_VALUE_RETAIL sum SALES_VALUE where SALES_CODE eq 'R',
decimal SALES_VALUE_WSALE sum SALES_VALUE where SALES_CODE eq 'W',
decimal SALES_VALUE_OTHER sum SALES_VALUE where SALES_CODE ne 'R' and SALES_CODE ne 'W'
Derived Fields Based On Output Fields
An output derived field, the calculation of which is based on output fields, can be created by declaring an output field with the = calculation expression.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION,
COST_VALUE => COST_PRICE * QUANTITY,
SALES_VALUE => SALES_PRICE * QUANTITY
filter
LOCATION =~ /^10/
sort by
LOCATION
group by
LOCATION
output section
string LOCATION LOCATION,
string PRODUCT PRODUCT,
string DESCRIPTION DESCRIPTION,
numeric QUANTITY QUANTITY,
numeric TOTAL_QUANTITY sum QUANTITY,
decimal COST_VALUE sum COST_VALUE,
decimal SALES_VALUE sum SALES_VALUE,
decimal SALES_VALUE_RETAIL sum SALES_VALUE where SALES_CODE eq 'R',
decimal SALES_VALUE_WSALE sum SALES_VALUE where SALES_CODE eq 'W',
decimal SALES_VALUE_OTHER sum SALES_VALUE where SALES_CODE ne 'R' and SALES_CODE ne 'W',
decimal AVG_SALES_VALUE = SALES_VALUE / TOTAL_QUANTITY
Note
In order to protect against a divide-by-zero exception, the AVG_SALES_VALUE field would actually be better declared as follows. This form uses the Perl conditional ?: operator. If TOTAL_QUANTITY is zero, it will set AVG_SALES_VALUE to zero; otherwise it will set AVG_SALES_VALUE to SALES_VALUE / TOTAL_QUANTITY. Thus, the division will only be performed on non-zero TOTAL_QUANTITY.
decimal AVG_SALES_VALUE = TOTAL_QUANTITY == 0 ? 0.0 : SALES_VALUE / TOTAL_QUANTITY
Create Intermediate (Transparent) Output Fields
In the previous example, supposing that the TOTAL_QUANTITY field was not required in the output, it could be made transparent by declaring it with an underscore (_) prefix. Transparent output fields are useful for creating intermediate fields required for calculations.
numeric _TOTAL_QUANTITY sum QUANTITY,
decimal AVG_SALES_VALUE = SALES_VALUE / _TOTAL_QUANTITY
Cleaning Data
Data can be cleaned in a variety of ways, and invalid records placed in a reject file. The following example determines the validity of a record by a) the length of certain fields, and b) the content of the field QUANTITY. The PRODUCT and LOCATION fields must be at least 8 and 2 characters long, respectively; the QUANTITY field must contain only numeric digits, decimal point and minus sign. The rejected records will be placed in the reject file called scriptname.reject.
options
transfer
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION
reject
length(PRODUCT) < 8 || length(LOCATION) < 2,
QUANTITY !~ /^[0-9\.\-]+$/
Converting Data
Any sort of data conversion can be performed, including converting from one data type to another, reformatting, case changes, splitting a field into two or more fields, combining two or more fields into one, converting date fields from one date format to another, padding, etc. The following script demonstrates these data conversions.
options
input section
PRODUCT,
COST_PRICE,
DESCRIPTION,
SALES_CODE,
SALES_PRICE,
QUANTITY,
SALES_DATE,
LOCATION
output section
string PRODUCT_U = &uc(PRODUCT), // Convert case to upper
string DESCRIPTION_U = &uc(DESCRIPTION), // Convert case to upper
string PCODE_1 = &substr(PRODUCT,0,2), // Split field
string PCODE_2 = &substr(PRODUCT,2,4), // ""
string ANALYSIS_1 = SALES_CODE . sprintf("%08d", COST_PRICE), // Combine fields
string S_QUANTITY = sprintf("%08d", QUANTITY) // Reformat/Convert field
string NEW_PRODUCT = PCODE_2 . PCODE_1 . &substr(PRODUCT,6) // Reformat
decimal SALES_PRICE SALES_PRICE // no change
decimal SALES_CODE SALES_CODE // no change
string LOCATION LOCATION // no change
Using Date Fields
TBC
Counting Records
TBC
Extracting n Distinct Values For A Field
TBC
Tabulating Data
TBC
Statistical Analysis
TBC
Declaring And Using Tables For Value Lookup
TBC
Using External Tables
TBC
Create A Summary Report
TBC
Using Array Fields
TBC
Database Tables: oracle
TBC
Database Tables: sqlite
TBC
Merge Database Tables
TBC
View The Generated Perl Code
To view the generated Perl code use the Pequel -viewcode option:
pequel -viewcode scriptname.pql | more
Dump The Generated Perl Code
To dump the generated Perl code use the Pequel -dumpcode option. This will save the generated Perl program in the file script_name.2.code, where script_name is the full script file name. So, if your script is called myscript.pql the resulting generated Perl program will be saved in the file myscript.pql.2.code, in the same path:
pequel -dumpcode scriptname.pql
Produce The Script Specification Document
Use the Pequel -pequeldoc pdf option to produce a script specification document for the Pequel script. The generated pdf document will be saved in a file with the same name as the script but with the file extension changed from pql to pdf.
pequel scriptname.pql -pequeldoc pdf
Use the -detail option to include the generated code in the document.
pequel scriptname.pql -pequeldoc pdf -detail
Display Summary Information For Script
This option will display the parsed details from the script in a summarised format.
pequel scriptname.pql -list
COMMAND LINE OPTIONS
- --prefix, --prefix_path
-
Directory path prefix for file names
- --verbose, --ver
-
Display progress counter
- --noverbose, --silent, --quite
-
Do not display progress counter
- --input_file, --is, --if, --i
-
Input data filename
- --usage
-
Display command usage description
- --output_file, --os, --of, --o
-
Output data filename
- --script_name, --script, --s, --pql
-
Script filename
- --header
-
Write header record to output.
- --pequeldoc, --doc
-
Generate the pod/pdf Pequel script Reference Guide.
- --viewcode, --vc
-
Display the generated Perl code for pequel script
- --dumpcode, --dc, --diag
-
Dump the generated Perl code for pequel script
- --syntax_check, --c, --check
-
Check the pequel script for syntax errors
- --version, --v
-
Display Pequel Version information
- --table_info, --ti
-
Display Table information for all tables declared in the pequel script
PEQUEL LANGUAGE REFERENCE
A Pequel script is divided into sections. Each section is delimited by a section name, which appears on a line on its own, followed by a list of statements/items. Each item line must be terminated by a newline or comma (or both). In order to split an item line into multiple lines (for better readability) use the line continuation character \.
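For example, a long derived-field calculation can be split with the \ character at the end of a line. An illustrative fragment based on the tutorial's fields:
input section
PRODUCT,
SALES_VALUE => SALES_PRICE * \
QUANTITY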
Pequel is event driven. Each section within a Pequel script describes an event. For example, the input section is activated whenever an input record is read; the output section is activated whenever an aggregation is performed.
The sections must appear in the order described below. A minimal script must contain input section and output section, or, input section and transfer option. All other sections are optional, and need only appear in the Pequel script if they contain statements.
The main sections are input section and output section. The input section defines the format, in fields, of the input data stream. It can also define new calculated (derived) fields. The output section defines the format of the output data stream. The output section is required in order to perform aggregation. The output section will consist of input fields, aggregations based on grouping the input records, and new calculated fields.
Input sorting can be specified with the sort by section. Break processing (grouping) can be specified with the group by section. Input filtering is specified with the filter section. Groups of records can be filtered with the having section.
A powerful feature of Pequel is its built-in tables. Tables consist of key and value pairs. Tables are used to perform merges and joins on multiple input data sources. They can also be used to access external data for cross referencing and value lookups.
Pequel also handles a number of date field formats. The &date() macro provides access to date fields.
Comments
Any text following and including the # symbol is considered comment text. C style comments (// and /* ... */) are also supported if your system provides the cpp preprocessor.
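For example (the C style forms assume the cpp preprocessor is available):
# a full-line Pequel comment
input section
PRODUCT, // trailing C style comment
/* a block comment
spanning two lines */
QUANTITY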
Pre Processor
If your system provides the cpp preprocessor, your Pequel script may include any C/C++ style macros and defines.
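For example, a cpp define can factor out a constant used in several places. A speculative sketch -- how cpp directives interact with Pequel's own # comment handling is not specified here:
#define MIN_SALES 1000
having
SALES_VALUE >= MIN_SALES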
OPTIONS SECTION
This section is used to declare various options described in detail below. Options define the overall character of the data transformation.
- Format
-
options <option> [ (<arg>) ] [, ...]
- Example
-
options
input_delimiter(\s+), # one or more space(s) delimit input fields.
verbose(100000), # print progress on every 100000'th input record.
optimize,
varnames,
default_date_type(DD/MM/YY),
nonulls,
diag
- verbose
-
Set the verbose option to display progress information to STDERR during the transform run. Requires one parameter. This will instruct Pequel to display a counter message for every specified number of records read from the input.
- input_delimiter
-
Specify the character that is used to delimit columns in the input data stream. This is usually the pipe | character, but can be any character including the space character. For multiple spaces use \s+, and for multiple tabs use \t+. The input delimiter will default to the pipe character if input_delimiter is not specified.
- output_delimiter
-
Specify the character that will delimit columns in the output. The output delimiter will default to the input delimiter if not specified. Refer to input_delimiter above for more information regarding types of delimiters.
- discard_header
-
If the input data stream contains an initial header record then this option must be specified in order to discard this record from the processing.
- input_file
-
Specify the file name as a parameter. If specified, the input data will be read from this file; otherwise it will be read from STDIN.
- output_file
-
Specify the file name as a parameter. If specified, the output will be written to this file (the file will be overwritten!); otherwise it will be sent to STDOUT.
- transfer
-
Copy the input record to output. The input record is copied as is, including calculated fields, to the output record. Fields specified in the output section are placed after the input fields. The transfer option is not available when group by is in use.
- hash
-
Use hash processing mode. Hash mode is only available when break processing is activated with 'group by'. In hash mode input data need not be sorted. Because this mode of processing is memory intensive, it should only be used when generating a small number of groups. The optional 'numeric' modifier can be specified to sort the output numerically; if not specified, a string sort is done.
- header
-
If specified then an initial header record will be written to output. This header record contains the output field names. By default a header record will be output if neither header nor noheader is specified.
- noheader
-
Specify this option to suppress writing of header record.
- addpipe
-
Specify this option to add an extra delimiter character after the last field. This is the default action if neither addpipe nor noaddpipe is specified.
- noaddpipe
-
Specify this option to suppress adding an extra delimiter character after the last field.
- optimize
-
If specified the generated Perl code will be optimized to run more efficiently. This optimisation is done by grouping similar where conditions into if-else blocks. Thus if a number of where clauses contain the same condition, these statements will be grouped under one if condition. The optimize option should only be used by users with some knowledge of Perl.
- nooptimize
-
Specify this option to prevent code from being optimised. This is the default setting.
- nulls
-
If specified, numeric and decimal values with a zero/null value will be output as a null (empty) value. This is the default setting.
- nonulls
-
If specified, numeric and decimal values with a zero/null value will be output as 0.
- varnames
-
Use for debugging the generated code. This setting will display the field name, instead of just the field number, in the generated Perl code. This is the default setting.
- novarnames
-
This will cause the generated code to contain field numbers only instead of field names.
- noexecute
-
Use for debugging. With this option, the generated code is displayed to STDOUT instead of being executed.
- reject_file
-
Use this option to specify a file name to contain the rejected records. These are records that are rejected by the filter specified in the reject section. If no reject file option is specified then the default reject file name is the script file name with .reject appended.
- dumpcode
-
Set this option to save the generated code in the file scriptname.2.code. This file contains the generated Perl code -- the actual Perl program that will process the input data stream. This generated Perl program can be executed independently of Pequel.
- default_date_type
-
Specify a default date type. Currently supported date types are: YYYYMMDD, YYMMDD, DDMMYY, DDMMMYY, DDMMYYYY, DD/MM/YY, DD/MM/YYYY, and US date formats: MMDDYY, MMDDYYYY, MM/DD/YY, MM/DD/YYYY. The DDMMMYY format refers to dates such as 21JAN02.
- default_list_delimiter
-
Specify the default list delimiter for array fields created by the values_all and values_uniq aggregates. Any delimiter specified as a parameter to the aggregate function will override this.
- rmctrlm v3
-
If the input file is in DOS format, specify 'rmctrlm' option to remove the Ctrl-M at end of line.
- input_record_limit v3
-
Specify the number of records to process from the input file. Processing will stop after the specified number of records have been read.
- suppress_output v3
-
Use this option when the summary section is used, to prevent output of the raw results.
- pequeldoc
-
Generate the Programmer's Reference Manual in PDF format for the Pequel script. The next three options are also required.
- doc_title
-
Specify the title that will appear on the pequeldoc generated manual.
- doc_email
-
Specify the user's email that will appear on the pequeldoc generated manual.
- doc_version
-
Specify the Pequel script version number that will appear on the pequeldoc generated manual.
INLINE OPTIONS
The following options require that the Inline::C Perl module and a C compiler are installed on your system.
- use_inline
-
The use_inline option will instruct Pequel to generate (and compile/link) C code -- replacing the input file identifier inside the main while loop by a readsplit() function call. The readsplit function is implemented in C. A combined example of these options appears after this list.
- input_delimiter_extra
-
Specify one or more extra field delimiter characters. These may be one of the quote characters ', ", `, and optionally one of the bracket characters {, [, (. For example, this option can be used to parse input Apache log files in CLF format:
options input_delimiter_extra("[) // Apache CLF log quoted fields and bracketed timestamp
- inline_clean_after_build
-
Tells Inline to clean up the current build area if the build was successful. Sometimes you want to DISABLE this for debugging. Default is 1.
- inline_clean_build_area
-
Tells Inline to clean up the old build areas within the entire Inline DIRECTORY. Default is 0.
- inline_print_info
-
Tells Inline to print various information about the source code. Default is 0.
- inline_build_noisy
-
Tells ILSMs that they should dump build messages to the terminal rather than be silent about all the build details.
- inline_build_timers
-
Tells ILSMs to print timing information about how long each build phase took. Usually requires Time::HiRes.
- inline_force_build
-
Makes Inline build (compile) the source code every time the program is run. The default is 0.
- inline_directory
-
The DIRECTORY config option is the directory that Inline uses to both build and install an extension.
Normally Inline will search in a bunch of known places for a directory called '.Inline/'. Failing that, it will create a directory called '_Inline/'.
If you want to specify your own directory, use this configuration option.
Note that you must create the DIRECTORY directory yourself. Inline will not do it for you.
- inline_CC
-
Specify which compiler to use.
- inline_OPTIMIZE
-
This controls the MakeMaker OPTIMIZE setting. By setting this value to '-g', you can turn on debugging support for your Inline extensions. This will allow you to be able to set breakpoints in your C code using a debugger like gdb.
- inline_CCFLAGS
-
Specify extra compiler flags.
- inline_LIBS
-
Specifies external libraries that should be linked into your code.
- inline_INC
-
Specifies an include path to use. Corresponds to the MakeMaker INC parameter.
- inline_LDDLFLAGS
-
Specify which linker flags to use.
NOTE: These flags will completely override the existing flags, instead of just adding to them. So if you need to use those too, you must respecify them here.
- inline_MAKE
-
Specify the name of the 'make' utility to use.
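A combined sketch of these options; the parameter values shown (gcc, -g) are illustrative assumptions, not documented defaults:
options
use_inline,
inline_CC(gcc),
inline_OPTIMIZE(-g),
inline_build_noisy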
USE PACKAGE SECTION
Use this section to specify Perl packages to use. This section is optional.
- Format
-
use package <Perl package name> [, ...]
- Examples
-
use package Benchmark, EasyDate
INIT TABLE SECTION
Use init table to initialise tables in the Pequel script. This will consist of a list of table name followed by key value (or value list) pairs. The key must not contain any spaces. In order to avoid clutter in the script, use load table as described below. To look up a table key/value use the %table_name(key) syntax. Table column values are accessed by using the %table_name(key)->n syntax, where n refers to a column number starting from '1'. The column specification is not required for single value tables. All entries within a table should have the same number of values; empty values can be declared with a null quoted value (''). This section is optional.
- Format
-
init table <table> <key> <value> [, <value>...]
- Example
-
init table
// Table-Name Key-Value Field->1 Field->2 Field->3
LOCINFO NSW 'New South Wales' '2061' '02'
LOCINFO WA 'Western Australia' '5008' '07'
LOCINFO SA 'South Australia' '8078' '08'
input section
LOCATION,
LDESCRIPT => %LOCINFO(LOCATION)->1 . " in postcode " . %LOCINFO(LOCATION)->2
LOAD TABLE SECTION
Use this section to declare tables that are to be initialised from an external data file. If the table is in .tbl format (key|value) then only the table name (without the .tbl) need be specified. The filename can consist of the full path name. Compressed files (ending in .gz, .z, .Z, .zip) will be handled properly. If the key column is not specified then it is set to 1 by default; if the value column is not specified then it is set to 2 by default. Column numbers are 1 based. To look up a table key/value use the %table_name(key) syntax. If the table name is prefixed with the _ character, the table will be loaded at runtime instead of compile time. Thus the table contents will not appear in the generated code. This is useful if the table contains more than a few hundred entries, as it will not clutter up the generated code.
- persistant option
-
The persistant option will make the table disk-based instead of memory-based. Use this option for tables that are too big to fit in available memory. The disk-based table snapshot file will have the name _TABLE_name.dat, where name is the table name. When the persistant option is used, the table is generated only once, the first time it is used. Thereafter it will be loaded from the snapshot file. This is a lot quicker and therefore useful for large tables. In order to re-generate the table, the snapshot file must be manually deleted. In order to use the persistant option the Perl DB_File module must be available. The effect of persistant is to tie the table's associative array with a DBM database (Berkeley DB). Note that using persistant tables will degrade the overall performance of the script.
- Format
-
load table [ persistant ] <table> [ <filename> [ <key_col> [ <val_col> ] ] ] [, ...]
- Examples
-
load table
POSTCODES
MONTH_NAMES /data/tables/month_names.tbl
POCODES pocodes.gz 1 2
ZIPSAMPLE zipsample.txt 3 21
INIT _PERIOD SECTION
Use this section to initialise the special internal _PERIOD table. The _PERIOD table is accessed by using the &period() macro. This will map all dates within the start and end date specified to the period value (string or numeric). Please note the space after init and before _PERIOD. This section is optional. See also the &period() macro below.
- Format
-
init _PERIOD [ persistant ] <period value> <start date> <end date> <date fmt> [, ...]
- Examples
-
init _PERIOD
Q1 01JAN01 31MAR01 DDMMMYY,
Q2 01APR01 30JUN01 DDMMMYY,
Q3 01JUL01 30SEP01 DDMMMYY,
Q4 01OCT01 31DEC01 DDMMMYY
INIT _MONTH SECTION
Use this section to initialise the special internal _MONTH table. The _MONTH table is accessed by using the %month() macro. This will map all dates within the start and end date specified to the month value (numeric or string). Please note the space after init and before _MONTH. This section is optional. See also the %month() macro below.
- Format
-
init _MONTH [ persistant ] <month value> <start date> <end date> <date fmt> [, ...]
- Examples
-
init _MONTH
JAN 01/01/2002 01/31/2002 MM/DD/YYYY,
FEB 02/01/2002 02/28/2002 MM/DD/YYYY,
MAR 03/01/2002 03/30/2002 MM/DD/YYYY
INPUT SECTION
This section defines the format of the input data stream. Any calculated fields must be placed after the last input field. The calculation expression must begin with => and consists of (almost) any valid Perl statement, and can include input field names. All macros are also available to calculation expressions. The input section must appear before all the sections described below. Each input field name must be unique.
- Format
-
input section <input field name> [ => <calculation expression> ] [, ...]
- Example
-
input section
ACL,
AAL,
ZIP,
CALLDATE,
CALLS,
DURATION,
REVENUE,
DISCOUNT,
KINSHIP_KEY,
INV => REVENUE + DISCOUNT,
MONTH_CALLDATE => &month(CALLDATE),
GROUP => MONTH_CALLDATE <= 6 ? 1 : 2,
POSTCODE => %POSTCODES(AAL),
IN_SAMPLE => exists %ZIPSAMPLE(ZIP),
IN_SAMPLE_2 => exists %ZIPSAMPLE(ZIP) ? 'yes' : 'no'
FIELD PREPROCESS SECTION
Use this section to perform additional formatting/processing on input fields. These statements will be performed right after the input record is read and before calculating the input derived fields.
FIELD POSTPROCESS SECTION
Use this section to perform additional formatting/processing on output fields. These statements will be performed after the aggregations and just prior to the output of the aggregated record.
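Neither section's format is specified above, so the following is a speculative sketch only, assuming both sections accept Perl statements that reference field names, in the style of the summary section:
field preprocess
PRODUCT = uc(PRODUCT);
field postprocess
SALES_VALUE = sprintf("%.2f", SALES_VALUE);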
SORT BY SECTION
Use this section to sort the input data by field(s). One or more sort fields can be specified. This section must appear after the input section and before the group by and output sections. The numeric option is used to specify a numeric sort, and the desc option is used to specify a descending sort order. The standard Unix sort command is used to perform the sort. The numeric option is translated to the -n Unix sort option; the desc option is translated to the -r Unix sort option. If the input data is pre sorted then the sort by section is not required (even if break processing is activated with a group by section declaration). The sort by section is not required when the hash option is specified.
- Format
-
sort by <field name> [ numeric ] [ desc ] [, ...]
- Examples
-
sort by
ACL,
AAL numeric desc
REJECT SECTION
Specify one or more filter expressions. Filter expression can consist of any valid Perl statement, and must evaluate to Boolean true or false (0 is false, anything else is true). It can contain input field names and macros. Each input record is evaluated against the filter(s). Records that evaluate to true on any one filter will be rejected and written to the reject file. The reject file is named scriptname.reject unless specified in the reject_file option.
- Format
-
reject <filter expression> [, ...]
- Examples
-
reject
!exists %ZIPSAMPLE(ZIP),
INV < 200
FILTER SECTION
Specify one or more filter expressions. Filter expression can consist of any valid Perl statement, and must evaluate to Boolean true or false. It can contain input field names and macros. Each input record is evaluated against the filter(s). Only records that evaluate to true on all filter statements will be processed; that is, records that evaluate to false on any one filter statement will be discarded.
- Format
-
filter <filter expression> [, ...]
- Examples
-
filter
exists %ZIPSAMPLE(ZIP),
ACL =~ /^356/,
ZIP eq '52101' or ZIP eq '52102'
GROUP BY SECTION
Use this section to activate break processing. Break processing is required to be able to use the aggregates in the output section. One or more fields can be specified -- the input data must be sorted on the group by fields, unless the hash option is used. A break will occur when any of the group field values changes. The group by section must appear after the sort by section and before the output section. The numeric option will cause leading zeros to be stripped from the input field. Group by on calculated input fields is useful when the hash option is in use because the input does not need to be pre-sorted.
- Format
-
group by <input field name> [ numeric | decimal | string ] [, ...]
- Examples
-
group by
AAL,
ACL numeric
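For unsorted input the same grouping can be done with the hash option and no sort by section. A sketch assuming the optional numeric modifier is passed as a parenthesised parameter (the exact modifier syntax may differ):
options
hash(numeric)
input section
AAL,
ACL,
REVENUE
group by
ACL numeric
output section
numeric ACL ACL,
decimal REVENUE sum REVENUE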
DEDUP ON SECTION
OUTPUT SECTION
This is where the output data stream format is specified. At least one output field must be defined here (unless the transfer option is specified). Each output field definition must end with a comma or new line (or both). Each field definition must begin with a type (numeric, decimal, string, date). The output field name can be the same as an input field name, unless the output field is a calculated field. Each output field name must be unique. This name will appear in the header record (if the header option is set). The aggregate expression must consist of at least the input field name.
The aggregates sum, min, max, avg, first, last, distinct, values_all, and values_uniq must be followed by an input field name. The aggregates count and flag must be followed by the * character. The aggregate serial must be followed by a number (indicating the serial number start).
A prefix of _ in the output field name causes that field to be transparent; these fields will not be output, their use is mainly for intermediate calculations. <input field name> can be any field declared in the input section, including calculated fields. This section is required unless the transfer option is specified.
- Format
-
output section <type> <output field name> <output expression> [, ...]
<type>
numeric, decimal, string, date [ (<datefmt>) ]
<output field name>
Each output field name must be unique. Output field name can be the same as the input field name, unless the output field is a calculated field. A _ prefix denotes a transparent field. Transparent fields will not be output; they are used for intermediate calculations.
<datefmt>
YYYYMMDD, YYMMDD, DDMMYY, DDMMMYY, DDMMYYYY, DD/MM/YY, DD/MM/YYYY, MMDDYY, MMDDYYYY, MM/DD/YY, MM/DD/YYYY
<output expression>
<input field name>
|
<aggregate> <input field name> [ where <condition expression> ]
|
serial <start num> [ where <condition expression> ]
|
count * [ where <condition expression> ]
|
flag * [ where <condition expression> ]
|
= <calculation expression> [ where <condition expression> ]
<aggregate>
sum | maximum | max | minimum | min | avg | mean | first | last | distinct
| sum_distinct | avg_distinct | count_distinct
| median | variance | stddev | range | mode
| values_all [ (<delim>) ] | values_uniq [ (<delim>) ]
<input field name>
Any field specified in the input section.
<calculation expression>
Any valid Perl expression, including input and output field names, and Pequel macros. This expression can consist of numeric calculations, using arithmetic operators (+, *, -, etc.) and functions (abs, int, rand, sqrt, etc.), and string calculations, using string operators (e.g. . for concatenation) and functions (uc, lc, substr, length, etc.).
<condition expression>
Any valid Perl expression, including input and output field names, and Pequel macros, that evaluates to true (non-zero) or false (zero).
Aggregates
- sum <input field>
-
Accumulate the total for all values in the group. Output type must be numeric, decimal or date.
- sum_distinct <input field>
-
Accumulate the total for distinct values only in the group. Output type must be numeric, decimal or date.
- maximum | max <input field>
-
Output the maximum value in the group. Output type must be numeric, decimal or date.
- minimum | min <input field>
-
Output the minimum value in the group. Output type must be numeric, decimal or date.
- avg | mean <input field>
-
Output the average value in the group. Output type must be numeric, decimal or date.
- avg_distinct <input field>
-
Output the average value for distinct values only in the group. Output type must be numeric, decimal or date.
- first <input field>
-
Output the first value in the group.
- last <input field>
-
Output the last value in the group.
- count_distinct | distinct <input field>
-
Output the count of unique values in the group. Output type must be numeric.
- median <input field>
-
The median is the middle of a distribution: half the scores are above the median and half are below the median. When there is an odd number of values, the median is simply the middle number. When there is an even number of values, the median is the mean of the two middle numbers. Output type must be numeric.
- variance <input field>
-
Variance is calculated as follows: (sum_squares / count) - (mean ** 2), where sum_squares is the sum of each value in the distribution squared (** 2); count is the number of values in the distribution; mean is discussed above. Output type must be numeric. (A worked sketch using this formula appears after the examples below.)
- stddev <input field>
-
Stddev is calculated as the square-root of variance. Output type must be numeric.
- range <input field>
-
The range is the maximum value minus the minimum value in a distribution. Output type must be numeric.
- mode <input field>
-
The mode is the most frequently occurring score in a distribution and is used as a measure of central tendency. A distribution may have more than one mode, in which case a space delimited list is returned. Any output type is valid.
- values_all <input field>
-
Output the list of all values in the group. The specified delimiter delimits the list. If not specified then the default_list_delimiter specified in options is used.
- values_uniq <input field>
-
Output the list of unique values in the group. The specified delimiter delimits the list. If not specified then the default_list_delimiter specified in options is used.
- serial <n>
-
Output the next serial number starting from n. The serial number will be incremented by one for each successive output record. Output type must be numeric.
- count *
-
Output the count of records in the group. Output type must be numeric.
- flag *
-
Output 1 or 0 depending on the result of the where condition clause. If no where clause is specified then the output value is set to 1. The output will be set to 1 if the where condition evaluates to true at least once for all records within the group. Output type must be numeric.
- corr <input field>
-
New in v2.5. Returns the coefficient of correlation of a set of number pairs.
- covar_pop <input field>
-
New in v2.5. Returns the population covariance of a set of number pairs.
- covar_samp <input field>
-
New in v2.5. Returns the sample covariance of a set of number pairs.
- cume_dist <input field>
-
New in v2.5. Calculates the cumulative distribution of a value in a group of values.
- dense_rank <input field>
-
New in v2.5. Computes the rank of a row in an ordered group of rows.
- rank <input field>
-
New in v2.5. Calculates the rank of a value in a group of values.
- = <calculation expression>
-
Calculation expression follows. Use this to create output fields that are based on some calculation expression. The calculation expression can consist of any valid Perl statement, and can contain input field names, output field names and macros.
- Examples
-
output section
numeric AAL AAL,
string _HELLO = 'HELLO',
string _WORLD = 'WORLD',
string HELLO_WORLD = _HELLO . ' ' . _WORLD,
decimal _REVENUE sum REVENUE,
decimal _DISCOUNT sum DISCOUNT,
decimal INVOICE = _REVENUE + _DISCOUNT
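As a worked illustration of the variance and stddev formulas described above, the same statistics can be computed by hand using transparent fields and the (sum_squares / count) - (mean ** 2) identity. A sketch over a hypothetical REVENUE field; in practice the built-in variance and stddev aggregates should be used instead:
input section
ACL,
REVENUE,
REVENUE_SQ => REVENUE * REVENUE
sort by
ACL
group by
ACL
output section
string ACL ACL,
numeric _N count *,
decimal _SUM sum REVENUE,
decimal _SUMSQ sum REVENUE_SQ,
decimal VARIANCE = _N == 0 ? 0 : (_SUMSQ / _N) - (_SUM / _N) ** 2,
decimal STDDEV = sqrt(VARIANCE)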
HAVING SECTION
The having section is applied after the grouping performed by group by, for filtering groups based on the aggregate values. Break processing must be activated using the group by section. The having section must appear after the output section. Specify one or more filter expressions. Filter expression can consist of any valid Perl statement, and must evaluate to Boolean true or false. It can contain input field names, output field names and macros. Only groups that evaluate to true on all filter statements will be output; that is, groups that evaluate to false on any one filter statement will be discarded. Each filter statement must end with a comma and/or new line.
- Format
-
having <filter expression> [, ...]
- Examples
-
having
SAMPLE == 1,
MONTH_1_COUNT > 2 and MONTH_2_COUNT > 2
SUMMARY SECTION
This section contains any Perl code and will be executed once after all input records have been processed. Input and output field names, and macros, can be used here. This section is mostly relevant when group by is omitted, so that a 'group all' is in effect. The suppress_output option should also be used. If the script contains a group by section and more than one group of records is produced, only the last group's values will appear in the summary section.
- Format
-
summary section < Perl code >
- Examples
-
summary section
print "*** Summary Report ***";
print "Total number of Products: ", sprintf("%12d", COUNT_PRODUCTS);
print "Total number of Locations: ", sprintf("%12d", COUNT_LOCATIONS);
print "*** End of report ***";
GENERATED PROGRAM OUTLINE
Process Outline:
Open Input Stream
Load/Connect Tables
Read Next Input Record
Output Aggregated Record If Grouping Key Changes
Calculate Derived Input Fields
Perform Aggregations
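The following hand-written Perl fragment sketches the break-processing loop that this outline describes. It is illustrative only -- the real generated code uses the script's field layout, optional C readsplit() input, and generated variable names:

#!/usr/bin/perl
use strict;
use warnings;

my $prev_key;    # previous grouping key value
my $sum = 0;     # a single sum aggregate bucket

while (my $line = <STDIN>) {               # Read Next Input Record
    chomp $line;
    my ($key, $value) = split /\|/, $line;
    # Output Aggregated Record If Grouping Key Changes
    if (defined $prev_key && $key ne $prev_key) {
        print "$prev_key|$sum|\n";
        $sum = 0;
    }
    $prev_key = $key;
    # Calculate Derived Input Fields, then Perform Aggregations
    $sum += $value;                        # e.g. sum REVENUE
}
print "$prev_key|$sum|\n" if defined $prev_key;   # flush the final group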