NAME

char_lstm.pl - Example of training a character-level LSTM RNN on the Tiny Shakespeare dataset
               using the high-level RNN interface, with optional inference-time sampling
               (the RNN generates Shakespeare-like text)

SYNOPSIS

--num-layers     number of stacked RNN layers, default=2
--num-hidden     hidden layer size, default=256
--num-embed      embed size, default=10
--num-seq        sequence size, default=60
--gpus           list of GPUs to run on, e.g. 0 or 0,2,5; empty means use the CPU.
                 Increase the batch size when using multiple GPUs for best performance.
--kv-store       key-value store type, default='device'
--num-epochs     max num of epochs, default=25
--lr             initial learning rate, default=0.01
--optimizer      the optimizer type, default='adam'
--mom            momentum for SGD, default=0.0
--wd             weight decay for SGD, default=0.00001
--batch-size     the batch size, default=32
--bidirectional  use bidirectional cell, default false (0)
--disp-batches   show progress for every n batches, default=50
--chkp-prefix    prefix for checkpoint files, default='lstm_'
--cell-mode      RNN cell mode (LSTM, GRU or RNN), default='LSTM'
--sample-size    size of the sample text inferred after each epoch, default=10000
--chkp-epoch     save a checkpoint after this many epochs, default=1 (save after every epoch)
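
A typical invocation might look like the sketch below. The flag names follow the option
list above; the specific values chosen here, and the assumption that the script and its
Perl/MXNet dependencies are installed locally, are illustrative only.

```shell
# Hypothetical example: train a 3-layer LSTM on GPU 0 with a larger
# batch size, sampling 5000 characters of generated text after each epoch.
perl char_lstm.pl --num-layers 3 --num-hidden 512 \
                  --gpus 0 --batch-size 64 \
                  --cell-mode LSTM --sample-size 5000
```

Flags not given on the command line keep the defaults listed above.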