NAME

AI::MXNet::Gluon::NN::Sequential

DESCRIPTION

Stacks `Block`s sequentially.

Example::

    my $net = nn->Sequential();
    # use net's name_scope to give child Blocks appropriate names.
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });

The `add` method adds a block on top of the stack.
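
The following sketch runs a forward pass end to end. It assumes the usual
`mx` and `nn` exports from AI::MXNet and AI::MXNet::Gluon::NN; the input
shape is made up for illustration:

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $net = nn->Sequential();
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });
    $net->initialize();                     # parameter shapes inferred on first call
    my $x = mx->nd->ones([4, 30]);          # batch of 4 samples, 30 features each
    my $y = $net->($x);                     # a Block can be called like a code ref
    print join(',', @{ $y->shape }), "\n";  # prints: 4,20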

NAME

AI::MXNet::Gluon::NN::HybridSequential

DESCRIPTION

Stacks `HybridBlock`s sequentially.

Example::

    my $net = nn->HybridSequential();
    # use net's name_scope to give child Blocks appropriate names.
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });

The `add` method adds a block on top of the stack.
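
Unlike a plain `Sequential`, a `HybridSequential` can be compiled into a
static graph with `hybridize`. A minimal sketch, under the same assumed
exports as above:

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $net = nn->HybridSequential();
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });
    $net->initialize();
    $net->hybridize();                      # cache a symbolic graph for speed
    my $y = $net->(mx->nd->ones([4, 30]));  # later calls reuse the cached graph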

NAME

AI::MXNet::Gluon::NN::Dense

DESCRIPTION

Just your regular densely-connected NN layer.

`Dense` implements the operation:
`output = activation(dot(input, weight) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `weight` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).

Note: the operation is applied to rank-2 tensors. With the default
`flatten` setting, higher-rank input is collapsed to rank 2 automatically;
alternatively, apply a `Flatten` layer manually first.

Parameters
----------
units : int
    Dimensionality of the output space.
activation : str
    Activation function to use. See help on `Activation` layer.
    If you don't specify anything, no activation is applied
    (i.e. "linear" activation: `a(x) = x`).
use_bias : bool
    Whether the layer uses a bias vector.
flatten : bool, default true
    Whether the input tensor should be flattened.
    If true, all but the first axis of input data are collapsed together.
    If false, all but the last axis of input data are kept the same, and the transformation
    applies on the last axis.
weight_initializer : str or `Initializer`
    Initializer for the `kernel` weights matrix.
bias_initializer: str or `Initializer`
    Initializer for the bias vector.
in_units : int, optional
    Size of the input data. If not specified, initialization will be
    deferred to the first time `forward` is called and `in_units`
    will be inferred from the shape of input data.
prefix : str or None
    See document of `Block`.
params : ParameterDict or None
    See document of `Block`.

If `flatten` is true, the shapes are:
Input shape:
    An N-D input with shape
    `(batch_size, x1, x2, ..., xn)`, with `x1 * x2 * ... * xn` equal to `in_units`.

Output shape:
    The output has shape `(batch_size, units)`.

If `flatten` is false, the shapes are:
Input shape:
    An N-D input with shape
    `(x1, x2, ..., xn, in_units)`.

Output shape:
    The output has shape `(x1, x2, ..., xn, units)`.
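
As an illustration of the two `flatten` modes, here is a hedged sketch
(same assumed exports as above; the shapes are arbitrary):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $x = mx->nd->ones([2, 3, 4]);        # rank-3 input

    my $dense = nn->Dense(5);               # flatten defaults to true
    $dense->initialize();                   # in_units inferred as 3*4 = 12
    print join(',', @{ $dense->($x)->shape }), "\n";  # prints: 2,5

    my $last = nn->Dense(5, flatten=>0);    # transform the last axis only
    $last->initialize();                    # in_units inferred as 4
    print join(',', @{ $last->($x)->shape }), "\n";   # prints: 2,3,5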

NAME

AI::MXNet::Gluon::NN::Activation

DESCRIPTION

Applies an activation function to input.

Parameters
----------
activation : str
    Name of activation function to use.
    See mxnet.ndarray.Activation for available choices.

Input shape:
    Arbitrary.

Output shape:
    Same shape as input.
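
A minimal sketch, under the same assumed exports:

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $relu = nn->Activation(activation=>'relu');
    $relu->initialize();                    # no parameters, but keeps the API uniform
    my $y = $relu->(mx->nd->array([[-1, 2]]));
    print $y->aspdl;                        # negative entries clamped to 0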

NAME

AI::MXNet::Gluon::NN::Dropout

DESCRIPTION

Applies Dropout to the input.

Dropout consists of randomly setting a fraction `rate` of input units
to 0 at each update during training time, which helps prevent overfitting.

Parameters
----------
rate : float
    Fraction of the input units to drop. Must be a number between 0 and 1.


Input shape:
    Arbitrary.

Output shape:
    Same shape as input.
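
Note that dropout is only active in training mode; at prediction time the
layer is an identity map. A hedged sketch (assuming the `autograd` export
from AI::MXNet::AutoGrad in addition to `mx` and `nn`):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);
    use AI::MXNet::AutoGrad qw(autograd);

    my $drop = nn->Dropout(rate=>0.5);
    $drop->initialize();
    my $x = mx->nd->ones([2, 4]);
    my $y_pred = $drop->($x);               # identity outside a training scope
    my $y_train;
    autograd->record(sub {
        $y_train = $drop->($x);             # ~half the units zeroed, the rest
    });                                     # scaled by 1/(1-rate) = 2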

References
----------
    `Dropout: A Simple Way to Prevent Neural Networks from Overfitting
    <http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf>`_

NAME

AI::MXNet::Gluon::NN::BatchNorm

DESCRIPTION

Batch normalization layer (Ioffe and Szegedy, 2015).
Normalizes the input at each batch, i.e. applies a transformation
that maintains the mean activation close to 0 and the activation
standard deviation close to 1.

Parameters
----------
axis : int, default 1
    The axis that should be normalized. This is typically the channels
    (C) axis. For instance, after a `Conv2D` layer with `layout='NCHW'`,
    set `axis=1` in `BatchNorm`. If `layout='NHWC'`, then set `axis=3`.
momentum: float, default 0.9
    Momentum for the moving average.
epsilon: float, default 1e-5
    Small float added to variance to avoid dividing by zero.
center: bool, default True
    If True, add offset of `beta` to normalized tensor.
    If False, `beta` is ignored.
scale: bool, default True
    If True, multiply by `gamma`. If False, `gamma` is not used.
    When the next layer is linear (this also holds for e.g. `relu`),
    this can be disabled, since the scaling will be done by the next layer.
beta_initializer: str or `Initializer`, default 'zeros'
    Initializer for the beta weight.
gamma_initializer: str or `Initializer`, default 'ones'
    Initializer for the gamma weight.
moving_mean_initializer: str or `Initializer`, default 'zeros'
    Initializer for the moving mean.
moving_variance_initializer: str or `Initializer`, default 'ones'
    Initializer for the moving variance.
in_channels : int, default 0
    Number of channels (feature maps) in input data. If not specified,
    initialization will be deferred to the first time `forward` is called
    and `in_channels` will be inferred from the shape of input data.


Input shape:
    Arbitrary.

Output shape:
    Same shape as input.
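
A hedged sketch of normalizing the channel axis of an NCHW batch (same
assumed exports as above; the shape is arbitrary):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $bn = nn->BatchNorm();               # axis=1, the C axis of NCHW input
    $bn->initialize();
    my $x = mx->nd->ones([8, 3, 32, 32]);
    my $y = $bn->($x);                      # outside a training scope the moving
                                            # statistics (initially mean 0, var 1)
                                            # are used
    print join(',', @{ $y->shape }), "\n";  # prints: 8,3,32,32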

NAME

AI::MXNet::Gluon::NN::LeakyReLU

DESCRIPTION

Leaky version of a Rectified Linear Unit.

It allows a small gradient when the unit is not active:

    `f(x) = alpha * x for x < 0`,
    `f(x) = x for x >= 0`.

Parameters
----------
alpha : float
    Slope coefficient for the negative half axis. Must be >= 0.


Input shape:
    Arbitrary.

Output shape:
    Same shape as input.
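
A minimal sketch, under the same assumed exports:

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $leaky = nn->LeakyReLU(alpha=>0.1);
    $leaky->initialize();
    my $y = $leaky->(mx->nd->array([[-10, 5]]));
    print $y->aspdl;                        # [[-1 5]]: negatives scaled by alpha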

NAME

AI::MXNet::Gluon::NN::Embedding

DESCRIPTION

Turns non-negative integers (indices/tokens) into dense vectors
of fixed size, e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]`.


Parameters
----------
input_dim : int
    Size of the vocabulary, i.e. maximum integer index + 1.
output_dim : int
    Dimension of the dense embedding.
dtype : str or np.dtype, default 'float32'
    Data type of output embeddings.
weight_initializer : Initializer
    Initializer for the `embeddings` matrix.


Input shape:
    2D tensor with shape: `(N, M)`.

Output shape:
    3D tensor with shape: `(N, M, output_dim)`.
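
A hedged sketch of the index-to-vector lookup (same assumed exports; the
vocabulary size is made up, and the vectors are randomly initialized):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $embed = nn->Embedding(input_dim=>30, output_dim=>2);
    $embed->initialize();
    my $idx = mx->nd->array([[4], [20]]);   # (N, M) = (2, 1) indices
    my $vec = $embed->($idx);
    print join(',', @{ $vec->shape }), "\n";  # prints: 2,1,2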

NAME

AI::MXNet::Gluon::NN::Flatten

DESCRIPTION

Flattens the input to two dimensions.

Input shape:
    Arbitrary shape `(N, a, b, c, ...)`

Output shape:
    2D tensor with shape: `(N, a*b*c...)`
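
A minimal sketch, under the same assumed exports:

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    my $flat = nn->Flatten();
    $flat->initialize();                    # no parameters to initialize
    my $x = mx->nd->ones([2, 3, 4, 5]);
    print join(',', @{ $flat->($x)->shape }), "\n";  # prints: 2,60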