NAME
AI::MXNet::Gluon::NN::Sequential
DESCRIPTION
Stacks `Block`s sequentially.
Example:
    my $net = nn->Sequential();
    # use net's name_scope to give child Blocks appropriate names.
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });
The `add` method adds block(s) on top of the stack.
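Once child blocks have been added, the network can be initialized and called
on data. A minimal sketch continuing the example above (it assumes AI::MXNet
exports the `mx` shorthand and AI::MXNet::Gluon::NN exports `nn`, as used
throughout this document):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    $net->initialize(ctx => mx->cpu(0));   # allocate and initialize all child parameters
    my $x = mx->nd->ones([4, 8]);          # a batch of 4 samples with 8 features each
    my $y = $net->($x);                    # forward pass; output of the final Dense(20) layer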
NAME
AI::MXNet::Gluon::NN::HybridSequential
DESCRIPTION
Stacks `HybridBlock`s sequentially.
Example:
    my $net = nn->HybridSequential();
    # use net's name_scope to give child Blocks appropriate names.
    $net->name_scope(sub {
        $net->add(nn->Dense(10, activation=>'relu'));
        $net->add(nn->Dense(20));
    });
The `add` method adds block(s) on top of the stack.
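The main difference from `Sequential` is that a `HybridSequential` network can
be compiled with `hybridize` after initialization. A minimal sketch continuing
the example above, assuming the same `mx`/`nn` imports:

    $net->initialize(ctx => mx->cpu(0));
    $net->hybridize();                     # cache and reuse the symbolic graph of the child HybridBlocks
    my $y = $net->(mx->nd->ones([4, 8]));  # subsequent calls run the cached graph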
NAME
AI::MXNet::Gluon::NN::Dense
DESCRIPTION
Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, weight) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `weight` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).
Note: the input must be a tensor with rank 2. Use `flatten` to convert it
to rank 2 manually if necessary.
Parameters
----------
units : int
Dimensionality of the output space.
activation : str
Activation function to use. See help on `Activation` layer.
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
flatten : bool, default true
Whether the input tensor should be flattened.
If true, all but the first axis of input data are collapsed together.
If false, all but the last axis of input data are kept the same, and the transformation
applies on the last axis.
weight_initializer : str or `Initializer`
Initializer for the `kernel` weights matrix.
bias_initializer: str or `Initializer`
Initializer for the bias vector.
in_units : int, optional
Size of the input data. If not specified, initialization will be
deferred to the first time `forward` is called and `in_units`
will be inferred from the shape of input data.
prefix : str or None
See document of `Block`.
params : ParameterDict or None
See document of `Block`.
If `flatten` is set to true, then the shapes are:
Input shape:
An N-D input with shape
`(batch_size, x1, x2, ..., xn) with x1 * x2 * ... * xn equal to in_units`.
Output shape:
The output would have shape `(batch_size, units)`.
If `flatten` is set to false, then the shapes are:
Input shape:
An N-D input with shape
`(x1, x2, ..., xn, in_units)`.
Output shape:
The output would have shape `(x1, x2, ..., xn, units)`.
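Example
-------
A minimal usage sketch (it assumes AI::MXNet exports the `mx` shorthand and
AI::MXNet::Gluon::NN exports `nn`, as used in the examples in this document):

    use AI::MXNet qw(mx);
    use AI::MXNet::Gluon::NN qw(nn);

    # 16 output units with ReLU activation; in_units may also be omitted and inferred.
    my $dense = nn->Dense(16, activation => 'relu', in_units => 8);
    $dense->initialize(ctx => mx->cpu(0));

    my $x = mx->nd->ones([4, 8]);   # a batch of 4 samples with 8 features
    my $y = $dense->($x);           # output has shape (4, 16)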
NAME
AI::MXNet::Gluon::NN::Dropout
DESCRIPTION
Applies Dropout to the input.
Dropout consists of randomly setting a fraction `rate` of the input units
to 0 at each update during training, which helps prevent overfitting.
Parameters
----------
rate : float
Fraction of the input units to drop. Must be a number between 0 and 1.
Input shape:
Arbitrary.
Output shape:
Same shape as input.
References
----------
`Dropout: A Simple Way to Prevent Neural Networks from Overfitting
<http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf>`_
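Example
-------
A minimal usage sketch (assuming the `mx`/`nn` imports used above). Note that
Dropout only takes effect during a training forward pass and is an identity
mapping at inference time:

    my $drop = nn->Dropout(rate => 0.5);
    $drop->initialize(ctx => mx->cpu(0));
    my $x = mx->nd->ones([2, 4]);

    # At inference time this is an identity mapping; during training
    # (e.g. inside mx->autograd->record) roughly `rate` of the elements are
    # zeroed and the remaining ones are scaled by 1/(1 - rate).
    my $y = $drop->($x);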
NAME
AI::MXNet::Gluon::NN::BatchNorm
DESCRIPTION
Batch normalization layer (Ioffe and Szegedy, 2015).
Normalizes the input at each batch, i.e. applies a transformation
that maintains the mean activation close to 0 and the activation
standard deviation close to 1.
Parameters
----------
axis : int, default 1
The axis that should be normalized. This is typically the channels
(C) axis. For instance, after a `Conv2D` layer with `layout='NCHW'`,
set `axis=1` in `BatchNorm`. If `layout='NHWC'`, then set `axis=3`.
momentum: float, default 0.9
Momentum for the moving average.
epsilon: float, default 1e-5
Small float added to variance to avoid dividing by zero.
center: bool, default True
If True, add offset of `beta` to normalized tensor.
If False, `beta` is ignored.
scale: bool, default True
If True, multiply by `gamma`. If False, `gamma` is not used.
When the next layer is linear (or e.g. a `relu` activation),
this can be disabled, since the scaling will be done by the next layer.
beta_initializer: str or `Initializer`, default 'zeros'
Initializer for the beta weight.
gamma_initializer: str or `Initializer`, default 'ones'
Initializer for the gamma weight.
moving_mean_initializer: str or `Initializer`, default 'zeros'
Initializer for the moving mean.
moving_variance_initializer: str or `Initializer`, default 'ones'
Initializer for the moving variance.
in_channels : int, default 0
Number of channels (feature maps) in input data. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
Input shape:
Arbitrary.
Output shape:
Same shape as input.
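Example
-------
A minimal usage sketch (assuming the `mx`/`nn` imports used above), normalizing
the channel axis of an NCHW input:

    my $bn = nn->BatchNorm(in_channels => 3);   # axis 1 (the C axis of NCHW) is the default
    $bn->initialize(ctx => mx->cpu(0));
    my $x = mx->nd->ones([2, 3, 4, 4]);         # NCHW input with 3 channels
    my $y = $bn->($x);                          # same shape as the input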
NAME
AI::MXNet::Gluon::NN::Embedding
DESCRIPTION
Turns non-negative integers (indexes/tokens) into dense vectors
of fixed size, e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]].
Parameters
----------
input_dim : int
Size of the vocabulary, i.e. maximum integer index + 1.
output_dim : int
Dimension of the dense embedding.
dtype : str or np.dtype, default 'float32'
Data type of output embeddings.
weight_initializer : Initializer
Initializer for the `embeddings` matrix.
sparse_grad: bool
If True, gradient w.r.t. weight will be a 'row_sparse' NDArray.
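Example
-------
A minimal usage sketch (assuming the `mx`/`nn` imports used above):

    my $embed = nn->Embedding(input_dim => 1000, output_dim => 4);
    $embed->initialize(ctx => mx->cpu(0));
    my $tokens  = mx->nd->array([[4, 20]]);     # integer token indices, shape (1, 2)
    my $vectors = $embed->($tokens);            # dense vectors, shape (1, 2, 4)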
NAME
AI::MXNet::Gluon::NN::Flatten
DESCRIPTION
Flattens the input to two dimensions.
Input shape:
Arbitrary shape `(N, a, b, c, ...)`
Output shape:
2D tensor with shape: `(N, a*b*c...)`
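Example
-------
A minimal usage sketch (assuming the `mx`/`nn` imports used above):

    my $flatten = nn->Flatten();     # no parameters, so no initialization is required
    my $x = mx->nd->ones([2, 3, 4]);
    my $y = $flatten->($x);          # shape (2, 12)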
NAME
AI::MXNet::Gluon::NN::InstanceNorm - Applies instance normalization to the n-dimensional input array.
DESCRIPTION
Applies instance normalization to the n-dimensional input array.
This operator takes an n-dimensional input array (n > 2) and normalizes
the input using the following formula:

    out = (x - mean(x)) / sqrt(var(x) + epsilon) * gamma + beta

where the mean and variance are computed per sample over all axes except
the first (batch) axis and the given `axis`.
Parameters
----------
axis : int, default 1
The axis that will be excluded in the normalization process. This is typically the channels
(C) axis. For instance, after a `Conv2D` layer with `layout='NCHW'`,
set `axis=1` in `InstanceNorm`. If `layout='NHWC'`, then set `axis=3`. Data will be
normalized along axes excluding the first axis and the axis given.
epsilon: float, default 1e-5
Small float added to variance to avoid dividing by zero.
center: bool, default True
If True, add offset of `beta` to normalized tensor.
If False, `beta` is ignored.
scale: bool, default True
If True, multiply by `gamma`. If False, `gamma` is not used.
When the next layer is linear (or e.g. a `relu` activation),
this can be disabled, since the scaling will be done by the next layer.
beta_initializer: str or `Initializer`, default 'zeros'
Initializer for the beta weight.
gamma_initializer: str or `Initializer`, default 'ones'
Initializer for the gamma weight.
in_channels : int, default 0
Number of channels (feature maps) in input data. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
References
----------
Instance Normalization: The Missing Ingredient for Fast Stylization
<https://arxiv.org/abs/1607.08022>
Examples
--------
>>> # Input of shape (2,1,2)
>>> $x = mx->nd->array([[[ 1.1, 2.2]],
... [[ 3.3, 4.4]]]);
>>> $layer = nn->InstanceNorm();
>>> $layer->initialize(ctx=>mx->cpu(0));
>>> $layer->($x)
[[[-0.99998355 0.99998331]]
[[-0.99998319 0.99998361]]]
<NDArray 2x1x2 @cpu(0)>
NAME
AI::MXNet::Gluon::NN::LayerNorm - Applies layer normalization to the n-dimensional input array.
DESCRIPTION
Applies layer normalization to the n-dimensional input array.
This operator takes an n-dimensional input array and normalizes
the input along the given axis:

    out = (x - mean(x, axis)) / sqrt(var(x, axis) + epsilon) * gamma + beta
Parameters
----------
axis : int, default -1
The axis that should be normalized. This is typically the axis of the channels.
epsilon: float, default 1e-5
Small float added to variance to avoid dividing by zero.
center: bool, default True
If True, add offset of `beta` to normalized tensor.
If False, `beta` is ignored.
scale: bool, default True
If True, multiply by `gamma`. If False, `gamma` is not used.
beta_initializer: str or `Initializer`, default 'zeros'
Initializer for the beta weight.
gamma_initializer: str or `Initializer`, default 'ones'
Initializer for the gamma weight.
in_channels : int, default 0
Number of channels (feature maps) in input data. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
References
----------
`Layer Normalization
<https://arxiv.org/pdf/1607.06450.pdf>`_
Examples
--------
>>> # Input of shape (2, 5)
>>> $x = mx->nd->array([[1, 2, 3, 4, 5], [1, 1, 2, 2, 2]]);
>>> # Layer normalization is calculated with the above formula
>>> $layer = nn->LayerNorm();
>>> $layer->initialize(ctx=>mx->cpu(0));
>>> $layer->($x)
[[-1.41421 -0.707105 0. 0.707105 1.41421 ]
[-1.2247195 -1.2247195 0.81647956 0.81647956 0.81647956]]
<NDArray 2x5 @cpu(0)>
NAME
AI::MXNet::Gluon::NN::Lambda - Wraps an operator or an expression as a Block object.
DESCRIPTION
Wraps an operator or an expression as a Block object.
Parameters
----------
function : str or sub
Function used in lambda must be one of the following:
1) the name of an operator that is available in ndarray. For example
$block = nn->Lambda('tanh')
2) a sub. For example
$block = nn->Lambda(sub { my $x = shift; mx->nd->LeakyReLU($x, slope=>0.1) });
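A short usage sketch (assuming the `mx`/`nn` imports used above):

    my $tanh_block = nn->Lambda('tanh');
    my $y = $tanh_block->(mx->nd->array([-1, 0, 1]));   # element-wise tanh via the ndarray operator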
NAME
AI::MXNet::Gluon::NN::HybridLambda - Wraps an operator or an expression as a HybridBlock object.
DESCRIPTION
Wraps an operator or an expression as a HybridBlock object.
Parameters
----------
function : str or sub
Function used in lambda must be one of the following:
1) the name of an operator that is available in symbol and ndarray. For example
$block = nn->HybridLambda('tanh')
2) a sub. For example
$block = nn->HybridLambda(sub { my ($F, $x) = @_; $F->LeakyReLU($x, slope=>0.1) });
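A short usage sketch (assuming the `mx`/`nn` imports used above). Unlike
`Lambda`, a `HybridLambda` block can be hybridized:

    my $block = nn->HybridLambda('tanh');
    $block->hybridize();                                 # subsequent calls go through the symbolic API
    my $y = $block->(mx->nd->array([-1, 0, 1]));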