NAME
AI::MXNet::Gluon::NN::Conv
|
DESCRIPTION
Abstract nD convolution layer (private, used as implementation base).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of outputs.
If `use_bias` is `True`, a bias vector is created and added to the outputs.
Finally, if `activation` is not `None`,
it is applied to the outputs as well.
Parameters
----------
channels : int
The dimensionality of the output space
i.e. the number of output channels in the convolution.
kernel_size : int or tuple/list of n ints
Specifies the dimensions of the convolution window.
strides : int or tuple/list of n ints
Specifies the strides of the convolution.
padding : int or tuple/list of n ints
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
dilation : int or tuple/list of n ints
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two convolution
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str
Dimension ordering of data and weight. Can be 'NCW', 'NWC', 'NCHW',
'NHWC', 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stand for
batch, channel, height, width and depth dimensions respectively.
Convolution is performed over the 'D', 'H', and 'W' dimensions.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
|
NAME
AI::MXNet::Gluon::NN::Conv1D
|
DESCRIPTION
1D convolution layer (e.g. temporal convolution).
This layer creates a convolution kernel that is convolved
with the layer input over a single spatial (or temporal) dimension
to produce a tensor of outputs.
If `use_bias` is True, a bias vector is created and added to the outputs.
Finally, if `activation` is not `None`,
it is applied to the outputs as well.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 1 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 1 int
Specifies the strides of the convolution.
padding : int or tuple/list of 1 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
dilation : int or tuple/list of 1 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCW'
Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc.
'N', 'C', 'W' stand for batch, channel, and width (time) dimensions
respectively. Convolution is applied on the 'W' dimension.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 3D array of shape
(batch_size, in_channels, width) if `layout` is `NCW`.
Output shape:
This depends on the `layout` parameter. Output is 3D array of shape
(batch_size, channels, out_width) if `layout` is `NCW`.
out_width is calculated as::
out_width = floor((width+2*padding-dilation*(kernel_size-1)-1)/strides)+1
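Examples
--------
A minimal usage sketch (illustrative shapes and hyper-parameters only,
assuming the usual `use AI::MXNet qw(mx)` and `use AI::MXNet::Gluon::NN qw(nn)` imports):
>>> $conv = nn->Conv1D(channels => 16, kernel_size => 3, in_channels => 4);
>>> $conv->initialize();
>>> $x = mx->nd->random->normal(shape => [8, 4, 100]);
>>> $y = $conv->($x);   # out_width = floor((100-2-1)/1)+1 = 98, so shape is [8, 16, 98]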
|
NAME
AI::MXNet::Gluon::NN::Conv2D
|
DESCRIPTION
2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If `use_bias` is True,
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 2 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 2 int
Specifies the strides of the convolution.
padding : int or tuple/list of 2 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
dilation : int or tuple/list of 2 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCHW'
Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc.
'N', 'C', 'H', 'W' stand for batch, channel, height, and width
dimensions respectively. Convolution is applied on the 'H' and
'W' dimensions.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 4D array of shape
(batch_size, in_channels, height, width) if `layout` is `NCHW`.
Output shape:
This depends on the `layout` parameter. Output is 4D array of shape
(batch_size, channels, out_height, out_width) if `layout` is `NCHW`.
out_height and out_width are calculated as::
out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/strides[0])+1
out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/strides[1])+1
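Examples
--------
A minimal usage sketch (illustrative shapes only; assumes the same `mx` and `nn`
imports as elsewhere in these docs):
>>> $conv = nn->Conv2D(channels => 32, kernel_size => [3, 3]);
>>> $conv->initialize();
>>> $x = mx->nd->random->normal(shape => [1, 3, 224, 224]);
>>> $y = $conv->($x);   # in_channels=3 is inferred at first forward; output shape is [1, 32, 222, 222]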
|
NAME
AI::MXNet::Gluon::NN::Conv3D
|
DESCRIPTION
3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If `use_bias` is `True`,
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 3 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 3 int
Specifies the strides of the convolution.
padding : int or tuple/list of 3 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
dilation : int or tuple/list of 3 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCDHW'
Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc.
'N', 'C', 'H', 'W', 'D' stand for batch, channel, height, width and
depth dimensions respectively. Convolution is applied on the 'D',
'H' and 'W' dimensions.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 5D array of shape
(batch_size, in_channels, depth, height, width) if `layout` is `NCDHW`.
Output shape:
This depends on the `layout` parameter. Output is 5D array of shape
(batch_size, channels, out_depth, out_height, out_width) if `layout` is
`NCDHW`.
out_depth, out_height and out_width are calculated as::
out_depth = floor((depth+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/strides[0])+1
out_height = floor((height+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/strides[1])+1
out_width = floor((width+2*padding[2]-dilation[2]*(kernel_size[2]-1)-1)/strides[2])+1
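Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $conv = nn->Conv3D(channels => 8, kernel_size => [3, 3, 3]);
>>> $conv->initialize();
>>> $x = mx->nd->random->normal(shape => [1, 1, 16, 32, 32]);
>>> $y = $conv->($x);   # output shape is [1, 8, 14, 30, 30]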
|
NAME
AI::MXNet::Gluon::NN::Conv1DTranspose
|
DESCRIPTION
Transposed 1D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to
use a transformation going in the opposite direction of a normal
convolution, i.e., from something that has the shape of the output of
some convolution to something that has the shape of its input while
maintaining a connectivity pattern that is compatible with said
convolution.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 1 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 1 int
Specifies the strides of the convolution.
padding : int or tuple/list of 1 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
output_padding : int or tuple/list of 1 int
Controls the amount of implicit zero-padding on both sides of the
output for `output_padding` number of points for each dimension.
dilation : int or tuple/list of 1 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCW'
Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc.
'N', 'C', 'W' stand for batch, channel, and width (time) dimensions
respectively. Convolution is applied on the 'W' dimension.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 3D array of shape
(batch_size, in_channels, width) if `layout` is `NCW`.
Output shape:
This depends on the `layout` parameter. Output is 3D array of shape
(batch_size, channels, out_width) if `layout` is `NCW`.
out_width is calculated as::
out_width = (width-1)*strides-2*padding+kernel_size+output_padding
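Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $deconv = nn->Conv1DTranspose(channels => 16, kernel_size => 3, strides => 2);
>>> $deconv->initialize();
>>> $x = mx->nd->random->normal(shape => [8, 32, 50]);
>>> $y = $deconv->($x);   # out_width = (50-1)*2+3 = 101, so shape is [8, 16, 101]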
|
NAME
AI::MXNet::Gluon::NN::Conv2DTranspose
|
DESCRIPTION
Transposed 2D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to
use a transformation going in the opposite direction of a normal
convolution, i.e., from something that has the shape of the output of
some convolution to something that has the shape of its input while
maintaining a connectivity pattern that is compatible with said
convolution.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 2 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 2 int
Specifies the strides of the convolution.
padding : int or tuple/list of 2 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
output_padding : int or tuple/list of 2 int
Controls the amount of implicit zero-padding on both sides of the
output for `output_padding` number of points for each dimension.
dilation : int or tuple/list of 2 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCHW'
Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc.
'N', 'C', 'H', 'W' stand for batch, channel, height, and width
dimensions respectively. Convolution is applied on the 'H' and
'W' dimensions.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 4D array of shape
(batch_size, in_channels, height, width) if `layout` is `NCHW`.
Output shape:
This depends on the `layout` parameter. Output is 4D array of shape
(batch_size, channels, out_height, out_width) if `layout` is `NCHW`.
out_height and out_width are calculated as::
out_height = (height-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
out_width = (width-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
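Examples
--------
A minimal 2x upsampling sketch (illustrative shapes only):
>>> $deconv = nn->Conv2DTranspose(channels => 3, kernel_size => [4, 4], strides => [2, 2], padding => [1, 1]);
>>> $deconv->initialize();
>>> $x = mx->nd->random->normal(shape => [1, 16, 32, 32]);
>>> $y = $deconv->($x);   # out = (32-1)*2-2+4 = 64, so shape is [1, 3, 64, 64]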
|
NAME
AI::MXNet::Gluon::NN::Conv3DTranspose
|
DESCRIPTION
Transposed 3D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to
use a transformation going in the opposite direction of a normal
convolution, i.e., from something that has the shape of the output of
some convolution to something that has the shape of its input while
maintaining a connectivity pattern that is compatible with said
convolution.
If `in_channels` is not specified, `Parameter` initialization will be
deferred to the first time `forward` is called and `in_channels` will be
inferred from the shape of input data.
Parameters
----------
channels : int
The dimensionality of the output space, i.e. the number of output
channels (filters) in the convolution.
kernel_size : int or tuple/list of 3 int
Specifies the dimensions of the convolution window.
strides : int or tuple/list of 3 int
Specifies the strides of the convolution.
padding : int or tuple/list of 3 int
If padding is non-zero, then the input is implicitly zero-padded
on both sides for `padding` number of points.
output_padding : int or tuple/list of 3 int
Controls the amount of implicit zero-padding on both sides of the
output for `output_padding` number of points for each dimension.
dilation : int or tuple/list of 3 int
Specifies the dilation rate to use for dilated convolution.
groups : int
Controls the connections between inputs and outputs.
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv
layers side by side, each seeing half the input channels, and producing
half the output channels, and both subsequently concatenated.
layout : str, default 'NCDHW'
Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc.
'N', 'C', 'H', 'W', 'D' stand for batch, channel, height, width and
depth dimensions respectively. Convolution is applied on the 'D',
'H', and 'W' dimensions.
in_channels : int, default 0
The number of input channels to this layer. If not specified,
initialization will be deferred to the first time `forward` is called
and `in_channels` will be inferred from the shape of input data.
activation : str
Activation function to use. See :func:`~mxnet.ndarray.Activation`.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias : bool
Whether the layer uses a bias vector.
weight_initializer : str or `Initializer`
Initializer for the `weight` weights matrix.
bias_initializer : str or `Initializer`
Initializer for the bias vector.
Input shape:
This depends on the `layout` parameter. Input is 5D array of shape
(batch_size, in_channels, depth, height, width) if `layout` is `NCDHW`.
Output shape:
This depends on the `layout` parameter. Output is 5D array of shape
(batch_size, channels, out_depth, out_height, out_width) if `layout` is `NCDHW`.
out_depth, out_height and out_width are calculated as::
out_depth = (depth-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
out_height = (height-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
out_width = (width-1)*strides[2]-2*padding[2]+kernel_size[2]+output_padding[2]
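Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $deconv = nn->Conv3DTranspose(channels => 4, kernel_size => [4, 4, 4], strides => [2, 2, 2], padding => [1, 1, 1]);
>>> $deconv->initialize();
>>> $x = mx->nd->random->normal(shape => [1, 8, 8, 16, 16]);
>>> $y = $deconv->($x);   # output shape is [1, 4, 16, 32, 32]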
|
NAME
AI::MXNet::Gluon::NN::MaxPool1D
|
DESCRIPTION
Max pooling operation for one dimensional data.
Parameters
----------
pool_size : int
Size of the max pooling windows.
strides : int, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCW'
Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc.
'N', 'C', 'W' stand for batch, channel, and width (time) dimensions
respectively. Pooling is applied on the 'W' dimension.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
Input shape:
This depends on the `layout` parameter. Input is 3D array of shape
(batch_size, channels, width) if `layout` is `NCW`.
Output shape:
This depends on the `layout` parameter. Output is 3D array of shape
(batch_size, channels, out_width) if `layout` is `NCW`.
out_width is calculated as::
out_width = floor((width+2*padding-pool_size)/strides)+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
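Examples
--------
A minimal usage sketch (illustrative shapes only; pooling layers have no
parameters, so no initialization is needed):
>>> $pool = nn->MaxPool1D(pool_size => 2);
>>> $x = mx->nd->random->normal(shape => [8, 4, 100]);
>>> $y = $pool->($x);   # out_width = floor((100-2)/2)+1 = 50, so shape is [8, 4, 50]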
|
NAME
AI::MXNet::Gluon::NN::MaxPool2D
|
DESCRIPTION
Max pooling operation for two dimensional (spatial) data.
Parameters
----------
pool_size : int or list/tuple of 2 ints
Size of the max pooling windows.
strides : int, list/tuple of 2 ints, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int or list/tuple of 2 ints
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCHW'
Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc.
'N', 'C', 'H', 'W' stand for batch, channel, height, and width
dimensions respectively. Pooling is applied on the 'H' and 'W' dimensions.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
Input shape:
This depends on the `layout` parameter. Input is 4D array of shape
(batch_size, channels, height, width) if `layout` is `NCHW`.
Output shape:
This depends on the `layout` parameter. Output is 4D array of shape
(batch_size, channels, out_height, out_width) if `layout` is `NCHW`.
out_height and out_width are calculated as::
out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1
out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
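Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $pool = nn->MaxPool2D(pool_size => [2, 2]);
>>> $x = mx->nd->random->normal(shape => [1, 3, 224, 224]);
>>> $y = $pool->($x);   # output shape is [1, 3, 112, 112]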
|
NAME
AI::MXNet::Gluon::NN::MaxPool3D
|
DESCRIPTION
Max pooling operation for 3D data (spatial or spatio-temporal).
Parameters
----------
pool_size : int or list/tuple of 3 ints
Size of the max pooling windows.
strides : int, list/tuple of 3 ints, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int or list/tuple of 3 ints
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCDHW'
Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc.
'N', 'C', 'H', 'W', 'D' stand for batch, channel, height, width and
depth dimensions respectively. Pooling is applied on the 'D', 'H' and 'W'
dimensions.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
Input shape:
This depends on the `layout` parameter. Input is 5D array of shape
(batch_size, channels, depth, height, width) if `layout` is `NCDHW`.
Output shape:
This depends on the `layout` parameter. Output is 5D array of shape
(batch_size, channels, out_depth, out_height, out_width) if `layout`
is `NCDHW`.
out_depth, out_height and out_width are calculated as::
out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1
out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1
out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
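Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $pool = nn->MaxPool3D(pool_size => [2, 2, 2]);
>>> $x = mx->nd->random->normal(shape => [1, 1, 16, 32, 32]);
>>> $y = $pool->($x);   # output shape is [1, 1, 8, 16, 16]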
|
NAME
AI::MXNet::Gluon::NN::AvgPool1D
|
DESCRIPTION
Average pooling operation for temporal data.
Parameters
----------
pool_size : int
Size of the average pooling windows.
strides : int, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCW'
Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc.
'N', 'C', 'W' stand for batch, channel, and width (time) dimensions
respectively. Pooling is applied on the 'W' dimension.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
count_include_pad : bool, default True
When `False`, will exclude padding elements when computing the average value.
Input shape:
This depends on the `layout` parameter. Input is 3D array of shape
(batch_size, channels, width) if `layout` is `NCW`.
Output shape:
This depends on the `layout` parameter. Output is 3D array of shape
(batch_size, channels, out_width) if `layout` is `NCW`.
out_width is calculated as::
out_width = floor((width+2*padding-pool_size)/strides)+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
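Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $pool = nn->AvgPool1D(pool_size => 2, strides => 2);
>>> $x = mx->nd->random->normal(shape => [8, 4, 100]);
>>> $y = $pool->($x);   # output shape is [8, 4, 50]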
|
NAME
AI::MXNet::Gluon::NN::AvgPool2D
|
DESCRIPTION
Average pooling operation for spatial data.
Parameters
----------
pool_size : int or list/tuple of 2 ints
Size of the average pooling windows.
strides : int, list/tuple of 2 ints, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int or list/tuple of 2 ints
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCHW'
Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc.
'N', 'C', 'H', 'W' stand for batch, channel, height, and width
dimensions respectively. Pooling is applied on the 'H' and 'W' dimensions.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
count_include_pad : bool, default True
When `False`, will exclude padding elements when computing the average value.
Input shape:
This depends on the `layout` parameter. Input is 4D array of shape
(batch_size, channels, height, width) if `layout` is `NCHW`.
Output shape:
This depends on the `layout` parameter. Output is 4D array of shape
(batch_size, channels, out_height, out_width) if `layout` is `NCHW`.
out_height and out_width are calculated as::
out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1
out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
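Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $pool = nn->AvgPool2D(pool_size => [2, 2]);
>>> $x = mx->nd->random->normal(shape => [1, 3, 224, 224]);
>>> $y = $pool->($x);   # output shape is [1, 3, 112, 112]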
|
NAME
AI::MXNet::Gluon::NN::AvgPool3D
|
DESCRIPTION
Average pooling operation for 3D data (spatial or spatio-temporal).
Parameters
----------
pool_size : int or list/tuple of 3 ints
Size of the average pooling windows.
strides : int, list/tuple of 3 ints, or None
Factor by which to downscale. E.g. 2 will halve the input size.
If `None`, it will default to `pool_size`.
padding : int or list/tuple of 3 ints
If padding is non-zero, then the input is implicitly
zero-padded on both sides for `padding` number of points.
layout : str, default 'NCDHW'
Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc.
'N', 'C', 'H', 'W', 'D' stand for batch, channel, height, width and
depth dimensions respectively. Pooling is applied on the 'D', 'H' and 'W'
dimensions.
ceil_mode : bool, default False
When `True`, will use ceil instead of floor to compute the output shape.
count_include_pad : bool, default True
When `False`, will exclude padding elements when computing the average value.
Input shape:
This depends on the `layout` parameter. Input is 5D array of shape
(batch_size, channels, depth, height, width) if `layout` is `NCDHW`.
Output shape:
This depends on the `layout` parameter. Output is 5D array of shape
(batch_size, channels, out_depth, out_height, out_width) if `layout`
is `NCDHW`.
out_depth, out_height and out_width are calculated as::
out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1
out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1
out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1
When `ceil_mode` is `True`, ceil will be used instead of floor in this
equation.
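Examples
--------
A minimal usage sketch (illustrative shapes only):
>>> $pool = nn->AvgPool3D(pool_size => [2, 2, 2]);
>>> $x = mx->nd->random->normal(shape => [1, 1, 16, 32, 32]);
>>> $y = $pool->($x);   # output shape is [1, 1, 8, 16, 16]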
|
NAME
AI::MXNet::Gluon::NN::GlobalMaxPool1D
|
DESCRIPTION
Global max pooling operation for temporal data.
|
NAME
AI::MXNet::Gluon::NN::GlobalMaxPool2D
|
DESCRIPTION
Global max pooling operation for spatial data.
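Examples
--------
A minimal usage sketch (illustrative shapes only): the spatial dimensions
are collapsed to 1x1.
>>> $pool = nn->GlobalMaxPool2D();
>>> $x = mx->nd->random->normal(shape => [1, 64, 28, 28]);
>>> $y = $pool->($x);   # output shape is [1, 64, 1, 1]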
|
NAME
AI::MXNet::Gluon::NN::GlobalMaxPool3D
|
DESCRIPTION
Global max pooling operation for 3D data.
|
NAME
AI::MXNet::Gluon::NN::GlobalAvgPool1D
|
DESCRIPTION
Global average pooling operation for temporal data.
|
NAME
AI::MXNet::Gluon::NN::GlobalAvgPool2D
|
DESCRIPTION
Global average pooling operation for spatial data.
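Examples
--------
A minimal usage sketch (illustrative shapes only), e.g. as a classifier
head in place of flattening:
>>> $pool = nn->GlobalAvgPool2D();
>>> $x = mx->nd->random->normal(shape => [1, 512, 7, 7]);
>>> $y = $pool->($x);   # output shape is [1, 512, 1, 1]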
|
NAME
AI::MXNet::Gluon::NN::GlobalAvgPool3D
|
DESCRIPTION
Global average pooling operation for 3D data.
|
NAME
AI::MXNet::Gluon::NN::ReflectionPad2D
|
DESCRIPTION
Pads the input tensor using the reflection of the input boundary.
Parameters
----------
padding : int
An integer padding size.
Examples
--------
>>> $m = nn->ReflectionPad2D(3);
>>> $input = mx->nd->random->normal(shape => [16, 3, 224, 224]);
>>> $output = $m->($input);
|