torch.nn.modules.regularization
Type members
Classlikes

class Dropout
During training, randomly zeroes some of the elements of the input tensor with probability p
using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of $\frac{1}{1-p}$ during training. This means that during evaluation the module simply computes an identity function.
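To make the scaling concrete, here is a minimal sketch of inverted dropout on a plain Scala array, independent of the storch API (the function name and signature are illustrative, not part of this module):

```scala
import scala.util.Random

// Illustrative sketch only, not the storch implementation.
// Each element is zeroed with probability p; survivors are scaled by
// 1 / (1 - p), so the expected value of each output element equals its
// input during training.
def invertedDropout(input: Array[Float], p: Double, training: Boolean): Array[Float] =
  require(p >= 0.0 && p < 1.0, "p must be in [0, 1)")
  if !training || p == 0.0 then input // evaluation mode: identity
  else
    val scale = 1.0f / (1.0f - p.toFloat)
    input.map(x => if Random.nextDouble() < p then 0.0f else x * scale)
```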
Shape:
- Input: $(*)$. Input can be of any shape.
- Output: $(*)$. Output is of the same shape as input.
Value parameters
- inplace
  - If set to true, will do this operation in-place. Default: false
- p
  - probability of an element to be zeroed. Default: 0.5
Attributes
- See also
  - torch.nn.functional.dropout: https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout.html#torch-nn-functional-dropout
  - TODO: Add 2D, 3D, Alpha and feature alpha versions (https://pytorch.org/docs/master/nn.html#torch.nn.Dropout)
- Example
  ```scala
  import torch.nn
  val m = nn.Dropout(p = 0.2)
  val input = torch.randn(20, 16)
  val output = m(input)
  ```
  (a sketch of the evaluation-mode behaviour follows after this list)
- Source
- Dropout.scala
- Supertypes
  - trait TensorModule[ParamType]
  - trait HasParams[ParamType]
  - class Module
  - class Object
  - trait Matchable
  - class Any
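As referenced from the example above, a minimal sketch of the evaluation-mode behaviour: after switching to evaluation mode, Dropout computes the identity. The `m.eval()` call is an assumption borrowed from PyTorch's Module API and may not match storch's actual method name:

```scala
import torch.nn

val m = nn.Dropout(p = 0.2)
val input = torch.randn(20, 16)

// Training mode (the default): roughly 20% of the elements are zeroed and
// the survivors are scaled by 1 / (1 - 0.2) = 1.25.
val trainingOutput = m(input)

// Evaluation mode: the module computes the identity, so the output equals
// the input. NOTE: `m.eval()` is assumed here by analogy with PyTorch;
// check storch's Module API for the actual toggle.
m.eval()
val evalOutput = m(input)
```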