torch.nn.modules.regularization

Members list

Type members

Classlikes

final class Dropout[ParamType <: FloatNN | ComplexNN](p: Double, inplace: Boolean)(implicit evidence$1: Default[ParamType]) extends HasParams[ParamType], TensorModule[ParamType]

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.

This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.

Furthermore, the outputs are scaled by a factor of $\frac{1}{1-p}$ during training. This means that during evaluation the module simply computes an identity function.
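The inverted-dropout scaling can be illustrated with a minimal, self-contained sketch in plain Scala (no Storch dependency; the `dropout` function and its parameter names here are illustrative, not part of this API):

```scala
import scala.util.Random

// Inverted dropout: during training, zero each element with probability p
// and scale the survivors by 1 / (1 - p) so the expected value is unchanged.
// During evaluation the function is the identity.
def dropout(input: Seq[Double], p: Double, training: Boolean, rng: Random): Seq[Double] =
  if !training then input // evaluation: identity function
  else
    val scale = 1.0 / (1.0 - p)
    input.map(x => if rng.nextDouble() < p then 0.0 else x * scale)

val rng = new Random(0)
val out = dropout(Seq.fill(8)(1.0), p = 0.5, training = true, rng)
// each surviving element is scaled to 2.0; the rest are zeroed
```

Because the survivors are scaled by $\frac{1}{1-p}$ at training time, no rescaling is needed at evaluation time, which is why the module reduces to an identity function there.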

Shape:

  • Input: $(*)$. Input can be of any shape
  • Output: $(*)$. Output is of the same shape as input

Value parameters

inplace

– If set to true, performs the operation in-place. Default: false

p

– Probability of an element being zeroed. Default: 0.5

Attributes

Example
import torch.nn
val m = nn.Dropout(p = 0.2)     // zero each element with probability 0.2
val input = torch.randn(20, 16)
val output = m(input)           // same shape as input; survivors scaled by 1 / 0.8
Source
Dropout.scala
Supertypes
trait TensorModule[ParamType]
trait Tensor[ParamType] => Tensor[ParamType]
trait HasParams[ParamType]
class Module
class Object
trait Matchable
class Any