AdamW

torch.optim.AdamW

Implements the AdamW algorithm (Adam with decoupled weight decay regularization).

Attributes

Source
AdamW.scala
Supertypes
class Optimizer
class Object
trait Matchable
class Any

Members list

Value members

Inherited methods

def step(): Unit

Performs a single optimization step (parameter update).

Attributes

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.

Inherited from:
Optimizer
Source
Optimizer.scala
def zeroGrad(): Unit

Sets the gradients of all optimized Tensors to zero.

Attributes

Inherited from:
Optimizer
Source
Optimizer.scala
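
Example

The two inherited methods above are typically called once per training iteration: `zeroGrad()` clears the accumulated gradients before backpropagation, and `step()` applies the parameter update afterwards. A minimal sketch of that loop is shown below; the constructor arguments (`model.parameters`, `lr`) and the `lossFn`/`backward` calls are assumptions about the surrounding library API, not part of this page.

```scala
// Sketch of a typical training loop with AdamW.
// Assumes a `model`, a `lossFn`, and a batch of `input`/`target` tensors;
// the AdamW constructor signature shown here is illustrative.
val optimizer = torch.optim.AdamW(model.parameters, lr = 1e-3)

for ((input, target) <- batches) {
  optimizer.zeroGrad()               // reset .grad on all optimized tensors
  val loss = lossFn(model(input), target)
  loss.backward()                    // populate gradients
  optimizer.step()                   // single optimization step (parameter update)
}
```

Calling `zeroGrad()` before `backward()` matters because gradients accumulate across backward passes; skipping it would mix gradients from successive batches.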