A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
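The lookup behaviour described above can be sketched in plain Python (hypothetical names; in the real module the table is a trainable weight matrix):

```python
# Minimal sketch of an embedding lookup table (hypothetical, plain Python).
# Row i of `weight` is the embedding vector for index i.
weight = [
    [0.1, 0.2, 0.3],   # embedding for index 0
    [0.4, 0.5, 0.6],   # embedding for index 1
    [0.7, 0.8, 0.9],   # embedding for index 2
]

def embed(indices):
    """Map a list of indices to their embedding vectors."""
    return [weight[i] for i in indices]

vectors = embed([2, 0, 2])  # selects rows 2, 0, 2 of the table
```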
Parameters:
- The size of each embedding vector.
- maxNorm: if given, each embedding vector with norm larger than maxNorm is renormalized to have norm maxNorm.
- The p of the p-norm to compute for the maxNorm option.
- Size of the dictionary of embeddings.
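The maxNorm renormalization can be sketched as follows (hypothetical helper names `p_norm` and `renormalize`, plain Python; the real module applies this in-place to the stored weight):

```python
# Sketch of maxNorm renormalization: if a vector's p-norm exceeds
# max_norm, rescale the vector so its p-norm equals max_norm.

def p_norm(vec, p):
    """The p-norm of a vector: (sum |x|^p)^(1/p)."""
    return sum(abs(x) ** p for x in vec) ** (1.0 / p)

def renormalize(vec, max_norm, p=2.0):
    norm = p_norm(vec, p)
    if norm > max_norm:
        return [x * (max_norm / norm) for x in vec]
    return vec  # norm already within the limit; unchanged

v = renormalize([3.0, 4.0], max_norm=1.0)  # 2-norm 5.0, rescaled to norm 1.0
```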
- paddingIdx: if specified, the entries at paddingIdx do not contribute to the gradient; therefore, the embedding vector at paddingIdx is not updated during training, i.e. it remains a fixed "pad". For a newly constructed Embedding, the embedding vector at paddingIdx defaults to all zeros, but it can be updated to another value to be used as the padding vector.
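A sketch of the paddingIdx behaviour (hypothetical names and a simplified SGD-style update, plain Python): the padding row starts as all zeros, and updates skip it.

```python
# Sketch of paddingIdx: the row at padding_idx defaults to zeros and is
# never touched by gradient updates, so it stays a fixed "pad".
padding_idx = 0
dim = 3
num_rows = 4

# Newly constructed table: the padding row defaults to all zeros.
weight = [[0.0] * dim if i == padding_idx else [1.0] * dim
          for i in range(num_rows)]

def sgd_update(weight, grads, lr=0.25):
    """Apply per-row gradients, leaving the padding row untouched."""
    for i, g in grads.items():
        if i == padding_idx:
            continue  # paddingIdx does not contribute to the gradient
        weight[i] = [w - lr * gj for w, gj in zip(weight[i], g)]

sgd_update(weight, {0: [1.0] * dim, 2: [1.0] * dim})
# weight[0] is still all zeros; weight[2] moved from 1.0 to 0.75 per component
```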
- If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default: False.
- If True, the gradient w.r.t. the weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
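The idea behind a sparse gradient, and the optional frequency scaling above, can be sketched in plain Python (hypothetical `sparse_grad` helper): only rows whose indices appear in the mini-batch get a gradient entry, stored as a mapping from row index to gradient vector rather than a dense matrix.

```python
# Sketch of a "sparse" gradient for the embedding weight: only rows that
# were looked up in the batch receive entries, keyed by row index.
from collections import Counter

def sparse_grad(indices, upstream, scale_by_freq=False):
    """Accumulate per-row gradients for a batch of lookups.

    indices:  the indices fed to the embedding
    upstream: one gradient vector per lookup (same length as indices)
    """
    counts = Counter(indices)
    grads = {}
    for i, g in zip(indices, upstream):
        row = grads.setdefault(i, [0.0] * len(g))
        for j, gj in enumerate(g):
            row[j] += gj
    if scale_by_freq:
        # Optionally scale each row by the inverse of that word's
        # frequency in the mini-batch.
        for i, row in grads.items():
            grads[i] = [x / counts[i] for x in row]
    return grads

g = sparse_grad([2, 0, 2], [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
# only rows 0 and 2 have entries; row 2 accumulates two contributions
```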