Build the Neural Network
Neural networks are composed of layers/modules that perform operations on data. The nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses nn.Module. A neural network is itself a module that consists of other modules (layers). This nested structure makes it easy to build and manage complex architectures.
In the following sections, we'll build a neural network to classify images in the FashionMNIST dataset.
import torch.*
Get Device for Training
We want to be able to train our model on a hardware accelerator like the GPU, if it is available. Let's check whether torch.cuda is available; otherwise we continue to use the CPU.
import torch.Device.{CPU, CUDA}
val device = if torch.cuda.isAvailable then CUDA else CPU
// device: Device = Device(device = CPU, index = -1)
println(s"Using $device device")
// Using Device(CPU,-1) device
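Both the model and the tensors it operates on must live on the same device. As a minimal sketch reusing only calls that appear later in this tutorial, a tensor can be created directly on the selected device:
val sample = torch.rand(Seq(2, 2), device = device)
// With only a CPU available this tensor is allocated on the CPU;
// if CUDA were available it would be allocated on the GPU instead.
println(sample)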
Define the Class
We define our neural network by subclassing nn.Module, and initialize the neural network layers in the constructor. Every nn.Module subclass implements the operations on input data in the apply method.
class NeuralNetwork extends nn.Module:
  val flatten = nn.Flatten()
  val linearReluStack = register(nn.Sequential(
    nn.Linear(28*28, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
  ))

  def apply(x: Tensor[Float32]) =
    val flattened = flatten(x)
    val logits = linearReluStack(flattened)
    logits
We create an instance of NeuralNetwork, move it to the device, and print its structure.
val model = NeuralNetwork().to(device)
// model: NeuralNetwork = NeuralNetwork
println(model)
// NeuralNetwork
To use the model, we pass it the input data. This executes the model's apply method.
Calling the model on the input returns a 2-dimensional tensor, with dim=0 corresponding to each sample in the batch and dim=1 corresponding to the 10 raw predicted values for each class.
We get the prediction probabilities by passing it through an instance of the nn.Softmax module.
val X = torch.rand(Seq(1, 28, 28), device=device)
// X: Tensor[Float32] = tensor dtype=float32, shape=[1, 28, 28], device=CPU
// [[[0.5862, 0.9046, 0.5192, ..., 0.1778, 0.9174, 0.6018],
// [0.0090, 0.8940, 0.6416, ..., 0.3858, 0.2927, 0.9230],
// [0.6678, 0.9025, 0.9660, ..., 0.4752, 0.5883, 0.6594],
// ...,
// [0.8478, 0.8303, 0.2089, ..., 0.7907, 0.7650, 0.5328],
// [0.6994, 0.1660, 0.8496, ..., 0.4151, 0.5189, 0.7628],
// [0.7866, 0.0180, 0.9404, ..., 0.8229, 0.3069, 0.1515]]]
val logits = model(X)
// logits: Tensor[Float32] = tensor dtype=float32, shape=[1, 10], device=CPU
// [[0.1353, 0.1562, 0.0768, ..., 0.0369, 0.0401, 0.0638]]
val predProbab = nn.Softmax(dim=1).apply(logits)
// predProbab: Tensor[Float32] = tensor dtype=float32, shape=[1, 10], device=CPU
// [[0.1103, 0.1126, 0.1040, ..., 0.1000, 0.1003, 0.1027]]
val yPred = predProbab.argmax(1)
// yPred: Tensor[Int64] = tensor dtype=int64, shape=[1], device=CPU
// [1]
println(s"Predicted class: $yPred")
// Predicted class: tensor dtype=int64, shape=[1], device=CPU
// [1]
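To report a human-readable label, we can look the predicted index up in the list of FashionMNIST class names. This is a small sketch; the item call used below to extract the scalar index from the one-element tensor is an assumption, not something shown elsewhere in this tutorial:
// FashionMNIST class names, indexed 0-9
val classes = Seq(
  "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
  "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"
)
// `item` (assumed) extracts the scalar value from the single-element tensor
println(s"Predicted class: ${classes(yPred.item.toInt)}")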
Model Layers
Let's break down the layers in the FashionMNIST model. To illustrate it, we will take a sample minibatch of 3 images of size 28x28 and see what happens to it as we pass it through the network.
val inputImage = torch.rand(Seq(3,28,28))
// inputImage: Tensor[Float32] = tensor dtype=float32, shape=[3, 28, 28], device=CPU
// [[[0.9383, 0.2374, 0.2008, ..., 0.8057, 0.1371, 0.1068],
// [0.1094, 0.7421, 0.2517, ..., 0.2160, 0.4192, 0.6066],
// [0.0289, 0.7839, 0.3617, ..., 0.4734, 0.9051, 0.6345],
// ...,
// [0.0217, 0.5367, 0.7781, ..., 0.5098, 0.2963, 0.2213],
// [0.8749, 0.4110, 0.9736, ..., 0.7398, 0.9754, 0.5242],
// [0.5868, 0.6748, 0.2041, ..., 0.4964, 0.2741, 0.1696]],
//
// [[0.2660, 0.0460, 0.1353, ..., 0.7905, 0.4356, 0.8255],
// [0.0452, 0.4857, 0.2228, ..., 0.9531, 0.2832, 0.3806],
// [0.3757, 0.6875, 0.5480, ..., 0.9721, 0.3711, 0.3945],
// ...,
// [0.7317, 0.8068, 0.3320, ..., 0.7489, 0.1174, 0.3785],
// [0.0772, 0.4222, 0.2819, ..., 0.1456, 0.8183, 0.9512],
// [0.7143, 0.9930, 0.0092, ..., 0.9522, 0.3762, 0.5856]],
//
// [[0.8299, 0.2597, 0.6590, ..., 0.4526, 0.1189, 0.1953],
// [0.6534, 0.2759, 0.7588, ..., 0.3603, 0.5345, 0.2564],
// [0.5605, 0.4076, 0.6879, ..., 0.7435, 0.9332, 0.6544],
// ...,
// [0.8600, 0.4643, 0.2066, ..., 0.1571, 0.8621, 0.0958],
// [0.7132, 0.1990, 0.1905, ..., 0.6690, 0.8212, 0.6363],
// [0.9950, 0.8546, 0.1012, ..., 0.2659, 0.6177, 0.2765]]]
print(inputImage.size)
// ArraySeq(3, 28, 28)
nn.Flatten
We initialize the Flatten layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension at dim=0 is maintained).
val flatten = nn.Flatten()
// flatten: Flatten[Float32] = Flatten
val flatImage = flatten(inputImage)
// flatImage: Tensor[Float32] = tensor dtype=float32, shape=[3, 784], device=CPU
// [[0.9383, 0.2374, 0.2008, ..., 0.4964, 0.2741, 0.1696],
// [0.2660, 0.0460, 0.1353, ..., 0.9522, 0.3762, 0.5856],
// [0.8299, 0.2597, 0.6590, ..., 0.2659, 0.6177, 0.2765]]
println(flatImage.size)
// ArraySeq(3, 784)
nn.Linear
The Linear layer is a module that applies a linear transformation on the input using its stored weights and biases.
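Concretely, for an input $x$ the layer computes $y = x W^\top + b$, where the weight matrix $W$ has shape (outFeatures, inFeatures) and the bias $b$ has shape (outFeatures); applying it below therefore maps the 784 input features of each sample to 20 output features.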
val layer1 = nn.Linear(inFeatures=28*28, outFeatures=20)
// layer1: Linear[Float32] = Linear(inFeatures=784, outFeatures=20, bias=true)
var hidden1 = layer1(flatImage)
// hidden1: Tensor[Float32] = tensor dtype=float32, shape=[3, 20], device=CPU
// [[0.3432, 0.4017, 0.5258, ..., 0.0494, -0.3462, 0.4031],
// [0.3284, 0.1825, -0.1181, ..., 0.0910, -0.2152, 0.7398],
// [0.0927, 0.3688, 0.4693, ..., -0.1242, -0.2478, 0.7332]]
println(hidden1.size)
// ArraySeq(3, 20)
nn.ReLU
Non-linear activations are what create the complex mappings between the model's inputs and outputs. They are applied after linear transformations to introduce nonlinearity, helping neural networks learn a wide variety of phenomena.
In this model, we use nn.ReLU between our linear layers, but there are other activations to introduce non-linearity in your model.
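ReLU clamps negative values to zero and leaves positive values unchanged, $\mathrm{ReLU}(x) = \max(0, x)$, which is why the negative entries in the tensor below become 0.0000 after applying it.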
println(s"Before ReLU: $hidden1\n\n")
// Before ReLU: tensor dtype=float32, shape=[3, 20], device=CPU
// [[0.3432, 0.4017, 0.5258, ..., 0.0494, -0.3462, 0.4031],
// [0.3284, 0.1825, -0.1181, ..., 0.0910, -0.2152, 0.7398],
// [0.0927, 0.3688, 0.4693, ..., -0.1242, -0.2478, 0.7332]]
//
//
val relu = nn.ReLU()
// relu: ReLU[Float32] = ReLU
hidden1 = relu(hidden1)
println(s"After ReLU: $hidden1")
// After ReLU: tensor dtype=float32, shape=[3, 20], device=CPU
// [[0.3432, 0.4017, 0.5258, ..., 0.0494, 0.0000, 0.4031],
// [0.3284, 0.1825, 0.0000, ..., 0.0910, 0.0000, 0.7398],
// [0.0927, 0.3688, 0.4693, ..., 0.0000, 0.0000, 0.7332]]
nn.Sequential
nn.Sequential is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like seqModules.
val seqModules = nn.Sequential(
  flatten,
  layer1,
  nn.ReLU(),
  nn.Linear(20, 10)
)
// seqModules: Sequential[Float32] = Sequential
val inputImage = torch.rand(Seq(3,28,28))
// inputImage: Tensor[Float32] = tensor dtype=float32, shape=[3, 28, 28], device=CPU
// [[[0.5833, 0.6980, 0.3605, ..., 0.2316, 0.3994, 0.0229],
// [0.3869, 0.3399, 0.1972, ..., 0.3379, 0.8123, 0.4418],
// [0.3761, 0.5126, 0.8768, ..., 0.7320, 0.3415, 0.2787],
// ...,
// [0.4208, 0.7801, 0.7355, ..., 0.7698, 0.0263, 0.1551],
// [0.8317, 0.2939, 0.8191, ..., 0.1446, 0.7855, 0.1638],
// [0.3432, 0.3006, 0.4446, ..., 0.8962, 0.4462, 0.1422]],
//
// [[0.1961, 0.8647, 0.6700, ..., 0.2980, 0.1454, 0.8018],
// [0.5516, 0.0714, 0.5854, ..., 0.0142, 0.5712, 0.9306],
// [0.6504, 0.4090, 0.9202, ..., 0.0306, 0.4228, 0.2798],
// ...,
// [0.5935, 0.2578, 0.5606, ..., 0.5228, 0.1927, 0.3713],
// [0.8152, 0.5469, 0.9350, ..., 0.5019, 0.2795, 0.3886],
// [0.1782, 0.8340, 0.1515, ..., 0.1756, 0.8141, 0.5373]],
//
// [[0.5033, 0.4541, 0.0048, ..., 0.9341, 0.0862, 0.5317],
// [0.8090, 0.6681, 0.5457, ..., 0.3940, 0.7939, 0.9146],
// [0.6933, 0.7915, 0.1117, ..., 0.0156, 0.1873, 0.6216],
// ...,
// [0.6396, 0.2659, 0.1472, ..., 0.6356, 0.5048, 0.6193],
// [0.4359, 0.1181, 0.3762, ..., 0.9704, 0.5040, 0.2495],
// [0.1148, 0.9934, 0.6551, ..., 0.2044, 0.8356, 0.3676]]]
val logits = seqModules(inputImage)
// logits: Tensor[Float32] = tensor dtype=float32, shape=[3, 10], device=CPU
// [[0.1660, -0.2910, 0.0392, ..., 0.2986, -0.3197, -0.1575],
// [0.1888, -0.2687, -0.0310, ..., 0.3105, -0.3418, -0.1508],
// [0.2109, -0.3151, -0.1185, ..., 0.2616, -0.3061, -0.2284]]
nn.Softmax
The last linear layer of the neural network returns logits - raw values in $[-\infty, \infty]$ - which are passed to the nn.Softmax module. The logits are scaled to values in $[0, 1]$ representing the model's predicted probabilities for each class. The dim parameter indicates the dimension along which the values must sum to 1.
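For a vector of logits $x$, the probability assigned to class $i$ is $\mathrm{Softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$, so all entries are non-negative and sum to 1 along the chosen dimension.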
val softmax = nn.Softmax(dim=1)
// softmax: Softmax[Float32] = TensorModule
val predProbab = softmax(logits)
// predProbab: Tensor[Float32] = tensor dtype=float32, shape=[3, 10], device=CPU
// [[0.1152, 0.0730, 0.1015, ..., 0.1316, 0.0709, 0.0834],
// [0.1168, 0.0739, 0.0937, ..., 0.1319, 0.0687, 0.0831],
// [0.1224, 0.0723, 0.0880, ..., 0.1287, 0.0730, 0.0789]]
Model Parameters
Many layers inside a neural network are parameterized, i.e. have associated weights and biases that are optimized during training. Subclassing nn.Module automatically tracks all fields defined inside your model object, and makes all parameters accessible using your model's parameters() or namedParameters() methods.
In this example, we iterate over each parameter, and print its size and a preview of its values.
println(s"Model structure: ${model}")
// Model structure: NeuralNetwork
for (name, param) <- model.namedParameters() do
  println(s"Layer: ${name} | Size: ${param.size.mkString(", ")} | Values:\n${param(Slice(0, 2))}")
// Layer: linearReluStack.0.weight | Size: 512, 784 | Values:
// tensor dtype=float32, shape=[2, 784], device=CPU
// [[-0.0003, 0.0192, -0.0294, ..., 0.0219, 0.0037, 0.0021],
// [-0.0198, -0.0150, -0.0104, ..., -0.0203, -0.0060, -0.0299]]
// Layer: linearReluStack.0.bias | Size: 512 | Values:
// tensor dtype=float32, shape=[2], device=CPU
// [0.0320, 0.0142]
// Layer: linearReluStack.2.weight | Size: 512, 512 | Values:
// tensor dtype=float32, shape=[2, 512], device=CPU
// [[0.0217, 0.0426, -0.0431, ..., 0.0227, 0.0204, 0.0404],
// [-0.0165, -0.0338, -0.0328, ..., -0.0084, -0.0116, -0.0251]]
// Layer: linearReluStack.2.bias | Size: 512 | Values:
// tensor dtype=float32, shape=[2], device=CPU
// [0.0156, 0.0262]
// Layer: linearReluStack.4.weight | Size: 10, 512 | Values:
// tensor dtype=float32, shape=[2, 512], device=CPU
// [[-0.0299, -0.0093, 0.0166, ..., 0.0170, -0.0347, 0.0053],
// [0.0414, -0.0066, -0.0315, ..., -0.0322, -0.0363, 0.0382]]
// Layer: linearReluStack.4.bias | Size: 10 | Values:
// tensor dtype=float32, shape=[2], device=CPU
// [0.0432, 0.0425]
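As a quick sanity check, we can also sum the element counts of all parameter tensors, reusing namedParameters() from the loop above. This is a small sketch; for this architecture the arithmetic gives (784*512 + 512) + (512*512 + 512) + (512*10 + 10) = 669706 trainable values.
// Sum the element counts of every registered parameter tensor
val totalParams = model.namedParameters().map((_, param) => param.size.product).sum
println(s"Total trainable parameters: $totalParams")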
Further Reading
- torch.nn API