Loss and Learning with PyTorch

A model that represents data well should make predictions that match the data. The error in the model's predictions, that is, their deviation from the data, is referred to as the "loss". This unit introduces the concept of loss and shows how it is used to find the model that best represents the data. Through hands-on exercises, you will learn how to work with tensors in PyTorch, explore learning through gradient descent, and use PyTorch's automatic differentiation (autograd) to compute gradients efficiently. You will also learn how to build network models in PyTorch and experiment with hyperparameters and optimizer choices to see how they influence training.
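As a small preview of the ideas above, the following sketch fits a linear model to toy data by hand: it defines a mean-squared-error loss, uses autograd to compute gradients, and takes gradient-descent steps. The dataset, learning rate, and step count here are illustrative choices, not part of the course material.

```python
import torch

# Toy dataset generated from the line y = 2x + 1
x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2 * x + 1

# Model parameters; requires_grad=True tells autograd to track them
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.05  # learning rate, a hyperparameter chosen for this example
for step in range(500):
    pred = w * x + b                  # model predictions
    loss = ((pred - y) ** 2).mean()   # mean-squared-error loss
    loss.backward()                   # autograd computes d(loss)/dw, d(loss)/db
    with torch.no_grad():             # update parameters outside the graph
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()                # reset gradients for the next step
        b.grad.zero_()

# w and b should approach the true values 2 and 1
print(w.item(), b.item())
```

In later exercises this manual update loop is typically replaced by a PyTorch optimizer (e.g. `torch.optim.SGD`), which performs the same parameter updates and gradient resets for you.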

Tools