The torch.no_grad() context manager disables gradient tracking. This is useful when we only want to evaluate a model rather than update its parameters (e.g. during inference or testing). Inside the context, autograd does not record the operations performed on tensors, so no computation graph is built for the backward pass, which saves both memory and computation time.

Here's an example of how to use torch.no_grad():

import torch

# Define a simple model
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 5)
        self.fc2 = torch.nn.Linear(5, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create an instance of the model and some input data
model = Model()
input_data = torch.randn(10)

# Make a prediction without gradient calculation
with torch.no_grad():
    output = model(input_data)
    
print(output)

In this example, we create a simple neural network model consisting of two linear layers with a ReLU activation in between. We then create an instance of the model and some random input data. Finally, we use the torch.no_grad() context manager to make a prediction without gradient tracking. The resulting output is a tensor of shape (1,) representing the model's prediction for the given input; because it was produced inside the context, it is not attached to any computation graph.
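To see the effect concretely, we can inspect the requires_grad flag of the outputs. The sketch below (using a hypothetical single-layer model for brevity) compares a tracked and an untracked forward pass, and also shows that torch.no_grad() can be applied as a decorator to an entire function:

```python
import torch

layer = torch.nn.Linear(10, 1)
x = torch.randn(10)

# Default behavior: autograd records the operation,
# so the output participates in a computation graph.
y_tracked = layer(x)
print(y_tracked.requires_grad)  # True

# Inside no_grad(): nothing is recorded; the output is detached.
with torch.no_grad():
    y_detached = layer(x)
print(y_detached.requires_grad)  # False

# torch.no_grad() also works as a decorator,
# disabling tracking for the whole function body.
@torch.no_grad()
def predict(model, inputs):
    return model(inputs)

print(predict(layer, x).requires_grad)  # False
```

Because the detached output carries no graph, calling backward() on values derived from it would raise an error, which is exactly the behavior we want during inference.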
