How to calculate the gradient with respect to the "Input" layer in Caffe?


Question

I want to implement the algorithm proposed in the paper "Generalizing to unseen domains via adversarial data augmentation" using the Caffe framework. I need to compute the gradient with respect to the input layer and add it onto the input blob. In PyTorch this can be done with grad = torch.autograd.grad(loss, data)[0], but as far as I know Caffe has no equivalent function. How can I compute the gradient of the "Input" layer in Caffe? By the "Input" layer I mean the input image in semantic segmentation.

I have tried calling net->input_blobs()[0]->cpu_diff() after backpropagation, but the values in cpu_diff are all 0, so Caffe evidently does not compute the gradient of the input layer by default. The overall algorithm is as shown in the image from the paper.
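For context, here is a minimal sketch of what the intended update might look like at the whole-net level in Caffe. It assumes the prototxt sets force_backward: true (without it, Caffe skips the backward computation for the input blob, which would explain the all-zero cpu_diff), that the net has a single input blob produced by the "Input" layer, and that its top is a loss layer. The function name PerturbInput and the step size gamma are illustrative, not part of Caffe.

#include <caffe/caffe.hpp>

using caffe::Blob;
using caffe::Net;

// One adversarial-augmentation step: x <- x + gamma * dL/dx.
// Assumes the net prototxt contains `force_backward: true`; without it,
// Caffe does not propagate diffs back to the input blob and its diff
// stays zero.
void PerturbInput(Net<float>* net, float gamma) {
  Blob<float>* input = net->input_blobs()[0];

  net->Forward();   // computes the loss
  net->Backward();  // fills the diffs, including the input blob's diff

  // Counterpart of torch.autograd.grad(loss, data)[0] in the PyTorch snippet.
  const float* grad = input->cpu_diff();
  float* data = input->mutable_cpu_data();

  // Gradient-ascent step on the input image.
  for (int i = 0; i < input->count(); ++i) {
    data[i] += gamma * grad[i];
  }
}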


Answer 1

Score: 0

To get what you want, try something like:

// Seed the diff of the top (output) blob with 1.0 so that backward
// computes the gradient of the sum of its elements w.r.t. the bottom.
for (int i = 0; i < top_vec[0]->count(); i++) {
  top_vec[0]->mutable_cpu_diff()[i] = 1.0;
}

// Propagate the seeded diffs down to the bottom blob.
net->Backward(top_vec, propagate_down, bottom_vec);

// bottom_vec[0]->cpu_diff() now holds the gradient w.r.t. the input.
for (int i = 0; i < bottom_vec[0]->count(); i++) {
  std::cout << i << " : " << bottom_vec[0]->cpu_diff()[i] << std::endl;
}




