Grad-cam always placing the heatmap in the same area

Question
Here is the part of my code relevant to the issue:
def forward_hook(module, input, output):
    activation.append(output)

def backward_hook(module, grad_in, grad_out):
    grad.append(grad_out[0])

model.layer4[-1].register_forward_hook(forward_hook)
model.layer4[-1].register_backward_hook(backward_hook)

grad = []
activation = []

loader_iter = iter(dataloader_test)
for _ in range(50):
    data, target, meta = next(loader_iter)
    count1 = 0
    for d, t, m in zip(data, target, meta):
        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))
        # add back a batch dimension
        d = d.unsqueeze(0)
        output = model(d)
        output[:, 4].backward()
        # get the gradients and activations collected in the hooks
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()
I printed the grads and they all look the same across iterations. Anyone have some ideas for me?
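For context on how these arrays become a heatmap: the grads and fmap collected above are typically reduced to a Grad-CAM map by global-average-pooling the gradients into per-channel weights, taking a weighted sum of the activations, and applying a ReLU. A minimal NumPy sketch with random stand-in data (the (512, 7, 7) shape is an assumption matching a typical ResNet layer4 output):

```python
import numpy as np

# Stand-ins for the (C, H, W) arrays produced by the squeeze() calls above.
rng = np.random.default_rng(0)
grads = rng.standard_normal((512, 7, 7))  # hooked gradients
fmap = rng.standard_normal((512, 7, 7))   # hooked activations

weights = grads.mean(axis=(1, 2))  # global-average-pool gradients -> channel weights
cam = np.maximum((weights[:, None, None] * fmap).sum(axis=0), 0)  # weighted sum + ReLU
cam = cam / (cam.max() + 1e-8)     # normalize to [0, 1] for display

print(cam.shape)  # (7, 7)
```

If grads is identical on every iteration, cam (and hence the heatmap) will land in the same place every time, which matches the symptom described.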
Answer 1
Score: 1
It seems like you are accumulating the gradients and activations across every iteration of the loop. Clear the grad and activation lists at the start of each iteration, right before the inner loop:
loader_iter = iter(dataloader_test)
for _ in range(50):
    grad.clear()
    activation.clear()
    data, target, meta = next(loader_iter)
    count1 = 0
    for d, t, m in zip(data, target, meta):
        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))
        # add back a batch dimension
        d = d.unsqueeze(0)
        output = model(d)
        output[:, 4].backward()
        # get the gradients and activations collected in the hooks
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()
        count1 += 1  # advance the index so the next sample reads its own hook entries