Grad-CAM always placing the heatmap in the same area

Question

Here is the part of my code relevant to the issue:

def forward_hook(module, input, output):
    activation.append(output)

def backward_hook(module, grad_in, grad_out):
    grad.append(grad_out[0])


model.layer4[-1].register_forward_hook(forward_hook)
model.layer4[-1].register_backward_hook(backward_hook)
grad = []
activation = []


loader_iter = iter(dataloader_test)
for _ in range(50):
    data, target, meta = next(loader_iter)
    count1 = 0
    for d, t, m in zip(data, target, meta):
        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))

        # add a batch dimension
        d = d.unsqueeze(0)
        output = model(d)

        output[:, 4].backward()
        # get the gradients and activations collected in the hooks
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()

I printed the grads and they all look the same regardless of the iteration. Does anyone have any ideas for me?

Answer 1

Score: 1

It seems like you are accumulating the gradients and activations across every iteration of the loop, so grad[count1] keeps pointing at entries collected during earlier batches. Clear the grad and activation lists at the start of each outer iteration, right before the inner loop, and advance count1 as you move through the batch:

loader_iter = iter(dataloader_test)
for _ in range(50):
    # drop the entries the hooks collected for the previous batch
    grad.clear()
    activation.clear()

    data, target, meta = next(loader_iter)
    count1 = 0
    for d, t, m in zip(data, target, meta):
        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))

        # add a batch dimension
        d = d.unsqueeze(0)
        output = model(d)

        output[:, 4].backward()
        # get the gradients and activations collected in the hooks
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()

        # step to the entry recorded for the next sample in the batch
        count1 += 1
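
For completeness, here is a minimal sketch of how the collected grads and fmap are typically turned into an actual Grad-CAM heatmap. This step is not part of the original post; it assumes both arrays come out of the squeeze above as NumPy arrays of shape (C, H, W):

import numpy as np

def gradcam_heatmap(grads, fmap):
    # channel weights: global-average-pool the gradients over the spatial dims
    weights = grads.mean(axis=(1, 2))
    # weighted sum of the feature maps across channels
    cam = (weights[:, None, None] * fmap).sum(axis=0)
    # ReLU keeps only regions that positively influence the target class
    cam = np.maximum(cam, 0)
    # normalize to [0, 1] so it can be rendered as a heatmap
    return cam / (cam.max() + 1e-8)

heatmap = gradcam_heatmap(grads, fmap)  # then resize to the input size and overlay on the image

As a side note, on recent PyTorch versions register_backward_hook is deprecated; register_full_backward_hook takes a hook with the same (module, grad_input, grad_output) signature, so the registration line can be switched over without other changes.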
