Drop in performance when switching from nn.Linear(...) to nn.Parameter(torch.tensor(...))
Question
I am doing some experiments on transformer models using PyTorch and need to make some modifications that require me to look at the different weight matrices separately. To this end I tried replacing some of the nn.Linear() blocks in the self-attention computation with nn.Parameter(torch.tensor()), but I'm finding a substantial drop in performance. Here are the two implementations:
First (with nn.Linear()):
import torch
from torch import nn
from einops import rearrange

class Attention(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64, dropout=0.):
        super().__init__()
        inner_dim = dim_head * heads
        project_out = not (heads == 1 and dim_head == dim)

        self.heads = heads
        self.scale = dim_head ** -0.5
        self.attend = nn.Softmax(dim=-1)
        self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False)
        self.to_out = nn.Sequential(
            nn.Linear(inner_dim, dim),
            nn.Dropout(dropout)
        ) if project_out else nn.Identity()

    def forward(self, x):
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=self.heads), qkv)

        dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        attn = self.attend(dots)

        out = torch.matmul(attn, v)
        out = rearrange(out, 'b h n d -> b n (h d)')
        return self.to_out(out)
Second (with nn.Parameter(torch.tensor())):
class Attention(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64, dropout=0.):
        super().__init__()
        inner_dim = dim_head * heads
        project_out = not (heads == 1 and dim_head == dim)

        self.dim_head = dim_head
        self.heads = heads
        self.scale = dim_head ** -0.5
        self.attend = nn.Softmax(dim=-1)
        self.to_q = nn.Parameter(torch.randn(dim, inner_dim))
        self.to_k = nn.Parameter(torch.randn(dim, inner_dim))
        self.to_v = nn.Parameter(torch.randn(dim, inner_dim))
        self.projection = nn.Parameter(torch.randn(inner_dim, dim))
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        q, k, v = x @ self.to_q, x @ self.to_k, x @ self.to_v
        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=self.heads), (q, k, v))

        dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        attn = self.attend(dots)

        out = torch.matmul(attn, v)
        out = rearrange(out, 'b h n d -> b n (h d)')
        out = out @ self.projection
        out = self.dropout(out)
        return out
Any explanation for why these two methods perform differently would help a lot. Thanks.
Answer 1
Score: 1
My guess is that weight initialization might be the problem. Note that nn.Linear will, by default, initialize the weights of the layer using a scaled uniform distribution in order to keep the variance small. In your second implementation, on the other hand, you initialize from a standard Gaussian, which has a much higher variance (std = 1).
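A quick check makes the gap concrete (dim = 512 below is just an assumed example width; the heads/dim_head defaults are taken from the code above):
import torch
from torch import nn

dim, inner_dim = 512, 8 * 64

# nn.Linear draws its weight from a scaled uniform distribution
# (Kaiming-uniform by default), so the std shrinks with the input width.
linear = nn.Linear(dim, inner_dim * 3, bias=False)
print(linear.weight.std().item())   # roughly 0.026 for dim=512

# torch.randn draws from a standard normal, so the std stays near 1.0,
# and the pre-softmax scores in the second module are correspondingly larger.
raw = nn.Parameter(torch.randn(dim, inner_dim))
print(raw.std().item())             # roughly 1.0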
I would suggest trying the following and seeing if that fixes the issue:
# std = sqrt(2 / (fan_in + fan_out)), i.e. Xavier/Glorot normal
self.to_q = nn.Parameter(torch.randn(dim, inner_dim) * np.sqrt(2 / (dim + inner_dim)))
self.to_k = nn.Parameter(torch.randn(dim, inner_dim) * np.sqrt(2 / (dim + inner_dim)))
self.to_v = nn.Parameter(torch.randn(dim, inner_dim) * np.sqrt(2 / (dim + inner_dim)))
This will apply Xavier initialization to the weights of your layer. The same effect can be achieved with:
nn.init.xavier_normal_(self.to_q)
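As a minimal sketch of wiring this into the second __init__ (the empty-tensor-then-initialize pattern here is my suggestion, not something from the original post):
self.to_q = nn.Parameter(torch.empty(dim, inner_dim))
self.to_k = nn.Parameter(torch.empty(dim, inner_dim))
self.to_v = nn.Parameter(torch.empty(dim, inner_dim))
self.projection = nn.Parameter(torch.empty(inner_dim, dim))
# Xavier-normal init keeps the variance of activations roughly constant
# across the projections, matching what nn.Linear's default achieves in spirit.
for w in (self.to_q, self.to_k, self.to_v, self.projection):
    nn.init.xavier_normal_(w)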
If you want to know more about why we should initialize the Q, K, V matrices this way, see https://pi-tau.github.io/posts/transformer/#multi-head-attention-layer. There you will also find an explanation of why we need to scale the scores by dim_head ** -0.5.
Also, if you only want access to the weight matrices, then I see no reason to replace nn.Linear with something else. You can get the weights of the layer with self.to_qkv.weight.data.
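For example, a small sketch of recovering the separate projection matrices from the fused layer (this relies on nn.Linear storing its weight with shape (out_features, in_features)):
# to_qkv.weight has shape (inner_dim * 3, dim); chunking along dim 0 gives
# the Q, K and V projection matrices in the same order that
# self.to_qkv(x).chunk(3, dim=-1) splits the layer's output.
w_q, w_k, w_v = self.to_qkv.weight.chunk(3, dim=0)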