
HDFS exclude datanodes in AddblockRequestProto

Question


I am implementing datanode failover for writes in HDFS, so that HDFS can still write a block when the block's first datanode fails.

The algorithm is as follows. First, the failed node is identified. Then, a new block is requested. The HDFS protobuf API provides an ExcludeNodes field, which I used to tell the Namenode not to allocate the new block there. failedDatanodes contains the identified failed datanodes, and they are correct in the logs.

req := &hdfs.AddBlockRequestProto{
	Src:          proto.String(bw.src),
	ClientName:   proto.String(bw.clientName),
	ExcludeNodes: failedDatanodes,
}

However, the namenode still allocates the block on the failed datanodes.

Anyone knows why? Did I miss anything here?
Thank you.

Answer 1

Score: 0


I found the solution: first abandon the block, and then request a new one. In the previous design, the newly requested block could not replace the old one.
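The fix above maps to two namenode RPCs: abandonBlock followed by addBlock. Below is a minimal sketch of that sequence in Go. It uses simplified stand-in structs defined locally; the real AbandonBlockRequestProto and AddBlockRequestProto are generated from Hadoop's hdfs protobuf definitions, so the field shapes here (plain strings instead of `proto.String` pointers, a stub `ExtendedBlockProto`, etc.) are assumptions for illustration only.

```go
package main

import "fmt"

// Simplified stand-ins for the generated Hadoop protobuf messages
// (hypothetical; the real types come from hdfs.proto).
type DatanodeInfoProto struct{ ID string }
type ExtendedBlockProto struct{ BlockId uint64 }

// AbandonBlockRequestProto tells the namenode to discard the failed block.
type AbandonBlockRequestProto struct {
	B      *ExtendedBlockProto // the block to abandon
	Src    string              // file path
	Holder string              // client name (lease holder)
}

// AddBlockRequestProto asks for a replacement block, excluding bad nodes.
type AddBlockRequestProto struct {
	Src          string
	ClientName   string
	ExcludeNodes []*DatanodeInfoProto
}

// recoverBlock builds the two requests in the order the answer describes:
// abandon the failed block first, then request a new block while excluding
// the datanodes that failed.
func recoverBlock(failed *ExtendedBlockProto, src, client string,
	badNodes []*DatanodeInfoProto) (*AbandonBlockRequestProto, *AddBlockRequestProto) {
	abandon := &AbandonBlockRequestProto{B: failed, Src: src, Holder: client}
	add := &AddBlockRequestProto{
		Src:          src,
		ClientName:   client,
		ExcludeNodes: badNodes,
	}
	return abandon, add
}

func main() {
	ab, ad := recoverBlock(&ExtendedBlockProto{BlockId: 42}, "/tmp/f", "client-1",
		[]*DatanodeInfoProto{{ID: "dn1"}})
	fmt.Println(ab.B.BlockId, len(ad.ExcludeNodes)) // prints: 42 1
}
```

The key point is ordering: without the abandon step, the namenode still considers the old block part of the file, so the newly requested block cannot replace it.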

huangapple
  • Published on 2016-11-08 14:12:17
  • Please keep this link when reposting: https://go.coder-hub.com/40480134.html