HDFS exclude datanodes in AddBlockRequestProto

Question
I am implementing datanode failover for writes in HDFS, so that HDFS can still write a block when the block's first datanode fails.
The algorithm is as follows: first, the failed node is identified; then, a new block is requested. The HDFS protobuf API provides an ExcludeNodes parameter, which I use to tell the Namenode not to allocate the new block on those nodes. failedDatanodes holds the identified failed datanodes, and the logs confirm they are correct.
req := &hdfs.AddBlockRequestProto{
	Src:          proto.String(bw.src),
	ClientName:   proto.String(bw.clientName),
	ExcludeNodes: failedDatanodes,
}
But the Namenode still allocates the block on the failed datanodes.
Does anyone know why? Am I missing something here?
Thank you.
Answer 1

Score: 0
I found the solution: first abandon the block, and then request a new one. In the previous design, the newly requested block could not replace the old one.
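The abandon-then-request ordering can be sketched as follows. This is a minimal, self-contained illustration, not the real client: DatanodeInfo, Block, and namenode are hypothetical stand-ins for the actual Hadoop protobuf messages and RPCs (AbandonBlockRequestProto and AddBlockRequestProto in ClientNamenodeProtocol), and it only models how the exclude list takes effect once the old block has been given up.

```go
package main

import "fmt"

// Hypothetical stand-ins for the generated Hadoop protobuf types.
type DatanodeInfo struct{ ID string }
type Block struct{ Nodes []DatanodeInfo }

// namenode models the two RPCs involved in the fix.
type namenode struct{ datanodes []DatanodeInfo }

// abandonBlock drops the half-written block so that a replacement
// can be allocated for the same file position.
func (nn *namenode) abandonBlock(b *Block) { b.Nodes = nil }

// addBlock allocates a new block, skipping every node in exclude
// (mirroring AddBlockRequestProto.ExcludeNodes).
func (nn *namenode) addBlock(exclude []DatanodeInfo) *Block {
	blk := &Block{}
	for _, dn := range nn.datanodes {
		excluded := false
		for _, ex := range exclude {
			if ex.ID == dn.ID {
				excluded = true
				break
			}
		}
		if !excluded {
			blk.Nodes = append(blk.Nodes, dn)
		}
	}
	return blk
}

func main() {
	nn := &namenode{datanodes: []DatanodeInfo{{"dn1"}, {"dn2"}, {"dn3"}}}
	failed := []DatanodeInfo{{"dn1"}}

	old := &Block{Nodes: []DatanodeInfo{{"dn1"}, {"dn2"}}}
	nn.abandonBlock(old)               // step 1: give up the failed block
	replacement := nn.addBlock(failed) // step 2: request a new one, excluding dn1

	fmt.Println(replacement.Nodes) // → [{dn2} {dn3}]
}
```

The key point is the order: requesting a new block while the old one is still attached to the file cannot replace it, so the old block must be abandoned first.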
Comments