Go Webapp Cluster Leader Election

Question

I'm writing a fairly complex Go webapp that I want to make highly available. I'm planning on having multiple VMs running the app, with a load-balancer directing traffic between them.

Where it gets complex is that the webapp has a sort of database book-keeping routine running, which I only want (at most) one instance of at any time. So if I have three webapp VMs, only one of them should be doing the book-keeping.

(Yes, I'm aware that I could split the book-keeping off into a separate VM instance entirely, but the code has been rather heavily integrated with the rest of the webapp.)

I've spent several hours looking at things like etcd, raft, bully, memberlist, Pacemaker, and so on and so forth. These all seem like quite a lot of information to absorb to accomplish what I'm after, or I can't see a clear way of using them.

What I would like, in this specific use case, is a system whereby Go webapp nodes automatically detect each other and elect a "leader" to do the book-keeping. Ideally this would scale anywhere from 2 to 10 nodes, and not require manually adding IP addresses to config files (but possible, if necessary).

I was thinking in the case of a network partition or something, where one node cannot see the others, I wouldn't want it to elect itself as a leader, because it would be possible to have two nodes attempting to do book-keeping at the same time. That also means that if I stripped down the cluster to being just a single VM, no book-keeping would occur, but that could be tolerated for a brief period during maintenance, or I could set some sort of flag somewhere.

I'm wondering if someone could point me in the right direction, and hopefully how I can accomplish this with low complexity, while leveraging existing code libraries as much as possible.

Answer 1

Score: 2

Based on your fault tolerance and consistency requirements - in particular preventing split brain in a partition - a consensus algorithm like Raft is what you most definitely want. But even though Raft was designed for understandability, it still requires significant expertise to implement correctly. Therefore, as others have mentioned, you should look into existing services or implementations.

ZooKeeper (ZAB), etcd (Raft), and Consul (Raft) are the most widely used systems for doing things like this. Considering that you want your VMs to scale from 2 to 10 nodes, this is most likely the way you want to go. Raft and other consensus algorithms have quorum requirements that can make scaling in this manner less practical if the algorithm is directly embedded in your VMs. By using an external service, your VMs simply become clients of the consensus service and can scale independently of it.
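
To make that concrete, here is a minimal sketch of the client approach using etcd's election recipe from the go.etcd.io/etcd/client/v3/concurrency package. The endpoint address, election key prefix, node name, and runBookkeeping hook are all placeholder assumptions, not something your app already has:

```go
// A minimal sketch, assuming an etcd cluster reachable at etcd.internal:2379.
// The election key prefix, node name, and runBookkeeping function are
// placeholders standing in for the existing webapp code.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

// runBookkeeping stands in for the existing bookkeeping routine; it should
// stop promptly when ctx is cancelled.
func runBookkeeping(ctx context.Context) {
	<-ctx.Done()
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"etcd.internal:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session's lease (TTL in seconds) is what releases leadership
	// automatically if this node dies or is partitioned away.
	session, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	election := concurrency.NewElection(session, "/myapp/bookkeeper") // placeholder prefix

	// Campaign blocks until this node becomes the leader.
	if err := election.Campaign(context.Background(), "node-1"); err != nil {
		log.Fatal(err)
	}

	// Run bookkeeping only while the session (and therefore leadership) is alive.
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		<-session.Done() // lease expired or session closed: leadership is no longer guaranteed
		cancel()
	}()
	runBookkeeping(ctx)
}
```

Every webapp instance would run the same loop; whichever node holds the lease does the bookkeeping, and a node cut off from the etcd quorum loses its lease rather than electing itself, which matches your partition requirement.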

Alternatively, if you don't want to depend on an external service for coordination, the Raft website has an exhaustive list of implementations in various languages, some of which are consensus services and some of which can be embedded. However, note that many of them are incomplete. At a minimum, any Raft implementation suitable for production must have implemented log compaction. Without log compaction, servers can really only operate for a finite amount of time - until the disk fills up with logs.
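
If you do go the embedded route, hashicorp/raft is one of the Go implementations on that list. Below is a rough sketch, with a placeholder node ID, bind address, and data directory, in-memory log/stable stores, and a no-op FSM, showing how leadership changes surface on LeaderCh; a production setup would need persistent stores and a real FSM with snapshots to get the log compaction mentioned above:

```go
// A rough sketch of embedding hashicorp/raft purely for leader election.
// Node ID, bind address, and data directory are placeholders; the no-op FSM
// and in-memory stores are only here to keep the example short.
package main

import (
	"io"
	"log"
	"net"
	"os"
	"time"

	"github.com/hashicorp/raft"
)

// noopFSM satisfies raft.FSM; we only care about who the leader is, not
// about replicating application state.
type noopFSM struct{}

func (noopFSM) Apply(*raft.Log) interface{}         { return nil }
func (noopFSM) Snapshot() (raft.FSMSnapshot, error) { return noopSnapshot{}, nil }
func (noopFSM) Restore(io.ReadCloser) error         { return nil }

type noopSnapshot struct{}

func (noopSnapshot) Persist(sink raft.SnapshotSink) error { return sink.Close() }
func (noopSnapshot) Release()                             {}

func main() {
	bindAddr := "10.0.0.1:7000" // placeholder: this VM's Raft address

	cfg := raft.DefaultConfig()
	cfg.LocalID = raft.ServerID("node-1") // placeholder: unique per VM

	addr, err := net.ResolveTCPAddr("tcp", bindAddr)
	if err != nil {
		log.Fatal(err)
	}
	transport, err := raft.NewTCPTransport(bindAddr, addr, 3, 10*time.Second, os.Stderr)
	if err != nil {
		log.Fatal(err)
	}

	snapshots, err := raft.NewFileSnapshotStore("raft-data", 2, os.Stderr) // placeholder dir
	if err != nil {
		log.Fatal(err)
	}

	// In-memory stores keep the sketch short; production wants raft-boltdb or similar.
	r, err := raft.NewRaft(cfg, noopFSM{}, raft.NewInmemStore(), raft.NewInmemStore(), snapshots, transport)
	if err != nil {
		log.Fatal(err)
	}

	// Bootstrap once on the first node; the other VMs are added later via AddVoter.
	if err := r.BootstrapCluster(raft.Configuration{
		Servers: []raft.Server{{ID: cfg.LocalID, Address: transport.LocalAddr()}},
	}).Error(); err != nil {
		log.Printf("bootstrap skipped: %v", err) // already bootstrapped on restart
	}

	// LeaderCh signals when this node gains or loses leadership. Only the current
	// leader runs bookkeeping; a minority partition can never elect a leader,
	// which avoids two nodes doing bookkeeping at once.
	for isLeader := range r.LeaderCh() {
		if isLeader {
			log.Println("became leader: start bookkeeping")
		} else {
			log.Println("lost leadership: stop bookkeeping")
		}
	}
}
```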

Tags: cluster-computing, distributed-system, go, high-availability, web-applications