Golang tour distributed pattern

Question
According to this article, the app-engine front end and the playground back end communicate through RPC calls. Multiple app-engine front-end instances and playground instances can be created to support scaling.

I am wondering what pattern(s) (solutions) can be used to load-balance work between front-end requests and back-end instances while keeping RPC.
One solution may be to use a single global work queue into which tasks are put, each carrying a 'Reply-To' header. This header points to a per-front-end-instance queue where the response is placed. Something like the following schema (from the RabbitMQ tutorial), with rpc_queue shared between the back-end instances:
I am not sure this would be a good approach, especially since the whole system fails if the shared queue goes offline (but how can that be handled?).
Thank you.
Answer 1

Score: 1
As an answer and a follow-up to the comments I received on the first post, I developed Indenter, a small proof of concept based on the proposed idea of a service-discovery daemon (though for simplicity I use etcd instead of ZooKeeper).

I wrote an article about it and released the code, in case it interests someone one day:

Indenter: a scalable, fault-tolerant, distributed web service replicating the Go Playground's architecture.