REST requests from multiple clients: collect all requests on a per-second basis, process them in bulk, and send a response back to each client

Question

We have a web server that receives hundreds of client requests every second.

  1. Collect all REST requests from clients on a per-second basis.
  2. For the collected requests, prepare a bulk request to Elasticsearch and get the response.
  3. Parse the Elasticsearch response and prepare an individual response for each REST client.
  4. Send each response back to its REST client.

I can take care of the Elasticsearch bulk request and need help creating a REST API that solves the problem stated above.

Currently, the server makes a single Elasticsearch call for each client request, which is very expensive; I need a solution that avoids this.

Answer 1

Score: 1

Well, there are multiple possible solutions, but the general approach is the same: store requests in a thread-safe manner; schedule a job in a separate thread, at the desired rate, that bulk-loads the stored data; and notify the request handlers in some way when the bulk load is complete.

In a bit more detail, it could be done as follows:

  1. For each request, create a CompletableFuture and put it, along with the received data, into a thread-safe collection, for example an ArrayBlockingQueue<DataAndFutureContainer> or a ConcurrentHashMap<DataToLoad, CompletableFuture>.
  2. Schedule a job at the desired rate via @Scheduled (if using Spring) or Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate().
  3. That job should retrieve (for example via ArrayBlockingQueue.poll() or ConcurrentHashMap.remove()) all data available at the moment, do the bulk load (or whatever you want), and fulfil the related CompletableFutures via .complete() with the result of the bulk load.
  4. The request handler, which created the future in step 1, waits on it via future.get() and thus gets the result of the bulk load (see the sketch after this list).
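
Here is a minimal sketch of that pattern, assuming the queue-plus-scheduled-job variant. DataToLoad, BulkResult, RequestBatcher and bulkLoad(...) are placeholder names for your own request payload, Elasticsearch response type and bulk call, not real APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RequestBatcher {

    // Placeholder types: substitute your own request payload and Elasticsearch result.
    public record DataToLoad(String query) {}
    public record BulkResult(String payload) {}

    // Step 1: container pairing the request data with the future the handler waits on.
    private record Pending(DataToLoad data, CompletableFuture<BulkResult> future) {}

    private final BlockingQueue<Pending> queue = new ArrayBlockingQueue<>(10_000);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public RequestBatcher() {
        // Step 2: run the bulk job once per second.
        scheduler.scheduleAtFixedRate(this::flush, 1, 1, TimeUnit.SECONDS);
    }

    // Called by the request handler for every incoming request.
    public CompletableFuture<BulkResult> submit(DataToLoad data) {
        CompletableFuture<BulkResult> future = new CompletableFuture<>();
        queue.add(new Pending(data, future)); // throws if the queue is full; back-pressure point
        return future;                        // step 4: the handler waits on this future
    }

    // Step 3: drain everything collected so far, bulk-load it, complete the futures.
    private void flush() {
        List<Pending> batch = new ArrayList<>();
        queue.drainTo(batch); // bulk equivalent of repeated poll()
        if (batch.isEmpty()) {
            return;
        }
        try {
            List<BulkResult> results = bulkLoad(batch); // your Elasticsearch _bulk call
            for (int i = 0; i < batch.size(); i++) {
                batch.get(i).future().complete(results.get(i));
            }
        } catch (Exception e) {
            // Fail every waiting handler rather than leaving the futures hanging.
            batch.forEach(p -> p.future().completeExceptionally(e));
        }
    }

    // Placeholder: must return one result per submitted item, in the same order.
    private List<BulkResult> bulkLoad(List<Pending> batch) {
        throw new UnsupportedOperationException("replace with a real bulk request");
    }
}
```

A request handler would call batcher.submit(data) and either block on the returned future with future.get() or complete the HTTP response asynchronously once it resolves.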

Also, some optimisations are available depending on your framework. For example, with Spring you could return a DeferredResult from the request handler and use it instead of blocking on the CompletableFuture. If you are using Spring WebFlux, you could return Mono.fromFuture().
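
As an illustration of the DeferredResult variant, assuming Spring MVC and reusing the hypothetical RequestBatcher sketched above (the endpoint path and timeout are made up), a controller could look roughly like this:

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class SearchController {

    private final RequestBatcher batcher = new RequestBatcher();

    @PostMapping("/search")
    public DeferredResult<RequestBatcher.BulkResult> search(
            @RequestBody RequestBatcher.DataToLoad data) {
        // 5s timeout; the servlet thread is released while the batch job runs.
        DeferredResult<RequestBatcher.BulkResult> deferred = new DeferredResult<>(5_000L);
        batcher.submit(data).whenComplete((result, error) -> {
            if (error != null) {
                deferred.setErrorResult(error);
            } else {
                deferred.setResult(result);
            }
        });
        return deferred;
    }
}
```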

That's a brief description of one approach; be careful with the details.
