Java gRPC server for long-lived streams effective implementation

Question

I would like to understand the part of the gRPC framework that deals with resource management for long-lived streams.
Let's say we have an infinite source of rare events (once a second or so) that we want to stream to clients by means of a gRPC stream.
The events are generated by a single application thread on the server.

I see two possible implementations to stream events:

  1. Spin in the caller thread within the RPC call and communicate with the source via a (blocking) queue.
  2. Expose the StreamObserver to the event-generating thread(s) and populate all client streams from there.

Option one seems straightforward but a bit heavy on thread count: one thread per client for a sparse stream seems like overkill. Each thread takes some heap, occupies the scheduler, and so on.
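
To make option one concrete, here is a minimal sketch. It assumes a hypothetical proto-generated service with a server-streaming subscribe method; EventServiceGrpc, EventRequest, and Event are illustrative names, not from any real API:

```java
import io.grpc.StatusRuntimeException;
import io.grpc.stub.StreamObserver;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Option 1: each RPC registers a queue and then blocks on it in the caller
// thread, so every connected client occupies one server thread.
public class SpinningEventService extends EventServiceGrpc.EventServiceImplBase {

    // One queue per connected client, filled by the single producer thread.
    private final Set<BlockingQueue<Event>> subscribers = ConcurrentHashMap.newKeySet();

    /** Called by the application's event-generating thread for each event. */
    public void publish(Event event) {
        subscribers.forEach(queue -> queue.offer(event)); // drops if a queue is full
    }

    @Override
    public void subscribe(EventRequest request, StreamObserver<Event> responseObserver) {
        BlockingQueue<Event> queue = new LinkedBlockingQueue<>(1024);
        subscribers.add(queue);
        try {
            while (!Thread.currentThread().isInterrupted()) {
                responseObserver.onNext(queue.take()); // blocks the caller thread
            }
            responseObserver.onCompleted();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (StatusRuntimeException e) {
            // The client cancelled or the connection broke; just drop out.
        } finally {
            subscribers.remove(queue);
        }
    }
}
```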

Option two looks more resource-friendly. However, I couldn't find any material on the internet to support this approach. I am not sure that the gRPC server won't close the ServerCall or Context unexpectedly, leading to an abrupt close of the stream. Or there could be other side effects I am not aware of.
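
And a corresponding sketch of option two, where the service method only registers the observer and returns, leaving the producer thread to write to every registered stream (same hypothetical names as above):

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Option 2: the handler returns immediately after registering the observer;
// no thread is parked per client, and the single producer thread fans out.
public class SharedObserverEventService extends EventServiceGrpc.EventServiceImplBase {

    private final Set<ServerCallStreamObserver<Event>> clients = ConcurrentHashMap.newKeySet();

    @Override
    public void subscribe(EventRequest request, StreamObserver<Event> responseObserver) {
        ServerCallStreamObserver<Event> observer =
                (ServerCallStreamObserver<Event>) responseObserver;
        // Must be registered before this method returns; removes the client
        // when it disconnects or cancels.
        observer.setOnCancelHandler(() -> clients.remove(observer));
        clients.add(observer);
        // Returning without onCompleted() keeps the stream open indefinitely.
    }

    /** Called by the single event-generating application thread. */
    public void publish(Event event) {
        for (ServerCallStreamObserver<Event> observer : clients) {
            observer.onNext(event); // gRPC buffers internally if the client is slow
        }
    }
}
```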

So my questions are:

  1. What is the recommended way of implementing long-lived streams?
  2. Are there any other possible approaches to solving the described problem?
  3. Is option two legitimate, or should one stick with the one-thread-per-client approach?

I've tried to create a prototype with option two, and it seems to work.
But I would still like to get answers.

Answer 1

Score: 2

Both approaches are fine from gRPC's perspective. You are free to stick with the one-client-one-thread approach when it is convenient. For streaming cases it is generally best to avoid spinning in the caller thread; you can use a second thread to send instead, which is quite normal. On the other hand, passing the StreamObservers to a single thread for management has resource benefits and is a good approach too.

You should consider how to respond to slow clients when events are generated faster than they can be sent (i.e., flow control).

You will want to cast the provided StreamObserver to ServerCallStreamObserver to access additional APIs. It provides setOnReadyHandler(Runnable) and isReady() for detecting slow clients. gRPC Java allows you to call onNext(...) even when the stream is not ready, but doing so will buffer the message.

The on-ready handler is a callback that runs on the same thread as the caller thread, so if you spin in the caller thread you won't be able to receive that callback. This is why, for streaming, it is generally best to avoid spinning in the caller thread.
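
As an illustration, a per-client flow-control helper might look like the following. The wrapper class, the unbounded queue, and the Event type are assumptions made for the sketch, not anything prescribed above; a production version would also set an on-cancel handler:

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Per-client flow control: send only while the transport is ready and let
// the on-ready callback drain whatever queued up while the client was slow.
// Construct this inside the service method, since the handlers may only be
// registered before the RPC method returns.
class FlowControlledSubscription {
    private final ServerCallStreamObserver<Event> observer;
    private final Queue<Event> pending = new ConcurrentLinkedQueue<>();

    FlowControlledSubscription(StreamObserver<Event> responseObserver) {
        this.observer = (ServerCallStreamObserver<Event>) responseObserver;
        // Invoked on a gRPC thread every time the stream becomes writable again.
        this.observer.setOnReadyHandler(this::drain);
    }

    /** Called by the producer thread for each new event. */
    void offer(Event event) {
        pending.add(event);
        drain();
    }

    // Synchronized so the producer thread and the on-ready callback never
    // interleave their onNext(...) calls.
    private synchronized void drain() {
        Event event;
        while (observer.isReady() && (event = pending.poll()) != null) {
            observer.onNext(event); // stop before gRPC has to buffer
        }
    }
}
```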

Using a dedicated sending thread has the advantage of a clear queue and a clean separation between producer and consumer. You can choose a queue size and decide what to do when that queue is full. Directly interacting with the StreamObservers from one thread uses fewer resources. The two options differ in complexity. The "correct" choice depends on scale, resource considerations, the specifics of your service, and your preferences.
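
A sketch of the dedicated-sender variant, with a bounded queue and an explicit drop-oldest policy when it fills up; the policy, the queue size, and all names are illustrative choices:

```java
import io.grpc.stub.ServerCallStreamObserver;
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// One bounded queue between the producer and a single sender thread that
// fans each event out to every connected client.
class DedicatedSender {
    private final BlockingQueue<Event> outbound = new ArrayBlockingQueue<>(1024);
    private final Set<ServerCallStreamObserver<Event>> clients = ConcurrentHashMap.newKeySet();

    DedicatedSender() {
        Thread sender = new Thread(this::drainLoop, "event-sender");
        sender.setDaemon(true);
        sender.start();
    }

    /** Call from inside the service method, so the cancel handler can be set. */
    void register(ServerCallStreamObserver<Event> observer) {
        observer.setOnCancelHandler(() -> clients.remove(observer));
        clients.add(observer);
    }

    /** Producer side: what to do with a full queue is an explicit choice. */
    void publish(Event event) {
        while (!outbound.offer(event)) {
            outbound.poll(); // drop the oldest event to make room
        }
    }

    private void drainLoop() {
        try {
            while (true) {
                Event event = outbound.take();
                for (ServerCallStreamObserver<Event> client : clients) {
                    try {
                        if (client.isReady()) {
                            client.onNext(event); // skip clients that are not keeping up
                        }
                    } catch (RuntimeException e) {
                        clients.remove(client); // cancelled mid-send; drop it
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```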

Posted by huangapple on 2020-09-25 23:17:50. Source: https://go.coder-hub.com/64066930.html