Netty server boss and worker thread pool sizes
Question
If my understanding of the Netty server is correct, the main boss event loop thread pool (default size is 2 * availableProcessors) accepts client connections and then offloads request-processing work to the worker threads.
Now, the question I have is, what should be the thread pool size for worker threads? If a worker is doing some blocking operation, say waiting for a network call response, then shouldn't the worker thread pool be large enough (say, 200 threads) to handle concurrent client requests, since each worker thread is blocked while serving a client request? I see most ServerBootstrap examples setting the worker thread pool size to 2 * availableProcessors, which is the same as the main thread pool size. Won't this limit the number of concurrently processed client requests to 2 * availableProcessors?
Adding to my confusion, I see one more thread pool being used, called the handler thread pool (EventExecutorGroup), which is said to handle request processing. Then what's the use of the worker thread pool?
The three types of thread pools are created as follows in the examples I came across.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup(10);
EventExecutorGroup handlerThread = new DefaultEventExecutorGroup(20);
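For reference, here is a minimal sketch (my own, not taken from those examples) of how the three pools are typically wired into a ServerBootstrap; the port and the inline echo handler are illustrative placeholders:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public final class ThreadPoolWiringSketch {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);                  // accepts incoming connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(10);               // performs non-blocking socket I/O
        EventExecutorGroup handlerThread = new DefaultEventExecutorGroup(20); // runs handler logic off the I/O threads

        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Handlers added with handlerThread run on that group
                            // instead of on the worker event loop.
                            ch.pipeline().addLast(handlerThread, "business", new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    ctx.writeAndFlush(msg); // placeholder echo; blocking work would go here
                                }
                            });
                        }
                    });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
            handlerThread.shutdownGracefully();
        }
    }
}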
And if the worker threads are doing some CPU-intensive processing (as opposed to waiting for a network call response as in the example above), the worker threads will contend with the main boss thread. Won't this scenario slow down the boss thread, leaving client connections waiting in the queue to be accepted or rejected? In that scenario, I believe the server is going to perform the same as a traditional blocking server. Could you share your thoughts on this?
Answer 1
Score: 3
> Now, the question I have is, what should be the thread pool size for worker threads? If a worker is doing some blocking operation, say waiting for a network call response
You should never block on Netty worker threads, especially for network I/O -- that is the entire point of using a non-blocking asynchronous framework like Netty. That said, in order to interoperate with synchronous/blocking code such as JDBC, Netty provides a way to offload such execution onto a separate thread pool using an EventExecutorGroup. Have a look at this snippet from the Netty docs:
// Tell the pipeline to run MyBusinessLogicHandler's event handler methods
// in a different thread than an I/O thread so that the I/O thread is not blocked by
// a time-consuming task.
// If your business logic is fully asynchronous or finished very quickly, you don't
// need to specify a group.
pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
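As a rough sketch of that pattern, the handler referenced above might look like the following; the class body and the simulated blocking call are illustrative, not taken from the Netty docs:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Because this handler is added to the pipeline with an EventExecutorGroup,
// its callbacks run on that group's threads rather than on the worker event loop.
public class MyBusinessLogicHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // Simulated blocking work (e.g. a JDBC query or a synchronous HTTP call).
        Thread.sleep(200);
        // Hand the result back to the pipeline; the actual socket write is still
        // performed by the channel's I/O thread.
        ctx.writeAndFlush(msg);
    }
}

Sizing that separate DefaultEventExecutorGroup, rather than the worker NioEventLoopGroup, is then what determines how many such blocking requests can be in flight at once.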
> And if the worker threads are doing some CPU-intensive processing
This is a more general question, not specific to Netty, but if your workload is heavily CPU-intensive, an asynchronous network I/O framework may not be a good fit for you.