ExecutorService is slow and sometimes hangs

Question

I am calling multiple REST APIs and use an ExecutorService to process them in parallel. Consumers hit my application more than 10 times per second, and I have observed that the ExecutorService is very slow to respond. The application is deployed on Kubernetes with a Tomcat web server. My code is below; I am not sure what is causing the slowdown.

```java
ExecutorService WORKER_THREAD_POOL = Executors.newFixedThreadPool(100);

Collection<Callable<BasePolicy>> tasks = new ArrayList<>();
for (String str : datalist) {
    tasks.add(new MyThreadPool(str)); // MyThreadPool internally calls the REST API using Google HttpRequest.
}
List<Future<BasePolicy>> futures = null;
try {
    long startProcessingTime = System.currentTimeMillis();
    futures = WORKER_THREAD_POOL.invokeAll(tasks);
    WORKER_THREAD_POOL.shutdown();

    if (!WORKER_THREAD_POOL.awaitTermination(60000, TimeUnit.SECONDS)) {
        WORKER_THREAD_POOL.shutdownNow();
    }

    long totalProcessingTime = System.currentTimeMillis() - startProcessingTime;
    log.info("total time to complete the thread pool - " + totalProcessingTime);
} catch (InterruptedException e) {
    log.error("error occurred during ASYNC processing");
    e.printStackTrace();
}
log.info("Finished waiting, all threads completed");
for (Future<BasePolicy> mFuture : futures) {
    // my logic
}
```
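
Since `MyThreadPool` itself is not shown, here is a purely illustrative sketch of what such a `Callable<BasePolicy>` might look like if it issues one blocking GET with the google-http-client; `BasePolicy.fromJson` is a hypothetical parsing helper, not code from the question. The key point is that each task performs blocking network I/O, so 100 pool threads mean up to 100 concurrent outbound calls.

```java
import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpRequestFactory;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import java.util.concurrent.Callable;

// Hypothetical reconstruction of the task submitted to the pool.
public class MyThreadPool implements Callable<BasePolicy> {
    private static final HttpTransport TRANSPORT = new NetHttpTransport();
    private final String url;

    public MyThreadPool(String url) {
        this.url = url;
    }

    @Override
    public BasePolicy call() throws Exception {
        HttpRequestFactory factory = TRANSPORT.createRequestFactory();
        HttpResponse response = factory.buildGetRequest(new GenericUrl(url)).execute(); // blocking network I/O
        try {
            String body = response.parseAsString();
            return BasePolicy.fromJson(body); // hypothetical parsing helper
        } finally {
            response.disconnect();
        }
    }
}
```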




# Answer 1
**Score**: 2

> and I observed the ExecutorService is very slow to respond.

```java
ExecutorService WORKER_THREAD_POOL = Executors.newFixedThreadPool(100);
```

Let's start with this question: how many cores does your machine have?
For `newFixedThreadPool`, it is recommended to start with as many threads as your machine has cores (if the tasks are long-running). With a value of `100`, your CPU will be busy with **scheduling and context switching**. On top of that, since it is a `newFixedThreadPool`, that many threads stay in the pool even under light load, which does not help the CPU.

As suggested in *Effective Java*, `newCachedThreadPool` usually does the right thing.
[Here][1] is more on thread pools.
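
As a minimal, self-contained sketch (not from the original answer), sizing a fixed pool to the cores the JVM can see, or switching to a cached pool for short-lived I/O-bound calls, could look like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingSketch {
    public static void main(String[] args) {
        // Cores visible to the JVM; on container-aware JVMs (Java 10+, recent
        // Java 8 builds) this reflects the pod's CPU limit rather than the node.
        int cores = Runtime.getRuntime().availableProcessors();

        // Fixed pool sized to the cores - a sane starting point for long-running tasks.
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cores);

        // Cached pool - reuses idle threads and drops them after 60s of inactivity,
        // often a better fit for many short, I/O-bound REST calls.
        ExecutorService ioBoundPool = Executors.newCachedThreadPool();

        System.out.println("Cores visible to the JVM: " + cores);

        cpuBoundPool.shutdown();
        ioBoundPool.shutdown();
    }
}
```

Whether a cached pool fits here depends on how long each REST call takes; under sustained load an unbounded cached pool can spawn a very large number of threads, so measure before committing to it.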

If you are expecting a really heavy load, I think deploying the application on multiple servers is the wiser choice (this depends on the capacity of a single server). More threads are not going to help and can eventually make your application slower.

  [1]: https://stackoverflow.com/questions/17957382/fixedthreadpool-vs-cachedthreadpool-the-lesser-of-two-evils




# Answer 2
**Score**: 0

When you deploy an application in Kubernetes without specifying its `memory` and `cpu` requirements, the scheduler places the pod on a node on a best-effort basis, which can lead to starvation and even eviction of the pod.

You can help the scheduler make a better placement decision by specifying the `memory` and `cpu` requirements, as shown below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```

This ensures the pod is scheduled onto a node that can satisfy the starting requirement of 64Mi memory and 250m CPU, and allows it to burst up to 128Mi memory and 500m CPU when needed.
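
As a small companion check (not part of the original answer), you could run something like this inside the pod to see what the container-aware JVM actually gets, and compare it against the limits above:

```java
public class ContainerResourcesCheck {
    public static void main(String[] args) {
        // CPUs the JVM sees; with a 500m CPU limit this is typically reported as 1.
        int cpus = Runtime.getRuntime().availableProcessors();

        // Maximum heap the JVM will use, derived from the container memory limit
        // when the default (or -XX:MaxRAMPercentage) heap sizing applies.
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);

        System.out.println("CPUs visible to the JVM : " + cpus);
        System.out.println("Max heap (MiB)          : " + maxHeapMiB);
    }
}
```

This makes it easier to tell whether a slow ExecutorService is caused by the pool configuration or simply by the pod being granted less CPU and memory than expected.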


  • Posted by huangapple on 2020-08-01 14:37:46
  • Please keep the original link when reposting: https://go.coder-hub.com/63202508.html