too many open files in Redis

Question


In our project, we are using a single instance of Redis (hosted on GCP) with total memory of 4 GB, out of which only 2 GB is used as of now. The total connection limit is 1000. A few days ago, we noticed an unexpected error (for a few minutes) while reading from Redis cache - "dial tcp xx.xx.xx.xx:6379: socket: too many open files"

Now, I checked that there was no surge in either CPU utilisation or memory usage of Redis, nor did the Redis instance go down. After a few minutes, the error disappeared on its own, although it seems to refer to a high number of connections being opened at the same time. I also checked for a default connection pool size (if any), and found this in the official docs of the go-redis library (which we're using):

> To improve performance, go-redis automatically manages a pool of network connections (sockets). By default, the pool size is 10 connections per every available CPU as reported by runtime.GOMAXPROCS. In most cases, that is more than enough and tweaking it rarely helps.
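For reference, the default pool size quoted above can be overridden when constructing the client. This is only a sketch, assuming the go-redis v8 API (the address and the sizes are illustrative, not recommendations):

```go
package main

import (
	"context"

	"github.com/go-redis/redis/v8"
)

func main() {
	// PoolSize overrides the default of 10 connections per CPU
	// (runtime.GOMAXPROCS); the values below are illustrative only.
	rdb := redis.NewClient(&redis.Options{
		Addr:         "localhost:6379", // placeholder address
		PoolSize:     100,              // max simultaneous pooled connections
		MinIdleConns: 10,               // keep some connections warm
	})
	defer rdb.Close()

	_ = rdb.Ping(context.Background()).Err()
}
```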

So I'm unable to understand what's causing this issue, or how to fix it if it arises again in the future. Can someone please help?

Answer 1

Score: 1


This is not an issue with Redis, it is likely an issue in your code.

Processes in Linux have limits imposed on them; one limit is the number of 'open file descriptors' a process can have at any one time.

A file descriptor is created by a process to enable it to access a resource and perform operations against it, such as reading and writing. A file descriptor does not just represent a traditional 'file' on disk; it is also used to represent network sockets that a program may read from or write to.

In your case, you see:
"dial tcp xx.xx.xx.xx:6379: socket: too many open files"

Your program was attempting to open a new network connection to Redis; in doing so, it had to create a socket, which requires a file descriptor. The error you got back, "too many open files", is due to hitting this limit.

You can do two things:

  1. Raise this limit; read about ulimit (https://ss64.com/bash/ulimit.html) or search for your error message, which yields many results.
  2. Investigate why you had too many open files.

The second step is likely to show that you are opening files/sockets and not closing them, causing you to 'leak' descriptors. For example, if each time you query Redis you open a new connection that is never closed, you will eventually run out of file descriptors.

huangapple
  • Posted on 2022-07-07 17:14:36
  • When reposting, please keep the original link: https://go.coder-hub.com/72895204.html