Linux edge-triggered epoll: avoiding multiple recv() calls to detect close

Question

I'm trying to understand if it's possible to use edge-triggered epoll and avoid having to call recv() multiple times for every epoll-triggered READ event...


Take this scenario:

  • The server sends the client, say, 64 bytes and then closes the socket.
  • The client's ET epoll_wait triggers a read event. Now let's say the close made it into the trigger (I have seen this race, where the close counts in this READ event).
  • The client reads into a buffer of, say, 4k. Now, to be optimal, I would hope that if recv returns fewer than 4k bytes (the buffer size), then you know there is no more data and you can get back into epoll_wait. I think in general this does work, EXCEPT for the close() case. Since a closed socket is signaled by recv returning 0 bytes on the client, it would appear that you HAVE to call recv again to ensure that you don't get a 0 back (in the general case, you would get -1 with EWOULDBLOCK and continue on your merry way to the next epoll_wait call).


Given this, it seems like one would always have to call recv twice per read event when using edge-triggered epoll... am I missing something here? It seems grossly inefficient.
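To make the problem concrete, here is a minimal sketch of the read loop the question describes, assuming a non-blocking socket already registered with EPOLLIN | EPOLLET; handle_data(), handle_close(), and handle_error() are hypothetical application callbacks, not part of any real API:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

void handle_data(const char *buf, size_t len);  /* hypothetical */
void handle_close(int fd);                      /* hypothetical */
void handle_error(int fd);                      /* hypothetical */

static void on_read_event(int fd)
{
    char buf[4096];

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0) {
            handle_data(buf, (size_t)n);
            /* Even after a short read we must loop again: a pending
             * close only shows up as a 0 return from the NEXT recv(). */
            continue;
        }
        if (n == 0) {                /* peer closed the connection */
            handle_close(fd);
            return;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;                  /* drained; back to epoll_wait() */
        handle_error(fd);            /* genuine error */
        return;
    }
}
```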



Answer 1

Score: 0


https://man7.org/linux/man-pages/man7/epoll.7.html

> For stream-oriented files (e.g., pipe, FIFO, stream socket), the
> condition that the read/write I/O space is exhausted can also be
> detected by checking the amount of data read from / written to the
> target file descriptor. For example, if you call read(2) by asking to
> read a certain amount of data and read(2) returns a lower number of
> bytes, you can be sure of having exhausted the read I/O space for the
> file descriptor. The same is true when writing using write(2). (Avoid
> this latter technique if you cannot guarantee that the monitored
> file descriptor always refers to a stream-oriented file.)
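A sketch of the quoted technique, using the same hypothetical callbacks as above: on a stream socket, a recv() that returns fewer bytes than requested means the read I/O space is exhausted, so the loop can return to epoll_wait() without a second call:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

void handle_data(const char *buf, size_t len);  /* hypothetical */
void handle_close(int fd);                      /* hypothetical */
void handle_error(int fd);                      /* hypothetical */

static void on_read_event(int fd)
{
    char buf[4096];

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return;              /* nothing was pending after all */
            handle_error(fd);
            return;
        }
        if (n == 0) {                /* EOF: peer closed */
            handle_close(fd);
            return;
        }
        handle_data(buf, (size_t)n);
        if ((size_t)n < sizeof(buf)) /* short read: I/O space exhausted */
            return;                  /* safe to go back to epoll_wait() */
    }
}
```

Note that a short read only proves the data is drained: EOF is still reported only as a 0 return from a later recv(), which is exactly the gap the question describes and which the next answer closes with EPOLLRDHUP.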

Answer 2

Score: 0


OK, I've found the answer; I'm posting it here to help anyone else who hits this down the line.
To account for this scenario in edge-triggered read processing, you need to add EPOLLRDHUP to your epoll interest set. With this set, on the last "collapsed" data + close event, epoll_wait will return both EPOLLIN and EPOLLRDHUP. The application should read the data and then treat the EPOLLRDHUP as a close event.
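A minimal sketch of that registration and dispatch, reusing the hypothetical handle_close() and the read loop from the sketches above:

```c
#include <stdint.h>
#include <string.h>
#include <sys/epoll.h>

void handle_close(int fd);      /* hypothetical */
void on_read_event(int fd);     /* ET read loop sketched above */

static int register_client(int epfd, int fd)
{
    struct epoll_event ev;

    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN | EPOLLRDHUP | EPOLLET;  /* extra interest flag */
    ev.data.fd = fd;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Drain the data first, then honor the hang-up, so the final bytes
 * arriving with the "collapsed" data + close event are not lost. */
static void dispatch(int fd, uint32_t events)
{
    if (events & EPOLLIN)
        on_read_event(fd);
    if (events & EPOLLRDHUP)
        handle_close(fd);   /* peer closed; no extra recv() needed */
}
```

Checking EPOLLIN before EPOLLRDHUP matters here: the data delivered with the collapsed event must be read out before the connection is torn down.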
