How is a Tcl server supposed to handle fulfilling multiple requests without blocking?

Question


I'm a bit confused about how a Tcl server works. I've been coding a small version that works pretty well for my local application, but I'd like to understand it better conceptually. I have Ashok Nadkarni's book and used it as a starting point, and I thought I understood the examples about how the server does not block. I also went through some of the code from the tclhttpd3.5.1 server to get started (some months ago now; I was using that general method of keeping state to read repeatedly until I was pointed to using coroutines).

I think my code has been working because one desktop application with very rapidly processed requests rarely runs into blocking issues. I thought all was correct because the previous session could be restored quickly even though multiple database requests are (or appear to be) processed concurrently.

My plan was to use a small number of WebSockets to handle the SQLite database requests and use HTTP for any audio/video requests. (The application is like a digital study library.) When I started to code the audio parts to play while leaving the rest of the UI active, so that database requests could be made during playback, I started to wonder more about the way I had structured it. I got it all to work for my particular case, and it will likely handle everything quickly, but I'd like to understand better.

I was stupidly thinking that the server just handles everything without ever blocking because it can handle multiple connections, but I put all my code in the same script as the server code. So, if I "fabricate" a blocking scenario, the server requests pile up but are eventually fulfilled. Although the server doesn't block, if one of the requests is time-consuming, the program itself blocks until each request completes, apart from exceptions like fcopy with the -command option. I know that source can be used to break the script into component scripts, but that doesn't make it non-blocking.

If one were going to use Tcl as a "real" server rather than just for one application, how are the requests to be processed without blocking each other? I mean, are they to be executed as separate threads, separate child processes, or separate interpreters? Maybe those three are almost the same; I don't know.

Should the server script call multiple instances of other scripts to fulfill the requests rather than calling procedures within the same "single-instance" script, leaving the operating system to process them as efficiently as possible? But, if so, how is information shared between the processes? If two similar requests arrived at nearly the same time, could two separate instances of the same script be loaded to fulfill the requests concurrently, such that one does not block the other?

In looking over the code in the tclhttpd3.5.1 server, it appears that, at times, he is using multiple interpreters and aliasing something, but I don't follow what's going on there.

I apologize for not knowing the proper terminology. Thank you for any guidance you may be able to provide.

Answer 1

Score: 2


One of the pieces of Tcl's low-level implementation is an event loop. It allows Tcl to wait for an event in the OS (typically a file descriptor becoming readable or writable, which covers a great many things, or a message from another thread). In particular, it can wait on many file descriptors at once, and it can wait for a defined amount of time (that's how timer events work). The implementation of these things isn't trivial, but the API surfaced by the event loop system is: you give a callback to be called when the event happens (with chan event or after), and the event loop implementation looks after the rest.
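
To make the callback style concrete, here is a minimal, self-contained sketch (the proc name tick and the choice of stdin are illustrative, not from the answer): a timer event that reschedules itself, a channel event, and a vwait call to enter the event loop.

```tcl
# Timer callback; "tick" is a made-up name.
proc tick {} {
    puts "one second elapsed"
    after 1000 tick              ;# reschedule ourselves
}
after 1000 tick

# Channel callback: fires whenever stdin has something to read.
chan event stdin readable {
    if {[gets stdin line] < 0} { exit }
    puts "you typed: $line"
}

vwait forever                    ;# one way of entering the event loop
```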

Things like the http package and chan copy and so on all build on this foundation.
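
For instance, a background chan copy is how a large file (say an audio clip) can be streamed to a client without tying up the event loop. This is only a sketch: the file name, the $sock variable, and the copyDone callback are assumed names, not anything from the original question or answer.

```tcl
# $sock is assumed to be an already-open client socket.
set f [open media/clip.mp3 rb]
chan configure $f    -translation binary
chan configure $sock -translation binary

# Copy in the background; the event loop drives the transfer.
chan copy $f $sock -command [list copyDone $f $sock]

proc copyDone {f sock bytes {error ""}} {
    # Invoked from the event loop when the copy finishes (or fails);
    # $bytes is the amount copied, $error is non-empty on failure.
    close $f
    close $sock
}
```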

The event loop in a particular thread is synchronous: it doesn't receive events when your code is actively doing something. Each thread will have its own event loop (not an important distinction until you use multiple threads).

There are three ways that the event loop may run:

  1. Implicitly. This is how things work when you've got the Tk package loaded; when you get to the end of your script, the event loop is entered.
  2. Explicitly with vwait (Tk's tkwait is a minor variation on this). In this case, you're running the event loop in waiting mode until the given variable is set (which fires a trivial C trace to set a flag that the outer waiting loop picks up; it's exactly as simple as it sounds). A small sketch of this mode follows the list.
  3. Explicitly with update. In this case, you're running the event loop in no-waiting mode to consume events that have already become ready without actually waiting for anything. (update idletasks is a variation that only picks up idle events; idle events are things that are scheduled to happen only when there's nothing else to do, and are the core of Tk's Secret Sauce. They're not much used in non-Tk applications.)
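
As a small illustration of the second mode (only a sketch; the variable name done is made up):

```tcl
set done 0
after 2000 {set done 1}    ;# a timer event that fires in two seconds
vwait done                 ;# run the event loop until "done" is written
puts "two seconds passed; other events were serviced in the meantime"
```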

It's not a good idea to run an event loop within an event loop. You can, it's not illegal or anything like that, but it tends to produce annoying bugs. Rewriting your code to avoid that (coroutines help!) tends to produce better code.
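
As an example of the coroutine style (a minimal sketch with made-up names such as accept and client$sock, and with error handling omitted), each connection gets its own coroutine that yields back to the single outer event loop instead of starting a nested one:

```tcl
proc accept {sock addr port} {
    chan configure $sock -blocking 0 -buffering line
    # One coroutine per connection, resumed by the readable event.
    coroutine client$sock apply {{sock} {
        chan event $sock readable [info coroutine]
        while 1 {
            yield
            if {[chan gets $sock line] >= 0} {
                chan puts $sock $line          ;# echo the line back
            } elseif {[chan eof $sock]} {
                chan close $sock
                return
            }
            # Otherwise: no complete line yet; wait for the next event.
        }
    }} $sock
}

socket -server accept 12345
vwait forever
```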

The other major part is non-blocking I/O, which Tcl supports on as many channels as it can (typically anything except an ordinary file, because your OS doesn't usefully support that either). When a channel is blocking, reads from and writes to it can block (i.e., take time to complete). When it is put in non-blocking mode, those reads and writes can instead fail to go through; typically you can ignore writes that fail that way as Tcl's default behaviour there is to just schedule the write to happen later, but for reads that means you can get a failure to read despite some data being available, and that's what chan blocked and chan pending give you a handle on.
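
In the callback style that can look like this sketch (onReadable is a made-up name, and $sock is assumed to be an already-open socket):

```tcl
chan configure $sock -blocking 0 -buffering line
chan event $sock readable [list onReadable $sock]

proc onReadable {sock} {
    if {[chan gets $sock line] >= 0} {
        puts "got a complete line: $line"
    } elseif {[chan eof $sock]} {
        chan close $sock
    } elseif {[chan blocked $sock]} {
        # Some bytes may have arrived, but not a whole line yet;
        # do nothing and wait for the next readable event.
    }
}
```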

The combination of an event loop and non-blocking I/O lets you do a lot of things with a single thread, and the only times when the single-threadedness will show up will be if you're doing lots of computation or the amount of I/O is getting really large. That's when you use multiple threads (with sending tasks into thread pools being a recommended approach for giving good manageability and resource usage control).
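
A sketch of the thread-pool approach, using the Thread extension's tpool commands (the pool sizes and the posted script are arbitrary examples):

```tcl
package require Thread

# A small pool of worker threads for CPU-heavy or otherwise blocking work.
set pool [tpool::create -minworkers 2 -maxworkers 4]

# Post a job to be run in a worker thread.
set job [tpool::post $pool {
    set total 0
    for {set i 0} {$i < 1000000} {incr i} { incr total $i }
    set total
}]

# Wait until that job has completed, then fetch its result.
tpool::wait $pool [list $job]
puts [tpool::get $pool $job]
```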
