Why Garbage Collect in web apps?

Question
Consider building a web app on a platform where every request is handled by a User Level Thread(ULT) (green thread/erlang process/goroutine/... any light weight thread). Assuming every request is stateless and resources like DB connection are obtained at startup of the app and shared between these threads. What is the need for garbage collection in these threads?
Generally such a thread is short running(a few milliseconds) and if well designed doesn't use more than a few (KB or MB) of memory. If garbage collection of the resources allocated in the thread is done at the exit of the thread and independent of the other threads, then there would be no GC pauses for even the 98th or 99th percentile of requests. All requests would be answered in predictable time.
What is the problem with such a model and why is it not being widely used?
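The model described in the question can be partially approximated in CPython, where reference counting (not the tracing collector) reclaims request-local objects the moment the handler returns. A minimal sketch, assuming nothing escapes the handler (the `RequestScratch` class and handler are illustrative, not from any framework):

```python
import gc
import weakref

class RequestScratch:
    """Stands in for per-request allocations (parsed params, buffers, ...)."""
    pass

def handle_request():
    # All allocations are local to this handler; nothing escapes.
    scratch = RequestScratch()
    probe = weakref.ref(scratch)  # lets us observe when scratch is freed
    return probe

# Even with the tracing collector disabled, per-request objects are
# reclaimed as soon as the handler returns, because their reference
# counts drop to zero (CPython-specific behavior).
gc.disable()
try:
    probe = handle_request()
    print(probe() is None)  # True: scratch was freed at handler exit
finally:
    gc.enable()
```

This only holds while the object graph is acyclic and nothing leaks into shared state; cycles or escaping references are exactly where a tracing collector becomes necessary, which is what the answer below gets at.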
Answer 1

Score: 4
Your assumption might not be true.

> if well designed doesn't use more than a few (KB or MB) of memory

Imagine a function for counting words in a text file which is used in a web app. A naive implementation could be:
    def count_words(text):
        words = text.split()
        count = {}
        for w in words:
            if w in count:
                count[w] += 1
            else:
                count[w] = 1
        return count
It allocates more memory than the text itself.
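A rough measurement makes the point concrete (CPython; exact sizes are implementation-specific). The transient list produced by `text.split()` holds one `str` object per word, and each small CPython string carries tens of bytes of overhead, so the intermediate allocations alone outweigh the input text:

```python
import sys

# Illustrative input; any word-heavy text shows the same effect.
text = "the quick brown fox jumps over the lazy dog " * 100

# The transient list from split() holds one str object per word.
words = text.split()
list_bytes = sys.getsizeof(words) + sum(sys.getsizeof(w) for w in words)
text_bytes = sys.getsizeof(text)

print(text_bytes, list_bytes)  # the word list dwarfs the text itself
```

So even a "short-lived, few-KB" request handler can transiently allocate an order of magnitude more than its input, which is why a real allocator/collector is still needed inside each thread.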
Comments