CPU cache friendly byte slice
Question
I have a logger that writes many messages to stdout in parallel. The problem is that messages written simultaneously get interleaved.
So I had to add a mutex and lock around each print:
l.mu.Lock()
fmt.Fprintf(os.Stdout, format, v...)
l.mu.Unlock()
I want to avoid the locking because I need latency to be as low as possible. I'm fine with occasional pauses, though, and I don't care much about the order of messages.
On my server I have 24 CPUs, each with its own cache. My idea is to keep a per-CPU list of byte slices and then periodically gather all of them and dump them to a log.
Will this work in practice?
I'm feeling that I'm reinventing some existing structure.
Could you please recommend an optimal way to do that?
Answer 1
Score: 1
As with many concurrency problems, I'd use a concurrent queue to solve this one.
https://pkg.go.dev/github.com/antigloss/go/concurrent/container/queue seems to be the kind of data structure that you could use to solve your problem.