How to optimize thread safe queue in Go?
Question
I have the following request queue:
type RequestQueue struct {
	Requests []*http.Request
	Mutex    *sync.Mutex
}

func (rq *RequestQueue) Enqueue(req *http.Request) {
	rq.Mutex.Lock()
	defer rq.Mutex.Unlock()
	rq.Requests = append(rq.Requests, req)
}

func (rq *RequestQueue) Dequeue() (*http.Request, error) {
	rq.Mutex.Lock()
	defer rq.Mutex.Unlock()
	if len(rq.Requests) == 0 {
		return nil, errors.New("dequeue: queue is empty")
	}
	req := rq.Requests[0]
	rq.Requests = rq.Requests[1:]
	return req, nil
}
Is it possible to do this with just the atomic package, without a Mutex, the type being simply type AtomicRequestQueue []*http.Request, and will that bring any performance benefit?
Answer 1
Score: 1
Use a channel, like chan *http.Request. A channel is literally a FIFO queue: what you call Enqueue is just a send operation, c <- req, and what you call Dequeue is just a receive operation, req := <-c.
> Is it possible to do this with just the atomic package
You didn't state the real purpose of this thread-safe queue; however, the use case you presented above seems to need synchronization, i.e. mutually exclusive access to a shared resource. The types in the atomic package only guarantee that the result of an operation will be observed by other threads in a consistent fashion; there is no mutual exclusion involved.
If your queue needs more business logic than you are actually showing, a channel might be too primitive; in that case mutex locking might be your best bet. You may use sync.RWMutex to reduce lock contention if you expect a lot of reads.