How to optimize a thread-safe queue in Go?


Question


I have the following request queue:

import (
    "errors"
    "net/http"
    "sync"
)

type RequestQueue struct {
    Requests []*http.Request
    Mutex    *sync.Mutex
}

// Enqueue appends req to the back of the queue.
func (rq *RequestQueue) Enqueue(req *http.Request) {
    rq.Mutex.Lock()
    defer rq.Mutex.Unlock()
    rq.Requests = append(rq.Requests, req)
}

// Dequeue removes and returns the request at the front of the queue.
func (rq *RequestQueue) Dequeue() (*http.Request, error) {
    rq.Mutex.Lock()
    defer rq.Mutex.Unlock()
    if len(rq.Requests) == 0 {
        return nil, errors.New("dequeue: queue is empty")
    }
    req := rq.Requests[0]
    rq.Requests = rq.Requests[1:]
    return req, nil
}

Is it possible to do this with just the atomic package, without a Mutex, with the type being simply type AtomicRequestQueue []*http.Request, and would that bring any performance benefit?

Answer 1

Score: 1


Use a channel, like chan *http.Request. A channel is literally a FIFO queue.

What you call Enqueue will just be a send operation c <- req, and what you call Dequeue will just be a receive operation req := <-c.
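
For illustration, here is a minimal sketch of that approach, assuming a buffered channel; the ChannelQueue type, the NewChannelQueue constructor, and the buffer size of 16 are illustrative names and values, not something from the original answer. Dequeue uses a non-blocking receive so that it mirrors the error-on-empty behaviour of the asker's method.

import (
    "errors"
    "net/http"
)

// ChannelQueue is a FIFO queue backed by a buffered channel.
type ChannelQueue struct {
    c chan *http.Request
}

// NewChannelQueue creates a queue that can hold up to size requests
// before Enqueue blocks.
func NewChannelQueue(size int) *ChannelQueue {
    return &ChannelQueue{c: make(chan *http.Request, size)}
}

// Enqueue sends the request into the channel; it blocks if the buffer is full.
func (q *ChannelQueue) Enqueue(req *http.Request) {
    q.c <- req
}

// Dequeue receives the next request, or returns an error immediately
// if the queue is empty, like the original Dequeue.
func (q *ChannelQueue) Dequeue() (*http.Request, error) {
    select {
    case req := <-q.c:
        return req, nil
    default:
        return nil, errors.New("dequeue: queue is empty")
    }
}

A caller would create it with q := NewChannelQueue(16) and then call q.Enqueue(req) and q.Dequeue() exactly like the mutex-based version; the channel's own synchronization replaces the explicit lock.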

> Is it possible to do this with just the atomic package

You didn't state the real purpose of this thread-safe queue; however, the use case you presented above seems to need synchronization, i.e. mutually exclusive access to the shared resource. The types in the atomic package only guarantee that the result of an operation is observed by other threads in a consistent fashion; they do not provide mutual exclusion.
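
As a hedged illustration of that distinction (not code from the answer): a single counter can be kept correct with the atomic package alone, because the whole read-modify-write is one indivisible operation. The variable and function names below are made up for the example.

import "sync/atomic"

// enqueued counts how many requests have ever been queued.
var enqueued int64

// countEnqueue bumps the counter; atomic.AddInt64 makes the entire
// read-modify-write a single indivisible operation, so no mutex is needed.
func countEnqueue() {
    atomic.AddInt64(&enqueued, 1)
}

The queue's Dequeue, by contrast, is a check ("is the slice empty?") followed by a mutation ("drop the first element"); even if each of those steps used an atomic load or store, another goroutine could remove the last element between them, which is why the pair needs a mutex or a channel.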

If your queue needs more business logic than you are actually showing, a channel might be too primitive; in that case mutex locking might be your best bet. You may use sync.RWMutex to reduce lock contention if you expect to have a lot of reads.
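
A minimal sketch of the read-heavy case, using an RWMutex on a variant of the asker's type; the RWRequestQueue name and the Len method are assumptions added for the example.

import (
    "net/http"
    "sync"
)

// RWRequestQueue is an illustrative variant of RequestQueue using an RWMutex.
type RWRequestQueue struct {
    mu       sync.RWMutex
    requests []*http.Request
}

// Len only reads shared state, so it takes the read lock, which any number
// of goroutines may hold concurrently.
func (rq *RWRequestQueue) Len() int {
    rq.mu.RLock()
    defer rq.mu.RUnlock()
    return len(rq.requests)
}

// Enqueue mutates the slice, so it still needs the exclusive write lock;
// Dequeue would need it as well.
func (rq *RWRequestQueue) Enqueue(req *http.Request) {
    rq.mu.Lock()
    defer rq.mu.Unlock()
    rq.requests = append(rq.requests, req)
}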

huangapple
  • Published on 2021-09-26 21:22:14
  • When reposting, please keep the link to this article: https://go.coder-hub.com/69335342.html