Is there a queue implementation?

Question
Can anyone suggest a Go container for a simple and fast FIFO queue? Go has three different containers: heap, list, and vector. Which one is more suitable for implementing a queue?
Answer 1
Score: 143
In fact, if what you want is a basic and easy-to-use FIFO queue, a slice provides all you need.
queue := make([]int, 0)
// Push to the queue
queue = append(queue, 1)
// Top (just get next element, don't remove it)
x := queue[0]
// Discard top element
queue = queue[1:]
// Is empty ?
if len(queue) == 0 {
fmt.Println("Queue is empty !")
}
Of course, we assume that we can trust the inner implementation of append and slicing to avoid unnecessary resizing and reallocation. For basic usage, this is perfectly sufficient.
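One caveat worth knowing with the slice approach: after `queue = queue[1:]`, the old front slot still keeps its value reachable until the backing array is reclaimed, which matters when elements hold pointers. A minimal sketch of zeroing the slot before re-slicing; the `dequeue` helper name is illustrative, not from the answer above.

```go
package main

import "fmt"

// dequeue pops the front element and zeroes its slot so the garbage
// collector can reclaim whatever the slot pointed to.
func dequeue(q []*string) (*string, []*string) {
	x := q[0]
	q[0] = nil // drop the queue's reference before re-slicing
	return x, q[1:]
}

func main() {
	s := "hello"
	queue := []*string{&s}
	x, queue := dequeue(queue)
	fmt.Println(*x, len(queue))
}
```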
Answer 2
Score: 116
Surprised to see no one has suggested buffered channels yet, at least for a size-bounded FIFO queue.
// Or however many you might need + buffer.
c := make(chan int, 300)
// Push
c <- value
// Pop
x := <-c
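The snippet above is a fragment; here is a runnable sketch of the same idea, plus a non-blocking pop using select. The `tryPop` helper and the buffer size are assumptions, not part of the original answer.

```go
package main

import "fmt"

// tryPop receives without blocking; ok is false when the queue is empty.
func tryPop(c chan int) (int, bool) {
	select {
	case v := <-c:
		return v, true
	default:
		return 0, false
	}
}

func main() {
	c := make(chan int, 300) // or however many you might need

	c <- 1 // push
	c <- 2

	x := <-c // pop (blocks when empty)
	fmt.Println(x)

	if v, ok := tryPop(c); ok {
		fmt.Println(v)
	}
}
```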
Answer 3
Score: 23
Most queue implementations are in one of three flavors: slice-based, linked list-based, and circular-buffer (ring-buffer) based.
- Slice-based queues tend to waste memory because they do not reuse the memory previously occupied by removed items. Also, slice-based queues tend to be single-ended only.
- Linked-list queues can be better about memory reuse, but are generally a little slower and use more memory overall because of the overhead of maintaining links. They can offer the ability to add and remove items from the middle of the queue without moving memory around, but if you are doing much of that, a list is the wrong data structure.
- Ring-buffer queues offer all the efficiency of slices, with the advantage of not wasting memory. Fewer allocations mean better performance. They are just as efficient at adding and removing items from either end, so you naturally get a double-ended queue. So as a general recommendation, I would suggest a ring-buffer based queue implementation. This is what is discussed in the rest of this post.
The ring-buffer based queue reuses memory by wrapping its storage around: as the queue grows beyond one end of the underlying slice, it adds additional nodes at the other end of the slice. See the deque diagram.
Also, a bit of code to illustrate:
// PushBack appends an element to the back of the queue. Implements FIFO when
// elements are removed with PopFront(), and LIFO when elements are removed
// with PopBack().
func (q *Deque) PushBack(elem interface{}) {
q.growIfFull()
q.buf[q.tail] = elem
// Calculate new tail position.
q.tail = q.next(q.tail)
q.count++
}
// next returns the next buffer position wrapping around buffer.
func (q *Deque) next(i int) int {
return (i + 1) & (len(q.buf) - 1) // bitwise modulus
}
This particular implementation always uses a buffer size that is a power of 2, and can therefore compute the bitwise modulus a little more efficiently.
This means the slice needs to grow only when all its capacity is used up. With a resizing strategy that avoids growing and shrinking storage on the same boundary, this makes it very memory efficient.
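PushBack above calls growIfFull(), which the answer does not show. Below is a self-contained sketch of what it might look like, consistent with the power-of-two strategy; the minCap constant and field names are assumptions, and the answer's resize() and PushBack are repeated so the example runs on its own.

```go
package main

import "fmt"

// Deque fields match the snippets in the answer; minCap is an assumption.
type Deque struct {
	buf   []interface{}
	head  int
	tail  int
	count int
}

const minCap = 16 // a power of 2, so the bitwise modulus works

// growIfFull allocates the initial buffer, or doubles it when every slot
// is in use, before a push.
func (q *Deque) growIfFull() {
	if len(q.buf) == 0 {
		q.buf = make([]interface{}, minCap)
		return
	}
	if q.count == len(q.buf) {
		q.resize()
	}
}

// resize is the same as the answer's resize, repeated for completeness.
func (q *Deque) resize() {
	newBuf := make([]interface{}, q.count<<1)
	if q.tail > q.head {
		copy(newBuf, q.buf[q.head:q.tail])
	} else {
		n := copy(newBuf, q.buf[q.head:])
		copy(newBuf[n:], q.buf[:q.tail])
	}
	q.head = 0
	q.tail = q.count
	q.buf = newBuf
}

func (q *Deque) next(i int) int { return (i + 1) & (len(q.buf) - 1) }

func (q *Deque) PushBack(elem interface{}) {
	q.growIfFull()
	q.buf[q.tail] = elem
	q.tail = q.next(q.tail)
	q.count++
}

func main() {
	q := &Deque{}
	for i := 0; i < 20; i++ { // crosses the grow boundary at 16
		q.PushBack(i)
	}
	fmt.Println(q.count, len(q.buf))
}
```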
Here is code that resizes the underlying slice buffer:
// resize resizes the deque to fit exactly twice its current contents. This is
// used to grow the queue when it is full, and also to shrink it when it is
// only a quarter full.
func (q *Deque) resize() {
newBuf := make([]interface{}, q.count<<1)
if q.tail > q.head {
copy(newBuf, q.buf[q.head:q.tail])
} else {
n := copy(newBuf, q.buf[q.head:])
copy(newBuf[n:], q.buf[:q.tail])
}
q.head = 0
q.tail = q.count
q.buf = newBuf
}
Another thing to consider is whether you want concurrency safety built into the implementation. You may want to avoid this so that you can do whatever works best for your concurrency strategy, and you certainly do not want it if you do not need it; the same reason you would not want a slice that has some built-in serialization.
There are a number of ring-buffer based queue implementations for Go if you do a search on godoc for deque. Choose one that best suits your tastes.
Answer 4
Score: 13
Either vector or list should work, but vector is probably the way to go. I say this because vector will probably allocate less often than list and garbage collection (in the current Go implementation) is fairly expensive. In a small program it probably won't matter, though.
Answer 5
Score: 11
Edit: a cleaner implementation of a Queue:
package main
import "fmt"
type Queue []interface{}
func (self *Queue) Push(x interface{}) {
*self = append(*self, x)
}
func (self *Queue) Pop() interface{} {
h := *self
var el interface{}
l := len(h)
el, *self = h[0], h[1:l]
// Or use this instead for a Stack
// el, *self = h[l-1], h[0:l-1]
return el
}
func NewQueue() *Queue {
return &Queue{}
}
func main() {
q := NewQueue()
q.Push(1)
q.Push(2)
q.Push(3)
q.Push("L")
fmt.Println(q.Pop())
fmt.Println(q.Pop())
fmt.Println(q.Pop())
fmt.Println(q.Pop())
}
Or just embed a "container/list" inside a simple implementation and expose the interface:
package queue
import "container/list"
// Queue is a queue
type Queue interface {
Front() *list.Element
Len() int
Add(interface{})
Remove()
}
type queueImpl struct {
*list.List
}
func (q *queueImpl) Add(v interface{}) {
q.PushBack(v)
}
func (q *queueImpl) Remove() {
e := q.Front()
q.List.Remove(e)
}
// New returns a new instance of a Queue
func New() Queue {
return &queueImpl{list.New()}
}
Answer 6
Score: 7
To expand on the implementation side, Moraes proposes in his gist some structs for queues and stacks:
// Stack is a basic LIFO stack that resizes as needed.
type Stack struct {
nodes []*Node
count int
}
// Queue is a basic FIFO queue based on a circular list that resizes as needed.
type Queue struct {
nodes []*Node
head int
tail int
count int
}
You can see it in action in this playground example.
Answer 7
Score: 7
Using a slice with an appropriate ("circular") indexing scheme on top still seems to be the way to go. Here's my take on it: https://github.com/phf/go-queue. The benchmarks there also confirm that channels are faster, but at the price of more limited functionality.
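A minimal sketch of such a "circular" indexing scheme, with a fixed-size buffer for brevity: logical position i of the queue lives at physical slot (head + i) % len(buf). The names are illustrative and not taken from the linked repository.

```go
package main

import "fmt"

// ring is a fixed-capacity FIFO over a slice with wraparound indexing.
type ring struct {
	buf  []int
	head int
	n    int
}

func (r *ring) push(v int) bool {
	if r.n == len(r.buf) {
		return false // full; a real deque would grow here
	}
	r.buf[(r.head+r.n)%len(r.buf)] = v
	r.n++
	return true
}

func (r *ring) pop() (int, bool) {
	if r.n == 0 {
		return 0, false
	}
	v := r.buf[r.head]
	r.head = (r.head + 1) % len(r.buf)
	r.n--
	return v, true
}

func main() {
	r := &ring{buf: make([]int, 4)}
	for i := 1; i <= 4; i++ {
		r.push(i)
	}
	r.pop()   // frees a slot at the front
	r.push(5) // wraps around to reuse it
	for {
		v, ok := r.pop()
		if !ok {
			break
		}
		fmt.Println(v)
	}
}
```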
Answer 8
Score: 6
Unfortunately, queues aren't currently part of the Go standard library, so you need to write your own or import someone else's solution. It's a shame, as containers written outside of the standard library aren't able to use generics.
A simple example of a fixed capacity queue would be:
type MyQueueElement struct {
blah int // whatever you want
}
const MAX_QUEUE_SIZE = 16
type Queue struct {
content [MAX_QUEUE_SIZE]MyQueueElement
readHead int
writeHead int
len int
}
func (q *Queue) Push(e MyQueueElement) bool {
if q.len >= MAX_QUEUE_SIZE {
return false
}
q.content[q.writeHead] = e
q.writeHead = (q.writeHead + 1) % MAX_QUEUE_SIZE
q.len++
return true
}
func (q *Queue) Pop() (MyQueueElement, bool) {
if q.len <= 0 {
return MyQueueElement{}, false
}
result := q.content[q.readHead]
q.content[q.readHead] = MyQueueElement{}
q.readHead = (q.readHead + 1) % MAX_QUEUE_SIZE
q.len--
return result, true
}
Gotchas avoided here include not having unbounded slice growth (caused by using the slice [1:] operation to discard) and zeroing out popped elements to ensure their contents are available for garbage collection. Note that in the case of a MyQueueElement struct containing only an int, as here, it will make no difference, but if the struct contained pointers it would.
The solution could be extended to reallocate and copy should an auto growing queue be desired.
This solution is not thread safe, but a lock could be added to Push/Pop if that is desired.
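One way to add that lock, sketched as a standalone type rather than a patch to the code above; the SafeQueue name and the int element type are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// SafeQueue wraps every Push/Pop in a mutex so it can be shared between
// goroutines.
type SafeQueue struct {
	mu   sync.Mutex
	data []int
}

func (q *SafeQueue) Push(v int) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.data = append(q.data, v)
}

// Pop reports ok=false when the queue is empty; the length check happens
// under the lock to avoid a race.
func (q *SafeQueue) Pop() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.data) == 0 {
		return 0, false
	}
	v := q.data[0]
	q.data = q.data[1:]
	return v, true
}

func main() {
	var q SafeQueue
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // concurrent pushes are now safe
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			q.Push(n)
		}(i)
	}
	wg.Wait()
	v, ok := q.Pop()
	fmt.Println(ok, v >= 0 && v < 100)
}
```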
Playground https://play.golang.org/
Answer 9
Score: 4
I also implemented the queue from a slice as above. However, it's not thread-safe, so I decided to add a lock (mutex lock) to guarantee thread safety.
package queue

import (
	"sync"
)

type Queue struct {
	lock   *sync.Mutex
	Values []int
}

func Init() Queue {
	return Queue{&sync.Mutex{}, make([]int, 0)}
}

func (q *Queue) Enqueue(x int) {
	q.lock.Lock()
	defer q.lock.Unlock()
	q.Values = append(q.Values, x)
}

// Dequeue returns nil when the queue is empty. The length check happens
// under the lock to avoid a data race.
func (q *Queue) Dequeue() *int {
	q.lock.Lock()
	defer q.lock.Unlock()
	if len(q.Values) == 0 {
		return nil
	}
	x := q.Values[0]
	q.Values = q.Values[1:]
	return &x
}
You can check my solution on GitHub here: simple queue
Answer 10
Score: 3
You can try something like this:
// queue.go
package queue

import "errors"

type Queue struct {
	data []int
}

func (q *Queue) Enqueue(d int) {
	q.data = append(q.data, d)
}

// Dequeue returns an error when there is nothing to dequeue, matching the
// tests below.
func (q *Queue) Dequeue() (int, error) {
	if q.IsEmpty() {
		return 0, errors.New("nothing to dequeue")
	}
	dequeued := q.data[0]
	q.data = q.data[1:]
	return dequeued, nil
}
func (q *Queue) IsEmpty() bool {
return len(q.data) == 0
}
func NewQueue() *Queue {
return &Queue{
data: make([]int, 0),
}
}
//queue_test.go
package queue
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestQueue_ShouldInitialiseWithEmpty(t *testing.T) {
q := NewQueue()
assert.Equal(t, true, q.IsEmpty())
}
func TestQueue_ShouldErrorIfDequeuePerformedOnEmpty(t *testing.T) {
q := NewQueue()
_, err := q.Dequeue()
assert.NotNil(t, err)
assert.Equal(t, "nothing to dequeue", err.Error())
}
func TestQueue(t *testing.T) {
q := NewQueue()
q.Enqueue(1)
q.Enqueue(2)
q.Enqueue(3)
q.Enqueue(4)
dequeued1, err1 := q.Dequeue()
assert.Nil(t, err1)
assert.Equal(t, 1, dequeued1)
dequeued2, err2 := q.Dequeue()
assert.Nil(t, err2)
assert.Equal(t, 2, dequeued2)
dequeued3, err3 := q.Dequeue()
assert.Nil(t, err3)
assert.Equal(t, 3, dequeued3)
dequeued4, err4 := q.Dequeue()
assert.Nil(t, err4)
assert.Equal(t, 4, dequeued4)
}
Answer 11
Score: 1
A list is enough for both a queue and a stack: use l.Remove(l.Front()) for the queue poll, l.Remove(l.Back()) for the stack poll, and PushBack for the add operation on both. A list keeps front and back pointers, so the time complexity is O(1).
Answer 12
Score: 1
type Queue struct {
slice []int
len int
}
func newq() Queue {
q := Queue{}
q.slice = make([]int, 0)
q.len = 0
return q
}
func (q *Queue) Add(v int) {
q.slice = append(q.slice, v)
q.len++
}
func (q *Queue) PopLeft() int {
a := q.slice[0]
q.slice = q.slice[1:]
q.len--
return a
}
func (q *Queue) Pop() int {
a := q.slice[q.len-1]
q.slice = q.slice[:q.len-1]
q.len--
return a
}
For your basic needs, the code above will do.
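A usage sketch: because the type supports both PopLeft (front) and Pop (back), it doubles as a queue and a stack. The type is repeated here so the example is self-contained.

```go
package main

import "fmt"

// Queue is repeated from the answer above, compacted, so this runs alone.
type Queue struct {
	slice []int
	len   int
}

func (q *Queue) Add(v int)    { q.slice = append(q.slice, v); q.len++ }
func (q *Queue) PopLeft() int { a := q.slice[0]; q.slice = q.slice[1:]; q.len--; return a }
func (q *Queue) Pop() int     { a := q.slice[q.len-1]; q.slice = q.slice[:q.len-1]; q.len--; return a }

func main() {
	q := Queue{}
	q.Add(1)
	q.Add(2)
	q.Add(3)
	fmt.Println(q.PopLeft()) // FIFO from the front
	fmt.Println(q.Pop())     // LIFO from the back
	fmt.Println(q.len)
}
```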
Answer 13
Score: 1
O(1) time for EnQueue, DeQueue, Front and Rear lookups.
O(n) space for capacity.
type Queue struct {
front int
rear int
size int
capacity int
q []string
}
func (q *Queue) IsFull() bool {
return q.size == q.capacity
}
func (q *Queue) IsEmpty() bool {
return q.size == 0
}
func (q *Queue) EnQueue(s string) error {
if q.IsFull() {
return fmt.Errorf("queue is full")
}
q.rear = (q.rear + 1) % q.capacity
q.q[q.rear] = s
q.size++
return nil
}
func (q *Queue) DeQueue() (string, error) {
if q.IsEmpty() {
return "", fmt.Errorf("queue is empty")
}
defer func() { q.front, q.size = (q.front+1)%q.capacity, q.size-1 }()
return q.q[q.front], nil
}
func (q *Queue) Front() (string, error) {
if q.IsEmpty() {
return "", fmt.Errorf("queue is empty")
}
return q.q[q.front], nil
}
func (q *Queue) Rear() (string, error) {
if q.IsEmpty() {
return "", fmt.Errorf("queue is empty")
}
return q.q[q.rear], nil
}
// Print returns the queue contents in order, handling wraparound of the
// circular buffer.
func (q *Queue) Print() []string {
	out := make([]string, 0, q.size)
	for i := 0; i < q.size; i++ {
		out = append(out, q.q[(q.front+i)%q.capacity])
	}
	return out
}
func New(capacity int) *Queue {
q := &Queue{
capacity: capacity,
rear: capacity - 1,
q: make([]string, capacity),
}
return q
}
func main() {
queue := New(6)
queue.EnQueue("10")
queue.EnQueue("20")
queue.EnQueue("30")
queue.EnQueue("40")
queue.EnQueue("50")
queue.EnQueue("60")
fmt.Println(queue.EnQueue("70")) // Test capacity-exceeded EnQueue.
fmt.Println(queue.Print())
fmt.Println(queue.DeQueue())
fmt.Println(queue.DeQueue())
fmt.Println(queue.DeQueue())
fmt.Println(queue.Print())
fmt.Println(queue.DeQueue())
fmt.Println(queue.DeQueue())
fmt.Println(queue.DeQueue())
fmt.Println(queue.DeQueue()) // Test Empty DeQueue.
fmt.Println(queue.Print())
queue.EnQueue("80")
fmt.Println(queue.Print())
fmt.Println(queue.DeQueue())
fmt.Println(queue.Print())
}
Answer 14
Score: 0
I implemented a queue that will expand the underlying buffer automatically:
package types
// Note: this queue does not shrink the underlying buffer.
type queue struct {
buf [][4]int // change to the element data type that you need
head int
tail int
}
func (q *queue) extend(need int) {
if need-(len(q.buf)-q.head) > 0 {
if need-len(q.buf) <= 0 {
copy(q.buf, q.buf[q.head:q.tail])
q.tail = q.tail - q.head
q.head = 0
return
}
newSize := len(q.buf) * 2
if newSize == 0 {
newSize = 100
}
newBuf := make([][4]int, newSize)
copy(newBuf, q.buf[q.head:q.tail])
q.buf = newBuf
q.tail = q.tail - q.head
q.head = 0
}
}
func (q *queue) push(p [4]int) {
q.extend(q.tail + 1)
q.buf[q.tail] = p
q.tail++
}
func (q *queue) pop() [4]int {
r := q.buf[q.head]
q.head++
return r
}
func (q *queue) size() int {
return q.tail - q.head
}
// put the following into queue_test.go
package types
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestQueue(t *testing.T) {
const total = 1000
q := &queue{}
for i := 0; i < total; i++ {
q.push([4]int{i, i, i, i})
assert.Equal(t, i+1, q.size())
}
for i := 0; i < total; i++ {
v := q.pop()
assert.Equal(t, [4]int{i, i, i, i}, v)
assert.Equal(t, total-1-i, q.size())
}
}
Answer 15
Score: 0
As of Go v1.18, generics have been added, which I would use to make a generic queue. Below is my implementation:
type queue[T any] struct {
bucket []T
}
func newQueue[T any]() *queue[T] {
return &queue[T]{
bucket: []T{},
}
}
func (q *queue[T]) append(input T) {
q.bucket = append(q.bucket, input)
}
func (q *queue[T]) tryDequeue() (T, bool) {
if len(q.bucket) == 0 {
var dummy T
return dummy, false
}
value := q.bucket[0]
var zero T
q.bucket[0] = zero // Avoid memory leak
q.bucket = q.bucket[1:]
return value, true
}
Whenever dequeue is called, the front slot is zeroed and the queue re-sliced to release memory without copying. This isn't thread-safe; in those cases channels are probably better, but one needs to know the queue's capacity to specify a correct buffer size.
For fun, I benchmarked it against a queue that uses interface{}, the way to have a generic solution before Go v1.18.
The test appends and then dequeues 1, 10, 100 and 1,000 integers. In all cases generics are a lot faster, with less memory usage.
Benchmark_queues/QueueGeneric-Size_1-8 38296201 32.78 ns/op 8 B/op 1 allocs/op
Benchmark_queues/QueueInterface-Size_1-8 11626666 147.6 ns/op 16 B/op 1 allocs/op
Benchmark_queues/QueueGeneric-Size_10-8 7846665 168.2 ns/op 160 B/op 2 allocs/op
Benchmark_queues/QueueInterface-Size_10-8 1501284 752.8 ns/op 320 B/op 2 allocs/op
Benchmark_queues/QueueGeneric-Size_100-8 1000000 1088 ns/op 1536 B/op 1 allocs/op
Benchmark_queues/QueueInterface-Size_100-8 240232 6798 ns/op 3072 B/op 1 allocs/op
Benchmark_queues/QueueGeneric-Size_1000-8 120244 13468 ns/op 17920 B/op 3 allocs/op
Benchmark_queues/QueueInterface-Size_1000-8 20310 54528 ns/op 35776 B/op 4 allocs/op
The implementation of the queue using interface{} is given below; error handling is added, which I think is necessary.
type queueInterface struct {
bucket []interface{}
}
func newQueueInterface() *queueInterface {
return &queueInterface{
bucket: []interface{}{},
}
}
func (q *queueInterface) append(input interface{}) error {
if len(q.bucket) != 0 && reflect.TypeOf(q.bucket[0]) != reflect.TypeOf(input) {
return errors.New("input type not same as those already in queue")
}
q.bucket = append(q.bucket, input)
return nil
}
func (q *queueInterface) tryDequeue(out interface{}) (bool, error) {
if len(q.bucket) == 0 {
return false, nil
}
valuePtr := reflect.ValueOf(out)
if valuePtr.Kind() != reflect.Ptr {
return false, errors.New("must be a pointer")
}
value := q.bucket[0]
if valuePtr.Elem().Type() != reflect.TypeOf(value) {
return false, errors.New("output must be of same type as queue elements")
}
valuePtr.Elem().Set(reflect.ValueOf(value))
var zero interface{}
q.bucket[0] = zero // Avoid memory leak
q.bucket = q.bucket[1:]
return true, nil
}
Answer 16
Score: -1
Double-stack implementation: O(1) amortized Enqueue and Dequeue, using slices (which tend to be better for cache misses).
type Queue struct{
enqueue, dequeue Stack
}
func (q *Queue) Enqueue(n *Thing){
q.enqueue.Push(n)
}
func (q *Queue) Dequeue() (*Thing, bool) {
	v, ok := q.dequeue.Pop()
	if ok {
		return v, true
	}
	// Empty dequeue stack: move everything over from the enqueue stack,
	// reversing the order so the oldest element comes out first.
	for {
		v, ok := q.enqueue.Pop()
		if !ok {
			break
		}
		q.dequeue.Push(v)
	}
	return q.dequeue.Pop()
}
type Stack struct{
v []*Thing
}
func (s *Stack)Push(n *Thing){
s.v=append(s.v, n)
}
func (s *Stack) Pop()(*Thing, bool){
if len(s.v) == 0 {
return nil, false
}
lastIdx := len(s.v)-1
v := s.v[lastIdx]
s.v=s.v[:lastIdx]
return v, true
}
Answer 17
Score: -2
A slice can be used to implement a queue.
type queue struct {
values []*int
}
func New() *queue {
queue := &queue{}
return queue
}
func (q *queue) enqueue(val *int) {
q.values = append(q.values, val)
}
//deque function
Update: here is the complete implementation on my GitHub page: https://github.com/raiskumar/algo-ds/blob/master/tree/queue.go