Golang: right way to store a map structure in an LRU cache
Question
I have a structure like map[key]value, and I want to store it in "github.com/golang/groupcache/lru" under a string key, say cacheKey.

Here is my question: I found that whenever I want to update the cached item, I need to Get it first:
    item, ok := cache.Get(cacheKey) // lru's Get returns (interface{}, bool)
    if ok {
        m := item.(map[string]string) // assuming the stored value is a map[string]string
        if _, found := m[key]; found {
            m[key] = newValue
            cache.Add(cacheKey, m)
        }
    }
Is this the right way to do it?

Or, as some people have suggested, do I need to redesign my structure so that I can just call cache.Add(cacheKey, item) whenever I want to update it?

Or should I even use a combined key like cacheKey_key to store that item?
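For illustration, here is a minimal sketch of the combined-key option mentioned in the question, assuming string values; the "_" separator, the "user:42" data, and the cache capacity are arbitrary examples, not from the original post:

    // Sketch: flatten the nested map by giving every entry its own composite
    // string key, so an update is a single Add with no Get-modify-Add cycle.
    package main

    import (
        "fmt"

        "github.com/golang/groupcache/lru"
    )

    // compositeKey builds the flattened key; the "_" separator is an example only.
    func compositeKey(cacheKey, key string) string {
        return cacheKey + "_" + key
    }

    func main() {
        cache := lru.New(128) // capacity chosen arbitrarily for the example

        // Insert or overwrite one field directly.
        cache.Add(compositeKey("user:42", "name"), "alice")

        if v, ok := cache.Get(compositeKey("user:42", "name")); ok {
            fmt.Println(v.(string)) // values are plain strings in this sketch
        }
    }

The trade-off is that each field is then evicted independently, so you can no longer fetch or invalidate everything under one cacheKey in a single operation.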
Answer 1
Score: 0
The code above will work. I looked at the source of the LRU cache you refer to. Here are my notes:

- Whatever you decide, make sure access to this LRU is thread-safe if you plan to use it within goroutines.
- You may store a *map instead of a plain map, which would eliminate the need to call Add.
- If it is OK to add to the map with override, skip the presence check (if v, ok ...).
So, having said that, here is what it becomes:

    var mu sync.Mutex // protects the cache; lru.Cache is not safe for concurrent use
    mu.Lock()
    defer mu.Unlock()
    if item, ok := cache.Get(cacheKey); ok {
        m := item.(*map[string]string) // assuming a *map[string]string was stored, per the note above
        (*m)[key] = newValue
    }
If you elaborate on what sort of data you are planning to store, we may try to come up with an alternative solution.
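Putting the notes together, here is a minimal, self-contained sketch of that in-place update pattern, assuming string keys and values; the helper names set/get, the capacity, and the example data are illustrative only. It stores a plain map, which Go treats as a reference type, so the same idea works whether or not you wrap it in the *map suggested above:

    // Sketch of the pattern described above: one map per cacheKey lives in the
    // LRU cache, a mutex makes access goroutine-safe (lru.Cache itself is not),
    // and updates mutate the stored map in place instead of calling Add again.
    package main

    import (
        "fmt"
        "sync"

        "github.com/golang/groupcache/lru"
    )

    var (
        mu    sync.Mutex
        cache = lru.New(128) // capacity chosen arbitrarily for the example
    )

    // set updates one field of the map stored under cacheKey, creating the map on first use.
    func set(cacheKey, key, value string) {
        mu.Lock()
        defer mu.Unlock()

        item, ok := cache.Get(cacheKey)
        if !ok {
            item = map[string]string{}
            cache.Add(cacheKey, item) // Add is only needed when the entry does not exist yet
        }
        item.(map[string]string)[key] = value // in-place update of the map the cache already holds
    }

    // get looks up one field of the map stored under cacheKey.
    func get(cacheKey, key string) (string, bool) {
        mu.Lock()
        defer mu.Unlock()

        item, ok := cache.Get(cacheKey)
        if !ok {
            return "", false
        }
        v, ok := item.(map[string]string)[key]
        return v, ok
    }

    func main() {
        set("user:42", "name", "alice")
        set("user:42", "email", "alice@example.com")

        if v, ok := get("user:42", "name"); ok {
            fmt.Println(v) // prints "alice"
        }
    }

Note that Get already moves the entry to the front of the LRU list, so the in-place update keeps the entry's recency without a second Add.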