Caffeine cache time-based eviction with cache writer

Question


I'm using a Caffeine cache with the following configuration:

datesCache = Caffeine.newBuilder()
                .maximumSize(1000L)
                .expireAfterWrite(1, TimeUnit.HOURS)
                .writer(new CacheWriter<String, Map<String, List<ZonedDateTime>>>() {
                    @Override
                    public void write(@NonNull final String key, @NonNull final Map<String, List<ZonedDateTime>> datesList) {
                        CompletableFuture.runAsync(() -> copyToDatabase(key, datesList), Executors.newCachedThreadPool());
                    }

                    @Override
                    public void delete(@NonNull final String key, @Nullable final Map<String, List<ZonedDateTime>> datesList,
                            @NonNull final RemovalCause removalCause) {
                        System.out.println("Cache key " + key + " got evicted due to " + removalCause);
                    }
                })
                .scheduler(Scheduler.forScheduledExecutorService(Executors.newSingleThreadScheduledExecutor()))
                .removalListener((key, dateList, removalCause) -> {
                    LOG.info("Refreshing cache key {}.", key);
                    restoreKeys(key);
                })
                .build();

I'm using a CacheWriter to copy records to a distributed database upon writes to the cache, if the values satisfy certain conditions. Also, upon eviction I'm using a RemovalListener to call a backend service with the evicted key, to keep the records up to date.

In order to make this work, I also had to initialize the cache upon booting up the service, and I use the put method to insert values into the cache. I retrieve values from the cache using datesCache.get(key, datesList -> callBackendService(key)), in case the request is for a key that I didn't fetch upon initialization.

The API that leverages this cache has periods of very heavy use, and it seems that for some reason the records were getting evicted (on every request?), because the code in the RemovalListener and the CacheWriter got executed every few milliseconds, eventually creating over 25,000 threads and making the service error out.
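(Side note: one likely contributor to the thread count is that the write() method above passes a fresh Executors.newCachedThreadPool() on every invocation, so every single cache write allocates a brand-new thread pool that is never shut down. A stdlib-only sketch of the alternative, a single bounded executor shared by all writes, is below; copyToDatabase is stood in by a no-op, and the pool size of 4 is an arbitrary illustration, not a recommendation. The counting thread factory only exists to demonstrate that thread creation stays bounded.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedExecutorSketch {
    // Counts how many worker threads the pool actually creates.
    static final AtomicInteger threadsCreated = new AtomicInteger();

    // One bounded pool shared by every write() call, instead of a new
    // Executors.newCachedThreadPool() per call.
    static final ExecutorService WRITE_POOL = Executors.newFixedThreadPool(4, r -> {
        threadsCreated.incrementAndGet();
        return new Thread(r);
    });

    // Stand-in for the asker's copyToDatabase(key, datesList).
    static void copyToDatabase(String key) {
        // no-op placeholder for the real database copy
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate 100 cache writes; all of them reuse the shared pool.
        for (int i = 0; i < 100; i++) {
            final String key = "key-" + i;
            WRITE_POOL.submit(() -> copyToDatabase(key));
        }
        WRITE_POOL.shutdown();
        WRITE_POOL.awaitTermination(10, TimeUnit.SECONDS);
        // At most 4 threads are ever created, regardless of write volume.
        System.out.println("threads created: " + threadsCreated.get());
    }
}
```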

Can anybody tell me if I'm doing something deadly wrong? Or something painfully obvious that is wrong? I feel like I'm deeply misunderstanding Caffeine.

The goal is to have the records in the cache refresh every 1 hr and upon refresh, get the new values from the backend API and persist them in a database if they satisfy certain conditions.

Answer 1

Score: 0


The cache entered an infinite loop because the RemovalListener is asynchronous, so under heavy load the cache values were being replaced (by requests for expired keys) before the RemovalListener could actually refresh the cache.

The values would therefore:

  1. Be removed from the cache with a REPLACED removal cause
  2. Call the RemovalListener
  3. Refresh again and be replaced, and then
  4. Go back to #1.

Solution:
Evaluate the RemovalCause in the RemovalListener and ignore REPLACED keys. The wasEvicted() method can be used, or the enum values can be compared directly.
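The guard can be sketched without the Caffeine dependency. The enum below mirrors Caffeine's RemovalCause, whose wasEvicted() returns true only for the automatic causes (COLLECTED, EXPIRED, SIZE) and false for EXPLICIT and REPLACED; the predicate shows the check the listener would apply before refreshing a key.

```java
public class RemovalCauseCheck {
    // Local mirror of Caffeine's RemovalCause enum, for illustration only.
    enum RemovalCause {
        EXPLICIT, REPLACED, COLLECTED, EXPIRED, SIZE;

        // Mirrors Caffeine's RemovalCause.wasEvicted(): true only for
        // automatic evictions, not for explicit removals or replacements.
        boolean wasEvicted() {
            return this == COLLECTED || this == EXPIRED || this == SIZE;
        }
    }

    // Only refresh a key when it was actually evicted; REPLACED entries are
    // skipped, which breaks the replace -> refresh -> replace loop.
    static boolean shouldRefresh(RemovalCause cause) {
        return cause.wasEvicted();
    }

    public static void main(String[] args) {
        System.out.println(shouldRefresh(RemovalCause.EXPIRED));  // true
        System.out.println(shouldRefresh(RemovalCause.REPLACED)); // false
    }
}
```

In the original listener this amounts to wrapping the refresh in `if (removalCause.wasEvicted()) { restoreKeys(key); }`, so that replacements no longer trigger another refresh.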

huangapple
  • Posted on 2020-09-25 10:54:47
  • Please retain this link when reposting: https://go.coder-hub.com/64057142.html