Postgres not using index when there is too much concurrent write load

Question

Recently I have been facing a situation where PostgreSQL doesn't use an index for a simple query with a single WHERE filter, like select * from book where obj_id=465789. Sometimes we have a lot of writes to this table and selects running at the same time. I read the article Postgres not using index when index scan is much better option, where Erwin gave an excellent answer. But one thing I didn't understand: how does too much concurrent write load affect whether the index is used or not?
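For context, a quick way to see which plan the planner actually picks for such a query is EXPLAIN with ANALYZE; the table book, the column obj_id, and the value 465789 are taken from the query above:

    -- Show the chosen plan together with actual run time and buffer usage,
    -- so an unexpected sequential scan (and its cost estimates) is visible.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM book WHERE obj_id = 465789;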

Answer 1

Score: 1

The planner does not ponder how much concurrent writing there is when making its decisions, so there is no direct effect.

I can think of three possible indirect effects:

  • Concurrent writers might keep the data distribution changing faster than auto-analyze can keep up, so the planner is working with bad row estimates (how many rows have obj_id=465789 right now).
  • Intense writing can clear the visibility map faster than autovacuum can reset it, which penalizes index-only scan cost estimates.
  • Intense writing can bloat the index, and the index size plays a (minor) role in estimating index cost.
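To illustrate, here is a rough sketch of queries one could run to check for each of these effects on the book table from the question; the index name idx_book_obj_id is an assumption, while the views and functions used are standard PostgreSQL catalog facilities:

    -- 1. Stale statistics: when the table was last (auto-)analyzed and how many
    --    rows have been modified since, which drives the planner's row estimates.
    SELECT last_analyze, last_autoanalyze, n_mod_since_analyze, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'book';

    -- 2. Visibility map coverage: pages marked all-visible vs. total pages.
    --    A low ratio makes index-only scans look more expensive to the planner.
    SELECT relpages, relallvisible,
           round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
    FROM pg_class
    WHERE relname = 'book';

    -- 3. Index size, a rough proxy for bloat (the index name is hypothetical).
    SELECT pg_size_pretty(pg_relation_size('idx_book_obj_id')) AS index_size;

    -- Refreshing statistics by hand is a quick test for the first effect.
    ANALYZE book;

If stale statistics turn out to be the problem, lowering autovacuum_analyze_scale_factor for this table so auto-analyze runs more often is one common mitigation.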
