How to improve efficiency of query by removing sub-query producing field?


I have the following tables and sample data:
https://www.db-fiddle.com/f/hxQ7BGdgDJtQcv5xTChY9u/0

I would like to improve the performance of the query contained in the db-fiddle, ideally by removing the sub-query that produces the `success` field (the query was taken from ChatGPT output, but ChatGPT was unable to remove this sub-query without breaking the results). How can I do this?

My question to ChatGPT was this:

> using the tables in `<db-fiddle link>`, write a select sql query to
> return all cfg_commissioning_tags columns and
> dat_commissioning_test_log.success. These tables are joined by
> cfg_commissioning_tags.id = dat_commissioning_test_log.tag_id. If the
> tag_source is 'plc' and any rows have success = true, return true for
> the success field for all matching type_id and relative_tag_path rows.

To the query ChatGPT produced, I added the `AND ct_2.device_name != ct_1.device_name` condition to the sub-query, which is also required.

The current query, table creation queries, and the current query results are all copied below for posterity:

SELECT
    ct_1.device_parent_path
    ,ct_1.device_name
    ,ct_1.relative_tag_path
    ,ct_1.tag_source
    ,ct_1.type_id
    ,CASE
        WHEN EXISTS (
            SELECT 1
            FROM dat_commissioning_test_log ctl_2
            JOIN cfg_commissioning_tags ct_2 ON ct_2.id = ctl_2.tag_id
            WHERE
                ct_2.type_id = ct_1.type_id
                AND ct_2.relative_tag_path = ct_1.relative_tag_path
                AND ct_2.device_name != ct_1.device_name -- without this, it runs super fast, but I need this
                AND ctl_2.success = TRUE
                AND ct_2.tag_source = 'plc'
        ) THEN 'true*'
        ELSE CASE ctl_1.success WHEN true THEN 'true' ELSE 'false' END
    END AS success
FROM cfg_commissioning_tags ct_1
LEFT JOIN dat_commissioning_test_log ctl_1 ON ct_1.id = ctl_1.tag_id
ORDER BY type_id, relative_tag_path


CREATE TABLE dat_commissioning_test_log
(
id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
tag_id integer,
-- tested_on timestamp with time zone,
success boolean,
-- username character varying(50) COLLATE pg_catalog."default",
-- note character varying(300) COLLATE pg_catalog."default",
CONSTRAINT dat_commissioning_test_log_pkey PRIMARY KEY (id)
);

CREATE TABLE cfg_commissioning_tags
(
id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
-- full_path character varying(400) COLLATE pg_catalog."default",
device_name character varying(50) COLLATE pg_catalog."default",
device_parent_path character varying(400) COLLATE pg_catalog."default",
-- added_on timestamp with time zone,
relative_tag_path character varying(100) COLLATE pg_catalog."default",
-- retired_on timestamp with time zone,
tag_source character varying(10) COLLATE pg_catalog."default",
type_id character varying(400) COLLATE pg_catalog."default",
CONSTRAINT cfg_commissioning_tags_pkey PRIMARY KEY (id)
);

INSERT INTO cfg_commissioning_tags (id, device_name, device_parent_path, relative_tag_path, tag_source, type_id) VALUES
(1, 'PC13A','DUMMY','Run Mins','plc','DOL'),
(2, 'PC12A','DUMMY','Run Mins','plc','DOL'),
(3, 'PC11A','DUMMY','Run Mins','plc','DOL'),
(4, 'PC11A','DUMMY','Status','io','DOL'),
(5, 'PC11A','DUMMY','Alarms/Isolator Tripped','io','DOL'),
(6, 'PC12A','DUMMY','Status','io','DOL');

INSERT INTO dat_commissioning_test_log (tag_id, success) VALUES
(1, true),
(6, true);


These are the results of the query:

|device_parent_path|device_name|relative_tag_path|tag_source|type_id|success|
|---|---|---|---|---|---|
|DUMMY|PC11A|Alarms/Isolator Tripped|io|DOL|FALSE|
|DUMMY|PC13A|Run Mins|plc|DOL|TRUE|
|DUMMY|PC12A|Run Mins|plc|DOL|true*|
|DUMMY|PC11A|Run Mins|plc|DOL|true*|
|DUMMY|PC12A|Status|io|DOL|TRUE|
|DUMMY|PC11A|Status|io|DOL|FALSE|

**Edit:**
Here is the `EXPLAIN(ANALYZE, VERBOSE, BUFFERS)` result:

"Sort (cost=4368188.41..4368208.24 rows=7932 width=188) (actual time=10378.617..10378.916 rows=8108 loops=1)"
" Output: ct_1.id, ct_1.full_path, ct_1.device_name, ct_1.device_parent_path, ct_1.added_on, ct_1.relative_tag_path, ct_1.retired_on, ct_1.tag_source, ct_1.type_id, (CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE ctl_1.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END)"
" Sort Key: ct_1.type_id, ct_1.relative_tag_path"
" Sort Method: quicksort Memory: 2350kB"
" Buffers: shared hit=2895186"
" -> Hash Left Join (cost=60.69..4367674.67 rows=7932 width=188) (actual time=1.991..10357.671 rows=8108 loops=1)"
" Output: ct_1.id, ct_1.full_path, ct_1.device_name, ct_1.device_parent_path, ct_1.added_on, ct_1.relative_tag_path, ct_1.retired_on, ct_1.tag_source, ct_1.type_id, CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE ctl_1.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END"
" Hash Cond: (ct_1.id = ctl_1.tag_id)"
" Buffers: shared hit=2895186"
" -> Seq Scan on public.cfg_commissioning_tags ct_1 (cost=0.00..426.32 rows=7932 width=156) (actual time=0.013..1.313 rows=7932 loops=1)"
" Output: ct_1.id, ct_1.full_path, ct_1.device_name, ct_1.device_parent_path, ct_1.added_on, ct_1.relative_tag_path, ct_1.retired_on, ct_1.tag_source, ct_1.type_id"
" Buffers: shared hit=347"
" -> Hash (cost=40.86..40.86 rows=1586 width=5) (actual time=0.326..0.326 rows=1593 loops=1)"
" Output: ctl_1.success, ctl_1.tag_id"
" Buckets: 2048 Batches: 1 Memory Usage: 79kB"
" Buffers: shared hit=25"
" -> Seq Scan on public.dat_commissioning_test_log ctl_1 (cost=0.00..40.86 rows=1586 width=5) (actual time=0.012..0.171 rows=1593 loops=1)"
" Output: ctl_1.success, ctl_1.tag_id"
" Buffers: shared hit=25"
" SubPlan 1"
" -> Hash Join (cost=505.71..550.57 rows=1 width=0) (actual time=1.267..1.267 rows=0 loops=8108)"
" Inner Unique: true"
" Hash Cond: (ctl_2.tag_id = ct_2.id)"
" Buffers: shared hit=2894814"
" -> Seq Scan on public.dat_commissioning_test_log ctl_2 (cost=0.00..40.86 rows=1521 width=4) (actual time=0.003..0.112 rows=1300 loops=3800)"
" Output: ctl_2.id, ctl_2.tag_id, ctl_2.tested_on, ctl_2.success, ctl_2.username, ctl_2.note"
" Filter: ctl_2.success"
" Rows Removed by Filter: 56"
" Buffers: shared hit=81338"
" -> Hash (cost=505.64..505.64 rows=6 width=4) (actual time=1.183..1.183 rows=98 loops=8108)"
" Output: ct_2.id"
" Buckets: 1024 Batches: 1 Memory Usage: 8kB"
" Buffers: shared hit=2813476"
" -> Seq Scan on public.cfg_commissioning_tags ct_2 (cost=0.00..505.64 rows=6 width=4) (actual time=0.620..1.169 rows=98 loops=8108)"
" Output: ct_2.id"
" Filter: (((ct_2.device_name)::text <> (ct_1.device_name)::text) AND ((ct_2.type_id)::text = (ct_1.type_id)::text) AND ((ct_2.relative_tag_path)::text = (ct_1.relative_tag_path)::text) AND ((ct_2.tag_source)::text = 'plc'::text))"
" Rows Removed by Filter: 7834"
" Buffers: shared hit=2813476"
"Planning Time: 0.382 ms"
"Execution Time: 10379.346 ms"


**Edit 2:**
EXPLAIN after adding compound indexes:

"Sort (cost=540847.20..540867.03 rows=7932 width=198) (actual time=1142.282..1142.843 rows=7932 loops=1)"
" Output: ct_1.full_path, ct_1.device_parent_path, ct_1.device_name, ct_1.relative_tag_path, ct_1.tag_source, ct_1.type_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, (CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE dat_commissioning_test_log.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END)"
" Sort Key: ct_1.full_path"
" Sort Method: quicksort Memory: 2290kB"
" Buffers: shared hit=778254"
" -> Hash Left Join (cost=149.19..540333.47 rows=7932 width=198) (actual time=1.775..1108.469 rows=7932 loops=1)"
" Output: ct_1.full_path, ct_1.device_parent_path, ct_1.device_name, ct_1.relative_tag_path, ct_1.tag_source, ct_1.type_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE dat_commissioning_test_log.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END"
" Hash Cond: (ct_1.id = dat_commissioning_test_log.tag_id)"
" Buffers: shared hit=778254"
" -> Seq Scan on public.cfg_commissioning_tags ct_1 (cost=0.00..426.32 rows=7932 width=140) (actual time=0.011..0.837 rows=7932 loops=1)"
" Output: ct_1.id, ct_1.full_path, ct_1.device_name, ct_1.device_parent_path, ct_1.added_on, ct_1.relative_tag_path, ct_1.retired_on, ct_1.tag_source, ct_1.type_id"
" Buffers: shared hit=347"
" -> Hash (cost=139.24..139.24 rows=796 width=35) (actual time=1.404..1.404 rows=1417 loops=1)"
" Output: dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, dat_commissioning_test_log.success, dat_commissioning_test_log.tag_id"
" Buckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory Usage: 83kB"
" Buffers: shared hit=50"
" -> Hash Join (cost=85.28..139.24 rows=796 width=35) (actual time=0.938..1.249 rows=1417 loops=1)"
" Output: dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, dat_commissioning_test_log.success, dat_commissioning_test_log.tag_id"
" Inner Unique: true"
" Hash Cond: (dat_commissioning_test_log.id = "ANY_subquery".max)"
" Buffers: shared hit=50"
" -> Seq Scan on public.dat_commissioning_test_log (cost=0.00..40.93 rows=1593 width=39) (actual time=0.009..0.089 rows=1593 loops=1)"
" Output: dat_commissioning_test_log.id, dat_commissioning_test_log.tag_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.success, dat_commissioning_test_log.username, dat_commissioning_test_log.note"
" Buffers: shared hit=25"
" -> Hash (cost=82.78..82.78 rows=200 width=4) (actual time=0.926..0.926 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Buckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory Usage: 66kB"
" Buffers: shared hit=25"
" -> HashAggregate (cost=80.78..82.78 rows=200 width=4) (actual time=0.710..0.804 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Group Key: "ANY_subquery".max"
" Buffers: shared hit=25"
" -> Subquery Scan on "ANY_subquery" (cost=48.90..77.23 rows=1417 width=4) (actual time=0.297..0.475 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Buffers: shared hit=25"
" -> HashAggregate (cost=48.90..63.07 rows=1417 width=8) (actual time=0.297..0.413 rows=1417 loops=1)"
" Output: max(dat_commissioning_test_log_1.id), dat_commissioning_test_log_1.tag_id"
" Group Key: dat_commissioning_test_log_1.tag_id"
" Buffers: shared hit=25"
" -> Seq Scan on public.dat_commissioning_test_log dat_commissioning_test_log_1 (cost=0.00..40.93 rows=1593 width=8) (actual time=0.006..0.090 rows=1593 loops=1)"
" Output: dat_commissioning_test_log_1.id, dat_commissioning_test_log_1.tag_id, dat_commissioning_test_log_1.tested_on, dat_commissioning_test_log_1.success, dat_commissioning_test_log_1.username, dat_commissioning_test_log_1.note"
" Buffers: shared hit=25"
" SubPlan 1"
" -> Hash Join (cost=23.10..68.04 rows=1 width=0) (actual time=0.133..0.133 rows=0 loops=7932)"
" Inner Unique: true"
" Hash Cond: (ctl_2.tag_id = ct_2.id)"
" Buffers: shared hit=777857"
" -> Seq Scan on public.dat_commissioning_test_log ctl_2 (cost=0.00..40.93 rows=1528 width=4) (actual time=0.002..0.098 rows=1301 loops=3796)"
" Output: ctl_2.id, ctl_2.tag_id, ctl_2.tested_on, ctl_2.success, ctl_2.username, ctl_2.note"
" Filter: ctl_2.success"
" Rows Removed by Filter: 56"
" Buffers: shared hit=81286"
" -> Hash (cost=23.02..23.02 rows=6 width=4) (actual time=0.057..0.057 rows=100 loops=7932)"
" Output: ct_2.id"
" Buckets: 1024 Batches: 1 Memory Usage: 8kB"
" Buffers: shared hit=696571"
" -> Index Scan using cfg_commissioning_tags_idx on public.cfg_commissioning_tags ct_2 (cost=0.41..23.02 rows=6 width=4) (actual time=0.016..0.049 rows=100 loops=7932)"
" Output: ct_2.id"
" Index Cond: (((ct_2.type_id)::text = (ct_1.type_id)::text) AND ((ct_2.relative_tag_path)::text = (ct_1.relative_tag_path)::text) AND ((ct_2.tag_source)::text = 'plc'::text))"
" Filter: ((ct_2.device_name)::text <> (ct_1.device_name)::text)"
" Rows Removed by Filter: 1"
" Buffers: shared hit=696571"
"Planning Time: 0.550 ms"
"Execution Time: 1143.359 ms"


**Edit 3:**
Replaced covering index on cfg_commissioning_tags:

"Sort (cost=540847.20..540867.03 rows=7932 width=198) (actual time=1152.113..1152.682 rows=7932 loops=1)"
" Output: ct_1.full_path, ct_1.device_parent_path, ct_1.device_name, ct_1.relative_tag_path, ct_1.tag_source, ct_1.type_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, (CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE dat_commissioning_test_log.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END)"
" Sort Key: ct_1.full_path"
" Sort Method: quicksort Memory: 2290kB"
" Buffers: shared hit=784891"
" -> Hash Left Join (cost=149.19..540333.47 rows=7932 width=198) (actual time=2.016..1115.111 rows=7932 loops=1)"
" Output: ct_1.full_path, ct_1.device_parent_path, ct_1.device_name, ct_1.relative_tag_path, ct_1.tag_source, ct_1.type_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, CASE WHEN (SubPlan 1) THEN 'true*'::text ELSE CASE dat_commissioning_test_log.success WHEN CASE_TEST_EXPR THEN 'true'::text ELSE 'false'::text END END"
" Hash Cond: (ct_1.id = dat_commissioning_test_log.tag_id)"
" Buffers: shared hit=784891"
" -> Seq Scan on public.cfg_commissioning_tags ct_1 (cost=0.00..426.32 rows=7932 width=140) (actual time=0.014..0.755 rows=7932 loops=1)"
" Output: ct_1.id, ct_1.full_path, ct_1.device_name, ct_1.device_parent_path, ct_1.added_on, ct_1.relative_tag_path, ct_1.retired_on, ct_1.tag_source, ct_1.type_id"
" Buffers: shared hit=347"
" -> Hash (cost=139.24..139.24 rows=796 width=35) (actual time=1.613..1.613 rows=1417 loops=1)"
" Output: dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, dat_commissioning_test_log.success, dat_commissioning_test_log.tag_id"
" Buckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory Usage: 83kB"
" Buffers: shared hit=50"
" -> Hash Join (cost=85.28..139.24 rows=796 width=35) (actual time=1.117..1.449 rows=1417 loops=1)"
" Output: dat_commissioning_test_log.tested_on, dat_commissioning_test_log.note, dat_commissioning_test_log.success, dat_commissioning_test_log.tag_id"
" Inner Unique: true"
" Hash Cond: (dat_commissioning_test_log.id = "ANY_subquery".max)"
" Buffers: shared hit=50"
" -> Seq Scan on public.dat_commissioning_test_log (cost=0.00..40.93 rows=1593 width=39) (actual time=0.010..0.100 rows=1593 loops=1)"
" Output: dat_commissioning_test_log.id, dat_commissioning_test_log.tag_id, dat_commissioning_test_log.tested_on, dat_commissioning_test_log.success, dat_commissioning_test_log.username, dat_commissioning_test_log.note"
" Buffers: shared hit=25"
" -> Hash (cost=82.78..82.78 rows=200 width=4) (actual time=1.103..1.103 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Buckets: 2048 (originally 1024) Batches: 1 (originally 1) Memory Usage: 66kB"
" Buffers: shared hit=25"
" -> HashAggregate (cost=80.78..82.78 rows=200 width=4) (actual time=0.798..0.940 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Group Key: "ANY_subquery".max"
" Buffers: shared hit=25"
" -> Subquery Scan on "ANY_subquery" (cost=48.90..77.23 rows=1417 width=4) (actual time=0.332..0.549 rows=1417 loops=1)"
" Output: "ANY_subquery".max"
" Buffers: shared hit=25"
" -> HashAggregate (cost=48.90..63.07 rows=1417 width=8) (actual time=0.331..0.482 rows=1417 loops=1)"
" Output: max(dat_commissioning_test_log_1.id), dat_commissioning_test_log_1.tag_id"
" Group Key: dat_commissioning_test_log_1.tag_id"
" Buffers: shared hit=25"
" -> Seq Scan on public.dat_commissioning_test_log dat_commissioning_test_log_1 (cost=0.00..40.93 rows=1593 width=8) (actual time=0.006..0.095 rows=1593 loops=1)"
" Output: dat_commissioning_test_log_1.id, dat_commissioning_test_log_1.tag_id, dat_commissioning_test_log_1.tested_on, dat_commissioning_test_log_1.success, dat_commissioning_test_log_1.username, dat_commissioning_test_log_1.note"
" Buffers: shared hit=25"
" SubPlan 1"
" -> Hash Join (cost=23.10..68.04 rows=1 width=0) (actual time=0.134..0.134 rows=0 loops=7932)"
" Inner Unique: true"
" Hash Cond: (ctl_2.tag_id = ct_2.id)"
" Buffers: shared hit=784494"
" -> Seq Scan on public.dat_commissioning_test_log ctl_2 (cost=0.00..40.93 rows=1528 width=4) (actual time=0.002..0.098 rows=1301 loops=3796)"
" Output: ctl_2.id, ctl_2.tag_id, ctl_2.tested_on, ctl_2.success, ctl_2.username, ctl_2.note"
" Filter: ctl_2.success"
" Rows Removed by Filter: 56"
" Buffers: shared hit=81286"
" -> Hash (cost=23.02..23.02 rows=6 width=4) (actual time=0.057..0.057 rows=100 loops=7932)"
" Output: ct_2.id"
" Buckets: 1024 Batches: 1 Memory Usage: 8kB"
" Buffers: shared hit=703208"
" -> Index Scan using cfg_commissioning_tags_idx on public.cfg_commissioning_tags ct_2 (cost=0.41..23.02 rows=6 width=4) (actual time=0.015..0.049 rows=100 loops=7932)"
" Output: ct_2.id"
" Index Cond: (((ct_2.type_id)::text = (ct_1.type_id)::text) AND ((ct_2.relative_tag_path)::text = (ct_1.relative_tag_path)::text) AND ((ct_2.tag_source)::text = 'plc'::text))"
" Filter: ((ct_2.device_name)::text <> (ct_1.device_name)::text)"
" Rows Removed by Filter: 1"
" Buffers: shared hit=703208"
"Planning Time: 0.514 ms"
"Execution Time: 1153.156 ms"




# Answer 1
**Score**: 1


You need these compound indexes:


    CREATE index cfg_commissioning_tags_idx on cfg_commissioning_tags(type_id, relative_tag_path, device_name, tag_source);

    CREATE index dat_commissioning_test_log_idx on dat_commissioning_test_log(tag_id, success);

You can extend the first index so that, in addition to the columns used in the `where` clause and the `join`, it also includes all of the columns used in the select (in this case we can add `device_parent_path` to the index); this type of index is called a covering index.
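
For illustration, one possible shape for such a covering index, assuming PostgreSQL 11 or later (which supports `INCLUDE`); the index name and the exact key/`INCLUDE` column choices here are assumptions to be adapted to the real workload:

    -- Hypothetical covering index (PostgreSQL 11+ syntax); name and column choice are
    -- illustrative only: the key columns serve the sub-query's correlation conditions,
    -- while INCLUDE carries the extra columns read by the outer select.
    CREATE INDEX cfg_commissioning_tags_covering_idx
        ON cfg_commissioning_tags (type_id, relative_tag_path, tag_source, device_name)
        INCLUDE (id, device_parent_path);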

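Beyond indexing, the correlated `EXISTS` can in principle be replaced by a single pre-aggregation that is computed once and joined back in. The following is only a minimal sketch, assuming the rule is exactly "any *other* device with a successful `'plc'` test for the same `type_id` and `relative_tag_path`"; the CTE name `plc_success` and its column names are made up for illustration:

    -- Sketch: aggregate successful 'plc' tests once per (type_id, relative_tag_path)
    -- group, then decide 'true*' from the aggregate instead of a per-row sub-query.
    WITH plc_success AS (
        SELECT ct.type_id,
               ct.relative_tag_path,
               COUNT(DISTINCT ct.device_name) AS ok_devices,
               MIN(ct.device_name)            AS only_device  -- meaningful when ok_devices = 1
        FROM cfg_commissioning_tags ct
        JOIN dat_commissioning_test_log ctl ON ctl.tag_id = ct.id
        WHERE ct.tag_source = 'plc'
          AND ctl.success = TRUE
        GROUP BY ct.type_id, ct.relative_tag_path
    )
    SELECT
        ct_1.device_parent_path
        ,ct_1.device_name
        ,ct_1.relative_tag_path
        ,ct_1.tag_source
        ,ct_1.type_id
        ,CASE
            -- 'true*' when some other device in the same group has a successful 'plc' test
            WHEN ps.ok_devices > 1
                 OR (ps.ok_devices = 1 AND ps.only_device <> ct_1.device_name)
                THEN 'true*'
            ELSE CASE ctl_1.success WHEN true THEN 'true' ELSE 'false' END
        END AS success
    FROM cfg_commissioning_tags ct_1
    LEFT JOIN dat_commissioning_test_log ctl_1 ON ct_1.id = ctl_1.tag_id
    LEFT JOIN plc_success ps
           ON ps.type_id = ct_1.type_id
          AND ps.relative_tag_path = ct_1.relative_tag_path
    ORDER BY type_id, relative_tag_path;

Whether this actually beats the indexed `EXISTS` depends on the data distribution, so it should be checked with `EXPLAIN (ANALYZE, BUFFERS)` against the real tables.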


