2010-05-25

The query below extracts climate data from weather stations within a given radius of a city, restricted to the dates for which those stations actually have data. The query relies on the table's unique index:

CREATE UNIQUE INDEX measurement_001_stc_idx 
    ON climate.measurement_001 
    USING btree 
    (station_id, taken, category_id); 

Random Page Cost and Planning

Lowering the server configuration value random_page_cost from 2.0 to 1.1 produced a massive performance gain (nearly an order of magnitude) for the given range, because it hinted to PostgreSQL that it should use the index. While the results now come back in 5 seconds (down from about 85 seconds), the problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

sc.taken_start >= '1900-01-01'::date AND 
sc.taken_end <= '1997-12-31'::date AND 

How can I convince PostgreSQL to use the index between the two dates, regardless of how many years lie between them? (A full table scan of 43 million rows is probably not the best plan.) The EXPLAIN ANALYZE results appear below the query.

Thanks!
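
For context, the planner cost setting discussed above can also be changed per session rather than server-wide; a minimal sketch (the value 1.1 is the one quoted above):

SET random_page_cost = 1.1;   -- session-level override of the planner's
                              -- random-access cost estimate; editing
                              -- postgresql.conf changes the server default

SHOW random_page_cost;        -- confirm the effective value for this session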

Query

SELECT 
    extract(YEAR FROM m.taken) AS year, 
    avg(m.amount) AS amount 
    FROM 
    climate.city c, 
    climate.station s, 
    climate.station_category sc, 
    climate.measurement m 
    WHERE 
    c.id = 5182 AND 
    earth_distance(
     ll_to_earth(c.latitude_decimal,c.longitude_decimal), 
     ll_to_earth(s.latitude_decimal,s.longitude_decimal))/1000 <= 30 AND 
    s.elevation BETWEEN 0 AND 3000 AND 
    s.applicable = TRUE AND 
    sc.station_id = s.id AND 
    sc.category_id = 1 AND 
    sc.taken_start >= '1900-01-01'::date AND 
    sc.taken_end <= '1996-12-31'::date AND 
    m.station_id = s.id AND 
    m.taken BETWEEN sc.taken_start AND sc.taken_end AND 
    m.category_id = sc.category_id 
    GROUP BY 
    extract(YEAR FROM m.taken) 
    ORDER BY 
    extract(YEAR FROM m.taken) 

1900 to 1996: Index Scan

"Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)" 
" Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" 
" Sort Method: quicksort Memory: 32kB" 
" -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)" 
"  -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)" 
"    Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))" 
"    -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)" 
"     Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube))/1000::double precision) <= 30::double precision)" 
"     -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)" 
"       Index Cond: (id = 5182)" 
"     -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)" 
"       -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)" 
"        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))" 
"       -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)" 
"        Index Cond: (s.id = sc.station_id)" 
"        Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))" 
"    -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)" 
"     -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)" 
"       Filter: (m.category_id = 1)" 
"     -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)" 
"       Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" 
"       -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)" 
"        Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" 
"Total runtime: 2269.264 ms" 

1900 to 1997: Full Table Scan

"Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)" 
" Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" 
" Sort Method: quicksort Memory: 32kB" 
" -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)" 
"  -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)" 
"    Hash Cond: (m.station_id = sc.station_id)" 
"    Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))" 
"    -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)" 
"     -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)" 
"       Filter: (category_id = 1)" 
"     -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)" 
"       Filter: (category_id = 1)" 
"    -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)" 
"     -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)" 
"       Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube))/1000::double precision) <= 30::double precision)" 
"       -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)" 
"        Index Cond: (id = 5182)" 
"       -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)" 
"        Hash Cond: (s.id = sc.station_id)" 
"        -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)" 
"          Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))" 
"        -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)" 
"          -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)" 
"           Recheck Cond: (category_id = 1)" 
"           Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))" 
"           -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)" 
"             Index Cond: (category_id = 1)" 
"Total runtime: 86165.936 ms" 

Answers

1

The problem was that the station IDs were not distributed sequentially throughout the measurement table. The solution:

CREATE UNIQUE INDEX measurement_001_stc_index 
    ON climate.measurement_001 
    USING btree 
    (station_id, taken, category_id); 
ALTER TABLE climate.measurement_001 CLUSTER ON measurement_001_stc_index; 

Forcing a CLUSTER on that index physically arranges the rows on disk in station ID order, aligning the physical layout with the table's natural ordering. This improved performance by an order of magnitude.
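
Note that CLUSTER is a one-time reordering: rows inserted afterwards are not kept in index order, so the command has to be re-run periodically. A minimal sketch for re-clustering and then checking the planner's physical-order statistic via the standard pg_stats view:

-- Re-cluster using the index set by ALTER TABLE ... CLUSTER ON,
-- then refresh the statistics the planner relies on.
CLUSTER climate.measurement_001;
ANALYZE climate.measurement_001;

-- Correlation near 1.0 means the on-disk row order closely
-- follows station_id, i.e. the table is well clustered.
SELECT attname, correlation
    FROM pg_stats
    WHERE schemaname = 'climate'
    AND tablename = 'measurement_001'
    AND attname = 'station_id';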

2

It looks like Postgres overestimates how many stations are near city 5182. It thinks there are 1220, but there are only 23.

You can force it to find the stations first by splitting this into two queries, like so (untested, may need tweaking):

start transaction; 
create temporary table s(id int); 
insert into s 
    select s.id from 
    climate.city c, 
    climate.station s 
    where 
    c.id = 5182 AND 
    earth_distance(
     ll_to_earth(c.latitude_decimal,c.longitude_decimal), 
     ll_to_earth(s.latitude_decimal,s.longitude_decimal))/1000 <= 30 AND 
    s.elevation BETWEEN 0 AND 3000 AND 
    s.applicable = TRUE; 
analyze s; 

SELECT 
    extract(YEAR FROM m.taken) AS year, 
    avg(m.amount) AS amount 
    FROM 
    climate.station_category sc, 
    climate.measurement m, 
    s 
    WHERE 
    sc.category_id = 1 AND 
    sc.taken_start >= '1900-01-01'::date AND 
    sc.taken_end <= '1996-12-31'::date AND 
    m.station_id = sc.station_id AND 
    m.taken BETWEEN sc.taken_start AND sc.taken_end AND 
    m.category_id = sc.category_id AND 
    sc.station_id = s.id 
    GROUP BY 
    extract(YEAR FROM m.taken) 
    ORDER BY 
    extract(YEAR FROM m.taken); 
rollback; 
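
A small refinement (an assumption, not part of the original answer): declaring the temporary table with ON COMMIT DROP ties its lifetime to the transaction, so no explicit cleanup is needed:

-- Variant that disappears automatically at commit or rollback.
create temporary table s(id int) on commit drop;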

You can also set enable_seqscan=off for this query. That will force Postgres to avoid sequential scans at any cost.
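
To keep that setting from affecting the rest of the session, one option is SET LOCAL inside a transaction; a minimal sketch:

begin;
set local enable_seqscan = off; -- reverts automatically at commit/rollback
-- ... run the SELECT above here ...
commit;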


On second thought, I have rewritten the one query as 2 queries. This way Postgres cannot overestimate the stations. Please try this and tell whether it is better. – Tometzky 2010-05-28 06:27:34


The stations are gathered in a subselect. The subselect was tuned to find the bounding rectangle for the radius. This lets PostgreSQL use an index to eliminate stations, so only the stations that fall within the minimum bounding rectangle need to be checked to see whether they lie within the given radius. The huge performance gain came from aligning the physical model with the logical model using the CLUSTERed index. – 2010-06-26 20:16:22
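
For reference, a sketch of the bounding-rectangle technique this comment describes, using the cube/earthdistance pattern (the GiST expression index and the earth_box predicate are assumptions; column names are taken from the question's query):

-- Hypothetical expression index over the station coordinates.
CREATE INDEX station_position_idx 
    ON climate.station 
    USING gist (ll_to_earth(latitude_decimal, longitude_decimal));

-- earth_box() builds the minimum bounding box for a 30 km radius (30000 m),
-- which the GiST index can satisfy; earth_distance() then trims the corners
-- of the box that lie outside the true circle.
SELECT s.id 
    FROM climate.city c, climate.station s 
    WHERE 
    c.id = 5182 AND 
    earth_box(
     ll_to_earth(c.latitude_decimal, c.longitude_decimal), 30000) @> 
     ll_to_earth(s.latitude_decimal, s.longitude_decimal) AND 
    earth_distance(
     ll_to_earth(c.latitude_decimal, c.longitude_decimal), 
     ll_to_earth(s.latitude_decimal, s.longitude_decimal))/1000 <= 30;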