What is the entry database in HAWQ? What is the difference between the entryDB process on the master and the Query Executor (QE) processes on the segments? What kinds of queries run on the entry database? How should I understand the entry database in HAWQ?
Answer
EntryDB is a query executor that is scheduled on the master node. The difference between EntryDB and the QEs on the segments is that EntryDB can access the master catalog. Typically, UDFs are dispatched to EntryDB.
Comment: Could you give an example of a query that runs on EntryDB? –
Comment: You can look at the generate_series function. – ztao1987
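Following up on ztao1987's pointer, here is a minimal sketch of the kind of query that is typically planned on the entry db. This is illustrative only: the exact plan depends on your HAWQ version and optimizer settings, and the table name in the contrasting example is hypothetical.

```sql
-- generate_series() is a set-returning function with no table to scan,
-- so there is nothing to dispatch to segment QEs; HAWQ typically runs
-- such a slice on the master's entry db (it shows up as "entry db" in
-- the Slice statistics of EXPLAIN ANALYZE, as in the plans below).
EXPLAIN ANALYZE SELECT * FROM generate_series(1, 5) AS s(i);

-- Contrast: a scan of an ordinary table (some_table is a placeholder)
-- is dispatched to QEs on the segments instead.
-- EXPLAIN ANALYZE SELECT i FROM some_table;
```

Comparing the `Slice statistics` section of the two plans is the easiest way to see which slices ran on the entry db versus on segment hosts.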
Queries with UDFs or serial columns are sometimes planned to execute on entrydb. In practice, a UDF or serial expression may be dispatched to the QD, a QE, or EntryDB, and is handled differently in different plans.
Here is an example with a serial column. As you can see, both the ORCA and the legacy planner plans explicitly use the entry db.
CREATE TABLE some_vectors (
id SERIAL,
x FLOAT8[]
);
NOTICE: CREATE TABLE will create implicit sequence "some_vectors_id_seq" for serial column "some_vectors.id"
CREATE TABLE
INSERT INTO some_vectors(x) VALUES
(ARRAY[1,0,0,0]),
(ARRAY[0,1,0,0]),
(ARRAY[0,0,1,0]),
(ARRAY[0,0,0,2]);
SET optimizer = on;
SET
EXPLAIN ANALYZE INSERT INTO some_vectors(x) VALUES (ARRAY[1,0,0,0]), (ARRAY[0,1,0,0]), (ARRAY[0,0,1,0]), (ARRAY[0,0,0,2]);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Insert (cost=0.00..0.31 rows=4 width=12)
Rows out: Avg 4.0 rows x 1 workers. Max/Last(seg0:rhuo-mbp/seg0:rhuo-mbp) 4/4 rows with 7.320/7.320 ms to first row, 7.331/7.331 ms to end, start offset by 1.718/1.718 ms.
Executor memory: 1K bytes.
-> Redistribute Motion 1:1 (slice1) (cost=0.00..0.00 rows=4 width=20)
Rows out: Avg 4.0 rows x 1 workers at destination. Max/Last(seg0:rhuo-mbp/seg0:rhuo-mbp) 4/4 rows with 1.044/1.044 ms to first row, 1.046/1.046 ms to end, start offset by 1.718/1.718 ms.
-> Assert (cost=0.00..0.00 rows=4 width=20)
Assert Cond: NOT id IS NULL
Rows out: Avg 4.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 4/4 rows with 0.577/0.577 ms to first row, 0.824/0.824 ms to end, start offset by 1.826/1.826 ms.
Executor memory: 1K bytes.
-> Result (cost=0.00..0.00 rows=4 width=20)
Rows out: Avg 4.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 4/4 rows with 0.569/0.569 ms to first row, 0.815/0.815 ms to end, start offset by 1.826/1.826 ms.
-> Append (cost=0.00..0.00 rows=4 width=8)
Rows out: Avg 4.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 4/4 rows with 0.360/0.360 ms to first row, 0.402/0.402 ms to end, start offset by 1.827/1.827 ms.
-> Result (cost=0.00..0.00 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0.359/0.359 ms to first row, 0.360/0.360 ms to end, start offset by 1.827/1.827 ms.
-> Result (cost=0.00..0.00 rows=1 width=1)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0/0 ms to end, start offset by 1.827/1.827 ms.
-> Result (cost=0.00..0.00 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0.015/0.015 ms to end, start offset by 2.411/2.411 ms.
-> Result (cost=0.00..0.00 rows=1 width=1)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0/0 ms to end, start offset by 2.411/2.411 ms.
-> Result (cost=0.00..0.00 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0.012/0.012 ms to end, start offset by 2.500/2.500 ms.
-> Result (cost=0.00..0.00 rows=1 width=1)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0/0 ms to end, start offset by 2.500/2.500 ms.
-> Result (cost=0.00..0.00 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0.013/0.013 ms to end, start offset by 2.581/2.581 ms.
-> Result (cost=0.00..0.00 rows=1 width=1)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 1/1 rows with 0/0 ms to end, start offset by 2.581/2.581 ms.
Slice statistics:
(slice0) Executor memory: 323K bytes (seg0:rhuo-mbp).
(slice1) Executor memory: 279K bytes (entry db).
Statement statistics:
Memory used: 262144K bytes
Settings: default_hash_table_bucket_number=6; optimizer=on
Optimizer status: PQO version 1.633
Dispatcher statistics:
executors used(total/cached/new connection): (2/2/0); dispatcher time(total/connection/dispatch data): (0.120 ms/0.000 ms/0.033 ms).
dispatch data time(max/min/avg): (0.026 ms/0.005 ms/0.015 ms); consume executor data time(max/min/avg): (0.023 ms/0.013 ms/0.018 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
Data locality statistics:
data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (560.000 B/560 B/560 B); segment size with penalty(avg/min/max): (560.000 B/560 B/560 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 6.804 ms; resource allocation: 0.549 ms; datalocality calculation: 0.083 ms.
Total runtime: 31.656 ms
(42 rows)
SET optimizer = off;
SET
EXPLAIN ANALYZE INSERT INTO some_vectors(x) VALUES (ARRAY[1,0,0,0]), (ARRAY[0,1,0,0]), (ARRAY[0,0,1,0]), (ARRAY[0,0,0,2]);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Insert (slice0; segments: 1) (rows=4 width=32)
-> Redistribute Motion 1:1 (slice1) (cost=0.00..0.07 rows=4 width=32)
Rows out: Avg 4.0 rows x 1 workers at destination. Max/Last(seg0:rhuo-mbp/seg0:rhuo-mbp) 4/4 rows with 1.212/1.212 ms to first row, 1.215/1.215 ms to end, start offset by 1.643/1.643 ms.
-> Values Scan on "*VALUES*" (cost=0.00..0.07 rows=4 width=32)
Rows out: Avg 4.0 rows x 1 workers. Max/Last(seg-1:rhuo-mbp/seg-1:rhuo-mbp) 4/4 rows with 0.628/0.628 ms to first row, 0.888/0.888 ms to end, start offset by 1.848/1.848 ms.
Slice statistics:
(slice0) Executor memory: 255K bytes (seg0:rhuo-mbp).
(slice1) Executor memory: 201K bytes (entry db).
Statement statistics:
Memory used: 262144K bytes
Settings: default_hash_table_bucket_number=6; optimizer=off
Optimizer status: legacy query optimizer
Dispatcher statistics:
executors used(total/cached/new connection): (2/2/0); dispatcher time(total/connection/dispatch data): (0.118 ms/0.000 ms/0.025 ms).
dispatch data time(max/min/avg): (0.018 ms/0.006 ms/0.012 ms); consume executor data time(max/min/avg): (0.723 ms/0.022 ms/0.372 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
Data locality statistics:
data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (280.000 B/280 B/280 B); segment size with penalty(avg/min/max): (280.000 B/280 B/280 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 0.053 ms; resource allocation: 0.560 ms; datalocality calculation: 0.073 ms.
Total runtime: 33.478 ms
(18 rows)
Comment: What is entryDB? Do you mean the default database in the PGDATABASE environment variable? –
Comment: @Jon Roberts Hmm, I don't think that's what I mean. What ztao1987 is describing is the entry database in HAWQ. –