2015-02-06 784 views

We're having a problem with our production Elasticsearch cluster: Elasticsearch appears to consume all of the RAM on every server over time. Each box has 128GB of RAM, so we run two instances per box with 30GB allocated to each JVM heap. The remaining 68GB is left for the OS and Lucene. We restarted every server last week, after which memory usage per Elasticsearch process sat at 24%. It's now been about a week, and memory consumption has climbed to roughly 40% per Elasticsearch instance. I've attached our config files in the hope that someone can help figure out why Elasticsearch is exceeding the limits we set for memory utilization. Elasticsearch memory issue - ES process consuming ALL RAM

We're currently running ES 1.3.2, but will upgrade to 1.4.2 with our next release next week.
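The memory split described above can be sanity-checked with quick arithmetic (all numbers taken from the question):

```shell
# 128 GB per box, two ES instances, 30 GB heap each (from the question)
total_ram=128
instances=2
heap_per_instance=30
# Memory left over for the OS page cache and Lucene's off-heap usage
echo $(( total_ram - instances * heap_per_instance ))   # prints 68
```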

Here is the output of top (extra fields removed for clarity) from right after the restart:

PID USER  %MEM TIME+ 
2178 elastics 24.1 1:03.49 
2197 elastics 24.3 1:07.32 

And from today:

PID USER  %MEM TIME+ 
2178 elastics 40.5 2927:50 
2197 elastics 40.1 3000:44 

elasticsearch-0.yml:

cluster.name: PROD 
node.name: "PROD6-0" 
node.master: true 
node.data: true 
node.rack: PROD6 
cluster.routing.allocation.awareness.force.rack.values: 
PROD4,PROD5,PROD6,PROD7,PROD8,PROD9,PROD10,PROD11,PROD12 
cluster.routing.allocation.awareness.attributes: rack 
node.max_local_storage_nodes: 2 
path.data: /es_data1 
path.logs: /var/log/elasticsearch 
bootstrap.mlockall: true 
transport.tcp.port: 9300 
http.port: 9200 
http.max_content_length: 400mb 
gateway.recover_after_nodes: 17 
gateway.recover_after_time: 1m 
gateway.expected_nodes: 18 
cluster.routing.allocation.node_concurrent_recoveries: 20 
indices.recovery.max_bytes_per_sec: 200mb 
discovery.zen.minimum_master_nodes: 10 
discovery.zen.ping.timeout: 3s 
discovery.zen.ping.multicast.enabled: false 
discovery.zen.ping.unicast.hosts: XXX 
index.search.slowlog.threshold.query.warn: 10s 
index.search.slowlog.threshold.query.info: 5s 
index.search.slowlog.threshold.query.debug: 2s 
index.search.slowlog.threshold.fetch.warn: 1s 
index.search.slowlog.threshold.fetch.info: 800ms 
index.search.slowlog.threshold.fetch.debug: 500ms 
index.indexing.slowlog.threshold.index.warn: 10s 
index.indexing.slowlog.threshold.index.info: 5s 
index.indexing.slowlog.threshold.index.debug: 2s 
monitor.jvm.gc.young.warn: 1000ms 
monitor.jvm.gc.young.info: 700ms 
monitor.jvm.gc.young.debug: 400ms 
monitor.jvm.gc.old.warn: 10s 
monitor.jvm.gc.old.info: 5s 
monitor.jvm.gc.old.debug: 2s 
action.auto_create_index: .marvel-* 
action.disable_delete_all_indices: true 
indices.cache.filter.size: 10% 
index.refresh_interval: -1 
threadpool.search.type: fixed 
threadpool.search.size: 48 
threadpool.search.queue_size: 10000000 
cluster.routing.allocation.cluster_concurrent_rebalance: 6 
indices.store.throttle.type: none 
index.reclaim_deletes_weight: 4.0 
index.merge.policy.max_merge_at_once: 5 
index.merge.policy.segments_per_tier: 5 
marvel.agent.exporter.es.hosts: ["1.1.1.1:9200","1.1.1.1:9200"] 
marvel.agent.enabled: true 
marvel.agent.interval: 30s 
script.disable_dynamic: false 

Here is /etc/sysconfig/elasticsearch-0:

# Directory where the Elasticsearch binary distribution resides 
ES_HOME=/usr/share/elasticsearch 
# Heap Size (defaults to 256m min, 1g max) 
ES_HEAP_SIZE=30g 
# Heap new generation 
#ES_HEAP_NEWSIZE= 
# max direct memory 
#ES_DIRECT_SIZE= 
# Additional Java OPTS 
#ES_JAVA_OPTS= 
# Maximum number of open files 
MAX_OPEN_FILES=65535 
# Maximum amount of locked memory 
MAX_LOCKED_MEMORY=unlimited 
# Maximum number of VMA (Virtual Memory Areas) a process can own 
MAX_MAP_COUNT=262144 
# Elasticsearch log directory 
LOG_DIR=/var/log/elasticsearch 
# Elasticsearch data directory 
DATA_DIR=/es_data1 
# Elasticsearch work directory 
WORK_DIR=/tmp/elasticsearch 
# Elasticsearch conf directory 
CONF_DIR=/etc/elasticsearch 
# Elasticsearch configuration file (elasticsearch.yml) 
CONF_FILE=/etc/elasticsearch/elasticsearch-0.yml 
# User to run as, change this to a specific elasticsearch user if possible 
# Also make sure, this user can write into the log directories in case you change them 
# This setting only works for the init script, but has to be configured separately for systemd startup 
ES_USER=elasticsearch 
# Configure restart on package upgrade (true, every other setting will lead to not restarting) 
#RESTART_ON_UPGRADE=true 

Please let me know if there is any other data I can provide. Thanks in advance for your help.

free -m output:

  total  used  free  shared buffers  cached 
Mem:  129022  119372  9650   0  219  46819 
-/+ buffers/cache:  72333  56689 
Swap:  28603   0  28603 
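For reference, the "-/+ buffers/cache" row above is derived from the first row: subtracting buffers and cache shows how much memory applications actually hold (the values below are copied from the `free -m` output above):

```shell
# Values in MB, taken from the free -m output in the question
total=129022
used=119372
buffers=219
cached=46819
# Memory genuinely held by processes (heaps plus other anonymous memory);
# the rest of "used" is reclaimable OS cache
echo $(( used - buffers - cached ))   # prints 72334; free's own row shows 72333 due to rounding
```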

It's not clear yet. Is your system actually running out of memory? – 2015-02-06 20:32:08


It did, and that's what prompted us to restart the nodes in the cluster. We were throwing OOM errors and direct memory errors all over the place... Right now they're at 80% and we haven't gotten errors yet, but I'd like to know how to prevent these two processes from consuming 100% of memory. – KLD 2015-02-06 21:15:13


Can you run free -m and add the result to your question? Do you mean a JVM OOM exception, or the Linux OOM killer being invoked? – 2015-02-06 23:37:33

Answer


What you're seeing is not the heap blowing out; the heap will always be capped by the limit you set in the configuration. free -m and top report OS-level usage, so the usage you're seeing there is most likely the OS caching filesystem calls.

That will not cause a Java OOM.

If you are hitting a Java OOM, it is strictly related to running out of Java heap space, so something else is at play. Your logs may provide some information.
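One way to confirm the answer's point is to compare the JVM's own heap numbers against the RSS that top reports. In ES 1.x the node stats API (`GET _nodes/stats/jvm`) returns `heap_used_in_bytes` and `heap_max_in_bytes`; the sketch below parses a trimmed, hypothetical response rather than hitting a live cluster, so the numbers are illustrative only:

```shell
# Hypothetical, trimmed excerpt of `curl -s localhost:9200/_nodes/stats/jvm`
cat > /tmp/jvm_stats.json <<'EOF'
{"jvm":{"mem":{"heap_used_in_bytes":16106127360,"heap_max_in_bytes":32212254720}}}
EOF

used=$(grep -o '"heap_used_in_bytes":[0-9]*' /tmp/jvm_stats.json | cut -d: -f2)
max=$(grep -o '"heap_max_in_bytes":[0-9]*' /tmp/jvm_stats.json | cut -d: -f2)
# Heap usage as a percentage of -Xmx; this stays bounded by the configured
# heap size even while the process RSS shown in top keeps growing
echo $(( used * 100 / max ))   # prints 50 for this sample
```

If this percentage stays under 100 while top's %MEM climbs, the growth is off-heap (page cache, direct buffers, mmapped Lucene files), not a heap leak.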


I completely agree with you that the heap should stay within the bounds I set. The question is why the OS is caching so many FS calls. Is there any way to bound this so it flushes and doesn't start killing my cluster? What's confusing is that, according to top, this memory is being consumed by the elasticsearch processes... Is there any way to tell the ES process as a whole to stop consuming past a certain point, not just the heap? – KLD 2015-02-13 02:27:06
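On the follow-up question of bounding the page cache: Linux offers no hard cap, because cache is reclaimed automatically under memory pressure. What you can do is verify the growth really is reclaimable cache by dropping it on a test node (as root; safe, but performance dips briefly while caches refill), or bias the kernel toward reclaiming sooner. A sketch, using standard kernel knobs:

```shell
# Flush dirty pages, then drop the page cache, dentries and inodes (root required)
sync
echo 3 > /proc/sys/vm/drop_caches

# Optionally make the kernel reclaim the dentry/inode caches more aggressively
# (default is 100; higher values reclaim sooner)
sysctl -w vm.vfs_cache_pressure=200
```

If `free -m` shows the "used" column collapse after the drop, the memory top attributed to the ES processes was mostly mmapped-file cache, which the kernel would have reclaimed on its own before OOM-killing anything.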