2017-04-11 454 views

My operating system is Arch Linux. When a core dump is produced, I try to debug it with gdb. Is 2G the size limit for core dump files on Linux?

$ coredumpctl gdb 1621 
...... 
     Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4 
     Message: Process 1621 (runTests) of user 1014 dumped core. 

       Stack trace of thread 1621: 
       #0 0x00007ff1c0fcfa10 n/a (n/a) 

GNU gdb (GDB) 7.12.1 
...... 
Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done. 
BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648. 

I checked the /var/tmp/coredump-28KzRc file:

$ ls -alth /var/tmp/coredump-28KzRc 
-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc 

Is 2G the size limit for core dump files on Linux? I believe /var/tmp has plenty of free disk space:

$ df -h 
Filesystem  Size Used Avail Use% Mounted on 
dev    32G  0 32G 0% /dev 
run    32G 3.1M 32G 1% /run 
/dev/sda2  229G 86G 132G 40% / 
tmpfs   32G 708M 31G 3% /dev/shm 
tmpfs   32G  0 32G 0% /sys/fs/cgroup 
tmpfs   32G 957M 31G 3% /tmp 
/dev/sda1  511M 33M 479M 7% /boot 
/dev/sda3  651G 478G 141G 78% /home 

P.S. "ulimit -a" output:

$ ulimit -a 
core file size   (blocks, -c) unlimited 
data seg size   (kbytes, -d) unlimited 
scheduling priority    (-e) 0 
file size    (blocks, -f) unlimited 
pending signals     (-i) 257039 
max locked memory  (kbytes, -l) 64 
max memory size   (kbytes, -m) unlimited 
open files      (-n) 1024 
pipe size   (512 bytes, -p) 8 
POSIX message queues  (bytes, -q) 819200 
real-time priority    (-r) 0 
stack size    (kbytes, -s) 8192 
cpu time    (seconds, -t) unlimited 
max user processes    (-u) 257039 
virtual memory   (kbytes, -v) unlimited 
file locks      (-x) unlimited 

Update: the /etc/systemd/coredump.conf file:

$ cat coredump.conf 
# This file is part of systemd. 
# 
# systemd is free software; you can redistribute it and/or modify it 
# under the terms of the GNU Lesser General Public License as published by 
# the Free Software Foundation; either version 2.1 of the License, or 
# (at your option) any later version. 
# 
# Entries in this file show the compile time defaults. 
# You can change settings by editing this file. 
# Defaults can be restored by simply deleting this file. 
# 
# See coredump.conf(5) for details. 

[Coredump] 
#Storage=external 
#Compress=yes 
#ProcessSizeMax=2G 
#ExternalSizeMax=2G 
#JournalSizeMax=767M 
#MaxUse= 
#KeepFree= 

Can you actually create a file that large on the file system? –


@SergeiKurenkov: Yes, I used "dd if=/dev/zero of=test bs=1024 count=4MB" to create a 4G file. –


This question http://stackoverflow.com/questions/8768719/coredump-is-getting-truncated suggests also checking "ulimit -f". –

Answers

2

@n.m. is correct.
(1) Modify the /etc/systemd/coredump.conf file:

[Coredump] 
ProcessSizeMax=8G 
ExternalSizeMax=8G 
JournalSizeMax=8G 

(2) Reload the systemd configuration:

# systemctl daemon-reload 

Note that this only takes effect for newly generated core dump files.

2

Is 2G the size limit for core dump files on Linux?

No. I routinely deal with core dumps larger than 4GiB.

ulimit -a
core file size (blocks, -c) unlimited

This tells you your current limit in this shell. It tells you nothing about the environment runTests ran in. The process may have set its own limit with setrlimit(2), or its parent may have set one for it.

You can modify runTests to print its current limits with getrlimit(2) and see what they actually are while the process is running.

P.S. Just because the core is truncated does not mean it is entirely useless (although it often is). At the very least, you should try GDB's "where" command.