Red Hat Bugzilla – Bug 159646
netdump-server should compress vmcore files once completed
Last modified: 2007-11-30 17:07:18 EST
Description of problem:
Unless I'm doing crash analysis *on the netdump server*, I really don't need a
vmcore file that is 12 gigs in size in /var/crash... especially when I have 10s
or 100s of machines netdumping to the same server. I'd propose a new default
behavior for netdump: compress the huge vmcore files. They compress very very
well.... something like a 6-to-1 ratio, so the space savings are tremendous.
Version-Release number of selected component (if applicable):
The feature isn't in the newest netdump release.
Can we get this added as a feature?
One more thing: it probably makes sense to run the gzip under "nice", since
compressing a file that large can take several minutes.
You can do that on your own by utilizing the "netdump-reboot" script file.
If /var/crash/scripts/netdump-reboot exists, it will be called when the
netdump-server sends the reboot request at the end of the dump procedure.
The example script (in /usr/share/doc/netdump-server-<release>/example_scripts)
looks like this:
mail -s "[netdump] $1 rebooted" $ADDRESS <<_EOF
The machine with IP $1 has been rebooted.
Crash dump written to $2
_EOF
Whether you want to keep the email notification is up to you, but
you could certainly have the netdump-reboot script kick off a background
process to do the gzip. (Argument $2 contains the directory containing the
vmcore file.)
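A minimal netdump-reboot along those lines (a sketch only, not the shipped
example; the background-and-nice approach follows the suggestion above) could
look like this:

```shell
#!/bin/sh
# Sketch of /var/crash/scripts/netdump-reboot (hypothetical).
#   $1 = IP address of the client that rebooted
#   $2 = directory containing the freshly written vmcore

compress_vmcore() {
    # Compress in the background at the lowest priority so the
    # netdump-server is not tied up for the several minutes a
    # multi-gigabyte gzip can take.
    ( cd "$1" && nice -n 19 gzip vmcore ) &
}

if [ -n "$2" ]; then
    compress_vmcore "$2"
fi
```

The subshell keeps the cd from affecting anything else, and backgrounding the
job lets the netdump-server finish its reboot handshake immediately instead of
waiting on the compression.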
Yes... but I'm saying this should be the *default*. If I set up a
netdump-server, there is very little reason to have a 12 or 16 gig vmcore file
on my machine. If I'm going to upload it to Red Hat for analysis I'll have to
compress it anyway.
I understand, but I can't agree that it should be imposed
upon everybody by default -- given that you can make it as
*your* default by use of the scripts. It's just one of those
"You just can't make everybody happy all the time..." deals;
hence the scripts.
I *can* download the code to *any* of this and change the defaults... neither
that fact nor the fact that I can script this RFE is a reason not to consider
compressing the files by default.
Please consider compressing the files by default on its own merits. You can't make
everybody happy, true. But you can pick the best default for a given situation.
A 16 gig vmcore file just isn't a good default, and it certainly isn't
"scalable" or "enterprise".
I'd even argue that if someone is going to do crash analysis on the
netdump-server (as opposed to the surely-more-common default of uploading to
somewhere else for analysis) that *they* are the corner case, and *they* should
change their script to uncompress the vmcore.gz file.
I guess I don't understand why you consider using the script
option (which is what the scripts are there for) to be such a problem.
Maybe Joshua's request would be better stated like this:
1) Please provide a built-in mechanism for gzip or bzip2 compression on the
vmcore files.
2) Please provide a /etc/sysconfig/netdump-server config file which would be
read by the netdump-server init script.
3) Please provide a command-line option which enables #1. We could then add
this command-line option to a /etc/sysconfig/netdump-server file. In fact, we'd
recommend this command-line option should be the default.
How does that sound?
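To make #2 and #3 concrete, a hypothetical sysconfig file might look like the
sketch below. (Neither this file nor the --compress option exists in
netdump-server today; both are invented here purely to illustrate the
proposal.)

```shell
# Hypothetical /etc/sysconfig/netdump-server -- illustration only.
# The init script would source this file and append $NETDUMP_OPTS
# to the daemon's command line.
NETDUMP_OPTS="--compress=gzip"   # imaginary flag implementing item #1
```

Shipping that line uncommented would give the compressed-by-default behavior
being requested, while an admin who wants raw vmcores could simply blank it out.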
> Maybe Joshua's request would be better stated like this:
> 1) Please provide a built-in mechanism for gzip or bzip2 compression on the
> vmcore files.
I still contend that the script files are the "built-in" mechanism -- they
are there to allow an admin to do whatever he wants at any of the key points
that they are called!
> 2) Please provide a /etc/sysconfig/netdump-server config file which would be
> read by the netdump-server init script.
Well, there already is a config file for the netdump-server, /etc/netdump.conf,
which can be used in lieu of modifying the startup script's command line.
> 3) Please provide a command-line option which enables #1. We could then add
> this command-line option to a /etc/sysconfig/netdump-server file. In fact,
> we'd recommend this command-line option should be the default.
> How does that sound?
Look -- I understand completely. But somebody's got to come up with a
compelling argument against using the script capability. Come to think of it,
I can't come up with a *better* reason for using the netdump-reboot script
than to compress the vmcores!
Sounds like we essentially agree.
The script is the way to *customize* netdumps behavior. I am advocating a new
*default*. If Red Hat feels that the netdump-reboot script is the best way to
implement this new default behavior, fine.
Yes, changing the script on my machine is easy, but that *isn't* what I'm
talking about here. I'm wanting the out-of-the-box behavior to be different, for
every future install of netdump-server. This is for the benefit of Red Hat, as
its server would use a smarter default, and for the benefit of Red Hat users who
wouldn't have to customize what really should be the default.
I'd like to see the scripts unused out-of-the-box, and used only for individual
machine administration. However, I'm more concerned with the default behavior
change happening, and less concerned with how Red Hat chooses to do that.
We will not be changing the default behaviour at this point in time.
The netdump-reboot script has always been, and will remain,
the place to do any kind of site-specific, post-dump, compression.
I'm suggesting that gzip isn't a site-specific feature; it is becoming more of a
requirement, especially with the crash tools moving towards working with gzipped
coredumps.
How does this compare to diskdump? I hear diskdump is getting this feature
added, and it isn't done in its scripts.
That's correct -- the RHEL4-U3 diskdump errata will contain a new
configurable option to create compressed diskdumps in a new dumpfile
format. (And the crash utility's RHEL4-U3 errata will be required
to read the new dumpfile format)
It is not planned for any future RHEL3 kernels.
No need to get hung up about it being RHEL3. Sounds like a great feature for
RHEL4, just as it will be for diskdump in RHEL4.
But, as mentioned above, it's not going to happen in netdump.
Why would it happen in diskdump then? Why is it a good idea to automatically
compress with diskdump, but not with netdump? They both do the same thing.
Actually it won't be automatic with diskdump, but user-configurable.
There are huge differences in how the netdump and diskdump procedures
are driven by their kernel modules. In diskdump, the creation of the compressed
dumpfile is driven by the in-kernel diskdump module of the crashing machine,
which writes the compressed data to the diskdump partition; subsequently,
during the next reboot, the diskdumputils package takes that information from
the diskdump partition and creates a vmcore. The netdump dumpfile creation, by
contrast, is done by an external daemon based upon the responses it gets from
the crashing machine.
Also, there is active diskdump development being performed upstream
by Fujitsu and other partners, primarily initiated due to the shortcomings
and network-topology-specific inconsistencies of netdump. Netdump in and
of itself is in maintenance mode at this point.