From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.2.16-3 i686)
According to /usr/include/protocols/rwhod.h, the whod struct contains an array (wd_we)
of 1024 / (sizeof (struct whoent)) = 42 (I think) whoent structs, but rwhod does
not seem to bounds-check when populating this array, and scribbles over other
variables on the heap, including the socket descriptor.
When this happens rwhod loops while trying to recv from the socket and logs
large numbers of messages to syslogd (visible in /var/log/messages); both
rwhod and syslogd then consume large amounts of CPU time.
Steps to Reproduce:
1. Make sure rwhod is running (e.g. via linuxconf). Check that stats are
being reported correctly using ruptime, check that no log messages are being
produced (examine /var/log/messages), and check CPU utilisation with top.
2. Get > 42 active terminal sessions running (this can be simulated using a
script and rlogin, for example).
3. Voila! Check CPU utilisation with top and look at /var/log/messages; after
a short while ruptime will report that the system is down.
I've kludged a solution (on a live database server with ~100 current
users) by getting the source RPM for rwho 0.17 and hacking the source to
declare the size of the wd_we array as 4096 / (sizeof (struct whoent)) in
rwhod.h. There is also a line in rwhod.c that needs changing to match.
BUT, this is obviously a horrid solution, and I'm not sure it's a good one.
Either the code should be checked to make sure it never runs off the end of
the array, or it should dynamically allocate memory for these whoents as
required. I'm not sure how to go about getting advice on which approach is
best, or how to provide patching info etc.
How would you like to go about addressing this bug? Increase the static buffer
size, or are you going to recode this stuff to be dynamic any time in the near
future?
BTW, I checked out OpenBSD's latest rwhod; it appears to have the same issue, but it
also seems to have gained several more features over the last few years...
Just checked, bug is still in Fedora Core 2.
Read ya, Phil
Fixed it by patching to now allow 1024 users per host.
That should be sufficient for most needs.
Read ya, Phil