Description of problem:
Running 'virsh list' or 'xm list' consumes too much CPU time. When
the number of VMs is relatively high (10) and they are loaded with
network activity, the response to the list command may take a very
long time (minutes) or may sometimes never complete.
This problem existed to some extent in RHEL 5.1, but has gotten
worse in RHEL 5.2.
It appears xenstored is taking up most of the CPU time.
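One way to confirm this (a sketch using standard procps ps options)
is to watch xenstored's accumulated CPU time while the list command
runs:

ps -o pid,pcpu,cputime,comm -C xenstored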
Version-Release number of selected component (if applicable):
How reproducible:
Can be reproduced after a fashion by running 'xm list' or 'virsh
list' in a while loop in a script:

# Repeatedly query the domain list; Ctrl-C to stop.
trap "exit" INT
while true; do xm list &> /dev/null; usleep 5000; done

If you taskset the script to a single CPU, it will consume >75% of
that CPU.
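For example, pinning the loop above to CPU 0 (the script name here
is hypothetical):

taskset -c 0 sh ./xm-list-loop.sh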
Steps to Reproduce:
1. See above.
Actual results:
'xm list'/xenstored takes up lots of CPU time and/or runs slowly
when there are lots of VMs on the system.

Expected results:
'xm list'/xenstored shouldn't consume as much CPU time.
Additional info:
A workaround that we've come up with is to mount
/var/lib/xenstored on tmpfs. This reduces the overall system load
when 'xm list' is run.
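A minimal sketch of the workaround, assuming a 16m store size and
that the mount happens before xenstored starts (so it creates its
database on the tmpfs):

mount -t tmpfs -o size=16m tmpfs /var/lib/xenstored

or, to make it persist across reboots, an /etc/fstab entry:

tmpfs /var/lib/xenstored tmpfs size=16m 0 0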
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update
release.
Created attachment 311576
Allow putting xenstored on tmpfs
NB, with this patch applied, the user still needs to opt in by setting
the corresponding configuration option.
*** Bug 434146 has been marked as a duplicate of this bug. ***
Built into xen-3.0.3-67.el5
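To verify that the fixed build is installed, a standard rpm query
works:

rpm -q xen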
Performance is improved by a huge factor. Tested with 10-15 virtual
machines, and the system did not consume more than 20% of a processor
with 'xm list' running continuously.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.