Bug 661321
| Summary: | [vdsm] [scale] vdsm CPU consumption goes between 180-400 when running 100 vms | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Haim <hateya> |
| Component: | vdsm | Assignee: | Federico Simoncelli <fsimonce> |
| Status: | CLOSED ERRATA | QA Contact: | Haim <hateya> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 6.1 | CC: | abaron, cplisko, dnaori, hateya, iheim, ilvovsky, mgoldboi, Rhev-m-bugs, yeylon, ykaul |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | vdsm-4.9-62 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2011-12-06 07:03:37 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 687907 | ||
| Bug Blocks: | |||
Description

Haim 2010-12-08 14:55:23 UTC

Patch partially fixes this (it reduces consumption from 400% to 200%). We need to investigate further to see what is still consuming so much CPU. Currently each VM is sampled every 1 second, but 40 samples per second should not, in general, take this much CPU (although we should consider reducing the sampling frequency).

---

I am not able to reproduce this issue with vdsm-4.9-51. The VMs are running (100 of them), and each has 1 virtual disk (on NFS) attached.

    # vdsClient -s 0 list table | wc -l
    100

    # top -b | head
    top - 19:24:04 up 4 days, 7:53, 2 users, load average: 1.25, 2.01, 1.26
    Tasks: 702 total, 39 running, 663 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem:  32868712k total, 2653236k used, 30215476k free, 102092k buffers
    Swap: 16383992k total, 0k used, 16383992k free, 682960k cached

      PID USER PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
     2147 root 20   0  689m  15m 4680 S 42.7  0.0 13:12.67 libvirtd
    23607 vdsm 15  -5 10.3g 186m 6796 S 39.0  0.6 15:09.43 vdsm
     2442 qemu 20   0  266m  12m 2964 S 14.9  0.0  1:18.08 qemu-kvm

The VMs have no guest OS running, but that should not affect the test.

---

(In reply to comment #5)
> I am not able to reproduce this issue with vdsm-4.9-51.
> The VMs are running (100 of them), and each has 1 virtual disk (on NFS) attached.
> [...]
> The VMs have no guest OS running, but that should not affect the test.
Please test with a block device rather than NFS. I tested it 3 times, with both FCP and iSCSI, and got the same results each time; I never tested with NFS.

---

*** Bug 683044 has been marked as a duplicate of this bug. ***

---

On vdsm build vdsm-4.9-62, with the machine running about 90 VMs, CPU consumption does not go above 20%:

      PID USER PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
    13207 vdsm 15  -5 10.2g 212m 6580 S 17.1  0.7  5:38.23 vdsm
    13709 vdsm 15  -5 1409m  25m 1628 S  0.0  0.1  0:00.69 vdsm
    13204 vdsm 15  -5  9212  684  500 S  0.0  0.0  0:00.00 respawn
    13710 vdsm 15  -5 1473m  25m 1400 S  0.0  0.1  0:00.00 vdsm

    [root@nott-vds2 nfswork]# virsh list | wc -l
    91

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, please open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html
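---

The sampling discussion above (each VM polled once per second, with the CPU load dropping once the fix landed in vdsm-4.9-62) can be illustrated with a minimal sketch. This is not vdsm's actual code: the `StatsSampler` class, the `sample_fn` callback, and the interval value are hypothetical names chosen for the example. The point is that a single scheduler thread doing one pass over all VMs per cycle keeps the wakeup rate bounded by the configured interval, instead of paying for one independent 1-second timer per VM (100 VMs would mean 100 wakeups per second).

```python
import threading


class StatsSampler:
    """Poll a per-VM stats function from one scheduler thread.

    Illustrative sketch only (not vdsm's implementation): one thread
    samples every registered VM once per cycle, so the polling rate is
    bounded by `interval` regardless of how many VMs are registered.
    """

    def __init__(self, sample_fn, interval=2.0):
        self._sample_fn = sample_fn   # called once per VM per cycle
        self._interval = interval     # seconds between full cycles
        self._vms = set()
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def add_vm(self, vm_id):
        with self._lock:
            self._vms.add(vm_id)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        while not self._stop.is_set():
            with self._lock:
                vms = list(self._vms)
            for vm_id in vms:         # one pass over all VMs per cycle
                self._sample_fn(vm_id)
            # Event.wait doubles as an interruptible sleep between cycles.
            self._stop.wait(self._interval)
```

Raising `interval` (or batching all VMs into one pass, as above) trades sampling granularity for CPU, which is the "consider reducing frequency" option mentioned in the comments.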