Bug 1090664 - VDSM is consuming a lot of cpu time even with no active VMs
Summary: VDSM is consuming a lot of cpu time even with no active VMs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Nir Soffer
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Duplicates: 1074097 (view as bug list)
Depends On:
Blocks: 1095900 1095907 rhev3.5beta 1156165
 
Reported: 2014-04-23 21:44 UTC by Gordon Watson
Modified: 2019-04-28 09:33 UTC
CC: 17 users

Fixed In Version: ALPHA2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1095900 1095907 (view as bug list)
Environment:
Last Closed: 2014-08-12 12:47:52 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
Second profile report - sorted by time (2.86 KB, text/plain)
2014-04-30 08:51 UTC, Nir Soffer
Second profile report - sorted by cumtime (6.27 KB, text/plain)
2014-04-30 08:52 UTC, Nir Soffer
profile results when memory filesystems support is disabled, sorted by time (28.82 KB, text/plain)
2014-05-02 22:06 UTC, Nir Soffer
profile results when memory filesystems support is disabled, sorted by cumtime (11.31 KB, text/plain)
2014-05-02 22:07 UTC, Nir Soffer
Script for getting process stats from ps aux output (609 bytes, text/plain)
2014-05-02 22:08 UTC, Nir Soffer
Script to generate profiling reports (837 bytes, text/plain)
2014-05-02 22:23 UTC, Nir Soffer


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 752423 0 None None None Never
oVirt gerrit 27441 0 None None None Never

Description Gordon Watson 2014-04-23 21:44:27 UTC
Description of problem:

Since going to RHEV-H 6.5 (20140324.0) with 'vdsm-4.13.2-0.13', the customer has noticed high CPU usage attributed to 'vdsm'.

This could well be the same as, or similar to, the problem described in BZ 1074097.


Version-Release number of selected component (if applicable):

RHEV 3.3
RHEV-H 6.5 (20140324.0) with vdsm-4.13.2-0.13


How reproducible:

The customer has 4 systems that exhibit this behaviour.


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 5 Nir Soffer 2014-04-24 15:00:50 UTC
Great to have profiling results, but I'm not sure how the profile I see relates to the ps output (with 190% CPU for vdsm).

Does vdsm constantly consume 190% CPU time when idle?

Can we have more info about the machine this runs on?

Can you run this test on one of the affected machines:

1. Move the host to maintenance
2. Enable profiling in vdsm
3. Start batch top or ps, or both, so we have some data about CPU and memory usage during the test (see the sketch at the end of this comment)
4. Start vdsm
5. Activate the host
6. Let it run for a few minutes (hopefully the issue reproduces on each run?)
7. Move the host to maintenance
8. Disable profiling and stop collecting top/ps data
9. Put the host back into normal operation

Please upload vdsm.log, the ps/top data, and the vdsm profile.
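
For step 3, a minimal collector sketch in Python (illustrative only; a plain "top -b" or "ps aux" loop in the shell works just as well):

    import subprocess
    import time

    # Hypothetical collector: append a timestamped "ps aux" snapshot to
    # ps.log every 10 seconds until interrupted.
    with open("ps.log", "a") as log:
        while True:
            log.write("=== %s ===\n" % time.ctime())
            log.write(subprocess.check_output(["ps", "aux"],
                                              universal_newlines=True))
            log.flush()
            time.sleep(10)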

Comment 6 Sean Cohen 2014-04-24 15:04:06 UTC
*** Bug 1074097 has been marked as a duplicate of this bug. ***

Comment 8 Nir Soffer 2014-04-24 18:52:03 UTC
Looking at the profile you submitted, there is an issue with the code
parsing /proc/mounts and /etc/mtab.

When creating a DirectFile, we check whether the path requires the O_DIRECT flag.
This check parses /proc/mounts and /etc/mtab. Due to the way the code is written,
we may parse /etc/mtab once for each line read from /proc/mounts, instead of once per call.
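
Schematically, the problematic pattern looks like this (an illustrative, simplified sketch, not the actual vdsm code):

    # Simplified illustration: for every line of /proc/mounts we re-open
    # and re-parse /etc/mtab, so the work grows with
    # len(mounts) * len(mtab) instead of len(mounts) + len(mtab).

    def _iterate_mtab():
        # In the buggy flow this runs once per /proc/mounts line.
        with open("/etc/mtab") as f:
            for line in f:
                yield line.split()

    def _resolve_loop_device(device):
        # Looks the device up by scanning the whole mtab again.
        for fields in _iterate_mtab():
            if fields and fields[0] == device:
                return fields[0]
        return device

    def iter_mount_records():
        with open("/proc/mounts") as f:
            for line in f:
                fields = line.split()
                # BUG: triggers a full mtab scan for every mount line.
                yield (_resolve_loop_device(fields[0]), fields[1], fields[2])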

This causes an excessive number of calls to the mount-parsing functions:

cust_vdsmd.prof% strip
cust_vdsmd.prof% sort time
cust_vdsmd.prof% stats 20

Thu Apr 24 17:41:32 2014    cust_vdsmd.prof

         2734686 function calls (2756014 primitive calls) in 337.270 seconds

   Ordered by: internal time
   List reduced from 2369 to 20 due to restriction <20>

   ncalls       tottime  percall  cumtime  percall filename:lineno(function)
349150/349153   92.124    0.000   99.425    0.000 posixpath.py:311(normpath)
   159814       60.313    0.000  205.255    0.001 mount.py:42(_parseFstabLine)
765312/765313   35.162    0.000   41.672    0.000 mount.py:70(_parseFstabPath)
32144/32166     16.727    0.001  188.689    0.006 mount.py:78(_resolveLoopDevice)
143569/143616    8.564    0.000  168.409    0.001 mount.py:64(_iterateMtab)
   159880        7.793    0.000    8.248    0.000 <string>:8(__new__)
32203/32597      2.215    0.000  238.528    0.007 mount.py:121(_iterMountRecords)


cust_vdsmd.prof% sort cumtime
cust_vdsmd.prof% stats 20
Thu Apr 24 17:41:32 2014    cust_vdsmd.prof

         2734686 function calls (2756014 primitive calls) in 337.270 seconds

   Ordered by: cumulative time
   List reduced from 2369 to 20 due to restriction <20>

   ncalls      tottime  percall  cumtime  percall filename:lineno(function)
32203/32597     2.215    0.000  238.528    0.007 mount.py:121(_iterMountRecords)
  206/227       0.021    0.000  223.861    0.986 fileUtils.py:223(pathRequiresFlagForDirectIO)
  206/227       0.000    0.000  221.325    0.975 mount.py:157(findMountOfPath)
   159814      60.313    0.000  205.255    0.001 mount.py:42(_parseFstabLine)
32144/32166    16.727    0.001  188.689    0.006 mount.py:78(_resolveLoopDevice)
143569/143616   8.564    0.000  168.409    0.001 mount.py:64(_iterateMtab)
349150/349153  92.124    0.000   99.425    0.000 posixpath.py:311(normpath)

Note: I removed some other calls that are not relevant to this issue.

Most of these calls are generated by 206 creations of the DirectFile object.
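
For scale: 159814 calls to _parseFstabLine for only 206 DirectFile creations works out to roughly 775 fstab-line parses per created file, where a parse-once implementation would need only about as many parses as there are lines in /proc/mounts.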

This code is clearly wrong, and it seems to be what causes the unexpected
CPU usage, but I cannot be sure about that because I don't know whether the ps
output above correlates with the time when the profiler was active. So it is
possible that this is not the main issue, and I would like to get another
profile if possible.

In the meantime, I will work on a patch resolving this issue.

Comment 9 Nir Soffer 2014-04-24 22:01:59 UTC
This seems to be caused by these changes:

- http://gerrit.ovirt.org/9595 Have direct file not use the O_DIRECT flag on tmpfs and ramfs
- http://gerrit.ovirt.org/4045 Fix loop device handling for mount command

Saggi, can you look at it?

Comment 11 Nir Soffer 2014-04-27 12:44:15 UTC
Can you tell us the storage domain format of all 32 domains?

You can get this from the engine database using:

    select storage_name, storage_domain_format_type from storage_domain_static;

Comment 12 Arthur Berezin 2014-04-27 15:00:43 UTC
adding back needinfos

Comment 13 Nir Soffer 2014-04-28 00:05:38 UTC
Another related patch, which adds 3 calls to the expensive pathRequiresFlagForDirectIO:

- http://gerrit.ovirt.org/9661 dd: use iflag=direct only when supported by the os

The last 2 patches seem intended to make testing easier, and are not needed in a production environment, where storage domains are never placed on tmpfs or ramfs file systems.
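
For context, a parse-once version of the check could look roughly like this (an illustrative sketch with hypothetical names; the real vdsm helper differs):

    import os

    MEMORY_FILESYSTEMS = ("tmpfs", "ramfs")

    def path_requires_direct_io_flag(path):
        # Parse /proc/mounts once, find the mount point that is the
        # longest prefix of path, and check its file system type.
        # O_DIRECT is not supported on memory-backed file systems.
        path = os.path.realpath(path)
        best_mount, best_fstype = "", None
        with open("/proc/mounts") as f:
            for line in f:
                # fields: device mountpoint fstype options freq passno
                fields = line.split()
                mountpoint, fstype = fields[1], fields[2]
                prefix = mountpoint.rstrip("/") + "/"
                if ((path == mountpoint or path.startswith(prefix))
                        and len(mountpoint) > len(best_mount)):
                    best_mount, best_fstype = mountpoint, fstype
        return best_fstype not in MEMORY_FILESYSTEMS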

Comment 15 Saggi Mizrahi 2014-04-29 09:33:19 UTC
Hi, please try disabling hyper-threading (if enabled)
and disabling cgroups. These are known to cause scheduling problems in the kernel for our use case.

If that doesn't help, please post a core dump of VDSM.
That way we could get a clue as to what VDSM is doing when this happens.

Comment 20 loberman 2014-04-29 16:10:05 UTC
Hello

I tried with and without hyper-threading enabled.

So 64 CPUs / 32 CPUs.

The version of RHEV-H was identical.

Thanks
Laurence

Comment 27 Nir Soffer 2014-04-30 08:29:28 UTC
Gordon,

Checking the second profile, I'm now sure about the O(N^2) lookup of the file
system type, invoked from pathRequiresFlagForDirectIO(). This code was added to
support tests that create fake domains on tmpfs and ramfs filesystems, and should
not be used in a production environment.

Since this issue seems very hard to reproduce here, yet is 100% reproducible on the
customer machine, I think the quickest way to proceed is to try the
posted patch on the customer machine. Please check the commit message for
instructions on how to use this patch.

I suggest running this patch on one of the customer machines with
memory_filesystems_enable set to false for a couple of hours under normal
operating conditions, and reporting back the results.

While running with this patch, please keep logs of "ps aux" and "top". Updating
every 10 seconds should be enough. It would be useful to have profiling enabled
during part of the test if possible.

Comment 28 Nir Soffer 2014-04-30 08:51:29 UTC
Created attachment 891081 [details]
Second profile report - sorted by time

I removed calls that are not relevant to this issue, as well as calls with a low call count and low total/cumtime values.

Comment 29 Nir Soffer 2014-04-30 08:52:15 UTC
Created attachment 891082 [details]
Second profile report - sorted by cumtime

I removed calls that are not relevant to this issue, as well as calls with a low call count and low total/cumtime values.

Comment 39 Nir Soffer 2014-05-02 22:06:20 UTC
Created attachment 892044 [details]
profile results when memory filesystems support is disabled, sorted by time

Comment 40 Nir Soffer 2014-05-02 22:07:09 UTC
Created attachment 892045 [details]
profile results when memory filesystems support is disabled, sorted by cumtime

Comment 41 Nir Soffer 2014-05-02 22:08:17 UTC
Created attachment 892046 [details]
Script for getting process stats from ps aux output
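
The attached script is authoritative; for reference, a minimal sketch of the same idea (with hypothetical field handling) could be:

    import sys

    # Extract PID, %CPU and %MEM for vdsm processes from concatenated
    # "ps aux" snapshots (one file, given as the first argument).
    for line in open(sys.argv[1]):
        fields = line.split(None, 10)
        if len(fields) == 11 and "vdsm" in fields[10]:
            pid, cpu, mem = fields[1], fields[2], fields[3]
            print("pid=%s cpu=%s%% mem=%s%%" % (pid, cpu, mem))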

Comment 42 Nir Soffer 2014-05-02 22:23:17 UTC
Created attachment 892049 [details]
Script to generate profiling reports
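
The attached script is authoritative; a minimal equivalent using the standard pstats module might look like:

    import pstats
    import sys

    # Usage: python report.py vdsmd.prof
    # Print the top 20 entries sorted by internal time and then by
    # cumulative time, like the reports attached to this bug.
    stats = pstats.Stats(sys.argv[1])
    stats.strip_dirs()
    for order in ("time", "cumulative"):
        stats.sort_stats(order)
        stats.print_stats(20)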

Comment 46 Nir Soffer 2014-05-07 08:09:00 UTC
http://gerrit.ovirt.org/27441 is in review for inclusion upstream, replacing http://gerrit.ovirt.org/27236, which is a testing patch for 3.3.

Comment 61 Aharon Canan 2014-08-12 12:47:52 UTC
As this was already verified in 3.4 (see bug 1095900), there is no need to verify it in 3.5.

