Bug 1684917
| Summary: | udisksd is consuming a lot of memory. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Siddhant Rao <sirao> |
| Component: | udisks2 | Assignee: | Tomáš Bžatek <tbzatek> |
| Status: | CLOSED ERRATA | QA Contact: | guazhang <guazhang> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.5 | CC: | fdelorey, guazhang, knoha, mpitt, sirao, smaudet, tbzatek, tnisan, tony.pearce |
| Target Milestone: | rc | Keywords: | Rebase |
| Target Release: | 7.8 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | udisks2-2.8.4-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-03-31 19:59:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1710507, 1714276 | | |
| Bug Blocks: | | | |
Description
Siddhant Rao
2019-03-03 20:03:25 UTC
Hi Siddhant, I think you opened this bug on the wrong product; it was opened on RHEV.

(In reply to Siddhant Rao from comment #0)
> Additional info:
>
> This service was disabled by default on a freshly installed system.
> The customer found out that navigating the Cockpit interface,
> particularly the storage section, triggered this service to start running.
> Does Cockpit have a direct relation with this service? If yes, then how?
> Also, why is it retaining so much memory?

Cockpit uses udisks for nearly all storage-related tasks. It also activates udisks modules for LVM and other things.

Is it possible to get some logs of the service? Such big memory consumption indicates memory leaks; the daemon itself should keep a reasonably low footprint in memory and CPU. The total CPU time spent ("71039:47") is also large enough to indicate some kind of trouble. Is this a constant load or just a spike? Udisks generally responds to uevents (udev events); normally it should quiet down to zero CPU utilization. It would be interesting to see the traffic situation in `udevadm monitor` as well. Also, `udisksctl dump` output may help in understanding the customer's storage topology.

Thanks, got it. It looks like the lvm2 udisks plugin has been activated, and it is known to have some memory leaks, with some fixes upstream in recent releases.

> Dec 28 14:59:10 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
> Dec 28 14:59:10 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
> Jan 03 15:59:45 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
> Jan 03 15:59:45 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'

(This is a harmless warning, but it indicates which objects are being used.) By the way, this is a RHEL 7 bug judging by the udisks2-2.7.3-6.el7 version. Moved appropriately.

Hello, could someone share a reproducer here so that I can add automation to cover this bug? Thanks, Guazhang

A couple of commits related to memory leaks:

https://github.com/storaged-project/udisks/commit/24a9e1cfb9a449a2b5bcb8a1f42fe57b4f76486b#diff-7ce55eaa511876c34301417b27e7f95a
https://github.com/storaged-project/udisks/commit/21d14e5cf1a47b0e32750e3239bff20258c357eb#diff-7ce55eaa511876c34301417b27e7f95a
https://github.com/storaged-project/udisks/pull/559
https://github.com/storaged-project/udisks/pull/606
https://github.com/storaged-project/libblockdev/pull/410 (still not merged!)
https://github.com/storaged-project/libblockdev/commit/bc7b608f33fa121399d681f9d6bf6698d7c26d0e#diff-70f8d8cd722e8008a712d9bb83d0507a

However, I still don't think we've fixed all the leaks, even upstream. Subject to evaluation and leak hunting.

(In reply to guazhang from comment #15)
> Hello
>
> Could someone share a reproducer here so that I can add automation to
> cover this bug?

Try running our upstream D-Bus test suite in a loop and watch the memory consumption rise. It's a memory leak: dead memory that just sits there.

The real work on fixing memory leaks has been done in the following branches:

https://github.com/storaged-project/libblockdev/pull/439
https://github.com/storaged-project/libblockdev/pull/440
https://github.com/storaged-project/udisks/pull/659

Still a couple of minor leaks left to fix...
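For reference, a minimal hedged sketch of the kind of diagnostics and loop testing suggested in the comments above. The output file names, the five-minute capture window, the 100-cycle count, and the `run-udisks-dbus-tests.sh` wrapper are placeholders chosen for illustration, not details taken from this report:

# Collect the diagnostics requested above: uevent traffic and storage topology
# (file names and the 5-minute capture window are arbitrary choices).
timeout 300 udevadm monitor --udev --kernel > /tmp/udevadm-monitor.log 2>&1
udisksctl dump > /tmp/udisksctl-dump.txt

# Exercise the daemon in a loop and log its resident memory after each cycle.
for i in $(seq 1 100); do
    ./run-udisks-dbus-tests.sh > /dev/null 2>&1 || true   # hypothetical wrapper around the upstream D-Bus test suite
    echo "cycle $i: $(grep VmRSS /proc/$(pidof udisksd)/status)"
done

If the reported VmRSS keeps climbing across cycles while the storage configuration stays unchanged, that is consistent with the leaks tracked by the pull requests above.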
Hello, I don't know how to test this with Cockpit or gvfs apps; in fact, I don't have a test plan for it. I will run the full udisks regression suite once it is released. Is that OK for you? Thanks, Guazhang

Hello, I didn't find a memory leak while running 100 cycles of the udisks2 D-Bus testing, so moving to verified. Thanks, Guazhang

Hi guys, I am the customer who complained about this. I gave reproduction steps in the Red Hat support ticket. To reproduce, do the following:
1. First, confirm the service is not running on the RHV host:
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:udisks(8)
[root@juc-ucsb-2-p ~]#
2. Browse to the Cockpit web management interface on the RHV host and log in.
3. Navigate to the storage tab (cockpit > localhost > storage).
4. Confirm the service is now running on the host:
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2019-10-30 15:44:51 AWST; 14s ago
Docs: man:udisks(8)
Main PID: 2399 (udisksd)
Tasks: 6
CGroup: /system.slice/udisks2.service
└─2399 /usr/libexec/udisks2/udisksd
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'physical-volume'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
[root@juc-ucsb-2-p ~]#
5. Log out of Cockpit and confirm the service is still running:
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2019-10-30 15:44:51 AWST; 3min 11s ago
6. Check memory consumption of the service over time and verify that it keeps increasing. Using `top`, I see the `RES` value slowly and continuously rising for this process (a scripted way to log this is sketched after this comment):
[root@juc-ucsb-2-p ~]# top -p 2399
top - 15:47:59 up 32 days, 6:13, 1 user, load average: 3.54, 3.01, 2.76
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.6 us, 6.4 sy, 0.0 ni, 85.7 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13179067+total, 49212844 free, 74355768 used, 8222056 buff/cache
KiB Swap: 16441340 total, 16089556 free, 351784 used. 52538380 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2399 root 20 0 977248 31832 6144 S 0.0 0.0 0:03.17 udisksd
///
[root@juc-ucsb-2-p ~]# top -p 2399
top - 15:57:44 up 32 days, 6:23, 1 user, load average: 2.25, 2.58, 2.66
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 4.6 us, 4.7 sy, 0.0 ni, 90.6 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13179067+total, 49202940 free, 74357848 used, 8229880 buff/cache
KiB Swap: 16441340 total, 16089556 free, 351784 used. 52536300 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2399 root 20 0 977248 33020 6144 S 0.0 0.0 0:05.10 udisksd
Rgds.
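A hedged sketch of how the RES growth from step 6 could be logged automatically instead of re-running `top` by hand; the sampling interval and log file name are arbitrary choices, not part of the original report:

# Sample udisksd's resident set once a minute until the daemon exits;
# interrupt with Ctrl-C once enough data points have been collected.
pid=$(pidof udisksd)
while kill -0 "$pid" 2>/dev/null; do
    echo "$(date -Is) $(awk '/VmRSS/ {print $2, $3}' /proc/$pid/status)" >> /tmp/udisksd-rss.log
    sleep 60
done

The resulting log gives timestamped RSS values that can be plotted or attached to a support ticket.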
Thanks Tony, this should be fixed in udisks2-2.8.4-1.el7 and libblockdev-2.18-5.el7 (you need both packages). Should the memory consumption still rise, please open a new bug report.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1099
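A quick hedged check, assuming an RPM-based host, that both of the packages named in the comment above carry the fixed versions:

# Both packages must be updated for the leak fixes to take effect.
rpm -q udisks2 libblockdev
# Expect udisks2-2.8.4-1.el7 (or newer) and libblockdev-2.18-5.el7 (or newer).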