Description  Olimp Bockowski
2018-03-14 09:22:11 UTC
Description of problem:
The latest RHV-H is affected by the following storaged bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1463843
We should provide the fixed version.
Version-Release number of selected component (if applicable):
RHV 4.1.9
In redhat-release-virtualization-host-4.1-9.1.el7.x86_64 we have:
storaged-2.5.2-2.el7.x86_64
How reproducible:
It is tricky because the bug stays dormant most of the time. If someone logs into the RHV host using Cockpit and then opens the "Storage" tab, storaged is started and opens many pipes - this is where we hit the bug.
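Since the leaked descriptors in the linked bug are pipes, it can help to count just those. A minimal sketch, assuming a Linux host with /proc available; count_pipes is a hypothetical helper written for this report, not an existing tool:

```shell
# Count pipe file descriptors held by a process.
# In /proc/<pid>/fd, anonymous pipes show up as symlinks to "pipe:[inode]",
# so counting those isolates the leaked pipes from ordinary files/sockets.
count_pipes() {
    ls -l "/proc/$1/fd" 2>/dev/null | grep -c 'pipe:' || true
}
```

Usage would be e.g. `count_pipes "$(pidof storaged)"` on the affected host.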
Steps to Reproduce:
1. Using Cockpit, log into an RHV host that is part of an RHV environment and has access to some Storage Domains.
2. Go into the "Storage" tab; just entering it is enough.
3. On the hypervisor run, for example:
for i in {1..100}; do lsof -p 25739 | wc -l 1>> /tmp/files; sleep 5; done
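The PID 25739 above is specific to that host. A slightly more portable sketch, assuming the daemon's PID is looked up at run time and that /proc is available (count_fds is a hypothetical helper, not part of storaged):

```shell
# Count open file descriptors for a given PID by listing /proc/<pid>/fd,
# which avoids depending on lsof being installed.
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

# Sampling loop equivalent to the reproducer above, e.g.:
#   pid=$(pidof storaged)
#   for i in {1..100}; do count_fds "$pid" >> /tmp/files; sleep 5; done
```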
Actual results:
A 5-second sampling interval on an idle system (no VMs running) gives:
350
350
356
356
362
362
368
368
374
374
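The samples above climb by 6 descriptors roughly every 10 seconds. To quantify the leak from the collected log, one option is a small awk helper over the sample file (fd_growth is a hypothetical helper for this report):

```shell
# Print the total fd growth over a log of one count per line:
# last sample minus first sample.
fd_growth() {
    awk 'NR == 1 { first = $1 } { last = $1 } END { print last - first }' "$1"
}
```

Run against the samples above, `fd_growth /tmp/files` reports a growth of 24 descriptors over the sampled window; a healthy daemon should report a value near 0.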
Expected results:
The number of open file descriptors should not keep growing.
Additional info:
Test Version:
redhat-virtualization-host-4.2-20180322.0
udisks2-2.7.3-6.el7.x86_64
cockpit-storaged-160-3.el7.noarch
cockpit-160-3.el7.x86_64
Test Steps:
1. Using Cockpit, log into an RHV host that is part of an RHV environment and has access to some Storage Domains.
2. Go into the "Storage" tab; just entering it is enough.
3. On the hypervisor run, for example:
for i in {1..100}; do lsof -p 25739 | wc -l 1>> /tmp/files; sleep 5; done
Result:
The file descriptor count no longer keeps growing.
The bug is fixed; changing the status to VERIFIED.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2018:1524