Bug 1555243 - Consume updated cockpit-storaged packages
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-ng
Version: 4.1.9
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.2.2
Target Release: ---
Assigned To: Ryan Barry
QA Contact: Wei Wang
Keywords: TestOnly
Depends On:
Blocks:
 
Reported: 2018-03-14 05:22 EDT by Olimp Bockowski
Modified: 2018-05-15 13:58 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-15 13:57:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: (none)


External Trackers:
- Red Hat Product Errata RHSA-2018:1524 (Last Updated: 2018-05-15 13:58 EDT)

Description Olimp Bockowski 2018-03-14 05:22:11 EDT
Description of problem:
The latest RHV-H is affected by the following storaged bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1463843
We should provide the fixed version.


Version-Release number of selected component (if applicable):
RHV 4.1.9
On redhat-release-virtualization-host-4.1-9.1.el7.x86_64 we have:
storaged-2.5.2-2.el7.x86_64

How reproducible:
It is tricky because this bug is quiescent most of the time. If someone logs into an RHV host using Cockpit and goes into the "Storage" tab, storaged is started and opens many pipes; that is where we hit this bug.

Steps to Reproduce:
1. Using Cockpit, log into an RHV host which is part of an RHV environment and has access to some Storage Domains.
2. Go into the "Storage" tab (just entering it is enough).
3. On the hypervisor, run for example (where 25739 is the PID of the storaged process):
for i in {1..100}; do lsof -p 25739 | wc -l 1>> /tmp/files; sleep 5; done
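The loop above hard-codes the storaged PID. A sketch that counts open descriptors by listing /proc/&lt;pid&gt;/fd directly (this helper name and the single-sample smoke test are illustrative, not part of the original report) might look like:

```shell
#!/bin/sh
# Sketch: count the open file descriptors of a process via /proc/<pid>/fd.
# The PID is passed as an argument; on the affected host it would be the
# storaged PID (e.g. from `pidof storaged`).
fd_count() {
    ls "/proc/$1/fd" | wc -l
}

# Smoke test against our own shell; on the host you would instead loop:
#   for i in $(seq 1 100); do fd_count "$(pidof storaged)" >> /tmp/files; sleep 5; done
fd_count "$$"
```

Sampling /proc avoids the overhead of a full `lsof` scan and counts only descriptors, not every mapped file, so the numbers may differ slightly from the `lsof | wc -l` figures below.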

Actual results:
A 5-second sampling interval on an idle system (no VMs running) gives:
350
350
356
356
362
362
368
368
374
374


Expected results:
The file-descriptor count should stay stable instead of growing.
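One way to turn this expectation into a check is to compare the first and last samples written to /tmp/files. A hedged sketch (the helper name and the threshold of 10 extra descriptors are arbitrary choices, not from the original report):

```shell
#!/bin/sh
# Sketch: flag a leak if the fd count grew by more than a threshold
# between the first and last samples in a log file.
check_fd_growth() {
    first=$(head -n 1 "$1")
    last=$(tail -n 1 "$1")
    [ $((last - first)) -le "${2:-10}" ]
}

# Example using the counts from "Actual results":
printf '%s\n' 350 350 356 362 368 374 > /tmp/fd_samples
check_fd_growth /tmp/fd_samples 10 && echo "stable" || echo "leaking"
# prints "leaking" (374 - 350 = 24 > 10)
```

On a fixed build the samples should plateau and the check would report "stable".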

Additional info:
Comment 3 Wei Wang 2018-04-01 22:51:39 EDT
Test Version:
redhat-virtualization-host-4.2-20180322.0
udisks2-2.7.3-6.el7.x86_64
cockpit-storaged-160-3.el7.noarch
cockpit-160-3.el7.x86_64

Test Steps:
1. Using Cockpit, log into an RHV host which is part of an RHV environment and has access to some Storage Domains.
2. Go into the "Storage" tab (just entering it is enough).
3. On the hypervisor, run for example:
for i in {1..100}; do lsof -p 25739 | wc -l 1>> /tmp/files; sleep 5; done

Result:
The file-descriptor count no longer keeps growing.

The bug is fixed; changing the status to VERIFIED.
Comment 5 Wei Wang 2018-04-23 23:28:15 EDT
According to comment 3, changing the status to VERIFIED.
Comment 8 errata-xmlrpc 2018-05-15 13:57:47 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1524
