Bug 1675134 - [GSS] Gluster pod loses udev access with 3.11.1 upgrade
Summary: [GSS] Gluster pod loses udev access with 3.11.1 upgrade
Keywords:
Status: CLOSED DUPLICATE of bug 1674485
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: cns-deploy-tool
Version: ocs-3.11
Hardware: All
OS: Linux
Severity: urgent
Priority: urgent
Target Milestone: ---
Target Release: ---
Assignee: Michael Adam
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-02-11 20:20 UTC by Matthew Robson
Modified: 2019-02-13 11:42 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-12 17:50:52 UTC
Target Upstream Version:



Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1674485 None VERIFIED Cannot install latest version of ocp + ocs in aws environment. 2019-03-26 19:19:19 UTC
Red Hat Bugzilla 1676612 None ON_QA lvm tools expect access to udev, even when disabled in the configuration 2019-03-26 19:19:18 UTC

Internal Links: 1676612 1688316

Comment 7 Niels de Vos 2019-02-12 14:55:23 UTC
What types of block devices do you have? I suspect that lvm2 tries to detect the devices (or gather more information about them) by calling out to udev. This seems to happen in some environments but not in others; possibly the difference is in the block devices that are connected.
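One way to gather this information from inside the gluster pod is sketched below. This is a hypothetical diagnostic, not something from the bug report: lvm2 treats udev as active when the udevd control socket exists, so checking for /run/udev/control shows whether lvm2 would try to talk to udev, and lsblk reports the attached block devices and their transports.

```shell
#!/bin/sh
# Hypothetical diagnostic (not from this bug): lvm2 considers udev active
# when the udevd control socket is present. In a container that does not
# bind-mount /run/udev, the socket is typically absent.
if [ -S /run/udev/control ]; then
    echo "udev control socket present"
else
    echo "udev control socket absent"
fi

# List the attached block devices; TYPE and TRAN show device type and
# transport (e.g. sata, iscsi, virtio), which may differ between the
# environments where the failure does and does not reproduce.
lsblk -d -o NAME,TYPE,TRAN,SIZE 2>/dev/null || true
```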

See https://bugzilla.redhat.com/show_bug.cgi?id=1674485#c8 for a few more details as well.
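For reference, the udev-related knobs in lvm2 live in lvm.conf. The excerpt below is a sketch of the standard options commonly disabled for containerized lvm2; whether lvm2 actually avoids udev access when these are turned off is exactly what the linked bug 1676612 ("lvm tools expect access to udev, even when disabled in the configuration") tracks.

```
# /etc/lvm/lvm.conf (excerpt) -- standard lvm.conf options, shown as a sketch
devices {
    # Build the device list by scanning /dev instead of asking udev
    obtain_device_list_from_udev = 0
}
activation {
    # Do not wait for udev to process device nodes
    udev_sync = 0
    # Do not rely on udev-managed device nodes and symlinks
    udev_rules = 0
}
```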

Comment 8 Niels de Vos 2019-02-12 17:21:51 UTC
The more I think about this, the more it feels like a duplicate of bz 1674485. If someone else agrees, feel free to close this one as a duplicate :)

Comment 10 Matthew Robson 2019-02-12 17:50:52 UTC

*** This bug has been marked as a duplicate of bug 1674485 ***

