Bug 814779 - Handle lvmetad/udev in a cluster
Summary: Handle lvmetad/udev in a cluster
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 6.4
Assignee: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 799859
Depends On:
Blocks:
 
Reported: 2012-04-20 16:08 UTC by Alasdair Kergon
Modified: 2015-09-29 21:39 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
A clustered environment is not supported by lvmetad yet. If global/use_lvmetad=1 is used together with the global/locking_type=3 configuration setting (clustered locking), the use_lvmetad setting is automatically overridden to 0 and lvmetad is not used in this case at all. Also, the following warning message is displayed: WARNING: configuration setting use_lvmetad overriden to 0 due to locking_type 3. Clustered environment not supported by lvmetad yet.
Clone Of:
Environment:
Last Closed: 2015-09-29 21:39:23 UTC
Target Upstream Version:
Embargoed:



Description Alasdair Kergon 2012-04-20 16:08:02 UTC
As currently coded, in a cluster, lvmetad would run independently on each node.
Certain events happening on one node would cause the cache contents on another node to be wrong.
Deal with this.
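
(Illustration only - a minimal, hypothetical C sketch of the problem described above; the structure and field names are invented and are not real lvm2 code:)

  #include <stdio.h>

  /* Each node runs its own lvmetad cache; nothing propagates changes
   * between them. */
  struct node_cache {
      const char *node;
      int vg_seqno;           /* cached VG metadata sequence number */
  };

  int main(void)
  {
      struct node_cache a = { "node-a", 1 };
      struct node_cache b = { "node-b", 1 };

      /* node-a processes a local metadata change (e.g. an lvextend)... */
      a.vg_seqno = 2;

      /* ...but node-b never hears about it, so its cached view is now wrong. */
      printf("%s caches seqno %d, %s caches seqno %d\n",
             a.node, a.vg_seqno, b.node, b.vg_seqno);
      return 0;
  }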

Comment 1 Peter Rajnoha 2012-05-02 14:24:59 UTC
I've heard that there was a suggestion that we could possibly do without event propagation to other nodes. We have to be very careful here, as the events do not originate in the kernel itself only as a result of processing certain dm ioctls. Any user can call "udevadm trigger" at any time, as well as add the WATCH udev rule (though this rule is removed for dm devices in current RHEL versions) - we can't prohibit that generally. This would generate (artificial) events and, as a matter of fact, all the rules are reapplied (together with calling lvmetad --cache), which means lvmetad could see something different now - a new state.

We have to be careful if we go the "no event propagation" way. We need to make sure that the states of all lvmetad instances on all nodes are always in sync!

Comment 2 Peter Rajnoha 2012-05-02 14:32:55 UTC
(In reply to comment #1)
> I've heard that there was a suggestion that we could possibly do without event
> propagation to other nodes. We have to be very careful here, as the events do
> not originate in the kernel itself only as a result of processing certain dm
> ioctls.

(...or of updating the state based on the current "cluster locking" scheme)

Comment 3 RHEL Program Management 2012-07-10 08:25:46 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 4 RHEL Program Management 2012-07-10 23:57:21 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 5 Peter Rajnoha 2012-07-18 10:43:27 UTC
*** Bug 799859 has been marked as a duplicate of this bug. ***

Comment 6 Petr Rockai 2012-07-31 09:08:30 UTC
I have been thinking about this, and this is what comes to mind:

1) when clvmd is running, it "takes over" the lvmetad socket; there are some options on how to achieve this:
  - clients know that clvmd is running and use an alternate socket to talk to lvmetad
  - clvmd is started in place of lvmetad, opens its usual socket and lvmetad is started on an alternate socket that only clvmd will use

2) all requests on the lvmetad socket are intercepted by clvmd but passed on unchanged to lvmetad; lvmetad does its normal processing and replies (to clvmd); a new field is added to lvmetad responses, something like "needs_propagating" (0 or 1); if this is 1, the original request as intercepted by clvmd is broadcast to all other clvmd instances, and each clvmd passes it on to its local lvmetad instance; the reply is then of course forwarded back to the original client.

This basically means that while clvmd implements all the transport logic, lvmetad retains the knowledge of its own protocol and of what it is caching and how. In this case, clvmd acts as a relatively dumb transport. On the other hand, lvmetad needs to know which state is local (devices) and which is global (VGs).
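
(Illustration only - a minimal, hypothetical C sketch of the interception flow from 2); the function names and the "vg_update" request are invented and are not the real clvmd/lvmetad protocol:)

  #include <stdio.h>
  #include <string.h>

  struct reply {
      char text[256];
      int needs_propagating;  /* the new response field proposed in 2) */
  };

  /* Stand-in for the local lvmetad: do the normal processing and decide
   * whether the request touched global (VG) state other nodes must replay. */
  static struct reply lvmetad_handle(const char *request)
  {
      struct reply r;

      snprintf(r.text, sizeof(r.text), "OK: %s", request);
      r.needs_propagating = (strstr(request, "vg_update") != NULL);
      return r;
  }

  /* Stand-in for the broadcast: send the unchanged request to the clvmd
   * instance on every other node; each passes it on to its local lvmetad. */
  static void clvmd_broadcast(const char *request)
  {
      printf("broadcast to peer nodes: %s\n", request);
  }

  /* clvmd intercepting a request that arrived on the lvmetad socket. */
  static struct reply clvmd_intercept(const char *request)
  {
      struct reply r = lvmetad_handle(request);

      if (r.needs_propagating)
          clvmd_broadcast(request);

      return r;               /* forwarded back to the original client */
  }

  int main(void)
  {
      struct reply r = clvmd_intercept("vg_update vg0");

      printf("reply to client: %s\n", r.text);
      return 0;
  }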

An alternative to 2) would be to include more detailed information about "what changed" in the response, like a list of VGs that have been affected and how (maybe extending the current status = complete mechanism). This might find other uses and would put less knowledge about the cluster into lvmetad (which shouldn't need to know much about it).

Comment 7 Tom Lavigne 2012-09-07 15:21:52 UTC
This request was evaluated by Red Hat Product Management for 
inclusion in the current release of Red Hat Enterprise Linux.
Since we are unable to provide this feature at this time,  
it has been proposed for the next release of 
Red Hat Enterprise Linux.

Comment 8 Petr Rockai 2012-10-25 12:36:41 UTC
In 6.4, this is *not* supported. I have made the code disable lvmetad when locking_type is set to 3 and warn the user about it (log_warn). The patch implementing that change is b248ba0..2fdd084.

Comment 9 Peter Rajnoha 2012-10-25 12:55:06 UTC
+ 10492b238d25d85b9aab3eb851bedd1937146e39

The warning message we issue:
  "WARNING: configuration setting use_lvmetad overriden to 0 due to locking_type 3. Clustered environment not supported by lvmetad yet."

Comment 10 Peter Rajnoha 2012-10-25 13:05:58 UTC
This issue should be mentioned in the RHEL 6.4 Known Issues part of the documentation.

Comment 11 Peter Rajnoha 2013-05-17 10:15:05 UTC
We don't have a concrete design yet, setting devel NAK.

Comment 12 Peter Rajnoha 2013-10-08 13:58:26 UTC
Moving to 6.6 for consideration.

Comment 13 Peter Rajnoha 2014-08-27 10:51:15 UTC
Moving to 6.7 for consideration.

