|Summary:||Handle lvmetad/udev in a cluster|
|Product:||Red Hat Enterprise Linux 6||Reporter:||Alasdair Kergon <agk>|
|Component:||lvm2||Assignee:||Peter Rajnoha <prajnoha>|
|Status:||CLOSED WONTFIX||QA Contact:||cluster-qe <cluster-qe>|
|Version:||6.4||CC:||agk, dwysocha, heinzm, jbrassow, mcsontos, michele, msnitzer, prajnoha, prockai, thornber, zkabelac|
|Fixed In Version:||Doc Type:||Known Issue|
Clustered environments are not yet supported by lvmetad. If global/use_lvmetad=1 is used together with the global/locking_type=3 configuration setting (clustered locking), the use_lvmetad setting is automatically overridden to 0 and lvmetad is not used at all in this case. The following warning message is also displayed: WARNING: configuration setting use_lvmetad overridden to 0 due to locking_type 3. Clustered environment not supported by lvmetad yet.
|Last Closed:||2015-09-29 21:39:23 UTC||Type:||Bug|
|oVirt Team:||---||RHEL 7.3 requirements from Atomic Host:|
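The Doc Text above refers to the global/use_lvmetad and global/locking_type settings. A minimal excerpt of the conflicting combination in /etc/lvm/lvm.conf might look like this (the rest of the global section is omitted):

```
global {
    locking_type = 3    # clustered locking via clvmd
    use_lvmetad = 1     # ignored: forced to 0 with the warning above
}
```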
Description Alasdair Kergon 2012-04-20 16:08:02 UTC
As currently coded, in a cluster, lvmetad would run independently on each node. Certain events happening on one node would cause the cache contents on another node to be wrong. Deal with this.
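A minimal model of the problem described above, purely illustrative (no real lvmetad protocol is used): each node keeps its own independent cache, so a metadata change made on one node never reaches the other and that node's cache silently goes stale.

```python
# One independent lvmetad cache per node; "seqno N" stands in for
# VG metadata at sequence number N. All names here are invented
# for illustration.
caches = {"node1": {"vg0": "seqno 1"}, "node2": {"vg0": "seqno 1"}}

def local_vg_change(node, vg, new_metadata):
    """A VG change seen only by the node where it happened;
    nothing propagates it to the other nodes' caches."""
    caches[node][vg] = new_metadata

local_vg_change("node1", "vg0", "seqno 2")

# node2 still holds the old metadata: its cache is now wrong.
stale = caches["node1"]["vg0"] != caches["node2"]["vg0"]
```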
Comment 1 Peter Rajnoha 2012-05-02 14:24:59 UTC
I've heard a suggestion that we could possibly do without event propagation to other nodes. We have to be very careful here, as events do not originate in the kernel solely as a result of processing certain dm ioctls. Any user can call "udevadm trigger" at any time, and the WATCH udev rule can be added as well (though this rule is removed for dm devices in current RHEL versions) - we can't prohibit that generally. This would generate (artificial) events and, as a matter of fact, all the udev rules would be reapplied (together with the call to lvmetad --cache), which means lvmetad could see something different now - a new state. We have to be careful if we go the "no event propagation" way: we need to make sure that the states of all lvmetad instances on all nodes are always in sync!
Comment 2 Peter Rajnoha 2012-05-02 14:32:55 UTC
(In reply to comment #1)
> I've heard that there was a suggestion that we could possibly do without event
> propagation to other nodes. We have to be very careful here as the events are
> not originated in kernel itself as a result of processing certain dm ioctl
> only.

(...or updating the state based on the current "cluster locking" scheme)
Comment 3 RHEL Product and Program Management 2012-07-10 08:25:46 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
Comment 4 RHEL Product and Program Management 2012-07-10 23:57:21 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
Comment 5 Peter Rajnoha 2012-07-18 10:43:27 UTC
*** Bug 799859 has been marked as a duplicate of this bug. ***
Comment 6 Petr Rockai 2012-07-31 09:08:30 UTC
I have been thinking about this, and this is what comes to mind:

1) When clvmd is running, it "takes over" the lvmetad socket. There are some options on how to achieve this:
   - clients know that clvmd is running and use an alternate socket to talk to lvmetad, or
   - clvmd is started in place of lvmetad, opens lvmetad's usual socket, and lvmetad is started on an alternate socket that only clvmd will use.

2) All requests on the lvmetad socket are intercepted by clvmd, but passed on unchanged to lvmetad. lvmetad does its normal processing and replies (to clvmd). A new field is added to lvmetad responses, something like "needs_propagating" (0 or 1). If this is 1, the original request as intercepted by clvmd is broadcast to all other clvmd instances, and each clvmd passes it on to its local lvmetad instance. The reply is then, of course, forwarded back to the original client.

This basically means that while clvmd implements all the transport logic, lvmetad retains the knowledge of its own protocol and of what it is caching and how. In this scheme, clvmd acts as a relatively dumb transport; on the other hand, lvmetad needs to know which state is local (devices) and which is global (VGs).

An alternative to 2) would be to include more detailed information about "what changed" in the response, such as a list of VGs that have been affected and how (maybe extending the current "status = complete" mechanism). This might find other uses and would put less knowledge about clustering into lvmetad (which shouldn't need to know much about it).
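The relay idea in 2) can be sketched as follows. The "needs_propagating" field comes from the comment itself; everything else (the stub class, the dict-based request/response shapes) is invented for illustration and is not the real lvmetad wire protocol.

```python
class LvmetadStub:
    """Stands in for one node's lvmetad: caches VG metadata by name."""
    def __init__(self):
        self.cache = {}

    def handle(self, request):
        if request["op"] == "vg_update":
            self.cache[request["vg"]] = request["metadata"]
            # Global (VG) state changed, so other nodes must see it too.
            return {"status": "ok", "needs_propagating": 1}
        if request["op"] == "vg_lookup":
            # Read-only request: nothing to propagate.
            return {"status": "ok", "needs_propagating": 0,
                    "metadata": self.cache.get(request["vg"])}
        return {"status": "unknown request", "needs_propagating": 0}


def clvmd_relay(request, local_lvmetad, peer_lvmetads):
    """clvmd intercepts a client request, passes it unchanged to the
    local lvmetad, and broadcasts it to the peer nodes only when the
    reply flags the change as global; the local reply is then
    forwarded back to the original client."""
    reply = local_lvmetad.handle(request)
    if reply.get("needs_propagating"):
        for peer in peer_lvmetads:
            peer.handle(request)  # each remote clvmd -> its lvmetad
    return reply
```

With this split, clvmd stays a dumb transport and only lvmetad decides, per request, whether the affected state is local (devices) or global (VGs).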
Comment 7 Tom Lavigne 2012-09-07 15:21:52 UTC
This request was evaluated by Red Hat Product Management for inclusion in the current release of Red Hat Enterprise Linux. Since we are unable to provide this feature at this time, it has been proposed for the next release of Red Hat Enterprise Linux.
Comment 8 Petr Rockai 2012-10-25 12:36:41 UTC
In 6.4, this is *not* supported. I have made the code disable lvmetad when locking_type is set to 3 and warn the user about it (log_warn). The patch implementing that change is b248ba0..2fdd084.
Comment 9 Peter Rajnoha 2012-10-25 12:55:06 UTC
+ 10492b238d25d85b9aab3eb851bedd1937146e39 The warning message we issue: "WARNING: configuration setting use_lvmetad overridden to 0 due to locking_type 3. Clustered environment not supported by lvmetad yet."
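The decision logic behind the override in comments 8 and 9 can be sketched like this. This is not the actual lvm2 C code, just the behavior it implements; the function name and warn callback are invented for illustration.

```python
LOCKING_TYPE_CLUSTERED = 3  # global/locking_type=3, i.e. clvmd locking

def effective_use_lvmetad(use_lvmetad, locking_type, warn=print):
    """Return the use_lvmetad value lvm2 actually honours:
    clustered locking forces it to 0 and warns the user."""
    if use_lvmetad and locking_type == LOCKING_TYPE_CLUSTERED:
        warn("WARNING: configuration setting use_lvmetad overridden "
             "to 0 due to locking_type 3. Clustered environment not "
             "supported by lvmetad yet.")
        return 0
    return use_lvmetad
```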
Comment 10 Peter Rajnoha 2012-10-25 13:05:58 UTC
This issue should be mentioned in the RHEL 6.4 Known Issues section of the documentation.
Comment 11 Peter Rajnoha 2013-05-17 10:15:05 UTC
We don't have a concrete design yet; setting devel NAK.
Comment 12 Peter Rajnoha 2013-10-08 13:58:26 UTC
Moving to 6.6 for consideration.
Comment 13 Peter Rajnoha 2014-08-27 10:51:15 UTC
Moving to 6.7 for consideration.