Bug 538515
Summary: | lvm2-cluster does not properly refresh device cache for newly appeared devices | |
---|---|---|---
Product: | Red Hat Enterprise Linux 5 | Reporter: | Shane Bradley <sbradley>
Component: | lvm2-cluster | Assignee: | Milan Broz <mbroz>
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list>
Severity: | high | Docs Contact: |
Priority: | low | |
Version: | 5.4 | CC: | agk, ccaulfie, cmarthal, cward, dwysocha, edamato, haselden, heinzm, jbrassow, mbroz, prockai, pvrabec, tao
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2010-03-30 09:02:29 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Shane Bradley
2009-11-18 18:13:39 UTC
The cache is refreshed when manipulating with orphan PVs, here in vgcreate and vgextend (the global lock, taken when manipulating with orphans, is propagated to other nodes and should flush the cache). I am probably missing something here: are all of the commands mentioned run from domain0, or is some command run inside a VM?

This has nothing to do with VMs. They were just the easiest way to reproduce the issue and demonstrate it; it happens on non-VM and VM systems alike. Here is my point, and I am not sure of the overhead involved, so this might be expensive and the reason we are not doing it this way: we are asking the end user to be in charge of refreshing the device cache any time a device is added or changed.

$ man clvmd
  -R   Tells all the running clvmd in the cluster to reload their device
       cache and re-read the lvm configuration file. This command should
       be run whenever the devices on a cluster system are changed.

My point is that this seems like a lot of responsibility for an end user. Not all end users know this, it is not well documented, and while I understand that they should read the man pages, it seems we could detect whether we are running in cluster mode and, if so, have operations that manipulate the LVM stack verify that all cluster nodes have a refreshed view of the devices. That would move an end-user responsibility into an automated one.

Actually, I think this is a bug and not an RFE: in this situation no clvmd -R should be needed. The cache is not properly refreshed, and apparently this leads to incorrect mapping and possible data corruption. (Reproduced with recent upstream and RHEL 5.4 code.)

Should be fixed in upstream code now. Fixed in lvm2-cluster-2.02.56-1.el5.

~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~

RHEL 5.5 Beta has been released! There should be a fix present in this release that addresses your request. Please test and report back results here by March 3rd 2010 (2010-03-03) or sooner.

Upon successful verification of this request, post your results and update the Verified field in Bugzilla with the appropriate value. If you encounter any issues while testing, please describe them and set this bug to NEED_INFO. If you encounter new defects or have additional patches to request for inclusion, please clone this bug for each request and escalate it through your support representative.

Fix was verified in lvm2-cluster-2.02.56-7.el5.

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0299.html
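For reference, the manual workaround the reporter objects to looks roughly like the following session. This is a hypothetical sketch: the device name /dev/sdd and volume group name clustervg are illustrative and do not appear in the report, and the commands assume a running clvmd on every node.

```shell
# Run on one cluster node after a new device has been presented to all nodes.
# /dev/sdd and clustervg are illustrative names.

clvmd -R                      # ask every running clvmd in the cluster to
                              # reload its device cache and re-read lvm.conf
pvcreate /dev/sdd             # initialize the new device as a physical volume
vgextend clustervg /dev/sdd   # extend the clustered volume group onto it
```

Per the comments above, with the fix in lvm2-cluster-2.02.56 the explicit `clvmd -R` should no longer be needed in this scenario, because the global lock taken by vgcreate/vgextend when manipulating orphan PVs is propagated to the other nodes and flushes their device caches.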