Description of problem:
Currently 'lvconvert --repair --use-policies' is not able to automatically repair a raid LV where some leg experienced a 'write error' (sector write error). If the PV itself is still present, repair will not occur and the device status remains with 'D' devices.

Version-Release number of selected component (if applicable):
2.02.166

How reproducible:

Steps to Reproduce:
1. create a raid1 LV
2. simulate a write failure during a write
3. lvconvert --repair --use-policies

Actual results:

Expected results:

Additional info:
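A minimal reproduction sketch, assuming a VG named 'vg' built from /dev/sda1 and /dev/sdb1; all device, VG and LV names here are illustrative, and the dm-flakey wrapper is just one way to inject sector write errors (the 'error_writes' feature needs a reasonably recent kernel):

    # wrap one disk in dm-flakey so writes intermittently fail
    # (offset 0, up 5s, down 1s, one feature: error_writes)
    dmsetup create badpv --table \
      "0 $(blockdev --getsz /dev/sdb1) flakey /dev/sdb1 0 5 1 1 error_writes"

    # build the VG from one healthy PV and the flakey one
    pvcreate /dev/sda1 /dev/mapper/badpv
    vgcreate vg /dev/sda1 /dev/mapper/badpv

    # create a two-leg raid1 LV
    lvcreate --type raid1 -m 1 -L 100M -n lv vg

    # write through the LV so the flakey leg sees sector write errors
    dd if=/dev/zero of=/dev/vg/lv bs=1M count=50 oflag=direct

    # attempt automatic repair under the configured fault policy
    # (normally dmeventd invokes this on a raid event)
    lvconvert --repair --use-policies vg/lv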
Pretty sure this is expected behavior. Please read over the lvmraid.7 man page before reopening this bug with an explanation.

  refresh needed
    A device was temporarily missing but has returned. The LV needs to be
    refreshed to use the device again (which will usually require partial
    synchronization). This is also indicated by the letter "r" (refresh
    needed) in the 9th position of the lvs attr field. See Refreshing an
    LV. This could also indicate a problem with the device, in which case
    it should be replaced, see Replacing Devices.
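For reference, the refresh path the man page points at looks like this (VG and LV names are placeholders):

    # 9th attr character 'r' means the LV needs a refresh
    lvs -o name,lv_attr,lv_health_status vg/lv

    # re-load the LV's tables so the returned device is used again;
    # this typically triggers a partial resynchronization
    lvchange --refresh vg/lv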
So I'm quite confused. With the OLD mirror target we get such a leg replaced (that is the primary purpose of dmeventd monitoring). With the NEW raid target the device is left in 'D' state, and the user is supposed to detect that some legs have got 'write' errors and replace them themselves?
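The 'D' state referred to here is the per-device health character reported by the dm-raid target; a quick way to see it (the mapped device name is an assumption):

    # dm-raid status reports one health character per leg:
    # 'A' = alive and in-sync, 'a' = alive but not in-sync, 'D' = dead/failed
    dmsetup status vg-lv
    # e.g.: 0 204800 raid raid1 2 AD 204800/204800 idle 0 0 -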
Yes, as the code stands, the user has to react. There are multiple ways to go about this:

a) do not repair automatically as we do now, and instead allow the user to replace any failed SubLVs based on an evaluation of the PV

b) allow automatic repair based on a new policy even on partial PV failures, which cause a RAID event to be thrown on I/Os to those parts and result in status character 'D' on the device

c) skip automatic repair based on the new policy, but still allow the user to repair manually in case the error condition on the PV(s) of the SubLVs persists; this is already doable via replacement, as sketched below

Unless we request b), the current code allowing replacement covers everything and the BZ can stay closed. With b), we would need thresholds in order to avoid repair 'storms' on sequences of transient errors and repairs. This is not worth the effort because of the existing replacement feature.
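A sketch of the manual replacement path mentioned in c), assuming the failing leg lives on /dev/sdb1 and a spare PV /dev/sdc1 is already in the VG (all names hypothetical):

    # replace the leg that sits on the suspect PV with a new one
    lvconvert --replace /dev/sdb1 vg/lv /dev/sdc1

    # or, if the PV has failed outright, allocate replacements for all
    # failed SubLVs from remaining free space in the VG
    lvconvert --repair vg/lv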
From discussion with Jonathan:

With raid_fault_policy == "allocate", the documentation clearly requires the leg to be replaced automatically upon the first write error. When the policy is set to "warn", it is indeed the case that the user is supposed to handle errors. However, even in this case of e.g. some transient errors, we need to do a better job; see bug 1203011, which basically asks for an automatic refresh in such cases. Of course we need to avoid repair 'storms'.

So as this bug stands, for the "allocate" policy a bugfix is needed.

The definition of dmeventd's work in the case of the "warn" policy seems a bit fuzzy as well, since I don't see much difference between a 'transient' write error and a transient disk disappearance; they should work essentially the same way. So if we tend to drop the disk in the latter case, that does not match the logic for a sector write error.

Also, the policy logic should be doing approximately the same job as it does for mirror; if we need different logic, it should get a different, new name (the relevant lvm.conf settings are sketched below). The user should not be required to deeply study the difference between the mirror and raid technologies, since we position the raid1 target as a direct replacement.

Final note: we currently miss any messages from dmeventd unless the '-ddd' options are specified; that needs a separate fix.
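For context, the policy knobs being compared live in the activation section of lvm.conf; this is a sketch of the relevant settings, not a recommendation:

    # lvm.conf excerpt
    activation {
        # dmeventd policy for raid LVs: "warn" only logs,
        # "allocate" replaces a failed leg from free VG space
        raid_fault_policy = "allocate"

        # the corresponding policies for the old mirror target
        mirror_log_fault_policy = "allocate"
        mirror_image_fault_policy = "remove"
    }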
I wonder whether stable commit 9e438b4bc6b9240b63fc79acfef3c77c01a848d8 also fixes Zdenek's issue. Zdenek, can you still reproduce this?