Bug 1388103 - RAID should repair leg for single write error sector
Summary: RAID should repair leg for single write error sector
Keywords:
Status: NEW
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: 2.02.166
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On: 1416099 1476097 1560739
Blocks:
 
Reported: 2016-10-24 13:16 UTC by Zdenek Kabelac
Modified: 2023-08-10 15:40 UTC
CC List: 6 users

Fixed In Version: v2_02_186
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-24 20:37:04 UTC
Embargoed:
rule-engine: lvm-technical-solution?
rule-engine: lvm-test-coverage?




Links
System ID: Red Hat Bugzilla 1203011 (Private: 1)
Priority / Status: high / CLOSED
Summary: HA-LVM Automatic Device Revival
Last Updated: 2023-12-01 07:27:04 UTC

Internal Links: 1203011

Description Zdenek Kabelac 2016-10-24 13:16:59 UTC
Description of problem:

Currently, 'lvconvert --repair --use-policies' is not able to automatically repair a raid LV where some leg experienced a write error (sector write error).
If the PV itself is still present, the repair will not occur and the device status remains 'D' for the affected devices.


Version-Release number of selected component (if applicable):
2.02.166

How reproducible:


Steps to Reproduce:
1. Create a raid1 LV (e.g. a two-leg raid1)
2. Simulate a write failure (failing sector) on one leg during a write
3. Run 'lvconvert --repair --use-policies' (a command sketch is shown below)
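
A minimal sketch of these steps (the VG/LV names 'vg'/'rlv', the PV paths and the error-injection method are assumptions, not taken from this report):

   # 1. Create a two-leg raid1 LV:
   lvcreate --type raid1 -m 1 -L 100M -n rlv vg /dev/sdb /dev/sdc

   # 2. Make one leg's PV return write errors on some sectors (e.g. by
   #    stacking a dm-flakey/dm-error mapping under it), then write to the
   #    LV so the error is actually hit:
   dd if=/dev/zero of=/dev/vg/rlv bs=4096 count=1 oflag=direct

   # 3. Check the per-leg health characters ('D' marks a failed leg in the
   #    kernel raid status) and the LV attributes, then attempt the
   #    policy-driven repair:
   dmsetup status vg-rlv
   lvs -a -o name,attr,lv_health_status vg
   lvconvert --repair --use-policies vg/rlv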

Actual results:


Expected results:


Additional info:

Comment 1 Jonathan Earl Brassow 2016-10-24 20:37:04 UTC
Pretty sure this is expected behavior. Please read over the lvmraid.7 man page before reopening this bug with an explanation.

       refresh needed
       A device was temporarily missing but has returned.  The LV needs to be refreshed to use the device again (which will
       usually require partial synchronization).  This is also indicated by the letter "r" (refresh needed) in the 9th
       position of the lvs attr field.  See Refreshing an LV.  This could also indicate a problem with the device, in which
       case it should be replaced, see Replacing Devices.
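
For reference, the refresh mentioned above is done with lvchange (a minimal sketch; the VG/LV names are placeholders, not taken from this report):

       # Re-attach the returned device and trigger the (partial)
       # resynchronization of the raid LV:
       lvchange --refresh vg/rlv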

Comment 2 Zdenek Kabelac 2016-10-25 07:04:02 UTC
So I'm quite confused.

With the OLD mirror target we get such a leg replaced automatically (the primary purpose of dmeventd monitoring).

With the NEW raid target the device is left in the 'D' state, and the user is supposed to detect that some legs have had write errors and replace them himself?

Comment 3 Heinz Mauelshagen 2016-10-25 16:10:07 UTC
Yes, as the code stands, the user has to react.

There are multiple ways to go about this:

a) don't repair automatically as we do now, and rather allow the user to replace
   any failed SubLVs based on an evaluation of the PV

b) allow automatic repair, based on a new policy, even on partial
   PV failures which cause a RAID event to be thrown on I/Os to those parts
   and result in status character 'D' on the device

c) skip automatic repair based on a new policy but still allow the
   user to repair manually in case the error condition on the PV(s) of
   SubLVs persists; this is doable via replacement already

Unless we decide on b), the current code, which already allows manual
replacement (see the example below), covers everything and the BZ can stay closed.

With b), we need thresholds in order to avoid repair 'storms' on
sequences of transient errors+repairs.

This is not worth the effort because of the existing replacement feature.
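
For reference, the existing replacement looks roughly like this (a sketch; the PV, VG and LV names are placeholders, not taken from this report):

   # Replace the SubLVs that live on the failing PV with newly allocated
   # extents, optionally restricting the allocation to a specific new PV:
   lvconvert --replace /dev/sdb vg/rlv
   lvconvert --replace /dev/sdb vg/rlv /dev/sdd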

Comment 4 Zdenek Kabelac 2016-10-26 08:27:14 UTC
From discussion with Jonathan:

With raid_fault_policy == "allocate":

The documentation clearly requires the leg to be replaced automatically upon the first write error.

When the policy is set to "warn", it is the case where the user is supposed to handle errors.

However, even in this case, e.g. with some transient errors, we need to do a better job - see bug 1203011, which basically asks for an automatic refresh in such cases. Of course we need to avoid repair 'storms'.


So as this bug stands, for the "allocate" policy a bugfix is needed (the relevant lvm.conf setting is sketched below).
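
For context, the policy in question is the raid_fault_policy setting in the activation section of lvm.conf (a sketch of the two documented values; the shipped default may differ between releases):

   activation {
       # "warn":     dmeventd only logs a warning on a raid device failure;
       #             the administrator repairs or replaces the leg manually.
       # "allocate": dmeventd runs 'lvconvert --repair --use-policies' to
       #             replace the failed leg with free extents from the VG.
       raid_fault_policy = "warn"
   }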

The definition of dmeventd's behavior in the case of the "warn" policy seems to be a bit fuzzy as well, since I don't see much difference between a transient write error and a transient disk disappearance - they should work essentially the same way - so if we tend to drop the disk in the latter case, that does not match the logic used for a sector write error.


Also, the policy logic should be doing approximately the same job as it does for mirror; if we need different logic, it should have a different, new name.
The user should not be required to study the differences between the mirror and raid technologies in depth, since we position the raid1 target as a direct replacement.

Final note: we currently do not get any messages from dmeventd unless the '-ddd' option is specified; this needs a separate fix.
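
A way to see those messages today is to run the daemon in the foreground with maximum debug verbosity (a sketch; it assumes no other dmeventd instance is running):

   # -f keeps dmeventd in the foreground, -d (repeated) raises the debug level:
   dmeventd -f -ddd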

Comment 5 Heinz Mauelshagen 2019-09-09 16:24:27 UTC
I wonder if stable commit 9e438b4bc6b9240b63fc79acfef3c77c01a848d8 also fixes Zdenek's issue?

Zdenek,

can you still reproduce this?

