Bug 186004 - [RHEL4 U3] device-mapper mirror: region with a write failure becomes in-sync upon suspension.
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Alasdair Kergon
Depends On:
Blocks: 181409 186476
Reported: 2006-03-20 16:56 EST by Kiyoshi Ueda
Modified: 2013-04-02 19:51 EDT (History)
10 users

See Also:
Fixed In Version: RHSA-2006-0575
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2006-08-10 18:49:18 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
supplemental patch (1.61 KB, patch)
2006-05-15 09:54 EDT, Jonathan Earl Brassow
no flags Details | Diff

Description Kiyoshi Ueda 2006-03-20 16:56:52 EST
Description of problem:
The status of a region with a write failure is marked "in-sync"
if the mirror device is suspended before dmeventd takes action.
This causes data corruption: the write succeeds on every device
except the failed one, yet recovery is not kicked off even after
the failed device is restored.

Version-Release number of selected component:

How reproducible:

Steps to Reproduce:
 1. Modify /etc/lvm/lvm.conf so that dmeventd is not launched, like below:
      dmeventd {
          mirror_library = "none"
      }
 2. Prepare several PVs (at least 3) and create a VG from them.
      - /dev/sda, /dev/sdb, /dev/sdc as PVs
      - vg0 contains these 3 PVs
 3. Create a mirror LV and activate it.
      # lvcreate -L 12M -n lv0 -m 1 vg0
 4. Wait until recovery is done.
 5. Disconnect one of PVs used for the mirror LV.
      # echo offline > /sys/block/sdb/device/state
 6. Write distinguishable data to the mirror LV.
      # echo -n "deadbeef" | dd of=/dev/mapper/vg0-lv0
 7. Suspend and resume the mirror LV.
      # dmsetup suspend vg0-lv0
      # dmsetup resume vg0-lv0
 8. Re-connect the PV.
      # echo running > /sys/block/sdb/device/state
 9. Deactivate and reactivate the mirror LV.
      # lvchange -an vg0/lv0
      # lvchange -ay vg0/lv0
10. Check if mirror images are consistent.
      # cmp -l /dev/mapper/vg0-lv0_mimage_0 /dev/mapper/vg0-lv0_mimage_1

Actual results:
cmp detects that the two mirror images are inconsistent, while
'dmsetup status' shows that recovery of the mirror LV has completed.
This happens because kmir_mon calls bio_endio() with no error
for all failed writes in write_failure_handler().
The writes are therefore considered successful and the corresponding
region is marked "in-sync".

Expected results:
cmp should not detect any difference.
On the kernel side, a region that has had write errors should not
be marked "in-sync" while the failed device is still in the
mirror map.
And recovery should be correctly kicked off once the failed device is
restored.

Additional info:
Comment 1 Jun'ichi NOMURA 2006-03-24 09:15:28 EST
At a minimum, we have to avoid the data corruption.
That is possible by not marking the failed region as clean
until the failed device is removed.

A possible solution is to mark the failed region (which
should be RH_DIRTY at that point) as RH_NOSYNC.
With the fix proposed in BZ#177067 Comment#10,
the RH_NOSYNC region will stay in the region hash and be kept unclean
until a suspend happens.
Upon suspend, the dirty state of the region is flushed to the log.
Then, after resume, the RH_NOSYNC region will eventually be
recovered.

If the recovery process correctly handles write errors (BZ#185785),
we can ensure that the region state does not become clean until
failed devices are removed.
Comment 4 Jason Baron 2006-04-28 13:32:25 EDT
Committed in stream U4 build 34.26. A test kernel with this patch is available
from http://people.redhat.com/~jbaron/rhel4/
Comment 5 Jonathan Earl Brassow 2006-05-15 09:50:54 EDT
Needs a supplemental patch to properly account for cluster mirroring.
Sent to the kernel list on 05/15/2006.
Comment 6 Jonathan Earl Brassow 2006-05-15 09:54:25 EDT
Created attachment 129057 [details]
supplemental patch

This patch applies on top of what is already there.
Comment 7 Jonathan Earl Brassow 2006-05-15 10:28:54 EDT
I've created a new bugzilla to track the supplemental patch.
Comment 10 Red Hat Bugzilla 2006-08-10 18:49:18 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

