Red Hat Bugzilla – Bug 186004
[RHEL4 U3] device-mapper mirror: region with a write failure becomes in-sync on suspension.
Last modified: 2013-04-02 19:51:29 EDT
Description of problem:
A region that has had a write failure is marked "in-sync"
if the mirror device is suspended before dmeventd takes action.
This causes data corruption: the data is written successfully to
every device except the failed one, yet recovery is not kicked off
even after the failed device is restored.
Version-Release number of selected component:
Steps to Reproduce:
1. Modify /etc/lvm/lvm.conf so that dmeventd is not launched, as below.
mirror_library = "none"
2. Prepare several PVs (more than 2) and create a VG from them.
- /dev/sda, /dev/sdb, /dev/sdc as PVs
- vg0 contains these 3 PVs
3. Create a mirror LV and activate it.
# lvcreate -L 12M -n lv0 -m 1 vg0
4. Wait until recovery is done.
5. Disconnect one of the PVs used for the mirror LV.
# echo offline > /sys/block/sdb/device/state
6. Write distinguishable data to the mirror LV.
# echo -n "deadbeef" | dd of=/dev/mapper/vg0-lv0
7. Suspend and resume the mirror LV.
# dmsetup suspend vg0-lv0
# dmsetup resume vg0-lv0
8. Re-connect the PV.
# echo running > /sys/block/sdb/device/state
9. Deactivate and reactivate the mirror LV.
# lvchange -an vg0/lv0
# lvchange -ay vg0/lv0
10. Check if mirror images are consistent.
# cmp -l /dev/mapper/vg0-lv0_mimage_0 /dev/mapper/vg0-lv0_mimage_1
Actual results:
cmp detects that the two mirror images are inconsistent, while
'dmsetup status' shows that recovery of the mirror LV is complete.
This happens because kmir_mon calls bio_endio() with no error
for all failed writes in write_failure_handler().
The writes are therefore considered successful and the corresponding
region is marked "in-sync".
Expected results:
cmp should not report any difference.
On the kernel side, a region that has had write errors should not
be marked "in-sync" while the failed device is still part of the
mirror, and recovery should be correctly kicked off once the failed
device is restored.
At a minimum, we have to avoid the data corruption.
That is possible by not marking the failed region clean
until the failed device is removed.
A possible solution is to mark the failed region (which
should be RH_DIRTY at that point) as RH_NOSYNC.
With the fix proposed in BZ#177067 Comment#10,
the RH_NOSYNC region will stay in the region hash and be kept unclean
until a suspend happens.
Upon suspend, the dirty state of the region is flushed to the log.
After resume, the RH_NOSYNC region will eventually be recovered.
If the recovery process correctly handles write errors (BZ#185785),
we can ensure that the region state does not become clean until
the failed devices are removed.
Committed in stream U4 build 34.26. A test kernel with this patch is available.
Needs a supplemental patch to properly take cluster mirroring into account.
Sent to the kernel list on 05/15/2006.
Created attachment 129057 [details]
This patch applies on top of what is already there.
I've created a new bugzilla to track the supplemental patch.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.