Bug 291521 - Cluster mirror can become out-of-sync if nominal I/O overlaps recovery I/O
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: cmirror-kernel
Hardware/OS: All Linux
Priority: high
Severity: urgent
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Reported: 2007-09-14 15:31 EDT by Jonathan Earl Brassow
Modified: 2010-01-11 21:11 EST (History)

Fixed In Version: RHBA-2007-0991
Doc Type: Bug Fix
Doc Text:
Last Closed: 2007-11-21 16:15:25 EST

Description Jonathan Earl Brassow 2007-09-14 15:31:04 EDT
If a machine is recovering a region of a mirror, all other machines that are
trying to write to that region will be delayed until it finishes.  This is good.
 However, the state of the region will change from not-in-sync to in-sync after
the recovery completes.  This affects how mirror writes should be carried out.

The problem is that the delayed machines sampled the region's state while it
was not-in-sync, but are allowed to write again once the region is in-sync.
They still think they only need to write to the primary device - thus the
mirror becomes out-of-sync.
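The race can be sketched in a few lines of Python.  This is a toy model, not
the kernel code: the class, field names, and "block-A" are invented, but the
write-path rule (not-in-sync regions get primary-only writes, in-sync regions
get writes to every leg) matches the behavior described above.

```python
NOT_IN_SYNC, IN_SYNC = "not-in-sync", "in-sync"

class MirrorRegion:
    def __init__(self):
        self.state = NOT_IN_SYNC
        self.primary = []     # blocks on the primary leg
        self.secondary = []   # blocks on the secondary leg

    def write(self, block, state_seen_by_writer):
        # A not-in-sync region is written to the primary only
        # (recovery is expected to copy it to the other legs later);
        # an in-sync region must be written to every leg.
        self.primary.append(block)
        if state_seen_by_writer == IN_SYNC:
            self.secondary.append(block)

region = MirrorRegion()
cached = region.state                    # writer samples: not-in-sync
                                         # ...write delayed while another
                                         # node recovers the region...
region.state = IN_SYNC                   # recovery completes
region.secondary = list(region.primary)  # legs now identical
region.write("block-A", cached)          # delayed write uses stale state

print(region.primary, region.secondary)  # legs diverge: out-of-sync
```

The delayed write lands only on the primary even though the region is now
in-sync, so nothing will ever copy "block-A" to the secondary leg.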

There are two ways to fix this.
1) Make the mirror write to all mirror disks regardless of sync state
2) Re-try recovery if a collision occurs

#1 is the preferred method, but #2 is less invasive...

So, I'm going to do #2 for 4.6 and #1 for 4.7/5.2
Comment 1 Jonathan Earl Brassow 2007-09-21 16:11:11 EDT
#2 is insufficient due to the way the region handling code caches region
state.  We must prevent nodes from acting on erroneous/stale region state by
checking with 'is_remote_recovering' first... a function that had been pulled
out because it was thought to be no longer needed.
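A sketch of that guard, in Python for illustration only: before trusting
cached region state, ask whether a remote node is recovering the region.
`is_remote_recovering` here merely stands in for the dm-log hook of the same
name; the signature, the state cache, and the return strings are all invented.

```python
def route_write(region, state_cache, is_remote_recovering, delayed):
    # If another node is recovering this region, our cached state
    # may be stale - delay the write instead of trusting the cache.
    if is_remote_recovering(region):
        delayed.append(region)
        return "delayed"
    if state_cache.get(region) == "in-sync":
        return "write-all-legs"
    return "write-primary-only"

delayed = []
# Remote recovery in progress: the write must wait.
print(route_write(7, {7: "in-sync"}, lambda r: True, delayed))
# No remote recovery: the cached in-sync state can be used.
print(route_write(7, {7: "in-sync"}, lambda r: False, delayed))
```

The point is the ordering: the remote-recovery check happens before the
cached state is consulted, so a stale "not-in-sync" entry can never be used
to justify a primary-only write.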

assigned -> post
Comment 2 Jonathan Earl Brassow 2007-09-25 23:11:59 EDT
Bad news:
Because a node can cache the state of a region indefinitely (especially for
blocks that are used a lot - e.g. the journaling area of a file system), we must
deny writes to any region of the mirror that is not yet recovered.  This is only
the case with cluster mirroring.  This means poor performance of nominal I/O
during recovery - probably really bad performance.  However, this is absolutely
necessary for mirror reliability.

Good news:
The time I spent coding different fixes for this bug wasn't a complete waste. 
I've been able to reuse some of that code to optimize the recovery process. 
Now, rather than going through the mirror from front to back, it skips ahead to
recover regions that have pending writes.  Bottom line: performance will be bad
during recovery, but it will be better than RHEL4.5.
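A toy version of that ordering, assuming nothing about the real
implementation beyond what the comment says: regions with pending writes are
recovered first, then the rest are swept front to back.  Region numbering
and the pending-write set below are made up.

```python
def recovery_order(num_regions, regions_with_pending_writes):
    # Regions with writers waiting on them jump the queue...
    pending = sorted(r for r in range(num_regions)
                     if r in regions_with_pending_writes)
    # ...then the remaining regions are recovered front to back.
    rest = [r for r in range(num_regions)
            if r not in regions_with_pending_writes]
    return pending + rest

print(recovery_order(6, {4, 2}))  # regions 2 and 4 are recovered first
```

Since writes to unrecovered regions are denied, pulling the contended
regions to the front of the recovery pass shortens how long those writers
are blocked.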

Need for testing:
I've tested mirror consistency during recovery fairly heavily.  However, I
haven't tested this after machine/disk failures.  One particular point of
concern I have is:
- I/O + recovery (or machine failure)  followed by
- non-primary disk failure
This is a concern because the mirror is unable to bring itself in-sync at this
point and may try to block I/O to non-synced regions.  If the mirror can't
complete I/O, then it can't suspend and reconfigure - meaning, it hangs.  I
should have this case covered, but it will be important to test...  This should
be a standard QA thing, as I often see their tests doing failure of secondary
devices while doing I/O during recovery.

Need another respin of package.
Comment 4 errata-xmlrpc 2007-11-21 16:15:25 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.