Bug 143555
| Summary: | Want redundant quorum partitions | | |
|---|---|---|---|
| Product: | [Retired] Red Hat Cluster Suite | Reporter: | Pietro Dania <p.dania> |
| Component: | clumanager | Assignee: | Rob Kenna <rkenna> |
| Status: | CLOSED WONTFIX | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | CC: | cluster-maint |
| Version: | 3 | Keywords: | FutureFeature |
| Target Milestone: | --- | Target Release: | --- |
| Hardware: | All | OS: | Linux |
| Doc Type: | Enhancement | Last Closed: | 2005-09-29 18:19:23 UTC |
Description
Pietro Dania
2004-12-22 08:43:22 UTC
Are you intending to use back-end (e.g. storage-handled) data replication? If so, simply mirror both quorum partitions as well. Note that disaster tolerance (e.g. surviving a site disaster) doesn't currently work without manual intervention. That is, you can mirror all of the data (as long as the cluster members' hostnames and service IP addresses can be the same in both locations) and run the cluster in either place. When site A fails, an administrator must (a) prevent site A from restarting the cluster services and (b) start the cluster services at site B. In future releases of Cluster Suite, quorum partitions will not be required. Evaluating for future enhancement.

No back-end data replication is available. The data itself doesn't matter to me, as it's managed by Oracle 10g Automatic Storage Management (ASM). I have two nodes and two arrays: node1 and array1 in site A, node2 and array2 in site B:

    sda -> quorum on array1 (raw1)
    sdb -> ASM data on array1
    sdc -> quorum on array2 (raw2)
    sdd -> ASM data on array2

If either site fails (fire, blackout, meteor), the node in the other site also fails, since it can't reach one of the quorum partitions. I've resolved it this way:

    sda -> Linux raid autodetect on array1
    sdb -> Linux raid autodetect on array1
    sdc -> ASM data on array1
    sdd -> Linux raid autodetect on array2
    sde -> Linux raid autodetect on array2
    sdf -> ASM data on array2
    md0 -> RAID1 (sda sdd) -> quorum partition (raw1)
    md1 -> RAID1 (sdb sde) -> quorum partition (raw2)

This seems to work, but md is not cluster-aware, so I am prone to inconsistency between the nodes :-( Maybe using an md device as the raw device protects me from inconsistency? Thanks.

I don't think MD as raw will entirely fix this situation. However, if you're not actually sharing data (and the only shared data is the shared clumanager partitions themselves), the chances of an inconsistency should be very low (or nonexistent). Clumanager only shares small pieces of data, and those pieces are typically protected by a lock.

(That said, running clumanager atop MD or software LVM partitions is not supported. What is needed for this to work properly is a clustered LVM.) Clustered LVM is available for RHEL 4, and RHEL 4 doesn't use shared partitions for state information. Since they're not required any longer, I'm closing this against RHCS 3.
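For reference, the cross-site mirror described in the report could be assembled roughly as follows. This is a sketch only: device and partition names are assumptions taken from the layout above, the commands require root on real hardware, and (per the closing comment) clumanager on top of MD was never a supported configuration.

```shell
# Sketch of the reporter's workaround: mirror each quorum partition
# across the two arrays so losing one site leaves a working copy.
# Assumes partitions sda1/sdd1 and sdb1/sde1 of type "fd"
# (Linux raid autodetect), matching the layout in the report.

# md0: first quorum partition, mirrored between array1 and array2
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdd1

# md1: second quorum partition, mirrored the same way
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

# Bind the MD devices to the raw devices clumanager reads/writes
raw /dev/raw/raw1 /dev/md0
raw /dev/raw/raw2 /dev/md1
```

Because MD has no cluster-wide locking, both nodes assembling and writing these mirrors independently is exactly the inconsistency risk the reporter worried about; the comment above argues the risk is small only because clumanager's shared writes are tiny and lock-protected.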